Computer Organization and Design: The Hardware/Software Interface
ISBN: 9780124077263
| Home library | Call number | Status | Barcode |
|---|---|---|---|
| Biblioteca de la Facultad de Informática | C.0 PAT | Available | DIF-04435 |
Includes index.
Contents:

1 Computer Abstractions and Technology
1.1 Introduction -- 1.2 Eight Great Ideas in Computer Architecture -- 1.3 Below Your Program -- 1.4 Under the Covers -- 1.5 Technologies for Building Processors and Memory -- 1.6 Performance -- 1.7 The Power Wall -- 1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors -- 1.9 Real Stuff: Benchmarking the Intel Core i7 -- 1.10 Fallacies and Pitfalls -- 1.11 Concluding Remarks -- 1.12 Historical Perspective and Further Reading -- 1.13 Exercises

2 Instructions: Language of the Computer
2.1 Introduction -- 2.2 Operations of the Computer Hardware -- 2.3 Operands of the Computer Hardware -- 2.4 Signed and Unsigned Numbers -- 2.5 Representing Instructions in the Computer -- 2.6 Logical Operations -- 2.7 Instructions for Making Decisions -- 2.8 Supporting Procedures in Computer Hardware -- 2.9 Communicating with People -- 2.10 MIPS Addressing for 32-Bit Immediates and Addresses -- 2.11 Parallelism and Instructions: Synchronization -- 2.12 Translating and Starting a Program -- 2.13 A C Sort Example to Put It All Together -- 2.14 Arrays versus Pointers -- 2.15 Advanced Material: Compiling C and Interpreting Java -- 2.16 Real Stuff: ARM v7 (32-bit) Instructions -- 2.17 Real Stuff: x86 Instructions -- 2.18 Real Stuff: ARM v8 (64-bit) Instructions -- 2.19 Fallacies and Pitfalls -- 2.20 Concluding Remarks -- 2.21 Historical Perspective and Further Reading -- 2.22 Exercises

3 Arithmetic for Computers
3.1 Introduction -- 3.2 Addition and Subtraction -- 3.3 Multiplication -- 3.4 Division -- 3.5 Floating Point -- 3.6 Parallelism and Computer Arithmetic: Subword Parallelism -- 3.7 Real Stuff: x86 Streaming SIMD Extensions and Advanced Vector Extensions -- 3.8 Going Faster: Subword Parallelism and Matrix Multiply -- 3.9 Fallacies and Pitfalls -- 3.10 Concluding Remarks -- 3.11 Historical Perspective and Further Reading -- 3.12 Exercises

4 The Processor
4.1 Introduction -- 4.2 Logic Design Conventions -- 4.3 Building a Datapath -- 4.4 A Simple Implementation Scheme -- 4.5 An Overview of Pipelining -- 4.6 Pipelined Datapath and Control -- 4.7 Data Hazards: Forwarding versus Stalling -- 4.8 Control Hazards -- 4.9 Exceptions -- 4.10 Parallelism via Instructions -- 4.11 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Pipelines -- 4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply -- 4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations -- 4.14 Fallacies and Pitfalls -- 4.15 Concluding Remarks -- 4.16 Historical Perspective and Further Reading -- 4.17 Exercises

5 Large and Fast: Exploiting Memory Hierarchy
5.1 Introduction -- 5.2 Memory Technologies -- 5.3 The Basics of Caches -- 5.4 Measuring and Improving Cache Performance -- 5.5 Dependable Memory -- 5.6 Virtual Machines -- 5.7 Virtual Memory -- 5.8 A Common Framework for Memory Hierarchy -- 5.9 Using a Finite-State Machine to Control a Simple Cache -- 5.10 Parallelism and Memory Hierarchies: Cache Coherence -- 5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks -- 5.12 Advanced Material: Implementing Cache Controllers -- 5.13 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Memory Hierarchies -- 5.14 Going Faster: Cache Blocking and Matrix Multiply -- 5.15 Fallacies and Pitfalls -- 5.16 Concluding Remarks -- 5.17 Historical Perspective and Further Reading -- 5.18 Exercises

6 Parallel Processors from Client to Cloud
6.1 Introduction -- 6.2 The Difficulty of Creating Parallel Processing Programs -- 6.3 SISD, MIMD, SIMD, SPMD, and Vector -- 6.4 Hardware Multithreading -- 6.5 Multicore and Other Shared Memory Multiprocessors -- 6.6 Introduction to Graphics Processing Units -- 6.7 Clusters and Other Message-Passing Multiprocessors -- 6.8 Introduction to Multiprocessor Network Topologies -- 6.9 Communicating to the Outside World: Cluster Networking -- 6.10 Multiprocessor Benchmarks and Performance Models -- 6.11 Real Stuff: Benchmarking Intel Core i7 versus NVIDIA Fermi GPU -- 6.12 Going Faster: Multiple Processors and Matrix Multiply -- 6.13 Fallacies and Pitfalls -- 6.14 Concluding Remarks -- 6.15 Historical Perspective and Further Reading -- 6.16 Exercises

APPENDICES

A Assemblers, Linkers, and the SPIM Simulator
A.1 Introduction -- A.2 Assemblers -- A.3 Linkers -- A.4 Loading -- A.5 Memory Usage -- A.6 Procedure Call Convention -- A.7 Exceptions and Interrupts -- A.8 Input and Output -- A.9 SPIM -- A.10 MIPS R2000 Assembly Language -- A.11 Concluding Remarks -- A.12 Exercises

B The Basics of Logic Design
B.1 Introduction -- B.2 Gates, Truth Tables, and Logic Equations -- B.3 Combinational Logic -- B.4 Using a Hardware Description Language -- B.5 Constructing a Basic Arithmetic Logic Unit -- B.6 Faster Addition: Carry Lookahead -- B.7 Clocks -- B.8 Memory Elements: Flip-Flops, Latches, and Registers -- B.9 Memory Elements: SRAMs and DRAMs -- B.10 Finite-State Machines -- B.11 Timing Methodologies -- B.12 Field Programmable Devices -- B.13 Concluding Remarks -- B.14 Exercises