
20100530 A Home-Cooked Guide to the CPU (关于 CPU 的私房菜)

Published in: Technology, Education


  1. A Home-Cooked Guide to the CPU (关于 CPU 的私房菜), by Peng Wu
  2. Contents • The von Neumann model • Harvard architecture • Pentium Superscalar • MIPS pipeline • Hyper-threading • A minimal CPU implementation
  3. 1. The von Neumann model
  4. The von Neumann model
  5. 2. Harvard architecture
  6. Harvard architecture • The Harvard architecture is a computer architecture with physically separate storage and signal pathways for instructions and data. • Early Harvard machines had limited data storage, entirely contained within the central processing unit, and provided no access to the instruction storage as data, making loading and modifying programs an entirely offline process.
  7. Harvard architecture • In a computer with the contrasting von Neumann architecture (and no cache), the CPU can be either reading an instruction or reading/writing data from/to memory. The two cannot occur at the same time, since instructions and data use the same bus system. In a computer using the Harvard architecture, the CPU can read an instruction and perform a data memory access at the same time, even without a cache. A Harvard architecture computer can thus be faster for a given circuit complexity, because instruction fetches and data accesses do not contend for a single memory pathway.
  8. Modified Harvard architecture • The Modified Harvard architecture is very much like the Harvard architecture but provides a pathway between the instruction memory and the CPU that allows words from the instruction memory to be treated as read-only data. This allows constant data, particularly text strings, to be accessed without first being copied into data memory, preserving more data memory for read/write variables. Special machine-language instructions are provided to read data from the instruction memory.
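The fetch/data contention argument above can be sketched as a toy cycle count (a simplified model assumed here: one bus access per cycle, no cache):

```python
def cycles_von_neumann(program):
    # One shared bus: an instruction fetch and a data access must
    # serialize, so loads and stores cost an extra bus cycle.
    return sum(2 if op in ("load", "store") else 1 for op in program)

def cycles_harvard(program):
    # Separate instruction and data pathways: the fetch of the next
    # instruction overlaps the data access of the current one.
    return len(program)

program = ["load", "add", "store", "add", "load"]
print(cycles_von_neumann(program))  # 8 cycles on the shared bus
print(cycles_harvard(program))      # 5 cycles with separate paths
```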
  9. 3. Pentium Superscalar
  10. Pentium Superscalar • Superscalar architecture: the Pentium has two datapaths (pipelines) that allow it to complete more than one instruction per clock cycle. One pipe (called U) can handle any instruction, while the other (called V) can handle only the simplest, most common instructions.
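A rough sketch of the U/V pairing described above (the real Pentium pairing rules are more involved; the "simple" instruction set below is an illustrative assumption):

```python
SIMPLE = {"mov", "add", "sub", "inc", "dec", "cmp"}  # illustrative subset

def dual_issue_cycles(instrs):
    """Count issue cycles: each cycle sends one instruction down the
    U pipe, and pairs the next one into V only if it is simple."""
    cycles = 0
    i = 0
    while i < len(instrs):
        cycles += 1
        if i + 1 < len(instrs) and instrs[i + 1] in SIMPLE:
            i += 2  # U and V issue together this cycle
        else:
            i += 1  # only U issues this cycle
    return cycles

print(dual_issue_cycles(["add", "mov", "div", "add", "cmp", "inc"]))  # 3
```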
  11. EGCS fork of GCC • In 1997, a group of developers formed EGCS (Experimental/Enhanced GNU Compiler System)[11] to merge several experimental forks into a single project. The basis of the merger was a GCC development snapshot taken between the 2.7 and 2.81 releases. Projects merged included g77 (Fortran), PGCC (Pentium-optimized GCC), many C++ improvements, and many new architecture and operating-system variants.[12][13] • EGCS development proved considerably more vigorous than GCC development, so much so that the FSF officially halted development of its GCC 2.x compiler, "blessed" EGCS as the official version of GCC, and appointed the EGCS project as the GCC maintainers in April 1999. Furthermore, the project explicitly adopted the "bazaar" model over the "cathedral" model. With the release of GCC 2.95 in July 1999, the two projects were once again united.
  12. 4. MIPS pipeline
  13. Instruction pipeline • An instruction pipeline is a technique used in the design of computers and other digital electronic devices to increase their instruction throughput (the number of instructions that can be executed in a unit of time).
  14. MIPS Architecture
  15. MIPS Five-Stage Pipeline
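The throughput claim above can be made concrete with the standard fill/drain cycle count for the classic five-stage MIPS pipeline (assuming no hazards, one stage per cycle):

```python
def unpipelined_cycles(n_instructions, n_stages=5):
    # Without pipelining, each instruction occupies the whole
    # datapath for all five stages (IF, ID, EX, MEM, WB).
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages=5):
    # With pipelining, the first instruction fills the pipe in
    # n_stages cycles; each later one completes one cycle after it.
    return n_stages + (n_instructions - 1)

n = 100
print(unpipelined_cycles(n))  # 500
print(pipelined_cycles(n))    # 104
print(unpipelined_cycles(n) / pipelined_cycles(n))  # ~4.8x speedup
```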
  16. 5. Hyper-threading
  17. Hyper-threading • Hyper-threading is an Intel-proprietary technology used to improve parallelization of computations (doing multiple tasks at once) performed on PC microprocessors. For each processor core that is physically present, the operating system addresses two virtual processors and shares the workload between them when possible.
  18. Hyper-threading • Hyper-threading works by duplicating certain sections of the processor (those that store the architectural state) but not the main execution resources. This allows a hyper-threading processor to appear as two "logical" processors to the host operating system, allowing the operating system to schedule two threads or processes simultaneously. When execution resources would not be used by the current task in a processor without hyper-threading, and especially when the processor is stalled, a hyper-threading-equipped processor can use those execution resources to execute another scheduled task.
  19. Hyper-threading • The processor may stall and switch threads due to: • a cache miss • a branch misprediction • a data dependency
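The stall-hiding behavior in the last three slides can be sketched as a toy two-thread scheduler (my own simplified model: 'S' marks an op that stalls the thread for an assumed three-cycle penalty):

```python
PENALTY = 3  # assumed cycles lost to a cache miss, branch
             # mispredict, or data dependency (illustrative number)

def run_sequential(a, b):
    # No hyper-threading: run each stream to completion; every
    # stalling op ('S') wastes PENALTY cycles with nothing to issue.
    cost = lambda s: sum(1 + PENALTY * (op == "S") for op in s)
    return cost(a) + cost(b)

def run_smt(a, b):
    # Hyper-threading toy model: one core, two logical threads.
    # When the active thread stalls, the core issues from the other
    # thread, hiding the stall whenever that thread has work ready.
    streams = [list(a), list(b)]
    ready_at = [0, 0]  # cycle at which each thread may issue again
    cycle = 0
    while streams[0] or streams[1]:
        for t in (0, 1):
            if streams[t] and ready_at[t] <= cycle:
                if streams[t].pop(0) == "S":
                    ready_at[t] = cycle + 1 + PENALTY
                break
        cycle += 1  # one issue slot per cycle (or an idle cycle)
    return cycle

a = ["i", "S", "i", "i", "S", "i"]  # 'i' = normal op, 'S' = stall
b = ["i"] * 6
print(run_sequential(a, b))  # 18 cycles: stalls are dead time
print(run_smt(a, b))         # 12 cycles: stalls overlap thread b
```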
  20. 6. A minimal CPU implementation
  21. A minimal CPU implementation • Original paper: "A Tiny Computer" by Chuck Thacker, MSR
  22. Q&A
  23. Thanks for coming!