Exam Syllabus

Related Courses

Students are advised to refer to syllabi of the following courses:

  • CS 370 Computer Architecture
  • CS 570 Operating Systems
  • CS 572 Microprocessor Architecture

When studying for the exams, students should focus on understanding concepts and being able to both explain and apply them.

References

CS 370 – Computer architecture

  1. M. Morris Mano and Charles R. Kime. Logic and Computer Design Fundamentals, 4th edition. Prentice Hall, Upper Saddle River, NJ, 2007.
  2. M. Morris Mano. Digital Design, 3rd edition. Prentice Hall, Upper Saddle River, NJ, 2002.

CS 570 – Operating systems

  1. [VUS] Marko Vuskovic. Operating System Lecture Notes. http://medusa.sdsu.edu/cs570/Lectures/Table_of_Contents.htm
  2. Andrew S. Tanenbaum. Modern Operating Systems, 2nd ed. Prentice Hall, Upper Saddle River, NJ, 2001.
  3. Abraham Silberschatz, Peter Galvin, Greg Gagne. Operating System Concepts, 7th edition. John Wiley & Sons, Inc., Hoboken, NJ, 2004.
  4. Kay A. Robbins and Steven Robbins. UNIX Systems Programming. Prentice Hall, Upper Saddle River, NJ, 2003. (This text offers specific details on signals, semaphores, pipes, etc. for POSIX, and is a very different type of text from the first three entries. It can be used to supplement, but not replace, the texts above.)

CS 572 – Microprocessor architecture

  1. John Hennessy and David Patterson, Computer Architecture: A Quantitative Approach. 4th edition. Morgan Kaufmann, Amsterdam, Boston, 2007.

Topics

For each course, students should be capable of demonstrating mastery of the listed concepts.

CS 370 – Computer architecture

  1. Overview – Levels of abstraction of digital systems (transistor, gate, register, and processor), design and analysis of digital systems.
  2. Data types and representation – Binary coding and conversions between different number systems. One’s complement, two’s complement, and sign-magnitude representation. Addition, subtraction, and multiplication of binary numbers. Fixed- and floating-point representation. Error detection and correction (cube representation of binary strings, Hamming code).
  3. Boolean algebra and logic design – Axiomatic definition of Boolean algebra. Boolean operators and duality principle. Theorems of Boolean algebra. Boolean functions and expression equivalence. Minterms and maxterms (standard forms of Boolean functions). Sixteen binary logic operations. Digital logic gates and the example of a full adder. Design with NAND/NOR gates and complex gates. Custom design (gate arrays, FPGA).
  4. Simplification of Boolean functions – Karnaugh maps. Selection of prime implicants. Don’t care conditions. Technology mapping for gate arrays (standard to NAND/NOR schemes, standard to gate arrays). Timing issues. Static and dynamic hazards.
  5. Combinational circuits – Ripple-carry serial adder. Carry-look-ahead generator. CLA adder. Two’s complement adder-subtractor. Arithmetic-logic unit. Decoders. Multiplexers. Encoders. Magnitude comparators. Shifters (including barrel shifters). Full adder with ROM, PLA. (A ripple-carry adder sketch in C appears after this topic list.)
  6. Sequential circuits – SR-latch (NOR, NAND implementation). Gated SR-latch, Gated D-latch. Flip-flops. Master-slave FF, Edge triggered FF. FF types (SR, JK, D, T). Design of sequential circuits (analysis and synthesis with state tables and diagrams). Shift registers. Serial adders. Counters (ripple, synchronous).
  7. Storage components – Simple RAM (coincident decoding). Static versus dynamic RAM. Array of RAM chips (extending the address space and the word size). Push-down stack. FIFO queue. Memory timing. Implementation of error detection and correction. ROM.
  8. Processor – Register transfer microoperations. Register transfer language. Arithmetic microoperations. Logic microoperations. ALU and shifter unit. Datapaths.
  9. System bus – Synchronous bus control. Asynchronous bus control. Connecting memory to CPU (address decoders). Implementation of interrupts. Vectored interrupts and autovectored interrupts.
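
The adder material in topics 3 and 5 lends itself to a small simulation (referenced in topic 5 above). The C sketch below is an illustrative study aid only, not taken from the listed texts: it builds a 1-bit full adder from Boolean operations, chains eight of them into a ripple-carry adder, and performs two’s-complement subtraction by complementing the second operand and setting the carry-in to 1. The 8-bit width and the sample operands are arbitrary choices for the demo.

  #include <stdio.h>
  #include <stdint.h>

  #define WIDTH 8  /* assumed word size for the demo */

  /* 1-bit full adder: sum = a XOR b XOR cin, cout = majority(a, b, cin) */
  static void full_adder(int a, int b, int cin, int *sum, int *cout)
  {
      *sum  = a ^ b ^ cin;
      *cout = (a & b) | (a & cin) | (b & cin);
  }

  /* Ripple-carry addition: the carry out of bit i feeds bit i + 1. */
  static uint8_t ripple_add(uint8_t a, uint8_t b, int cin, int *carry_out)
  {
      uint8_t result = 0;
      for (int i = 0; i < WIDTH; i++) {
          int s, cout;
          full_adder((a >> i) & 1, (b >> i) & 1, cin, &s, &cout);
          result |= (uint8_t)(s << i);
          cin = cout;  /* carry ripples into the next stage */
      }
      *carry_out = cin;
      return result;
  }

  int main(void)
  {
      int cout;
      uint8_t sum  = ripple_add(90, 23, 0, &cout);            /* 90 + 23 = 113 */
      uint8_t diff = ripple_add(90, (uint8_t)~23, 1, &cout);  /* 90 - 23 = 67 via two's complement */
      printf("sum = %u, diff = %u\n", sum, diff);
      return 0;
  }
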

CS 570 – Operating systems

  1. Overview – Operating system functionality and architecture.
  2. Programs – Basic structure of a program image and its execution environment. Generation of a simple program image (sources, objects, static libraries, load modules, translators, linker, loader). Static linking and loading (absolute and relocatable load modules). Dynamic linking (load-time, run-time linking). Shared libraries. Division of address space with traps (user space and system space). Execution environment of programs with divided images. Layered design of complex programs.
  3. Processes and Threads – Evolution of operating systems (serial processing, simple and multiprogrammed batch systems). Time sharing systems. Process states (three or five state model, refinement in UNIX and Windows). Structure of a process (process control block). Process queues (simple, prioritized). Process switching and preemption. Dispatcher primitives. Time slicing (implementation of timer interrupt handler). Implementation of blocking events. Implementation of I/O interrupt handlers. Threads. Thread context switching and thread control block. Thread scheduling and priority inversion. Multiprocessor scheduling (asymmetric, symmetric).
  4. Memory Management – Address translation. Fixed-size partitioning. Dynamic partitioning. Paging (address translation scheme, page cache: translation look-aside buffer/address translation cache, protection). Multi-level paging. Inverted page tables. Segmentation. Segmentation with paging.
  5. Virtual Memory – Demand paging. Implementation of the page fault handler. Page replacement strategies (FIFO, LRU, Clock algorithm, the nth-chance algorithm). Belady’s anomaly. Local vs. global replacement strategies. Principle of locality. Thrashing and page-fault rate. The working set and working-set trimming in Windows. Prepaging. (A short Clock-algorithm sketch in C appears after this topic list.)
  6. Thread and Process Synchronization – Critical section and mutual exclusion. Enter/leave primitives (semantics, implementation and usage). Hardware supported busy waiting (test-and-set instruction with memory lock). Thread blocking in uniprocessor and multiprocessor systems. Deadlock. Semaphores (semantics, implementation and usage). Bounded buffer. Events (semantics, implementation and usage). Conjunction and disjunction of events. Condition objects (semantics, implementation and usage).
  7. Interprocess Communication – Shared memory (UNIX and Windows style). Messages. Pipes (anonymous and named). Berkeley sockets. Client-server model. (A minimal anonymous-pipe example in C appears after this topic list.)
  8. File System – File allocation (contiguous, linked, indexed). Combined multilevel indexed scheme. Combined indexed and contiguous allocation. Free space management. Naming and name space. Name linking. Directories (file attributes and file header). Directory implementation. Implementation of name linking.
  9. I/O Management – Device controllers and device drivers. Basic approaches in I/O (polled, overlapped – e.g. interrupt driven, DMA). I/O buffering. Synchronous vs. asynchronous I/O. Disk cache. I/O architecture (examples in UNIX and/or Windows). Disk drives and controllers. Structure of disk sectors. Physical and logical sector address. Blocking. Disk scheduling (FCFS, Shortest Seek Time First, Elevator algorithm, Cyclic scan).
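
For the page replacement strategies in topic 5, the Clock (second-chance) algorithm referenced above can be summarized in a few lines of C. This is a minimal sketch for study purposes; the three-frame memory and the reference string are made up for the example.

  #include <stdio.h>

  #define FRAMES 3  /* assumed number of physical frames for the demo */

  int main(void)
  {
      int refs[] = {1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4};  /* example reference string */
      int n = sizeof refs / sizeof refs[0];

      int page[FRAMES], use_bit[FRAMES];
      int hand = 0, faults = 0;
      for (int i = 0; i < FRAMES; i++) { page[i] = -1; use_bit[i] = 0; }

      for (int r = 0; r < n; r++) {
          int hit = 0;
          for (int i = 0; i < FRAMES; i++)
              if (page[i] == refs[r]) { use_bit[i] = 1; hit = 1; break; }
          if (hit) continue;

          faults++;
          /* Sweep the clock hand: a frame with use bit 1 gets a second
             chance (bit cleared); the first frame with use bit 0 is replaced. */
          while (use_bit[hand]) {
              use_bit[hand] = 0;
              hand = (hand + 1) % FRAMES;
          }
          page[hand] = refs[r];
          use_bit[hand] = 1;
          hand = (hand + 1) % FRAMES;
      }
      printf("page faults: %d out of %d references\n", faults, n);
      return 0;
  }
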
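The anonymous pipes in topic 7 (referenced above) can be exercised with the POSIX pipe/fork/read/write calls covered in the Robbins text. The sketch below is a minimal illustration with most error handling omitted; the message text is arbitrary.

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/wait.h>

  int main(void)
  {
      int fd[2];
      if (pipe(fd) == -1) { perror("pipe"); return 1; }

      pid_t pid = fork();
      if (pid == 0) {                      /* child: reads from the pipe */
          char buf[64];
          close(fd[1]);                    /* close the unused write end */
          ssize_t n = read(fd[0], buf, sizeof buf - 1);
          if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
          close(fd[0]);
          return 0;
      }
      /* parent: writes into the pipe */
      close(fd[0]);                        /* close the unused read end */
      const char *msg = "hello through the pipe";
      write(fd[1], msg, strlen(msg));
      close(fd[1]);
      wait(NULL);                          /* reap the child */
      return 0;
  }
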

CS 572 – Microprocessor architecture

  1. Fundamentals of Computer Design – Introduction. Classes of computers. Defining computer architecture. Trends in technology. Trends in cost. Performance measurement. Quantitative principles of computer design.
  2. Basic types of computers – Stack-based machine. Accumulator-based machine. General register based machine. Single-cycle datapath and processor implementation. Multi-cycle datapath and processor implementation.
  3. Principles of instruction set design – Introduction. Classifying instruction set architectures. Memory addressing. Type and size of operands. Operations in the instruction set. Instructions for control flow. Encoding an instruction set. The MIPS architecture.
  4. Pipelining – Introduction. Pipeline hazards including structural hazards, data hazards, and control hazards. Pipelining implementation including interlock and forwarding. Branch prediction. Exception and interrupt handling.
  5. Instruction-level parallelism – Concepts and challenges. Reducing branch costs with prediction. Overcoming data hazards with dynamic scheduling. Dynamic scheduling and speculation including scoreboard and Tomasulo algorithms.
  6. Memory hierarchy – Introduction. Cache design including block placement (direct mapped, set associative, fully associative cache), block identification, block replacement, write strategy, and multi-level caching. Cache performance measurement including AMAT (average memory-access time) and CPU time calculation. (A worked AMAT example in C appears below.)
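
A worked AMAT example for topic 6 (referenced above): for a two-level cache, AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 local miss rate * memory penalty). The cycle counts and miss rates below are invented for illustration, not taken from Hennessy and Patterson.

  #include <stdio.h>

  int main(void)
  {
      /* All parameter values are assumed for the example. */
      double l1_hit      = 1.0;    /* L1 hit time, cycles */
      double l1_miss     = 0.05;   /* L1 miss rate */
      double l2_hit      = 10.0;   /* L2 hit time, cycles */
      double l2_miss     = 0.20;   /* L2 local miss rate */
      double mem_penalty = 100.0;  /* main memory access penalty, cycles */

      /* AMAT = L1 hit + L1 miss rate * (L2 hit + L2 miss rate * memory penalty) */
      double amat = l1_hit + l1_miss * (l2_hit + l2_miss * mem_penalty);
      printf("AMAT = %.2f cycles\n", amat);  /* 1 + 0.05 * (10 + 0.20 * 100) = 2.50 */
      return 0;
  }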