This is a survey of computer system design. It links ideas from "low-level" to "high-level" to show that they are really the same. It develops a mental model for design.
Advice on developing projects:
Resist the urge to jump straight into implementation. Use the API first. Write out a few code snippets that use the API for common use cases. How does it feel? Can the user express their intent without friction and boilerplate?
Putting user needs first pays off.
- Lea Verou <@leaverou@front-end.social>
The primary benefit of TDD is not that it produces better implementations, but that it produces better APIs. It forces engineers to do just that, since they have to use the APIs they are designing to write tests.
Performance
To evaluate the efficiency of a system design, we must consider performance.
A computer's time performance is determined by three factors, which multiply together as shown below:
- Instruction count
- Clock cycle time
- Clock cycles per instruction
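These combine into the classic equation (cf. Patterson):

$\text{CPU time} = \text{instruction count} \times \text{CPI} \times \text{clock cycle time}$

For example, a program of $10^9$ instructions with an average CPI of 2 on a 1 GHz clock (1 ns cycle) takes $10^9 \times 2 \times 1\,\text{ns} = 2$ seconds.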
From low to high level
- Your compiler eventually turns your code into assembly, which is assembled into the machine code the processor executes directly.
- Each machine instruction executes in a certain number of clock cycles; averaged over a program, this is the clock cycles per instruction (CPI).
Instruction set architecture
The RISC philosophy is that instruction sets are really compiler targets. ISAs should provide a few simple primitives from which compilers can generate high-performance code. They should rip out complex hardware implementations to make what's left simple, uniform, and fast.
A processor's implementation determines the clock cycle time and the number of clock cycles per instruction (CPI) of all programs running on it.
When designing your system, the consideration is therefore:
- What level of compiler optimization should I use?
In Patterson's book, LEGv8 serves as the example ISA.
Every ISA instruction has common steps. The first two are identical:
- Send the program counter (PC) to memory containing the program code. Fetch the corresponding instruction from memory.
- Read one or two registers, selecting which ones via the register fields (Rn and Rd) of the instruction.
The simplicity and regularity of LEGv8 make the execution of many instruction classes similar.
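A hedged sketch of those two steps for an R-format instruction (field positions follow the LEGv8 R-format: Rn in bits 9-5, Rd in bits 4-0; the array names are mine, and <stdint.h> is assumed):

uint32_t instruction = instruction_memory[pc / 4];  // 1. fetch the instruction at the PC
uint32_t rn = (instruction >> 5) & 0x1F;            // 2. pull out the register fields...
uint32_t rd = instruction & 0x1F;
uint64_t operand_a = registers[rn];                 //    ...and read those registers
uint64_t operand_b = registers[rd];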
Pipelining hazards
Definition 1: Load hazard
A special case of data hazard: a dependent instruction following a ldr command must stall one cycle, even with data forwarding. (Loads can only forward from the WB stage.)
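A minimal sketch of how a pipeline's hazard-detection unit can spot this case (all names are illustrative, not from a real core): stall when the instruction in EX is a load whose destination register is a source register of the instruction in ID.

#include <stdbool.h>

// Stall one cycle if the instruction in EX is a ldr whose destination
// register matches a source register of the instruction in ID.
bool load_use_stall(bool ex_is_load, int ex_rd, int id_rn, int id_rm) {
    return ex_is_load && (ex_rd == id_rn || ex_rd == id_rm);
}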
Paging
Definition 2: Paging
Paging is a way of partitioning memory into fixed-size blocks so the OS can remember where it stored what.
- Pages, page frames: the fixed-size units that the virtual and physical address spaces, respectively, are split into.
- Free list: OS keeps track of which pages are free
- Page table: Per-process data structure that maps virtual pages to physical pages.
To translate virtual memory addresses to physical ones:
- Split the virtual address into virtual page number and offset within the page
- Index the page table to find which physical page the data resides in.
- Replace VPN with physical page number and maintain the offset.
Page tables are stored in memory. The simplest data structure is an array, the linear page table: the OS indexes it by VPN and looks up the page-table entry (PTE) at that index to find the desired physical page number (PPN), aka physical frame number (PFN).
Advantages of paging include: no external fragmentation and flexible use of the address space.
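A minimal sketch of steps 1-3 against a linear page table, assuming 32-bit virtual addresses and 4 KiB pages (so a 12-bit offset), and ignoring valid and protection bits for brevity:

#include <stdint.h>

#define PAGE_SHIFT  12                              // 4 KiB pages
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)        // low 12 bits

uint32_t translate(uint32_t vaddr, const uint32_t *page_table) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;          // 1. split off the VPN...
    uint32_t offset = vaddr & OFFSET_MASK;          //    ...and the page offset
    uint32_t pfn    = page_table[vpn];              // 2. index the linear page table
    return (pfn << PAGE_SHIFT) | offset;            // 3. swap VPN for PFN, keep offset
}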
To speed address translation for paging lookups, use a translation-lookaside buffer (TLB). A TLB is part of the on-chip memory-management unit (MMU) and is a hardware cache of popular virtual-to-physical translations.
Upon each virtual memory reference, the hardware first checks the TLB for the desired translation. If it is there, the hardware translates the address directly, without consulting the page table.
// Translate a virtual address, trying the TLB first.
vpn = (virtual_address & VPN_MASK) >> SHIFT;
success = tlb_lookup(vpn, &tlb_entry);
if (success) {
    // TLB hit: build the physical address straight from the cached entry.
    if (can_access(tlb_entry.protect_bits)) {
        offset = virtual_address & OFFSET_MASK;
        phys_addr = (tlb_entry.pfn << SHIFT) | offset;
        access_memory(phys_addr);
    } else {
        raise_exception(PROTECTION_FAULT);
    }
} else {
    // TLB miss: walk the page table in memory, fill the TLB, retry.
    pte_addr = ptbr + (vpn * sizeof(pte));
    pte = access_memory(pte_addr);
    if (!pte.valid) {
        raise_exception(SEGMENTATION_FAULT);
    } else if (!can_access(pte.protect_bits)) {
        raise_exception(PROTECTION_FAULT);
    } else {
        tlb_insert(vpn, pte.pfn, pte.protect_bits);
        retry_instruction();
    }
}
- Extract VPN from virtual address.
- Check if TLB holds translation for VPN
- If yes, TLB hit. Extract PFN from TLB entry and form desired physical address.
- If no, TLB miss. Access page table to find translation and update TLB. Then retry the instruction.
The TLB, like all caches, is built on the premise that in the common case, translations are found in the cache. When there is a miss, the high paging cost will lead to a noticeably slower program. We want to avoid TLB misses as much as possible.
Takeaways: since each page can hold a lot of memory, data structures whose accesses touch nearby memory locations benefit from spatial locality. Elements of an array are packed tightly into pages, so only the first access to each page leads to a TLB miss. Temporal locality, the re-referencing of memory items close together in time, also raises TLB hit rates.
Any cache seeks to take advantage of spatial and temporal locality for success. When programs are designed to exhibit locality, the TLB hit rate will be very high and translation cheap.
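For instance, C stores 2D arrays row-major, so iterating with the column index innermost walks memory sequentially and touches each 4 KiB page only once; swapping the loops strides N * sizeof(int) bytes per access and, for large N, can miss in the TLB on nearly every reference:

enum { N = 1024 };
int a[N][N];

long sum_row_major(void) {
    long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];   // consecutive addresses: spatial locality, few TLB misses
    return sum;
}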
On many modern architectures, a TLB miss is handled by software:
- Hardware raises exception for TLB miss
- Privilege level raised to kernel mode, jumps to trap handler
- Trap handler looks up page table translation and updates TLB
The return-from-trap instruction for a TLB miss must resume execution at the instruction that caused the trap, so the instruction runs again, this time hitting in the TLB.
The OS must be extra careful not to cause an infinite chain of TLB misses, e.g., by keeping the TLB miss handler in unmapped physical memory so the handler itself never misses.
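A hedged sketch of such a handler, reusing the identifiers from the pseudocode above and assuming a linear page table:

// Runs in kernel mode; its own code sits in unmapped physical memory
// so that it can never itself TLB-miss.
void tlb_miss_handler(uint64_t faulting_vaddr) {
    uint64_t vpn = (faulting_vaddr & VPN_MASK) >> SHIFT;
    pte_t pte = page_table[vpn];                    // look up the translation
    if (!pte.valid)
        raise_exception(SEGMENTATION_FAULT);
    else
        tlb_insert(vpn, pte.pfn, pte.protect_bits); // update the TLB
    // return-from-trap re-runs the faulting instruction, which now hits
}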
Abstract data types
When creating a model, choose the abstract data type that fits it:
Map
Directed acyclic graph
Priority queue
Deque
Table
Tables, such as those used in database programs or implemented yourself, are another data storage method.
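What all of these share is that an ADT is defined by its operations, not its representation. A hedged C sketch of a priority queue's interface (names are mine, not a real library's):

// pq.h -- callers see only operations; whether pq is a heap or a
// sorted list stays hidden in pq.c.
typedef struct pq pq;                 // opaque type

pq   *pq_create(void);
void  pq_push(pq *q, int priority, void *item);
void *pq_pop(pq *q);                  // remove and return the highest-priority item
void  pq_destroy(pq *q);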
Design takeaways
Make the common case fast
Caching
Caching is used again and again to make the common case fast.
Hardware caches take advantage of locality in instruction and data references. There are two types of locality:
- temporal locality (close in time): a recently accessed instruction is likely to be re-accessed soon.
- spatial locality (close in space): if a program accesses memory at address $x$, it will likely soon access addresses near $x$.
Caches take advantage of locality by keeping copies of recently used memory in small, fast on-chip memory. Fast memory can repeatedly serve local requests.
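The payoff is often summarized (cf. Patterson) as average memory access time:

$\text{AMAT} = \text{hit time} + \text{miss rate} \times \text{miss penalty}$

With a 1 ns hit time, a 5% miss rate, and a 100 ns miss penalty, $\text{AMAT} = 1 + 0.05 \times 100 = 6$ ns: the common (hit) case dominates.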
Abstract!
Definition 3: Clean architecture
What I will use to describe well-abstracted systems (cf. Robert Martin).
It is easier to manage a system by splitting it into separate components, each of which does only one task. Keep in mind the conceptual hierarchy of components and types, using a layered approach: every component belongs to a layer, and the interfaces between components and between layers are well-defined.
- Entity (element): individual objects making up your system model
- Use case: the business rules of the model; the processes that happen, using domain models to work on real data.
- Gateway: components defining interfaces for external systems. Common access models to services that do not implement business rules. Aka API, interface, connector
- External system: a component outside this one that consumes the exposed use cases through the gateways.
Golden rule: talk inwards with simple structures and outwards through well-defined interfaces.
Example in C: a struct represents an entity for a database. A set of use cases is provided by private (static) functions in the .c file.
The header file provides the main public interface (gateway) for using the use cases in a protected, well-defined manner. Behaviour can be changed by swapping the implementation behind the header's public interface, as sketched below.
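A minimal sketch of that layout (file, type, and function names are illustrative):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* user.h -- the gateway: the only way other components reach this one */
typedef struct user user;                      /* entity, opaque outside user.c */
user *user_create(const char *name);
bool  user_rename(user *u, const char *name);  /* exposed use case */

/* user.c -- entity definition and business rules stay hidden */
struct user { char name[64]; };

static bool name_is_valid(const char *n) {     /* private use-case rule */
    return n != NULL && n[0] != '\0';
}

bool user_rename(user *u, const char *name) {
    if (!name_is_valid(name)) return false;
    strncpy(u->name, name, sizeof u->name - 1);
    return true;
}

user *user_create(const char *name) {
    user *u = calloc(1, sizeof *u);
    if (u != NULL) user_rename(u, name);
    return u;
}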
A process should have clearly separated stages.
Transforming data only through common interfaces lets us achieve independence through abstraction.
This is the loose coupling of legend.
- Data moving from one stage to another, even when internal to the program, should pass only through well-defined interfaces, and those interfaces should be documented.
State
State is good. State is bad.
What is state? State elements are the memory elements in your system design that affect the execution of routines. If we pause the execution of any system, we can always get it back to that state by restoring all the state elements and hitting play.
State elements, along with the combinational logic, completely characterize a system. Further, if saved at a time $t$, they completely describe the present and future values of the system.
System design:
- State should be stored in as few locations as possible.
- The entirety of system state should be documented and able to be dumped, inspected.
Reduction of states: continually plan systems so that the number of abstract states stays small. That is, global state shared between components should be almost non-existent.
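A hedged sketch of both rules: collect all mutable state in one struct and give it a single dump routine (the fields are placeholders for a real system's state elements):

#include <stdio.h>

typedef struct {
    int      mode;          // current operating mode
    unsigned requests;      // requests served so far
    long     last_event_ms; // timestamp of the last event
} system_state;

static system_state state;  // the one location that holds state

void dump_state(FILE *out) {
    fprintf(out, "mode=%d requests=%u last_event_ms=%ld\n",
            state.mode, state.requests, state.last_event_ms);
}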
Performance via parallelism
Performance via pipelining
Performance via prediction
Dependability via redundancy
Common algorithms
Performance
References
Remzi H. Arpaci-Dusseau, Andrea C. Arpaci-Dusseau, Operating Systems: Three Easy Pieces, Version 1.10, [Online]
Douglas Wilhelm Harder, Abstract Data Types, Accessed 2024-03-11, [Online]
David A. Patterson, Computer Organization and Design, Published x, [Book]
Egor Rogov, PostgreSQL 14 Internals, Published 2023, [PDF]