os202

A repository for operating systems class fall 2020

Top 10 List of Week 04

  1. How computer memory works
    Memory remembers: it saves instructions and files (your movies too) in the form of basic units called bits, or binary digits. Each bit is stored in a memory cell that can switch between two states: 0 and 1. Everything is processed in the CPU, the computer's brain. A computer has short-term memory for immediate tasks and long-term memory for permanent storage. For instance, when I run a program, the OS allocates an area in short-term memory for executing its instructions; when I press a key, the CPU accesses one of these locations to retrieve bits of data. The time this takes is called memory latency. In RAM, any location in memory can be accessed directly (randomly), which keeps access fast; the most common type is DRAM. The TED-Ed video I linked above is legit.

  2. DRAM and SRAM
    DRAM: each memory cell consists of a tiny transistor and a capacitor that stores electrical charge, with 0 meaning no charge and 1 meaning charged. Because the charge leaks away, it must be refreshed periodically. SRAM: because even RAM is slow compared to the CPU, there is a small, high-speed internal memory cache made from static RAM, whose cells are usually built from six interlocked transistors and don't need refreshing. It is the fastest memory in a computer system, but also the most expensive, and it takes up about three times more space than DRAM.

  3. Long-term storage device
    - Magnetic storage: the cheapest. Data is stored as a magnetic pattern on a spinning disc coated with magnetic film. The disc must rotate to where the data is located, so latency is high (about 100,000 times slower than DRAM).
    - Optical disc storage, like DVD and Blu-ray: also uses a spinning disc, but with a reflective coating. Bits are encoded as light and dark spots that a laser can read. Cheap and removable, but with higher latency and lower capacity than magnetic storage.
    - Solid-state drives, like flash drives: the newest and fastest long-term storage, with no moving parts. They use floating-gate transistors that store bits by trapping and removing electrical charges.

  4. Memory vs Storage and its history
    From POK we know that computer memory is volatile. Storage is different: data on a hard drive, for instance, stays there until it is deleted or overwritten, even if the power goes off. The Crash Course video I linked above explains the history of memory and storage from before 1950 up until now. What we use affordably and easily today is the result of re-invention and trial and error! It covers delay line memory, the stored-program computer (EDVAC) with its sequential memory, magnetic core memory, MIT's Whirlwind computer (the predominant RAM of its time), which could access any data directly unlike delay line memory, and solid-state drives, which have no moving parts unlike optical or compact discs but are still slower than a computer's RAM. That's why computers still use a memory hierarchy.

  5. Memory hierarchy
    The memory hierarchy separates computer storage into levels based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. As can be seen in the diagram at https://en.wikipedia.org/wiki/Memory_hierarchy#/media/File:ComputerMemoryHierarchy.svg, the highest level is the processor registers, the middle is RAM, and the lower levels are hard drives and tape backup.
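
    To make the hierarchy concrete, here is a minimal C sketch (my own illustration, not from the linked diagram) that times a sequential pass versus a cache-hostile strided pass over the same array. The strided pattern defeats the fast caches near the top of the hierarchy, so it should run noticeably slower even though it performs the same number of reads.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)    /* 16M ints: much larger than typical CPU caches */
    #define STRIDE 4096    /* jump far enough that nearly every read misses */

    int main(void) {
        long long sum = 0;
        int *a = calloc(N, sizeof *a);
        if (!a) return 1;

        /* Sequential pass: each cache line is fetched once and fully used. */
        clock_t t0 = clock();
        for (int i = 0; i < N; i++) sum += a[i];
        double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Strided pass: the same N reads, but spread across memory so the
           caches and prefetcher can't help; main-memory latency dominates. */
        t0 = clock();
        for (int s = 0; s < STRIDE; s++)
            for (int i = s; i < N; i += STRIDE) sum += a[i];
        double strided = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("sequential: %.3fs   strided: %.3fs   (sum=%lld)\n", seq, strided, sum);
        free(a);
        return 0;
    }
    ```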

  6. Logical vs Physical Address
    An address generated by the CPU is known as a logical address, which the memory management unit (MMU) translates to a physical address in memory. Before loading, it is important to compare the process size with the physical address space: the process must be smaller than the physical address space. The javatpoint site is a good resource for learning memory management topics.
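
    A minimal sketch of the idea, assuming the simplest MMU model (a relocation/base register plus a limit register; the base of 14000 and limit of 3000 are made-up values): the CPU only ever sees logical addresses, and the hardware adds the base after checking the limit.

    ```c
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical values: the process is loaded at physical address
       14000 and is 3000 bytes long. */
    #define BASE_REG  14000u
    #define LIMIT_REG 3000u

    /* Translate a CPU-generated logical address the way a simple
       base+limit MMU would: check the limit, then add the base. */
    bool translate(uint32_t logical, uint32_t *physical) {
        if (logical >= LIMIT_REG)   /* outside the process: protection trap */
            return false;
        *physical = BASE_REG + logical;
        return true;
    }

    int main(void) {
        uint32_t phys;
        uint32_t samples[] = {0, 1234, 2999, 3000};
        for (int i = 0; i < 4; i++) {
            if (translate(samples[i], &phys))
                printf("logical %4u -> physical %u\n", samples[i], phys);
            else
                printf("logical %4u -> trap: beyond process size\n", samples[i]);
        }
        return 0;
    }
    ```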

  7. Paging
    Modern operating systems use paging to manage memory. In this process, physical memory is divided into fixed-sized blocks called frames and logical memory into blocks of the same size called pages. When paging is used, a logical address is divided into two parts: a page number and a page offset. The page number serves as an index into a per-process page table that contains the frame in physical memory that holds the page. The offset is the specific location in the frame being referenced. (Silberschatz et al.) https://www.geeksforgeeks.org/paging-in-operating-system/ is a good resource to learn paging, the TLB, and segmentation in more detail.
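
    Here is a small sketch of the translation arithmetic, assuming 4 KiB pages and a made-up page table: the page number indexes the table, and the offset is carried over to the frame unchanged.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Assumed toy parameters: 4 KiB pages, so the low 12 bits are the offset. */
    #define PAGE_SIZE   4096u
    #define OFFSET_BITS 12

    int main(void) {
        /* Hypothetical page table: page_table[p] = frame holding page p. */
        uint32_t page_table[] = {5, 2, 7, 0};

        uint32_t logical = 0x2ABC;                     /* example logical address */
        uint32_t page    = logical >> OFFSET_BITS;     /* logical / PAGE_SIZE */
        uint32_t offset  = logical & (PAGE_SIZE - 1);  /* logical % PAGE_SIZE */

        /* Physical address = frame number concatenated with the same offset. */
        uint32_t frame    = page_table[page];
        uint32_t physical = (frame << OFFSET_BITS) | offset;

        printf("logical 0x%X -> page %u, offset 0x%X -> frame %u -> physical 0x%X\n",
               logical, page, offset, frame, physical);
        return 0;
    }
    ```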

  8. Translation look-aside buffer - TLB
    A translation look-aside buffer (TLB) is a hardware cache of the page table. Each TLB entry contains a page number and its corresponding frame. Using a TLB in address translation for paging systems involves obtaining the page number from the logical address and checking whether the frame for that page is in the TLB. If it is (a hit), the frame is obtained from the TLB; if not (a miss), the frame must be retrieved from the page table, and the entry is usually cached in the TLB for future references.
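
    A toy simulation of that lookup (the 4-entry size and round-robin replacement are my own assumptions; real TLBs are associative hardware searched in parallel):

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_SIZE 4

    /* One TLB entry: a cached (page -> frame) mapping. */
    struct tlb_entry { uint32_t page, frame; bool valid; };

    static struct tlb_entry tlb[TLB_SIZE];
    static uint32_t page_table[16];   /* toy per-process page table */
    static int next_victim;           /* simple round-robin replacement */

    /* Translate a page number, consulting the TLB before the page table. */
    uint32_t lookup(uint32_t page) {
        for (int i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid && tlb[i].page == page) {
                printf("page %u: TLB hit  -> frame %u\n", page, tlb[i].frame);
                return tlb[i].frame;
            }
        /* TLB miss: walk the page table, then cache the entry. */
        uint32_t frame = page_table[page];
        tlb[next_victim] = (struct tlb_entry){ page, frame, true };
        next_victim = (next_victim + 1) % TLB_SIZE;
        printf("page %u: TLB miss -> frame %u (now cached)\n", page, frame);
        return frame;
    }

    int main(void) {
        for (uint32_t p = 0; p < 16; p++) page_table[p] = p + 100;
        uint32_t refs[] = {3, 3, 7, 3, 9, 7};   /* repeated pages should hit */
        for (int i = 0; i < 6; i++) lookup(refs[i]);
        return 0;
    }
    ```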

  9. Memory manager
    Each process executes in memory, so memory management is a core contributor to efficiency. The memory manager's three old schemes of memory management:
    - Single-user contiguous: not efficient; a single job reserves the entire memory space, and if a job is larger than memory, it won't be executed at all. Jobs run sequentially. Slow.
    - Fixed partitions: memory is divided into partitions so that more than one job can be allocated at a time. The partition memory table stores all the partitions. Partition sizes are static and cannot be changed without a reboot, so dynamic partitioning came as the solution.
    - Dynamic partitions: jobs are given exactly as much memory as they request, so no memory is wasted inside a partition.
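
    A toy sketch of the fixed partition scheme (partition sizes and jobs are made up): the table's sizes never change, so any unused space inside a chosen partition is wasted, and a job bigger than every partition can never run.

    ```c
    #include <stdio.h>

    /* Hypothetical fixed partition memory table: sizes are set at boot
       and cannot change without a reboot. */
    struct partition { int size; int job; };   /* job == 0 means free */

    struct partition table[] = { {100, 0}, {250, 0}, {500, 0} };
    #define NPART 3

    /* Place a job in the first free partition big enough for it.
       Leftover space inside the partition is internal fragmentation. */
    int place(int job_id, int job_size) {
        for (int i = 0; i < NPART; i++)
            if (table[i].job == 0 && table[i].size >= job_size) {
                table[i].job = job_id;
                printf("job %d (%dK) -> partition %d (%dK), %dK wasted\n",
                       job_id, job_size, i, table[i].size,
                       table[i].size - job_size);
                return i;
            }
        printf("job %d (%dK) must wait: no partition fits\n", job_id, job_size);
        return -1;
    }

    int main(void) {
        place(1, 90);    /* fits partition 0, wastes 10K  */
        place(2, 300);   /* fits partition 2, wastes 200K */
        place(3, 600);   /* larger than every partition: never runs */
        return 0;
    }
    ```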

  10. Allocating partitions
    One approach to allocating memory is to allocate partitions of contiguous memory of varying sizes. These partitions may be allocated based on three possible strategies: (1) first fit, (2) best fit, and (3) worst fit. Both fixed and dynamic partitions need some way to allocate a job to an available partition. The video I linked above explains it clearly; a small sketch of the three strategies follows below.
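
    A minimal sketch over a made-up free list of holes: first fit takes the first hole that is big enough, best fit the smallest that still fits, and worst fit the largest.

    ```c
    #include <stdio.h>

    /* Hypothetical free list of dynamic partition sizes (in KB). */
    int holes[] = {200, 50, 120, 300, 80};
    #define NHOLES 5

    /* Each function returns the index of the hole chosen for a request
       of `size` KB, or -1 if nothing fits. */
    int first_fit(int size) {
        for (int i = 0; i < NHOLES; i++)
            if (holes[i] >= size) return i;     /* first hole that fits */
        return -1;
    }

    int best_fit(int size) {
        int best = -1;
        for (int i = 0; i < NHOLES; i++)        /* smallest hole that fits */
            if (holes[i] >= size && (best < 0 || holes[i] < holes[best]))
                best = i;
        return best;
    }

    int worst_fit(int size) {
        int worst = -1;
        for (int i = 0; i < NHOLES; i++)        /* largest hole that fits */
            if (holes[i] >= size && (worst < 0 || holes[i] > holes[worst]))
                worst = i;
        return worst;
    }

    int main(void) {
        int req = 100;
        printf("request %dK: first fit -> hole %d, best fit -> hole %d, worst fit -> hole %d\n",
               req, first_fit(req), best_fit(req), worst_fit(req));
        /* first fit picks 200K (index 0), best fit 120K (index 2),
           worst fit 300K (index 3) */
        return 0;
    }
    ```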