os202

A repository for operating systems class fall 2020


Top 10 List of Week 06

  1. Process Concept
    Early computers allowed only one program to be executed at a time. In contrast, contemporary computer systems allow multiple programs to be loaded into memory and executed concurrently. This requires more control and compartmentalization of the various programs, and these needs resulted in the notion of a process, which is a program in execution. A process is the unit of work in a modern computing system. A system therefore consists of a collection of processes, some executing user code, others executing operating-system code.

  2. System Calls
    System calls provide an interface to the services made available by an operating system. They are the programmatic way in which a computer program requests a service from the kernel of the operating system. These calls are generally available as routines written in C or C++. Invoking a system call switches the processor from user mode to kernel mode so that protected resources, such as files, can be accessed. The above YouTube video explains them concisely.
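
    Below is a minimal sketch of what invoking system calls from C looks like, assuming a UNIX/Linux environment; write() and getpid() are thin C wrappers around the corresponding kernel services, and each call switches from user mode to kernel mode:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode\n";

    /* write() asks the kernel to copy bytes to standard output */
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* getpid() asks the kernel for this process's ID */
    printf("my process id: %d\n", (int)getpid());
    return 0;
}
```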

  3. Fork() System Call
    The fork() system call is used to create processes on UNIX and Linux based systems. On Windows, the equivalent call is CreateProcess().
    Fork is used to create a separate, duplicate process. The calling process is duplicated exactly, but the copy receives a different process ID. The process that calls fork() is the parent, and the newly created processes are its children. The above YouTube video shows a simple program that prints one line of output. When fork() is placed before the print statement, the program prints 2 lines, and when fork() is called 3 times, it prints 8 lines.
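
    A small sketch of the behaviour described above, assuming a UNIX-like system: with three fork() calls, 2 x 2 x 2 = 8 processes reach the print statement, so 8 lines are printed.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    fork();   /* 1 process becomes 2 */
    fork();   /* 2 processes become 4 */
    fork();   /* 4 processes become 8 */
    printf("hello from process %d\n", (int)getpid());
    return 0;
}
```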

  4. Exec() System Call
    Again, exec() is specific to UNIX/Linux based systems. When exec() is invoked, the program specified in the parameter to exec() replaces the entire process from which exec() was called. It replaces one program with another inside the same process, so the process ID stays the same. In the above video, ex2 is called from the ex1 file; ex2 executes with the same process ID, and control does not return to ex1 once ex2 is finished, because ex1's image has been replaced entirely.
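
    A brief sketch of exec() on a UNIX-like system; /bin/ls is only an illustrative replacement target, not the ex1/ex2 programs from the video. Once execlp() succeeds, the calling program's image is gone, so the final line runs only if the call fails.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("before exec, pid = %d\n", (int)getpid());

    /* replace this process's image with ls, keeping the same PID */
    execlp("ls", "ls", "-l", (char *)NULL);

    perror("execlp failed");   /* reached only if exec() fails */
    return 1;
}
```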

  5. Process in Memory
    The layout of a process in memory is represented by four different sections: (1) text, (2) data, (3) heap, and (4) stack. The text section comprises the program’s code, the data section stores global and static variables, the heap is used for dynamic memory allocation (managed via calls to new, delete, malloc, and free), and the stack stores local variables.
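
    A small sketch mapping C objects onto these four sections:

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;                     /* data section (global variable) */

int main(void) {                             /* machine code lives in the text section */
    int local = 7;                           /* stack (local variable) */
    int *dynamic = malloc(sizeof *dynamic);  /* heap (dynamic allocation) */

    *dynamic = 3;
    printf("%d %d %d\n", global_counter, local, *dynamic);
    free(dynamic);
    return 0;
}
```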

  6. Process States
    As a process executes, it changes state. A process may be in one of five states: (1) new, (2) ready, (3) running, (4) waiting, and (5) terminated. New: the process is being created; Ready: the process has all the resources it needs but is waiting to be assigned a CPU; Running: the CPU is executing the process’s instructions; Waiting: the process cannot run at the moment because it is waiting for some resource or event to become available; Terminated: the process has finished execution.

  7. Threads
    Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time; for instance, a word processor lets us type while it checks our spelling. This feature is especially beneficial on multicore systems, where multiple threads can run in parallel. Most software applications that run on modern computers and mobile devices are multithreaded. An application is typically implemented as a separate process with several threads of control. A web browser might have one thread that displays images or text while another thread retrieves data from the network. Similarly, a word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
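
    A minimal sketch of a multithreaded process using POSIX threads (compile with -pthread); the spell_check worker here is a hypothetical background task, not code from the examples above:

```c
#include <stdio.h>
#include <pthread.h>

void *spell_check(void *arg) {
    (void)arg;
    printf("worker thread: checking spelling in the background...\n");
    return NULL;
}

int main(void) {
    pthread_t worker;

    pthread_create(&worker, NULL, spell_check, NULL);  /* start a second thread */
    printf("main thread: responding to keystrokes...\n");
    pthread_join(worker, NULL);                        /* wait for the worker to finish */
    return 0;
}
```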

  8. Process Scheduling
    The role of the process scheduler is to select an available process to run on a CPU, meeting the objectives of multiprogramming (keeping some process running at all times to maximize CPU utilization) and time sharing (switching a CPU core among processes so frequently that users can interact with each program while it runs). For a system with a single CPU core, there will never be more than one process running at a time, whereas a multicore system can run multiple processes at one time.
    In general, most processes can be described as either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations.

  9. Interprocess Communication
    Processes executing concurrently in an OS may be either independent processes or cooperating processes. A process is independent if it does not share data with any other process executing in the system. A process is cooperating if it can affect or be affected by the other processes executing in the system. The benefits of process cooperation include information sharing, computation speedup, and modularity.
    Cooperating processes require an interprocess communication (IPC) mechanism to exchange data. There are two fundamental models of interprocess communication: shared memory and message passing.
    In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.
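
    A short sketch of the message-passing model using an ordinary UNIX pipe between a parent and its child:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[64];

    pipe(fd);                      /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {             /* child: receive the message */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
    } else {                       /* parent: send the message */
        close(fd[0]);
        const char msg[] = "hello via pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
    }
    return 0;
}
```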

  10. Data vs. Task Parallelism
    As the trend toward multicore systems continues, OS designers must write scheduling algorithms that use multiple processing cores to allow parallel execution. Data parallelism focuses on distributing subsets of the same data across multiple computing cores and performing the same operation on each core. For example, consider summing the contents of an array of size N. On a single-core system, one thread would simply sum the elements [0] ... [N - 1]. On a dual-core system, thread A, running on core 0, could sum the elements [0] ... [N/2 - 1] while thread B, running on core 1, could sum the elements [N/2] ... [N - 1]. Task parallelism involves distributing not data but tasks (threads) across multiple computing cores, with each thread performing a unique operation.
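
    A sketch of the data-parallel array sum described above, using two POSIX threads (compile with -pthread); each thread sums its own half of the array:

```c
#include <stdio.h>
#include <pthread.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[2];

void *sum_half(void *arg) {
    long id = (long)arg;                     /* 0 sums the first half, 1 the second */
    long start = id * (N / 2), end = start + N / 2;
    for (long i = start; i < end; i++)
        partial[id] += data[i];
    return NULL;
}

int main(void) {
    pthread_t a, b;

    pthread_create(&a, NULL, sum_half, (void *)0L);
    pthread_create(&b, NULL, sum_half, (void *)1L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("total = %ld\n", partial[0] + partial[1]);   /* prints total = 36 */
    return 0;
}
```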