Operating System and Environment

INSTRUCTIONS:

Module 2 - Case



PROCESS MANAGEMENT

The Process

A process is a program in execution. A program by itself is not a process. A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file). In contrast, a process is an active entity. A program becomes a process when an executable file is loaded into memory. Two common techniques for loading executable files are double-clicking an icon representing the executable file and entering the name of the executable file on the command line.
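As a concrete illustration of what happens beneath either technique on a Unix-like system, the following minimal C sketch creates a new process with fork() and loads an executable file into it with the exec family of calls. The path /bin/ls is only an example.

    /* Minimal POSIX sketch: an executable file becomes a process when it
     * is loaded into memory. /bin/ls is only an example program path. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();          /* create a new (child) process */
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* child: replace its memory image with the executable file */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");         /* reached only if the load failed */
            _exit(1);
        }
        wait(NULL);                  /* parent waits for the child to terminate */
        return 0;
    }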



1. Process State



As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:



New. The process is being created.

Running. Instructions are being executed.

Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).

Ready. The process is waiting to be assigned to a processor.

Terminated. The process has finished execution.

These names are arbitrary, and they vary across operating systems. The states that they represent are found on all systems, however. Certain operating systems also more finely delineate process states. It is important to realize that only one process can be running on any processor at any instant. Many processes may be ready and waiting, however. The state diagram corresponding to these states is presented in Figure 1.






Figure 1. Diagram of Process State
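In code, an operating system might represent these five states with a simple enumeration. The identifiers below are illustrative only and are not taken from any particular kernel:

    /* Illustrative only: the five conceptual process states. Real kernels
     * use their own, often finer-grained, names. */
    enum proc_state {
        PROC_NEW,         /* being created */
        PROC_READY,       /* waiting to be assigned to a processor */
        PROC_RUNNING,     /* instructions being executed */
        PROC_WAITING,     /* waiting for an event (I/O completion, a signal) */
        PROC_TERMINATED   /* finished execution */
    };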



2. Process Control Block



Each process is represented in the operating system by a process control block (PCB)—also called a task control block. A PCB is shown in Figure 2. It contains many pieces of information associated with a specific process, including these:






Figure 2. Process Control Block (PCB)



Process state. The state may be new, ready, running, waiting, halted, and so on.

Program counter. The counter indicates the address of the next instruction to be executed for this process.

CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.

CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory-management information. This information may include such items as the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.

Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for any information that may vary from process to process.
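As a rough sketch, a PCB might be declared along the following lines in C. All field names here are hypothetical; a real kernel structure (for example, Linux's struct task_struct) holds far more:

    /* Simplified, hypothetical PCB layout. */
    struct pcb {
        int            state;            /* process state: new, ready, running, ... */
        unsigned long  program_counter;  /* address of the next instruction */
        unsigned long  regs[16];         /* saved CPU registers (architecture-dependent) */
        int            priority;         /* CPU-scheduling information */
        void          *page_table;       /* memory-management information */
        unsigned long  cpu_time_used;    /* accounting information */
        int            open_fds[32];     /* I/O status: open files and devices */
    };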



3. Interprocess Communication



Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.



There are several reasons for providing an environment that allows process cooperation:



Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.

Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.

Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.

Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication: shared memory and message passing. In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. The two communications models are contrasted in Figure 3.






Figure 3. Interprocess Communication ((a) message passing; (b) shared memory)



Both of the models just mentioned are common in operating systems, and many systems implement both. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. Message passing is also easier to implement in a distributed system than shared memory. Shared memory can be faster than message passing, since message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention. In shared-memory systems, system calls are required only to establish shared-memory regions. Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.
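The following minimal C sketch shows the message-passing model on a Unix-like system, using a pipe between a parent and a child; note that every exchange goes through a system call (write and read). A shared-memory version would instead establish a region once (for example, with shm_open() and mmap()) and then exchange data through ordinary loads and stores, matching the trade-off just described.

    /* Minimal message-passing sketch: a POSIX pipe between parent and child. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                      /* fd[0]: read end, fd[1]: write end */
        char buf[64];

        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {              /* child: sends one message */
            close(fd[0]);
            const char *msg = "hello via message passing";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            _exit(0);
        }
        close(fd[1]);                   /* parent: receives the message */
        read(fd[0], buf, sizeof buf);   /* a kernel call on every exchange */
        printf("parent got: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }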



Recent research on systems with several processing cores indicates that message passing provides better performance than shared memory on such systems. Shared memory suffers from cache coherency issues, which arise because shared data migrate among the several caches. As the number of processing cores on systems increases, it is possible that we will see message passing as the preferred mechanism for IPC.



Threads

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time. Figure 4 illustrates the difference between a traditional single-threaded process and a multithreaded process.






Figure 4. Single-threaded and multithreaded processes
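A minimal POSIX threads sketch of a multithreaded process follows: two threads of control in one process, each with its own stack and registers, sharing the process's data section (compile with -pthread).

    /* Two threads sharing one process's data section. */
    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                     /* data section: visible to all threads */

    void *worker(void *arg) {
        long id = (long)arg;            /* each thread has its own stack and registers */
        shared++;                       /* a real program would guard this with a mutex */
        printf("thread %ld ran, shared = %d\n", id, shared);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }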



1. Motivation



Most software applications that run on modern computers are multithreaded. An application typically is implemented as a separate process with several threads of control. A Web browser might have one thread display images or text while another thread retrieves data from the network, for example. A word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background. Applications can also be designed to leverage processing capabilities on multicore systems. Such applications can perform several CPU-intensive tasks in parallel across the multiple computing cores.



In certain situations, a single application may be required to perform several similar tasks. For example, a Web server accepts client requests for Web pages, images, sound, and so forth. A busy Web server may have several (perhaps thousands of) clients concurrently accessing it. If the Web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and a client might have to wait a very long time for its request to be serviced.



One solution is to have the server run as a single process that accepts requests. When the server receives a request, it creates a separate process to service that request. In fact, this process-creation method was in common use before threads became popular. Process creation is time consuming and resource intensive, however. If the new process will perform the same tasks as the existing process, why incur all that overhead? It is generally more efficient to use one process that contains multiple threads. If the Web-server process is multithreaded, the server will create a separate thread that listens for client requests. When a request is made, rather than creating another process, the server creates a new thread to service the request and resumes listening for additional requests.
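A skeletal C sketch of this thread-per-request pattern appears below. The functions get_next_request() and handle_request() are hypothetical stubs standing in for real socket code, and a real server would loop forever rather than three times.

    /* Skeletal thread-per-request server; the request functions are stubs. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static long get_next_request(void)   { sleep(1); return 42; }   /* stub */
    static void handle_request(long req) { printf("serviced request %ld\n", req); }

    void *service_thread(void *arg) {
        handle_request((long)arg);      /* one short-lived thread per request */
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < 3; i++) {   /* a real server would loop forever */
            long req = get_next_request();
            pthread_t tid;
            /* the listener resumes immediately: the work happens in a new
             * thread rather than a newly created process */
            pthread_create(&tid, NULL, service_thread, (void *)req);
            pthread_detach(tid);
        }
        sleep(1);                       /* crude: give detached threads time to finish */
        return 0;
    }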



2. Benefits 



The benefits of multithreaded programming can be broken down into four major categories:



Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. This quality is especially useful in designing user interfaces. For instance, consider what happens when a user clicks a button that results in the performance of a time-consuming operation. A single-threaded application would be unresponsive to the user until the operation had completed. In contrast, if the time-consuming operation is performed in a separate thread, the application remains responsive to the user.

Resource sharing. Processes can only share resources through techniques such as shared memory and message passing. Such techniques must be explicitly arranged by the programmer. However, threads share the memory and the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.

Economy. Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. Empirically gauging the difference in overhead can be difficult, but in general it is significantly more time consuming to create and manage processes than threads.

Scalability. The benefits of multithreading can be even greater in a multiprocessor architecture, where threads may be running in parallel on different processing cores. A single-threaded process can run on only one processor, regardless of how many are available.

3. Multicore Programming



Earlier in the history of computer design, in response to the need for more computing performance, single-CPU systems evolved into multi-CPU systems. A more recent, similar trend in system design is to place multiple computing cores on a single chip. Each core appears as a separate processor to the operating system. Whether the cores appear across CPU chips or within CPU chips, we call these systems multicore or multiprocessor systems. Multithreaded programming provides a mechanism for more efficient use of these multiple computing cores and improved concurrency. Consider an application with four threads. On a system with a single computing core, concurrency merely means that the execution of the threads will be interleaved over time (Figure 5), because the processing core is capable of executing only one thread at a time. On a system with multiple cores, however, concurrency means that the threads can run in parallel, because the system can assign a separate thread to each core (Figure 6).






Figure 5. Concurrent execution on single-core system






Figure 6. Parallelism on a multicore system



Notice the distinction between parallelism and concurrency in this discussion. A system is parallel if it can perform more than one task simultaneously. In contrast, a concurrent system supports more than one task by allowing all the tasks to make progress. Thus, it is possible to have concurrency without parallelism. Before the advent of multicore architectures, most computer systems had only a single processor. CPU schedulers were designed to provide the illusion of parallelism by rapidly switching between processes in the system, thereby allowing each process to make progress. Such processes were running concurrently, but not in parallel.



As systems have grown from tens of threads to thousands of threads, CPU designers have improved system performance by adding hardware to improve thread performance. Modern Intel CPUs frequently support two threads per core, while the Oracle T4 CPU supports eight threads per core. This support means that multiple threads can be loaded into the core for fast switching.
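A program can ask the operating system how many logical processors (cores times hardware threads) are online. The sketch below uses sysconf(_SC_NPROCESSORS_ONLN), which is a widely supported extension on Linux, macOS, and Solaris rather than a strict POSIX guarantee.

    /* Query the number of logical processors the OS exposes. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        printf("logical processors online: %ld\n", n);
        return 0;
    }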



Multicore computers will no doubt continue to increase in core counts and hardware thread support.



4. Multithreading Models



Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads. User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system. Virtually all contemporary operating systems—including Windows, Linux, Mac OS X, and Solaris—support kernel threads.



Ultimately, a relationship must exist between user threads and kernel threads. In this section, we look at three common ways of establishing such a relationship: the many-to-one model, the one-to-one model, and the many-to-many model.



Many-To-One Model

The many-to-one model (Figure 7) maps many user-level threads to one kernel thread. The entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multicore systems. Very few systems continue to use the model because of its inability to take advantage of multiple processing cores.






Figure 7. Many-to-one model



One-to-One Model

The one-to-one model (Figure 8) maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call. It also allows multiple threads to run in parallel on multiprocessors. The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread. Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system. Linux, along with the family of Windows operating systems, implements the one-to-one model.






Figure 8. One-to-one model
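The one-to-one mapping can be observed directly on Linux, where each pthread receives its own kernel thread ID while sharing the process ID. The sketch below is Linux-specific; it uses the raw gettid system call, since glibc gained a gettid() wrapper only in version 2.30. Compile with -pthread.

    /* Linux-specific: each pthread maps to a distinct kernel thread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    void *show_ids(void *arg) {
        (void)arg;
        printf("pid = %ld, kernel tid = %ld\n",
               (long)getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, show_ids, NULL);
        pthread_create(&t2, NULL, show_ids, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }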



Many-To-Many Model

The many-to-many model (Figure 9) multiplexes many user-level threads to a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine (an application may be allocated more kernel threads on a multiprocessor than on a single processor).






Figure 9. Many-to-many model



Let’s consider the effect of this design on concurrency. Whereas the many-to-one model allows the developer to create as many user threads as she wishes, it does not result in true concurrency, because the kernel can schedule only one thread at a time. The one-to-one model allows greater concurrency, but the developer has to be careful not to create too many threads within an application (and in some instances may be limited in the number of threads she can create). The many-to-many model suffers from neither of these shortcomings: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.



To learn more about process management, check the following sites:



Process Management at http://ptgmedia.pearsoncmg.com/images/0201702452/samplechapter/mckusick_ch04.pdf



Processes and Process Management at http://nptel.ac.in/courses/106108101/pdf/Lecture_Notes/Mod%203_LN.pdf



Operating Systems Processes at http://web.cs.wpi.edu/~cs3013/c07/lectures/Section03-Processes.pdf



Processes Operating Systems at http://www.cs.princeton.edu/courses/archive/spr03/cs217/lectures/Process.pdf



Processes - Part 1 at https://www.youtube.com/watch?v=TIa2mhKCeYo



Processes - Part 2 at https://www.youtube.com/watch?v=_5EV7isUJ6k



Process Management at https://www.youtube.com/watch?v=mHPySA51t18



Processes and Threads at http://web.stanford.edu/class/cs140/cgi-bin/lecture.php?topic=process



Processes vs. Threads at http://www.programmerinterview.com/index.php/operating-systems/thread-vs-process/



Programming Interview: Processes and Threads in Operating System at https://www.youtube.com/watch?v=yocLHHtmA7M



Introduction to Processes & Threads at https://www.youtube.com/watch?v=hsERPf9k54U

CONTENT:

OPERATING SYSTEMS AND ENVIRONMENTS

Name:
Institution:
Course:
Date:

A basic characteristic of cooperating processes is that they require interprocess communication, since more than one process interacts with the others to achieve computation speedup, information sharing, modularity, or convenience (Swift, 2009). Interprocess communication is based on one of two models: message passing and shared memory. The message-passing model is common where the amounts of data exchanged are small and thus will not cause any form of interference or conflicts to be

...
