Thursday, October 17, 2013

OS: Scheduling Algorithms

Scheduling Algorithms

First-Come, First-Served (FCFS) Scheduling
       
        Process     Burst Time
        P1              24
        P2               3
        P3               3

Suppose that the processes arrive in the order: P1, P2, P3

The Gantt Chart for the schedule is:
        
  
| P1 | P2 | P3 |
0    24   27   30

  Waiting time for P1 = 0; P2 = 24; P3 = 27
  Average waiting time: (0 + 24 + 27)/3 = 17

  Suppose that the processes arrive in the order: P2, P3, P1.

The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

  Waiting time for P1 = 6; P2 = 0; P3 = 3
  Average waiting time: (6 + 0 + 3)/3 = 3
  
Much better than the previous case.

 Convoy effect: short processes get stuck waiting behind a long process.
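
To make the arithmetic above concrete, here is a minimal Python sketch (function and variable names are just illustrative) that computes FCFS waiting times, assuming all processes arrive at time 0 as in the example:

def fcfs_waiting_times(bursts):
    # Each process waits for the total burst time of everything queued ahead of it.
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)
        elapsed += burst
    return waiting

print(fcfs_waiting_times([24, 3, 3]))   # order P1, P2, P3 -> [0, 24, 27], average 17
print(fcfs_waiting_times([3, 3, 24]))   # order P2, P3, P1 -> [0, 3, 6],   average 3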
  
Shortest-Job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.

Two schemes:

1. Nonpreemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.

2. Preemptive – preemption takes place when a new process arrives with a CPU burst length less than the remaining time of the currently executing process. This scheme is also known as Shortest-Remaining-Time-Next (SRTN) scheduling.

SJF is optimal – gives minimum average waiting time for a given set of processes.

        Process     Arrival Time     Burst Time
        P1              0.0               7
        P2              2.0               4
        P3              4.0               1
        P4              5.0               4

SJF (non-preemptive)

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4

Example of Preemptive SJF

        Process     Arrival Time     Burst Time
        P1              0.0               7
        P2              2.0               4
        P3              4.0               1
        P4              5.0               4

SJF (preemptive)

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
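
A minimal Python sketch of the preemptive scheme (shortest remaining time first), stepping one time unit at a time; the process data are from the example above, everything else (names, structure) is an illustrative assumption. It reproduces the waiting times 9, 1, 0 and 2:

def srtf_waiting_times(arrival, burst):
    n = len(burst)
    remaining = burst[:]
    completion = [0] * n
    time = 0
    while any(r > 0 for r in remaining):
        # Pick the arrived, unfinished process with the shortest remaining time.
        ready = [i for i in range(n) if arrival[i] <= time and remaining[i] > 0]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            completion[i] = time
    # waiting time = turnaround time - burst time
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

print(srtf_waiting_times([0, 2, 4, 5], [7, 4, 1, 4]))   # [9, 1, 0, 2], average 3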
       

The length of the next CPU burst cannot be known in advance, but it can be estimated from the lengths of previous CPU bursts using exponential averaging, at the cost of a little extra bookkeeping on every burst.
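
The usual estimate is the exponential average τ(n+1) = α · t(n) + (1 - α) · τ(n), where t(n) is the length of the most recent burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1 weights recent history against older history. A tiny Python sketch (the starting estimate and measured bursts are made-up values):

def predict_next_burst(prev_estimate, last_burst, alpha=0.5):
    # tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
    return alpha * last_burst + (1 - alpha) * prev_estimate

tau = 10                        # assumed initial estimate
for t in (6, 4, 6, 4):          # measured burst lengths
    tau = predict_next_burst(tau, t)
print(tau)                      # 5.0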



Priority Scheduling

A priority number (an integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). Priority scheduling can be either:
       1. Preemptive
       2. Nonpreemptive
SJF is a form of priority scheduling where the priority is the predicted length of the next CPU burst.
Problem ≡ Starvation – low-priority processes may never execute.
Solution ≡ Aging – as time progresses, increase the priority of waiting processes.
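
A minimal sketch of aging (the boost threshold and step are assumptions, not standard values): every so often, any process that has waited long enough is moved one priority level up. Remember that a smaller number means a higher priority.

def age_waiting_processes(priorities, waited_ticks, threshold=10):
    # Boost (decrease the number of) any process that has waited at least `threshold` ticks.
    return [max(0, p - 1) if w >= threshold else p
            for p, w in zip(priorities, waited_ticks)]

print(age_waiting_processes([5, 3, 7], [12, 2, 30]))   # [4, 3, 6]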

Round Robin (RR)

Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance
        1. q large ⇒ RR behaves like FIFO.
        2. q small ⇒ q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with Time Quantum = 4

        Process     Burst Time
        P1              24
        P2               3
        P3               3

The Gantt chart is:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = ((30-24) + 4 + 7)/3 = 17/3 = 5.66
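
A minimal Python sketch of RR (all processes assumed to arrive at time 0, as in the example; names are illustrative) that reproduces the waiting times 6, 4 and 7:

from collections import deque

def rr_waiting_times(bursts, quantum=4):
    remaining = bursts[:]
    completion = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    time = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run for one quantum or until the burst ends
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # preempted: back to the end of the ready queue
        else:
            completion[i] = time
    return [completion[i] - bursts[i] for i in range(len(bursts))]

print(rr_waiting_times([24, 3, 3]))        # [6, 4, 7], average 17/3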
     
Multilevel Queue

The ready queue is partitioned into separate queues, e.g.:
  foreground (interactive)
  background (batch)
Each queue has its own scheduling algorithm, e.g.:
  foreground – RR
  background – FCFS
Scheduling must also be done between the queues:
  1. Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  2. Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

(Figure: multilevel queue scheduling.)

Multilevel Feedback Queue

A process can move between the various queues; aging can be implemented this way.
Multilevel-feedback-queue scheduler defined by the following parameters:
   1. number of queues
   2. scheduling algorithms for each queue
   3. method used to determine when to upgrade a process to a higher-priority queue
   4. method used to determine when to demote a process to a lower-priority queue
   5. method used to determine which queue a process will enter when that process needs service

Three queues:
    1. Q0 – time quantum 8 milliseconds
    2. Q1 – time quantum 16 milliseconds
    3. Q2 – FCFS
Scheduling
    1. A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
    2. At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
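
A rough Python sketch of how a job falls through these three queues (the structure and names are illustrative, not taken from any particular OS): a job that exhausts the quantum of its current queue is demoted, and the bottom queue runs jobs to completion.

QUANTA = [8, 16, None]               # Q0, Q1, Q2 (None = FCFS, run to completion)

def queues_visited(burst):
    # Return the queue levels a CPU burst of the given length passes through.
    used, levels = 0, []
    for level, quantum in enumerate(QUANTA):
        levels.append(level)
        if quantum is None or burst - used <= quantum:
            break                    # the burst finishes at this level
        used += quantum              # quantum exhausted: demote to the next queue
    return levels

print(queues_visited(5))    # [0]        finishes within Q0's 8 ms
print(queues_visited(20))   # [0, 1]     needs Q0 plus part of Q1's 16 ms
print(queues_visited(40))   # [0, 1, 2]  falls all the way down to the FCFS queue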


OS: Deadlocks

Deadlock Definition and Handling



Deadlock: definition

There exists a cycle of processes such that each process cannot proceed until the next process takes some specific action.
Result: all processes in the cycle are stuck!

There are three practical strategies for dealing with deadlocks:

·        Deadlock Prevention
·        Deadlock Avoidance
·        Deadlock Detection and Recovery

Deadlock Prevention

Since four conditions are required to produce a deadlock, one can infer that if one or more of the conditions is negated, then deadlock can be prevented. Thus, the basic idea of deadlock prevention is to deny at least one of the four necessary conditions for deadlock. 
The mutual exclusion condition cannot be disallowed, so it is customary to prevent one or more of the remaining three conditions. 
Deadlock prevention methods tend to be wasteful in terms of machine resources.

  • Denying Hold and Wait
  • Denying No Preemption
  • Denying Circular Wait (see the sketch below)
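
For example, circular wait can be denied by imposing a global ordering on resources and always acquiring them in that order. A minimal Python sketch (lock names and the ordering are assumptions for illustration):

import threading

# Assumed global ordering: every thread must acquire lock_a (rank 1) before lock_b (rank 2).
lock_a = threading.Lock()
lock_b = threading.Lock()

def use_both_resources():
    # Because all threads acquire the locks in the same order, no cycle of waits can form.
    with lock_a:
        with lock_b:
            pass   # ... work with both resources ...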

Deadlock Avoidance

Deadlock avoidance attempts to predict the possibility of a deadlock dynamically as each resource request is made. The basic idea of deadlock avoidance is to grant only those requests for available resources that cannot possibly result in a state of deadlock. This strategy is usually implemented by having the resource allocator check the effect of granting a particular resource request. If granting the resource cannot possibly lead to deadlock, the resource is granted to the requester. Otherwise, the requesting process is suspended until its pending request can be safely granted.

The most famous deadlock avoidance algorithm is Dijkstra's Banker's Algorithm. The algorithm is called by this name because it involves a banker who makes loans and receives payments from a given source of capital. The banker will only grant a loan if he can still meet the needs of the other customers, based on their projected future loan requirements.
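
The key step is a safety check: a request is granted only if, after granting it, there is still some order in which every process can obtain its maximum remaining need and finish. A minimal Python sketch of that check (the matrices in the example call are made-up data):

def is_safe(available, max_need, allocated):
    # Remaining need of each process = maximum claim - what it already holds.
    need = [[m - a for m, a in zip(m_row, a_row)]
            for m_row, a_row in zip(max_need, allocated)]
    work = available[:]
    finished = [False] * len(max_need)
    progress = True
    while progress:
        progress = False
        for i in range(len(max_need)):
            # A process can finish if its remaining need fits in the free resources.
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocated[i])]   # it releases everything
                finished[i] = True
                progress = True
    return all(finished)

# One resource type with 3 free units; P1 holds 2 (max 7), P2 holds 2 (max 4).
print(is_safe([3], [[7], [4]], [[2], [2]]))   # True: P2 can finish first, then P1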

and finally...

Deadlock Detection and Recovery


This technique does not attempt to prevent deadlocks; instead, it lets them occur. The system detects when a deadlock happens and then takes some reactive action to recover. With deadlock detection, requested resources are granted to processes whenever possible. Periodically, the operating system runs an algorithm that allows it to detect the circular wait condition (polling). Polling can be done as frequently as every resource request, or less frequently, depending on how likely a deadlock is to occur.

Checking at each resource request has two advantages: it leads to early detection, and the algorithm is relatively simple because it is based on incremental changes to the state of the system. On the other hand, such frequent checks consume considerable processor time. Once the detection algorithm has found a deadlock, some strategy is needed for recovery.
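
When every resource has a single instance, detection amounts to finding a cycle in the wait-for graph. A minimal depth-first-search sketch in Python (the example graphs are made up):

def has_deadlock(wait_for):
    # wait_for maps each process to the processes it is waiting on.
    visiting, done = set(), set()
    def dfs(p):
        if p in visiting:
            return True                      # back edge: a cycle (deadlock) exists
        if p in done:
            return False
        visiting.add(p)
        if any(dfs(q) for q in wait_for.get(p, [])):
            return True
        visiting.discard(p)
        done.add(p)
        return False
    return any(dfs(p) for p in wait_for)

print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"]}))                 # False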


Three approaches are:
  • Recovery through Preemption; in some cases, it may be possible to temporarily take a resource away from its current owner and give it to another process.
  • Recovery through Rollback; if deadlocks are likely, processes can be checkpointed periodically. For example, a database can undo transactions and thus free the locks on its records. This often requires extra software functionality.
  • Recovery through Termination; the most trivial way to break a deadlock is to kill one or more processes, for example a process in the cycle. Warning! Irrecoverable losses or erroneous results may occur, even if the victim is the least advanced process.

The Ostrich Algorithm

  • The UNIX approach is just to ignore the problem on the assumption that most users would prefer an occasional deadlock over a rule restricting user access to only one resource at a time.
  • Just like an ostrich burying its head in the sand and pretending nothing is wrong. :)
  • The problem is that the price of prevention is high, mostly in terms of putting inconvenient restrictions on processes.


UNIX: cat Command

cat command

cat is one of the basic UNIX commands.
cat displays the contents of a file. What more can this command do?
Here are 5 practical usage examples for the cat command.

1. Display the contents of a file

When you pass the filename as an argument to cat, it displays the contents of the file as shown below.

$ cat myfile.txt
 
This is the first line in myfile
 
You can also display contents of more than one file as shown below.

$ cat myfile.txt yourfile.txt
 
This is the first line in myfile
 
This is the first line in your file

2. Create a New File

Without any arguments, the cat command receives input from stdin and echoes it back to the screen, so each line you type is repeated.

$ cat
cat command for file oriented operations.
cat command for file oriented operations.
cp command for copy files or directories.
cp command for copy files or directories.
 

The lines received from stdin can be redirected to a new file using the redirection symbol (>).

$ cat >mynewfile.txt
 
this is new file


*press Ctrl+D to save and exit 
 

To append content to a file, use the >> redirection symbol as shown below.

$ cat >> myfile.txt
This is the second line in myfile
*press Ctrl+D to save and exit 
This will add a new line to the end of myfile.txt.

3. Copy File Content

Redirection symbols in UNIX play an important role in working with the standard file descriptors. Using them, you can copy the contents of one file into another as shown below.

$ cat myfile.txt >ourfile.txt

As seen above, since we used output redirection, the content that would have been displayed on standard output is directed into a new file called ourfile.txt.

4. Concatenate Contents of Multiple Files

With the cat command, you can concatenate the contents of more than one file into a new file.
For example, the text from myfile.txt and yourfile.txt gets combined into the new file ourfile.txt.


$ cat myfile.txt yourfile.txt >ourfile.txt

As seen above, stdout gets redirected and the new file is created with the contents of myfile.txt and yourfile.txt.

5. Display Line numbers

To display the contents of a file with a line number in front of each line, use the -n option.
$ cat -n ourfile.txt
1 This is the first line in myfile
2 This is the second line in myfile
3 This is the first line in your file
To number only nonempty lines, use the -b option as follows:

$ cat -b ourfile.txt
Note that lines containing only whitespace are not considered empty lines, so -b still numbers them.