
School of Information Technology and Engineering

Programme: MCA
Course Code: ITA 5006
Course Name: Distributed Operating Systems

SHARAN SHARMA 16MCA0132


MEENAKSHI SIDHU 16MCA0177
Design an Enhanced Scheduling Technique for Distributed
Operating System Environment

ABSTRACT

CPU scheduling is part of the broader resource-allocation problem and is probably the most studied problem in operating systems. Proper scheduling of processes provides better hardware utilization and speeds up the system. Distributed scheduling concentrates on global scheduling, since the structural design of the underlying system operates at a global scale. The presence of multiple processing nodes in distributed systems creates a challenging problem of scheduling processes onto processors. This paper recommends a metaheuristic optimization technique, Ant Colony Optimization (ACO), for task scheduling in a distributed operating system in an effectual way. The study draws on combinatorial optimization; examples of combinatorial optimization problems are the Travelling Salesman Problem (TSP), the Quadratic Assignment Problem (QAP), timetabling and scheduling problems. Complete algorithms are guaranteed to find an optimal solution in bounded time for every finite-size instance of a combinatorial optimization problem. Once the processes are scheduled in an optimized fashion, another problem that arises is load balancing across the processing nodes. ACO provides both proper scheduling and a load balancing mechanism within one combined optimization scheme.

INTRODUCTION

A process is defined as an entity which represents the basic unit of work to be implemented in the system. When a program is loaded into memory it becomes a process, which can be divided into four sections: stack, heap, text and data. The stack contains temporary data such as method/function parameters, return addresses and local variables. The heap is memory dynamically allocated to a process during its run time. Text includes the current activity, represented by the value of the Program Counter and the contents of the processor's registers. Data contains the global and static variables. When a process executes, it passes through different states, and these states may differ between operating systems. There are five states in general: Start, Ready, Running, Waiting and Terminated.

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing. The operating system maintains the following important process scheduling queues: the JOB QUEUE, the READY QUEUE and the DEVICE QUEUE.
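As a purely illustrative sketch (the class and attribute names below are ours, not drawn from any of the surveyed papers), the five generic process states and the scheduling queues described above can be modelled as follows:

```python
from collections import deque
from enum import Enum, auto


class ProcessState(Enum):
    """The five generic process states described above."""
    START = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()


class Process:
    def __init__(self, pid, burst_time):
        self.pid = pid
        self.burst_time = burst_time      # remaining CPU time needed
        self.state = ProcessState.START


class Scheduler:
    """Keeps the scheduling queues maintained by the operating system."""
    def __init__(self):
        self.job_queue = deque()     # all processes in the system
        self.ready_queue = deque()   # processes waiting for the CPU
        self.device_queue = deque()  # processes blocked on I/O

    def admit(self, process):
        process.state = ProcessState.READY
        self.job_queue.append(process)
        self.ready_queue.append(process)

    def dispatch(self):
        """Context switch: pick the next ready process (FCFS here)."""
        if not self.ready_queue:
            return None
        process = self.ready_queue.popleft()
        process.state = ProcessState.RUNNING
        return process
```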
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. They are of three types: the Long Term Scheduler, the Medium Term Scheduler and the Short Term Scheduler. A context switch is a method to store the status of the processor or CPU so that process execution can be resumed from the same point at a later stage; it enables multiple processes to share a single CPU. A process scheduling algorithm can be either preemptive or non-preemptive. Preemptive scheduling is based on priority: the scheduler can swap out a low-priority process in the running state in favour of a high-priority process. In non-preemptive scheduling, by contrast, a running process keeps the CPU until it terminates or blocks.

LITERATURE SURVEY

PAPER TITLE------- Task Scheduling in Large-scale Distributed Systems Utilizing Partial Reconfigurable Processing Elements

In this paper a design of a framework is proposed which simulates the performance of distributed system processors. The framework incorporates partial reconfigurable functionality into the reconfigurable nodes. Depending on the available reconfigurable area, each node can execute more than one task simultaneously; furthermore, the authors present a simple task scheduling algorithm to verify the functionality of the simulation framework. The proposed algorithm supports the scheduling of tasks on partially reconfigurable nodes. The simulation results are based on various experiments and provide a comparison between full (one node-one task mapping) and partial (one node-multiple tasks mapping) configuration of the nodes, for the same set of parameters in each simulation run. Advances in reconfigurable computing technology over the past decade have significantly raised interest. In the high-performance paradigm, some of the characteristics of reconfigurable hardware include configurability, functional flexibility, power efficiency, ease of use, extensibility (adding new functionality), (reasonably) high performance, hardware abstraction, and scalability (by adding more soft-cores). The work covers the design and implementation needed to incorporate partial reconfigurable functionality into the reconfigurable nodes of DReAMSim (the Dynamic Reconfigurable Autonomous Many-task Simulator), the design of efficient data structures to maintain the dynamic statuses of the nodes, and a task scheduling algorithm proposed and implemented to verify the functionality of the simulation framework.

PAPER TITLE------- Multi-criteria and satisfaction oriented scheduling for hybrid Distributed computing infrastructures

On hybrid infrastructures the task scheduling problem becomes more complex and challenging. In this paper, the authors present the design of a fault-tolerant and trust-aware scheduler, which allows Bag-of-Tasks applications to be executed on elastic and hybrid DCIs, following user-defined scheduling strategies. The approach, named the Promethee scheduler, combines a pull-based scheduler with the multi-criteria Promethee decision-making algorithm. Because multi-criteria scheduling leads to a multiplication of the possible scheduling strategies, they propose SOFT, a methodology that allows finding the optimal scheduling strategies given a set of application requirements. The first challenge concerns the design of the resource management middleware which allows the assemblage of hybrid DCIs. The second challenge is to design task scheduling that is capable of efficiently using hybrid DCIs and, in particular, that takes into account the differences between the infrastructures. The third challenge regards the design of a new scheduling approach that maximizes the satisfaction of both users and resource owners. The Promethee scheduler allows users to provide their own scheduling strategies in order to meet their applications' requirements by configuring the relative importance of each criterion.
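To make the idea of user-configurable multi-criteria scheduling concrete, the following sketch ranks candidate hosts with a plain weighted sum. It is a simplified stand-in, not the actual PROMETHEE outranking computation used by the Promethee scheduler, and the criteria names, weights and host values are hypothetical:

```python
# Simplified multi-criteria ranking of candidate hosts (illustrative only).
def rank_hosts(hosts, weights):
    """hosts: dicts with per-criterion scores normalised to [0, 1].
    weights: relative importance of each criterion, as set by the user."""
    def score(host):
        return sum(weights[c] * host[c] for c in weights)
    return sorted(hosts, key=score, reverse=True)


candidates = [
    {"name": "cloud-vm",   "trust": 0.9, "speed": 0.6, "cost": 0.3},
    {"name": "desktop-pc", "trust": 0.5, "speed": 0.4, "cost": 0.9},
    {"name": "grid-node",  "trust": 0.7, "speed": 0.8, "cost": 0.6},
]
user_weights = {"trust": 0.5, "speed": 0.3, "cost": 0.2}

for host in rank_hosts(candidates, user_weights):
    print(host["name"])
```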
PAPER TITLE------- A locality-aware job scheduling policy with distributed Semantic caches

The paper proposes distributed query scheduling policies that consider the dynamic contents of a distributed caching infrastructure and employ statistical prediction methods in the query scheduling policy, in order to maximize overall system throughput. Although many query scheduling policies exist, such as round-robin and load-monitoring, they are not sophisticated enough to both balance the load and leverage cached results. The authors therefore employ kernel density estimation derived from recent queries, together with the well-known exponential moving average (EMA), to predict the query distribution in a multi-dimensional problem space that changes dynamically. The problem is that many modern applications spend a large amount of execution time on I/O and manipulation of the data. The fundamental challenge in improving the performance of such data-intensive applications is managing massive amounts of data and reducing data movement and I/O. To reduce the I/O on large datasets, distributed data analysis frameworks place huge demands on cluster-wide memory, but the memory in a cluster is often not big enough to hold all the datasets, making in-memory computing impossible. However, caching facilities scale with the number of distributed servers, and leveraging the large distributed caches plays an important role in improving overall system throughput, as many large-scale systems are being built by connecting small machines. The proposed distributed query scheduling policies make query scheduling decisions by interpreting the queries as multidimensional points and clustering them, so that similar queries are placed together for a high cache-hit ratio.

PAPER TITLE------- Automatic workflow scheduling tuning for distributed processing systems

A composite application is executed as a large number of smaller applications, which perform specific tasks and communicate with the other parts of the composite application through signals and data transfer. For executing such workflows, the computational capacity of one computer is obviously not enough, so high-performance computational systems like Grid clusters and cloud environments are used for these purposes. Workflow execution in distributed systems is therefore a very important issue in workflow scheduling. In this paper a combined approach is developed for automatic parameter tuning and performance-model construction in the background of the WMS lifecycle: the automated process works in the background of the Grid platform and updates the database of models and parameters, which is used during the real workflow scheduling. For the performance-model construction, symbolic regression was used; the symbolic regression is performed by genetic programming and uses statistical data about package executions. Parameter tuning was performed by a hyper-heuristic genetic algorithm.

PAPER TITLE------- Optimal distributed task scheduling in volunteer clouds

Cloud users can transparently access virtually infinite resources with the same ease as using any other utility. Next to the cloud, the volunteer computing paradigm has gained attention in the last decade, where the spare resources on each personal machine are shared. Conversely, this poses complex challenges in managing such a large-scale environment, as the resources available on each node and the online presence of the nodes are not known in advance. The complexity further increases in the presence of tasks that have an associated Service Level Agreement specified, e.g., through a deadline. Distributed management solutions have therefore been advocated as the only approaches that are realistically applicable. Volunteer cloud computing is characterized by a large-scale, heterogeneous and dynamic environment. In this paper, a framework to allocate tasks according to different policies, defined by suitable optimization problems, is presented, and a distributed optimization approach relying on the Alternating Direction Method of Multipliers is then provided. In a real domain, a single policy could be driven by multiple goals.

PAPER TITLE------- Resource-aware hybrid scheduling algorithm in heterogeneous distributed computing

Cloud applications generate huge amounts of data that require gathering, processing and then aggregation in a fault-tolerant, reliable and secure heterogeneous distributed system created by a mixture of Cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems. The authors propose a resource-aware hybrid scheduling algorithm for different types of application: batch jobs and workflows. The proposed algorithm considers hierarchical clustering of the available resources into groups in the allocation phase. Task execution is performed in two phases: in the first, tasks are assigned to groups of resources, and in the second phase a classical scheduling algorithm is used within each group of resources.

PAPER TITLE------- Development of Scheduler for Real Time and Embedded System Domain

Various process scheduling algorithms for real-time and embedded systems are in use today. They are categorized into preemptive and non-preemptive; preemptive algorithms have better efficiency. In this work, preemptive algorithms are taken and compared in order to design a scheduler for a real-time Linux platform with better efficiency. The positive aspects of each preemptive algorithm are taken to create a scheduling algorithm with better efficiency. The proposed process is implemented in C or C++. The focus is to study the performance and mechanism of each scheduling algorithm and to create a better-performing scheduling algorithm from them. For that, all sorts of variations are made to the basic mechanisms to achieve a sophisticated algorithm with effective results.
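The claim that preemptive algorithms achieve better efficiency can be illustrated with a small, self-contained comparison of non-preemptive FCFS against preemptive shortest-remaining-time-first (SRTF) on a made-up workload; the workload and helper names below are ours, not taken from the surveyed paper:

```python
# Illustrative comparison of non-preemptive FCFS and preemptive SRTF.
def fcfs_waiting_times(jobs):
    """jobs: list of (arrival, burst), already sorted by arrival time."""
    clock, waits = 0, []
    for arrival, burst in jobs:
        clock = max(clock, arrival)
        waits.append(clock - arrival)
        clock += burst
    return waits


def srtf_waiting_times(jobs):
    remaining = [burst for _, burst in jobs]
    finish = [None] * len(jobs)
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (arrival, _) in enumerate(jobs)
                 if arrival <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        current = min(ready, key=lambda j: remaining[j])  # shortest remaining first
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            finish[current] = clock
    # waiting time = finish - arrival - burst
    return [finish[i] - jobs[i][0] - jobs[i][1] for i in range(len(jobs))]


workload = [(0, 8), (1, 4), (2, 2)]          # (arrival, burst) pairs
for name, waits in [("FCFS", fcfs_waiting_times(workload)),
                    ("SRTF", srtf_waiting_times(workload))]:
    print(name, "average waiting time:", sum(waits) / len(waits))
```

On this workload the preemptive policy roughly halves the average waiting time, which is the kind of gain the surveyed work builds on.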

PAPER TITLE-------A simulator based performance analysis of Multilevel Feedback Queue Scheduling

Multilevel Feedback Queue (MFQ) scheduling permits processes to switch between queues depending on their burst time. Different queues may have different scheduling policies (for example RR, Shortest Job First (SJF) or First Come First Serve (FCFS)). In the first queue, Round Robin scheduling is used. Processes in the ready state enter the first queue, and a process shifts to the next queue when its burst time is larger than the time quantum. The time quantum can be generated dynamically by the simulator or entered statically by the user. The simulated MFQ program can be used to demonstrate that using RR in the first queue and applying SJF to the rest of the queues may enhance CPU usage; a dynamic time quantum is used here. To run the simulator, a set of processes is entered with arrival and burst times. From this experiment the authors conclude that the time quantum plays a critical part in process scheduling and that it is more efficient when determined dynamically. Combinations of scheduling policies are used to conduct the experiment, but the SJF+RR combination is best suited to decrease the average waiting time and turnaround time. A more efficient MFQ is produced with this simulator.

PAPER TITLE---------Design and Implementation of a Process Scheduler Simulator and an Improved Process Scheduling Algorithm for Multimedia Operating System

A simulator is designed here to concentrate on evaluating the suitability and performance of different scheduling algorithms for a Multimedia Operating System (MMOS). It takes generic algorithms and sets them effectively in the current scenario to measure their characteristics. Many standard algorithms have been taken into account. A new algorithm is designed and executed to enhance the performance of an MMOS with mixed task traffic. Samples of 20 distinctive task traffics are taken, all the previous algorithms are run on them, and their performance measurements are computed. Five standard algorithms, viz. Round Robin, FCFS, MLFS, SJF and Earliest Deadline First (EDF), were implemented alongside the proposed modification to EDF. Parameters like deadlines missed, total context switches, average waiting time and average turnaround time were considered. Among the standard algorithms, EDF has the best performance as it prevents missing of deadlines, which generally lowers performance. The experiment conducted here shows that the proposed algorithm is better than MLFS and EDF, so the proposed work is an upgrade of EDF.
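A minimal sketch of the multilevel feedback queue arrangement discussed above, with Round Robin in the top queue and SJF in the lower queue, is given below. It is illustrative only: the quantum is fixed and all jobs arrive at time zero, whereas the surveyed simulator derives the quantum dynamically:

```python
# Two-level feedback queue: RR with a time quantum on top, SJF below.
from collections import deque


def mfq_schedule(bursts, quantum=3):
    """bursts: {pid: burst_time}. Returns the order of (pid, slice) executions."""
    top = deque(bursts.items())          # level 0: Round Robin
    lower = []                           # level 1: Shortest Job First
    trace = []

    while top:                           # one RR pass over the top queue
        pid, remaining = top.popleft()
        run = min(quantum, remaining)
        trace.append((pid, run))
        if remaining > run:              # burst exceeds the quantum: demote
            lower.append((pid, remaining - run))

    for pid, remaining in sorted(lower, key=lambda job: job[1]):  # SJF order
        trace.append((pid, remaining))
    return trace


print(mfq_schedule({"P1": 7, "P2": 3, "P3": 10}))
# e.g. [('P1', 3), ('P2', 3), ('P3', 3), ('P1', 4), ('P3', 7)]
```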
PAPER TITLE ---------- Approximate Analysis of Reader-Writer Queues

The paper examines the performance of queues that serve readers and writers, where readers can be served concurrently whereas a writer requires exclusive service. The authors investigate a first-come-first-serve (FCFS) reader-writer queue and find formulae for waiting time and capacity under Poisson arrivals and exponential service. Further investigation is carried out to handle a one-writer queue and a queue that has writer locks. The aim is to present the results as a guideline for designing real systems. The authors give an exact model of FCFS reader-writer queue performance and show formulae to predict the utilization of the shared resource and the waiting times for read and write locks. The formulae are simple enough to be used as a general guideline by a designer. They also examine one-writer queues and update locks, predict lock waiting times, and analyze the capacity of the shared resource.

PAPER TITLE-------Good Processor Management = Fast Allocation + Efficient Scheduling

A multi-user multicomputer OS depends mainly on job scheduling algorithms and efficient processor management for its performance. One technique, called group scheduling, arranges jobs in such a way that jobs belonging to one group do not block each other. Here FCFS is used to schedule the groups so that starvation is avoided; it also reduces the response time by minimizing the waiting queue for the jobs in the same group. In the other technique, novel processor management is used that fulfils the demands of mesh-connected multicomputers: a stack-based algorithm that works on spatial subtraction and coordinate calculations to allocate free space for a job in the mesh. Both techniques together provide better and more efficient service, as both the job scheduling and the allocation algorithm play a vital role in system outcome. In the proposed paper, both techniques are used simultaneously: first, the stack-based allocation algorithm is used to allocate free space to a job quickly; second, group scheduling is used to categorize jobs into groups. Results show that the stack-based allocation algorithm outperforms all the other approaches as far as allocation overhead is concerned, and group scheduling works better for all group sizes and reduces response time efficiently compared to the FCFS technique. So, both policies together give faster and more efficient performance.

PAPER TITLE--------------- An Impact of cross over operator on the performance of Genetic algorithm under operating system process scheduling problem

The OS scheduling problem is considered NP-hard, and the genetic algorithm is regarded as a metaheuristic optimization tool, so the use of genetic algorithms for OS process scheduling problems is described here. The key to a genetic algorithm lies in its operators, for example mutation, transformation, inversion etc. The priority is to make the genetic algorithm adaptive and flexible. Various crossover operators are used with constant crossover and mutation rates. The convergence, flexibility and performance of the genetic algorithm depend on the crossover operator chosen. After simulation, the genetic algorithm works efficiently for the proposed problem. The authors find that, under the characterized parameter setting, the crossover approach for reaching the global maximum differs from that for a local maximum. Order-based crossover takes more iterations than its global-maximum approach, and since the average waiting time should be low, 1164.73 is the minimum estimate obtained for the order-based crossover genetic algorithm. It performs well under these conditions in comparison to the other operators.

PAPER TITLE-----------The Multi-Objective Assembly Line Worker Integration and Balancing Problem of Type-2

Traditional assembly line balancing (ALB) research focuses on the simple assembly line balancing problem (SALBP) initially defined by Baybars (1986) through several well-known simplifying hypotheses. This classical single-model problem consists of finding the best feasible assignment of tasks to stations so that precedence constraints are fulfilled. Two basic versions of this problem are called type-1, in which the cycle time, c, is given and the aim is to minimize the number of needed workstations, and type-2, used when there is a given number of workstations, m, and the goal is to minimize the cycle time (Scholl, 1999). We are particularly interested in another variant, in which heterogeneity is more pronounced, configuring the so-called assembly line worker assignment and balancing problem (ALWABP) (Miralles et al., 2007). In this problem, inspired by assembly lines in sheltered work centers for the disabled (SWDs), workers are highly heterogeneous. Indeed, not only might each worker have a specific processing time for each task, but each worker also has a set of tasks that they cannot execute, called incompatible tasks.

PAPER TITLE----------Novel Scheduling Algorithm for Uni-Processor Operating System

In this work the researchers focus on the design and development of a new scheduling algorithm for multiprogramming operating systems from the viewpoint of optimization. They developed a tool which gives output in the form of experimental results with respect to some standard scheduling algorithms, e.g. First Come First Serve, Shortest Job First, Round Robin etc. Efficient resource utilization is achieved by sharing system resources among multiple users and system processes. Optimum resource sharing depends on the efficient scheduling of competing users and system processes for the processor, which renders process scheduling an important aspect of a multiprogramming operating system. Part of the reason for using multiprogramming is that the operating system itself is implemented as one or more processes, so there must be a way for the operating system and application processes to share the CPU. Another main reason is the need for processes to perform I/O operations in the normal course of computation. Since I/O operations ordinarily require orders of magnitude more time to complete than CPU instructions, multiprogramming systems allocate the CPU to another process whenever a process invokes an I/O operation.

RR has the problem of a high average waiting time, SRTF causes starvation of longer jobs, and although Highest Response Ratio Next is useful in avoiding the problems of RR and SRTF, it fails in terms of responsiveness due to its non-preemptive mode. So the authors propose an algorithm which tries to minimize the average waiting time and the starvation of longer jobs, and which increases responsiveness due to its preemptive nature. For the first job the time given (TG) is 2*TQ, which is useful since many processes may arrive in that time, and this helps in making effective decisions ahead as there are many processes to choose from. The authors also analyse the proposed algorithm on scheduling factors: there are many factors, like waiting time and turnaround time (TAT), against which the performance of a scheduling algorithm can be checked. To test the performance, two parameters are used: average WT and average TAT. Average WT is described in queueing theory as follows. Queueing theory is the mathematical study of waiting lines, or queues. The theory enables mathematical analysis of several related processes, including arriving at the (back of the) queue, waiting in the queue (essentially a storage process), and being served at the front of the queue. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, the expected number waiting or receiving service, and the probability of encountering the system in certain states, such as empty, full, having an available server or having to wait a certain time to be served.

PAPER TITLE ---------------- Distributed Process Scheduling Using Genetic Algorithm

Two fundamental concepts of a Distributed Operating System are time sharing and resource sharing. In a distributed system environment, scheduling holds a vital role in performance and throughput. Scheduling is recognized as an NP-Complete problem even in the best conditions. So here a model is proposed to achieve an ideal algorithm which requires minimum execution time and enhances processor efficiency. The issue that arises here is combinatorial optimization, which can be solved by a genetic algorithm. So, a genetic-algorithm-based system has been introduced and assessed to overcome these issues and get the desired results. In this system, multiple conditions are considered to minimize peak load and cost, to get maximum CPU utilization and to achieve the best possible solution.

PAPER TITLE --------------Organization Based Intelligent Process Scheduling Algorithm (OIPSA)

In a distributed OS, the main aim of a scheduling algorithm is to divide CPU time in such a manner that
maximum utilization and efficiency could be
extracted. One way to achieve that is to set priorities manually, taking some basic scheduling rules into account irrespective of the preferences of processes. Generally, some sets of tasks are performed repeatedly by companies, so rather than using pre-defined design rules, priority should be given based on a job's priority or activeness. For that, a novel algorithm comes into the picture: it schedules jobs/processes according to need, not according to pre-defined rules. OIPSA (Organization Based Intelligent Process Scheduling Algorithm) studies the processes and gives the highest priority to frequently used processes. The proposed algorithm arranges every process into a high, medium or low priority pool and then orders them by preference. Due to this, the response time, waiting time and turnaround time decrease visibly, and the efficiency of the whole system is enhanced when compared to the fundamental scheduling algorithms. It schedules according to the needs of the organization. At the start, OIPSA's performance is only theoretical, but once it becomes familiar with the company's preferences it yields better results. In future, user preferences will also be considered by this algorithm, which will help the system work more efficiently.

PAPER TITLE-----------Semi-online hierarchical load balancing problem with bounded processing times

In this paper, the authors consider the online hierarchical scheduling problem on two parallel machines, with the goal of maximizing the minimum machine load. Since no competitive algorithm exists for this problem, they consider the semi-online version with bounded processing times, in which the processing times are bounded within a given interval. They show that no algorithm can achieve a competitive ratio below a certain bound and introduce an optimal algorithm. The hierarchical scheduling problem on m parallel machines has been widely studied. In general, the problem can be described as follows. We are given machines distinguished by different hierarchies. A sequence of jobs arrives one by one over a list, and every job has a positive processing time and a hierarchy. Jobs must be scheduled irrevocably at the time of their arrival, and every job must be processed on a subset of the machines. The common objective is to minimize the maximum load over all machines. Here we are given two machines and a sequence of jobs arriving online which are to be scheduled irrevocably at the time of their arrival. The first machine can process all of the jobs, while the second one can process only part of the jobs. The arrival of a new job occurs only after the current job has been scheduled. Let {J1, ..., Jn} be the set of all jobs arranged in order of arrival. We denote every job as Ji = (pi, gi), where pi > 0 is the processing time (also called the job size) of job Ji and gi ∈ {1, 2} is the hierarchy of job Ji: gi = 1 if job Ji must be processed by the first machine, and gi = 2 if it can be processed by both machines. pi and gi are not known until the arrival of job Ji. In this paper, the semi-online version of the hierarchical scheduling problem on two parallel machines is considered with the objective of maximizing the minimum machine load. When the processing times are bounded within an interval, a lower bound on the competitive ratio of any online algorithm is demonstrated and an algorithm which is shown to be optimal is introduced. If, in addition, the sum of all jobs' processing times (i.e., the total processing time) is known, a lower bound for that case is demonstrated and an optimal algorithm is likewise presented. However, even though the second result improves the corresponding result in [1], this is to be expected due to the additional restriction on the processing-time interval. In general, if there are m (m > 2) parallel machines, the authors believe that a competitive algorithm likewise exists; finding the lower bound of the competitive ratio and designing an optimal online algorithm for this case is left as future research.

PAPER TITLE-------------Towards Reducing Energy Consumption using Inter Process Scheduling in Preemptive Multitasking OS

Current proportional schedulers do not handle greedy multithreaded processes well, and it consumes a lot of energy to simulate a multithreaded scheduler. To reduce that, the authors propose a better solution for when multithreaded processes run in the system continuously. A novel proportional sharing scheduler is used to manage the running threads of the same process and adjust their weights. For that, TWRS (Thread Weight Readjustment Scheduler) is used, which
reduces the number of context switches. TWRS distributes CPU time to threads in proportion to their new weights and pre-allocates some CPU time to every thread. Context switches are wasted time, as the system does no useful work while switching, so energy can be saved by minimizing them. TWRS gives a practical approach for multitasking OSs as it operates with existing kernels. The authors propose and explore the efficiency of TWRS, which is a kind of proportional sharing scheduler for multitasking systems. The main aim is to allocate more CPU time to processes with more threads while the scheduler simultaneously tries to prevent greedy processes from consuming extra CPU time, so that deadlock can be prevented. TWRS is implemented in Linux 2.6.24-1, which carries a prominent scheduler design, the Completely Fair Scheduler (CFS). The proposed scheduler is a modification of the way Linux runs multithreaded services. The work demonstrates that TWRS reduces energy consumption.

PAPER TITLE ----------- FPS: A Fair-progress Process Scheduling Policy on Shared-Memory Multiprocessors

Competition for shared memory resources on multiprocessors is the predominant reason for slowing down applications and making their performance suffer, so it is necessary to have Quality of Service (QoS) on such systems. The authors propose a Fair Progress Scheduling (FPS) strategy to enhance system performance. In this approach, when a slowdown occurs, equally weighted processes in the running state should suffer equally; when an application has endured more slowdown, it is assigned more CPU time so that it can work efficiently. The strategy can likewise be applied to threads with different weights. The fair progress scheduling (FPS) algorithm is proposed to give better performance on shared-memory multiprocessors. The main principle is to give applications the same amount of CPU time, and more if an application did less effective work than others because it suffered a greater slowdown. The challenge when computing progress at runtime is to estimate the run-alone performance of each executed quantum while the application is actually running with others. The solution is to group the execution quanta into phases, building on the low-contention pre-scheduled processes, and then extend the performance data to other quanta that belong to a similar phase to find out their progress. The evaluation shows that, with slightly less throughput, system behaviour can be improved, and the throughput issue can be overcome at later stages. Performance information about processes can be recorded to guide the scheduler in the future; FPS uses such records so that fairness can be kept up without the overhead of the training periods it would otherwise require, and throughput can be improved.

PAPER TITLE----------Analysis of CPU scheduling policies through simulation

CPU scheduling is an area of research where computer scientists design efficient algorithms for scheduling processes to obtain optimum turnaround time and average waiting time. CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. According to this paper, Terry Regner & Craig Lacey have introduced the concepts and fundamentals of the structure and functionality of operating systems. The purpose of the article was to analyze different scheduling algorithms in a simulated system. Process scheduling algorithms are used to ensure that the components of the system are able to maximize their utilization and to complete all the processes assigned in a specified period. In some applications, the SJF scheduling algorithm is more suitable than a priority scheduling (PrS) algorithm since it provides less waiting time and less turnaround time. In real-time applications, a PrS algorithm must be used to deal with different priorities, since each task has a priority order. Multimedia applications have unique requirements that must be met by network and operating system components. In any multimedia application we may have several processes running dependently on one another; multimedia is a real-time application, and in the context of multimedia applications the CPU scheduler determines the quality of service rendered. The more CPU cycles scheduled to a process, the faster data can be produced, which results in better-quality, more reliable output. The authors of this paper design a simulator using VB 6.0. The simulator analyses four CPU scheduling policies; CPU scheduling is a most important function and critical part of an operating system. There are several policies of process allocation, such as FCFS, SJF, RRS and PBS. The simulator is designed to evaluate the process scheduling strategies by considering randomly generated reference processes.
The main objective of this research work is to analyze
various policies of CPU scheduling. The foremost
criterion for the evaluation of CPU scheduling is the waiting time and burst time of the processes that are produced by each policy under the same set of conditions and workload; the workload here is the size of the memory allocated to each arriving process. A simulator has been designed to study the behaviour pattern of the different policies under similar conditions for a set of requested processes, which are generated randomly.

PAPER TITLE--------------Optimized Round Robin CPU Scheduling Algorithm

One of the fundamental functions of an operating system is scheduling. There are two types of uni-processor operating systems in general: uni-programming and multi-programming. A uni-programming operating system executes only a single job at a time, while a multiprogramming operating system can execute multiple jobs concurrently. Resource utilization is the basic aim of a multiprogramming operating system. Scheduling is the heart of any computer system, since it involves deciding how to give resources to the competing processes; sharing of computer resources between multiple processes is also called scheduling. The CPU is one of the primary computer resources, so its scheduling is essential to an operating system's design. Efficient resource utilization is achieved by sharing system resources amongst multiple users and system processes. Optimum resource sharing depends on the efficient scheduling of competing users and system processes for the processor, which renders process scheduling an important aspect of a multiprogramming operating system. As the processor is the most important resource, process scheduling, which is then called CPU scheduling, becomes all the more important in achieving the above-mentioned objectives.

PAPER TITLE----------Survey of load balancing techniques for Grid

A Grid is a computing and data management infrastructure that provides the electronic underpinning for a global society in business, government, research, science and entertainment. A computational Grid constitutes the software and hardware infrastructure that provides dependable, consistent, pervasive and inexpensive access to high-end computational capabilities. The Grid integrates networking, communication, computation and information to provide a virtual platform for computation and data management, in the same way that the Internet integrates resources to form a virtual platform for information. Lately, because of rapid technological progress, Grid computing has become an important area of research. Grid computing has emerged as a new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications and, in some cases, high-performance orientation. A Grid is a network of computational resources that may potentially span many continents, and it serves as a comprehensive and complete framework for organizations by which the maximum utilization of resources is achieved. Load balancing is a procedure which involves resource management and an effective load distribution among the resources; accordingly, it is considered to be important in Grid systems. The proposed work presents a broad review of the existing load balancing techniques proposed so far. These techniques are applicable to different systems depending on the needs of the computational Grid, the type of environment, the resources, the virtual organizations and the job profile it is supposed to work with. Each of these models has its own merits and faults, which form the topic of this study. A detailed classification of the various load balancing techniques based on different parameters has also been incorporated into the review. The static load-balancing algorithms assume that the information governing load-balancing decisions, which includes the characteristics of the jobs, the computing nodes and the communication networks, is known in advance; the load-balancing decisions are made deterministically or probabilistically at compile time and remain constant during run time. However, this is the drawback of the static algorithms. In contrast, the dynamic load-balancing algorithms attempt to use runtime state information to make more informative load-balancing decisions.
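The distinction between static and dynamic load balancing drawn above can be illustrated with the following sketch, in which the node names and load values are hypothetical:

```python
# Illustrative contrast between static and dynamic load-balancing decisions.
import random


def static_assign(job_id, nodes):
    """Static: the decision uses only information fixed in advance
    (here, a hash of the job id), so it never reacts to runtime load."""
    return nodes[hash(job_id) % len(nodes)]


def dynamic_assign(job_id, runtime_load):
    """Dynamic: the decision consults the current load of each node
    and picks the least-loaded one."""
    return min(runtime_load, key=runtime_load.get)


nodes = ["node-a", "node-b", "node-c"]
runtime_load = {n: random.randint(0, 10) for n in nodes}  # observed at run time

print("static :", static_assign("job-42", nodes))
print("dynamic:", dynamic_assign("job-42", runtime_load), runtime_load)
```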
ADVANTAGES AND DISADVANTAGES

1. Task Scheduling in Large-scale Distributed Systems Utilizing Partial Reconfigurable Processing Elements (IEEE 2012).
   Advantages: Average wasted area per task is less; more than one application can execute simultaneously on a system node.
   Disadvantages: Need for a good load balancing manager.

2. Multi-criteria and satisfaction oriented scheduling for hybrid Distributed computing infrastructures.
   Advantages: Fault-tolerant and trust-aware scheduler; user satisfaction while maximizing the infrastructure.
   Disadvantages: Need for scheduling strategies (filtering methods).

3. EM-KDE - A locality-aware job scheduling policy with Distributed Semantic caches.
   Advantages: Query workloads are balanced and cached results are reused; less load on servers.

4. Automatic workflow scheduling tuning for distributed processing systems.
   Advantages: Efficiency of scheduling increases.
   Disadvantages: The algorithm uses a higher number of iterations, which is dependent on CPU cores and RAM.

5. Optimal distributed task scheduling in volunteer clouds.
   Advantages: Transparent access to virtually infinite resources; can be set up on data centers, personal devices or both.
   Disadvantages: Limited resource capacity on a single machine.

6. Resource-aware hybrid scheduling algorithm in heterogeneous distributed computing.
   Advantages: Scalability; fast execution time.
   Disadvantages: Need of a dynamic.

7. A simulator based performance analysis of Multilevel Feedback Queue Scheduling -- Dr. Sanjay K. Dwivedi, Ritesh Gupta.
   Advantages: A process that waits too long in a lower-priority queue may be moved to a higher-priority queue; every process gets an equal share of the CPU; a newly created process is added to the end of the ready queue.
   Disadvantages: Moving processes between queues produces more CPU overhead; larger waiting and response time; low throughput; context switches occur.

8. Development of Scheduler for Real Time and Embedded System Domain -- M.V. Panduranga Rao, K.C. Shet, R. Balakrishna, K. Roopa.
   Advantages: Less starvation; throughput is high; overhead on the processor is low; good response time for short processes.
   Disadvantages: Elapsed time must be recorded, which results in overhead on the processor; starvation for longer processes.

9. Design and Implementation of a Process Scheduler Simulator and an Improved Process Scheduling Algorithm for Multimedia Operating System -- Prabhat K. Saraswat, Prasoon Gupta.
   Advantages: Minimum average waiting time; throughput is high; provably optimal with respect to average turnaround time.
   Disadvantages: Cannot predict the length of the next burst; requires future knowledge; can lead to starvation; does not minimize average turnaround time.

10. Good Processor Management = Fast Allocation + Efficient Scheduling -- Byung S. Yoo, Chita R. Das.
    Advantages: Prevents starvation; First Come First Served; easy to implement.
    Disadvantages: The scheduling method is non-preemptive, that is, a process will run until it finishes; because of this, short processes at the back of the queue must wait for the long process at the front to finish.

11. Optimized Round Robin CPU Scheduling Algorithm -- P. Suresh Varma.
    Advantages: It shows the comparison between the new and the existing system through an implemented system.
    Disadvantages: An executing process may access the same resource before another preempted process has finished using it.

12. Analysis of CPU scheduling policies through simulation -- Ashok Kumar, Harish Rohil, Suraj Arya.
    Advantages: Optimization of CPU scheduling can be done; simulators are used for the scheduling policies; in future, more simulation techniques can be used.

13. Novel Scheduling Algorithm for Uni-Processor Operating System -- Sukumar Babu Bandarupalli, Neelima Priyanka Nutulapati, P. Suresh Varma.
    Advantages: It helps to improve the CPU efficiency in a real-time uni-processor multiprogramming operating system; CPU scheduling is the basis of a multi-programmed operating system; the scheduler is responsible for multiplexing processes on the CPU.

14. Survey of load balancing techniques for Grid -- Devashree Tripathy, C.R. Tripathy.
    Advantages: The algorithm, research focus, contribution, features, compared model, performance metrics, improvement, gap and future work of each load balancing technique have been analyzed and presented.
    Disadvantages: Average response time is higher.

15. Semi-online hierarchical load balancing problem with bounded processing times -- Taibo Luo, Yinfeng Xu.
    Advantages: This work can be extended to develop a new algorithm that modifies the dynamic decentralized approach to reduce the communication overhead as well as the migration time, and also to make it scalable.
    Disadvantages: The communication overhead and load balancing time depend upon the approach selected in the algorithm; the load balancing algorithm for the clusters can be made more robust by scheduling all jobs irrespective of any constraints so as to balance the load perfectly.

16. New Algorithms for Load Balancing in Peer-to-Peer Systems.
    Advantages: The protocol eliminates the necessity of virtual nodes while maintaining a balanced load, improving on related protocols; the scheme allows for the deletion of nodes and admits a simpler analysis, since the assignments do not depend on the history of the network.
    Disadvantages: Complex query data structures are likely to impose some structure on how items are assigned to nodes, and this structure must be maintained by the load balancing algorithm.
Implementation

PROPOSED ALGORITHM

ANT COLONY OPTIMIZATION
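As an illustrative sketch only (the parameter values, function names and the makespan objective below are our own choices, not the exact algorithm proposed in this paper), an ACO-style assignment of tasks to processing nodes that favours low, balanced load can be organised as follows:

```python
# ACO-style task-to-node assignment sketch: ants build assignments guided by
# pheromone and a load/speed heuristic; pheromone is reinforced on the best
# (lowest-makespan) assignment found in each iteration.
import random

ALPHA, BETA, RHO, ANTS, ITERATIONS = 1.0, 2.0, 0.5, 10, 50


def aco_schedule(task_sizes, node_speeds):
    n_tasks, n_nodes = len(task_sizes), len(node_speeds)
    pheromone = [[1.0] * n_nodes for _ in range(n_tasks)]
    best_plan, best_makespan = None, float("inf")

    for _ in range(ITERATIONS):
        plans = []
        for _ in range(ANTS):
            load = [0.0] * n_nodes
            plan = []
            for t in range(n_tasks):
                # Desirability: pheromone ** ALPHA times a heuristic that
                # prefers fast nodes with little accumulated load.
                weights = [
                    (pheromone[t][m] ** ALPHA) *
                    ((node_speeds[m] / (1.0 + load[m])) ** BETA)
                    for m in range(n_nodes)
                ]
                m = random.choices(range(n_nodes), weights=weights)[0]
                plan.append(m)
                load[m] += task_sizes[t] / node_speeds[m]
            plans.append((max(load), plan))

        # Evaporate, then deposit pheromone in proportion to solution quality.
        iter_makespan, iter_plan = min(plans, key=lambda p: p[0])
        for t in range(n_tasks):
            for m in range(n_nodes):
                pheromone[t][m] *= (1.0 - RHO)
            pheromone[t][iter_plan[t]] += 1.0 / iter_makespan

        if iter_makespan < best_makespan:
            best_makespan, best_plan = iter_makespan, iter_plan
    return best_plan, best_makespan


print(aco_schedule(task_sizes=[4, 2, 7, 5, 1], node_speeds=[1.0, 2.0, 1.5]))
```

Keeping the per-node accumulated load in the heuristic term is what couples the scheduling decision to load balancing, which is the combination the abstract attributes to the ACO approach.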
Conclusions

Ant Colony Optimization has been, and continues to be, a productive paradigm for designing effective combinatorial optimization algorithms. After more than ten years of study, both its practical effectiveness and its theoretical grounding have grown, which makes ACO one of the most successful paradigms in the metaheuristic field. This outline proposes process scheduling and load balancing in the distributed system on that basis. This paper recommends a metaheuristic optimization technique, Ant Colony Optimization (ACO), for task scheduling of a Distributed Operating System in an effectual way.

REFERENCES
[1] Dr. Sanjay K. Dwivedi, Ritesh Gupta, A simulator based performance analysis of Multilevel Feedback Queue Scheduling, 5th International Conference on Computer and Communication Technology, 2014.
[2] M.V. Panduranga Rao, K.C. Shet, R. Balakrishna, K. Roopa, Development of Scheduler for Real Time and Embedded System Domain, 22nd International Conference on Advanced Information Networking and Applications - Workshops.
[3] Byung S. Yoo and Chita R. Das, Department of Computer Science and Engineering The Pennsylvania
State University, Good Processor Management = Fast Allocation + Efficient Scheduling.
[4] Prabhat K. Saraswat, Prasoon Gupta, Dhirubhai Ambani Institute of Information and Communication
Technology, Design and Implementation of a Process Scheduler Simulator and an Improved Process
Scheduling Algorithm for Multimedia Operating Systems.
[5] Theodore Johnson, Member, IEEE, Approximate Analysis of Reader Writer Queues, IEEE
TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 21, NO. 3, MARCH 1995.
[6] Rajiv Kumar, Sanjeev Gill, Ashwani Kaushik, An Impact of cross over operator on the performance of
Genetic algorithm under operating system process scheduling problem, International Conference on
Communication Systems and Network Technologies, 2011.
[7] Munam Ali Shah, Muhammad Bilal Shahid, Sijing Zhang, Safi Mustafa, Mushahid Hussain,
Organization Based Intelligent Process Scheduling Algorithm (OIPSA), Proceedings of the 21st
International Conference on Automation & Computing, University of Strathclyde, Glasgow, UK, 11-12
September 2015.
[8] Ranjeet Singh, Santosh Kumar Gupta, Distributed Process Scheduling Using Genetic Algorithm.
[9] Chenggang Wu, Jin Li, Di Xu, Pen-Chung Yew, Fellow IEEE, Jianjun Li, and Zhenjiang Wang, FPS:
A Fair-progress Process Scheduling Policy on Shared-Memory Multiprocessors, IEEE TRANSACTION.
[10] Samih M. Mostafa, Shigeru KUSAKABE, Towards Reducing Energy Consumption using
InterProcess Scheduling in Preemptive Multitasking OS.
