
PARROT OS

REPORT - I
INTRODUCTION :
Parrot Linux is a Linux distribution based on Debian with a
focus on computer security. It is designed for penetration
testing, vulnerability assessment and mitigation, computer
forensics and anonymous web browsing. It is developed by the
Frozenbox team.
Goals of Parrot :
ParrotSec is intended to provide a suite of penetration testing
tools to be used for attack mitigation, security research,
forensics, and vulnerability assessment.

Editions in Parrot OS
1. Parrot Home
2. Parrot Studio
3. Parrot ARM

Parrot Studio :
Designed for students, producers, video editors and all related professional multimedia creators. This edition's goal is to provide a reliable workstation for multi-purpose computing.
Parrot Home :
The distribution has the same look and feel as a regular Parrot environment and includes all the basic programs for daily work. Parrot Home also includes programs to chat privately with people, encrypt documents with the highest cryptographic standards, or surf the net in a completely anonymous and secure way. The system can also be used as a starting point to build a highly customized pentesting platform with only the tools you need, or as a professional workstation that takes advantage of the latest and most powerful technologies of Debian without hassle.

Is Parrot OS open source, closed source, or a combination of both?
It is open source. Open-source software (OSS) is a type of computer software whose source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose.

Use of the mode bit provided by processors in PARROT :
A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user.
Now, if it is a multiprocessor system, suppose a process executes a system call and changes the mode bit from 1 to 0. Some other processes might be running in user mode in parallel, since it is a multiprocessor system, yet a single shared mode bit would indicate kernel mode, causing an inconsistency; this is why the mode bit is maintained per processor.

Structure of the PCB in PARROT :


While creating a process, the operating system performs several operations. It must identify each process, so it assigns a process identification number (PID) to each one. As the operating system supports multiprogramming, it needs to keep track of all processes. For this task, the process control block (PCB) is used to track each process's execution status. Each block of memory contains information about the process state, program counter, stack pointer, status of opened files, scheduling information, etc. All this information must be saved when the process is switched from one state to another. When the process makes a transition from one state to another, the operating system must update the information in the process's PCB.
A process control block (PCB) contains information about the process, i.e. registers, quantum, priority, etc. The process table is an array of PCBs; that is, it logically contains a PCB for every current process in the system.

● Pointer – A stack pointer that must be saved when the process is switched from one state to another in order to retain the current position of the process.
● Process state – Stores the current state of the process.
● Process number – Every process is assigned a unique ID, known as the process ID or PID, which is stored here.
● Program counter – Stores the address of the next instruction to be executed for the process.
● Registers – The CPU registers, which include the accumulator, base registers, and general-purpose registers.
● Memory limits – Contains information about the memory-management system used by the operating system. This may include the page tables, segment tables, etc.
● Open files list – The list of files opened by the process.
● Accounting information – Information about the amount of CPU used, time constraints, job or process numbers, etc.
The process control block also stores the register contents, known as the execution context of the processor, saved when the process was blocked from running. This saved execution context enables the operating system to restore a process's execution context when the process returns to the running state. When the process makes a transition from one state to another, the operating system updates the information in the process's PCB. The operating system maintains pointers to each process's PCB in a process table so that it can access the PCB quickly.
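As an illustration, the PCB fields listed above can be pictured as a C structure. The following is only a minimal sketch; the field names, types and sizes are illustrative and do not correspond to any real kernel's process structure.

#include <stdint.h>

#define MAX_OPEN_FILES 16

enum proc_state { CREATED, READY, RUNNING, BLOCKED, TERMINATED };

/* Illustrative process control block; not taken from a real kernel. */
struct pcb {
    int             pid;                 /* process number (PID)                  */
    enum proc_state state;               /* current process state                 */
    uintptr_t       program_counter;     /* address of the next instruction       */
    uintptr_t       stack_pointer;       /* saved stack pointer                   */
    uintptr_t       registers[16];       /* saved general-purpose registers       */
    void           *page_table;          /* memory-management information         */
    int             open_files[MAX_OPEN_FILES]; /* open files list                */
    int             priority;            /* scheduling information                */
    unsigned long   cpu_time_used;       /* accounting information                */
    struct pcb     *next;                /* link used by the process table/queues */
};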

Process states :
The states that a process enters while working from start till end are known as process states. They are listed below:
● Created – The process is newly created by a system call and is not yet ready to run.
● User running – The process is running in user mode, i.e. it is executing user code.
● Kernel running – The process is running in kernel mode.
● Zombie – The process has terminated but has not yet been cleaned up.
● Preempted – The process is returning from kernel mode to user mode, but the kernel preempts it and schedules another process.
● Ready to run in memory – The process is in memory and is waiting for the kernel to schedule it.
● Ready to run, swapped – The process is ready to run but no free main memory is available.
● Sleep, swapped – The process has been swapped to secondary storage and is in a blocked state.
● Asleep in memory – The process is in memory (not swapped to secondary storage) but is in a blocked state.

Operations on Process :

Process Creation:

Through appropriate system calls, such as fork or spawn, processes may create other processes. The process that creates another process is termed the parent, while the created sub-process is termed its child. Each process is given an integer identifier, termed the process identifier, or PID. The parent's PID (PPID) is also stored for each process.
On a typical UNIX system the process scheduler is termed sched and is given PID 0. The first thing it does at system start-up time is launch init, which is given PID 1. init then launches all the system daemons and user logins, and becomes the ultimate parent of all other processes.

A child process may receive some amount of shared resources from its parent, depending on the system implementation. To prevent runaway children from consuming all of a certain system resource, child processes may or may not be limited to a subset of the resources originally allocated to the parent.
There are two options for the parent process after creating the child:
● Wait for the child process to terminate before proceeding. The parent process makes a wait() system call, for either a specific child process or for any child process, which causes the parent process to block until the wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
● Run concurrently with the child, continuing to process without waiting. This is the operation seen when a UNIX shell runs a process as a background task. It is also possible for the parent to run for a while and then wait for the child later, which might occur in a sort of parallel processing operation.


There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a new program loaded into it.
If fork is called n times, the number of child (new) processes created will be 2^n − 1.
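A minimal C example of the fork/wait pattern described above is sketched below (the first option, where the parent waits for its child); error handling is kept to a minimum.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* create a child process            */
    if (pid < 0) {                       /* fork failed                       */
        perror("fork");
        exit(1);
    } else if (pid == 0) {               /* child: a duplicate of the parent  */
        printf("child: PID=%d PPID=%d\n", (int)getpid(), (int)getppid());
        exit(0);
    } else {                             /* parent: wait for the child        */
        wait(NULL);
        printf("parent: child %d finished\n", (int)pid);
    }
    return 0;
}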
Process Termination

By making the exit() system call, typically returning an int, processes may request their own termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion and some non-zero code in the event of a problem.
Processes may also be terminated by the system for a variety of reasons, including:
● The inability of the system to deliver the necessary system resources.
● In response to a KILL command or other unhandled process interrupts.
● A parent may kill its children if the task assigned to them is no longer needed, i.e. if the need for having a child ends.
● If the parent exits, the system may or may not allow the child to continue without a parent. (In UNIX systems, orphaned processes are generally inherited by init, which then proceeds to kill them.)

When a process ends, all of its system resources are freed up, open files are flushed and closed, etc. The process termination status and execution times are returned to the parent if the parent is waiting for the child to terminate, or eventually returned to init if the process has already become an orphan.
Processes that have terminated but whose parent has not yet waited for them are termed zombies. These are eventually inherited by init as orphans and killed off.
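The sketch below shows how a parent retrieves a child's exit status with waitpid(); reaping the child this way is what prevents it from lingering as a zombie.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(42);                        /* child terminates with status 42         */

    int status;
    waitpid(pid, &status, 0);            /* reap the child so it cannot stay a zombie */
    if (WIFEXITED(status))               /* did the child exit normally?            */
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}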
Different types of queues in Parrot OS :
The scheduler of Parrot OS maintains a run queue and a wait queue, like typical schedulers. Only the head of the run queue may enter synchronization next. Once the synchronization call is executed, PARROT updates the queues accordingly. For instance, for pthread_create, PARROT appends the new thread to the tail of the run queue and rotates the head to the tail. By maintaining its own queues, PARROT avoids nondeterminism in the OS scheduler and the Pthreads library. The get_turn function waits until the calling thread becomes the head of the run queue, i.e., the thread gets a "turn" to do a synchronization. The put_turn function rotates the calling thread from the head to the tail of the run queue, i.e., the thread gives up a turn. The wait function records the address the thread is waiting for and the timeout, and moves the calling thread to the tail of the wait queue. The thread is moved back to the tail of the run queue when another thread wakes it up via signal or broadcast, or when the timeout has expired. The signal(void *addr) function appends the first thread waiting for addr to the run queue. The broadcast(void *addr) function appends all threads waiting for addr to the run queue in order. The lazy updates simplify the implementation of this optimization by maintaining the invariant that only the head of the run queue can modify the run and wait queues.

Different types of schedulers and scheduling algorithms in PARROT:
The scheduler intercepts synchronization calls and releases threads using the well-understood, deterministic round-robin algorithm: the first thread enters synchronization first, the second thread second, ..., and repeat. It does not control non-synchronization code, often the majority of code, which runs in parallel. It maintains a queue of runnable threads (the run queue) and another queue of waiting threads (the wait queue), like typical schedulers. Only the head of the run queue may enter synchronization next. Once the synchronization call is executed, PARROT updates the queues accordingly. For instance, for pthread_create, PARROT appends the new thread to the tail of the run queue and rotates the head to the tail. By maintaining its own queues, PARROT avoids nondeterminism in the OS scheduler and the Pthreads library.
To implement operations in the PARROT runtime, the scheduler provides a monitor-like internal interface, shown in Table 1. The first five functions map one-to-one to functions of a typical monitor, except that the scheduler functions are deterministic. The last two are for selectively reverting to nondeterministic execution. The rest of this subsection describes these functions.
The get_turn function waits until the calling thread becomes the head of the run queue, i.e., the thread gets a "turn" to do a synchronization. The put_turn function rotates the calling thread from the head to the tail of the run queue, i.e., the thread gives up a turn. The wait function is similar to pthread_cond_timedwait. It requires that the calling thread has the turn. It records the address the thread is waiting for and the timeout, and moves the calling thread to the tail of the wait queue. The thread is moved to the tail of the run queue when (1) another thread wakes it up via signal or broadcast or (2) the timeout has expired. The wait function returns when the calling thread gets a turn again. Its return value indicates how the thread was woken up.

int wrap_mutex_lock(pthread_mutex_t *mu) {
    scheduler.get_turn();
    while (pthread_mutex_trylock(mu))
        scheduler.wait(mu, 0);
    scheduler.put_turn();
    return 0; /* error handling is omitted for clarity. */
}
int wrap_mutex_unlock(pthread_mutex_t *mu) {
    scheduler.get_turn();
    pthread_mutex_unlock(mu);
    scheduler.signal(mu);
    scheduler.put_turn();
    return 0; /* error handling is omitted for clarity. */
}
The signal(void *addr) function appends the first thread waiting for addr to the run queue. The broadcast(void *addr) function appends all threads waiting for addr to the run queue in order. Both signal and broadcast require the turn. The timeout in the wait function does not specify real time, but relative logical time that counts the number of turns executed since the beginning of the current execution.
In each call to the get_turn function, PARROT increments this logical time and checks for timeouts. (If all threads block, PARROT keeps the logical time advancing with an idle thread; see §4.5.) The wait function takes a relative timeout argument. If the current logical time is t, a timeout of 10 means waking up the thread at logical time t + 10. A wait(NULL, timeout) call is a logical sleep, and a wait(addr, 0) call never times out.
The last two functions in Table 1 support performance critical sections and network operations. They set the calling thread's execution mode to nondeterministic or deterministic. PARROT always schedules synchronizations of deterministic threads using round-robin, but it lets the OS scheduler schedule nondeterministic threads. Implementation-wise, the nondet_begin function marks the calling thread as nondeterministic and simply returns. This thread will be lazily removed from the run queue by the thread that next tries to pass the turn to it. The nondet_end function marks the calling thread as deterministic and appends it to an additional queue. This thread will be lazily appended to the run queue by the next thread getting the turn.
int wrap_cond_wait(pthread_cond_t *cv, pthread_mutex_t *mu) {
    scheduler.get_turn();
    pthread_mutex_unlock(mu);
    scheduler.signal(mu);
    scheduler.wait(cv, 0);
    while (pthread_mutex_trylock(mu))
        scheduler.wait(mu, 0);
    scheduler.put_turn();
    return 0; /* error handling is omitted for clarity. */
}

We have optimized the multicore scheduler implementation for the most frequent operations: get_turn, put_turn, wait, and signal. Each thread has an integer flag and a condition variable. The get_turn function spin-waits on the current thread's flag for a while before block-waiting on the condition variable. The wait function needs to get the turn before it returns, so it uses the same combined spin- and block-wait strategy as the get_turn function. The put_turn and signal functions signal both the flag and the condition variable of the next thread. In the common case, these operations acquire no lock and do not block-wait. The lazy updates above simplify the implementation of this optimization by maintaining the invariant that only the head of the run queue can modify the run and wait queues.
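The combined spin- and block-wait strategy can be sketched as follows. This is not PARROT's actual code; the per-thread structure, names and spin budget are illustrative, but the pattern (spin on an atomic flag, then block on a condition variable) is the one described above.

#include <pthread.h>
#include <stdatomic.h>

#define SPIN_COUNT 4000                      /* illustrative spin budget */

struct turn_flag {
    atomic_int      has_turn;                /* 1 when this thread holds the turn */
    pthread_mutex_t mu;
    pthread_cond_t  cv;
};

/* Wait until this thread is given the turn: spin first, then block. */
static void turn_wait(struct turn_flag *t) {
    for (int i = 0; i < SPIN_COUNT; i++)
        if (atomic_load(&t->has_turn))
            return;                          /* got the turn while spinning */
    pthread_mutex_lock(&t->mu);
    while (!atomic_load(&t->has_turn))
        pthread_cond_wait(&t->cv, &t->mu);   /* block until signaled */
    pthread_mutex_unlock(&t->mu);
}

/* Pass the turn to the next thread: set its flag and signal its condition variable. */
static void turn_pass(struct turn_flag *next) {
    atomic_store(&next->has_turn, 1);
    pthread_mutex_lock(&next->mu);           /* hold the mutex so the wakeup is not lost */
    pthread_cond_signal(&next->cv);
    pthread_mutex_unlock(&next->mu);
}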

Operations on threads in Parrot OS with examples :
The scheduler intercepts synchronization calls and releases threads using the well-understood, deterministic round-robin algorithm: the first thread enters synchronization first, the second thread second, ..., and repeat. Once the synchronization call is executed, PARROT updates the queues accordingly. For instance, for pthread_create, PARROT appends the new thread to the tail of the run queue and rotates the head to the tail. By maintaining its own queues, PARROT avoids nondeterminism in the OS scheduler and the Pthreads library. The put_turn function rotates the calling thread from the head to the tail of the run queue, i.e., the thread gives up a turn. The wait function is similar to pthread_cond_timedwait. It requires that the calling thread has the turn. It records the address the thread is waiting for and the timeout, and moves the calling thread to the tail of the wait queue. The wait function returns when the calling thread gets a turn again. Its return value indicates how the thread was woken up. Performance critical sections and network operations set the calling thread's execution mode to nondeterministic or deterministic. PARROT always schedules synchronizations of deterministic threads using round-robin, but it lets the OS scheduler schedule nondeterministic threads. Implementation-wise, the nondet_begin function marks the calling thread as nondeterministic and simply returns. This thread will be lazily removed from the run queue by the thread that next tries to pass the turn to it. The nondet_end function marks the calling thread as deterministic and appends it to an additional queue. This thread will be lazily appended to the run queue by the next thread getting the turn.
We have optimized the multicore scheduler implementation for the most frequent operations: get_turn, put_turn, wait, and signal. Each thread has an integer flag and a condition variable. The get_turn function spin-waits on the current thread's flag for a while before block-waiting on the condition variable. The put_turn and signal functions signal both the flag and the condition variable of the next thread.
Example 1:
Wrappers of Pthreads mutex lock & unlock
int wrap_mutex_lock(pthread_mutex_t *mu)
{
    scheduler.get_turn();
    while (pthread_mutex_trylock(mu))
        scheduler.wait(mu, 0);
    scheduler.put_turn();
    return 0;
    /* error handling is omitted for clarity. */
}
int wrap_mutex_unlock(pthread_mutex_t *mu)
{
    scheduler.get_turn();
    pthread_mutex_unlock(mu);
    scheduler.signal(mu);
    scheduler.put_turn();
    return 0;
    /* error handling is omitted for clarity. */
}
Example 2:
Wrapper of pthread_cond_wait.
int wrap_cond_wait(pthread_cond_t *cv, pthread_mutex_t *mu)
{
    scheduler.get_turn();
    pthread_mutex_unlock(mu);
    scheduler.signal(mu);
    scheduler.wait(cv, 0);
    while (pthread_mutex_trylock(mu))
        scheduler.wait(mu, 0);
    scheduler.put_turn();
    return 0;
    /* error handling is omitted for clarity. */
}
Mapping between user level threads and
kernel level threads in PARROT :
A thread library provides programmers with an API for creating
and managing threads. Support for threads must be provided
either at the user level or by the kernel.
● Kernel level threads are supported and managed directly
by the operating system.
● User level threads are supported above the kernel in user
space and are managed without kernel support.

Kernel level threads
Kernel level threads are supported and managed directly by the operating system.
● The kernel knows about and manages all threads.
● One process control block (PCB) per process.
● One thread control block (TCB) per thread in the system.
● The kernel provides system calls to create and manage threads from user space.
Advantages
● The kernel has full knowledge of all threads.
● The scheduler may decide to give more CPU time to a process having a large number of threads.
● Good for applications that frequently block.
Disadvantages
● The kernel must manage and schedule all threads.
● Significant overhead and increased kernel complexity.
● Kernel level threads are slow and inefficient compared to user level threads.
● Thread operations are hundreds of times slower than user-level thread operations.
User level threads
User level threads are supported above the kernel in user space and are managed without kernel support.
● Threads are managed entirely by the run-time system (user-level library).
● Ideally, thread operations should be as fast as a function call.
● The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages
● Can be implemented on an OS that does not support kernel-level threads.
● Does not require modifications of the OS.
● Simple representation: PC, registers, stack and a small thread control block, all stored in the user-level process address space.
● Simple management: creating, switching and synchronizing threads is done in user space without kernel intervention.
● Fast and efficient: switching threads is not much more expensive than a function call.
Disadvantages
● Not a perfect solution (a trade-off).
● Lack of coordination between the user-level thread manager and the kernel.
● The OS may make poor decisions like:
o scheduling a process with idle threads,
o blocking a process due to a blocking thread even though the process has other threads that can run,
o giving a process as a whole one time slice irrespective of whether the process has 1 or 1000 threads,
o unscheduling a process with a thread holding a lock.
● May require communication between the kernel and the user-level thread manager (scheduler activations) to overcome the above problems.
User-level thread models
In general, user-level threads can be implemented using one of three models:
● Many-to-one
● One-to-one
● Many-to-many
All models map user-level threads to kernel-level threads. A kernel thread is similar to a process in a non-threaded (single-threaded) system. The kernel thread is the unit of execution that is scheduled by the kernel to execute on the CPU. The term virtual processor is often used instead of kernel thread.
Many-to-one
In the many-to-one model all user-level threads execute on the same kernel thread. The process can only run one user-level thread at a time because there is only one kernel-level thread associated with the process. The kernel has no knowledge of user-level threads. From its perspective, a process is an opaque black box that occasionally makes system calls.
One-to-one
In the one-to-one model every user-level thread executes on a separate kernel-level thread. In this model the kernel must provide a system call for creating a new kernel thread. An example of one-to-one threading with Pthreads is sketched after the next paragraph.
Many-to-many
In the many-to-many model the process is allocated m kernel-level threads to execute n user-level threads.
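On Linux, the Pthreads implementation (NPTL) follows the one-to-one model: each pthread_create() call creates one kernel-level thread, as sketched below.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tids[4];
    int ids[4];
    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&tids[i], NULL, worker, &ids[i]); /* one kernel thread per call */
    }
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);                     /* wait for every thread      */
    return 0;
}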
IPC MECHANISMS ON PARROT :
Inter-process communication (IPC) refers to mechanisms provided by the kernel that allow processes to communicate with each other. On modern systems, IPC forms the web that binds together the processes within a large-scale software architecture.
The PARROT provides the following IPC mechanisms:
1. Signals
2. Anonymous Pipes
3. Named Pipes or FIFOs
4. SysV Message Queues
5. POSIX Message Queues
6. SysV Shared memory
7. POSIX Shared memory
8. SysV semaphores
9. POSIX semaphores
10. FUTEX locks
While the above list may seem like a lot, each IPC mechanism described above is tailored to work best for a particular use-case scenario.
SIGNALS
Signals are the cheapest form of IPC provided by Linux. Their primary use is to notify processes of changes in state or events that occur within the kernel or other processes. We use signals in the real world to convey messages with the least overhead; think of hand and body gestures. For example, in a crowded gathering, we raise a hand to gain attention, wave a hand at a friend to greet them, and so on.
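A minimal example of signal-based notification using the standard sigaction() interface; as is conventional, the handler only sets a flag.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo) {
    (void)signo;
    got_signal = 1;                      /* only set a flag inside the handler */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sigaction(SIGUSR1, &sa, NULL);       /* install a handler for SIGUSR1 */

    printf("send SIGUSR1 to PID %d\n", (int)getpid());
    while (!got_signal)
        pause();                         /* sleep until a signal arrives */
    printf("received SIGUSR1\n");
    return 0;
}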
ANONYMOUS PIPES
Anonymous pipes (or simply pipes, for short) provide a
mechanism for one process to stream data to another. A pipe
has two ends associated with a pair of file descriptors -
making it a one-to-one messaging or communication
mechanism. One end of the pipe is the read-end which is
associated with a file-descriptor that can only be read, and the
other end is the write-end which is associated with a file
descriptor that can only be written. This design means that
pipes are essentially half-duplex.
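A minimal example of a parent streaming a message to its child through an anonymous pipe:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                          /* fds[0] = read end, fds[1] = write end     */
    pipe(fds);

    if (fork() == 0) {                   /* child: the reader                         */
        close(fds[1]);                   /* close the unused write end                */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child read: %s\n", buf);
        _exit(0);
    }
    close(fds[0]);                       /* parent: the writer, close unused read end */
    const char *msg = "hello through the pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}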
NAMED PIPES OR FIFO
Named pipes (or FIFO) are variants of pipe that allow
communication between processes that are not related to each
other. The processes communicate using named pipes by
opening a special file known as a FIFO file. One process
opens the FIFO file from writing while the other process
opens the same file for reading. Thus any data written by the
former process gets streamed through a pipe to the latter
process. The FIFO file on disk acts as the contract between the
two processes that wish to communicate.
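A sketch of the writer side of a FIFO; a reader in another process would open the same path with O_RDONLY and read from it. The path /tmp/demo_fifo is illustrative.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";           /* illustrative FIFO path                   */
    mkfifo(path, 0600);                            /* create the FIFO file (may already exist) */

    int fd = open(path, O_WRONLY);                 /* blocks until a reader opens the FIFO     */
    const char *msg = "hello through the FIFO\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}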
MESSAGE QUEUES
Message queues are analogous to mailboxes. One process writes a message packet to the message queue and exits. Another process can access the message packet from the same message queue at a later point in time. The advantage of message queues over pipes/FIFOs is that the sender (or writer) processes do not have to wait for the receiver (or reader) processes to connect. Think of communication using pipes as similar to two people communicating over the phone, while message queues are similar to two people communicating using mail or other messaging services.
There are two standard specifications for message queues.
1. SysV message queues.
The AT&T SysV message queues support message channeling. Each message packet sent by a sender carries a message number. Receivers can choose to receive messages that match a particular message number, to receive all messages excluding a particular message number, or to receive all messages.
2. POSIX message queues.
The POSIX message queues support message priorities. Each message packet sent by a sender carries a priority number along with the message payload. The messages get ordered based on the
priority number in the message queue. When the
receiver tries to read a message at a later point in time,
the messages with higher priority numbers get
delivered first. POSIX message queues also support
asynchronous message delivery using threads or
signal based notification.
Linux supports both of the above standards for message queues.
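A sketch of the POSIX interface (mq_open/mq_send/mq_receive); the queue name /demo_mq and the attributes are illustrative, and older glibc versions require linking with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    mq_send(mq, "low",  4, 1);           /* priority 1                        */
    mq_send(mq, "high", 5, 9);           /* priority 9: delivered first       */

    char buf[128];                       /* must be at least mq_msgsize bytes */
    unsigned prio;
    ssize_t n = mq_receive(mq, buf, sizeof(buf), &prio);
    printf("got \"%s\" (len %zd, priority %u)\n", buf, n, prio);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}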
SHARED MEMORY
As the name implies, this IPC mechanism allows one process
to share a region of memory in its address space with another.
This allows two or more processes to communicate data more
efficiently amongst themselves with minimal kernel
intervention.
There are two standard specifications for Shared memory.
1. SysV Shared memory. Many applications even today
use this mechanism for historical reasons. It follows
some of the artifacts of SysV IPC semantics.
2. POSIX Shared memory. The POSIX specifications provide a more elegant approach towards implementing a shared memory interface. On Linux, POSIX shared memory is actually implemented using files backed by a RAM-based filesystem. I recommend using this mechanism over the SysV semantics due to its more elegant file-based semantics.
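A sketch of the POSIX shared-memory interface (shm_open + mmap); the object name /demo_shm and the size are illustrative, and older glibc versions require linking with -lrt.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t size = 4096;
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);   /* create/open the object */
    ftruncate(fd, size);                                      /* set its size           */

    char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(mem, "hello from shared memory");  /* visible to any process mapping /demo_shm */
    printf("%s\n", mem);

    munmap(mem, size);
    close(fd);
    shm_unlink("/demo_shm");                                  /* remove the object      */
    return 0;
}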
SEMAPHORES
Semaphores are locking and synchronization mechanisms most widely used when processes share resources. Linux supports both SysV semaphores and POSIX semaphores. POSIX semaphores provide a simpler and more elegant implementation and are thus more widely used than SysV semaphores on Linux.
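A sketch using a named POSIX semaphore to guard a critical section shared across processes; the name /demo_sem is illustrative.

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(void) {
    /* Create (or open) a named semaphore with an initial value of 1, i.e. a lock. */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 1);

    sem_wait(sem);                       /* enter the critical section */
    printf("inside the critical section\n");
    sem_post(sem);                       /* leave the critical section */

    sem_close(sem);
    sem_unlink("/demo_sem");             /* remove the name when done  */
    return 0;
}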
FUTEXES
Futexes are high-performance, low-overhead locking mechanisms provided by the kernel. Direct use of futexes in system programs is highly discouraged. Futexes are used internally by the POSIX threads API for its condition variable and mutex implementations.

Parrot Architecture
REPORT - II

Synchronization in Parrot OS

PARROT handles all synchronizations on Pthreads mutexes, read-write locks, condition variables, semaphores and barriers. It also handles thread creation, join, and exit. It need not implement the other Pthreads functions such as thread ID operations, another advantage of leveraging existing Pthreads runtimes. In total, PARROT has 38 synchronization wrappers. They ensure a total (round-robin) order of synchronizations by (1) using the scheduler primitives to ensure that at most one wrapper has the turn and (2) executing the actual synchronizations only when the turn is held. Code 1 shows the pseudo code of our Pthreads mutex lock and unlock wrappers. Both are quite simple; so are most other wrappers. The lock wrapper uses the try-version of the Pthreads lock operation to avoid deadlock: if the head of the run queue is blocked waiting for a lock before giving up the turn, no other thread can get the turn.
Code 2 shows the pthread_cond_wait wrapper. It is slightly more complex than the lock and unlock wrappers for two reasons. First, there is no try-version of pthread_cond_wait, so PARROT cannot use the same trick to avoid deadlock as in the lock wrapper. Second, PARROT must ensure that unlocking the mutex and waiting on the condition variable are atomic (to avoid the well-known lost-wakeup problem). PARROT solves these issues by implementing the wait with the scheduler's wait, which atomically gives up the turn and blocks the calling thread on the wait queue. The wrapper of pthread_cond_signal (not shown) calls the scheduler's signal accordingly.
Thread creation is the most complex of all the wrappers for two reasons. First, it must deterministically assign a logical thread ID to the newly created thread, because the system's thread IDs are nondeterministic. Second, it must also prevent the new thread from using the logical ID before the ID is assigned. PARROT solves these issues by synchronizing the current and new threads with two semaphores: one to make the new thread wait for the current thread to assign an ID, and the other to make the current thread wait until the child gets the ID.
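A hedged sketch of this two-semaphore handshake is shown below. It is not PARROT's actual code: the wrapper name, the create_args structure, and the helpers next_logical_id() and register_thread() are all illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>

struct create_args {
    void *(*start)(void *);   /* user start routine                       */
    void   *arg;              /* user argument                            */
    int     logical_id;       /* assigned deterministically by the parent */
    sem_t   id_assigned;      /* parent posts after assigning the ID      */
    sem_t   id_taken;         /* child posts after adopting the ID        */
};

int  next_logical_id(void);            /* assumed: deterministic ID source            */
void register_thread(int logical_id);  /* assumed: records the thread for scheduling  */

static void *trampoline(void *p) {
    struct create_args *a = p;
    sem_wait(&a->id_assigned);        /* wait until the parent has assigned our ID */
    register_thread(a->logical_id);   /* adopt the logical ID                      */
    sem_post(&a->id_taken);           /* let the parent continue                   */
    return a->start(a->arg);          /* run the user's start routine              */
}

int wrap_thread_create(pthread_t *tid, void *(*start)(void *), void *arg,
                       struct create_args *a) {
    a->start = start;
    a->arg   = arg;
    sem_init(&a->id_assigned, 0, 0);
    sem_init(&a->id_taken, 0, 0);

    pthread_create(tid, NULL, trampoline, a);
    a->logical_id = next_logical_id();  /* deterministic ID assignment            */
    sem_post(&a->id_assigned);          /* the child may now use its logical ID   */
    sem_wait(&a->id_taken);             /* wait until the child has taken the ID  */
    return 0;                           /* error handling is omitted for clarity. */
}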

int wrap_mutex_lock(pthread_mutex_t *mu) {
    scheduler.get_turn();
    while (pthread_mutex_trylock(mu))
        scheduler.wait(mu, 0);
    scheduler.put_turn();
    return 0; /* error handling is omitted for clarity. */
}
int wrap_mutex_unlock(pthread_mutex_t *mu) {
    scheduler.get_turn();
    pthread_mutex_unlock(mu);
    scheduler.signal(mu);
    scheduler.put_turn();
    return 0; /* error handling is omitted for clarity. */
}
Code 1: Wrappers of Pthreads mutex lock & unlock.

int wrap_cond_wait(pthread_cond_t *cv, pthread_mutex_t *mu) {
    scheduler.get_turn();
    pthread_mutex_unlock(mu);
    scheduler.signal(mu);
    scheduler.wait(cv, 0);
    while (pthread_mutex_trylock(mu))
        scheduler.wait(mu, 0);
    scheduler.put_turn();
    return 0; /* error handling is omitted for clarity. */
}

Code 2: Wrapper of pthread_cond_wait.


DEADLOCKS

PARROT implements performance hints using the scheduler primitives. It implements the soft barrier as a reusable barrier with a deterministic timeout. It implements the performance critical section by simply calling nondet_begin() and nondet_end().
One tricky issue is that deterministic and nondeterministic executions may interfere. Consider a deterministic thread t1 trying to lock a mutex that a nondeterministic t2 is trying to unlock. Nondeterministic thread t2 always "wins" because the timing of t2's unlock directly influences t1's lock, regardless of how hard PARROT tries to run t1 deterministically. An additional concern is deadlock: PARROT may move t1 to the wait queue but never wake t1 up because it cannot see t2's unlock.
To avoid the above interference, PARROT requires that synchronization variables accessed in nondeterministic execution are isolated from those accessed in deterministic execution. This strong isolation is easy to achieve based on our experiments because, as discussed in §3, the synchronizations causing high overhead on deterministic execution tend to be low-level synchronizations already isolated from other synchronizations. To help developers write performance critical sections that conform to strong isolation, PARROT checks this property at runtime: it tracks two sets of synchronization variables accessed within deterministic and nondeterministic executions, and emits a warning when the two sets overlap. Strong isolation is considerably stronger than necessary: to avoid interference, it suffices to forbid deterministic and nondeterministic sections from concurrently accessing the same synchronization variables. We have not implemented this weak isolation because strong isolation works well for all programs evaluated.

Real-world programs frequently use timeouts (e.g., sleep, epoll_wait, and pthread_cond_timedwait) for periodic activities or timed waits. Not handling them can lead to nondeterministic execution and deadlocks. One deadlock example in our evaluation was running PBZip2 with DTHREADS: DTHREADS ignores the timeout in pthread_cond_timedwait, but PBZip2 sometimes relies on the timeout to finish.
PARROT makes timeouts deterministic by proportionally converting them to a logical timeout. When a thread registers a relative timeout that fires Δt_r later in real time, PARROT converts Δt_r to a relative logical timeout Δt_r / R, where R is a configurable conversion ratio. (R defaults to 3 µs, which works for all evaluated programs.) Proportional conversion is better than a fixed logical timeout because it matches developer intent better (e.g., important activities run more often). A nice fallout is that it makes some non-terminating executions terminate for model checking (§7.6). Of course, PARROT's logical time corresponds only loosely to real time, and may be less useful for real-time applications.
When all threads are on the wait queue, PARROT spawns an idle thread to keep the logical time flowing. The thread repeatedly gets the turn, sleeps for time R, and gives up the turn. An alternative to idling is fast-forwarding [10, 67]. Our experiments show that using an idle thread has better performance than fast-forwarding because the latter often wakes up threads prematurely, before the pending external events (e.g., receiving a network packet) are done, wasting CPU cycles.
PARROT handles all common timed operations such as sleep and pthread_cond_timedwait, enough for all five evaluated programs that require timeouts (PBZip2, Berkeley DB, MPlayer, ImageMagick, and Redis). Pthreads timed synchronizations use absolute time, so PARROT provides developers a function set_base_time to pass in the base time. It uses the delta between the base time and the absolute time argument as Δt_r.
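A hedged sketch of the proportional conversion described above; the helper name is illustrative and R is taken as the 3 µs default mentioned in the text.

#include <stdint.h>

/* Conversion ratio R: one logical turn per 3 microseconds of real time
   (the default mentioned above). */
static const uint64_t R_NS = 3000;

/* Convert a relative real-time timeout (in nanoseconds) into a relative
   logical timeout measured in turns: delta_turns = delta_ns / R. */
static uint64_t to_logical_timeout(uint64_t delta_ns) {
    uint64_t turns = delta_ns / R_NS;
    return turns > 0 ? turns : 1;        /* avoid rounding down to an immediate timeout */
}

/* Example: a 30 ms pthread_cond_timedwait() timeout becomes roughly
   30,000,000 ns / 3,000 ns = 10,000 turns of logical time. */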

Team Members :
- B.chaitanya harsha- 17BCE7093
HIMA VAMSI– 17BCD7082
K.yaswanth - 17BCE7170
B.yaswanth krishna– 17BCE7034
Sai ram - 17BCE7027
CONTRIBUTION:
Report - I
1,5,11 – B.Chaitanya harsha- 17BCE7093
3,2,6- Hima vamsi – 17BCD7082
4,12 - Sai ram– 17BCE7027
7,10 - yaswanth krishna – 17BCE7034
8,9 - K. Yaswanth- 17BCE7170
Report - II
Synchronization -
- B.chaitanya harsha- 17BCE7093
HIMA VAMSI– 17BCD7082

Deadlock -

K.yaswanth - 17BCE7170
B.yaswanth krishna– 17BCE7034
Sai ram - 17BCE7027
