Distributed Computing
Varun Thacker
Linux Users Group Manipal
April 8, 2010
Outline

1 Introduction
  LUG Manipal
  Points To Remember

2 Distributed Computing
  Technologies to be covered
  Idea
  Data !!
  Why Distributed Computing is Hard
  Why Distributed Computing is Important
  Three Common Distributed Architectures

3 GFS
  GFS Architecture: Master
  GFS: Life of a Read
  GFS: Life of a Write
  GFS: Master Failure

4 MapReduce
  Do We Need It?
  Bad News!
  MapReduce
  MapReduce Paradigm
  Working
  Under the hood: Scheduling
  Robustness

5 Hadoop
  What is Hadoop
  Who uses Hadoop?
  Mapper
  Combiners
  Reducer
  Some Terminology
  Job Distribution

6 Contact Information
  Attribution
  Copying
LUG Manipal
Points To Remember!!!
Distributed Computing
Technologies to be covered
Idea
Data

We live in the data age. An IDC estimate put the size of the digital universe at 0.18 zettabytes in 2006.
And by 2011 there will be a tenfold growth to 1.8 zettabytes.
1 zettabyte is one million petabytes, or one billion terabytes.
The New York Stock Exchange generates about one terabyte of new trade data per day.
Facebook hosts approximately 10 billion photos, taking up one petabyte of storage.
The Large Hadron Collider near Geneva produces about 15 petabytes of data per year.
Why Distributed Computing is Hard

Computers crash.
Network links crash.
Talking is slow (even Ethernet has 300 microsecond latency, during which time your 2 GHz PC can do 600,000 cycles).
Bandwidth is finite.
Internet scale: the computers and network are heterogeneous, untrustworthy, and subject to change at any time.
Why Distributed Computing is Important
Three Common Distributed Architectures
GFS
1. Usual file system stuff: create, read, move & find files.
2. Allow distributed access to files.
3. Files themselves are stored in a distributed fashion.
If you just do #1 and #2, you are a network file system.
To do #3, it's a good idea to also provide fault tolerance (a small illustration using Hadoop's file system API follows below).
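GFS itself is Google-internal, so as a hedged illustration only: Hadoop's HDFS, covered later in this talk, is modelled on GFS and exposes the same "usual file system stuff" plus distributed access through the org.apache.hadoop.fs.FileSystem API. The paths and class name below are made up for the example, not taken from the slides.

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsBasics {
  public static void main(String[] args) throws Exception {
    // Connects to whatever fs.default.name points at: a distributed
    // HDFS cluster, or the local file system when testing.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Create a file and write to it.
    Path file = new Path("/demo/hello.txt");
    FSDataOutputStream out = fs.create(file);
    out.writeBytes("hello, distributed world\n");
    out.close();

    // Read it back -- any node with access to the cluster could do this.
    BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(file)));
    System.out.println(in.readLine());
    in.close();

    // Move and find: the rest of the "usual file system stuff".
    fs.rename(file, new Path("/demo/hello-moved.txt"));
    for (FileStatus status : fs.listStatus(new Path("/demo"))) {
      System.out.println(status.getPath());
    }
  }
}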
GFS Architecture
GFS Architecture: Master

GFS: Life of a Read

GFS: Life of a Write
GFS: Master Failure

The Master stores its state via periodic checkpoints and a mutation log.
Both are replicated.
Master election and notification are implemented using an external lock server.
A new master restores its state from the checkpoint and the log.
MapReduce
Do We Need It?
Bad News!

Bad News I:
  communication and coordination
  recovering from machine failure (all the time!)
  debugging
  optimization
  locality
Bad News II: repeat for every problem you want to solve
Good News I and II: MapReduce and Hadoop!
MapReduce
MapReduce Paradigm
Working
Under the hood: Scheduling
Robustness
Hadoop
What is Hadoop
Who uses Hadoop?
Mapper

Mapper maps input key/value pairs to a set of intermediate key/value pairs.
The Hadoop Map/Reduce framework spawns one map task for each InputSplit generated by the InputFormat.
Output pairs do not need to be of the same types as input pairs.
Mapper implementations are passed the JobConf for the job.
The framework then calls the map method for each key/value pair (see the sketch below).
Applications can use the Reporter to report progress.
All intermediate values associated with a given output key are subsequently grouped by the framework, and passed to the Reducer(s) to determine the final output.
The intermediate, sorted outputs are always stored in a simple (key-len, key, value-len, value) format.
The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files.
Users can optionally specify a combiner to perform local aggregation of the intermediate outputs.
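To make this concrete, here is a minimal word-count mapper written against the old org.apache.hadoop.mapred API that these slides describe (MapReduceBase, OutputCollector, Reporter). It is a sketch, not code from the talk; the class and field names are illustrative.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Word-count mapper: for each input line, emit an intermediate (word, 1) pair.
public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    // key   = byte offset of the line in the file (ignored here)
    // value = one line of input text from the InputSplit
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, ONE);   // intermediate (word, 1) pair
      reporter.progress();         // tell the framework the task is alive
    }
  }
}

Note that the output types (Text, IntWritable) differ from the input types (LongWritable, Text), exactly as the slide allows.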
Combiners

When the map operation outputs its pairs they are already available in memory.
If a combiner is used then the map key-value pairs are not immediately written to the output.
They are collected in lists, one list per key.
When a certain number of key-value pairs have been written, this buffer is flushed by passing all the values of each key to the combiner's reduce method and outputting the key-value pairs of the combine operation as if they were created by the original map operation (see the driver sketch below).
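A combiner is wired in through the same JobConf that configures the rest of the job. Below is a hedged driver sketch for the word-count example: conf.setCombinerClass is the real call from the old mapred API, while WordCountJob, WordCountMapper and WordCountReducer are the illustrative classes sketched around these slides (the reducer follows after the Reducer slide).

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCountJob.class);
    conf.setJobName("wordcount");

    // Types of the final (reducer) output pairs.
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(WordCountMapper.class);
    // Summing counts is associative and commutative, so the reducer class can
    // double as the combiner: partial sums are computed on the map side before
    // the intermediate pairs are written out and shuffled across the network.
    conf.setCombinerClass(WordCountReducer.class);
    conf.setReducerClass(WordCountReducer.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);  // submit the job and wait for completion
  }
}

Removing the setCombinerClass line gives the same final result; the only difference is that more intermediate data is written and shuffled to the reducers.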
Reducer

Reducer reduces a set of intermediate values which share a key to a smaller set of values.
Reducer implementations are passed the JobConf for the job.
The framework then calls the reduce(WritableComparable, Iterator, OutputCollector, Reporter) method for each <key, (list of values)> pair in the grouped inputs (see the sketch below).
The reducer has 3 primary phases:
  Shuffle: Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers, via HTTP.
  Sort: The framework groups Reducer inputs by keys (since different mappers may have output the same key) in this stage.
  Reduce: In this phase the reduce method is called for each <key, (list of values)> pair in the grouped inputs.
The generated output is a new value.
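And the matching word-count reducer, again a sketch against the old mapred API rather than code from the talk: the framework hands it each key together with an Iterator over all values grouped under that key, exactly as described above.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Word-count reducer: sum all the counts emitted by the mappers for each word.
public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();   // one unit or partial count per value
    }
    output.collect(key, new IntWritable(sum));  // final (word, total count) pair
  }
}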
Some Terminology
Job Distribution
Contact Information

Varun Thacker
varunthacker1989@gmail.com
http://varunthacker.wordpress.com
Attribution

Google
Under the Creative Commons Attribution-Share Alike 2.5 Generic license.
Copying