
Amdahl's law

Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors. It was presented at the AFIPS Spring Joint Computer Conference in 1967.

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20.
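The 20-hour example above can be sketched numerically (a toy model; the function name is illustrative):

```python
def run_time(serial_hours, parallel_hours, n_processors):
    """Total time if the parallel portion splits perfectly across processors."""
    return serial_hours + parallel_hours / n_processors

base = run_time(1, 19, 1)  # 20 hours on a single core
for n in (10, 100, 1_000_000):
    # Speedup creeps toward, but never reaches, 20: the 1 serial hour remains.
    print(n, base / run_time(1, 19, n))
```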

Description

Amdahl's law is a model for the relationship between the expected speedup of parallelized implementations of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same when parallelized. For example, if for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily quickly (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is 1/(1 - 0.12) ≈ 1.136 times as fast as the non-parallelized implementation.

More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation, where the improvement has a speedup of S. (For example, if 30% of the computation may be the subject of a speedup, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be:

overall speedup = 1 / ((1 - P) + P/S)
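As a sketch, the relationship above between P, S, and the overall speedup can be checked numerically (the function name is illustrative):

```python
def overall_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1 / ((1 - p) + p / s)

# The 12% example: even with an infinite speedup of the improved part,
# the untouched 88% caps the gain at 1/0.88, about 1.136.
print(overall_speedup(0.12, float("inf")))

# P = 0.3 made twice as fast (S = 2).
print(overall_speedup(0.30, 2))
```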

To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 - P), plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the improved part's former running time divided by the speedup, making the length of time of the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.

Here is another example. We are given a sequential task which is split into four consecutive parts: P1, P2, P3 and P4, with the percentages of runtime being 11%, 18%, 23% and 48% respectively. Then we are told that P1 is not sped up, so S1 = 1, while P2 is sped up 5 times, P3 is sped up 20 times, and P4 is sped up 1.6 times. By using the formula P1/S1 + P2/S2 + P3/S3 + P4/S4, we find the new sequential running time is:

0.11/1 + 0.18/5 + 0.23/20 + 0.48/1.6 = 0.4575

or a little less than 1/2 the original running time. Using the formula (P1/S1 + P2/S2 + P3/S3 + P4/S4)^-1, the overall speed boost is 1/0.4575 ≈ 2.186, or a little more than double the original speed. Notice how the 20x and 5x speedups don't have much effect on the overall speed when P1 (11%) is not sped up at all, and P4 (48%) is sped up only 1.6 times.
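The four-part arithmetic can be reproduced in a few lines (a sketch; the pairs mirror P1..P4 from the text):

```python
# (fraction of runtime, speedup) for the four consecutive parts P1..P4
parts = [(0.11, 1), (0.18, 5), (0.23, 20), (0.48, 1.6)]

new_time = sum(p / s for p, s in parts)
print(new_time)      # new running time as a fraction of the original
print(1 / new_time)  # overall speedup
```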

Parallelization

In the case of parallelization, Amdahl's law states that if P is the proportion of a program that can be made parallel (i.e., benefit from parallelization), and (1 - P) is the proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by using N processors is

S(N) = 1 / ((1 - P) + P/N).

In the limit, as N tends to infinity, the maximum speedup tends to 1/(1 - P). In practice, the performance-to-price ratio falls rapidly as N is increased once there is even a small component of (1 - P). As an example, if P is 90%, then (1 - P) is 10%, and the problem can be sped up by a maximum factor of 10, no matter how large the value of N used. For this reason, parallel computing is only useful for either small numbers of processors, or problems with very high values of P: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce the component (1 - P) to the smallest possible value.

P can be estimated by using the measured speedup SU on a specific number of processors NP:

P_estimated = (1/SU - 1) / (1/NP - 1).

P estimated in this way can then be used in Amdahl's law to predict speedup for a different number of processors.
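Both relations can be expressed directly; the estimator simply solves Amdahl's law for P (a sketch, with illustrative names):

```python
def parallel_speedup(p, n):
    """Amdahl speedup with parallel fraction p on n processors."""
    return 1 / ((1 - p) + p / n)

def estimate_parallel_fraction(su, np_procs):
    """Solve Amdahl's law for P, given measured speedup su on np_procs processors."""
    return (1 / su - 1) / (1 / np_procs - 1)

# Round trip: a 90%-parallel program on 8 processors gains well under 8x,
# and the estimator recovers P = 0.9 from that measured speedup.
su = parallel_speedup(0.9, 8)
print(su)
print(estimate_parallel_fraction(su, 8))
```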

Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates the law of diminishing returns. If one picks optimally (in terms of the achieved speedup) what to improve, then one will see monotonically decreasing improvements as one improves. If, however, one picks non-optimally, after improving a sub-optimal component and moving on to improve a more optimal component, one can see an increase in return. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or more time-consuming than others.

Amdahl's law does represent the law of diminishing returns if one considers what sort of return is obtained by adding more processors to a machine, when running a fixed-size computation that will use all available processors to their capacity. Each new processor added to the system will add less usable power than the previous one. Each time the number of processors is doubled, the speedup ratio will diminish, as the total throughput heads toward the limit of 1/(1 - P). This analysis neglects other potential bottlenecks, such as memory bandwidth and I/O bandwidth, if they do not scale with the number of processors; however, taking such bottlenecks into account would tend to further demonstrate the diminishing returns of only adding processors.
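A quick sketch of the diminishing returns from doubling: the ratio between successive speedups shrinks toward 1 as the total approaches 1/(1 - P). (P = 0.95 here is an illustrative choice, giving an asymptote of 20.)

```python
def speedup(p, n):
    """Amdahl speedup with parallel fraction p on n processors."""
    return 1 / ((1 - p) + p / n)

P = 0.95
for n in (1, 2, 4, 8, 16, 32, 64):
    # Print the speedup at n processors and the gain ratio from doubling to 2n.
    print(n, round(speedup(P, n), 2), round(speedup(P, 2 * n) / speedup(P, n), 3))
```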

The maximum speedup in an improved sequential program, where some part was sped up p times, is limited by the inequality

maximum speedup ≤ p / (1 + f(p - 1)),

where f (0 < f < 1) is the fraction of time (before the improvement) spent in the part that was not improved.

For example, assume that a task has two independent parts, A and B, with A taking roughly 75% and B roughly 25% of the time of the whole computation. By working very hard, one may be able to make part B 5 times faster, but this only reduces the time for the whole computation by a little. In contrast, one may need to perform less work to make part A twice as fast. This will make the computation much faster than by optimizing part B, even though B's speed-up is greater by ratio (5 versus 2):

If part B is made five times faster (p = 5), with f = 0.75, then maximum speedup ≤ 5 / (1 + 0.75 × (5 - 1)) = 1.25.

If part A is made twice as fast (p = 2), with f = 0.25, then maximum speedup ≤ 2 / (1 + 0.25 × (2 - 1)) = 1.6.

Therefore, making A twice as fast is better than making B five times faster. The percentage improvement in speed can be calculated as

percentage improvement = 100 × (1 - 1/speedup).

Improving part A by a factor of two increases overall program speed by a factor of 1.6, which makes it 37.5% faster than the original computation. However, improving part B by a factor of five, which presumably requires more effort, only achieves an overall speedup factor of 1.25, which makes it 20% faster.
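The A-versus-B comparison can be checked with the inequality above (a sketch; f is the untouched fraction, p the local speedup, and the labels are illustrative):

```python
def max_speedup(f, p):
    """Upper bound on overall speedup when fraction f of the original
    time is untouched and the rest is made p times faster."""
    return p / (1 + f * (p - 1))

for label, f, p in [("A twice as fast", 0.25, 2), ("B five times faster", 0.75, 5)]:
    s = max_speedup(f, p)
    print(f"{label}: speedup {s}, {100 * (1 - 1 / s):.1f}% faster")
```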

John L. Gustafson pointed out in 1988 what is now known as Gustafson's Law: people typically are not interested in solving a fixed problem in the shortest possible period of time, as Amdahl's Law describes, but rather in solving the largest possible problem (e.g., the most accurate possible approximation) in a fixed "reasonable" amount of time. If the non-parallelizable portion of the problem is fixed, or grows very slowly with problem size (e.g., O(log n)), then additional processors can increase the possible problem size without limit.
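For contrast, the two laws can be sketched side by side. The scaled-speedup formula S(N) = N - s(N - 1), with s the serial fraction of the scaled workload, follows Gustafson's 1988 formulation; the fractions below are illustrative.

```python
def amdahl(p, n):
    """Fixed-size speedup: parallel fraction p, n processors."""
    return 1 / ((1 - p) + p / n)

def gustafson(s, n):
    """Scaled speedup: serial fraction s of the scaled workload, n processors."""
    return n - s * (n - 1)

# With a 5% serial fraction, Amdahl saturates near 20 while the
# scaled speedup keeps growing with the machine size.
for n in (8, 64, 1024):
    print(n, round(amdahl(0.95, n), 1), round(gustafson(0.05, n), 1))
```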

Notes

[1] Rodgers 1985, p. 226.

References

Amdahl, Gene (1967). "Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities" (PDF). AFIPS Conference Proceedings (30): 483-485. http://www-inst.eecs.berkeley.edu/~n252/paper/Amdahl.pdf

Rodgers, David P. (June 1985). "Improvements in multiprocessor system design". ACM SIGARCH Computer Architecture News (New York, NY, USA: ACM) 13 (3): 225-231. doi:10.1145/327070.327215. ISSN 0163-5964. http://portal.acm.org/citation.cfm?id=327215

External links

- Cases where Amdahl's law is inapplicable (http://www.futurechips.org/thoughts-for-researchers/parallel-programming-gene-amdahl-said.html)
- Oral history interview with Gene M. Amdahl (http://purl.umn.edu/104341), Charles Babbage Institute, University of Minnesota. Amdahl discusses his graduate work at the University of Wisconsin and his design of WISC; his role in the design of several computers for IBM, including the STRETCH, IBM 701, and IBM 704; his work with Nathaniel Rochester and IBM's management of the design process; and his work with Ramo-Wooldridge, Aeronutronic, and Computer Sciences Corporation.
- Reevaluating Amdahl's Law (http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html)
- Reevaluating Amdahl's Law and Gustafson's Law (http://spartan.cis.temple.edu/shi/public_html/docs/amdahl/amdahl.html)
- A simple interactive Amdahl's Law calculator (http://www.julianbrowne.com/article/viewer/amdahls-law)
- "Amdahl's Law" (http://demonstrations.wolfram.com/AmdahlsLaw/) by Joel F. Klein, Wolfram Demonstrations Project, 2007.
- Amdahl's Law in the Multicore Era (http://www.cs.wisc.edu/multifacet/amdahl/)
- Amdahl's Law explanation (http://www.gordon-taft.net/Amdahl_Law.html)
- Blog Post: "What the $#@! is Parallelism, Anyhow?" (http://www.cilk.com/multicore-blog/bid/5365/What-the-is-Parallelism-Anyhow)
- Amdahl's Law applied to OS system calls on multicore CPU (http://www.multicorepacketprocessing.com/how-should-amdahl-law-drive-the-redesigns-of-socket-system-calls-for-an-os-on-a-multicore-cpu)

Amdahl's law Source: http://en.wikipedia.org/w/index.php?oldid=548490358


License

Creative Commons Attribution-Share Alike 3.0 Unported //creativecommons.org/licenses/by-sa/3.0/
