
Time complexity

From Wikipedia, the free encyclopedia


In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the size of the input to the problem. The time complexity of an algorithm is commonly expressed using big O notation, which suppresses multiplicative constants and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n, the asymptotic time complexity is O(n^3).

Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform. Thus the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.

Since an algorithm may take a different amount of time even on inputs of the same size, the most commonly used measure of time complexity, the worst-case time complexity of an algorithm, denoted T(n), is the maximum amount of time taken on any input of size n. Time complexities are classified by the nature of the function T(n). For instance, an algorithm with T(n) = O(n) is called a linear time algorithm, and an algorithm with T(n) = O(2^n) is said to be an exponential time algorithm.

Contents
1 Table of common time complexities
2 Constant time
3 Logarithmic time
4 Polylogarithmic time
5 Sub-linear time
6 Linear time
7 Linearithmic/quasilinear time
8 Sub-quadratic time
9 Polynomial time
  9.1 Strongly and weakly polynomial time
  9.2 Complexity classes
10 Superpolynomial time
11 Quasi-polynomial time
  11.1 Relation to NP-complete problems
12 Sub-exponential time
  12.1 First definition
  12.2 Second definition
    12.2.1 Exponential time hypothesis
13 Exponential time
14 Double exponential time
15 See also
16 References

Table of common time complexities


The following table summarises some classes of commonly encountered time complexities. In the table, poly(x) = x^O(1), i.e., polynomial in x.

Name | Complexity class | Running time (T(n)) | Examples of running times | Examples of algorithms
constant time | | O(1) | 10 | Determining if a number is even or odd
inverse Ackermann time | | O(α(n)) | | Amortized time per operation using a disjoint set
iterated logarithmic time | | O(log* n) | | Distributed coloring of cycles
log-logarithmic time | | O(log log n) | | Amortized time per operation using a bounded priority queue[1]
logarithmic time | DLOGTIME | O(log n) | log n, log(n^2) | Binary search
polylogarithmic time | | poly(log n) | (log n)^2 |
fractional power | | O(n^c), 0 < c < 1 | n^(1/2), n^(2/3) | Searching in a kd-tree
linear time | | O(n) | n | Finding the smallest item in an unsorted array
"n log-star n" time | | O(n log* n) | | Seidel's polygon triangulation algorithm ("log* n" is the iterated logarithm)
linearithmic time | | O(n log n) | n log n, log n! | Fastest possible comparison sort
quadratic time | | O(n^2) | n^2 | Bubble sort; insertion sort
cubic time | | O(n^3) | n^3 | Naive multiplication of two n×n matrices; calculating partial correlation
polynomial time | P | 2^O(log n) = poly(n) | n, n log n, n^10 | Karmarkar's algorithm for linear programming; AKS primality test
quasi-polynomial time | QP | 2^poly(log n) | n^(log log n), n^(log n) | Best-known O(log^2 n)-approximation algorithm for the directed Steiner tree problem
sub-exponential time (first definition) | SUBEXP | O(2^(n^ε)) for all ε > 0 | O(2^((log n)^(log log n))) | Assuming complexity-theoretic conjectures, BPP is contained in SUBEXP[2]
sub-exponential time (second definition) | | 2^o(n) | 2^(n^(1/3)) | Best-known algorithm for integer factorization and graph isomorphism
exponential time (with linear exponent) | E | 2^O(n) | 1.1^n, 10^n | Solving the traveling salesman problem using dynamic programming
exponential time | EXPTIME | 2^poly(n) | 2^n, 2^(n^2) |
factorial time | | O(n!) | n! | Solving the traveling salesman problem via brute-force search
double exponential time | 2-EXPTIME | 2^(2^poly(n)) | 2^(2^n) | Deciding the truth of a given statement in Presburger arithmetic

Constant time


An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time, as only one operation has to be performed to locate it. However, finding the minimal value in an unordered array is not a constant time operation, as a scan over each element in the array is needed in order to determine the minimal value. Hence it is a linear time operation, taking O(n) time (unless some more efficient algorithm is devised, for example, a binary search across a sorted array). If the number of elements is known in advance and does not change, however, such an algorithm can still be said to run in constant time.

Despite the name "constant time", the running time does not have to be independent of the problem size, but an upper bound for the running time has to be bounded independently of the problem size. For example, the task "exchange the values of a and b if necessary so that a ≤ b" is called constant time even though the time may depend on whether or not it is already true that a ≤ b. However, there is some constant t such that the time required is always at most t.

Here are some examples of code fragments that run in constant time:

int index = 5;
int item = list[index];

if (condition true) then
    perform some operation that runs in constant time
else
    perform some other operation that runs in constant time

for i = 1 to 100
    for j = 1 to 200
        perform some operation that runs in constant time

If T(n) is O(c) for some constant c, this is equivalent to, and stated in standard notation as, T(n) being O(1).

Logarithmic time


Further information: Logarithmic growth

An algorithm is said to take logarithmic time if T(n) = O(log n). Due to the use of the binary numeral system by computers, the logarithm is frequently base 2 (that is, log2 n, sometimes written lg n). However, by the change-of-base rule for logarithms, log_a n and log_b n differ only by a constant multiplier, which in big-O notation is discarded; thus O(log n) is the standard notation for logarithmic time algorithms regardless of the base of the logarithm. Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search.
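Binary search is the canonical logarithmic-time example: each comparison halves the remaining range, so a list of n items needs at most about log2 n comparisons. A minimal sketch in Python (the function name is ours, not from the article):

```python
def binary_search(sorted_list, target):
    """Return an index of target in sorted_list, or -1 if absent.

    Each iteration halves the search range, so at most O(log n)
    comparisons are made for a list of n items.
    """
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1
```

Note that this bound requires the input to already be sorted; sorting it first would dominate the cost.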

Polylogarithmic time


An algorithm is said to run in polylogarithmic time if T(n) = O((log n)^k) for some constant k. For example, matrix chain ordering can be solved in polylogarithmic time on a Parallel Random Access Machine.[3]

Sub-linear time


An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n). In particular this includes algorithms with the time complexities defined above, as well as others such as the O(√n) Grover's search algorithm. For an algorithm to be exact and yet run in sub-linear time, it needs to use parallel processing (as the NC1 matrix determinant calculation does) or non-classical processing (as Grover's search does), or alternatively have guaranteed assumptions on the input structure (as the logarithmic time binary search and many tree maintenance algorithms do). Otherwise, a sub-linear time algorithm would not be able to read or learn the entire input prior to providing its output.

The specific term sublinear time algorithm is usually reserved for algorithms that are unlike the above in that they are run over classical serial machine models and are not allowed prior assumptions on the input.[4] They are however allowed to be randomized, and indeed must be randomized for all but the most trivial of tasks. As such an algorithm must provide an answer without reading the entire input, its particulars heavily depend on the access allowed to the input. Usually for an input that is represented as a binary string b1,...,bk it is assumed that the algorithm can in time O(1) request and obtain the value of bi for any i. Sublinear time algorithms are typically randomized, and provide only approximate solutions. In fact, the property of a binary string having only zeros (and no ones) can be easily proved not to be decidable by a (non-approximate) sublinear time algorithm. Sublinear time algorithms arise naturally in the investigation of property testing.
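The flavor of such randomized, approximate algorithms can be illustrated with a property-testing sketch: given O(1)-time access to individual bits, distinguish the all-zeros string from strings in which at least an ε fraction of bits are ones, reading only O(1/ε) bits. This is our own illustration of the idea, not an algorithm named in the article, and the function and parameter names are hypothetical:

```python
import random

def looks_all_zeros(query, n, eps=0.1, seed=None):
    """Property-testing sketch: query(i) returns bit i in O(1) time.

    Accepts every all-zeros string; rejects, with high probability,
    any string whose fraction of ones is at least eps. Only
    O(1/eps) of the n bits are read -- sub-linear in n.
    """
    rng = random.Random(seed)
    samples = min(n, int(30 / eps))
    for _ in range(samples):
        if query(rng.randrange(n)) == 1:
            return False   # found a one: certainly not all zeros
    return True            # probably, but not certainly, all zeros
```

As the section explains, a "probably" answer is the best a sub-linear algorithm can offer here: deciding the all-zeros property exactly requires reading every bit.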

Linear time


An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that for large enough input sizes the running time increases linearly with the size of the input. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list. This description is slightly inaccurate, since the running time can significantly deviate from a precise proportionality, especially for small values of n.

Linear time is often viewed as a desirable attribute for an algorithm. Much research has been invested into creating algorithms exhibiting (nearly) linear time or better. This research includes both software and hardware methods. In the case of hardware, some algorithms which, mathematically speaking, can never achieve linear time with standard computation models are able to run in linear time. There are several hardware technologies which exploit parallelism to provide this. An example is content-addressable memory. This concept of linear time is used in string matching algorithms such as the Boyer–Moore algorithm and Ukkonen's algorithm.
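The list-summing procedure mentioned above is a minimal linear-time sketch: one constant-time addition per element, so T(n) = O(n) for n elements (the function name is ours):

```python
def total(values):
    """Sum a list in linear time: each of the n elements is
    visited exactly once, at constant cost per element."""
    s = 0
    for v in values:   # n iterations for n elements
        s += v
    return s
```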

Linearithmic/quasilinear time


A linearithmic function (portmanteau of linear and logarithmic) is a function of the form n · log n (i.e., a product of a linear and a logarithmic term). An algorithm is said to run in linearithmic time if T(n) = O(n log n). Compared to other functions, a linearithmic function is ω(n), o(n^(1+ε)) for every ε > 0, and Θ(n · log n). Thus, a linearithmic term grows faster than a linear term but slower than any polynomial in n with exponent strictly greater than 1.

An algorithm is said to run in quasilinear time if T(n) = O(n log^k n) for some constant k. Quasilinear time algorithms are also o(n^(1+ε)) for every ε > 0, and thus run faster than any polynomial in n with exponent strictly greater than 1. In many cases, the n · log n running time is simply the result of performing a Θ(log n) operation n times. For example, binary tree sort creates a binary tree by inserting each element of the n-sized array one by one. Since the insert operation on a self-balancing binary search tree takes O(log n) time, the entire algorithm takes linearithmic time. Comparison sorts require at least a linearithmic number of comparisons in the worst case because log(n!) = Θ(n log n). They also frequently arise from the recurrence relation T(n) = 2 T(n/2) + O(n). Some famous algorithms that run in linearithmic time include:

- Comb sort, in the average and worst case
- Quicksort in the average case
- Heapsort, merge sort, introsort, binary tree sort, smoothsort, patience sorting, etc. in the worst case
- Fast Fourier transforms
- Monge array calculation
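Merge sort is one of these linearithmic algorithms, and it also exhibits the recurrence T(n) = 2 T(n/2) + O(n) mentioned above: the array is split in half, each half is sorted recursively, and the two sorted halves are merged in linear time. A minimal sketch:

```python
def merge_sort(items):
    """Sort in O(n log n): O(log n) levels of recursion, with
    O(n) total merging work per level."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```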

Sub-quadratic time


An algorithm is said to be subquadratic time if T(n) = o(n^2). For example, most naïve comparison-based sorting algorithms are quadratic (e.g. insertion sort), but more advanced algorithms can be found that are subquadratic (e.g. merge sort); to be precise, such algorithms are linearithmic. No general-purpose sorts run in linear time, but the change from quadratic to the common O(n log n) is of great practical importance.
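For contrast with the subquadratic sorts, insertion sort is a typical quadratic algorithm: element i may be compared against up to i earlier elements, for roughly n^2/2 comparisons on a reverse-sorted input. A minimal sketch:

```python
def insertion_sort(items):
    """Quadratic-time comparison sort: the inner loop may shift
    up to i elements for position i, giving O(n^2) worst case."""
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger items right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

On an already-sorted input the inner loop never runs, so insertion sort is linear in that best case, which is why it remains useful for small or nearly-sorted inputs.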

Polynomial time


An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial in the size of the input for the algorithm, i.e., T(n) = O(n^k) for some constant k.[5][6] Problems for which a polynomial time algorithm exists belong to the complexity class P, which is central in the field of computational complexity theory. Cobham's thesis states that polynomial time is a synonym for "tractable", "feasible", "efficient", or "fast".[7]

Some examples of polynomial time algorithms:

- The quicksort sorting algorithm on n integers performs at most An^2 operations for some constant A. Thus it runs in time O(n^2) and is a polynomial time algorithm.

- All the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) can be done in polynomial time.
- Maximum matchings in graphs can be found in polynomial time.

Strongly and weakly polynomial time


In some contexts, especially in optimization, one differentiates between strongly polynomial time and weakly polynomial time algorithms. These two concepts are only relevant if the inputs to the algorithms consist of integers.

Strongly polynomial time is defined in the arithmetic model of computation. In this model of computation the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) take a unit time step to perform, regardless of the sizes of the operands. The algorithm runs in strongly polynomial time if:[8]

1. the number of operations in the arithmetic model of computation is bounded by a polynomial in the number of integers in the input instance; and
2. the space used by the algorithm is bounded by a polynomial in the size of the input.

Any algorithm with these two properties can be converted to a polynomial time algorithm by replacing the arithmetic operations by suitable algorithms for performing the arithmetic operations on a Turing machine. If the second requirement above is omitted, then this is no longer true. Given the number 2, it is possible to compute 2^(2^n) with n multiplications using repeated squaring. However, 2^(2^n) occupies 2^n bits and so cannot be represented in polynomial space. Hence, it is not possible to compute this number in polynomial time on a Turing machine, but it is possible to compute it by polynomially many arithmetic operations.

Conversely, there are algorithms which run in polynomial time in the Turing machine model but not in the arithmetic model. The Euclidean algorithm for computing the greatest common divisor of two integers is one example. Given two integers a and b, the running time of the algorithm is bounded by O(log a · log b) machine steps. This is polynomial in the size of a binary representation of a and b, as that size is roughly log2 a + log2 b. However, the algorithm does not run in strongly polynomial time, as the running time depends on the magnitudes of a and b and not only on the number of integers in the input (which is constant in this case: there are always only two integers in the input).

An algorithm which runs in polynomial time but which is not strongly polynomial is said to run in weakly polynomial time.[9] A well-known example of a problem for which a weakly polynomial-time algorithm is known, but which is not known to admit a strongly polynomial-time algorithm, is linear programming. Weakly polynomial time should not be confused with pseudo-polynomial time.
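The gap between the arithmetic model and the Turing machine model in the repeated-squaring example can be observed directly: n unit-cost multiplications, but an operand whose representation doubles in size at every step. A small sketch (function name ours):

```python
def repeated_squaring_bits(n):
    """Square the number 2 repeatedly, n times, and record the
    operand's bit-length after each squaring. The arithmetic model
    charges n unit-cost operations, but the final value 2^(2^n)
    needs 2^n bits -- exponential in the input size."""
    x = 2
    lengths = []
    for _ in range(n):
        x = x * x   # one unit-cost operation in the arithmetic model
        lengths.append(x.bit_length())
    return lengths
```

For n = 5 the recorded bit-lengths are 3, 5, 9, 17, 33: each squaring roughly doubles the representation, so the space (and hence Turing machine time) grows exponentially even though only five arithmetic operations were performed.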

Complexity classes

The concept of polynomial time leads to several complexity classes in computational complexity theory. Some important classes defined using polynomial time are the following.

- P: The complexity class of decision problems that can be solved on a deterministic Turing machine in polynomial time.
- NP: The complexity class of decision problems that can be solved on a non-deterministic Turing machine in polynomial time.
- ZPP: The complexity class of decision problems that can be solved with zero error on a probabilistic Turing machine in polynomial time.
- RP: The complexity class of decision problems that can be solved with 1-sided error on a probabilistic Turing machine in polynomial time.
- BPP: The complexity class of decision problems that can be solved with 2-sided error on a probabilistic Turing machine in polynomial time.
- BQP: The complexity class of decision problems that can be solved with 2-sided error on a quantum Turing machine in polynomial time.

P is the smallest time-complexity class on a deterministic machine which is robust in terms of machine model changes. (For example, a change from a single-tape Turing machine to a multi-tape machine can lead to a quadratic speedup, but any algorithm that runs in polynomial time under one model also does so on the other.) Any given abstract machine will have a complexity class corresponding to the problems which can be solved in polynomial time on that machine.

Superpolynomial time


An algorithm is said to take superpolynomial time if T(n) is not bounded above by any polynomial. It is ω(n^c) time for all constants c, where n is the input parameter, typically the number of bits in the input. For example, an algorithm that runs for 2^n steps on an input of size n requires superpolynomial time (more specifically, exponential time). An algorithm that uses exponential resources is clearly superpolynomial, but some algorithms are only very weakly superpolynomial. For example, the Adleman–Pomerance–Rumely primality test runs for n^O(log log n) time on n-bit inputs; this grows faster than any polynomial for large enough n, but the input size must become impractically large before it cannot be dominated by a polynomial with small degree.

A problem that has been proven to require superpolynomial time cannot be solved in polynomial time, and so is known to lie outside the complexity class P. Cobham's thesis conjectures that such algorithms are impractical, and in many cases they are. Since the P versus NP problem is unresolved, no algorithm for an NP-complete problem is currently known to run in polynomial time.

Quasi-polynomial time


Quasi-polynomial time algorithms are algorithms which run more slowly than polynomial time, yet not so slowly as to be exponential time. The worst case running time of a quasi-polynomial time algorithm is 2^(O((log n)^c)) for some fixed c. The best-known classical algorithm for integer factorization, the general number field sieve, which runs in time about 2^(O(n^(1/3) (log n)^(2/3))), is not quasi-polynomial since the running time cannot be expressed as 2^(O((log n)^c)) for any fixed c. If the constant c in the definition of quasi-polynomial time algorithms is equal to 1, we get a polynomial time algorithm, and if it is less than 1, we get a sub-linear time algorithm.

Quasi-polynomial time algorithms typically arise in reductions from an NP-hard problem to another problem. For example, one can take an instance of an NP-hard problem, say 3SAT, and convert it to an instance of another problem B, but the size of the instance becomes quasi-polynomial in the original size. In that case, this reduction does not prove that problem B is NP-hard; it only shows that there is no polynomial time algorithm for B unless there is a quasi-polynomial time algorithm for 3SAT (and thus all of NP). Similarly, there are some problems for which we know quasi-polynomial time algorithms, but no polynomial time algorithm is known. Such problems arise in approximation algorithms; a famous example is the directed Steiner tree problem, for which there is a quasi-polynomial time approximation algorithm achieving an approximation factor of O(log^2 n) (n being the number of vertices), but showing the existence of such a polynomial time algorithm is an open problem.

The complexity class QP consists of all problems which have quasi-polynomial time algorithms. It can be defined in terms of DTIME as follows:[10]

QP = union over c in N of DTIME(2^((log n)^c))

Relation to NP-complete problems


In complexity theory, the unsolved P versus NP problem asks if all problems in NP have polynomial-time algorithms. All the best-known algorithms for NP-complete problems like 3SAT etc. take exponential time. Indeed, it is conjectured for many natural NP-complete problems that they do not have sub-exponential time algorithms. Here "sub-exponential time" is taken to mean the second definition presented below. (On the other hand, many graph problems represented in the natural way by adjacency matrices are solvable in subexponential time simply because the size of the input is the square of the number of vertices.) This conjecture (for the k-SAT problem) is known as the exponential time hypothesis.[11] Since it is conjectured that NP-complete problems do not have quasi-polynomial time algorithms, some inapproximability results in the field of approximation algorithms make the assumption that NP-complete problems do not have quasi-polynomial time algorithms. For example, see the known inapproximability results for the set cover problem.

Sub-exponential time

The term sub-exponential time is used to express that the running time of some algorithm may grow faster than any polynomial but is still significantly smaller than an exponential. In this sense, problems that have sub-exponential time algorithms are somewhat more tractable than those that only have exponential algorithms. The precise definition of "sub-exponential" is not generally agreed upon,[12] and we list the two most widely used ones below.

First definition


A problem is said to be sub-exponential time solvable if it can be solved in running times whose logarithms grow smaller than any given polynomial. More precisely, a problem is in sub-exponential time if for every ε > 0 there exists an algorithm which solves the problem in time O(2^(n^ε)). The set of all such problems is the complexity class SUBEXP, which can be defined in terms of DTIME as follows:[2][13][14][15]

SUBEXP = intersection over all ε > 0 of DTIME(2^(n^ε))

Note that this notion of sub-exponential is non-uniform in terms of ε, in the sense that ε is not part of the input and each ε may have its own algorithm for the problem.

Second definition


Some authors define sub-exponential time as running times in 2^o(n).[11][16][17] This definition allows larger running times than the first definition of sub-exponential time. An example of such a sub-exponential time algorithm is the best-known classical algorithm for integer factorization, the general number field sieve, which runs in time about 2^(O(n^(1/3) (log n)^(2/3))), where the length of the input is n. Another example is the best-known algorithm for the graph isomorphism problem, which runs in time 2^(O(√(n log n))).

Note that it makes a difference whether the algorithm is allowed to be sub-exponential in the size of the instance, the number of vertices, or the number of edges. In parameterized complexity, this difference is made explicit by considering pairs (L, k) of decision problems and parameters k. SUBEPT is the class of all parameterized problems that run in time sub-exponential in k and polynomial in the input size n:[18]

SUBEPT = DTIME(2^o(k) · poly(n)).

More precisely, SUBEPT is the class of all parameterized problems (L, k) for which there is a computable function f with f(k) = 2^o(k) and an algorithm that decides L in time f(k) · poly(n).

Exponential time hypothesis

The exponential time hypothesis (ETH) is that 3SAT, the satisfiability problem of Boolean formulas in conjunctive normal form with at most three literals per clause and with n variables, cannot be solved in time 2^o(n). With m denoting the number of clauses, ETH is equivalent to the hypothesis that kSAT cannot be solved in time 2^o(m) for any integer k ≥ 3.[19] The exponential time hypothesis implies P ≠ NP.

Exponential time


An algorithm is said to be exponential time if T(n) is upper bounded by 2^poly(n), where poly(n) is some polynomial in n. More formally, an algorithm is exponential time if T(n) is bounded by O(2^(n^k)) for some constant k. Problems which admit exponential time algorithms on a deterministic Turing machine form the complexity class known as EXPTIME.

Sometimes, exponential time is used to refer to algorithms that have T(n) = 2^O(n), where the exponent is at most a linear function of n. This gives rise to the complexity class E.
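A typical source of 2^O(n) running times is brute force over all subsets of an n-element set. As an illustration (our own example, not one named in the article), deciding subset sum by exhaustive search examines all 2^n subsets:

```python
from itertools import combinations

def subset_sum_brute_force(values, target):
    """Decide whether some subset of values sums to target by
    trying all 2^n subsets, so the running time is O(2^n * n)
    for n values -- exponential time."""
    n = len(values)
    for r in range(n + 1):                      # subset sizes 0..n
        for combo in combinations(values, r):   # all subsets of size r
            if sum(combo) == target:
                return True
    return False
```

Doubling n squares the number of subsets examined, which is the practical signature of exponential time.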

Double exponential time


An algorithm is said to be double exponential time if T(n) is upper bounded by 2^(2^poly(n)), where poly(n) is some polynomial in n. Such algorithms belong to the complexity class 2-EXPTIME. Well-known double exponential time algorithms include:

- Decision procedures for Presburger arithmetic
- Computing a Gröbner basis (in the worst case)
- Finding a complete set of associative-commutative unifiers[20]
- Satisfying CTL+ (which is, in fact, 2-EXPTIME-complete)[21]
- Quantifier elimination on real closed fields, which takes at least doubly exponential time (and is not even known to be computable in ELEMENTARY)

See also

L-notation

References
1. Mehlhorn, Kurt; Näher, Stefan (1990). "Bounded ordered dictionaries in O(log log N) time and O(n) space". Information Processing Letters.
2. Babai, László; Fortnow, Lance; Nisan, N.; Wigderson, Avi (1993). "BPP has subexponential time simulations unless EXPTIME has publishable proofs". Computational Complexity (Berlin, New York: Springer-Verlag) 3 (4): 307–318. doi:10.1007/BF01275486.
3. Bradford, Phillip G.; Rawlins, Gregory J. E.; Shannon, Gregory E. (1998). "Efficient Matrix Chain Ordering in Polylog Time". SIAM Journal on Computing (Philadelphia: Society for Industrial and Applied Mathematics) 27 (2): 466–490. doi:10.1137/S0097539794270698. ISSN 1095-7111.
4. Kumar, Ravi; Rubinfeld, Ronitt (2003). "Sublinear time algorithms". SIGACT News 34 (4): 57–67. http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/kumarR-survey.pdf.
5. Papadimitriou, Christos H. (1994). Computational Complexity. Reading, Mass.: Addison-Wesley. ISBN 0-201-53082-1.
6. Sipser, Michael (2006). Introduction to the Theory of Computation. Course Technology Inc. ISBN 0-619-21764-2.
7. Cobham, Alan (1965). "The intrinsic computational difficulty of functions". Proc. Logic, Methodology, and Philosophy of Science II. North Holland.
8. Grötschel, Martin; László Lovász; Alexander Schrijver (1988). "Complexity, Oracles, and Numerical Computation". Geometric Algorithms and Combinatorial Optimization. Springer. ISBN 038713624X.
9. Schrijver, Alexander (2003). "Preliminaries on algorithms and Complexity". Combinatorial Optimization: Polyhedra and Efficiency. 1. Springer. ISBN 3540443894.
10. Complexity Zoo: Class QP: Quasipolynomial-Time.
11. Impagliazzo, R.; Paturi, R. (2001). "On the complexity of k-SAT". Journal of Computer and System Sciences (Elsevier) 62 (2): 367–375. doi:10.1006/jcss.2000.1727. ISSN 1090-2724.
12. Aaronson, Scott (5 April 2009). "A not-quite-exponential dilemma". Shtetl-Optimized. http://scottaaronson.com/blog/?p=394. Retrieved 2 December 2009.
13. Complexity Zoo: Class SUBEXP: Deterministic Subexponential-Time.
14. Moser, P. (2003). "Baire's Categories on Small Complexity Classes". Lecture Notes in Computer Science (Berlin, New York: Springer-Verlag): 333–342. ISSN 0302-9743.
15. Miltersen, P.B. (2001). "Derandomizing Complexity Classes". Handbook of Randomized Computing (Kluwer Academic Pub): 843.
16. Kuperberg, Greg (2005). "A Subexponential-Time Quantum Algorithm for the Dihedral Hidden Subgroup Problem". SIAM Journal on Computing (Philadelphia: Society for Industrial and Applied Mathematics) 35 (1): 188. ISSN 1095-7111.
17. Regev, Oded (2004). "A Subexponential Time Algorithm for the Dihedral Hidden Subgroup Problem with Polynomial Space". arXiv:quant-ph/0406151v1.
18. Flum, Jörg; Grohe, Martin (2006). Parameterized Complexity Theory. Springer. p. 417. ISBN 978-3-540-29952-3. http://www.springer.com/east/home/generic/search/results?SGWID=5-40109-22-141358322-0. Retrieved 2010-03-05.
19. Impagliazzo, R.; Paturi, R.; Zane, F. (2001). "Which problems have strongly exponential complexity?". Journal of Computer and System Sciences 63 (4): 512–530. doi:10.1006/jcss.2001.1774.
20. Kapur, Deepak; Narendran, Paliath (1992). Proc. 7th IEEE Symp. Logic in Computer Science (LICS 1992). pp. 11–21. doi:10.1109/LICS.1992.185515. http://citeseer.ist.psu.edu/3373632.html.
21. Johannsen, Jan; Lange, Martin (2003). "CTL+ is complete for double exponential time". In Baeten, Jos C. M.; Lenstra, Jan Karel; Parrow, Joachim et al. Proc. 30th Int. Colloq. Automata, Languages, and Programming (ICALP 2003). Lecture Notes in Computer Science 2719. Springer-Verlag. pp. 767–775. doi:10.1007/3-540-45061.

Non-deterministic Turing machine




In theoretical computer science, a Turing machine is a theoretical machine that is used in thought experiments to examine the abilities and limitations of computers. In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless strip of paper tape by strictly following a set of rules. It determines what action it should perform next according to its internal "state" and what symbol it currently sees. An example of one of a Turing machine's rules might thus be: "If you are in state 2 and you see an 'A', change it to a 'B' and move left."

In a deterministic Turing machine, the set of rules prescribes at most one action to be performed for any given situation. A non-deterministic Turing machine (NTM), by contrast, may have a set of rules that prescribes more than one action for a given situation. For example, a non-deterministic Turing machine may have both "If you are in state 2 and you see an 'A', change it to a 'B' and move left" and "If you are in state 2 and you see an 'A', change it to a 'C' and move right" in its rule set.

An ordinary (deterministic) Turing machine (DTM) has a transition function that, for a given state and symbol under the tape head, specifies three things: the symbol to be written to the tape, the direction (left or right) in which the head should move, and the subsequent state of the finite control. For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5. A non-deterministic Turing machine (NTM) differs in that the state and tape symbol no longer uniquely specify these things; rather, many different actions may apply for the same combination of state and symbol. For example, an X on the tape in state 3 might now allow the NTM to write a Y, move right, and switch to state 5, or to write an X, move left, and stay in state 3.

Contents
1 Definition
  1.1 Resolution of multiple rules
2 Variations
3 Equivalence with DTMs
4 Bounded non-determinism
5 Comparison with quantum computers
6 See also
7 References
8 External links

Definition

A non-deterministic Turing machine can be formally defined as a 6-tuple M = (Q, Σ, ι, ⊔, A, δ), where

- Q is a finite set of states
- Σ is a finite set of symbols (the tape alphabet)
- ι ∈ Q is the initial state
- ⊔ ∈ Σ is the blank symbol
- A ⊆ Q is the set of accepting states
- δ ⊆ (Q × Σ) × (Q × Σ × {L, R}) is a relation on states and symbols called the transition relation.

The difference with a standard (deterministic) Turing machine is that, for those, the transition relation is a function (the transition function). Configurations and the yields relation on configurations, which describes the possible actions of the Turing machine given any possible contents of the tape, are as for standard Turing machines, except that the yields relation is no longer single-valued. The notion of string acceptance is unchanged: a non-deterministic Turing machine accepts a string if, when the machine is started on the configuration in which the tape head is on the first character of the string (if any), and the tape is all blank otherwise, at least one of the machine's possible computations from that configuration puts the machine into a state in A. (If the machine is deterministic, the possible computations are the prefixes of a single, possibly infinite, path.)

Resolution of multiple rules


How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say that the machine is the "luckiest possible guesser"; it always picks the transition which eventually leads to an accepting state, if there is such a transition. The other is to imagine that the machine "branches" into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single "computation path" that it follows, an NTM has a "computation tree". If any branch of the tree halts with an "accept" condition, we say that the NTM accepts the input.

Variations

Equivalence with DTMs


In particular, nondeterministic Turing machines are equivalent with deterministic Turing machines. NTMs effectively include DTMs as special cases, so it is immediately clear that DTMs are not more powerful. It might seem that NTMs are more powerful than DTMs, since they can allow trees of possible computations arising from the same initial configuration, accepting a string if any one branch in the tree accepts it. But it is possible to simulate NTMs with DTMs.

One approach is to use a DTM of which the configurations represent multiple configurations of the NTM, and the DTM's operation consists of visiting each of them in turn, executing a single step at each visit, and spawning new configurations whenever the transition relation defines multiple continuations; this is effectively how a multitasking operating system implements the execution of multiple concurrent processes with a single processor and a single memory array.

Another construction[1] simulates NTMs with 3-tape DTMs, of which the first tape always holds the original input string, the second is used to simulate a particular computation of the NTM, and the third encodes a path in the NTM's computation tree. The 3-tape DTMs are easily simulated with a normal single-tape DTM. In this construction, the resulting DTM effectively performs a breadth-first search of the NTM's computation tree, visiting all possible computations of the NTM in order of increasing length until it finds an accepting one. Therefore, the length of an accepting computation of the DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. This is considered to be a general property of simulations of NTMs by DTMs; the most famous unresolved question in computer science, the P = NP problem, is related to this issue.
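The breadth-first simulation can be sketched deterministically in a few lines of Python. This toy (a one-way infinite tape and an invented example machine) is not the 3-tape construction itself, but it shows the key idea: visit the NTM's possible computations in order of increasing length, and accept as soon as any branch accepts:

```python
from collections import deque

def ntm_accepts(delta, blank, initial, accepting, input_string, max_steps=1000):
    """Deterministic breadth-first search of the NTM's computation tree.
    Assumes a one-way infinite tape (the head never moves left of cell 0)."""
    start = (initial, 0, tuple(input_string))
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (state, head, tape), depth = queue.popleft()
        if state in accepting:
            return True                     # some branch accepts
        if depth >= max_steps:
            continue                        # give up on over-long branches
        symbol = tape[head] if head < len(tape) else blank
        for next_state, write, move in delta.get((state, symbol), ()):
            cells = list(tape)
            while len(cells) <= head:
                cells.append(blank)         # grow the tape with blanks
            cells[head] = write
            config = (next_state, head + move, tuple(cells))
            if config not in seen:          # prune revisited configurations
                seen.add(config)
                queue.append((config, depth + 1))
    return False                            # no branch reached an accepting state

# Invented toy machine: nondeterministically accepts strings containing a 1.
delta = {
    ("scan", "0"): {("scan", "0", +1)},
    ("scan", "1"): {("scan", "1", +1), ("accept", "1", 0)},
}
assert ntm_accepts(delta, "_", "scan", {"accept"}, "001") is True
assert ntm_accepts(delta, "_", "scan", {"accept"}, "000") is False
```

The exponential blow-up described in the text shows up here as the size of the frontier: the queue can hold a number of configurations exponential in the search depth.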

Bounded non-determinism


An NTM has the property of bounded non-determinism, i.e., if an NTM always halts on a given input tape T then it halts in a bounded number of steps, and therefore can only have a bounded number of possible configurations.
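For a machine that halts on every branch, exhaustive exploration of the computation tree terminates, so the set of reachable configurations can actually be enumerated. A hypothetical sketch (the toy machine and all names are invented for illustration):

```python
def reachable_configs(delta, blank, initial, input_string):
    """Enumerate every configuration the NTM can reach from the start
    configuration. Terminates because every branch of this machine halts,
    so the reachable set is finite -- bounded non-determinism in action."""
    start = (initial, 0, tuple(input_string))
    seen = {start}
    frontier = [start]
    while frontier:
        state, head, tape = frontier.pop()
        symbol = tape[head] if head < len(tape) else blank
        for next_state, write, move in delta.get((state, symbol), ()):
            cells = list(tape)
            while len(cells) <= head:
                cells.append(blank)         # grow the tape with blanks
            cells[head] = write
            config = (next_state, head + move, tuple(cells))
            if config not in seen:
                seen.add(config)
                frontier.append(config)
    return seen

# Invented always-halting machine: every branch runs off the right end and stops.
delta = {
    ("scan", "0"): {("scan", "0", +1)},
    ("scan", "1"): {("scan", "1", +1), ("accept", "1", 0)},
}
configs = reachable_configs(delta, "_", "scan", "01")
assert len(configs) == 4   # finitely many configurations, as the text states
```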

Comparison with quantum computers

[Figure: the suspected shape of the range of problems solvable by quantum computers in polynomial time.]

It is a common misconception that quantum computers are NTMs.[2] It is believed but has not been proven that the power of quantum computers is incomparable to that of NTMs[3]; that is, problems likely exist that an NTM could efficiently solve but that a quantum computer cannot. A likely example of problems solvable by NTMs but not by quantum computers in polynomial time are NP-complete problems.

See also

Probabilistic Turing machine

References
1. ^ Elements of the Theory of Computation, by Harry R. Lewis and Christos H. Papadimitriou, Prentice-Hall, Englewood Cliffs, New Jersey, 1981, ISBN 0-13-273417-6, pp. 206-211
2. ^ The Orion Quantum Computer Anti-Hype FAQ, Scott Aaronson.
3. ^ Tusarova, Tereza (2004). Quantum complexity classes. arXiv:cs/0409051.

Harry R. Lewis, Christos Papadimitriou (1981). Elements of the Theory of Computation (1st ed.). Prentice-Hall. ISBN 0-13-273417-6. Section 4.6: Nondeterministic Turing machines, pp. 204–211.
John C. Martin (1997). Introduction to Languages and the Theory of Computation (2nd ed.). McGraw-Hill. ISBN 0-07-040845-9. Section 9.6: Nondeterministic Turing machines, pp. 277–281.
Christos Papadimitriou (1993). Computational Complexity (1st ed.). Addison-Wesley. ISBN 0-201-53082-1. Section 2.7: Nondeterministic machines, pp. 45–50.

P
Definition: The complexity class of languages that can be accepted by a deterministic Turing machine in polynomial time.

NP

Definition: The complexity class of decision problems for which answers can be checked by an algorithm whose run time is polynomial in the size of the input. Note that this doesn't require or imply that an answer can be found quickly, only that any claimed solution can be verified quickly. "NP" is the class that a Nondeterministic Turing machine accepts in Polynomial time.
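The "checked quickly" part of this definition can be illustrated with SUBSET-SUM: finding a subset of numbers that adds up to a target is not known to be solvable in polynomial time, but checking a claimed subset is linear. A hypothetical Python sketch (the function name and certificate format are invented for illustration):

```python
def verify_subset_sum(numbers, target, certificate):
    """Check a claimed solution -- a list of indices into `numbers` -- in
    time linear in the input. This quick check is what places SUBSET-SUM
    in NP, regardless of how hard the certificate was to find."""
    if len(set(certificate)) != len(certificate):
        return False                                  # each element used at most once
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False                                  # index out of range
    return sum(numbers[i] for i in certificate) == target

numbers = [3, 34, 4, 12, 5, 2]
assert verify_subset_sum(numbers, 9, [2, 4]) is True    # 4 + 5 == 9
assert verify_subset_sum(numbers, 9, [0, 1]) is False   # 3 + 34 != 9
```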

NP-hard
Definition: The complexity class of decision problems that are intrinsically harder than those that can be solved by a nondeterministic Turing machine in polynomial time. When a decision version of a combinatorial optimization problem is proved to belong to the class of NP-complete problems, then the optimization version is NP-hard.

NP-complete
Definition: The complexity class of decision problems for which answers can be checked for correctness, given a certificate, by an algorithm whose run time is polynomial in the size of the input (that is, it is NP) and no other NP problem is more than a polynomial factor harder. Informally, a problem is NP-complete if answers can be verified quickly, and a quick algorithm to solve this problem can be used to solve all other NP problems quickly.
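As a concrete instance, SAT, the canonical NP-complete problem, has a natural certificate: a truth assignment. Verifying the assignment takes one pass over the formula, even though no polynomial-time algorithm for finding it is known. An illustrative sketch (the clause encoding and names are assumptions, not from the text):

```python
def verify_sat(clauses, assignment):
    """clauses: a CNF formula as lists of nonzero ints, +v for variable v
    and -v for its negation; assignment: dict variable -> bool.
    Returns True iff every clause has at least one true literal; runs in
    time linear in the size of the formula."""
    def literal_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    return all(any(literal_true(lit) for lit in clause) for clause in clauses)

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
assert verify_sat(formula, {1: True, 2: True, 3: False}) is True
assert verify_sat(formula, {1: True, 2: False, 3: False}) is False
```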
