
Book:Algorithms

en.wikipedia.org
May 7, 2020

On 28 April 2012, the contents of the English and German Wikibooks and Wikipedia
projects were licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license. A
URI for this license is given in the list of figures on page 2055. If this document is a derived work
from the contents of one of these projects, and the content was still licensed by the project under
this license at the time of derivation, this document has to be licensed under the same, a similar, or a
compatible license, as stated in section 4b of the license. The list of contributors is included in the chapter
Contributors on page 1669. The licenses GPL, LGPL, and GFDL are included in the chapter Licenses on
page 2085, since this book and/or parts of it may or may not be licensed under one or more of these
licenses, which thus require inclusion. The licenses of the figures are given in the list of
figures on page 2055. This PDF was generated by the LaTeX typesetting software. The LaTeX source
code is included as an attachment (source.7z.txt) in this PDF file. To extract the source from
the PDF file, you can use the pdfdetach tool included in the poppler suite, or the http://www.
pdflabs.com/tools/pdftk-the-pdf-toolkit/ utility. Some PDF viewers may also let you save
the attachment to a file. After extracting it from the PDF file, you have to rename it to source.7z.
To uncompress the resulting archive, we recommend http://www.7-zip.org/. The LaTeX
source itself was generated by a program written by Dirk Hünniger, which is freely available under
an open source license from http://de.wikibooks.org/wiki/Benutzer:Dirk_Huenniger/wb2pdf.
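The extraction steps described above can be sketched as a short shell session (the filename book.pdf is a placeholder for whatever the downloaded PDF is called):

```shell
# Detach the embedded attachment from the PDF
# (pdfdetach is part of the poppler-utils package).
pdfdetach -saveall book.pdf

# The attachment arrives as source.7z.txt and must be
# renamed before unpacking.
mv source.7z.txt source.7z

# Unpack the 7-Zip archive (the 7z binary is provided by
# 7-Zip on Windows or p7zip on Linux/macOS).
7z x source.7z
```

On systems without 7z, any tool that understands the 7z format (for example, some builds of tar or a graphical archiver) should work equally well.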
Contents

1 Sorting algorithm 3
1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Comparison of algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Popular sorting algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Memory usage patterns and index sorting . . . . . . . . . . . . . . . . . 22
1.6 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2 Comparison sort 31
2.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2 Performance limits and advantages of different sorting techniques . . . . 33
2.3 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Number of comparisons required to sort a list . . . . . . . . . . . . . . . 35
2.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3 Selection sort 41
3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4 Comparison to other sorting algorithms . . . . . . . . . . . . . . . . . . 45
3.5 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4 Insertion sort 51
4.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Best, worst, and average cases . . . . . . . . . . . . . . . . . . . . . . . 55
4.3 Relation to other sorting algorithms . . . . . . . . . . . . . . . . . . . . 55
4.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.6 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

5 Merge sort 63
5.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2 Natural merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.5 Use with tape drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.6 Optimizing merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.7 Parallel merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.8 Comparison with other sort algorithms . . . . . . . . . . . . . . . . . . 81
5.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

6 Merge sort 87
6.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.2 Natural merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.5 Use with tape drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.6 Optimizing merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7 Parallel merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.8 Comparison with other sort algorithms . . . . . . . . . . . . . . . . . . 105
6.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

7 Quicksort 111
7.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.3 Formal analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
7.4 Relation to other algorithms . . . . . . . . . . . . . . . . . . . . . . . . 123
7.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

8 Heapsort 135
8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.3 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Comparison with other sorts . . . . . . . . . . . . . . . . . . . . . . . . 142
8.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

9 Bubble sort 153


9.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

9.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156


9.3 Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
9.4 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.5 Debate over name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
9.6 In popular culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
9.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
9.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

10 Shellsort 163
10.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.3 Gap sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
10.4 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 168
10.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.8 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
10.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

11 Integer sorting 175


11.1 General considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
11.2 Practical algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
11.3 Theoretical algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
11.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

12 Counting sort 189


12.1 Input and output assumptions . . . . . . . . . . . . . . . . . . . . . . . 190
12.2 The algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
12.3 Complexity analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
12.4 Variant algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
12.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
12.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
12.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

13 Bucket sort 195


13.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.3 Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
13.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
13.5 Comparison with other sorting algorithms . . . . . . . . . . . . . . . . . 200
13.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
13.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202

14 Radix sort 203


14.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
14.2 Digit order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
14.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

14.4 Complexity and performance . . . . . . . . . . . . . . . . . . . . . . . . 206


14.5 Specialized variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
14.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

15 Data structure 213


15.1 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
15.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
15.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
15.4 Language support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
15.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
15.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
15.7 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
15.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
15.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

16 Search algorithm 223


16.1 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
16.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
16.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
16.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

17 Linear search 231


17.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
17.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
17.3 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
17.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
17.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

18 Binary search algorithm 237


18.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
18.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
18.3 Binary search versus other schemes . . . . . . . . . . . . . . . . . . . . . 248
18.4 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
18.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
18.6 Implementation issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
18.7 Library support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
18.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
18.9 Notes and references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
18.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

19 Binary search tree 271


19.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
19.2 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
19.3 Examples of applications . . . . . . . . . . . . . . . . . . . . . . . . . . 280
19.4 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
19.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

19.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283


19.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
19.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
19.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

20 Trie 287
20.1 History and etymology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
20.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
20.3 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
20.4 Implementation strategies . . . . . . . . . . . . . . . . . . . . . . . . . . 292
20.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
20.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
20.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299

21 Hash table 301


21.1 Hashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
21.2 Key statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
21.3 Collision resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
21.4 Dynamic resizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
21.5 Performance analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
21.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
21.7 Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
21.8 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
21.9 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
21.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
21.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
21.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
21.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329

22 Hash function 331


22.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
22.2 Hash tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
22.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
22.4 Hashing integer data types . . . . . . . . . . . . . . . . . . . . . . . . . 340
22.5 Hashing variable-length data . . . . . . . . . . . . . . . . . . . . . . . . 345
22.6 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
22.7 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
22.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
22.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
22.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
22.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350

23 Collision (computer science) 351


23.1 Computer security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
23.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
23.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353

24 Perfect hash function 355


24.1 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
24.2 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
24.3 Space lower bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
24.4 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
24.5 Related constructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
24.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
24.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
24.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361

25 Open addressing 363


25.1 Example pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
25.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
25.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366

26 Linear probing 367


26.1 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
26.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
26.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
26.4 Choice of hash function . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
26.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
26.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374

27 Quadratic probing 379


27.1 Quadratic function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
27.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
27.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
27.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381

28 Double hashing 383


28.1 Selection of h2(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
28.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
28.3 Enhanced double hashing . . . . . . . . . . . . . . . . . . . . . . . . . . 384
28.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
28.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
28.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386

29 Cuckoo hashing 389


29.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
29.2 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
29.3 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
29.4 Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
29.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
29.6 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
29.7 Comparison with related structures . . . . . . . . . . . . . . . . . . . . 395
29.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
29.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
29.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397

30 Random number generation 399


30.1 Practical applications and uses . . . . . . . . . . . . . . . . . . . . . . . 401
30.2 "True" vs. pseudo-random numbers . . . . . . . . . . . . . . . . . . . . 402
30.3 Generation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
30.4 Post-processing and statistical checks . . . . . . . . . . . . . . . . . . . 407
30.5 Other considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
30.6 Low-discrepancy sequences as an alternative . . . . . . . . . . . . . . . . 408
30.7 Activities and demonstrations . . . . . . . . . . . . . . . . . . . . . . . . 408
30.8 Backdoors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
30.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
30.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
30.11 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
30.12 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414

31 Pseudorandom number generator 415


31.1 Potential problems with deterministic generators . . . . . . . . . . . . . 416
31.2 Generators based on linear recurrences . . . . . . . . . . . . . . . . . . . 417
31.3 Cryptographically secure pseudorandom number generators . . . . . . . 417
31.4 BSI evaluation criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
31.5 Mathematical definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
31.6 Early approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
31.7 Non-uniform generators . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
31.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
31.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
31.10 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
31.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425

32 Linear congruential generator 427


32.1 Period length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
32.2 Parameters in common use . . . . . . . . . . . . . . . . . . . . . . . . . 430
32.3 Advantages and disadvantages . . . . . . . . . . . . . . . . . . . . . . . 432
32.4 Sample Python code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
32.5 Sample Free Pascal code . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
32.6 LCG derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
32.7 Comparison with other PRNGs . . . . . . . . . . . . . . . . . . . . . . . 437
32.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
32.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
32.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
32.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443

33 Middle-square method 445


33.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
33.2 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
33.3 Middle Square Weyl Sequence PRNG . . . . . . . . . . . . . . . . . . . 448
33.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
33.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450

34 Xorshift 451
34.1 Example implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 451
34.2 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
34.3 xoshiro and xoroshiro . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
34.4 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
34.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
34.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
34.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460

35 Mersenne Twister 461


35.1 Adoption in software systems . . . . . . . . . . . . . . . . . . . . . . . . 461
35.2 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
35.3 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
35.4 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
35.5 k-distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
35.6 Algorithmic detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
35.7 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
35.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
35.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
35.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474

36 Cryptographically secure pseudorandom number generator 475


36.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
36.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
36.3 Entropy extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
36.4 Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
36.5 Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
36.6 NSA kleptographic backdoor in the Dual_EC_DRBG PRNG . . . . . . 482
36.7 Security flaws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
36.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
36.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486

37 Blum Blum Shub 489


37.1 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
37.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
37.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
37.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492

38 Blum–Micali algorithm 493


38.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
38.2 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494

39 Combinatorics 495
39.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
39.2 Approaches and subfields of combinatorics . . . . . . . . . . . . . . . . . 499
39.3 Related fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
39.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
39.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514

39.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516


39.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517

40 Cycle detection 519


40.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
40.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
40.3 Computer representation . . . . . . . . . . . . . . . . . . . . . . . . . . 521
40.4 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
40.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
40.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
40.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531

41 Stable marriage problem 533


41.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
41.2 Different stable matchings . . . . . . . . . . . . . . . . . . . . . . . . . . 534
41.3 Algorithmic solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
41.4 Rural hospitals theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
41.5 Related problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
41.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
41.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
41.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541

42 Graph theory 543


42.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
42.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
42.3 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
42.4 Graph drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
42.5 Graph-theoretic data structures . . . . . . . . . . . . . . . . . . . . . . . 557
42.6 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
42.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
42.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
42.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
42.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572

43 Graph coloring 575


43.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
43.2 Definition and terminology . . . . . . . . . . . . . . . . . . . . . . . . . 579
43.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
43.4 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
43.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
43.6 Other colorings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
43.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
43.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
43.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
43.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605

44 A* search algorithm 607


44.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
44.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
44.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
44.4 Bounded relaxation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
44.5 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
44.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
44.7 Relations to other algorithms . . . . . . . . . . . . . . . . . . . . . . . . 621
44.8 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
44.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
44.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
44.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
44.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
44.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627

45 Szemerédi regularity lemma 629


45.1 Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
45.2 Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
45.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
45.4 History and Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
45.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
45.6 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642

46 Alpha–beta pruning 643


46.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
46.2 Core idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
46.3 Improvements over naive minimax . . . . . . . . . . . . . . . . . . . . . 646
46.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
46.5 Heuristic improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
46.6 Other algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
46.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
46.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
46.9 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652

47 Aperiodic graph 655


47.1 Graphs that cannot be aperiodic . . . . . . . . . . . . . . . . . . . . . . 657
47.2 Testing for aperiodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
47.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
47.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658

48 B* 659
48.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
48.2 Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
48.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
48.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663

49 Barabási–Albert model 665


49.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
49.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
49.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
49.4 Limiting cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
49.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
49.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
49.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
49.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682

50 Belief propagation 683


50.1 Description of the sum-product algorithm . . . . . . . . . . . . . . . . . 684
50.2 Exact algorithm for trees . . . . . . . . . . . . . . . . . . . . . . . . . . 685
50.3 Approximate algorithm for general graphs . . . . . . . . . . . . . . . . . 686
50.4 Related algorithm and complexity issues . . . . . . . . . . . . . . . . . . 687
50.5 Relation to free energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
50.6 Generalized belief propagation (GBP) . . . . . . . . . . . . . . . . . . . 688
50.7 Gaussian belief propagation (GaBP) . . . . . . . . . . . . . . . . . . . . 688
50.8 Syndrome-based BP decoding . . . . . . . . . . . . . . . . . . . . . . . . 689
50.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
50.10 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694

51 Bellman–Ford algorithm 697


51.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
51.2 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
51.3 Finding negative cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
51.4 Applications in routing . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
51.5 Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
51.6 Trivia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
51.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
51.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705

52 Bidirectional search 709


52.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
52.2 Approaches for Bidirectional Heuristic Search . . . . . . . . . . . . . . . 712
52.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713

53 Borůvka's algorithm 715


53.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
53.2 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
53.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
53.4 Other algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
53.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720

54 Bottleneck traveling salesman problem 723


54.1 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
54.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
54.3 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
54.4 Metric approximation algorithm . . . . . . . . . . . . . . . . . . . . . . 724
54.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
54.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725

55 Breadth-first search 727


55.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
55.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
55.3 BFS ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
55.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
55.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
55.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
55.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735

56 Bron–Kerbosch algorithm 737


56.1 Without pivoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
56.2 With pivoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
56.3 With vertex ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
56.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
56.5 Worst-case analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
56.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
56.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
56.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743

57 Centrality 745
57.1 Definition and characterization of centrality indices . . . . . . . . . . . . 748
57.2 Important limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
57.3 Degree centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
57.4 Closeness centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
57.5 Betweenness centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
57.6 Eigenvector centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
57.7 Katz centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.8 PageRank centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.9 Percolation centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.10 Cross-clique centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
57.11 Freeman centralization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
57.12 Dissimilarity based centrality measures . . . . . . . . . . . . . . . . . . 762
57.13 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
57.14 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
57.15 Notes and references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
57.16 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769

58 Chaitin's algorithm 771


58.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771

59 Christofides algorithm 773


59.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
59.2 Approximation ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
59.3 Lower bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
59.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
59.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
59.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779

60 Clique percolation method 781


60.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
60.2 Percolation transition in the CPM . . . . . . . . . . . . . . . . . . . . . 784
60.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
60.4 Algorithms and Software . . . . . . . . . . . . . . . . . . . . . . . . . . 784
60.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
60.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785

61 Closure problem 791


61.1 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
61.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
61.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795

62 Color-coding 797
62.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
62.2 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
62.3 Derandomization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
62.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
62.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
62.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801

63 Contraction hierarchies 803


63.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
63.2 Customized contraction hierarchies . . . . . . . . . . . . . . . . . . . . . 807
63.3 Extensions and applications . . . . . . . . . . . . . . . . . . . . . . . . . 808
63.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
63.5 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811

64 Courcelle's theorem 813


64.1 Formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
64.2 Proof strategy and complexity . . . . . . . . . . . . . . . . . . . . . . . 815
64.3 Bojańczyk-Pilipczuk's theorem . . . . . . . . . . . . . . . . . . . . . . . 816
64.4 Satisfiability and Seese's theorem . . . . . . . . . . . . . . . . . . . . . . 817
64.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
64.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818

65 Cuthill–McKee algorithm 825


65.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
65.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
65.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828

66 D* 829
66.1 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
66.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
66.3 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
66.4 Minimum cost versus current cost . . . . . . . . . . . . . . . . . . . . . 840
66.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
66.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841

67 Depth-first search 843


67.1 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
67.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
67.3 Output of a depth-first search . . . . . . . . . . . . . . . . . . . . . . . . 847
67.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
67.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
67.6 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
67.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
67.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
67.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
67.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854

68 Iterative deepening depth-first search 855


68.1 Algorithm for directed graphs . . . . . . . . . . . . . . . . . . . . . . . . 857
68.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
68.3 Asymptotic analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
68.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
68.5 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
68.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864

69 Dijkstra's algorithm 865


69.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
69.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
69.3 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
69.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
69.5 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
69.6 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
69.7 Related problems and algorithms . . . . . . . . . . . . . . . . . . . . . . 879
69.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
69.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
69.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
69.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885

70 Dijkstra–Scholten algorithm 887


70.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
70.2 Dijkstra–Scholten algorithm for a tree . . . . . . . . . . . . . . . . . . . 888
70.3 Dijkstra–Scholten algorithm for directed acyclic graphs . . . . . . . . . 888
70.4 Dijkstra–Scholten algorithm for cyclic directed graphs . . . . . . . . . . 888
70.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
70.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889

71 Dinic's algorithm 891


71.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
71.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
71.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
71.4 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
71.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
71.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
71.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
71.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895

72 Double pushout graph rewriting 897


72.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
72.2 Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
72.3 Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
72.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898

73 Dulmage–Mendelsohn decomposition 901


73.1 The coarse decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 901
73.2 The fine decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
73.3 Core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
73.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
73.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
73.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904

74 Edmonds' algorithm 905


74.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
74.2 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
74.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
74.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908

75 Blossom algorithm 909


75.1 Augmenting paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
75.2 Blossoms and contractions . . . . . . . . . . . . . . . . . . . . . . . . . . 911
75.3 Finding an augmenting path . . . . . . . . . . . . . . . . . . . . . . . . 914
75.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918

76 Edmonds–Karp algorithm 921


76.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
76.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
76.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
76.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
76.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925

77 Euler tour technique 927


77.1 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
77.2 Roots, advance and retreat edges . . . . . . . . . . . . . . . . . . . . . . 928
77.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
77.4 Euler tour trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
77.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930

78 FKT algorithm 931


78.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
78.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
78.3 Generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
78.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
78.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
78.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938

79 Flooding algorithm 939


79.1 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
79.2 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939

80 Flow network 941


80.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
80.2 Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
80.3 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
80.4 Concepts useful to flow problems . . . . . . . . . . . . . . . . . . . . . . 943
80.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
80.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
80.7 Classifying flow problems . . . . . . . . . . . . . . . . . . . . . . . . . . 945
80.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
80.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
80.10 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
80.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949

81 Floyd–Warshall algorithm 951


81.1 History and naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
81.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
81.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
81.4 Behavior with negative cycles . . . . . . . . . . . . . . . . . . . . . . . . 956
81.5 Path reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
81.6 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
81.7 Applications and generalizations . . . . . . . . . . . . . . . . . . . . . . 958
81.8 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
81.9 Comparison with other shortest path algorithms . . . . . . . . . . . . . 959
81.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
81.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961

82 Force-directed graph drawing 963


82.1 Forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
82.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
82.3 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
82.4 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
82.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
82.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
82.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
82.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
82.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971

83 Ford–Fulkerson algorithm 973


83.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
83.2 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
83.3 Integral example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
83.4 Non-terminating example . . . . . . . . . . . . . . . . . . . . . . . . . . . 977
83.5 Python implementation of Edmonds–Karp algorithm . . . . . . . . . . . . . 978
83.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
83.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
83.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
83.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980

84 Fringe search 983


84.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
84.2 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
84.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
84.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987

85 Girvan–Newman algorithm 989


85.1 Edge betweenness and community structure . . . . . . . . . . . . . . . . 989
85.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
85.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990

86 Goal node (computer science) 991


86.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
86.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991

87 Gomory–Hu tree 993


87.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
87.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
87.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
87.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
87.5 Implementations: Sequential and Parallel . . . . . . . . . . . . . . . . . 999
87.6 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
87.7 Related concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
87.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
87.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000

88 Graph bandwidth 1003


88.1 Bandwidth formulas for some graphs . . . . . . . . . . . . . . . . . . . . 1003
88.2 Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
88.3 Computing the bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
88.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
88.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
88.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
88.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007

89 Graph embedding 1009


89.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1010
89.2 Combinatorial embedding . . . . . . . . . . . . . . . . . . . . . . . . . . 1010
89.3 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 1011
89.4 Embeddings of graphs into higher-dimensional spaces . . . . . . . . . . 1011
89.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
89.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012

90 Graph isomorphism 1015


90.1 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
90.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
90.3 Whitney theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
90.4 Recognition of graph isomorphism . . . . . . . . . . . . . . . . . . . . . 1018
90.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
90.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
90.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020

91 Graph isomorphism problem 1021


91.1 State of the art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
91.2 Solved special cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
91.3 Complexity class GI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
91.4 Program checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
91.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
91.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
91.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
91.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1031

92 Graph kernel 1041


92.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
92.2 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
92.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043

93 Graph reduction 1045


93.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
93.2 Combinator graph reduction . . . . . . . . . . . . . . . . . . . . . . . . 1048
93.3 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
93.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
93.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
93.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
93.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049

94 Graph traversal 1051


94.1 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
94.2 Graph traversal algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 1053
94.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
94.4 Graph exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
94.5 Universal traversal sequences . . . . . . . . . . . . . . . . . . . . . . . . 1057
94.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
94.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058

95 Hierarchical clustering of networks 1059


95.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
95.2 Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
95.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060
95.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060

96 Hopcroft–Karp algorithm 1061


96.1 Augmenting paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1062
96.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
96.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
96.4 Comparison with other bipartite matching algorithms . . . . . . . . . . 1065
96.5 Non-bipartite graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
96.6 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
96.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1068
96.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
96.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069

97 Iterative deepening A* 1073


97.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
97.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
97.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076
97.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
97.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077

98 Iterative deepening depth-first search 1079


98.1 Algorithm for directed graphs . . . . . . . . . . . . . . . . . . . . . . . . 1081
98.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
98.3 Asymptotic analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
98.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
98.5 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
98.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088

99 Johnson's algorithm 1089


99.1 Algorithm description . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
99.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
99.3 Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
99.4 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
99.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
99.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094

100 Journal of Graph Algorithms and Applications 1095


100.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
100.2 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096

101 Jump point search 1097


101.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
101.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
101.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1099

102 k shortest path routing 1101


102.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
102.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
102.3 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
102.4 Some examples and description . . . . . . . . . . . . . . . . . . . . . . . 1103
102.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105

102.6 Related problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
102.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
102.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
102.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107

103 Karger's algorithm 1109


103.1 The global minimum cut problem . . . . . . . . . . . . . . . . . . . . . 1110
103.2 Contraction algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111
103.3 Karger–Stein algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
103.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1117

104 Knight's tour 1119


104.1 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1121
104.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1122
104.3 Existence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125
104.4 Number of tours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
104.5 Finding tours with computers . . . . . . . . . . . . . . . . . . . . . . . . 1126
104.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1131
104.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1131
104.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1134

105 Kosaraju's algorithm 1135


105.1 The algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
105.2 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
105.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137
105.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137

106 Kruskal's algorithm 1139


106.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
106.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
106.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
106.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
106.5 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
106.6 Parallel algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
106.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
106.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
106.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1150

107 Lexicographic breadth-first search 1151


107.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152
107.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
107.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154
107.4 LexBFS ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155
107.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
107.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156

108 Longest path problem 1157


108.1 NP-hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
108.2 Acyclic graphs and critical paths . . . . . . . . . . . . . . . . . . . . . . 1158

108.3 Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
108.4 Parameterized complexity . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
108.5 Special classes of graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
108.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
108.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
108.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1165

109 Minimax 1167


109.1 Game theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167
109.2 Combinatorial game theory . . . . . . . . . . . . . . . . . . . . . . . . . 1170
109.3 Minimax for individual decisions . . . . . . . . . . . . . . . . . . . . . . 1175
109.4 Maximin in philosophy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
109.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
109.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
109.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1178

110 Minimum cut 1181


110.1 Without terminal nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
110.2 With terminal nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
110.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
110.4 Number of minimum cuts . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
110.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
110.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183

111 Nearest neighbour algorithm 1185


111.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185
111.2 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
111.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186

112 Nonblocking minimal spanning switch 1187


112.1 Background: switching topologies . . . . . . . . . . . . . . . . . . . . . . 1189
112.2 Practical implementations of switches . . . . . . . . . . . . . . . . . . . 1192
112.3 Digital switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
112.4 Example of rerouting a switch . . . . . . . . . . . . . . . . . . . . . . . 1195
112.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
112.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197

113 Path-based strong component algorithm 1199


113.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
113.2 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
113.3 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
113.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200

114 Prim's algorithm 1203


114.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
114.2 Time complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
114.3 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
114.4 Parallel algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
114.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212

114.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
114.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214

115 Proof-number search 1217


115.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
115.2 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218

116 Push–relabel maximum flow algorithm 1219


116.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
116.2 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
116.3 The generic push–relabel algorithm . . . . . . . . . . . . . . . . . . . . . 1222
116.4 Practical implementations . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
116.5 Sample implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
116.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232

117 Reverse-delete algorithm 1235


117.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
117.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
117.3 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
117.4 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
117.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
117.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240

118 Sethi–Ullman algorithm 1241


118.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
118.2 Simple Sethi–Ullman algorithm . . . . . . . . . . . . . . . . . . . . . . . 1242
118.3 Advanced Sethi–Ullman algorithm . . . . . . . . . . . . . . . . . . . . . 1243
118.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
118.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
118.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244

119 Shortest Path Faster Algorithm 1245


119.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
119.2 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
119.3 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
119.4 Optimization techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
119.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249

120 Shortest path problem 1251


120.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
120.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253
120.3 Single-source shortest paths . . . . . . . . . . . . . . . . . . . . . . . . . 1253
120.4 All-pairs shortest paths . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
120.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1257
120.6 Related problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1258
120.7 Linear programming formulation . . . . . . . . . . . . . . . . . . . . . . 1260
120.8 General algebraic framework on semirings: the algebraic path problem . 1260
120.9 Shortest path in stochastic time-dependent networks . . . . . . . . . . . 1261
120.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262

120.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
120.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1268

121 SMA* 1271


121.1 Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
121.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
121.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
121.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274

122 Spectral layout 1275


122.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275

123 Strongly connected component 1277


123.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
123.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
123.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
123.4 Related results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
123.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
123.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
123.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284

124 Subgraph isomorphism problem 1285


124.1 Decision problem and computational complexity . . . . . . . . . . . . . 1285
124.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1286
124.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
124.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
124.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
124.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289

125 Suurballe's algorithm 1293


125.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
125.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
125.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
125.4 Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
125.5 Analysis and running time . . . . . . . . . . . . . . . . . . . . . . . . . 1296
125.6 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
125.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
125.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1297

126 Tarjan's off-line lowest common ancestors algorithm 1299


126.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1299
126.2 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1300

127 Tarjan's strongly connected components algorithm 1301


127.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1301
127.2 The algorithm in pseudocode . . . . . . . . . . . . . . . . . . . . . . . . 1302
127.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
127.4 Additional remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
127.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304

128 Topological sorting 1307


128.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
128.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1308
128.3 Application to shortest path finding . . . . . . . . . . . . . . . . . . . . 1313
128.4 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
128.5 Relation to partial orders . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
128.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
128.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1315
128.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316
128.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316

129 Transitive closure 1317


129.1 Transitive relations and examples . . . . . . . . . . . . . . . . . . . . . . 1317
129.2 Existence and description . . . . . . . . . . . . . . . . . . . . . . . . . . 1318
129.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
129.4 In graph theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
129.5 In logic and computational complexity . . . . . . . . . . . . . . . . . . . 1320
129.6 In database query languages . . . . . . . . . . . . . . . . . . . . . . . . 1321
129.7 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
129.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
129.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
129.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1323

130 Transitive reduction 1325


130.1 In acyclic directed graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 1325
130.2 In graphs with cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
130.3 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 1327
130.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329
130.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329
130.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330

131 Travelling salesman problem 1331


131.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
131.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1336
131.3 Integer linear programming formulations . . . . . . . . . . . . . . . . . . 1338
131.4 Computing a solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
131.5 Special cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
131.6 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 1355
131.7 Human and animal performance . . . . . . . . . . . . . . . . . . . . . . 1356
131.8 Natural computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1356
131.9 Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.10 Popular culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.11 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.12 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.13 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
131.14 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
131.15 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371

132 Tree traversal 1373


132.1 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
132.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
132.3 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
132.4 Infinite trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
132.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
132.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386

133 Dijkstra's algorithm 1387


133.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
133.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
133.3 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
133.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
133.5 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
133.6 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
133.7 Related problems and algorithms . . . . . . . . . . . . . . . . . . . . . . 1401
133.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
133.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
133.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
133.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1407

134 Widest path problem 1409


134.1 Undirected graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1410
134.2 Directed graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1412
134.3 Euclidean point sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415
134.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1416

135 Yen's algorithm 1421


135.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
135.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
135.3 Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
135.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
135.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
135.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428

136 Hungarian algorithm 1429


136.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
136.2 The algorithm in terms of bipartite graphs . . . . . . . . . . . . . . . . 1431
136.3 Matrix interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
136.4 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
136.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
136.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1436

137 Prüfer sequence 1439


137.1 Algorithm to convert a tree into a Prüfer sequence . . . . . . . . . . . . 1439
137.2 Algorithm to convert a Prüfer sequence into a tree . . . . . . . . . . . . 1441
137.3 Cayley's formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1441
137.4 Other applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442

137.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
137.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443

138 Graph drawing 1445


138.1 Graphical conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
138.2 Quality measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
138.3 Layout methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
138.4 Application-specific graph drawings . . . . . . . . . . . . . . . . . . . . 1454
138.5 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
138.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
138.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463

139 Analysis of algorithms 1465


139.1 Cost models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
139.2 Run-time analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1469
139.3 Relevance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1474
139.4 Constant factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
139.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
139.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476
139.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477
139.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478

140 Time complexity 1479


140.1 Table of common time complexities . . . . . . . . . . . . . . . . . . . . 1480
140.2 Constant time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
140.3 Logarithmic time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
140.4 Polylogarithmic time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
140.5 Sub-linear time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
140.6 Linear time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
140.7 Quasilinear time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
140.8 Sub-quadratic time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1485
140.9 Polynomial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
140.10 Superpolynomial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
140.11 Quasi-polynomial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
140.12 Sub-exponential time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1490
140.13 Exponential time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1491
140.14 Factorial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
140.15 Double exponential time . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
140.16 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
140.17 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1493

141 Space complexity 1497


141.1 Space complexity classes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1497
141.2 Relationships between classes . . . . . . . . . . . . . . . . . . . . . . . . 1498
141.3 LOGSPACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1498
141.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1499

142 Big O notation 1501


142.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503
142.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1504
142.3 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1505
142.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1507
142.5 Multiple variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
142.6 Matters of notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
142.7 Orders of common functions . . . . . . . . . . . . . . . . . . . . . . . . 1511
142.8 Related asymptotic notations . . . . . . . . . . . . . . . . . . . . . . . . 1513
142.9 Generalizations and related usages . . . . . . . . . . . . . . . . . . . . . 1518
142.10 History (Bachmann–Landau, Hardy, and Vinogradov notations) . . . . 1519
142.11 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1520
142.12 References and notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1520
142.13 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1523
142.14 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1524

143 Master theorem 1527

144 Best, worst and average case 1529


144.1 Best-case performance for algorithms . . . . . . . . . . . . . . . . . . 1530
144.2 Worst-case versus average-case performance . . . . . . . . . . . . . . . . 1530
144.3 Practical consequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1532
144.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1532
144.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
144.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1535

145 Amortized analysis 1537


145.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1537
145.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1537
145.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1539
145.4 Common use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1540
145.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541

146 Computational complexity theory 1543


146.1 Computational problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 1544
146.2 Machine models and complexity measures . . . . . . . . . . . . . . . . . 1548
146.3 Complexity classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
146.4 Important open problems . . . . . . . . . . . . . . . . . . . . . . . . . . 1557
146.5 Intractability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1559
146.6 Continuous complexity theory . . . . . . . . . . . . . . . . . . . . . . . . 1561
146.7 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
146.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1562
146.9 Works on Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563
146.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563
146.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1568

147 Complexity class 1571


147.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1573

147.2 Common complexity classes . . . . . . . . . . . . . . . . . . . . . . . 1575
147.3 Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1584
147.4 Closure properties of classes . . . . . . . . . . . . . . . . . . . . . . . . . 1585
147.5 Relationships between complexity classes . . . . . . . . . . . . . . . . . 1585
147.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1588
147.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1588
147.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1589
147.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1589

148 P (complexity) 1591


148.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1591
148.2 Notable problems in P . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1592
148.3 Relationships to other classes . . . . . . . . . . . . . . . . . . . . . . . . 1592
148.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1593
148.5 Pure existence proofs of polynomial-time algorithms . . . . . . . . . . . 1594
148.6 Alternative characterizations . . . . . . . . . . . . . . . . . . . . . . . . 1594
148.7 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1594
148.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1595
148.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1596
148.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1597

149 NP (complexity) 1599


149.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1600
149.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1601
149.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1602
149.4 Why some NP problems are hard to solve . . . . . . . . . . . . . . . . . 1603
149.5 Equivalence of definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 1603
149.6 Relationship to other classes . . . . . . . . . . . . . . . . . . . . . . . . 1604
149.7 Other characterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1604
149.8 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1605
149.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
149.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
149.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
149.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1607
149.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1607

150 NP-hardness 1609


150.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1610
150.2 Consequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1610
150.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1611
150.4 NP-naming convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1611
150.5 Application areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1612
150.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1613

151 NP-completeness 1615


151.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1616
151.2 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1617
151.3 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1617


151.4 NP-complete problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1619


151.5 Solving NP-complete problems . . . . . . . . . . . . . . . . . . . . . . . 1621
151.6 Completeness under different types of reduction . . . . . . . . . . . . . . 1622
151.7 Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1623
151.8 Common misconceptions . . . . . . . . . . . . . . . . . . . . . . . . . . 1623
151.9 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1624
151.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1624
151.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
151.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1629

152 PSPACE 1631


152.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
152.2 Relation among other classes . . . . . . . . . . . . . . . . . . . . . . . . 1632
152.3 Closure properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
152.4 Other characterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
152.5 PSPACE-completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1634
152.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1634

153 EXPSPACE 1637


153.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
153.2 Examples of problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
153.3 Relationship to other classes . . . . . . . . . . . . . . . . . . . . . . . . 1638
153.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
153.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639

154 P versus NP problem 1641


154.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
154.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
154.3 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
154.4 NP-completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
154.5 Harder problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
154.6 Problems in NP not known to be in P or NP-complete . . . . . . . . . . 1647
154.7 Does P mean ”easy”? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1649
154.8 Reasons to believe P ≠NP or P = NP . . . . . . . . . . . . . . . . . . . 1650
154.9 Consequences of solution . . . . . . . . . . . . . . . . . . . . . . . . . . 1651
154.10 Results about difficulty of proof . . . . . . . . . . . . . . . . . . . . . . 1653
154.11 Claimed solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
154.12 Logical characterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
154.13 Polynomial-time algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 1656
154.14 Formal definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1657
154.15 Popular culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1658
154.16 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1659
154.17 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1659
154.18 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1659
154.19 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
154.20 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666

155 Contributors 1669


List of Figures 2055

156 Licenses 2085


156.1 GNU GENERAL PUBLIC LICENSE . . . . . . . . . . . . . . . . . . . 2085
156.2 GNU Free Documentation License . . . . . . . . . . . . . . . . . . . . . 2086
156.3 GNU Lesser General Public License . . . . . . . . . . . . . . . . . . . . 2087

1 Sorting algorithm

An algorithm that arranges lists in order


In computer science13 , a sorting algorithm is an algorithm14 that puts elements of a
list15 in a certain order16 . The most frequently used orders are numerical order17 and
lexicographical order18 . Efficient sorting19 is important for optimizing the efficiency20 of
other algorithms (such as search21 and merge22 algorithms) that require input data to be
in sorted lists. Sorting is also often useful for canonicalizing23 data and for producing
human-readable output. More formally, the output of any sorting algorithm must satisfy
two conditions:

13 https://en.wikipedia.org/wiki/Computer_science
14 https://en.wikipedia.org/wiki/Algorithm
15 https://en.wikipedia.org/wiki/List_(computing)
16 https://en.wikipedia.org/wiki/Total_order
17 https://en.wikipedia.org/wiki/Numerical_order
18 https://en.wikipedia.org/wiki/Lexicographical_order
19 https://en.wikipedia.org/wiki/Sorting
20 https://en.wikipedia.org/wiki/Algorithmic_efficiency
21 https://en.wikipedia.org/wiki/Search_algorithm
22 https://en.wikipedia.org/wiki/Merge_algorithm
23 https://en.wikipedia.org/wiki/Canonicalization


1. The output is in nondecreasing order (each element is no smaller than the previous
element according to the desired total order24 );
2. The output is a permutation25 (a reordering, yet retaining all of the original elements)
of the input.
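These two conditions can be checked mechanically. The following Python sketch verifies both for a candidate output (the function name is illustrative, not part of any standard library):

```python
from collections import Counter

def is_valid_sort(output, original):
    """Check the two formal conditions on a sorting algorithm's output."""
    # 1. Nondecreasing order: each element is no smaller than the previous one.
    in_order = all(output[i] <= output[i + 1] for i in range(len(output) - 1))
    # 2. Permutation of the input: same elements with the same multiplicities.
    is_permutation = Counter(output) == Counter(original)
    return in_order and is_permutation
```

Note that both conditions are needed: `[1, 2, 3]` is in order but is not a valid sort of `[1, 2, 4]`, and `[2, 1, 3]` contains the right elements but is not in order.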
Further, the input data is often stored in an array26 , which allows random access27 , rather
than a list, which only allows sequential access28 ; though many algorithms can be applied
to either type of data after suitable modification.
Sorting algorithms are often referred to by a descriptive word followed by the word ”sort”,
and grammatically such names are used in English as noun phrases; for example, in the
sentence ”it is inefficient to use insertion sort on large lists”, the phrase insertion sort refers
to the insertion sort29 sorting algorithm.

1.1 History

From the beginning of computing, the sorting problem has attracted a great deal of research,
perhaps due to the complexity of solving it efficiently despite its simple, familiar statement.
Among the authors of early sorting algorithms around 1951 was Betty Holberton30 (née
Snyder), who worked on ENIAC31 and UNIVAC32 .[1][2] Bubble sort33 was analyzed as early
as 1956.[3] Comparison sorting algorithms have a fundamental requirement of Ω(n log n)34
comparisons (some input sequences will require a multiple of n log n comparisons); algo-
rithms not based on comparisons, such as counting sort35 , can have better performance.
Asymptotically optimal algorithms have been known since the mid-20th century—useful
new algorithms are still being invented, with the now widely used Timsort36 dating to 2002,
and the library sort37 being first published in 2006.
Sorting algorithms are prevalent in introductory computer science38 classes, where the abun-
dance of algorithms for the problem provides a gentle introduction to a variety of core algo-
rithm concepts, such as big O notation39 , divide and conquer algorithms40 , data structures41

24 https://en.wikipedia.org/wiki/Total_order
25 https://en.wikipedia.org/wiki/Permutation
26 https://en.wikipedia.org/wiki/Array_data_type
27 https://en.wikipedia.org/wiki/Random_access
28 https://en.wikipedia.org/wiki/Sequential_access
29 https://en.wikipedia.org/wiki/Insertion_sort
30 https://en.wikipedia.org/wiki/Betty_Holberton
31 https://en.wikipedia.org/wiki/ENIAC
32 https://en.wikipedia.org/wiki/UNIVAC
33 https://en.wikipedia.org/wiki/Bubble_sort
34 https://en.wikipedia.org/wiki/Big_omega_notation
35 https://en.wikipedia.org/wiki/Counting_sort
36 https://en.wikipedia.org/wiki/Timsort
37 https://en.wikipedia.org/wiki/Library_sort
38 https://en.wikipedia.org/wiki/Computer_science
39 https://en.wikipedia.org/wiki/Big_O_notation
40 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
41 https://en.wikipedia.org/wiki/Data_structure


such as heaps42 and binary trees43 , randomized algorithms44 , best, worst and average case45
analysis, time–space tradeoffs46 , and upper and lower bounds47 .

1.2 Classification

Sorting algorithms are often classified by:


• Computational complexity48 (worst, average and best49 behavior) in terms of the size of
the list (n). For typical serial sorting algorithms good behavior is O(n log n), with parallel
sort in O(log² n), and bad behavior is O(n²). (See Big O notation50 .) Ideal behavior for
a serial sort is O(n), but this is not possible in the average case. Optimal parallel sorting
is O(log n). Comparison-based sorting algorithms51 need at least Ω(n log n) comparisons
for most inputs.
• Computational complexity52 of swaps (for ”in-place” algorithms).
• Memory53 usage (and use of other computer resources). In particular, some sorting
algorithms are ”in-place54 ”. Strictly, an in-place sort needs only O(1) memory beyond the
items being sorted; sometimes O(log(n)) additional memory is considered ”in-place”.
• Recursion: some algorithms are recursive, others are non-recursive, and some may be
implemented either way (e.g., merge sort).
• Stability: stable sorting algorithms55 maintain the relative order of records with equal
keys (i.e., values).
• Whether or not they are a comparison sort56 . A comparison sort examines the data only
by comparing two elements with a comparison operator.
• General method: insertion, exchange, selection, merging, etc. Exchange sorts include
bubble sort and quicksort. Selection sorts include shaker sort and heapsort.
• Whether the algorithm is serial or parallel. The remainder of this discussion almost
exclusively concentrates upon serial algorithms and assumes serial operation.
• Adaptability: Whether or not the presortedness of the input affects the running time.
Algorithms that take this into account are known to be adaptive57 .
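The comparison counts referred to above can be observed empirically. This Python sketch (illustrative only) counts the comparisons made by the built-in sort, a Timsort, on a random list; any correct comparison sort needs at least n − 1 comparisons, and on random data the count lands near n log₂ n:

```python
import functools
import random

comparisons = 0

def counting_cmp(x, y):
    """Three-way comparator that counts how often it is invoked."""
    global comparisons
    comparisons += 1
    return (x > y) - (x < y)

data = random.sample(range(1000), 1000)
sorted(data, key=functools.cmp_to_key(counting_cmp))
# For n = 1000, n log2 n is roughly 10,000; the observed count is of that order.
```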

42 https://en.wikipedia.org/wiki/Heap_(data_structure)
43 https://en.wikipedia.org/wiki/Binary_tree
44 https://en.wikipedia.org/wiki/Randomized_algorithm
45 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
46 https://en.wikipedia.org/wiki/Time%E2%80%93space_tradeoff
47 https://en.wikipedia.org/wiki/Upper_and_lower_bounds
48 https://en.wikipedia.org/wiki/Computational_complexity_theory
49 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
50 https://en.wikipedia.org/wiki/Big_O_notation
51 https://en.wikipedia.org/wiki/Comparison_sort
52 https://en.wikipedia.org/wiki/Computational_complexity_theory
53 https://en.wikipedia.org/wiki/Memory_(computing)
54 https://en.wikipedia.org/wiki/In-place_algorithm
55 #Stability
56 https://en.wikipedia.org/wiki/Comparison_sort
57 https://en.wikipedia.org/wiki/Adaptive_sort


1.2.1 Stability

Figure 2 An example of stable sort on playing cards. When the cards are sorted by
rank with a stable sort, the two 5s must remain in the same order in the sorted output
that they were originally in. When they are sorted with a non-stable sort, the 5s may end
up in the opposite order in the sorted output.

Stable sort algorithms sort repeated elements in the same order that they appear in the
input. When sorting some kinds of data, only part of the data is examined when determining
the sort order. For example, in the card sorting example to the right, the cards are being
sorted by their rank, and their suit is being ignored. This allows the possibility of multiple
different correctly sorted versions of the original list. Stable sorting algorithms choose one


of these, according to the following rule: if two items compare as equal, like the two 5 cards,
then their relative order will be preserved, so that if one came before the other in the input,
it will also come before the other in the output.
Stability is important for the following reason: say that student records consisting of name
and class section are sorted dynamically on a web page, first by name, then by class section
in a second operation. If a stable sorting algorithm is used in both cases, the sort-by-
class-section operation will not change the name order; with an unstable sort, it could be
that sorting by section shuffles the name order. Using a stable sort, users can choose to
sort by section and then by name, by first sorting using name and then sort again using
section, resulting in the name order being preserved. (Some spreadsheet programs obey
this behavior: sorting by name, then by section yields an alphabetical list of students by
section.)
More formally, the data being sorted can be represented as a record or tuple of values, and
the part of the data that is used for sorting is called the key. In the card example, cards are
represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable
if whenever there are two records R and S with the same key, and R appears before S in
the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any
data where the entire element is the key, stability is not an issue. Stability is also not an
issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing
this is to artificially extend the key comparison, so that comparisons between two objects
with otherwise equal keys are decided using the order of the entries in the original input list
as a tie-breaker. Remembering this order, however, may require additional time and space.
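The key-extension technique can be sketched in a few lines of Python. Here `stabilized` and its parameters are illustrative names; `unstable_sort` stands for any function that sorts a list (built-in `sorted` is used below as a stand-in):

```python
def stabilized(unstable_sort, items, key):
    """Make any (possibly unstable) sort stable by extending the key:
    ties on the real key are broken by original input position."""
    decorated = [(key(item), index, item) for index, item in enumerate(items)]
    # The appended index guarantees no two decorated keys compare equal,
    # so the underlying sort's tie-breaking behavior no longer matters.
    decorated = unstable_sort(decorated)
    return [item for _, _, item in decorated]
```

The cost of remembering the original order is visible here: the decorated list stores an extra integer per element, and each comparison may examine it.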
One application for stable sorting algorithms is sorting a list using a primary and secondary
key. For example, suppose we wish to sort a hand of cards such that the suits are in the
order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are
sorted by rank. This can be done by first sorting the cards by rank (using any sort), and
then doing a stable sort by suit:


Figure 3

Within each suit, the stable sort preserves the ordering by rank that was already done. This
idea can be extended to any number of keys and is utilised by radix sort58 . The same effect
can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g.,
compares first by suit, and then compares by rank if the suits are the same.
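The two-pass approach can be shown directly. In this Python sketch the card representation is hypothetical, and the second pass relies on the documented stability of Python's built-in sort (a Timsort):

```python
suits = ['clubs', 'diamonds', 'hearts', 'spades']
cards = [('4', 'hearts'), ('9', 'clubs'), ('4', 'clubs'), ('9', 'hearts')]

# First pass: sort by rank (any correct sort would do here).
cards.sort(key=lambda card: card[0])
# Second pass: a stable sort by suit. Because the sort is stable, the
# rank order established by the first pass survives within each suit.
cards.sort(key=lambda card: suits.index(card[1]))
```

After both passes the hand reads clubs first, then hearts, with ranks ascending inside each suit.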

1.3 Comparison of algorithms

In this table, n is the number of records to be sorted. The columns ”Average” and ”Worst”
give the time complexity59 in each case, under the assumption that the length of each
key is constant, and that therefore all comparisons, swaps, and other needed operations can
proceed in constant time. ”Memory” denotes the amount of auxiliary storage needed beyond
that used by the list itself, under the same assumption. The run times and the memory
requirements listed below should be understood to be inside big O notation60 , hence the
base of the logarithms does not matter; the notation log² n means (log n)².

58 https://en.wikipedia.org/wiki/Radix_sort
59 https://en.wikipedia.org/wiki/Time_complexity
60 https://en.wikipedia.org/wiki/Big_O_notation


1.3.1 Comparison sorts

Below is a table of comparison sorts61 . A comparison sort cannot perform better than
O(n log n).[4]
Comparison sorts62

Name | Best / Average / Worst | Memory | Stable | Method | Other notes
Quicksort63 | n log n / n log n / n² | log n | No | Partitioning | Quicksort is usually done in-place with O(log n) stack space.[5][6]
Merge sort64 | n log n / n log n / n log n | n | Yes | Merging | Highly parallelizable65 (up to O(log n) using the Three Hungarians' Algorithm).[7]
In-place merge sort66 | — / — / n log² n | 1 | Yes | Merging | Can be implemented as a stable sort based on stable in-place merging.[8]
Introsort67 | n log n / n log n / n log n | log n | No | Partitioning & Selection | Used in several STL68 implementations.
Heapsort69 | n log n / n log n / n log n | 1 | No | Selection |
Insertion sort70 | n / n² / n² | 1 | Yes | Insertion | O(n + d), in the worst case over sequences that have d inversions71 .
Block sort72 | n / n log n / n log n | 1 | Yes | Insertion & Merging | Combines a block-based O(n) in-place merge algorithm[9] with a bottom-up merge sort73 .
Quadsort | n / n log n / n log n | n | Yes | Merging | Uses a 4-input sorting network74 .[10]
Timsort75 | n / n log n / n log n | n | Yes | Insertion & Merging | Makes n comparisons when the data is already sorted or reverse sorted.
Selection sort76 | n² / n² / n² | 1 | No | Selection | Stable with O(n) extra space or when using linked lists.[11]

61 https://en.wikipedia.org/wiki/Comparison_sort
63 https://en.wikipedia.org/wiki/Quicksort
64 https://en.wikipedia.org/wiki/Merge_sort
65 https://en.wikipedia.org/wiki/Merge_sort#Parallel_merge_sort
66 https://en.wikipedia.org/wiki/In-place_merge_sort
67 https://en.wikipedia.org/wiki/Introsort
68 https://en.wikipedia.org/wiki/Standard_Template_Library
69 https://en.wikipedia.org/wiki/Heapsort
70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/Inversion_(discrete_mathematics)
72 https://en.wikipedia.org/wiki/Block_sort
73 https://en.wikipedia.org/wiki/Merge_sort#Bottom-up_implementation
74 https://en.wikipedia.org/wiki/Sorting_network
75 https://en.wikipedia.org/wiki/Timsort
76 https://en.wikipedia.org/wiki/Selection_sort

Comparison sorts62 (continued)

Cubesort77 | n / n log n / n log n | n | Yes | Insertion | Makes n comparisons when the data is already sorted or reverse sorted.
Shellsort78 | n log n / n^(4/3) / n^(3/2) | 1 | No | Insertion | Small code size.
Bubble sort79 | n / n² / n² | 1 | Yes | Exchanging | Tiny code size.
Tree sort80 | n log n / n log n / n log n (balanced) | n | Yes | Insertion | When using a self-balancing binary search tree81 .
Cycle sort82 | n² / n² / n² | 1 | No | Insertion | In-place with a theoretically optimal number of writes.
Library sort83 | n / n log n / n² | n | Yes | Insertion |
Patience sorting84 | n / — / n log n | n | No | Insertion & Selection | Finds all the longest increasing subsequences85 in O(n log n).
Smoothsort86 | n / n log n / n log n | 1 | No | Selection | An adaptive87 variant of heapsort based upon the Leonardo sequence88 rather than a traditional binary heap89 .
Strand sort90 | n / n² / n² | n | Yes | Selection |
Tournament sort91 | n log n / n log n / n log n | n[12] | No | Selection | Variation of heapsort.
Cocktail shaker sort92 | n / n² / n² | 1 | Yes | Exchanging |
Comb sort93 | n log n / n² / n² | 1 | No | Exchanging | Faster than bubble sort on average.
Gnome sort94 | n / n² / n² | 1 | Yes | Exchanging | Tiny code size.
UnShuffle Sort[13] | n / kn / kn | n | No | Distribution and Merge | No exchanges are performed. The parameter k is proportional to the entropy in the input; k = 1 for ordered or reverse-ordered input.
Franceschini's method[14] | — / n log n / n log n | 1 | Yes | ? |
Odd–even sort95 | n / n² / n² | 1 | Yes | Exchanging | Can be run on parallel processors easily.

77 https://en.wikipedia.org/wiki/Cubesort
78 https://en.wikipedia.org/wiki/Shellsort
79 https://en.wikipedia.org/wiki/Bubble_sort
80 https://en.wikipedia.org/wiki/Tree_sort
81 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
82 https://en.wikipedia.org/wiki/Cycle_sort
83 https://en.wikipedia.org/wiki/Library_sort
84 https://en.wikipedia.org/wiki/Patience_sorting
85 https://en.wikipedia.org/wiki/Longest_increasing_subsequence
86 https://en.wikipedia.org/wiki/Smoothsort
87 https://en.wikipedia.org/wiki/Adaptive_sort
88 https://en.wikipedia.org/wiki/Leonardo_number
89 https://en.wikipedia.org/wiki/Binary_heap
90 https://en.wikipedia.org/wiki/Strand_sort
91 https://en.wikipedia.org/wiki/Tournament_sort
92 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
93 https://en.wikipedia.org/wiki/Comb_sort
94 https://en.wikipedia.org/wiki/Gnome_sort
95 https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort

Comparison sorts62 (continued)

Zip sort | n log n / n log n / n log n | 1 | Yes | Merging | In-place merge algorithm, minimises data moves.[15]

1.3.2 Non-comparison sorts

The following table describes integer sorting96 algorithms and other sorting algorithms that
are not comparison sorts97 . As such, they are not limited to Ω(n log n).[16] Complexities
below assume n items to be sorted, with keys of size k, digit size d, and r the range of
numbers to be sorted. Many of them are based on the assumption that the key size is large
enough that all entries have unique key values, and hence that n ≪ 2^k , where ≪ means ”much
less than”. In the unit-cost random access machine98 model, algorithms with running time
of n·(k/d), such as radix sort, still take time proportional to Θ(n log n), because n is limited to
be not more than 2^(k/d) , and a larger number of elements to sort would require a bigger k in
order to store them in the memory.[17]
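An LSD radix sort of the kind described here can be sketched briefly in Python for non-negative integers; the function name and the choice of base (digit size d = 8 bits) are illustrative:

```python
def lsd_radix_sort(values, base=256):
    """LSD radix sort sketch: repeatedly apply a stable bucket pass to each
    digit, least significant first, so earlier passes' order is preserved."""
    if not values:
        return values
    shift = 0
    while max(values) >> shift:            # more digits remain to examine
        buckets = [[] for _ in range(base)]
        for v in values:
            # Appending in input order makes each pass stable.
            buckets[(v >> shift) % base].append(v)
        values = [v for bucket in buckets for v in bucket]
        shift += base.bit_length() - 1     # advance by d = log2(base) bits
    return values
```

Each pass costs O(n + 2^d) and there are k/d passes, matching the n·(k/d) bound in the table below.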
Non-comparison sorts

Name | Best / Average / Worst | Memory | Stable | n ≪ 2^k | Notes
Pigeonhole sort99 | — / n + 2^k / n + 2^k | 2^k | Yes | Yes |
Bucket sort100 (uniform keys) | — / n + k / n²·k | n·k | Yes | No | Assumes uniform distribution of elements from the domain in the array.[18]
Bucket sort101 (integer keys) | — / n + r / n + r | n + r | Yes | Yes | If r is O(n), then average time complexity is O(n).[19]
Counting sort102 | — / n + r / n + r | n + r | Yes | Yes | If r is O(n), then average time complexity is O(n).[18]
LSD Radix Sort103 | — / n·(k/d) / n·(k/d) | n + 2^d | Yes | No | k/d recursion levels, 2^d for the count array.[18][19]

96 https://en.wikipedia.org/wiki/Integer_sorting
97 https://en.wikipedia.org/wiki/Comparison_sort
98 https://en.wikipedia.org/wiki/Random_access_machine
99 https://en.wikipedia.org/wiki/Pigeonhole_sort
100 https://en.wikipedia.org/wiki/Bucket_sort
101 https://en.wikipedia.org/wiki/Bucket_sort
102 https://en.wikipedia.org/wiki/Counting_sort
103 https://en.wikipedia.org/wiki/Radix_sort#Least_significant_digit_radix_sorts

Non-comparison sorts (continued)

MSD Radix Sort104 | — / n·(k/d) / n·(k/d) | n + 2^d | Yes | No | The stable version uses an external array of size n to hold all of the bins.
MSD Radix Sort105 (in-place) | — / n·(k/1) / n·(k/1) | 2^1 | No | No | d = 1 for in-place, k/1 recursion levels, no count array.
Spreadsort106 | n / n·(k/d) / n·(k/s + d) | (k/d)·2^d | No | No | Asymptotics are based on the assumption that n ≪ 2^k , but the algorithm does not require this.
Burstsort107 | — / n·(k/d) / n·(k/d) | n·(k/d) | No | No | Has a better constant factor than radix sort for sorting strings, though it relies somewhat on specifics of commonly encountered strings.
Flashsort108 | n / n + r / n² | n | No | No | Requires uniform distribution of elements from the domain in the array to run in linear time. If the distribution is extremely skewed, it can go quadratic if the underlying sort is quadratic (it is usually an insertion sort). The in-place version is not stable.

104 https://en.wikipedia.org/wiki/Radix_sort#Most_significant_digit_radix_sorts
105 https://en.wikipedia.org/wiki/Radix_sort#Most_significant_digit_radix_sorts
106 https://en.wikipedia.org/wiki/Spreadsort
107 https://en.wikipedia.org/wiki/Burstsort
108 https://en.wikipedia.org/wiki/Flashsort

Non-comparison sorts (continued)

Postman sort109 | — / n·(k/d) / n·(k/d) | n + 2^d | — | No | A variation of bucket sort, which works very similarly to MSD Radix Sort. Specific to post service needs.

Samplesort110 can be used to parallelize any of the non-comparison sorts, by efficiently
distributing data into several buckets and then passing down sorting to several processors,
with no need to merge as buckets are already sorted between each other.

1.3.3 Others

Some algorithms are slow compared to those discussed above, such as bogosort111 , which
has unbounded run time, and stooge sort112 , which has O(n^2.7 ) run time. These sorts are
usually described for educational purposes, to demonstrate how the run time of algorithms
is estimated. The following table describes some sorting algorithms that are impractical for
real-life use in traditional software contexts due to extremely poor performance or specialized
hardware requirements.
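Bogosort's unbounded run time follows directly from its shuffle-until-sorted structure, which a few lines of Python make plain (illustrative only, never for real use):

```python
import random

def bogosort(a):
    """Bogosort: reshuffle until the list happens to come out sorted.
    Expected number of shuffles is on the order of n!, and the worst
    case is unbounded, which is the point of the example."""
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a
```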
Name | Best / Average / Worst | Memory | Stable | Comparison | Other notes
Bead sort113 | n / S / S | n² | N/A | No | Works only with positive integers. Requires specialized hardware to run in guaranteed O(n) time. A software implementation is possible, but the running time will then be O(S), where S is the sum of all integers to be sorted; in the case of small integers it can be considered to be linear.
Simple pancake sort114 | — / n / n | log n | No | Yes | Count is the number of flips.
Spaghetti (Poll) sort115 | n / n / n | n² | Yes | Polling | This is a linear-time, analog algorithm for sorting a sequence of items, requiring O(n) stack space, and the sort is stable. This requires n parallel processors. See spaghetti sort#Analysis116 .

109 https://en.wikipedia.org/wiki/Postman_sort
110 https://en.wikipedia.org/wiki/Samplesort
111 https://en.wikipedia.org/wiki/Bogosort
112 https://en.wikipedia.org/wiki/Stooge_sort
113 https://en.wikipedia.org/wiki/Bead_sort
114 https://en.wikipedia.org/wiki/Pancake_sorting
115 https://en.wikipedia.org/wiki/Spaghetti_sort
116 https://en.wikipedia.org/wiki/Spaghetti_sort#Analysis

Name | Best / Average / Worst | Memory | Stable | Comparison | Other notes
Sorting network117 | log² n / log² n / log² n | n log² n | Varies (stable sorting networks require more comparisons) | Yes | The order of comparisons is set in advance based on a fixed network size. Impractical for more than 32 items.118 119 [disputed − discuss]
Bitonic sorter120 | log² n / log² n / log² n | n log² n | No | Yes | An effective variation of sorting networks.
Bogosort121 | n / (n × n!) / ∞ | 1 | No | Yes | Random shuffling. Used for example purposes only, as sorting with unbounded worst-case running time.
Stooge sort122 | n^(log 3/log 1.5) / n^(log 3/log 1.5) / n^(log 3/log 1.5) | n | No | Yes | Slower than most of the sorting algorithms (even naive ones), with a time complexity of O(n^(log 3/log 1.5)) = O(n^2.7095... ).

Theoretical computer scientists have detailed other sorting algorithms that provide better
than O(n log n) time complexity assuming additional constraints, including:
• Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite
size, taking O(n log log n) time and O(n) space.[20]
• A randomized integer sorting123 algorithm taking O(n log log n) expected time and O(n)
space.[21]

1.4 Popular sorting algorithms

While there are a large number of sorting algorithms, in practical implementations a few
algorithms predominate. Insertion sort is widely used for small data sets, while for large data
sets an asymptotically efficient sort is used, primarily heap sort, merge sort, or quicksort.
Efficient implementations generally use a hybrid algorithm124 , combining an asymptotically
efficient algorithm for the overall sort with insertion sort for small lists at the bottom
of a recursion. Highly tuned implementations use more sophisticated variants, such as
Timsort125 (merge sort, insertion sort, and additional logic), used in Android, Java, and
Python, and introsort126 (quicksort and heap sort), used (in variant forms) in some C++
sort127 implementations and in .NET.
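The hybrid structure can be sketched as a merge sort that falls back to insertion sort on small sublists. This Python sketch shows the shape of such a hybrid (the name and the cutoff value are illustrative), not any particular production implementation such as Timsort or introsort:

```python
def hybrid_sort(a, threshold=16):
    """Merge sort overall, insertion sort below a small-list cutoff."""
    if len(a) <= threshold:
        # Insertion sort: low overhead makes it fastest on small lists.
        for i in range(1, len(a)):
            v, j = a[i], i - 1
            while j >= 0 and a[j] > v:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = v
        return a
    mid = len(a) // 2
    left = hybrid_sort(a[:mid], threshold)
    right = hybrid_sort(a[mid:], threshold)
    # Merge the two sorted halves.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

Because the merge takes equal elements from the left half first, the sketch is stable, mirroring the asymptotically efficient outer sort it is built on.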

117 https://en.wikipedia.org/wiki/Sorting_network
118 https://en.wikipedia.org/wiki/Wikipedia:Disputed_statement
119 https://en.wikipedia.org/wiki/Talk:Sorting_algorithm
120 https://en.wikipedia.org/wiki/Bitonic_sorter
121 https://en.wikipedia.org/wiki/Bogosort
122 https://en.wikipedia.org/wiki/Stooge_sort
123 https://en.wikipedia.org/wiki/Integer_sorting
124 https://en.wikipedia.org/wiki/Hybrid_algorithm
125 https://en.wikipedia.org/wiki/Timsort
126 https://en.wikipedia.org/wiki/Introsort
127 https://en.wikipedia.org/wiki/Sort_(C%2B%2B)


For more restricted data, such as numbers in a fixed interval, distribution sorts128 such as
counting sort or radix sort are widely used. Bubble sort and variants are rarely used in
practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books), people intuitively use insertion sorts for small sets. For larger sets, people often first bucket the items, such as by initial letter, and repeated bucketing allows practical sorting of very large sets.
Often space is relatively cheap, such as by spreading objects out on the floor or over a large
area, but operations are expensive, particularly moving an object a large distance – locality
of reference is important. Merge sorts are also practical for physical objects, particularly as
two hands can be used, one for each list to merge, while other algorithms, such as heap sort
or quick sort, are poorly suited for human use. Other algorithms, such as library sort129 , a
variant of insertion sort that leaves spaces, are also practical for physical use.

1.4.1 Simple sorts

Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on
small data, due to low overhead, but not efficient on large data. Insertion sort is generally
faster than selection sort in practice, due to fewer comparisons and good performance on
almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes,
and thus is used when write performance is a limiting factor.

Insertion sort

Main article: Insertion sort130 Insertion sort131 is a simple sorting algorithm that is relatively
efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated
algorithms. It works by taking elements from the list one by one and inserting them in
their correct position into a new sorted list similar to how we put money in our wallet.[22] In
arrays, the new list and the remaining elements can share the array's space, but insertion
is expensive, requiring shifting all following elements over by one. Shellsort132 (see below)
is a variant of insertion sort that is more efficient for larger lists.
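The insertion step described above can be sketched in Python (an illustrative addition, not part of the original article):

```python
def insertion_sort(a):
    """Sort the list a in place; the prefix a[:i] is always sorted."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right to open a gap for key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

The inner while loop is the expensive shift mentioned in the text: inserting into an array costs up to O(n) moves per element.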

Selection sort

Main article: Selection sort133 Selection sort is an in-place134 comparison sort135 . It has
O136 (n2 ) complexity, making it inefficient on large lists, and generally performs worse than

128 #Distribution_sort
129 https://en.wikipedia.org/wiki/Library_sort
130 https://en.wikipedia.org/wiki/Insertion_sort
131 https://en.wikipedia.org/wiki/Insertion_sort
132 #Shellsort
133 https://en.wikipedia.org/wiki/Selection_sort
134 https://en.wikipedia.org/wiki/In-place_algorithm
135 https://en.wikipedia.org/wiki/Comparison_sort
136 https://en.wikipedia.org/wiki/Big_O_notation


the similar insertion sort137 . Selection sort is noted for its simplicity, and also has perfor-
mance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and
repeats these steps for the remainder of the list.[23] It does no more than n swaps, and thus
is useful where swapping is very expensive.
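As a minimal sketch (illustrative, not from the original article), selection sort makes its swap-frugality visible — at most one swap per pass:

```python
def selection_sort(a):
    """In-place selection sort: at most n-1 swaps regardless of input order."""
    n = len(a)
    for i in range(n - 1):
        # Find the index of the minimum of the unsorted suffix a[i:].
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]  # one swap per pass
    return a
```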

1.4.2 Efficient sorts

Practical general sorting algorithms are almost always based on an algorithm with average
time complexity (and generally worst-case complexity) O(n log n), of which the most com-
mon are heap sort, merge sort, and quicksort. Each has advantages and drawbacks, with
the most significant being that simple implementation of merge sort uses O(n) additional
space, and simple implementation of quicksort has O(n2 ) worst-case complexity. These
problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency
on real-world data various modifications are used. First, the overhead of these algorithms
becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching
to insertion sort once the data is small enough. Second, the algorithms often perform poorly
on already sorted data or almost sorted data – these are common in real-world data, and can
be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable138 ,
and stability is often a desirable property in a sort. Thus more sophisticated algorithms
are often employed, such as Timsort139 (based on merge sort) or introsort140 (based on
quicksort, falling back to heap sort).

Merge sort

Main article: Merge sort141 Merge sort takes advantage of the ease of merging already sorted
lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then
3 with 4...) and swapping them if the first should come after the second. It then merges
each of the resulting lists of two into lists of four, then merges those lists of four, and so on;
until at last two lists are merged into the final sorted list.[24] Of the algorithms described
here, this is the first that scales well to very large lists, because its worst-case running time
is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential
access, not random access. However, it has additional O(n) space complexity, and involves
a large number of copies in simple implementations.
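A minimal top-down sketch in Python (illustrative; real implementations avoid the list copies this version makes):

```python
def merge_sort(a):
    """Stable top-down merge sort; returns a new sorted list."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves; <= keeps equal keys in their
    # original order, which is what makes the sort stable.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```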
Merge sort has seen a relatively recent surge in popularity for practical implementations,
due to its use in the sophisticated algorithm Timsort142 , which is used for the standard sort

137 https://en.wikipedia.org/wiki/Insertion_sort
138 https://en.wikipedia.org/wiki/Unstable_sort
139 https://en.wikipedia.org/wiki/Timsort
140 https://en.wikipedia.org/wiki/Introsort
141 https://en.wikipedia.org/wiki/Merge_sort
142 https://en.wikipedia.org/wiki/Timsort


routine in the programming languages Python143[25] and Java144 (as of JDK7145[26] ). Merge
sort itself is the standard routine in Perl146 ,[27] among others, and has been used in Java at
least since 2000 in JDK1.3147 .[28]

Heapsort

Main article: Heapsort148 Heapsort is a much more efficient version of selection sort149 . It
also works by determining the largest (or smallest) element of the list, placing that at the
end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this
task efficiently by using a data structure called a heap150 , a special type of binary tree151 .[29]
Once the data list has been made into a heap, the root node is guaranteed to be the largest
(or smallest) element. When it is removed and placed at the end of the list, the heap is
rearranged so the largest element remaining moves to the root. Using the heap, finding
the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple
selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst
case complexity.
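The heap mechanics described above can be sketched in Python (an illustrative version using an explicit sift-down, not from the original article):

```python
def heapsort(a):
    """In-place heapsort: build a max-heap, then repeatedly move the root
    (the largest remaining element) to the end of the list."""
    def sift_down(start, end):
        # Restore the heap property below index start, within a[:end+1].
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1  # pick the larger of the two children
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):  # heapify, O(n) overall
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]          # root to its final position
        sift_down(0, end - 1)                # O(log n) repair
    return a
```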

Quicksort

Main article: Quicksort152 Quicksort is a divide and conquer153 algorithm154 which relies on
a partition operation: to partition an array, an element called a pivot is selected.[30][31] All
elements smaller than the pivot are moved before it and all greater elements are moved after
it. This can be done efficiently in linear time and in-place155 . The lesser and greater sublists
are then recursively sorted. This yields average time complexity of O(n log n), with low
overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with
in-place partitioning) are typically unstable sorts and somewhat complex, but are among
the fastest sorting algorithms in practice. Together with its modest O(log n) space usage,
quicksort is one of the most popular sorting algorithms and is available in many standard
programming libraries.
The important caveat about quicksort is that its worst-case performance is O(n2 ); while this
is rare, in naive implementations (choosing the first or last element as pivot) this occurs
for sorted data, which is a common case. The most complex issue in quicksort is thus
choosing a good pivot element, as consistently poor choices of pivots can result in drastically
slower O(n2 ) performance, but good choice of pivots yields O(n log n) performance, which

143 https://en.wikipedia.org/wiki/Python_(programming_language)
144 https://en.wikipedia.org/wiki/Java_(programming_language)
145 https://en.wikipedia.org/wiki/JDK7
146 https://en.wikipedia.org/wiki/Perl
147 https://en.wikipedia.org/wiki/Java_version_history#J2SE_1.3
148 https://en.wikipedia.org/wiki/Heapsort
149 https://en.wikipedia.org/wiki/Selection_sort
150 https://en.wikipedia.org/wiki/Heap_(data_structure)
151 https://en.wikipedia.org/wiki/Binary_tree
152 https://en.wikipedia.org/wiki/Quicksort
153 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
154 https://en.wikipedia.org/wiki/Algorithm
155 https://en.wikipedia.org/wiki/In-place_algorithm


is asymptotically optimal. For example, if at each step the median156 is chosen as the
pivot then the algorithm works in O(n log n). Finding the median, such as by the median
of medians157 selection algorithm158 is however an O(n) operation on unsorted lists and
therefore exacts significant overhead with sorting. In practice choosing a random pivot
almost certainly yields O(n log n) performance.
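The partition-and-recurse scheme with a random pivot can be sketched in Python (illustrative; this simple version builds new lists rather than partitioning in place as efficient implementations do):

```python
import random

def quicksort(a):
    """Quicksort with a random pivot: expected O(n log n) time."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    # Partition into elements below, equal to, and above the pivot.
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

Grouping the elements equal to the pivot avoids the quadratic blow-up a naive two-way split suffers on inputs with many duplicates.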

Shellsort

Figure 4 A Shell sort, different from bubble sort in that it moves elements to numerous
swapping positions.

156 https://en.wikipedia.org/wiki/Median
157 https://en.wikipedia.org/wiki/Median_of_medians
158 https://en.wikipedia.org/wiki/Selection_algorithm


Main article: Shell sort159 Shellsort was invented by Donald Shell160 in 1959.[32] It improves
upon insertion sort by moving out of order elements more than one position at a time.
The concept behind Shellsort is that insertion sort performs in O(kn) time, where k is
the greatest distance between two out-of-place elements. This means that generally it
performs in O(n^2), but for data that is mostly sorted, with only a few elements out of place,
it performs faster. So, by first sorting elements far away, and progressively shrinking the
gap between the elements to sort, the final sort computes much faster. One implementation
can be described as arranging the data sequence in a two-dimensional array and then sorting
the columns of the array using insertion sort.
The worst-case time complexity of Shellsort is an open problem161 and depends on the
gap sequence used, with known complexities ranging from O(n^2) to O(n^(4/3)) and Θ(n log² n).
This, combined with the fact that Shellsort is in-place162 , only needs a relatively small
amount of code, and does not require use of the call stack163 , makes it useful in situations
where memory is at a premium, such as in embedded systems164 and operating system
kernels165 .
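A minimal Python sketch (illustrative; it uses Shell's original gap sequence n/2, n/4, ..., 1, which is simple but not the best-performing one):

```python
def shellsort(a):
    """Shellsort: gapped insertion sorts with a shrinking gap."""
    gap = len(a) // 2
    while gap > 0:
        # A gapped insertion sort: elements gap apart form subsequences
        # that are each kept sorted, moving far-away elements quickly.
        for i in range(gap, len(a)):
            key = a[i]
            j = i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 2
    return a
```

When the gap reaches 1 the pass is an ordinary insertion sort, but by then the data is nearly sorted, which is exactly the case where insertion sort is fast.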

1.4.3 Bubble sort and variants


159 https://en.wikipedia.org/wiki/Shellsort
160 https://en.wikipedia.org/wiki/Donald_Shell
161 https://en.wikipedia.org/wiki/Open_problem
162 https://en.wikipedia.org/wiki/In-place
163 https://en.wikipedia.org/wiki/Call_stack
164 https://en.wikipedia.org/wiki/Embedded_system
165 https://en.wikipedia.org/wiki/Operating_system_kernel


Bubble sort, and variants such as the shell sort178 and cocktail sort179 , are simple, highly
inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of
analysis, but they are rarely used in practice.

Bubble sort

Figure 6 A bubble sort, a sorting algorithm that continuously steps through a list,
swapping items until they appear in the correct order.

Main article: Bubble sort180 Bubble sort is a simple sorting algorithm. The algorithm starts
at the beginning of the data set. It compares the first two elements, and if the first is greater
than the second, it swaps them. It continues doing this for each pair of adjacent elements
to the end of the data set. It then starts again with the first two elements, repeating until
no swaps have occurred on the last pass.[33] This algorithm's average time and worst-case
performance is O(n2 ), so it is rarely used to sort large, unordered data sets. Bubble sort
can be used to sort a small number of items (where its asymptotic inefficiency is not a

178 https://en.wikipedia.org/wiki/Shell_sort
179 https://en.wikipedia.org/wiki/Cocktail_sort
180 https://en.wikipedia.org/wiki/Bubble_sort


high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly
sorted (that is, the elements are not significantly out of place). For example, if any number
of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble
sort's exchanges will get them in order on the first pass, the second pass will find all elements
in order, and the sort will take only 2n time.[34]
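The early-exit behaviour on nearly sorted input can be sketched in Python (illustrative, not from the original article): the sort stops as soon as a full pass makes no swaps.

```python
def bubble_sort(a):
    """Bubble sort that stops once a pass makes no swaps,
    giving O(n) behaviour on already-sorted input."""
    n = len(a)
    while True:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1            # the largest element has bubbled to position n-1
        if not swapped:
            return a
```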

Comb sort

Main article: Comb sort181 Comb sort is a relatively simple sorting algorithm based on
bubble sort182 and originally designed by Włodzimierz Dobosiewicz in 1980.[35] It was later
rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine183
article published in April 1991. The basic idea is to eliminate turtles, or small values
near the end of the list, since in a bubble sort these slow the sorting down tremendously.
(Rabbits, large values around the beginning of the list, do not pose a problem in bubble
sort.) It accomplishes this by initially swapping elements that are a certain distance from
one another in the array, rather than only swapping elements if they are adjacent to one
another, and then shrinking the chosen distance until it is operating as a normal bubble
sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that
swaps elements spaced a certain distance away from one another, comb sort can be thought
of as the same generalization applied to bubble sort.
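A minimal sketch in Python (illustrative; the shrink factor 1.3 is the value commonly cited for comb sort, used here as an assumption):

```python
def comb_sort(a):
    """Comb sort: bubble-sort comparisons over a shrinking gap."""
    gap = len(a)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))  # shrink toward an ordinary bubble sort
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                # Swapping over a gap moves "turtles" (small values near the
                # end) forward many positions at once.
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a
```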

1.4.4 Distribution sort

See also: External sorting184 Distribution sort refers to any sorting algorithm where data
is distributed from their input to multiple intermediate structures which are then gathered
and placed on the output. For example, both bucket sort185 and flashsort186 are distribution
based sorting algorithms. Distribution sorting algorithms can be used on a single processor,
or they can be a distributed algorithm187 , where individual subsets are separately sorted on
different processors, then combined. This allows external sorting188 of data too large to fit
into a single computer's memory.

Counting sort

Main article: Counting sort189 Counting sort is applicable when each input is known to
belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and

181 https://en.wikipedia.org/wiki/Comb_sort
182 https://en.wikipedia.org/wiki/Bubble_sort
183 https://en.wikipedia.org/wiki/Byte_Magazine
184 https://en.wikipedia.org/wiki/External_sorting
185 https://en.wikipedia.org/wiki/Bucket_sort
186 https://en.wikipedia.org/wiki/Flashsort
187 https://en.wikipedia.org/wiki/Distributed_algorithm
188 https://en.wikipedia.org/wiki/External_sorting
189 https://en.wikipedia.org/wiki/Counting_sort


O(|S|) memory where n is the length of the input. It works by creating an integer array of
size |S| and using the ith bin to count the occurrences of the ith member of S in the input.
Each input is then counted by incrementing the value of its corresponding bin. Afterward,
the counting array is looped through to arrange all of the inputs in order. This sorting
algorithm often cannot be used because S needs to be reasonably small for the algorithm
to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as
n increases. It also can be modified to provide stable behavior.
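A minimal sketch for integers in range(k) (illustrative; this simple form regenerates the values from the counts and so only works for bare keys — the stable variant the text mentions keeps prefix sums and replaces records instead):

```python
def counting_sort(a, k):
    """Counting sort for integers in range(k): O(n + k) time, O(k) memory."""
    counts = [0] * k
    for x in a:
        counts[x] += 1            # bin i counts occurrences of value i
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)   # emit each value as often as it occurred
    return out
```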

Bucket sort

Main article: Bucket sort190 Bucket sort is a divide and conquer191 sorting algorithm that
generalizes counting sort192 by partitioning an array into a finite number of buckets. Each
bucket is then sorted individually, either using a different sorting algorithm, or by recursively
applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across
all buckets.
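A minimal sketch for keys uniformly distributed in [0, 1) (illustrative; the bucket count and the use of the built-in sort per bucket are assumptions for the example):

```python
def bucket_sort(a, n_buckets=10):
    """Bucket sort for floats in [0, 1): scatter into buckets,
    sort each bucket, then concatenate."""
    buckets = [[] for _ in range(n_buckets)]
    for x in a:
        buckets[int(x * n_buckets)].append(x)  # key decides the bucket
    out = []
    for b in buckets:
        out.extend(sorted(b))                  # any sort works per bucket
    return out
```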

Radix sort

Main article: Radix sort193 Radix sort is an algorithm that sorts numbers by processing
individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix
sort can process digits of each number either starting from the least significant digit194 (LSD)
or starting from the most significant digit195 (MSD). The LSD algorithm first sorts the list
by the least significant digit while preserving their relative order using a stable sort. Then
it sorts them by the next digit, and so on from the least significant to the most significant,
ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the
MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix
sort is not stable. It is common for the counting sort196 algorithm to be used internally by
the radix sort. A hybrid197 sorting approach, such as using insertion sort198 for small bins
improves performance of radix sort significantly.
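The LSD variant can be sketched in Python (illustrative; non-negative integers only, with a stable scatter into per-digit buckets standing in for the counting sort the text mentions):

```python
def radix_sort(a, base=10):
    """LSD radix sort: one stable pass per digit, least significant first."""
    if not a:
        return a
    digits = 1
    while base ** digits <= max(a):
        digits += 1
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for x in a:
            # Appending preserves relative order: the pass is stable,
            # which is what makes the LSD scheme correct.
            buckets[(x // base ** d) % base].append(x)
        a = [x for b in buckets for x in b]
    return a
```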

1.5 Memory usage patterns and index sorting

When the size of the array to be sorted approaches or exceeds the available primary mem-
ory, so that (much slower) disk or swap space must be employed, the memory usage pattern
of a sorting algorithm becomes important, and an algorithm that might have been fairly

190 https://en.wikipedia.org/wiki/Bucket_sort
191 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
192 https://en.wikipedia.org/wiki/Counting_sort
193 https://en.wikipedia.org/wiki/Radix_sort
194 https://en.wikipedia.org/wiki/Least_significant_digit
195 https://en.wikipedia.org/wiki/Most_significant_digit
196 https://en.wikipedia.org/wiki/Counting_sort
197 https://en.wikipedia.org/wiki/Hybrid_algorithm
198 https://en.wikipedia.org/wiki/Insertion_sort


efficient when the array fit easily in RAM may become impractical. In this scenario, the
total number of comparisons becomes (relatively) less important, and the number of times
sections of memory must be copied or swapped to and from the disk can dominate the per-
formance characteristics of an algorithm. Thus, the number of passes and the localization
of comparisons can be more important than the raw number of comparisons, since compar-
isons of nearby elements to one another happen at system bus199 speed (or, with caching,
even at CPU200 speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort201 algorithm provides quite reasonable per-
formance with adequate RAM, but due to the recursive way that it copies portions of the
array it becomes much less practical when the array does not fit in RAM, because it may
cause a number of slow copy or move operations to and from disk. In that scenario, another
algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a
relational database202 ) are being sorted by a relatively small key field, is to create an index
into the array and then sort the index, rather than the entire array. (A sorted version of
the entire array can then be produced with one pass, reading from the index, but often even
that is unnecessary, as having the sorted index is adequate.) Because the index is much
smaller than the entire array, it may fit easily in memory where the entire array would not,
effectively eliminating the disk-swapping problem. This procedure is sometimes called ”tag
sort”.[36]
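The index-sorting idea can be sketched in Python (illustrative; the function name and `key` parameter are chosen for the example):

```python
def tag_sort(records, key):
    """Sort an index of positions by key instead of moving the records."""
    # index[j] is the position of the record that belongs at output slot j;
    # the (possibly large) records themselves are never copied or moved.
    return sorted(range(len(records)), key=lambda i: key(records[i]))
```

A sorted view is then `records[i] for i in index`, produced in one pass, or the index alone is kept if that is all that is needed.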
Another technique for overcoming the memory-size problem is external sorting203 ; for
example, one way is to combine two algorithms so as to take advantage
of the strength of each to improve overall performance. For instance, the array might be
subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted
using an efficient algorithm (such as quicksort204 ), and the results merged using a k-way
merge similar to that used in mergesort205 . This is faster than performing either mergesort
or quicksort over the entire list.[37][38]
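The chunk-then-merge scheme can be sketched in Python (illustrative; in-memory slices stand in for sorted runs that would live on disk, and the k-way merge uses the standard-library heapq.merge):

```python
import heapq

def external_sort(data, chunk_size):
    """Sketch of external sorting: sort chunks that would each fit in RAM
    independently, then k-way merge the sorted runs."""
    chunks = [sorted(data[i:i + chunk_size])          # one "run" per chunk
              for i in range(0, len(data), chunk_size)]
    return list(heapq.merge(*chunks))                 # k-way merge of runs
```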
Techniques can also be combined. For sorting very large sets of data that vastly exceed
system memory, even the index may need to be sorted using an algorithm or combination
of algorithms designed to perform reasonably with virtual memory206 , i.e., to reduce the
amount of swapping required.

199 https://en.wikipedia.org/wiki/Computer_bus
200 https://en.wikipedia.org/wiki/Central_Processing_Unit
201 https://en.wikipedia.org/wiki/Quicksort
202 https://en.wikipedia.org/wiki/Relational_database
203 https://en.wikipedia.org/wiki/External_sorting
204 https://en.wikipedia.org/wiki/Quicksort
205 https://en.wikipedia.org/wiki/Mergesort
206 https://en.wikipedia.org/wiki/Virtual_memory


1.6 Related algorithms

Related problems include partial sorting207 (sorting only the k smallest elements of a list, or
alternatively computing the k smallest elements, but unordered) and selection208 (computing
the kth smallest element). These can be solved inefficiently by a total sort, but more
efficient algorithms exist, often derived by generalizing a sorting algorithm. The most
notable example is quickselect209 , which is related to quicksort210 . Conversely, some sorting
algorithms can be derived by repeated application of a selection algorithm; quicksort and
quickselect can be seen as the same pivoting move, differing only in whether one recurses
on both sides (quicksort, divide and conquer211 ) or one side (quickselect, decrease and
conquer212 ).
A kind of opposite of a sorting algorithm is a shuffling algorithm213 . These are fundamen-
tally different because they require a source of random numbers. Shuffling can also be
implemented by a sorting algorithm, namely by a random sort: assigning a random number
to each element of the list and then sorting based on the random numbers. This is generally
not done in practice, however, and there is a well-known simple and efficient algorithm for
shuffling: the Fisher–Yates shuffle214 .
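That algorithm can be sketched in Python (illustrative, not from the original article): each position is swapped with a uniformly chosen not-yet-fixed position, giving every permutation equal probability.

```python
import random

def fisher_yates_shuffle(a):
    """Uniform in-place shuffle."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # any index up to and including i
        a[i], a[j] = a[j], a[i]
    return a
```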

1.7 See also


• Collation215
• Schwartzian transform216
• Search algorithm217 − Any algorithm which solves the search problem
• Quantum sort218 − Sorting algorithms for quantum computers

1.8 References

207 https://en.wikipedia.org/wiki/Partial_sorting
208 https://en.wikipedia.org/wiki/Selection_algorithm
209 https://en.wikipedia.org/wiki/Quickselect
210 https://en.wikipedia.org/wiki/Quicksort
211 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
212 https://en.wikipedia.org/wiki/Decrease_and_conquer
213 https://en.wikipedia.org/wiki/Shuffling_algorithm
214 https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle
215 https://en.wikipedia.org/wiki/Collation
216 https://en.wikipedia.org/wiki/Schwartzian_transform
217 https://en.wikipedia.org/wiki/Search_algorithm
218 https://en.wikipedia.org/wiki/Quantum_sort


This article includes a list of references219 , but its sources remain unclear be-
cause it has insufficient inline citations220 . Please help to improve221 this ar-
ticle by introducing222 more precise citations. (September 2009)(Learn how and when
to remove this template message223 )

1. ”Meet the 'Refrigerator Ladies' Who Programmed the ENIAC”224 . Mental
Floss. 2013-10-13. Retrieved 2016-06-16.
2. Lohr, Steve (December 17, 2001). ”Frances E. Holberton, 84, Early Computer
Programmer”225 . NYT. Retrieved 16 December 2014.
3. Demuth, Howard B. (1956). Electronic Data Sorting (PhD thesis). Stanford
University. ProQuest226 301940891227 .
4. Cormen, Thomas H.228 ; Leiserson, Charles E.229 ; Rivest, Ronald L.230 ;
Stein, Clifford231 (2009), ”8”, Introduction To Algorithms232 (3rd ed.), Cam-
bridge, MA: The MIT Press, p. 167, ISBN233 978-0-262-03293-3234
5. Sedgewick, Robert235 (1 September 1998). Algorithms In C: Fundamentals,
Data Structures, Sorting, Searching, Parts 1-4236 (3rd ed.). Pearson Education.
ISBN237 978-81-317-1291-7238 . Retrieved 27 November 2012.
6. Sedgewick, R.239 (1978). ”Implementing Quicksort programs”. Comm.
ACM240 . 21 (10): 847–857. doi241 :10.1145/359619.359631242 .

219 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources
220 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources#Inline_citations
221 https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Fact_and_Reference_Check
222 https://en.wikipedia.org/wiki/Wikipedia:When_to_cite
223 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
224 http://mentalfloss.com/article/53160/meet-refrigerator-ladies-who-programmed-eniac
https://www.nytimes.com/2001/12/17/business/frances-e-holberton-84-early-computer-
225
programmer.html
226 https://en.wikipedia.org/wiki/ProQuest_(identifier)
227 https://search.proquest.com/docview/301940891
228 https://en.wikipedia.org/wiki/Thomas_H._Cormen
229 https://en.wikipedia.org/wiki/Charles_E._Leiserson
230 https://en.wikipedia.org/wiki/Ron_Rivest
231 https://en.wikipedia.org/wiki/Clifford_Stein
232 https://books.google.com/books?id=NLngYyWFl_YC
233 https://en.wikipedia.org/wiki/ISBN_(identifier)
234 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03293-3
235 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
236 https://books.google.com/books?id=ylAETlep0CwC
237 https://en.wikipedia.org/wiki/ISBN_(identifier)
238 https://en.wikipedia.org/wiki/Special:BookSources/978-81-317-1291-7
239 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
240 https://en.wikipedia.org/wiki/Communications_of_the_ACM
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.1145%2F359619.359631


7. A, M.243 ; K, J.244 ; S, E.245 (1983). An O(n log n) sorting
network. STOC246 '83. Proceedings of the fifteenth annual ACM symposium on Theory
of computing. pp. 1–9. doi247 :10.1145/800061.808726248 . ISBN249 0-89791-099-0250 .
8. H, B. C.; L, M. A. (D 1992). ”F S M
 S  C E S”251 (PDF). Comput. J.252 35 (6): 643–
650. CiteSeerX253 10.1.1.54.8381254 . doi255 :10.1093/comjnl/35.6.643256 .
9. K, P. S.; K, A. (2008). Ratio Based Stable In-Place Merging.
TAMC257 2008. Theory and Applications of Models of Computation. LNCS258 .
4978. pp. 246–257. CiteSeerX259 10.1.1.330.2641260 . doi261 :10.1007/978-3-540-
79228-4_22262 . ISBN263 978-3-540-79227-7264 .
10. 265
11. ”SELECTION SORT (J, C++) - A  D S”266 .
www.algolist.net. Retrieved 14 April 2018.
12. 267
13. K, A (N 1985). ”U, N Q  S”. Computer
Language. 2 (11).
14. F, G. (J 2007). ”S S,  P,  O(  )
C  O() M”. Theory of Computing Systems. 40 (4): 327–353.
doi268 :10.1007/s00224-006-1311-1269 .
15. C, R. (M 2020). ”- --”270 .
www.github.com.

243 https://en.wikipedia.org/wiki/Mikl%C3%B3s_Ajtai
244 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
245 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
246 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
247 https://en.wikipedia.org/wiki/Doi_(identifier)
248 https://doi.org/10.1145%2F800061.808726
249 https://en.wikipedia.org/wiki/ISBN_(identifier)
250 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-099-0
251 http://comjnl.oxfordjournals.org/content/35/6/643.full.pdf
252 https://en.wikipedia.org/wiki/The_Computer_Journal
253 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
254 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.8381
255 https://en.wikipedia.org/wiki/Doi_(identifier)
256 https://doi.org/10.1093%2Fcomjnl%2F35.6.643
https://en.wikipedia.org/wiki/International_Conference_on_Theory_and_Applications_of_
257
Models_of_Computation
258 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
259 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
260 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.330.2641
261 https://en.wikipedia.org/wiki/Doi_(identifier)
262 https://doi.org/10.1007%2F978-3-540-79228-4_22
263 https://en.wikipedia.org/wiki/ISBN_(identifier)
264 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-79227-7
265 https://qiita.com/hon_no_mushi/items/92ff1a220f179b8d40f9
266 http://www.algolist.net/Algorithms/Sorting/Selection_sort
267 http://dbs.uni-leipzig.de/skripte/ADS1/PDF4/kap4.pdf
268 https://en.wikipedia.org/wiki/Doi_(identifier)
269 https://doi.org/10.1007%2Fs00224-006-1311-1
270 https://github.com/ceorron/stable-inplace-sorting-algorithms


16. C, T H.271 ; L, C E.272 ; R, R L.273 ;
S, C274 (2001), ”8”, Introduction To Algorithms275 (2 .), C-
, MA: T MIT P, . 165, ISBN276 0-262-03293-7277
17. N, S (2000). ”T F S A?”278 . Dr.
279
Dobb's .
18. C, T H.280 ; L, C E.281 ; R, R L.282 ;
S, C283 (2001) [1990]. Introduction to Algorithms284 (2 .). MIT
P  MG-H. ISBN285 0-262-03293-7286 .
19. G, M T.287 ; T, R288 (2002). ”4.5 B-S
 R-S”. Algorithm Design: Foundations, Analysis, and Internet Examples.
John Wiley & Sons. pp. 241–243. ISBN289 978-0-471-38365-9290 .
20. T, M.291 (F 2002). ”R S  O(   )
T  L S U A, S,  B- B O-
”. Journal of Algorithms. 42 (2): 205–230. doi292 :10.1006/jagm.2002.1211293 .
21. H, Y; T, M.294 (2002). Integer sorting in O(n√(log log n)) expected time
and linear space. The 43rd Annual IEEE Symposium on Foundations of Computer Sci-
ence295 . pp. 135–144. doi296 :10.1109/SFCS.2002.1181890297 . ISBN298 0-7695-1822-
2299 .
22. W, N300 (1986), Algorithms & Data Structures, Upper Saddle River,
NJ: Prentice-Hall, pp. 76–77, ISBN301 978-0130220059302

271 https://en.wikipedia.org/wiki/Thomas_H._Cormen
272 https://en.wikipedia.org/wiki/Charles_E._Leiserson
273 https://en.wikipedia.org/wiki/Ron_Rivest
274 https://en.wikipedia.org/wiki/Clifford_Stein
275 https://books.google.com/books?id=NLngYyWFl_YC
276 https://en.wikipedia.org/wiki/ISBN_(identifier)
277 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
http://www.drdobbs.com/architecture-and-design/the-fastest-sorting-algorithm/
278
184404062
279 https://en.wikipedia.org/wiki/Dr._Dobb%27s
280 https://en.wikipedia.org/wiki/Thomas_H._Cormen
281 https://en.wikipedia.org/wiki/Charles_E._Leiserson
282 https://en.wikipedia.org/wiki/Ron_Rivest
283 https://en.wikipedia.org/wiki/Clifford_Stein
284 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
285 https://en.wikipedia.org/wiki/ISBN_(identifier)
286 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
287 https://en.wikipedia.org/wiki/Michael_T._Goodrich
288 https://en.wikipedia.org/wiki/Roberto_Tamassia
289 https://en.wikipedia.org/wiki/ISBN_(identifier)
290 https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-38365-9
291 https://en.wikipedia.org/wiki/Mikkel_Thorup
292 https://en.wikipedia.org/wiki/Doi_(identifier)
293 https://doi.org/10.1006%2Fjagm.2002.1211
294 https://en.wikipedia.org/wiki/Mikkel_Thorup
295 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
296 https://en.wikipedia.org/wiki/Doi_(identifier)
297 https://doi.org/10.1109%2FSFCS.2002.1181890
298 https://en.wikipedia.org/wiki/ISBN_(identifier)
299 https://en.wikipedia.org/wiki/Special:BookSources/0-7695-1822-2
300 https://en.wikipedia.org/wiki/Niklaus_Wirth
301 https://en.wikipedia.org/wiki/ISBN_(identifier)
302 https://en.wikipedia.org/wiki/Special:BookSources/978-0130220059

23. Wirth 1986303 , pp. 79–80


24. Wirth 1986304 , pp. 101–102
25. ”T P'    ”305 . python.org. Retrieved 14
April 2018.
26. ”OJDK' TS.”306 . java.net. Retrieved 14 April 2018.
27. ” - ..”307 . perldoc.perl.org. Retrieved 14 April 2018.
28. Merge sort in Java 1.3308 , Sun. Archived309 2009-03-04 at the Wayback Machine310
29. Wirth 1986311 , pp. 87–89
30. Wirth 1986312 , p. 93
31. C, T H.313 ; L, C E.314 ; R, R L.315 ;
S, C316 (2009), Introduction to Algorithms (3rd ed.), Cambridge, MA:
The MIT Press, pp. 171–172, ISBN317 978-0262033848318
32. S, D. L. (1959). ”A H-S S P”319 (PDF). Communi-
cations of the ACM. 2 (7): 30–32. doi320 :10.1145/368370.368387321 .
33. Wirth 1986322 , pp. 81–82
34. ”/.”323 . R 2012-05-05.
35. B, B. (15 S 2001). ”A   S”. Inf.
Process. Lett.324 79 (5): 223–227. doi325 :10.1016/S0020-0190(00)00223-4326 .
36. ”  D  PC M E”327 . www.pcmag.com.
Retrieved 14 April 2018.

303 #CITEREFWirth1986
304 #CITEREFWirth1986
305 http://svn.python.org/projects/python/trunk/Objects/listsort.txt
http://cr.openjdk.java.net/~martin/webrevs/openjdk7/timsort/raw_files/new/src/share/
306
classes/java/util/TimSort.java
307 http://perldoc.perl.org/functions/sort.html
http://java.sun.com/j2se/1.3/docs/api/java/util/Arrays.html#sort(java.lang.Object%5B%
308
5D)
https://web.archive.org/web/20090304021927/http://java.sun.com/j2se/1.3/docs/api/
309
java/util/Arrays.html#sort(java.lang.Object%5B%5D)#sort(java.lang.Object%5B%5D)
310 https://en.wikipedia.org/wiki/Wayback_Machine
311 #CITEREFWirth1986
312 #CITEREFWirth1986
313 https://en.wikipedia.org/wiki/Thomas_H._Cormen
314 https://en.wikipedia.org/wiki/Charles_E._Leiserson
315 https://en.wikipedia.org/wiki/Ron_Rivest
316 https://en.wikipedia.org/wiki/Clifford_Stein
317 https://en.wikipedia.org/wiki/ISBN_(identifier)
318 https://en.wikipedia.org/wiki/Special:BookSources/978-0262033848
319 http://penguin.ewu.edu/cscd300/Topic/AdvSorting/p30-shell.pdf
320 https://en.wikipedia.org/wiki/Doi_(identifier)
321 https://doi.org/10.1145%2F368370.368387
322 #CITEREFWirth1986
https://github.com/torvalds/linux/blob/72932611b4b05bbd89fafa369d564ac8e449809b/
323
kernel/groups.c#L105
324 https://en.wikipedia.org/wiki/Information_Processing_Letters
325 https://en.wikipedia.org/wiki/Doi_(identifier)
326 https://doi.org/10.1016%2FS0020-0190%2800%2900223-4
327 https://www.pcmag.com/encyclopedia_term/0,2542,t=tag+sort&i=52532,00.asp

37. Donald Knuth328 , The Art of Computer Programming329 , Volume 3: Sorting and
Searching, Second Edition. Addison-Wesley, 1998, ISBN330 0-201-89685-0331 , Section
5.4: External Sorting, pp. 248–379.
38. Ellis Horowitz332 and Sartaj Sahni333 , Fundamentals of Data Structures, H. Freeman
& Co., ISBN334 0-7167-8042-9335 .

1.9 Further reading


• K, D E.336 (1998), Sorting and Searching, The Art of Computer Program-
ming, 3 (2nd ed.), Boston: Addison-Wesley, ISBN337 0-201-89685-0338
• S, R339 (1980), ”E S  C: A I-
”, Computational Probability340 , N Y: A P, . 101–130341 ,
ISBN342 0-12-394680-8343

1.10 External links

The Wikibook Algorithm implementation344 has a page on the topic of: Sorting
algorithms345

The Wikibook A-level Mathematics346 has a page on the topic of: Sorting algo-
rithms347

Wikimedia Commons has media related to Sorting algorithms348 .

328 https://en.wikipedia.org/wiki/Donald_Knuth
329 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
330 https://en.wikipedia.org/wiki/ISBN_(identifier)
331 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
332 https://en.wikipedia.org/wiki/Ellis_Horowitz
333 https://en.wikipedia.org/wiki/Sartaj_Sahni
334 https://en.wikipedia.org/wiki/ISBN_(identifier)
335 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-8042-9
336 https://en.wikipedia.org/wiki/Donald_Knuth
337 https://en.wikipedia.org/wiki/ISBN_(identifier)
338 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
339 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
340 https://archive.org/details/computationalpro00actu/page/101
341 https://archive.org/details/computationalpro00actu/page/101
342 https://en.wikipedia.org/wiki/ISBN_(identifier)
343 https://en.wikipedia.org/wiki/Special:BookSources/0-12-394680-8
344 https://en.wikibooks.org/wiki/Algorithm_implementation
345 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting
346 https://en.wikibooks.org/wiki/A-level_Mathematics
https://en.wikibooks.org/wiki/A-level_Mathematics/OCR/D1/Algorithms#Sorting_
347
Algorithms
348 https://commons.wikimedia.org/wiki/Category:Sort_algorithms


• Sorting Algorithm Animations349 at the Wayback Machine350 (archived 3 March 2015)


• Sequential and parallel sorting algorithms351 – explanations and analyses of many sorting
algorithms
• Dictionary of Algorithms, Data Structures, and Problems352 – dictionary of algorithms,
techniques, common functions, and problems
• Slightly Skeptical View on Sorting Algorithms353 – Discusses several classic algorithms
and promotes alternatives to the quicksort354 algorithm
• 15 Sorting Algorithms in 6 Minutes (Youtube)355 – visualization and ”audibilization” of
15 Sorting Algorithms in 6 Minutes
• A036604 sequence in OEIS database titled ”Sorting numbers: minimal number of com-
parisons needed to sort n elements”356 – comparison counts performed by the Ford–Johnson algorithm357
• Sorting Algorithms Used on Famous Paintings (Youtube)358 – visualization of sorting
algorithms on many famous paintings.


349 https://web.archive.org/web/20150303022622/http://www.sorting-algorithms.com/
350 https://en.wikipedia.org/wiki/Wayback_Machine
351 http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/algoen.htm
352 https://www.nist.gov/dads/
353 http://www.softpanorama.org/Algorithms/sorting.shtml
354 https://en.wikipedia.org/wiki/Quicksort
355 https://www.youtube.com/watch?v=kPRA0W1kECg
356 https://oeis.org/A036604
357 https://en.wikipedia.org/wiki/Ford%E2%80%93Johnson_algorithm
358 https://www.youtube.com/watch?v=d2d0r1bArUQ

2 Comparison sort

Figure 7 Sorting a set of unlabelled weights by weight using only a balance scale
requires a comparison sort algorithm.

A comparison sort is a type of sorting algorithm1 that only reads the list elements through
a single abstract comparison operation (often a ”less than or equal to” operator or a three-
way comparison2 ) that determines which of two elements should occur first in the final

1 https://en.wikipedia.org/wiki/Sorting_algorithm
2 https://en.wikipedia.org/wiki/Three-way_comparison


sorted list. The only requirement is that the operator forms a total preorder3 over the data,
with:
1. if a ≤ b and b ≤ c then a ≤ c (transitivity)
2. for all a and b, a ≤ b or b ≤ a (connexity4 ).
It is possible that both a ≤ b and b ≤ a; in this case either may come first in the sorted list.
In a stable sort5 , the input order determines the sorted order in this case.
A metaphor for thinking about comparison sorts is that someone has a set of unlabelled
weights and a balance scale6 . Their goal is to line up the weights in order by their weight
without any information except that obtained by placing two weights on the scale and seeing
which one is heavier (or if they weigh the same).

2.1 Examples

Figure 8 Quicksort in action on a list of numbers. The horizontal lines are pivot values.

3 https://en.wikipedia.org/wiki/Total_preorder
4 https://en.wikipedia.org/wiki/Connex_relation
5 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
6 https://en.wikipedia.org/wiki/Balance_scale


Some of the most well-known comparison sorts include:


• Quicksort7
• Heapsort8
• Shellsort9
• Merge sort10
• Introsort11
• Insertion sort12
• Selection sort13
• Bubble sort14
• Odd–even sort15
• Cocktail shaker sort16
• Cycle sort17
• Merge-insertion sort18
• Smoothsort19
• Timsort20

2.2 Performance limits and advantages of different sorting techniques

There are fundamental limits on the performance of comparison sorts. A comparison sort
must have an average-case lower bound of Ω21 (n log n) comparison operations,[1] which is
known as linearithmic22 time. This is a consequence of the limited information available
through comparisons alone — or, to put it differently, of the vague algebraic structure of
totally ordered sets. In this sense, mergesort, heapsort, and introsort are asymptotically
optimal23 in terms of the number of comparisons they must perform, although this metric
neglects other operations. Non-comparison sorts (such as the examples discussed below)
can achieve O24 (n) performance by using operations other than comparisons, allowing them
to sidestep this lower bound (assuming elements are constant-sized).

7 https://en.wikipedia.org/wiki/Quicksort
8 https://en.wikipedia.org/wiki/Heapsort
9 https://en.wikipedia.org/wiki/Shellsort
10 https://en.wikipedia.org/wiki/Merge_sort
11 https://en.wikipedia.org/wiki/Introsort
12 https://en.wikipedia.org/wiki/Insertion_sort
13 https://en.wikipedia.org/wiki/Selection_sort
14 https://en.wikipedia.org/wiki/Bubble_sort
15 https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
16 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
17 https://en.wikipedia.org/wiki/Cycle_sort
18 https://en.wikipedia.org/wiki/Merge-insertion_sort
19 https://en.wikipedia.org/wiki/Smoothsort
20 https://en.wikipedia.org/wiki/Timsort
21 https://en.wikipedia.org/wiki/Big-O_notation
22 https://en.wikipedia.org/wiki/Linearithmic
23 https://en.wikipedia.org/wiki/Asymptotically_optimal
24 https://en.wikipedia.org/wiki/Big-O_notation


Comparison sorts may run faster on some lists; many adaptive sorts25 such as insertion
sort26 run in O(n) time on an already-sorted or nearly-sorted list. The Ω27 (n log n) lower
bound applies only to the case in which the input list can be in any possible order.
Real-world measures of sorting speed may need to take into account the ability of some
algorithms to make optimal use of relatively fast cached computer memory28 , or the application
may benefit from sorting methods where sorted data begins to appear to the user quickly
(in which case the user's reading speed becomes the limiting factor), as opposed to sorting methods
where no output is available until the whole list is sorted.
Despite these limitations, comparison sorts offer the notable practical advantage that control
over the comparison function allows sorting of many different datatypes and fine control
over how the list is sorted. For example, reversing the result of the comparison function
allows the list to be sorted in reverse; and one can sort a list of tuples29 in lexicographic
order30 by just creating a comparison function that compares each part in sequence:
function tupleCompare((lefta, leftb, leftc), (righta, rightb, rightc))
    if lefta ≠ righta
        return compare(lefta, righta)
    else if leftb ≠ rightb
        return compare(leftb, rightb)
    else
        return compare(leftc, rightc)

Balanced ternary31 notation allows comparisons to be made in one step, whose result will
be one of ”less than”, ”greater than” or ”equal to”.
Comparison sorts generally adapt more easily to complex orders such as the order of floating-
point numbers32 . Additionally, once a comparison function is written, any comparison
sort can be used without modification; non-comparison sorts typically require specialized
versions for each datatype.
This flexibility, together with the efficiency of the above comparison sorting algorithms on
modern computers, has led to widespread preference for comparison sorts in most practical
work.

2.3 Alternatives

Some sorting problems admit a strictly faster solution than the Ω(n log n) bound for com-
parison sorting; an example is integer sorting33 , where all keys are integers. When the keys
form a small (compared to n) range, counting sort34 is an example algorithm that runs in

25 https://en.wikipedia.org/wiki/Adaptive_sort
26 https://en.wikipedia.org/wiki/Insertion_sort
27 https://en.wikipedia.org/wiki/Big-O_notation
28 https://en.wikipedia.org/wiki/Random_Access_Memory
29 https://en.wikipedia.org/wiki/Tuple
30 https://en.wikipedia.org/wiki/Lexicographic_order
31 https://en.wikipedia.org/wiki/Balanced_ternary
32 https://en.wikipedia.org/wiki/Floating-point_number
33 https://en.wikipedia.org/wiki/Integer_sorting
34 https://en.wikipedia.org/wiki/Counting_sort


linear time. Other integer sorting algorithms, such as radix sort35 , are not asymptotically
faster than comparison sorting, but can be faster in practice.
The problem of sorting pairs of numbers by their sum36 is not subject to the Ω(n² log n)
bound either (the square resulting from the pairing up); the best known algorithm still takes
O(n² log n) time, but only O(n²) comparisons.

2.4 Number of comparisons required to sort a list

n ⌈log2 (n!)⌉ Minimum


1 0 0
2 1 1
3 3 3
4 5 5
5 7 7
6 10 10
7 13 13
8 16 16
9 19 19
10 22 22
11 26 26
12 29 30[2][3]
13 33 34[4][5][6]
14 37 38[6]
15 41 42[7][8][9]
16 45 45 or 46[10]
17 49 49 or 50
18 53 53 or 54
19 57 58[9]
20 62 62
21 66 66
22 70 71[6]

n ⌈log2 (n!)⌉ n log2 n − n/ln 2
10 22 19
100 525 521
1 000 8 530 8 524
10 000 118 459 118 451
100 000 1 516 705 1 516 695
1 000 000 18 488 885 18 488 874

35 https://en.wikipedia.org/wiki/Radix_sort
36 https://en.wikipedia.org/wiki/X_%2B_Y_sorting


Above: A comparison of the lower bound ⌈log2 (n!)⌉ to the actual minimum number of
comparisons (from OEIS37 : A03660438 ) required to sort a list of n items (for the worst
case). Below: Using Stirling's approximation39 , this lower bound is well-approximated by
n log2 n − n/ln 2.
The number of comparisons that a comparison sort algorithm requires increases in propor-
tion to n log(n), where n is the number of elements to sort. This bound is asymptotically
tight40 .
Given a list of distinct numbers (we can assume this because this is a worst-case analysis),
there are n factorial41 permutations exactly one of which is the list in sorted order. The
sort algorithm must gain enough information from the comparisons to identify the correct
permutation. If the algorithm always completes after at most f(n) steps, it cannot distin-
guish more than 2^f(n) cases because the keys are distinct and each comparison has only two
possible outcomes. Therefore,
2^f(n) ≥ n!, or equivalently f(n) ≥ log2 (n!).
By looking at the first n/2 factors of n! = n(n − 1) · · · 1, we obtain
log2 (n!) ≥ log2 ((n/2)^(n/2)) = (n log n)/(2 log 2) − n/2 = Θ(n log n),
and hence
log2 (n!) = Ω(n log n).
This provides the lower-bound part of the claim. A better bound can be given via Stirling's
approximation42 .
An identical upper bound follows from the existence of the algorithms that attain this bound
in the worst case, like heapsort43 and mergesort44 .
The above argument provides an absolute, rather than only asymptotic lower bound on the
number of comparisons, namely ⌈log2 (n!)⌉ comparisons. This lower bound is fairly good (it
can be approached within a linear tolerance by a simple merge sort), but it is known to be
inexact. For example, ⌈log2 (13!)⌉ = 33, but the minimal number of comparisons to sort 13
elements has been proved to be 34.
Determining the exact number of comparisons needed to sort a given number of entries
is a computationally hard problem even for small n, and no simple formula for the so-
lution is known. For some of the few concrete values that have been computed, see
OEIS45 : A03660446 .

37 https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences
38 http://oeis.org/A036604
39 https://en.wikipedia.org/wiki/Stirling%27s_approximation
40 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
41 https://en.wikipedia.org/wiki/Factorial
42 https://en.wikipedia.org/wiki/Stirling%27s_approximation
43 https://en.wikipedia.org/wiki/Heapsort
44 https://en.wikipedia.org/wiki/Merge_sort
45 https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences
46 http://oeis.org/A036604


2.4.1 Lower bound for the average number of comparisons

A similar bound applies to the average number of comparisons. Assuming that


• all keys are distinct, i.e. every comparison will give either a>b or a<b, and
• the input is a random permutation, chosen uniformly from the set of all possible permu-
tations of n elements,
it is impossible to determine which order the input is in with fewer than log2 (n!) comparisons
on average.
This can be most easily seen using concepts from information theory47 . The Shannon en-
tropy48 of such a random permutation is log2 (n!) bits. Since a comparison can give only
two results, the maximum amount of information it provides is 1 bit. Therefore, after
k comparisons the remaining entropy of the permutation, given the results of those compar-
isons, is at least log2 (n!) − k bits on average. To perform the sort, complete information is
needed, so the remaining entropy must be 0. It follows that k must be at least log2 (n!) on
average.
The lower bound derived via information theory is called the 'information-theoretic lower
bound'. It is correct, but not necessarily the strongest lower bound; in some cases the
information-theoretic lower bound of a problem may even be far from the true lower bound.
For example, the information-theoretic lower bound for selection is ⌈log2 (n)⌉, whereas
n − 1 comparisons are needed by an adversarial argument. The interplay between the
information-theoretic lower bound and the true lower bound is much like that between a
real-valued function and an integer function it bounds from below. However, this is not
exactly correct when the average case is considered.
In analyzing the average case, the key question is what 'average' refers to: averaging over
what? The information-theoretic lower bound averages over the set of all permutations as
a whole, whereas a computer algorithm (under current models of computation) must treat
each permutation as an individual instance of the problem. Hence, the average lower bound
we are searching for is averaged over all individual cases.
To obtain the lower bound governing what computers can achieve, we adopt the
decision tree model49 . In this model50 , the bound to be shown is a lower bound on the
average length of root-to-leaf paths of a binary tree with n! leaves (in which each leaf
corresponds to a permutation). It can be shown that a balanced full binary tree achieves
the minimum average path length. With some careful calculation, for a balanced full
binary tree with n! leaves, the average length of root-to-leaf paths is given by
((2n! − 2^(⌊log2 n!⌋+1)) · ⌈log2 n!⌉ + (2^(⌊log2 n!⌋+1) − n!) · ⌊log2 n!⌋) / n!

47 https://en.wikipedia.org/wiki/Information_theory
48 https://en.wikipedia.org/wiki/Shannon_entropy
49 https://en.wikipedia.org/wiki/Decision_tree_model
50 https://en.wikipedia.org/wiki/Decision_tree_model


For example, for n = 3, the information-theoretic lower bound for the average case is ap-
proximately 2.58, while the average lower bound derived via Decision tree model51 is 8/3,
approximately 2.67.
In the case that multiple items may have the same key, there is no obvious statistical
interpretation for the term ”average case”, so an argument like the above cannot be applied
without making specific assumptions about the distribution of keys.

2.5 Notes
1. C, T H.52 ; L, C E.53 ; R, R L.54 ; S,
C55 (2009) [1990]. Introduction to Algorithms56 (3 .). MIT P 
MG-H. . 191–193. ISBN57 0-262-03384-458 .
2. Mark Wells, Applications of a language for computing in combinatorics, Information
Processing 65 (Proceedings of the 1965 IFIP Congress), 497–498, 1966.
3. Mark Wells, Elements of Combinatorial Computing, Pergamon Press, Oxford, 1971.
4. Takumi Kasai, Shusaku Sawato, Shigeki Iwata, Thirty four comparisons are required
to sort 13 items, LNCS 792, 260-269, 1994.
5. Marcin Peczarski, Sorting 13 elements requires 34 comparisons, LNCS 2461, 785–794,
2002.
6. Marcin Peczarski, New results in minimum-comparison sorting, Algorithmica 40 (2),
133–145, 2004.
7. Marcin Peczarski, Computer assisted research of posets, PhD thesis, University of
Warsaw, 2006.
8. P, M (2007). ”T F-J    -
    47 ”. Inf. Process. Lett. 101 (3): 126–128.
doi59 :10.1016/j.ipl.2006.09.00160 .
9. C, W; L, X; W, G; L, J (O 2007).
”最少比较排序问题中S(15)和S(19)的解决”61 [T   S(15)  S(19)
 -  ]. Journal of Frontiers of Computer
Science and Technology (in Chinese). 1 (3): 305–313.
10. P, M (3 A 2011). ”T O S  16 E-
”. Acta Universitatis Sapientiae. 4 (2): 215–224. arXiv62 :1108.086663 . Bib-
code64 :2011arXiv1108.0866P65 .

51 https://en.wikipedia.org/wiki/Decision_tree_model
52 https://en.wikipedia.org/wiki/Thomas_H._Cormen
53 https://en.wikipedia.org/wiki/Charles_E._Leiserson
54 https://en.wikipedia.org/wiki/Ron_Rivest
55 https://en.wikipedia.org/wiki/Clifford_Stein
56 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1016%2Fj.ipl.2006.09.001
61 http://fcst.ceaj.org/EN/abstract/abstract47.shtml
62 https://en.wikipedia.org/wiki/ArXiv_(identifier)
63 http://arxiv.org/abs/1108.0866
64 https://en.wikipedia.org/wiki/Bibcode_(identifier)
65 https://ui.adsabs.harvard.edu/abs/2011arXiv1108.0866P


2.6 References
• Donald Knuth66 . The Art of Computer Programming67 , Volume 3: Sorting and Search-
ing, Second Edition. Addison-Wesley, 1997. ISBN68 0-201-89685-069 . Section 5.3.1:
Minimum-Comparison Sorting, pp. 180–197.


66 https://en.wikipedia.org/wiki/Donald_Knuth
67 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
68 https://en.wikipedia.org/wiki/ISBN_(identifier)
69 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0

3 Selection sort


Selection sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n2 ) comparisons, O(n) swaps
Best-case performance: O(n2 ) comparisons, O(n) swaps
Average performance: O(n2 ) comparisons, O(n) swaps
Worst-case space complexity: O(1) auxiliary

In computer science7 , selection sort is an in-place8 comparison9 sorting algorithm10 . It


has an O11 (n2 ) time complexity12 , which makes it inefficient on large lists, and generally
performs worse than the similar insertion sort13 . Selection sort is noted for its simplicity
and has performance advantages over more complicated algorithms in certain situations,
particularly where auxiliary memory14 is limited.
The algorithm divides the input list into two parts: a sorted sublist of items which is built
up from left to right at the front (left) of the list and a sublist of the remaining unsorted
items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted

1 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources
2 https://en.wikipedia.org/wiki/Wikipedia:External_links
3 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources#Inline_citations
4 https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Fact_and_Reference_Check
5 https://en.wikipedia.org/wiki/Wikipedia:When_to_cite
6 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
7 https://en.wikipedia.org/wiki/Computer_science
8 https://en.wikipedia.org/wiki/In-place_algorithm
9 https://en.wikipedia.org/wiki/Comparison_sort
10 https://en.wikipedia.org/wiki/Sorting_algorithm
11 https://en.wikipedia.org/wiki/Big_O_notation
12 https://en.wikipedia.org/wiki/Time_complexity
13 https://en.wikipedia.org/wiki/Insertion_sort
14 https://en.wikipedia.org/wiki/Auxiliary_memory


sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest,
depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with
the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries
one element to the right.
The time efficiency of selection sort is quadratic, so there are a number of sorting techniques
which have better time complexity than selection sort. One thing which distinguishes
selection sort from other sorting algorithms is that it makes the minimum possible number
of swaps, n − 1 in the worst case.

3.1 Example

Here is an example of this sort algorithm sorting five elements:

Sorted sublist Unsorted sublist Least element in unsorted list


() (11, 25, 12, 22, 64) 11
(11) (25, 12, 22, 64) 12
(11, 12) (25, 22, 64) 22
(11, 12, 22) (25, 64) 25
(11, 12, 22, 25) (64) 64
(11, 12, 22, 25, 64) ()


Figure 9 Selection sort animation. Red is current min. Yellow is sorted list. Blue is
current item.

(Nothing appears changed on these last two lines because the last two numbers were already
in order.)
Selection sort can also be used on list structures that make add and remove efficient, such
as a linked list15 . In this case it is more common to remove the minimum element from the
remainder of the list, and then insert it at the end of the values sorted so far. For example:
arr[] = 64 25 12 22 11

15 https://en.wikipedia.org/wiki/Linked_list


// Find the minimum element in arr[0...4]


// and place it at beginning
11 25 12 22 64

// Find the minimum element in arr[1...4]


// and place it at beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4]


// and place it at beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4]


// and place it at beginning of arr[3...4]
11 12 22 25 64
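The remove-and-append scheme above might be sketched in C as follows; the node type and function name are illustrative, not from the article:

```c
#include <stddef.h>

struct node { int val; struct node *next; };

/* Repeatedly unlink the minimum node from the unsorted list and
   append it to the tail of the growing sorted list. */
static struct node *selection_sort_list(struct node *head)
{
    struct node *sorted = NULL, *tail = NULL;
    while (head) {
        /* min_link is the link slot that points at the minimum node */
        struct node **min_link = &head;
        for (struct node **p = &head; *p; p = &(*p)->next)
            if ((*p)->val < (*min_link)->val)
                min_link = p;
        /* unlink the minimum node */
        struct node *min = *min_link;
        *min_link = min->next;
        min->next = NULL;
        /* append it to the end of the sorted list */
        if (tail)
            tail->next = min;
        else
            sorted = min;
        tail = min;
    }
    return sorted;
}
```

No nodes are allocated or copied; only the next links are rewired, which is what makes this approach attractive for linked lists.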

3.2 Implementations


Below is an implementation in C28 . More implementations can be found on the talk page
of this Wikipedia article29 .

/* a[0] to a[aLength-1] is the array to sort */
int i,j;
int aLength; // initialise to a's length

/* advance the position through the entire array */
/* (could do i < aLength-1 because single element is also min element) */
for (i = 0; i < aLength-1; i++)
{
    /* find the min element in the unsorted a[i .. aLength-1] */
28 https://en.wikipedia.org/wiki/C_(programming_language)
29 https://en.wikipedia.org/wiki/Talk:Selection_sort#Implementations


    /* assume the min is the first element */
    int jMin = i;
    /* test against elements after i to find the smallest */
    for (j = i+1; j < aLength; j++)
    {
        /* if this element is less, then it is the new minimum */
        if (a[j] < a[jMin])
        {
            /* found new minimum; remember its index */
            jMin = j;
        }
    }

    if (jMin != i)
    {
        swap(a[i], a[jMin]);
    }
}

3.3 Complexity

Selection sort is not difficult to analyze compared to other sorting algorithms since none
of the loops depend on the data in the array. Selecting the minimum requires scanning n
elements (taking n − 1 comparisons) and then swapping it into the first position. Finding the
next lowest element requires scanning the remaining n − 1 elements and so on. Therefore,
the total number of comparisons is

(n − 1) + (n − 2) + ... + 1 = Σ(i=1..n−1) i

By arithmetic progression30 ,

Σ(i=1..n−1) i = ((n − 1) + 1)/2 · (n − 1) = n(n − 1)/2 = (n2 − n)/2

which is of complexity O(n2 ) in terms of number of comparisons. Each of these scans


requires one swap for n − 1 elements (the final element is already in place).

3.4 Comparison to other sorting algorithms

Among quadratic sorting algorithms (sorting algorithms with a simple average-case of


Θ(n2 )31 ), selection sort almost always outperforms bubble sort32 and gnome sort33 . In-
sertion sort34 is very similar in that after the kth iteration, the first k elements in the array
are in sorted order. Insertion sort's advantage is that it only scans as many elements as it

30 https://en.wikipedia.org/wiki/Arithmetic_progression
https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann%E2%80%93Landau_
31
notations
32 https://en.wikipedia.org/wiki/Bubble_sort
33 https://en.wikipedia.org/wiki/Gnome_sort
34 https://en.wikipedia.org/wiki/Insertion_sort


needs in order to place the k + 1st element, while selection sort must scan all remaining
elements to find the k + 1st element.
Simple calculation shows that insertion sort will therefore usually perform about half as
many comparisons as selection sort, although it can perform just as many or far fewer
depending on the order the array was in prior to sorting. It can be seen as an advantage
for some real-time35 applications that selection sort will perform identically regardless of
the order of the array, while insertion sort's running time can vary considerably. However,
this is more often an advantage for insertion sort in that it runs much more efficiently if the
array is already sorted or "close to sorted."
While selection sort is preferable to insertion sort in terms of number of writes (Θ(n) swaps
versus Ο(n2 ) swaps), it almost always far exceeds (and never beats) the number of writes
that cycle sort36 makes, as cycle sort is theoretically optimal in the number of writes.
This can be important if writes are significantly more expensive than reads, such as with
EEPROM37 or Flash38 memory, where every write lessens the lifespan of the memory.
Finally, selection sort is greatly outperformed on larger arrays by Θ(n log n) divide-and-
conquer algorithms39 such as mergesort40 . However, insertion sort or selection sort are both
typically faster for small arrays (i.e. fewer than 10–20 elements). A useful optimization in
practice for the recursive algorithms is to switch to insertion sort or selection sort for
"small enough" sublists.
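Such a hybrid can be sketched as follows. This is an illustrative C implementation under assumed names (hybrid_sort, CUTOFF); the cutoff of 16 is a placeholder for a value that would be tuned experimentally, and the merge uses a caller-supplied work array.

```c
#include <string.h>

#define CUTOFF 16   /* illustrative threshold; the best value is machine-dependent */

/* Insertion sort on the inclusive range a[lo..hi]. */
static void insertion_sort_range(int a[], int lo, int hi)
{
    for (int i = lo + 1; i <= hi; i++) {
        int x = a[i];
        int j = i - 1;
        while (j >= lo && a[j] > x) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = x;
    }
}

/* Merge the sorted halves a[lo..mid] and a[mid+1..hi] via tmp[]. */
static void merge_halves(int a[], int tmp[], int lo, int mid, int hi)
{
    memcpy(tmp + lo, a + lo, (size_t)(hi - lo + 1) * sizeof(int));
    int i = lo, j = mid + 1;
    for (int k = lo; k <= hi; k++) {
        if (i > mid)              a[k] = tmp[j++];
        else if (j > hi)          a[k] = tmp[i++];
        else if (tmp[j] < tmp[i]) a[k] = tmp[j++];
        else                      a[k] = tmp[i++];
    }
}

/* Hybrid mergesort: recurse as usual, but hand "small enough"
   sublists to insertion sort. */
void hybrid_sort(int a[], int tmp[], int lo, int hi)
{
    if (hi - lo < CUTOFF) {
        insertion_sort_range(a, lo, hi);
        return;
    }
    int mid = lo + (hi - lo) / 2;
    hybrid_sort(a, tmp, lo, mid);
    hybrid_sort(a, tmp, mid + 1, hi);
    merge_halves(a, tmp, lo, mid, hi);
}

/* Sorts a fixed 64-element permutation; returns 1 if the result is 0..63. */
int hybrid_demo(void)
{
    int a[64], tmp[64];
    for (int i = 0; i < 64; i++)
        a[i] = (i * 37) % 64;      /* a fixed permutation of 0..63 */
    hybrid_sort(a, tmp, 0, 63);
    for (int i = 0; i < 64; i++)
        if (a[i] != i)
            return 0;
    return 1;
}
```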

3.5 Variants

Heapsort41 greatly improves the basic algorithm by using an implicit42 heap43 data struc-
ture44 to speed up finding and removing the lowest datum. If implemented correctly, the
heap will allow finding the next lowest element in Θ(log n) time instead of Θ(n) for the
inner loop in normal selection sort, reducing the total running time to Θ(n log n).
A bidirectional variant of selection sort (sometimes called cocktail sort due to its similarity
to the bubble-sort variant cocktail shaker sort45 ) is an algorithm which finds both the
minimum and maximum values in the list in every pass. This reduces the number of scans
of the input by a factor of two. Each scan performs three comparisons per two elements (a
pair of elements is compared, then the greater is compared to the maximum and the lesser
is compared to the minimum), a 25% savings over regular selection sort, which does one
comparison per element. This is sometimes called double selection sort.
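A minimal C sketch of this bidirectional variant follows (illustrative code, not from the article). For simplicity it spends up to two comparisons per element in each scan rather than using the paired-comparison trick described above, but it shows the min/max-per-pass structure.

```c
/* Double selection sort: each pass locates both the minimum and the
   maximum of the unsorted middle region and swaps them into place. */
void double_selection_sort(int a[], int n)
{
    for (int lo = 0, hi = n - 1; lo < hi; lo++, hi--) {
        int min = lo, max = lo;
        for (int i = lo + 1; i <= hi; i++) {
            if (a[i] < a[min])      min = i;
            else if (a[i] > a[max]) max = i;
        }
        /* Move the minimum to the front of the region. */
        int t = a[lo]; a[lo] = a[min]; a[min] = t;
        /* If the maximum was sitting at lo, it was just moved to min. */
        if (max == lo)
            max = min;
        t = a[hi]; a[hi] = a[max]; a[max] = t;
    }
}

/* Sorts a fixed 9-element array; returns 1 if the result is 1..9. */
int double_sel_demo(void)
{
    int a[] = {9, 1, 8, 2, 7, 3, 6, 4, 5};
    double_selection_sort(a, 9);
    for (int i = 0; i < 9; i++)
        if (a[i] != i + 1)
            return 0;
    return 1;
}
```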

35 https://en.wikipedia.org/wiki/Real-time_computing
36 https://en.wikipedia.org/wiki/Cycle_sort
37 https://en.wikipedia.org/wiki/EEPROM
38 https://en.wikipedia.org/wiki/Flash_memory
39 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
40 https://en.wikipedia.org/wiki/Mergesort
41 https://en.wikipedia.org/wiki/Heapsort
42 https://en.wikipedia.org/wiki/Implicit_data_structure
43 https://en.wikipedia.org/wiki/Heap_(data_structure)
44 https://en.wikipedia.org/wiki/Data_structure
45 https://en.wikipedia.org/wiki/Cocktail_shaker_sort


Selection sort can be implemented as a stable sort46 . If, rather than swapping in step 2,
the minimum value is inserted into the first position (that is, all intervening items moved
down), the algorithm is stable. However, this modification either requires a data structure
that supports efficient insertions or deletions, such as a linked list, or it leads to performing
Θ(n2 ) writes.
In the bingo sort variant, items are ordered by repeatedly looking through the remaining
items to find the greatest value and moving all items with that value to their final location.[1]
Like counting sort47 , this is an efficient variant if there are many duplicate values. Indeed,
selection sort does one pass through the remaining items for each item moved. Bingo sort
does one pass for each value (not item): after an initial pass to find the biggest value, the
next passes can move every item with that value to its final location while finding the next
value as in the following pseudocode48 (arrays are zero-based and the for-loop includes both
the top and bottom limits, as in Pascal49 ):

bingo(array A)
{ This procedure sorts in ascending order. }
begin
    max := length(A)-1;
    { The first iteration is written to look very similar to the subsequent ones,
      but without swaps. }
    nextValue := A[max];
    for i := max - 1 downto 0 do
        if A[i] > nextValue then
            nextValue := A[i];
    while (max > 0) and (A[max] = nextValue) do
        max := max - 1;
    while max > 0 do begin
        value := nextValue;
        nextValue := A[max];
        for i := max - 1 downto 0 do
            if A[i] = value then begin
                swap(A[i], A[max]);
                max := max - 1;
            end else if A[i] > nextValue then
                nextValue := A[i];
        while (max > 0) and (A[max] = nextValue) do
            max := max - 1;
    end;
end;

Thus, if on average there are more than two items with the same value, bingo sort can be
expected to be faster because it executes the inner loop fewer times than selection sort.

3.6 See also


• Selection algorithm50

46 https://en.wikipedia.org/wiki/Sorting_algorithm#Classification
47 https://en.wikipedia.org/wiki/Counting_sort
48 https://en.wikipedia.org/wiki/Pseudocode
49 https://en.wikipedia.org/wiki/Pascal_(programming_language)
50 https://en.wikipedia.org/wiki/Selection_algorithm


3.7 References
1. This article incorporates public domain material51 from the NIST52 document:
Black, Paul E. "Bingo sort"53 . Dictionary of Algorithms and Data Structures54 .
• Donald Knuth55 . The Art of Computer Programming56 , Volume 3: Sorting and Searching,
Third Edition. Addison–Wesley, 1997. ISBN57 0-201-89685-058 . Pages 138–141 of Section
5.2.3: Sorting by Selection.
• Anany Levitin. Introduction to the Design & Analysis of Algorithms, 2nd Edition.
ISBN59 0-321-35828-760 . Section 3.1: Selection Sort, pp 98–100.
• Robert Sedgewick61 . Algorithms in C++, Parts 1–4: Fundamentals, Data Structure,
Sorting, Searching: Fundamentals, Data Structures, Sorting, Searching Pts. 1–4, Second
Edition. Addison–Wesley Longman, 1998. ISBN62 0-201-35088-263 . Pages 273–274

3.8 External links

The Wikibook Algorithm implementation64 has a page on the topic of: Selection
sort65

• Animated Sorting Algorithms: Selection Sort66 at the Wayback Machine67 (archived 7
March 2015) – graphical demonstration


51 https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States
52 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
53 https://xlinux.nist.gov/dads/HTML/bingosort.html
54 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
55 https://en.wikipedia.org/wiki/Donald_Knuth
56 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/0-321-35828-7
61 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
62 https://en.wikipedia.org/wiki/ISBN_(identifier)
63 https://en.wikipedia.org/wiki/Special:BookSources/0-201-35088-2
64 https://en.wikibooks.org/wiki/Algorithm_implementation
65 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Selection_sort
66 https://web.archive.org/web/20150307110315/http://www.sorting-algorithms.com/selection-sort
67 https://en.wikipedia.org/wiki/Wayback_Machine

4 Insertion sort

Insertion sort
Animation of insertion sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) comparisons and swaps
Best-case performance: O(n) comparisons, O(1) swaps
Average performance: O(n²) comparisons and swaps
Worst-case space complexity: O(n) total, O(1) auxiliary

Insertion sort is a simple sorting algorithm1 that builds the final sorted array2 (or list) one
item at a time. It is much less efficient on large lists than more advanced algorithms such as
quicksort3 , heapsort4 , or merge sort5 . However, insertion sort provides several advantages:
• Simple implementation: Jon Bentley6 shows a three-line C7 version, and a five-line opti-
mized8 version[1]
• Efficient for (quite) small data sets, much like other quadratic sorting algorithms
• More efficient in practice than most other simple quadratic (i.e., O9 (n2 )) algorithms such
as selection sort10 or bubble sort11
• Adaptive12 , i.e., efficient for data sets that are already substantially sorted: the time
complexity13 is O14 (kn) when each element in the input is no more than k places away
from its sorted position
• Stable15 ; i.e., does not change the relative order of elements with equal keys

1 https://en.wikipedia.org/wiki/Sorting_algorithm
2 https://en.wikipedia.org/wiki/Sorted_array
3 https://en.wikipedia.org/wiki/Quicksort
4 https://en.wikipedia.org/wiki/Heapsort
5 https://en.wikipedia.org/wiki/Merge_sort
6 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
7 https://en.wikipedia.org/wiki/C_(programming_language)
8 https://en.wikipedia.org/wiki/Program_optimization
9 https://en.wikipedia.org/wiki/Big_O_notation
10 https://en.wikipedia.org/wiki/Selection_sort
11 https://en.wikipedia.org/wiki/Bubble_sort
12 https://en.wikipedia.org/wiki/Adaptive_sort
13 https://en.wikipedia.org/wiki/Time_complexity
14 https://en.wikipedia.org/wiki/Big_O_notation
15 https://en.wikipedia.org/wiki/Stable_sort


• In-place16 ; i.e., only requires a constant amount O(1) of additional memory space
• Online17 ; i.e., can sort a list as it receives it
When people manually sort cards in a bridge hand, most use a method that is similar to
insertion sort.[2]

4.1 Algorithm

Figure 11 A graphical example of insertion sort. The partial sorted list (black) initially
contains only the first element in the list. With each iteration one element (red) is removed
from the ”not yet checked for order” input data and inserted in-place into the sorted list.

Insertion sort iterates18 , consuming one input element each repetition, and growing a sorted
output list. At each iteration, insertion sort removes one element from the input data, finds
the location it belongs within the sorted list, and inserts it there. It repeats until no input
elements remain.
Sorting is typically done in-place, by iterating up the array, growing the sorted list behind
it. At each array-position, it checks the value there against the largest value in the sorted
list (which happens to be next to it, in the previous array-position checked). If larger, it
leaves the element in place and moves to the next. If smaller, it finds the correct position
within the sorted list, shifts all the larger values up to make a space, and inserts into that
correct position.

16 https://en.wikipedia.org/wiki/In-place_algorithm
17 https://en.wikipedia.org/wiki/Online_algorithm
18 https://en.wikipedia.org/wiki/Iteration


The resulting array after k iterations has the property where the first k + 1 entries are
sorted (”+1” because the first entry is skipped). In each iteration the first remaining entry
of the input is removed, and inserted into the result at the correct position, thus extending
the result:

Figure 12 Array prior to the insertion of x

becomes

Figure 13 Array after the insertion of x

with each element greater than x copied to the right as it is compared against x.
The most common variant of insertion sort, which operates on arrays, can be described as
follows:
1. Suppose there exists a function called Insert designed to insert a value into a sorted
sequence at the beginning of an array. It operates by beginning at the end of the
sequence and shifting each element one place to the right until a suitable position is
found for the new element. The function has the side effect of overwriting the value
stored immediately after the sorted sequence in the array.
2. To perform an insertion sort, begin at the left-most element of the array and invoke
Insert to insert each element encountered into its correct position. The ordered se-
quence into which the element is inserted is stored at the beginning of the array in the
set of indices already examined. Each insertion overwrites a single value: the value
being inserted.
Pseudocode19 of the complete algorithm follows, where the arrays are zero-based20 :[1]
i ← 1
while i < length(A)
    j ← i
    while j > 0 and A[j-1] > A[j]
        swap A[j] and A[j-1]
        j ← j - 1
    end while
    i ← i + 1
end while

19 https://en.wikipedia.org/wiki/Pseudocode
20 https://en.wikipedia.org/wiki/Zero-based_numbering


The outer loop runs over all the elements except the first one, because the single-element
prefix A[0:1] is trivially sorted, so the invariant21 that the first i entries are sorted is true
from the start. The inner loop moves element A[i] to its correct place so that after the
loop, the first i+1 elements are sorted. Note that the and-operator in the test must use
short-circuit evaluation22 , otherwise the test might result in an array bounds error23 , when
j=0 and it tries to evaluate A[j-1] > A[j] (i.e. accessing A[-1] fails).
After expanding the swap operation in-place as x ← A[j]; A[j] ← A[j-1]; A[j-1] ←
x (where x is a temporary variable), a slightly faster version can be produced that moves
A[i] to its position in one go and only performs one assignment in the inner loop body:[1]
i ← 1
while i < length(A)
    x ← A[i]
    j ← i - 1
    while j >= 0 and A[j] > x
        A[j+1] ← A[j]
        j ← j - 1
    end while
    A[j+1] ← x[3]
    i ← i + 1
end while

The new inner loop shifts elements to the right to clear a spot for x = A[i].
The algorithm can also be implemented in a recursive way. The recursion just replaces
the outer loop, calling itself and storing successively smaller values of n on the stack until
n equals 0, where the function then returns back up the call chain to execute the code
after each recursive call starting with n equal to 1, with n increasing by 1 as each instance
of the function returns to the prior instance. The initial call would be insertionSortR(A,
length(A)-1).
function insertionSortR(array A, int n)
    if n > 0
        insertionSortR(A, n-1)
        x ← A[n]
        j ← n-1
        while j >= 0 and A[j] > x
            A[j+1] ← A[j]
            j ← j-1
        end while
        A[j+1] ← x
    end if
end function

It does not make the code any shorter and does not reduce the execution time, but it
increases the additional memory consumption from O(1) to O(N): at the deepest level of
recursion the stack contains N references to the A array, each with an accompanying value of
the variable n from N down to 1.

21 https://en.wikipedia.org/wiki/Invariant_(computer_science)
22 https://en.wikipedia.org/wiki/Short-circuit_evaluation
23 https://en.wikipedia.org/wiki/Bounds_checking


4.2 Best, worst, and average cases

The best case input is an array that is already sorted. In this case insertion sort has a linear
running time (i.e., O(n)). During each iteration, the first remaining element of the input is
only compared with the right-most element of the sorted subsection of the array.
The simplest worst case input is an array sorted in reverse order. The set of all worst case
inputs consists of all arrays where each element is the smallest or second-smallest of the
elements before it. In these cases every iteration of the inner loop will scan and shift the
entire sorted subsection of the array before inserting the next element. This gives insertion
sort a quadratic running time (i.e., O(n2 )).
The average case is also quadratic[4] , which makes insertion sort impractical for sorting
large arrays. However, insertion sort is one of the fastest algorithms for sorting very small
arrays, even faster than quicksort25 ; indeed, good quicksort26 implementations use insertion
sort for arrays smaller than a certain threshold, also when arising as subproblems; the exact
threshold must be determined experimentally and depends on the machine, but is commonly
around ten.
Example: The following table shows the steps for sorting the sequence {3, 7, 4, 9, 5, 2, 6,
1}. In each step, the key under consideration is underlined. The key that was moved (or
left in place because it was biggest yet considered) in the previous step is marked with an
asterisk.
3 7 4 9 5 2 6 1
3* 7 4 9 5 2 6 1
3 7* 4 9 5 2 6 1
3 4* 7 9 5 2 6 1
3 4 7 9* 5 2 6 1
3 4 5* 7 9 2 6 1
2* 3 4 5 7 9 6 1
2 3 4 5 6* 7 9 1
1* 2 3 4 5 6 7 9
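These cases can be verified with an instrumented implementation. The C sketch below (illustrative, not from the article) counts every evaluation of the inner-loop guard: an already sorted 10-element input costs 9 comparisons (n − 1), while a reverse-sorted one costs 45 = 10·9/2.

```c
/* Insertion sort instrumented with a comparison counter.  The comma
   operator counts every evaluation of the guard a[j] > x. */
long insertion_sort_count(int a[], int n)
{
    long comparisons = 0;
    for (int i = 1; i < n; i++) {
        int x = a[i];
        int j = i - 1;
        while (j >= 0 && (comparisons++, a[j] > x)) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = x;
    }
    return comparisons;
}

/* Best case: already sorted, n - 1 comparisons. */
long best_case_count(void)
{
    int a[10];
    for (int i = 0; i < 10; i++) a[i] = i;
    return insertion_sort_count(a, 10);
}

/* Worst case: reverse sorted, n(n-1)/2 comparisons. */
long worst_case_count(void)
{
    int a[10];
    for (int i = 0; i < 10; i++) a[i] = 10 - i;
    return insertion_sort_count(a, 10);
}
```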

4.3 Relation to other sorting algorithms

Insertion sort is very similar to selection sort27 . As in selection sort, after k passes through
the array, the first k elements are in sorted order. However, the fundamental difference
between the two algorithms is that for selection sort these are the k smallest elements of the
unsorted input, while in insertion sort they are simply the first k elements of the input. The
primary advantage of insertion sort over selection sort is that selection sort must always
scan all remaining elements to find the absolute smallest element in the unsorted portion of
the list, while insertion sort requires only a single comparison when the (k + 1)-st element
is greater than the k-th element; when this is frequently true (such as if the input array
is already sorted or partially sorted), insertion sort is distinctly more efficient compared to
selection sort. On average (assuming the rank of the (k + 1)-st element is random),

25 https://en.wikipedia.org/wiki/Quicksort
26 https://en.wikipedia.org/wiki/Quicksort
27 https://en.wikipedia.org/wiki/Selection_sort


insertion sort will require comparing and shifting half of the previous k elements, meaning
that insertion sort will perform about half as many comparisons as selection sort on average.
In the worst case for insertion sort (when the input array is reverse-sorted), insertion sort
performs just as many comparisons as selection sort. However, a disadvantage of insertion
sort over selection sort is that it requires more writes due to the fact that, on each iteration,
inserting the (k + 1)-st element into the sorted portion of the array requires many element
swaps to shift all of the following elements, while only a single swap is required for each
iteration of selection sort. In general, insertion sort will write to the array O(n2 ) times,
whereas selection sort will write only O(n) times. For this reason selection sort may be
preferable in cases where writing to memory is significantly more expensive than reading,
such as with EEPROM28 or flash memory29 .
While some divide-and-conquer algorithms30 such as quicksort31 and mergesort32 outper-
form insertion sort for larger arrays, non-recursive sorting algorithms such as insertion sort
or selection sort are generally faster for very small arrays (the exact size varies by envi-
ronment and implementation, but is typically between 7 and 50 elements). Therefore, a
useful optimization in the implementation of those algorithms is a hybrid approach, using
the simpler algorithm when the array has been divided to a small size.[1]

4.4 Variants

D. L. Shell33 made substantial improvements to the algorithm; the modified version is called
Shell sort34 . The sorting algorithm compares elements separated by a distance that decreases
on each pass. Shell sort has distinctly improved running times in practical work, with two
simple variants requiring O(n^(3/2)) and O(n^(4/3)) running time.[5][6]
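A sketch of Shell sort using the simplest gap sequence (n/2, n/4, ..., 1) follows. This is illustrative C code; note that the improved bounds cited above require more carefully chosen gap sequences than the plain halving shown here, which still has an O(n²) worst case.

```c
/* Shell sort with the halving gap sequence.  Each pass is an insertion
   sort over elements that lie a gap apart; later passes with smaller
   gaps meet data that is already "close to sorted". */
void shell_sort(int a[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i++) {
            int x = a[i];
            int j = i;
            while (j >= gap && a[j - gap] > x) {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = x;
        }
    }
}

/* Sorts a fixed 8-element array; returns 1 if the result is nondecreasing. */
int shell_demo(void)
{
    int a[] = {23, 4, 42, 15, 8, 16, 0, 1};
    shell_sort(a, 8);
    for (int i = 0; i < 7; i++)
        if (a[i] > a[i + 1])
            return 0;
    return 1;
}
```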
If the cost of comparisons exceeds the cost of swaps, as is the case for example with string
keys stored by reference or with human interaction (such as choosing one of a pair displayed
side-by-side), then using binary insertion sort[citation needed ] may yield better performance.
Binary insertion sort employs a binary search36 to determine the correct location to insert
new elements, and therefore performs ⌈log2 n⌉ comparisons in the worst case, which is
O(n log n). The algorithm as a whole still has a running time of O(n2 ) on average because
of the series of swaps required for each insertion.
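A C sketch of binary insertion sort follows (illustrative implementation). The binary search finds the leftmost position after any equal elements, which keeps the sort stable, but the shifting loop still moves O(n) elements per insertion, so the overall running time remains O(n²).

```c
/* Binary insertion sort: binary-search the sorted prefix for the
   insertion point (O(log i) comparisons), then shift to make room
   (still O(i) moves). */
void binary_insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int x = a[i];
        /* Find the position in a[0..i-1] where x belongs; searching
           past equal elements keeps the sort stable. */
        int lo = 0, hi = i;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] <= x)
                lo = mid + 1;
            else
                hi = mid;
        }
        for (int j = i; j > lo; j--)   /* shift the tail right */
            a[j] = a[j - 1];
        a[lo] = x;
    }
}

/* Sorts a fixed 6-element array with a duplicate; returns 1 on success. */
int bis_demo(void)
{
    int a[] = {5, 2, 9, 1, 5, 6};
    int want[] = {1, 2, 5, 5, 6, 9};
    binary_insertion_sort(a, 6);
    for (int i = 0; i < 6; i++)
        if (a[i] != want[i])
            return 0;
    return 1;
}
```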
The number of swaps can be reduced by calculating the position of multiple elements before
moving them. For example, if the target position of two elements is calculated before they
are moved into the proper position, the number of swaps can be reduced by about 25% for
random data. In the extreme case, this variant works similar to merge sort37 .

28 https://en.wikipedia.org/wiki/EEPROM
29 https://en.wikipedia.org/wiki/Flash_memory
30 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
31 https://en.wikipedia.org/wiki/Quicksort
32 https://en.wikipedia.org/wiki/Mergesort
33 https://en.wikipedia.org/wiki/Donald_Shell
34 https://en.wikipedia.org/wiki/Shellsort
36 https://en.wikipedia.org/wiki/Binary_search_algorithm
37 https://en.wikipedia.org/wiki/Merge_sort


A variant named binary merge sort uses a binary insertion sort to sort groups of 32 elements,
followed by a final sort using merge sort38 . It combines the speed of insertion sort on small
data sets with the speed of merge sort on large data sets.[7]
To avoid having to make a series of swaps for each insertion, the input could be stored in
a linked list39 , which allows elements to be spliced into or out of the list in constant time
when the position in the list is known. However, searching a linked list requires sequentially
following the links to the desired position: a linked list does not have random access, so it
cannot use a faster method such as binary search. Therefore, the running time required for
searching is O(n), and the time for sorting is O(n2 ). If a more sophisticated data structure40
(e.g., heap41 or binary tree42 ) is used, the time required for searching and insertion can be
reduced significantly; this is the essence of heap sort43 and binary tree sort44 .
In 2006 Bender, Martin Farach-Colton45 , and Mosteiro published a new variant of insertion
sort called library sort46 or gapped insertion sort that leaves a small number of unused
spaces (i.e., ”gaps”) spread throughout the array. The benefit is that insertions need only
shift elements over until a gap is reached. The authors show that this sorting algorithm
runs with high probability in O(n log n) time.[8]
If a skip list47 is used, the insertion time is brought down to O(log n), and swaps are not
needed because the skip list is implemented on a linked list structure. The final running
time for insertion would be O(n log n).
List insertion sort is a variant of insertion sort. It reduces the number of
movements.[citation needed]

4.4.1 List insertion sort code in C

If the items are stored in a linked list, then the list can be sorted with O(1) additional space.
The algorithm starts with an initially empty (and therefore trivially sorted) list. The input
items are taken off the list one at a time, and then inserted in the proper place in the sorted
list. When the input list is empty, the sorted list has the desired result.

struct LIST * SortList1(struct LIST * pList)
{
    // zero or one element in list
    if (pList == NULL || pList->pNext == NULL)
        return pList;
    // head is the first element of resulting sorted list
    struct LIST * head = NULL;
    while (pList != NULL) {
        struct LIST * current = pList;

38 https://en.wikipedia.org/wiki/Merge_sort
39 https://en.wikipedia.org/wiki/Linked_list
40 https://en.wikipedia.org/wiki/Data_structure
41 https://en.wikipedia.org/wiki/Heap_(data_structure)
42 https://en.wikipedia.org/wiki/Binary_tree
43 https://en.wikipedia.org/wiki/Heap_sort
44 https://en.wikipedia.org/wiki/Binary_tree_sort
45 https://en.wikipedia.org/wiki/Martin_Farach-Colton
46 https://en.wikipedia.org/wiki/Library_sort
47 https://en.wikipedia.org/wiki/Skip_list


        pList = pList->pNext;
        if (head == NULL || current->iValue < head->iValue) {
            // insert into the head of the sorted list
            // or as the first element into an empty sorted list
            current->pNext = head;
            head = current;
        } else {
            // insert current element into proper position in non-empty sorted list
            struct LIST * p = head;
            while (p != NULL) {
                if (p->pNext == NULL || // last element of the sorted list
                    current->iValue < p->pNext->iValue) // middle of the list
                {
                    // insert into middle of the sorted list or as the last element
                    current->pNext = p->pNext;
                    p->pNext = current;
                    break; // done
                }
                p = p->pNext;
            }
        }
    }
    return head;
}

The algorithm below uses a trailing pointer[9] for the insertion into the sorted list. A simpler
recursive method rebuilds the list each time (rather than splicing) and can use O(n) stack
space.

struct LIST
{
    struct LIST * pNext;
    int iValue;
};

struct LIST * SortList(struct LIST * pList)
{
    // zero or one element in list
    if (!pList || !pList->pNext)
        return pList;

    /* build up the sorted list from the empty list */
    struct LIST * pSorted = NULL;

    /* take items off the input list one by one until empty */
    while (pList != NULL) {
        /* remember the head */
        struct LIST * pHead = pList;
        /* trailing pointer for efficient splice */
        struct LIST ** ppTrail = &pSorted;

        /* pop head off list */
        pList = pList->pNext;

        /* splice head into sorted list at proper place */
        while (!(*ppTrail == NULL || pHead->iValue < (*ppTrail)->iValue)) {
            /* head does not belong here - continue down the list */
            ppTrail = &(*ppTrail)->pNext;
        }

        pHead->pNext = *ppTrail;
        *ppTrail = pHead;
    }


    return pSorted;
}

4.5 References
1. B, J (2000), Programming Pearls, ACM Press/Addison–Wesley, pp. 107–
109
2. S, R49 (1983), Algorithms50 , A-W, . 9551 ,
ISBN52 978-0-201-06672-253 .
3. C, T H.54 ; L, C E.55 ; R, R L.56 ; S,
C57 (2009) [1990]. ”S 2.1: I ”. Introduction to Al-
gorithms58 (3 .). MIT P  MG-H. . 16–18. ISBN59 0-262-
03384-460 .. See in particular p. 18.
4. S, K. ”W    Θ(^2)    ? (-
  ””)”61 . S O.
5. F, R. M.; L, R. B. (1960). ”A H-S S P”.
Communications of the ACM. 3 (1): 20–22. doi62 :10.1145/366947.36695763 .
6. S, R64 (1986). ”A N U B  S”. Journal
of Algorithms. 7 (2): 159–173. doi65 :10.1016/0196-6774(86)90001-566 .
7. ”B M S”67
8. B, M A.; F-C, M68 ; M, M A.
(2006), ”I   O(n log n)”, Theory of Computing Systems, 39 (3): 391–
397, arXiv69 :cs/040700370 , doi71 :10.1007/s00224-005-1237-z72 , MR73 221840974

49 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
50 https://archive.org/details/algorithms00sedg/page/95
51 https://archive.org/details/algorithms00sedg/page/95
52 https://en.wikipedia.org/wiki/ISBN_(identifier)
53 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-06672-2
54 https://en.wikipedia.org/wiki/Thomas_H._Cormen
55 https://en.wikipedia.org/wiki/Charles_E._Leiserson
56 https://en.wikipedia.org/wiki/Ron_Rivest
57 https://en.wikipedia.org/wiki/Clifford_Stein
58 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
61 https://stackoverflow.com/a/17055342
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.1145%2F366947.366957
64 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1016%2F0196-6774%2886%2990001-5
67 https://docs.google.com/file/d/0B8KIVX-AaaGiYzcta0pFUXJnNG8
68 https://en.wikipedia.org/wiki/Martin_Farach-Colton
69 https://en.wikipedia.org/wiki/ArXiv_(identifier)
70 http://arxiv.org/abs/cs/0407003
71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1007%2Fs00224-005-1237-z
73 https://en.wikipedia.org/wiki/MR_(identifier)
74 http://www.ams.org/mathscinet-getitem?mr=2218409


9. H, C (.), ”T P T”, Euler75 , V C S
U,  22 S 2012.

4.6 Further reading


• K, D76 (1998), ”5.2.1: S  I”, The Art of Computer
Programming77 , 3. S  S ( .), A-W, . 80–
105, ISBN78 0-201-89685-079 .

4.7 External links

The Wikibook Algorithm implementation80 has a page on the topic of: Insertion
sort81

Wikimedia Commons has media related to Insertion sort82 .

• Animated Sorting Algorithms: Insertion Sort83 at the Wayback Machine84 (archived 8
March 2015) – graphical demonstration
• Adamovsky, John Paul, Binary Insertion Sort – Scoreboard – Complete Investigation
and C Implementation85 , Pathcom.
• Insertion Sort – a comparison with other O(n²) sorting algorithms86 , UK87 : Core War.
• Category:Insertion Sort88 (wiki), LiteratePrograms – implementations of insertion
sort in various programming languages


75 http://euler.vcsu.edu:7000/11421/
76 https://en.wikipedia.org/wiki/Donald_Knuth
77 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
80 https://en.wikibooks.org/wiki/Algorithm_implementation
81 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Insertion_sort
82 https://commons.wikimedia.org/wiki/Category:Insertion_sort
83 https://web.archive.org/web/20150308232109/http://www.sorting-algorithms.com/insertion-sort
84 https://en.wikipedia.org/wiki/Wayback_Machine
85 http://www.pathcom.com/~vadco/binary.html
86 http://corewar.co.uk/assembly/insertion.htm
87 https://en.wikipedia.org/wiki/United_Kingdom
88 http://literateprograms.org/Category:Insertion_sort

5 Merge sort

A divide and combine sorting algorithm


Merge sort
An example of merge sort. First divide the list into the smallest unit (1 element), then
compare each element with the adjacent list to sort and merge the two adjacent lists.
Finally all the elements are sorted and merged.
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) typical, O(n) natural variant
Average performance: O(n log n)
Worst-case space complexity: O(n) total with O(n) auxiliary, O(1) auxiliary with linked lists[1]

In computer science6 , merge sort (also commonly spelled mergesort) is an efficient,
general-purpose, comparison-based7 sorting algorithm8 . Most implementations produce a
stable sort9 , which means that the order of equal elements is the same in the input and

6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Comparison_sort
8 https://en.wikipedia.org/wiki/Sorting_algorithm
9 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability


output. Merge sort is a divide and conquer algorithm10 that was invented by John von Neu-
mann11 in 1945.[2] A detailed description and analysis of bottom-up mergesort appeared in
a report by Goldstine12 and von Neumann13 as early as 1948.[3]

5.1 Algorithm

Conceptually, a merge sort works as follows:
1. Divide the unsorted list into n sublists, each containing one element (a list of one
element is considered sorted).
2. Repeatedly merge14 sublists to produce new sorted sublists until there is only one
sublist remaining. This will be the sorted list.

5.1.1 Top-down implementation

Example C-like15 code using indices for the top-down merge sort algorithm that recursively
splits the list (called runs in this example) into sublists until sublist size is 1, then merges
those sublists to produce a sorted list. The copy-back step is avoided by alternating the
direction of the merge with each level of recursion (except for an initial one-time copy). To
help understand this, consider an array with 2 elements. The elements are copied to B[],
then merged back to A[]. If there are 4 elements, when the bottom recursion level is
reached, single-element runs from A[] are merged to B[], and then at the next higher level
of recursion, those 2-element runs are merged to A[]. This pattern continues with each level
of recursion.

// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
    CopyArray(A, 0, n, B);            // one time copy of A[] to B[]
    TopDownSplitMerge(B, 0, n, A);    // sort data from B[] into A[]
}

// Sort the given run of array A[] using array B[] as a source.
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
    if (iEnd - iBegin < 2)                        // if run size == 1
        return;                                   //   consider it sorted
    // split the run longer than 1 item into halves
    iMiddle = (iEnd + iBegin) / 2;                // iMiddle = mid point
    // recursively sort both runs from array A[] into B[]
    TopDownSplitMerge(A, iBegin, iMiddle, B);     // sort the left run
    TopDownSplitMerge(A, iMiddle, iEnd, B);       // sort the right run
    // merge the resulting runs from array B[] into A[]
    TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}

10 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
11 https://en.wikipedia.org/wiki/John_von_Neumann
12 https://en.wikipedia.org/wiki/Herman_Goldstine
13 https://en.wikipedia.org/wiki/John_von_Neumann
14 https://en.wikipedia.org/wiki/Merge_algorithm
15 https://en.wikipedia.org/wiki/C-like


// Left source half is A[iBegin:iMiddle-1].
// Right source half is A[iMiddle:iEnd-1].
// Result is B[iBegin:iEnd-1].
void TopDownMerge(A[], iBegin, iMiddle, iEnd, B[])
{
    i = iBegin, j = iMiddle;

    // While there are elements in the left or right runs...
    for (k = iBegin; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(A[], iBegin, iEnd, B[])
{
    for (k = iBegin; k < iEnd; k++)
        B[k] = A[k];
}

5.1.2 Bottom-up implementation

Example C-like code using indices for the bottom-up merge sort algorithm, which treats the
list as an array of n sublists (called runs in this example) of size 1, and iteratively merges
sublists back and forth between two buffers:

// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
    // Each 1-element run in A is already "sorted".
    // Make successively longer sorted runs of length 2, 4, 8, 16... until the whole array is sorted.
    for (width = 1; width < n; width = 2 * width)
    {
        // Array A is full of runs of length width.
        for (i = 0; i < n; i = i + 2 * width)
        {
            // Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[]
            // or copy A[i:n-1] to B[] ( if (i+width >= n) )
            BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
        }
        // Now work array B is full of runs of length 2*width.
        // Copy array B to array A for the next iteration.
        // A more efficient implementation would swap the roles of A and B.
        CopyArray(B, A, n);
        // Now array A is full of runs of length 2*width.
    }
}

// Left run is A[iLeft :iRight-1].
// Right run is A[iRight:iEnd-1].
void BottomUpMerge(A[], iLeft, iRight, iEnd, B[])
{
    i = iLeft, j = iRight;
    // While there are elements in the left or right runs...
    for (k = iLeft; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iRight && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(B[], A[], n)
{
    for (i = 0; i < n; i++)
        A[i] = B[i];
}
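The same ping-pong scheme can be written in a few lines of Python. This sketch swaps the roles of the two buffers after each pass (as the comment in the C-like code suggests) instead of copying B back to A:

```python
def bottom_up_merge_sort(items):
    """Iterative merge sort over widths 1, 2, 4, ..., mirroring the C-like code above."""
    n = len(items)
    a = list(items)              # work on a copy of the input
    b = [None] * n               # work array
    width = 1
    while width < n:
        for i in range(0, n, 2 * width):
            mid = min(i + width, n)
            end = min(i + 2 * width, n)
            # merge runs a[i:mid] and a[mid:end] into b[i:end]
            l, r = i, mid
            for k in range(i, end):
                if l < mid and (r >= end or a[l] <= a[r]):
                    b[k] = a[l]; l += 1
                else:
                    b[k] = a[r]; r += 1
        a, b = b, a              # swap buffer roles instead of copying
        width *= 2
    return a
```
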

5.1.3 Top-down implementation using lists

Pseudocode16 for the top-down merge sort algorithm, which recursively divides the input list
into smaller sublists until the sublists are trivially sorted, and then merges the sublists
while returning up the call chain.
function merge_sort(list m) is
    // Base case. A list of zero or one elements is sorted, by definition.
    if length of m ≤ 1 then
        return m

    // Recursive case. First, divide the list into equal-sized sublists
    // consisting of the first half and second half of the list.
    // This assumes lists start at index 0.
    var left := empty list
    var right := empty list
    for each x with index i in m do
        if i < (length of m)/2 then
            add x to left
        else
            add x to right

    // Recursively sort both sublists.
    left := merge_sort(left)
    right := merge_sort(right)

    // Then merge the now-sorted sublists.
    return merge(left, right)

In this example, the merge function merges the left and right sublists.
function merge(left, right) is
    var result := empty list

    while left is not empty and right is not empty do
        if first(left) ≤ first(right) then
            append first(left) to result
            left := rest(left)
        else
            append first(right) to result
            right := rest(right)

    // Either left or right may have elements left; consume them.
    // (Only one of the following loops will actually be entered.)
    while left is not empty do

16 https://en.wikipedia.org/wiki/Pseudocode


        append first(left) to result
        left := rest(left)
    while right is not empty do
        append first(right) to result
        right := rest(right)
    return result

5.1.4 Bottom-up implementation using lists

Pseudocode17 for the bottom-up merge sort algorithm, which uses a small fixed-size array of
references to nodes, where array[i] is either a reference to a list of size 2^i or nil18 . node is
a reference or pointer to a node. The merge() function would be similar to the one shown
in the top-down merge lists example; it merges two already sorted lists, and handles empty
lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
    // return if empty list
    if head = nil then
        return nil
    var node array[32]; initially all nil
    var node result
    var node next
    var int i
    result := head
    // merge nodes into array
    while result ≠ nil do
        next := result.next;
        result.next := nil
        for (i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
            result := merge(array[i], result)
            array[i] := nil
        // do not go past end of array
        if i = 32 then
            i -= 1
        array[i] := result
        result := next
    // merge array into single list
    result := nil
    for (i = 0; i < 32; i += 1) do
        result := merge(array[i], result)
    return result
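The slot-array technique above works like binary addition: each incoming one-element run is "carried" upward through occupied slots. The following Python sketch models the linked lists with ordinary Python lists, which keeps the structure visible without pointer manipulation:

```python
def merge(a, b):
    """Merge two sorted lists; handles empty inputs, as merge() must here."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def merge_sort_runs(items):
    """Bottom-up merge sort where slots[i] holds a sorted run of 2**i items (or None)."""
    slots = [None] * 32
    for x in items:
        run = [x]                        # new run of size 1
        i = 0
        while i < 32 and slots[i] is not None:
            run = merge(slots[i], run)   # carry, like binary addition
            slots[i] = None
            i += 1
        if i == 32:                      # do not go past end of array
            i = 31
        slots[i] = run
    # merge the remaining slots into a single list
    result = []
    for run in slots:
        if run is not None:
            result = merge(run, result)
    return result
```
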

5.2 Natural merge sort

A natural merge sort is similar to a bottom-up merge sort except that any naturally occur-
ring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (al-
ternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being
convenient data structures (used as FIFO queues19 or LIFO stacks20 ).[4] In the bottom-up
merge sort, the starting point assumes each run is one item long. In practice, random input

17 https://en.wikipedia.org/wiki/Pseudocode
18 https://en.wikipedia.org/wiki/Null_pointer
19 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
20 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)


data will have many short runs that just happen to be sorted. In the typical case, the
natural merge sort may not need as many passes because there are fewer runs to merge.
In the best case, the input is already sorted (i.e., is one run), so the natural merge sort
need only make one pass through the data. In many practical cases, long natural runs
are present, and for that reason natural merge sort is exploited as the key component of
Timsort21 . Example:
Start : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge : (2 3 4)(1 5 7 8 9)(0 6)
Merge : (1 2 3 4 5 7 8 9)(0 6)
Merge : (0 1 2 3 4 5 6 7 8 9)
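The run-selection and pairwise-merge passes traced above can be sketched in Python. This sketch detects only ascending (non-decreasing) runs; exploiting descending runs as well, as Timsort does, would require reversing them first:

```python
import heapq

def split_runs(data):
    """Split data into maximal non-decreasing runs (ascending runs only)."""
    if not data:
        return []
    runs, start = [], 0
    for i in range(1, len(data)):
        if data[i] < data[i - 1]:        # a run ends where the order breaks
            runs.append(data[start:i])
            start = i
    runs.append(data[start:])
    return runs

def natural_merge_sort(data):
    """Repeatedly merge adjacent runs, as in the worked example above."""
    runs = split_runs(data)
    if not runs:
        return []
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs) - 1, 2):
            merged.append(list(heapq.merge(runs[i], runs[i + 1])))
        if len(runs) % 2:                # odd run out waits for the next pass
            merged.append(runs[-1])
        runs = merged
    return runs[0]
```
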

Tournament replacement selection sorts22 are used to gather the initial runs for external
sorting algorithms.

21 https://en.wikipedia.org/wiki/Timsort
22 https://en.wikipedia.org/wiki/Tournament_sort


5.3 Analysis

Figure 14 A recursive merge sort algorithm used to sort an array of 7 integer values.
These are the steps a human would take to emulate merge sort (top-down).

In sorting n objects, merge sort has an average23 and worst-case performance24 of
O25 (n log n). If the running time of merge sort for a list of length n is T(n), then the
recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the al-
gorithm to two lists of half the size of the original list, and add the n steps taken to merge
the resulting two lists). The closed form follows from the master theorem for divide-and-
conquer recurrences26 .
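The recurrence can be checked numerically: unfolding T(n) = 2T(n/2) + n with T(1) = 0 gives exactly n·log2 (n) when n is a power of two, matching the closed form:

```python
def T(n):
    """Unfold the merge-sort recurrence T(n) = 2*T(n/2) + n, with T(1) = 0."""
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# For n a power of two, T(n) equals n * log2(n) exactly.
```
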

23 https://en.wikipedia.org/wiki/Average_performance
24 https://en.wikipedia.org/wiki/Worst-case_performance
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)


In the worst case, the number of comparisons merge sort makes is given by the sorting
numbers27 . These numbers are equal to or slightly smaller than (n⌈lg n⌉ − 2^⌈lg n⌉ + 1),
where lg28 is the binary logarithm; this is between (n lg n − n + 1) and (n lg n + n + O(lg n)).[5]
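The bound can be cross-checked numerically. The sketch below assumes the standard worst-case merge cost of a + b − 1 comparisons for runs of lengths a and b, which yields the recurrence W(n) = W(⌊n/2⌋) + W(⌈n/2⌉) + n − 1; this agrees with the closed form for small n:

```python
from math import ceil, log2

def worst_comparisons(n):
    """Worst-case comparison count from the merge recurrence, W(1) = 0."""
    if n <= 1:
        return 0
    return worst_comparisons(n // 2) + worst_comparisons((n + 1) // 2) + n - 1

def sorting_number(n):
    """Closed form n*ceil(lg n) - 2**ceil(lg n) + 1 from the text."""
    c = ceil(log2(n))
    return n * c - 2 ** c + 1
```
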
For large n and a randomly ordered input list, merge sort's expected (average) number of
comparisons approaches α·n fewer than the worst case, where α = −1 + ∑_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.

In the worst case, merge sort does about 39% fewer comparisons than quicksort29 does in
the average case. In terms of moves, merge sort's worst case complexity is O30 (n log n)—
the same complexity as quicksort's best case, and merge sort's best case takes about half
as many iterations as the worst case.[citation needed]
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can
only be efficiently accessed sequentially, and is thus popular in languages such as Lisp32 ,
where sequentially accessed data structures are very common. Unlike some (efficient) im-
plementations of quicksort, merge sort is a stable sort.
Merge sort's most common implementation does not sort in place;[6] therefore, the memory
size of the input must be allocated for the sorted output to be stored in (see below for
versions that need only n/2 extra spaces).

5.4 Variants

Variants of merge sort are primarily concerned with reducing the space complexity and the
cost of copying.
A simple alternative for reducing the space overhead to n/2 is to maintain left and right as
a combined structure, copy only the left part of m into temporary space, and to direct the
merge routine to place the merged output into m. With this version it is better to allocate
the temporary space outside the merge routine, so that only one allocation is needed. The
excessive copying mentioned previously is also mitigated, since the last pair of lines before
the return result statement (function merge in the pseudocode above) become superfluous.
One drawback of merge sort, when implemented on arrays, is its O(n) working memory
requirement. Several in-place33 variants have been suggested:
• Katajainen et al. present an algorithm that requires a constant amount of working mem-
ory: enough storage space to hold one element of the input array, and additional space
to hold O(1) pointers into the input array. They achieve an O(n log n) time bound with
small constants, but their algorithm is not stable.[7]
• Several attempts have been made at producing an in-place merge algorithm that can
be combined with a standard (top-down or bottom-up) merge sort to produce an in-

27 https://en.wikipedia.org/wiki/Sorting_number
28 https://en.wikipedia.org/wiki/Binary_logarithm
29 https://en.wikipedia.org/wiki/Quicksort
30 https://en.wikipedia.org/wiki/Big_O_notation
32 https://en.wikipedia.org/wiki/Lisp_programming_language
33 https://en.wikipedia.org/wiki/In-place_algorithm


place merge sort. In this case, the notion of ”in-place” can be relaxed to mean ”taking
logarithmic stack space”, because standard merge sort requires that amount of space
for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is
possible in O(n log n) time using a constant amount of scratch space, but their algorithm
is complicated and has high constant factors: merging arrays of length n and m can take
5n + 12m + o(m) moves.[8] Bing-Chao Huang and Michael A. Langston[9] later presented
a straightforward, practical in-place merge that combines two sorted lists in linear time
and constant extra space, building on the work of Kronrod and others. Their algorithm
takes somewhat more time on average than standard merge sort algorithms that are free
to exploit O(n) temporary extra memory cells, but by less than a factor of two. Although
the algorithm is fast in practice, it is unstable for some lists; using similar concepts,
however, they were able to resolve this problem. Other in-place algorithms include
SymMerge, which takes O((n + m) log (n + m)) time in total and is stable.[10] Plugging
such an algorithm into merge sort increases its complexity to the non-linearithmic34 , but
still quasilinear35 , O(n (log n)²).
• A modern stable linear and in-place merging is block merge sort36 .
An alternative to reduce the copying into multiple lists is to associate a new field of infor-
mation with each key (the elements in m are called keys). This field will be used to link
the keys and any associated information together in a sorted list (a key and its related
information is called a record). Then the merging of the sorted lists proceeds by changing
the link values; no records need to be moved at all. A field which contains only a link will
generally be smaller than an entire record so less space will also be used. This is a standard
sorting technique, not restricted to merge sort.

34 https://en.wikipedia.org/wiki/Linearithmic
35 https://en.wikipedia.org/wiki/Quasilinear_time
36 https://en.wikipedia.org/wiki/Block_merge_sort


5.5 Use with tape drives

Figure 15 Merge sort type algorithms allowed large data sets to be sorted on early
computers that had small random access memories by modern standards. Records were
stored on magnetic tape and processed on banks of magnetic tape drives, such as these
IBM 729s.

An external37 merge sort is practical to run using disk38 or tape39 drives when the data to
be sorted is too large to fit into memory40 . External sorting41 explains how merge sort is
implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is
sequential (except for rewinds at the end of each pass). A minimal implementation can get
by with just two record buffers and a few program variables.
Naming the four tape drives as A, B, C, D, with the original data on A, and using only 2
record buffers, the algorithm is similar to Bottom-up implementation42 , using pairs of tape
drives instead of arrays in memory. The basic algorithm can be described as follows:

37 https://en.wikipedia.org/wiki/External_sorting
38 https://en.wikipedia.org/wiki/Disk_storage
39 https://en.wikipedia.org/wiki/Tape_drive
40 https://en.wikipedia.org/wiki/Primary_storage
41 https://en.wikipedia.org/wiki/External_sorting
42 #Bottom-up_implementation


1. Merge pairs of records from A; writing two-record sublists alternately to C and D.
2. Merge two-record sublists from C and D into four-record sublists; writing these alternately to A and B.
3. Merge four-record sublists from A and B into eight-record sublists; writing these alternately to C and D.
4. Repeat until you have one list containing all the data, sorted, in log2 (n) passes.
Instead of starting with very short runs, usually a hybrid algorithm43 is used, where the
initial pass will read many records into memory, do an internal sort to create a long run,
and then distribute those long runs onto the output set. This step avoids many early passes.
For example, an internal sort of 1024 records will save nine passes. The internal sort is
often made as large as possible because the benefit is so great. In fact, there are techniques that can make the
initial runs longer than the available internal memory.[11]
With some overhead, the above algorithm can be modified to use three tapes. O(n log n)
running time can also be achieved using two queues44 , or a stack45 and a queue, or three
stacks. In the other direction, using k > 2 tapes (and O(k) items in memory), we can
reduce the number of tape operations by a factor of O(log k) by using a k/2-way merge46 .
A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase
merge sort47 .

43 https://en.wikipedia.org/wiki/Hybrid_algorithm
44 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
45 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
46 https://en.wikipedia.org/wiki/K-way_merge_algorithm
47 https://en.wikipedia.org/wiki/Polyphase_merge_sort


5.6 Optimizing merge sort

Figure 16 Tiled merge sort applied to an array of random integers. The horizontal axis
is the array index and the vertical axis is the integer.

On modern computers, locality of reference48 can be of paramount importance in software
optimization49 , because multilevel memory hierarchies50 are used. Cache51 -aware versions
of the merge sort algorithm, whose operations have been specifically chosen to minimize
the movement of pages in and out of a machine's memory cache, have been proposed. For
example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of
size S are reached, where S is the number of data items fitting into a CPU's cache. Each
of these subarrays is sorted with an in-place sorting algorithm such as insertion sort52 ,
to discourage memory swaps, and normal merge sort is then completed in the standard

48 https://en.wikipedia.org/wiki/Locality_of_reference
49 https://en.wikipedia.org/wiki/Software_optimization
50 https://en.wikipedia.org/wiki/Memory_hierarchy
51 https://en.wikipedia.org/wiki/Cache_(computing)
52 https://en.wikipedia.org/wiki/Insertion_sort


recursive fashion. This algorithm has demonstrated better performance[example needed] on
machines that benefit from cache optimization. (LaMarca & Ladner 199754 )
Kronrod (1969)55 suggested an alternative version of merge sort that uses constant addi-
tional space. This algorithm was later refined. (Katajainen, Pasanen & Teuhola 199656 )
Also, many applications of external sorting58 use a form of merge sorting where the input
is split into a larger number of sublists, ideally to a number for which merging them still
makes the currently processed set of pages59 fit into main memory.

5.7 Parallel merge sort

Merge sort parallelizes well due to the use of the divide-and-conquer60 method. Several
different parallel variants of the algorithm have been developed over the years. Some parallel
merge sort algorithms are strongly related to the sequential top-down merge algorithm while
others have a different general structure and use the K-way merge61 method.

5.7.1 Merge sort with parallel recursion

The sequential merge sort procedure can be described in two phases, the divide phase and
the merge phase. The first consists of many recursive calls that repeatedly perform the same
division process until the subsequences are trivially sorted (containing one or no element).
An intuitive approach is the parallelization of those recursive calls.[12] The following pseudocode
describes the merge sort with parallel recursion using the fork and join62 keywords:
// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then    // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)
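A Python sketch of the same fork/join structure follows, using one thread per fork. This only illustrates the control structure; in CPython, threads will not actually speed up the CPU-bound work, and a real implementation would cap the recursion depth at which threads are spawned:

```python
import threading

def merge(a, lo, mid, hi):
    """Sequential merge of a[lo:mid] and a[mid:hi] back into a."""
    left, right = a[lo:mid], a[mid:hi]
    i = j = 0
    for k in range(lo, hi):
        if i < len(left) and (j >= len(right) or left[i] <= right[j]):
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

def mergesort(a, lo, hi):
    if lo + 1 < hi:                     # two or more elements
        mid = (lo + hi) // 2
        t = threading.Thread(target=mergesort, args=(a, lo, mid))
        t.start()                       # "fork": sort left half concurrently
        mergesort(a, mid, hi)           # sort right half in this thread
        t.join()                        # "join": wait for the left half
        merge(a, lo, mid, hi)
```

Note that, as in the pseudocode, the merge itself remains sequential, which is exactly the bottleneck discussed next.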

This algorithm is the trivial modification of the sequential version and does not parallelize
well. Therefore, its speedup is not very impressive. It has a span63 of Θ(n), which is
only an improvement of Θ(log n) compared to the sequential version (see Introduction to

54 #CITEREFLaMarcaLadner1997
55 #CITEREFKronrod1969
56 #CITEREFKatajainenPasanenTeuhola1996
57 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
58 https://en.wikipedia.org/wiki/External_sorting
59 https://en.wikipedia.org/wiki/Page_(computer_memory)
60 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
61 https://en.wikipedia.org/wiki/K-way_merge_algorithm
62 https://en.wikipedia.org/wiki/Fork%E2%80%93join_model
63 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms#Overview


Algorithms64 ). This is mainly due to the sequential merge method, as it is the bottleneck
of the parallel executions.

5.7.2 Merge sort with parallel merging

Main article: Merge algorithm § Parallel merge65
Better parallelism can be achieved by using a parallel merge algorithm66 . Cormen et al.67 present a binary variant that merges
two sorted sub-sequences into one sorted output sequence.[12]
In one of the sequences (the longer one if unequal length), the element of the middle index
is selected. Its position in the other sequence is determined in such a way that this sequence
would remain sorted if this element were inserted at this position. Thus, one knows how
many other elements from both sequences are smaller and the position of the selected
element in the output sequence can be calculated. For the partial sequences of the smaller
and larger elements created in this way, the merge algorithm is again executed in parallel
until the base case of the recursion is reached.
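The divide step described above can be sketched with a binary search. This sequential sketch only computes one split; a parallel merge would recurse on the two resulting pairs concurrently:

```python
from bisect import bisect_left

def split_for_merge(a, b):
    """One divide step of the parallel merge: pick the middle element of the
    longer sorted run and locate, by binary search, where it would be inserted
    in the other run. Everything left of the two split points is <= everything
    to the right, so the two halves can be merged independently."""
    if len(a) < len(b):
        a, b = b, a                      # make a the longer sequence
    i = len(a) // 2                      # middle index of the longer run
    j = bisect_left(b, a[i])             # insertion point of a[i] in b
    # (a[:i], b[:j]) and (a[i:], b[j:]) are the two independent subproblems;
    # the selected element's position in the output is i + j.
    return a, b, i, j
```
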
The following pseudocode shows the modified parallel merge sort method using the parallel
merge algorithm (adopted from Cormen et al.).
/**
 * A: Input array
 * B: Output array
 * lo: lower bound
 * hi: upper bound
 * off: offset
 */
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)

In order to analyze a recurrence relation68 for the worst case span, the recursive calls
of parallelMergesort have to be incorporated only once due to their parallel execution,
obtaining
T∞^sort (n) = T∞^sort (n/2) + T∞^merge (n) = T∞^sort (n/2) + Θ((log n)²).
For detailed information about the complexity of the parallel merge procedure, see Merge
algorithm69 .
The solution of this recurrence is given by

64 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
65 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
66 https://en.wikipedia.org/wiki/Merge_algorithm
67 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
68 https://en.wikipedia.org/wiki/Recurrence_relation
69 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge


T∞^sort = Θ((log n)³).
This parallel merge algorithm reaches a parallelism of Θ(n / (log n)²), which is much higher
than the parallelism of the previous algorithm. Such a sort can perform well in practice when
combined with a fast stable sequential sort, such as insertion sort70 , and a fast sequential
merge as a base case for merging small arrays.[13]

5.7.3 Parallel multiway merge sort

It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there
are usually p > 2 processors available. A better approach may be to use a K-way merge71
method, a generalization of binary merge, in which k sorted sequences are merged together.
This merge variant is well suited to describe a sorting algorithm on a PRAM72[14][15] .

Basic Idea

Figure 17 The parallel multiway mergesort process on four processors t0 to t3 .

70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/K-way_merge_algorithm
72 https://en.wikipedia.org/wiki/Parallel_random-access_machine


Given an unsorted sequence of n elements, the goal is to sort the sequence with p available
processors73 . These elements are distributed equally among all processors and sorted locally
using a sequential Sorting algorithm74 . Hence, the sequence consists of sorted sequences
S1 , ..., Sp of length ⌈n/p⌉. For simplification let n be a multiple of p, so that |Si | = n/p for
i = 1, ..., p.
These sequences will be used to perform a multisequence selection/splitter selection. For
j = 1, ..., p, the algorithm determines splitter elements vj with global rank k = j·n/p. Then
the corresponding positions of v1 , ..., vp in each sequence Si are determined with binary
search75 and thus the Si are further partitioned into p subsequences Si,1 , ..., Si,p with
Si,j := {x ∈ Si |rank(vj−1 ) < rank(x) ≤ rank(vj )}.
Furthermore, the elements of S1,i , ..., Sp,i are assigned to processor i, meaning all elements
between rank (i − 1)·n/p and rank i·n/p, which are distributed over all Si . Thus, each processor
receives a sequence of sorted sequences. The fact that the rank k of the splitter elements
vi was chosen globally provides two important properties: On the one hand, k was chosen
so that each processor can still operate on n/p elements after assignment. The algorithm is
perfectly load-balanced76 . On the other hand, all elements on processor i are less than or
equal to all elements on processor i + 1. Hence, each processor performs the p-way merge77
locally and thus obtains a sorted sequence from its sub-sequences. Because of the second
property, no further p-way-merge has to be performed, the results only have to be put
together in the order of the processor number.
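Each processor's final step is thus a p-way merge of the sub-sequences assigned to it. The heap-based version of that merge can be sketched as follows (O(N log p) for N total elements across p runs):

```python
import heapq

def p_way_merge(runs):
    """Merge p sorted runs in one pass using a min-heap of run heads.
    Heap entries are (value, run index, position); the run index breaks ties."""
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, pos = heapq.heappop(heap)
        out.append(value)
        if pos + 1 < len(runs[i]):                       # advance within run i
            heapq.heappush(heap, (runs[i][pos + 1], i, pos + 1))
    return out
```

The standard library's `heapq.merge` implements the same idea lazily.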

Multisequence selection

In its simplest form, given p sorted sequences S1 , ..., Sp distributed evenly on p processors
and a rank k, the task is to find an element x with a global rank k in the union of the
sequences. Hence, this can be used to divide each Si into two parts at a splitter index li ,
where the lower part contains only elements which are smaller than x, while the elements
bigger than x are located in the upper part.
The presented sequential algorithm returns the indices of the splits in each sequence,
e.g. the indices li in sequences Si such that Si [li ] has a global rank less than k and
rank (Si [li + 1]) ≥ k.[16]
algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
    for i = 1 to p do
        (l_i, r_i) = (0, |S_i|-1)

    while there exists i: l_i < r_i do
        // pick pivot element in S_j[l_j],..,S_j[r_j], choose random j uniformly
        v := pickPivot(S, l, r)
        for i = 1 to p do
            m_i = binarySearch(v, S_i[l_i, r_i])    // sequentially
        if m_1 + ... + m_p >= k then    // m_1 + ... + m_p is the global rank of v
            r := m    // vector assignment
        else

73 https://en.wikipedia.org/wiki/Processor_(computing)
74 https://en.wikipedia.org/wiki/Sorting_algorithm
75 https://en.wikipedia.org/wiki/Binary_search_algorithm
76 https://en.wikipedia.org/wiki/Load_balancing_(computing)
77 https://en.wikipedia.org/wiki/K-way_merge_algorithm


            l := m

    return l
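The contract of msSelect can be illustrated with a brute-force reference version. This sketch is not the pivot-based O(p log(n/p) log(n)) algorithm above; it simply sorts everything to find the rank-k element, and it assumes distinct elements so that the split indices are unique. It is useful as an oracle for testing a real implementation:

```python
from bisect import bisect_right

def ms_select_reference(seqs, k):
    """Reference multisequence selection (distinct elements assumed):
    returns split indices l_i such that exactly k elements lie in the
    lower parts seqs[i][:l_i]. Brute force; for checking only."""
    everything = sorted(x for s in seqs for x in s)
    x = everything[k - 1]                  # the element of global rank k
    return [bisect_right(s, x) for s in seqs]
```
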

For the complexity analysis the PRAM78 model is chosen. If the data is evenly dis-
tributed over all p, the p-fold execution of the binarySearch method has a running time
of O(p log(n/p)). The expected recursion depth is O(log(∑i |Si |)) = O(log(n)), as in the
ordinary Quickselect79 . Thus the overall expected running time is O(p log(n/p) log(n)).
Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel
such that all splitter elements of rank i·n/p for i = 1, ..., p are found simultaneously. These
splitter elements can then be used to partition each sequence in p parts, with the same total
running time of O(p log(n/p) log(n)).

Pseudocode

Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We
assume that there is a barrier synchronization before and after the multisequence selection
such that every processor can determine the splitting elements and the sequence partition
properly.
/**
 * d: Unsorted Array of Elements
 * n: Number of Elements
 * p: Number of Processors
 * return Sorted Array
 */
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                          // the output array
    for i = 1 to p do in parallel                 // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]            // Sequence of length n/p
        sort(S_i)                                 // sort locally
        synch
        v_i := msSelect([S_1,...,S_p], i * n/p)   // element with global rank i * n/p
        synch
        (S_i,1, ..., S_i,p) := sequence_partitioning(s_i, v_1, ..., v_p)    // split s_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(s_1,i, ..., s_p,i)             // merge and assign to output array

    return o

Analysis

Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with
complexity O (n/p log(n/p)). After that, the splitter elements have to be calculated in time
O (p log(n/p) log(n)). Finally, each group of p splits has to be merged in parallel by each

78 https://en.wikipedia.org/wiki/Parallel_random-access_machine
79 https://en.wikipedia.org/wiki/Quickselect


processor with a running time of O(log(p)n/p) using a sequential p-way merge algorithm80 .
Thus, the overall running time is given by
O((n/p) log(n/p) + p log(n/p) log(n) + (n/p) log(p)).

Practical adaptation and application

The multiway merge sort algorithm is very scalable through its high parallelization capabil-
ity, which allows the use of many processors. This makes the algorithm a viable candidate
for sorting large amounts of data, such as those processed in computer clusters81 . Also,
since in such systems memory is usually not a limiting resource, the disadvantage of space
complexity of merge sort is negligible. However, other factors become important in such
systems, which are not taken into account when modelling on a PRAM82 . Here, the follow-
ing aspects need to be considered: Memory hierarchy83 , when the data does not fit into the
processors cache, or the communication overhead of exchanging data between processors,
which could become a bottleneck when the data can no longer be accessed via the shared
memory.
Sanders84 et al. have presented in their paper a bulk synchronous parallel85 algorithm for
multilevel multiway mergesort, which divides p processors into r groups of size p′ . All
processors sort locally first. Unlike single level multiway mergesort, these sequences are
then partitioned into r parts and assigned to the appropriate processor groups. These
steps are repeated recursively in those groups. This reduces communication and especially
avoids problems with many small messages. The hierarchical structure of the underlying real
network can be used to define the processor groups (e.g. racks86 , clusters87 ,...).[15]

5.7.4 Further Variants

Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with
Richard Cole using a clever subsampling algorithm to ensure O(1) merge.[17] Other sophis-
ticated parallel sorting algorithms can achieve the same or better time bounds with a lower
constant. For example, in 1991 David Powers described a parallelized quicksort88 (and a
related radix sort89 ) that can operate in O(log n) time on a CRCW90 parallel random-access
machine91 (PRAM) with n processors by performing partitioning implicitly.[18] Powers fur-
ther shows that a pipelined version of Batcher's Bitonic Mergesort92 at O((log n)2 ) time

80 https://en.wikipedia.org/wiki/Merge_algorithm
81 https://en.wikipedia.org/wiki/Computer_cluster
82 https://en.wikipedia.org/wiki/Parallel_random-access_machine
83 https://en.wikipedia.org/wiki/Memory_hierarchy
84 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
85 https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
86 https://en.wikipedia.org/wiki/19-inch_rack
87 https://en.wikipedia.org/wiki/Computer_cluster
88 https://en.wikipedia.org/wiki/Quicksort
89 https://en.wikipedia.org/wiki/Radix_sort
90 https://en.wikipedia.org/wiki/CRCW
91 https://en.wikipedia.org/wiki/Parallel_random-access_machine
92 https://en.wikipedia.org/wiki/Bitonic_sorter


on a butterfly sorting network93 is in practice actually faster than his O(log n) sorts on a
PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix
and parallel sorting.[19]

5.8 Comparison with other sort algorithms

Although heapsort94 has the same time bounds as merge sort, it requires only Θ(1) auxiliary
space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort95
implementations generally outperform mergesort for sorting RAM-based arrays.[citation needed]
On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-
access sequential media. Merge sort is often the best choice for sorting a linked list97 : in this
situation it is relatively easy to implement a merge sort in such a way that it requires only
Θ(1) extra space, and the slow random-access performance of a linked list makes some other
algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely
impossible.
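To make the linked-list case concrete, here is a sketch in C (illustrative code, not from the article; the Node type and function names are assumptions). It rearranges only the next pointers, so apart from the O(log n) recursion stack it needs no extra space; the strictly Θ(1)-space formulations mentioned above avoid even that stack:

```c
#include <stddef.h>
#include <assert.h>

typedef struct Node { int val; struct Node *next; } Node;

/* Merge two sorted lists; stable, since ties are taken from `a` first. */
static Node *merge_lists(Node *a, Node *b) {
    Node head, *tail = &head;
    while (a && b) {
        if (a->val <= b->val) { tail->next = a; a = a->next; }
        else                  { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return head.next;
}

/* Top-down merge sort of a singly linked list: split with the
   slow/fast-pointer technique, sort both halves, merge. */
static Node *merge_sort_list(Node *head) {
    if (!head || !head->next) return head;
    Node *slow = head, *fast = head->next;
    while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
    Node *right = slow->next;
    slow->next = NULL;                 /* cut the list in half */
    return merge_lists(merge_sort_list(head), merge_sort_list(right));
}
```

Because only pointers move, each record stays where it was allocated, which is exactly why the slow random access of linked lists does not hurt merge sort the way it hurts quicksort or heapsort.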
As of Perl98 5.8, merge sort is its default sorting algorithm (it was quicksort in previous
versions of Perl). In Java99 , the Arrays.sort()100 methods use merge sort or a tuned quicksort
depending on the datatypes and for implementation efficiency switch to insertion sort101
when fewer than seven array elements are being sorted.[20] The Linux102 kernel uses merge
sort for its linked lists.[21] Python103 uses Timsort104 , another tuned hybrid of merge sort
and insertion sort, that has become the standard sort algorithm in Java SE 7105 (for arrays
of non-primitive types),[22] on the Android platform106 ,[23] and in GNU Octave107 .[24]

5.9 Notes
1. Skiena (2008108 , p. 122)
2. Knuth (1998109 , p. 158)
3. K, J; T, J L (M 1997). ”A 
   ”110 (PDF). Proceedings of the 3rd Italian Con-

93 https://en.wikipedia.org/wiki/Sorting_network
94 https://en.wikipedia.org/wiki/Heapsort
95 https://en.wikipedia.org/wiki/Quicksort
97 https://en.wikipedia.org/wiki/Linked_list
98 https://en.wikipedia.org/wiki/Perl
99 https://en.wikipedia.org/wiki/Java_platform
https://docs.oracle.com/javase/9/docs/api/java/util/Arrays.html#sort-java.lang.
100
Object:A-
101 https://en.wikipedia.org/wiki/Insertion_sort
102 https://en.wikipedia.org/wiki/Linux
103 https://en.wikipedia.org/wiki/Python_(programming_language)
104 https://en.wikipedia.org/wiki/Timsort
105 https://en.wikipedia.org/wiki/Java_7
106 https://en.wikipedia.org/wiki/Android_(operating_system)
107 https://en.wikipedia.org/wiki/GNU_Octave
108 #CITEREFSkiena2008
109 #CITEREFKnuth1998
110 http://hjemmesider.diku.dk/~jyrki/Paper/CIAC97.pdf


ference on Algorithms and Complexity. Italian Conference on Algorithms and
Complexity. Rome. pp. 217–228. CiteSeerX111 10.1.1.86.3154112 . doi113 :10.1007/3-540-
62592-5_74114 .
4. Powers, David M. W. and McMahon Graham B. (1983), ”A compendium of interesting
prolog programs”, DCS Technical Report 8313, Department of Computer Science,
University of New South Wales.
5. The worst case number given here does not agree with that given in Knuth116 's Art
of Computer Programming117 , Vol 3. The discrepancy is due to Knuth analyzing a
variant implementation of merge sort that is slightly suboptimal.
6. Cormen; Leiserson; Rivest; Stein. Introduction to Algorithms. p. 151.
ISBN118 978-0-262-03384-8119 .
7. K, J; P, T; T, J (1996). ”P-
 - ”. Nordic J. Computing. 3 (1): 27–40. Cite-
SeerX120 10.1.1.22.8523121 .
8. G, V; K, J; P, T (2000). ”A-
  - ”. Theoretical Computer Science. 237 (1–2):
159–181. doi122 :10.1016/S0304-3975(98)00162-5123 .
9. H, B-C; L, M A. (M 1988). ”P-
 I-P M”. Communications of the ACM. 31 (3): 348–352.
doi124 :10.1145/42392.42403125 .
10. K, P-S; K, A (2004). Stable Minimum Storage Merging by
Symmetric Comparisons. European Symp. Algorithms. Lecture Notes in Computer
Science. 3221. pp. 714–723. CiteSeerX126 10.1.1.102.4612127 . doi128 :10.1007/978-3-
540-30140-0_63129 . ISBN130 978-3-540-23025-0131 .
11. Selection sort. Knuth's snowplow. Natural merge.
12. Cormen et al. 2009132 , pp. 797–805

111 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
112 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.3154
113 https://en.wikipedia.org/wiki/Doi_(identifier)
114 https://doi.org/10.1007%2F3-540-62592-5_74
116 https://en.wikipedia.org/wiki/Donald_Knuth
117 https://en.wikipedia.org/wiki/Art_of_Computer_Programming
118 https://en.wikipedia.org/wiki/ISBN_(identifier)
119 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
120 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
121 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1016%2FS0304-3975%2898%2900162-5
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1145%2F42392.42403
126 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
127 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.4612
128 https://en.wikipedia.org/wiki/Doi_(identifier)
129 https://doi.org/10.1007%2F978-3-540-30140-0_63
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-23025-0
132 #CITEREFCormenLeisersonRivestStein2009


13. Victor J. Duvanenko ”Parallel Merge Sort” Dr. Dobb's Journal & blog[1]134 and
GitHub repo C++ implementation [2]135
14. Peter Sanders, Johannes Singler (2008). Lecture notes, Parallel algorithms. Last visited
2020-02-05. 136
15. ”Practical Massively Parallel Sorting”. Proceedings of the 27th
ACM Symposium on Parallelism in Algorithms and Architectures.
doi137 :10.1145/2755573.2755595138 .
16. Peter Sanders (2019). Lecture notes, Parallel algorithms. Last visited 2020-02-05. 140
17. C, R (A 1988). ”P  ”. SIAM J. Comput.
17 (4): 770–785. CiteSeerX141 10.1.1.464.7118142 . doi143 :10.1137/0217049144 .CS1
maint: ref=harv (link145 )
18. Powers, David M. W. Parallelized Quicksort and Radixsort with Optimal Speedup146 ,
Proceedings of International Conference on Parallel Computing Technologies. Novosi-
birsk147 . 1991.
19. David M. W. Powers, Parallel Unification: Practical Complexity148 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
20. OpenJDK src/java.base/share/classes/java/util/Arrays.java @ 53904:9c3fe09f69bc149
21. linux kernel /lib/list_sort.c150
22. jjb. ”Commit 6804124: Replace ”modified mergesort” in
java.util.Arrays.sort with timsort”151 . Java Development Kit 7 Hg repo.
Archived152 from the original on 2018-01-26. Retrieved 24 Feb 2011.
23. ”C: ..TS<T>”153 . Android JDK Documentation. Archived
from the original154 on January 20, 2015. Retrieved 19 Jan 2015.
24. ”//-.”155 . Mercurial repository of Octave source code.
Lines 23-25 of the initial comment block. Retrieved 18 Feb 2013. Code stolen in large

134 https://duvanenko.tech.blog/2018/01/13/parallel-merge-sort/
135 https://github.com/DragonSpit/ParallelAlgorithms
136 http://algo2.iti.kit.edu/sanders/courses/paralg08/singler.pdf
137 https://en.wikipedia.org/wiki/Doi_(identifier)
138 https://doi.org/10.1145%2F2755573.2755595
140 http://algo2.iti.kit.edu/sanders/courses/paralg19/vorlesung.pdf
141 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
142 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.464.7118
143 https://en.wikipedia.org/wiki/Doi_(identifier)
144 https://doi.org/10.1137%2F0217049
146 http://citeseer.ist.psu.edu/327487.html
147 https://en.wikipedia.org/wiki/Novosibirsk
148 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://hg.openjdk.java.net/jdk/jdk/file/9c3fe09f69bc/src/java.base/share/classes/
149
java/util/Arrays.java#l1331
150 https://github.com/torvalds/linux/blob/master/lib/list_sort.c
151 http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/bfd7abda8f79
https://web.archive.org/web/20180126184957/http://hg.openjdk.java.net/jdk7/jdk7/jdk/
152
rev/bfd7abda8f79
https://web.archive.org/web/20150120063131/https://android.googlesource.com/platform/
153
libcore/%2B/jb-mr2-release/luni/src/main/java/java/util/TimSort.java
https://android.googlesource.com/platform/libcore/+/jb-mr2-release/luni/src/main/
154
java/java/util/TimSort.java
155 http://hg.savannah.gnu.org/hgweb/octave/file/0486a29d780f/liboctave/util/oct-sort.cc


part from Python's, listobject.c, which itself had no license header. However, thanks
to Tim Peters156 for the parts of the code I ripped-off.

5.10 References
• C, T H.157 ; L, C E.158 ; R, R L.159 ; S,
C160 (2009) [1990]. Introduction to Algorithms161 (3 .). MIT P 
MG-H. ISBN162 0-262-03384-4163 .CS1 maint: ref=harv (link164 )
• K, J; P, T; T, J (1996). ”P -
 ”165 . Nordic Journal of Computing. 3. pp. 27–40. ISSN166 1236-
6064167 . Archived from the original168 on 2011-08-07. Retrieved 2009-04-04.CS1 maint:
ref=harv (link169 ). Also Practical In-Place Mergesort170 . Also [3]171
• K, D172 (1998). ”S 5.2.4: S  M”. Sorting and
Searching. The Art of Computer Programming173 . 3 (2nd ed.). Addison-Wesley.
pp. 158–168. ISBN174 0-201-89685-0175 .CS1 maint: ref=harv (link176 )
• K, M. A. (1969). ”O    
”. Soviet Mathematics - Doklady. 10. p. 744.CS1 maint: ref=harv (link177 )
• LM, A.; L, R. E. (1997). ”T      -
  ”. Proc. 8th Ann. ACM-SIAM Symp. On Discrete Algorithms
(SODA97): 370–379. CiteSeerX178 10.1.1.31.1153179 .CS1 maint: ref=harv (link180 )

156 https://en.wikipedia.org/wiki/Tim_Peters_(software_engineer)
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
https://web.archive.org/web/20110807033704/http://www.diku.dk/hjemmesider/ansatte/
165
jyrki/Paper/mergesort_NJC.ps
166 https://en.wikipedia.org/wiki/ISSN_(identifier)
167 http://www.worldcat.org/issn/1236-6064
168 http://www.diku.dk/hjemmesider/ansatte/jyrki/Paper/mergesort_NJC.ps
170 http://citeseer.ist.psu.edu/katajainen96practical.html
171 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
172 https://en.wikipedia.org/wiki/Donald_Knuth
173 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
174 https://en.wikipedia.org/wiki/ISBN_(identifier)
175 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
178 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
179 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1153


• S, S S.181 (2008). ”4.5: M: S  D--


C”. The Algorithm Design Manual (2nd ed.). Springer. pp. 120–125.
ISBN182 978-1-84800-069-8183 .CS1 maint: ref=harv (link184 )
• S M. ”A API (J SE 6)”185 . R 2007-11-19.
• O C. ”A (J SE 10 & JDK 10)”186 . R 2018-07-23.

5.11 External links

The Wikibook Algorithm implementation187 has a page on the topic of: Merge
sort188

• Animated Sorting Algorithms: Merge Sort189 at the Wayback Machine190 (archived 6


March 2015) – graphical demonstration
• Open Data Structures - Section 11.1.1 - Merge Sort191 , Pat Morin192


181 https://en.wikipedia.org/wiki/Steven_Skiena
182 https://en.wikipedia.org/wiki/ISBN_(identifier)
183 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
185 http://java.sun.com/javase/6/docs/api/java/util/Arrays.html
186 https://docs.oracle.com/javase/10/docs/api/java/util/Arrays.html
187 https://en.wikibooks.org/wiki/Algorithm_implementation
188 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Merge_sort
https://web.archive.org/web/20150306071601/http://www.sorting-algorithms.com/merge-
189
sort
190 https://en.wikipedia.org/wiki/Wayback_Machine
http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_
191
Sorti.html#SECTION001411000000000000000
192 https://en.wikipedia.org/wiki/Pat_Morin

6 Merge sort

A divide and combine sorting algorithm


Merge sort
An example of merge sort. First divide the list into the smallest unit (1 element), then
compare each element with the adjacent list to sort and merge the two adjacent lists.
Finally all the elements are sorted and merged.

Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) typical, O(n) natural variant
Average performance: O(n log n)
Worst-case space complexity: O(n) total with O(n) auxiliary, O(1) auxiliary with linked lists[1]

In computer science6 , merge sort (also commonly spelled mergesort) is an efficient,
general-purpose, comparison-based7 sorting algorithm8 . Most implementations produce a
stable sort9 , which means that the order of equal elements is the same in the input and

1 https://en.wikipedia.org/wiki/Wikipedia:No_original_research
2 https://en.wikipedia.org/w/index.php?title=Merge_sort&action=edit
3 https://en.wikipedia.org/wiki/Wikipedia:Verifiability
4 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources#Inline_citations
5 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Comparison_sort
8 https://en.wikipedia.org/wiki/Sorting_algorithm
9 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability


output. Merge sort is a divide and conquer algorithm10 that was invented by John von Neu-
mann11 in 1945.[2] A detailed description and analysis of bottom-up mergesort appeared in
a report by Goldstine12 and von Neumann13 as early as 1948.[3]

6.1 Algorithm

Conceptually, a merge sort works as follows:

1. Divide the unsorted list into n sublists, each containing one element (a list of one
element is considered sorted).
2. Repeatedly merge14 sublists to produce new sorted sublists until there is only one
sublist remaining. This will be the sorted list.

6.1.1 Top-down implementation

Example C-like15 code using indices for the top-down merge sort algorithm that recursively
splits the list (called runs in this example) into sublists until sublist size is 1, then merges
those sublists to produce a sorted list. The copy-back step is avoided by alternating the
direction of the merge with each level of recursion (except for an initial one-time copy). To
help understand this, consider an array with 2 elements. The elements are copied to B[],
then merged back to A[]. If there are 4 elements, when the bottom of the recursion is
reached, single-element runs from A[] are merged to B[], and then at the next higher level
of recursion, those 2-element runs are merged to A[]. This pattern continues with each level
of recursion.

// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
    CopyArray(A, 0, n, B);           // one time copy of A[] to B[]
    TopDownSplitMerge(B, 0, n, A);   // sort data from B[] into A[]
}

// Sort the given run of array A[] using array B[] as a source.
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
    if (iEnd - iBegin < 2)                      // if run size == 1
        return;                                 //   consider it sorted
    // split the run longer than 1 item into halves
    iMiddle = (iEnd + iBegin) / 2;              // iMiddle = mid point
    // recursively sort both runs from array A[] into B[]
    TopDownSplitMerge(A, iBegin, iMiddle, B);   // sort the left  run
    TopDownSplitMerge(A, iMiddle, iEnd, B);     // sort the right run
    // merge the resulting runs from array B[] into A[]
    TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}

10 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
11 https://en.wikipedia.org/wiki/John_von_Neumann
12 https://en.wikipedia.org/wiki/Herman_Goldstine
13 https://en.wikipedia.org/wiki/John_von_Neumann
14 https://en.wikipedia.org/wiki/Merge_algorithm
15 https://en.wikipedia.org/wiki/C-like


// Left source half is A[iBegin:iMiddle-1].
// Right source half is A[iMiddle:iEnd-1].
// Result is B[iBegin:iEnd-1].
void TopDownMerge(A[], iBegin, iMiddle, iEnd, B[])
{
    i = iBegin, j = iMiddle;

    // While there are elements in the left or right runs...
    for (k = iBegin; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(A[], iBegin, iEnd, B[])
{
    for (k = iBegin; k < iEnd; k++)
        B[k] = A[k];
}

6.1.2 Bottom-up implementation

Example C-like code using indices for the bottom-up merge sort algorithm, which treats the
list as an array of n sublists (called runs in this example) of size 1, and iteratively merges
sublists back and forth between two buffers:

// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
    // Each 1-element run in A is already "sorted".
    // Make successively longer sorted runs of length 2, 4, 8, 16...
    // until the whole array is sorted.
    for (width = 1; width < n; width = 2 * width)
    {
        // Array A is full of runs of length width.
        for (i = 0; i < n; i = i + 2 * width)
        {
            // Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[]
            // or copy A[i:n-1] to B[] ( if (i+width >= n) )
            BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
        }
        // Now work array B is full of runs of length 2*width.
        // Copy array B to array A for the next iteration.
        // A more efficient implementation would swap the roles of A and B.
        CopyArray(B, A, n);
        // Now array A is full of runs of length 2*width.
    }
}

// Left  run is A[iLeft :iRight-1].
// Right run is A[iRight:iEnd-1  ].
void BottomUpMerge(A[], iLeft, iRight, iEnd, B[])
{
    i = iLeft, j = iRight;
    // While there are elements in the left or right runs...
    for (k = iLeft; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iRight && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(B[], A[], n)
{
    for (i = 0; i < n; i++)
        A[i] = B[i];
}

6.1.3 Top-down implementation using lists

Pseudocode16 for top-down merge sort algorithm which recursively divides the input list
into smaller sublists until the sublists are trivially sorted, and then merges the sublists
while returning up the call chain.
function merge_sort(list m) is
    // Base case. A list of zero or one elements is sorted, by definition.
    if length of m ≤ 1 then
        return m

    // Recursive case. First, divide the list into equal-sized sublists
    // consisting of the first half and second half of the list.
    // This assumes lists start at index 0.
    var left := empty list
    var right := empty list
    for each x with index i in m do
        if i < (length of m)/2 then
            add x to left
        else
            add x to right

    // Recursively sort both sublists.
    left := merge_sort(left)
    right := merge_sort(right)

    // Then merge the now-sorted sublists.
    return merge(left, right)

In this example, the merge function merges the left and right sublists.
function merge(left, right) is
    var result := empty list

    while left is not empty and right is not empty do
        if first(left) ≤ first(right) then
            append first(left) to result
            left := rest(left)
        else
            append first(right) to result
            right := rest(right)

    // Either left or right may have elements left; consume them.
    // (Only one of the following loops will actually be entered.)
    while left is not empty do
        append first(left) to result
        left := rest(left)
    while right is not empty do
        append first(right) to result
        right := rest(right)
    return result

16 https://en.wikipedia.org/wiki/Pseudocode

6.1.4 Bottom-up implementation using lists

Pseudocode17 for the bottom-up merge sort algorithm which uses a small fixed-size array of
references to nodes, where array[i] is either a reference to a list of size 2^i or nil18 . node is
a reference or pointer to a node. The merge() function would be similar to the one shown
in the top-down merge lists example; it merges two already sorted lists, and handles empty
lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
    // return if empty list
    if head = nil then
        return nil
    var node array[32]; initially all nil
    var node result
    var node next
    var int i
    result := head
    // merge nodes into array
    while result ≠ nil do
        next := result.next;
        result.next := nil
        for (i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
            result := merge(array[i], result)
            array[i] := nil
        // do not go past end of array
        if i = 32 then
            i -= 1
        array[i] := result
        result := next
    // merge array into single list
    result := nil
    for (i = 0; i < 32; i += 1) do
        result := merge(array[i], result)
    return result

6.2 Natural merge sort

A natural merge sort is similar to a bottom-up merge sort except that any naturally occur-
ring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (al-
ternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being
convenient data structures (used as FIFO queues19 or LIFO stacks20 ).[4] In the bottom-up
merge sort, the starting point assumes each run is one item long. In practice, random input

17 https://en.wikipedia.org/wiki/Pseudocode
18 https://en.wikipedia.org/wiki/Null_pointer
19 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
20 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)


data will have many short runs that just happen to be sorted. In the typical case, the
natural merge sort may not need as many passes because there are fewer runs to merge.
In the best case, the input is already sorted (i.e., is one run), so the natural merge sort
need only make one pass through the data. In many practical cases, long natural runs
are present, and for that reason natural merge sort is exploited as the key component of
Timsort21 . Example:
Start : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge : (2 3 4)(1 5 7 8 9)(0 6)
Merge : (1 2 3 4 5 7 8 9)(0 6)
Merge : (0 1 2 3 4 5 6 7 8 9)

Tournament replacement selection sorts22 are used to gather the initial runs for external
sorting algorithms.
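The run-selection step can be sketched in C (illustrative code, not from the article; the function name is an assumption). It records the start of each maximal non-decreasing run; applied to the example input above it finds the five runs (3 4)(2)(1 7)(5 8 9)(0 6):

```c
#include <assert.h>

/* Record the start index of each maximal non-decreasing run of a[0..n).
   Returns the number of runs found; a natural merge sort would then merge
   neighbouring runs until only one run remains. */
static int select_runs(const int *a, int n, int *starts) {
    int count = 0;
    for (int i = 0; i < n; ) {
        starts[count++] = i;            /* a new run begins here */
        i++;
        while (i < n && a[i - 1] <= a[i])
            i++;                        /* extend the run while it stays sorted */
    }
    return count;
}
```

This simple version exploits only ascending runs, as in the example; Timsort additionally reverses strictly descending runs so they can be used too.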

21 https://en.wikipedia.org/wiki/Timsort
22 https://en.wikipedia.org/wiki/Tournament_sort


6.3 Analysis

Figure 18 A recursive merge sort algorithm used to sort an array of 7 integer values.
These are the steps a human would take to emulate merge sort (top-down).

In sorting n objects, merge sort has an average23 and worst-case performance24 of
O25 (n log n). If the running time of merge sort for a list of length n is T(n), then the
recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the al-
gorithm to two lists of half the size of the original list, and add the n steps taken to merge
the resulting two lists). The closed form follows from the master theorem for divide-and-
conquer recurrences26 .
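For n a power of two, the recurrence can also be unrolled directly (a standard derivation, included here for completeness):

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + n \\
     &= 4\,T(n/4) + 2n \\
     &= \cdots \\
     &= 2^k\,T(1) + k\,n \qquad (n = 2^k) \\
     &= n\,T(1) + n\log_2 n \;=\; \Theta(n \log n).
\end{aligned}
```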

23 https://en.wikipedia.org/wiki/Average_performance
24 https://en.wikipedia.org/wiki/Worst-case_performance
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)


In the worst case, the number of comparisons merge sort makes is given by the sorting
numbers27 . These numbers are equal to or slightly smaller than (n ⌈lg28 n⌉ − 2^⌈lg n⌉ + 1),
which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).[5]
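The closed form can be checked numerically (illustrative code, not from the article). Since merging runs of lengths p and q costs at most p + q − 1 comparisons, the worst case satisfies W(1) = 0 and W(n) = W(⌈n/2⌉) + W(⌊n/2⌋) + n − 1; for this pessimistic recurrence the formula is met with equality, while real merges can finish a run early and use slightly fewer comparisons:

```c
#include <assert.h>

/* Worst-case comparison count of merge sort, from the merge cost bound. */
static long W(int n) {
    if (n <= 1) return 0;
    return W((n + 1) / 2) + W(n / 2) + (long)n - 1;
}

/* Smallest k with 2^k >= n, i.e. the ceiling of lg n for n >= 1. */
static int ceil_lg(int n) {
    int k = 0;
    while ((1 << k) < n) k++;
    return k;
}
```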
For large n and a randomly ordered input list, merge sort's expected (average) number of
comparisons approaches α·n fewer than the worst case, where

α = −1 + Σ_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.

In the worst case, merge sort does about 39% fewer comparisons than quicksort29 does in
the average case. In terms of moves, merge sort's worst case complexity is O30 (n log n)—
the same complexity as quicksort's best case, and merge sort's best case takes about half
as many iterations as the worst case.[citation needed]
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can
only be efficiently accessed sequentially, and is thus popular in languages such as Lisp32 ,
where sequentially accessed data structures are very common. Unlike some (efficient)
implementations of quicksort, merge sort is a stable sort.
Merge sort's most common implementation does not sort in place;[6] therefore, the memory
size of the input must be allocated for the sorted output to be stored in (see below for
versions that need only n/2 extra spaces).

6.4 Variants

Variants of merge sort are primarily concerned with reducing the space complexity and the
cost of copying.
A simple alternative for reducing the space overhead to n/2 is to maintain left and right as
a combined structure, copy only the left part of m into temporary space, and to direct the
merge routine to place the merged output into m. With this version it is better to allocate
the temporary space outside the merge routine, so that only one allocation is needed. The
excessive copying mentioned previously is also mitigated, since the last pair of lines before
the return result statement (function merge in the pseudocode above) become superfluous.
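A minimal C sketch of this n/2-space variant (illustrative, not from the article; names are assumptions): only the left half is copied to the temporary buffer, and the merge writes back into the array from the front, never overtaking the unread right half:

```c
#include <string.h>
#include <assert.h>

/* Merge the sorted halves a[0..n/2) and a[n/2..n) using only floor(n/2)
   scratch slots: copy the left half to tmp, then merge tmp and the right
   half back into a[]. The write index k = i + j - mid never reaches j
   while the left half is unexhausted, so no unread element is clobbered. */
static void merge_halves(int *a, int n, int *tmp) {
    int mid = n / 2;
    memcpy(tmp, a, (size_t)mid * sizeof(int));  /* left half -> tmp */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        a[k++] = (tmp[i] <= a[j]) ? tmp[i++] : a[j++];
    while (i < mid)              /* right half is already in place when it */
        a[k++] = tmp[i++];       /* runs out first                         */
}

/* Merge sort with ceil(n/2) extra slots instead of n. */
static void half_space_sort(int *a, int n, int *tmp) {
    if (n < 2) return;
    int mid = n / 2;
    half_space_sort(a, mid, tmp);
    half_space_sort(a + mid, n - mid, tmp);
    merge_halves(a, n, tmp);
}
```

Taking the left operand on ties keeps the sort stable, just as in the full-buffer version.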
One drawback of merge sort, when implemented on arrays, is its O(n) working memory
requirement. Several in-place33 variants have been suggested:
• Katajainen et al. present an algorithm that requires a constant amount of working mem-
ory: enough storage space to hold one element of the input array, and additional space
to hold O(1) pointers into the input array. They achieve an O(n log n) time bound with
small constants, but their algorithm is not stable.[7]
• Several attempts have been made at producing an in-place merge algorithm that can
be combined with a standard (top-down or bottom-up) merge sort to produce an in-

27 https://en.wikipedia.org/wiki/Sorting_number
28 https://en.wikipedia.org/wiki/Binary_logarithm
29 https://en.wikipedia.org/wiki/Quicksort
30 https://en.wikipedia.org/wiki/Big_O_notation
32 https://en.wikipedia.org/wiki/Lisp_programming_language
33 https://en.wikipedia.org/wiki/In-place_algorithm


place merge sort. In this case, the notion of ”in-place” can be relaxed to mean ”taking
logarithmic stack space”, because standard merge sort requires that amount of space
for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is
possible in O(n log n) time using a constant amount of scratch space, but their algorithm
is complicated and has high constant factors: merging arrays of length n and m can take
5n + 12m + o(m) moves.[8] Their high-constant-factor, complicated in-place algorithm
was later made simpler and easier to understand. Bing-Chao Huang and Michael A. Langston[9]
presented a straightforward, practical in-place merge that merges a pair of sorted
lists in linear time using a fixed amount of additional space, building on the work of
Kronrod and others. On average it takes only slightly more time (less than a factor of
two) than standard merge sort algorithms, which are free to exploit O(n) temporary
extra memory cells. Although the algorithm is much faster in practice, it is also
unstable for some lists; using similar concepts, however, they were able to solve this
problem. Other in-place algorithms include
SymMerge, which takes O((n + m) log(n + m)) time in total and is stable.[10] Plugging
such an algorithm into merge sort increases its complexity to the non-linearithmic34 , but
still quasilinear35 , O(n (log n)^2 ).
• A modern stable, linear, and in-place merge algorithm is block merge sort36 .
An alternative to reduce the copying into multiple lists is to associate a new field of infor-
mation with each key (the elements in m are called keys). This field will be used to link
the keys and any associated information together in a sorted list (a key and its related
information is called a record). Then the merging of the sorted lists proceeds by changing
the link values; no records need to be moved at all. A field which contains only a link will
generally be smaller than an entire record so less space will also be used. This is a standard
sorting technique, not restricted to merge sort.

34 https://en.wikipedia.org/wiki/Linearithmic
35 https://en.wikipedia.org/wiki/Quasilinear_time
36 https://en.wikipedia.org/wiki/Block_merge_sort


6.5 Use with tape drives

Figure 19 Merge sort type algorithms allowed large data sets to be sorted on early
computers that had small random access memories by modern standards. Records were
stored on magnetic tape and processed on banks of magnetic tape drives, such as these
IBM 729s.

An external37 merge sort is practical to run using disk38 or tape39 drives when the data to
be sorted is too large to fit into memory40 . External sorting41 explains how merge sort is
implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is
sequential (except for rewinds at the end of each pass). A minimal implementation can get
by with just two record buffers and a few program variables.
Naming the four tape drives as A, B, C, D, with the original data on A, and using only 2
record buffers, the algorithm is similar to Bottom-up implementation42 , using pairs of tape
drives instead of arrays in memory. The basic algorithm can be described as follows:

37 https://en.wikipedia.org/wiki/External_sorting
38 https://en.wikipedia.org/wiki/Disk_storage
39 https://en.wikipedia.org/wiki/Tape_drive
40 https://en.wikipedia.org/wiki/Primary_storage
41 https://en.wikipedia.org/wiki/External_sorting
42 #Bottom-up_implementation


1. Merge pairs of records from A; writing two-record sublists alternately to C and D.
2. Merge two-record sublists from C and D into four-record sublists; writing these alter-
nately to A and B.
3. Merge four-record sublists from A and B into eight-record sublists; writing these
alternately to C and D.
4. Repeat until you have one list containing all the data, sorted, in log2 (n) passes.
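The four-tape procedure can be simulated in a few dozen lines of C (an illustrative sketch, not from the article; the Tape type, array sizes, and the exact tape roles per pass are assumptions). An initial split pass distributes single records alternately onto the third and fourth tapes; each subsequent pass rewinds the current pair, merges one run from each of its tapes onto the tapes of the other pair, and doubles the run length:

```c
#include <assert.h>
#include <string.h>

#define MAXREC 64

/* One "tape": a record array with a write end (len) and a read head (pos). */
typedef struct { int rec[MAXREC]; int len; int pos; } Tape;

/* Merge one run of length at most w from each of s1 and s2 onto dst. */
static void merge_runs(Tape *s1, Tape *s2, Tape *dst, int w) {
    int r1 = s1->len - s1->pos; if (r1 > w) r1 = w;
    int r2 = s2->len - s2->pos; if (r2 > w) r2 = w;
    while (r1 > 0 || r2 > 0) {
        if (r2 == 0 || (r1 > 0 && s1->rec[s1->pos] <= s2->rec[s2->pos])) {
            dst->rec[dst->len++] = s1->rec[s1->pos++]; r1--;
        } else {
            dst->rec[dst->len++] = s2->rec[s2->pos++]; r2--;
        }
    }
}

/* Sort the n records on t[0]; returns the index of the tape holding the
   sorted output. Tape pairs 0/1 and 2/3 alternate as source and destination. */
static int tape_sort(Tape t[4], int n) {
    /* Split pass: single records from tape 0 alternately onto tapes 2 and 3. */
    for (int i = 0; i < n; i++) {
        Tape *d = &t[2 + (i & 1)];
        d->rec[d->len++] = t[0].rec[i];
    }
    int w = 1, src = 2;                      /* runs of length w on pair src */
    while (w < n) {
        int dst = 2 - src;                   /* the other tape pair */
        t[dst].len = t[dst].pos = 0;         /* "erase" the destination pair */
        t[dst + 1].len = t[dst + 1].pos = 0;
        t[src].pos = t[src + 1].pos = 0;     /* rewind the source tapes */
        int out = 0;
        while (t[src].pos < t[src].len || t[src + 1].pos < t[src + 1].len) {
            merge_runs(&t[src], &t[src + 1], &t[dst + out], w);
            out ^= 1;                        /* alternate the output tape */
        }
        w *= 2;
        src = dst;
    }
    return src;                              /* one sorted run on this tape */
}
```

As in the real procedure, all reads and writes are strictly sequential; only the rewinds between passes touch the heads non-sequentially.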
Instead of starting with very short runs, usually a hybrid algorithm43 is used, where the
initial pass will read many records into memory, do an internal sort to create a long run,
and then distribute those long runs onto the output set. This step avoids many early passes.
For example, an internal sort of 1024 records will save nine passes. Because it yields such
a benefit, the internal sort is often made as large as memory allows. In fact, there are
techniques that can make the initial runs longer than the available internal memory.[11]
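The saving can be checked with a small helper (illustrative code, not from the article). Runs of length 1024 = 2^10 would otherwise take ten doubling passes to build; since the hybrid still spends one distribution pass creating them, the net saving is the nine passes quoted above:

```c
#include <assert.h>

/* Number of run-doubling merge passes needed to sort n records when the
   sort starts from sorted runs of length `run`. */
static int merge_passes(long n, long run) {
    int passes = 0;
    while (run < n) {
        run *= 2;
        passes++;
    }
    return passes;
}
```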
With some overhead, the above algorithm can be modified to use three tapes. O(n log n)
running time can also be achieved using two queues44 , or a stack45 and a queue, or three
stacks. In the other direction, using k > 2 tapes (and O(k) items in memory), we can reduce the number of tape operations by an O(log k) factor by using a k/2-way merge46 .
A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase
merge sort47 .

43 https://en.wikipedia.org/wiki/Hybrid_algorithm
44 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
45 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
46 https://en.wikipedia.org/wiki/K-way_merge_algorithm
47 https://en.wikipedia.org/wiki/Polyphase_merge_sort


6.6 Optimizing merge sort

Figure 20 Tiled merge sort applied to an array of random integers. The horizontal axis
is the array index and the vertical axis is the integer.

On modern computers, locality of reference48 can be of paramount importance in software


optimization49 , because multilevel memory hierarchies50 are used. Cache51 -aware versions
of the merge sort algorithm, whose operations have been specifically chosen to minimize
the movement of pages in and out of a machine's memory cache, have been proposed. For
example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of
size S are reached, where S is the number of data items fitting into a CPU's cache. Each
of these subarrays is sorted with an in-place sorting algorithm such as insertion sort52 ,
to discourage memory swaps, and normal merge sort is then completed in the standard

48 https://en.wikipedia.org/wiki/Locality_of_reference
49 https://en.wikipedia.org/wiki/Software_optimization
50 https://en.wikipedia.org/wiki/Memory_hierarchy
51 https://en.wikipedia.org/wiki/Cache_(computing)
52 https://en.wikipedia.org/wiki/Insertion_sort


recursive fashion. This algorithm has demonstrated better performance on machines that benefit from cache optimization (LaMarca & Ladner 199754 ).
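A tiled merge sort along these lines can be sketched as follows; `tile` stands in for S, the number of items fitting into the cache (the default value here is arbitrary, and the code is an in-memory illustration rather than a tuned implementation):

```python
def insertion_sort(a, lo, hi):
    """In-place insertion sort of the slice a[lo:hi]."""
    for i in range(lo + 1, hi):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge_runs(left, right):
    """Two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def tiled_merge_sort(a, tile=32):
    """Sort cache-sized tiles with insertion sort, then merge bottom-up."""
    n = len(a)
    for lo in range(0, n, tile):            # stop partitioning at size S
        insertion_sort(a, lo, min(lo + tile, n))
    width = tile
    while width < n:                        # normal merge sort from here on
        for lo in range(0, n, 2 * width):
            mid, hi = min(lo + width, n), min(lo + 2 * width, n)
            a[lo:hi] = merge_runs(a[lo:mid], a[mid:hi])
        width *= 2
    return a
```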
Kronrod (1969)55 suggested an alternative version of merge sort that uses constant additional space; this algorithm was later refined (Katajainen, Pasanen & Teuhola 199656 ).
Also, many applications of external sorting58 use a form of merge sorting where the input is split up into a larger number of sublists, ideally to a number for which merging them still makes the currently processed set of pages59 fit into main memory.

6.7 Parallel merge sort

Merge sort parallelizes well due to the use of the divide-and-conquer60 method. Several
different parallel variants of the algorithm have been developed over the years. Some parallel
merge sort algorithms are strongly related to the sequential top-down merge algorithm while
others have a different general structure and use the K-way merge61 method.

6.7.1 Merge sort with parallel recursion

The sequential merge sort procedure can be described in two phases, the divide phase and
the merge phase. The first consists of many recursive calls that repeatedly perform the same
division process until the subsequences are trivially sorted (containing one or no element).
An intuitive approach is the parallelization of those recursive calls.[12] The following pseudocode describes merge sort with parallel recursion, using the fork and join62 keywords:
// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then  // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)

This algorithm is the trivial modification of the sequential version and does not parallelize
well. Therefore, its speedup is not very impressive. It has a span63 of Θ(n), which is
only an improvement of Θ(log n) compared to the sequential version (see Introduction to

54 #CITEREFLaMarcaLadner1997
55 #CITEREFKronrod1969
56 #CITEREFKatajainenPasanenTeuhola1996
57 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
58 https://en.wikipedia.org/wiki/External_sorting
59 https://en.wikipedia.org/wiki/Page_(computer_memory)
60 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
61 https://en.wikipedia.org/wiki/K-way_merge_algorithm
62 https://en.wikipedia.org/wiki/Fork%E2%80%93join_model
63 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms#Overview


Algorithms64 ). This is mainly due to the sequential merge method, as it is the bottleneck
of the parallel executions.
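The fork/join pseudocode above can be approximated with Python's `concurrent.futures` (a sketch: the depth cutoff that limits task creation is an addition not present in the pseudocode, and CPython's global interpreter lock means genuine speedup would require processes rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def merge(a, lo, mid, hi):
    """Merge the sorted halves a[lo:mid] and a[mid:hi] back into a."""
    left, right = a[lo:mid], a[mid:hi]
    i = j = 0
    for k in range(lo, hi):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

def parallel_mergesort(a, lo=0, hi=None, pool=None, depth=2):
    """Sort a[lo:hi) with fork/join recursion; fork only `depth` levels deep."""
    if hi is None:
        hi = len(a)
    if lo + 1 >= hi:
        return
    mid = (lo + hi) // 2
    if pool is not None and depth > 0:
        fut = pool.submit(parallel_mergesort, a, lo, mid, pool, depth - 1)  # fork
        parallel_mergesort(a, mid, hi, pool, depth - 1)
        fut.result()                                                        # join
    else:
        parallel_mergesort(a, lo, mid)
        parallel_mergesort(a, mid, hi)
    merge(a, lo, mid, hi)
```

With `depth=2`, at most three subtasks are ever forked, so a pool of four workers cannot deadlock on nested `submit` calls.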

6.7.2 Merge sort with parallel merging

Main article: Merge algorithm § Parallel merge65
Better parallelism can be achieved by using a parallel merge algorithm66 . Cormen et al.67 present a binary variant that merges
two sorted sub-sequences into one sorted output sequence.[12]
In one of the sequences (the longer one if unequal length), the element of the middle index
is selected. Its position in the other sequence is determined in such a way that this sequence
would remain sorted if this element were inserted at this position. Thus, one knows how
many other elements from both sequences are smaller and the position of the selected
element in the output sequence can be calculated. For the partial sequences of the smaller
and larger elements created in this way, the merge algorithm is again executed in parallel
until the base case of the recursion is reached.
The following pseudocode shows the modified parallel merge sort method using the parallel merge algorithm (adapted from Cormen et al.).
/**
 * A: Input array
 * B: Output array
 * lo: lower bound
 * hi: upper bound
 * off: offset
 */
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)
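The same divide-and-conquer merge can be written in executable form. The sketch below runs the two recursive calls sequentially; in the parallel algorithm they are exactly the calls that would be forked, and `bisect_left` plays the role of the binary search in the description above:

```python
import bisect

def parallel_merge(a, b, out=None):
    """Merge sorted lists a and b by pivoting on the median of the longer one."""
    if out is None:
        out = []
    if len(a) < len(b):
        a, b = b, a                          # ensure a is the longer sequence
    if not a:
        return out
    mid = len(a) // 2
    pivot = a[mid]
    split = bisect.bisect_left(b, pivot)     # where the pivot would land in b
    parallel_merge(a[:mid], b[:split], out)      # smaller elements (forkable)
    out.append(pivot)                            # pivot's final position is known
    parallel_merge(a[mid + 1:], b[split:], out)  # larger elements (forkable)
    return out
```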

In order to analyze a recurrence relation68 for the worst case span, the recursive calls of parallelMergesort have to be incorporated only once due to their parallel execution, obtaining

T∞^sort(n) = T∞^sort(n/2) + T∞^merge(n) = T∞^sort(n/2) + Θ(log(n)²).
For detailed information about the complexity of the parallel merge procedure, see Merge
algorithm69 .
The solution of this recurrence is given by

64 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
65 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
66 https://en.wikipedia.org/wiki/Merge_algorithm
67 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
68 https://en.wikipedia.org/wiki/Recurrence_relation
69 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge


T∞^sort(n) = Θ(log(n)³).

This parallel merge algorithm reaches a parallelism of Θ(n/(log n)²), which is much higher
than the parallelism of the previous algorithm. Such a sort can perform well in practice when
combined with a fast stable sequential sort, such as insertion sort70 , and a fast sequential
merge as a base case for merging small arrays.[13]

6.7.3 Parallel multiway merge sort

It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there
are usually p > 2 processors available. A better approach may be to use a K-way merge71
method, a generalization of binary merge, in which k sorted sequences are merged together.
This merge variant is well suited to describe a sorting algorithm on a PRAM72[14][15] .

Basic Idea

Figure 21 The parallel multiway mergesort process on four processors t0 to t3 .

70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/K-way_merge_algorithm
72 https://en.wikipedia.org/wiki/Parallel_random-access_machine


Given an unsorted sequence of n elements, the goal is to sort the sequence with p available processors73 . These elements are distributed equally among all processors and sorted locally using a sequential sorting algorithm74 . Hence, the sequence consists of sorted sequences S1 , ..., Sp of length ⌈n/p⌉. For simplification let n be a multiple of p, so that |Si | = n/p for i = 1, ..., p.
These sequences will be used to perform a multisequence selection/splitter selection. For j = 1, ..., p, the algorithm determines splitter elements vj with global rank k = j·n/p. Then the corresponding positions of v1 , ..., vp in each sequence Si are determined with binary search75 and thus the Si are further partitioned into p subsequences Si,1 , ..., Si,p with
Si,j := {x ∈ Si | rank(vj−1 ) < rank(x) ≤ rank(vj )}.
Furthermore, the elements of S1,i , ..., Sp,i are assigned to processor i; that is, all elements between rank (i − 1)·n/p and rank i·n/p, which are distributed over all Si . Thus, each processor receives a sequence of sorted sequences. The fact that the rank k of the splitter elements vi was chosen globally provides two important properties: on the one hand, k was chosen so that each processor can still operate on n/p elements after assignment, so the algorithm is perfectly load-balanced76 . On the other hand, all elements on processor i are less than or equal to all elements on processor i + 1. Hence, each processor performs the p-way merge77 locally and thus obtains a sorted sequence from its sub-sequences. Because of the second property, no further p-way merge has to be performed; the results only have to be put together in the order of the processor number.
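The basic idea can be illustrated end-to-end for a single level. In this sketch the p "processors" are simulated sequentially, and the splitters are simply read off a fully sorted copy for clarity; the real algorithm obtains them with multisequence selection, without ever sorting the whole input centrally:

```python
import bisect
from heapq import merge as kway_merge

def multiway_mergesort(data, p):
    """Single-level multiway mergesort on p simulated processors."""
    n = len(data)
    size = -(-n // p)                                   # ceil(n/p)
    # 1. Each processor sorts its local part with a sequential algorithm.
    seqs = [sorted(data[i * size:(i + 1) * size]) for i in range(p)]
    # 2. Splitters v_j of global rank j*n/p (taken from a sorted copy here,
    #    purely for illustration; msSelect finds them without global sorting).
    ranked = sorted(data)
    splitters = [ranked[j * n // p - 1] for j in range(1, p)]
    # 3. Every local sequence S_i is cut at the splitters by binary search.
    parts = []
    for s in seqs:
        cuts = [0] + [bisect.bisect_right(s, v) for v in splitters] + [len(s)]
        parts.append([s[cuts[j]:cuts[j + 1]] for j in range(len(cuts) - 1)])
    # 4. Processor j p-way-merges piece j of every sequence; concatenating
    #    the results in processor order yields the sorted output.
    out = []
    for j in range(p):
        out.extend(kway_merge(*(parts[i][j] for i in range(p))))
    return out
```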

Multisequence selection

In its simplest form, given p sorted sequences S1 , ..., Sp distributed evenly on p processors
and a rank k, the task is to find an element x with a global rank k in the union of the
sequences. Hence, this can be used to divide each Si into two parts at a splitter index li , where the lower part contains only elements which are smaller than x, while the elements bigger than x are located in the upper part.
The presented sequential algorithm returns the indices of the splits in each sequence, i.e. the indices li in sequences Si such that Si [li ] has a global rank less than k and rank(Si [li + 1]) ≥ k.[16]
algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
    for i = 1 to p do
        (l_i, r_i) = (0, |S_i|-1)

    while there exists i: l_i < r_i do
        // pick pivot element in S_j[l_j],..,S_j[r_j], choose random j uniformly
        v := pickPivot(S, l, r)
        for i = 1 to p do
            m_i = binarySearch(v, S_i[l_i, r_i]) // sequentially
        if m_1 + ... + m_p >= k then // m_1 + ... + m_p is the global rank of v
            r := m // vector assignment
        else
73 https://en.wikipedia.org/wiki/Processor_(computing)
74 https://en.wikipedia.org/wiki/Sorting_algorithm
75 https://en.wikipedia.org/wiki/Binary_search_algorithm
76 https://en.wikipedia.org/wiki/Load_balancing_(computing)
77 https://en.wikipedia.org/wiki/K-way_merge_algorithm


            l := m

    return l
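An executable version of msSelect is sketched below, assuming distinct keys. One detail is changed relative to the pseudocode: when the pivot's global rank is below k, the pivot itself is moved into the lower part (via `bisect_right`), which guarantees progress and makes the loop terminate with exactly k elements below the returned split indices:

```python
import bisect
import random

def ms_select(seqs, k):
    """Return indices l such that exactly k elements lie in the lower
    parts seqs[i][:l[i]] of the sorted sequences (keys assumed distinct)."""
    p = len(seqs)
    l = [0] * p
    r = [len(s) for s in seqs]
    while any(l[i] < r[i] for i in range(p)):
        # Pick a pivot from a randomly chosen still-active range.
        j = random.choice([i for i in range(p) if l[i] < r[i]])
        v = seqs[j][random.randrange(l[j], r[j])]
        # m[i] is the position of v in seqs[i]; their sum is v's global rank.
        m = [bisect.bisect_left(seqs[i], v, l[i], r[i]) for i in range(p)]
        if sum(m) >= k:
            r = m                        # rank of v >= k: discard upper parts
        else:                            # rank of v < k: v joins the lower part
            l = [bisect.bisect_right(seqs[i], v, l[i], r[i]) for i in range(p)]
    return l
```

For the parallel multiway merge sort, this routine is run once per splitter rank i·n/p to obtain the partition positions.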

For the complexity analysis the PRAM78 model is chosen. If the data is evenly distributed over all p processors, the p-fold execution of the binarySearch method has a running time of O(p log(n/p)). The expected recursion depth is O(log(Σi |Si |)) = O(log(n)), as in the ordinary Quickselect79 . Thus the overall expected running time is O(p log(n/p) log(n)).
Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel such that all splitter elements of rank i·n/p for i = 1, ..., p are found simultaneously. These splitter elements can then be used to partition each sequence into p parts, with the same total running time of O(p log(n/p) log(n)).

Pseudocode

Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We
assume that there is a barrier synchronization before and after the multisequence selection
such that every processor can determine the splitting elements and the sequence partition
properly.
/**
 * d: Unsorted Array of Elements
 * n: Number of Elements
 * p: Number of Processors
 * return Sorted Array
 */
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                          // the output array
    for i = 1 to p do in parallel                 // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]            // sequence of length n/p
        sort(S_i)                                 // sort locally
        synch
        v_i := msSelect([S_1,..,S_p], i * n/p)    // element with global rank i * n/p
        synch
        (S_i,1 ,.., S_i,p) := sequence_partitioning(S_i, v_1, .., v_p) // split S_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(S_1,i ,.., S_p,i) // merge and assign to output array
    return o

Analysis

Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with complexity O(n/p · log(n/p)). After that, the splitter elements have to be calculated in time O(p log(n/p) log(n)). Finally, each group of p splits has to be merged in parallel by each

78 https://en.wikipedia.org/wiki/Parallel_random-access_machine
79 https://en.wikipedia.org/wiki/Quickselect


processor with a running time of O(log(p)n/p) using a sequential p-way merge algorithm80 .
Thus, the overall running time is given by
O( n/p · log(n/p) + p · log(n/p) · log(n) + n/p · log(p) ).

Practical adaption and application

The multiway merge sort algorithm is very scalable through its high parallelization capabil-
ity, which allows the use of many processors. This makes the algorithm a viable candidate
for sorting large amounts of data, such as those processed in computer clusters81 . Also,
since in such systems memory is usually not a limiting resource, the disadvantage of space
complexity of merge sort is negligible. However, other factors become important in such
systems, factors that are not taken into account when modelling on a PRAM82 . Here, the following aspects need to be considered: the memory hierarchy83 , when the data does not fit into the processors' cache, and the communication overhead of exchanging data between processors, which could become a bottleneck when the data can no longer be accessed via the shared memory.
Sanders84 et al. have presented in their paper a bulk synchronous parallel85 algorithm for
multilevel multiway mergesort, which divides p processors into r groups of size p′ . All
processors sort locally first. Unlike single level multiway mergesort, these sequences are
then partitioned into r parts and assigned to the appropriate processor groups. These
steps are repeated recursively in those groups. This reduces communication and especially
avoids problems with many small messages. The hierarchical structure of the underlying real network can be used to define the processor groups (e.g. racks86 , clusters87 , ...).[15]

6.7.4 Further Variants

Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with Richard Cole using a clever subsampling algorithm to ensure O(1) merge.[17] Other sophisticated parallel sorting algorithms can achieve the same or better time bounds with a lower
constant. For example, in 1991 David Powers described a parallelized quicksort88 (and a
related radix sort89 ) that can operate in O(log n) time on a CRCW90 parallel random-access
machine91 (PRAM) with n processors by performing partitioning implicitly.[18] Powers further shows that a pipelined version of Batcher's Bitonic Mergesort92 in O((log n)2 ) time

80 https://en.wikipedia.org/wiki/Merge_algorithm
81 https://en.wikipedia.org/wiki/Computer_cluster
82 https://en.wikipedia.org/wiki/Parallel_random-access_machine
83 https://en.wikipedia.org/wiki/Memory_hierarchy
84 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
85 https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
86 https://en.wikipedia.org/wiki/19-inch_rack
87 https://en.wikipedia.org/wiki/Computer_cluster
88 https://en.wikipedia.org/wiki/Quicksort
89 https://en.wikipedia.org/wiki/Radix_sort
90 https://en.wikipedia.org/wiki/CRCW
91 https://en.wikipedia.org/wiki/Parallel_random-access_machine
92 https://en.wikipedia.org/wiki/Bitonic_sorter


on a butterfly sorting network93 is in practice actually faster than his O(log n) sorts on a
PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix
and parallel sorting.[19]

6.8 Comparison with other sort algorithms

Although heapsort94 has the same time bounds as merge sort, it requires only Θ(1) auxiliary
space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort95 implementations generally outperform mergesort for sorting RAM-based arrays.[citation needed]
On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-
access sequential media. Merge sort is often the best choice for sorting a linked list97 : in this
situation it is relatively easy to implement a merge sort in such a way that it requires only
Θ(1) extra space, and the slow random-access performance of a linked list makes some other
algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely
impossible.
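For linked lists the point can be made concrete. In this Python sketch (class and function names are illustrative), the merge step only relinks `next` pointers, so no buffer proportional to n is needed; this top-down variant still uses an O(log n) recursion stack, whereas the Θ(1)-space version mentioned above iterates bottom-up, but the pointer-splicing merge shown here is the key ingredient either way:

```python
class Node:
    """Singly linked list node."""
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def from_pylist(vals):
    head = None
    for v in reversed(vals):
        head = Node(v, head)
    return head

def to_pylist(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

def merge_lists(a, b):
    """Splice two sorted lists together by relinking nodes: O(1) extra space."""
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b
    return dummy.next

def sort_list(head):
    """Top-down merge sort of a singly linked list; the split uses the
    slow/fast pointer technique instead of random access."""
    if head is None or head.next is None:
        return head
    slow, fast = head, head.next
    while fast and fast.next:              # fast moves two steps per one
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None       # cut the list after the middle
    return merge_lists(sort_list(head), sort_list(mid))
```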
As of Perl98 5.8, merge sort is its default sorting algorithm (it was quicksort in previous
versions of Perl). In Java99 , the Arrays.sort()100 methods use merge sort or a tuned quicksort depending on the datatypes, and for implementation efficiency they switch to insertion sort101 when fewer than seven array elements are being sorted.[20] The Linux102 kernel uses merge
sort for its linked lists.[21] Python103 uses Timsort104 , another tuned hybrid of merge sort
and insertion sort, that has become the standard sort algorithm in Java SE 7105 (for arrays
of non-primitive types),[22] on the Android platform106 ,[23] and in GNU Octave107 .[24]

6.9 Notes
1. Skiena (2008108 , p. 122)
2. Knuth (1998109 , p. 158)
3. K, J; T, J L (M 1997). ”A 
   ”110 (PDF). Proceedings of the 3rd Italian Con-

93 https://en.wikipedia.org/wiki/Sorting_network
94 https://en.wikipedia.org/wiki/Heapsort
95 https://en.wikipedia.org/wiki/Quicksort
97 https://en.wikipedia.org/wiki/Linked_list
98 https://en.wikipedia.org/wiki/Perl
99 https://en.wikipedia.org/wiki/Java_platform
https://docs.oracle.com/javase/9/docs/api/java/util/Arrays.html#sort-java.lang.
100
Object:A-
101 https://en.wikipedia.org/wiki/Insertion_sort
102 https://en.wikipedia.org/wiki/Linux
103 https://en.wikipedia.org/wiki/Python_(programming_language)
104 https://en.wikipedia.org/wiki/Timsort
105 https://en.wikipedia.org/wiki/Java_7
106 https://en.wikipedia.org/wiki/Android_(operating_system)
107 https://en.wikipedia.org/wiki/GNU_Octave
108 #CITEREFSkiena2008
109 #CITEREFKnuth1998
110 http://hjemmesider.diku.dk/~jyrki/Paper/CIAC97.pdf


ference on Algorithms and Complexity. Italian Conference on Algorithms and Complexity. Rome. pp. 217–228. CiteSeerX111 10.1.1.86.3154112 . doi113 :10.1007/3-540-62592-5_74114 .
4. Powers, David M. W. and McMahon Graham B. (1983), ”A compendium of interesting
prolog programs”, DCS Technical Report 8313, Department of Computer Science,
University of New South Wales.
5. The worst case number given here does not agree with that given in Knuth116 's Art
of Computer Programming117 , Vol 3. The discrepancy is due to Knuth analyzing a
variant implementation of merge sort that is slightly sub-optimal.
6. C; L; R; S. Introduction to Algorithms. p. 151.
ISBN118 978-0-262-03384-8119 .
7. K, J; P, T; T, J (1996). ”P-
 - ”. Nordic J. Computing. 3 (1): 27–40. Cite-
SeerX120 10.1.1.22.8523121 .
8. G, V; K, J; P, T (2000). ”A-
  - ”. Theoretical Computer Science. 237 (1–2):
159–181. doi122 :10.1016/S0304-3975(98)00162-5123 .
9. H, B-C; L, M A. (M 1988). ”P-
 I-P M”. Communications of the ACM. 31 (3): 348–352.
doi124 :10.1145/42392.42403125 .
10. K, P-S; K, A (2004). Stable Minimum Storage Merging by
Symmetric Comparisons. European Symp. Algorithms. Lecture Notes in Computer
Science. 3221. pp. 714–723. CiteSeerX126 10.1.1.102.4612127 . doi128 :10.1007/978-3-
540-30140-0_63129 . ISBN130 978-3-540-23025-0131 .
11. Selection sort. Knuth's snowplow. Natural merge.
12. Cormen et al. 2009132 , pp. 797–805.

111 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
112 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.3154
113 https://en.wikipedia.org/wiki/Doi_(identifier)
114 https://doi.org/10.1007%2F3-540-62592-5_74
116 https://en.wikipedia.org/wiki/Donald_Knuth
117 https://en.wikipedia.org/wiki/Art_of_Computer_Programming
118 https://en.wikipedia.org/wiki/ISBN_(identifier)
119 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
120 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
121 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1016%2FS0304-3975%2898%2900162-5
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1145%2F42392.42403
126 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
127 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.4612
128 https://en.wikipedia.org/wiki/Doi_(identifier)
129 https://doi.org/10.1007%2F978-3-540-30140-0_63
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-23025-0
132 #CITEREFCormenLeisersonRivestStein2009


13. Victor J. Duvanenko ”Parallel Merge Sort” Dr. Dobb's Journal & blog[1]134 and
GitHub repo C++ implementation [2]135
14. Peter Sanders, Johannes Singler. 2008. Lecture Parallel algorithms. Last visited 05.02.2020.136
15. ”P M P S | P   27
ACM   P  A  A”.
137 :10.1145/2755573.2755595138 . Cite journal requires |journal= (help139 )
16. Peter Sanders. 2019. Lecture Parallel algorithms. Last visited 05.02.2020.140
17. C, R (A 1988). ”P  ”. SIAM J. Comput.
17 (4): 770–785. CiteSeerX141 10.1.1.464.7118142 . doi143 :10.1137/0217049144 .CS1
maint: ref=harv (link145 )
18. Powers, David M. W. Parallelized Quicksort and Radixsort with Optimal Speedup146 ,
Proceedings of International Conference on Parallel Computing Technologies. Novosi-
birsk147 . 1991.
19. David M. W. Powers, Parallel Unification: Practical Complexity148 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
20. OpenJDK src/java.base/share/classes/java/util/Arrays.java @ 53904:9c3fe09f69bc149
21. linux kernel /lib/list_sort.c150
22. ”Commit 6804124: Replace ”modified mergesort” in java.util.Arrays.sort with timsort”151 . Java Development Kit 7 Hg repo. Archived152 from the original on 2018-01-26. Retrieved 24 Feb 2011.
23. ”C: ..TS<T>”153 . Android JDK Documentation. Archived
from the original154 on January 20, 2015. Retrieved 19 Jan 2015.
24. ”//-.”155 . Mercurial repository of Octave source code.
Lines 23-25 of the initial comment block. Retrieved 18 Feb 2013. Code stolen in large

134 https://duvanenko.tech.blog/2018/01/13/parallel-merge-sort/
135 https://github.com/DragonSpit/ParallelAlgorithms
136 http://algo2.iti.kit.edu/sanders/courses/paralg08/singler.pdf
137 https://en.wikipedia.org/wiki/Doi_(identifier)
138 https://doi.org/10.1145%2F2755573.2755595
140 http://algo2.iti.kit.edu/sanders/courses/paralg19/vorlesung.pdf
141 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
142 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.464.7118
143 https://en.wikipedia.org/wiki/Doi_(identifier)
144 https://doi.org/10.1137%2F0217049
146 http://citeseer.ist.psu.edu/327487.html
147 https://en.wikipedia.org/wiki/Novosibirsk
148 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://hg.openjdk.java.net/jdk/jdk/file/9c3fe09f69bc/src/java.base/share/classes/
149
java/util/Arrays.java#l1331
150 https://github.com/torvalds/linux/blob/master/lib/list_sort.c
151 http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/bfd7abda8f79
https://web.archive.org/web/20180126184957/http://hg.openjdk.java.net/jdk7/jdk7/jdk/
152
rev/bfd7abda8f79
https://web.archive.org/web/20150120063131/https://android.googlesource.com/platform/
153
libcore/%2B/jb-mr2-release/luni/src/main/java/java/util/TimSort.java
https://android.googlesource.com/platform/libcore/+/jb-mr2-release/luni/src/main/
154
java/java/util/TimSort.java
155 http://hg.savannah.gnu.org/hgweb/octave/file/0486a29d780f/liboctave/util/oct-sort.cc


part from Python's, listobject.c, which itself had no license header. However, thanks
to Tim Peters156 for the parts of the code I ripped-off.

6.10 References
• C, T H.157 ; L, C E.158 ; R, R L.159 ; S,
C160 (2009) [1990]. Introduction to Algorithms161 (3 .). MIT P 
MG-H. ISBN162 0-262-03384-4163 .CS1 maint: ref=harv (link164 )
• K, J; P, T; T, J (1996). ”P -
 ”165 . Nordic Journal of Computing. 3. pp. 27–40. ISSN166 1236-
6064167 . Archived from the original168 on 2011-08-07. Retrieved 2009-04-04.CS1 maint:
ref=harv (link169 ). Also Practical In-Place Mergesort170 . Also [3]171
• K, D172 (1998). ”S 5.2.4: S  M”. Sorting and
Searching. The Art of Computer Programming173 . 3 (2nd ed.). Addison-Wesley.
pp. 158–168. ISBN174 0-201-89685-0175 .CS1 maint: ref=harv (link176 )
• K, M. A. (1969). ”O    
”. Soviet Mathematics - Doklady. 10. p. 744.CS1 maint: ref=harv (link177 )
• LM, A.; L, R. E. (1997). ”T      -
  ”. Proc. 8th Ann. ACM-SIAM Symp. On Discrete Algorithms
(SODA97): 370–379. CiteSeerX178 10.1.1.31.1153179 .CS1 maint: ref=harv (link180 )

156 https://en.wikipedia.org/wiki/Tim_Peters_(software_engineer)
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
https://web.archive.org/web/20110807033704/http://www.diku.dk/hjemmesider/ansatte/
165
jyrki/Paper/mergesort_NJC.ps
166 https://en.wikipedia.org/wiki/ISSN_(identifier)
167 http://www.worldcat.org/issn/1236-6064
168 http://www.diku.dk/hjemmesider/ansatte/jyrki/Paper/mergesort_NJC.ps
170 http://citeseer.ist.psu.edu/katajainen96practical.html
171 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
172 https://en.wikipedia.org/wiki/Donald_Knuth
173 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
174 https://en.wikipedia.org/wiki/ISBN_(identifier)
175 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
178 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
179 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1153


• S, S S.181 (2008). ”4.5: M: S  D--


C”. The Algorithm Design Manual (2nd ed.). Springer. pp. 120–125.
ISBN182 978-1-84800-069-8183 .CS1 maint: ref=harv (link184 )
• S M. ”A API (J SE 6)”185 . R 2007-11-19.
• O C. ”A (J SE 10 & JDK 10)”186 . R 2018-07-23.

6.11 External links

The Wikibook Algorithm implementation187 has a page on the topic of: Merge
sort188

• Animated Sorting Algorithms: Merge Sort189 at the Wayback Machine190 (archived 6 March 2015) – graphical demonstration
• Open Data Structures - Section 11.1.1 - Merge Sort191 , Pat Morin192


181 https://en.wikipedia.org/wiki/Steven_Skiena
182 https://en.wikipedia.org/wiki/ISBN_(identifier)
183 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
185 http://java.sun.com/javase/6/docs/api/java/util/Arrays.html
186 https://docs.oracle.com/javase/10/docs/api/java/util/Arrays.html
187 https://en.wikibooks.org/wiki/Algorithm_implementation
188 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Merge_sort
https://web.archive.org/web/20150306071601/http://www.sorting-algorithms.com/merge-
189
sort
190 https://en.wikipedia.org/wiki/Wayback_Machine
http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_
191
Sorti.html#SECTION001411000000000000000
192 https://en.wikipedia.org/wiki/Pat_Morin

7 Quicksort

A divide and conquer sorting algorithm

Quicksort
Animated visualization of the quicksort algorithm. The horizontal lines are pivot values.
Class: Sorting algorithm
Worst-case performance: O(n2 )
Best-case performance: O(n log n) (simple partition) or O(n) (three-way partition and equal keys)
Average performance: O(n log n)
Worst-case space complexity: O(n) auxiliary (naive); O(log n) auxiliary (Sedgewick 1978)

Quicksort (sometimes called partition-exchange sort) is an efficient1 sorting algorithm2 .


Developed by British computer scientist Tony Hoare3 in 1959[1] and published in 1961,[2] it
is still a commonly used algorithm for sorting. When implemented well, it can be about two or three times faster than its main competitors, merge sort4 and heapsort5 .[3][contradictory]
Quicksort is a divide-and-conquer algorithm7 . It works by selecting a 'pivot' element from
the array and partitioning the other elements into two sub-arrays, according to whether
they are less than or greater than the pivot. The sub-arrays are then sorted recursively8 .
This can be done in-place9 , requiring small additional amounts of memory10 to perform the
sorting.
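The partition-and-recurse outline described above can be sketched as follows. This uses the simple Lomuto partitioning with the last element as pivot; Hoare's original scheme scans from both ends and differs in detail, but the overall structure (partition, then recurse on the two sub-arrays) is the same:

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort using Lomuto partitioning with the last
    element of the range as the pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot, i = a[hi], lo
    for j in range(lo, hi):                # move smaller elements to the front
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]              # pivot reaches its final position
    quicksort(a, lo, i - 1)                # recurse on the two sub-arrays
    quicksort(a, i + 1, hi)
    return a
```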
Quicksort is a comparison sort11 , meaning that it can sort items of any type for which
a ”less-than” relation (formally, a total order12 ) is defined. Efficient implementations of

1 https://en.wikipedia.org/wiki/Algorithm_efficiency
2 https://en.wikipedia.org/wiki/Sorting_algorithm
3 https://en.wikipedia.org/wiki/Tony_Hoare
4 https://en.wikipedia.org/wiki/Merge_sort
5 https://en.wikipedia.org/wiki/Heapsort
7 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
8 https://en.wikipedia.org/wiki/Recursion_(computer_science)
9 https://en.wikipedia.org/wiki/In-place_algorithm
10 https://en.wikipedia.org/wiki/Main_memory
11 https://en.wikipedia.org/wiki/Comparison_sort
12 https://en.wikipedia.org/wiki/Total_order


Quicksort are not a stable sort13 , meaning that the relative order of equal sort items is not
preserved.
Mathematical analysis14 of quicksort shows that, on average15 , the algorithm takes
O16 (n log n) comparisons to sort n items. In the worst case17 , it makes O(n2 ) compar-
isons, though this behavior is rare.

7.1 History

The quicksort algorithm was developed in 1959 by Tony Hoare18 while in the Soviet Union19 ,
as a visiting student at Moscow State University20 . At that time, Hoare worked on a project
on machine translation21 for the National Physical Laboratory22 . As a part of the translation
process, he needed to sort the words in Russian sentences prior to looking them up in a
Russian-English dictionary that was already sorted in alphabetical order on magnetic tape23 .[4]
After recognizing that his first idea, insertion sort24 , would be slow, he quickly came up with
a new idea: Quicksort. He wrote a program in Mercury Autocode25 for the partition
but could not write the program to account for the list of unsorted segments. On return to
England, he was asked to write code for Shellsort26 as part of his new job. Hoare mentioned
to his boss that he knew of a faster algorithm and his boss bet sixpence that he did not. His
boss ultimately accepted that he had lost the bet. Later, Hoare learned about ALGOL27
and its ability to do recursion that enabled him to publish the code in Communications of
the Association for Computing Machinery28 , the premier computer science journal of the
time.[2][5]
Quicksort gained widespread adoption, appearing, for example, in Unix29 as the default
library sort subroutine. Hence, it lent its name to the C standard library30 subroutine
qsort31[6] and to the reference implementation of Java32 .

13 https://en.wikipedia.org/wiki/Stable_sort
14 https://en.wikipedia.org/wiki/Analysis_of_algorithms
15 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
16 https://en.wikipedia.org/wiki/Big_O_notation
17 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
18 https://en.wikipedia.org/wiki/Tony_Hoare
19 https://en.wikipedia.org/wiki/Soviet_Union
20 https://en.wikipedia.org/wiki/Moscow_State_University
21 https://en.wikipedia.org/wiki/Machine_translation
22 https://en.wikipedia.org/wiki/National_Physical_Laboratory,_UK
23 https://en.wikipedia.org/wiki/Magnetic_tape_data_storage
24 https://en.wikipedia.org/wiki/Insertion_sort
25 https://en.wikipedia.org/wiki/Autocode
26 https://en.wikipedia.org/wiki/Shellsort
27 https://en.wikipedia.org/wiki/ALGOL
28 https://en.wikipedia.org/wiki/Communications_of_the_ACM
29 https://en.wikipedia.org/wiki/Unix
30 https://en.wikipedia.org/wiki/C_standard_library
31 https://en.wikipedia.org/wiki/Qsort
32 https://en.wikipedia.org/wiki/Java_(programming_language)


Robert Sedgewick33 's Ph.D. thesis in 1975 is considered a milestone in the study of Quicksort:
he resolved many open problems related to the analysis of various pivot selection schemes,
including Samplesort34 and adaptive partitioning by Van Emden[7], as well as the derivation
of the expected number of comparisons and swaps.[6] Jon Bentley35 and Doug McIlroy36
incorporated various improvements for use in programming libraries, including a technique
to deal with equal elements and a pivot scheme known as pseudomedian of nine, where a
sample of nine elements is divided into groups of three and then the median of the three
medians from the three groups is chosen.[6] Bentley described another, simpler and more
compact partitioning scheme in his book Programming Pearls that he attributed to Nico
Lomuto. Later, Bentley wrote that he had used Hoare's version for years but never really
understood it, whereas Lomuto's version was simple enough to prove correct.[8] In the same
essay, Bentley described Quicksort as the “most beautiful code I had ever written”. Lomuto's partition scheme
was also popularized by the textbook Introduction to Algorithms37 , although it is inferior to
Hoare's scheme because it does three times more swaps on average and degrades to O(n²)
runtime when all elements are equal.[9]
In 2009, Vladimir Yaroslavskiy proposed a new dual-pivot Quicksort implementation.[10]
In the Java core library mailing lists, he initiated a discussion claiming his new algorithm
to be superior to the runtime library's sorting method, which was at that time based on
the widely used and carefully tuned variant of classic Quicksort by Bentley and McIlroy.[11]
Yaroslavskiy's Quicksort has been chosen as the new default sorting algorithm in Oracle's
Java 7 runtime library[12] after extensive empirical performance tests.[13]

33 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
34 https://en.wikipedia.org/wiki/Samplesort
35 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
36 https://en.wikipedia.org/wiki/Douglas_McIlroy
37 https://en.wikipedia.org/wiki/Introduction_to_Algorithms


7.2 Algorithm

Figure 22 Full example of quicksort on a random set of numbers. The shaded element
is the pivot. It is always chosen as the last element of the partition. However, always
choosing the last element in the partition as the pivot in this way results in poor
performance (O(n²)) on already sorted arrays, or arrays of identical elements. Since
sub-arrays of sorted / identical elements crop up a lot towards the end of a sorting
procedure on a large set, versions of the quicksort algorithm that choose the pivot as the
middle element run much more quickly than the algorithm described in this diagram on
large sets of numbers.


Quicksort is a divide and conquer algorithm39 . It first divides the input array into two
smaller sub-arrays: the low elements and the high elements. It then recursively sorts the
sub-arrays. The steps for in-place40 Quicksort are:
1. Pick an element, called a pivot, from the array.
2. Partitioning: reorder the array so that all elements with values less than the pivot
come before the pivot, while all elements with values greater than the pivot come after
it (equal values can go either way). After this partitioning, the pivot is in its final
position. This is called the partition operation.
3. Recursively41 apply the above steps to the sub-array of elements with smaller values
and separately to the sub-array of elements with greater values.
The base case of the recursion is arrays of size zero or one, which are in order by definition,
so they never need to be sorted.
The pivot selection and partitioning steps can be done in several different ways; the choice
of specific implementation schemes greatly affects the algorithm's performance.

7.2.1 Lomuto partition scheme

This scheme is attributed to Nico Lomuto and popularized by Bentley in his book Pro-
gramming Pearls[14] and Cormen et al. in their book Introduction to Algorithms42 .[15] This
scheme chooses a pivot that is typically the last element in the array. The algorithm
maintains index i as it scans the array using another index j such that the elements at lo through
i-1 (inclusive) are less than the pivot, and the elements at i through j (inclusive) are equal
to or greater than the pivot. As this scheme is more compact and easy to understand, it
is frequently used in introductory material, although it is less efficient than Hoare's
original scheme.[16] This scheme degrades to O(n²) when the array is already in order.[9] There
have been various variants proposed to boost performance including various ways to select
pivot, deal with equal elements, use other sorting algorithms such as Insertion sort43 for
small arrays and so on. In pseudocode44 , a quicksort that sorts elements at lo through hi
(inclusive) of an array A can be expressed as:[15]
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)

algorithm partition(A, lo, hi) is
    pivot := A[hi]
    i := lo
    for j := lo to hi do
        if A[j] < pivot then
            swap A[i] with A[j]
            i := i + 1
    swap A[i] with A[hi]
    return i

39 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
40 https://en.wikipedia.org/wiki/In-place_algorithm
41 https://en.wikipedia.org/wiki/Recursion_(computer_science)
42 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
43 https://en.wikipedia.org/wiki/Insertion_sort
44 https://en.wikipedia.org/wiki/Pseudocode

Sorting the entire array is accomplished by quicksort(A, 0, length(A) - 1).
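For readers who want something executable, the pseudocode above translates almost line-for-line into Python. This is an illustrative sketch (the function names simply mirror the pseudocode), not a tuned implementation:

```python
def partition(A, lo, hi):
    """Lomuto partition of A[lo..hi]; returns the pivot's final index."""
    pivot = A[hi]                 # pivot is the last element
    i = lo
    for j in range(lo, hi):       # j = hi would only compare the pivot to itself
        if A[j] < pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[hi] = A[hi], A[i]     # move the pivot into its final position
    return i

def quicksort(A, lo=0, hi=None):
    """Sort the list A in place; call as quicksort(A)."""
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        p = partition(A, lo, hi)
        quicksort(A, lo, p - 1)   # elements smaller than the pivot
        quicksort(A, p + 1, hi)   # elements greater than or equal to it

data = [3, 7, 1, 8, 2, 8, 0]
quicksort(data)                   # data is now [0, 1, 2, 3, 7, 8, 8]
```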

7.2.2 Hoare partition scheme

The original partition scheme described by C.A.R. Hoare uses two indices that start at
the ends of the array being partitioned, then move toward each other, until they detect an
inversion: a pair of elements, one greater than or equal to the pivot and one less than or equal to it, that
are in the wrong order relative to each other. The inverted elements are then swapped.[17]
When the indices meet, the algorithm stops and returns the final index. Hoare's scheme is
more efficient than Lomuto's partition scheme because it does three times fewer swaps on
average, and it creates efficient partitions even when all values are equal.[9]
Like Lomuto's partition scheme, Hoare's partitioning would also cause Quicksort to degrade
to O(n²) for already sorted input if the pivot was chosen as the first or the last element.
With the middle element as the pivot, however, sorted data results in (almost) no swaps
in equally sized partitions, leading to the best-case behavior of Quicksort, i.e. O(n log(n)). Like
others, Hoare's partitioning doesn't produce a stable sort. In this scheme, the pivot's final
location is not necessarily at the index that was returned, and the next two segments that
the main algorithm recurs on are (lo..p) and (p+1..hi) as opposed to (lo..p-1) and (p+1..hi)
as in Lomuto's scheme. However, the partitioning algorithm guarantees lo ≤ p < hi which
implies both resulting partitions are non-empty, hence there's no risk of infinite recursion.
In pseudocode46 ,[15]
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p)
        quicksort(A, p + 1, hi)

algorithm partition(A, lo, hi) is
    pivot := A[⌊(hi + lo) / 2⌋]
    i := lo - 1
    j := hi + 1
    loop forever
        do
            i := i + 1
        while A[i] < pivot
        do
            j := j - 1
        while A[j] > pivot
        if i ≥ j then
            return j
        swap A[i] with A[j]

An important point in choosing the pivot item is to round the division result towards zero.
This is the implicit behavior of integer division in some programming languages (e.g., C,
C++, Java), hence rounding is omitted in implementing code. Here it is emphasized with
explicit use of a floor function47 , denoted with a ⌊ ⌋ symbol pair. Rounding down is
important to avoid using A[hi] as the pivot, which can result in infinite recursion.

46 https://en.wikipedia.org/wiki/Pseudocode
47 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions


The entire array is sorted by quicksort(A, 0, length(A) - 1).
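The same scheme is easy to try out in Python. The sketch below follows the pseudocode directly, rewriting each do/while loop as an increment followed by a while loop, and keeping the convention that the returned index j belongs to the left partition (illustrative only):

```python
def hoare_partition(A, lo, hi):
    """Hoare partition: afterwards A[lo..j] <= pivot <= A[j+1..hi]."""
    pivot = A[(lo + hi) // 2]     # middle element, rounding down
    i, j = lo - 1, hi + 1
    while True:
        i += 1                    # do ... while A[i] < pivot
        while A[i] < pivot:
            i += 1
        j -= 1                    # do ... while A[j] > pivot
        while A[j] > pivot:
            j -= 1
        if i >= j:
            return j
        A[i], A[j] = A[j], A[i]   # swap the inverted pair

def quicksort(A, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        p = hoare_partition(A, lo, hi)
        quicksort(A, lo, p)       # p is included, unlike in Lomuto's scheme
        quicksort(A, p + 1, hi)
```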

7.2.3 Implementation issues

Choice of pivot

In the very early versions of quicksort, the leftmost element of the partition would often
be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already
sorted arrays, which is a rather common use-case. The problem was easily solved by choosing
either a random index for the pivot, choosing the middle index of the partition or (especially
for longer partitions) choosing the median48 of the first, middle and last element of the
partition for the pivot (as recommended by Sedgewick49 ).[18] This “median-of-three” rule
counters the case of sorted (or reverse-sorted) input, and gives a better estimate of the
optimal pivot (the true median) than selecting any single element, when no information
about the ordering of the input is known.
Median-of-three code snippet for Lomuto partition:
mid := ⌊(lo + hi) / 2⌋
if A[mid] < A[lo] then
    swap A[lo] with A[mid]
if A[hi] < A[lo] then
    swap A[lo] with A[hi]
if A[mid] < A[hi] then
    swap A[mid] with A[hi]
pivot := A[hi]

It puts a median into A[hi] first, then that new value of A[hi] is used for a pivot, as in a
basic algorithm presented above.
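A quick way to convince yourself that this snippet really leaves the median in A[hi] is to run it over every ordering of three distinct values. The Python transcription below is an illustrative sketch of the pseudocode, not code from the article:

```python
from itertools import permutations

def median_of_three(A, lo, hi):
    """Arrange A[lo], A[mid], A[hi] so the median ends up in A[hi]
    (the Lomuto-style pivot position); returns that pivot value."""
    mid = (lo + hi) // 2
    if A[mid] < A[lo]:
        A[lo], A[mid] = A[mid], A[lo]
    if A[hi] < A[lo]:
        A[lo], A[hi] = A[hi], A[lo]
    if A[mid] < A[hi]:
        A[mid], A[hi] = A[hi], A[mid]
    return A[hi]

# Exhaustive check over all six orderings of three distinct values.
for perm in permutations([1, 2, 3]):
    A = list(perm)
    assert median_of_three(A, 0, 2) == 2   # median of {1, 2, 3}
```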
Specifically, the expected number of comparisons needed to sort n elements (see § Analysis
of randomized quicksort50 ) with random pivot selection is 1.386 n log n. Median-of-three
pivoting brings this down to C_{n,2}51 ≈ 1.188 n log n, at the expense of a three-percent increase
in the expected number of swaps.[6] An even stronger pivoting rule, for larger arrays, is to
pick the ninther52 , a recursive median-of-three (Mo3), defined as[6]
ninther(a) = median(Mo3(first ⅓ of a), Mo3(middle ⅓ of a), Mo3(final ⅓ of a))
Selecting a pivot element is also complicated by the existence of integer overflow53 . If the
boundary indices of the subarray being sorted are sufficiently large, the naïve expression for
the middle index, (lo + hi)/2, will cause overflow and provide an invalid pivot index. This
can be overcome by using, for example, lo + (hi−lo)/2 to index the middle element, at the
cost of more complex arithmetic. Similar issues arise in some other methods of selecting
the pivot element.
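Python's integers do not overflow, so the sketch below simulates 32-bit signed addition with a hypothetical to_int32 helper to make the failure visible, then shows the safe form:

```python
def to_int32(x):
    """Wrap x the way 32-bit signed two's-complement arithmetic would."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

lo, hi = 2_000_000_000, 2_100_000_000   # each fits in a signed 32-bit int

naive = to_int32(lo + hi) // 2          # lo + hi = 4,100,000,000 wraps negative
safe = lo + (hi - lo) // 2              # hi - lo is small, so no overflow

print(naive)   # -97483648: an invalid (negative) index into the array
print(safe)    # 2050000000: the correct middle index
```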

48 https://en.wikipedia.org/wiki/Median
49 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
50 #Analysis_of_randomized_quicksort
51 https://en.wikipedia.org/wiki/Binomial_coefficient
52 https://en.wikipedia.org/wiki/Ninther
53 https://en.wikipedia.org/wiki/Integer_overflow


Repeated elements

With a partitioning algorithm such as the Lomuto partition scheme described above (even
one that chooses good pivot values), quicksort exhibits poor performance for inputs that
contain many repeated elements. The problem is clearly apparent when all the input el-
ements are equal: at each recursion, the left partition is empty (no input values are less
than the pivot), and the right partition has only decreased by one element (the pivot is
removed). Consequently, the Lomuto partition scheme takes quadratic time54 to sort an
array of equal values. However, with a partitioning algorithm such as the Hoare partition
scheme, repeated elements generally results in better partitioning, and although needless
swaps of elements equal to the pivot may occur, the running time generally decreases as the
number of repeated elements increases (with memory cache reducing the swap overhead).
In the case where all elements are equal, Hoare partition scheme needlessly swaps elements,
but the partitioning itself is best case, as noted in the Hoare partition section above.
To solve the Lomuto partition scheme problem (sometimes called the Dutch national flag
problem55[6] ), an alternative linear-time partition routine can be used that separates the
values into three groups: values less than the pivot, values equal to the pivot, and values
greater than the pivot. (Bentley and McIlroy call this a ”fat partition” and it was already
implemented in the qsort56 of Version 7 Unix57 .[6] ) The values equal to the pivot are already
sorted, so only the less-than and greater-than partitions need to be recursively sorted. In
pseudocode, the quicksort algorithm becomes
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := pivot(A, lo, hi)
        left, right := partition(A, p, lo, hi)  // note: multiple return values
        quicksort(A, lo, left - 1)
        quicksort(A, right + 1, hi)

The partition algorithm returns indices to the first ('leftmost') and to the last ('rightmost')
item of the middle partition. Every item of the middle partition is equal to p and is therefore
sorted. Consequently, the items of the middle partition need not be included in the recursive calls
to quicksort.
The best case for the algorithm now occurs when all elements are equal (or are chosen from
a small set of k ≪ n elements). In the case of all equal elements, the modified quicksort will
perform only two recursive calls on empty subarrays and thus finish in linear time (assuming
the partition subroutine takes no longer than linear time).
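A sketch of such a three-way (“fat”) partition in Python, using the classic Dutch-national-flag invariant; this is illustrative code, not taken from any particular library:

```python
def quicksort3(A, lo=0, hi=None):
    """Quicksort with a three-way partition: elements equal to the
    pivot are grouped in the middle and never recursed into."""
    if hi is None:
        hi = len(A) - 1
    if lo >= hi:
        return
    pivot = A[(lo + hi) // 2]
    left, i, right = lo, lo, hi
    # Invariant: A[lo..left-1] < pivot, A[left..i-1] == pivot,
    #            A[right+1..hi] > pivot; A[i..right] still unexamined.
    while i <= right:
        if A[i] < pivot:
            A[left], A[i] = A[i], A[left]
            left += 1
            i += 1
        elif A[i] > pivot:
            A[i], A[right] = A[right], A[i]
            right -= 1
        else:
            i += 1
    quicksort3(A, lo, left - 1)       # strictly smaller elements
    quicksort3(A, right + 1, hi)      # strictly larger elements

data = [4, 1, 4, 4, 9, 4, 0, 4]
quicksort3(data)                      # [0, 1, 4, 4, 4, 4, 4, 9]
```

On an array of all-equal elements the two recursive calls receive empty ranges, so the sort finishes in a single linear pass, matching the best case described above.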

Optimizations

Two other important optimizations, also suggested by Sedgewick and widely used in prac-
tice, are:[19][20]

54 https://en.wikipedia.org/wiki/Quadratic_time
55 https://en.wikipedia.org/wiki/Dutch_national_flag_problem
56 https://en.wikipedia.org/wiki/Qsort
57 https://en.wikipedia.org/wiki/Version_7_Unix


• To make sure at most O(log n) space is used, recur58 first into the smaller side of the
partition, then use a tail call59 to recur into the other, or update the parameters to no
longer include the now sorted smaller side, and iterate to sort the larger side.
• When the number of elements is below some threshold (perhaps ten elements), switch
to a non-recursive sorting algorithm such as insertion sort60 that performs fewer swaps,
comparisons or other operations on such small arrays. The ideal 'threshold' will vary
based on the details of the specific implementation.
• An older variant of the previous optimization: when the number of elements is less than
the threshold k, simply stop; then after the whole array has been processed, perform
insertion sort on it. Stopping the recursion early leaves the array k-sorted, meaning that each
element is at most k positions away from its final sorted position. In this case, insertion
sort takes O(kn) time to finish the sort, which is linear if k is a constant.[21][14]:117
Compared to the “many small sorts” optimization, this version may execute fewer instructions,
but it makes suboptimal use of the cache memories61 in modern computers.[22]
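The first two optimizations can be sketched together in Python: recurse only into the smaller partition (which bounds the stack at O(log n)) and hand small runs to insertion sort. The threshold of 10 and all names here are illustrative choices, not recommendations:

```python
THRESHOLD = 10   # illustrative cutoff; tune per implementation

def insertion_sort(A, lo, hi):
    for k in range(lo + 1, hi + 1):
        key, j = A[k], k - 1
        while j >= lo and A[j] > key:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = key

def lomuto(A, lo, hi):
    pivot, i = A[hi], lo
    for j in range(lo, hi):
        if A[j] < pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[hi] = A[hi], A[i]
    return i

def quicksort_opt(A, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    while lo < hi:
        if hi - lo + 1 <= THRESHOLD:
            insertion_sort(A, lo, hi)    # small run: no recursion at all
            return
        p = lomuto(A, lo, hi)
        if p - lo < hi - p:              # recurse into the smaller side,
            quicksort_opt(A, lo, p - 1)  # loop (tail call) on the larger
            lo = p + 1
        else:
            quicksort_opt(A, p + 1, hi)
            hi = p - 1
```

Even on already sorted input, where the last-element pivot is worst-case, the recursion depth stays O(log n): the larger side is handled by the loop, never by a nested call.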

Parallelization

Quicksort's divide-and-conquer formulation makes it amenable to parallelization62 using


task parallelism63 . The partitioning step is accomplished through the use of a parallel
prefix sum64 algorithm to compute an index for each array element in its section of the
partitioned array.[23][24] Given an array of size n, the partitioning step performs O(n) work
in O(log n) time and requires O(n) additional scratch space. After the array has been
partitioned, the two partitions can be sorted recursively in parallel. Assuming an ideal
choice of pivots, parallel quicksort sorts an array of size n in O(n log n) work in O(log² n)
time using O(n) additional space.
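The prefix-sum idea can be illustrated sequentially: flag each element, take an exclusive prefix sum of the flags to obtain every element's destination index, then scatter into scratch space. In the parallel algorithm each of these scans is itself an O(log n)-time parallel prefix sum; this Python sketch (hypothetical helper name) only shows where the indices come from:

```python
def partition_by_prefix_sums(A, pivot):
    """Out-of-place partition: returns (partitioned list, size of '<' part)."""
    flags = [1 if x < pivot else 0 for x in A]
    # Exclusive prefix sum of flags: slot for each element that is < pivot.
    dest, total = [0] * len(A), 0
    for i, f in enumerate(flags):
        dest[i] = total
        total += f
    n_less = total
    # Second scan assigns slots (after the '<' block) to the remaining elements.
    total = 0
    for i, f in enumerate(flags):
        if not f:
            dest[i] = n_less + total
            total += 1
    out = [None] * len(A)
    for i, x in enumerate(A):      # scatter: each element to its computed slot
        out[dest[i]] = x
    return out, n_less

out, k = partition_by_prefix_sums([5, 2, 8, 1, 9], 5)
# out == [2, 1, 5, 8, 9], k == 2
```

Because every element's destination is known before any write happens, the scatter step has no data dependencies and can run fully in parallel.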
Quicksort has some disadvantages when compared to alternative sorting algorithms, like
merge sort65 , which complicate its efficient parallelization. The depth of quicksort's
divide-and-conquer tree directly impacts the algorithm's scalability, and this depth is highly
dependent on the algorithm's choice of pivot. Additionally, it is difficult to parallelize the
partitioning step efficiently in-place. The use of scratch space simplifies the partitioning
step, but increases the algorithm's memory footprint and constant overheads.
Other more sophisticated parallel sorting algorithms can achieve even better time bounds.[25]
For example, in 1991 David Powers described a parallelized quicksort (and a related radix
sort66 ) that can operate in O(log n) time on a CRCW67 (concurrent read and concurrent
write) PRAM68 (parallel random-access machine) with n processors by performing
partitioning implicitly.[26]

58 https://en.wiktionary.org/wiki/recurse
59 https://en.wikipedia.org/wiki/Tail_call
60 https://en.wikipedia.org/wiki/Insertion_sort
61 https://en.wikipedia.org/wiki/Cache_memory
62 https://en.wikipedia.org/wiki/Parallel_algorithm
63 https://en.wikipedia.org/wiki/Task_parallelism
64 https://en.wikipedia.org/wiki/Prefix_sum
65 https://en.wikipedia.org/wiki/Merge_sort
66 https://en.wikipedia.org/wiki/Radix_sort
67 https://en.wikipedia.org/wiki/Parallel_random-access_machine#Read/write_conflicts
68 https://en.wikipedia.org/wiki/Parallel_Random_Access_Machine


7.3 Formal analysis

7.3.1 Worst-case analysis

The most unbalanced partition occurs when one of the sublists returned by the partitioning
routine is of size n − 1.[27] This may occur if the pivot happens to be the smallest or
largest element in the list, or in some implementations (e.g., the Lomuto partition scheme
as described above) when all the elements are equal.
If this happens repeatedly in every partition, then each recursive call processes a list of size
one less than the previous list. Consequently, we can make n − 1 nested calls before we
reach a list of size 1. This means that the call tree69 is a linear chain of n − 1 nested calls.

The ith call does O(n − i) work to do the partition, and ∑_{i=0}^{n} (n − i) = O(n²), so in that
case Quicksort takes O(n²) time.

7.3.2 Best-case analysis

In the most balanced case, each time we perform a partition we divide the list into two nearly
equal pieces. This means each recursive call processes a list of half the size. Consequently,
we can make only log2 n nested calls before we reach a list of size 1. This means that the
depth of the call tree70 is log2 n. But no two calls at the same level of the call tree process
the same part of the original list; thus, each level of calls needs only O(n) time all together
(each call has some constant overhead, but since there are only O(n) calls at each level, this
is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.

7.3.3 Average-case analysis

To sort an array of n distinct elements, quicksort takes O(n log n) time in expectation,
averaged over all n! permutations of n elements with equal probability71 . We list here three
common proofs of this claim, providing different insights into quicksort's workings.

Using percentiles

If each pivot has rank somewhere in the middle 50 percent, that is, between the 25th
percentile72 and the 75th percentile, then it splits the elements with at least 25% and at
most 75% on each side. If we could consistently choose such pivots, we would only have
to split the list at most log4/3 n times before reaching lists of size 1, yielding an O(n log n)
algorithm.
When the input is a random permutation, the pivot has a random rank, and so it is not
guaranteed to be in the middle 50 percent. However, when we start from a random
permutation, in each recursive call the pivot has a random rank in its list, and so it is in the

69 https://en.wikipedia.org/wiki/Call_stack
70 https://en.wikipedia.org/wiki/Call_stack
71 https://en.wikipedia.org/wiki/Uniform_distribution_(discrete)
72 https://en.wikipedia.org/wiki/Percentile


middle 50 percent about half the time. That is good enough. Imagine that you flip a coin:
heads means that the rank of the pivot is in the middle 50 percent, tails means that it isn't.
Imagine that you are flipping a coin over and over until you get k heads. Although this
could take a long time, on average only 2k flips are required, and the chance that you won't
get k heads after 100k flips is vanishingly small (this can be made rigorous using Chernoff
bounds73 ). By the same argument, Quicksort's recursion will terminate on average at a call
depth of only 2 log4/3 n. But if its average call depth is O(log n), and each level of the call
tree processes at most n elements, the total amount of work done on average is the product,
O(n log n). The algorithm does not have to verify that the pivot is in the middle half—if
we hit it any constant fraction of the times, that is enough for the desired complexity.

Using recurrences

An alternative approach is to set up a recurrence relation74 for the T(n) factor, the time
needed to sort a list of size n. In the most unbalanced case, a single quicksort call involves
O(n) work plus two recursive calls on lists of size 0 and n−1, so the recurrence relation is
T(n) = O(n) + T(0) + T(n − 1) = O(n) + T(n − 1).
This is the same relation as for insertion sort75 and selection sort76 , and it solves to worst
case T(n) = O(n²).
In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls
on lists of size n/2, so the recurrence relation is

T(n) = O(n) + 2T(n/2).
The master theorem for divide-and-conquer recurrences77 tells us that T(n) = O(n log n).
The outline of a formal proof of the O(n log n) expected time complexity follows. Assume
that there are no duplicates, as duplicates could be handled with linear-time pre- and post-processing,
or considered in cases easier than the one analyzed. When the input is a random
permutation, the rank of the pivot is uniform random from 0 to n − 1. Then the resulting
parts of the partition have sizes i and n − i − 1, and i is uniform random from 0 to n −
1. So, averaging over all possible splits and noting that the number of comparisons for the
partition is n − 1, the average number of comparisons over all permutations of the input
sequence can be estimated accurately by solving the recurrence relation:

C(n) = n − 1 + (1/n) ∑_{i=0}^{n−1} (C(i) + C(n − i − 1)) = n − 1 + (2/n) ∑_{i=0}^{n−1} C(i)

nC(n) = n(n − 1) + 2 ∑_{i=0}^{n−1} C(i)

73 https://en.wikipedia.org/wiki/Chernoff_bound
74 https://en.wikipedia.org/wiki/Recurrence_relation
75 https://en.wikipedia.org/wiki/Insertion_sort
76 https://en.wikipedia.org/wiki/Selection_sort
77 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)


nC(n) − (n − 1)C(n − 1) = n(n − 1) − (n − 1)(n − 2) + 2C(n − 1)


nC(n) = (n + 1)C(n − 1) + 2n − 2

C(n)/(n + 1) = C(n − 1)/n + 2/(n + 1) − 2/(n(n + 1)) ≤ C(n − 1)/n + 2/(n + 1)
             = C(n − 2)/(n − 1) + 2/n − 2/((n − 1)n) + 2/(n + 1) ≤ C(n − 2)/(n − 1) + 2/n + 2/(n + 1)
             …
             = C(1)/2 + ∑_{i=2}^{n} 2/(i + 1) ≤ 2 ∑_{i=1}^{n} 1/i ≈ 2 ∫₁ⁿ (1/x) dx = 2 ln n

Solving the recurrence gives C(n) = 2n ln n ≈ 1.39 n log₂ n.
This means that, on average, quicksort performs only about 39% worse than in its best
case. In this sense, it is closer to the best case than the worst case. A comparison sort78
cannot use less than log₂(n!) comparisons on average to sort n items (as explained in the
article Comparison sort79 ) and, for large n, Stirling's approximation80 yields log₂(n!)
≈ n(log₂ n − log₂ e), so quicksort is not much worse than an ideal comparison sort. This fast
average runtime is another reason for quicksort's practical dominance over other sorting
algorithms.
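The ≈ 1.39 n log₂ n figure is easy to check empirically. The sketch below instruments a randomized Lomuto quicksort with a comparison counter (an ad-hoc harness, not part of the article's analysis) and compares the count against the prediction:

```python
import random
from math import log2

def quicksort_count(A):
    """Randomized quicksort; returns the number of element comparisons."""
    count = 0
    def sort(lo, hi):
        nonlocal count
        if lo >= hi:
            return
        r = random.randint(lo, hi)         # random pivot, moved to the end
        A[r], A[hi] = A[hi], A[r]
        pivot, i = A[hi], lo
        for j in range(lo, hi):
            count += 1                     # one comparison against the pivot
            if A[j] < pivot:
                A[i], A[j] = A[j], A[i]
                i += 1
        A[i], A[hi] = A[hi], A[i]
        sort(lo, i - 1)
        sort(i + 1, hi)
    sort(0, len(A) - 1)
    return count

random.seed(42)
n = 10000
observed = quicksort_count(random.sample(range(n), n))
predicted = 1.39 * n * log2(n)     # observed / predicted comes out near 1.0
```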

Using a binary search tree

To each execution of quicksort corresponds the following binary search tree81 (BST): the
initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot
of the right half is the root of the right subtree, and so on. The number of comparisons of the
execution of quicksort equals the number of comparisons during the construction of the BST
by a sequence of insertions. So, the average number of comparisons for randomized quicksort
equals the average cost of constructing a BST when the values inserted (x1 , x2 , . . . , xn ) form
a random permutation.
Consider a BST created by insertion of a sequence (x1, x2, …, xn) of values forming a random
permutation. Let C denote the cost of creation of the BST. We have C = ∑_i ∑_{j<i} c_{i,j},
where c_{i,j} is a binary random variable expressing whether during the insertion of xi there was a
comparison to xj.

By linearity of expectation82 , the expected value E[C] of C is E[C] = ∑_i ∑_{j<i} Pr(c_{i,j}).

Fix i and j<i. The values x1 , x2 , . . . , xj , once sorted, define j+1 intervals. The core structural
observation is that xi is compared to xj in the algorithm if and only if xi falls inside one of
the two intervals adjacent to xj .

78 https://en.wikipedia.org/wiki/Comparison_sort
https://en.wikipedia.org/wiki/Comparison_sort#Lower_bound_for_the_average_number_of_
79
comparisons
80 https://en.wikipedia.org/wiki/Stirling%27s_approximation
81 https://en.wikipedia.org/wiki/Binary_search_tree
82 https://en.wikipedia.org/wiki/Expected_value#Linearity


Observe that since (x1, x2, …, xn) is a random permutation, (x1, x2, …, xj, xi) is also a
random permutation, so the probability that xi is adjacent to xj is exactly 2/(j + 1).

We end with a short calculation:

E[C] = ∑_i ∑_{j<i} 2/(j + 1) = O(∑_i log i) = O(n log n).

7.3.4 Space complexity

The space used by quicksort depends on the version used.


The in-place version of quicksort has a space complexity of O(log n), even in the worst case,
when it is carefully implemented using the following strategies:
• in-place partitioning is used. This unstable partition requires O(1) space.
• After partitioning, the partition with the fewest elements is (recursively) sorted first,
requiring at most O(log n) space. Then the other partition is sorted using tail recursion83
or iteration, which doesn't add to the call stack. This idea, as discussed above, was
described by R. Sedgewick84 , and keeps the stack depth bounded by O(log n).[18][21]
Quicksort with in-place and unstable partitioning uses only constant additional space before
making any recursive call. Quicksort must store a constant amount of information for each
nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it
uses O(log n) space. However, without Sedgewick's trick to limit the recursive calls, in the
worst case quicksort could make O(n) nested recursive calls and need O(n) auxiliary space.
From a bit complexity viewpoint, variables such as lo and hi do not use constant space; it
takes O(log n) bits to index into a list of n items. Because there are such variables in every
stack frame, quicksort using Sedgewick's trick requires O((log n)²) bits of space. This space
requirement isn't too terrible, though, since if the list contained distinct elements, it would
need at least O(n log n) bits of space.
Another, less common, not-in-place, version of quicksort uses O(n) space for working storage
and can implement a stable sort. The working storage allows the input array to be easily
partitioned in a stable manner and then copied back to the input array for successive
recursive calls. Sedgewick's optimization is still appropriate.

7.4 Relation to other algorithms

Quicksort is a space-optimized version of the binary tree sort85 . Instead of inserting items
sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is
implied by the recursive calls. The algorithms make exactly the same comparisons, but in a
different order. An often desirable property of a sorting algorithm86 is stability – that is the

83 https://en.wikipedia.org/wiki/Tail_recursion
84 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
85 https://en.wikipedia.org/wiki/Binary_tree_sort
86 https://en.wikipedia.org/wiki/Sorting_algorithm


order of elements that compare equal is not changed, allowing controlling order of multikey
tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain
for in situ (or in place) quicksort (that uses only constant additional space for pointers and
buffers, and O(log n) additional space for the management of explicit or implicit recursion).
For variant quicksorts involving extra memory due to representations using pointers (e.g.
lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex,
or disk-bound, data structures tend to increase time cost, in general making increasing use
of virtual memory or disk.
The most direct competitor of quicksort is heapsort87 . Heapsort's running time is O(n log n),
but heapsort's average running time is usually considered slower than in-place quicksort.[28]
This result is debatable; some publications indicate the opposite.[29][30] Introsort88 is a
variant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's
worst-case running time.
Quicksort also competes with merge sort89 , another O(n log n) sorting algorithm. Mergesort
is a stable sort90 , unlike standard in-place quicksort and heapsort, and has excellent
worst-case performance. The main disadvantage of mergesort is that, when operating on arrays,
efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with
in-place partitioning and tail recursion uses only O(log n) space.
Mergesort works very well on linked lists91 , requiring only a small, constant amount of
auxiliary storage. Although quicksort can be implemented as a stable sort using linked
lists, it will often suffer from poor pivot choices without random access. Mergesort is also
the algorithm of choice for external sorting92 of very large data sets stored on slow-to-access
media such as disk storage93 or network-attached storage94 .
Bucket sort95 with two buckets is very similar to quicksort; the pivot in this case is
effectively the value in the middle of the value range, which does well on average for uniformly
distributed inputs.

7.4.1 Selection-based pivoting

A selection algorithm96 chooses the kth smallest of a list of numbers; this is an easier problem
in general than sorting. One simple but effective selection algorithm works nearly in the
same manner as quicksort, and is accordingly known as quickselect97 . The difference is that
instead of making recursive calls on both sublists, it only makes a single tail-recursive call on
the sublist that contains the desired element. This change lowers the average complexity to
linear or O(n) time, which is optimal for selection, but the sorting algorithm is still O(n^2).
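The recursion just described can be sketched in Python. This is a hypothetical, out-of-place version written for clarity (the names and the middle-element pivot choice are ours); practical implementations partition in place and make only the single tail-recursive call described above:

```python
def quickselect(a, k):
    """Return the k-th smallest element of a (k is 0-indexed)."""
    pivot = a[len(a) // 2]
    lt = [x for x in a if x < pivot]   # elements smaller than the pivot
    eq = [x for x in a if x == pivot]  # elements equal to the pivot
    gt = [x for x in a if x > pivot]   # elements larger than the pivot
    if k < len(lt):
        return quickselect(lt, k)      # answer lies in the "less" part
    if k < len(lt) + len(eq):
        return pivot                   # answer is the pivot itself
    return quickselect(gt, k - len(lt) - len(eq))  # search the "greater" part
```

Unlike quicksort, only one of the three parts is ever recursed into, which is what brings the average cost down to O(n).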

87 https://en.wikipedia.org/wiki/Heapsort
88 https://en.wikipedia.org/wiki/Introsort
89 https://en.wikipedia.org/wiki/Merge_sort
90 https://en.wikipedia.org/wiki/Stable_sort
91 https://en.wikipedia.org/wiki/Linked_list
92 https://en.wikipedia.org/wiki/External_sorting
93 https://en.wikipedia.org/wiki/Disk_storage
94 https://en.wikipedia.org/wiki/Network-attached_storage
95 https://en.wikipedia.org/wiki/Bucket_sort
96 https://en.wikipedia.org/wiki/Selection_algorithm
97 https://en.wikipedia.org/wiki/Quickselect


A variant of quickselect, the median of medians98 algorithm, chooses pivots more carefully,
ensuring that the pivots are near the middle of the data (between the 30th and 70th per-
centiles), and thus has guaranteed linear time – O(n). This same pivot strategy can be used
to construct a variant of quicksort (median of medians quicksort) with O(n log n) time.
However, the overhead of choosing the pivot is significant, so this is generally not used in
practice.
More abstractly, given an O(n) selection algorithm, one can use it to find the ideal pivot
(the median) at every step of quicksort and thus produce a sorting algorithm with O(n log
n) running time. Practical implementations of this variant are considerably slower on average,
but they are of theoretical interest because they show that an optimal selection algorithm can
yield an optimal sorting algorithm.

7.4.2 Variants

Multi-pivot quicksort

Instead of partitioning into two subarrays using a single pivot, multi-pivot quicksort (also
multiquicksort[22] ) partitions its input into some number s of subarrays using s − 1 pivots.
While the dual-pivot case (s = 3) was considered by Sedgewick and others already
in the mid-1970s, the resulting algorithms were not faster in practice than the ”classical”
quicksort.[31] A 1999 assessment of a multiquicksort with a variable number of pivots, tuned
to make efficient use of processor caches, found it to increase the instruction count by
some 20%, but simulation results suggested that it would be more efficient on very large
inputs.[22] A version of dual-pivot quicksort developed by Yaroslavskiy in 2009[10] turned
out to be fast enough to warrant implementation in Java 799 , as the standard algorithm to
sort arrays of primitives100 (sorting arrays of objects101 is done using Timsort102 ).[32] The
performance benefit of this algorithm was subsequently found to be mostly related to cache
performance,[33] and experimental results indicate that the three-pivot variant may perform
even better on modern machines.[34][35]
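The dual-pivot partitioning idea can be sketched as follows. This is a simplified, out-of-place illustration (not Yaroslavskiy's in-place algorithm), using the first and last elements as the two pivots p ≤ q:

```python
def dual_pivot_sort(a):
    """Sort a list by partitioning around two pivots (out-of-place sketch)."""
    if len(a) <= 1:
        return list(a)
    p, q = min(a[0], a[-1]), max(a[0], a[-1])   # pivots with p <= q
    rest = a[1:-1]                              # everything except the two pivots
    less    = [x for x in rest if x < p]        # below both pivots
    between = [x for x in rest if p <= x <= q]  # between the pivots (inclusive)
    greater = [x for x in rest if x > q]        # above both pivots
    # each call removes the two pivots, so the recursion terminates
    return (dual_pivot_sort(less) + [p] + dual_pivot_sort(between)
            + [q] + dual_pivot_sort(greater))
```

The production Java version partitions in place with three moving indices rather than building new lists, but the three-way split around two pivots is the same.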

External quicksort

For magnetic tape files, external quicksort works the same as regular quicksort except that the
pivot is replaced by a buffer. First, the first and last M/2 elements are read into the buffer and sorted, then the
next element from the beginning or end is read to balance writing. If the next element is
less than the least of the buffer, write it to available space at the beginning. If greater than
the greatest, write it to the end. Otherwise write the greatest or least of the buffer, and
put the next element in the buffer. Keep the maximum lower and minimum upper keys
written to avoid resorting middle elements that are in order. When done, write the buffer.
Recursively sort the smaller partition, and loop to sort the remaining partition. This is

98 https://en.wikipedia.org/wiki/Median_of_medians
99 https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7_(July_28,_2011)
100 https://en.wikipedia.org/wiki/Primitive_data_type
101 https://en.wikipedia.org/wiki/Object_(computer_science)
102 https://en.wikipedia.org/wiki/Timsort


a kind of three-way quicksort in which the middle partition (buffer) represents a sorted
subarray of elements that are approximately equal to the pivot.
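Assuming the whole file fits in a Python list, the buffer-as-pivot scheme can be simulated as follows. This is a sketch only: the eviction choice is a simple balancing heuristic, and the tape algorithm's read balancing from both ends of the file is omitted.

```python
import bisect

def external_quicksort(items, M=8):
    """Simulate external quicksort: a size-M sorted buffer plays the pivot's role."""
    if len(items) <= M:
        return sorted(items)
    buf = sorted(items[:M])        # tape algorithm: read M records and sort them
    front, back = [], []           # records written before / after the middle partition
    for x in items[M:]:
        if x < buf[0]:
            front.append(x)        # smaller than everything buffered: write to front
        elif x > buf[-1]:
            back.append(x)         # larger than everything buffered: write to end
        else:
            # evict one end of the buffer to make room, then insert x;
            # the buffer minimum only ever grows and the maximum only ever
            # shrinks, so front <= buf <= back holds when the loop finishes
            if len(front) <= len(back):
                front.append(buf.pop(0))
            else:
                back.append(buf.pop())
            bisect.insort(buf, x)
    return external_quicksort(front, M) + buf + external_quicksort(back, M)
```

The buffer ends up as the sorted middle partition, and the two outer partitions are sorted recursively, mirroring the three-way structure described above.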

Three-way radix quicksort

Main article: Multi-key quicksort103
This algorithm is a combination of radix sort104 and quicksort. Pick an element from the array (the pivot) and consider the first character (key)
of the string (multikey). Partition the remaining elements into three sets: those whose corre-
sponding character is less than, equal to, and greater than the pivot's character. Recursively
sort the ”less than” and ”greater than” partitions on the same character. Recursively sort
the ”equal to” partition by the next character (key). Given that we sort using bytes or words of
length W bits, the best case is O(KN) and the worst case O(2^K N), or at least O(N^2) as for
standard quicksort, given that for unique keys N < 2^K, and K is a hidden constant in all standard
comparison sort105 algorithms including quicksort. This is a kind of three-way quicksort
in which the middle partition represents a (trivially) sorted subarray of elements that are
exactly equal to the pivot.
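A compact sketch of this scheme for Python strings (out-of-place, middle-element pivot; the names are ours): an empty slice '' marks an exhausted string and sorts before every character.

```python
def multikey_qsort(strings, d=0):
    """Three-way radix quicksort on a list of strings, keyed on character d."""
    if len(strings) <= 1:
        return list(strings)
    p = strings[len(strings) // 2][d:d+1]        # pivot character ('' if exhausted)
    lt = [s for s in strings if s[d:d+1] < p]    # character less than the pivot's
    eq = [s for s in strings if s[d:d+1] == p]   # character equal to the pivot's
    gt = [s for s in strings if s[d:d+1] > p]    # character greater than the pivot's
    # strings equal on this character are sorted on the next one; if the pivot
    # character is '' the "equal" strings are already identical, so stop there
    middle = multikey_qsort(eq, d + 1) if p else eq
    return multikey_qsort(lt, d) + middle + multikey_qsort(gt, d)
```

Each call assumes all its strings share the first d characters, which holds because only the ”equal to” partition advances d.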

Quick radix sort

Quick radix sort was also developed by Powers as an o(K) parallel PRAM106 algorithm. This is again a combination
of radix sort107 and quicksort, but the quicksort left/right partition decision is made
on successive bits of the key, and is thus O(KN) for N K-bit keys. All comparison sort108
algorithms implicitly assume the transdichotomous model109 with K in Θ(log N), since if K is
smaller we can sort in O(N) time using a hash table or integer sorting110 . If K ≫ log N but
elements are unique within O(log N) bits, the remaining bits will not be looked at by either
quicksort or quick radix sort. Failing that, all comparison sorting algorithms will also have
the same overhead of looking through O(K) relatively useless bits but quick radix sort will
avoid the worst case O(N^2) behaviours of standard quicksort and radix quicksort, and will
be faster even in the best case of those comparison algorithms under these conditions of
uniqueprefix(K) ≫ log N. See Powers[36] for further discussion of the hidden overheads in
comparison, radix and parallel sorting.

BlockQuicksort

In any comparison-based sorting algorithm, minimizing the number of comparisons requires
maximizing the amount of information gained from each comparison, meaning that the comparison results are unpredictable. This causes frequent branch mispredictions111 , limiting

103 https://en.wikipedia.org/wiki/Multi-key_quicksort
104 https://en.wikipedia.org/wiki/Radix_sort
105 https://en.wikipedia.org/wiki/Comparison_sort
106 https://en.wikipedia.org/wiki/Parallel_random-access_machine
107 https://en.wikipedia.org/wiki/Radix_sort
108 https://en.wikipedia.org/wiki/Comparison_sort
109 https://en.wikipedia.org/wiki/Transdichotomous_model
110 https://en.wikipedia.org/wiki/Integer_sorting
111 https://en.wikipedia.org/wiki/Branch_misprediction

performance.[37] BlockQuicksort[38] rearranges the computations of quicksort to convert unpredictable branches to data dependencies112 . When partitioning, the input is divided into
moderate-sized blocks113 (which fit easily into the data cache114 ), and two arrays are filled
with the positions of elements to swap. (To avoid conditional branches, the position is
unconditionally stored at the end of the array, and the index of the end is incremented
if a swap is needed.) A second pass exchanges the elements at the positions indicated in
the arrays. Both loops have only one conditional branch, a test for termination, which is
usually taken.
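The unconditional-store trick for filling one offsets array can be sketched as follows. This is a Python illustration of the idea only; the actual benefit appears in compiled code, where the boolean add avoids a mispredictable branch.

```python
def collect_offsets(block, pivot):
    """Record positions of elements greater than the pivot without branching.

    The position is stored unconditionally; the boolean add keeps it only
    when a swap will be needed, replacing a data-dependent branch.
    """
    offsets = [0] * len(block)
    n = 0
    for i, x in enumerate(block):
        offsets[n] = i        # always store the candidate position
        n += x > pivot        # ...advance only if this element must move
    return offsets[:n]
```

A second pass would then swap the elements at these positions with misplaced elements collected the same way from the other side of the array.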

Partial and incremental quicksort

Main article: Partial sorting115
Several variants of quicksort exist that separate the k smallest or largest elements from the rest of the input.

7.4.3 Generalization

Richard Cole116 and David C. Kandathil, in 2004, discovered a one-parameter family of
sorting algorithms, called partition sorts, which on average (with all input orderings equally
likely) perform at most n log n + O(n) comparisons (close to the information-theoretic lower
bound) and Θ(n log n) operations; at worst they perform Θ(n log^2 n) comparisons (and also
operations); these are in-place, requiring only O(log n) additional space. Practical efficiency
and smaller variance in performance were demonstrated against optimised quicksorts (of
Sedgewick117 and Bentley118 -McIlroy119 ).[39]

7.5 See also

• Computer programming portal120


• Introsort121 − Hybrid sorting algorithm

112 https://en.wikipedia.org/wiki/Data_dependencies
113 https://en.wikipedia.org/wiki/Loop_blocking
114 https://en.wikipedia.org/wiki/Data_cache
115 https://en.wikipedia.org/wiki/Partial_sorting
116 https://en.wikipedia.org/wiki/Richard_J._Cole
117 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
118 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
119 https://en.wikipedia.org/wiki/Douglas_McIlroy
120 https://en.wikipedia.org/wiki/Portal:Computer_programming
121 https://en.wikipedia.org/wiki/Introsort


7.6 Notes
1. ”S A H”122 . C H M. A  
123  3 A 2015. R 22 A 2015.
2. H, C. A. R.124 (1961). ”A 64: Q”. Comm. ACM125 . 4 (7):
321. doi126 :10.1145/366622.366644127 .
3. S, S S.128 (2008). The Algorithm Design Manual129 . S. . 129.
ISBN130 978-1-84800-069-8131 .
4. S, L. (2009). ”I: A   C.A.R. H”. Comm.
ACM132 . 52 (3): 38–41. doi133 :10.1145/1467247.1467261134 .
5. ”M Q   S T H,    Q-
”135 . M M D B. 15 M 2015.
6. B, J L.; MI, M. D (1993). ”E  
”136 . Software—Practice and Experience. 23 (11): 1249–1265. Cite-
SeerX137 10.1.1.14.8162138 . doi139 :10.1002/spe.4380231105140 .
7. V E, M. H. (1 N 1970). ”A 402: I-
  E  Q”. Commun. ACM. 13 (11): 693–694.
doi141 :10.1145/362790.362803142 . ISSN143 0001-0782144 .
8. B, J145 (2007). ”T    I  ”. I
O, A; W, G (.). Beautiful Code: Leading Programmers Explain
How They Think. O'Reilly Media. p. 30. ISBN146 978-0-596-51004-6147 .
9. ”Q P: H . L”148 . cs.stackexchange.com. Re-
trieved 3 August 2015.

https://web.archive.org/web/20150403184558/http://www.computerhistory.org/
122
fellowawards/hall/bios/Antony%2CHoare/
123 http://www.computerhistory.org/fellowawards/hall/bios/Antony,Hoare/
124 https://en.wikipedia.org/wiki/Tony_Hoare
125 https://en.wikipedia.org/wiki/Communications_of_the_ACM
126 https://en.wikipedia.org/wiki/Doi_(identifier)
127 https://doi.org/10.1145%2F366622.366644
128 https://en.wikipedia.org/wiki/Steven_Skiena
129 https://books.google.com/books?id=7XUSn0IKQEgC
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
132 https://en.wikipedia.org/wiki/Communications_of_the_ACM
133 https://en.wikipedia.org/wiki/Doi_(identifier)
134 https://doi.org/10.1145%2F1467247.1467261
http://anothercasualcoder.blogspot.com/2015/03/my-quickshort-interview-with-sir-
135
tony.html
136 http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
137 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
138 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
139 https://en.wikipedia.org/wiki/Doi_(identifier)
140 https://doi.org/10.1002%2Fspe.4380231105
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1145%2F362790.362803
143 https://en.wikipedia.org/wiki/ISSN_(identifier)
144 http://www.worldcat.org/issn/0001-0782
145 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
146 https://en.wikipedia.org/wiki/ISBN_(identifier)
147 https://en.wikipedia.org/wiki/Special:BookSources/978-0-596-51004-6
148 https://cs.stackexchange.com/q/11550


10. Y, V (2009). ”D-P Q”149 (PDF).


A   150 (PDF)  2 O 2015.
11. ”R  Q  ..A   D-P
Q”151 . permalink.gmane.org. Retrieved 3 August 2015.
12. ”J 7 A API ”152 . O. R 23 J 2018.
13. W, S.; N, M.; R, R.; L, U. (7 J 2013). Engineering Java
7's Dual Pivot Quicksort Using MaLiJAn. Proceedings. Society for Industrial and
Applied Mathematics. pp. 55–69. doi153 :10.1137/1.9781611972931.5154 . ISBN155 978-
1-61197-253-5156 .
14. J B (1999). Programming Pearls. Addison-Wesley Professional.
15. C, T H.157 ; L, C E.158 ; R, R L.159 ;
S, C160 (2009) [1990]. ”Q”. Introduction to Algorithms161 (3
.). MIT P  MG-H. . 170–190. ISBN162 0-262-03384-4163 .
16. W, S (2012). ”J 7' D P Q”164 . T
U K.
17. H, C. A. R.165 (1 J 1962). ”Q”166 . The Computer Journal.
5 (1): 10–16. doi167 :10.1093/comjnl/5.1.10168 . ISSN169 0010-4620170 .
18. S, R171 (1 S 1998). Algorithms in C: Fundamentals,
Data Structures, Sorting, Searching, Parts 1–4172 (3 .). P E.
ISBN173 978-81-317-1291-7174 . R 27 N 2012.
19. qsort.c in GNU libc175 : [1]176 , [2]177

https://web.archive.org/web/20151002230717/http://iaroslavski.narod.ru/quicksort/
149
DualPivotQuicksort.pdf
150 http://iaroslavski.narod.ru/quicksort/DualPivotQuicksort.pdf
151 http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
152 https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(int%5b%5d)
153 https://en.wikipedia.org/wiki/Doi_(identifier)
154 https://doi.org/10.1137%2F1.9781611972931.5
155 https://en.wikipedia.org/wiki/ISBN_(identifier)
156 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61197-253-5
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
164 https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3463
165 https://en.wikipedia.org/wiki/Tony_Hoare
166 http://comjnl.oxfordjournals.org/content/5/1/10
167 https://en.wikipedia.org/wiki/Doi_(identifier)
168 https://doi.org/10.1093%2Fcomjnl%2F5.1.10
169 https://en.wikipedia.org/wiki/ISSN_(identifier)
170 http://www.worldcat.org/issn/0010-4620
171 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
172 https://books.google.com/books?id=ylAETlep0CwC
173 https://en.wikipedia.org/wiki/ISBN_(identifier)
174 https://en.wikipedia.org/wiki/Special:BookSources/978-81-317-1291-7
175 https://en.wikipedia.org/wiki/GNU_libc
176 https://www.cs.columbia.edu/~hgs/teaching/isp/hw/qsort.c
177 http://repo.or.cz/w/glibc.git/blob/HEAD:/stdlib/qsort.c


20. 178[permanent dead link179 ]

21. S, R.180 (1978). ”I Q ”. Comm.


ACM181 . 21 (10): 847–857. doi182 :10.1145/359619.359631183 .
22. LM, A; L, R E. (1999). ”T I  C
  P  S”. Journal of Algorithms. 31 (1): 66–104. Cite-
SeerX184 10.1.1.27.1788185 . doi186 :10.1006/jagm.1998.0985187 . Although saving small
subarrays until the end makes sense from an instruction count perspective, it is exactly
the wrong thing to do from a cache performance perspective.
23. Umut A. Acar, Guy E Blelloch, Margaret Reid-Miller, and Kanat Tangwongsan,
Quicksort and Sorting Lower Bounds188 , Parallel and Sequential Data Structures and
Algorithms. 2013.
24. B, C (2012). ”Q P  P S”189 . Dr.
Dobb's.
25. M, R; B, L (2000). Algorithms sequential & parallel: a
unified approach190 . P H. ISBN191 978-0-13-086373-7192 . R 27
N 2012.
26. P, D M. W. (1991). Parallelized Quicksort and Radixsort with Op-
timal Speedup. Proc. Int'l Conf. on Parallel Computing Technologies. Cite-
SeerX193 10.1.1.57.9071194 .
27. The other one may either have 1 element or be empty (have 0 elements), depending
on whether the pivot is included in one of subpartitions, as in the Hoare's partitioning
routine, or is excluded from both of them, like in the Lomuto's routine.
28. E, S; WSS, A (7–8 J 2019). Worst-Case Ef-
ficient Sorting with QuickMergesort. ALENEX 2019: 21st Workshop on Al-
gorithm Engineering and Experiments. San Diego. arXiv195 :1811.99833196 .
doi :10.1137/1.9781611975499.1 . ISBN 978-1-61197-549-9200 . on small in-
197 198 199

stances Heapsort is already considerably slower than Quicksort (in our experiments
more than 30% for n = 210 ) and on larger instances it suffers from its poor cache

178 http://www.ugrad.cs.ubc.ca/~cs260/chnotes/ch6/Ch6CovCompiled.html
180 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
181 https://en.wikipedia.org/wiki/Communications_of_the_ACM
182 https://en.wikipedia.org/wiki/Doi_(identifier)
183 https://doi.org/10.1145%2F359619.359631
184 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
185 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.1788
186 https://en.wikipedia.org/wiki/Doi_(identifier)
187 https://doi.org/10.1006%2Fjagm.1998.0985
188 https://www.cs.cmu.edu/afs/cs/academic/class/15210-s13/www/lectures/lecture19.pdf
189 http://www.drdobbs.com/parallel/quicksort-partition-via-prefix-scan/240003109
190 https://books.google.com/books?id=dZoZAQAAIAAJ
191 https://en.wikipedia.org/wiki/ISBN_(identifier)
192 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-086373-7
193 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
194 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.9071
195 https://en.wikipedia.org/wiki/ArXiv_(identifier)
196 http://arxiv.org/abs/1811.99833
197 https://en.wikipedia.org/wiki/Doi_(identifier)
198 https://doi.org/10.1137%2F1.9781611975499.1
199 https://en.wikipedia.org/wiki/ISBN_(identifier)
200 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61197-549-9


behavior (in our experiments more than eight times slower than Quicksort for sorting
2^28 elements).
29. H, P (2004). ”S ”201 . ... R-
 26 A 2010.
30. MK, D (D 2005). ”H, Q,  E”202 .
A203     1 A 2009. R 20 D
2019.
31. W, S; N, M E. (2012). Average case analysis of Java 7's
dual pivot quicksort. European Symposium on Algorithms. arXiv204 :1310.7409205 .
Bibcode206 :2013arXiv1310.7409W207 .
32. ”A”208 . Java Platform SE 7. Oracle. Retrieved 4 September 2014.
33. W, S (3 N 2015). ”W I D-P Q F?”.
X209 :1511.01138210 [.DS211 ].
34. K, S; L-O, A; Q, A;
M, J. I (2014). Multi-Pivot Quicksort: Theory and Experiments.
Proc. Workshop on Algorithm Engineering and Experiments (ALENEX).
doi212 :10.1137/1.9781611973198.6213 .
35. K, S; L-O, A; M, J. I; Q, A
(7 F 2014). Multi-Pivot Quicksort: Theory and Experiments214 (PDF) (S-
 ). W, O215 .
36. David M. W. Powers, Parallel Unification: Practical Complexity216 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
37. K, K; S, P (11–13 S 2006). How Branch Mis-
predictions Affect Quicksort217 (PDF). ESA 2006: 14 A E S-
  A. Z218 . 219 :10.1007/11841036_69220 .

201 http://www.azillionmonkeys.com/qed/sort.html
202 http://www.inference.org.uk/mackay/sorting/sorting.html
https://web.archive.org/web/20090401163041/http://users.aims.ac.za/~mackay/sorting/
203
sorting.html
204 https://en.wikipedia.org/wiki/ArXiv_(identifier)
205 http://arxiv.org/abs/1310.7409
206 https://en.wikipedia.org/wiki/Bibcode_(identifier)
207 https://ui.adsabs.harvard.edu/abs/2013arXiv1310.7409W
208 http://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort%28byte%5B%5D%29
209 https://en.wikipedia.org/wiki/ArXiv_(identifier)
210 http://arxiv.org/abs/1511.01138
211 http://arxiv.org/archive/cs.DS
212 https://en.wikipedia.org/wiki/Doi_(identifier)
213 https://doi.org/10.1137%2F1.9781611973198.6
https://lusy.fri.uni-lj.si/sites/lusy.fri.uni-lj.si/files/publications/alopez2014-
214
seminar-qsort.pdf
215 https://en.wikipedia.org/wiki/Waterloo,_Ontario
216 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://www.cs.auckland.ac.nz/~mcw/Teaching/refs/sorting/quicksort-branch-prediction.
217
pdf
218 https://en.wikipedia.org/wiki/Zurich
219 https://en.wikipedia.org/wiki/Doi_(identifier)
220 https://doi.org/10.1007%2F11841036_69


38. E, S; WSS, A (22 A 2016). ”BQ: H
B M '  Q”. X221 :1604.06697222
[.DS223 ].
39. Richard Cole, David C. Kandathil: ”The average case analysis of Partition sorts”224 ,
European Symposium on Algorithms, 14–17 September 2004, Bergen, Norway. Pub-
lished: Lecture Notes in Computer Science 3221, Springer Verlag, pp. 240–251.

7.7 References
• S, R.225 (1978). ”I Q ”. Comm. ACM226 .
21 (10): 847–857. doi227 :10.1145/359619.359631228 .
• D, B. C. (2006). ”A       -
 '  ' ”. Discrete Applied Mathematics. 154: 1–5.
doi229 :10.1016/j.dam.2005.07.005230 .
• H, C. A. R.231 (1961). ”A 63: P”. Comm. ACM232 . 4 (7):
321. doi233 :10.1145/366622.366642234 .
• H, C. A. R.235 (1961). ”A 65: F”. Comm. ACM236 . 4 (7): 321–322.
doi237 :10.1145/366622.366647238 .
• H, C. A. R.239 (1962). ”Q”. Comput. J.240 5 (1): 10–16.
doi241 :10.1093/comjnl/5.1.10242 . (Reprinted in Hoare and Jones: Essays in computing
science243 , 1989.)

221 https://en.wikipedia.org/wiki/ArXiv_(identifier)
222 http://arxiv.org/abs/1604.06697
223 http://arxiv.org/archive/cs.DS
224 http://www.cs.nyu.edu/cole/papers/part-sort.pdf
225 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
226 https://en.wikipedia.org/wiki/Communications_of_the_ACM
227 https://en.wikipedia.org/wiki/Doi_(identifier)
228 https://doi.org/10.1145%2F359619.359631
229 https://en.wikipedia.org/wiki/Doi_(identifier)
230 https://doi.org/10.1016%2Fj.dam.2005.07.005
231 https://en.wikipedia.org/wiki/Tony_Hoare
232 https://en.wikipedia.org/wiki/Communications_of_the_ACM
233 https://en.wikipedia.org/wiki/Doi_(identifier)
234 https://doi.org/10.1145%2F366622.366642
235 https://en.wikipedia.org/wiki/Tony_Hoare
236 https://en.wikipedia.org/wiki/Communications_of_the_ACM
237 https://en.wikipedia.org/wiki/Doi_(identifier)
238 https://doi.org/10.1145%2F366622.366647
239 https://en.wikipedia.org/wiki/Tony_Hoare
240 https://en.wikipedia.org/wiki/The_Computer_Journal
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.1093%2Fcomjnl%2F5.1.10
243 http://portal.acm.org/citation.cfm?id=SERIES11430.63445


• M, D R.244 (1997). ”I S  S


A” . 245 Software: Practice and Experience. 27 (8): 983–993.
doi246 :10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#247 .
• Donald Knuth248 . The Art of Computer Programming, Volume 3: Sorting and Search-
ing, Third Edition. Addison-Wesley, 1997. ISBN249 0-201-89685-0250 . Pages 113–122 of
section 5.2.2: Sorting by Exchanging.
• Thomas H. Cormen251 , Charles E. Leiserson252 , Ronald L. Rivest253 , and Clifford Stein254 .
Introduction to Algorithms255 , Second Edition. MIT Press256 and McGraw-Hill257 , 2001.
ISBN258 0-262-03293-7259 . Chapter 7: Quicksort, pp. 145–164.
• Faron Moller260 . Analysis of Quicksort261 . CS 332: Designing Algorithms. Department
of Computer Science, Swansea University262 .
• M, C.; R, S. (2001). ”O S S  Q-
  Q”. SIAM J. Comput.263 31 (3): 683–705. Cite-
SeerX 10.1.1.17.4954 . doi :10.1137/S0097539700382108267 .
264 265 266

• B, J. L.; MI, M. D. (1993). ”E   ”. Soft-


ware: Practice and Experience. 23 (11): 1249–1265. CiteSeerX268 10.1.1.14.8162269 .
doi270 :10.1002/spe.4380231105271 .

7.8 External links

244 https://en.wikipedia.org/wiki/David_Musser
245 http://www.cs.rpi.edu/~musser/gp/introsort.ps
246 https://en.wikipedia.org/wiki/Doi_(identifier)
https://doi.org/10.1002%2F%28SICI%291097-024X%28199708%2927%3A8%3C983%3A%3AAID-
247
SPE117%3E3.0.CO%3B2-%23
248 https://en.wikipedia.org/wiki/Donald_Knuth
249 https://en.wikipedia.org/wiki/ISBN_(identifier)
250 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
251 https://en.wikipedia.org/wiki/Thomas_H._Cormen
252 https://en.wikipedia.org/wiki/Charles_E._Leiserson
253 https://en.wikipedia.org/wiki/Ronald_L._Rivest
254 https://en.wikipedia.org/wiki/Clifford_Stein
255 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
256 https://en.wikipedia.org/wiki/MIT_Press
257 https://en.wikipedia.org/wiki/McGraw-Hill
258 https://en.wikipedia.org/wiki/ISBN_(identifier)
259 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
260 https://en.wikipedia.org/wiki/Faron_Moller
261 http://www.cs.swan.ac.uk/~csfm/Courses/CS_332/quicksort.pdf
262 https://en.wikipedia.org/wiki/Swansea_University
263 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
264 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
265 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.4954
266 https://en.wikipedia.org/wiki/Doi_(identifier)
267 https://doi.org/10.1137%2FS0097539700382108
268 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
269 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
270 https://en.wikipedia.org/wiki/Doi_(identifier)
271 https://doi.org/10.1002%2Fspe.4380231105


The Wikibook Algorithm implementation272 has a page on the topic of: Quick-
sort273

• ”A S A: Q S”274 . A   


 2 M 2015. R 25 N 2008.CS1 maint: BOT: original-url status
unknown (link275 ) – graphical demonstration
• ”A S A: Q S (3- )”276 . A
    6 M 2015. R 25 N 2008.CS1 maint:
BOT: original-url status unknown (link277 )
• Open Data Structures – Section 11.1.2 – Quicksort278 , Pat Morin279
• Interactive illustration of Quicksort280 , with code walkthrough


272 https://en.wikibooks.org/wiki/Algorithm_implementation
273 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Quicksort
https://web.archive.org/web/20150302145415/http://www.sorting-algorithms.com/quick-
274
sort
275 https://en.wikipedia.org/wiki/Category:CS1_maint:_BOT:_original-url_status_unknown
https://web.archive.org/web/20150306071949/http://www.sorting-algorithms.com/quick-
276
sort-3-way
277 https://en.wikipedia.org/wiki/Category:CS1_maint:_BOT:_original-url_status_unknown
http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_
278
Sorti.html#SECTION001412000000000000000
279 https://en.wikipedia.org/wiki/Pat_Morin
https://web.archive.org/web/20180629183103/http://www.tomgsmith.com/quicksort/
280
content/illustration/

8 Heapsort

A sorting algorithm which uses the heap data structure

Heapsort
(Animation caption: A run of heapsort sorting an array of randomly permuted values. In the first stage of the algorithm the array elements are reordered to satisfy the heap property. Before the actual sorting takes place, the heap tree structure is shown briefly for illustration.)
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) (distinct keys) or O(n) (equal keys)
Average performance: O(n log n)
Worst-case space complexity: O(n) total, O(1) auxiliary

In computer science1 , heapsort is a comparison-based2 sorting algorithm3 . Heapsort can be
thought of as an improved selection sort4 : like selection sort, heapsort divides its input into
a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting
the largest element from it and inserting it into the sorted region. Unlike selection sort,
heapsort does not waste time with a linear-time scan of the unsorted region; rather, heap
sort maintains the unsorted region in a heap5 data structure to more quickly find the largest
element in each step.[1]
Although somewhat slower in practice on most machines than a well-implemented quick-
sort6 , it has the advantage of a more favorable worst-case O(n log n)7 runtime. Heapsort
is an in-place algorithm8 , but it is not a stable sort9 .

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Comparison_sort
3 https://en.wikipedia.org/wiki/Sorting_algorithm
4 https://en.wikipedia.org/wiki/Selection_sort
5 https://en.wikipedia.org/wiki/Heap_(data_structure)
6 https://en.wikipedia.org/wiki/Quicksort
7 https://en.wikipedia.org/wiki/Big_O_notation
8 https://en.wikipedia.org/wiki/In-place_algorithm
9 https://en.wikipedia.org/wiki/Stable_sort


Heapsort was invented by J. W. J. Williams10 in 1964.[2] This was also the birth of the
heap, presented already by Williams as a useful data structure in its own right.[3] In the
same year, R. W. Floyd11 published an improved version that could sort an array in-place,
continuing his earlier research into the treesort12 algorithm.[3]

8.1 Overview

The heapsort algorithm can be divided into two parts.


In the first step, a heap13 is built out of the data (see Binary heap § Building a heap14 ).
The heap is often placed in an array with the layout of a complete binary tree15 . The
complete binary tree maps the binary tree structure into the array indices; each array index
represents a node; the index of the node's parent, left child branch, or right child branch
are simple expressions. For a zero-based array, the root node is stored at index 0; if i is the
index of the current node, then
iParent(i) = floor((i-1) / 2), where the floor function maps a real number to
the largest integer less than or equal to it.
iLeftChild(i) = 2*i + 1
iRightChild(i) = 2*i + 2

In the second step, a sorted array is created by repeatedly removing the largest element
from the heap (the root of the heap), and inserting it into the array. The heap is updated
after each removal to maintain the heap property. Once all objects have been removed from
the heap, the result is a sorted array.
Heapsort can be performed in place. The array can be split into two parts, the sorted array
and the heap. The storage of heaps as arrays is diagrammed here16 . The heap's invariant
is preserved after each extraction, so the only cost is that of extraction.

8.2 Algorithm

The Heapsort algorithm involves preparing the list by first turning it into a max heap17 .
The algorithm then repeatedly swaps the first value of the list with the last value, decreasing
the range of values considered in the heap operation by one, and sifting the new first value
into its position in the heap. This repeats until the range of considered values is one value
in length.
The steps are:

10 https://en.wikipedia.org/wiki/J._W._J._Williams
11 https://en.wikipedia.org/wiki/Robert_Floyd
12 https://en.wikipedia.org/wiki/Treesort
13 https://en.wikipedia.org/wiki/Heap_(data_structure)
14 https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap
15 https://en.wikipedia.org/wiki/Binary_tree#Types_of_binary_trees
16 https://en.wikipedia.org/wiki/Binary_heap#Heap_implementation
17 https://en.wikipedia.org/wiki/Binary_heap


1. Call the buildMaxHeap() function on the list. Also referred to as heapify(), this builds
a heap from a list in O(n) operations.
2. Swap the first element of the list with the final element. Decrease the considered range
of the list by one.
3. Call the siftDown() function on the list to sift the new first element to its appropriate
index in the heap.
4. Go to step (2) unless the considered range of the list is one element.
The buildMaxHeap() operation is run once, and is O(n) in performance. The siftDown()
function is O(log n), and is called n times. Therefore, the performance of this algorithm is
O(n + n log n) = O(n log n).

8.2.1 Pseudocode

The following is a simple way to implement the algorithm in pseudocode18 . Arrays are
zero-based19 and swap is used to exchange two elements of the array. Movement 'down'
means from the root towards the leaves, or from lower indices to higher. Note that during
the sort, the largest element is at the root of the heap at a[0], while at the end of the sort,
the largest element is in a[end].
procedure heapsort(a, count) is
    input: an unordered array a of length count

    (Build the heap in array a so that largest value is at the root)
    heapify(a, count)

    (The following loop maintains the invariants20 that a[0:end] is a heap and every element
    beyond end is greater than everything before it (so a[end:count] is in sorted order))
    end ← count - 1
    while end > 0 do
        (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
        swap(a[end], a[0])
        (the heap size is reduced by one)
        end ← end - 1
        (the swap ruined the heap property, so restore it)
        siftDown(a, 0, end)

The sorting routine uses two subroutines, heapify and siftDown. The former is the com-
mon in-place heap construction routine, while the latter is a common subroutine for imple-
menting heapify.
(Put elements of 'a' in heap order, in-place)
procedure heapify(a, count) is
    (start is assigned the index in 'a' of the last parent node)
    (the last element in a 0-based array is at index count-1; find the parent of that element)
    start ← iParent(count-1)

    while start ≥ 0 do
        (sift down the node at index 'start' to the proper place such that all nodes below
        the start index are in heap order)
        siftDown(a, start, count - 1)
        (go to the next parent node)
        start ← start - 1

18 https://en.wikipedia.org/wiki/Pseudocode
19 https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(array)
20 https://en.wikipedia.org/wiki/Loop_invariant


(after sifting down the root all nodes/elements are in heap order)

(Repair the heap whose root element is at index 'start', assuming the heaps rooted at its children are valid)
procedure siftDown(a, start, end) is
    root ← start

    while iLeftChild(root) ≤ end do    (While the root has at least one child)
        child ← iLeftChild(root)       (Left child of root)
        swap ← root                    (Keeps track of child to swap with)

        if a[swap] < a[child] then
            swap ← child
        (If there is a right child and that child is greater)
        if child+1 ≤ end and a[swap] < a[child+1] then
            swap ← child + 1
        if swap = root then
            (The root holds the largest element. Since we assume the heaps rooted at the
            children are valid, this means that we are done.)
            return
        else
            swap(a[root], a[swap])
            root ← swap    (repeat to continue sifting down the child now)
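The three procedures translate almost line for line into Python. The following is a minimal sketch (zero-based list, sorted in place), not a tuned implementation:

```python
def heapsort(a):
    count = len(a)
    heapify(a, count)
    end = count - 1
    while end > 0:
        a[end], a[0] = a[0], a[end]   # move the current maximum behind the heap
        end -= 1
        sift_down(a, 0, end)          # restore the heap property

def heapify(a, count):
    # sift down every parent node, last parent first (Floyd's construction)
    start = (count - 2) // 2          # iParent(count - 1)
    while start >= 0:
        sift_down(a, start, count - 1)
        start -= 1

def sift_down(a, start, end):
    root = start
    while 2 * root + 1 <= end:        # while the root has at least one child
        child = 2 * root + 1          # left child
        swap = root                   # index of the largest of the three
        if a[swap] < a[child]:
            swap = child
        if child + 1 <= end and a[swap] < a[child + 1]:
            swap = child + 1
        if swap == root:
            return                    # heap property already holds here
        a[root], a[swap] = a[swap], a[root]
        root = swap
```

For example, heapsort([6, 5, 3, 1, 8, 7, 2, 4]) rearranges the list into [1, 2, 3, 4, 5, 6, 7, 8] in place.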

The heapify procedure can be thought of as building a heap from the bottom up by suc-
cessively sifting downward to establish the heap property21 . An alternative version (shown
below) that builds the heap top-down and sifts upward may be simpler to understand. This
siftUp version can be visualized as starting with an empty heap and successively inserting
elements, whereas the siftDown version given above treats the entire input array as a full
but ”broken” heap and ”repairs” it starting from the last non-trivial sub-heap (that is, the
last parent node).

Figure 24 Difference in time complexity between the ”siftDown” version and the
”siftUp” version.

Also, the siftDown version of heapify has O(n) time complexity22 , while the siftUp version
given below has O(n log n) time complexity due to its equivalence with inserting each
element, one at a time, into an empty heap.[4] This may seem counter-intuitive since, at a
glance, it is apparent that the former only makes half as many calls to its logarithmic-time

21 https://en.wikipedia.org/wiki/Heap_(data_structure)
22 https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap


sifting function as the latter; i.e., they seem to differ only by a constant factor, which never
affects asymptotic analysis.
To grasp the intuition behind this difference in complexity, note that the number of swaps
that may occur during any one siftUp call increases with the depth of the node on which the
call is made. The crux is that there are many (exponentially many) more ”deep” nodes than
there are ”shallow” nodes in a heap, so that siftUp may have its full logarithmic running-time
on the approximately linear number of calls made on the nodes at or near the ”bottom” of
the heap. On the other hand, the number of swaps that may occur during any one siftDown
call decreases as the depth of the node on which the call is made increases. Thus, when
the siftDown heapify begins and is calling siftDown on the bottom and most numerous
node-layers, each sifting call will incur, at most, a number of swaps equal to the ”height”
(from the bottom of the heap) of the node on which the sifting call is made. In other words,
about half the calls to siftDown will have at most only one swap, then about a quarter of
the calls will have at most two swaps, etc.
The heapsort algorithm itself has O(n log n) time complexity using either version of heapify.
procedure heapify(a,count) is
    (end is assigned the index of the first (left) child of the root)
    end := 1

    while end < count
        (sift up the node at index end to the proper place such that all nodes above
        the end index are in heap order)
        siftUp(a, 0, end)
        end := end + 1
    (after sifting up the last node all nodes are in heap order)

procedure siftUp(a, start, end) is
    input: start represents the limit of how far up the heap to sift.
           end is the node to sift up.

    child := end
    while child > start
        parent := iParent(child)
        if a[parent] < a[child] then (out of max-heap order)
            swap(a[parent], a[child])
            child := parent (repeat to continue sifting up the parent now)
        else
            return
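The difference in cost between the two heapify versions can be observed directly by counting comparisons. The sketch below (illustrative, not part of the article) builds a max-heap both ways from already-ascending input, a worst case for the siftUp version, since every newly considered element must climb all the way to the root:

```python
def heapify_siftdown(a):
    """Bottom-up (Floyd-style) construction; O(n) comparisons in total."""
    cmps = 0
    n = len(a)
    for root in range((n - 2) // 2, -1, -1):   # last parent down to the root
        i = root
        while 2 * i + 1 < n:                   # while node i has a left child
            child = 2 * i + 1
            if child + 1 < n:
                cmps += 1                      # compare the two children
                if a[child] < a[child + 1]:
                    child += 1
            cmps += 1                          # compare node with larger child
            if a[i] >= a[child]:
                break
            a[i], a[child] = a[child], a[i]
            i = child
    return cmps

def heapify_siftup(a):
    """Top-down construction by repeated insertion; O(n log n) worst case."""
    cmps = 0
    for end in range(1, len(a)):
        child = end
        while child > 0:
            parent = (child - 1) // 2
            cmps += 1                          # compare child with its parent
            if a[parent] >= a[child]:
                break
            a[parent], a[child] = a[child], a[parent]
            child = parent
    return cmps
```

On ascending input of length 1024 the siftDown version stays near its linear comparison bound, while the siftUp version performs roughly n log2 n comparisons; the gap widens as n grows.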

8.3 Variations

8.3.1 Floyd's heap construction

The most important variation to the basic algorithm, which is included in all practical
implementations, is a heap-construction algorithm by Floyd which runs in O(n) time and
uses siftdown23 rather than siftup24 , avoiding the need to implement siftup at all.
Rather than starting with a trivial heap and repeatedly adding leaves, Floyd's algorithm
starts with the leaves, observing that they are trivial but valid heaps by themselves, and

23 https://en.wikipedia.org/wiki/Binary_heap#Extract
24 https://en.wikipedia.org/wiki/Binary_heap#Insert


then adds parents. Starting with element n/2 and working backwards, each internal node
is made the root of a valid heap by sifting down. The last step is sifting down the first
element, after which the entire array obeys the heap property.
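Floyd's linear-time construction is what standard libraries typically use. Python's heapq.heapify, for instance, is documented to transform a list into a heap in place in linear time (building a min-heap rather than the max-heap used above):

```python
import heapq

data = [6, 5, 3, 1, 8, 7, 2, 4]
heapq.heapify(data)   # bottom-up construction, O(n)

# data now satisfies the min-heap invariant: each parent <= its children
assert all(data[(i - 1) // 2] <= data[i] for i in range(1, len(data)))
print(data[0])        # 1, the minimum, is at the root
```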
The worst-case number of comparisons during Floyd's heap-construction phase of heapsort
is known to be equal to 2n − 2s2(n) − e2(n), where s2(n) is the number of 1 bits in the
binary representation of n and e2(n) is the number of trailing 0 bits.[5]
The standard implementation of Floyd's heap-construction algorithm causes a large num-
ber of cache misses25 once the size of the data exceeds that of the CPU cache26 . Much
better performance on large data sets can be obtained by merging in depth-first27 order,
combining subheaps as soon as possible, rather than combining all subheaps on one level
before proceeding to the one above.[6][7]

8.3.2 Bottom-up heapsort

Bottom-up heapsort is a variant which reduces the number of comparisons required by a
significant factor. While ordinary heapsort requires 2n log2 n + O(n) comparisons worst-case
and on average,[8] the bottom-up variant requires n log2 n + O(1) comparisons on average,[8]
and 1.5n log2 n + O(n) in the worst case.[9]
If comparisons are cheap (e.g. integer keys) then the difference is unimportant,[10] as top-
down heapsort compares values that have already been loaded from memory. If, however,
comparisons require a function call28 or other complex logic, then bottom-up heapsort is
advantageous.
This is accomplished by improving the siftDown procedure. The change improves the
linear-time heap-building phase somewhat,[11] but is more significant in the second phase.
Like ordinary heapsort, each iteration of the second phase extracts the top of the heap, a[0],
and fills the gap it leaves with a[end], then sifts this latter element down the heap. But this
element comes from the lowest level of the heap, meaning it is one of the smallest elements
in the heap, so the sift-down will likely take many steps to move it back down. In ordinary
heapsort, each step of the sift-down requires two comparisons, to find the maximum of three
elements: the new node and its two children.
Bottom-up heapsort instead finds the path of largest children to the leaf level of the tree
(as if it were inserting −∞) using only one comparison per level. Put another way, it finds
a leaf which has the property that it and all of its ancestors are greater than or equal to
their siblings. (In the absence of equal keys, this leaf is unique.) Then, from this leaf, it
searches upward (using one comparison per level) for the correct position in that path to
insert a[end]. This is the same location as ordinary heapsort finds, and requires the same
number of exchanges to perform the insert, but fewer comparisons are required to find that
location.[9]

25 https://en.wikipedia.org/wiki/Cache_miss
26 https://en.wikipedia.org/wiki/CPU_cache
27 https://en.wikipedia.org/wiki/Depth-first
28 https://en.wikipedia.org/wiki/Function_call


Because it goes all the way to the bottom and then comes back up, it is called heapsort
with bounce by some authors.[12]
function leafSearch(a, i, end) is
    j ← i
    while iRightChild(j) ≤ end do
        (Determine which of j's two children is the greater)
        if a[iRightChild(j)] > a[iLeftChild(j)] then
            j ← iRightChild(j)
        else
            j ← iLeftChild(j)
    (At the last level, there might be only one child)
    if iLeftChild(j) ≤ end then
        j ← iLeftChild(j)
    return j

The return value of the leafSearch is used in the modified siftDown routine:[9]
procedure siftDown(a, i, end) is
    j ← leafSearch(a, i, end)
    while a[i] > a[j] do
        j ← iParent(j)
    x ← a[j]
    a[j] ← a[i]
    while j > i do
        swap x, a[iParent(j)]
        j ← iParent(j)
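Put together, a complete bottom-up heapsort might look like the following Python sketch, a direct transcription of the two routines above with the iParent/iLeftChild/iRightChild arithmetic expanded inline (illustrative, not a tuned implementation):

```python
def leaf_search(a, i, end):
    # Descend along the path of larger children to a leaf.
    j = i
    while 2 * j + 2 <= end:            # while j has two children
        if a[2 * j + 2] > a[2 * j + 1]:
            j = 2 * j + 2              # right child is the greater
        else:
            j = 2 * j + 1
    if 2 * j + 1 <= end:               # at most one child on the last level
        j = 2 * j + 1
    return j

def sift_down(a, i, end):
    j = leaf_search(a, i, end)
    while a[i] > a[j]:                 # climb back to the insertion point
        j = (j - 1) // 2               # iParent(j)
    x, a[j] = a[j], a[i]               # place a[i]; x holds the displaced value
    while j > i:                       # rotate the path above j up one step
        p = (j - 1) // 2
        x, a[p] = a[p], x
        j = p

def bottom_up_heapsort(a):
    n = len(a)
    for start in range((n - 2) // 2, -1, -1):   # Floyd's heap construction
        sift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):             # extraction phase
        a[end], a[0] = a[0], a[end]
        sift_down(a, 0, end - 1)
```

For example, bottom_up_heapsort([6, 5, 3, 1, 8, 7, 2, 4]) sorts the list in place.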

Bottom-up heapsort was announced as beating quicksort (with median-of-three pivot selec-
tion) on arrays of size ≥16000.[8]
A 2008 re-evaluation of this algorithm showed it to be no faster than ordinary heapsort
for integer keys, presumably because modern branch prediction29 nullifies the cost of the
predictable comparisons which bottom-up heapsort manages to avoid.[10]
A further refinement does a binary search in the path to the selected leaf, and sorts in a worst
case of (n+1)(log2 (n+1) + log2 log2 (n+1) + 1.82) + O(log2 n) comparisons, approaching
the information-theoretic lower bound30 of n log2 n − 1.4427n comparisons.[13]
A variant which uses two extra bits per internal node (n−1 bits total for an n-element heap)
to cache information about which child is greater (two bits are required to store three cases:
left, right, and unknown)[11] uses less than n log2 n + 1.1n compares.[14]

8.3.3 Other variations


• Ternary heapsort[15] uses a ternary heap31 instead of a binary heap; that is, each element
in the heap has three children. It is more complicated to program, but does a constant
number of times fewer swap and comparison operations. This is because each sift-down
step in a ternary heap requires three comparisons and one swap, whereas in a binary
heap two comparisons and one swap are required. Two levels in a ternary heap cover
3² = 9 elements, doing more work with the same number of comparisons as three levels

29 https://en.wikipedia.org/wiki/Branch_prediction
https://en.wikipedia.org/wiki/Comparison_sort#Number_of_comparisons_required_to_sort_
30
a_list
31 https://en.wikipedia.org/wiki/Ternary_heap


in the binary heap, which only cover 2³ = 8.[citation needed] This is primarily of academic
interest, as the additional complexity is not worth the minor savings, and bottom-up
heapsort beats both.
• The smoothsort33 algorithm[16] is a variation of heapsort developed by Edsger Dijkstra34
in 1981. Like heapsort, smoothsort's upper bound is O(n log n)35 . The advantage of
smoothsort is that it comes closer to O(n) time if the input is already sorted to some
degree36, whereas heapsort averages O(n log n) regardless of the initial sorted state. Due
to its complexity, smoothsort is rarely used.[citation needed]
• Levcopoulos and Petersson[17] describe a variation of heapsort based on a heap of Carte-
sian trees38 . First, a Cartesian tree is built from the input in O(n) time, and its root is
placed in a 1-element binary heap. Then we repeatedly extract the minimum from the
binary heap, output the tree's root element, and add its left and right children (if any)
which are themselves Cartesian trees, to the binary heap.[18] As they show, if the input is
already nearly sorted, the Cartesian trees will be very unbalanced, with few nodes having
left and right children, resulting in the binary heap remaining small, and allowing the
algorithm to sort more quickly than O(n log n) for inputs that are already nearly sorted.
• Several variants such as weak heapsort39 require n log2 n+O(1) comparisons in the worst
case, close to the theoretical minimum, using one extra bit of state per node. While this
extra bit makes the algorithms not truly in-place, if space for it can be found inside the
element, these algorithms are simple and efficient,[6]:40 but still slower than binary heaps
if key comparisons are cheap enough (e.g. integer keys) that a constant factor does not
matter.[19]
• Katajainen's ”ultimate heapsort” requires no extra storage, performs n log2 n+O(1) com-
parisons, and a similar number of element moves.[20] It is, however, even more complex
and not justified unless comparisons are very expensive.

8.4 Comparison with other sorts

Heapsort primarily competes with quicksort40 , another very efficient general purpose nearly-
in-place comparison-based sort algorithm.
Quicksort is typically somewhat faster in practice, largely because of its better cache
behavior, but its worst-case running time is O(n²), which is unacceptable for large data
sets and can be deliberately
triggered given enough knowledge of the implementation, creating a security risk. See
quicksort41 for a detailed discussion of this problem and possible solutions.
Thus, because of the O(n log n) upper bound on heapsort's running time and constant upper
bound on its auxiliary storage, embedded systems with real-time constraints or systems
concerned with security often use heapsort; the Linux kernel is one example.[21]

33 https://en.wikipedia.org/wiki/Smoothsort
34 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
35 https://en.wikipedia.org/wiki/Big_O_notation
36 https://en.wikipedia.org/wiki/Adaptive_sort
38 https://en.wikipedia.org/wiki/Cartesian_tree
39 https://en.wikipedia.org/wiki/Weak_heap
40 https://en.wikipedia.org/wiki/Quicksort
41 https://en.wikipedia.org/wiki/Quicksort


Heapsort also competes with merge sort42 , which has the same time bounds. Merge sort
requires Ω(n) auxiliary space, but heapsort requires only a constant amount. Heapsort
typically runs faster in practice on machines with small or slow data caches43 , and does not
require as much external memory. On the other hand, merge sort has several advantages
over heapsort:
• Merge sort on arrays has considerably better data cache performance, often outperforming
heapsort on modern desktop computers because merge sort frequently accesses contiguous
memory locations (good locality of reference44 ); heapsort references are spread throughout
the heap.
• Heapsort is not a stable sort45 ; merge sort is stable.
• Merge sort parallelizes46 well and can achieve close to linear speedup47 with a trivial
implementation; heapsort is not an obvious candidate for a parallel algorithm.
• Merge sort can be adapted to operate on singly linked lists48 with O(1) extra space.
Heapsort can be adapted to operate on doubly linked lists with only O(1) extra space
overhead.[citation needed]
• Merge sort is used in external sorting50 ; heapsort is not. Locality of reference is the issue.
Introsort51 is an alternative to heapsort that combines quicksort and heapsort to retain
advantages of both: worst case speed of heapsort and average speed of quicksort.

8.5 Example

Let { 6, 5, 3, 1, 8, 7, 2, 4 } be the list that we want to sort from the smallest to the largest.
(Note, for the 'Build the heap' step: larger nodes do not stay below smaller parent nodes;
each is swapped with its parent, and the check is then repeated recursively up the tree, so
that larger numbers stay above smaller numbers in the heap's binary tree.)
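The build phase of this example can be checked mechanically. A quick sketch using siftDown-style heap construction (for this particular input, top-down insertion happens to produce the same heap):

```python
def build_max_heap(a):
    """Sift down every parent, last parent first (Floyd's construction)."""
    n = len(a)
    for root in range((n - 2) // 2, -1, -1):
        i = root
        while 2 * i + 1 < n:
            child = 2 * i + 1
            if child + 1 < n and a[child + 1] > a[child]:
                child += 1                   # pick the larger child
            if a[i] >= a[child]:
                break                        # heap property holds here
            a[i], a[child] = a[child], a[i]
            i = child

data = [6, 5, 3, 1, 8, 7, 2, 4]
build_max_heap(data)
print(data)   # [8, 6, 7, 4, 5, 3, 2, 1], the heap shown in the table
```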

42 https://en.wikipedia.org/wiki/Merge_sort
43 https://en.wikipedia.org/wiki/Data_cache
44 https://en.wikipedia.org/wiki/Locality_of_reference
45 https://en.wikipedia.org/wiki/Stable_sort
46 https://en.wikipedia.org/wiki/Parallel_algorithm
47 https://en.wikipedia.org/wiki/Linear_speedup
48 https://en.wikipedia.org/wiki/Linked_list
50 https://en.wikipedia.org/wiki/External_sorting
51 https://en.wikipedia.org/wiki/Introsort


Figure 25 An example on heapsort.

1. Build the heap

Heap                   | newly added element | swap elements
-----------------------|---------------------|--------------
null                   | 6                   |
6                      | 5                   |
6, 5                   | 3                   |
6, 5, 3                | 1                   |
6, 5, 3, 1             | 8                   |
6, 5, 3, 1, 8          |                     | 5, 8
6, 8, 3, 1, 5          |                     | 6, 8
8, 6, 3, 1, 5          | 7                   |
8, 6, 3, 1, 5, 7       |                     | 3, 7
8, 6, 7, 1, 5, 3       | 2                   |
8, 6, 7, 1, 5, 3, 2    | 4                   |
8, 6, 7, 1, 5, 3, 2, 4 |                     | 1, 4
8, 6, 7, 4, 5, 3, 2, 1 |                     |


2. Sorting

Heap                   | swap elements | delete element | sorted array           | details
-----------------------|---------------|----------------|------------------------|--------------------------------------------------
8, 6, 7, 4, 5, 3, 2, 1 | 8, 1          |                |                        | swap 8 and 1 in order to delete 8 from heap
1, 6, 7, 4, 5, 3, 2, 8 |               | 8              |                        | delete 8 from heap and add to sorted array
1, 6, 7, 4, 5, 3, 2    | 1, 7          |                | 8                      | swap 1 and 7 as they are not in order in the heap
7, 6, 1, 4, 5, 3, 2    | 1, 3          |                | 8                      | swap 1 and 3 as they are not in order in the heap
7, 6, 3, 4, 5, 1, 2    | 7, 2          |                | 8                      | swap 7 and 2 in order to delete 7 from heap
2, 6, 3, 4, 5, 1, 7    |               | 7              | 8                      | delete 7 from heap and add to sorted array
2, 6, 3, 4, 5, 1       | 2, 6          |                | 7, 8                   | swap 2 and 6 as they are not in order in the heap
6, 2, 3, 4, 5, 1       | 2, 5          |                | 7, 8                   | swap 2 and 5 as they are not in order in the heap
6, 5, 3, 4, 2, 1       | 6, 1          |                | 7, 8                   | swap 6 and 1 in order to delete 6 from heap
1, 5, 3, 4, 2, 6       |               | 6              | 7, 8                   | delete 6 from heap and add to sorted array
1, 5, 3, 4, 2          | 1, 5          |                | 6, 7, 8                | swap 1 and 5 as they are not in order in the heap
5, 1, 3, 4, 2          | 1, 4          |                | 6, 7, 8                | swap 1 and 4 as they are not in order in the heap
5, 4, 3, 1, 2          | 5, 2          |                | 6, 7, 8                | swap 5 and 2 in order to delete 5 from heap
2, 4, 3, 1, 5          |               | 5              | 6, 7, 8                | delete 5 from heap and add to sorted array
2, 4, 3, 1             | 2, 4          |                | 5, 6, 7, 8             | swap 2 and 4 as they are not in order in the heap
4, 2, 3, 1             | 4, 1          |                | 5, 6, 7, 8             | swap 4 and 1 in order to delete 4 from heap
1, 2, 3, 4             |               | 4              | 5, 6, 7, 8             | delete 4 from heap and add to sorted array
1, 2, 3                | 1, 3          |                | 4, 5, 6, 7, 8          | swap 1 and 3 as they are not in order in the heap
3, 2, 1                | 3, 1          |                | 4, 5, 6, 7, 8          | swap 3 and 1 in order to delete 3 from heap
1, 2, 3                |               | 3              | 4, 5, 6, 7, 8          | delete 3 from heap and add to sorted array
1, 2                   | 1, 2          |                | 3, 4, 5, 6, 7, 8       | swap 1 and 2 as they are not in order in the heap
2, 1                   | 2, 1          |                | 3, 4, 5, 6, 7, 8       | swap 2 and 1 in order to delete 2 from heap
1, 2                   |               | 2              | 3, 4, 5, 6, 7, 8       | delete 2 from heap and add to sorted array
1                      |               | 1              | 2, 3, 4, 5, 6, 7, 8    | delete 1 from heap and add to sorted array
                       |               |                | 1, 2, 3, 4, 5, 6, 7, 8 | completed

8.6 Notes
1. S, S52 (2008). ”S  S”. The Algorithm Design
Manual. Springer. p. 109. doi53 :10.1007/978-1-84800-070-4_454 . ISBN55 978-1-
84800-069-856 . [H]eapsort is nothing but an implementation of selection sort using
the right data structure.
2. Williams 196457
3. B, P (2008). Advanced Data Structures. Cambridge University Press.
p. 209. ISBN58 978-0-521-88037-459 .
4. ”P Q”60 . R 24 M 2011.
5. S, M A. (2012), ”E Y P W-C A-
  F' H-C P”, Fundamenta Informaticae61 ,
120 (1): 75–92, doi62 :10.3233/FI-2012-75163
6. B, J; K, J; S, M (2000). ”P
E C S: H C”64 (PS). ACM Jour-
nal of Experimental Algorithmics. 5 (15): 15–es. CiteSeerX65 10.1.1.35.324866 .
doi67 :10.1145/351827.38425768 . Alternate PDF source69 .
7. C, J; E, S; E, A; K, J (27–
31 A 2012). In-place Heap Construction with Optimized Comparisons, Moves,
and Cache Misses70 (PDF). 37    M-
 F  C S. B, S. . 259–270.

52 https://en.wikipedia.org/wiki/Steven_Skiena
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1007%2F978-1-84800-070-4_4
55 https://en.wikipedia.org/wiki/ISBN_(identifier)
56 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
57 #CITEREFWilliams1964
58 https://en.wikipedia.org/wiki/ISBN_(identifier)
59 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-88037-4
60 http://faculty.simpson.edu/lydia.sinapova/www/cmsc250/LN250_Weiss/L10-PQueues.htm
61 https://en.wikipedia.org/wiki/Fundamenta_Informaticae
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.3233%2FFI-2012-751
64 http://hjemmesider.diku.dk/~jyrki/Paper/katajain.ps
65 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
66 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3248
67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1145%2F351827.384257
https://www.semanticscholar.org/paper/Performance-Engineering-Case-Study-Heap-
69
Bojesen-Katajainen/6f4ada5912c1da64e16453d67ec99c970173fb5b
https://pdfs.semanticscholar.org/9cc6/36d7998d58b3937ba0098e971710ff039612.pdf#page=
70
11


71 :10.1007/978-3-642-32589-2_2572 . ISBN73 978-3-642-32588-574 . See particularly


Fig. 3.
8. W, I75 (13 S 1993). ”BOTTOM-UP HEAPSORT, 
   HEAPSORT ,   , QUICKSORT ( 
   )”76 (PDF). Theoretical Computer Science. 118 (1): 81–98.
doi77 :10.1016/0304-3975(93)90364-y78 . Although this is a reprint of work first pub-
lished in 1990 (at the Mathematical Foundations of Computer Science conference),
the technique was published by Carlsson in 1987.[13]
9. F, R (F 1994). ”A     
   B-U-H”79 (PDF). Algorithmica. 11 (2): 104–115.
doi80 :10.1007/bf0118277081 . hdl82 :11858/00-001M-0000-0014-7B02-C83 . Also avail-
able as
F, R (A 1991). A tight lower bound for the worst case of Bottom-
Up-Heapsort84 (PDF) (T ). MPI-INF85 . MPI-I-91-104.
10. M, K86 ; S, P87 (2008). ”P Q”88 (PDF). Al-
gorithms and Data Structures: The Basic Toolbox89 . S. . 142. ISBN90 978-
3-540-77977-391 .
11. MD, C.J.H.; R, B.A. (S 1989). ”B  ”92
(PDF). Journal of Algorithms. 10 (3): 352–365. doi93 :10.1016/0196-6774(89)90033-
394 .
12. M, B95 ; S, H D. (1991). ”8.6 H”. Algorithms
from P to NP Volume 1: Design and Efficiency. Benjamin/Cummings. p. 528.
ISBN96 0-8053-8008-697 . For lack of a better name we call this enhanced program
'heapsort with bounce.'

71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1007%2F978-3-642-32589-2_25
73 https://en.wikipedia.org/wiki/ISBN_(identifier)
74 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-32588-5
75 https://en.wikipedia.org/wiki/Ingo_Wegener
76 https://core.ac.uk/download/pdf/82350265.pdf
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.1016%2F0304-3975%2893%2990364-y
79 http://staff.gutech.edu.om/~rudolf/Paper/buh_algorithmica94.pdf
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1007%2Fbf01182770
82 https://en.wikipedia.org/wiki/Hdl_(identifier)
83 http://hdl.handle.net/11858%2F00-001M-0000-0014-7B02-C
http://pubman.mpdl.mpg.de/pubman/item/escidoc:1834997:3/component/escidoc:2463941/
84
MPI-I-94-104.pdf
85 https://en.wikipedia.org/wiki/Max_Planck_Institute_for_Informatics
86 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
87 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
88 http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/PriorityQueues.pdf#page=16
89 http://people.mpi-inf.mpg.de/~mehlhorn/Toolbox.html
90 https://en.wikipedia.org/wiki/ISBN_(identifier)
91 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-77977-3
92 http://cgm.cs.mcgill.ca/~breed/2016COMP610/BUILDINGHEAPSFAST.pdf
93 https://en.wikipedia.org/wiki/Doi_(identifier)
94 https://doi.org/10.1016%2F0196-6774%2889%2990033-3
95 https://en.wikipedia.org/wiki/Bernard_Moret
96 https://en.wikipedia.org/wiki/ISBN_(identifier)
97 https://en.wikipedia.org/wiki/Special:BookSources/0-8053-8008-6


13. C, S (M 1987). ”A      -
   ”98 (PDF). Information Processing Letters. 24 (4):
247–250. doi99 :10.1016/0020-0190(87)90142-6100 .
14. W, I101 (M 1992). ”T     M-
D  R'   BOTTOM-UP HEAPSORT   
n log n + 1.1n”. Information and Computation. 97 (1): 86–96. doi102 :10.1016/0890-
5401(92)90005-Z103 .
104 105 106
15. ”Data Structures Using Pascal”, 1991, page 405,[full citation needed ][author missing ][ISBN missing ]
gives a ternary heapsort as a student exercise. ”Write a sorting routine similar to the
heapsort except that it uses a ternary heap.”
16. D, E W.107 Smoothsort – an alternative to sorting in situ (EWD-
796a)108 (PDF). E.W. D A. C  A H, U-
  T  A109 . (transcription110 )
17. L, C; P, O (1989), ”H—A 
P F”, WADS '89: Proceedings of the Workshop on Algorithms and
Data Structures, Lecture Notes in Computer Science, 382, London, UK: Springer-
Verlag, pp. 499–509, doi111 :10.1007/3-540-51542-9_41112 , ISBN113 978-3-540-51542-
5114 Heapsort—Adapted for presorted files (Q56049336)115 .
18. S, K (27 D 2010). ”CTS.”116 . Archive
of Interesting Code. Retrieved 5 March 2019.
19. K, J (23 S 2013). Seeking for the best priority queue:
Lessons learnt117 . A E (S 13391). D. . 19–
20, 24.
20. K, J (2–3 F 1998). The Ultimate Heapsort118 . C-
:  4 A T S. Australian Computer Science
Communications. 20 (3). Perth. pp. 87–96.
21. 119 Linux kernel source

98 https://pdfs.semanticscholar.org/caec/6682ffd13c6367a8c51b566e2420246faca2.pdf
99 https://en.wikipedia.org/wiki/Doi_(identifier)
100 https://doi.org/10.1016%2F0020-0190%2887%2990142-6
101 https://en.wikipedia.org/wiki/Ingo_Wegener
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1016%2F0890-5401%2892%2990005-Z
107 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
108 http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF
109 https://en.wikipedia.org/wiki/University_of_Texas_at_Austin
110 http://www.cs.utexas.edu/users/EWD/transcriptions/EWD07xx/EWD796a.html
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1007%2F3-540-51542-9_41
113 https://en.wikipedia.org/wiki/ISBN_(identifier)
114 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-51542-5
115 https://www.wikidata.org/wiki/Special:EntityPage/Q56049336
116 http://www.keithschwarz.com/interesting/code/?dir=cartesian-tree-sort
117 http://hjemmesider.diku.dk/~jyrki/Myris/Kat2013-09-23P.html
118 http://hjemmesider.diku.dk/~jyrki/Myris/Kat1998C.html
119 https://github.com/torvalds/linux/blob/master/lib/sort.c


8.7 References
• W, J. W. J.120 (1964), ”A 232 - H”, Communications of the
ACM121 , 7 (6): 347–348, doi122 :10.1145/512274.512284123
• F, R W.124 (1964), ”A 245 - T 3”, Communications of
the ACM125 , 7 (12): 701, doi126 :10.1145/355588.365103127
• C, S128 (1987), ”A-   ”, BIT, 27 (1):
2–17, doi129 :10.1007/bf01937350130
• K, D131 (1997), ”§5.2.3, S  S”, Sorting and Search-
ing, The Art of Computer Programming132 , 3 (third ed.), Addison-Wesley, pp. 144–155,
ISBN133 978-0-201-89685-5134
• Thomas H. Cormen135 , Charles E. Leiserson136 , Ronald L. Rivest137 , and Clifford Stein138 .
Introduction to Algorithms139 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN140 0-262-03293-7141 . Chapters 6 and 7 Respectively: Heapsort and Priority Queues
• A PDF of Dijkstra's original paper on Smoothsort142
• Heaps and Heapsort Tutorial143 by David Carlson, St. Vincent College

8.8 External links

The Wikibook Algorithm implementation144 has a page on the topic of: Heapsort145

120 https://en.wikipedia.org/wiki/J._W._J._Williams
121 https://en.wikipedia.org/wiki/Communications_of_the_ACM
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1145%2F512274.512284
124 https://en.wikipedia.org/wiki/Robert_W._Floyd
125 https://en.wikipedia.org/wiki/Communications_of_the_ACM
126 https://en.wikipedia.org/wiki/Doi_(identifier)
127 https://doi.org/10.1145%2F355588.365103
128 https://sv.wikipedia.org/wiki/Svante_Carlsson
129 https://en.wikipedia.org/wiki/Doi_(identifier)
130 https://doi.org/10.1007%2Fbf01937350
131 https://en.wikipedia.org/wiki/Donald_Knuth
132 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
133 https://en.wikipedia.org/wiki/ISBN_(identifier)
134 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
135 https://en.wikipedia.org/wiki/Thomas_H._Cormen
136 https://en.wikipedia.org/wiki/Charles_E._Leiserson
137 https://en.wikipedia.org/wiki/Ronald_L._Rivest
138 https://en.wikipedia.org/wiki/Clifford_Stein
139 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
140 https://en.wikipedia.org/wiki/ISBN_(identifier)
141 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
142 http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF
143 http://cis.stvincent.edu/html/tutorials/swd/heaps/heaps.html
144 https://en.wikibooks.org/wiki/Algorithm_implementation
145 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Heapsort


• Animated Sorting Algorithms: Heap Sort146 at the Wayback Machine147 (archived 6
March 2015) – graphical demonstration
• Courseware on Heapsort from Univ. Oldenburg148 - With text, animations and interactive
exercises
• NIST's Dictionary of Algorithms and Data Structures: Heapsort149
• Heapsort implemented in 12 languages150
• Sorting revisited151 by Paul Hsieh
• A PowerPoint presentation demonstrating how heap sort works152, intended for educators.
• Open Data Structures - Section 11.1.3 - Heap-Sort153 , Pat Morin154


https://web.archive.org/web/20150306071556/http://www.sorting-algorithms.com/heap-
146
sort
147 https://en.wikipedia.org/wiki/Wayback_Machine
https://web.archive.org/web/20130326084250/http://olli.informatik.uni-oldenburg.de/
148
heapsort_SALA/english/start.html
149 https://xlinux.nist.gov/dads/HTML/heapSort.html
150 http://www.codecodex.com/wiki/Heapsort
151 http://www.azillionmonkeys.com/qed/sort.html
152 http://employees.oneonta.edu/zhangs/powerPointPlatform/index.php
http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_
153
Sorti.html#SECTION001413000000000000000
154 https://en.wikipedia.org/wiki/Pat_Morin

9 Bubble sort

Simple comparison sorting algorithm


Bubble sort
Static visualization of bubble sort[1]
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) comparisons, O(n²) swaps
Best-case performance: O(n) comparisons, O(1) swaps
Average performance: O(n²) comparisons, O(n²) swaps
Worst-case space complexity: O(n) total, O(1) auxiliary

Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm11 that
repeatedly steps through the list, compares adjacent elements and swaps12 them if they
are in the wrong order. The pass through the list is repeated until the list is sorted. The

11 https://en.wikipedia.org/wiki/Sorting_algorithm
12 https://en.wikipedia.org/wiki/Swap_(computer_science)


algorithm, which is a comparison sort13 , is named for the way smaller or larger elements
”bubble” to the top of the list.
This simple algorithm performs poorly in real-world use and is used primarily as an educational tool. More efficient algorithms such as timsort14 or merge sort15 are used by the sorting libraries built into popular programming languages such as Python and Java.[2][3]

9.1 Analysis

Figure 27 An example of bubble sort. Starting from the beginning of the list, compare every adjacent pair and swap their positions if they are not in the right order (the latter one is smaller than the former one). After each iteration, one fewer element (the last one) needs to be compared, until there are no more elements left to compare.

9.1.1 Performance

Bubble sort has a worst-case and average complexity of O16 (n²), where n is the number of items being sorted. Most practical sorting algorithms have substantially better worst-case or average complexity, often O(n log n). Even other O(n²) sorting algorithms, such as

13 https://en.wikipedia.org/wiki/Comparison_sort
14 https://en.wikipedia.org/wiki/Timsort
15 https://en.wikipedia.org/wiki/Merge_sort
16 https://en.wikipedia.org/wiki/Big_o_notation


insertion sort17 , generally run faster than bubble sort, and are no more complex. Therefore,
bubble sort is not a practical sorting algorithm.
The only significant advantage that bubble sort has over most other algorithms, even quicksort18 (though not insertion sort19 ), is that the ability to detect that the list is sorted is efficiently built into the algorithm. When the list is already sorted (best case), the complexity of bubble sort is only O(n). By contrast, most other algorithms, even those with better average-case complexity20 , perform their entire sorting process on the set and thus are more complex. However, not only does insertion sort21 share this advantage, but it also performs better on a list that is substantially sorted (having a small number of inversions22 ).
Bubble sort should be avoided in the case of large collections. It will not be efficient in the
case of a reverse-ordered collection.

9.1.2 Rabbits and turtles

The distance and direction that elements must move during the sort determine bubble sort's
performance because elements move in different directions at different speeds. An element
that must move toward the end of the list can move quickly because it can take part in
successive swaps. For example, the largest element in the list will win every swap, so it
moves to its sorted position on the first pass even if it starts near the beginning. On the
other hand, an element that must move toward the beginning of the list cannot move faster
than one step per pass, so elements move toward the beginning very slowly. If the smallest
element is at the end of the list, it will take n−1 passes to move it to the beginning. This
has led to these types of elements being named rabbits and turtles, respectively, after the
characters in Aesop's fable of The Tortoise and the Hare23 .
Various efforts have been made to eliminate turtles to improve upon the speed of bubble
sort. Cocktail sort24 is a bi-directional bubble sort that goes from beginning to end, and
then reverses itself, going end to beginning. It can move turtles fairly well, but it retains
O(n²)25 worst-case complexity. Comb sort26 compares elements separated by large gaps,
and can move turtles extremely quickly before proceeding to smaller and smaller gaps to
smooth out the list. Its average speed is comparable to faster algorithms like quicksort27 .
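To make the remedy concrete, the bi-directional pass of cocktail sort can be sketched in Python (an illustration added here, not part of the original article; the function name is ours):

```python
def cocktail_sort(a):
    """Bi-directional bubble sort: a forward pass bubbles the largest
    remaining element right, a backward pass bubbles the smallest
    remaining element left, so 'turtles' also move on every round."""
    start, end = 0, len(a) - 1
    swapped = True
    while swapped:
        swapped = False
        for i in range(start, end):              # forward: rabbits run right
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        end -= 1
        for i in range(end - 1, start - 1, -1):  # backward: turtles step left
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        start += 1
    return a
```

A turtle such as the 1 in [2, 3, 4, 5, 1] reaches the front after a single forward-and-backward round instead of needing n − 1 forward passes.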

17 https://en.wikipedia.org/wiki/Insertion_sort
18 https://en.wikipedia.org/wiki/Quicksort
19 https://en.wikipedia.org/wiki/Insertion_sort
20 https://en.wikipedia.org/wiki/Average-case_complexity
21 https://en.wikipedia.org/wiki/Insertion_sort
22 https://en.wikipedia.org/wiki/Inversion_(discrete_mathematics)
23 https://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare
24 https://en.wikipedia.org/wiki/Cocktail_sort
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Comb_sort
27 https://en.wikipedia.org/wiki/Quicksort


9.1.3 Step-by-step example

Take an array of numbers ”5 1 4 2 8”, and sort the array from lowest number to greatest number using bubble sort. In each step, elements written in bold are being compared. Three passes will be required:
First Pass
( 5 1 4 2 8 ) → ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements, and swaps them since 5 > 1.
( 1 5 4 2 8 ) → ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) → ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) → ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm does not swap them.
Second Pass
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )
( 1 4 2 5 8 ) → ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
Now, the array is already sorted, but the algorithm does not know if it is completed. The
algorithm needs one whole pass without any swap to know it is sorted.
Third Pass
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

9.2 Implementation

9.2.1 Pseudocode implementation

In pseudocode28 the algorithm can be expressed as (0-based array):

procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        swapped := false
        for i := 1 to n-1 inclusive do
            /* if this pair is out of order */
            if A[i-1] > A[i] then
28 https://en.wikipedia.org/wiki/Pseudocode


                /* swap them and remember something changed */
                swap(A[i-1], A[i])
                swapped := true
            end if
        end for
    until not swapped
end procedure
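As a cross-check, the same procedure can be transcribed into Python (a runnable sketch added to this book; the name bubble_sort is ours):

```python
def bubble_sort(a):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs,
    until one whole pass completes without a swap."""
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(1, n):
            # if this pair is out of order ...
            if a[i - 1] > a[i]:
                # ... swap them and remember something changed
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
    return a
```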

9.2.2 Optimizing bubble sort

The bubble sort algorithm can be optimized by observing that the n-th pass finds the n-th
largest element and puts it into its final place. So, the inner loop can avoid looking at the
last n − 1 items when running for the n-th time:

procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        swapped := false
        for i := 1 to n - 1 inclusive do
            if A[i - 1] > A[i] then
                swap(A[i - 1], A[i])
                swapped := true
            end if
        end for
        n := n - 1
    until not swapped
end procedure

More generally, it can happen that more than one element is placed in its final position on a single pass. In particular, after every pass, all elements after the last swap are sorted, and do not need to be checked again. This allows us to skip over many elements, resulting in about a worst-case 50% improvement in comparison count (though no improvement in swap counts), and adds very little complexity because the new code subsumes the ”swapped” variable. To accomplish this in pseudocode, the following can be written:
procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        newn := 0
        for i := 1 to n - 1 inclusive do
            if A[i - 1] > A[i] then
                swap(A[i - 1], A[i])
                newn := i
            end if
        end for
        n := newn
    until n ≤ 1
end procedure
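A Python rendering of this last variant (our sketch; the comparison counter is added only to make the saving observable):

```python
def bubble_sort_lastswap(a):
    """Bubble sort that remembers where the last swap happened: every
    element after that position is already in its final place, so later
    passes stop there. Returns (sorted list, number of comparisons)."""
    n = len(a)
    comparisons = 0
    while n > 1:
        newn = 0                  # index just past the last swap this pass
        for i in range(1, n):
            comparisons += 1
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                newn = i
        n = newn                  # skip the already-sorted tail
    return a, comparisons
```

On an already-sorted list the first pass makes no swap, so newn stays 0 and the sort stops after only n − 1 comparisons.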

Alternate modifications, such as the cocktail shaker sort29 attempt to improve on the bub-
ble sort performance while keeping the same idea of repeatedly comparing and swapping
adjacent items.

29 https://en.wikipedia.org/wiki/Cocktail_shaker_sort


9.3 Use

Figure 28 A bubble sort, a sorting algorithm that continuously steps through a list, swapping items until they appear in the correct order. The list was plotted in a Cartesian coordinate system, with each point (x, y) indicating that the value y is stored at index x. The list was then sorted by bubble sort according to every pixel's value. Note that the largest end gets sorted first, with smaller elements taking longer to move to their correct positions.

Although bubble sort is one of the simplest sorting algorithms to understand and implement, its O(n²)30 complexity means that its efficiency decreases dramatically on lists of more than a small number of elements. Even among simple O(n²) sorting algorithms, algorithms like insertion sort31 are usually considerably more efficient.
Due to its simplicity, bubble sort is often used to introduce the concept of an algorithm, or a
sorting algorithm, to introductory computer science32 students. However, some researchers

30 https://en.wikipedia.org/wiki/Big_O_notation
31 https://en.wikipedia.org/wiki/Insertion_sort
32 https://en.wikipedia.org/wiki/Computer_science


such as Owen Astrachan33 have gone to great lengths to disparage bubble sort and its
continued popularity in computer science education, recommending that it no longer even
be taught.[4]
The Jargon File34 , which famously calls bogosort35 ”the archetypical [sic] perversely awful
algorithm”, also calls bubble sort ”the generic bad algorithm”.[5] Donald Knuth36 , in The
Art of Computer Programming37 , concluded that ”the bubble sort seems to have nothing to
recommend it, except a catchy name and the fact that it leads to some interesting theoretical
problems”, some of which he then discusses.[6]
Bubble sort is asymptotically38 equivalent in running time to insertion sort in the worst
case, but the two algorithms differ greatly in the number of swaps necessary. Experimental
results such as those of Astrachan have also shown that insertion sort performs considerably
better even on random lists. For these reasons many modern algorithm textbooks avoid
using the bubble sort algorithm in favor of insertion sort.
Bubble sort also interacts poorly with modern CPU hardware. It produces at least twice as many writes as insertion sort, twice as many cache misses, and asymptotically more branch mispredictions39 .[citation needed] Experiments by Astrachan sorting strings in Java41 show bubble sort to be roughly one-fifth as fast as an insertion sort and 70% as fast as a selection sort42 .[4]
In computer graphics, bubble sort is popular for its capability to detect a very small error (like a swap of just two elements) in almost-sorted arrays and fix it with just linear complexity (2n). For example, it is used in a polygon filling algorithm, where bounding lines are sorted by their x coordinate at a specific scan line (a line parallel to the x axis) and, with incrementing y, their order changes (two elements are swapped) only at intersections of two lines. Bubble sort is a stable sort algorithm, like insertion sort.

9.4 Variations
• Odd–even sort43 is a parallel version of bubble sort, for message passing systems.
• Passes can be from right to left, rather than left to right. This is more efficient for lists
with unsorted items added to the end.
• Cocktail shaker sort44 alternates leftwards and rightwards passes.

33 https://en.wikipedia.org/wiki/Owen_Astrachan
34 https://en.wikipedia.org/wiki/Jargon_File
35 https://en.wikipedia.org/wiki/Bogosort
36 https://en.wikipedia.org/wiki/Donald_Knuth
37 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
38 https://en.wikipedia.org/wiki/Big_O_notation
39 https://en.wikipedia.org/wiki/Branch_predictor
41 https://en.wikipedia.org/wiki/Java_(programming_language)
42 https://en.wikipedia.org/wiki/Selection_sort
43 https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
44 https://en.wikipedia.org/wiki/Cocktail_shaker_sort


9.5 Debate over name

Bubble sort has been occasionally referred to as a ”sinking sort”.[7]


For example, in Donald Knuth's The Art of Computer Programming, Volume 3: Sorting and Searching, he states in section 5.2.1 'Sorting by Insertion' that [the value] ”settles to its proper level”, and that this method of sorting has sometimes been called the sifting or sinking technique.[clarification needed]
This debate is perpetuated by the ease with which one may consider this algorithm from
two different but equally valid perspectives:
1. The larger values might be regarded as heavier and therefore be seen to progressively
sink to the bottom of the list
2. The smaller values might be regarded as lighter and therefore be seen to progressively
bubble up to the top of the list.

9.6 In popular culture

Former Google CEO Eric Schmidt46 once asked then-presidential candidate Barack Obama47 during an interview about the best way to sort one million integers48 – and Obama, pausing for a moment, replied: ”I think the bubble sort would be the wrong way to go.”[8][9]

9.7 Notes
1. C, A (27 A 2007). ”V S A”49 . R-
 16 M 2017.
2. ”[JDK-6804124] () R ” ” 
..A.   - J B S”50 .
bugs.openjdk.java.net. Retrieved 2020-01-11.
3. P, T (2002-07-20). ”[P-D] S”51 . R 2020-01-11.
4. A, O (2003). ”B :   -
 ”52 (PDF). ACM SIGCSE Bulletin. 35 (1): 1–5.
53 54 55
doi :10.1145/792548.611918 . ISSN 0097-8418 . 56

5. ”, : -”57 .

46 https://en.wikipedia.org/wiki/Eric_Schmidt
47 https://en.wikipedia.org/wiki/Barack_Obama
48 https://en.wikipedia.org/wiki/Integer
49 https://corte.si/posts/code/visualisingsorting/index.html
50 https://bugs.openjdk.java.net/browse/JDK-6804124
51 https://mail.python.org/pipermail/python-dev/2002-July/026837.html
52 http://www.cs.duke.edu/~ola/papers/bubble.pdf
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1145%2F792548.611918
55 https://en.wikipedia.org/wiki/ISSN_(identifier)
56 http://www.worldcat.org/issn/0097-8418
57 http://www.jargon.net/jargonfile/b/bogo-sort.html


6. Donald Knuth58 . The Art of Computer Programming59 , Volume 3: Sorting and Searching, Second Edition. Addison-Wesley, 1998. ISBN60 0-201-89685-061 . Pages
106–110 of section 5.2.2: Sorting by Exchanging. ”[A]lthough the techniques used in
the calculations [to analyze the bubble sort] are instructive, the results are disappoint-
ing since they tell us that the bubble sort isn't really very good at all. Compared to
straight insertion […], bubble sorting requires a more complicated program and takes
about twice as long!” (Quote from the first edition, 1973.)
7. B, P E. (24 A 2009). ” ”62 . Dictionary of Algorithms
and Data Structures63 . N I  S  T64 .
R 1 O 2014.
8. OBAMA PASSES HIS GOOGLE INTERVIEW65 - Wired.com
9. B O, E S (N 14, 2007). Barack Obama | Candidates at
Google66 (Y). M V, CA 94043 T G: T 
G. E   23:20. A   67 (V) 
S 7, 2019. R S 18, 2019.CS1 maint: location (link68 )

9.8 References
• Thomas H. Cormen69 , Charles E. Leiserson70 , Ronald L. Rivest71 , and Clifford Stein72 .
Introduction to Algorithms73 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN74 0-262-03293-775 . Problem 2-2, pg.40.
• Sorting in the Presence of Branch Prediction and Caches76
• Fundamentals of Data Structures by Ellis Horowitz, Sartaj Sahni77 and Susan Anderson-
Freed ISBN78 81-7371-605-679
• Owen Astrachan80 . Bubble Sort: An Archaeological Algorithmic Analysis81

58 https://en.wikipedia.org/wiki/Donald_Knuth
59 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
60 https://en.wikipedia.org/wiki/ISBN_(identifier)
61 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
62 https://xlinux.nist.gov/dads/HTML/bubblesort.html
63 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
64 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
65 https://www.wired.com/2007/11/obama-elect-me/
66 https://web.archive.org/web/20190907131624/https://www.youtube.com/watch?v=m4yVlPqeZwo
67 https://www.youtube.com/watch?v=m4yVlPqeZwo
68 https://en.wikipedia.org/wiki/Category:CS1_maint:_location
69 https://en.wikipedia.org/wiki/Thomas_H._Cormen
70 https://en.wikipedia.org/wiki/Charles_E._Leiserson
71 https://en.wikipedia.org/wiki/Ronald_L._Rivest
72 https://en.wikipedia.org/wiki/Clifford_Stein
73 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
74 https://en.wikipedia.org/wiki/ISBN_(identifier)
75 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
76 https://www.cs.tcd.ie/publications/tech-reports/reports.05/TCD-CS-2005-57.pdf
77 https://en.wikipedia.org/wiki/Sartaj_Sahni
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/81-7371-605-6
80 https://en.wikipedia.org/wiki/Owen_Astrachan
81 https://users.cs.duke.edu/~ola/bubble/bubble.html


9.9 External links

The Wikibook Algorithm implementation82 has a page on the topic of: Bubble
sort83

Wikimedia Commons has media related to Bubble sort84 .

Wikiversity has learning resources about Bubble sort85

• Bubble sort in java with example86


• M, D R. (2007). ”A S A: B S”87 .
A   88  2015-03-03. – graphical demonstration
• ”L' B S”89 . (Java applet animation)
• OEIS sequence A008302 (Table (statistics) of the number of permutations of [n] that need
k pair-swaps during the sorting)90


82 https://en.wikibooks.org/wiki/Algorithm_implementation
83 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Bubble_sort
84 https://commons.wikimedia.org/wiki/Category:Bubble_sort
85 https://en.wikiversity.org/wiki/Bubble_sort
86 https://onlinetutorials.tech/bubble-sort-in-java-with-example/
87 https://web.archive.org/web/20150303084352/http://www.sorting-algorithms.com/bubble-sort
88 http://www.sorting-algorithms.com/bubble-sort
89 http://lecture.ecc.u-tokyo.ac.jp/~ueda/JavaApplet/BubbleSort.html
90 https://oeis.org/A008302

10 Shellsort

Sorting algorithm which uses multiple comparison intervals

Shellsort
Shellsort with gaps 23, 10, 4, 1 in action
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) (worst known worst-case gap sequence); O(n log² n) (best known worst-case gap sequence)[1]
Best-case performance: O(n log n) (most gap sequences); O(n log² n) (best known worst-case gap sequence)[2]
Average performance: depends on gap sequence
Worst-case space complexity: O(n) total, O(1) auxiliary


Figure 29 Swapping pairs of items in successive steps of Shellsort with gaps 5, 3, 1

Shellsort, also known as Shell sort or Shell's method, is an in-place1 comparison sort2 .
It can be seen as either a generalization of sorting by exchange (bubble sort3 ) or sorting by
insertion (insertion sort4 ).[3] The method starts by sorting pairs of elements far apart from
each other, then progressively reducing the gap between elements to be compared. Starting
with far apart elements, it can move some out-of-place elements into position faster than a
simple nearest neighbor exchange. Donald Shell5 published the first version of this sort in
1959.[4][5] The running time of Shellsort is heavily dependent on the gap sequence it uses.
For many practical variants, determining their time complexity6 remains an open problem7 .

1 https://en.wikipedia.org/wiki/In-place_algorithm
2 https://en.wikipedia.org/wiki/Comparison_sort
3 https://en.wikipedia.org/wiki/Bubble_sort
4 https://en.wikipedia.org/wiki/Insertion_sort
5 https://en.wikipedia.org/wiki/Donald_Shell
6 https://en.wikipedia.org/wiki/Time_complexity
7 https://en.wikipedia.org/wiki/Open_problem


10.1 Description

Shellsort is a generalization of insertion sort8 that allows the exchange of items that are far
apart. The idea is to arrange the list of elements so that, starting anywhere, considering
every hth element gives a sorted list. Such a list is said to be h-sorted. Equivalently, it can
be thought of as h interleaved lists, each individually sorted.[6] Beginning with large values
of h, this rearrangement allows elements to move long distances in the original list, reducing
large amounts of disorder quickly, and leaving less work for smaller h-sort steps to do.[7] If
the list is then k-sorted for some smaller integer k, then the list remains h-sorted. Following
this idea for a decreasing sequence of h values ending in 1 is guaranteed to leave a sorted
list in the end.[6]
An example run of Shellsort with gaps 5, 3 and 1 is shown below.
                a1  a2  a3  a4  a5  a6  a7  a8  a9  a10 a11 a12
Input data      62  83  18  53  07  17  95  86  47  69  25  28
After 5-sorting 17  28  18  47  07  25  83  86  53  69  62  95
After 3-sorting 17  07  18  47  28  25  69  62  53  83  86  95
After 1-sorting 07  17  18  25  28  47  53  62  69  83  86  95

The first pass, 5-sorting, performs insertion sort on five separate subarrays (a1 , a6 , a11 ), (a2 ,
a7 , a12 ), (a3 , a8 ), (a4 , a9 ), (a5 , a10 ). For instance, it changes the subarray (a1 , a6 , a11 ) from
(62, 17, 25) to (17, 25, 62). The next pass, 3-sorting, performs insertion sort on the three
subarrays (a1 , a4 , a7 , a10 ), (a2 , a5 , a8 , a11 ), (a3 , a6 , a9 , a12 ). The last pass, 1-sorting, is an
ordinary insertion sort of the entire array (a1 ,..., a12 ).
As the example illustrates, the subarrays that Shellsort operates on are initially short; later
they are longer but almost ordered. In both cases insertion sort works efficiently.
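The h-sortedness invariant described above can be checked mechanically; the following helper and data (ours, for illustration) replay the rows of the example:

```python
def is_h_sorted(a, h):
    """True if considering every h-th element, starting anywhere, gives a
    sorted list, i.e. a[i] <= a[i + h] for every valid index i."""
    return all(a[i] <= a[i + h] for i in range(len(a) - h))

# The intermediate states from the example run above.
after_5_sorting = [17, 28, 18, 47, 7, 25, 83, 86, 53, 69, 62, 95]
after_3_sorting = [17, 7, 18, 47, 28, 25, 69, 62, 53, 83, 86, 95]
```

Note that after_3_sorting is still 5-sorted: 3-sorting the 5-sorted array did not destroy the earlier invariant, exactly as claimed in the Description.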
Shellsort is not stable9 : it may change the relative order of elements with equal values. It is
an adaptive sorting algorithm10 in that it executes faster when the input is partially sorted.

10.2 Pseudocode

Using Marcin Ciura's gap sequence, with an inner insertion sort.

# Sort an array a[0...n-1].
gaps = [701, 301, 132, 57, 23, 10, 4, 1]

# Start with the largest gap and work down to a gap of 1
foreach (gap in gaps)

8 https://en.wikipedia.org/wiki/Insertion_sort
9 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
10 https://en.wikipedia.org/wiki/Adaptive_sort


{
    # Do a gapped insertion sort for this gap size.
    # The first gap elements a[0..gap-1] are already in gapped order;
    # keep adding one more element until the entire array is gap sorted.
    for (i = gap; i < n; i += 1)
    {
        # Add a[i] to the elements that have been gap sorted:
        # save a[i] in temp and make a hole at position i.
        temp = a[i]
        # Shift earlier gap-sorted elements up until the correct location
        # for a[i] is found.
        for (j = i; j >= gap and a[j - gap] > temp; j -= gap)
        {
            a[j] = a[j - gap]
        }
        # Put temp (the original a[i]) in its correct location.
        a[j] = temp
    }
}
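The pseudocode translates directly into Python (a sketch added for this book; names follow the pseudocode):

```python
def shellsort(a):
    """Shellsort with Marcin Ciura's gap sequence and an inner gapped
    insertion sort, mirroring the pseudocode above."""
    n = len(a)
    gaps = [701, 301, 132, 57, 23, 10, 4, 1]
    for gap in gaps:
        # Gapped insertion sort for this gap size: elements a[0..gap-1]
        # are trivially in gapped order; insert the rest one at a time.
        for i in range(gap, n):
            temp = a[i]              # save a[i], making a hole at i
            j = i
            # Shift earlier gap-sorted elements up until the correct
            # location for a[i] is found.
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp              # drop temp into its place
    return a
```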

10.3 Gap sequences

The question of deciding which gap sequence to use is difficult. Every gap sequence that contains 1 yields a correct sort (as this makes the final pass an ordinary insertion sort); however, the properties of the resulting versions of Shellsort may be very different. Too few gaps slow down the passes, and too many gaps produce an overhead.
The table below compares most proposed gap sequences published so far. Some of them have
decreasing elements that depend on the size of the sorted array (N). Others are increasing
infinite sequences, whose elements less than N should be used in reverse order.
OEIS11 | General term (k ≥ 1) | Concrete gaps | Worst-case time complexity | Author and year of publication
— | ⌊N/2^k⌋ | ⌊N/2⌋, ⌊N/4⌋, ..., 1 | Θ(N²) [e.g. when N = 2^p] | Shell12 , 1959[4]
— | 2·⌊N/2^(k+1)⌋ + 1 | 2·⌊N/4⌋ + 1, ..., 3, 1 | Θ(N^(3/2)) | Frank & Lazarus, 1960[8]
A00022513 | 2^k − 1 | 1, 3, 7, 15, 31, 63, ... | Θ(N^(3/2)) | Hibbard14 , 1963[9]
A08331815 | 2^k + 1, prefixed with 1 | 1, 3, 5, 9, 17, 33, 65, ... | Θ(N^(3/2)) | Papernov & Stasevich, 1965[10]
A00358616 | successive numbers of the form 2^p·3^q (3-smooth17 numbers) | 1, 2, 3, 4, 6, 8, 9, 12, ... | Θ(N·log²N) | Pratt18 , 1971[1]
A00346219 | (3^k − 1)/2, not greater than ⌈N/3⌉ | 1, 4, 13, 40, 121, ... | Θ(N^(3/2)) | Knuth20 , 1973,[3] based on Pratt21 , 1971[1]

12 https://en.wikipedia.org/wiki/Donald_Shell
13 http://oeis.org/A000225
14 https://en.wikipedia.org/wiki/Thomas_N._Hibbard
15 http://oeis.org/A083318
16 http://oeis.org/A003586
17 https://en.wikipedia.org/wiki/3-smooth
18 https://en.wikipedia.org/wiki/Vaughan_Ronald_Pratt
19 http://oeis.org/A003462
20 https://en.wikipedia.org/wiki/Donald_Knuth
21 https://en.wikipedia.org/wiki/Vaughan_Ronald_Pratt


OEIS11 | General term (k ≥ 1) | Concrete gaps | Worst-case time complexity | Author and year of publication
A03656922 | ∏ a_q over q ∈ I, where a_q = min{ n ∈ N : n ≥ (5/2)^(q+1), ∀p : 0 ≤ p < q ⇒ gcd(a_p , n) = 1 }, I = { 0 ≤ q < r | q ≠ (r² + r)/2 − k }, r = ⌊√(2k + √(2k))⌋ | 1, 3, 7, 21, 48, 112, ... | O(N^(1 + √(8·ln(5/2)/ln N))) | Incerpi & Sedgewick23 , 1985,[11] Knuth24 [3]
A03656225 | 4^k + 3·2^(k−1) + 1, prefixed with 1 | 1, 8, 23, 77, 281, ... | O(N^(4/3)) | Sedgewick, 1982[6]
A03362226 | 9·(2^k − 2^(k/2)) + 1 if k is even; 8·2^k − 6·2^((k+1)/2) + 1 if k is odd | 1, 5, 19, 41, 109, ... | O(N^(4/3)) | Sedgewick, 1986[12]
— | h_k = max(⌊5·h_(k−1)/11⌋, 1), h_0 = N | ⌊5N/11⌋, ⌊5/11·⌊5N/11⌋⌋, ..., 1 | Unknown | Gonnet27 & Baeza-Yates28 , 1991[13]
A10887029 | ⌈(1/5)·(9·(9/4)^(k−1) − 4)⌉ | 1, 4, 9, 20, 46, 103, ... | Unknown | Tokuda, 1992[14]
A10254930 | Unknown (experimentally derived) | 1, 4, 10, 23, 57, 132, 301, 701 | Unknown | Ciura, 2001[15]

When the binary representation of N contains many consecutive zeroes, Shellsort using Shell's original gap sequence makes Θ(N²) comparisons in the worst case. For instance, this case occurs for N equal to a power of two when elements greater and smaller than the median occupy odd and even positions respectively, since they are compared only in the last pass.
Although it has higher complexity than the O(N log N) that is optimal for comparison
sorts, Pratt's version lends itself to sorting networks31 and has the same asymptotic gate
complexity as Batcher's bitonic sorter32 .
Gonnet and Baeza-Yates observed that Shellsort makes the fewest comparisons on average
when the ratios of successive gaps are roughly equal to 2.2.[13] This is why their sequence
with ratio 2.2 and Tokuda's sequence with ratio 2.25 prove efficient. However, it is not
known why this is so. Sedgewick recommends using gaps that have low greatest common divisors33 or are pairwise coprime34 .[16]
With respect to the average number of comparisons, Ciura's sequence[15] has the best known performance; gaps beyond 701 have not been determined, but the sequence can be extended further according to the recursive formula h_k = ⌊2.25·h_(k−1)⌋.

22 http://oeis.org/A036569
23 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
24 https://en.wikipedia.org/wiki/Donald_Knuth
25 http://oeis.org/A036562
26 http://oeis.org/A033622
27 https://en.wikipedia.org/wiki/Gaston_Gonnet
28 https://en.wikipedia.org/wiki/Ricardo_Baeza-Yates
29 http://oeis.org/A108870
30 http://oeis.org/A102549
31 https://en.wikipedia.org/wiki/Sorting_network
32 https://en.wikipedia.org/wiki/Bitonic_sorter
33 https://en.wikipedia.org/wiki/Greatest_common_divisor
34 https://en.wikipedia.org/wiki/Coprime


Tokuda's sequence, defined by the simple formula h_k = ⌈h'_k⌉, where h'_k = 2.25·h'_(k−1) + 1, h'_1 = 1, can be recommended for practical applications.
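As an illustration (our code, not from the article), both recurrences are easy to generate: Tokuda's h_k = ⌈h'_k⌉ with h'_k = 2.25·h'_(k−1) + 1, and Ciura's list extended by h_k = ⌊2.25·h_(k−1)⌋:

```python
import math

def tokuda_gaps(n):
    """Tokuda's gap sequence, in decreasing order, keeping gaps below n."""
    gaps, h = [], 1.0                  # h'_1 = 1
    while math.ceil(h) < n:
        gaps.append(math.ceil(h))      # h_k = ceil(h'_k)
        h = 2.25 * h + 1               # h'_k = 2.25 * h'_{k-1} + 1
    return gaps[::-1]

def ciura_gaps_extended(n):
    """Ciura's experimentally derived gaps, extended past 701 by
    h_k = floor(2.25 * h_{k-1}), in decreasing order, below n."""
    gaps = [1, 4, 10, 23, 57, 132, 301, 701]
    while int(gaps[-1] * 2.25) < n:
        gaps.append(int(gaps[-1] * 2.25))
    return [g for g in reversed(gaps) if g < n]
```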

10.4 Computational complexity

The following property holds: after h2 -sorting of any h1 -sorted array, the array remains
h1 -sorted.[17] Every h1 -sorted and h2 -sorted array is also (a1 h1 +a2 h2 )-sorted, for any non-
negative integers a1 and a2 . The worst-case complexity of Shellsort is therefore connected
with the Frobenius problem35 : for given integers h1 ,..., hn with gcd = 1, the Frobenius num-
ber g(h1 ,..., hn ) is the greatest integer that cannot be represented as a1 h1 + ... +an hn with
nonnegative integers a1 ,..., an . Using known formulae for Frobenius numbers, we can determine the worst-case complexity of Shellsort for several classes of gap sequences.[18] Proven
results are shown in the above table.
With respect to the average number of operations, none of the proven results concerns a practical gap sequence. For gaps that are powers of two, Espelid computed this average as 0.5349·N·√N − 0.4387·N − 0.097·√N + O(1).[19] Knuth36 determined the average complexity of sorting an N-element array with two gaps (h, 1) to be 2N²/h + √(π·N³·h).[3] It follows that a two-pass Shellsort with h = Θ(N^(1/3)) makes on average O(N^(5/3)) comparisons/inversions/running time. Yao37 found the average complexity of a three-pass Shellsort.[20] His result was refined by Janson and Knuth:[21] the average number of comparisons/inversions/running time made during a Shellsort with three gaps (ch, cg, 1), where h and g are coprime, is N²/(4ch) + O(N) in the first pass, (1/(8g))·√(π/(ch))·(h − 1)·N^(3/2) + O(hN) in the second pass, and ψ(h, g)·N + (1/8)·√(π/c)·(c − 1)·N^(3/2) + O((c − 1)·g·h^(1/2)·N) + O(c²g³h²) in the third pass. ψ(h, g) in the last formula is a complicated function asymptotically equal to √(πh/128)·g + O(g^(−1/2)·h^(1/2)) + O(g·h^(−1/2)). In particular, when h = Θ(N^(7/15)) and g = Θ(N^(1/5)), the average time of sorting is O(N^(23/15)).
Based on experiments, it is conjectured that Shellsort with Hibbard38 's gap sequence runs in O(N^(5/4)) average time,[3] and that Gonnet and Baeza-Yates's sequence requires on average 0.41·N·ln N·(ln ln N + 1/6) element moves.[13] Approximations of the average number of operations formerly put forward for other sequences fail when sorted arrays contain millions of elements.
The graph below shows the average number of element comparisons in various variants of Shellsort, divided by the theoretical lower bound, i.e. log₂(N!), where the sequence 1, 4, 10, 23, 57, 132, 301, 701 has been extended according to the formula h_k = ⌊2.25·h_(k−1)⌋.

35 https://en.wikipedia.org/wiki/Coin_problem
36 https://en.wikipedia.org/wiki/Donald_Knuth
37 https://en.wikipedia.org/wiki/Andrew_Yao
38 https://en.wikipedia.org/wiki/Thomas_N._Hibbard


Figure 30

Applying the theory of Kolmogorov complexity39 , Jiang, Li40 , and Vitányi41 proved the following lower bound for the order of the average number of operations/running time in a p-pass Shellsort: Ω(p·N^(1+1/p)) when p ≤ log₂N, and Ω(p·N) when p > log₂N.[22] Therefore, Shellsort has prospects of running in an average time that asymptotically grows like N·log N only when using gap sequences whose number of gaps grows in proportion to the logarithm of the array size. It is, however, unknown whether Shellsort can reach this asymptotic order of average-case complexity, which is optimal for comparison sorts. The lower bound was improved by Vitányi42 [23] for every number of passes p to Ω(N·∑_(k=1..p) h_(k−1)/h_k), where h_0 = N. This result implies, for example, the Jiang–Li–Vitányi lower bound for all p-pass increment sequences and improves that lower bound for particular increment sequences. In fact, all bounds (lower and upper) currently known for the average case are precisely matched by this lower bound. For example, this gives the new result that the Janson43 –Knuth44 upper bound is matched by the resulting lower bound for the increment sequence used, showing that three-pass Shellsort for this increment sequence uses Θ(N^(23/15)) comparisons/inversions/running time. The formula also allows us to search for increment sequences that yield lower bounds which are unknown; for example, an increment sequence for four passes which has a lower bound greater than Ω(p·n^(1+1/p)) = Ω(n^(5/4)) for the

39 https://en.wikipedia.org/wiki/Kolmogorov_complexity
40 https://en.wikipedia.org/wiki/Ming_Li
41 https://en.wikipedia.org/wiki/Paul_Vit%C3%A1nyi
42 https://en.wikipedia.org/wiki/Paul_Vit%C3%A1nyi
43 https://en.wikipedia.org/wiki/Svante_Janson
44 https://en.wikipedia.org/wiki/Donald_Knuth


increment sequence h_1 = n^(11/16), h_2 = n^(7/16), h_3 = n^(3/16), h_4 = 1. The lower bound becomes

T = Ω(n·(n^(1−11/16) + n^(11/16−7/16) + n^(7/16−3/16) + n^(3/16))) = Ω(n^(1+5/16)) = Ω(n^(21/16)).
The worst-case complexity of any version of Shellsort is of higher order: Plaxton, Poonen45,
and Suel46 showed that it grows at least as rapidly as Ω(N·(log N / log log N)^2).[24]

10.5 Applications

Shellsort performs more operations and has a higher cache miss ratio47 than quicksort48. However,
since it can be implemented using little code and does not use the call stack49, some
implementations of the qsort50 function in the C standard library51 targeted at embedded
systems52 use it instead of quicksort. Shellsort is, for example, used in the uClibc53
library.[25] For similar reasons, in the past, Shellsort was used in the Linux kernel54.[26]
Shellsort can also serve as a sub-algorithm of introspective sort55 , to sort short subarrays
and to prevent a slowdown when the recursion depth exceeds a given limit. This principle
is employed, for instance, in the bzip256 compressor.[27]

10.6 See also


• Comb sort57

10.7 References
1. P, V R58 (1979). Shellsort and Sorting Networks (Outstanding
Dissertations in the Computer Sciences)59 . G. ISBN60 978-0-8240-4406-061 .
2. ”S & C”62 .

45 https://en.wikipedia.org/wiki/Bjorn_Poonen
46 https://en.wikipedia.org/wiki/Torsten_Suel
47 https://en.wikipedia.org/wiki/CPU_cache#Cache_miss
48 https://en.wikipedia.org/wiki/Quicksort
49 https://en.wikipedia.org/wiki/Call_stack
50 https://en.wikipedia.org/wiki/Qsort
51 https://en.wikipedia.org/wiki/C_standard_library
52 https://en.wikipedia.org/wiki/Embedded_systems
53 https://en.wikipedia.org/wiki/UClibc
54 https://en.wikipedia.org/wiki/Linux_kernel
55 https://en.wikipedia.org/wiki/Introsort
56 https://en.wikipedia.org/wiki/Bzip2
57 https://en.wikipedia.org/wiki/Comb_sort
58 https://en.wikipedia.org/wiki/Vaughan_Ronald_Pratt
59 http://www.dtic.mil/get-tr-doc/pdf?AD=AD0740110
60 https://en.wikipedia.org/wiki/ISBN_(identifier)
61 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8240-4406-0
62 http://www.cs.wcupa.edu/rkline/ds/shell-comparison.html


3. K, D E.63 (1997). ”S' ”. The Art of Computer Pro-
gramming. Volume 3: Sorting and Searching (2nd ed.). Reading, Massachusetts:
Addison-Wesley. pp. 83–95. ISBN64 978-0-201-89685-565 .
4. S, D. L. (1959). ”A H-S S P”66 (PDF). Communi-
cations of the ACM. 2 (7): 30–32. doi67 :10.1145/368370.36838768 .
5. Some older textbooks and references call this the ”Shell–Metzner” sort after Marlene
Metzner Norton69 , but according to Metzner, ”I had nothing to do with the sort, and
my name should never have been attached to it.” See
”S ”70 . N I  S  T. R-
 17 J 2007.
6. S, R71 (1998). Algorithms in C72 . 1 (3rd ed.). Addison-Wesley.
pp. 273–28173 . ISBN74 978-0-201-31452-675 .
7. K, B W.76 ; R, D M.77 (1996). The C Programming
Language (2nd ed.). Prentice Hall. p. 62. ISBN78 978-7-302-02412-579 .
8. F, R. M.; L, R. B. (1960). ”A H-S S P”.
Communications of the ACM. 3 (1): 20–22. doi80 :10.1145/366947.36695781 .
9. H, T N. (1963). ”A E S  M
S S”. Communications of the ACM. 6 (5): 206–213.
doi82 :10.1145/366552.36655783 .
10. P, A. A.; S, G. V. (1965). ”A M  I
S  C M”84 (PDF). Problems of Information Transmission.
1 (3): 63–75.
11. I, J; S, R85 (1985). ”I U B 
S”86 (PDF). Journal of Computer and System Sciences. 31 (2): 210–224.
doi87 :10.1016/0022-0000(85)90042-x88 .

63 https://en.wikipedia.org/wiki/Donald_Knuth
64 https://en.wikipedia.org/wiki/ISBN_(identifier)
65 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
66 http://penguin.ewu.edu/cscd300/Topic/AdvSorting/p30-shell.pdf
67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1145%2F368370.368387
https://en.wikipedia.org/w/index.php?title=Marlene_Metzner_Norton&action=edit&
69
redlink=1
70 https://xlinux.nist.gov/dads/HTML/shellsort.html
71 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
72 https://archive.org/details/algorithmsinc00sedg/page/273
73 https://archive.org/details/algorithmsinc00sedg/page/273
74 https://en.wikipedia.org/wiki/ISBN_(identifier)
75 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-31452-6
76 https://en.wikipedia.org/wiki/Brian_Kernighan
77 https://en.wikipedia.org/wiki/Dennis_Ritchie
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/978-7-302-02412-5
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1145%2F366947.366957
82 https://en.wikipedia.org/wiki/Doi_(identifier)
83 https://doi.org/10.1145%2F366552.366557
84 http://www.mathnet.ru/links/83f0a81df1ec06f76d3683c6cab7d143/ppi751.pdf
85 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
86 https://hal.inria.fr/inria-00076291/file/RR-0267.pdf
87 https://en.wikipedia.org/wiki/Doi_(identifier)
88 https://doi.org/10.1016%2F0022-0000%2885%2990042-x


12. S, R89 (1986). ”A N U B  S”. Journal
of Algorithms. 7 (2): 159–173. doi90 :10.1016/0196-6774(86)90001-591 .
13. G, G H.; B-Y, R (1991). ”S”. Handbook of
Algorithms and Data Structures: In Pascal and C (2nd ed.). Reading, Massachusetts:
Addison-Wesley. pp. 161–163. ISBN92 978-0-201-41607-793 .
14. T, N (1992). ”A I S”. I  L, J
(.). Proceedings of the IFIP 12th World Computer Congress on Algorithms,
Software, Architecture. Amsterdam: North-Holland Publishing Co. pp. 449–457.
ISBN94 978-0-444-89747-395 .
15. C, M (2001). ”B I   A C  S-
”96 (PDF). I F, R (.). Proceedings of the 13th International
Symposium on Fundamentals of Computation Theory. London: Springer-Verlag.
pp. 106–117. ISBN97 978-3-540-42487-198 .
16. S, R99 (1998). ”S”. Algorithms in C++, Parts 1–4:
Fundamentals, Data Structure, Sorting, Searching. Reading, Massachusetts: Addison-
Wesley. pp. 285–292. ISBN100 978-0-201-35088-3101 .
17. G, D; K, R M. (1972). ”A P   T-
  S”. Journal of Computer and System Sciences. 6 (2): 103–115.
doi102 :10.1016/S0022-0000(72)80016-3103 .
18. S, E S. (1989). ”O S   F P”.
BIT Numerical Mathematics. 29 (1): 37–40. doi104 :10.1007/BF01932703105 .
hdl106 :1956/19572107 .
19. E, T O. (1973). ”A   S A”. BIT Nu-
merical Mathematics. 13 (4): 394–400. doi108 :10.1007/BF01933401109 .
20. Y, A C-C (1980). ”A A  (h, k, 1)-Shellsort”. Journal of
Algorithms. 1 (1): 14–50. doi110 :10.1016/0196-6774(80)90003-6111 .

89 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
90 https://en.wikipedia.org/wiki/Doi_(identifier)
91 https://doi.org/10.1016%2F0196-6774%2886%2990001-5
92 https://en.wikipedia.org/wiki/ISBN_(identifier)
93 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-41607-7
94 https://en.wikipedia.org/wiki/ISBN_(identifier)
95 https://en.wikipedia.org/wiki/Special:BookSources/978-0-444-89747-3
96 http://sun.aei.polsl.pl/~mciura/publikacje/shellsort.pdf
97 https://en.wikipedia.org/wiki/ISBN_(identifier)
98 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-42487-1
99 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
100 https://en.wikipedia.org/wiki/ISBN_(identifier)
101 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-35088-3
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1016%2FS0022-0000%2872%2980016-3
104 https://en.wikipedia.org/wiki/Doi_(identifier)
105 https://doi.org/10.1007%2FBF01932703
106 https://en.wikipedia.org/wiki/Hdl_(identifier)
107 http://hdl.handle.net/1956%2F19572
108 https://en.wikipedia.org/wiki/Doi_(identifier)
109 https://doi.org/10.1007%2FBF01933401
110 https://en.wikipedia.org/wiki/Doi_(identifier)
111 https://doi.org/10.1016%2F0196-6774%2880%2990003-6


21. J, S112 ; K, D E.113 (1997). ”S


 T I”. Random Structures and Algorithms.
10 (1–2): 125–142. arXiv114 :cs/9608105115 . doi116 :10.1002/(SICI)1098-
2418(199701/03)10:1/2<125::AID-RSA6>3.0.CO;2-X117 . CiteSeerX118 :
119
10.1.1.54.9911 .
22. J, T; L, M; V, P (2000). ”A L B  
A-C C  S”120 . Journal of the ACM121 . 47 (5):
905–911. arXiv122 :cs/9906008123 . doi124 :10.1145/355483.355488125 . CiteSeerX126 :
10.1.1.6.6508127 .
23. V, P128 (2018). ”O  -   S-
”. Random Structures and Algorithms. 52 (2): 354–363. arXiv129 :cs/9906008130 .
doi131 :10.1002/rsa.20737132 .
24. P, C. G; P, B; S, T (1992). Improved Lower
Bounds for Shellsort. Annual Symposium on Foundations of Computer Science. 33.
pp. 226–235. CiteSeerX133 10.1.1.460.2429134 . doi135 :10.1109/SFCS.1992.267769136 .
ISBN137 978-0-8186-2900-6138 . CiteSeerX139 : 10.1.1.43.1393140 .
25. N, M III. ”//.”141 . R 29 O 2014.
26. ”/.”142 . R 5 M 2012.
27. J S. ”2/.”143 . R 30 M 2011.

112 https://en.wikipedia.org/wiki/Svante_Janson
113 https://en.wikipedia.org/wiki/Donald_Knuth
114 https://en.wikipedia.org/wiki/ArXiv_(identifier)
115 http://arxiv.org/abs/cs/9608105
116 https://en.wikipedia.org/wiki/Doi_(identifier)
https://doi.org/10.1002%2F%28SICI%291098-2418%28199701%2F03%2910%3A1%2F2%3C125%3A%
117
3AAID-RSA6%3E3.0.CO%3B2-X
118 https://en.wikipedia.org/wiki/CiteSeerX
119 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.9911
120 http://www.cwi.nl/~paulv/papers/shell2.ps
121 https://en.wikipedia.org/wiki/Journal_of_the_ACM
122 https://en.wikipedia.org/wiki/ArXiv_(identifier)
123 http://arxiv.org/abs/cs/9906008
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1145%2F355483.355488
126 https://en.wikipedia.org/wiki/CiteSeerX
127 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.6508
128 https://en.wikipedia.org/wiki/Paul_Vit%C3%A1nyi
129 https://en.wikipedia.org/wiki/ArXiv_(identifier)
130 http://arxiv.org/abs/cs/9906008
131 https://en.wikipedia.org/wiki/Doi_(identifier)
132 https://doi.org/10.1002%2Frsa.20737
133 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
134 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.460.2429
135 https://en.wikipedia.org/wiki/Doi_(identifier)
136 https://doi.org/10.1109%2FSFCS.1992.267769
137 https://en.wikipedia.org/wiki/ISBN_(identifier)
138 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8186-2900-6
139 https://en.wikipedia.org/wiki/CiteSeerX
140 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.1393
141 http://git.uclibc.org/uClibc/tree/libc/stdlib/stdlib.c#n700
https://github.com/torvalds/linux/blob/72932611b4b05bbd89fafa369d564ac8e449809b/
142
kernel/groups.c#L105
https://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/lxr/source/src/util/compress/bzip2/
143
blocksort.c#L519


10.8 Bibliography
• K, D E.144 (1997). ”S' ”. The Art of Computer Program-
ming. Volume 3: Sorting and Searching145 (2 .). R, M:
A-W. . 83–95. ISBN146 978-0-201-89685-5147 .
• Analysis of Shellsort and Related Algorithms148 , Robert Sedgewick, Fourth European
Symposium on Algorithms, Barcelona, September 1996.

10.9 External links

The Wikibook Algorithm implementation149 has a page on the topic of: Shell sort150

• Animated Sorting Algorithms: Shell Sort151 at the Wayback Machine152 (archived 10
March 2015) – graphical demonstration
• Shellsort with gaps 5, 3, 1 as a Hungarian folk dance153


144 https://en.wikipedia.org/wiki/Donald_Knuth
145 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
146 https://en.wikipedia.org/wiki/ISBN_(identifier)
147 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
148 http://www.cs.princeton.edu/~rs/shell/
149 https://en.wikibooks.org/wiki/Algorithm_implementation
150 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Shell_sort
https://web.archive.org/web/20150310043846/http://www.sorting-algorithms.com/shell-
151
sort
152 https://en.wikipedia.org/wiki/Wayback_Machine
153 https://www.youtube.com/watch?v=CmPA7zE8mx0

11 Integer sorting

In computer science1 , integer sorting is the algorithmic2 problem of sorting3 a collection


of data values by integer4 keys. Algorithms designed for integer sorting may also often
be applied to sorting problems in which the keys are floating point5 numbers, rational
numbers6 , or text strings.[1] The ability to perform integer arithmetic on the keys allows
integer sorting algorithms to be faster than comparison sorting7 algorithms in many cases,
depending on the details of which operations are allowed in the model of computing and
how large the integers to be sorted are.
Integer sorting algorithms including pigeonhole sort8 , counting sort9 , and radix sort10 are
widely used and practical. Other integer sorting algorithms with smaller worst-case time
bounds are not believed to be practical for computer architectures with 64 or fewer bits per
word. Many such algorithms are known, with performance depending on a combination of
the number of items to be sorted, number of bits per key, and number of bits per word of
the computer performing the sorting algorithm.

11.1 General considerations

11.1.1 Models of computation

Time bounds for integer sorting algorithms typically depend on three parameters: the
number n of data values to be sorted, the magnitude K of the largest possible key to be
sorted, and the number w of bits that can be represented in a single machine word of
the computer on which the algorithm is to be performed. Typically, it is assumed that
w ≥log2 (max(n, K)); that is, that machine words are large enough to represent an index
into the sequence of input data, and also large enough to represent a single key.[2]
Integer sorting algorithms are usually designed to work in either the pointer machine11 or
random access machine12 models of computing. The main difference between these two

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Sorting_algorithm
4 https://en.wikipedia.org/wiki/Integer
5 https://en.wikipedia.org/wiki/Floating_point
6 https://en.wikipedia.org/wiki/Rational_number
7 https://en.wikipedia.org/wiki/Comparison_sort
8 https://en.wikipedia.org/wiki/Pigeonhole_sort
9 https://en.wikipedia.org/wiki/Counting_sort
10 https://en.wikipedia.org/wiki/Radix_sort
11 https://en.wikipedia.org/wiki/Pointer_machine
12 https://en.wikipedia.org/wiki/Random_access_machine


models is in how memory may be addressed. The random access machine allows any value
that is stored in a register to be used as the address of memory read and write operations,
with unit cost per operation. This ability allows certain complex operations on data to
be implemented quickly using table lookups. In contrast, in the pointer machine model,
read and write operations use addresses stored in pointers, and it is not allowed to perform
arithmetic operations on these pointers. In both models, data values may be added, and
bitwise Boolean operations and binary shift operations may typically also be performed
on them, in unit time per operation. Different integer sorting algorithms make different
assumptions, however, about whether integer multiplication is also allowed as a unit-time
operation.[3] Other more specialized models of computation such as the parallel random
access machine13 have also been considered.[4]
Andersson, Miltersen & Thorup (1999)14 showed that in some cases the multiplications or
table lookups required by some integer sorting algorithms could be replaced by customized
operations that would be more easily implemented in hardware but that are not typically
available on general-purpose computers. Thorup (2003)15 improved on this by showing
how to replace these special operations by the bit field16 manipulation instructions already
available on Pentium17 processors.

11.1.2 Sorting versus integer priority queues

A priority queue18 is a data structure for maintaining a collection of items with numerical
priorities, having operations for finding and removing the item with the minimum priority
value. Comparison-based priority queues such as the binary heap19 take logarithmic time
per update, but other structures such as the van Emde Boas tree20 or bucket queue21 may
be faster for inputs whose priorities are small integers. These data structures can be used in
the selection sort22 algorithm, which sorts a collection of elements by repeatedly finding and
removing the smallest element from the collection, and returning the elements in the order
they were found. A priority queue can be used to maintain the collection of elements in this
algorithm, and the time for this algorithm on a collection of n elements can be bounded by
the time to initialize the priority queue and then to perform n find and remove operations.
For instance, using a binary heap23 as a priority queue in selection sort leads to the heap
sort24 algorithm, a comparison sorting algorithm that takes O(n log n) time. Instead, using
selection sort with a bucket queue gives a form of pigeonhole sort25 , and using van Emde
Boas trees or other integer priority queues leads to other fast integer sorting algorithms.[5]
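The priority-queue view of sorting described above is easy to make concrete. In this minimal Python sketch (an illustration, not from the original text), a binary heap serves as the priority queue, so the selection-sort loop becomes exactly heap sort; substituting an integer priority queue such as a bucket queue would instead give a form of pigeonhole sort:

```python
import heapq

def pq_sort(items):
    """Selection sort driven by a priority queue: initialize the queue,
    then repeatedly find and remove the minimum. With a binary heap this
    is heap sort and takes O(n log n) time overall."""
    heap = list(items)
    heapq.heapify(heap)  # initialize the priority queue in O(n)
    # n find-and-remove operations, each O(log n) for a binary heap
    return [heapq.heappop(heap) for _ in range(len(heap))]
```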

13 https://en.wikipedia.org/wiki/Parallel_random_access_machine
14 #CITEREFAnderssonMiltersenThorup1999
15 #CITEREFThorup2003
16 https://en.wikipedia.org/wiki/Bit_field
17 https://en.wikipedia.org/wiki/Pentium
18 https://en.wikipedia.org/wiki/Priority_queue
19 https://en.wikipedia.org/wiki/Binary_heap
20 https://en.wikipedia.org/wiki/Van_Emde_Boas_tree
21 https://en.wikipedia.org/wiki/Bucket_queue
22 https://en.wikipedia.org/wiki/Selection_sort
23 https://en.wikipedia.org/wiki/Binary_heap
24 https://en.wikipedia.org/wiki/Heap_sort
25 https://en.wikipedia.org/wiki/Pigeonhole_sort


Instead of using an integer priority queue in a sorting algorithm, it is possible to go the


other direction, and use integer sorting algorithms as subroutines within an integer priority
queue data structure. Thorup (2007)26 used this idea to show that, if it is possible to
perform integer sorting in time T(n) per key, then the same time bound applies to the time
per insertion or deletion operation in a priority queue data structure. Thorup's reduction
is complicated and assumes the availability of either fast multiplication operations or table
lookups, but he also provides an alternative priority queue using only addition and Boolean
operations with time T(n) + T(log n) + T(log log n) + ... per operation, at most multiplying
the time by an iterated logarithm27 .[5]

11.1.3 Usability

The classical integer sorting algorithms of pigeonhole sort28 , counting sort29 , and radix
sort30 are widely used and practical.[6] Much of the subsequent research on integer sorting
algorithms has focused less on practicality and more on theoretical improvements in their
worst case analysis31 , and the algorithms that come from this line of research are not believed
to be practical for current 64-bit32 computer architectures, although experiments have shown
that some of these methods may be an improvement on radix sorting for data with 128
or more bits per key.[7] Additionally, for large data sets, the near-random memory access
patterns33 of many integer sorting algorithms can handicap them compared to comparison
sorting algorithms that have been designed with the memory hierarchy34 in mind.[8]
Integer sorting provides one of the six benchmarks35 in the DARPA36 High Productivity
Computing Systems37 Discrete Mathematics benchmark suite,[9] and one of eleven bench-
marks in the NAS Parallel Benchmarks38 suite.

11.2 Practical algorithms

Pigeonhole sort39 or counting sort40 can both sort n data items having keys in the range
from 0 to K − 1 in time O(n + K). In pigeonhole sort41 (often called bucket sort), pointers
to the data items are distributed to a table of buckets, represented as collection42 data types

26 #CITEREFThorup2007
27 https://en.wikipedia.org/wiki/Iterated_logarithm
28 https://en.wikipedia.org/wiki/Pigeonhole_sort
29 https://en.wikipedia.org/wiki/Counting_sort
30 https://en.wikipedia.org/wiki/Radix_sort
31 https://en.wikipedia.org/wiki/Worst_case_analysis
32 https://en.wikipedia.org/wiki/64-bit
33 https://en.wikipedia.org/wiki/Memory_access_pattern
34 https://en.wikipedia.org/wiki/Memory_hierarchy
35 https://en.wikipedia.org/wiki/Benchmark_(computing)
36 https://en.wikipedia.org/wiki/DARPA
37 https://en.wikipedia.org/wiki/High_Productivity_Computing_Systems
38 https://en.wikipedia.org/wiki/NAS_Parallel_Benchmarks
39 https://en.wikipedia.org/wiki/Pigeonhole_sort
40 https://en.wikipedia.org/wiki/Counting_sort
41 https://en.wikipedia.org/wiki/Pigeonhole_sort
42 https://en.wikipedia.org/wiki/Collection_(abstract_data_type)


such as linked lists43 , using the keys as indices into the table. Then, all of the buckets are
concatenated together to form the output list.[10] Counting sort uses a table of counters
in place of a table of buckets, to determine the number of items with each key. Then, a
prefix sum44 computation is used to determine the range of positions in the sorted output at
which the values with each key should be placed. Finally, in a second pass over the input,
each item is moved to its key's position in the output array.[11] Both algorithms involve
only simple loops over the input data (taking time O(n)) and over the set of possible keys
(taking time O(K)), giving their O(n + K) overall time bound.
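The two passes of counting sort can be sketched in a few lines of Python (an illustrative sketch, not part of the original text; the function name is mine). Note that the placement step preserves the input order of equal keys, i.e. the sort is stable:

```python
def counting_sort(items, key, K):
    """Stably sort items whose key(item) lies in range(K), in O(n + K) time."""
    counts = [0] * K
    for x in items:            # first pass: count the items with each key
        counts[key(x)] += 1
    total = 0
    for k in range(K):         # prefix sums: first output position for each key
        counts[k], total = total, total + counts[k]
    out = [None] * len(items)
    for x in items:            # second pass: move each item to its position
        out[counts[key(x)]] = x
        counts[key(x)] += 1    # next equal key goes one slot later (stability)
    return out
```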
Radix sort45 is a sorting algorithm that works for larger keys than pigeonhole sort or count-
ing sort by performing multiple passes over the data. Each pass sorts the input using only
part of the keys, by using a different sorting algorithm (such as pigeonhole sort or counting
sort) that is suited only for small keys. To break the keys into parts, the radix sort algorithm
computes the positional notation46 for each key, according to some chosen radix47 ; then,
the part of the key used for the ith pass of the algorithm is the ith digit in the positional
notation for the full key, starting from the least significant digit and progressing to the most
significant. For this algorithm to work correctly, the sorting algorithm used in each pass
over the data must be stable48 : items with equal digits should not change positions with
each other. For greatest efficiency, the radix should be chosen to be near the number of
data items, n. Additionally, using a power of two49 near n as the radix allows the keys for
each pass to be computed quickly using only fast binary shift and mask operations. With
these choices, and with pigeonhole sort or counting sort as the base algorithm, the radix
sorting algorithm can sort n data items having keys in the range from 0 to K − 1 in time
O(n log_n K).[12]
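The scheme just described can be sketched in Python (illustrative only, not from the original text): a power-of-two radix lets each digit be extracted with a binary shift and a mask, and a stable counting sort handles each pass, starting from the least significant digit.

```python
def radix_sort(keys, key_bits, radix_bits=8):
    """LSD radix sort of non-negative integer keys of at most key_bits bits,
    using a stable counting sort on radix_bits-sized digits per pass."""
    mask = (1 << radix_bits) - 1
    for shift in range(0, key_bits, radix_bits):   # least significant digit first
        counts = [0] * (1 << radix_bits)
        for k in keys:                             # count digit occurrences
            counts[(k >> shift) & mask] += 1
        total = 0
        for d in range(len(counts)):               # prefix sums
            counts[d], total = total, total + counts[d]
        out = [0] * len(keys)
        for k in keys:                             # stable placement
            out[counts[(k >> shift) & mask]] = k
            counts[(k >> shift) & mask] += 1
        keys = out
    return keys
```

Here the radix 2^radix_bits plays the role of the radix chosen near n in the analysis above; a production version would pick radix_bits from n rather than fixing it.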

11.3 Theoretical algorithms

Many integer sorting algorithms have been developed whose theoretical analysis shows them
to behave better than comparison sorting, pigeonhole sorting, or radix sorting for large
enough combinations of the parameters defining the number of items to be sorted, range of
keys, and machine word size. Which algorithm has the best performance depends on the
values of these parameters. However, despite their theoretical advantages, these algorithms
are not an improvement for the typical ranges of these parameters that arise in practical
sorting problems.[7]

43 https://en.wikipedia.org/wiki/Linked_list
44 https://en.wikipedia.org/wiki/Prefix_sum
45 https://en.wikipedia.org/wiki/Radix_sort
46 https://en.wikipedia.org/wiki/Positional_notation
47 https://en.wikipedia.org/wiki/Radix
48 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
49 https://en.wikipedia.org/wiki/Power_of_two


11.3.1 Algorithms for small keys

A Van Emde Boas tree50 may be used as a priority queue to sort a set of n keys, each in
the range from 0 to K − 1, in time O(n log log K). This is a theoretical improvement over
radix sorting when K is sufficiently large. However, in order to use a Van Emde Boas tree,
one either needs a directly addressable memory of K words, or one needs to simulate it
using a hash table51 , reducing the space to linear but making the algorithm be randomized.
Another priority queue with similar performance (including the need for randomization in
the form of hash tables) is the Y-fast trie52 of Willard (1983)53 .
A more sophisticated technique with a similar flavor and with better theoretical performance
was developed by Kirkpatrick & Reisch (1984)54 . They observed that each pass of radix sort
can be interpreted as a range reduction technique that, in linear time, reduces the maximum
key size by a factor of n; instead, their technique reduces the key size to the square root of its
previous value (halving the number of bits needed to represent a key), again in linear time.
As in radix sort, they interpret the keys as two-digit base-b numbers for a base b that is
approximately √K. They then group the items to be sorted into buckets according to their
high digits, in linear time, using either a large but uninitialized direct addressed memory
or a hash table. Each bucket has a representative, the item in the bucket with the largest
key; they then sort the list of items using as keys the high digits for the representatives and
the low digits for the non-representatives. By grouping the items from this list into buckets
again, each bucket may be placed into sorted order, and by extracting the representatives
from the sorted list the buckets may be concatenated together into sorted order. Thus, in
linear time, the sorting problem is reduced to another recursive sorting problem in which
the keys are much smaller, the square root of their previous magnitude. Repeating this
range reduction until the keys are small enough to bucket sort leads to an algorithm with
running time O(n log log_n K).
A complicated randomized algorithm of Han & Thorup (2002)55 in the word RAM56 model
of computation57 allows these time bounds to be reduced even further, to O(n·√(log log K)).

11.3.2 Algorithms for large words

An integer sorting algorithm is said to be non-conservative if it requires a word size w that is


significantly larger than log max(n, K).[13] As an extreme instance, if w ≥K, and all keys are
distinct, then the set of keys may be sorted in linear time by representing it as a bitvector58 ,
with a 1 bit in position i when i is one of the input keys, and then repeatedly extracting the
least significant set bit.[14]
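This extreme case is small enough to sketch directly. In the following Python illustration (not from the original text), an arbitrary-precision integer stands in for a single machine word of w ≥ K bits; the classic expression word & -word isolates the least significant set bit:

```python
def bitvector_sort(keys, K):
    """Sort distinct keys in range(K), simulating one K-bit machine word."""
    word = 0
    for k in keys:
        word |= 1 << k              # set bit i for each input key i
    out = []
    while word:
        low = word & -word          # isolate the least significant 1-bit
        out.append(low.bit_length() - 1)  # recover its position
        word ^= low                 # remove that bit and continue
    return out
```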

50 https://en.wikipedia.org/wiki/Van_Emde_Boas_tree
51 https://en.wikipedia.org/wiki/Hash_table
52 https://en.wikipedia.org/wiki/Y-fast_trie
53 #CITEREFWillard1983
54 #CITEREFKirkpatrickReisch1984
55 #CITEREFHanThorup2002
56 https://en.wikipedia.org/wiki/Word_RAM
57 https://en.wikipedia.org/wiki/Model_of_computation
58 https://en.wikipedia.org/wiki/Bitvector


The non-conservative packed sorting algorithm of Albers & Hagerup (1997)59 uses a subrou-
tine, based on Ken Batcher60 's bitonic sorting network61 , for merging62 two sorted sequences
of keys that are each short enough to be packed into a single machine word. The input to
the packed sorting algorithm, a sequence of items stored one per word, is transformed into
a packed form, a sequence of words each holding multiple items in sorted order, by using
this subroutine repeatedly to double the number of items packed into each word. Once
the sequence is in packed form, Albers and Hagerup use a form of merge sort63 to sort it;
when two sequences are being merged to form a single longer sequence, the same bitonic
sorting subroutine can be used to repeatedly extract packed words consisting of the smallest
remaining elements of the two sequences. This algorithm gains enough of a speedup from
its packed representation to sort its input in linear time whenever it is possible for a single
word to contain Ω(log n log log n) keys; that is, when log K · log n · log log n ≤ cw for some
constant c > 0.

11.3.3 Algorithms for few items

Pigeonhole sort, counting sort, radix sort, and Van Emde Boas tree sorting all work best
when the key size is small; for large enough keys, they become slower than comparison
sorting algorithms. However, when the key size or the word size is very large relative to the
number of items (or equivalently when the number of items is small), it may again become
possible to sort quickly, using different algorithms that take advantage of the parallelism
inherent in the ability to perform arithmetic operations on large words.
An early result in this direction was provided by Ajtai, Fredman & Komlós (1984)64 using
the cell-probe model65 of computation (an artificial model in which the complexity of an
algorithm is measured only by the number of memory accesses it performs). Building on
their work, Fredman & Willard (1994)66 described two data structures, the Q-heap and the
atomic heap, that are implementable on a random access machine. The Q-heap is a bit-
parallel version of a binary trie67 , and allows both priority queue operations and successor
and predecessor queries to be performed in constant time for sets of O((log N)^(1/4)) items,
where N ≤ 2^w is the size of the precomputed tables needed to implement the data structure.
The atomic heap is a B-tree68 in which each tree node is represented as a Q-heap; it allows
constant time priority queue operations (and therefore sorting) for sets of (log N)^(O(1)) items.
Andersson et al. (1998)69 provide a randomized algorithm called signature sort that allows
for linear time sorting of sets of up to 2^(O((log w)^(1/2 − ε))) items at a time, for any constant ε
> 0. As in the algorithm of Kirkpatrick and Reisch, they perform range reduction using
a representation of the keys as numbers in base b for a careful choice of b. Their range

59 #CITEREFAlbersHagerup1997
60 https://en.wikipedia.org/wiki/Ken_Batcher
61 https://en.wikipedia.org/wiki/Bitonic_sorter
62 https://en.wikipedia.org/wiki/Merge_algorithm
63 https://en.wikipedia.org/wiki/Merge_sort
64 #CITEREFAjtaiFredmanKoml%C3%B3s1984
65 https://en.wikipedia.org/wiki/Cell-probe_model
66 #CITEREFFredmanWillard1994
67 https://en.wikipedia.org/wiki/Trie
68 https://en.wikipedia.org/wiki/B-tree
69 #CITEREFAnderssonHagerupNilssonRaman1998


reduction algorithm replaces each digit by a signature, which is a hashed value with O(log
n) bits such that different digit values have different signatures. If n is sufficiently small, the
numbers formed by this replacement process will be significantly smaller than the original
keys, allowing the non-conservative packed sorting algorithm of Albers & Hagerup (1997)70
to sort the replaced numbers in linear time. From the sorted list of replaced numbers, it is
possible to form a compressed trie71 of the keys in linear time, and the children of each node
in the trie may be sorted recursively using only keys of size b, after which a tree traversal
produces the sorted order of the items.

11.3.4 Trans-dichotomous algorithms

Fredman & Willard (1993)72 introduced the transdichotomous model73 of analysis for integer
sorting algorithms, in which nothing is assumed about the range of the integer keys and
one must bound the algorithm's performance by a function of the number of data values
alone. Alternatively, in this model, the running time for an algorithm on a set of n items
is assumed to be the worst case74 running time for any possible combination of values of
K and w. The first algorithm of this type was Fredman and Willard's fusion tree75 sorting
algorithm, which runs in time O(n log n / log log n); this is an improvement over comparison
sorting for any choice of K and w. An alternative version of their algorithm that includes
the use of random numbers and integer division operations improves this to O(n √(log n)).
Since their work, even better algorithms have been developed. For instance, by repeatedly
applying the Kirkpatrick–Reisch range reduction technique until the keys are small enough
to apply the Albers–Hagerup packed sorting algorithm, it is possible to sort in time
O(n log log n); however, the range reduction part of this algorithm requires either a large
memory (proportional to K) or randomization in the form of hash tables.[15]

Han & Thorup (2002)76 showed how to sort in randomized time O(n √(log log n)). Their
technique involves using ideas related to signature sorting to partition the data into many
small sublists, of a size small enough that signature sorting can sort each of them efficiently.
It is also possible to use similar ideas to sort integers deterministically in time O(n log
log n) and linear space.[16] Using only simple arithmetic operations (no multiplications
or table lookups) it is possible to sort in randomized expected time O(n log log n)[17] or
deterministically in time O(n (log log n)^(1+ε)) for any constant ε > 0.[1]

11.4 References

Footnotes

70 #CITEREFAlbersHagerup1997
71 https://en.wikipedia.org/wiki/Trie
72 #CITEREFFredmanWillard1993
73 https://en.wikipedia.org/wiki/Transdichotomous_model
74 https://en.wikipedia.org/wiki/Worst_case
75 https://en.wikipedia.org/wiki/Fusion_tree
76 #CITEREFHanThorup2002


1. Han & Thorup (2002)77 .


2. Fredman & Willard (1993)78 .
3. The question of whether integer multiplication or table lookup operations should be
permitted goes back to Fredman & Willard (1993)79 ; see also Andersson, Miltersen &
Thorup (1999)80 .
4. Reif (1985)81 ; comment in Cole & Vishkin (1986)82 ; Hagerup (1987)83 ; Bhatt et al.
(1991)84 ; Albers & Hagerup (1997)85 .
5. Chowdhury (2008)86 .
6. McIlroy, Bostic & McIlroy (1993)87 ; Andersson & Nilsson (1998)88 .
7. Rahman & Raman (1998)89 .
8. Pedersen (1999)90 .
9. DARPA HPCS Discrete Mathematics Benchmarks91 , Duncan A. Buell, University of
South Carolina, retrieved 2011-04-20.
10. Goodrich & Tamassia (2002)92 . Although Cormen et al. (2001)93 also describe a
version of this sorting algorithm, the version they describe is adapted to inputs where
the keys are real numbers with a known distribution, rather than to integer sorting.
11. Cormen et al. (2001)94 , 8.2 Counting Sort, pp. 168–169.
12. Comrie (1929–1930)95 ; Cormen et al. (2001)96 , 8.3 Radix Sort, pp. 170–173.
13. Kirkpatrick & Reisch (1984)97 ; Albers & Hagerup (1997)98 .
14. Kirkpatrick & Reisch (1984)99 .
15. Andersson et al. (1998)100 .
16. Han (2004)101 .
17. Thorup (2002)102
Secondary sources

77 #CITEREFHanThorup2002
78 #CITEREFFredmanWillard1993
79 #CITEREFFredmanWillard1993
80 #CITEREFAnderssonMiltersenThorup1999
81 #CITEREFReif1985
82 #CITEREFColeVishkin1986
83 #CITEREFHagerup1987
84 #CITEREFBhattDiksHagerupPrasad1991
85 #CITEREFAlbersHagerup1997
86 #CITEREFChowdhury2008
87 #CITEREFMcIlroyBosticMcIlroy1993
88 #CITEREFAnderssonNilsson1998
89 #CITEREFRahmanRaman1998
90 #CITEREFPedersen1999
91 http://www.cse.sc.edu/~buell/Public_Data/DARPA_HPCS/DARPA_discrete_math.html
92 #CITEREFGoodrichTamassia2002
93 #CITEREFCormenLeisersonRivestStein2001
94 #CITEREFCormenLeisersonRivestStein2001
95 #CITEREFComrie1929%E2%80%931930
96 #CITEREFCormenLeisersonRivestStein2001
97 #CITEREFKirkpatrickReisch1984
98 #CITEREFAlbersHagerup1997
99 #CITEREFKirkpatrickReisch1984
100 #CITEREFAnderssonHagerupNilssonRaman1998
101 #CITEREFHan2004
102 #CITEREFThorup2002


• C, R A. (2008), ”E    


”103 ,  K, M-Y (.), Encyclopedia of Algorithms, Springer, pp. 278–
281, ISBN104 9780387307701105 .
• C, T H.106 ; L, C E.107 ; R, R L.108 ; S,
C109 (2001), Introduction to Algorithms110 (2 .), MIT P111 
MG-H112 , ISBN113 0-262-03293-7114 .
• G, M T.115 ; T, R116 (2002), ”4.5 B-S 
R-S”, Algorithm Design: Foundations, Analysis, and Internet Examples, John
Wiley & Sons, pp. 241–243.
Primary sources
• A, M.117 ; F, M.118 ; K, J.119 (1984), ”H   -
 ”, Information and Control120 , 63 (3): 217–225, doi121 :10.1016/S0019-
9958(84)80015-7122 , MR123 0837087124 .
• A, S125 ; H, T (1997), ”I  -
    ”, Information and Computa-
tion126 , 136 (1): 25–51, CiteSeerX127 10.1.1.53.498128 , doi129 :10.1006/inco.1997.2632130 ,
MR131 1457693132 .

103 https://books.google.com/books?id=i3S9_GnHZwYC&pg=PA278
104 https://en.wikipedia.org/wiki/ISBN_(identifier)
105 https://en.wikipedia.org/wiki/Special:BookSources/9780387307701
106 https://en.wikipedia.org/wiki/Thomas_H._Cormen
107 https://en.wikipedia.org/wiki/Charles_E._Leiserson
108 https://en.wikipedia.org/wiki/Ron_Rivest
109 https://en.wikipedia.org/wiki/Clifford_Stein
110 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
111 https://en.wikipedia.org/wiki/MIT_Press
112 https://en.wikipedia.org/wiki/McGraw-Hill
113 https://en.wikipedia.org/wiki/ISBN_(identifier)
114 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
115 https://en.wikipedia.org/wiki/Michael_T._Goodrich
116 https://en.wikipedia.org/wiki/Roberto_Tamassia
117 https://en.wikipedia.org/wiki/Mikl%C3%B3s_Ajtai
118 https://en.wikipedia.org/wiki/Michael_Fredman
119 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
120 https://en.wikipedia.org/wiki/Information_and_Computation
121 https://en.wikipedia.org/wiki/Doi_(identifier)
122 https://doi.org/10.1016%2FS0019-9958%2884%2980015-7
123 https://en.wikipedia.org/wiki/MR_(identifier)
124 http://www.ams.org/mathscinet-getitem?mr=0837087
125 https://en.wikipedia.org/wiki/Susanne_Albers
126 https://en.wikipedia.org/wiki/Information_and_Computation
127 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
128 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.498
129 https://en.wikipedia.org/wiki/Doi_(identifier)
130 https://doi.org/10.1006%2Finco.1997.2632
131 https://en.wikipedia.org/wiki/MR_(identifier)
132 http://www.ams.org/mathscinet-getitem?mr=1457693


• A, A; H, T; N, S; R, R (1998),


”S   ?”, Journal of Computer and System Sciences133 , 57 (1): 74–93,
doi134 :10.1006/jcss.1998.1580135 , MR136 1649809137 .
• A, A; N, S (1998), ”I ”,
ACM Journal of Experimental Algorithmics, 3: 7–es, CiteSeerX138 10.1.1.54.4536139 ,
doi140 :10.1145/297096.297136141 , MR142 1717389143 .
• A, A; M, P B; T, M144 (1999), ”F-
      AC0 instructions only”, Theoretical Computer
Science145 , 215 (1–2): 337–344, CiteSeerX146 10.1.1.32.9401147 , doi148 :10.1016/S0304-
3975(98)00172-8149 , MR150 1678804151 .
• B, P. C. P.; D, K.; H, T.; P, V. C.; R, T.; S, S.
(1991), ”I    ”, Information and Com-
putation152 , 94 (1): 29–47, doi153 :10.1016/0890-5401(91)90031-V154 , MR155 1123154156 .
• C, R.; V, U.157 (1986), ”D    
    ”, Information and Control158 , 70 (1): 32–53,
doi159 :10.1016/S0019-9958(86)80023-7160 .
• C, L. J. (1929–1930), ”T H  P  ”,
Trans. Office Mach. Users' Assoc., LTD.: 25–37. Cited by Thorup (2007)161 as an early
source for radix sort162 .

133 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
134 https://en.wikipedia.org/wiki/Doi_(identifier)
135 https://doi.org/10.1006%2Fjcss.1998.1580
136 https://en.wikipedia.org/wiki/MR_(identifier)
137 http://www.ams.org/mathscinet-getitem?mr=1649809
138 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
139 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.4536
140 https://en.wikipedia.org/wiki/Doi_(identifier)
141 https://doi.org/10.1145%2F297096.297136
142 https://en.wikipedia.org/wiki/MR_(identifier)
143 http://www.ams.org/mathscinet-getitem?mr=1717389
144 https://en.wikipedia.org/wiki/Mikkel_Thorup
145 https://en.wikipedia.org/wiki/Theoretical_Computer_Science_(journal)
146 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
147 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.9401
148 https://en.wikipedia.org/wiki/Doi_(identifier)
149 https://doi.org/10.1016%2FS0304-3975%2898%2900172-8
150 https://en.wikipedia.org/wiki/MR_(identifier)
151 http://www.ams.org/mathscinet-getitem?mr=1678804
152 https://en.wikipedia.org/wiki/Information_and_Computation
153 https://en.wikipedia.org/wiki/Doi_(identifier)
154 https://doi.org/10.1016%2F0890-5401%2891%2990031-V
155 https://en.wikipedia.org/wiki/MR_(identifier)
156 http://www.ams.org/mathscinet-getitem?mr=1123154
157 https://en.wikipedia.org/wiki/Uzi_Vishkin
158 https://en.wikipedia.org/wiki/Information_and_Computation
159 https://en.wikipedia.org/wiki/Doi_(identifier)
160 https://doi.org/10.1016%2FS0019-9958%2886%2980023-7
161 #CITEREFThorup2007
162 https://en.wikipedia.org/wiki/Radix_sort


• F, M L.163 ; W, D E.164 (1993), ”S 


-    ”, Journal of Computer and System
Sciences165 , 47 (3): 424–436, doi166 :10.1016/0022-0000(93)90040-4167 , MR168 1248864169 .
• F, M L.170 ; W, D E.171 (1994), ”T- -
       ”, Journal of Com-
puter and System Sciences172 , 48 (3): 533–551, doi173 :10.1016/S0022-0000(05)80064-9174 ,
MR175 1279413176 .
• H, T (1987), ”T    ”, In-
formation and Computation177 , 75 (1): 39–51, doi178 :10.1016/0890-5401(87)90062-9179 ,
MR180 0910976181 .
• H, Y (2004), ”D   O(n log log n) time and linear
space”, Journal of Algorithms, 50 (1): 96–105, doi182 :10.1016/j.jalgor.2003.09.001183 ,
MR184 2028585185 .

• H, Y; T, M.186 (2002), ”I   O(n log log n) ex-
pected time and linear space”, Proceedings of the 43rd Annual Symposium on Foun-
dations of Computer Science (FOCS 2002)187 , IEEE Computer Society, pp. 135–144,
doi188 :10.1109/SFCS.2002.1181890189 .
• K, D190 ; R, S (1984), ”U    -
    ”, Theoretical Computer Science191 , 28 (3): 263–276,
doi192 :10.1016/0304-3975(83)90023-3193 , MR194 0742289195 .

163 https://en.wikipedia.org/wiki/Michael_Fredman
164 https://en.wikipedia.org/wiki/Dan_Willard
165 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
166 https://en.wikipedia.org/wiki/Doi_(identifier)
167 https://doi.org/10.1016%2F0022-0000%2893%2990040-4
168 https://en.wikipedia.org/wiki/MR_(identifier)
169 http://www.ams.org/mathscinet-getitem?mr=1248864
170 https://en.wikipedia.org/wiki/Michael_Fredman
171 https://en.wikipedia.org/wiki/Dan_Willard
172 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
173 https://en.wikipedia.org/wiki/Doi_(identifier)
174 https://doi.org/10.1016%2FS0022-0000%2805%2980064-9
175 https://en.wikipedia.org/wiki/MR_(identifier)
176 http://www.ams.org/mathscinet-getitem?mr=1279413
177 https://en.wikipedia.org/wiki/Information_and_Computation
178 https://en.wikipedia.org/wiki/Doi_(identifier)
179 https://doi.org/10.1016%2F0890-5401%2887%2990062-9
180 https://en.wikipedia.org/wiki/MR_(identifier)
181 http://www.ams.org/mathscinet-getitem?mr=0910976
182 https://en.wikipedia.org/wiki/Doi_(identifier)
183 https://doi.org/10.1016%2Fj.jalgor.2003.09.001
184 https://en.wikipedia.org/wiki/MR_(identifier)
185 http://www.ams.org/mathscinet-getitem?mr=2028585
186 https://en.wikipedia.org/wiki/Mikkel_Thorup
187 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
188 https://en.wikipedia.org/wiki/Doi_(identifier)
189 https://doi.org/10.1109%2FSFCS.2002.1181890
190 https://en.wikipedia.org/wiki/David_G._Kirkpatrick
191 https://en.wikipedia.org/wiki/Theoretical_Computer_Science_(journal)
192 https://en.wikipedia.org/wiki/Doi_(identifier)
193 https://doi.org/10.1016%2F0304-3975%2883%2990023-3
194 https://en.wikipedia.org/wiki/MR_(identifier)
195 http://www.ams.org/mathscinet-getitem?mr=0742289


• MI, P M.; B, K; MI, M. D (1993), ”E


R S”196 (PDF), Computing Systems, 6 (1): 5–27.
• P, M N (1999), A study of the practical significance of word RAM
algorithms for internal integer sorting197 , M , D  C
S, U  C, D,    198
 2012-03-16,  2011-04-21.
• R, N; R, R (1998), ”A    -
     ”, Algorithm Engineering, 2nd In-
ternational Workshop, WAE '92, Saarbrücken, Germany, August 20–22, 1998, Proceed-
ings199 (PDF), M P I  C S200 , . 193–203.
• R, J H.201 (1985), ”A      ”,
Proceedings of the 26th Annual Symposium on Foundations of Computer Science (FOCS
1985)202 , IEEE C S, . 496–504, 203 :10.1109/SFCS.1985.9204 .
• T, M205 (2002), ”R   O(n log log n) time and linear
space using addition, shift, and bit-wise Boolean operations”, Journal of Algorithms,
42 (2): 205–230, CiteSeerX206 10.1.1.55.4443207 , doi208 :10.1006/jagm.2002.1211209 ,
MR210 1895974211 .
• T, M212 (2003), ”O AC0 implementations of fusion trees and atomic
heaps”, Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algo-
rithms (Baltimore, MD, 2003)213 , New York: ACM, pp. 699–707, MR214 1974982215 .
• T, M216 (2007), ”E     -
”, Journal of the ACM217 , 54 (6): Art. 28, doi218 :10.1145/1314690.1314692219 ,
MR220 2374029221 .

196 http://www.usenix.org/publications/compsystems/1993/win_mcilroy.pdf
https://web.archive.org/web/20120316082511/http://www.diku.dk/forskning/performance-
197
engineering/Publications/pedersen99.ps
198 http://www.diku.dk/forskning/performance-engineering/Publications/pedersen99.ps
199 http://www.cs.ru.nl/~elenam/WAEMS.pdf
200 https://en.wikipedia.org/wiki/Max_Planck_Institute_for_Computer_Science
201 https://en.wikipedia.org/wiki/John_Reif
202 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
203 https://en.wikipedia.org/wiki/Doi_(identifier)
204 https://doi.org/10.1109%2FSFCS.1985.9
205 https://en.wikipedia.org/wiki/Mikkel_Thorup
206 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
207 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.4443
208 https://en.wikipedia.org/wiki/Doi_(identifier)
209 https://doi.org/10.1006%2Fjagm.2002.1211
210 https://en.wikipedia.org/wiki/MR_(identifier)
211 http://www.ams.org/mathscinet-getitem?mr=1895974
212 https://en.wikipedia.org/wiki/Mikkel_Thorup
213 http://portal.acm.org/citation.cfm?id=644221
214 https://en.wikipedia.org/wiki/MR_(identifier)
215 http://www.ams.org/mathscinet-getitem?mr=1974982
216 https://en.wikipedia.org/wiki/Mikkel_Thorup
217 https://en.wikipedia.org/wiki/Journal_of_the_ACM
218 https://en.wikipedia.org/wiki/Doi_(identifier)
219 https://doi.org/10.1145%2F1314690.1314692
220 https://en.wikipedia.org/wiki/MR_(identifier)
221 http://www.ams.org/mathscinet-getitem?mr=2374029


• W, D E.222 (1983), ”L- -  


    Θ(N)”, Information Processing Letters223 , 17 (2): 81–84,
doi224 :10.1016/0020-0190(83)90075-3225 , MR226 0731126227 .

222 https://en.wikipedia.org/wiki/Dan_Willard
223 https://en.wikipedia.org/wiki/Information_Processing_Letters
224 https://en.wikipedia.org/wiki/Doi_(identifier)
225 https://doi.org/10.1016%2F0020-0190%2883%2990075-3
226 https://en.wikipedia.org/wiki/MR_(identifier)
227 http://www.ams.org/mathscinet-getitem?mr=0731126

12 Counting sort

Counting sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n + k), where k is the range of the non-negative key values
Worst-case space complexity: O(n + k)

In computer science1 , counting sort is an algorithm2 for sorting3 a collection of objects


according to keys that are small integers4 ; that is, it is an integer sorting5 algorithm. It
operates by counting the number of objects that have each distinct key value, and using
arithmetic on those counts to determine the positions of each key value in the output
sequence. Its running time is linear in the number of items and the difference between the
maximum and minimum key values, so it is only suitable for direct use in situations where
the variation in keys is not significantly greater than the number of items. However, it is
often used as a subroutine in another sorting algorithm, radix sort6 , that can handle larger
keys more efficiently.[1][2][3]
Because counting sort uses key values as indexes into an array, it is not a comparison sort7 ,
and the Ω8 (n log n) lower bound9 for comparison sorting does not apply to it.[1] Bucket
sort10 may be used for many of the same tasks as counting sort, with a similar time analysis;
however, compared to counting sort, bucket sort requires linked lists11 , dynamic arrays12
or a large amount of preallocated memory to hold the sets of items within each bucket,
whereas counting sort instead stores a single number (the count of items) per bucket.[4]

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Sorting_algorithm
4 https://en.wikipedia.org/wiki/Integer
5 https://en.wikipedia.org/wiki/Integer_sorting
6 https://en.wikipedia.org/wiki/Radix_sort
7 https://en.wikipedia.org/wiki/Comparison_sort
https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_
8
notations
9 https://en.wikipedia.org/wiki/Lower_bound
10 https://en.wikipedia.org/wiki/Bucket_sort
11 https://en.wikipedia.org/wiki/Linked_list
12 https://en.wikipedia.org/wiki/Dynamic_array


12.1 Input and output assumptions

In the most general case, the input to counting sort consists of a collection13 of n items,
each of which has a non-negative integer key whose maximum value is at most k.[3] In
some descriptions of counting sort, the input to be sorted is assumed to be more simply a
sequence of integers itself,[1] but this simplification does not accommodate many applications
of counting sort. For instance, when used as a subroutine in radix sort14 , the keys for each
call to counting sort are individual digits of larger item keys; it would not suffice to return
only a sorted list of the key digits, separated from the items.
In applications such as in radix sort, a bound on the maximum key value k will be known
in advance, and can be assumed to be part of the input to the algorithm. However, if the
value of k is not already known then it may be computed, as a first step, by an additional
loop over the data to determine the maximum key value that actually occurs within the
data.
The output is an array15 of the items, in order by their keys. Because of the application to
radix sorting, it is important for counting sort to be a stable sort16 : if two items have the
same key as each other, they should have the same relative position in the output as they
did in the input.[1][2]

12.2 The algorithm

In pseudocode, the algorithm may be expressed as:


count = array of k + 1 zeros
for x in input do
    count[key(x)] += 1

total = 0
for i in 0, 1, ..., k do
    count[i], total = total, count[i] + total

output = array of the same length as input
for x in input do
    output[count[key(x)]] = x
    count[key(x)] += 1

return output

Here input is the input array to be sorted, key returns the numeric key of each item in
the input array, count is an auxiliary array used first to store the numbers of items with
each key, and then (after the second loop) to store the positions where items with each key
should be placed, k is the maximum value of the non-negative key values and output is the
sorted output array.

13 https://en.wikipedia.org/wiki/Collection_(abstract_data_type)
14 https://en.wikipedia.org/wiki/Radix_sort
15 https://en.wikipedia.org/wiki/Array_data_structure
16 https://en.wikipedia.org/wiki/Stable_sort


In summary, the algorithm loops over the items in the first loop, computing a histogram17 of
the number of times each key occurs within the input collection. After that, it then performs
a prefix sum18 computation on count to determine, for each key, the position range where
the items having that key should be placed in; i.e. items of key i should be placed starting
in position count[i]. This is done through the second loop. Finally, it loops over the items
again in the third loop, moving each item into its sorted position in the output array.[1][2][3]
The relative order of items with equal keys is preserved here; i.e., this is a stable sort19 .
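As a concrete illustration, the pseudocode above can be transcribed into Python. The function name, the key parameter, and the example list of (key, value) pairs are presentation choices for this sketch, not part of the algorithm's standard description.

```python
def counting_sort(items, k, key):
    """Stable counting sort of `items` whose keys, extracted by `key`,
    are integers in the range 0..k inclusive."""
    # First loop: histogram of key occurrences.
    count = [0] * (k + 1)
    for x in items:
        count[key(x)] += 1
    # Second loop: exclusive prefix sum; count[i] becomes the first
    # output position for items with key i.
    total = 0
    for i in range(k + 1):
        count[i], total = total, count[i] + total
    # Third loop: place each item; equal keys keep their input order,
    # which is what makes the sort stable.
    output = [None] * len(items)
    for x in items:
        output[count[key(x)]] = x
        count[key(x)] += 1
    return output

pairs = [(3, 'a'), (1, 'b'), (3, 'c'), (0, 'd'), (1, 'e')]
print(counting_sort(pairs, 3, key=lambda p: p[0]))
# → [(0, 'd'), (1, 'b'), (1, 'e'), (3, 'a'), (3, 'c')]
```

Note that the items with equal keys ((1, 'b') before (1, 'e'), and (3, 'a') before (3, 'c')) appear in the output in the same relative order as in the input.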

12.3 Complexity analysis

Because the algorithm uses only simple for loops, without recursion or subroutine calls,
it is straightforward to analyze. The initialization of the count array, and the second for
loop which performs a prefix sum on the count array, each iterate at most k + 1 times and
therefore take O(k) time. The other two for loops, and the initialization of the output array,
each take O(n) time. Therefore, the time for the whole algorithm is the sum of the times
for these steps, O(n + k).[1][2]
Because it uses arrays of length k + 1 and n, the total space usage of the algorithm is also
O(n + k).[1] For problem instances in which the maximum key value is significantly smaller
than the number of items, counting sort can be highly space-efficient, as the only storage it
uses other than its input and output arrays is the Count array which uses space O(k).[5]

12.4 Variant algorithms

If each item to be sorted is itself an integer, and used as key as well, then the second and third
loops of counting sort can be combined; in the second loop, instead of computing the position
where items with key i should be placed in the output, simply append Count[i] copies of
the number i to the output.
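A minimal sketch of this combined form for inputs that are bare integers (the function name is illustrative):

```python
def counting_sort_ints(values, k):
    # Histogram of the integer values, which serve as their own keys.
    count = [0] * (k + 1)
    for v in values:
        count[v] += 1
    # Emit count[i] copies of each value i; the prefix-sum and
    # placement loops of the general algorithm are no longer needed.
    output = []
    for i, c in enumerate(count):
        output.extend([i] * c)
    return output
```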
This algorithm may also be used to eliminate duplicate keys, by replacing the Count array
with a bit vector20 that stores a one for a key that is present in the input and a zero for
a key that is not present. If additionally the items are the integer keys themselves, both
second and third loops can be omitted entirely and the bit vector will itself serve as output,
representing the values as offsets of the non-zero entries, added to the range's lowest value.
Thus the keys are sorted and the duplicates are eliminated in this variant just by being
placed into the bit array.
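A sketch of this duplicate-eliminating variant, using a Python list of booleans in place of a packed bit vector; the function name and the explicit lo/hi range parameters are choices made for this example:

```python
def sort_unique(values, lo, hi):
    # One flag per possible key in lo..hi; a list of booleans stands in
    # for a packed bit vector.
    present = [False] * (hi - lo + 1)
    for v in values:
        present[v - lo] = True
    # The offsets of the set flags, shifted by the range's lowest
    # value, are the distinct input values in sorted order.
    return [lo + i for i, flag in enumerate(present) if flag]
```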
For data in which the maximum key size is significantly smaller than the number of data
items, counting sort may be parallelized21 by splitting the input into subarrays of approx-
imately equal size, processing each subarray in parallel to generate a separate count array
for each subarray, and then merging the count arrays. When used as part of a parallel radix

17 https://en.wikipedia.org/wiki/Histogram
18 https://en.wikipedia.org/wiki/Prefix_sum
19 https://en.wikipedia.org/wiki/Category:Stable_sorts
20 https://en.wikipedia.org/wiki/Bit_vector
21 https://en.wikipedia.org/wiki/Parallel_algorithm


sort algorithm, the key size (base of the radix representation) should be chosen to match
the size of the split subarrays.[6] The simplicity of the counting sort algorithm and its use
of the easily parallelizable prefix sum primitive also make it usable in more fine-grained
parallel algorithms.[7]
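The merging step described above amounts to an elementwise sum of the per-subarray histograms. In this sketch the partial counts are computed sequentially; a real parallel implementation would hand each subarray to a separate worker:

```python
def merged_counts(subarrays, k):
    # Each per-subarray histogram is independent of the others, so a
    # parallel version could build one per worker.
    partial = []
    for sub in subarrays:
        count = [0] * (k + 1)
        for v in sub:
            count[v] += 1
        partial.append(count)
    # Merging the count arrays is an elementwise sum over buckets.
    return [sum(column) for column in zip(*partial)]
```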
As described, counting sort is not an in-place algorithm22 ; even disregarding the count array,
it needs separate input and output arrays. It is possible to modify the algorithm so that it
places the items into sorted order within the same array that was given to it as the input,
using only the count array as auxiliary storage; however, the modified in-place version of
counting sort is not stable.[3]

12.5 History

Although radix sorting itself dates back far longer, counting sort, and its application to
radix sorting, were both invented by Harold H. Seward23 in 1954.[1][4][8]

12.6 References
1. C, T H.24 ; L, C E.25 ; R, R L.26 ; S,
C27 (2001), ”8.2 C S”, Introduction to Algorithms28 (2 .),
MIT P29  MG-H30 , . 168–170, ISBN31 0-262-03293-732 . See also
the historical notes on page 181.
2. E, J (2008), ”5.2 C S ( S S)”, How to Think
about Algorithms, Cambridge University Press, pp. 72–75, ISBN33 978-0-521-84931-
934 .
3. S, R35 (2003), ”6.10 K-I C”, Algorithms in
Java, Parts 1-4: Fundamentals, Data Structures, Sorting, and Searching (3rd ed.),
Addison-Wesley, pp. 312–314.

22 https://en.wikipedia.org/wiki/In-place_algorithm
23 https://en.wikipedia.org/wiki/Harold_H._Seward
24 https://en.wikipedia.org/wiki/Thomas_H._Cormen
25 https://en.wikipedia.org/wiki/Charles_E._Leiserson
26 https://en.wikipedia.org/wiki/Ron_Rivest
27 https://en.wikipedia.org/wiki/Clifford_Stein
28 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
29 https://en.wikipedia.org/wiki/MIT_Press
30 https://en.wikipedia.org/wiki/McGraw-Hill
31 https://en.wikipedia.org/wiki/ISBN_(identifier)
32 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
33 https://en.wikipedia.org/wiki/ISBN_(identifier)
34 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-84931-9
35 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)


4. K, D. E.36 (1998), The Art of Computer Programming37 , Volume 3: Sort-


ing and Searching (2nd ed.), Addison-Wesley, ISBN38 0-201-89685-039 . Section 5.2,
Sorting by counting, pp. 75–80, and historical notes, p. 170.
5. B, D S.; S, K (1980), ”S   
  ”, Proceedings of the 18th annual Southeast Regional
Conference, New York, NY, USA: ACM, pp. 23–31, doi40 :10.1145/503838.50385541 ,
ISBN42 089791014143 .
6. Z, M; B, G E.44 (1991), ”R   
”, Proceedings of Supercomputing '91, November 18-22, 1991,
Albuquerque, NM, USA45 , IEEE C S / ACM, . 712–721,
46 :10.1145/125826.12616447 , ISBN48 089791459749 .
7. R, J H.50 (1985), ”A      -
”, Proc. 26th Annual Symposium on Foundations of Computer Science (FOCS
1985)51 , . 496–504, 52 :10.1109/SFCS.1985.953 , ISBN54 0-8186-0644-455 .
8. S, H. H. (1954), ”2.4.6 I S  F D S”,
Information sorting in the application of electronic digital computers to business oper-
ations56 (PDF), M' , R R-232, M I 
T57 , D C L, . 25–28.

12.7 External links

The Wikibook Algorithm implementation58 has a page on the topic of: Counting
sort59

36 https://en.wikipedia.org/wiki/Donald_Knuth
37 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
38 https://en.wikipedia.org/wiki/ISBN_(identifier)
39 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
40 https://en.wikipedia.org/wiki/Doi_(identifier)
41 https://doi.org/10.1145%2F503838.503855
42 https://en.wikipedia.org/wiki/ISBN_(identifier)
43 https://en.wikipedia.org/wiki/Special:BookSources/0897910141
44 https://en.wikipedia.org/wiki/Guy_Blelloch
45 https://www.cs.cmu.edu/~scandal/papers/cray-sort-supercomputing91.ps.gz
46 https://en.wikipedia.org/wiki/Doi_(identifier)
47 https://doi.org/10.1145%2F125826.126164
48 https://en.wikipedia.org/wiki/ISBN_(identifier)
49 https://en.wikipedia.org/wiki/Special:BookSources/0897914597
50 https://en.wikipedia.org/wiki/John_Reif
51 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
52 https://en.wikipedia.org/wiki/Doi_(identifier)
53 https://doi.org/10.1109%2FSFCS.1985.9
54 https://en.wikipedia.org/wiki/ISBN_(identifier)
55 https://en.wikipedia.org/wiki/Special:BookSources/0-8186-0644-4
http://bitsavers.org/pdf/mit/whirlwind/R-series/R-232_Information_Sorting_in_the_
56
Application_of_Electronic_Digital_Computers_to_Business_Operations_May54.pdf
57 https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
58 https://en.wikibooks.org/wiki/Algorithm_implementation
59 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Counting_sort


• Counting Sort html5 visualization60


• Demonstration applet from Cardiff University61
• K, A S. (2 J 2006), ” ”,  B, P E. (.), Dictionary
of Algorithms and Data Structures62 , U.S. N I  S 
T,  2011-04-21.
• A simple Counting Sort implementation.63

Sorting algorithms

60 http://www.cs.usfca.edu/~galles/visualization/CountingSort.html
61 http://users.cs.cf.ac.uk/C.L.Mumford/tristan/CountingSort.html
62 https://xlinux.nist.gov/dads/HTML/countingsort.html
63 http://www.codenlearn.com/2011/07/simple-counting-sort.html

13 Bucket sort

This article is about a variation of bucket sorting that allows multiple keys per bucket. For
the variation with one key per bucket, see pigeonhole sort1 .

Bucket sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²)
Average performance: O(n + n²/k + k), where k is the number of buckets; O(n) when k ≈ n
Worst-case space complexity: O(n·k)


1 https://en.wikipedia.org/wiki/Pigeonhole_sort
2 https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Computer_science


Figure 31 Elements are distributed among bins

Figure 32 Then, elements are sorted within each bin

Bucket sort, or bin sort, is a sorting algorithm3 that works by distributing the elements
of an array4 into a number of buckets5 . Each bucket is then sorted individually, either using
a different sorting algorithm, or by recursively applying the bucket sorting algorithm. It is
a distribution sort6 , a generalization of pigeonhole sort7 , and is a cousin of radix sort8 in
the most-to-least significant digit flavor. Bucket sort can be implemented with comparisons
and therefore can also be considered a comparison sort9 algorithm. The computational

3 https://en.wikipedia.org/wiki/Sorting_algorithm
4 https://en.wikipedia.org/wiki/Array_data_structure
5 https://en.wikipedia.org/wiki/Bucket_(computing)
6 https://en.wikipedia.org/wiki/Distribution_sort
7 https://en.wikipedia.org/wiki/Pigeonhole_sort
8 https://en.wikipedia.org/wiki/Radix_sort
9 https://en.wikipedia.org/wiki/Comparison_sort


complexity10 depends on the algorithm used to sort each bucket, the number of buckets to
use, and whether the input is uniformly distributed.
Bucket sort works as follows:
1. Set up an array of initially empty ”buckets”.
2. Scatter: Go over the original array, putting each object in its bucket.
3. Sort each non-empty bucket.
4. Gather: Visit the buckets in order and put all elements back into the original array.

13.1 Pseudocode
function bucketSort(array, k) is
    buckets ← new array of k empty lists
    M ← the maximum key value in the array
    for i = 1 to length(array) do
        insert array[i] into buckets[floor(k × array[i] / M)]
    for i = 1 to k do
        nextSort(buckets[i])
    return the concatenation of buckets[1], ..., buckets[k]

Here array is the array to be sorted and k is the number of buckets to use. The maximum
key value can be computed in linear time11 by scanning all the keys once. The floor function12
must be used to convert a floating-point number to an integer. The function nextSort is a
sorting function used to sort each bucket. Conventionally, insertion sort13 would be used,
but other algorithms could be used as well. Using bucketSort itself as nextSort produces a
relative of radix sort14 ; in particular, the case k = 2 corresponds to quicksort15 (although
potentially with poor pivot choices).
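As a concrete sketch of the pseudocode above, a minimal Python version might look as follows. This is an illustration, not the article's reference implementation; it assumes non-negative numeric keys and clamps the bucket index so that the maximum key lands in the last bucket rather than one past it.

```python
def bucket_sort(array, k):
    """Bucket sort for non-negative numbers, following the pseudocode above."""
    m = max(array) or 1          # maximum key value, found in one linear pass
    buckets = [[] for _ in range(k)]
    for x in array:
        # min() clamps the index so that x == m lands in the last bucket
        buckets[min(int(k * x / m), k - 1)].append(x)
    for b in buckets:
        b.sort()                 # stands in for nextSort; insertion sort is conventional
    return [x for b in buckets for x in b]
```

With k ≈ n buckets and uniformly distributed input, this runs in expected linear time, matching the average-case analysis in the next section.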

13.2 Analysis

13.2.1 Worst-case analysis

Bucket sort is mainly useful when input is uniformly distributed over a range. When the
input contains several keys that are close to each other (clustering), those elements are likely
to be placed in the same bucket, which results in some buckets containing more elements
than average. The worst-case scenario occurs when all the elements are placed in a single
bucket. The overall performance would then be dominated by the algorithm used to sort
each bucket, which is typically the O(n²) insertion sort16 , leaving bucket sort asymptotically
slower than O(n log n) comparison sort17 algorithms such as quicksort18 .

10 https://en.wikipedia.org/wiki/Analysis_of_algorithms
11 https://en.wikipedia.org/wiki/Linear_time
12 https://en.wikipedia.org/wiki/Floor_function
13 https://en.wikipedia.org/wiki/Insertion_sort
14 https://en.wikipedia.org/wiki/Radix_sort
15 https://en.wikipedia.org/wiki/Quicksort
16 https://en.wikipedia.org/wiki/Insertion_sort
17 https://en.wikipedia.org/wiki/Comparison_sort
18 https://en.wikipedia.org/wiki/Quicksort


13.2.2 Average-case analysis

Consider the case that the input is uniformly distributed. The first step, which is to
initialize the buckets and find the maximum key value in the array, can be done in O(n)
time. If division and multiplication can be done in constant time, then scattering each
element to its bucket also costs O(n). Assume insertion sort is used to sort each bucket;
the third step then costs O\left(\sum_{i=1}^{k} n_i^2\right), where n_i is the length of
the bucket indexed i. Since we are concerned with the average time, the expectation
E(n_i^2) has to be evaluated instead. Let X_{ij} be the random variable that is 1 if
element j is placed in bucket i, and 0 otherwise. We have n_i = \sum_{j=1}^{n} X_{ij}.
Therefore,

\[
E(n_i^2) = E\left(\sum_{j=1}^{n} \sum_{l=1}^{n} X_{ij} X_{il}\right)
         = \sum_{j=1}^{n} \sum_{l=1}^{n} E(X_{ij} X_{il})
         = \sum_{j=1}^{n} E(X_{ij}^2) + \sum_{\substack{1 \le j,l \le n \\ j \ne l}} E(X_{ij} X_{il})
\]

The last line separates the summation into the case j = l and the case j ≠ l. Since the
chance of an object being distributed to bucket i is 1/k, X_{ij} is 1 with probability 1/k
and 0 otherwise.

\[
E(X_{ij}^2) = 1^2 \cdot \frac{1}{k} + 0^2 \cdot \left(1 - \frac{1}{k}\right) = \frac{1}{k}
\]

\[
E(X_{ij} X_{il}) = 1 \cdot \left(\frac{1}{k}\right)\left(\frac{1}{k}\right) = \frac{1}{k^2} \quad (j \ne l)
\]

With the summation, it would be

\[
\sum_{j=1}^{n} E(X_{ij}^2) + \sum_{\substack{1 \le j,l \le n \\ j \ne l}} E(X_{ij} X_{il})
= n \cdot \frac{1}{k} + n(n-1) \cdot \frac{1}{k^2} = \frac{n^2 + nk - n}{k^2}
\]

Finally, the complexity would be

\[
O\left(\sum_{i=1}^{k} E(n_i^2)\right) = O\left(\sum_{i=1}^{k} \frac{n^2 + nk - n}{k^2}\right)
= O\left(\frac{n^2}{k} + n\right).
\]

The last step of bucket sort, which is concatenating all the sorted objects in each bucket,
requires O(k) time. Therefore, the total complexity is O\left(n + \frac{n^2}{k} + k\right).
Note that if k is chosen to be k = \Theta(n), then bucket sort runs in O(n) average time,
given a uniformly distributed input.[1]
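The closed form E(n_i²) = (n² + nk − n)/k² can be sanity-checked by exact enumeration of all kⁿ equally likely bucket assignments for small n and k. This sketch is an addition for verification only, not part of the original analysis; the function name is chosen for this example.

```python
from fractions import Fraction
from itertools import product

def expected_square_bucket_size(n, k):
    """Exact E(n_i^2) for one bucket, averaging over all k**n equally likely placements."""
    total = Fraction(0)
    for placement in product(range(k), repeat=n):
        total += Fraction(placement.count(0) ** 2, k ** n)  # bucket 0 is representative
    return total

# closed form from the derivation above: (n^2 + nk - n) / k^2
assert expected_square_bucket_size(4, 2) == Fraction(4**2 + 4*2 - 4, 2**2)
```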


13.3 Optimizations

A common optimization is to put the unsorted elements of the buckets back in the original
array first, then run insertion sort19 over the complete array; because insertion sort's runtime
is based on how far each element is from its final position, the number of comparisons
remains relatively small, and the memory hierarchy is better exploited by storing the list
contiguously in memory.[2]
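A minimal sketch of this optimization, under the same non-negative-key assumption as in the earlier pseudocode (a hypothetical illustration: scatter into buckets, gather them back unsorted, then run a single insertion-sort pass over the nearly sorted array):

```python
def bucket_sort_contiguous(array, k):
    """Scatter into buckets, gather them back unsorted, then insertion sort once."""
    m = max(array) or 1
    buckets = [[] for _ in range(k)]
    for x in array:
        buckets[min(int(k * x / m), k - 1)].append(x)
    out = [x for b in buckets for x in b]    # nearly sorted, stored contiguously
    for i in range(1, len(out)):             # insertion sort: cheap on nearly sorted data
        key, j = out[i], i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out
```

Because every element is already near its final position after the scatter, the inner while loop rarely moves an element far, which is what keeps the insertion-sort pass cheap.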

13.4 Variants

13.4.1 Generic bucket sort

The most common variant of bucket sort operates on a list of n numeric inputs between zero
and some maximum value M and divides the value range into n buckets each of size M/n. If
each bucket is sorted using insertion sort20 , the sort can be shown to run in expected linear
time (where the average is taken over all possible inputs).[3] However, the performance of
this sort degrades with clustering; if many values occur close together, they will all fall
into a single bucket and be sorted slowly. This performance degradation is avoided in the
original bucket sort algorithm by assuming that the input is generated by a random process
that distributes elements uniformly over the interval [0,1).[1]

13.4.2 ProxmapSort

Main article: Proxmap sort21
Similar to generic bucket sort as described above,
ProxmapSort works by dividing an array of keys into subarrays via the use of a ”map
key” function that preserves a partial ordering on the keys; as each key is added to its
subarray, insertion sort is used to keep that subarray sorted, resulting in the entire array
being in sorted order when ProxmapSort completes. ProxmapSort differs from bucket sorts
in its use of the map key to place the data approximately where it belongs in sorted order,
producing a ”proxmap” — a proximity mapping — of the keys.

13.4.3 Histogram sort

Another variant of bucket sort known as histogram sort or counting sort22 adds an initial
pass that counts the number of elements that will fall into each bucket using a count array.
Using this information, the array values can be arranged into a sequence of buckets in-place
by a sequence of exchanges, leaving no space overhead for bucket storage.[4]
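The counting pass and in-place placement can be sketched as follows. This is a hedged illustration in the style of an American flag sort; `histogram_bucket_pass` and the inner `bucket` function are names invented for this example, and it only performs the placement pass (each region would still be sorted afterwards).

```python
def histogram_bucket_pass(array, k):
    """Count keys per bucket, then permute in place so each bucket occupies a
    contiguous region; returns each region's (start, end) boundaries."""
    m = max(array) or 1
    bucket = lambda x: min(int(k * x / m), k - 1)
    counts = [0] * k
    for x in array:                       # first pass: histogram of bucket sizes
        counts[bucket(x)] += 1
    start = [0] * k                       # prefix sums give each bucket's offset
    for i in range(1, k):
        start[i] = start[i - 1] + counts[i - 1]
    end = [start[i] + counts[i] for i in range(k)]
    next_free = start[:]                  # next write position inside each bucket
    for i in range(k):
        while next_free[i] < end[i]:
            b = bucket(array[next_free[i]])
            if b == i:
                next_free[i] += 1         # element already sits in its own region
            else:                         # exchange it into bucket b's next slot
                j = next_free[b]
                array[next_free[i]], array[j] = array[j], array[next_free[i]]
                next_free[b] += 1
    return start, end
```

Each swap finalizes one element, so the permutation finishes in O(n) exchanges with no auxiliary bucket storage.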

19 https://en.wikipedia.org/wiki/Insertion_sort
20 https://en.wikipedia.org/wiki/Insertion_sort
21 https://en.wikipedia.org/wiki/Proxmap_sort
22 https://en.wikipedia.org/wiki/Counting_sort


13.4.4 Postman's sort

The Postman's sort is a variant of bucket sort that takes advantage of a hierarchical
structure of elements, typically described by a set of attributes. This is the algorithm
used by letter-sorting machines in post offices23 : mail is sorted first between domestic and
international; then by state, province or territory; then by destination post office; then by
routes, etc. Since keys are not compared against each other, sorting time is O(cn), where
c depends on the size of the key and number of buckets. This is similar to a radix sort24
that works ”top down,” or ”most significant digit first.”[5]

13.4.5 Shuffle sort

The shuffle sort[6] is a variant of bucket sort that begins by removing the first 1/8 of the
n items to be sorted, sorts them recursively, and puts them in an array. This creates n/8
”buckets” to which the remaining 7/8 of the items are distributed. Each ”bucket” is then
sorted, and the ”buckets” are concatenated into a sorted array.

13.5 Comparison with other sorting algorithms

Bucket sort can be seen as a generalization of counting sort25 ; in fact, if each bucket has
size 1 then bucket sort degenerates to counting sort. The variable bucket size of bucket sort
allows it to use O(n) memory instead of O(M) memory, where M is the number of distinct
values; in exchange, it gives up counting sort's O(n + M) worst-case behavior.
Bucket sort with two buckets is effectively a version of quicksort26 where the pivot value
is always selected to be the middle value of the value range. While this choice is effective
for uniformly distributed inputs, other means of choosing the pivot in quicksort such as
randomly selected pivots make it more resistant to clustering in the input distribution.
The n-way mergesort27 algorithm also begins by distributing the list into n sublists and
sorting each one; however, the sublists created by mergesort have overlapping value ranges
and so cannot be recombined by simple concatenation as in bucket sort. Instead, they
must be interleaved by a merge algorithm. However, this added expense is counterbalanced
by the simpler scatter phase and the ability to ensure that each sublist is the same size,
providing a good worst-case time bound.
Top-down radix sort28 can be seen as a special case of bucket sort where both the range of
values and the number of buckets is constrained to be a power of two. Consequently, each
bucket's size is also a power of two, and the procedure can be applied recursively. This
approach can accelerate the scatter phase, since we only need to examine a prefix of the bit
representation of each element to determine its bucket.

23 https://en.wikipedia.org/wiki/Post_office
24 https://en.wikipedia.org/wiki/Radix_sort
25 https://en.wikipedia.org/wiki/Counting_sort
26 https://en.wikipedia.org/wiki/Quicksort
27 https://en.wikipedia.org/wiki/Mergesort
28 https://en.wikipedia.org/wiki/Radix_sort


13.6 References
1. T H. C29 ; C E. L30 ; R L. R31 & C-
 S32 . Introduction to Algorithms33 . Bucket sort runs in linear time on the
average. Like counting sort, bucket sort is fast because it assumes something about
the input. Whereas counting sort assumes that the input consists of integers in a
small range, bucket sort assumes that the input is generated by a random process that
distributes elements uniformly over the interval [0,1). The idea of bucket sort is to
divide the interval [0, 1) into n equal-sized subintervals, or buckets, and then distribute
the n input numbers into the buckets. Since the inputs are uniformly distributed over
[0, 1), we don't expect many numbers to fall into each bucket. To produce the output,
we simply sort the numbers in each bucket and then go through the buckets in order,
listing the elements in each.
2. Corwin, E. and Logar, A. ”Sorting in linear time — variations on the bucket sort”.
Journal of Computing Sciences in Colleges, 20, 1, pp.197–202. October 2004.
3. Thomas H. Cormen34 , Charles E. Leiserson35 , Ronald L. Rivest36 , and Clifford Stein37 .
Introduction to Algorithms38 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN39 0-262-03293-740 . Section 8.4: Bucket sort, pp. 174–177.
4. NIST's Dictionary of Algorithms and Data Structures: histogram sort41
5. 42
6. A revolutionary new sort from John Cohen Nov 26, 199743
• Paul E. Black ”Postman's Sort”44 from Dictionary of Algorithms and Data Structures45
at NIST46 .
• Robert Ramey, ”The Postman's Sort”47 , C Users Journal, Aug. 1992
• NIST's Dictionary of Algorithms and Data Structures: bucket sort48

29 https://en.wikipedia.org/wiki/Thomas_H._Cormen
30 https://en.wikipedia.org/wiki/Charles_E._Leiserson
31 https://en.wikipedia.org/wiki/Ronald_L._Rivest
32 https://en.wikipedia.org/wiki/Clifford_Stein
33 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
34 https://en.wikipedia.org/wiki/Thomas_H._Cormen
35 https://en.wikipedia.org/wiki/Charles_E._Leiserson
36 https://en.wikipedia.org/wiki/Ronald_L._Rivest
37 https://en.wikipedia.org/wiki/Clifford_Stein
38 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
39 https://en.wikipedia.org/wiki/ISBN_(identifier)
40 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
41 https://xlinux.nist.gov/dads/HTML/histogramSort.html
42 http://www.rrsd.com/psort/cuj/cuj.htm
43 https://groups.google.com/group/fido7.ru.algorithms/msg/26084cdb04008ab3
44 https://xlinux.nist.gov/dads/HTML/postmansort.html
45 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
46 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
47 http://www.rrsd.com/software_development/postmans_sort/cuj/cuj.htm
48 https://xlinux.nist.gov/dads/HTML/bucketsort.html


13.7 External links


• Bucket Sort Code for Ansi C49
• Variant of Bucket Sort with Demo50


49 http://www.dcc.uchile.cl/~rbaeza/handbook/algs/4/423.sort.c.html
50 http://www1bpt.bridgeport.edu/~dichter/lilly/bucketsort.htm

14 Radix sort

Radix sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(w · n), where w is the number of bits required to store each key
Worst-case space complexity: O(w + n)

In computer science1 , radix sort is a non-comparative2 sorting algorithm3 . It avoids
comparison by creating and distributing4 elements into buckets according to their radix5 . For
elements with more than one significant digit6 , this bucketing process is repeated for each
digit, while preserving the ordering of the prior step, until all digits have been considered.
For this reason, radix sort has also been called bucket sort7 and digital sort.
Radix sort can be applied to data that can be sorted lexicographically8 , be they integers,
words, punch cards, playing cards, or the mail9 .

14.1 History

Radix sort dates back as far as 1887 to the work of Herman Hollerith10 on tabulating
machines11 .[1] Radix sorting algorithms came into common use as a way to sort punched
cards12 as early as 1923.[2]

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Comparison_sort
3 https://en.wikipedia.org/wiki/Sorting_algorithm
4 https://en.wikipedia.org/wiki/Distribution_sort
5 https://en.wikipedia.org/wiki/Radix
6 https://en.wikipedia.org/wiki/Significant_digit
7 https://en.wikipedia.org/wiki/Bucket_sort
8 https://en.wikipedia.org/wiki/Lexicographical_order
9 https://en.wikipedia.org/wiki/Bucket_sort#Postman&#39;s_sort
10 https://en.wikipedia.org/wiki/Herman_Hollerith
11 https://en.wikipedia.org/wiki/Tabulating_machines
12 https://en.wikipedia.org/wiki/Punched_card


The first memory-efficient computer algorithm for radix sorting was developed in 1954 at
MIT13 by Harold H. Seward14 . Computerized radix sorts had previously been dismissed as impractical because of
the perceived need for variable allocation of buckets of unknown size. Seward's innovation
was to use a linear scan to determine the required bucket sizes and offsets beforehand,
allowing for a single static allocation of auxiliary memory. The linear scan is closely related
to Seward's other algorithm — counting sort15 .
In the modern era, radix sorts are most commonly applied to collections of binary strings16
and integers17 . Radix sort has been shown in some benchmarks to be faster than other more
general-purpose sorting algorithms, sometimes 50% to three times as fast.[3][4][5]

Figure 33 An IBM card sorter performing a radix sort on a large set of punched cards.
Cards are fed into a hopper below the operator's chin and are sorted into one of the
machine's 13 output baskets, based on the data punched into one column on the cards.
The crank near the input hopper is used to move the read head to the next column as the
sort progresses. The rack in back holds cards from the previous sorting pass.

13 https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
14 https://en.wikipedia.org/wiki/Harold_H._Seward
15 https://en.wikipedia.org/wiki/Counting_sort
16 https://en.wikipedia.org/wiki/String_(computer_science)
17 https://en.wikipedia.org/wiki/Integer_(computer_science)


14.2 Digit order

Radix sorts can be implemented to start at either the most significant digit18 (MSD) or
least significant digit19 (LSD). For example, with 1234, one could start with 1 (MSD) or 4
(LSD).
LSD radix sorts typically use the following sorting order: short keys come before longer
keys, and then keys of the same length are sorted lexicographically20 . This coincides with
the normal order of integer representations, like the sequence [1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11]. LSD sorts are generally stable sorts21 .
MSD radix sorts are most suitable for sorting strings or fixed-length integer representations.
A sequence like [b, c, e, d, f, g, ba] would be sorted as [b, ba, c, d, e, f, g]. If
lexicographic ordering is used to sort variable-length integers in base 10, then the numbers
1 to 10 would be output as [1, 10, 2, 3, 4, 5, 6, 7, 8, 9], as if the shorter keys were
left-justified and padded on the right with blank characters to make them as long as the
longest key. MSD sorts are not necessarily stable if the original ordering of duplicate keys
must always be maintained.
Other than the traversal order, MSD and LSD sorts differ in their handling of variable
length input. LSD sorts can group by length, radix sort each group, then concatenate the
groups in size order. MSD sorts must effectively 'extend' all shorter keys to the size of the
largest key and sort them accordingly, which can be more complicated than the grouping
required by LSD.
However, MSD sorts are more amenable to subdivision and recursion. Each bucket created
by an MSD step can itself be radix sorted using the next most significant digit, without
reference to any other buckets created in the previous step. Once the last digit is reached,
concatenating the buckets is all that is required to complete the sort.

14.3 Examples

14.3.1 Least significant digit

Input list (base 10):


[170, 45, 75, 90, 2, 802, 2, 66]
Starting from the rightmost (last) digit, sort the numbers based on that digit:
[{170, 90}, {02, 802, 02}, {45, 75}, {66}]
Notice that a 0 is prepended to the two 2s and that, because the pass is stable, 802 maintains its
relative order from the previous list (i.e. it remains placed before the second 2).

Sorting by the next left digit:

18 https://en.wikipedia.org/wiki/Most_significant_digit
19 https://en.wikipedia.org/wiki/Least_significant_digit
20 https://en.wikipedia.org/wiki/Lexicographically
21 https://en.wikipedia.org/wiki/Stable_sort


[{02, 802, 02}, {45}, {66}, {170, 75}, {90}]


And finally by the leftmost digit:
[{002, 002, 045, 066, 075, 090}, {170}, {802}]
Each step requires just a single pass over the data, since each item can be placed in its
bucket without comparison with any other element.
Some radix sort implementations allocate space for buckets by first counting the number
of keys that belong in each bucket before moving keys into those buckets. The number of
times that each digit occurs is stored in an array22 .
Although it's always possible to pre-determine the bucket boundaries using counts, some
implementations opt to use dynamic memory allocation instead.
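The least-significant-digit procedure above can be sketched for base-10 integers. This is an illustrative version using per-digit scattering into buckets; `lsd_radix_sort` is a name chosen for this example, and it assumes non-negative integers.

```python
def lsd_radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers; each pass is stable."""
    max_val = max(nums)
    exp = 1
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // exp) % base].append(x)   # scatter by the current digit
        nums = [x for b in buckets for x in b]     # gather, preserving order
        exp *= base
    return nums
```

Because each gather preserves the order of elements within a bucket, earlier digit orderings survive later passes, which is exactly the stability property the example above relies on.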

14.3.2 Most significant digit, forward recursive

Input list, fixed width numeric strings with leading zeros:


[170, 045, 075, 025, 002, 024, 802, 066]
First digit, with brackets indicating buckets:
[{045, 075, 025, 002, 024, 066}, {170}, {802}]
Notice that 170 and 802 are already complete because they are all that remain in their buckets, so
no further recursion is needed

Next digit:
[{ {002}, {025, 024}, {045}, {066}, {075} }, 170, 802]
Final digit:
[ 002, { {024}, {025} }, 045, 066, 075 , 170, 802]
All that remains is concatenation:
[002, 024, 025, 045, 066, 075, 170, 802]

14.4 Complexity and performance

Radix sort operates in O(nw) time23 , where n is the number of keys, and w is the key
length. LSD variants can achieve a lower bound for w of 'average key length' when splitting
variable-length keys into groups as discussed above.
Optimized radix sorts can be very fast when working in a domain that suits them.[6] They
are constrained to lexicographic data, but for many practical applications this is not a
limitation. Large key sizes can hinder LSD implementations when the induced number of
passes becomes the bottleneck[2] .

22 https://en.wikipedia.org/wiki/Array_data_type
23 https://en.wikipedia.org/wiki/Big_O_notation


14.5 Specialized variants

14.5.1 In-place MSD radix sort implementations

Binary MSD radix sort, also called binary quicksort, can be implemented in-place by split-
ting the input array into two bins - the 0s bin and the 1s bin. The 0s bin is grown from the
beginning of the array, whereas the 1s bin is grown from the end of the array. The 0s bin
boundary is placed before the first array element. The 1s bin boundary is placed after the
last array element. The most significant bit of the first array element is examined. If this
bit is a 1, then the first element is swapped with the element in front of the 1s bin boundary
(the last element of the array), and the 1s bin is grown by one element by decrementing
the 1s boundary array index. If this bit is a 0, then the first element remains at its current
location, and the 0s bin is grown by one element. The next array element examined is the
one in front of the 0s bin boundary (i.e. the first element that is not in the 0s bin or the
1s bin). This process continues until the 0s bin and the 1s bin reach each other. The 0s
bin and the 1s bin are then sorted recursively based on the next bit of each array element.
Recursive processing continues until the least significant bit has been used for sorting.[7][8]
Handling signed integers requires treating the most significant bit with the opposite sense,
followed by unsigned treatment of the rest of the bits.
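The binary partitioning just described can be sketched in Python. This is a hedged sketch for non-negative integers only; starting at bit 31 assumes 32-bit keys, and the recursion follows the 0s-bin/1s-bin description above.

```python
def binary_msd_sort(a, lo=0, hi=None, bit=31):
    """In-place binary MSD radix sort (binary quicksort) for non-negative ints."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1 or bit < 0:
        return
    i, j = lo, hi                       # 0s bin grows from lo, 1s bin grows down from hi
    while i < j:
        if (a[i] >> bit) & 1:
            j -= 1
            a[i], a[j] = a[j], a[i]     # move element into the 1s bin at the end
        else:
            i += 1                      # element stays; the 0s bin grows by one
    binary_msd_sort(a, lo, i, bit - 1)  # recurse on each bin with the next bit
    binary_msd_sort(a, i, hi, bit - 1)
```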
In-place MSD binary-radix sort can be extended to larger radix and retain in-place capa-
bility. Counting sort24 is used to determine the size of each bin and their starting index.
Swapping is used to place the current element into its bin, followed by expanding the bin
boundary. As the array elements are scanned the bins are skipped over and only elements
between bins are processed, until the entire array has been processed and all elements end
up in their respective bins. The number of bins is the same as the radix used - e.g. 16
bins for 16-Radix. Each pass is based on a single digit (e.g. 4-bits per digit in the case of
16-Radix), starting from the most significant digit25 . Each bin is then processed recursively
using the next digit, until all digits have been used for sorting.[9][10]
Neither in-place binary-radix sort nor n-bit-radix sort, discussed in the paragraphs above,
is a stable algorithm26 .

14.5.2 Stable MSD radix sort implementations

MSD Radix Sort can be implemented as a stable algorithm, but requires the use of a memory
buffer of the same size as the input array. This extra memory allows the input buffer to be
scanned from the first array element to last, and move the array elements to the destination
bins in the same order. Thus, equal elements will be placed in the memory buffer in the
same order they were in the input array. The MSD-based algorithm uses the extra memory
buffer as the output on the first level of recursion, but swaps the input and output on the
next level of recursion, to avoid the overhead of copying the output result back to the input
buffer. Each of the bins is recursively processed, as is done for the in-place MSD Radix
Sort. After the sort by the last digit has been completed, the output buffer is checked to

24 https://en.wikipedia.org/wiki/Counting_sort
25 https://en.wikipedia.org/wiki/Most_significant_digit
26 https://en.wikipedia.org/wiki/Sorting_algorithm


see if it is the original input array, and if it's not, then a single copy is performed. If the
digit size is chosen such that the key size divided by the digit size is an even number, the
copy at the end is avoided.[11]

14.5.3 Hybrid approaches

Radix sort variants such as the two-pass method, where counting sort27 is used during the
first pass of each level of recursion, have a large constant overhead. Thus, when the bins
get small, other sorting algorithms should be used, such as insertion sort28 . A good imple-
mentation of insertion sort29 is fast for small arrays, stable, in-place, and can significantly
speed up radix sort.

14.5.4 Application to parallel computing

Note that this recursive sorting algorithm has particular application to parallel computing30 ,
as each of the bins can be sorted independently. In this case, each bin is passed to the next
available processor. A single processor would be used at the start (the most significant digit).
By the second or third digit, all available processors would likely be engaged. Ideally, as
each subdivision is fully sorted, fewer and fewer processors would be utilized. In the worst
case, all of the keys will be identical or nearly identical to each other, with the result that
there will be little to no advantage to using parallel computing to sort the keys.
In the top level of recursion, opportunity for parallelism is in the counting sort31 portion
of the algorithm. Counting is highly parallel, amenable to the parallel_reduce pattern,
and splits the work well across multiple cores until reaching memory bandwidth limit.
This portion of the algorithm has data-independent parallelism. Processing each bin in
subsequent recursion levels is data-dependent, however. For example, if all keys were of the
same value, then there would be only a single bin with any elements in it, and no parallelism
would be available. For random inputs all bins would be near equally populated and a large
amount of parallelism opportunity would be available.[12]
There are faster parallel sorting algorithms available: for example, the sorting networks of
Ajtai, Komlós and Szemerédi (”the three Hungarians”) and of Richard Cole achieve optimal
O(log(n)) complexity[13][14] , and Batcher32 's bitonic merge sort33 has an algorithmic
complexity of O(log²(n)), all of which have lower algorithmic time complexity than radix
sort on a CREW-PRAM34 . The fastest known PRAM sorts were described in 1991 by David
Powers: a parallelized quicksort that can operate in O(log(n)) time on a CRCW-PRAM
with n processors by performing partitioning implicitly, and a radix sort that operates using
the same trick in O(k), where k is the maximum key length.[15] However, neither the PRAM
architecture nor a single sequential

27 https://en.wikipedia.org/wiki/Counting_sort
28 https://en.wikipedia.org/wiki/Insertion_sort
29 https://en.wikipedia.org/wiki/Insertion_sort
30 https://en.wikipedia.org/wiki/Parallel_computing
31 https://en.wikipedia.org/wiki/Counting_sort
32 https://en.wikipedia.org/wiki/Ken_Batcher
33 https://en.wikipedia.org/wiki/Bitonic_sorter
34 https://en.wikipedia.org/wiki/Parallel_random-access_machine


processor can actually be built in a way that will scale without the number of constant
fan-out35 gate delays per cycle increasing as O(log(n)), so in effect a pipelined version of
Batcher's bitonic merge sort and the O(log(n)) PRAM sorts are all O(log²(n)) in terms of
clock cycles. Powers acknowledged that Batcher's sort would have a lower constant in terms
of gate delays than his parallel quicksort36 and radix sort, or Cole's merge sort37 , for a
keylength-independent sorting network38 of O(n·log²(n)).[16]

14.5.5 Trie-based radix sort

Radix sorting can also be accomplished by building a trie39 (or radix tree) from the input
set, and doing a pre-order40 traversal, keeping only the leaves' values. This is similar to
the relationship between heapsort41 and the heap42 data structure. This can be useful for
certain data types, see Burstsort43 .
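A trie-based radix sort can be sketched with a dictionary-based trie (an illustration only; the empty-string sentinel for end-of-word and the function name are choices made for this example):

```python
def trie_radix_sort(words):
    """Sort strings by building a trie and reading it back in pre-order."""
    END = ""                                   # sentinel key marking end-of-word
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node[END] = node.get(END, 0) + 1       # count duplicates at the leaf
    out = []
    def walk(node, prefix):
        for ch in sorted(node):                # "" sorts before every character
            if ch == END:
                out.extend([prefix] * node[ch])
            else:
                walk(node[ch], prefix + ch)
    walk(root, "")
    return out
```

Visiting the sentinel before any child edge is what makes a prefix (like ”b”) come out before its extensions (like ”ba”), mirroring lexicographic order.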

14.6 See also


• IBM 80 series Card Sorters44
• Other Distribution sorts45 .

14.7 References
1. US 39578146 and UK 32747
2. Donald Knuth48 . The Art of Computer Programming, Volume 3: Sorting and Search-
ing, Third Edition. Addison-Wesley, 1997. ISBN49 0-201-89685-050 . Section 5.2.5:
Sorting by Distribution, pp. 168–179.
3. ”I Wrote a Faster Sorting Algorithm”51 . 28 December 2016.
4. ”Is radix sort faster than quicksort for integer arrays?”52 . erik.gorset.no.
5. ”Function template integer_sort - 1.62.0”53 . www.boost.org.

35 https://en.wikipedia.org/wiki/Fan-out
36 https://en.wikipedia.org/wiki/Quicksort
37 https://en.wikipedia.org/wiki/Merge_sort
38 https://en.wikipedia.org/wiki/Sorting_network
39 https://en.wikipedia.org/wiki/Trie
40 https://en.wikipedia.org/wiki/Tree_traversal#Pre-order_(NLR)
41 https://en.wikipedia.org/wiki/Heapsort
42 https://en.wikipedia.org/wiki/Heap_(data_structure)
43 https://en.wikipedia.org/wiki/Burstsort
44 https://en.wikipedia.org/wiki/IBM_80_series_Card_Sorters
45 https://en.wikipedia.org/wiki/Distribution_sort
46 https://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US395781
47 https://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=UK327
48 https://en.wikipedia.org/wiki/Donald_Knuth
49 https://en.wikipedia.org/wiki/ISBN_(identifier)
50 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
51 https://probablydance.com/2016/12/27/i-wrote-a-faster-sorting-algorithm/
52 https://erik.gorset.no/2011/04/radix-sort-is-faster-than-quicksort.html
53 https://www.boost.org/doc/libs/1_62_0/libs/sort/doc/html/boost/sort/spreadsort/integer__idm45516074556032.html


6. ”E T-B S  L S  S”54 (PDF).


7. R. Sedgewick, ”Algorithms in C++”, third edition, 1998, pp. 424–427
8. Duvanenko, Victor J. ”Algorithm Improvement through Performance
Measurement: Part 2”55 . Dr. Dobb's.
9. Duvanenko, Victor J. ”Algorithm Improvement through Performance
Measurement: Part 3”56 . Dr. Dobb's.
10. Duvanenko, Victor J. ”Parallel In-Place Radix Sort Simplified”57 . Dr.
Dobb's.
11. Duvanenko, Victor J. ”Algorithm Improvement through Performance
Measurement: Part 4”58 . Dr. Dobb's.
12. Duvanenko, Victor J. ”Parallel In-Place N-bit-Radix Sort”59 . Dr. Dobb's.
13. A. Gibbons and W. Rytter60 , Efficient Parallel Algorithms. Cambridge University
Press, 1988.
14. H. Casanova et al, Parallel Algorithms. Chapman & Hall, 2008.
15. David M. W. Powers, Parallelized Quicksort and Radixsort with Optimal Speedup61 ,
Proceedings of International Conference on Parallel Computing Technologies. Novosi-
birsk62 . 1991.
16. David M. W. Powers, Parallel Unification: Practical Complexity63 , Australasian Com-
puter Architecture Workshop, Flinders University, January 1995

14.8 External links

The Wikibook Algorithm implementation64 has a page on the topic of: Radix
sort65

• High Performance Implementation66 of LSD Radix sort in JavaScript67


• High Performance Implementation68 of LSD & MSD Radix sort in C#69 with source in
GitHub70
• Video tutorial of MSD Radix Sort71

54 http://goanna.cs.rmit.edu.au/~jz/fulltext/acsc03sz.pdf
55 http://www.drdobbs.com/architecture-and-design/algorithm-improvement-through-performanc/220300654
56 http://www.drdobbs.com/architecture-and-design/algorithm-improvement-through-performanc/221600153
57 http://www.drdobbs.com/parallel/parallel-in-place-radix-sort-simplified/229000734
58 http://www.drdobbs.com/tools/algorithm-improvement-through-performanc/222200161
59 http://www.drdobbs.com/parallel/parallel-in-place-n-bit-radix-sort/226600004
60 https://en.wikipedia.org/wiki/Wojciech_Rytter
61 http://citeseer.ist.psu.edu/327487.html
62 https://en.wikipedia.org/wiki/Novosibirsk
63 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
64 https://en.wikibooks.org/wiki/Algorithm_implementation
65 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Radix_sort
66 https://duvanenko.tech.blog/2017/06/15/faster-sorting-in-javascript/
67 https://en.wikipedia.org/wiki/JavaScript
68 https://duvanenko.tech.blog/2018/05/23/faster-sorting-in-c/
69 https://en.wikipedia.org/wiki/C_Sharp_(programming_language)
70 https://github.com/DragonSpit/HPCsharp/
71 https://www.youtube.com/watch?v=6YyflHO9GdE


• Demonstration and comparison72 of Radix sort with Bubble sort73 , Merge sort74 and
Quicksort75 implemented in JavaScript76
• Article77 about Radix sorting IEEE floating-point78 numbers with implementation.
Faster Floating Point Sorting and Multiple Histogramming79 with implementation in
C++
• Pointers to radix sort visualizations80
• USort library81 contains tuned implementations of radix sort for most numerical C types
(C99)
• Donald Knuth82 . The Art of Computer Programming, Volume 3: Sorting and Searching,
Third Edition. Addison-Wesley, 1997. ISBN83 0-201-89685-084 . Section 5.2.5: Sorting by
Distribution, pp. 168–179.
• Thomas H. Cormen85 , Charles E. Leiserson86 , Ronald L. Rivest87 , and Clifford Stein88 .
Introduction to Algorithms89 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN90 0-262-03293-791 . Section 8.3: Radix sort, pp. 170–173.
• BRADSORT v1.50 source code92
• Efficient Trie-Based Sorting of Large Sets of Strings93 , by Ranjan Sinha and Justin Zobel.
This paper describes a method of creating tries of buckets which figuratively burst into
sub-tries when the buckets hold more than a predetermined capacity of strings, hence the
name, ”Burstsort”.
• Open Data Structures - Java Edition - Section 11.2 - Counting Sort and Radix Sort94 ,
Pat Morin95
• Open Data Structures - C++ Edition - Section 11.2 - Counting Sort and Radix Sort96 ,
Pat Morin97

72 http://www.csse.monash.edu.au/~lloyd/tildeAlgDS/Sort/Radix/
73 https://en.wikipedia.org/wiki/Bubble_sort
74 https://en.wikipedia.org/wiki/Merge_sort
75 https://en.wikipedia.org/wiki/Quicksort
76 https://en.wikipedia.org/wiki/JavaScript
77 http://www.codercorner.com/RadixSortRevisited.htm
78 https://en.wikipedia.org/wiki/IEEE_floating-point_standard
79 http://www.stereopsis.com/radix.html
80 https://web.archive.org/web/20060829193644/http://web-cat.cs.vt.edu/AlgovizWiki/RadixSort
81 https://bitbucket.org/ais/usort/wiki/Home
82 https://en.wikipedia.org/wiki/Donald_Knuth
83 https://en.wikipedia.org/wiki/ISBN_(identifier)
84 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
85 https://en.wikipedia.org/wiki/Thomas_H._Cormen
86 https://en.wikipedia.org/wiki/Charles_E._Leiserson
87 https://en.wikipedia.org/wiki/Ronald_L._Rivest
88 https://en.wikipedia.org/wiki/Clifford_Stein
89 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
90 https://en.wikipedia.org/wiki/ISBN_(identifier)
91 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
92 https://web.archive.org/web/20010714213118/http://www.chinet.com/~edlee/bradsort.c
93 https://pdfs.semanticscholar.org/15b4/e7759573298a36ace2f898e437c9218cdfca.pdf
94 http://opendatastructures.org/versions/edition-0.1e/ods-java/11_2_Counting_Sort_Radix_So.html
95 https://en.wikipedia.org/wiki/Pat_Morin
96 http://opendatastructures.org/ods-cpp/11_2_Counting_Sort_Radix_So.html
97 https://en.wikipedia.org/wiki/Pat_Morin


15 Data structure

Particular way of storing and organizing data in a computer. Not to be confused with data
type1 .

Figure 34 A data structure known as a hash table.

In computer science3 , a data structure is a data organization, management, and storage
format that enables efficient4 access and modification.[1][2][3] More precisely, a data structure
is a collection of data values5 , the relationships among them, and the functions or operations
that can be applied to the data.[4]

1 https://en.wikipedia.org/wiki/Data_type
3 https://en.wikipedia.org/wiki/Computer_science
4 https://en.wikipedia.org/wiki/Algorithmic_efficiency
5 https://en.wikipedia.org/wiki/Data


15.1 Usage

Data structures serve as the basis for abstract data types6 (ADT). The ADT defines the
logical form of the data type. The data structure implements the physical form of the data
type.[5]
Different types of data structures are suited to different kinds of applications, and some are
highly specialized to specific tasks. For example, relational databases7 commonly use B-
tree8 indexes for data retrieval,[6] while compiler9 implementations usually use hash tables10
to look up identifiers.[7]
Data structures provide a means to manage large amounts of data efficiently for uses such
as large databases11 and internet indexing services12 . Usually, efficient data structures are
key to designing efficient algorithms13 . Some formal design methods and programming
languages14 emphasize data structures, rather than algorithms, as the key organizing factor
in software design. Data structures can be used to organize the storage and retrieval of
information stored in both main memory15 and secondary memory16 .[8]

15.2 Implementation

Data structures are generally based on the ability of a computer17 to fetch and store data
at any place in its memory, specified by a pointer18 —a bit string, representing a memory
address19 , that can be itself stored in memory and manipulated by the program. Thus,
the array20 and record21 data structures are based on computing the addresses of data
items with arithmetic operations22 , while the linked data structures23 are based on storing
addresses of data items within the structure itself.
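The contrast between computed and stored addresses can be sketched in Python (a minimal illustration using the standard ctypes module; the variable names and values are hypothetical):

```python
import ctypes

# Array: the address of element i is the base address plus i * element_size.
arr = (ctypes.c_int * 3)(10, 20, 30)
base = ctypes.addressof(arr)
addr_of_2 = base + 2 * ctypes.sizeof(ctypes.c_int)
third = ctypes.c_int.from_address(addr_of_2).value
print(third)  # 30, recovered purely by address arithmetic

# Linked structure: each node stores a reference to the next item,
# so later elements are reached by following stored addresses.
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

c = Node(30)
b = Node(20, c)
a = Node(10, b)
print(a.next.next.value)  # 30, reached by following two stored references
```

The array lookup needs one multiplication and one addition regardless of the index, while the linked lookup must follow one reference per intervening node.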
The implementation of a data structure usually requires writing a set of procedures24 that
create and manipulate instances of that structure. The efficiency of a data structure cannot
be analyzed separately from those operations. This observation motivates the theoretical

6 https://en.wikipedia.org/wiki/Abstract_data_type
7 https://en.wikipedia.org/wiki/Relational_database
8 https://en.wikipedia.org/wiki/B-tree
9 https://en.wikipedia.org/wiki/Compiler
10 https://en.wikipedia.org/wiki/Hash_table
11 https://en.wikipedia.org/wiki/Database
12 https://en.wikipedia.org/wiki/Web_indexing
13 https://en.wikipedia.org/wiki/Algorithm
14 https://en.wikipedia.org/wiki/Programming_language
15 https://en.wikipedia.org/wiki/Main_memory
16 https://en.wikipedia.org/wiki/Secondary_memory
17 https://en.wikipedia.org/wiki/Computer
18 https://en.wikipedia.org/wiki/Pointer_(computer_programming)
19 https://en.wikipedia.org/wiki/Memory_address
20 https://en.wikipedia.org/wiki/Array_data_structure
21 https://en.wikipedia.org/wiki/Record_(computer_science)
22 https://en.wikipedia.org/wiki/Arithmetic_operations
23 https://en.wikipedia.org/wiki/Linked_data_structure
24 https://en.wikipedia.org/wiki/Subroutine


concept of an abstract data type25 , a data structure that is defined indirectly by the oper-
ations that may be performed on it, and the mathematical properties of those operations
(including their space and time cost).[9]

15.3 Examples

Main article: List of data structures26
There are numerous types of data structures, generally built upon simpler primitive data
types27 :[10]
• An array28 is a number of elements in a specific order, typically all of the same type
(depending on the language, individual elements may either all be forced to be the same
type, or may be of almost any type). Elements are accessed using an integer index to
specify which element is required. Typical implementations allocate contiguous memory
words for the elements of arrays (but this is not always a necessity). Arrays may be
fixed-length or resizable.
• A linked list29 (also just called list) is a linear collection of data elements of any type,
called nodes, where each node has itself a value, and points to the next node in the linked
list. The principal advantage of a linked list over an array is that values can always
be efficiently inserted and removed without relocating the rest of the list. Certain other
operations, such as random access30 to a certain element, are, however, slower on lists than
on arrays.
• A record31 (also called tuple or struct) is an aggregate data structure. A record is a value
that contains other values, typically in fixed number and sequence and typically indexed
by names. The elements of records are usually called fields or members.
• A union32 is a data structure that specifies which of a number of permitted primitive
types may be stored in its instances, e.g. float or long integer. Contrast with a record33 ,
which could be defined to contain a float and an integer; whereas in a union, there is only
one value at a time. Enough space is allocated to contain the widest member datatype.
• A tagged union34 (also called variant35 , variant record, discriminated union, or disjoint
union) contains an additional field indicating its current type, for enhanced type safety.
• An object36 is a data structure that contains data fields, like a record does, as well as var-
ious methods37 which operate on the data contents. An object is an in-memory instance

25 https://en.wikipedia.org/wiki/Abstract_data_type
26 https://en.wikipedia.org/wiki/List_of_data_structures
27 https://en.wikipedia.org/wiki/Primitive_data_type
28 https://en.wikipedia.org/wiki/Array_data_structure
29 https://en.wikipedia.org/wiki/Linked_list
30 https://en.wikipedia.org/wiki/Random_access
31 https://en.wikipedia.org/wiki/Record_(computer_science)
32 https://en.wikipedia.org/wiki/Union_(computer_science)
33 https://en.wikipedia.org/wiki/Record_(computer_science)
34 https://en.wikipedia.org/wiki/Tagged_union
35 https://en.wikipedia.org/wiki/Variant_type
36 https://en.wikipedia.org/wiki/Object_(computer_science)
37 https://en.wikipedia.org/wiki/Method_(computer_programming)


of a class from a taxonomy. In the context of object-oriented programming38 , records are
known as plain old data structures39 to distinguish them from objects.[11]
In addition, graphs40 and binary trees41 are other commonly used data structures.
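A few of the structures listed above can be sketched in Python (the names and values are illustrative; a Python list plays the role of an array, a named tuple plays the role of a record, and a dict with an explicit tag field mimics a tagged union):

```python
from collections import namedtuple

# Array: elements in a specific order, accessed by integer index.
scores = [87, 92, 78]
print(scores[1])  # 92

# Record: a fixed set of named fields aggregated into one value.
Point = namedtuple("Point", ["x", "y"])
p = Point(x=3, y=4)
print(p.x, p.y)  # 3 4

# Tagged union: a payload plus a tag saying which type it currently holds.
def make_tagged(tag, payload):
    return {"tag": tag, "payload": payload}

v = make_tagged("float", 2.5)
if v["tag"] == "float":          # the tag is checked before use
    doubled = v["payload"] * 2
print(doubled)  # 5.0
```

In a statically typed language such as C, the record and union shapes would be declared with struct and union types and checked at compile time; the dynamic sketch above only mimics that discipline at run time.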

15.4 Language support

Most assembly languages42 and some low-level languages43 , such as BCPL44 (Basic Com-
bined Programming Language), lack built-in support for data structures. On the other
hand, many high-level programming languages45 and some higher-level assembly languages,
such as MASM46 , have special syntax or other built-in support for certain data struc-
tures, such as records and arrays. For example, the C47 (a direct descendant of BCPL)
and Pascal48 languages support structs49 and records, respectively, in addition to vectors
(one-dimensional arrays50 ) and multi-dimensional arrays.[12][13]
Most programming languages feature some sort of library51 mechanism that allows data
structure implementations to be reused by different programs. Modern languages usually
come with standard libraries that implement the most common data structures. Examples
are the C++52 Standard Template Library53 , the Java Collections Framework54 , and the
Microsoft55 .NET Framework56 .
Modern languages also generally support modular programming57 , the separation between
the interface58 of a library module and its implementation. Some provide opaque data
types59 that allow clients to hide implementation details. Object-oriented programming
languages60 , such as C++61 , Java62 , and Smalltalk63 , typically use classes64 for this purpose.

38 https://en.wikipedia.org/wiki/Object-oriented_programming
39 https://en.wikipedia.org/wiki/Plain_old_data_structure
40 https://en.wikipedia.org/wiki/Graph_(computer_science)
41 https://en.wikipedia.org/wiki/Binary_trees
42 https://en.wikipedia.org/wiki/Assembly_language
43 https://en.wikipedia.org/wiki/Low-level_programming_language
44 https://en.wikipedia.org/wiki/BCPL
45 https://en.wikipedia.org/wiki/High-level_programming_language
46 https://en.wikipedia.org/wiki/MASM
47 https://en.wikipedia.org/wiki/C_(programming_language)
48 https://en.wikipedia.org/wiki/Pascal_(programming_language)
49 https://en.wikipedia.org/wiki/Record_(computer_science)
50 https://en.wikipedia.org/wiki/Array_data_type
51 https://en.wikipedia.org/wiki/Library_(computing)
52 https://en.wikipedia.org/wiki/C%2B%2B
53 https://en.wikipedia.org/wiki/Standard_Template_Library
54 https://en.wikipedia.org/wiki/Java_Collections_Framework
55 https://en.wikipedia.org/wiki/Microsoft
56 https://en.wikipedia.org/wiki/.NET_Framework
57 https://en.wikipedia.org/wiki/Modular_programming
58 https://en.wikipedia.org/wiki/Interface_(computing)
59 https://en.wikipedia.org/wiki/Opaque_data_type
60 https://en.wikipedia.org/wiki/Object-oriented_programming_language
61 https://en.wikipedia.org/wiki/C%2B%2B
62 https://en.wikipedia.org/wiki/Java_(programming_language)
63 https://en.wikipedia.org/wiki/Smalltalk
64 https://en.wikipedia.org/wiki/Classes_(computer_science)


Many known data structures have concurrent65 versions which allow multiple computing
threads to access a single concrete instance of a data structure simultaneously.[14]

15.5 See also


• Abstract data type66
• Concurrent data structure67
• Data model68
• Dynamization69
• Linked data structure70
• List of data structures71
• Persistent data structure72
• Plain old data structure73
• Succinct data structure74

15.6 References
1. C, T H.; L, C E.; R, R L.; S, C-
 (2009). Introduction to Algorithms, Third Edition75 (3 .). T MIT
P. ISBN76 978-026203384877 .
2. B, P E. (15 D 2004). ” ”78 . I P,
V; B, P E. (.). Dictionary of Algorithms and Data Structures
[online]. National Institute of Standards and Technology79 . Retrieved 2018-11-06.
3. ”D ”80 . Encyclopaedia Britannica. 17 April 2017. Retrieved 2018-
11-06.
4. W, P; R, E D. (2003-08-29). Encyclopedia of Computer
Science81 . C, UK: J W  S. . 507–512. ISBN82 978-
047086412883 .
5. ”A D T”84 . Virginia Tech - CS3 Data Structures & Algorithms.

65 https://en.wikipedia.org/wiki/Concurrent_data_structure
66 https://en.wikipedia.org/wiki/Abstract_data_type
67 https://en.wikipedia.org/wiki/Concurrent_data_structure
68 https://en.wikipedia.org/wiki/Data_model
69 https://en.wikipedia.org/wiki/Dynamization
70 https://en.wikipedia.org/wiki/Linked_data_structure
71 https://en.wikipedia.org/wiki/List_of_data_structures
72 https://en.wikipedia.org/wiki/Persistent_data_structure
73 https://en.wikipedia.org/wiki/Plain_old_data_structure
74 https://en.wikipedia.org/wiki/Succinct_data_structure
75 https://dl.acm.org/citation.cfm?id=1614191
76 https://en.wikipedia.org/wiki/ISBN_(identifier)
77 https://en.wikipedia.org/wiki/Special:BookSources/978-0262033848
78 https://xlinux.nist.gov/dads/HTML/datastructur.html
79 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
80 https://www.britannica.com/technology/data-structure
81 http://dl.acm.org/citation.cfm?id=1074100.1074312
82 https://en.wikipedia.org/wiki/ISBN_(identifier)
83 https://en.wikipedia.org/wiki/Special:BookSources/978-0470864128
84 https://opendsa-server.cs.vt.edu/ODSA/Books/CS3/html/ADT.html


6. G P (2006). ”C 8: B F-P D


M”85 . Beginning Database Design. Wrox Publishing86 . ISBN87 978-0-7645-
7490-088 .
7. ”1.5 A   H T”89 . University of Regina - CS210 Lab: Hash
Table.
8. ”W          ”90 .
homes.sice.indiana.edu.
9. D, R. C. (2014). Advanced biotechnology : For B Sc and M Sc students of
biotechnology and other biological sciences. New Delhi: S Chand. ISBN91 978-81-219-
4290-492 . OCLC93 88369553394 .
10. S, L (2014). Data structures (Revised first ed.). New Delhi,
India: McGraw Hill Education. ISBN95 978125902996796 . OCLC97 92779372898 .
11. W E. B (S 29, 1999). ”C++ L N: POD
T”99 . F N A L100 . A  
101  2016-12-03. R 6 D 2016.
12. ”T GNU C M”102 . F S F. R 2014-10-
15.
13. ”F P: R G”103 . F P. R 2014-10-15.
14. M M  N S. ”C D S”104 (PDF).
cs.tau.ac.il.

15.7 Bibliography
• Peter Brass, Advanced Data Structures, Cambridge University Press105 , 2008,
ISBN106 978-0521880374107

85 http://searchsecurity.techtarget.com/generic/0,295582,sid87_gci1184450,00.html
86 https://en.wikipedia.org/wiki/Wrox_Press
87 https://en.wikipedia.org/wiki/ISBN_(identifier)
88 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7645-7490-0
89 http://www.cs.uregina.ca/Links/class-info/210/Hash/
90 http://homes.sice.indiana.edu/yye/lab/teaching/spring2014-C343/datatoobig.php
91 https://en.wikipedia.org/wiki/ISBN_(identifier)
92 https://en.wikipedia.org/wiki/Special:BookSources/978-81-219-4290-4
93 https://en.wikipedia.org/wiki/OCLC_(identifier)
94 http://www.worldcat.org/oclc/883695533
95 https://en.wikipedia.org/wiki/ISBN_(identifier)
96 https://en.wikipedia.org/wiki/Special:BookSources/9781259029967
97 https://en.wikipedia.org/wiki/OCLC_(identifier)
98 http://www.worldcat.org/oclc/927793728
99 https://web.archive.org/web/20161203130543/http://www.fnal.gov/docs/working-groups/fpcltf/Pkg/ISOcxx/doc/POD.html
100 https://en.wikipedia.org/wiki/Fermi_National_Accelerator_Laboratory
101 http://www.fnal.gov/docs/working-groups/fpcltf/Pkg/ISOcxx/doc/POD.html
102 https://www.gnu.org/software/gnu-c-manual/gnu-c-manual.html
103 http://www.freepascal.org/docs-html/ref/ref.html
104 https://www.cs.tau.ac.il/~shanir/concurrent-data-structures.pdf
105 https://en.wikipedia.org/wiki/Cambridge_University_Press
106 https://en.wikipedia.org/wiki/ISBN_(identifier)
107 https://en.wikipedia.org/wiki/Special:BookSources/978-0521880374


• Donald Knuth108 , The Art of Computer Programming109 , vol. 1. Addison-Wesley110 , 3rd
edition, 1997, ISBN111 978-0201896831112
• Dinesh Mehta and Sartaj Sahni113 , Handbook of Data Structures and Applications, Chap-
man and Hall114 /CRC Press115 , 2004, ISBN116 1584884355117
• Niklaus Wirth118 , Algorithms and Data Structures, Prentice Hall119 , 1985, ISBN120 978-
0130220059121

15.8 Further reading


• Alfred Aho122 , John Hopcroft123 , and Jeffrey Ullman124 , Data Structures and Algorithms,
Addison-Wesley, 1983, ISBN125 0-201-00023-7126
• G. H. Gonnet127 and R. Baeza-Yates128 , Handbook of Algorithms and Data Structures -
in Pascal and C129 , second edition, Addison-Wesley, 1991, ISBN130 0-201-41607-7131
• Ellis Horowitz132 and Sartaj Sahni, Fundamentals of Data Structures in Pascal, Computer
Science Press133 , 1984, ISBN134 0-914894-94-3135

15.9 External links

Data structure at Wikipedia's sister projects136

108 https://en.wikipedia.org/wiki/Donald_Knuth
109 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
110 https://en.wikipedia.org/wiki/Addison-Wesley
111 https://en.wikipedia.org/wiki/ISBN_(identifier)
112 https://en.wikipedia.org/wiki/Special:BookSources/978-0201896831
113 https://en.wikipedia.org/wiki/Sartaj_Sahni
114 https://en.wikipedia.org/wiki/Chapman_and_Hall
115 https://en.wikipedia.org/wiki/CRC_Press
116 https://en.wikipedia.org/wiki/ISBN_(identifier)
117 https://en.wikipedia.org/wiki/Special:BookSources/1584884355
118 https://en.wikipedia.org/wiki/Niklaus_Wirth
119 https://en.wikipedia.org/wiki/Prentice_Hall
120 https://en.wikipedia.org/wiki/ISBN_(identifier)
121 https://en.wikipedia.org/wiki/Special:BookSources/978-0130220059
122 https://en.wikipedia.org/wiki/Alfred_Aho
123 https://en.wikipedia.org/wiki/John_Hopcroft
124 https://en.wikipedia.org/wiki/Jeffrey_Ullman
125 https://en.wikipedia.org/wiki/ISBN_(identifier)
126 https://en.wikipedia.org/wiki/Special:BookSources/0-201-00023-7
127 https://en.wikipedia.org/wiki/Gaston_Gonnet
128 https://en.wikipedia.org/wiki/Ricardo_Baeza-Yates
129 https://users.dcc.uchile.cl/~rbaeza/handbook/hbook.html
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/0-201-41607-7
132 https://en.wikipedia.org/wiki/Ellis_Horowitz
133 https://en.wikipedia.org/wiki/Computer_Science_Press
134 https://en.wikipedia.org/wiki/ISBN_(identifier)
135 https://en.wikipedia.org/wiki/Special:BookSources/0-914894-94-3
136 https://en.wikipedia.org/wiki/Wikipedia:Wikimedia_sister_projects


• Definitions137 from Wiktionary


• Media138 from Wikimedia Commons
• Quotations139 from Wikiquote
• Texts140 from Wikisource
• Textbooks141 from Wikibooks
• Resources142 from Wikiversity
• Descriptions143 from the Dictionary of Algorithms and Data Structures144
• Data structures course145
• An Examination of Data Structures from .NET perspective146
• Schaffer, C. Data Structures and Algorithm Analysis147


137 https://en.wiktionary.org/wiki/data_structure
138 https://commons.wikimedia.org/wiki/Category:Data_structures
139 https://en.wikiquote.org/wiki/Special:Search/Data_structure
140 https://en.wikisource.org/wiki/Special:Search/Data_structure
141 https://en.wikibooks.org/wiki/Data_Structures
142 https://en.wikiversity.org/wiki/Topic:Data_structures
143 https://web.archive.org/web/20050624234059/http://www.nist.gov/dads/
144 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
145 http://www.cs.auckland.ac.nz/software/AlgAnim/ds_ToC.html
146 http://msdn.microsoft.com/en-us/library/aa289148(VS.71).aspx
147 http://people.cs.vt.edu/~shaffer/Book/C++3e20110915.pdf


16 Search algorithm

Any algorithm which solves the search problem



Figure 36 Visual representation of a hash table, a data structure that allows for fast
retrieval of information.

In computer science20 , a search algorithm is any algorithm21 which solves the search
problem22 , namely, to retrieve information stored within some data structure, or calculated
in the search space23 of a problem domain24 , either with discrete or continuous values25 .
Specific applications of search algorithms include:
• Problems in combinatorial optimization26 , such as:
• The vehicle routing problem27 , a form of shortest path problem28
• The knapsack problem29 : Given a set of items, each with a weight and a value, deter-
mine the number of each item to include in a collection so that the total weight is less
than or equal to a given limit and the total value is as large as possible.
• The nurse scheduling problem30

20 https://en.wikipedia.org/wiki/Computer_science
21 https://en.wikipedia.org/wiki/Algorithm
22 https://en.wikipedia.org/wiki/Search_problem
23 https://en.wikipedia.org/wiki/Feasible_region
24 https://en.wikipedia.org/w/index.php?title=Problem_domain&action=edit&redlink=1
25 https://en.wikipedia.org/wiki/Continuous_or_discrete_variable
26 https://en.wikipedia.org/wiki/Combinatorial_optimization
27 https://en.wikipedia.org/wiki/Vehicle_routing_problem
28 https://en.wikipedia.org/wiki/Shortest_path_problem
29 https://en.wikipedia.org/wiki/Knapsack_problem
30 https://en.wikipedia.org/wiki/Nurse_scheduling_problem


• Problems in constraint satisfaction31 , such as:
• The map coloring problem32
• Filling in a sudoku33 or crossword puzzle34
• In game theory35 and especially combinatorial game theory36 , choosing the best move to
make next (such as with the minmax37 algorithm)
• Finding a combination or password from the whole set of possibilities
• Factoring38 an integer (an important problem in cryptography39 )
• Optimizing an industrial process, such as a chemical reaction40 , by changing the param-
eters of the process (like temperature, pressure, and pH)
• Retrieving a record from a database41
• Finding the maximum or minimum value in a list42 or array43
• Checking to see if a given value is present in a set of values
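One of the combinatorial problems listed above, the knapsack problem, can be attacked by the most basic search strategy: exhaustively trying every subset of items. The sketch below (with hypothetical item data) is illustrative only; the number of subsets grows exponentially, so this is practical only for small inputs:

```python
from itertools import combinations

def knapsack_brute_force(items, limit):
    """items: (weight, value) pairs; search every subset for the best value."""
    best_value = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, _ in subset)
            value = sum(v for _, v in subset)
            if weight <= limit and value > best_value:
                best_value = value
    return best_value

# Hypothetical items as (weight, value) pairs, with a weight limit of 5.
items = [(2, 3), (3, 4), (4, 5), (5, 8)]
print(knapsack_brute_force(items, 5))  # 8: take the single (5, 8) item
```

Practical solvers replace this exhaustive enumeration with dynamic programming or branch-and-bound techniques, which prune most of the search space.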
The classic search problems described above and web search44 are both problems in in-
formation retrieval45 , but are generally studied as separate subfields and are solved and
evaluated differently. Web search problems are generally focused on filtering and finding
documents that are most relevant to human queries. Classic search algorithms are typically
evaluated on how fast they can find a solution, and whether that solution is guaranteed to
be optimal. Though information retrieval algorithms must be fast, the quality of ranking
is more important, as is whether good results have been left out and bad results included.
The appropriate search algorithm often depends on the data structure being searched, and
may also include prior knowledge about the data. Some database structures are specially
constructed to make search algorithms faster or more efficient, such as a search tree46 , hash
map47 , or a database index48 .[1][2]
Search algorithms can be classified based on their mechanism of searching. Linear search49
algorithms check every record for the one associated with a target key in a linear fashion.[3]
Binary, or half interval searches50 , repeatedly target the center of the search structure and
divide the search space in half. Comparison search algorithms improve on linear searching
by successively eliminating records based on comparisons of the keys until the target record
is found, and can be applied on data structures with a defined order.[4] Digital search algo-

31 https://en.wikipedia.org/wiki/Constraint_satisfaction
32 https://en.wikipedia.org/wiki/Map_coloring_problem
33 https://en.wikipedia.org/wiki/Sudoku
34 https://en.wikipedia.org/wiki/Crossword_puzzle
35 https://en.wikipedia.org/wiki/Game_theory
36 https://en.wikipedia.org/wiki/Combinatorial_game_theory
37 https://en.wikipedia.org/wiki/Minmax
38 https://en.wikipedia.org/wiki/Factorization
39 https://en.wikipedia.org/wiki/Cryptography
40 https://en.wikipedia.org/wiki/Chemical_reaction
41 https://en.wikipedia.org/wiki/Database
42 https://en.wikipedia.org/wiki/List_(abstract_data_type)
43 https://en.wikipedia.org/wiki/Array_data_structure
44 https://en.wikipedia.org/wiki/Web_search
45 https://en.wikipedia.org/wiki/Information_retrieval
46 https://en.wikipedia.org/wiki/Search_tree
47 https://en.wikipedia.org/wiki/Hash_map
48 https://en.wikipedia.org/wiki/Database_index
49 https://en.wikipedia.org/wiki/Linear_search
50 https://en.wikipedia.org/wiki/Binary_search_algorithm


rithms work based on the properties of digits in data structures that use numerical keys.[5]
Finally, hashing51 directly maps keys to records based on a hash function52 .[6] Searches
outside a linear search require that the data be sorted in some way.
Algorithms are often evaluated by their computational complexity53 , or maximum theoret-
ical run time. Binary search functions, for example, have a maximum complexity of O(log
n), or logarithmic time. This means that the maximum number of operations needed to
find the search target is a logarithmic function of the size of the search space.
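A minimal Python sketch (illustrative implementations, not drawn from a particular library) shows the two mechanisms side by side: linear search inspects every record and works on unsorted data, while binary search halves a sorted search space at each step, giving the O(log n) bound just described:

```python
def linear_search(records, target):
    """Check every record in order; works on unsorted data. O(n)."""
    for i, value in enumerate(records):
        if value == target:
            return i
    return -1

def binary_search(sorted_records, target):
    """Repeatedly halve the search space; requires sorted data. O(log n)."""
    lo, hi = 0, len(sorted_records) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_records[mid] == target:
            return mid
        elif sorted_records[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

data = [3, 7, 11, 15, 22, 40]
print(linear_search(data, 15))  # 3
print(binary_search(data, 15))  # 3
print(binary_search(data, 5))   # -1: not present
```

For a list of a million sorted records, binary search needs at most about 20 comparisons, whereas linear search may need a million.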

16.1 Classes

16.1.1 For virtual search spaces

See also: Solver54
Algorithms for searching virtual spaces are used in the constraint satisfaction
problem55 , where the goal is to find a set of value assignments to certain variables
that will satisfy specific mathematical equations56 and inequations57 . They are
also used when the goal is to find a variable assignment that will maximize or minimize58 a
certain function of those variables. Algorithms for these problems include the basic brute-
force search59 (also called ”naïve” or ”uninformed” search), and a variety of heuristics60 that
try to exploit partial knowledge about the structure of this space, such as linear relaxation,
constraint generation, and constraint propagation61 .
An important subclass are the local search62 methods, that view the elements of the search
space as the vertices63 of a graph, with edges defined by a set of heuristics applicable to the
case; and scan the space by moving from item to item along the edges, for example according
to the steepest descent64 or best-first65 criterion, or in a stochastic search66 . This category
includes a great variety of general metaheuristic67 methods, such as simulated annealing68 ,
tabu search69 , A-teams, and genetic programming70 , that combine arbitrary heuristics in
specific ways.

51 https://en.wikipedia.org/wiki/Hash_table
52 https://en.wikipedia.org/wiki/Hash_function
53 https://en.wikipedia.org/wiki/Computational_complexity
54 https://en.wikipedia.org/wiki/Solver
55 https://en.wikipedia.org/wiki/Constraint_satisfaction_problem
56 https://en.wikipedia.org/wiki/Equation
57 https://en.wikipedia.org/wiki/Inequation
58 https://en.wikipedia.org/wiki/Discrete_optimization
59 https://en.wikipedia.org/wiki/Brute-force_search
60 https://en.wikipedia.org/wiki/Heuristic_function
61 https://en.wikipedia.org/wiki/Local_consistency
62 https://en.wikipedia.org/wiki/Local_search_(optimization)
63 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
64 https://en.wikipedia.org/wiki/Gradient_descent
65 https://en.wikipedia.org/wiki/Best-first_search
66 https://en.wikipedia.org/wiki/Stochastic_optimization
67 https://en.wikipedia.org/wiki/Metaheuristic
68 https://en.wikipedia.org/wiki/Simulated_annealing
69 https://en.wikipedia.org/wiki/Tabu_search
70 https://en.wikipedia.org/wiki/Genetic_programming


This class also includes various tree search algorithms71 , that view the elements as vertices
of a tree72 , and traverse that tree in some special order. Examples of the latter include
the exhaustive methods such as depth-first search73 and breadth-first search74 , as well as
various heuristic-based search tree pruning75 methods such as backtracking76 and branch
and bound77 . Unlike general metaheuristics, which at best work only in a probabilistic sense,
many of these tree-search methods are guaranteed to find the exact or optimal solution, if
given enough time. This is called ”completeness78 ”.
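An exhaustive tree search of this kind can be sketched in Python (the tree and values are hypothetical); because every branch is eventually explored, the search is complete in the sense just described:

```python
def depth_first_search(tree, target, path=()):
    """Exhaustively explore a tree given as (value, [children]) tuples.

    Returns the root-to-target path as a tuple, or None if absent.
    """
    value, children = tree
    path = path + (value,)
    if value == target:
        return path
    for child in children:          # descend into each subtree in turn
        found = depth_first_search(child, target, path)
        if found is not None:
            return found
    return None                     # this subtree exhausted without a hit

# A small hypothetical tree:
#        1
#       / \
#      2   3
#     /     \
#    4       5
tree = (1, [(2, [(4, [])]), (3, [(5, [])])])
print(depth_first_search(tree, 5))  # (1, 3, 5)
print(depth_first_search(tree, 9))  # None: exhaustive, so absence is certain
```

Backtracking and branch-and-bound follow the same skeleton but prune subtrees that provably cannot contain a (better) solution.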
Another important sub-class consists of algorithms for exploring the game tree79 of multiple-
player games, such as chess80 or backgammon81 , whose nodes consist of all possible game
situations that could result from the current situation. The goal in these problems is to find
the move that provides the best chance of a win, taking into account all possible moves of
the opponent(s). Similar problems occur when humans or machines have to make successive
decisions whose outcomes are not entirely under one's control, such as in robot82 guidance
or in marketing83 , financial84 , or military85 strategy planning. This kind of problem — com-
binatorial search86 — has been extensively studied in the context of artificial intelligence87 .
Examples of algorithms for this class are the minimax algorithm88 , alpha–beta pruning89 ,
and the A* algorithm90 .
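The minimax idea can be sketched in Python on a toy game tree (the payoff values are hypothetical); the maximizing player assumes the opponent will always answer with the move that is worst for the maximizer:

```python
def minimax(node, maximizing):
    """node is either a number (a game outcome) or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node                 # leaf: the value of this game position
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Hypothetical two-ply game tree: the maximizer picks a branch, then the
# minimizer picks the outcome within it that is worst for the maximizer.
game_tree = [[3, 12], [2, 8], [14, 1]]
print(minimax(game_tree, True))  # 3: branch [3, 12] guarantees at least 3
```

Alpha–beta pruning computes the same value while skipping branches that cannot affect the result; A* instead guides the search with a heuristic estimate of remaining cost.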

16.1.2 For sub-structures of a given structure

The name ”combinatorial search” is generally used for algorithms that look for a specific sub-
structure of a given discrete structure91 , such as a graph, a string92 , a finite group93 , and so
on. The term combinatorial optimization94 is typically used when the goal is to find a sub-
structure with a maximum (or minimum) value of some parameter. (Since the sub-structure
is usually represented in the computer by a set of integer variables with constraints, these
problems can be viewed as special cases of constraint satisfaction or discrete optimization;

71 https://en.wikipedia.org/wiki/Tree_traversal
72 https://en.wikipedia.org/wiki/Tree_(graph_theory)
73 https://en.wikipedia.org/wiki/Depth-first_search
74 https://en.wikipedia.org/wiki/Breadth-first_search
75 https://en.wikipedia.org/wiki/Pruning_(decision_trees)
76 https://en.wikipedia.org/wiki/Backtracking
77 https://en.wikipedia.org/wiki/Branch_and_bound
78 https://en.wikipedia.org/wiki/Completeness_(logic)
79 https://en.wikipedia.org/wiki/Game_tree
80 https://en.wikipedia.org/wiki/Chess
81 https://en.wikipedia.org/wiki/Backgammon
82 https://en.wikipedia.org/wiki/Robot
83 https://en.wikipedia.org/wiki/Marketing
84 https://en.wikipedia.org/wiki/Finance
85 https://en.wikipedia.org/wiki/Military
86 https://en.wikipedia.org/wiki/Combinatorial_search
87 https://en.wikipedia.org/wiki/Artificial_intelligence
88 https://en.wikipedia.org/wiki/Minimax
89 https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning
90 https://en.wikipedia.org/wiki/A*_search_algorithm
91 https://en.wikipedia.org/wiki/Discrete_mathematics
92 https://en.wikipedia.org/wiki/String_(computer_science)
93 https://en.wikipedia.org/wiki/Group_(mathematics)
94 https://en.wikipedia.org/wiki/Combinatorial_optimization


but they are usually formulated and solved in a more abstract setting where the internal
representation is not explicitly mentioned.)
An important and extensively studied subclass consists of the graph algorithms95 , in particular
graph traversal96 algorithms, for finding specific sub-structures in a given graph — such as
subgraphs97 , paths98 , circuits, and so on. Examples include Dijkstra's algorithm99 , Kruskal's
algorithm100 , the nearest neighbour algorithm101 , and Prim's algorithm102 .
Another important subclass of this category comprises the string searching algorithms103 , which
search for patterns within strings. Two famous examples are the Boyer–Moore104 and
Knuth–Morris–Pratt algorithms105 , and several algorithms based on the suffix tree106 data
structure.

16.1.3 Search for the maximum of a function

In 1953, American statistician107 Jack Kiefer108 devised Fibonacci search109 which can be
used to find the maximum of a unimodal function and has many other applications in
computer science.

16.1.4 For quantum computers

There are also search methods designed for quantum computers110 , like Grover's algo-
rithm111 , that are theoretically faster than linear or brute-force search even without the
help of data structures or heuristics.

16.2 See also


• Backward induction112
• Content-addressable memory113 hardware

95 https://en.wikipedia.org/wiki/List_of_algorithms#Graph_algorithms
96 https://en.wikipedia.org/wiki/Graph_traversal
97 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#Subgraphs
98 https://en.wikipedia.org/wiki/Path_(graph_theory)
99 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
100 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
101 https://en.wikipedia.org/wiki/Nearest_neighbour_algorithm
102 https://en.wikipedia.org/wiki/Prim%27s_algorithm
103 https://en.wikipedia.org/wiki/String_searching_algorithm
104 https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm
105 https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm
106 https://en.wikipedia.org/wiki/Suffix_tree
107 https://en.wikipedia.org/wiki/Statistics
108 https://en.wikipedia.org/wiki/Jack_Kiefer_(statistician)
109 https://en.wikipedia.org/wiki/Fibonacci_search_technique
110 https://en.wikipedia.org/wiki/Quantum_computing
111 https://en.wikipedia.org/wiki/Grover%27s_algorithm
112 https://en.wikipedia.org/wiki/Backward_induction
113 https://en.wikipedia.org/wiki/Content-addressable_memory


• Dual-phase evolution114 − A process that drives self-organization within complex adap-
tive systems
• Linear search problem115
• No free lunch in search and optimization116
• Recommender system117 − uses statistical methods to rank results in very large data
sets
• Search engine (computing)118
• Search game119
• Selection algorithm120
• Solver121
• Sorting algorithm122 − An algorithm that arranges lists in order, necessary for executing
certain search algorithms
• Web search engine123 − Software system that is designed to search for information on the
World Wide Web
Categories:
• Category:Search algorithms124

16.3 References

16.3.1 Citations
1. Beame & Fich 2002125 , p. 39.
2. Knuth 1998127 , §6.5 (”Retrieval on Secondary Keys”).
3. Knuth 1998129 , §6.1 (”Sequential Searching”).
4. Knuth 1998131 , §6.2 (”Searching by Comparison of Keys”).

114 https://en.wikipedia.org/wiki/Dual-phase_evolution
115 https://en.wikipedia.org/wiki/Linear_search_problem
116 https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization
117 https://en.wikipedia.org/wiki/Recommender_system
118 https://en.wikipedia.org/wiki/Search_engine_(computing)
119 https://en.wikipedia.org/wiki/Search_game
120 https://en.wikipedia.org/wiki/Selection_algorithm
121 https://en.wikipedia.org/wiki/Solver
122 https://en.wikipedia.org/wiki/Sorting_algorithm
123 https://en.wikipedia.org/wiki/Web_search_engine
124 https://en.wikipedia.org/wiki/Category:Search_algorithms
125 #CITEREFBeameFich2002
127 #CITEREFKnuth1998
129 #CITEREFKnuth1998
131 #CITEREFKnuth1998


5. Knuth 1998133 , §6.3 (”Digital Searching”).
6. Knuth 1998135 , §6.4 (”Hashing”).
7. Kagan, E.; Ben-Gal, I. (2014). ”A Group-Testing Algorithm with Online Informational
Learning”137 (PDF). IIE Transactions. 46 (2): 164–184.

16.3.2 Bibliography

Books
• K, D139 (1998). Sorting and Searching. The Art of Computer Program-
ming140 . 3 (2nd ed.). Reading, MA: Addison-Wesley Professional.CS1 maint: ref=harv
(link141 )

Articles
• S, T; S, F E. (2002-08-01). ”O B 
 P P  R P”. Journal of Computer and
System Sciences. 65 (1): 38–72. doi142 :10.1006/jcss.2002.1822143 .CS1 maint: ref=harv
(link144 )

16.4 External links


• Uninformed Search Project145 at the Wikiversity146 .

133 #CITEREFKnuth1998
135 #CITEREFKnuth1998
137 http://www.eng.tau.ac.il/~bengal/GTA.pdf
139 https://en.wikipedia.org/wiki/Donald_Knuth
140 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
142 https://en.wikipedia.org/wiki/Doi_(identifier)
143 https://doi.org/10.1006%2Fjcss.2002.1822
145 https://en.wikiversity.org/wiki/Uninformed_Search_Project
146 https://en.wikipedia.org/wiki/Wikiversity

17 Linear search


Linear search
Class: Search algorithm
Worst-case performance: O(n)
Best-case performance: O(1)
Average performance: O(n)
Worst-case space complexity: O(1) iterative

In computer science25 , a linear search or sequential search is a method for finding an
element within a list26 . It sequentially checks each element of the list until a match is found
or the whole list has been searched.[1]
A linear search runs in at worst linear time27 and makes at most n comparisons, where n is
the length of the list. If each element is equally likely to be searched, then linear search has
an average case of (n + 1)/2 comparisons, but the average case can be affected if the search

25 https://en.wikipedia.org/wiki/Computer_science
26 https://en.wikipedia.org/wiki/List_(computing)
27 https://en.wikipedia.org/wiki/Time_complexity#linear_time


probabilities for each element vary. Linear search is rarely practical because other search
algorithms and schemes, such as the binary search algorithm28 and hash tables29 , allow
significantly faster searching for all but short lists.[2]

17.1 Algorithm

A linear search sequentially checks each element of the list until it finds an element that
matches the target value. If the algorithm reaches the end of the list, the search terminates
unsuccessfully.[1]

17.1.1 Basic algorithm

Given a list L of n elements with values or records30 L0 , ..., Ln−1 , and target value T, the
following subroutine31 uses linear search to find the index of the target T in L.[3]
1. Set i to 0.
2. If Li = T, the search terminates successfully; return i.
3. Increase i by 1.
4. If i < n, go to step 2. Otherwise, the search terminates unsuccessfully.
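The four steps above can be sketched directly in Python; returning None for an unsuccessful search is an assumption here, since the article leaves the failure value unspecified:

```python
def linear_search(L, T):
    """Find the index of target T in list L using steps 1-4 above; None if absent."""
    i = 0                      # step 1
    while i < len(L):          # step 4's bounds check, hoisted into the loop condition
        if L[i] == T:          # step 2
            return i           # successful termination
        i += 1                 # step 3
    return None                # unsuccessful termination
```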

17.1.2 With a sentinel

The basic algorithm above makes two comparisons per iteration: one to check if Li equals
T, and the other to check if i still points to a valid index of the list. By adding an extra
record Ln to the list (a sentinel value32 ) that equals the target, the second comparison can
be eliminated until the end of the search, making the algorithm faster. The search will
reach the sentinel if the target is not contained within the list.[4]
1. Set i to 0.
2. If Li = T, go to step 4.
3. Increase i by 1 and go to step 2.
4. If i < n, the search terminates successfully; return i. Else, the search terminates
unsuccessfully.
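The sentinel variant can be sketched as follows; copying the list in order to append the sentinel is an implementation convenience here, not part of the algorithm:

```python
def linear_search_sentinel(L, T):
    """Sentinel variant: only one comparison per iteration inside the loop."""
    L = L + [T]                # append the target as a sentinel (works on a copy)
    i = 0                      # step 1
    while L[i] != T:           # step 2: a single comparison per iteration
        i += 1                 # step 3
    return i if i < len(L) - 1 else None   # step 4: did we stop before the sentinel?
```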

17.1.3 In an ordered table

If the list is ordered such that L0 ≤ L1 ... ≤ Ln−1 , the search can establish the absence of
the target more quickly by concluding the search once Li exceeds the target. This variation
requires a sentinel that is greater than the target.[5]

28 https://en.wikipedia.org/wiki/Binary_search_algorithm
29 https://en.wikipedia.org/wiki/Hash_table
30 https://en.wikipedia.org/wiki/Record_(computer_science)
31 https://en.wikipedia.org/wiki/Subroutine
32 https://en.wikipedia.org/wiki/Sentinel_value


1. Set i to 0.
2. If Li ≥ T, go to step 4.
3. Increase i by 1 and go to step 2.
4. If Li = T, the search terminates successfully; return i. Else, the search terminates
unsuccessfully.
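A sketch of the ordered-table variant; appending the target itself as the sentinel is a simplification (any value that compares as ≥ T would do, per the text above):

```python
def linear_search_ordered(L, T):
    """Ordered-table variant; L must be sorted in non-decreasing order."""
    L = L + [T]                # sentinel: compares as >= T, stopping the scan (step 2)
    i = 0                      # step 1
    while L[i] < T:            # steps 2-3: skip past smaller elements
        i += 1
    return i if i < len(L) - 1 and L[i] == T else None   # step 4
```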

17.2 Analysis

For a list with n items, the best case is when the value is equal to the first element of the
list, in which case only one comparison is needed. The worst case is when the value is not in
the list (or occurs only once at the end of the list), in which case n comparisons are needed.
If the value being sought occurs k times in the list, and all orderings of the list are equally
likely, the expected number of comparisons is


n                    if k = 0
(n + 1)/(k + 1)      if 1 ≤ k ≤ n.
For example, if the value being sought occurs once in the list, and all orderings of the list are
equally likely, the expected number of comparisons is (n + 1)/2. However, if it is known that
it occurs once, then at most n − 1 comparisons are needed, and the expected number of
comparisons is (n + 2)(n − 1)/(2n) (for example, for n = 2 this is 1, corresponding to a single
if-then-else construct).
Either way, asymptotically33 the worst-case cost and the expected cost of linear search are
both O34 (n).
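These closed forms can be checked by brute force. The sketch below (an illustration, not part of the article) averages the comparison count of the basic procedure over every placement of k copies of the target among n positions:

```python
from itertools import combinations

def expected_comparisons(n, k):
    """Average comparisons of basic linear search over all placements of k targets."""
    if k == 0:
        return n                            # every search scans the whole list
    total = count = 0
    for pos in combinations(range(n), k):   # positions holding the target value
        total += pos[0] + 1                 # the search stops at the first occurrence
        count += 1
    return total / count

# e.g. expected_comparisons(5, 1) == 3.0 == (5 + 1) / (1 + 1)
```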

17.2.1 Non-uniform probabilities

The performance of linear search improves if the desired value is more likely to be near the
beginning of the list than to its end. Therefore, if some values are much more likely to be
searched than others, it is desirable to place them at the beginning of the list.
In particular, when the list items are arranged in order of decreasing probability, and these
probabilities are geometrically distributed35 , the cost of linear search is only O(1).[6]
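To see why, the expected comparison count can be computed directly for a truncated geometric distribution; the parameter p = 0.5 below is an illustrative assumption:

```python
def expected_cost_geometric(n, p=0.5):
    """Expected comparisons when position i (1-based) is sought with probability
    proportional to p * (1 - p)**(i - 1)."""
    weights = [p * (1 - p) ** (i - 1) for i in range(1, n + 1)]
    z = sum(weights)                                   # renormalize the truncated tail
    return sum(i * w for i, w in enumerate(weights, start=1)) / z

# approaches 1/p = 2 as n grows, independent of n, hence O(1)
```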

33 https://en.wikipedia.org/wiki/Asymptotic_complexity
34 https://en.wikipedia.org/wiki/Big_O_notation
35 https://en.wikipedia.org/wiki/Geometric_distribution


17.3 Application

Linear search is usually very simple to implement, and is practical when the list has only a
few elements, or when performing a single search in an un-ordered list.
When many values have to be searched in the same list, it often pays to pre-process the
list in order to use a faster method. For example, one may sort36 the list and use binary
search37 , or build an efficient search data structure38 from it. Should the content of the list
change frequently, repeated re-organization may be more trouble than it is worth.
As a result, even though in theory other search algorithms may be faster than linear search
(for instance binary search39 ), in practice even on medium-sized arrays (around 100 items
or less) it might be infeasible to use anything else. On larger arrays, it only makes sense
to use other, faster search methods if the data is large enough, because the initial time to
prepare (sort) the data is comparable to many linear searches.[7]

17.4 See also


• Ternary search40
• Hash table41
• Linear search problem42

17.5 References

17.5.1 Citations
1. Knuth 199843 , §6.1 (”Sequential search”).
2. Knuth 199845 , §6.2 (”Searching by Comparison Of Keys”).
3. Knuth 199847 , §6.1 (”Sequential search”), subsection ”Algorithm B”.

36 https://en.wikipedia.org/wiki/Sort_(computing)
37 https://en.wikipedia.org/wiki/Binary_search_algorithm
38 https://en.wikipedia.org/wiki/Search_data_structure
39 https://en.wikipedia.org/wiki/Binary_search
40 https://en.wikipedia.org/wiki/Ternary_search
41 https://en.wikipedia.org/wiki/Hash_table
42 https://en.wikipedia.org/wiki/Linear_search_problem
43 #CITEREFKnuth1998
45 #CITEREFKnuth1998
47 #CITEREFKnuth1998


4. Knuth 199849 , §6.1 (”Sequential search”), subsection ”Algorithm Q”.
5. Knuth 199851 , §6.1 (”Sequential search”), subsection ”Algorithm T”.
6. Knuth, Donald53 (1997). ”Section 6.1: Sequential Searching”. Sorting
and Searching. The Art of Computer Programming. 3 (3rd ed.). Addison-Wesley.
pp. 396–408. ISBN54 0-201-89685-055 .
7. Horvath, Adam. ”Binary search and linear search performance on the
.NET and Mono platform”56 . Retrieved 19 April 2013.

17.5.2 Works
• K, D57 (1998). Sorting and Searching. The Art of Computer Program-
ming58 . 3 (2nd ed.). Reading, MA: Addison-Wesley Professional.CS1 maint: ref=harv
(link59 ) ISBN60 0-201-89685-061
62 63

49 #CITEREFKnuth1998
51 #CITEREFKnuth1998
53 https://en.wikipedia.org/wiki/Donald_Knuth
54 https://en.wikipedia.org/wiki/ISBN_(identifier)
55 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
56 http://blog.teamleadnet.com/2012/02/quicksort-binary-search-and-linear.html
57 https://en.wikipedia.org/wiki/Donald_Knuth
58 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
60 https://en.wikipedia.org/wiki/ISBN_(identifier)
61 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0

18 Binary search algorithm

This article is about searching a finite sorted array. For searching continuous function
values, see bisection method1 .

Binary search algorithm

Visualization of the binary search algorithm where 7 is the target value
Class: Search algorithm
Data structure: Array
Worst-case performance: O(log n)
Best-case performance: O(1)
Average performance: O(log n)
Worst-case space complexity: O(1)

In computer science2 , binary search, also known as half-interval search,[1] logarithmic
search,[2] or binary chop,[3] is a search algorithm3 that finds the position of a target value
within a sorted array4 .[4][5] Binary search compares the target value to the middle element
of the array. If they are not equal, the half in which the target cannot lie is eliminated and
the search continues on the remaining half, again taking the middle element to compare to
the target value, and repeating this until the target value is found. If the search ends with
the remaining half being empty, the target is not in the array.
Binary search runs in logarithmic time5 in the worst case6 , making O(log n) comparisons,
where n is the number of elements in the array, the O is Big O notation7 , and log is the
logarithm8 .[6] Binary search is faster than linear search9 except for small arrays. However,
the array must be sorted first to be able to apply binary search. There are specialized data
structures10 designed for fast searching, such as hash tables11 , that can be searched more
efficiently than binary search. However, binary search can be used to solve a wider range

1 https://en.wikipedia.org/wiki/Bisection_method
2 https://en.wikipedia.org/wiki/Computer_science
3 https://en.wikipedia.org/wiki/Search_algorithm
4 https://en.wikipedia.org/wiki/Sorted_array
5 https://en.wikipedia.org/wiki/Time_complexity#Logarithmic_time
6 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
7 https://en.wikipedia.org/wiki/Big_O_notation
8 https://en.wikipedia.org/wiki/Logarithm
9 https://en.wikipedia.org/wiki/Linear_search
10 https://en.wikipedia.org/wiki/Data_structures
11 https://en.wikipedia.org/wiki/Hash_tables


of problems, such as finding the next-smallest or next-largest element in the array relative
to the target even if it is absent from the array.
There are numerous variations of binary search. In particular, fractional cascading12 speeds
up binary searches for the same value in multiple arrays. Fractional cascading efficiently
solves a number of search problems in computational geometry13 and in numerous other
fields. Exponential search14 extends binary search to unbounded lists. The binary search
tree15 and B-tree16 data structures are based on binary search.

18.1 Algorithm

Binary search works on sorted arrays. Binary search begins by comparing an element in
the middle of the array with the target value. If the target value matches the element, its
position in the array is returned. If the target value is less than the element, the search
continues in the lower half of the array. If the target value is greater than the element, the
search continues in the upper half of the array. By doing this, the algorithm eliminates the
half in which the target value cannot lie in each iteration.[7]

18.1.1 Procedure

Given an array A of n elements with values or records17 A0 , A1 , A2 , . . . , An−1 sorted such
that A0 ≤ A1 ≤ A2 ≤ · · · ≤ An−1 , and target value T , the following subroutine18 uses binary
search to find the index of T in A.[7]
search to find the index of T in A.[7]
1. Set L to 0 and R to n − 1.
2. If L > R, the search terminates as unsuccessful.
3. Set m (the position of the middle element) to the floor19 of (L + R)/2, which is the
greatest integer less than or equal to (L + R)/2.
4. If Am < T , set L to m + 1 and go to step 2.
5. If Am > T , set R to m − 1 and go to step 2.
6. Now Am = T , the search is done; return m.
This iterative procedure keeps track of the search boundaries with the two variables L and
R. The procedure may be expressed in pseudocode20 as follows, where the variable names
and types remain the same as above, floor is the floor function, and unsuccessful refers
to a specific value that conveys the failure of the search.[7]

12 https://en.wikipedia.org/wiki/Fractional_cascading
13 https://en.wikipedia.org/wiki/Computational_geometry
14 https://en.wikipedia.org/wiki/Exponential_search
15 https://en.wikipedia.org/wiki/Binary_search_tree
16 https://en.wikipedia.org/wiki/B-tree
17 https://en.wikipedia.org/wiki/Record_(computer_science)
18 https://en.wikipedia.org/wiki/Subroutine
19 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
20 https://en.wikipedia.org/wiki/Pseudocode


function binary_search(A, n, T) is
L := 0
R := n − 1
while L ≤ R do
m := floor((L + R) / 2)
if A[m] < T then
L := m + 1
else if A[m] > T then
R := m - 1
else:
return m
return unsuccessful
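The pseudocode above translates almost line for line into Python; returning None for an unsuccessful search is an assumption, as the pseudocode leaves that value abstract:

```python
def binary_search(A, T):
    """Iterative binary search on a sorted list A; returns an index of T or None."""
    L, R = 0, len(A) - 1
    while L <= R:
        m = (L + R) // 2          # floor of the midpoint
        if A[m] < T:
            L = m + 1             # the target can only lie in the upper half
        elif A[m] > T:
            R = m - 1             # the target can only lie in the lower half
        else:
            return m              # successful search
    return None                   # unsuccessful search
```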

Alternatively, the algorithm may take the ceiling21 of (L + R)/2. This may change the result
if the target value appears more than once in the array.

Alternative procedure

In the above procedure, the algorithm checks whether the middle element (m) is equal to
the target (T ) in every iteration. Some implementations leave out this check during each
iteration. The algorithm would perform this check only when one element is left (when
L = R). This results in a faster comparison loop, as one comparison is eliminated per
iteration. However, it requires one more iteration on average.[8]
Hermann Bottenbruch22 published the first implementation to leave out this check in
1962.[8][9]
1. Set L to 0 and R to n − 1.
2. While L ̸= R,
a) Set m (the position of the middle element) to the ceiling23 of (L + R)/2, which
is the least integer greater than or equal to (L + R)/2.
b) If Am > T , set R to m − 1.
c) Else, Am ≤ T ; set L to m.
3. Now L = R, the search is done. If AL = T , return L. Otherwise, the search terminates
as unsuccessful.
Where ceil is the ceiling function, the pseudocode for this version is:
function binary_search_alternative(A, n, T) is
L := 0
R := n − 1
while L != R do
m := ceil((L + R) / 2)
if A[m] > T then
R := m - 1
else:
L := m
if A[L] = T then
return L
return unsuccessful
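A Python sketch of this variant follows; the empty-array guard is an addition, since the pseudocode implicitly assumes n ≥ 1. The ceiling is computed with floor division to avoid floating point:

```python
def binary_search_alternative(A, T):
    """Bottenbruch's variant: the equality test runs only once, after the loop."""
    if not A:
        return None            # the pseudocode assumes a non-empty array
    L, R = 0, len(A) - 1
    while L != R:
        m = -((L + R) // -2)   # ceiling of (L + R) / 2, via negated floor division
        if A[m] > T:
            R = m - 1
        else:                  # A[m] <= T
            L = m
    return L if A[L] == T else None
```

As noted below for duplicate elements, this version lands on the rightmost matching element.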

21 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
22 https://en.wikipedia.org/wiki/Hermann_Bottenbruch
23 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions


18.1.2 Duplicate elements

The procedure may return any index whose element is equal to the target value, even if
there are duplicate elements in the array. For example, if the array to be searched was
[1, 2, 3, 4, 4, 5, 6, 7] and the target was 4, then it would be correct for the algorithm to either
return the 4th (index 3) or 5th (index 4) element. The regular procedure would return the
4th element (index 3) in this case. It does not always return the first duplicate (consider
[1, 2, 4, 4, 4, 5, 6, 7] which still returns the 4th element). However, it is sometimes necessary
to find the leftmost element or the rightmost element for a target value that is duplicated
in the array. In the above example, the 4th element is the leftmost element of the value 4,
while the 5th element is the rightmost element of the value 4. The alternative procedure
above will always return the index of the rightmost element if such an element exists.[9]

Procedure for finding the leftmost element

To find the leftmost element, the following procedure can be used:[10]


1. Set L to 0 and R to n.
2. While L < R,
a) Set m (the position of the middle element) to the floor24 of (L + R)/2, which is
the greatest integer less than or equal to (L + R)/2.
b) If Am < T , set L to m + 1.
c) Else, Am ≥ T ; set R to m.
3. Return L.
If L < n and AL = T , then AL is the leftmost element that equals T . Even if T is not in
the array, L is the rank25 of T in the array, or the number of elements in the array that are
less than T .
Where floor is the floor function, the pseudocode for this version is:
function binary_search_leftmost(A, n, T):
L := 0
R := n
while L < R:
m := floor((L + R) / 2)
if A[m] < T:
L := m + 1
else:
R := m
return L

Procedure for finding the rightmost element

To find the rightmost element, the following procedure can be used:[10]


1. Set L to 0 and R to n.

24 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
25 #Approximate_matches


2. While L < R,
a) Set m (the position of the middle element) to the floor26 of (L + R)/2, which is
the greatest integer less than or equal to (L + R)/2.
b) If Am > T , set R to m.
c) Else, Am ≤ T ; set L to m + 1.
3. Return R − 1.
If L > 0 and AL−1 = T , then AL−1 is the rightmost element that equals T . Even if T is not
in the array, n − L is the number of elements in the array that are greater than T .
Where floor is the floor function, the pseudocode for this version is:
function binary_search_rightmost(A, n, T):
L := 0
R := n
while L < R:
m := floor((L + R) / 2)
if A[m] > T:
R := m
else:
L := m + 1
return R - 1
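In Python's standard library, these two procedures exist ready-made as bisect.bisect_left and bisect.bisect_right; the correspondence to the pseudocode above can be checked directly:

```python
from bisect import bisect_left, bisect_right

A = [1, 2, 4, 4, 4, 5, 6, 7]

# the leftmost procedure returns the rank of T: bisect_left
assert bisect_left(A, 4) == 2          # index of the leftmost 4
# the rightmost procedure returns R - 1: bisect_right minus one
assert bisect_right(A, 4) - 1 == 4     # index of the rightmost 4
# for an absent target, both give the rank / insertion point
assert bisect_left(A, 3) == bisect_right(A, 3) == 2
```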

18.1.3 Approximate matches

Figure 39 Binary search can be adapted to compute approximate matches. In the
example above, the rank, predecessor, successor, and nearest neighbor are shown for the
target value 5, which is not in the array.

The above procedure only performs exact matches, finding the position of a target value.
However, it is trivial to extend binary search to perform approximate matches because

26 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions


binary search operates on sorted arrays. For example, binary search can be used to compute,
for a given value, its rank (the number of smaller elements), predecessor (next-smallest
element), successor (next-largest element), and nearest neighbor27 . Range queries28 seeking
the number of elements between two values can be performed with two rank queries.[11]
• Rank queries can be performed with the procedure for finding the leftmost element29 .
The number of elements less than the target value is returned by the procedure.[11]
• Predecessor queries can be performed with rank queries. If the rank of the target value
is r, its predecessor is r − 1.[12]
• For successor queries, the procedure for finding the rightmost element30 can be used. If
the result of running the procedure for the target value is r, then the successor of the
target value is r + 1.[12]
• The nearest neighbor of the target value is either its predecessor or successor, whichever
is closer.
• Range queries are also straightforward.[12] Once the ranks of the two values are known,
the number of elements greater than or equal to the first value and less than the second is
the difference of the two ranks. This count can be adjusted up or down by one according
to whether the endpoints of the range should be considered to be part of the range and
whether the array contains entries matching those endpoints.[13]
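The four query types above can be sketched on top of the leftmost/rightmost procedures (here via Python's bisect module); resolving a nearest-neighbour tie toward the predecessor is an arbitrary choice:

```python
from bisect import bisect_left, bisect_right

def approximate_queries(A, T):
    """Rank, predecessor, successor and nearest neighbour of T in sorted list A."""
    r = bisect_left(A, T)                 # rank: number of elements smaller than T
    pred = A[r - 1] if r > 0 else None    # next-smallest element
    s = bisect_right(A, T)
    succ = A[s] if s < len(A) else None   # next-largest element
    if pred is None:
        nearest = succ
    elif succ is None:
        nearest = pred
    else:                                 # ties go to the predecessor (a choice)
        nearest = pred if T - pred <= succ - T else succ
    return r, pred, succ, nearest

def range_count(A, lo, hi):
    """Number of elements x with lo <= x < hi: the difference of two rank queries."""
    return bisect_left(A, hi) - bisect_left(A, lo)
```

For A = [1, 2, 4, 6, 8] and target 5, approximate_queries returns rank 3, predecessor 4, successor 6, and nearest neighbour 4.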

27 https://en.wikipedia.org/wiki/Nearest_neighbor_search
28 https://en.wikipedia.org/wiki/Range_query_(data_structures)
29 #Procedure_for_finding_the_leftmost_element
30 #Procedure_for_finding_the_rightmost_element


18.2 Performance

Figure 40 A tree representing binary search. The array being searched here is
[20, 30, 40, 50, 80, 90, 100], and the target value is 40.


Figure 41 The worst case is reached when the search reaches the deepest level of the
tree, while the best case is reached when the target value is the middle element.

In terms of the number of comparisons, the performance of binary search can be analyzed
by viewing the run of the procedure on a binary tree. The root node of the tree is the
middle element of the array. The middle element of the lower half is the left child node of
the root, and the middle element of the upper half is the right child node of the root. The
rest of the tree is built in a similar fashion. Starting from the root node, the left or right
subtrees are traversed depending on whether the target value is less or more than the node
under consideration.[6][14]
In the worst case, binary search makes ⌊log2 (n) + 1⌋ iterations of the comparison loop,
where the ⌊⌋ notation denotes the floor function31 that yields the greatest integer less than
or equal to the argument, and log2 is the binary logarithm32 . This is because the worst
case is reached when the search reaches the deepest level of the tree, and there are always
⌊log2 (n) + 1⌋ levels in the tree for any binary search.
The worst case may also be reached when the target element is not in the array. If n is one
less than a power of two, then this is always the case. Otherwise, the search may perform
⌊log2 (n) + 1⌋ iterations if the search reaches the deepest level of the tree. However, it may
make ⌊log2 (n)⌋ iterations, which is one less than the worst case, if the search ends at the
second-deepest level of the tree.[15]
On average, assuming that each element is equally likely to be searched, binary search makes
⌊log2 (n)⌋ + 1 − (2⌊log2 (n)⌋+1 − ⌊log2 (n)⌋ − 2)/n iterations when the target element is in the
array. This is approximately equal to log2 (n) − 1 iterations. When the target element is not
in the array, binary search makes ⌊log2 (n)⌋ + 2 − 2⌊log2 (n)⌋+1 /(n + 1) iterations on average,
assuming that the range between and outside elements is equally likely to be searched.[14]
In the best case, where the target value is the middle element of the array, its position is
returned after one iteration.[16]
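The successful-search average above can be verified empirically. This sketch counts comparison-loop iterations of the standard procedure for every element of a sorted array and compares the mean against the closed form:

```python
from math import floor, log2

def iterations_for(A, T):
    """Number of comparison-loop iterations the standard procedure needs for T."""
    L, R, count = 0, len(A) - 1, 0
    while L <= R:
        count += 1
        m = (L + R) // 2
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            break                 # successful search
    return count

for n in (1, 2, 3, 7, 100, 1000):
    A = list(range(n))
    average = sum(iterations_for(A, t) for t in A) / n
    k = floor(log2(n))
    closed_form = k + 1 - (2 ** (k + 1) - k - 2) / n
    assert abs(average - closed_form) < 1e-9
```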

31 https://en.wikipedia.org/wiki/Floor_function
32 https://en.wikipedia.org/wiki/Binary_logarithm


In terms of iterations, no search algorithm that works only by comparing elements can
exhibit better average and worst-case performance than binary search. The comparison
tree representing binary search has the fewest levels possible as every level above the lowest
level of the tree is filled completely.[a] Otherwise, the search algorithm can eliminate few
elements in an iteration, increasing the number of iterations required in the average and
worst case. This is the case for other search algorithms based on comparisons, as while they
may work faster on some target values, the average performance over all elements is worse
than binary search. By dividing the array in half, binary search ensures that the sizes of
both subarrays are as similar as possible.[14]

Space complexity

Binary search requires three pointers to elements, which may be array indices or pointers to
memory locations, regardless of the size of the array. However, it requires at least ⌈log2 (n)⌉
bits to encode a pointer to an element of an array with n elements.[17] Therefore, the space
complexity of binary search is O(log n). In addition, it takes O(n) space to store the array.

18.2.1 Derivation of average case

The average number of iterations performed by binary search depends on the probability
of each element being searched. The average case is different for successful searches and
unsuccessful searches. It will be assumed that each element is equally likely to be searched
for successful searches. For unsuccessful searches, it will be assumed that the intervals33 between
and outside elements are equally likely to be searched. The average case for successful
searches is the number of iterations required to search every element exactly once, divided
by n, the number of elements. The average case for unsuccessful searches is the number of
iterations required to search an element within every interval exactly once, divided by the
n + 1 intervals.[14]

Successful searches

In the binary tree representation, a successful search can be represented by a path from
the root to the target node, called an internal path. The length of a path is the number of
edges (connections between nodes) that the path passes through. The number of iterations
performed by a search, given that the corresponding path has length l, is l + 1 counting the
initial iteration. The internal path length is the sum of the lengths of all unique internal
paths. Since there is only one path from the root to any single node, each internal path
represents a search for a specific element. If there are n elements, which is a positive integer,
and the internal path length is I(n), then the average number of iterations for a successful
search is T(n) = 1 + I(n)/n, with the one iteration added to count the initial iteration.[14]
Since binary search is the optimal algorithm for searching with comparisons, this problem
is reduced to calculating the minimum internal path length of all binary trees with n nodes,
which is equal to:[18]

33 https://en.wikipedia.org/wiki/Interval_(mathematics)



I(n) = Σ_{k=1}^{n} ⌊log2(k)⌋

For example, in a 7-element array, the root requires one iteration, the two elements below
the root require two iterations, and the four elements below require three iterations. In this
case, the internal path length is:[18]

Σ_{k=1}^{7} ⌊log2(k)⌋ = 0 + 2(1) + 4(2) = 10
The average number of iterations would be 1 + 10/7 = 2 3/7 based on the equation for the
average case. The sum for I(n) can be simplified to:[14]

I(n) = Σ_{k=1}^{n} ⌊log2(k)⌋ = (n + 1)⌊log2(n + 1)⌋ − 2^(⌊log2(n+1)⌋+1) + 2

Substituting the equation for I(n) into the equation for T (n):[14]
T(n) = 1 + [(n + 1)⌊log2(n + 1)⌋ − 2^(⌊log2(n+1)⌋+1) + 2]/n = ⌊log2(n)⌋ + 1 − (2^(⌊log2(n)⌋+1) − ⌊log2(n)⌋ − 2)/n
For integer n, this is equivalent to the equation for the average case on a successful search
specified above.
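The closed form can be checked by brute force. The sketch below (Python; the helper name is ours) counts the iterations a standard binary search performs for each element of a 7-element array and compares the average against the formula:

```python
import math

def iterations_to_find(a, target):
    """Count the iterations binary search uses to locate target."""
    lo, hi, count = 0, len(a) - 1, 0
    while lo <= hi:
        count += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            break
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count

n = 7
a = list(range(n))
average = sum(iterations_to_find(a, x) for x in a) / n
k = math.floor(math.log2(n))
formula = k + 1 - (2 ** (k + 1) - k - 2) / n
print(average, formula)   # both equal 17/7 = 2 3/7
```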

Unsuccessful searches

Unsuccessful searches can be represented by augmenting the tree with external nodes, which
forms an extended binary tree. If an internal node, or a node present in the tree, has fewer
than two child nodes, then additional child nodes, called external nodes, are added so that
each internal node has two children. By doing so, an unsuccessful search can be represented
as a path to an external node, whose parent is the single element that remains during the
last iteration. An external path is a path from the root to an external node. The external
path length is the sum of the lengths of all unique external paths. If there are n elements,
which is a positive integer, and the external path length is E(n), then the average number
of iterations for an unsuccessful search is T′(n) = E(n)/(n + 1), with the one iteration added to count
the initial iteration. The external path length is divided by n + 1 instead of n because there
are n + 1 external paths, representing the intervals between and outside the elements of the
array.[14]
This problem can similarly be reduced to determining the minimum external path length
of all binary trees with n nodes. For all binary trees, the external path length is equal to
the internal path length plus 2n.[18] Substituting the equation for I(n):[14]
E(n) = I(n) + 2n = [(n + 1)⌊log2(n + 1)⌋ − 2^(⌊log2(n+1)⌋+1) + 2] + 2n = (n + 1)(⌊log2(n)⌋ + 2) − 2^(⌊log2(n)⌋+1)

Substituting the equation for E(n) into the equation for T′(n), the average case for unsuccessful searches can be determined:[14]
T′(n) = [(n + 1)(⌊log2(n)⌋ + 2) − 2^(⌊log2(n)⌋+1)]/(n + 1) = ⌊log2(n)⌋ + 2 − 2^(⌊log2(n)⌋+1)/(n + 1)
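This closed form can also be verified numerically. The sketch below (Python; the helper name is ours) probes a value inside each of the n + 1 intervals around an array of even numbers and averages the iteration counts of the failed searches:

```python
import math

def iterations_to_miss(a, target):
    """Count the iterations of a binary search that fails."""
    lo, hi, count = 0, len(a) - 1, 0
    while lo <= hi:
        count += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            break
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count

n = 7
a = [2 * i for i in range(n)]               # 0, 2, ..., 12
gaps = [2 * i - 1 for i in range(n + 1)]    # one odd probe per interval
average = sum(iterations_to_miss(a, g) for g in gaps) / (n + 1)
k = math.floor(math.log2(n))
formula = k + 2 - 2 ** (k + 1) / (n + 1)
print(average, formula)   # both print 3.0
```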


Performance of alternative procedure

Each iteration of the binary search procedure defined above makes one or two comparisons,
checking if the middle element is equal to the target in each iteration. Assuming that each
element is equally likely to be searched, each iteration makes 1.5 comparisons on average.
A variation of the algorithm checks whether the middle element is equal to the target at
the end of the search. On average, this eliminates half a comparison from each iteration.
This slightly cuts the time taken per iteration on most computers. However, it guarantees
that the search takes the maximum number of iterations, on average adding one iteration
to the search. Because the comparison loop is performed only ⌊log2 (n) + 1⌋ times in the
worst case, the slight increase in efficiency per iteration does not compensate for the extra
iteration for all but very large n.[b][19][20]
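A sketch of this variant (Python; the function name is ours) makes exactly one comparison per iteration and defers the equality check until the range has shrunk to a single candidate:

```python
def binary_search_deferred(a, target):
    """Binary search with the equality test deferred to the end.

    Each iteration makes a single comparison, at the cost of always
    running the maximum number of iterations."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < target:       # the only comparison per iteration
            lo = mid + 1
        else:
            hi = mid
    if lo < len(a) and a[lo] == target:
        return lo
    return -1
```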

18.2.2 Running time and cache use

In analyzing the performance of binary search, another consideration is the time required
to compare two elements. For integers and strings, the time required increases linearly as
the encoding length (usually the number of bits34) of the elements increases. For example,
comparing a pair of 64-bit unsigned integers would require comparing up to twice as many
bits as comparing a pair of 32-bit unsigned integers. The worst case is achieved when the integers
are equal. This can be significant when the encoding lengths of the elements are large, such
as with large integer types or long strings, which makes comparing elements expensive.
Furthermore, comparing floating-point35 values (the most common digital representation of
real numbers36 ) is often more expensive than comparing integers or short strings.
On most computer architectures, the processor37 has a hardware cache38 separate from
RAM39 . Since they are located within the processor itself, caches are much faster to access
but usually store much less data than RAM. Therefore, most processors store memory
locations that have been accessed recently, along with memory locations close to them. For
example, when an array element is accessed, the element itself may be stored along with the
elements that are stored close to it in RAM, making it faster to sequentially access array
elements that are close in index to each other (locality of reference40 ). On a sorted array,
binary search can jump to distant memory locations if the array is large, unlike algorithms
(such as linear search41 and linear probing42 in hash tables43 ) which access elements in
sequence. This adds slightly to the running time of binary search for large arrays on most
systems.[21]

34 https://en.wikipedia.org/wiki/Bit
35 https://en.wikipedia.org/wiki/Floating-point_arithmetic
36 https://en.wikipedia.org/wiki/Real_number
37 https://en.wikipedia.org/wiki/Central_processing_unit
38 https://en.wikipedia.org/wiki/Cache_(computing)
39 https://en.wikipedia.org/wiki/Random-access_memory
40 https://en.wikipedia.org/wiki/Locality_of_reference
41 https://en.wikipedia.org/wiki/Linear_search
42 https://en.wikipedia.org/wiki/Linear_probing
43 https://en.wikipedia.org/wiki/Hash_tables


18.3 Binary search versus other schemes

Sorted arrays with binary search are a very inefficient solution when insertion and deletion
operations are interleaved with retrieval, taking O(n) time for each such operation. In
addition, sorted arrays can complicate memory use especially when elements are often
inserted into the array.[22] There are other data structures that support much more efficient
insertion and deletion. Binary search can be used to perform exact matching and set
membership44 (determining whether a target value is in a collection of values). There are
data structures that support faster exact matching and set membership. However, unlike
many other searching schemes, binary search can be used for efficient approximate matching,
usually performing such matches in O(log n) time regardless of the type or structure of the
values themselves.[23] In addition, there are some operations, like finding the smallest and
largest element, that can be performed efficiently on a sorted array.[11]

18.3.1 Linear search

Linear search45 is a simple search algorithm that checks every record until it finds the
target value. Linear search can be done on a linked list, which allows for faster insertion
and deletion than an array. Binary search is faster than linear search for sorted arrays
except if the array is short, although the array needs to be sorted beforehand.[c][25] All
sorting algorithms46 based on comparing elements, such as quicksort47 and merge sort48 ,
require at least O(n log n) comparisons in the worst case.[26] Unlike linear search, binary
search can be used for efficient approximate matching. There are operations such as finding
the smallest and largest element that can be done efficiently on a sorted array but not on
an unsorted array.[27]

44 https://en.wikipedia.org/wiki/Set_(abstract_data_type)
45 https://en.wikipedia.org/wiki/Linear_search
46 https://en.wikipedia.org/wiki/Sorting_algorithm
47 https://en.wikipedia.org/wiki/Quicksort
48 https://en.wikipedia.org/wiki/Merge_sort


18.3.2 Trees

Figure 42 Binary search trees are searched using an algorithm similar to binary search.

A binary search tree49 is a binary tree50 data structure that works based on the principle
of binary search. The records of the tree are arranged in sorted order, and each record
in the tree can be searched using an algorithm similar to binary search, taking on average
logarithmic time. Insertion and deletion also require on average logarithmic time in binary
search trees. This can be faster than the linear time insertion and deletion of sorted arrays,
and binary trees retain the ability to perform all the operations possible on a sorted array,
including range and approximate queries.[23][28]
However, binary search is usually more efficient for searching as binary search trees will most
likely be imperfectly balanced, resulting in slightly worse performance than binary search.
This even applies to balanced binary search trees51 , binary search trees that balance their
own nodes, because they rarely produce the tree with the fewest possible levels. Except

49 https://en.wikipedia.org/wiki/Binary_search_tree
50 https://en.wikipedia.org/wiki/Binary_tree
51 https://en.wikipedia.org/wiki/Balanced_binary_search_tree


for balanced binary search trees, the tree may be severely imbalanced with few internal
nodes with two children, resulting in the average and worst-case search time approaching n
comparisons.[d] Binary search trees take more space than sorted arrays.[30]
Binary search trees lend themselves to fast searching in external memory stored in hard
disks, as binary search trees can be efficiently structured in filesystems. The B-tree52 generalizes
this method of tree organization. B-trees are frequently used to organize long-term
storage such as databases53 and filesystems54 .[31][32]

18.3.3 Hashing

For implementing associative arrays55 , hash tables56 , a data structure that maps keys to
records57 using a hash function58 , are generally faster than binary search on a sorted array
of records.[33] Most hash table implementations require only amortized59 constant time on
average.[e][35] However, hashing is not useful for approximate matches, such as computing the
next-smallest, next-largest, and nearest key, as the only information given on a failed search
is that the target is not present in any record.[36] Binary search is ideal for such matches,
performing them in logarithmic time. Binary search also supports approximate matches.
Some operations, like finding the smallest and largest element, can be done efficiently on
sorted arrays but not on hash tables.[23]

18.3.4 Set membership algorithms

A related problem to search is set membership60 . Any algorithm that does lookup, like
binary search, can also be used for set membership. There are other algorithms that are
more specifically suited for set membership. A bit array61 is the simplest, useful when the
range of keys is limited. It compactly stores a collection of bits62 , with each bit representing
a single key within the range of keys. Bit arrays are very fast, requiring only O(1) time.[37]
The Judy1 type of Judy array63 handles 64-bit keys efficiently.[38]
For approximate results, Bloom filters64, a probabilistic data structure based on hashing,
store a set65 of keys by encoding the keys using a bit array66 and multiple hash functions.
Bloom filters are much more space-efficient than bit arrays in most cases and not

52 https://en.wikipedia.org/wiki/B-tree
53 https://en.wikipedia.org/wiki/Database
54 https://en.wikipedia.org/wiki/Filesystem
55 https://en.wikipedia.org/wiki/Associative_arrays
56 https://en.wikipedia.org/wiki/Hash_table
57 https://en.wikipedia.org/wiki/Record_(computer_science)
58 https://en.wikipedia.org/wiki/Hash_function
59 https://en.wikipedia.org/wiki/Amortized_analysis
60 https://en.wikipedia.org/wiki/Set_(abstract_data_type)
61 https://en.wikipedia.org/wiki/Bit_array
62 https://en.wikipedia.org/wiki/Bit
63 https://en.wikipedia.org/wiki/Judy_array
64 https://en.wikipedia.org/wiki/Bloom_filter
65 https://en.wikipedia.org/wiki/Set_(mathematics)
66 https://en.wikipedia.org/wiki/Bit_array


much slower: with k hash functions, membership queries require only O(k) time. However,
Bloom filters suffer from false positives67 .[f][g][40]

18.3.5 Other data structures

There exist data structures that may improve on binary search in some cases for both search-
ing and other operations available for sorted arrays. For example, searches, approximate
matches, and the operations available to sorted arrays can be performed more efficiently
than binary search on specialized data structures such as van Emde Boas trees68 , fusion
trees69 , tries70 , and bit arrays71 . These specialized data structures are usually only faster
because they take advantage of the properties of keys with a certain attribute (usually keys
that are small integers), and thus will be time or space consuming for keys that lack that
attribute.[23] As long as the keys can be ordered, these operations can always be done at
least efficiently on a sorted array regardless of the keys. Some structures, such as Judy
arrays, use a combination of approaches to mitigate this while retaining efficiency and the
ability to perform approximate matching.[38]

18.4 Variations

18.4.1 Uniform binary search

Main article: Uniform binary search72

Figure 43 Uniform binary search stores the difference between the current and the two
next possible middle elements instead of specific bounds.

Uniform binary search stores, instead of the lower and upper bounds, the difference in the
index of the middle element from the current iteration to the next iteration. A lookup

67 https://en.wikipedia.org/wiki/False_positives_and_false_negatives
68 https://en.wikipedia.org/wiki/Van_Emde_Boas_tree
69 https://en.wikipedia.org/wiki/Fusion_tree
70 https://en.wikipedia.org/wiki/Trie
71 https://en.wikipedia.org/wiki/Bit_array
72 https://en.wikipedia.org/wiki/Uniform_binary_search


table73 containing the differences is computed beforehand. For example, if the array to be
searched is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], the middle element (m) would be 6. In this case,
the middle element of the left subarray ([1, 2, 3, 4, 5]) is 3 and the middle element of the
right subarray ([7, 8, 9, 10, 11]) is 9. Uniform binary search would store the value of 3 as
both indices differ from 6 by this same amount.[41] To reduce the search space, the algorithm
either adds or subtracts this change from the index of the middle element. Uniform binary
search may be faster on systems where it is inefficient to calculate the midpoint, such as on
decimal computers74 .[42]
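A sketch following Knuth's formulation of the idea (Python; the names are ours): the table of index deltas depends only on n, so it can be computed once and shared across searches, and each iteration merely adds or subtracts the next table entry.

```python
def uniform_binary_search(a, key):
    """Uniform binary search with a precomputed delta table.

    Probes that fall outside the array act as -inf/+inf sentinels."""
    n = len(a)
    if n == 0:
        return -1
    deltas, m = [], n // 2
    while m > 0:
        deltas.append((m + 1) // 2)   # how far the next probe moves
        m //= 2
    deltas.append(0)                  # 0 marks the table's end
    i, j = (n + 1) // 2 - 1, 0        # first probe: the middle element
    while True:
        if 0 <= i < n and a[i] == key:
            return i
        if deltas[j] == 0:
            return -1                 # table exhausted: key absent
        if i < 0 or (i < n and key > a[i]):
            i += deltas[j]            # move right
        else:
            i -= deltas[j]            # move left
        j += 1
```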

18.4.2 Exponential search

Main article: Exponential search75

Figure 44 Visualization of exponential searching finding the upper bound for the
subsequent binary search

Exponential search extends binary search to unbounded lists. It starts by finding the first
element with an index that is both a power of two and greater than the target value.
Afterwards, it sets that index as the upper bound, and switches to binary search. A search
takes ⌊log2 x + 1⌋ iterations before binary search is started and at most ⌊log2 x⌋ iterations
of the binary search, where x is the position of the target value. Exponential search works
on bounded lists, but becomes an improvement over binary search only if the target value
lies near the beginning of the array.[43]
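A sketch for a bounded Python list (an unbounded list would supply elements on demand; the function name is ours):

```python
def exponential_search(a, target):
    """Double the bound until it passes the target, then binary search."""
    n = len(a)
    if n == 0:
        return -1
    bound = 1
    while bound < n and a[bound] < target:
        bound *= 2                    # 1, 2, 4, 8, ...
    lo, hi = bound // 2, min(bound, n - 1)
    while lo <= hi:                   # ordinary binary search in [lo, hi]
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```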

18.4.3 Interpolation search

Main article: Interpolation search76

73 https://en.wikipedia.org/wiki/Lookup_table
74 https://en.wikipedia.org/wiki/Decimal_computer
75 https://en.wikipedia.org/wiki/Exponential_search
76 https://en.wikipedia.org/wiki/Interpolation_search


Figure 45 Visualization of interpolation search using linear interpolation. In this case,
no searching is needed because the estimate of the target's location within the array is
correct. Other implementations may specify another function for estimating the target's
location.

Instead of calculating the midpoint, interpolation search estimates the position of the target
value, taking into account the lowest and highest elements in the array as well as length of
the array. It works on the basis that the midpoint is not the best guess in many cases. For
example, if the target value is close to the highest element in the array, it is likely to be
located near the end of the array.[44]
A common interpolation function is linear interpolation77 . If A is the array, L, R are the
lower and upper bounds respectively, and T is the target, then the target is estimated to
be about (T − A_L)/(A_R − A_L) of the way between L and R. When linear interpolation is
used, and the distribution of the array elements is uniform or near uniform, interpolation
search makes O(log log n) comparisons.[44][45][46]
In practice, interpolation search is slower than binary search for small arrays, as interpolation
search requires extra computation. Its time complexity grows more slowly than binary
search, but this only compensates for the extra computation for large arrays.[44]
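A sketch using the linear-interpolation estimate above (Python; the name is ours). Integer division keeps the probe inside the current range whenever a[lo] ≤ target ≤ a[hi]:

```python
def interpolation_search(a, target):
    """Probe where a straight line between the endpoints predicts target."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= target <= a[hi]:
        if a[lo] == a[hi]:                # flat range: avoid dividing by zero
            return lo if a[lo] == target else -1
        # Estimated fraction of the way between lo and hi.
        pos = lo + (target - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[pos] == target:
            return pos
        elif a[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```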

18.4.4 Fractional cascading

Main article: Fractional cascading78

Figure 46 In fractional cascading, each array has pointers to every second element of
another array, so only one binary search has to be performed to search all the arrays.

77 https://en.wikipedia.org/wiki/Linear_interpolation
78 https://en.wikipedia.org/wiki/Fractional_cascading


Fractional cascading is a technique that speeds up binary searches for the same element in
multiple sorted arrays. Searching each array separately requires O(k log n) time, where k is
the number of arrays. Fractional cascading reduces this to O(k + log n) by storing specific
information in each array about each element and its position in the other arrays.[47][48]
Fractional cascading was originally developed to efficiently solve various computational
geometry79 problems. Fractional cascading has been applied elsewhere, such as in data
mining80 and Internet Protocol81 routing.[47]

18.4.5 Generalization to graphs

Binary search has been generalized to work on certain types of graphs, where the target
value is stored in a vertex instead of an array element. Binary search trees are one such
generalization—when a vertex (node) in the tree is queried, the algorithm either learns that
the vertex is the target, or otherwise which subtree the target would be located in. However,
this can be further generalized as follows: given an undirected, positively weighted graph
and a target vertex, the algorithm learns upon querying a vertex that it is equal to the
target, or it is given an incident edge that is on the shortest path from the queried vertex
to the target. The standard binary search algorithm is simply the case where the graph is a
path. Similarly, binary search trees are the case where the edges to the left or right subtrees
are given when the queried vertex is unequal to the target. For all undirected, positively
weighted graphs, there is an algorithm that finds the target vertex in O(log n) queries in
the worst case.[49]

79 https://en.wikipedia.org/wiki/Computational_geometry
80 https://en.wikipedia.org/wiki/Data_mining
81 https://en.wikipedia.org/wiki/Internet_Protocol


18.4.6 Noisy binary search

Figure 47 In noisy binary search, there is a certain probability that a comparison is
incorrect.

Noisy binary search algorithms solve the case where the algorithm cannot reliably compare
elements of the array. For each pair of elements, there is a certain probability that the
algorithm makes the wrong comparison. Noisy binary search can find the correct position
of the target with a given probability that controls the reliability of the yielded position.
Every noisy binary search procedure must make at least (1 − τ) log2(n)/H(p) − 10/H(p) comparisons
on average, where H(p) = −p log2 (p) − (1 − p) log2 (1 − p) is the binary entropy function82
and τ is the probability that the procedure yields the wrong position.[50][51][52] The noisy
binary search problem can be considered as a case of the Rényi-Ulam game83 ,[53] a variant
of Twenty Questions84 where the answers may be wrong.[54]
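The bound is straightforward to evaluate. For instance (Python; the function names are ours), with a comparison error probability of p = 0.1 and a failure probability of τ = 0.01 over a million elements:

```python
import math

def binary_entropy(p):
    """H(p) = -p log2(p) - (1 - p) log2(1 - p), for 0 < p < 1."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def noisy_lower_bound(n, p, tau):
    """Lower bound on average comparisons for noisy binary search."""
    h = binary_entropy(p)
    return (1 - tau) * math.log2(n) / h - 10 / h

print(noisy_lower_bound(10 ** 6, 0.1, 0.01))   # about 20.75 comparisons
```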

18.4.7 Quantum binary search

Classical computers are bounded to the worst case of exactly ⌊log2 n + 1⌋ iterations when
performing binary search. Quantum algorithms85 for binary search are still bounded to

82 https://en.wikipedia.org/wiki/Binary_entropy_function
83 https://en.wikipedia.org/wiki/Ulam%27s_game
84 https://en.wikipedia.org/wiki/Twenty_Questions
85 https://en.wikipedia.org/wiki/Quantum_algorithm


a proportion of log2 n queries (representing iterations of the classical procedure), but the
constant factor is less than one, providing for a lower time complexity on quantum computers86.
Any exact quantum binary search procedure (that is, a procedure that always
yields the correct result) requires at least (1/π)(ln n − 1) ≈ 0.22 log2 n queries in the worst case,
where ln is the natural logarithm87.[55] There is an exact quantum binary search procedure
that runs in 4 log605 n ≈ 0.433 log2 n queries in the worst case.[56] In comparison, Grover's
algorithm88 is the optimal quantum algorithm for searching an unordered list of elements,
and it requires O(√n) queries.[57]

18.5 History

The idea of sorting a list of items to allow for faster searching dates back to antiquity. The
earliest known example was the Inakibit-Anu tablet from Babylon dating back to c. 200
BCE. The tablet contained about 500 sexagesimal89 numbers and their reciprocals90 sorted
in lexicographical order91, which made searching for a specific entry easier. In addition,
several lists of names that were sorted by their first letter were discovered on the Aegean
Islands92 . Catholicon93 , a Latin dictionary finished in 1286 CE, was the first work to describe
rules for sorting words into alphabetical order, as opposed to just the first few letters.[9]
In 1946, John Mauchly94 made the first mention of binary search as part of the Moore School
Lectures95 , a seminal and foundational college course in computing.[9] In 1957, William
Wesley Peterson96 published the first method for interpolation search.[9][58] Every published
binary search algorithm worked only for arrays whose length is one less than a power of
two[h] until 1960, when Derrick Henry Lehmer97 published a binary search algorithm that
worked on all arrays.[60] In 1962, Hermann Bottenbruch presented an ALGOL 6098 implementation
of binary search that placed the comparison for equality at the end99, increasing
the average number of iterations by one, but reducing to one the number of comparisons
per iteration.[8] The uniform binary search100 was developed by A. K. Chandra of Stanford
University101 in 1971.[9] In 1986, Bernard Chazelle102 and Leonidas J. Guibas103 introduced

86 https://en.wikipedia.org/wiki/Quantum_computing
87 https://en.wikipedia.org/wiki/Natural_logarithm
88 https://en.wikipedia.org/wiki/Grover%27s_algorithm
89 https://en.wikipedia.org/wiki/Sexagesimal
90 https://en.wikipedia.org/wiki/Multiplicative_inverse
91 https://en.wikipedia.org/wiki/Lexicographical_order
92 https://en.wikipedia.org/wiki/Aegean_Islands
93 https://en.wikipedia.org/wiki/Catholicon_(1286)
94 https://en.wikipedia.org/wiki/John_Mauchly
95 https://en.wikipedia.org/wiki/Moore_School_Lectures
96 https://en.wikipedia.org/wiki/W._Wesley_Peterson
97 https://en.wikipedia.org/wiki/Derrick_Henry_Lehmer
98 https://en.wikipedia.org/wiki/ALGOL_60
99 #Alternative_procedure
100 #Uniform_binary_search
101 https://en.wikipedia.org/wiki/Stanford_University
102 https://en.wikipedia.org/wiki/Bernard_Chazelle
103 https://en.wikipedia.org/wiki/Leonidas_J._Guibas


fractional cascading104 as a method to solve numerous search problems in computational
geometry105.[47][61][62]

18.6 Implementation issues

Although the basic idea of binary search is comparatively straightforward, the details
can be surprisingly tricky ... — Donald Knuth106[2]
When Jon Bentley107 assigned binary search as a problem in a course for professional programmers,
he found that ninety percent failed to provide a correct solution after several
hours of working on it, mainly because the incorrect implementations failed to run or
returned a wrong answer in rare edge cases108.[63] A study published in 1988 found that
accurate code for binary search appears in only five out of twenty textbooks.[64] Furthermore,
Bentley's own implementation of binary search, published in his 1986 book Programming Pearls,
contained an overflow error109 that remained undetected for over twenty years. The Java
programming language110 library implementation of binary search had the same overflow
bug for more than nine years.[65]
In a practical implementation, the variables used to represent the indices will often be of
fixed size, and this can result in an arithmetic overflow111 for very large arrays. If the
midpoint of the span is calculated as (L + R)/2, then the value of L + R may exceed the range
of integers of the data type used to store the midpoint, even if L and R are within the
range. If L and R are nonnegative, this can be avoided by calculating the midpoint as
L + (R − L)/2.[66]
An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once
L exceeds R, the search has failed and must convey the failure of the search. In addition, the
loop must be exited when the target element is found, or in the case of an implementation
where this check is moved to the end, checks for whether the search was successful or failed
at the end must be in place. Bentley found that most of the programmers who incorrectly
implemented binary search made an error in defining the exit conditions.[8][67]
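Putting both points together, a sketch of a careful implementation (Python; Python's integers never overflow, so the midpoint form matters here only as the pattern to carry into fixed-width languages):

```python
def binary_search_safe(a, target):
    """Binary search with an overflow-safe midpoint and explicit exits."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:                  # loop ends once lo exceeds hi
        mid = lo + (hi - lo) // 2    # avoids computing lo + hi directly
        if a[mid] == target:
            return mid               # exit as soon as the target is found
        elif a[mid] < target:
            lo = mid + 1             # strict progress: no infinite loop
        else:
            hi = mid - 1
    return -1                        # convey that the search failed
```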

18.7 Library support

Many languages' standard libraries112 include binary search routines:

104 https://en.wikipedia.org/wiki/Fractional_cascading
105 https://en.wikipedia.org/wiki/Computational_geometry
106 https://en.wikipedia.org/wiki/Donald_Knuth
107 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
108 https://en.wikipedia.org/wiki/Edge_case
109 https://en.wikipedia.org/wiki/Integer_overflow
110 https://en.wikipedia.org/wiki/Java_(programming_language)
111 https://en.wikipedia.org/wiki/Integer_overflow
112 https://en.wikipedia.org/wiki/Standard_library


• C113 provides the function114 bsearch() in its standard library115 , which is typically
implemented via binary search, although the official standard does not require it.[68]
• C++116 's Standard Template Library117 provides the functions binary_search(),
lower_bound(), upper_bound() and equal_range().[69]
• D118's standard library Phobos, in its std.range module, provides a type
SortedRange (returned by the sort() and assumeSorted() functions) with methods
contains(), equalRange(), lowerBound() and trisect(), which use binary search
techniques by default for ranges that offer random access.[70]
• COBOL119 provides the SEARCH ALL verb for performing binary searches on COBOL
ordered tables.[71]
• Go120 's sort standard library package contains the functions Search, SearchInts,
SearchFloat64s, and SearchStrings, which implement general binary search, as well
as specific implementations for searching slices of integers, floating-point numbers, and
strings, respectively.[72]
• Java121 offers a set of overloaded122 binarySearch() static methods in the classes
Arrays123 and Collections124 in the standard java.util package for performing binary
searches on Java arrays and on Lists, respectively.[73][74]
• Microsoft125 's .NET Framework126 2.0 offers static generic127 versions of the binary search
algorithm in its collection base classes. An example would be System.Array's method
BinarySearch<T>(T[] array, T value).[75]
• For Objective-C128 , the Cocoa129 framework provides the NSArray -
indexOfObject:inSortedRange:options:usingComparator: 130 method in Mac
OS X 10.6+.[76] Apple's Core Foundation131 C framework also contains a
CFArrayBSearchValues()132 function.[77]
• Python133 provides the bisect module.[78]
• Ruby134 's Array class includes a bsearch method with built-in approximate matching.[79]
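As a usage example for one of these, Python's bisect module returns insertion points rather than a found/not-found answer, so a membership test checks the element at the returned index:

```python
import bisect

a = [1, 4, 4, 9, 16]
i = bisect.bisect_left(a, 4)          # leftmost insertion point for 4
print(i)                              # 1
print(i < len(a) and a[i] == 4)       # True: 4 is present
print(bisect.bisect_right(a, 4))      # 3: insertion point after the 4s
```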

113 https://en.wikipedia.org/wiki/C_(programming_language)
114 https://en.wikipedia.org/wiki/Subroutine
115 https://en.wikipedia.org/wiki/C_standard_library
116 https://en.wikipedia.org/wiki/C%2B%2B
117 https://en.wikipedia.org/wiki/Standard_Template_Library
118 https://en.wikipedia.org/wiki/D_(programming_language)
119 https://en.wikipedia.org/wiki/COBOL
120 https://en.wikipedia.org/wiki/Go_(programming_language)
121 https://en.wikipedia.org/wiki/Java_(programming_language)
122 https://en.wikipedia.org/wiki/Function_overloading
123 https://docs.oracle.com/javase/10/docs/api/java/util/Arrays.html
124 https://docs.oracle.com/javase/10/docs/api/java/util/Collections.html
125 https://en.wikipedia.org/wiki/Microsoft
126 https://en.wikipedia.org/wiki/.NET_Framework
127 https://en.wikipedia.org/wiki/Generic_programming
128 https://en.wikipedia.org/wiki/Objective-C
129 https://en.wikipedia.org/wiki/Cocoa_(API)
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/
130 Classes/NSArray_Class/NSArray.html#//apple_ref/occ/instm/NSArray/indexOfObject:
inSortedRange:options:usingComparator:
131 https://en.wikipedia.org/wiki/Core_Foundation
https://developer.apple.com/library/mac/documentation/CoreFoundation/Reference/
132
CFArrayRef/Reference/reference.html#//apple_ref/c/func/CFArrayBSearchValues
133 https://en.wikipedia.org/wiki/Python_(programming_language)
134 https://en.wikipedia.org/wiki/Ruby_(programming_language)


18.8 See also


• Bisection method135 − Algorithm for finding a zero of a function – the same idea used to
solve equations in the real numbers
• Multiplicative binary search136 – binary search variation with simplified midpoint calculation

18.9 Notes and references


The 2019 version of this article has passed academic peer review138 (here139)
and was published in WikiJournal of Science. It can be cited as:
Anthony Lin; et al. (2 July 2019), "Binary search algorithm", WikiJournal of Science, 2 (1): 5, doi140:10.15347/WJS/2019.005141, Wikidata142 Q81434400143 144

18.9.1 Notes
1. Any search algorithm based solely on comparisons can be represented using a binary
comparison tree. An internal path is any path from the root to an existing node. Let
I be the internal path length, the sum of the lengths of all internal paths. If each
element is equally likely to be searched, the average case is 1 + I/n, or simply one plus
the average of all the internal path lengths of the tree. This is because internal paths
represent the elements that the search algorithm compares to the target. The lengths
of these internal paths represent the number of iterations after the root node. Adding
the average of these lengths to the one iteration at the root yields the average case.
Therefore, to minimize the average number of comparisons, the internal path length I
must be minimized. It turns out that the tree for binary search minimizes the internal
path length. Knuth 1998145 proved that the external path length (the path length over
all nodes where both children are present for each already-existing node) is minimized
when the external nodes (the nodes with no children) lie within two consecutive levels
of the tree. This also applies to internal paths as internal path length I is linearly
related to external path length E. For any tree of n nodes, I = E − 2n. When each
subtree has a similar number of nodes, or equivalently the array is divided into halves
in each iteration, the external nodes as well as their interior parent nodes lie within
two levels. It follows that binary search minimizes the number of average comparisons
as its comparison tree has the lowest possible internal path length.[14]
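The identity I = E − 2n and the 1 + I/n average can be checked numerically. The following sketch (an illustrative aid, not part of the original text; the function name is arbitrary) walks the comparison tree induced by the usual midpoint rule and sums the internal and external path lengths:

```python
def path_lengths(lo, hi, depth=0):
    """Walk the comparison tree of binary search over the index range
    [lo, hi). Internal nodes are the positions compared; external
    (null) nodes are the n + 1 gaps where a search can fail.
    Returns (internal path length I, external path length E)."""
    if lo >= hi:                      # external node
        return 0, depth
    mid = (lo + hi) // 2              # position compared at this node
    li, le = path_lengths(lo, mid, depth + 1)
    ri, re = path_lengths(mid + 1, hi, depth + 1)
    return depth + li + ri, le + re

# The identity I = E - 2n holds for every array length n.
for n in range(1, 200):
    I, E = path_lengths(0, n)
    assert I == E - 2 * n
```

For n = 7 the tree is perfectly balanced, giving I = 0 + 2·1 + 4·2 = 10, so the average successful search makes 1 + 10/7 ≈ 2.43 comparisons.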

135 https://en.wikipedia.org/wiki/Bisection_method
136 https://en.wikipedia.org/wiki/Multiplicative_binary_search
138 https://en.wikipedia.org/wiki/Scholarly_peer_review
139 https://en.wikiversity.org/wiki/Talk:WikiJournal_of_Science/Binary_search_algorithm
140 https://en.wikipedia.org/wiki/Doi_(identifier)
141 https://doi.org/10.15347%2FWJS%2F2019.005
142 https://en.wikipedia.org/wiki/Wikidata
143 https://www.wikidata.org/wiki/Q81434400
144 https://doi.org/10.15347/WJS/2019.005
145 #CITEREFKnuth1998


2. Knuth 1998146 showed on his MIX147 computer model, which Knuth designed as a
representation of an ordinary computer, that the average running time of this variation
for a successful search is 17.5 log2 n + 17 units of time compared to 18 log2 n − 16 units
for regular binary search. The time complexity for this variation grows slightly more
slowly, but at the cost of higher initial complexity.[19]
3. Knuth 1998148 performed a formal time performance analysis of both of these search
algorithms. On Knuth's MIX149 computer, which Knuth designed as a representation
of an ordinary computer, binary search takes on average 18 log2 n − 16 units of time for
a successful search, while linear search with a sentinel node150 at the end of the list
takes 1.75n + 8.5 − (n mod 2)/(4n) units. Linear search has lower initial complexity because it
requires minimal computation, but it quickly outgrows binary search in complexity.
On the MIX computer, binary search only outperforms linear search with a sentinel
if n > 44.[14][24]
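Taking the two cost expressions above at face value, a short script shows the regime in which each method wins. This is only a back-of-the-envelope sketch (the function names are mine): the precise crossover Knuth derives, n > 44, comes from the full MIX instruction-level analysis, which a direct comparison of these leading-order formulas only approximates.

```python
import math

def binary_mix_cost(n):
    # Average MIX time units quoted above for a successful binary search.
    return 18 * math.log2(n) - 16

def linear_mix_cost(n):
    # Average MIX time units quoted above for linear search with a sentinel.
    return 1.75 * n + 8.5 - (n % 2) / (4 * n)

# Linear search is cheaper for small arrays, binary search for large ones.
assert linear_mix_cost(8) < binary_mix_cost(8)
assert binary_mix_cost(1000) < linear_mix_cost(1000)
```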
4. Inserting the values in sorted order or in an alternating lowest-highest key pattern
will result in a binary search tree that maximizes the average and worst-case search
time.[29]
5. It is possible to search some hash table implementations in guaranteed constant
time.[34]
6. This is because simply setting all of the bits which the hash functions point to for a
specific key can affect queries for other keys which have a common hash location for
one or more of the functions.[39]
7. There exist improvements of the Bloom filter which improve on its complexity or
support deletion; for example, the cuckoo filter exploits cuckoo hashing151 to gain
these advantages.[39]
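Notes 5–7 can be made concrete with a toy Bloom filter. The sketch below is hypothetical (the class name, sizes, and hash construction are arbitrary choices, not from the source); it shows why inserted keys are always reported present, while bits shared between keys are what make false positives possible and naive deletion unsafe:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions each set/check one of m bits.
    Bits are shared between keys, so a query can be a false positive and
    clearing one key's bits could erase evidence of other keys."""
    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, key):
        # Derive k bit positions from a cryptographic hash (arbitrary choice).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

bf = BloomFilter()
for word in ["binary", "search", "tree"]:
    bf.add(word)

# No false negatives: every inserted key is reported present.
assert all(bf.might_contain(w) for w in ["binary", "search", "tree"])
```

A cuckoo filter sidesteps the deletion problem by storing relocatable per-key fingerprints rather than bits shared across keys.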
8. That is, arrays whose length is one less than a power of two: 1, 3, 7, 15, 31, ...[59]

18.9.2 Citations
1. W, J., L F. (22 A 1976). A modification to the half-interval
search (binary search) method152 . P   14 ACM S
C. ACM. . 95–101. 153 :10.1145/503561.503582154 . A155
    12 M 2017. R 29 J 2018.
2. Knuth 1998156 , §6.2.1 (”Searching an ordered table”), subsection ”Binary search”.
3. Butterfield & Ngondi 2016157 , p. 46.
4. Cormen et al. 2009158 , p. 39.

146 #CITEREFKnuth1998
147 https://en.wikipedia.org/wiki/MIX
148 #CITEREFKnuth1998
149 https://en.wikipedia.org/wiki/MIX
150 https://en.wikipedia.org/wiki/Sentinel_node
151 https://en.wikipedia.org/wiki/Cuckoo_hashing
152 https://dl.acm.org/citation.cfm?doid=503561.503582
153 https://en.wikipedia.org/wiki/Doi_(identifier)
154 https://doi.org/10.1145%2F503561.503582
https://web.archive.org/web/20170312215255/http://dl.acm.org/citation.cfm?doid=
155
503561.503582
156 #CITEREFKnuth1998
157 #CITEREFButterfieldNgondi2016
158 #CITEREFCormenLeisersonRivestStein2009


5. W, E W.159 ”B ”160 . MathWorld161 .


6. F, I; M, G (1 S 1971). ”A  
    ”. Communications of the ACM162 . 14 (9):
602–603. Bibcode163 :1985CACM...28...22S164 . doi165 :10.1145/362663.362752166 .
ISSN 0001-0782168 .
167

7. Knuth 1998169 , §6.2.1 (”Searching an ordered table”), subsection ”Algorithm B”.


8. B, H (1 A 1962). ”S    ALGOL
60”. Journal of the ACM170 . 9 (2): 161–221. doi171 :10.1145/321119.321120172 .
ISSN173 0004-5411174 .CS1 maint: ref=harv (link175 ) Procedure is described at p. 214
(§43), titled ”Program for Binary Search”.
9. Knuth 1998176 , §6.2.1 (”Searching an ordered table”), subsection ”History and bibliog-
raphy”.
10. Kasahara & Morishita 2006177 , pp. 8–9.
11. Sedgewick & Wayne 2011178 , §3.1, subsection ”Rank and selection”.
12. Goldman & Goldman 2008179 , pp. 461–463.
13. Sedgewick & Wayne 2011180 , §3.1, subsection ”Range queries”.
14. Knuth 1998181 , §6.2.1 (”Searching an ordered table”), subsection ”Further analysis of
binary search”.
15. Knuth 1998182 , §6.2.1 (”Searching an ordered table”), ”Theorem B”.
16. Chang 2003183 , p. 169.

159 https://en.wikipedia.org/wiki/Eric_W._Weisstein
160 https://mathworld.wolfram.com/BinarySearch.html
161 https://en.wikipedia.org/wiki/MathWorld
162 https://en.wikipedia.org/wiki/Communications_of_the_ACM
163 https://en.wikipedia.org/wiki/Bibcode_(identifier)
164 https://ui.adsabs.harvard.edu/abs/1985CACM...28...22S
165 https://en.wikipedia.org/wiki/Doi_(identifier)
166 https://doi.org/10.1145%2F362663.362752
167 https://en.wikipedia.org/wiki/ISSN_(identifier)
168 http://www.worldcat.org/issn/0001-0782
169 #CITEREFKnuth1998
170 https://en.wikipedia.org/wiki/Journal_of_the_ACM
171 https://en.wikipedia.org/wiki/Doi_(identifier)
172 https://doi.org/10.1145%2F321119.321120
173 https://en.wikipedia.org/wiki/ISSN_(identifier)
174 http://www.worldcat.org/issn/0004-5411
176 #CITEREFKnuth1998
177 #CITEREFKasaharaMorishita2006
178 #CITEREFSedgewickWayne2011
179 #CITEREFGoldmanGoldman2008
180 #CITEREFSedgewickWayne2011
181 #CITEREFKnuth1998
182 #CITEREFKnuth1998
183 #CITEREFChang2003


17. S, C E.184 (J 1948). ”A M T  C-


”. Bell System Technical Journal185 . 27 (3): 379–423. doi186 :10.1002/j.1538-
7305.1948.tb01338.x187 . hdl188 :11858/00-001M-0000-002C-4314-2189 .
18. Knuth 1997190 , §2.3.4.5 (”Path length”).
19. Knuth 1998191 , §6.2.1 (”Searching an ordered table”), subsection ”Exercise 23”.
20. R, T J. (1997). ”A   -
   ”. ACM SIGNUM Newsletter. 32 (4): 15–19.
doi192 :10.1145/289251.289255193 .
21. K, P-V; M, P194 . ”A L  C-
B S”. Journal of Experimental Algorithmics. 22. Article 1.3.
doi195 :10.1145/289251.289255196 .
22. Knuth 1997197 , §2.2.2 (”Sequential Allocation”).
23. B, P; F, F E.198 (2001). ”O    -
    ”. Journal of Computer and System Sci-
ences199 . 65 (1): 38–72. doi200 :10.1006/jcss.2002.1822201 .
24. Knuth 1998202 , Answers to Exercises (§6.2.1) for ”Exercise 5”.
25. Knuth 1998203 , §6.2.1 (”Searching an ordered table”).
26. Knuth 1998204 , §5.3.1 (”Minimum-Comparison sorting”).
27. Sedgewick & Wayne 2011205 , §3.2 (”Ordered symbol tables”).
28. Sedgewick & Wayne 2011206 , §3.2 (”Binary Search Trees”), subsection ”Order-based
methods and deletion”.
29. Knuth 1998207 , §6.2.2 (”Binary tree searching”), subsection ”But what about the worst
case?”.
30. Sedgewick & Wayne 2011208 , §3.5 (”Applications”), ”Which symbol-table implementa-
tion should I use?”.

184 https://en.wikipedia.org/wiki/Claude_Shannon
185 https://en.wikipedia.org/wiki/Bell_System_Technical_Journal
186 https://en.wikipedia.org/wiki/Doi_(identifier)
187 https://doi.org/10.1002%2Fj.1538-7305.1948.tb01338.x
188 https://en.wikipedia.org/wiki/Hdl_(identifier)
189 http://hdl.handle.net/11858%2F00-001M-0000-002C-4314-2
190 #CITEREFKnuth1997
191 #CITEREFKnuth1998
192 https://en.wikipedia.org/wiki/Doi_(identifier)
193 https://doi.org/10.1145%2F289251.289255
194 https://en.wikipedia.org/wiki/Pat_Morin
195 https://en.wikipedia.org/wiki/Doi_(identifier)
196 https://doi.org/10.1145%2F289251.289255
197 #CITEREFKnuth1997
198 https://en.wikipedia.org/wiki/Faith_Ellen
199 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
200 https://en.wikipedia.org/wiki/Doi_(identifier)
201 https://doi.org/10.1006%2Fjcss.2002.1822
202 #CITEREFKnuth1998
203 #CITEREFKnuth1998
204 #CITEREFKnuth1998
205 #CITEREFSedgewickWayne2011
206 #CITEREFSedgewickWayne2011
207 #CITEREFKnuth1998
208 #CITEREFSedgewickWayne2011


31. Knuth 1998209 , §5.4.9 (”Disks and Drums”).


32. Knuth 1998210 , §6.2.4 (”Multiway trees”).
33. Knuth 1998211 , §6.4 (”Hashing”).
34. Knuth 1998212 , §6.4 (”Hashing”), subsection ”History”.
35. D, M; K, A213 ; M, K214 ; M 
 H, F; R, H; T, R E.215 (A 1994).
”D  :    ”. SIAM Journal on Com-
puting216 . 23 (4): 738–761. doi217 :10.1137/S0097539791194094218 .
36. M, P. ”H ”219 (PDF). . 1. R 28 M 2016.
37. Knuth 2011220 , §7.1.3 (”Bitwise Tricks and Techniques”).
38. S, A, Judy IV shop manual221 (PDF), H-P222 ,
. 80–81
39. F, B; A, D G.; K, M; M, M
D. (2014). Cuckoo filter: practically better than Bloom. Proceedings of the 10th ACM
International on Conference on Emerging Networking Experiments and Technologies.
pp. 75–88. doi223 :10.1145/2674005.2674994224 .
40. B, B H. (1970). ”S/ -   -
   ”. Communications of the ACM225 . 13 (7):
422–426. Bibcode :1985CACM...28...22S227 . CiteSeerX228 10.1.1.641.9096229 .
226

doi230 :10.1145/362686.362692231 .
41. Knuth 1998232 , §6.2.1 (”Searching an ordered table”), subsection ”An important vari-
ation”.
42. Knuth 1998233 , §6.2.1 (”Searching an ordered table”), subsection ”Algorithm U”.
43. Moffat & Turpin 2002234 , p. 33.

209 #CITEREFKnuth1998
210 #CITEREFKnuth1998
211 #CITEREFKnuth1998
212 #CITEREFKnuth1998
213 https://en.wikipedia.org/wiki/Anna_Karlin
214 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
215 https://en.wikipedia.org/wiki/Robert_Tarjan
216 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
217 https://en.wikipedia.org/wiki/Doi_(identifier)
218 https://doi.org/10.1137%2FS0097539791194094
219 http://cglab.ca/~morin/teaching/5408/notes/hashing.pdf
220 #CITEREFKnuth2011
221 http://judy.sourceforge.net/doc/shop_interm.pdf
222 https://en.wikipedia.org/wiki/Hewlett-Packard
223 https://en.wikipedia.org/wiki/Doi_(identifier)
224 https://doi.org/10.1145%2F2674005.2674994
225 https://en.wikipedia.org/wiki/Communications_of_the_ACM
226 https://en.wikipedia.org/wiki/Bibcode_(identifier)
227 https://ui.adsabs.harvard.edu/abs/1985CACM...28...22S
228 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
229 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.641.9096
230 https://en.wikipedia.org/wiki/Doi_(identifier)
231 https://doi.org/10.1145%2F362686.362692
232 #CITEREFKnuth1998
233 #CITEREFKnuth1998
234 #CITEREFMoffatTurpin2002


44. Knuth 1998235 , §6.2.1 (”Searching an ordered table”), subsection ”Interpolation search”.
45. Knuth 1998236 , §6.2.1 (”Searching an ordered table”), subsection ”Exercise 22”.
46. P, Y; I, A; A, H (1978). ”I —
   n search”. Communications of the ACM237 . 21 (7): 550–553. Bib-
code238 :1985CACM...28...22S239 . doi240 :10.1145/359545.359557241 .
47. C, B242 ; L, D (6 J 2001). Lower bounds for intersection
searching and fractional cascading in higher dimension243 . 33 ACM S 
T  C244 . ACM. . 322–329. 245 :10.1145/380752.380818246 .
ISBN247 978-1-58113-349-3248 . R 30 J 2018.
48. C, B249 ; L, D (1 M 2004). ”L 
        -
”250 (PDF). Journal of Computer and System Sciences. 68 (2): 269–284.
CiteSeerX251 10.1.1.298.7772252 . doi253 :10.1016/j.jcss.2003.07.003254 . ISSN255 0022-
0000256 . Retrieved 30 June 2018.
49. E-Z, E; K, D; S, V (2016).
Deterministic and probabilistic binary search in graphs. 48th ACM Sym-
posium on Theory of Computing . 257 pp. 519–532. arXiv258 :1503.00805259 .
260
doi :10.1145/2897518.2897656 . 261

50. B-O, M; H, A (2008). ”T B   -
     (      )”262
(PDF). 49th Symposium on Foundations of Computer Science263 . pp. 221–230.

235 #CITEREFKnuth1998
236 #CITEREFKnuth1998
237 https://en.wikipedia.org/wiki/Communications_of_the_ACM
238 https://en.wikipedia.org/wiki/Bibcode_(identifier)
239 https://ui.adsabs.harvard.edu/abs/1985CACM...28...22S
240 https://en.wikipedia.org/wiki/Doi_(identifier)
241 https://doi.org/10.1145%2F359545.359557
242 https://en.wikipedia.org/wiki/Bernard_Chazelle
243 https://dl.acm.org/citation.cfm?doid=380752.380818
244 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
245 https://en.wikipedia.org/wiki/Doi_(identifier)
246 https://doi.org/10.1145%2F380752.380818
247 https://en.wikipedia.org/wiki/ISBN_(identifier)
248 https://en.wikipedia.org/wiki/Special:BookSources/978-1-58113-349-3
249 https://en.wikipedia.org/wiki/Bernard_Chazelle
250 http://www.cs.princeton.edu/~chazelle/pubs/FClowerbounds.pdf
251 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
252 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.298.7772
253 https://en.wikipedia.org/wiki/Doi_(identifier)
254 https://doi.org/10.1016%2Fj.jcss.2003.07.003
255 https://en.wikipedia.org/wiki/ISSN_(identifier)
256 http://www.worldcat.org/issn/0022-0000
257 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
258 https://en.wikipedia.org/wiki/ArXiv_(identifier)
259 http://arxiv.org/abs/1503.00805
260 https://en.wikipedia.org/wiki/Doi_(identifier)
261 https://doi.org/10.1145%2F2897518.2897656
262 http://www2.lns.mit.edu/~avinatan/research/search-full.pdf
https://en.wikipedia.org/wiki/Annual_IEEE_Symposium_on_Foundations_of_Computer_
263
Science


doi264 :10.1109/FOCS.2008.58265 . ISBN266 978-0-7695-3436-7267 .
51. P, A (1989). ”S    ”. Theo-
retical Computer Science. 63 (2): 185–202. doi269 :10.1016/0304-3975(89)90077-7270 .
52. R, R L.271 ; M, A R.272 ; K, D J.273 ; W-
, K. Coping with errors in binary search procedures. 10th ACM Symposium on
Theory of Computing274 . doi275 :10.1145/800133.804351276 .
53. P, A (2002). ”S   — 
   ”. Theoretical Computer Science. 270 (1–2): 71–109.
doi277 :10.1016/S0304-3975(01)00303-6278 .
54. R, A (1961). ”O     ”. Magyar Tu-
dományos Akadémia Matematikai Kutató Intézetének Közleményei (in Hungarian).
6: 505–516. MR279 0143666280 .
55. H, P; N, J; S, Y (2002). ”Q -
   , ,   ”. Algorith-
mica281 . 34 (4): 429–448. arXiv282 :quant-ph/0102078283 . doi284 :10.1007/s00453-002-
0976-3285 .CS1 maint: ref=harv (link286 )
56. C, A M.; L, A J.; P, P A. (2007). ”Q-
         -
”. Physical Review A. 75 (3). 032335. arXiv287 :quant-ph/0608161288 . Bib-
code289 :2007PhRvA..75c2335C290 . doi291 :10.1103/PhysRevA.75.032335292 .CS1 maint:
ref=harv (link293 )

264 https://en.wikipedia.org/wiki/Doi_(identifier)
265 https://doi.org/10.1109%2FFOCS.2008.58
266 https://en.wikipedia.org/wiki/ISBN_(identifier)
267 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7695-3436-7
269 https://en.wikipedia.org/wiki/Doi_(identifier)
270 https://doi.org/10.1016%2F0304-3975%2889%2990077-7
271 https://en.wikipedia.org/wiki/Ronald_Rivest
272 https://en.wikipedia.org/wiki/Albert_R._Meyer
273 https://en.wikipedia.org/wiki/Daniel_Kleitman
274 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
275 https://en.wikipedia.org/wiki/Doi_(identifier)
276 https://doi.org/10.1145%2F800133.804351
277 https://en.wikipedia.org/wiki/Doi_(identifier)
278 https://doi.org/10.1016%2FS0304-3975%2801%2900303-6
279 https://en.wikipedia.org/wiki/MR_(identifier)
280 http://www.ams.org/mathscinet-getitem?mr=0143666
281 https://en.wikipedia.org/wiki/Algorithmica
282 https://en.wikipedia.org/wiki/ArXiv_(identifier)
283 http://arxiv.org/abs/quant-ph/0102078
284 https://en.wikipedia.org/wiki/Doi_(identifier)
285 https://doi.org/10.1007%2Fs00453-002-0976-3
287 https://en.wikipedia.org/wiki/ArXiv_(identifier)
288 http://arxiv.org/abs/quant-ph/0608161
289 https://en.wikipedia.org/wiki/Bibcode_(identifier)
290 https://ui.adsabs.harvard.edu/abs/2007PhRvA..75c2335C
291 https://en.wikipedia.org/wiki/Doi_(identifier)
292 https://doi.org/10.1103%2FPhysRevA.75.032335


57. G, L K.294 (1996). A fast quantum mechanical algorithm for database
search. 28th ACM Symposium on Theory of Computing295 . Philadelphia, PA.
pp. 212–219. arXiv296 :quant-ph/9605043297 . doi298 :10.1145/237814.237866299 .
58. P, W W300 (1957). ”A  -
”. IBM Journal of Research and Development. 1 (2): 130–146.
doi301 :10.1147/rd.12.0130302 .
59. ”2^n − 1”. OEIS303 A000225304 . Archived305 8 June 2016 at the Wayback Machine306 .
Retrieved 7 May 2016.
60. L, D (1960). Teaching combinatorial tricks to a com-
puter. Proceedings of Symposia in Applied Mathematics. 10. pp. 180–181.
doi307 :10.1090/psapm/010308 .
61. C, B309 ; G, L J.310 (1986). ”F -
: I. A   ”311 (PDF). Algorithmica312 . 1 (1–4): 133–
162. CiteSeerX313 10.1.1.117.8349314 . doi315 :10.1007/BF01840440316 .
62. C, B317 ; G, L J.318 (1986), ”F -
: II. A”319 (PDF), Algorithmica320 , 1 (1–4): 163–191,
321
doi :10.1007/BF01840441 322

63. Bentley 2000323 , §4.1 (”The Challenge of Binary Search”).


64. P, R E.324 (1988). ”T    ”.
SIGCSE Bulletin. 20: 190–194. doi325 :10.1145/52965.53012326 .

294 https://en.wikipedia.org/wiki/Lov_Grover
295 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
296 https://en.wikipedia.org/wiki/ArXiv_(identifier)
297 http://arxiv.org/abs/quant-ph/9605043
298 https://en.wikipedia.org/wiki/Doi_(identifier)
299 https://doi.org/10.1145%2F237814.237866
300 https://en.wikipedia.org/wiki/W._Wesley_Peterson
301 https://en.wikipedia.org/wiki/Doi_(identifier)
302 https://doi.org/10.1147%2Frd.12.0130
303 https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences
304 http://oeis.org/A000225
305 https://web.archive.org/web/20160608084228/http://oeis.org/A000225
306 https://en.wikipedia.org/wiki/Wayback_Machine
307 https://en.wikipedia.org/wiki/Doi_(identifier)
308 https://doi.org/10.1090%2Fpsapm%2F010
309 https://en.wikipedia.org/wiki/Bernard_Chazelle
310 https://en.wikipedia.org/wiki/Leonidas_J._Guibas
311 http://www.cs.princeton.edu/~chazelle/pubs/FractionalCascading1.pdf
312 https://en.wikipedia.org/wiki/Algorithmica
313 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
314 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.8349
315 https://en.wikipedia.org/wiki/Doi_(identifier)
316 https://doi.org/10.1007%2FBF01840440
317 https://en.wikipedia.org/wiki/Bernard_Chazelle
318 https://en.wikipedia.org/wiki/Leonidas_J._Guibas
319 http://www.cs.princeton.edu/~chazelle/pubs/FractionalCascading2.pdf
320 https://en.wikipedia.org/wiki/Algorithmica
321 https://en.wikipedia.org/wiki/Doi_(identifier)
322 https://doi.org/10.1007%2FBF01840441
323 #CITEREFBentley2000
324 https://en.wikipedia.org/wiki/Richard_E._Pattis
325 https://en.wikipedia.org/wiki/Doi_(identifier)
326 https://doi.org/10.1145%2F52965.53012


65. B, J327 (2 J 2006). ”E,  –    : 
      ”328 . Google Research Blog.
Archived329 from the original on 1 April 2016. Retrieved 21 April 2016.
66. R, S (2003). ”O   -   -
”330 (PDF). Information Processing Letters331 . 87 (2): 67–71. Cite-
SeerX332 10.1.1.13.5631333 . doi334 :10.1016/S0020-0190(03)00263-1335 . Archived336
(PDF) from the original on 3 July 2006. Retrieved 19 March 2016.
67. Bentley 2000337 , §4.4 (”Principles”).
68. ” –     ”338 . The Open Group Base
Specifications (7th ed.). The Open Group339 . 2013. Archived340 from the original
on 21 March 2016. Retrieved 28 March 2016.
69. Stroustrup 2013341 , p. 945.
70. ”. - D P L”342 . dlang.org. Retrieved 29 April
2020.
71. U343 (2012), COBOL ANSI-85 programming reference manual, 1, pp. 598–601
72. ”P ”344 . The Go Programming Language. Archived345 from the original
on 25 April 2016. Retrieved 28 April 2016.
73. ”..A”346 . Java Platform Standard Edition 8 Documentation. Oracle
Corporation347 . Archived348 from the original on 29 April 2016. Retrieved 1 May
2016.
74. ”..C”349 . Java Platform Standard Edition 8 Documentation.
Oracle Corporation350 . Archived351 from the original on 23 April 2016. Retrieved 1
May 2016.

327 https://en.wikipedia.org/wiki/Joshua_Bloch
328 http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html
https://web.archive.org/web/20160401140544/http://googleresearch.blogspot.com/2006/
329
06/extra-extra-read-all-about-it-nearly.html
330 http://www.di.unipi.it/~ruggieri/Papers/semisum.pdf
331 https://en.wikipedia.org/wiki/Information_Processing_Letters
332 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
333 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.5631
334 https://en.wikipedia.org/wiki/Doi_(identifier)
335 https://doi.org/10.1016%2FS0020-0190%2803%2900263-1
https://web.archive.org/web/20060703173514/http://www.di.unipi.it/~ruggieri/Papers/
336
semisum.pdf
337 #CITEREFBentley2000
338 http://pubs.opengroup.org/onlinepubs/9699919799/functions/bsearch.html
339 https://en.wikipedia.org/wiki/The_Open_Group
https://web.archive.org/web/20160321211605/http://pubs.opengroup.org/onlinepubs/
340
9699919799/functions/bsearch.html
341 #CITEREFStroustrup2013
342 https://dlang.org/phobos/std_range.html#SortedRange
343 https://en.wikipedia.org/wiki/Unisys
344 https://golang.org/pkg/sort/
345 https://web.archive.org/web/20160425055919/https://golang.org/pkg/sort/
346 https://docs.oracle.com/javase/8/docs/api/java/util/Arrays.html
347 https://en.wikipedia.org/wiki/Oracle_Corporation
https://web.archive.org/web/20160429064301/http://docs.oracle.com/javase/8/docs/api/
348
java/util/Arrays.html
349 https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html
350 https://en.wikipedia.org/wiki/Oracle_Corporation
https://web.archive.org/web/20160423092424/https://docs.oracle.com/javase/8/docs/api/
351
java/util/Collections.html


75. ”L<T>.BS  (T)”352 . Microsoft Developer Network.


353
Archived from the original on 7 May 2016. Retrieved 10 April 2016.
76. ”NSA”354 . Mac Developer Library. Apple Inc.355 Archived356 from the original
on 17 April 2016. Retrieved 1 May 2016.
77. ”CFA”357 . Mac Developer Library. Apple Inc.358 Archived359 from the original
on 20 April 2016. Retrieved 1 May 2016.
78. ”8.6. bisect — Array bisection algorithm”360 . The Python Standard Library.
Python Software Foundation. Archived361 from the original on 25 March 2018.
Retrieved 26 March 2018.
79. Fitzgerald 2007362 , p. 152.

18.9.3 Works
• B, J363 (2000). Programming pearls (2nd ed.). Addison-Wesley364 .
365 366
ISBN 978-0-201-65788-3 .CS1 maint: ref=harv (link ) 367

• B, A; N, G E. (2016). A dictionary of computer


science (7th ed.). Oxford, UK: Oxford University Press368 . ISBN369 978-0-19-968897-
5370 .CS1 maint: ref=harv (link371 )
• C, S-K (2003). Data structures and algorithms. Software Engineering and
Knowledge Engineering. 13. Singapore: World Scientific372 . ISBN373 978-981-238-348-
8374 .CS1 maint: ref=harv (link375 )

352 https://msdn.microsoft.com/en-us/library/w4e7fxsh%28v=vs.110%29.aspx
https://web.archive.org/web/20160507141014/https://msdn.microsoft.com/en-us/library/
353
w4e7fxsh%28v=vs.110%29.aspx
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/
354 Classes/NSArray_Class/index.html#//apple_ref/occ/instm/NSArray/indexOfObject:
inSortedRange:options:usingComparator:
355 https://en.wikipedia.org/wiki/Apple_Inc.
https://web.archive.org/web/20160417163718/https://developer.apple.com/library/mac/
356 documentation/Cocoa/Reference/Foundation/Classes/NSArray_Class/index.html#//apple_
ref/occ/instm/NSArray/indexOfObject:inSortedRange:options:usingComparator:
https://developer.apple.com/library/mac/documentation/CoreFoundation/Reference/
357
CFArrayRef/index.html#//apple_ref/c/func/CFArrayBSearchValues
358 https://en.wikipedia.org/wiki/Apple_Inc.
https://web.archive.org/web/20160420193823/https://developer.apple.com/library/mac/
359 documentation/CoreFoundation/Reference/CFArrayRef/index.html#//apple_ref/c/func/
CFArrayBSearchValues
360 https://docs.python.org/3.6/library/bisect.html#module-bisect
https://web.archive.org/web/20180325105932/https://docs.python.org/3.6/library/
361
bisect.html#module-bisect
362 #CITEREFFitzgerald2007
363 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
364 https://en.wikipedia.org/wiki/Addison-Wesley
365 https://en.wikipedia.org/wiki/ISBN_(identifier)
366 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-65788-3
368 https://en.wikipedia.org/wiki/Oxford_University_Press
369 https://en.wikipedia.org/wiki/ISBN_(identifier)
370 https://en.wikipedia.org/wiki/Special:BookSources/978-0-19-968897-5
372 https://en.wikipedia.org/wiki/World_Scientific
373 https://en.wikipedia.org/wiki/ISBN_(identifier)
374 https://en.wikipedia.org/wiki/Special:BookSources/978-981-238-348-8


• C, T H.376 ; L, C E.377 ; R, R L.378 ; S,
C379 (2009). Introduction to algorithms380 (3 .). MIT P  MG-
H. ISBN381 978-0-262-03384-8382 .CS1 maint: ref=harv (link383 )
• F, M (2007). Ruby pocket reference. Sebastopol, California: O'Reilly
Media384 . ISBN385 978-1-4919-2601-7386 .CS1 maint: ref=harv (link387 )
• G, S A.; G, K J. (2008). A practical guide to data struc-
tures and algorithms using Java. Boca Raton, Florida: CRC Press388 . ISBN389 978-1-
58488-455-2390 .CS1 maint: ref=harv (link391 )
• K, M; M, S (2006). Large-scale genome sequence
processing. London, UK: Imperial College Press. ISBN392 978-1-86094-635-6393 .CS1
maint: ref=harv (link394 )
• K, D395 (1997). Fundamental algorithms. The Art of Computer Program-
ming396 . 1 (3rd ed.). Reading, MA: Addison-Wesley Professional. ISBN397 978-0-201-
89683-1398 .CS1 maint: ref=harv (link399 )
• K, D400 (1998). Sorting and searching. The Art of Computer Program-
ming401 . 3 (2nd ed.). Reading, MA: Addison-Wesley Professional. ISBN402 978-0-201-
89685-5403 .CS1 maint: ref=harv (link404 )

376 https://en.wikipedia.org/wiki/Thomas_H._Cormen
377 https://en.wikipedia.org/wiki/Charles_E._Leiserson
378 https://en.wikipedia.org/wiki/Ron_Rivest
379 https://en.wikipedia.org/wiki/Clifford_Stein
380 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
381 https://en.wikipedia.org/wiki/ISBN_(identifier)
382 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
384 https://en.wikipedia.org/wiki/O%27Reilly_Media
385 https://en.wikipedia.org/wiki/ISBN_(identifier)
386 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4919-2601-7
388 https://en.wikipedia.org/wiki/CRC_Press
389 https://en.wikipedia.org/wiki/ISBN_(identifier)
390 https://en.wikipedia.org/wiki/Special:BookSources/978-1-58488-455-2
392 https://en.wikipedia.org/wiki/ISBN_(identifier)
393 https://en.wikipedia.org/wiki/Special:BookSources/978-1-86094-635-6
395 https://en.wikipedia.org/wiki/Donald_Knuth
396 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
397 https://en.wikipedia.org/wiki/ISBN_(identifier)
398 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89683-1
400 https://en.wikipedia.org/wiki/Donald_Knuth
401 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
402 https://en.wikipedia.org/wiki/ISBN_(identifier)
403 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5


• K, D405 (2011). Combinatorial algorithms. The Art of Computer Pro-


gramming406 . 4A (1st ed.). Reading, MA: Addison-Wesley Professional. ISBN407 978-0-
201-03804-0408 .CS1 maint: ref=harv (link409 )
• M, A; T, A (2002). Compression and coding algorithms.
Hamburg, Germany: Kluwer Academic Publishers. doi410 :10.1007/978-1-4615-0935-6411 .
ISBN412 978-0-7923-7668-2413 .CS1 maint: ref=harv (link414 )
• S, R415 ; W, K (2011). Algorithms416 (4 .). U
S R, N J: A-W P. ISBN417 978-0-321-
57351-3418 .CS1 maint: ref=harv (link419 ) Condensed web version 420 ; book version 421 .
• S, B422 (2013). The C++ programming language (4th ed.). Up-
per Saddle River, New Jersey: Addison-Wesley Professional. ISBN423 978-0-321-56384-
2424 .CS1 maint: ref=harv (link425 )

18.10 External links

The Wikibook Algorithm implementation426 has a page on the topic of: Binary
search427

• NIST Dictionary of Algorithms and Data Structures: binary search428


• Comparisons and benchmarks of a variety of binary search implementations in C429

405 https://en.wikipedia.org/wiki/Donald_Knuth
406 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
407 https://en.wikipedia.org/wiki/ISBN_(identifier)
408 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-03804-0
410 https://en.wikipedia.org/wiki/Doi_(identifier)
411 https://doi.org/10.1007%2F978-1-4615-0935-6
412 https://en.wikipedia.org/wiki/ISBN_(identifier)
413 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7923-7668-2
415 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
416 http://algs4.cs.princeton.edu/home/
417 https://en.wikipedia.org/wiki/ISBN_(identifier)
418 https://en.wikipedia.org/wiki/Special:BookSources/978-0-321-57351-3
420 https://en.wikipedia.org/wiki/Open_access
421 https://en.wikipedia.org/wiki/Paywall
422 https://en.wikipedia.org/wiki/Bjarne_Stroustrup
423 https://en.wikipedia.org/wiki/ISBN_(identifier)
424 https://en.wikipedia.org/wiki/Special:BookSources/978-0-321-56384-2
426 https://en.wikibooks.org/wiki/Algorithm_implementation
427 https://en.wikibooks.org/wiki/Algorithm_implementation/Search/Binary_search
https://web.archive.org/web/20161104005739/https://xlinux.nist.gov/dads/HTML/
428
binarySearch.html
429 https://sites.google.com/site/binarysearchcube/binary-search

19 Binary search tree

Binary search tree


Type: tree
Invented: 1960
Invented by: P.F. Windley, A.D. Booth, A.J.T. Colin, and T.N. Hibbard

Time complexity in big O notation
Algorithm   Average    Worst case
Space       O(n)       O(n)
Search      O(log n)   O(n)
Insert      O(log n)   O(n)
Delete      O(log n)   O(n)


Figure 48 A binary search tree of size 9 and depth 3, with 8 at the root. The leaves are
not drawn.

In computer science1 , binary search trees (BST), sometimes called ordered or sorted
binary trees, are a particular type of container2 : a data structure3 that stores "items"
(such as numbers, names etc.) in memory4 . They allow fast lookup, addition and removal
of items, and can be used to implement either dynamic sets5 of items, or lookup tables6
that allow finding an item by its key (e.g., finding the phone number of a person by name).
Binary search trees keep their keys in sorted order, so that lookup and other operations
can use the principle of binary search7 : when looking for a key in a tree (or a place to
insert a new key), they traverse the tree from root to leaf, making comparisons to keys
stored in the nodes of the tree and deciding, on the basis of the comparison, to continue

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Collection_(abstract_data_type)
3 https://en.wikipedia.org/wiki/Data_structure
4 https://en.wikipedia.org/wiki/Computer_memory
5 https://en.wikipedia.org/wiki/Set_(abstract_data_type)
6 https://en.wikipedia.org/wiki/Lookup_table
7 https://en.wikipedia.org/wiki/Binary_search


searching in the left or right subtrees. On average, this means that each comparison allows
the operations to skip about half of the tree, so that each lookup, insertion or deletion takes
time proportional to8 the logarithm9 of the number of items stored in the tree. This is
much better than the linear time10 required to find items by key in an (unsorted) array, but
slower than the corresponding operations on hash tables11 .
Several variants of the binary search tree have been studied in computer science; this ar-
ticle deals primarily with the basic type, making references to more advanced types when
appropriate.

19.1 Definition

A binary search tree is a rooted12 binary tree13 , whose internal nodes each store a key
(and optionally, an associated value) and each have two distinguished sub-trees, commonly
denoted left and right. The tree additionally satisfies the binary search14 property, which
states that the key in each node must be greater than or equal to any key stored in the
left sub-tree, and less than or equal to any key stored in the right sub-tree.[1]:287 The leaves
(final nodes) of the tree contain no key and have no structure to distinguish them from one
another.
Frequently, the information represented by each node is a record rather than a single data
element. However, for sequencing purposes, nodes are compared according to their keys
rather than any part of their associated records. The major advantage of binary search trees
over other data structures is that the related sorting algorithms15 and search algorithms16
such as in-order traversal17 can be very efficient; they are also easy to code.
Binary search trees are a fundamental data structure used to construct more abstract data
structures such as sets18 , multisets19 , and associative arrays20 .
• When inserting or searching for an element in a binary search tree, the key of each visited
node has to be compared with the key of the element to be inserted or found.
• The shape of the binary search tree depends entirely on the order of insertions and
deletions, and can become degenerate.
• After a long intermixed sequence of random insertions and deletions, the expected height
of the tree approaches the square root of the number of keys, √n, which grows much faster
than log n.

8 https://en.wikipedia.org/wiki/Time_complexity
9 https://en.wikipedia.org/wiki/Logarithm
10 https://en.wikipedia.org/wiki/Linear_time
11 https://en.wikipedia.org/wiki/Hash_table
12 https://en.wikipedia.org/wiki/Rooted_tree
13 https://en.wikipedia.org/wiki/Binary_tree
14 https://en.wikipedia.org/wiki/Binary_search
15 https://en.wikipedia.org/wiki/Sorting_algorithm
16 https://en.wikipedia.org/wiki/Search_algorithm
17 https://en.wikipedia.org/wiki/In-order_traversal
18 https://en.wikipedia.org/wiki/Set_(computer_science)
19 https://en.wikipedia.org/wiki/Set_(computer_science)#Multiset
20 https://en.wikipedia.org/wiki/Associative_array


• There has been a lot of research to prevent degeneration of the tree, which results in a
worst-case time complexity of O(n) (for details see section Types21 ).
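As a concrete illustration of such a node record, a minimal sketch in Python might look like this (the class name and attribute layout are illustrative, not part of any standard API):

```python
class Node:
    """A BST node: a key, an optional associated value, and two subtrees."""
    def __init__(self, key, value=None, left=None, right=None):
        self.key = key
        self.value = value
        self.left = left
        self.right = right

# A small tree satisfying the BST property: keys in the left subtree
# are smaller than the root's key, keys in the right subtree larger.
root = Node(8,
            left=Node(3, left=Node(1), right=Node(6)),
            right=Node(10, right=Node(14)))
```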

19.1.1 Order relation

Binary search requires an order relation by which every element (item) can be compared
with every other element in the sense of a total preorder22 . The part of the element which
effectively takes place in the comparison is called its key. Whether duplicates, i.e. different
elements with same key, shall be allowed in the tree or not, does not depend on the order
relation, but on the application only.
In the context of binary search trees a total preorder is realized most flexibly by means of
a three-way comparison23 subroutine24 .
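A sketch of such a three-way comparison subroutine, and of a tree search driven by it, might look as follows (the function names and the `namedtuple` node are illustrative):

```python
from collections import namedtuple

Node = namedtuple("Node", "key left right")

def cmp3(a, b):
    """Three-way comparison: -1 if a < b, 0 if a == b, +1 if a > b."""
    return (a > b) - (a < b)

def search(node, key):
    """Descend the tree, branching on the sign of the comparison."""
    while node is not None:
        c = cmp3(key, node.key)
        if c == 0:
            return node
        node = node.left if c < 0 else node.right
    return None
```

A single comparison call per node thus decides among all three outcomes (found, go left, go right).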

19.2 Operations

Binary search trees support three main operations: insertion of elements, deletion of ele-
ments, and lookup (checking whether a key is present).

19.2.1 Searching

Searching in a binary search tree for a specific key can be programmed recursively25 or
iteratively26 .
We begin by examining the root node27 . If the tree is null, the key we are searching for does
not exist in the tree. Otherwise, if the key equals that of the root, the search is successful
and we return the node. If the key is less than that of the root, we search the left subtree.
Similarly, if the key is greater than that of the root, we search the right subtree. This
process is repeated until the key is found or the remaining subtree is null. If the searched
key is not found after a null subtree is reached, then the key is not present in the tree. This
is easily expressed as a recursive algorithm (implemented in Python28 ):

def search_recursively(key, node):
    if node is None or node.key == key:
        return node
    if key < node.key:
        return search_recursively(key, node.left)
    if key > node.key:
        return search_recursively(key, node.right)

21 #Types
22 https://en.wikipedia.org/wiki/Total_preorder
23 https://en.wikipedia.org/wiki/Three-way_comparison
24 https://en.wikipedia.org/wiki/Subroutine
25 https://en.wikipedia.org/wiki/Recursion_(computer_science)
26 https://en.wikipedia.org/wiki/Iteration#Computing
27 https://en.wikipedia.org/wiki/Tree_(data_structure)#root_nodes
28 https://en.wikipedia.org/wiki/Python_(programming_language)


The same algorithm can be implemented iteratively:

def search_iteratively(key, node):
    current_node = node
    while current_node is not None:
        if key == current_node.key:
            return current_node
        if key < current_node.key:
            current_node = current_node.left
        else:  # key > current_node.key
            current_node = current_node.right
    return current_node

These two examples rely on the order relation being a total order.
If the order relation is only a total preorder, a reasonable extension of the functionality is
the following: also in case of equality search down to the leaves in a direction specified by
the user. A binary tree sort29 equipped with such a comparison function becomes stable30 .
Because in the worst case this algorithm must search from the root of the tree to the leaf far-
thest from the root, the search operation takes time proportional to the tree's height (see tree
terminology31 ). On average, binary search trees with n nodes have O32 (log n) height.[note 1]
However, in the worst case, binary search trees can have O(n) height, when the unbalanced
tree resembles a linked list33 (degenerate tree34 ).

19.2.2 Insertion

Insertion begins as a search would begin; if the key is not equal to that of the root, we
search the left or right subtrees as before. Eventually, we will reach an external node and
add the new key-value pair (here encoded as a record 'newNode') as its right or left child,
depending on the node's key. In other words, we examine the root and recursively insert
the new node to the left subtree if its key is less than that of the root, or the right subtree
if its key is greater than or equal to the root.
Here's how a typical binary search tree insertion might be performed in C++35 :

void insert(Node*& root, int key, int value) {
    if (!root)
        root = new Node(key, value);
    else if (key == root->key)
        root->value = value;
    else if (key < root->key)
        insert(root->left, key, value);
    else  // key > root->key
        insert(root->right, key, value);
}

29 https://en.wikipedia.org/wiki/Tree_sort
30 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
31 https://en.wikipedia.org/wiki/Tree_(data_structure)#Terminology
32 https://en.wikipedia.org/wiki/Big_O_notation
33 https://en.wikipedia.org/wiki/Linked_list
34 https://en.wikipedia.org/wiki/Binary_Tree#Types_of_binary_trees
35 https://en.wikipedia.org/wiki/C%2B%2B


Alternatively, a non-recursive version might be implemented like this. Using a pointer-to-pointer to keep track of where we came from lets the code avoid explicit checking for and handling of the case where it needs to insert a node at the tree root[2] :

void insert(Node** root, int key, int value) {
    Node **walk = root;
    while (*walk) {
        int curkey = (*walk)->key;
        if (curkey == key) {
            (*walk)->value = value;
            return;
        }
        if (key > curkey)
            walk = &(*walk)->right;
        else
            walk = &(*walk)->left;
    }
    *walk = new Node(key, value);
}

The above destructive procedural variant modifies the tree in place. It uses only constant
heap space (and the iterative version uses constant stack space as well), but the prior version
of the tree is lost. Alternatively, as in the following Python36 example, we can reconstruct
all ancestors of the inserted node; any reference to the original tree root remains valid,
making the tree a persistent data structure37 :

def binary_tree_insert(node, key, value):
    if node is None:
        return NodeTree(None, key, value, None)
    if key == node.key:
        return NodeTree(node.left, key, value, node.right)
    if key < node.key:
        return NodeTree(binary_tree_insert(node.left, key, value),
                        node.key, node.value, node.right)
    return NodeTree(node.left, node.key, node.value,
                    binary_tree_insert(node.right, key, value))

The part that is rebuilt uses O38 (log n) space in the average case and O(n) in the worst
case.
In either version, this operation requires time proportional to the height of the tree in the
worst case, which is O(log n) time in the average case over all trees, but O(n) time in the
worst case.
Another way to explain insertion is that in order to insert a new node in the tree, its key is
first compared with that of the root. If its key is less than the root's, it is then compared
with the key of the root's left child. If its key is greater, it is compared with the root's right
child. This process continues, until the new node is compared with a leaf node, and then it
is added as this node's right or left child, depending on its key: if the key is less than the
leaf's key, then it is inserted as the leaf's left child, otherwise as the leaf's right child.
There are other ways of inserting nodes into a binary tree, but this is the only way of
inserting nodes at the leaves and at the same time preserving the BST structure.

36 https://en.wikipedia.org/wiki/Python_(programming_language)
37 https://en.wikipedia.org/wiki/Persistent_data_structure
38 https://en.wikipedia.org/wiki/Big_O_notation


19.2.3 Deletion

When removing a node from a binary search tree it is mandatory to maintain the in-order
sequence of the nodes. There are many possibilities to do this. However, the following
method which has been proposed by T. Hibbard in 1962[3] guarantees that the heights of
the subject subtrees are changed by at most one. There are three possible cases to consider:
• Deleting a node with no children: simply remove the node from the tree.
• Deleting a node with one child: remove the node and replace it with its child.
• Deleting a node with two children: call the node to be deleted D. Do not delete D. Instead,
choose either its in-order39 predecessor node or its in-order successor node as replacement
node E (see figure). Copy the user values of E to D.[note 2] If E does not have a child, simply
remove E from its previous parent G. If E has a child, say F, it is a right child. Replace
E with F at E's parent.

Figure 49 Deleting a node with two children from a binary search tree. First the
leftmost node in the right subtree, the in-order successor E, is identified. Its value is
copied into the node D being deleted. The in-order successor can then be easily deleted
because it has at most one child. The same method works symmetrically using the
in-order predecessor C.

In all cases, when D happens to be the root, make the replacement node root again.
Nodes with two children are harder to delete. A node's in-order successor is its right
subtree's left-most child, and a node's in-order predecessor is the left subtree's right-most
child. In either case, this node will have only one or no child at all. Delete it according to
one of the two simpler cases above.
Consistently using the in-order successor or the in-order predecessor for every instance of
the two-child case can lead to an unbalanced40 tree, so some implementations select one or
the other at different times.
Runtime analysis: Although this operation does not always traverse the tree down to a leaf,
this is always a possibility; thus in the worst case it requires time proportional to the height
of the tree. It does not require more even when the node has two children, since it still
follows a single path and does not visit any node twice.

def find_min(self):
    """Get minimum node in a subtree."""
    current_node = self
    while current_node.left_child:
        current_node = current_node.left_child
    return current_node

39 https://en.wikipedia.org/wiki/Tree_traversal
40 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree

def replace_node_in_parent(self, new_value=None) -> None:
    if self.parent:
        if self == self.parent.left_child:
            self.parent.left_child = new_value
        else:
            self.parent.right_child = new_value
    if new_value:
        new_value.parent = self.parent

def binary_tree_delete(self, key) -> None:
    if key < self.key:
        self.left_child.binary_tree_delete(key)
        return
    if key > self.key:
        self.right_child.binary_tree_delete(key)
        return
    # Delete the key here
    if self.left_child and self.right_child:  # If both children are present
        successor = self.right_child.find_min()
        self.key = successor.key
        successor.binary_tree_delete(successor.key)
    elif self.left_child:  # If the node has only a *left* child
        self.replace_node_in_parent(self.left_child)
    elif self.right_child:  # If the node has only a *right* child
        self.replace_node_in_parent(self.right_child)
    else:
        self.replace_node_in_parent(None)  # This node has no children

19.2.4 Traversal

Main article: Tree traversal41
Once the binary search tree has been created, its elements can be retrieved in-order42 by recursively43 traversing the left subtree of the root node, accessing
the node itself, then recursively traversing the right subtree of the node, continuing this
pattern with each node in the tree as it's recursively accessed. As with all binary trees,
one may conduct a pre-order traversal44 or a post-order traversal45 , but neither are likely to
be useful for binary search trees. An in-order traversal of a binary search tree will always
result in a sorted list of node items (numbers, strings or other comparable items).
The code for in-order traversal in Python is given below. It will call callback46 (some
function the programmer wishes to call on the node's value, such as printing to the screen)
for every node in the tree.

def traverse_binary_tree(node, callback):
    if node is None:
        return
    traverse_binary_tree(node.leftChild, callback)
    callback(node.value)
    traverse_binary_tree(node.rightChild, callback)

41 https://en.wikipedia.org/wiki/Tree_traversal
42 https://en.wikipedia.org/wiki/In-order_traversal
43 https://en.wikipedia.org/wiki/Recursion
44 https://en.wikipedia.org/wiki/Pre-order_traversal
45 https://en.wikipedia.org/wiki/Post-order_traversal
46 https://en.wikipedia.org/wiki/Callback_(computer_programming)

Traversal requires O47 (n) time, since it must visit every node. This algorithm is also O(n),
so it is asymptotically optimal48 .
Traversal can also be implemented iteratively49 . For certain applications, e.g. greater-or-equal search or approximate search, an operation for single-step (iterative) traversal can be very useful. This is, of course, implemented without the callback construct and takes O(1) time on average and O(log n) in the worst case.

19.2.5 Verification

Sometimes we already have a binary tree, and we need to determine whether it is a BST.
This problem has a simple recursive solution.
The BST property—every node on the right subtree has to be larger than the current node
and every node on the left subtree has to be smaller than the current node—is the key to
figuring out whether a tree is a BST or not. The greedy algorithm50 —simply traverse the
tree, at every node check whether the node contains a value larger than the value at the left
child and smaller than the value on the right child—does not work for all cases. Consider
the following tree:
20
/ \
10 30
/ \
5 40

In the tree above, each node meets the condition that it contains a value larger than its left child and smaller than its right child, and yet the tree is not a BST: the value 5 is in the right subtree of the node containing 20, a violation of the BST property.
Instead of making a decision based solely on the values of a node and its children, we also
need information flowing down from the parent as well. In the case of the tree above, if we
could remember about the node containing the value 20, we would see that the node with
value 5 is violating the BST property contract.
So the condition we need to check at each node is:
• if the node is the left child of its parent, then it must be smaller than (or equal to) the
parent and it must pass down the value from its parent to its right subtree to make sure
none of the nodes in that subtree is greater than the parent
• if the node is the right child of its parent, then it must be larger than the parent and
it must pass down the value from its parent to its left subtree to make sure none of the
nodes in that subtree is lesser than the parent.
A recursive solution in C++ can explain this further:

47 https://en.wikipedia.org/wiki/Big_O_notation
48 https://en.wikipedia.org/wiki/Asymptotically_optimal
49 https://en.wikipedia.org/wiki/Iteration#Computing
50 https://en.wikipedia.org/wiki/Greedy_algorithm


struct TreeNode {
    int key;
    int value;
    struct TreeNode *left;
    struct TreeNode *right;
};

bool isBST(struct TreeNode *node, int minKey, int maxKey) {
    if (node == NULL) return true;
    if (node->key < minKey || node->key > maxKey) return false;

    return isBST(node->left, minKey, node->key-1) &&
           isBST(node->right, node->key+1, maxKey);
}

The node->key+1 and node->key-1 bounds are used to allow only distinct elements in the BST; if equal elements are also to be permitted, node->key can be used in both places instead.
The initial call to this function can be something like this:

if (isBST(root, INT_MIN, INT_MAX)) {
    puts("This is a BST.");
} else {
    puts("This is NOT a BST!");
}

Essentially we keep creating a valid range (starting from [MIN_VALUE, MAX_VALUE]) and keep shrinking it down for each node as we go down recursively.
As pointed out in section #Traversal, an in-order traversal of a binary search tree returns
the nodes sorted. Thus we only need to keep the last visited node while traversing the tree
and check whether its key is smaller (or smaller/equal, if duplicates are to be allowed in the
tree) compared to the current key.
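That traversal-based check can be sketched as follows (assuming nodes with key/left/right attributes and disallowing duplicates; the `namedtuple` node and the function names are illustrative, and the tree built below is the counterexample from above):

```python
from collections import namedtuple

Node = namedtuple("Node", "key left right")

def in_order(node):
    """Yield the keys of the tree in in-order sequence."""
    if node is not None:
        yield from in_order(node.left)
        yield node.key
        yield from in_order(node.right)

def is_bst(root):
    """A tree is a BST iff its in-order key sequence is strictly increasing."""
    keys = list(in_order(root))
    return all(a < b for a, b in zip(keys, keys[1:]))

# The counterexample tree from the text: 5 and 40 sit under 10,
# and the in-order sequence 5, 10, 40, 20, 30 is not sorted.
bad = Node(20,
           Node(10, Node(5, None, None), Node(40, None, None)),
           Node(30, None, None))
```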

19.3 Examples of applications

19.3.1 Sort

Main article: Tree sort51
A binary search tree can be used to implement a simple sorting algorithm52 . Similar to heapsort53 , we insert all the values we wish to sort into a new
ordered data structure—in this case a binary search tree—and then traverse it in order.
The worst-case time of build_binary_tree is O(n²)—if you feed it a sorted list of values, it
chains them into a linked list54 with no left subtrees. For example, build_binary_tree([1,
2, 3, 4, 5]) yields the tree (1 (2 (3 (4 (5))))).
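A hypothetical build-and-traverse pair along these lines might be sketched like this (all names, including `tree_sort`, are illustrative rather than an established API):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Standard BST insertion; duplicates go to the right subtree."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def tree_sort(values):
    """Insert every value, then read them back with an in-order walk."""
    root = None
    for v in values:
        root = insert(root, v)
    out = []
    def walk(node):
        if node is not None:
            walk(node.left)
            out.append(node.key)
            walk(node.right)
    walk(root)
    return out
```

Feeding it already-sorted input still produces sorted output, but via the degenerate, linked-list-shaped tree described above.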

51 https://en.wikipedia.org/wiki/Tree_sort
52 https://en.wikipedia.org/wiki/Sorting_algorithm
53 https://en.wikipedia.org/wiki/Heapsort
54 https://en.wikipedia.org/wiki/Linked_list


There are several schemes for overcoming this flaw with simple binary trees; the most
common is the self-balancing binary search tree55 . If this same procedure is done using
such a tree, the overall worst-case time is O(n log n), which is asymptotically optimal56 for
a comparison sort57 . In practice, the added overhead in time and space for a tree-based sort
(particularly for node allocation58 ) make it inferior to other asymptotically optimal sorts
such as heapsort59 for static list sorting. On the other hand, it is one of the most efficient
methods of incremental sorting, adding items to a list over time while keeping the list sorted
at all times.

19.3.2 Priority queue operations

Binary search trees can serve as priority queues60 : structures that allow insertion of arbitrary
key as well as lookup and deletion of the minimum (or maximum) key. Insertion works as
previously explained. Find-min walks the tree, following left pointers as far as it can without
hitting a leaf:
// Precondition: T is not a leaf
function find-min(T):
    while hasLeft(T):
        T ← left(T)
    return key(T)

Find-max is analogous: follow right pointers as far as possible. Delete-min (max) can simply
look up the minimum (maximum), then delete it. This way, insertion and deletion both
take logarithmic time, just as they do in a binary heap61 , but unlike a binary heap and
most other priority queue implementations, a single tree can support all of find-min, find-
max, delete-min and delete-max at the same time, making binary search trees suitable as
double-ended priority queues62 .[4]:156
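In Python, the find-min/delete-min pair might be sketched as follows (plain key/left/right nodes; an assumption here is that both functions are called on a non-empty tree, and the leftmost node never has a left child, so splicing it out is the one-child deletion case):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def find_min(node):
    """Follow left pointers as far as possible. Precondition: node is not None."""
    while node.left is not None:
        node = node.left
    return node.key

def delete_min(node):
    """Return the root of the tree with its minimum key removed."""
    if node.left is None:       # this node holds the minimum;
        return node.right       # splice its right subtree into its place
    node.left = delete_min(node.left)
    return node
```

Find-max and delete-max are mirror images, following right pointers instead.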

19.4 Types

There are many types of binary search trees. AVL trees63 and red-black trees64 are both
forms of self-balancing binary search trees65 . A splay tree66 is a binary search tree that
automatically moves frequently accessed elements nearer to the root. In a treap67 (tree
heap68 ), each node also holds a (randomly chosen) priority and the parent node has higher

55 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
56 https://en.wikipedia.org/wiki/Asymptotically_optimal
57 https://en.wikipedia.org/wiki/Comparison_sort
58 https://en.wikipedia.org/wiki/Dynamic_memory_allocation
59 https://en.wikipedia.org/wiki/Heapsort
60 https://en.wikipedia.org/wiki/Priority_queue
61 https://en.wikipedia.org/wiki/Binary_heap
62 https://en.wikipedia.org/wiki/Double-ended_priority_queue
63 https://en.wikipedia.org/wiki/AVL_tree
64 https://en.wikipedia.org/wiki/Red-black_tree
65 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
66 https://en.wikipedia.org/wiki/Splay_tree
67 https://en.wikipedia.org/wiki/Treap
68 https://en.wikipedia.org/wiki/Heap_(data_structure)


priority than its children. Tango trees69 are trees optimized for fast searches. T-trees70 are binary search trees optimized to reduce storage space overhead, widely used for in-memory databases.
A degenerate tree is a tree where for each parent node, there is only one associated child
node. It is unbalanced and, in the worst case, performance degrades to that of a linked list.
If the insertion routine does not handle re-balancing, then a degenerate tree can easily be constructed by feeding it data that is already sorted; in a performance measurement, such a tree will essentially behave like a linked list data structure.

19.4.1 Performance comparisons

D. A. Heger (2004)[5] presented a performance comparison of binary search trees. Treap71 was found to have the best average performance, while red-black tree72 was found to have the smallest number of performance variations.

19.4.2 Optimal binary search trees

Main article: Optimal binary search tree73

Figure 50 Tree rotations are very common internal operations in binary trees to keep
perfect, or near-to-perfect, internal balance in the tree.

69 https://en.wikipedia.org/wiki/Tango_tree
70 https://en.wikipedia.org/wiki/T-tree
71 https://en.wikipedia.org/wiki/Treap
72 https://en.wikipedia.org/wiki/Red-black_tree
73 https://en.wikipedia.org/wiki/Optimal_binary_search_tree


If we do not plan on modifying a search tree, and we know exactly how often each item will
be accessed, we can construct[6] an optimal binary search tree, which is a search tree where
the average cost of looking up an item (the expected search cost) is minimized.
Even if we only have estimates of the search costs, such a system can considerably speed
up lookups on average. For example, if we have a BST of English words used in a spell
checker74 , we might balance the tree based on word frequency in text corpora75 , placing
words like the near the root and words like agerasia near the leaves. Such a tree might be
compared with Huffman trees76 , which similarly seek to place frequently used items near
the root in order to produce a dense information encoding; however, Huffman trees store
data elements only in leaves, and these elements need not be ordered.
If the sequence in which the elements in the tree will be accessed is unknown in advance,
splay trees77 can be used which are asymptotically as good as any static search tree we can
construct for any particular sequence of lookup operations.
Alphabetic trees are Huffman trees with the additional constraint on order, or, equivalently,
search trees with the modification that all elements are stored in the leaves. Faster algo-
rithms exist for optimal alphabetic binary trees (OABTs).

19.5 See also


• Binary search algorithm78
• Search tree79
• Self-balancing binary search tree80
• AVL tree81
• Red–black tree82
• Randomized binary search tree83
• Tango tree84

19.6 Notes
1. The notion of an average BST is made precise as follows. Let a random BST be one
built using only insertions out of a sequence of unique elements in random order (all
permutations equally likely); then the expected85 height of the tree is O(log n). If

74 https://en.wikipedia.org/wiki/Spell_checker
75 https://en.wikipedia.org/wiki/Text_corpus
76 https://en.wikipedia.org/wiki/Huffman_tree
77 https://en.wikipedia.org/wiki/Splay_tree
78 https://en.wikipedia.org/wiki/Binary_search_algorithm
79 https://en.wikipedia.org/wiki/Search_tree
80 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
81 https://en.wikipedia.org/wiki/AVL_tree
82 https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
83 https://en.wikipedia.org/wiki/Randomized_binary_search_tree
84 https://en.wikipedia.org/wiki/Tango_tree
85 https://en.wikipedia.org/wiki/Expected_value


deletions are allowed as well as insertions, "little is known about the average height of a binary search tree".[1]:300
2. Of course, a generic software package has to work the other way around: It has to
leave the user data untouched and to furnish E with all the BST links to and from D.

19.7 References
1. C, T H.86 ; L, C E.87 ; R, R L.88 ; S,
C89 (2009) [1990]. Introduction to Algorithms90 (3 .). MIT P 
MG-H. ISBN91 0-262-03384-492 .
2. T, G. ”L   ”93 . R
21 F 2019.
3. s. Robert Sedgewick94 , Kevin Wayne: Algorithms Fourth Edition.95 Pearson Educa-
tion, 2011, ISBN96 978-0-321-57351-397 , p. 410.
4. M, K98 ; S, P99 (2008). Algorithms and Data Structures:
The Basic Toolbox100 (PDF). S.
5. H, D A. (2004), ”A D  T P B-
  B S T D S”101 (PDF), European Journal for
the Informatics Professional, 5 (5): 67–75
6. G, G. ”O B S T”102 . Scientific Computation.
ETH Zürich. Archived from the original103 on 12 October 2014. Retrieved 1 December
2013.

86 https://en.wikipedia.org/wiki/Thomas_H._Cormen
87 https://en.wikipedia.org/wiki/Charles_E._Leiserson
88 https://en.wikipedia.org/wiki/Ron_Rivest
89 https://en.wikipedia.org/wiki/Clifford_Stein
90 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
91 https://en.wikipedia.org/wiki/ISBN_(identifier)
92 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
93 https://grisha.org/blog/2013/04/02/linus-on-understanding-pointers/
94 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
95 http://www.albertstam.com/Algorithms.pdf
96 https://en.wikipedia.org/wiki/ISBN_(identifier)
97 https://en.wikipedia.org/wiki/Special:BookSources/978-0-321-57351-3
98 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
99 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
100 http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/SortedSequences.pdf
101 http://www.cepis.org/upgrade/files/full-2004-V.pdf
102 https://web.archive.org/web/20141012033537/http://linneus20.ethz.ch:8080/4_7_1.html
103 http://linneus20.ethz.ch:8080/4_7_1.html


19.8 Further reading


• This article incorporates public domain material104 from the NIST105 document: Black, Paul E. "Binary Search Tree"106 . Dictionary of Algorithms and Data Structures107 .
• Cormen, Thomas H.108 ; Leiserson, Charles E.109 ; Rivest, Ronald L.110 ; Stein, Clifford111 (2001). "12: Binary search trees, 15.5: Optimal binary search trees". Introduction to Algorithms112 (2nd ed.). MIT Press & McGraw-Hill. pp. 253–272, 356–363. ISBN113 0-262-03293-7114 .
• Jarc, Duane J. (3 December 2005). "Binary Tree Traversals"115 . Interactive Data Structure Visualizations. University of Maryland116 .
• Knuth, Donald117 (1997). "6.2.2: Binary Tree Searching". The Art of Computer Programming118 . 3: "Sorting and Searching" (3rd ed.). Addison-Wesley. pp. 426–458. ISBN119 0-201-89685-0120 .
• Long, Sean. "Binary Search Tree"121 (PPT122 ). Data Structures and Algorithms Visualization-A PowerPoint Slides Based Approach. SUNY Oneonta123 .
• Parlante, Nick (2001). "Binary Trees"124 . CS Education Library. Stanford University125 .

19.9 External links


• Binary Tree Visualizer126 (JavaScript animation of various BT-based data structures)
• Kovac, Kubo. "Binary Search Trees"127 (Java applet128). Korešpondenčný seminár z programovania.

104 https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States
105 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
106 https://xlinux.nist.gov/dads/HTML/binarySearchTree.html
107 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
108 https://en.wikipedia.org/wiki/Thomas_H._Cormen
109 https://en.wikipedia.org/wiki/Charles_E._Leiserson
110 https://en.wikipedia.org/wiki/Ronald_L._Rivest
111 https://en.wikipedia.org/wiki/Clifford_Stein
112 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
113 https://en.wikipedia.org/wiki/ISBN_(identifier)
114 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
115 http://nova.umuc.edu/~jarc/idsv/lesson1.html
116 https://en.wikipedia.org/wiki/University_of_Maryland
117 https://en.wikipedia.org/wiki/Donald_Knuth
118 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
119 https://en.wikipedia.org/wiki/ISBN_(identifier)
120 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
121 http://employees.oneonta.edu/zhangs/PowerPointPlatform/resources/samples/binarysearchtree.ppt
122 https://en.wikipedia.org/wiki/Microsoft_PowerPoint
123 https://en.wikipedia.org/wiki/SUNY_Oneonta
124 http://cslibrary.stanford.edu/110/BinaryTrees.html
125 https://en.wikipedia.org/wiki/Stanford_University
126 http://btv.melezinek.cz
127 http://people.ksp.sk/~kuko/bak/
128 https://en.wikipedia.org/wiki/Java_applet


• M, J (18 A 2009). ”B S T”129 . JDServer. Archived
from the original130 on 28 March 2010. C++ implementation.
• Binary Search Tree Example in Python131
• ”R  P (C++)”132 . MSDN133 . M134 . 2005. Gives an
example binary tree implementation.


129 https://web.archive.org/web/20100328221436/http://jdserver.homelinux.org/wiki/Binary_Search_Tree
130 http://jdserver.homelinux.org/wiki/Binary_Search_Tree
131 http://code.activestate.com/recipes/286239/
132 http://msdn.microsoft.com/en-us/library/1sf8shae%28v=vs.80%29.aspx
133 https://en.wikipedia.org/wiki/MSDN
134 https://en.wikipedia.org/wiki/Microsoft

20 Trie

This article is about a tree data structure. For the French commune, see Trie-sur-Baïse1 .
A type of search tree data structure

Figure 51 A trie for keys "A", "to", "tea", "ted", "ten", "i", "in", and "inn". Note that this example does not have all of the children sorted alphabetically from left to right, as they should be (at the root and at node 't').

1 https://en.wikipedia.org/wiki/Trie-sur-Ba%C3%AFse


In computer science2 , a trie, also called digital tree or prefix tree, is a kind of search
tree3 —an ordered tree4 data structure5 used to store a dynamic set6 or associative array7
where the keys are usually strings8 . Unlike a binary search tree9 , no node in the tree stores
the key associated with that node; instead, its position in the tree defines the key with
which it is associated; i.e., the value of the key is distributed across the structure. All
the descendants of a node have a common prefix10 of the string associated with that node,
and the root is associated with the empty string11 . Keys tend to be associated with leaves,
though some inner nodes may correspond to keys of interest. Hence, keys are not necessarily
associated with every node. For the space-optimized variant of the prefix tree, see compact prefix tree12.
In the example shown, keys are listed in the nodes and values below them. Each complete
English word has an arbitrary integer value associated with it. A trie can be seen as
a tree-shaped deterministic finite automaton13 . Each finite language14 is generated by a
trie automaton, and each trie can be compressed into a deterministic acyclic finite state
automaton15 .
Though tries can be keyed by character strings, they need not be. The same algorithms can
be adapted to serve similar functions on ordered lists of any construct; e.g., permutations
on a list of digits or shapes. In particular, a bitwise trie is keyed on the individual bits
making up any fixed-length binary datum, such as an integer or memory address.

20.1 History and etymology

Tries were first described by René de la Briandais in 1959.[1][2]:336 The term trie was coined
two years later by Edward Fredkin16 , who pronounces it /ˈtriː/17 (as ”tree”), after the middle
syllable of retrieval.[3][4] However, other authors pronounce it /ˈtraɪ/18 (as ”try”), in an
attempt to distinguish it verbally from ”tree”.[3][4][5]

2 https://en.wikipedia.org/wiki/Computer_science
3 https://en.wikipedia.org/wiki/Search_tree
4 https://en.wikipedia.org/wiki/Tree_(data_structure)
5 https://en.wikipedia.org/wiki/Data_structure
6 https://en.wikipedia.org/wiki/Set_(abstract_data_type)
7 https://en.wikipedia.org/wiki/Associative_array
8 https://en.wikipedia.org/wiki/String_(computer_science)
9 https://en.wikipedia.org/wiki/Binary_search_tree
10 https://en.wikipedia.org/wiki/Prefix
11 https://en.wikipedia.org/wiki/Empty_string
12 https://en.wikipedia.org/wiki/Compact_prefix_tree
13 https://en.wikipedia.org/wiki/Deterministic_finite_automaton
14 https://en.wikipedia.org/wiki/Finite_language
15 https://en.wikipedia.org/wiki/Deterministic_acyclic_finite_state_automaton
16 https://en.wikipedia.org/wiki/Edward_Fredkin
17 https://en.wikipedia.org/wiki/Help:IPA/English
18 https://en.wikipedia.org/wiki/Help:IPA/English


20.2 Applications

20.2.1 As a replacement for other data structures

As discussed below, a trie has a number of advantages over binary search trees.[6]
A trie can also be used to replace a hash table19 , over which it has the following advantages:
• Looking up data in a trie is faster in the worst case, O(m) time (where m is the length
of a search string), compared to an imperfect hash table. An imperfect hash table can
have key collisions. A key collision is the hash function mapping of different keys to the
same position in a hash table. The worst-case lookup speed in an imperfect hash table is
O(N)20 time, but far more typically is O(1), with O(m) time spent evaluating the hash.
• There are no collisions of different keys in a trie.
• Buckets in a trie, which are analogous to hash table buckets that store key collisions, are
necessary only if a single key is associated with more than one value.
• There is no need to provide a hash function or to change hash functions as more keys are
added to a trie.
• A trie can provide an alphabetical ordering of the entries by key.
However, a trie also has some drawbacks compared to a hash table:
• Trie lookup can be slower than hash table lookup, especially if the data is directly accessed
on a hard disk drive or some other secondary storage device where the random-access
time is high compared to main memory.[7]
• Some keys, such as floating point numbers, can lead to long chains and prefixes that are
not particularly meaningful. Nevertheless, a bitwise trie can handle standard IEEE single
and double format floating point numbers.[citation needed]
• Some tries can require more space than a hash table, as memory may be allocated for
each character in the search string, rather than a single chunk of memory for the whole
entry, as in most hash tables.

20.2.2 Dictionary representation

A common application of a trie is storing a predictive text22 or autocomplete23 dictionary, such as found on a mobile telephone24. Such applications take advantage of a trie's ability to
quickly search for, insert, and delete entries; however, if storing dictionary words is all that
is required (i.e., storage of information auxiliary to each word is not required), a minimal
deterministic acyclic finite state automaton25 (DAFSA) would use less space than a trie.
This is because a DAFSA can compress identical branches from the trie which correspond
to the same suffixes (or parts) of different words being stored.

19 https://en.wikipedia.org/wiki/Hash_table
20 https://en.wikipedia.org/wiki/Hash_table#Chaining
22 https://en.wikipedia.org/wiki/Predictive_text
23 https://en.wikipedia.org/wiki/Autocomplete
24 https://en.wikipedia.org/wiki/Mobile_telephone
25 https://en.wikipedia.org/wiki/Deterministic_acyclic_finite_state_automaton


Tries are also well suited for implementing approximate matching algorithms,[8] including
those used in spell checking26 and hyphenation27[4] software.

20.2.3 Term indexing

A discrimination tree28 term index29 stores its information in a trie data structure.[9]

20.3 Algorithms

The trie is a tree of nodes which supports Find and Insert operations. Find returns the
value for a key string, and Insert inserts a string (the key) and a value into the trie. Both
Insert and Find run in O(m) time, where m is the length of the key.
A simple Node class can be used to represent nodes in the trie:

from typing import Any, Dict

class Node:
    def __init__(self) -> None:
        # Note that using a dictionary for children (as in this implementation)
        # would not allow lexicographic sorting mentioned in the next section
        # (Sorting), because an ordinary dictionary would not preserve the
        # order of the keys
        self.children: Dict[str, "Node"] = {}  # mapping from character ==> Node
        self.value: Any = None

Note that children is a dictionary mapping characters to a node's children; a "terminal" node is one which represents a complete string.
A trie's value can be looked up as follows:

def find(node: Node, key: str) -> Any:
    """Find value by key in node."""
    for char in key:
        if char in node.children:
            node = node.children[char]
        else:
            return None
    return node.value

Slight modifications of this routine can be used
• to check if there is any word in the trie that starts with a given prefix, and
• to return the deepest node corresponding to some prefix of a given string.
Insertion proceeds by walking the trie according to the string to be inserted, then appending
new nodes for the suffix of the string that is not contained in the trie:

def insert(node: Node, key: str, value: Any) -> None:
    """Insert key/value pair into node."""
    for char in key:
        if char not in node.children:
            node.children[char] = Node()
        node = node.children[char]
    node.value = value

26 https://en.wikipedia.org/wiki/Spell_checking
27 https://en.wikipedia.org/wiki/Hyphenation_algorithm
28 https://en.wikipedia.org/w/index.php?title=Discrimination_tree&action=edit&redlink=1
29 https://en.wikipedia.org/wiki/Term_indexing
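As a quick end-to-end check, the following self-contained sketch repeats the Node, insert, and find definitions above and exercises them on a few keys (the sample keys and values are arbitrary):

```python
from typing import Any, Dict

class Node:
    def __init__(self) -> None:
        self.children: Dict[str, "Node"] = {}  # mapping from character to child node
        self.value: Any = None

def insert(node: Node, key: str, value: Any) -> None:
    """Insert key/value pair into node."""
    for char in key:
        if char not in node.children:
            node.children[char] = Node()
        node = node.children[char]
    node.value = value

def find(node: Node, key: str) -> Any:
    """Find value by key in node; None if the key is absent."""
    for char in key:
        if char in node.children:
            node = node.children[char]
        else:
            return None
    return node.value

root = Node()
for i, word in enumerate(["to", "tea", "ted", "ten", "in", "inn"]):
    insert(root, word, i)

print(find(root, "tea"))  # 1: the value stored with "tea"
print(find(root, "te"))   # None: "te" is only a prefix, not a stored key
```

Note that find distinguishes a stored key from a mere prefix by the node's value being non-None; both loops take O(m) steps for a key of length m, as stated above.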

20.3.1 Sorting

Lexicographic sorting of a set of keys can be accomplished by building a trie from them,
and traversing it in pre-order30 , printing only the leaves' values. This algorithm is a form
of radix sort31 .[10]
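To illustrate, the sketch below (trie_sort is our name, not from the article) builds a trie over a word list and emits the keys via pre-order traversal; since a plain dict does not keep children sorted, the traversal visits them in explicit character order:

```python
from typing import Dict, List

class Node:
    def __init__(self) -> None:
        self.children: Dict[str, "Node"] = {}
        self.terminal = False  # True if this node ends a stored key

def trie_sort(words: List[str]) -> List[str]:
    """Lexicographically sort words by trie construction + pre-order walk."""
    root = Node()
    for word in words:
        node = root
        for char in word:
            node = node.children.setdefault(char, Node())
        node.terminal = True

    out: List[str] = []

    def walk(node: Node, prefix: str) -> None:
        if node.terminal:
            out.append(prefix)
        for char in sorted(node.children):  # visit children in character order
            walk(node.children[char], prefix + char)

    walk(root, "")
    return out

print(trie_sort(["ted", "tea", "to", "inn", "in", "ten"]))
# ['in', 'inn', 'tea', 'ted', 'ten', 'to']
```

Each character of each word is examined a constant number of times, which is what makes this a form of radix sort.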
A trie forms the fundamental data structure of Burstsort32, which (in 2007) was the fastest known string sorting algorithm.[11] However, faster string sorting algorithms have since been developed.[12]

20.3.2 Full-text search

A special kind of trie, called a suffix tree33 , can be used to index all suffixes in a text in
order to carry out fast full text searches.
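As a rough illustration of the idea (a real suffix tree compresses unary paths; this sketch merely inserts every suffix of the text into a plain nested-dict trie, and the function names are ours), a pattern occurs in the text exactly when it is a prefix of some suffix, i.e. a path from the root:

```python
def build_suffix_trie(text: str) -> dict:
    """Nested-dict trie containing every suffix of text."""
    root: dict = {}
    for i in range(len(text)):
        node = root
        for char in text[i:]:
            node = node.setdefault(char, {})
    return root

def contains(root: dict, pattern: str) -> bool:
    """Substring search: follow the pattern character by character."""
    node = root
    for char in pattern:
        if char not in node:
            return False
        node = node[char]
    return True

trie = build_suffix_trie("banana")
print(contains(trie, "nan"))  # True
print(contains(trie, "nab"))  # False
```

After the one-time construction, each query costs O(m) in the pattern length, independent of the text length; the uncompressed construction here is quadratic, which is what suffix-tree algorithms improve on.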

30 https://en.wikipedia.org/wiki/Tree_traversal#Pre-order
31 https://en.wikipedia.org/wiki/Radix_sort
32 https://en.wikipedia.org/wiki/Burstsort
33 https://en.wikipedia.org/wiki/Suffix_tree


20.4 Implementation strategies

Figure 52 A trie implemented as a left-child right-sibling binary tree: vertical arrows are child pointers, dashed horizontal arrows are next pointers. The set of strings stored in this trie is {baby, bad, bank, box, dad, dance}. The lists are sorted to allow traversal in lexicographic order.

There are several ways to represent tries, corresponding to different trade-offs between
memory use and speed of the operations. The basic form is that of a linked set of nodes,
where each node contains an array of child pointers, one for each symbol in the alphabet34

34 https://en.wikipedia.org/wiki/Alphabet_(computer_science)


(so for the English alphabet35 , one would store 26 child pointers and for the alphabet of
bytes, 256 pointers). This is simple but wasteful in terms of memory: using the alphabet of
bytes (size 256) and four-byte pointers, each node requires a kilobyte of storage, and when
there is little overlap in the strings' prefixes, the number of required nodes is roughly the
combined length of the stored strings.[2]:341 Put another way, the nodes near the bottom
of the tree tend to have few children and there are many of them, so the structure wastes
space storing null pointers.[13]
The storage problem can be alleviated by an implementation technique called alphabet
reduction, whereby the original strings are reinterpreted as longer strings over a smaller
alphabet. E.g., a string of n bytes can alternatively be regarded as a string of 2n four-bit
units36 and stored in a trie with sixteen pointers per node. Lookups need to visit twice
as many nodes in the worst case, but the storage requirements go down by a factor of
eight.[2]:347–352
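For instance, a key of n bytes can be re-encoded as 2n four-bit units before insertion (a minimal sketch; the helper name to_nibbles is ours):

```python
from typing import List

def to_nibbles(key: bytes) -> List[int]:
    """Reinterpret a string of n bytes as a string of 2n 4-bit units,
    so that trie nodes need only 16 child pointers instead of 256."""
    out: List[int] = []
    for b in key:
        out.append(b >> 4)    # high nibble first
        out.append(b & 0x0F)  # then low nibble
    return out

print(to_nibbles(b"Hi"))  # [4, 8, 6, 9]  ('H' = 0x48, 'i' = 0x69)
```

A lookup then walks twice as many (but sixteen-way instead of 256-way) nodes, matching the space/time trade-off described above.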
An alternative implementation represents a node as a triple (symbol, child, next) and links
the children of a node together as a singly linked list37 : child points to the node's first child,
next to the parent node's next child.[13][14] The set of children can also be represented as
a binary search tree38 ; one instance of this idea is the ternary search tree39 developed by
Bentley40 and Sedgewick41 .[2]:353
Another alternative that avoids the use of an array of 256 pointers (ASCII), as suggested before, is to store the alphabet array as a bitmap of 256 bits representing the ASCII alphabet, dramatically reducing the size of the nodes.[15]
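That bitmap representation can be sketched as follows (class and method names are ours, and bin(...).count("1") stands in for a hardware popcount): the number of set bits below a byte's position gives the child's index in a compact list:

```python
from typing import Any, List, Optional

class BitmapNode:
    """Trie node storing a 256-bit bitmap of present children plus a
    compact child list, instead of a 256-entry pointer array."""
    def __init__(self) -> None:
        self.bitmap = 0                    # bit i set <=> child for byte value i exists
        self.children: List[Any] = []      # children kept in increasing byte-value order
        self.value: Any = None

    def get(self, byte: int) -> Optional[Any]:
        if not (self.bitmap >> byte) & 1:
            return None
        # rank: count of set bits below `byte` is the child's index
        index = bin(self.bitmap & ((1 << byte) - 1)).count("1")
        return self.children[index]

    def add(self, byte: int, child: Any) -> None:
        index = bin(self.bitmap & ((1 << byte) - 1)).count("1")
        self.children.insert(index, child)
        self.bitmap |= 1 << byte

node = BitmapNode()
node.add(ord("t"), "t-child")
node.add(ord("i"), "i-child")
print(node.get(ord("i")))  # i-child
```

Each node thus spends 32 bytes on the bitmap plus one slot per child actually present, rather than 256 pointer slots.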

20.4.1 Bitwise tries

35 https://en.wikipedia.org/wiki/English_alphabet
36 https://en.wikipedia.org/wiki/Nibble
37 https://en.wikipedia.org/wiki/Singly_linked_list
38 https://en.wikipedia.org/wiki/Binary_search_tree
39 https://en.wikipedia.org/wiki/Ternary_search_tree
40 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
41 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)



Bitwise tries are much the same as a normal character-based trie except that individual
bits are used to traverse what effectively becomes a form of binary tree. Generally, implementations use a special CPU instruction to very quickly find the first set bit54 in a fixed
length key (e.g., GCC's __builtin_clz() intrinsic). This value is then used to index a
32- or 64-entry table which points to the first item in the bitwise trie with that number of
leading zero bits. The search then proceeds by testing each subsequent bit in the key and
choosing child[0] or child[1] appropriately until the item is found.
Although this process might sound slow, it is very cache-local and highly parallelizable
due to the lack of register dependencies and therefore in fact has excellent performance
on modern out-of-order execution55 CPUs. A red-black tree56 for example performs much
better on paper, but is highly cache-unfriendly and causes multiple pipeline and TLB57 stalls
on modern CPUs which makes that algorithm bound by memory latency rather than CPU
speed. In comparison, a bitwise trie rarely accesses memory, and when it does, it does so only
to read, thus avoiding SMP cache coherency overhead. Hence, it is increasingly becoming
the algorithm of choice for code that performs many rapid insertions and deletions, such as
memory allocators (e.g., recent versions of the famous Doug Lea's allocator (dlmalloc) and
its descendants58 ). The worst case of steps for lookup is the same as bits used to index bins
in the tree.[16]
Alternatively, the term ”bitwise trie” can more generally refer to a binary tree structure
holding integer values, sorting them by their binary prefix. An example is the x-fast trie59 .
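A minimal sketch of the core child[0]/child[1] traversal (bit_insert and bit_find are our names; it walks fixed-width integer keys most-significant-bit first and omits the leading-zero-count table described above, which Python has no direct intrinsic for):

```python
from typing import Any, List, Optional

class BitNode:
    def __init__(self) -> None:
        self.child: List[Optional["BitNode"]] = [None, None]  # child[0] / child[1]
        self.value: Any = None

def bit_insert(root: BitNode, key: int, value: Any, width: int = 32) -> None:
    """Walk the key's bits from most significant to least, creating nodes."""
    node = root
    for i in range(width - 1, -1, -1):
        bit = (key >> i) & 1
        if node.child[bit] is None:
            node.child[bit] = BitNode()
        node = node.child[bit]
    node.value = value

def bit_find(root: BitNode, key: int, width: int = 32) -> Any:
    """Test each bit of the key, choosing child[0] or child[1]."""
    node = root
    for i in range(width - 1, -1, -1):
        node = node.child[(key >> i) & 1]
        if node is None:
            return None
    return node.value

root = BitNode()
bit_insert(root, 42, "answer")
print(bit_find(root, 42))  # answer
print(bit_find(root, 43))  # None
```

As the text notes, the worst-case number of steps per lookup equals the number of key bits (here, width).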

54 https://en.wikipedia.org/wiki/Find_first_set
55 https://en.wikipedia.org/wiki/Out-of-order_execution
56 https://en.wikipedia.org/wiki/Red-black_tree
57 https://en.wikipedia.org/wiki/Translation_lookaside_buffer
58 https://en.wikipedia.org/wiki/Malloc#dlmalloc
59 https://en.wikipedia.org/wiki/X-fast_trie


20.4.2 Compressing tries

Compressing the trie and merging the common branches can sometimes yield large performance gains. This works best under the following conditions:
• The trie is (mostly) static, so that no key insertions or deletions are required (e.g., after bulk creation of the trie).
• Only lookups are needed.
• The trie nodes are not keyed by node-specific data, or the nodes' data are common.[17]
• The total set of stored keys is very sparse within their representation space (so compression pays off).
For example, it may be used to represent sparse bitsets60 ; i.e., subsets of a much larger,
fixed enumerable set. In such a case, the trie is keyed by the bit element position within the
full set. The key is created from the string of bits needed to encode the integral position of
each element. Such tries have a very degenerate form with many missing branches. After
detecting the repetition of common patterns or filling the unused gaps, the unique leaf nodes
(bit strings) can be stored and compressed easily, reducing the overall size of the trie.
Such compression is also used in the implementation of the various fast lookup tables for
retrieving Unicode61 character properties. These could include case-mapping tables (e.g.,
for the Greek62 letter pi63 , from Π to π), or lookup tables normalizing the combination of
base and combining characters (like the a-umlaut64 in German65 , ä, or the dalet66 -patah67 -
dagesh68 -ole69 in Biblical Hebrew70 , ‫ ַ֫)ּד‬. For such applications, the representation is similar
to transforming a very large, unidimensional, sparse table (e.g., Unicode code points) into
a multidimensional matrix of their combinations, and then using the coordinates in the
hyper-matrix as the string key of an uncompressed trie to represent the resulting character.
The compression will then consist of detecting and merging the common columns within the
hyper-matrix to compress the last dimension in the key. For example, to avoid storing the
full, multibyte Unicode code point of each element forming a matrix column, the groupings
of similar code points can be exploited. Each dimension of the hyper-matrix stores the
start position of the next dimension, so that only the offset (typically a single byte) need be
stored. The resulting vector is itself compressible when it is also sparse, so each dimension
(associated to a layer level in the trie) can be compressed separately.
Some implementations do support such data compression within dynamic sparse tries and
allow insertions and deletions in compressed tries. However, this usually has a significant
cost when compressed segments need to be split or merged. Some tradeoff has to be made

60 https://en.wikipedia.org/wiki/Bitset
61 https://en.wikipedia.org/wiki/Unicode
62 https://en.wikipedia.org/wiki/Greek_language
63 https://en.wikipedia.org/wiki/Pi_(letter)
64 https://en.wikipedia.org/wiki/Umlaut_(diacritic)
65 https://en.wikipedia.org/wiki/German_language
66 https://en.wikipedia.org/wiki/Dalet#Hebrew_Dalet
67 https://en.wikipedia.org/wiki/Patah
68 https://en.wikipedia.org/wiki/Dagesh
69 https://en.wikipedia.org/wiki/Ole_(cantillation)
70 https://en.wikipedia.org/wiki/Biblical_Hebrew


between data compression and update speed. A typical strategy is to limit the range of global lookups for comparing the common branches in the sparse trie.[citation needed]
The result of such compression may look similar to trying to transform the trie into a
directed acyclic graph72 (DAG), because the reverse transform from a DAG to a trie is
obvious and always possible. However, the shape of the DAG is determined by the form of
the key chosen to index the nodes, in turn constraining the compression possible.
Another compression strategy is to ”unravel” the data structure into a single byte array.[18]
This approach eliminates the need for node pointers, substantially reducing the memory
requirements. This in turn permits memory mapping and the use of virtual memory to
efficiently load the data from disk.
One more approach is to ”pack” the trie.[4] Liang describes a space-efficient implementation
of a sparse packed trie applied to automatic hyphenation73 , in which the descendants of
each node may be interleaved in memory.

20.4.3 External memory tries

Several trie variants are suitable for maintaining sets of strings in external memory74, including suffix trees. A combination of trie and B-tree75, called the B-trie, has also been suggested for this task; compared to suffix trees, B-tries are limited in the supported operations but are also more compact, while performing update operations faster.[19]

20.5 See also


• Suffix tree76
• Radix tree77
• Directed acyclic word graph78 (aka DAWG)
• Acyclic deterministic finite automata79
• Hash trie80
• Deterministic finite automata81
• Judy array82
• Search algorithm83
• Extendible hashing84

72 https://en.wikipedia.org/wiki/Directed_acyclic_graph
73 https://en.wikipedia.org/wiki/Hyphenation_algorithm
74 https://en.wikipedia.org/wiki/Auxiliary_memory
75 https://en.wikipedia.org/wiki/B-tree
76 https://en.wikipedia.org/wiki/Suffix_tree
77 https://en.wikipedia.org/wiki/Radix_tree
78 https://en.wikipedia.org/wiki/Deterministic_acyclic_finite_state_automaton
79 https://en.wikipedia.org/wiki/Acyclic_deterministic_finite_automata
80 https://en.wikipedia.org/wiki/Hash_trie
81 https://en.wikipedia.org/wiki/Deterministic_finite_automata
82 https://en.wikipedia.org/wiki/Judy_array
83 https://en.wikipedia.org/wiki/Search_algorithm
84 https://en.wikipedia.org/wiki/Extendible_hashing


• Hash array mapped trie85


• Prefix hash tree86
• Burstsort87
• Luleå algorithm88
• Huffman coding89
• Ctrie90
• HAT-trie91

20.6 References
1. de la Briandais, René (1959). File searching using variable length keys. Proc. Western J. Computer Conf. pp. 295–298. Cited by Brass.
2. Brass, Peter (2008). Advanced Data Structures. Cambridge University Press.
3. Black, Paul E. (2009-11-16). "trie"92. Dictionary of Algorithms and Data Structures. National Institute of Standards and Technology93. Archived94 from the original on 2011-04-29.
4. Liang, Franklin Mark (1983). Word Hy-phen-a-tion By Com-put-er95 (PDF) (Doctor of Philosophy thesis). Stanford University. Archived96 (PDF) from the original on 2005-11-11. Retrieved 2010-03-28.
5. Knuth, Donald97 (1997). "6.3: Digital Searching". The Art of Computer Programming Volume 3: Sorting and Searching (2nd ed.). Addison-Wesley. p. 492. ISBN98 0-201-89685-099.
6. Bentley, Jon; Sedgewick, Robert100 (1998-04-01). "Ternary Search Trees"101. Dr. Dobb's Journal102. Dr. Dobb's. Archived from the original103 on 2008-06-23.
7. Fredkin, Edward104 (1960). "Trie Memory". Communications of the ACM. 3 (9): 490–499. doi105:10.1145/367390.367400106.

85 https://en.wikipedia.org/wiki/Hash_array_mapped_trie
86 https://en.wikipedia.org/wiki/Prefix_hash_tree
87 https://en.wikipedia.org/wiki/Burstsort
88 https://en.wikipedia.org/wiki/Lule%C3%A5_algorithm
89 https://en.wikipedia.org/wiki/Huffman_coding
90 https://en.wikipedia.org/wiki/Ctrie
91 https://en.wikipedia.org/wiki/HAT-trie
92 https://xlinux.nist.gov/dads/HTML/trie.html
93 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
94 https://web.archive.org/web/20110429080033/http://xlinux.nist.gov/dads/HTML/trie.html
95 http://www.tug.org/docs/liang/liang-thesis.pdf
96 https://web.archive.org/web/20051111105124/http://www.tug.org/docs/liang/liang-thesis.pdf
97 https://en.wikipedia.org/wiki/Donald_Knuth
98 https://en.wikipedia.org/wiki/ISBN_(identifier)
99 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
100 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
101 https://web.archive.org/web/20080623071352/http://www.ddj.com/windows/184410528
102 https://en.wikipedia.org/wiki/Dr._Dobb%27s_Journal
103 http://www.ddj.com/windows/184410528
104 https://en.wikipedia.org/wiki/Edward_Fredkin
105 https://en.wikipedia.org/wiki/Doi_(identifier)
106 https://doi.org/10.1145%2F367390.367400


8. A, A V.; C, M J. (J 1975). ”E S


M: A A  B S”107 (PDF). Communications of the
ACM108 . 18 (6): 333–340. doi109 :10.1145/360825.360855110 .
9. John W. Wheeler; Guarionex Jordan. ”An Empirical Study of Term Indexing in the
Darwin Implementation of the Model Evolution Calculus”111 . 2004. p. 5.
10. K, R (2018). ”T A R T (R #14-708-887)”112
(PDF). University of Zurich: Department of Informatics, Research Publications.
11. ”C-E S S U C”113 (PDF). R
2008-11-15.
12. "Engineering Radix Sort for Strings". International Symposium on String Processing and Information Retrieval. Lecture Notes in Computer Science. 2008: 3–14. doi114:10.1007/978-3-540-89097-3_3115.


13. A, L. ”T”118 . R 18 F 2014.
14. S, S. ”T”119 . Data Structures, Algorithms, & Applications in Java.
University of Florida. Retrieved 18 February 2014.
15. B, X (2014). ”A H-E M-C
S  GPU-A I D S”. Proceedings
of the 7th International Conference on Security of Information and Networks - SIN
'14. Glasgow, Scotland, UK: ACM. pp. 302:302–302:309. arXiv120 :1704.02272121 .
doi122 :10.1145/2659651.2659723123 . ISBN124 978-1-4503-3033-6125 .
16. L, D. ”A M A”126 . R 1 D 2019. HTTP
for Source Code127 . Binary Trie is described in Version 2.8.6, Section ”Overlaid data
structures”, Structure ”malloc_tree_chunk”.
17. J D; S M; B W. W; R E. W (2000).
”I C  M A F-S A”128 .
Computational Linguistics. Association for Computational Linguistics. 26: 3–16.

107 https://pdfs.semanticscholar.org/3547/ac839d02f6efe3f6f76a8289738a22528442.pdf
108 https://en.wikipedia.org/wiki/Communications_of_the_ACM
109 https://en.wikipedia.org/wiki/Doi_(identifier)
110 https://doi.org/10.1145%2F360825.360855
111 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.83.1789&rep=rep1&type=pdf
112 https://www.ifi.uzh.ch/dam/jcr:27d15f69-2a44-40f9-8b41-6d11b5926c67/ReportKallisMScBasis.pdf
113 http://www.cs.mu.oz.au/~rsinha/papers/SinhaRingZobel-2006.pdf
114 https://en.wikipedia.org/wiki/Doi_(identifier)
115 https://doi.org/10.1007%2F978-3-540-89097-3_3
116 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
117 https://en.wikipedia.org/wiki/Help:CS1_errors#chapter_ignored
118 http://www.allisons.org/ll/AlgDS/Tree/Trie/
119 https://www.cise.ufl.edu/~sahni/dsaaj/enrich/c16/tries.htm
120 https://en.wikipedia.org/wiki/ArXiv_(identifier)
121 http://arxiv.org/abs/1704.02272
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1145%2F2659651.2659723
124 https://en.wikipedia.org/wiki/ISBN_(identifier)
125 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4503-3033-6
126 http://gee.cs.oswego.edu/dl/html/malloc.html
127 http://gee.cs.oswego.edu/pub/misc/malloc.c
128 http://www.mitpressjournals.org/doi/abs/10.1162/089120100561601


arXiv129:cs/0007009130. doi131:10.1162/089120100561601132. Archived from the original133 on 2006-03-13. Retrieved 2009-05-28. This paper presents a method for direct building of minimal acyclic finite states automaton which recognizes a given finite list of words in lexicographical order. Our approach is to construct a minimal automaton in a single phase by adding new strings one by one and minimizing the resulting automaton on-the-fly.
18. U G; E J; S L (2009). ”T 
:       ,     ,
”134 (PDF). ACL Workshops: Proceedings of the Workshop on Software Engineer-
ing, Testing, and Quality Assurance for Natural Language Processing. Association for
Computational Linguistics. pp. 31–39. We present Tightly Packed Tries (TPTs), a
compact implementation of read-only, compressed trie structures with fast on-demand
paging and short load times. We demonstrate the benefits of TPTs for storing n-gram
back-off language models and phrase tables for statistical machine translation135 . En-
coded as TPTs, these databases require less space than flat text file representations of
the same data compressed with the gzip utility. At the same time, they can be mapped
into memory quickly and be searched directly in time linear in the length of the key,
without the need to decompress the entire file. The overhead for local decompression
during search is marginal.
19. A, N; Z, J (2008). ”B-  D- S
M”136 (PDF). VLDB Journal: 1–26. ISSN137 1066-8888138 .

20.7 External links

Wikimedia Commons has media related to Trie139 .

Look up trie140 in Wiktionary, the free dictionary.

• NIST's Dictionary of Algorithms and Data Structures: Trie141


129 https://en.wikipedia.org/wiki/ArXiv_(identifier)
130 http://arxiv.org/abs/cs/0007009
131 https://en.wikipedia.org/wiki/Doi_(identifier)
132 https://doi.org/10.1162%2F089120100561601
133 http://www.pg.gda.pl/~jandac/daciuk98.ps.gz
134 http://www.aclweb.org/anthology/W/W09/W09-1505.pdf
135 https://en.wikipedia.org/wiki/Statistical_machine_translation
136 http://people.eng.unimelb.edu.au/jzobel/fulltext/vldbj09.pdf
137 https://en.wikipedia.org/wiki/ISSN_(identifier)
138 http://www.worldcat.org/issn/1066-8888
139 https://commons.wikimedia.org/wiki/Category:Trie
140 https://en.wiktionary.org/wiki/Special:Search/trie
141 https://xlinux.nist.gov/dads/HTML/trie.html



21 Hash table

Not to be confused with Hash list1 or Hash tree2 . ”Rehash” redirects here. For the South
Park episode, see Rehash (South Park)3 . For the IRC command, see List of Internet Relay
Chat commands § REHASH4 .
Associates data values with key values - a lookup table

Hash table
Type: Unordered associative array
Invented: 1953

Time complexity in big O notation:
Algorithm   Average   Worst case
Space       O(n)[1]   O(n)
Search      O(1)      O(n)
Insert      O(1)      O(n)
Delete      O(1)      O(n)

1 https://en.wikipedia.org/wiki/Hash_list
2 https://en.wikipedia.org/wiki/Hash_tree_(disambiguation)
3 https://en.wikipedia.org/wiki/Rehash_(South_Park)
4 https://en.wikipedia.org/wiki/List_of_Internet_Relay_Chat_commands#REHASH


Figure 54 A small phone book as a hash table

In computing5 , a hash table (hash map) is a data structure6 that implements an as-
sociative array7 abstract data type8 , a structure that can map keys9 to values10 . A hash
table uses a hash function11 to compute an index, also called a hash code, into an array
of buckets or slots, from which the desired value can be found. During lookup, the key is
hashed and the resulting hash indicates where the corresponding value is stored.
Ideally, the hash function will assign each key to a unique bucket, but most hash table
designs employ an imperfect hash function, which might cause hash collisions12 where the
hash function generates the same index for more than one key. Such collisions are always
accommodated in some way.
In a well-dimensioned hash table, the average cost (number of instructions13 ) for each lookup
is independent of the number of elements stored in the table. Many hash table designs

5 https://en.wikipedia.org/wiki/Computing
6 https://en.wikipedia.org/wiki/Data_structure
7 https://en.wikipedia.org/wiki/Associative_array
8 https://en.wikipedia.org/wiki/Abstract_data_type
9 https://en.wikipedia.org/wiki/Unique_key
10 https://en.wikipedia.org/wiki/Value_(computer_science)
11 https://en.wikipedia.org/wiki/Hash_function
12 https://en.wikipedia.org/wiki/Collision_(computer_science)
13 https://en.wikipedia.org/wiki/Instruction_(computer_science)


also allow arbitrary insertions and deletions of key-value pairs, at (amortized14[2] ) constant
average cost per operation.[3][4]
In many situations, hash tables turn out to be on average more efficient than search trees15
or any other table16 lookup structure. For this reason, they are widely used in many kinds
of computer software17 , particularly for associative arrays18 , database indexing19 , caches20 ,
and sets21 .

21.1 Hashing

Main article: Hash function22
The idea of hashing is to distribute the entries (key/value pairs) across an array of buckets.
Given a key, the algorithm computes an index that suggests where the entry can be found:
index = f(key, array_size)

Often this is done in two steps:


hash = hashfunc(key)
index = hash % array_size

In this method, the hash is independent of the array size, and it is then reduced to an index
(a number between 0 and array_size − 1) using the modulo operator23 (%).
In the case that the array size is a power of two24 , the remainder operation is reduced to
masking25 , which improves speed, but can increase problems with a poor hash function.[5]
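The equivalence between the modulo reduction and the bit mask mentioned above can be sketched as follows (an illustrative Python snippet, not part of the original text; the function names are hypothetical):

```python
# Sketch: for a power-of-two table size, reducing a non-negative hash
# by modulo is equivalent to masking off the low bits.
def index_modulo(h: int, size: int) -> int:
    return h % size

def index_mask(h: int, size: int) -> int:
    # Valid only when size is a power of two and h is non-negative.
    assert size > 0 and size & (size - 1) == 0
    return h & (size - 1)

for h in [0, 1, 12345, 2**31 - 1]:
    assert index_modulo(h, 1024) == index_mask(h, 1024)
```

The mask keeps only the low bits of the hash, which is why a hash function with poor low-bit distribution performs badly under this scheme.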

21.1.1 Choosing a hash function

A basic requirement is that the function should provide a uniform distribution26 of hash
values. A non-uniform distribution increases the number of collisions and the cost of re-
solving them. Uniformity is sometimes difficult to ensure by design, but may be evaluated
empirically using statistical tests, e.g., a Pearson's chi-squared test27 for discrete uniform
distributions.[6][7]

14 https://en.wikipedia.org/wiki/Amortized_analysis
15 https://en.wikipedia.org/wiki/Search_tree
16 https://en.wikipedia.org/wiki/Table_(computing)
17 https://en.wikipedia.org/wiki/Software
18 https://en.wikipedia.org/wiki/Associative_array
19 https://en.wikipedia.org/wiki/Database_index
20 https://en.wikipedia.org/wiki/Cache_(computing)
21 https://en.wikipedia.org/wiki/Set_(abstract_data_type)
22 https://en.wikipedia.org/wiki/Hash_function
23 https://en.wikipedia.org/wiki/Modulo_operation
24 https://en.wikipedia.org/wiki/Power_of_two
25 https://en.wikipedia.org/wiki/Mask_(computing)
26 https://en.wikipedia.org/wiki/Uniform_distribution_(discrete)
27 https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test#Discrete_uniform_distribution


The distribution needs to be uniform only for table sizes that occur in the application. In
particular, if one uses dynamic resizing with exact doubling and halving of the table size,
then the hash function needs to be uniform only when the size is a power of two28 . Here the
index can be computed as some range of bits of the hash function. On the other hand, some
hashing algorithms prefer to have the size be a prime number29 .[8] The modulus operation
may provide some additional mixing; this is especially useful with a poor hash function.
For open addressing30 schemes, the hash function should also avoid clustering, the mapping
of two or more keys to consecutive slots. Such clustering may cause the lookup cost to sky-
rocket, even if the load factor is low and collisions are infrequent. The popular multiplicative
hash[3] is claimed to have particularly poor clustering behavior.[8]
Cryptographic hash functions31 are believed to provide good hash functions for any table
size, either by modulo32 reduction or by bit masking33 .[citation needed] They may also be
appropriate if there is a risk of malicious users trying to sabotage35 a network service by
submitting requests designed to generate a large number of collisions in the server's hash
tables. However, the risk of sabotage can also be avoided by cheaper methods (such as
applying a secret salt36 to the data, or using a universal hash function37 ). A drawback of
cryptographic hashing functions is that they are often slower to compute, which means that
in cases where uniformity for any size is not necessary, a non-cryptographic hashing
function might be preferable.[citation needed]

21.1.2 Perfect hash function

If all keys are known ahead of time, a perfect hash function39 can be used to create a perfect
hash table that has no collisions. If minimal perfect hashing40 is used, every location in the
hash table can be used as well.
Perfect hashing allows for constant time41 lookups in all cases. This is in contrast to most
chaining and open addressing methods, where the time for lookup is low on average, but
may be very large, O(n), for instance when all the keys hash to a few values.

21.2 Key statistics

A critical statistic for a hash table is the load factor, defined as

28 https://en.wikipedia.org/wiki/Power_of_two
29 https://en.wikipedia.org/wiki/Prime_number
30 https://en.wikipedia.org/wiki/Open_addressing
31 https://en.wikipedia.org/wiki/Cryptographic_hash_function
32 https://en.wikipedia.org/wiki/Modulo_operation
33 https://en.wikipedia.org/wiki/Mask_(computing)
35 https://en.wikipedia.org/wiki/Denial_of_service_attack
36 https://en.wikipedia.org/wiki/Salt_(cryptography)
37 https://en.wikipedia.org/wiki/Universal_hash_function
39 https://en.wikipedia.org/wiki/Perfect_hash_function
40 https://en.wikipedia.org/wiki/Perfect_hash_function#Minimal_perfect_hash_function
41 https://en.wikipedia.org/wiki/Constant_time


load factor = n/k,
where
• n is the number of entries occupied in the hash table.
• k is the number of buckets.
As the load factor grows larger, the hash table becomes slower, and it may even fail to work
(depending on the method used). The expected constant time42 property of a hash table
assumes that the load factor be kept below some bound. For a fixed number of buckets, the
time for a lookup grows with the number of entries, and therefore the desired constant time
is not achieved. In some implementations, the solution is to automatically grow (usually,
double) the size of the table when the load factor bound is reached, forcing all entries to
be re-hashed. As a real-world example, the default load factor for a HashMap in Java 10 is
0.75, which ”offers a good trade-off between time and space costs.”[9]
After the load factor, one can examine the variance of the number of entries per bucket.
For example, suppose two tables both have 1,000 entries and 1,000 buckets; one has exactly
one entry in each bucket, while the other has all entries in the same bucket. Clearly, the
hashing is not working in the second one.
A low load factor is not especially beneficial. As the load factor approaches 0, the proportion
of unused areas in the hash table increases, but there is not necessarily any reduction in
search cost. This results in wasted memory.

21.3 Collision resolution

Hash collisions43 are practically unavoidable when hashing a random subset of a large set
of possible keys. For example, if 2,450 keys are hashed into a million buckets, even with
a perfectly uniform random distribution, according to the birthday problem44 there is ap-
proximately a 95% chance of at least two of the keys being hashed to the same slot.
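The quoted figure can be checked numerically. A minimal sketch, assuming a perfectly uniform random hash into m buckets, so that the probability of no collision among n keys is ∏(1 − i/m) for i = 0 … n−1:

```python
import math

# Sketch verifying the birthday-problem figure quoted above: probability
# that 2,450 keys hashed uniformly into 1,000,000 buckets produce at
# least one collision. Logarithms are summed for numerical stability.
def collision_probability(n_keys: int, n_buckets: int) -> float:
    log_p_no_collision = sum(math.log1p(-i / n_buckets) for i in range(n_keys))
    return 1.0 - math.exp(log_p_no_collision)

p = collision_probability(2450, 1_000_000)
print(f"{p:.3f}")  # roughly 0.95
```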
Therefore, almost all hash table implementations have some collision resolution strategy to
handle such events. Some common strategies are described below. All these methods require
that the keys (or pointers to them) be stored in the table, together with the associated
values.

42 https://en.wikipedia.org/wiki/Constant_time
43 https://en.wikipedia.org/wiki/Collision_(computer_science)
44 https://en.wikipedia.org/wiki/Birthday_problem


21.3.1 Separate chaining

Figure 55 Hash collision resolved by separate chaining.

In the method known as separate chaining, each bucket is independent, and has some sort
of list45 of entries with the same index. The time for hash table operations is the time to
find the bucket (which is constant) plus the time for the list operation.
In a good hash table, each bucket has zero or one entries, and sometimes two or three, but
rarely more than that. Therefore, structures that are efficient in time and space for these
cases are preferred. Structures that are efficient for a fairly large number of entries per
bucket are not needed or desirable. If these cases happen often, the hashing function needs
to be fixed.[10]
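The separate-chaining scheme described above can be sketched as follows (a minimal illustrative Python implementation, not a reference one; the class and method names are hypothetical, and plain lists stand in for the per-bucket structure):

```python
# Minimal separate-chaining hash table sketch. Each bucket is a list of
# (key, value) pairs whose keys hash to the same index.
class ChainedHashTable:
    def __init__(self, n_buckets: int = 8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:               # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def lookup(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable()
t.insert("John Smith", "521-1234")
t.insert("Sandra Dee", "521-9655")
print(t.lookup("John Smith"))  # prints 521-1234
```

The cost of an operation is the constant bucket lookup plus a scan of the chain, matching the analysis in the following paragraphs.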

Separate chaining with linked lists

Chained hash tables with linked lists46 are popular because they require only basic data
structures with simple algorithms, and can use simple hash functions that are unsuitable
for other methods.[citation needed]

45 https://en.wikipedia.org/wiki/List_(abstract_data_type)
46 https://en.wikipedia.org/wiki/Linked_list


The cost of a table operation is that of scanning the entries of the selected bucket for the
desired key. If the distribution of keys is sufficiently uniform48 , the average cost of a lookup
depends only on the average number of keys per bucket—that is, it is roughly proportional
to the load factor.
For this reason, chained hash tables remain effective even when the number of table entries
n is much higher than the number of slots. For example, a chained hash table with 1000
slots and 10,000 stored keys (load factor 10) is five to ten times slower than a 10,000-slot
table (load factor 1); but still 1000 times faster than a plain sequential list.
For separate-chaining, the worst-case scenario is when all entries are inserted into the same
bucket, in which case the hash table is ineffective and the cost is that of searching the bucket
data structure. If the latter is a linear list, the lookup procedure may have to scan all its
entries, so the worst-case cost is proportional to the number n of entries in the table.
The bucket chains are often searched sequentially using the order the entries were added
to the bucket. If the load factor is large and some keys are more likely to come up than
others, then rearranging the chain with a move-to-front heuristic49 may be effective. More
sophisticated data structures, such as balanced search trees, are worth considering only if
the load factor is large (about 10 or more), or if the hash distribution is likely to be very
non-uniform, or if one must guarantee good performance even in a worst-case scenario.
However, using a larger table and/or a better hash function may be even more effective in
those cases.[citation needed]
Chained hash tables also inherit the disadvantages of linked lists. When storing small keys
and values, the space overhead of the next pointer in each entry record can be significant.
An additional disadvantage is that traversing a linked list has poor cache performance51 ,
making the processor cache ineffective.

48 https://en.wikipedia.org/wiki/SUHA
49 https://en.wikipedia.org/wiki/Self-organizing_list#Move_to_Front_Method_(MTF)
51 https://en.wikipedia.org/wiki/Locality_of_reference


Separate chaining with list head cells

Figure 56 Hash collision by separate chaining with head records in the bucket array.

Some chaining implementations store the first record of each chain in the slot array itself.[4]
The number of pointer traversals is decreased by one for most cases. The purpose is to
increase cache efficiency of hash table access.
The disadvantage is that an empty bucket takes the same space as a bucket with one entry.
To save space, such hash tables often have about as many slots as stored entries, meaning
that many slots have two or more entries.[citation needed]

Separate chaining with other structures

Instead of a list, one can use any other data structure that supports the required opera-
tions. For example, by using a self-balancing binary search tree53 , the theoretical worst-case
time of common hash table operations (insertion, deletion, lookup) can be brought down
to O(log n)54 rather than O(n). However, this introduces extra complexity into the imple-
mentation, and may cause even worse performance for smaller hash tables, where the time
spent inserting into and balancing the tree is greater than the time needed to perform a
linear search55 on all of the elements of a list.[3][11] A real world example of a hash table that
uses a self-balancing binary search tree for buckets is the HashMap class in Java56 version
857 .[12]

53 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
54 https://en.wikipedia.org/wiki/Big_O_notation
55 https://en.wikipedia.org/wiki/Linear_search
56 https://en.wikipedia.org/wiki/Java_(programming_language)
57 https://en.wikipedia.org/wiki/Java_8


The variant called array hash table58 uses a dynamic array59 to store all the entries that hash
to the same slot.[13][14][15] Each newly inserted entry gets appended to the end of the dynamic
array that is assigned to the slot. The dynamic array is resized in an exact-fit manner,
meaning it is grown only by as many bytes as needed. Alternative techniques such as
growing the array by block sizes or pages were found to improve insertion performance,
but at a cost in space. This variation makes more efficient use of CPU caching60 and the
translation lookaside buffer61 (TLB), because slot entries are stored in sequential memory
positions. It also dispenses with the next pointers that are required by linked lists, which
saves space. Despite frequent array resizing, space overheads incurred by the operating
system, such as memory fragmentation, were found to be small.[citation needed]
An elaboration on this approach is the so-called dynamic perfect hashing63 ,[16] where a
bucket that contains k entries is organized as a perfect hash table with k² slots. While it
uses more memory (n² slots for n entries, in the worst case, and n × k slots in the average
case), this variant has guaranteed constant worst-case lookup time, and low amortized time
for insertion. It is also possible to use a fusion tree64 for each bucket, achieving constant
time for all operations with high probability.[17]

21.3.2 Open addressing

Main article: Open addressing65

58 https://en.wikipedia.org/w/index.php?title=Array_hash_table&action=edit&redlink=1
59 https://en.wikipedia.org/wiki/Dynamic_array
60 https://en.wikipedia.org/wiki/CPU_cache
61 https://en.wikipedia.org/wiki/Translation_lookaside_buffer
63 https://en.wikipedia.org/wiki/Dynamic_perfect_hashing
64 https://en.wikipedia.org/wiki/Fusion_tree
65 https://en.wikipedia.org/wiki/Open_addressing


Figure 57 Hash collision resolved by open addressing with linear probing (interval=1).
Note that ”Ted Baker” has a unique hash, but nevertheless collided with ”Sandra Dee”,
that had previously collided with ”John Smith”.

In another strategy, called open addressing, all entry records are stored in the bucket array
itself. When a new entry has to be inserted, the buckets are examined, starting with the
hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found.
When searching for an entry, the buckets are scanned in the same sequence, until either
the target record is found, or an unused array slot is found, which indicates that there is
no such key in the table.[18] The name ”open addressing” refers to the fact that the location
(”address”) of the item is not determined by its hash value. (This method is also called
closed hashing; it should not be confused with ”open hashing” or ”closed addressing” that
usually mean separate chaining.)
Well-known probe sequences include:


• Linear probing66 , in which the interval between probes is fixed (usually 1). Because of
good CPU cache67 utilization and high performance this algorithm is most widely used
on modern computer architectures in hash table implementations.[19]
• Quadratic probing68 , in which the interval between probes is increased by adding the
successive outputs of a quadratic polynomial to the starting value given by the original
hash computation
• Double hashing69 , in which the interval between probes is computed by a second hash
function
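A minimal sketch of open addressing with linear probing (interval 1), assuming a fixed-size table; the class and method names are hypothetical, and deletion is omitted since it requires tombstones or re-insertion of the cluster after the removed slot:

```python
# Illustrative open-addressing table with linear probing. All entries
# live in the slot array itself; a None slot terminates a search.
class LinearProbingTable:
    def __init__(self, n_slots: int = 16):
        self.slots = [None] * n_slots  # each slot: (key, value) or None

    def insert(self, key, value):
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            if self.slots[i] is None or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return
            i = (i + 1) % len(self.slots)  # probe the next slot
        raise RuntimeError("table full; a real table would resize here")

    def lookup(self, key):
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            if self.slots[i] is None:      # empty slot: key is absent
                raise KeyError(key)
            if self.slots[i][0] == key:
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        raise KeyError(key)
```

Quadratic probing and double hashing differ only in how the next index i is computed in the probe loop.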
A drawback of all these open addressing schemes is that the number of stored entries cannot
exceed the number of slots in the bucket array. In fact, even with good hash functions, their
performance dramatically degrades when the load factor grows beyond 0.7 or so. For many
applications, these restrictions mandate the use of dynamic resizing, with its attendant
costs.[citation needed]
Open addressing schemes also put more stringent requirements on the hash function: besides
distributing the keys more uniformly over the buckets, the function must also minimize the
clustering of hash values that are consecutive in the probe order. Using separate chaining,
the only concern is that too many objects map to the same hash value; whether they are
adjacent or nearby is completely irrelevant.[citation needed]
Open addressing only saves memory if the entries are small (less than four times the size
of a pointer) and the load factor is not too small. If the load factor is close to zero (that
is, there are far more buckets than stored entries), open addressing is wasteful even if each
entry is just two words72 .

66 https://en.wikipedia.org/wiki/Linear_probing
67 https://en.wikipedia.org/wiki/CPU_cache
68 https://en.wikipedia.org/wiki/Quadratic_probing
69 https://en.wikipedia.org/wiki/Double_hashing
72 https://en.wikipedia.org/wiki/Word_(computer_architecture)


Figure 58 This graph compares the average number of CPU cache misses required to
look up elements in large hash tables (far exceeding the size of the cache) with chaining and
linear probing. Linear probing performs better due to better locality of reference, though
as the table gets full, its performance degrades drastically.

Open addressing avoids the time overhead of allocating each new entry record, and can
be implemented even in the absence of a memory allocator. It also avoids the extra indi-
rection required to access the first entry of each bucket (that is, usually the only one). It
also has better locality of reference73 , particularly with linear probing. With small record
sizes, these factors can yield better performance than chaining, particularly for lookups.
Hash tables with open addressing are also easier to serialize74 , because they do not use
pointers.[citation needed]
On the other hand, normal open addressing is a poor choice for large elements, because
these elements fill entire CPU cache76 lines (negating the cache advantage), and a large
amount of space is wasted on large empty table slots. If the open addressing table only
stores references to elements (external storage), it uses space comparable to chaining even
for large records, but loses its speed advantage.[citation needed]
Generally speaking, open addressing is better used for hash tables with small records that
can be stored within the table (internal storage) and fit in a cache line. They are particularly
suitable for elements of one word78 or less. If the table is expected to have a high load factor,

73 https://en.wikipedia.org/wiki/Locality_of_reference
74 https://en.wikipedia.org/wiki/Serialization
76 https://en.wikipedia.org/wiki/CPU_cache
78 https://en.wikipedia.org/wiki/Word_(computer_architecture)


the records are large, or the data is variable-sized, chained hash tables often perform as well
or better.[citation needed]

Coalesced hashing

A hybrid of chaining and open addressing, coalesced hashing80 links together chains of nodes
within the table itself.[18] Like open addressing, it achieves space usage and (somewhat
diminished) cache advantages over chaining. Like chaining, it does not exhibit clustering
effects; in fact, the table can be efficiently filled to a high density. Unlike chaining, it cannot
have more elements than table slots.

Cuckoo hashing

Another alternative open-addressing solution is cuckoo hashing81 , which ensures constant
lookup and deletion time in the worst case, and constant amortized time for insertions (with
low probability that the worst-case will be encountered). It uses two or more hash functions,
which means any key/value pair could be in two or more locations. For lookup, the first
hash function is used; if the key/value is not found, then the second hash function is used,
and so on. If a collision happens during insertion, then the key is re-hashed with the second
hash function to map it to another bucket. If all hash functions are used and there is still
a collision, then the key it collided with is removed to make space for the new key, and the
old key is re-hashed with one of the other hash functions, which maps it to another bucket.
If that location also results in a collision, then the process repeats until there is no collision
or the process traverses all the buckets, at which point the table is resized. By combining
multiple hash functions with multiple cells per bucket, very high space utilization can be
achieved.[citation needed]
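The insertion procedure above can be sketched roughly as follows (an illustrative two-table cuckoo variant; the two hash functions are placeholder choices, distinct keys are assumed, and the rebuild on a displacement cycle is only signalled, not performed):

```python
# Simplified cuckoo hashing with two tables. On a collision, the
# resident entry is evicted and re-inserted using its other hash
# function; too many displacements would trigger a rehash/resize.
class CuckooTable:
    def __init__(self, n_slots: int = 16):
        self.tables = [[None] * n_slots, [None] * n_slots]

    def _index(self, which: int, key) -> int:
        # Two stand-in hash functions derived from Python's hash().
        n = len(self.tables[0])
        return hash(key) % n if which == 0 else hash((key, 1)) % n

    def lookup(self, key):
        for which in (0, 1):           # at most two probes
            entry = self.tables[which][self._index(which, key)]
            if entry is not None and entry[0] == key:
                return entry[1]
        raise KeyError(key)

    def insert(self, key, value, max_displacements: int = 32):
        entry, which = (key, value), 0
        for _ in range(max_displacements):
            i = self._index(which, entry[0])
            # Place our entry; whatever was there becomes the new entry
            # to re-seat in the other table.
            entry, self.tables[which][i] = self.tables[which][i], entry
            if entry is None:
                return
            which = 1 - which
        raise RuntimeError("displacement cycle; a real table would rehash")
```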

Hopscotch hashing

Another alternative open-addressing solution is hopscotch hashing83 ,[20] which combines
the approaches of cuckoo hashing84 and linear probing85 , yet seems in general to avoid their
limitations. In particular it works well even when the load factor grows beyond 0.9. The
algorithm is well suited for implementing a resizable concurrent hash table86 .
The hopscotch hashing algorithm works by defining a neighborhood of buckets near the
original hashed bucket, where a given entry is always found. Thus, search is limited to the
number of entries in this neighborhood, which is logarithmic in the worst case, constant on
average, and with proper alignment of the neighborhood typically requires one cache miss.
When inserting an entry, one first attempts to add it to a bucket in the neighborhood.
However, if all buckets in this neighborhood are occupied, the algorithm traverses buckets

80 https://en.wikipedia.org/wiki/Coalesced_hashing
81 https://en.wikipedia.org/wiki/Cuckoo_hashing
83 https://en.wikipedia.org/wiki/Hopscotch_hashing
84 https://en.wikipedia.org/wiki/Cuckoo_hashing
85 https://en.wikipedia.org/wiki/Linear_probing
86 https://en.wikipedia.org/wiki/Concurrent_hash_table


in sequence until an open slot (an unoccupied bucket) is found (as in linear probing). At that
point, since the empty bucket is outside the neighborhood, items are repeatedly displaced
in a sequence of hops. (This is similar to cuckoo hashing, but with the difference that in this
case the empty slot is being moved into the neighborhood, instead of items being moved out
with the hope of eventually finding an empty slot.) Each hop brings the open slot closer
to the original neighborhood, without invalidating the neighborhood property of any of the
buckets along the way. In the end, the open slot has been moved into the neighborhood,
and the entry being inserted can be added to it.[citation needed]

Robin Hood hashing

One interesting variation on double-hashing collision resolution is Robin Hood hashing.[21][22]
The idea is that a new key may displace a key already inserted, if its probe count is larger
than that of the key at the current position. The net effect of this is that it reduces
worst case search times in the table. This is similar to ordered hash tables[23] except that
the criterion for bumping a key does not depend on a direct relationship between the keys.
Since both the worst case and the variation in the number of probes is reduced dramatically,
an interesting variation is to probe the table starting at the expected successful probe value
and then expand from that position in both directions.[24] External Robin Hood hashing is
an extension of this algorithm where the table is stored in an external file and each table
position corresponds to a fixed-sized page or bucket with B records.[25]
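The displacement rule above can be sketched as follows (illustrative only; linear probing is substituted for double hashing to keep the sketch short, duplicate keys are not handled, and the class name is hypothetical):

```python
# Robin Hood probing sketch: each stored entry remembers its distance
# from its home slot. An inserted key that has probed further than a
# resident entry steals that slot, and the resident is re-seated.
class RobinHoodTable:
    def __init__(self, n_slots: int = 16):
        self.slots = [None] * n_slots  # (key, value, distance) or None

    def insert(self, key, value):
        n = len(self.slots)
        i, dist = hash(key) % n, 0
        entry = (key, value)
        for _ in range(n):
            resident = self.slots[i]
            if resident is None:
                self.slots[i] = (entry[0], entry[1], dist)
                return
            if resident[2] < dist:     # resident is "richer": displace it
                self.slots[i] = (entry[0], entry[1], dist)
                entry, dist = (resident[0], resident[1]), resident[2]
            i, dist = (i + 1) % n, dist + 1
        raise RuntimeError("table full; a real table would resize")

    def lookup(self, key):
        n = len(self.slots)
        i = hash(key) % n
        for _ in range(n):
            e = self.slots[i]
            if e is None:              # empty slot: key is absent
                raise KeyError(key)
            if e[0] == key:
                return e[1]
            i = (i + 1) % n
        raise KeyError(key)
```

The swap equalizes probe distances across entries, which is what reduces the variance and the worst-case search length described above.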

21.3.3 2-choice hashing

2-choice hashing88 employs two different hash functions, h1(x) and h2(x), for the hash table.
Both hash functions are used to compute two table locations. When an object is inserted
in the table, it is placed in the table location that contains fewer objects (with the default
being the h1(x) table location if there is equality in bucket size). 2-choice hashing employs
the principle of the power of two choices.[26]
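A minimal sketch of this scheme, assuming chained buckets and two stand-in hash functions (the class and helper names are hypothetical):

```python
# 2-choice hashing sketch: two hash functions give two candidate
# buckets, and a new entry goes to the less loaded one (the h1 bucket
# on ties, as described above). Buckets are short lists.
class TwoChoiceTable:
    def __init__(self, n_buckets: int = 16):
        self.buckets = [[] for _ in range(n_buckets)]

    def _candidates(self, key):
        n = len(self.buckets)
        return hash(key) % n, hash((key, 1)) % n  # h1(x), h2(x) stand-ins

    def insert(self, key, value):
        i1, i2 = self._candidates(key)
        target = i1 if len(self.buckets[i1]) <= len(self.buckets[i2]) else i2
        self.buckets[target].append((key, value))

    def lookup(self, key):
        for i in self._candidates(key):  # at most two buckets to scan
            for k, v in self.buckets[i]:
                if k == key:
                    return v
        raise KeyError(key)
```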

21.4 Dynamic resizing

When an insert is made such that the number of entries in a hash table exceeds the product
of the load factor and the current capacity then the hash table will need to be rehashed.[9]
Rehashing includes increasing the size of the underlying data structure[9] and mapping
existing items to new bucket locations. In some implementations, if the initial capacity is
greater than the maximum number of entries divided by the load factor, no rehash operations
will ever occur.[9]
To limit the proportion of memory wasted due to empty buckets, some implementations also
shrink the size of the table—followed by a rehash—when items are deleted. From the point
of view of space–time tradeoffs, this operation is similar to deallocation in dynamic arrays.

88 https://en.wikipedia.org/wiki/2-choice_hashing


21.4.1 Resizing by copying all entries

A common approach is to automatically trigger a complete resizing when the load factor
exceeds some threshold rmax . Then a new larger table is allocated89 , each entry is removed
from the old table, and inserted into the new table. When all entries have been removed
from the old table then the old table is returned to the free storage pool. Likewise, when
the load factor falls below a second threshold rmin , all entries are moved to a new smaller
table.
For hash tables that shrink and grow frequently, the resizing downward can be skipped
entirely. In this case, the table size is proportional to the maximum number of entries that
ever were in the hash table at one time, rather than the current number. The disadvantage
is that memory usage will be higher, and thus cache behavior may be worse. For best
control, a ”shrink-to-fit” operation can be provided that does this only on request.
If the table size increases or decreases by a fixed percentage at each expansion, the total
cost of these resizings, amortized90 over all insert and delete operations, is still a constant,
independent of the number of entries n and of the number m of operations performed.
For example, consider a table that was created with the minimum possible size and is
doubled each time the load ratio exceeds some threshold. If m elements are inserted into
that table, the total number of extra re-insertions that occur in all dynamic resizings of the
table is at most m − 1. In other words, dynamic resizing roughly doubles the cost of each
insert or delete operation.

21.4.2 Alternatives to all-at-once rehashing

Some hash table implementations, notably in real-time systems91 , cannot pay the price of
enlarging the hash table all at once, because it may interrupt time-critical operations. If
one cannot avoid dynamic resizing, a solution is to perform the resizing gradually.
Disk-based hash tables almost always use some alternative to all-at-once rehashing, since
the cost of rebuilding the entire table on disk would be too high.

Incremental resizing

One alternative to enlarging the table all at once is to perform the rehashing gradually:
• During the resize, allocate the new hash table, but keep the old table unchanged.
• In each lookup or delete operation, check both tables.
• Perform insertion operations only in the new table.
• At each insertion also move r elements from the old table to the new table.
• When all elements are removed from the old table, deallocate it.

89 https://en.wikipedia.org/wiki/Dynamic_memory_allocation
90 https://en.wikipedia.org/wiki/Amortized_analysis
91 https://en.wikipedia.org/wiki/Real-time_system


To ensure that the old table is completely copied over before the new table itself needs to
be enlarged, it is necessary to increase the size of the table by a factor of at least (r + 1)/r
during resizing.
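The gradual-rehashing steps above can be sketched as follows (illustrative only; the trigger condition, the growth factor of 2, and r = 2 are assumptions, and duplicate keys across the two tables are not handled):

```python
# Incremental resizing sketch: during a resize, insertions go to the
# new table, lookups consult both tables, and each insertion also
# migrates r entries from the old table until it is drained.
class IncrementalTable:
    def __init__(self, n_buckets: int = 4, r: int = 2):
        self.old = None
        self.new = [[] for _ in range(n_buckets)]
        self.count, self.r = 0, r

    def _bucket(self, table, key):
        return table[hash(key) % len(table)]

    def insert(self, key, value):
        # Start a resize when load factor reaches 1 (assumed threshold).
        if self.old is None and self.count >= len(self.new):
            self.old = self.new
            self.new = [[] for _ in range(2 * len(self.old))]
        self._bucket(self.new, key).append((key, value))
        self.count += 1
        self._migrate()

    def _migrate(self):
        moved = 0
        while self.old is not None and moved < self.r:
            nonempty = [b for b in self.old if b]
            if not nonempty:
                self.old = None        # old table drained: deallocate it
                return
            k, v = nonempty[0].pop()   # move one entry to the new table
            self._bucket(self.new, k).append((k, v))
            moved += 1

    def lookup(self, key):
        tables = [self.new] + ([self.old] if self.old is not None else [])
        for t in tables:               # check both tables during a resize
            for k, v in self._bucket(t, key):
                if k == key:
                    return v
        raise KeyError(key)
```

With a growth factor of 2 and r = 2, the condition (r + 1)/r ≤ 2 from the text is satisfied, so the old table is always drained before the new one fills.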

Monotonic keys

If it is known that keys will be stored in monotonically92 increasing (or decreasing) order,
then a variation of consistent hashing93 can be achieved.
Given some initial key k1, a subsequent key ki partitions94 the key domain [k1, ∞) into the
set {[k1, ki), [ki, ∞)}. In general, repeating this process gives a finer partition {[k1, ki0),
[ki0, ki1), ..., [kin−1, kin), [kin, ∞)} for some sequence of monotonically increasing keys
(ki0, ..., kin), where n is the number of refinements95 .
to monotonically decreasing keys. By assigning to each subinterval97 of this partition a
different hash function or hash table (or both), and by refining the partition whenever the
hash table is resized, this approach guarantees that any key's hash, once issued, will never
change, even when the hash table is grown.
Since it is common to grow the overall number of entries by doubling, there will only
be O(log(N))98 subintervals to check, and binary search time for the redirection will be
O(log(log(N))).

Linear hashing

Linear hashing99[27] is a hash table algorithm that permits incremental hash table expansion.
It is implemented using a single hash table, but with two possible lookup functions.

Hashing for distributed hash tables

Another way to decrease the cost of table resizing is to choose a hash function in such a way
that the hashes of most values do not change when the table is resized. Such hash functions
are prevalent in disk-based and distributed hash tables100 , where rehashing is prohibitively
costly. The problem of designing a hash such that most values do not change when the
table is resized is known as the distributed hash table101 problem. The four most pop-

92 https://en.wikipedia.org/wiki/Monotonic_function
93 https://en.wikipedia.org/wiki/Consistent_hashing
94 https://en.wikipedia.org/wiki/Partition_of_an_interval
95 https://en.wikipedia.org/wiki/Partition_of_an_interval#Refinement_of_a_partition
96 https://en.wikipedia.org/wiki/Mutatis_mutandis
97 https://en.wikipedia.org/wiki/Partition_of_an_interval
98 https://en.wikipedia.org/wiki/Big_O_notation
99 https://en.wikipedia.org/wiki/Linear_hashing
100 https://en.wikipedia.org/wiki/Distributed_hash_table
101 https://en.wikipedia.org/wiki/Distributed_hash_table


ular approaches are rendezvous hashing102 , consistent hashing103 , the content addressable
network104 algorithm, and Kademlia105 distance.
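Consistent hashing, for example, can be sketched with a sorted ring of node positions: a key belongs to the first node clockwise of its hash, so adding a node moves only the keys in one arc. The names below are illustrative; a production version would also place several virtual replicas per node to even out the arcs.

```python
import bisect
import hashlib

def _h(value):
    # A stable hash, so placement survives process restarts
    # (Python's built-in hash() is randomized per process for strings).
    return int.from_bytes(hashlib.sha256(str(value).encode()).digest()[:8], "big")

class ConsistentHashRing:
    """Sketch of consistent hashing: nodes sit on a ring; each key is owned
    by the first node at or after the key's position, wrapping around."""

    def __init__(self, nodes):
        self.ring = sorted((_h(n), n) for n in nodes)

    def node_for(self, key):
        positions = [h for h, _ in self.ring]
        i = bisect.bisect(positions, _h(key)) % len(self.ring)
        return self.ring[i][1]

    def add_node(self, node):
        bisect.insort(self.ring, (_h(node), node))

ring = ConsistentHashRing(["a", "b", "c"])
before = {k: ring.node_for(k) for k in range(100)}
ring.add_node("d")
after = {k: ring.node_for(k) for k in range(100)}
moved = sum(before[k] != after[k] for k in range(100))  # only one arc moves
```

Every key whose owner changed is now owned by the new node; all other assignments are untouched, which is exactly the "most hashes do not change" property described above.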

21.5 Performance analysis

In the simplest model, the hash function is completely unspecified and the table does not
resize. With an ideal hash function, a table of size k with open addressing has no collisions
and holds up to k elements with a single comparison for successful lookup, while a table
of size k with chaining and n keys has the minimum max(0, n − k) collisions and Θ(1 + n/k)
comparisons for lookup. With the worst possible hash function, every insertion causes a
collision, and hash tables degenerate to linear search, with Θ(n) amortized comparisons per
insertion and up to n comparisons for a successful lookup.
Adding rehashing to this model is straightforward. As in a dynamic array106, geometric
resizing by a factor of b implies that only n/b^i keys are inserted i or more times, so that the
total number of insertions is bounded above by bn/(b − 1), which is Θ(n). By using rehashing to
maintain n < k, tables using both chaining and open addressing can have unlimited elements
and perform successful lookup in a single comparison for the best choice of hash function.
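The Θ(n) total can be checked with a small counter. The accounting below is a slight variant of the argument above (it resizes whenever the table overflows rather than tracking n/b^i directly), so for a concrete n the total lands near, not exactly under, bn/(b − 1).

```python
def rehash_work(n, b=2):
    """Illustrative count of total insert operations, including rehash moves,
    when a table starts with 1 slot and grows by a factor of b when full."""
    work, size, count = 0, 1, 0
    for _ in range(n):
        count += 1
        work += 1                   # the fresh insertion itself
        if count > size:            # table overflowed: resize and rehash
            size *= b
            work += count - 1       # re-insert all pre-existing keys
    return work

w = rehash_work(1000)               # close to b*n/(b - 1) = 2n
```

For n = 1000 this counts 2023 operations, just past the 2n estimate only because the final resize rounds the capacity up beyond n; the per-insertion average remains a constant, which is the amortized Θ(n) claim.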
In more realistic models, the hash function is a random variable107 over a probability dis-
tribution of hash functions, and performance is computed on average over the choice of
hash function. When this distribution is uniform108, the assumption is called "simple uni-
form hashing" and it can be shown that hashing with chaining requires Θ(1 + n/k) com-
parisons on average for an unsuccessful lookup, and hashing with open addressing requires
Θ(1/(1 − n/k)).[28] Both of these bounds are constant if we maintain n/k < c using table resizing,
where c is a fixed constant less than 1.
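The chaining bound is easy to check empirically: under uniform random hashing, an unsuccessful lookup scans an entire chain, whose expected length is the load factor n/k. The simulation below is illustrative only (it draws bucket choices uniformly at random rather than hashing real keys).

```python
import random

def chaining_unsuccessful_probes(n, k, trials=500):
    """Empirical average chain length met by an unsuccessful lookup when
    n keys are dropped uniformly at random into k chained buckets."""
    random.seed(1)
    total = 0
    for _ in range(trials):
        counts = [0] * k
        for _ in range(n):
            counts[random.randrange(k)] += 1   # insert into a random bucket
        total += counts[random.randrange(k)]   # probe a random bucket
    return total / trials

avg = chaining_unsuccessful_probes(n=1000, k=500)  # load factor n/k = 2
```

With a load factor of 2, the measured average comes out near 2 comparisons, matching the Θ(1 + n/k) analysis.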
Two factors significantly affect the latency of operations on a hash table:[29]

• Cache misses. As the load factor increases, the search and insertion performance
of hash tables can degrade substantially because of the rise in average cache misses.
• Cost of resizing. Resizing becomes an extremely time-consuming task when hash tables
grow massive.
In latency-sensitive programs, the time consumption of operations on both the average and
the worst case is required to be small, stable, and even predictable. The K hash table[30]
is designed for a general scenario of low-latency applications, aiming to achieve cost-stable
operations on a growing huge-sized table.

102 https://en.wikipedia.org/wiki/Rendezvous_hashing
103 https://en.wikipedia.org/wiki/Consistent_hashing
104 https://en.wikipedia.org/wiki/Content_addressable_network
105 https://en.wikipedia.org/wiki/Kademlia
106 https://en.wikipedia.org/wiki/Dynamic_array
107 https://en.wikipedia.org/wiki/Random_variable
108 https://en.wikipedia.org/wiki/Uniform_distribution_(discrete)


21.6 Features

21.6.1 Advantages
• The main advantage of hash tables over other table data structures is speed. This advan-
tage is more apparent when the number of entries is large. Hash tables are particularly
efficient when the maximum number of entries can be predicted in advance, so that the
bucket array can be allocated once with the optimum size and never resized.
• If the set of key-value pairs is fixed and known ahead of time (so insertions and deletions
are not allowed), one may reduce the average lookup cost by a careful choice of the hash
function, bucket table size, and internal data structures. In particular, one may be able
to devise a hash function that is collision-free, or even perfect. In this case the keys need
not be stored in the table.
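For a small fixed key set, a collision-free hash can even be found by brute force, as in this illustrative sketch (real minimal-perfect-hash generators use much smarter constructions; the function name here is invented):

```python
def find_perfect_hash(keys):
    """Brute-force search for a collision-free ("perfect") hash over a fixed
    key set: try salted hashes until no two keys share a slot. With the table
    size equal to len(keys), the result is a minimal perfect hash."""
    m = len(keys)
    for salt in range(100_000):
        slots = {hash((salt, k)) % m for k in keys}
        if len(slots) == len(keys):    # no collisions: keys need not be stored
            return salt, m
    raise ValueError("no perfect hash found within the salt budget")

keys = ["if", "else", "while", "for", "return"]
salt, m = find_perfect_hash(keys)
lookup = lambda k: hash((salt, k)) % m   # maps the 5 keys to 5 distinct slots
```

Since every key lands in its own slot, a lookup needs no collision handling and, as the text notes, the keys themselves need not be stored to disambiguate.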

21.6.2 Drawbacks
• Although operations on a hash table take constant time on average, the cost of a good
hash function can be significantly higher than the inner loop of the lookup algorithm
for a sequential list or search tree. Thus hash tables are not effective when the number
of entries is very small. (However, in some cases the high cost of computing the hash
function can be mitigated by saving the hash value together with the key.)
• For certain string processing applications, such as spell-checking109 , hash tables may be
less efficient than tries110 , finite automata111 , or Judy arrays112 . Also, if there are not too
many possible keys to store—that is, if each key can be represented by a small enough
number of bits—then, instead of a hash table, one may use the key directly as the index
into an array of values. Note that there are no collisions in this case.
• The entries stored in a hash table can be enumerated efficiently (at constant cost per
entry), but only in some pseudo-random order. Therefore, there is no efficient way to
locate an entry whose key is nearest to a given key. Listing all n entries in some specific
order generally requires a separate sorting step, whose cost is proportional to log(n) per
entry. In comparison, ordered search trees have lookup and insertion cost proportional to
log(n), but allow finding the nearest key at about the same cost, and ordered enumeration
of all entries at constant cost per entry. However, a LinkedHashMap can be used to
create a hash table with a non-random (insertion-ordered) iteration sequence.[31]
• If the keys are not stored (because the hash function is collision-free), there may be no
easy way to enumerate the keys that are present in the table at any given moment.
• Although the average cost per operation is constant and fairly small, the cost of a single
operation may be quite high. In particular, if the hash table uses dynamic resizing113 , an
insertion or deletion operation may occasionally take time proportional to the number of
entries. This may be a serious drawback in real-time or interactive applications.

109 https://en.wikipedia.org/wiki/Spell_checker
110 https://en.wikipedia.org/wiki/Trie
111 https://en.wikipedia.org/wiki/Finite_automata
112 https://en.wikipedia.org/wiki/Judy_array
113 #Dynamic_resizing


• Hash tables in general exhibit poor locality of reference114 —that is, the data to be accessed
is distributed seemingly at random in memory. Because hash tables cause access patterns
that jump around, this can trigger microprocessor cache115 misses that cause long delays.
Compact data structures such as arrays searched with linear search116 may be faster, if
the table is relatively small and keys are compact. The optimal performance point varies
from system to system.
• Hash tables become quite inefficient when there are many collisions. While extremely
uneven hash distributions are extremely unlikely to arise by chance, a malicious adver-
sary117 with knowledge of the hash function may be able to supply information to a hash
that creates worst-case behavior by causing excessive collisions, resulting in very poor
performance, e.g., a denial of service attack118 .[32][33][34] In critical applications, a data
structure with better worst-case guarantees can be used; however, universal hashing119 —
a randomized algorithm120 that prevents the attacker from predicting which inputs cause
worst-case behavior—may be preferable.[35] The hash function used by the hash table in
the Linux routing table121 cache was changed with Linux version 2.4.2 as a countermea-
sure against such attacks.[36]
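Universal hashing, mentioned in the last point, can be sketched in the Carter–Wegman style: draw random parameters a and b once at table creation, so an adversary who does not know them cannot precompute colliding keys. This is an illustrative sketch for non-negative integer keys smaller than the prime p; string keys would first be folded into such an integer.

```python
import random

def make_universal_hash(m, p=(1 << 61) - 1):
    """Return h(x) = ((a*x + b) mod p) mod m with secret random a, b.
    Any two distinct keys collide with probability roughly 1/m over the
    random choice of (a, b), independent of the keys themselves."""
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    def h(x):
        return ((a * x + b) % p) % m
    return h

h = make_universal_hash(m=1024)   # a fresh, unpredictable hash function
```

Each table instance gets its own function, which is what defeats precomputed worst-case inputs.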

21.7 Uses


114 https://en.wikipedia.org/wiki/Locality_of_reference
115 https://en.wikipedia.org/wiki/CPU_cache
116 https://en.wikipedia.org/wiki/Linear_search
117 https://en.wikipedia.org/wiki/Black_hat_hacking
118 https://en.wikipedia.org/wiki/Denial_of_service_attack
119 https://en.wikipedia.org/wiki/Universal_hashing
120 https://en.wikipedia.org/wiki/Randomized_algorithm
121 https://en.wikipedia.org/wiki/Routing_table


21.7.1 Associative arrays

Main article: Associative array134 Hash tables are commonly used to implement many
types of in-memory tables. They are used to implement associative arrays135 (arrays whose
indices are arbitrary strings136 or other complicated objects), especially in interpreted137
programming languages138 like Ruby139 , Python140 , and PHP141 .
When storing a new item into a multimap142 and a hash collision occurs, the multimap
unconditionally stores both items.
When storing a new item into a typical associative array and a hash collision occurs, but
the actual keys themselves are different, the associative array likewise stores both items.
However, if the key of the new item exactly matches the key of an old item, the associative
array typically erases the old item and overwrites it with the new item, so every item in the
table has a unique key.
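The contrast between the two behaviors can be shown directly, here using a Python dict as the associative array and a dict of lists as a stand-in multimap:

```python
from collections import defaultdict

# Associative array: a matching key overwrites the old item.
assoc = {}
assoc["color"] = "red"
assoc["color"] = "blue"          # same key: the old value is replaced

# Multimap (sketched as lists of values per key): both items are kept.
multi = defaultdict(list)
multi["color"].append("red")
multi["color"].append("blue")    # same key: both values are stored
```

After these operations the associative array holds one item per key, while the multimap has accumulated both colliding items.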

21.7.2 Database indexing

Hash tables may also be used as disk143 -based data structures and database indices144
(such as in dbm145 ) although B-trees146 are more popular in these applications. In multi-
node database systems, hash tables are commonly used to distribute rows amongst nodes,
reducing network traffic for hash joins147 .

21.7.3 Caches

Main article: Cache (computing)148 Hash tables can be used to implement caches149 , aux-
iliary data tables that are used to speed up the access to data that is primarily stored in
slower media. In this application, hash collisions can be handled by discarding one of the
two colliding entries—usually erasing the old item that is currently stored in the table and
overwriting it with the new item, so every item in the table has a unique hash value.
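A collision-discarding cache of this kind fits in a few lines; the class name below is invented for illustration:

```python
class HashCache:
    """Sketch of a collision-discarding cache: each slot holds at most one
    (key, value) pair, and a colliding insert simply evicts the old entry."""

    def __init__(self, slots=256):
        self.slots = [None] * slots

    def put(self, key, value):
        self.slots[hash(key) % len(self.slots)] = (key, value)  # evict on collision

    def get(self, key):
        entry = self.slots[hash(key) % len(self.slots)]
        if entry is not None and entry[0] == key:
            return entry[1]
        return None   # miss: never stored, or evicted by a collision

cache = HashCache(slots=4)
cache.put(1, "one")
cache.put(5, "five")   # 5 % 4 == 1 % 4: the entry for key 1 is evicted
```

Because a cache is only an accelerator, losing the evicted entry is harmless: a later request for it simply falls back to the slower primary store.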

134 https://en.wikipedia.org/wiki/Associative_array
135 https://en.wikipedia.org/wiki/Associative_array
136 https://en.wikipedia.org/wiki/String_(computing)
137 https://en.wikipedia.org/wiki/Interpreter_(computing)
138 https://en.wikipedia.org/wiki/Programming_language
139 https://en.wikipedia.org/wiki/Ruby_(programming_language)
140 https://en.wikipedia.org/wiki/Python_(programming_language)
141 https://en.wikipedia.org/wiki/PHP
142 https://en.wikipedia.org/wiki/Multimap
143 https://en.wikipedia.org/wiki/Disk_drive
144 https://en.wikipedia.org/wiki/Index_(database)
145 https://en.wikipedia.org/wiki/DBM_(computing)
146 https://en.wikipedia.org/wiki/B-tree
147 https://en.wikipedia.org/wiki/Hash_join
148 https://en.wikipedia.org/wiki/Cache_(computing)
149 https://en.wikipedia.org/wiki/Cache_(computing)


21.7.4 Sets

Besides recovering the entry that has a given key, many hash table implementations can
also tell whether such an entry exists or not.
Those structures can therefore be used to implement a set data structure150 [37] , which
merely records whether a given key belongs to a specified set of keys. In this case, the
structure can be simplified by eliminating all parts that have to do with the entry values.
Hashing can be used to implement both static and dynamic sets.
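Stripping the value fields from a chained hash table yields exactly such a set structure, as in this illustrative sketch:

```python
class ChainedHashSet:
    """A chained hash table with the value fields removed: buckets store
    bare keys, so the structure answers only membership queries."""

    def __init__(self, buckets=16):
        self.buckets = [[] for _ in range(buckets)]

    def add(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        if key not in bucket:      # sets store each key at most once
            bucket.append(key)

    def __contains__(self, key):
        return key in self.buckets[hash(key) % len(self.buckets)]

s = ChainedHashSet()
s.add("alice"); s.add("bob"); s.add("alice")   # duplicate add is a no-op
```

This is precisely how built-in hash sets (Python's set, Java's HashSet) relate to their map counterparts.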

21.7.5 Object representation

Several dynamic languages, such as Perl151 , Python152 , JavaScript153 , Lua154 , and Ruby155 ,
use hash tables to implement objects156 . In this representation, the keys are the names of
the members and methods of the object, and the values are pointers to the corresponding
member or method.

21.7.6 Unique data representation

Main article: String interning157 Hash tables can be used by some programs to avoid creating
multiple character strings with the same contents. For that purpose, all strings in use
by the program are stored in a single string pool implemented as a hash table, which is
checked whenever a new string has to be created. This technique was introduced in Lisp158
interpreters under the name hash consing159 , and can be used with many other kinds of
data (expression trees160 in a symbolic algebra system, records in a database, files in a file
system, binary decision diagrams, etc.).
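An interning pool reduces to a single hash table mapping contents to the one canonical object, as in this sketch (CPython exposes a built-in equivalent for strings as sys.intern):

```python
class InternPool:
    """Sketch of hash consing / string interning: the pool maps contents to
    one canonical object, so equal strings end up sharing a single instance."""

    def __init__(self):
        self.pool = {}

    def intern(self, s):
        # Return the stored instance if present; otherwise store this one.
        return self.pool.setdefault(s, s)

pool = InternPool()
a = pool.intern("".join(["ha", "sh"]))   # builds a fresh string object
b = pool.intern("".join(["h", "ash"]))   # equal contents, different object
```

Both calls return the same object, so equality checks between interned strings degrade to cheap identity comparisons.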

21.7.7 Transposition table

Main article: Transposition table161 A transposition table162 is a complex hash table which
stores information about each section that has been searched.[38]

150 https://en.wikipedia.org/wiki/Set_data_structure
151 https://en.wikipedia.org/wiki/Perl
152 https://en.wikipedia.org/wiki/Python_(programming_language)
153 https://en.wikipedia.org/wiki/JavaScript
154 https://en.wikipedia.org/wiki/Lua_(programming_language)
155 https://en.wikipedia.org/wiki/Ruby_(programming_language)
156 https://en.wikipedia.org/wiki/Object_(computer_science)
157 https://en.wikipedia.org/wiki/String_interning
158 https://en.wikipedia.org/wiki/Lisp_(programming_language)
159 https://en.wikipedia.org/wiki/Hash_consing
160 https://en.wikipedia.org/wiki/Expression_tree
161 https://en.wikipedia.org/wiki/Transposition_table
162 https://en.wikipedia.org/wiki/Transposition_table


21.8 Implementations

21.8.1 In programming languages

Many programming languages provide hash table functionality, either as built-in associative
arrays or as standard library163 modules. In C++11164 , for example, the unordered_map165
class provides hash tables for keys and values of arbitrary type.
The Java166 programming language (including the variant which is used on An-
droid167 ) includes the HashSet, HashMap, LinkedHashSet, and LinkedHashMap generic168
collections.[39]
In PHP169 5 and 7, the Zend 2 engine and the Zend 3 engine (respectively) use one of the
hash functions from Daniel J. Bernstein170 to generate the hash values used in managing
the mappings of data pointers stored in a hash table. In the PHP source code, it is labelled
as DJBX33A (Daniel J. Bernstein, Times 33 with Addition).
Python171's built-in hash table implementation, in the form of the dict type, and
Perl172's hash type (%) are used internally to implement namespaces and therefore need
to pay more attention to security, i.e., collision attacks. Python sets173 also use hashes
internally, for fast lookup (though they store only keys, not values).[40] CPython 3.6+ uses
an insertion-ordered variant of the hash table, implemented by splitting out the value storage
into an array and having the vanilla hash table only store a set of indices.[41]
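That split layout can be sketched as follows. This is an illustrative simplification of the CPython 3.6+ design (no resizing, deletion, or perturbed probing; the class name is invented):

```python
class CompactDictSketch:
    """Sketch of the CPython 3.6+ layout: a sparse index table holds small
    integers pointing into a dense, insertion-ordered entry array."""

    EMPTY = -1

    def __init__(self, size=8):
        self.indices = [self.EMPTY] * size   # sparse: the "vanilla" hash table
        self.entries = []                    # dense: (hash, key, value) in order

    def _probe(self, key):
        mask = len(self.indices) - 1
        i = hash(key) & mask
        while True:
            idx = self.indices[i]
            if idx == self.EMPTY or self.entries[idx][1] == key:
                return i
            i = (i + 1) & mask               # simplified linear probing

    def __setitem__(self, key, value):
        i = self._probe(key)
        if self.indices[i] == self.EMPTY:    # new key: append to dense array
            self.indices[i] = len(self.entries)
            self.entries.append((hash(key), key, value))
        else:                                # existing key: update in place
            self.entries[self.indices[i]] = (hash(key), key, value)

    def keys(self):
        return [k for _, k, _ in self.entries]   # insertion order for free

d = CompactDictSketch()
d["b"] = 2; d["a"] = 1; d["b"] = 3   # updating "b" keeps its original position
```

The dense array both saves memory (the sparse table stores only small integers) and yields insertion-ordered iteration as a side effect.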
In the .NET Framework174 , support for hash tables is provided via the non-generic
Hashtable and generic Dictionary classes, which store key-value pairs, and the generic
HashSet class, which stores only values.
In Ruby175 the hash table uses the open addressing model from Ruby 2.4 onwards.[42][43]
In Rust176 's standard library, the generic HashMap and HashSet structs use linear probing
with Robin Hood bucket stealing.
ANSI177 Smalltalk178 defines the classes Set / IdentitySet and Dictionary / Identi-
tyDictionary. All Smalltalk implementations provide additional (not yet standardized)
versions of WeakSet, WeakKeyDictionary and WeakValueDictionary.

163 https://en.wikipedia.org/wiki/Library_(computing)
164 https://en.wikipedia.org/wiki/C%2B%2B11
165 https://en.wikipedia.org/wiki/Unordered_map_(C%2B%2B)
166 https://en.wikipedia.org/wiki/Java_(programming_language)
167 https://en.wikipedia.org/wiki/Android_(operating_system)
168 https://en.wikipedia.org/wiki/Generics_in_Java
169 https://en.wikipedia.org/wiki/PHP
170 https://en.wikipedia.org/wiki/Daniel_J._Bernstein
171 https://en.wikipedia.org/wiki/Python_(programming_language)
172 https://en.wikipedia.org/wiki/Perl
173 https://en.wikipedia.org/wiki/Set_(abstract_data_type)
174 https://en.wikipedia.org/wiki/.NET_Framework
175 https://en.wikipedia.org/wiki/Ruby_(programming_language)
176 https://en.wikipedia.org/wiki/Rust_(programming_language)
177 https://en.wikipedia.org/wiki/ANSI
178 https://en.wikipedia.org/wiki/Smalltalk_(programming_language)


Tcl179 array variables are hash tables, and Tcl dictionaries are immutable values based on
hashes. The functionality is also available as C library functions Tcl_InitHashTable et
al.180 (for generic hash tables) and Tcl_NewDictObj et al.181 (for dictionary values). The
performance has been independently benchmarked as extremely competitive.[44]
The Wolfram language182 supports hash tables since version 10. They are implemented under
the name Association.

21.9 History

The idea of hashing arose independently in different places. In January 1953, Hans Peter
Luhn183 wrote an internal IBM memorandum that used hashing with chaining.[45] Gene Am-
dahl184 , Elaine M. McGraw185 , Nathaniel Rochester186 , and Arthur Samuel187 implemented
a program using hashing at about the same time. Open addressing with linear probing
(relatively prime stepping) is credited to Amdahl, but Ershov188 (in Russia) had the same
idea.[45]

21.10 See also


• Rabin–Karp string search algorithm189
• Stable hashing190
• Consistent hashing191
• Extendible hashing192
• Lazy deletion193
• Pearson hashing194
• PhotoDNA195
• Search data structure196
• Concurrent hash table197
• Record (computer science)198

179 https://en.wikipedia.org/wiki/Tcl
180 http://www.tcl.tk/man/tcl8.6/TclLib/Hash.htm
181 http://www.tcl.tk/man/tcl8.6/TclLib/DictObj.htm
182 https://en.wikipedia.org/wiki/Wolfram_language
183 https://en.wikipedia.org/wiki/Hans_Peter_Luhn
184 https://en.wikipedia.org/wiki/Gene_Amdahl
185 https://en.wikipedia.org/wiki/Elaine_M._McGraw
186 https://en.wikipedia.org/wiki/Nathaniel_Rochester_(computer_scientist)
187 https://en.wikipedia.org/wiki/Arthur_Samuel
188 https://en.wikipedia.org/wiki/Andrey_Ershov
189 https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_string_search_algorithm
190 https://en.wikipedia.org/wiki/Stable_hashing
191 https://en.wikipedia.org/wiki/Consistent_hashing
192 https://en.wikipedia.org/wiki/Extendible_hashing
193 https://en.wikipedia.org/wiki/Lazy_deletion
194 https://en.wikipedia.org/wiki/Pearson_hashing
195 https://en.wikipedia.org/wiki/PhotoDNA
196 https://en.wikipedia.org/wiki/Search_data_structure
197 https://en.wikipedia.org/wiki/Concurrent_hash_table
198 https://en.wikipedia.org/wiki/Record_(computer_science)


21.10.1 Related data structures

There are several data structures that use hash functions but cannot be considered special
cases of hash tables:
• Bloom filter199 , memory efficient data-structure designed for constant-time approximate
lookups; uses hash function(s) and can be seen as an approximate hash table.
• Distributed hash table200 (DHT), a resilient dynamic table spread over several nodes of
a network.
• Hash array mapped trie201 , a trie202 structure, similar to the array mapped trie203 , but
where each key is hashed first.

21.11 References
1. C, T H.204 ; L, C E.205 ; R, R L.206 ;
S, C207 (2009). Introduction to Algorithms208 (3 .). M-
 I  T. . 253–280. ISBN209 978-0-262-03384-
8210 .
2. Charles E. Leiserson211 , Amortized Algorithms, Table Doubling, Potential Method212
Archived213 August 7, 2009, at the Wayback Machine214 Lecture 13, course MIT
6.046J/18.410J Introduction to Algorithms—Fall 2005
3. K, D215 (1998). The Art of Computer Programming. 3: Sorting and
Searching (2nd ed.). Addison-Wesley. pp. 513–558. ISBN216 978-0-201-89685-5217 .
4. C, T H.218 ; L, C E.219 ; R, R L.220 ;
S, C221 (2001). ”C 11: H T”. Introduction to Algo-

199 https://en.wikipedia.org/wiki/Bloom_filter
200 https://en.wikipedia.org/wiki/Distributed_hash_table
201 https://en.wikipedia.org/wiki/Hash_array_mapped_trie
202 https://en.wikipedia.org/wiki/Trie
203 https://en.wikipedia.org/w/index.php?title=Array_mapped_trie&action=edit&redlink=1
204 https://en.wikipedia.org/wiki/Thomas_H._Cormen
205 https://en.wikipedia.org/wiki/Charles_E._Leiserson
206 https://en.wikipedia.org/wiki/Ronald_L._Rivest
207 https://en.wikipedia.org/wiki/Clifford_Stein
208 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
209 https://en.wikipedia.org/wiki/ISBN_(identifier)
210 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
211 https://en.wikipedia.org/wiki/Charles_E._Leiserson
212 http://videolectures.net/mit6046jf05_leiserson_lec13/
https://web.archive.org/web/20090807022046/http://videolectures.net/mit6046jf05_
213
leiserson_lec13/
214 https://en.wikipedia.org/wiki/Wayback_Machine
215 https://en.wikipedia.org/wiki/Donald_Knuth
216 https://en.wikipedia.org/wiki/ISBN_(identifier)
217 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
218 https://en.wikipedia.org/wiki/Thomas_H._Cormen
219 https://en.wikipedia.org/wiki/Charles_E._Leiserson
220 https://en.wikipedia.org/wiki/Ronald_L._Rivest
221 https://en.wikipedia.org/wiki/Clifford_Stein


rithms222 (2nd ed.). MIT Press and McGraw-Hill. pp. 221–252. ISBN223 978-0-
262-53196-2224.
5. "JDK HashMap Hashcode implementation"225. Archived226 from the original
on May 21, 2017.
6. P, K227 (1900). ”O        -
            -
            
 ”. Philosophical Magazine. Series 5. 50 (302). pp. 157–175.
doi228 :10.1080/14786440009463897229 .
7. P, R230 (1983). ”K P   C-S T”.
International Statistical Review (International Statistical Institute (ISI)). 51 (1).
pp. 59–72. doi231 :10.2307/1402731232 .
8. W, T (M 1997). ”P D H T”233 . A
  234  S 3, 1999. R M 10, 2015.
9. Javadoc for HashMap in Java 10 235
10. ”CS H T”236 . everythingcomputerscience.com.
11. P, M (A 30, 2010). ”L  B S”237 . A238
    N 20, 2016. R N 20, 2016.
12. "How does a HashMap work in JAVA"239. coding-geek.com. Archived240
from the original on November 19, 2016.
13. A, N; Z, J (O 2005). Cache-conscious Collision
Resolution in String Hash Tables. Proceedings of the 12th International Conference,
String Processing and Information Retrieval (SPIRE 2005). 3772/2005. pp. 91–102.
doi241 :10.1007/11575832_11242 . ISBN243 978-3-540-29740-6244 .

222 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
223 https://en.wikipedia.org/wiki/ISBN_(identifier)
224 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-53196-2
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/
225
util/HashMap.java#l322
https://web.archive.org/web/20170521033827/http://hg.openjdk.java.net/jdk8/jdk8/jdk/
226
file/687fd7c7986d/src/share/classes/java/util/HashMap.java#l322
227 https://en.wikipedia.org/wiki/Karl_Pearson
228 https://en.wikipedia.org/wiki/Doi_(identifier)
229 https://doi.org/10.1080%2F14786440009463897
230 https://en.wikipedia.org/wiki/Robin_Plackett
231 https://en.wikipedia.org/wiki/Doi_(identifier)
232 https://doi.org/10.2307%2F1402731
https://web.archive.org/web/19990903133921/http://www.concentric.net/~Ttwang/tech/
233
primehash.htm
234 https://www.concentric.net/~Ttwang/tech/primehash.htm
235 https://docs.oracle.com/javase/10/docs/api/java/util/HashMap.html
https://everythingcomputerscience.com/discrete_mathematics/Data_Structures/Hash_
236
Table.html
237 https://schani.wordpress.com/2010/04/30/linear-vs-binary-search/
https://web.archive.org/web/20161120154213/https://schani.wordpress.com/2010/04/30/
238
linear-vs-binary-search/
239 http://coding-geek.com/how-does-a-hashmap-work-in-java/#JAVA_8_improvements
https://web.archive.org/web/20161119095443/http://coding-geek.com/how-does-a-hashmap-
240
work-in-java/#JAVA_8_improvements
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.1007%2F11575832_11
243 https://en.wikipedia.org/wiki/ISBN_(identifier)
244 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-29740-6


14. A, N; S, R (2010). ”E , 


     ”. The VLDB Journal. 17 (5): 633–660.
doi245 :10.1007/s00778-010-0183-9246 . ISSN247 1066-8888248 .
15. A, N (2009). Fast and Compact Hash Tables for Integer Keys249
(PDF). Proceedings of the 32nd Australasian Computer Science Conference (ACSC
2009). 91. pp. 113–122. ISBN250 978-1-920682-72-9251 . Archived from the original252
(PDF) on February 16, 2011. Retrieved June 13, 2010.
16. Erik Demaine, Jeff Lind. 6.897: Advanced Data Structures. MIT Computer Science
and Artificial Intelligence Laboratory. Spring 2003.
”A ”253 (PDF). A254 (PDF)     J 15,
2010. R J 30, 2008.CS1 maint: archived copy as title (link255 )
17. W, D E.256 (2000). ”E  ,  E
B ,         ”. SIAM
Journal on Computing257 . 29 (3): 1030–1049. doi258 :10.1137/S0097539797322425259 .
MR260 1740562261 ..
18. T, A M.; L, Y; A, M J. (1990).
Data Structures Using C. Prentice Hall. pp. 456–461, p. 472. ISBN262 978-0-13-
199746-2263 .
19. P, R; R, F F (2001). ”C H”. Al-
gorithms — ESA 2001. Lecture Notes in Computer Science. 2161. pp. 121–133.
CiteSeerX264 10.1.1.25.4189265 . doi266 :10.1007/3-540-44676-1_10267 . ISBN268 978-3-
540-42493-2269 .
20. H, M; S, N; T, M (2008). ”H
H”. DISC '08: Proceedings of the 22nd international symposium on Dis-

245 https://en.wikipedia.org/wiki/Doi_(identifier)
246 https://doi.org/10.1007%2Fs00778-010-0183-9
247 https://en.wikipedia.org/wiki/ISSN_(identifier)
248 http://www.worldcat.org/issn/1066-8888
https://web.archive.org/web/20110216180225/http://crpit.com/confpapers/
249
CRPITV91Askitis.pdf
250 https://en.wikipedia.org/wiki/ISBN_(identifier)
251 https://en.wikipedia.org/wiki/Special:BookSources/978-1-920682-72-9
252 http://crpit.com/confpapers/CRPITV91Askitis.pdf
253 http://courses.csail.mit.edu/6.897/spring03/scribe_notes/L2/lecture2.pdf
https://web.archive.org/web/20100615203901/http://courses.csail.mit.edu/6.897/
254
spring03/scribe_notes/L2/lecture2.pdf
255 https://en.wikipedia.org/wiki/Category:CS1_maint:_archived_copy_as_title
256 https://en.wikipedia.org/wiki/Dan_Willard
257 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
258 https://en.wikipedia.org/wiki/Doi_(identifier)
259 https://doi.org/10.1137%2FS0097539797322425
260 https://en.wikipedia.org/wiki/MR_(identifier)
261 http://www.ams.org/mathscinet-getitem?mr=1740562
262 https://en.wikipedia.org/wiki/ISBN_(identifier)
263 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-199746-2
264 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
265 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.4189
266 https://en.wikipedia.org/wiki/Doi_(identifier)
267 https://doi.org/10.1007%2F3-540-44676-1_10
268 https://en.wikipedia.org/wiki/ISBN_(identifier)
269 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-42493-2


tributed Computing. Berlin, Heidelberg: Springer-Verlag. pp. 350–364. Cite-
SeerX270 10.1.1.296.8742271.
21. C, P272 (1986). Robin Hood hashing273 (PDF) (T -
). C S D, U  W. CS-86-14.
A274 (PDF)     J 17, 2014.
22. G, E (2013). ”R H ”275 . A276 
   M 21, 2014.
23. A, O; K, D (1974). ”O  ”. Computer Journal.
17 (2): 135. doi277 :10.1093/comjnl/17.2.135278 .
24. V, A (O 2005). ”E    -
    ”. Transactions on Algorithms (TALG).
1 (2): 214–242. doi279 :10.1145/1103963.1103965280 .
25. C, P281 (M 1988). External Robin Hood Hashing (Technical report).
Computer Science Department, Indiana University. TR246.
26. ”A ”282 (PDF). A283 (PDF)     M
25, 2015. R A 10, 2015.CS1 maint: archived copy as title (link284 )
27. L, W (1980). ”L : A      
”. Proc. 6th Conference on Very Large Databases. pp. 212–223.
28. Doug Dunham. CS 4521 Lecture Notes285 Archived286 July 22, 2009, at the Wayback
Machine287 . University of Minnesota Duluth. Theorems 11.2, 11.6. Last modified
April 21, 2009.
29. Andy Ke. Inside the latency of hash table operations288 Last modified December 30,
2019.
30. Andy Ke. The K hash table, a design for low-latency applications289 Last modified
December 20, 2019.

270 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
271 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.296.8742
272 https://en.wikipedia.org/wiki/Pedro_Celis
273 https://cs.uwaterloo.ca/research/tr/1986/CS-86-14.pdf
https://web.archive.org/web/20140717220245/https://cs.uwaterloo.ca/research/tr/1986/
274
CS-86-14.pdf
275 http://codecapsule.com/2013/11/11/robin-hood-hashing/
https://web.archive.org/web/20140321175355/http://codecapsule.com/2013/11/11/robin-
276
hood-hashing/
277 https://en.wikipedia.org/wiki/Doi_(identifier)
278 https://doi.org/10.1093%2Fcomjnl%2F17.2.135
279 https://en.wikipedia.org/wiki/Doi_(identifier)
280 https://doi.org/10.1145%2F1103963.1103965
281 https://en.wikipedia.org/wiki/Pedro_Celis
282 http://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf
https://web.archive.org/web/20150325175258/http://www.eecs.harvard.edu/%7Emichaelm/
283
postscripts/handbook2001.pdf
284 https://en.wikipedia.org/wiki/Category:CS1_maint:_archived_copy_as_title
285 http://www.duluth.umn.edu/~ddunham/cs4521s09/notes/ch11.txt
https://web.archive.org/web/20090722072750/http://www.duluth.umn.edu/~ddunham/
286
cs4521s09/notes/ch11.txt
287 https://en.wikipedia.org/wiki/Wayback_Machine
288 https://www.linkedin.com/pulse/inside-latency-hash-table-operations-li-andy-ke/
https://www.linkedin.com/pulse/k-hash-table-design-low-latency-applications-li-andy-
289
ke/


31. ”LHM (J P SE 7 )”290 . docs.oracle.com. Retrieved May


1, 2020.
32. Alexander Klink and Julian Wälde's Efficient Denial of Service Attacks on Web Ap-
plication Platforms291 Archived292 September 16, 2016, at the Wayback Machine293 ,
December 28, 2011, 28th Chaos Communication Congress. Berlin, Germany.
33. Mike Lennon ”Hash Table Vulnerability Enables Wide-Scale DDoS At-
tacks”294 Archived295 September 19, 2016, at the Wayback Machine296 . 2011.
34. ”H P' H F”297 . N 6, 2013. A298 
   S 16, 2016.
35. Crosby and Wallach. Denial of Service via Algorithmic Complexity At-
tacks299 Archived300 March 4, 2016, at the Wayback Machine301 . quote: ”modern
universal hashing techniques can yield performance comparable to commonplace hash
functions while being provably secure against these attacks.” ”Universal hash functions
... are ... a solution suitable for adversarial environments. ... in production systems.”
36. B-Y, N; W, A (2007). Remote algorithmic complexity attacks
against randomized hash tables Proc. International Conference on Security and Cryp-
tography (SECRYPT)302 (PDF). . 124. A303 (PDF)   
 S 16, 2014.
37. ”S (J P SE 7 )”304 . docs.oracle.com. Retrieved May 1, 2020.
38. ”T T - C ”305 .
www.chessprogramming.org. Retrieved May 1, 2020.
39. ”L: I (T J™ T > C)”306 .
docs.oracle.com. Archived307 from the original on January 18, 2017. Retrieved April
27, 2018.

290 https://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html
291 https://events.ccc.de/congress/2011/Fahrplan/attachments/2007_28C3_Effective_DoS_on_web_application_platforms.pdf
292 https://web.archive.org/web/20160916135747/https://events.ccc.de/congress/2011/Fahrplan/attachments/2007_28C3_Effective_DoS_on_web_application_platforms.pdf
293 https://en.wikipedia.org/wiki/Wayback_Machine
294 http://www.securityweek.com/hash-table-collision-attacks-could-trigger-ddos-massive-scale
295 https://web.archive.org/web/20160919171346/http://www.securityweek.com/hash-table-collision-attacks-could-trigger-ddos-massive-scale
296 https://en.wikipedia.org/wiki/Wayback_Machine
297 http://blog.booking.com/hardening-perls-hash-function.html
298 https://web.archive.org/web/20160916233851/http://blog.booking.com/hardening-perls-hash-function.html
299 https://www.usenix.org/conference/12th-usenix-security-symposium/denial-service-algorithmic-complexity-attacks
300 https://web.archive.org/web/20160304121231/https://www.usenix.org/conference/12th-usenix-security-symposium/denial-service-algorithmic-complexity-attacks
301 https://en.wikipedia.org/wiki/Wayback_Machine
302 https://www.eng.tau.ac.il/~yash/C2_039_Wool.pdf
303 https://web.archive.org/web/20140916120456/http://www.eng.tau.ac.il/%7Eyash/C2_039_Wool.pdf
304 https://docs.oracle.com/javase/7/docs/api/java/util/Set.html
305 https://www.chessprogramming.org/Transposition_Table
306 https://docs.oracle.com/javase/tutorial/collections/implementations/index.html
307 https://web.archive.org/web/20170118041252/https://docs.oracle.com/javase/tutorial/collections/implementations/index.html


40. ”P: L  D    ”308 . stackoverflow.com. Archived309


from the original on December 2, 2017. Retrieved April 27, 2018.
41. D F H. ”A    P
3.6+?”310 . Stack Overflow.
42. D V (J 19, 2018). ”D Y K H H T W?
(R E)”311 . anadea.info. Retrieved July 3, 2019.
43. J S (D 25, 2016). ”R 2.4 R: F H,
U I  B R”312 . heroku.com. Retrieved July 3, 2019.
44. W, E. ”H T S 2: R   I M-
”313 . LuaHashMap: An easy to use hash table library for C. PlayControl Soft-
ware. Archived from the original314 on October 14, 2013. Retrieved October 24, 2019.
Did Tcl win? In any case, these benchmarks showed that these interpreter implemen-
tations have very good hash implementations and are competative with our reference
benchmark of the STL unordered_map. Particularly in the case of Tcl and Lua, they
were extremely competative and often were within 5%-10% of unordered_map when
they weren't beating it. (On 2019-10-24, the original site still has the text, but the
figures appear to be broken, whereas they are intact in the archive.)
45. M, D P.; S, S315 (O 28, 2004). Handbook of Datas-
tructures and Applications. p. 9-15. ISBN316 978-1-58488-435-4317 .

21.12 Further reading


• T, R; G, M T. (2006). ”C N: M 
D”. Data structures and algorithms in Java : [updated for Java 5.0] (4th
ed.). Hoboken, NJ: Wiley. pp. 369–418. ISBN318 978-0-471-73884-8319 .
• MK, B. J.; H, R.; B, T. (F 1990). ”S
  ”. Software Practice & Experience. 20 (2): 209–224.
doi320 :10.1002/spe.4380200207321 . hdl322 :10092/9691323 .

21.13 External links

308 https://stackoverflow.com/questions/513882/python-list-vs-dict-for-look-up-table
309 https://web.archive.org/web/20171202040407/https://stackoverflow.com/questions/513882/python-list-vs-dict-for-look-up-table
310 https://stackoverflow.com/a/39980744/
311 https://anadea.info/blog/how-hash-table-works-ruby-examples
312 https://blog.heroku.com/ruby-2-4-features-hashes-integers-rounding#hash-changes
313 https://web.archive.org/web/20131014184050/http://playcontrol.net/opensource/LuaHashMap/benchmarks.html
314 https://playcontrol.net/opensource/LuaHashMap/benchmarks.html
315 https://en.wikipedia.org/wiki/Sartaj_Sahni
316 https://en.wikipedia.org/wiki/ISBN_(identifier)
317 https://en.wikipedia.org/wiki/Special:BookSources/978-1-58488-435-4
318 https://en.wikipedia.org/wiki/ISBN_(identifier)
319 https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-73884-8
320 https://en.wikipedia.org/wiki/Doi_(identifier)
321 https://doi.org/10.1002%2Fspe.4380200207
322 https://en.wikipedia.org/wiki/Hdl_(identifier)
323 http://hdl.handle.net/10092%2F9691


Wikimedia Commons has media related to Hash tables324 .

Wikibooks has a book on the topic of: Data Structures/Hash Tables325

• A Hash Function for Hash Table Lookup326 by Bob Jenkins.


• Hash functions327 by Paul Hsieh
• Design of Compact and Efficient Hash Tables for Java328
• NIST329 entry on hash tables330
• Lecture on Hash Tables from Stanford's CS106A331
• Open Data Structures – Chapter 5 – Hash Tables332 , Pat Morin333
• MIT's Introduction to Algorithms: Hashing 1334 MIT OCW lecture Video
• MIT's Introduction to Algorithms: Hashing 2335 MIT OCW lecture Video

Data structures

324 https://commons.wikimedia.org/wiki/Category:Hash_tables
325 https://en.wikibooks.org/wiki/Data_Structures/Hash_Tables
326 http://www.burtleburtle.net/bob/hash/doobs.html
327 http://www.azillionmonkeys.com/qed/hash.html
328 https://web.archive.org/web/20110505033634/http://blog.griddynamics.com/2011/03/ultimate-sets-and-maps-for-java-part-i.html
329 https://en.wikipedia.org/wiki/NIST
330 https://xlinux.nist.gov/dads/HTML/hashtab.html
331 https://web.stanford.edu/class/archive/cs/cs106a/cs106a.1178/lectures/Lecture20/Lecture20.pdf
332 http://opendatastructures.org/versions/edition-0.1e/ods-java/5_Hash_Tables.html
333 https://en.wikipedia.org/wiki/Pat_Morin
334 http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/video-lectures/lecture-7-hashing-hash-functions/
335 http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/video-lectures/lecture-8-universal-hashing-perfect-hashing/

22 Hash function

This article is about a computer programming construct. For other meanings of ”hash” and
”hashing”, see Hash (disambiguation)1 .


1 https://en.wikipedia.org/wiki/Hash_(disambiguation)


Figure 61 A hash function that maps names to integers from 0 to 15. There is a
collision between keys ”John Smith” and ”Sandra Dee”.

A hash function is any function12 that can be used to map data13 of arbitrary size to
fixed-size values. The values returned by a hash function are called hash values, hash codes,
digests, or simply hashes. The values are used to index a fixed-size table called a hash
table14 . Use of a hash function to index a hash table is called hashing or scatter storage
addressing.
Hash functions and their associated hash tables are used in data storage and retrieval appli-
cations to access data in a small and nearly constant time per retrieval, and storage space
only fractionally greater than the total space required for the data or records themselves.
Hashing is a computationally and storage space efficient form of data access which avoids
the non-linear access time of ordered and unordered lists and structured trees, and the often
exponential storage requirements of direct access of state spaces of large or variable-length
keys.

12 https://en.wikipedia.org/wiki/Function_(mathematics)
13 https://en.wikipedia.org/wiki/Data_(computing)
14 https://en.wikipedia.org/wiki/Hash_table


Use of hash functions relies on statistical properties of key and function interaction: worst
case behavior is intolerably bad with a vanishingly small probability, and average case
behavior can be nearly optimal (minimal collisions).[1]
Hash functions are related to (and often confused with) checksums15 , check digits16 , fin-
gerprints17 , lossy compression18 , randomization functions19 , error-correcting codes20 , and
ciphers21 . Although the concepts overlap to some extent, each one has its own uses and
requirements and is designed and optimized differently.

22.1 Overview

A hash function takes an input as a key, which is associated with a datum or record and
used to identify it to the data storage and retrieval application. The keys may be fixed
length, like an integer, or variable length, like a name. In some cases, the key is the datum
itself. The output is a hash code used to index a hash table holding the data or records, or
pointers to them.
A hash function may be considered to perform three functions:
• Convert variable length keys into fixed length (usually machine word length or less)
values, by folding them by words or other units using a parity-preserving operator like
ADD or XOR.
• Scramble the bits of the key so that the resulting values are uniformly distributed over
the key space.
• Map the key values into ones less than or equal to the size of the table
A good hash function satisfies two basic properties: 1) it should be very fast to compute;
2) it should minimize duplication of output values (collisions). Hash functions rely on gen-
erating favorable probability distributions for their effectiveness, reducing access time to
nearly constant. High table loading factors, pathological22 key sets and poorly designed
hash functions can result in access times approaching linear in the number of items in the
table. Hash functions can be designed to give best worst-case performance,[Notes 1] good
performance under high table loading factors, and in special cases, perfect (collisionless)
mapping of keys into hash codes. Implementation is based on parity-preserving bit opera-
tions (XOR and ADD), multiply, or divide. A necessary adjunct to the hash function is a
collision-resolution method that employs an auxiliary data structure like linked lists23 , or
systematic probing of the table to find an empty slot.
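The three steps above can be sketched in code. This is only an illustration, not a standard algorithm: the byte-wise fold, the shift amounts, and the 64-bit mixing constant (borrowed from a common hash finalizer) are arbitrary choices.

```python
def hash_sketch(key: bytes, table_size: int) -> int:
    """Illustrative three-step hash: fold, scramble, map."""
    # 1. Fold the variable-length key into one 64-bit word,
    #    folding overflow back in with XOR.
    h = 0
    for byte in key:
        h = ((h << 8) | byte) ^ (h >> 56)
        h &= 0xFFFFFFFFFFFFFFFF          # keep to 64 bits
    # 2. Scramble the bits so values spread uniformly
    #    (multiply-xorshift mix; the constant is an arbitrary choice).
    h ^= h >> 33
    h = (h * 0xFF51AFD7ED558CCD) & 0xFFFFFFFFFFFFFFFF
    h ^= h >> 33
    # 3. Map the scrambled value into the table's index range.
    return h % table_size
```

The same key always maps to the same slot, and any slot in range can be produced.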

15 https://en.wikipedia.org/wiki/Checksums
16 https://en.wikipedia.org/wiki/Check_digit
17 https://en.wikipedia.org/wiki/Fingerprint_(computing)
18 https://en.wikipedia.org/wiki/Lossy_compression
19 https://en.wikipedia.org/wiki/Randomization_function
20 https://en.wikipedia.org/wiki/Error_correction_code
21 https://en.wikipedia.org/wiki/Cipher
22 https://en.wikipedia.org/wiki/Pathological_(mathematics)
23 https://en.wikipedia.org/wiki/Linked_list


22.2 Hash tables

Main article: Hash table24 Hash functions are used in conjunction with hash tables25 to store
and retrieve data items or data records. The hash function translates the key associated
with each datum or record into a hash code which is used to index the hash table. When
an item is to be added to the table, the hash code may index an empty slot (also called a
bucket), in which case the item is added to the table there. If the hash code indexes a full
slot, some kind of collision resolution is required: the new item may be omitted (not added
to the table), or replace the old item, or it can be added to the table in some other location
by a specified procedure. That procedure depends on the structure of the hash table: In
chained hashing, each slot is the head of a linked list or chain, and items that collide at the
slot are added to the chain. Chains may be kept in random order and searched linearly, or
in serial order, or as a self-ordering list by frequency to speed up access. In open address
hashing, the table is probed starting from the occupied slot in a specified manner, usually
by linear probing26 , quadratic probing27 , or double hashing28 until an open slot is located
or the entire table is probed (overflow). Searching for the item follows the same procedure
until the item is located, an open slot is found or the entire table has been searched (item
not in table).
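The chained-hashing procedure described above can be sketched as follows; this is a minimal illustration (Python's built-in hash stands in for the hash function, and the class and method names are invented for the example):

```python
class ChainedHashTable:
    """Minimal separate-chaining hash table: each slot heads a chain."""

    def __init__(self, num_slots=8):
        self.slots = [[] for _ in range(num_slots)]

    def _chain(self, key):
        # The hash code indexes the table of chains.
        return self.slots[hash(key) % len(self.slots)]

    def put(self, key, value):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)   # same key: replace the old item
                return
        chain.append((key, value))        # colliding keys extend the chain

    def get(self, key):
        for k, v in self._chain(key):     # search the chain linearly
            if k == key:
                return v
        raise KeyError(key)               # whole chain searched: not in table
```

With only a few slots, distinct keys are forced to collide, and the chains absorb the collisions exactly as described for chained hashing.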

22.2.1 Specialized uses

Hash functions are also used to build caches29 for large data sets stored in slow media. A
cache is generally simpler than a hashed search table, since any collision can be resolved by
discarding or writing back the older of the two colliding items.
Hash functions are an essential ingredient of the Bloom filter31 , a space-efficient probabilis-
tic32 data structure33 that is used to test whether an element34 is a member of a set35 .
A special case of hashing is known as geometric hashing36 or the grid method. In these
applications, the set of all inputs is some sort of metric space37 , and the hashing function
can be interpreted as a partition38 of that space into a grid of cells. The table is often
an array with two or more indices (called a grid file39 , grid index, bucket grid, and similar
names), and the hash function returns an index tuple40 . This principle is widely used in

24 https://en.wikipedia.org/wiki/Hash_table
25 https://en.wikipedia.org/wiki/Hash_tables
26 https://en.wikipedia.org/wiki/Linear_probing
27 https://en.wikipedia.org/wiki/Quadratic_probing
28 https://en.wikipedia.org/wiki/Double_hashing
29 https://en.wikipedia.org/wiki/Cache_(computing)
31 https://en.wikipedia.org/wiki/Bloom_filter
32 https://en.wikipedia.org/wiki/Probability
33 https://en.wikipedia.org/wiki/Data_structure
34 https://en.wikipedia.org/wiki/Element_(mathematics)
35 https://en.wikipedia.org/wiki/Set_(computer_science)
36 https://en.wikipedia.org/wiki/Geometric_hashing
37 https://en.wikipedia.org/wiki/Metric_space
38 https://en.wikipedia.org/wiki/Partition_(mathematics)
39 https://en.wikipedia.org/wiki/Grid_file
40 https://en.wikipedia.org/wiki/Tuple


computer graphics41 , computational geometry42 and many other disciplines, to solve many
proximity problems43 in the plane44 or in three-dimensional space45 , such as finding closest
pairs46 in a set of points, similar shapes in a list of shapes, similar images47 in an image
database48 , and so on.
Hash tables are also used to implement associative arrays49 and dynamic sets50 .[2]

22.3 Properties


22.3.1 Uniformity

A good hash function should map the expected inputs as evenly as possible over its output
range. That is, every hash value in the output range should be generated with roughly the
same probability61 . The reason for this last requirement is that the cost of hashing-based
methods goes up sharply as the number of collisions—pairs of inputs that are mapped to
the same hash value—increases. If some hash values are more likely to occur than others, a
larger fraction of the lookup operations will have to search through a larger set of colliding
table entries.

41 https://en.wikipedia.org/wiki/Computer_graphics
42 https://en.wikipedia.org/wiki/Computational_geometry
43 https://en.wikipedia.org/wiki/Proximity_problem
44 https://en.wikipedia.org/wiki/Plane_(geometry)
45 https://en.wikipedia.org/wiki/Three-dimensional_space
46 https://en.wikipedia.org/wiki/Closest_pair_problem
47 https://en.wikipedia.org/wiki/Image_processing
48 https://en.wikipedia.org/wiki/Image_retrieval
49 https://en.wikipedia.org/wiki/Associative_array
50 https://en.wikipedia.org/wiki/Set_(abstract_data_type)
61 https://en.wikipedia.org/wiki/Probability


Note that this criterion only requires the value to be uniformly distributed, not random in
any sense. A good randomizing function is (barring computational efficiency concerns)
generally a good choice as a hash function, but the converse need not be true.
Hash tables often contain only a small subset of the valid inputs. For instance, a club
membership list may contain only a hundred or so member names, out of the very large
set of all possible names. In these cases, the uniformity criterion should hold for almost all
typical subsets of entries that may be found in the table, not just for the global set of all
possible entries.
In other words, if a typical set of m records is hashed to n table slots, the probability of a
bucket receiving many more than m/n records should be vanishingly small. In particular,
if m is less than n, very few buckets should have more than one or two records. A small
number of collisions is virtually inevitable, even if n is much larger than m – see the birthday
problem62 .
In special cases when the keys are known in advance and the key set is static, a hash function
can be found that achieves absolute (or collisionless) uniformity. Such a hash function is
said to be perfect63 . There is no algorithmic way of constructing such a function - searching
for one is a factorial64 function of the number of keys to be mapped versus the number of
table slots they're mapped into. Finding a perfect hash function over more than a very
small set of keys is usually computationally infeasible; the resulting function is likely to be
more computationally complex than a standard hash function, and provides only a marginal
advantage over a function with good statistical properties that yields a minimum number
of collisions. See universal hash function65 .

22.3.2 Testing and measurement

When testing a hash function, the uniformity of the distribution of hash values can be
evaluated by the chi-squared test66 . This test is a goodness-of-fit measure: it's the actual
distribution of items in buckets versus the expected (or uniform) distribution of items. The
formula is:

    [ ∑_{j=0}^{m−1} b_j (b_j + 1) / 2 ]  /  [ (n/2m)(n + 2m − 1) ]

where n is the number of keys, m is the number of buckets, and b_j is the number of items in
bucket j.
A ratio within one confidence interval (0.95 - 1.05) is indicative that the hash function
evaluated has an expected uniform distribution.
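The test can be computed directly from the formula. The helper below is a hypothetical sketch (not a library function); the simulation uses a seeded random generator to stand in for an ideal hash function, so the ratio should land near 1.0.

```python
import random

def uniformity_ratio(bucket_counts, n):
    """Goodness-of-fit ratio: observed sum of b_j(b_j+1)/2 divided by
    its expectation (n/2m)(n+2m-1) under uniform random hashing."""
    m = len(bucket_counts)
    observed = sum(b * (b + 1) / 2 for b in bucket_counts)
    expected = (n / (2 * m)) * (n + 2 * m - 1)
    return observed / expected

# Simulate an ideal hash: n keys thrown uniformly into m buckets.
rng = random.Random(1)
m, n = 1000, 100_000
counts = [0] * m
for _ in range(n):
    counts[rng.randrange(m)] += 1
ratio = uniformity_ratio(counts, n)
```

A ratio inside roughly 0.95 to 1.05 is consistent with a uniform distribution; a skewed hash (many keys piling into few buckets) drives the ratio well above 1.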
Hash functions can have some technical properties that make it more likely that they'll have
a uniform distribution when applied. One is the strict avalanche criterion67 : whenever a
single input bit is complemented, each of the output bits changes with a 50% probability.
The reason for this property is that selected subsets of the key space may have low variability.

62 https://en.wikipedia.org/wiki/Birthday_problem
63 https://en.wikipedia.org/wiki/Perfect_hash_function
64 https://en.wikipedia.org/wiki/Factorial
65 https://en.wikipedia.org/wiki/Universal_hashing
66 https://en.wikipedia.org/wiki/Chi-squared_test
67 https://en.wikipedia.org/wiki/Strict_avalanche_criterion


In order for the output to be uniformly distributed, a low amount of variability, even one bit,
should translate into a high amount of variability (i.e. distribution over the table space) in
the output. Each bit should change with probability 50% because if some bits are reluctant
to change, the keys become clustered around those values. If the bits want to change too
readily, the mapping is approaching a fixed XOR function of a single bit. Standard tests
for this property have been described in the literature.[3] The relevance of the criterion to a
multiplicative hash function is assessed here.[4]

22.3.3 Efficiency

In data storage and retrieval applications, use of a hash function is a trade off between search
time and data storage space. If search time were unbounded, a very compact unordered
linear list would be the best medium; if storage space were unbounded, a randomly accessible
structure indexable by the key value would be very large, very sparse, but very fast. A hash
function takes a finite amount of time to map a potentially large key space to a feasible
amount of storage space searchable in a bounded amount of time regardless of the number
of keys. In most applications, it is highly desirable that the hash function be computable
with minimum latency and secondarily in a minimum number of instructions.
Computational complexity varies with the number of instructions required and latency of
individual instructions, with the simplest being the bitwise methods (folding), followed by
the multiplicative methods, and the most complex (slowest) are the division-based methods.
Because collisions should be infrequent, and cause a marginal delay but are otherwise harm-
less, it's usually preferable to choose a faster hash function over one that needs more com-
putation but saves a few collisions.
Division-based implementations can be of particular concern, because division is micropro-
grammed on nearly all chip architectures. Divide (modulo) by a constant can be inverted to
become a multiply by the word-size multiplicative-inverse of the constant. This can be done
by the programmer, or by the compiler. Divide can also be reduced directly into a series of
shift-subtracts and shift-adds, though minimizing the number of such operations required
is a daunting problem; the number of assembly instructions resulting may be more than a
dozen, and swamp the pipeline. If the architecture has a hardware multiply functional unit,
the multiply-by-inverse is likely a better approach.
We can allow the table size n to not be a power of 2 and still not have to perform any
remainder or division operation, as these computations are sometimes costly. For example,
let n be significantly less than 2^b . Consider a pseudorandom number generator68 function
P(key) that is uniform on the interval [0, 2^b − 1]. A hash function uniform on the interval
[0, n−1] is n P(key)/2^b . We can replace the division by a (possibly faster) right bit shift69 :
(n × P(key)) >> b.
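This multiply-and-shift range reduction can be sketched in a few lines; here b = 32 is an assumed hash width, and the function name is invented for the example:

```python
B = 32  # assumed width in bits of the uniform hash value

def reduce_range(h, n):
    """Map a uniform B-bit value h into [0, n-1] without division:
    (n * h) >> B computes floor(n * h / 2**B)."""
    return (n * h) >> B
```

For example, reduce_range(0, 10) is 0 and reduce_range(2**32 - 1, 10) is 9, covering the whole target range without any modulo operation.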
If keys are being hashed repeatedly, and the hash function is costly, computing time can be
saved by precomputing the hash codes and storing them with the keys. Matching hash codes
almost certainly mean the keys are identical. This technique is used for the transposition

68 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
69 https://en.wikipedia.org/wiki/Bit_shifting


table in game-playing programs, which stores a 64-bit hashed representation of the board
position.

22.3.4 Universality

Main article: Universal hashing70 A universal hashing scheme is a randomized algorithm71


that selects a hashing function h among a family of such functions, in such a way that
the probability of a collision of any two distinct keys is 1/m, where m is the number of
distinct hash values desired—independently of the two keys. Universal hashing ensures (in
a probabilistic sense) that the hash function application will behave as well as if it were
using a random function, for any distribution of the input data. It will, however, have more
collisions than perfect hashing and may require more operations than a special-purpose
hash function.
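A classic universal family of this kind is h(x) = ((a·x + b) mod p) mod m, with p a prime larger than any key and a, b drawn at random for each table instance. The sketch below assumes integer keys below 2^31 − 1; the function names are invented for the example:

```python
import random

P = 2**31 - 1  # a prime larger than any key we will hash

def make_universal_hash(m, rng=random.Random(7)):
    """Draw one function h(x) = ((a*x + b) mod P) mod m at random
    from the multiplicative universal family."""
    a = rng.randrange(1, P)   # a must be nonzero
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = make_universal_hash(100)
```

Each call to make_universal_hash picks a fresh random member of the family, which is exactly what defeats an adversary who tries to precompute colliding keys.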

22.3.5 Applicability

A hash function should be applicable to all situations in which a hash function might be
used. A hash function that allows only certain table sizes, strings only up to a certain
length, or can't accept a seed (i.e. allow double hashing) isn't as useful as one that does.

22.3.6 Deterministic

A hash procedure must be deterministic72 —meaning that for a given input value it must
always generate the same hash value. In other words, it must be a function73 of the data to
be hashed, in the mathematical sense of the term. This requirement excludes hash functions
that depend on external variable parameters, such as pseudo-random number generators74
or the time of day. It also excludes functions that depend on the memory address of the
object being hashed in cases that the address may change during execution (as may happen
on systems that use certain methods of garbage collection75 ), although sometimes rehashing
of the item is possible.
The determinism is in the context of the reuse of the function. For example, Python76 adds
the feature that hash functions make use of a randomized seed that is generated once when
the Python process starts in addition to the input to be hashed.[5] The Python hash is still
a valid hash function when used within a single run. But if the values are persisted (for
example, written to disk) they can no longer be treated as valid hash values, since in the
next run the random value might differ.

70 https://en.wikipedia.org/wiki/Universal_hashing
71 https://en.wikipedia.org/wiki/Randomized_algorithm
72 https://en.wikipedia.org/wiki/Deterministic_algorithm
73 https://en.wikipedia.org/wiki/Function_(mathematics)
74 https://en.wikipedia.org/wiki/Pseudo-random_number_generator
75 https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
76 https://en.wikipedia.org/wiki/Python_(programming_language)


22.3.7 Defined range

It is often desirable that the output of a hash function have fixed size (but see below).
If, for example, the output is constrained to 32-bit integer values, the hash values can be
used to index into an array. Such hashing is commonly used to accelerate data searches.[6]
Producing fixed-length output from variable length input can be accomplished by breaking
the input data into chunks of specific size. Hash functions used for data searches use some
arithmetic expression which iteratively processes chunks of the input (such as the characters
in a string) to produce the hash value.[6]

22.3.8 Variable range

In many applications, the range of hash values may be different for each run of the program,
or may change along the same run (for instance, when a hash table needs to be expanded).
In those situations, one needs a hash function which takes two parameters—the input data
z, and the number n of allowed hash values.
A common solution is to compute a fixed hash function with a very large range (say, 0 to
232 − 1), divide the result by n, and use the division's remainder77 . If n is itself a power of
2, this can be done by bit masking78 and bit shifting79 . When this approach is used, the
hash function must be chosen so that the result has fairly uniform distribution between 0
and n − 1, for any value of n that may occur in the application. Depending on the function,
the remainder may be uniform only for certain values of n, e.g. odd80 or prime numbers81 .
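The remainder computation, and its bit-mask shortcut for power-of-two ranges, can be sketched as (the function name is invented for the example):

```python
def index_for(h, n):
    """Map a fixed hash value h into the current range [0, n-1]."""
    if n & (n - 1) == 0:       # n is a power of two:
        return h & (n - 1)     # bit mask replaces the remainder
    return h % n               # general case: division remainder
```

Both branches agree with h mod n; the mask branch merely avoids the division when n is a power of two.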

22.3.9 Variable range with minimal movement (dynamic hash function)

When the hash function is used to store values in a hash table that outlives the run of the
program, and the hash table needs to be expanded or shrunk, the hash table is referred to
as a dynamic hash table.
A hash function that will relocate the minimum number of records when the table is resized
is desirable. What is needed is a hash function H(z,n) – where z is the key being hashed and
n is the number of allowed hash values – such that H(z,n + 1) = H(z,n) with probability
close to n/(n + 1).
Linear hashing82 and spiral storage are examples of dynamic hash functions that execute in
constant time but relax the property of uniformity to achieve the minimal movement prop-
erty. Extendible hashing83 uses a dynamic hash function that requires space proportional
to n to compute the hash function, and it becomes a function of the previous keys that have
been inserted. Several algorithms that preserve the uniformity property but require time
proportional to n to compute the value of H(z,n) have been invented.

77 https://en.wikipedia.org/wiki/Modulo_operation
78 https://en.wikipedia.org/wiki/Mask_(computing)
79 https://en.wikipedia.org/wiki/Bit_shifting
80 https://en.wikipedia.org/wiki/Odd_number
81 https://en.wikipedia.org/wiki/Prime_number
82 https://en.wikipedia.org/wiki/Linear_hashing
83 https://en.wikipedia.org/wiki/Extendible_hashing


A hash function with minimal movement is especially useful in distributed hash tables85 .

22.3.10 Data normalization

In some applications, the input data may contain features that are irrelevant for comparison
purposes. For example, when looking up a personal name, it may be desirable to ignore
the distinction between upper and lower case letters. For such data, one must use a hash
function that is compatible with the data equivalence86 criterion being used: that is, any
two inputs that are considered equivalent must yield the same hash value. This can be
accomplished by normalizing the input before hashing it, as by upper-casing all letters.
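For the case-insensitive name lookup described above, the sketch is one line: normalize first, then hash (Python's built-in hash stands in for the hash function, and the helper name is invented):

```python
def normalized_hash(name: str) -> int:
    """Hash personal names case-insensitively by normalizing first."""
    return hash(name.casefold())   # equivalent inputs yield equal hashes
```

Any two spellings that differ only in letter case now satisfy the equivalence criterion: they produce the same hash value and land in the same bucket.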

22.4 Hashing integer data types

There are several common algorithms for hashing integers. The method giving the best
distribution is data-dependent. One of the simplest and most common methods in practice
is the modulo division method.

22.4.1 Identity hash function

If the data to be hashed is small enough, one can use the data itself (reinterpreted as
an integer) as the hashed value. The cost of computing this identity87 hash function is
effectively zero. This hash function is perfect88 , as it maps each input to a distinct hash
value.
The meaning of ”small enough” depends on the size of the type that is used as the hashed
value. For example, in Java89 , the hash code is a 32-bit integer. Thus the 32-bit integer
Integer and 32-bit floating-point Float objects can simply use the value directly; whereas
the 64-bit integer Long and 64-bit floating-point Double cannot use this method.
Other types of data can also use this hashing scheme. For example, when mapping character
strings90 between upper and lower case91 , one can use the binary encoding of each character,
interpreted as an integer, to index a table that gives the alternative form of that character
(”A” for ”a”, ”8” for ”8”, etc.). If each character is stored in 8 bits (as in extended ASCII92[7]
or ISO Latin 193 ), the table has only 2^8 = 256 entries; in the case of Unicode94 characters,
the table would have 17×2^16 = 1,114,112 entries.

85 https://en.wikipedia.org/wiki/Distributed_hash_table
86 https://en.wikipedia.org/wiki/Equivalence_relation
87 https://en.wikipedia.org/wiki/Identity_function
88 https://en.wikipedia.org/wiki/Perfect_hash_function
89 https://en.wikipedia.org/wiki/Java_(programming_language)
90 https://en.wikipedia.org/wiki/Character_string
91 https://en.wikipedia.org/wiki/Letter_case
92 https://en.wikipedia.org/wiki/ASCII
93 https://en.wikipedia.org/wiki/ISO_Latin_1
94 https://en.wikipedia.org/wiki/Unicode


The same technique can be used to map two-letter country codes95 like ”us” or ”za” to
country names (26^2 = 676 table entries), 5-digit zip codes like 13083 to city names (100000
entries), etc. Invalid data values (such as the country code ”xx” or the zip code 00000) may
be left undefined in the table or mapped to some appropriate ”null” value.
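A table lookup of this kind amounts to a perfect hash of the code itself. As a sketch (the function name is illustrative), a lower-case two-letter code indexes its 676-entry table like so:

```c
/* Map a two-letter lower-case country code to a unique slot in
   [0, 675]: 26^2 = 676 table entries, one per possible code. */
int country_code_index(const char *code)
{
    return (code[0] - 'a') * 26 + (code[1] - 'a');
}
```

Each valid code gets a distinct index; entries for invalid codes are simply left unset or point to a "null" value.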

22.4.2 Trivial hash function

If the keys are uniformly or sufficiently uniformly distributed over the key space, so that
the key values are essentially random, they may be considered to be already 'hashed'. In
this case, any subset of the bits in the key may be extracted and collated as an index
into the hash table. A simple such hash function would be to mask off the bottom m bits
to use as an index into a table of size 2^m.
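A minimal sketch of such a mask, assuming a 64-bit key and a table of size 2^m (the function name is illustrative):

```c
#include <stdint.h>

/* If keys are already essentially random, the bottom m bits alone
   serve as the hash: an index into a table of size 2^m. */
uint32_t trivial_hash(uint64_t key, unsigned m)
{
    return (uint32_t)(key & ((1ull << m) - 1));
}
```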

22.4.3 Folding

A folding hash code is produced by dividing the input into n sections of m bits, where 2^m is
the table size, and using a parity-preserving bitwise operation like ADD or XOR, to combine
the sections. The final operation is a mask or shift to trim off any excess bits at the high or
low end. For example, for a table size of 15 bits and key value of 0x0123456789ABCDEF,
there are 5 sections 0x4DEF, 0x1357, 0x159E, 0x091A and 0x8. Adding, we obtain 0x7AA4,
a 15-bit value.
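A folding hash of this kind can be sketched as follows. Note that where the section boundaries fall is a convention choice (here, 15-bit sections taken from the low end of the key), so the result of this particular sketch need not match the worked figures above:

```c
#include <stdint.h>

/* Fold a 64-bit key into a 15-bit index by splitting it into 15-bit
   sections from the low end, adding them, and masking off the carry. */
uint32_t fold15(uint64_t key)
{
    uint32_t h = 0;
    while (key != 0) {
        h += (uint32_t)(key & 0x7FFF);  /* take the low 15 bits */
        key >>= 15;
    }
    return h & 0x7FFF;                  /* trim excess carry bits */
}
```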

22.4.4 Mid-squares

A mid-squares hash code is produced by squaring the input and extracting an appropriate
number of middle digits or bits. For example, if the input is 123,456,789 and the hash table
size 10,000, squaring the key produces 15,241,578,750,190,521, so the hash code is taken as
the middle 4 digits of that 17-digit number (ignoring the high digit), namely 8750. The mid-squares
method produces a reasonable hash code if there are not a lot of leading or trailing zeros in
the key. This is a variant of multiplicative hashing, but not as good, because an arbitrary
key is not a good multiplier.
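The worked example above can be sketched in C; the divisor below is specific to 9-digit keys and a table of size 10,000, and would need adjusting for other digit counts:

```c
#include <stdint.h>

/* Mid-squares for a 9-digit key and table size 10,000: square the key
   (the product fits in 64 bits) and keep 4 digits from the middle. */
uint32_t mid_square(uint64_t key)
{
    uint64_t sq = key * key;
    return (uint32_t)((sq / 1000000u) % 10000u);
}
```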

22.4.5 Division hashing

A standard technique is to use a modulo function on the key, by selecting a divisor M
which is a prime number close to the table size, so h(K) = K mod M. The table size is
usually a power of 2. This gives a distribution from {0, ..., M − 1}. This gives good results over
a large number of key sets. A significant drawback of division hashing is that division is
microprogrammed on most modern architectures including x86, and can be 10 times slower
than multiply. A second drawback is that it won't break up clustered keys. For example,
the keys 123000, 456000, 789000, etc. modulo 1000 all map to the same address. This
technique works well in practice because many key sets are sufficiently random already, and
the probability that a key set will be cyclical by a large prime number is small.
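As a sketch of both points, assuming a nominal table size of 1000 and the nearby prime 997 as divisor: reducing modulo the prime separates the clustered keys above, whereas reducing modulo the table size itself would not.

```c
#include <stdint.h>

/* Division hashing: reduce the key by a prime close to the table
   size; 997 is a prime just under a nominal table size of 1000. */
uint32_t div_hash(uint64_t key)
{
    return (uint32_t)(key % 997u);
}
```

The clustered keys 123000, 456000 and 789000 are all congruent modulo 1000, but land in distinct slots modulo 997.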

95 https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2


22.4.6 Algebraic coding

Algebraic coding is a variant of the division method of hashing which uses division by a
polynomial modulo 2 instead of an integer to map n bits to m bits.[8] In this approach,
M = 2^m, and we postulate an mth-degree polynomial Z(x) = x^m + ζ_{m−1}x^{m−1} + ... + ζ_0.
A key K = (k_{n−1}...k_1 k_0)_2 can be regarded as the polynomial
K(x) = k_{n−1}x^{n−1} + ... + k_1 x + k_0. The remainder using polynomial arithmetic modulo
2 is K(x) mod Z(x) = h_{m−1}x^{m−1} + ... + h_1 x + h_0. Then h(K) = (h_{m−1}...h_1 h_0)_2. If Z(x) is
constructed to have t or fewer non-zero coefficients, then keys differing by t or fewer bits
are guaranteed to not collide.
Z, a function of k, t and n, a divisor of 2^k − 1, is constructed from the GF(2^k) field. Knuth
gives an example: for n = 15, m = 10 and t = 7, Z(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1. The
derivation is as follows:
Let S be the smallest set of integers such that {1, 2, ..., t} ⊆ S and (2j) mod n ∈ S for all
j ∈ S.[Notes 2]
Define P(x) = ∏_{j∈S} (x − α^j), where α is an nth root of unity in GF(2^k) and the
coefficients of P(x) are computed in that field. The degree of P(x) is then |S|. Since α^{2j}
is a root of P(x) whenever α^j is a root, the coefficients p_i of P(x) satisfy p_i^2 = p_i, so
they are all 0 or 1. If R(x) = r_{n−1}x^{n−1} + ... + r_1 x + r_0 is any nonzero polynomial
modulo 2 with at most t nonzero coefficients, then R(x) is not a multiple of P(x) modulo
2.[Notes 3] It follows that the corresponding hash function will map keys with fewer than t
bits in common to unique indices.[9]

The usual outcome is that either n will get large, or t will get large, or both, in order for the
scheme to be computationally feasible. Therefore, it is more suited to hardware or microcode
implementation.[10]
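The division by Z(x) modulo 2 is the same shift-and-XOR reduction used by CRCs. As a sketch, assuming Knuth's example polynomial Z(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 (encoded as the bit mask 0x537) and 15-bit keys:

```c
#include <stdint.h>

/* Remainder of K(x) divided by Z(x), all arithmetic over GF(2).
   Z(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 is the bit mask 0x537;
   keys are 15-bit, so reduction starts at bit 14. */
uint32_t poly_mod2(uint32_t key)
{
    const uint32_t z = 0x537;          /* coefficients of Z(x) */
    const int deg_z = 10;
    for (int i = 14; i >= deg_z; i--)
        if (key & (1u << i))
            key ^= z << (i - deg_z);   /* subtract (= XOR) shifted Z(x) */
    return key;                        /* 10-bit hash value */
}
```

Since Z(x) mod Z(x) = 0 and polynomials of degree below 10 are already reduced, the remainder always fits in m = 10 bits.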

22.4.7 Unique permutation hashing

See also unique permutation hashing, which has a guaranteed best worst-case insertion
time.[11]

22.4.8 Multiplicative hashing

Standard multiplicative hashing uses the formula h_a(K) = ⌊(aK mod W)/(W/M)⌋, which
produces a hash value in {0, . . . , M − 1}. The value a is an appropriately chosen value that
should be relatively prime99 to W ; it should be large and its binary representation a random
mix of 1's and 0's. An important practical special case occurs when W = 2^w and M = 2^m
are powers of 2 and w is the machine word size100 . In this case this formula becomes
h_a(K) = ⌊(aK mod 2^w)/2^(w−m)⌋. This is special because arithmetic modulo 2^w is done by
default in low-level programming languages and integer division by a power of 2 is simply
a right-shift, so, in C, for example, this function becomes
unsigned hash(unsigned K)
{
    /* a, w and m are constants fixed in advance */
    return (a*K) >> (w-m);
}

and for fixed m and w this translates into a single integer multiplication and right-shift
making it one of the fastest hash functions to compute.
Multiplicative hashing is susceptible to a ”common mistake” that leads to poor
diffusion—higher-value input bits do not affect lower-value output bits.[12] A transmuta-
tion on the input which shifts the span of retained top bits down and XORs or ADDs them
to the key before the multiplication step corrects for this. So the resulting function looks
like:[13]
unsigned hash(unsigned K)
{
    /* fold the retained top bits down before multiplying */
    K ^= K >> (w-m);
    return (a*K) >> (w-m);
}

99 https://en.wikipedia.org/wiki/Coprime_integers
100 https://en.wikipedia.org/wiki/Word_size


22.4.9 Fibonacci hashing

Fibonacci101 hashing is a form of multiplicative hashing in which the multiplier is 2^w/ϕ,
where w is the machine word length and ϕ (phi) is the golden ratio102 . ϕ is an irrational
number103 with decimal expansion 1.618033... A property of this multiplier is that it
uniformly distributes blocks of consecutive keys over the table space, with respect to any
block of bits in the key. Consecutive keys within the high bits or low bits of the key (or
some other field) are relatively common. The multipliers for various word lengths w are:
• 16: a=40503
• 32: a=2654435769
• 48: a=173961102589771 [Notes 4]
• 64: a=11400714819323198485 [Notes 5]
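Putting the multiplicative formula and the w = 32 multiplier together gives a complete sketch (the choice m = 10, i.e. a 1024-entry table, is illustrative):

```c
#include <stdint.h>

/* Fibonacci hashing for w = 32, m = 10: multiply by floor(2^32/phi)
   and keep the top 10 bits. Unsigned multiplication wraps modulo
   2^32 by itself, so no explicit modulo is needed. */
uint32_t fib_hash10(uint32_t key)
{
    const uint32_t a = 2654435769u;    /* floor(2^32 / phi) */
    return (a * key) >> (32 - 10);
}
```

Note how the consecutive keys 1, 2, 3 land far apart in the table, illustrating the spreading property described above.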

22.4.10 Zobrist hashing

Main articles: Tabulation hashing104 and Zobrist hashing105 Tabulation hashing, more gen-
erally known as Zobrist hashing after Albert Zobrist106 , an American computer scientist, is
a method for constructing universal families of hash functions by combining table lookup
with XOR operations. This algorithm has proven to be very fast and of high quality for
hashing purposes (especially hashing of integer-number keys).[14]
Zobrist hashing was originally introduced as a means of compactly representing chess po-
sitions in computer game playing programs. A unique random number was assigned to
represent each type of piece (six each for black and white) on each space of the board.
Thus a table of 64×12 such numbers is initialized at the start of the program. The random
numbers could be any length, but 64 bits was natural due to the 64 squares on the
board. A position was transcribed by cycling through the pieces in a position, indexing
the corresponding random numbers (vacant spaces were not included in the calculation),
and XORing them together (the starting value could be 0, the identity value for XOR, or a
random seed). The resulting value was reduced by modulo, folding or some other operation
to produce a hash table index. The original Zobrist hash was stored in the table as the
representation of the position.
Later, the method was extended to hashing integers by representing each byte in each of 4
possible positions in the word by a unique 32-bit random number. Thus, a table of 2^8×4 of
such random numbers is constructed. A 32-bit hashed integer is transcribed by successively
indexing the table with the value of each byte of the plain text integer and XORing the
loaded values together (again, the starting value can be the identity value or a random
seed). The natural extension to 64-bit integers is by use of a table of 2^8×8 64-bit random
numbers.
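The 32-bit integer variant can be sketched as follows. A real implementation fills the table with random numbers; here a fixed multiplicative fill is used purely so the sketch is deterministic (multiplying distinct indices by an odd constant yields distinct 32-bit entries):

```c
#include <stdint.h>

/* Zobrist-style hashing of a 32-bit integer: one table entry per
   (byte position, byte value) pair, combined with XOR. */
static uint32_t ztable[4][256];

static void zobrist_init(void)
{
    for (int pos = 0; pos < 4; pos++)
        for (int b = 0; b < 256; b++)   /* placeholder "random" fill */
            ztable[pos][b] = (uint32_t)(pos * 256 + b + 1) * 2654435761u;
}

uint32_t zobrist32(uint32_t key)
{
    uint32_t h = 0;                     /* 0 is the XOR identity */
    for (int pos = 0; pos < 4; pos++)
        h ^= ztable[pos][(key >> (8 * pos)) & 0xFF];
    return h;
}
```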

101 https://en.wikipedia.org/wiki/Fibonacci_number
102 https://en.wikipedia.org/wiki/Golden_ratio
103 https://en.wikipedia.org/wiki/Irrational_number
104 https://en.wikipedia.org/wiki/Tabulation_hashing
105 https://en.wikipedia.org/wiki/Zobrist_hashing
106 https://en.wikipedia.org/wiki/Albert_Lindsey_Zobrist


This kind of function has some nice theoretical properties, one of which is called 3-tuple
independence, meaning that every 3-tuple of keys is equally likely to be mapped to any
3-tuple of hash values.

22.4.11 Customized hash function

A hash function can be designed to exploit existing entropy in the keys. If the keys have
leading or trailing zeros, or particular fields that are unused, always zero or some other
constant, or generally vary little, then masking out only the volatile bits and hashing on
those will provide a better and possibly faster hash function. Selected divisors or multipliers
in the division and multiplicative schemes may make more uniform hash functions if the
keys are cyclic or have other redundancies.

22.5 Hashing variable-length data

When the data values are long (or variable-length) character strings107 —such as personal
names, web page addresses108 , or mail messages—their distribution is usually very uneven,
with complicated dependencies. For example, text in any natural language109 has highly
non-uniform distributions of characters110 , and character pairs111 , characteristic of the lan-
guage. For such data, it is prudent to use a hash function that depends on all characters of
the string—and depends on each character in a different way.

22.5.1 Middle and ends

Simplistic hash functions may add the first and last n characters of a string along with
the length, or form a word-size hash from the middle 4 characters of a string. This saves
iterating over the (potentially long) string, but hash functions which do not hash on all
characters of a string can readily become linear due to redundancies, clustering or other
pathologies in the key set. Such strategies may be effective as a custom hash function if the
structure of the keys is such that either the middle, ends or other field(s) are zero or some
other invariant constant that doesn't differentiate the keys; then the invariant parts of the
keys can be ignored.

22.5.2 Character folding

The paradigmatic example of folding by characters is to add up the integer values of all the
characters in the string. A better idea is to multiply the hash total by a constant, typically
a sizeable prime number, before adding in the next character, ignoring overflow. Using
exclusive 'or' instead of add is also a plausible alternative. The final operation would be a

107 https://en.wikipedia.org/wiki/Character_string
108 https://en.wikipedia.org/wiki/URL
109 https://en.wikipedia.org/wiki/Natural_language
110 https://en.wikipedia.org/wiki/Character_(computing)
111 https://en.wikipedia.org/wiki/Digraph_(computing)


modulo, mask, or other function to reduce the word value to an index the size of the table.
The weakness of this procedure is that information may cluster in the upper or lower bits
of the bytes, which clustering will remain in the hashed result and cause more collisions
than a proper randomizing hash. ASCII byte codes, for example, have an upper bit of 0
and printable strings don't use the first 32 byte codes, so the information (95 byte codes)
is clustered in the remaining bits in an unobvious manner.
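The multiply-and-add variant described above can be sketched as follows (the multiplier 31 is an illustrative choice; unsigned overflow simply wraps, as the text assumes):

```c
/* Character folding with a multiplier: scale the running total by a
   constant before adding each character. A final modulo or mask would
   reduce the result to the table size. */
unsigned fold_string(const char *s)
{
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h;
}
```

Replacing the addition with exclusive-or, as the text notes, is an equally plausible variant.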
The classic approach dubbed the PJW hash, based on the work of Peter J. Weinberger at
AT&T Bell Labs in the 1970s, was originally designed for hashing identifiers into compiler
symbol tables as given in the ”Dragon Book”.[15] This hash function offsets the bytes 4 bits
before ADDing them together. When the quantity wraps, the high 4 bits are shifted out
and if non-zero, XORed back into the low byte of the cumulative quantity. The result is
a word size hash code to which a modulo or other reducing operation can be applied to
produce the final hash index.
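One common formulation of the PJW hash (variants differ in exactly how the overflowed top bits are folded back, so this is a sketch rather than the canonical form) is:

```c
/* PJW-style hash: shift each byte in 4 bits up; when the top 4 bits
   become occupied, fold them back toward the low byte and clear them. */
unsigned pjw_hash(const char *s)
{
    unsigned h = 0, high;
    while (*s) {
        h = (h << 4) + (unsigned char)*s++;
        high = h & 0xF0000000u;
        if (high) {
            h ^= high >> 24;   /* XOR the overflow back in low */
            h &= ~high;        /* shift out the high 4 bits */
        }
    }
    return h;   /* apply a final modulo to get a table index */
}
```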
Today, especially with the advent of 64-bit word sizes, much more efficient variable-length
string hashing by word-chunks is available.

22.5.3 Word length folding

See also: Universal hashing § Hashing strings113 Modern microprocessors will allow for much
faster processing, if 8-bit character strings are not hashed by processing one character at
a time, but by interpreting the string as an array of 32-bit or 64-bit integers and
hashing/accumulating these ”wide word” integer values by means of arithmetic operations (e.g.
multiplication by constant and bit-shifting). The final word, which may have unoccupied
byte positions, is filled with zeros or a specified ”randomizing” value before being folded into
the hash. The accumulated hash code is reduced by a final modulo or other operation to
yield an index into the table.

22.5.4 Radix conversion hashing

Analogous to the way an ASCII or EBCDIC character string representing a decimal number
is converted to a numeric quantity for computing, a variable length string can be converted
as (x_0·a^(k−1) + x_1·a^(k−2) + ... + x_(k−2)·a + x_(k−1)). This is simply a polynomial in a non-zero ”radix” a ≠ 1
that takes the components (x_0, x_1, ..., x_(k−1)) as the characters of the input string of length k. It
can be used directly as the hash code, or a hash function applied to it to map the potentially
large value to the hash table size. The value of a is usually a prime number at least large
enough to hold the number of different characters in the character set of potential keys.
Radix conversion hashing of strings minimizes the number of collisions.[16] Available data
sizes may restrict the maximum length of string that can be hashed with this method.
For example, a 128-bit double long word will hash only a 26 character alphabetic string
(ignoring case) with a radix of 29; a printable ASCII string is limited to 9 characters using
radix 97 and a 64-bit long word. However, alphabetic keys are usually of modest length,
because keys must be stored in the hash table. Numeric character strings are usually not a
problem; 64 bits can count up to 10^19, or 19 decimal digits with radix 10.
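For lower-case alphabetic keys with radix 29, a sketch looks like the following (the mapping of 'a'..'z' to the digit values 1..26 is an illustrative convention):

```c
#include <stdint.h>

/* Radix-conversion hash: treat the string as a number in base 29,
   a prime at least as large as the 26-letter alphabet. */
uint64_t radix29(const char *s)
{
    uint64_t h = 0;
    while (*s)
        h = h * 29 + (uint64_t)(*s++ - 'a' + 1);
    return h;   /* may still need reduction to the table size */
}
```

Note that "ab" and "ba" hash differently even though they contain the same letters, which is the collision-minimizing property claimed above.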

113 https://en.wikipedia.org/wiki/Universal_hashing#Hashing_strings


22.5.5 Rolling hash

Main article: Rolling hash114 See also: Linear congruential generator115 In some applica-
tions, such as substring search116 , one can compute a hash function h for every k-character
substring117 of a given n-character string by advancing a window of width k characters
along the string; where k is a fixed integer, and n is greater than k. The straightforward
solution, which is to extract such a substring at every character position in the text and
compute h separately, requires a number of operations proportional to k·n. However, with
the proper choice of h, one can use the technique of rolling hash to compute all those
hashes with an effort proportional to mk + n, where m is the number of occurrences of the
substring.
The most familiar algorithm of this type is Rabin-Karp120 with best and average case
performance O(n+mk) and worst case O(n·k) (in all fairness, the worst case here is gravely
pathological: both the text string and substring are composed of a repeated single character,
such as t=”AAAAAAAAAAA”, and s=”AAA”). The hash function used for the algorithm
is usually the Rabin fingerprint121 , designed to avoid collisions in 8-bit character strings,
but other suitable hash functions are also used.
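A rolling hash of the Rabin-Karp kind can be sketched as follows; the base 256 and the small prime modulus 101 are illustrative choices, and the two function names are invented for the example. The point is that `rh_next` updates the window hash in O(1): subtract the outgoing character's contribution, shift, and add the incoming character.

```c
#include <stddef.h>

enum { A = 256, M = 101 };   /* base and prime modulus (illustrative) */

static unsigned rh_pow;      /* A^(k-1) mod M, set by rh_init */

/* Hash the first k characters of s directly. */
unsigned rh_init(const char *s, size_t k)
{
    unsigned h = 0;
    rh_pow = 1;
    for (size_t i = 0; i < k; i++) {
        h = (h * A + (unsigned char)s[i]) % M;
        if (i + 1 < k)
            rh_pow = (rh_pow * A) % M;
    }
    return h;
}

/* Slide the window one character: drop `out`, append `in`. */
unsigned rh_next(unsigned h, char out, char in)
{
    h = (h + M - ((unsigned char)out * rh_pow) % M) % M;
    return (h * A + (unsigned char)in) % M;
}
```

Rolling from the window "abc" to "bcd" yields the same value as hashing "bcd" from scratch.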

22.6 Analysis

Worst case result for a hash function can be assessed two ways: theoretical and practical.
Theoretical worst case is the probability that all keys map to a single slot. Practical worst
case is expected longest probe sequence (hash function + collision resolution method). This
analysis considers uniform hashing, that is, any key will map to any particular slot with
probability 1/m, characteristic of universal hash functions.
While Knuth worries about adversarial attack on real time systems,[17] Gonnet has shown
that the probability of such a case is ”ridiculously small”. His representation was that the
probability of k of n keys mapping to a single slot is e^(−α) α^k / k!, where α is the load
factor, n/m.[18]

22.7 History

The term ”hash” offers a natural analogy with its non-technical meaning (to ”chop” or ”make
a mess” out of something), given how hash functions scramble their input data to derive
their output.[19] In his research for the precise origin of the term, Donald Knuth122 notes

114 https://en.wikipedia.org/wiki/Rolling_hash
115 https://en.wikipedia.org/wiki/Linear_congruential_generator
116 https://en.wikipedia.org/wiki/String_searching_algorithm
117 https://en.wikipedia.org/wiki/Substring
120 https://en.wikipedia.org/wiki/Rabin-Karp
121 https://en.wikipedia.org/wiki/Rabin_fingerprint
122 https://en.wikipedia.org/wiki/Donald_Knuth


that, while Hans Peter Luhn123 of IBM124 appears to have been the first to use the concept
of a hash function in a memo dated January 1953, the term itself would only appear in
published literature in the late 1960s, in Herbert Hellerman's Digital Computer System
Principles, even though it was already widespread jargon by then.[20]

22.8 See also

Look up hash125 in Wiktionary, the free dictionary.

• List of hash functions126


• Nearest neighbor search127
• Cryptographic hash function128
• Distributed hash table129
• Identicon130
• Low-discrepancy sequence131
• Transposition table132

22.9 Notes
1. Useful in cases where keys are devised by a malicious agent, for example in pursuit of
a DoS attack.
2. For example, for n=15, k=4, t=6, S = {1, 2, 3, 4, 5, 6, 8, 10, 12, 9} [Knuth]
3. Knuth conveniently leaves the proof of this to the reader.
4. Unisys large systems
5. 11400714819323198486 is closer, but the bottom bit is zero, essentially throwing away
a bit. The next closest odd number is that given.

22.10 References
1. Knuth, D. 1973, The Art of Computer Programming, Vol. 3, Sorting and Searching, p. 527.
Addison-Wesley, Reading, MA., United States

123 https://en.wikipedia.org/wiki/Hans_Peter_Luhn
124 https://en.wikipedia.org/wiki/IBM
125 https://en.wiktionary.org/wiki/hash
126 https://en.wikipedia.org/wiki/List_of_hash_functions
127 https://en.wikipedia.org/wiki/Nearest_neighbor_search
128 https://en.wikipedia.org/wiki/Cryptographic_hash_function
129 https://en.wikipedia.org/wiki/Distributed_hash_table
130 https://en.wikipedia.org/wiki/Identicon
131 https://en.wikipedia.org/wiki/Low-discrepancy_sequence
132 https://en.wikipedia.org/wiki/Transposition_table


2. M, A J.;  O, P C.; V, S A (1996).
Handbook of Applied Cryptography133 . CRC P. ISBN134 978-0849385230135 .
3. Castro et al., 2005, ”The strict avalanche criterion randomness test”, Mathematics
and Computers in Simulation 68 (2005) 1–7, Elsevier.
4. Malte Skarupke, 2018, ”Fibonacci Hashing: The Optimization that the World Forgot
(or: a Better Alternative to Integer Modulo)”
5. ”3. Data model — Python 3.6.1 documentation”136 . docs.python.org. Re-
trieved 2017-03-24.
6. Sedgewick, Robert (2002). ”14. Hashing”. Algorithms in Java (3rd ed.). Addison
Wesley. ISBN137 978-0201361209138 .
7. Plain ASCII is a 7-bit character encoding, although it is often stored in 8-bit bytes
with the highest-order bit always clear (zero). Therefore, for plain ASCII, the bytes
have only 27 = 128 valid values, and the character translation table has only this
many entries.
8. Knuth, D. 1973, The Art of Computer Programming, Vol. 3, Sorting and Searching,
pp. 512–513. Addison-Wesley, Reading, MA., United States
9. Knuth, pp. 542–543
10. Knuth, ibid.
11. ”Unique permutation hashing”. doi139 :10.1016/j.tcs.2012.12.047140 .
12. ”CS 3110 Lecture 21: Hash functions”142 . Section ”Multiplicative hashing”.
13. Skarupke, Malte. ”Fibonacci Hashing: The Optimization that the World
Forgot”143 . probablydance.com. wordpress.com.
14. Zobrist, Albert L.144 (April 1970), A New Hashing Method with Application
for Game Playing145 (PDF), Tech. Rep. 88, Madison, Wisconsin: Computer
Sciences Department, University of Wisconsin.
15. Aho, Sethi, Ullman, 1986, Compilers: Principles, Techniques and Tools, p. 435.
Addison-Wesley, Reading, MA.
16. Performance in Practice of String Hashing Functions146 CiteSeerx147 : 10.1.1.18.7520148
17. Knuth, D. 1975, The Art of Computer Programming, Vol. 3, Sorting and Searching,
p. 540. Addison-Wesley, Reading, MA
18. Gonnet, G. 1978, ”Expected Length of the Longest Probe Sequence in Hash Code
Searching”, CS-RR-78-46, University of Waterloo, Ontario, Canada

133 https://archive.org/details/handbookofapplie0000mene
134 https://en.wikipedia.org/wiki/ISBN_(identifier)
135 https://en.wikipedia.org/wiki/Special:BookSources/978-0849385230
136 https://docs.python.org/3/reference/datamodel.html#object.__hash__
137 https://en.wikipedia.org/wiki/ISBN_(identifier)
138 https://en.wikipedia.org/wiki/Special:BookSources/978-0201361209
139 https://en.wikipedia.org/wiki/Doi_(identifier)
140 https://doi.org/10.1016%2Fj.tcs.2012.12.047
141 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
142 http://www.cs.cornell.edu/courses/cs3110/2008fa/lectures/lec21.html
https://probablydance.com/2018/06/16/fibonacci-hashing-the-optimization-that-the-
143
world-forgot-or-a-better-alternative-to-integer-modulo/
144 https://en.wikipedia.org/wiki/Albert_Lindsey_Zobrist
145 https://www.cs.wisc.edu/techreports/1970/TR88.pdf
146 http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.7520
147 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
148 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.7520


19. K, D E. (2000). Sorting and searching (2. ed., 6. printing, newly
updated and rev. ed.). Boston [u.a.]: Addison-Wesley. p. 514. ISBN149 978-0-201-
89685-5150 .
20. K, D E. (2000). Sorting and searching (2. ed., 6. printing, newly
updated and rev. ed.). Boston [u.a.]: Addison-Wesley. pp. 547–548. ISBN151 978-0-
201-89685-5152 .

22.11 External links

Look up hash153 in Wiktionary, the free dictionary.

• Calculate hash of a given value154 by Timo Denk


• The Goulburn Hashing Function155 (PDF156 ) by Mayur Patel
• Hash Function Construction for Textual and Geometrical Data Retrieval157 Latest Trends
on Computers, Vol.2, pp. 483–489, CSCC conference, Corfu, 2010

149 https://en.wikipedia.org/wiki/ISBN_(identifier)
150 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
151 https://en.wikipedia.org/wiki/ISBN_(identifier)
152 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
153 https://en.wiktionary.org/wiki/hash
154 http://tools.timodenk.com/?p=hash-function
https://web.archive.org/web/20090319175044/http://www.geocities.com/drone115b/
155
Goulburn06.pdf
156 https://en.wikipedia.org/wiki/Portable_Document_Format
https://web.archive.org/web/20120426035245/http://herakles.zcu.cz/~skala/PUBL/PUBL_
157
2010/2010_WSEAS-Corfu_Hash-final.pdf

23 Collision (computer science)

Not to be confused with Packet collision1 in telecommunications.


This article has multiple issues. Please help improve it2 or discuss these
issues on the talk page3 . (Learn how and when to remove these template mes-

This article relies largely or entirely on a single source5 . Relevant


sages4 ) discussion may be found on the talk page6 . Please help improve this arti-
cle7 by introducing citations8 to additional sources.
Find sources: ”Collision” computer science9 –
news10 · newspapers11 · books12 · scholar13 · JSTOR14 (May 2013)

This article needs additional citations for verification15 . Please help


improve this article16 by adding citations to reliable sources17 . Unsourced
material may be challenged and removed.
Find sources: ”Collision” computer science18 –
news19 · newspapers20 · books21 · scholar22 · JSTOR23 (May 2013)(Learn
how and when to remove this template message24 )
This article may contain excessive or inappropriate references to
self-published sources25 . Please help improve it26 by removing refer-
(Learn how
ences to unreliable sources27 where they are used inappropriately. (May
2013)(Learn how and when to remove this template message28 )
and when to remove this template message29 )

In computer science30 , a collision or clash is a situation that occurs when two distinct
pieces of data31 have the same hash value32 , checksum33 , fingerprint34 , or cryptographic
digest35 .[1]
Due to the possible applications of hash functions in data management36 and computer
security37 (in particular, cryptographic hash functions38 ), collision avoidance has become a
fundamental topic in computer science.

1 https://en.wikipedia.org/wiki/Collision_(telecommunications)
29 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
30 https://en.wikipedia.org/wiki/Computer_science
31 https://en.wikipedia.org/wiki/Data
32 https://en.wikipedia.org/wiki/Hash_function
33 https://en.wikipedia.org/wiki/Checksum
34 https://en.wikipedia.org/wiki/Fingerprint_(computing)
35 https://en.wikipedia.org/wiki/Cryptographic_hash_function
36 https://en.wikipedia.org/wiki/Data_management
37 https://en.wikipedia.org/wiki/Computer_security
38 https://en.wikipedia.org/wiki/Cryptographic_hash_function


Collisions are unavoidable whenever members of a very large set (such as all possible person
names, or all possible computer files39 ) are mapped40 to a relatively short bit string. This
is merely an instance of the pigeonhole principle41 .[1]
The impact of collisions depends on the application. When hash functions and fingerprints
are used to identify similar data, such as homologous42 DNA43 sequences or similar audio
files44 , the functions are designed so as to maximize the probability of collision between
distinct but similar data, using techniques like locality-sensitive hashing45 .[2] Checksums46 ,
on the other hand, are designed to minimize the probability of collisions between similar
inputs, without regard for collisions between very different inputs.[3]

23.1 Computer security

See also: Collision attack47
Because hash functions can map different data to the same hash (by virtue of the pigeonhole principle48), malicious users can take advantage of this to mimic data.[4]
For example, consider a hash function that hashes data by returning the first three characters of the string it is given (i.e., ”Password12345” goes to ”Pas”). A hacker who does not know the user's password could instead enter ”Pass”, which would generate the same hash value of ”Pas”. Even though the hacker does not know the correct password, they do have a password that gives them the same hash, which would give them access. This type of attack is called a preimage attack49.
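The toy scheme in this example can be written out directly (the function name is ours, and this is of course not a real hash function):

```python
# Toy hash from the example above: keep only the first three characters.
# Illustrative only; it shows how two distinct inputs share a digest.
def toy_hash(s: str) -> str:
    return s[:3]

stored = toy_hash("Password12345")   # what the system keeps: "Pas"
attempt = toy_hash("Pass")           # the attacker's guess hashes the same

assert stored == attempt == "Pas"    # the collision grants access in this toy scheme
```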
In practice, security-related applications use cryptographic hash algorithms, which are de-
signed to be long enough for random matches to be unlikely, fast enough that they can be
used anywhere, and safe enough that it would be extremely hard to find collisions.[3]

23.2 See also


• Birthday attack50
• Collision attack51 (against cryptographic hash functions)
• Collision resistance52
• Collision resolution53 (hash tables)

39 https://en.wikipedia.org/wiki/File_(computer)
40 https://en.wikipedia.org/wiki/Map_(mathematics)
41 https://en.wikipedia.org/wiki/Pigeonhole_principle
42 https://en.wikipedia.org/wiki/Homology_(biology)
43 https://en.wikipedia.org/wiki/DNA
44 https://en.wikipedia.org/wiki/Audio_file
45 https://en.wikipedia.org/wiki/Locality-sensitive_hashing
46 https://en.wikipedia.org/wiki/Checksum
47 https://en.wikipedia.org/wiki/Collision_attack
48 https://en.wikipedia.org/wiki/Pigeonhole_principle
49 https://en.wikipedia.org/wiki/Preimage_attack
50 https://en.wikipedia.org/wiki/Birthday_attack
51 https://en.wikipedia.org/wiki/Collision_attack
52 https://en.wikipedia.org/wiki/Collision_resistance
53 https://en.wikipedia.org/wiki/Hash_table#Collision_resolution


• Cryptographic hash function54


• Name collision55 , accidental use of identical variable names
• Password hash56 (key derivation function)
• Perfect hash function57 , a hash function that is free of collisions by design
• Preimage attack58
• Provably secure hash function59
• Random oracle60

23.3 References
1. J F (2008-07-18). ”W  H C R M?”61 . -
..: P  P. R 2011-03-24. For
the long explanation on cryptographic hashes and hash collisions, I wrote a column
a bit back for SNW Online, ”What you need to know about cryptographic hashes and
enterprise storage”. The short version is that deduplicating systems that use crypto-
graphic hashes use those hashes to generate shorter ”fingerprints” to uniquely identify
each piece of data, and determine if that data already exists in the system. The trouble
is, by a mathematical rule called the ”pigeonhole principle”, you can’t uniquely map
any possible files or file chunk to a shorter fingerprint. Statistically, there are multiple
possible files that have the same hash.
2. R, A.; U, J.62 (2010). ”M  M D, C. 3”63 .
3. A-K, S; D, J H.; B, R J. (2011). ”C-
 H F: R D T  S N”64 .
Cite journal requires |journal= (help65 )
4. S, B66 . ”C  MD5  SHA: T   N
S”67 . Computerworld. Retrieved 2016-04-20. Much more than encryption
algorithms, one-way hash functions are the workhorses of modern cryptography.

54 https://en.wikipedia.org/wiki/Cryptographic_hash_function
55 https://en.wikipedia.org/wiki/Name_collision
56 https://en.wikipedia.org/wiki/Password_hash
57 https://en.wikipedia.org/wiki/Perfect_hash_function
58 https://en.wikipedia.org/wiki/Preimage_attack
59 https://en.wikipedia.org/wiki/Provably_secure_hash_function
60 https://en.wikipedia.org/wiki/Random_oracle
61 https://permabit.wordpress.com/2008/07/18/what-do-hash-collisions-really-mean/
62 https://en.wikipedia.org/wiki/Jeffrey_Ullman
63 http://infolab.stanford.edu/~ullman/mmds.html
64 https://eprint.iacr.org/2011/565
65 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
66 https://en.wikipedia.org/wiki/Bruce_Schneier
67 https://www.schneier.com/essays/archives/2004/08/cryptanalysis_of_md5.html



24 Perfect hash function

Figure 66 A perfect hash function for the four names shown


Figure 67 A minimal perfect hash function for the four names shown

In computer science1 , a perfect hash function for a set S is a hash function2 that maps
distinct elements in S to a set of integers, with no collisions3 . In mathematical terms, it is
an injective function4 .
Perfect hash functions may be used to implement a lookup table5 with constant worst-case
access time. A perfect hash function has many of the same applications6 as other hash
functions, but with the advantage that no collision resolution7 has to be implemented. In
addition, if the keys are not the data, the keys do not need to be stored in the lookup table,
saving space.

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Hash_function
3 https://en.wikipedia.org/wiki/Hash_collision
4 https://en.wikipedia.org/wiki/Injective_function
5 https://en.wikipedia.org/wiki/Lookup_table
6 https://en.wikipedia.org/wiki/Hash_function#Applications
7 https://en.wikipedia.org/wiki/Hash_table#Collision_resolution


24.1 Application

A perfect hash function with values in a limited range can be used for efficient lookup
operations, by placing keys from S (or other associated values) in a lookup table8 indexed
by the output of the function. One can then test whether a key is present in S, or look up
a value associated with that key, by looking for it at its cell of the table. Each such lookup
takes constant time9 in the worst case10 .[1]

24.2 Construction

A perfect hash function for a specific set S that can be evaluated in constant time, and
with values in a small range, can be found by a randomized algorithm11 in a number of
operations that is proportional to the size of S. The original construction of Fredman,
Komlós & Szemerédi (1984)12 uses a two-level scheme to map a set S of n elements to a
range of O(n) indices, and then map each index to a range of hash values. The first level
of their construction chooses a large prime p (larger than the size of the universe13 from
which S is drawn), and a parameter k, and maps each element x of S to the index
g(x) = (kx mod p) mod n.
If k is chosen randomly, this step is likely to have collisions, but the number of elements
ni that are simultaneously mapped to the same index i is likely to be small. The second
level of their construction assigns disjoint ranges of O(ni 2 ) integers to each index i. It uses
a second set of linear modular functions, one for each index i, to map each member x of S
into the range associated with g(x).[1]
As Fredman, Komlós & Szemerédi (1984)14 show, there exists a choice of the parameter k
such that the sum of the lengths of the ranges for the n different values of g(x) is O(n).
Additionally, for each value of g(x), there exists a linear modular function that maps the
corresponding subset of S into the range associated with that value. Both k, and the second-
level functions for each value of g(x), can be found in polynomial time15 by choosing values
randomly until finding one that works.[1]
The hash function itself requires storage space O(n) to store k, p, and all of the second-level
linear modular functions. Computing the hash value of a given key x may be performed
in constant time by computing g(x), looking up the second-level function associated with
g(x), and applying this function to x. A modified version of this two-level scheme with a
larger number of values at the top level can be used to construct a perfect hash function
that maps S into a smaller range of length n + o(n).[1]
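A minimal sketch of this two-level construction, assuming integer keys smaller than an illustrative prime p (the helper names and the prime are ours, not from the literature):

```python
import random

def build_fks(S, p=10_000_019):
    # Sketch of the Fredman–Komlós–Szemerédi two-level scheme for a set S
    # of distinct non-negative integers smaller than the prime p.
    n = len(S)
    k = random.randrange(1, p)                  # first-level multiplier
    buckets = [[] for _ in range(n)]
    for x in S:
        buckets[(k * x % p) % n].append(x)      # g(x) = (kx mod p) mod n
    second = []                                 # (multiplier, range size) per bucket
    for b in buckets:
        m = max(1, len(b) ** 2)                 # O(n_i^2) slots for bucket i
        while True:                             # retry until injective on the bucket
            a = random.randrange(1, p)
            if len({(a * x % p) % m for x in b}) == len(b):
                second.append((a, m))
                break
    return k, second

def fks_hash(x, k, second, n, p=10_000_019):
    # Perfect hash value as (bucket index, slot within the bucket's range).
    i = (k * x % p) % n
    a, m = second[i]
    return i, (a * x % p) % m

S = [15, 29, 73, 105, 2048]
k, second = build_fks(S)
codes = {fks_hash(x, k, second, len(S)) for x in S}
assert len(codes) == len(S)                     # no two keys collide
```

Choosing the second-level range of size n_i² makes each random trial succeed with constant probability, so the expected construction time stays linear in |S|.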

8 https://en.wikipedia.org/wiki/Lookup_table
9 https://en.wikipedia.org/wiki/Constant_time
10 https://en.wikipedia.org/wiki/Worst-case_complexity
11 https://en.wikipedia.org/wiki/Randomized_algorithm
12 #CITEREFFredmanKoml%C3%B3sSzemer%C3%A9di1984
13 https://en.wikipedia.org/wiki/Universe_(mathematics)
14 #CITEREFFredmanKoml%C3%B3sSzemer%C3%A9di1984
15 https://en.wikipedia.org/wiki/Polynomial_time


24.3 Space lower bounds

The use of O(n) words of information to store the function of Fredman, Komlós & Szemerédi
(1984)16 is near-optimal: any perfect hash function that can be calculated in constant time
requires at least a number of bits that is proportional to the size of S.[2]

24.4 Extensions

24.4.1 Dynamic perfect hashing

Main article: Dynamic perfect hashing17
Using a perfect hash function is best in situations where there is a frequently queried large set, S, which is seldom updated. This is because
any modification of the set S may cause the hash function to no longer be perfect for the
modified set. Solutions which update the hash function any time the set is modified are
known as dynamic perfect hashing18 ,[3] but these methods are relatively complicated to
implement.

24.4.2 Minimal perfect hash function

A minimal perfect hash function is a perfect hash function that maps n keys to n consecutive
integers – usually the numbers from 0 to n − 1 or from 1 to n. A more formal way of
expressing this is: Let j and k be elements of some finite set S. Then F is a minimal perfect
hash function if and only if F(j) = F(k) implies j = k (injectivity19 ) and there exists an
integer a such that the range of F is a..a + |S| − 1. It has been proven that a general-purpose
minimal perfect hash scheme requires at least 1.44 bits/key.[4] The best currently
known minimal perfect hashing schemes can be represented using less than 1.56 bits/key if
given enough time.[5]

24.4.3 Order preservation

A minimal perfect hash function F is order preserving if keys are given in some order a1 , a2 ,
..., an and for any keys aj and ak , j < k implies F(aj ) < F(ak ).[6] In this case, the function
value is just the position of each key in the sorted ordering of all of the keys. A simple
implementation of order-preserving minimal perfect hash functions with constant access
time is to use an (ordinary) perfect hash function or cuckoo hashing20 to store a lookup
table of the positions of each key. If the keys to be hashed are themselves stored in a sorted
array, it is possible to store a small number of additional bits per key in a data structure
that can be used to compute hash values quickly.[7] Order-preserving minimal perfect hash
functions necessarily require Ω(n log n) bits to be represented.[8]
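The lookup-table implementation mentioned above can be sketched in a few lines, assuming an ordinary hash-based dictionary is acceptable as the underlying table (the names are ours):

```python
# Sketch of the lookup-table approach: an ordinary (collision-resolved)
# dictionary stores the rank of each key in sorted order, yielding an
# order-preserving minimal perfect hash for a fixed key set.
def build_order_preserving_mph(keys):
    ranks = {k: r for r, k in enumerate(sorted(keys))}
    return ranks.__getitem__          # F(key) -> position in sorted order

F = build_order_preserving_mph(["pear", "apple", "plum", "fig"])
assert F("apple") == 0 and F("fig") == 1 and F("pear") == 2 and F("plum") == 3
# a_j < a_k in key order implies F(a_j) < F(a_k), and values cover 0..n-1.
```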

16 #CITEREFFredmanKoml%C3%B3sSzemer%C3%A9di1984
17 https://en.wikipedia.org/wiki/Dynamic_perfect_hashing
18 https://en.wikipedia.org/wiki/Dynamic_perfect_hashing
19 https://en.wikipedia.org/wiki/Injectivity
20 https://en.wikipedia.org/wiki/Cuckoo_hashing


24.5 Related constructions

A simple alternative to perfect hashing, which also allows dynamic updates, is cuckoo
hashing21 . This scheme maps keys to two or more locations within a range (unlike perfect
hashing which maps each key to a single location) but does so in such a way that the keys
can be assigned one-to-one to locations to which they have been mapped. Lookups with
this scheme are slower, because multiple locations must be checked, but nevertheless take
constant worst-case time.[9]

24.6 References
1. F, M L.22 ; K, J23 ; S, E24 (1984), ”S-
  S T  O(1) Worst Case Access Time”, Journal of the ACM25 ,
31 (3): 538, doi26 :10.1145/828.188427 , MR28 081915629
2. F, M L.30 ; K, J31 (1984), ”O    
      ”, SIAM Journal on Algebraic
and Discrete Methods32 , 5 (1): 61–68, doi33 :10.1137/060500934 , MR35 073185736 .
3. D, M; K, A37 ; M, K38 ; M 
 H, F; R, H; T, R E.39 (1994), ”D
 :    ”, SIAM Journal on Computing40 ,
23 (4): 738–761, doi41 :10.1137/S009753979119409442 , MR43 128357244 .
4. B, D; B, F C.; D, M
(2009), ”H, ,  ”45 (PDF), Algorithms—ESA 2009: 17th
Annual European Symposium, Copenhagen, Denmark, September 7-9, 2009, Pro-

21 https://en.wikipedia.org/wiki/Cuckoo_hashing
22 https://en.wikipedia.org/wiki/Michael_Fredman
23 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
24 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
25 https://en.wikipedia.org/wiki/Journal_of_the_ACM
26 https://en.wikipedia.org/wiki/Doi_(identifier)
27 https://doi.org/10.1145%2F828.1884
28 https://en.wikipedia.org/wiki/MR_(identifier)
29 http://www.ams.org/mathscinet-getitem?mr=0819156
30 https://en.wikipedia.org/wiki/Michael_Fredman
31 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
32 https://en.wikipedia.org/wiki/SIAM_Journal_on_Algebraic_and_Discrete_Methods
33 https://en.wikipedia.org/wiki/Doi_(identifier)
34 https://doi.org/10.1137%2F0605009
35 https://en.wikipedia.org/wiki/MR_(identifier)
36 http://www.ams.org/mathscinet-getitem?mr=0731857
37 https://en.wikipedia.org/wiki/Anna_Karlin
38 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
39 https://en.wikipedia.org/wiki/Robert_Tarjan
40 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
41 https://en.wikipedia.org/wiki/Doi_(identifier)
42 https://doi.org/10.1137%2FS0097539791194094
43 https://en.wikipedia.org/wiki/MR_(identifier)
44 http://www.ams.org/mathscinet-getitem?mr=1283572
45 http://cmph.sourceforge.net/papers/esa09.pdf

359
Perfect hash function

ceedings46 (PDF), L N  C S47 , 5757, Berlin:


Springer, pp. 682–693, CiteSeerX48 10.1.1.568.13049 , doi50 :10.1007/978-3-642-04128-
0_6151 , MR52 255779453 .
5. E, E; M G, T; V, S (2020),
”RS: M P H  R S”, 2020 Proceed-
ings of the Symposium on Algorithm Engineering and Experiments (ALENEX), Pro-
ceedings54 , pp. 175–185, arXiv55 :1910.0641656 , doi57 :10.1137/1.9781611976007.1458 .
6. J, B (14 A 2009), ”-   ”,
 B, P E. (.), Dictionary of Algorithms and Data Structures59 , U.S.
N I  S  T,  2013-03-05
7. B, D; B, P; P, R; V, S
(N 2008), ”T      
”, Journal of Experimental Algorithmics, 16, Art. no. 3.2, 26pp,
doi60 :10.1145/1963190.202537861 .
8. F, E A.; C, Q F; D, A M.; H, L S. (J
1991), ”O-      
”62 (PDF), ACM Transactions on Information Systems, New York, NY,
USA: ACM, 9 (3): 281–308, doi63 :10.1145/125187.12520064 .
9. P, R; R, F F (2004), ”C ”, Journal
of Algorithms, 51 (2): 122–144, doi65 :10.1016/j.jalgor.2003.12.00266 , MR67 205014068 .

24.7 Further reading


• Richard J. Cichelli. Minimal Perfect Hash Functions Made Simple, Communications of
the ACM, Vol. 23, Number 1, January 1980.

46 http://cmph.sourceforge.net/papers/esa09.pdf
47 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
48 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
49 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.568.130
50 https://en.wikipedia.org/wiki/Doi_(identifier)
51 https://doi.org/10.1007%2F978-3-642-04128-0_61
52 https://en.wikipedia.org/wiki/MR_(identifier)
53 http://www.ams.org/mathscinet-getitem?mr=2557794
54 https://en.wikipedia.org/wiki/Proceedings
55 https://en.wikipedia.org/wiki/ArXiv_(identifier)
56 http://arxiv.org/abs/1910.06416
57 https://en.wikipedia.org/wiki/Doi_(identifier)
58 https://doi.org/10.1137%2F1.9781611976007.14
59 https://xlinux.nist.gov/dads/HTML/orderPreservMinPerfectHash.html
60 https://en.wikipedia.org/wiki/Doi_(identifier)
61 https://doi.org/10.1145%2F1963190.2025378
62 http://eprints.cs.vt.edu/archive/00000248/01/TR-91-01.pdf
63 https://en.wikipedia.org/wiki/Doi_(identifier)
64 https://doi.org/10.1145%2F125187.125200
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1016%2Fj.jalgor.2003.12.002
67 https://en.wikipedia.org/wiki/MR_(identifier)
68 http://www.ams.org/mathscinet-getitem?mr=2050140


• Thomas H. Cormen69 , Charles E. Leiserson70 , Ronald L. Rivest71 , and Clifford Stein72 .


Introduction to Algorithms73 , Third Edition. MIT Press, 2009. ISBN74 978-026203384875 .
Section 11.5: Perfect hashing, pp.267, 277−282.
• Fabiano C. Botelho, Rasmus Pagh and Nivio Ziviani. ”Perfect Hashing for Data Manage-
ment Applications”76 .
• Fabiano C. Botelho and Nivio Ziviani77 . ”External perfect hashing for very large key
sets”78 . 16th ACM Conference on Information and Knowledge Management (CIKM07),
Lisbon, Portugal, November 2007.
• Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. ”Monotone min-
imal perfect hashing: Searching a sorted table with O(1) accesses”79 . In Proceedings of
the 20th Annual ACM-SIAM Symposium On Discrete Mathematics (SODA), New York,
2009. ACM Press.
• Douglas C. Schmidt, GPERF: A Perfect Hash Function Generator80 , C++ Report, SIGS,
Vol. 10, No. 10, November/December, 1998.

24.8 External links


• gperf81 is an Open Source82 C and C++ perfect hash generator (very fast, but only works
for small sets)
• Minimal Perfect Hashing (bob algorithm)83 by Bob Jenkins
• cmph84 : C Minimal Perfect Hashing Library, open source implementations for many
(minimal) perfect hashes (works for big sets)
• Sux4J85 : open source monotone minimal perfect hashing in Java
• MPHSharp86 : perfect hashing methods in C#
• BBHash87 : minimal perfect hash function in header-only C++
• Perfect::Hash88 , perfect hash generator in Perl that makes C code. Has a ”prior art”
section worth looking at.

69 https://en.wikipedia.org/wiki/Thomas_H._Cormen
70 https://en.wikipedia.org/wiki/Charles_E._Leiserson
71 https://en.wikipedia.org/wiki/Ronald_L._Rivest
72 https://en.wikipedia.org/wiki/Clifford_Stein
73 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
74 https://en.wikipedia.org/wiki/ISBN_(identifier)
75 https://en.wikipedia.org/wiki/Special:BookSources/978-0262033848
76 https://arxiv.org/pdf/cs/0702159
77 https://en.wikipedia.org/wiki/Nivio_Ziviani
78 http://homepages.dcc.ufmg.br/~nivio/papers/cikm07.pdf
79 https://web.archive.org/web/20140125080021/http://vigna.dsi.unimi.it/ftp/papers/MonotoneMinimalPerfectHashing.pdf
80 http://www.dre.vanderbilt.edu/~schmidt/PDF/gperf.pdf
81 https://www.gnu.org/software/gperf/
82 https://en.wikipedia.org/wiki/Open_Source
83 http://burtleburtle.net/bob/hash/perfect.html
84 http://cmph.sourceforge.net/index.html
85 http://sux.di.unimi.it/
86 https://web.archive.org/web/20130729211948/http://www.dupuis.me/node/9
87 https://github.com/rizkg/BBHash
88 https://github.com/rurban/Perfect-Hash

25 Open addressing

Figure 68 Hash collision resolved by linear probing (interval=1).

Open addressing, or closed hashing, is a method of collision resolution in hash tables1 .


With this method a hash collision is resolved by probing, or searching through alternate
locations in the array (the probe sequence) until either the target record is found, or an
unused array slot is found, which indicates that there is no such key in the table.[1] Well-
known probe sequences include:
Linear probing2
in which the interval between probes is fixed — often set to 1.
Quadratic probing3
in which the interval between probes increases quadratically (hence, the indices are de-
scribed by a quadratic function).

1 https://en.wikipedia.org/wiki/Hash_table#Collision_resolution
2 https://en.wikipedia.org/wiki/Linear_probing
3 https://en.wikipedia.org/wiki/Quadratic_probing


Double hashing4
in which the interval between probes is fixed for each record but is computed by another
hash function.
The main tradeoffs between these methods are that linear probing has the best cache perfor-
mance5 but is most sensitive to clustering, while double hashing has poor cache performance
but exhibits virtually no clustering; quadratic probing falls in-between in both areas. Double
hashing can also require more computation than other forms of probing.
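The three probe sequences can be sketched as index generators; the function names and the worked values below are ours, assuming a table of num_slots cells and per-key hash values h1 (and h2 for double hashing):

```python
def linear_probe(h1, i, num_slots):
    return (h1 + i) % num_slots            # fixed interval, here 1

def quadratic_probe(h1, i, num_slots):
    return (h1 + i * i) % num_slots        # interval grows quadratically

def double_hash_probe(h1, h2, i, num_slots):
    return (h1 + i * h2) % num_slots       # fixed per-key step from a second hash

# The i-th probe (i = 0, 1, 2, 3) for a key hashing to 3 in an 8-slot table:
assert [linear_probe(3, i, 8) for i in range(4)] == [3, 4, 5, 6]
assert [quadratic_probe(3, i, 8) for i in range(4)] == [3, 4, 7, 4]
```

Linear probing visits consecutive cells (good cache behavior, prone to clustering), while the other two scatter the probes across the table.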
Some open addressing methods, such as Hopscotch hashing6 , Robin Hood hashing7 , last-
come-first-served hashing8 and cuckoo hashing9 move existing keys around in the array to
make room for the new key. This gives better maximum search times than the methods
based on probing.[2][3][4][5][6]
A critical influence on performance of an open addressing hash table is the load factor;
that is, the proportion of the slots in the array that are used. As the load factor increases
towards 100%, the number of probes that may be required to find or insert a given key rises
dramatically. Once the table becomes full, probing algorithms may even fail to terminate.
Even with good hash functions, load factors are normally limited to 80%. A poor hash
function can exhibit poor performance even at very low load factors by generating significant
clustering, especially with the simplest linear addressing method. Generally typical load
factors with most open addressing methods are 50%, whilst separate chaining10 typically can
use up to 100%. What causes hash functions to cluster is not well understood,[citation needed]
and it is easy to unintentionally write a hash function that causes severe clustering.

25.1 Example pseudocode

The following pseudocode12 is an implementation of an open addressing hash table with


linear probing and single-slot stepping, a common approach that is effective if the hash
function is good. Each of the lookup, set and remove functions use a common internal
function find_slot to locate the array slot that either does or should contain a given key.
record pair { key, value }
var pair array slot[0..num_slots-1]

function find_slot(key)
    i := hash(key) modulo num_slots
    // search until we either find the key, or find an empty slot.
    while (slot[i] is occupied) and (slot[i].key ≠ key)
        i := (i + 1) modulo num_slots
    return i

4 https://en.wikipedia.org/wiki/Double_hashing
5 https://en.wikipedia.org/wiki/Locality_of_reference
6 https://en.wikipedia.org/wiki/Hopscotch_hashing
7 https://en.wikipedia.org/wiki/Hash_table#Robin_Hood_hashing
8 https://en.wikipedia.org/w/index.php?title=Last-come-first-served_hashing&action=edit&redlink=1
9 https://en.wikipedia.org/wiki/Cuckoo_hashing
10 https://en.wikipedia.org/wiki/Hash_table#Separate_chaining
12 https://en.wikipedia.org/wiki/Pseudocode


function lookup(key)
    i := find_slot(key)
    if slot[i] is occupied  // key is in table
        return slot[i].value
    else  // key is not in table
        return not found

function set(key, value)
    i := find_slot(key)
    if slot[i] is occupied  // we found our key
        slot[i].value := value
        return
    if the table is almost full
        rebuild the table larger (note 1)
        i := find_slot(key)
    slot[i].key := key
    slot[i].value := value

note 1
Rebuilding the table requires allocating a larger array and recursively using the
set operation to insert all the elements of the old array into the new larger array. It
is common to increase the array size exponentially13 , for example by doubling the old
array size.
function remove(key)
    i := find_slot(key)
    if slot[i] is unoccupied
        return  // key is not in the table
    j := i
    loop
        mark slot[i] as unoccupied
    r2: (note 2)
        j := (j + 1) modulo num_slots
        if slot[j] is unoccupied
            exit loop
        k := hash(slot[j].key) modulo num_slots
        // determine if k lies cyclically in (i,j]
        // |    i.k.j |
        // |....j i.k.| or |.k..j i...|
        if ( (i<=j) ? ((i<k)&&(k<=j)) : ((i<k)||(k<=j)) )
            goto r2
        slot[i] := slot[j]
        i := j

note 2
For all records in a cluster, there must be no vacant slots between their natural hash
position and their current position (else lookups will terminate before finding the record).
At this point in the pseudocode, i is a vacant slot that might be invalidating this property
for subsequent records in the cluster. j is such a subsequent record. k is the raw hash
where the record at j would naturally land in the hash table if there were no collisions.
This test is asking if the record at j is invalidly positioned with respect to the required
properties of a cluster now that i is vacant.
Another technique for removal is simply to mark the slot as deleted. However this eventually
requires rebuilding the table simply to remove deleted records. The methods above provide

13 https://en.wikipedia.org/wiki/Exponential_growth


O(1) updating and removal of existing records, with occasional rebuilding if the high-water
mark of the table size grows.
The O(1) remove method above is only possible in linearly probed hash tables with single-
slot stepping. In the case where many records are to be deleted in one operation, marking
the slots for deletion and later rebuilding may be more efficient.
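A runnable translation of the pseudocode above into Python, assuming a fixed-size table for brevity (a real implementation would also rebuild the table when it becomes almost full, as in note 1):

```python
class OpenAddressingTable:
    # Linear probing with single-slot stepping and backward-shift removal.
    def __init__(self, num_slots=8):
        self.slots = [None] * num_slots          # None marks an unoccupied slot

    def _find_slot(self, key):
        i = hash(key) % len(self.slots)
        # Search until we find the key or an empty slot (assumes the table
        # is never completely full).
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def lookup(self, key):
        i = self._find_slot(key)
        return self.slots[i][1] if self.slots[i] is not None else None

    def set(self, key, value):
        self.slots[self._find_slot(key)] = (key, value)

    def remove(self, key):
        i = self._find_slot(key)
        if self.slots[i] is None:
            return                               # key is not in the table
        j = i
        while True:
            self.slots[i] = None                 # vacate slot i
            while True:                          # scan the rest of the cluster
                j = (j + 1) % len(self.slots)
                if self.slots[j] is None:
                    return
                k = hash(self.slots[j][0]) % len(self.slots)
                # If k lies cyclically outside (i, j], slot j must be
                # shifted back into the vacancy at i.
                if not (i < k <= j if i <= j else k <= j or i < k):
                    break
            self.slots[i] = self.slots[j]
            i = j

t = OpenAddressingTable()
t.set("a", 1); t.set("b", 2); t.set("a", 3)      # second set of "a" overwrites
assert t.lookup("a") == 3 and t.lookup("b") == 2
t.remove("a")
assert t.lookup("a") is None and t.lookup("b") == 2
```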

25.2 See also


• Lazy deletion14 – a method of deleting from a hash table using open addressing.

25.3 References
1. T, A M.; L, Y; A, M J. (1990),
Data Structures Using C, Prentice Hall, pp. 456–461, pp. 472, ISBN15 0-13-199746-716
2. Poblete; Viola; Munro. ”The Analysis of a Hashing Scheme by the Diagonal Poisson
Transform”. p. 95 of Jan van Leeuwen (Ed.) ”Algorithms - ESA '94”17 . 1994.
3. Steve Heller. ”Efficient C/C++ Programming: Smaller, Faster, Better”18 2014. p. 33.
4. Patricio V. Poblete, Alfredo Viola. ”Robin Hood Hashing really has constant average
search cost and variance in full tables”19 . 2016.
5. Paul E. Black, ”Last-Come First-Served Hashing”20 , in Dictionary of Algorithms and
Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 September 2015.
6. Paul E. Black, ”Robin Hood hashing”21 , in Dictionary of Algorithms and Data Struc-
tures [online], Vreda Pieterse and Paul E. Black, eds. 17 September 2015.

14 https://en.wikipedia.org/wiki/Lazy_deletion
15 https://en.wikipedia.org/wiki/ISBN_(identifier)
16 https://en.wikipedia.org/wiki/Special:BookSources/0-13-199746-7
17 https://books.google.com/books?id=2aCoW8m40AwC
18 https://books.google.com/books?id=gaajBQAAQBAJ
19 https://arxiv.org/abs/1605.04031
20 https://xlinux.nist.gov/dads/HTML/LastComeFirstServedHashing.html
21 https://www.nist.gov/dads/HTML/robinHoodHashing.html

26 Linear probing

Figure 69 The collision between John Smith and Sandra Dee (both hashing to cell 873)
is resolved by placing Sandra Dee at the next free location, cell 874.

Linear probing is a scheme in computer programming1 for resolving collisions2 in hash


tables3 , data structures4 for maintaining a collection of key–value pairs5 and looking up the
value associated with a given key. It was invented in 1954 by Gene Amdahl6 , Elaine M.
McGraw7 , and Arthur Samuel8 and first analyzed in 1963 by Donald Knuth9 .

1 https://en.wikipedia.org/wiki/Computer_programming
2 https://en.wikipedia.org/wiki/Hash_collision
3 https://en.wikipedia.org/wiki/Hash_table
4 https://en.wikipedia.org/wiki/Data_structure
5 https://en.wikipedia.org/wiki/Attribute%E2%80%93value_pair
6 https://en.wikipedia.org/wiki/Gene_Amdahl
7 https://en.wikipedia.org/wiki/Elaine_M._McGraw
8 https://en.wikipedia.org/wiki/Arthur_Samuel
9 https://en.wikipedia.org/wiki/Donald_Knuth


Along with quadratic probing10 and double hashing11 , linear probing is a form of open
addressing12 . In these schemes, each cell of a hash table stores a single key–value pair.
When the hash function13 causes a collision by mapping a new key to a cell of the hash
table that is already occupied by another key, linear probing searches the table for the
closest following free location and inserts the new key there. Lookups are performed in the
same way, by searching the table sequentially starting at the position given by the hash
function, until finding a cell with a matching key or an empty cell.
As Thorup & Zhang (2012)14 write, ”Hash tables are the most commonly used nontrivial
data structures, and the most popular implementation on standard hardware uses linear
probing, which is both fast and simple.”[1] Linear probing can provide high performance
because of its good locality of reference15 , but is more sensitive to the quality of its hash
function than some other collision resolution schemes. It takes constant expected time
per search, insertion, or deletion when implemented using a random hash function, a 5-
independent hash function16 , or tabulation hashing17 . Good results can also be achieved in
practice with other hash functions such as MurmurHash18 .[2]

26.1 Operations

Linear probing is a component of open addressing19 schemes for using a hash table20 to solve
the dictionary problem21 . In the dictionary problem, a data structure should maintain a
collection of key–value pairs subject to operations that insert or delete pairs from the
collection or that search for the value associated with a given key. In open addressing
solutions to this problem, the data structure is an array22 T (the hash table) whose cells
T[i] (when nonempty) each store a single key–value pair. A hash function23 is used to map
each key into the cell of T where that key should be stored, typically scrambling the keys so
that keys with similar values are not placed near each other in the table. A hash collision24
occurs when the hash function maps a key into a cell that is already occupied by a different
key. Linear probing is a strategy for resolving collisions, by placing the new key into the
closest following empty cell.[3][4]

10 https://en.wikipedia.org/wiki/Quadratic_probing
11 https://en.wikipedia.org/wiki/Double_hashing
12 https://en.wikipedia.org/wiki/Open_addressing
13 https://en.wikipedia.org/wiki/Hash_function
14 #CITEREFThorupZhang2012
15 https://en.wikipedia.org/wiki/Locality_of_reference
16 https://en.wikipedia.org/wiki/K-independent_hashing
17 https://en.wikipedia.org/wiki/Tabulation_hashing
18 https://en.wikipedia.org/wiki/MurmurHash
19 https://en.wikipedia.org/wiki/Open_addressing
20 https://en.wikipedia.org/wiki/Hash_table
21 https://en.wikipedia.org/wiki/Associative_array
22 https://en.wikipedia.org/wiki/Array_data_structure
23 https://en.wikipedia.org/wiki/Hash_function
24 https://en.wikipedia.org/wiki/Hash_collision


26.1.1 Search

To search for a given key x, the cells of T are examined, beginning with the cell at index
h(x) (where h is the hash function) and continuing to the adjacent cells h(x) + 1, h(x) +
2, ..., until finding either an empty cell or a cell whose stored key is x. If a cell containing
the key is found, the search returns the value from that cell. Otherwise, if an empty cell
is found, the key cannot be in the table, because it would have been placed in that cell in
preference to any later cell that has not yet been searched. In this case, the search returns
as its result that the key is not present in the dictionary.[3][4]
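The search procedure just described can be sketched in Python. This is a sketch under the assumptions that the table is a list of key–value tuples with None marking empty cells, and that the table always has at least one empty cell so the scan terminates; the function and variable names are illustrative, not taken from the sources cited above.

```python
def linear_probing_search(table, h, x):
    """Search for key x: scan from cell h(x) until x or an empty cell is found."""
    n = len(table)
    i = h(x) % n
    while table[i] is not None:
        key, value = table[i]
        if key == x:
            return value          # found the key: return its value
        i = (i + 1) % n           # continue to the adjacent cell, wrapping around
    return None                   # empty cell reached: x is not in the table

# Example with the identity hash on a table of 8 cells:
table = [None] * 8
table[3] = (3, "a")               # key 3 hashes to cell 3
table[4] = (11, "b")              # key 11 also hashes to cell 3, was displaced to cell 4
print(linear_probing_search(table, lambda k: k, 11))  # prints "b"
```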

26.1.2 Insertion

To insert a key–value pair (x,v) into the table (possibly replacing any existing pair with the
same key), the insertion algorithm follows the same sequence of cells that would be followed
for a search, until finding either an empty cell or a cell whose stored key is x. The new
key–value pair is then placed into that cell.[3][4]
If the insertion would cause the load factor25 of the table (its fraction of occupied cells)
to grow above some preset threshold, the whole table may be replaced by a new table,
larger by a constant factor, with a new hash function, as in a dynamic array26 . Setting
this threshold close to zero and using a high growth rate for the table size leads to faster
hash table operations but greater memory usage than threshold values close to one and low
growth rates. A common choice would be to double the table size when the load factor
would exceed 1/2, causing the load factor to stay between 1/4 and 1/2.[5]
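Insertion with threshold-based doubling can be sketched as follows. This is a sketch, with the table again represented as a list of key–value tuples and None for empty cells; for simplicity it reuses the same hash function after resizing, whereas, as described above, implementations typically also switch to a new hash function when the table grows.

```python
def insert(table, h, x, v):
    """Insert (x, v), replacing any existing pair with key x.
    Assumes the table has at least one empty cell."""
    n = len(table)
    i = h(x) % n
    while table[i] is not None and table[i][0] != x:
        i = (i + 1) % n
    table[i] = (x, v)

def grow_if_needed(table, h, num_pairs):
    """Double the table when the load factor would exceed 1/2
    (the threshold itself is a tunable choice)."""
    if 2 * num_pairs <= len(table):
        return table
    bigger = [None] * (2 * len(table))
    for pair in table:
        if pair is not None:
            insert(bigger, h, *pair)   # rehash every stored pair
    return bigger

table = [None] * 4
h = lambda k: k
count = 0
for key in (1, 5, 9):                  # with the identity hash, 1, 5, 9 collide mod 4
    table = grow_if_needed(table, h, count + 1)
    insert(table, h, key, str(key))
    count += 1
# The third insertion would push the load factor past 1/2, so the table doubled to 8 cells.
```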

26.1.3 Deletion

Figure 70 When a key–value pair is deleted, it may be necessary to move another pair
backwards into its cell, to prevent searches for the moved key from finding an empty cell.

It is also possible to remove a key–value pair from the dictionary. However, it is not sufficient
to do so by simply emptying its cell. This would affect searches for other keys that have
a hash value earlier than the emptied cell, but that are stored in a position later than the

25 https://en.wikipedia.org/wiki/Load_factor_(computer_science)
26 https://en.wikipedia.org/wiki/Dynamic_array


emptied cell. The emptied cell would cause those searches to incorrectly report that the
key is not present.
Instead, when a cell i is emptied, it is necessary to search forward through the following
cells of the table until finding either another empty cell or a key that can be moved to cell i
(that is, a key whose hash value is equal to or earlier than i). When an empty cell is found,
then emptying cell i is safe and the deletion process terminates. But, when the search finds
a key that can be moved to cell i, it performs this move. This has the effect of speeding
up later searches for the moved key, but it also empties out another cell, later in the same
block of occupied cells. The search for a movable key continues for the new emptied cell,
in the same way, until it terminates by reaching a cell that was already empty. In this
process of moving keys to earlier cells, each key is examined only once. Therefore, the time
to complete the whole process is proportional to the length of the block of occupied cells
containing the deleted key, matching the running time of the other hash table operations.[3]
Alternatively, it is possible to use a lazy deletion27 strategy in which a key–value pair is
removed by replacing the value by a special flag value28 indicating a deleted key. However,
these flag values will contribute to the load factor of the hash table. With this strategy, it
may become necessary to clean the flag values out of the array and rehash all the remaining
key–value pairs once too large a fraction of the array becomes occupied by deleted keys.[3][4]
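The first deletion strategy, moving keys backwards into the emptied cell, can be sketched as follows (a sketch, not the cited sources' code; the table is a list of key–value tuples with None for empty cells, and the hash function here is the identity, chosen only for illustration):

```python
def delete(table, h, x):
    """Remove key x, then move later keys of the same block backwards
    so that no search is cut short by the new hole."""
    n = len(table)
    i = h(x) % n
    while table[i] is not None and table[i][0] != x:
        i = (i + 1) % n
    if table[i] is None:
        return                     # key was not present
    table[i] = None                # empty the cell, leaving a hole at i
    j = (i + 1) % n
    while table[j] is not None:
        key, value = table[j]
        home = h(key) % n
        # The pair at j may move into the hole at i only if its home cell is
        # cyclically at or before i; otherwise a search starting at `home`
        # would never pass through i and the move would break it.
        if (j - home) % n >= (j - i) % n:
            table[i] = (key, value)
            table[j] = None        # the hole moves forward to j
            i = j
        j = (j + 1) % n

table = [None] * 8
table[1], table[2], table[3] = (1, "a"), (9, "b"), (2, "c")  # 9 and 2 were displaced
delete(table, lambda k: k, 1)
# 9 moves back to its home cell 1, and 2 moves back to its home cell 2.
```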

26.2 Properties

Linear probing provides good locality of reference29 , which causes it to require few un-
cached memory accesses per operation. Because of this, for low to moderate load factors,
it can provide very high performance. However, compared to some other open addressing
strategies, its performance degrades more quickly at high load factors because of primary
clustering30 , a tendency for one collision to cause more nearby collisions.[3] Additionally,
achieving good performance with this method requires a higher-quality hash function than
for some other collision resolution schemes.[6] When used with low-quality hash functions
that fail to eliminate nonuniformities in the input distribution, linear probing can be slower
than other open-addressing strategies such as double hashing31 , which probes a sequence
of cells whose separation is determined by a second hash function, or quadratic probing32 ,
where the size of each step varies depending on its position within the probe sequence.[7]

27 https://en.wikipedia.org/wiki/Lazy_deletion
28 https://en.wikipedia.org/wiki/Sentinel_value
29 https://en.wikipedia.org/wiki/Locality_of_reference
30 https://en.wikipedia.org/wiki/Primary_clustering
31 https://en.wikipedia.org/wiki/Double_hashing
32 https://en.wikipedia.org/wiki/Quadratic_probing


26.3 Analysis

Using linear probing, dictionary operations can be implemented in constant expected time33 .
In other words, insert, remove and search operations can be implemented in O(1)34 , as long
as the load factor35 of the hash table is a constant strictly less than one.[8]
In more detail, the time for any particular operation (a search, insertion, or deletion) is
proportional to the length of the contiguous block of occupied cells at which the operation
starts. If all starting cells are equally likely, in a hash table with N cells, then a maximal
block of k occupied cells will have probability k/N of containing the starting location of a
search, and will take time O(k) whenever it is the starting location. Therefore, the expected
time for an operation can be calculated as the product of these two terms, O(k²/N), summed
over all of the maximal blocks of contiguous cells in the table. A similar sum of squared
block lengths gives the expected time bound for a random hash function (rather than for
a random starting location into a specific state of the hash table), by summing over all
the blocks that could exist (rather than the ones that actually exist in a given state of the
table), and multiplying the term for each potential block by the probability that the block
is actually occupied. That is, defining Block(i,k) to be the event that there is a maximal
contiguous block of occupied cells of length k beginning at index i, the expected time per
operation is
E[T] = O(1) + ∑_{i=1}^{N} ∑_{k=1}^{n} O(k²/N) · Pr[Block(i, k)].

This formula can be simplified by replacing Block(i,k) by a simpler necessary condition
Full(k), the event that at least k elements have hash values that lie within a block of cells
of length k. After this replacement, the value within the sum no longer depends on i, and
the 1/N factor cancels the N terms of the outer summation. These simplifications lead to
the bound

E[T] ≤ O(1) + ∑_{k=1}^{n} O(k²) · Pr[Full(k)].

But by the multiplicative form of the Chernoff bound36 , when the load factor is bounded
away from one, the probability that a block of length k contains at least k hashed values
is exponentially small as a function of k, causing this sum to be bounded by a constant
independent of n.[3] It is also possible to perform the same analysis using Stirling's approx-
imation37 instead of the Chernoff bound to estimate the probability that a block contains
exactly k hashed values.[4][9]
In terms of the load factor α, the expected time for a successful search is O(1 + 1/(1 − α)),
and the expected time for an unsuccessful search (or the insertion of a new key) is O(1 +
1/(1 − α)²).[10] For constant load factors, with high probability, the longest probe sequence
(among the probe sequences for all keys stored in the table) has logarithmic length.[11]
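Ignoring the constant factors hidden in the O-notation, evaluating these two expressions for a few load factors shows how sharply the cost grows as α approaches one:

```python
for alpha in (0.5, 0.75, 0.9):
    successful = 1 + 1 / (1 - alpha)          # grows like 1/(1−α)
    unsuccessful = 1 + 1 / (1 - alpha) ** 2   # grows like 1/(1−α)²
    print(f"α = {alpha}: successful ~ {successful:.0f}, "
          f"unsuccessful ~ {unsuccessful:.0f}")
# α = 0.5 gives 3 and 5; α = 0.9 already gives 11 and 101.
```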

33 https://en.wikipedia.org/wiki/Expected_time
34 https://en.wikipedia.org/wiki/Big_O_notation
35 https://en.wikipedia.org/wiki/Load_factor_(computer_science)
36 https://en.wikipedia.org/wiki/Chernoff_bound
37 https://en.wikipedia.org/wiki/Stirling%27s_approximation


26.4 Choice of hash function

Because linear probing is especially sensitive to unevenly distributed hash values,[7] it is
important to combine it with a high-quality hash function that does not produce such
irregularities.
The analysis above assumes that each key's hash is a random number independent of the
hashes of all the other keys. This assumption is unrealistic for most applications of hashing.
However, random or pseudorandom38 hash values may be used when hashing objects by
their identity rather than by their value. For instance, this is done using linear probing
by the IdentityHashMap class of the Java collections framework39 .[12] The hash value that
this class associates with each object, its identityHashCode, is guaranteed to remain fixed
for the lifetime of an object but is otherwise arbitrary.[13] Because the identityHashCode is
constructed only once per object, and is not required to be related to the object's address
or value, its construction may involve slower computations such as the call to a random or
pseudorandom number generator. For instance, Java 8 uses an Xorshift40 pseudorandom
number generator to construct these values.[14]
For most applications of hashing, it is necessary to compute the hash function for each
value every time that it is hashed, rather than once when its object is created. In such
applications, random or pseudorandom numbers cannot be used as hash values, because then
different objects with the same value would have different hashes. And cryptographic hash
functions41 (which are designed to be computationally indistinguishable from truly random
functions) are usually too slow to be used in hash tables.[15] Instead, other methods for
constructing hash functions have been devised. These methods compute the hash function
quickly, and can be proven to work well with linear probing. In particular, linear probing
has been analyzed from the framework of k-independent hashing42 , a class of hash functions
that are initialized from a small random seed and that are equally likely to map any k-
tuple of distinct keys to any k-tuple of indexes. The parameter k can be thought of as a
measure of hash function quality: the larger k is, the more time it will take to compute the
hash function but it will behave more similarly to completely random functions. For linear
probing, 5-independence is enough to guarantee constant expected time per operation,[16]
while some 4-independent hash functions perform badly, taking up to logarithmic time per
operation.[6]
Another method of constructing hash functions with both high quality and practical speed
is tabulation hashing43 . In this method, the hash value for a key is computed by using each
byte of the key as an index into a table of random numbers (with a different table for each
byte position). The numbers from those table cells are then combined by a bitwise exclusive
or44 operation. Hash functions constructed this way are only 3-independent. Nevertheless,
linear probing using these hash functions takes constant expected time per operation.[4][17]
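A minimal sketch of tabulation hashing for fixed-width integer keys (the assumptions here are 4-byte keys and 32-bit hash words; the function and parameter names are illustrative):

```python
import random

def make_tabulation_hash(key_bytes=4, bits=32, seed=1):
    """Tabulation hashing: one table of 256 random words per byte position;
    the hash of a key is the bitwise XOR of one lookup per byte."""
    rng = random.Random(seed)
    tables = [[rng.getrandbits(bits) for _ in range(256)]
              for _ in range(key_bytes)]

    def h(key):
        result = 0
        for pos in range(key_bytes):
            byte = (key >> (8 * pos)) & 0xFF   # extract byte `pos` of the key
            result ^= tables[pos][byte]        # look it up and XOR
        return result

    return h

h = make_tabulation_hash()
```

The per-key work is just key_bytes table lookups and XOR operations, which is what makes the method fast in practice.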
Both tabulation hashing and standard methods for generating 5-independent hash functions

38 https://en.wikipedia.org/wiki/Pseudorandom
39 https://en.wikipedia.org/wiki/Java_collections_framework
40 https://en.wikipedia.org/wiki/Xorshift
41 https://en.wikipedia.org/wiki/Cryptographic_hash_function
42 https://en.wikipedia.org/wiki/K-independent_hashing
43 https://en.wikipedia.org/wiki/Tabulation_hashing
44 https://en.wikipedia.org/wiki/Exclusive_or


are limited to keys that have a fixed number of bits. To handle strings45 or other types of
variable-length keys, it is possible to compose46 a simpler universal hashing47 technique that
maps the keys to intermediate values and a higher quality (5-independent or tabulation)
hash function that maps the intermediate values to hash table indices.[1][18]
In an experimental comparison, Richter et al. found that the Multiply-Shift family of hash
functions (defined as h_z(x) = (x · z mod 2^w) ÷ 2^(w−d)) was "the fastest hash function when
integrated with all hashing schemes, i.e., producing the highest throughputs and also of good
quality", whereas tabulation hashing produced "the lowest throughput".[2] They point out
that each table look-up requires several cycles, making it more expensive than simple arithmetic
operations. They also found MurmurHash48 to be superior to tabulation hashing: "By
studying the results provided by Mult and Murmur, we think that the trade-off for by
tabulation (...) is less attractive in practice".
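The Multiply-Shift scheme quoted above can be sketched directly, since dividing by 2^(w−d) is a right shift; the constant z below is an arbitrary odd 64-bit value chosen only for illustration:

```python
def multiply_shift(z, w=64, d=16):
    """h_z(x) = (x * z mod 2^w) >> (w - d): multiply by a random odd w-bit
    constant z, keep the low w bits, and use the top d of those as the index."""
    mask = (1 << w) - 1
    return lambda x: ((x * z) & mask) >> (w - d)

h = multiply_shift(z=0x9E3779B97F4A7C15)   # arbitrary odd 64-bit constant
# h maps 64-bit keys to 16-bit table indices using one multiply and one shift.
```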

26.5 History

The idea of an associative array49 that allows data to be accessed by its value rather than
by its address dates back to the mid-1940s in the work of Konrad Zuse50 and Vannevar
Bush51 ,[19] but hash tables were not described until 1953, in an IBM memorandum by Hans
Peter Luhn52 . Luhn used a different collision resolution method, chaining, rather than linear
probing.[20]
Knuth53 (196354 ) summarizes the early history of linear probing. It was the first open
addressing method, and was originally synonymous with open addressing. According to
Knuth, it was first used by Gene Amdahl55 , Elaine M. McGraw56 (née Boehme), and Arthur
Samuel57 in 1954, in an assembler58 program for the IBM 70159 computer.[8] The first
published description of linear probing is by Peterson (1957)60 ,[8] who also credits Samuel,
Amdahl, and Boehme but adds that ”the system is so natural, that it very likely may have
been conceived independently by others either before or since that time”.[21] Another early
publication of this method was by Soviet researcher Andrey Ershov61 , in 1958.[22]

45 https://en.wikipedia.org/wiki/String_(computer_science)
46 https://en.wikipedia.org/wiki/Function_composition_(computer_science)
47 https://en.wikipedia.org/wiki/Universal_hashing
48 https://en.wikipedia.org/wiki/MurmurHash
49 https://en.wikipedia.org/wiki/Associative_array
50 https://en.wikipedia.org/wiki/Konrad_Zuse
51 https://en.wikipedia.org/wiki/Vannevar_Bush
52 https://en.wikipedia.org/wiki/Hans_Peter_Luhn
53 https://en.wikipedia.org/wiki/Donald_Knuth
54 #CITEREFKnuth1963
55 https://en.wikipedia.org/wiki/Gene_Amdahl
56 https://en.wikipedia.org/wiki/Elaine_M._McGraw
57 https://en.wikipedia.org/wiki/Arthur_Samuel
58 https://en.wikipedia.org/wiki/Assembly_language
59 https://en.wikipedia.org/wiki/IBM_701
60 #CITEREFPeterson1957
61 https://en.wikipedia.org/wiki/Andrey_Ershov


The first theoretical analysis of linear probing, showing that it takes constant expected time
per operation with random hash functions, was given by Knuth.[8] Sedgewick62 calls Knuth's
work ”a landmark in the analysis of algorithms”.[10] Significant later developments include
a more detailed analysis of the probability distribution63 of the running time,[23][24] and the
proof that linear probing runs in constant time per operation with practically usable hash
functions rather than with the idealized random functions assumed by earlier analysis.[16][17]

26.6 References
1. T, M64 ; Z, Y (2012), ”T- 5-
         -
”, SIAM Journal on Computing, 41 (2): 293–331, doi65 :10.1137/10080077466 ,
MR67 291432968 .
2. R, S; A, V; D, J (2015), ”A -
         
”69 (PDF), Proceedings of the VLDB Endowment, 9 (3): 293–331.
3. G, M T.70 ; T, R71 (2015), ”S 6.3.3: L
P”, Algorithm Design and Applications, Wiley, pp. 200–203.
4. M, P72 (F 22, 2014), ”S 5.2: LHT: L
P”, Open Data Structures (in pseudocode)73 (0.1Gβ .), . 108–116, -
 2016-01-15.
5. S, R74 ; W, K (2011), Algorithms75 (4 .), A-
W P, . 471, ISBN76 978032157351377 . Sedgewick and Wayne also
halve the table size when a deletion would cause the load factor to become too low,
causing them to use a wider range [1/8,1/2] in the possible values of the load factor.
6. PĂŞ, M78 ; T, M79 (2010), ”O  - -
      ”80 (PDF), Automata,
Languages and Programming81 , 37th International Colloquium, ICALP 2010, Bor-
deaux, France, July 6–10, 2010, Proceedings, Part I, Lecture Notes in Computer

62 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
63 https://en.wikipedia.org/wiki/Probability_distribution
64 https://en.wikipedia.org/wiki/Mikkel_Thorup
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1137%2F100800774
67 https://en.wikipedia.org/wiki/MR_(identifier)
68 http://www.ams.org/mathscinet-getitem?mr=2914329
69 http://www.vldb.org/pvldb/vol9/p96-richter.pdf
70 https://en.wikipedia.org/wiki/Michael_T._Goodrich
71 https://en.wikipedia.org/wiki/Roberto_Tamassia
72 https://en.wikipedia.org/wiki/Pat_Morin
73 http://opendatastructures.org/
74 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
75 https://books.google.com/books?id=MTpsAQAAQBAJ&pg=PA471
76 https://en.wikipedia.org/wiki/ISBN_(identifier)
77 https://en.wikipedia.org/wiki/Special:BookSources/9780321573513
78 https://en.wikipedia.org/wiki/Mihai_P%C4%83tra%C8%99cu
79 https://en.wikipedia.org/wiki/Mikkel_Thorup
80 http://people.csail.mit.edu/mip/papers/kwise-lb/kwise-lb.pdf
81 https://en.wikipedia.org/wiki/International_Colloquium_on_Automata,_Languages_and_Programming


Science, 6198, Springer, pp. 715–726, arXiv:1302.5127, doi:10.1007/978-3-642-14165-2_60.
7. H, G L.; L, W (2005), ”H   -
”87 (PDF), Seventh Workshop on Algorithm Engineering and Experiments
(ALENEX 2005), pp. 141–154.
8. K, D88 (1963), Notes on ”Open” Addressing89 ,   
90  2016-03-03
9. E, D91 (O 13, 2011), ”L   ”92 , 0xDE.
10. S, R93 (2003), ”S 14.3: L P”, Algorithms in
Java, Parts 1–4: Fundamentals, Data Structures, Sorting, Searching (3rd ed.), Addi-
son Wesley, pp. 615–620, ISBN94 978032162397395 .
11. P, B. (1987), ”L :     
      ”, Journal of Algorithms,
8 (2): 236–249, doi96 :10.1016/0196-6774(87)90040-X97 , MR98 089087499 .
12. ”IHM”100 , Java SE 7 Documentation, Oracle, retrieved 2016-01-15.
13. F, J (2012), Beginning Java 7101 , E'   J, A,
. 376, ISBN102 9781430239109103 .
14. K, H M. (S 9, 2014), ”I C”104 , The Java Spe-
cialists' Newsletter, 222.
15. W, M A (2014), ”C 3: D S”105 ,  G,
T; D-H, J; T, A (.), Computing Handbook,
1 (3rd ed.), CRC Press, p. 3-11, ISBN106 9781439898536107 .

82 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
83 https://en.wikipedia.org/wiki/ArXiv_(identifier)
84 http://arxiv.org/abs/1302.5127
85 https://en.wikipedia.org/wiki/Doi_(identifier)
86 https://doi.org/10.1007%2F978-3-642-14165-2_60
87 http://www.siam.org/meetings/alenex05/papers/13gheileman.pdf
88 https://en.wikipedia.org/wiki/Donald_Knuth
89 https://web.archive.org/web/20160303225949/http://algo.inria.fr/AofA/Research/11-97.html
90 http://algo.inria.fr/AofA/Research/11-97.html
91 https://en.wikipedia.org/wiki/David_Eppstein
92 https://11011110.github.io/blog/2011/10/13/linear-probing-made.html
93 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
94 https://en.wikipedia.org/wiki/ISBN_(identifier)
95 https://en.wikipedia.org/wiki/Special:BookSources/9780321623973
96 https://en.wikipedia.org/wiki/Doi_(identifier)
97 https://doi.org/10.1016%2F0196-6774%2887%2990040-X
98 https://en.wikipedia.org/wiki/MR_(identifier)
99 http://www.ams.org/mathscinet-getitem?mr=0890874
100 https://docs.oracle.com/javase/7/docs/api/java/util/IdentityHashMap.html
101 https://books.google.com/books?id=CwSaQpCtfPkC&pg=PA376
102 https://en.wikipedia.org/wiki/ISBN_(identifier)
103 https://en.wikipedia.org/wiki/Special:BookSources/9781430239109
104 http://www.javaspecialists.eu/archive/Issue222.html
105 https://books.google.com/books?id=wyHSBQAAQBAJ&pg=SA3-PA11
106 https://en.wikipedia.org/wiki/ISBN_(identifier)
107 https://en.wikipedia.org/wiki/Special:BookSources/9781439898536


16. P, A; P, R; RĆ, M (2009), ”L  
 ”, SIAM Journal on Computing108 , 39 (3): 1107–1120,
arXiv109 :cs/0612055110 , doi111 :10.1137/070702278112 , MR113 2538852114
17. PĂŞ, M115 ; T, M116 (2011), ”T   -
  ”, Proceedings of the 43rd annual ACM Sympo-
sium on Theory of Computing117 (STOC '11), pp. 1–10, arXiv118 :1011.5200119 ,
doi120 :10.1145/1993636.1993638121
18. T, M122 (2009), ”S    ”, Pro-
ceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Al-
gorithms, Philadelphia, PA: SIAM, pp. 655–664, CiteSeerX123 10.1.1.215.4253124 ,
doi125 :10.1137/1.9781611973068.72126 , MR127 2809270128 .
19. P, B (2006), Introduction to Parallel Processing: Algorithms and
Architectures129 , S  C S, S, 4.1 D 
 , . 67, ISBN130 9780306469640131 .
20. M, P (2004), ”H ”132 ,  M, D P.; S, S
(.), Handbook of Data Structures and Applications, Chapman & Hall / CRC, p. 9-
15, ISBN133 9781420035179134 .
21. P, W. W.135 (A 1957), ”A  - -
”, IBM Journal of Research and Development136 , R, NJ, USA: IBM
C., 1 (2): 130–146, doi137 :10.1147/rd.12.0130138 .

108 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
109 https://en.wikipedia.org/wiki/ArXiv_(identifier)
110 http://arxiv.org/abs/cs/0612055
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1137%2F070702278
113 https://en.wikipedia.org/wiki/MR_(identifier)
114 http://www.ams.org/mathscinet-getitem?mr=2538852
115 https://en.wikipedia.org/wiki/Mihai_P%C4%83tra%C8%99cu
116 https://en.wikipedia.org/wiki/Mikkel_Thorup
117 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
118 https://en.wikipedia.org/wiki/ArXiv_(identifier)
119 http://arxiv.org/abs/1011.5200
120 https://en.wikipedia.org/wiki/Doi_(identifier)
121 https://doi.org/10.1145%2F1993636.1993638
122 https://en.wikipedia.org/wiki/Mikkel_Thorup
123 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
124 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.215.4253
125 https://en.wikipedia.org/wiki/Doi_(identifier)
126 https://doi.org/10.1137%2F1.9781611973068.72
127 https://en.wikipedia.org/wiki/MR_(identifier)
128 http://www.ams.org/mathscinet-getitem?mr=2809270
129 https://books.google.com/books?id=iNQLBwAAQBAJ&pg=PA67
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/9780306469640
132 https://books.google.com/books?id=fQVZy1zcpJkC&pg=SA9-PA15
133 https://en.wikipedia.org/wiki/ISBN_(identifier)
134 https://en.wikipedia.org/wiki/Special:BookSources/9781420035179
135 https://en.wikipedia.org/wiki/W._Wesley_Peterson
136 https://en.wikipedia.org/wiki/IBM_Journal_of_Research_and_Development
137 https://en.wikipedia.org/wiki/Doi_(identifier)
138 https://doi.org/10.1147%2Frd.12.0130


22. E, A. P.139 (1958), ”O P  A O”,


Communications of the ACM140 , 1 (8): 3–6, doi141 :10.1145/368892.368907142 . Trans-
lated from Doklady AN USSR 118 (3): 427–430, 1958, by Morris D. Friedman. Linear
probing is described as algorithm A2.
23. F, P.143 ; P, P.; V, A. (1998), ”O  
   ”144 (PDF), Algorithmica145 , 22 (4): 490–515,
doi146 :10.1007/PL00009236147 , MR148 1701625149 .
24. K, D. E.150 (1998), ”L   ”, Algorithmica151 , 22 (4):
561–568, arXiv152 :cs/9801103153 , doi154 :10.1007/PL00009240155 , MR156 1701629157 .

139 https://en.wikipedia.org/wiki/Andrey_Ershov
140 https://en.wikipedia.org/wiki/Communications_of_the_ACM
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1145%2F368892.368907
143 https://en.wikipedia.org/wiki/Philippe_Flajolet
144 http://algo.inria.fr/flajolet/Publications/FlPoVi98.pdf
145 https://en.wikipedia.org/wiki/Algorithmica
146 https://en.wikipedia.org/wiki/Doi_(identifier)
147 https://doi.org/10.1007%2FPL00009236
148 https://en.wikipedia.org/wiki/MR_(identifier)
149 http://www.ams.org/mathscinet-getitem?mr=1701625
150 https://en.wikipedia.org/wiki/Donald_Knuth
151 https://en.wikipedia.org/wiki/Algorithmica
152 https://en.wikipedia.org/wiki/ArXiv_(identifier)
153 http://arxiv.org/abs/cs/9801103
154 https://en.wikipedia.org/wiki/Doi_(identifier)
155 https://doi.org/10.1007%2FPL00009240
156 https://en.wikipedia.org/wiki/MR_(identifier)
157 http://www.ams.org/mathscinet-getitem?mr=1701629

27 Quadratic probing


Quadratic probing is an open addressing11 scheme in computer programming12 for
resolving hash collisions13 in hash tables14 . Quadratic probing operates by taking the original
hash index and adding successive values of an arbitrary quadratic polynomial15 until an
open slot is found.
An example sequence using quadratic probing is:
H + 1², H + 2², H + 3², H + 4², ..., H + k²
Quadratic probing can be a more efficient algorithm in an open addressing16 table, since
it better avoids the clustering problem that can occur with linear probing17 , although
it is not immune. It also provides good memory caching because it preserves some
locality of reference18 ; however, linear probing has greater locality and, thus, better cache
performance.

11 https://en.wikipedia.org/wiki/Open_addressing
12 https://en.wikipedia.org/wiki/Computer_programming
13 https://en.wikipedia.org/wiki/Hash_collisions
14 https://en.wikipedia.org/wiki/Hash_table
15 https://en.wikipedia.org/wiki/Quadratic_polynomial
16 https://en.wikipedia.org/wiki/Open_addressing
17 https://en.wikipedia.org/wiki/Linear_probing
18 https://en.wikipedia.org/wiki/Locality_of_reference


27.1 Quadratic function

Let h(k) be a hash function22 that maps an element k to an integer in [0, m−1], where m is
the size of the table. Let the ith probe position for a value k be given by the function
h(k, i) = h(k) + c1 i + c2 i² (mod m)
where c2 ≠ 0. (If c2 = 0, then h(k,i) degrades to a linear probe23 .) For a given hash table24 ,
the values of c1 and c2 remain constant.
Examples:
• If h(k, i) = (h(k) + i + i²) (mod m), then the probe sequence will be
h(k), h(k) + 2, h(k) + 6, ...
• For m = 2^n, a good choice for the constants is c1 = c2 = 1/2, as the values
of h(k,i) for i in [0, m−1] are all distinct. This leads to a probe sequence of
h(k), h(k) + 1, h(k) + 3, h(k) + 6, ... (the triangular numbers25 ) where the values increase
by 1, 2, 3, ...
• For prime m > 2, most choices of c1 and c2 will make h(k,i) distinct for i in [0, (m−1)/2].
Such choices include c1 = c2 = 1/2, c1 = c2 = 1, and c1 = 0, c2 = 1. However, there
are only m/2 distinct probes for a given element, requiring other techniques to guarantee
that insertions will succeed when the load factor exceeds 1/2.
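The power-of-two case can be checked directly: with c1 = c2 = 1/2 the probe offsets are the triangular numbers i(i+1)/2, and for m = 2^n the first m probes are all distinct (a sketch with an arbitrary starting cell; names are illustrative):

```python
def probe_sequence(h0, m, steps):
    """Quadratic probing with c1 = c2 = 1/2: probe i lands at
    (h0 + i*(i+1)/2) mod m."""
    return [(h0 + i * (i + 1) // 2) % m for i in range(steps)]

# For power-of-two table sizes, m probes visit m distinct cells:
for n in (3, 4, 5):
    m = 2 ** n
    assert len(set(probe_sequence(0, m, m))) == m

print(probe_sequence(0, 8, 8))  # prints [0, 1, 3, 6, 2, 7, 5, 4]
```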

27.2 Limitations

When using quadratic probing, however (with the exception of triangular number26 cases
for a hash table of size 2^n),[1] there is no guarantee of finding an empty cell once the table
becomes more than half full, or even before this if the table size is composite27 ,[2] because
collisions must be resolved using half of the table at most.
The converse can be proven as follows: suppose a hash table has size p (a prime greater
than 3), with an initial location h(k) and two alternative locations h(k) + x² (mod p) and
h(k) + y² (mod p) (where 0 ≤ x, y ≤ p/2). If these two locations point to the same key
space, but x ≠ y, then
h(k) + x² ≡ h(k) + y² (mod p)
x² ≡ y² (mod p)
x² − y² ≡ 0 (mod p)
(x − y)(x + y) ≡ 0 (mod p).
As p is a prime number, either (x − y) or (x + y) must be divisible by p. Since x and y are
different (modulo p), (x − y) ≠ 0, and since both variables are greater than zero, (x + y) ≠ 0.

22 https://en.wikipedia.org/wiki/Hash_function
23 https://en.wikipedia.org/wiki/Linear_probing
24 https://en.wikipedia.org/wiki/Hash_table
25 https://en.wikipedia.org/wiki/Triangular_numbers
26 https://en.wikipedia.org/wiki/Triangular_number
27 https://en.wikipedia.org/wiki/Composite_number


Thus, by contradiction, the first p/2 alternative locations after h(k) must be unique, and
subsequently, an empty space can always be found so long as at most p/2 locations are filled
(i.e., the hash table is not more than half full).

27.2.1 Alternating signs

If the sign of the offset is alternated (e.g. +1, −4, +9, −16, etc.), and if the number of
buckets is a prime number p congruent to 3 modulo 4 (e.g. 3, 7, 11, 19, 23, 31, etc.),
then the first p offsets will be unique (modulo p). In other words, a
permutation of 0 through p − 1 is obtained, and, consequently, a free bucket will always be
found as long as at least one exists.
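This permutation property is easy to verify numerically; in the alternating sequence above, the i-th offset is (−1)^(i+1) · i² (a sketch, with illustrative names):

```python
def alternating_probes(h0, p):
    """Probe sequence h0, h0+1, h0-4, h0+9, h0-16, ... (mod p):
    the i-th offset is (-1)**(i+1) * i*i."""
    return [(h0 + (-1) ** (i + 1) * i * i) % p for i in range(p)]

# For primes p congruent to 3 modulo 4, the first p probes visit
# every bucket exactly once:
for p in (3, 7, 11, 19, 23, 31):
    assert sorted(alternating_probes(0, p)) == list(range(p))
```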

27.3 References
1. H, F. R A.; D, J H.29 (N 1972). ”T
Q H M         2”30 . Computer
Journal. 15 (4): 314–5. doi31 :10.1093/comjnl/15.4.31432 . Retrieved 2020-02-07.
2. W, M A (2009). ”§5.4.2 Q ”. Data Structures and
Algorithm Analysis in C++. Pearson Education. ISBN33 978-81-317-1474-434 .

27.4 External links


• Tutorial/quadratic probing35

29 https://en.wikipedia.org/wiki/James_H._Davenport
30 http://www.chilton-computing.org.uk/acl/literature/reports/p012.htm
31 https://en.wikipedia.org/wiki/Doi_(identifier)
32 https://doi.org/10.1093%2Fcomjnl%2F15.4.314
33 https://en.wikipedia.org/wiki/ISBN_(identifier)
34 https://en.wikipedia.org/wiki/Special:BookSources/978-81-317-1474-4
35 http://research.cs.vt.edu/AVresearch/hashing/quadratic.php

28 Double hashing

Double hashing is a computer programming1 technique used in conjunction with open
addressing in hash tables2 to resolve hash collisions3 , by using a secondary hash of the key
as an offset when a collision occurs. Double hashing with open addressing is a classical data
structure on a table T .
It uses one hash value as an index into the table and then repeatedly steps forward an
interval until the desired value is located, an empty location is reached, or the entire table
has been searched; but this interval is set by a second, independent hash function4 . Unlike
the alternative collision-resolution methods of linear probing5 and quadratic probing6 , the
interval depends on the data, so that values mapping to the same location have different
bucket sequences; this minimizes repeated collisions and the effects of clustering.
Given two random, uniform, and independent hash functions h1 and h2 , the ith
location in the bucket sequence for value k in a hash table of |T | buckets is:
h(i, k) = (h1 (k) + i · h2 (k)) mod |T |. Generally, h1 and h2 are selected from a set of universal
hash7 functions; h1 is selected to have a range of {0, 1, . . . , |T | − 1} and h2 to have a range of
{1, 2, . . . , |T | − 1}. Double hashing approximates a random distribution; more precisely, pair-wise
independent hash functions yield a probability of (n/|T |)2 that any pair of keys will follow
the same bucket sequence.
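As an illustrative sketch in C, the bucket sequence can be computed directly from this formula; the h1 and h2 below are hypothetical stand-ins chosen only to have the stated ranges for an 11-bucket table:

```c
#include <assert.h>

#define M 11u  /* |T|, the number of buckets (illustrative) */

/* Hypothetical stand-ins for two independent hash functions,
 * chosen only to have the ranges described above. */
static unsigned h1(unsigned k) { return k % M; }             /* range {0, ..., M-1} */
static unsigned h2(unsigned k) { return 1u + k % (M - 1u); } /* range {1, ..., M-1} */

/* i-th location in the bucket sequence for key k:
 * h(i, k) = (h1(k) + i*h2(k)) mod M */
static unsigned probe(unsigned i, unsigned k)
{
    return (h1(k) + i * h2(k)) % M;
}
```

Two keys that collide at h1 but have different h2 values diverge from the first step onward, which is the advantage over linear and quadratic probing described above.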

28.1 Selection of h2 (k)

The secondary hash function h2 (k) should have several characteristics:


• it should never yield an index of zero
• it should cycle through the whole table
• it should be very fast to compute
• it should be pair-wise independent of h1 (k)
• The distribution characteristics of h2 are irrelevant. It is analogous to a random-number
generator – it is only necessary that h2 be relatively prime to |T |.
In practice, if division hashing is used for both functions, the divisors are chosen as primes.
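One common division-hashing construction (a sketch; the prime p and the functions g1 and g2 are illustrative choices, not the only valid ones) satisfies the requirements above: with a prime table size p, the secondary hash maps into {1, ..., p − 1}, so it is never zero, and every such step is relatively prime to p, so the probe sequence visits every cell:

```c
#include <assert.h>

#define P 13u  /* prime table size (illustrative) */

static unsigned g1(unsigned k) { return k % P; }             /* primary position */
static unsigned g2(unsigned k) { return 1u + k % (P - 1u); } /* step: never zero */

/* Verify that stepping by g2(k) from g1(k) reaches every cell.
 * This works exactly because gcd(g2(k), P) == 1 when P is prime. */
static int covers_table(unsigned k)
{
    unsigned seen = 0, i;
    for (i = 0; i < P; i++)
        seen |= 1u << ((g1(k) + i * g2(k)) % P);
    return seen == (1u << P) - 1u;
}
```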

1 https://en.wikipedia.org/wiki/Computer_programming
2 https://en.wikipedia.org/wiki/Hash_table
3 https://en.wikipedia.org/wiki/Hash_collision
4 https://en.wikipedia.org/wiki/Hash_function
5 https://en.wikipedia.org/wiki/Linear_probing
6 https://en.wikipedia.org/wiki/Quadratic_probing
7 https://en.wikipedia.org/wiki/Universal_hash


28.2 Analysis

Let n be the number of elements stored in T ; then T 's load factor is α = n/|T |. To analyze
the scheme, start by randomly, uniformly and independently selecting two universal hash8 functions h1
and h2 to build a double hashing table T . All elements are put in T by double hashing
using h1 and h2 . Given a key k, the (i + 1)-st hash location is computed by:
h(i, k) = (h1 (k) + i · h2 (k)) mod |T |.
Let T have a fixed load factor α with 0 < α < 1.


Bradford and Katehakis10[1] showed the expected number of probes for an unsuccessful
search in T , still using these initially chosen hash functions, is 1/(1 − α) regardless of the
distribution of the inputs (for example, 4 expected probes at α = 0.75). Pair-wise
independence of the hash functions suffices.
Like all other forms of open addressing, double hashing becomes linear as the hash table
approaches maximum capacity. The usual heuristic is to limit the table loading to 75% of
capacity. Eventually, rehashing to a larger size will be necessary, as with all other open
addressing schemes.

28.3 Enhanced double hashing

Peter Dillinger's PhD thesis[2] points out that double hashing produces unwanted equiva-
lent hash functions when the hash functions are treated as a set, as in Bloom filters11 : If
h2 (y) = −h2 (x) and h1 (y) = h1 (x) + k · h2 (x), then h(i, y) = h(k − i, x) and the sets of hashes
{h(0, x), ..., h(k, x)} = {h(0, y), ..., h(k, y)} are identical. This makes a collision twice as likely
as the hoped-for 1/|T |2 .
There are additionally a significant number of mostly-overlapping hash sets; if h2 (y) = h2 (x)
and h1(y) = h1(x) ± h2 (x), then h(i, y) = h(i ± 1, x), and comparing additional hash values
(expanding the range of i) is of no help.
Adding a quadratic term i2 ,[3] i(i + 1)/2 (a triangular number12 ) or even i2 · h3 (x) (triple
hashing) to the hash function improves the hash function somewhat[3] but does not fix this
problem; if:

8 https://en.wikipedia.org/wiki/Universal_hash
10 https://en.wikipedia.org/wiki/Michael_N._Katehakis
11 https://en.wikipedia.org/wiki/Bloom_filter
12 https://en.wikipedia.org/wiki/Triangular_number


h1 (y) = h1 (x) + k · h2 (x) + k 2 · h3 (x),


h2 (y) = −h2 (x) − 2k · h3 (x), and
h3 (y) = h3 (x).
then
h(k − i, y) = h1 (y) + (k − i) · h2 (y) + (k − i)2 · h3 (y)
= h1 (y) + (k − i)(−h2 (x) − 2kh3 (x)) + (k − i)2 h3 (x)
= h1 (y) + (i − k)h2 (x) + (2ki − 2k 2 )h3 (x) + (k 2 − 2ki + i2 )h3 (x)
= h1 (y) + (i − k)h2 (x) + (i2 − k 2 )h3 (x)
= h1 (x) + kh2 (x) + k 2 h3 (x) + (i − k)h2 (x) + (i2 − k 2 )h3 (x)
= h1 (x) + ih2 (x) + i2 h3 (x)
= h(i, x).
Adding a cubic term13 i3 [3] or (i3 − i)/6 (a tetrahedral number14 ),[4] does solve the problem,
a technique known as enhanced double hashing. This can be computed efficiently by
forward differencing15 :

struct key; // Opaque

extern unsigned int h1(struct key const *), h2(struct key const *);

// Calculate n hash values from the two underlying hash functions
// h1() and h2() using enhanced double hashing. On return,
// hashes[i] = h1(x) + i*h2(x) + (i*i*i - i)/6
// Takes advantage of automatic wrapping (modular reduction)
// of unsigned types in C.
void hash(struct key const *x, unsigned int hashes[], unsigned int n)
{
    unsigned int a = h1(x), b = h2(x), i;

    for (i = 0; i < n; i++) {
        hashes[i] = a;
        b += i; // Add linear difference to get quadratic difference
        a += b; // Add quadratic difference to get cubic value
        // i++ adds constant difference to get linear difference
    }
}

// Produces the same result, less legibly.
void hash_alt(struct key const *x, unsigned int hashes[], unsigned int n)
{
    unsigned int a = h1(x), b = h2(x), i;

    hashes[0] = a;
    for (i = 1; i < n; i++)
        hashes[i] = a += b += i - 1;
}
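As a sanity check on the forward-differencing idea, the iterative recurrence can be compared against direct evaluation of h1(x) + i·h2(x) + (i³ − i)/6; the constants A0 and B0 below are arbitrary stand-ins for h1(x) and h2(x):

```c
#include <assert.h>

#define A0 12345u  /* stand-in for h1(x) */
#define B0 678u    /* stand-in for h2(x) */

/* Iterative computation built from a constant third difference:
 * on return, hashes[i] = A0 + i*B0 + (i*i*i - i)/6 (mod 2^32). */
static void edh_forward(unsigned hashes[], unsigned n)
{
    unsigned a = A0, b = B0, i;
    for (i = 0; i < n; i++) {
        hashes[i] = a;
        b += i;  /* add linear difference to get the quadratic one */
        a += b;  /* add quadratic difference to get the cubic value */
    }
}

/* Direct evaluation for comparison; i*i*i - i is always divisible by 6. */
static unsigned edh_direct(unsigned i)
{
    return A0 + i * B0 + (i * i * i - i) / 6u;
}
```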

13 https://en.wikipedia.org/wiki/Cubic_function
14 https://en.wikipedia.org/wiki/Tetrahedral_number
15 https://en.wikipedia.org/wiki/Forward_difference


28.4 See also


• Cuckoo hashing16
• 2-choice hashing17

28.5 References
1. B, P G.; K, M N.18 (A 2007), ”A P-
 S  C E  H”19 (PDF),
SIAM Journal on Computing, 37 (1): 83–111, doi20 :10.1137/S009753970444630X21 ,
MR22 230628423 , archived from the original24 (PDF) on 2016-01-25.
2. D, P C. (D 2010). Adaptive Approximate State Storage25
(PDF) (PD ). N U. . 93–112.
3. K, A; M, M26 (S 2008). ”L H-
, S P: B  B B F”27 (PDF).  Ran-
dom Structures and Algorithms. 33 (2): 187–218. CiteSeerX28 10.1.1.152.57929 .
doi30 :10.1002/rsa.2020831 .
4. D, P C.; M, P (N 15–17, 2004). Bloom
Filters in Probabilistic Verification32 (PDF). 5 I C 
F M  C A D (FMCAD 2004). A, T.
CSX33 10.1.1.119.62834 . 35 :10.1007/978-3-540-30494-4_2636 .CS1 maint:
date format (link37 )

28.6 External links


• How Caching Affects Hashing38 by Gregory L. Heileman and Wenbin Luo 2005.

16 https://en.wikipedia.org/wiki/Cuckoo_hashing
17 https://en.wikipedia.org/wiki/2-choice_hashing
18 https://en.wikipedia.org/wiki/Michael_N._Katehakis
https://web.archive.org/web/20160125172602/http://phillipbradford.com/papers/
19
AProbStudyExpandersAndHashing.pdf
20 https://en.wikipedia.org/wiki/Doi_(identifier)
21 https://doi.org/10.1137%2FS009753970444630X
22 https://en.wikipedia.org/wiki/MR_(identifier)
23 http://www.ams.org/mathscinet-getitem?mr=2306284
24 http://phillipbradford.com/papers/AProbStudyExpandersAndHashing.pdf
25 http://peterd.org/pcd-diss.pdf#page=93
26 https://en.wikipedia.org/wiki/Michael_Mitzenmacher
27 https://www.eecs.harvard.edu/~michaelm/postscripts/rsa2008.pdf
28 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
29 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.152.579
30 https://en.wikipedia.org/wiki/Doi_(identifier)
31 https://doi.org/10.1002%2Frsa.20208
32 https://www.khoury.northeastern.edu/~pete/pub/bloom-filters-verification.pdf
33 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
34 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.119.628
35 https://en.wikipedia.org/wiki/Doi_(identifier)
36 https://doi.org/10.1007%2F978-3-540-30494-4_26
38 http://www.siam.org/meetings/alenex05/papers/13gheileman.pdf


• Hash Table Animation39


• klib40 a C library that includes double hashing functionality.

39 http://www.cs.pitt.edu/~kirk/cs1501/animations/Hashing.html
40 https://github.com/attractivechaos/klib

29 Cuckoo hashing

Figure 73 Cuckoo hashing example. The arrows show the alternative location of each
key. A new item would be inserted in the location of A by moving A to its alternative
location, currently occupied by B, and moving B to its alternative location which is
currently vacant. Insertion of a new item in the location of H would not succeed: Since H
is part of a cycle (together with W), the new item would get kicked out again.


Cuckoo hashing is a scheme in computer programming1 for resolving hash collisions2 of


values of hash functions3 in a table4 , with worst-case5 constant6 lookup time. The name
derives from the behavior of some species of cuckoo7 , where the cuckoo chick pushes the
other eggs or young out of the nest when it hatches; analogously, inserting a new key into
a cuckoo hashing table may push an older key to a different location in the table.

29.1 History

Cuckoo hashing was first described by Rasmus Pagh8 and Flemming Friche Rodler9 in
2001.[1]

29.2 Operation

Cuckoo hashing is a form of open addressing10 in which each non-empty cell of a hash table11
contains a key12 or key–value pair13 . A hash function14 is used to determine the location
for each key, and its presence in the table (or the value associated with it) can be found
by examining that cell of the table. However, open addressing suffers from collisions15 ,
which happen when more than one key is mapped to the same cell. The basic idea of
cuckoo hashing is to resolve collisions by using two hash functions instead of only one. This
provides two possible locations in the hash table for each key. In one of the commonly used
variants of the algorithm, the hash table is split into two smaller tables of equal size, and
each hash function provides an index into one of these two tables. It is also possible for
both hash functions to provide indexes into a single table.
Lookup requires inspection of just two locations in the hash table, which takes constant
time in the worst case. This is in contrast to many other hash table algorithms, which may
not have a constant worst-case bound on the time to do a lookup. Deletions, also, may be
performed by blanking the cell containing a key, in constant worst case time, more simply
than some other schemes such as linear probing16 .

1 https://en.wikipedia.org/wiki/Computer_programming
2 https://en.wikipedia.org/wiki/Hash_collision
3 https://en.wikipedia.org/wiki/Hash_function
4 https://en.wikipedia.org/wiki/Hash_table
5 https://en.wikipedia.org/wiki/Worst_case_analysis
6 https://en.wikipedia.org/wiki/Constant_time
7 https://en.wikipedia.org/wiki/Cuckoo
8 https://en.wikipedia.org/w/index.php?title=Rasmus_Pagh&action=edit&redlink=1
https://en.wikipedia.org/w/index.php?title=Flemming_Friche_Rodler&action=edit&
9
redlink=1
10 https://en.wikipedia.org/wiki/Open_addressing
11 https://en.wikipedia.org/wiki/Hash_table
12 https://en.wikipedia.org/wiki/Unique_key
13 https://en.wikipedia.org/wiki/Attribute%E2%80%93value_pair
14 https://en.wikipedia.org/wiki/Hash_function
15 https://en.wikipedia.org/wiki/Collision_(computer_science)
16 https://en.wikipedia.org/wiki/Linear_probing


When a new key is inserted, and one of its two cells is empty, it may be placed in that cell.
However, when both cells are already full, it will be necessary to move other keys to their
second locations (or back to their first locations) to make room for the new key. A greedy
algorithm17 is used: The new key is inserted in one of its two possible locations, ”kicking
out”, that is, displacing, any key that might already reside in this location. This displaced
key is then inserted in its alternative location, again kicking out any key that might reside
there. The process continues in the same way until an empty position is found, completing
the algorithm. However, it is possible for this insertion process to fail, by entering an infinite
loop18 or by finding a very long chain (longer than a preset threshold that is logarithmic19 in
the table size). In this case, the hash table is rebuilt in-place20 using new hash functions21 :
There is no need to allocate new tables for the rehashing: We may simply run through
the tables to delete and perform the usual insertion procedure on all keys found not to
be at their intended position in the table.

P & R, ”C H”[1]

29.3 Theory

Insertions succeed in expected constant time,[1] even considering the possibility of having
to rebuild the table, as long as the number of keys is kept below half of the capacity of the
hash table, i.e., the load factor22 is below 50%.
One method of proving this uses the theory of random graphs23 : one may form an undirected
graph24 called the ”cuckoo graph” that has a vertex for each hash table location, and an
edge for each hashed value, with the endpoints of the edge being the two possible locations
of the value. Then, the greedy insertion algorithm for adding a set of values to a cuckoo
hash table succeeds if and only if the cuckoo graph for this set of values is a pseudoforest25 ,
a graph with at most one cycle in each of its connected components26 . Any vertex-induced
subgraph with more edges than vertices corresponds to a set of keys for which there are an
insufficient number of slots in the hash table. When the hash function is chosen randomly,
the cuckoo graph is a random graph27 in the Erdős–Rényi model28 . With high probability,
for a random graph in which the ratio of the number of edges to the number of vertices is
bounded below 1/2, the graph is a pseudoforest and the cuckoo hashing algorithm succeeds
in placing all keys. Moreover, the same theory also proves that the expected size of a

17 https://en.wikipedia.org/wiki/Greedy_algorithm
18 https://en.wikipedia.org/wiki/Infinite_loop
19 https://en.wikipedia.org/wiki/Logarithm
20 https://en.wikipedia.org/wiki/In-place_algorithm
21 https://en.wikipedia.org/wiki/Hash_function
22 https://en.wikipedia.org/wiki/Load_factor_(computer_science)
23 https://en.wikipedia.org/wiki/Random_graph
24 https://en.wikipedia.org/wiki/Undirected_graph
25 https://en.wikipedia.org/wiki/Pseudoforest
26 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
27 https://en.wikipedia.org/wiki/Random_graph
28 https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model


connected component29 of the cuckoo graph is small, ensuring that each insertion takes
constant expected time.[2]

29.4 Practice

In practice, cuckoo hashing is about 20–30% slower than linear probing30 , which is the
fastest of the common approaches.[1] The reason is that cuckoo hashing often causes two
cache misses per search, to check the two locations where a key might be stored, while linear
probing usually causes only one cache miss per search. However, because of its worst case
guarantees on search time, cuckoo hashing can still be valuable when real-time response
rates31 are required.

29.5 Example

The following hash functions are given:


h(k) = k mod 11
h′(k) = ⌊k/11⌋ mod 11

k h(k) h'(k)
20 9 1
50 6 4
53 9 4
75 9 6
100 1 9
67 1 6
105 6 9
3 3 0
36 3 3
39 6 3
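The two columns can be reproduced mechanically; the sketch below simply restates the table's rows as data for a self-check:

```c
#include <assert.h>

static unsigned h(unsigned k)      { return k % 11u; }         /* h(k)  */
static unsigned hprime(unsigned k) { return (k / 11u) % 11u; } /* h'(k) */

/* The ten example keys and their hash values, as listed in the table. */
static const unsigned keys[10] = {20, 50, 53, 75, 100, 67, 105, 3, 36, 39};
static const unsigned hk[10]   = { 9,  6,  9,  9,   1,  1,   6, 3,  3,  6};
static const unsigned hpk[10]  = { 1,  4,  4,  6,   9,  6,   9, 0,  3,  3};
```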

Columns in the following two tables show the state of the hash tables over time as the
elements are inserted.
1. table for h(k)

   20   50   53   75   100  67   105  3    36   39
 0
 1                     100  67   67   67   67   100
 2
 3                                    3    36   36
 4
 5
 6      50   50   50   50   50   105  105  105  50
 7
 8
 9 20   20   53   75   75   75   53   53   53   75

29 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
30 https://en.wikipedia.org/wiki/Linear_probing
31 https://en.wikipedia.org/wiki/Real-time_computing

2. table for h′(k)

   20   50   53   75   100  67   105  3    36   39
 0                                    3    3
 1      20   20   20   20   20   20   20   20
 2
 3                                              39
 4           53   53   53   50   50   50   53
 5
 6                               75   75   75   67
 7
 8
 9                     100  100  100  100  105

29.5.1 Cycle

If you now wish to insert the element 6, the insertion runs into a cycle: the last row of the
table below repeats the initial situation from the beginning.
h(6) = 6 mod 11 = 6
h′(6) = ⌊6/11⌋ mod 11 = 0

table 1 table 2
6 replaces 50 in cell 6 50 replaces 53 in cell 4
53 replaces 75 in cell 9 75 replaces 67 in cell 6
67 replaces 100 in cell 1 100 replaces 105 in cell 9
105 replaces 6 in cell 6 6 replaces 3 in cell 0
3 replaces 36 in cell 3 36 replaces 39 in cell 3
39 replaces 105 in cell 6 105 replaces 100 in cell 9
100 replaces 67 in cell 1 67 replaces 75 in cell 6
75 replaces 53 in cell 9 53 replaces 50 in cell 4
50 replaces 39 in cell 6 39 replaces 36 in cell 3
36 replaces 3 in cell 3 3 replaces 6 in cell 0
6 replaces 50 in cell 6 50 replaces 53 in cell 4


29.6 Variations

Several variations of cuckoo hashing have been studied, primarily with the aim of improving
its space usage by increasing the load factor32 that it can tolerate to a number greater than
the 50% threshold of the basic algorithm. Some of these methods can also be used to reduce
the failure rate of cuckoo hashing, causing rebuilds of the data structure to be much less
frequent.
Generalizations of cuckoo hashing that use more than two alternative hash functions can be
expected to utilize a larger part of the capacity of the hash table efficiently while sacrificing
some lookup and insertion speed. Using just three hash functions increases the load to
91%.[3] Another generalization of cuckoo hashing, called blocked cuckoo hashing consists in
using more than one key per bucket. Using just 2 keys per bucket permits a load factor
above 80%.[4]
Another variation of cuckoo hashing that has been studied is cuckoo hashing with a stash.
The stash, in this data structure, is an array of a constant number of keys, used to store
keys that cannot successfully be inserted into the main hash table of the structure. This
modification reduces the failure rate of cuckoo hashing to an inverse-polynomial function
with an exponent that can be made arbitrarily large by increasing the stash size. However,
larger stashes also mean slower searches for keys that are not present or are in the stash.
A stash can be used in combination with more than two hash functions or with blocked
cuckoo hashing to achieve both high load factors and small failure rates.[5] The analysis of
cuckoo hashing with a stash extends to practical hash functions, not just to the random
hash function model commonly used in theoretical analysis of hashing.[6]
A simplified generalization of cuckoo hashing, called skewed-
associative cache33 , is used in some CPU caches34 .[7]
Another variation of a cuckoo hash table, called a cuckoo filter, replaces the stored keys
of a cuckoo hash table with much shorter fingerprints, computed by applying another hash
function to the keys. In order to allow these fingerprints to be moved around within the
cuckoo filter, without knowing the keys that they came from, the two locations of each
fingerprint may be computed from each other by a bitwise exclusive or35 operation with the
fingerprint, or with a hash of the fingerprint. This data structure forms an approximate set
membership data structure with much the same properties as a Bloom filter36 : it can store
the members of a set of keys, and test whether a query key is a member, with some chance
of false positives37 (queries that are incorrectly reported as being part of the set) but no
false negatives38 . However, it improves on a Bloom filter in multiple respects: its memory
usage is smaller by a constant factor, it has better locality of reference39 , and (unlike Bloom
filters) it allows for fast deletion of set elements with no additional storage penalty.[8]
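The exclusive-or trick for the cuckoo filter's partner location can be sketched as follows (the function fp_hash and the sizes are hypothetical stand-ins): because XOR is its own inverse, applying the same computation to either bucket of a fingerprint yields the other bucket, without knowing the original key:

```c
#include <assert.h>
#include <stdint.h>

#define NBUCKETS 1024u  /* power of two, so the mask below is valid (illustrative) */

/* Hypothetical hash of the (short) fingerprint itself. */
static uint32_t fp_hash(uint32_t fp) { return fp * 2654435761u; }

/* The other bucket of fingerprint fp currently stored in bucket i.
 * Applying the function twice returns the original bucket. */
static uint32_t alt_bucket(uint32_t i, uint32_t fp)
{
    return (i ^ fp_hash(fp)) & (NBUCKETS - 1u);
}
```

The involution property alt_bucket(alt_bucket(i, fp), fp) == i is what lets a fingerprint be displaced back and forth between its two buckets during cuckoo-style insertion.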

32 https://en.wikipedia.org/wiki/Load_factor_(computer_science)
33 https://en.wikipedia.org/wiki/CPU_cache#Two-way_skewed_associative_cache
34 https://en.wikipedia.org/wiki/CPU_cache
35 https://en.wikipedia.org/wiki/Exclusive_or
36 https://en.wikipedia.org/wiki/Bloom_filter
37 https://en.wikipedia.org/wiki/False_positive
38 https://en.wikipedia.org/wiki/False_negative
39 https://en.wikipedia.org/wiki/Locality_of_reference


29.7 Comparison with related structures

A study by Zukowski et al.[9] has shown that cuckoo hashing is much faster than chained
hashing40 for small, cache41 -resident hash tables on modern processors. Kenneth Ross[10] has
shown bucketized versions of cuckoo hashing (variants that use buckets that contain more
than one key) to be faster than conventional methods also for large hash tables, when space
utilization is high. The performance of the bucketized cuckoo hash table was investigated
further by Askitis,[11] with its performance compared against alternative hashing schemes.
A survey by Mitzenmacher42[3] presents open problems related to cuckoo hashing as of 2009.

29.8 See also


• Perfect hashing43
• Double hashing44
• Quadratic probing45
• Hopscotch hashing46

29.9 References
1. P, R; R, F F (2001). ”C H”. Al-
gorithms — ESA 2001. Lecture Notes in Computer Science. 2161. pp. 121–133.
CiteSeerX47 10.1.1.25.418948 . doi49 :10.1007/3-540-44676-1_1050 . ISBN51 978-3-540-
42493-252 .
2. K, R (2006). Bipartite random graphs and cuckoo hashing53
(PDF). F C  M  C S. D-
 M  T C S. AG. pp. 403–406.
3. M, M54 (2009-09-09). ”S O Q R 
C H | P  ESA 2009”55 (PDF). R 2010-11-10.
Cite journal requires |journal= (help56 )

40 https://en.wikipedia.org/wiki/Separate_chaining
41 https://en.wikipedia.org/wiki/CPU_cache
42 https://en.wikipedia.org/wiki/Michael_Mitzenmacher
43 https://en.wikipedia.org/wiki/Perfect_hashing
44 https://en.wikipedia.org/wiki/Double_hashing
45 https://en.wikipedia.org/wiki/Quadratic_probing
46 https://en.wikipedia.org/wiki/Hopscotch_hashing
47 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
48 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.4189
49 https://en.wikipedia.org/wiki/Doi_(identifier)
50 https://doi.org/10.1007%2F3-540-44676-1_10
51 https://en.wikipedia.org/wiki/ISBN_(identifier)
52 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-42493-2
https://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewFile/dmAG0133/1710.
53
pdf
54 https://en.wikipedia.org/wiki/Michael_Mitzenmacher
55 http://www.eecs.harvard.edu/~michaelm/postscripts/esa2009.pdf


4. D, M; W, C (2007), ”B -


        ”, Theoret.
Comput. Sci., 380 (1–2): 47–68, doi57 :10.1016/j.tcs.2007.02.05458 , MR59 233064160 .
5. K, A; M, M D.; W, U (2010), ”M -
 :     ”, SIAM J. Comput., 39 (4): 1543–
1561, doi61 :10.1137/08072874362 , MR63 258053964 .
6. A, M; D, M; W, P (2014), ”E-
           ”,
Algorithmica, 70 (3): 428–456, arXiv65 :1204.443166 , doi67 :10.1007/s00453-013-9840-
x68 , MR69 324737470 .
7. ”Micro-Architecture”71 .
8. F, B; A, D G.; K, M; M, M
D.72 (2014), ”C : P   B”, Proc. 10th
ACM Int. Conf. Emerging Networking Experiments and Technologies (CoNEXT
'14), pp. 75–88, doi73 :10.1145/2674005.267499474
9. Z, M; H, S; B, P (J 2006).
”A-C H”75 (PDF). P   I-
 W  D M  N H (DMN). R-
 2008-10-16. Cite journal requires |journal= (help76 )
10. R, K (2006-11-08). ”E H P  M P-
”77 (PDF). IBM R R RC24100. RC24100. R 2008-10-
16. Cite journal requires |journal= (help78 )
11. A, N (2009). Fast and Compact Hash Tables for Integer Keys79 (PDF).
Proceedings of the 32nd Australasian Computer Science Conference (ACSC 2009).

57 https://en.wikipedia.org/wiki/Doi_(identifier)
58 https://doi.org/10.1016%2Fj.tcs.2007.02.054
59 https://en.wikipedia.org/wiki/MR_(identifier)
60 http://www.ams.org/mathscinet-getitem?mr=2330641
61 https://en.wikipedia.org/wiki/Doi_(identifier)
62 https://doi.org/10.1137%2F080728743
63 https://en.wikipedia.org/wiki/MR_(identifier)
64 http://www.ams.org/mathscinet-getitem?mr=2580539
65 https://en.wikipedia.org/wiki/ArXiv_(identifier)
66 http://arxiv.org/abs/1204.4431
67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1007%2Fs00453-013-9840-x
69 https://en.wikipedia.org/wiki/MR_(identifier)
70 http://www.ams.org/mathscinet-getitem?mr=3247374
71 http://www.irisa.fr/caps/PROJECTS/Architecture/
72 https://en.wikipedia.org/wiki/Michael_Mitzenmacher
73 https://en.wikipedia.org/wiki/Doi_(identifier)
74 https://doi.org/10.1145%2F2674005.2674994
75 https://www.cs.cmu.edu/~damon2006/pdf/zukowski06archconscioushashing.pdf
http://domino.research.ibm.com/library/cyberdig.nsf/papers/DF54E3545C82E8A585257222006FD9A2/
77
\protect\TU\textdollar{}File/rc24100.pdf
https://web.archive.org/web/20110216180225/http://crpit.com/confpapers/
79
CRPITV91Askitis.pdf


91. pp. 113–122. ISBN80 978-1-920682-72-981 . Archived from the original82 (PDF)
on 2011-02-16. Retrieved 2010-06-13.

29.10 External links


• A cool and practical alternative to traditional hash tables83 , U. Erlingsson, M. Manasse,
F. Mcsherry, 2006.
• Cuckoo Hashing for Undergraduates, 200684 , R. Pagh, 2006.
• Cuckoo Hashing, Theory and Practice85 (Part 1, Part 286 and Part 387 ), Michael Mitzenmacher,
2007.
• N, M; S, G; W, U (2008). ”H-I C
H”88 . International Colloquium on Automata, Languages and Programming
(ICALP). Reykjavik, Iceland. Retrieved 2008-07-21.
• Algorithmic Improvements for Fast Concurrent Cuckoo Hashing89 , X. Li, D. Andersen,
M. Kaminsky, M. Freedman. EuroSys 2014.

29.10.1 Examples
• Concurrent high-performance Cuckoo hashtable written in C++90
• Cuckoo hash map written in C++91
• Static cuckoo hashtable generator for C/C++92
• Cuckoo hash table written in Haskell93
• Cuckoo hashing for Go94

80 https://en.wikipedia.org/wiki/ISBN_(identifier)
81 https://en.wikipedia.org/wiki/Special:BookSources/978-1-920682-72-9
82 http://crpit.com/confpapers/CRPITV91Askitis.pdf
83 http://www.ru.is/faculty/ulfar/CuckooHash.pdf
84 http://www.itu.dk/people/pagh/papers/cuckoo-undergrad.pdf
85 http://mybiasedcoin.blogspot.com/2007/06/cuckoo-hashing-theory-and-practice-part.html
http://mybiasedcoin.blogspot.com/2007/06/cuckoo-hashing-theory-and-practice-
86
part_15.html
http://mybiasedcoin.blogspot.com/2007/06/cuckoo-hashing-theory-and-practice-
87
part_19.html
88 http://www.wisdom.weizmann.ac.il/~naor/PAPERS/cuckoo_hi_abs.html
89 http://www.cs.princeton.edu/~mfreed/docs/cuckoo-eurosys14.pdf
90 https://github.com/efficient/libcuckoo
91 http://sourceforge.net/projects/cuckoo-cpp/
92 http://www.theiling.de/projects/lookuptable.html
http://hackage.haskell.org/packages/archive/hashtables/latest/doc/html/Data-
93
HashTable-ST-Cuckoo.html
94 https://github.com/salviati/cuckoo

30 Random number generation



Figure 75 Dice are an example of a mechanical hardware random number generator.
When a cubical die is rolled, a random number from 1 to 6 is obtained.

A random number generator (RNG) is a device that generates a sequence of numbers11
or symbols that cannot be reasonably predicted better than by random12 chance.
Random number generators can be true hardware random-number generators13
(HRNG), which generate genuinely random numbers, or pseudo-random number genera-
tors14 (PRNG), which generate numbers that look random, but are actually deterministic,
and can be reproduced if the state of the PRNG is known.
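That determinism is easy to demonstrate with a toy generator; the sketch below uses a linear congruential generator with the well-known Numerical Recipes constants (illustrative only, and certainly not cryptographically secure). Seeding it twice with the same value reproduces the identical sequence:

```c
#include <assert.h>
#include <stdint.h>

/* Toy pseudo-random number generator: a linear congruential
 * generator with the Numerical Recipes constants.  The entire
 * "state" is one 32-bit word; whoever knows it can predict
 * every subsequent output. */
static uint32_t lcg_state;

static void lcg_seed(uint32_t seed) { lcg_state = seed; }

static uint32_t lcg_next(void)
{
    lcg_state = lcg_state * 1664525u + 1013904223u;  /* wraps mod 2^32 */
    return lcg_state;
}
```

Restarting from the same seed replaying the same outputs is precisely the property exploited for Monte Carlo debugging and shared-seed key generation mentioned below.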
Various applications of randomness15 have led to the development of several different meth-
ods for generating random16 data, of which some have existed since ancient times, among
whose ranks are well-known ”classic” examples, including the rolling of dice17 , coin flip-
ping18 , the shuffling19 of playing cards20 , the use of yarrow21 stalks (for divination22 ) in
the I Ching23 , as well as countless other techniques. Because of the mechanical nature of

11 https://en.wikipedia.org/wiki/Number
12 https://en.wikipedia.org/wiki/Random
13 https://en.wikipedia.org/wiki/Hardware_random_number_generator
14 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
15 https://en.wikipedia.org/wiki/Applications_of_randomness
16 https://en.wikipedia.org/wiki/Randomness
17 https://en.wikipedia.org/wiki/Dice
18 https://en.wikipedia.org/wiki/Coin_flipping
19 https://en.wikipedia.org/wiki/Shuffling
20 https://en.wikipedia.org/wiki/Playing_card
21 https://en.wikipedia.org/wiki/Yarrow
22 https://en.wikipedia.org/wiki/Divination
23 https://en.wikipedia.org/wiki/I_Ching


these techniques, generating large numbers of sufficiently random numbers (important in
statistics) required a lot of work and/or time. Thus, results would sometimes be collected
and distributed as random number tables24 .
Several computational methods for pseudo-random number generation exist. All fall short
of the goal of true randomness, although they may meet, with varying success, some of the
statistical tests for randomness25 intended to measure how unpredictable their results are
(that is, to what degree their patterns are discernible). This generally makes them unusable
for applications such as cryptography26 . However, carefully designed cryptographically
secure pseudo-random number generators27 (CSPRNG) also exist, with special fea-
tures specifically designed for use in cryptography.

30.1 Practical applications and uses

Main article: Applications of randomness28
Random number generators have applications
in gambling29 , statistical sampling30 , computer simulation31 , cryptography32 , completely
randomized design33 , and other areas where producing an unpredictable result is desirable.
Generally, in applications having unpredictability as the paramount feature, such as in security
applications, hardware generators34 are generally preferred over pseudo-random algorithms,
where feasible.
Random number generators are very useful in developing Monte Carlo-method35 simula-
tions, as debugging36 is facilitated by the ability to run the same sequence of random
numbers again by starting from the same random seed37 . They are also used in cryptog-
raphy38 – so long as the seed is secret. Sender and receiver can generate the same set of
numbers automatically to use as keys.
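As an illustration of this reproducibility, the toy Monte Carlo estimate of π below is a sketch (the function name and the seed value are chosen for illustration): seeding a dedicated generator makes every run repeat the same sequence, which is what makes such simulations debuggable.

```python
import random

def estimate_pi(n, seed):
    """Monte Carlo estimate of pi: count points falling inside the unit quarter-circle.
    The fixed seed makes the run reproducible, which aids debugging."""
    rng = random.Random(seed)  # independent generator; does not disturb global state
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

run1 = estimate_pi(10_000, seed=12345)
run2 = estimate_pi(10_000, seed=12345)  # same seed, therefore identical result
```

Because both runs start from the same seed, `run1` and `run2` are equal bit-for-bit, so a failure observed in one run can be replayed exactly.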
The generation of pseudo-random numbers39 is an important and common task in com-
puter programming. While cryptography and certain numerical algorithms require a very
high degree of apparent randomness, many other operations only need a modest amount of
unpredictability. Some simple examples might be presenting a user with a ”Random Quote
of the Day”, or determining which way a computer-controlled adversary might move in a

24 https://en.wikipedia.org/wiki/Random_number_table
25 https://en.wikipedia.org/wiki/Statistical_randomness
26 https://en.wikipedia.org/wiki/Cryptography
27 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
28 https://en.wikipedia.org/wiki/Applications_of_randomness
29 https://en.wikipedia.org/wiki/Gambling
30 https://en.wikipedia.org/wiki/Statistical_sampling
31 https://en.wikipedia.org/wiki/Computer_simulation
32 https://en.wikipedia.org/wiki/Cryptography
33 https://en.wikipedia.org/wiki/Completely_randomized_design
34 https://en.wikipedia.org/wiki/Hardware_random_number_generator
35 https://en.wikipedia.org/wiki/Monte_Carlo_method
36 https://en.wikipedia.org/wiki/Debugging
37 https://en.wikipedia.org/wiki/Random_seed
38 https://en.wikipedia.org/wiki/Cryptography
39 https://en.wikipedia.org/wiki/Pseudo-random_number


computer game. Weaker forms of randomness are used in hash algorithms40 and in creating
amortized41 searching42 and sorting algorithms43 .
Some applications which appear at first sight to be suitable for randomization are in fact
not quite so simple. For instance, a system that ”randomly” selects music tracks for a
background music system must only appear random, and may even have ways to control
the selection of music: a true random system would have no restriction on the same item
appearing two or three times in succession.

30.2 ”True” vs. pseudo-random numbers

Main articles: Pseudorandom number generator44 and Hardware random number genera-
tor45 See also: Cryptographically secure pseudorandom number generator46 There are two
principal methods used to generate random numbers. The first method measures some
physical phenomenon that is expected to be random and then compensates for possible bi-
ases in the measurement process. Example sources include measuring atmospheric noise47 ,
thermal noise, and other external electromagnetic and quantum phenomena. For exam-
ple, cosmic background radiation or radioactive decay as measured over short timescales
represent sources of natural entropy48 .
The speed at which entropy can be harvested from natural sources is dependent on the
underlying physical phenomena being measured. Thus, sources of naturally occurring ”true”
entropy are said to be blocking49 – they are rate-limited until enough entropy is harvested
to meet the demand. On some Unix-like systems, including most Linux distributions50 , the
pseudo device file /dev/random51 will block until sufficient entropy is harvested from the
environment.[1] Due to this blocking behavior, large bulk reads from /dev/random52 , such
as filling a hard disk drive53 with random bits, can often be slow on systems that use this
type of entropy source.
The second method uses computational algorithms54 that can produce long sequences of
apparently random results, which are in fact completely determined by a shorter initial
value, known as a seed value or key55 . As a result, the entire seemingly random sequence
can be reproduced if the seed value is known. This type of random number generator is

40 https://en.wikipedia.org/wiki/Hash_algorithm
41 https://en.wikipedia.org/wiki/Amortization
42 https://en.wikipedia.org/wiki/Search_algorithm
43 https://en.wikipedia.org/wiki/Sorting_algorithm
44 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
45 https://en.wikipedia.org/wiki/Hardware_random_number_generator
46 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
47 https://en.wikipedia.org/wiki/Atmospheric_noise
48 https://en.wikipedia.org/wiki/Entropy_(information_theory)
49 https://en.wikipedia.org/wiki/Blocking_(computing)
50 https://en.wikipedia.org/wiki/Linux_distribution
51 https://en.wikipedia.org/wiki//dev/random
52 https://en.wikipedia.org/wiki//dev/random
53 https://en.wikipedia.org/wiki/Hard_disk_drive
54 https://en.wikipedia.org/wiki/Algorithm
55 https://en.wikipedia.org/wiki/Key_(cryptography)


often called a pseudorandom number generator56 . This type of generator typically does
not rely on sources of naturally occurring entropy, though it may be periodically seeded by
natural sources. Because this generator type is non-blocking, it is not rate-limited by an
external event, making large bulk reads possible.
Some systems take a hybrid approach, providing randomness harvested from natural sources
when available, and falling back to periodically re-seeded software-based cryptographically
secure pseudorandom number generators57 (CSPRNGs). The fallback occurs when the
desired read rate of randomness exceeds the ability of the natural harvesting approach
to keep up with the demand. This approach avoids the rate-limited blocking behavior of
random number generators based on slower and purely environmental methods.
While a pseudorandom number generator based solely on deterministic logic can never be
regarded as a ”true” random number source in the purest sense of the word, in practice such
generators are generally sufficient even for demanding security-critical applications. Indeed, carefully
designed and implemented pseudo-random number generators can be certified for security-
critical cryptographic purposes, as is the case with the Yarrow algorithm58 and Fortuna59 .
The former is the basis of the /dev/random source of entropy on FreeBSD60 , AIX61 , OS X62 ,
NetBSD63 , and others. OpenBSD64 uses a pseudo-random number algorithm known as
arc4random65 .[2]
In October 2019, it was noted that introducing Quantum Random Number Generators
(QRNGs) to machine learning models (Neural Networks and Convolutional Neural Networks
for random initial weight distribution, and Random Forests for splitting processes) had a
profound effect on their performance when compared to the classical approach of
Pseudorandom Number Generators (PRNGs)[3] .

30.3 Generation methods

30.3.1 Physical methods

Main article: Hardware random number generator66 The earliest methods for generating
random numbers, such as dice67 , coin flipping68 and roulette69 wheels, are still used today,

56 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
57 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
58 https://en.wikipedia.org/wiki/Yarrow_algorithm
59 https://en.wikipedia.org/wiki/Fortuna_(PRNG)
60 https://en.wikipedia.org/wiki/FreeBSD
61 https://en.wikipedia.org/wiki/AIX
62 https://en.wikipedia.org/wiki/OS_X
63 https://en.wikipedia.org/wiki/NetBSD
64 https://en.wikipedia.org/wiki/OpenBSD
65 https://en.wikipedia.org/wiki/RC4#RC4-based_random_number_generators
66 https://en.wikipedia.org/wiki/Hardware_random_number_generator
67 https://en.wikipedia.org/wiki/Dice
68 https://en.wikipedia.org/wiki/Coin_flipping
69 https://en.wikipedia.org/wiki/Roulette


mainly in games70 and gambling, as they tend to be too slow for most applications in statistics
and cryptography.
A physical random number generator can be based on an essentially random atomic or sub-
atomic physical phenomenon whose unpredictability can be traced to the laws of quantum
mechanics71 . Sources of entropy72 include radioactive decay73 , thermal noise74 , shot noise75 ,
avalanche noise in Zener diodes76 , clock drift77 , the timing of actual movements of a hard
disk78 read/write head, and radio noise79 . However, physical phenomena and tools used
to measure them generally feature asymmetries and systematic biases80 that make their
outcomes not uniformly random. A randomness extractor81 , such as a cryptographic hash
function82 , can be used to approach a uniform distribution of bits from a non-uniformly
random source, though at a lower bit rate.
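As a rough sketch of such whitening (a heuristic illustration, not a vetted extractor construction; the block size and bias are chosen for illustration), a biased bit stream can be condensed by hashing fixed-size blocks with a cryptographic hash, at the cost of a lower output bit rate:

```python
import hashlib
import random

def extract(bits, block=512):
    """Condense a biased 0/1 bit stream into near-uniform bytes by hashing
    fixed-size blocks with SHA-256 (one input byte per bit, for simplicity).
    512 bits in -> 256 bits out per block, i.e. half the input bit rate."""
    out = bytearray()
    for i in range(0, len(bits) - block + 1, block):
        chunk = bytes(bits[i:i + block])
        out.extend(hashlib.sha256(chunk).digest())
    return bytes(out)

rng = random.Random(1)
biased = [1 if rng.random() < 0.9 else 0 for _ in range(2048)]  # 90% ones: far from uniform
uniformish = extract(biased)   # 2048 bits -> 4 blocks -> 128 output bytes
```

The extraction is deterministic for a given input; only the entropy of the source, not the hash, makes the output unpredictable.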
The appearance of wideband photonic entropy sources, such as optical chaos83 and amplified
spontaneous emission84 noise, has greatly aided the development of physical random number
generators. Among them, optical chaos[4][5] has a high potential to physically produce high-
speed random numbers due to its high bandwidth and large amplitude. A prototype of a
high speed, real-time physical random bit generator based on a chaotic laser was built in
2013.[6]
Various imaginative ways of collecting this entropic information have been devised. One
technique is to run a hash function against a frame of a video stream from an unpredictable
source. Lavarand85 used this technique with images of a number of lava lamps86 . HotBits87
measures radioactive decay with Geiger–Müller tubes88 ,[7] while Random.org89 uses
variations in the amplitude of atmospheric noise recorded with a normal radio.
Another common entropy source is the behavior of human users of the system. While
people are not considered good randomness generators upon request, they generate random
behavior quite well in the context of playing mixed strategy90 games.[8] Some security-
related computer software requires the user to make a lengthy series of mouse movements

70 https://en.wikipedia.org/wiki/Game
71 https://en.wikipedia.org/wiki/Quantum_mechanics
72 https://en.wikipedia.org/wiki/Entropy_(information_theory)
73 https://en.wikipedia.org/wiki/Radioactive_decay
74 https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise
75 https://en.wikipedia.org/wiki/Shot_noise
76 https://en.wikipedia.org/wiki/Zener_diode
77 https://en.wikipedia.org/wiki/Clock_drift#Random_number_generators
78 https://en.wikipedia.org/wiki/Hard_disk
79 https://en.wikipedia.org/wiki/Noise_(radio)
80 https://en.wikipedia.org/wiki/Systematic_bias
81 https://en.wikipedia.org/wiki/Randomness_extractor
82 https://en.wikipedia.org/wiki/Cryptographic_hash_function
83 https://en.wikipedia.org/wiki/Optical_chaos
84 https://en.wikipedia.org/wiki/Amplified_spontaneous_emission
85 https://en.wikipedia.org/wiki/Lavarand
86 https://en.wikipedia.org/wiki/Lava_lamp
87 http://www.fourmilab.ch/hotbits/
88 https://en.wikipedia.org/wiki/Geiger%E2%80%93Muller_tube
89 https://en.wikipedia.org/wiki/Random.org
90 https://en.wikipedia.org/wiki/Mixed_strategy


or keyboard inputs to create sufficient entropy needed to generate random keys91 or to


initialize pseudorandom number generators.[9]

30.3.2 Computational methods

Most computer generated random numbers use pseudorandom number generators92


(PRNGs) which are algorithms93 that can automatically create long runs of numbers with
good random properties, but eventually the sequence repeats (or the memory usage grows
without bound). These random numbers are fine in many situations but are not as random
as numbers generated from electromagnetic atmospheric noise used as a source of entropy.[10]
The series of values generated by such algorithms is generally determined by a fixed number
called a seed. One of the most common PRNGs is the linear congruential generator94 , which
uses the recurrence
Xn+1 = (aXn + b) mod m
to generate numbers, where a, b and m are large integers and Xn+1 is the next value in the
series of pseudo-random numbers. The maximum number of distinct values the formula can
produce is the modulus95 , m (or m-1 when b = 0). The recurrence relation can be extended
to matrices to obtain much longer periods and better statistical properties.[11] To avoid
certain non-random properties of a single linear congruential generator, several such random
number generators with slightly different values of the multiplier coefficient, a, can be used
in parallel, with a ”master” random number generator that selects from among the several
different generators.[citation needed ]
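The recurrence above can be sketched in a few lines of Python. The constants a, b and m below are chosen purely for illustration (they resemble those used in some C library rand() implementations) and are not suitable for serious use:

```python
from itertools import islice

def lcg(seed, a=1103515245, b=12345, m=2 ** 31):
    """Linear congruential generator: yields X_{n+1} = (a*X_n + b) mod m."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

g1 = list(islice(lcg(42), 5))  # two runs with the same seed...
g2 = list(islice(lcg(42), 5))  # ...produce the same sequence
```

The same seed always reproduces the same sequence, and every output lies in the range [0, m).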
A simple pen-and-paper method for generating random numbers is the so-called middle
square method97 suggested by John von Neumann98 . While simple to implement, its output
is of poor quality. It has a very short period and severe weaknesses, such as the output
sequence almost always converging to zero. A recent innovation is to combine the middle
square with a Weyl sequence99 . This method produces high quality output through a long
period. See Middle Square Weyl Sequence PRNG100 .
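A minimal Python sketch of the plain middle-square step (without the Weyl-sequence improvement; the digit width is assumed even so the middle slice is well-defined), which also exhibits the degenerate all-zero fixed point mentioned above:

```python
from itertools import islice

def middle_square(seed, digits=4):
    """Von Neumann's middle-square method: square the current value,
    zero-pad to 2*digits digits, and take the middle digits as the next value."""
    x = seed
    while True:
        sq = str(x * x).zfill(2 * digits)
        mid = len(sq) // 2
        x = int(sq[mid - digits // 2: mid + digits // 2])
        yield x

vals = list(islice(middle_square(1234), 3))   # deterministic sequence from the seed
stuck = list(islice(middle_square(0), 3))     # degenerate case: 0 maps to 0 forever
```

Starting from 1234 the sequence runs 5227, 3215, 3362, …, while a seed of 0 (and several other values) collapses immediately, illustrating the method's short periods.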
Most computer programming languages include functions or library routines that provide
random number generators. They are often designed to provide a random byte or word, or
a floating point101 number uniformly distributed102 between 0 and 1.
The quality, i.e. the randomness, of such library functions varies widely, from completely pre-
dictable output to cryptographically secure. The default random number generator in
many languages, including Python, Ruby, R, IDL and PHP is based on the Mersenne

91 https://en.wikipedia.org/wiki/Key_(cryptography)
92 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
93 https://en.wikipedia.org/wiki/Algorithm
94 https://en.wikipedia.org/wiki/Linear_congruential_generator
95 https://en.wikipedia.org/wiki/Modulus_(algebraic_number_theory)
97 https://en.wikipedia.org/wiki/Middle_square_method
98 https://en.wikipedia.org/wiki/John_von_Neumann
99 https://en.wikipedia.org/wiki/Weyl_sequence
100 https://en.wikipedia.org/wiki/Middle_square_method#Middle_Square_Weyl_Sequence_PRNG
101 https://en.wikipedia.org/wiki/Floating_point
102 https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)


Twister103 algorithm and is not sufficient for cryptography purposes, as is explicitly stated
in the language documentation. Such library functions often have poor statistical proper-
ties and some will repeat patterns after only tens of thousands of trials. They are often
initialized using a computer's real time clock104 as the seed, since such a clock generally
measures in milliseconds, far finer than a person's precision105 . These functions may pro-
vide enough randomness for certain tasks (for example video games) but are unsuitable
where high-quality randomness is required, such as in cryptography applications, statistics
or numerical analysis.[citation needed ]
Much higher quality random number sources are available on most operating systems; for
example /dev/random107 on various BSD flavors, Linux, Mac OS X, IRIX, and Solaris,
or CryptGenRandom108 for Microsoft Windows. Most programming languages, including
those mentioned above, provide a means to access these higher quality sources.
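In Python, for example, the os and secrets modules expose the operating system's entropy-backed CSPRNG (the same kind of source as /dev/random and CryptGenRandom); the sizes below are illustrative:

```python
import os
import secrets

# os.urandom reads bytes from the operating system's CSPRNG
key_bytes = os.urandom(16)

# the secrets module wraps the same source with convenience helpers
token = secrets.token_hex(16)        # 16 random bytes as 32 hex characters
lottery_pick = secrets.randbelow(100)  # unbiased integer in [0, 100)
```

Unlike the module-level functions in `random`, these calls are suitable for security-sensitive uses such as key and token generation.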

30.3.3 Generation from a probability distribution

There are a couple of methods to generate a random number based on a probability density
function109 . These methods involve transforming a uniform random number in some way.
Because of this, these methods work equally well in generating both pseudo-random and true
random numbers. One method, called the inversion method110 , involves integrating the density up to an
area greater than or equal to a uniform random number (which should be generated between 0 and
1 for proper distributions). A second method, called the acceptance-rejection method111 ,
involves choosing an x and y value and testing whether the function of x is greater than
the y value. If it is, the x value is accepted. Otherwise, the x value is rejected and the
algorithm tries again.[12][13]
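Both methods can be sketched in Python. The target distributions below are chosen for illustration: the exponential distribution, whose inverse CDF is -ln(1-u)/lam, for the inversion method, and the triangular density f(x) = 2x on [0, 1] for acceptance-rejection:

```python
import math
import random

rng = random.Random(7)

def exponential_inverse(lam):
    """Inversion method: apply the inverse CDF, F^-1(u) = -ln(1-u)/lam,
    to a uniform random number u in [0, 1)."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

def triangular_rejection():
    """Acceptance-rejection for f(x) = 2x on [0, 1]: draw (x, y) uniformly
    under the bounding box [0,1] x [0,2] and accept points under the curve."""
    while True:
        x = rng.random()
        y = 2.0 * rng.random()      # y uniform on [0, max f] = [0, 2]
        if y <= 2.0 * x:            # accept when the point falls under f
            return x

exp_samples = [exponential_inverse(1.5) for _ in range(1000)]
tri_samples = [triangular_rejection() for _ in range(1000)]
```

Rejection sampling wastes the rejected draws (here about half of them), whereas inversion uses exactly one uniform per sample but requires a computable inverse CDF.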

30.3.4 By humans

Random number generation may also be performed by humans, in the form of collecting
various inputs from end users112 and using them as a randomization source. However, most
studies find that human subjects have some degree of non-randomness when attempting to
produce a random sequence of e.g. digits or letters. They may alternate too much between
choices when compared to a good random generator;[14] thus, this approach is not widely
used.

103 https://en.wikipedia.org/wiki/Mersenne_Twister
104 https://en.wikipedia.org/wiki/Real_time_clock
105 https://en.wikipedia.org/wiki/Accuracy_and_precision
107 https://en.wikipedia.org/wiki//dev/random
108 https://en.wikipedia.org/wiki/CryptGenRandom
109 https://en.wikipedia.org/wiki/Probability_density_function
110 https://en.wikipedia.org/wiki/Inverse_transform_sampling
111 https://en.wikipedia.org/wiki/Rejection_sampling
112 https://en.wikipedia.org/wiki/End_user


30.4 Post-processing and statistical checks

See also: Randomness tests113 , Statistical randomness114 , and List of random number gen-
erators115 Even given a source of plausible random numbers (perhaps from a quantum
mechanically based hardware generator), obtaining numbers which are completely unbi-
ased takes care. In addition, behavior of these generators often changes with temperature,
power supply voltage, the age of the device, or other outside interference. And a software
bug in a pseudo-random number routine, or a hardware bug in the hardware it runs on,
may be similarly difficult to detect.
Generated random numbers are sometimes subjected to statistical tests before use to en-
sure that the underlying source is still working, and then post-processed to improve their
statistical properties. An example would be the TRNG9803[15] hardware random number
generator, which uses an entropy measurement as a hardware test, and then post-processes
the random sequence with a shift register stream cipher. It is generally hard to use sta-
tistical tests to validate the generated random numbers. Wang and Nicol[16] proposed a
distance-based statistical testing technique that is used to identify the weaknesses of sev-
eral random generators. Li and Wang[17] proposed a method of testing random numbers
based on laser chaotic entropy sources using Brownian motion properties.

30.5 Other considerations

Random numbers uniformly distributed between 0 and 1 can be used to generate random
numbers of any desired distribution by passing them through the inverse cumulative dis-
tribution function116 (CDF) of the desired distribution (see Inverse transform sampling117 ).
Inverse CDFs are also called quantile functions118 . To generate a pair of statistically inde-
pendent119 standard normally distributed120 random numbers (x, y), one may first generate
the polar coordinates121 (r, θ), where r2 follows a chi-squared distribution with 2 degrees of
freedom122 and θ is uniformly distributed123 on (0, 2π) (see Box–Muller transform124 ).
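The polar construction can be sketched as follows; this is a minimal illustration of the Box–Muller transform, not a production sampler:

```python
import math
import random

rng = random.Random(2020)

def box_muller():
    """Transform two uniforms on (0, 1) into two independent standard normals."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))  # r^2 is chi-squared with 2 degrees of freedom
    theta = 2.0 * math.pi * u2                # theta is uniform on (0, 2*pi)
    return r * math.cos(theta), r * math.sin(theta)

pairs = [box_muller() for _ in range(5000)]
xs = [x for x, _ in pairs]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

With a few thousand samples the empirical mean is close to 0 and the variance close to 1, as expected of a standard normal.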
Some 0 to 1 RNGs include 0 but exclude 1, while others include or exclude both.
The outputs of multiple independent RNGs can be combined (for example, using a bit-wise
XOR125 operation) to provide a combined RNG at least as good as the best RNG used.
This is referred to as software whitening126 .
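A minimal sketch of such software whitening, with two illustrative byte strings standing in for real generator outputs:

```python
from functools import reduce

def xor_combine(*streams):
    """Bitwise-XOR equal-length byte streams together. If any one stream is
    uniform and independent of the others, the combined output is uniform too."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*streams))

weak = bytes([0x00] * 8)   # a completely predictable "generator"
good = bytes(range(8))     # stands in for a high-quality source
combined = xor_combine(weak, good)
```

Here the predictable stream cannot degrade the result: XORing with the all-zero stream leaves the good stream unchanged, while XORing a stream with itself (total dependence) collapses to zeros, which is why independence of the sources matters.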

113 https://en.wikipedia.org/wiki/Randomness_tests
114 https://en.wikipedia.org/wiki/Statistical_randomness
115 https://en.wikipedia.org/wiki/List_of_random_number_generators
116 https://en.wikipedia.org/wiki/Cumulative_distribution_function
117 https://en.wikipedia.org/wiki/Inverse_transform_sampling
118 https://en.wikipedia.org/wiki/Quantile_function
119 https://en.wikipedia.org/wiki/Statistical_independence
120 https://en.wikipedia.org/wiki/Normal_distribution
121 https://en.wikipedia.org/wiki/Polar_coordinates
122 https://en.wikipedia.org/wiki/Chi-squared_distribution
123 https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)
124 https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform
125 https://en.wikipedia.org/wiki/XOR
126 https://en.wikipedia.org/wiki/Hardware_random_number_generator#Software_whitening


Computational and hardware random number generators are sometimes combined to reflect
the benefits of both kinds. Computational random number generators can typically generate
pseudo-random numbers much faster than physical generators, while physical generators can
generate ”true randomness.”

30.6 Low-discrepancy sequences as an alternative

Some computations making use of a random number generator can be summarized as the
computation of a total or average value, such as the computation of integrals by the Monte
Carlo method127 . For such problems, it may be possible to find a more accurate solution
by the use of so-called low-discrepancy sequences128 , also called quasirandom129 numbers.
Such sequences have a definite pattern that fills in gaps evenly, qualitatively speaking; a
truly random sequence may, and usually does, leave larger gaps.
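The base-2 van der Corput sequence, one of the simplest low-discrepancy sequences, illustrates this gap-filling behavior: it visits 1/2, then 1/4 and 3/4, then the odd eighths, and so on. A short sketch:

```python
def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput sequence:
    reflect the base-b digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

seq = [van_der_corput(i) for i in range(1, 8)]
# successively subdivides the unit interval: halves, quarters, eighths, ...
```

Each new batch of points lands in the largest remaining gaps, which is exactly the evenness property that benefits Monte Carlo-style totals and averages.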

30.7 Activities and demonstrations

The following sites make available Random Number samples:


1. The SOCR130 resource pages contain a number of hands-on interactive activities and
demonstrations131 of random number generation using Java applets.
2. The Quantum Optics Group at the ANU132 generates random numbers sourced from
quantum vacuum. You can download a sample of random numbers by visiting their
quantum random number generator 133 research page.
3. Random.org134 makes available random numbers that are sourced from the random-
ness of atmospheric noise.
4. The Quantum Random Bit Generator Service135 at the Ruđer Bošković Institute136
harvests randomness from the quantum process of photonic emission in semiconduc-
tors. They supply a variety of ways of fetching the data, including libraries for several
programming languages.
5. The group at the Taiyuan University of Technology generates random numbers
sourced from a chaotic laser. You can obtain a sample of random numbers by visit-
ing their Physical Random Number Generator Service137 .

127 https://en.wikipedia.org/wiki/Monte_Carlo_method
128 https://en.wikipedia.org/wiki/Low-discrepancy_sequence
129 https://en.wikipedia.org/wiki/Quasirandom
130 https://en.wikipedia.org/wiki/SOCR
131 http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_RNG
132 https://en.wikipedia.org/wiki/ANU
133 http://photonics.anu.edu.au/qoptics/Research/qrng.php
134 http://Random.org
135 http://random.irb.hr/
136 https://en.wikipedia.org/wiki/Ru%C4%91er_Bo%C5%A1kovi%C4%87_Institute
137 http://random-number.net/


30.8 Backdoors

Main article: Random number generator attack138 Further information: Backdoor (com-
puting)139 Since much cryptography depends on a cryptographically secure random number
generator for key and cryptographic nonce140 generation, if a random number generator can
be made predictable, it can be used as a backdoor141 by an attacker to break the encryption.
The NSA is reported to have inserted a backdoor into the NIST142 certified cryptograph-
ically secure pseudorandom number generator143 Dual_EC_DRBG144 . If for example an
SSL connection is created using this random number generator, then according to Matthew
Green145 it would allow NSA to determine the state of the random number generator, and
thereby eventually be able to read all data sent over the SSL connection.[18] Even though
it was apparent that Dual_EC_DRBG was a very poor and possibly backdoored pseu-
dorandom number generator long before the NSA backdoor was confirmed in 2013, it had
seen significant usage in practice until 2013, for example by the prominent security company
RSA Security146 .[19] There have subsequently been accusations that RSA Security knowingly
inserted an NSA backdoor into its products, possibly as part of the Bullrun program147 . RSA
has denied knowingly inserting a backdoor into its products.[20]
It has also been theorized that hardware RNGs could be secretly modified to have less
entropy than stated, which would make encryption using the hardware RNG susceptible to
attack. One such method which has been published works by modifying the dopant mask
of the chip, which would be undetectable to optical reverse-engineering.[21] For example, for
random number generation in Linux, it is seen as unacceptable to use Intel's RDRAND148
hardware RNG without mixing in the RDRAND output with other sources of entropy to
counteract any backdoors in the hardware RNG, especially after the revelation of the NSA
Bullrun program.[22][23]
In 2010, a U.S. lottery draw was rigged149 by the information security director of the Multi-
State Lottery Association150 (MUSL), who surreptitiously installed backdoor malware151 on
the MUSL's secure RNG computer during routine maintenance.[24] During the hacks the
man won a total amount of $16,500,000 by predicting the winning numbers correctly a few
times over the years.
Address space layout randomization152 (ASLR), a mitigation against rowhammer and re-
lated attacks on the physical hardware of memory chips, has been found to be inadequate

138 https://en.wikipedia.org/wiki/Random_number_generator_attack
139 https://en.wikipedia.org/wiki/Backdoor_(computing)
140 https://en.wikipedia.org/wiki/Cryptographic_nonce
141 https://en.wikipedia.org/wiki/Backdoor_(computing)
142 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
143 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
144 https://en.wikipedia.org/wiki/Dual_EC_DRBG
145 https://en.wikipedia.org/wiki/Matthew_Green_(cryptographer)
146 https://en.wikipedia.org/wiki/RSA_Security
147 https://en.wikipedia.org/wiki/Bullrun_(decryption_program)
148 https://en.wikipedia.org/wiki/RDRAND
149 https://en.wikipedia.org/wiki/Hot_Lotto_fraud_scandal
150 https://en.wikipedia.org/wiki/Multi-State_Lottery_Association
151 https://en.wikipedia.org/wiki/Malware
152 https://en.wikipedia.org/wiki/Address_space_layout_randomization


as of early 2017 by VUSec. The random number algorithm, if based on a shift register
implemented in hardware, is predictable at sufficiently large values of p and can be reverse
engineered with enough processing power. This also indirectly means that malware using
this method can run on both GPUs and CPUs if coded to do so, even using GPU to break
ASLR on the CPU itself.[25]

30.9 See also


• Flipism153
• List of random number generators154
• PP (complexity)155
• Procedural generation156
• Randomization157
• Randomized algorithm158
• Random number generator attack159
• Random password generator160
• Random variable161 , contains a chance-dependent value
• Randomness162

30.10 References
1. random(4)163 – Linux164 Programmer's Manual165 – Special Files
2. arc4random(3)166 – OpenBSD167 Library Functions Manual168
3. B, J J.; E, A; F, D R. (2019-10-28). ”O 
    -   
 ”. Soft Computing. Springer Science and Business Media LLC.
doi169 :10.1007/s00500-019-04450-0170 . ISSN171 1432-7643172 .
4. L, P; W, Y-C; Z, J-Z (2010-09-13). ”A-
   ”. Optics Express. 18 (19): 20360–20369.

153 https://en.wikipedia.org/wiki/Flipism
154 https://en.wikipedia.org/wiki/List_of_random_number_generators
155 https://en.wikipedia.org/wiki/PP_(complexity)
156 https://en.wikipedia.org/wiki/Procedural_generation
157 https://en.wikipedia.org/wiki/Randomization
158 https://en.wikipedia.org/wiki/Randomized_algorithm
159 https://en.wikipedia.org/wiki/Random_number_generator_attack
160 https://en.wikipedia.org/wiki/Random_password_generator
161 https://en.wikipedia.org/wiki/Random_variable
162 https://en.wikipedia.org/wiki/Randomness
163 http://man7.org/linux/man-pages/man4/random.4.html
164 https://en.wikipedia.org/wiki/Linux
165 https://en.wikipedia.org/wiki/Man_page
166 https://man.openbsd.org/arc4random.3
167 https://en.wikipedia.org/wiki/OpenBSD
168 https://en.wikipedia.org/wiki/Man_page
169 https://en.wikipedia.org/wiki/Doi_(identifier)
170 https://doi.org/10.1007%2Fs00500-019-04450-0
171 https://en.wikipedia.org/wiki/ISSN_(identifier)
172 http://www.worldcat.org/issn/1432-7643


Bibcode173 :2010OExpr..1820360L174 . doi175 :10.1364/OE.18.020360176 . ISSN177 1094-
4087178 . PMID179 20940928180 .
5. L, P; S, Y; L, X; Y, X; Z, J;
G, X; G, Y; W, Y (2016-07-15). ”F -
    ”. Optics Letters. 41 (14): 3347–3350.
Bibcode181 :2016OptL...41.3347L182 . doi183 :10.1364/OL.41.003347184 . ISSN185 1539-
4794186 . PMID187 27420532188 .
6. W, A; L, P; Z, J; Z, J; L, L;
W, Y (2013-08-26). ”4.5 G - - 
  ”. Optics Express. 21 (17): 20452–20462. Bib-
code189 :2013OExpr..2120452W190 . doi191 :10.1364/OE.21.020452192 . ISSN193 1094-
4087194 . PMID195 24105589196 .
7. W, J. ”HB: G R N”197 . R 2009-
06-27.
8. H, R; N, M198 . ”G  E R”199
(PDF). D  C S  A M, W-
 I  S. R 2009-06-27. Cite journal requires
|journal= (help200 )
9. TC F. ”TC B' T, P 3”201 . R-
 2009-06-27.
10. ”RANDOM.ORG - T R N S”202 . www.random.org. Re-
trieved 2016-01-14.

173 https://en.wikipedia.org/wiki/Bibcode_(identifier)
174 https://ui.adsabs.harvard.edu/abs/2010OExpr..1820360L
175 https://en.wikipedia.org/wiki/Doi_(identifier)
176 https://doi.org/10.1364%2FOE.18.020360
177 https://en.wikipedia.org/wiki/ISSN_(identifier)
178 http://www.worldcat.org/issn/1094-4087
179 https://en.wikipedia.org/wiki/PMID_(identifier)
180 http://pubmed.ncbi.nlm.nih.gov/20940928
181 https://en.wikipedia.org/wiki/Bibcode_(identifier)
182 https://ui.adsabs.harvard.edu/abs/2016OptL...41.3347L
183 https://en.wikipedia.org/wiki/Doi_(identifier)
184 https://doi.org/10.1364%2FOL.41.003347
185 https://en.wikipedia.org/wiki/ISSN_(identifier)
186 http://www.worldcat.org/issn/1539-4794
187 https://en.wikipedia.org/wiki/PMID_(identifier)
188 http://pubmed.ncbi.nlm.nih.gov/27420532
189 https://en.wikipedia.org/wiki/Bibcode_(identifier)
190 https://ui.adsabs.harvard.edu/abs/2013OExpr..2120452W
191 https://en.wikipedia.org/wiki/Doi_(identifier)
192 https://doi.org/10.1364%2FOE.21.020452
193 https://en.wikipedia.org/wiki/ISSN_(identifier)
194 http://www.worldcat.org/issn/1094-4087
195 https://en.wikipedia.org/wiki/PMID_(identifier)
196 http://pubmed.ncbi.nlm.nih.gov/24105589
197 http://www.fourmilab.ch/hotbits/
198 https://en.wikipedia.org/wiki/Moni_Naor
199 http://www.neko.co.il/games4rand.pdf
200 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
201 http://www.truecrypt.org/docs/?s=tutorial3
202 https://www.random.org/


11. ”H D P R N G”203 . R


2018-11-21.
12. T MW. ”C  ”204 . R 2011-10-13.
13. T N A G. ”G05 – R N G-
”205 (PDF). NAG Library Manual, Mark 23. Retrieved 2012-02-09.
14. W. A. W (1972). ”G      -
:      ”. Psychological Bulletin206 . 77 (1):
65–72. CiteSeerX207 10.1.1.211.9085208 . doi209 :10.1037/h0032060210 .
15. D, B. (2009). ”TRNG9803 T R N G”211 .
M: .TRNG98..
16. W, Y (2014). ”S P  P R S-
  E  PHP  D OSSL”. Computer Se-
curity - ESORICS 2014. Lecture Notes in Computer Science. 8712. Heidelberg:
Springer LNCS. pp. 454–471. doi212 :10.1007/978-3-319-11203-9_26213 . ISBN214 978-
3-319-11202-2215 .
17. L, P; Y, X; L, X; W, Y; W, Y
(2016-07-11). ”B      
    ”. Optics Express. 24 (14): 15822–15833.
Bibcode216 :2016OExpr..2415822L217 . doi218 :10.1364/OE.24.015822219 . ISSN220 1094-
4087221 . PMID222 27410852223 .
18.  G (2013-09-18). ”T M F  D_EC_DRBG”224 .
19. M G (2013-09-20). ”RSA      RSA
”225 .
20. ”W '      , RSA  -
”226 . Ars Technica. 2013-09-20.

203 http://www.ianschumacher.ca/prng/article.html
204 http://www.mathworks.de/help/toolbox/stats/br5k9hi-1.html
205 http://www.nag.co.uk/numeric/fl/nagdoc_fl23/pdf/G05/g05intro.pdf
206 https://en.wikipedia.org/wiki/Psychological_Bulletin
207 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
208 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.211.9085
209 https://en.wikipedia.org/wiki/Doi_(identifier)
210 https://doi.org/10.1037%2Fh0032060
211 http://www.trng98.se/serial_trng_9803.html
212 https://en.wikipedia.org/wiki/Doi_(identifier)
213 https://doi.org/10.1007%2F978-3-319-11203-9_26
214 https://en.wikipedia.org/wiki/ISBN_(identifier)
215 https://en.wikipedia.org/wiki/Special:BookSources/978-3-319-11202-2
216 https://en.wikipedia.org/wiki/Bibcode_(identifier)
217 https://ui.adsabs.harvard.edu/abs/2016OExpr..2415822L
218 https://en.wikipedia.org/wiki/Doi_(identifier)
219 https://doi.org/10.1364%2FOE.24.015822
220 https://en.wikipedia.org/wiki/ISSN_(identifier)
221 http://www.worldcat.org/issn/1094-4087
222 https://en.wikipedia.org/wiki/PMID_(identifier)
223 http://pubmed.ncbi.nlm.nih.gov/27410852
224 http://blog.cryptographyengineering.com/2013/09/the-many-flaws-of-dualecdrbg.html
225 http://blog.cryptographyengineering.com/2013/09/rsa-warns-developers-against-its-own.html
226 https://arstechnica.com/security/2013/09/we-dont-enable-backdoors-in-our-crypto-products-rsa-tells-customers/


21. ”Researchers can slip an undetectable trojan into Intel's Ivy Bridge
CPUs”227 . Ars Technica. 2013-09-18.
22. Theodore Ts'o. ”I am so glad I resisted pressure from Intel engineers to let
/dev/random rely only on the RDRAND instruction”228 . Google Plus.
23. Theodore Ts'o. ”Re: [PATCH] /dev/random: Insufficient of entropy on many
architectures”229 . LWN.
24. Nestel, M.L. (July 7, 2015). ”Inside the Biggest Lottery Scam Ever”230 .
The Daily Beast. Retrieved July 10, 2015.
25. ”AnC - VUSec”231 . Retrieved 13 January 2018.

30.11 Further reading


• D K232 (1997). ”C 3 – R N”. The Art of Computer
Programming233 . V. 2: S  (3 .).
• L'E, P234 (2017). ”H  U R N G-
”235 (PDF). Proceedings of the 2017 Winter Simulation Conference. IEEE Press.
pp. 202–230.
• L'E, P236 (2012). ”R N G”237 (PDF). I J. E.
G, W. H,  Y. M (.). Handbook of Computational Statistics:
Concepts and Methods. Handbook of Computational Statistics (second ed.). Springer-
Verlag. pp. 35–71. doi238 :10.1007/978-3-642-21551-3_3239 . hdl240 :10419/22195241 .
ISBN242 978-3-642-21550-6243 .CS1 maint: uses editors parameter (link244 )
• K, D. P.245 ; T, T.; B, Z.I. (2011). ”C 1 – U R
N G”246 . Handbook of Monte Carlo Methods. New York: John Wiley
& Sons. p. 772. ISBN247 978-0-470-17793-8248 .CS1 maint: ref=harv (link249 )

227 https://arstechnica.com/security/2013/09/researchers-can-slip-an-undetectable-trojan-into-intels-ivy-bridge-cpus/
228 https://plus.google.com/117091380454742934025/posts/SDcoemc9V3J
229 https://lwn.net/Articles/567077/
230 http://www.thedailybeast.com/articles/2015/07/07/inside-the-biggest-lottery-scam-ever.html
231 https://www.vusec.net/projects/anc/
232 https://en.wikipedia.org/wiki/Donald_Knuth
233 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
234 https://en.wikipedia.org/w/index.php?title=Pierre_L%27Ecuyer&action=edit&redlink=1
235 https://www.informs-sim.org/wsc17papers/includes/files/016.pdf
236 https://en.wikipedia.org/w/index.php?title=Pierre_L%27Ecuyer&action=edit&redlink=1
237 https://www.econstor.eu/bitstream/10419/22195/1/21_pl.pdf
238 https://en.wikipedia.org/wiki/Doi_(identifier)
239 https://doi.org/10.1007%2F978-3-642-21551-3_3
240 https://en.wikipedia.org/wiki/Hdl_(identifier)
241 http://hdl.handle.net/10419%2F22195
242 https://en.wikipedia.org/wiki/ISBN_(identifier)
243 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-21550-6
245 https://en.wikipedia.org/wiki/Dirk_Kroese
246 http://www.montecarlohandbook.org
247 https://en.wikipedia.org/wiki/ISBN_(identifier)
248 https://en.wikipedia.org/wiki/Special:BookSources/978-0-470-17793-8


• P, WH; T, SA; V, WT; F, BP (2007). ”C


7. R N”250 . Numerical Recipes: The Art of Scientific Computing (3rd
ed.). New York: Cambridge University Press. ISBN251 978-0-521-88068-8252 .
• NIST SP800-90A, B, C series on random number generation253
• M. T, M. S,  M. P (O 2000). ”O 
  -    - -
 ”. IEEE Transactions on Computers. 49 (10): 1146–1151.
doi254 :10.1109/12.888056255 .CS1 maint: uses authors parameter (link256 )

30.12 External links


• RANDOM.ORG257 True Random Number Service
• Random and Pseudorandom258 on In Our Time259 at the BBC260
• C, J. ”R N”261 . Numberphile. Brady Haran262 .
• jRand263 a Java-based framework for the generation of simulation sequences, including
pseudo-random sequences of numbers
• Random number generators in NAG Fortran Library264
• Randomness Beacon265 at NIST266 , broadcasting full-entropy bit-strings in blocks of 512
bits every 60 seconds. Designed to provide unpredictability, autonomy, and consistency.
• A system call for random numbers: getrandom()267 , a LWN.net268 article describing a
dedicated Linux system call
• Statistical Properties of Pseudo Random Sequences and Experiments with PHP and
Debian OpenSSL269
• Cryptographic ISAAC pseudorandom lottery numbers generator270
• Random Sequence Generator based on Avalanche Noise271

250 http://apps.nrbook.com/empanel/index.html#pg=340
251 https://en.wikipedia.org/wiki/ISBN_(identifier)
252 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-88068-8
253 http://csrc.nist.gov/publications/PubsSPs.html
254 https://en.wikipedia.org/wiki/Doi_(identifier)
255 https://doi.org/10.1109%2F12.888056
257 https://random.org
258 https://www.bbc.co.uk/programmes/b00x9xjb
259 https://en.wikipedia.org/wiki/In_Our_Time_(radio_series)
260 https://en.wikipedia.org/wiki/BBC
261 http://www.numberphile.com/videos/random_numbers.html
262 https://en.wikipedia.org/wiki/Brady_Haran
263 https://sites.google.com/site/simulationarchitecture/jrand
264 http://www.nag.co.uk/numeric/fl/nagdoc_fl24/html/G05/g05conts.html
265 https://www.nist.gov/itl/csd/ct/nist_beacon.cfm
266 https://en.wikipedia.org/wiki/NIST
267 https://lwn.net/Articles/606141/
268 https://en.wikipedia.org/wiki/LWN.net
269 https://link.springer.com/chapter/10.1007%2F978-3-319-11203-9_26
270 http://www.wynikilotto.net.pl/generatorzakladowISAAC.htm
271 http://holdenc.altervista.org/avalanche/

31 Pseudorandom number generator

This page is about commonly encountered characteristics of pseudorandom number generator
algorithms. For the formal concept in theoretical computer science, see Pseudorandom
generator1 .
A pseudorandom number generator (PRNG), also known as a deterministic
random bit generator (DRBG),[1] is an algorithm2 for generating a sequence
of numbers whose properties approximate the properties of sequences of random numbers3 .
The PRNG-generated sequence is not truly random4 , because it is completely determined
by an initial value, called the PRNG's seed5 (which may include truly random values). Al-
though sequences that are closer to truly random can be generated using hardware random
number generators6 , pseudorandom7 number generators are important in practice for their
speed in number generation and their reproducibility.[2]
PRNGs are central in applications such as simulations8 (e.g. for the Monte Carlo method9 ),
electronic games10 (e.g. for procedural generation11 ), and cryptography12 . Cryptographic
applications require the output not to be predictable from earlier outputs, and more elab-
orate algorithms13 , which do not inherit the linearity of simpler PRNGs, are needed.
Good statistical properties are a central requirement for the output of a PRNG. In general,
careful mathematical analysis is required to have any confidence that a PRNG generates
numbers that are sufficiently close to random to suit the intended use. John von Neumann14
cautioned about the misinterpretation of a PRNG as a truly random generator, and joked
that ”Anyone who considers arithmetical methods of producing random digits is, of course,
in a state of sin.”[3]

1 https://en.wikipedia.org/wiki/Pseudorandom_generator
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Random_number_generation
4 https://en.wikipedia.org/wiki/Random
5 https://en.wikipedia.org/wiki/Random_seed
6 https://en.wikipedia.org/wiki/Hardware_random_number_generator
7 https://en.wikipedia.org/wiki/Pseudorandomness
8 https://en.wikipedia.org/wiki/Simulation
9 https://en.wikipedia.org/wiki/Monte_Carlo_method
10 https://en.wikipedia.org/wiki/Electronic_game
11 https://en.wikipedia.org/wiki/Procedural_generation
12 https://en.wikipedia.org/wiki/Cryptography
13 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
14 https://en.wikipedia.org/wiki/John_von_Neumann


31.1 Potential problems with deterministic generators

In practice, the output from many common PRNGs exhibits artifacts15 that cause it to
fail statistical pattern-detection tests. These include:
• Shorter-than-expected periods for some seed states (such seed states may be called ”weak”
in this context);
• Lack of uniformity of distribution for large quantities of generated numbers;
• Correlation of successive values;
• Poor dimensional distribution of the output sequence;
• Distances between where certain values occur are distributed differently from those in a
random sequence distribution.
Defects exhibited by flawed PRNGs range from unnoticeable (and unknown) to very obvious.
An example was the RANDU16 random number algorithm used for decades on mainframe
computers17 . It was seriously flawed, but its inadequacy went undetected for a very long
time.
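RANDU's flaw is easy to demonstrate. Its multiplier is 65539 = 2^16 + 3, and squaring that multiplier modulo 2^31 gives 6·65539 − 9, so every three consecutive outputs satisfy a fixed linear relation; plotted in three dimensions, the triples fall onto just 15 parallel planes. A minimal sketch:

```python
def randu(seed):
    """RANDU: x_{n+1} = 65539 * x_n mod 2^31 (the seed should be odd)."""
    x = seed
    while True:
        x = (65539 * x) % 2**31
        yield x

gen = randu(1)
xs = [next(gen) for _ in range(1000)]

# Since 65539 = 2^16 + 3, we have 65539^2 ≡ 6*65539 - 9 (mod 2^31), hence
# x[k+2] ≡ 6*x[k+1] - 9*x[k] (mod 2^31) for every consecutive triple.
assert all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % 2**31 == 0
           for k in range(998))
```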
In many fields, much research work prior to the 21st century that relied on random se-
lection or on Monte Carlo18 simulations, or in other ways relied on PRNGs, is much less
reliable than it might have been as a result of using poor-quality PRNGs.[4] Even today,
caution is sometimes required, as illustrated by the following warning, which is given in the
International Encyclopedia of Statistical Science19 (2010).[5]
The list of widely used generators that should be discarded is much longer [than the list
of good generators]. Do not trust blindly the software vendors. Check the default RNG
of your favorite software and be ready to replace it if needed. This last recommendation
has been made over and over again over the past 40 years. Perhaps amazingly, it remains
as relevant today as it was 40 years ago.
As an illustration, consider the widely used programming language Java20 . As of
2017[update], Java still relies on a linear congruential generator22 (LCG) for its PRNG,[6][7]
which is of low quality—see further below.
One well-known PRNG that avoids major problems and still runs fairly quickly is the
Mersenne Twister23 (discussed below), which was published in 1998. Other higher-quality
PRNGs, both in terms of computational and statistical performance, were developed before
and after this date; these can be identified in the List of pseudorandom number generators24 .

15 https://en.wikipedia.org/wiki/Artifact_(error)
16 https://en.wikipedia.org/wiki/RANDU
17 https://en.wikipedia.org/wiki/Mainframe_computer
18 https://en.wikipedia.org/wiki/Monte_Carlo_Method
19 https://en.wikipedia.org/wiki/International_Encyclopedia_of_Statistical_Science
20 https://en.wikipedia.org/wiki/Java_(programming_language)
22 https://en.wikipedia.org/wiki/Linear_congruential_generator
23 https://en.wikipedia.org/wiki/Mersenne_Twister
24 https://en.wikipedia.org/wiki/List_of_pseudorandom_number_generators


31.2 Generators based on linear recurrences

In the second half of the 20th century, the standard class of algorithms used for PRNGs
comprised linear congruential generators25 . The quality of LCGs was known to be inade-
quate, but better methods were unavailable. Press et al. (2007) described the result thusly:
”If all scientific papers whose results are in doubt because of [LCGs and related] were to
disappear from library shelves, there would be a gap on each shelf about as big as your
fist.”[8]
A major advance in the construction of pseudorandom generators was the introduction
of techniques based on linear recurrences on the two-element field; such generators are
related to linear feedback shift registers26 .
The 1997 invention of the Mersenne Twister27 ,[9] in particular, avoided many of the
problems with earlier generators. The Mersenne Twister has a period of 2^19937 − 1
iterations (≈ 4.3×10^6001 ), is proven to be equidistributed28 in (up to) 623 dimensions
(for 32-bit values), and at the time of its introduction was running faster than other
statistically reasonable generators.
In 2003, George Marsaglia29 introduced the family of xorshift30 generators,[10] again based
on a linear recurrence. Such generators are extremely fast and, combined with a nonlinear
operation, they pass strong statistical tests.[11][12][13]
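To sketch the idea, a Marsaglia-style 64-bit xorshift generator needs only three shift-and-XOR steps per output (the shift triple 13, 7, 17 is one of the valid choices from his paper):

```python
MASK64 = (1 << 64) - 1

def xorshift64(seed):
    """Marsaglia-style xorshift64; the state must never be zero."""
    x = seed & MASK64
    assert x != 0, "an all-zero state maps to itself forever"
    while True:
        x ^= (x << 13) & MASK64   # Python ints are unbounded, so emulate
        x ^= x >> 7               # 64-bit wraparound by masking after
        x ^= (x << 17) & MASK64   # each left shift
        yield x

gen = xorshift64(1)
print(next(gen))  # → 1082269761
```

The period is 2^64 − 1: the state walks through every nonzero 64-bit value before repeating, which is why zero must be excluded.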
In 2006 the WELL31 family of generators was developed.[14] The WELL generators in some
ways improve on the quality of the Mersenne Twister, which has a too-large state space
and a very slow recovery from state spaces with a large number of zeros.

31.3 Cryptographically secure pseudorandom number generators

Main article: Cryptographically secure pseudorandom number generator32
A PRNG suitable for cryptographic33 applications is called a cryptographically secure PRNG (CSPRNG).
A requirement for a CSPRNG is that an adversary not knowing the seed has only negligible34
advantage35 in distinguishing the generator's output sequence from a random sequence. In
other words, while a PRNG is only required to pass certain statistical tests, a CSPRNG
must pass all statistical tests that are restricted to polynomial time36 in the size of the seed.
Though a proof of this property is beyond the current state of the art of computational

25 https://en.wikipedia.org/wiki/Linear_congruential_generator
26 https://en.wikipedia.org/wiki/Linear_feedback_shift_register
27 https://en.wikipedia.org/wiki/Mersenne_Twister
28 https://en.wikipedia.org/wiki/Equidistributed
29 https://en.wikipedia.org/wiki/George_Marsaglia
30 https://en.wikipedia.org/wiki/Xorshift
31 https://en.wikipedia.org/wiki/Well_Equidistributed_Long-period_Linear
32 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
33 https://en.wikipedia.org/wiki/Cryptography
34 https://en.wikipedia.org/wiki/Negligible_function
35 https://en.wikipedia.org/wiki/Advantage_(cryptography)
36 https://en.wikipedia.org/wiki/Polynomial_time


complexity theory37 , strong evidence may be provided by reducing38 the CSPRNG to a


problem39 that is assumed to be hard40 , such as integer factorization41 .[15] In general, years
of review may be required before an algorithm can be certified as a CSPRNG.
Some classes of CSPRNGs include the following:
• stream ciphers42
• block ciphers43 running in counter44[16] or output feedback45 mode
• PRNGs that have been designed specifically to be cryptographically secure, such as
Microsoft46 's Cryptographic Application Programming Interface47 function CryptGen-
Random48 , the Yarrow algorithm49 (incorporated in Mac OS X50 and FreeBSD51 ), and
Fortuna52
• combination PRNGs which attempt to combine several PRNG primitive algorithms with
the goal of removing any detectable non-randomness
• special designs based on mathematical hardness assumptions: examples include the
Micali–Schnorr generator,[17] Naor-Reingold pseudorandom function53 and the Blum
Blum Shub54 algorithm, which provide a strong security proof (such algorithms are rather
slow compared to traditional constructions, and impractical for many applications)
• generic PRNGs: while it has been shown that a (cryptographically) secure PRNG can
be constructed generically from any one-way function55 ,[18] this generic construction is
extremely slow in practice, so is mainly of theoretical interest.
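As a toy illustration of the hardness-based designs, here is Blum Blum Shub with deliberately tiny primes; real deployments use primes hundreds of digits long, which is also why such generators are slow:

```python
def blum_blum_shub(p, q, seed, nbits):
    """Blum Blum Shub: x_{n+1} = x_n^2 mod pq, emitting the low bit of each state.
    p and q must be distinct primes congruent to 3 mod 4, and the seed must be
    coprime to pq. The values used below are toys, for illustration only."""
    assert p % 4 == 3 and q % 4 == 3
    m = p * q
    x = seed % m
    bits = []
    for _ in range(nbits):
        x = (x * x) % m
        bits.append(x & 1)
    return bits

print(blum_blum_shub(11, 23, 3, 8))  # → [1, 1, 0, 0, 1, 0, 1, 0]
```

The security argument reduces predicting these bits to factoring pq, which is what "special designs based on mathematical hardness assumptions" means in the list above.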
It has been shown to be likely that the NSA56 has inserted an asymmetric backdoor57 into
the NIST58 certified pseudorandom number generator Dual_EC_DRBG59 .[19]
Most PRNG algorithms produce sequences that are uniformly distributed60 by any of several
tests. It is an open question, and one central to the theory and practice of cryptography61 ,
whether there is any way to distinguish the output of a high-quality PRNG from a truly

37 https://en.wikipedia.org/wiki/Computational_complexity_theory
38 https://en.wikipedia.org/wiki/Reduction_(complexity)
39 https://en.wikipedia.org/wiki/Mathematical_problem
40 https://en.wikipedia.org/wiki/Computational_hardness_assumption
41 https://en.wikipedia.org/wiki/Integer_factorization
42 https://en.wikipedia.org/wiki/Stream_cipher
43 https://en.wikipedia.org/wiki/Block_cipher
44 https://en.wikipedia.org/wiki/Counter_mode
45 https://en.wikipedia.org/wiki/Output_feedback
46 https://en.wikipedia.org/wiki/Microsoft
47 https://en.wikipedia.org/wiki/Cryptographic_Application_Programming_Interface
48 https://en.wikipedia.org/wiki/CryptGenRandom
49 https://en.wikipedia.org/wiki/Yarrow_algorithm
50 https://en.wikipedia.org/wiki/Mac_OS_X
51 https://en.wikipedia.org/wiki/FreeBSD
52 https://en.wikipedia.org/wiki/Fortuna_(PRNG)
53 https://en.wikipedia.org/wiki/Naor-Reingold_Pseudorandom_Function
54 https://en.wikipedia.org/wiki/Blum_Blum_Shub
55 https://en.wikipedia.org/wiki/One-way_function
56 https://en.wikipedia.org/wiki/National_Security_Agency
57 https://en.wikipedia.org/wiki/Backdoor_(computing)
58 https://en.wikipedia.org/wiki/NIST
59 https://en.wikipedia.org/wiki/Dual_EC_DRBG
60 https://en.wikipedia.org/wiki/Uniform_distribution_(discrete)
61 https://en.wikipedia.org/wiki/Cryptography


random sequence. In this setting, the distinguisher knows that either the known PRNG
algorithm was used (but not the state with which it was initialized) or a truly random
algorithm was used, and has to distinguish between the two.[20] The security of most cryp-
tographic algorithms and protocols using PRNGs is based on the assumption that it is
infeasible to distinguish use of a suitable PRNG from use of a truly random sequence. The
simplest examples of this dependency are stream ciphers62 , which (most often) work by
exclusive or63 -ing the plaintext64 of a message with the output of a PRNG, producing ci-
phertext65 . The design of cryptographically adequate PRNGs is extremely difficult because
they must meet additional criteria. The size of its period is an important factor in the
cryptographic suitability of a PRNG, but not the only one.
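The XOR construction is symmetric: applying the same keystream twice restores the plaintext. A sketch using an iterated hash as a stand-in keystream (illustrative only, not a vetted CSPRNG):

```python
import hashlib

def keystream(seed: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes by iterating SHA-256 over the seed.
    A toy keystream for illustration, not a reviewed construction."""
    out, block = b"", seed
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_cipher(key: bytes, message: bytes) -> bytes:
    """XOR the message with the keystream; the identical call decrypts."""
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))

ciphertext = xor_cipher(b"shared secret", b"attack at dawn")
assert xor_cipher(b"shared secret", ciphertext) == b"attack at dawn"
```

Anyone who can predict the keystream recovers the plaintext directly, which is why stream ciphers stand or fall with the quality of their PRNG.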

31.4 BSI evaluation criteria

The German Federal Office for Information Security66 (Bundesamt für Sicherheit in der
Informationstechnik, BSI) has established four criteria for quality of deterministic random
number generators.[21] They are summarized here:
• K1 – There should be a high probability that generated sequences of random numbers
are different from each other.
• K2 – A sequence of numbers is indistinguishable from ”truly random” numbers according
to specified statistical tests. The tests are the monobit67 test (equal numbers of ones and
zeros in the sequence), poker test (a special instance of the chi-squared test68 ), runs test
(counts the frequency of runs of various lengths), longruns test (checks whether there
exists any run of length 34 or greater in 20 000 bits of the sequence)—both from BSI69[21]
and NIST70 ,[22] and the autocorrelation test. In essence, these requirements are a test of
how well a bit sequence: has zeros and ones equally often; after a sequence of n zeros (or
ones), the next bit is a one (or zero) with probability one-half; and any selected subsequence
contains no information about the next element(s) in the sequence.
• K3 – It should be impossible for an attacker (for all practical purposes) to calculate,
or otherwise guess, from any given subsequence, any previous or future values in the
sequence, nor any inner state of the generator.
• K4 – It should be impossible, for all practical purposes, for an attacker to calculate, or
guess from an inner state of the generator, any previous numbers in the sequence or any
previous inner generator states.
For cryptographic applications, only generators meeting the K3 or K4 standards are ac-
ceptable.
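Two of the K2-style checks are easy to sketch: the monobit test compares the count of ones against a confidence band around n/2, and the longruns test looks for any run of 34 or more identical bits in a 20 000-bit sample (the confidence parameter z below is an illustrative choice, not the exact thresholds from the standards):

```python
import math

def monobit_ok(bits, z=2.576):
    """Monobit test sketch: the number of ones should lie within a
    normal-approximation band of half-width z*sqrt(n)/2 around n/2."""
    n = len(bits)
    return abs(sum(bits) - n / 2) <= z * math.sqrt(n) / 2

def longest_run(bits):
    """Length of the longest run of identical bits; the BSI 'longruns'
    check rejects a 20 000-bit sequence containing a run of 34 or more."""
    best = cur = 1
    for prev, nxt in zip(bits, bits[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

sample = [0, 1] * 10000            # 20 000 perfectly balanced bits
assert monobit_ok(sample)          # balanced, so the monobit test passes
assert longest_run(sample) < 34    # longest run is 1, so longruns passes
```

Note that a sequence can pass these simple counting tests and still be badly non-random (the alternating sample above is an extreme example), which is why the full K2 battery also includes poker, runs-frequency, and autocorrelation tests.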

62 https://en.wikipedia.org/wiki/Stream_cipher
63 https://en.wikipedia.org/wiki/Exclusive_or
64 https://en.wikipedia.org/wiki/Plaintext
65 https://en.wikipedia.org/wiki/Ciphertext
66 https://en.wikipedia.org/wiki/Federal_Office_for_Information_Security
67 https://en.wikipedia.org/w/index.php?title=Monobit&action=edit&redlink=1
68 https://en.wikipedia.org/wiki/Chi-squared_test
69 https://en.wikipedia.org/wiki/Federal_Office_for_Information_Security
70 https://en.wikipedia.org/wiki/NIST


31.5 Mathematical definition

Given
• P – a probability distribution on (R, B) (where B is the standard Borel set71 on the real
line)
• F – a non-empty collection of Borel sets F ⊆ B, e.g. F = {(−∞, t] : t ∈ R}. If F is not
specified, it may be either B or {(−∞, t] : t ∈ R}, depending on context.
• A ⊆ R – a non-empty set (not necessarily a Borel set). Often A is a set between P 's
support72 and its interior73 ; for instance, if P is the uniform distribution on the interval
(0, 1], A might be (0, 1]. If A is not specified, it is assumed to be some set contained in
the support of P and containing its interior, depending on context.
We call a function f : N1 → R (where N1 = {1, 2, 3, . . . } is the set of positive integers) a
pseudo-random number generator for P given F taking values in A if and only if74
• f (N1 ) ⊆ A
• ∀E ∈ F ∀ε > 0 ∃N ∈ N1 such that for all n ≥ N :
  |#{i ∈ {1, 2, . . . , n} : f (i) ∈ E}/n − P (E)| < ε
(#S denotes the number of elements in the finite set S.)


It can be shown that if f is a pseudo-random number generator for the uniform distribution
on (0, 1) and if F is the CDF75 of some given probability distribution P , then F ∗ ◦ f is a
pseudo-random number generator for P , where F ∗ : (0, 1) → R is the percentile of P , i.e.
F ∗ (x) := inf {t ∈ R : x ≤ F (t)}. Intuitively, an arbitrary distribution can be simulated from
a simulation of the standard uniform distribution.

31.6 Early approaches

An early computer-based PRNG, suggested by John von Neumann76 in 1946, is known as
the middle-square method77 . The algorithm is as follows: take any number, square it,
remove the middle digits of the resulting number as the ”random number”, then use that
number as the seed for the next iteration. For example, squaring the number ”1111” yields
”1234321”, which can be written as ”01234321”, an 8-digit number being the square of a
4-digit number. This gives ”2343” as the ”random” number. Repeating this procedure gives
”4896” as the next result, and so on. Von Neumann used 10 digit numbers, but the process
was the same.
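The procedure above can be sketched directly; with a four-digit state it reproduces the 1111 → 2343 → 4896 example from the text:

```python
def middle_square(seed, digits=4):
    """Von Neumann's middle-square method: square the state, zero-pad the
    square to 2*digits digits, and keep the middle digits as the next state."""
    x = seed
    while True:
        square = str(x * x).zfill(2 * digits)
        start = (len(square) - digits) // 2
        x = int(square[start:start + digits])
        yield x

gen = middle_square(1111)
print([next(gen) for _ in range(3)])  # → [2343, 4896, 9708]
```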
A problem with the ”middle square” method is that all sequences eventually repeat them-
selves, some very quickly, such as ”0000”. Von Neumann was aware of this, but he found

71 https://en.wikipedia.org/wiki/Borel_set
72 https://en.wikipedia.org/wiki/Support_(mathematics)
73 https://en.wikipedia.org/wiki/Interior_(topology)
74 https://en.wikipedia.org/wiki/If_and_only_if
75 https://en.wikipedia.org/wiki/Cumulative_distribution_function
76 https://en.wikipedia.org/wiki/John_von_Neumann
77 https://en.wikipedia.org/wiki/Middle-square_method


the approach sufficient for his purposes and was worried that mathematical ”fixes” would
simply hide errors rather than remove them.
Von Neumann judged hardware random number generators unsuitable, for, if they did not
record the output generated, they could not later be tested for errors. If they did record
their output, they would exhaust the limited computer memories then available, and so the
computer's ability to read and write numbers. If the numbers were written to cards, they
would take very much longer to write and read. On the ENIAC78 computer he was using,
the ”middle square” method generated numbers at a rate some hundred times faster than
reading numbers in from punched cards79 .
The middle-square method has since been supplanted by more elaborate generators.
A recent innovation is to combine the middle square with a Weyl sequence80 . This method
produces high-quality output through a long period (see Middle Square Weyl Sequence
PRNG81 ).
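A sketch of that combination, following Widynski's published middle square Weyl sequence construction (the constant below is one suggested odd Weyl increment; 64-bit arithmetic is emulated with masking):

```python
MASK64 = (1 << 64) - 1

def middle_square_weyl(weyl_increment=0xB5AD4ECEDA1CE2A9):
    """Middle square stirred by a Weyl sequence: squaring alone falls into
    short cycles, but adding the never-repeating Weyl sequence w += s
    (s an odd constant) keeps the state moving; swapping the 32-bit
    halves plays the role of taking the 'middle' digits."""
    assert weyl_increment % 2 == 1
    x = w = 0
    while True:
        x = (x * x) & MASK64
        w = (w + weyl_increment) & MASK64
        x = (x + w) & MASK64
        x = ((x >> 32) | (x << 32)) & MASK64   # swap halves: the middle bits
        yield x & 0xFFFFFFFF                   # emit 32-bit outputs

gen = middle_square_weyl()
outputs = [next(gen) for _ in range(4)]
```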

31.7 Non-uniform generators

Main article: Pseudo-random number sampling82
Numbers selected from a non-uniform probability distribution can be generated using a
uniform distribution83 PRNG and a function that relates the two distributions.
First, one needs the cumulative distribution function84 F (b) of the target distribution f (b):
F (b) = ∫_{−∞}^{b} f (b′ ) db′

Note that 0 = F (−∞) ≤ F (b) ≤ F (∞) = 1. Using a random number c from a uniform dis-
tribution as the probability density to ”pass by”, we get
F (b) = c
so that
b = F −1 (c)
is a number randomly selected from distribution f (b).
For example, the inverse of cumulative Gaussian distribution85 erf −1 (x) with an ideal uni-
form PRNG with range (0, 1) as input x would produce a sequence of (positive only) values
with a Gaussian distribution; however
• When using practical number representations, the infinite ”tails” of the distribution have
to be truncated to finite values.

78 https://en.wikipedia.org/wiki/ENIAC
79 https://en.wikipedia.org/wiki/Punched_card
80 https://en.wikipedia.org/wiki/Weyl_sequence
81 https://en.wikipedia.org/wiki/Middle_square_method#Middle_Square_Weyl_Sequence_PRNG
82 https://en.wikipedia.org/wiki/Pseudo-random_number_sampling
83 https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)
84 https://en.wikipedia.org/wiki/Cumulative_distribution_function
85 https://en.wikipedia.org/wiki/Gaussian_distribution


• Repetitive recalculation of erf −1 (x) should be reduced by means such as ziggurat algo-
rithm86 for faster generation.
Similar considerations apply to generating other non-uniform distributions such as
Rayleigh87 and Poisson88 .
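For instance, the exponential distribution has F(b) = 1 − e^(−λb), which inverts in closed form, so no numerical inverse is needed; a sketch of this inverse-transform recipe:

```python
import math
import random

def exponential_inverse_cdf(u, rate=2.0):
    """Inverse-transform sampling: b = F^{-1}(u) = -ln(1 - u)/rate maps a
    uniform u in (0, 1) to a sample from the Exp(rate) distribution."""
    return -math.log(1.0 - u) / rate

random.seed(1)
samples = [exponential_inverse_cdf(random.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should be close to 1/rate = 0.5
```

The same pattern works for any distribution whose CDF can be inverted, exactly as described above; when the inverse is expensive (as with the Gaussian), methods such as the ziggurat algorithm replace the repeated inversion.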

31.8 See also

• Mathematics portal89
• List of pseudorandom number generators90
• Applications of randomness91
• Low-discrepancy sequence92
• Pseudorandom binary sequence93
• Pseudorandom noise94
• Random number generation95
• Random number generator attack96
• Randomness97
• Statistical randomness98

31.9 References
1. B, E; B, W; B, W; P, W; S,
M (J 2012). ”R  K M”99 (PDF).
NIST 100 Special Publication 800-57. NIST101 . Retrieved 19 August 2013.
2. ”P  ”102 . Khan Academy. Retrieved 2016-01-
11.

86 https://en.wikipedia.org/wiki/Ziggurat_algorithm
87 https://en.wikipedia.org/wiki/Rayleigh_distribution
88 https://en.wikipedia.org/wiki/Poisson_distribution
89 https://en.wikipedia.org/wiki/Portal:Mathematics
90 https://en.wikipedia.org/wiki/List_of_pseudorandom_number_generators
91 https://en.wikipedia.org/wiki/Applications_of_randomness
92 https://en.wikipedia.org/wiki/Low-discrepancy_sequence
93 https://en.wikipedia.org/wiki/Pseudorandom_binary_sequence
94 https://en.wikipedia.org/wiki/Pseudorandom_noise
95 https://en.wikipedia.org/wiki/Random_number_generation
96 https://en.wikipedia.org/wiki/Random_number_generator_attack
97 https://en.wikipedia.org/wiki/Randomness
98 https://en.wikipedia.org/wiki/Statistical_randomness
99 http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf
100 https://en.wikipedia.org/wiki/NIST
101 https://en.wikipedia.org/wiki/NIST
102 https://www.khanacademy.org/computing/computer-science/cryptography/crypt/v/random-vs-pseudorandom-number-generators


3. V N, J (1951). ”V     


 ”103 (PDF). National Bureau of Standards Applied Mathematics Series.
12: 36–38.
4. Press et al. (2007), chap.7
5. L'E, P (2010). ”U   ”. I
L, M (.). International Encyclopedia of Statistical Science104 .
S. . 1629. ISBN105 3-642-04897-8106 .
6. Random (Java Platform SE 8)107 , Java Platform Standard Edition 8 Documentation.
7. Random.java108 at OpenJDK109 .
8. Press et al. (2007) §7.1
9. M, M; N, T (1998). ”M : 
623- -  -  -
”110 (PDF). ACM Transactions on Modeling and Computer Simulation. ACM111 .
8 (1): 3–30. doi112 :10.1145/272991.272995113 .
10. M, G (J 2003). ”X RNG”114 . Journal of Statistical
Software115 . 8 (14).
11. S.V. ”*/+    PRNG ”116 .
12. Vigna S. (2016), ”An experimental exploration of Marsaglia’s xorshift generators”,
ACM Transactions on Mathematical Software117 , 42; doi118 :10.1145/2845077119 .
13. Vigna S. (2017), ”Further scramblings of Marsaglia’s xorshift generators”, Journal of
Computational and Applied Mathematics, 315; doi120 :10.1016/j.cam.2016.11.006121 .
14. P, F; L'E, P; M, M (2006). ”I-
 -      
2”122 (PDF). ACM Transactions on Mathematical Software123 . 32 (1): 1–16.
doi124 :10.1145/1132973.1132974125 .

103 https://dornsifecms.usc.edu/assets/sites/520/docs/VonNeumann-ams12p36-38.pdf
104 https://en.wikipedia.org/wiki/International_Encyclopedia_of_Statistical_Science
105 https://en.wikipedia.org/wiki/ISBN_(identifier)
106 https://en.wikipedia.org/wiki/Special:BookSources/3-642-04897-8
107 https://docs.oracle.com/javase/8/docs/api/java/util/Random.html
108 http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/tip/src/share/classes/java/util/Random.java
109 https://en.wikipedia.org/wiki/OpenJDK
110 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/ARTICLES/mt.pdf
111 https://en.wikipedia.org/wiki/Association_for_Computing_Machinery
112 https://en.wikipedia.org/wiki/Doi_(identifier)
113 https://doi.org/10.1145%2F272991.272995
114 http://www.jstatsoft.org/v08/i14/paper
115 https://en.wikipedia.org/wiki/Journal_of_Statistical_Software
116 http://prng.di.unimi.it
117 https://en.wikipedia.org/wiki/ACM_Transactions_on_Mathematical_Software
118 https://en.wikipedia.org/wiki/Doi_(identifier)
119 https://doi.org/10.1145%2F2845077
120 https://en.wikipedia.org/wiki/Doi_(identifier)
121 https://doi.org/10.1016%2Fj.cam.2016.11.006
122 http://www.iro.umontreal.ca/~lecuyer/myftp/papers/wellrng.pdf
123 https://en.wikipedia.org/wiki/ACM_Transactions_on_Mathematical_Software
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1145%2F1132973.1132974


15. S Y. Y. Cryptanalytic Attacks on RSA. Springer, 2007. p. 73. ISBN126 978-
0-387-48741-0127 .
16. N F128 , B S129 , T K130 (2010). ”C-
 E: D P  P A,
C 9.4: T G”131 (PDF).CS1 maint: multiple names: authors
list (link132 )
17. K P (2016). ”IV.4 P R G”133 . Cryp-
tology. uni-mainz.de134 . Retrieved 2017-11-12. The MICALI-SCHNORR generator135
18. P, R. ”L 11: T G-L T”136 (PDF). COM
S 687 Introduction to Cryptography. Retrieved 20 July 2016.
19. M G137 . ”T M F  D_EC_DRBG”138 .
20. K, J; Y, L (2014). Introduction to modern cryptography.
CRC press. p. 70.
21. S, W (2 D 1999). ”F C  E-
 M  D R N G”139
(PDF). Anwendungshinweise und Interpretationen (AIS). Bundesamt für Sicherheit
in der Informationstechnik140 . pp. 5–11. Retrieved 19 August 2013.
22. ”S    ”141 . FIPS142 . NIST143 .
1994-01-11. . 4.11.1 P-U T. A   144  M
27, 2013. R 19 A 2013.

31.10 Bibliography
• Barker E., Kelsey J.145 , Recommendation for Random Number Generation Using Deter-
ministic Random Bit Generators146 , NIST147 SP800-90A, January 2012

126 https://en.wikipedia.org/wiki/ISBN_(identifier)
127 https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-48741-0
128 https://en.wikipedia.org/wiki/Niels_Ferguson
129 https://en.wikipedia.org/wiki/Bruce_Schneier
130 https://en.wikipedia.org/w/index.php?title=Tadayoshi_Kohno&action=edit&redlink=1
131 https://www.schneier.com/fortuna.pdf
133 https://www.staff.uni-mainz.de/pommeren/Cryptology/Bitstream/4_Perfect/
134 https://en.wikipedia.org/wiki/Johannes_Gutenberg_University_of_Mainz
135 https://www.staff.uni-mainz.de/pommeren/Cryptology/Bitstream/4_Perfect/MicSch.pdf
136 http://www.cs.cornell.edu/courses/cs687/2006fa/lectures/lecture11.pdf
137 https://en.wikipedia.org/wiki/Matthew_D._Green
138 http://blog.cryptographyengineering.com/2013/09/the-many-flaws-of-dualecdrbg.html
139 https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Zertifizierung/Interpretationen/AIS_20_Functionality_Classes_Evaluation_Methodology_DRNG_e.pdf?__blob=publicationFile
140 https://en.wikipedia.org/wiki/Bundesamt_f%C3%BCr_Sicherheit_in_der_Informationstechnik
141 https://web.archive.org/web/20130527090643/http://csrc.nist.gov/publications/fips/fips1401.htm
142 https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard
143 https://en.wikipedia.org/wiki/NIST
144 http://csrc.nist.gov/publications/fips/fips1401.htm
145 https://en.wikipedia.org/wiki/John_Kelsey_(cryptanalyst)
146 http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A.pdf
147 https://en.wikipedia.org/wiki/NIST


• Brent R.P.148 , ”Some long-period random number generators using shifts and xors”,
ANZIAM Journal149 , 2007; 48:C188–C202
• Gentle J.E. (2003), Random Number Generation and Monte Carlo Methods, Springer.
• Hörmann W., Leydold J., Derflinger G. (2004, 2011), Automatic Nonuniform Random
Variate Generation, Springer-Verlag.
• Knuth D.E.150 . The Art of Computer Programming151 , Volume 2: Seminumerical Al-
gorithms, Third Edition. Addison-Wesley, 1997. ISBN152 0-201-89684-2153 . Chapter 3.
[Extensive coverage of statistical tests for non-randomness.]
• Luby M., Pseudorandomness and Cryptographic Applications, Princeton Univ Press, 1996.
ISBN154 9780691025469155
• von Neumann J., ”Various techniques used in connection with random digits,” in A.S.
Householder, G.E. Forsythe, and H.H. Germond, eds., Monte Carlo Method, National
Bureau of Standards Applied Mathematics Series, 12 (Washington, D.C.: U.S. Govern-
ment Printing Office, 1951): 36–38.
• P, I (1997). The Jungles of Randomness : a mathematical safari156 .
N Y: J W & S. ISBN157 0-471-16449-6158 .
• Press W.H., Teukolsky S.A., Vetterling W.T., Flannery B.P. (2007), Numerical Recipes159
(Cambridge University Press160 ).
• Viega J.161 , ”Practical Random Number Generation in Software162 ”, in Proc. 19th Annual
Computer Security Applications Conference, Dec. 2003.

31.11 External links


• TestU01163 : A free, state-of-the-art (GPL164 ) C++165 Random Number Test Suite.
• DieHarder166 : A free (GPL167 ) C168 Random Number Test Suite.
• ”Generating random numbers169 ” (in embedded systems170 ) by Eric Uner (2004)

148 https://en.wikipedia.org/wiki/Richard_P._Brent
149 https://en.wikipedia.org/wiki/ANZIAM_Journal
150 https://en.wikipedia.org/wiki/Donald_Knuth
151 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
152 https://en.wikipedia.org/wiki/ISBN_(identifier)
153 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89684-2
154 https://en.wikipedia.org/wiki/ISBN_(identifier)
155 https://en.wikipedia.org/wiki/Special:BookSources/9780691025469
156 https://archive.org/details/junglesofrandomn00ivar
157 https://en.wikipedia.org/wiki/ISBN_(identifier)
158 https://en.wikipedia.org/wiki/Special:BookSources/0-471-16449-6
159 https://en.wikipedia.org/wiki/Numerical_Recipes
160 https://en.wikipedia.org/wiki/Cambridge_University_Press
161 https://en.wikipedia.org/wiki/John_Viega
162 http://www.acsac.org/2003/papers/79.pdf
163 http://simul.iro.umontreal.ca/testu01/tu01.html
164 https://en.wikipedia.org/wiki/GNU_General_Public_License
165 https://en.wikipedia.org/wiki/C%2B%2B
166 http://www.phy.duke.edu/~rgb/General/rand_rate.php
167 https://en.wikipedia.org/wiki/GNU_General_Public_License
168 https://en.wikipedia.org/wiki/C_(programming_language)
169 http://www.embedded.com/design/configurable-systems/4024972/Generating-random-numbers
170 https://en.wikipedia.org/wiki/Embedded_systems


• ”Analysis of the Linux Random Number Generator171 ” by Zvi Gutterman, Benny Pinkas172 , and Tzachy Reinman (2006)
• ”Better pseudorandom generators173 ” by Parikshit Gopalan, Raghu Meka, Omer Rein-
gold174 , Luca Trevisan175 , and Salil Vadhan176 (Microsoft Research177 , 2012)
• rand() Considered Harmful178 on YouTube179 by Stephan Lavavej (Microsoft, 2013)
• Wsphynx180 , a simple online random number generator. Random numbers are generated by JavaScript pseudorandom number generator (PRNG) algorithms.

171 http://eprint.iacr.org/2006/086
172 https://en.wikipedia.org/w/index.php?title=Benny_Pinkas&action=edit&redlink=1
173 http://research.microsoft.com/apps/pubs/default.aspx?id=168806
174 https://en.wikipedia.org/wiki/Omer_Reingold
175 https://en.wikipedia.org/wiki/Luca_Trevisan
176 https://en.wikipedia.org/wiki/Salil_Vadhan
177 https://en.wikipedia.org/wiki/Microsoft_Research
178 https://www.youtube.com/watch?v=LDPMpc-ENqY
179 https://en.wikipedia.org/wiki/YouTube
180 http://wsphynx.com/simpleApp/random.html

32 Linear congruential generator

Figure 77 Two modulo-9 LCGs show how different parameters lead to different cycle
lengths. Each row shows the state evolving until it repeats. The top row shows a
generator with m = 9, a = 2, c = 0, and a seed of 1, which produces a cycle of length 6.
The second row is the same generator with a seed of 3, which produces a cycle of length 2.
Using a = 4 and c = 1 (bottom row) gives a cycle length of 9 with any seed in [0, 8].

A linear congruential generator (LCG) is an algorithm1 that yields a sequence of pseudo-randomized numbers calculated with a discontinuous piecewise linear equation2 .
The method represents one of the oldest and best-known pseudorandom number genera-
tor3 algorithms.[1] The theory behind them is relatively easy to understand, and they are
easily implemented and fast, especially on computer hardware which can provide modular
arithmetic4 by storage-bit truncation.
The generator is defined by the recurrence relation5 :
Xn+1 = (aXn + c) mod m
where X is the sequence6 of pseudorandom values, and
m, 0 < m — the ”modulus7 ”
a, 0 < a < m — the ”multiplier”
c, 0 ≤ c < m — the ”increment”

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Piecewise_linear_function
3 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
4 https://en.wikipedia.org/wiki/Modular_arithmetic
5 https://en.wikipedia.org/wiki/Recurrence_relation
6 https://en.wikipedia.org/wiki/Sequence
7 https://en.wikipedia.org/wiki/Modulo_operation


X0 , 0 ≤ X0 < m — the ”seed” or ”start value”
are integer8 constants that specify the generator. If c = 0, the generator is often called a
multiplicative congruential generator (MCG), or Lehmer RNG9 . If c ≠ 0, the method
is called a mixed congruential generator.[2]:4-
When c ≠ 0, a mathematician would call the recurrence an affine transformation10 , not a
linear11 one, but the misnomer is well-established in computer science.[3]:1
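The recurrence can be exercised directly. This short sketch (Python; parameters taken from the figure caption above) reproduces both the length-6 cycle and the full-period cycle described there:

```python
# Direct sketch of the recurrence X_{n+1} = (a*X_n + c) mod m, using the
# figure's parameters: (m=9, a=2, c=0, seed=1) and the full-period (m=9, a=4, c=1).
def lcg_sequence(m, a, c, seed, n):
    out = []
    for _ in range(n):
        seed = (a * seed + c) % m
        out.append(seed)
    return out

print(lcg_sequence(9, 2, 0, 1, 6))   # [2, 4, 8, 7, 5, 1]; then it repeats
print(lcg_sequence(9, 4, 1, 0, 9))   # hits all 9 residues before repeating
```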

32.1 Period length

A benefit of LCGs is that with appropriate choice of parameters, the period is known and
long. Although not the only criterion, too short a period is a fatal flaw in a pseudorandom
number generator.[4]
While LCGs are capable of producing pseudorandom numbers12 which can pass formal
tests for randomness13 , the quality of the output is extremely sensitive to the choice of the
parameters m and a.[5][2][6][7][8][3] For example, a = 1 and c = 1 produces a simple modulo-
m counter, which has a long period, but is obviously non-random.
Historically, poor choices for a have led to ineffective implementations of LCGs. A particu-
larly illustrative example of this is RANDU14 , which was widely used in the early 1970s and
led to many results which are currently being questioned because of the use of this poor
LCG.[9]
There are three common families of parameter choice:

32.1.1 m prime, c = 0

Main article: Lehmer random number generator15
This is the original Lehmer RNG construction. The period is m−1 if the multiplier a is chosen to be a primitive element16 of the integers modulo m. The initial state must be chosen between 1 and m−1.
One disadvantage of a prime modulus is that the modular reduction requires a double-width product and an explicit reduction step. Often a prime just less than a power of 2 is used (the Mersenne primes17 2³¹ − 1 and 2⁶¹ − 1 are popular), so that the reduction modulo m = 2ᵉ − d can be computed as (ax mod 2ᵉ) + d⌊ax/2ᵉ⌋. This must be followed by a conditional subtraction of m if the result is too large, but the number of subtractions is limited to ad/m, which can be easily limited to one if d is small.

8 https://en.wikipedia.org/wiki/Integer
9 https://en.wikipedia.org/wiki/Lehmer_RNG
10 https://en.wikipedia.org/wiki/Affine_transformation
11 https://en.wikipedia.org/wiki/Linear_transformation
12 https://en.wikipedia.org/wiki/Pseudorandom_numbers
13 https://en.wikipedia.org/wiki/Tests_for_randomness
14 https://en.wikipedia.org/wiki/RANDU
15 https://en.wikipedia.org/wiki/Lehmer_random_number_generator
16 https://en.wikipedia.org/wiki/Primitive_element_(finite_field)
17 https://en.wikipedia.org/wiki/Mersenne_prime


If a double-width product is unavailable, and the multiplier is chosen carefully, Schrage's method[10] may be used. To do this, factor m = qa + r, i.e. q = ⌊m/a⌋ and r = m mod a. Then compute ax mod m = a(x mod q) − r⌊x/q⌋. Since x mod q < q ≤ m/a, the first term is strictly less than am/a = m. If a is chosen so that r ≤ q (and thus r/q ≤ 1), then the second term is also less than m: r⌊x/q⌋ ≤ rx/q = x(r/q) ≤ x < m. Thus, both products can be computed with a single-width product, and the difference between them lies in the range [1−m, m−1], so can be reduced to [0, m−1] with a single conditional add.[11]
A second disadvantage is that it is awkward to convert the value 1 ≤ x < m to uniform
random bits. If a prime just less than a power of 2 is used, sometimes the missing values
are simply ignored.
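Schrage's method can be sketched as follows. This example assumes the classic MINSTD parameters m = 2³¹ − 1 and a = 16807 (for which r ≤ q holds) and cross-checks the single-width computation against the direct double-width one:

```python
# Sketch of Schrage's method for m = 2**31 - 1, a = 16807 (MINSTD constants).
M = 2**31 - 1
A = 16807
Q = M // A        # q = floor(m/a) = 127773
R = M % A         # r = m mod a = 2836; note r <= q as required

def schrage(x):
    # a*(x mod q) - r*floor(x/q); both terms fit in a single machine word
    t = A * (x % Q) - R * (x // Q)
    return t if t >= 0 else t + M

# cross-check against the direct (double-width) computation
x = 1
for _ in range(1000):
    assert schrage(x) == (A * x) % M
    x = schrage(x)
```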

32.1.2 m a power of 2, c = 0

Choosing m to be a power of 218 , most often m = 2³² or m = 2⁶⁴ , produces a particularly efficient LCG, because this allows the modulus operation to be computed by simply truncating the binary representation. In fact, the most significant bits are usually not computed at all. There are, however, disadvantages.
This form has maximal period m/4, achieved if a ≡ 3 or a ≡ 5 (mod 8). The initial state X0
must be odd, and the low three bits of X alternate between two states and are not useful.
It can be shown that this form is equivalent to a generator with a modulus a quarter the
size and c ≠0.[2]
A more serious issue with the use of a power-of-two modulus is that the low bits have a
shorter period than the high bits. The lowest-order bit of X never changes (X is always
odd), and the next two bits alternate between two states. (If a ≡ 5 (mod 8), then bit 1
never changes and bit 2 alternates. If a ≡ 3 (mod 8), then bit 2 never changes and bit 1
alternates.) Bit 3 repeats with a period of 4, bit 4 has a period of 8, and so on. Only the
most significant bit of X achieves the full period.
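The short low-bit cycles are easy to observe directly. In this sketch a = 5 is used purely to make the pattern obvious; it is not a recommended multiplier:

```python
# Illustration of the low-bit problem for m = 2**32, c = 0: with a ≡ 5 (mod 8)
# and an odd seed, the three lowest bits of the state merely flip between two values.
m, a = 2**32, 5          # a = 5 chosen only for clarity, not quality
x = 1                    # seed must be odd
low_bits = []
for _ in range(8):
    x = (a * x) % m
    low_bits.append(x & 0b111)   # three lowest bits of the state
print(low_bits)                  # [5, 1, 5, 1, 5, 1, 5, 1]: period 2
```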

32.1.3 c ≠ 0

When c ≠0, correctly chosen parameters allow a period equal to m, for all seed values. This
will occur if and only if19 :[2]:17—19
1. m and c are relatively prime20 ,
2. a − 1 is divisible by all prime factors21 of m,
3. a − 1 is divisible by 4 if m is divisible by 4.
These three requirements are referred to as the Hull–Dobell Theorem.[12][13]
This form may be used with any m, but only works well for m with many repeated prime
factors, such as a power of 2; using a computer's word size22 is the most common choice.
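The three conditions are straightforward to check mechanically. The following sketch tests them against two parameter sets that appear in the table later in this chapter:

```python
# Check the Hull-Dobell conditions for a full-period LCG with c != 0.
from math import gcd

def prime_factors(n):
    """Distinct prime factors of n (trial division; fine for smooth n)."""
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def hull_dobell(m, a, c):
    """True iff X -> (a*X + c) mod m has period m for every seed."""
    return (gcd(c, m) == 1
            and all((a - 1) % p == 0 for p in prime_factors(m))
            and ((a - 1) % 4 == 0 if m % 4 == 0 else True))

print(hull_dobell(2**32, 1664525, 1013904223))   # True  (Numerical Recipes row)
print(hull_dobell(2**32, 1664525, 2))            # False (c shares factor 2 with m)
```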

18 https://en.wikipedia.org/wiki/Power_of_two
19 https://en.wikipedia.org/wiki/If_and_only_if
20 https://en.wikipedia.org/wiki/Coprime_integers
21 https://en.wikipedia.org/wiki/Prime_factor
22 https://en.wikipedia.org/wiki/Word_size

429
Linear congruential generator

If m were a square-free integer23 , this would only allow a ≡ 1 (mod m), which makes a very
poor PRNG; a selection of possible full-period multipliers is only available when m has
repeated prime factors.
Although the Hull–Dobell theorem provides maximum period, it is not sufficient to guar-
antee a good generator. For example, it is desirable for a − 1 to not be any more divisible
by prime factors of m than necessary. Thus, if m is a power of 2, then a − 1 should be
divisible by 4 but not divisible by 8, i.e. a ≡ 5 (mod 8).[2]:§3.2.1.3
Indeed, most multipliers produce a sequence which fails one test for non-randomness or
another, and finding a multiplier which is satisfactory to all applicable criteria[2]:§3.3.3 is
quite challenging. The spectral test24 is one of the most important tests.[14]
Note that a power-of-2 modulus shares the problem described above for c = 0: the low k bits form a generator with modulus 2ᵏ and thus repeat with a period of 2ᵏ; only the most significant bit achieves the full period. If a pseudorandom number less than r is desired, ⌊rX/m⌋ is a much higher-quality result than X mod r. Unfortunately, most programming languages make the latter much easier to write (X % r), so it is the more commonly used form.
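The two reduction strategies can be written side by side. This sketch assumes a 32-bit power-of-two modulus:

```python
# Two ways to reduce LCG output X in [0, m) to [0, r), for m = 2**32.
def to_range_high(x, r):
    return (r * x) >> 32    # floor(r*X/m): driven by the strong high bits

def to_range_low(x, r):
    return x % r            # inherits the weak low bits when m = 2**k

print(to_range_high(2**31, 10))   # 5: the top half of the state range maps to 5..9
print(to_range_low(2**31, 10))    # 8: determined largely by the low-order bits
```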
The generator is not sensitive to the choice of c, as long as it is relatively prime to the modulus (e.g. if m is a power of 2, then c must be odd), so the value c = 1 is commonly chosen.
The series produced by other choices of c can be written as a simple function of the series when c = 1.[2]:11 Specifically, if Y is the prototypical series defined by Y0 = 0 and Yn+1 = (aYn + 1) mod m, then a general series Xn+1 = (aXn + c) mod m can be written as an affine function of Y:
Xn = (X0 (a − 1) + c)Yn + X0 = (X1 − X0 )Yn + X0 (mod m).
More generally, any two series X and Z with the same multiplier and modulus are related
by
(Xn − X0 )/(X1 − X0 ) = Yn = (aⁿ − 1)/(a − 1) = (Zn − Z0 )/(Z1 − Z0 ) (mod m).
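The affine relation can be spot-checked numerically. The parameters below are arbitrary illustrations, not recommended generator constants:

```python
# Numerical spot-check of the affine relation: with the same a and m,
# X_n = (X_1 - X_0)*Y_n + X_0 (mod m), where Y is the c = 1 series with Y_0 = 0.
m, a, c = 2**16, 21, 7     # arbitrary small illustrative parameters

def step(x, inc):
    return (a * x + inc) % m

x0 = 12345
x, y = x0, 0               # Y is the prototypical series
x1 = step(x0, c)
for _ in range(100):
    x, y = step(x, c), step(y, 1)
    assert x == ((x1 - x0) * y + x0) % m
```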

32.2 Parameters in common use

The following table lists the parameters of LCGs in common use, including built-in
rand() functions in runtime libraries25 of various compilers26 . This table is to show popular-
ity, not examples to emulate; many of these parameters are poor. Tables of good parameters
are available.[8][3]
Source modulus m multiplier a increment c output bits of seed in rand() or Random(L)

23 https://en.wikipedia.org/wiki/Square-free_integer
24 https://en.wikipedia.org/wiki/Spectral_test
25 https://en.wikipedia.org/wiki/Runtime_library
26 https://en.wikipedia.org/wiki/Compiler

Numerical Recipes27 2³² 1664525 1013904223
Borland28 C/C++ 2³² 22695477 1 bits 30..16 in rand(),
30..0 in lrand()
glibc29 (used by GCC30 )[15] 2³¹ 1103515245 12345 bits 30..0
ANSI C31 : Watcom32 , Digital Mars33 , CodeWarrior34 , IBM VisualAge35 C/C++;[16] C9036 , C9937 , C1138 : Suggestion in the ISO/IEC 9899,[17] C1839 2³¹ 1103515245 12345 bits 30..16
Borland Delphi40 , Virtual Pascal41 2³² 134775813 1 bits 63..32 of (seed × L)
Turbo Pascal42 2³² 134775813 (0x8088405₁₆) 1
Microsoft Visual/Quick C/C++43 2³² 214013 (343FD₁₆) 2531011 (269EC3₁₆) bits 30..16
Microsoft Visual Basic44 (6 and earlier)[18] 2²⁴ 1140671485 (43FD43FD₁₆) 12820163 (C39EC3₁₆)
RtlUniform from Native API45 [19] 2³¹ − 1 2147483629 (7FFFFFED₁₆) 2147483587 (7FFFFFC3₁₆)
Apple CarbonLib46 , C++1147 's minstd_rand0[20] 2³¹ − 1 16807 0 see MINSTD48
C++1149 's minstd_rand[20] 2³¹ − 1 48271 0 see MINSTD50

27 https://en.wikipedia.org/wiki/Numerical_Recipes
28 https://en.wikipedia.org/wiki/Borland
29 https://en.wikipedia.org/wiki/Glibc
30 https://en.wikipedia.org/wiki/GNU_Compiler_Collection
31 https://en.wikipedia.org/wiki/ANSI_C
32 https://en.wikipedia.org/wiki/Watcom_C_compiler
33 https://en.wikipedia.org/wiki/Digital_Mars
34 https://en.wikipedia.org/wiki/CodeWarrior
35 https://en.wikipedia.org/wiki/IBM_VisualAge
36 https://en.wikipedia.org/wiki/C90_(C_version)
37 https://en.wikipedia.org/wiki/C99
38 https://en.wikipedia.org/wiki/C11_(C_standard_revision)
39 https://en.wikipedia.org/wiki/C18_(C_standard_revision)
40 https://en.wikipedia.org/wiki/Borland_Delphi
41 https://en.wikipedia.org/wiki/Virtual_Pascal
42 https://en.wikipedia.org/wiki/Turbo_Pascal
43 https://en.wikipedia.org/wiki/Visual_C%2B%2B
44 https://en.wikipedia.org/wiki/Visual_Basic
45 https://en.wikipedia.org/wiki/Native_API
46 https://en.wikipedia.org/wiki/CarbonLib
47 https://en.wikipedia.org/wiki/C%2B%2B11
48 https://en.wikipedia.org/wiki/MINSTD
49 https://en.wikipedia.org/wiki/C%2B%2B11
50 https://en.wikipedia.org/wiki/MINSTD

MMIX51 by Donald Knuth52 2⁶⁴ 6364136223846793005 1442695040888963407
Newlib53 , Musl54 2⁶⁴ 6364136223846793005 1 bits 63..32
VMS55 's MTH$RANDOM,[21] old versions of glibc56 2³² 69069 (10DCD₁₆) 1
Java57 's java.util.Random, POSIX58 [ln]rand48, glibc59 [ln]rand48[_r] 2⁴⁸ 25214903917 (5DEECE66D₁₆) 11 bits 47..16
random0[22][23][24][25][26] 134456 = 2³ · 7⁵ 8121 28411 Xn / 134456
POSIX60 [27] [jm]rand48, glibc61 [mj]rand48[_r] 2⁴⁸ 25214903917 (5DEECE66D₁₆) 11 bits 47..15
POSIX62 [de]rand48, glibc63 [de]rand48[_r] 2⁴⁸ 25214903917 (5DEECE66D₁₆) 11 bits 47..0
cc6564 [28] 2²³ 65793 (10101₁₆) 4282663 (415927₁₆) bits 22..8
cc6565 2³² 16843009 (1010101₁₆) 826366247 (31415927₁₆) bits 31..16
Formerly common: RANDU66 [9] 2³¹ 65539 0

As shown above, LCGs do not always use all of the bits in the values they produce. For
example, the Java67 implementation operates with 48-bit values at each iteration but returns
only their 32 most significant bits. This is because the higher-order bits have longer periods
than the lower-order bits (see below). LCGs that use this truncation technique produce
statistically better values than those that do not. This is especially noticeable in scripts that use the mod operation to reduce range; taking the random number mod 2 will lead to alternating 0 and 1 without truncation.

32.3 Advantages and disadvantages

51 https://en.wikipedia.org/wiki/MMIX
52 https://en.wikipedia.org/wiki/Donald_Knuth
53 https://en.wikipedia.org/wiki/Newlib
54 https://en.wikipedia.org/wiki/Musl
55 https://en.wikipedia.org/wiki/OpenVMS
56 https://en.wikipedia.org/wiki/Glibc
57 https://en.wikipedia.org/wiki/Java_(programming_language)
58 https://en.wikipedia.org/wiki/POSIX
59 https://en.wikipedia.org/wiki/Glibc
60 https://en.wikipedia.org/wiki/POSIX
61 https://en.wikipedia.org/wiki/Glibc
62 https://en.wikipedia.org/wiki/POSIX
63 https://en.wikipedia.org/wiki/Glibc
64 https://en.wikipedia.org/wiki/Cc65
65 https://en.wikipedia.org/wiki/Cc65
66 https://en.wikipedia.org/wiki/RANDU
67 https://en.wikipedia.org/wiki/Java_(programming_language)



LCGs are fast and require minimal memory (one modulo-m number, often 32 or 64 bits) to
retain state. This makes them valuable for simulating multiple independent streams. LCGs
are not intended, and must not be used, for cryptographic applications; use a cryptograph-
ically secure pseudorandom number generator80 for such applications.

80 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator


Figure 79 Hyperplanes of a linear congruential generator in three dimensions. This structure is what the spectral test measures.

Although LCGs have a few specific weaknesses, many of their flaws come from having too small a state. The fact that people have been lulled for so many years into using them with such small moduli can be seen as a testament to the strength of the technique. An LCG with large enough state can pass even stringent statistical tests; a modulo-2⁶⁴ LCG which returns the high 32 bits passes TestU0181 's SmallCrush suite,[citation needed] and a 96-bit LCG passes the most stringent BigCrush suite.[29]
For a specific example, an ideal random number generator with 32 bits of output is expected (by the Birthday theorem83 ) to begin duplicating earlier outputs after √m ≈ 2¹⁶ results.

81 https://en.wikipedia.org/wiki/TestU01
83 https://en.wikipedia.org/wiki/Birthday_theorem


results. Any PRNG whose output is its full, untruncated state will not produce duplicates
until its full period elapses, an easily detectable statistical flaw. For related reasons, any
PRNG should have a period longer than the square of the number of outputs required.
Given modern computer speeds, this means a period of 2⁶⁴ for all but the least demanding
applications, and longer for demanding simulations.
One flaw specific to LCGs is that, if used to choose points in an n-dimensional space, the points will lie on, at most, (n! ⋅ m)^(1/n) hyperplanes84 (Marsaglia's Theorem85 , developed by George Marsaglia86 ).[5] This is due to serial correlation between successive values of the sequence Xn . Carelessly chosen multipliers will usually have far fewer, widely spaced planes, which can lead to problems.
which can lead to problems. The spectral test87 , which is a simple test of an LCG's quality,
measures this spacing and allows a good multiplier to be chosen.
The plane spacing depends both on the modulus and the multiplier. A large enough modulus
can reduce this distance below the resolution of double precision numbers. The choice of
the multiplier becomes less important when the modulus is large. It is still necessary to
calculate the spectral index and make sure that the multiplier is not a bad one, but purely
probabilistically it becomes extremely unlikely to encounter a bad multiplier when the
modulus is larger than about 2⁶⁴ .
Another flaw specific to LCGs is the short period of the low-order bits when m is chosen to
be a power of 2. This can be mitigated by using a modulus larger than the required output,
and using the most significant bits of the state.
Nevertheless, for some applications LCGs may be a good option. For instance, in an em-
bedded system, the amount of memory available is often severely limited. Similarly, in an
environment such as a video game console88 taking a small number of high-order bits of an
LCG may well suffice. (The low-order bits of LCGs when m is a power of 2 should never
be relied on for any degree of randomness whatsoever.) The low order bits go through
very short cycles. In particular, any full-cycle LCG, when m is a power of 2, will produce
alternately odd and even results.
LCGs should be evaluated very carefully for suitability in non-cryptographic applications
where high-quality randomness89 is critical. For Monte Carlo simulations, an LCG must
use a modulus greater and preferably much greater than the cube of the number of random
samples which are required. This means, for example, that a (good) 32-bit LCG can be used
to obtain about a thousand random numbers; a 64-bit LCG is good for about 2²¹ random
samples (a little over two million), etc. For this reason, in practice LCGs are not suitable
for large-scale Monte Carlo simulations.
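The cube-root rule of thumb above is easy to evaluate numerically:

```python
# Usable Monte Carlo sample counts under the cube-root rule: roughly m**(1/3).
for bits in (32, 64):
    m = 2 ** bits
    print(bits, round(m ** (1 / 3)))   # ~1625 for 32 bits, ~2.6 million for 64
```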

84 https://en.wikipedia.org/wiki/Hyperplanes
https://en.wikipedia.org/w/index.php?title=Marsaglia%27s_Theorem&action=edit&redlink=
85
1
86 https://en.wikipedia.org/wiki/George_Marsaglia
87 https://en.wikipedia.org/wiki/Spectral_test
88 https://en.wikipedia.org/wiki/Video_game_console
89 https://en.wikipedia.org/wiki/Randomness


32.4 Sample Python code

The following is an implementation of an LCG in Python90 :

def lcg(modulus, a, c, seed):
    """Linear congruential generator."""
    while True:
        seed = (a * seed + c) % modulus
        yield seed

32.5 Sample Free Pascal code

Free Pascal uses a Mersenne Twister91 as its default pseudo-random number generator, whereas Delphi uses an LCG. Here is a Delphi-compatible example in Free Pascal92 based on
the information in the table above. Given the same RandSeed value it generates the same
sequence of random numbers as Delphi.

unit lcg_random;
{$ifdef fpc}{$mode delphi}{$endif}

interface

function LCGRandom: extended; overload; inline;
function LCGRandom(const range: longint): longint; overload; inline;

implementation

function IM: cardinal; inline;
begin
  RandSeed := RandSeed * 134775813 + 1;
  Result := RandSeed;
end;

function LCGRandom: extended; overload; inline;
begin
  Result := IM * 2.32830643653870e-10;
end;

function LCGRandom(const range: longint): longint; overload; inline;
begin
  Result := IM * range shr 32;
end;

end.

Like all pseudorandom number generators, an LCG needs to store state and alter it each time it generates a new number. Multiple threads may access this state simultaneously, causing a race condition. Implementations should use separate state, each with unique initialization, for different threads to avoid equal sequences of random numbers on simultaneously executing threads.
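One common remedy, sketched here in Python with the Numerical Recipes constants from the table above, is to give each thread its own generator closure so no state is shared:

```python
# Per-thread LCG streams: each closure owns its own state, so concurrent
# threads never race on a shared seed. Parameters are the Numerical Recipes row.
def make_lcg(seed, m=2**32, a=1664525, c=1013904223):
    def nxt():
        nonlocal seed
        seed = (a * seed + c) % m
        return seed
    return nxt

# e.g. one independent stream per worker thread, seeded differently
streams = [make_lcg(thread_id + 1) for thread_id in range(4)]
```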

90 https://en.wikipedia.org/wiki/Python_(programming_language)
91 https://en.wikipedia.org/wiki/Mersenne_Twister
92 https://en.wikipedia.org/wiki/Free_Pascal


32.6 LCG derivatives

There are several generators which are linear congruential generators in a different form,
and thus the techniques used to analyze LCGs can be applied to them.
One method of producing a longer period is to sum the outputs of several LCGs of dif-
ferent periods having a large least common multiple93 ; the Wichmann–Hill94 generator is
an example of this form. (We would prefer them to be completely coprime95 , but a prime
modulus implies an even period, so there must be a common factor of 2, at least.) This
can be shown to be equivalent to a single LCG with a modulus equal to the product of the
component LCG moduli.
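A concrete instance of this construction is the classic Wichmann–Hill generator (AS 183), which combines three small prime-modulus LCGs by summing their fractional outputs modulo 1:

```python
# Wichmann-Hill (AS 183): three small prime-modulus LCGs combined by
# summing their fractional outputs mod 1. Combined period is about 6.95e12.
def wichmann_hill(s1, s2, s3):
    """Seeds should be in [1, 30000]."""
    while True:
        s1 = 171 * s1 % 30269
        s2 = 172 * s2 % 30307
        s3 = 170 * s3 % 30323
        yield (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0
```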
Marsaglia96 's add-with-carry and subtract-with-borrow97 PRNGs with a word size of b = 2ʷ and lags r and s (r > s) are equivalent to LCGs with a modulus of bʳ ± bˢ ± 1.[30][31] Multiply-with-carry98 PRNGs with a multiplier of a are equivalent to LCGs with a large prime modulus of abʳ − 1 and a power-of-2 multiplier b.
A permuted congruential generator99 begins with a power-of-2-modulus LCG and applies
an output transformation to eliminate the short period problem in the low-order bits.

32.7 Comparison with other PRNGs

The other widely used primitive for obtaining long-period pseudorandom sequences is the
linear feedback shift register100 construction, which is based on arithmetic in GF(2)[x], the
polynomial ring101 over GF(2)102 . Rather than integer addition and multiplication, the basic
operations are exclusive-or103 and carry-less multiplication104 , which is usually implemented
as a sequence of logical shifts105 . These have the advantage that all of their bits are full-
period; they do not suffer from the weakness in the low-order bits that plagues arithmetic
modulo 2ᵏ .[32]
93 https://en.wikipedia.org/wiki/Least_common_multiple
94 https://en.wikipedia.org/wiki/Wichmann%E2%80%93Hill
95 https://en.wikipedia.org/wiki/Coprime
96 https://en.wikipedia.org/wiki/George_Marsaglia
97 https://en.wikipedia.org/wiki/Subtract_with_carry
98 https://en.wikipedia.org/wiki/Multiply-with-carry
99 https://en.wikipedia.org/wiki/Permuted_congruential_generator
100 https://en.wikipedia.org/wiki/Linear_feedback_shift_register
101 https://en.wikipedia.org/wiki/Polynomial_ring
102 https://en.wikipedia.org/wiki/GF(2)
103 https://en.wikipedia.org/wiki/Exclusive-or
104 https://en.wikipedia.org/wiki/Carry-less_multiplication
105 https://en.wikipedia.org/wiki/Logical_shift
106 https://en.wikipedia.org/wiki/Xorshift
107 https://en.wikipedia.org/wiki/Mersenne_twister

Examples of this family include xorshift106 generators and the Mersenne twister107 . The latter provides a very long period (2¹⁹⁹³⁷ − 1) and variate uniformity, but it fails some statistical tests.[33] Lagged Fibonacci generators108 also fall into this category; although they use
arithmetic addition, their period is ensured by an LFSR among the least-significant bits.
It is easy to detect the structure of a linear feedback shift register with appropriate tests[34]
such as the linear complexity test implemented in the TestU01109 suite; a boolean circu-
lant matrix110 initialized from consecutive bits of an LFSR will never have rank111 greater
than the degree of the polynomial. Adding a non-linear output mixing function (as in the
xoshiro256**112 and permuted congruential generator113 constructions) can greatly improve
the performance on statistical tests.
Another structure for a PRNG is a very simple recurrence function combined with a powerful
output mixing function. This includes counter mode114 block ciphers and non-cryptographic
generators such as SplitMix64115 .
A structure similar to LCGs, but not equivalent, is the multiple-recursive generator: Xn =
(a1 Xn−1 + a2 Xn−2 + ··· + ak Xn−k ) mod m for k ≥ 2. With a prime modulus, this can gen-
erate periods up to mk −1, so is a useful extension of the LCG structure to larger periods.
A powerful technique for generating high-quality pseudorandom numbers is to combine two
or more PRNGs of different structure; the sum of an LFSR and an LCG (as in the KISS116
or xorwow117 constructions) can do very well at some cost in speed.
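The multiple-recursive recurrence above can be sketched as a generator. The coefficients a1 and a2 below are arbitrary placeholders for illustration, not vetted constants:

```python
# Sketch of a k = 2 multiple-recursive generator:
# X_n = (a1*X_{n-1} + a2*X_{n-2}) mod M, period up to M**2 - 1 for prime M.
M = 2**31 - 1                       # prime modulus

def mrg(x1, x2, a1=1071064, a2=2113664):   # placeholder coefficients
    while True:
        x1, x2 = (a1 * x1 + a2 * x2) % M, x1
        yield x1
```

Reaching the maximal period requires coefficients chosen with the same care as an LCG multiplier; these placeholders only demonstrate the structure.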

32.8 See also


• List of random number generators118 – other PRNGs including some with better statistical qualities
• ACORN generator119 – not to be confused with ACG, a term which appears to have been used for variants of LCG and LFSR generators
• Permuted congruential generator120
• Full cycle121
• Inversive congruential generator122
• Multiply-with-carry123
• Lehmer RNG124 (sometimes called the Park–Miller RNG)

108 https://en.wikipedia.org/wiki/Lagged_Fibonacci_generator
109 https://en.wikipedia.org/wiki/TestU01
110 https://en.wikipedia.org/wiki/Circulant_matrix
111 https://en.wikipedia.org/wiki/Rank_(linear_algebra)
112 https://en.wikipedia.org/wiki/Xorshift#xoshiro256**
113 https://en.wikipedia.org/wiki/Permuted_congruential_generator
114 https://en.wikipedia.org/wiki/Counter_mode
115 http://prng.di.unimi.it/splitmix64.c
116 https://en.wikipedia.org/wiki/KISS_(algorithm)
117 https://en.wikipedia.org/wiki/Xorshift#xorwow
118 https://en.wikipedia.org/wiki/List_of_random_number_generators
119 https://en.wikipedia.org/wiki/ACORN_(PRNG)
120 https://en.wikipedia.org/wiki/Permuted_congruential_generator
121 https://en.wikipedia.org/wiki/Full_cycle
122 https://en.wikipedia.org/wiki/Inversive_congruential_generator
123 https://en.wikipedia.org/wiki/Multiply-with-carry
124 https://en.wikipedia.org/wiki/Lehmer_RNG


• Combined linear congruential generator125

32.9 Notes
1. ”Linear Congruential Generators126 ” by Joe Bolte, Wolfram Demonstrations
Project127 .
2. Knuth, Donald128 (1997). Seminumerical Algorithms. The Art of Computer
Programming129 . 2 (3rd ed.). Reading, MA: Addison-Wesley Professional. pp. 10–26.
3. Steele, Guy130 ; Vigna, Sebastiano131 (15 January 2020). ”Computationally
easy, spectrally good multipliers for congruential pseudorandom number
generators”. arXiv132 :2001.05304133 [cs.DS134 ]. At this point it is unlikely
that the now-traditional names will be corrected. Mathematics of Computation135 (to
appear). Associated data at 136 .
4. L'Ecuyer, Pierre (13 July 2017). Chan, W. K. V.; D'Ambrogio, A.;
Zacharewicz, G.; Mustafee, N.; Wainer, G.; Page, E. (eds.). History of Uniform
Random Number Generation137 (PDF). Proceedings of the 2017 Winter Sim-
ulation Conference (preprint). Las Vegas, United States. hal-01561551138 .
5. Marsaglia, George139 (September 1968). ”Random Numbers
Fall Mainly in the Planes”140 (PDF). PNAS141 . 61 (1): 25–28. Bib-
code142 :1968PNAS...61...25M143 . doi144 :10.1073/pnas.61.1.25145 . PMC146 285899147 .
PMID148 16591687149 .

125 https://en.wikipedia.org/wiki/Combined_linear_congruential_generator
126 http://demonstrations.wolfram.com/LinearCongruentialGenerators/
127 https://en.wikipedia.org/wiki/Wolfram_Demonstrations_Project
128 https://en.wikipedia.org/wiki/Donald_Knuth
129 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
130 https://en.wikipedia.org/wiki/Guy_Steele
131 https://en.wikipedia.org/wiki/Sebastiano_Vigna
132 https://en.wikipedia.org/wiki/ArXiv_(identifier)
133 http://arxiv.org/abs/2001.05304
134 http://arxiv.org/archive/cs.DS
135 https://en.wikipedia.org/wiki/Mathematics_of_Computation
136 https://github.com/vigna/CPRNG
137 https://www.iro.umontreal.ca/~lecuyer/myftp/papers/wsc17rng-history.pdf
138 https://hal.inria.fr/hal-01561551
139 https://en.wikipedia.org/wiki/George_Marsaglia
140 https://www.pnas.org/content/61/1/25.full.pdf
141 https://en.wikipedia.org/wiki/PNAS
142 https://en.wikipedia.org/wiki/Bibcode_(identifier)
143 https://ui.adsabs.harvard.edu/abs/1968PNAS...61...25M
144 https://en.wikipedia.org/wiki/Doi_(identifier)
145 https://doi.org/10.1073%2Fpnas.61.1.25
146 https://en.wikipedia.org/wiki/PMC_(identifier)
147 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC285899
148 https://en.wikipedia.org/wiki/PMID_(identifier)
149 http://pubmed.ncbi.nlm.nih.gov/16591687


6. P, S K.; M, K W. (O 1988). ”R N


G: G O A H T F”150 (PDF). Communications of the
ACM151 . 31 (10): 1192–1201. doi152 :10.1145/63039.63042153 .
7. H, W; D, G (1993). ”A P U
R N G W S   R M”154
(PDF). ACM Transactions on Mathematical Software. 19 (4): 489–495. Cite-
SeerX155 10.1.1.52.3811156 . doi157 :10.1145/168173.168414158 . a multiplier about as

small as m, produces random numbers with a bad one-dimensional distribution.
8. L'E, P (1999). ”T  L C G
 D S  G L S”159 . Mathematics of Compu-
tation160 . 68 (225): 249–260. CiteSeerX161 10.1.1.34.1024162 . doi163 :10.1090/S0025-
5718-99-00996-5164 . Be sure to read the Errata165 as well.
9. P, W H.;  . (1992). Numerical Recipes in Fortran 77: The Art of
Scientific Computing166 (2 .). ISBN167 978-0-521-43064-7168 .
10. J, R (9 J 2010). ”C S P A C-
 26: R-N G”169 (PDF). . 19–20. R 2017-
10-31.
11. F, P (11 S 2006). ”S' M”170 . R
2017-10-31.
12. H, T. E.; D, A. R. (J 1962). ”R N G”171
(PDF). SIAM Review. 4 (3): 230–254. doi172 :10.1137/1004061173 . Retrieved 2016-
06-26.
13. S, F (2001). System Modeling and Simulation. John Wiley & Sons,
Ltd. p. 86. ISBN174 978-0-471-49694-6175 .

150 http://www.firstpr.com.au/dsp/rand31/p1192-park.pdf
151 https://en.wikipedia.org/wiki/Communications_of_the_ACM
152 https://en.wikipedia.org/wiki/Doi_(identifier)
153 https://doi.org/10.1145%2F63039.63042
154 https://epub.wu.ac.at/1288/1/document.pdf
155 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
156 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.52.3811
157 https://en.wikipedia.org/wiki/Doi_(identifier)
158 https://doi.org/10.1145%2F168173.168414
159 http://citeseer.ist.psu.edu/132363.html
160 https://en.wikipedia.org/wiki/Mathematics_of_Computation
161 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
162 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.1024
163 https://en.wikipedia.org/wiki/Doi_(identifier)
164 https://doi.org/10.1090%2FS0025-5718-99-00996-5
165 https://www.iro.umontreal.ca/~lecuyer/myftp/papers/latrules99Errata.pdf
166 https://en.wikipedia.org/wiki/Numerical_Recipes
167 https://en.wikipedia.org/wiki/ISBN_(identifier)
168 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-43064-7
169 http://www.cse.wustl.edu/~jain/iucee/ftp/k_26rng.pdf#page=19
170 http://home.earthlink.net/~pfenerty/pi/schrages_method.html
http://chagall.med.cornell.edu/BioinfoCourse/PDFs/Lecture4/random_number_generator.
171
pdf
172 https://en.wikipedia.org/wiki/Doi_(identifier)
173 https://doi.org/10.1137%2F1004061
174 https://en.wikipedia.org/wiki/ISBN_(identifier)
175 https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-49694-6


14. A, D (M 2008). ”R N: N L 


C”176 . A M S.
15. Implementation in glibc-2.26 release.177 See the code after the test for ”TYPE_0”; the
GNU C library's rand() in stdlib.h178 uses a simple (single state) linear congruential
generator only in case that the state is declared as 8 bytes. If the state is larger
(an array), the generator becomes an additive feedback generator (initialized using
minstd_rand0179 ) and the period increases. See the simplified code180 that reproduces
the random sequence from this library.
16. K. E (21 A 1997). A collection of selected pseudorandom number
generators with linear structures181 . CSX182 10.1.1.53.3686183 . R 16
J 2012.
17. ”L  C D  A 12, 2011”184 (PDF). . 346. R-
 21 D 2014.
18. ”H V B G P-R N   RND
F”185 . Microsoft Support. Microsoft. Retrieved 17 June 2011.
19. In spite of documentation on MSDN186 , RtlUniform uses LCG, and not Lehmer's
algorithm, implementations before Windows Vista187 are flawed, because the result of
multiplication is cut to 32 bits, before modulo is applied
20. ”ISO/IEC 14882:2011”188 . ISO. 2 S 2011. R 3 S
2011.
21. GNU Scientific Library: Other random number generators189
22. Stephen J. Chapman. ”Example 6.4 – Random Number Generator”. ”MATLAB Pro-
gramming for Engineers”190 . 2015. pp. 253–256.
23. Stephen J. Chapman. ”Example 6.4 – Random Number Generator”. ”MATLAB Pro-
gramming with Applications for Engineers”191 . 2012. pp. 292–295.
24. S. J. Chapman. random0192 . 2004.
25. Stephen J. Chapman. ”Introduction to Fortran 90/95”193 . 1998. pp. 322–324.
26. Wu-ting Tsai. ”'Module': A Major Feature of the Modern Fortran”194 . pp. 6–7.
27. The Open Group Base Specifications Issue 7195 IEEE Std 1003.1, 2013 Edition

176 https://www.ams.org/publicoutreach/feature-column/fcarc-random
177 https://sourceware.org/git/?p=glibc.git;a=blob;f=stdlib/random_r.c;hb=glibc-2.26#l362
178 https://en.wikipedia.org/wiki/Stdlib.h
179 https://sourceware.org/git/?p=glibc.git;a=blob;f=stdlib/random_r.c;hb=glibc-2.26#l187
180 http://www.mscs.dal.ca/~selinger/random/
181 http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.53.3686&rep=rep1&type=pdf
182 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
183 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.3686
184 http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
185 http://support.microsoft.com/kb/231847
186 http://msdn.microsoft.com/en-us/library/bb432429(VS.85).aspx
187 https://en.wikipedia.org/wiki/Windows_Vista
188 http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=50372
189 https://www.gnu.org/software/gsl/manual/html_node/Other-random-number-generators.html
190 https://books.google.com/books?id=e80HBgAAQBAJ
191 https://books.google.com/books?id=of8KAAAAQBAJ
192 http://www.udel.edu/CIS/106/pconrad/MPE3/code/chap5/random0.m
193 https://books.google.com/books?id=QoVGAAAAYAAJ
194 http://homepage.ntu.edu.tw/~wttsai/fortran/ppt/11.Module.pdf
195 http://pubs.opengroup.org/onlinepubs/9699919799/


28. C, S. ”.”196 . cc65. Retrieved 8 July 2016.


29. O'N, M E. (5 S 2014). PCG: A Family of Simple Fast Space-
Efficient Statistically Good Algorithms for Random Number Generation197 (PDF)
(T ). H M C198 . . 6–7. HMC-CS-2014-0905.
30. T, S; L’E, P (O 1993). On the Lattice Structure of
the Add-with-Carry and Subtract-with-Borrow Random Number Generators199 (PDF).
W  S N. K U.
31. T, S; L'E, P (D 1992). Analysis of Add-with-Carry
and Subtract-with-Borrow Generators200 (PDF). P   1992 W
S C. . 443–447.
32. G, N201 (1999). ”S 5.3.2: L F”. The Na-
ture of Mathematical Modeling (First ed.). Cambridge University Press. p. 59.
ISBN202 978-0-521-57095-4203 .
33. M, M; N, T (J 1998). ”M
:  623-   - -
 ”204 (PDF). ACM Transactions on Modeling and Computer Simula-
tion. 8 (1): 3–30. CiteSeerX205 10.1.1.215.1141206 . doi207 :10.1145/272991.272995208 .
34. E, D E. 3; S, J I.; C, S (J
2005). ”T P- S”209 . Randomness Requirements
for Security210 . IETF211 . . 6.1.3. 212 :10.17487/RFC4086213 . BCP 106. RFC214
4086215 .

32.10 References
• P, WH; T, SA; V, WT; F, BP (2007), ”S
7.1.1. S H”216 , Numerical Recipes: The Art of Scientific Computing (3rd ed.),
New York: Cambridge University Press, ISBN217 978-0-521-88068-8218

196 https://github.com/cc65/cc65/blob/master/libsrc/common/rand.s
197 http://www.pcg-random.org/pdf/hmc-cs-2014-0905.pdf#page=9
198 https://en.wikipedia.org/wiki/Harvey_Mudd_College
199 https://core.ac.uk/download/pdf/39215926.pdf
200 http://www.informs-sim.org/wsc92papers/1992_0059.pdf
201 https://en.wikipedia.org/wiki/Neil_Gershenfeld
202 https://en.wikipedia.org/wiki/ISBN_(identifier)
203 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-57095-4
204 https://pdfs.semanticscholar.org/098d/5792ffa43e9885f9fc644ffdd7b6a59b0922.pdf
205 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
206 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.215.1141
207 https://en.wikipedia.org/wiki/Doi_(identifier)
208 https://doi.org/10.1145%2F272991.272995
209 https://tools.ietf.org/html/rfc4086#section-6.1.3
210 https://tools.ietf.org/html/rfc4086
211 https://en.wikipedia.org/wiki/Internet_Engineering_Task_Force
212 https://en.wikipedia.org/wiki/Doi_(identifier)
213 https://doi.org/10.17487%2FRFC4086
214 https://en.wikipedia.org/wiki/RFC_(identifier)
215 https://tools.ietf.org/html/rfc4086
216 http://apps.nrbook.com/empanel/index.html#pg=343
217 https://en.wikipedia.org/wiki/ISBN_(identifier)
218 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-88068-8


• Gentle, James E. (2003). Random Number Generation and Monte Carlo Methods, 2nd
edition, Springer, ISBN219 0-387-00178-6220 .
• Boyar, Joan (1989). ”Inferring sequences produced by pseudo-random num-
ber generators”221 (PDF). Journal of the ACM222 . 36 (1): 129–141.
doi223 :10.1145/58562.59305224 . (In this paper, efficient algorithms are given for inferring
sequences produced by certain pseudo-random number generators.)

32.11 External links


• The simulation Linear Congruential Generator225 visualizes the correlations between the
pseudo-random numbers when manipulating the parameters.
• Security of Random Number Generation: An Annotated Bibliography226
• Linear Congruential Generators post to sci.math227
• The ”Death of Art” computer art project at Goldstein Technologies LLC uses an LCG
to generate 33,554,432 images228
• P. L'Ecuyer and R. Simard, ”TestU01: A C Library for Empirical Testing of Random
Number Generators”229 , May 2006, revised November 2006, ACM Transactions on Math-
ematical Software, 33, 4, Article 22, August 2007.
• Article about another way of cracking LCG230

219 https://en.wikipedia.org/wiki/ISBN_(identifier)
220 https://en.wikipedia.org/wiki/Special:BookSources/0-387-00178-6
221 http://asterix.cs.gsu.edu/crypto/p129-boyar.pdf
222 https://en.wikipedia.org/wiki/Journal_of_the_ACM
223 https://en.wikipedia.org/wiki/Doi_(identifier)
224 https://doi.org/10.1145%2F58562.59305
225 http://www.vias.org/simulations/simusoft_lincong.html
https://web.archive.org/web/20081211083300/http://www.cs.virginia.edu/~rjg7v/
226
annotated.html
https://web.archive.org/web/20090108194540/http://www.math.niu.edu/~rusin/known-
227
math/99/LCG
228 http://www.goldsteintech.com/art.php
229 http://www.iro.umontreal.ca/~lecuyer/myftp/papers/testu01.pdf
230 https://web.archive.org/web/20150616223328/http://yurichev.com/blog/modulo/

33 Middle-square method

Figure 80 One iteration of the middle-square method, showing a six digit seed, which is
then squared, and the resulting value has its middle six digits as the output value (and
also as the next seed for the sequence).


Figure 81 Directed graph of all 100 2-digit pseudorandom numbers obtained using the
middle-square method with n = 2.

In mathematics1 , the middle-square method is a method of generating pseudorandom
numbers2 . In practice it is not a good method, since its period3 is usually very short and it
has some severe weaknesses; repeated enough times, the middle-square method will either
begin repeatedly generating the same number or cycle to a previous number in the sequence
and loop indefinitely.

1 https://en.wikipedia.org/wiki/Mathematics
2 https://en.wikipedia.org/wiki/Pseudorandom_number
3 https://en.wikipedia.org/wiki/Pseudorandom_number_generator#Periodicity


33.1 History

33.1.1 In mathematics

The method was invented by John von Neumann4 , and was described at a conference in
1949.[1]
In the 1949 talk, von Neumann quipped that ”Any one who considers arithmetical methods
of producing random digits is, of course, in a state of sin.” What he meant, he elaborated,
was that there were no true ”random numbers”, just means to produce them, and ”a strict
arithmetic procedure”, like the middle-square method, ”is not such a method.” Nevertheless
he found these methods hundreds of times faster than reading ”truly” random numbers
off punch cards5 , which had practical importance for his ENIAC6 work. He found the
”destruction” of middle-square sequences to be a factor in their favor, because it could be
easily detected: ”one always fears the appearance of undetected short cycles.”[1] Nicholas
Metropolis7 reported sequences of 750,000 digits before ”destruction” by means of using
38-bit numbers with the ”middle-square” method.[2]

33.1.2 First invention theory

The book The Broken Dice by Ivar Ekeland8 gives an extended account of how the method
was invented by a Franciscan friar known only as Brother Edvin sometime between 1240
and 1250.[3] Supposedly, the manuscript is now lost, but Jorge Luis Borges9 sent Ekeland a
copy that he made at the Vatican Library.

33.2 The method

To generate a sequence of n-digit pseudorandom numbers, an n-digit starting value is created


and squared, producing a 2n-digit number. If the result has fewer than 2n digits, leading
zeroes10 are added to compensate. The middle n digits of the result would be the next
number in the sequence, and returned as the result. This process is then repeated to
generate more numbers.
The value of n must be even in order for the method to work: if the value of n is odd, then
there will not necessarily be a uniquely defined 'middle n digits' to select from. Consider
the following: if a 3-digit number is squared, it can yield a 6-digit number (e.g. 540² =
291600). If there were to be a middle three digits, that would leave 6 − 3 = 3 digits to be
distributed to the left and right of the middle. It is impossible to distribute these digits
equally on both sides of the middle number, and therefore there are no 'middle digits'.

4 https://en.wikipedia.org/wiki/John_von_Neumann
5 https://en.wikipedia.org/wiki/Punch_cards
6 https://en.wikipedia.org/wiki/ENIAC
7 https://en.wikipedia.org/wiki/Nicholas_Metropolis
8 https://en.wikipedia.org/wiki/Ivar_Ekeland
9 https://en.wikipedia.org/wiki/Jorge_Luis_Borges
10 https://en.wikipedia.org/wiki/Leading_zero


It is acceptable to pad the seeds with zeros to the left in order to create an even-valued
n-digit number (e.g. 540 → 0540).
For a generator of n-digit numbers, the period can be no longer than 8ⁿ. If the middle
n digits are all zeroes, the generator then outputs zeroes forever. If the first half of a
number in the sequence is zeroes, the subsequent numbers will be decreasing to zero. While
these runs of zero are easy to detect, they occur too frequently for this method to be of
practical use. The middle-square method can also get stuck on a number other than zero.
For n = 4, this occurs with the values 0100, 2500, 3792, and 7600. Other seed values form
very short repeating cycles, e.g., 0540 → 2916 → 5030 → 3009. These phenomena are
even more obvious when n = 2, as none of the 100 possible seeds generates more than 14
iterations without reverting to 0, 10, 50, 60, or a 24 ↔ 57 loop.
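The stuck values above are easy to verify; the following one-step middle-square function for n = 4 (a small sketch in C, using arithmetic instead of string manipulation, and not part of the original article) confirms that 0100, 2500, 3792 and 7600 are fixed points.

```c
#include <stdint.h>

/* One step of the decimal middle-square method for n = 4: square the value
   and keep the middle four digits of the zero-padded eight-digit square. */
uint32_t middle_square4(uint32_t x)
{
    uint64_t sq = (uint64_t)x * x;            /* at most 8 decimal digits */
    return (uint32_t)((sq / 100u) % 10000u);  /* drop 2 low digits, keep 4 */
}
```

For example, middle_square4(540) yields 2916, the first step of the short cycle quoted above.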

33.2.1 Example implementation

Here, the algorithm is rendered in Python 311 .

seed_number = int(input("Please enter a four digit number:\n[####] "))
number = seed_number
already_seen = set()
counter = 0

while number not in already_seen:
    counter += 1
    already_seen.add(number)
    number = int(str(number * number).zfill(8)[2:6])  # zfill pads with leading zeroes
    print(f"#{counter}: {number}")

print(f"We began with {seed_number}, and"
      f" have repeated ourselves after {counter} steps"
      f" with {number}.")

33.3 Middle Square Weyl Sequence PRNG

The defects associated with the original middle-square generator can be rectified by running
the middle square with a Weyl sequence12[4][5] . The Weyl sequence prevents convergence to
zero and also prevents the repeating-cycle problem described above. A C13
implementation is shown below.

#include <stdint.h>

uint64_t x = 0, w = 0, s = 0xb5ad4eceda1ce2a9;

inline static uint32_t msws() {


x *= x;
x += (w += s);
return x = (x>>32) | (x<<32);
}

11 https://en.wikipedia.org/wiki/Python_3
12 https://en.wikipedia.org/wiki/Weyl_sequence
13 https://en.wikipedia.org/wiki/C_(programming_language)


The Weyl sequence (w += s) is added to the square of x, and the middle is extracted by
shifting right 32 bits. This generator passes BigCrush[6][7] and PractRand[8] . This may be
the world's fastest PRNG for producing 32-bit floating-point numbers[4] . A free software
package is available here14[5] .
A counter-based15 version of this generator called ”squares” is now available. See arXiv16
article ”Squares: A Fast Counter-Based RNG”[9] .

33.4 References
1. The 1949 papers were not reprinted until 1951. John von Neumann, “Various tech-
niques used in connection with random digits,” in A.S. Householder, G.E. Forsythe,
and H.H. Germond, eds., Monte Carlo Method, National Bureau of Standards Applied
Mathematics Series, vol. 12 (Washington, D.C.: U.S. Government Printing Office,
1951): pp. 36-38.
2. Donald E. Knuth, The art of computer programming, Vol. 2, Seminumerical
algorithms, 2nd edn. (Reading, Mass.: Addison-Wesley, 1981), ch. 3, section 3.1.
3. I E (15 J 1996). The Broken Dice, and Other Mathematical Tales
of Chance. University of Chicago Press. ISBN17 978-0-226-19992-418 .
4. W, B (A 2017). ”M S W S RNG”.
X19 :1704.00358420 .
5. Middle Square Weyl Sequence RNG website21 .
6. Pierre L’Ecuyer & Richard Simard (2007), ”TestU01: A Software Library in ANSI
C for Empirical Testing of Random Number Generators22 ”, ACM Transactions on
Mathematical Software23 , 33: 22.
7. The TestU01 web site24 .
8. Chris Doty-Humphrey (2018), ”Practically Random: C++ library of statistical tests
for RNGs.25 ” version 0.94.
9. W, B (A 2020). ”S: A F C-B RNG”.
X26 :2004.0627827 .

14 http://mswsrng.wixsite.com/rand
15 https://en.wikipedia.org/wiki/Counter-based_random_number_generator_(CBRNG)
16 https://en.wikipedia.org/wiki/ArXiv
17 https://en.wikipedia.org/wiki/ISBN_(identifier)
18 https://en.wikipedia.org/wiki/Special:BookSources/978-0-226-19992-4
19 https://en.wikipedia.org/wiki/ArXiv_(identifier)
20 http://arxiv.org/abs/1704.00358v4
21 http://mswsrng.wixsite.com/rand
22 http://dl.acm.org/citation.cfm?doid=1268776.1268777
23 https://en.wikipedia.org/wiki/ACM_Transactions_on_Mathematical_Software
24 http://simul.iro.umontreal.ca/testu01/tu01.html
25 http://pracrand.sourceforge.net
26 https://en.wikipedia.org/wiki/ArXiv_(identifier)
27 http://arxiv.org/abs/2004.06278


33.5 See also


• Linear congruential generator28
• Blum Blum Shub29

28 https://en.wikipedia.org/wiki/Linear_congruential_generator
29 https://en.wikipedia.org/wiki/Blum_Blum_Shub

34 Xorshift

Xorshift random number generators, also called shift-register generators, are a class of
pseudorandom number generators1 that were discovered by George Marsaglia2 .[1] They are
a subset of linear-feedback shift registers3 (LFSRs) which allow a particularly efficient
implementation without using excessively sparse polynomials.[2] They generate the next
number in their sequence by repeatedly taking the exclusive or4 of a number with a
bit-shifted5 version of itself. This makes them extremely fast on modern computer
architectures. Like all LFSRs, the parameters have to be chosen very carefully in order to
achieve a long period.[3]
Xorshift generators are among the fastest non-cryptographically-secure random number
generators6 , requiring very small code and state. Although they do not pass every sta-
tistical test without further refinement, this weakness is well-known and easily amended
(as pointed out by Marsaglia in the original paper) by combining them with a non-linear
function, resulting e.g. in a xorshift+ or xorshift* generator. A native C implementation
of a xorshift+ generator that passes all tests from the BigCrush suite (with an order of
magnitude fewer failures than Mersenne Twister7 or WELL8 ) typically takes fewer than 10
clock cycles on x869 to generate a random number, thanks to instruction pipelining10 .[4]
The scramblers known as + and * still leave a weakness in the low bits,[5] so they are intended
for floating-point use, since conversion of a random number to floating point discards the
low bits. For general purposes, the scrambler ** (pronounced 'starstar') makes the LFSR
generators pass statistical tests in all bits.
Because plain xorshift generators (without a non-linear step) fail a few statistical tests, they
have been accused of being unreliable.[3]:360

34.1 Example implementation

A C11 version[note 1] of three xorshift algorithms[1]:4,5 is given here. The first has one 32-bit
word of state, and period 2³² − 1. The second has one 64-bit word of state and period 2⁶⁴ − 1.

1 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
2 https://en.wikipedia.org/wiki/George_Marsaglia
3 https://en.wikipedia.org/wiki/Linear-feedback_shift_register
4 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
5 https://en.wikipedia.org/wiki/Logical_shift
6 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
7 https://en.wikipedia.org/wiki/Mersenne_Twister
8 https://en.wikipedia.org/wiki/Well_equidistributed_long-period_linear
9 https://en.wikipedia.org/wiki/X86
10 https://en.wikipedia.org/wiki/Instruction_pipelining
11 https://en.wikipedia.org/wiki/C_(programming_language)


The last has four 32-bit words of state, and period 2¹²⁸ − 1. All use three shifts and three
or four exclusive-or operations:

#include <stdint.h>

struct xorshift32_state {
uint32_t a;
};

/* The state word must be initialized to non-zero */


uint32_t xorshift32(struct xorshift32_state *state)
{
/* Algorithm "xor" from p. 4 of Marsaglia, "Xorshift RNGs" */
uint32_t x = state->a;
x ^= x << 13;
x ^= x >> 17;
x ^= x << 5;
return state->a = x;
}

struct xorshift64_state {
uint64_t a;
};

uint64_t xorshift64(struct xorshift64_state *state)


{
uint64_t x = state->a;
x ^= x << 13;
x ^= x >> 7;
x ^= x << 17;
return state->a = x;
}

struct xorshift128_state {
uint32_t a, b, c, d;
};

/* The state array must be initialized to not be all zero */


uint32_t xorshift128(struct xorshift128_state *state)
{
/* Algorithm "xor128" from p. 5 of Marsaglia, "Xorshift RNGs" */
uint32_t t = state->d;

uint32_t const s = state->a;


state->d = state->c;
state->c = state->b;
state->b = s;

t ^= t << 11;
t ^= t >> 8;
return state->a = t ^ s ^ (s >> 19);
}

The 128-bit algorithm passes the diehard tests12 . However, it fails the MatrixRank and
LinearComp tests of the BigCrush test suite from the TestU0113 framework.

12 https://en.wikipedia.org/wiki/Diehard_tests
13 https://en.wikipedia.org/wiki/TestU01


34.2 Variations

All xorshift generators fail some tests out of TestU0114 's BigCrush test suite. This is true
for all generators based on linear recurrences, such as the Mersenne Twister15 or WELL16 .
However, it is easy to scramble the output of such generators to improve their quality.

34.2.1 xorwow

Marsaglia suggested scrambling the output by combining it with a simple additive counter
modulo 2³² (which he calls a ”Weyl sequence” after Weyl's equidistribution theorem17 ). This
also increases the period by a factor of 2³², to 2¹⁶⁰ − 2³²:

#include <stdint.h>

struct xorwow_state {
uint32_t a, b, c, d;
uint32_t counter;
};

/* The state array must be initialized to not be all zero in the first four
words */
uint32_t xorwow(struct xorwow_state *state)
{
/* Algorithm "xorwow" from p. 5 of Marsaglia, "Xorshift RNGs" */
uint32_t t = state->d;

uint32_t const s = state->a;


state->d = state->c;
state->c = state->b;
state->b = s;

t ^= t >> 2;
t ^= t << 1;
t ^= s ^ (s << 4);
state->a = t;

state->counter += 362437;
return t + state->counter;
}

This performs well, but fails a few tests in BigCrush.[6] This generator is the default in
Nvidia's CUDA18 toolkit.[7]

34.2.2 xorshift*

A xorshift* generator takes a xorshift generator and applies an invertible multiplication
(modulo the word size) to its output as a non-linear transformation, as suggested by
Marsaglia.[1] The following 64-bit generator with 64 bits of state has a maximal period of
2⁶⁴ − 1[8] and fails only the MatrixRank test of BigCrush:

14 https://en.wikipedia.org/wiki/TestU01
15 https://en.wikipedia.org/wiki/Mersenne_Twister
16 https://en.wikipedia.org/wiki/Well_Equidistributed_Long-period_Linear
17 https://en.wikipedia.org/wiki/Weyl%27s_equidistribution_theorem
18 https://en.wikipedia.org/wiki/CUDA


#include <stdint.h>

struct xorshift64s_state {
uint64_t a;
};

uint64_t xorshift64s(struct xorshift64s_state *state)
{
    uint64_t x = state->a;  /* The state must be seeded with a nonzero value. */
    x ^= x >> 12; // a
    x ^= x << 25; // b
    x ^= x >> 27; // c
    state->a = x;
    return x * UINT64_C(0x2545F4914F6CDD1D);
}

A similar generator is suggested in Numerical Recipes[9] as RanQ1, but it also fails the
BirthdaySpacings test.
If the generator is modified to return only the high 32 bits, then it passes BigCrush with
zero failures.[10]:7 In fact, a reduced version with only 40 bits of internal state passes the
suite, suggesting a large safety margin.[10]:19
Vigna[8] suggests the following xorshift1024* generator with 1024 bits of state and a
maximal period of 2¹⁰²⁴ − 1; however, it does not always pass BigCrush,[5] so xoshiro256**
is a much better option.

#include <stdint.h>

/* The state must be seeded so that there is at least one non-zero element in
array */
struct xorshift1024s_state {
uint64_t array[16];
int index;
};

uint64_t xorshift1024s(struct xorshift1024s_state *state)


{
int index = state->index;
uint64_t const s = state->array[index++];
uint64_t t = state->array[index &= 15];
t ^= t << 31; // a
t ^= t >> 11; // b
t ^= s ^ (s >> 30); // c
state->array[index] = t;
state->index = index;
return t * (uint64_t)1181783497276652981;
}

As happens with all xorshift* generators, both generators emit a sequence of 64-bit
values that is equidistributed in the maximum possible dimension (except that they will
never output zero for 16 calls, i.e. 128 bytes, in a row).[8]

34.2.3 xorshift+

Rather than using multiplication, it is possible to use addition as a faster non-linear trans-
formation. The idea was first proposed by Saito and Matsumoto (also responsible for the


Mersenne Twister19 ) in the XSadd generator, which adds two consecutive outputs of an
underlying xorshift generator based on 32-bit shifts.[11]
XSadd, however, has some weakness in the low-order bits of its output; it fails several
BigCrush tests when the output words are bit-reversed. To correct this problem,
Vigna[12] introduced the xorshift+ family, based on 64-bit shifts: the following
xorshift128+ generator uses 128 bits of state and has a maximal period of 2¹²⁸ − 1. It
passes BigCrush, but not when reversed.[5]

#include <stdint.h>

struct xorshift128p_state {
uint64_t a, b;
};

/* The state must be seeded so that it is not all zero */


uint64_t xorshift128p(struct xorshift128p_state *state)
{
uint64_t t = state->a;
uint64_t const s = state->b;
state->a = s;
t ^= t << 23; // a
t ^= t >> 17; // b
t ^= s ^ (s >> 26); // c
state->b = t;
return t + s;
}

This generator is one of the fastest generators passing BigCrush.[4] One disadvantage
of adding consecutive outputs is that, while the underlying xorshift128 generator
is 2-dimensionally equidistributed, the associated xorshift128+ generator is only
1-dimensionally equidistributed.[12]
Xorshift+ generators, even as large as xorshift1024+, exhibit some detectable linearity in
the low-order bits of their output.[5]

34.3 xoshiro and xoroshiro

xoshiro and xoroshiro are other variations of the shift-register generators, using rotations in
addition to shifts. According to Vigna, they are faster and produce better quality output
than xorshift.[13][14]
This class of generator has variants for 32-bit and 64-bit integer and floating point output;
for floating point, one takes the upper 53 bits (for binary6420 ) or the upper 23 bits (for
binary3221 ), since the upper bits are of better quality than the lower bits in the floating
point generators. The algorithms also include a jump function, which sets the state forward
by some number of steps – usually a power of two – allowing many threads of execution22
to start at distinct initial states.
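The 53-bit convention can be made concrete with a small helper (ours, not part of the reference implementations): a 64-bit output is mapped to a double in [0, 1) by discarding the low 11 bits and scaling by 2^−53.

```c
#include <stdint.h>

/* Map a 64-bit generator output to a double in [0, 1).
   The top 53 bits become the significand; 0x1.0p-53 is 2^-53. */
double to_double(uint64_t x)
{
    return (x >> 11) * 0x1.0p-53;
}
```

Because only the upper bits are used, the weaker low-order bits of generators such as xoshiro256+ never reach the floating point result.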

19 https://en.wikipedia.org/wiki/Mersenne_Twister
20 https://en.wikipedia.org/wiki/Double-precision_floating-point_format
21 https://en.wikipedia.org/wiki/Single-precision_floating-point_format
22 https://en.wikipedia.org/wiki/Thread_(computing)


34.3.1 xoshiro256**

xoshiro256** is the family's general-purpose random 64-bit number generator.

/* Adapted from the code included on Sebastiano Vigna's website */

#include <stdint.h>

uint64_t rol64(uint64_t x, int k)
{
return (x << k) | (x >> (64 - k));
}

struct xoshiro256ss_state {
uint64_t s[4];
};

uint64_t xoshiro256ss(struct xoshiro256ss_state *state)
{
uint64_t *s = state->s;
uint64_t const result = rol64(s[1] * 5, 7) * 9;
uint64_t const t = s[1] << 17;

s[2] ^= s[0];
s[3] ^= s[1];
s[1] ^= s[2];
s[0] ^= s[3];

s[2] ^= t;
s[3] = rol64(s[3], 45);

return result;
}

34.3.2 xoshiro256+

xoshiro256+ is approximately 15% faster than xoshiro256**, but the lowest three bits have
low linear complexity; therefore, it should be used only for floating point results by
extracting the upper 53 bits.

#include <stdint.h>

uint64_t rol64(uint64_t x, int k)
{
return (x << k) | (x >> (64 - k));
}

struct xoshiro256p_state {
uint64_t s[4];
};

uint64_t xoshiro256p(struct xoshiro256p_state *state)
{
uint64_t *s = state->s;
uint64_t const result = s[0] + s[3];
uint64_t const t = s[1] << 17;

s[2] ^= s[0];
s[3] ^= s[1];
s[1] ^= s[2];
s[0] ^= s[3];

s[2] ^= t;
s[3] = rol64(s[3], 45);

return result;
}

34.3.3 Other variants

If space is at a premium, xoroshiro128** is the equivalent of xoshiro256**, and
xoroshiro128+ is the equivalent of xoshiro256+. These have smaller state spaces, and
thus are less useful for massively parallel programs. xoroshiro128+ also exhibits a mild
dependency in Hamming weights, generating a failure after 5 TB of output. The authors do
not believe that this can be detected in real world programs.
For 32-bit output, xoshiro128** and xoshiro128+ are exactly equivalent to xoshiro256**
and xoshiro256+, with uint32_t in place of uint64_t, and with different shift/rotate
constants; similarly, xoroshiro64** and xoroshiro64* are the equivalent of xoroshiro128**
and xoroshiro128+ respectively. Unlike the functions with larger state, xoroshiro64** and
xoroshiro64* are not straightforward ports of their higher-precision counterparts.

34.4 Initialization

The authors of the xoshiro paper recommend initializing the state of the
generators using a generator which is radically different from the initialized generators, as
well as one which will never give the "all-zero" state; for shift-register generators, this state
is impossible to escape from.[14][15] The authors specifically recommend using the SplitMix64
generator, from a 64-bit seed, as follows:

#include <stdint.h>

struct splitmix64_state {
uint64_t s;
};

uint64_t splitmix64(struct splitmix64_state *state) {
uint64_t result = state->s;

state->s = result + 0x9E3779B97f4A7C15;
result = (result ^ (result >> 30)) * 0xBF58476D1CE4E5B9;
result = (result ^ (result >> 27)) * 0x94D049BB133111EB;
return result ^ (result >> 31);
}

// as an example; one could do this same thing for any of the other generators
struct xorshift128_state xorshift128_init(uint64_t seed) {
struct splitmix64_state smstate = {seed};
struct xorshift128_state result = {0};

uint64_t tmp = splitmix64(&smstate);
result.a = (uint32_t)tmp;
result.b = (uint32_t)(tmp >> 32);

tmp = splitmix64(&smstate);
result.c = (uint32_t)tmp;
result.d = (uint32_t)(tmp >> 32);

return result;
}
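The same pattern carries over to the 64-bit generators. As a sketch (the helper name xoshiro256ss_init is ours, and SplitMix64 is repeated from the listing above so the block is self-contained), the four words of a xoshiro256** state can be filled from successive SplitMix64 outputs:

```c
#include <stdint.h>

struct splitmix64_state {
    uint64_t s;
};

/* SplitMix64 step, as in the listing above. */
uint64_t splitmix64(struct splitmix64_state *state) {
    uint64_t result = state->s;

    state->s = result + 0x9E3779B97f4A7C15;
    result = (result ^ (result >> 30)) * 0xBF58476D1CE4E5B9;
    result = (result ^ (result >> 27)) * 0x94D049BB133111EB;
    return result ^ (result >> 31);
}

struct xoshiro256ss_state {
    uint64_t s[4];
};

/* Hypothetical helper: fill the 256-bit state from a 64-bit seed.
   Each word is one SplitMix64 output, so the four words are never
   all zero simultaneously, for any seed. */
struct xoshiro256ss_state xoshiro256ss_init(uint64_t seed) {
    struct splitmix64_state smstate = {seed};
    struct xoshiro256ss_state result;

    for (int i = 0; i < 4; i++)
        result.s[i] = splitmix64(&smstate);
    return result;
}
```

The same seed always reproduces the same state, which is what makes seeded runs repeatable.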

34.5 Notes
1. In C and most other C-based languages, the caret23 (^) represents the bitwise XOR24 ,
and the "<<" and ">>" operators represent left and right bitwise shifts25 ,
respectively.

34.6 References
1. M, G26 (J 2003). ”X RNG”. Journal of Statistical
Software27 . 8 (14). doi28 :10.18637/jss.v008.i1429 .
2. B, R P.30 (A 2004). ”N  M' X
R N G”. Journal of Statistical Software31 . 11 (5).
doi32 :10.18637/jss.v011.i0533 .
3. P, F; L'E, P (O 2005). ”O  
  ”34 (PDF). ACM Transactions on Modeling and Com-
puter Simulation. 15 (4): 346–361. doi35 :10.1145/1113316.111331936 .
4. V, S. ”*/+    PRNG
”37 . R 2014-10-25.
5. L, D; O’N, M E. (A 2019). ”X1024*,
X1024+, X128+  X128+ F S T
 L”. Computational and Applied Mathematics. 350: 139–142.
38 39 40 41
arXiv :1810.05313 . doi :10.1016/j.cam.2018.10.019 . We report that these scram-
bled generators systematically fail Big Crush—specifically the linear-complexity and
matrix-rank tests that detect linearity—when taking the 32 lowest-order bits in reverse
order from each 64-bit word.

23 https://en.wikipedia.org/wiki/Caret
24 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
25 https://en.wikipedia.org/wiki/Logical_shift
26 https://en.wikipedia.org/wiki/George_Marsaglia
27 https://en.wikipedia.org/wiki/Journal_of_Statistical_Software
28 https://en.wikipedia.org/wiki/Doi_(identifier)
29 https://doi.org/10.18637%2Fjss.v008.i14
30 https://en.wikipedia.org/wiki/Richard_P._Brent
31 https://en.wikipedia.org/wiki/Journal_of_Statistical_Software
32 https://en.wikipedia.org/wiki/Doi_(identifier)
33 https://doi.org/10.18637%2Fjss.v011.i05
34 https://www.iro.umontreal.ca/~lecuyer/myftp/papers/xorshift.pdf
35 https://en.wikipedia.org/wiki/Doi_(identifier)
36 https://doi.org/10.1145%2F1113316.1113319
37 http://prng.di.unimi.it
38 https://en.wikipedia.org/wiki/ArXiv_(identifier)
39 http://arxiv.org/abs/1810.05313
40 https://en.wikipedia.org/wiki/Doi_(identifier)
41 https://doi.org/10.1016%2Fj.cam.2018.10.019


6. L F', F (12 J 2011). ”XORWOW L' TU01 R-


”42 . Chase The Devil (blog). Retrieved 2017-11-02.
7. ”RAND T”43 . N44 . R 2017-11-02.
8. V, S (J 2016). ”A   
M'  , ”45 (PDF). ACM Transactions on
Mathematical Software. 42 (4): 30. arXiv46 :1402.624647 . doi48 :10.1145/284507749 .
Proposes xorshift* generators, adding a final multiplication by a constant.
9. P, WH50 ; T, SA51 ; V, WT; F, BP (2007). ”S-
 7.1.2.A. 64- X M”52 . Numerical Recipes: The Art of Scientific
Computing (3rd ed.). New York: Cambridge University Press. ISBN53 978-0-521-
88068-854 .
10. O'N, M E. (5 S 2014). PCG: A Family of Simple Fast Space-
Efficient Statistically Good Algorithms for Random Number Generation55 (PDF)
(T ). H M C56 . . 6–8. HMC-CS-2014-0905.
11. S, M; M, M (2014). ”XORSHIFT-ADD (XS): A
  XORSHIFT”57 . R 2014-10-25.
12. V, S (M 2017). ”F   M' -
 ”58 (PDF). Journal of Computational and Applied Mathematics.
315 (C): 175–181. arXiv59 :1404.039060 . doi61 :10.1016/j.cam.2016.11.00662 . Describes
xorshift+ generators, a generalization of XSadd.
13. V, S. ”/    PRNG
63
” . R 2019-07-07.
14. B, D; V, S (2018). ”S L P-
 N G”. X64 :1805.0140765 . Cite journal requires
|journal= (help66 )

42 https://chasethedevil.github.io/post/xorwow-lecuyer-testu01-results/
43 https://docs.nvidia.com/cuda/curand/testing.html
44 https://en.wikipedia.org/wiki/Nvidia
45 http://vigna.di.unimi.it/ftp/papers/xorshift.pdf
46 https://en.wikipedia.org/wiki/ArXiv_(identifier)
47 http://arxiv.org/abs/1402.6246
48 https://en.wikipedia.org/wiki/Doi_(identifier)
49 https://doi.org/10.1145%2F2845077
50 https://en.wikipedia.org/wiki/William_H._Press
51 https://en.wikipedia.org/wiki/Saul_Teukolsky
52 http://apps.nrbook.com/empanel/index.html#pg=345
53 https://en.wikipedia.org/wiki/ISBN_(identifier)
54 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-88068-8
55 http://www.pcg-random.org/pdf/hmc-cs-2014-0905.pdf
56 https://en.wikipedia.org/wiki/Harvey_Mudd_College
57 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/XSADD/
58 http://vigna.di.unimi.it/ftp/papers/xorshiftplus.pdf
59 https://en.wikipedia.org/wiki/ArXiv_(identifier)
60 http://arxiv.org/abs/1404.0390
61 https://en.wikipedia.org/wiki/Doi_(identifier)
62 https://doi.org/10.1016%2Fj.cam.2016.11.006
63 http://xoroshiro.di.unimi.it
64 https://en.wikipedia.org/wiki/ArXiv_(identifier)
65 http://arxiv.org/abs/1805.01407
66 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical


15. M, M; W, I; K, A; A, H


(S 2007). ”C     
 ”. ACM Transactions on Modeling and Computer Simulation.
17 (4): 15–es. doi67 :10.1145/1276927.127692868 .

34.7 Further reading


• B, R P.69 (J 2006). ”S -   -
    ”70 . ANZIAM Journal. 48: C188–C202. Lists generators
of various sizes with four shifts (two per feedback word).

67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1145%2F1276927.1276928
69 https://en.wikipedia.org/wiki/Richard_P._Brent
70 https://maths-people.anu.edu.au/~brent/pub/pub224.html

35 Mersenne Twister

The Mersenne Twister is a pseudorandom number generator1 (PRNG). It is by far the
most widely used general-purpose PRNG.[1] Its name derives from the fact that its period
length is chosen to be a Mersenne prime2 .
The Mersenne Twister was developed in 1997 by Makoto Matsumoto3 (松本眞) and Takuji
Nishimura4 (西村拓士).[2] It was designed specifically to rectify most of the flaws found in
older PRNGs.
The most commonly used version of the Mersenne Twister algorithm is based on the
Mersenne prime 2^19937 − 1. The standard implementation of that, MT19937, uses a
32-bit5 word length. There is another implementation (with five variants[3] ) that uses a
64-bit word length, MT19937-64; it generates a different sequence.

35.1 Adoption in software systems

The Mersenne Twister is the default PRNG for the following software systems:
Microsoft Excel6 ,[4] GAUSS7 ,[5] GLib8 ,[6] GNU Multiple Precision Arithmetic Library9 ,[7] GNU
Octave10 ,[8] GNU Scientific Library11 ,[9] gretl12 ,[10] IDL13 ,[11] Julia14 ,[12] CMU Common
Lisp15 ,[13] Embeddable Common Lisp16 ,[14] Steel Bank Common Lisp17 ,[15] Maple18 ,[16] MAT-

1 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
2 https://en.wikipedia.org/wiki/Mersenne_prime
3 https://en.wikipedia.org/w/index.php?title=Makoto_Matsumoto&action=edit&redlink=1
4 https://en.wikipedia.org/w/index.php?title=Takuji_Nishimura&action=edit&redlink=1
5 https://en.wikipedia.org/wiki/32-bit
6 https://en.wikipedia.org/wiki/Microsoft_Excel
7 https://en.wikipedia.org/wiki/GAUSS_(software)
8 https://en.wikipedia.org/wiki/GLib
9 https://en.wikipedia.org/wiki/GNU_Multiple_Precision_Arithmetic_Library
10 https://en.wikipedia.org/wiki/GNU_Octave
11 https://en.wikipedia.org/wiki/GNU_Scientific_Library
12 https://en.wikipedia.org/wiki/Gretl
13 https://en.wikipedia.org/wiki/IDL_(programming_language)
14 https://en.wikipedia.org/wiki/Julia_(programming_language)
15 https://en.wikipedia.org/wiki/CMU_Common_Lisp
16 https://en.wikipedia.org/wiki/Embeddable_Common_Lisp
17 https://en.wikipedia.org/wiki/Steel_Bank_Common_Lisp
18 https://en.wikipedia.org/wiki/Maple_(software)


LAB19 ,[17] Free Pascal20 ,[18] PHP21 ,[19] Python22 ,[20][21] R23 ,[22] Ruby24 ,[23] SageMath25 ,[24]
Scilab26 ,[25] Stata27 .[26]
It is also available in Apache Commons28 ,[27] in standard C++29 (since C++1130 ),[28][29]
and in Mathematica31 .[30] Add-on implementations are provided in many program libraries,
including the Boost C++ Libraries32 ,[31] the CUDA Library33 ,[32] and the NAG Numerical
Library34 .[33]
The Mersenne Twister is one of two PRNGs in SPSS35 : the other generator is kept only
for compatibility with older programs, and the Mersenne Twister is stated to be "more
reliable".[34] The Mersenne Twister is similarly one of the PRNGs in SAS36 : the other
generators are older and deprecated.[35]

35.2 Advantages
• Permissively-licensed37 and patent-free for all variants except CryptMT.
• Passes numerous tests for statistical randomness, including the Diehard tests38 and
most, but not all, of the TestU0139 tests.[36]
• A very long period of 2^19937 − 1. Note that while a long period is not a guarantee of
quality in a random number generator, short periods, such as the 2^32 common in many
older software packages, can be problematic.[37]
• k-distributed40 to 32-bit accuracy for every 1 ≤ k ≤ 623 (for a definition of k-distributed,
see below41 ).
• Implementations generally create random numbers faster than other methods. A study
found that the Mersenne Twister creates 64-bit floating point random numbers
approximately twenty times faster than the hardware-implemented, processor-based
RDRAND42 instruction set.[38]

19 https://en.wikipedia.org/wiki/MATLAB
20 https://en.wikipedia.org/wiki/Free_Pascal
21 https://en.wikipedia.org/wiki/PHP_(programming_language)
22 https://en.wikipedia.org/wiki/Python_(programming_language)
23 https://en.wikipedia.org/wiki/R_(programming_language)
24 https://en.wikipedia.org/wiki/Ruby_(programming_language)
25 https://en.wikipedia.org/wiki/SageMath
26 https://en.wikipedia.org/wiki/Scilab
27 https://en.wikipedia.org/wiki/Stata
28 https://en.wikipedia.org/wiki/Apache_Commons
29 https://en.wikipedia.org/wiki/C%2B%2B
30 https://en.wikipedia.org/wiki/C%2B%2B11
31 https://en.wikipedia.org/wiki/Mathematica
32 https://en.wikipedia.org/wiki/Boost_(C%2B%2B_libraries)
33 https://en.wikipedia.org/wiki/CUDA
34 https://en.wikipedia.org/wiki/NAG_Numerical_Library
35 https://en.wikipedia.org/wiki/SPSS
36 https://en.wikipedia.org/wiki/SAS_(software)
37 https://en.wikipedia.org/wiki/Permissive_software_licence
38 https://en.wikipedia.org/wiki/Diehard_tests
39 https://en.wikipedia.org/wiki/TestU01
40 #k-distribution
41 #k-distribution
42 https://en.wikipedia.org/wiki/RDRAND


35.3 Disadvantages
• Relatively large state buffer, of 2.5 KiB43 , unless the TinyMT variant (discussed below)
is used.
• Mediocre throughput by modern standards,[39] unless the SFMT variant (discussed below)
is used.
• Exhibits two clear failures (linear complexity) in both Crush and BigCrush in the TestU01
suite.[36] There are a number of other generators that do pass all the tests (and numerous
generators that fail badly).
• Multiple instances that differ only in seed value (but not other parameters) are not
generally appropriate for Monte-Carlo simulations44 that require independent random
number generators, though there exists a method for choosing multiple sets of parameter
values.[40][41]
• Can take a long time to start generating output that passes randomness tests45 , if the
initial state is highly non-random—particularly if the initial state has many zeros. A
consequence of this is that two instances of the generator, started with initial states that are
almost the same, will usually output nearly the same sequence for many iterations, before
eventually diverging. The 2002 update to the MT algorithm has improved initialization,
so that beginning with such a state is very unlikely.[42]
• Is not cryptographically secure46 , unless the CryptMT variant (discussed below) is used.
The reason is that observing a sufficient number of iterations (624 in the case of MT19937,
since this is the size of the state vector from which future iterations are produced) allows
one to predict all future iterations.
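The predictability claim can be made concrete: the tempering transform applied to each state word is a bijection, so every output reveals one raw state word, and 624 consecutive outputs reveal the whole state. A sketch of the inversion for the 32-bit MT19937 tempering (the helper names temper and untemper are ours; the shift amounts and masks are the u, s, b, t, c, l constants given later in the algorithm description):

```c
#include <stdint.h>

/* MT19937 tempering: the final step of output generation. */
uint32_t temper(uint32_t y)
{
    y ^= y >> 11;                 /* u = 11, d = 0xFFFFFFFF (no-op AND) */
    y ^= (y << 7) & 0x9D2C5680;   /* (s, b) */
    y ^= (y << 15) & 0xEFC60000;  /* (t, c) */
    y ^= y >> 18;                 /* l = 18 */
    return y;
}

/* Invert the tempering, undoing each step in reverse order. The
   15- and 18-bit steps invert in a single application; the 7- and
   11-bit steps are undone by iterating until all 32 bits agree. */
uint32_t untemper(uint32_t y)
{
    uint32_t x;

    y ^= y >> 18;
    y ^= (y << 15) & 0xEFC60000;
    x = y;
    for (int i = 0; i < 5; i++)   /* ceil(32/7) rounds recover all bits */
        x = y ^ ((x << 7) & 0x9D2C5680);
    y = x;
    x = y;
    for (int i = 0; i < 3; i++)   /* ceil(32/11) rounds */
        x = y ^ (x >> 11);
    return x;
}
```

Once all 624 words are recovered this way, running the recurrence forward reproduces every future output.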

35.4 Alternatives

An alternative generator, WELL47 ("Well Equidistributed Long-period Linear"), offers
quicker recovery, equal randomness, and nearly equal speed.[43]
Marsaglia's xorshift48 generators and variants are the fastest in this class.[44]
64-bit MELGs (”64-bit Maximally Equidistributed F2 -Linear Generators with Mersenne
Prime Period”) are completely optimized in terms of the k-distribution properties.[45]
The ACORN family49 (published 1989) is another k-distributed PRNG, which shows similar
computational speed to MT, and better statistical properties as it satisfies all the current
(2019) TestU01 criteria; when used with appropriate choices of parameters, ACORN can
have arbitrarily long period and precision.

43 https://en.wikipedia.org/wiki/KiB
44 https://en.wikipedia.org/wiki/Monte-Carlo_simulation
45 https://en.wikipedia.org/wiki/Randomness_tests
46 https://en.wikipedia.org/wiki/CSPRNG
47 https://en.wikipedia.org/wiki/Well_Equidistributed_Long-period_Linear
48 https://en.wikipedia.org/wiki/Xorshift
49 https://en.wikipedia.org/wiki/ACORN_(PRNG)


35.5 k-distribution

A pseudorandom sequence x_i of w-bit integers of period P is said to be k-distributed to v-bit
accuracy if the following holds.
Let trunc_v (x) denote the number formed by the leading v bits of x, and consider P of the
kv-bit vectors

(trunc_v (x_i ), trunc_v (x_{i+1} ), ..., trunc_v (x_{i+k−1} ))   (0 ≤ i < P ).

Then each of the 2^kv possible combinations of bits occurs the same number of times in a
period, except for the all-zero combination that occurs once less often.

35.6 Algorithmic detail

Figure 82 Visualisation of generation of pseudo-random 32-bit integers using a
Mersenne Twister. The 'Extract number' section shows an example where integer 0 has
already been output and the index is at integer 1. 'Generate numbers' is run when all
integers have been output.

For a w-bit word length, the Mersenne Twister generates integers in the range [0, 2^w − 1].
The Mersenne Twister algorithm is based on a matrix linear recurrence50 over a finite
binary51 field52 F2 . The algorithm is a twisted generalised feedback shift register53[46] (twisted
GFSR, or TGFSR) of rational normal form54 (TGFSR(R)), with state bit reflection and
tempering. The basic idea is to define a series x_i through a simple recurrence relation, and

50 https://en.wikipedia.org/wiki/Recurrence_relation
51 https://en.wikipedia.org/wiki/Binary_numeral_system
52 https://en.wikipedia.org/wiki/Field_(mathematics)
53 https://en.wikipedia.org/wiki/Generalised_feedback_shift_register
54 https://en.wikipedia.org/wiki/Rational_normal_form


then output numbers of the form x_i T , where T is an invertible F2 matrix called a tempering
matrix55 .
The general algorithm is characterized by the following quantities (some of these
explanations make sense only after reading the rest of the algorithm):
• w: word size (in number of bits)
• n: degree of recurrence
• m: middle word, an offset used in the recurrence relation defining the series x, 1 ≤ m < n
• r: separation point of one word, or the number of bits of the lower bitmask, 0 ≤ r ≤ w − 1
• a: coefficients of the rational normal form twist matrix
• b, c: TGFSR(R) tempering bitmasks
• s, t: TGFSR(R) tempering bit shifts
• u, d, l: additional Mersenne Twister tempering bit shifts/masks
with the restriction that 2^(nw−r) − 1 is a Mersenne prime. This choice simplifies the
primitivity test and k-distribution56 test that are needed in the parameter search.
The series x is defined as a series of w-bit quantities with the recurrence relation:

x_{k+n} := x_{k+m} ⊕ ((x_k^u || x_{k+1}^l ) A)   for k = 0, 1, 2, ...

where || denotes concatenation57 of bit vectors (with upper bits on the left), ⊕ the bitwise
exclusive or58 (XOR), x_k^u means the upper w − r bits of x_k , and x_{k+1}^l means the
lower r bits of x_{k+1} . The twist transformation A is defined in rational normal form as:

    A = (     0            I_{w−1}         )
        ( a_{w−1}   (a_{w−2} , ..., a_0 )  )

with I_{w−1} as the (w − 1) × (w − 1) identity matrix. The rational normal form has the
benefit that multiplication by A can be efficiently expressed as (remember that here matrix
multiplication is being done in F2 , and therefore bitwise XOR takes the place of addition):

    xA = { x >> 1           if x_0 = 0
         { (x >> 1) ⊕ a     if x_0 = 1

where x_0 is the lowest order bit of x.
As with TGFSR(R), the Mersenne Twister is cascaded with a tempering transform59 to
compensate for the reduced dimensionality of equidistribution (because of the choice of
A being in the rational normal form). Note that this is equivalent to using the matrix A′
where A′ = T^{−1} A T for T an invertible matrix, and therefore the analysis of the
characteristic polynomial mentioned below still holds.
As with A, we choose a tempering transform to be easily computable, and so do not actually
construct T itself. The tempering is defined in the case of the Mersenne Twister as

y := x ⊕ ((x >> u) & d)

55 https://en.wikipedia.org/wiki/Tempered_representation
56 https://en.wikipedia.org/wiki/K-distribution
57 https://en.wikipedia.org/wiki/Concatenation
58 https://en.wikipedia.org/wiki/Exclusive_or
59 https://en.wikipedia.org/wiki/Tempered_representation


y := y ⊕ ((y << s) & b)
y := y ⊕ ((y << t) & c)
z := y ⊕ (y >> l)

where x is the next value from the series, y a temporary intermediate value, z the value
returned from the algorithm, with << and >> as the bitwise left and right shifts, and & as
the bitwise and60 . The first and last transforms are added in order to improve lower-bit
equidistribution. From the property of TGFSR, s + t ≥ ⌊w/2⌋ − 1 is required to reach the
upper bound of equidistribution for the upper bits.
The coefficients for MT19937 are:
• (w, n, m, r) = (32, 624, 397, 31)
• a = 0x9908B0DF
• (u, d) = (11, 0xFFFFFFFF)
• (s, b) = (7, 0x9D2C5680)
• (t, c) = (15, 0xEFC60000)
• l = 18
Note that 32-bit implementations of the Mersenne Twister generally have
d = 0xFFFFFFFF. As a result, d is occasionally omitted from the algorithm
description, since the bitwise and61 with d in that case has no effect.
The coefficients for MT19937-64 are:[47]
• (w, n, m, r) = (64, 312, 156, 31)
• a = 0xB5026F5AA96619E9
• (u, d) = (29, 0x5555555555555555)
• (s, b) = (17, 0x71D67FFFEDA60000)
• (t, c) = (37, 0xFFF7EEE000000000)
• l = 43

35.6.1 Initialization

The state needed for a Mersenne Twister implementation is an array of n values of w bits
each. To initialize the array, a w-bit seed value is used to supply x_0 through x_{n−1} by
setting x_0 to the seed value and thereafter setting

x_i = f × (x_{i−1} ⊕ (x_{i−1} >> (w−2))) + i

for i from 1 to n−1. The first value the algorithm then generates is based on x_n , not
on x_0 . The constant f forms another parameter to the generator, though not part of the
algorithm proper. The value of f for MT19937 is 1812433253 and for MT19937-64 is
6364136223846793005.[47]

60 https://en.wikipedia.org/wiki/Logical_conjunction
61 https://en.wikipedia.org/wiki/Logical_conjunction


35.6.2 Comparison with classical GFSR

In order to achieve the 2^(nw−r) − 1 theoretical upper limit of the period in a TGFSR62 ,
φ_B (t) must be a primitive polynomial63 , φ_B (t) being the characteristic polynomial64 of

        (  0    I_w   · · ·   0      0      )
        (  ·                                )
    B = ( I_w   · · ·                       )   ← m-th row
        (  ·                                )
        (  0     0    · · ·  I_w     0      )
        (  0     0    · · ·   0   I_{w−r}   )
        (  S     0    · · ·   0      0      )

            (    0       I_r )
        S = (                ) A
            ( I_{w−r}     0  )
The twist transformation improves the classical GFSR65 with the following key properties:
• The period reaches the theoretical upper limit 2^(nw−r) − 1 (except if initialized with 0)
• Equidistribution in n dimensions (e.g. linear congruential generators66 can at best manage
reasonable distribution in five dimensions)

35.6.3 Pseudocode

The following pseudocode implements the general Mersenne Twister algorithm. The con-
stants w, n, m, r, a, u, d, s, b, t, c, l, and f are as in the algorithm description above. It
is assumed that int represents a type sufficient to hold values with w bits:
// Create a length n array to store the state of the generator
int[0..n-1] MT
int index := n+1
const int lower_mask = (1 << r) - 1 // That is, the binary number of r 1's
const int upper_mask = lowest w bits of (not67 lower_mask)

// Initialize the generator from a seed
function seed_mt(int seed) {
index := n
MT[0] := seed
for i from 1 to (n - 1) { // loop over each element
MT[i] := lowest w bits of (f * (MT[i-1] xor68 (MT[i-1] >> (w-2))) + i)
}
}

// Extract a tempered value based on MT[index],
// calling twist() every n numbers
function extract_number() {

62 https://en.wikipedia.org/wiki/GFSR
63 https://en.wikipedia.org/wiki/Primitive_polynomial_(field_theory)
64 https://en.wikipedia.org/wiki/Characteristic_polynomial
65 https://en.wikipedia.org/wiki/GFSR
66 https://en.wikipedia.org/wiki/Linear_congruential_generator
67 https://en.wikipedia.org/wiki/Bitwise_operation#NOT
68 https://en.wikipedia.org/wiki/Bitwise_operation#XOR


if index >= n {
if index > n {
error "Generator was never seeded"
// Alternatively, seed with constant value; 5489 is used in reference C code69 [48]
}
twist()
}

int y := MT[index]
y := y xor70 ((y >> u) and71 d)
y := y xor72 ((y << s) and73 b)
y := y xor74 ((y << t) and75 c)
y := y xor76 (y >> l)

index := index + 1
return lowest w bits of (y)
}

// Generate the next n values from the series x_i
function twist() {
for i from 0 to (n-1) {
int x := (MT[i] and77 upper_mask)
+ (MT[(i+1) mod78 n] and79 lower_mask)
int xA := x >> 1
if (x mod80 2) != 0 { // lowest bit of x is 1
xA := xA xor81 a
}
MT[i] := MT[(i + m) mod82 n] xor83 xA
}
index := 0
}
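The pseudocode above translates almost line for line into C. The following sketch (our code, not the reference implementation) hard-codes the MT19937 constants from the text; as a sanity check, a generator seeded with 5489 must produce 4123659995 as its 10,000th output, the value the C++11 standard fixes for a default-constructed std::mt19937:

```c
#include <stdint.h>

#define N 624
#define M 397
#define UPPER_MASK 0x80000000u  /* most significant w-r = 1 bit */
#define LOWER_MASK 0x7FFFFFFFu  /* least significant r = 31 bits */

static uint32_t mt[N];          /* state array */
static int mt_index = N + 1;    /* N+1 means "never seeded" */

void seed_mt(uint32_t seed) {
    mt[0] = seed;
    for (int i = 1; i < N; i++)  /* f = 1812433253 */
        mt[i] = 1812433253u * (mt[i-1] ^ (mt[i-1] >> 30)) + i;
    mt_index = N;
}

static void twist(void) {
    for (int i = 0; i < N; i++) {
        uint32_t x = (mt[i] & UPPER_MASK) | (mt[(i+1) % N] & LOWER_MASK);
        uint32_t xA = x >> 1;
        if (x & 1u)
            xA ^= 0x9908B0DFu;   /* a */
        mt[i] = mt[(i + M) % N] ^ xA;
    }
    mt_index = 0;
}

uint32_t extract_number(void) {
    if (mt_index >= N) {
        if (mt_index > N)
            seed_mt(5489u);      /* default seed of the reference C code */
        twist();
    }
    uint32_t y = mt[mt_index++];
    y ^= y >> 11;                /* d = 0xFFFFFFFF, so the AND is omitted */
    y ^= (y << 7)  & 0x9D2C5680u;
    y ^= (y << 15) & 0xEFC60000u;
    y ^= y >> 18;
    return y;
}
```

The global state mirrors the pseudocode; a production version would carry the array and index in a struct, as the xorshift listings in the previous chapter do.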

35.7 Variants

35.7.1 CryptMT

Main article: CryptMT84
CryptMT85 is a stream cipher86 and cryptographically secure
pseudorandom number generator87 which uses Mersenne Twister internally.[49][50] It was
developed by Matsumoto and Nishimura alongside Mariko Hagita and Mutsuo Saito. It has

70 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
71 https://en.wikipedia.org/wiki/Bitwise_operation#AND
72 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
73 https://en.wikipedia.org/wiki/Bitwise_operation#AND
74 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
75 https://en.wikipedia.org/wiki/Bitwise_operation#AND
76 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
77 https://en.wikipedia.org/wiki/Bitwise_operation#AND
78 https://en.wikipedia.org/wiki/Modulo_operation
79 https://en.wikipedia.org/wiki/Bitwise_operation#AND
80 https://en.wikipedia.org/wiki/Modulo_operation
81 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
82 https://en.wikipedia.org/wiki/Modulo_operation
83 https://en.wikipedia.org/wiki/Bitwise_operation#XOR
84 https://en.wikipedia.org/wiki/CryptMT
85 https://en.wikipedia.org/wiki/CryptMT
86 https://en.wikipedia.org/wiki/Stream_cipher
87 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator


been submitted to the eSTREAM88 project of the eCRYPT89 network.[49] Unlike Mersenne
Twister or its other derivatives, CryptMT is patented90 .

35.7.2 MTGP

MTGP is a variant of Mersenne Twister optimised for graphics processing units91
published by Mutsuo Saito and Makoto Matsumoto.[51] The basic linear recurrence operations
are extended from MT and parameters are chosen to allow many threads to compute the
recursion in parallel, while sharing their state space to reduce memory load. The paper
claims improved equidistribution92 over MT and performance on a very old GPU (Nvidia93
GTX260 with 192 cores) of 4.7 ms for 5×10^7 random 32-bit integers.

35.7.3 SFMT

The SFMT (SIMD95 -oriented Fast Mersenne Twister) is a variant of Mersenne Twister,
introduced in 2006,[52] designed to be fast when it runs on 128-bit SIMD.
• It is roughly twice as fast as Mersenne Twister.[53]
• It has a better equidistribution96 property of v-bit accuracy than MT but worse than
WELL (”Well Equidistributed Long-period Linear”)97 .
• It has quicker recovery from zero-excess initial state than MT, but slower than WELL.
• It supports various periods from 2^607 − 1 to 2^216091 − 1.
Intel SSE298 and PowerPC99 AltiVec are supported by SFMT. It is also used for games with
the Cell BE100 in the PlayStation 3101 .[54]

35.7.4 TinyMT

TinyMT is a variant of Mersenne Twister, proposed by Saito and Matsumoto in 2011.[55]
TinyMT uses just 127 bits of state space, a significant decrease compared to the original's
88 https://en.wikipedia.org/wiki/ESTREAM
89 https://en.wikipedia.org/wiki/ECRYPT
90 https://en.wikipedia.org/wiki/Software_patent
91 https://en.wikipedia.org/wiki/Graphics_processing_unit
92 https://en.wikipedia.org/wiki/Equidistribution
93 https://en.wikipedia.org/wiki/Nvidia
94 https://en.wikipedia.org/w/index.php?title=Mersenne_Twister&action=edit&section=
95 https://en.wikipedia.org/wiki/Single_instruction,_multiple_data
96 https://en.wikipedia.org/wiki/Equidistribution
97 https://en.wikipedia.org/wiki/Well_Equidistributed_Long-period_Linear
98 https://en.wikipedia.org/wiki/SSE2
99 https://en.wikipedia.org/wiki/PowerPC
100 https://en.wikipedia.org/wiki/Cell_(microprocessor)
101 https://en.wikipedia.org/wiki/PlayStation_3


2.5 KiB of state. However, it has a period of 2^127 − 1, far shorter than the original, so it is
only recommended by the authors in cases where memory is at a premium.

35.8 References
1. E.g. Marsland S. (2011) Machine Learning (CRC Press102 ), §4.1.1. Also see the
section "Adoption in software systems".
2. M, M.; N, T. (1998). ”M :  623-
   -  -
”103 (PDF). ACM Transactions on Modeling and Computer Simulation. 8 (1):
3–30. CiteSeerX104 10.1.1.215.1141105 . doi106 :10.1145/272991.272995107 .
3. J S. ”T M T”108 . A subsequent paper, published in the
year 2000, gave five additional forms of the Mersenne Twister with period 2^19937-
1. All five were designed to be implemented with 64-bit arithmetic instead of 32-bit
arithmetic.
4. M, G. (2014), ”O      
M E 2010”, Computational Statistics109 , 29 (5): 1095–1128, Cite-
SeerX110 10.1.1.455.5508111 , doi112 :10.1007/s00180-014-0482-5113 .
5. GAUSS 14 Language Reference114
6. Random Numbers: GLib Reference Manual115
7. "Random Number Algorithms"116 . GNU MP. Retrieved 2013-11-21.
8. "16.3 Special Utility Matrices"117 . GNU Octave. Built-in Function: rand
9. "Random number environment variables"118 . GNU Scientific Library. Retrieved
2013-11-24.
10. "uniform119 ". Gretl Function Reference.
11. "RANDOMU (IDL Reference)"120 . Exelis VIS Docs Center. Retrieved 2013-08-23.
12. "Random Numbers"121 . Julia Language Documentation—The Standard Library.

102 https://en.wikipedia.org/wiki/CRC_Press
103 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/ARTICLES/mt.pdf
104 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
105 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.215.1141
106 https://en.wikipedia.org/wiki/Doi_(identifier)
107 https://doi.org/10.1145%2F272991.272995
108 http://www.quadibloc.com/crypto/co4814.htm
109 https://en.wikipedia.org/wiki/Computational_Statistics_(journal)
110 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
111 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.455.5508
112 https://en.wikipedia.org/wiki/Doi_(identifier)
113 https://doi.org/10.1007%2Fs00180-014-0482-5
114 http://www.aptech.com/wp-content/uploads/2014/01/GAUSS14_LR.pdf
115 https://developer.gnome.org/glib/stable/glib-Random-Numbers.html
116 http://gmplib.org/manual/Random-Number-Algorithms.html
117 https://www.gnu.org/software/octave/doc/interpreter/Special-Utility-Matrices.html
https://www.gnu.org/software/gsl/manual/html_node/Random-number-environment-
118
variables.html
119 http://gretl.sourceforge.net/gretl-help/funcref.html#uniform
120 http://www.exelisvis.com/docs/RANDOMU.html
121 https://docs.julialang.org/en/v1/stdlib/Random/

470
References

13. ”D   ”122 . CMUCL User's Manual. Retrieved 2014-
02-03.
14. ”R ”123 . The ECL manual. Retrieved 2015-09-20.
15. ”R N G”124 . SBCL User's Manual.
16. ”  ”125 . Maple Online Help. Retrieved 2013-11-21.
17. ”R   ”126 . Documentation Center, Math-
Works127 .
18. ””128 . free pascal documentation. Retrieved 2013-11-28.
19. ”_ — G    ”129 . PHP Manual. Retrieved
2016-03-02.
20. ”9.6  — G - ”130 . Python v2.6.8 docu-
mentation. Retrieved 2012-05-29.
21. ”8.6  — G - ”131 . Python v3.2 documen-
tation. Retrieved 2012-05-29.
22. ”R N G”132 . CRAN Task View: Probability Distributions.
Retrieved 2012-05-29.
23. ” ”R”  ”133 . Ruby 1.9.3 documentation. Retrieved
2012-05-29.
24. Probability Distributions — Sage Reference Manual v7.2: Probablity134
25. ” - R ”135 . Scilab Help.
26. New random-number generator—64-bit Mersenne Twister136
27. ”D G”137 . Apache Commons Math User Guide.
28. ”R N G  C++11”138 (PDF). Standard C++ Foundation.
29. ”::__”139 . Pseudo Random Number Generation.
Retrieved 2012-09-25.
30. [1]140 Mathematica Documentation
31. ”//_.”141 . Boost C++ Libraries. Retrieved
2012-05-29.
32. ”H API O”142 . CUDA Toolkit Documentation. Retrieved 2016-08-02.

122 http://common-lisp.net/project/cmucl/doc/cmu-user/extensions.html
123 https://common-lisp.net/project/ecl/manual/ch12s02.html
124 http://www.sbcl.org/manual/#Random-Number-Generation
125 http://www.maplesoft.com/support/help/Maple/view.aspx?path=rand
126 http://www.mathworks.co.uk/help/matlab/ref/randstream.list.html
127 https://en.wikipedia.org/wiki/MathWorks
128 http://www.freepascal.org/docs-html/rtl/system/random.html
129 http://php.net/manual/en/function.mt-rand.php
130 https://docs.python.org/release/2.6.8/library/random.html
131 https://docs.python.org/release/3.2/library/random.html
132 https://cran.r-project.org/web/views/Distributions.html
133 http://www.ruby-doc.org/core-1.9.3/Random.html
http://doc.sagemath.org/html/en/reference/probability/sage/gsl/probability_
134
distribution.html
135 https://help.scilab.org/docs/5.5.2/en_US/grand.html
136 https://www.stata.com/new-in-stata/random-number-generators/
137 http://commons.apache.org/proper/commons-math/userguide/random.html
138 https://isocpp.org/files/papers/n3551.pdf
139 http://en.cppreference.com/w/cpp/numeric/random/mersenne_twister_engine
140 http://reference.wolfram.com/language/tutorial/RandomNumberGeneration.html#569959585
141 http://www.boost.org/doc/libs/1_49_0/boost/random/mersenne_twister.hpp
142 http://docs.nvidia.com/cuda/curand/host-api-overview.html#generator-types

471
Mersenne Twister

33. ”G05 – R N G”143 . NAG Library Chapter Introduction.


Retrieved 2012-05-29.
34. ”R N G”144 . IBM SPSS Statistics. Retrieved 2013-11-21.
35. ”U R-N F”145 . SAS Language Reference. Retrieved
2013-11-21.
36. P. L'Ecuyer and R. Simard, ”TestU01: ”A C library for empirical testing of random
number generators146 ”, ACM Transactions on Mathematical Software147 , 33, 4, Article
22 (August 2007).
37. Note: 219937 is approximately 4.3 × 106001 ; this is many orders of magnitude larger
than the estimated number of particles in the observable universe148 , which is 1087 .
38. R, M (A 10, 2017). ”R- U
D P S”. The Astrophysical Journal. 845 (1):
66. arXiv149 :1707.02212150 . Bibcode151 :2017ApJ...845...66R152 . doi153 :10.3847/1538-
4357/aa7ede154 .
39. ”SIMD- F M T (SFMT):   
M T”155 . Japan Society for the Promotion of Science. Retrieved
27 March 2017.
40. M M; T N. ”D C  P-
 N G”156 (PDF). R 19 J 2015.
41. H H; M M; T N; F P-
; P L’E. ”E J A  F2-L R
N G”157 (PDF). R 12 N 2015.
42. ”19937: M T   ”158 .
hiroshima-u.ac.jp. Retrieved 4 October 2015.
43. P. L'Ecuyer, ”Uniform Random Number Generators”, International Encyclopedia of
Statistical Science159 , Lovric, Miodrag (Ed.), Springer-Verlag, 2010.
44. ”*/+    PRNG ”160 .
45. H, S.; K, T. (2018). ”I 64- M E-
 F2 -Linear Generators with Mersenne Prime Period”161 . ACM Trans-

143 http://www.nag.co.uk/numeric/fl/nagdoc_fl23/xhtml/G05/g05intro.xml
http://pic.dhe.ibm.com/infocenter/spssstat/v20r0m0/index.jsp?topic=%2Fcom.ibm.spss.
144
statistics.help%2Fidh_seed.htm
http://support.sas.com/documentation/cdl/en/lrdict/64316/HTML/default/viewer.htm#
145
a001281561.htm
146 http://www.iro.umontreal.ca/~lecuyer/myftp/papers/testu01.pdf
147 https://en.wikipedia.org/wiki/ACM_Transactions_on_Mathematical_Software
148 https://en.wikipedia.org/wiki/Observable_universe#Matter_content
149 https://en.wikipedia.org/wiki/ArXiv_(identifier)
150 http://arxiv.org/abs/1707.02212
151 https://en.wikipedia.org/wiki/Bibcode_(identifier)
152 https://ui.adsabs.harvard.edu/abs/2017ApJ...845...66R
153 https://en.wikipedia.org/wiki/Doi_(identifier)
154 https://doi.org/10.3847%2F1538-4357%2Faa7ede
155 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/
156 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/DC/dgene.pdf
157 http://www.iro.umontreal.ca/~lecuyer/myftp/papers/jumpf2.pdf
158 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/emt19937ar.html
159 https://en.wikipedia.org/wiki/International_Encyclopedia_of_Statistical_Science
160 http://prng.di.unimi.it
161 https://github.com/sharase/melg-64

472
Further reading

actions on Mathematical Software. 44 (3): 30:1–30:11. arXiv162 :1505.06582163 .


doi164 :10.1145/3159444165 .
46. M, M.; K, Y. (1992). ”T GFSR ”.
ACM Transactions on Modeling and Computer Simulation. 2 (3): 179–194.
doi166 :10.1145/146382.146383167 .
47. ”::__”168 . Pseudo Random Number Generation.
Retrieved 2015-07-20.
48. T N; M M. ”A C-  MT19937, 
  2002/1/26”169 . R 20 J 2015.
49. ”CM  F”170 . CRYPT171 . R 2017-11-12.
50. M, M; N, T; H, M; S, M
(2005). ”C M T  F S/B C-
”172 (PDF).
51. M S; M M (2010). ”V  M
T S  G P”. X173 :1005.49733174
[.MS ].175

52. ”SIMD- F M T (SFMT)”176 . hiroshima-u.ac.jp. Re-


trieved 4 October 2015.
53. ”SFMT:C  ”177 . hiroshima-u.ac.jp. Retrieved 4 October 2015.
54. ”PS®3 L”178 . scei.co.jp. Retrieved 4 October 2015.
55. ”T M T (TMT)”179 . hiroshima-u.ac.jp. Retrieved 4 October
2015.

35.9 Further reading


• H, S. (2014), ”O  2 -linear relations of Mersenne Twister pseudoran-
dom number generators”, Mathematics and Computers in Simulation, 100: 103–113,
arXiv180 :1301.5435181 , doi182 :10.1016/j.matcom.2014.02.002183 .

162 https://en.wikipedia.org/wiki/ArXiv_(identifier)
163 http://arxiv.org/abs/1505.06582
164 https://en.wikipedia.org/wiki/Doi_(identifier)
165 https://doi.org/10.1145%2F3159444
166 https://en.wikipedia.org/wiki/Doi_(identifier)
167 https://doi.org/10.1145%2F146382.146383
168 http://en.cppreference.com/w/cpp/numeric/random/mersenne_twister_engine
169 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/CODES/mt19937ar.c
170 http://www.ecrypt.eu.org/stream/cryptmtfubuki.html
171 https://en.wikipedia.org/wiki/ECRYPT
172 http://eprint.iacr.org/2005/165.pdf
173 https://en.wikipedia.org/wiki/ArXiv_(identifier)
174 http://arxiv.org/abs/1005.4973v3
175 http://arxiv.org/archive/cs.MS
176 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html
177 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/speed.html
178 http://www.scei.co.jp/ps3-license/index.html
179 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/TINYMT/index.html
180 https://en.wikipedia.org/wiki/ArXiv_(identifier)
181 http://arxiv.org/abs/1301.5435
182 https://en.wikipedia.org/wiki/Doi_(identifier)
183 https://doi.org/10.1016%2Fj.matcom.2014.02.002

• Harase, S. (2019), ”Conversion of Mersenne Twister to double-precision floating-point numbers”, Mathematics and Computers in Simulation, 161: 76–83, arXiv:1708.06018, doi:10.1016/j.matcom.2018.08.006.

35.10 External links


• The academic paper for MT, and related articles by Makoto Matsumoto188
• Mersenne Twister home page, with codes in C, Fortran, Java, Lisp and some other lan-
guages189
• Mersenne Twister examples190, a collection of Mersenne Twister implementations in several programming languages, at GitHub191
• SFMT in Action: Part I – Generating a DLL Including SSE2 Support192 – at Code
Project193


188 http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/ARTICLES/earticles.html
https://web.archive.org/web/20070828025507/http://www.math.sci.hiroshima-u.ac.jp/~m-
189
mat/MT/emt.html
190 https://github.com/bmurray7/mersenne-twister-examples
191 https://en.wikipedia.org/wiki/GitHub
192 http://www.codeproject.com/KB/DLL/SFMT_dll.aspx?msg=3130186
193 https://en.wikipedia.org/wiki/Code_Project

36 Cryptographically secure
pseudorandom number generator

A cryptographically secure pseudorandom number generator (CSPRNG) or cryptographic
pseudorandom number generator (CPRNG)[1] is a pseudorandom number generator1
(PRNG) with properties that make it suitable for use in cryptography2 . It is also
loosely known as a cryptographic random number generator (CRNG) (see Random
number generation#”True” vs. pseudo-random numbers3 ).[2][3]
Most cryptographic applications4 require random5 numbers, for example:
• key generation6
• nonces7
• salts8 in certain signature schemes, including ECDSA9 , RSASSA-PSS10
The ”quality” of the randomness required for these applications varies. For example, creating
a nonce11 in some protocols12 needs only uniqueness. On the other hand, the generation
of a master key13 requires a higher quality, such as more entropy14 . And in the case of
one-time pads15 , the information-theoretic16 guarantee of perfect secrecy only holds if the
key material comes from a true random source with high entropy, and thus any kind of
pseudo-random number generator is insufficient.
Ideally, the generation of random numbers in CSPRNGs uses entropy obtained from a high-
quality source, generally the operating system's randomness API. However, unexpected
correlations have been found in several such ostensibly independent processes. From an
information-theoretic point of view, the amount of randomness, the entropy that can be
generated, is equal to the entropy provided by the system. But sometimes, in practical

1 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
2 https://en.wikipedia.org/wiki/Cryptography
https://en.wikipedia.org/wiki/Random_number_generation#&quot;True&quot;_vs._pseudo-
3
random_numbers
4 https://en.wikipedia.org/wiki/Cryptography
5 https://en.wikipedia.org/wiki/Random
6 https://en.wikipedia.org/wiki/Key_generation
7 https://en.wikipedia.org/wiki/Cryptographic_nonce
8 https://en.wikipedia.org/wiki/Salt_(cryptography)
9 https://en.wikipedia.org/wiki/ECDSA
10 https://en.wikipedia.org/wiki/PKCS_1
11 https://en.wikipedia.org/wiki/Cryptographic_nonce
12 https://en.wikipedia.org/wiki/Cryptographic_protocol
13 https://en.wikipedia.org/wiki/Key_(cryptography)
14 https://en.wikipedia.org/wiki/Entropy_(computing)
15 https://en.wikipedia.org/wiki/One-time_pad
16 https://en.wikipedia.org/wiki/Information_theory


situations, more random numbers are needed than there is entropy available. Also, the
processes to extract randomness from a running system are slow in actual practice. In
such instances, a CSPRNG can sometimes be used. A CSPRNG can ”stretch” the available
entropy over more bits.

36.1 Requirements

The requirements of an ordinary PRNG are also satisfied by a cryptographically secure


PRNG, but the reverse is not true. CSPRNG requirements fall into two groups: first,
that they pass statistical randomness tests17 ; and second, that they hold up well under
serious attack, even when part of their initial or running state becomes available to an
attacker.[citation needed]
• Every CSPRNG should satisfy the next-bit test19 . That is, given the first k bits of a
random sequence, there is no polynomial-time20 algorithm that can predict the (k+1)th
bit with probability of success non-negligibly better than 50%.[4] Andrew Yao21 proved
in 1982 that a generator passing the next-bit test will pass all other polynomial-time
statistical tests for randomness.[5]
• Every CSPRNG should withstand ”state compromise extensions”. In the event that part
or all of its state has been revealed (or guessed correctly), it should be impossible to
reconstruct the stream of random numbers prior to the revelation. Additionally, if there
is an entropy input while running, it should be infeasible to use knowledge of the input's
state to predict future conditions of the CSPRNG state.
Example: If the CSPRNG under consideration produces output by computing bits of
π22 in sequence, starting from some unknown point in the binary expansion, it may well
satisfy the next-bit test and thus be statistically random, as π appears to be a random
sequence. (This would be guaranteed if π is a normal number23 , for example.) However,
this algorithm is not cryptographically secure; an attacker who determines which bit of π
(i.e. the state of the algorithm) is currently in use will be able to calculate all preceding
bits as well.
Most PRNGs are not suitable for use as CSPRNGs, and fail on both counts. First,
while the output of most PRNGs appears random to assorted statistical tests, the generators
do not resist determined reverse engineering: specialized statistical tests tuned
to a particular PRNG can show that its output is not truly random. Second, for most
PRNGs, once their state has been revealed, all past random numbers can be retrodicted,
allowing an attacker to read all past messages, as well as future ones.
CSPRNGs are designed explicitly to resist this type of cryptanalysis24 .

17 https://en.wikipedia.org/wiki/Randomness_tests
19 https://en.wikipedia.org/wiki/Next-bit_test
20 https://en.wikipedia.org/wiki/Polynomial-time
21 https://en.wikipedia.org/wiki/Andrew_Yao
22 https://en.wikipedia.org/wiki/Pi
23 https://en.wikipedia.org/wiki/Normal_number
24 https://en.wikipedia.org/wiki/Cryptanalysis


36.2 Definitions

In the asymptotic setting25 , a family of deterministic polynomial-time computable functions
G_k : {0,1}^k → {0,1}^{p(k)}, for some polynomial p, is a pseudorandom number
generator (PRNG, or PRG in some references) if it stretches the length of its input
(p(k) > k for any k), and if its output is computationally indistinguishable26 from true
randomness, i.e. for any probabilistic polynomial-time algorithm A, which outputs 1 or 0 as a
distinguisher,

|Pr_{x←{0,1}^k}[A(G(x)) = 1] − Pr_{r←{0,1}^{p(k)}}[A(r) = 1]| < µ(k)

for some negligible function27 µ.[6] (The notation x ← X means that x is chosen uniformly28
at random from the set X.)
There is an equivalent characterization: for any function family G_k : {0,1}^k → {0,1}^{p(k)}, G
is a PRNG if and only if the next output bit of G cannot be predicted by a polynomial-time
algorithm.[7]
A forward-secure PRNG with block length t(k) is a PRNG
G_k : {0,1}^k → {0,1}^k × {0,1}^{t(k)}, where the input string s_i with length k is the current
state at period i, and the output (s_{i+1}, y_i) consists of the next state s_{i+1} and the
pseudorandom output block y_i of period i, such that it withstands state compromise
extensions in the following sense: if the initial state s_1 is chosen uniformly at random
from {0,1}^k, then for any i, the sequence (y_1, y_2, . . . , y_i, s_{i+1}) must be computationally
indistinguishable from (r_1, r_2, . . . , r_i, s_{i+1}), in which the r_i are chosen uniformly at random
from {0,1}^{t(k)}.[8]
Any PRNG G : {0,1}^k → {0,1}^{p(k)} can be turned into a forward-secure PRNG with block
length p(k) − k by splitting its output into the next state and the actual output. This is
done by setting G(s) = G_0(s) ∥ G_1(s), in which |G_0(s)| = |s| = k and |G_1(s)| = p(k) − k; then
G is a forward-secure PRNG with G_0 as the next state and G_1 as the pseudorandom output
block of the current period.
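The splitting construction above can be sketched in a few lines. This is only an illustrative sketch: SHA-256-based expansion stands in for a proven length-doubling PRNG G, and all names here (G, forward_secure_step, K) are made up for the example.

```python
import hashlib

K = 32  # state length k, in bytes rather than bits for readability


def G(s: bytes) -> bytes:
    """Stand-in length-doubling generator: expands K bytes to 2*K bytes.

    SHA-256 with domain-separation suffixes is only an illustrative
    placeholder for a generator with a proven PRNG property.
    """
    return (hashlib.sha256(s + b"\x00").digest()
            + hashlib.sha256(s + b"\x01").digest())


def forward_secure_step(state: bytes):
    """One period of G(s) = G0(s) || G1(s):
    G0(s) becomes the next state, G1(s) is the pseudorandom output block."""
    out = G(state)
    return out[:K], out[K:]  # (next state s_{i+1}, output block y_i)


state = b"\x42" * K  # s_1; in real use, drawn uniformly at random
blocks = []
for _ in range(3):
    state, y = forward_secure_step(state)
    blocks.append(y)

# Compromising the current state does not reveal past output blocks,
# because recovering s_i from s_{i+1} would require inverting G.
```

Note how forward security comes purely from the split: the output block is never fed back into the state, and the state update is one-way.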

36.3 Entropy extraction

Main article: Randomness extractor29

Santha and Vazirani proved that several bit streams
with weak randomness can be combined to produce a higher-quality quasi-random bit
stream.[9] Even earlier, John von Neumann30 proved that a simple algorithm31 can remove
a considerable amount of the bias in any bit stream;[10] this debiasing should be applied to each bit
stream before using any variation of the Santha–Vazirani design.
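Von Neumann's debiasing algorithm is short enough to sketch here (a minimal illustration; the function name is made up):

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: read the input in non-overlapping pairs;
    emit 0 for a (0, 1) pair, 1 for a (1, 0) pair, and discard the
    (0, 0) and (1, 1) pairs.

    For independent bits with a fixed bias p, both surviving pairs have
    probability p*(1-p), so the output bits are unbiased, at the cost of
    discarding at least half of the input.
    """
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)  # a == 1 exactly when the pair is (1, 0)
    return out


# A biased-looking sample, about 75% ones:
biased = [1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
print(von_neumann_extract(biased))  # → [1, 1, 0, 1]
```

The correctness argument depends on the input bits being independent; correlated streams need the stronger extractors referenced above.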

25 https://en.wikipedia.org/wiki/Asymptotic_security
26 https://en.wikipedia.org/wiki/Computational_indistinguishability
27 https://en.wikipedia.org/wiki/Negligible_function
28 https://en.wikipedia.org/wiki/Uniform_distribution_(discrete)
29 https://en.wikipedia.org/wiki/Randomness_extractor
30 https://en.wikipedia.org/wiki/John_von_Neumann
31 https://en.wikipedia.org/wiki/Randomness_extractor#Von_Neumann_extractor


36.4 Designs

In the discussion below, CSPRNG designs are divided into three classes:
1. those based on cryptographic primitives such as ciphers32 and cryptographic hashes33 ,
2. those based upon mathematical problems thought to be hard, and
3. special-purpose designs.
Generators in the last class often introduce additional entropy when available and, strictly speaking, are not
”pure” pseudorandom number generators, as their output is not completely determined by
their initial state. This addition can prevent attacks even if the initial state is compromised.

36.4.1 Designs based on cryptographic primitives


• A secure block cipher34 can be converted into a CSPRNG by running it in counter
mode35[dubious – discuss] . This is done by choosing a random38 key and encrypting a
0, then encrypting a 1, then encrypting a 2, etc. The counter can also be started at an
arbitrary number other than zero. Assuming an n-bit block cipher, the output can be
distinguished from random data after around 2^(n/2) blocks since, following the birthday
problem39 , colliding blocks should become likely at that point, whereas a block cipher
in CTR mode will never output identical blocks. For 64-bit block ciphers this limits the
safe output size to a few gigabytes; with 128-bit blocks the limitation is large enough
not to impact typical applications. However, when used alone it does not meet all of the
criteria of a CSPRNG (as stated above), since it is not strong against ”state compromise
extensions”: with knowledge of the state (in this case a counter and a key) you can predict
all past output.
• A cryptographically secure hash40 of a counter might also act as a good CSPRNG in some
cases. In this case, it is also necessary that the initial value of this counter is random and
secret. However, there has been little study of these algorithms for use in this manner,
and at least some authors warn against this use.[vague][11]
• Most stream ciphers42 work by generating a pseudorandom stream of bits that are com-
bined (almost always XORed43 ) with the plaintext44 ; running the cipher on a counter
will return a new pseudorandom stream, possibly with a longer period. The cipher can
only be secure if the original stream is a good CSPRNG, although this is not necessarily
the case (see the RC4 cipher45 ). Again, the initial state must be kept secret.
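The counter-mode construction in the first bullet can be sketched as follows. Since the Python standard library ships no block cipher, HMAC-SHA256 stands in here as the keyed pseudorandom function applied to successive counter values; the class name and parameters are illustrative, not a real API.

```python
import hmac
import hashlib


class CounterModeGenerator:
    """Counter-mode sketch: apply a keyed PRF to 0, 1, 2, ... under a
    secret key.  HMAC-SHA256 stands in for the block cipher; with a real
    n-bit cipher, the 2^(n/2) birthday bound discussed above applies."""

    def __init__(self, key: bytes, counter: int = 0):
        self.key = key          # must be chosen uniformly at random
        self.counter = counter  # may start at any value, not just zero

    def next_block(self) -> bytes:
        block = hmac.new(self.key, self.counter.to_bytes(16, "big"),
                         hashlib.sha256).digest()
        self.counter += 1
        return block


gen = CounterModeGenerator(key=b"\x01" * 32)
stream = b"".join(gen.next_block() for _ in range(4))  # 4 * 32 = 128 bytes

# Caveat from the text: knowledge of the state (key, counter) reveals all
# past output, so this construction alone is not forward secure.
```

A real deployment would use a vetted DRBG (e.g. from a cryptographic library) rather than this sketch.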

32 https://en.wikipedia.org/wiki/Cipher
33 https://en.wikipedia.org/wiki/Cryptographic_hash
34 https://en.wikipedia.org/wiki/Block_cipher
35 https://en.wikipedia.org/wiki/Block_cipher_modes_of_operation
38 https://en.wikipedia.org/wiki/Random
39 https://en.wikipedia.org/wiki/Birthday_problem
40 https://en.wikipedia.org/wiki/Cryptographic_hash_function
42 https://en.wikipedia.org/wiki/Stream_cipher
43 https://en.wikipedia.org/wiki/Bitwise_XOR
44 https://en.wikipedia.org/wiki/Plaintext
45 https://en.wikipedia.org/wiki/RC4_cipher


36.4.2 Number-theoretic designs


• The Blum Blum Shub46 algorithm has a security proof based on the difficulty of the
quadratic residuosity problem47 . Since the only known way to solve that problem is to
factor the modulus, it is generally regarded that the difficulty of integer factorization48
provides a conditional security proof for the Blum Blum Shub algorithm. However, the
algorithm is very inefficient and therefore impractical unless extreme security is needed.
• The Blum–Micali algorithm49 has an unconditional security proof based on the difficulty
of the discrete logarithm problem50 but is also very inefficient.
• Daniel Brown of Certicom51 has written a 2006 security proof for Dual_EC_DRBG52 ,
based on the assumed hardness of the Decisional Diffie–Hellman assumption53 , the x-
logarithm problem, and the truncated point problem. The 2006 proof explicitly assumes
a lower outlen than in the Dual_EC_DRBG standard, and that the P and Q in the
Dual_EC_DRBG standard (which were revealed in 2013 to be probably backdoored by
NSA) are replaced with non-backdoored values.

36.4.3 Special designs

There are a number of practical PRNGs that have been designed to be cryptographically
secure, including
• the Yarrow algorithm54 which attempts to evaluate the entropic quality of its inputs.
Yarrow is used in macOS55 (also as /dev/random56 ).
• the ChaCha2057 algorithm replaced RC458 in OpenBSD59 (version 5.4),[12] NetBSD60
(version 7.0),[13] and FreeBSD61 (version 12.0).[14]
• ChaCha20 also replaced SHA-162 in Linux63 in version 4.8.[15]
• the Fortuna algorithm64 , the successor to Yarrow, which does not attempt to evaluate
the entropic quality of its inputs. Fortuna is used in FreeBSD.

46 https://en.wikipedia.org/wiki/Blum_Blum_Shub
47 https://en.wikipedia.org/wiki/Quadratic_residuosity_problem
48 https://en.wikipedia.org/wiki/Integer_factorization
49 https://en.wikipedia.org/wiki/Blum%E2%80%93Micali_algorithm
50 https://en.wikipedia.org/wiki/Discrete_logarithm_problem
51 https://en.wikipedia.org/wiki/Certicom
52 https://en.wikipedia.org/wiki/Dual_EC_DRBG
53 https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption
54 https://en.wikipedia.org/wiki/Yarrow_algorithm
55 https://en.wikipedia.org/wiki/MacOS
56 https://en.wikipedia.org/wiki//dev/random
57 https://en.wikipedia.org/wiki/ChaCha20
58 https://en.wikipedia.org/wiki/RC4
59 https://en.wikipedia.org/wiki/OpenBSD
60 https://en.wikipedia.org/wiki/NetBSD
61 https://en.wikipedia.org/wiki/FreeBSD
62 https://en.wikipedia.org/wiki/SHA-1
63 https://en.wikipedia.org/wiki/Linux
64 https://en.wikipedia.org/wiki/Fortuna_(PRNG)


• the function CryptGenRandom65 provided in Microsoft66 's Cryptographic Application


Programming Interface67
• ISAAC68 based on a variant of the RC469 cipher
• Evolutionary algorithm70 based on the NIST71 Statistical Test Suite.[16][17]
• arc4random72
• AES73 -CTR74 DRBG is often used as a random number generator in systems that use
AES encryption.[18][19]
• ANSI75 X9.17 standard (Financial Institution Key Management (wholesale)), which has
been adopted as a FIPS76 standard as well. It takes as input a TDEA77 (keying option
278 ) key bundle k and (the initial value of) a 64-bit random seed79 s.[20] Each time a
random number is required it:
• Obtains the current date/time D to the maximum resolution possible.
• Computes a temporary value t = TDEAk (D)
• Computes the random value x = TDEAk (s ⊕t), where ⊕ denotes bitwise exclusive or80 .
• Updates the seed s = TDEAk (x ⊕t)
Obviously, the technique is easily generalized to any block cipher; AES81 has been
suggested.[21]
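The four X9.17 steps above can be sketched directly. TDEA (triple DES) is not in the Python standard library, so a truncated HMAC-SHA256 stands in for TDEA_k as the 64-bit-output keyed function; all names here are made up for the example.

```python
import hmac
import hashlib
import time


def prf(key: bytes, data: bytes) -> bytes:
    """Stand-in for TDEA_k: HMAC-SHA256 truncated to 8 bytes plays the
    role of the 64-bit block cipher used by the actual standard."""
    return hmac.new(key, data, hashlib.sha256).digest()[:8]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def x917_step(key: bytes, seed: bytes, now_ns: int):
    """One X9.17 output, following the steps in the text:
    D = timestamp, t = E_k(D), x = E_k(s XOR t), s' = E_k(x XOR t)."""
    D = now_ns.to_bytes(8, "big")   # current date/time, max resolution
    t = prf(key, D)                 # temporary value
    x = prf(key, xor(seed, t))      # the random value
    new_seed = prf(key, xor(x, t))  # updated seed
    return x, new_seed


key = b"\x2a" * 16   # stand-in for the TDEA key bundle k
seed = b"\x07" * 8   # the 64-bit random seed s
x, seed = x917_step(key, seed, time.time_ns())
```

Note how the timestamp enters every output: even if the seed is compromised, an attacker must also guess D to high resolution, which is the design's (limited) defense, and the weakness DUHK later exploited when keys were hardcoded.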

36.5 Standards

Several CSPRNGs have been standardized. For example,


• FIPS 186-482
• NIST SP 800-90A83 :
This withdrawn standard has four PRNGs. Two of them are uncontroversial and proven:
CSPRNGs named Hash_DRBG[22] and HMAC_DRBG.[23]

65 https://en.wikipedia.org/wiki/CryptGenRandom
66 https://en.wikipedia.org/wiki/Microsoft
67 https://en.wikipedia.org/wiki/Cryptographic_Application_Programming_Interface
68 https://en.wikipedia.org/wiki/ISAAC_(cipher)
69 https://en.wikipedia.org/wiki/RC4
70 https://en.wikipedia.org/wiki/Evolutionary_algorithm
71 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
72 https://en.wikipedia.org/wiki/Rc4#RC4-based_random_number_generators
73 https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
74 https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CTR
75 https://en.wikipedia.org/wiki/American_National_Standards_Institute
76 https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard
77 https://en.wikipedia.org/wiki/Triple_DES
78 https://en.wikipedia.org/wiki/Triple_DES#Keying_options
79 https://en.wikipedia.org/wiki/Random_seed
80 https://en.wikipedia.org/wiki/Exclusive_or
81 https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
82 http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf
83 https://en.wikipedia.org/wiki/NIST_SP_800-90A


The third PRNG in this standard, CTR_DRBG84 , is based on a block cipher85 running
in counter mode86 . It has an uncontroversial design but has been proven to be weaker
in terms of distinguishing attack, than the security level87 of the underlying block cipher
when the number of bits output from this PRNG is greater than two to the power of the
underlying block cipher's block size in bits.[24]
When the maximum number of bits output from this PRNG is equal to 2^blocksize,
the resulting output delivers the mathematically expected security level that the key size
would be expected to generate, but the output is shown not to be indistinguishable from
a true random number generator.[24] When the maximum number of bits output from this
PRNG is less than that, the expected security level is delivered and the output appears to
be indistinguishable from a true random number generator.[24]
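A quick computation makes these block-size-dependent limits concrete, using the 2^(n/2) birthday bound discussed under § Designs (a sketch; the helper name is made up):

```python
def distinguishing_limit_bytes(block_bits: int) -> int:
    """Rough birthday-bound arithmetic: an n-bit block cipher in counter
    mode becomes distinguishable after about 2^(n/2) blocks, each of
    n/8 bytes."""
    blocks = 2 ** (block_bits // 2)
    return blocks * (block_bits // 8)


print(distinguishing_limit_bytes(64))   # 2^32 blocks * 8 bytes = 32 GiB
print(distinguishing_limit_bytes(128))  # 2^64 blocks * 16 bytes = 2^68 bytes
```

This is why 64-bit-block designs cap safe output at a few gigabytes, while 128-bit blocks push the bound far beyond typical use.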
It is noted in the next revision that claimed security strength for CTR_DRBG depends on
limiting the total number of generate requests and the bits provided per generate request.
The fourth and final PRNG in this standard is named Dual_EC_DRBG88 . It has been
shown to not be cryptographically secure and is believed to have a kleptographic89 NSA
backdoor.[25]
• NIST SP 800-90A Rev.1: This is essentially NIST SP 800-90A with Dual_EC_DRBG
removed, and is the withdrawn standard's replacement.
• ANSI X9.17-1985 Appendix C
• ANSI X9.31-1998 Appendix A.2.4
• ANSI X9.62-1998 Annex A.4, obsoleted by ANSI X9.62-2005, Annex D (HMAC_DRBG)
A good reference90 is maintained by NIST91 .
There are also standards for statistical testing of new CSPRNG designs:
• A Statistical Test Suite for Random and Pseudorandom Number Generators, NIST Special
Publication 800-2292 .

84 https://en.wikipedia.org/wiki/CTR_DRBG
85 https://en.wikipedia.org/wiki/Block_cipher
86 https://en.wikipedia.org/wiki/Counter_mode
87 https://en.wikipedia.org/wiki/Security_level
88 https://en.wikipedia.org/wiki/Dual_EC_DRBG
89 https://en.wikipedia.org/wiki/Kleptography
90 http://csrc.nist.gov/groups/ST/toolkit/random_number.html
91 https://en.wikipedia.org/wiki/NIST
92 http://csrc.nist.gov/groups/ST/toolkit/rng/documents/SP800-22rev1a.pdf


36.6 NSA kleptographic backdoor in the Dual_EC_DRBG PRNG

Main article: Dual_EC_DRBG93 The Guardian94 and The New York Times95 have re-
ported that the National Security Agency96 (NSA) inserted a backdoor97 into a pseudoran-
dom number generator98 (PRNG) of NIST SP 800-90A99 which allows the NSA to readily
decrypt material that was encrypted with the aid of Dual_EC_DRBG100 . Both papers
report[26][27] that, as independent security experts long suspected,[28] the NSA has been in-
troducing weaknesses into CSPRNG standard 800-90; this being confirmed for the first time
by one of the top secret documents leaked to the Guardian by Edward Snowden101 . The
NSA worked covertly to get its own version of the NIST draft security standard approved
for worldwide use in 2006. The leaked document states that ”eventually, NSA became
the sole editor.” In spite of the known potential for a kleptographic102 backdoor and other
known significant deficiencies with Dual_EC_DRBG, several companies such as RSA Se-
curity103 continued using Dual_EC_DRBG until the backdoor was confirmed in 2013.[29]
RSA Security104 received a $10 million payment from the NSA to do so.[30]

36.7 Security flaws

36.7.1 DUHK attack

On October 23, 2017, Shaanan Cohney105 , Matthew Green106 , and Nadia Heninger107 ,
cryptographers108 at The University of Pennsylvania109 and Johns Hopkins University110
released details of the DUHK (Don't Use Hard-coded Keys) attack on WPA2111 where
hardware vendors use a hardcoded seed key for the ANSI X9.31 RNG algorithm in con-
junction with the usage of the ANSI X9.31 Random Number Generator, ”an attacker can
brute-force encrypted data to discover the rest of the encryption parameters and deduce the

93 https://en.wikipedia.org/wiki/Dual_EC_DRBG
94 https://en.wikipedia.org/wiki/The_Guardian
95 https://en.wikipedia.org/wiki/The_New_York_Times
96 https://en.wikipedia.org/wiki/National_Security_Agency
97 https://en.wikipedia.org/wiki/Backdoor_(computing)
98 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
99 https://en.wikipedia.org/wiki/NIST_SP_800-90A
100 https://en.wikipedia.org/wiki/Dual_EC_DRBG
101 https://en.wikipedia.org/wiki/Edward_Snowden
102 https://en.wikipedia.org/wiki/Kleptography
103 https://en.wikipedia.org/wiki/RSA_Security
104 https://en.wikipedia.org/wiki/RSA_Security
105 https://en.wikipedia.org/w/index.php?title=Shaanan_Cohney&action=edit&redlink=1
106 https://en.wikipedia.org/wiki/Matthew_D._Green
107 https://en.wikipedia.org/wiki/Nadia_Heninger
108 https://en.wikipedia.org/wiki/Cryptographer
109 https://en.wikipedia.org/wiki/University_of_Pennsylvania
110 https://en.wikipedia.org/wiki/Johns_Hopkins_University
111 https://en.wikipedia.org/wiki/WPA2


master encryption key used to encrypt web sessions or virtual private network112 (VPN)
connections.”[31][32]

36.8 References
1. H, A113 (2003). Hacking the Xbox: An Introduction to Reverse
Engineering114 . N S P S. N S P115 . . 111.
ISBN116 9781593270292117 . R 2013-10-24. [...] the keystream generator [...]
can be thought of as a cryptographic pseudo-random number generator (CPRNG).
2. D, C. ”H       
   ”118 . Exoscale.
3. ”// I M L // W L 5.6 - P”119 .
www.phoronix.com.
4. K, J; L, Y (2008). Introduction to Modern Cryptogra-
phy120 . CRC . . 70121 . ISBN122 978-1584885511123 .
5. Andrew Chi-Chih Yao124 . Theory and applications of trapdoor functions125 . In Pro-
ceedings of the 23rd IEEE Symposium on Foundations of Computer Science, 1982.
6. G, O (2001), Foundations of cryptography I: Basic Tools, Cambridge:
Cambridge University Press, ISBN126 978-0-511-54689-1127 , def 3.3.1.
7. G, O (2001), Foundations of cryptography I: Basic Tools, Cambridge:
Cambridge University Press, ISBN128 978-0-511-54689-1129 , Theorem 3.3.7.
8. D, Y, Lecture 5 Notes of Introduction to Cryptography130 (PDF), -
 3 J 2016, def 4.
9. M S, U V. V (1984-10-24). ”G -
   - ”131 (PDF). Proceedings of

112 https://en.wikipedia.org/wiki/Virtual_private_network
113 https://en.wikipedia.org/wiki/Andrew_Huang_(hacker)
https://archive.org/details/Hacking_the_Xbox_An_Introduction_to_Reverse_Engineering_
114
2003_No_Starch_Press
115 https://en.wikipedia.org/wiki/No_Starch_Press
116 https://en.wikipedia.org/wiki/ISBN_(identifier)
117 https://en.wikipedia.org/wiki/Special:BookSources/9781593270292
118 https://www.exoscale.com/syslog/random-numbers-generation-in-virtual-machines/
119 https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Random-Rework
120 https://archive.org/details/Introduction_to_Modern_Cryptography
121 https://archive.org/details/Introduction_to_Modern_Cryptography/page/n88
122 https://en.wikipedia.org/wiki/ISBN_(identifier)
123 https://en.wikipedia.org/wiki/Special:BookSources/978-1584885511
124 https://en.wikipedia.org/wiki/Andrew_Chi-Chih_Yao
125 https://www.di.ens.fr/users/phan/secuproofs/yao82.pdf
126 https://en.wikipedia.org/wiki/ISBN_(identifier)
127 https://en.wikipedia.org/wiki/Special:BookSources/978-0-511-54689-1
128 https://en.wikipedia.org/wiki/ISBN_(identifier)
129 https://en.wikipedia.org/wiki/Special:BookSources/978-0-511-54689-1
130 http://cs.nyu.edu/courses/fall08/G22.3210-001/lect/lecture5.pdf
131 http://www.cs.berkeley.edu/~vazirani/pubs/quasi.pdf


the 25th IEEE Symposium on Foundations of Computer Science. University of California132 .
pp. 434–440. ISBN133 0-8186-0591-X134 . Retrieved 2006-11-29.
10. J  N135 (1963-03-01). ”V     -
   ”. The Collected Works of John von Neumann. Pergamon
Press136 . pp. 768–770. ISBN137 0-08-009566-6138 .
11. A Y, M Y (2004-02-01). Malicious Cryptography: Exposing Cryp-
tovirology139 .  3.2: J W & S140 . . 416. ISBN141 978-0-7645-4975-
5142 .CS1 maint: location (link143 )
12. ”CVS   4.”144 . CVS. O 1, 2013.
13. ”CVS   4.”145 . CVS. N 16, 2014.
14. ”FBSD 12.0-RELEASE R N: R L  API”146 .
FreeBSD.org. 5 March 2019. Retrieved 24 August 2019.
15. ”G   .”147 . G. J 2, 2016.
16. ”A S T S  R  P N G-
  C A”148 (PDF). S P.
NIST. A 2010.
17. P, A.; S, A.; K, A. (M 2008). ”G H
Q P R N U E M”149 (PDF).
IEEE Congress on Computational Intelligence and Security. 9: 331–335.
18. K, D; K, M (2012). Embedded Systems Se-
curity: Practical Methods for Safe and Secure Software and Systems Development150 .
E. . 256.
19. C, G; D, C; J, DJ (2011). ”I' D R
N G (DRNG)”151 (PDF). Cite journal requires |journal= (help152 )

132 https://en.wikipedia.org/wiki/University_of_California
133 https://en.wikipedia.org/wiki/ISBN_(identifier)
134 https://en.wikipedia.org/wiki/Special:BookSources/0-8186-0591-X
135 https://en.wikipedia.org/wiki/John_von_Neumann
136 https://en.wikipedia.org/wiki/Pergamon_Press
137 https://en.wikipedia.org/wiki/ISBN_(identifier)
138 https://en.wikipedia.org/wiki/Special:BookSources/0-08-009566-6
139 http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0764549758.html
140 https://en.wikipedia.org/wiki/John_Wiley_%26_Sons
141 https://en.wikipedia.org/wiki/ISBN_(identifier)
142 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7645-4975-5
143 https://en.wikipedia.org/wiki/Category:CS1_maint:_location
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/lib/libc/crypt/arc4random.c?rev=1.25&
144
content-type=text/x-cvsweb-markup
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/arc4random.c?rev=1.26&content-
145
type=text/x-cvsweb-markup&only_with_tag=MAIN
146 https://www.freebsd.org/releases/12.0R/relnotes.html#userland-libraries
https://github.com/torvalds/linux/blob/e192be9d9a30555aae2ca1dc3aad37cba484cd4a/
147
drivers/char/random.c
148 http://csrc.nist.gov/publications/nistpubs/800-22-rev1a/SP800-22rev1a.pdf
149 http://www.computer.org/csdl/proceedings/cis/2008/3508/01/3508a331.pdf
150 https://books.google.com/books?id=E9hBXN-HK1cC
http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.18.2-security/HC23.
151
18.210-Random-Numbers-Cox-Intel-e.pdf
152 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical


20. M, A153 ;  O, P154 ; V, S155 (1996).


”C 5: P B  S”156 (PDF). Handbook of Applied
Cryptography157 . CRC P.
21. Y, A; Y, M (2004-02-01). Malicious Cryptography: Exposing
Cryptovirology158 .  3.5.1: J W & S159 . ISBN160 978-0-7645-4975-
5161 .CS1 maint: location (link162 )
22. K, W (S 4, 2007). ”A  U A
 NIST DRBG”163 (PDF). R N 19, 2016.
23. Y, K Q (A 2016). ”T N PRG: F -
   HMAC-DRBG   ”164 (PDF).
R N 19, 2016.
24. C, M J. (N 1, 2006). ”S B  
NIST C- D R B G”165 (PDF). R-
 N 19, 2016.
25. P, N (S 10, 2013). ”G A S
 R C  E S”166 . The New York Times.
Retrieved November 19, 2016.
26. J B; G G (6 S 2013). ”R: 
US  UK       ”167 . The
Guardian. The Guardian. Retrieved 7 September 2013.
27. N P (5 S 2013). ”N.S.A. A  F B S-
  P  W”168 . The New York Times. Retrieved 7 September
2013.
28. B S (15 N 2007). ”D NSA P  S B 
N E S?”169 . Wired. Retrieved 7 September 2013.
29. M G. ”RSA      RSA ”170 .
30. J M (20 D 2013). ”E: S   NSA
   ”171 . Reuters.

153 https://en.wikipedia.org/wiki/Alfred_Menezes
154 https://en.wikipedia.org/wiki/Paul_van_Oorschot
155 https://en.wikipedia.org/wiki/Scott_Vanstone
156 http://www.cacr.math.uwaterloo.ca/hac/about/chap5.pdf
157 http://www.cacr.math.uwaterloo.ca/hac
158 http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0764549758.html
159 https://en.wikipedia.org/wiki/John_Wiley_%26_Sons
160 https://en.wikipedia.org/wiki/ISBN_(identifier)
161 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7645-4975-5
162 https://en.wikipedia.org/wiki/Category:CS1_maint:_location
163 https://eprint.iacr.org/2007/345.pdf
164 https://www.cs.cmu.edu/~kqy/resources/thesis.pdf
165 http://eprint.iacr.org/2006/379.pdf
http://bits.blogs.nytimes.com/2013/09/10/government-announces-steps-to-restore-
166
confidence-on-encryption-standards/
167 https://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security
https://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?
168
pagewanted=all&_r=0
https://www.wired.com/politics/security/commentary/securitymatters/2007/11/
169
securitymatters_1115
http://blog.cryptographyengineering.com/2013/09/rsa-warns-developers-against-its-
170
own.html
171 https://www.reuters.com/article/2013/12/20/us-usa-security-rsa-idUSBRE9BJ1C220131220


31. S C172 ; M D. G173 ; N H174 . ”P


     RNG ”175 (PDF).
duhkattack.com.
32. ”DUHK C A R E K, E VPN C-
”176 . slashdot.org. Retrieved 25 October 2017.

36.9 External links

The Wikibook Cryptography177 has a page on the topic of: Random number
generation178

• RFC179 4086180 , Randomness Requirements for Security


• Java ”entropy pool” for cryptographically secure unpredictable random numbers.181
• Java standard class providing a cryptographically strong pseudo-random number gener-
ator (PRNG).182
• Cryptographically Secure Random number on Windows without using CryptoAPI183
• Conjectured Security of the ANSI-NIST Elliptic Curve RNG184 , Daniel R. L. Brown,
IACR ePrint 2006/117.
• A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Genera-
tor185 , Daniel R. L. Brown and Kristian Gjosteen, IACR ePrint 2007/048. To appear in
CRYPTO 2007.
• Cryptanalysis of the Dual Elliptic Curve Pseudorandom Generator186 , Berry Schoenmak-
ers and Andrey Sidorenko, IACR ePrint 2006/190.
• Efficient Pseudorandom Generators Based on the DDH Assumption187 , Reza Rezaeian
Farashahi and Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/321.
• Analysis of the Linux Random Number Generator188 , Zvi Gutterman and Benny Pinkas
and Tzachy Reinman.
• NIST Statistical Test Suite documentation and software download.189

172 https://en.wikipedia.org/w/index.php?title=Shaanan_Cohney&action=edit&redlink=1
173 https://en.wikipedia.org/wiki/Matthew_D._Green
174 https://en.wikipedia.org/wiki/Nadia_Heninger
175 https://duhkattack.com/paper.pdf
https://it.slashdot.org/story/17/10/25/0047224/duhk-crypto-attack-recovers-
176
encryption-keys-exposes-vpn-connections
177 https://en.wikibooks.org/wiki/Cryptography
178 https://en.wikibooks.org/wiki/Cryptography/Random_number_generation
179 https://en.wikipedia.org/wiki/RFC_(identifier)
180 https://tools.ietf.org/html/rfc4086
181 http://random.hd.org/
182 http://docs.oracle.com/javase/6/docs/api/java/security/SecureRandom.html
183 http://blogs.msdn.com/michael_howard/archive/2005/01/14/353379.aspx
184 http://eprint.iacr.org/2006/117
185 http://eprint.iacr.org/2007/048
186 http://eprint.iacr.org/2006/190
187 http://eprint.iacr.org/2006/321
188 http://eprint.iacr.org/2006/086.pdf
189 http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html

37 Blum Blum Shub


Blum Blum Shub (B.B.S.) is a pseudorandom number generator14 proposed in 1986


by Lenore Blum15 , Manuel Blum16 and Michael Shub17[1] that is derived from Michael O.
Rabin18 's one-way function.
Blum Blum Shub takes the form
x_{n+1} = x_n^2 mod M,
where M = pq is the product of two large primes19 p and q. At each step of the algorithm,
some output is derived from x_{n+1} ; the output is commonly either the bit parity20 of x_{n+1} or
one or more of the least significant bits of x_{n+1} .
The seed21 x0 should be an integer that is co-prime to M (i.e. p and q are not factors of x0 )
and not 1 or 0.

14 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
15 https://en.wikipedia.org/wiki/Lenore_Blum
16 https://en.wikipedia.org/wiki/Manuel_Blum
17 https://en.wikipedia.org/wiki/Michael_Shub
18 https://en.wikipedia.org/wiki/Michael_O._Rabin
19 https://en.wikipedia.org/wiki/Prime_number
20 https://en.wikipedia.org/wiki/Parity_bit
21 https://en.wikipedia.org/wiki/Random_seed


The two primes, p and q, should both be congruent22 to 3 (mod 4) (this guarantees that
each quadratic residue23 has one square root24 which is also a quadratic residue), and should
be safe primes25 with a small gcd26 ((p-3)/2, (q-3)/2) (this makes the cycle length large).
An interesting characteristic of the Blum Blum Shub generator is the possibility to calculate
any x_i value directly (via Euler's theorem27 ):
x_i = x_0^(2^i mod λ(M)) mod M,

where λ is the Carmichael function28 . (Here we have λ(M ) = λ(p · q) = lcm(p − 1, q − 1)).
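The direct-calculation property can be illustrated with a short Python sketch (my own example, not part of the original article; it uses the toy parameters from the example below, whereas a real generator would use large safe primes):

```python
from math import gcd

# Toy parameters (the example values used later in this chapter).
p, q = 11, 23
M = p * q
x0 = 3 * 3 % M          # x_0 = s^2 mod M for the seed s = 3; gcd(x0, M) == 1

# Carmichael function: lambda(M) = lcm(p - 1, q - 1) for M = p * q.
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)

def x_direct(i):
    """x_i computed directly via Euler's theorem."""
    return pow(x0, pow(2, i, lam), M)

def x_iterated(i):
    """x_i computed by squaring i times modulo M."""
    x = x0
    for _ in range(i):
        x = x * x % M
    return x

# Both methods agree for every index.
assert all(x_direct(i) == x_iterated(i) for i in range(20))
```

The direct formula makes Blum Blum Shub unusual among pseudorandom generators: any position in the output stream can be reached without generating the intermediate values.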

37.1 Security

There is a proof reducing its security to the computational difficulty29 of factoring.[1] When
the primes are chosen appropriately, and O30 (log31 log M) lower-order bits of each xn are
output, then in the limit as M grows large, distinguishing the output bits from random
should be at least as difficult as solving the Quadratic residuosity problem modulo M.

37.2 Example

Let p = 11, q = 23 and s = 3 (where s is the seed). We can expect to get a large cycle
length for those small numbers, because gcd((p − 3)/2, (q − 3)/2) = 2. The generator starts
to evaluate x0 by using x−1 = s and creates the sequence x0 , x1 , x2 , . . . x5 = 9, 81, 236,
36, 31, 202. The following table shows the output (in bits) for the different bit selection
methods used to determine the output.
Parity bit32 : 011010
Least significant bit33 : 110010
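These bit strings can be checked with a few lines of Python (an independent re-implementation of the generator for this example, not part of the original article):

```python
# Parameters from the example above (illustrative only).
p, q, s = 11, 23, 3
M = p * q

x = s                      # x_{-1} = s; the first squaring yields x_0
parity_bits = ""
lsb_bits = ""
for _ in range(6):
    x = x * x % M          # x_{n+1} = x_n^2 mod M
    # Even parity bit: the number of 1-bits in x, modulo 2.
    parity_bits += str(bin(x).count("1") % 2)
    # Least significant bit of x.
    lsb_bits += str(x & 1)

print(parity_bits)         # 011010
print(lsb_bits)            # 110010
```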

The following Common Lisp34 implementation provides a simple demonstration of the gen-
erator, in particular regarding the two bit selection methods. It is important to note that
the requirements imposed upon the parameters p, q and s (seed) are not checked.

(defun get-number-of-1-bits (bits)


"Counts and returns the number of 1-valued bits in the BITS."

22 https://en.wikipedia.org/wiki/Congruence_relation
23 https://en.wikipedia.org/wiki/Quadratic_residue
24 https://en.wikipedia.org/wiki/Square_root
25 https://en.wikipedia.org/wiki/Safe_prime
26 https://en.wikipedia.org/wiki/Greatest_common_divisor
27 https://en.wikipedia.org/wiki/Euler%27s_theorem
28 https://en.wikipedia.org/wiki/Carmichael_function
29 https://en.wikipedia.org/wiki/Computational_complexity_theory
30 https://en.wikipedia.org/wiki/Big_O_notation
31 https://en.wikipedia.org/wiki/Logarithm
32 https://en.wikipedia.org/wiki/Parity_bit
33 https://en.wikipedia.org/wiki/Least_significant_bit
34 https://en.wikipedia.org/wiki/Common_Lisp


(declare (integer bits))


(loop for bit-index from 0 below (integer-length bits)
when (logbitp bit-index bits) sum 1))

(defun get-even-parity-bit (number)


(declare (integer number))
(mod (get-number-of-1-bits number) 2))

(defun get-least-significant-bit (number)


(declare (integer number))
(ldb (byte 1 0) number))

(defun make-blum-blum-shub (&key (p 11) (q 23) (s 3))


"Returns a function of no arguments which represents a simple
Blum-Blum-Shub pseudorandom number generator, configured to use the
generator parameters P, Q, and S (seed), and returning three values:
(1) the even parity bit of the number,
(2) the least significant bit of the number,
(3) the number x[n+1].
---
Please note that the parameters P, Q, and S are not checked in
accordance to the conditions described in the article."
(let ((M (* p q)) ;; M = p * q
(x[n] s)) ;; x[-1] = s (the seed)
(declare (integer p q M x[n]))
#'(lambda ()
;; x[n+1] = x[n]^2 mod M
(let ((x[n+1] (mod (* x[n] x[n]) M)))
(declare (integer x[n+1]))
;; Compute the random bit(s) based on x[n+1].
(let ((even-parity-bit (get-even-parity-bit x[n+1]))
(least-significant-bit (get-least-significant-bit x[n+1])))
;; Update the state such that x[n+1] becomes the new x[n].
(setf x[n] x[n+1])
(values even-parity-bit
least-significant-bit
x[n+1]))))))

;; Print the exemplary outputs.


(let ((bbs (make-blum-blum-shub :p 11 :q 23 :s 3)))
(format T "~&Keys: E = even parity, ~
L = least significant")
(format T "~2%")
(format T "~&x[n+1] | E | L")
(format T "~&------------------")
(loop repeat 6 do
(multiple-value-bind (even-parity-bit least-significant-bit x[n+1])
(funcall bbs)
(format T "~&~6d | ~d | ~d"
x[n+1] even-parity-bit least-significant-bit))))


37.3 References
1. B, L; B, M; S, M (1 M 1986). ”A S U-
 P-R N G”. SIAM Journal on Computing.
15 (2): 364–383. doi35 :10.1137/021502536 .
General
• B, L; B, M; S, M (1982). ”C  T
P-R N G”37 . A  C: P-
  CRYPTO '82. P: 61–78. Cite journal requires |journal= (help38 )
• G, M; K, M; D, A (D 2004).
”A R B”. CSX39 10.1.1.90.377940 . Cite journal requires
|journal= (help ) available as PDF42 and gzipped Postscript43
41

37.4 External links


• GMPBBS, a C-language implementation by Mark Rossmiller44
• BlumBlumShub, a Java-language implementation by Mark Rossmiller45
• An implementation in Java46
• Randomness tests47

35 https://en.wikipedia.org/wiki/Doi_(identifier)
36 https://doi.org/10.1137%2F0215025
37 https://www.iacr.org/cryptodb/data/paper.php?pubkey=1751
38 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
39 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
40 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.90.3779
41 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
42 http://www.daimi.au.dk/~mg/mamian/random-bits.pdf
https://web.archive.org/web/20110719120806/http://daimi.au.dk/~mg/mamian/random-
43
bits.ps.gz
44 http://sigmasec.org/2020/01/19/gmpbbs/
45 http://sigmasec.org/2020/01/19/gmpbbs/
46 https://code.google.com/p/javarng/
47 http://www.ciphersbyritter.com/NEWS2/TESTSBBS.HTM

38 Blum–Micali algorithm

The Blum–Micali algorithm is a cryptographically secure pseudorandom number generator1 .
The algorithm gets its security from the difficulty of computing discrete logarithms2 .[1]
Let p be an odd prime, and let g be a primitive root3 modulo p. Let x0 be a seed, and let
x_{i+1} = g^{x_i} mod p.
The ith output of the algorithm is 1 if x_i ≤ (p − 1)/2. Otherwise the output is 0. This is
equivalent to using one bit of x_i as your random number. It has been shown that n − c − 1
bits of x_i can be used if solving the discrete log problem is infeasible even for exponents
with as few as c bits.[2]
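The single-bit construction can be sketched in a few lines of Python (my own illustration, not from the original article; the parameters here are toy values, far too small for the discrete-logarithm problem to be hard):

```python
# Toy parameters: p an odd prime, g a primitive root modulo p, x0 a seed.
p = 23          # illustrative only; a secure p has hundreds of digits
g = 5           # 5 is a primitive root modulo 23
x0 = 7

def blum_micali(x, n):
    """Return n output bits; each step computes x_{i+1} = g^{x_i} mod p
    and outputs 1 if the new state is at most (p - 1) / 2, else 0."""
    threshold = (p - 1) // 2
    bits = []
    for _ in range(n):
        x = pow(g, x, p)
        bits.append(1 if x <= threshold else 0)
    return bits

bits = blum_micali(x0, 8)
```

With parameters this small the state quickly falls into a short cycle, which is exactly why the text insists on a large p.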
In order for this generator to be secure, the prime number p needs to be large enough so that
computing discrete logarithms modulo p is infeasible.[1] To be more precise, any method that
predicts the numbers generated will lead to an algorithm that solves the discrete logarithm
problem for that prime.[3]
There is a paper discussing possible examples of the quantum permanent compromise attack
on the Blum–Micali construction. These attacks illustrate how a previous attack on the
Blum–Micali generator can be extended to the whole Blum–Micali construction, including
the Blum Blum Shub4 and Kaliski5 generators.[4]

38.1 References
1. Bruce Schneier, Applied Cryptography: Protocols, Algorithms, and Source Code in C,
pages 416-417, Wiley; 2nd edition (October 18, 1996), ISBN6 04711170997
2. G, R (2004). ”A I P-R G
B   D L P”. Journal of Cryptology. 18 (2):
91–110. doi8 :10.1007/s00145-004-0215-y9 . ISSN10 0933-279011 .

1 https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
2 https://en.wikipedia.org/wiki/Discrete_logarithms
3 https://en.wikipedia.org/wiki/Primitive_root_modulo_n
4 https://en.wikipedia.org/wiki/Blum_Blum_Shub
5 https://en.wikipedia.org/wiki/Kaliski_generator
6 https://en.wikipedia.org/wiki/ISBN_(identifier)
7 https://en.wikipedia.org/wiki/Special:BookSources/0471117099
8 https://en.wikipedia.org/wiki/Doi_(identifier)
9 https://doi.org/10.1007%2Fs00145-004-0215-y
10 https://en.wikipedia.org/wiki/ISSN_(identifier)
11 http://www.worldcat.org/issn/0933-2790


3. Manuel Blum and Silvio Micali, How to Generate Cryptographically Strong Sequences
of Pseudorandom Bits, SIAM Journal on Computing 13, no. 4 (1984): 850-864. online
(pdf)12 Archived13 2015-02-24 at the Wayback Machine14
4. Elloá B. Guedes, Francisco Marcos de Assis, Bernardo Lula Jr, Examples of the Gen-
eralized Quantum Permanent Compromise Attack to the Blum-Micali Construction
15

38.2 External links


• Notes on the Blum–Micali generator from the Stanford PBC library (archived)16


12 http://www.csee.wvu.edu/~xinl/library/papers/comp/Blum_FOCS1982.pdf
https://web.archive.org/web/20150224041435/http://www.csee.wvu.edu/~xinl/library/
13
papers/comp/Blum_FOCS1982.pdf
14 https://en.wikipedia.org/wiki/Wayback_Machine
15 https://arxiv.org/abs/1012.1776
https://web.archive.org/web/20080216164459/http://crypto.stanford.edu/pbc/notes/
16
crypto/blummicali.xhtml

39 Combinatorics

Branch of discrete mathematics. Not to be confused with Combinatoriality1 .


Combinatorics is an area of mathematics2 primarily concerned with counting, both as
a means and an end in obtaining results, and certain properties of finite3 structures4 . It
is closely related to many other areas of mathematics and has many applications ranging
from logic5 to statistical physics6 , from evolutionary biology7 to computer science8 , etc.
To fully understand the scope of combinatorics requires a great deal of further amplification,
the details of which are not universally agreed upon.[1] According to H.J. Ryser9 , a definition
of the subject is difficult because it crosses so many mathematical subdivisions.[2] Insofar
as an area can be described by the types of problems it addresses, combinatorics is involved
with
• the enumeration (counting) of specified structures, sometimes referred to as arrangements
or configurations in a very general sense, associated with finite systems,
• the existence of such structures that satisfy certain given criteria,
• the construction of these structures, perhaps in many ways, and
• optimization, finding the ”best” structure or solution among several possibilities, be it the
”largest”, ”smallest” or satisfying some other optimality criterion.
Leon Mirsky10 has said: ”combinatorics is a range of linked studies which have something
in common and yet diverge widely in their objectives, their methods, and the degree of co-
herence they have attained.”[3] One way to define combinatorics is, perhaps, to describe its
subdivisions with their problems and techniques. This is the approach that is used below.
However, there are also purely historical reasons for including or not including some top-
ics under the combinatorics umbrella.[4] Although primarily concerned with finite systems,
some combinatorial questions and techniques can be extended to an infinite (specifically,
countable11 ) but discrete12 setting.

1 https://en.wikipedia.org/wiki/Combinatoriality
2 https://en.wikipedia.org/wiki/Mathematics
3 https://en.wikipedia.org/wiki/Finite_set
4 https://en.wikipedia.org/wiki/Mathematical_structure
5 https://en.wikipedia.org/wiki/Logic
6 https://en.wikipedia.org/wiki/Statistical_physics
7 https://en.wikipedia.org/wiki/Evolutionary_biology
8 https://en.wikipedia.org/wiki/Computer_science
9 https://en.wikipedia.org/wiki/H.J._Ryser
10 https://en.wikipedia.org/wiki/Leon_Mirsky
11 https://en.wikipedia.org/wiki/Countable_set
12 https://en.wikipedia.org/wiki/Discrete_mathematics


Combinatorics is well known for the breadth of the problems it tackles. Combinatorial prob-
lems arise in many areas of pure mathematics13 , notably in algebra14 , probability theory15 ,
topology16 , and geometry17 ,[5] as well as in its many application areas. Many combinato-
rial questions have historically been considered in isolation, giving an ad hoc solution to a
problem arising in some mathematical context. In the later twentieth century, however,
powerful and general theoretical methods were developed, making combinatorics into an in-
dependent branch of mathematics in its own right.[6] One of the oldest and most accessible
parts of combinatorics is graph theory18 , which by itself has numerous natural connections
to other areas. Combinatorics is used frequently in computer science to obtain formulas
and estimates in the analysis of algorithms19 .
A mathematician20 who studies combinatorics is called a combinatorialist.

13 https://en.wikipedia.org/wiki/Pure_mathematics
14 https://en.wikipedia.org/wiki/Algebra
15 https://en.wikipedia.org/wiki/Probability_theory
16 https://en.wikipedia.org/wiki/Topology
17 https://en.wikipedia.org/wiki/Geometry
18 https://en.wikipedia.org/wiki/Graph_theory
19 https://en.wikipedia.org/wiki/Analysis_of_algorithms
20 https://en.wikipedia.org/wiki/Mathematician


39.1 History

Figure 86 An example of change ringing (with six bells), one of the earliest nontrivial
results in graph theory.

Main article: History of combinatorics21

Basic combinatorial concepts and enumerative
results appeared throughout the ancient world22 . In the 6th century BCE, ancient Indian23

21 https://en.wikipedia.org/wiki/History_of_combinatorics
22 https://en.wikipedia.org/wiki/Ancient_history
23 https://en.wikipedia.org/wiki/Timeline_of_Indian_history


physician24 Sushruta25 asserts in Sushruta Samhita26 that 63 combinations can be made


out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1
possibilities. Greek27 historian28 Plutarch29 discusses an argument between Chrysippus30
(3rd century BCE) and Hipparchus31 (2nd century BCE) of a rather delicate enumerative
problem, which was later shown to be related to Schröder–Hipparchus numbers32 .[7][8] In
the Ostomachion33 , Archimedes34 (3rd century BCE) considers a tiling puzzle35 .
In the Middle Ages36 , combinatorics continued to be studied, largely outside of the Euro-
pean civilization37 . The Indian38 mathematician Mahāvīra39 (c. 850) provided formulae for
the number of permutations40 and combinations41 ,[9][10] and these formulas may have been
familiar to Indian mathematicians as early as the 6th century CE.[11] The philosopher42 and
astronomer43 Rabbi Abraham ibn Ezra44 (c. 1140) established the symmetry of binomial
coefficients45 , while a closed formula was obtained later by the talmudist46 and mathe-
matician47 Levi ben Gerson48 (better known as Gersonides), in 1321.[12] The arithmetical
triangle—a graphical diagram showing relationships among the binomial coefficients—was
presented by mathematicians in treatises dating as far back as the 10th century, and would
eventually become known as Pascal's triangle49 . Later, in Medieval England50 , campanol-
ogy51 provided examples of what is now known as Hamiltonian cycles52 in certain Cayley
graphs53 on permutations.[13][14]

24 https://en.wikipedia.org/wiki/Physician
25 https://en.wikipedia.org/wiki/Sushruta
26 https://en.wikipedia.org/wiki/Sushruta_Samhita
27 https://en.wikipedia.org/wiki/Ancient_Greece
28 https://en.wikipedia.org/wiki/Historian
29 https://en.wikipedia.org/wiki/Plutarch
30 https://en.wikipedia.org/wiki/Chrysippus
31 https://en.wikipedia.org/wiki/Hipparchus
32 https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Hipparchus_number
33 https://en.wikipedia.org/wiki/Ostomachion
34 https://en.wikipedia.org/wiki/Archimedes
35 https://en.wikipedia.org/wiki/Tiling_puzzle
36 https://en.wikipedia.org/wiki/Middle_Ages
37 https://en.wikipedia.org/wiki/Culture_of_Europe
38 https://en.wikipedia.org/wiki/India
39 https://en.wikipedia.org/wiki/Mah%C4%81v%C4%ABra_(mathematician)
40 https://en.wikipedia.org/wiki/Permutation
41 https://en.wikipedia.org/wiki/Combination
42 https://en.wikipedia.org/wiki/Philosopher
43 https://en.wikipedia.org/wiki/Astronomer
44 https://en.wikipedia.org/wiki/Abraham_ibn_Ezra
45 https://en.wikipedia.org/wiki/Binomial_coefficient
46 https://en.wikipedia.org/wiki/Talmudist
47 https://en.wikipedia.org/wiki/Mathematician
48 https://en.wikipedia.org/wiki/Levi_ben_Gerson
49 https://en.wikipedia.org/wiki/Pascal%27s_triangle
50 https://en.wikipedia.org/wiki/Medieval_England
51 https://en.wikipedia.org/wiki/Campanology
52 https://en.wikipedia.org/wiki/Hamiltonian_cycle
53 https://en.wikipedia.org/wiki/Cayley_graph


During the Renaissance54 , together with the rest of mathematics and the sciences55 , com-
binatorics enjoyed a rebirth. Works of Pascal56 , Newton57 , Jacob Bernoulli58 and Euler59
became foundational in the emerging field. In modern times, the works of J.J. Sylvester60
(late 19th century) and Percy MacMahon61 (early 20th century) helped lay the foundation
for enumerative62 and algebraic combinatorics63 . Graph theory64 also enjoyed an explosion
of interest at the same time, especially in connection with the four color problem65 .
In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led
to establishment of dozens of new journals and conferences in the subject.[15] In part, the
growth was spurred by new connections and applications to other fields, ranging from alge-
bra to probability, from functional analysis66 to number theory67 , etc. These connections
shed the boundaries between combinatorics and parts of mathematics and theoretical com-
puter science, but at the same time led to a partial fragmentation of the field.

39.2 Approaches and subfields of combinatorics

39.2.1 Enumerative combinatorics

Figure 87 Five binary trees on three vertices, an example of Catalan numbers.

Main article: Enumerative combinatorics68

Enumerative combinatorics is the most classical
area of combinatorics and concentrates on counting the number of certain combinatorial
objects. Although counting the number of elements in a set is a rather broad mathemat-
ical problem69 , many of the problems that arise in applications have a relatively simple
combinatorial description. Fibonacci numbers70 is the basic example of a problem in enu-

54 https://en.wikipedia.org/wiki/Renaissance
55 https://en.wikipedia.org/wiki/Science
56 https://en.wikipedia.org/wiki/Blaise_Pascal
57 https://en.wikipedia.org/wiki/Isaac_Newton
58 https://en.wikipedia.org/wiki/Jacob_Bernoulli
59 https://en.wikipedia.org/wiki/Leonhard_Euler
60 https://en.wikipedia.org/wiki/James_Joseph_Sylvester
61 https://en.wikipedia.org/wiki/Percy_Alexander_MacMahon
62 https://en.wikipedia.org/wiki/Enumerative_combinatorics
63 https://en.wikipedia.org/wiki/Algebraic_combinatorics
64 https://en.wikipedia.org/wiki/Graph_theory
65 https://en.wikipedia.org/wiki/Four_color_problem
66 https://en.wikipedia.org/wiki/Functional_analysis
67 https://en.wikipedia.org/wiki/Number_theory
68 https://en.wikipedia.org/wiki/Enumerative_combinatorics
69 https://en.wikipedia.org/wiki/Mathematical_problem
70 https://en.wikipedia.org/wiki/Fibonacci_numbers

499
Combinatorics

merative combinatorics. The twelvefold way71 provides a unified framework for counting
permutations72 , combinations73 and partitions74 .
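The two counting sequences mentioned above lend themselves to a short illustration. The following Python sketch (an illustrative aside, not part of the original text) computes Catalan numbers from their standard recurrence, with C(3) = 5 matching the five binary trees of Figure 87, and Fibonacci numbers by iteration:

```python
def catalan(n):
    # standard recurrence: C(0) = 1, C(m) = sum of C(i) * C(m-1-i) for i < m
    c = [1] * (n + 1)
    for m in range(1, n + 1):
        c[m] = sum(c[i] * c[m - 1 - i] for i in range(m))
    return c[n]

def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(catalan(3))                            # 5, the binary trees of Figure 87
print([fibonacci(i) for i in range(1, 8)])   # [1, 1, 2, 3, 5, 8, 13]
```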

39.2.2 Analytic combinatorics

Main article: Analytic combinatorics75 Analytic combinatorics76 concerns the enumeration
of combinatorial structures using tools from complex analysis77 and probability theory78 .
In contrast with enumerative combinatorics, which uses explicit combinatorial formulae
and generating functions79 to describe the results, analytic combinatorics aims at obtaining
asymptotic formulae80 .
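To make the contrast concrete, one standard example (not taken from the text above) compares the exact Catalan numbers with their classical asymptotic estimate Cn ~ 4^n / (n^(3/2) √π); the ratio of estimate to exact value tends to 1 as n grows:

```python
import math

def catalan_exact(n):
    return math.comb(2 * n, n) // (n + 1)

def catalan_asymptotic(n):
    # classical asymptotic estimate C_n ~ 4^n / (n^(3/2) * sqrt(pi))
    return 4 ** n / (n ** 1.5 * math.sqrt(math.pi))

for n in (10, 50, 100):
    ratio = catalan_asymptotic(n) / catalan_exact(n)
    print(n, round(ratio, 5))   # approaches 1 as n grows
```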

71 https://en.wikipedia.org/wiki/Twelvefold_way
72 https://en.wikipedia.org/wiki/Permutations
73 https://en.wikipedia.org/wiki/Combinations
74 https://en.wikipedia.org/wiki/Partition_of_a_set
75 https://en.wikipedia.org/wiki/Analytic_combinatorics
76 https://en.wikipedia.org/wiki/Analytic_combinatorics
77 https://en.wikipedia.org/wiki/Complex_analysis
78 https://en.wikipedia.org/wiki/Probability_theory
79 https://en.wikipedia.org/wiki/Generating_functions
80 https://en.wikipedia.org/wiki/Asymptotic_analysis


39.2.3 Partition theory

Figure 88 A plane partition.

Main article: Partition theory81 Partition theory studies various enumeration and asymp-
totic problems related to integer partitions82 , and is closely related to q-series83 , special
functions84 and orthogonal polynomials85 . Originally a part of number theory86 and anal-
ysis87 , it is now considered a part of combinatorics or an independent field. It incorporates
the bijective approach88 and various tools in analysis and analytic number theory89 and has
connections with statistical mechanics90 .
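For illustration, the partition counting function p(n), which counts the integer partitions mentioned above, can be computed by a short dynamic program (a standard technique, sketched here in Python rather than taken from the text):

```python
def partitions(n):
    # p[t] counts the partitions of t using only the parts introduced so far
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

print([partitions(k) for k in range(1, 8)])  # [1, 2, 3, 5, 7, 11, 15]
```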

81 https://en.wikipedia.org/wiki/Partition_(number_theory)
82 https://en.wikipedia.org/wiki/Integer_partition
83 https://en.wikipedia.org/wiki/Q-series
84 https://en.wikipedia.org/wiki/Special_functions
85 https://en.wikipedia.org/wiki/Orthogonal_polynomials
86 https://en.wikipedia.org/wiki/Number_theory
87 https://en.wikipedia.org/wiki/Analysis
88 https://en.wikipedia.org/wiki/Bijective_proof
89 https://en.wikipedia.org/wiki/Analytic_number_theory
90 https://en.wikipedia.org/wiki/Statistical_mechanics


39.2.4 Graph theory

Figure 89 Petersen graph.

Main article: Graph theory91 Graphs are fundamental objects in combinatorics. Consider-
ations of graph theory range from enumeration (e.g., the number of graphs on n vertices
with k edges) to existing structures (e.g., Hamiltonian cycles) to algebraic representations
(e.g., given a graph G and two numbers x and y, does the Tutte polynomial92 TG (x,y)
have a combinatorial interpretation?). Although there are very strong connections between
graph theory and combinatorics, they are sometimes thought of as separate subjects.[16]

91 https://en.wikipedia.org/wiki/Graph_theory
92 https://en.wikipedia.org/wiki/Tutte_polynomial


While combinatorial methods apply to many graph theory problems, the two disciplines are
generally used to seek solutions to different types of problems.
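The enumeration example above is easiest for labeled graphs: a graph on n labeled vertices is just a choice of edges among the n(n−1)/2 vertex pairs, so the count of such graphs with k edges is a single binomial coefficient (counting unlabeled graphs is considerably harder). A short illustrative sketch:

```python
import math

def labeled_graphs(n, k):
    # choose which k of the n*(n-1)/2 possible edges are present
    return math.comb(n * (n - 1) // 2, k)

print(labeled_graphs(4, 3))                          # 20
print(sum(labeled_graphs(4, k) for k in range(7)))   # 64 = 2**6 graphs in all
```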

39.2.5 Design theory

Main article: Combinatorial design93 Design theory is a study of combinatorial designs94 ,
which are collections of subsets with certain intersection95 properties. Block designs96 are
combinatorial designs of a special type. This area is one of the oldest parts of combinatorics;
an early example is Kirkman's schoolgirl problem97 , proposed in 1850. The solution of the
problem is a special case of a Steiner system98 , and such systems play an important role in
the classification of finite simple groups99 . The area has further connections to coding theory100 and geometric
combinatorics.

39.2.6 Finite geometry

Main article: Finite geometry101 Finite geometry is the study of geometric systems102 having
only a finite number of points. Structures analogous to those found in continuous geometries
(Euclidean plane103 , real projective space104 , etc.) but defined combinatorially are the main
items studied. This area provides a rich source of examples for design theory105 . It should
not be confused with discrete geometry (combinatorial geometry106 ).

93 https://en.wikipedia.org/wiki/Combinatorial_design
94 https://en.wikipedia.org/wiki/Combinatorial_design
95 https://en.wikipedia.org/wiki/Set_intersection
96 https://en.wikipedia.org/wiki/Block_design
97 https://en.wikipedia.org/wiki/Kirkman%27s_schoolgirl_problem
98 https://en.wikipedia.org/wiki/Steiner_system
99 https://en.wikipedia.org/wiki/Classification_of_finite_simple_groups
100 https://en.wikipedia.org/wiki/Coding_theory
101 https://en.wikipedia.org/wiki/Finite_geometry
102 https://en.wikipedia.org/wiki/Geometry
103 https://en.wikipedia.org/wiki/Euclidean_plane
104 https://en.wikipedia.org/wiki/Real_projective_space
105 https://en.wikipedia.org/wiki/Combinatorial_design
106 https://en.wikipedia.org/wiki/Combinatorial_geometry


39.2.7 Order theory

Figure 90 Hasse diagram of the powerset of {x,y,z} ordered by inclusion.

Main article: Order theory107 Order theory is the study of partially ordered sets108 , both
finite and infinite. Various examples of partial orders appear in algebra109 , geometry, num-
ber theory and throughout combinatorics and graph theory. Notable classes and examples
of partial orders include lattices110 and Boolean algebras111 .

39.2.8 Matroid theory

Main article: Matroid theory112 Matroid theory abstracts part of geometry113 . It studies
the properties of sets (usually, finite sets) of vectors in a vector space114 that do not depend

107 https://en.wikipedia.org/wiki/Order_theory
108 https://en.wikipedia.org/wiki/Partially_ordered_sets
109 https://en.wikipedia.org/wiki/Abstract_algebra
110 https://en.wikipedia.org/wiki/Lattice_(order)
111 https://en.wikipedia.org/wiki/Boolean_algebras
112 https://en.wikipedia.org/wiki/Matroid_theory
113 https://en.wikipedia.org/wiki/Geometry
114 https://en.wikipedia.org/wiki/Vector_space


on the particular coefficients in a linear dependence115 relation. Not only the structure but
also enumerative properties belong to matroid theory. Matroid theory was introduced by
Hassler Whitney116 and studied as a part of order theory. It is now an independent field of
study with a number of connections with other parts of combinatorics.

39.2.9 Extremal combinatorics

Main article: Extremal combinatorics117 Extremal combinatorics studies extremal questions
on set systems118 . The types of questions addressed in this case are about the largest possible
graph which satisfies certain properties. For example, the largest triangle-free graph119 on
2n vertices is a complete bipartite graph120 Kn,n . Often it is too hard even to find the
extremal answer f(n) exactly and one can only give an asymptotic estimate121 .
Ramsey theory122 is another part of extremal combinatorics. It states that any sufficiently
large123 configuration will contain some sort of order. It is an advanced generalization of
the pigeonhole principle124 .
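The claim about triangle-free graphs (a special case of Mantel's theorem) can be checked exhaustively for very small cases. The brute-force sketch below is illustrative only and is feasible just for a handful of vertices; it confirms that the maximum number of edges in a triangle-free graph on 2n vertices is n², as achieved by Kn,n :

```python
from itertools import combinations

def max_triangle_free_edges(v):
    # exhaustive search over all graphs on v labeled vertices (tiny v only)
    pairs = list(combinations(range(v), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        if len(edges) <= best:
            continue
        adj = {u: set() for u in range(v)}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        # an edge whose endpoints share a neighbor closes a triangle
        if not any(adj[a] & adj[b] for a, b in edges):
            best = len(edges)
    return best

for n in (1, 2, 3):
    print(2 * n, max_triangle_free_edges(2 * n))  # n^2 edges, matching K_{n,n}
```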

115 https://en.wikipedia.org/wiki/Linear_independence
116 https://en.wikipedia.org/wiki/Hassler_Whitney
117 https://en.wikipedia.org/wiki/Extremal_combinatorics
118 https://en.wikipedia.org/wiki/Set_system
119 https://en.wikipedia.org/wiki/Triangle-free_graph
120 https://en.wikipedia.org/wiki/Complete_bipartite_graph
121 https://en.wikipedia.org/wiki/Asymptotic_analysis
122 https://en.wikipedia.org/wiki/Ramsey_theory
123 https://en.wikipedia.org/wiki/Sufficiently_large
124 https://en.wikipedia.org/wiki/Pigeonhole_principle


39.2.10 Probabilistic combinatorics

Figure 91 Self-avoiding walk in a square grid graph.

Main article: Probabilistic method125 In probabilistic combinatorics, the questions are of
the following type: what is the probability of a certain property for a random discrete ob-
ject, such as a random graph126 ? For instance, what is the average number of triangles in
a random graph? Probabilistic methods are also used to determine the existence of com-
binatorial objects with certain prescribed properties (for which explicit examples might be
difficult to find), simply by observing that the probability of randomly selecting an object
with those properties is greater than 0. This approach (often referred to as the probabilistic

125 https://en.wikipedia.org/wiki/Probabilistic_method
126 https://en.wikipedia.org/wiki/Random_graph


method127 ) proved highly effective in applications to extremal combinatorics and graph
theory. A closely related area is the study of finite Markov chains128 , especially on combi-
natorial objects. Here again probabilistic tools are used to estimate the mixing time129 .
Often associated with Paul Erdős130 , who did the pioneering work on the subject, proba-
bilistic combinatorics was traditionally viewed as a set of tools to study problems in other
parts of combinatorics. However, with the growth of applications to analyze algorithms131
in computer science132 , as well as classical probability, additive number theory133 , and
probabilistic number theory134 , the area recently grew to become an independent field of
combinatorics.
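The average-number-of-triangles question has a clean answer in the Erdős–Rényi model G(n, p): each of the C(n, 3) vertex triples forms a triangle with probability p³, so by linearity of expectation the average is C(n, 3)·p³. A small Monte Carlo sketch (illustrative only, not from the original text) agrees with this:

```python
import random
from itertools import combinations
from math import comb

def triangle_count(n, p):
    # sample G(n, p): each vertex pair is an edge independently with prob. p
    edges = {e for e in combinations(range(n), 2) if random.random() < p}
    return sum((a, b) in edges and (a, c) in edges and (b, c) in edges
               for a, b, c in combinations(range(n), 3))

n, p, trials = 10, 0.5, 2000
average = sum(triangle_count(n, p) for _ in range(trials)) / trials
print(average, comb(n, 3) * p ** 3)  # empirical mean vs. exact expectation 15.0
```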

39.2.11 Algebraic combinatorics

Figure 92 Young diagram of a partition (5,4,1).

127 https://en.wikipedia.org/wiki/Probabilistic_method
128 https://en.wikipedia.org/wiki/Markov_chains
129 https://en.wikipedia.org/wiki/Markov_chain_mixing_time
130 https://en.wikipedia.org/wiki/Paul_Erd%C5%91s
131 https://en.wikipedia.org/wiki/Analysis_of_algorithms
132 https://en.wikipedia.org/wiki/Computer_science
133 https://en.wikipedia.org/wiki/Additive_number_theory
134 https://en.wikipedia.org/wiki/Probabilistic_number_theory


Main article: Algebraic combinatorics135 Algebraic combinatorics is an area of mathemat-
ics136 that employs methods of abstract algebra137 , notably group theory138 and represen-
tation theory139 , in various combinatorial contexts and, conversely, applies combinatorial
techniques to problems in algebra140 . Algebraic combinatorics is continuously expanding its
scope, in both topics and techniques, and can be seen as the area of mathematics where the
interaction of combinatorial and algebraic methods is particularly strong and significant.

39.2.12 Combinatorics on words

Figure 93 Construction of a Thue–Morse infinite word.

Main article: Combinatorics on words141 Combinatorics on words deals with formal lan-
guages142 . It arose independently within several branches of mathematics, including number
theory143 , group theory144 and probability145 . It has applications to enumerative combina-
torics, fractal analysis146 , theoretical computer science147 , automata theory148 , and linguis-
tics149 . While many applications are new, the classical Chomsky–Schützenberger hierar-
chy150 of classes of formal grammars151 is perhaps the best-known result in the field.
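The Thue–Morse word of Figure 93 is a standard object of this field: its i-th letter is the parity of the number of 1-bits in the binary expansion of i, and the resulting infinite word is famously overlap-free. A short sketch (illustrative, not from the text):

```python
def thue_morse(n):
    # t(i) = parity of the number of 1 bits in the binary expansion of i
    return "".join(str(bin(i).count("1") % 2) for i in range(n))

print(thue_morse(16))   # 0110100110010110, as constructed in Figure 93
```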

135 https://en.wikipedia.org/wiki/Algebraic_combinatorics
136 https://en.wikipedia.org/wiki/Mathematics
137 https://en.wikipedia.org/wiki/Abstract_algebra
138 https://en.wikipedia.org/wiki/Group_theory
139 https://en.wikipedia.org/wiki/Representation_theory
140 https://en.wikipedia.org/wiki/Abstract_algebra
141 https://en.wikipedia.org/wiki/Combinatorics_on_words
142 https://en.wikipedia.org/wiki/Formal_language
143 https://en.wikipedia.org/wiki/Number_theory
144 https://en.wikipedia.org/wiki/Group_theory
145 https://en.wikipedia.org/wiki/Probability
146 https://en.wikipedia.org/wiki/Fractal_analysis
147 https://en.wikipedia.org/wiki/Theoretical_computer_science
148 https://en.wikipedia.org/wiki/Automata_theory
149 https://en.wikipedia.org/wiki/Linguistics
150 https://en.wikipedia.org/wiki/Chomsky%E2%80%93Sch%C3%BCtzenberger_hierarchy
151 https://en.wikipedia.org/wiki/Formal_grammar


39.2.13 Geometric combinatorics

Figure 94 An icosahedron.

Main article: Geometric combinatorics152 Geometric combinatorics is related to convex153
and discrete geometry154 , in particular polyhedral combinatorics155 . It asks, for example,
how many faces of each dimension a convex polytope156 can have. Metric157 properties of
polytopes play an important role as well, e.g. the Cauchy theorem158 on the rigidity of

152 https://en.wikipedia.org/wiki/Geometric_combinatorics
153 https://en.wikipedia.org/wiki/Convex_geometry
154 https://en.wikipedia.org/wiki/Discrete_geometry
155 https://en.wikipedia.org/wiki/Polyhedral_combinatorics
156 https://en.wikipedia.org/wiki/Convex_polytope
157 https://en.wikipedia.org/wiki/Metric_geometry
158 https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(geometry)


convex polytopes. Special polytopes are also considered, such as permutohedra159 , associ-
ahedra160 and Birkhoff polytopes161 . Combinatorial geometry162 is an old fashioned name
for discrete geometry.
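As a concrete face-counting example, the icosahedron of Figure 94 has 12 vertices, 30 edges and 20 faces, and like every three-dimensional convex polytope it satisfies Euler's formula V − E + F = 2. The check below uses the standard counts for the five Platonic solids (an illustrative aside, not part of the original text):

```python
# (V, E, F) counts for the five Platonic solids
platonic = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (v, e, f) in platonic.items():
    assert v - e + f == 2, name   # Euler's polyhedron formula
print("V - E + F = 2 holds for all five Platonic solids")
```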

39.2.14 Topological combinatorics

Figure 95 Splitting a necklace with two cuts.

159 https://en.wikipedia.org/wiki/Permutohedron
160 https://en.wikipedia.org/wiki/Associahedron
161 https://en.wikipedia.org/wiki/Birkhoff_polytope
162 https://en.wikipedia.org/wiki/Combinatorial_geometry


Main article: Topological combinatorics163 Combinatorial analogs of concepts and methods
in topology164 are used to study graph coloring165 , fair division166 , partitions167 , partially
ordered sets168 , decision trees169 , necklace problems170 and discrete Morse theory171 . It
should not be confused with combinatorial topology172 which is an older name for algebraic
topology173 .

39.2.15 Arithmetic combinatorics

Main article: Arithmetic combinatorics174 Arithmetic combinatorics arose out of the inter-
play between number theory175 , combinatorics, ergodic theory176 , and harmonic analysis177 .
It is about combinatorial estimates associated with arithmetic operations (addition, sub-
traction, multiplication, and division). Additive number theory178 (sometimes also called
additive combinatorics) refers to the special case when only the operations of addition and
subtraction are involved. One important technique in arithmetic combinatorics is the er-
godic theory179 of dynamical systems180 .

39.2.16 Infinitary combinatorics

Main article: Infinitary combinatorics181 Infinitary combinatorics, or combinatorial set the-
ory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory182 , an
area of mathematical logic183 , but uses tools and ideas from both set theory and extremal
combinatorics.
Gian-Carlo Rota184 used the name continuous combinatorics[17] to describe geometric prob-
ability185 , since there are many analogies between counting and measure.

163 https://en.wikipedia.org/wiki/Topological_combinatorics
164 https://en.wikipedia.org/wiki/Topology
165 https://en.wikipedia.org/wiki/Graph_coloring
166 https://en.wikipedia.org/wiki/Fair_division
167 https://en.wikipedia.org/wiki/Partition_of_a_set
168 https://en.wikipedia.org/wiki/Partially_ordered_set
169 https://en.wikipedia.org/wiki/Decision_tree
170 https://en.wikipedia.org/wiki/Necklace_problem
171 https://en.wikipedia.org/wiki/Discrete_Morse_theory
172 https://en.wikipedia.org/wiki/Combinatorial_topology
173 https://en.wikipedia.org/wiki/Algebraic_topology
174 https://en.wikipedia.org/wiki/Arithmetic_combinatorics
175 https://en.wikipedia.org/wiki/Number_theory
176 https://en.wikipedia.org/wiki/Ergodic_theory
177 https://en.wikipedia.org/wiki/Harmonic_analysis
178 https://en.wikipedia.org/wiki/Additive_number_theory
179 https://en.wikipedia.org/wiki/Ergodic_theory
180 https://en.wikipedia.org/wiki/Dynamical_system
181 https://en.wikipedia.org/wiki/Infinitary_combinatorics
182 https://en.wikipedia.org/wiki/Set_theory
183 https://en.wikipedia.org/wiki/Mathematical_logic
184 https://en.wikipedia.org/wiki/Gian-Carlo_Rota
185 https://en.wikipedia.org/wiki/Geometric_probability


39.3 Related fields

Figure 96 Kissing spheres are connected to both coding theory and discrete geometry.

39.3.1 Combinatorial optimization

Combinatorial optimization186 is the study of optimization on discrete and combinatorial
objects. It started as a part of combinatorics and graph theory, but is now viewed as a

186 https://en.wikipedia.org/wiki/Combinatorial_optimization


branch of applied mathematics and computer science, related to operations research187 ,
algorithm theory188 and computational complexity theory189 .

39.3.2 Coding theory

Coding theory190 started as a part of design theory with early combinatorial constructions
of error-correcting codes191 . The main idea of the subject is to design efficient and reliable
methods of data transmission. It is now a large field of study, part of information theory192 .

39.3.3 Discrete and computational geometry

Discrete geometry193 (also called combinatorial geometry) also began as a part of com-
binatorics, with early results on convex polytopes194 and kissing numbers195 . With the
emergence of applications of discrete geometry to computational geometry196 , these two
fields partially merged and became a separate field of study. There remain many connec-
tions with geometric and topological combinatorics, which themselves can be viewed as
outgrowths of the early discrete geometry.

39.3.4 Combinatorics and dynamical systems

Combinatorial aspects of dynamical systems197 is another emerging field. Here dynami-
cal systems can be defined on combinatorial objects. See for example graph dynamical
system198 .

39.3.5 Combinatorics and physics

There are increasing interactions between combinatorics and physics199 , particularly statis-
tical physics200 . Examples include an exact solution of the Ising model201 , and a connection

187 https://en.wikipedia.org/wiki/Operations_research
188 https://en.wikipedia.org/wiki/Analysis_of_algorithms
189 https://en.wikipedia.org/wiki/Computational_complexity_theory
190 https://en.wikipedia.org/wiki/Coding_theory
191 https://en.wikipedia.org/wiki/Error-correcting_code
192 https://en.wikipedia.org/wiki/Information_theory
193 https://en.wikipedia.org/wiki/Discrete_geometry
194 https://en.wikipedia.org/wiki/Convex_polytope
195 https://en.wikipedia.org/wiki/Kissing_number
196 https://en.wikipedia.org/wiki/Computational_geometry
197 https://en.wikipedia.org/wiki/Combinatorics_and_dynamical_systems
198 https://en.wikipedia.org/wiki/Graph_dynamical_system
199 https://en.wikipedia.org/wiki/Combinatorics_and_physics
200 https://en.wikipedia.org/wiki/Statistical_physics
201 https://en.wikipedia.org/wiki/Ising_model


between the Potts model202 on one hand, and the chromatic203 and Tutte polynomials204
on the other hand.

39.4 See also

• Mathematics portal205
• Combinatorial biology206
• Combinatorial chemistry207
• Combinatorial data analysis208
• Combinatorial game theory209
• Combinatorial group theory210
• List of combinatorics topics211
• Phylogenetics212
• Polynomial method in combinatorics213

39.5 Notes
1. P, I. ”W  C?”214 . R 1 N 2017.
2. Ryser 1963215 , p. 2
3. M, L (1979), ”B R”216 (PDF), Bulletin (New Series) of the
American Mathematical Society, 1: 380–388, doi217 :10.1090/S0273-0979-1979-14606-
8218
4. R, G C (1969). Discrete Thoughts. Birkhaüser. p. 50. ... combina-
torial theory has been the mother of several of the more active branches of today's
mathematics, which have become independent ... . The typical ... case of this is
algebraic topology (formerly known as combinatorial topology)
5. Björner and Stanley, p. 2

202 https://en.wikipedia.org/wiki/Potts_model
203 https://en.wikipedia.org/wiki/Chromatic_polynomial
204 https://en.wikipedia.org/wiki/Tutte_polynomial
205 https://en.wikipedia.org/wiki/Portal:Mathematics
206 https://en.wikipedia.org/wiki/Combinatorial_biology
207 https://en.wikipedia.org/wiki/Combinatorial_chemistry
208 https://en.wikipedia.org/wiki/Combinatorial_data_analysis
209 https://en.wikipedia.org/wiki/Combinatorial_game_theory
210 https://en.wikipedia.org/wiki/Combinatorial_group_theory
211 https://en.wikipedia.org/wiki/List_of_combinatorics_topics
212 https://en.wikipedia.org/wiki/Phylogenetics
213 https://en.wikipedia.org/wiki/Polynomial_method_in_combinatorics
214 http://www.math.ucla.edu/~pak/hidden/papers/Quotes/Combinatorics-quotes.htm
215 #CITEREFRyser1963
216 http://www.ams.org/journals/bull/1979-01-02/S0273-0979-1979-14606-8/S0273-0979-1979-14606-8.pdf
217 https://en.wikipedia.org/wiki/Doi_(identifier)
218 https://doi.org/10.1090%2FS0273-0979-1979-14606-8


6. L, L (1979). Combinatorial Problems and Exercises219 . N-


H. ISBN220 9780821842621221 . In my opinion, combinatorics is now growing
out of this early stage.
7. Stanley, Richard P.222 ; ”Hipparchus, Plutarch, Schröder, and Hough”, American Math-
ematical Monthly 104 (1997), no. 4, 344–350.
8. H, L; K, M; L, S (1998). ”O  S-
 N  P”. The American Mathematical Monthly. 105 (5): 446.
doi223 :10.1080/00029890.1998.12004906224 .
9. O'C, J J.225 ; R, E F.226 , ”C”227 , Mac-
Tutor History of Mathematics archive228 , U  S A229 .
10. P, T K. (2000). ”T M A 
A I M”. I S, H (.). Mathematics Across
Cultures: The History of Non-Western Mathematics230 . N: K
A P. . 417. ISBN231 978-1-4020-0260-1232 .
11. B, N L. (1979). ”T R  C”. Historia Mathe-
matica. 6 (2): 109–136. doi233 :10.1016/0315-0860(79)90074-0234 .
12. M, L.E. (1974), Probability Theory: A Historical Sketch235 , A
P, . 35, ISBN236 978-1-4832-1863-2237 . (Translation from 1967 Russian ed.)
13. W, A T. (1987). ”R  C”. The American Mathematical
Monthly. 94 (8): 721–746. doi238 :10.1080/00029890.1987.12000711239 .
14. W, A T. (1996). ”F S: T F G
T?”. The American Mathematical Monthly. 103 (9): 771–778.
doi240 :10.1080/00029890.1996.12004816241 .
15. See Journals in Combinatorics and Graph Theory242

219 https://books.google.com/?id=ueq1CwAAQBAJ&pg=PP1
220 https://en.wikipedia.org/wiki/ISBN_(identifier)
221 https://en.wikipedia.org/wiki/Special:BookSources/9780821842621
222 https://en.wikipedia.org/wiki/Richard_P._Stanley
223 https://en.wikipedia.org/wiki/Doi_(identifier)
224 https://doi.org/10.1080%2F00029890.1998.12004906
225 https://en.wikipedia.org/wiki/John_J._O%27Connor_(mathematician)
226 https://en.wikipedia.org/wiki/Edmund_F._Robertson
227 http://www-history.mcs.st-andrews.ac.uk/Biographies/Mahavira.html
228 https://en.wikipedia.org/wiki/MacTutor_History_of_Mathematics_archive
229 https://en.wikipedia.org/wiki/University_of_St_Andrews
230 https://books.google.com/?id=2hTyfurOH8AC
231 https://en.wikipedia.org/wiki/ISBN_(identifier)
232 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4020-0260-1
233 https://en.wikipedia.org/wiki/Doi_(identifier)
234 https://doi.org/10.1016%2F0315-0860%2879%2990074-0
235 https://books.google.com/?id=2ZbiBQAAQBAJ&pg=PA35
236 https://en.wikipedia.org/wiki/ISBN_(identifier)
237 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4832-1863-2
238 https://en.wikipedia.org/wiki/Doi_(identifier)
239 https://doi.org/10.1080%2F00029890.1987.12000711
240 https://en.wikipedia.org/wiki/Doi_(identifier)
241 https://doi.org/10.1080%2F00029890.1996.12004816
242 http://www.math.iit.edu/~kaul/Journals.html#CGT


16. Sanders, Daniel P.; 2-Digit MSC Comparison243 Archived244 2008-12-31 at the Way-
back Machine245
17. Continuous and profinite combinatorics246

39.6 References
• Björner, Anders; and Stanley, Richard P.; (2010); A Combinatorial Miscellany247
• Bóna, Miklós; (2011); A Walk Through Combinatorics (3rd Edition)248 . ISBN249 978-
981-4335-23-2250 , 978-981-4460-00-2251
• Graham, Ronald L.; Groetschel, Martin; and Lovász, László; eds. (1996); Handbook
of Combinatorics, Volumes 1 and 2. Amsterdam, NL, and Cambridge, MA: Elsevier
(North-Holland) and MIT Press. ISBN252 0-262-07169-X253
• Lindner, Charles C.; and Rodger, Christopher A.; eds. (1997); Design Theory, CRC-
Press; 1st. edition (1997). ISBN254 0-8493-3986-3255 .
• R, J256 (2002) [1958], An Introduction to Combinatorial Analysis, Dover,
ISBN257 978-0-486-42536-8258
• R, H J (1963), Combinatorial Mathematics, The Carus Mathematical
Monographs(#14), The Mathematical Association of America
• Stanley, Richard P.259 (1997, 1999); Enumerative Combinatorics, Volumes 1 and 2260 ,
Cambridge University Press261 . ISBN262 0-521-55309-1263 , 0-521-56069-1264
• van Lint, Jacobus H.; and Wilson, Richard M.; (2001); A Course in Combinatorics, 2nd
Edition, Cambridge University Press. ISBN265 0-521-80340-3266

243 http://www.math.gatech.edu/~sanders/graphtheory/writings/2-digit.html
244 https://web.archive.org/web/20081231163112/http://www.math.gatech.edu/~sanders/graphtheory/writings/2-digit.html
245 https://en.wikipedia.org/wiki/Wayback_Machine
246 http://faculty.uml.edu/dklain/cpc.pdf
247 http://www-math.mit.edu/~rstan/papers/comb.pdf
248 http://www.worldscientific.com/worldscibooks/10.1142/8027
249 https://en.wikipedia.org/wiki/ISBN_(identifier)
250 https://en.wikipedia.org/wiki/Special:BookSources/978-981-4335-23-2
251 https://en.wikipedia.org/wiki/Special:BookSources/978-981-4460-00-2
252 https://en.wikipedia.org/wiki/ISBN_(identifier)
253 https://en.wikipedia.org/wiki/Special:BookSources/0-262-07169-X
254 https://en.wikipedia.org/wiki/ISBN_(identifier)
255 https://en.wikipedia.org/wiki/Special:BookSources/0-8493-3986-3
256 https://en.wikipedia.org/wiki/John_Riordan_(mathematician)
257 https://en.wikipedia.org/wiki/ISBN_(identifier)
258 https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-42536-8
259 https://en.wikipedia.org/wiki/Richard_P._Stanley
260 http://www-math.mit.edu/~rstan/ec/
261 https://en.wikipedia.org/wiki/Cambridge_University_Press
262 https://en.wikipedia.org/wiki/ISBN_(identifier)
263 https://en.wikipedia.org/wiki/Special:BookSources/0-521-55309-1
264 https://en.wikipedia.org/wiki/Special:BookSources/0-521-56069-1
265 https://en.wikipedia.org/wiki/ISBN_(identifier)
266 https://en.wikipedia.org/wiki/Special:BookSources/0-521-80340-3


39.7 External links

Combinatorics at Wikipedia's sister projects267


• Definitions268 from Wiktionary
• Media269 from Wikimedia Commons
• Quotations270 from Wikiquote
• H, M271 , . (2001) [1994], ”C ”272 , En-
cyclopedia of Mathematics273 , S S+B M B.V. / K
A P, ISBN274 978-1-55608-010-4275
• Combinatorial Analysis276 – an article in Encyclopædia Britannica Eleventh Edition277
• Combinatorics278 , a MathWorld279 article with many references.
• Combinatorics280 , from a MathPages.com portal.
• The Hyperbook of Combinatorics281 , a collection of math articles links.
• The Two Cultures of Mathematics282 by W.T. Gowers, article on problem solving vs
theory building.
• ”Glossary of Terms in Combinatorics”283
• List of Combinatorics Software and Databases284


267 https://en.wikipedia.org/wiki/Wikipedia:Wikimedia_sister_projects
268 https://en.wiktionary.org/wiki/combinatorics
269 https://commons.wikimedia.org/wiki/Category:Combinatorics
270 https://en.wikiquote.org/wiki/Combinatorics
271 https://en.wikipedia.org/wiki/Michiel_Hazewinkel
272 https://www.encyclopediaofmath.org/index.php?title=p/c023250
273 https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics
274 https://en.wikipedia.org/wiki/ISBN_(identifier)
275 https://en.wikipedia.org/wiki/Special:BookSources/978-1-55608-010-4
276 http://encyclopedia.jrank.org/CLI_COM/COMBINATORIAL_ANALYSIS.html
277 https://en.wikipedia.org/wiki/Encyclop%C3%A6dia_Britannica_Eleventh_Edition
278 http://mathworld.wolfram.com/Combinatorics.html
279 https://en.wikipedia.org/wiki/MathWorld
280 http://www.mathpages.com/home/icombina.htm
281 http://www.combinatorics.net/Resources/hyper/Hyperbook.aspx
282 http://www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf
283 http://www.math.illinois.edu/~dwest/openp/gloss.html
284 http://www.mat.univie.ac.at/~slc/divers/software.html

40 Cycle detection

This article is about iterated functions. For another use, see Cycle detection (graph theory)1 .


In computer science5 , cycle detection or cycle finding is the algorithmic6 problem of
finding a cycle in a sequence7 of iterated function8 values.
For any function9 f that maps a finite set10 S to itself, and any initial value x0 in S, the
sequence of iterated function values
x0 , x1 = f (x0 ), x2 = f (x1 ), . . . , xi = f (xi−1 ), . . .
must eventually use the same value twice: there must be some pair of distinct indices i
and j such that xi = xj . Once this happens, the sequence must continue periodically11 , by
repeating the same sequence of values from xi to xj−1 . Cycle detection is the problem of
finding i and j, given f and x0 .
Several algorithms for finding cycles quickly and with little memory are known. Robert W.
Floyd12 's tortoise and hare algorithm13 moves two pointers at different speeds through the
sequence of values until they both point to equal values. Alternatively, Brent's algorithm
is based on the idea of exponential search14 . Both Floyd's and Brent's algorithms use only
a constant number of memory cells, and take a number of function evaluations that is
proportional to the distance from the start of the sequence to the first repetition. Several
other algorithms trade off larger amounts of memory for fewer function evaluations.
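A sketch of Floyd's method in Python follows; it returns the cycle length λ and the start index μ using only a constant number of variables. (This is a standard rendering of the algorithm described above, written here for illustration; the example function at the end is arbitrary.)

```python
def floyd(f, x0):
    # Phase 1: the hare moves twice as fast as the tortoise; they must meet
    # inside the cycle, at a number of steps that is a multiple of lambda.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: restart the tortoise at x0; moving both one step at a time,
    # they first meet at the start of the cycle, which gives mu.
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Phase 3: walk the hare once around the cycle to measure lambda.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare, lam = f(hare), lam + 1
    return lam, mu

# example: iterating x -> (x*x + 1) mod 255 from 3 must eventually cycle
print(floyd(lambda x: (x * x + 1) % 255, 3))
```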

1 https://en.wikipedia.org/wiki/Cycle_detection_(graph_theory)
5 https://en.wikipedia.org/wiki/Computer_science
6 https://en.wikipedia.org/wiki/Algorithm
7 https://en.wikipedia.org/wiki/Sequence
8 https://en.wikipedia.org/wiki/Iterated_function
9 https://en.wikipedia.org/wiki/Function_(mathematics)
10 https://en.wikipedia.org/wiki/Finite_set
11 https://en.wikipedia.org/wiki/Periodic_sequence
12 https://en.wikipedia.org/wiki/Robert_W._Floyd
13 https://en.wikipedia.org/wiki/Cycle_detection#Floyd%27s_Tortoise_and_Hare
14 https://en.wikipedia.org/wiki/Exponential_search


The applications of cycle detection include testing the quality of pseudorandom number gen-
erators15 and cryptographic hash functions16 , computational number theory17 algorithms,
detection of infinite loops18 in computer programs and periodic configurations in cellular
automata19 , and the automated shape analysis20 of linked list21 data structures.

40.1 Example

Figure 98 A function from and to the set {0,1,2,3,4,5,6,7,8} and the corresponding
functional graph

The figure shows a function f that maps the set S = {0,1,2,3,4,5,6,7,8} to itself. If one starts
from x0 = 2 and repeatedly applies f, one sees the sequence of values
2, 0, 6, 3, 1, 6, 3, 1, 6, 3, 1, ....

15 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
16 https://en.wikipedia.org/wiki/Cryptographic_hash_function
17 https://en.wikipedia.org/wiki/Computational_number_theory
18 https://en.wikipedia.org/wiki/Infinite_loop
19 https://en.wikipedia.org/wiki/Cellular_automaton
20 https://en.wikipedia.org/wiki/Shape_analysis_(software)
21 https://en.wikipedia.org/wiki/Linked_list


The cycle in this value sequence is 6, 3, 1.
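The visible portion of this example is easy to reproduce. The sketch below encodes only the values of f that appear in the sequence above (the full table is given in the figure) and iterates from x0 = 2:

```python
# Partial table of the example function f, restricted to the values
# reachable from x0 = 2; the figure shows the complete table.
table = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}
f = table.__getitem__

x, seq = 2, []
for _ in range(8):
    seq.append(x)
    x = f(x)
print(seq)  # [2, 0, 6, 3, 1, 6, 3, 1]
```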

40.2 Definitions

Let S be any finite set, f be any function from S to itself, and x0 be any element of S. For
any i > 0, let xi = f(xi−1). Let μ be the smallest index such that the value xμ reappears
infinitely often within the sequence of values xi, and let λ (the loop length) be the smallest
positive integer such that xμ = xλ+μ. The cycle detection problem is the task of finding λ
and μ.[1]
One can view the same problem graph-theoretically22 , by constructing a functional graph23
(that is, a directed graph24 in which each vertex has a single outgoing edge) the vertices of
which are the elements of S and the edges of which map an element to the corresponding
function value, as shown in the figure. The set of vertices reachable25 from starting vertex
x0 form a subgraph with a shape resembling the Greek letter rho26 (ρ): a path of length μ
from x0 to a cycle27 of λ vertices.[2]

40.3 Computer representation

Generally, f will not be specified as a table of values, the way it is shown in the figure above.
Rather, a cycle detection algorithm may be given access either to the sequence of values
xi , or to a subroutine for calculating f. The task is to find λ and μ while examining as few
values from the sequence or performing as few subroutine calls as possible. Typically, also,
the space complexity28 of an algorithm for the cycle detection problem is of importance: we
wish to solve the problem while using an amount of memory significantly smaller than it
would take to store the entire sequence.
In some applications, and in particular in Pollard's rho algorithm29 for integer factoriza-
tion30 , the algorithm has much more limited access to S and to f. In Pollard's rho algorithm,
for instance, S is the set of integers modulo an unknown prime factor of the number to be
factorized, so even the size of S is unknown to the algorithm. To allow cycle detection
algorithms to be used with such limited knowledge, they may be designed based on the
following capabilities. Initially, the algorithm is assumed to have in its memory an object
representing a pointer31 to the starting value x0 . At any step, it may perform one of three
actions: it may copy any pointer it has to another object in memory, it may apply f and

22 https://en.wikipedia.org/wiki/Graph_theory
23 https://en.wikipedia.org/wiki/Functional_graph
24 https://en.wikipedia.org/wiki/Directed_graph
25 https://en.wikipedia.org/wiki/Reachability
26 https://en.wikipedia.org/wiki/Rho_(letter)
27 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
28 https://en.wikipedia.org/wiki/Space_complexity
29 https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
30 https://en.wikipedia.org/wiki/Integer_factorization
31 https://en.wikipedia.org/wiki/Pointer_(computer_programming)


replace any of its pointers by a pointer to the next object in the sequence, or it may ap-
ply a subroutine for determining whether two of its pointers represent equal values in the
sequence. The equality test action may involve some nontrivial computation: for instance,
in Pollard's rho algorithm, it is implemented by testing whether the difference between two
stored values has a nontrivial greatest common divisor32 with the number to be factored.[2]
In this context, by analogy to the pointer machine33 model of computation, an algorithm
that only uses pointer copying, advancement within the sequence, and equality tests may
be called a pointer algorithm.
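The three permitted actions can be sketched as a tiny interface; the class and method names below are illustrative only, not part of any standard API:

```python
# A hedged sketch of the pointer-algorithm model: an algorithm may
# copy a pointer, advance it one step via f, or test two pointers
# for equality of the values they represent -- nothing else.
class SequencePointer:
    def __init__(self, value, f):
        self.value, self.f = value, f

    def copy(self):
        # Action 1: copy a pointer to another object in memory.
        return SequencePointer(self.value, self.f)

    def advance(self):
        # Action 2: apply f, moving to the next object in the sequence.
        self.value = self.f(self.value)

    def equals(self, other):
        # Action 3: test whether two pointers represent equal values.
        return self.value == other.value
```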

40.4 Algorithms

If the input is given as a subroutine for calculating f, the cycle detection problem may be
trivially solved using only λ + μ function applications, simply by computing the sequence
of values xi and using a data structure34 such as a hash table35 to store these values and
test whether each subsequent value has already been stored. However, the space complexity
of this algorithm is proportional to λ + μ, unnecessarily large. Additionally, to implement
this method as a pointer algorithm would require applying the equality test to each pair of
values, resulting in quadratic time overall. Thus, research in this area has concentrated on
two goals: using less space than this naive algorithm, and finding pointer algorithms that
use fewer equality tests.
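The naive scheme just described can be sketched as follows (the helper name is ours, not from the literature); a hash table maps each value to the index of its first occurrence, and the first repeated value is xμ:

```python
def naive_cycle_detect(f, x0):
    # Store every value in a hash table mapping value -> index of its
    # first occurrence; stop at the first repetition.  This uses
    # lam + mu applications of f but Theta(lam + mu) space.
    first_seen = {}
    x, i = x0, 0
    while x not in first_seen:
        first_seen[x] = i
        x = f(x)
        i += 1
    mu = first_seen[x]   # index of the first occurrence of the repeated value
    lam = i - mu         # gap between its two occurrences
    return lam, mu

# On the example from this chapter (partial table of f):
f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}.__getitem__
print(naive_cycle_detect(f, 2))  # (3, 2)
```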

32 https://en.wikipedia.org/wiki/Greatest_common_divisor
33 https://en.wikipedia.org/wiki/Pointer_machine
34 https://en.wikipedia.org/wiki/Data_structure
35 https://en.wikipedia.org/wiki/Hash_table


40.4.1 Floyd's Tortoise and Hare

Figure 99 Floyd's ”tortoise and hare” cycle detection algorithm, applied to the sequence
2, 0, 6, 3, 1, 6, 3, 1, ...

Floyd's cycle-finding algorithm is a pointer algorithm that uses only two pointers, which
move through the sequence at different speeds. It is also called the ”tortoise and the hare
algorithm”, alluding to Aesop's fable of The Tortoise and the Hare36 .
The algorithm is named after Robert W. Floyd37 , who was credited with its invention by
Donald Knuth38 .[3][4] However, the algorithm does not appear in Floyd's published work,
and this may be a misattribution: Floyd describes algorithms for listing all simple cycles

36 https://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare
37 https://en.wikipedia.org/wiki/Robert_W._Floyd
38 https://en.wikipedia.org/wiki/Donald_Knuth


in a directed graph39 in a 1967 paper,[5] but this paper does not describe the cycle-finding
problem in functional graphs that is the subject of this article. In fact, Knuth's statement
(in 1969), attributing it to Floyd, without citation, is the first known appearance in print,
and it thus may be a folk theorem40 , not attributable to a single individual.[6]
The key insight in the algorithm is as follows. If there is a cycle, then, for any integers
i ≥ μ and k ≥ 0, xi = xi+kλ, where λ is the length of the loop to be found and μ is the index
of the first element of the cycle. Based on this, it can then be shown that i = kλ ≥ μ for
some k if and only if xi = x2i. Thus, the algorithm only needs to check for repeated values
of this special form, one twice as far from the start of the sequence as the other, to find a
period ν of a repetition that is a multiple of λ. Once ν is found, the algorithm retraces the
sequence from its start to find the first repeated value xμ in the sequence, using the fact
that λ divides ν and therefore that xμ = xμ+ν. Finally, once the value of μ is known it is
trivial to find the length λ of the shortest repeating cycle, by searching for the first position
μ + λ for which xμ+λ = xμ.
The algorithm thus maintains two pointers into the given sequence, one (the tortoise) at
xi , and the other (the hare) at x2i . At each step of the algorithm, it increases i by one,
moving the tortoise one step forward and the hare two steps forward in the sequence, and
then compares the sequence values at these two pointers. The smallest value of i > 0 for
which the tortoise and hare point to equal values is the desired value ν.
The following Python41 code shows how this idea may be implemented as an algorithm.
def floyd(f, x0):
    # Main phase of algorithm: finding a repetition x_i = x_2i.
    # The hare moves twice as quickly as the tortoise and
    # the distance between them increases by 1 at each step.
    # Eventually they will both be inside the cycle and then,
    # at some point, the distance between them will be
    # divisible by the period λ.
    tortoise = f(x0)  # f(x0) is the element/node next to x0.
    hare = f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))

    # At this point the tortoise position, ν, which is also equal
    # to the distance between hare and tortoise, is divisible by
    # the period λ. So hare moving in circle one step at a time,
    # and tortoise (reset to x0) moving towards the circle, will
    # intersect at the beginning of the circle. Because the
    # distance between them is constant at 2ν, a multiple of λ,
    # they will agree as soon as the tortoise reaches index μ.

    # Find the position μ of the first repetition.
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)  # Hare and tortoise move at same speed
        mu += 1

    # Find the length of the shortest cycle starting from x_μ.
    # The hare moves one step at a time while the tortoise is still.
    # lam is incremented until λ is found.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1

    return lam, mu

39 https://en.wikipedia.org/wiki/Directed_graph
40 https://en.wikipedia.org/wiki/Mathematical_folklore
41 https://en.wikipedia.org/wiki/Python_(programming_language)

This code only accesses the sequence by storing and copying pointers, function evaluations,
and equality tests; therefore, it qualifies as a pointer algorithm. The algorithm uses O(λ +
μ) operations of these types, and O(1) storage space.[7]
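Applied to the example from earlier in the chapter, the algorithm returns λ = 3 and μ = 2. The demonstration below repeats the listing in compact form and encodes only the portion of f that lies along the sequence from x0 = 2 (the full table is in the figure):

```python
def floyd(f, x0):
    # Same logic as the listing above: the hare moves twice as fast.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Reset the tortoise; both now move one step at a time until they
    # meet at x_mu, the start of the cycle.
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Walk the hare around the cycle once to measure its length lam.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return lam, mu

# Partial table of the example function (the figure gives the rest).
f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}.__getitem__
print(floyd(f, 2))  # (3, 2): the cycle 6, 3, 1 has length 3 and starts at index 2
```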

40.4.2 Brent's algorithm

Richard P. Brent42 described an alternative cycle detection algorithm that, like the tortoise
and hare algorithm, requires only two pointers into the sequence.[8] However, it is based on
a different principle: searching for the smallest power of two43 2^i that is larger than both
λ and μ. For i = 0, 1, 2, ..., the algorithm compares x_{2^i−1} with each subsequent sequence
value up to the next power of two, stopping when it finds a match. It has two advantages
compared to the tortoise and hare algorithm: it finds the correct length λ of the cycle
directly, rather than needing to search for it in a subsequent stage, and its steps involve
only one evaluation of f rather than three.[9]
The following Python code shows how this technique works in more detail.
def brent(f, x0):
    # Main phase: search successive powers of two.
    power = lam = 1
    tortoise = x0
    hare = f(x0)  # f(x0) is the element/node next to x0.
    while tortoise != hare:
        if power == lam:  # time to start a new power of two?
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1

    # Find the position of the first repetition of length λ.
    tortoise = hare = x0
    for i in range(lam):
        # range(lam) produces the values 0, 1, ..., lam-1
        hare = f(hare)
    # The distance between the hare and tortoise is now λ.

    # Next, the hare and tortoise move at same speed until they agree.
    mu = 0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1

    return lam, mu

42 https://en.wikipedia.org/wiki/Richard_Brent_(scientist)
43 https://en.wikipedia.org/wiki/Power_of_two

Like the tortoise and hare algorithm, this is a pointer algorithm that uses O(λ + μ) tests
and function evaluations and O(1) storage space. It is not difficult to show that the number
of function evaluations can never be higher than for Floyd's algorithm. Brent claims that,
on average, his cycle finding algorithm runs around 36% more quickly than Floyd's and
that it speeds up the Pollard rho algorithm44 by around 24%. He also performs an average
case analysis45 for a randomized version of the algorithm in which the sequence of indices
traced by the slower of the two pointers is not the powers of two themselves, but rather a
randomized multiple of the powers of two. Although his main intended application was in
integer factorization algorithms, Brent also discusses applications in testing pseudorandom
number generators.[8]
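On the running example, Brent's method produces the same answer as Floyd's. The compact sketch below repeats the listing above and uses only the portion of f along the sequence from x0 = 2:

```python
def brent(f, x0):
    # Phase 1: the tortoise teleports to x_{2^i - 1} at each new power
    # of two; the hare advances one step at a time.  On exit, lam is
    # the cycle length.
    power = lam = 1
    tortoise, hare = x0, f(x0)
    while tortoise != hare:
        if power == lam:                     # start a new power of two
            tortoise, power, lam = hare, power * 2, 0
        hare = f(hare)
        lam += 1
    # Phase 2: put the hare lam steps ahead of x0, then advance both
    # pointers in lock step; they first agree at x_mu.
    tortoise = hare = x0
    for _ in range(lam):
        hare = f(hare)
    mu = 0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    return lam, mu

f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}.__getitem__
print(brent(f, 2))  # (3, 2), matching Floyd's algorithm
```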

40.4.3 Gosper's algorithm

R. W. Gosper46 's algorithm[10][11] finds the period but not the starting point of the first
cycle. Its main feature is that it never backs up to reevaluate the generator function, and is
economical in both space and time. For example, if it is known a priori that the generator
function has a period of 2^32, then 33 words of memory are sufficient.

40.4.4 Time–space tradeoffs

A number of authors have studied techniques for cycle detection that use more memory than
Floyd's and Brent's methods, but detect cycles more quickly. In general these methods store
several previously-computed sequence values, and test whether each new value equals one
of the previously-computed values. In order to do so quickly, they typically use a hash
table47 or similar data structure for storing the previously-computed values, and therefore
are not pointer algorithms: in particular, they usually cannot be applied to Pollard's rho
algorithm. Where these methods differ is in how they determine which values to store.
Following Nivasch,[12] we survey these techniques briefly.
• Brent[8] already describes variations of his technique in which the indices of saved sequence
values are powers of a number R other than two. By choosing R to be a number close to
one, and storing the sequence values at indices that are near a sequence of consecutive
powers of R, a cycle detection algorithm can use a number of function evaluations that
is within an arbitrarily small factor of the optimum λ + μ.[13][14]
• Sedgewick, Szymanski, and Yao[15] provide a method that uses M memory cells and requires
in the worst case only (λ + μ)(1 + cM^(−1/2)) function evaluations, for some constant
c, which they show to be optimal. The technique involves maintaining a numerical pa-
rameter d, storing in a table only those positions in the sequence that are multiples of d,
and clearing the table and doubling d whenever too many values have been stored.

44 https://en.wikipedia.org/wiki/Pollard_rho_algorithm
45 https://en.wikipedia.org/wiki/Average_case_analysis
46 https://en.wikipedia.org/wiki/Bill_Gosper
47 https://en.wikipedia.org/wiki/Hash_table


• Several authors have described distinguished point methods that store function values
in a table based on a criterion involving the values, rather than (as in the method of
Sedgewick et al.) based on their positions. For instance, values equal to zero modulo
some value d might be stored.[16][17] More simply, Nivasch[12] credits D. P. Woodruff
with the suggestion of storing a random sample of previously seen values, making an
appropriate random choice at each step so that the sample remains random.
• Nivasch[12] describes an algorithm that does not use a fixed amount of memory, but for
which the expected amount of memory used (under the assumption that the input function
is random) is logarithmic in the sequence length. An item is stored in the memory table,
with this technique, when no later item has a smaller value. As Nivasch shows, the items
with this technique can be maintained using a stack data structure48 , and each successive
sequence value need be compared only to the top of the stack. The algorithm terminates
when the repeated sequence element with smallest value is found. Running the same
algorithm with multiple stacks, using random permutations of the values to reorder the
values within each stack, allows a time–space tradeoff similar to the previous algorithms.
However, even the version of this algorithm with a single stack is not a pointer algorithm,
due to the comparisons needed to determine which of two values is smaller.
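The single-stack version can be sketched as follows (the function name and interface are ours, not from Nivasch's paper): the stack holds (value, index) pairs with values strictly increasing from bottom to top, and the search ends when the current value matches the top of the stack.

```python
def stack_period(f, x0):
    # Hedged sketch of Nivasch's stack algorithm.  Each new value pops
    # all stack entries holding larger values; if the new value equals
    # the value now on top, the smallest cycle value has repeated and
    # the difference of indices is the period lam.  The starting
    # index mu is not determined by this technique.
    stack = []  # (value, index) pairs, values increasing bottom to top
    x, i = x0, 0
    while True:
        while stack and stack[-1][0] > x:
            stack.pop()
        if stack and stack[-1][0] == x:
            return i - stack[-1][1]
        stack.append((x, i))
        x = f(x)
        i += 1

f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}.__getitem__
print(stack_period(f, 2))  # 3
```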
Any cycle detection algorithm that stores at most M values from the input sequence must
perform at least (λ + μ)(1 + 1/(M − 1)) function evaluations.[18][19]

40.5 Applications

Cycle detection has been used in many applications.


• Determining the cycle length of a pseudorandom number generator49 is one measure of its
strength. This is the application cited by Knuth in describing Floyd's method.[3] Brent[8]
describes the results of testing a linear congruential generator50 in this fashion; its period
turned out to be significantly smaller than advertised. For more complex generators, the
sequence of values in which the cycle is to be found may not represent the output of the
generator, but rather its internal state.
• Several number-theoretic51 algorithms are based on cycle detection, including Pollard's
rho algorithm52 for integer factorization[20] and his related kangaroo algorithm53 for the
discrete logarithm54 problem.[21]
• In cryptographic55 applications, the ability to find two distinct values xμ−1 and xλ+μ−1
mapped by some cryptographic function ƒ to the same value xμ may indicate a weakness
in ƒ. For instance, Quisquater and Delescaille[17] apply cycle detection algorithms in
the search for a message and a pair of Data Encryption Standard56 keys that map that

48 https://en.wikipedia.org/wiki/Stack_(data_structure)
49 https://en.wikipedia.org/wiki/Pseudorandom_number_generator
50 https://en.wikipedia.org/wiki/Linear_congruential_generator
51 https://en.wikipedia.org/wiki/Number_theory
52 https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
53 https://en.wikipedia.org/wiki/Pollard%27s_kangaroo_algorithm
54 https://en.wikipedia.org/wiki/Discrete_logarithm
55 https://en.wikipedia.org/wiki/Cryptography
56 https://en.wikipedia.org/wiki/Data_Encryption_Standard


message to the same encrypted value; Kaliski57 , Rivest58 , and Sherman59[22] also use cycle
detection algorithms to attack DES. The technique may also be used to find a collision60
in a cryptographic hash function61 .[23]
• Cycle detection may be helpful as a way of discovering infinite loops62 in certain types of
computer programs63 .[24]
• Periodic configurations64 in cellular automaton65 simulations may be found by applying
cycle detection algorithms to the sequence of automaton states.[12]
• Shape analysis66 of linked list67 data structures is a technique for verifying the correctness
of an algorithm using those structures. If a node in the list incorrectly points to an
earlier node in the same list, the structure will form a cycle that can be detected by
these algorithms.[25] In Common Lisp68 , the S-expression69 printer, under control of the
*print-circle* variable, detects circular list structure and prints it compactly.
• Teske[14] describes applications in computational group theory70 : determining the struc-
ture of an Abelian group71 from a set of its generators. The cryptographic algorithms of
Kaliski et al.[22] may also be viewed as attempting to infer the structure of an unknown
group.
• Fich (1981)72 briefly mentions an application to computer simulation73 of celestial me-
chanics74 , which she attributes to William Kahan75 . In this application, cycle detection
in the phase space76 of an orbital system may be used to determine whether the system
is periodic to within the accuracy of the simulation.[18]

40.6 References
1. J, A (2009), Algorithmic Cryptanalysis77 , CRC P, . 223,
ISBN78 978142007003379 .
2. Joux (200980 , p. 224).

57 https://en.wikipedia.org/wiki/Burt_Kaliski
58 https://en.wikipedia.org/wiki/Ron_Rivest
59 https://en.wikipedia.org/wiki/Alan_Sherman
60 https://en.wikipedia.org/wiki/Hash_collision
61 https://en.wikipedia.org/wiki/Cryptographic_hash_function
62 https://en.wikipedia.org/wiki/Infinite_loop
63 https://en.wikipedia.org/wiki/Computer_program
64 https://en.wikipedia.org/wiki/Oscillator_(cellular_automaton)
65 https://en.wikipedia.org/wiki/Cellular_automaton
66 https://en.wikipedia.org/wiki/Shape_analysis_(software)
67 https://en.wikipedia.org/wiki/Linked_list
68 https://en.wikipedia.org/wiki/Common_Lisp
69 https://en.wikipedia.org/wiki/S-expression
70 https://en.wikipedia.org/wiki/Computational_group_theory
71 https://en.wikipedia.org/wiki/Abelian_group
72 #CITEREFFich1981
73 https://en.wikipedia.org/wiki/Computer_simulation
74 https://en.wikipedia.org/wiki/Celestial_mechanics
75 https://en.wikipedia.org/wiki/William_Kahan
76 https://en.wikipedia.org/wiki/Phase_space
77 https://books.google.com/books?id=buQajqt-_iUC&pg=PA223
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/9781420070033
80 #CITEREFJoux2009


3. K, D E.81 (1969), The Art of Computer Programming, vol. II: Seminu-
merical Algorithms, Addison-Wesley, p. 7, exercises 6 and 7
4. Handbook of Applied Cryptography, by Alfred J. Menezes, Paul C. van Oorschot, Scott
A. Vanstone, p. 12582 , describes this algorithm and others
5. F, R.W.83 (1967), ”N A”84 , J. ACM, 14 (4):
636–644, doi85 :10.1145/321420.32142286
6. The Hash Function BLAKE, by Jean-Philippe Aumasson, Willi Meier, Raphael C.-W.
Phan, Luca Henzen (2015), p. 2187 , footnote 8
7. Joux (2009)88 , Section 7.1.1, Floyd's cycle-finding algorithm, pp. 225–226.
8. B, R. P.89 (1980), ”A  M C 
”90 (PDF), BIT Numerical Mathematics 91 , 20 (2): 176–184,
92
doi :10.1007/BF01933190 . 93

9. Joux (2009)94 , Section 7.1.2, Brent's cycle-finding algorithm, pp. 226–227.


10. ”A ”95 . A   96  2016-04-14. R-
 2017-02-08.CS1 maint: archived copy as title (link97 )
11. 98

12. N, G (2004), ”C    ”, Information Pro-
cessing Letters99 , 90 (3): 135–140, doi100 :10.1016/j.ipl.2004.01.016101 .
13. S, C P.102 ; L, H W.103 (1984), ”A M C
    ”, Mathematics of Computa-

81 https://en.wikipedia.org/wiki/Donald_Knuth
82 https://books.google.com/books?id=nSzoG72E93MC&pg=PA125
83 https://en.wikipedia.org/wiki/Robert_W._Floyd
84 http://doi.acm.org/10.1145/321420.321422
85 https://en.wikipedia.org/wiki/Doi_(identifier)
86 https://doi.org/10.1145%2F321420.321422
87 https://books.google.com/books?id=nhPmBQAAQBAJ&pg=PA21
88 #CITEREFJoux2009
89 https://en.wikipedia.org/wiki/Richard_Brent_(scientist)
90 http://wwwmaths.anu.edu.au/~brent/pd/rpb051i.pdf
91 https://en.wikipedia.org/wiki/BIT_Numerical_Mathematics
92 https://en.wikipedia.org/wiki/Doi_(identifier)
93 https://doi.org/10.1007%2FBF01933190
94 #CITEREFJoux2009
https://web.archive.org/web/20160414011322/http://hackersdelight.org/hdcodetxt/
95
loopdet.c.txt
96 http://www.hackersdelight.org/hdcodetxt/loopdet.c.txt
97 https://en.wikipedia.org/wiki/Category:CS1_maint:_archived_copy_as_title
98 http://www.inwap.com/pdp10/hbaker/hakmem/flows.html
99 https://en.wikipedia.org/wiki/Information_Processing_Letters
100 https://en.wikipedia.org/wiki/Doi_(identifier)
101 https://doi.org/10.1016%2Fj.ipl.2004.01.016
102 https://en.wikipedia.org/wiki/Claus_P._Schnorr
103 https://en.wikipedia.org/wiki/Hendrik_Lenstra

529
Cycle detection

tion104 , 43 (167): 289–311, doi105 :10.2307/2007414106 , hdl107 :1887/3815108 , JS-


TOR109 2007414110 .
14. T, E (1998), ”A -    -
 ”, Mathematics of Computation111 , 67 (224): 1637–1663,
doi112 :10.1090/S0025-5718-98-00968-5113 .
15. S, R114 ; S, T G.; Y, A C.-C.115 (1982),
”T       ”, SIAM Journal on
Computing116 , 11 (2): 376–390, doi117 :10.1137/0211030118 .
16.  O, P C.; W, M J. (1999), ”P 
   ”, Journal of Cryptology119 , 12 (1): 1–
28, doi120 :10.1007/PL00003816121 .
17. Q, J.-J.; D, J.-P., ”H    ? A-
  DES”, Advances in Cryptology – EUROCRYPT '89, Workshop on the
Theory and Application of Cryptographic Techniques122 , L N  C-
 S, 434, Springer-Verlag, pp. 429–434.
18. F, F E123 (1981), ”L     
”, Proc. 13th ACM Symposium on Theory of Computing124 , pp. 96–105,
doi125 :10.1145/800076.802462126 .
19. A, E W.127 ; K, M M.128 (1985), ”I  
    ”, Theoretical Computer Science129 , 36 (2–3):
231–237, doi130 :10.1016/0304-3975(85)90044-1131 .

104 https://en.wikipedia.org/wiki/Mathematics_of_Computation
105 https://en.wikipedia.org/wiki/Doi_(identifier)
106 https://doi.org/10.2307%2F2007414
107 https://en.wikipedia.org/wiki/Hdl_(identifier)
108 http://hdl.handle.net/1887%2F3815
109 https://en.wikipedia.org/wiki/JSTOR_(identifier)
110 http://www.jstor.org/stable/2007414
111 https://en.wikipedia.org/wiki/Mathematics_of_Computation
112 https://en.wikipedia.org/wiki/Doi_(identifier)
113 https://doi.org/10.1090%2FS0025-5718-98-00968-5
114 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
115 https://en.wikipedia.org/wiki/Andrew_Yao
116 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
117 https://en.wikipedia.org/wiki/Doi_(identifier)
118 https://doi.org/10.1137%2F0211030
119 https://en.wikipedia.org/wiki/Journal_of_Cryptology
120 https://en.wikipedia.org/wiki/Doi_(identifier)
121 https://doi.org/10.1007%2FPL00003816
122 https://doi.org/10.1007%2F3-540-46885-4_43
123 https://en.wikipedia.org/wiki/Faith_Ellen
124 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
125 https://en.wikipedia.org/wiki/Doi_(identifier)
126 https://doi.org/10.1145%2F800076.802462
127 https://en.wikipedia.org/wiki/Eric_Allender
128 https://en.wikipedia.org/wiki/Maria_Klawe
129 https://en.wikipedia.org/wiki/Theoretical_Computer_Science_(journal)
130 https://en.wikipedia.org/wiki/Doi_(identifier)
131 https://doi.org/10.1016%2F0304-3975%2885%2990044-1


20. P, J. M. (1975), ”A M C   ”, BIT,


15 (3): 331–334, doi132 :10.1007/BF01933667133 .
21. P, J. M. (1978), ”M C     (
p)”, Mathematics of Computation134 , American Mathematical Society, 32 (143): 918–
924, doi135 :10.2307/2006496136 , JSTOR137 2006496138 .
22. K, B S., J.; R, R L.139 ; S, A T. (1988), ”I
 D E S  ? (R   
 DES)”, Journal of Cryptology140 , 1 (1): 3–36, doi141 :10.1007/BF00206323142 .
23. Joux (2009)143 , Section 7.5, Collisions in hash functions, pp. 242–245.
24. V G, A (1987), ”E    P 
 -- ”, Journal of Logic Programming, 4 (1): 23–31,
doi144 :10.1016/0743-1066(87)90020-3145 .
25. A, M; H, M H (1997), ”A  D S
A  L D S”, AADEBUG '97, Proceedings of the Third In-
ternational Workshop on Automatic Debugging146 , L E A
 C  I S, L U147 , . 37–42.

40.7 External links


• Gabriel Nivasch, The Cycle Detection Problem and the Stack Algorithm148
• Tortoise and Hare149 , Portland Pattern Repository
• Floyd's Cycle Detection Algorithm (The Tortoise and the Hare)150
• Brent's Cycle Detection Algorithm (The Teleporting Turtle)151

132 https://en.wikipedia.org/wiki/Doi_(identifier)
133 https://doi.org/10.1007%2FBF01933667
134 https://en.wikipedia.org/wiki/Mathematics_of_Computation
135 https://en.wikipedia.org/wiki/Doi_(identifier)
136 https://doi.org/10.2307%2F2006496
137 https://en.wikipedia.org/wiki/JSTOR_(identifier)
138 http://www.jstor.org/stable/2006496
139 https://en.wikipedia.org/wiki/Ron_Rivest
140 https://en.wikipedia.org/wiki/Journal_of_Cryptology
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1007%2FBF00206323
143 #CITEREFJoux2009
144 https://en.wikipedia.org/wiki/Doi_(identifier)
145 https://doi.org/10.1016%2F0743-1066%2887%2990020-3
146 http://www.ep.liu.se/ea/cis/1997/009/04/
147 https://en.wikipedia.org/wiki/Link%C3%B6ping_University
148 http://www.gabrielnivasch.org/fun/cycle-detection
149 http://wiki.c2.com/?TortoiseAndHare
150 http://www.siafoo.net/algorithm/10
151 http://www.siafoo.net/algorithm/11

41 Stable marriage problem

In mathematics1 , economics2 , and computer science3 , the stable marriage problem (also
stable matching problem or SMP) is the problem of finding a stable matching between
two equally sized sets of elements given an ordering of preferences for each element. A
matching4 is a bijection5 from the elements of one set to the elements of the other set. A
matching is not stable if:
1. There is an element A of the first matched set which prefers some given element B of
the second matched set over the element to which A is already matched, and
2. B also prefers A over the element to which B is already matched.
In other words, a matching is stable when there does not exist any match (A, B) which
both prefer each other to their current partner under the matching.
The stable marriage problem has been stated as follows:
Given n men and n women, where each person has ranked all members of the opposite
sex in order of preference, marry the men and women together such that there are no
two people of opposite sex who would both rather have each other than their current
partners. When there are no such pairs of people, the set of marriages is deemed stable.
The existence of two classes that need to be paired with each other (men and women in
this example) distinguishes this problem from the stable roommates problem6 .

41.1 Applications

Algorithms for finding solutions to the stable marriage problem have applications in a
variety of real-world situations, perhaps the best known of these being in the assignment
of graduating medical students to their first hospital appointments.[1] In 2012, The Sveriges
Riksbank Prize in Economic Sciences in Memory of Alfred Nobel7 was awarded to Lloyd S.
Shapley8 and Alvin E. Roth9 ”for the theory of stable allocations and the practice of market
design.”[2]

1 https://en.wikipedia.org/wiki/Mathematics
2 https://en.wikipedia.org/wiki/Economics
3 https://en.wikipedia.org/wiki/Computer_science
4 https://en.wikipedia.org/wiki/Matching_(graph_theory)
5 https://en.wikipedia.org/wiki/Bijection
6 https://en.wikipedia.org/wiki/Stable_roommates_problem
https://en.wikipedia.org/wiki/The_Sveriges_Riksbank_Prize_in_Economic_Sciences_in_
7
Memory_of_Alfred_Nobel
8 https://en.wikipedia.org/wiki/Lloyd_S._Shapley
9 https://en.wikipedia.org/wiki/Alvin_E._Roth


An important and large-scale application of stable marriage is in assigning users to servers
in a large distributed Internet service.[3] Billions of users access web pages, videos, and other
services on the Internet, requiring each user to be matched to one of (potentially) hundreds
of thousands of servers around the world that offer that service. A user prefers servers that
are proximal enough to provide a faster response time for the requested service, resulting
in a (partial) preferential ordering of the servers for each user. Each server prefers to serve
users that it can with a lower cost, resulting in a (partial) preferential ordering of users for
each server. Content delivery networks10 that distribute much of the world's content and
services solve this large and complex stable marriage problem between users and servers
every tens of seconds to enable billions of users to be matched up with their respective
servers that can provide the requested web pages, videos, or other services.[3]

41.2 Different stable matchings

Main article: Lattice of stable matchings11
In general, there may be many different stable matchings. For example, suppose there are
three men (A,B,C) and three women (X,Y,Z) who have the following preferences:
A: YXZ B: ZYX C: XZY
X: BAC Y: CBA Z: ACB
There are three stable solutions to this matching arrangement:
• men get their first choice and women their third - (AY, BZ, CX);
• all participants get their second choice - (AX, BY, CZ);
• women get their first choice and men their third - (AZ, BX, CY).
All three are stable, because instability requires both of the participants to be happier with
an alternative match. Giving one group their first choices ensures that the matches are
stable because they would be unhappy with any other proposed match. Giving everyone
their second choice ensures that any other match would be disliked by one of the parties. In
general, the family of solutions to any instance of the stable marriage problem can be given
the structure of a finite distributive lattice12 , and this structure leads to efficient algorithms
for several problems on stable marriages.[4]
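The three matchings above can be checked mechanically. In the sketch below (the helper name and string encoding are ours, not from the source), each man's preference list is scanned for a blocking pair, i.e., a man and woman who prefer each other to their assigned partners:

```python
# Preference lists from the example, most preferred first.
men = {'A': 'YXZ', 'B': 'ZYX', 'C': 'XZY'}
women = {'X': 'BAC', 'Y': 'CBA', 'Z': 'ACB'}

def is_stable(matching):
    # matching maps each man to his wife.  The matching is unstable
    # iff some man m and woman w prefer each other to their partners.
    husband = {w: m for m, w in matching.items()}
    for m, prefs in men.items():
        for w in prefs:
            if w == matching[m]:
                break  # every woman m prefers to his wife has been checked
            # m prefers w to his wife; blocking if w also prefers m.
            if women[w].index(m) < women[w].index(husband[w]):
                return False
    return True

for matching in ({'A': 'Y', 'B': 'Z', 'C': 'X'},   # men's first choices
                 {'A': 'X', 'B': 'Y', 'C': 'Z'},   # everyone's second choice
                 {'A': 'Z', 'B': 'X', 'C': 'Y'}):  # women's first choices
    print(is_stable(matching))  # True, True, True
```

By contrast, a matching such as (AX, BZ, CY) is rejected: C prefers Z to Y, and Z prefers C to her partner B, so (C, Z) is a blocking pair.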
In a uniformly-random instance of the stable marriage problem with n men and n women,
the average number of stable matchings is asymptotically e^(−1) n ln n.[5] In a stable marriage
instance chosen to maximize the number of different stable matchings, this number is an
exponential function13 of n.[6] Counting the number of stable matchings in a given instance
is #P-complete14 .[7]

10 https://en.wikipedia.org/wiki/Content_delivery_network
11 https://en.wikipedia.org/wiki/Lattice_of_stable_matchings
12 https://en.wikipedia.org/wiki/Distributive_lattice
13 https://en.wikipedia.org/wiki/Exponential_function
14 https://en.wikipedia.org/wiki/%E2%99%AFP-complete


41.3 Algorithmic solution

Main article: Gale–Shapley algorithm15

Figure 100 Animation showing an example of the Gale–Shapley algorithm

In 1962, David Gale16 and Lloyd Shapley17 proved that, for any equal number of men
and women, it is always possible to solve the SMP and make all marriages stable. They
presented an algorithm18 to do so.[8][9]
The Gale–Shapley algorithm19 (also known as the deferred acceptance algorithm) involves
a number of ”rounds” (or ”iterations20 ”):
• In the first round, first a) each unengaged man proposes to the woman he prefers most,
and then b) each woman replies ”maybe” to her suitor she most prefers and ”no” to all

15 https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm
16 https://en.wikipedia.org/wiki/David_Gale
17 https://en.wikipedia.org/wiki/Lloyd_Shapley
18 https://en.wikipedia.org/wiki/Algorithm
19 https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm
20 https://en.wikipedia.org/wiki/Iteration


other suitors. She is then provisionally ”engaged” to the suitor she most prefers so far,
and that suitor is likewise provisionally engaged to her.
• In each subsequent round, first a) each unengaged man proposes to the most-preferred
woman to whom he has not yet proposed (regardless of whether the woman is already
engaged), and then b) each woman replies ”maybe” if she is currently not engaged or if she
prefers this man over her current provisional partner (in this case, she rejects her current
provisional partner who becomes unengaged). The provisional nature of engagements
preserves the right of an already-engaged woman to ”trade up” (and, in the process, to
”jilt” her until-then partner).
• This process is repeated until everyone is engaged.
This algorithm is guaranteed to produce a stable marriage for all participants in time O(n²),
where n is the number of men or women.[10]
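The rounds described above translate almost line-for-line into code. This is an illustrative Python sketch of the man-proposing deferred acceptance procedure (the function and data layout are ours, not from the original paper), run on the example instance from the previous section:

```python
def gale_shapley(men_prefs, women_prefs):
    """Return the man-optimal stable matching as a dict {man: woman}.

    Preferences are ordered lists, most preferred first."""
    # rank[w][m] = position of man m on woman w's list (lower is better).
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}  # index of next woman to try
    fiance = {}                                # woman -> provisional partner
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]     # most preferred not yet tried
        next_proposal[m] += 1
        if w not in fiance:                    # she is free: "maybe"
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:  # she trades up
            free_men.append(fiance[w])         # jilted man becomes free again
            fiance[w] = m
        else:                                  # she says "no"
            free_men.append(m)
    return {m: w for w, m in fiance.items()}

matching = gale_shapley(
    {"A": ["Y", "X", "Z"], "B": ["Z", "Y", "X"], "C": ["X", "Z", "Y"]},
    {"X": ["B", "A", "C"], "Y": ["C", "B", "A"], "Z": ["A", "C", "B"]})
print(matching == {"A": "Y", "B": "Z", "C": "X"})  # True: men get first choices
```

Each man proposes at most once to each woman, so the loop performs at most n² proposals, in line with the O(n²) bound quoted above.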
Among all possible stable matchings, it always yields the one that is best for all men and
worst for all women. It is a truthful mechanism21 from the point of view of the men (the
proposing side): no man can get a better match for himself by misrepresenting his preferences.
Moreover, the GS algorithm is even group-strategyproof for men, i.e., no coalition of men
can coordinate a misrepresentation of their preferences such that all men in the coalition are
strictly better off.[11] However, it is possible for some coalition to misrepresent their
preferences such that some men are better off and the other men retain the same partner.[12]
The GS algorithm is non-truthful for the women (the reviewing side): each woman may be
able to misrepresent her preferences and get a better match.

41.4 Rural hospitals theorem

Main article: Rural hospitals theorem22
The rural hospitals theorem23 concerns a more general variant of the stable matching
problem, such as the problem of matching doctors to positions at hospitals. It differs from
the basic n-to-n form of the stable marriage problem in the following ways:
• Each participant may only be willing to be matched to a subset of the participants on
the other side of the matching.
• The participants on one side of the matching (the hospitals) may have a numerical ca-
pacity, specifying the number of doctors they are willing to hire.
• The total number of participants on one side might not equal the total capacity to which
they are to be matched on the other side.
• The resulting matching might not match all of the participants.
In this case, the condition of stability is that no unmatched pair prefer each other to their
situation in the matching (whether that situation is another partner or being unmatched).
With this condition, a stable matching will still exist, and can still be found by the Gale–
Shapley algorithm.

21 https://en.wikipedia.org/wiki/Truthful_mechanism
22 https://en.wikipedia.org/wiki/Rural_hospitals_theorem
23 https://en.wikipedia.org/wiki/Rural_hospitals_theorem


For this kind of stable matching problem, the rural hospitals theorem states that:
• The set of assigned doctors, and the number of filled positions in each hospital, are the
same in all stable matchings.
• Any hospital that has some empty positions in some stable matching receives exactly
the same set of doctors in all stable matchings.
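Deferred acceptance extends naturally to this variant: a hospital holds up to its capacity of provisional acceptances and bumps its least-preferred holder when a better applicant proposes. A hedged Python sketch (the instance, names, and layout are invented for illustration):

```python
def hospitals_residents(res_prefs, hosp_prefs, capacity):
    """Resident-proposing deferred acceptance with capacities and
    incomplete (acceptability) lists.

    res_prefs:  {resident: [acceptable hospitals, best first]}
    hosp_prefs: {hospital: [acceptable residents, best first]}
    capacity:   {hospital: number of positions}
    Returns {resident: hospital} for the matched residents only."""
    rank = {h: {r: i for i, r in enumerate(prefs)}
            for h, prefs in hosp_prefs.items()}
    next_try = {r: 0 for r in res_prefs}
    assigned = {h: [] for h in hosp_prefs}     # provisional holders
    free = list(res_prefs)
    while free:
        r = free.pop()
        if next_try[r] >= len(res_prefs[r]):
            continue                           # list exhausted: r unmatched
        h = res_prefs[r][next_try[r]]
        next_try[r] += 1
        if r not in rank[h]:                   # hospital finds r unacceptable
            free.append(r)
        elif len(assigned[h]) < capacity[h]:   # open position: hold r
            assigned[h].append(r)
        else:                                  # full: maybe bump worst holder
            worst = max(assigned[h], key=rank[h].get)
            if rank[h][r] < rank[h][worst]:
                assigned[h].remove(worst)
                assigned[h].append(r)
                free.append(worst)
            else:
                free.append(r)
    return {r: h for h, rs in assigned.items() for r in rs}

match = hospitals_residents(
    {"r1": ["h1"], "r2": ["h1", "h2"], "r3": ["h1", "h2"]},
    {"h1": ["r3", "r2", "r1"], "h2": ["r2", "r3"]},
    {"h1": 1, "h2": 1})
print("r1" in match)  # False
```

In this tiny instance r1 is acceptable only to h1, which fills its single position with r3; by the rural hospitals theorem, r1 is therefore left unmatched in every stable matching, not just in this one.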

41.5 Related problems

In stable matching with indifference24 , some men might be indifferent between two or
more women and vice versa.
The stable roommates problem25 is similar to the stable marriage problem, but differs
in that all participants belong to a single pool (instead of being divided into equal numbers
of ”men” and ”women”).
The hospitals/residents problem26 – also known as the college admissions problem –
differs from the stable marriage problem in that a hospital can take multiple residents, or
a college can take an incoming class of more than one student. Algorithms to solve the
hospitals/residents problem can be hospital-oriented (as the NRMP27 was before 1995)[13]
or resident-oriented. An algorithm for this problem was given in the same original paper
by Gale and Shapley that solved the stable marriage problem.[8]
The hospitals/residents problem with couples28 allows the set of residents to include
couples who must be assigned together, either to the same hospital or to a specific pair of
hospitals chosen by the couple (e.g., a married couple want to ensure that they will stay
together and not be stuck in programs that are far away from each other). The addition of
couples to the hospitals/residents problem renders the problem NP-complete29 .[14]
The assignment problem30 seeks to find a matching in a weighted bipartite graph31 that
has maximum weight. Maximum weighted matchings do not have to be stable, but in some
applications a maximum weighted matching is better than a stable one.
The matching with contracts problem is a generalization of the matching problem, in which
participants can be matched with different terms of contracts.[15] An important special case
of contracts is matching with flexible wages.[16]

24 https://en.wikipedia.org/wiki/Stable_matching_with_indifference
25 https://en.wikipedia.org/wiki/Stable_roommates_problem
26 https://en.wikipedia.org/wiki/Hospital_resident
27 https://en.wikipedia.org/wiki/National_Resident_Matching_Program
28 https://en.wikipedia.org/wiki/Hospital_resident
29 https://en.wikipedia.org/wiki/NP-complete
30 https://en.wikipedia.org/wiki/Assignment_problem
31 https://en.wikipedia.org/wiki/Bipartite_graph


41.6 See also


• Matching (graph theory)32 - matching between different vertices of the graph; usually
unrelated to preference-ordering.
• Envy-free matching33 - a relaxation of stable matching for many-to-one matching prob-
lems
• Rainbow matching34 for edge colored graphs
• Stable matching polytope35

41.7 References
1. Stable Matching Algorithms36
2. ”The Prize in Economic Sciences 2012”37. Nobelprize.org. Retrieved 2013-09-09.
3. Bruce Maggs and Ramesh Sitaraman38 (2015). ”Algorithmic nuggets in content
delivery”39 (PDF). ACM SIGCOMM Computer Communication Review.
45 (3).
4. Gusfield, Dan40 (1987). ”Three fast algorithms for four problems in stable
marriage”. SIAM Journal on Computing41. 16 (1): 111–128.
doi42:10.1137/021601043. MR44 087325545.
5. Pittel, Boris (1989). ”The average number of stable matchings”. SIAM
Journal on Discrete Mathematics46. 2 (4): 530–549. doi47:10.1137/040204848.
MR49 101853850.
6. Karlin, Anna R.51; Gharan, Shayan Oveis; Weber, Robbie (2018). ”A simply
exponential upper bound on the maximum number of stable matchings”. In
Diakonikolas, Ilias; Kempe, David; Henzinger, Monika52 (eds.). Proceedings of
the 50th Symposium on Theory of Computing (STOC 2018). Association for
Computing Machinery. pp. 920–925. arXiv53:1711.0103254.
doi55:10.1145/3188745.318884856. MR57 382630558.

32 https://en.wikipedia.org/wiki/Matching_(graph_theory)
33 https://en.wikipedia.org/wiki/Envy-free_matching
34 https://en.wikipedia.org/wiki/Rainbow_matching
35 https://en.wikipedia.org/wiki/Stable_matching_polytope
36 http://www.dcs.gla.ac.uk/research/algorithms/stable/
37 https://www.nobelprize.org/nobel_prizes/economics/laureates/2012/
38 https://en.wikipedia.org/wiki/Ramesh_Sitaraman
39 http://www.sigcomm.org/sites/default/files/ccr/papers/2015/July/0000000-0000009.pdf
40 https://en.wikipedia.org/wiki/Dan_Gusfield
41 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
42 https://en.wikipedia.org/wiki/Doi_(identifier)
43 https://doi.org/10.1137%2F0216010
44 https://en.wikipedia.org/wiki/MR_(identifier)
45 http://www.ams.org/mathscinet-getitem?mr=0873255
46 https://en.wikipedia.org/wiki/SIAM_Journal_on_Discrete_Mathematics
47 https://en.wikipedia.org/wiki/Doi_(identifier)
48 https://doi.org/10.1137%2F0402048
49 https://en.wikipedia.org/wiki/MR_(identifier)
50 http://www.ams.org/mathscinet-getitem?mr=1018538
51 https://en.wikipedia.org/wiki/Anna_Karlin
52 https://en.wikipedia.org/wiki/Monika_Henzinger
7. I, R W.; L, P (1986). ”T   -
  ”. SIAM Journal on Computing59 . 15 (3): 655–667.
doi60 :10.1137/021504861 . MR62 085041563 .
8. G, D.; S, L. S. (1962). ”C A   S-
  M”64 . American Mathematical Monthly65 . 69 (1): 9–14.
doi66 :10.2307/231272667 . JSTOR68 231272669 .
9. Harry Mairson70 : ”The Stable Marriage Problem”, The Brandeis Review 12, 1992
(online71 ).
10. I, K72 ; M, S (2008). ”A S   S M-
 P  I V”. International Conference on Informatics Edu-
cation and Research for Knowledge-Circulating Society (ICKS 2008). IEEE. pp. 131–
136. doi73 :10.1109/ICKS.2008.774 . hdl75 :2433/22694076 . ISBN77 978-0-7695-3128-178 .
11. D, L. E.79 ; F, D. A.80 (1981), ”M   G–
S ”, American Mathematical Monthly81 , 88 (7): 485–494,
doi82 :10.2307/232175383 , JSTOR84 232175385 , MR86 062801687

53 https://en.wikipedia.org/wiki/ArXiv_(identifier)
54 http://arxiv.org/abs/1711.01032
55 https://en.wikipedia.org/wiki/Doi_(identifier)
56 https://doi.org/10.1145%2F3188745.3188848
57 https://en.wikipedia.org/wiki/MR_(identifier)
58 http://www.ams.org/mathscinet-getitem?mr=3826305
59 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
60 https://en.wikipedia.org/wiki/Doi_(identifier)
61 https://doi.org/10.1137%2F0215048
62 https://en.wikipedia.org/wiki/MR_(identifier)
63 http://www.ams.org/mathscinet-getitem?mr=0850415
64 http://www.dtic.mil/get-tr-doc/pdf?AD=AD0251958
65 https://en.wikipedia.org/wiki/American_Mathematical_Monthly
66 https://en.wikipedia.org/wiki/Doi_(identifier)
67 https://doi.org/10.2307%2F2312726
68 https://en.wikipedia.org/wiki/JSTOR_(identifier)
69 http://www.jstor.org/stable/2312726
70 https://en.wikipedia.org/wiki/Harry_Mairson
71 http://www1.cs.columbia.edu/~evs/intro/stable/writeup.html
72 https://en.wikipedia.org/wiki/Kazuo_Iwama_(computer_scientist)
73 https://en.wikipedia.org/wiki/Doi_(identifier)
74 https://doi.org/10.1109%2FICKS.2008.7
75 https://en.wikipedia.org/wiki/Hdl_(identifier)
76 http://hdl.handle.net/2433%2F226940
77 https://en.wikipedia.org/wiki/ISBN_(identifier)
78 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7695-3128-1
79 https://en.wikipedia.org/wiki/Lester_Dubins
80 https://en.wikipedia.org/wiki/David_A._Freedman
81 https://en.wikipedia.org/wiki/American_Mathematical_Monthly
82 https://en.wikipedia.org/wiki/Doi_(identifier)
83 https://doi.org/10.2307%2F2321753
84 https://en.wikipedia.org/wiki/JSTOR_(identifier)
85 http://www.jstor.org/stable/2321753
86 https://en.wikipedia.org/wiki/MR_(identifier)
87 http://www.ams.org/mathscinet-getitem?mr=0628016


12. H, C-C (2006). ”C     G-S 


 ”. I A, Y; E, T (.). Algorithms -
ESA 2006, 14th Annual European Symposium, Zurich, Switzerland, September 11-13,
2006, Proceedings. Lecture Notes in Computer Science. 4168. Springer. pp. 418–431.
doi88 :10.1007/11841036_3989 . MR90 234716291 .
13. R, S (A 2003). ”A M S M T (B
P) M?”92 (PDF). SIAM News (3): 36. Retrieved 2 January 2018.
14. G, D.; I, R. W. (1989). The Stable Marriage Problem: Structure and
Algorithms. MIT Press. p. 54. ISBN93 0-262-07118-594 .
15. H, J W; M, P (2005). ”M
 C”. American Economic Review95 . 95 (4): 913–935.
96 97 98
doi :10.1257/0002828054825466 . JSTOR 4132699 . 99

16. C, V; K, E M (1981). ”J M 
H F  W”. Econometrica100 . 49 (2): 437–450.
doi101 :10.2307/1913320102 . JSTOR103 1913320104 .

41.7.1 Textbooks and other important references not cited in the text
• Kleinberg, J., and Tardos, E. (2005) Algorithm Design, Chapter 1, pp 1–12. See compan-
ion website for the Text [1]105 .
• Knuth, D. E.106 (1976). Mariages stables. Montréal: Les Presses de l'Université de
Montréal.
• Knuth, D.E.107 (1996) Stable Marriage and Its Relation to Other Combinatorial Problems:
An Introduction to the Mathematical Analysis of Algorithms, English translation, (CRM
Proceedings and Lecture Notes), American Mathematical Society.
• Pittel, B. (1992). ”On likely solutions of a stable marriage problem”, The Annals of
Applied Probability108 2: 358–401.

88 https://en.wikipedia.org/wiki/Doi_(identifier)
89 https://doi.org/10.1007%2F11841036_39
90 https://en.wikipedia.org/wiki/MR_(identifier)
91 http://www.ams.org/mathscinet-getitem?mr=2347162
92 http://www.siam.org/pdf/news/305.pdf
93 https://en.wikipedia.org/wiki/ISBN_(identifier)
94 https://en.wikipedia.org/wiki/Special:BookSources/0-262-07118-5
95 https://en.wikipedia.org/wiki/American_Economic_Review
96 https://en.wikipedia.org/wiki/Doi_(identifier)
97 https://doi.org/10.1257%2F0002828054825466
98 https://en.wikipedia.org/wiki/JSTOR_(identifier)
99 http://www.jstor.org/stable/4132699
100 https://en.wikipedia.org/wiki/Econometrica
101 https://en.wikipedia.org/wiki/Doi_(identifier)
102 https://doi.org/10.2307%2F1913320
103 https://en.wikipedia.org/wiki/JSTOR_(identifier)
104 http://www.jstor.org/stable/1913320
105 http://www.aw-bc.com/info/kleinberg/
106 https://en.wikipedia.org/wiki/Donald_Knuth
107 https://en.wikipedia.org/wiki/Donald_Knuth
108 https://en.wikipedia.org/wiki/The_Annals_of_Applied_Probability


• Roth, A. E. (1984). ”The evolution of the labor market for medical interns and residents:
A case study in game theory”, Journal of Political Economy109 92: 991–1016.
• Roth, A. E., and Sotomayor, M. A. O. (1990) Two-sided matching: A study in game-
theoretic modeling and analysis Cambridge University Press110 .
• S, Y; L-B, K (2009). Multiagent Systems: Algorithmic,
Game-Theoretic, and Logical Foundations111 . N Y: C U
P112 . ISBN113 978-0-521-89943-7114 . See Section 10.6.4; downloadable free online115 .
• S, J.; V, R. V. (2007). ”M   ”116
(PDF). I N, N; R, T; T, E; V, V (.).
Algorithmic Game Theory. pp. 255–262. ISBN117 978-0521872829118 ..

41.8 External links


• Interactive Flash Demonstration of SMP119
• Al Roth's archived page on the National Resident Matching Program120
• Interactive demonstration of the extended Gale–Shapley algorithm121
• SMP Lecture Notes122


109 https://en.wikipedia.org/wiki/Journal_of_Political_Economy
110 https://en.wikipedia.org/wiki/Cambridge_University_Press
111 http://www.masfoundations.org
112 https://en.wikipedia.org/wiki/Cambridge_University_Press
113 https://en.wikipedia.org/wiki/ISBN_(identifier)
114 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-89943-7
115 http://www.masfoundations.org/download.html
116 http://www.cambridge.org/journals/nisan/downloads/Nisan_Non-printable.pdf
117 https://en.wikipedia.org/wiki/ISBN_(identifier)
118 https://en.wikipedia.org/wiki/Special:BookSources/978-0521872829
119 http://mathsite.math.berkeley.edu/smp/smp.html
120 https://web.archive.org/web/20080512150525/http://kuznets.fas.harvard.edu/~aroth/alroth.html#NRMP
121 http://www.dcs.gla.ac.uk/research/algorithms/stable/EGSapplet/EGS.html
122 http://www.csee.wvu.edu/~ksmani/courses/fa01/random/lecnotes/lecture5.pdf

42 Graph theory

This article is about sets of vertices connected by edges. For graphs of mathematical
functions, see Graph of a function1. For other uses, see Graph (disambiguation)2.

Figure 101 A drawing of a graph.

In mathematics3 , graph theory is the study of graphs4 , which are mathematical structures
used to model pairwise relations between objects. A graph in this context is made up of
vertices5 (also called nodes or points) which are connected by edges6 (also called links or
lines). A distinction is made between undirected graphs, where edges link two vertices
symmetrically, and directed graphs, where edges link two vertices asymmetrically; see

1 https://en.wikipedia.org/wiki/Graph_of_a_function
2 https://en.wikipedia.org/wiki/Graph_(disambiguation)
3 https://en.wikipedia.org/wiki/Mathematics
4 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
5 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
6 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#edge


Graph (discrete mathematics)7 for more detailed definitions and for other variations in the
types of graph that are commonly considered. Graphs are one of the prime objects of study
in discrete mathematics8 .
Refer to the glossary of graph theory9 for basic definitions in graph theory.

42.1 Definitions

Definitions in graph theory vary. The following are some of the more basic ways of defining
graphs and related mathematical structures10 .

7 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
8 https://en.wikipedia.org/wiki/Discrete_mathematics
9 https://en.wikipedia.org/wiki/Glossary_of_graph_theory
10 https://en.wikipedia.org/wiki/Mathematical_structure


42.1.1 Graph

Figure 102 A graph with three vertices and three edges.

In one restricted but very common sense of the term,[1][2] a graph is an ordered pair11 G =
(V, E) comprising:
• V a set12 of vertices (also called nodes or points);
• E ⊆ {{x, y} | (x, y) ∈ V² ∧ x ≠ y}, a set13 of edges (also called links or lines), which are
unordered pairs14 of vertices (i.e., an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected simple
graph.

11 https://en.wikipedia.org/wiki/Ordered_pair
12 https://en.wikipedia.org/wiki/Set_(mathematics)
13 https://en.wikipedia.org/wiki/Set_(mathematics)
14 https://en.wikipedia.org/wiki/Unordered_pair


In the edge {x, y}, the vertices x and y are called the endpoints of the edge. The edge is said
to join x and y and to be incident on x and on y. A vertex may exist in a graph and not
belong to an edge. Multiple edges15 are two or more edges that join the same two vertices.
In one more general sense of the term allowing multiple edges,[3][4] a graph is an ordered
triple G = (V, E, ϕ) comprising:
• V a set16 of vertices (also called nodes or points);
• E a set17 of edges (also called links or lines);
• ϕ: E → {{x, y} | (x, y) ∈ V² ∧ x ≠ y}, an incidence function mapping every edge to an
unordered pair18 of vertices (i.e., an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected multigraph.
A loop19 is an edge that joins a vertex to itself. Graphs as defined in the two definitions
above cannot have loops, because a loop joining a vertex x is the edge {x, x} = {x} (for an
undirected simple graph), or is incident on it (for an undirected multigraph), and this is not
in {{x, y} | (x, y) ∈ V² ∧ x ≠ y}. So to allow loops the definitions must be expanded. For
undirected simple graphs, E ⊆ {{x, y} | (x, y) ∈ V² ∧ x ≠ y} should become E ⊆ {{x, y} |
(x, y) ∈ V²}. For undirected multigraphs, ϕ: E → {{x, y} | (x, y) ∈ V² ∧ x ≠ y} should
become ϕ: E → {{x, y} | (x, y) ∈ V²}. To avoid ambiguity, these types of objects may
be called precisely an undirected simple graph permitting loops and an undirected
multigraph permitting loops, respectively.
V and E are usually taken to be finite, and many of the well-known results are not true (or
are rather different) for infinite graphs because many of the arguments fail in the infinite
case20 . Moreover, V is often assumed to be non-empty, but E is allowed to be the empty set.
The order of a graph is |V|, its number of vertices. The size of a graph is |E|, its number
of edges. The degree or valency of a vertex is the number of edges that are incident to it,
where a loop is counted twice.
In an undirected simple graph of order n, the maximum degree of each vertex is n − 1 and
the maximum size of the graph is n(n − 1)/2.
The edges of an undirected simple graph permitting loops G induce a symmetric homoge-
neous relation21 ~ on the vertices of G that is called the adjacency relation of G. Specifically,
for each edge {x, y}, its endpoints x and y are said to be adjacent to one another, which is
denoted x ~ y.

42.1.2 Directed graph

Main article: Directed graph22

15 https://en.wikipedia.org/wiki/Multiple_edges
16 https://en.wikipedia.org/wiki/Set_(mathematics)
17 https://en.wikipedia.org/wiki/Set_(mathematics)
18 https://en.wikipedia.org/wiki/Unordered_pair
19 https://en.wikipedia.org/wiki/Loop_(graph_theory)
20 https://en.wikipedia.org/wiki/Infinite_graph
21 https://en.wikipedia.org/wiki/Binary_relation#Homogeneous_relation
22 https://en.wikipedia.org/wiki/Directed_graph


Figure 103 A directed graph with three vertices and four directed edges (the double
arrow represents an edge in each direction).

A directed graph or digraph is a graph in which edges have orientations.


In one restricted but very common sense of the term,[5] a directed graph is an ordered
pair G = (V, E) comprising:
• V a set23 of vertices (also called nodes or points);
• E ⊆ {(x, y) | (x, y) ∈ V² ∧ x ≠ y}, a set24 of edges (also called directed edges, directed links,
directed lines, arrows or arcs), which are ordered pairs25 of distinct vertices (i.e., an edge
is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely a directed simple graph.

23 https://en.wikipedia.org/wiki/Set_(mathematics)
24 https://en.wikipedia.org/wiki/Set_(mathematics)
25 https://en.wikipedia.org/wiki/Ordered_pair


In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the
edge, x the tail of the edge and y the head of the edge. The edge (y, x) is called the inverted
edge of (x, y). The edge is said to join x and y and to be incident on x and on y. A vertex
may exist in a graph and not belong to an edge. A loop26 is an edge that joins a vertex to
itself. Multiple edges27 are two or more edges that join the same two vertices.
In one more general sense of the term allowing multiple edges,[5] a directed graph is an
ordered triple G = (V, E, ϕ) comprising:
• V a set28 of vertices (also called nodes or points);
• E a set29 of edges (also called directed edges, directed links, directed lines, arrows or arcs);
• ϕ: E → {(x, y) | (x, y) ∈ V² ∧ x ≠ y}, an incidence function mapping every edge to an
ordered pair30 of distinct vertices (i.e., an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely a directed multigraph.
Directed graphs as defined in the two definitions above cannot have loops, because a loop
joining a vertex x is the edge (x, x) (for a directed simple graph), or is incident on it (for a
directed multigraph), and this is not in {(x, y) | (x, y) ∈ V² ∧ x ≠ y}. So to allow loops the
definitions must be expanded. For directed simple graphs, E ⊆ {(x, y) | (x, y) ∈ V² ∧ x ≠ y}
should become E ⊆ V². For directed multigraphs, ϕ: E → {(x, y) | (x, y) ∈ V² ∧ x ≠ y} should
become ϕ: E → V². To avoid ambiguity, these types of objects may be called precisely a
directed simple graph permitting loops and a directed multigraph permitting
loops (or a quiver31), respectively.
The edges of a directed simple graph permitting loops G form a homogeneous relation32 ~ on
the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y),
its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
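A companion sketch for the directed case (again an illustration, not from the article): edges are ordered pairs, so the induced adjacency relation need not be symmetric, and each vertex has separate in- and out-degrees.

```python
V = {1, 2, 3}
E = {(1, 2), (2, 1), (2, 3), (3, 1)}   # four directed edges (arcs)

out_degree = {v: sum(x == v for x, _ in E) for v in V}
in_degree = {v: sum(y == v for _, y in E) for v in V}

# For an edge (x, y): x is the tail, y the head; (y, x) is its inverted edge.
assert (1, 2) in E and (2, 1) in E      # both orientations: a "double arrow"
assert (2, 3) in E and (3, 2) not in E  # adjacency here is not symmetric
assert sum(out_degree.values()) == sum(in_degree.values()) == len(E)
```

The last assertion reflects a general fact: every directed edge contributes exactly one to the total out-degree and one to the total in-degree.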

26 https://en.wikipedia.org/wiki/Loop_(graph_theory)
27 https://en.wikipedia.org/wiki/Multiple_edges
28 https://en.wikipedia.org/wiki/Set_(mathematics)
29 https://en.wikipedia.org/wiki/Set_(mathematics)
30 https://en.wikipedia.org/wiki/Ordered_pair
31 https://en.wikipedia.org/wiki/Quiver_(mathematics)
32 https://en.wikipedia.org/wiki/Binary_relation#Homogeneous_relation


42.2 Applications

Figure 104 The network graph formed by Wikipedia editors (edges) contributing to
different Wikipedia language versions (vertices) during one month in summer 2013.[6]

Graphs can be used to model many types of relations and processes in physical,
biological,[7][8] social and information systems. Many practical problems can be represented
by graphs. Emphasizing their application to real-world systems, the term network is some-
times defined to mean a graph in which attributes (e.g. names) are associated with the
vertices and edges; the discipline that studies real-world systems as networks is called
network science33.

33 https://en.wikipedia.org/wiki/Network_science


42.2.1 Computer science

In computer science34 , graphs are used to represent networks of communication, data or-
ganization, computational devices, the flow of computation, etc. For instance, the link
structure of a website35 can be represented by a directed graph, in which the vertices rep-
resent web pages and directed edges represent links36 from one page to another. A similar
approach can be taken to problems in social media,[9] travel, biology, computer chip design,
mapping the progression of neuro-degenerative diseases,[10][11] and many other fields. The
development of algorithms37 to handle graphs is therefore of major interest in computer sci-
ence. The transformation of graphs38 is often formalized and represented by graph rewrite
systems39 . Complementary to graph transformation40 systems focusing on rule-based in-
memory manipulation of graphs are graph databases41 geared towards transaction42 -safe,
persistent43 storing and querying of graph-structured data44 .

42.2.2 Linguistics

Graph-theoretic methods, in various forms, have proven particularly useful in linguistics45,
since natural language often lends itself well to discrete structure. Traditionally, syntax46
and compositional semantics follow tree-based structures, whose expressive power lies in
the principle of compositionality47 , modeled in a hierarchical graph. More contemporary
approaches such as head-driven phrase structure grammar48 model the syntax of natural
language using typed feature structures49 , which are directed acyclic graphs50 . Within
lexical semantics51 , especially as applied to computers, modeling word meaning is easier
when a given word is understood in terms of related words; semantic networks52 are therefore
important in computational linguistics53 . Still, other methods in phonology (e.g. optimality
theory54 , which uses lattice graphs55 ) and morphology (e.g. finite-state morphology, using
finite-state transducers56 ) are common in the analysis of language as a graph. Indeed,

34 https://en.wikipedia.org/wiki/Computer_science
35 https://en.wikipedia.org/wiki/Website
36 https://en.wikipedia.org/wiki/Hyperlink
37 https://en.wikipedia.org/wiki/Algorithm
38 https://en.wikipedia.org/wiki/Graph_transformation
39 https://en.wikipedia.org/wiki/Graph_rewriting
40 https://en.wikipedia.org/wiki/Graph_transformation
41 https://en.wikipedia.org/wiki/Graph_database
42 https://en.wikipedia.org/wiki/Database_transaction
43 https://en.wikipedia.org/wiki/Persistence_(computer_science)
44 https://en.wikipedia.org/wiki/Graph_(data_structure)
45 https://en.wikipedia.org/wiki/Linguistics
46 https://en.wikipedia.org/wiki/Syntax
47 https://en.wikipedia.org/wiki/Principle_of_compositionality
48 https://en.wikipedia.org/wiki/Head-driven_phrase_structure_grammar
49 https://en.wikipedia.org/wiki/Feature_structure
50 https://en.wikipedia.org/wiki/Directed_acyclic_graph
51 https://en.wikipedia.org/wiki/Lexical_semantics
52 https://en.wikipedia.org/wiki/Semantic_network
53 https://en.wikipedia.org/wiki/Computational_linguistics
54 https://en.wikipedia.org/wiki/Optimality_theory
55 https://en.wikipedia.org/wiki/Lattice_graph
56 https://en.wikipedia.org/wiki/Finite-state_transducer


the usefulness of this area of mathematics to linguistics has given rise to organizations such as
TextGraphs57, as well as various 'Net' projects, such as WordNet58, VerbNet59, and others.

42.2.3 Physics and chemistry

Graph theory is also used to study molecules in chemistry60 and physics61 . In condensed
matter physics62 , the three-dimensional structure of complicated simulated atomic struc-
tures can be studied quantitatively by gathering statistics on graph-theoretic properties
related to the topology of the atoms. Also, ”the Feynman graphs and rules of calculation63
summarize quantum field theory64 in a form in close contact with the experimental numbers
one wants to understand.”[12] In chemistry a graph makes a natural model for a molecule,
where vertices represent atoms65 and edges bonds66 . This approach is especially used in
computer processing of molecular structures, ranging from chemical editors67 to database
searching. In statistical physics68 , graphs can represent local connections between inter-
acting parts of a system, as well as the dynamics of a physical process on such systems.
Similarly, in computational neuroscience69 graphs can be used to represent functional con-
nections between brain areas that interact to give rise to various cognitive processes, where
the vertices represent different areas of the brain and the edges represent the connections
between those areas. Graph theory plays an important role in the electrical modeling of
electrical networks: here, weights are associated with the resistance of the wire segments
to obtain the electrical properties of network structures.[13] Graphs are also used to
represent the microscale channels of porous media70, in which the vertices represent the
pores and the edges
represent the smaller channels connecting the pores. Chemical graph theory71 uses the
molecular graph72 as a means to model molecules. Graphs and networks are excellent
models to study and understand phase transitions and critical phenomena. Removal of
nodes or edges leads to a critical transition where the network breaks into small clusters;
this breakdown is studied as a phase transition via percolation theory.[14][15]

57 http://www.textgraphs.org/
58 https://en.wikipedia.org/wiki/WordNet
59 https://en.wikipedia.org/wiki/VerbNet
60 https://en.wikipedia.org/wiki/Chemistry
61 https://en.wikipedia.org/wiki/Physics
62 https://en.wikipedia.org/wiki/Condensed_matter_physics
63 https://en.wikipedia.org/wiki/Feynman_diagram
64 https://en.wikipedia.org/wiki/Quantum_field_theory
65 https://en.wikipedia.org/wiki/Atom
66 https://en.wikipedia.org/wiki/Chemical_bond
67 https://en.wikipedia.org/wiki/Molecule_editor
68 https://en.wikipedia.org/wiki/Statistical_physics
69 https://en.wikipedia.org/wiki/Computational_neuroscience
70 https://en.wikipedia.org/wiki/Porous_medium
71 https://en.wikipedia.org/wiki/Chemical_graph_theory
72 https://en.wikipedia.org/wiki/Molecular_graph

551
Graph theory

42.2.4 Social sciences

Figure 105 Graph theory in sociology: Moreno Sociogram (1953).[16]

Graph theory is also widely used in sociology73 as a way, for example, to measure actors'
prestige74 or to explore rumor spreading75 , notably through the use of social network analy-
sis76 software. Under the umbrella of social networks are many different types of graphs.[17]
Acquaintanceship and friendship graphs describe whether people know each other. Influ-
ence graphs model whether certain people can influence the behavior of others. Finally,

73 https://en.wikipedia.org/wiki/Sociology
74 https://en.wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon
75 https://en.wikipedia.org/wiki/Rumor_spread_in_social_network
76 https://en.wikipedia.org/wiki/Social_network_analysis


collaboration graphs model whether two people work together in a particular way, such as
acting in a movie together.

42.2.5 Biology

Likewise, graph theory is useful in biology77 and conservation efforts, where vertices can represent regions in which certain species live and the edges represent migration paths or movement between the regions. This information is important when studying breeding patterns, tracking the spread of disease or parasites, or analyzing how changes to movement can affect other species.
Graph theory is also used in connectomics78 ;[18] nervous systems can be seen as a graph,
where the nodes are neurons and the edges are the connections between them.

42.2.6 Mathematics

In mathematics, graphs are useful in geometry and certain parts of topology such as knot
theory79 . Algebraic graph theory80 has close links with group theory81 . Algebraic graph
theory has been applied to many areas including dynamic systems and complexity.

42.2.7 Other topics

A graph structure can be extended by assigning a weight to each edge of the graph. Graphs
with weights, or weighted graphs82 , are used to represent structures in which pairwise
connections have some numerical values. For example, if a graph represents a road network,
the weights could represent the length of each road. There may be several weights associated
with each edge, including distance (as in the previous example), travel time, or monetary
cost. Such weighted graphs are commonly used to program GPS units and travel-planning search engines that compare flight times and costs.
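As a concrete sketch of how such a weighted graph is used, the following hypothetical Python example runs Dijkstra's shortest-path algorithm over a small made-up road network stored as an adjacency-list dictionary (the place names and distances are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Return shortest-path distances from source in a weighted graph.

    graph maps each vertex to a list of (neighbor, edge weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Edge weights as road lengths (km); the graph is undirected, so each
# edge is listed from both endpoints.
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("A", 4), ("C", 1), ("D", 5)],
    "C": [("A", 2), ("B", 1), ("D", 8)],
    "D": [("B", 5), ("C", 8)],
}

distances = dijkstra(roads, "A")
# Shortest route A -> C -> B -> D has total length 2 + 1 + 5 = 8.
```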

77 https://en.wikipedia.org/wiki/Biology
78 https://en.wikipedia.org/wiki/Connectomics
79 https://en.wikipedia.org/wiki/Knot_theory
80 https://en.wikipedia.org/wiki/Algebraic_graph_theory
81 https://en.wikipedia.org/wiki/Group_theory
82 https://en.wikipedia.org/wiki/Weighted_graph


42.3 History

Figure 106 The Königsberg Bridge problem

The paper written by Leonhard Euler83 on the Seven Bridges of Königsberg84 and published
in 1736 is regarded as the first paper in the history of graph theory.[19] This paper, as well
as the one written by Vandermonde85 on the knight problem86 , carried on with the analysis
situs initiated by Leibniz87 . Euler's formula relating the number of edges, vertices, and faces
of a convex polyhedron was studied and generalized by Cauchy88[20] and L'Huilier89 ,[21] and
represents the beginning of the branch of mathematics known as topology90 .
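Euler's argument reduces to a parity count: a connected multigraph admits a walk using every edge exactly once only if at most two vertices have odd degree. A small illustrative Python sketch checks this directly on the Königsberg multigraph (the land-mass labels A-D are our own):

```python
from collections import Counter

# The four Königsberg land masses A, B, C, D and the seven bridges,
# given as a multigraph edge list (duplicate pairs are parallel bridges).
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_vertices = [v for v, d in degree.items() if d % 2 == 1]

# Eulerian trail: 0 or 2 odd-degree vertices; Eulerian circuit: none.
has_eulerian_trail = len(odd_vertices) in (0, 2)
has_eulerian_circuit = len(odd_vertices) == 0
# All four land masses have odd degree, so no walk crossing each
# bridge exactly once exists, which is Euler's conclusion.
```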

83 https://en.wikipedia.org/wiki/Leonhard_Euler
84 https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg
85 https://en.wikipedia.org/wiki/Alexandre-Th%C3%A9ophile_Vandermonde
86 https://en.wikipedia.org/wiki/Knight%27s_tour
87 https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz
88 https://en.wikipedia.org/wiki/Augustin-Louis_Cauchy
89 https://en.wikipedia.org/wiki/Simon_Antoine_Jean_L%27Huilier
90 https://en.wikipedia.org/wiki/Topology


More than one century after Euler's paper on the bridges of Königsberg91 and while Listing92
was introducing the concept of topology, Cayley93 was led by an interest in particular
analytical forms arising from differential calculus94 to study a particular class of graphs, the
trees95 .[22] This study had many implications for theoretical chemistry96 . The techniques he
used mainly concern the enumeration of graphs97 with particular properties. Enumerative
graph theory then arose from the results of Cayley and the fundamental results published
by Pólya98 between 1935 and 1937. These were generalized by De Bruijn99 in 1959. Cayley
linked his results on trees with contemporary studies of chemical composition.[23] The fusion
of ideas from mathematics with those from chemistry began what has become part of the
standard terminology of graph theory.
In particular, the term ”graph” was introduced by Sylvester100 in a paper published in 1878
in Nature101 , where he draws an analogy between ”quantic invariants” and ”co-variants” of
algebra and molecular diagrams:[24]
”[…] Every invariant and co-variant thus becomes expressible by a graph precisely iden-
tical with a Kekuléan102 diagram or chemicograph. […] I give a rule for the geometrical
multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants
whose separate graphs are given. […]” (italics as in the original).
The first textbook on graph theory was written by Dénes Kőnig103 , and published in 1936.[25]
Another book by Frank Harary104 , published in 1969, was ”considered the world over to be
the definitive textbook on the subject”,[26] and enabled mathematicians, chemists, electrical
engineers and social scientists to talk to each other. Harary donated all of the royalties to
fund the Pólya Prize105 .[27]
One of the most famous and stimulating problems in graph theory is the four color prob-
lem106 : ”Is it true that any map drawn in the plane may have its regions colored with four
colors, in such a way that any two regions having a common border have different colors?”
This problem was first posed by Francis Guthrie107 in 1852 and its first written record is in
a letter of De Morgan108 addressed to Hamilton109 the same year. Many incorrect proofs
have been proposed, including those by Cayley, Kempe110 , and others. The study and

91 https://en.wikipedia.org/wiki/K%C3%B6nigsberg
92 https://en.wikipedia.org/wiki/Johann_Benedict_Listing
93 https://en.wikipedia.org/wiki/Arthur_Cayley
94 https://en.wikipedia.org/wiki/Differential_calculus
95 https://en.wikipedia.org/wiki/Tree_(graph_theory)
96 https://en.wikipedia.org/wiki/Chemistry
97 https://en.wikipedia.org/wiki/Enumeration_of_graphs
98 https://en.wikipedia.org/wiki/George_P%C3%B3lya
99 https://en.wikipedia.org/wiki/Nicolaas_Govert_de_Bruijn
100 https://en.wikipedia.org/wiki/James_Joseph_Sylvester
101 https://en.wikipedia.org/wiki/Nature_(journal)
102 https://en.wikipedia.org/wiki/August_Kekul%C3%A9
103 https://en.wikipedia.org/wiki/D%C3%A9nes_K%C5%91nig
104 https://en.wikipedia.org/wiki/Frank_Harary
105 https://en.wikipedia.org/wiki/George_P%C3%B3lya_Prize
106 https://en.wikipedia.org/wiki/Four_color_problem
107 https://en.wikipedia.org/wiki/Francis_Guthrie
108 https://en.wikipedia.org/wiki/Augustus_De_Morgan
109 https://en.wikipedia.org/wiki/William_Rowan_Hamilton
110 https://en.wikipedia.org/wiki/Alfred_Kempe


the generalization of this problem by Tait111 , Heawood112 , Ramsey113 and Hadwiger114 led
to the study of the colorings of the graphs embedded on surfaces with arbitrary genus115 .
Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen116 and Kőnig117 . The works of Ramsey on colorings and, especially, the results obtained by Turán118 in 1941 were at the origin of another branch of graph theory, extremal graph theory119 .
The four color problem remained unsolved for more than a century. In 1969 Heinrich
Heesch120 published a method for solving the problem using computers.[28] A computer-aided
proof produced in 1976 by Kenneth Appel121 and Wolfgang Haken122 makes fundamental
use of the notion of ”discharging” developed by Heesch.[29][30] The proof involved checking
the properties of 1,936 configurations by computer, and was not fully accepted at the time
due to its complexity. A simpler proof considering only 633 configurations was given twenty
years later by Robertson123 , Seymour124 , Sanders125 and Thomas126 .[31]
The autonomous development of topology between 1860 and 1930 fertilized graph theory back
through the works of Jordan127 , Kuratowski128 and Whitney129 . Another important factor of
common development of graph theory and topology130 came from the use of the techniques
of modern algebra. The first example of such a use comes from the work of the physicist
Gustav Kirchhoff131 , who published in 1845 his Kirchhoff's circuit laws132 for calculating
the voltage133 and current134 in electric circuits135 .
The introduction of probabilistic methods in graph theory, especially in the study by Erdős136
and Rényi137 of the asymptotic probability of graph connectivity, gave rise to yet another

111 https://en.wikipedia.org/wiki/Peter_Tait_(physicist)
112 https://en.wikipedia.org/wiki/Percy_John_Heawood
113 https://en.wikipedia.org/wiki/Frank_P._Ramsey
114 https://en.wikipedia.org/wiki/Hugo_Hadwiger
115 https://en.wikipedia.org/wiki/Genus_(mathematics)
116 https://en.wikipedia.org/wiki/Julius_Petersen
117 https://en.wikipedia.org/wiki/D%C3%A9nes_K%C5%91nig
118 https://en.wikipedia.org/wiki/P%C3%A1l_Tur%C3%A1n
119 https://en.wikipedia.org/wiki/Extremal_graph_theory
120 https://en.wikipedia.org/wiki/Heinrich_Heesch
121 https://en.wikipedia.org/wiki/Kenneth_Appel
122 https://en.wikipedia.org/wiki/Wolfgang_Haken
123 https://en.wikipedia.org/wiki/Neil_Robertson_(mathematician)
124 https://en.wikipedia.org/wiki/Paul_Seymour_(mathematician)
125 https://en.wikipedia.org/wiki/Daniel_P._Sanders
126 https://en.wikipedia.org/wiki/Robin_Thomas_(mathematician)
127 https://en.wikipedia.org/wiki/Camille_Jordan
128 https://en.wikipedia.org/wiki/Kazimierz_Kuratowski
129 https://en.wikipedia.org/wiki/Hassler_Whitney
130 https://en.wikipedia.org/wiki/Topology
131 https://en.wikipedia.org/wiki/Gustav_Kirchhoff
132 https://en.wikipedia.org/wiki/Kirchhoff%27s_circuit_laws
133 https://en.wikipedia.org/wiki/Voltage
134 https://en.wikipedia.org/wiki/Electric_current
135 https://en.wikipedia.org/wiki/Electric_circuit
136 https://en.wikipedia.org/wiki/Paul_Erd%C5%91s
137 https://en.wikipedia.org/wiki/Alfr%C3%A9d_R%C3%A9nyi


branch, known as random graph theory138 , which has been a fruitful source of graph-theoretic
results.

42.4 Graph drawing

Main article: Graph drawing139
Graphs are represented visually by drawing a point or circle
for every vertex, and drawing a line between two vertices if they are connected by an edge.
If the graph is directed, the direction is indicated by drawing an arrow.
A graph drawing should not be confused with the graph itself (the abstract, non-visual
structure) as there are several ways to structure the graph drawing. All that matters is
which vertices are connected to which others by how many edges and not the exact layout. In
practice, it is often difficult to decide if two drawings represent the same graph. Depending
on the problem domain some layouts may be better suited and easier to understand than
others.
The pioneering work of W. T. Tutte140 was very influential on the subject of graph drawing.
Among other achievements, he introduced the use of linear algebraic methods to obtain
graph drawings.
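One well-known method in this line is the barycentric (spring) embedding associated with Tutte's work: pin the outer face to a convex polygon and place every interior vertex at the average of its neighbors. The following is an illustrative fixed-point-iteration sketch of that idea, not Tutte's original formulation; the example graph is our own:

```python
def tutte_embedding(adjacency, fixed, iterations=500):
    """Iteratively move each non-pinned vertex to its neighbors' centroid.

    The fixed point solves the linear system of Tutte's embedding; for a
    3-connected planar graph with the outer face pinned to a convex
    polygon, the result is a planar straight-line drawing.
    """
    pos = dict(fixed)
    interior = [v for v in adjacency if v not in fixed]
    for v in interior:
        pos[v] = (0.0, 0.0)  # arbitrary start; iteration converges
    for _ in range(iterations):
        for v in interior:
            nbrs = adjacency[v]
            pos[v] = (sum(pos[u][0] for u in nbrs) / len(nbrs),
                      sum(pos[u][1] for u in nbrs) / len(nbrs))
    return pos

# K4: outer triangle 0, 1, 2 pinned, interior vertex 3 adjacent to all.
adjacency = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
pinned = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0)}
positions = tutte_embedding(adjacency, pinned)
# Vertex 3 lands at the centroid of the outer triangle, (1/3, 1/3).
```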
Graph drawing can also be said to encompass problems that deal with the crossing number141 and its various generalizations. The crossing number of a graph is the minimum
number of intersections between edges that a drawing of the graph in the plane must con-
tain. For a planar graph142 , the crossing number is zero by definition.
Drawings on surfaces other than the plane are also studied.

42.5 Graph-theoretic data structures

Main article: Graph (abstract data type)143
There are different ways to store graphs in a
computer system. The data structure144 used depends on both the graph structure and the
algorithm145 used for manipulating the graph. Theoretically one can distinguish between
list and matrix structures but in concrete applications the best structure is often a combi-
nation of both. List structures are often preferred for sparse graphs146 as they have smaller
memory requirements. Matrix147 structures on the other hand provide faster access for
some applications but can consume huge amounts of memory. Implementations of sparse
matrix structures that are efficient on modern parallel computer architectures are an object
of current investigation.[32]

138 https://en.wikipedia.org/wiki/Random_graph
139 https://en.wikipedia.org/wiki/Graph_drawing
140 https://en.wikipedia.org/wiki/W._T._Tutte
141 https://en.wikipedia.org/wiki/Crossing_number_(graph_theory)
142 https://en.wikipedia.org/wiki/Planar_graph
143 https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
144 https://en.wikipedia.org/wiki/Data_structure
145 https://en.wikipedia.org/wiki/Algorithm
146 https://en.wikipedia.org/wiki/Sparse_graph
147 https://en.wikipedia.org/wiki/Matrix(mathematics)


List structures include the incidence list148 , an array of pairs of vertices, and the adjacency
list149 , which separately lists the neighbors of each vertex: much like the incidence list, each vertex has a list of the vertices it is adjacent to.
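A minimal Python sketch of these two list structures, on a made-up example graph (a 4-cycle with one chord), might look as follows:

```python
from collections import defaultdict

# Incidence list: simply an array of vertex pairs, one per edge.
incidence_list = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Adjacency list: for each vertex, the list of its neighbors,
# built here from the incidence list (undirected, so both directions).
adjacency_list = defaultdict(list)
for u, v in incidence_list:
    adjacency_list[u].append(v)
    adjacency_list[v].append(u)
```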
Matrix structures include the incidence matrix150 , a matrix of 0's and 1's whose rows rep-
resent vertices and whose columns represent edges, and the adjacency matrix151 , in which
both the rows and columns are indexed by vertices. In both cases a 1 indicates two ad-
jacent objects and a 0 indicates two non-adjacent objects. The degree matrix152 indicates
the degree of vertices. The Laplacian matrix153 is a modified form of the adjacency matrix
that incorporates information about the degrees154 of the vertices, and is useful in some
calculations such as Kirchhoff's theorem155 on the number of spanning trees156 of a graph.
The distance matrix157 , like the adjacency matrix, has both its rows and columns indexed
by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a
shortest path158 between two vertices.
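The matrix structures above, and the use of the Laplacian in Kirchhoff's theorem, can be sketched in a few lines of stdlib-only Python (the example graph K4 and the small `det` helper are our own; Cayley's formula n^(n-2) predicts 16 spanning trees for K4):

```python
from fractions import Fraction

def det(m):
    """Determinant by Gaussian elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in m]
    n = len(m)
    result = Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            result = -result  # a row swap flips the sign
        result *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return int(result)

# Adjacency matrix of the complete graph K4 (1 = adjacent, 0 = not).
n = 4
A = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
degrees = [sum(row) for row in A]

# Laplacian matrix: degree matrix minus adjacency matrix.
L = [[(degrees[i] if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]

# Kirchhoff's theorem: any cofactor of L counts the spanning trees.
minor = [row[1:] for row in L[1:]]  # delete row 0 and column 0
spanning_trees = det(minor)
```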

42.6 Problems

42.6.1 Enumeration

There is a large literature on graphical enumeration159 : the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).

42.6.2 Subgraphs, induced subgraphs, and minors

A common problem, called the subgraph isomorphism problem160 , is finding a fixed graph
as a subgraph161 in a given graph. One reason to be interested in such a question is that
many graph properties162 are hereditary for subgraphs, which means that a graph has the
property if and only if all subgraphs have it too. Unfortunately, finding maximal subgraphs
of a certain kind is often an NP-complete problem163 . For example:
• Finding the largest complete subgraph is called the clique problem164 (NP-complete).
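To make the difficulty concrete, the obvious exhaustive approach to the clique problem tests every vertex subset, largest first, which is exponential in the number of vertices. A naive illustrative sketch (fine for tiny graphs, hopeless in general):

```python
from itertools import combinations

def largest_clique(vertices, edges):
    """Brute-force search for a largest complete subgraph."""
    adj = {frozenset(e) for e in edges}
    for k in range(len(vertices), 0, -1):
        for subset in combinations(vertices, k):
            # A subset is a clique if every pair of its vertices is adjacent.
            if all(frozenset((u, v)) in adj
                   for u, v in combinations(subset, 2)):
                return set(subset)
    return set()

# A triangle {0, 1, 2} plus a pendant vertex 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
clique = largest_clique([0, 1, 2, 3], edges)
```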

148 https://en.wikipedia.org/wiki/Incidence_list
149 https://en.wikipedia.org/wiki/Adjacency_list
150 https://en.wikipedia.org/wiki/Incidence_matrix
151 https://en.wikipedia.org/wiki/Adjacency_matrix
152 https://en.wikipedia.org/wiki/Degree_matrix
153 https://en.wikipedia.org/wiki/Laplacian_matrix
154 https://en.wikipedia.org/wiki/Degree_(graph_theory)
155 https://en.wikipedia.org/wiki/Kirchhoff%27s_theorem
156 https://en.wikipedia.org/wiki/Spanning_tree
157 https://en.wikipedia.org/wiki/Distance_matrix
158 https://en.wikipedia.org/wiki/Shortest_path
159 https://en.wikipedia.org/wiki/Graphical_enumeration
160 https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
161 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#Subgraphs
162 https://en.wikipedia.org/wiki/Graph_properties
163 https://en.wikipedia.org/wiki/NP-complete_problem
164 https://en.wikipedia.org/wiki/Clique_problem


One special case of subgraph isomorphism is the graph isomorphism problem165 . It asks
whether two graphs are isomorphic. It is not known whether this problem is NP-complete,
nor whether it can be solved in polynomial time.
A similar problem is finding induced subgraphs166 in a given graph. Again, some important
graph properties are hereditary with respect to induced subgraphs, which means that a
graph has a property if and only if all induced subgraphs also have it. Finding maximal
induced subgraphs of a certain kind is also often NP-complete. For example:
• Finding the largest edgeless induced subgraph or independent set167 is called the inde-
pendent set problem168 (NP-complete).
Still another such problem, the minor containment problem, is to find a fixed graph as a
minor of a given graph. A minor169 or subcontraction of a graph is any graph obtained
by taking a subgraph and contracting some (or no) edges. Many graph properties are
hereditary for minors, which means that a graph has a property if and only if all minors
have it too. For example, Wagner's Theorem170 states:
• A graph is planar171 if it contains as a minor neither the complete bipartite graph172 K3,3
(see the Three-cottage problem173 ) nor the complete graph K5 .
A similar problem, the subdivision containment problem, is to find a fixed graph as a
subdivision174 of a given graph. A subdivision175 or homeomorphism176 of a graph is any
graph obtained by subdividing some (or no) edges. Subdivision containment is related to
graph properties such as planarity177 . For example, Kuratowski's Theorem178 states:
• A graph is planar179 if it contains as a subdivision neither the complete bipartite graph180
K3,3 nor the complete graph181 K5 .
Another problem in subdivision containment is the Kelmans–Seymour conjecture182 :

165 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
166 https://en.wikipedia.org/wiki/Induced_subgraph
167 https://en.wikipedia.org/wiki/Independent_set_(graph_theory)
168 https://en.wikipedia.org/wiki/Independent_set_problem
169 https://en.wikipedia.org/wiki/Minor_(graph_theory)
170 https://en.wikipedia.org/wiki/Wagner%27s_theorem
171 https://en.wikipedia.org/wiki/Planar_graph
172 https://en.wikipedia.org/wiki/Complete_bipartite_graph
173 https://en.wikipedia.org/wiki/Three-cottage_problem
174 https://en.wikipedia.org/wiki/Subdivision_(graph_theory)
175 https://en.wikipedia.org/wiki/Subdivision_(graph_theory)
176 https://en.wikipedia.org/wiki/Homeomorphism_(graph_theory)
177 https://en.wikipedia.org/wiki/Planarity_(graph_theory)
178 https://en.wikipedia.org/wiki/Kuratowski%27s_theorem
179 https://en.wikipedia.org/wiki/Planar_graph
180 https://en.wikipedia.org/wiki/Complete_bipartite_graph
181 https://en.wikipedia.org/wiki/Complete_graph
182 https://en.wikipedia.org/wiki/Kelmans%E2%80%93Seymour_conjecture


• Every 5-vertex-connected183 graph that is not planar184 contains a subdivision185 of the 5-vertex complete graph186 K5 .
Another class of problems has to do with the extent to which various species and general-
izations of graphs are determined by their point-deleted subgraphs. For example:
• The reconstruction conjecture187

42.6.3 Graph coloring

Main article: Graph coloring188
Many problems and theorems in graph theory have to do
with various ways of coloring graphs. Typically, one is interested in coloring a graph so that
no two adjacent vertices have the same color, or with other similar restrictions. One may
also consider coloring edges (possibly so that no two coincident edges are the same color),
or other variations. Among the famous results and conjectures concerning graph coloring
are the following:
• Four-color theorem189
• Strong perfect graph theorem190
• Erdős–Faber–Lovász conjecture191 (unsolved)
• Total coloring conjecture192 , also called Behzad193 's conjecture (unsolved)
• List coloring conjecture194 (unsolved)
• Hadwiger conjecture (graph theory)195 (unsolved)
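Alongside these deep results, the simplest practical approach is the greedy heuristic (not one of the results above): scan the vertices in some order and give each the smallest color not used by its already-colored neighbors. It uses at most one more color than the maximum degree but is not optimal in general. A hypothetical sketch:

```python
def greedy_coloring(adjacency):
    """Proper vertex coloring: adjacent vertices get different colors."""
    color = {}
    for v in adjacency:
        taken = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in taken:  # smallest color not used by a neighbor
            c += 1
        color[v] = c
    return color

# A 5-cycle (an odd cycle): its chromatic number is 3.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(cycle5)
```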

42.6.4 Subsumption and unification

Constraint modeling theories concern families of directed graphs related by a partial order196 . In these applications, graphs are ordered by specificity, meaning that more
constrained graphs—which are more specific and thus contain a greater amount of
information—are subsumed by those that are more general. Operations between graphs
include evaluating the direction of a subsumption relationship between two graphs, if any,
and computing graph unification. The unification of two argument graphs is defined as the
most general graph (or the computation thereof) that is consistent with (i.e. contains all
of the information in) the inputs, if such a graph exists; efficient unification algorithms are
known.

183 https://en.wikipedia.org/wiki/K-vertex-connected_graph
184 https://en.wikipedia.org/wiki/Planar_graph
185 https://en.wikipedia.org/wiki/Homeomorphism_(graph_theory)
186 https://en.wikipedia.org/wiki/Complete_graph
187 https://en.wikipedia.org/wiki/Reconstruction_conjecture
188 https://en.wikipedia.org/wiki/Graph_coloring
189 https://en.wikipedia.org/wiki/Four-color_theorem
190 https://en.wikipedia.org/wiki/Strong_perfect_graph_theorem
191 https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Faber%E2%80%93Lov%C3%A1sz_conjecture
192 https://en.wikipedia.org/wiki/Total_coloring
193 https://en.wikipedia.org/wiki/Mehdi_Behzad
194 https://en.wikipedia.org/wiki/List_edge-coloring
195 https://en.wikipedia.org/wiki/Hadwiger_conjecture_(graph_theory)
196 https://en.wikipedia.org/wiki/Partial_order


For constraint frameworks which are strictly compositional197 , graph unification is the suf-
ficient satisfiability and combination function. Well-known applications include automatic
theorem proving198 and modeling the elaboration of linguistic structure199 .

42.6.5 Route problems


• Hamiltonian path problem200
• Minimum spanning tree201
• Route inspection problem202 (also called the ”Chinese postman problem”)
• Seven bridges of Königsberg203
• Shortest path problem204
• Steiner tree205
• Three-cottage problem206
• Traveling salesman problem207 (NP-hard)

42.6.6 Network flow

There are numerous problems arising especially from applications that have to do with
various notions of flows in networks208 , for example:
• Max flow min cut theorem209
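As an illustration, the following hypothetical sketch computes a maximum flow with BFS augmenting paths (the Edmonds-Karp variant of Ford-Fulkerson); by the max-flow min-cut theorem, the value found equals the capacity of a minimum s-t cut. The example network is made up:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Maximum s-t flow via shortest augmenting paths in the residual graph.

    capacity is a dict of dicts: capacity[u][v] is the capacity of edge u->v.
    """
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:              # ensure reverse edges exist with capacity 0
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}          # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow             # no augmenting path left: flow is maximum
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:           # push flow, update residual capacities
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

network = {
    "s": {"a": 3, "b": 2},
    "a": {"b": 1, "t": 2},
    "b": {"t": 3},
    "t": {},
}
value = max_flow(network, "s", "t")
# The cut {s} vs the rest has capacity 3 + 2 = 5, matching the flow value.
```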

42.6.7 Visibility problems


• Museum guard problem210

42.6.8 Covering problems

Covering problems211 in graphs may refer to various set cover problems212 on subsets of
vertices/subgraphs.

197 https://en.wikipedia.org/wiki/Principle_of_Compositionality
198 https://en.wikipedia.org/wiki/Automatic_theorem_prover
199 https://en.wikipedia.org/wiki/Parsing
200 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
201 https://en.wikipedia.org/wiki/Minimum_spanning_tree
202 https://en.wikipedia.org/wiki/Route_inspection_problem
203 https://en.wikipedia.org/wiki/Seven_bridges_of_K%C3%B6nigsberg
204 https://en.wikipedia.org/wiki/Shortest_path_problem
205 https://en.wikipedia.org/wiki/Steiner_tree
206 https://en.wikipedia.org/wiki/Three-cottage_problem
207 https://en.wikipedia.org/wiki/Traveling_salesman_problem
208 https://en.wikipedia.org/wiki/Flow_network
209 https://en.wikipedia.org/wiki/Max_flow_min_cut_theorem
210 https://en.wikipedia.org/wiki/Museum_guard_problem
211 https://en.wikipedia.org/wiki/Covering_problem
212 https://en.wikipedia.org/wiki/Set_cover_problem


• The dominating set213 problem is the special case of the set cover problem214 in which the sets are the closed neighborhoods215 of vertices.
• The vertex cover problem216 is the special case of the set cover problem217 in which the sets to be covered are the edges of the graph.
• The original set cover problem218 , also called the hitting set problem, can be described as a vertex cover in a hypergraph.
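Although vertex cover is NP-complete, it has a classical 2-approximation: repeatedly pick an uncovered edge and add both its endpoints. A short illustrative sketch (the example graph is our own):

```python
def vertex_cover_2approx(edges):
    """Greedy matching-based cover: at most twice the optimum size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover

# A 4-cycle: an optimal cover has size 2; the approximation may use 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vertex_cover_2approx(edges)
```

The chosen endpoint pairs form a matching, and any cover must contain at least one endpoint of each matched edge, which is where the factor-2 guarantee comes from.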

42.6.9 Decomposition problems

Decomposition, defined as partitioning the edge set of a graph (with as many vertices as
necessary accompanying the edges of each part of the partition), gives rise to a wide variety of questions. Often, it is required to decompose a graph into subgraphs isomorphic to a fixed
graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems
specify a family of graphs into which a given graph should be decomposed, for instance,
a family of cycles, or decomposing a complete graph Kn into n − 1 specified trees having,
respectively, 1, 2, 3, ..., n − 1 edges.
Some specific decomposition problems that have been studied include:
• Arboricity219 , a decomposition into as few forests as possible
• Cycle double cover220 , a decomposition into a collection of cycles covering each edge
exactly twice
• Edge coloring221 , a decomposition into as few matchings222 as possible
• Graph factorization223 , a decomposition of a regular graph224 into regular subgraphs of
given degrees

42.6.10 Graph classes

Many problems involve characterizing the members of various classes of graphs. Some
examples of such questions are below:
• Enumerating225 the members of a class
• Characterizing a class in terms of forbidden substructures226
• Ascertaining relationships among classes (e.g. does one property of graphs imply another)

213 https://en.wikipedia.org/wiki/Dominating_set
214 https://en.wikipedia.org/wiki/Set_cover_problem
215 https://en.wikipedia.org/wiki/Neighbourhood_(graph_theory)
216 https://en.wikipedia.org/wiki/Vertex_cover_problem
217 https://en.wikipedia.org/wiki/Set_cover_problem
218 https://en.wikipedia.org/wiki/Set_cover_problem
219 https://en.wikipedia.org/wiki/Arboricity
220 https://en.wikipedia.org/wiki/Cycle_double_cover
221 https://en.wikipedia.org/wiki/Edge_coloring
222 https://en.wikipedia.org/wiki/Matching_(graph_theory)
223 https://en.wikipedia.org/wiki/Graph_factorization
224 https://en.wikipedia.org/wiki/Regular_graph
225 https://en.wikipedia.org/wiki/Graph_enumeration
226 https://en.wikipedia.org/wiki/Forbidden_graph_characterization


• Finding efficient algorithms227 to decide228 membership in a class


• Finding representations229 for members of a class

42.7 See also


• Gallery of named graphs230
• Glossary of graph theory231
• List of graph theory topics232
• List of unsolved problems in graph theory233
• Publications in graph theory234

42.7.1 Related topics


• Algebraic graph theory235
• Citation graph236
• Conceptual graph237
• Data structure238
• Disjoint-set data structure239
• Dual-phase evolution240
• Entitative graph241
• Existential graph242
• Graph algebra243
• Graph automorphism244
• Graph coloring245
• Graph database246
• Graph data structure247
• Graph drawing248

227 https://en.wikipedia.org/wiki/Algorithm
228 https://en.wikipedia.org/wiki/Decision_problem
229 https://en.wikipedia.org/wiki/Representation_(mathematics)
230 https://en.wikipedia.org/wiki/Gallery_of_named_graphs
231 https://en.wikipedia.org/wiki/Glossary_of_graph_theory
232 https://en.wikipedia.org/wiki/List_of_graph_theory_topics
233 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_graph_theory
234 https://en.wikipedia.org/wiki/List_of_publications_in_mathematics#Graph_theory
235 https://en.wikipedia.org/wiki/Algebraic_graph_theory
236 https://en.wikipedia.org/wiki/Citation_graph
237 https://en.wikipedia.org/wiki/Conceptual_graph
238 https://en.wikipedia.org/wiki/Data_structure
239 https://en.wikipedia.org/wiki/Disjoint-set_data_structure
240 https://en.wikipedia.org/wiki/Dual-phase_evolution
241 https://en.wikipedia.org/wiki/Entitative_graph
242 https://en.wikipedia.org/wiki/Existential_graph
243 https://en.wikipedia.org/wiki/Graph_algebra
244 https://en.wikipedia.org/wiki/Graph_automorphism
245 https://en.wikipedia.org/wiki/Graph_coloring
246 https://en.wikipedia.org/wiki/Graph_database
247 https://en.wikipedia.org/wiki/Graph_(data_structure)
248 https://en.wikipedia.org/wiki/Graph_drawing


• Graph equation249
• Graph rewriting250
• Graph sandwich problem251
• Graph property252
• Intersection graph253
• Knight's Tour254
• Logical graph255
• Loop256
• Network theory257
• Null graph258
• Pebble motion problems259
• Percolation260
• Perfect graph261
• Quantum graph262
• Random regular graphs263
• Semantic networks264
• Spectral graph theory265
• Strongly regular graphs266
• Symmetric graphs267
• Transitive reduction268
• Tree data structure269

42.7.2 Algorithms
• Bellman–Ford algorithm270
• Borůvka's algorithm271
• Breadth-first search272

249 https://en.wikipedia.org/wiki/Graph_equation
250 https://en.wikipedia.org/wiki/Graph_rewriting
251 https://en.wikipedia.org/wiki/Graph_sandwich_problem
252 https://en.wikipedia.org/wiki/Graph_property
253 https://en.wikipedia.org/wiki/Intersection_graph
254 https://en.wikipedia.org/wiki/Knight%27s_Tour
255 https://en.wikipedia.org/wiki/Logical_graph
256 https://en.wikipedia.org/wiki/Loop_(graph_theory)
257 https://en.wikipedia.org/wiki/Network_theory
258 https://en.wikipedia.org/wiki/Null_graph
259 https://en.wikipedia.org/wiki/Pebble_motion_problems
260 https://en.wikipedia.org/wiki/Percolation
261 https://en.wikipedia.org/wiki/Perfect_graph
262 https://en.wikipedia.org/wiki/Quantum_graph
263 https://en.wikipedia.org/wiki/Random_regular_graph
264 https://en.wikipedia.org/wiki/Semantic_networks
265 https://en.wikipedia.org/wiki/Spectral_graph_theory
266 https://en.wikipedia.org/wiki/Strongly_regular_graph
267 https://en.wikipedia.org/wiki/Symmetric_graph
268 https://en.wikipedia.org/wiki/Transitive_reduction
269 https://en.wikipedia.org/wiki/Tree_(data_structure)
270 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
271 https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm
272 https://en.wikipedia.org/wiki/Breadth-first_search


• Depth-first search273
• Dijkstra's algorithm274
• Edmonds–Karp algorithm275
• Floyd–Warshall algorithm276
• Ford–Fulkerson algorithm277
• Hopcroft–Karp algorithm278
• Hungarian algorithm279
• Kosaraju's algorithm280
• Kruskal's algorithm281
• Nearest neighbour algorithm282
• Network simplex algorithm283
• Planarity testing algorithms284
• Prim's algorithm285
• Push–relabel maximum flow algorithm286
• Tarjan's strongly connected components algorithm287
• Topological sorting288

42.7.3 Subareas
• Algebraic graph theory289
• Geometric graph theory290
• Extremal graph theory291
• Probabilistic graph theory292
• Topological graph theory293

42.7.4 Related areas of mathematics


• Combinatorics294

273 https://en.wikipedia.org/wiki/Depth-first_search
274 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
275 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
276 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
277 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
278 https://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm
279 https://en.wikipedia.org/wiki/Hungarian_algorithm
280 https://en.wikipedia.org/wiki/Kosaraju%27s_algorithm
281 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
282 https://en.wikipedia.org/wiki/Nearest_neighbour_algorithm
283 https://en.wikipedia.org/wiki/Network_simplex_algorithm
284 https://en.wikipedia.org/wiki/Planarity_testing#Algorithms
285 https://en.wikipedia.org/wiki/Prim%27s_algorithm
286 https://en.wikipedia.org/wiki/Push%E2%80%93relabel_maximum_flow_algorithm
287 https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
288 https://en.wikipedia.org/wiki/Topological_sorting
289 https://en.wikipedia.org/wiki/Algebraic_graph_theory
290 https://en.wikipedia.org/wiki/Geometric_graph_theory
291 https://en.wikipedia.org/wiki/Extremal_graph_theory
292 https://en.wikipedia.org/wiki/Random_graph
293 https://en.wikipedia.org/wiki/Topological_graph_theory
294 https://en.wikipedia.org/wiki/Combinatorics


• Group theory295
• Knot theory296
• Ramsey theory297

42.7.5 Generalizations
• Hypergraph298
• Abstract simplicial complex299

42.7.6 Prominent graph theorists


• Alon, Noga300
• Berge, Claude301
• Bollobás, Béla302
• Bondy, Adrian John303
• Brightwell, Graham304
• Chudnovsky, Maria305
• Chung, Fan306
• Dirac, Gabriel Andrew307
• Erdős, Paul308
• Euler, Leonhard309
• Faudree, Ralph310
• Fleischner, Herbert311
• Golumbic, Martin312
• Graham, Ronald313
• Harary, Frank314
• Heawood, Percy John315
• Kotzig, Anton316

295 https://en.wikipedia.org/wiki/Group_theory
296 https://en.wikipedia.org/wiki/Knot_theory
297 https://en.wikipedia.org/wiki/Ramsey_theory
298 https://en.wikipedia.org/wiki/Hypergraph
299 https://en.wikipedia.org/wiki/Abstract_simplicial_complex
300 https://en.wikipedia.org/wiki/Noga_Alon
301 https://en.wikipedia.org/wiki/Claude_Berge
302 https://en.wikipedia.org/wiki/B%C3%A9la_Bollob%C3%A1s
303 https://en.wikipedia.org/wiki/John_Adrian_Bondy
304 https://en.wikipedia.org/wiki/Graham_Brightwell
305 https://en.wikipedia.org/wiki/Maria_Chudnovsky
306 https://en.wikipedia.org/wiki/Fan_Chung
307 https://en.wikipedia.org/wiki/Gabriel_Andrew_Dirac
308 https://en.wikipedia.org/wiki/Paul_Erd%C5%91s
309 https://en.wikipedia.org/wiki/Leonhard_Euler
310 https://en.wikipedia.org/wiki/Ralph_Faudree
311 https://en.wikipedia.org/wiki/Herbert_Fleischner
312 https://en.wikipedia.org/wiki/Martin_Charles_Golumbic
313 https://en.wikipedia.org/wiki/Ronald_Graham
314 https://en.wikipedia.org/wiki/Frank_Harary
315 https://en.wikipedia.org/wiki/Percy_John_Heawood
316 https://en.wikipedia.org/wiki/Anton_Kotzig


• Kőnig, Dénes317
• Lovász, László318
• Murty, U. S. R.319
• Nešetřil, Jaroslav320
• Rényi, Alfréd321
• Ringel, Gerhard322
• Robertson, Neil323
• Seymour, Paul324
• Sudakov, Benny325
• Szemerédi, Endre326
• Thomas, Robin327
• Thomassen, Carsten328
• Turán, Pál329
• Tutte, W. T.330
• Whitney, Hassler331

42.8 Notes
1. Bender & Williamson 2010332 , p. 148.
2. See, for instance, Iyanaga and Kawada, 69 J, p. 234 or Biggs, p. 4.
3. Bender & Williamson 2010333 , p. 149.
4. See, for instance, Graham et al., p. 5.
5. Bender & Williamson 2010334 , p. 161.
6. H, S A. (2013). ”M  W E”.
Proceedings of the 2014 ACM Conference on Web Science - WebSci

317 https://en.wikipedia.org/wiki/D%C3%A9nes_K%C5%91nig
318 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Lov%C3%A1sz
319 https://en.wikipedia.org/wiki/U._S._R._Murty
320 https://en.wikipedia.org/wiki/Jaroslav_Ne%C5%A1et%C5%99il
321 https://en.wikipedia.org/wiki/Alfr%C3%A9d_R%C3%A9nyi
322 https://en.wikipedia.org/wiki/Gerhard_Ringel
323 https://en.wikipedia.org/wiki/Neil_Robertson_(mathematician)
324 https://en.wikipedia.org/wiki/Paul_Seymour_(mathematician)
325 https://en.wikipedia.org/wiki/Benny_Sudakov
326 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
327 https://en.wikipedia.org/wiki/Robin_Thomas_(mathematician)
328 https://en.wikipedia.org/wiki/Carsten_Thomassen
329 https://en.wikipedia.org/wiki/P%C3%A1l_Tur%C3%A1n
330 https://en.wikipedia.org/wiki/W._T._Tutte
331 https://en.wikipedia.org/wiki/Hassler_Whitney
332 #CITEREFBenderWilliamson2010
333 #CITEREFBenderWilliamson2010
334 #CITEREFBenderWilliamson2010


'14: 99–108. arXiv335 :1312.0976336 . Bibcode337 :2013arXiv1312.0976H338 .


doi339 :10.1145/2615569.2615684340 . ISBN341 9781450326223342 .
7. M, A.;  . (2004). ”I    
”. European Physical Journal B. 41 (1): 113–121. arXiv343 :cond-
mat/0304207 . 344 345 346 347
Bibcode :2004EPJB...41..113M . doi :10.1140/epjb/e2004-
00301-0348 .
8. S, P; A, A; M, F; P, A; K, L;
O, K; D, S R; S, J M; S, R T
(2019-07-01). ”C       
 ”. Brain. 142 (7): 1955–1972. doi349 :10.1093/brain/awz125350 .
ISSN351 0006-8950352 .
9. G, M (2016). ”A     T: M-
    ”353 (PDF). Cogent Arts & Humanities.
3 (1): 1171458. doi354 :10.1080/23311983.2016.1171458355 .
10. V, F (2017). ” ”S W”    -
     A' :    
  EEG ”. Brain Imaging and Behavior. 11 (2): 473–485.
doi356 :10.1007/s11682-016-9528-3357 . PMID358 26960946359 .
11. V, F (2013). ”B     
   ”. Neurology. 81 (2): 134–143.
doi360 :10.1212/WNL.0b013e31829a33f8361 . PMID362 23719145363 .

335 https://en.wikipedia.org/wiki/ArXiv_(identifier)
336 http://arxiv.org/abs/1312.0976
337 https://en.wikipedia.org/wiki/Bibcode_(identifier)
338 https://ui.adsabs.harvard.edu/abs/2013arXiv1312.0976H
339 https://en.wikipedia.org/wiki/Doi_(identifier)
340 https://doi.org/10.1145%2F2615569.2615684
341 https://en.wikipedia.org/wiki/ISBN_(identifier)
342 https://en.wikipedia.org/wiki/Special:BookSources/9781450326223
343 https://en.wikipedia.org/wiki/ArXiv_(identifier)
344 http://arxiv.org/abs/cond-mat/0304207
345 https://en.wikipedia.org/wiki/Bibcode_(identifier)
346 https://ui.adsabs.harvard.edu/abs/2004EPJB...41..113M
347 https://en.wikipedia.org/wiki/Doi_(identifier)
348 https://doi.org/10.1140%2Fepjb%2Fe2004-00301-0
349 https://en.wikipedia.org/wiki/Doi_(identifier)
350 https://doi.org/10.1093%2Fbrain%2Fawz125
351 https://en.wikipedia.org/wiki/ISSN_(identifier)
352 http://www.worldcat.org/issn/0006-8950
353 https://hal.archives-ouvertes.fr/hal-01517493/file/A%20social%20network%20analysis%20of%20Twitter%20Mapping%20the%20digital%20humanities%20community.pdf
354 https://en.wikipedia.org/wiki/Doi_(identifier)
355 https://doi.org/10.1080%2F23311983.2016.1171458
356 https://en.wikipedia.org/wiki/Doi_(identifier)
357 https://doi.org/10.1007%2Fs11682-016-9528-3
358 https://en.wikipedia.org/wiki/PMID_(identifier)
359 http://pubmed.ncbi.nlm.nih.gov/26960946
360 https://en.wikipedia.org/wiki/Doi_(identifier)
361 https://doi.org/10.1212%2FWNL.0b013e31829a33f8
362 https://en.wikipedia.org/wiki/PMID_(identifier)
363 http://pubmed.ncbi.nlm.nih.gov/23719145


12. B, J. D.; D, S. D. (1965). Relativistic Quantum Fields364 . N Y:
MG-H. . .
13. K, A; K, G. U. (2016-01-04). ”E -
       -
”. Journal of Applied Physics. 119 (1): 015102. Bib-
code365 :2016JAP...119a5102K366 . doi367 :10.1063/1.4939280368 . ISSN369 0021-8979370 .
14. N, M (2010). Networks: An Introduction371 (PDF). O U-
 P.
15. Reuven Cohen, Shlomo Havlin (2010). Complex Networks: Structure, Robustness
and Function. Cambridge University Press.
16. Grandjean, Martin (2015). ”Social network analysis and visualization: Moreno’s So-
ciograms revisited”372 . Redesigned network strictly based on Moreno (1934), Who
Shall Survive.
17. R, K H. (2011-06-14). Discrete mathematics and its applications (7th
ed.). New York: McGraw-Hill. ISBN373 978-0-07-338309-5374 .
18. S, P; A, A; M, F; P, A; K, L;
O, K; D, S R; S, J M; S, R T
(2019-07-01). ”C       
 ”. Brain. 142 (7): 1955–1972. doi375 :10.1093/brain/awz125376 .
ISSN377 0006-8950378 .
19. B, N.; L, E.; W, R. (1986), Graph Theory, 1736-1936, Oxford Uni-
versity Press
20. C, A. L. (1813), ”R    -  ”,
Journal de l'École Polytechnique379 , 9 (C 16): 66–86.
21. L'H, S.-A.-J. (1812–1813), ”M   ”, Annales
de Mathématiques, 3: 169–189.

364 https://archive.org/details/relativisticquan0000bjor_c5q0
365 https://en.wikipedia.org/wiki/Bibcode_(identifier)
366 https://ui.adsabs.harvard.edu/abs/2016JAP...119a5102K
367 https://en.wikipedia.org/wiki/Doi_(identifier)
368 https://doi.org/10.1063%2F1.4939280
369 https://en.wikipedia.org/wiki/ISSN_(identifier)
370 http://www.worldcat.org/issn/0021-8979
371 http://math.sjtu.edu.cn/faculty/xiaodong/course/Networks%20An%20introduction.pdf
372 http://www.martingrandjean.ch/social-network-analysis-visualization-morenos-sociograms-revisited/
373 https://en.wikipedia.org/wiki/ISBN_(identifier)
374 https://en.wikipedia.org/wiki/Special:BookSources/978-0-07-338309-5
375 https://en.wikipedia.org/wiki/Doi_(identifier)
376 https://doi.org/10.1093%2Fbrain%2Fawz125
377 https://en.wikipedia.org/wiki/ISSN_(identifier)
378 http://www.worldcat.org/issn/0006-8950
379 https://en.wikipedia.org/w/index.php?title=Journal_de_l%27%C3%89cole_Polytechnique&action=edit&redlink=1


22. C, A.380 (1857), ”O      
 ”, Philosophical Magazine381 , S IV, 13 (85): 172–176,
doi382 :10.1017/CBO9780511703690.046383 , ISBN384 9780511703690385
23. C, A. (1875), ”U  A F,    M-
 B     A   T -
 V”, Berichte der Deutschen Chemischen Gesellschaft, 8 (2):
1056–1059, doi386 :10.1002/cber.18750080252387 .
24. S, J J (1878). ”C  A”388 . Nature.
17 (432): 284. Bibcode389 :1878Natur..17..284S390 . doi391 :10.1038/017284a0392 .
25. T, W.T.393 (2001), Graph Theory394 , C U P, . 30,
ISBN395 978-0-521-79489-3396 ,  2016-03-14
26. G, M397 (1992), Fractal Music, Hypercards, and more…Mathematical
Recreations from Scientific American, W. H. Freeman and Company, p. 203
27. S  I  A M398 (2002), ”T G
P P”, Looking Back, Looking Ahead: A SIAM History399 (PDF), . 26,
 2016-03-14
28. Heinrich Heesch: Untersuchungen zum Vierfarbenproblem. Mannheim: Bibli-
ographisches Institut 1969.
29. A, K.; H, W. (1977), ”E     . P I.
D”, Illinois J. Math., 21 (3): 429–490, doi400 :10.1215/ijm/1256049011401 .
30. A, K.; H, W. (1977), ”E     -
. P II. R”, Illinois J. Math., 21 (3): 491–567,
doi402 :10.1215/ijm/1256049012403 .

380 https://en.wikipedia.org/wiki/Arthur_Cayley
381 https://en.wikipedia.org/wiki/Philosophical_Magazine
382 https://en.wikipedia.org/wiki/Doi_(identifier)
383 https://doi.org/10.1017%2FCBO9780511703690.046
384 https://en.wikipedia.org/wiki/ISBN_(identifier)
385 https://en.wikipedia.org/wiki/Special:BookSources/9780511703690
386 https://en.wikipedia.org/wiki/Doi_(identifier)
387 https://doi.org/10.1002%2Fcber.18750080252
388 https://archive.org/stream/nature15unkngoog#page/n312/mode/1up
389 https://en.wikipedia.org/wiki/Bibcode_(identifier)
390 https://ui.adsabs.harvard.edu/abs/1878Natur..17..284S
391 https://en.wikipedia.org/wiki/Doi_(identifier)
392 https://doi.org/10.1038%2F017284a0
393 https://en.wikipedia.org/wiki/W._T._Tutte
394 https://books.google.com/books?id=uTGhooU37h4C&pg=PA30
395 https://en.wikipedia.org/wiki/ISBN_(identifier)
396 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-79489-3
397 https://en.wikipedia.org/wiki/Martin_Gardner
398 https://en.wikipedia.org/wiki/Society_for_Industrial_and_Applied_Mathematics
399 http://www.siam.org/about/more/siam50.pdf
400 https://en.wikipedia.org/wiki/Doi_(identifier)
401 https://doi.org/10.1215%2Fijm%2F1256049011
402 https://en.wikipedia.org/wiki/Doi_(identifier)
403 https://doi.org/10.1215%2Fijm%2F1256049012


31. R, N.; S, D.; S, P.; T, R. (1997), ”T
  ”, Journal of Combinatorial Theory, Series B, 70: 2–44,
doi404 :10.1006/jctb.1997.1750405 .
32. K, J; G, J (2011). Graph Algorithms in the Language of
Linear Algebra406 . SIAM. . 1171458. ISBN407 978-0-898719-90-1408 .

42.9 References
• B, E A.; W, S. G (2010). Lists, Decisions and Graphs.
With an Introduction to Probability409 .
• C, C (1958). Théorie des graphes et ses applications. Paris: Dunod.
English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961;
Spanish, Mexico 1962; Roumanian, Bucharest 1969; Chinese, Shanghai 1963; Second
printing of the 1962 first English edition, Dover, New York 2001.
• B, N.; L, E.; W, R. (1986). Graph Theory, 1736–1936. Oxford Uni-
versity Press.
• B, J. A.; M, U. S. R. (2008). Graph Theory. Springer. ISBN410 978-1-
84628-969-9411 .
• B, B; R, O. M. (2003). Mathematical results on scale-free random
graphs in ”Handbook of Graphs and Networks” (S. Bornholdt and H.G. Schuster (eds)) (1st
ed.). Weinheim: Wiley VCH.
• C, G (1985). Introductory Graph Theory412 . D. ISBN413 0-486-
24775-9414 .
• D, N (1974). Graph Theory with Applications to Engineering and Computer
Science415 (PDF). E, N J: P-H. ISBN416 0-13-363473-
6417 .
• G, A (1985). Algorithmic Graph Theory. Cambridge University Press418 .
• R C, S H (2010). Complex Networks: Structure, Robustness
and Function419 . C U P. ISBN420 9781139489270421 .

404 https://en.wikipedia.org/wiki/Doi_(identifier)
405 https://doi.org/10.1006%2Fjctb.1997.1750
406 https://my.siam.org/Store/Product/viewproduct/?ProductId=106663
407 https://en.wikipedia.org/wiki/ISBN_(identifier)
408 https://en.wikipedia.org/wiki/Special:BookSources/978-0-898719-90-1
409 https://books.google.fr/books?id=vaXv_yhefG8C
410 https://en.wikipedia.org/wiki/ISBN_(identifier)
411 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84628-969-9
412 https://archive.org/details/introductorygrap0000char
413 https://en.wikipedia.org/wiki/ISBN_(identifier)
414 https://en.wikipedia.org/wiki/Special:BookSources/0-486-24775-9
415 https://www.edutechlearners.com/download/Graphtheory.pdf
416 https://en.wikipedia.org/wiki/ISBN_(identifier)
417 https://en.wikipedia.org/wiki/Special:BookSources/0-13-363473-6
418 https://en.wikipedia.org/wiki/Cambridge_University_Press
419 https://books.google.com/?id=1ECLiFrKulIC&pg=PR5&dq=%22Complex+Networks:+Structure,+Robustness+and+Function%22#v=onepage&q=graph&f=false
420 https://en.wikipedia.org/wiki/ISBN_(identifier)
421 https://en.wikipedia.org/wiki/Special:BookSources/9781139489270


• G, M (1980). Algorithmic Graph Theory and Perfect Graphs. Academic
Press422 .
• H, F (1969). Graph Theory. Reading, Massachusetts: Addison-Wesley.
• H, F; P, E M. (1973). Graphical Enumeration. New York,
New York: Academic Press.
• M, N. V. R.; P, U N. (1995). Threshold Graphs and Related Topics.
North-Holland423 .
• N, M (2010). Networks: An Introduction. Oxford University Press.
• K, J; G, J (2011). Graph Algorithms in The Language of
Linear Algebra424 . P, P: SIAM. ISBN425 978-0-898719-90-1426 .

42.10 External links

Wikimedia Commons has media related to Graph theory427 .

• H, M428 , . (2001) [1994], ”G ”429 , Encyclopedia of


Mathematics430 , S S+B M B.V. / K A P-
, ISBN431 978-1-55608-010-4432
• Graph theory tutorial433
• A searchable database of small connected graphs434
• Image gallery: graphs435 at the Wayback Machine436 (archived February 6, 2006)
• Concise, annotated list of graph theory resources for researchers437
• rocs438 — a graph theory IDE
• The Social Life of Routers439 — non-technical paper discussing graphs of people and
computers
• Graph Theory Software440 — tools to teach and learn graph theory

422 https://en.wikipedia.org/wiki/Academic_Press
423 https://en.wikipedia.org/wiki/North-Holland_Publishing_Company
424 https://my.siam.org/Store/Product/viewproduct/?ProductId=106663
425 https://en.wikipedia.org/wiki/ISBN_(identifier)
426 https://en.wikipedia.org/wiki/Special:BookSources/978-0-898719-90-1
427 https://commons.wikimedia.org/wiki/Special:Search/Graph_theory
428 https://en.wikipedia.org/wiki/Michiel_Hazewinkel
429 https://www.encyclopediaofmath.org/index.php?title=p/g045010
430 https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics
431 https://en.wikipedia.org/wiki/ISBN_(identifier)
432 https://en.wikipedia.org/wiki/Special:BookSources/978-1-55608-010-4
433 http://www.utm.edu/departments/math/graph/
434 http://www.gfredericks.com/main/sandbox/graphs
435 https://web.archive.org/web/20060206155001/http://www.nd.edu/~networks/gallery.htm
436 https://en.wikipedia.org/wiki/Wayback_Machine
437 https://web.archive.org/web/20190713044422/http://www.babelgraph.org/links.html
438 http://www.kde.org/applications/education/rocs/
439 http://www.orgnet.com/SocialLifeOfRouters.pdf
440 http://graphtheorysoftware.com/


• Online books441 , and library resources in your library442 and in other libraries443 about
graph theory
• A list of graph algorithms444 with references and links to graph library implementations

42.10.1 Online textbooks


• Phase Transitions in Combinatorial Optimization Problems, Section 3: Introduction to
Graphs445 (2006) by Hartmann and Weigt
• Digraphs: Theory Algorithms and Applications446 2007 by Jorgen Bang-Jensen and Gre-
gory Gutin
• Graph Theory, by Reinhard Diestel447


441 http://tools.wmflabs.org/ftl/cgi-bin/ftl?st=&su=Graph+theory&library=OLBP
442 http://tools.wmflabs.org/ftl/cgi-bin/ftl?st=&su=Graph+theory
443 http://tools.wmflabs.org/ftl/cgi-bin/ftl?st=&su=Graph+theory&library=0CHOOSE0
444 http://www.martinbroadhurst.com/Graph-algorithms.html
445 https://arxiv.org/pdf/cond-mat/0602129
446 http://www.cs.rhul.ac.uk/books/dbook/
447 http://diestel-graph-theory.com/index.html

43 Graph coloring

Not to be confused with Edge coloring1 .

Figure 107 A proper vertex coloring of the Petersen graph with 3 colors, the minimum
number possible.

1 https://en.wikipedia.org/wiki/Edge_coloring


In graph theory2 , graph coloring is a special case of graph labeling3 ; it is an assignment
of labels traditionally called ”colors” to elements of a graph4 subject to certain constraints.
In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent
vertices5 are of the same color; this is called a vertex coloring. Similarly, an edge coloring6
assigns a color to each edge so that no two adjacent edges are of the same color, and a face
coloring of a planar graph assigns a color to each face or region so that no two faces that
share a boundary have the same color.
Vertex coloring is usually used to introduce graph coloring problems since other coloring
problems can be transformed into a vertex coloring instance. For example, an edge coloring
of a graph is just a vertex coloring of its line graph7 , and a face coloring of a plane graph8 is
just a vertex coloring of its dual9 . However, non-vertex coloring problems are often stated
and studied as is. This is partly pedagogical, and partly because some problems are best
studied in their non-vertex form, as in the case of edge coloring.
The convention of using colors originates from coloring the countries of a map, where each
face is literally colored. This was generalized to coloring the faces of a graph embedded10 in
the plane. By planar duality it became coloring the vertices, and in this form it generalizes
to all graphs. In mathematical and computer representations, it is typical to use the first
few positive or non-negative integers as the ”colors”. In general, one can use any finite set
as the ”color set”. The nature of the coloring problem depends on the number of colors but
not on what they are.
Graph coloring enjoys many practical applications as well as theoretical challenges. Beside
the classical types of problems, different limitations can also be set on the graph, or on the
way a color is assigned, or even on the color itself. It has even reached popularity with the
general public in the form of the popular number puzzle Sudoku11 . Graph coloring is still
a very active field of research.
Note: Many terms used in this article are defined in Glossary of graph theory12 .

43.1 History

See also: History of the four color theorem13 and History of graph theory14
The first results about graph coloring deal almost exclusively with planar graphs15 in the
form of the coloring

2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Graph_labeling
4 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
5 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
6 https://en.wikipedia.org/wiki/Edge_coloring
7 https://en.wikipedia.org/wiki/Line_graph
8 https://en.wikipedia.org/wiki/Plane_graph
9 https://en.wikipedia.org/wiki/Dual_graph
10 https://en.wikipedia.org/wiki/Graph_embedding
11 https://en.wikipedia.org/wiki/Sudoku
12 https://en.wikipedia.org/wiki/Glossary_of_graph_theory
13 https://en.wikipedia.org/wiki/History_of_the_four_color_theorem
14 https://en.wikipedia.org/wiki/History_of_graph_theory
15 https://en.wikipedia.org/wiki/Planar_graphs


of maps. While trying to color a map of the counties of England, Francis Guthrie16 postu-
lated the four color conjecture17 , noting that four colors were sufficient to color the map so
that no regions sharing a common border received the same color. Guthrie’s brother passed
on the question to his mathematics teacher Augustus de Morgan18 at University College19 ,
who mentioned it in a letter to William Hamilton20 in 1852. Arthur Cayley21 raised the
problem at a meeting of the London Mathematical Society22 in 1879. The same year, Alfred
Kempe23 published a paper that claimed to establish the result, and for a decade the four
color problem was considered solved. For his accomplishment Kempe was elected a Fellow
of the Royal Society24 and later President of the London Mathematical Society.[1]
In 1890, Heawood25 pointed out that Kempe’s argument was wrong. However, in that paper
he proved the five color theorem26 , saying that every planar map can be colored with no
more than five colors, using ideas of Kempe. In the following century, a vast amount of
work and theories were developed to reduce the number of colors to four, until the four
color theorem was finally proved in 1976 by Kenneth Appel27 and Wolfgang Haken28 . The
proof went back to the ideas of Heawood and Kempe and largely disregarded the intervening
developments.[2] The proof of the four color theorem is also noteworthy for being the first
major computer-aided proof.
In 1912, George David Birkhoff29 introduced the chromatic polynomial30 to study the col-
oring problems, which was generalised to the Tutte polynomial31 by Tutte32 , important
structures in algebraic graph theory33 . Kempe had already drawn attention to the general,
non-planar case in 1879,[3] and many results on generalisations of planar graph coloring to
surfaces of higher order followed in the early 20th century.
In 1960, Claude Berge34 formulated another conjecture about graph coloring, the strong
perfect graph conjecture, originally motivated by an information-theoretic35 concept called
the zero-error capacity36 of a graph introduced by Shannon37 . The conjecture remained

16 https://en.wikipedia.org/wiki/Francis_Guthrie
17 https://en.wikipedia.org/wiki/Four_color_conjecture
18 https://en.wikipedia.org/wiki/Augustus_de_Morgan
19 https://en.wikipedia.org/wiki/University_College_London
20 https://en.wikipedia.org/wiki/William_Rowan_Hamilton
21 https://en.wikipedia.org/wiki/Arthur_Cayley
22 https://en.wikipedia.org/wiki/London_Mathematical_Society
23 https://en.wikipedia.org/wiki/Alfred_Kempe
24 https://en.wikipedia.org/wiki/Royal_Society
25 https://en.wikipedia.org/wiki/Heawood
26 https://en.wikipedia.org/wiki/Five_color_theorem
27 https://en.wikipedia.org/wiki/Kenneth_Appel
28 https://en.wikipedia.org/wiki/Wolfgang_Haken
29 https://en.wikipedia.org/wiki/George_David_Birkhoff
30 https://en.wikipedia.org/wiki/Chromatic_polynomial
31 https://en.wikipedia.org/wiki/Tutte_polynomial
32 https://en.wikipedia.org/wiki/Tutte
33 https://en.wikipedia.org/wiki/Algebraic_graph_theory
34 https://en.wikipedia.org/wiki/Claude_Berge
35 https://en.wikipedia.org/wiki/Information_theory
36 https://en.wikipedia.org/w/index.php?title=Zero-error_capacity&action=edit&redlink=1
37 https://en.wikipedia.org/wiki/Claude_Shannon


unresolved for 40 years, until it was established as the celebrated strong perfect graph
theorem38 by Chudnovsky39 , Robertson40 , Seymour41 , and Thomas42 in 2002.
Graph coloring has been studied as an algorithmic problem since the early 1970s: the
chromatic number problem is one of Karp’s 21 NP-complete problems43 from 1972, and
at approximately the same time various exponential-time algorithms were developed based
on backtracking and on the deletion-contraction recurrence of Zykov (1949)44 . One of the
major applications of graph coloring, register allocation45 in compilers, was introduced in
1981.

38 https://en.wikipedia.org/wiki/Strong_perfect_graph_theorem
39 https://en.wikipedia.org/wiki/Maria_Chudnovsky
40 https://en.wikipedia.org/wiki/Neil_Robertson_(mathematician)
41 https://en.wikipedia.org/wiki/Paul_Seymour_(mathematician)
42 https://en.wikipedia.org/wiki/Robin_Thomas_(mathematician)
43 https://en.wikipedia.org/wiki/Karp%E2%80%99s_21_NP-complete_problems
44 #CITEREFZykov1949
45 https://en.wikipedia.org/wiki/Register_allocation


43.2 Definition and terminology

Figure 108 This graph can be 3-colored in 12 different ways.

43.2.1 Vertex coloring

When used without any qualification, a coloring of a graph is almost always a proper
vertex coloring, namely a labeling of the graph’s vertices with colors such that no two
vertices sharing the same edge46 have the same color. Since a vertex with a loop47 (i.e. a
connection directly back to itself) could never be properly colored, it is understood that
graphs in this context are loopless.
The terminology of using colors for vertex labels goes back to map coloring. Labels like
red and blue are only used when the number of colors is small, and normally it is understood
that the labels are drawn from the integers {1, 2, 3, ...}.

46 https://en.wikipedia.org/wiki/Edge_(graph_theory)
47 https://en.wikipedia.org/wiki/Loop_(graph_theory)


A coloring using at most k colors is called a (proper) k-coloring. The smallest number
of colors needed to color a graph G is called its chromatic number, and is often denoted
χ(G). Sometimes γ(G) is used, since χ(G) is also used to denote the Euler characteristic48
of a graph. A graph that can be assigned a (proper) k-coloring is k-colorable, and it is
k-chromatic if its chromatic number is exactly k. A subset of vertices assigned to the
same color is called a color class; every such class forms an independent set49 . Thus, a
k-coloring is the same as a partition of the vertex set into k independent sets, and the terms
k-partite and k-colorable have the same meaning.
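These definitions translate directly into code. A minimal sketch in Python (the graph and the helper names below are illustrative, not from the text):

```python
def is_proper_coloring(edges, coloring):
    """A vertex coloring is proper iff no edge joins two vertices
    of the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def color_classes(vertices, coloring):
    """Group vertices by color; when the coloring is proper, each
    class is an independent set, giving the partition described above."""
    classes = {}
    for v in vertices:
        classes.setdefault(coloring[v], set()).add(v)
    return classes

# A 4-cycle is bipartite and therefore 2-colorable.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
coloring = {0: 1, 1: 2, 2: 1, 3: 2}
assert is_proper_coloring(edges, coloring)
assert sorted(map(sorted, color_classes(vertices, coloring).values())) == [[0, 2], [1, 3]]
```

Here a 2-coloring of the 4-cycle is exactly a partition of its vertex set into two independent sets, matching the k-partite/k-colorable equivalence.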

48 https://en.wikipedia.org/wiki/Euler_characteristic
49 https://en.wikipedia.org/wiki/Independent_set_(graph_theory)


43.2.2 Chromatic polynomial

Figure 109 All non-isomorphic graphs on 3 vertices and their chromatic polynomials.
The empty graph E3 (red) admits a 1-coloring; the others admit no such colorings. The
green graph admits 12 colorings with 3 colors.

Main article: Chromatic polynomial50
The chromatic polynomial counts the number of
ways a graph can be colored using no more than a given number of colors. For example,
using three colors, the graph in the adjacent image can be colored in 12 ways. With only
two colors, it cannot be colored at all. With four colors, it can be colored in 24 + 4⋅12 = 72

50 https://en.wikipedia.org/wiki/Chromatic_polynomial


ways: using all four colors, there are 4! = 24 valid colorings (every assignment of four colors
to any 4-vertex graph is a proper coloring); and for every choice of three of the four colors,
there are 12 valid 3-colorings. So, for the graph in the example, a table of the number of
valid colorings would start like this:
Available colors 1 2 3 4 …
Number of colorings 0 0 12 72 …

The chromatic polynomial is a function P(G, t) that counts the number of t-colorings of G.
As the name indicates, for a given G the function is indeed a polynomial51 in t. For the
example graph, P(G, t) = t(t − 1)²(t − 2), and indeed P(G, 4) = 72.
The chromatic polynomial includes at least as much information about the colorability of
G as does the chromatic number. Indeed, χ is the smallest positive integer that is not a
root of the chromatic polynomial
χ(G) = min{k : P (G, k) > 0}.
Chromatic polynomials for certain graphs

Triangle K3: t(t − 1)(t − 2)
Complete graph52 Kn: t(t − 1)(t − 2) ··· (t − (n − 1))
Tree53 with n vertices: t(t − 1)ⁿ⁻¹
Cycle54 Cn: (t − 1)ⁿ + (−1)ⁿ(t − 1)
Petersen graph55: t(t − 1)(t − 2)(t⁷ − 12t⁶ + 67t⁵ − 230t⁴ + 529t³ − 814t² + 775t − 352)
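Such values can be checked numerically with the deletion–contraction recurrence P(G, t) = P(G − e, t) − P(G/e, t). A brute-force sketch (the example graph here is a triangle with a pendant vertex, one graph whose chromatic polynomial is the t(t − 1)²(t − 2) quoted in the text; it is not necessarily the graph in the figure):

```python
def count_colorings(n, edges, t):
    """Evaluate the chromatic polynomial P(G, t) by deletion-contraction:
    P(G) = P(G - e) - P(G / e); an edgeless graph on n vertices
    has t**n colorings."""
    edges = {frozenset(e) for e in edges if len(set(e)) == 2}
    if not edges:
        return t ** n
    u, v = sorted(next(iter(edges)))
    deleted = edges - {frozenset((u, v))}
    # Contract v into u: redirect v's edges to u; using sets of
    # frozensets drops loops and merges parallel edges.
    contracted = set()
    for e in deleted:
        a, b = (u if x == v else x for x in e)
        if a != b:
            contracted.add(frozenset((a, b)))
    return count_colorings(n, deleted, t) - count_colorings(n - 1, contracted, t)

# Triangle with a pendant vertex: P(G, t) = t(t - 1)^2 (t - 2).
paw = [(0, 1), (1, 2), (0, 2), (2, 3)]
assert count_colorings(4, paw, 3) == 12
assert count_colorings(4, paw, 4) == 72
assert count_colorings(4, paw, 2) == 0   # not 2-colorable
```

The recursion is exponential in general, which is consistent with computing the chromatic polynomial being hard; it is only meant to verify small cases.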

43.2.3 Edge coloring

Main article: Edge coloring56
An edge coloring of a graph is a proper coloring of the edges,
meaning an assignment of colors to edges so that no vertex is incident to two edges of the
same color. An edge coloring with k colors is called a k-edge-coloring and is equivalent to
the problem of partitioning the edge set into k matchings57 . The smallest number of colors
needed for an edge coloring of a graph G is the chromatic index, or edge chromatic
number, χ′(G). A Tait coloring is a 3-edge coloring of a cubic graph58 . The four color
theorem59 is equivalent to the assertion that every planar cubic bridgeless60 graph admits
a Tait coloring.
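The reduction noted earlier, that an edge coloring of G is a vertex coloring of its line graph L(G), is easy to make concrete. A sketch (the graph below is illustrative):

```python
def line_graph(edges):
    """Build L(G): its vertices are the edges of G, and two of them
    are adjacent exactly when the original edges share an endpoint."""
    es = [tuple(e) for e in edges]
    adj = {e: set() for e in es}
    for i, e in enumerate(es):
        for f in es[i + 1:]:
            if set(e) & set(f):
                adj[e].add(f)
                adj[f].add(e)
    return adj

# The path 0-1-2-3 has three edges; consecutive edges share a vertex,
# so L(G) is the path (0,1)-(1,2)-(2,3). Any proper vertex coloring
# of L(G) is an edge coloring of G, so the path is 2-edge-colorable.
lg = line_graph([(0, 1), (1, 2), (2, 3)])
assert lg[(0, 1)] == {(1, 2)}
assert lg[(1, 2)] == {(0, 1), (2, 3)}
```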

51 https://en.wikipedia.org/wiki/Polynomial
52 https://en.wikipedia.org/wiki/Complete_graph
53 https://en.wikipedia.org/wiki/Tree_graph
54 https://en.wikipedia.org/wiki/Cycle_graph
55 https://en.wikipedia.org/wiki/Petersen_graph
56 https://en.wikipedia.org/wiki/Edge_coloring
57 https://en.wikipedia.org/wiki/Matching_(graph_theory)
58 https://en.wikipedia.org/wiki/Cubic_graph
59 https://en.wikipedia.org/wiki/Four_color_theorem
60 https://en.wikipedia.org/wiki/Bridge_(graph_theory)


43.2.4 Total coloring

Main article: Total coloring61
Total coloring is a type of coloring on the vertices and edges
of a graph. When used without any qualification, a total coloring is always assumed to be
proper in the sense that no adjacent vertices, no adjacent edges, and no edge and its end-
vertices are assigned the same color. The total chromatic number χ″(G) of a graph G is the
fewest colors needed in any total coloring of G.

43.2.5 Unlabeled coloring

An unlabeled coloring of a graph is an orbit62 of a coloring under the action of the
automorphism group63 of the graph. If we interpret a coloring of a graph on d vertices as
a vector in Zᵈ, the action of an automorphism is a permutation64 of the coefficients of the
coloring. There are analogues of the chromatic polynomials65 which count the number of
unlabeled colorings of a graph from a given finite color set.
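For small graphs the orbits can be counted by brute force. A sketch (the path graph and its two automorphisms below are illustrative, chosen because its automorphism group is just the identity and the reflection):

```python
from itertools import product

def unlabeled_colorings(n, edges, automorphisms, t):
    """Count orbits of proper t-colorings of a graph on vertices
    0..n-1 under a given list of automorphisms (each a tuple
    mapping vertex i to automorphism[i])."""
    proper = [c for c in product(range(t), repeat=n)
              if all(c[u] != c[v] for u, v in edges)]
    seen, orbits = set(), 0
    for c in proper:
        if c not in seen:
            orbits += 1
            for a in automorphisms:
                # the coloring composed with the automorphism a
                seen.add(tuple(c[a[i]] for i in range(n)))
    return orbits

# Path 0-1-2: 12 proper 3-colorings; the reflection fixes the 6 with
# equal endpoint colors and pairs up the other 6, giving 6 + 3 = 9 orbits.
autos = [(0, 1, 2), (2, 1, 0)]
assert unlabeled_colorings(3, [(0, 1), (1, 2)], autos, 3) == 9
```

The count agrees with Burnside's lemma, (12 + 6)/2 = 9, averaging fixed colorings over the two automorphisms.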

43.3 Properties

43.3.1 Bounds on the chromatic number

Assigning distinct colors to distinct vertices always yields a proper coloring, so
1 ≤ χ(G) ≤ n.
The only graphs that can be 1-colored are edgeless graphs66 . A complete graph67 Kn of
n vertices requires χ(Kn ) = n colors. In an optimal coloring there must be at least one of
the graph’s m edges between every pair of color classes, so
χ(G)(χ(G) − 1) ≤ 2m.
If G contains a clique68 of size k, then at least k colors are needed to color that clique; in
other words, the chromatic number is at least the clique number:
χ(G) ≥ ω(G).
For perfect graphs69 this bound is tight. Finding cliques is known as clique problem70 .

61 https://en.wikipedia.org/wiki/Total_coloring
62 https://en.wikipedia.org/wiki/Group_action_(mathematics)
63 https://en.wikipedia.org/wiki/Graph_automorphism
64 https://en.wikipedia.org/wiki/Permutation
65 https://en.wikipedia.org/wiki/Chromatic_polynomial
66 https://en.wikipedia.org/wiki/Edgeless_graph
67 https://en.wikipedia.org/wiki/Complete_graph
68 https://en.wikipedia.org/wiki/Clique_(graph_theory)
69 https://en.wikipedia.org/wiki/Perfect_graph
70 https://en.wikipedia.org/wiki/Clique_problem


The 2-colorable graphs are exactly the bipartite graphs71 , including trees72 and forests. By
the four color theorem, every planar graph can be 4-colored.
A greedy coloring73 shows that every graph can be colored with one more color than the
maximum vertex degree74 ,
χ(G) ≤ ∆(G) + 1.
Complete graphs have χ(G) = n and ∆(G) = n − 1, and odd cycles75 have χ(G) = 3 and
∆(G) = 2, so for these graphs this bound is best possible. In all other cases, the bound can
be slightly improved; Brooks’ theorem76[4] states that
Brooks’ theorem77 :χ(G) ≤ ∆(G) for a connected, simple graph G, unless G is a complete
graph or an odd cycle.

43.3.2 Lower bounds on the chromatic number

Several lower bounds for the chromatic number have been discovered over the years:
Hoffman's bound: Let W be a real symmetric matrix such that Wi,j = 0 whenever (i, j)
is not an edge in G. Define χW (G) = 1 − λmax (W )/λmin (W ), where λmax (W ), λmin (W )
are the largest and smallest eigenvalues of W . Define χH (G) = maxW χW (G), with W as
above. Then:
χH (G) ≤ χ(G).
Vector chromatic number: Let W be a positive semi-definite matrix such that
Wi,j ≤ −1/(k − 1) whenever (i, j) is an edge in G. Define χV (G) to be the least k for
which such a matrix W exists. Then
χV (G) ≤ χ(G).
Lovász number78 : The Lovász number of the complementary graph is also a lower bound
on the chromatic number:
ϑ(Ḡ) ≤ χ(G).
Fractional chromatic number79 : The fractional chromatic number of a graph is likewise
a lower bound on the chromatic number:
χf (G) ≤ χ(G).
These bounds are ordered as follows:
χH (G) ≤ χV (G) ≤ ϑ(Ḡ) ≤ χf (G) ≤ χ(G).
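As a quick numerical illustration of Hoffman's bound, the following sketch (using NumPy, an assumption of this example rather than anything in the text) takes W to be the adjacency matrix of the complete graph K3, which is one admissible choice of W; for K3 the bound is tight, since χ(K3) = 3.

```python
import numpy as np

# Hoffman-style bound on the triangle K3, with W = adjacency matrix
# (admissible: entries are 0 wherever there is no edge).
# K3's adjacency eigenvalues are 2 and -1 (twice), so
# chi_W = 1 - lambda_max/lambda_min = 1 - 2/(-1) = 3 = chi(K3).
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
eig = np.linalg.eigvalsh(W)      # eigenvalues in ascending order
chi_W = 1 - eig[-1] / eig[0]     # 1 - lambda_max / lambda_min
```

Maximizing over all admissible W can only improve the bound, but the adjacency matrix alone already certifies χ(K3) ≥ 3 here.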

71 https://en.wikipedia.org/wiki/Bipartite_graph
72 https://en.wikipedia.org/wiki/Tree_(graph_theory)
73 https://en.wikipedia.org/wiki/Greedy_coloring
74 https://en.wikipedia.org/wiki/Degree_(graph_theory)
75 https://en.wikipedia.org/wiki/Odd_cycle
76 https://en.wikipedia.org/wiki/Brooks%E2%80%99_theorem
77 https://en.wikipedia.org/wiki/Brooks%E2%80%99_theorem
78 https://en.wikipedia.org/wiki/Lov%C3%A1sz_number
79 https://en.wikipedia.org/wiki/Fractional_chromatic_number


43.3.3 Graphs with high chromatic number

Graphs with large cliques have a high chromatic number, but the opposite is not true. The
Grötzsch graph80 is an example of a 4-chromatic graph without a triangle, and the example
can be generalised to the Mycielskians81 .
Mycielski’s Theorem (Alexander Zykov82 194983 , Jan Mycielski84 195585 ): There exist
triangle-free graphs with arbitrarily high chromatic number.
From Brooks’s theorem, graphs with high chromatic number must have high maximum
degree. Another local property that leads to high chromatic number is the presence of a
large clique. But colorability is not an entirely local phenomenon: A graph with high girth86
looks locally like a tree, because all cycles are long, but its chromatic number need not be
2:
Theorem (Erdős): There exist graphs of arbitrarily high girth and chromatic number.[5]

43.3.4 Bounds on the chromatic index

An edge coloring of G is a vertex coloring of its line graph87 L(G), and vice versa. Thus,
χ′ (G) = χ(L(G)).
There is a strong relationship between edge colorability and the graph’s maximum degree
∆(G). Since all edges incident to the same vertex need their own color, we have
χ′ (G) ≥ ∆(G).
Moreover,
Kőnig’s theorem88 : χ′ (G) = ∆(G) if G is bipartite.
In general, the relationship is even stronger than what Brooks’s theorem gives for vertex
coloring:
Vizing’s Theorem:89 A graph of maximal degree ∆ has edge-chromatic number ∆ or
∆ + 1.

80 https://en.wikipedia.org/wiki/Gr%C3%B6tzsch_graph
81 https://en.wikipedia.org/wiki/Mycielskian
82 https://en.wikipedia.org/w/index.php?title=Alexander_Zykov&action=edit&redlink=1
83 #CITEREFZykov1949
84 https://en.wikipedia.org/wiki/Jan_Mycielski
85 #CITEREFMycielski1955
86 https://en.wikipedia.org/wiki/Girth_(graph_theory)
87 https://en.wikipedia.org/wiki/Line_graph
88 https://en.wikipedia.org/wiki/K%C5%91nig%27s_theorem_(graph_theory)
89 https://en.wikipedia.org/wiki/Vizing%27s_theorem


43.3.5 Other properties

A graph has a k-coloring if and only if it has an acyclic orientation90 for which the longest
path91 has length at most k; this is the Gallai–Hasse–Roy–Vitaver theorem92 (Nešetřil &
Ossona de Mendez 201293 ).
For planar graphs, vertex colorings are essentially dual to nowhere-zero flows94 .
About infinite graphs, much less is known. The following are two of the few results about
infinite graph coloring:
• If all finite subgraphs of an infinite graph95 G are k-colorable, then so is G, under the
assumption of the axiom of choice96 . This is the de Bruijn–Erdős theorem97 of de Bruijn
& Erdős (1951)98 .
• If a graph admits a full n-coloring for every n ≥ n0 , it admits an infinite full coloring
(Fawcett 197899 ).

43.3.6 Open problems

As stated above, ω(G) ≤ χ(G) ≤ ∆(G) + 1. A conjecture of Reed from 1998 is that the
value is essentially closer to the lower bound,
χ(G) ≤ ⌈(ω(G) + ∆(G) + 1)/2⌉.
The chromatic number of the plane100 , where two points are adjacent if they have unit
distance, is unknown, although it is one of 5, 6, or 7. Other open problems101 concerning
the chromatic number of graphs include the Hadwiger conjecture102 stating that every graph
with chromatic number k has a complete graph103 on k vertices as a minor104 , the Erdős–
Faber–Lovász conjecture105 bounding the chromatic number of unions of complete graphs
that have at most one vertex in common to each pair, and the Albertson conjecture106
that among k-chromatic graphs the complete graphs are the ones with smallest crossing
number107 .

90 https://en.wikipedia.org/wiki/Acyclic_orientation
91 https://en.wikipedia.org/wiki/Longest_path
92 https://en.wikipedia.org/wiki/Gallai%E2%80%93Hasse%E2%80%93Roy%E2%80%93Vitaver_theorem
93 #CITEREFNe%C5%A1et%C5%99ilOssona_de_Mendez2012
94 https://en.wikipedia.org/wiki/Nowhere-zero_flows
95 https://en.wikipedia.org/wiki/Infinite_graph
96 https://en.wikipedia.org/wiki/Axiom_of_choice
97 https://en.wikipedia.org/wiki/De_Bruijn%E2%80%93Erd%C5%91s_theorem_(graph_theory)
98 #CITEREFde_BruijnErd%C5%91s1951
99 #CITEREFFawcett1978
100 https://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson_problem
101 https://en.wikipedia.org/wiki/Unsolved_problems_in_mathematics
102 https://en.wikipedia.org/wiki/Hadwiger_conjecture_(graph_theory)
103 https://en.wikipedia.org/wiki/Complete_graph
104 https://en.wikipedia.org/wiki/Graph_minor
105 https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Faber%E2%80%93Lov%C3%A1sz_conjecture
106 https://en.wikipedia.org/wiki/Albertson_conjecture
107 https://en.wikipedia.org/wiki/Crossing_number_(graph_theory)


When Birkhoff and Lewis introduced the chromatic polynomial in their attack on the four-
color theorem, they conjectured that for planar graphs G, the polynomial P (G, t) has no
zeros in the region [4, ∞). Although it is known that such a chromatic polynomial has
no zeros in the region [5, ∞) and that P (G, 4) ̸= 0, their conjecture is still unresolved. It
also remains an unsolved problem to characterize graphs which have the same chromatic
polynomial and to determine which polynomials are chromatic.

43.4 Algorithms

Graph coloring

Decision
Name: Graph coloring, vertex coloring, k-coloring
Input: Graph G with n vertices. Integer k
Output: Does G admit a proper vertex coloring with k colors?
Running time: O(2^n n)[6]
Complexity: NP-complete
Reduction from: 3-Satisfiability
Garey–Johnson: GT4

Optimisation
Name: Chromatic number
Input: Graph G with n vertices.
Output: χ(G)
Complexity: NP-hard
Approximability: O(n (log n)^−3 (log log n)^2 )
Inapproximability: O(n^(1−ε) ) unless P = NP

Counting problem
Name: Chromatic polynomial
Input: Graph G with n vertices. Integer k
Output: The number P (G, k) of proper k-colorings of G
Running time: O(2^n n)
Complexity: #P-complete
Approximability: FPRAS for restricted cases
Inapproximability: No PTAS unless P = NP

43.4.1 Polynomial time

Determining if a graph can be colored with 2 colors is equivalent to determining whether
or not the graph is bipartite108 , and thus computable in linear time109 using breadth-first
search110 or depth-first search111 . More generally, the chromatic number and a correspond-
ing coloring of perfect graphs112 can be computed in polynomial time113 using semidefinite
programming114 . Closed formulas115 for the chromatic polynomial are known for many
classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can
be evaluated in polynomial time.
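The 2-coloring test above amounts to a breadth-first search that alternates two colors and fails exactly when it meets an odd cycle; a minimal sketch (the adjacency-dict representation and vertex names are illustrative choices):

```python
from collections import deque

def two_color(adj):
    """Return a proper 2-coloring of the graph, or None if it is not
    bipartite.  adj maps each vertex to the set of its neighbours."""
    color = {}
    for start in adj:                    # handle disconnected graphs
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]
                    queue.append(w)
                elif color[w] == color[v]:
                    return None          # odd cycle: not 2-colorable
    return color
```

The search visits every vertex and edge once, matching the linear-time claim above.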
If the graph is planar and has low branch-width (or is nonplanar but with a known branch
decomposition), then it can be solved in polynomial time using dynamic programming. In
general, the time required is polynomial in the graph size, but exponential in the branch-
width.

43.4.2 Exact algorithms

Brute-force search116 for a k-coloring considers each of the k^n assignments of k colors to
n vertices and checks for each if it is legal. To compute the chromatic number and the
chromatic polynomial, this procedure is used for every k = 1, . . . , n − 1, which is impractical
for all but the smallest input graphs.
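The brute-force procedure is only a few lines; a sketch (the vertex-list and edge-list representation is an illustrative choice):

```python
from itertools import product

def k_colorable(vertices, edges, k):
    """Try all k**n colour assignments; return the first proper one, or None."""
    for assignment in product(range(k), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return color
    return None
```

On the triangle this returns None for k = 2 and a proper coloring for k = 3; the k**n outer loop is exactly why the method only works for tiny graphs.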
Using dynamic programming117 and a bound on the number of maximal independent
sets118 , k-colorability can be decided in time and space O(2.4423^n ).[7] Using the principle of
inclusion–exclusion119 and Yates120 ’s algorithm for the fast zeta transform, k-colorability can
be decided in time O(2^n n)[6] for any k. Faster algorithms are known for 3- and 4-colorability,
which can be decided in time O(1.3289^n )[8] and O(1.7272^n ),[9] respectively.
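The inclusion–exclusion idea can be sketched in a few lines. This is a simplified variant, not the cited algorithm itself: G is k-colorable exactly when the sum over all vertex subsets S of (−1)^(n−|S|) · a(S)^k is positive, where a(S) counts the independent sets contained in S, and a can be tabulated over all 2^n subsets by a simple recurrence.

```python
def is_k_colorable(n, adj_mask, k):
    """Inclusion-exclusion k-colorability test.

    Vertices are 0..n-1 and adj_mask[v] is a bitmask of v's neighbours.
    a[S] counts independent sets contained in subset S: an independent
    set either avoids the lowest vertex v of S, or contains v and then
    avoids all of v's neighbours.
    """
    a = [0] * (1 << n)
    a[0] = 1
    for S in range(1, 1 << n):
        v = (S & -S).bit_length() - 1                  # lowest vertex in S
        a[S] = a[S & ~(1 << v)] + a[S & ~((1 << v) | adj_mask[v])]
    total = sum((-1) ** (n - bin(S).count("1")) * a[S] ** k
                for S in range(1 << n))
    return total > 0
```

For the triangle (adj_mask = [0b110, 0b101, 0b011]) the sum is 0 at k = 2 and 6 at k = 3, so the test correctly reports 3-colorability but not 2-colorability.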

108 https://en.wikipedia.org/wiki/Bipartite_graph
109 https://en.wikipedia.org/wiki/Linear_time
110 https://en.wikipedia.org/wiki/Breadth-first_search
111 https://en.wikipedia.org/wiki/Depth-first_search
112 https://en.wikipedia.org/wiki/Perfect_graph
113 https://en.wikipedia.org/wiki/Polynomial_time
114 https://en.wikipedia.org/wiki/Semidefinite_programming
115 https://en.wikipedia.org/wiki/Closed-form_expression
116 https://en.wikipedia.org/wiki/Brute-force_search
117 https://en.wikipedia.org/wiki/Dynamic_programming
118 https://en.wikipedia.org/wiki/Maximal_independent_set
119 https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion
120 https://en.wikipedia.org/wiki/Samuel_Yates


43.4.3 Contraction

The contraction121 G/uv of a graph G is the graph obtained by identifying the vertices
u and v, and removing any edges between them. The remaining edges originally incident
to u or v are now incident to their identification. This operation plays a major role in the
analysis of graph coloring.
The chromatic number satisfies the recurrence relation122 :
χ(G) = min{χ(G + uv), χ(G/uv)}
due to Zykov (1949)123 , where u and v are non-adjacent vertices, and G + uv is the graph
with the edge uv added. Several algorithms are based on evaluating this recurrence and the
resulting computation tree is sometimes called a Zykov tree. The running time is based on
a heuristic for choosing the vertices u and v.
The chromatic polynomial satisfies the following recurrence relation
P (G − uv, k) = P (G/uv, k) + P (G, k)
where u and v are adjacent vertices, and G − uv is the graph with the edge uv removed.
P (G − uv, k) represents the number of possible proper colorings of the graph, where the
vertices may have the same or different colors. Then the proper colorings arise from two
different graphs. To explain, if the vertices u and v have different colors, then we might as
well consider a graph where u and v are adjacent. If u and v have the same colors, we might
as well consider a graph where u and v are contracted. Tutte124 ’s curiosity about which other
graph properties satisfied this recurrence led him to discover a bivariate generalization of
the chromatic polynomial, the Tutte polynomial125 .
These expressions give rise to a recursive procedure called the deletion–contraction algo-
rithm, which forms the basis of many algorithms for graph coloring. The running time
satisfies the same recurrence relation as the Fibonacci numbers126 , so in the worst case the
algorithm runs in time within a polynomial factor of ((1 + √5)/2)^(n+m) = O(1.6180^(n+m) )
for n vertices and m edges.[10] The analysis can be improved to within a polynomial factor
of the number t(G) of spanning trees127 of the input graph.[11] In practice, branch and
bound128 strategies and graph isomorphism129 rejection are employed to avoid some
recursive calls. The running time depends on the heuristic used to pick the vertex pair.
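The deletion–contraction recurrence translates directly into code; the sketch below evaluates P(G, k) recursively, with none of the branch-and-bound or isomorphism-rejection refinements mentioned above (representing vertices as a frozenset and edges as a set of two-element frozensets is an illustrative choice):

```python
def chrom_poly(vertices, edges, k):
    """Evaluate the chromatic polynomial P(G, k) by deletion-contraction:
    P(G, k) = P(G - uv, k) - P(G/uv, k) for any edge uv."""
    if not edges:
        return k ** len(vertices)     # no edges: every assignment is proper
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    # contract v into u; the identification may create loops, which we drop
    contracted = {frozenset(u if x == v else x for x in f) for f in deleted}
    contracted = {f for f in contracted if len(f) == 2}
    return (chrom_poly(vertices, deleted, k)
            - chrom_poly(vertices - {v}, contracted, k))
```

For the triangle K3 this evaluates P(K3, k) = k(k − 1)(k − 2), e.g. 6 proper colorings for k = 3.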

43.4.4 Greedy coloring

Main article: Greedy coloring130

121 https://en.wikipedia.org/wiki/Contraction_(graph_theory)
122 https://en.wikipedia.org/wiki/Recurrence_relation
123 #CITEREFZykov1949
124 https://en.wikipedia.org/wiki/Tutte
125 https://en.wikipedia.org/wiki/Tutte_polynomial
126 https://en.wikipedia.org/wiki/Fibonacci_numbers
127 https://en.wikipedia.org/wiki/Spanning_tree_(mathematics)
128 https://en.wikipedia.org/wiki/Branch_and_bound
129 https://en.wikipedia.org/wiki/Isomorphism
130 https://en.wikipedia.org/wiki/Greedy_coloring


Figure 110 Two greedy colorings of the same graph using different vertex orders. The
right example generalizes to 2-colorable graphs with n vertices, where the greedy
algorithm expends n/2 colors.

The greedy algorithm131 considers the vertices in a specific order v1 ,…,vn and assigns to vi
the smallest available color not used by vi ’s neighbours among v1 ,…,vi−1 , adding a fresh
color if needed. The quality of the resulting coloring depends on the chosen ordering. There
exists an ordering that leads to a greedy coloring with the optimal number of χ(G) colors.
On the other hand, greedy colorings can be arbitrarily bad; for example, the crown graph132
on n vertices can be 2-colored, but has an ordering that leads to a greedy coloring with n/2
colors.
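A minimal sketch of the greedy algorithm, together with a crown-graph instance showing how badly a poor ordering can do:

```python
def greedy_coloring(adj, order):
    """Colour vertices in the given order; each vertex takes the smallest
    colour not already used by its earlier-coloured neighbours."""
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Crown graph on 6 vertices (K3,3 minus a perfect matching): 2-colorable,
# but an alternating vertex order forces the greedy algorithm to 3 colours.
adj = {}
for i in range(3):
    adj["a%d" % i] = {"b%d" % j for j in range(3) if j != i}
    adj["b%d" % i] = {"a%d" % j for j in range(3) if j != i}
bad_order = ["a0", "b0", "a1", "b1", "a2", "b2"]    # n/2 = 3 colours
good_order = ["a0", "a1", "a2", "b0", "b1", "b2"]   # optimal: 2 colours
```

With good_order the two sides of the crown graph receive colors 0 and 1; with bad_order the greedy algorithm is forced up to n/2 = 3 colors.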
For chordal graphs133 , and for special cases of chordal graphs such as interval graphs134 and
indifference graphs135 , the greedy coloring algorithm can be used to find optimal colorings in
polynomial time, by choosing the vertex ordering to be the reverse of a perfect elimination
ordering136 for the graph. The perfectly orderable graphs137 generalize this property, but it
is NP-hard to find a perfect ordering of these graphs.
If the vertices are ordered according to their degrees138 , the resulting greedy coloring uses
at most maxi min{d(xi ) + 1, i} colors, at most one more than the graph’s maximum degree.

131 https://en.wikipedia.org/wiki/Greedy_algorithm
132 https://en.wikipedia.org/wiki/Crown_graph
133 https://en.wikipedia.org/wiki/Chordal_graph
134 https://en.wikipedia.org/wiki/Interval_graph
135 https://en.wikipedia.org/wiki/Indifference_graph
136 https://en.wikipedia.org/wiki/Perfect_elimination_ordering
137 https://en.wikipedia.org/wiki/Perfectly_orderable_graph
138 https://en.wikipedia.org/wiki/Degree_(graph_theory)

This heuristic is sometimes called the Welsh–Powell algorithm.[12] Another heuristic
due to Brélaz139 establishes the ordering dynamically while the algorithm proceeds,
choosing next the vertex adjacent to the largest number of different colors.[13] Many other
graph coloring heuristics are similarly based on greedy coloring for a specific static or
dynamic strategy of ordering the vertices; these algorithms are sometimes called sequential
coloring algorithms.
The maximum (worst) number of colors that can be obtained by the greedy algorithm, by
using a vertex ordering chosen to maximize this number, is called the Grundy number140 of
a graph.

43.4.5 Parallel and distributed algorithms

In the field of distributed algorithms141 , graph coloring is closely related to the problem of
symmetry breaking142 . The current state-of-the-art randomized algorithms are faster for
sufficiently large maximum degree Δ than deterministic algorithms. The fastest randomized
algorithms employ the multi-trials technique143 by Schneider et al.[14]
In a symmetric graph144 , a deterministic145 distributed algorithm cannot find a proper vertex
coloring. Some auxiliary information is needed in order to break symmetry. A standard
assumption is that initially each node has a unique identifier, for example, from the set
{1, 2, ..., n}. Put otherwise, we assume that we are given an n-coloring. The challenge is
to reduce the number of colors from n to, e.g., Δ + 1. The more colors are employed, e.g.
O(Δ) instead of Δ + 1, the fewer communication rounds are required.[14]
A straightforward distributed version of the greedy algorithm for (∆ + 1)-coloring requires
Θ(n) communication rounds in the worst case; information may need to be propagated
from one side of the network to the other.
The simplest interesting case is an n-cycle146 . Richard Cole and Uzi Vishkin147[15] show
that there is a distributed algorithm that reduces the number of colors from n to O(log n)
in one synchronous communication step. By iterating the same procedure, it is possible to
obtain a 3-coloring of an n-cycle in O(log*148 n) communication steps (assuming that we
have unique node identifiers).
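A single Cole–Vishkin reduction step can be simulated sequentially as follows (a centralized sketch of the per-node rule; in the distributed setting every node executes it simultaneously in one synchronous round):

```python
def cv_step(colors):
    """One Cole-Vishkin step on a directed n-cycle.

    colors[i] is node i's colour; its successor is node (i+1) % n.  Each
    node finds the lowest bit position b at which its colour differs from
    its successor's and adopts 2*b + (its own bit at b).  Adjacent nodes
    still differ afterwards, and a palette of K colours shrinks to O(log K).
    """
    n = len(colors)
    new = []
    for i in range(n):
        d = colors[i] ^ colors[(i + 1) % n]
        b = (d & -d).bit_length() - 1          # lowest differing bit
        new.append(2 * b + ((colors[i] >> b) & 1))
    return new
```

Iterating cv_step starting from the unique identifiers reaches a constant-size palette in O(log* n) rounds; reducing the last few colors down to 3 needs a slightly different final step.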
The function log*149 , iterated logarithm150 , is an extremely slowly growing function, ”almost
constant”. Hence the result by Cole and Vishkin raised the question of whether there is a
constant-time distributed algorithm for 3-coloring an n-cycle. Linial (1992)151 showed that

139 https://en.wikipedia.org/wiki/Daniel_Br%C3%A9laz
140 https://en.wikipedia.org/wiki/Grundy_number
141 https://en.wikipedia.org/wiki/Distributed_algorithm
142 https://en.wikipedia.org/wiki/Symmetry_breaking
143 https://en.wikipedia.org/wiki/Multi-trials_technique
144 https://en.wikipedia.org/wiki/Symmetric_graph
145 https://en.wikipedia.org/wiki/Deterministic_algorithm
146 https://en.wikipedia.org/wiki/Cycle_graph
147 https://en.wikipedia.org/wiki/Uzi_Vishkin
148 https://en.wikipedia.org/wiki/Iterated_logarithm
149 https://en.wikipedia.org/wiki/Iterated_logarithm
150 https://en.wikipedia.org/wiki/Iterated_logarithm
151 #CITEREFLinial1992


this is not possible: any deterministic distributed algorithm requires Ω(log*152 n) commu-
nication steps to reduce an n-coloring to a 3-coloring in an n-cycle.
The technique by Cole and Vishkin can be applied in arbitrary bounded-degree graphs
as well; the running time is poly(Δ) + O(log*153 n).[16] The technique was extended to
unit disk graphs154 by Schneider et al.[17] The fastest deterministic algorithms for (Δ + 1)-
coloring for small Δ are due to Leonid Barenboim, Michael Elkin and Fabian Kuhn.[18]
The algorithm by Barenboim et al. runs in time O(Δ) + log*155 (n)/2, which is optimal in
terms of n since the constant factor 1/2 cannot be improved due to Linial's lower bound.
Panconesi & Srinivasan (1996)156 use network decompositions to compute a ∆ + 1 coloring
in time 2^O(√(log n)) .
The problem of edge coloring has also been studied in the distributed model. Panconesi &
Rizzi (2001)157 achieve a (2Δ − 1)-coloring in O(Δ + log*158 n) time in this model. The
lower bound for distributed vertex coloring due to Linial (1992)159 applies to the distributed
edge coloring problem as well.

43.4.6 Decentralized algorithms

Decentralized algorithms are ones where no message passing is allowed (in contrast to
distributed algorithms where local message passing takes place), and efficient decentralized
algorithms exist that will color a graph if a proper coloring exists. These assume that a
vertex is able to sense whether any of its neighbors are using the same color as the vertex,
i.e., whether a local conflict exists. This is a mild assumption in many applications; e.g., in
wireless channel allocation it is usually reasonable to assume that a station will be able to
detect whether other interfering transmitters are using the same channel (e.g., by measuring
the SINR). This sensing information is sufficient to allow algorithms based on learning
automata to find a proper graph coloring with probability one.[19]
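As a toy stand-in for such schemes (this is not the learning-automata algorithm of the cited papers, only a sketch of the sensing model): every vertex that senses a conflict resamples its color uniformly at random, with no messages exchanged.

```python
import random

def decentralized_coloring(adj, k, seed=0, max_rounds=100_000):
    """Conflict-sensing sketch: a vertex only knows whether some neighbour
    currently shares its colour; conflicted vertices resample at random.
    Returns a proper k-coloring, or None if the round budget runs out."""
    rng = random.Random(seed)
    color = {v: rng.randrange(k) for v in adj}
    for _ in range(max_rounds):
        conflicted = [v for v in adj
                      if any(color[u] == color[v] for u in adj[v])]
        if not conflicted:
            return color
        for v in conflicted:
            color[v] = rng.randrange(k)
    return None
```

On small instances this typically converges quickly; unlike the learning-automata schemes, plain uniform resampling comes with no guaranteed convergence rate.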

43.4.7 Computational complexity

Graph coloring is computationally hard. It is NP-complete160 to decide if a given graph
admits a k-coloring for a given k except for the cases k ∈ {0, 1, 2}. In particular, it is NP-hard
to compute the chromatic number.[20] The 3-coloring problem remains NP-complete even
on 4-regular planar graphs161 .[21] However, for every k > 3, a k-coloring of a planar graph
exists by the four color theorem162 , and it is possible to find such a coloring in polynomial
time.

152 https://en.wikipedia.org/wiki/Iterated_logarithm
153 https://en.wikipedia.org/wiki/Iterated_logarithm
154 https://en.wikipedia.org/wiki/Unit_disk_graph
155 https://en.wikipedia.org/wiki/Iterated_logarithm
156 #CITEREFPanconesiSrinivasan1996
157 #CITEREFPanconesiRizzi2001
158 https://en.wikipedia.org/wiki/Iterated_logarithm
159 #CITEREFLinial1992
160 https://en.wikipedia.org/wiki/NP-complete
161 https://en.wikipedia.org/wiki/Planar_graph
162 https://en.wikipedia.org/wiki/Four_color_theorem


The best known approximation algorithm163 computes a coloring of size at most within a
factor O(n (log log n)^2 (log n)^−3 ) of the chromatic number.[22] For all ε > 0, approximating
the chromatic number within n^(1−ε) is NP-hard164 .[23]
It is also NP-hard to color a 3-colorable graph with 4 colors[24] and a k-colorable graph with
k^((log k)/25) colors for sufficiently large constant k.[25]
Computing the coefficients of the chromatic polynomial is #P-hard165 . In fact, even com-
puting the value of χ(G, k) is #P-hard at any rational point166 k except for k = 1 and
k = 2.[26] There is no FPRAS167 for evaluating the chromatic polynomial at any rational
point k ≥ 1.5 except for k = 2 unless NP168 = RP169 .[27]
For edge coloring, the proof of Vizing’s result gives an algorithm that uses at most Δ+1
colors. However, deciding between the two candidate values for the edge chromatic number
is NP-complete.[28] In terms of approximation algorithms, Vizing’s algorithm shows that the
edge chromatic number can be approximated to within 4/3, and the hardness result shows
that no (4/3 − ε )-algorithm exists for any ε > 0 unless P = NP170 . These are among the
oldest results in the literature of approximation algorithms, even though neither paper
makes explicit use of that notion.[29]

43.5 Applications

43.5.1 Scheduling

Vertex coloring models a number of scheduling problems.[30] In the cleanest form, a
given set of jobs must be assigned to time slots, where each job requires one such slot. Jobs can
be scheduled in any order, but pairs of jobs may be in conflict in the sense that they may not
be assigned to the same time slot, for example because they both rely on a shared resource.
The corresponding graph contains a vertex for every job and an edge for every conflicting
pair of jobs. The chromatic number of the graph is exactly the minimum makespan, the
optimal time to finish all jobs without conflicts.
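This cleanest form fits in a few lines; the job names and conflict pairs below are made up for illustration, and the chromatic number is found by brute force:

```python
from itertools import product

# hypothetical jobs and shared-resource conflict pairs
jobs = ["backup", "reindex", "report", "etl"]
conflicts = [("backup", "reindex"), ("reindex", "report"),
             ("report", "etl"), ("etl", "backup")]

def min_slots(jobs, conflicts):
    """Minimum number of time slots = chromatic number of the conflict
    graph, found here by brute force over all slot assignments."""
    for k in range(1, len(jobs) + 1):
        for assign in product(range(k), repeat=len(jobs)):
            slot = dict(zip(jobs, assign))
            if all(slot[a] != slot[b] for a, b in conflicts):
                return k, slot
```

The four conflicts above form an even cycle, so two time slots suffice; min_slots returns k = 2 with a conflict-free assignment.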
Details of the scheduling problem define the structure of the graph. For example, when
assigning aircraft to flights, the resulting conflict graph is an interval graph172 , so the
coloring problem can be solved efficiently. In bandwidth allocation173 to radio stations, the
resulting conflict graph is a unit disk graph174 , so the coloring problem is 3-approximable.

163 https://en.wikipedia.org/wiki/Approximation_algorithm
164 https://en.wikipedia.org/wiki/NP-hard
165 https://en.wikipedia.org/wiki/Sharp-P-complete
166 https://en.wikipedia.org/wiki/Rational_point
167 https://en.wikipedia.org/wiki/FPRAS
168 https://en.wikipedia.org/wiki/NP_(complexity)
169 https://en.wikipedia.org/wiki/RP_(complexity)
170 https://en.wikipedia.org/wiki/P_%3D_NP
171 https://en.wikipedia.org/wiki/Scheduling_(computing)
172 https://en.wikipedia.org/wiki/Interval_graph
173 https://en.wikipedia.org/wiki/Bandwidth_allocation
174 https://en.wikipedia.org/wiki/Unit_disk_graph


43.5.2 Register allocation

Main article: Register allocation175 A compiler176 is a computer program177 that translates
one computer language178 into another. To improve the execution time of the resulting code,
one of the techniques of compiler optimization179 is register allocation180 , where the most
frequently used values of the compiled program are kept in the fast processor registers181 .
Ideally, values are assigned to registers so that they can all reside in the registers when they
are used.
The textbook approach to this problem is to model it as a graph coloring problem.[31] The
compiler constructs an interference graph, where vertices are variables and an edge connects
two vertices if they are needed at the same time. If the graph can be colored with k colors
then any set of variables needed at the same time can be stored in at most k registers.
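A toy version of the textbook approach (the variable names and live ranges are invented for illustration): two variables interfere when their live ranges overlap, and since such an interference graph is an interval graph, greedy coloring in order of increasing start point is optimal.

```python
# hypothetical live ranges: first and last instruction using each variable
live = {"a": (0, 4), "b": (2, 6), "c": (5, 9), "d": (7, 8)}
names = sorted(live)

# interference graph: an edge whenever two live ranges overlap
edges = {(x, y) for x in names for y in names
         if x < y and live[x][0] <= live[y][1] and live[y][0] <= live[x][1]}

# greedy colouring by increasing start point (optimal on interval graphs)
color = {}
for v in sorted(names, key=lambda v: live[v][0]):
    taken = {color[u] for u in names if u in color
             and ((u, v) in edges or (v, u) in edges)}
    c = 0
    while c in taken:
        c += 1
    color[v] = c

registers_needed = max(color.values()) + 1   # here: 2 registers
```

When the interference graph needs more colors than there are physical registers, real allocators additionally spill some variables to memory, which this sketch ignores.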

43.5.3 Other applications

The problem of coloring a graph arises in many practical areas such as pattern matching182 ,
sports scheduling, designing seating plans, exam timetabling, the scheduling of taxis, and
solving Sudoku183 puzzles.[32]

43.6 Other colorings

43.6.1 Ramsey theory

Main article: Ramsey theory184 An important class of improper coloring problems is stud-
ied in Ramsey theory185 , where the graph’s edges are assigned to colors, and there is no
restriction on the colors of incident edges. A simple example is the friendship theorem186 ,
which states that in any coloring of the edges of K6 , the complete graph of six vertices,
there will be a monochromatic triangle; often illustrated by saying that any group of six
people either has three mutual strangers or three mutual acquaintances. Ramsey theory is
concerned with generalisations of this idea to seek regularity amid disorder, finding general
conditions for the existence of monochromatic subgraphs with given structure.

175 https://en.wikipedia.org/wiki/Register_allocation
176 https://en.wikipedia.org/wiki/Compiler
177 https://en.wikipedia.org/wiki/Computer_program
178 https://en.wikipedia.org/wiki/Computer_language
179 https://en.wikipedia.org/wiki/Compiler_optimization
180 https://en.wikipedia.org/wiki/Register_allocation
181 https://en.wikipedia.org/wiki/Processor_register
182 https://en.wikipedia.org/wiki/Pattern_matching
183 https://en.wikipedia.org/wiki/Sudoku
184 https://en.wikipedia.org/wiki/Ramsey_theory
185 https://en.wikipedia.org/wiki/Ramsey_theory
186 https://en.wikipedia.org/wiki/Friendship_theorem


43.6.2 Other colorings

Adjacent-vertex-distinguishing-total coloring187
A total coloring with the additional restriction that any two adjacent vertices have
different color sets
Acyclic coloring188
Every 2-chromatic subgraph is acyclic
B-coloring189
A coloring of the vertices where each color class contains a vertex that has a neighbor
in all other color classes
Circular coloring190
Motivated by task systems in which production proceeds in a cyclic way
Cocoloring191
An improper vertex coloring where every color class induces an independent set or a
clique
Complete coloring192
Every pair of colors appears on at least one edge
Defective coloring193
An improper vertex coloring where every color class induces a bounded degree subgraph
Distinguishing coloring194
An improper vertex coloring that destroys all the symmetries of the graph
Equitable coloring195
The sizes of color classes differ by at most one
Exact coloring196
Every pair of colors appears on exactly one edge
Fractional coloring197
Vertices may have multiple colors, and on each edge the sum of the color parts of each
vertex is not greater than one
Hamiltonian coloring198
Uses the length of the longest path between two vertices, also known as the detour
distance
Harmonious coloring199
Every pair of colors appears on at most one edge
Incidence coloring200
Each adjacent incidence of vertex and edge is colored with distinct colors
Interval edge coloring201
A color of edges meeting in a common vertex must be contiguous
List coloring202
Each vertex chooses from a list of colors
List edge-coloring203
Each edge chooses from a list of colors
L(h, k)-coloring204
Difference of colors at adjacent vertices is at least h and difference of colors of vertices
at a distance two is at least k. A particular case is L(2,1)-coloring205 .
Oriented coloring206
Takes into account orientation of edges of the graph
Path coloring207
Models a routing problem in graphs
Radio coloring208
Sum of the distance between the vertices and the difference of their colors is greater
than k+1, where k is a positive integer
Rank coloring209
If two vertices have the same color i, then every path between them contains a vertex
with color greater than i
Subcoloring210
An improper vertex coloring where every color class induces a union of cliques
Sum coloring211
The criterion of minimization is the sum of colors
Star coloring212
Every 2-chromatic subgraph is a disjoint collection of stars213
Strong coloring214
Every color appears in every partition of equal size exactly once
Strong edge coloring215
Edges are colored such that each color class induces a matching (equivalent to coloring
the square of the line graph)
T-coloring216
Absolute value of the difference between two colors of adjacent vertices must not belong
to fixed set T
Total coloring217
Vertices and edges are colored
Centered coloring218
Every connected induced subgraph219 has a color that is used exactly once
Triangle-free edge coloring220
The edges are colored so that each color class forms a triangle-free221 subgraph
Weak coloring222
An improper vertex coloring where every non-isolated node has at least one neighbor
with a different color

Coloring can also be considered for signed graphs223 and gain graphs224 .

43.7 See also

Wikimedia Commons has media related to Graph coloring225 .

187 https://en.wikipedia.org/wiki/Adjacent-vertex-distinguishing-total_coloring
188 https://en.wikipedia.org/wiki/Acyclic_coloring
189 https://en.wikipedia.org/wiki/B-coloring
190 https://en.wikipedia.org/wiki/Circular_coloring
191 https://en.wikipedia.org/wiki/Cocoloring
192 https://en.wikipedia.org/wiki/Complete_coloring
193 https://en.wikipedia.org/wiki/Defective_coloring
194 https://en.wikipedia.org/wiki/Distinguishing_coloring
195 https://en.wikipedia.org/wiki/Equitable_coloring
196 https://en.wikipedia.org/wiki/Exact_coloring
197 https://en.wikipedia.org/wiki/Fractional_coloring
198 https://en.wikipedia.org/wiki/Hamiltonian_coloring
199 https://en.wikipedia.org/wiki/Harmonious_coloring
200 https://en.wikipedia.org/wiki/Incidence_coloring
201 https://en.wikipedia.org/wiki/Interval_edge_coloring
202 https://en.wikipedia.org/wiki/List_coloring
203 https://en.wikipedia.org/wiki/List_edge-coloring
204 https://en.wikipedia.org/wiki/L(h,_k)-coloring
205 https://en.wikipedia.org/wiki/L(2,1)-coloring
206 https://en.wikipedia.org/wiki/Oriented_coloring
207 https://en.wikipedia.org/wiki/Path_coloring
208 https://en.wikipedia.org/wiki/Radio_coloring
209 https://en.wikipedia.org/wiki/Rank_coloring
210 https://en.wikipedia.org/wiki/Subcoloring
211 https://en.wikipedia.org/wiki/Sum_coloring
212 https://en.wikipedia.org/wiki/Star_coloring
213 https://en.wikipedia.org/wiki/Star_(graph_theory)
214 https://en.wikipedia.org/wiki/Strong_coloring
215 https://en.wikipedia.org/w/index.php?title=Strong_edge_coloring&action=edit&redlink=1
216 https://en.wikipedia.org/wiki/T-coloring

• Edge coloring226
• Circular coloring227
• Critical graph228
• Graph homomorphism229
• Hajós construction230
• Mathematics of Sudoku231
• Multipartite graph232
• Uniquely colorable graph233
• Graph coloring game234
• Interval edge coloring235

43.8 Notes
1. M. Kubale, History of graph coloring, in Kubale (2004)236
2. van Lint & Wilson (2001237 , Chap. 33)
3. Jensen & Toft (1995)238 , p. 2
4. Brooks (1941)239
5. EŐ, P240 (1959), ”G   ”, Canadian Journal
of Mathematics, 11: 34–38, doi241 :10.4153/CJM-1959-003-9242 .
6. Björklund, Husfeldt & Koivisto (2009)243
7. Lawler (1976)244
8. Beigel & Eppstein (2005)245
9. Fomin, Gaspers & Saurabh (2007)246
10. Wilf (1986)247
11. Sekine, Imai & Tani (1995)248
12. Welsh & Powell (1967)249

226 https://en.wikipedia.org/wiki/Edge_coloring
227 https://en.wikipedia.org/wiki/Circular_coloring
228 https://en.wikipedia.org/wiki/Critical_graph
229 https://en.wikipedia.org/wiki/Graph_homomorphism
230 https://en.wikipedia.org/wiki/Haj%C3%B3s_construction
231 https://en.wikipedia.org/wiki/Mathematics_of_Sudoku
232 https://en.wikipedia.org/wiki/Multipartite_graph
233 https://en.wikipedia.org/wiki/Uniquely_colorable_graph
234 https://en.wikipedia.org/wiki/Graph_coloring_game
235 https://en.wikipedia.org/wiki/Interval_edge_coloring
236 #CITEREFKubale2004
237 #CITEREFvan_LintWilson2001
238 #CITEREFJensenToft1995
239 #CITEREFBrooks1941
240 https://en.wikipedia.org/wiki/Paul_Erd%C5%91s
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.4153%2FCJM-1959-003-9
243 #CITEREFBj%C3%B6rklundHusfeldtKoivisto2009
244 #CITEREFLawler1976
245 #CITEREFBeigelEppstein2005
246 #CITEREFFominGaspersSaurabh2007
247 #CITEREFWilf1986
248 #CITEREFSekineImaiTani1995
249 #CITEREFWelshPowell1967


13. Brélaz (1979)250


14. Schneider (2010)251
15. Cole & Vishkin (1986)252 , see also Cormen, Leiserson & Rivest (1990253 , Section 30.5)
16. Goldberg, Plotkin & Shannon (1988)254
17. Schneider (2008)255
18. Barenboim & Elkin (2009)256 ; Kuhn (2009)257
19. E.g. see Leith & Clifford (2006)258 and Duffy, O'Connell & Sapozhnikov (2008)259 .
20. Garey, Johnson & Stockmeyer (1974)260 ; Garey & Johnson (1979)261 .
21. Dailey (1980)262
22. Halldórsson (1993)263
23. Zuckerman (2007)264
24. Guruswami & Khanna (2000)265
25. Khot (2001)266
26. Jaeger, Vertigan & Welsh (1990)267
27. Goldberg & Jerrum (2008)268
28. Holyer (1981)269
29. Crescenzi & Kann (1998)270
30. Marx (2004)271
31. Chaitin (1982)272
32. Lewis, R. A Guide to Graph Colouring: Algorithms and Applications273 . Springer
International Publishers, 2015.

250 #CITEREFBr%C3%A9laz1979
251 #CITEREFSchneider2010
252 #CITEREFColeVishkin1986
253 #CITEREFCormenLeisersonRivest1990
254 #CITEREFGoldbergPlotkinShannon1988
255 #CITEREFSchneider2008
256 #CITEREFBarenboimElkin2009
257 #CITEREFKuhn2009
258 #CITEREFLeithClifford2006
259 #CITEREFDuffyO&#39;ConnellSapozhnikov2008
260 #CITEREFGareyJohnsonStockmeyer1974
261 #CITEREFGareyJohnson1979
262 #CITEREFDailey1980
263 #CITEREFHalld%C3%B3rsson1993
264 #CITEREFZuckerman2007
265 #CITEREFGuruswamiKhanna2000
266 #CITEREFKhot2001
267 #CITEREFJaegerVertiganWelsh1990
268 #CITEREFGoldbergJerrum2008
269 #CITEREFHolyer1981
270 #CITEREFCrescenziKann1998
271 #CITEREFMarx2004
272 #CITEREFChaitin1982
273 https://www.springer.com/gb/book/9783319257280


43.9 References
• B, L.; E, M. (2009), ”D (Δ + 1)-   (
Δ) ”, Proceedings of the 41st Symposium on Theory of Computing274 , pp. 111–120,
arXiv275 :0812.1379276 , doi277 :10.1145/1536414.1536432278 , ISBN279 978-1-60558-506-2280
• B, R.; E, D.281 (2005), ”3-   O(1.3289n )”,
Journal of Algorithms282 , 54 (2)): 168–204, arXiv283 :cs/0006046284 ,
285
doi :10.1016/j.jalgor.2004.06.008 286

• B, A.; H, T.; K, M. (2009), ”S 


 –”, SIAM Journal on Computing287 , 39 (2): 546–563,
doi288 :10.1137/070683933289
• B, D.290 (1979), ”N        ”, Com-
munications of the ACM291 , 22 (4): 251–256, doi292 :10.1145/359094.359101293
• B, R. L.294 (1941), ”O      ”,
Proceedings of the Cambridge Philosophical Society295 , 37 (2): 194–197, Bib-
code296 :1941PCPS...37..194B297 , doi298 :10.1017/S030500410002168X299
•  B, N. G.300 ; EŐ, P.301 (1951), ”A     
       ”302 (PDF), Nederl. Akad. Wetensch.
Proc. Ser. A, 54: 371–373, archived from the original303 (PDF) on 2016-03-10, retrieved
2009-05-16 (= Indag. Math. 13)

274 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
275 https://en.wikipedia.org/wiki/ArXiv_(identifier)
276 http://arxiv.org/abs/0812.1379
277 https://en.wikipedia.org/wiki/Doi_(identifier)
278 https://doi.org/10.1145%2F1536414.1536432
279 https://en.wikipedia.org/wiki/ISBN_(identifier)
280 https://en.wikipedia.org/wiki/Special:BookSources/978-1-60558-506-2
281 https://en.wikipedia.org/wiki/David_Eppstein
282 https://en.wikipedia.org/wiki/Journal_of_Algorithms
283 https://en.wikipedia.org/wiki/ArXiv_(identifier)
284 http://arxiv.org/abs/cs/0006046
285 https://en.wikipedia.org/wiki/Doi_(identifier)
286 https://doi.org/10.1016%2Fj.jalgor.2004.06.008
287 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
288 https://en.wikipedia.org/wiki/Doi_(identifier)
289 https://doi.org/10.1137%2F070683933
290 https://en.wikipedia.org/wiki/Daniel_Br%C3%A9laz
291 https://en.wikipedia.org/wiki/Communications_of_the_ACM
292 https://en.wikipedia.org/wiki/Doi_(identifier)
293 https://doi.org/10.1145%2F359094.359101
294 https://en.wikipedia.org/wiki/R._Leonard_Brooks
295 https://en.wikipedia.org/wiki/Proceedings_of_the_Cambridge_Philosophical_Society
296 https://en.wikipedia.org/wiki/Bibcode_(identifier)
297 https://ui.adsabs.harvard.edu/abs/1941PCPS...37..194B
298 https://en.wikipedia.org/wiki/Doi_(identifier)
299 https://doi.org/10.1017%2FS030500410002168X
300 https://en.wikipedia.org/wiki/Nicolaas_Govert_de_Bruijn
301 https://en.wikipedia.org/wiki/Paul_Erd%C5%91s
302 https://web.archive.org/web/20160310003706/http://www.math-inst.hu/~p_erdos/1951-01.pdf
303 http://www.math-inst.hu/~p_erdos/1951-01.pdf


• B, J.M. (2004), ”E     -


   ”, Operations Research Letters, 32 (6): 547–556,
doi304 :10.1016/j.orl.2004.03.002305
• C, G. J. (1982), ”R  &    -
”, Proc. 1982 SIGPLAN Symposium on Compiler Construction306 , pp. 98–105,
doi307 :10.1145/800230.806984308 , ISBN309 0-89791-074-5310
• C, R.; V, U. (1986), ”D    
    ”, Information and Control311 , 70 (1): 32–53,
doi312 :10.1016/S0019-9958(86)80023-7313
• C, T. H.; L, C. E.; R, R. L. (1990), Introduction to Algorithms314
(1 .), T MIT P
• C, P.; K, V. (D 1998), ”H     -
  —  -  G  J”, ACM SIGACT News315 ,
29 (4): 90, doi316 :10.1145/306198.306210317
• D, D. P. (1980), ”U      
4-   NP-”, Discrete Mathematics318 , 30 (3): 289–293,
doi319 :10.1016/0012-365X(80)90236-8320
• D, K.; O'C, N.; S, A. (2008), ”C  
    ”321 (PDF), Information Processing
Letters, 107 (2): 60–63, doi322 :10.1016/j.ipl.2008.01.002323
• F, B. W. (1978), ”O     ”, Can. J.
Math.324 , 30: 455–457, doi325 :10.4153/cjm-1978-039-8326
• F, F.V.327 ; G, S.; S, S. (2007), ”I E A
 C 3-  4-C”, Proc. 13th Annual International Conference,

304 https://en.wikipedia.org/wiki/Doi_(identifier)
305 https://doi.org/10.1016%2Fj.orl.2004.03.002
306 https://en.wikipedia.org/wiki/SIGPLAN_Symposium_on_Compiler_Construction
307 https://en.wikipedia.org/wiki/Doi_(identifier)
308 https://doi.org/10.1145%2F800230.806984
309 https://en.wikipedia.org/wiki/ISBN_(identifier)
310 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-074-5
311 https://en.wikipedia.org/wiki/Information_and_Control
312 https://en.wikipedia.org/wiki/Doi_(identifier)
313 https://doi.org/10.1016%2FS0019-9958%2886%2980023-7
314 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
315 https://en.wikipedia.org/wiki/ACM_SIGACT_News
316 https://en.wikipedia.org/wiki/Doi_(identifier)
317 https://doi.org/10.1145%2F306198.306210
318 https://en.wikipedia.org/wiki/Discrete_Mathematics_(journal)
319 https://en.wikipedia.org/wiki/Doi_(identifier)
320 https://doi.org/10.1016%2F0012-365X%2880%2990236-8
321 http://www.hamilton.ie/ken_duffy/Downloads/cfl.pdf
322 https://en.wikipedia.org/wiki/Doi_(identifier)
323 https://doi.org/10.1016%2Fj.ipl.2008.01.002
324 https://en.wikipedia.org/wiki/Canadian_Journal_of_Mathematics
325 https://en.wikipedia.org/wiki/Doi_(identifier)
326 https://doi.org/10.4153%2Fcjm-1978-039-8
327 https://en.wikipedia.org/wiki/Fedor_Fomin


COCOON 2007, Lecture Notes in Computer Science328 , 4598, Springer, pp. 65–74,
doi329 :10.1007/978-3-540-73545-8_9330 , ISBN331 978-3-540-73544-1332
• G, M. R.333 ; J, D. S.334 (1979), Computers and Intractability: A Guide
to the Theory of NP-Completeness335 , W.H. F, ISBN336 0-7167-1045-5337
• G, M. R.338 ; J, D. S.339 ; S, L.340 (1974), ”S 
NP- ”, Proceedings of the Sixth Annual ACM Symposium on Theory
of Computing341 , . 47–63, 342 :10.1145/800119.803884343
• G, L. A.344 ; J, M.345 (J 2008), ”I 
 T ”, Information and Computation346 , 206 (7): 908–929,
arXiv347 :cs/0605140348 , doi349 :10.1016/j.ic.2008.04.003350
• G, A. V.351 ; P, S. A.; S, G. E. (1988), ”P -
   ”, SIAM Journal on Discrete Mathematics352 , 1 (4): 434–
446, doi353 :10.1137/0401044354
• G, V.; K, S. (2000), ”O    4-  3-
 ”, Proceedings of the 15th Annual IEEE Conference on Computational
Complexity, pp. 188–197, doi355 :10.1109/CCC.2000.856749356 , ISBN357 0-7695-0674-7358
• H, M. M. (1993), ”A    
   ”, Information Processing Letters, 45: 19–23,
doi359 :10.1016/0020-0190(93)90246-6360

328 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
329 https://en.wikipedia.org/wiki/Doi_(identifier)
330 https://doi.org/10.1007%2F978-3-540-73545-8_9
331 https://en.wikipedia.org/wiki/ISBN_(identifier)
332 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-73544-1
333 https://en.wikipedia.org/wiki/Michael_R._Garey
334 https://en.wikipedia.org/wiki/David_S._Johnson
335 https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_NP-Completeness
336 https://en.wikipedia.org/wiki/ISBN_(identifier)
337 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-1045-5
338 https://en.wikipedia.org/wiki/Michael_R._Garey
339 https://en.wikipedia.org/wiki/David_S._Johnson
340 https://en.wikipedia.org/wiki/Larry_Stockmeyer
341 http://portal.acm.org/citation.cfm?id=803884
342 https://en.wikipedia.org/wiki/Doi_(identifier)
343 https://doi.org/10.1145%2F800119.803884
344 https://en.wikipedia.org/wiki/Leslie_Ann_Goldberg
345 https://en.wikipedia.org/wiki/Mark_Jerrum
346 https://en.wikipedia.org/wiki/Information_and_Computation
347 https://en.wikipedia.org/wiki/ArXiv_(identifier)
348 http://arxiv.org/abs/cs/0605140
349 https://en.wikipedia.org/wiki/Doi_(identifier)
350 https://doi.org/10.1016%2Fj.ic.2008.04.003
351 https://en.wikipedia.org/wiki/Andrew_V._Goldberg
352 https://en.wikipedia.org/wiki/SIAM_Journal_on_Discrete_Mathematics
353 https://en.wikipedia.org/wiki/Doi_(identifier)
354 https://doi.org/10.1137%2F0401044
355 https://en.wikipedia.org/wiki/Doi_(identifier)
356 https://doi.org/10.1109%2FCCC.2000.856749
357 https://en.wikipedia.org/wiki/ISBN_(identifier)
358 https://en.wikipedia.org/wiki/Special:BookSources/0-7695-0674-7
359 https://en.wikipedia.org/wiki/Doi_(identifier)
360 https://doi.org/10.1016%2F0020-0190%2893%2990246-6


• H, I. (1981), ”T NP-  -”, SIAM Journal on


Computing361 , 10 (4): 718–720, doi362 :10.1137/0210055363
• J, F.; V, D. L.; W, D. J. A. (1990), ”O  
   J  T ”, Mathematical Proceedings of the
Cambridge Philosophical Society364 , 108: 35–53, Bibcode365 :1990MPCPS.108...35J366 ,
doi367 :10.1017/S0305004100068936368
• J, T. R.; T, B. (1995), Graph Coloring Problems, Wiley-Interscience, New
York, ISBN369 0-471-02865-7370
• K, S.371 (2001), ”I    M-
C,      ”, Proc.
42nd Annual Symposium on Foundations of Computer Science372 , pp. 600–609,
doi373 :10.1109/SFCS.2001.959936374 , ISBN375 0-7695-1116-3376
• K, M. (2004), Graph Colorings, American Mathematical Society, ISBN377 0-
8218-3458-4378
• K, F. (2009), ”W  :    -
”, Proceedings of the 21st Symposium on Parallelism in Algorithms and Architec-
tures379 , pp. 138–144, doi380 :10.1145/1583991.1584032381 , ISBN382 978-1-60558-606-9383
• L, E.L.384 (1976), ”A        -
 ”, Information Processing Letters385 , 5 (3): 66–67, doi386 :10.1016/0020-
0190(76)90065-X387

361 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
362 https://en.wikipedia.org/wiki/Doi_(identifier)
363 https://doi.org/10.1137%2F0210055
364 https://en.wikipedia.org/wiki/Mathematical_Proceedings_of_the_Cambridge_Philosophical_Society
365 https://en.wikipedia.org/wiki/Bibcode_(identifier)
366 https://ui.adsabs.harvard.edu/abs/1990MPCPS.108...35J
367 https://en.wikipedia.org/wiki/Doi_(identifier)
368 https://doi.org/10.1017%2FS0305004100068936
369 https://en.wikipedia.org/wiki/ISBN_(identifier)
370 https://en.wikipedia.org/wiki/Special:BookSources/0-471-02865-7
371 https://en.wikipedia.org/wiki/Subhash_Khot
372 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
373 https://en.wikipedia.org/wiki/Doi_(identifier)
374 https://doi.org/10.1109%2FSFCS.2001.959936
375 https://en.wikipedia.org/wiki/ISBN_(identifier)
376 https://en.wikipedia.org/wiki/Special:BookSources/0-7695-1116-3
377 https://en.wikipedia.org/wiki/ISBN_(identifier)
378 https://en.wikipedia.org/wiki/Special:BookSources/0-8218-3458-4
379 https://en.wikipedia.org/wiki/Symposium_on_Parallelism_in_Algorithms_and_Architectures
380 https://en.wikipedia.org/wiki/Doi_(identifier)
381 https://doi.org/10.1145%2F1583991.1584032
382 https://en.wikipedia.org/wiki/ISBN_(identifier)
383 https://en.wikipedia.org/wiki/Special:BookSources/978-1-60558-606-9
384 https://en.wikipedia.org/wiki/Eugene_Lawler
385 https://en.wikipedia.org/wiki/Information_Processing_Letters
386 https://en.wikipedia.org/wiki/Doi_(identifier)
387 https://doi.org/10.1016%2F0020-0190%2876%2990065-X


• L, D.J.; C, P. (2006), ”A S-M D C S-


 A  WLAN”, Proc. RAWNET 2006, Boston, MA388 (PDF), -
 2016-03-03
• L, R.M.R. (2016), A Guide to Graph Colouring: Algorithms and Applications389 ,
S I P, ISBN390 978-3-319-25728-0391
• L, N.392 (1992), ”L    ”, SIAM
Journal on Computing393 , 21 (1): 193–201, CiteSeerX394 10.1.1.471.6378395 ,
396
doi :10.1137/0221015 397

•  L, J. H.; W, R. M. (2001), A Course in Combinatorics (2nd ed.), Cam-
bridge University Press, ISBN398 0-521-80340-3399
• M, D (2004), ”G     
 ”, Periodica Polytechnica, Electrical Engineering, 48, pp. 11–16, Cite-
SeerX400 10.1.1.95.4268401
• M, J.402 (1955), ”S    ”403 (PDF), Colloq. Math.,
3: 161–162.
• NŘ, J404 ; O  M, P405 (2012), ”T 3.13”,
Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, 28, Heidel-
berg: Springer, p. 42, doi406 :10.1007/978-3-642-27875-4407 , hdl408 :10338.dmlcz/143192409 ,
ISBN410 978-3-642-27874-7411 , MR412 2920058413 .

388 http://www.hamilton.ie/peterc/downloads/rawnet06.pdf
389 https://www.springer.com/gb/book/9783319257280
390 https://en.wikipedia.org/wiki/ISBN_(identifier)
391 https://en.wikipedia.org/wiki/Special:BookSources/978-3-319-25728-0
392 https://en.wikipedia.org/wiki/Nati_Linial
393 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
394 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
395 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.471.6378
396 https://en.wikipedia.org/wiki/Doi_(identifier)
397 https://doi.org/10.1137%2F0221015
398 https://en.wikipedia.org/wiki/ISBN_(identifier)
399 https://en.wikipedia.org/wiki/Special:BookSources/0-521-80340-3
400 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
401 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.95.4268
402 https://en.wikipedia.org/wiki/Jan_Mycielski
403 http://matwbn.icm.edu.pl/ksiazki/cm/cm3/cm3119.pdf
404 https://en.wikipedia.org/wiki/Jaroslav_Ne%C5%A1et%C5%99il
405 https://en.wikipedia.org/wiki/Patrice_Ossona_de_Mendez
406 https://en.wikipedia.org/wiki/Doi_(identifier)
407 https://doi.org/10.1007%2F978-3-642-27875-4
408 https://en.wikipedia.org/wiki/Hdl_(identifier)
409 http://hdl.handle.net/10338.dmlcz%2F143192
410 https://en.wikipedia.org/wiki/ISBN_(identifier)
411 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-27874-7
412 https://en.wikipedia.org/wiki/MR_(identifier)
413 http://www.ams.org/mathscinet-getitem?mr=2920058


• P, A; R, R (2001), ”S   -


   ”, Distributed Computing, Berlin, New York: Springer-
Verlag414 , 14 (2): 97–100, doi415 :10.1007/PL00008932416 , ISSN417 0178-2770418
• P, A.; S, A. (1996), ”O     -
 ”, Journal of Algorithms, 20
• S, K.; I, H.; T, S. (1995), ”C  T  
    ”, Proc. 6th International Symposium on Algorithms and
Computation (ISAAC 1995), Lecture Notes in Computer Science419 , 1004, Springer,
pp. 224–233, doi420 :10.1007/BFb0015427421 , ISBN422 3-540-60573-8423
• S, J. (2010), ”A      -
”424 (PDF), Proceedings of the Symposium on Principles of Distributed Computing425
• S, J. (2008), ”A -     -
  - ”426 (PDF), Proceedings of the Symposium on
Principles of Distributed Computing427
• W, D. J. A.; P, M. B. (1967), ”A     
         ”, The Computer
Journal, 10 (1): 85–86, doi428 :10.1093/comjnl/10.1.85429
• W, D. B. (1996), Introduction to Graph Theory, Prentice-Hall, ISBN430 0-13-
227828-6431
• W, H. S. (1986), Algorithms and Complexity, Prentice–Hall
• Z, D. (2007), ”L     -
  M C  C N”, Theory of Computing432 , 3: 103–128,
doi433 :10.4086/toc.2007.v003a006434
• Z, A. A. (1949), ”О НЕКОТОРЫХ СВОЙСТВАХ ЛИНЕЙНЫХ КОМПЛЕКСОВ”435 [O
    ], Mat. Sbornik N.S. (in Russian), 24 (66):

414 https://en.wikipedia.org/wiki/Springer-Verlag
415 https://en.wikipedia.org/wiki/Doi_(identifier)
416 https://doi.org/10.1007%2FPL00008932
417 https://en.wikipedia.org/wiki/ISSN_(identifier)
418 http://www.worldcat.org/issn/0178-2770
419 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
420 https://en.wikipedia.org/wiki/Doi_(identifier)
421 https://doi.org/10.1007%2FBFb0015427
422 https://en.wikipedia.org/wiki/ISBN_(identifier)
423 https://en.wikipedia.org/wiki/Special:BookSources/3-540-60573-8
424 http://www.dcg.ethz.ch/publications/podcfp107_schneider_188.pdf
425 https://en.wikipedia.org/wiki/Symposium_on_Principles_of_Distributed_Computing
426 http://www.dcg.ethz.ch/publications/podc08SW.pdf
427 https://en.wikipedia.org/wiki/Symposium_on_Principles_of_Distributed_Computing
428 https://en.wikipedia.org/wiki/Doi_(identifier)
429 https://doi.org/10.1093%2Fcomjnl%2F10.1.85
430 https://en.wikipedia.org/wiki/ISBN_(identifier)
431 https://en.wikipedia.org/wiki/Special:BookSources/0-13-227828-6
432 https://en.wikipedia.org/wiki/Theory_of_Computing_(journal)
433 https://en.wikipedia.org/wiki/Doi_(identifier)
434 https://doi.org/10.4086%2Ftoc.2007.v003a006
435 http://mi.mathnet.ru/msb5974


163–188, MR436 0035428437 . Translated into English in Amer. Math. Soc. Translation,
1952, MR438 0051516439 .

43.10 External links


• High-Performance Graph Colouring Algorithms440 Suite of 8 different algorithms (im-
plemented in C++) used in the book A Guide to Graph Colouring: Algorithms and
Applications441 (Springer International Publishers, 2015).
• Graph Coloring Page442 by Joseph Culberson (graph coloring programs)
• CoLoRaTiOn443 by Jim Andrews and Mike Fellows is a graph coloring puzzle
• Links to Graph Coloring source codes444
• Code for efficiently computing Tutte, Chromatic and Flow Polynomials445 by Gary Hag-
gard, David J. Pearce and Gordon Royle
• A graph coloring Web App446 by Jose Antonio Martin H.

436 https://en.wikipedia.org/wiki/MR_(identifier)
437 http://www.ams.org/mathscinet-getitem?mr=0035428
438 https://en.wikipedia.org/wiki/MR_(identifier)
439 https://mathscinet.ams.org/mathscinet-getitem?mr=0051516
440 http://rhydlewis.eu/resources/gCol.zip
441 https://www.springer.com/gb/book/9783319257280
442 https://webdocs.cs.ualberta.ca/~joe/Coloring/index.html
443 http://vispo.com/software
444 http://www.adaptivebox.net/research/bookmark/gcpcodes_link.html
445 http://www.mcs.vuw.ac.nz/~djp/tutte/
446 https://graph-coloring.appspot.com/

44 A* search algorithm

Algorithm used for pathfinding and graph traversal

Class: Search algorithm
Data structure: Graph
Worst-case performance: O(|E|) = O(b^d)
Worst-case space complexity: O(|V|) = O(b^d)


Graph and tree search algorithms

• α–β
• A*
• B*
• Backtracking
• Beam
• Bellman–Ford
• Best-first
• Bidirectional
• Borůvka
• Branch & bound
• BFS
• British Museum
• D*
• DFS
• Dijkstra
• Edmonds
• Floyd–Warshall
• Fringe search
• Hill climbing
• IDA*
• Iterative deepening
• Johnson
• Jump point
• Kruskal
• Lexicographic BFS
• LPA*
• Prim
• SMA*
• SPFA

Listings

• Graph algorithms
• Search algorithms
• List of graph algorithms

Related topics

• Dynamic programming
• Graph traversal
• Tree traversal
• Search games


A* (pronounced ”A-star”) is a graph traversal1 and path search2 algorithm3 , which is of-
ten used in computer science due to its completeness, optimality, and optimal efficiency.[1]
One major practical drawback is its O(b^d) space complexity, as it stores all generated
nodes in memory. Thus, in practical travel-routing systems4 , it is generally outperformed
by algorithms which can pre-process the graph to attain better performance,[2] as well as
memory-bounded approaches; however, A* is still the best solution in many cases.[3]
Peter Hart5 , Nils Nilsson6 and Bertram Raphael7 of Stanford Research Institute (now SRI
International8 ) first published the algorithm in 1968.[4] It can be seen as an extension of
Edsger Dijkstra's9 1959 algorithm10 . A* achieves better performance by using heuristics11
to guide its search.

1 https://en.wikipedia.org/wiki/Graph_traversal
2 https://en.wikipedia.org/wiki/Pathfinding
3 https://en.wikipedia.org/wiki/Algorithm
4 https://en.wikipedia.org/wiki/Travel-routing_system
5 https://en.wikipedia.org/wiki/Peter_E._Hart
6 https://en.wikipedia.org/wiki/Nils_Nilsson_(researcher)
7 https://en.wikipedia.org/wiki/Bertram_Raphael
8 https://en.wikipedia.org/wiki/SRI_International
9 https://en.wikipedia.org/wiki/Edsger_Dijkstra
10 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
11 https://en.wikipedia.org/wiki/Heuristic_(computer_science)


44.1 History

Figure 111 A* was invented by researchers working on Shakey the Robot's path
planning.

A* was created as part of the Shakey project12 , which had the aim of building a mobile robot
that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser
algorithm[5] for Shakey's path planning.[6] Graph Traverser is guided by a heuristic function
h(n), the estimated distance from node n to the goal node: it entirely ignores g(n), the

12 https://en.wikipedia.org/wiki/Shakey_the_robot


distance from the start node to n. Bertram Raphael suggested using the sum, g(n) + h(n).[6]
Peter Hart invented the concepts we now call admissibility13 and consistency14 of heuristic
functions. A* was originally designed for finding least-cost paths when the cost of a path is
the sum of its edge costs, but it has been shown that A* can be used to find optimal paths
for any problem satisfying the conditions of a cost algebra.[7]
The original 1968 A* paper[4] contained a theorem that no A*-like algorithm[8] could expand
fewer nodes than A* if the heuristic function is consistent and A*’s tie-breaking rule is
suitably chosen. A "correction" was published a few years later[9] claiming that consistency
was not required, but this was shown to be false in Dechter and Pearl's definitive study
of A*'s optimality (now called optimal efficiency), which gave an example of A* with a
heuristic that was admissible but not consistent expanding arbitrarily more nodes than an
alternative A*-like algorithm.[10]

44.2 Description

A* is an informed search algorithm15, or a best-first search16, meaning that it is formulated
in terms of weighted graphs17: starting from a specific starting node18 of a graph, it aims to
find a path to the given goal node having the smallest cost (least distance travelled, shortest
time, etc.). It does this by maintaining a tree19 of paths originating at the start node and
extending those paths one edge at a time until its termination criterion is satisfied.
At each iteration of its main loop, A* needs to determine which of its paths to extend. It
does so based on the cost of the path and an estimate of the cost required to extend the
path all the way to the goal. Specifically, A* selects the path that minimizes
f (n) = g(n) + h(n)
where n is the next node on the path, g(n) is the cost of the path from the start node to
n, and h(n) is a heuristic20 function that estimates the cost of the cheapest path from n to
the goal. A* terminates when the path it chooses to extend is a path from start to goal or
if there are no paths eligible to be extended. The heuristic function is problem-specific. If
the heuristic function is admissible21 , meaning that it never overestimates the actual cost
to get to the goal, A* is guaranteed to return a least-cost path from start to goal.
Typical implementations of A* use a priority queue22 to perform the repeated selection of
minimum (estimated) cost nodes to expand. This priority queue is known as the open set23

13 https://en.wikipedia.org/wiki/Admissible_heuristic
14 https://en.wikipedia.org/wiki/Consistent_heuristic
15 https://en.wikipedia.org/wiki/Informed_search_algorithm
16 https://en.wikipedia.org/wiki/Best-first_search
17 https://en.wikipedia.org/wiki/Weighted_graph
18 https://en.wikipedia.org/wiki/Node_(graph_theory)
19 https://en.wikipedia.org/wiki/Tree_(data_structure)
20 https://en.wikipedia.org/wiki/Heuristic
21 https://en.wikipedia.org/wiki/Admissible_heuristic
22 https://en.wikipedia.org/wiki/Priority_queue
23 https://en.wikipedia.org/w/index.php?title=Open_set_(Computer_science)&action=edit&redlink=1


or fringe24 . At each step of the algorithm, the node with the lowest f(x) value is removed
from the queue, the f and g values of its neighbors are updated accordingly, and these
neighbors are added to the queue. The algorithm continues until a goal node has a lower f
value than any node in the queue (or until the queue is empty).[a] The f value of the goal is
then the cost of the shortest path, since h at the goal is zero in an admissible heuristic.
The algorithm described so far gives us only the length of the shortest path. To find the
actual sequence of steps, the algorithm can be easily revised so that each node on the path
keeps track of its predecessor. After this algorithm is run, the ending node will point to its
predecessor, and so on, until some node's predecessor is the start node.
As an example, when searching for the shortest route on a map, h(x) might represent the
straight-line distance25 to the goal, since that is physically the smallest possible distance
between any two points.
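For instance, such a straight-line heuristic can be computed directly from node coordinates. The sketch below (the node names and coordinates are made up for illustration) builds h as a closure over a coordinate table:

```python
import math

def straight_line_h(coords, goal):
    """Return a heuristic h(n) giving the Euclidean distance from node n
    to the goal node. It is admissible because no route on the map can be
    shorter than the straight line between the two points."""
    gx, gy = coords[goal]
    return lambda n: math.hypot(coords[n][0] - gx, coords[n][1] - gy)

# Hypothetical map: start S, intermediate M, goal G.
coords = {"S": (0, 0), "M": (3, 4), "G": (6, 8)}
h = straight_line_h(coords, "G")
print(h("S"))  # 10.0 — the straight-line distance from (0, 0) to (6, 8)
```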
If the heuristic26 h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x,
y) of the graph (where d denotes the length of that edge), then h is called monotone, or
consistent27 . With a consistent heuristic, A* is guaranteed to find an optimal path without
processing any node more than once and A* is equivalent to running Dijkstra's algorithm28
with the reduced cost29 d'(x, y) = d(x, y) + h(y) − h(x).
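The consistency condition can be checked edge by edge; equivalently, every reduced cost d(x, y) + h(y) − h(x) must be non-negative. A minimal sketch (the toy edge list and heuristic values below are invented for illustration):

```python
def is_consistent(edges, h):
    """edges: iterable of (x, y, weight) tuples for a directed graph.
    Returns True if h(x) <= weight + h(y) holds on every edge, i.e.
    every reduced cost weight + h(y) - h(x) is non-negative."""
    return all(h(x) <= w + h(y) for x, y, w in edges)

# Toy graph with goal G; h underestimates the remaining distance.
edges = [("A", "B", 2), ("B", "G", 3), ("A", "G", 6)]
h = {"A": 4, "B": 3, "G": 0}.get
print(is_consistent(edges, h))  # True: e.g. h(A) = 4 <= 2 + h(B) = 5
```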

44.2.1 Pseudocode

The following pseudocode30 describes the algorithm:

function reconstruct_path(cameFrom, current)
    total_path := {current}
    while current in cameFrom.Keys:
        current := cameFrom[current]
        total_path.prepend(current)
    return total_path

// A* finds a path from start to goal.
// h is the heuristic function. h(n) estimates the cost to reach goal from node n.
function A_Star(start, goal, h)
    // The set of discovered nodes that may need to be (re-)expanded.
    // Initially, only the start node is known.
    // This is usually implemented as a min-heap or priority queue rather than a hash-set.
    openSet := {start}

    // For node n, cameFrom[n] is the node immediately preceding it on the
    // cheapest path from start to n currently known.
    cameFrom := an empty map

    // For node n, gScore[n] is the cost of the cheapest path from start to n
    // currently known.
    gScore := map with default value of Infinity
    gScore[start] := 0

    // For node n, fScore[n] := gScore[n] + h(n). fScore[n] represents our current
    // best guess as to how short a path from start to finish can be if it goes through n.
    fScore := map with default value of Infinity
    fScore[start] := h(start)

    while openSet is not empty
        // This operation can occur in O(1) time if openSet is a min-heap or a priority queue
        current := the node in openSet having the lowest fScore[] value
        if current = goal
            return reconstruct_path(cameFrom, current)

        openSet.Remove(current)
        for each neighbor of current
            // d(current, neighbor) is the weight of the edge from current to neighbor
            // tentative_gScore is the distance from start to the neighbor through current
            tentative_gScore := gScore[current] + d(current, neighbor)
            if tentative_gScore < gScore[neighbor]
                // This path to neighbor is better than any previous one. Record it!
                cameFrom[neighbor] := current
                gScore[neighbor] := tentative_gScore
                fScore[neighbor] := gScore[neighbor] + h(neighbor)
                if neighbor not in openSet
                    openSet.add(neighbor)

    // Open set is empty but goal was never reached
    return failure

24 https://en.wikipedia.org/w/index.php?title=Fringe_(Computer_science)&action=edit&redlink=1
25 https://en.wikipedia.org/wiki/Euclidean_distance
26 https://en.wikipedia.org/wiki/Heuristic
27 https://en.wikipedia.org/wiki/Consistent_heuristic
28 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
29 https://en.wikipedia.org/wiki/Reduced_cost
30 https://en.wikipedia.org/wiki/Pseudocode

Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and
subsequently reached by a cheaper path, it will be added to openSet again. This is essential
to guarantee that the path returned is optimal if the heuristic function is admissible31 but
not consistent32 . If the heuristic is consistent, when a node is removed from openSet the
path to it is guaranteed to be optimal so the test ‘tentative_gScore < gScore[neighbor]’ will
always fail if the node is reached again.
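As a concrete illustration, the pseudocode above translates into a short runnable sketch. The following Python function is illustrative only, not the article's reference code: the adjacency-dictionary graph and the lazy-deletion trick (pushing duplicate heap entries and skipping stale ones, rather than performing an explicit decrease-key) are assumptions made here for brevity.

```python
import heapq

def a_star(start, goal, neighbors, d, h):
    """A* search. neighbors(n) yields successor nodes, d(a, b) is the
    edge weight, h(n) is the heuristic estimate from n to the goal."""
    g_score = {start: 0}                       # cost of cheapest known path
    came_from = {}                             # parent pointers for path recovery
    open_heap = [(h(start), start)]            # entries are (fScore, node)
    while open_heap:
        f, current = heapq.heappop(open_heap)
        if f > g_score[current] + h(current):  # stale entry: a cheaper path
            continue                           # to current was found later
        if current == goal:
            # Walk the parent pointers back to the start.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return list(reversed(path))
        for n in neighbors(current):
            tentative = g_score[current] + d(current, n)
            if tentative < g_score.get(n, float("inf")):
                came_from[n] = current
                g_score[n] = tentative
                heapq.heappush(open_heap, (tentative + h(n), n))
    return None  # open set exhausted: goal unreachable

# Tiny illustrative graph; with h = 0 this reduces to Dijkstra's algorithm.
edges = {"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}
path = a_star("A", "C", lambda n: edges[n], lambda a, b: edges[a][b], lambda n: 0)
```

The lazy-deletion variant sidesteps the re-insertion issue discussed in the remark: stale heap entries are simply skipped when popped.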

31 https://en.wikipedia.org/wiki/Admissible_heuristic
32 https://en.wikipedia.org/wiki/Consistent_heuristic


Figure 112 Illustration of A* search for finding path from a start node to a goal node
in a robot motion planning problem. The empty circles represent the nodes in the open
set, i.e., those that remain to be explored, and the filled ones are in the closed set. Color
on each closed node indicates the distance from the start: the greener, the farther. One
can first see the A* moving in a straight line in the direction of the goal, then when
hitting the obstacle, it explores alternative routes through the nodes from the open set.
See also: Dijkstra's algorithm

44.2.2 Example

An example of an A* algorithm in action where nodes are cities connected with roads and
h(x) is the straight-line distance to target point:


Figure 113 An example of A* algorithm in action (nodes are cities connected with
roads, h(x) is the straight-line distance to target point) Green: Start, Blue: Target,
Orange: Visited

Key: green: start; blue: goal; orange: visited


The A* algorithm also has real-world applications. In this example, edges are railroads and
h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target.
The algorithm is searching for a path between Washington, D.C. and Los Angeles.


Figure 114 The A* algorithm finding a path of railroads between Washington, D.C.
and Los Angeles.

44.2.3 Implementation details

There are a number of simple optimizations or implementation details that can significantly
affect the performance of an A* implementation. The first detail to note is that the way the
priority queue handles ties can have a significant effect on performance in some situations.
If ties are broken so the queue behaves in a LIFO33 manner, A* will behave like depth-
first search34 among equal cost paths (avoiding exploring more than one equally optimal
solution).
When a path is required at the end of the search, it is common to keep with each node a
reference to that node's parent. At the end of the search these references can be used to
recover the optimal path. If these references are being kept then it can be important that the
same node doesn't appear in the priority queue more than once (each entry corresponding to
a different path to the node, and each with a different cost). A standard approach here is to
check if a node about to be added already appears in the priority queue. If it does, then the
priority and parent pointers are changed to correspond to the lower cost path. A standard
binary heap35 based priority queue does not directly support the operation of searching
for one of its elements, but it can be augmented with a hash table36 that maps elements
to their position in the heap, allowing this decrease-priority operation to be performed in

33 https://en.wikipedia.org/wiki/LIFO_(computing)
34 https://en.wikipedia.org/wiki/Depth-first_search
35 https://en.wikipedia.org/wiki/Binary_heap
36 https://en.wikipedia.org/wiki/Hash_table


logarithmic time. Alternatively, a Fibonacci heap37 can perform the same decrease-priority
operations in constant amortized time38 .
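The augmentation described above can be sketched directly. The following Python class is an illustrative toy (class and method names are my own, not from any standard library): a binary min-heap plus a dictionary from element to heap index, giving O(1) lookup and O(log n) decrease-priority.

```python
class IndexedMinHeap:
    """Binary min-heap augmented with a hash table mapping each element to
    its heap position, so decrease-priority runs in O(log n)."""
    def __init__(self):
        self.heap = []      # list of (priority, element) pairs
        self.pos = {}       # element -> index in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] < self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self.heap[c][0] < self.heap[smallest][0]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def push(self, element, priority):
        self.heap.append((priority, element))
        self.pos[element] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def decrease_priority(self, element, priority):
        i = self.pos[element]            # O(1) lookup via the hash table
        if priority < self.heap[i][0]:
            self.heap[i] = (priority, element)
            self._sift_up(i)             # O(log n) restoration of heap order

    def pop_min(self):
        self._swap(0, len(self.heap) - 1)
        priority, element = self.heap.pop()
        del self.pos[element]
        if self.heap:
            self._sift_down(0)
        return element, priority
```

When used as A*'s open set, `decrease_priority` replaces the "change the priority and parent pointers" step, so each node appears in the queue at most once.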

44.2.4 Special cases

Dijkstra's algorithm39 , as another example of a uniform-cost search algorithm, can be viewed
as a special case of A* where h(x) = 0 for all x.[11][12] General depth-first search40 can be
implemented using A* by considering that there is a global counter C initialized with a
very large value. Every time we process a node we assign C to all of its newly discovered
neighbors. After each single assignment, we decrease the counter C by one. Thus the earlier
a node is discovered, the higher its h(x) value. Both Dijkstra's algorithm and depth-first
search can be implemented more efficiently without including an h(x) value at each node.
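The counter construction for depth-first search can be made concrete. In this illustrative Python sketch (the graph representation and function names are assumptions, not from the original text), newly discovered nodes receive ever-smaller values from a decreasing counter, so a min-priority queue expands the most recently discovered node first, mimicking a depth-first expansion order:

```python
import heapq

def dfs_via_counter(start, neighbors):
    C = 10**9                        # the "very large" global counter
    h = {start: C}
    order, seen = [], {start}
    frontier = [(h[start], start)]   # min-heap keyed on the assigned h value
    while frontier:
        _, node = heapq.heappop(frontier)
        order.append(node)
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                C -= 1               # earlier discovery => higher h value
                h[nxt] = C
                heapq.heappush(frontier, (h[nxt], nxt))
    return order

# Hypothetical example: from "s" the branch through "b" (discovered last,
# hence lowest h) is expanded before "a".
edges = {"s": ["a", "b"], "a": [], "b": ["c"], "c": []}
visit_order = dfs_via_counter("s", lambda n: edges[n])
```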

44.3 Properties

44.3.1 Termination and Completeness

On finite graphs with non-negative edge weights A* is guaranteed to terminate and is
complete, i.e. it will always find a solution (a path from start to goal) if one exists. On
infinite graphs with a finite branching factor and edge costs that are bounded away from
zero (d(x, y) > ε > 0 for some fixed ε), A* is guaranteed to terminate only if there exists a
solution.

44.3.2 Admissibility

A search algorithm is said to be admissible if it is guaranteed to return an optimal solution.
If the heuristic function used by A* is admissible41 , then A* is admissible. An intuitive
"proof" of this is as follows:
When A* terminates its search, it has found a path from start to goal whose actual cost
is lower than the estimated cost of any path from start to goal through any open node
(the node's f value). When the heuristic is admissible, those estimates are optimistic (not
quite—see the next paragraph), so A* can safely ignore those nodes because they cannot
possibly lead to a cheaper solution than the one it already has. In other words, A* will
never overlook the possibility of a lower-cost path from start to goal and so it will continue
to search until no such possibilities exist.
The actual proof is a bit more involved because the f values of open nodes are not guaranteed
to be optimistic even if the heuristic is admissible. This is because the g values of open
nodes are not guaranteed to be optimal, so the sum g + h is not guaranteed to be optimistic.

37 https://en.wikipedia.org/wiki/Fibonacci_heap
38 https://en.wikipedia.org/wiki/Amortized_time
39 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
40 https://en.wikipedia.org/wiki/Depth-first_search
41 https://en.wikipedia.org/wiki/Admissible_heuristic


44.3.3 Optimal efficiency

Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on
a set of problems P if for every problem P in P and every algorithm A′ in Alts, the set
of nodes expanded by A in solving P is a subset (possibly equal) of the set of nodes
expanded by A′ in solving P. The definitive study of the optimal efficiency of A* is due to
Rina Dechter and Judea Pearl.[10] They considered a variety of definitions of Alts and P in
combination with A*'s heuristic being merely admissible or being both consistent and ad-
missible. The most interesting positive result they proved is that A*, with a consistent
heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on
all "non-pathological" search problems. Roughly speaking, their notion of non-pathological
problem is what we now mean by "up to tie-breaking". This result does not hold if A*'s
heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there
exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on
some non-pathological problems.
Optimal efficiency is about the set of nodes expanded, not the number of node expansions
(the number of iterations of A*'s main loop). When the heuristic being used is admissible
but not consistent, it is possible for a node to be expanded by A* many times, an exponential
number of times in the worst case.[13] In such circumstances Dijkstra's algorithm could
outperform A* by a large margin.


44.4 Bounded relaxation

Figure 115 A* search that uses a heuristic that is 5.0(=ε) times a consistent heuristic,
and obtains a suboptimal path.

While the admissibility criterion guarantees an optimal solution path, it also means that
A* must examine all equally meritorious paths to find the optimal path. To compute
approximate shortest paths, it is possible to speed up the search at the expense of optimality
by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that
we can guarantee that the solution path is no worse than (1 + ε) times the optimal solution
path. This new guarantee is referred to as ε-admissible.
There are a number of ε-admissible algorithms:


• Weighted A*/Static Weighting.[14] If ha (n) is an admissible heuristic function, in the
weighted version of the A* search one uses hw (n) = ε ha (n), ε > 1 as the heuristic function,
and performs the A* search as usual (which eventually happens faster than using ha since
fewer nodes are expanded). The path hence found by the search algorithm can have a
cost of at most ε times that of the least cost path in the graph.[15]
• Dynamic Weighting[16] uses the cost function f (n) = g(n) + (1 + εw(n))h(n), where
w(n) = 1 − d(n)/N if d(n) ≤ N and w(n) = 0 otherwise, and where d(n) is the depth of
the search and N is the anticipated length of the solution path.
• Sampled Dynamic Weighting[17] uses sampling of nodes to better estimate and debias the
heuristic error.
• A∗ε [18] uses two heuristic functions. The first is the FOCAL list, which is used to select
candidate nodes, and the second hF is used to select the most promising node from the
FOCAL list.
• Aε [19] selects nodes with the function Af (n) + BhF (n), where A and B are constants. If
no nodes can be selected, the algorithm will backtrack with the function Cf (n) + DhF (n),
where C and D are constants.
• AlphA*[20] attempts to promote depth-first exploitation by preferring recently expanded
nodes. AlphA* uses the cost function fα (n) = (1 + wα (n))f (n), where wα (n) = λ if
g(π(n)) ≤ g(ñ) and wα (n) = Λ otherwise; λ and Λ are constants with λ ≤ Λ, π(n) is the
parent of n, and ñ is the most recently expanded node.
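As a minimal sketch of the first variant above (static weighting), one can simply wrap an admissible heuristic; the function name below is hypothetical, not part of any standard library:

```python
def static_weighting(h_a, eps):
    """Weighted A* (static weighting): inflate an admissible heuristic h_a
    by a constant eps > 1. Running A* with the returned heuristic typically
    expands fewer nodes; the found path costs at most eps times optimal."""
    if eps <= 1:
        raise ValueError("eps must exceed 1 for weighted A*")
    return lambda n: eps * h_a(n)

# e.g. a 1-D straight-line distance to a goal at 10, inflated by 50%:
h_w = static_weighting(lambda n: abs(n - 10), 1.5)
```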

44.5 Complexity

The time complexity42 of A* depends on the heuristic. In the worst case of an unbounded
search space, the number of nodes expanded is exponential43 in the depth of the solution
(the shortest path) d: O(b^d ), where b is the branching factor44 (the average number of
successors per state).[21] This assumes that a goal state exists at all, and is reachable45 from
the start state; if it is not, and the state space is infinite, the algorithm will not terminate.
The heuristic function has a major effect on the practical performance of A* search, since
a good heuristic allows A* to prune away many of the b^d nodes that an uninformed search
would expand. Its quality can be expressed in terms of the effective branching factor b*,
which can be determined empirically for a problem instance by measuring the number of
nodes expanded, N, and the depth of the solution, then solving[22]
N + 1 = 1 + b∗ + (b∗)^2 + · · · + (b∗)^d .
Good heuristics are those with low effective branching factor (the optimal being b* = 1).
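The defining equation for b* has no closed form, but it can be solved numerically. A small illustrative Python routine (bisection; the function name is my own, and N ≥ d is assumed so that a root b* ≥ 1 exists):

```python
def effective_branching_factor(N, d, tol=1e-10):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection."""
    f = lambda b: sum(b ** i for i in range(d + 1)) - (N + 1)
    lo, hi = 1.0, float(N + 1)   # f(lo) <= 0 when N >= d, f(hi) > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, N = 14 expanded nodes at depth d = 3 gives b* = 2, since 1 + 2 + 4 + 8 = 15.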

42 https://en.wikipedia.org/wiki/Computational_complexity_theory
43 https://en.wikipedia.org/wiki/Exponential_time
44 https://en.wikipedia.org/wiki/Branching_factor
45 https://en.wikipedia.org/wiki/Reachability


The time complexity is polynomial46 when the search space is a tree, there is a single goal
state, and the heuristic function h meets the following condition:
|h(x) − h∗ (x)| = O(log h∗ (x))
where h* is the optimal heuristic, the exact cost to get from x to the goal. In other words,
the error of h will not grow faster than the logarithm47 of the ”perfect heuristic” h* that
returns the true distance from x to the goal.[15][21]
The space complexity48 of A* is roughly the same as that of all other graph search algo-
rithms, as it keeps all generated nodes in memory.[23] In practice, this turns out to be the
biggest drawback of A* search, leading to the development of memory-bounded heuristic
searches, such as Iterative deepening A*49 , memory bounded A*, and SMA*50 .

44.6 Applications

A* is often used for the common pathfinding51 problem in applications such as video games,
but was originally designed as a general graph traversal algorithm.[4] It finds applications
in diverse problems, including the problem of parsing52 using stochastic grammars53 in
NLP54 .[24] Other cases include an Informational search with online learning.[25]

44.7 Relations to other algorithms

What sets A* apart from a greedy55 best-first search algorithm is that it takes the
cost/distance already traveled, g(n), into account.
Some common variants of Dijkstra's algorithm56 can be viewed as a special case of A* where
the heuristic h(n) = 0 for all nodes;[11][12] in turn, both Dijkstra and A* are special cases
of dynamic programming57 .[26] A* itself is a special case of a generalization of branch and
bound58 .[27]

44.8 Variants

• Anytime A*59[28] or Anytime Repairing A* (ARA*)[29]

46 https://en.wikipedia.org/wiki/Polynomial_time
47 https://en.wikipedia.org/wiki/Logarithm
48 https://en.wikipedia.org/wiki/Space_complexity
49 https://en.wikipedia.org/wiki/Iterative_deepening_A*
50 https://en.wikipedia.org/wiki/SMA*
51 https://en.wikipedia.org/wiki/Pathfinding
52 https://en.wikipedia.org/wiki/Parsing
53 https://en.wikipedia.org/wiki/Stochastic_context-free_grammar
54 https://en.wikipedia.org/wiki/Natural_language_processing
55 https://en.wikipedia.org/wiki/Greedy_algorithm
56 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
57 https://en.wikipedia.org/wiki/Dynamic_programming
58 https://en.wikipedia.org/wiki/Branch_and_bound
59 https://en.wikipedia.org/wiki/Anytime_A*


• Anytime Dynamic A*60
• Block A*61
• D*62
• Field D*63
• Fringe64
• Fringe Saving A* (FSA*)65
• Generalized Adaptive A* (GAA*)66
• Incremental heuristic search67
• Informational search68[25]
• Iterative deepening A* (IDA*)69
• Jump point search70
• Lifelong Planning A* (LPA*)71
• Multiplier-accelerated A* (MAA*)72 [30]
• New Bidirectional A* (NBA*)[31]
• Simplified Memory bounded A* (SMA*)73
• Realtime A*74[32]
• Theta*75
• Time-Bounded A* (TBA*)76[33]
A* can also be adapted to a bidirectional search77 algorithm. Special care needs to be taken
for the stopping criterion.[34]

44.9 See also


• Breadth-first search78
• Depth-first search79
• Any-angle path planning80 , search for paths that are not limited to moving along graph
edges but rather can take on any angle

60 https://en.wikipedia.org/w/index.php?title=Anytime_Dynamic_A*&action=edit&redlink=1
61 https://en.wikipedia.org/wiki/Any-angle_path_planning#A*-based
62 https://en.wikipedia.org/wiki/D*
63 https://en.wikipedia.org/wiki/Any-angle_path_planning
64 https://en.wikipedia.org/wiki/Fringe_search
65 https://en.wikipedia.org/wiki/Incremental_heuristic_search
66 https://en.wikipedia.org/wiki/Incremental_heuristic_search
67 https://en.wikipedia.org/wiki/Incremental_heuristic_search
68 https://en.wikipedia.org/w/index.php?title=Informational_search&action=edit&redlink=1
69 https://en.wikipedia.org/wiki/Iterative_deepening_A*
70 https://en.wikipedia.org/wiki/Jump_point_search
71 https://en.wikipedia.org/wiki/Lifelong_Planning_A*
https://en.wikipedia.org/w/index.php?title=Multiplier-accelerated_A*_(MAA*)&action=
72
edit&redlink=1
73 https://en.wikipedia.org/wiki/SMA*
74 https://en.wikipedia.org/w/index.php?title=Realtime_A*&action=edit&redlink=1
75 https://en.wikipedia.org/wiki/Theta*
76 https://en.wikipedia.org/w/index.php?title=Time-Bounded_A*&action=edit&redlink=1
77 https://en.wikipedia.org/wiki/Bidirectional_search
78 https://en.wikipedia.org/wiki/Breadth-first_search
79 https://en.wikipedia.org/wiki/Depth-first_search
80 https://en.wikipedia.org/wiki/Any-angle_path_planning


44.10 Notes
1. Goal nodes may be passed over multiple times if there remain other nodes with lower
f values, as they may lead to a shorter path to a goal.

44.11 References
1. R, S J. (2018). Artificial intelligence a modern approach. Norvig,
Peter (4th ed.). Boston: Pearson. ISBN81 978-013461099382 . OCLC83 102187414284 .
2. D, D.; S, P.85 ; S, D.; W, D.86 (2009). ”E-
 R P A”. Algorithmics of Large and Complex Net-
works: Design, Analysis, and Simulation. Lecture Notes in Computer Science. 5515.
Springer. pp. 11个$7–139. CiteSeerX87 10.1.1.164.891688 . doi89 :10.1007/978-3-642-
02094-0_790 . ISBN91 978-3-642-02093-392 .
3. Z, W.; C, R. L. (2009). ”F     
:    A*”93 . International Journal of Geographical Information
Science. 23 (4): 531–543. doi94 :10.1080/1365881080194985095 .
4. H, P. E.; N, N. J.; R, B. (1968). ”A F B  
H D  M C P”. IEEE Transactions on Sys-
tems Science and Cybernetics. 4 (2): 100–107. doi96 :10.1109/TSSC.1968.30013697 .
5. D, J. E.; M, D. (1966-09-20). ”E   G
T ”. Proc. R. Soc. Lond. A. 294 (1437): 235–259.
Bibcode98 :1966RSPSA.294..235D99 . doi100 :10.1098/rspa.1966.0205101 . ISSN102 0080-
4630103 .

81 https://en.wikipedia.org/wiki/ISBN_(identifier)
82 https://en.wikipedia.org/wiki/Special:BookSources/978-0134610993
83 https://en.wikipedia.org/wiki/OCLC_(identifier)
84 http://www.worldcat.org/oclc/1021874142
85 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
86 https://en.wikipedia.org/wiki/Dorothea_Wagner
87 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
88 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.164.8916
89 https://en.wikipedia.org/wiki/Doi_(identifier)
90 https://doi.org/10.1007%2F978-3-642-02094-0_7
91 https://en.wikipedia.org/wiki/ISBN_(identifier)
92 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-02093-3
93 https://zenodo.org/record/979689
94 https://en.wikipedia.org/wiki/Doi_(identifier)
95 https://doi.org/10.1080%2F13658810801949850
96 https://en.wikipedia.org/wiki/Doi_(identifier)
97 https://doi.org/10.1109%2FTSSC.1968.300136
98 https://en.wikipedia.org/wiki/Bibcode_(identifier)
99 https://ui.adsabs.harvard.edu/abs/1966RSPSA.294..235D
100 https://en.wikipedia.org/wiki/Doi_(identifier)
101 https://doi.org/10.1098%2Frspa.1966.0205
102 https://en.wikipedia.org/wiki/ISSN_(identifier)
103 http://www.worldcat.org/issn/0080-4630


6. N, N J. (2009-10-30). The Quest for Artificial Intelligence104 (PDF).


C: C U P. ISBN105 9780521122931106 .
7. E, S; J, S; L-L, A (2005).
”C-A H S”107 (PDF). Proceedings of the Twentieth Na-
tional Conference on Artificial Intelligence (AAAI): 1362–1367.
8. “A*-like” means the algorithm searches by extending paths originating at the start
node one edge at a time, just as A* does. This excludes, for example, algorithms that
search backward from the goal or in both directions simultaneously. In addition, the
algorithms covered by this theorem must be admissible and “not more informed” than
A*.
9. H, P E.; N, N J.; R, B (1972-12-01).
”C  'A F B   H D
 M C P'”108 (PDF). ACM SIGART Bulletin (37): 28–29.
doi109 :10.1145/1056777.1056779110 . ISSN111 0163-5719112 .
10. D, R; J P (1985). ”G - 
     A*”. Journal of the ACM113 . 32 (3): 505–
536. doi114 :10.1145/3828.3830115 .
11. D S, M J; G, M F.; L, P (2007),
Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software
Tools116 , T P L, . 344, ISBN117 9781905886609118 .
12. H, M L (2010), Python Algorithms: Mastering Basic Algorithms
in the Python Language119 , A, . 214, ISBN120 9781430232377121 .
13. M, A (1977). ”O  C  A S
A”. Artificial Intelligence122 . 8 (1): 1–13. doi123 :10.1016/0004-
3702(77)90002-9124 .
14. P, I (1970). ”F        
”. Machine Intelligence. 5: 219–236.

104 https://ai.stanford.edu/~nilsson/QAI/qai.pdf
105 https://en.wikipedia.org/wiki/ISBN_(identifier)
106 https://en.wikipedia.org/wiki/Special:BookSources/9780521122931
107 http://www.aaai.org/Papers/AAAI/2005/AAAI05-216.pdf
108 https://www.ics.uci.edu/~dechter/publications/r0.pdf
109 https://en.wikipedia.org/wiki/Doi_(identifier)
110 https://doi.org/10.1145%2F1056777.1056779
111 https://en.wikipedia.org/wiki/ISSN_(identifier)
112 http://www.worldcat.org/issn/0163-5719
113 https://en.wikipedia.org/wiki/Journal_of_the_ACM
114 https://en.wikipedia.org/wiki/Doi_(identifier)
115 https://doi.org/10.1145%2F3828.3830
116 https://books.google.com/books?id=SULMdT8qPwEC&pg=PA344
117 https://en.wikipedia.org/wiki/ISBN_(identifier)
118 https://en.wikipedia.org/wiki/Special:BookSources/9781905886609
119 https://books.google.com/books?id=9_AXCmGDiz8C&pg=PA214
120 https://en.wikipedia.org/wiki/ISBN_(identifier)
121 https://en.wikipedia.org/wiki/Special:BookSources/9781430232377
122 https://en.wikipedia.org/wiki/Artificial_Intelligence
123 https://en.wikipedia.org/wiki/Doi_(identifier)
124 https://doi.org/10.1016%2F0004-3702%2877%2990002-9


15. P, J (1984). Heuristics: Intelligent Search Strategies for Computer Prob-
lem Solving125 . A-W. ISBN126 978-0-201-05594-8127 .
16. P, I (A 1973). ”T   () ,
 ,      -
    ”128 (PDF). Proceedings of the Third Interna-
tional Joint Conference on Artificial Intelligence (IJCAI-73). 3. California, USA.
pp. 11–17.
17. K, A; H K (A 1992). ”A    -
 ”. Proceedings of the Tenth European Conference on Artificial
Intelligence (ECAI-92). Vienna, Austria. pp. 16–17.
18. P, J; J H. K (1982). ”S  - ”.
IEEE Transactions on Pattern Analysis and Machine Intelligence. 4 (4): 392–399.
doi129 :10.1109/TPAMI.1982.4767270130 .
19. G, M; D A (A 1983). ”Aε – an efficient near admissi-
ble heuristic search algorithm”131 (PDF). Proceedings of the Eighth International Joint
Conference on Artificial Intelligence (IJCAI-83). 2. Karlsruhe, Germany. pp. 789–
791. Archived from the original132 (PDF) on 2014-08-06.
20. R, B (1999). ”AA*: A ε-admissible heuristic search algorithm”133 .
A   134  2016-01-31. R 2014-11-05. Cite
journal requires |journal= (help135 )
21. R, S136 ; N, P137 (2003) [1995]. Artificial Intelligence:
A Modern Approach138 (2 .). P H. . 97–104. ISBN139 978-
0137903955140 .
22. R, S141 ; N, P142 (2009) [1995]. Artificial Intelligence: A
Modern Approach143 (3 .). P H. . 103. ISBN144 978-0-13-604259-
4145 .

125 https://archive.org/details/heuristicsintell00pear
126 https://en.wikipedia.org/wiki/ISBN_(identifier)
127 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-05594-8
https://www.cs.auckland.ac.nz/courses/compsci709s2c/resources/Mike.d/
128
Pohl1973WeightedAStar.pdf
129 https://en.wikipedia.org/wiki/Doi_(identifier)
130 https://doi.org/10.1109%2FTPAMI.1982.4767270
https://web.archive.org/web/20140806200328/http://ijcai.org/Past%20Proceedings/IJCAI-
131
83-VOL-2/PDF/048.pdf
132 http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-2/PDF/048.pdf
https://web.archive.org/web/20160131214618/http://home1.stofanet.dk/breese/
133
astaralpha-submitted.pdf.gz
134 http://home1.stofanet.dk/breese/astaralpha-submitted.pdf.gz
136 https://en.wikipedia.org/wiki/Stuart_J._Russell
137 https://en.wikipedia.org/wiki/Peter_Norvig
138 https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
139 https://en.wikipedia.org/wiki/ISBN_(identifier)
140 https://en.wikipedia.org/wiki/Special:BookSources/978-0137903955
141 https://en.wikipedia.org/wiki/Stuart_J._Russell
142 https://en.wikipedia.org/wiki/Peter_Norvig
143 https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
144 https://en.wikipedia.org/wiki/ISBN_(identifier)
145 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-604259-4


23. R, S J. (2018). Artificial intelligence a modern approach. Norvig, Pe-
ter (4th ed.). Boston: Pearson. ISBN146 978-0134610993147 . OCLC148 1021874142149 .
24. K, D; M, C D. (2003). A* parsing: fast exact Viterbi
parse selection. Proc. NAACL-HLT.
25. K E.  B-G I. (2014). ”A G-T A  O-
 I L”150 (PDF). IIE T, 46:2, 164-184. Cite
journal requires |journal= (help151 )
26. F, D; L, M; S, A (2005). A Guide to
Heuristic-based Path Planning152 (PDF). P. ICAPS W  P
 U  A S.
27. N, D S.; K, V; K, L (1984). ”G  
,     A∗  AO∗”153 (PDF). Artificial Intelligence. 23 (1):
29–58. doi154 :10.1016/0004-3702(84)90004-3155 .
28. Hansen, Eric A., and Rong Zhou. ”Anytime Heuristic Search.156 ” J. Artif. Intell.
Res.(JAIR) 28 (2007): 267-297.
29. Likhachev, Maxim; Gordon, Geoff; Thrun, Sebastian. ”ARA*: Anytime A* search
with provable bounds on sub-optimality157 ”. In S. Thrun, L. Saul, and B. Schölkopf,
editors, Proceedings of Conference on Neural Information Processing Systems (NIPS),
Cambridge, MA, 2003. MIT Press.
30. Li, Jerry and Zimmerle, Daniel (2019), ”Designing Optimal Network for Rural Electri-
fication using Multiplier-accelerated A* Algorithm”158 , 2019 IEEE PES Asia-Pacific
Power and Energy Engineering Conference (APPEEC), Macao, Macao, 2019, pp. 1-5.
Accepted version of this paper is available at Researchgate159 or the author's personal
page160
31. Pijls, Wim; Post, Henk ”Yet another bidirectional algorithm for shortest paths161 ” In
Econometric Institute Report EI 2009-10/Econometric Institute, Erasmus University
Rotterdam. Erasmus School of Economics.
32. Korf, Richard E. ”Real-time heuristic search.162 ” Artificial intelligence 42.2-3 (1990):
189-211.

146 https://en.wikipedia.org/wiki/ISBN_(identifier)
147 https://en.wikipedia.org/wiki/Special:BookSources/978-0134610993
148 https://en.wikipedia.org/wiki/OCLC_(identifier)
149 http://www.worldcat.org/oclc/1021874142
150 http://www.eng.tau.ac.il/~bengal/GTA.pdf
https://www.cs.cmu.edu/afs/cs.cmu.edu/Web/People/maxim/files/hsplanguide_icaps05ws.
152
pdf
153 https://www.cs.umd.edu/~nau/papers/nau1984general.pdf
154 https://en.wikipedia.org/wiki/Doi_(identifier)
155 https://doi.org/10.1016%2F0004-3702%2884%2990004-3
156 http://www.jair.org/media/2096/live-2096-3136-jair.pdf?q=anytime:
157 http://robots.stanford.edu/papers/Likhachev03b.pdf
158 https://ieeexplore.ieee.org/document/8994527
https://www.researchgate.net/publication/339266504_Designing_Optimal_Network_for_
159
Rural_Electrification_using_Multiplier-accelerated_A_Algorithm
160 http://www.drjerryli.com/articles/APPEEC19_accepted.pdf
161 https://repub.eur.nl/pub/16100/ei2009-10.pdf
162 https://pdfs.semanticscholar.org/2fda/10f6079156c4621fefc8b7cad72c1829ee94.pdf


33. B, Y; B, V; S, N (J 11–17, 2009).
TBA*: time-bounded A*163 (PDF). IJCAI 2009, P   21 I-
 J C  A I. P, C-
, USA: M K P I. . 431–436.
34. ”E P--P S P A”164 (PDF). Cite jour-
nal requires |journal= (help165 ) from Princeton University166

44.12 Further reading


• N, N. J. (1980). Principles of Artificial Intelligence167 . P A, C-
: T P C. ISBN168 978-0-935382-01-3169 .

44.13 External links


• Clear visual A* explanation, with advice and thoughts on path-finding170
• Variation on A* called Hierarchical Path-Finding A* (HPA*)171

163 http://web.cs.du.edu/~sturtevant/papers/TBA.pdf
http://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/EPP%20shortest%
164
20path%20algorithms.pdf
166 https://en.wikipedia.org/wiki/Princeton_University
167 https://archive.org/details/principlesofarti00nils
168 https://en.wikipedia.org/wiki/ISBN_(identifier)
169 https://en.wikipedia.org/wiki/Special:BookSources/978-0-935382-01-3
170 http://theory.stanford.edu/~amitp/GameProgramming/
https://web.archive.org/web/20090917155722/http://www.cs.ualberta.ca/~mmueller/ps/
171
hpastar.pdf

45 Szemerédi regularity lemma

Figure 116 The edges between parts behave in a ”random-like” fashion.

Szemerédi's1 regularity lemma is one of the most powerful tools in extremal graph the-
ory2 , particularly in the study of large dense graphs. It states that the vertices of every

1 https://en.wikipedia.org/wiki/Szemer%C3%A9di
2 https://en.wikipedia.org/wiki/Extremal_graph_theory


large enough graph3 can be partitioned into a bounded number of parts so that the edges
between different parts behave almost randomly.
According to the lemma, no matter how large a graph is, we can approximate it with the
edge densities between a bounded number of parts. Between any two parts, the distribution
of edges will be pseudorandom as per the edge density. These approximations provide es-
sentially correct values for various properties of the graph, such as the number of embedded
copies of a given subgraph or the number of edge deletions required to remove all copies of
some subgraph.

45.1 Statement

To state Szemerédi's regularity lemma formally, we must formalize what the edge distribu-
tion between parts behaving 'almost randomly' really means. By 'almost random', we're
referring to a notion called ε-regularity. To understand what this means, we first state some
definitions. In what follows G is a graph with vertex4 set V.
Definition 1. Let X, Y be disjoint subsets of V. The edge density of the pair (X, Y)
is defined as:

d(X, Y ) := |E(X, Y )| / (|X||Y |)

where E(X, Y) denotes the set of edges having one end vertex in X and one in Y.

Figure 117 Subset pairs of a regular pair are similar in edge density to the original pair.

We call a pair of parts ε-regular if, whenever you take a large subset of each part, their edge
density isn't too far off the edge density of the pair of parts. Formally,
Definition 2. For ε > 0, a pair of vertex sets X and Y is called ε-regular, if for all
subsets A ⊆ X, B ⊆ Y satisfying |A| ≥ ε|X|, |B| ≥ ε|Y|, we have

3 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
4 https://en.wikipedia.org/wiki/Vertex_(graph_theory)


|d(X, Y ) − d(A, B)| ≤ ε.
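Definitions 1 and 2 can be checked mechanically on tiny examples. The following Python sketch is illustrative only (function names are my own, and the brute-force subset enumeration is exponential, so it is usable only for very small vertex sets):

```python
from itertools import combinations

def edge_density(edges, X, Y):
    """d(X, Y) = |E(X, Y)| / (|X| |Y|) for disjoint vertex sets X, Y."""
    crossing = sum(1 for u in X for v in Y
                   if (u, v) in edges or (v, u) in edges)
    return crossing / (len(X) * len(Y))

def is_eps_regular(edges, X, Y, eps):
    """Brute-force check of ε-regularity of the pair (X, Y)."""
    d_xy = edge_density(edges, X, Y)
    subsets = lambda S: (set(c) for r in range(1, len(S) + 1)
                         for c in combinations(S, r))
    for A in subsets(X):
        if len(A) < eps * len(X):      # only subsets with |A| >= ε|X| matter
            continue
        for B in subsets(Y):
            if len(B) < eps * len(Y):
                continue
            if abs(d_xy - edge_density(edges, A, B)) > eps:
                return False
    return True

# A complete bipartite pair is ε-regular for every ε: all densities equal 1.
X, Y = {0, 1, 2}, {3, 4, 5}
edges = {(u, v) for u in X for v in Y}
```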


The natural way to define an ε-regular partition should be one where each pair of parts is
ε-regular. However, some graphs, such as the half graphs5 , require many pairs of parts
(but a small fraction of all pairs) to be irregular.[1] So we shall define an ε-regular partition
to be one where most pairs of parts are ε-regular.
Definition 3. A partition of V into k sets P = {V1 , . . . , Vk } is called an ε-regular
partition if

∑_{(Vi ,Vj ) not ε-regular} |Vi ||Vj | ≤ ε|V (G)|^2

Now we can state the lemma:


Szemerédi's Regularity Lemma. For every ε > 0 and positive6 integer7 m there ex-
ists an integer M such that if G is a graph with at least M vertices, there exists an
integer k in the range m ≤ k ≤ M and an ε-regular partition of the vertex set of G into
k sets.
The bound M for the number of parts in the partition of the graph given by the proofs of
Szemerédi's regularity lemma is very large, given by an O(ε−5 )-level iterated exponential of
m. At one time it was hoped that the true bound was much smaller, which would have had
several useful applications. However Gowers (1997)8 found examples of graphs for which
M does indeed grow very fast and is at least as large as an ε−1/16 -level iterated exponential
of m. In particular the best bound has level exactly 4 in the Grzegorczyk hierarchy9 , and
so is not an elementary recursive function10 .[2]

5 https://en.wikipedia.org/wiki/Half_graph
6 https://en.wikipedia.org/wiki/Negative_and_positive_numbers
7 https://en.wikipedia.org/wiki/Integer
8 #CITEREFGowers1997
9 https://en.wikipedia.org/wiki/Grzegorczyk_hierarchy
10 https://en.wikipedia.org/wiki/Elementary_recursive_function

631
Szemerédi regularity lemma

45.2 Proof

Figure 118 The boundaries of irregularity witnessing subsets refine each part of the
partition.

We shall find an ε-regular partition for a given graph following an algorithm. A rough
outline:
1. Start with an arbitrary partition (could just be 1 part)
2. While the partition isn't ε-regular:
• Find the subsets which witness ε-irregularity for each irregular pair.
• Simultaneously refine the partition using all the witnessing subsets.
We apply a technique called the energy increment argument to show that this process
terminates after a bounded number of steps. The idea is to define a monovariant that must
increase by a fixed amount in each step but is bounded above, and thus cannot increase
indefinitely. This monovariant is called 'energy', as it is an L^2 quantity.
Definition 4. Let U, W be subsets of V. Set |V| = n. The energy of the pair (U, W)
is defined as:

q(U, W) := (|U||W| / n^2) · d(U, W)^2

For partitions PU = {U1, . . . , Uk} of U and PW = {W1, . . . , Wl} of W, we define the energy
to be the sum of the energies between each pair of parts:

q(PU, PW) := ∑_{i=1}^{k} ∑_{j=1}^{l} q(Ui, Wj)

Finally, for a partition P = {V1, . . . , Vk} of V, define the energy of P to be q(P, P).


Specifically,

q(P) = ∑_{i=1}^{k} ∑_{j=1}^{k} q(Vi, Vj) = ∑_{i=1}^{k} ∑_{j=1}^{k} (|Vi||Vj| / n^2) · d(Vi, Vj)^2

Observe that energy is between 0 and 1 because edge density is bounded above by 1:

q(P) = ∑_{i=1}^{k} ∑_{j=1}^{k} (|Vi||Vj| / n^2) · d(Vi, Vj)^2 ≤ ∑_{i=1}^{k} ∑_{j=1}^{k} |Vi||Vj| / n^2 = 1

Now, we start by proving that energy does not decrease upon refinement.
Lemma 1. (Energy is nondecreasing under partitioning) For any partitions PU and PW
of vertex sets U and W , q(PU , PW ) ≥ q(U, W ).
Proof: Let PU = {U1, . . . , Uk} and PW = {W1, . . . , Wl}. Choose vertices x uniformly
from U and y uniformly from W, with x in part Ui and y in part Wj. Then define the
random variable Z = d(Ui, Wj). Let us look at properties of Z. The expectation is

E[Z] = ∑_{i=1}^{k} ∑_{j=1}^{l} (|Ui| / |U|)(|Wj| / |W|) d(Ui, Wj) = e(U, W) / (|U||W|) = d(U, W)

The second moment is

E[Z^2] = ∑_{i=1}^{k} ∑_{j=1}^{l} (|Ui| / |U|)(|Wj| / |W|) d(Ui, Wj)^2 = (n^2 / (|U||W|)) · q(PU, PW)

By convexity, E[Z^2] ≥ E[Z]^2. Rearranging, we get that q(PU, PW) ≥ q(U, W) for all
U, W. □
If each part of P is further partitioned, the new partition is called a refinement of P. Now,
if P = {V1 , . . . , Vm }, applying Lemma 1 to each pair (Vi , Vj ) proves that for every refinement
P ′ of P, q(P ′ ) ≥ q(P). Thus the refinement step in the algorithm doesn't lose any energy.
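The energy formula and Lemma 1 can both be checked numerically. In the sketch below (helper names are our own; as a convention, the density function counts ordered pairs so that the diagonal terms q(Vi, Vi) also make sense), refining the trivial partition of a 4-cycle into the two sides of its bipartition raises the energy, and the energy stays at most 1:

```python
def density(adj, X, Y):
    """Fraction of ordered pairs (x, y), x in X, y in Y, that are edges.
    Counting ordered pairs lets the same formula handle X = Y."""
    return sum(1 for x in X for y in Y if y in adj[x]) / (len(X) * len(Y))

def energy(adj, parts, n):
    """q(P) = sum over all pairs (Vi, Vj) of (|Vi||Vj| / n^2) * d(Vi, Vj)^2."""
    return sum(len(Vi) * len(Vj) / n**2 * density(adj, Vi, Vj)**2
               for Vi in parts for Vj in parts)

# 4-cycle 0-1-2-3-0: the bipartition {0,2} | {1,3} separates all edges.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coarse = [{0, 1, 2, 3}]
fine = [{0, 2}, {1, 3}]
print(energy(adj, coarse, 4))  # 0.25
print(energy(adj, fine, 4))    # 0.5
```

Here the refinement doubles the energy: the coarse partition sees only the overall density 1/2, while the refinement exposes the pair of parts with density 1.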
Lemma 2. (Energy boost lemma) If (U, W) is not ε-regular, as witnessed by
U1 ⊂ U, W1 ⊂ W, then

q({U1, U\U1}, {W1, W\W1}) > q(U, W) + ε^4 · (|U||W| / n^2)


Proof: Define Z as above. Then,

Var(Z) = E[Z^2] − E[Z]^2 = (n^2 / (|U||W|)) · (q({U1, U\U1}, {W1, W\W1}) − q(U, W))

But observe that |Z − E[Z]| = |d(U1, W1) − d(U, W)| with probability
(|U1| / |U|)(|W1| / |W|) (corresponding to x ∈ U1 and y ∈ W1), so

Var(Z) = E[(Z − E[Z])^2] ≥ (|U1| / |U|)(|W1| / |W|) · (d(U1, W1) − d(U, W))^2 > ε · ε · ε^2 □
Now we can prove the energy increment argument, which shows that energy increases substantially in each iteration of the algorithm.
Lemma 3. (Energy increment lemma) If a partition P = {V1, . . . , Vk} of V(G) is not ε-regular, then there exists a refinement Q of P in which every Vi is partitioned into at most 2^k parts, such that

q(Q) ≥ q(P) + ε^5.

Proof: For all (i, j) such that (Vi, Vj) is not ε-regular, find Ai,j ⊂ Vi and Aj,i ⊂ Vj that
witness irregularity (do this simultaneously for all irregular pairs). Let Q be a common
refinement of P by the Ai,j's. Each Vi is partitioned into at most 2^k parts, as desired. Then,

q(Q) = ∑_{(i,j)∈[k]^2} q(QVi, QVj) = ∑_{(Vi,Vj) ε-regular} q(QVi, QVj) + ∑_{(Vi,Vj) not ε-regular} q(QVi, QVj)

where QVi is the partition of Vi given by Q. By Lemma 1, this quantity is at least

∑_{(Vi,Vj) ε-regular} q(Vi, Vj) + ∑_{(Vi,Vj) not ε-regular} q({Ai,j, Vi\Ai,j}, {Aj,i, Vj\Aj,i})

Since Vi is cut by Ai,j when creating Q, QVi is a refinement of {Ai,j, Vi\Ai,j}. By
Lemma 2, the above sum is at least

∑_{(i,j)∈[k]^2} q(Vi, Vj) + ∑_{(Vi,Vj) not ε-regular} ε^4 · |Vi||Vj| / n^2

But the second sum is at least ε^5 since P is not ε-regular, so we deduce the desired
inequality. □
Now, starting from any partition, we can keep applying Lemma 3 as long as the resulting
partition isn't ε-regular. In each step the energy increases by at least ε^5, and it is bounded
above by 1. So this process can be repeated at most ε^(−5) times before it terminates, at
which point we must have an ε-regular partition.
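The termination bound is plain arithmetic: energy starts at 0 or more, gains at least ε^5 per round by Lemma 3, and never exceeds 1. A tiny sketch (illustrative only; the function name is our own) makes the count explicit:

```python
def max_refinement_rounds(eps):
    """Upper bound on refinement rounds: energy starts at >= 0, gains at
    least eps**5 per round (Lemma 3), and can never exceed 1."""
    rounds, energy = 0, 0.0
    while energy + eps**5 <= 1:
        energy += eps**5
        rounds += 1
    return rounds

print(max_refinement_rounds(0.5))  # 32, i.e. 0.5**(-5)
```

Note that this bounds only the number of rounds; the number of parts can grow as an iterated exponential, since each round can multiply the part count by 2^k.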

45.3 Applications

If we have enough information about the regularity of a graph, we can count the number of
copies of a specific subgraph within the graph up to small error.
Graph Counting Lemma. Let H be a graph with V(H) = [k], and let ε > 0. Let G
be an n-vertex graph with vertex sets V1, . . . , Vk ⊆ V(G) such that (Vi, Vj) is ε-regular
whenever {i, j} ∈ E(H). Then the number of labeled copies of H in G is within
e(H) ε |V1| · · · |Vk| of

(∏_{{i,j}∈E(H)} d(Vi, Vj)) · ∏_{i=1}^{k} |Vi|
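For intuition, here is a brute-force check of the counting lemma in the simplest nontrivial case H = K3 (a triangle); the helper names are our own, and the example uses a complete tripartite graph, where every pair of parts has density 1 (hence is ε-regular for any ε) and the count matches the density prediction exactly:

```python
from itertools import product

def density(adj, X, Y):
    """Fraction of ordered pairs (x, y), x in X, y in Y, that are edges."""
    return sum(1 for x in X for y in Y if y in adj[x]) / (len(X) * len(Y))

def labeled_triangles(adj, V1, V2, V3):
    """Labeled copies of H = K3 with vertex i landing in part Vi."""
    return sum(1 for a, b, c in product(V1, V2, V3)
               if b in adj[a] and c in adj[b] and a in adj[c])

# Complete tripartite graph on parts of size 2.
V1, V2, V3 = {0, 1}, {2, 3}, {4, 5}
adj = {v: set() for v in range(6)}
for A, B in [(V1, V2), (V2, V3), (V1, V3)]:
    for a in A:
        for b in B:
            adj[a].add(b)
            adj[b].add(a)

prediction = (density(adj, V1, V2) * density(adj, V2, V3)
              * density(adj, V1, V3) * len(V1) * len(V2) * len(V3))
print(labeled_triangles(adj, V1, V2, V3), prediction)  # 8 8.0
```

On graphs that are merely ε-regular rather than complete between parts, the count differs from the prediction, but the lemma guarantees the gap stays below e(H) ε |V1||V2||V3|.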

This can be combined with Szemerédi's regularity lemma to prove the Graph removal
lemma11 . The graph removal lemma can be used to prove Roth's Theorem on Arithmetic
Progressions12 ,[3] and a generalization of it, the hypergraph removal lemma13 , can be used
to prove Szemerédi's theorem14 .[4]
The graph removal lemma generalizes to induced subgraphs15 , by considering edge edits
instead of only edge deletions. This was proved by Alon, Fischer, Krivelevich, and Szegedy
in 2000.[5] However, this required a stronger variation of the regularity lemma.
Szemerédi's regularity lemma does not provide meaningful results in sparse graphs16 . Since
sparse graphs have subconstant edge densities, ε-regularity is trivially satisfied.

11 https://en.wikipedia.org/wiki/Graph_removal_lemma
12 https://en.wikipedia.org/wiki/Roth%27s_Theorem_on_Arithmetic_Progressions
13 https://en.wikipedia.org/wiki/Hypergraph_removal_lemma
14 https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem
15 https://en.wikipedia.org/wiki/Induced_subgraph
16 https://en.wikipedia.org/wiki/Sparse_graph


45.4 History and Extensions

Figure 119 Gowers's construction for the lower bound of Szemerédi's regularity lemma

Szemerédi (1975)17 first introduced a weaker version of this lemma, restricted to bipartite
graphs, in order to prove Szemerédi's theorem18 ,[6] and in (Szemerédi 197819 ) he proved
the full lemma.[7] Extensions of the regularity method to hypergraphs20 were obtained by
Rödl21 and his collaborators[8][9][10] and Gowers22 .[11][12]
János Komlós23 , Gábor Sárközy24 and Endre Szemerédi25 later (in 1997) proved in the blow-
up lemma26[13][14] that the regular pairs in Szemerédi regularity lemma behave like complete
bipartite graphs under the correct conditions. The lemma allowed for deeper exploration
into the nature of embeddings of large sparse graphs into dense graphs.

17 #CITEREFSzemer%C3%A9di1975
18 https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem
19 #CITEREFSzemer%C3%A9di1978
20 https://en.wikipedia.org/wiki/Hypergraph
21 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_R%C3%B6dl
22 https://en.wikipedia.org/wiki/Timothy_Gowers
23 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
24 https://en.wikipedia.org/wiki/G%C3%A1bor_N._S%C3%A1rk%C3%B6zy
25 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
26 https://en.wikipedia.org/w/index.php?title=Blow-up_lemma&action=edit&redlink=1


The first constructive version was provided by Alon, Duke, Lefmann, Rödl27 and
Yuster.[15] Subsequently, Frieze28 and Kannan29 gave a different version and extended it
to hypergraphs.[16] Frieze and Kannan later gave another construction that uses singular
values of matrices.[17] More efficient non-deterministic algorithms exist, as formally
detailed in Terence Tao30's blog[18] and implicitly mentioned in various papers.[19][20][21]
An inequality of Terence Tao31 extends the Szemerédi regularity lemma, by revisiting it from
the perspective of probability theory and information theory instead of graph theory.[22]
Terence Tao has also provided a proof of the lemma based on spectral theory, using the
adjacency matrices of graphs.[23]
It is not possible to prove a variant of the regularity lemma in which all pairs of partition
sets are regular. Some graphs, such as the half graphs32, require many pairs of parts
(but a small fraction of all pairs) to be irregular.[24]
It is a common variant in the definition of an ε-regular partition to require that the vertex
sets all have the same size, while collecting the leftover vertices in an ”error”-set V0 whose
size is at most an ε-fraction of the size of the vertex set of G.
A stronger variation of the regularity lemma was proven by Alon, Fischer, Krivelevich, and
Szegedy while proving the induced graph removal lemma. This works with a sequence of
ε instead of just one, and shows that there exists a partition with an extremely regular
refinement, where the refinement doesn't have too large of an energy increment.
Szemerédi's regularity lemma can be interpreted as saying that the space of all graphs is
totally bounded33 (and hence precompact34 ) in a suitable metric (the cut distance35 ). Limits
in this metric can be represented by graphons36 ; another version of the regularity lemma
simply states that the space of graphons is compact37 .[25]

27 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_R%C3%B6dl
28 https://en.wikipedia.org/wiki/Alan_M._Frieze
29 https://en.wikipedia.org/wiki/Ravi_Kannan
30 https://en.wikipedia.org/wiki/Terence_Tao
31 https://en.wikipedia.org/wiki/Terence_Tao
32 https://en.wikipedia.org/wiki/Half_graph
33 https://en.wikipedia.org/wiki/Totally_bounded
34 https://en.wikipedia.org/wiki/Relatively_compact_subspace
35 https://en.wikipedia.org/wiki/Cut_distance
36 https://en.wikipedia.org/wiki/Graphon
37 https://en.wikipedia.org/wiki/Compact_metric_space


45.5 References
1. C, D38 ; F, J39 (2012), ”B   
  ”, Geometric and Functional Analysis, 22 (5): 1191–1256,
arXiv40 :1107.482941 , doi42 :10.1007/s00039-012-0171-x43 , MR44 298943245
2. G, W. T.46 (1997), ”L      S-
'  ”, Geometric and Functional Analysis, 7 (2): 322–337,
doi47 :10.1007/PL0000162148 , MR49 144538950 .
3. R, K. F.51 (1953), ”O    ”, Journal of the
London Mathematical Society, 28 (1): 104–109, doi52 :10.1112/jlms/s1-28.1.10453 ,
MR54 005185355
4. T, T56 (2006), ”A     
”, Journal of Combinatorial Theory, Series A, 113 (7): 1257–1280,
arXiv57 :math/050357258 , doi59 :10.1016/j.jcta.2005.11.00660 , MR61 225906062
5. A, N63 ; F, E; K, M64 ; S, M65
(2000), ”E    ”, Combinatorica, 20 (4): 451–476,
doi66 :10.1007/s00493007000167 , MR68 180482069

38 https://en.wikipedia.org/wiki/David_Conlon
39 https://en.wikipedia.org/wiki/Jacob_Fox
40 https://en.wikipedia.org/wiki/ArXiv_(identifier)
41 http://arxiv.org/abs/1107.4829
42 https://en.wikipedia.org/wiki/Doi_(identifier)
43 https://doi.org/10.1007%2Fs00039-012-0171-x
44 https://en.wikipedia.org/wiki/MR_(identifier)
45 http://www.ams.org/mathscinet-getitem?mr=2989432
46 https://en.wikipedia.org/wiki/Timothy_Gowers
47 https://en.wikipedia.org/wiki/Doi_(identifier)
48 https://doi.org/10.1007%2FPL00001621
49 https://en.wikipedia.org/wiki/MR_(identifier)
50 http://www.ams.org/mathscinet-getitem?mr=1445389
51 https://en.wikipedia.org/wiki/Klaus_Roth
52 https://en.wikipedia.org/wiki/Doi_(identifier)
53 https://doi.org/10.1112%2Fjlms%2Fs1-28.1.104
54 https://en.wikipedia.org/wiki/MR_(identifier)
55 http://www.ams.org/mathscinet-getitem?mr=0051853
56 https://en.wikipedia.org/wiki/Terence_Tao
57 https://en.wikipedia.org/wiki/ArXiv_(identifier)
58 http://arxiv.org/abs/math/0503572
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1016%2Fj.jcta.2005.11.006
61 https://en.wikipedia.org/wiki/MR_(identifier)
62 http://www.ams.org/mathscinet-getitem?mr=2259060
63 https://en.wikipedia.org/wiki/Noga_Alon
64 https://en.wikipedia.org/wiki/Michael_Krivelevich
65 https://en.wikipedia.org/wiki/Mario_Szegedy
66 https://en.wikipedia.org/wiki/Doi_(identifier)
67 https://doi.org/10.1007%2Fs004930070001
68 https://en.wikipedia.org/wiki/MR_(identifier)
69 http://www.ams.org/mathscinet-getitem?mr=1804820


6. S, E70 (1975), ”O       


  ”71 , Polska Akademia Nauk. Instytut Matematyczny.
Acta Arithmetica, 27: 199–245, MR72 036931273 .
7. S, E74 (1978), ”R   ”, Problèmes com-
binatoires et théorie des graphes (Colloq. Internat. CNRS, Univ. Orsay, Orsay,
1976), Colloq. Internat. CNRS, 260, Paris: CNRS, pp. 399–401, MR75 054002476 .
8. F, P77 ; R, VĚ78 (2002), ”E 
  ”, Random Structures & Algorithms, 20 (2): 131–164,
79 80 81
doi :10.1002/rsa.10017.abs , MR 1884430 . 82

9. R, VĚ83 ; S, J (2004), ”R   -


 ”, Random Structures & Algorithms, 25 (1): 1–42,
doi84 :10.1002/rsa.2001785 , MR86 206966387 .
10. N, B; R, VĚ88 ; S, M89 (2006), ”T -
    - ”, Random Structures & Al-
gorithms, 28 (2): 113–179, CiteSeerX90 10.1.1.378.850391 , doi92 :10.1002/rsa.2011793 ,
MR94 219849595 .
11. G, W. T.96 (2006), ”Q,    
3- ”97 , Combinatorics, Probability and Computing98 , 15 (1–2):
143–184, doi99 :10.1017/S0963548305007236100 , MR101 2195580102 .

70 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
71 http://matwbn.icm.edu.pl/tresc.php?wyd=6&tom=27
72 https://en.wikipedia.org/wiki/MR_(identifier)
73 http://www.ams.org/mathscinet-getitem?mr=0369312
74 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
75 https://en.wikipedia.org/wiki/MR_(identifier)
76 http://www.ams.org/mathscinet-getitem?mr=0540024
77 https://en.wikipedia.org/wiki/P%C3%A9ter_Frankl
78 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_R%C3%B6dl
79 https://en.wikipedia.org/wiki/Doi_(identifier)
80 https://doi.org/10.1002%2Frsa.10017.abs
81 https://en.wikipedia.org/wiki/MR_(identifier)
82 http://www.ams.org/mathscinet-getitem?mr=1884430
83 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_R%C3%B6dl
84 https://en.wikipedia.org/wiki/Doi_(identifier)
85 https://doi.org/10.1002%2Frsa.20017
86 https://en.wikipedia.org/wiki/MR_(identifier)
87 http://www.ams.org/mathscinet-getitem?mr=2069663
88 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_R%C3%B6dl
89 https://en.wikipedia.org/wiki/Mathias_Schacht
90 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
91 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.378.8503
92 https://en.wikipedia.org/wiki/Doi_(identifier)
93 https://doi.org/10.1002%2Frsa.20117
94 https://en.wikipedia.org/wiki/MR_(identifier)
95 http://www.ams.org/mathscinet-getitem?mr=2198495
96 https://en.wikipedia.org/wiki/Timothy_Gowers
97 https://semanticscholar.org/paper/02969183d7b6287ad69520298210b77bb61d3b48
98 https://en.wikipedia.org/wiki/Combinatorics,_Probability_and_Computing
99 https://en.wikipedia.org/wiki/Doi_(identifier)
100 https://doi.org/10.1017%2FS0963548305007236
101 https://en.wikipedia.org/wiki/MR_(identifier)
102 http://www.ams.org/mathscinet-getitem?mr=2195580


12. G, W. T.103 (2007), ”H    -


 S ”, Annals of Mathematics104 , S S-
, 166 (3): 897–946, arXiv105 :0710.3032106 , Bibcode107 :2007arXiv0710.3032G108 ,
doi109 :10.4007/annals.2007.166.897110 , MR111 2373376112 .
13. K, J113 ; S, G N.114 ; S, E-
 115 (1997), ”B- ”, Combinatorica116 , 17 (1): 109–123,
doi117 :10.1007/BF01196135118 , MR119 1466579120
14. K, J121 ; S, G N.122 ; S, E123 (1998),
”A     - ”, Random Structures & Al-
gorithms, 12 (3): 297–312, arXiv124 :math/9612213125 , doi126 :10.1002/(SICI)1098-
2418(199805)12:3<297::AID-RSA5>3.3.CO;2-W127 , MR128 1635264129
15. N. Alon, R. A. Duke, H. Lefmann, V. Rödl and R. Yuster (1994).
”The Algorithmic Aspects of the Regularity Lemma”. J. Algorithms. 16: 80–
109. CiteSeerX130 10.1.1.102.681131. doi132:10.1006/jagm.1994.1005133.
16. A. Frieze and R. Kannan (1996). ”The regularity lemma and
approximation schemes for dense problems”. FOCS '96: Proceed-
ings of the 37th Annual Symposium on Foundations of Computer Science.
doi134:10.1109/SFCS.1996.548459135.

103 https://en.wikipedia.org/wiki/Timothy_Gowers
104 https://en.wikipedia.org/wiki/Annals_of_Mathematics
105 https://en.wikipedia.org/wiki/ArXiv_(identifier)
106 http://arxiv.org/abs/0710.3032
107 https://en.wikipedia.org/wiki/Bibcode_(identifier)
108 https://ui.adsabs.harvard.edu/abs/2007arXiv0710.3032G
109 https://en.wikipedia.org/wiki/Doi_(identifier)
110 https://doi.org/10.4007%2Fannals.2007.166.897
111 https://en.wikipedia.org/wiki/MR_(identifier)
112 http://www.ams.org/mathscinet-getitem?mr=2373376
113 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
114 https://en.wikipedia.org/wiki/G%C3%A1bor_N._S%C3%A1rk%C3%B6zy
115 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
116 https://en.wikipedia.org/wiki/Combinatorica
117 https://en.wikipedia.org/wiki/Doi_(identifier)
118 https://doi.org/10.1007%2FBF01196135
119 https://en.wikipedia.org/wiki/MR_(identifier)
120 http://www.ams.org/mathscinet-getitem?mr=1466579
121 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
122 https://en.wikipedia.org/wiki/G%C3%A1bor_N._S%C3%A1rk%C3%B6zy
123 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
124 https://en.wikipedia.org/wiki/ArXiv_(identifier)
125 http://arxiv.org/abs/math/9612213
126 https://en.wikipedia.org/wiki/Doi_(identifier)
https://doi.org/10.1002%2F%28SICI%291098-2418%28199805%2912%3A3%3C297%3A%3AAID-
127
RSA5%3E3.3.CO%3B2-W
128 https://en.wikipedia.org/wiki/MR_(identifier)
129 http://www.ams.org/mathscinet-getitem?mr=1635264
130 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
131 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.681
132 https://en.wikipedia.org/wiki/Doi_(identifier)
133 https://doi.org/10.1006%2Fjagm.1994.1005
134 https://en.wikipedia.org/wiki/Doi_(identifier)
135 https://doi.org/10.1109%2FSFCS.1996.548459


17. A. Frieze and R. Kannan (1999). ”A Simple Algorithm for Constructing
Szemerédi's Regularity Partition”136 (PDF). Electron. J. Comb. 6.
18. T, T137 (2009), Szemeredi's regularity lemma via random partitions138
19. A, N139 ; S, A140 (2008), ”E   -
  ”, SIAM J. Comput., 38 (2): 505–522, doi141 :10.1137/050633445142 ,
ISSN143 0097-5397144 , MR145 2411033146
20. I, Y147 (2006), A Simple Regularization of Hypergraphs,
arXiv148 :math/0612838149 , Bibcode150 :2006math.....12838I151
21. A, T (2008), ”O      -
     ”, Probability Surveys, 5: 80–
145, arXiv152 :0801.1698153 , Bibcode154 :2008arXiv0801.1698A155 , doi156 :10.1214/08-
PS124157
22. T, T158 (2006), ”S'   ”, Con-
tributions to Discrete Mathematics, 1 (1): 8–28, arXiv159 :math/0504472160 , Bib-
code161 :2005math......4472T162 , MR163 2212136164 .
23. T, T165 (2012), The spectral proof of the Szemeredi regularity lemma166

136 http://www.math.cmu.edu/~af1p/Texfiles/svreg.pdf
137 https://en.wikipedia.org/wiki/Terence_Tao
https://terrytao.wordpress.com/2009/04/26/szemeredis-regularity-lemma-via-random-
138
partitions
139 https://en.wikipedia.org/wiki/Noga_Alon
140 https://en.wikipedia.org/w/index.php?title=Shapira_Asaf&action=edit&redlink=1
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1137%2F050633445
143 https://en.wikipedia.org/wiki/ISSN_(identifier)
144 http://www.worldcat.org/issn/0097-5397
145 https://en.wikipedia.org/wiki/MR_(identifier)
146 http://www.ams.org/mathscinet-getitem?mr=2411033
147 https://en.wikipedia.org/w/index.php?title=Yoshiyasu_Ishigami&action=edit&redlink=1
148 https://en.wikipedia.org/wiki/ArXiv_(identifier)
149 http://arxiv.org/abs/math/0612838
150 https://en.wikipedia.org/wiki/Bibcode_(identifier)
151 https://ui.adsabs.harvard.edu/abs/2006math.....12838I
152 https://en.wikipedia.org/wiki/ArXiv_(identifier)
153 http://arxiv.org/abs/0801.1698
154 https://en.wikipedia.org/wiki/Bibcode_(identifier)
155 https://ui.adsabs.harvard.edu/abs/2008arXiv0801.1698A
156 https://en.wikipedia.org/wiki/Doi_(identifier)
157 https://doi.org/10.1214%2F08-PS124
158 https://en.wikipedia.org/wiki/Terence_Tao
159 https://en.wikipedia.org/wiki/ArXiv_(identifier)
160 http://arxiv.org/abs/math/0504472
161 https://en.wikipedia.org/wiki/Bibcode_(identifier)
162 https://ui.adsabs.harvard.edu/abs/2005math......4472T
163 https://en.wikipedia.org/wiki/MR_(identifier)
164 http://www.ams.org/mathscinet-getitem?mr=2212136
165 https://en.wikipedia.org/wiki/Terence_Tao
https://terrytao.wordpress.com/2012/12/03/the-spectral-proof-of-the-szemeredi-
166
regularity-lemma/


24. C, D167 ; F, J168 (2012), ”B   
  ”, Geometric and Functional Analysis, 22 (5): 1191–1256,
arXiv169 :1107.4829170 , doi171 :10.1007/s00039-012-0171-x172 , MR173 2989432174
25. L, L175 ; S, B (2007), ”S'    -
”, Geometric and Functional Analysis, 17: 252–270, doi176 :10.1007/s00039-007-
0599-6177 , ISSN178 1016-443X179 , MR180 2306658181

45.6 Further reading


• K, J.182 ; S, M.183 (1996), ”S'   
    ”, Combinatorics, Paul Erdős is eighty, Vol. 2
(Keszthely, 1993), Bolyai Soc. Math. Stud., 2, János Bolyai Math. Soc., Budapest,
pp. 295–352, MR184 1395865185 .
• K, J.186 ; S, A; S, M187 ; S, E188
(2002), ”T        ”, Theoret-
ical aspects of computer science (Tehran, 2000), Lecture Notes in Computer Science189 ,
2292, Springer, Berlin, pp. 84–112, doi190 :10.1007/3-540-45878-6_3191 , ISBN192 978-3-
540-43328-6193 , MR194 1966181195 .

167 https://en.wikipedia.org/wiki/David_Conlon
168 https://en.wikipedia.org/wiki/Jacob_Fox
169 https://en.wikipedia.org/wiki/ArXiv_(identifier)
170 http://arxiv.org/abs/1107.4829
171 https://en.wikipedia.org/wiki/Doi_(identifier)
172 https://doi.org/10.1007%2Fs00039-012-0171-x
173 https://en.wikipedia.org/wiki/MR_(identifier)
174 http://www.ams.org/mathscinet-getitem?mr=2989432
175 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Lov%C3%A1sz
176 https://en.wikipedia.org/wiki/Doi_(identifier)
177 https://doi.org/10.1007%2Fs00039-007-0599-6
178 https://en.wikipedia.org/wiki/ISSN_(identifier)
179 http://www.worldcat.org/issn/1016-443X
180 https://en.wikipedia.org/wiki/MR_(identifier)
181 http://www.ams.org/mathscinet-getitem?mr=2306658
182 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
183 https://en.wikipedia.org/wiki/Mikl%C3%B3s_Simonovits
184 https://en.wikipedia.org/wiki/MR_(identifier)
185 http://www.ams.org/mathscinet-getitem?mr=1395865
186 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
187 https://en.wikipedia.org/wiki/Mikl%C3%B3s_Simonovits
188 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
189 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
190 https://en.wikipedia.org/wiki/Doi_(identifier)
191 https://doi.org/10.1007%2F3-540-45878-6_3
192 https://en.wikipedia.org/wiki/ISBN_(identifier)
193 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-43328-6
194 https://en.wikipedia.org/wiki/MR_(identifier)
195 http://www.ams.org/mathscinet-getitem?mr=1966181

46 Alpha–beta pruning


Alpha–beta pruning
Class: Search algorithm
Worst-case performance: O(b^d)
Best-case performance: O(b^(d/2)) = O(√(b^d))

1 https://en.wikipedia.org/wiki/Alphabeta_(disambiguation)




Alpha–beta pruning is a search algorithm2 that seeks to decrease the number of nodes
that are evaluated by the minimax algorithm3 in its search tree4 . It is an adversarial
search algorithm used commonly for machine playing of two-player games (Tic-tac-toe5 ,
Chess6 , Go7 , etc.). It stops evaluating a move when at least one possibility has been found
that proves the move to be worse than a previously examined move. Such moves need
not be evaluated further. When applied to a standard minimax tree, it returns the same
move as minimax would, but prunes away branches that cannot possibly influence the final
decision.[1]

46.1 History

Allen Newell8 and Herbert A. Simon9, who used what John McCarthy10 calls an
”approximation”[2] in 1958, wrote that alpha–beta ”appears to have been reinvented a num-
ber of times”.[3] Arthur Samuel11 had an early version for a checkers simulation. Richards,
Timothy Hart, Michael Levin12 and/or Daniel Edwards also invented alpha–beta indepen-
dently in the United States13.[4] McCarthy proposed similar ideas during the Dartmouth
workshop14 in 1956 and suggested it to a group of his students including Alan Kotok15
at MIT in 1961.[5] Alexander Brudno16 independently conceived the alpha–beta algorithm,
publishing his results in 1963.[6] Donald Knuth17 and Ronald W. Moore refined the algo-
rithm in 1975.[7][8] Judea Pearl18 proved its optimality for trees with randomly assigned
leaf values in terms of the expected running time in two papers.[9][10] The optimality of the
randomized version of alpha-beta was shown by Michael Saks and Avi Wigderson in 1986.[11]

46.2 Core idea

The algorithm maintains two values, alpha and beta, which represent the minimum score
that the maximizing player is assured of and the maximum score that the minimizing player
is assured of respectively. Initially, alpha is negative infinity and beta is positive infinity,
i.e. both players start with their worst possible score. Whenever the maximum score that
the minimizing player (i.e. the ”beta” player) is assured of becomes less than the minimum

2 https://en.wikipedia.org/wiki/Search_algorithm
3 https://en.wikipedia.org/wiki/Minimax#Minimax_algorithm_with_alternate_moves
4 https://en.wikipedia.org/wiki/Game_tree
5 https://en.wikipedia.org/wiki/Tic-tac-toe
6 https://en.wikipedia.org/wiki/Chess
7 https://en.wikipedia.org/wiki/Go_(board_game)
8 https://en.wikipedia.org/wiki/Allen_Newell
9 https://en.wikipedia.org/wiki/Herbert_A._Simon
10 https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)
11 https://en.wikipedia.org/wiki/Arthur_Samuel
12 https://en.wikipedia.org/wiki/Michael_Levin
13 https://en.wikipedia.org/wiki/United_States
14 https://en.wikipedia.org/wiki/Dartmouth_workshop
15 https://en.wikipedia.org/wiki/Alan_Kotok
16 https://en.wikipedia.org/wiki/Alexander_Brudno
17 https://en.wikipedia.org/wiki/Donald_Knuth
18 https://en.wikipedia.org/wiki/Judea_Pearl


score that the maximizing player (i.e., the ”alpha” player) is assured of (i.e. beta < alpha),
the maximizing player need not consider further descendants of this node, as they will never
be reached in the actual play.
To illustrate this with a real-life example, suppose you're playing chess and it is your turn.
You have found a good move that will improve your position. Denote this move A. You
continue to look for moves, making sure you haven't missed an even better one. You find a
move that appears to be good. Denote this move B. You then realize that move B allows
your opponent to force checkmate in two moves. Thus, you no longer need to consider any
other possible outcomes from playing move B, since you know that your opponent can force
a win.
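The bookkeeping described above fits in a few lines of code. The sketch below is our own minimal illustration (not the book's pseudocode, which appears in a later section): it runs alpha–beta over an explicit game tree given as nested lists, cutting off a branch as soon as beta ≤ alpha:

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """node is either a number (a leaf value) or a list of child nodes."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:   # beta cutoff: the minimizer avoids this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # alpha cutoff: the maximizer avoids this branch
                break
        return value

# A small hand-made tree whose minimax value is 6; alpha-beta returns the
# same value while skipping pruned subtrees.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree, -math.inf, math.inf, True))  # 6
```

Note that pruning changes only which nodes are visited, never the value returned at the root.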

46.3 Improvements over naive minimax

Figure 120 An illustration of alpha–beta pruning. The grayed-out subtrees don't need
to be explored (when moves are evaluated from left to right), since we know the group of
subtrees as a whole yields the value of an equivalent subtree or worse, and as such cannot
influence the final result. The max and min levels represent the turn of the player and the
adversary, respectively.

The benefit of alpha–beta pruning lies in the fact that branches of the search tree can be
eliminated. This way, the search time can be limited to the 'more promising' subtree, and
a deeper search can be performed in the same time. Like its predecessor, it belongs to the
branch and bound19 class of algorithms. The optimization reduces the effective depth to
slightly more than half that of simple minimax if the nodes are evaluated in an optimal or
near optimal order (best choice for side on move ordered first at each node).

19 https://en.wikipedia.org/wiki/Branch_and_bound


With an (average or constant) branching factor20 of b, and a search depth of d plies21 ,


the maximum number of leaf node positions evaluated (when the move ordering is pessi-
mal22 ) is O23 (b×b×...×b) = O(bd ) – the same as a simple minimax search. If the move
ordering for the search is optimal (meaning the best moves are always searched first), the
number of leaf node positions evaluated is about O(b×1×b×1×...×b)
√ for odd depth and
d/2
O(b×1×b×1×...×1) for even depth, or O(b ) = O( b ). In the latter case, where the
d

ply of a search is even, the effective branching factor is reduced to its square root24 , or,
equivalently, the search can go twice as deep with the same amount of computation.[12] The
explanation of b×1×b×1×... is that all the first player's moves must be studied to find the
best one, but for each, only the second player's best move is needed to refute all but the
first (and best) first player move—alpha–beta ensures no other second player moves need
be considered. When nodes are considered in a random order (i.e., the algorithm randomizes), asymptotically, the expected number of nodes evaluated in uniform trees with binary leaf-values is Θ(((b − 1 + √(b^2 + 14b + 1))/4)^d) .[11] For the same trees, when the values are assigned to the leaf values independently of each other and say zero and one are both equally probable, the expected number of nodes evaluated is Θ((b/2)^d), which is much smaller than the work done by the randomized algorithm mentioned above, and is again optimal for such random trees.[9] When the leaf values are chosen independently of each other but from the [0, 1] interval uniformly at random, the expected number of nodes evaluated increases to Θ(b^(d/log d)) in the d → ∞ limit,[10] which is again optimal for these kinds of random trees. Note that the actual work for ”small” values of d is better approximated by 0.925·d^0.747 .[10][9]

20 https://en.wikipedia.org/wiki/Branching_factor
21 https://en.wikipedia.org/wiki/Ply_(game_theory)
22 https://en.wiktionary.org/wiki/pessimal
23 https://en.wikipedia.org/wiki/Big_O_notation
24 https://en.wikipedia.org/wiki/Square_root

Alpha–beta pruning

Figure 121 An animated pedagogical example that attempts to be human-friendly by substituting initial infinite (or arbitrarily large) values for emptiness and by avoiding using the negamax coding simplifications.

Normally during alpha–beta, the subtrees are temporarily dominated by either a first player
advantage (when many first player moves are good, and at each search depth the first move
checked by the first player is adequate, but all second player responses are required to try
to find a refutation), or vice versa. This advantage can switch sides many times during the
search if the move ordering is incorrect, each time leading to inefficiency. As the number
of positions searched decreases exponentially each move nearer the current position, it is
worth spending considerable effort on sorting early moves. An improved sort at any depth
will exponentially reduce the total number of positions searched, but sorting all positions
at depths near the root node is relatively cheap as there are so few of them. In practice,
the move ordering is often determined by the results of earlier, smaller searches, such as
through iterative deepening25 .
Additionally, this algorithm can be trivially modified to return an entire principal variation26
in addition to the score. Some more aggressive algorithms such as MTD(f)27 do not easily
permit such a modification.

25 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
26 https://en.wikipedia.org/wiki/Principal_variation
27 https://en.wikipedia.org/wiki/MTD(f)


46.4 Pseudocode

The pseudo-code for depth-limited minimax with alpha-beta pruning is as follows:[12]
function alphabeta(node, depth, α, β, maximizingPlayer) is
    if depth = 0 or node is a terminal node then
        return the heuristic value of node
    if maximizingPlayer then
        value := −∞
        for each child of node do
            value := max(value, alphabeta(child, depth − 1, α, β, FALSE))
            α := max(α, value)
            if α ≥ β then
                break (* β cut-off *)
        return value
    else
        value := +∞
        for each child of node do
            value := min(value, alphabeta(child, depth − 1, α, β, TRUE))
            β := min(β, value)
            if α ≥ β then
                break (* α cut-off *)
        return value

(* Initial call *)
alphabeta(origin, depth, −∞28 , +∞29 , TRUE)

Implementations of alpha-beta pruning can often be delineated by whether they are ”fail-soft” or ”fail-hard”. The pseudo-code illustrates the fail-soft variation. With fail-soft alpha-beta, the alphabeta function may return values (v) that exceed (v < α or v > β) the α and β bounds set by its function call arguments. In comparison, fail-hard alpha-beta limits its function return value to the inclusive range of α and β.
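As a concrete sketch, the fail-soft pseudo-code can be transcribed into Python. The game-tree representation used here (nested lists whose leaves are heuristic values) is an assumption made for the example, not part of the algorithm:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    """Fail-soft alpha-beta on a toy tree: a node is either a number
    (a leaf's heuristic value) or a list of child nodes."""
    if depth == 0 or not isinstance(node, list):
        return node  # heuristic value at the search horizon or a terminal node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cut-off
        return value

# A small hypothetical tree whose minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree, 3, -math.inf, math.inf, True))  # → 6
```

Because the function is fail-soft, the recursive calls may return values outside the (alpha, beta) window passed to them, exactly as the surrounding text describes.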

46.5 Heuristic improvements

Further improvement can be achieved without sacrificing accuracy by using ordering heuris-
tics30 to search earlier parts of the tree that are likely to force alpha–beta cutoffs. For
example, in chess, moves that capture pieces may be examined before moves that do not,
and moves that have scored highly in earlier passes31 through the game-tree analysis may be
evaluated before others. Another common, and very cheap, heuristic is the killer heuristic32 ,
where the last move that caused a beta-cutoff at the same tree level in the tree search is
always examined first. This idea can also be generalized into a set of refutation tables33 .
Alpha–beta search can be made even faster by considering only a narrow search window
(generally determined by guesswork based on experience). This is known as aspiration
search. In the extreme case, the search is performed with alpha and beta equal; a technique

28 https://en.wikipedia.org/wiki/Infinity
29 https://en.wikipedia.org/wiki/Infinity
30 https://en.wikipedia.org/wiki/Heuristic
31 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
32 https://en.wikipedia.org/wiki/Killer_heuristic
33 https://en.wikipedia.org/wiki/Refutation_table


known as zero-window search34 , null-window search, or scout search. This is particularly useful for win/loss searches near the end of a game where the extra depth gained from the
narrow window and a simple win/loss evaluation function may lead to a conclusive result.
If an aspiration search fails, it is straightforward to detect whether it failed high (high edge
of window was too low) or low (lower edge of window was too high). This gives information
about what window values might be useful in a re-search of the position.
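The fail-high/fail-low re-search just described can be sketched as a small driver loop around any fail-soft alpha-beta routine. The `search` callback signature and the window size are assumptions of this sketch, not an interface defined by the text:

```python
import math

def aspiration(root, depth, guess, window, search):
    """Aspiration search: probe a narrow (alpha, beta) window around a
    guessed score; if the probe fails high or low, widen the failed
    bound and re-search. `search(root, depth, alpha, beta)` must be a
    fail-soft alpha-beta searcher."""
    alpha, beta = guess - window, guess + window
    while True:
        value = search(root, depth, alpha, beta)
        if value <= alpha:    # failed low: the true score is below the window
            alpha = -math.inf
        elif value >= beta:   # failed high: the true score is above the window
            beta = math.inf
        else:
            return value      # the score fell inside the window
```

A real engine would typically re-search with a progressively widened window rather than jumping straight to ±∞; the immediate widening here keeps the sketch short.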
Over time, other improvements have been suggested, and indeed the Falphabeta (fail-soft
alpha-beta) idea of John Fishburn is nearly universal and is already incorporated above in
a slightly modified form. Fishburn also suggested a combination of the killer heuristic and
zero-window search under the name Lalphabeta (”last move with minimal window alpha-
beta search”).

46.6 Other algorithms

Since the minimax algorithm and its variants are inherently depth-first35 , a strategy such
as iterative deepening36 is usually used in conjunction with alpha–beta so that a reasonably
good move can be returned even if the algorithm is interrupted before it has finished exe-
cution. Another advantage of using iterative deepening is that searches at shallower depths
give move-ordering hints, as well as shallow alpha and beta estimates, that both can help
produce cutoffs for higher depth searches much earlier than would otherwise be possible.
Algorithms like SSS*37 , on the other hand, use the best-first38 strategy. This can potentially
make them more time-efficient, but typically at a heavy cost in space-efficiency.[13]

46.7 See also


• Minimax39
• Expectiminimax40
• Negamax41
• Pruning (algorithm)42
• Branch and bound43
• Combinatorial optimization44
• Principal variation search45
• Transposition table46

34 https://en.wikipedia.org/wiki/MTD-f#Zero-Window_Searches
35 https://en.wikipedia.org/wiki/Depth-first_search
36 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
37 https://en.wikipedia.org/wiki/SSS*
38 https://en.wikipedia.org/wiki/Best_first_search
39 https://en.wikipedia.org/wiki/Minimax
40 https://en.wikipedia.org/wiki/Expectiminimax
41 https://en.wikipedia.org/wiki/Negamax
42 https://en.wikipedia.org/wiki/Pruning_(algorithm)
43 https://en.wikipedia.org/wiki/Branch_and_bound
44 https://en.wikipedia.org/wiki/Combinatorial_optimization
45 https://en.wikipedia.org/wiki/Principal_variation_search
46 https://en.wikipedia.org/wiki/Transposition_table


46.8 References
1. R, S J.47 ; N, P48 (2010). Artificial Intelligence: A Modern
Approach49 (3 .). U S R, N J: P E,
I. . 167. ISBN50 978-0-13-604259-451 .
2. MC, J (27 N 2006). ”H L AI I H T I
S  1955”52 . R 2006-12-20.
3. N, A; S, H A. (1 M 1976). ”C  
 :   ”. Communications of the ACM. 19 (3):
113–126. doi53 :10.1145/360018.36002254 .
4. E, D.J.; H, T.P. (4 D 1961). ”T A– H
(AIM-030)”. M I  T55 . 56 :1721.1/609857 .
Cite journal requires |journal= (help58 )
5. K, A (3 D 2004). ”MIT A I M 41”59 .
R 2006-07-01.
6. M, T.A.60 (M 1987). ”C C M (PDF)  E-
  A I. S. S ()”61 (PDF). J. W-
 & S. . 159–171. A   62 (PDF63 )  O
30, 2008. R 2006-12-21.
7. K, D E.; M, R W. (1975). ”A   -
”64 (PDF65 ). Artificial Intelligence. 6 (4): 293–326. doi66 :10.1016/0004-
3702(75)90019-367 .
8. A, B (1 J 1989). ”C   -
”. ACM Computing Surveys. 21 (2): 137–161. doi68 :10.1145/66443.6644469 .

47 https://en.wikipedia.org/wiki/Stuart_J._Russell
48 https://en.wikipedia.org/wiki/Peter_Norvig
49 http://aima.cs.berkeley.edu/
50 https://en.wikipedia.org/wiki/ISBN_(identifier)
51 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-604259-4
52 http://www-formal.stanford.edu/jmc/slides/wrong/wrong-sli/wrong-sli.html
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1145%2F360018.360022
55 https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
56 https://en.wikipedia.org/wiki/Hdl_(identifier)
57 http://hdl.handle.net/1721.1%2F6098
58 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
59 http://www.kotok.org/AI_Memo_41.html
60 http://www.cs.ualberta.ca/~tony/
https://web.archive.org/web/20081030023047if_/http://www.cs.ualberta.ca/~tony/
61
OldPapers/encyc.mac.pdf
62 http://www.cs.ualberta.ca/~tony/OldPapers/encyc.mac.pdf
63 https://en.wikipedia.org/wiki/PDF
64 https://pdfs.semanticscholar.org/dce2/6118156e5bc287bca2465a62e75af39c7e85.pdf
65 https://en.wikipedia.org/wiki/PDF
66 https://en.wikipedia.org/wiki/Doi_(identifier)
67 https://doi.org/10.1016%2F0004-3702%2875%2990019-3
68 https://en.wikipedia.org/wiki/Doi_(identifier)
69 https://doi.org/10.1145%2F66443.66444


9. P, J (1980). ”A P  M T 


G-S P”. Artificial Intelligence70 . 14 (2): 113–138.
71
doi :10.1016/0004-3702(80)90037-5 . 72

10. P, J (1982). ”T S   B F  
A-B P A  I O”. Communications of the
ACM. 25 (8): 559–64. doi73 :10.1145/358589.35861674 .
11. S, M.; W, A. (1986). ”P B D T
  C  E G T”. 27th Annual Symposium
on Foundations of Computer Science. pp. 29–38. doi75 :10.1109/SFCS.1986.4476 .
ISBN77 0-8186-0740-878 .
12. R, S J.79 ; N, P80 (2003), Artificial Intelligence: A Mod-
ern Approach81 (2 .), U S R, N J: P H,
ISBN82 0-13-790395-283
13. P, J84 ; K, R (1987), ”S ”, Annual Review
of Computer Science, 2: 451–467, doi85 :10.1146/annurev.cs.02.060187.00231586 , Like
its A* counterpart for single-player games, SSS* is optimal in terms of the average
number of nodes examined; but its superior pruning power is more than offset by the
substantial storage space and bookkeeping required.

46.9 Bibliography
• G T. H; G P; S S (2008). ”C 7: P
F  AI”. Algorithms in a Nutshell. Oreilly Media87 . pp. 217–223. ISBN88 978-0-
596-51624-689 .
• Judea Pearl90 , Heuristics, Addison-Wesley, 1984

70 https://en.wikipedia.org/wiki/Artificial_Intelligence_(journal)
71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1016%2F0004-3702%2880%2990037-5
73 https://en.wikipedia.org/wiki/Doi_(identifier)
74 https://doi.org/10.1145%2F358589.358616
75 https://en.wikipedia.org/wiki/Doi_(identifier)
76 https://doi.org/10.1109%2FSFCS.1986.44
77 https://en.wikipedia.org/wiki/ISBN_(identifier)
78 https://en.wikipedia.org/wiki/Special:BookSources/0-8186-0740-8
79 https://en.wikipedia.org/wiki/Stuart_J._Russell
80 https://en.wikipedia.org/wiki/Peter_Norvig
81 http://aima.cs.berkeley.edu/
82 https://en.wikipedia.org/wiki/ISBN_(identifier)
83 https://en.wikipedia.org/wiki/Special:BookSources/0-13-790395-2
84 https://en.wikipedia.org/wiki/Judea_Pearl
85 https://en.wikipedia.org/wiki/Doi_(identifier)
86 https://doi.org/10.1146%2Fannurev.cs.02.060187.002315
87 https://en.wikipedia.org/wiki/Oreilly_Media
88 https://en.wikipedia.org/wiki/ISBN_(identifier)
89 https://en.wikipedia.org/wiki/Special:BookSources/978-0-596-51624-6
90 https://en.wikipedia.org/wiki/Judea_Pearl


• J P. F (1984). ”A A: S O  α-β S”.


Analysis of Speedup in Distributed Algorithms (revision of 1981 PhD thesis). UMI Re-
search Press. pp. 107–111. ISBN91 0-8357-1527-292 .


91 https://en.wikipedia.org/wiki/ISBN_(identifier)
92 https://en.wikipedia.org/wiki/Special:BookSources/0-8357-1527-2

47 Aperiodic graph

Figure 122 An aperiodic graph. The cycles in this graph have lengths 5 and 6;
therefore, there is no k > 1 that divides all cycle lengths.


Figure 123 A strongly connected graph with period three.

In the mathematical1 area of graph theory2 , a directed graph3 is said to be aperiodic if there is no integer k > 1 that divides the length of every cycle4 of the graph. Equivalently,
a graph is aperiodic if the greatest common divisor5 of the lengths of its cycles is one; this
greatest common divisor for a graph G is called the period of G.

1 https://en.wikipedia.org/wiki/Mathematics
2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Directed_graph
4 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
5 https://en.wikipedia.org/wiki/Greatest_common_divisor


47.1 Graphs that cannot be aperiodic

In any directed bipartite graph6 , all cycles have a length that is divisible by two. Therefore,
no directed bipartite graph can be aperiodic. In any directed acyclic graph7 , it is a vacuous
truth8 that every k divides all cycles (because there are no directed cycles to divide) so no
directed acyclic graph can be aperiodic. And in any directed cycle graph9 , there is only one
cycle, so every cycle's length is divisible by n, the length of that cycle.

47.2 Testing for aperiodicity

Suppose that G is strongly connected and that k divides the lengths of all cycles in G.
Consider the results of performing a depth-first search10 of G, starting at any vertex, and
assigning each vertex v to a set Vi where i is the length (taken mod k) of the path in the
depth-first search tree from the root to v. It can be shown (Jarvis & Shier 199611 ) that this
partition into sets Vi has the property that each edge in the graph goes from a set Vi to
another set V(i + 1) mod k . Conversely, if a partition with this property exists for a strongly
connected graph G, k must divide the lengths of all cycles in G.
Thus, we may find the period of a strongly connected graph G by the following steps:
• Perform a depth-first search of G
• For each e in G that connects a vertex on level i of the depth-first search tree to a vertex
on level j, let ke = j - i - 1.
• Compute the greatest common divisor of the set of numbers ke .
The graph is aperiodic if and only if the period computed in this fashion is 1.
If G is not strongly connected, we may perform a similar computation in each strongly
connected component12 of G, ignoring the edges that pass from one strongly connected
component to another.
Jarvis and Shier describe a very similar algorithm using breadth-first search13 in place of depth-first search; the advantage of depth-first search is that the strong connectivity analysis can be incorporated into the same search.
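The steps above can be sketched in Python for a strongly connected graph. The adjacency-dictionary representation and the iterative depth-first search are implementation choices of this sketch; any spanning-tree leveling from the root yields the same greatest common divisor:

```python
import math

def period(graph, root):
    """Period of a strongly connected digraph: the gcd of the values
    k_e = j - i - 1 over all edges from search-tree level i to level j.
    `graph` maps each vertex to its list of successors (an assumed
    adjacency-list representation)."""
    level = {root: 0}
    g = 0
    stack = [root]
    while stack:
        v = stack.pop()
        for w in graph[v]:
            if w not in level:
                # Tree edge: assign the child's level at discovery.
                level[w] = level[v] + 1
                stack.append(w)
            # Every edge contributes its level discrepancy to the gcd;
            # tree edges contribute 0, which leaves the gcd unchanged.
            g = math.gcd(g, abs(level[w] - level[v] - 1))
    return g
```

For a directed triangle the computed period is 3; adding a chord that creates a 2-cycle drives the gcd of cycle lengths, and hence the period, down to 1, making the graph aperiodic.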

6 https://en.wikipedia.org/wiki/Bipartite_graph
7 https://en.wikipedia.org/wiki/Directed_acyclic_graph
8 https://en.wikipedia.org/wiki/Vacuous_truth
9 https://en.wikipedia.org/wiki/Cycle_graph
10 https://en.wikipedia.org/wiki/Depth-first_search
11 #CITEREFJarvisShier1996
12 https://en.wikipedia.org/wiki/Strongly_connected_component
13 https://en.wikipedia.org/wiki/Breadth_first_search


47.3 Applications

In a strongly connected graph14 , if one defines a Markov chain15 on the vertices, in which
the probability of transitioning from v to w is nonzero if and only if there is an edge from
v to w, then this chain is aperiodic if and only if the graph is aperiodic. A Markov chain
in which all states are recurrent has a strongly connected state transition graph, and the
Markov chain is aperiodic if and only if this graph is aperiodic. Thus, aperiodicity of graphs
is a useful concept in analyzing the aperiodicity of Markov chains.
Aperiodicity is also an important necessary condition for solving the road coloring prob-
lem16 . According to the solution of this problem (Trahtman 200917 ), a strongly connected
directed graph in which all vertices have the same outdegree18 has a synchronizable edge
coloring if and only if it is aperiodic.

47.4 References
• J, J. P.; S, D. R. (1996), ”G-    M
”,  S, D. R.; W, K. T. (.), Applied Mathematical Modeling: A
Multidisciplinary Approach19 (PDF), CRC P.
• T, A N.20 (2009), ”T   ”, Israel Journal
of Mathematics, 172 (1): 51–60, arXiv21 :0709.009922 , doi23 :10.1007/s11856-009-0062-524 .

14 https://en.wikipedia.org/wiki/Strongly_connected_component
15 https://en.wikipedia.org/wiki/Markov_chain
16 https://en.wikipedia.org/wiki/Road_coloring_problem
17 #CITEREFTrahtman2009
18 https://en.wikipedia.org/wiki/Outdegree
19 http://www.ces.clemson.edu/~shierd/Shier/markov.pdf
20 https://en.wikipedia.org/wiki/Avraham_Trahtman
21 https://en.wikipedia.org/wiki/ArXiv_(identifier)
22 http://arxiv.org/abs/0709.0099
23 https://en.wikipedia.org/wiki/Doi_(identifier)
24 https://doi.org/10.1007%2Fs11856-009-0062-5

48 B*

This article is about a graph search algorithm. For the variant of B-Tree1 , see B*-tree2 .


1 https://en.wikipedia.org/wiki/B-Tree
2 https://en.wikipedia.org/wiki/B*-tree


In computer science3 , B* (pronounced ”B star”) is a best-first4 graph search algorithm5 that finds the least-cost path from a given initial node6 to any goal node7 (out of one or more possible goals). First published by Hans Berliner8 in 1979, it is related to the A* search algorithm9 .

48.1 Summary

The algorithm stores intervals for nodes of the tree10 as opposed to single point-valued
estimates. Then, leaf nodes of the tree can be searched until one of the top level nodes has
an interval which is clearly ”best.”

48.2 Details

48.2.1 Interval evaluations rather than estimates

Leaf nodes of a B*-tree are given evaluations that are intervals rather than single numbers.
The interval is supposed to contain the true value of that node. If all intervals attached to
leaf nodes satisfy this property, then B* will identify an optimal path to the goal state.

3 https://en.wikipedia.org/wiki/Computer_science
4 https://en.wikipedia.org/wiki/Best-first_search
5 https://en.wikipedia.org/wiki/Graph_search_algorithm
6 https://en.wikipedia.org/wiki/Node_(graph_theory)
7 https://en.wikipedia.org/wiki/Goal_node
8 https://en.wikipedia.org/wiki/Hans_Berliner
9 https://en.wikipedia.org/wiki/A*_search_algorithm
10 https://en.wikipedia.org/wiki/Tree_(graph_theory)


48.2.2 Backup process

To back up the intervals within the tree, a parent's upper bound is set to the maximum of the upper bounds of the children. A parent's lower bound is set to the maximum of the lower bounds of the children. Note that different children might supply these bounds.
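In code form, the backup rule reads as follows; representing an interval as a (lower, upper) tuple is an assumption of this sketch:

```python
def backup(children):
    """B* interval backup at a node: the parent's upper bound is the
    maximum of the children's upper bounds, and its lower bound is the
    maximum of the children's lower bounds (the two maxima may come
    from different children)."""
    return (max(lo for lo, _ in children), max(hi for _, hi in children))
```

For example, backup([(1, 5), (3, 4)]) yields (3, 5): the lower bound comes from the second child while the upper bound comes from the first.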

48.2.3 Termination of search

B* systematically expands nodes in order to create ”separation,” which occurs when the
lower bound of a direct child of the root is at least as large as the upper bound of any other
direct child of the root. A tree that creates separation at the root contains a proof that the
best child is at least as good as any other child.
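The separation test at the root can be written directly from this definition, again treating intervals as assumed (lower, upper) pairs:

```python
def separated(children):
    """True when some direct child of the root is provably best: its
    lower bound is at least as large as the upper bound of every other
    child."""
    best = max(range(len(children)), key=lambda i: children[i][0])
    return all(children[best][0] >= hi
               for i, (_, hi) in enumerate(children) if i != best)
```

Here separated([(5, 9), (2, 4)]) holds because the first child's lower bound (5) dominates the other child's upper bound (4), whereas separated([(5, 9), (2, 6)]) does not, since the second child could still be worth 6.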
In practice, complex searches might not terminate within practical resource limits. So
the algorithm is normally augmented with artificial termination criteria such as time or
memory limits. When an artificial limit is hit, then you must make a heuristic judgment
about which move to select. Normally, the tree would supply you with extensive evidence,
like the intervals of root nodes.

48.2.4 Expansion

B* is a best-first process, which means that it is very efficient to traverse the tree, repeatedly
descending to find a leaf to expand. This section describes how to choose the node to
expand. (Note: Whether or not the tree is memory-resident, is a function of the overall
implementation efficiency, including how it may be mapped and/or managed via real or
virtual memory.)
At the root of the tree, the algorithm applies one of two strategies, called prove-best and
disprove-rest. In the prove-best strategy, the algorithm selects the node associated with
the highest upper bound. The hope is that expanding that node will raise its lower bound
higher than any other node's upper bound.
The disprove-rest strategy selects the child of the root that has the second-highest upper
bound. The hope is that by expanding that node you might be able to reduce the upper
bound to less than the lower bound of the best child.

Strategy selection

Note that applying the disprove-rest strategy is pointless until the lower bound of the child
node that has the highest upper bound is the highest among all lower bounds.
The original algorithm description did not give any further guidance on which strategy to
select. There are several reasonable alternatives, such as expanding the choice that has the
smaller tree.


Strategy selection at non-root nodes

Once a child of the root has been selected (using prove-best or disprove-rest) then the
algorithm descends to a leaf node by repeatedly selecting the child that has the highest
upper bound.
When a leaf node is reached, the algorithm generates all successor nodes and assigns intervals
to them using the evaluation function. Then the intervals of all nodes have to be backed
up using the backup operation.
When transpositions are possible, then the back-up operation might need to alter the values
of nodes that did not lie on the selection path. In this case, the algorithm needs pointers
from children to all parents so that changes can be propagated. Note that propagation can
cease when a backup operation does not change the interval associated with a node.

48.2.5 Robustness

If intervals are incorrect (in the sense that the game-theoretic value of the node is not
contained within the interval), then B* might not be able to identify the correct path.
However, the algorithm is fairly robust to errors in practice.
The Maven (Scrabble)11 program has an innovation that improves the robustness of B*
when evaluation errors are possible. If a search terminates due to separation then Maven
restarts the search after widening all of the evaluation intervals by a small amount. This
policy progressively widens the tree, eventually erasing all errors.

48.2.6 Extension to two-player games

The B* algorithm applies to two-player deterministic zero-sum games. In fact, the only
change is to interpret ”best” with respect to the side moving in that node. So you would
take the maximum if your side is moving, and the minimum if the opponent is moving.
Equivalently, you can represent all intervals from the perspective of the side to move, and
then negate the values during the back-up operation.

48.2.7 Applications

Andrew Palay applied B* to chess. Endpoint evaluations were assigned by performing null-move searches. There is no report of how well this system performed compared to alpha-beta pruning12 search engines running on the same hardware.
The Maven (Scrabble)13 program applied B* search to endgames. Endpoint evaluations
were assigned using a heuristic planning system.

11 https://en.wikipedia.org/wiki/Maven_(Scrabble)
12 https://en.wikipedia.org/wiki/Alpha-beta_pruning
13 https://en.wikipedia.org/wiki/Maven_(Scrabble)


The B* search algorithm has been used to compute optimal strategy in a sum game of a
set of combinatorial games.

48.3 See also


• Branch and bound14

48.4 References
• B, H (1979). ”T B* T S A. A B-F
P P”15 . Artificial Intelligence16 . 12 (1): 23–40. doi17 :10.1016/0004-
3702(79)90003-118 .
• R, S. J.; N, P. (2003). Artificial Intelligence: A Modern Approach19 .
U S R, N.J.: P H. . 188. ISBN20 0-13-790395-221 .
• S, B (2002). ”W-- S”. Artificial
Intelligence22 . 134 (1–2): 241–275. doi23 :10.1016/S0004-3702(01)00166-724 .

14 https://en.wikipedia.org/wiki/Branch_and_bound
15 http://www.dtic.mil/get-tr-doc/pdf?AD=ADA059391
16 https://en.wikipedia.org/wiki/Artificial_Intelligence_(journal)
17 https://en.wikipedia.org/wiki/Doi_(identifier)
18 https://doi.org/10.1016%2F0004-3702%2879%2990003-1
19 https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
20 https://en.wikipedia.org/wiki/ISBN_(identifier)
21 https://en.wikipedia.org/wiki/Special:BookSources/0-13-790395-2
22 https://en.wikipedia.org/wiki/Artificial_Intelligence_(journal)
23 https://en.wikipedia.org/wiki/Doi_(identifier)
24 https://doi.org/10.1016%2FS0004-3702%2801%2900166-7

49 Barabási–Albert model

Figure 124 Display of three graphs generated with the Barabasi-Albert (BA) model.
Each has 20 nodes and a parameter of attachment m as specified. The color of each node
is dependent upon its degree (same scale for each graph).


The Barabási–Albert (BA) model is an algorithm for generating random scale-free1 networks2 using a preferential attachment3 mechanism. Several natural and human-made
systems, including the Internet4 , the world wide web5 , citation networks6 , and some social
networks7 are thought to be approximately scale-free and certainly contain few nodes (called
hubs) with unusually high degree as compared to the other nodes of the network. The BA
model tries to explain the existence of such nodes in real networks. The algorithm is named

1 https://en.wikipedia.org/wiki/Scale-free_network
2 https://en.wikipedia.org/wiki/Complex_network
3 https://en.wikipedia.org/wiki/Preferential_attachment
4 https://en.wikipedia.org/wiki/Internet
5 https://en.wikipedia.org/wiki/World_wide_web
6 https://en.wikipedia.org/wiki/Citation_analysis
7 https://en.wikipedia.org/wiki/Social_networks


for its inventors Albert-László Barabási8 and Réka Albert9 and is a special case of an earlier
and more general model called Price's model10 .[1]

49.1 Concepts

Many observed networks (at least approximately) fall into the class of scale-free networks11 ,
meaning that they have power-law12 (or scale-free) degree distributions, while random graph
models such as the Erdős–Rényi (ER) model13 and the Watts–Strogatz (WS) model14 do
not exhibit power laws. The Barabási–Albert model is one of several proposed models that
generate scale-free networks. It incorporates two important general concepts: growth and
preferential attachment15 . Both growth and preferential attachment exist widely in real
networks.
Growth means that the number of nodes in the network increases over time.
Preferential attachment16 means that the more connected a node is, the more likely it is
to receive new links. Nodes with a higher degree17 have a stronger ability to grab links
added to the network. Intuitively, the preferential attachment can be understood if we
think in terms of social networks18 connecting people. Here a link from A to B means
that person A ”knows” or ”is acquainted with” person B. Heavily linked nodes represent
well-known people with lots of relations. When a newcomer enters the community, they are
more likely to become acquainted with one of those more visible people rather than with
a relative unknown. The BA model was proposed by assuming that in the World Wide
Web, new pages link preferentially to hubs, i.e. very well known sites such as Google19 ,
rather than to pages that hardly anyone knows. If someone selects a new page to link to
by randomly choosing an existing link, the probability of selecting a particular page would
be proportional to its degree. The BA model claims that this explains the preferential
attachment probability rule. However, in spite of being quite a useful model, empirical
evidence suggests that the mechanism in its simplest form does not apply to the World
Wide Web, as shown in ”Technical Comment to 'Emergence of Scaling in Random Networks'”20 .
Later, the Bianconi–Barabási model21 works to address this issue by introducing a ”fitness”
parameter. Preferential attachment is an example of a positive feedback22 cycle where

8 https://en.wikipedia.org/wiki/Albert-L%C3%A1szl%C3%B3_Barab%C3%A1si
9 https://en.wikipedia.org/wiki/R%C3%A9ka_Albert
10 https://en.wikipedia.org/wiki/Price%27s_model
11 https://en.wikipedia.org/wiki/Scale-free_networks
12 https://en.wikipedia.org/wiki/Power_law
13 https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model
14 https://en.wikipedia.org/wiki/Watts_and_Strogatz_model
15 https://en.wikipedia.org/wiki/Preferential_attachment
16 https://en.wikipedia.org/wiki/Preferential_attachment
17 https://en.wikipedia.org/wiki/Degree_(graph_theory)
18 https://en.wikipedia.org/wiki/Social_networks
19 https://en.wikipedia.org/wiki/Google
20 http://www.hpl.hp.com/research/idl/papers/scalingcomment/
21 https://en.wikipedia.org/wiki/Bianconi%E2%80%93Barab%C3%A1si_model
22 https://en.wikipedia.org/wiki/Positive_feedback


initially random variations (one node initially having more links or having started accu-
mulating links earlier than another) are automatically reinforced, thus greatly magnifying
differences. This is also sometimes called the Matthew effect23 , ”the rich get richer24 ”. See
also autocatalysis25 .

49.2 Algorithm

Figure 125 The steps of the growth of the network according to the Barabasi–Albert
model (m0 = m = 2)

23 https://en.wikipedia.org/wiki/Matthew_effect_(sociology)
24 https://en.wikipedia.org/wiki/Rich_get_richer
25 https://en.wikipedia.org/wiki/Autocatalysis


The network begins with an initial connected network of m0 nodes. New nodes are added to the network one at a time. Each new node is connected to m ≤ m0
existing nodes with a probability that is proportional to the number of links that the existing
nodes already have. Formally, the probability pi that the new node is connected to node i
is[2]
pi = ki / ∑j kj ,

where ki is the degree of node i and the sum is made over all pre-existing nodes j (i.e. the
denominator results in twice the current number of edges in the network). Heavily linked
nodes (”hubs”) tend to quickly accumulate even more links, while nodes with only a few
links are unlikely to be chosen as the destination for a new link. The new nodes have a
”preference” to attach themselves to the already heavily linked nodes.
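The growth rule above is easy to simulate. The sketch below (plain Python; the function name and defaults are our own, and the initial core is taken to be a simple path) keeps one list entry per edge endpoint, so that sampling uniformly from that list is exactly degree-proportional sampling:

```python
import random

def barabasi_albert(n, m, m0=None, seed=0):
    """Grow a BA-style network: start from a connected core of m0 nodes,
    then attach each new node to m existing nodes chosen with probability
    proportional to their current degree."""
    rng = random.Random(seed)
    m0 = m if m0 is None else m0
    # Initial connected network: a simple path on the first m0 nodes.
    edges = [(i, i + 1) for i in range(m0 - 1)]
    # One entry per edge endpoint: a uniform choice from this list picks an
    # existing node with probability proportional to its degree.
    targets = [v for e in edges for v in e] or list(range(m0))
    for new in range(m0, n):
        chosen = set()
        while len(chosen) < m:          # m distinct neighbours
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

edges = barabasi_albert(50, m=2)        # m0 = m = 2 by default
```

Each new node contributes exactly m edges, so a run with n nodes produces (m0 − 1) + m(n − m0) edges.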

Figure 126 A network generated according to the Barabási–Albert model. The network
is made of 50 vertices with initial degrees m0 = 1.


49.3 Properties

49.3.1 Degree distribution

Figure 127 The degree distribution of the BA model, which follows a power law. In
log–log scale the power-law function is a straight line.[3]

The degree distribution resulting from the BA model is scale free; in particular, it is a power
law of the form
P (k) ∼ k −3
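The standard master-equation solution behind this scaling (a known closed form that the text does not quote, so treat it as an assumption here) is P(k) = 2m(m+1)/(k(k+1)(k+2)) for k ≥ m, which behaves as 2m(m+1)·k⁻³ for large k and can be checked numerically:

```python
def ba_pk(k, m):
    """Closed-form BA degree distribution (master-equation result, assumed
    here rather than taken from the text); valid for k >= m."""
    return 2 * m * (m + 1) / (k * (k + 1) * (k + 2))

m = 2
total = sum(ba_pk(k, m) for k in range(m, 100_000))   # should be close to 1
ratio = ba_pk(2000, m) / ba_pk(1000, m)               # ~ (1/2)**3 for a k**-3 tail
```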

49.3.2 Hirsch index distribution

The h-index26 or Hirsch index distribution was also shown to be scale free, and was proposed
as the ”lobby index”, to be used as a centrality measure[4]
H(k) ∼ k −6

26 https://en.wikipedia.org/wiki/H-index


Furthermore, an analytic result for the density of nodes with h-index27 1 can be obtained
in the case where m0 = 1


H(1) = 4 − π  (m0 = 1)

49.3.3 Average path length

The average path length28 of the BA model increases approximately logarithmically with
the size of the network. The actual form has a double logarithmic correction and goes as[5]
ℓ ∼ ln N / ln ln N .
The BA model has a systematically shorter average path length than a random graph.
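A quick numerical illustration of how slowly ln N / ln ln N grows (plain arithmetic, nothing model-specific):

```python
import math

# Average-path-length scaling ln(N) / ln(ln(N)) for a few network sizes.
scaling = {N: math.log(N) / math.log(math.log(N)) for N in (10**3, 10**6, 10**9)}
```

Even across six orders of magnitude in N, the value roughly doubles rather than growing a thousandfold.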

49.3.4 Node degree correlations

Correlations between the degrees of connected nodes develop spontaneously in the BA model
because of the way the network evolves. The probability, nkℓ , of finding a link that connects
a node of degree k to an ancestor node of degree ℓ in the BA model for the special case of
m = 1 (BA tree) is given by
nkℓ = 4(ℓ − 1) / [k(k + 1)(k + ℓ)(k + ℓ + 1)(k + ℓ + 2)] + 12(ℓ − 1) / [k(k + ℓ − 1)(k + ℓ)(k + ℓ + 1)(k + ℓ + 2)].
This confirms the existence of degree correlations, because if the distributions were uncor-
related, we would get nkℓ = k −3 ℓ−3 .[2]
For general m, the fraction of links that connect a node of degree k to a node of degree ℓ
is[6]
p(k, ℓ) = [2m(m + 1) / (k(k + 1)ℓ(ℓ + 1))] · [1 − ( (2m+2 choose m+1) (k+ℓ−2m choose ℓ−m) ) / (k+ℓ+2 choose ℓ+1) ].

Also, the nearest-neighbor degree distribution p(ℓ | k), that is, the degree distribution of the
neighbors of a node with degree k, is given by[6]
p(ℓ | k) = [m(k + 2) / (kℓ(ℓ + 1))] · [1 − ( (2m+2 choose m+1) (k+ℓ−2m choose ℓ−m) ) / (k+ℓ+2 choose ℓ+1) ].

In other words, if we select a node with degree k, and then select one of its neighbors
randomly, the probability that this randomly selected neighbor will have degree ℓ is given
by the expression p(ℓ|k) above.
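As a sanity check, p(ℓ | k) should sum to 1 over ℓ ≥ m for any fixed k. The snippet below evaluates the formula for m = 1 (a direct transcription; the summation cutoff is arbitrary) and verifies the normalization numerically:

```python
from math import comb

def p_neighbor(l, k, m=1):
    """Nearest-neighbour degree distribution p(l | k) from the formula above."""
    bracket = 1 - comb(2 * m + 2, m + 1) * comb(k + l - 2 * m, l - m) / comb(k + l + 2, l + 1)
    return m * (k + 2) / (k * l * (l + 1)) * bracket

# Partial sums over l = m .. 200000 should all be close to 1 (the tail
# decays like 1/l**2, so the truncation error is tiny).
totals = {k: sum(p_neighbor(l, k) for l in range(1, 200_000)) for k in (1, 3, 10)}
```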

27 https://en.wikipedia.org/wiki/H-index
28 https://en.wikipedia.org/wiki/Average_path_length


49.3.5 Clustering coefficient

An analytical result for the clustering coefficient29 of the BA model was obtained by Klemm
and Eguíluz[7] and proven by Bollobás.[8][9] A mean-field approach to study the clustering
coefficient was applied by Fronczak, Fronczak and Holyst.[10]
This behavior is still distinct from the behavior of small-world networks where clustering is
independent of system size. In the case of hierarchical networks, clustering as a function of
node degree also follows a power-law,
C(k) = k −1 .
This result was obtained analytically by Dorogovtsev, Goltsev and Mendes.[11]

49.3.6 Spectral properties

The spectral density of the BA model has a different shape from the semicircular spectral
density of a random graph. It has a triangle-like shape with the top lying well above the
semicircle and the edges decaying as a power law.[12]

29 https://en.wikipedia.org/wiki/Clustering_coefficient


49.3.7 Dynamic scaling

Figure 128 Generalized degree distribution F (q, t) of the BA model for m = 1.


Figure 129 The same data plotted in the self-similar coordinates t1/2 F (q, t) and
q/t1/2 ; the excellent collapse reveals that F (q, t) exhibits dynamic scaling.

By definition, the BA model describes a time-developing phenomenon, and hence, besides its
scale-free property, one can also look for its dynamic scaling property. Since the time of birth
matters in the BA network, nodes can also be characterized by a generalized degree q, the
product of the square root of a node's birth time and its degree k, rather than by the degree
k alone. The generalized degree distribution F (q, t) has some non-trivial features and exhibits
dynamic scaling30

30 https://en.wikipedia.org/wiki/Dynamic_scaling


F (q, t) ∼ t−1/2 ϕ(q/t1/2 ).


It implies that the distinct plots of F (q, t) vs q would collapse into a universal curve if we
plot F (q, t)t1/2 vs q/t1/2 .[13]

49.4 Limiting cases

49.4.1 Model A

Model A retains growth but does not include preferential attachment. The probability of
a new node connecting to any pre-existing node is equal. The resulting degree distribution
in this limit is geometric,[14] indicating that growth alone is not sufficient to produce a
scale-free structure.

49.4.2 Model B

Model B retains preferential attachment but eliminates growth. The model begins with
a fixed number of disconnected nodes and adds links, preferentially choosing high degree
nodes as link destinations. Though the degree distribution early in the simulation looks
scale-free, the distribution is not stable, and it eventually becomes nearly Gaussian as the
network nears saturation. So preferential attachment alone is not sufficient to produce a
scale-free structure.
The failure of models A and B to lead to a scale-free distribution indicates that growth and
preferential attachment are needed simultaneously to reproduce the stationary power-law
distribution observed in real networks.[2]

49.5 History

Preferential attachment made its first appearance in 1923 in the celebrated urn model
of the Hungarian mathematician György Pólya31 .[15] The modern master equation
method, which yields a more transparent derivation, was applied to the problem by Herbert
A. Simon32 in 1955[16] in the course of studies of the sizes of cities and other phenomena.
It was first applied to the growth of networks by Derek de Solla Price33 in 1976[17] who
was interested in the networks of citation between scientific papers. The name ”preferential
attachment” and the present popularity of scale-free network models are due to the work of
Albert-László Barabási34 and Réka Albert35 , who rediscovered the process independently
in 1999 and applied it to degree distributions on the web.[3]

31 https://en.wikipedia.org/wiki/Gy%C3%B6rgy_P%C3%B3lya
32 https://en.wikipedia.org/wiki/Herbert_A._Simon
33 https://en.wikipedia.org/wiki/Derek_J._de_Solla_Price
34 https://en.wikipedia.org/wiki/Albert-L%C3%A1szl%C3%B3_Barab%C3%A1si
35 https://en.wikipedia.org/wiki/R%C3%A9ka_Albert


49.6 See also

• Bianconi–Barabási model36
• Chinese restaurant process37
• Complex networks38
• Erdős–Rényi (ER) model39
• Price's model40
• Scale-free network41
• Small-world network42
• Watts and Strogatz model43

36 https://en.wikipedia.org/wiki/Bianconi%E2%80%93Barab%C3%A1si_model
37 https://en.wikipedia.org/wiki/Chinese_restaurant_process
38 https://en.wikipedia.org/wiki/Complex_networks
39 https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model
40 https://en.wikipedia.org/wiki/Price%27s_model
41 https://en.wikipedia.org/wiki/Scale-free_network
42 https://en.wikipedia.org/wiki/Small-world_network
43 https://en.wikipedia.org/wiki/Watts_and_Strogatz_model

49.7 References
1. Albert, Réka; Barabási, Albert-László (2002). ”Statistical mechanics of complex networks”. Reviews of Modern Physics. 74 (1): 47–97. arXiv:cond-mat/0106096. doi:10.1103/RevModPhys.74.47. ISSN 0034-6861.
2. Albert, Réka; Barabási, Albert-László (2002). ”Statistical mechanics of complex networks” (PDF). Reviews of Modern Physics. 74 (47): 47–97. arXiv:cond-mat/0106096. doi:10.1103/RevModPhys.74.47. Archived from the original (PDF) on 2015-08-24: https://web.archive.org/web/20150824235818/http://www3.nd.edu/~networks/Publication%20Categories/03%20Journal%20Articles/Physics/StatisticalMechanics_Rev%20of%20Modern%20Physics%2074,%2047%20(2002).pdf
3. Barabási, Albert-László; Albert, Réka (October 1999). ”Emergence of scaling in random networks” (PDF). Science. 286 (5439): 509–512. arXiv:cond-mat/9910332. doi:10.1126/science.286.5439.509. PMID 10521342. Archived from the original (PDF) on 2012-04-17: https://web.archive.org/web/20120417112354/http://www.nd.edu/~networks/Publication%20Categories/03%20Journal%20Articles/Physics/EmergenceRandom_Science%20286,%20509-512%20(1999).pdf
4. Korn, A.; Schubert, A.; Telcs, A. (2009). ”Lobby index in networks”. Physica A. 388 (11): 2221–2226. arXiv:0809.0514. doi:10.1016/j.physa.2009.02.013.
5. Cohen, Reuven; Havlin, Shlomo (2003). ”Scale-Free Networks Are Ultrasmall”. Physical Review Letters. 90 (5): 058701. arXiv:cond-mat/0205476. doi:10.1103/PhysRevLett.90.058701. ISSN 0031-9007. PMID 12633404.
6. Fotouhi, Babak; Rabbat, Michael (2013). ”Degree correlation in scale-free graphs”. The European Physical Journal B. 86 (12): 510. arXiv:1308.5169. doi:10.1140/epjb/e2013-40920-6.
7. Klemm, K.; Eguíluz, V. C. (2002). ”Growing scale-free networks with small-world behavior”. Physical Review E. 65 (5): 057102. arXiv:cond-mat/0107607. doi:10.1103/PhysRevE.65.057102. PMID 12059755.
8. Bollobás, B. (2003). ”Mathematical results on scale-free random graphs”. Handbook of Graphs and Networks. pp. 1–37. CiteSeerX 10.1.1.176.6988.
9. ”Mathematical results on scale-free random graphs”. 2003: 1–37. CiteSeerX 10.1.1.176.6988.
10. Fronczak, A.; Fronczak, P.; Hołyst, J. A. (2003). ”Mean-field theory for clustering coefficients in Barabási–Albert networks”. Phys. Rev. E. 68 (4): 046126. arXiv:cond-mat/0306255. doi:10.1103/PhysRevE.68.046126. PMID 14683021.
11. Dorogovtsev, S.N.; Goltsev, A.V.; Mendes, J.F.F. (25 June 2002). ”Pseudofractal scale-free web”. Physical Review E. 65 (6): 066122. arXiv:cond-mat/0112143. doi:10.1103/PhysRevE.65.066122. PMID 12188798.
12. Farkas, I.J.; Derényi, I.; Barabási, A.-L.; Vicsek, T. (20 July 2001). ”Spectra of ”real-world” graphs: Beyond the semicircle law”. Physical Review E. 64 (2): 026704. arXiv:cond-mat/0102335. doi:10.1103/PhysRevE.64.026704. PMID 11497741.
13. Hassan, M. K.; Hassan, M. Z.; Pavel, N. I. (2011). ”Dynamic scaling, data-collapse and self-similarity in Barabasi–Albert networks”. J. Phys. A: Math. Theor. 44: 175101. doi:10.1088/1751-8113/44/17/175101.
14. Peköz, E.; Röllin, A.; Ross, N. (2012). ”Total variation and local limit error bounds for geometric approximation”. Bernoulli.
15. Barabási, Albert-László (2012). ”Luck or reason”. Nature. 489 (7417): 507–508. doi:10.1038/nature11486. PMID 22972190.
16. Simon, Herbert A. (December 1955). ”On a Class of Skew Distribution Functions”. Biometrika. 42 (3–4): 425–440. doi:10.1093/biomet/42.3-4.425.
17. Price, D.J. de Solla (September 1976). ”A general theory of bibliometric and other cumulative advantage processes”. Journal of the American Society for Information Science. 27 (5): 292–306. CiteSeerX 10.1.1.161.114. doi:10.1002/asi.4630270505.

49.8 External links

• ”This Man Could Rule the World”: http://www.popsci.com/science/article/2011-10/man-could-rule-world
• ”A Java Implementation for Barabási–Albert”: https://github.com/alihadian/ROLL
• ”Generating Barabási–Albert Model Graphs in Code”: https://compuzzle.wordpress.com/2015/02/03/generating-barabasi-albert-model-graphs-in-clojure/

50 Belief propagation


Belief propagation, also known as sum-product message passing, is a message-passing6
algorithm7 for performing inference8 on graphical models9 , such as Bayesian networks10
and Markov random fields11 . It calculates the marginal distribution12 for each
unobserved node (or variable), conditional on any observed nodes (or variables). Belief
propagation is commonly used in artificial intelligence13 and information theory14 and has
demonstrated empirical success in numerous applications including low-density parity-check
codes15 , turbo codes16 , free energy17 approximation, and satisfiability18 .[1]
The algorithm was first proposed by Judea Pearl19 in 1982,[2] who formulated it as an exact
inference algorithm on trees20 , which was later extended to polytrees21 .[3] While it is not
exact on general graphs, it has been shown to be a useful approximate algorithm.[4]

6 https://en.wikipedia.org/wiki/Message_passing
7 https://en.wikipedia.org/wiki/Algorithm
8 https://en.wikipedia.org/wiki/Inference
9 https://en.wikipedia.org/wiki/Graphical_model
10 https://en.wikipedia.org/wiki/Bayesian_network
11 https://en.wikipedia.org/wiki/Markov_random_field
12 https://en.wikipedia.org/wiki/Marginal_distribution
13 https://en.wikipedia.org/wiki/Artificial_intelligence
14 https://en.wikipedia.org/wiki/Information_theory
15 https://en.wikipedia.org/wiki/Low-density_parity-check_codes
16 https://en.wikipedia.org/wiki/Turbo_codes
17 https://en.wikipedia.org/wiki/Thermodynamic_free_energy
18 https://en.wikipedia.org/wiki/Satisfiability
19 https://en.wikipedia.org/wiki/Judea_Pearl
20 https://en.wikipedia.org/wiki/Tree_(graph_theory)
21 https://en.wikipedia.org/wiki/Polytree


If X={Xi } is a set of discrete22 random variables23 with a joint24 mass function25 p, the
marginal distribution26 of a single Xi is simply the summation of p over all other variables:

pXi (xi ) = ∑x′ : x′i =xi p(x′ ).

However, this quickly becomes computationally prohibitive: if there are 100 binary vari-
ables, then one needs to sum over 299 ≈ 6.338 × 1029 possible values. By exploiting the
polytree structure, belief propagation allows the marginals to be computed much more
efficiently.
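The count quoted above is just 2^(100−1): a brute-force marginal for one of 100 binary variables sums over every joint setting of the other 99:

```python
# Number of terms in a brute-force marginal over 99 remaining binary variables.
terms = 2 ** 99
print(f"{terms:.3e}")   # about 6.338e+29
```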

50.1 Description of the sum-product algorithm

Variants of the belief propagation algorithm exist for several types of graphical models
(Bayesian networks27 and Markov random fields28 [5] in particular). We describe here the
variant that operates on a factor graph29 . A factor graph is a bipartite graph30 containing
nodes corresponding to variables V and factors F, with edges between variables and the
factors in which they appear. We can write the joint mass function:

p(x) = ∏a∈F fa (xa )

where xa is the vector of neighboring variable nodes to the factor node a. Any Bayesian
network31 or Markov random field32 can be represented as a factor graph by using a factor
for each node with its parents or a factor for each node with its neighborhood respectively.[6]
The algorithm works by passing real-valued functions called messages along the edges
between the hidden nodes. More precisely, if v is a variable node and a is a factor node
connected to v in the factor graph, the messages from v to a, (denoted by µv→a ) and from
a to v (µa→v ), are real-valued functions whose domain is Dom(v), the set of values that can
be taken by the random variable associated with v. These messages contain the ”influence”
that one variable exerts on another. The messages are computed differently depending on
whether the node receiving the message is a variable node or a factor node. Keeping the
same notation:
• A message from a variable node v to a factor node a is the product of the messages from
all other neighboring factor nodes (except the recipient; alternatively one can say the
recipient sends as message the constant function equal to ”1”):

22 https://en.wikipedia.org/wiki/Discrete_probability_distribution
23 https://en.wikipedia.org/wiki/Random_variable
24 https://en.wikipedia.org/wiki/Joint_distribution
25 https://en.wikipedia.org/wiki/Probability_mass_function
26 https://en.wikipedia.org/wiki/Marginal_distribution
27 https://en.wikipedia.org/wiki/Bayesian_networks
28 https://en.wikipedia.org/wiki/Markov_random_fields
29 https://en.wikipedia.org/wiki/Factor_graph
30 https://en.wikipedia.org/wiki/Bipartite_graph
31 https://en.wikipedia.org/wiki/Bayesian_network
32 https://en.wikipedia.org/wiki/Markov_random_field



∀xv ∈ Dom(v), µv→a (xv ) = ∏a∗ ∈N (v)\{a} µa∗ →v (xv ).

where N(v) is the set of neighboring (factor) nodes to v. If N (v) \ {a} is empty, then
µv→a (xv ) is set to the uniform distribution.
• A message from a factor node a to a variable node v is the product of the factor with
messages from all other nodes, marginalized over all variables except the one associated
with v:
∀xv ∈ Dom(v), µa→v (xv ) = ∑x′a : x′v =xv fa (x′a ) ∏v∗ ∈N (a)\{v} µv∗ →a (x′v∗ ).

where N(a) is the set of neighboring (variable) nodes to a. If N (a) \ {v} is empty then
µa→v (xv ) = fa (xv ), since in this case xv = xa .
As shown by the previous formula: the complete marginalization is reduced to a sum of
products of simpler terms than the ones appearing in the full joint distribution. This is the
reason why it is called the sum-product algorithm.
In a typical run, each message will be updated iteratively from the previous value of the
neighboring messages. Different schedules can be used for updating the messages. In the
case where the graphical model is a tree, an optimal scheduling reaches convergence
after computing each message only once (see next sub-section). When the factor graph
has cycles, such an optimal scheduling does not exist, and a typical choice is to update all
messages simultaneously at each iteration.
Upon convergence (if convergence happened), the estimated marginal distribution of each
node is proportional to the product of all messages from adjoining factors (missing the
normalization constant):

pXv (xv ) ∝ ∏a∈N (v) µa→v (xv ).

Likewise, the estimated joint marginal distribution of the set of variables belonging to one
factor is proportional to the product of the factor and the messages from the variables:

pXa (xa ) ∝ fa (xa ) ∏v∈N (a) µv→a (xv ).

In the case where the factor graph is acyclic (i.e. is a tree or a forest), these estimated
marginals actually converge to the true marginals in a finite number of iterations. This can
be shown by mathematical induction33 .
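The message updates above can be exercised end-to-end on a toy model. The sketch below (pure Python; the chain x1 – f12 – x2 – f23 – x3 and its factor tables are made up for illustration) runs a few flooding sweeps and compares the resulting marginals with brute-force summation:

```python
import itertools
import math

domains = {"x1": (0, 1), "x2": (0, 1), "x3": (0, 1)}
factors = {                        # factor name -> (variables, factor function)
    "f1": (("x1",), lambda a: 2.0 if a["x1"] == 1 else 1.0),
    "f12": (("x1", "x2"), lambda a: 2.0 if a["x1"] == a["x2"] else 1.0),
    "f23": (("x2", "x3"), lambda a: 3.0 if a["x2"] != a["x3"] else 1.0),
}
nbrs = {v: [f for f, (vs, _) in factors.items() if v in vs] for v in domains}

# All messages start as the constant function 1.
mu_vf = {(v, f): {x: 1.0 for x in domains[v]} for v in domains for f in nbrs[v]}
mu_fv = {(f, v): {x: 1.0 for x in domains[v]} for f, (vs, _) in factors.items() for v in vs}

for _ in range(4):                 # a few sweeps; enough to cover the tree's diameter
    for (v, f) in mu_vf:           # variable -> factor: product of the other factors' messages
        mu_vf[(v, f)] = {x: math.prod(mu_fv[(g, v)][x] for g in nbrs[v] if g != f)
                         for x in domains[v]}
    for (f, v) in mu_fv:           # factor -> variable: sum out the other variables
        vs, func = factors[f]
        others = [u for u in vs if u != v]
        mu_fv[(f, v)] = {
            x: sum(func(dict(zip(others, combo), **{v: x}))
                   * math.prod(mu_vf[(u, f)][c] for u, c in zip(others, combo))
                   for combo in itertools.product(*(domains[u] for u in others)))
            for x in domains[v]}

def bp_marginal(v):
    """Estimated marginal: product of all incoming factor messages, normalized."""
    raw = {x: math.prod(mu_fv[(f, v)][x] for f in nbrs[v]) for x in domains[v]}
    z = sum(raw.values())
    return {x: p / z for x, p in raw.items()}

def exact_marginal(v):
    """Brute-force marginal by enumerating the full joint distribution."""
    names = list(domains)
    raw = {x: 0.0 for x in domains[v]}
    for combo in itertools.product(*(domains[n] for n in names)):
        a = dict(zip(names, combo))
        raw[a[v]] += math.prod(func(a) for (_, func) in factors.values())
    z = sum(raw.values())
    return {x: p / z for x, p in raw.items()}
```

Since the toy factor graph is a tree, the flooding schedule converges and the two marginals agree exactly.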

50.2 Exact algorithm for trees

In the case when the factor graph34 is a tree35 , the belief propagation algorithm will compute
the exact marginals. Furthermore, with proper scheduling of the message updates, it will
terminate after 2 steps. This optimal scheduling can be described as follows:

33 https://en.wikipedia.org/wiki/Mathematical_induction
34 https://en.wikipedia.org/wiki/Factor_graph
35 https://en.wikipedia.org/wiki/Tree_(graph_theory)


Before starting, the graph is oriented by designating one node as the root; any non-root
node which is connected to only one other node is called a leaf.
In the first step, messages are passed inwards: starting at the leaves, each node passes a
message along the (unique) edge towards the root node. The tree structure guarantees that
it is possible to obtain messages from all other adjoining nodes before passing the message
on. This continues until the root has obtained messages from all of its adjoining nodes.
The second step involves passing the messages back out: starting at the root, messages are
passed in the reverse direction. The algorithm is completed when all leaves have received
their messages.
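The two passes can be realized by ordering messages with a breadth-first traversal from the chosen root (a small sketch on a hypothetical 5-node tree; node labels are arbitrary):

```python
from collections import deque

tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}   # adjacency list
root = 0

# BFS from the root gives every node a parent and a depth ordering.
parent, order, dq = {root: None}, [root], deque([root])
while dq:
    u = dq.popleft()
    for w in tree[u]:
        if w not in parent:
            parent[w] = u
            order.append(w)
            dq.append(w)

# Pass 1 (inward): deepest nodes first, each node sends towards the root.
inward = [(u, parent[u]) for u in reversed(order) if parent[u] is not None]
# Pass 2 (outward): the same edges in reverse direction and reverse order.
outward = [(p, u) for (u, p) in reversed(inward)]
schedule = inward + outward        # every directed edge exactly once
```

Because `inward` visits nodes in reverse BFS order, every node has already received its children's messages before sending its own, which is exactly the guarantee described above.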

50.3 Approximate algorithm for general graphs

Curiously, although it was originally designed for acyclic graphical models, it was found that
the Belief Propagation algorithm can be used in general graphs36 . The algorithm is then
sometimes called loopy belief propagation, because graphs typically contain cycles37 ,
or loops. The initialization and scheduling of message updates must be adjusted slightly
(compared with the previously described schedule for acyclic graphs) because graphs might
not contain any leaves. Instead, one initializes all variable messages to 1 and uses the same
message definitions above, updating all messages at every iteration (although messages
coming from known leaves or tree-structured subgraphs may no longer need updating after
sufficient iterations). It is easy to show that in a tree, the message definitions of this modified
procedure will converge to the set of message definitions given above within a number of
iterations equal to the diameter38 of the tree.
The precise conditions under which loopy belief propagation will converge are still not well
understood; it is known that on graphs containing a single loop it converges in most cases,
but the probabilities obtained might be incorrect.[7] Several sufficient (but not necessary)
conditions for convergence of loopy belief propagation to a unique fixed point exist.[8] There
exist graphs which will fail to converge, or which will oscillate between multiple states over
repeated iterations. Techniques like EXIT charts39 can provide an approximate visualization
of the progress of belief propagation and an approximate test for convergence.
There are other approximate methods for marginalization including variational methods40
and Monte Carlo methods41 .
One method of exact marginalization in general graphs is called the junction tree algo-
rithm42 , which is simply belief propagation on a modified graph guaranteed to be a tree.
The basic premise is to eliminate cycles by clustering them into single nodes.

36 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
37 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
38 https://en.wikipedia.org/wiki/Diameter_(graph_theory)
39 https://en.wikipedia.org/wiki/EXIT_chart
40 https://en.wikipedia.org/wiki/Variational_Bayesian_methods
41 https://en.wikipedia.org/wiki/Monte_Carlo_method
42 https://en.wikipedia.org/wiki/Junction_tree_algorithm


50.4 Related algorithm and complexity issues

A similar algorithm is commonly referred to as the Viterbi algorithm43 , but also known as a
special case of the max-product or min-sum algorithm, which solves the related problem of
maximization, or most probable explanation. Instead of attempting to solve for the marginals,
the goal here is to find the values x that maximize the global function (i.e. the most probable
values in a probabilistic setting), and it can be defined using the arg max44 :
x∗ = arg maxx g(x).
An algorithm that solves this problem is nearly identical to belief propagation, with the
sums replaced by maxima in the definitions.[9]
It is worth noting that inference45 problems like marginalization and maximization are
NP-hard46 to solve exactly and approximately (at least for relative error47 ) in a graphical
model. More precisely, the marginalization problem defined above is #P-complete48 and
maximization is NP-complete49 .
The memory usage of belief propagation can be reduced through the use of the Island
algorithm50 (at a small cost in time complexity).

50.5 Relation to free energy

The sum-product algorithm is related to the calculation of free energy51 in thermodynamics52 .
Let Z be the partition function53 . A probability distribution
P (X) = (1/Z) ∏fj fj (xj )

(as per the factor graph representation) can be viewed as a measure of the internal energy54
present in a system, computed as

E(X) = − log ∏fj fj (xj ).

The free energy of the system is then


F = U − H = ∑X P (X)E(X) + ∑X P (X) log P (X).

43 https://en.wikipedia.org/wiki/Viterbi_algorithm
44 https://en.wikipedia.org/wiki/Arg_max
45 https://en.wikipedia.org/wiki/Inference
46 https://en.wikipedia.org/wiki/NP-hard
47 https://en.wikipedia.org/wiki/Approximation_error
48 https://en.wikipedia.org/wiki/Sharp-P-complete
49 https://en.wikipedia.org/wiki/NP-complete
50 https://en.wikipedia.org/wiki/Island_algorithm
51 https://en.wikipedia.org/wiki/Thermodynamic_free_energy
52 https://en.wikipedia.org/wiki/Thermodynamics
53 https://en.wikipedia.org/wiki/Partition_function_(mathematics)
54 https://en.wikipedia.org/wiki/Internal_energy

687
Belief propagation

It can then be shown that the points of convergence of the sum-product algorithm represent
the points where the free energy in such a system is minimized. Similarly, it can be shown
that a fixed point of the iterative belief propagation algorithm in graphs with cycles is a
stationary point of a free energy approximation.[10]

50.6 Generalized belief propagation (GBP)

Belief propagation algorithms are normally presented as message update equations on a
factor graph, involving messages between variable nodes and their neighboring factor nodes
and vice versa. Considering messages between regions in a graph is one way of generalizing
the belief propagation algorithm.[10] There are several ways of defining the set of regions in
a graph that can exchange messages. One method uses ideas introduced by Kikuchi55 in
the physics literature,[11][12][13] and is known as Kikuchi's cluster variation method56 .[14]
Improvements in the performance of belief propagation algorithms are also achievable by
breaking the replica symmetry in the distributions of the fields (messages). This generalization
leads to a new kind of algorithm called survey propagation57 (SP), which has proved
to be very efficient in NP-complete58 problems like satisfiability59[1] and graph coloring60 .
The cluster variational method and the survey propagation algorithms are two different
improvements to belief propagation. The name generalized survey propagation61 (GSP) is
waiting to be assigned to the algorithm that merges both generalizations.

50.7 Gaussian belief propagation (GaBP)

Gaussian belief propagation is a variant of the belief propagation algorithm when the un-
derlying distributions are Gaussian62 . The first work analyzing this special model was the
seminal work of Weiss and Freeman.[15]
The GaBP algorithm solves the following marginalization problem:

P(x_i) = (1/Z) ∫ exp(−(1/2)xᵀAx + bᵀx) ∏_{j≠i} dx_j

where Z is a normalization constant, A is a symmetric positive definite matrix63 (inverse covariance matrix a.k.a. precision matrix) and b is the shift vector.

55 https://en.wikipedia.org/w/index.php?title=Ryoichi_Kikuchi&action=edit&redlink=1
https://en.wikipedia.org/w/index.php?title=Cluster_variation_method&action=edit&
56
redlink=1
57 https://en.wikipedia.org/w/index.php?title=Survey_propagation&action=edit&redlink=1
58 https://en.wikipedia.org/wiki/NP-complete
59 https://en.wikipedia.org/wiki/Satisfiability
60 https://en.wikipedia.org/wiki/Graph_coloring
https://en.wikipedia.org/w/index.php?title=Generalized_survey_propagation&action=
61
edit&redlink=1
62 https://en.wikipedia.org/wiki/Normal_distribution
63 https://en.wikipedia.org/wiki/Positive-definite_matrix


Equivalently, it can be shown that using the Gaussian model, the solution of the marginal-
ization problem is equivalent to the MAP64 assignment problem:
arg max_x P(x) = (1/Z) exp(−(1/2)xᵀAx + bᵀx).

This problem is also equivalent to the following minimization problem of the quadratic form:

min_x (1/2)xᵀAx − bᵀx,

which is in turn equivalent to the linear system of equations

Ax = b.
Convergence of the GaBP algorithm is easier to analyze (relative to the general BP case)
and there are two known sufficient convergence conditions. The first one was formulated
by Weiss et al. in the year 2000: the algorithm converges when the information matrix A is diagonally dominant65 .
The second convergence condition was formulated by Johnson et al.[16] in 2006: the algorithm converges when the
spectral radius66 of the matrix satisfies
ρ(I − |D^{−1/2}AD^{−1/2}|) < 1,
where D = diag(A). Later, Su and Wu established the necessary and sufficient convergence
conditions for synchronous GaBP and damped GaBP, as well as another sufficient conver-
gence condition for asynchronous GaBP. For each case, the convergence condition involves
verifying 1) a set (determined by A) being non-empty, 2) the spectral radius of a certain
matrix being smaller than one, and 3) the singularity issue (when converting BP message
into belief) does not occur.[17]
The GaBP algorithm was linked to the linear algebra domain,[18] and it was shown that the
GaBP algorithm can be viewed as an iterative algorithm for solving the linear system of
equations Ax = b where A is the information matrix and b is the shift vector. Empirically,
the GaBP algorithm is shown to converge faster than classical iterative methods like the
Jacobi method, the Gauss−Seidel method67 , successive over-relaxation68 , and others.[19]
Additionally, the GaBP algorithm is shown to be immune to numerical problems of the
preconditioned conjugate gradient69 method.[20]
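The linear-algebra view can be illustrated with one of the classical iterative methods mentioned above. The sketch below (plain Python, illustrative names; it implements the Jacobi method, not GaBP itself) solves Ax = b for a diagonally dominant information matrix A by repeatedly updating each coordinate from the others:

```python
def jacobi_solve(A, b, iterations=100):
    """Solve A x = b for diagonally dominant A via Jacobi iteration:
    x_i <- (b_i - sum_{j != i} A[i][j] * x_j) / A[i][i]."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Symmetric, diagonally dominant information matrix and shift vector.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi_solve(A, b)
# At the fixed point the residual A x - b is (numerically) zero.
```

In the Gaussian model the solution x of this system is exactly the vector of marginal means, which is the sense in which GaBP acts as an iterative linear solver.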

50.8 Syndrome-based BP decoding

The previous description of the BP algorithm is called codeword-based decoding, which calculates
the approximate marginal probability P (x|X), given the received codeword X. There
is an equivalent form,[21] which calculates P (e|s), where s is the syndrome of the received
codeword X and e is the decoded error. The decoded input vector is x = X + e. This variation
only changes the interpretation of the mass function fa (Xa ). Explicitly, the messages
are

64 https://en.wikipedia.org/wiki/Maximum_A_Posteriori
65 https://en.wikipedia.org/wiki/Diagonally_dominant
66 https://en.wikipedia.org/wiki/Spectral_radius
67 https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method
68 https://en.wikipedia.org/wiki/Successive_over-relaxation
69 https://en.wikipedia.org/wiki/Conjugate_gradient_method



∀x_v ∈ Dom(v),  μ_{v→a}(x_v) = P(X_v) ∏_{a*∈N(v)\{a}} μ_{a*→v}(x_v),

where P(X_v) is the prior error probability on variable v, and

∀x_v ∈ Dom(v),  μ_{a→v}(x_v) = ∑_{x'_a : x'_v = x_v} δ(syndrome(x'_v) = s) ∏_{v*∈N(a)\{v}} μ_{v*→a}(x'_{v*}).

This syndrome-based decoder doesn't require information on the received bits, and thus can be
adapted to quantum codes, where the only available information is the measurement syndrome.
In the binary case, x_i ∈ {0, 1}, these messages can be simplified, giving an exponential
reduction of 2^{|{v}|+|N(v)|} in the complexity.[22][23]

Define the log-likelihood ratios l_v = log(u_{v→a}(x_v = 0) / u_{v→a}(x_v = 1)) and L_a = log(u_{a→v}(x_v = 0) / u_{a→v}(x_v = 1)). Then:

v → a:  l_v = l_v^{(0)} + ∑_{a*∈N(v)\{a}} L_{a*}

a → v:  L_a = (−1)^{s_a} 2 tanh^{−1} ∏_{v*∈N(a)\{v}} tanh(l_{v*}/2)

where l_v^{(0)} = log(P(x_v = 0)/P(x_v = 1)) = const.

The posterior log-likelihood ratio can be estimated as l_v = l_v^{(0)} + ∑_{a∈N(v)} L_a .
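The two binary update rules can be sketched directly (a hedged illustration in Python; the function names and message values are made up, and `incoming_*` stands for the messages from the relevant neighborhood):

```python
from math import tanh, atanh, prod

def variable_to_check(l0, incoming_La):
    """v -> a update: prior LLR plus all incoming check messages
    except the one from the destination check a."""
    return l0 + sum(incoming_La)

def check_to_variable(s_a, incoming_lv):
    """a -> v update (the "tanh rule"); the sign flips when the
    syndrome bit s_a = 1."""
    p = prod(tanh(l / 2.0) for l in incoming_lv)
    return (-1) ** s_a * 2.0 * atanh(p)

# With a single incoming message and syndrome bit 0, the tanh rule
# returns that message unchanged, since 2*atanh(tanh(l/2)) = l.
```

This is why LLR-domain decoders need only one real number per message instead of a table over all configurations.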

50.9 References
1. B, A.; M, M.; Z, R. (2005). ”S :
A   ”. Random Structures & Algorithms. 27 (2):
201–226. arXiv70 :cs/021200271 . doi72 :10.1002/rsa.2005773 .
2. P, J74 (1982). ”R B   : A -
  ”75 (PDF). Proceedings of the Second National
Conference on Artificial Intelligence. AAAI-82: Pittsburgh, PA76 . Menlo Park, Cali-
fornia: AAAI Press. pp. 133–136. Retrieved 28 March 2009.
3. K, J H.; P, J77 (1983). ”A    
      ”78 (PDF). Proceedings
of the Eighth International Joint Conference on Artificial Intelligence. IJCAI-83:
Karlsruhe, Germany79 . 1. pp. 190–193. Retrieved 20 March 2016.

70 https://en.wikipedia.org/wiki/ArXiv_(identifier)
71 http://arxiv.org/abs/cs/0212002
72 https://en.wikipedia.org/wiki/Doi_(identifier)
73 https://doi.org/10.1002%2Frsa.20057
74 https://en.wikipedia.org/wiki/Judea_Pearl
75 https://www.aaai.org/Papers/AAAI/1982/AAAI82-032.pdf
76 http://www.aaai.org/Library/AAAI/aaai82contents.php
77 https://en.wikipedia.org/wiki/Judea_Pearl
78 http://www.ijcai.org/Proceedings/83-1/Papers/041.pdf
79 http://www.ijcai.org/proceedings/1983-1


4. P, J80 (1988). Probabilistic Reasoning in Intelligent Systems: Networks


of Plausible Inference (2nd ed.). San Francisco, CA: Morgan Kaufmann. ISBN81 978-
1-55860-479-782 .
5. Y, J.S.; F, W.T.; Y. (J 2003). ”U B
P  I G”83 . I L, G; N,
B (.). Exploring Artificial Intelligence in the New Millennium. Morgan
Kaufmann. pp. 239–236. ISBN84 978-1-55860-811-585 . Retrieved 30 March 2009.
6. W, M. J.; J, M. I. (2007). ”2.1 P D 
G”. Graphical Models, Exponential Families, and Variational Inference. Foun-
dations and Trends in Machine Learning. 1. pp. 5–9. doi86 :10.1561/220000000187 .
7. W, Y (2000). ”C  L P P
 G M  L”. Neural Computation88 . 12 (1): 1–41.
doi89 :10.1162/08997660030001588090 . PMID91 1063693292 .
8. M, J; K, H (2007). ”S C  C 
 S–P A”. IEEE Transactions on Information Theory93 .
53 (12): 4422–4437. arXiv94 :cs/050403095 . doi96 :10.1109/TIT.2007.90916697 .
9. L, H-A (2004). ”A I  F G”. IEEE
Signal Processing Magazine98 . 21 (1): 28–41. Bibcode99 :2004ISPM...21...28L100 .
doi101 :10.1109/msp.2004.1267047102 .
10. Y, J.S.; F, W.T.; W, Y.; Y. (J 2005). ”C
-      -
”103 . IEEE Transactions on Information Theory104 . 51 (7): 2282–2312. Cite-

80 https://en.wikipedia.org/wiki/Judea_Pearl
81 https://en.wikipedia.org/wiki/ISBN_(identifier)
82 https://en.wikipedia.org/wiki/Special:BookSources/978-1-55860-479-7
83 http://www.merl.com/publications/TR2001-022/
84 https://en.wikipedia.org/wiki/ISBN_(identifier)
85 https://en.wikipedia.org/wiki/Special:BookSources/978-1-55860-811-5
86 https://en.wikipedia.org/wiki/Doi_(identifier)
87 https://doi.org/10.1561%2F2200000001
88 https://en.wikipedia.org/wiki/Neural_Computation_(journal)
89 https://en.wikipedia.org/wiki/Doi_(identifier)
90 https://doi.org/10.1162%2F089976600300015880
91 https://en.wikipedia.org/wiki/PMID_(identifier)
92 http://pubmed.ncbi.nlm.nih.gov/10636932
93 https://en.wikipedia.org/wiki/IEEE_Transactions_on_Information_Theory
94 https://en.wikipedia.org/wiki/ArXiv_(identifier)
95 http://arxiv.org/abs/cs/0504030
96 https://en.wikipedia.org/wiki/Doi_(identifier)
97 https://doi.org/10.1109%2FTIT.2007.909166
https://en.wikipedia.org/w/index.php?title=IEEE_Signal_Processing_Magazine&action=
98
edit&redlink=1
99 https://en.wikipedia.org/wiki/Bibcode_(identifier)
100 https://ui.adsabs.harvard.edu/abs/2004ISPM...21...28L
101 https://en.wikipedia.org/wiki/Doi_(identifier)
102 https://doi.org/10.1109%2Fmsp.2004.1267047
103 http://www.merl.com/publications/TR2004-040/
104 https://en.wikipedia.org/wiki/IEEE_Transactions_on_Information_Theory


SeerX105 10.1.1.3.5650106 . doi107 :10.1109/TIT.2005.850085108 . Retrieved 28 March 2009.
11. K, R (15 M 1951). ”A T  C P-
”. Physical Review. 81 (6): 988–1003. Bibcode109 :1951PhRv...81..988K110 .
doi111 :10.1103/PhysRev.81.988112 .
12. K, M; K, R; W, T (1953). ”A T
 C P. III. D D   C
V M”. The Journal of Chemical Physics. 21 (3): 434–448. Bib-
code113 :1953JChPh..21..434K114 . doi115 :10.1063/1.1698926116 .
13. K, R; B, S G. (1967). ”I   C-
‐V M”. The Journal of Chemical Physics. 47 (1): 195–203. Bib-
code117 :1967JChPh..47..195K118 . doi119 :10.1063/1.1711845120 .
14. P, A (2005). ”C    
    ”. Journal of Physics A: Math-
ematical and General. 38 (33): R309–R339. arXiv121 :cond-mat/0508216122 .
123
Bibcode :2005JPhA...38R.309P . 124 doi :10.1088/0305-4470/38/33/R01126 .
125
129
ISSN127 0305-4470128 .[permanent dead link ]
15. W, Y; F, W T. (O 2001). ”C 
B P  G G M  A T-
”. Neural Computation130 . 13 (10): 2173–2200. CiteSeerX131 10.1.1.44.794132 .
doi133 :10.1162/089976601750541769134 . PMID135 11570995136 .

105 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
106 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.3.5650
107 https://en.wikipedia.org/wiki/Doi_(identifier)
108 https://doi.org/10.1109%2FTIT.2005.850085
109 https://en.wikipedia.org/wiki/Bibcode_(identifier)
110 https://ui.adsabs.harvard.edu/abs/1951PhRv...81..988K
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1103%2FPhysRev.81.988
113 https://en.wikipedia.org/wiki/Bibcode_(identifier)
114 https://ui.adsabs.harvard.edu/abs/1953JChPh..21..434K
115 https://en.wikipedia.org/wiki/Doi_(identifier)
116 https://doi.org/10.1063%2F1.1698926
117 https://en.wikipedia.org/wiki/Bibcode_(identifier)
118 https://ui.adsabs.harvard.edu/abs/1967JChPh..47..195K
119 https://en.wikipedia.org/wiki/Doi_(identifier)
120 https://doi.org/10.1063%2F1.1711845
121 https://en.wikipedia.org/wiki/ArXiv_(identifier)
122 http://arxiv.org/abs/cond-mat/0508216
123 https://en.wikipedia.org/wiki/Bibcode_(identifier)
124 https://ui.adsabs.harvard.edu/abs/2005JPhA...38R.309P
125 https://en.wikipedia.org/wiki/Doi_(identifier)
126 https://doi.org/10.1088%2F0305-4470%2F38%2F33%2FR01
127 https://en.wikipedia.org/wiki/ISSN_(identifier)
128 http://www.worldcat.org/issn/0305-4470
130 https://en.wikipedia.org/wiki/Neural_Computation_(journal)
131 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
132 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.44.794
133 https://en.wikipedia.org/wiki/Doi_(identifier)
134 https://doi.org/10.1162%2F089976601750541769
135 https://en.wikipedia.org/wiki/PMID_(identifier)
136 http://pubmed.ncbi.nlm.nih.gov/11570995


16. M, D M.; J, J K.; W, A S. (O
2006). ”W-     G  ”137 .
Journal of Machine Learning Research138 . 7: 2031–2064. Retrieved 28 March 2009.
17. S, Q; W, Y-C (M 2015). ”O  
 G  ”. IEEE Trans. Signal Process.139 63 (5): 1144–
1155. Bibcode140 :2015ITSP...63.1144S141 . doi142 :10.1109/TSP.2015.2389755143 .
18. Gaussian belief propagation solver for systems of linear equations. By O. Shental, D.
Bickson, P. H. Siegel, J. K. Wolf, and D. Dolev, IEEE Int. Symp. on Inform. Theory
(ISIT), Toronto, Canada, July 2008. 144 Archived145 14 June 2011 at the Wayback
Machine146
19. Linear Detection via Belief Propagation. Danny Bickson, Danny Dolev, Ori Shental,
Paul H. Siegel and Jack K. Wolf. In the 45th Annual Allerton Conference on Commu-
nication, Control, and Computing, Allerton House, Illinois, 7 Sept. 147 Archived148
14 June 2011 at the Wayback Machine149
20. Distributed large scale network utility maximization. D. Bickson, Y. Tock, A. Zymnis,
S. Boyd and D. Dolev. In the International symposium on information theory (ISIT),
July 2009. 150 Archived151 14 June 2011 at the Wayback Machine152
21. D, M A. (1 D 2006). ”R  ”I T-
, I,  L A  D J. C. MK”,
C U P, 2003”. ACM SIGACT News. 37 (4): 34.
doi153 :10.1145/1189056.1189063154 . ISSN155 0163-5700156 .
22. F, T (17 N 2009). ”S   B -
 ”157 (PDF).
23. L, Y-H; P, D (22 M 2019). ”N B-P
D  Q E-C C”. Physical Review Let-

137 http://jmlr.csail.mit.edu/papers/v7/malioutov06a.html
138 https://en.wikipedia.org/wiki/Journal_of_Machine_Learning_Research
139 https://en.wikipedia.org/wiki/IEEE_Trans._Signal_Process.
140 https://en.wikipedia.org/wiki/Bibcode_(identifier)
141 https://ui.adsabs.harvard.edu/abs/2015ITSP...63.1144S
142 https://en.wikipedia.org/wiki/Doi_(identifier)
143 https://doi.org/10.1109%2FTSP.2015.2389755
144 http://www.cs.huji.ac.il/labs/danss/p2p/gabp/
https://web.archive.org/web/20110614012544/http://www.cs.huji.ac.il/labs/danss/p2p/
145
gabp/
146 https://en.wikipedia.org/wiki/Wayback_Machine
147 http://www.cs.huji.ac.il/labs/danss/p2p/gabp/
https://web.archive.org/web/20110614012544/http://www.cs.huji.ac.il/labs/danss/p2p/
148
gabp/
149 https://en.wikipedia.org/wiki/Wayback_Machine
150 http://www.cs.huji.ac.il/labs/danss/p2p/gabp/
https://web.archive.org/web/20110614012544/http://www.cs.huji.ac.il/labs/danss/p2p/
151
gabp/
152 https://en.wikipedia.org/wiki/Wayback_Machine
153 https://en.wikipedia.org/wiki/Doi_(identifier)
154 https://doi.org/10.1145%2F1189056.1189063
155 https://en.wikipedia.org/wiki/ISSN_(identifier)
156 http://www.worldcat.org/issn/0163-5700
157 http://dde.binghamton.edu/filler/mct/lectures/22/mct-lect22-v0.pdf


ters. 122 (20). arXiv158 :1811.07835159 . doi160 :10.1103/physrevlett.122.200501161 .


ISSN162 0031-9007163 . PMID164 31172756165 .

50.10 Further reading


• Bickson, Danny. (2009). Gaussian Belief Propagation Resource Page166 —Webpage
containing recent publications as well as Matlab source code.
• B, C M. (2006). ”C 8: G ”167 (PDF). Pat-
tern Recognition and Machine Learning. Springer. pp. 359–418. ISBN168 978-0-387-
31073-2169 . Retrieved 20 March 2014.
• Coughlan, James. (2009). A Tutorial Introduction to Belief Propagation170 .
• L, H-A (2004). ”A I  F G”. IEEE
Signal Processing Magazine. 21 (1): 28–41. Bibcode171 :2004ISPM...21...28L172 .
173
doi :10.1109/MSP.2004.1267047 . 174

• Mackenzie, Dana (2005). ”Communication Speed Nears Terminal Velocity175 ”, New Sci-
entist176 . 9 July 2005. Issue 2507 (Registration required)
• W, H (2007). Iterative Receiver Design177 . C U
P. ISBN178 978-0-521-87315-4179 .
• Y, J.S.; F, W.T.; W, Y. (J 2003). ”U B-
 P  I G”180 . I L, G; N,
B (.). Exploring Artificial Intelligence in the New Millennium. Morgan
Kaufmann. pp. 239–269. ISBN181 978-1-55860-811-5182 . Retrieved 30 March 2009.
• Y, J.S.; F, W.T.; W, Y. (J 2005). ”C
-      -

158 https://en.wikipedia.org/wiki/ArXiv_(identifier)
159 http://arxiv.org/abs/1811.07835
160 https://en.wikipedia.org/wiki/Doi_(identifier)
161 https://doi.org/10.1103%2Fphysrevlett.122.200501
162 https://en.wikipedia.org/wiki/ISSN_(identifier)
163 http://www.worldcat.org/issn/0031-9007
164 https://en.wikipedia.org/wiki/PMID_(identifier)
165 http://pubmed.ncbi.nlm.nih.gov/31172756
166 https://www.cs.cmu.edu/~bickson/gabp/index.html
http://research.microsoft.com/en-us/um/people/cmbishop/prml/pdf/Bishop-PRML-
167
sample.pdf
168 https://en.wikipedia.org/wiki/ISBN_(identifier)
169 https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-31073-2
170 http://computerrobotvision.org/2009/tutorial_day/crv09_belief_propagation_v2.pdf
171 https://en.wikipedia.org/wiki/Bibcode_(identifier)
172 https://ui.adsabs.harvard.edu/abs/2004ISPM...21...28L
173 https://en.wikipedia.org/wiki/Doi_(identifier)
174 https://doi.org/10.1109%2FMSP.2004.1267047
https://www.newscientist.com/article/mg18725071-400-communication-speed-nears-
175
terminal-velocity/
176 https://en.wikipedia.org/wiki/New_Scientist
177 http://www.cambridge.org/us/catalogue/catalogue.asp?isbn=9780521873154
178 https://en.wikipedia.org/wiki/ISBN_(identifier)
179 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-87315-4
180 http://www.merl.com/publications/TR2001-022/
181 https://en.wikipedia.org/wiki/ISBN_(identifier)
182 https://en.wikipedia.org/wiki/Special:BookSources/978-1-55860-811-5


”183 . IEEE Transactions on Information Theory184 . 51 (7): 2282–2312. CiteSeerX185 10.1.1.3.5650186 . doi187 :10.1109/TIT.2005.850085188 . Retrieved 28 March 2009.

183 http://www.merl.com/publications/TR2004-040/
184 https://en.wikipedia.org/wiki/IEEE_Transactions_on_Information_Theory
185 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
186 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.3.5650
187 https://en.wikipedia.org/wiki/Doi_(identifier)
188 https://doi.org/10.1109%2FTIT.2005.850085

51 Bellman–Ford algorithm

Bellman–Ford algorithm
Class: Single-source shortest path problem (for weighted directed graphs)
Data structure: Graph
Worst-case performance: Θ(|V ||E|)
Best-case performance: Θ(|E|)
Worst-case space complexity: Θ(|V |)



The Bellman–Ford algorithm is an algorithm1 that computes shortest paths2 from a single source vertex3 to all of the other vertices in a weighted digraph4 .[1] It is slower than
Dijkstra's algorithm5 for the same problem, but more versatile, as it is capable of handling
graphs in which some of the edge weights are negative numbers. The algorithm was first
proposed by Alfonso Shimbel (19556 ), but is instead named after Richard Bellman7 and
Lester Ford Jr.8 , who published it in 19589 and 195610 , respectively.[2] Edward F. Moore11
also published the same algorithm in 1957, and for this reason it is also sometimes called
the Bellman–Ford–Moore algorithm.[1]
Negative edge weights are found in various applications of graphs, hence the usefulness of
this algorithm.[3] If a graph contains a ”negative cycle” (i.e. a cycle12 whose edges sum to a
negative value) that is reachable from the source, then there is no cheapest path: any path
that has a point on the negative cycle can be made cheaper by one more walk13 around
the negative cycle. In such a case, the Bellman–Ford algorithm can detect and report the
negative cycle.[1][4]

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Shortest_path
3 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
4 https://en.wikipedia.org/wiki/Weighted_digraph
5 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
6 #CITEREFShimbel1955
7 https://en.wikipedia.org/wiki/Richard_Bellman
8 https://en.wikipedia.org/wiki/L._R._Ford_Jr.
9 #CITEREFBellman1958
10 #CITEREFFord1956
11 https://en.wikipedia.org/wiki/Edward_F._Moore
12 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
13 https://en.wikipedia.org/wiki/Walk_(graph_theory)

699
Bellman–Ford algorithm

51.1 Algorithm

Figure 130 In this example graph, assuming that A is the source and edges are
processed in the worst order, from right to left, it requires the full |V|−1 or 4 iterations for
the distance estimates to converge. Conversely, if the edges are processed in the best
order, from left to right, the algorithm converges in a single iteration.

Like Dijkstra's algorithm14 , Bellman–Ford proceeds by relaxation15 , in which approximations to the correct distance are replaced by better ones until they eventually reach the

14 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
15 https://en.wikipedia.org/wiki/Relaxation_(iterative_method)


solution. In both algorithms, the approximate distance to each vertex is always an overesti-
mate of the true distance, and is replaced by the minimum of its old value and the length of
a newly found path. However, Dijkstra's algorithm uses a priority queue16 to greedily17 se-
lect the closest vertex that has not yet been processed, and performs this relaxation process
on all of its outgoing edges; by contrast, the Bellman–Ford algorithm simply relaxes all the
edges, and does this |V | − 1 times, where |V | is the number of vertices in the graph. In each
of these repetitions, the number of vertices with correctly calculated distances grows, from
which it follows that eventually all vertices will have their correct distances. This method
allows the Bellman–Ford algorithm to be applied to a wider class of inputs than Dijkstra's algorithm.
Bellman–Ford runs in O(|V | · |E|) time18 , where |V | and |E| are the number of vertices and
edges respectively.
function BellmanFord(list vertices, list edges, vertex source) is
    distance[], predecessor[]

    // This implementation takes in a graph, represented as
    // lists of vertices and edges, and fills two arrays
    // (distance and predecessor) about the shortest path
    // from the source to each vertex

    // Step 1: initialize graph
    for each vertex v in vertices do
        distance[v] := inf         // Initialize the distance to all vertices to infinity
        predecessor[v] := null     // And having a null predecessor

    distance[source] := 0          // The distance from the source to itself is, of course, zero

    // Step 2: relax edges repeatedly
    for i from 1 to size(vertices)−1 do    // just |V|−1 repetitions; i is never referenced
        for each edge (u, v) with weight w in edges do
            if distance[u] + w < distance[v] then
                distance[v] := distance[u] + w
                predecessor[v] := u

    // Step 3: check for negative-weight cycles
    for each edge (u, v) with weight w in edges do
        if distance[u] + w < distance[v] then
            error ”Graph contains a negative-weight cycle”

    return distance[], predecessor[]

Simply put, the algorithm initializes the distance to the source to 0 and all other nodes to
infinity. Then for all edges, if the distance to the destination can be shortened by taking the
edge, the distance is updated to the new lower value. At each iteration i that the edges are
scanned, the algorithm finds all shortest paths of at most length i edges (and possibly some
paths longer than i edges). Since the longest possible path without a cycle can be |V | − 1
edges, the edges must be scanned |V | − 1 times to ensure the shortest path has been found
for all nodes. A final scan of all the edges is performed and if any distance is updated, then
a path of length |V | edges has been found which can only occur if at least one negative
cycle exists in the graph.
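The three steps can be turned into a short runnable implementation. The sketch below (in Python, with an edge list of (u, v, w) triples; the graph used at the end is illustrative) follows the pseudocode directly:

```python
def bellman_ford(vertices, edges, source):
    """Single-source shortest paths; raises ValueError on a negative cycle.
    `edges` is a list of (u, v, w) triples for directed edges u -> v."""
    # Step 1: initialize distances and predecessors
    distance = {v: float("inf") for v in vertices}
    predecessor = {v: None for v in vertices}
    distance[source] = 0

    # Step 2: relax every edge |V| - 1 times
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u

    # Step 3: one more pass over the edges detects negative-weight cycles
    for u, v, w in edges:
        if distance[u] + w < distance[v]:
            raise ValueError("Graph contains a negative-weight cycle")
    return distance, predecessor

vertices = ["s", "a", "b", "c"]
edges = [("s", "a", 4), ("s", "b", 5), ("a", "c", -3), ("b", "c", 2)]
dist, pred = bellman_ford(vertices, edges, "s")
# dist["c"] == 1 via s -> a -> c (4 + (-3)), beating s -> b -> c (5 + 2)
```

Note how the negative edge (a, c, −3) is handled correctly, which is exactly the case where Dijkstra's greedy selection can fail.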

16 https://en.wikipedia.org/wiki/Priority_queue
17 https://en.wikipedia.org/wiki/Greedy_algorithm
18 https://en.wikipedia.org/wiki/Big_O_notation


51.2 Proof of correctness


The correctness of the algorithm can be shown by induction31 :


Lemma. After i repetitions of the for loop,
• if Distance(u) is not infinity, it is equal to the length of some path from s to u; and
• if there is a path from s to u with at most i edges, then Distance(u) is at most the length
of the shortest path from s to u with at most i edges.
Proof. For the base case of induction, consider i=0 and the moment before the for loop is
executed for the first time. Then, for the source vertex, source.distance = 0, which is
correct. For other vertices u, u.distance = infinity, which is also correct because there
is no path from source to u with 0 edges.
For the inductive case, we first prove the first part. Consider a moment when a vertex's dis-
tance is updated by v.distance := u.distance + uv.weight. By inductive assumption,
u.distance is the length of some path from source to u. Then u.distance + uv.weight is
the length of the path from source to v that follows the path from source to u and then goes
to v.
For the second part, consider a shortest path P (there may be more than one) from
source to v with at most i edges. Let u be the last vertex before v on this path. Then,
the part of the path from source to u is a shortest path from source to u with at most i-
1 edges, since if it were not, then there must be some strictly shorter path from source to
u with at most i-1 edges, and we could then append the edge uv to this path to obtain

31 https://en.wikipedia.org/wiki/Mathematical_induction


a path with at most i edges that is strictly shorter than P—a contradiction. By in-
ductive assumption, u.distance after i−1 iterations is at most the length of this path
from source to u. Therefore, uv.weight + u.distance is at most the length of P. In the
ith iteration, v.distance gets compared with uv.weight + u.distance, and is set equal
to it if uv.weight + u.distance is smaller. Therefore, after i iterations, v.distance is at
most the length of P, i.e., the length of the shortest path from source to v that uses at most
i edges.
If there are no negative-weight cycles, then every shortest path visits each vertex at most
once, so at step 3 no further improvements can be made. Conversely, suppose no improve-
ment can be made. Then for any cycle with vertices v[0], ..., v[k−1],
v[i].distance <= v[i-1 (mod k)].distance + v[i-1 (mod k)]v[i].weight
Summing around the cycle, the v[i].distance and v[i−1 (mod k)].distance terms cancel,
leaving
0 <= sum from 1 to k of v[i-1 (mod k)]v[i].weight
I.e., every cycle has nonnegative weight.

51.3 Finding negative cycles

When the algorithm is used to find shortest paths, the existence of negative cycles is a prob-
lem, preventing the algorithm from finding a correct answer. However, since it terminates
upon finding a negative cycle, the Bellman–Ford algorithm can be used for applications
in which this is the target to be sought – for example in cycle-cancelling32 techniques in
network flow33 analysis.[1]
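A common way to also recover the cycle itself (a sketch, not taken from the cited source) uses the predecessor array: when an edge (u, v) can still be relaxed after |V| − 1 passes, v is reachable from a negative cycle, and following predecessor links |V| times from v is guaranteed to land on a vertex of that cycle, which can then be collected:

```python
def find_negative_cycle(vertices, edges, source):
    """Return a list of vertices forming one negative cycle, or None.
    Assumes every vertex is reachable from `source`."""
    distance = {v: float("inf") for v in vertices}
    predecessor = {v: None for v in vertices}
    distance[source] = 0
    for _ in range(len(vertices) - 1):          # standard relaxation passes
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                predecessor[v] = u
    for u, v, w in edges:
        if distance[u] + w < distance[v]:
            # v is reachable from a negative cycle; |V| predecessor hops
            # are guaranteed to land on a vertex lying on the cycle.
            on_cycle = v
            for _ in range(len(vertices)):
                on_cycle = predecessor[on_cycle]
            cycle, x = [on_cycle], predecessor[on_cycle]
            while x != on_cycle:                # walk once around the cycle
                cycle.append(x)
                x = predecessor[x]
            cycle.reverse()
            return cycle
    return None

edges = [("s", "a", 1), ("a", "b", -2), ("b", "c", -1),
         ("c", "a", 1), ("c", "d", 5)]
cycle = find_negative_cycle(["s", "a", "b", "c", "d"], edges, "s")
# The cycle a -> b -> c -> a has total weight -2 and is reported.
```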

51.4 Applications in routing

A distributed variant of the Bellman–Ford algorithm is used in distance-vector routing protocols34 , for example the Routing Information Protocol35 (RIP). The algorithm is dis-
tributed because it involves a number of nodes (routers) within an Autonomous system
(AS)36 , a collection of IP networks typically owned by an ISP. It consists of the following
steps:
1. Each node calculates the distances between itself and all other nodes within the AS
and stores this information as a table.
2. Each node sends its table to all neighboring nodes.
3. When a node receives distance tables from its neighbors, it calculates the shortest
routes to all other nodes and updates its own table to reflect any changes.
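The three steps can be sketched as a synchronous toy simulation (illustrative Python; real protocols such as RIP exchange tables asynchronously and cap distances, neither of which is modeled here):

```python
INF = float("inf")

def distance_vector(nodes, links, rounds):
    """Toy synchronous distance-vector routing. `links` maps (u, v)
    pairs to link costs and is symmetrized below."""
    for u, v in list(links):
        links[(v, u)] = links[(u, v)]          # make link costs symmetric
    neighbors = {u: [v for v in nodes if (u, v) in links] for u in nodes}
    # Step 1: each node starts with distance 0 to itself, infinity elsewhere.
    table = {u: {t: (0 if u == t else INF) for t in nodes} for u in nodes}
    for _ in range(rounds):
        # Steps 2-3: nodes exchange tables, then apply the Bellman-Ford
        # update d_u(t) = min over neighbors v of cost(u, v) + d_v(t).
        old = {u: dict(table[u]) for u in nodes}
        for u in nodes:
            for t in nodes:
                if t != u and neighbors[u]:
                    table[u][t] = min(links[(u, v)] + old[v][t]
                                      for v in neighbors[u])
    return table

nodes = ["A", "B", "C"]
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 10}
tables = distance_vector(nodes, links, rounds=3)
# After convergence, A routes to C through B at total cost 3,
# not over the direct cost-10 link.
```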

32 https://en.wikipedia.org/w/index.php?title=Cycle-cancelling&action=edit&redlink=1
33 https://en.wikipedia.org/wiki/Flow_network
34 https://en.wikipedia.org/wiki/Distance-vector_routing_protocol
35 https://en.wikipedia.org/wiki/Routing_Information_Protocol
36 https://en.wikipedia.org/wiki/Autonomous_system_(Internet)


The main disadvantages of the Bellman–Ford algorithm in this setting are as follows:
• It does not scale well.
• Changes in network topology37 are not reflected quickly since updates are spread node-by-node.
• Count to infinity38 : if link or node failures render a node unreachable from some set of
other nodes, those nodes may spend forever gradually increasing their estimates of the
distance to it, and in the meantime there may be routing loops.

51.5 Improvements

The Bellman–Ford algorithm may be improved in practice (although not in the worst case)
by the observation that, if an iteration of the main loop of the algorithm terminates without
making any changes, the algorithm can be immediately terminated, as subsequent iterations
will not make any more changes. With this early termination condition, the main loop may
in some cases use many fewer than |V| − 1 iterations, even though the worst case of the
algorithm remains unchanged.
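This early-termination check can be sketched as follows (Python, using the same hypothetical edge-list representation as standard presentations of the algorithm):

```python
def bellman_ford_early_exit(vertices, edges, source):
    """Bellman-Ford with the early-termination improvement: stop as soon
    as a full pass over the edges makes no change, since all subsequent
    passes would also make no change."""
    distance = {v: float("inf") for v in vertices}
    distance[source] = 0
    for _ in range(len(vertices) - 1):
        changed = False
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w
                changed = True
        if not changed:        # no edge relaxed: later passes are no-ops
            break
    return distance
```
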
Yen (1970)39 described two more improvements to the Bellman–Ford algorithm for a graph
without negative-weight cycles; again, while making the algorithm faster in practice, they
do not change its O(|V | · |E|) worst case time bound. His first improvement reduces the
number of relaxation steps that need to be performed within each iteration of the algorithm.
If a vertex v has a distance value that has not changed since the last time the edges out of
v were relaxed, then there is no need to relax the edges out of v a second time. In this way,
as the number of vertices with correct distance values grows, the number whose outgoing
edges need to be relaxed in each iteration shrinks, leading to a constant-factor savings
in time for dense graphs40 .
Yen's second improvement first assigns some arbitrary linear order on all vertices and then
partitions the set of all edges into two subsets. The first subset, Ef , contains all edges (vi ,
vj ) such that i < j; the second, Eb , contains edges (vi , vj ) such that i > j. Each vertex is
visited in the order v1 , v2 , ..., v|V| , relaxing each outgoing edge from that vertex in Ef . Each
vertex is then visited in the order v|V| , v|V|−1 , ..., v1 , relaxing each outgoing edge from that
vertex in Eb . Each iteration of the main loop of the algorithm, after the first one, adds at
least two edges to the set of edges whose relaxed distances match the correct shortest path
distances: one from Ef and one from Eb . This modification reduces the worst-case number
of iterations of the main loop of the algorithm from |V| − 1 to |V |/2.[5][6]
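A sketch of this forward/backward sweeping (Python; vertices are assumed to be numbered 0..n−1, which serves as the arbitrary linear order, and the pass bound follows the |V|/2 analysis cited above):

```python
def yen_bellman_ford(n, edges, source):
    """Sketch of Yen's second improvement.  Each pass first sweeps the
    vertices in increasing order, relaxing edges (u, v) with u < v (the
    set Ef), then in decreasing order, relaxing edges with u > v (Eb).
    At most ceil(n / 2) passes are needed, per the analysis above."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # Partition the edges; sorting by source vertex realizes the sweeps.
    forward = sorted((u, v, w) for u, v, w in edges if u < v)
    backward = sorted(((u, v, w) for u, v, w in edges if u > v), reverse=True)
    for _ in range((n + 1) // 2):
        changed = False
        for u, v, w in forward:            # visit v_0 .. v_{n-1}
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        for u, v, w in backward:           # visit v_{n-1} .. v_0
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    return dist
```
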
Another improvement, by Bannister & Eppstein (2012)41 , replaces the arbitrary linear order
of the vertices used in Yen's second improvement by a random permutation42 . This change
makes the worst case for Yen's improvement (in which the edges of a shortest path strictly
alternate between the two subsets Ef and Eb ) very unlikely to happen. With a randomly

37 https://en.wikipedia.org/wiki/Network_topology
38 https://en.wikipedia.org/wiki/Count_to_infinity#Count-to-infinity_problem
39 #CITEREFYen1970
40 https://en.wikipedia.org/wiki/Dense_graph
41 #CITEREFBannisterEppstein2012
42 https://en.wikipedia.org/wiki/Random_permutation


permuted vertex ordering, the expected43 number of iterations needed in the main loop is
at most |V |/3.[6]

51.6 Trivia

In China, an algorithm that adds a first-in first-out queue to the Bellman–Ford algorithm,
known as SPFA44 , published by Edward F. Moore in 1959 and rediscovered by Fanding Duan
in 1994, is popular with students who take part in the National Olympiad in Informatics in
Provinces45 and the International Collegiate Programming Contest46 .[7]

51.7 Notes
1. Bang-Jensen & Gutin (2000)47
2. Schrijver (2005)48
3. Sedgewick (2002)49 .
4. Kleinberg & Tardos (2006)50 .
5. Cormen et al., 2nd ed., Problem 24-1, pp. 614–615.
6. See Sedgewick's web exercises51 for Algorithms, 4th ed., exercises 5 and 12 (retrieved
2013-01-30).
7. D, F (1994), ”关于最短路径的SPFA快速算法”52 , Journal of Southwest
Jiaotong University, 29 (2): 207–212

51.8 References

51.8.1 Original sources


• S, A. (1955). Structure in communication nets. Proceedings of the Symposium
on Information Networks. New York, New York: Polytechnic Press of the Polytechnic
Institute of Brooklyn. pp. 199–203.CS1 maint: ref=harv (link53 )

43 https://en.wikipedia.org/wiki/Expected_value
44 https://en.wikipedia.org/wiki/SPFA
45 https://en.wikipedia.org/w/index.php?title=National_Olympiad_in_Informatics_in_Provinces&action=edit&redlink=1
46 https://en.wikipedia.org/wiki/International_Collegiate_Programming_Contest
47 #CITEREFBang-JensenGutin2000
48 #CITEREFSchrijver2005
49 #CITEREFSedgewick2002
50 #CITEREFKleinbergTardos2006
51 http://algs4.cs.princeton.edu/44sp/
52 http://wenku.baidu.com/view/3b8c5d778e9951e79a892705.html


• B, R54 (1958). ”O   ”. Quarterly of Applied Math-


ematics. 16: 87–90. doi55 :10.1090/qam/10243556 . MR57 010243558 .CS1 maint: ref=harv
(link59 )
• F, L R. J.60 (A 14, 1956). Network Flow Theory61 . P P-923.
S M, C: RAND C.CS1 maint: ref=harv (link62 )
• M, E F.63 (1959). The shortest path through a maze. Proc. Internat.
Sympos. Switching Theory 1957, Part II. Cambridge, Massachusetts: Harvard Univ.
Press. pp. 285–292. MR64 011471065 .CS1 maint: ref=harv (link66 )
• Y, J Y. (1970). ”A       
        ”. Quarterly of Ap-
plied Mathematics. 27 (4): 526–530. doi67 :10.1090/qam/25382268 . MR69 025382270 .CS1
maint: ref=harv (link71 )
• B, M. J.; E, D.72 (2012). Randomized speedup of the Bellman–
Ford algorithm. Analytic Algorithmics and Combinatorics (ANALCO12), Kyoto, Japan.
pp. 41–47. arXiv73 :1111.541474 . Bibcode75 :2011arXiv1111.5414B76 .CS1 maint: ref=harv
(link77 )

51.8.2 Secondary sources


• B-J, J; G, G (2000). ”S 2.3.4: T B-
F-M ”. Digraphs: Theory, Algorithms and Applications78 (F
.). ISBN79 978-1-84800-997-480 .CS1 maint: ref=harv (link81 )

54 https://en.wikipedia.org/wiki/Richard_Bellman
55 https://en.wikipedia.org/wiki/Doi_(identifier)
56 https://doi.org/10.1090%2Fqam%2F102435
57 https://en.wikipedia.org/wiki/MR_(identifier)
58 http://www.ams.org/mathscinet-getitem?mr=0102435
60 https://en.wikipedia.org/wiki/L._R._Ford_Jr.
61 http://www.rand.org/pubs/papers/P923.html
63 https://en.wikipedia.org/wiki/Edward_F._Moore
64 https://en.wikipedia.org/wiki/MR_(identifier)
65 http://www.ams.org/mathscinet-getitem?mr=0114710
67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1090%2Fqam%2F253822
69 https://en.wikipedia.org/wiki/MR_(identifier)
70 http://www.ams.org/mathscinet-getitem?mr=0253822
72 https://en.wikipedia.org/wiki/David_Eppstein
73 https://en.wikipedia.org/wiki/ArXiv_(identifier)
74 http://arxiv.org/abs/1111.5414
75 https://en.wikipedia.org/wiki/Bibcode_(identifier)
76 https://ui.adsabs.harvard.edu/abs/2011arXiv1111.5414B
78 http://www.cs.rhul.ac.uk/books/dbook/
79 https://en.wikipedia.org/wiki/ISBN_(identifier)
80 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-997-4

706
References

• S, A (2005). ”O     


( 1960)”82 (PDF). Handbook of Discrete Optimization. Elsevier: 1–68.CS1 maint:
ref=harv (link83 )
• C, T H.84 ; L, C E.85 ; R, R L.86 Introduc-
tion to Algorithms87 . MIT P  MG-H., Second Edition. MIT Press
and McGraw-Hill, 2001. ISBN88 0-262-03293-789 . Section 24.1: The Bellman–Ford al-
gorithm, pp. 588–592. Problem 24-1, pp. 614–615. Third Edition. MIT Press, 2009.
ISBN90 978-0-262-53305-891 . Section 24.1: The Bellman–Ford algorithm, pp. 651–655.
• H, G T.; P, G; S, S (2008). ”C
6: G A”. Algorithms in a Nutshell. O'Reilly Media92 . pp. 160–164.
ISBN93 978-0-596-51624-694 .CS1 maint: ref=harv (link95 )
• K, J96 ; T, É97 (2006). Algorithm Design. New York: Pearson
Education, Inc.CS1 maint: ref=harv (link98 )
• S, R99 (2002). ”S 21.7: N E W”. Algo-
rithms in Java100 (3 .). ISBN101 0-201-36121-3102 . A   -
103  2008-05-31. R 2007-05-28.CS1 maint: ref=harv (link104 )

82 http://homepages.cwi.nl/~lex/files/histco.pdf
84 https://en.wikipedia.org/wiki/Thomas_H._Cormen
85 https://en.wikipedia.org/wiki/Charles_E._Leiserson
86 https://en.wikipedia.org/wiki/Ron_Rivest
87 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
88 https://en.wikipedia.org/wiki/ISBN_(identifier)
89 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
90 https://en.wikipedia.org/wiki/ISBN_(identifier)
91 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-53305-8
92 https://en.wikipedia.org/wiki/O%27Reilly_Media
93 https://en.wikipedia.org/wiki/ISBN_(identifier)
94 https://en.wikipedia.org/wiki/Special:BookSources/978-0-596-51624-6
96 https://en.wikipedia.org/wiki/Jon_Kleinberg
97 https://en.wikipedia.org/wiki/%C3%89va_Tardos
99 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
100 https://web.archive.org/web/20080531142256/http://safari.oreilly.com/0201361213/ch21lev1sec7
101 https://en.wikipedia.org/wiki/ISBN_(identifier)
102 https://en.wikipedia.org/wiki/Special:BookSources/0-201-36121-3
103 http://safari.oreilly.com/0201361213/ch21lev1sec7

52 Bidirectional search

Graph and tree search algorithms

• α–β
• A*
• B*
• Backtracking
• Beam
• Bellman–Ford
• Best-first
• Bidirectional
• Borůvka
• Branch & bound
• BFS
• British Museum
• D*
• DFS
• Dijkstra
• Edmonds
• Floyd–Warshall
• Fringe search
• Hill climbing
• IDA*
• Iterative deepening
• Johnson
• Jump point
• Kruskal
• Lexicographic BFS
• LPA*
• Prim
• SMA*
• SPFA

Listings

• Graph algorithms
• Search algorithms
• List of graph algorithms

Related topics

• Dynamic programming
• Graph traversal
• Tree traversal
• Search games

Bidirectional search is a graph search algorithm1 that finds a shortest path2 from an
initial vertex3 to a goal vertex in a directed graph4 . It runs two simultaneous searches:
one forward from the initial state, and one backward from the goal, stopping when the two
meet. The reason for this approach is that in many cases it is faster: for instance, in a
simplified model of search problem complexity in which both searches expand a tree5 with
branching factor6 b, and the distance from start to goal is d, each of the two searches has
complexity O(b^(d/2)) (in Big O notation7 ), and the sum of these two search times is much
less than the O(b^d) complexity that would result from a single search from the beginning
to the goal.
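A minimal sketch of the idea on an unweighted graph (Python; plain BFS from both ends rather than any of the heuristic variants discussed below, with the graph assumed to be an adjacency-list dict usable in both directions):

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Sketch of bidirectional search on an unweighted, undirected graph.
    Each round expands one full BFS level on the smaller frontier; once
    the two visited sets intersect, the shortest path length is the
    minimum, over all meeting nodes, of the two distances.  Returns
    None if start and goal are disconnected."""
    dist = ({start: 0}, {goal: 0})            # distances found by each search
    frontier = (deque([start]), deque([goal]))
    while frontier[0] and frontier[1]:
        meet = set(dist[0]) & set(dist[1])
        if meet:
            return min(dist[0][x] + dist[1][x] for x in meet)
        d = 0 if len(frontier[0]) <= len(frontier[1]) else 1
        for _ in range(len(frontier[d])):     # expand one whole level
            u = frontier[d].popleft()
            for v in graph[u]:
                if v not in dist[d]:
                    dist[d][v] = dist[d][u] + 1
                    frontier[d].append(v)
    meet = set(dist[0]) & set(dist[1])        # final check before giving up
    return min(dist[0][x] + dist[1][x] for x in meet) if meet else None
```
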
Andrew Goldberg8 and others explained the correct termination conditions for the bidirec-
tional version of Dijkstra’s Algorithm9 .[1]
As in A*10 search, bi-directional search can be guided by a heuristic11 estimate of the
remaining distance to the goal (in the forward tree) or from the start (in the backward
tree).
Ira Pohl (197112 ) was the first one to design and implement a bi-directional heuristic search
algorithm. In his algorithm, the search trees emanating from the start and goal nodes failed
to meet in the middle of the solution space. The BHFFA algorithm of de Champeaux (1977)
fixed this defect.
A solution found by the uni-directional A* algorithm using an admissible heuristic has a
shortest path length; the same property holds for the BHFFA2 bidirectional heuristic version
described in de Champeaux (1983). BHFFA2 has, among other improvements, more careful
termination conditions than BHFFA.

1 https://en.wikipedia.org/wiki/Graph_search_algorithm
2 https://en.wikipedia.org/wiki/Shortest_path
3 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
4 https://en.wikipedia.org/wiki/Directed_graph
5 https://en.wikipedia.org/wiki/Tree_(graph_theory)
6 https://en.wikipedia.org/wiki/Branching_factor
7 https://en.wikipedia.org/wiki/Big_O_notation
8 https://en.wikipedia.org/wiki/Andrew_V._Goldberg
9 https://en.wikipedia.org/wiki/Dijkstra%E2%80%99s_Algorithm
10 https://en.wikipedia.org/wiki/A*_search_algorithm
11 https://en.wikipedia.org/wiki/Heuristic_(computer_science)
12 #CITEREFPohl1971


52.1 Description

A Bidirectional Heuristic Search is a state space search13 from some state s to another state
t, searching from s to t and from t to s simultaneously. It returns a valid list of operators
that, if applied to s, will give us t.
While it may seem as though the operators have to be invertible for the reverse search, it is
only necessary to be able to find, given any node n, the set of parent nodes of n such that
there exists some valid operator from each of the parent nodes to n. This has often been
likened to a one-way street in the route-finding domain: it is not necessary to be able to
travel down both directions, but it is necessary when standing at the end of the street to
determine the beginning of the street as a possible route.
Similarly, for those edges that have inverse arcs (i.e. arcs going in both directions) it is
not necessary that each direction be of equal cost. The reverse search will always use the
inverse cost (i.e. the cost of the arc in the forward direction). More formally, if n is a node
with parent p, then k1 (p, n) = k2 (n, p), defined as being the cost from p to n (Auer &
Kaindl 2004).

52.1.1 Terminology and notation

b
the branching factor14 of a search tree
k(n, m)
the cost associated with moving from node n to node m
g(n)
the cost from the root to the node n
h(n)
the heuristic estimate of the distance between the node n and the goal
s
the start state
t
the goal state (sometimes g, not to be confused with the function)
d
the current search direction. By convention, d is equal to 1 for the forward direction and
2 for the backward direction (Kwa 1989)
d′

13 https://en.wikipedia.org/wiki/State_space_search
14 https://en.wikipedia.org/wiki/Branching_factor


the opposite search direction (i.e. d′ = 3 − d)


TREEd
the search tree in direction d. If d = 1, the root is s, if d = 2, the root is t
OPENd
the leaves of TREEd (sometimes referred to as FRINGEd ). It is from this set that a
node is chosen for expansion. In bidirectional search, these are sometimes called the
search 'frontiers' or 'wavefronts', referring to how they appear when a search is represented
graphically. In this metaphor, a 'collision' occurs when, during the expansion phase, a node
from one wavefront is found to have successors in the opposing wavefront.
CLOSEDd
the non-leaf nodes of TREEd . This set contains the nodes already visited by the search

52.2 Approaches for Bidirectional Heuristic Search

Bidirectional algorithms can be broadly split into three categories: Front-to-Front, Front-
to-Back (or Front-to-End), and Perimeter Search (Kaindl & Kainz 1997). These differ by the
function used to calculate the heuristic.

52.2.1 Front-to-Back

Front-to-Back algorithms calculate the h value of a node n by using the heuristic estimate
between n and the root of the opposite search tree, s or t.
Front-to-Back is the most actively researched of the three categories. The current best
algorithm (at least in the Fifteen puzzle15 domain) is the BiMAX-BS*F algorithm, created
by Auer and Kaindl (Auer & Kaindl 2004).

52.2.2 Front-to-Front

Front-to-Front algorithms calculate the h value of a node n by using the heuristic estimate
between n and some subset of OPENd′ . The canonical example is that of the BHFFA16
(Bidirectional Heuristic Front-to-Front Algorithm17 ),[2] where the h function is defined as
the minimum of all heuristic estimates between the current node and the nodes on the
opposing front. Or, formally:
hd (n) = mini {H(n, oi ) | oi ∈ OPENd′ }

15 https://en.wikipedia.org/wiki/Fifteen_puzzle
16 https://en.wikipedia.org/w/index.php?title=BHFFA&action=edit&redlink=1
17 https://en.wikipedia.org/w/index.php?title=Bidirectional_Heuristic_Front-to-Front_Algorithm&action=edit&redlink=1


where H(n, o) returns an admissible (i.e. not overestimating) heuristic estimate of the
distance between nodes n and o.
Front-to-Front suffers from being excessively computationally demanding. Every time a
node n is put into the open list, its f = g + h value must be calculated. This involves
calculating a heuristic estimate from n to every node in the opposing OPEN set, as described
above. The OPEN sets increase in size exponentially for all domains with b > 1.
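Under these definitions the front-to-front heuristic value can be sketched directly (Python; H is assumed to be a caller-supplied admissible pairwise estimate):

```python
def front_to_front_h(n, open_other, H):
    """h_d(n) per the front-to-front definition above: the minimum of
    the admissible estimates H(n, o) over every node o on the opposing
    front OPEN_d'.  The cost, one H evaluation per node of the opposing
    OPEN set for every node generated, is what makes this approach
    computationally demanding as the fronts grow."""
    return min(H(n, o) for o in open_other)
```
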

52.3 References
1. Efficient Point-to-Point Shortest Path Algorithms18
2. de Champeaux 1977/1983
•  C, D; S, L (1977), ”A  -
   ”, Journal of the ACM19 , 24 (2): 177–191,
doi20 :10.1145/322003.32200421 .
•  C, D (1983), ”B   ”, Journal
of the ACM22 , 30 (1): 22–32, doi23 :10.1145/322358.32236024 .
• P, I (1971), ”B- S”,  M, B; M, D-
25 (.), Machine Intelligence, 6, Edinburgh University Press, pp. 127–140.
• R, S J.26 ; N, P27 (2002), ”3.4 U  -
”, Artificial Intelligence: A Modern Approach28 (2 .), P H.

18 http://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/EPP%20shortest%20path%20algorithms.pdf
19 https://en.wikipedia.org/wiki/Journal_of_the_ACM
20 https://en.wikipedia.org/wiki/Doi_(identifier)
21 https://doi.org/10.1145%2F322003.322004
22 https://en.wikipedia.org/wiki/Journal_of_the_ACM
23 https://en.wikipedia.org/wiki/Doi_(identifier)
24 https://doi.org/10.1145%2F322358.322360
25 https://en.wikipedia.org/wiki/Donald_Michie
26 https://en.wikipedia.org/wiki/Stuart_J._Russell
27 https://en.wikipedia.org/wiki/Peter_Norvig
28 https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach

53 Borůvka's algorithm

Figure 132 Animation of Boruvka's algorithm


Borůvka's algorithm is a greedy algorithm1 for finding a minimum spanning tree2 in a
graph for which all edge weights are distinct, or a minimum spanning forest in the case of
a graph that is not connected.
It was first published in 1926 by Otakar Borůvka3 as a method of constructing an efficient
electricity network4 for Moravia5 .[1][2][3] The algorithm was rediscovered by Choquet6 in
1938;[4] again by Florek7 , Łukasiewicz8 , Perkal9 , Steinhaus10 , and Zubrzycki11 in 1951;[5] and
again by Georges Sollin in 1965.[6] This algorithm is frequently called Sollin's algorithm,
especially in the parallel computing12 literature.
The algorithm begins by finding the minimum-weight edge incident to each vertex of the
graph, and adding all of those edges to the forest. Then, it repeats a similar process of
finding the minimum-weight edge from each tree constructed so far to a different tree, and
adding all of those edges to the forest. Each repetition of this process reduces the number of
trees, within each connected component of the graph, to at most half of this former value,
so after logarithmically many repetitions the process finishes. When it does, the set of edges
it has added forms the minimum spanning forest.

53.1 Pseudocode

Designating each vertex or set of connected vertices a ”component13 ”, pseudocode for
Borůvka's algorithm is:
algorithm Borůvka is
input: A graph G whose edges have distinct weights.
output: F is the minimum spanning forest of G.

Initialize a forest F to be a set of one-vertex trees, one for each vertex of the graph.

while F has more than one component do
Find the connected components of F and label each vertex of G by its component
Initialize the cheapest edge for each component to ”None”
for each edge uv of G do
if u and v have different component labels:
if uv is cheaper than the cheapest edge for the component of u then
Set uv as the cheapest edge for the component of u
if uv is cheaper than the cheapest edge for the component of v then
Set uv as the cheapest edge for the component of v
for each component whose cheapest edge is not ”None” do

1 https://en.wikipedia.org/wiki/Greedy_algorithm
2 https://en.wikipedia.org/wiki/Minimum_spanning_tree
3 https://en.wikipedia.org/wiki/Otakar_Bor%C5%AFvka
4 https://en.wikipedia.org/wiki/Electricity_network
5 https://en.wikipedia.org/wiki/Moravia
6 https://en.wikipedia.org/wiki/Gustave_Choquet
7 https://en.wikipedia.org/w/index.php?title=Kazimierz_Florek&action=edit&redlink=1
8 https://en.wikipedia.org/wiki/Jan_%C5%81ukasiewicz
9 https://en.wikipedia.org/w/index.php?title=Julian_Perkal&action=edit&redlink=1
10 https://en.wikipedia.org/wiki/Hugo_Steinhaus
11 https://en.wikipedia.org/w/index.php?title=Stefan_Zubrzycki&action=edit&redlink=1
12 https://en.wikipedia.org/wiki/Parallel_computing
13 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)


Add its cheapest edge to F

If edges do not have distinct weights, then a consistent tie-breaking rule (e.g. breaking ties
by the object identifiers of the edges) can be used. An optimization (not necessary for the
analysis) is to remove from G each edge that is found to connect two vertices in the same
component as each other.
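A compact sketch of the pseudocode above (Python; a simple union-find structure, an implementation choice not prescribed by the text, tracks component labels, and edge weights are assumed distinct):

```python
def boruvka(n, edges):
    """Boruvka's algorithm for vertices 0..n-1 and a list of (weight, u, v)
    edges with distinct weights.  Returns the minimum spanning forest as
    a set of edges."""
    parent = list(range(n))

    def find(x):                       # component label, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = set()
    while True:
        cheapest = {}                  # component label -> cheapest outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue               # internal edge: skip
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        if not cheapest:               # no edges between components: done
            break
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:               # may already have merged this round
                parent[ru] = rv
                forest.add((w, u, v))
    return forest
```
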

53.2 Complexity

Borůvka's algorithm can be shown to take O14 (log V) iterations of the outer loop until it
terminates, and therefore to run in time O15 (E log V), where E is the number of edges,
and V is the number of vertices in G. In planar graphs16 , and more generally in families
of graphs closed under graph minor17 operations, it can be made to run in linear time, by
removing all but the cheapest edge between each pair of components after each stage of the
algorithm.[7]

53.3 Example

Figure 133
Components: {A}, {B}, {C}, {D}, {E}, {F}, {G}
Description: This is our original weighted graph. The numbers near the edges indicate their weight. Initially, every vertex by itself is a component (blue circles).

14 https://en.wikipedia.org/wiki/Big_O_notation
15 https://en.wikipedia.org/wiki/Big_O_notation
16 https://en.wikipedia.org/wiki/Planar_graph
17 https://en.wikipedia.org/wiki/Graph_minor


Figure 134
Components: {A,B,D,F}, {C,E,G}
Description: In the first iteration of the outer loop, the minimum weight edge out of every component is added. Some edges are selected twice (AD, CE). Two components remain.

Figure 135
Components: {A,B,C,D,E,F,G}
Description: In the second and final iteration, the minimum weight edge out of each of the two remaining components is added. These happen to be the same edge. One component remains and we are done. The edge BD is not considered because both endpoints are in the same component.

53.4 Other algorithms

Other algorithms for this problem include Prim's algorithm18 and Kruskal's algorithm19 .
Fast parallel algorithms can be obtained by combining Prim's algorithm with Borůvka's.[8]
A faster randomized minimum spanning tree algorithm based in part on Borůvka's algo-
rithm due to Karger, Klein, and Tarjan runs in expected O(E) time.[9] The best known
(deterministic) minimum spanning tree algorithm by Bernard Chazelle20 is also based in
part on Borůvka's and runs in O(E α(E,V)) time, where α is the inverse of the Ackermann
function21 .[10] These randomized and deterministic algorithms combine steps of Borůvka's
algorithm, reducing the number of components that remain to be connected, with steps of
a different type that reduce the number of edges between pairs of components.

18 https://en.wikipedia.org/wiki/Prim%27s_algorithm
19 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
20 https://en.wikipedia.org/wiki/Bernard_Chazelle
21 https://en.wikipedia.org/wiki/Ackermann_function


53.5 Notes
1. BŮ, O22 (1926). ”O   ”23 [A 
  ]. Práce Mor. Přírodověd. Spol. V Brně III (in Czech
and German). 3: 37–58.
2. BŮ, O24 (1926). ”PŘĚ  Ř   
  (C        -
    )”. Elektronický Obzor (in
Czech). 15: 153–154.
3. NŘ, J25 ; M, E; NŘ, H (2001). ”O
BŮ     :   
 1926 , , ”. Discrete Mathematics26 . 233 (1–
3): 3–36. doi27 :10.1016/S0012-365X(00)00224-728 . hdl29 :10338.dmlcz/50041330 .
MR31 182559932 .
4. C, G33 (1938). ”É     ”.
Comptes Rendus de l'Académie des Sciences (in French). 206: 310–313.
5. F, K.; Ł, J.34 ; P, J.; S, H35 ; Z,
S. (1951). ”S        '  ”36 .
Colloquium Mathematicae (in French). 2: 282–285. MR37 004883238 .
6. S, G (1965). ”L   ”. Programming, Games,
and Transportation Networks (in French).
7. E, D39 (1999). ”S   ”. I S, J.-R.40 ;
U, J.41 (.). Handbook of Computational Geometry. Elsevier. pp. 425–461.;
M, M (2004). ”T     MST  
  ”42 (PDF). Archivum Mathematicum. 40 (3): 315–320..
8. B, D A.; C, G (2006). ”F - -
         ”.

22 https://en.wikipedia.org/wiki/Otakar_Bor%C5%AFvka
23 https://dml.cz/handle/10338.dmlcz/500114
24 https://en.wikipedia.org/wiki/Otakar_Bor%C5%AFvka
25 https://en.wikipedia.org/wiki/Jaroslav_Ne%C5%A1et%C5%99il
26 https://en.wikipedia.org/wiki/Discrete_Mathematics_(journal)
27 https://en.wikipedia.org/wiki/Doi_(identifier)
28 https://doi.org/10.1016%2FS0012-365X%2800%2900224-7
29 https://en.wikipedia.org/wiki/Hdl_(identifier)
30 http://hdl.handle.net/10338.dmlcz%2F500413
31 https://en.wikipedia.org/wiki/MR_(identifier)
32 http://www.ams.org/mathscinet-getitem?mr=1825599
33 https://en.wikipedia.org/wiki/Gustave_Choquet
34 https://en.wikipedia.org/wiki/Jan_%C5%81ukasiewicz
35 https://en.wikipedia.org/wiki/Hugo_Steinhaus
36 https://eudml.org/doc/209969
37 https://en.wikipedia.org/wiki/MR_(identifier)
38 http://www.ams.org/mathscinet-getitem?mr=0048832
39 https://en.wikipedia.org/wiki/David_Eppstein
40 https://en.wikipedia.org/wiki/J%C3%B6rg-R%C3%BCdiger_Sack
41 https://en.wikipedia.org/wiki/Jorge_Urrutia_Galicia
42 http://www.emis.de/journals/AM/04-3/am1139.pdf


Journal of Parallel and Distributed Computing. 66 (11): 1366–1378. Cite-
SeerX43 10.1.1.129.899144 . doi45 :10.1016/j.jpdc.2006.06.00146 .
9. K, D R.; K, P N.; T, R E. (1995). ”A -
 -      ”. Journal of the
ACM. 42 (2): 321–328. CiteSeerX47 10.1.1.39.901248 . doi49 :10.1145/201019.20102250 .
10. C, B (2000). ”A     
-A  ”51 (PDF). J. ACM. 47 (6): 1028–1047.
CiteSeerX52 10.1.1.115.231853 . doi54 :10.1145/355541.35556255 .

43 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
44 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.129.8991
45 https://en.wikipedia.org/wiki/Doi_(identifier)
46 https://doi.org/10.1016%2Fj.jpdc.2006.06.001
47 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
48 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.9012
49 https://en.wikipedia.org/wiki/Doi_(identifier)
50 https://doi.org/10.1145%2F201019.201022
51 http://www.cs.princeton.edu/~chazelle/pubs/mst.pdf
52 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
53 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.115.2318
54 https://en.wikipedia.org/wiki/Doi_(identifier)
55 https://doi.org/10.1145%2F355541.355562

54 Bottleneck traveling salesman
problem

The Bottleneck traveling salesman problem (bottleneck TSP) is a problem in dis-
crete1 or combinatorial optimization2 . The problem is to find the Hamiltonian cycle3 in a
weighted graph4 that minimizes the weight of the heaviest edge5 of the cycle.[1] It
was first formulated by Gilmore & Gomory (1964)6 with some additional constraints, and
in its full generality by Garfinkel & Gilbert (1978)7 .[1][2][3]

54.1 Complexity

The problem is known to be NP-hard8 . The decision problem9 version of this, ”for a given
length x is there a Hamiltonian cycle in a graph G with no edge longer than x?”, is NP-
complete10 . NP-completeness follows immediately by a reduction11 from the problem of
finding a Hamiltonian cycle.[4]

54.2 Algorithms

Another reduction, from the bottleneck TSP to the usual TSP (where the goal is to minimize
the sum of edge lengths), allows any algorithm for the usual TSP to also be used to solve
the bottleneck TSP. If the edge weights of the bottleneck TSP are replaced by any other
numbers that have the same relative order, then the bottleneck solution remains unchanged.
If, in addition, each number in the sequence exceeds the sum of all smaller numbers, then
the bottleneck solution will also equal the usual TSP solution. For instance, such a result
may be attained by resetting each weight to n^i , where n is the number of vertices in the
graph and i is the rank of the original weight of the edge in the sorted sequence of weights.
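The rank-based re-weighting can be sketched as follows (Python; the function name is illustrative):

```python
def bottleneck_reweight(n, weights):
    """Replace each edge weight by n**i, where i is the rank (from 1) of
    the weight among the distinct sorted values, as described above.  A
    tour of n edges whose heaviest original weight has rank i then has a
    transformed total of at most n**(i+1), which is less than any single
    edge of rank i+1, so minimizing the transformed sum also minimizes
    the original bottleneck."""
    rank = {w: i for i, w in enumerate(sorted(set(weights)), start=1)}
    return [n ** rank[w] for w in weights]
```
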

1 https://en.wikipedia.org/wiki/Discrete_optimization
2 https://en.wikipedia.org/wiki/Combinatorial_optimization
3 https://en.wikipedia.org/wiki/Hamiltonian_path
4 https://en.wikipedia.org/wiki/Weighted_graph
5 https://en.wikipedia.org/wiki/Edge_(graph_theory)
6 #CITEREFGilmoreGomory1964
7 #CITEREFGarfinkelGilbert1978
8 https://en.wikipedia.org/wiki/NP-hard
9 https://en.wikipedia.org/wiki/Decision_problem
10 https://en.wikipedia.org/wiki/NP-complete
11 https://en.wikipedia.org/wiki/Reduction_(complexity)


For instance, following this transformation, the Held–Karp algorithm12 could be used to
solve the bottleneck TSP in time O(n^2 2^n).[1]
Alternatively, the problem can be solved by performing a binary search13 or sequential
search14 for the smallest x such that the subgraph of edges of weight at most x has a
Hamiltonian cycle. This method leads to solutions whose running time is only a logarithmic
factor larger than the time to find a Hamiltonian cycle.[1]

54.3 Variations

In an asymmetric bottleneck TSP, there are cases where the weight from node A to
B is different from the weight from B to A (e.g. travel time between two cities with a
traffic jam in one direction).
The Euclidean bottleneck TSP, or planar bottleneck TSP, is the bottleneck TSP with
the distance being the ordinary Euclidean distance15 . The problem still remains NP-hard.
However, many heuristics work better for it than for other distance functions.
The maximum scatter traveling salesman problem is another variation of the trav-
eling salesman problem in which the goal is to find a Hamiltonian cycle that maximizes
the minimum edge length rather than minimizing the maximum length. Its applications
include the analysis of medical images, and the scheduling of metalworking steps in aircraft
manufacture to avoid heat buildup from steps that are nearby in both time and space.
It can be translated into an instance of the bottleneck TSP problem by negating all edge
lengths (or, to keep the results positive, subtracting them all from a large enough constant).
However, although this transformation preserves the optimal solution, it does not preserve
the quality of approximations to that solution.[1]
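The length-reversing transformation can be sketched as follows (the constant and names are illustrative):

```python
def scatter_to_bottleneck(lengths):
    # Subtract every edge length from a constant larger than the maximum
    # length: maximizing the minimum original length then equals
    # minimizing the maximum transformed length, and all weights stay positive.
    c = max(lengths.values()) + 1
    return {e: c - w for e, w in lengths.items()}
```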

54.4 Metric approximation algorithm

If the graph is a metric space16 then there is an efficient approximation algorithm17 that finds
a Hamiltonian cycle with maximum edge weight being no more than twice the optimum.
This result follows by Fleischner's theorem18 , that the square19 of a 2-vertex-connected
graph20 always contains a Hamiltonian cycle. It is easy to find a threshold value θ, the
smallest value such that the edges of weight at most θ form a 2-connected graph. Then θ
provides a valid lower bound on the bottleneck TSP weight, for the optimal bottleneck TSP
tour is itself a 2-connected graph and necessarily contains an edge of weight at least θ.
However, the square of the subgraph of edges of weight at most θ is Hamiltonian. By the
triangle inequality21 for metric spaces, its Hamiltonian cycle has edges of weight at most
2θ.[5][6]

12 https://en.wikipedia.org/wiki/Held%E2%80%93Karp_algorithm
13 https://en.wikipedia.org/wiki/Binary_search
14 https://en.wikipedia.org/wiki/Sequential_search
15 https://en.wikipedia.org/wiki/Euclidean_distance
16 https://en.wikipedia.org/wiki/Metric_space
17 https://en.wikipedia.org/wiki/Approximation_algorithm
18 https://en.wikipedia.org/wiki/Fleischner%27s_theorem
19 https://en.wikipedia.org/wiki/Graph_power
20 https://en.wikipedia.org/wiki/K-vertex-connected_graph
This approximation ratio is best possible. For, any unweighted graph can be transformed
into a metric space by setting its edge weights to 1 and setting the distance between all
nonadjacent pairs of vertices to 2. An approximation with ratio better than 2 in this metric
space could be used to determine whether the original graph contains a Hamiltonian cycle,
an NP-complete problem.[6]
Without the assumption that the input is a metric space, no finite approximation ratio is
possible.[1]

54.5 See also


• Travelling salesman problem22

54.6 References
1. Kabadi, Santosh N.; Punnen, Abraham P. (2007), ”The bottleneck TSP”,
in Gutin, Gregory; Punnen, Abraham P. (eds.), The Traveling Salesman
Problem and Its Variations, Combinatorial Optimization, Springer, pp. 697–735,
doi23 :10.1007/0-306-48213-4_1524 .
2. Gilmore, P. C.; Gomory, R. E.25 (1964), ”Sequencing a one state-variable ma-
chine: A solvable case of the traveling salesman problem”, Oper. Res.,
12: 655–679, doi26 :10.1287/opre.12.5.65527 , JSTOR28 16777229 .
3. Garfinkel, R. S.; Gilbert, K. C. (1978), ”The bottleneck traveling sales-
man problem: Algorithms and probabilistic analysis”, Journal of the ACM30 ,
25 (3): 435–448, doi31 :10.1145/322077.32208632 .
4. Garey, Michael R.33 ; Johnson, David S.34 (1979), Computers and Intractability:
A Guide to the Theory of NP-Completeness35 , W.H. Freeman, A2.3: ND24, p. 21236 ,
ISBN37 0-7167-1045-538 .

21 https://en.wikipedia.org/wiki/Triangle_inequality
22 https://en.wikipedia.org/wiki/Travelling_salesman_problem
23 https://en.wikipedia.org/wiki/Doi_(identifier)
24 https://doi.org/10.1007%2F0-306-48213-4_15
25 https://en.wikipedia.org/wiki/Ralph_E._Gomory
26 https://en.wikipedia.org/wiki/Doi_(identifier)
27 https://doi.org/10.1287%2Fopre.12.5.655
28 https://en.wikipedia.org/wiki/JSTOR_(identifier)
29 http://www.jstor.org/stable/167772
30 https://en.wikipedia.org/wiki/Journal_of_the_ACM
31 https://en.wikipedia.org/wiki/Doi_(identifier)
32 https://doi.org/10.1145%2F322077.322086
33 https://en.wikipedia.org/wiki/Michael_R._Garey
34 https://en.wikipedia.org/wiki/David_S._Johnson
https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_
35
NP-Completeness
36 https://archive.org/details/computersintract0000gare/page/
37 https://en.wikipedia.org/wiki/ISBN_(identifier)
38 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-1045-5

5. Parker, R. Gary; Rardin, Ronald L. (1984), ”Guaranteed performance
heuristics for the bottleneck travelling salesman problem”, Operations Re-
search Letters, 2 (6): 269–272, doi39 :10.1016/0167-6377(84)90077-440 .
6. Hochbaum, Dorit S.41 ; Shmoys, David B.42 (May 1986), ”A unified approach
to approximation algorithms for bottleneck problems”43 (PDF), Journal of
the ACM, New York, NY, USA: ACM, 33 (3): 533–550, doi44 :10.1145/5925.593345 .

39 https://en.wikipedia.org/wiki/Doi_(identifier)
40 https://doi.org/10.1016%2F0167-6377%2884%2990077-4
41 https://en.wikipedia.org/wiki/Dorit_S._Hochbaum
42 https://en.wikipedia.org/wiki/David_Shmoys
https://www.researchgate.net/profile/David_Shmoys/publication/220430962_A_
43 unified_approach_to_approximation_algorithms_for_bottleneck_problems/links/
57dc685508ae4e6f1846abde.pdf
44 https://en.wikipedia.org/wiki/Doi_(identifier)
45 https://doi.org/10.1145%2F5925.5933

55 Breadth-first search


Breadth-first search
(Infobox figure: order in which the nodes are expanded)
Class: Search algorithm
Data structure: Graph
Worst-case performance: O(|V | + |E|) = O(b^d )
Worst-case space complexity: O(|V |) = O(b^d )




Figure 137 Animated example of a breadth-first search

Breadth-first search (BFS) is an algorithm11 for traversing or searching tree12 or graph13
data structures. It starts at the tree root14 (or some arbitrary node of a graph, sometimes
referred to as a 'search key'[1] ), and explores all of the neighbor nodes at the present depth
prior to moving on to the nodes at the next depth level.
It uses the opposite strategy to depth-first search15 , which instead explores the node branch
as far as possible before being forced to backtrack and expand other nodes.[2]

11 https://en.wikipedia.org/wiki/Algorithm
12 https://en.wikipedia.org/wiki/Tree_data_structure
13 https://en.wikipedia.org/wiki/Graph_(data_structure)
14 https://en.wikipedia.org/wiki/Tree_(data_structure)#Terminology
15 https://en.wikipedia.org/wiki/Depth-first_search


BFS and its application in finding connected components16 of graphs were invented in 1945
by Konrad Zuse17 , in his (rejected) Ph.D. thesis on the Plankalkül18 programming language,
but this was not published until 1972.[3] It was reinvented in 1959 by Edward F. Moore19 ,
who used it to find the shortest path out of a maze,[4][5] and later developed by C. Y. Lee
into a wire routing20 algorithm (published 1961).[6]

55.1 Pseudocode

Input: A graph G and a starting vertex root of G
Output: Goal state. The parent links trace the shortest path back to root

procedure BFS(G, root) is
    let Q be a queue
    label root as discovered
    Q.enqueue(root)
    while Q is not empty do
        v := Q.dequeue()
        if v is the goal then
            return v
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as discovered then
                label w as discovered
                w.parent := v
                Q.enqueue(w)

55.1.1 More details

This non-recursive implementation is similar to the non-recursive implementation of depth-
first search21 , but differs from it in two ways:
1. it uses a queue22 (First In First Out) instead of a stack23 and
2. it checks whether a vertex has been discovered before enqueueing the vertex rather
than delaying this check until the vertex is dequeued from the queue.
The Q queue contains the frontier along which the algorithm is currently searching.
Nodes can be labelled as discovered by storing them in a set, or by an attribute on each
node, depending on the implementation.
Note that the word node is usually interchangeable with the word vertex.
The parent attribute of each node is useful for accessing the nodes in a shortest path, for
example by backtracking from the destination node up to the starting node, once the BFS
has been run and the predecessor nodes have been set.
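A direct Python rendering of the pseudocode, including the parent-link backtracking just described (the adjacency-dict representation and the function name are illustrative):

```python
from collections import deque

def bfs_path(graph, start, goal):
    # `graph` maps each node to an iterable of its neighbors.
    parent = {start: None}        # doubles as the set of discovered nodes
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            # Backtrack from the goal to the start via the parent links.
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in graph[v]:
            if w not in parent:   # check before enqueueing
                parent[w] = v
                queue.append(w)
    return None                   # goal unreachable from start
```

Because each vertex is enqueued at most once, the returned path has the minimum possible number of edges.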

16 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
17 https://en.wikipedia.org/wiki/Konrad_Zuse
18 https://en.wikipedia.org/wiki/Plankalk%C3%BCl
19 https://en.wikipedia.org/wiki/Edward_F._Moore
20 https://en.wikipedia.org/wiki/Routing_(electronic_design_automation)
21 https://en.wikipedia.org/wiki/Depth-first_search
22 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
23 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)


Breadth-first search produces a so-called breadth-first tree, illustrated in the following
example.

55.1.2 Example

The following is an example of the breadth-first tree obtained by running a BFS on German24
cities starting from Frankfurt:

Figure 138 An example map of Southern Germany with some connections between
cities

24 https://en.wikipedia.org/wiki/Germany


Figure 139 The breadth-first tree obtained when running BFS on the given map and
starting in Frankfurt

55.2 Analysis

55.2.1 Time and space complexity

The time complexity can be expressed as O(|V | + |E|), since every vertex and every edge
will be explored in the worst case. |V | is the number of vertices and |E| is the number of
edges in the graph. Note that O(|E|) may vary between O(1) and O(|V |^2 ), depending on
how sparse the input graph is.[7]
When the number of vertices in the graph is known ahead of time, and additional data
structures are used to determine which vertices have already been added to the queue, the
space complexity can be expressed as O(|V |), where |V | is the cardinality25 of the set of
vertices. This is in addition to the space required for the graph itself, which may vary
depending on the graph representation26 used by an implementation of the algorithm.
When working with graphs that are too large to store explicitly (or infinite), it is more
practical to describe the complexity of breadth-first search in different terms: to find the

25 https://en.wikipedia.org/wiki/Cardinality
26 https://en.wikipedia.org/wiki/Graph_(abstract_data_type)


nodes that are at distance d from the start node (measured in number of edge traversals),
BFS takes O(b^(d+1) ) time and memory, where b is the ”branching factor27 ” of the graph (the
average out-degree).[8]:81

55.2.2 Completeness

In the analysis of algorithms, the input to breadth-first search is assumed to be a finite
graph, represented explicitly as an adjacency list28 or similar representation. However, in
the application of graph traversal methods in artificial intelligence29 the input may be an
implicit representation30 of an infinite graph. In this context, a search method is described
as being complete if it is guaranteed to find a goal state if one exists. Breadth-first search
is complete, but depth-first search is not. When applied to infinite graphs represented
implicitly, breadth-first search will eventually find the goal state, but depth-first search
may get lost in parts of the graph that have no goal state and never return.[9]

55.3 BFS ordering

An enumeration of the vertices of a graph is said to be a BFS ordering if it is a possible
output of the application of BFS to this graph.
Let G = (V, E) be a graph with n vertices. Recall that N (v) is the set of neighbors of v.
Let σ = (v1 , . . . , vm ) be a list of distinct elements of V . For v ∈ V \ {v1 , . . . , vm }, let νσ (v) be
the least i such that vi is a neighbor of v, if such an i exists, and ∞ otherwise.
Let σ = (v1 , . . . , vn ) be an enumeration of the vertices of V . The enumeration σ is said to be a
BFS ordering (with source v1 ) if, for all 1 < i ≤ n, vi is the vertex w ∈ V \ {v1 , . . . , vi−1 } such
that ν(v1 ,...,vi−1 ) (w) is minimal. Equivalently, σ is a BFS ordering if, for all 1 ≤ i < j < k ≤ n
with vi ∈ N (vk ) \ N (vj ), there exists a neighbor vm of vj such that m < i.

55.4 Applications

Breadth-first search can be used to solve many problems in graph theory, for example:
• Copying garbage collection31 , Cheney's algorithm32
• Finding the shortest path33 between two nodes u and v, with path length measured by
number of edges (an advantage over depth-first search34 )[10]
• (Reverse) Cuthill–McKee35 mesh numbering

27 https://en.wikipedia.org/wiki/Branching_factor
28 https://en.wikipedia.org/wiki/Adjacency_list
29 https://en.wikipedia.org/wiki/Artificial_intelligence
30 https://en.wikipedia.org/wiki/Implicit_graph
31 https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
32 https://en.wikipedia.org/wiki/Cheney%27s_algorithm
33 https://en.wikipedia.org/wiki/Shortest_path
34 https://en.wikipedia.org/wiki/Depth-first_search
35 https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm


• Ford–Fulkerson method36 for computing the maximum flow37 in a flow network38
• Serialization/deserialization of a binary tree in level order (in contrast to serialization in
sorted order), which allows the tree to be reconstructed efficiently.
• Construction of the failure function of the Aho-Corasick39 pattern matcher.
• Testing bipartiteness of a graph40 .

55.5 See also


• Depth-first search41
• Iterative deepening depth-first search42
• Level structure43
• Lexicographic breadth-first search44
• Parallel breadth-first search45

55.6 References
1. ”Graph500 benchmark specification (supercomputer performance evalua-
tion)”46 . Graph500.org, 2010. Archived from the original47 on 2015-03-26.
Retrieved 2015-03-15.
2. Cormen, Thomas H.; et al. (2009). ”22.3”. Introduction to Algorithms. MIT
Press.
3. Zuse, Konrad48 (1972), Der Plankalkül49 (in German), Konrad Zuse Internet
Archive. See pp. 96–105 of the linked pdf file (internal numbering 2.47–2.56).
4. Moore, Edward F.50 (1959). ”The shortest path through a maze”. Proceed-
ings of the International Symposium on the Theory of Switching. Harvard University
Press. pp. 285–292. As cited by Cormen, Leiserson, Rivest, and Stein.

36 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
37 https://en.wikipedia.org/wiki/Maximum_flow_problem
38 https://en.wikipedia.org/wiki/Flow_network
39 https://en.wikipedia.org/wiki/Aho-Corasick
40 https://en.wikipedia.org/wiki/Bipartite_graph#Testing_bipartiteness
41 https://en.wikipedia.org/wiki/Depth-first_search
42 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
43 https://en.wikipedia.org/wiki/Level_structure
44 https://en.wikipedia.org/wiki/Lexicographic_breadth-first_search
45 https://en.wikipedia.org/wiki/Parallel_breadth-first_search
https://web.archive.org/web/20150326055019/http://www.graph500.org/specifications#
46
sec-5#sec-5
47 http://www.graph500.org/specifications#sec-5
48 https://en.wikipedia.org/wiki/Konrad_Zuse
49 http://zuse.zib.de/item/gHI1cNsUuQweHB6
50 https://en.wikipedia.org/wiki/Edward_F._Moore

5. Skiena, Steven51 (2008). ”Sorting and Searching”. The Algorithm Design
Manual. Springer. p. 480. Bibcode52 :2008adm..book.....S53 . doi54 :10.1007/978-1-
84800-070-4_455 . ISBN56 978-1-84800-069-857 .
6. Lee, C. Y. (1961). ”An Algorithm for Path Connections and Its Applica-
tions”. IRE Transactions on Electronic Computers. doi58 :10.1109/TEC.1961.521922259 .
7. Cormen, Thomas H.60 ; Leiserson, Charles E.61 ; Rivest, Ronald L.62 ; Stein,
Clifford63 (2001) [1990]. ”22.2 Breadth-first search”. Introduction to Algo-
rithms64 (2nd ed.). MIT Press and McGraw-Hill. pp. 531–539. ISBN65 0-262-
03293-766 .
8. Russell, Stuart67 ; Norvig, Peter68 (2003) [1995]. Artificial Intelligence: A
Modern Approach69 (2nd ed.). Prentice Hall. ISBN70 978-013790395571 .
9. Coppin, B. (2004). Artificial intelligence illuminated. Jones & Bartlett Learning.
pp. 79–80.
10. Aziz, Adnan; Prakash, Amit (2010). ”4. Algorithms on Graphs”. Algorithms
for Interviews. p. 144. ISBN72 978-145379299573 .
• Knuth, Donald E. (1997), The Art of Computer Programming Vol 1. 3rd ed.74 ,
Boston: Addison-Wesley, ISBN75 978-0-201-89683-176

55.7 External links

Wikimedia Commons has media related to Breadth-first search77 .

51 https://en.wikipedia.org/wiki/Steven_Skiena
52 https://en.wikipedia.org/wiki/Bibcode_(identifier)
53 https://ui.adsabs.harvard.edu/abs/2008adm..book.....S
54 https://en.wikipedia.org/wiki/Doi_(identifier)
55 https://doi.org/10.1007%2F978-1-84800-070-4_4
56 https://en.wikipedia.org/wiki/ISBN_(identifier)
57 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
58 https://en.wikipedia.org/wiki/Doi_(identifier)
59 https://doi.org/10.1109%2FTEC.1961.5219222
60 https://en.wikipedia.org/wiki/Thomas_H._Cormen
61 https://en.wikipedia.org/wiki/Charles_E._Leiserson
62 https://en.wikipedia.org/wiki/Ron_Rivest
63 https://en.wikipedia.org/wiki/Clifford_Stein
64 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
65 https://en.wikipedia.org/wiki/ISBN_(identifier)
66 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
67 https://en.wikipedia.org/wiki/Stuart_J._Russell
68 https://en.wikipedia.org/wiki/Peter_Norvig
69 https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
70 https://en.wikipedia.org/wiki/ISBN_(identifier)
71 https://en.wikipedia.org/wiki/Special:BookSources/978-0137903955
72 https://en.wikipedia.org/wiki/ISBN_(identifier)
73 https://en.wikipedia.org/wiki/Special:BookSources/978-1453792995
74 http://www-cs-faculty.stanford.edu/~knuth/taocp.html
75 https://en.wikipedia.org/wiki/ISBN_(identifier)
76 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89683-1
77 https://commons.wikimedia.org/wiki/Category:Breadth-first_search


• Open Data Structures - Section 12.3.1 - Breadth-First Search78 , Pat Morin79
• Simplified Breadth-first Search80

http://opendatastructures.org/versions/edition-0.1e/ods-java/12_3_Graph_Traversal.
78
html#SECTION001531000000000000000
79 https://en.wikipedia.org/wiki/Pat_Morin
80 http://web.piyushgarg.in/2018/09/simplified-breadth-first-search-given.html

56 Bron–Kerbosch algorithm

In computer science1 , the Bron–Kerbosch algorithm is an enumeration algorithm2 for
finding maximal cliques3 in an undirected graph4 . That is, it lists all subsets of vertices with
the two properties that each pair of vertices in one of the listed subsets is connected by an
edge, and no listed subset can have any additional vertices added to it while preserving its
complete connectivity5 . The Bron–Kerbosch algorithm was designed by Dutch6 scientists
Coenraad Bron7 and Joep Kerbosch8 , who published its description in 1973. Although
other algorithms for solving the clique problem9 have running times that are, in theory,
better on inputs that have few maximal independent sets, the Bron–Kerbosch algorithm and
subsequent improvements to it are frequently reported as being more efficient in practice
than the alternatives.[1] It is well-known and widely used in application areas of graph
algorithms such as computational chemistry10 .[2]
A contemporaneous algorithm of Akkoyunlu (1973)11 , although presented in different terms,
can be viewed as being the same as the Bron–Kerbosch algorithm, as it generates the same
recursive search tree.[3]

56.1 Without pivoting

The basic form of the Bron–Kerbosch algorithm is a recursive12 backtracking13 algorithm
that searches for all maximal cliques in a given graph G. More generally, given three disjoint
sets of vertices R, P, and X, it finds the maximal cliques that include all of the vertices in R,
some of the vertices in P, and none of the vertices in X. In each call to the algorithm, P and
X are disjoint sets whose union consists of those vertices that form cliques when added to
R. In other words, P ∪X is the set of vertices which are joined to every element of R. When
P and X are both empty there are no further elements that can be added to R, so R is a
maximal clique and the algorithm outputs R.

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Enumeration_algorithm
3 https://en.wikipedia.org/wiki/Clique_(graph_theory)
4 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
5 https://en.wikipedia.org/wiki/Complete_graph
6 https://en.wikipedia.org/wiki/Netherlands
7 https://en.wikipedia.org/wiki/Coenraad_Bron
8 https://en.wikipedia.org/w/index.php?title=Joep_Kerbosch&action=edit&redlink=1
9 https://en.wikipedia.org/wiki/Clique_problem
10 https://en.wikipedia.org/wiki/Computational_chemistry
11 #CITEREFAkkoyunlu1973
12 https://en.wikipedia.org/wiki/Recursion
13 https://en.wikipedia.org/wiki/Backtracking


The recursion is initiated by setting R and X to be the empty set14 and P to be the vertex
set of the graph. Within each recursive call, the algorithm considers the vertices in P in
turn; if there are no such vertices, it either reports R as a maximal clique (if X is empty), or
backtracks. For each vertex v chosen from P, it makes a recursive call in which v is added
to R and in which P and X are restricted to the neighbor set N(v) of v, which finds and
reports all clique extensions of R that contain v. Then, it moves v from P to X to exclude
it from consideration in future cliques and continues with the next vertex in P.
That is, in pseudocode, the algorithm performs the following steps:
algorithm BronKerbosch1(R, P, X) is
if P and X are both empty then
report R as a maximal clique
for each vertex v in P do
BronKerbosch1(R ⋃ {v}, P ⋂ N(v), X ⋂ N(v))
P := P \ {v}
X := X ⋃ {v}
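The pseudocode translates almost line for line into Python; the neighbor map N and the accumulator list are illustrative representation choices:

```python
def bron_kerbosch1(R, P, X, N, cliques):
    # N maps each vertex to its set of neighbors; maximal cliques are
    # appended to `cliques`.
    if not P and not X:
        cliques.append(R)
    for v in list(P):
        bron_kerbosch1(R | {v}, P & N[v], X & N[v], N, cliques)
        P = P - {v}
        X = X | {v}

# The five-clique example graph used later in this article (vertices 1-6).
N = {1: {2, 5}, 2: {1, 3, 5}, 3: {2, 4},
     4: {3, 5, 6}, 5: {1, 2, 4}, 6: {4}}
cliques = []
bron_kerbosch1(set(), set(N), set(), N, cliques)
```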

56.2 With pivoting

The basic form of the algorithm, described above, is inefficient in the case of graphs with
many non-maximal cliques: it makes a recursive call for every clique, maximal or not. To
save time and allow the algorithm to backtrack more quickly in branches of the search that
contain no maximal cliques, Bron and Kerbosch introduced a variant of the algorithm involv-
ing a ”pivot vertex” u, chosen from P (or more generally, as later investigators realized,[4]
from P ⋃ X). Any maximal clique must include either u or one of its non-neighbors, for
otherwise the clique could be augmented by adding u to it. Therefore, only u and its non-
neighbors need to be tested as the choices for the vertex v that is added to R in each
recursive call to the algorithm. In pseudocode:
algorithm BronKerbosch2(R, P, X) is
if P and X are both empty then
report R as a maximal clique
choose a pivot vertex u in P ⋃ X
for each vertex v in P \ N(u) do
BronKerbosch2(R ⋃ {v}, P ⋂ N(v), X ⋂ N(v))
P := P \ {v}
X := X ⋃ {v}

If the pivot is chosen to minimize the number of recursive calls made by the algorithm,
the savings in running time compared to the non-pivoting version of the algorithm can be
significant.[5]
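A Python sketch of the pivoting variant, choosing as pivot a vertex of P ⋃ X with the most neighbors in P (one common heuristic for minimizing the number of recursive calls):

```python
def bron_kerbosch2(R, P, X, N, cliques):
    # N maps each vertex to its set of neighbors.
    if not P and not X:
        cliques.append(R)
        return
    # Pivot: a vertex of P | X with the most neighbors in P, so that
    # the loop below skips as many vertices of P as possible.
    u = max(P | X, key=lambda w: len(P & N[w]))
    for v in list(P - N[u]):
        bron_kerbosch2(R | {v}, P & N[v], X & N[v], N, cliques)
        P = P - {v}
        X = X | {v}
```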

56.3 With vertex ordering

An alternative method for improving the basic form of the Bron–Kerbosch algorithm involves
forgoing pivoting at the outermost level of recursion, and instead choosing the ordering of
the recursive calls carefully in order to minimize the sizes of the sets P of candidate
vertices within each recursive call.

14 https://en.wikipedia.org/wiki/Empty_set
The degeneracy15 of a graph G is the smallest number d such that every subgraph16 of
G has a vertex with degree17 d or less. Every graph has a degeneracy ordering, an ordering
of the vertices such that each vertex has d or fewer neighbors18 that come later in the
ordering; a degeneracy ordering may be found in linear time19 by repeatedly selecting the
vertex of minimum degree among the remaining vertices. If the order of the vertices v that
the Bron–Kerbosch algorithm loops through is a degeneracy ordering, then the set P of
candidate vertices in each call (the neighbors of v that are later in the ordering) will be
guaranteed to have size at most d. The set X of excluded vertices will consist of all earlier
neighbors of v, and may be much larger than d. In recursive calls to the algorithm below
the topmost level of the recursion, the pivoting version can still be used.[6][7]
In pseudocode, the algorithm performs the following steps:
algorithm BronKerbosch3(G) is
P = V(G)
R = X = empty
for each vertex v in a degeneracy ordering of G do
BronKerbosch2({v}, P ⋂ N(v), X ⋂ N(v))
P := P \ {v}
X := X ⋃ {v}

This variant of the algorithm can be proven to be efficient for graphs of small degeneracy,[6]
and experiments show that it also works well in practice for large sparse social networks20
and other real-world graphs.[7]
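A simple quadratic sketch of computing a degeneracy ordering by repeatedly removing a minimum-degree vertex (the linear-time bucket-based version mentioned above is more involved; names are illustrative):

```python
def degeneracy_ordering(N):
    # N maps each vertex to its set of neighbors.  Repeatedly select and
    # remove a vertex of minimum degree among the remaining vertices.
    order, remaining = [], {v: set(nbrs) for v, nbrs in N.items()}
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u] & remaining.keys()))
        order.append(v)
        del remaining[v]
    return order

# On the article's example graph (degeneracy 2), every vertex has at
# most two neighbors later in the returned ordering.
N = {1: {2, 5}, 2: {1, 3, 5}, 3: {2, 4},
     4: {3, 5, 6}, 5: {1, 2, 4}, 6: {4}}
order = degeneracy_ordering(N)
```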

15 https://en.wikipedia.org/wiki/Degeneracy_(graph_theory)
16 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#Subgraphs
17 https://en.wikipedia.org/wiki/Degree_(graph_theory)
18 https://en.wikipedia.org/wiki/Neighborhood_(graph_theory)
19 https://en.wikipedia.org/wiki/Linear_time
20 https://en.wikipedia.org/wiki/Social_network


56.4 Example

Figure 140 A graph with five maximal cliques: four edges and a triangle

In the example graph shown, the algorithm is initially called with R = Ø, P = {1,2,3,4,5,6},
and X = Ø. The pivot u should be chosen as one of the degree-three vertices, to minimize
the number of recursive calls; for instance, suppose that u is chosen to be vertex 2. Then
there are three remaining vertices in P \ N(u): vertices 2, 4, and 6.
The iteration of the inner loop of the algorithm for v = 2 makes a recursive call to the
algorithm with R = {2}, P = {1,3,5}, and X = Ø. Within this recursive call, one of 1 or 5
will be chosen as a pivot, and there will be two second-level recursive calls, one for vertex 3
and the other for whichever vertex was not chosen as pivot. These two calls will eventually
report the two cliques {1,2,5} and {2,3}. After returning from these recursive calls, vertex
2 is added to X and removed from P.
The iteration of the inner loop of the algorithm for v = 4 makes a recursive call to the
algorithm with R = {4}, P = {3,5,6}, and X = Ø (although vertex 2 belongs to the set
X in the outer call to the algorithm, it is not a neighbor of v and is excluded from the
subset of X passed to the recursive call). This recursive call will end up making three
second-level recursive calls to the algorithm that report the three cliques {3,4}, {4,5}, and
{4,6}. Then, vertex 4 is added to X and removed from P.
In the third and final iteration of the inner loop of the algorithm, for v = 6, there is a
recursive call to the algorithm with R = {6}, P = Ø, and X = {4}. Because this recursive
call has P empty and X non-empty, it immediately backtracks without reporting any more
cliques, as there can be no maximal clique that includes vertex 6 and excludes vertex 4.
The call tree for the algorithm, therefore, looks like:
BronKerbosch2(Ø, {1,2,3,4,5,6}, Ø)
BronKerbosch2({2}, {1,3,5}, Ø)
BronKerbosch2({2,3}, Ø, Ø): output {2, 3}
BronKerbosch2({2,5}, {1}, Ø)
BronKerbosch2({1,2,5}, Ø, Ø): output {1,2,5}
BronKerbosch2({4}, {3,5,6}, Ø)
BronKerbosch2({3,4}, Ø, Ø): output {3,4}
BronKerbosch2({4,5}, Ø, Ø): output {4,5}
BronKerbosch2({4,6}, Ø, Ø): output {4,6}
BronKerbosch2({6}, Ø, {4}): no output

The graph in the example has degeneracy two; one possible degeneracy ordering is
6,4,3,1,2,5. If the vertex-ordering version of the Bron–Kerbosch algorithm is applied to
the vertices, in this order, the call tree looks like
BronKerbosch3(G)
BronKerbosch2({6}, {4}, Ø)
BronKerbosch2({6,4}, Ø, Ø): output {6,4}
BronKerbosch2({4}, {3,5}, {6})
BronKerbosch2({4,3}, Ø, Ø): output {4,3}
BronKerbosch2({4,5}, Ø, Ø): output {4,5}
BronKerbosch2({3}, {2}, {4})
BronKerbosch2({3,2}, Ø, Ø): output {3,2}
BronKerbosch2({1}, {2,5}, Ø)
BronKerbosch2({1,2}, {5}, Ø)
BronKerbosch2({1,2,5}, Ø, Ø): output {1,2,5}
BronKerbosch2({2}, {5}, {1,3}): no output
BronKerbosch2({5}, Ø, {1,2,4}): no output

56.5 Worst-case analysis

The Bron–Kerbosch algorithm is not an output-sensitive algorithm21 : unlike some other
algorithms for the clique problem, it does not run in polynomial time22 per maximal clique
generated. However, it is efficient in a worst-case sense: by a result of Moon & Moser
(1965)23 , any n-vertex graph has at most 3^(n/3)24 maximal cliques, and the worst-case running
time of the Bron–Kerbosch algorithm (with a pivot strategy that minimizes the number of
recursive calls made at each step) is O(3^(n/3) ), matching this bound.[8]
For sparse graphs25 , tighter bounds are possible. In particular the vertex-ordering version
of the Bron–Kerbosch algorithm can be made to run in time O(d·n·3^(d/3) ), where d is the
degeneracy26 of the graph, a measure of its sparseness. There exist d-degenerate graphs for
which the total number of maximal cliques is (n − d)·3^(d/3) , so this bound is close to tight.[6]

21 https://en.wikipedia.org/wiki/Output-sensitive_algorithm
22 https://en.wikipedia.org/wiki/Polynomial_time
23 #CITEREFMoonMoser1965
24 https://en.wikipedia.org/wiki/Power_of_three
25 https://en.wikipedia.org/wiki/Sparse_graph
26 https://en.wikipedia.org/wiki/Degeneracy_(graph_theory)


56.6 Notes
1. Cazals & Karande (2008)27 .
2. Chen (2004)28 .
3. Johnston (1976)29 .
4. Tomita, Tanaka & Takahashi (2006)30 ; Cazals & Karande (2008)31 .
5. Johnston (1976)32 ; Koch (2001)33 ; Cazals & Karande (2008)34 .
6. Eppstein, Löffler & Strash (2010)35 .
7. Eppstein & Strash (2011)36 .
8. Tomita, Tanaka & Takahashi (2006)37 .

56.7 References
• Akkoyunlu, E. A. (1973), ”The enumeration of maximal cliques of large
graphs”, SIAM Journal on Computing, 2: 1–6, doi38 :10.1137/020200139 .
• Chen, Lingran (2004), ”Substructure and maximal common substructure
searching”, in Bultinck, Patrick (ed.), Computational Medicinal Chemistry for Drug
Discovery, CRC Press, pp. 483–514, ISBN40 978-0-8247-4774-941 .
• Bron, Coenraad; Kerbosch, Joep (1973), ”Algorithm 457: finding
all cliques of an undirected graph”, Commun. ACM, ACM, 16 (9): 575–577,
doi42 :10.1145/362342.36236743 .
• Cazals, F.; Karande, C. (2008), ”A note on the problem of reporting max-
imal cliques”44 (PDF), Theoretical Computer Science, 407 (1): 564–568,
doi45 :10.1016/j.tcs.2008.05.01046[permanent dead link47 ] .
• Eppstein, David48 ; Löffler, Maarten; Strash, Darren (2010), ”Listing all
maximal cliques in sparse graphs in near-optimal time”, in Cheong, Otfried;
Chwa, Kyung-Yong; Park, Kunsoo (eds.), 21st International Symposium on Algo-
rithms and Computation (ISAAC 2010), Jeju, Korea, Lecture Notes in Computer Science,

27 #CITEREFCazalsKarande2008
28 #CITEREFChen2004
29 #CITEREFJohnston1976
30 #CITEREFTomitaTanakaTakahashi2006
31 #CITEREFCazalsKarande2008
32 #CITEREFJohnston1976
33 #CITEREFKoch2001
34 #CITEREFCazalsKarande2008
35 #CITEREFEppsteinL%C3%B6fflerStrash2010
36 #CITEREFEppsteinStrash2011
37 #CITEREFTomitaTanakaTakahashi2006
38 https://en.wikipedia.org/wiki/Doi_(identifier)
39 https://doi.org/10.1137%2F0202001
40 https://en.wikipedia.org/wiki/ISBN_(identifier)
41 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-4774-9
42 https://en.wikipedia.org/wiki/Doi_(identifier)
43 https://doi.org/10.1145%2F362342.362367
44 ftp://ftp-sop.inria.fr/geometrica/fcazals/papers/ncliques.pdf
45 https://en.wikipedia.org/wiki/Doi_(identifier)
46 https://doi.org/10.1016%2Fj.tcs.2008.05.010
48 https://en.wikipedia.org/wiki/David_Eppstein


6506, Springer-Verlag, pp. 403–414, arXiv49 :1006.544050 , doi51 :10.1007/978-3-642-17517-6_3652 .
• E, D53 ; S, D (2011), ”L    
  - ”, 10th International Symposium on Experimental
Algorithms, arXiv54 :1103.031855 , Bibcode56 :2011arXiv1103.0318E57 .
• J, H. C. (1976), ”C   —   B–
K ”, International Journal of Parallel Programming, 5 (3): 209–238,
doi58 :10.1007/BF0099183659 .
• K, I (2001), ”E     
  ”, Theoretical Computer Science, 250 (1–2): 1–30, doi60 :10.1016/S0304-
3975(00)00286-361 .
• M, J. W.; M, L.62 (1965), ”O   ”, Israel J. Math., 3: 23–28,
doi63 :10.1007/BF0276002464 , MR65 018257766 .
• T, E; T, A; T, H (2006), ”T
-        
 ”, Theoretical Computer Science, 363 (1): 28–42,
doi67 :10.1016/j.tcs.2006.06.01568 .

56.8 External links


• Review of the Bron–Kerbosch algorithm and variations69 by Alessio Conte
• Bron–Kerbosch algorithm implementation visualized in Javascript70
• Bron–Kerbosch algorithm implementation in Python71
• Bron–Kerbosch algorithm with vertex ordering implementation in Python72
• Bron–Kerbosch algorithm implementation in C++73

49 https://en.wikipedia.org/wiki/ArXiv_(identifier)
50 http://arxiv.org/abs/1006.5440
51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.1007%2F978-3-642-17517-6_36
53 https://en.wikipedia.org/wiki/David_Eppstein
54 https://en.wikipedia.org/wiki/ArXiv_(identifier)
55 http://arxiv.org/abs/1103.0318
56 https://en.wikipedia.org/wiki/Bibcode_(identifier)
57 https://ui.adsabs.harvard.edu/abs/2011arXiv1103.0318E
58 https://en.wikipedia.org/wiki/Doi_(identifier)
59 https://doi.org/10.1007%2FBF00991836
60 https://en.wikipedia.org/wiki/Doi_(identifier)
61 https://doi.org/10.1016%2FS0304-3975%2800%2900286-3
62 https://en.wikipedia.org/wiki/Leo_Moser
63 https://en.wikipedia.org/wiki/Doi_(identifier)
64 https://doi.org/10.1007%2FBF02760024
65 https://en.wikipedia.org/wiki/MR_(identifier)
66 http://www.ams.org/mathscinet-getitem?mr=0182577
67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1016%2Fj.tcs.2006.06.015
69 http://www.dcs.gla.ac.uk/~pat/jchoco/clique/enumeration/report.pdf
70 https://davidpynes.github.io/Tutorials/Graphs/Graph_03/
71 http://www.kuchaev.com/files/graph.py
72 https://gist.github.com/abhin4v/8304062
73 https://github.com/atulsingh7890/Graph/blob/master/Graph.cpp


• Bron–Kerbosch algorithm implementation in C++11 with unit tests74


• C++ implementation of the algorithms presented in Eppstein, Strash (2011)75 by Darren
Strash
• Finding all cliques of an undirected graph76 . Seminar notes by Michaela Regneri, January
11, 2007.
• Bron–Kerbosch algorithm implementation in PHP, library with composer install77 by
Sergio Zambrano

74 https://gitlab.com/Gluttton/BronKerbosch
75 https://github.com/darrenstrash/quick-cliques
76 http://www.dfki.de/~neumann/ie-seminar/presentations/finding_cliques.pdf
77 https://github.com/skilla/maximal-cliques

57 Centrality

For the statistical concept, see Central tendency1 .


1 https://en.wikipedia.org/wiki/Central_tendency



In graph theory2 and network analysis3 , indicators of centrality identify the most impor-
tant vertices4 within a graph. Applications include identifying the most influential person(s)
in a social network5 , key infrastructure nodes in the Internet6 or urban networks7 , and super-
spreaders8 of disease. Centrality concepts were first developed in social network analysis9 ,
and many of the terms used to measure centrality reflect their sociological10 origin.[1] They
should not be confused with node influence metrics11 , which seek to quantify the influence
of every node in the network.

57.1 Definition and characterization of centrality indices

Centrality indices are answers to the question ”What characterizes an important vertex?”
The answer is given in terms of a real-valued function on the vertices of a graph, where
the values produced are expected to provide a ranking which identifies the most important
nodes.[2][3][4]
The word ”importance” has a wide range of meanings, leading to many different definitions of centrality. Two categorization schemes have been proposed. ”Importance” can be
conceived in relation to a type of flow or transfer across the network. This allows cen-
tralities to be classified by the type of flow they consider important.[3] ”Importance” can
alternatively be conceived as involvement in the cohesiveness of the network. This allows
centralities to be classified based on how they measure cohesiveness.[5] Both of these approaches divide centralities into distinct categories. A further conclusion is that a centrality
which is appropriate for one category will often ”get it wrong” when applied to a different
category.[3]
When centralities are categorized by their approach to cohesiveness, it becomes apparent
that the majority of centralities inhabit one category: counts of the number of walks
starting from a given vertex, which differ only in how walks are defined and counted. Restricting
consideration to this group allows for a soft characterization which places centralities on
a spectrum from walks of length one (degree centrality12 ) to infinite walks (eigenvalue
centrality13 ).[2][6] The observation that many centralities share this familial relationship
perhaps explains the high rank correlations between these indices.

2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Network_theory
4 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
5 https://en.wikipedia.org/wiki/Social_network
6 https://en.wikipedia.org/wiki/Internet
7 https://en.wikipedia.org/wiki/Urban_network
8 https://en.wikipedia.org/wiki/Super-spreader
9 https://en.wikipedia.org/wiki/Social_network_analysis
10 https://en.wikipedia.org/wiki/Sociology
11 https://en.wikipedia.org/wiki/Node_influence_metric
12 https://en.wikipedia.org/wiki/Centrality#Degree_centrality
13 https://en.wikipedia.org/wiki/Centrality#Eigenvector_centrality


57.1.1 Characterization by network flows

A network can be considered a description of the paths along which something flows. This
allows a characterization based on the type of flow and the type of path encoded by the
centrality. A flow can be based on transfers, where each indivisible item goes from one
node to another, like a package delivery going from the delivery site to the client's house.
A second case is serial duplication, in which an item is replicated so that both the source
and the target have it. An example is the propagation of information through gossip, with
the information being propagated in a private way and with both the source and the target
nodes being informed at the end of the process. The last case is parallel duplication, with
the item being duplicated to several links at the same time, like a radio broadcast which
provides the same information to many listeners at once.[3]
Likewise, the type of path can be constrained to geodesics14 (shortest paths), paths15 (no
vertex is visited more than once), trails16 (vertices can be visited multiple times, no edge is
traversed more than once), or walks17 (vertices and edges can be visited/traversed multiple
times).[3]

57.1.2 Characterization by walk structure

An alternative classification can be derived from how the centrality is constructed. This
again splits into two classes. Centralities are either radial or medial. Radial centralities
count walks which start/end from the given vertex. The degree18 and eigenvalue19 centrali-
ties are examples of radial centralities, counting the number of walks of length one or length
infinity. Medial centralities count walks which pass through the given vertex. The canoni-
cal example is Freeman's betweenness20 centrality, the number of shortest paths which pass
through the given vertex.[5]
Likewise, the counting can capture either the volume or the length of walks. Volume is the
total number of walks of the given type. The three examples from the previous paragraph
fall into this category. Length captures the distance from the given vertex to the remaining
vertices in the graph. Freeman's closeness21 centrality, the total geodesic distance from a
given vertex to all other vertices, is the best known example.[5] Note that this classification
is independent of the type of walk counted (i.e. walk, trail, path, geodesic).
Borgatti and Everett propose that this typology provides insight into how best to compare
centrality measures. Centralities placed in the same box in this 2×2 classification are similar
enough to make plausible alternatives; one can reasonably compare which is better for a
given application. Measures from different boxes, however, are categorically distinct. Any
evaluation of relative fitness can only occur within the context of predetermining which
category is more applicable, rendering the comparison moot.[5]

14 https://en.wikipedia.org/wiki/Distance_(graph_theory)
15 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#path
16 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#trail
17 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#walk
18 https://en.wikipedia.org/wiki/Centrality#Degree_centrality
19 https://en.wikipedia.org/wiki/Centrality#Eigenvector_centrality
20 https://en.wikipedia.org/wiki/Centrality#Betweenness_centrality
21 https://en.wikipedia.org/wiki/Centrality#Closeness_centrality


57.1.3 Radial-volume centralities exist on a spectrum

The characterization by walk structure shows that almost all centralities in wide use are
radial-volume measures. These encode the belief that a vertex's centrality is a function of
the centrality of the vertices it is associated with. Centralities distinguish themselves on
how association is defined.
Bonacich showed that if association is defined in terms of walks22 , then a family of central-
ities can be defined based on the length of walk considered.[2] Degree centrality23 counts
walks of length one, while eigenvalue centrality24 counts walks of length infinity. Alternative
definitions of association are also reasonable. Alpha centrality25 allows vertices to have an
external source of influence. Estrada's subgraph centrality proposes only counting closed
paths (triangles, squares, etc.).
The heart of such measures is the observation that powers of the graph's adjacency matrix
give the number of walks of length given by that power. Similarly, the matrix exponential
is also closely related to the number of walks of a given length. An initial transformation of
the adjacency matrix allows a different definition of the type of walk counted. Under either
approach, the centrality of a vertex can be expressed as an infinite sum, either


∑_{k=0}^{∞} AR^k β^k

for matrix powers or

∑_{k=0}^{∞} (AR β)^k / k!

for matrix exponentials, where
• k is walk length,
• AR is the transformed adjacency matrix, and
• β is a discount parameter which ensures convergence of the sum.
Bonacich's family of measures does not transform the adjacency matrix. Alpha central-
ity26 replaces the adjacency matrix with its resolvent27 . Subgraph centrality replaces the
adjacency matrix with its trace. A startling conclusion is that regardless of the initial trans-
formation of the adjacency matrix, all such approaches have common limiting behavior. As
β approaches zero, the indices converge to degree centrality28 . As β approaches its maximal
value, the indices converge to eigenvalue centrality29 .[6]
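The relationship between matrix powers and walk counts can be checked numerically. The following sketch (assuming NumPy is available) uses a small path graph and a truncated version of the matrix-power sum; the graph and the cut-off are illustrative choices:

```python
import numpy as np

# Path graph 1 - 2 - 3.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# (A^k)[i, j] counts walks of length k from vertex i to vertex j.
A2 = np.linalg.matrix_power(A, 2)
print(A2[0, 2])  # 1: the single length-2 walk 1 -> 2 -> 3

# Truncating the sum over k of A^k β^k gives a centrality score; as β
# shrinks the ranking approaches degree centrality, and as β grows
# toward 1/λ_max it approaches eigenvalue (eigenvector) centrality.
beta = 0.1
score = sum(np.linalg.matrix_power(beta * A, k) for k in range(10)).sum(axis=1)
print(score)  # the middle vertex scores highest
```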

22 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#walk
23 https://en.wikipedia.org/wiki/Centrality#Degree_centrality
24 https://en.wikipedia.org/wiki/Centrality#Eigenvector_centrality
25 https://en.wikipedia.org/wiki/Alpha_centrality
26 https://en.wikipedia.org/wiki/Alpha_centrality
27 https://en.wikipedia.org/wiki/Resolvent_formalism
28 https://en.wikipedia.org/wiki/Centrality#Degree_centrality
29 https://en.wikipedia.org/wiki/Centrality#Eigenvector_centrality


57.1.4 Game-theoretic centrality

The common feature of most of the aforementioned standard measures is that they assess
the importance of a node by focusing only on the role that a node plays by itself. However,
in many applications such an approach is inadequate because of synergies that may occur
if the functioning of nodes is considered in groups.

Figure 141 Example of game-theoretic centrality

For example, consider the problem of stopping an epidemic. Looking at the image of the
network above, which nodes should we vaccinate? Based on the previously described measures, we
want to recognize the nodes that are most important in disease spreading. Approaches
based only on centralities, which focus on individual features of nodes, may not be a good idea.
Nodes in the red square cannot individually stop the disease from spreading, but considering them
as a group, we clearly see that they can stop the disease if it has started in nodes v1 , v4 , and v5 .
Game-theoretic centralities try to address these problems and opportunities, using tools
from game theory. The approach proposed in [7] uses the Shapley value30 . Because of the
computational hardness of calculating the Shapley value, most efforts in this domain are
directed at implementing new algorithms and methods which rely on a particular topology of
the network or a special character of the problem. Such an approach may lead to reducing
the time complexity from exponential to polynomial.

30 https://en.wikipedia.org/wiki/Shapley_value


Similarly, the solution concept authority distribution31 ([8] ) applies the Shapley-Shubik
power index32 , rather than the Shapley value33 , to measure the bilateral direct influence
between the players. The distribution is indeed a type of eigenvector centrality. It is used
to sort big data objects in Hu (2020)[9] , such as ranking U.S. colleges.

57.2 Important limitations

Centrality indices have two important limitations, one obvious and the other subtle. The
obvious limitation is that a centrality which is optimal for one application is often sub-
optimal for a different application. Indeed, if this were not so, we would not need so many
different centralities. An illustration of this phenomenon is provided by the Krackhardt
kite graph34 , for which three different notions of centrality give three different choices of
the most central vertex.[10]
The more subtle limitation is the commonly held fallacy that vertex centrality indicates
the relative importance of vertices. Centrality indices are explicitly designed to produce
a ranking which allows indication of the most important vertices.[2][3] This they do well,
under the limitation just noted. They are not designed to measure the influence of nodes
in general. Recently, network physicists have begun developing node influence metrics35 to
address this problem.
The error is two-fold. Firstly, a ranking only orders vertices by importance; it does not
quantify the difference in importance between different levels of the ranking. This may
be mitigated by applying Freeman centralization36 to the centrality measure in question,
which provides some insight into the importance of nodes depending on the differences in their
centralization scores. Furthermore, Freeman centralization enables one to compare several
networks by comparing their highest centralization scores.[11] This approach, however, is
seldom seen in practice.[citation needed37]
Secondly, the features which (correctly) identify the most important vertices in a given net-
work/application do not necessarily generalize to the remaining vertices. For the majority
of other network nodes the rankings may be meaningless.[12][13][14][15] This explains why, for
example, only the first few results of a Google image search appear in a reasonable order.
PageRank is a highly unstable measure, showing frequent rank reversals after small
adjustments of the jump parameter.[16]
While the failure of centrality indices to generalize to the rest of the network may at first
seem counter-intuitive, it follows directly from the above definitions. Complex networks have
heterogeneous topology. To the extent that the optimal measure depends on the network
structure of the most important vertices, a measure which is optimal for such vertices is
sub-optimal for the remainder of the network.[12]

31 https://en.wikipedia.org/wiki/Authority_distribution
32 https://en.wikipedia.org/wiki/Shapley-Shubik_power_index
33 https://en.wikipedia.org/wiki/Shapley_value
34 https://en.wikipedia.org/wiki/Krackhardt_kite_graph
35 https://en.wikipedia.org/wiki/Node_influence_metric
36 https://en.wikipedia.org/wiki/Centrality#Freeman_centralization


57.3 Degree centrality

Main article: Degree (graph theory)38

Figure 142 Examples of A) Betweenness centrality, B) Closeness centrality, C) Eigenvector centrality, D) Degree centrality, E) Harmonic centrality and F) Katz centrality of the same graph.

Historically first and conceptually simplest is degree centrality, which is defined as the
number of links incident upon a node (i.e., the number of ties that a node has). The

38 https://en.wikipedia.org/wiki/Degree_(graph_theory)


degree can be interpreted in terms of the immediate risk of a node for catching whatever
is flowing through the network (such as a virus, or some information). In the case of a
directed network (where ties have direction), we usually define two separate measures of
degree centrality, namely indegree39 and outdegree40 . Accordingly, indegree is a count of
the number of ties directed to the node and outdegree is the number of ties that the node
directs to others. When ties are associated to some positive aspects such as friendship
or collaboration, indegree is often interpreted as a form of popularity, and outdegree as
gregariousness.
The degree centrality of a vertex v, for a given graph G := (V, E) with |V | vertices and |E|
edges, is defined as
CD (v) = deg(v)
Calculating degree centrality for all the nodes in a graph takes Θ(V 2 )41 in a dense42 adja-
cency matrix43 representation of the graph, and for edges takes Θ(E) in a sparse matrix44
representation.
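As a concrete illustration, degree centrality can be read directly off an adjacency-list representation (the helper below is illustrative, not from a particular library):

```python
# Degree centrality CD(v) = deg(v) for every vertex, from an adjacency
# list: graph maps each vertex to the set of its neighbours.
def degree_centrality(graph):
    return {v: len(nbrs) for v, nbrs in graph.items()}

# Star graph with centre 0 and leaves 1..3:
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(degree_centrality(star))  # {0: 3, 1: 1, 2: 1, 3: 1}
```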
The definition of centrality on the node level can be extended to the whole graph, in which
case we are speaking of graph centralization.[17] Let v∗ be the node with highest degree
centrality in G. Let X := (Y, Z) be the |Y |-node connected graph that maximizes the
following quantity (with y∗ being the node with highest degree centrality in X):
H = ∑_{j=1}^{|Y|} [CD (y∗) − CD (yj )]

Correspondingly, the degree centralization of the graph G is as follows:


CD (G) = ( ∑_{i=1}^{|V|} [CD (v∗) − CD (vi )] ) / H
The value of H is maximized when the graph X contains one central node to which all
other nodes are connected (a star graph45 ), and in this case
H = (n − 1) · ((n − 1) − 1) = n^2 − 3n + 2.
So, for any graph G := (V, E),
CD (G) = ( ∑_{i=1}^{|V|} [CD (v∗) − CD (vi )] ) / ( |V|^2 − 3|V| + 2 )
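The degree centralization of a graph can be sketched the same way (an illustrative helper; for a star graph, which maximizes the quantity, the result is 1):

```python
# Freeman degree centralization of an undirected graph, normalised by
# the star-graph maximum n^2 - 3n + 2.
def degree_centralization(graph):
    deg = {v: len(nbrs) for v, nbrs in graph.items()}
    n = len(graph)
    d_max = max(deg.values())
    return sum(d_max - d for d in deg.values()) / (n * n - 3 * n + 2)

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(degree_centralization(star))  # 1.0 for a star graph
```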

39 https://en.wikipedia.org/wiki/Indegree
40 https://en.wikipedia.org/wiki/Outdegree
41 https://en.wikipedia.org/wiki/Big_theta
42 https://en.wikipedia.org/wiki/Dense_matrix
43 https://en.wikipedia.org/wiki/Adjacency_matrix
44 https://en.wikipedia.org/wiki/Sparse_matrix
45 https://en.wikipedia.org/wiki/Star_graph


57.4 Closeness centrality

Main article: Closeness centrality46

In a connected47 graph48 , the normalized49 closeness centrality (or closeness) of a node is the average length of the shortest path50 between the node and all other nodes in the graph. Thus the more central a node is, the closer it is to all other nodes.
Closeness was defined by Alex Bavelas51 (1950) as the reciprocal52 of the farness,[18][19] that
is:
C(x) = 1 / ∑_y d(y, x)
where d(y, x) is the distance53 between vertices x and y. However, when speaking of closeness
centrality, people usually refer to its normalized form, generally given by the previous
formula multiplied by N − 1, where N is the number of nodes in the graph. This adjustment
allows comparisons between nodes of graphs of different sizes.
Taking distances from or to all other nodes is irrelevant in undirected graphs, whereas it
can produce totally different results in directed graphs54 (e.g. a website can have a high
closeness centrality from outgoing link, but low closeness centrality from incoming links).
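For an unweighted, connected, undirected graph, the normalized closeness of a single vertex can be computed with one breadth-first search (an illustrative sketch):

```python
from collections import deque

# Normalised closeness of vertex x: (N - 1) divided by the sum of BFS
# distances from x (unweighted, connected, undirected graph assumed).
def closeness(graph, x):
    dist = {x: 0}
    q = deque([x])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(graph) - 1) / sum(dist.values())

path = {1: {2}, 2: {1, 3}, 3: {2}}   # path graph 1 - 2 - 3
print(closeness(path, 2))  # 1.0: the middle vertex is closest to all others
```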

57.4.1 Harmonic centrality

In a (not necessarily connected) graph, the harmonic centrality reverses the sum and
reciprocal operations in the definition of closeness centrality:
H(x) = ∑_{y≠x} 1 / d(y, x)

where 1/d(y, x) = 0 if there is no path from y to x. Harmonic centrality can be normalized


by dividing by N − 1, where N is the number of nodes in the graph.
Harmonic centrality was proposed by Marchiori55 and Latora56 (2000)[20] and then indepen-
dently by Dekker (2005), using the name ”valued centrality,”[21] and by Rochat (2009).[22]
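Harmonic centrality can be sketched the same way, with unreachable vertices simply contributing zero (an illustrative BFS-based helper for undirected graphs):

```python
from collections import deque

# Harmonic centrality: sum of 1/d(y, x) over y != x, with 1/inf = 0 for
# unreachable vertices (they never enter the BFS distance map).
def harmonic(graph, x):
    dist = {x: 0}
    q = deque([x])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return sum(1.0 / d for v, d in dist.items() if v != x)

# Path 1 - 2 - 3 plus an isolated vertex 4:
g = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
print(harmonic(g, 2))  # 1/1 + 1/1 = 2.0; vertex 4 contributes nothing
```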

46 https://en.wikipedia.org/wiki/Closeness_centrality
47 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
48 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
49 https://en.wikipedia.org/wiki/Normalization_(statistics)
50 https://en.wikipedia.org/wiki/Shortest_path_problem
51 https://en.wikipedia.org/wiki/Alex_Bavelas
52 https://en.wikipedia.org/wiki/Multiplicative_inverse
53 https://en.wikipedia.org/wiki/Distance_(graph_theory)
54 https://en.wikipedia.org/wiki/Directed_graph
55 https://en.wikipedia.org/wiki/Massimo_Marchiori
56 https://en.wikipedia.org/wiki/Vito_Latora


57.5 Betweenness centrality

Main article: Betweenness centrality57

Figure 143 Hue (from red = 0 to blue = max) shows the node betweenness.

Betweenness is a centrality measure of a vertex58 within a graph59 (there is also edge60 betweenness, which is not discussed here). Betweenness centrality quantifies the number
of times a node acts as a bridge along the shortest path between two other nodes. It
was introduced as a measure for quantifying the control of a human on the communication

57 https://en.wikipedia.org/wiki/Betweenness_centrality
58 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
59 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
60 https://en.wikipedia.org/wiki/Edge_(graph_theory)


between other humans in a social network by Linton Freeman61 .[23] In his conception, vertices
that have a high probability to occur on a randomly chosen shortest path62 between two
randomly chosen vertices have a high betweenness.
The betweenness of a vertex v in a graph G := (V, E) with V vertices is computed as follows:
1. For each pair of vertices (s,t), compute the shortest paths63 between them.
2. For each pair of vertices (s,t), determine the fraction of shortest paths that pass
through the vertex in question (here, vertex v).
3. Sum this fraction over all pairs of vertices (s,t).
More compactly the betweenness can be represented as:[24]
CB (v) = ∑_{s≠v≠t∈V} σst (v) / σst

where σst is the total number of shortest paths from node s to node t and σst (v) is the number of
those paths that pass through v. The betweenness may be normalised by dividing by
the number of pairs of vertices not including v, which for directed graphs64 is (n − 1)(n − 2)
and for undirected graphs is (n − 1)(n − 2)/2. For example, in an undirected star graph65 ,
the center vertex (which is contained in every possible shortest path) would have a be-
tweenness of (n − 1)(n − 2)/2 (1, if normalised) while the leaves (which are contained in no
shortest paths) would have a betweenness of 0.
From a calculation aspect, both betweenness and closeness centralities of all vertices in
a graph involve calculating the shortest paths between all pairs of vertices on a graph,
which requires O(V 3 )66 time with the Floyd–Warshall algorithm67 . However, on sparse
graphs, Johnson's algorithm68 may be more efficient, taking O(V 2 log V + V E)69 time. In
the case of unweighted graphs the calculations can be done with Brandes' algorithm[24]
which takes O(V E)70 time. Normally, these algorithms assume that graphs are undirected
and connected with the allowance of loops and multiple edges. When specifically dealing
with network graphs, often graphs are without loops or multiple edges to maintain simple
relationships (where edges represent connections between two people or vertices). In this
case, using Brandes' algorithm will divide final centrality scores by 2 to account for each
shortest path being counted twice.[24]
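A compact version of Brandes' algorithm for unweighted, undirected graphs can be sketched as follows (illustrative; variable names follow the usual presentation with path counts sigma and dependencies delta, and the final halving accounts for each shortest path being counted twice):

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for unweighted, undirected graphs."""
    C = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, recording shortest-path counts and predecessors.
        S, pred = [], {v: [] for v in graph}
        sigma = {v: 0.0 for v in graph}; sigma[s] = 1.0
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft(); S.append(v)
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        while S:
            w = S.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                C[w] += delta[w]
    return {v: c / 2 for v, c in C.items()}  # each pair counted twice

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(betweenness(star))  # centre: (n-1)(n-2)/2 = 3.0, leaves: 0.0
```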

61 https://en.wikipedia.org/wiki/Linton_Freeman
62 https://en.wikipedia.org/wiki/Shortest_path_problem
63 https://en.wikipedia.org/wiki/Shortest_path_problem
64 https://en.wikipedia.org/wiki/Digraph_(mathematics)
65 https://en.wikipedia.org/wiki/Star_(graph_theory)
66 https://en.wikipedia.org/wiki/Big_O_notation
67 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
68 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
69 https://en.wikipedia.org/wiki/Big_O_notation
70 https://en.wikipedia.org/wiki/Big_O_notation


57.6 Eigenvector centrality

Main article: Eigenvector centrality71

Eigenvector centrality (also called eigencentrality) is a measure of the influence of a node72 in a network73 . It assigns relative scores
to all nodes in the network based on the concept that connections to high-scoring nodes
contribute more to the score of the node in question than equal connections to low-scoring
nodes.[25][4] Google74 's PageRank75 and the Katz centrality76 are variants of the eigenvector
centrality.[26]

57.6.1 Using the adjacency matrix to find eigenvector centrality

For a given graph G := (V, E) with |V | number of vertices let A = (av,t ) be the adjacency
matrix77 , i.e. av,t = 1 if vertex v is linked to vertex t, and av,t = 0 otherwise. The relative
centrality score of vertex v can be defined as:
xv = (1/λ) ∑_{t∈M(v)} xt = (1/λ) ∑_{t∈G} av,t xt

where M (v) is a set of the neighbors of v and λ is a constant. With a small rearrangement
this can be rewritten in vector notation as the eigenvector78 equation
Ax = λx
In general, there will be many different eigenvalues79 λ for which a non-zero eigenvector
solution exists. Since the entries in the adjacency matrix are non-negative, there is a
unique largest eigenvalue, which is real and positive, by the Perron–Frobenius theorem80 .
This greatest eigenvalue results in the desired centrality measure.[27] The v th component
of the related eigenvector then gives the relative centrality score of the vertex v in the
network. The eigenvector is only defined up to a common factor, so only the ratios of the
centralities of the vertices are well defined. To define an absolute score one must normalise
the eigenvector, e.g., such that the sum over all vertices is 1 or the total number of vertices
n. Power iteration81 is one of many eigenvalue algorithms82 that may be used to find this
dominant eigenvector.[26] Furthermore, this can be generalized so that the entries in A can
be real numbers representing connection strengths, as in a stochastic matrix83 .
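Power iteration on a small adjacency matrix can be sketched as follows (assuming NumPy; the graph here is connected and non-bipartite, which the plain iteration needs in order to converge to the Perron–Frobenius eigenvector):

```python
import numpy as np

# Power iteration: repeatedly apply A and renormalise; the iterate
# converges to the dominant eigenvector, then scale scores to sum to 1.
def eigenvector_centrality(A, iters=200):
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)   # renormalise to avoid overflow
    return x / x.sum()

# Triangle 0-1-2 with a pendant vertex 3 attached to 0:
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(eigenvector_centrality(A))  # vertex 0 scores highest
```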

71 https://en.wikipedia.org/wiki/Eigenvector_centrality
72 https://en.wikipedia.org/wiki/Node_(networking)
73 https://en.wikipedia.org/wiki/Network_(mathematics)
74 https://en.wikipedia.org/wiki/Google
75 https://en.wikipedia.org/wiki/PageRank
76 https://en.wikipedia.org/wiki/Katz_centrality
77 https://en.wikipedia.org/wiki/Adjacency_matrix
78 https://en.wikipedia.org/wiki/Eigenvector
79 https://en.wikipedia.org/wiki/Eigenvalue
80 https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem
81 https://en.wikipedia.org/wiki/Power_iteration
82 https://en.wikipedia.org/wiki/Eigenvalue_algorithm
83 https://en.wikipedia.org/wiki/Stochastic_matrix


57.7 Katz centrality

Main article: Katz centrality84

Katz centrality[28] is a generalization of degree centrality.
Degree centrality measures the number of direct neighbors, and Katz centrality measures
the number of all nodes that can be connected through a path, while the contributions of
distant nodes are penalized. Mathematically, it is defined as
xi = ∑_{k=1}^{∞} ∑_{j=1}^{N} α^k (A^k)ji

where α is an attenuation factor in (0, 1).


Katz centrality can be viewed as a variant of eigenvector centrality. Another form of Katz
centrality is

xi = α ∑_{j=1}^{N} aij (xj + 1).

Compared to the expression of eigenvector centrality, xj is replaced by xj + 1.


It is shown that[29] the principal eigenvector (associated with the largest eigenvalue λ of A,
the adjacency matrix) is the limit of Katz centrality as α approaches 1/λ from below.
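Rather than summing the series term by term, Katz centrality can be obtained from the equivalent closed form x = (I − αAᵀ)⁻¹ αAᵀ1, valid while α stays below the reciprocal of the largest eigenvalue (a sketch assuming NumPy; the graph and α value are illustrative):

```python
import numpy as np

# Katz centrality via a linear solve: x = (I - α Aᵀ)⁻¹ α Aᵀ 1 equals the
# infinite sum in the text whenever α < 1/λ_max.
def katz(A, alpha):
    n = A.shape[0]
    M = alpha * A.T
    return np.linalg.solve(np.eye(n) - M, M @ np.ones(n))

# Path graph 1 - 2 - 3 (λ_max = √2, so any α < 0.707 converges):
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(katz(A, 0.1))  # the middle vertex receives the highest score
```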

57.8 PageRank centrality

Main article: PageRank85

PageRank86 satisfies the following equation
xi = α ∑_j aji xj / L(j) + (1 − α)/N ,

where

L(j) = ∑_i aji

is the number of neighbors of node j (or number of outbound links in a directed graph).
Compared to eigenvector centrality and Katz centrality, one major difference is the scaling
factor L(j). Another difference between PageRank and eigenvector centrality is that the
PageRank vector is a left hand eigenvector (note the factor aji has indices reversed).[30]
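The PageRank equation above can be solved by simple fixed-point iteration (an illustrative sketch; every node is assumed to have at least one outbound link, so no dangling-node correction is needed):

```python
# Power-iteration PageRank for the equation in the text; adj[j] is the
# set of nodes that j links to.
def pagerank(adj, alpha=0.85, iters=100):
    nodes = sorted(adj)
    n = len(nodes)
    x = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - alpha) / n for v in nodes}  # teleport term
        for j in nodes:
            share = alpha * x[j] / len(adj[j])     # j's score split over L(j) links
            for i in adj[j]:
                new[i] += share
        x = new
    return x

# Small directed graph: 'b' and 'c' link to 'a'; 'a' links back to 'b'.
g = {'a': {'b'}, 'b': {'a'}, 'c': {'a'}}
print(pagerank(g))  # 'a' has the highest score
```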

57.9 Percolation centrality

A slew of centrality measures exist to determine the ‘importance’ of a single node in a complex network. However, these measures quantify the importance of a node in purely
topological terms, and the value of the node does not depend on the ‘state’ of the node in any

84 https://en.wikipedia.org/wiki/Katz_centrality
85 https://en.wikipedia.org/wiki/PageRank
86 https://en.wikipedia.org/wiki/PageRank


way. It remains constant regardless of network dynamics. This is true even for the weighted
betweenness measures. However, a node may very well be centrally located in terms of
betweenness centrality or another centrality measure, but may not be ‘centrally’ located in
the context of a network in which there is percolation.

Percolation of a ‘contagion’ occurs in complex networks in a number of scenarios. For
example, viral or bacterial infection can spread over social networks of people, known as
contact networks. The spread of disease can also be considered at a higher level of
abstraction, by contemplating a network of towns or population centres, connected by road,
rail or air links. Computer viruses can spread over computer networks. Rumours or news
about business offers and deals can also spread via social networks of people. In all of
these scenarios, a ‘contagion’ spreads over the links of a complex network, altering the
‘states’ of the nodes as it spreads, either recoverably or otherwise. For example, in an
epidemiological scenario, individuals go from a ‘susceptible’ to an ‘infected’ state as the
infection spreads. The states the individual nodes can take in the above examples could be
binary (such as received/not received a piece of news), discrete
(susceptible/infected/recovered), or even continuous (such as the proportion of infected
people in a town), as the contagion spreads.

The common feature in all these scenarios is that the spread of contagion results in the
change of node states in networks. Percolation centrality (PC) was proposed with this in
mind; it specifically measures the importance of nodes in terms of aiding the percolation
through the network. This measure was proposed by Piraveenan et al.[31]
Percolation centrality is defined for a given node, at a given time, as the proportion
of ‘percolated paths’ that go through that node. A ‘percolated path’ is a shortest path
between a pair of nodes, where the source node is percolated (e.g., infected). The target
node can be percolated or non-percolated, or in a partially percolated state.
PC^t(v) = \frac{1}{N-2} \sum_{s \neq v \neq r} \frac{\sigma_{sr}(v)}{\sigma_{sr}} \frac{x^t_s}{\left[\sum x^t_i\right] - x^t_v}

where σsr is the total number of shortest paths from node s to node r and σsr(v) is the number
of those paths that pass through v. The percolation state of the node i at time t is denoted
by x^t_i; two special cases are x^t_i = 0, which indicates a non-percolated state at time t,
and x^t_i = 1, which indicates a fully percolated state at time t. The values in between
indicate partially percolated states (e.g., in a network of townships, this would be the
percentage of people infected in that town).
The weights attached to the percolation paths depend on the percolation levels assigned
to the source nodes, based on the premise that the higher the percolation level of a source
node is, the more important are the paths that originate from that node. Nodes which lie
on shortest paths originating from highly percolated nodes are therefore potentially more
important to the percolation. The definition of PC may also be extended to include target
node weights as well. Percolation centrality calculations run in O(N M )87 time with an
efficient implementation adopted from Brandes' fast algorithm; if the calculation needs
to consider target node weights, the worst-case time is O(N 3 )88 .
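A brute-force sketch of this definition, using the identity σsr(v) = σsv · σvr whenever v lies on a shortest s–r path. The path-graph example and percolation states are my own, a connected unweighted graph is assumed, and a real implementation would adapt Brandes' algorithm instead of the all-pairs loop shown here:

```python
from collections import deque

def bfs_counts(adj, s):
    """BFS from s: shortest-path distances and counts (sigma) to every node."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], sigma[v] = dist[u] + 1, 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def percolation_centrality(adj, x):
    """PC(v) = 1/(N-2) * sum over ordered pairs (s, r), s != v != r, of
    sigma_sr(v)/sigma_sr * x_s / (sum_i x_i - x_v).
    adj maps node -> set of neighbours; x maps node -> state in [0, 1]."""
    nodes = list(adj)
    N, total = len(nodes), sum(x.values())
    d, sig = {}, {}
    for v in nodes:
        d[v], sig[v] = bfs_counts(adj, v)
    pc = {}
    for v in nodes:
        acc = 0.0
        for s in nodes:
            for r in nodes:
                if len({s, v, r}) < 3:
                    continue
                # v is on a shortest s-r path iff the distances add up.
                if d[s][v] + d[v][r] == d[s][r]:
                    acc += (sig[s][v] * sig[v][r] / sig[s][r]) * x[s] / (total - x[v])
        pc[v] = acc / (N - 2)
    return pc

# Path graph 0-1-2-3; only node 0 is percolated (e.g., infected).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
pc = percolation_centrality(adj, {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0})
```

In this example node 1 scores highest: it sits on every shortest path leaving the single percolated source, while node 2 only intercepts the paths that continue to node 3.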

87 https://en.wikipedia.org/wiki/Big_O_notation
88 https://en.wikipedia.org/wiki/Big_O_notation


57.10 Cross-clique centrality

Cross-clique centrality of a single node in a complex graph determines the connectivity
of a node to different cliques89 . A node with high cross-clique connectivity facilitates the
propagation of information or disease in a graph. Cliques are subgraphs in which every
node is connected to every other node in the clique. The cross-clique connectivity of a node
v for a given graph G := (V, E) with |V | vertices and |E| edges, is defined as X(v), where
X(v) is the number of cliques to which vertex v belongs. This measure was used in [32]
but was first proposed by Everett and Borgatti in 1998, where they called it clique-overlap
centrality.
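A brute-force sketch of X(v). Since the text does not fix a minimum clique size, counting complete subgraphs of size at least 2 is an assumption here, and the subset enumeration is exponential, so this is for tiny graphs only:

```python
from itertools import combinations

def cross_clique_connectivity(adj):
    """X(v): number of cliques (complete subgraphs, size >= 2 assumed here)
    that contain each vertex; brute force over all vertex subsets."""
    nodes = sorted(adj)
    X = {v: 0 for v in nodes}
    for k in range(2, len(nodes) + 1):
        for sub in combinations(nodes, k):
            # sub is a clique iff every pair inside it is adjacent.
            if all(v in adj[u] for u, v in combinations(sub, 2)):
                for v in sub:
                    X[v] += 1
    return X

# Triangle 0-1-2 with a pendant vertex 3 attached to 2 (hypothetical graph).
X = cross_clique_connectivity({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}})
```

Here vertex 2 belongs to four cliques (three edges plus the triangle), so it scores highest.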

57.11 Freeman centralization

The centralization of any network is a measure of how central its most central node is in
relation to how central all the other nodes are.[11] Centralization measures then (a) calculate
the sum of differences in centrality between the most central node in a network and all other
nodes; and (b) divide this quantity by the theoretically largest such sum of differences in any
network of the same size.[11] Thus, every centrality measure can have its own centralization
measure. Defined formally, if Cx (pi ) is any centrality measure of point i, if Cx (p∗ ) is the
largest such measure in the network, and if:

\max \sum_{i=1}^{N} \left[ C_x(p_*) - C_x(p_i) \right]

is the largest sum of differences in point centrality Cx for any graph with the same number
of nodes, then the centralization of the network is:[11]
C_x = \frac{\sum_{i=1}^{N} \left[ C_x(p_*) - C_x(p_i) \right]}{\max \sum_{i=1}^{N} \left[ C_x(p_*) - C_x(p_i) \right]}.
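For degree centrality the definition can be sketched directly. The denominator (N−1)(N−2) is the standard maximum for raw degrees, attained by the star graph; the example graphs are my own:

```python
def degree_centralization(adj):
    """Freeman centralization using raw degree as the centrality C_x:
    sum of (C_max - C_i) over all nodes, divided by the star-graph maximum."""
    N = len(adj)
    deg = {v: len(adj[v]) for v in adj}
    c_max = max(deg.values())
    numerator = sum(c_max - d for d in deg.values())
    denominator = (N - 1) * (N - 2)   # star: centre degree N-1, leaves degree 1
    return numerator / denominator

# A star is maximally centralized; a path is much less so.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

The star graph evaluates to 1.0 by construction, which is the sanity check that the normalizing denominator is right for this centrality measure.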

89 https://en.wikipedia.org/wiki/Clique_(graph_theory)


57.12 Dissimilarity based centrality measures

Figure 144 In the illustrated network, the green and red nodes are the most dissimilar
because they do not share neighbors between them. So, the green node contributes more to
the centrality of the red one than the gray nodes do, because the red node can reach the blue
ones only through the green one, and the gray nodes are redundant for the red one, because it
can reach each gray node directly without any intermediary.

To obtain better results in the ranking of the nodes of a given network, dissimilarity
measures (specific to the theory of classification and data mining) are used in [33]
to enrich the centrality measures in complex networks. This is illustrated with eigenvector


centrality90 , calculating the centrality of each node through the solution of the eigenvalue
problem
W c = λc
where Wij = Aij Dij (coordinate-to-coordinate product) and Dij is an arbitrary dissimilarity91
matrix, defined through a dissimilarity measure, e.g., the Jaccard92 dissimilarity given
by

D_{ij} = 1 - \frac{|V^+(i) \cap V^+(j)|}{|V^+(i) \cup V^+(j)|}
This measure permits us to quantify the topological contribution (which is why it is
called contribution centrality) of each node to the centrality of a given node; nodes with
greater dissimilarity have more weight/relevance, since they allow the given node to
access nodes that it cannot access directly.
It is noteworthy that W is non-negative because A and D are non-negative matrices, so we
can use the Perron–Frobenius theorem93 to ensure that the above problem has a unique
solution for λ = λmax with c non-negative, allowing us to infer the centrality of each node in
the network. Therefore, the centrality of the i-th node is
c_i = \frac{1}{n} \sum_{j=1}^{n} W_{ij} c_j

where n is the number of nodes in the network. Several dissimilarity measures and
networks were tested in [34], obtaining improved results in the studied cases.
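A sketch of the construction under one assumption: V+(i) is taken to be the set of neighbors of i, which the text does not define explicitly. The example graph is hypothetical. Build the Jaccard dissimilarity D, form W as the coordinate-wise product with A, then extract the Perron vector by power iteration:

```python
import numpy as np

def contribution_centrality(A, iters=200):
    """Power iteration for the principal eigenvector of W = A * D (Hadamard
    product), with D the Jaccard dissimilarity between neighbourhoods."""
    N = A.shape[0]
    nbrs = [set(np.nonzero(A[i])[0]) for i in range(N)]
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            union = nbrs[i] | nbrs[j]
            if union:
                D[i, j] = 1 - len(nbrs[i] & nbrs[j]) / len(union)
    W = A * D                          # coordinate-to-coordinate product
    c = np.ones(N)
    for _ in range(iters):
        c = W @ c
        c /= np.linalg.norm(c)         # renormalize each step
    return c

# Triangle 0-1-2 plus a pendant vertex 3 attached to 2 (hypothetical).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
c = contribution_centrality(A)
```

Because W is non-negative and the example graph is connected and non-bipartite, the Perron–Frobenius conditions hold and the iteration settles on a positive vector.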

57.13 Extensions

Empirical and theoretical research has extended the concept of centrality in the context
of static networks to dynamic centrality[35] in the context of time-dependent and temporal
networks.[36][37][38]
For generalizations to weighted networks, see Opsahl et al. (2010).[39]
The concept of centrality was extended to a group level as well. For example, group
betweenness centrality shows the proportion of geodesics connecting pairs of non-group
members that pass through the group.[40][41]

57.14 See also


• Alpha centrality94
• Core–periphery structure95

90 https://en.wikipedia.org/wiki/Eigenvector_centrality
91 https://en.wikipedia.org/wiki/Matrix_similarity
92 https://en.wikipedia.org/wiki/Jaccard_index
93 https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem
94 https://en.wikipedia.org/wiki/Alpha_centrality
95 https://en.wikipedia.org/wiki/Core%E2%80%93periphery_structure


• Distance in graphs96

57.15 Notes and references


1. Newman, M.E.J. 2010. Networks: An Introduction. Oxford, UK: Oxford University
Press.
2. B, P (1987). ”P  C: A F  M”.
American Journal of Sociology. 92 (5): 1170–1182. doi97 :10.1086/22863198 .
3. B, S P. (2005). ”C  N
F”. Social Networks. 27: 55–71. CiteSeerX99 10.1.1.387.419100 .
101
doi :10.1016/j.socnet.2004.11.008 . 102

4. C F. A. N, U N. M, H P. H, R


P, G P. L, J. P L, I R, J H, V S.
B. (2018). ”E     -
  ”103 . Proceedings of the National Academy of Sciences.
115 (52): E12201–E12208. doi104 :10.1073/pnas.1810452115105 . PMC106 6310864107 .
PMID108 30530700109.
5. B, S P.; E, M G. (2006). ”A G-
T P  C”. Social Networks. 28 (4): 466–484.
doi111 :10.1016/j.socnet.2005.11.005112 .
6. B, M; K, C (2013). ”A    -
  ”. SIAM Journal on Matrix Analysis and Applications.
36 (2): 686–706. arXiv113 :1312.6722114 . doi115 :10.1137/130950550116 .
7. Michalak, Aadithya, Szczepański, Ravindran, & Jennings 117

96 https://en.wikipedia.org/wiki/Distance_(graph_theory)
97 https://en.wikipedia.org/wiki/Doi_(identifier)
98 https://doi.org/10.1086%2F228631
99 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
100 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.387.419
101 https://en.wikipedia.org/wiki/Doi_(identifier)
102 https://doi.org/10.1016%2Fj.socnet.2004.11.008
103 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6310864
104 https://en.wikipedia.org/wiki/Doi_(identifier)
105 https://doi.org/10.1073%2Fpnas.1810452115
106 https://en.wikipedia.org/wiki/PMC_(identifier)
107 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6310864
108 https://en.wikipedia.org/wiki/PMID_(identifier)
109 http://pubmed.ncbi.nlm.nih.gov/30530700
110 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1016%2Fj.socnet.2005.11.005
113 https://en.wikipedia.org/wiki/ArXiv_(identifier)
114 http://arxiv.org/abs/1312.6722
115 https://en.wikipedia.org/wiki/Doi_(identifier)
116 https://doi.org/10.1137%2F130950550
117 https://arxiv.org/pdf/1402.0567.pdf


8. H, X; S, L (2003). ”O A D  O-


”. Games and Economic Behavior. 45: 132–170. doi118 :10.1016/s0899-
8256(03)00130-1119 .
9. H, X (2020). ”S       -
   ”. Journal of Big Data. 7. doi120 :10.1186/s40537-
020-00300-1121 .
10. K, D122 (J 1990). ”A  P L:
S, C,  P  O”. Administrative Science
Quarterly. 35 (2): 342–369. doi123 :10.2307/2393394124 . JSTOR125 2393394126 .
11. F, L C. (1979), ”   : C-
 ”127 (PDF), Social Networks, 1 (3): 215–239, Cite-
SeerX128 10.1.1.227.9549129 , doi130 :10.1016/0378-8733(78)90021-7131 , archived from
the original132 (PDF) on 2016-02-22, retrieved 2014-07-31
12. L, G (2015). ”U    
    :  - ”133 . Sci
Rep. 5: 8665. arXiv134 :1405.6707135 . Bibcode136 :2015NatSR...5E8665L137 .
doi138 :10.1038/srep08665139 . PMC140 4345333141 . PMID142 25727453143 .
13. da Silva, R; Viana, M; da F. Costa, L (2012). ”Predicting epidemic outbreak
from individual features of the spreaders”.

118 https://en.wikipedia.org/wiki/Doi_(identifier)
119 https://doi.org/10.1016%2Fs0899-8256%2803%2900130-1
120 https://en.wikipedia.org/wiki/Doi_(identifier)
121 https://doi.org/10.1186%2Fs40537-020-00300-1
122 https://en.wikipedia.org/wiki/David_Krackhardt
123 https://en.wikipedia.org/wiki/Doi_(identifier)
124 https://doi.org/10.2307%2F2393394
125 https://en.wikipedia.org/wiki/JSTOR_(identifier)
126 http://www.jstor.org/stable/2393394
https://web.archive.org/web/20160222033108/http://leonidzhukov.ru/hse/2013/
127
socialnetworks/papers/freeman79-centrality.pdf
128 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
129 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.227.9549
130 https://en.wikipedia.org/wiki/Doi_(identifier)
131 https://doi.org/10.1016%2F0378-8733%2878%2990021-7
132 http://leonidzhukov.ru/hse/2013/socialnetworks/papers/freeman79-centrality.pdf
133 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4345333
134 https://en.wikipedia.org/wiki/ArXiv_(identifier)
135 http://arxiv.org/abs/1405.6707
136 https://en.wikipedia.org/wiki/Bibcode_(identifier)
137 https://ui.adsabs.harvard.edu/abs/2015NatSR...5E8665L
138 https://en.wikipedia.org/wiki/Doi_(identifier)
139 https://doi.org/10.1038%2Fsrep08665
140 https://en.wikipedia.org/wiki/PMC_(identifier)
141 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4345333
142 https://en.wikipedia.org/wiki/PMID_(identifier)
143 http://pubmed.ncbi.nlm.nih.gov/25727453


J. Stat. Mech.: Theory Exp. 2012 (7): P07005. arXiv144 :1202.0024145 . Bib-
code146 :2012JSMTE..07..005A147 . doi148 :10.1088/1742-5468/2012/07/p07005149 .
14. B, F; L, J (2012). ”I  
       : A 
 ”. Europhys Lett. 99 (6): 68007. arXiv150 :1203.0502151 . Bib-
code152 :2012EL.....9968007B153 . doi154 :10.1209/0295-5075/99/68007155 .
15. S, M; L, A; A-F, N; S, H
(2013). ”E  --      
   ?”. The European Physical Journal B. 86 (10):
1–13. arXiv156 :1110.2558157 . doi158 :10.1140/epjb/e2013-31025-5159 .
16. G, G.; B, A L (2011). ”R  
-    ”. Nat Commun. 2:
394. Bibcode160 :2011NatCo...2..394G161 . doi162 :10.1038/ncomms1396163 .
PMID164 21772265165 .
17. Freeman, Linton C. ”Centrality in social networks conceptual clarification.” Social
networks 1.3 (1979): 215–239.
18. Alex Bavelas. Communication patterns in task-oriented groups. J. Acoust. Soc. Am,
22(6):725–730, 1950.
19. S, G (1966). ”T     ”. Psychome-
trika. 31 (4): 581–603. doi166 :10.1007/bf02289527167 . hdl168 :10338.dmlcz/101401169 .
PMID170 5232444171 .
20. M, M; L, V (2000), ”H   -
”, Physica A: Statistical Mechanics and Its Applications, 285 (3–

144 https://en.wikipedia.org/wiki/ArXiv_(identifier)
145 http://arxiv.org/abs/1202.0024
146 https://en.wikipedia.org/wiki/Bibcode_(identifier)
147 https://ui.adsabs.harvard.edu/abs/2012JSMTE..07..005A
148 https://en.wikipedia.org/wiki/Doi_(identifier)
149 https://doi.org/10.1088%2F1742-5468%2F2012%2F07%2Fp07005
150 https://en.wikipedia.org/wiki/ArXiv_(identifier)
151 http://arxiv.org/abs/1203.0502
152 https://en.wikipedia.org/wiki/Bibcode_(identifier)
153 https://ui.adsabs.harvard.edu/abs/2012EL.....9968007B
154 https://en.wikipedia.org/wiki/Doi_(identifier)
155 https://doi.org/10.1209%2F0295-5075%2F99%2F68007
156 https://en.wikipedia.org/wiki/ArXiv_(identifier)
157 http://arxiv.org/abs/1110.2558
158 https://en.wikipedia.org/wiki/Doi_(identifier)
159 https://doi.org/10.1140%2Fepjb%2Fe2013-31025-5
160 https://en.wikipedia.org/wiki/Bibcode_(identifier)
161 https://ui.adsabs.harvard.edu/abs/2011NatCo...2..394G
162 https://en.wikipedia.org/wiki/Doi_(identifier)
163 https://doi.org/10.1038%2Fncomms1396
164 https://en.wikipedia.org/wiki/PMID_(identifier)
165 http://pubmed.ncbi.nlm.nih.gov/21772265
166 https://en.wikipedia.org/wiki/Doi_(identifier)
167 https://doi.org/10.1007%2Fbf02289527
168 https://en.wikipedia.org/wiki/Hdl_(identifier)
169 http://hdl.handle.net/10338.dmlcz%2F101401
170 https://en.wikipedia.org/wiki/PMID_(identifier)
171 http://pubmed.ncbi.nlm.nih.gov/5232444


4): 539–546, arXiv172 :cond-mat/0008357173 , Bibcode174 :2000PhyA..285..539M175 ,


doi176 :10.1016/s0378-4371(00)00311-3177
21. D, A (2005). ”C D  S N A-
”178 . Journal of Social Structure. 6 (3).
22. Y R. Closeness centrality extended to unconnected graphs: The har-
monic centrality index179 (PDF). A  S N A,
ASNA 2009.
23. F, L (1977). ”A       
”. Sociometry. 40 (1): 35–41. doi180 :10.2307/3033543181 . JS-
TOR182 3033543183 .
24. B, U (2001). ”A     -
”184 (PDF). Journal of Mathematical Sociology. 25 (2): 163–177. Cite-
SeerX185 10.1.1.11.2024186 . doi187 :10.1080/0022250x.2001.9990249188 . Retrieved Oc-
tober 11, 2011.
25. Newman, M. E. J. ”The mathematics of networks”189 (PDF). Retrieved
2006-11-09.
26. ”American Mathematical Society”191.
27. Newman, M. E. J. ”The mathematics of networks”192 (PDF). Retrieved
2006-11-09.
28. Katz, L. 1953. A New Status Index Derived from Sociometric Index. Psychometrika,
39–43.
29. B, P (1991). ”S    ”.
Social Networks. 13 (2): 155–168. doi194 :10.1016/0378-8733(91)90018-o195 .

172 https://en.wikipedia.org/wiki/ArXiv_(identifier)
173 http://arxiv.org/abs/cond-mat/0008357
174 https://en.wikipedia.org/wiki/Bibcode_(identifier)
175 https://ui.adsabs.harvard.edu/abs/2000PhyA..285..539M
176 https://en.wikipedia.org/wiki/Doi_(identifier)
177 https://doi.org/10.1016%2Fs0378-4371%2800%2900311-3
178 http://www.cmu.edu/joss/content/articles/volume6/dekker/index.html
179 http://infoscience.epfl.ch/record/200525/files/%5bEN%5dASNA09.pdf
180 https://en.wikipedia.org/wiki/Doi_(identifier)
181 https://doi.org/10.2307%2F3033543
182 https://en.wikipedia.org/wiki/JSTOR_(identifier)
183 http://www.jstor.org/stable/3033543
184 http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.2024
185 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
186 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.2024
187 https://en.wikipedia.org/wiki/Doi_(identifier)
188 https://doi.org/10.1080%2F0022250x.2001.9990249
189 http://www-personal.umich.edu/~mejn/papers/palgrave.pdf
190 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
191 http://www.ams.org/samplings/feature-column/fcarc-pagerank
192 http://www-personal.umich.edu/~mejn/papers/palgrave.pdf
193 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
194 https://en.wikipedia.org/wiki/Doi_(identifier)
195 https://doi.org/10.1016%2F0378-8733%2891%2990018-o


30. How does Google rank webpages?196 Archived197 January 31, 2012, at the Wayback
Machine198 20Q: About Networked Life
31. P, M.; P, M.; H, L. (2013). ”P-
 C: Q G-T I  N
 P  N”199 . PLOS One. 8 (1): e53095.
Bibcode200 :2013PLoSO...853095P201 . doi202 :10.1371/journal.pone.0053095203 .
PMC204 3551907205 . PMID206 23349699207 .
32. F, M R (2013). ”A S  XSS W P-
  D M  O S N”. IEEE
Transactions on Information Forensics and Security. 8 (11): 1815–1826.
doi208 :10.1109/TIFS.2013.2280884209 .
33. A-S, A. J.; H-A, G. C.; G-D, L. A.
(2015-11-25). ”E     
    ”210 . Scientific Reports. 5: 17095. Bib-
code211 :2015NatSR...517095A212 . doi213 :10.1038/srep17095214 . PMC215 4658528216 .
PMID217 26603652218 .
34. A-S, A.J.; H-A; G-D, L. A. ”S-
 I  E    -
      ”219 (PDF). N P-
 G.
35. B, D.; B-Y, Y. (2006). ”F C  T-
 F: D C  C N”. Complexity.

196 http://scenic.princeton.edu/network20q/lectures/Q3_notes.pdf
https://web.archive.org/web/20120131083328/http://scenic.princeton.edu/network20q/
197
lectures/Q3_notes.pdf
198 https://en.wikipedia.org/wiki/Wayback_Machine
199 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3551907
200 https://en.wikipedia.org/wiki/Bibcode_(identifier)
201 https://ui.adsabs.harvard.edu/abs/2013PLoSO...853095P
202 https://en.wikipedia.org/wiki/Doi_(identifier)
203 https://doi.org/10.1371%2Fjournal.pone.0053095
204 https://en.wikipedia.org/wiki/PMC_(identifier)
205 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3551907
206 https://en.wikipedia.org/wiki/PMID_(identifier)
207 http://pubmed.ncbi.nlm.nih.gov/23349699
208 https://en.wikipedia.org/wiki/Doi_(identifier)
209 https://doi.org/10.1109%2FTIFS.2013.2280884
210 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4658528
211 https://en.wikipedia.org/wiki/Bibcode_(identifier)
212 https://ui.adsabs.harvard.edu/abs/2015NatSR...517095A
213 https://en.wikipedia.org/wiki/Doi_(identifier)
214 https://doi.org/10.1038%2Fsrep17095
215 https://en.wikipedia.org/wiki/PMC_(identifier)
216 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4658528
217 https://en.wikipedia.org/wiki/PMID_(identifier)
218 http://pubmed.ncbi.nlm.nih.gov/26603652
http://www.nature.com/article-assets/npg/srep/2015/151125/srep17095/extref/srep17095-
219
s1.pdf


12 (2): 59–63. arXiv220:physics/0611295221. Bibcode222:2006Cmplx..12b..59B223.
doi224:10.1002/cplx.20156225.
36. H, S.A.; B, D. (2010). ”D M  T-D C-
 N”. Physical Review E. 82 (4): 046105. arXiv226 :0901.4407227 .
Bibcode228 :2010PhRvE..82d6105H229 . doi230 :10.1103/physreve.82.046105231 .
232
PMID 21230343 . 233

37. Gross, T. and Sayama, H. (Eds.). 2009. Adaptive Networks: Theory, Models and
Applications. Springer.
38. Holme, P. and Saramäki, J. 2013. Temporal Networks. Springer.
39. O, T; A, F; S, J (2010). ”N -
   : G    ”234 .
Social Networks. 32 (3): 245–251. doi235 :10.1016/j.socnet.2010.03.006236 .
40. Everett, M. G. and Borgatti, S. P. (2005). Extending centrality. In P. J. Carrington,
J. Scott and S. Wasserman (Eds.), Models and methods in social network analysis (pp.
57–76). New York: Cambridge University Press.
41. Puzis, R., Yagil, D., Elovici, Y., Braha, D. (2009).Collaborative attack on Inter-
net users’ anonymity237 Archived238 2013-12-07 at the Wayback Machine239 , Internet
Research 19(1)

57.16 Further reading


• Koschützki, D.; Lehmann, K. A.; Peeters, L.; Richter, S.; Tenfelde-Podehl, D. and Zlo-
towski, O. (2005) Centrality Indices. In Brandes, U. and Erlebach, T. (Eds.) Network
Analysis: Methodological Foundations, pp. 16–61, LNCS 3418, Springer-Verlag.

220 https://en.wikipedia.org/wiki/ArXiv_(identifier)
221 http://arxiv.org/abs/physics/0611295
222 https://en.wikipedia.org/wiki/Bibcode_(identifier)
223 https://ui.adsabs.harvard.edu/abs/2006Cmplx..12b..59B
224 https://en.wikipedia.org/wiki/Doi_(identifier)
225 https://doi.org/10.1002%2Fcplx.20156
226 https://en.wikipedia.org/wiki/ArXiv_(identifier)
227 http://arxiv.org/abs/0901.4407
228 https://en.wikipedia.org/wiki/Bibcode_(identifier)
229 https://ui.adsabs.harvard.edu/abs/2010PhRvE..82d6105H
230 https://en.wikipedia.org/wiki/Doi_(identifier)
231 https://doi.org/10.1103%2Fphysreve.82.046105
232 https://en.wikipedia.org/wiki/PMID_(identifier)
233 http://pubmed.ncbi.nlm.nih.gov/21230343
http://toreopsahl.com/2010/04/21/article-node-centrality-in-weighted-networks-
234
generalizing-degree-and-shortest-paths/
235 https://en.wikipedia.org/wiki/Doi_(identifier)
236 https://doi.org/10.1016%2Fj.socnet.2010.03.006
237 http://necsi.edu/affiliates/braha/Internet_Research_Anonimity.pdf
https://web.archive.org/web/20131207133417/http://necsi.edu/affiliates/braha/
238
Internet_Research_Anonimity.pdf
239 https://en.wikipedia.org/wiki/Wayback_Machine

58 Chaitin's algorithm

Chaitin's algorithm is a bottom-up, graph coloring1 register allocation2 algorithm3 that
uses cost/degree as its spill metric4 . It is named after its designer, Gregory Chaitin5 .
Chaitin's algorithm was the first register allocation6 algorithm that made use of coloring of
the interference graph7 for both register allocations and spilling.
Chaitin's algorithm was presented at the 1982 SIGPLAN8 Symposium on Compiler Con-
struction, and published in the symposium proceedings. It was an extension of an earlier 1981
paper on the use of graph coloring for register allocation. Chaitin's algorithm formed the
basis of a large body of research into register allocators.
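The cost/degree spill metric can be illustrated with a loose sketch of a simplify/spill loop. Note this optimistic variant merely leaves spill candidates uncolored, whereas Chaitin's actual algorithm inserts spill code and rebuilds the interference graph; the register count and costs below are hypothetical:

```python
def chaitin_color(adj, costs, k):
    """Illustrative simplify/spill sketch. adj: node -> set of interfering
    nodes; costs: estimated spill cost per node; k: number of registers.
    Spill candidates minimize cost/degree, Chaitin's spill metric."""
    g = {v: set(adj[v]) for v in adj}
    stack, spilled = [], set()
    while g:
        trivial = [v for v in g if len(g[v]) < k]
        if trivial:
            v = trivial[0]            # simplify: degree < k is always colorable
        else:
            v = min(g, key=lambda u: costs[u] / len(g[u]))  # cost/degree metric
            spilled.add(v)
        stack.append(v)
        for u in g.pop(v):
            g[u].discard(v)
    colors = {}
    for v in reversed(stack):         # color in reverse removal order
        if v in spilled:
            continue
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = min(c for c in range(k) if c not in used)
    return colors, spilled

# A triangle of mutually interfering values cannot be 2-colored: one spills.
colors, spilled = chaitin_color({'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}},
                                {'a': 1, 'b': 2, 'c': 3}, k=2)
```

With equal degrees, the cheapest value ('a') has the lowest cost/degree ratio and is chosen to spill, freeing the remaining two values to share the two registers.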

58.1 References
• G C (A 2004). ”R     
”. ACM SIGPLAN Notices. 39 (4): 66–74. doi9 :10.1145/989393.98940310 .


1 https://en.wikipedia.org/wiki/Graph_coloring
2 https://en.wikipedia.org/wiki/Register_allocation
3 https://en.wikipedia.org/wiki/Algorithm
4 https://en.wikipedia.org/wiki/Spill_metric
5 https://en.wikipedia.org/wiki/Gregory_Chaitin
6 https://en.wikipedia.org/wiki/Register_allocation
7 https://en.wikipedia.org/w/index.php?title=Interference_graph&action=edit&redlink=1
8 https://en.wikipedia.org/wiki/SIGPLAN
9 https://en.wikipedia.org/wiki/Doi_(identifier)
10 https://doi.org/10.1145%2F989393.989403
11 https://en.wikipedia.org/wiki/Algorithm
12 https://en.wikipedia.org/wiki/Data_structure
13 https://en.wikipedia.org/wiki/Wikipedia:Stub
14 https://en.wikipedia.org/w/index.php?title=Chaitin%27s_algorithm&action=edit
15 https://en.wikipedia.org/wiki/Template:Algorithm-stub
16 https://en.wikipedia.org/wiki/Template_talk:Algorithm-stub
17 https://en.wikipedia.org/w/index.php?title=Template:Algorithm-stub&action=edit

59 Christofides algorithm

The Christofides algorithm or Christofides–Serdyukov algorithm is an algorithm1
for finding approximate solutions to the travelling salesman problem2 , on instances where
the distances form a metric space3 (they are symmetric and obey the triangle inequality4 ).[1]
It is an approximation algorithm5 that guarantees that its solutions will be within a factor
of 3/2 of the optimal solution length, and is named after Nicos Christofides6 and Anatoliy
I. Serdyukov7 , who discovered it independently in 1976.[2][3][4]
As of 2019, this is the best approximation ratio9 that has been proven for the traveling
salesman problem on general metric spaces, although better approximations are known for
some special cases.

59.1 Algorithm

Let G = (V,w) be an instance of the travelling salesman problem. That is, G is a complete
graph on the set V of vertices, and the function w assigns a nonnegative real weight to
every edge of G. According to the triangle inequality, for every three vertices u, v, and x, it
should be the case that w(uv) + w(vx) ≥ w(ux).
Then the algorithm can be described in pseudocode10 as follows.[1]
1. Create a minimum spanning tree11 T of G.
2. Let O be the set of vertices with odd degree12 in T. By the handshaking lemma13 , O
has an even number of vertices.
3. Find a minimum-weight perfect matching14 M in the induced subgraph15 given by the
vertices from O.

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Travelling_salesman_problem
3 https://en.wikipedia.org/wiki/Metric_space
4 https://en.wikipedia.org/wiki/Triangle_inequality
5 https://en.wikipedia.org/wiki/Approximation_algorithm
6 https://en.wikipedia.org/w/index.php?title=Nicos_Christofides&action=edit&redlink=1
https://en.wikipedia.org/w/index.php?title=Anatoliy_I._Serdyukov&action=edit&redlink=
7
1
9 https://en.wikipedia.org/wiki/Approximation_ratio
10 https://en.wikipedia.org/wiki/Pseudocode
11 https://en.wikipedia.org/wiki/Minimum_spanning_tree
12 https://en.wikipedia.org/wiki/Degree_(graph_theory)
13 https://en.wikipedia.org/wiki/Handshaking_lemma
14 https://en.wikipedia.org/wiki/Perfect_matching
15 https://en.wikipedia.org/wiki/Induced_subgraph


4. Combine the edges of M and T to form a connected multigraph16 H in which each
vertex has even degree.
5. Form an Eulerian circuit17 in H.
6. Make the circuit found in previous step into a Hamiltonian circuit18 by skipping
repeated vertices (shortcutting).

59.2 Approximation ratio

The cost of the solution produced by the algorithm is within 3/2 of the optimum. To prove
this, let C be the optimal traveling salesman tour. Removing an edge from C produces a
spanning tree, which must have weight at least that of the minimum spanning tree, implying
that w(T) ≤ w(C). Next, number the vertices of O in cyclic order around C, and partition
C into two sets of paths: the ones in which the first path vertex in cyclic order has an odd
number and the ones in which the first path vertex has an even number. Each set of paths
corresponds to a perfect matching of O that matches the two endpoints of each path, and
the weight of this matching is at most equal to the weight of the paths. Since these two
sets of paths partition the edges of C, one of the two sets has at most half of the weight of
C, and thanks to the triangle inequality its corresponding matching has weight that is also
at most half the weight of C. The minimum-weight perfect matching can have no larger
weight, so w(M) ≤ w(C)/2. Adding the weights of T and M gives the weight of the Euler
tour, at most 3w(C)/2. Thanks to the triangle inequality, shortcutting does not increase
the weight, so the weight of the output is also at most 3w(C)/2.[1]

59.3 Lower bounds

There exist inputs to the travelling salesman problem that cause the Christofides algorithm
to find a solution whose approximation ratio is arbitrarily close to 3/2. One such class of
inputs is formed by a path19 of n vertices, with the path edges having weight 1, together
with a set of edges connecting vertices two steps apart in the path with weight 1 + ε for a
number ε chosen close to zero but positive. All remaining edges of the complete graph have
distances given by the shortest paths20 in this subgraph. Then the minimum spanning tree
will be given by the path, of length n − 1, and the only two odd vertices will be the path
endpoints, whose perfect matching consists of a single edge with weight approximately n/2.
The union of the tree and the matching is a cycle, with no possible shortcuts, and with
weight approximately 3n/2. However, the optimal solution uses the edges of weight 1 + ε
together with two weight-1 edges incident to the endpoints of the path, and has total weight
(1 + ε)(n − 2) + 2, close to n for small values of ε. Hence we obtain an approximation
ratio of 3/2.[5]
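In this construction the ratio of the Christofides tour to the optimum is approximately

```latex
\frac{3n/2}{(1+\varepsilon)(n-2)+2} \;\longrightarrow\; \frac{3}{2}
\qquad (n \to \infty,\ \varepsilon \to 0^{+}),
```

which shows the 3/2 bound is tight for this family of inputs.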

16 https://en.wikipedia.org/wiki/Multigraph
17 https://en.wikipedia.org/wiki/Eulerian_circuit
18 https://en.wikipedia.org/wiki/Hamiltonian_circuit
19 https://en.wikipedia.org/wiki/Path_(graph_theory)
20 https://en.wikipedia.org/wiki/Shortest_path_problem


59.4 Example

Given: complete graph whose edge weights obey the triangle inequality

Figure 146

Calculate minimum spanning tree21 T

Figure 147

21 https://en.wikipedia.org/wiki/Minimum_spanning_tree


Calculate the set of vertices O with odd degree in T

Figure 148

Form the subgraph of G using only the vertices of O

Figure 149

Construct a minimum-weight perfect matching M in this subgraph

Figure 150

Unite matching and spanning tree T ∪ M to form an Eulerian multigraph

Figure 151

Calculate Euler tour

Figure 152

Remove repeated vertices, giving the algorithm's output

Figure 153

59.5 References
1. G, M T.22 ; T, R23 (2015), ”18.1.2 T
C A A”, Algorithm Design and Applications,
Wiley, pp. 513–514.
2. Nicos Christofides, Worst-case analysis of a new heuristic for the travelling salesman
problem, Report 388, Graduate School of Industrial Administration, CMU, 1976.

22 https://en.wikipedia.org/wiki/Michael_T._Goodrich
23 https://en.wikipedia.org/wiki/Roberto_Tamassia

3. van Bevern, René; Slugina, Viktoriia A. (2020-04-06). ”A historical note
on the 3/2-approximation algorithm for the metric traveling salesman
problem”. arXiv:2004.02437 [cs, math]. arXiv24 :2004.0243725 . Bib-
code26 :2020arXiv200402437V27 .
4. Serdyukov, A. I. (1978). ”О НЕКОТОРЫХ ЭКСТРЕМАЛЬНЫХ ОБХОДАХ
В ГРАФАХ”28 [On some extremal walks in graphs] (PDF). Upravlyaemye
Sistemy (in Russian). 17: 76–79.
5. Bläser, Markus (2008), ”Metric TSP”29 , in Kao, Ming-Yang (ed.), Encyclo-
pedia of Algorithms, Springer-Verlag, pp. 517–519, ISBN30 978038730770131 .

59.6 External links


• NIST Christofides Algorithm Definition32

24 https://en.wikipedia.org/wiki/ArXiv_(identifier)
25 http://arxiv.org/abs/2004.02437
26 https://en.wikipedia.org/wiki/Bibcode_(identifier)
27 https://ui.adsabs.harvard.edu/abs/2020arXiv200402437V
28 http://nas1.math.nsc.ru/aim/journals/us/us17/us17_007.pdf
29 https://books.google.com/books?id=i3S9_GnHZwYC&pg=PA517
30 https://en.wikipedia.org/wiki/ISBN_(identifier)
31 https://en.wikipedia.org/wiki/Special:BookSources/9780387307701
32 https://xlinux.nist.gov/dads/HTML/christofides.html

60 Clique percolation method

The clique percolation method[1] is a popular approach for analyzing the overlapping
community structure1 of networks2 . The term network community (also called a module,
cluster or cohesive group) has no widely accepted unique definition and it is usually de-
fined as a group of nodes that are more densely connected to each other than to other
nodes in the network. There are numerous alternative methods for detecting communities
in networks,[2] for example, the Girvan–Newman algorithm3 , hierarchical clustering4 and
modularity5 maximization.

60.1 Definitions

60.1.1 Clique Percolation Method (CPM)

The clique percolation method builds up the communities from k-cliques6 , which correspond
to complete (fully connected) sub-graphs of k nodes. (E.g., a k-clique at k = 3 is equivalent
to a triangle). Two k-cliques are considered adjacent if they share k − 1 nodes. A community
is defined as the maximal union of k-cliques that can be reached from each other through
a series of adjacent k-cliques. Such communities can be best interpreted with the help of a
k-clique template (an object isomorphic to a complete graph of k nodes). Such a template
can be placed onto any k-clique in the graph, and rolled to an adjacent k-clique by relocating
one of its nodes and keeping its other k − 1 nodes fixed. Thus, the k-clique communities of
a network are all those sub-graphs that can be fully explored by rolling a k-clique template
in them, but cannot be left by this template.
This definition allows overlaps between the communities in a natural way, as illustrated
in Fig.1, showing four k-clique communities at k = 4. The communities are color-coded
and the overlap between them is emphasized in red. The definition above is also local:
if a certain sub-graph fulfills the criteria to be considered as a community, then it will
remain a community independent of what happens to another part of the network far
away. In contrast, when searching for the communities by optimizing with respect to a
global quantity, a change far away in the network can reshape the communities in the
unperturbed regions as well. Furthermore, it has been shown that global methods can
suffer from a resolution limit problem,[3] where the size of the smallest community that can

1 https://en.wikipedia.org/wiki/Community_structure
2 https://en.wikipedia.org/wiki/Social_network
3 https://en.wikipedia.org/wiki/Girvan%E2%80%93Newman_algorithm
4 https://en.wikipedia.org/wiki/Hierarchical_clustering
5 https://en.wikipedia.org/wiki/Modularity_(networks)
6 https://en.wikipedia.org/wiki/Clique_(graph_theory)

be extracted is dependent on the system size. A local community definition such as here
circumvents this problem automatically.
Since even small networks can contain a vast number of k-cliques, the implementation of this
approach is based on locating all maximal cliques7 rather than the individual k-cliques.[1]
This inevitably requires finding the graph's maximum clique, which is an NP-hard8 problem.
(Note that finding a maximum clique is much harder than finding
a single maximal clique.) This means that although networks with a few million nodes have
already been analyzed successfully with this approach,[4] the worst case runtime complexity
is exponential in the number of nodes.
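
For small graphs, the definition can be implemented directly by enumerating the
individual k-cliques (a sketch with names of our choosing; production implementations
work from maximal cliques instead, as noted above):

```python
import itertools

# Clique percolation for small graphs: enumerate the k-cliques, link those
# sharing k - 1 nodes, and return the vertex sets of the percolation clusters.

def k_clique_communities(nodes, edges, k):
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cliques = [frozenset(c) for c in itertools.combinations(nodes, k)
               if all(b in adj[a] for a, b in itertools.combinations(c, 2))]
    parent = {c: c for c in cliques}       # union-find over the k-cliques
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for a, b in itertools.combinations(cliques, 2):
        if len(a & b) == k - 1:            # adjacent k-cliques
            parent[find(a)] = find(b)
    communities = {}
    for c in cliques:
        communities.setdefault(find(c), set()).update(c)
    return list(communities.values())
```

Two triangles sharing a single vertex form two communities at k = 3 that overlap in
that vertex; two triangles sharing an edge merge into one community.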

Figure 154 Fig.1. Illustration of the k-clique communities at k = 4.

7 https://en.wikipedia.org/wiki/Clique_(graph_theory)
8 https://en.wikipedia.org/wiki/NP-hardness

60.1.2 Directed Clique Percolation Method (CPMd)

On a network with directed links a directed k-clique is a complete subgraph with k nodes
fulfilling the following condition. The k nodes can be ordered such that between an arbitrary
pair of them there exists a directed link pointing from the node with the higher rank towards
the node with the lower rank. The directed Clique Percolation Method defines directed
network communities as the percolation clusters of directed k-cliques.

60.1.3 Weighted Clique Percolation Method (CPMw)

On a network with weighted links a weighted k-clique is a complete subgraph with k nodes
such that the geometric mean9 of the k(k − 1)/2 link weights within the k-clique is greater
than a selected threshold value, I. The weighted Clique Percolation Method defines weighted
network communities as the percolation clusters of weighted k-cliques. Note that the geo-
metric mean of link weights within a subgraph is called the intensity of that subgraph.[5]
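
The acceptance test can be sketched as follows (the function name and the example
weights are ours):

```python
import itertools
import math

# Intensity of a subgraph: the geometric mean of its link weights. A weighted
# k-clique is accepted when its intensity exceeds the chosen threshold I.

def intensity(clique, weight):
    ws = [weight[frozenset(p)] for p in itertools.combinations(clique, 2)]
    return math.prod(ws) ** (1.0 / len(ws))

weight = {frozenset(p): w for p, w in {(0, 1): 4.0, (0, 2): 1.0, (1, 2): 2.0}.items()}
I = 1.5
print(intensity((0, 1, 2), weight) > I)   # geometric mean of 4, 1, 2 is 2.0
```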

60.1.4 Clique Graph Generalizations

Clique percolation methods may be generalized by recording different amounts of overlap
between the various k-cliques. This then defines a new type of graph, a clique graph,[6]
where each k-clique in the original graph is represented by a vertex in the new clique graph.
The edges in the clique graph are used to record the strength of the overlap of cliques in
the original graph. One may then apply any community detection10 method to this clique
graph to identify the clusters in the original graph through the k-clique structure.
For instance in a simple graph, we can define the overlap between two k-cliques to be
the number of vertices common to both k-cliques. The Clique Percolation Method is then
equivalent to thresholding this clique graph, dropping all edges of weight less than (k-1),
with the remaining connected components forming the communities of cliques found in
CPM. For k=2 the cliques are the edges of the original graph and the clique graph in this
case is the line graph11 of the original network.
In practice, using the number of common vertices as a measure of the strength of clique
overlap may give poor results as large cliques in the original graph, those with many more
than k vertices, will dominate the clique graph. The problem arises because if a vertex is
in n different k-cliques it will contribute to n(n-1)/2 edges in such a clique graph. A simple
solution is to let each vertex common to two overlapping k-cliques contribute a weight
equal to 1/n when measuring the overlap strength of the two k-cliques.
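
This weighting scheme can be sketched directly (names are ours):

```python
import itertools

# Clique-graph construction: each vertex shared by two k-cliques contributes
# 1/n to their edge weight, where n is the number of k-cliques containing it.

def clique_graph(cliques):
    membership = {}
    for c in cliques:
        for v in c:
            membership[v] = membership.get(v, 0) + 1
    weights = {}
    for i, j in itertools.combinations(range(len(cliques)), 2):
        shared = cliques[i] & cliques[j]
        if shared:
            weights[(i, j)] = sum(1.0 / membership[v] for v in shared)
    return weights
```
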
In general the clique graph viewpoint is a useful way of finding generalizations of standard
clique-percolation methods to get around any problems encountered. It even shows how to

9 https://en.wikipedia.org/wiki/Geometric_mean
10 https://en.wikipedia.org/wiki/Community_structure
11 https://en.wikipedia.org/wiki/Line_graph

describe extensions of these methods based on other motifs12 , subgraphs other than k-cliques.
In this case a clique graph is best thought of as a particular example of a hypergraph13 .

60.2 Percolation transition in the CPM

The Erdős–Rényi model14 shows a series of interesting transitions when the probability
p of two nodes being connected is increased. For each k one can find a certain threshold
probability pc above which the k-cliques organize into a giant community.[7][8][9] (The size of
the giant community is comparable to the system size, in other words the giant community
occupies a finite part of the system even in the thermodynamic limit.) This transition is
analogous to the percolation transition15 in statistical physics16 . A similar phenomenon can
be observed in many real networks as well: if k is large, only the most densely linked parts
are accepted as communities, thus, they usually remain small and dispersed. When k is
lowered, both the number and the size of the communities start to grow. However, in most
cases a critical k value can be reached, below which a giant community emerges, smearing
out the details of the community structure by merging (and making invisible) many smaller
communities.
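
The critical point found by Derényi et al. for k-clique percolation in an Erdős–Rényi
graph of N nodes is p_c(k) = ((k − 1)N)^(−1/(k−1)), which can be tabulated directly
(a small sketch; note that for k = 2 it reduces to the classical giant-component
threshold 1/N):

```python
# Critical probability for k-clique percolation in an Erdos-Renyi graph of
# N nodes: p_c(k) = ((k - 1) * N) ** (-1 / (k - 1)), following reference [7].

def p_c(k, N):
    return ((k - 1) * N) ** (-1.0 / (k - 1))

for k in (2, 3, 4):
    print(k, p_c(k, N=10**4))
```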

60.3 Applications

The clique percolation method has been used to detect communities in fields ranging from studies
of cancer metastasis17[10][11] through various social networks18[4][12][13][14][15] to document
clustering[16] and economic networks.[17]

60.4 Algorithms and Software

There are a number of implementations of clique percolation. The clique percolation method
was first implemented and popularized by CFinder [1]19 (freeware for non-commercial use)
software for detecting and visualizing overlapping communities in networks. The program
enables customizable visualization and allows easy strolling over the found communities.
The package contains a command line version of the program as well, which is suitable for
scripting.
A faster implementation (available20 under the GPL) has been implemented by another
group.[18] Another example, which is also very fast in certain contexts, is the SCP
algorithm.[19]

12 https://en.wikipedia.org/wiki/Network_motif
13 https://en.wikipedia.org/wiki/Hypergraph
14 https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model
15 https://en.wikipedia.org/wiki/Percolation
16 https://en.wikipedia.org/wiki/Statistical_physics
17 https://en.wikipedia.org/wiki/Metastasis
18 https://en.wikipedia.org/wiki/Social_network
19 http://www.cfinder.org
20 https://github.com/aaronmcdaid/MaximalCliques

60.4.1 Parallel Algorithms

A parallel version of the clique percolation method was designed and developed by S.
Mainardi et al..[20] By exploiting today's multi-core/multi-processor computing architec-
tures, the method enables the extraction of k-clique communities from very large networks
such as the Internet.[21] The authors released the source code of the method under the GPL
and made it freely available21 for the community.

60.5 See also


• Social network22
• Community structure23
• Survey Article Communities in Networks24
• F, S (2010). ”C   ”. Physics Re-
ports. 486 (3–5): 75–174. arXiv25 :0906.061226 . Bibcode27 :2010PhR...486...75F28 .
doi29 :10.1016/j.physrep.2009.11.00230 .
• Bibliography of community structure links31

60.6 References
1. P, G (2005). ”U   
       ”. Nature.
435 (7043): 814–818. arXiv :physics/0506133 . Bibcode :2005Natur.435..814P35 .
32 33 34

doi36 :10.1038/nature0360737 . PMID38 1594470439 .

21 http://cosparallel.sf.net/
22 https://en.wikipedia.org/wiki/Social_network
23 https://en.wikipedia.org/wiki/Community_structure
24 http://www.ams.org/notices/200909/rtx090901082p.pdf
25 https://en.wikipedia.org/wiki/ArXiv_(identifier)
26 http://arxiv.org/abs/0906.0612
27 https://en.wikipedia.org/wiki/Bibcode_(identifier)
28 https://ui.adsabs.harvard.edu/abs/2010PhR...486...75F
29 https://en.wikipedia.org/wiki/Doi_(identifier)
30 https://doi.org/10.1016%2Fj.physrep.2009.11.002
31 http://www.cscs.umich.edu/~crshalizi/notabene/community-discovery.html
32 https://en.wikipedia.org/wiki/ArXiv_(identifier)
33 http://arxiv.org/abs/physics/0506133
34 https://en.wikipedia.org/wiki/Bibcode_(identifier)
35 https://ui.adsabs.harvard.edu/abs/2005Natur.435..814P
36 https://en.wikipedia.org/wiki/Doi_(identifier)
37 https://doi.org/10.1038%2Fnature03607
38 https://en.wikipedia.org/wiki/PMID_(identifier)
39 http://pubmed.ncbi.nlm.nih.gov/15944704

2. F, S (2010). ”C   ”. Physics Re-


ports. 486 (3–5): 75–174. arXiv40 :0906.061241 . Bibcode42 :2010PhR...486...75F43 .
doi44 :10.1016/j.physrep.2009.11.00245 .
3. F, S. (2007). ”R    -
”46 . Proceedings of the National Academy of Sciences. 104 (1):
36–41. arXiv47 :physics/060710048 . Bibcode49 :2007PNAS..104...36F50 .
doi51 :10.1073/pnas.060596510452 . PMC53 176546654 . PMID55 1719081856 .
4. P, G (2007). ”Q   ”. Na-
ture. 446 (7136): 664–667. arXiv57 :0704.074458 . Bibcode59 :2007Natur.446..664P60 .
doi61 :10.1038/nature0567062 . PMID63 1741017564 .
5. O, J-P; S, J; K, J; K, K
(2005). ”I        -
”. Physical Review E. 71 (6): 065103. arXiv65 :cond-mat/040862966 .
67
Bibcode :2005PhRvE..71f5103O . 68 doi :10.1103/PhysRevE.71.06510370 .
69

PMID71 1608980072 .
6. E, T S (2010). ”C    ”.
Journal of Statistical Mechanics: Theory and Experiment. 2010 (12): P12037.

40 https://en.wikipedia.org/wiki/ArXiv_(identifier)
41 http://arxiv.org/abs/0906.0612
42 https://en.wikipedia.org/wiki/Bibcode_(identifier)
43 https://ui.adsabs.harvard.edu/abs/2010PhR...486...75F
44 https://en.wikipedia.org/wiki/Doi_(identifier)
45 https://doi.org/10.1016%2Fj.physrep.2009.11.002
46 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1765466
47 https://en.wikipedia.org/wiki/ArXiv_(identifier)
48 http://arxiv.org/abs/physics/0607100
49 https://en.wikipedia.org/wiki/Bibcode_(identifier)
50 https://ui.adsabs.harvard.edu/abs/2007PNAS..104...36F
51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.1073%2Fpnas.0605965104
53 https://en.wikipedia.org/wiki/PMC_(identifier)
54 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1765466
55 https://en.wikipedia.org/wiki/PMID_(identifier)
56 http://pubmed.ncbi.nlm.nih.gov/17190818
57 https://en.wikipedia.org/wiki/ArXiv_(identifier)
58 http://arxiv.org/abs/0704.0744
59 https://en.wikipedia.org/wiki/Bibcode_(identifier)
60 https://ui.adsabs.harvard.edu/abs/2007Natur.446..664P
61 https://en.wikipedia.org/wiki/Doi_(identifier)
62 https://doi.org/10.1038%2Fnature05670
63 https://en.wikipedia.org/wiki/PMID_(identifier)
64 http://pubmed.ncbi.nlm.nih.gov/17410175
65 https://en.wikipedia.org/wiki/ArXiv_(identifier)
66 http://arxiv.org/abs/cond-mat/0408629
67 https://en.wikipedia.org/wiki/Bibcode_(identifier)
68 https://ui.adsabs.harvard.edu/abs/2005PhRvE..71f5103O
69 https://en.wikipedia.org/wiki/Doi_(identifier)
70 https://doi.org/10.1103%2FPhysRevE.71.065103
71 https://en.wikipedia.org/wiki/PMID_(identifier)
72 http://pubmed.ncbi.nlm.nih.gov/16089800

arXiv73 :1009.063874 . Bibcode75 :2010JSMTE..12..037E76 . doi77 :10.1088/1742-5468/2010/12/P1203778 .
7. D, I; P, G; V, T (2005). ”C P-
  R N”. Physical Review Letters. 94 (16):
160202. arXiv79 :cond-mat/050455180 . Bibcode81 :2005PhRvL..94p0202D82 .
doi83 :10.1103/PhysRevLett.94.16020284 . PMID85 1590419886 .
8. P, G; D, I; V, T (2006). ”T C-
 P  -C P   EŐ–R G”. Jour-
nal of Statistical Physics. 128 (1–2): 219–227. arXiv87 :cond-mat/061029888 . Bib-
code89 :2007JSP...128..219P90 . doi91 :10.1007/s10955-006-9184-x92 .
9. L, M; D, Y; W, B-H (2015). ”C  
 ”. Physical Review E. 92 (4): 042116. arXiv93 :1508.0187894 .
Bibcode95 :2015PhRvE..92d2116L96 . doi97 :10.1103/PhysRevE.92.04211698 .
99
PMID 26565177 . 100

10. J, P. F. (2006). ”G     -


    ”101 . Bioinformatics. 22 (18): 2291–2297.
doi102 :10.1093/bioinformatics/btl390103 . PMC104 1865486105 . PMID106 16844706107 .

73 https://en.wikipedia.org/wiki/ArXiv_(identifier)
74 http://arxiv.org/abs/1009.0638
75 https://en.wikipedia.org/wiki/Bibcode_(identifier)
76 https://ui.adsabs.harvard.edu/abs/2010JSMTE..12..037E
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.1088%2F1742-5468%2F2010%2F12%2FP12037
79 https://en.wikipedia.org/wiki/ArXiv_(identifier)
80 http://arxiv.org/abs/cond-mat/0504551
81 https://en.wikipedia.org/wiki/Bibcode_(identifier)
82 https://ui.adsabs.harvard.edu/abs/2005PhRvL..94p0202D
83 https://en.wikipedia.org/wiki/Doi_(identifier)
84 https://doi.org/10.1103%2FPhysRevLett.94.160202
85 https://en.wikipedia.org/wiki/PMID_(identifier)
86 http://pubmed.ncbi.nlm.nih.gov/15904198
87 https://en.wikipedia.org/wiki/ArXiv_(identifier)
88 http://arxiv.org/abs/cond-mat/0610298
89 https://en.wikipedia.org/wiki/Bibcode_(identifier)
90 https://ui.adsabs.harvard.edu/abs/2007JSP...128..219P
91 https://en.wikipedia.org/wiki/Doi_(identifier)
92 https://doi.org/10.1007%2Fs10955-006-9184-x
93 https://en.wikipedia.org/wiki/ArXiv_(identifier)
94 http://arxiv.org/abs/1508.01878
95 https://en.wikipedia.org/wiki/Bibcode_(identifier)
96 https://ui.adsabs.harvard.edu/abs/2015PhRvE..92d2116L
97 https://en.wikipedia.org/wiki/Doi_(identifier)
98 https://doi.org/10.1103%2FPhysRevE.92.042116
99 https://en.wikipedia.org/wiki/PMID_(identifier)
100 http://pubmed.ncbi.nlm.nih.gov/26565177
101 http://bioinformatics.oxfordjournals.org/cgi/content/abstract/22/18/2291
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1093%2Fbioinformatics%2Fbtl390
104 https://en.wikipedia.org/wiki/PMC_(identifier)
105 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1865486
106 https://en.wikipedia.org/wiki/PMID_(identifier)
107 http://pubmed.ncbi.nlm.nih.gov/16844706

11. J, PF; C, T; Z, D; B, PA (2006). ”C -


     :  -
        ”108 .
BMC Bioinformatics. 7: 2. doi109 :10.1186/1471-2105-7-2110 . PMC111 1363365112 .
PMID113 16398927114 .
12. G, M C.; L, P G.; H, H J. (2006). ”S-
  M A  M S N”. Physical Review Letters.
96 (8): 088702. arXiv115 :physics/0602091116 . Bibcode117 :2006PhRvL..96h8702G118 .
doi119 :10.1103/PhysRevLett.96.088702120 . PMID121 16606237122 .
13. K, J M.; O, J-P; S, J; K,
K; K, J (2007). ”E  C  W
N”. Physical Review Letters. 99 (22): 228701. arXiv123 :0708.0925124 .
Bibcode125 :2007PhRvL..99v8701K126 . doi127 :10.1103/PhysRevLett.99.228701128 .
129
PMID 18233339 . 130

14. T, R; O, J-P; S, J; H-


, J; K, K (2006). ”A    -
”. Physica A: Statistical Mechanics and Its Applications. 371 (2):
851–860. arXiv131 :physics/0601114132 . Bibcode133 :2006PhyA..371..851T134 .
doi135 :10.1016/j.physa.2006.03.050136 .
15. G, M.C.; H, H.J.; K, J.; V, T. (2007).
”C       -
 ”. Physica A: Statistical Mechanics and Its Applications.

108 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363365
109 https://en.wikipedia.org/wiki/Doi_(identifier)
110 https://doi.org/10.1186%2F1471-2105-7-2
111 https://en.wikipedia.org/wiki/PMC_(identifier)
112 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1363365
113 https://en.wikipedia.org/wiki/PMID_(identifier)
114 http://pubmed.ncbi.nlm.nih.gov/16398927
115 https://en.wikipedia.org/wiki/ArXiv_(identifier)
116 http://arxiv.org/abs/physics/0602091
117 https://en.wikipedia.org/wiki/Bibcode_(identifier)
118 https://ui.adsabs.harvard.edu/abs/2006PhRvL..96h8702G
119 https://en.wikipedia.org/wiki/Doi_(identifier)
120 https://doi.org/10.1103%2FPhysRevLett.96.088702
121 https://en.wikipedia.org/wiki/PMID_(identifier)
122 http://pubmed.ncbi.nlm.nih.gov/16606237
123 https://en.wikipedia.org/wiki/ArXiv_(identifier)
124 http://arxiv.org/abs/0708.0925
125 https://en.wikipedia.org/wiki/Bibcode_(identifier)
126 https://ui.adsabs.harvard.edu/abs/2007PhRvL..99v8701K
127 https://en.wikipedia.org/wiki/Doi_(identifier)
128 https://doi.org/10.1103%2FPhysRevLett.99.228701
129 https://en.wikipedia.org/wiki/PMID_(identifier)
130 http://pubmed.ncbi.nlm.nih.gov/18233339
131 https://en.wikipedia.org/wiki/ArXiv_(identifier)
132 http://arxiv.org/abs/physics/0601114
133 https://en.wikipedia.org/wiki/Bibcode_(identifier)
134 https://ui.adsabs.harvard.edu/abs/2006PhyA..371..851T
135 https://en.wikipedia.org/wiki/Doi_(identifier)
136 https://doi.org/10.1016%2Fj.physa.2006.03.050

379 (1): 307–316. arXiv137 :physics/0611268138 . Bibcode139 :2007PhyA..379..307G140 .
doi141 :10.1016/j.physa.2007.01.002142 .
16. G, W; W, K-F (2006). Natural Document Clustering by Clique Per-
colation in Random Graphs. Lecture Notes in Computer Science. 4182. pp. 119–131.
doi143 :10.1007/11880592_10144 . ISBN145 978-3-540-45780-0146 .
17. H, T; S, J; O, J-P; K, K (2007).
”S         -
   ”. Physica A: Statistical Mechanics and Its Applications.
383 (1): 147–151. arXiv147 :physics/0703061148 . Bibcode149 :2007PhyA..383..147H150 .
doi151 :10.1016/j.physa.2007.04.124152 .
18. R, F.; MD, A.; H, N.; V, T (2012). ”P-
 C  C N”. 2012 IEEE/ACM International
Conference on Advances in Social Networks Analysis and Mining. pp. 274–281.
arXiv153 :1205.0038154 . doi155 :10.1109/ASONAM.2012.54156 . ISBN157 978-1-4673-
2497-7158 .
19. K, J M.; K, M; K, K; S, J (2008).
”S     ”. Physical Review
E. 78 (2): 026109. arXiv159 :0805.1449160 . Bibcode161 :2008PhRvE..78b6109K162 .
doi163 :10.1103/PhysRevE.78.026109164 . PMID165 18850899166 .
20. G, E; L, L; M, S (2013). ”P-
 k-Clique Community Detection on Large-Scale Networks”167 (PDF).

137 https://en.wikipedia.org/wiki/ArXiv_(identifier)
138 http://arxiv.org/abs/physics/0611268
139 https://en.wikipedia.org/wiki/Bibcode_(identifier)
140 https://ui.adsabs.harvard.edu/abs/2007PhyA..379..307G
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1016%2Fj.physa.2007.01.002
143 https://en.wikipedia.org/wiki/Doi_(identifier)
144 https://doi.org/10.1007%2F11880592_10
145 https://en.wikipedia.org/wiki/ISBN_(identifier)
146 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-45780-0
147 https://en.wikipedia.org/wiki/ArXiv_(identifier)
148 http://arxiv.org/abs/physics/0703061
149 https://en.wikipedia.org/wiki/Bibcode_(identifier)
150 https://ui.adsabs.harvard.edu/abs/2007PhyA..383..147H
151 https://en.wikipedia.org/wiki/Doi_(identifier)
152 https://doi.org/10.1016%2Fj.physa.2007.04.124
153 https://en.wikipedia.org/wiki/ArXiv_(identifier)
154 http://arxiv.org/abs/1205.0038
155 https://en.wikipedia.org/wiki/Doi_(identifier)
156 https://doi.org/10.1109%2FASONAM.2012.54
157 https://en.wikipedia.org/wiki/ISBN_(identifier)
158 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4673-2497-7
159 https://en.wikipedia.org/wiki/ArXiv_(identifier)
160 http://arxiv.org/abs/0805.1449
161 https://en.wikipedia.org/wiki/Bibcode_(identifier)
162 https://ui.adsabs.harvard.edu/abs/2008PhRvE..78b6109K
163 https://en.wikipedia.org/wiki/Doi_(identifier)
164 https://doi.org/10.1103%2FPhysRevE.78.026109
165 https://en.wikipedia.org/wiki/PMID_(identifier)
166 http://pubmed.ncbi.nlm.nih.gov/18850899
http://puma.isti.cnr.it/rmydownload.php?filename=cnr.iit/cnr.iit/2013-A0-016/2013-A0-
167
016.pdf

IEEE Transactions on Parallel and Distributed Systems. 24 (8): 1651–1660.
doi168 :10.1109/TPDS.2012.229169 .
21. G, E; L, L; O, C (2011). ”K-
C   I AS- T G”. 2011 31st Inter-
national Conference on Distributed Computing Systems Workshops. pp. 134–139.
doi170 :10.1109/ICDCSW.2011.17171 . ISBN172 978-1-4577-0384-3173 .

168 https://en.wikipedia.org/wiki/Doi_(identifier)
169 https://doi.org/10.1109%2FTPDS.2012.229
170 https://en.wikipedia.org/wiki/Doi_(identifier)
171 https://doi.org/10.1109%2FICDCSW.2011.17
172 https://en.wikipedia.org/wiki/ISBN_(identifier)
173 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4577-0384-3

61 Closure problem

In graph theory1 and combinatorial optimization2 , a closure of a directed graph3 is a set of
vertices with no outgoing edges. That is, the graph should have no edges that start within
the closure and end outside the closure. The closure problem is the task of finding the
maximum-weight or minimum-weight closure in a vertex-weighted directed graph.[1][2] It
may be solved in polynomial time using a reduction to the maximum flow problem4 . It may
be used to model various application problems of choosing an optimal subset of tasks to
perform, with dependencies between pairs of tasks, one example being in open pit mining5 .

61.1 Algorithms

61.1.1 Condensation

The maximum-weight closure of a given graph G is the same as the complement6 of the
minimum-weight closure on the transpose graph7 of G, so the two problems are equivalent
in computational complexity. If two vertices of the graph belong to the same strongly
connected component8 , they must behave the same as each other with respect to all closures:
it is not possible for a closure to contain one vertex without containing the other. For this
reason, the input graph to a closure problem may be replaced by its condensation9 , in which
every strongly connected component is replaced by a single vertex. The condensation is
always a directed acyclic graph10 .
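
The condensation step can be sketched with Kosaraju's two-pass algorithm (helper names
are ours; vertex weights are summed within each component):

```python
# Collapse each strongly connected component to one vertex. First pass:
# DFS post-order on G. Second pass: sweep the transpose graph in reverse
# post-order, labelling each vertex with its component representative.

def condensation(vertices, edges, weight):
    adj = {v: [] for v in vertices}
    radj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in vertices:
        if u not in seen:
            dfs(u)
    comp = {}
    def assign(u, root):
        comp[u] = root
        for v in radj[u]:
            if v not in comp:
                assign(v, root)
    for u in reversed(order):
        if u not in comp:
            assign(u, u)
    comp_weight = {}
    for u in vertices:
        comp_weight[comp[u]] = comp_weight.get(comp[u], 0) + weight[u]
    comp_edges = {(comp[u], comp[v]) for u, v in edges if comp[u] != comp[v]}
    return comp, comp_edges, comp_weight
```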

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Combinatorial_optimization
3 https://en.wikipedia.org/wiki/Directed_graph
4 https://en.wikipedia.org/wiki/Maximum_flow_problem
5 https://en.wikipedia.org/wiki/Open_pit_mining
6 https://en.wikipedia.org/wiki/Complement_(set_theory)
7 https://en.wikipedia.org/wiki/Transpose_graph
8 https://en.wikipedia.org/wiki/Strongly_connected_component
9 https://en.wikipedia.org/wiki/Condensation_(graph_theory)
10 https://en.wikipedia.org/wiki/Directed_acyclic_graph

61.1.2 Reduction to maximum flow

Figure 155 Reduction from closure to maximum flow

As Picard (1976)11 showed,[2][3] a maximum-weight closure may be obtained from G by
solving a maximum flow problem12 on a graph H constructed from G by adding to it two
additional vertices s and t. For each vertex v with positive weight in G, the augmented
graph H contains an edge from s to v with capacity equal to the weight of v, and for each
vertex v with negative weight in G, the augmented graph H contains an edge from v to
t whose capacity is the negation of the weight of v. All of the edges in G are given infinite
capacity in H.[1]
A minimum cut13 separating s from t in this graph cannot have any edges of G passing in
the forward direction across the cut: a cut with such an edge would have infinite capacity
and would not be minimum. Therefore, the set of vertices on the same side of the cut as
s automatically forms a closure C. The capacity of the cut equals the weight of all positive-
weight vertices minus the weight of the vertices in C, which is minimized when the weight
of C is maximized. By the max-flow min-cut theorem14 , a minimum cut, and the optimal
closure derived from it, can be found by solving a maximum flow problem.[1]

11 #CITEREFPicard1976
12 https://en.wikipedia.org/wiki/Maximum_flow_problem
13 https://en.wikipedia.org/wiki/Minimum_cut
14 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem

61.1.3 Alternative algorithms

Alternative algorithms for the maximum closure problem that do not compute flows have
also been studied.[4][5][6] Their running time is similar to that of the fastest known flow
algorithms.[4]

61.2 Applications

61.2.1 Open pit mining

An open pit mine may be modeled as a set of blocks of material; a block may be removed by
mining it once all the blocks directly above it have been removed. A block has a total value,
equal to the value of the minerals that can be extracted from it minus the cost of removal
and extraction; in some cases, a block has no extraction value but must still be removed
to reach other blocks, giving it a negative value. One may define an acyclic network that
has as its vertices the blocks of a mine, with an edge from each block to the blocks above it
that must be removed earlier than it. The weight of each vertex in this network is the total
value of its block, and the most profitable plan for mining can be determined by finding a
maximum weight closure, and then forming a topological ordering15 of the blocks in this
closure.[1][5][7]

61.2.2 Military targeting

In military operations, high-value targets such as command centers are frequently protected
by layers of defense systems, which may in turn be protected by other systems. In order to
reach a target, all of its defenses must be taken down, making it into a secondary target.
Each target needs a certain amount of resources to be allocated to it in order to perform
a successful attack. The optimal set of targets to attack, to obtain the most value for the
resources expended, can be modeled as a closure problem.[1][8]

61.2.3 Transportation network design

The problem of planning a freight delivery system may be modeled by a network in which
the vertices represent cities and the (undirected) edges represent potential freight delivery
routes between pairs of cities. Each route can achieve a certain profit, but can only be
used if freight depots are constructed at both its ends, with a certain cost. The problem of
designing a network that maximizes the difference between the profits and the costs can be
solved as a closure problem, by subdividing each undirected edge into two directed edges,
both directed outwards from the subdivision point. The weight of each subdivision point
is a positive number, the profit of the corresponding route, and the weight of each original
graph vertex is a negative number, the cost of building a depot in that city.[1][9] Together
with open pit mining, this was one of the original motivating applications for studying the

15 https://en.wikipedia.org/wiki/Topological_ordering

closure problem; it was originally studied in 1970, in two independent papers published in
the same issue of the same journal by J. M. W. Rhys and Michel Balinski16 .[9][10][11]

61.2.4 Job scheduling

Sidney (1975)17 and Lawler (1978)18 describe an application of the closure problem to a
version of job shop scheduling19 in which one is given a collection of tasks to be scheduled
to be performed, one at a time. Each task has two numbers associated with it: a weight
or priority, and a processing time, the amount of time that it takes to perform that task.
In addition the tasks have precedence constraints: certain tasks must be performed before
others. These precedence constraints can be described by a directed acyclic graph G in
which an edge from one task to another indicates that the first task must be performed
earlier than the second one. The goal is to choose an ordering that is consistent with these
constraints (a topological ordering20 of G) that minimizes the total weighted completion
time of the tasks.[12][13]
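The objective being minimized can be illustrated with a short sketch (task names and numbers are invented):

```python
def weighted_completion_time(order, weight, ptime, prec):
    """Total weighted completion time of a schedule.

    order: sequence of tasks, performed one at a time.
    weight, ptime: dicts task -> priority weight / processing time.
    prec: list of (a, b) meaning task a must be performed before task b.
    """
    position = {task: i for i, task in enumerate(order)}
    assert all(position[a] < position[b] for a, b in prec), "order violates precedence"
    total, clock = 0, 0
    for task in order:
        clock += ptime[task]           # this task finishes at time `clock`
        total += weight[task] * clock
    return total
```

For two tasks with weights 3, 1 and processing times 2, 1 and no precedence constraints, running the heavier task first gives objective 3·2 + 1·3 = 9, while the other order gives 1·1 + 3·3 = 10.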
Although (as Lawler shows) this scheduling problem is NP-complete21 in general, Sidney
describes a decomposition method that can help solve the problem by reducing it to several
smaller problems of the same type. In particular, if S is a subset of the tasks that (among
all subsets) has the largest possible ratio of its total weight to its total processing time, and
in addition S is minimal among all sets with the same ratio, then there exists an optimal
schedule in which all tasks in S are performed before all other tasks. As long as S is not the
whole set of tasks, this partition of the tasks splits the scheduling problem into two smaller
problems, one of scheduling S and one of scheduling the remaining tasks.[12] Although S is
a closure (for a graph with reversed edges from G) the problem of finding S is not exactly
a maximum weight closure problem, because the value of S is a ratio rather than a sum of
weights. Nevertheless, Lawler shows that S may be found in polynomial time by a binary
search22 algorithm in which each step of the search uses an instance of the closure problem
as a subroutine.[13]

16 https://en.wikipedia.org/wiki/Michel_Balinski
17 #CITEREFSidney1975
18 #CITEREFLawler1978
19 https://en.wikipedia.org/wiki/Job_shop_scheduling
20 https://en.wikipedia.org/wiki/Topological_ordering
21 https://en.wikipedia.org/wiki/NP-complete
22 https://en.wikipedia.org/wiki/Binary_search


61.3 References
1. A, R K.23 ; M, T L.24 ; O, J B.25 (1993),
”19.2 M     ”, Network flows, Englewood Cliffs,
NJ: Prentice Hall Inc., pp. 719–724, ISBN26 0-13-617549-X27 , MR28 120577529 .
2. C, W J.30 ; C, W H.; P, W R.31 ;
S, A32 (2011), ”O    ”, Combinatorial
Optimization33 , W S  D M  O, 33,
John Wiley & Sons, pp. 49–50, ISBN34 978111803139135 .
3. P, J-C (1976), ”M      -
   ”, Management Science36 , 22 (11): 1268–1272,
doi37 :10.1287/mnsc.22.11.126838 , MR39 040359640 .
4. H, D S.41 (2001), ”A -   -
  -   ”, Networks, 37 (4): 171–193,
doi42 :10.1002/net.101243 , MR44 183719645 .
5. L, H.; G, I. F. (1965), ”O   - ”,
Transactions of the Canadian Institute of Mining and Metallurgy, 68: 17–24. As cited
by Hochbaum (2001)46 .
6. F, B; K, K; S, T (1990), ”A   
      ”, Management Science47 , 36 (3):
315–331, doi48 :10.1287/mnsc.36.3.31549 .
7. J, T. B. (1968), Optimum pit mine production scheduling, Technical Report,
University of California, Berkeley, CA. As cited by Ahuja, Magnanti & Orlin (1993)50 .

23 https://en.wikipedia.org/wiki/Ravindra_K._Ahuja
24 https://en.wikipedia.org/wiki/Thomas_L._Magnanti
25 https://en.wikipedia.org/wiki/James_B._Orlin
26 https://en.wikipedia.org/wiki/ISBN_(identifier)
27 https://en.wikipedia.org/wiki/Special:BookSources/0-13-617549-X
28 https://en.wikipedia.org/wiki/MR_(identifier)
29 http://www.ams.org/mathscinet-getitem?mr=1205775
30 https://en.wikipedia.org/wiki/William_J._Cook
31 https://en.wikipedia.org/wiki/William_R._Pulleyblank
32 https://en.wikipedia.org/wiki/Alexander_Schrijver
33 https://books.google.com/books?id=tarLTNwM3gEC&pg=PA49
34 https://en.wikipedia.org/wiki/ISBN_(identifier)
35 https://en.wikipedia.org/wiki/Special:BookSources/9781118031391
36 https://en.wikipedia.org/wiki/Management_Science_(journal)
37 https://en.wikipedia.org/wiki/Doi_(identifier)
38 https://doi.org/10.1287%2Fmnsc.22.11.1268
39 https://en.wikipedia.org/wiki/MR_(identifier)
40 http://www.ams.org/mathscinet-getitem?mr=0403596
41 https://en.wikipedia.org/wiki/Dorit_S._Hochbaum
42 https://en.wikipedia.org/wiki/Doi_(identifier)
43 https://doi.org/10.1002%2Fnet.1012
44 https://en.wikipedia.org/wiki/MR_(identifier)
45 http://www.ams.org/mathscinet-getitem?mr=1837196
46 #CITEREFHochbaum2001
47 https://en.wikipedia.org/wiki/Management_Science_(journal)
48 https://en.wikipedia.org/wiki/Doi_(identifier)
49 https://doi.org/10.1287%2Fmnsc.36.3.315
50 #CITEREFAhujaMagnantiOrlin1993


8. O, D. (1987), ”O     -


”, Naval Research Logistics Quarterly, 34 (5): 605–617, doi51 :10.1002/1520-
6750(198710)34:5<605::aid-nav3220340502>3.0.co;2-l52 . As cited by Ahuja, Magnanti
& Orlin (1993)53 .
9. H, D54 (2004), ”50 A A: S, P-
, S F C, M C,  I
 A M T”, Management Science55 , 50 (6): 709–723,
doi56 :10.1287/mnsc.1040.024257 .
10. R, J. M. W. (1970), ”A     
   ”, Management Science58 , 17 (3): 200–207,
doi59 :10.1287/mnsc.17.3.20060 .
11. B, M. L.61 (1970), ”O   ”, Management Science62 ,
17 (3): 230–231, doi63 :10.1287/mnsc.17.3.23064 .
12. S, J B. (1975), ”D   -
      ”, Operations Re-
search65 , 23 (2): 283–298, doi66 :10.1287/opre.23.2.28367 .
13. L, E. L.68 (1978), ”S      -
     ”69 , Ann. Discrete Math.,
Annals of Discrete Mathematics, 2: 75–90, doi70 :10.1016/S0167-5060(08)70323-671 ,
ISBN72 978072041043373 , MR74 049515675 .

51 https://en.wikipedia.org/wiki/Doi_(identifier)
https://doi.org/10.1002%2F1520-6750%28198710%2934%3A5%3C605%3A%3Aaid-nav3220340502%
52
3E3.0.co%3B2-l
53 #CITEREFAhujaMagnantiOrlin1993
54 https://en.wikipedia.org/wiki/Dorit_S._Hochbaum
55 https://en.wikipedia.org/wiki/Management_Science_(journal)
56 https://en.wikipedia.org/wiki/Doi_(identifier)
57 https://doi.org/10.1287%2Fmnsc.1040.0242
58 https://en.wikipedia.org/wiki/Management_Science_(journal)
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1287%2Fmnsc.17.3.200
61 https://en.wikipedia.org/wiki/Michel_Balinski
62 https://en.wikipedia.org/wiki/Management_Science_(journal)
63 https://en.wikipedia.org/wiki/Doi_(identifier)
64 https://doi.org/10.1287%2Fmnsc.17.3.230
65 https://en.wikipedia.org/wiki/Operations_Research_(journal)
66 https://en.wikipedia.org/wiki/Doi_(identifier)
67 https://doi.org/10.1287%2Fopre.23.2.283
68 https://en.wikipedia.org/wiki/Eugene_Lawler
69 https://books.google.com/books?id=YvdjzQxSMLMC&pg=PA75
70 https://en.wikipedia.org/wiki/Doi_(identifier)
71 https://doi.org/10.1016%2FS0167-5060%2808%2970323-6
72 https://en.wikipedia.org/wiki/ISBN_(identifier)
73 https://en.wikipedia.org/wiki/Special:BookSources/9780720410433
74 https://en.wikipedia.org/wiki/MR_(identifier)
75 http://www.ams.org/mathscinet-getitem?mr=0495156

62 Color-coding

This article is about a technique in the design of graph algorithms. For the use of color
to display information, see color code1 . For other uses, see Color code (disambiguation)2 .
In computer science3 and graph theory4 , the term color-coding refers to an algorithmic
technique5 which is useful in the discovery of network motifs6 . For example, it can be used to
detect a simple path7 of length k in a given graph8 . The traditional color-coding algorithm
is probabilistic9 , but it can be derandomized10 without much overhead in the running time.
Color-coding also applies to the detection of cycles11 of a given length, and more generally
it applies to the subgraph isomorphism problem12 (an NP-complete13 problem), where it
yields polynomial time algorithms14 when the subgraph pattern that it is trying to detect
has bounded treewidth15 .
The color-coding method was proposed and analyzed in 1994 by Noga Alon16 , Raphael
Yuster17 , and Uri Zwick18 .[1][2]

62.1 Results

The following results can be obtained through the method of color-coding:


• For every fixed constant k, if a graph G = (V, E) contains a simple cycle of size k, then
such a cycle can be found in:
• O(V^ω) expected time, or
• O(V^ω log V) worst-case time, where ω is the exponent of matrix multiplication19 .[3]

1 https://en.wikipedia.org/wiki/Color_code
2 https://en.wikipedia.org/wiki/Color_code_(disambiguation)
3 https://en.wikipedia.org/wiki/Computer_science
4 https://en.wikipedia.org/wiki/Graph_theory
5 https://en.wikipedia.org/wiki/Algorithmic_technique
6 https://en.wikipedia.org/wiki/Network_motif
7 https://en.wikipedia.org/wiki/Path_(graph_theory)
8 https://en.wikipedia.org/wiki/Graph_theory
9 https://en.wikipedia.org/wiki/Probabilistic_algorithms
10 https://en.wikipedia.org/wiki/Derandomization#Derandomization
11 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
12 https://en.wikipedia.org/wiki/Subgraph_isomorphism
13 https://en.wikipedia.org/wiki/NP-complete
14 https://en.wikipedia.org/wiki/Polynomial_time
15 https://en.wikipedia.org/wiki/Treewidth
16 https://en.wikipedia.org/wiki/Noga_Alon
17 https://en.wikipedia.org/w/index.php?title=Raphael_Yuster&action=edit&redlink=1
18 https://en.wikipedia.org/wiki/Uri_Zwick
19 https://en.wikipedia.org/wiki/Matrix_multiplication


• For every fixed constant k, and every graph G = (V, E) that is in any nontrivial minor-
closed graph family20 (e.g., a planar graph21 ), if G contains a simple cycle of size k, then
such a cycle can be found in:
• O(V) expected time, or
• O(V log V) worst-case time.
• If a graph G = (V, E) contains a subgraph isomorphic to a bounded treewidth22 graph
which has O(log V) vertices, then such a subgraph can be found in polynomial time23 .

62.2 The method

To solve the problem of finding a subgraph H = (VH , EH ) in a given graph G = (V, E),
where H can be a path, a cycle, or any bounded treewidth24 graph where |VH | = O(log V ),
the method of color-coding begins by randomly coloring each vertex of G with k = |VH |
colors, and then tries to find a colorful copy of H in colored G. Here, a graph is colorful
if every vertex in it is colored with a distinct color. This method works by repeating (1)
randomly coloring the graph and (2) searching for a colorful copy of the target subgraph; the
target subgraph will eventually be found if the process is repeated a sufficient number of times.
Suppose a copy of H in G becomes colorful with some non-zero probability p. It immediately
follows that if the random coloring is repeated 1/p times, then this copy is expected to
become colorful once. Note that though p is small, it is shown that if |VH | = O(log V ), p is
only polynomially small. Suppose again there exists an algorithm such that, given a graph
G and a coloring which maps each vertex of G to one of the k colors, it finds a copy of
colorful H, if one exists, within some runtime O(r). Then the expected time to find a copy
of H in G, if one exists, is O(r/p).
Sometimes it is also desirable to use a more restricted version of colorfulness. For example,
in the context of finding cycles in planar graphs25 , it is possible to develop an algorithm
that finds well-colored cycles. Here, a cycle is well-colored if its vertices are colored by
consecutive colors.

62.2.1 Example

An example would be finding a simple cycle of length k in graph G = (V, E).


By applying the random coloring method, each simple cycle has probability k!/k^k > e^{-k}
of becoming colorful, since there are k^k ways of coloring the k vertices on the cycle, among
which there are k! colorful occurrences. Then an algorithm (described next) can be used to
find colorful cycles in the randomly colored graph G in time O(V^ω), where ω is the matrix
multiplication constant. Therefore, it takes e^k · O(V^ω) overall time to find a simple cycle of
length k in G.

20 https://en.wikipedia.org/wiki/Minor_(graph_theory)#Minor-closed_graph_families
21 https://en.wikipedia.org/wiki/Planar_graph
22 https://en.wikipedia.org/wiki/Treewidth
23 https://en.wikipedia.org/wiki/Polynomial_time
24 https://en.wikipedia.org/wiki/Treewidth
25 https://en.wikipedia.org/wiki/Planar_graphs


The colorful cycle-finding algorithm works by first finding all pairs of vertices in V that are
connected by a simple path of length k − 1, and then checking whether the two vertices
in each pair are connected. Given a coloring function c : V → {1, ..., k} to color graph
G, enumerate all partitions of the color set {1, ..., k} into two subsets C1 , C2 of size k/2
each. Note that V can be divided into V1 and V2 accordingly, and let G1 and G2 denote
the subgraphs induced by V1 and V2 respectively. Then, recursively find colorful paths of
length k/2 − 1 in each of G1 and G2 . Suppose the boolean matrices A1 and A2 represent the
connectivity of each pair of vertices in G1 and G2 by a colorful path, respectively, and let B
be the matrix describing the adjacency relations between vertices of V1 and those of V2 . Then
the boolean product A1 BA2 gives all pairs of vertices in V that are connected by a colorful path
of length k − 1. Thus, the recurrence for the matrix multiplications is t(k) ≤ 2^k · t(k/2),
which yields a runtime of 2^{O(k)} · V^ω = O(V^ω). Although this algorithm finds only the
endpoints of the colorful path, another algorithm by Alon and Naor[4] that finds colorful paths
themselves can be incorporated into it.
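A simpler way to implement the colorful-path search is a dynamic program over (vertex, color-subset) pairs, doing roughly O(2^k) work per edge per round rather than using the matrix-multiplication recursion described above. The sketch below uses this variant; the trial count and seed are illustrative choices.

```python
import random

def find_colorful_path(adj, coloring, k):
    """True if the colored graph has a simple path on k distinct colors.

    adj: dict vertex -> iterable of neighbors (list both directions).
    coloring: dict vertex -> color in range(k).
    Dynamic program over (vertex, bitmask of colors used) pairs.
    """
    # reach[v] = color subsets realizable by a path ending at v
    reach = {v: {1 << coloring[v]} for v in adj}
    for _ in range(k - 1):             # grow paths one edge at a time
        new_reach = {v: set() for v in adj}
        for u in adj:
            for mask in reach[u]:
                for v in adj[u]:
                    if not mask & (1 << coloring[v]):
                        new_reach[v].add(mask | (1 << coloring[v]))
        reach = new_reach
    full = (1 << k) - 1
    return any(full in masks for masks in reach.values())

def color_coding_path(adj, k, trials=300, seed=0):
    """Repeat random colorings; succeed if some coloring yields a colorful k-path."""
    rng = random.Random(seed)
    vertices = list(adj)
    for _ in range(trials):
        coloring = {v: rng.randrange(k) for v in vertices}
        if find_colorful_path(adj, coloring, k):
            return True
    return False
```

A True answer is always correct (a colorful path is necessarily simple); a False answer is wrong only with the small probability that every random coloring failed.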

62.3 Derandomization

The derandomization26 of color-coding involves enumerating possible colorings of a graph


G, such that the randomness of coloring G is no longer required. For the target subgraph
H in G to be discoverable, the enumeration has to include at least one instance where the
H is colorful. To achieve this, enumerating a k-perfect family F of hash functions from {1,
..., |V|} to {1, ..., k} is sufficient. By definition, F is k-perfect if for every subset S of {1,
..., |V|} where |S| = k, there exists a hash function h in F such that h : S → {1, ..., k} is
perfect27 . In other words, there must exist a hash function in F that colors any given k
vertices with k distinct colors.
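The defining property can be checked by brute force on tiny instances; this sketch represents each hash function as a tuple of values, an illustrative encoding:

```python
from itertools import combinations

def is_k_perfect(family, n, k):
    """Brute-force check that `family` is a k-perfect hash family.

    family: list of hash functions, each given as a tuple h of length n
            with h[i] in range(k).
    For every k-subset S of range(n), some h must be injective on S.
    """
    for S in combinations(range(n), k):
        if not any(len({h[i] for i in S}) == k for h in family):
            return False               # S collides under every function
    return True
```

For n = 4 and k = 2, the two functions (0,0,1,1) and (0,1,0,1) already form a 2-perfect family, while either one alone does not.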
There are several approaches to construct such a k-perfect hash family:
1. The best explicit construction is by Moni Naor28 , Leonard J. Schulman29 , and Aravind
Srinivasan30 ,[5] where a family of size e^k · k^{O(log k)} · log |V| can be obtained. This
construction does not require the target subgraph to exist in the original subgraph
finding problem.
2. Another explicit construction by Jeanette P. Schmidt31 and Alan Siegel[6] yields a
family of size 2^{O(k)} · log² |V|.
3. Another construction that appears in the original paper of Noga Alon32 et al.[2] can
be obtained by first building a k-perfect family that maps {1, ..., |V|} to {1, ..., k²},
followed by building another k-perfect family that maps {1, ..., k²} to {1, ..., k}. In
the first step, it is possible to construct such a family with 2n log k random bits that
are almost 2 log k-wise independent,[7][8] and the sample space needed for generating
those random bits can be as small as k^{O(1)} · log |V|. In the second step, it has been

26 https://en.wikipedia.org/wiki/Derandomization
27 https://en.wikipedia.org/wiki/Perfect_hash
28 https://en.wikipedia.org/wiki/Moni_Naor
29 https://en.wikipedia.org/w/index.php?title=Leonard_J._Schulman&action=edit&redlink=1
30 https://en.wikipedia.org/w/index.php?title=Aravind_Srinivasan&action=edit&redlink=1
31 https://en.wikipedia.org/w/index.php?title=Jeanette_P._Schmidt&action=edit&redlink=1
32 https://en.wikipedia.org/wiki/Noga_Alon


shown by Jeanette P. Schmidt and Alan Siegel[6] that the size of such a k-perfect family
can be 2^{O(k)}. Consequently, by composing the k-perfect families from both steps, a
k-perfect family of size 2^{O(k)} · log |V| that maps from {1, ..., |V|} to {1, ..., k} can be
obtained.
In the case of derandomizing well-coloring, where each vertex on the subgraph is colored
consecutively, a k-perfect family of hash functions from {1, ..., |V|} to {1, ..., k!} is needed.
A sufficient k-perfect family which maps from {1, ..., |V|} to {1, ..., k^k} can be constructed
in a way similar to approach 3 above (the first step). In particular, it is done by
using nk log k random bits that are almost k log k-wise independent, and the size of the
resulting k-perfect family will be k^{O(k)} · log |V|.
The derandomization of color-coding method can be easily parallelized, yielding efficient
NC33 algorithms.

62.4 Applications

Recently, color-coding has attracted much attention in the field of bioinformatics. One
example is the detection of signaling pathways34 in protein-protein interaction35 (PPI) net-
works. Another example is to discover and to count the number of motifs36 in PPI net-
works. Studying both signaling pathways37 and motifs38 allows a deeper understanding
of the similarities and differences of many biological functions, processes, and structures
among organisms.
Due to the huge amount of gene data that can be collected, searching for pathways or motifs
can be highly time consuming. However, by exploiting the color-coding method, the motifs
or signaling pathways with k = O(log n) vertices in a network G with n vertices can be found
very efficiently in polynomial time. Thus, this enables us to explore more complex or larger
structures in PPI networks.

62.5 Further reading


• A, N.; D, P.; H, I.; H, F.; S, S. C.
(2008). ”B        -

33 https://en.wikipedia.org/wiki/NC_(complexity)
34 https://en.wikipedia.org/wiki/Wnt_signaling_pathway
35 https://en.wikipedia.org/wiki/Protein-protein_interaction
36 https://en.wikipedia.org/wiki/Structural_motif
37 https://en.wikipedia.org/wiki/Wnt_signaling_pathway
38 https://en.wikipedia.org/wiki/Structural_motif

800
References

”39 . Bioinformatics. 24 (13): i241–i249. doi40 :10.1093/bioinformatics/btn16341 .


PMC42 271864143 . PMID44 1858672145 .
• H, F.; W, S.; Z, T. (2008). ”A E 
C-C  A  S P D”. Algorith-
mica. 52 (2): 114–132. CiteSeerX46 10.1.1.68.946947 . doi48 :10.1007/s00453-007-9008-749 .

62.6 References
1. Alon, N., Yuster, R., and Zwick, U. 1994. Color-coding: a new method for finding
simple paths, cycles and other small subgraphs within large graphs. In Proceedings
of the Twenty-Sixth Annual ACM Symposium on theory of Computing (Montreal,
Quebec, Canada, May 23–25, 1994). STOC '94. ACM, New York, NY, 326–335.
DOI= 50
2. Alon, N., Yuster, R., and Zwick, U. 1995. Color-coding. J. ACM 42, 4 (Jul. 1995),
844–856. DOI= 51
3. Coppersmith–Winograd Algorithm52
4. Alon, N. and Naor, M. 1994 Derandomization, Witnesses for Boolean Matrix Multi-
plication and Construction of Perfect Hash Functions. Technical Report. UMI Order
Number: CS94-11., Weizmann Science Press of Israel.
5. Naor, M., Schulman, L. J., and Srinivasan, A. 1995. Splitters and near-optimal de-
randomization. In Proceedings of the 36th Annual Symposium on Foundations of
Computer Science (October 23–25, 1995). FOCS. IEEE Computer Society, Washing-
ton, DC, 182.
6. S, J. P.; S, A. (1990). ”T    -
 - H ”. SIAM J. Comput. 19 (5): 775–786.
doi53 :10.1137/021905454 .
7. Naor, J. and Naor, M. 1990. Small-bias probability spaces: efficient constructions
and applications. In Proceedings of the Twenty-Second Annual ACM Symposium on
theory of Computing (Baltimore, Maryland, United States, May 13–17, 1990). H.
Ortiz, Ed. STOC '90. ACM, New York, NY, 213-223. DOI= 55
8. Alon, N., Goldreich, O., Hastad, J., and Peralta, R. 1990. Simple construction of
almost k-wise independent random variables. In Proceedings of the 31st Annual

39 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2718641
40 https://en.wikipedia.org/wiki/Doi_(identifier)
41 https://doi.org/10.1093%2Fbioinformatics%2Fbtn163
42 https://en.wikipedia.org/wiki/PMC_(identifier)
43 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2718641
44 https://en.wikipedia.org/wiki/PMID_(identifier)
45 http://pubmed.ncbi.nlm.nih.gov/18586721
46 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
47 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.68.9469
48 https://en.wikipedia.org/wiki/Doi_(identifier)
49 https://doi.org/10.1007%2Fs00453-007-9008-7
50 http://doi.acm.org/10.1145/195058.195179
51 http://doi.acm.org/10.1145/210332.210337
52 https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1137%2F0219054
55 http://doi.acm.org/10.1145/100216.100244


Symposium on Foundations of Computer Science (October 22–24, 1990). SFCS. IEEE


Computer Society, Washington, DC, 544-553 vol.2. doi56 :10.1109/FSCS.1990.8957557

56 https://en.wikipedia.org/wiki/Doi_(identifier)
57 https://doi.org/10.1109%2FFSCS.1990.89575

63 Contraction hierarchies

In computer science1 , the method of contraction hierarchies is a speed-up technique2 for


finding the shortest path3 in a graph4 . The most intuitive applications are car-navigation
systems: A user wants to drive from A to B using the quickest possible route. The metric
optimized here is the travel time. Intersections are represented by vertices5 , the street
sections connecting them by edges6 . The edge weights represent the time it takes to drive
along this segment of the street. A path from A to B is a sequence of edges (streets); the
shortest path is the one with the minimal sum of edge weights among all possible paths. The
shortest path in a graph can be computed using Dijkstra's7 algorithm; but given that road
networks consist of tens of millions of vertices, this is impractical.[1] Contraction hierarchies
is a speed-up method optimized to exploit properties of graphs representing road networks.[2]
The speed-up is achieved by creating shortcuts in a preprocessing phase which are then
used during a shortest-path query to skip over ”unimportant” vertices.[2] This is based on
the observation that road networks are highly hierarchical. Some intersections, for example
highway junctions, are ”more important” and higher up in the hierarchy than for example a
junction leading into a dead end. Shortcuts can be used to save the precomputed distance
between two important junctions such that the algorithm doesn't have to consider the full
path between these junctions at query time. Contraction hierarchies do not know about
which roads humans consider ”important” (e.g. highways), but they are provided with the
graph as input and are able to assign importance to vertices using heuristics.
Contraction hierarchies are not only applied to speed-up algorithms in car-navigation
systems8 but also in web-based route planners9 , traffic simulation10 , and logistics
optimization.[3][1][4] Implementations of the algorithm are publicly available as open source
software11 .[5][6][7][8][9]

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Speedup
3 https://en.wikipedia.org/wiki/Shortest_path_problem
4 https://en.wikipedia.org/wiki/Graph_theory
5 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
6 https://en.wikipedia.org/wiki/Edge_(graph_theory)
7 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
8 https://en.wikipedia.org/wiki/Automotive_navigation_system
9 https://en.wikipedia.org/wiki/Journey_planner
10 https://en.wikipedia.org/wiki/Traffic_simulation
11 https://en.wikipedia.org/wiki/Open-source_software


63.1 Algorithm

The contraction hierarchies (CH) algorithm is a two-phase approach to the shortest path
problem12 consisting of a preprocessing phase and a query phase. As road networks
change rather infrequently, more time (seconds to hours) can be spent once on precomputing
some results before queries are to be answered. Using this precomputed data, many
queries can be answered taking very little time (microseconds) each.[1][3] CHs rely on
shortcuts to achieve this speedup. A shortcut connects two vertices u and v not adjacent in the
original graph. Its edge weight is the sum of the edge weights on the shortest u-v path.
Consider two large cities connected by a highway. Between these two cities, there is a
multitude of junctions leading to small villages and suburbs. Most drivers want to get from
one city to the other – maybe as part of a larger route – and not take one of the exits
on the way. In the graph representing this street layout each intersection is represented
by a node and edges are created between neighboring intersections. To calculate the
distance between these two cities, the algorithm has to traverse all the edges along the way,
adding up their length. Precomputing this distance once and storing it in an additional edge
created between the two large cities will save calculations each time this highway has to be
evaluated in a query. This additional edge is called a ”shortcut” and has no counterpart in
the real world. The contraction hierarchies algorithm has no knowledge about street types
but is able to determine which shortcuts have to be created using the graph alone as input.

Figure 156 To find a path from s to t the algorithm can skip over the grey vertices and
use the dashed shortcut instead. This reduces the number of vertices the algorithm has
to look at. The edge weight of the shortcut from u to v is the sum of the edge weights of
the shortest u-v path.

63.1.1 Preprocessing phase

The CH algorithm relies on shortcuts created in the preprocessing phase to reduce the search
space – that is the number of vertices CH has to look at, at query time. To achieve this,

12 https://en.wikipedia.org/wiki/Shortest_path_problem


iterative vertex contractions are performed. When contracting a vertex v it is temporarily


removed from the graph G, and a shortcut is created between each pair {u, w} of neighboring
vertices if the shortest path from u to w contains v.[2] The process of determining if the
shortest path between u and w contains v is called witness search. It can be performed
for example by computing a path from u to w using a forward search using only not yet
contracted nodes.[3]
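One contraction step with its witness searches might be sketched as follows, for an undirected graph stored as a dict of neighbor-to-weight dicts (an illustrative representation, not a tuned implementation):

```python
import heapq

def dijkstra(graph, source, excluded=frozenset()):
    """Shortest distances from source, ignoring `excluded` vertices."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                   # stale heap entry
        for v, w in graph[u].items():
            if v in excluded:
                continue
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def contract(graph, v):
    """Contract v: add a shortcut u-w whenever removing v would change dist(u, w).

    Returns the shortcuts added as (u, w, weight, v), with v stored as the
    middle vertex for later path unpacking.
    """
    shortcuts = []
    neighbors = list(graph[v].items())
    for i, (u, du) in enumerate(neighbors):
        # witness search from u, with v temporarily removed from the graph
        dist = dijkstra(graph, u, excluded={v})
        for w, dw in neighbors[i + 1:]:
            via_v = du + dw
            if dist.get(w, float('inf')) > via_v:   # no witness path found
                graph[u][w] = graph[w][u] = via_v
                shortcuts.append((u, w, via_v, v))
    # v now counts as contracted: later searches simply skip it
    return shortcuts
```

For the line a–b–c with unit edge weights plus a direct a–c edge of weight 5, contracting b adds a shortcut a–c of weight 2; if the direct a–c edge already had weight 2, the witness search finds it and no shortcut is needed.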

Figure 157 The original graph is the line (a, b, c, d, e, f ) (solid). Dashed edges represent
shortcuts, grey arrows show which two edges are combined to form the respective
shortcut. Vertices have been drawn to represent the node order in which the vertices are
being contracted, bottom-to-top. Contracting vertex c introduces a shortcut between b and
d with dist(b, d) = dist(b, c) + dist(c, d). Contractions of the vertices e and d introduce one
shortcut each. Contractions of a, b and f do not introduce any shortcuts and are
therefore not shown.

Node order

The vertices of the input graph have to be contracted in a way which minimizes the number
of edges added to the graph by contractions. As optimal node ordering is NP-complete13 ,[10]
heuristics14 are used.[2]
Bottom-up and top-down heuristics exist. On one hand, the computationally cheaper
bottom-up heuristics decide the order in which to contract the vertices in a greedy15 fashion;
this means the order is not known in advance but rather the next node is selected for

13 https://en.wikipedia.org/wiki/NP-completeness
14 https://en.wikipedia.org/wiki/Heuristic_(computer_science)
15 https://en.wikipedia.org/wiki/Greedy_algorithm
contraction after the previous contraction has been completed. Top-down heuristics on the
other hand precompute the whole node ordering before the first node is contracted. This
yields better results but needs more preprocessing time.[2]
In bottom-up heuristics, a combination of factors is used to select the next vertex for contrac-
tion. As the number of shortcuts is the primary factor which determines preprocessing and
query runtime, we want to keep it as small as possible. The most important term by which
to select the next node for contraction is therefore the net number of edges added when
contracting a node x. This is defined as A(x) − |{(u, x) : (u, x) ∈ E}| where A(x) is the num-
ber of shortcuts that would be created if x were to be contracted and |{(u, x) : (u, x) ∈ E}|
is the number of edges incident to x. Using this criterion alone, a linear path would result
in a linear hierarchy (many levels) and no created shortcuts. By considering the number
of nearby vertices which are already contracted, a uniform contraction and a flat hierarchy
(fewer levels) is achieved. This can for example be done by maintaining a counter for each
node which is incremented each time a neighboring vertex is contracted. Nodes with lower
counters are then preferred to nodes with higher counters.[11]
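The greedy bottom-up selection can be sketched as follows. As a simplification, the sketch counts a shortcut for every non-adjacent neighbor pair instead of running witness searches, so it only approximates A(x):

```python
def edge_difference(graph, x):
    """Approximate net number of edges added when contracting x.

    Simplification: assume a shortcut is needed for every pair of
    neighbors of x that is not already adjacent (ignores witness paths).
    graph: dict vertex -> dict neighbor -> weight (undirected).
    """
    neighbors = list(graph[x])
    shortcuts = sum(1 for i, u in enumerate(neighbors)
                    for w in neighbors[i + 1:] if w not in graph[u])
    return shortcuts - len(neighbors)

def next_to_contract(graph, contracted_neighbors):
    """Greedy bottom-up choice: lowest edge difference, ties broken by
    how many of the node's neighbors were already contracted."""
    return min(graph, key=lambda x: (edge_difference(graph, x),
                                     contracted_neighbors[x]))
```

In a star graph, contracting the center would add edges (edge difference 0) while contracting a leaf removes one (edge difference −1), so a leaf is contracted first.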
Top-down heuristics on the other hand yield better results but need more preprocessing
time. They classify vertices which are part of many shortest paths as more important than
those which are only needed for a few shortest paths. This can be approximated16 using
nested dissections17 .[2] To compute a nested dissection, one recursively separates a graph
into two parts; which are themselves then separated into two parts and so on. That is, find a
subset of nodes S ⊆ V which, when removed from the graph G, separates G into two disjoint
pieces G1 , G2 of approximately equal size such that S ∪ G1 ∪ G2 = G. Place all nodes v ∈ S
last in the node ordering and then recursively compute the nested dissection for G1 and
G2 .[12] The intuition is that all queries from one half of the graph to the other half of
the graph need to pass through the small separator, and therefore nodes in this separator
are of high importance. Nested dissections can be efficiently calculated on road networks
because of their small separators.[13]

63.1.2 Query phase

In the query phase, a bidirectional search is performed starting from the starting node s
and the target node t on the original graph augmented by the shortcuts created in the
preprocessing phase.[2] The most important vertex on the shortest path between s and
t will be either s or t themselves or more important than both s and t. Therefore the
vertex u minimizing dist(s, u) + dist(u, t) is on the shortest s − t path in the original graph
and dist(s, u) + dist(u, t) = dist(s, t) holds.[2] This, in combination with how shortcuts are
created, means that both forward and backward search only need to relax edges leading to
more important nodes (upwards) in the hierarchy which keeps the search space small.[3] In
all up-(down-up)-down paths, the inner (down-up) can be skipped, because a shortcut has
been created in the preprocessing stage.
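The query phase can be sketched as two upward Dijkstra searches whose distance labels are combined at the meeting vertices. The two input graphs, the upward edges for the forward search and the reversed upward edges for the backward search, are assumed to have been produced by the preprocessing phase:

```python
import heapq

def ch_query(up, down, s, t):
    """Bidirectional CH query returning dist(s, t).

    up: for the forward search, edges from each vertex to more important ones.
    down: for the backward search from t, reversed edges to more important ones.
    Both: dict vertex -> dict neighbor -> weight.
    """
    def upward_dijkstra(graph, source):
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue               # stale heap entry
            for v, w in graph.get(u, {}).items():
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    forward = upward_dijkstra(up, s)
    backward = upward_dijkstra(down, t)
    # the shortest path passes over some vertex settled by both searches
    return min((forward[v] + backward[v]
                for v in forward.keys() & backward.keys()),
               default=float('inf'))
```

For instance, if the forward search reaches an important vertex m at distance 2 and the backward search reaches m at distance 3, the query returns 5 (or less, if the searches meet more cheaply elsewhere).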

16 https://en.wikipedia.org/wiki/Approximation_algorithm
17 https://en.wikipedia.org/wiki/Nested_dissection


Figure 158 When computing the shortest path from s to t, forward (orange) and
backward (blue) search only need to follow edges going upwards in the hierarchy. The
path found is marked in red and uses one shortcut (dashed).

Path retrieval

A CH query as described above yields the time or distance from s to t but not the actual
path. To obtain the list of edges (streets) on the shortest path the shortcuts taken have to
be unpacked. Each shortcut is the concatenation of two other edges of the original graph
or further shortcuts. Storing the middle vertex of each shortcut during contraction enables
linear-time recursive unpacking of the shortest route.[2][3]
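The recursive unpacking can be sketched as follows, assuming the middle vertex of every shortcut was recorded during contraction:

```python
def unpack(u, w, middle):
    """Expand the edge (u, w) into a list of original edges.

    middle: dict mapping a shortcut (u, w) to the vertex it skips,
    recorded during contraction; original edges are absent from it.
    """
    if (u, w) not in middle:
        return [(u, w)]                # an original edge: nothing to unpack
    v = middle[(u, w)]
    return unpack(u, v, middle) + unpack(v, w, middle)
```

Each shortcut expands into exactly two sub-edges, so the full route is recovered in time linear in its number of original edges.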

63.2 Customized contraction hierarchies

If the edge weights are changed more often than the network topology, CH can be extended
to a three-phase approach by including a customization phase between the preprocessing and
query phase. This can be used, for example, to switch between shortest distance and shortest
travel time, to include current traffic information, or to respect user preferences such as
avoiding certain types of roads (ferries, highways, ...). In the preprocessing phase, most of
the runtime is spent on computing the order in which the nodes are contracted.[3] This
sequence of contraction operations can be saved during the preprocessing phase for later
use in the customization phase. Each time the metric is customized, the contractions
can then be efficiently applied in the stored order using the custom metric.[2] Additionally,
depending on the new edge weights it may be necessary to recompute some shortcuts.[3] For
this to work, the contraction order has to be computed using metric-independent nested
dissections.[1]

63.3 Extensions and applications

CHs as described above search for a shortest path from one starting node to one target node.
This is called one-to-one shortest path and is used for example in car-navigation systems.
Other applications include matching GPS18 traces to road segments and speeding up traffic
simulators19 which have to consider the likely routes taken by all drivers in a network. In
route prediction20 one tries to estimate where a vehicle is likely headed by calculating how
well its current and past positions agree with a shortest path from its starting point to any
possible target. This can be efficiently done using CHs.[2]
In one-to-many scenarios, a starting node s and a set of target nodes T are given and the
distance dist(s, t) for all t ∈ T has to be computed. The most prominent application for
one-to-many queries is the point-of-interest search. Typical examples include finding the
closest gas station, restaurant or post office using actual travel time instead of geographical
distance21 as metric.[2]
In the many-to-many shortest path scenario, a set of starting nodes S and a set of target
nodes T are given and the distance dist(si , tj ) for all (si , tj ) ∈ S × T has to be computed.
This is used for example in logistic applications.[2] CHs can be extended to many-to-many
queries in the following manner: First, perform a backward upward search from each tj ∈ T .
For each vertex u scanned during this search, one stores dist(tj , u) in a bucket β(u). Then,
one runs a forward upward search from each si ∈ S, checking for each non-empty bucket
whether the route over the corresponding vertex improves any best distance, that is, whether
dist(si , u) + dist(u, tj ) < dist(si , tj ) for some (si , tj ) ∈ S × T .[2][3]
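The bucket scheme can be sketched as follows, reusing an upward-only Dijkstra as in the one-to-one query phase. All names and the toy graph layout are illustrative assumptions:

```python
import heapq
from collections import defaultdict

def upward_dijkstra(adj, rank, source):
    """Upward-only Dijkstra, as in the one-to-one query phase."""
    dist, pq = {source: 0}, [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if rank[v] > rank[u] and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def many_to_many(adj, radj, rank, sources, targets):
    """Bucket-based many-to-many distances over S × T."""
    buckets = defaultdict(dict)  # β(u): distances from u to each target
    for t in targets:            # one backward upward search per target
        for u, d in upward_dijkstra(radj, rank, t).items():
            buckets[u][t] = d
    best = {(s, t): float("inf") for s in sources for t in targets}
    for s in sources:            # one forward upward search per source
        for u, d in upward_dijkstra(adj, rank, s).items():
            for t, dt in buckets[u].items():  # scan bucket at u
                best[(s, t)] = min(best[(s, t)], d + dt)
    return best
```

Each source's forward search only ever scans buckets, so the |S| + |T| upward searches replace |S| · |T| full point-to-point queries.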
Some applications even require one-to-all computations, i.e., finding the distances from
a source vertex s to all other vertices in the graph. As Dijkstra’s algorithm visits each
edge exactly once and therefore runs in linear time, it is theoretically optimal. Dijkstra’s
algorithm, however, is hard to parallelize22 and is not cache-optimal23 because of its bad
locality. CHs can be used for a more cache-optimal implementation. For this, a forward
upward search from s followed by a downward scan over all nodes in the shortcut-enriched
graph is performed. The latter operation scans through memory in a linear fashion, as
the nodes are processed in decreasing order of importance and can therefore be placed in
memory accordingly.[14] Note that this is possible because the order in which the nodes are
processed in the second phase is independent of the source node s.[2]
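This two-phase one-to-all computation, in the spirit of the PHAST approach cited above but simplified to a toy graph layout with assumed names, can be sketched as:

```python
import heapq

def one_to_all(adj, rank, s):
    """One-to-all distances from s in the shortcut-enriched graph:
    phase 1 is an upward-only Dijkstra from s; phase 2 is a single
    sweep over nodes in decreasing order of importance, relaxing each
    node's downward edges."""
    # phase 1: upward search (as in the query phase)
    dist, pq = {s: 0}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if rank[v] > rank[u] and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    # phase 2: downward sweep; when u is processed, dist[u] is final,
    # because every path enters u only from more important nodes
    for u in sorted(rank, key=rank.get, reverse=True):
        du = dist.get(u, float("inf"))
        for v, w in adj.get(u, []):
            if rank[v] < rank[u] and du + w < dist.get(v, float("inf")):
                dist[v] = du + w
    return dist
```

The second phase is priority-queue free and touches the nodes in one fixed order, which is what makes the cache-friendly memory layout possible.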
In production, car-navigation systems should be able to compute fastest travel routes using
predicted traffic information and display alternative routes. Both can be done using CHs.[2]
The former is called routing with time-dependent networks where the travel time of a given

18 https://en.wikipedia.org/wiki/Global_Positioning_System
19 https://en.wikipedia.org/wiki/Traffic_simulation
20 https://en.wikipedia.org/w/index.php?title=Route_prediction&action=edit&redlink=1
21 https://en.wikipedia.org/wiki/Geographical_distance
22 https://en.wikipedia.org/wiki/Parallel_computing
23 https://en.wikipedia.org/wiki/Cache_(computing)


edge is no longer constant but rather a function of the time of day when entering the edge.
Alternative routes need to be smooth-looking, significantly different from the shortest path
but not significantly longer.[2]
CHs can be extended to optimize multiple metrics at the same time; this is called multi-
criteria route planning. For example, one could minimize both travel cost and time. Another
example is electric vehicles24 , for which the available battery charge constrains the valid
routes, as the battery must not run empty.[2]

63.4 References
1. D, J; S, B; W, D (5 A 2016). ”C-
 C H”. Journal of Experimental Algorithmics.
21 (1): 1–49. arXiv25 :1402.040226 . doi27 :10.1145/288684328 .
2. B, H; D, D; G, A V.; M-
H, M; P, T; S, P; W, D;
W, R F. (2016). ”R P  T N-
”. Algorithm Engineering. Lecture Notes in Computer Science. 9220: 19–
80. arXiv29 :1504.0514030 . doi31 :10.1007/978-3-319-49487-6_232 . ISBN33 978-3-319-
49486-934 .
3. G, R; S, P; S, D; V,
C (2012). ”E R  L R N U
C H”. Transportation Science. 46 (3): 388–404.
doi35 :10.1287/trsc.1110.040136 .
4. D, D; S, P; S, D; W, D
(2009). ”E R P A”. Algorithmics of Large
and Complex Networks. Lecture Notes in Computer Science. 5515: 117–139.
doi37 :10.1007/978-3-642-02094-0_738 . ISBN39 978-3-642-02093-340 .
5. ”OSRM – O S R M”41 .
6. ”W – OTP”42 .

24 https://en.wikipedia.org/wiki/Electric_vehicle
25 https://en.wikipedia.org/wiki/ArXiv_(identifier)
26 http://arxiv.org/abs/1402.0402
27 https://en.wikipedia.org/wiki/Doi_(identifier)
28 https://doi.org/10.1145%2F2886843
29 https://en.wikipedia.org/wiki/ArXiv_(identifier)
30 http://arxiv.org/abs/1504.05140
31 https://en.wikipedia.org/wiki/Doi_(identifier)
32 https://doi.org/10.1007%2F978-3-319-49487-6_2
33 https://en.wikipedia.org/wiki/ISBN_(identifier)
34 https://en.wikipedia.org/wiki/Special:BookSources/978-3-319-49486-9
35 https://en.wikipedia.org/wiki/Doi_(identifier)
36 https://doi.org/10.1287%2Ftrsc.1110.0401
37 https://en.wikipedia.org/wiki/Doi_(identifier)
38 https://doi.org/10.1007%2F978-3-642-02094-0_7
39 https://en.wikipedia.org/wiki/ISBN_(identifier)
40 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-02093-3
41 http://project-osrm.org/
42 http://www.opentripplanner.org


7. ”W – GH”43 .
8. ”GH – T”44 .
9. ”GH – RK”45 .
10. B, R; D, D; S, P; S, D-
; S, D; W, D (2010-03-01). ”C -
  - -   ' ”46 .
Journal of Experimental Algorithmics. 15: 2.1. doi47 :10.1145/1671970.167197648 .
ISSN49 1084-665450 .
11. G, R; S, P; S, D; D, D
(2008). MG, C C. (.). ”C H: F
 S H R  R N”. Experimental Algo-
rithms. Lecture Notes in Computer Science. Springer Berlin Heidelberg. 5038:
319–333. doi51 :10.1007/978-3-540-68552-4_2452 . ISBN53 978354068552454 .
12. B, R; C, T; R, I; W, D
(2016-09-13). ”S-    ”. Theoretical
Computer Science. 645: 112–127. doi55 :10.1016/j.tcs.2016.07.00356 . ISSN57 0304-
397558 .
13. D, D; G, A V.; R, I; W, R-
 F. (M 2011). ”G P  N C”. (:unav): 1135–
1146. CiteSeerX59 10.1.1.385.158060 . doi61 :10.1109/ipdps.2011.10862 . ISBN63 978-1-
61284-372-864 .
14. D, D; G, A V.; N, A; W,
R F. (2011). ”PHAST: H-A S P T”.
Parallel & Distributed Processing Symposium (IPDPS), 2011 IEEE International:
921–931. doi65 :10.1109/ipdps.2011.8966 . ISBN67 978-1-61284-372-868 .

43 http://graphhopper.com
44 https://github.com/ifsttar/Tempus
45 https://github.com/RoutingKit/RoutingKit
46 https://publikationen.bibliothek.kit.edu/1000014952
47 https://en.wikipedia.org/wiki/Doi_(identifier)
48 https://doi.org/10.1145%2F1671970.1671976
49 https://en.wikipedia.org/wiki/ISSN_(identifier)
50 http://www.worldcat.org/issn/1084-6654
51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.1007%2F978-3-540-68552-4_24
53 https://en.wikipedia.org/wiki/ISBN_(identifier)
54 https://en.wikipedia.org/wiki/Special:BookSources/9783540685524
55 https://en.wikipedia.org/wiki/Doi_(identifier)
56 https://doi.org/10.1016%2Fj.tcs.2016.07.003
57 https://en.wikipedia.org/wiki/ISSN_(identifier)
58 http://www.worldcat.org/issn/0304-3975
59 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
60 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.385.1580
61 https://en.wikipedia.org/wiki/Doi_(identifier)
62 https://doi.org/10.1109%2Fipdps.2011.108
63 https://en.wikipedia.org/wiki/ISBN_(identifier)
64 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61284-372-8
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1109%2Fipdps.2011.89
67 https://en.wikipedia.org/wiki/ISBN_(identifier)
68 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61284-372-8


63.5 External links

Open source implementations


• GraphHopper: https://www.graphhopper.com/
• Tempus: https://github.com/ifsttar/Tempus
• RoutingKit: https://github.com/RoutingKit/RoutingKit
• OSRM: http://project-osrm.org/
• OpenTripPlanner: http://www.opentripplanner.org/

64 Courcelle's theorem

In the study of graph1 algorithms2 , Courcelle's theorem is the statement that every
graph property3 definable in the monadic second-order4 logic of graphs5 can be decided in
linear time6 on graphs of bounded treewidth7 .[1][2][3] The result was first proved by Bruno
Courcelle8 in 1990[4] and independently rediscovered by Borie, Parker & Tovey (1992)9 .[5]
It is considered the archetype of algorithmic meta-theorems10 .[6][7]

64.1 Formulations

64.1.1 Vertex sets

In one variation of monadic second-order graph logic known as MSO1 , the graph11 is de-
scribed by a set of vertices V and a binary adjacency relation adj(.,.), and the restriction to
monadic logic means that the graph property in question may be defined in terms of sets
of vertices of the given graph, but not in terms of sets of edges, or sets of tuples of vertices.
As an example, the property of a graph being colorable12 with three colors (represented by
three sets of vertices R, G, and B) may be defined by the monadic second-order formula

∃R,G,B. (∀v∈V. (v∈R ∨ v∈G ∨ v∈B)) ∧
(∀u,v∈V. ((u∈R ∧ v∈R) ∨ (u∈G ∧ v∈G) ∨ (u∈B ∧ v∈B)) → ¬adj(u,v)).

The first part of this formula ensures that the three color classes cover all the vertices of
the graph, and the second ensures that they each form an independent set13 . (It would also
be possible to add clauses to the formula to ensure that the three color classes are disjoint,

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Graph_property
4 https://en.wikipedia.org/wiki/Monadic_second-order_logic
5 https://en.wikipedia.org/wiki/Logic_of_graphs
6 https://en.wikipedia.org/wiki/Linear_time
7 https://en.wikipedia.org/wiki/Treewidth
8 https://en.wikipedia.org/wiki/Bruno_Courcelle
9 #CITEREFBorieParkerTovey1992
10 https://en.wikipedia.org/wiki/Meta-theorem
11 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
12 https://en.wikipedia.org/wiki/Graph_coloring
13 https://en.wikipedia.org/wiki/Independent_set_(graph_theory)


but this makes no difference to the result.) Thus, by Courcelle's theorem, 3-colorability of
graphs of bounded treewidth may be tested in linear time.
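For a small graph, the formula above can be evaluated directly by brute force over all ways of distributing the vertices among the set variables R, G, and B. This is exponential in general, which is exactly the blow-up that Courcelle's theorem avoids on bounded-treewidth graphs (an illustrative sketch, not part of the theorem's algorithm):

```python
from itertools import product

def three_colorable_mso1(V, adj):
    """Evaluate the 3-colorability formula by trying every assignment of
    vertices to the set variables R, G, B.  The first clause (every
    vertex is in R, G, or B) holds by construction; the second rejects
    assignments in which two equally colored vertices are adjacent."""
    for colors in product("RGB", repeat=len(V)):  # ∃R, G, B ...
        cls = dict(zip(V, colors))
        if all(not (cls[u] == cls[v] and adj(u, v))
               for u in V for v in V):
            return True
    return False
```

Here `adj` is any symmetric predicate on vertices; a complete graph on four vertices fails, while a triangle succeeds.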
For this variation of graph logic, Courcelle's theorem can be extended from treewidth to
clique-width14 : for every fixed MSO1 property P, and every fixed bound b on the clique-
width of a graph, there is a linear-time algorithm for testing whether a graph of clique-width
at most b has property P.[8] The original formulation of this result required the input graph
to be given together with a construction proving that it has bounded clique-width, but later
approximation algorithms15 for clique-width removed this requirement.[9]

64.1.2 Edge sets

Courcelle's theorem may also be used with a stronger variation of monadic second-order
logic known as MSO2 . In this formulation, a graph is represented by a set V of vertices, a
set E of edges, and an incidence relation between vertices and edges. This variation allows
quantification over sets of vertices or edges, but not over more complex relations on tuples
of vertices or edges.
For instance, the property of having a Hamiltonian cycle16 may be expressed in MSO2
by describing the cycle as a set of edges that includes exactly two edges incident to each
vertex, such that every nonempty proper subset of vertices has an edge in the putative cycle
with exactly one endpoint in the subset. However, Hamiltonicity cannot be expressed in
MSO1 .[10]
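The two conditions of this MSO2 characterization can be checked mechanically on a small graph. This brute-force sketch, exponential in the number of vertices, only illustrates the semantics of the formula:

```python
from itertools import combinations

def is_hamiltonian_cycle(V, C):
    """Check the MSO2 characterization of a Hamiltonian cycle: the edge
    set C touches every vertex exactly twice, and every nonempty proper
    subset S of the vertices is left by some edge of C (an edge with
    exactly one endpoint inside S)."""
    if any(sum(v in e for e in C) != 2 for v in V):
        return False
    for k in range(1, len(V)):                 # nonempty proper subsets
        for S in combinations(V, k):
            if not any(len(set(S) & set(e)) == 1 for e in C):
                return False
    return True
```

A 4-cycle passes both conditions, while two disjoint triangles satisfy the degree condition but fail the subset condition, which is what rules out unions of smaller cycles.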

64.1.3 Labeled graphs

It is possible to apply the same results to graphs in which the vertices or edges have labels17
from a fixed finite set, either by augmenting the graph logic to incorporate predicates
describing the labels, or by representing the labels by unquantified vertex set or edge set
variables.[11]

64.1.4 Modular congruences

Another direction for extending Courcelle's theorem concerns logical formulas that include
predicates for counting the sizes of sets. In this context, it is not possible to perform
arbitrary arithmetic operations on set sizes, nor even to test whether two sets have the same
size. However, MSO1 and MSO2 can be extended to logics called CMSO1 and CMSO2 that
include, for every two constants q and r, a predicate cardq,r (S) which tests whether the
cardinality18 of set S is congruent19 to r modulo q. Courcelle's theorem can be extended to
these logics.[4]

14 https://en.wikipedia.org/wiki/Clique-width
15 https://en.wikipedia.org/wiki/Approximation_algorithm
16 https://en.wikipedia.org/wiki/Hamiltonian_cycle
17 https://en.wikipedia.org/wiki/Graph_labeling
18 https://en.wikipedia.org/wiki/Cardinality
19 https://en.wikipedia.org/wiki/Modular_arithmetic


64.1.5 Decision versus optimization

As stated above, Courcelle's theorem applies primarily to decision problems20 : does a graph
have a property or not. However, the same methods also allow the solution to optimization
problems21 in which the vertices or edges of a graph have integer weights, and one seeks
the minimum or maximum weight vertex set that satisfies a given property, expressed in
second-order logic. These optimization problems can be solved in linear time on graphs of
bounded clique-width.[8][11]

64.1.6 Space complexity

Rather than bounding the time complexity22 of an algorithm that recognizes an MSO prop-
erty on bounded-treewidth graphs, it is also possible to analyze the space complexity23 of
such an algorithm; that is, the amount of memory needed above and beyond the size of
the input itself (which is assumed to be represented in a read-only way so that its space
requirements cannot be put to other purposes). In particular, it is possible to recognize the
graphs of bounded treewidth, and any MSO property on these graphs, by a deterministic
Turing machine24 that uses only logarithmic space25 .[12]

64.2 Proof strategy and complexity

The typical approach to proving Courcelle's theorem involves the construction of a finite
bottom-up tree automaton26 that acts on the tree decompositions27 of the given graph.[6]
In more detail, two graphs G1 and G2 , each with a specified subset T of vertices called
terminals, may be defined to be equivalent with respect to an MSO formula F if, for all
other graphs H whose intersection with G1 and G2 consists only of vertices in T, the two
graphs G1 ∪ H and G2 ∪ H behave the same with respect to F: either they both model
F or they both do not model F. This is an equivalence relation28 , and it can be shown by
induction on the length of F that (when the sizes of T and F are both bounded) it has
finitely many equivalence classes29 .[13]
A tree decomposition of a given graph G consists of a tree and, for each tree node, a subset
of the vertices of G called a bag. It must satisfy two properties: for each vertex v of G, the
bags containing v must be associated with a contiguous subtree of the tree, and for each edge
uv of G, there must be a bag containing both u and v. The vertices in a bag can be thought
of as the terminals of a subgraph of G, represented by the subtree of the tree decomposition

20 https://en.wikipedia.org/wiki/Decision_problem
21 https://en.wikipedia.org/wiki/Optimization_problem
22 https://en.wikipedia.org/wiki/Time_complexity
23 https://en.wikipedia.org/wiki/Space_complexity
24 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
25 https://en.wikipedia.org/wiki/L_(complexity)
26 https://en.wikipedia.org/wiki/Tree_automaton
27 https://en.wikipedia.org/wiki/Tree_decomposition
28 https://en.wikipedia.org/wiki/Equivalence_relation
29 https://en.wikipedia.org/wiki/Equivalence_class


descending from that bag. When G has bounded treewidth, it has a tree decomposition
in which all bags have bounded size, and such a decomposition can be found in fixed-
parameter tractable time.[14] Moreover, it is possible to choose this tree decomposition so
that it forms a binary tree30 , with only two child subtrees per bag. Therefore, it is possible
to perform a bottom-up computation on this tree decomposition, computing an identifier for
the equivalence class of the subtree rooted at each bag by combining the edges represented
within the bag with the two identifiers for the equivalence classes of its two children.[15]
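The spirit of this bottom-up computation can be sketched for 3-colorability: the state of each bag is the set of proper colorings of its vertices that extend to the whole subtree below it, a finite amount of information per bag, as the equivalence-class argument requires. (A toy sketch; it assumes a valid tree decomposition in which every edge of the graph appears inside some bag, and it enumerates colorings rather than building the automaton explicitly.)

```python
from itertools import product

def three_colorable_td(bags, children, edges, root):
    """Bottom-up DP over a tree decomposition.  bags maps a tree node to
    a tuple of graph vertices; children maps a tree node to its child
    nodes; edges lists the graph's edges."""
    def states(x):
        child_states = [states(c) for c in children.get(x, [])]
        result = []
        for cols in product(range(3), repeat=len(bags[x])):
            assign = dict(zip(bags[x], cols))
            # reject colorings that violate an edge inside this bag
            if any(u in assign and v in assign and assign[u] == assign[v]
                   for u, v in edges):
                continue
            # every child must offer a state agreeing on shared vertices
            if all(any(all(assign[v] == st[v] for v in assign if v in st)
                       for st in cs)
                   for cs in child_states):
                result.append(assign)
        return result
    return bool(states(root))
```

With bags of bounded size there are only boundedly many possible states per bag, so each combination step takes constant time and the whole pass is linear in the size of the tree decomposition.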
The size of the automaton constructed in this way is not an elementary function31 of the
size of the input MSO formula. This non-elementary complexity is necessary, in the sense
that (unless P = NP32 ) it is not possible to test MSO properties on trees in a time that is
fixed-parameter tractable with an elementary dependence on the parameter.[16]

64.3 Bojańczyk-Pilipczuk's theorem

The proofs of Courcelle's theorem show a stronger result: not only can every (counting)
monadic second-order property be recognized in linear time for graphs of bounded treewidth,
but also it can be recognized by a finite-state tree automaton33 . Courcelle conjectured a
converse to this: if a property of graphs of bounded treewidth is recognized by a tree
automaton, then it can be defined in counting monadic second-order logic. In 1998, Lapoire
(1998)34 claimed a resolution of the conjecture.[17] However, the proof is widely regarded
as unsatisfactory.[18][19] Until 2016, only a few special cases had been resolved: in particular,
the conjecture had been proved for graphs of treewidth at most three,[20] for k-connected
graphs of treewidth k, for graphs of constant treewidth and chordality, and for k-outerplanar
graphs. The general version of the conjecture was finally proved by Mikołaj Bojańczyk35
and Michał Pilipczuk.[21]
Moreover, for Halin graphs36 (a special case of treewidth three graphs) counting is not
needed: for these graphs, every property that can be recognized by a tree automaton can
also be defined in monadic second-order logic. The same is true more generally for certain
classes of graphs in which a tree decomposition can itself be described in MSOL. However,
it cannot be true for all graphs of bounded treewidth, because in general counting adds
extra power over monadic second-order logic without counting. For instance, the graphs
with an even number of vertices can be recognized using counting, but not without.[19]

30 https://en.wikipedia.org/wiki/Binary_tree
31 https://en.wikipedia.org/wiki/Elementary_function
32 https://en.wikipedia.org/wiki/P_%3D_NP
33 https://en.wikipedia.org/wiki/Tree_automaton
34 #CITEREFLapoire1998
35 https://en.wikipedia.org/wiki/Miko%C5%82aj_Boja%C5%84czyk
36 https://en.wikipedia.org/wiki/Halin_graph


64.4 Satisfiability and Seese's theorem

The satisfiability problem37 for a formula of monadic second-order logic is the problem of
determining whether there exists at least one graph (possibly within a restricted family of
graphs) for which the formula is true. For arbitrary graph families, and arbitrary formulas,
this problem is undecidable38 . However, satisfiability of MSO2 formulas is decidable for the
graphs of bounded treewidth, and satisfiability of MSO1 formulas is decidable for graphs of
bounded clique-width. The proof involves building a tree automaton for the formula and
then testing whether the automaton has an accepting path.
As a partial converse, Seese (1991)39 proved that, whenever a family of graphs has a de-
cidable MSO2 satisfiability problem, the family must have bounded treewidth. The proof
is based on a theorem of Robertson40 and Seymour41 that the families of graphs with un-
bounded treewidth have arbitrarily large grid42 minors43 .[22] Seese also conjectured that
every family of graphs with a decidable MSO1 satisfiability problem must have bounded
clique-width; this has not been proven, but a weakening of the conjecture that replaces
MSO1 by CMSO1 is true.[23]

64.5 Applications

Grohe (2001)44 used Courcelle's theorem to show that computing the crossing number45
of a graph G is fixed-parameter tractable46 with a quadratic dependence on the size of
G, improving a cubic-time algorithm based on the Robertson–Seymour theorem47 . An
additional later improvement to linear time48 by Kawarabayashi & Reed (2007)49 follows
the same approach. If the given graph G has small treewidth, Courcelle's theorem can
be applied directly to this problem. On the other hand, if G has large treewidth, then
it contains a large grid50 minor51 , within which the graph can be simplified while leaving
the crossing number unchanged. Grohe's algorithm performs these simplifications until the
remaining graph has a small treewidth, and then applies Courcelle's theorem to solve the
reduced subproblem.[24][25]

37 https://en.wikipedia.org/wiki/Satisfiability_problem
38 https://en.wikipedia.org/wiki/Undecidable_problem
39 #CITEREFSeese1991
40 https://en.wikipedia.org/wiki/Neil_Robertson_(mathematician)
41 https://en.wikipedia.org/wiki/Paul_Seymour_(mathematician)
42 https://en.wikipedia.org/wiki/Grid_graph
43 https://en.wikipedia.org/wiki/Graph_minor
44 #CITEREFGrohe2001
45 https://en.wikipedia.org/wiki/Crossing_number_(graph_theory)
46 https://en.wikipedia.org/wiki/Parameterized_complexity
47 https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour_theorem
48 https://en.wikipedia.org/wiki/Linear_time
49 #CITEREFKawarabayashiReed2007
50 https://en.wikipedia.org/wiki/Grid_graph
51 https://en.wikipedia.org/wiki/Graph_minor


Gottlob & Lee (2007)52 observed that Courcelle's theorem applies to several problems of
finding minimum multi-way cuts53 in a graph, when the structure formed by the graph
and the set of cut pairs has bounded treewidth. As a result, they obtain a fixed-parameter
tractable algorithm for these problems, parameterized by a single parameter, treewidth,
improving previous solutions that had combined multiple parameters.[26]
In computational topology, Burton & Downey (2014)54 extend Courcelle's theorem from
MSO2 to a form of monadic second-order logic on simplicial complexes55 of bounded di-
mension that allows quantification over simplices of any fixed dimension. As a consequence,
they show how to compute certain quantum invariants56 of 3-manifolds57 as well as how
to solve certain problems in discrete Morse theory58 efficiently, when the manifold has a
triangulation (avoiding degenerate simplices) whose dual graph59 has small treewidth.[27]
Methods based on Courcelle's theorem have also been applied to database theory60 ,[28]
knowledge representation and reasoning61 ,[29] automata theory62 ,[30] and model check-
ing63 .[31]

64.6 References
1. E, S (2008), Regular Languages, Tree Width, and Courcelle's Theorem:
An Introduction, VDM Publishing, ISBN64 978363907633265 .
2. C, B66 ; E, J (2012), Graph Structure and Monadic
Second-Order Logic: A Language-Theoretic Approach67 (PDF), E
 M   A, 138, Cambridge University Press68 ,
ISBN69 978113964400670 , Zbl71 1257.6800672 .
3. D, R G.73 ; F, M R.74 (2013), ”C 13:
C' ”, Fundamentals of parameterized complexity, Texts in

52 #CITEREFGottlobLee2007
53 https://en.wikipedia.org/wiki/Cut_(graph_theory)
54 #CITEREFBurtonDowney2014
55 https://en.wikipedia.org/wiki/Simplicial_complex
56 https://en.wikipedia.org/wiki/Quantum_invariants
57 https://en.wikipedia.org/wiki/Manifold
58 https://en.wikipedia.org/wiki/Discrete_Morse_theory
59 https://en.wikipedia.org/wiki/Dual_graph
60 https://en.wikipedia.org/wiki/Database_theory
61 https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning
62 https://en.wikipedia.org/wiki/Automata_theory
63 https://en.wikipedia.org/wiki/Model_checking
64 https://en.wikipedia.org/wiki/ISBN_(identifier)
65 https://en.wikipedia.org/wiki/Special:BookSources/9783639076332
66 https://en.wikipedia.org/wiki/Bruno_Courcelle
67 http://hal.archives-ouvertes.fr/docs/00/64/65/14/PDF/TheBook.pdf
68 https://en.wikipedia.org/wiki/Cambridge_University_Press
69 https://en.wikipedia.org/wiki/ISBN_(identifier)
70 https://en.wikipedia.org/wiki/Special:BookSources/9781139644006
71 https://en.wikipedia.org/wiki/Zbl_(identifier)
72 http://zbmath.org/?format=complete&q=an:1257.68006
73 https://en.wikipedia.org/wiki/Rod_Downey
74 https://en.wikipedia.org/wiki/Michael_Fellows

818
References

Computer Science, London: Springer, pp. 265–278, CiteSeerX75 10.1.1.456.272976 ,


doi77 :10.1007/978-1-4471-5559-178 , ISBN79 978-1-4471-5558-480 , MR81 315446182 .
4. C, B83 (1990), ”T  -   .
I. R    ”, Information and Computation, 85 (1):
12–75, doi84 :10.1016/0890-5401(90)90043-H85 , MR86 104264987 , Zbl88 0722.0300889
5. B, R B.; P, R. G; T, C A. (1992), ”A
  -     -
       ”, Algorith-
mica, 7 (5–6): 555–581, doi90 :10.1007/BF0175877791 , MR92 115458893 .
6. K, J; L, A (2009), ”A   
C' ”, Electronic Notes in Theoretical Computer Science, 251:
65–81, doi94 :10.1016/j.entcs.2009.08.02895 .
7. L, M (2010), ”A -   
”,   B, M; M, U (.), Proc. 18th Annual Euro-
pean Symposium on Algorithms, Lecture Notes in Computer Science, 6346, Springer,
pp. 549–560, doi96 :10.1007/978-3-642-15775-2_4797 , Zbl98 1287.6807899 .
8. C, B.; M, J. A.; R, U. (2000), ”L  -
       -”, The-
ory of Computing Systems, 33 (2): 125–150, CiteSeerX100 10.1.1.414.1845101 ,
doi102 :10.1007/s002249910009103 , MR104 1739644105 , Zbl106 1009.68102107 .

75 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
76 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.456.2729
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.1007%2F978-1-4471-5559-1
79 https://en.wikipedia.org/wiki/ISBN_(identifier)
80 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4471-5558-4
81 https://en.wikipedia.org/wiki/MR_(identifier)
82 http://www.ams.org/mathscinet-getitem?mr=3154461
83 https://en.wikipedia.org/wiki/Bruno_Courcelle
84 https://en.wikipedia.org/wiki/Doi_(identifier)
85 https://doi.org/10.1016%2F0890-5401%2890%2990043-H
86 https://en.wikipedia.org/wiki/MR_(identifier)
87 http://www.ams.org/mathscinet-getitem?mr=1042649
88 https://en.wikipedia.org/wiki/Zbl_(identifier)
89 http://zbmath.org/?format=complete&q=an:0722.03008
90 https://en.wikipedia.org/wiki/Doi_(identifier)
91 https://doi.org/10.1007%2FBF01758777
92 https://en.wikipedia.org/wiki/MR_(identifier)
93 http://www.ams.org/mathscinet-getitem?mr=1154588
94 https://en.wikipedia.org/wiki/Doi_(identifier)
95 https://doi.org/10.1016%2Fj.entcs.2009.08.028
96 https://en.wikipedia.org/wiki/Doi_(identifier)
97 https://doi.org/10.1007%2F978-3-642-15775-2_47
98 https://en.wikipedia.org/wiki/Zbl_(identifier)
99 http://zbmath.org/?format=complete&q=an:1287.68078
100 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
101 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.414.1845
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1007%2Fs002249910009
104 https://en.wikipedia.org/wiki/MR_(identifier)
105 http://www.ams.org/mathscinet-getitem?mr=1739644
106 https://en.wikipedia.org/wiki/Zbl_(identifier)
107 http://zbmath.org/?format=complete&q=an:1009.68102


9. O, S-; S, P108 (2006), ”A - 


-”, Journal of Combinatorial Theory109 , S B, 96 (4): 514–528,
doi110 :10.1016/j.jctb.2005.10.006111 , MR112 2232389113 .
10. Courcelle & Engelfriet (2012)114 , Proposition 5.13, p. 338115 .
11. A, S; L, J; S, D (1991), ”E
  - ”, Journal of Algorithms, 12 (2):
308–340, CiteSeerX116 10.1.1.12.2544117 , doi118 :10.1016/0196-6774(91)90006-K119 ,
MR120 1105479121 .
12. E, M; J, A; T, T (O 2010),
”L V   T  B  C”122
(PDF), Proc. 51st Annual IEEE Symposium on Foundations of Computer Science
(FOCS 2010)123 , . 143–152, 124 :10.1109/FOCS.2010.21125 .
13. Downey & Fellows (2013)126 , Theorem 13.1.1, p. 266.
14. Downey & Fellows (2013)127 , Section 10.5: Bodlaender's theorem, pp. 195–203.
15. Downey & Fellows (2013)128 , Section 12.6: Tree automata, pp. 237–247.
16. F, M; G, M129 (2004), ”T   -
  -  ”, Annals of Pure and Applied Logic,
130 (1–3): 3–31, doi130 :10.1016/j.apal.2004.01.007131 , MR132 2092847133 .
17. L, D (1998), ”R   - -
       -”, STACS 98: 15th An-
nual Symposium on Theoretical Aspects of Computer Science Paris, France, Febru-

108 https://en.wikipedia.org/wiki/Paul_Seymour_(mathematician)
109 https://en.wikipedia.org/wiki/Journal_of_Combinatorial_Theory
110 https://en.wikipedia.org/wiki/Doi_(identifier)
111 https://doi.org/10.1016%2Fj.jctb.2005.10.006
112 https://en.wikipedia.org/wiki/MR_(identifier)
113 http://www.ams.org/mathscinet-getitem?mr=2232389
114 #CITEREFCourcelleEngelfriet2012
115 https://books.google.com/books?id=JpIhAwAAQBAJ&pg=PA338
116 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
117 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.12.2544
118 https://en.wikipedia.org/wiki/Doi_(identifier)
119 https://doi.org/10.1016%2F0196-6774%2891%2990006-K
120 https://en.wikipedia.org/wiki/MR_(identifier)
121 http://www.ams.org/mathscinet-getitem?mr=1105479
122 http://wwwmayr.in.tum.de/konferenzen/Sommerakademie2010/talks/tantau_paper.pdf
123 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1109%2FFOCS.2010.21
126 #CITEREFDowneyFellows2013
127 #CITEREFDowneyFellows2013
128 #CITEREFDowneyFellows2013
129 https://en.wikipedia.org/wiki/Martin_Grohe
130 https://en.wikipedia.org/wiki/Doi_(identifier)
131 https://doi.org/10.1016%2Fj.apal.2004.01.007
132 https://en.wikipedia.org/wiki/MR_(identifier)
133 http://www.ams.org/mathscinet-getitem?mr=2092847

820
References

ary 27, 1998, Proceedings, pp. 618–628, Bibcode134 :1998LNCS.1373..618L135 , Cite-


SeerX136 10.1.1.22.7805137 , doi138 :10.1007/bfb0028596139 .
18. C, B.; E., J. (2012), ”G S  M S-
 O L -- A L-T A”, Encyclopedia of math-
ematics and its applications, 138, Cambridge University Press.
19. J, L; B, H L.140 (2015), MSOL-definability equals
recognizability for Halin graphs and bounded degree k-outerplanar graphs,
arXiv141 :1503.01604142 , Bibcode143 :2015arXiv150301604J144 .
20. K, D. (2000), ”D     3-
  -  -”, Algorithmica, 27 (3): 348–381,
doi145 :10.1007/s004530010024146 .
21. BŃ, M; P, M (2016), ”D  -
     ”, Proceedings of the 31st An-
nual ACM/IEEE Symposium on Logic in Computer Science (LICS 2016), pp. 407–416,
arXiv147 :1605.03045148 , doi149 :10.1145/2933575.2934508150 .
22. S, D. (1991), ”T       
  ”, Annals of Pure and Applied Logic, 53 (2): 169–195,
doi151 :10.1016/0168-0072(91)90054-P152 , MR153 1114848154 .
23. C, B; O, S- (2007), ”V-,  -
 ,     S”155 (PDF), Journal of Combina-
torial Theory156 , S B, 97 (1): 91–126, doi157 :10.1016/j.jctb.2006.04.003158 ,
MR159 2278126160 .
24. G, M (2001), ”C    
”, Proceedings of the Thirty-third Annual ACM Symposium on The-

134 https://en.wikipedia.org/wiki/Bibcode_(identifier)
135 https://ui.adsabs.harvard.edu/abs/1998LNCS.1373..618L
136 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
137 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.7805
138 https://en.wikipedia.org/wiki/Doi_(identifier)
139 https://doi.org/10.1007%2Fbfb0028596
140 https://en.wikipedia.org/wiki/Hans_L._Bodlaender
141 https://en.wikipedia.org/wiki/ArXiv_(identifier)
142 http://arxiv.org/abs/1503.01604
143 https://en.wikipedia.org/wiki/Bibcode_(identifier)
144 https://ui.adsabs.harvard.edu/abs/2015arXiv150301604J
145 https://en.wikipedia.org/wiki/Doi_(identifier)
146 https://doi.org/10.1007%2Fs004530010024
147 https://en.wikipedia.org/wiki/ArXiv_(identifier)
148 http://arxiv.org/abs/1605.03045
149 https://en.wikipedia.org/wiki/Doi_(identifier)
150 https://doi.org/10.1145%2F2933575.2934508
151 https://en.wikipedia.org/wiki/Doi_(identifier)
152 https://doi.org/10.1016%2F0168-0072%2891%2990054-P
153 https://en.wikipedia.org/wiki/MR_(identifier)
154 http://www.ams.org/mathscinet-getitem?mr=1114848
155 http://mathsci.kaist.ac.kr/~sangil/pdf/2006co.pdf
156 https://en.wikipedia.org/wiki/Journal_of_Combinatorial_Theory
157 https://en.wikipedia.org/wiki/Doi_(identifier)
158 https://doi.org/10.1016%2Fj.jctb.2006.04.003
159 https://en.wikipedia.org/wiki/MR_(identifier)
160 http://www.ams.org/mathscinet-getitem?mr=2278126

821
Courcelle's theorem

ory of Computing (STOC '01)161 , . 231–236, X162 :/0009010163 ,


164 :10.1145/380752.380805165 .
25. K, K-166 ; R, B167 (2007), ”C -
    ”, Proceedings of the Thirty-ninth Annual
ACM Symposium on Theory of Computing (STOC '07)168 , . 382–390,
169 :10.1145/1250790.1250848170 .
26. G, G; L, S T (2007), ”A  
  ”, Information Processing Letters171 , 103 (4): 136–141,
doi172 :10.1016/j.ipl.2007.03.005173 , MR174 2330167175 .
27. B, B A.; D, R G.176 (2014), Courcelle's theorem for
triangulations, arXiv177 :1403.2926178 , Bibcode179 :2014arXiv1403.2926B180 . Short com-
munication, International Congress of Mathematicians181 , 2014.
28. G, M; M, J (1999), ”D   -
     -”, Database Theory — ICDT’99:
7th International Conference Jerusalem, Israel, January 10–12, 1999, Proceedings,
Lecture Notes in Computer Science, 1540, pp. 70–82, CiteSeerX182 10.1.1.52.2984183 ,
doi184 :10.1007/3-540-49257-7_6185 .
29. G, G; P, R; W, F (J 2010),
”B         -
  ”, Artificial Intelligence, 174 (1): 105–132,
doi186 :10.1016/j.artint.2009.10.003187 .
30. M, P.; P, G (2011), ”T T W  A
S”, Proceedings of the 38th Annual ACM SIGPLAN-SIGACT Symposium on

161 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
162 https://en.wikipedia.org/wiki/ArXiv_(identifier)
163 http://arxiv.org/abs/cs/0009010
164 https://en.wikipedia.org/wiki/Doi_(identifier)
165 https://doi.org/10.1145%2F380752.380805
166 https://en.wikipedia.org/wiki/Ken-ichi_Kawarabayashi
167 https://en.wikipedia.org/wiki/Bruce_Reed_(mathematician)
168 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
169 https://en.wikipedia.org/wiki/Doi_(identifier)
170 https://doi.org/10.1145%2F1250790.1250848
171 https://en.wikipedia.org/wiki/Information_Processing_Letters
172 https://en.wikipedia.org/wiki/Doi_(identifier)
173 https://doi.org/10.1016%2Fj.ipl.2007.03.005
174 https://en.wikipedia.org/wiki/MR_(identifier)
175 http://www.ams.org/mathscinet-getitem?mr=2330167
176 https://en.wikipedia.org/wiki/Rod_Downey
177 https://en.wikipedia.org/wiki/ArXiv_(identifier)
178 http://arxiv.org/abs/1403.2926
179 https://en.wikipedia.org/wiki/Bibcode_(identifier)
180 https://ui.adsabs.harvard.edu/abs/2014arXiv1403.2926B
181 https://en.wikipedia.org/wiki/International_Congress_of_Mathematicians
182 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
183 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.52.2984
184 https://en.wikipedia.org/wiki/Doi_(identifier)
185 https://doi.org/10.1007%2F3-540-49257-7_6
186 https://en.wikipedia.org/wiki/Doi_(identifier)
187 https://doi.org/10.1016%2Fj.artint.2009.10.003

822
References

Principles of Programming Languages (POPL '11), SIGPLAN Notices, 46, pp. 283–
294, doi188 :10.1145/1926385.1926419189
31. O, J (2003), ”F -    -
  ”, Computer Aided Verification: 15th International Conference,
CAV 2003, Boulder, CO, USA, July 8-12, 2003, Proceedings, Lecture Notes in Com-
puter Science, 2725, pp. 80–92, CiteSeerX190 10.1.1.2.4843191 , doi192 :10.1007/978-3-
540-45069-6_7193 .

188 https://en.wikipedia.org/wiki/Doi_(identifier)
189 https://doi.org/10.1145%2F1926385.1926419
190 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
191 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.2.4843
192 https://en.wikipedia.org/wiki/Doi_(identifier)
193 https://doi.org/10.1007%2F978-3-540-45069-6_7

823
65 Cuthill–McKee algorithm

Figure 159 Cuthill-McKee ordering of a matrix


Figure 160 RCM ordering of the same matrix

In numerical linear algebra1 , the Cuthill–McKee algorithm (CM), named for Elizabeth
Cuthill and James[1] McKee,[2] is an algorithm2 to permute a sparse matrix3 that has a sym-
metric4 sparsity pattern into a band matrix5 form with a small bandwidth6 . The reverse
Cuthill–McKee algorithm (RCM) due to Alan George is the same algorithm but with

1 https://en.wikipedia.org/wiki/Numerical_linear_algebra
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Sparse_matrix
4 https://en.wikipedia.org/wiki/Symmetric_matrix
5 https://en.wikipedia.org/wiki/Band_matrix
6 https://en.wikipedia.org/wiki/Bandwidth_(matrix_theory)


the resulting index numbers reversed.[3] In practice this generally results in less fill-in7 than
the CM ordering when Gaussian elimination is applied.[4]
The Cuthill–McKee algorithm is a variant of the standard breadth-first search8 algorithm
used in graph algorithms. It starts with a peripheral node and then generates levels9 Ri for
i = 1, 2, . . . until all nodes are exhausted. The set Ri+1 is created from the set Ri by listing
all vertices adjacent to the nodes in Ri in order of increasing degree. This ordering of each
level is the only difference from the standard breadth-first search algorithm.

65.1 Algorithm

Given a symmetric n × n matrix we visualize the matrix as the adjacency matrix10 of a
graph11 . The Cuthill–McKee algorithm is then a relabeling of the vertices12 of the graph to
reduce the bandwidth of the adjacency matrix.
The algorithm produces an ordered n-tuple13 R of vertices which is the new order of the
vertices.
First we choose a peripheral vertex14 (the vertex with the lowest degree15 ) x and set
R := ({x}).
Then for i = 1, 2, . . . we iterate the following steps while |R| < n
• Construct the adjacency set Ai of Ri (with Ri the i-th component of R) and exclude the
vertices we already have in R
Ai := Adj(Ri ) \ R
• Sort Ai in ascending order of vertex degree16 .
• Append Ai to the result set R.
In other words, number the vertices according to a particular breadth-first traversal17 where
neighboring vertices are visited in order from lowest to highest vertex degree.
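The procedure above can be sketched in Python. This is only an illustrative implementation over an adjacency-dict graph, not the tuned routines in MATLAB, Boost, or SciPy; choosing a minimum-degree vertex as the start is a common stand-in for finding a true peripheral vertex.

```python
from collections import deque

def cuthill_mckee(adj):
    """Cuthill-McKee ordering of an undirected graph given as a dict
    mapping each vertex to the set of its neighbors."""
    degree = {v: len(adj[v]) for v in adj}
    order, visited = [], set()
    # Handle disconnected graphs by restarting from a new
    # minimum-degree vertex until every vertex is numbered.
    while len(order) < len(adj):
        start = min((v for v in adj if v not in visited), key=degree.get)
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # Unvisited neighbors (the set A_i with R excluded), taken in
            # ascending order of degree -- the one change from plain BFS.
            for w in sorted((u for u in adj[v] if u not in visited),
                            key=degree.get):
                visited.add(w)
                queue.append(w)
    return order

def reverse_cuthill_mckee(adj):
    """RCM: the same ordering with the resulting index numbers reversed."""
    return cuthill_mckee(adj)[::-1]
```

On a path graph this recovers a bandwidth-1 ordering, which is the best possible.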

65.2 See also


• Graph bandwidth18
• Sparse matrix19

7 https://en.wikipedia.org/wiki/Sparse_matrix#Reducing_fill-in
8 https://en.wikipedia.org/wiki/Breadth-first_search
9 https://en.wikipedia.org/wiki/Level_structure
10 https://en.wikipedia.org/wiki/Adjacency_matrix
11 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
12 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
13 https://en.wikipedia.org/wiki/N-tuple
14 https://en.wikipedia.org/wiki/Peripheral_vertex
15 https://en.wikipedia.org/wiki/Degree_(graph_theory)
16 https://en.wikipedia.org/wiki/Degree_(graph_theory)
17 https://en.wikipedia.org/wiki/Breadth-first_search
18 https://en.wikipedia.org/wiki/Graph_bandwidth
19 https://en.wikipedia.org/wiki/Sparse_matrix


65.3 References
1. Recommendations for ship hull surface representation20 , page 6
2. E. Cuthill and J. McKee. Reducing the bandwidth of sparse symmetric matrices21 . In
Proc. 24th Nat. Conf. ACM22 , pages 157–172, 1969.
3. 23
4. J. A. George and J. W-H. Liu, Computer Solution of Large Sparse Positive Definite
Systems, Prentice-Hall, 1981
• Cuthill–McKee documentation24 for the Boost C++ Libraries25 .
• A detailed description of the Cuthill–McKee algorithm26 .
• symrcm27 MATLAB's implementation of RCM.
• reverse_cuthill_mckee28 RCM routine from SciPy29 written in Cython30 .

20 http://calhoun.nps.edu/bitstream/handle/10945/30131/recommendationsf00fran.pdf
21 http://portal.acm.org/citation.cfm?id=805928
22 https://en.wikipedia.org/wiki/Association_for_Computing_Machinery
23 http://ciprian-zavoianu.blogspot.ch/2009/01/project-bandwidth-reduction.html
24 http://www.boost.org/doc/libs/1_37_0/libs/graph/doc/cuthill_mckee_ordering.html
25 https://en.wikipedia.org/wiki/Boost_C%2B%2B_Libraries
26 http://ciprian-zavoianu.blogspot.com/2009/01/project-bandwidth-reduction.html
27 http://www.mathworks.com/help/matlab/ref/symrcm.html
28 http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csgraph.reverse_cuthill_mckee.html
29 https://en.wikipedia.org/wiki/SciPy
30 https://en.wikipedia.org/wiki/Cython

66 D*

This article is about a search algorithm. For the digital voice and data protocol specification,
see D-STAR1 . For the physical quantity, see Specific detectivity2 . For the subatomic particle,
see hexaquark3 .


1 https://en.wikipedia.org/wiki/D-STAR
2 https://en.wikipedia.org/wiki/Specific_detectivity
3 https://en.wikipedia.org/wiki/Hexaquark


D* (pronounced ”D star”) is any one of the following three related incremental search algo-
rithms4 :
• The original D*,[1] by Anthony Stentz, is an informed incremental search algorithm.
• Focussed D*[2] is an informed incremental heuristic search algorithm by Anthony Stentz
that combines ideas of A*5[3] and the original D*. Focussed D* resulted from a further
development of the original D*.
• D* Lite[4] is an incremental heuristic search algorithm by Sven Koenig6 and Maxim
Likhachev that builds on LPA*7 ,[5] an incremental heuristic search algorithm that com-
bines ideas of A*8 and Dynamic SWSF-FP.[6]
All three search algorithms solve the same assumption-based path planning9 problems,
including planning with the freespace assumption,[7] where a robot has to navigate to given
goal coordinates in unknown terrain. It makes assumptions about the unknown part of the
terrain (for example: that it contains no obstacles) and finds a shortest path from its current
coordinates to the goal coordinates under these assumptions. The robot then follows the
path. When it observes new map information (such as previously unknown obstacles), it
adds the information to its map and, if necessary, replans a new shortest path from its
current coordinates to the given goal coordinates. It repeats the process until it reaches
the goal coordinates or determines that the goal coordinates cannot be reached. When
traversing unknown terrain, new obstacles may be discovered frequently, so this replanning
needs to be fast. Incremental (heuristic) search algorithms10 speed up searches for sequences
of similar search problems by using experience with the previous problems to speed up the

4 https://en.wikipedia.org/wiki/Incremental_heuristic_search
5 https://en.wikipedia.org/wiki/A*_search_algorithm
6 https://en.wikipedia.org/wiki/Sven_Koenig_(computer_scientist)
7 https://en.wikipedia.org/wiki/Lifelong_Planning_A*
8 https://en.wikipedia.org/wiki/A*_search_algorithm
9 https://en.wikipedia.org/wiki/Path_planning
10 https://en.wikipedia.org/wiki/Incremental_heuristic_search


search for the current one. Assuming the goal coordinates do not change, all three search
algorithms are more efficient than repeated A* searches.
D* and its variants have been widely used for mobile robot11 and autonomous vehicle12 nav-
igation13 . Current systems are typically based on D* Lite rather than the original D* or Fo-
cussed D*. In fact, even Stentz's lab uses D* Lite rather than D* in some implementations.[8]
Such navigation systems include a prototype system tested on the Mars rovers Opportu-
nity14 and Spirit15 and the navigation system of the winning entry in the DARPA Urban
Challenge16 , both developed at Carnegie Mellon University17 .
The original D* was introduced by Anthony Stentz in 1994. The name D* comes from the
term ”Dynamic A*”,[9] because the algorithm behaves like A* except that the arc costs can
change as the algorithm runs.

66.1 Operation

The basic operation of D* is outlined below.


Like Dijkstra's algorithm and A*, D* maintains a list of nodes to be evaluated, known as
the ”OPEN list”. Nodes are marked as having one of several states:
• NEW, meaning it has never been placed on the OPEN list
• OPEN, meaning it is currently on the OPEN list
• CLOSED, meaning it is no longer on the OPEN list
• RAISE, indicating its cost is higher than the last time it was on the OPEN list
• LOWER, indicating its cost is lower than the last time it was on the OPEN list

66.1.1 Expansion

The algorithm works by iteratively selecting a node from the OPEN list and evaluating it.
It then propagates the node's changes to all of the neighboring nodes and places them on
the OPEN list. This propagation process is termed ”expansion”. In contrast to canonical
A*, which follows the path from start to finish, D* begins by searching backwards from the
goal node. Each expanded node has a backpointer which refers to the next node leading to
the target, and each node knows the exact cost to the target. When the start node is the
next node to be expanded, the algorithm is done, and the path to the goal can be found by
simply following the backpointers.
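As a small illustration of the backpointer mechanism described above (the Node class and attribute names here are hypothetical, not taken from Stentz's papers), the final path extraction can be sketched as:

```python
class Node:
    """Minimal stand-in for a D* search node: a tag (NEW, OPEN, CLOSED,
    RAISE or LOWER), the exact cost to the target, and a backpointer to
    the next node on the way to the target (None at the goal itself)."""
    def __init__(self, name, cost=0.0, backpointer=None):
        self.name = name
        self.tag = "NEW"
        self.cost = cost
        self.backpointer = backpointer

def extract_path(start):
    """Once expansion has reached the start node, recover the path by
    walking the backpointer chain toward the goal."""
    path, node = [start], start.backpointer
    while node is not None:
        path.append(node)
        node = node.backpointer
    return [n.name for n in path]
```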

11 https://en.wikipedia.org/wiki/Mobile_robot
12 https://en.wikipedia.org/wiki/Autonomous_vehicle
13 https://en.wikipedia.org/wiki/Navigation_research
14 https://en.wikipedia.org/wiki/Opportunity_rover
15 https://en.wikipedia.org/wiki/Spirit_rover
16 https://en.wikipedia.org/wiki/DARPA_Urban_Challenge
17 https://en.wikipedia.org/wiki/Carnegie_Mellon_University


Figure 161 Expansion in progress. The finish node (yellow) is in the middle of the
top row of points, the start node is in the middle of the bottom row. Red indicates an
obstacle; black/blue indicates expanded nodes (brightness indicating cost). Green
indicates nodes which are being expanded.


Figure 162 Expansion finished. The path is indicated in cyan.

66.1.2 Obstacle handling

When an obstruction is detected along the intended path, all the points that are affected are
again placed on the OPEN list, this time marked RAISE. Before a RAISED node increases
in cost, however, the algorithm checks its neighbors and examines whether it can reduce the
node's cost. If not, the RAISE state is propagated to all of the node's descendants, that
is, nodes which have backpointers to it. These nodes are then evaluated, and the RAISE
state is passed on, forming a wave. When a RAISED node can be reduced, its backpointer
is updated, and it passes the LOWER state to its neighbors. These waves of RAISE and
LOWER states are the heart of D*.


In this way, a whole series of other points is prevented from ever being ”touched” by the
waves, so the algorithm only has to work on the points that are affected by the change
of cost.

Figure 163 An obstacle has been added (red) and nodes marked RAISE (yellow).


Figure 164 Expansion in progress. Yellow indicates nodes marked RAISE, green
indicates nodes marked LOWER.

66.1.3 Another deadlock occurs

This time, the deadlock cannot be bypassed so elegantly. None of the points in the channel
can find a new route to the destination via a neighbor, so they continue to propagate their
cost increase. Only points outside of the channel can lead to the destination via a viable
route. Two LOWER waves therefore develop from them, expanding across the points that
had been marked unreachable and supplying them with new route information.


Figure 165 Channel blocked by additional obstacles (red)


Figure 166 Expansion in progress (Raise wave in yellow, Lower wave in green)


Figure 167 New path found (cyan)

66.2 Pseudocode

while (!openList.isEmpty()) {
    point = openList.getFirst();
    expand(point);
}

66.2.1 Expand

void expand(currentPoint) {
    boolean isRaise = isRaise(currentPoint);
    double cost;
    for each (neighbor in currentPoint.getNeighbors()) {
        if (isRaise) {
            if (neighbor.nextPoint == currentPoint) {
                neighbor.setNextPointAndUpdateCost(currentPoint);
                openList.add(neighbor);
            } else {
                cost = neighbor.calculateCostVia(currentPoint);
                if (cost < neighbor.getCost()) {
                    currentPoint.setMinimumCostToCurrentCost();
                    openList.add(currentPoint);
                }
            }
        } else {
            cost = neighbor.calculateCostVia(currentPoint);
            if (cost < neighbor.getCost()) {
                neighbor.setNextPointAndUpdateCost(currentPoint);
                openList.add(neighbor);
            }
        }
    }
}

66.2.2 Check for raise

boolean isRaise(point) {
    double cost;
    if (point.getCurrentCost() > point.getMinimumCost()) {
        for each (neighbor in point.getNeighbors()) {
            cost = point.calculateCostVia(neighbor);
            if (cost < point.getCurrentCost()) {
                point.setNextPointAndUpdateCost(neighbor);
            }
        }
    }
    return point.getCurrentCost() > point.getMinimumCost();
}

66.3 Variants

66.3.1 Focussed D*

As its name suggests, Focussed D* is an extension of D* which uses a heuristic to focus
the propagation of RAISE and LOWER toward the robot. In this way, only the states that
matter are updated, in the same way that A* only computes costs for some of the nodes.

66.3.2 D* Lite

D* Lite is not based on the original D* or Focussed D*, but implements the same behavior.
It is simpler to understand and can be implemented in fewer lines of code, hence the name
”D* Lite”. Performance-wise, it is as good as or better than Focussed D*. D* Lite is based
on Lifelong Planning A*, which was introduced by Koenig and Likhachev a few years earlier.
D* Lite18

18 http://idm-lab.org/bib/abstracts/papers/aaai02b.pdf


66.4 Minimum cost versus current cost

For D*, it is important to distinguish between current and minimum costs. The former
matters only at the moment a point is taken off the OpenList, while the latter is critical
because the OpenList is sorted by it. The point returned by getFirst() is therefore always
the one with the lowest minimum cost on the OpenList.
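A minimal sketch of an OPEN list with this behavior, using Python's heapq module and mirroring the openList.add/getFirst interface from the pseudocode above (the exact representation is an assumption): each entry carries its minimum cost as the sort key, while the current cost is mere payload that is only read when the point is collected.

```python
import heapq

class OpenList:
    """OPEN list kept sorted by each point's *minimum* cost (its key
    value); the point's *current* cost plays no role in the ordering."""
    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker so points themselves never compare

    def add(self, point, minimum_cost):
        heapq.heappush(self._heap, (minimum_cost, self._count, point))
        self._count += 1

    def get_first(self):
        """Return the point with the lowest minimum cost."""
        return heapq.heappop(self._heap)[2]

    def is_empty(self):
        return not self._heap
```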

66.5 References
1. S, A (1994), ”O  E P P 
P-K E”, Proceedings of the International Conference
on Robotics and Automation: 3310–3317, CiteSeerX19 10.1.1.15.368320
2. S, A (1995), ”T F D* A  R-T R-
”, Proceedings of the International Joint Conference on Artificial Intelli-
gence: 1652–1659, CiteSeerX21 10.1.1.41.825722
3. H, P.; N, N.; R, B. (1968), ”A F B   H-
 D  M C P”, IEEE Trans. Syst. Science and
Cybernetics, SSC-4 (2): 100–107
4. K, S.; L, M. (2005), ”F R  N
 U T”, Transactions on Robotics, 21 (3): 354–363, Cite-
SeerX23 10.1.1.65.597924 , doi25 :10.1109/tro.2004.83802626
5. K, S.; L, M.; F, D. (2004), ”L P A*”, Arti-
ficial Intelligence, 155 (1–2): 93–146, doi27 :10.1016/j.artint.2003.12.00128
6. R, G.; R, T. (1996), ”A     -
   - ”, Journal of Algorithms, 21 (2): 267–
305, CiteSeerX29 10.1.1.3.792630 , doi31 :10.1006/jagm.1996.004632
7. K, S.; S, Y.; T, C. (2003), ”P B 
P  U T”, Artificial Intelligence, 147 (1–2): 253–279,
doi33 :10.1016/s0004-3702(03)00062-634
8. W, D. (2006). Graph-based Path Planning for Mobile Robots (Thesis). Geor-
gia Institute of Technology.

19 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
20 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.15.3683
21 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
22 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.8257
23 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
24 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.5979
25 https://en.wikipedia.org/wiki/Doi_(identifier)
26 https://doi.org/10.1109%2Ftro.2004.838026
27 https://en.wikipedia.org/wiki/Doi_(identifier)
28 https://doi.org/10.1016%2Fj.artint.2003.12.001
29 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
30 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.3.7926
31 https://en.wikipedia.org/wiki/Doi_(identifier)
32 https://doi.org/10.1006%2Fjagm.1996.0046
33 https://en.wikipedia.org/wiki/Doi_(identifier)
34 https://doi.org/10.1016%2Fs0004-3702%2803%2900062-6


9. S, A (1995), ”T F D* A  R-T R-


”, Proceedings of the International Joint Conference on Artificial Intelli-
gence: 1652–1659, CiteSeerX35 10.1.1.41.825736

66.6 External links


• Sven Koenig's web page37
• Anthony Stentz's web page38

35 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
36 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.8257
37 http://idm-lab.org/project-a.html
38 https://web.archive.org/web/20161120074915/http://www.frc.ri.cmu.edu/~axs/dynamic_plan.html

67 Depth-first search


Depth-first search
Order in which the nodes are visited
Class: Search algorithm
Data structure: Graph
Worst-case performance: O(|V | + |E|) for explicit graphs traversed without repetition;
O(b^d ) for implicit graphs with branching factor b searched to depth d
Worst-case space complexity: O(|V |) if entire graph is traversed without repetition;
O(longest path length searched) = O(bd) for implicit graphs without elimination of
duplicate nodes




Depth-first search (DFS) is an algorithm11 for traversing or searching tree12 or graph13
data structures. The algorithm starts at the root node14 (selecting some arbitrary node
as the root node in the case of a graph) and explores as far as possible along each branch
before backtracking15 .
A version of depth-first search was investigated in the 19th century by French mathematician
Charles Pierre Trémaux16[1] as a strategy for solving mazes17 .[2][3]

67.1 Properties

The time18 and space19 analysis of DFS differs according to its application area. In theo-
retical computer science, DFS is typically used to traverse an entire graph, and takes time
O(|V | + |E|),[4] linear in the size of the graph. In these applications it also uses space O(|V |)
in the worst case to store the stack of vertices on the current search path as well as the set
of already-visited vertices. Thus, in this setting, the time and space bounds are the same
as for breadth-first search20 and the choice of which of these two algorithms to use depends
less on their complexity and more on the different properties of the vertex orderings the
two algorithms produce.
For applications of DFS in relation to specific domains, such as searching for solutions in
artificial intelligence21 or web-crawling, the graph to be traversed is often either too large
to visit in its entirety or infinite (DFS may suffer from non-termination22 ). In such cases,
search is only performed to a limited depth23 ; due to limited resources, such as memory
or disk space, one typically does not use data structures to keep track of the set of all
previously visited vertices. When search is performed to a limited depth, the time is still
linear in terms of the number of expanded vertices and edges (although this number is
not the same as the size of the entire graph because some vertices may be searched more
than once and others not at all) but the space complexity of this variant of DFS is only
proportional to the depth limit, and as a result, is much smaller than the space needed for
searching to the same depth using breadth-first search. For such applications, DFS also
lends itself much better to heuristic24 methods for choosing a likely-looking branch. When
an appropriate depth limit is not known a priori, iterative deepening depth-first search25
applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence

11 https://en.wikipedia.org/wiki/Algorithm
12 https://en.wikipedia.org/wiki/Tree_data_structure
13 https://en.wikipedia.org/wiki/Graph_(data_structure)
14 https://en.wikipedia.org/wiki/Tree_(data_structure)#Terminology
15 https://en.wikipedia.org/wiki/Backtracking
16 https://en.wikipedia.org/wiki/Charles_Pierre_Tr%C3%A9maux
17 https://en.wikipedia.org/wiki/Maze_solving_algorithm
18 https://en.wikipedia.org/wiki/Time_complexity
19 https://en.wikipedia.org/wiki/Memory_management
20 https://en.wikipedia.org/wiki/Breadth-first_search
21 https://en.wikipedia.org/wiki/Artificial_intelligence
22 https://en.wikipedia.org/wiki/Halting_problem
23 https://en.wikipedia.org/wiki/Depth-limited_search
24 https://en.wikipedia.org/wiki/Heuristics
25 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search


mode of analysis, with a branching factor26 greater than one, iterative deepening increases
the running time by only a constant factor over the case in which the correct depth limit is
known due to the geometric growth of the number of nodes per level.
DFS may also be used to collect a sample27 of graph nodes. However, incomplete DFS,
similarly to incomplete BFS28 , is biased29 towards nodes of high degree30 .

67.2 Example

For the following graph:

Figure 169

a depth-first search starting at A, assuming that the left edges in the shown graph are chosen
before right edges, and assuming the search remembers previously visited nodes and will
not repeat them (since this is a small graph), will visit the nodes in the following order: A,
B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree31 , a structure with
important applications in graph theory32 . Performing the same search without remembering
previously visited nodes results in visiting nodes in the order A, B, D, F, E, A, B, D, F, E,
etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G.
Iterative deepening33 is one technique to avoid this infinite loop and would reach all nodes.
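This traversal can be reproduced with a short recursive implementation. Since the figure is not reproduced here, the adjacency lists below are an assumption reconstructed from the stated visiting order, with each vertex's neighbors listed left to right:

```python
def depth_first_search(adj, start):
    """Recursive DFS that remembers previously visited nodes and
    explores each node's neighbors in the given (left-to-right) order."""
    order, visited = [], set()

    def visit(v):
        visited.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                visit(w)

    visit(start)
    return order

# Assumed adjacency, reconstructed from the visiting order stated above.
example = {
    'A': ['B', 'C', 'E'],
    'B': ['A', 'D', 'F'],
    'C': ['A', 'G'],
    'D': ['B'],
    'E': ['A', 'F'],
    'F': ['B', 'E'],
    'G': ['C'],
}
```

With this graph, depth_first_search(example, 'A') visits the nodes in the order A, B, D, F, E, C, G; dropping the visited check reproduces the non-terminating A, B, D, F, E cycle described above.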

26 https://en.wikipedia.org/wiki/Branching_factor
27 https://en.wikipedia.org/wiki/Sample_(statistics)
28 https://en.wikipedia.org/wiki/Breadth-first_search#Bias_towards_nodes_of_high_degree
29 https://en.wikipedia.org/wiki/Bias
30 https://en.wikipedia.org/wiki/Degree_(graph_theory)
31 https://en.wikipedia.org/wiki/Tr%C3%A9maux_tree
32 https://en.wikipedia.org/wiki/Graph_theory
33 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search


67.3 Output of a depth-first search

Figure 170 The four types of edges defined by a spanning tree

A convenient description of a depth-first search of a graph is in terms of a spanning tree34
of the vertices reached during the search. Based on this spanning tree, the edges of the
original graph can be divided into three classes: forward edges, which point from a node
of the tree to one of its descendants, back edges, which point from a node to one of its
ancestors, and cross edges, which do neither. Sometimes tree edges, edges which belong
to the spanning tree itself, are classified separately from forward edges. If the original graph
is undirected then all of its edges are tree edges or back edges.
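The classification above can be computed from DFS discovery and finish times: an edge (u, v) of the original graph is a tree edge if the search followed it, a forward edge if v is a non-child descendant of u, a back edge if v is an ancestor of u, and a cross edge otherwise. A minimal Python sketch; the small directed graph at the bottom is an illustrative assumption, not taken from the text:

```python
def classify_edges(graph, root):
    """Classify directed edges as tree, forward, back, or cross edges
    using DFS discovery/finish times. Assumes every vertex of `graph`
    is reachable from `root`."""
    disc, fin = {}, {}
    clock = [0]
    tree = set()

    def visit(u):
        disc[u] = clock[0]; clock[0] += 1
        for v in graph.get(u, []):
            if v not in disc:
                tree.add((u, v))
                visit(v)
        fin[u] = clock[0]; clock[0] += 1

    visit(root)

    kinds = {}
    for u in graph:
        for v in graph.get(u, []):
            if (u, v) in tree:
                kinds[(u, v)] = "tree"
            elif disc[u] < disc[v] and fin[v] < fin[u]:
                kinds[(u, v)] = "forward"   # v is a non-child descendant of u
            elif disc[v] < disc[u] and fin[u] < fin[v]:
                kinds[(u, v)] = "back"      # v is an ancestor of u
            else:
                kinds[(u, v)] = "cross"
    return kinds

# Illustrative directed graph (an assumption): A -> B -> C, A -> C, B -> A.
g = {"A": ["B", "C"], "B": ["C", "A"], "C": []}
print(classify_edges(g, "A"))
# {('A', 'B'): 'tree', ('A', 'C'): 'forward', ('B', 'C'): 'tree', ('B', 'A'): 'back'}
```

The test relies on the fact that DFS discovery/finish intervals are either nested or disjoint, so descendant and ancestor relations can be read off by comparing the intervals.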

67.3.1 DFS ordering

An enumeration of the vertices of a graph is said to be a DFS ordering if it is a possible output of the application of DFS to this graph.
Let G = (V, E) be a graph with n vertices. For σ = (v1 , . . . , vm ) a list of distinct elements
of V and for v ∈ V \ {v1 , . . . , vm }, let νσ (v) be the greatest i such that vi is a neighbor of v, if
such an i exists, and 0 otherwise.
Let σ = (v1 , . . . , vn ) be an enumeration of the vertices of V . The enumeration σ is said to be a
DFS ordering (with source v1 ) if, for all 1 < i ≤ n, vi is the vertex w ∈ V \ {v1 , . . . , vi−1 } such
that ν(v1 ,...,vi−1 ) (w) is maximal. Recall that N (v) is the set of neighbors of v. Equivalently, σ

34 https://en.wikipedia.org/wiki/Spanning_tree_(mathematics)


is a DFS ordering if, for all 1 ≤ i < j < k ≤ n with vi ∈ N (vk ) \ N (vj ), there exists a neighbor
vm of vj such that i < m < j.

67.3.2 Vertex orderings

It is also possible to use depth-first search to linearly order the vertices of a graph or tree.
There are four possible ways of doing this:
• A preordering is a list of the vertices in the order that they were first visited by the
depth-first search algorithm. This is a compact and natural way of describing the progress
of the search, as was done earlier in this article. A preordering of an expression tree35 is
the expression in Polish notation36 .
• A postordering is a list of the vertices in the order that they were last visited by the
algorithm. A postordering of an expression tree is the expression in reverse Polish nota-
tion37 .
• A reverse preordering is the reverse of a preordering, i.e. a list of the vertices in the
opposite order of their first visit. Reverse preordering is not the same as postordering.
• A reverse postordering is the reverse of a postordering, i.e. a list of the vertices in the
opposite order of their last visit. Reverse postordering is not the same as preordering.
For binary trees there are additionally in-ordering and reverse in-ordering.
For example, when searching the directed graph below beginning at node A, the sequence
of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C
from A is up to the algorithm). Note that repeat visits in the form of backtracking to a
node, to check if it has still unvisited neighbors, are included here (even if it is found to
have none). Thus the possible preorderings are A B D C and A C D B, while the possible
postorderings are D B C A and D C B A, and the possible reverse postorderings are A C
B D and A B C D.
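These orderings can be produced by a single recursive DFS; a sketch in Python, using adjacency lists consistent with the traversals described above (A → B, A → C, B → D, C → D — the exact edge lists are a reconstruction, since only the traversal sequences are given):

```python
def dfs_orders(graph, start):
    """Return (preordering, postordering) of a DFS from start."""
    pre, post, visited = [], [], set()

    def visit(v):
        visited.add(v)
        pre.append(v)           # order of first visit -> preordering
        for w in graph[v]:
            if w not in visited:
                visit(w)
        post.append(v)          # order of last visit -> postordering

    visit(start)
    return pre, post

# Directed graph consistent with the traversals described above.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
pre, post = dfs_orders(graph, "A")
print(pre)          # ['A', 'B', 'D', 'C']   (a preordering)
print(post)         # ['D', 'B', 'C', 'A']   (a postordering)
print(post[::-1])   # ['A', 'C', 'B', 'D']   (a reverse postordering)
```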

35 https://en.wikipedia.org/wiki/Parse_tree
36 https://en.wikipedia.org/wiki/Polish_notation
37 https://en.wikipedia.org/wiki/Reverse_Polish_notation


Figure 171

Reverse postordering produces a topological sorting38 of any directed acyclic graph39 . This
ordering is also useful in control flow analysis40 as it often represents a natural linearization
of the control flows. The graph above might represent the flow of control in the code
fragment below, and it is natural to consider this code in the order A B C D or A C B D
but not natural to use the order A B D C or A C D B.
if (A) then {
B
} else {
C
}
D
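Applied to the control-flow graph of this fragment, the reverse postordering of a DFS yields one of the natural linearizations; a short Python sketch:

```python
def topological_sort(graph):
    """Topologically sort a DAG by taking the reverse postordering
    of a depth-first search over all vertices."""
    post, visited = [], set()

    def visit(v):
        visited.add(v)
        for w in graph.get(v, []):
            if w not in visited:
                visit(w)
        post.append(v)          # v finishes after all its descendants

    for v in graph:
        if v not in visited:
            visit(v)
    return post[::-1]           # reverse postordering

# Control-flow graph of the fragment: A branches to B and C, both reach D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(topological_sort(cfg))    # ['A', 'C', 'B', 'D']
```

The output A C B D is one of the two natural orders mentioned above; visiting C before B from A would yield A B C D instead.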

67.4 Pseudocode

Input: A graph G and a vertex v of G


Output: All vertices reachable from v labeled as discovered
A recursive implementation of DFS:[5]
procedure DFS(G, v) is
    label v as discovered
    for all directed edges from v to w that are in G.adjacentEdges(v) do

38 https://en.wikipedia.org/wiki/Topological_sorting
39 https://en.wikipedia.org/wiki/Directed_acyclic_graph
40 https://en.wikipedia.org/wiki/Control_flow_graph


        if vertex w is not labeled as discovered then
            recursively call DFS(G, w)

The order in which the vertices are discovered by this algorithm is called the lexicographic
order41 .
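The recursive pseudocode translates directly into Python. The adjacency lists below are one reconstruction consistent with the visit order A, B, D, F, E, C, G stated for the example graph (the original edge lists are not given in the text):

```python
def dfs(graph, v, discovered=None):
    """Recursive DFS following the pseudocode above; returns vertices
    in the (lexicographic) order in which they are discovered."""
    if discovered is None:
        discovered = []
    discovered.append(v)                # label v as discovered
    for w in graph[v]:
        if w not in discovered:         # w not yet labeled
            dfs(graph, w, discovered)
    return discovered

# Reconstructed adjacency lists for the example graph (an assumption).
graph = {
    "A": ["B", "C", "E"], "B": ["A", "D", "F"], "C": ["A", "G"],
    "D": ["B"], "E": ["A", "F"], "F": ["B", "E"], "G": ["C"],
}
print(dfs(graph, "A"))    # ['A', 'B', 'D', 'F', 'E', 'C', 'G']
```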
A non-recursive implementation of DFS with worst-case space complexity O(|E|):[6]
procedure DFS-iterative(G, v) is
    let S be a stack
    S.push(v)
    while S is not empty do
        v = S.pop()
        if v is not labeled as discovered then
            label v as discovered
            for all edges from v to w in G.adjacentEdges(v) do
                S.push(w)

These two variations of DFS visit the neighbors of each vertex in the opposite order from
each other: the first neighbor of v visited by the recursive variation is the first one in
the list of adjacent edges, while in the iterative variation the first visited neighbor is the
last one in the list of adjacent edges. The recursive implementation will visit the nodes
from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive
implementation will visit the nodes as: A, E, F, B, D, C, G.
The non-recursive implementation is similar to breadth-first search42 but differs from it in
two ways:
1. it uses a stack instead of a queue, and
2. it delays checking whether a vertex has been discovered until the vertex is popped
from the stack rather than making this check before adding the vertex.
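The iterative variant translates similarly; with adjacency lists reconstructed to match the visit orders given in the text (an assumption), it produces the reversed neighbor order described:

```python
def dfs_iterative(graph, v):
    """Iterative DFS with an explicit stack; the discovery check is
    delayed until a vertex is popped, as in the pseudocode above."""
    discovered, stack = [], [v]
    while stack:
        v = stack.pop()
        if v not in discovered:
            discovered.append(v)        # label v as discovered
            for w in graph[v]:
                stack.append(w)
    return discovered

# Reconstructed adjacency lists for the example graph (an assumption).
graph = {
    "A": ["B", "C", "E"], "B": ["A", "D", "F"], "C": ["A", "G"],
    "D": ["B"], "E": ["A", "F"], "F": ["B", "E"], "G": ["C"],
}
print(dfs_iterative(graph, "A"))    # ['A', 'E', 'F', 'B', 'D', 'C', 'G']
```

Because the stack reverses the push order, the first neighbor popped is the last one in the adjacency list, which is why E is visited immediately after A here.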

67.5 Applications

Play media43 44 : a randomized algorithm similar to depth-first search, used in generating a maze.
Algorithms that use depth-first search as a building block include:
• Finding connected components45 .
• Topological sorting46 .
• Finding 2-(edge or vertex)-connected components.
• Finding 3-(edge or vertex)-connected components.
• Finding the bridges47 of a graph.
• Generating words in order to plot the limit set48 of a group49 .

41 https://en.wikipedia.org/wiki/Lexicographical_order
42 https://en.wikipedia.org/wiki/Breadth-first_search
43 http://upload.wikimedia.org/wikipedia/commons/4/45/MAZE_30x20_DFS.ogv
44 https://en.wikipedia.org/wiki/File:MAZE_30x20_DFS.ogv
45 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
46 https://en.wikipedia.org/wiki/Topological_sorting
47 https://en.wikipedia.org/wiki/Bridge_(graph_theory)#Bridge-finding_algorithm
48 https://en.wikipedia.org/wiki/Limit_set
49 https://en.wikipedia.org/wiki/Group_(mathematics)


• Finding strongly connected components50 .
• Planarity testing51 .[7][8]
• Solving puzzles with only one solution, such as mazes52 . (DFS can be adapted to find all
solutions to a maze by only including nodes on the current path in the visited set.)
• Maze generation53 may use a randomized depth-first search.
• Finding biconnectivity in graphs54 .
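The first application above, finding connected components, can be sketched by running a DFS from every vertex that has not yet been visited; the small undirected graph at the bottom is illustrative:

```python
def connected_components(graph):
    """Find the connected components of an undirected graph by running
    an (iterative) DFS from every vertex not yet visited."""
    visited, components = set(), []
    for s in graph:
        if s in visited:
            continue
        stack, comp = [s], []
        while stack:
            v = stack.pop()
            if v not in visited:
                visited.add(v)
                comp.append(v)
                stack.extend(graph[v])
        components.append(sorted(comp))
    return components

# Illustrative undirected graph with three components (an assumption).
g = {"A": ["B"], "B": ["A"], "C": ["D"], "D": ["C"], "E": []}
print(connected_components(g))    # [['A', 'B'], ['C', 'D'], ['E']]
```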

67.6 Complexity

The computational complexity55 of DFS was investigated by John Reif56 . More precisely,
given a graph G, let O = (v1 , . . . , vn ) be the ordering computed by the standard recursive
DFS algorithm. This ordering is called the lexicographic depth-first search ordering. John
Reif considered the complexity of computing the lexicographic depth-first search ordering,
given a graph and a source. A decision version57 of the problem (testing whether some
vertex u occurs before some vertex v in this order) is P-complete58 ,[9] meaning that it is ”a
nightmare for parallel processing59 ”.[10]:189
A depth-first search ordering (not necessarily the lexicographic one) can be computed by
a randomized parallel algorithm in the complexity class RNC60 .[11] As of 1997, it remained
unknown whether a depth-first traversal could be constructed by a deterministic parallel
algorithm, in the complexity class NC61 .[12]

67.7 See also


• Tree traversal62 (for details about pre-order, in-order and post-order depth-first traversal)
• Breadth-first search63
• Iterative deepening depth-first search64
• Search games65

50 https://en.wikipedia.org/wiki/Strongly_connected_components
51 https://en.wikipedia.org/wiki/Planarity_testing
52 https://en.wikipedia.org/wiki/Maze
53 https://en.wikipedia.org/wiki/Maze_generation
54 https://en.wikipedia.org/wiki/Biconnected_graph
55 https://en.wikipedia.org/wiki/Analysis_of_algorithms
56 https://en.wikipedia.org/wiki/John_Reif
57 https://en.wikipedia.org/wiki/Decision_problem
58 https://en.wikipedia.org/wiki/P-complete
59 https://en.wikipedia.org/wiki/Parallel_algorithm
60 https://en.wikipedia.org/wiki/NC_(complexity)
61 https://en.wikipedia.org/wiki/NC_(complexity)
62 https://en.wikipedia.org/wiki/Tree_traversal
63 https://en.wikipedia.org/wiki/Breadth-first_search
64 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
65 https://en.wikipedia.org/wiki/Search_games


67.8 Notes
1. Charles Pierre Trémaux66 (1859–1882), École polytechnique of Paris (X:1876), French engineer of the telegraph; presented in a public conference, December 2, 2010, by professor Jean Pelletier-Thibert67 in Académie de Macon (Burgundy – France) – (abstract published in the academic annals, March 2011 – ISSN68 0980-603269 )
2. E, S70 (2011), Graph Algorithms71 (2 .), C U
P, . 46–48, ISBN72 978-0-521-73653-473 .
3. S, R (2002), Algorithms in C++: Graph Algorithms (3rd ed.),
Pearson Education, ISBN74 978-0-201-36118-675 .
4. Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. p.606
5. Goodrich and Tamassia; Cormen, Leiserson, Rivest, and Stein
6. Page 93, Algorithm Design, Kleinberg and Tardos
7. H, J76 ; T, R E.77 (1974), ”E  -
”78 (PDF), Journal of the Association for Computing Machinery79 , 21 (4): 549–
568, doi80 :10.1145/321850.32185281 .
8.  F, H.; O  M, P.82 ; R, P.83
(2006), ”T T  P”, International Journal of Foun-
dations of Computer Science, 17 (5): 1017–1030, arXiv84 :math/061093585 ,
doi86 :10.1142/S012905410600424887 .
9. R, J H. (1985). ”D-    ”. In-
formation Processing Letters. 20 (5). doi88 :10.1016/0020-0190(85)90024-989 .

66 https://en.wikipedia.org/wiki/Charles_Pierre_Tr%C3%A9maux
https://en.wikipedia.org/w/index.php?title=Jean_Pelletier-Thibert&action=edit&
67
redlink=1
68 https://en.wikipedia.org/wiki/ISSN_(identifier)
69 https://www.worldcat.org/search?fq=x0:jrnl&q=n2:0980-6032
70 https://en.wikipedia.org/wiki/Shimon_Even
71 https://books.google.com/books?id=m3QTSMYm5rkC&pg=PA46
72 https://en.wikipedia.org/wiki/ISBN_(identifier)
73 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-73653-4
74 https://en.wikipedia.org/wiki/ISBN_(identifier)
75 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-36118-6
76 https://en.wikipedia.org/wiki/John_Hopcroft
77 https://en.wikipedia.org/wiki/Robert_Tarjan
78 https://ecommons.cornell.edu/bitstream/1813/6011/1/73-165.pdf
79 https://en.wikipedia.org/wiki/Journal_of_the_ACM
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1145%2F321850.321852
82 https://en.wikipedia.org/wiki/Patrice_Ossona_de_Mendez
83 https://en.wikipedia.org/wiki/Pierre_Rosenstiehl
84 https://en.wikipedia.org/wiki/ArXiv_(identifier)
85 http://arxiv.org/abs/math/0610935
86 https://en.wikipedia.org/wiki/Doi_(identifier)
87 https://doi.org/10.1142%2FS0129054106004248
88 https://en.wikipedia.org/wiki/Doi_(identifier)
89 https://doi.org/10.1016%2F0020-0190%2885%2990024-9


10. M, K90 ; S, P91 (2008). Algorithms and Data Structures:
The Basic Toolbox92 (PDF). S.
11. A, A.; A, R. J. (1988), ”A  NC algorithm for depth first
search”, Combinatorica93 , 8 (1): 1–12, doi94 :10.1007/BF0212254895 , MR96 095198997 .
12. K, D R.98 ; M, R99 (1997), ”A NC algorithm for
minimum cuts”, SIAM Journal on Computing100 , 26 (1): 255–272, Cite-
SeerX 10.1.1.33.1701 , doi :10.1137/S0097539794273083 , MR105 1431256106 .
101 102 103 104

67.9 References
• Thomas H. Cormen107 , Charles E. Leiserson108 , Ronald L. Rivest109 , and Clifford Stein110 .
Introduction to Algorithms111 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN112 0-262-03293-7113 . Section 22.3: Depth-first search, pp. 540–549.
• G, M T.114 ; T, R115 (2001), Algorithm Design: Foun-
dations, Analysis, and Internet Examples, Wiley, ISBN116 0-471-38365-1117
• K, J118 ; T, É119 (2006), Algorithm Design, Addison Wesley,
pp. 92–94

90 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
91 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
92 http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/GraphTraversal.pdf
93 https://en.wikipedia.org/wiki/Combinatorica
94 https://en.wikipedia.org/wiki/Doi_(identifier)
95 https://doi.org/10.1007%2FBF02122548
96 https://en.wikipedia.org/wiki/MR_(identifier)
97 http://www.ams.org/mathscinet-getitem?mr=0951989
98 https://en.wikipedia.org/wiki/David_Karger
99 https://en.wikipedia.org/wiki/Rajeev_Motwani
100 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
101 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
102 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.1701
103 https://en.wikipedia.org/wiki/Doi_(identifier)
104 https://doi.org/10.1137%2FS0097539794273083
105 https://en.wikipedia.org/wiki/MR_(identifier)
106 http://www.ams.org/mathscinet-getitem?mr=1431256
107 https://en.wikipedia.org/wiki/Thomas_H._Cormen
108 https://en.wikipedia.org/wiki/Charles_E._Leiserson
109 https://en.wikipedia.org/wiki/Ronald_L._Rivest
110 https://en.wikipedia.org/wiki/Clifford_Stein
111 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
112 https://en.wikipedia.org/wiki/ISBN_(identifier)
113 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
114 https://en.wikipedia.org/wiki/Michael_T._Goodrich
115 https://en.wikipedia.org/wiki/Roberto_Tamassia
116 https://en.wikipedia.org/wiki/ISBN_(identifier)
117 https://en.wikipedia.org/wiki/Special:BookSources/0-471-38365-1
118 https://en.wikipedia.org/wiki/Jon_Kleinberg
119 https://en.wikipedia.org/wiki/%C3%89va_Tardos


• K, D E.120 (1997), The Art of Computer Programming Vol 1. 3rd ed121 ,
B: A-W, ISBN122 0-201-89683-4123 , OCLC124 155842391125

67.10 External links

Wikimedia Commons has media related to Depth-first search126 .

• Open Data Structures - Section 12.3.2 - Depth-First-Search127 , Pat Morin128


• C++ Boost Graph Library: Depth-First Search129
• Depth-First Search Animation (for a directed graph)130
• Depth First and Breadth First Search: Explanation and Code131
• QuickGraph132 , depth first search example for .Net
• Depth-first search algorithm illustrated explanation (Java and C++ implementations)133
• YAGSBPL – A template-based C++ library for graph search and planning134

120 https://en.wikipedia.org/wiki/Donald_Knuth
121 http://www-cs-faculty.stanford.edu/~knuth/taocp.html
122 https://en.wikipedia.org/wiki/ISBN_(identifier)
123 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89683-4
124 https://en.wikipedia.org/wiki/OCLC_(identifier)
125 http://www.worldcat.org/oclc/155842391
126 https://commons.wikimedia.org/wiki/Category:Depth-first_search
http://opendatastructures.org/versions/edition-0.1e/ods-java/12_3_Graph_Traversal.
127
html#SECTION001532000000000000000
128 https://en.wikipedia.org/wiki/Pat_Morin
129 http://www.boost.org/libs/graph/doc/depth_first_search.html
130 http://www.cs.duke.edu/csed/jawaa/DFSanim.html
131 http://www.kirupa.com/developer/actionscript/depth_breadth_search.htm
132 http://quickgraph.codeplex.com/Wiki/View.aspx?title=Depth%20First%20Search%20Example
133 http://www.algolist.net/Algorithms/Graph_algorithms/Undirected/Depth-first_search
134 https://code.google.com/p/yagsbpl/

68 Iterative deepening depth-first search


Iterative deepening depth-first search

Class: Search algorithm
Data structure: Tree, Graph
Worst-case performance: O(b^d ), where b is the branching factor and d is the depth of the shallowest solution
Worst-case space complexity: O(d)[1]:5




In computer science11 , iterative deepening search or more specifically iterative deepening depth-first search[2] (IDS or IDDFS) is a state space12 /graph search strategy in
which a depth-limited version of depth-first search13 is run repeatedly with increasing depth
limits until the goal is found. IDDFS is optimal like breadth-first search14 , but uses much
less memory; at each iteration, it visits the nodes15 in the search tree16 in the same order
as depth-first search, but the cumulative order in which nodes are first visited is effectively
breadth-first.

68.1 Algorithm for directed graphs

The following pseudocode shows IDDFS implemented in terms of a recursive depth-limited DFS (called DLS) for directed graphs17 . This implementation of IDDFS does not account
for already-visited nodes and therefore does not work for undirected graphs18 .
function IDDFS(root) is
    for depth from 0 to ∞ do
        found, remaining ← DLS(root, depth)
        if found ≠ null then
            return found
        else if not remaining then
            return null

function DLS(node, depth) is
    if depth = 0 then
        if node is a goal then
            return (node, true)
        else
            return (null, true)    (Not found, but may have children)
    else if depth > 0 then
        any_remaining ← false
        foreach child of node do
            found, remaining ← DLS(child, depth−1)
            if found ≠ null then
                return (found, true)
            if remaining then
                any_remaining ← true    (At least one node found at depth, let IDDFS deepen)
        return (null, any_remaining)

If the goal node is found, then DLS unwinds the recursion returning with no further iter-
ations. Otherwise, if at least one node exists at that level of depth, the remaining flag will
let IDDFS continue.

11 https://en.wikipedia.org/wiki/Computer_science
12 https://en.wikipedia.org/wiki/State_space_search
13 https://en.wikipedia.org/wiki/Depth-first_search
14 https://en.wikipedia.org/wiki/Breadth-first_search
15 https://en.wikipedia.org/wiki/Node_(computer_science)
16 https://en.wikipedia.org/wiki/Search_tree
17 https://en.wikipedia.org/wiki/Directed_graph
18 https://en.wikipedia.org/wiki/Undirected_graph


2-tuples19 are useful as return values to signal IDDFS to continue deepening or to stop, in case tree depth and goal membership are unknown a priori. Another solution could use sentinel values20 instead to represent not-found or remaining-level results.
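The pseudocode above translates almost line-for-line into Python; the small example tree at the bottom is an illustrative assumption:

```python
def iddfs(graph, root, is_goal):
    """Iterative deepening DFS following the IDDFS/DLS pseudocode above;
    visited nodes are not tracked, as in the directed-graph version."""
    depth = 0
    while True:
        found, remaining = dls(graph, root, depth, is_goal)
        if found is not None:
            return found
        if not remaining:
            return None                 # nothing deeper: no goal anywhere
        depth += 1

def dls(graph, node, depth, is_goal):
    """Depth-limited search; returns (found, remaining)."""
    if depth == 0:
        # Not found at this depth, but the node may still have children.
        return (node, True) if is_goal(node) else (None, True)
    any_remaining = False
    for child in graph.get(node, []):
        found, remaining = dls(graph, child, depth - 1, is_goal)
        if found is not None:
            return found, True
        if remaining:
            any_remaining = True        # let IDDFS deepen further
    return None, any_remaining

# Illustrative tree (an assumption): A has children B and C, and so on.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(iddfs(tree, "A", lambda n: n == "E"))    # E
print(iddfs(tree, "A", lambda n: n == "Z"))    # None
```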

68.2 Properties

IDDFS combines depth-first search's space-efficiency and breadth-first search's completeness (when the branching factor21 is finite). If a solution exists, it will find a solution path with the fewest arcs.[3]
Since iterative deepening visits states multiple times, it may seem wasteful, but it turns out not to be very costly, since in a tree most of the nodes are in the bottom level, so it does not matter much if the upper levels are visited multiple times.[4]
The main advantage of IDDFS in game tree22 searching is that the earlier searches tend
to improve the commonly used heuristics, such as the killer heuristic23 and alpha-beta
pruning24 , so that a more accurate estimate of the score of various nodes at the final depth
search can occur, and the search completes more quickly since it is done in a better order.
For example, alpha-beta pruning is most efficient if it searches the best moves first.[4]
A second advantage is the responsiveness of the algorithm. Because early iterations use
small values for d, they execute extremely quickly. This allows the algorithm to supply
early indications of the result almost immediately, followed by refinements as d increases.
When used in an interactive setting, such as in a chess25 -playing program, this facility
allows the program to play at any time with the current best move found in the search
it has completed so far. This can be phrased as each depth of the search corecursively26
producing a better approximation of the solution, though the work done at each step is
recursive. This is not possible with a traditional depth-first search, which does not produce
intermediate results.

68.3 Asymptotic analysis

68.3.1 Time complexity

The time complexity27 of IDDFS in a (well-balanced) tree works out to be the same as breadth-first search, i.e. O(b^d ),[1]:5 where b is the branching factor and d is the depth of the goal.

19 https://en.wikipedia.org/wiki/Tuple
20 https://en.wikipedia.org/wiki/Sentinel_value
21 https://en.wikipedia.org/wiki/Branching_factor
22 https://en.wikipedia.org/wiki/Game_tree
23 https://en.wikipedia.org/wiki/Killer_heuristic
24 https://en.wikipedia.org/wiki/Alpha-beta_pruning
25 https://en.wikipedia.org/wiki/Chess
26 https://en.wikipedia.org/wiki/Corecursive
27 https://en.wikipedia.org/wiki/Time_complexity


Proof

In an iterative deepening search, the nodes at depth d are expanded once, those at depth
d − 1 are expanded twice, and so on up to the root of the search tree, which is expanded
d + 1 times.[1]:5 So the total number of expansions in an iterative deepening search is

b^d + 2b^(d−1) + 3b^(d−2) + · · · + (d − 1)b^2 + db + (d + 1) = ∑_{i=0}^{d} (d + 1 − i)b^i

where b^d is the number of expansions at depth d, 2b^(d−1) is the number of expansions at depth d − 1, and so on. Factoring out b^d gives

b^d (1 + 2b^(−1) + 3b^(−2) + · · · + (d − 1)b^(2−d) + db^(1−d) + (d + 1)b^(−d))

Now let x = 1/b = b^(−1). Then we have

b^d (1 + 2x + 3x^2 + · · · + (d − 1)x^(d−2) + dx^(d−1) + (d + 1)x^d)
This is less than the infinite series

b^d (1 + 2x + 3x^2 + 4x^3 + · · · ) = b^d ∑_{n=1}^{∞} n x^(n−1)

which converges28 to

b^d (1 − x)^(−2) = b^d /(1 − x)^2 , for |x| < 1

That is, we have

b^d (1 + 2x + 3x^2 + · · · + (d − 1)x^(d−2) + dx^(d−1) + (d + 1)x^d ) ≤ b^d (1 − x)^(−2) , for |x| < 1

Since (1 − x)^(−2) = (1 − 1/b)^(−2) is a constant independent of d (the depth), if b > 1 (i.e., if the branching factor is greater than 1), the running time of the depth-first iterative deepening search is O(b^d ).

Example

For b = 10 and d = 5 the number of expansions is

∑_{i=0}^{5} (5 + 1 − i)10^i = 6 + 50 + 400 + 3000 + 20000 + 100000 = 123456

All together, an iterative deepening search from depth 1 all the way down to depth d expands
only about 11% more nodes than a single breadth-first or depth-limited search to depth d,
when b = 10.[5]
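The arithmetic above is easy to check directly:

```python
# Expansion count for iterative deepening with branching factor b = 10
# down to depth d = 5, per the sum derived above.
b, d = 10, 5
iterative_deepening = sum((d + 1 - i) * b**i for i in range(d + 1))
print(iterative_deepening)    # 123456

# A single breadth-first or depth-limited search to depth d expands
# 1 + b + ... + b^d nodes; the overhead of iterative deepening is ~11%.
single_search = sum(b**i for i in range(d + 1))
print(round(100 * (iterative_deepening - single_search) / single_search))    # 11
```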
The higher the branching factor, the lower the overhead of repeatedly expanded states,[1]:6
but even when the branching factor is 2, iterative deepening search only takes about twice

28 https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series


as long as a complete breadth-first search. This means that the time complexity of iterative deepening is still O(b^d ).

68.3.2 Space complexity

The space complexity29 of IDDFS is O(d),[1]:5 where d is the depth of the goal.

Proof

Since IDDFS, at any point, is engaged in a depth-first search, it need only store a stack of
nodes which represents the branch of the tree it is expanding. Since it finds a solution of
optimal length, the maximum depth of this stack is d, and hence the maximum amount of
space is O(d).
In general, iterative deepening is the preferred search method when there is a large search
space and the depth of the solution is not known.[4]

68.4 Example

For the following graph:

Figure 173

a depth-first search starting at A, assuming that the left edges in the shown graph are
chosen before right edges, and assuming the search remembers previously-visited nodes and
will not repeat them (since this is a small graph), will visit the nodes in the following order:
A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree30 , a structure
with important applications in graph theory31 .

29 https://en.wikipedia.org/wiki/Space_complexity
30 https://en.wikipedia.org/wiki/Tr%C3%A9maux_tree
31 https://en.wikipedia.org/wiki/Graph_theory


Performing the same search without remembering previously visited nodes results in visiting
nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E
cycle and never reaching C or G.
Iterative deepening prevents this loop and will reach the following nodes on the following
depths, assuming it proceeds left-to-right as above:
• 0: A
• 1: A, B, C, E
(Iterative deepening has now seen C, when a conventional depth-first search did not.)
• 2: A, B, D, F, C, G, E, F
(It still sees C, but later than before. It also sees E via a different path, and loops back to
F twice.)
• 3: A, B, D, F, E, C, G, E, F, B
For this graph, as more depth is added, the two cycles ”ABFE” and ”AEFB” will simply get
longer before the algorithm gives up and tries another branch.

68.5 Related algorithms

Similar to iterative deepening is a search strategy called iterative lengthening search32 that
works with increasing path-cost limits instead of depth-limits. It expands nodes in the order
of increasing path cost; therefore the first goal it encounters is the one with the cheapest
path cost. But iterative lengthening incurs substantial overhead that makes it less useful
than iterative deepening.[4]
Iterative deepening A*33 is a best-first search that performs iterative deepening based on
”f”-values similar to the ones computed in the A* algorithm34 .

68.5.1 Bidirectional IDDFS

IDDFS has a bidirectional counterpart,[1]:6 which alternates two searches: one starting from
the source node and moving along the directed arcs, and another one starting from the
target node and proceeding along the directed arcs in opposite direction (from the arc's
head node to the arc's tail node). The search process first checks that the source node
and the target node are same, and if so, returns the trivial path consisting of a single
source/target node. Otherwise, the forward search process expands the child nodes of the
source node (set A), the backward search process expands the parent nodes of the target
node (set B), and it is checked whether A and B intersect. If so, a shortest path is found.
Otherwise, the search depth is incremented and the same computation takes place.

https://en.wikipedia.org/w/index.php?title=Iterative_lengthening_search&action=edit&
32
redlink=1
33 https://en.wikipedia.org/wiki/Iterative_deepening_A*
34 https://en.wikipedia.org/wiki/A*_algorithm


One limitation of the algorithm is that the shortest path consisting of an odd number of arcs
will not be detected. Suppose we have a shortest path ⟨s, u, v, t⟩. When the depth reaches
two hops along the arcs, the forward search will proceed to v from u, and the backward
search will proceed from v to u. Pictorially, the search frontiers will go through each other,
and instead a suboptimal path consisting of an even number of arcs will be returned. This
is illustrated in the below diagrams:

Figure 174 Bidirectional IDDFS

As for space complexity, the algorithm colors the deepest nodes in the forward search process in order to detect the existence of the middle node where the two search processes meet.
An additional difficulty in applying bidirectional IDDFS is that if the source and the target nodes are in different strongly connected components, say s ∈ S, t ∈ T , and there is no arc leaving S and entering T , the search will never terminate.


Time and space complexities

The running time of bidirectional IDDFS is given by

2 ∑_{k=0}^{n/2} b^k

and the space complexity is given by

b^(n/2) ,

where n is the number of nodes in the shortest s, t-path. Since the running time complexity of iterative deepening depth-first search is ∑_{k=0}^{n} b^k , the speedup is roughly

∑_{k=0}^{n} b^k / (2 ∑_{k=0}^{n/2} b^k ) = ((1 − b^(n+1))/(1 − b)) / (2 (1 − b^(n/2+1))/(1 − b)) = (b^(n+1) − 1)/(2 (b^(n/2+1) − 1)) ≈ b^(n+1)/(2 b^(n/2+1)) = Θ(b^(n/2)) = Θ(√(b^n)).
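A quick numeric check of this estimate, with illustrative values b = 10 and n = 6 (the concrete values are an assumption):

```python
# Compare the node counts of unidirectional and bidirectional IDDFS
# for branching factor b = 10 and shortest-path length n = 6.
b, n = 10, 6
unidirectional = sum(b**k for k in range(n + 1))           # sum_{k=0}^{n} b^k
bidirectional = 2 * sum(b**k for k in range(n // 2 + 1))   # 2 sum_{k=0}^{n/2} b^k
speedup = unidirectional / bidirectional
print(round(speedup))       # 500
print(b**(n // 2) // 2)     # 500, matching the Theta(b^(n/2)) estimate
```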

Pseudocode

function Build-Path(s, µ, B) is
    π ← Find-Shortest-Path(s, µ)    (Recursively compute the path to the relay node)
    remove the last node from π
    return π ∘ B    (Append the backward search stack)

function Depth-Limited-Search-Forward(u, ∆, F) is
    if ∆ = 0 then
        F ← F ∪ {u}    (Mark the node)
        return
    foreach child of u do
        Depth-Limited-Search-Forward(child, ∆ − 1, F)

function Depth-Limited-Search-Backward(u, ∆, B, F) is
    prepend u to B
    if ∆ = 0 then
        if u in F then
            return u    (Reached the marked node, use it as a relay node)
        remove the head node of B
        return null
    foreach parent of u do
        µ ← Depth-Limited-Search-Backward(parent, ∆ − 1, B, F)
        if µ ≠ null then
            return µ
    remove the head node of B
    return null

function Find-Shortest-Path(s, t) is
    if s = t then
        return <s>
    F, B, ∆ ← ∅, ∅, 0
    forever do
        Depth-Limited-Search-Forward(s, ∆, F)
        foreach δ = ∆, ∆ + 1 do
            µ ← Depth-Limited-Search-Backward(t, δ, B, F)
            if µ ≠ null then
                return Build-Path(s, µ, B)    (Found a relay node)
            B ← ∅
        F, ∆ ← ∅, ∆ + 1

68.6 References
1. KORF, R E. (1985). ”D-  ”35 (PDF). Cite
journal requires |journal= (help36 )
2. K, R (1985). ”D- I-D: A O A-
 T S”. Artificial Intelligence. 27: 97–109. doi37 :10.1016/0004-
3702(85)90084-038 .
3. D P; A M. ”3.5.3 I D‣ Chapter 3
Searching for Solutions ‣Artificial Intelligence: Foundations of Computational Agents,
2nd Edition”39 . artint.info. Retrieved 29 November 2018.
4. R, S J.40 ; N, P41 (2003), Artificial Intelligence: A Mod-
ern Approach42 (2 .), U S R, N J: P H,
ISBN43 0-13-790395-244
5. R; N (1994). Artificial Intelligence: A Modern Approach.

35 https://cse.sc.edu/~mgv/csce580f09/gradPres/korf_IDAStar_1985.pdf
36 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
37 https://en.wikipedia.org/wiki/Doi_(identifier)
38 https://doi.org/10.1016%2F0004-3702%2885%2990084-0
39 https://artint.info/2e/html/ArtInt2e.Ch3.S5.SS3.html
40 https://en.wikipedia.org/wiki/Stuart_J._Russell
41 https://en.wikipedia.org/wiki/Peter_Norvig
42 http://aima.cs.berkeley.edu/
43 https://en.wikipedia.org/wiki/ISBN_(identifier)
44 https://en.wikipedia.org/wiki/Special:BookSources/0-13-790395-2

69 Dijkstra's algorithm

Not to be confused with Dykstra's projection algorithm1 .

Dijkstra's algorithm
Dijkstra's algorithm to find the shortest path between a and b. It picks the unvisited vertex with the lowest distance, calculates the distance through it to each unvisited neighbor, and updates the neighbor's distance if smaller. Mark visited (set to red) when done with neighbors.
Class: Search algorithm, Greedy algorithm, Dynamic programming[1]
Data structure: Graph
Worst-case performance: O(|E| + |V| log |V|)

1 https://en.wikipedia.org/wiki/Dykstra%27s_projection_algorithm




Dijkstra's algorithm (or Dijkstra's Shortest Path First algorithm, SPF algo-
rithm)[2] is an algorithm2 for finding the shortest paths3 between nodes4 in a graph5 ,
which may represent, for example, road networks6 . It was conceived by computer scientist7
Edsger W. Dijkstra8 in 1956 and published three years later.[3][4][5]
The algorithm exists in many variants. Dijkstra's original algorithm found the shortest path
between two given nodes,[5] but a more common variant fixes a single node as the ”source”
node and finds shortest paths from the source to all other nodes in the graph, producing a
shortest-path tree9 .
For a given source node in the graph, the algorithm finds the shortest path between that
node and every other.[6]:196–206 It can also be used for finding the shortest paths from a
single node to a single destination node by stopping the algorithm once the shortest path to
the destination node has been determined. For example, if the nodes of the graph represent
cities and edge path costs represent driving distances between pairs of cities connected by a
direct road (for simplicity, ignore red lights, stop signs, toll roads and other obstructions),
Dijkstra's algorithm can be used to find the shortest route between one city and all other
cities. A widely used application of shortest path algorithm is network routing protocols10 ,
most notably IS-IS11 (Intermediate System to Intermediate System) and Open Shortest
Path First (OSPF12 ). It is also employed as a subroutine13 in other algorithms such as
Johnson's14 .
Dijkstra's algorithm uses labels that are positive integers or real numbers, which are
totally ordered15 . It can be generalized to use any labels that are partially ordered16 ,
provided the subsequent labels (a subsequent label is produced when traversing an edge) are
monotonically17 non-decreasing. This generalization is called the generic Dijkstra shortest-
path algorithm.[7]
Dijkstra's algorithm uses a data structure for storing and querying partial solutions sorted by distance from the start. While the original algorithm uses a min-priority queue and runs in time O((|V| + |E|) log |V|) (where |V| is the number of nodes and |E| is the number of edges), it can also be implemented in O(|V|²) using an array. The idea of this algorithm is also given

2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Shortest_path_problem
4 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
5 https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
6 https://en.wikipedia.org/wiki/Road_network
7 https://en.wikipedia.org/wiki/Computer_scientist
8 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
9 https://en.wikipedia.org/wiki/Shortest-path_tree
10 https://en.wikipedia.org/wiki/Routing_protocol
11 https://en.wikipedia.org/wiki/IS-IS
12 https://en.wikipedia.org/wiki/OSPF
13 https://en.wikipedia.org/wiki/Subroutine
14 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
15 https://en.wikipedia.org/wiki/Total_order
16 https://en.wikipedia.org/wiki/Partially_ordered_set
17 https://en.wikipedia.org/wiki/Monotonic_function
18 https://en.wikipedia.org/wiki/Min-priority_queue
19 https://en.wikipedia.org/wiki/Time_complexity


in Leyzorek et al. 195720 . Fredman & Tarjan 198421 propose using a Fibonacci heap22 min-priority queue to optimize the running time complexity to O(|E| + |V| log |V|) (where |E| is the number of edges). This is asymptotically23 the fastest known single-source shortest-path
algorithm24 for arbitrary directed graphs25 with unbounded non-negative weights. However,
specialized cases (such as bounded/integer weights, directed acyclic graphs etc.) can indeed
be improved further as detailed in Specialized variants26 .
In some fields, artificial intelligence27 in particular, Dijkstra's algorithm or a variant of it is
known as uniform cost search and formulated as an instance of the more general idea of
best-first search28 .[8]

69.1 History

What is the shortest way to travel from Rotterdam29 to Groningen30 , in general: from
given city to given city. It is the algorithm for the shortest path31 , which I designed in
about twenty minutes. One morning I was shopping in Amsterdam32 with my young
fiancée, and tired, we sat down on the café terrace to drink a cup of coffee and I was
just thinking about whether I could do this, and I then designed the algorithm for the
shortest path. As I said, it was a twenty-minute invention. In fact, it was published in
'59, three years later. The publication is still readable, it is, in fact, quite nice. One of
the reasons that it is so nice was that I designed it without pencil and paper. I learned
later that one of the advantages of designing without pencil and paper is that you are
almost forced to avoid all avoidable complexities. Eventually that algorithm became, to
my great amazement, one of the cornerstones of my fame.

E D,     P L. F, C 
 ACM, 2001[4]
Dijkstra thought about the shortest path problem when working at the Mathematical Center
in Amsterdam33 in 1956 as a programmer to demonstrate the capabilities of a new computer
called ARMAC.[9] His objective was to choose both a problem and a solution (that would
be produced by computer) that non-computing people could understand. He designed
the shortest path algorithm and later implemented it for ARMAC for a slightly simplified
transportation map of 64 cities in the Netherlands (64, so that 6 bits would be sufficient

20 #CITEREFLeyzorekGrayJohnsonLadew1957
21 #CITEREFFredmanTarjan1984
22 https://en.wikipedia.org/wiki/Fibonacci_heap
23 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
24 https://en.wikipedia.org/wiki/Shortest_path_problem
25 https://en.wikipedia.org/wiki/Directed_graph
26 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm#Specialized_variants
27 https://en.wikipedia.org/wiki/Artificial_intelligence
28 https://en.wikipedia.org/wiki/Best-first_search
29 https://en.wikipedia.org/wiki/Rotterdam
30 https://en.wikipedia.org/wiki/Groningen
31 https://en.wikipedia.org/wiki/Shortest_path_problem
32 https://en.wikipedia.org/wiki/Amsterdam
33 https://en.wikipedia.org/wiki/Centrum_Wiskunde_%26_Informatica


to encode the city number).[4] A year later, he came across another problem from hardware
engineers working on the institute's next computer: minimize the amount of wire needed
to connect the pins on the back panel of the machine. As a solution, he re-discovered the
algorithm known as Prim's minimal spanning tree algorithm34 (known earlier to Jarník35 ,
and also rediscovered by Prim36 ).[10][11] Dijkstra published the algorithm in 1959, two years
after Prim and 29 years after Jarník.[12][13]

34 https://en.wikipedia.org/wiki/Prim%27s_algorithm
35 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_Jarn%C3%ADk
36 https://en.wikipedia.org/wiki/Robert_C._Prim


69.2 Algorithm

Figure 175 Illustration of Dijkstra's algorithm finding a path from a start node (lower
left, red) to a goal node (upper right, green) in a robot motion planning problem. Open
nodes represent the ”tentative” set (aka set of ”unvisited” nodes). Filled nodes are visited
ones, with color representing the distance: the greener, the closer. Nodes in all the
different directions are explored uniformly, appearing more-or-less as a circular wavefront
as Dijkstra's algorithm uses a heuristic identically equal to 0.

Let the node at which we are starting be called the initial node. Let the distance of
node Y be the distance from the initial node to Y. Dijkstra's algorithm will assign some
initial distance values and will try to improve them step by step.
1. Mark all nodes unvisited. Create a set of all the unvisited nodes called the unvisited
set.


2. Assign to every node a tentative distance value: set it to zero for our initial node and
to infinity for all other nodes. Set the initial node as current.[14]
3. For the current node, consider all of its unvisited neighbours and calculate their
tentative distances through the current node. Compare the newly calculated
tentative distance to the current assigned value and assign the smaller one. For ex-
ample, if the current node A is marked with a distance of 6, and the edge connecting
it with a neighbour B has length 2, then the distance to B through A will be 6 + 2
= 8. If B was previously marked with a distance greater than 8 then change it to 8.
Otherwise, the current value will be kept.
4. When we are done considering all of the unvisited neighbours of the current node,
mark the current node as visited and remove it from the unvisited set. A visited node
will never be checked again.
5. If the destination node has been marked visited (when planning a route between two
specific nodes) or if the smallest tentative distance among the nodes in the unvisited
set is infinity (when planning a complete traversal; occurs when there is no connection
between the initial node and remaining unvisited nodes), then stop. The algorithm
has finished.
6. Otherwise, select the unvisited node that is marked with the smallest tentative dis-
tance, set it as the new ”current node”, and go back to step 3.
When planning a route, it is actually not necessary to wait until the destination node
is ”visited” as above: the algorithm can stop once the destination node has the smallest
tentative distance among all ”unvisited” nodes (and thus could be selected as the next
”current”).
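The numbered steps above can be sketched directly in Python. This is an illustrative translation rather than code from the article: it assumes the graph is given as a dict mapping each node to a dict of neighbor → edge-length entries, and it uses a plain linear scan for the minimum (the simple O(|V|²) variant).

```python
import math

def dijkstra(graph, source):
    # graph: dict mapping each node to a dict of {neighbor: edge length}
    dist = {v: math.inf for v in graph}    # step 2: tentative distances
    dist[source] = 0
    unvisited = set(graph)                 # step 1: the unvisited set

    while unvisited:
        # step 6: the current node is the unvisited node with the
        # smallest tentative distance (found by a linear scan here)
        current = min(unvisited, key=lambda v: dist[v])
        if dist[current] == math.inf:
            break                          # step 5: remaining nodes are unreachable
        for neighbor, length in graph[current].items():
            if neighbor in unvisited:      # step 3: relax each unvisited neighbor
                dist[neighbor] = min(dist[neighbor], dist[current] + length)
        unvisited.remove(current)          # step 4: mark the current node visited
    return dist
```

For the single-pair variant, the loop could instead stop as soon as `current` is the destination, as noted above.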

69.3 Description

Note: For ease of understanding, this discussion uses the terms intersection, road and map – however, in formal terminology these terms are vertex, edge and graph, respectively.

Suppose you would like to find the shortest path between two intersections37 on a city
map: a starting point and a destination. Dijkstra's algorithm initially marks the distance
(from the starting point) to every other intersection on the map with infinity. This is done
not to imply that there is an infinite distance, but to note that those intersections have not
been visited yet. Some variants of this method leave the intersections' distances unlabeled.
Now select the current intersection at each iteration. For the first iteration, the current
intersection will be the starting point, and the distance to it (the intersection's label) will
be zero. For subsequent iterations (after the first), the current intersection will be a closest
unvisited intersection to the starting point (this will be easy to find).
From the current intersection, update the distance to every unvisited intersection that is
directly connected to it. This is done by determining the sum of the distance between an
unvisited intersection and the value of the current intersection and then relabeling38 the
unvisited intersection with this value (the sum) if it is less than the unvisited intersection's
current value. In effect, the intersection is relabeled if the path to it through the current

37 https://en.wikipedia.org/wiki/Intersection_(road)
38 https://en.wikipedia.org/wiki/Graph_labeling


intersection is shorter than the previously known paths. To facilitate shortest path identifi-
cation, in pencil, mark the road with an arrow pointing to the relabeled intersection if you
label/relabel it, and erase all others pointing to it. After you have updated the distances to
each neighboring intersection39 , mark the current intersection as visited and select an unvis-
ited intersection with minimal distance (from the starting point) – or the lowest label—as
the current intersection. Intersections marked as visited are labeled with the shortest path
from the starting point to it and will not be revisited or returned to.
Continue this process of updating the neighboring intersections with the shortest distances,
marking the current intersection as visited, and moving onto a closest unvisited intersection
until you have marked the destination as visited. Once you have marked the destination as
visited (as is the case with any visited intersection), you have determined the shortest path
to it from the starting point and can trace your way back following the arrows in reverse.
In the algorithm's implementations, this is usually done (after the algorithm has reached
the destination node) by following the nodes' parents from the destination node up to the
starting node; that's why we also keep track of each node's parent.
This algorithm makes no attempt of direct ”exploration” towards the destination as one
might expect. Rather, the sole consideration in determining the next ”current” intersection
is its distance from the starting point. This algorithm therefore expands outward from the
starting point, iteratively considering every node that is closer in terms of shortest path
distance until it reaches the destination. When understood in this way, it is clear how
the algorithm necessarily finds the shortest path. However, it may also reveal one of the
algorithm's weaknesses: its relative slowness in some topologies.

69.4 Pseudocode

In the following algorithm, the code u ← vertex in Q with min dist[u], searches for the
vertex u in the vertex set Q that has the least dist[u] value. length(u, v) returns the length
of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable
alt on line 18 is the length of the path from the root node to the neighbor node v if it were
to go through u. If this path is shorter than the current shortest path recorded for v, that
current path is replaced with this alt path. The prev array is populated with a pointer to
the ”next-hop” node on the source graph to get the shortest route to the source.

39 https://en.wikipedia.org/wiki/Neighbourhood_(graph_theory)


Figure 176 A demo of Dijkstra's algorithm based on Euclidean distance. Red lines are
the shortest path covering, i.e., connecting u and prev[u]. Blue lines indicate where
relaxing happens, i.e., connecting v with a node u in Q, which gives a shorter path from
the source to v.

1  function Dijkstra(Graph, source):
2
3      create vertex set Q
4
5      for each vertex v in Graph:
6          dist[v] ← INFINITY
7          prev[v] ← UNDEFINED
8          add v to Q
9
10     dist[source] ← 0
11
12     while Q is not empty:
13         u ← vertex in Q with min dist[u]
14
15         remove u from Q
16
17         for each neighbor v of u:    // only v that are still in Q
18             alt ← dist[u] + length(u, v)
19             if alt < dist[v]:
20                 dist[v] ← alt
21                 prev[v] ← u
22
23     return dist[], prev[]

If we are only interested in a shortest path between vertices source and target, we can
terminate the search after line 15 if u = target. Now we can read the shortest path from
source to target by reverse iteration:
1  S ← empty sequence
2  u ← target
3  if prev[u] is defined or u = source:    // Do something only if the vertex is reachable
4      while u is defined:                 // Construct the shortest path with a stack S
5          insert u at the beginning of S  // Push the vertex onto the stack
6          u ← prev[u]                     // Traverse from target to source

Now sequence S is the list of vertices constituting one of the shortest paths from source to
target, or the empty sequence if no path exists.
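The reverse iteration above can be sketched in Python, with the prev[] array represented as a dict filled in by the main loop (an illustrative choice of data structure, not prescribed by the article):

```python
def shortest_path(prev, source, target):
    # prev: dict where prev[v] is the predecessor of v on a shortest path
    S = []
    u = target
    if u in prev or u == source:   # proceed only if the vertex is reachable
        while u is not None:
            S.insert(0, u)         # push the vertex onto the front of the sequence
            u = prev.get(u)        # traverse from target back to source
    return S                       # empty list if no path exists
```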
A more general problem would be to find all the shortest paths between source and
target (there might be several different ones of the same length). Then instead of storing
only a single node in each entry of prev[] we would store all nodes satisfying the relaxation
condition. For example, if both r and source connect to target and both of them lie on
different shortest paths through target (because the edge cost is the same in both cases),
then we would add both r and source to prev[target]. When the algorithm completes, prev[]
data structure will actually describe a graph that is a subset of the original graph with
some edges removed. Its key property will be that if the algorithm was run with some
starting node, then every path from that node to any other node in the new graph will be
the shortest path between those nodes in the original graph, and all paths of that length
from the original graph will be present in the new graph. Then to actually find all these
shortest paths between two given nodes we would use a path finding algorithm on the new
graph, such as depth-first search40 .
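One way to sketch this enumeration, assuming prev[] has been extended to map each node to the list of all predecessors satisfying the relaxation condition (an illustrative structure, as described above), is a recursive backward walk over the prev[] graph:

```python
def all_shortest_paths(prev, source, target):
    # prev: dict mapping node -> list of all predecessors on some shortest path.
    # Walk the prev[] graph backwards from target, branching at every node
    # that has several shortest-path predecessors (a depth-first search).
    if target == source:
        return [[source]]
    paths = []
    for p in prev.get(target, []):
        for path in all_shortest_paths(prev, source, p):
            paths.append(path + [target])
    return paths
```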

69.4.1 Using a priority queue

A min-priority queue is an abstract data type that provides 3 basic operations: add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using such a data structure can lead to faster computing times than using a basic queue. Notably, Fibonacci heap41 (Fredman & Tarjan 198442 ) or Brodal queue43 offer optimal implementations for those 3 operations. As the algorithm is slightly different, we mention it here, in pseudo-code as well:

40 https://en.wikipedia.org/wiki/Depth-first_search
41 https://en.wikipedia.org/wiki/Fibonacci_heap
42 #CITEREFFredmanTarjan1984
43 https://en.wikipedia.org/wiki/Brodal_queue
1  function Dijkstra(Graph, source):
2      dist[source] ← 0                  // Initialization
3
4      create vertex priority queue Q
5
6      for each vertex v in Graph:
7          if v ≠ source
8              dist[v] ← INFINITY        // Unknown distance from source to v
9              prev[v] ← UNDEFINED       // Predecessor of v
10
11         Q.add_with_priority(v, dist[v])
12
13
14     while Q is not empty:             // The main loop
15         u ← Q.extract_min()           // Remove and return best vertex
16         for each neighbor v of u:     // only v that are still in Q
17             alt ← dist[u] + length(u, v)
18             if alt < dist[v]
19                 dist[v] ← alt
20                 prev[v] ← u
21                 Q.decrease_priority(v, alt)
22
23     return dist, prev

Instead of filling the priority queue with all nodes in the initialization phase, it is
also possible to initialize it to contain only source; then, inside the if alt < dist[v]
block, the node must be inserted if not already in the queue (instead of performing a
decrease_priority operation).[6]:198
Other data structures can be used to achieve even faster computing times in practice.[15]
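As an illustrative sketch of the source-only initialization just described: Python's heapq module provides add-with-priority and extract-min but no decrease_priority, so a common workaround (assumed here, not prescribed by the article) is to push a fresh entry on every improvement and skip stale entries when they are extracted:

```python
import heapq
import math

def dijkstra_heap(graph, source):
    # graph: dict mapping each node to a dict of {neighbor: edge length}
    dist = {v: math.inf for v in graph}
    prev = {}
    dist[source] = 0
    pq = [(0, source)]                  # queue initialized with the source only
    while pq:
        d, u = heapq.heappop(pq)        # extract_min
        if d > dist[u]:
            continue                    # stale entry: u was already settled
        for v, length in graph[u].items():
            alt = d + length
            if alt < dist[v]:           # relaxation step
                dist[v] = alt
                prev[v] = u
                heapq.heappush(pq, (alt, v))  # insert instead of decrease_priority
    return dist, prev
```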

69.5 Proof of correctness

Proof of Dijkstra's algorithm is constructed by induction on the number of visited nodes.


Invariant hypothesis: For each visited node v, dist[v] is considered the shortest distance
from source to v; and for each unvisited node u, dist[u] is assumed the shortest distance
when traveling via visited nodes only, from source to u. This assumption is only considered
if a path exists, otherwise the distance is set to infinity. (Note : we do not assume dist[u]
is the actual shortest distance for unvisited nodes)
The base case is when there is just one visited node, namely the initial node source, in
which case the hypothesis is trivial44 .
Otherwise, assume the hypothesis for n−1 visited nodes. In that case, we choose an edge vu where u has the least dist[u] of any unvisited node and the edge vu is such that dist[u] = dist[v] + length[v,u]. dist[u] is considered to be the shortest distance from source to u
because if there were a shorter path, and if w was the first unvisited node on that path then
by the original hypothesis dist[w] > dist[u] which creates a contradiction. Similarly if there

44 https://en.wikipedia.org/wiki/Triviality_(mathematics)


were a shorter path to u without using unvisited nodes, and if the last but one node on that
path were w, then we would have had dist[u] = dist[w] + length[w,u], also a contradiction.
After processing u it will still be true that for each unvisited node w, dist[w] will be the
shortest distance from source to w using visited nodes only, because if there were a shorter
path that doesn't go by u we would have found it previously, and if there were a shorter
path using u we would have updated it when processing u.

69.6 Running time

Bounds of the running time of Dijkstra's algorithm on a graph with edges E and vertices
V can be expressed as a function of the number of edges, denoted |E|, and the number
of vertices, denoted |V |, using big-O notation45 . The complexity bound depends mainly
on the data structure used to represent the set Q. In the following, upper bounds can be
simplified because |E| is O(|V|²) for any graph, but that simplification disregards the fact
that in some problems, other upper bounds on |E| may hold.
For any data structure for the vertex set Q, the running time is in
O(|E| · Tdk + |V | · Tem ),
where Tdk and Tem are the complexities of the decrease-key and extract-minimum operations
in Q, respectively. The simplest version of Dijkstra's algorithm stores the vertex set Q as
an ordinary linked list or array, and extract-minimum is simply a linear search through all
vertices in Q. In this case, the running time is O(|E| + |V|²) = O(|V|²).
If the graph is stored as an adjacency list, the running time for a dense graph (i.e., where |E| ∈ O(|V|²)) is
Θ(|V|² log |V|).
For sparse graphs46 , that is, graphs with far fewer than |V |2 edges, Dijkstra's algorithm
can be implemented more efficiently by storing the graph in the form of adjacency lists47
and using a self-balancing binary search tree48 , binary heap49 , pairing heap50 , or Fibonacci
heap51 as a priority queue52 to implement extracting minimum efficiently. To perform
decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data
structure that maps each vertex to its position in the heap, and to keep this structure up
to date as the priority queue Q changes. With a self-balancing binary search tree or binary
heap, the algorithm requires
Θ((|E| + |V |) log |V |)

45 https://en.wikipedia.org/wiki/Big-O_notation
46 https://en.wikipedia.org/wiki/Sparse_graph
47 https://en.wikipedia.org/wiki/Adjacency_list
48 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
49 https://en.wikipedia.org/wiki/Binary_heap
50 https://en.wikipedia.org/wiki/Pairing_heap
51 https://en.wikipedia.org/wiki/Fibonacci_heap
52 https://en.wikipedia.org/wiki/Priority_queue

876
Running time

time in the worst case (where log denotes the binary logarithm log2 ); for connected graphs this time bound can be simplified to Θ(|E| log |V|). The Fibonacci heap53 improves this to O(|E| + |V| log |V|).
When using binary heaps, the average case54 time complexity is lower than the worst-case:
assuming edge costs are drawn independently from a common probability distribution55 ,
the expected number of decrease-key operations is bounded by O(|V | log(|E|/|V |)), giving
a total running time of[6]:199–200
O(|E| + |V| log(|E|/|V|) log |V|).

69.6.1 Practical optimizations and infinite graphs

In common presentations of Dijkstra's algorithm, initially all nodes are entered into the
priority queue. This is, however, not necessary: the algorithm can start with a priority
queue that contains only one item, and insert new items as they are discovered (instead of
doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise
insert it).[6]:198 This variant has the same worst-case bounds as the common variant, but
maintains a smaller priority queue in practice, speeding up the queue operations.[8]
Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to
find the shortest path from a single source to the closest of a set of target nodes on infinite
graphs or those too large to represent in memory. The resulting algorithm is called uniform-
cost search (UCS) in the artificial intelligence literature[8][16][17] and can be expressed in
pseudocode as
procedure uniform_cost_search(Graph, start, goal) is
node ← start
cost ← 0
frontier ← priority queue containing node only
explored ← empty set
do
if frontier is empty then
return failure
node ← frontier.pop()
if node is goal then
return solution
explored.add(node)
for each of node's neighbors n do
if n is not in explored then
frontier.add(n)
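The pseudocode above elides the cost bookkeeping and the returned solution, so the following runnable sketch (an illustrative reconstruction, not the article's own code) tracks path costs explicitly in the frontier and returns the cost of the cheapest goal node, or None on failure:

```python
import heapq

def uniform_cost_search(graph, start, is_goal):
    # graph: dict mapping each node to a dict of {neighbor: edge cost}.
    # The frontier starts with only the start node; other nodes are added
    # as they are discovered, so the whole graph need not be held in memory.
    frontier = [(0, start)]          # priority queue of (cost, node)
    explored = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if is_goal(node):
            return cost              # goal test on expansion guarantees optimality
        if node in explored:
            continue                 # skip nodes that were already expanded
        explored.add(node)
        for n, c in graph.get(node, {}).items():
            if n not in explored:
                heapq.heappush(frontier, (cost + c, n))
    return None                      # failure: no goal node is reachable
```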

The complexity of this algorithm can be expressed in an alternative way for very large
graphs: when C* is the length of the shortest path from the start node to any node satisfying
the ”goal” predicate, each edge has cost at least ε, and the number of neighbors per node
is bounded by b, then the algorithm's worst-case time and space complexity are both in
O(b^(1+⌊C*/ε⌋)).[16]

53 https://en.wikipedia.org/wiki/Fibonacci_heap
54 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
55 https://en.wikipedia.org/wiki/Probability_distribution


Further optimizations of Dijkstra's algorithm for the single-target case include bidirec-
tional56 variants, goal-directed variants such as the A* algorithm57 (see § Related problems
and algorithms58 ), graph pruning to determine which nodes are likely to form the middle
segment of shortest paths (reach-based routing), and hierarchical decompositions of the in-
put graph that reduce s–t routing to connecting s and t to their respective ”transit nodes59 ”
followed by shortest-path computation between these transit nodes using a ”highway”.[18]
Combinations of such techniques may be needed for optimal practical performance on spe-
cific problems.[19]

69.6.2 Specialized variants

When arc weights are small integers (bounded by a parameter C), a monotone priority
queue60 can be used to speed up Dijkstra's algorithm. The first algorithm of this type was
Dial's algorithm, which used a bucket queue61 to obtain a running time O(|E| + diam(G))
that depends on the weighted diameter62 of a graph with integer edge weights (Dial
196963 ). The use of a Van Emde Boas tree64 as the priority queue brings the complexity to
O(|E| log log C) (Ahuja et al. 199065 ). Another interesting variant based on a combination

of a new radix heap66 and the well-known Fibonacci heap runs in time O(|E| + |V|√(log C))
(Ahuja et al. 199067 ). Finally, the best algorithms in this special case are as follows. The
algorithm given by (Thorup 200068 ) runs in O(|E| log log |V |) time and the algorithm given
by (Raman 199769 ) runs in O(|E| + |V| min{(log |V|)^(1/3+ε), (log C)^(1/4+ε)}) time.
Also, for directed acyclic graphs70 , it is possible to find shortest paths from a given starting
vertex in linear O(|E| + |V |) time, by processing the vertices in a topological order71 , and
calculating the path length for each vertex to be the minimum length obtained via any of
its incoming edges.[20][21]
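A Python sketch of the acyclic case (illustrative only; it relaxes each vertex's outgoing edges in topological order, which is equivalent to taking the minimum over each vertex's incoming edges as described):

```python
import math

def dag_shortest_paths(graph, source):
    # graph: dict mapping each node to a dict of {successor: edge weight};
    # the graph is assumed to be acyclic.
    order, seen = [], set()
    def visit(u):                 # depth-first post-order gives reverse topological order
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                visit(v)
        order.append(u)
    for u in graph:
        if u not in seen:
            visit(u)
    order.reverse()               # topological order

    dist = {v: math.inf for v in graph}
    dist[source] = 0
    for u in order:               # each vertex and edge is processed exactly once
        if dist[u] < math.inf:
            for v, w in graph[u].items():
                dist[v] = min(dist[v], dist[u] + w)
    return dist
```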
In the special case of integer weights and undirected connected graphs, Dijkstra's algorithm can be outperformed by a linear-time O(|E|) algorithm, given by (Thorup 199972 ).

56 https://en.wikipedia.org/wiki/Bidirectional_search
57 https://en.wikipedia.org/wiki/A*_algorithm
58 #Related_problems_and_algorithms
59 https://en.wikipedia.org/wiki/Transit_Node_Routing
60 https://en.wikipedia.org/wiki/Monotone_priority_queue
61 https://en.wikipedia.org/wiki/Bucket_queue
62 https://en.wikipedia.org/wiki/Distance_(graph_theory)
63 #CITEREFDial1969
64 https://en.wikipedia.org/wiki/Van_Emde_Boas_tree
65 #CITEREFAhujaMehlhornOrlinTarjan1990
66 https://en.wikipedia.org/wiki/Radix_heap
67 #CITEREFAhujaMehlhornOrlinTarjan1990
68 #CITEREFThorup2000
69 #CITEREFRaman1997
70 https://en.wikipedia.org/wiki/Directed_acyclic_graph
71 https://en.wikipedia.org/wiki/Topological_sorting
72 #CITEREFThorup1999


69.7 Related problems and algorithms

The functionality of Dijkstra's original algorithm can be extended with a variety of mod-
ifications. For example, sometimes it is desirable to present solutions which are less than
mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal
solution is first calculated. A single edge appearing in the optimal solution is removed from
the graph, and the optimum solution to this new graph is calculated. Each edge of the
original solution is suppressed in turn and a new shortest-path calculated. The secondary
solutions are then ranked and presented after the first optimal solution.
Dijkstra's algorithm is usually the working principle behind link-state routing protocols73 ,
OSPF74 and IS-IS75 being the most common ones.
Unlike Dijkstra's algorithm, the Bellman–Ford algorithm76 can be used on graphs with
negative edge weights, as long as the graph contains no negative cycle77 reachable from the
source vertex s. The presence of such cycles means there is no shortest path, since the total
weight becomes lower each time the cycle is traversed. (This statement assumes that a ”path”
is allowed to repeat vertices. In graph theory78 that is normally not allowed. In theoretical
computer science79 it often is allowed.) It is possible to adapt Dijkstra's algorithm to handle
negative weight edges by combining it with the Bellman–Ford algorithm (to remove negative edges and detect negative cycles); such an algorithm is called Johnson's algorithm80 .
The A* algorithm81 is a generalization of Dijkstra's algorithm that cuts down on the size
of the subgraph that must be explored, if additional information is available that provides
a lower bound on the ”distance” to the target. This approach can be viewed from the per-
spective of linear programming82 : there is a natural linear program for computing shortest
paths83 , and solutions to its dual linear program84 are feasible if and only if they form a
consistent heuristic85 (speaking roughly, since the sign conventions differ from place to place
in the literature). This feasible dual / consistent heuristic defines a non-negative reduced
cost86 and A* is essentially running Dijkstra's algorithm with these reduced costs. If the
dual satisfies the weaker condition of admissibility87 , then A* is instead more akin to the
Bellman–Ford algorithm.

73 https://en.wikipedia.org/wiki/Link-state_routing_protocol
74 https://en.wikipedia.org/wiki/OSPF
75 https://en.wikipedia.org/wiki/IS-IS
76 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
77 https://en.wikipedia.org/wiki/Negative_cycle
78 https://en.wikipedia.org/wiki/Graph_theory
79 https://en.wikipedia.org/wiki/Theoretical_computer_science
80 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
81 https://en.wikipedia.org/wiki/A-star_algorithm
82 https://en.wikipedia.org/wiki/Linear_programming
83 https://en.wikipedia.org/wiki/Shortest_path_problem#Linear_programming_formulation
84 https://en.wikipedia.org/wiki/Dual_linear_program
85 https://en.wikipedia.org/wiki/Consistent_heuristic
86 https://en.wikipedia.org/wiki/Reduced_cost
87 https://en.wikipedia.org/wiki/Admissible_heuristic


The process that underlies Dijkstra's algorithm is similar to the greedy88 process used in
Prim's algorithm89 . Prim's purpose is to find a minimum spanning tree90 that connects all
nodes in the graph; Dijkstra is concerned with only two nodes. Prim's does not evaluate
the total weight of the path from the starting node, only the individual edges.
Breadth-first search91 can be viewed as a special case of Dijkstra's algorithm on unweighted
graphs, where the priority queue degenerates into a FIFO queue.
The fast marching method92 can be viewed as a continuous version of Dijkstra's algorithm
which computes the geodesic distance on a triangle mesh.
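The BFS special case is easy to check directly. The sketch below (illustrative code, not from the article) runs BFS with a FIFO queue and Dijkstra with every edge weight fixed at 1; on unit weights the priority queue pops vertices in exactly FIFO order, so both compute the same distances.

```python
from collections import deque
import heapq

def bfs_distances(adj, source):
    """BFS: Dijkstra specialized to unit weights, where the
    priority queue degenerates into a FIFO queue."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def dijkstra_unit(adj, source):
    """Dijkstra with every edge weight equal to 1, for comparison."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            if d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                heapq.heappush(pq, (d + 1, v))
    return dist
```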

69.7.1 Dynamic programming perspective

From a dynamic programming93 point of view, Dijkstra's algorithm is a successive approximation
scheme that solves the dynamic programming functional equation for the shortest
path problem by the Reaching method.[22][23][24]
In fact, Dijkstra's explanation of the logic behind the algorithm,[25] namely
Problem 2. Find the path of minimum total length between two given nodes P and Q.
We use the fact that, if R is a node on the minimal path from P to Q, knowledge of the
latter implies the knowledge of the minimal path from P to R.
is a paraphrasing of Bellman's94 famous Principle of Optimality95 in the context of the
shortest path problem.

69.8 See also


• A* search algorithm96
• Bellman–Ford algorithm97
• Euclidean shortest path98
• Flood fill99
• Floyd–Warshall algorithm100
• Johnson's algorithm101
• Longest path problem102

88 https://en.wikipedia.org/wiki/Greedy_algorithm
89 https://en.wikipedia.org/wiki/Prim%27s_algorithm
90 https://en.wikipedia.org/wiki/Minimum_spanning_tree
91 https://en.wikipedia.org/wiki/Breadth-first_search
92 https://en.wikipedia.org/wiki/Fast_marching_method
93 https://en.wikipedia.org/wiki/Dynamic_programming
94 https://en.wikipedia.org/wiki/Richard_Bellman
95 https://en.wikipedia.org/wiki/Principle_of_Optimality
96 https://en.wikipedia.org/wiki/A*_search_algorithm
97 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
98 https://en.wikipedia.org/wiki/Euclidean_shortest_path
99 https://en.wikipedia.org/wiki/Flood_fill
100 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
101 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
102 https://en.wikipedia.org/wiki/Longest_path_problem


• Parallel all-pairs shortest path algorithm103

69.9 Notes
1. Controversial; see Sniedovich, M. (2006). ”Dijkstra's algorithm revisited: the dynamic
programming connexion”104 . Control and Cybernetics. 35: 599–620, and the dynamic
programming section below.
2. ”OSPF I SPF”105 . Cisco.
3. R, H. ”E W D”106 . A.M. Turing Award. Asso-
ciation for Computing Machinery. Retrieved 16 October 2017. At the Mathematical
Centre a major project was building the ARMAC computer. For its official inaugura-
tion in 1956, Dijkstra devised a program to solve a problem interesting to a nontech-
nical audience: Given a network of roads connecting cities, what is the shortest route
between two designated cities?
4. F, P (A 2010). ”A I  E W. D”.
Communications of the ACM. 53 (8): 41–47. doi107 :10.1145/1787234.1787249108 .
5. D, E. W.109 (1959). ”A      -
  ”110 (PDF). Numerische Mathematik. 1: 269–271.
111 112
doi :10.1007/BF01386390 .CS1 maint: ref=harv (link ) 113

6. M, K114 ; S, P115 (2008). ”C 10. S


P”116 (PDF). Algorithms and Data Structures: The Basic Toolbox. Springer.
doi117 :10.1007/978-3-540-77978-0118 . ISBN119 978-3-540-77977-3120 .
7. SŚ, I; J, A; WŹ-SŚ, BŻ
(2019). ”G D   ”. Journal of Optical
Communications and Networking. 11 (11): 568–577. arXiv121 :1810.04481122 .
doi123 :10.1364/JOCN.11.000568124 .

103 https://en.wikipedia.org/wiki/Parallel_all-pairs_shortest_path_algorithm
104 https://www.infona.pl/resource/bwmeta1.element.baztech-article-BAT5-0013-0005/tab/summary
105 https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/ospfispf.html
106 http://amturing.acm.org/award_winners/dijkstra_1053701.cfm
107 https://en.wikipedia.org/wiki/Doi_(identifier)
108 https://doi.org/10.1145%2F1787234.1787249
109 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
110 http://www-m3.ma.tum.de/twiki/pub/MN0506/WebHome/dijkstra.pdf
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1007%2FBF01386390
114 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
115 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
116 http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/ShortestPaths.pdf
117 https://en.wikipedia.org/wiki/Doi_(identifier)
118 https://doi.org/10.1007%2F978-3-540-77978-0
119 https://en.wikipedia.org/wiki/ISBN_(identifier)
120 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-77977-3
121 https://en.wikipedia.org/wiki/ArXiv_(identifier)
122 http://arxiv.org/abs/1810.04481
123 https://en.wikipedia.org/wiki/Doi_(identifier)
124 https://doi.org/10.1364%2FJOCN.11.000568


8. F, A (2011). Position Paper: Dijkstra's Algorithm versus Uniform Cost
Search or a Case Against Dijkstra's Algorithm125 . P. 4 I' S. 
C S. In a route-finding problem, Felner finds that the queue
can be a factor 500–600 smaller, taking some 40% of the running time.
9. ”ARMAC”126 . Unsung Heroes in Dutch Computing History. 2007. Archived from
the original127 on 13 November 2013.
10. D, E W., Reflections on ”A note on two problems in connexion with
graphs128 (PDF)
11. Tarjan, Robert Endre129 (1983), Data Structures and Network Algorithms,
CBMS-NSF Regional Conference Series in Applied Mathematics, 44, Society for
Industrial and Applied Mathematics, p. 75: The third classical minimum spanning
tree algorithm was discovered by Jarník and rediscovered by Prim and Dijkstra; it is
commonly known as Prim's algorithm.
12. P, R.C. (1957). ”S     -
” 130 (PDF). Bell System Technical Journal. 36 (6): 1389–1401.
Bibcode131 :1957BSTJ...36.1389P132 . doi133 :10.1002/j.1538-7305.1957.tb01515.x134 .
Archived from the original135 (PDF) on 18 July 2017. Retrieved 18 July 2017.
13. V. Jarník: O jistém problému minimálním [About a certain minimal problem], Práce
Moravské Přírodovědecké Společnosti, 6, 1930, pp. 57–63. (in Czech)
14. Gass, Saul; Fu, Michael (2013). ”Dijkstra's Algorithm”. Encyclopedia of
Operations Research and Management Science. Springer. 1. doi136 :10.1007/978-1-
4419-1153-7137 . ISBN138 978-1-4419-1137-7139 – via Springer Link.
15. C, M.; C, R. A.; R, V.; R, D. L.; T, L.
(2007). Priority Queues and Dijkstra's Algorithm – UTCS Technical Report TR-07-
54 – 12 October 2007140 (PDF). A, T: T U  T 
A, D  C S.

125 http://www.aaai.org/ocs/index.php/SOCS/SOCS11/paper/view/4017/4357
126 https://web.archive.org/web/20131113021126/http://www-set.win.tue.nl/UnsungHeroes/machines/armac.html
127 http://www-set.win.tue.nl/UnsungHeroes/machines/armac.html
128 https://www.cs.utexas.edu/users/EWD/ewd08xx/EWD841a.PDF
129 https://en.wikipedia.org/wiki/Robert_Endre_Tarjan
130 https://web.archive.org/web/20170718230207/http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/Prim1957.pdf
131 https://en.wikipedia.org/wiki/Bibcode_(identifier)
132 https://ui.adsabs.harvard.edu/abs/1957BSTJ...36.1389P
133 https://en.wikipedia.org/wiki/Doi_(identifier)
134 https://doi.org/10.1002%2Fj.1538-7305.1957.tb01515.x
135 http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/Prim1957.pdf
136 https://en.wikipedia.org/wiki/Doi_(identifier)
137 https://doi.org/10.1007%2F978-1-4419-1153-7
138 https://en.wikipedia.org/wiki/ISBN_(identifier)
139 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4419-1137-7
140 http://www.cs.sunysb.edu/~rezaul/papers/TR-07-54.pdf


16. Russell, Stuart141 ; Norvig, Peter142 (2009) [1995]. Artificial Intelligence: A
Modern Approach143 (3rd ed.). Prentice Hall. pp. 75, 81. ISBN144 978-0-13-
604259-4145 .
17. Sometimes also least-cost-first search:
Nau, Dana S. (1983). ”Expert computer systems”146 (PDF). Computer. IEEE.
16 (2): 63–85. doi147 :10.1109/mc.1983.1654302148 .
18. Wagner, Dorothea; Willhalm, Thomas (2007). Speed-up techniques for
shortest-path computations. STACS. pp. 23–36.
19. Bauer, Reinhard; Delling, Daniel; Sanders, Peter; Schieferdecker, Dennis;
Schultes, Dominik; Wagner, Dorothea (2010). ”Combining hierarchical
and goal-directed speed-up techniques for Dijkstra's algorithm”149 . J.
Experimental Algorithmics. 15: 2.1. doi150 :10.1145/1671970.1671976151 .
20. ”B G L: D A G S P –
1.44.0”152 . www.boost.org.
21. Cormen et al. 2001153 , p. 655
22. Sniedovich, M. (2006). ”Dijkstra's algorithm revisited: the dynamic program-
ming connexion”154 (PDF). Journal of Control and Cybernetics. 35 (3): 599–
620. Online version of the paper with interactive computational modules.155
23. Denardo, E.V. (2003). Dynamic Programming: Models and Applications. Mine-
ola, NY: Dover Publications156 . ISBN157 978-0-486-42810-9158 .
24. Sniedovich, M. (2010). Dynamic Programming: Foundations and Principles.
Francis & Taylor159 . ISBN160 978-0-8247-4099-3161 .
25. Dijkstra 1959162 , p. 270

141 https://en.wikipedia.org/wiki/Stuart_J._Russell
142 https://en.wikipedia.org/wiki/Peter_Norvig
143 https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
144 https://en.wikipedia.org/wiki/ISBN_(identifier)
145 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-604259-4
146 https://www.cs.umd.edu/~nau/papers/nau1983expert.pdf
147 https://en.wikipedia.org/wiki/Doi_(identifier)
148 https://doi.org/10.1109%2Fmc.1983.1654302
149 https://publikationen.bibliothek.kit.edu/1000014952
150 https://en.wikipedia.org/wiki/Doi_(identifier)
151 https://doi.org/10.1145%2F1671970.1671976
152 https://www.boost.org/doc/libs/1_44_0/libs/graph/doc/dag_shortest_paths.html
153 #CITEREFCormenLeisersonRivestStein2001
154 http://matwbn.icm.edu.pl/ksiazki/cc/cc35/cc3536.pdf
155 http://www.ifors.ms.unimelb.edu.au/tutorial/dijkstra_new/index.html
156 https://en.wikipedia.org/wiki/Dover_Publications
157 https://en.wikipedia.org/wiki/ISBN_(identifier)
158 https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-42810-9
159 https://en.wikipedia.org/w/index.php?title=Francis_%26_Taylor&action=edit&redlink=1
160 https://en.wikipedia.org/wiki/ISBN_(identifier)
161 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-4099-3
162 #CITEREFDijkstra1959


69.10 References
• C, T H.163 ; L, C E.164 ; R, R L.165 ; S,
C166 (2001). ”S 24.3: D' ”. Introduction to Algo-
rithms167 (S .). MIT P168  MG–H169 . . 595–601. ISBN170 0-
262-03293-7171 .CS1 maint: ref=harv (link172 )
• D, R B. (1969). ”A 360: S-  
  [H]”. Communications of the ACM173 . 12 (11): 632–633.
doi174 :10.1145/363269.363610175 .CS1 maint: ref=harv (link176 )
• F, M L177 ; T, R E.178 (1984). Fibonacci
heaps and their uses in improved network optimization algorithms. 25th An-
nual Symposium on Foundations of Computer Science. IEEE179 . pp. 338–346.
doi180 :10.1109/SFCS.1984.715934181 .CS1 maint: ref=harv (link182 )
• F, M L183 ; T, R E.184 (1987). ”F-
         -
”. Journal of the Association for Computing Machinery. 34 (3): 596–615.
doi185 :10.1145/28869.28874186 .CS1 maint: ref=harv (link187 )
• Z, F. B; N, C E. (F 1998). ”S P A-
: A E U R R N”188 . Transportation Science189 .
32 (1): 65–73. doi190 :10.1287/trsc.32.1.65191 .
• L, M.; G, R. S.; J, A. A.; L, W. C.; M, J., S. R.;
P, R. M.; S, R. N. (1957). Investigation of Model Techniques – First Annual

163 https://en.wikipedia.org/wiki/Thomas_H._Cormen
164 https://en.wikipedia.org/wiki/Charles_E._Leiserson
165 https://en.wikipedia.org/wiki/Ronald_L._Rivest
166 https://en.wikipedia.org/wiki/Clifford_Stein
167 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
168 https://en.wikipedia.org/wiki/MIT_Press
169 https://en.wikipedia.org/wiki/McGraw%E2%80%93Hill
170 https://en.wikipedia.org/wiki/ISBN_(identifier)
171 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
173 https://en.wikipedia.org/wiki/Communications_of_the_ACM
174 https://en.wikipedia.org/wiki/Doi_(identifier)
175 https://doi.org/10.1145%2F363269.363610
177 https://en.wikipedia.org/wiki/Michael_Fredman
178 https://en.wikipedia.org/wiki/Robert_Tarjan
179 https://en.wikipedia.org/wiki/IEEE
180 https://en.wikipedia.org/wiki/Doi_(identifier)
181 https://doi.org/10.1109%2FSFCS.1984.715934
183 https://en.wikipedia.org/wiki/Michael_Fredman
184 https://en.wikipedia.org/wiki/Robert_Tarjan
185 https://en.wikipedia.org/wiki/Doi_(identifier)
186 https://doi.org/10.1145%2F28869.28874
188 https://semanticscholar.org/paper/c71301816cfe1e0c7ed1a04fddd7740ceb2e8197
189 https://en.wikipedia.org/wiki/Transportation_Science
190 https://en.wikipedia.org/wiki/Doi_(identifier)
191 https://doi.org/10.1287%2Ftrsc.32.1.65


Report – 6 June 1956 – 1 July 1957 – A Study of Model Techniques for Communication
Systems. Cleveland, Ohio: Case Institute of Technology.
• K, D.E.193 (1977). ”A G  D' A”. Informa-
tion Processing Letters194 . 6 (1): 1–5. doi195 :10.1016/0020-0190(77)90002-3196 .
• A, R K.; M, K; O, J B.; T, R
E. (A 1990). ”F A   S P P”197
(PDF). Journal of the ACM. 37 (2): 213–223. doi198 :10.1145/77600.77615199 .
200 201
hdl :1721.1/47994 .CS1 maint: ref=harv (link ) 202

• R, R (1997). ”R    -  


”. SIGACT News. 28 (2): 81–87. doi203 :10.1145/261342.261352204 .CS1 maint:
ref=harv (link205 )
• T, M (2000). ”O RAM  Q”. SIAM Journal on Comput-
ing. 30 (1): 86–109. doi206 :10.1137/S0097539795288246207 .CS1 maint: ref=harv (link208 )
• T, M (1999). ”U -    -
     ”209 . Journal of the ACM. 46 (3): 362–394.
doi210 :10.1145/316542.316548211 .CS1 maint: ref=harv (link212 )

69.11 External links

Wikimedia Commons has media related to Dijkstra's algorithm213 .

• Oral history interview with Edsger W. Dijkstra214 , Charles Babbage Institute215 ,
University of Minnesota, Minneapolis.

193 https://en.wikipedia.org/wiki/Donald_Knuth
194 https://en.wikipedia.org/wiki/Information_Processing_Letters
195 https://en.wikipedia.org/wiki/Doi_(identifier)
196 https://doi.org/10.1016%2F0020-0190%2877%2990002-3
197 https://dspace.mit.edu/bitstream/1721.1/47994/1/fasteralgorithms00sloa.pdf
198 https://en.wikipedia.org/wiki/Doi_(identifier)
199 https://doi.org/10.1145%2F77600.77615
200 https://en.wikipedia.org/wiki/Hdl_(identifier)
201 http://hdl.handle.net/1721.1%2F47994
203 https://en.wikipedia.org/wiki/Doi_(identifier)
204 https://doi.org/10.1145%2F261342.261352
206 https://en.wikipedia.org/wiki/Doi_(identifier)
207 https://doi.org/10.1137%2FS0097539795288246
209 http://www.diku.dk/~mthorup/PAPERS/sssp.ps.gz
210 https://en.wikipedia.org/wiki/Doi_(identifier)
211 https://doi.org/10.1145%2F316542.316548
213 https://commons.wikimedia.org/wiki/Category:Dijkstra%27s_algorithm
214 http://purl.umn.edu/107247
215 https://en.wikipedia.org/wiki/Charles_Babbage_Institute


• Implementation of Dijkstra's algorithm using TDD216 , Robert Cecil Martin217 , The Clean
Code Blog
• Graphical explanation of Dijkstra's algorithm step-by-step on an example218 , Gilles
Bertrand219 , A step by step graphical explanation of Dijkstra's algorithm operations


216 http://blog.cleancoder.com/uncle-bob/2016/10/26/DijkstrasAlg.html
217 https://en.wikipedia.org/wiki/Robert_Cecil_Martin
218 http://www.gilles-bertrand.com/2014/03/disjkstra-algorithm-description-shortest-path-pseudo-code-data-structure-example-image.html
219 https://en.wikipedia.org/w/index.php?title=Gilles_Bertrand&action=edit&redlink=1

70 Dijkstra–Scholten algorithm

The Dijkstra–Scholten algorithm (named after Edsger W. Dijkstra1 and Carel S.
Scholten2 ) is an algorithm3 for detecting termination4 in a distributed system5 .[1][2] The
algorithm was proposed by Dijkstra and Scholten in 1980.[3]
First, consider the case of a simple process graph6 which is a tree7 . Tree-structured
distributed computations are not uncommon; such a process graph may arise when
the computation is strictly of divide-and-conquer8 type. A node9 starts the computation
and divides the problem into two (or more, usually a multiple of 2) roughly equal parts,
distributing those parts to other processors. This process continues recursively until the
problems are of sufficiently small size to solve on a single processor.

70.1 Algorithm

The Dijkstra–Scholten algorithm is a tree-based algorithm which can be described as
follows:
• The initiator of a computation is the root of the tree.
• Upon receiving a computational message:
• If the receiving process is currently not in the computation: the process joins the tree
by becoming a child of the sender of the message. (No acknowledgment message is sent
at this point.)
• If the receiving process is already in the computation: the process immediately sends
an acknowledgment message to the sender of the message.
• When a process has no more children and has become idle, the process detaches itself
from the tree by sending an acknowledgment message to its tree parent.
• Termination occurs when the initiator has no children and has become idle.
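The rules above can be sketched as a single-threaded simulation. This is an illustrative sketch only: the Node class, the per-node deficit counter (messages sent but not yet acknowledged), and the synchronous delivery in send are our own simplifications of the asynchronous message-passing model.

```python
class Node:
    """One process in the simulated distributed computation."""
    def __init__(self, name, is_root=False):
        self.name = name
        self.is_root = is_root       # the initiator of the computation
        self.active = is_root        # currently busy with local work?
        self.parent = None           # tree parent; None while detached
        self.deficit = 0             # messages sent but not yet acknowledged

def send(sender, receiver, events):
    """Deliver one computational message (delivery is instantaneous here)."""
    sender.deficit += 1
    receiver.active = True
    if receiver.parent is None and not receiver.is_root:
        receiver.parent = sender     # first message: join the tree, defer the ack
    else:
        _ack(sender, events)         # already in the tree: acknowledge at once

def finish(node, events):
    """The node's local computation has ended; it may now detach."""
    node.active = False
    _try_detach(node, events)

def _ack(node, events):
    node.deficit -= 1
    _try_detach(node, events)

def _try_detach(node, events):
    # A node detaches only when idle and all its messages are acknowledged.
    if node.active or node.deficit > 0:
        return
    if node.parent is not None:
        parent, node.parent = node.parent, None
        _ack(parent, events)         # the deferred acknowledgment to the parent
    elif node.is_root:
        events.append("terminated")  # the initiator detects global termination

# A tiny run: the initiator fans work out to a and b; a also messages b.
events = []
root, a, b = Node("root", is_root=True), Node("a"), Node("b")
send(root, a, events); send(root, b, events)
send(a, b, events)                   # b is already in the tree: immediate ack
finish(a, events)                    # a detaches, acknowledging root
finish(b, events)
finish(root, events)                 # root idle with no children: termination
```

Note how b's second incoming message is acknowledged immediately, while the first message from root is only acknowledged when b detaches; this is what keeps the tree consistent.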

1 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
2 https://en.wikipedia.org/wiki/Carel_S._Scholten
3 https://en.wikipedia.org/wiki/Algorithm
4 https://en.wikipedia.org/wiki/Termination_analysis
5 https://en.wikipedia.org/wiki/Distributed_system
6 https://en.wikipedia.org/wiki/Process_graph
7 https://en.wikipedia.org/wiki/Tree_(data_structure)
8 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
9 https://en.wikipedia.org/wiki/Node_(networking)


70.2 Dijkstra–Scholten algorithm for a tree


• For a tree, it is easy to detect termination. When a leaf process determines that it has
terminated, it sends a signal to its parent. In general, a process waits for all its children
to send signals and then it sends a signal to its parent.
• The program terminates when the root receives signals from all its children.

70.3 Dijkstra–Scholten algorithm for directed acyclic


graphs
• The algorithm for a tree can be extended to acyclic directed graphs. We add an additional
integer attribute Deficit to each edge.
• On an incoming edge, Deficit will denote the difference between the number of messages
received and the number of signals sent in reply.
• When a node wishes to terminate, it waits until it has received signals from outgoing
edges reducing their deficits to zero.
• Then it sends enough signals to ensure that the deficit is zero on each incoming edge.
• Since the graph is acyclic, some nodes will have no outgoing edges and these nodes will
be the first to terminate after sending enough signals to their incoming edges. After that
the nodes at higher levels will terminate level by level.

70.4 Dijkstra–Scholten algorithm for cyclic directed graphs


• If cycles are allowed, the previous algorithm does not work, because there may
not be any node with zero outgoing edges; potentially no node can terminate
without consulting other nodes.
• The Dijkstra–Scholten algorithm solves this problem by implicitly creating a spanning
tree10 of the graph. A spanning-tree is a tree which includes each node of the underlying
graph once and the edge-set is a subset of the original set of edges.
• The tree will be directed (i.e., the channels will be directed) with the source node (which
initiates the computation) as the root.
• The spanning-tree is created in the following way. A variable First_Edge is added to each
node. When a node receives a message for the first time, it initializes First_Edge with
the edge through which it received the message. First_Edge is never changed afterwards.
Note that, the spanning tree is not unique and it depends on the order of messages in the
system.
• Termination is handled by each node in three steps :
1. Send signals on all incoming edges except the first edge. (Each node will send signals
which reduces the deficit on each incoming edge to zero.)
2. Wait for signals from all outgoing edges. (The number of signals received on each
outgoing edge should reduce each of their deficits to zero.)
3. Send signals on First_Edge. (Once steps 1 and 2 are complete, a node informs its
parent in the spanning tree about its intention of terminating.)

10 https://en.wikipedia.org/wiki/Spanning_tree_(mathematics)


70.5 See also


• Huang's algorithm11

70.6 References
1. Ghosh, Sukumar (2010), ”9.3.1 The Dijkstra–Scholten Algorithm”,
Distributed Systems: An Algorithmic Approach12 , CRC Press, pp. 140–143,
ISBN13 978142001084814 .
2. Fokkink, Wan (2013), ”6.1 Dijkstra–Scholten algorithm”, Distributed Algo-
rithms: An Intuitive Approach15 , MIT Press, pp. 38–39, ISBN16 978026231895217 .
3. Dijkstra, Edsger W.; Scholten, C. S. (1980), ”Termination detection
for diffusing computations”18 (PDF), Information Processing Letters, 11 (1): 1–4,
doi19 :10.1016/0020-0190(80)90021-620 , MR21 058539422 .


11 https://en.wikipedia.org/wiki/Huang%27s_algorithm
12 https://books.google.com/books?id=aVjVzuav7cIC&pg=PA140
13 https://en.wikipedia.org/wiki/ISBN_(identifier)
14 https://en.wikipedia.org/wiki/Special:BookSources/9781420010848
15 https://books.google.com/books?id=QqNWAgAAQBAJ&pg=PA38
16 https://en.wikipedia.org/wiki/ISBN_(identifier)
17 https://en.wikipedia.org/wiki/Special:BookSources/9780262318952
18 http://www.cs.mcgill.ca/~lli22/575/termination3.pdf
19 https://en.wikipedia.org/wiki/Doi_(identifier)
20 https://doi.org/10.1016%2F0020-0190%2880%2990021-6
21 https://en.wikipedia.org/wiki/MR_(identifier)
22 http://www.ams.org/mathscinet-getitem?mr=0585394

71 Dinic's algorithm

Dinic's algorithm or Dinitz's algorithm is a strongly polynomial1 algorithm for computing
the maximum flow2 in a flow network3 , conceived in 1970 by Israeli (formerly Soviet)
computer scientist Yefim (Chaim) A. Dinitz.[1] The algorithm runs in O(V²E) time and is
similar to the Edmonds–Karp algorithm4 , which runs in O(VE²) time, in that it uses
shortest augmenting paths. The introduction of the concepts of the level graph and blocking
flow enables Dinic's algorithm to achieve its performance.

71.1 History

Yefim Dinitz invented this algorithm in response to a pre-class exercise in Adelson-Velsky5 's
algorithms class. At the time he was not aware of the basic facts regarding the Ford–
Fulkerson algorithm6 .[2]
Dinitz mentions inventing his algorithm in January 1969, which was published in 1970 in the
journal Doklady Akademii Nauk SSSR. In 1974, Shimon Even and (his then Ph.D. student)
Alon Itai at the Technion7 in Haifa were very curious and intrigued by Dinitz's algorithm
as well as Alexander V. Karzanov8 's related idea of blocking flow. However it was hard for
them to decipher these two papers, each being limited to four pages to meet the restrictions
of journal Doklady Akademii Nauk SSSR. Even did not give up, and after three days of effort
managed to understand both papers except for the layered network maintenance issue. Over
the next couple of years, Even gave lectures on ”Dinic's algorithm”, mispronouncing the
name of the author while popularizing it. Even and Itai also contributed to this algorithm
by combining BFS9 and DFS10 , which is the current version of the algorithm.[3]
For about 10 years of time after the Ford–Fulkerson algorithm was invented, it was unknown
if it could be made to terminate in polynomial time in the general case of irrational edge ca-
pacities. This caused a lack of any known polynomial-time algorithm to solve the max flow
problem in generic cases. Dinitz's algorithm and the Edmonds–Karp algorithm11 (published

1 https://en.wikipedia.org/wiki/Strongly_polynomial
2 https://en.wikipedia.org/wiki/Maximum_flow
3 https://en.wikipedia.org/wiki/Flow_network
4 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
5 https://en.wikipedia.org/wiki/Georgy_Adelson-Velsky
6 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
7 https://en.wikipedia.org/wiki/Technion_%E2%80%93_Israel_Institute_of_Technology
8 https://en.wikipedia.org/wiki/Alexander_V._Karzanov
9 https://en.wikipedia.org/wiki/Breadth-first_search
10 https://en.wikipedia.org/wiki/Depth-first_search
11 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm


in 1972) both independently showed that in the Ford–Fulkerson algorithm, if each augment-
ing path is the shortest one, then the length of the augmenting paths is non-decreasing and
the algorithm always terminates.

71.2 Definition

Let G = ((V, E), c, f, s, t) be a network with c(u, v) and f (u, v) the capacity and the flow of
the edge (u, v) respectively.
The residual capacity is a mapping cf : V × V → R+ defined as,
1. if (u, v) ∈ E,
cf (u, v) = c(u, v) − f (u, v)
2. cf (u, v) = 0 otherwise.
The residual graph is an unweighted graph Gf = ((V, Ef ), cf |Ef , s, t), where
Ef = {(u, v) ∈ V × V : cf (u, v) > 0}.
An augmenting path is an s − t path in the residual graph Gf .
Define dist(v) to be the length of the shortest path from s to v in Gf . Then the level
graph of Gf is the graph GL = ((V, EL ), cf |EL , s, t), where
EL = {(u, v) ∈ Ef : dist(v) = dist(u) + 1}.
A blocking flow is an s − t flow f such that the graph G′ = ((V, EL′ ), s, t) with
EL′ = {(u, v) : f (u, v) < cf |EL (u, v)} contains no s − t path.[4][5]

71.3 Algorithm

Dinic's Algorithm
Input: A network G = ((V, E), c, s, t).
Output: An s − t flow f of maximum value.
1. Set f (e) = 0 for each e ∈ E.
2. Construct GL from Gf of G. If dist(t) = ∞, stop and output f .
3. Find a blocking flow f ′ in GL .
4. Augment the flow f by f ′ and go back to step 2.
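The steps above can be sketched in Python. This is an illustrative implementation, not original code from the article: the class and method names are invented, BFS builds the level graph (step 2), and a DFS with current-arc pointers extracts a blocking flow (step 3). The test network used below is the commonly cited six-node example with maximum flow 19, which we assume corresponds to the (unreproduced) figures in the Example section.

```python
from collections import deque

class Dinic:
    """Illustrative Dinic's algorithm: BFS builds the level graph,
    DFS with current-arc pointers finds a blocking flow per phase."""

    def __init__(self, n):
        self.adj = [[] for _ in range(n)]   # per-vertex lists of edge indices
        self.edges = []                     # [to, residual cap]; edge i ^ 1 reverses i

    def add_edge(self, u, v, cap):
        self.adj[u].append(len(self.edges)); self.edges.append([v, cap])
        self.adj[v].append(len(self.edges)); self.edges.append([u, 0])

    def _bfs(self, s, t):
        # Step 2: construct the level graph G_L from the residual graph.
        self.level = [-1] * len(self.adj)
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for i in self.adj[u]:
                v, cap = self.edges[i]
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0           # is dist(t) finite?

    def _dfs(self, u, t, pushed):
        # Step 3: push flow along level-increasing edges (part of a blocking flow).
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            i = self.adj[u][self.it[u]]
            v, cap = self.edges[i]
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, cap))
                if d > 0:
                    self.edges[i][1] -= d       # use up residual capacity
                    self.edges[i ^ 1][1] += d   # open the reverse edge
                    return d
            self.it[u] += 1                     # this edge is exhausted for the phase
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):                  # one phase per level graph
            self.it = [0] * len(self.adj)       # current-arc pointers
            while True:
                pushed = self._dfs(s, t, float("inf"))
                if pushed == 0:
                    break
                flow += pushed                  # step 4: augment f by f'
        return flow
```

The current-arc pointers (`self.it`) are what keep each phase at O(VE): once an edge fails to admit flow it is never retried within the same level graph.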

71.4 Analysis

It can be shown that the number of layers in each blocking flow increases by at least 1 each
time and thus there are at most |V | − 1 blocking flows in the algorithm. For each of them:
• the level graph GL can be constructed by breadth-first search12 in O(E) time

12 https://en.wikipedia.org/wiki/Breadth-first_search


• a blocking flow in the level graph GL can be found in O(V E) time


with total running time O(E + V E) = O(V E) for each phase. As a consequence, the running
time of Dinic's algorithm is O(V²E).
Using a data structure called dynamic trees13 , the running time of finding a blocking flow in
each phase can be reduced to O(E log V ) and therefore the running time of Dinic's algorithm
can be improved to O(V E log V ).

71.4.1 Special cases

In networks with unit capacities, a much stronger time bound holds. Each blocking flow
can be found in O(E) time, and it can be shown that the number of phases does not exceed
O(√E) and O(V^(2/3)). Thus the algorithm runs in O(min{V^(2/3), E^(1/2)} · E) time.[6]
In networks that arise from the bipartite matching14 problem, the number of phases is
bounded by O(√V ), therefore leading to the O(√V · E) time bound. The resulting algorithm
is also known as Hopcroft–Karp algorithm15 . More generally, this bound holds for any unit
network — a network in which each vertex, except for source and sink, either has a single
entering edge of capacity one, or a single outgoing edge of capacity one, and all other
capacities are arbitrary integers.[5]

71.5 Example

The following is a simulation of Dinic's algorithm. In the level graph GL , the vertices with
labels in red are the values dist(v). The paths in blue form a blocking flow.

G Gf GL

1.

Figure 177 Figure 178 Figure 179


The blocking flow consists of
1. {s, 1, 3, t} with 4 units of flow,
2. {s, 1, 4, t} with 6 units of flow, and
3. {s, 2, 4, t} with 4 units of flow.
Therefore, the blocking flow is of 14 units and the value of flow |f | is 14. Note
that each augmenting path in the blocking flow has 3 edges.

13 https://en.wikipedia.org/wiki/Dynamic_trees
14 https://en.wikipedia.org/wiki/Bipartite_matching
15 https://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm


G Gf GL


2.

Figure 180 Figure 181 Figure 182


The blocking flow consists of
1. {s, 2, 4, 3, t} with 5 units of flow.
Therefore, the blocking flow is of 5 units and the value of flow |f | is 14 + 5 =
19. Note that each augmenting path has 4 edges.

3.

Figure 183 Figure 184 Figure 185


Since t cannot be reached in Gf , the algorithm terminates and returns a flow
with maximum value of 19. Note that in each blocking flow, the number of
edges in the augmenting path increases by at least 1.

71.6 See also


• Ford–Fulkerson algorithm16
• Maximum flow problem17

71.7 Notes
1. Y D18 (1970). ”A       
      ”19 (PDF). Doklady Akademii Nauk
SSSR. 11: 1277–1280.
2. I K; S A (2009-11-27). ”D'    
    ”20 .
3. Y D. ”D' A: T O V  E' V-
”21 (PDF). Cite journal requires |journal= (help22 )

16 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
17 https://en.wikipedia.org/wiki/Maximum_flow_problem
18 https://en.wikipedia.org/w/index.php?title=Yefim_Dinitz&action=edit&redlink=1
19 http://www.cs.bgu.ac.il/~dinitz/D70.pdf
20 http://www.powershow.com/view/c6619-OThkZ/Dinitzs_algorithm_for_finding_a_maximum_flow_in_a_network_powerpoint_ppt_presentation
21 http://www.cs.bgu.ac.il/~dinitz/Papers/Dinitz_alg.pdf


4. This means that the subgraph resulting from removing all saturated edges (i.e. all
edges (u, v) such that f (u, v) = cf |EL (u, v)) does not contain any path from s to t. In
other terms, the blocking flow is such that every possible path from s to t contains a
saturated edge.
5. Tarjan 198323 , p. 102.
6. E, S; T, R. E (1975). ”N F  T-
 G C”. SIAM Journal on Computing. 4 (4): 507–518.
doi24 :10.1137/020404325 . ISSN26 0097-539727 .

71.8 References
• Y D (2006). ”D' A: T O V  E'
V”28 (PDF). I O G29 ; A L. R; A L. S
(.). Theoretical Computer Science: Essays in Memory of Shimon Even30 . Springer.
pp. 218–240. ISBN31 978-3-540-32880-332 .
• T, R. E. (1983). Data structures and network algorithms.CS1 maint: ref=harv
(link33 )
• B. H. K; J V (2008). ”8.4 B F  F' A-
”. Combinatorial Optimization: Theory and Algorithms (Algorithms and Combi-
natorics, 21). Springer Berlin Heidelberg. pp. 174–176. ISBN34 978-3-540-71844-435 .

23 #CITEREFTarjan1983
24 https://en.wikipedia.org/wiki/Doi_(identifier)
25 https://doi.org/10.1137%2F0204043
26 https://en.wikipedia.org/wiki/ISSN_(identifier)
27 http://www.worldcat.org/issn/0097-5397
28 http://www.cs.bgu.ac.il/~dinitz/Papers/Dinitz_alg.pdf
29 https://en.wikipedia.org/wiki/Oded_Goldreich
30 https://en.wikipedia.org/wiki/Shimon_Even
31 https://en.wikipedia.org/wiki/ISBN_(identifier)
32 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-32880-3
33 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
34 https://en.wikipedia.org/wiki/ISBN_(identifier)
35 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-71844-4

72 Double pushout graph rewriting

In computer science1 , double pushout graph rewriting (or DPO graph rewriting) refers
to a mathematical framework for graph rewriting2 . It was introduced as one of the first
algebraic approaches to graph rewriting in the article ”Graph-grammars: An algebraic ap-
proach” (1973).[1] It has since been generalized to allow rewriting structures which are not
graphs, and to handle negative application conditions,[2] among other extensions.

72.1 Definition

A DPO graph transformation system (or graph grammar3 ) consists of a finite graph4 , which
is the starting state, and a finite or countable set of labeled spans5 in the category6 of finite
graphs and graph homomorphisms, which serve as derivation rules. The rule spans are
generally taken to be composed of monomorphisms7 , but the details can vary.[3]
Rewriting is performed in two steps: deletion and addition.
After a match from the left hand side to G is fixed, nodes and edges that are not in the
right hand side are deleted. The right hand side is then glued in.
Gluing graphs is in fact a pushout8 construction in the category9 of graphs, and the deletion
is the same as finding a pushout complement, hence the name.
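To make the two steps concrete, here is a minimal Python sketch on plain set-based graphs. It is not the full categorical construction: the rule span L ⊇ K ⊆ R is simplified to inclusions of node/edge sets, the helper name apply_dpo and the ("new", v) tagging of fresh items are inventions for illustration, and the gluing condition (no dangling edges) is assumed to hold.

```python
def apply_dpo(G_nodes, G_edges, L, K, R, m):
    """Apply one DPO-style rewrite step to the graph (G_nodes, G_edges).

    L, K, R are (node_set, edge_set) pairs with K included in both L and R
    (a simplified stand-in for the rule span); m maps L's nodes injectively
    into G. Edges are (source, target) pairs."""
    Ln, Le = L
    Kn, Ke = K
    Rn, Re = R
    # Deletion: remove the images of the nodes/edges that L has but K does not.
    nodes = set(G_nodes) - {m[v] for v in Ln - Kn}
    edges = set(G_edges) - {(m[u], m[v]) for (u, v) in Le - Ke}
    # Addition (gluing): create fresh copies of the items R adds beyond K.
    fresh = {v: ("new", v) for v in Rn - Kn}    # hypothetical naming scheme
    total = {**{v: m[v] for v in Kn}, **fresh}
    nodes |= set(fresh.values())
    edges |= {(total[u], total[v]) for (u, v) in Re - Ke}
    return nodes, edges

# Rule: delete node b (and the edge a->b), then attach a fresh node c to a.
L = ({"a", "b"}, {("a", "b")})
K = ({"a"}, set())
R = ({"a", "c"}, {("a", "c")})
new_nodes, new_edges = apply_dpo({1, 2, 3}, {(1, 2)}, L, K, R, {"a": 1, "b": 2})
```

In the real framework the deletion step is the pushout complement and the addition step is the pushout; the sketch only mirrors their effect on concrete node and edge sets.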

72.2 Uses

Double pushout graph rewriting allows the specification of graph transformations by spec-
ifying a pattern of fixed size and composition to be found and replaced, where part of
the pattern can be preserved. The application of a rule is potentially non-deterministic:
several distinct matches can be possible. These can be non-overlapping, or share only pre-
served items, thus showing a kind of concurrency10 known as parallel independence,[4] or

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Graph_rewriting
3 https://en.wikipedia.org/wiki/Graph_grammar
4 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
5 https://en.wikipedia.org/wiki/Span_(category_theory)
6 https://en.wikipedia.org/wiki/Category_(mathematics)
7 https://en.wikipedia.org/wiki/Monomorphism
8 https://en.wikipedia.org/wiki/Pushout_(category_theory)
9 https://en.wikipedia.org/wiki/Category_(mathematics)
10 https://en.wikipedia.org/wiki/Concurrency_(computer_science)


they may be incompatible, in which case either the applications can sometimes be executed
sequentially, or one can even preclude the other.
It can be used as a language for software design and programming (usually a variant working
on richer structures than graphs is chosen). Termination11 for DPO graph rewriting is
undecidable12 because the Post correspondence problem13 can be reduced to it.[5]
DPO graph rewriting can be viewed as a generalization of Petri nets14 .[4]

72.3 Generalization

Axioms have been sought to describe categories in which DPO rewriting will work. One
possibility is the notion of an adhesive category15 , which also enjoys many closure properties.
Related notions are HLR systems, quasi-adhesive categories, M-adhesive categories,
and adhesive HLR categories.[6]
The concepts of adhesive category16 and HLR system are related (an adhesive category with
coproducts17 is a HLR system[7] ).
Hypergraph18 , typed graph19 and attributed graph20 rewriting,[8] for example, can be han-
dled because they can be cast as adhesive HLR systems.

72.4 Notes
1. ”Graph-grammars: An algebraic approach”, Ehrig, Hartmut and Pfender, Michael and
Schneider, Hans-Jürgen, IEEE Conference Record of 14th Annual Symposium on
Switching and Automata Theory (SWAT 1973), pp. 167–180, 1973, IEEE
2. ”Constraints and application conditions: From graphs to high-level structures”, Ehrig,
Ehrig, Habel and Pennemann, Graph transformations, pp. 287–303, Springer
3. ”Double-pushout graph transformation revisited”, Habel, Annegret and Müller, Jürgen
and Plump, Detlef, Mathematical Structures in Computer Science, vol. 11, no. 05,
pp. 637–688, 2001, Cambridge University Press
4. ”Concurrent computing: from Petri nets to graph grammars”, Corradini, Andrea,
ENTCS, vol. 2, pp. 56–70, 1995, Elsevier
5. ”Termination of graph rewriting is undecidable”, Detlef Plump, Fundamenta
Informaticae, vol. 33, no. 2, pp. 201–209, 1998, IOS Press
6. Hartmut Ehrig and Annegret Habel and Julia Padberg and Ulrike Prange, ”Adhesive
high-level replacement categories and systems”, 2004, Springer

11 https://en.wikipedia.org/wiki/Halting_problem
12 https://en.wikipedia.org/wiki/Undecidable_problem
13 https://en.wikipedia.org/wiki/Post_correspondence_problem
14 https://en.wikipedia.org/wiki/Petri_nets
15 https://en.wikipedia.org/wiki/Adhesive_category
16 https://en.wikipedia.org/wiki/Adhesive_category
17 https://en.wikipedia.org/wiki/Coproduct
18 https://en.wikipedia.org/wiki/Hypergraph
19 https://en.wikipedia.org/w/index.php?title=Typed_graph&action=edit&redlink=1
20 https://en.wikipedia.org/wiki/Attributed_graph


7. ”Adhesive categories”, Stephen Lack and Paweł Sobociński, in Foundations of software
science and computation structures, pp. 273–288, Springer 2004
8. ”Fundamentals of Algebraic Graph Transformation”, Hartmut Ehrig, Karsten Ehrig,
Ulrike Prange and Gabriele Taentzer

73 Dulmage–Mendelsohn decomposition

In graph theory1 , the Dulmage–Mendelsohn decomposition is a partition of the vertices
of a bipartite graph2 into subsets, with the property that two adjacent vertices belong
to the same subset if and only if they are paired with each other in a perfect matching3 of
the graph. It is named after A. L. Dulmage and Nathan Mendelsohn4 , who published it
in 1958. A generalization to any graph is the Edmonds–Gallai decomposition5 , using the
Blossom algorithm6 .

73.1 The coarse decomposition

Let G = (X+Y,E) be a bipartite graph, and let D be the set of vertices in G that are not
matched in at least one maximum matching7 of G. Then D is necessarily an independent
set8 , so G can be partitioned into three parts:
• The vertices in D ∩ X and their neighbors;
• The vertices in D ∩ Y and their neighbors;
• The remaining vertices.
Every maximum matching in G consists of matchings in the first and second part that
match all neighbors of D, together with a perfect matching9 of the remaining vertices.

73.1.1 Alternative coarse decomposition

An alternative definition of the coarse decomposition is presented in [1] (it is attributed to
[2] who in turn attribute it to [3] ).

Let G be a bipartite graph, M a maximum matching in G, and V0 the set of vertices of
G unmatched by M (the ”free vertices”). Then G can be partitioned into three parts:

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Bipartite_graph
3 https://en.wikipedia.org/wiki/Perfect_matching
4 https://en.wikipedia.org/wiki/Nathan_Mendelsohn
5 https://en.wikipedia.org/wiki/Gallai%E2%80%93Edmonds_decomposition
6 https://en.wikipedia.org/wiki/Blossom_algorithm
7 https://en.wikipedia.org/wiki/Maximum_matching
8 https://en.wikipedia.org/wiki/Independent_set_(graph_theory)
9 https://en.wikipedia.org/wiki/Perfect_matching


Figure 186 The E-O-U decomposition

• E - the even vertices - the vertices reachable from V0 by an M-alternating path of even
length.
• O - the odd vertices - the vertices reachable from V0 by an M-alternating path of odd
length.
• U - the unreachable vertices - the vertices unreachable from V0 by an M-alternating path.
An illustration is shown on the left. The bold lines are the edges of M. The weak lines are
other edges of G. The red dots are the vertices unmatched by M.
Based on this decomposition, the edges in G can be partitioned into six parts according
to their endpoints: E-U, E-E, O-O, O-U, E-O, U-U. This decomposition has the following
properties: [2]
1. The sets E, O, U are pairwise-disjoint.
2. The sets E, O, U do not depend on the maximum-matching M (i.e., any maximum-
matching defines exactly the same decomposition).
3. G contains only O-O, O-U, E-O and U-U edges.
4. Any maximum-matching in G contains only E-O and U-U edges.
5. Any maximum-matching in G saturates all vertices in O and all vertices in U.
6. The size of a maximum-matching in G is |O| + |U| / 2.
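The E/O/U sets can be computed by an alternating breadth-first search from the free vertices. The following is a rough Python sketch of that definition; the function name and graph representation are made up for illustration, and `match` is assumed to map every matched vertex to its partner (in both directions).

```python
from collections import deque

def eou_decomposition(adj, match, vertices):
    """Compute the sets E (even), O (odd), U (unreachable) of the coarse
    decomposition by alternating BFS from the free vertices."""
    free = [v for v in vertices if v not in match]
    even, odd = set(free), set()   # a free vertex is reached by the empty (even) path
    queue = deque(free)
    while queue:
        v = queue.popleft()        # v is in E
        for w in adj[v]:
            if w == match.get(v):  # must leave an even vertex by an unmatched edge
                continue
            if w in odd:
                continue
            odd.add(w)             # reached along an unmatched edge: odd length
            u = match.get(w)
            if u is not None and u not in even:
                even.add(u)        # continue along w's matched edge: even length
                queue.append(u)
    return even, odd, set(vertices) - even - odd

adj = {"x1": ["y1"], "x2": ["y1"], "y1": ["x1", "x2"]}
match = {"x1": "y1", "y1": "x1"}   # a maximum matching; x2 is free
E, O, U = eou_decomposition(adj, match, ["x1", "x2", "y1"])
```

On this small graph the result is E = {x1, x2}, O = {y1}, U = ∅, and property 6 holds: |O| + |U|/2 = 1, the size of the maximum matching.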

73.2 The fine decomposition

The third set of vertices in the coarse decomposition (or all vertices in a graph with a perfect
matching) may additionally be partitioned into subsets by the following steps:
• Find a perfect matching of G.
• Form a directed graph10 H whose vertices are the matched edges in G. For each unmatched
edge (x,y) in G, add a directed edge in H from the matched edge of x to the matched
edge of y.
• Find the strongly connected components11 of the resulting graph.
• For each component of H, form a subset of the Dulmage–Mendelsohn decomposition
consisting of the vertices in G that are endpoints of edges in the component.
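The steps above can be sketched in Python, assuming the perfect matching is already given as input (the helper names are hypothetical; strongly connected components are found with Kosaraju's algorithm):

```python
from collections import defaultdict

def fine_decomposition(edges, matching):
    """Sketch of the fine decomposition for a graph with a perfect matching.

    edges is a list of (x, y) pairs with x in X, y in Y; matching is a dict
    x -> y covering every vertex. Returns one vertex subset per strongly
    connected component of the auxiliary digraph H."""
    partner = {y: x for x, y in matching.items()}
    # Vertices of H are the matched edges, identified by their X-endpoint; each
    # unmatched edge (x, y) yields: matched edge of x -> matched edge of y.
    H = defaultdict(set)
    for x, y in edges:
        if matching[x] != y:
            H[x].add(partner[y])
    nodes = list(matching)

    def dfs(graph, v, seen, out):
        seen.add(v)
        for w in graph[v]:
            if w not in seen:
                dfs(graph, w, seen, out)
        out.append(v)

    # Kosaraju's algorithm: order by finish time, then DFS on the transpose.
    order, seen = [], set()
    for v in nodes:
        if v not in seen:
            dfs(H, v, seen, order)
    Ht = defaultdict(set)
    for v in list(H):
        for w in H[v]:
            Ht[w].add(v)
    seen, components = set(), []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(Ht, v, seen, comp)
            # a matched edge contributes both of its endpoints to the subset
            components.append({u for x in comp for u in (x, matching[x])})
    return components

edges = [("x1", "y1"), ("x2", "y2"), ("x1", "y2"), ("x2", "y1")]
parts = fine_decomposition(edges, {"x1": "y1", "x2": "y2"})
```

Here the unmatched edges (x1, y2) and (x2, y1) put both matched edges in one strongly connected component, so all four vertices end up in a single subset.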

10 https://en.wikipedia.org/wiki/Directed_graph
11 https://en.wikipedia.org/wiki/Strongly_connected_component


To see that this subdivision into subsets characterizes the edges that belong to perfect
matchings, suppose that two vertices x and y in G belong to the same subset of the decom-
position, but are not already matched by the initial perfect matching. Then there exists a
strongly connected component in H containing edge x,y. This edge must belong to a sim-
ple cycle12 in H (by the definition of strong connectivity) which necessarily corresponds to
an alternating cycle in G (a cycle whose edges alternate between matched and unmatched
edges). This alternating cycle may be used to modify the initial perfect matching to produce
a new matching containing edge x,y.
An edge x,y of the graph G belongs to all perfect matchings of G, if and only if x and y are
the only members of their set in the decomposition. Such an edge exists if and only if the
matching preclusion13 number of the graph is one.

73.3 Core

As another component of the Dulmage–Mendelsohn decomposition, Dulmage and
Mendelsohn defined the core of a graph to be the union of its maximum matchings.[4] However,
this concept should be distinguished from the core14 in the sense of graph homomorphisms,
and from the k-core15 formed by the removal of low-degree vertices.

73.4 Applications

This decomposition has been used to partition meshes in finite element analysis16 , and
to determine specified, underspecified and overspecified equations in systems of nonlinear
equations.

73.5 References
1. (PDF) ://..../~//-
. . Missing or empty |title= (help18 )
17

2. I, R W.; K, T; M, K; M, D-


; P, K E. (2006-10-01). ”R- ”. ACM
Transactions on Algorithms. 2 (4): 602–610. doi19 :10.1145/1198513.119852020 .
3. P, W.R. (1995). ”M  E”. Handbook of Com-
binatorics. Amsterdam, North-Holland: Elsevier Science. pp. 179–232.

12 https://en.wikipedia.org/wiki/Simple_cycle
13 https://en.wikipedia.org/wiki/Matching_preclusion
14 https://en.wikipedia.org/wiki/Core_(graph_theory)
15 https://en.wikipedia.org/wiki/K-core
16 https://en.wikipedia.org/wiki/Finite_element_analysis
17 http://www.cse.iitm.ac.in/~meghana/matchings/bip-decomp.pdf
18 https://en.wikipedia.org/wiki/Help:CS1_errors#citation_missing_title
19 https://en.wikipedia.org/wiki/Doi_(identifier)
20 https://doi.org/10.1145%2F1198513.1198520


4. H, F21 ; P, M D.22 (1967), ”O    


”, Proceedings of the London Mathematical Society, Third Series, 17: 305–314,
doi23 :10.1112/plms/s3-17.2.30524 , MR25 020918426 .
• D, A. L.27 & M, N. S.28 (1958). ”C  
”. Can. J. Math. 10: 517–534. doi29 :10.4153/cjm-1958-052-030 . The original
Dulmage–Mendelsohn paper

73.6 External links


• A good explanation of its application to systems of nonlinear equations is available in
this paper: [1]31
• An open source implementation of the algorithm is available as a part of the sparse-matrix
library: SPOOLES32
• Graph-theoretical aspects of constraint solving in the SST project: [2]33

21 https://en.wikipedia.org/wiki/Frank_Harary
22 https://en.wikipedia.org/wiki/Michael_D._Plummer
23 https://en.wikipedia.org/wiki/Doi_(identifier)
24 https://doi.org/10.1112%2Fplms%2Fs3-17.2.305
25 https://en.wikipedia.org/wiki/MR_(identifier)
26 http://www.ams.org/mathscinet-getitem?mr=0209184
27 https://en.wikipedia.org/w/index.php?title=Andrew_Lloyd_Dulmage&action=edit&redlink=1
28 https://en.wikipedia.org/wiki/Nathan_Mendelsohn
29 https://en.wikipedia.org/wiki/Doi_(identifier)
30 https://doi.org/10.4153%2Fcjm-1958-052-0
31 http://www.modelica.org/events/Conference2002/papers/p21_Bunus.pdf
32 http://www.netlib.org/linalg/spooles/spooles.2.2.html
33 http://essay.utwente.nl/61082/1/MSc_JJ_Koelewijn.PDF

74 Edmonds' algorithm

This article is about the optimum branching algorithm. For the maximum matching algo-
rithm, see Blossom algorithm1 .

Graph and tree


search algorithms

• α–β
• A*
• B*
• Backtracking
• Beam
• Bellman–Ford
• Best-first
• Bidirectional
• Borůvka
• Branch & bound
• BFS
• British Museum
• D*
• DFS
• Dijkstra
• Edmonds
• Floyd–Warshall
• Fringe search
• Hill climbing
• IDA*
• Iterative deepening
• Johnson
• Jump point
• Kruskal
• Lexicographic BFS
• LPA*
• Prim
• SMA*
• SPFA

Listings

1 https://en.wikipedia.org/wiki/Blossom_algorithm


• Graph algorithms
• Search algorithms
• List of graph algorithms

Related topics

• Dynamic programming
• Graph traversal
• Tree traversal
• Search games

In graph theory2 , Edmonds' algorithm or Chu–Liu/Edmonds' algorithm is an
algorithm3 for finding a spanning4 arborescence5 of minimum weight (sometimes called an
optimum branching). It is the directed6 analog of the minimum spanning tree7 problem.
The algorithm was proposed independently first by Yoeng-Jin Chu and Tseng-Hong Liu
(1965) and then by Jack Edmonds8 (1967).

74.1 Algorithm

74.1.1 Description

The algorithm takes as input a directed graph D = ⟨V, E⟩ where V is the set of nodes and
E is the set of directed edges, a distinguished vertex r ∈ V called the root, and a real-valued
weight w(e) for each edge e ∈ E. It returns a spanning arborescence9 A rooted at r of
minimum weight, where the weight of an arborescence is defined to be the sum of its edge
weights, w(A) = ∑_{e∈A} w(e).

The algorithm has a recursive description. Let f (D, r, w) denote the function which returns
a spanning arborescence rooted at r of minimum weight. We first remove any edge from
E whose destination is r. We may also replace any set of parallel edges (edges between
the same pair of vertices in the same direction) by a single edge with weight equal to the
minimum of the weights of these parallel edges.

2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Algorithm
4 https://en.wikipedia.org/wiki/Spanning_subgraph
5 https://en.wikipedia.org/wiki/Arborescence_(graph_theory)
6 https://en.wikipedia.org/wiki/Directed_graph
7 https://en.wikipedia.org/wiki/Minimum_spanning_tree
8 https://en.wikipedia.org/wiki/Jack_Edmonds
9 https://en.wikipedia.org/wiki/Arborescence_(graph_theory)


Now, for each node v other than the root, find the edge incoming to v of lowest weight
(with ties broken arbitrarily). Denote the source of this edge by π(v). If the set of edges
P = {(π(v), v) | v ∈ V \ {r}} does not contain any cycles, then f (D, r, w) = P .
Otherwise, P contains at least one cycle. Arbitrarily choose one of these cycles and call
it C. We now define a new weighted directed graph D′ = ⟨V ′ , E ′ ⟩ in which the cycle C is
”contracted” into one node as follows:
The nodes of V ′ are the nodes of V not in C plus a new node denoted vC .
• If (u, v) is an edge in E with u ∉ C and v ∈ C (an edge coming into the cycle), then
include in E ′ a new edge e = (u, vC ), and define w′ (e) = w(u, v) − w(π(v), v).
• If (u, v) is an edge in E with u ∈ C and v ∉ C (an edge going away from the cycle), then
include in E ′ a new edge e = (vC , v), and define w′ (e) = w(u, v).
• If (u, v) is an edge in E with u ∉ C and v ∉ C (an edge unrelated to the cycle), then
include in E ′ a new edge e = (u, v), and define w′ (e) = w(u, v).
For each edge in E ′ , we remember which edge in E it corresponds to.
Now find a minimum spanning arborescence A′ of D′ using a call to f (D′ , r, w′ ). Since A′
is a spanning arborescence, each vertex has exactly one incoming edge. Let (u, vC ) be the
unique incoming edge to vC in A′ . This edge corresponds to an edge (u, v) ∈ E with v ∈ C.
Remove the edge (π(v), v) from C, breaking the cycle. Mark each remaining edge in C. For
each edge in A′ , mark its corresponding edge in E. Now we define f (D, r, w) to be the set
of marked edges, which form a minimum spanning arborescence.
Observe that f (D, r, w) is defined in terms of f (D′ , r, w′ ), with D′ having strictly fewer
vertices than D. Finding f (D, r, w) for a single-vertex graph is trivial (it is just D itself),
so the recursive algorithm is guaranteed to terminate.
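The recursive description can be turned into a compact (if inefficient) Python sketch. The function name and the (u, v, w) edge representation are made up for illustration, and the code assumes a spanning arborescence rooted at `root` exists:

```python
import itertools

def min_arborescence(edges, root):
    """Chu-Liu/Edmonds minimum spanning arborescence, recursive sketch.
    edges is a list of (u, v, w) triples; returns the chosen triples."""
    counter = itertools.count(len(edges))

    def solve(elist):
        # elist entries: (eid, u, v, w, parent_edge one contraction level below)
        pi = {}                                  # cheapest incoming edge per vertex
        for e in elist:
            _, u, v, w, _ = e
            if v != root and u != v and (v not in pi or w < pi[v][3]):
                pi[v] = e
        cycle = None                             # look for a cycle among chosen edges
        for v in pi:
            seen, x = [], v
            while x in pi and x not in seen:
                seen.append(x)
                x = pi[x][1]                     # step to the edge's source
            if x in seen:
                cycle = set(seen[seen.index(x):])
                break
        if cycle is None:
            return set(pi.values())
        vc = ("super", next(counter))            # contract the cycle into vc
        contracted = []
        for e in elist:
            eid, u, v, w, _ = e
            if u in cycle and v in cycle:
                continue
            if v in cycle:
                contracted.append((next(counter), u, vc, w - pi[v][3], e))
            elif u in cycle:
                contracted.append((next(counter), vc, v, w, e))
            else:
                contracted.append((next(counter), u, v, w, e))
        chosen = solve(contracted)
        # Translate back one level, then open the cycle at the entry vertex.
        result, entry = set(), None
        for e in chosen:
            result.add(e[4])
            if e[2] == vc:                       # the unique edge entering vc
                entry = e[4][2]
        result |= {pi[v] for v in cycle if v != entry}
        return result

    base = [(i, u, v, w, None) for i, (u, v, w) in enumerate(edges)]
    return [edges[e[0]] for e in solve(base)]

edges = [("r", "a", 5), ("r", "b", 5), ("a", "b", 1), ("b", "a", 1), ("a", "c", 3)]
arb = min_arborescence(edges, "r")
```

On this input the edges a→b and b→a form the cycle C; after contraction and back-translation the sketch returns an arborescence of total weight 9 with exactly one incoming edge per non-root vertex.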

74.2 Running time

The running time of this algorithm is O(EV ). A faster implementation of the algorithm
due to Robert Tarjan10 runs in time O(E log V ) for sparse graphs11 and O(V 2 ) for dense
graphs. This is as fast as Prim's algorithm12 for an undirected minimum spanning tree.
In 1986, Gabow, Galil, Spencer, Compton, and Tarjan produced a faster implementation,
with running time O(E + V log V ).

74.3 References
• C, Y. J.; L, T. H. (1965), ”O  S A   D
G”, Science Sinica, 14: 1396–1400

10 https://en.wikipedia.org/wiki/Robert_Tarjan
11 https://en.wikipedia.org/wiki/Sparse_graph
12 https://en.wikipedia.org/wiki/Prim%27s_algorithm


• E, J. (1967), ”O B”, Journal of Research of the National


Bureau of Standards Section B, 71B (4): 233–240, doi13 :10.6028/jres.071b.03214
• T, R. E.15 (1977), ”F O B”, Networks, 7: 25–35,
doi16 :10.1002/net.323007010317
• C, P.M.; F, L.; M, F. (1979), ”A    
”, Networks, 9 (4): 309–312, doi18 :10.1002/net.323009040319
• G, A (1985), Algorithmic Graph Theory, Cambridge University press,
ISBN20 0-521-28881-921
• G, H. N.; G, Z.; S, T.; T, R. E.22 (1986), ”E -
         
”, Combinatorica, 6 (2): 109–122, doi23 :10.1007/bf0257916824

74.4 External links


• Edmonds's algorithm ( edmonds-alg )25 – An implementation of Edmonds's algorithm
written in C++26 and licensed under the MIT License27 . This source is using Tarjan's
implementation for the dense graph.
• NetworkX, a python28 library distributed under BSD29 , has an implementation of Ed-
monds' Algorithm30 .

13 https://en.wikipedia.org/wiki/Doi_(identifier)
14 https://doi.org/10.6028%2Fjres.071b.032
15 https://en.wikipedia.org/wiki/Robert_Tarjan
16 https://en.wikipedia.org/wiki/Doi_(identifier)
17 https://doi.org/10.1002%2Fnet.3230070103
18 https://en.wikipedia.org/wiki/Doi_(identifier)
19 https://doi.org/10.1002%2Fnet.3230090403
20 https://en.wikipedia.org/wiki/ISBN_(identifier)
21 https://en.wikipedia.org/wiki/Special:BookSources/0-521-28881-9
22 https://en.wikipedia.org/wiki/Robert_Tarjan
23 https://en.wikipedia.org/wiki/Doi_(identifier)
24 https://doi.org/10.1007%2Fbf02579168
25 http://edmonds-alg.sourceforge.net/
26 https://en.wikipedia.org/wiki/C%2B%2B
27 https://en.wikipedia.org/wiki/MIT_License
28 https://en.wikipedia.org/wiki/Python_(programming_language)
29 https://en.wikipedia.org/wiki/BSD
30 https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.tree.branchings.Edmonds.html

75 Blossom algorithm

The blossom algorithm is an algorithm1 in graph theory2 for constructing maximum
matchings3 on graphs. The algorithm was developed by Jack Edmonds4 in 1961,[1] and
published in 1965.[2] Given a general graph5 G = (V, E), the algorithm finds a matching
M such that each vertex in V is incident with at most one edge in M and |M| is maxi-
mized. The matching is constructed by iteratively improving an initial empty matching
along augmenting paths in the graph. Unlike bipartite6 matching, the key new idea is that
an odd-length cycle in the graph (blossom) is contracted to a single vertex, with the search
continuing iteratively in the contracted graph.
The algorithm runs in time O(|E||V |2 ), where |E| is the number of edges7 of the graph and
|V | is its number of vertices8 . A better running time of O(|E||V |1/2 ) for the same task can
be achieved with the much more complex algorithm of Micali and Vazirani[3] .
A major reason that the blossom algorithm is important is that it gave the first proof
that a maximum-size matching could be found using a polynomial amount of computation
time. Another reason is that it led to a linear programming9 polyhedral description of
the matching polytope10 , yielding an algorithm for min-weight matching.[4] As elaborated
by Alexander Schrijver11 , further significance of the result comes from the fact that this
was the first polytope whose proof of integrality ”does not simply follow just from total
unimodularity12 , and its description was a breakthrough in polyhedral combinatorics13 .”[5]

75.1 Augmenting paths

Given G = (V, E) and a matching M of G, a vertex v is exposed if no edge of M is
incident with v. A path in G is an alternating path if its edges are alternately not in
M and in M (or in M and not in M). An augmenting path P is an alternating path
that starts and ends at two distinct exposed vertices. Note that the number of un-
matched edges in an augmenting path is greater by one than the number of matched

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Maximum_matching
4 https://en.wikipedia.org/wiki/Jack_Edmonds
5 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
6 https://en.wikipedia.org/wiki/Bipartite_graph
7 https://en.wikipedia.org/wiki/Edge_(graph)
8 https://en.wikipedia.org/wiki/Vertex_(graph)
9 https://en.wikipedia.org/wiki/Linear_programming
10 https://en.wikipedia.org/wiki/Polytope
11 https://en.wikipedia.org/wiki/Alexander_Schrijver
12 https://en.wikipedia.org/wiki/Total_unimodularity
13 https://en.wikipedia.org/wiki/Polyhedral_combinatorics


edges, and hence the total number of edges in an augmenting path is odd. A match-
ing augmentation along an augmenting path P is the operation of replacing M with a
new matching M1 = M ⊕ P = (M \ P ) ∪ (P \ M ).
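Representing a matching and a path as sets of edges makes the augmentation M1 = M ⊕ P a literal symmetric difference, e.g. in Python:

```python
def augment(matching, path):
    """Matching augmentation along an augmenting path: M1 = M xor P.

    Both arguments are sets of frozenset({u, v}) edges, so the symmetric
    difference is literally the set operation ^."""
    return matching ^ path

M = {frozenset({2, 3})}                                        # one matched edge
P = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}  # 1 and 4 exposed
M1 = augment(M, P)
print(sorted(sorted(e) for e in M1))   # [[1, 2], [3, 4]] -- one more edge than M
```

The unmatched edges of P enter the matching, its matched edges leave it, so |M1| = |M| + 1, exactly as the odd edge count of an augmenting path promises.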

Figure 187

By Berge's lemma14 , matching M is maximum if and only if there is no M-augmenting path
in G.[6][7] Hence, either a matching is maximum, or it can be augmented. Thus, starting
from an initial matching, we can compute a maximum matching by augmenting the current
matching with augmenting paths as long as we can find them, and return whenever no
augmenting paths are left. We can formalize the algorithm as follows:
INPUT: Graph G, initial matching M on G
OUTPUT: maximum matching M* on G
A1 function find_maximum_matching( G, M ) : M*
A2 P ← find_augmenting_path( G, M )
A3 if P is non-empty then
A4 return find_maximum_matching( G, augment M along P )
A5 else
A6 return M
A7 end if
A8 end function

We still have to describe how augmenting paths can be found efficiently. The subroutine to
find them uses blossoms and contractions.

14 https://en.wikipedia.org/wiki/Berge%27s_lemma


75.2 Blossoms and contractions

Given G = (V, E) and a matching M of G, a blossom15 B is a cycle in G consisting of
2k + 1 edges of which exactly k belong to M, and where one of the vertices v of the cycle (the
base) is such that there exists an alternating path of even length (the stem) from v to an
exposed vertex w.
Finding Blossoms:
• Traverse the graph starting from an exposed vertex.
• Starting from that vertex, label it as an outer vertex ”o”.
• Alternate the labeling between vertices being inner ”i” and outer ”o” such that no two
adjacent vertices have the same label.
• If we end up with two adjacent vertices labeled as outer ”o” then we have an odd-length
cycle and hence a blossom.
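The outer/inner labelling can be sketched as a breadth-first search in which reaching an already-outer vertex witnesses the odd cycle. This is only the detection step, with hypothetical names and a plain adjacency-list representation; `match` is assumed to map every matched vertex to its partner in both directions.

```python
from collections import deque

def find_blossom_edge(adj, match, exposed):
    """Alternating BFS from an exposed vertex with the o/i labelling above;
    returns an edge joining two outer vertices (evidence of an odd cycle,
    hence a blossom), or None."""
    label = {exposed: "o"}
    queue = deque([exposed])
    while queue:
        v = queue.popleft()               # every queued vertex is outer
        for w in adj[v]:
            if w not in label:
                label[w] = "i"            # reached along an unmatched edge
                if w in match:            # grow further along w's matched edge
                    label[match[w]] = "o"
                    queue.append(match[w])
                # (an unmatched, unlabelled w would instead end an augmenting path)
            elif label[w] == "o" and w != v:
                return (v, w)             # two adjacent outer vertices: a blossom
    return None

# Stem 1-2-3 and odd cycle 3-4-5-3; matched edges: 2-3 and 4-5.
adj = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [4, 3]}
match = {2: 3, 3: 2, 4: 5, 5: 4}
edge = find_blossom_edge(adj, match, 1)
```

Here vertex 1 is exposed, 3 and 5 both end up labelled outer, and the unmatched edge between them closes the blossom 3-4-5 with base 3.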
Define the contracted graph G’ as the graph obtained from G by contracting16 every edge
of B, and define the contracted matching M’ as the matching of G’ corresponding to M.

Figure 188

15 https://en.wikipedia.org/wiki/Blossom_(graph_theory)
16 https://en.wikipedia.org/wiki/Edge_contraction


G’ has an M’-augmenting path if and only if17 G has an M-augmenting path, and any
M’-augmenting path P’ in G’ can be lifted to an M-augmenting path in G by undoing the
contraction by B so that the segment of P’ (if any) traversing through vB is replaced by an
appropriate segment traversing through B.[8] In more detail:
• if P’ traverses through a segment u → vB → w in G’, then this segment is replaced with
the segment u → ( u’ → ... → w’ ) → w in G, where blossom vertices u’ and w’ and the
side of B, ( u’ → ... → w’ ), going from u’ to w’ are chosen to ensure that the new path
is still alternating (u’ is exposed with respect to M ∩ B, {w′ , w} ∈ E \ M ).

Figure 189

17 https://en.wikipedia.org/wiki/If_and_only_if


• if P’ has an endpoint vB , then the path segment u → vB in G’ is replaced with the segment
u → ( u’ → ... → v’ ) in G, where blossom vertices u’ and v’ and the side of B, ( u’
→ ... → v’ ), going from u’ to v’ are chosen to ensure that the path is alternating (v’ is
exposed, {u′ , u} ∈ E \ M ).

Figure 190

Thus blossoms can be contracted and search performed in the contracted graphs. This
reduction is at the heart of Edmonds' algorithm.


75.3 Finding an augmenting path

The search for an augmenting path uses an auxiliary data structure consisting of a forest18
F whose individual trees correspond to specific portions of the graph G. In fact, the forest
F is the same that would be used to find maximum matchings in bipartite graphs19 (without
need for shrinking blossoms). In each iteration the algorithm either (1) finds an augmenting
path, (2) finds a blossom and recurses onto the corresponding contracted graph, or (3)
concludes there are no augmenting paths. The auxiliary structure is built by an incremental
procedure discussed next.[8]
The construction procedure considers vertices v and edges e in G and incrementally updates
F as appropriate. If v is in a tree T of the forest, we let root(v) denote the root of T. If both
u and v are in the same tree T in F, we let distance(u,v) denote the length of the unique
path from u to v in T.
INPUT: Graph G, matching M on G
OUTPUT: augmenting path P in G or empty path if none found
B01 function find_augmenting_path( G, M ) : P
B02 F ← empty forest
B03 unmark all vertices and edges in G, mark all edges of M
B05 for each exposed vertex v do
B06 create a singleton tree { v } and add the tree to F
B07 end for
B08 while there is an unmarked vertex v in F with distance( v, root( v ) ) even do
B09 while there exists an unmarked edge e = { v, w } do
B10 if w is not in F then
// w is matched, so add e and w's matched edge to F
B11 x ← vertex matched to w in M
B12 add edges { v, w } and { w, x } to the tree of v
B13 else
B14 if distance( w, root( w ) ) is odd then
// Do nothing.
B15 else
B16 if root( v ) ≠ root( w ) then
// Report an augmenting path in F ∪ { e }.
B17 P ← path ( root( v ) → ... → v ) → ( w → ... → root( w ) )
B18 return P
B19 else
// Contract a blossom in G and look for the path in
the contracted graph.
B20 B ← blossom formed by e and edges on the path v → w
in T
B21 G’, M’ ← contract G and M by B
B22 P’ ← find_augmenting_path( G’, M’ )
B23 P ← lift P’ to G
B24 return P
B25 end if
B26 end if
B27 end if
B28 mark edge e
B29 end while
B30 mark vertex v
B31 end while
B32 return empty path
B33 end function

18 https://en.wikipedia.org/wiki/Forest_(graph_theory)
19 https://en.wikipedia.org/wiki/Bipartite_graph


75.3.1 Examples

The following four figures illustrate the execution of the algorithm. Dashed lines indicate
edges that are currently not present in the forest. First, the algorithm processes an out-of-
forest edge that causes the expansion of the current forest (lines B10 − B12).

Figure 191

Next, it detects a blossom and contracts the graph (lines B20 − B21).


Figure 192

Finally, it locates an augmenting path P′ in the contracted graph (line B22) and lifts it to
the original graph (line B23). Note that the ability of the algorithm to contract blossoms
is crucial here; the algorithm cannot find P in the original graph directly because only out-
of-forest edges between vertices at even distances from the roots are considered on line B17
of the algorithm.


Figure 193

Figure 194


75.3.2 Analysis

The forest F constructed by the find_augmenting_path() function is an alternating forest.[9]


• a tree T in G is an alternating tree with respect to M, if
• T contains exactly one exposed vertex r called the tree root
• every vertex at an odd distance from the root has exactly two incident edges in T, and
• all paths from r to leaves in T have even lengths, their odd edges are not in M and
their even edges are in M.
• a forest F in G is an alternating forest with respect to M, if
• its connected components are alternating trees, and
• every exposed vertex in G is a root of an alternating tree in F.
Each iteration of the loop starting at line B09 either adds to a tree T in F (line B10) or
finds an augmenting path (line B17) or finds a blossom (line B20). It is easy to see that the
running time is O(|E||V |2 ).

75.3.3 Bipartite matching

When G is bipartite20 , there are no odd cycles in G. In that case, blossoms will never be
found and one can simply remove lines B20 − B24 of the algorithm. The algorithm thus
reduces to the standard algorithm to construct maximum cardinality matchings in bipartite
graphs[7] where we repeatedly search for an augmenting path by a simple graph traversal:
this is for instance the case of the Ford–Fulkerson algorithm21 .
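That simple traversal is Kuhn's augmenting-path algorithm; a standard Python sketch follows (the function name and representation are assumptions, not from the text):

```python
def max_bipartite_matching(adj):
    """Maximum matching in a bipartite graph by repeated augmenting-path
    search (Kuhn's algorithm), the simple traversal the blossom algorithm
    reduces to when there are no odd cycles. adj maps each left vertex to
    its right neighbours; returns a dict right -> left."""
    match = {}

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match or try_augment(match[v], visited):
                match[v] = u
                return True
        return False

    for u in adj:
        try_augment(u, set())
    return match

adj = {0: ["a", "b"], 1: ["a"], 2: ["b"]}
matching = max_bipartite_matching(adj)
```

On this graph the search re-matches vertex 0 from a to b so that 1 can take a, yielding a maximum matching of size 2 (vertex 2 stays exposed, since a and b are both taken).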

75.3.4 Weighted matching

The matching problem can be generalized by assigning weights to edges in G and asking
for a set M that produces a matching of maximum (minimum) total weight: this is the
maximum weight matching22 problem. This problem can be solved by a combinatorial
algorithm that uses the unweighted Edmonds's algorithm as a subroutine.[6] Kolmogorov
provides an efficient C++ implementation of this.[10]

75.4 References
1. E, J (1991), ”A   ”,  J.K. L; A.H.G. R-
 K; A. S (.), History of Mathematical Programming --- A Collec-
tion of Personal Reminiscences, CWI, Amsterdam and North-Holland, Amsterdam,
pp. 32–54
2. E, J (1965). ”P, ,  ”. Can. J. Math. 17:
449–467. doi23 :10.4153/CJM-1965-045-424 .

20 https://en.wikipedia.org/wiki/Bipartite_graph
21 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
22 https://en.wikipedia.org/wiki/Maximum_weight_matching
23 https://en.wikipedia.org/wiki/Doi_(identifier)
24 https://doi.org/10.4153%2FCJM-1965-045-4


3. M, S; V, V (1980). An O(V1/2 E) algorithm for finding max-
imum matching in general graphs. 21st Annual Symposium on Foundations of Com-
puter Science. IEEE Computer Society Press, New York. pp. 17–27.
4. E, J (1965). ”M      0,1-
”. Journal of Research of the National Bureau of Standards Section B. 69:
125–130. doi25 :10.6028/jres.069B.01326 .
5. S, A (2003). Combinatorial Optimization: Polyhedra and Ef-
ficiency27 . A  C. B H: S-
V. ISBN28 978354044389629 .
6. L, L30 ; P, M31 (1986). Matching Theory. Akadémiai
Kiadó. ISBN32 963-05-4168-833 .
7. K, R, ”E' N-B M A”, Course
Notes. U. C. Berkeley34 (PDF),    35 (PDF)  2008-
12-30
8. T, R, ”S N  E' I S B-
 A  G M”, Course Notes, Department of Computer
Science, Princeton University36 (PDF)
9. K, C; L, L37 , ”A D M”,
Technical Report CS-TR-251-90, Department of Computer Science, Princeton Uni-
versity
10. K, V (2009), ”B V: A   
     ”38 , Mathematical Programming
Computation, 1 (1): 43–67, doi39 :10.1007/s12532-009-0002-840

25 https://en.wikipedia.org/wiki/Doi_(identifier)
26 https://doi.org/10.6028%2Fjres.069B.013
27 https://www.springer.com/us/book/9783540443896
28 https://en.wikipedia.org/wiki/ISBN_(identifier)
29 https://en.wikipedia.org/wiki/Special:BookSources/9783540443896
30 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Lov%C3%A1sz
31 https://en.wikipedia.org/wiki/Michael_D._Plummer
32 https://en.wikipedia.org/wiki/ISBN_(identifier)
33 https://en.wikipedia.org/wiki/Special:BookSources/963-05-4168-8
https://web.archive.org/web/20081230183603/http://www.cs.berkeley.edu/~karp/
34
greatalgo/lecture05.pdf
35 http://www.cs.berkeley.edu/~karp/greatalgo/lecture05.pdf
36 http://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/Handouts/tarjan-blossom.pdf
37 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Lov%C3%A1sz
38 http://pub.ist.ac.at/~vnk/papers/BLOSSOM5.html
39 https://en.wikipedia.org/wiki/Doi_(identifier)
40 https://doi.org/10.1007%2Fs12532-009-0002-8

76 Edmonds–Karp algorithm

In computer science1 , the Edmonds–Karp algorithm is an implementation of the Ford–
Fulkerson method2 for computing the maximum flow3 in a flow network4 in O5 (|V ||E|2 )
time. The algorithm was first published by Yefim Dinitz (whose name is also transliterated
”E. A. Dinic”, notably as author of his early papers) in 1970[1][2] and independently published
by Jack Edmonds6 and Richard Karp7 in 1972.[3] Dinic's algorithm8 includes additional
techniques that reduce the running time to O(|V |2 |E|).

76.1 Algorithm

The algorithm is identical to the Ford–Fulkerson algorithm9 , except that the search order
when finding the augmenting path10 is defined. The path found must be a shortest path
that has available capacity. This can be found by a breadth-first search11 , where we apply
a weight of 1 to each edge. The running time of O(|V ||E|2 ) is found by showing that each
augmenting path can be found in O(|E|) time, that every time at least one of the |E| edges
becomes saturated (an edge which has the maximum possible flow), that the distance from
the saturated edge to the source along the augmenting path must be longer than the last time
it was saturated, and that this length is at most |V |. Another property of this algorithm
is that the length of the shortest augmenting path increases monotonically. There is an
accessible proof in Introduction to Algorithms12 .[4]

76.2 Pseudocode

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
3 https://en.wikipedia.org/wiki/Maximum_flow_problem
4 https://en.wikipedia.org/wiki/Flow_network
5 https://en.wikipedia.org/wiki/Big_O_notation
6 https://en.wikipedia.org/wiki/Jack_Edmonds
7 https://en.wikipedia.org/wiki/Richard_Karp
8 https://en.wikipedia.org/wiki/Dinic%27s_algorithm
9 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
10 https://en.wikipedia.org/wiki/Flow_network#Augmenting_paths
11 https://en.wikipedia.org/wiki/Breadth-first_search
12 https://en.wikipedia.org/wiki/Introduction_to_Algorithms


The Wikibook Algorithm implementation13 has a page on the topic of:


Edmonds-Karp14

algorithm EdmondsKarp is
    input:
        graph  (graph[v] should be the list of edges coming out of vertex v in the
                original graph and their corresponding constructed reverse edges
                which are used for push-back flow.
                Each edge should have a capacity, flow, source and sink as parameters,
                as well as a pointer to the reverse edge.)
        s      (Source vertex)
        t      (Sink vertex)
    output:
        flow   (Value of maximum flow)

    flow := 0   (Initialize flow to zero)
    repeat
        (Run a bfs to find the shortest s-t path.
         We use 'pred' to store the edge taken to get to each vertex,
         so we can recover the path afterwards)
        q := queue()
        q.push(s)
        pred := array(graph.length)
        while not empty(q)
            cur := q.pull()
            for Edge e in graph[cur] do
                if pred[e.t] = null and e.t ≠ s and e.cap > e.flow then
                    pred[e.t] := e
                    q.push(e.t)

        if not (pred[t] = null) then
            (We found an augmenting path.
             See how much flow we can send)
            df := ∞
            for (e := pred[t]; e ≠ null; e := pred[e.s]) do
                df := min(df, e.cap - e.flow)
            (And update edges by that amount)
            for (e := pred[t]; e ≠ null; e := pred[e.s]) do
                e.flow := e.flow + df
                e.rev.flow := e.rev.flow - df
            flow := flow + df

    until pred[t] = null   (i.e., until no augmenting path was found)

    return flow
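The pseudocode translates directly into an executable sketch. The Python below is an illustration rather than a canonical implementation: instead of explicit edge objects with reverse pointers, it keeps a residual-capacity map, which makes the push-back bookkeeping implicit. The structure is the same — a breadth-first search for a shortest augmenting path, then saturation along it.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum flow by shortest augmenting paths.

    capacity: dict mapping u -> {v: capacity of edge (u, v)}.
    Residual capacities are kept in `res`; pushing flow on (u, v)
    creates residual capacity on the reverse arc (v, u).
    """
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    res = {u: dict.fromkeys(nodes, 0) for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            res[u][v] += c

    flow = 0
    while True:
        # BFS finds a *shortest* s-t path with available residual
        # capacity, which is exactly the Edmonds-Karp search order.
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v in nodes:
                if v not in pred and res[u][v] > 0:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            return flow  # no augmenting path remains

        # Recover the path, find its bottleneck capacity, and push it.
        path = []
        v = t
        while pred[v] is not None:
            path.append((pred[v], v))
            v = pred[v]
        df = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= df
            res[v][u] += df
        flow += df
```

On the capacities exercised in the example of the next section (A→B 3, A→D 3, B→C 4, C→D 1, C→E 2, D→E 2, D→F 6, E→G 1, F→G 9), the function returns the maximum flow 5.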

76.3 Example

Given a network of seven nodes, source A, sink G, and capacities as shown below:

13 https://en.wikibooks.org/wiki/Algorithm_implementation
https://en.wikibooks.org/wiki/Algorithm_implementation/Graphs/Maximum_flow/Edmonds-
14
Karp


Figure 195

In the pairs f /c written on the edges, f is the current flow, and c is the capacity. The
residual capacity from u to v is cf (u, v) = c(u, v) − f (u, v), the total capacity, minus the flow
that is already used. If the net flow from u to v is negative, it contributes to the residual
capacity.

Capacity: min(cf (A, D), cf (D, E), cf (E, G)) = min(3 − 0, 2 − 0, 1 − 0) = min(3, 2, 1) = 1
Path: A, D, E, G
Resulting network:

Figure 196

Capacity: min(cf (A, D), cf (D, F ), cf (F, G)) = min(3 − 1, 6 − 0, 9 − 0) = min(2, 6, 9) = 2
Path: A, D, F, G
Resulting network:

Figure 197

Capacity: min(cf (A, B), cf (B, C), cf (C, D), cf (D, F ), cf (F, G)) = min(3 − 0, 4 − 0, 1 − 0, 6 − 2, 9 − 2) = min(3, 4, 1, 4, 7) = 1
Path: A, B, C, D, F, G
Resulting network:

Figure 198


Capacity: min(cf (A, B), cf (B, C), cf (C, E), cf (E, D), cf (D, F ), cf (F, G)) = min(3 − 1, 4 − 1, 2 − 0, 0 − (−1), 6 − 3, 9 − 3) = min(2, 3, 2, 1, 3, 6) = 1
Path: A, B, C, E, D, F, G
Resulting network:

Figure 199

Notice how the length of the augmenting path15 found by the algorithm (in red) never
decreases. The paths found are the shortest possible. The flow found is equal to the
capacity across the minimum cut16 in the graph separating the source and the sink. There
is only one minimal cut in this graph, partitioning the nodes into the sets {A, B, C, E} and
{D, F, G}, with the capacity
c(A, D) + c(C, D) + c(E, G) = 3 + 1 + 1 = 5.

76.4 Notes
1. D, E. A. (1970). ”A       
      ”. Soviet Mathematics - Doklady.
Doklady. 11: 1277–1280.
2. Y D. ”D' A: T O V  E' V-
”17 (PDF). Cite journal requires |journal= (help18 )
3. E, J19 ; K, R M.20 (1972). ”T 
      ”21 (PDF). Journal of
the ACM. 19 (2): 248–264. doi22 :10.1145/321694.32169923 .
4. T H. C24 , C E. L25 , R L. R26  C-
 S27 (2009). ”26.2”. Introduction to Algorithms28 ( .). MIT P.
. 727–730. ISBN29 978-0-262-03384-830 .CS1 maint: multiple names: authors list
(link31 )

15 https://en.wikipedia.org/wiki/Augmenting_path
16 https://en.wikipedia.org/wiki/Max_flow_min_cut_theorem
17 http://www.cs.bgu.ac.il/~dinitz/Papers/Dinitz_alg.pdf
18 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
19 https://en.wikipedia.org/wiki/Jack_Edmonds
20 https://en.wikipedia.org/wiki/Richard_Karp
21 http://www.eecs.umich.edu/%7Epettie/matching/Edmonds-Karp-network-flow.pdf
22 https://en.wikipedia.org/wiki/Doi_(identifier)
23 https://doi.org/10.1145%2F321694.321699
24 https://en.wikipedia.org/wiki/Thomas_H._Cormen
25 https://en.wikipedia.org/wiki/Charles_E._Leiserson
26 https://en.wikipedia.org/wiki/Ronald_L._Rivest
27 https://en.wikipedia.org/wiki/Clifford_Stein
28 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
29 https://en.wikipedia.org/wiki/ISBN_(identifier)
30 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
31 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list


76.5 References
1. Algorithms and Complexity (see pages 63–69). 32

https://web.archive.org/web/20061005083406/http://www.cis.upenn.edu/~wilf/AlgComp3.
32
html

77 Euler tour technique

Figure 200 Euler tour of a tree, with edges labeled to show the order in which they are
traversed by the tour

The Euler tour technique (ETT), named after Leonhard Euler1 , is a method in graph
theory2 for representing trees3 . The tree is viewed as a directed graph4 that contains two
directed edges for each edge in the tree. The tree can then be represented as an Eulerian

1 https://en.wikipedia.org/wiki/Leonhard_Euler
2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Tree_(graph_theory)
4 https://en.wikipedia.org/wiki/Directed_graph


circuit5 of the directed graph, known as the Euler tour representation (ETR) of the
tree. The ETT allows for efficient, parallel computation6 of solutions to common problems
in algorithmic graph theory7 . It was introduced by Tarjan and Vishkin in 1984.[1]

77.1 Construction

Given an undirected tree presented as a set of edges, the Euler tour representation (ETR)
can be constructed in parallel as follows:
• We construct a symmetric list of directed edges:
  • For each undirected edge {u,v} in the tree, insert (u,v) and (v,u) in the edge list.
  • Sort the edge list lexicographically8 . (Here we assume that the nodes of the tree are
    ordered, and that the root is the first element in this order.)
• Construct adjacency lists for each node (called next) and a map from nodes to the first
  entries of the adjacency lists (called first):
  • For each edge (u,v) in the list, do in parallel:
    • If the previous edge (x,y) has x ≠ u, i.e. starts from a different node, set
      first(u) = (u,v)
    • Else if x = u, i.e. starts from the same node, set next(x,y) = (u,v)
• Construct an edge list (called succ) in Euler tour order by setting pointers succ(u,v) for all
  edges (u,v) in parallel according to the following rule:
succ(u, v) = next(v, u)   if next(v, u) ≠ nil,
             first(v)     otherwise.
The resulting list succ will be circular.
The overall construction takes work W(n) = O(sort(n)) (the time it takes to sort n items in
parallel) if the tree has n nodes, as in trees the number of edges is one less than the number
of nodes.
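The construction can be sketched sequentially in Python (the parallel loops become ordinary loops, and node labels are assumed comparable so the lexicographic sort is well defined; the function name is chosen here for illustration):

```python
def euler_tour_representation(edges):
    """Build the succ pointers of the Euler tour representation.

    edges: the undirected tree edges, given as pairs (u, v).
    Returns a dict mapping each directed edge to its successor
    in the (circular) Euler tour.
    """
    # Symmetric, lexicographically sorted list of directed edges.
    darts = sorted([(u, v) for u, v in edges] + [(v, u) for u, v in edges])
    first, nxt = {}, {}
    for i, (u, v) in enumerate(darts):
        if i == 0 or darts[i - 1][0] != u:
            first[u] = (u, v)            # first edge leaving u
        else:
            nxt[darts[i - 1]] = (u, v)   # adjacency-list (next) pointer
    # succ(u, v) = next(v, u) if it exists, else first(v).
    return {(u, v): nxt.get((v, u), first[v]) for (u, v) in darts}
```

For the star with edges (0,1) and (0,2), following succ from (0,1) visits (0,1), (1,0), (0,2), (2,0) and returns to (0,1), confirming that the list is circular.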

77.2 Roots, advance and retreat edges

If the tree has a root, we can split the circular list succ at that root. In that case, we can
speak of advance and retreat edges: given a pair of nodes u,v, the first occurrence of either
(u,v) or (v,u) in the ETR is called the advance edge, and the second occurrence is called the
retreat edge. This appeals to the intuition that the first time such an edge is traversed the
distance to the root is increased, while the second time the distance decreases.
Rerooting the tree can be done in constant time O(1) by splitting the circular list succ at
the new root.

5 https://en.wikipedia.org/wiki/Eulerian_circuit
6 https://en.wikipedia.org/wiki/Parallel_computation
7 https://en.wikipedia.org/wiki/Algorithmic_graph_theory
8 https://en.wikipedia.org/wiki/Lexicographical_order


77.3 Applications

All of the following problems can be solved in O(Prefix sum(n)) (the time it takes to solve
the prefix sum9 problem in parallel for a list of n items):
1. Classifying advance and retreat edges: Do list ranking on the ETR and save the result
in a two-dimensional array A. Then (u,v) is an advance edge iff A(u,v) < A(v,u), and
a retreat edge otherwise.
2. Determine the level of each node: Do a prefix sum on the ETR, where every advance
edge counts as 1, and every retreat edge counts as −1. Then the value at the advance
edge (u,v) is the level of v.
3. Number of nodes in a subtree rooted at v: determine advance edge (u,v), and the
retreat edge (u,v) in parallel, and then count the number of advance edges between
(u,v) and (u,v) using prefix sum.
4. The depth-first search10 index of a node v: count the number of advance edges up to
and including (u,v).
5. Determine the lowest common ancestor of two nodes.
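Problem 2 (node levels), for instance, can be sketched sequentially as follows; a true parallel version would replace the linear scan with a prefix-sum computation, and the succ map is supplied directly here for illustration:

```python
def node_levels(succ, root_edge):
    """Level of each node via +1 (advance) / -1 (retreat) sums.

    succ: the Euler tour as a dict of directed-edge successors;
    root_edge: the tour's starting edge (leaving the root).
    """
    root = root_edge[0]
    levels, depth = {root: 0}, 0
    seen = set()   # undirected edges already traversed once
    e = root_edge
    for _ in range(len(succ)):
        u, v = e
        if frozenset((u, v)) not in seen:
            seen.add(frozenset((u, v)))   # first occurrence: advance edge
            depth += 1
            levels[v] = depth             # value at the advance edge (u,v)
        else:
            depth -= 1                    # second occurrence: retreat edge
        e = succ[e]
    return levels
```

For the path 0−1−2 rooted at 0, whose tour is (0,1), (1,2), (2,1), (1,0), this yields levels {0: 0, 1: 1, 2: 2}.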

77.4 Euler tour trees

Henzinger and King[2] suggest representing a given tree by keeping its Euler tour in a bal-
anced binary search tree11 , keyed by the index in the tour. So for example, the unbalanced
tree in the example above, having 7 nodes, will be represented by a balanced binary tree
with 14 nodes, one for each time each node appears on the tour.
We can represent a forest (an acyclic graph) using a collection of ET trees - one ET tree per
forest tree. This representation allows us to quickly answer the question ”what is the
root of node v?” by just moving to the first node of the ET tree (since nodes in the ET tree
are keyed by their location in the Euler tour, and the root is the first and last node in the
tour). When the represented forest is updated (e.g. by connecting two trees into a single tree
or by splitting a tree into two trees), the corresponding Euler-tour structure can be updated
in time O(log(n)).
Link/cut trees12 have similar performance guarantees. While LC trees are good for main-
taining aggregates on paths of a tree (making them a good choice of data structure in network
flow algorithms), ET trees are better at keeping aggregate information on subtrees.[3]

9 https://en.wikipedia.org/wiki/Prefix_sum
10 https://en.wikipedia.org/wiki/Depth-first_search
11 https://en.wikipedia.org/wiki/Balanced_binary_search_tree
12 https://en.wikipedia.org/wiki/Link/cut_tree


77.5 References
1. T, R.E.; V, U. (1984). Finding biconnected components and com-
puting tree functions in logarithmic parallel time. Proceedings of FOCS. pp. 12–20.
CiteSeerX13 10.1.1.419.308814 . doi15 :10.1109/SFCS.1984q589616 .
2. H, M. R.; K, V. (1995). ”R   -
     ”. Proceedings of the twenty-
seventh annual ACM symposium on Theory of computing - STOC '95. p. 519.
doi17 :10.1145/225058.22526918 . ISBN19 089791718920 .
3. Euler tour trees21 - in Lecture Notes in Advanced Data Structures. Prof. Erik De-
maine; Scribe: Katherine Lai.

13 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
14 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.419.3088
15 https://en.wikipedia.org/wiki/Doi_(identifier)
16 https://doi.org/10.1109%2FSFCS.1984.715896
17 https://en.wikipedia.org/wiki/Doi_(identifier)
18 https://doi.org/10.1145%2F225058.225269
19 https://en.wikipedia.org/wiki/ISBN_(identifier)
20 https://en.wikipedia.org/wiki/Special:BookSources/0897917189
21 http://courses.csail.mit.edu/6.851/spring07/scribe/lec05.pdf

78 FKT algorithm

The FKT algorithm, named
after Fisher1 , Kasteleyn2 , and Temperley3 , counts the number of perfect matchings4 in a
planar5 graph in polynomial time. This same task is #P-complete6 for general graphs.
Counting the number of matchings7 , even for planar graphs, is also #P-complete. The key
idea is to convert the problem into a Pfaffian8 computation of a skew-symmetric matrix9
derived from a planar embedding of the graph. The Pfaffian of this matrix is then computed
efficiently using standard determinant algorithms10 .

78.1 History

The problem of counting planar perfect matchings has its roots in statistical mechanics11
and chemistry12 , where the original question was: If diatomic molecules13 are adsorbed on
a surface, forming a single layer, how many ways can they be arranged?[1] The partition
function14 is an important quantity that encodes the statistical properties of a system at
equilibrium and can be used to answer the previous question. However, trying to compute
the partition function from its definition is not practical. Thus to exactly solve a physical
system is to find an alternate form of the partition function for that particular physical
system that is sufficiently simple to calculate exactly.[2] In the early 1960s, the definition
of exactly solvable was not rigorous.[3] Computer science provided a rigorous definition with
the introduction of polynomial time15 , which dates to 1965. Similarly, the notion of not
exactly solvable should correspond to #P-hardness16 , which was defined in 1979.
Another type of physical system to consider is composed of dimers17 , which is a polymer
with two atoms. The dimer model counts the number of dimer coverings of a graph.[4]

1 https://en.wikipedia.org/wiki/Michael_Fisher
2 https://en.wikipedia.org/wiki/Pieter_Kasteleyn
3 https://en.wikipedia.org/wiki/Harold_Neville_Vazeille_Temperley
4 https://en.wikipedia.org/wiki/Perfect_matching
5 https://en.wikipedia.org/wiki/Planar_graph
6 https://en.wikipedia.org/wiki/Sharp-P-complete
7 https://en.wikipedia.org/wiki/Matching_(graph_theory)
8 https://en.wikipedia.org/wiki/Pfaffian
9 https://en.wikipedia.org/wiki/Skew-symmetric_matrix
10 https://en.wikipedia.org/wiki/Determinant#Algorithmic_implementation
11 https://en.wikipedia.org/wiki/Statistical_mechanics
12 https://en.wikipedia.org/wiki/Chemistry
13 https://en.wikipedia.org/wiki/Diatomic_molecule
14 https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)
15 https://en.wikipedia.org/wiki/P_(complexity)
16 https://en.wikipedia.org/wiki/Sharp-P-complete
17 https://en.wikipedia.org/wiki/Dimer_(chemistry)


Another physical system to consider is the bonding of H2 O18 molecules in the form of ice.
This can be modelled as a directed, 3-regular19 graph where the orientation of the edges at
each vertex cannot all be the same. How many edge orientations does this model have?
Motivated by physical systems involving dimers, in 1961, Kasteleyn[5] and Temperley and
Fisher[6] independently found the number of domino tilings20 for the m-by-n rectangle. This
is equivalent to counting the number of perfect matchings for the m-by-n lattice graph21 .
By 1967, Kasteleyn had generalized this result to all planar graphs.[7][8]

78.2 Algorithm

78.2.1 Explanation

The main insight is that every non-zero term in the Pfaffian22 of the adjacency matrix23 of
a graph G corresponds to a perfect matching. Thus, if one can find an orientation24 of G to
align all signs of the terms in Pfaffian25 (no matter + or - ), then the absolute value of the
Pfaffian26 is just the number of perfect matchings in G. The FKT algorithm does such a
task for a planar graph G. The orientation it finds is called a Pfaffian orientation27 .
Let G = (V, E) be an undirected graph with adjacency matrix28 A. Define PM(n) to be the
set of partitions of n elements into pairs; then the number of perfect matchings in G is

PerfMatch(G) = Σ_{M ∈ PM(|V|)} Π_{(i,j) ∈ M} A_{ij}.
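This sum can be evaluated directly by recursing over pair partitions; the brute-force check below (exponential time, illustrative names, not part of the FKT algorithm itself) is exactly the computation that the FKT algorithm avoids:

```python
def perf_match(A):
    """PerfMatch(G): sum over pairings M of the product of A[i][j]."""
    def count(rem):
        if not rem:
            return 1
        i, rest = rem[0], rem[1:]
        # Pair the smallest remaining vertex i with each candidate j;
        # terms with A[i][j] = 0 contribute nothing, as in the formula.
        return sum(A[i][j] * count([k for k in rest if k != j])
                   for j in rest)
    return count(list(range(len(A))))

# Adjacency matrix of the 4-cycle 1-2-3-4-1, which has exactly
# two perfect matchings: {12, 34} and {23, 41}.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
```

Here `perf_match(C4)` evaluates to 2, and an odd number of vertices correctly gives 0.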

Closely related to this is the Pfaffian29 for an n by n matrix A


pf(A) = Σ_{M ∈ PM(n)} sgn(M) Π_{(i,j) ∈ M} A_{ij},

where sgn(M) is the sign of the permutation30 M. A Pfaffian orientation of G is a directed
graph H with (1, −1, 0)-adjacency matrix31 B such that pf(B) = PerfMatch(G).[9] In 1967,
Kasteleyn proved that planar graphs have an efficiently computable Pfaffian orientation.
Specifically, for a planar graph G, let H be a directed version of G where an odd number of
edges are oriented clockwise for every face in a planar embedding of G. Then H is a Pfaffian
orientation of G.

18 https://en.wikipedia.org/wiki/H2O
19 https://en.wikipedia.org/wiki/Regular_graph
20 https://en.wikipedia.org/wiki/Domino_tiling#Counting_tilings_of_regions
21 https://en.wikipedia.org/wiki/Lattice_graph
22 https://en.wikipedia.org/wiki/Pfaffian
23 https://en.wikipedia.org/wiki/Adjacency_matrix
24 https://en.wikipedia.org/wiki/Orientation_(graph_theory)
25 https://en.wikipedia.org/wiki/Pfaffian
26 https://en.wikipedia.org/wiki/Pfaffian
27 https://en.wikipedia.org/wiki/Pfaffian_orientation
28 https://en.wikipedia.org/wiki/Adjacency_matrix
29 https://en.wikipedia.org/wiki/Pfaffian
30 https://en.wikipedia.org/wiki/Parity_of_a_permutation
31 https://en.wikipedia.org/wiki/Adjacency_matrix#Variations


Finally, for any skew-symmetric matrix32 A,

pf(A)² = det(A),

where det(A) is the determinant33 of A. This result is due to Cayley34 .[10] Since determinants35
are efficiently computable, so is PerfMatch(G).

32 https://en.wikipedia.org/wiki/Skew-symmetric_matrix
33 https://en.wikipedia.org/wiki/Determinant
34 https://en.wikipedia.org/wiki/Arthur_Cayley
35 https://en.wikipedia.org/wiki/Determinant#Algorithmic_implementation


78.2.2 High-level description

Figure 201 An example showing how the FKT algorithm finds a Pfaffian orientation.

1. Compute a planar embedding36 of G.
2. Compute a spanning tree37 T1 of the input graph G.
3. Give an arbitrary orientation to each edge in G that is also in T1 .

36 https://en.wikipedia.org/wiki/Graph_embedding
37 https://en.wikipedia.org/wiki/Spanning_tree


4. Use the planar embedding to create an (undirected) graph T2 with the same vertex
set as the dual graph38 of G.
5. Create an edge in T2 between two vertices if their corresponding faces in G share an
edge in G that is not in T1 . (Note that T2 is a tree.)
6. For each leaf v in T2 (that is not also the root):
a) Let e be the lone edge of G in the face corresponding to v that does not yet have
an orientation.
b) Give e an orientation such that the number of edges oriented clock-wise is odd.
c) Remove v from T2 .
7. Return the absolute value of the Pfaffian39 of the (1, −1, 0)-adjacency matrix40 of G,
which is the square root of the determinant.
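The final step can be checked on the smallest interesting case, the 4-cycle 1-2-3-4-1: it has one bounded face and two perfect matchings, and orienting three of the four face edges clockwise gives a Pfaffian orientation. The Python below is an illustrative check (the orientation is hardcoded rather than produced by steps 1–6, and the determinant uses naive cofactor expansion):

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# (1, -1, 0)-adjacency matrix of the orientation 1->2, 2->3, 3->4, 1->4:
# the single bounded face has three clockwise edges, an odd number.
B = [[ 0,  1,  0,  1],
     [-1,  0,  1,  0],
     [ 0, -1,  0,  1],
     [-1,  0, -1,  0]]

matchings = round(det(B) ** 0.5)   # |pf(B)| = sqrt(det(B))
```

Here det(B) = 4, so the algorithm reports 2 perfect matchings, as expected for the 4-cycle.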

78.3 Generalizations

The sum of weighted perfect matchings can also be computed by using the Tutte matrix41
for the adjacency matrix in the last step.
Kuratowski's theorem42 states that
a finite graph43 is planar if and only if44 it contains no subgraph45 homeomorphic46 to K5
(complete graph47 on five vertices) or K3,3 (complete bipartite graph48 on two partitions
of size three).
Vijay Vazirani49 generalized the FKT algorithm to graphs that do not contain a subgraph
homeomorphic to K3,3 .[11] Since counting the number of perfect matchings in a general graph
is #P-complete50 , some restriction on the input graph is required unless FP51 , the function
version of P52 , is equal to #P53 . Counting matchings, which is known as the Hosoya index54 ,
is also #P-complete even for planar graphs.[12]

38 https://en.wikipedia.org/wiki/Dual_graph
39 https://en.wikipedia.org/wiki/Pfaffian
40 https://en.wikipedia.org/wiki/Adjacency_matrix#Variations
41 https://en.wikipedia.org/wiki/Tutte_matrix
42 https://en.wikipedia.org/wiki/Kuratowski%27s_theorem
43 https://en.wikipedia.org/wiki/Finite_graph
44 https://en.wikipedia.org/wiki/If_and_only_if
45 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#Subgraphs
46 https://en.wikipedia.org/wiki/Homeomorphism_(graph_theory)
47 https://en.wikipedia.org/wiki/Complete_graph
48 https://en.wikipedia.org/wiki/Complete_bipartite_graph
49 https://en.wikipedia.org/wiki/Vijay_Vazirani
50 https://en.wikipedia.org/wiki/Sharp-P-complete
51 https://en.wikipedia.org/wiki/FP_(complexity)
52 https://en.wikipedia.org/wiki/P_(complexity)
53 https://en.wikipedia.org/wiki/Sharp-P
54 https://en.wikipedia.org/wiki/Hosoya_index


78.4 Applications

The FKT algorithm has seen extensive use in holographic algorithms55 on planar graphs via
matchgates56 .[3] For example, consider the planar version of the ice model mentioned above,
which has the technical name #PL57 -3-NAE-SAT58 (where NAE stands for ”not all equal”).
Valiant found a polynomial time algorithm for this problem which uses matchgates.[13]

78.5 References
1. H, B59 (J–F 2008), ”A A”60 ,
American Scientist61
2. B, R. J.62 (2008) [1982]. Exactly Solved Models in Statistical Mechanics63
(T .). D P. . 11. ISBN64 978-0-486-46271-465 .
3. C, J-Y; L, P; X, M (2010). Holographic Algorithms with Match-
gates Capture Precisely Tractable Planar #CSP. Foundations of Computer Science
(FOCS), 2010 51st Annual IEEE Symposium on66 . Las Vegas, NV, USA: IEEE.
arXiv67 :1008.068368 . Bibcode69 :2010arXiv1008.0683C70 .
4. K, R; O, A (2005). ”W   D?”71 (PDF).
AMS. 52 (3): 342–343.
5. K, P. W.72 (1961), ”T      . I. T
       ”, Physica, 27 (12):
1209–1225, Bibcode73 :1961Phy....27.1209K74 , doi75 :10.1016/0031-8914(61)90063-576

55 https://en.wikipedia.org/wiki/Holographic_algorithm
56 https://en.wikipedia.org/w/index.php?title=Matchgates&action=edit&redlink=1
57 https://en.wikipedia.org/wiki/Planar_graph
58 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
59 https://en.wikipedia.org/wiki/Brian_Hayes_(scientist)
60 https://www.americanscientist.org/article/accidental-algorithms
61 https://en.wikipedia.org/wiki/American_Scientist
62 https://en.wikipedia.org/wiki/Rodney_J._Baxter
63 http://tpsrv.anu.edu.au/Members/baxter/book
64 https://en.wikipedia.org/wiki/ISBN_(identifier)
65 https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-46271-4
66 http://www.egr.unlv.edu/~larmore/FOCS/focs2010/
67 https://en.wikipedia.org/wiki/ArXiv_(identifier)
68 http://arxiv.org/abs/1008.0683
69 https://en.wikipedia.org/wiki/Bibcode_(identifier)
70 https://ui.adsabs.harvard.edu/abs/2010arXiv1008.0683C
71 http://www.ams.org/notices/200503/what-is.pdf
72 https://en.wikipedia.org/wiki/Pieter_Kasteleyn
73 https://en.wikipedia.org/wiki/Bibcode_(identifier)
74 https://ui.adsabs.harvard.edu/abs/1961Phy....27.1209K
75 https://en.wikipedia.org/wiki/Doi_(identifier)
76 https://doi.org/10.1016%2F0031-8914%2861%2990063-5


6. T, H. N. V.77 ; F, M E.78 (1961). ”D  


 -  ”. Philosophical Magazine. 6 (68): 1061–
1063. doi79 :10.1080/1478643610824336680 .
7. K, P. W.81 (1963). ”D S  P T”.
Journal of Mathematical Physics. 4 (2): 287–293. Bibcode82 :1963JMP.....4..287K83 .
doi84 :10.1063/1.170395385 .
8. K, P. W.86 (1967), ”G    ”,  H,
87
F. (.), Graph Theory and Theoretical Physics, New York: Academic Press, pp. 43–
110
9. T, R88 (2006). A survey of Pfaffian orientations of graphs89 (PDF).
I C  M90 . III. Zurich: European Mathe-
matical Society. pp. 963–984.
10. C, A91 (1847). ”S   ” [O 
]. Crelle's Journal. 38: 93–96.
11. V, V V.92 (1989). ”NC     
    K3,3 -free graphs and related problems”. Information and
Computation. 80 (2): 152–164. doi93 :10.1016/0890-5401(89)90017-594 . ISSN95 0890-
540196 .
12. J, M97 (1987), ”T- -  
 ”, Journal of Statistical Physics, 48 (1): 121–134,
Bibcode98 :1987JSP....48..121J99 , doi100 :10.1007/BF01010403101 .
13. V, L G.102 (2004). ”H A (E A-
)”. Proceedings of the 45th Annual IEEE Symposium on Foundations of Com-

77 https://en.wikipedia.org/wiki/Harold_Neville_Vazeille_Temperley
78 https://en.wikipedia.org/wiki/Michael_Fisher
79 https://en.wikipedia.org/wiki/Doi_(identifier)
80 https://doi.org/10.1080%2F14786436108243366
81 https://en.wikipedia.org/wiki/Pieter_Kasteleyn
82 https://en.wikipedia.org/wiki/Bibcode_(identifier)
83 https://ui.adsabs.harvard.edu/abs/1963JMP.....4..287K
84 https://en.wikipedia.org/wiki/Doi_(identifier)
85 https://doi.org/10.1063%2F1.1703953
86 https://en.wikipedia.org/wiki/Pieter_Kasteleyn
87 https://en.wikipedia.org/wiki/Frank_Harary
88 https://en.wikipedia.org/wiki/Robin_Thomas_(mathematician)
89 http://people.math.gatech.edu/~thomas/PAP/pfafsurv.pdf
90 http://www.icm2006.org/
91 https://en.wikipedia.org/wiki/Arthur_Cayley
92 https://en.wikipedia.org/wiki/Vijay_Vazirani
93 https://en.wikipedia.org/wiki/Doi_(identifier)
94 https://doi.org/10.1016%2F0890-5401%2889%2990017-5
95 https://en.wikipedia.org/wiki/ISSN_(identifier)
96 http://www.worldcat.org/issn/0890-5401
97 https://en.wikipedia.org/wiki/Mark_Jerrum
98 https://en.wikipedia.org/wiki/Bibcode_(identifier)
99 https://ui.adsabs.harvard.edu/abs/1987JSP....48..121J
100 https://en.wikipedia.org/wiki/Doi_(identifier)
101 https://doi.org/10.1007%2FBF01010403
102 https://en.wikipedia.org/wiki/Leslie_G._Valiant


puter Science. FOCS'04103 . Rome, Italy: IEEE Computer Society. pp. 306–315.
doi104 :10.1109/FOCS.2004.34105 . ISBN106 0-7695-2228-9107 .

78.6 External links


• More history, information, and examples can be found in chapter 2 and section 5.3.2 of
Dmitry Kamenetsky's PhD thesis108

103 http://www.cs.brown.edu/~aris/focs04/
104 https://en.wikipedia.org/wiki/Doi_(identifier)
105 https://doi.org/10.1109%2FFOCS.2004.34
106 https://en.wikipedia.org/wiki/ISBN_(identifier)
107 https://en.wikipedia.org/wiki/Special:BookSources/0-7695-2228-9
108 https://digitalcollections.anu.edu.au/bitstream/1885/49338/2/02whole.pdf

79 Flooding algorithm

A flooding algorithm is an algorithm1 for distributing material to every part of a graph2 .


The name derives from the concept of inundation by a flood3 .
Flooding algorithms are used in computer networking4 and graphics5 . Flooding algorithms
are also useful for solving many mathematical problems, including maze6 problems and
many problems in graph theory7 .

79.1 See also


• Flooding (computer networking)8
• Water retention on mathematical surfaces9
• Flood fill10
• Spanning tree11
• Spanning Tree Protocol12

79.2 External links


• Flooding edge or node weighted graphs, Fernand Meyer13
• Water Retention Utility14

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
3 https://en.wikipedia.org/wiki/Flood
4 https://en.wikipedia.org/wiki/Flooding_(computer_networking)
5 https://en.wikipedia.org/wiki/Flood_fill
6 https://en.wikipedia.org/wiki/Maze
7 https://en.wikipedia.org/wiki/Graph_theory
8 https://en.wikipedia.org/wiki/Flooding_(computer_networking)
9 https://en.wikipedia.org/wiki/Water_retention_on_mathematical_surfaces
10 https://en.wikipedia.org/wiki/Flood_fill
11 https://en.wikipedia.org/wiki/Spanning_tree
12 https://en.wikipedia.org/wiki/Spanning_Tree_Protocol
13 https://arxiv.org/abs/1305.5756
14 https://web.archive.org/web/20131211173036/http://users.eastlink.ca/~sharrywhite/Download.html

80 Flow network

In graph theory1 , a flow network (also known as a transportation network) is a directed
graph2 where each edge has a capacity and each edge receives a flow. The amount of flow
on an edge cannot exceed the capacity of the edge. Often in operations research3 , a directed
graph is called a network, the vertices are called nodes and the edges are called arcs. A
flow must satisfy the restriction that the amount of flow into a node equals the amount of
flow out of it, unless it is a source, which has only outgoing flow, or sink, which has only
incoming flow. A network can be used to model traffic in a computer network, circulation
with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which
something travels through a network of nodes.

80.1 Definition

A network is a graph G = (V, E), where V is a set of vertices and E is a set of V’s edges
– a subset of V × V – together with a non-negative function4 c: V × V → ℝ∞ , called the
capacity function. Without loss of generality5 , we may assume that if (u, v) ∈ E then (v,
u) is also a member of E, since if (v, u) ∉ E then we may add (v, u) to E and then set c(v,
u) = 0.
If two nodes in G are distinguished, a source s and a sink t, then (G, c, s, t) is called a flow
network.[1]

80.2 Flows

There are various notions of a flow function that can be defined in a flow graph. Flow
functions model the net flow of units between pairs of nodes, and are useful when asking
questions such as what is the maximum number of units that can be transferred from the
source node s to the sink node t? The simplest example of a flow function is known as a
pseudo-flow.
A pseudo-flow is a function f : V × V → ℝ that satisfies the following two constraints
for all nodes u and v:

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Directed_graph
3 https://en.wikipedia.org/wiki/Operations_research
4 https://en.wikipedia.org/wiki/Function_(mathematics)
5 https://en.wikipedia.org/wiki/Without_loss_of_generality


• Skew symmetry: Only encode the net flow of units between a pair of nodes u and v (see
intuition6 below), that is: f (u, v) = −f (v, u).
• Capacity constraint: An arc's flow cannot exceed its capacity, that is: f (u, v) ≤ c(u, v).

Given a pseudo-flow f in a flow network, it is often useful to consider the net flow entering
a given node v, that is, the sum of the flows entering v. The excess function xf : V → ℝ is
defined by xf (u) = ∑v ∈V f (v, u). A node u is said to be active if xf (u) > 0, deficient if
xf (u) < 0 or conserving if xf (u) = 0.
These final definitions lead to two strengthenings of the definition of a pseudo-flow:
A pre-flow is a pseudo-flow that, for all v ∈V \{s}, satisfies the additional constraint:
• Non-deficient flows: The net flow entering the node v is non-negative, except for the
source, which ”produces” flow. That is: xf (v) ≥ 0 for all v ∈V \{s}.
A feasible flow, or just a flow, is a pseudo-flow that, for all v ∈V \{s, t}, satisfies the
additional constraint:
• Flow conservation: The net flow entering the node v is 0, except for the source, which
”produces” flow, and the sink, which ”consumes” flow. That is: xf (v) = 0 for all v ∈V \{s,
t}.

The value of a feasible flow f, denoted | f |, is the net flow into the sink t of the flow network.
That is, | f | = xf (t).
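The definitions above can be checked mechanically. The following sketch uses a hypothetical representation of f and c as dictionaries keyed by vertex pairs (our own choice, not a standard API); it verifies skew symmetry, the capacity constraint, and flow conservation, and computes the value |f| = xf (t).

```python
def excess(f, nodes, u):
    """x_f(u): the net flow entering u."""
    return sum(f.get((v, u), 0) for v in nodes)

def is_feasible(f, c, nodes, s, t):
    """Check skew symmetry, the capacity constraint, and flow conservation."""
    for (u, v), fv in f.items():
        if fv != -f.get((v, u), 0):   # skew symmetry: f(u,v) = -f(v,u)
            return False
        if fv > c.get((u, v), 0):     # capacity constraint: f(u,v) <= c(u,v)
            return False
    # conservation: x_f(v) = 0 for every node except the source and the sink
    return all(excess(f, nodes, v) == 0 for v in nodes if v not in (s, t))

# Tiny network s -> a -> t with capacities 3 and 3 carrying a flow of 2 units.
nodes = {'s', 'a', 't'}
c = {('s', 'a'): 3, ('a', 't'): 3}
f = {('s', 'a'): 2, ('a', 's'): -2, ('a', 't'): 2, ('t', 'a'): -2}
assert is_feasible(f, c, nodes, 's', 't')
print(excess(f, nodes, 't'))  # the value |f| = x_f(t) = 2
```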

80.3 Intuition

In the context of flow analysis, there is only an interest in considering how units are trans-
ferred between nodes in a holistic sense. Put another way, it is not necessary to distinguish
multiple arcs between a pair of nodes:
• Given any two nodes u and v, if there are two arcs from u to v with capacities 5 and
3 respectively, this is equivalent to considering only a single arc between u and v with
capacity 8 — it is only useful to know that 8 units can be transferred from u to v, not
how they can be transferred.
• Again, given two nodes u and v, if there is a flow of 5 units from u to v, and another flow
of 3 units from v to u, this is equivalent to a net flow of 2 units from u to v, or a net flow
of −2 units from v to u (so sign indicates direction) — it is only useful to know that a
net flow of 2 units will flow between u and v, and the direction that they will flow, not
how that net flow is achieved.
For this reason, the capacity function c: V × V → ℝ∞ , which does not allow for multiple
arcs starting and ending at the same nodes, is sufficient for flow analysis. Similarly, it

6 https://en.wikipedia.org/wiki/Flow_network#Intuition


is enough to impose the skew symmetry constraint on flow functions to ensure that flow
between two vertices is encoded by a single number (to indicate magnitude), and a sign
(to indicate direction) — by knowing the flow between u and v you implicitly, via skew
symmetry, know the flow between v and u. These simplifications of the model aren't always
immediately intuitive, but they are convenient when it comes time to analyze flows.
The capacity constraint simply ensures that a flow on any one arc within the network cannot
exceed the capacity of that arc.

80.4 Concepts useful to flow problems

80.4.1 Residuals

The residual capacity of an arc with respect to a pseudo-flow f, denoted cf , is the dif-
ference between the arc's capacity and its flow. That is, cf (e) = c(e) - f(e). From this
we can construct a residual network, denoted Gf (V, Ef ), which models the amount of
available capacity on the set of arcs in G = (V, E). More formally, given a flow network
G, the residual network Gf has the node set V, arc set Ef = {e ∈V × V : cf (e) > 0} and
capacity function cf .
This concept is used in Ford–Fulkerson algorithm7 which computes the maximum flow8 in
a flow network.
Note that there can be a path from u to v in the residual network, even though there is
no path from u to v in the original network. Since flows in opposite directions cancel out,
decreasing the flow from v to u is the same as increasing the flow from u to v.

80.4.2 Augmenting paths

An augmenting path is a path (u1 , u2 , ..., uk ) in the residual network, where u1 = s,
uk = t, and cf (ui , ui+1 ) > 0. A network is at maximum flow9 if and only if there is no
augmenting path in the residual network Gf .
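An augmenting path can be found by breadth-first search over arcs with positive residual capacity. The sketch below assumes a nested-dictionary representation (our own, illustrative) in which `cap[u]` lists every neighbour of u, including zero-capacity reverse arcs as allowed by the Definition section.

```python
from collections import deque

def augmenting_path(cap, flow, s, t):
    """BFS in the residual network: follow arcs with cap[u][v] - flow[u][v] > 0."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in cap[u]:
            if v not in parent and cap[u][v] - flow[u].get(v, 0) > 0:
                parent[v] = u
                if v == t:  # reconstruct the path s -> ... -> t
                    path, node = [], t
                    while node is not None:
                        path.append(node)
                        node = parent[node]
                    return path[::-1]
                queue.append(v)
    return None  # no augmenting path: the flow is maximum

cap  = {'s': {'a': 2}, 'a': {'t': 1}, 't': {}}
zero = {'s': {'a': 0}, 'a': {'t': 0}, 't': {}}
print(augmenting_path(cap, zero, 's', 't'))  # ['s', 'a', 't']
full = {'s': {'a': 1}, 'a': {'t': 1}, 't': {}}
print(augmenting_path(cap, full, 's', 't'))  # None: the arc a -> t is saturated
```

Repeatedly augmenting along the shortest such path is exactly the Edmonds–Karp specialization of Ford–Fulkerson.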

80.4.3 Multiple sources and/or sinks

Sometimes, when modeling a network with more than one source, a supersource is intro-
duced to the graph.[2] This consists of a vertex connected to each of the sources with edges
of infinite capacity, so as to act as a global source. A similar construct for sinks is called a
supersink.[3]
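The supersource/supersink construction can be sketched directly; the node labels 'S*' and 'T*' below are arbitrary placeholders of our own choosing.

```python
import math

def add_super_terminals(cap, sources, sinks):
    """Return a copy of `cap` (nested adjacency dicts) with a supersource 'S*'
    and a supersink 'T*' attached by infinite-capacity arcs."""
    cap = {u: dict(vs) for u, vs in cap.items()}   # copy the adjacency dicts
    cap['S*'] = {s: math.inf for s in sources}     # S* -> every source
    cap['T*'] = {}
    for t in sinks:
        cap.setdefault(t, {})['T*'] = math.inf     # every sink -> T*
    return cap

# Two sources a, b feeding one sink x:
cap = {'a': {'x': 1}, 'b': {'x': 2}, 'x': {}}
g = add_super_terminals(cap, ['a', 'b'], ['x'])
print(g['S*'])       # {'a': inf, 'b': inf}
print(g['x']['T*'])  # inf
```

A maximum flow from 'S*' to 'T*' in the augmented network equals the maximum combined flow from all original sources to all original sinks.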

7 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
8 https://en.wikipedia.org/wiki/Maximum_flow
9 https://en.wikipedia.org/wiki/Maximum_flow


80.5 Example

Figure 202 A flow network showing flow and capacity

To the left you see a flow network with source labeled s, sink t, and four additional nodes.
The flow and capacity is denoted f /c. Notice how the network upholds skew symmetry,
capacity constraints and flow conservation. The total amount of flow from s to t is 5, which
can be easily seen from the fact that the total outgoing flow from s is 5, which is also the
incoming flow to t. We know that no flow appears or disappears in any of the other nodes.

Figure 203 Residual network for the above flow network, showing residual capacities.


Below you see the residual network for the given flow. Notice how there is positive
residual capacity on some edges where the original capacity is zero, for example for
the edge (d, c). This flow is not a maximum flow10 . There is available capacity along
the paths (s, a, c, t), (s, a, b, d, t) and (s, a, b, d, c, t), which are then the augmenting paths.
The residual capacity of the first path is min(c(s, a) − f (s, a), c(a, c) − f (a, c), c(c, t) − f (c, t))
= min(5 − 3, 3 − 2, 2 − 1) = min(2, 1, 1) = 1. Notice that as long as there exists
some path with a positive residual capacity, the flow will not be maximum. The residual
capacity for some path is the minimum residual capacity of all edges in that path.
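The bottleneck computation above can be expressed directly; the sketch below (illustrative names, nested-dict capacities) reproduces the min(2, 1, 1) = 1 calculation for the path (s, a, c, t).

```python
def path_residual_capacity(cap, flow, path):
    """Residual capacity of a path: the minimum c(e) - f(e) over its arcs."""
    return min(cap[u][v] - flow[u][v] for u, v in zip(path, path[1:]))

# Capacities and flows along the path (s, a, c, t) from the example:
cap  = {'s': {'a': 5}, 'a': {'c': 3}, 'c': {'t': 2}}
flow = {'s': {'a': 3}, 'a': {'c': 2}, 'c': {'t': 1}}
print(path_residual_capacity(cap, flow, ['s', 'a', 'c', 't']))  # min(2, 1, 1) = 1
```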

80.6 Applications

See also: Pipe network analysis12
Picture a series of water pipes, fitting into a network.
Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of
water. Anywhere that pipes meet, the total amount of water coming into that junction
must be equal to the amount going out, otherwise we would quickly run out of water, or we
would have a buildup of water. We have a water inlet, which is the source, and an outlet,
the sink. A flow would then be one possible way for water to get from source to sink so
that the total amount of water coming out of the outlet is consistent. Intuitively, the total
flow of a network is the rate at which water comes out of the outlet.
Flows can pertain to people or material over transportation networks, or to electricity
over electrical distribution13 systems. For any such physical network, the flow coming into
any intermediate node needs to equal the flow going out of that node. This conservation
constraint is equivalent to Kirchhoff's current law14 .
Flow networks also find applications in ecology15 : flow networks arise naturally when con-
sidering the flow of nutrients and energy between different organisms in a food web16 . The
mathematical problems associated with such networks are quite different from those that
arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed
by Robert Ulanowicz17 and others, involves using concepts from information theory18 and
thermodynamics19 to study the evolution of these networks over time.

80.7 Classifying flow problems

The simplest and most common problem using flow networks is to find what is called the
maximum flow20 , which provides the largest possible total flow from the source to the

10 https://en.wikipedia.org/wiki/Max_flow
12 https://en.wikipedia.org/wiki/Pipe_network_analysis
13 https://en.wikipedia.org/wiki/Electrical_distribution
14 https://en.wikipedia.org/wiki/Kirchhoff%27s_current_law
15 https://en.wikipedia.org/wiki/Ecology
16 https://en.wikipedia.org/wiki/Food_web
17 https://en.wikipedia.org/wiki/Robert_Ulanowicz
18 https://en.wikipedia.org/wiki/Information_theory
19 https://en.wikipedia.org/wiki/Thermodynamics
20 https://en.wikipedia.org/wiki/Maximum_flow_problem


sink in a given graph. There are many other problems which can be solved using max flow
algorithms, if they are appropriately modeled as flow networks, such as bipartite matching21 ,
the assignment problem22 and the transportation problem23 . Maximum flow problems can
be solved efficiently with the relabel-to-front algorithm24 . The max-flow min-cut theorem25
states that finding a maximal network flow is equivalent to finding a cut26 of minimum
capacity that separates the source and the sink, where a cut is the division of vertices such
that the source is in one division and the sink is in another.

Well-known algorithms for the Maximum Flow Problem

Inventor(s)                                                Year  Time complexity (with n nodes and m arcs)
Edmonds–Karp algorithm27                                   1972  O(m2 n)
MPM (Malhotra, Pramodh-Kumar and Maheshwari) algorithm[4]  1978  O(n3 )
James B. Orlin28[5]                                        2013  O(mn)

In a multi-commodity flow problem29 , you have multiple sources and sinks, and various
”commodities” which are to flow from a given source to a given sink. This could be for
example various goods that are produced at various factories, and are to be delivered to
various given customers through the same transportation network.
In a minimum cost flow problem30 , each edge u, v has a given cost k(u, v), and the cost of
sending the flow f (u, v) across the edge is f (u, v) · k(u, v). The objective is to send a given
amount of flow from the source to the sink, at the lowest possible price.
In a circulation problem31 , you have a lower bound ℓ(u, v) on the edges, in addition to the
upper bound c(u, v). Each edge also has a cost. Often, flow conservation holds for all nodes
in a circulation problem, and there is a connection from the sink back to the source. In this
way, you can dictate the total flow with ℓ(t, s) and c(t, s). The flow circulates through the
network, hence the name of the problem.
In a network with gains or generalized network each edge has a gain32 , a real number
(not zero) such that, if the edge has gain g, and an amount x flows into the edge at its tail,
then an amount gx flows out at the head.

21 https://en.wikipedia.org/wiki/Bipartite_matching
22 https://en.wikipedia.org/wiki/Assignment_problem
23 https://en.wikipedia.org/wiki/Transportation_problem
24 https://en.wikipedia.org/wiki/Relabel-to-front_algorithm
25 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
26 https://en.wikipedia.org/wiki/Cut_(graph_theory)
27 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
28 https://en.wikipedia.org/wiki/James_B._Orlin
29 https://en.wikipedia.org/wiki/Multi-commodity_flow_problem
30 https://en.wikipedia.org/wiki/Minimum_cost_flow_problem
31 https://en.wikipedia.org/wiki/Circulation_problem
32 https://en.wikipedia.org/wiki/Gain_graph


In a source localization problem, an algorithm tries to identify the most likely source
node of information diffusion through a partially observed network. This can be done in
linear time for trees and cubic time for arbitrary networks and has applications ranging from
tracking mobile phone users to identifying the originating village of disease outbreaks.[6]

80.8 See also


• Braess' paradox33
• Centrality34
• Ford–Fulkerson algorithm35
• Dinic's algorithm36
• Flow (computer networking)37
• Flow graph (disambiguation)38
• Max-flow min-cut theorem39
• Oriented matroid40
• Shortest path problem41
• Nowhere-zero flow42

80.9 References
1. A.V. Goldberg, É. Tardos and R.E. Tarjan, Network flow algorithms, Tech. Report
STAN-CS-89-1252, Stanford University CS Dept., 1989
2. This article incorporates public domain material43 from the NIST44 document:
Black, Paul E. ”Supersource”45 . Dictionary of Algorithms and Data Structures46 .
3. This article incorporates public domain material47 from the NIST48 document:
Black, Paul E. ”Supersink”49 . Dictionary of Algorithms and Data Structures50 .

33 https://en.wikipedia.org/wiki/Braess%27_paradox
34 https://en.wikipedia.org/wiki/Centrality
35 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
36 https://en.wikipedia.org/wiki/Dinic%27s_algorithm
37 https://en.wikipedia.org/wiki/Flow_(computer_networking)
38 https://en.wikipedia.org/wiki/Flow_graph_(disambiguation)
39 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
40 https://en.wikipedia.org/wiki/Oriented_matroid
41 https://en.wikipedia.org/wiki/Shortest_path_problem
42 https://en.wikipedia.org/wiki/Nowhere-zero_flow
43 https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States
44 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
45 https://xlinux.nist.gov/dads/HTML/supersource.html
46 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
47 https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States
48 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
49 https://xlinux.nist.gov/dads/HTML/supersink.html
50 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures


4. M, V.M.; K, M.P; M, S.N. (1978). ”A O(|V |3 )


      ”51 (PDF). Information
Processing Letters. 7 (6): 277–278. doi52 :10.1016/0020-0190(78)90016-953 .
5. O, J. B. (2013). ”M   O() ,  ”54 (PDF). Pro-
ceedings of the 2013 Symposium on the Theory of Computing: 765–774. Archived
at
6. 55

80.10 Further reading


• G T. H; G P; S S (2008). ”C 8:N-
 F A”. Algorithms in a Nutshell. Oreilly Media56 . pp. 226–250.
ISBN57 978-0-596-51624-658 .
• R K. A59 , T L. M60 ,  J B. O61 (1993). Net-
work Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN62 0-13-617549-
X63 .CS1 maint: multiple names: authors list (link64 )
• B, B65 (1979). Graph Theory: An Introductory Course. Heidelberg:
Springer-Verlag. ISBN66 3-540-90399-267 .
• C, G68 & O, O R.69 (1993). Applied and Algorithmic
Graph Theory. New York: McGraw-Hill. ISBN70 0-07-557101-371 .CS1 maint: multiple
names: authors list (link72 )
• E, S (1979). Graph Algorithms73 . R, M: C
S P. ISBN74 0-914894-21-875 .

51 https://eprints.utas.edu.au/160/1/iplFlow.pdf
52 https://en.wikipedia.org/wiki/Doi_(identifier)
53 https://doi.org/10.1016%2F0020-0190%2878%2990016-9
54 http://jorlin.scripts.mit.edu/docs/publications/O(nm)MaxFlow.pdf
55 http://www.pedropinto.org.s3.amazonaws.com/publications/locating_source_diffusion_networks.pdf
56 https://en.wikipedia.org/wiki/Oreilly_Media
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/978-0-596-51624-6
59 https://en.wikipedia.org/wiki/Ravindra_K._Ahuja
60 https://en.wikipedia.org/wiki/Thomas_L._Magnanti
61 https://en.wikipedia.org/wiki/James_B._Orlin
62 https://en.wikipedia.org/wiki/ISBN_(identifier)
63 https://en.wikipedia.org/wiki/Special:BookSources/0-13-617549-X
64 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
65 https://en.wikipedia.org/wiki/B%C3%A9la_Bollob%C3%A1s
66 https://en.wikipedia.org/wiki/ISBN_(identifier)
67 https://en.wikipedia.org/wiki/Special:BookSources/3-540-90399-2
68 https://en.wikipedia.org/wiki/Gary_Theodore_Chartrand
69 https://en.wikipedia.org/wiki/Ortrud_Oellermann
70 https://en.wikipedia.org/wiki/ISBN_(identifier)
71 https://en.wikipedia.org/wiki/Special:BookSources/0-07-557101-3
72 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
73 https://archive.org/details/graphalgorithms0000even
74 https://en.wikipedia.org/wiki/ISBN_(identifier)
75 https://en.wikipedia.org/wiki/Special:BookSources/0-914894-21-8


• G, A (1985). Algorithmic Graph Theory. Cambridge: Cambridge University


Press. ISBN76 0-521-28881-977 .
• T H. C78 , C E. L79 , R L. R80 ,  C-
 S81 (2001) [1990]. ”26”. Introduction to Algorithms82 (2 .). MIT P
 MG-H. . 696–697. ISBN83 0-262-03293-784 .CS1 maint: multiple names:
authors list (link85 )

80.11 External links


• Maximum Flow Problem86
• Real graph instances87
• Lemon C++ library with several maximum flow and minimum cost circulation algo-
rithms88
• QuickGraph89 , graph data structures and algorithms for .Net

76 https://en.wikipedia.org/wiki/ISBN_(identifier)
77 https://en.wikipedia.org/wiki/Special:BookSources/0-521-28881-9
78 https://en.wikipedia.org/wiki/Thomas_H._Cormen
79 https://en.wikipedia.org/wiki/Charles_E._Leiserson
80 https://en.wikipedia.org/wiki/Ronald_L._Rivest
81 https://en.wikipedia.org/wiki/Clifford_Stein
82 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
83 https://en.wikipedia.org/wiki/ISBN_(identifier)
84 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
85 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
86 https://web.archive.org/web/20080111234829/http://www-b2.is.tokushima-u.ac.jp/~ikeda/suuri/maxflow/Maxflow.shtml
87 http://www.dis.uniroma1.it/~challenge9/download.shtml
88 http://lemon.cs.elte.hu/
89 http://quickgraph.codeplex.com/

81 Floyd–Warshall algorithm

”Floyd's algorithm” redirects here. For cycle detection, see Floyd's cycle-finding algorithm1 .
For computer graphics, see Floyd–Steinberg dithering2 .

Floyd–Warshall algorithm
Class                         All-pairs shortest path problem (for weighted graphs)
Data structure                Graph
Worst-case performance        Θ(|V |3 )
Best-case performance         Θ(|V |3 )
Average performance           Θ(|V |3 )
Worst-case space complexity   Θ(|V |2 )

1 https://en.wikipedia.org/wiki/Floyd%27s_cycle-finding_algorithm
2 https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dithering


Graph and tree


search algorithms

• α–β
• A*
• B*
• Backtracking
• Beam
• Bellman–Ford
• Best-first
• Bidirectional
• Borůvka
• Branch & bound
• BFS
• British Museum
• D*
• DFS
• Dijkstra
• Edmonds
• Floyd–Warshall
• Fringe search
• Hill climbing
• IDA*
• Iterative deepening
• Johnson
• Jump point
• Kruskal
• Lexicographic BFS
• LPA*
• Prim
• SMA*
• SPFA

Listings

• Graph algorithms
• Search algorithms
• List of graph algorithms

Related topics

• Dynamic programming
• Graph traversal
• Tree traversal
• Search games


In computer science3 , the Floyd–Warshall algorithm (also known as Floyd's algorithm,
the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is
an algorithm4 for finding shortest paths5 in a weighted graph6 with positive or
negative edge weights (but with no negative cycles).[1][2] A single execution of the algorithm
will find the lengths (summed weights) of shortest paths between all pairs of vertices. Al-
though it does not return details of the paths themselves, it is possible to reconstruct the
paths with simple modifications to the algorithm. Versions of the algorithm can also be
used for finding the transitive closure7 of a relation R, or (in connection with the Schulze
voting system8 ) widest paths9 between all pairs of vertices in a weighted graph.

81.1 History and naming

The Floyd–Warshall algorithm is an example of dynamic programming10 , and was published
in its currently recognized form by Robert Floyd11 in 1962.[3] However, it is essentially the
same as algorithms previously published by Bernard Roy12 in 1959[4] and also by Stephen
Warshall13 in 1962[5] for finding the transitive closure of a graph,[6] and is closely related to
Kleene's algorithm14 (published in 1956) for converting a deterministic finite automaton15
into a regular expression16 .[7] The modern formulation of the algorithm as three nested
for-loops was first described by Peter Ingerman, also in 1962.[8]

81.2 Algorithm

The Floyd–Warshall algorithm compares all possible paths through the graph between each
pair of vertices. It is able to do this with Θ(|V |3 ) comparisons in a graph, even though
there may be up to Ω(|V |2 ) edges in the graph, and every combination of edges is tested. It
does so by incrementally improving an estimate on the shortest path between two vertices,
until the estimate is optimal.
Consider a graph G with vertices V numbered 1 through N . Further consider a function
shortestPath(i, j, k) that returns the shortest possible path from i to j using vertices only
from the set {1, 2, . . . , k} as intermediate points along the way. Now, given this function,
our goal is to find the shortest path from each i to each j using any vertex in {1, 2, . . . , N }.

3 https://en.wikipedia.org/wiki/Computer_science
4 https://en.wikipedia.org/wiki/Algorithm
5 https://en.wikipedia.org/wiki/Shortest_path_problem
6 https://en.wikipedia.org/wiki/Weighted_graph
7 https://en.wikipedia.org/wiki/Transitive_closure
8 https://en.wikipedia.org/wiki/Schulze_method
9 https://en.wikipedia.org/wiki/Widest_path_problem
10 https://en.wikipedia.org/wiki/Dynamic_programming
11 https://en.wikipedia.org/wiki/Robert_Floyd
12 https://en.wikipedia.org/wiki/Bernard_Roy
13 https://en.wikipedia.org/wiki/Stephen_Warshall
14 https://en.wikipedia.org/wiki/Kleene%27s_algorithm
15 https://en.wikipedia.org/wiki/Deterministic_finite_automaton
16 https://en.wikipedia.org/wiki/Regular_expression


For each of these pairs of vertices, the shortestPath(i, j, k) could be either


(1) a path that does not go through k (only uses vertices in the set {1, . . . , k − 1}), or
(2) a path that does go through k (from i to k and then from k to j, both only using
intermediate vertices in {1, . . . , k − 1})
We know that the best path from i to j that only uses vertices 1 through k − 1 is defined
by shortestPath(i, j, k − 1), and it is clear that if there was a better path from i to k to j,
then the length of this path would be the concatenation of the shortest path from i to k
(only using intermediate vertices in {1, . . . , k − 1}) and the shortest path from k to j (only
using intermediate vertices in {1, . . . , k − 1}).
If w(i, j) is the weight of the edge between vertices i and j, we can define shortestPath(i, j, k)
in terms of the following recursive17 formula: the base case is
shortestPath(i, j, 0) = w(i, j)
and the recursive case is
shortestPath(i, j, k) = min(shortestPath(i, j, k − 1),
                            shortestPath(i, k, k − 1) + shortestPath(k, j, k − 1)).

This formula is the heart of the Floyd–Warshall algorithm. The algorithm works by first
computing shortestPath(i, j, k) for all (i, j) pairs for k = 1, then k = 2, and so on. This
process continues until k = N , and we have found the shortest path for all (i, j) pairs using
any intermediate vertices. Pseudocode for this basic version follows:
let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
for each edge (u, v) do
    dist[u][v] ← w(u, v)  // The weight of the edge (u, v)
for each vertex v do
    dist[v][v] ← 0
for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if dist[i][j] > dist[i][k] + dist[k][j]
                dist[i][j] ← dist[i][k] + dist[k][j]
            end if
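This pseudocode translates directly into Python. The sketch below (the function name and the 1-indexed list layout are our own choices) reproduces the final distance matrix of the example graph worked through in the next section.

```python
INF = float('inf')

def floyd_warshall(n, edges):
    """All-pairs shortest paths; vertices are 1..n, edges are (u, v, w) triples."""
    dist = [[INF] * (n + 1) for _ in range(n + 1)]
    for v in range(1, n + 1):
        dist[v][v] = 0
    for u, v, w in edges:
        dist[u][v] = w
    for k in range(1, n + 1):          # allow vertex k as an intermediate point
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# The example graph of the next section:
edges = [(1, 3, -2), (2, 1, 4), (2, 3, 3), (3, 4, 2), (4, 2, -1)]
dist = floyd_warshall(4, edges)
print(dist[1][2])  # -1, via the path 1 -> 3 -> 4 -> 2
```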

81.3 Example

The algorithm above is executed on the graph on the left below:

17 https://en.wikipedia.org/wiki/Recursion


Figure 204

Prior to the first iteration of the outer loop, labeled k = 0 above, the only known paths
correspond to the single edges in the graph. At k = 1, paths that go through the vertex 1
are found: in particular, the path [2,1,3] is found, replacing the path [2,3] which has fewer
edges but is longer (in terms of weight). At k = 2, paths going through the vertices {1,2}
are found. The red and blue boxes show how the path [4,2,1,3] is assembled from the two
known paths [4,2] and [2,1,3] encountered in previous iterations, with 2 in the intersection.
The path [4,2,3] is not considered, because [2,1,3] is the shortest path encountered so far
from 2 to 3. At k = 3, paths going through the vertices {1,2,3} are found. Finally, at k =
4, all shortest paths are found.
The distance matrix at each iteration of k, with the updated distances in bold, will be:

k = 0      j
       1   2   3   4
   1   0   ∞  −2   ∞
i  2   4   0   3   ∞
   3   ∞   ∞   0   2
   4   ∞  −1   ∞   0

k = 1      j
       1   2   3   4
   1   0   ∞  −2   ∞
i  2   4   0   2   ∞
   3   ∞   ∞   0   2
   4   ∞  −1   ∞   0

k = 2      j
       1   2   3   4
   1   0   ∞  −2   ∞
i  2   4   0   2   ∞
   3   ∞   ∞   0   2
   4   3  −1   1   0

k = 3      j
       1   2   3   4
   1   0   ∞  −2   0
i  2   4   0   2   4
   3   ∞   ∞   0   2
   4   3  −1   1   0

k = 4      j
       1   2   3   4
   1   0  −1  −2   0
i  2   4   0   2   4
   3   5   1   0   2
   4   3  −1   1   0

81.4 Behavior with negative cycles

A negative cycle is a cycle whose edges sum to a negative value. There is no shortest path
between any pair of vertices i, j which form part of a negative cycle, because path-lengths
from i to j can be arbitrarily small (negative). For numerically meaningful output, the
Floyd–Warshall algorithm assumes that there are no negative cycles. Nevertheless, if there
are negative cycles, the Floyd–Warshall algorithm can be used to detect them. The intuition
is as follows:
• The Floyd–Warshall algorithm iteratively revises path lengths between all pairs of vertices
(i, j), including where i = j;
• Initially, the length of the path (i, i) is zero;
• A path [i, k, . . . , i] can only improve upon this if it has length less than zero, i.e. denotes
a negative cycle;
• Thus, after the algorithm, (i, i) will be negative if there exists a negative-length path from
i back to i.
Hence, to detect negative cycles19 using the Floyd–Warshall algorithm, one can inspect
the diagonal of the path matrix, and the presence of a negative number indicates that the
graph contains at least one negative cycle.[9] To avoid numerical problems one should check
for negative numbers on the diagonal of the path matrix within the inner for loop of the
algorithm.[10] Obviously, in an undirected graph a negative edge creates a negative cycle
(i.e., a closed walk) involving its incident vertices. Considering all edges of the above20
example graph as undirected, e.g. the vertex sequence 4 – 2 – 4 is a cycle with weight sum
−2.
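A sketch of this detection, reusing the same triple loop and then inspecting the diagonal (function name and edge-list format are our own choices):

```python
INF = float('inf')

def has_negative_cycle(n, edges):
    """Run Floyd-Warshall and report any negative entry on the diagonal."""
    dist = [[INF] * (n + 1) for _ in range(n + 1)]
    for v in range(1, n + 1):
        dist[v][v] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # dist[v][v] < 0 means some path from v back to v has negative total weight
    return any(dist[v][v] < 0 for v in range(1, n + 1))

print(has_negative_cycle(2, [(1, 2, 1), (2, 1, -3)]))  # True: cycle 1 -> 2 -> 1 weighs -2
print(has_negative_cycle(2, [(1, 2, 1), (2, 1, 3)]))   # False
```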

81.5 Path reconstruction

The Floyd–Warshall algorithm typically only provides the lengths of the paths between all
pairs of vertices. With simple modifications, it is possible to create a method to reconstruct
the actual path between any two endpoint vertices. While one may be inclined to store
the actual path from each vertex to each other vertex, this is not necessary, and in fact, is
very costly in terms of memory. Instead, the shortest-path tree21 can be calculated for each

19 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
20 #Example
21 https://en.wikipedia.org/wiki/Shortest-path_tree


node in Θ(|E|) time using Θ(|V |) memory to store each tree which allows us to efficiently
reconstruct a path from any two connected vertices.

Pseudocode [11]

let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
let next be a |V| × |V| array of vertex indices initialized to null

procedure FloydWarshallWithPathReconstruction() is
    for each edge (u, v) do
        dist[u][v] ← w(u, v)  // The weight of the edge (u, v)
        next[u][v] ← v
    for each vertex v do
        dist[v][v] ← 0
        next[v][v] ← v
    for k from 1 to |V| do  // standard Floyd–Warshall implementation
        for i from 1 to |V|
            for j from 1 to |V|
                if dist[i][j] > dist[i][k] + dist[k][j] then
                    dist[i][j] ← dist[i][k] + dist[k][j]
                    next[i][j] ← next[i][k]

procedure Path(u, v)
    if next[u][v] = null then
        return []
    path ← [u]
    while u ≠ v
        u ← next[u][v]
        path.append(u)
    return path
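A Python transcription of the two procedures above (identifiers are our own; `next_` avoids shadowing Python's built-in `next`), run on the example graph from the Example section:

```python
INF = float('inf')

def floyd_warshall_with_path(n, edges):
    """Returns (dist, next_) per the pseudocode above; vertices are 1..n."""
    dist = [[INF] * (n + 1) for _ in range(n + 1)]
    next_ = [[None] * (n + 1) for _ in range(n + 1)]
    for u, v, w in edges:
        dist[u][v] = w
        next_[u][v] = v
    for v in range(1, n + 1):
        dist[v][v] = 0
        next_[v][v] = v
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    next_[i][j] = next_[i][k]  # first hop of the improved path
    return dist, next_

def path(next_, u, v):
    """Walk the next_ pointers from u to v."""
    if next_[u][v] is None:
        return []
    result = [u]
    while u != v:
        u = next_[u][v]
        result.append(u)
    return result

edges = [(1, 3, -2), (2, 1, 4), (2, 3, 3), (3, 4, 2), (4, 2, -1)]
dist, next_ = floyd_warshall_with_path(4, edges)
print(path(next_, 4, 3))  # [4, 2, 1, 3]
```

Storing only the first hop per pair keeps the extra memory at Θ(|V|²) while still allowing any path to be rebuilt in time proportional to its length.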

81.6 Analysis

Let n be |V |, the number of vertices. To find all n2 values of shortestPath(i, j, k) (for all
i and j) from those of shortestPath(i, j, k − 1) requires 2n2 operations. Since we begin
with shortestPath(i, j, 0) = edgeCost(i, j) and compute the sequence of n matrices
shortestPath(i, j, 1), shortestPath(i, j, 2), . . ., shortestPath(i, j, n), the total number of
operations used is n · 2n2 = 2n3 . Therefore, the complexity22 of the algorithm is Θ(n3 )23 .

22 https://en.wikipedia.org/wiki/Computational_complexity_theory
23 https://en.wikipedia.org/wiki/Big_theta


81.7 Applications and generalizations

The Floyd–Warshall algorithm can be used to solve the following problems, among others:
• Shortest paths in directed graphs (Floyd's algorithm).
• Transitive closure24 of directed graphs (Warshall's algorithm). In Warshall's original
formulation of the algorithm, the graph is unweighted and represented by a Boolean
adjacency matrix. Then the addition operation is replaced by logical conjunction25 (AND)
and the minimum operation by logical disjunction26 (OR).
• Finding a regular expression27 denoting the regular language28 accepted by a finite au-
tomaton29 (Kleene's algorithm30 , a closely related generalization of the Floyd–Warshall
algorithm)[12]
• Inversion31 of real32 matrices33 (Gauss–Jordan algorithm34 ) [13]
• Optimal routing. In this application one is interested in finding the path with the max-
imum flow between two vertices. This means that, rather than taking minima as in the
pseudocode above, one instead takes maxima. The edge weights represent fixed con-
straints on flow. Path weights represent bottlenecks; so the addition operation above is
replaced by the minimum operation.
• Fast computation of Pathfinder networks35 .
• Widest paths/Maximum bandwidth paths36
• Computing canonical form of difference bound matrices (DBMs)
• Computing the similarity between graphs
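The transitive-closure generalization listed first above can be sketched directly: replacing the addition by AND and the minimum by OR turns the triple loop into Warshall's algorithm. The Boolean adjacency matrix in the test is a made-up example:

```python
def transitive_closure(adj):
    """Warshall's algorithm on a Boolean adjacency matrix.

    adj: n x n list of lists of bools; adj[i][j] is True iff there is
    an edge from i to j.  Returns reach, where reach[i][j] is True iff
    j is reachable from i by a path of one or more edges.
    """
    n = len(adj)
    reach = [row[:] for row in adj]      # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # min -> logical OR, addition -> logical AND
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach
```

For instance, with edges 0→1 and 1→2, the closure reports that 2 is reachable from 0 but not vice versa.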

81.8 Implementations

Implementations are available for many programming languages37 .


• For C++38 , in the boost::graph39 library
• For C#40 , at QuickGraph41

24 https://en.wikipedia.org/wiki/Transitive_closure
25 https://en.wikipedia.org/wiki/Logical_conjunction
26 https://en.wikipedia.org/wiki/Logical_disjunction
27 https://en.wikipedia.org/wiki/Regular_expression
28 https://en.wikipedia.org/wiki/Regular_language
29 https://en.wikipedia.org/wiki/Finite_automaton
30 https://en.wikipedia.org/wiki/Kleene%27s_algorithm
31 https://en.wikipedia.org/wiki/Invertible_matrix
32 https://en.wikipedia.org/wiki/Real_number
33 https://en.wikipedia.org/wiki/Matrix_(mathematics)
34 https://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination
35 https://en.wikipedia.org/wiki/Pathfinder_network
36 https://en.wikipedia.org/wiki/Widest_path_problem
37 https://en.wikipedia.org/wiki/Programming_language
38 https://en.wikipedia.org/wiki/C%2B%2B
39 http://www.boost.org/libs/graph/doc/
40 https://en.wikipedia.org/wiki/C_Sharp_(programming_language)
41 http://www.codeplex.com/quickgraph


• For C#42 , at QuickGraphPCL43 (a fork of QuickGraph with better compatibility with projects using Portable Class Libraries.)
• For Java44 , in the Apache Commons Graph45 library
• For JavaScript46 , in the Cytoscape47 library
• For MATLAB48 , in the Matlab_bgl49 package
• For Perl50 , in the Graph51 module
• For Python52 , in the SciPy53 library (module scipy.sparse.csgraph54 ) or NetworkX55 li-
brary
• For R56 , in packages e107157 and Rfast58

81.9 Comparison with other shortest path algorithms

The Floyd–Warshall algorithm is a good choice for computing paths between all pairs of
vertices in dense graphs59 , in which most or all pairs of vertices are connected by edges.
For sparse graphs60 with non-negative edge weights, a better choice is to use Dijkstra's
algorithm61 from each possible starting vertex, since the running time of repeated Dijkstra
(O(|E||V| + |V|² log |V|) using Fibonacci heaps62 ) is better than the O(|V|³) running time
of the Floyd–Warshall algorithm when |E| is significantly smaller than |V|². For sparse
graphs with negative edges but no negative cycles, Johnson's algorithm63 can be used, with
the same asymptotic running time as the repeated Dijkstra approach.
There are also known algorithms using fast matrix multiplication64 to speed up all-pairs
shortest path computation in dense graphs, but these typically make extra assumptions on
the edge weights (such as requiring them to be small integers).[14][15] In addition, because
of the high constant factors in their running time, they would only provide a speedup over
the Floyd–Warshall algorithm for very large graphs.

42 https://en.wikipedia.org/wiki/C_Sharp_(programming_language)
43 https://www.nuget.org/packages/QuickGraphPCL/3.6.61114.2
44 https://en.wikipedia.org/wiki/Java_(programming_language)
45 http://commons.apache.org/sandbox/commons-graph/
46 https://en.wikipedia.org/wiki/JavaScript
47 https://en.wikipedia.org/wiki/Cytoscape
48 https://en.wikipedia.org/wiki/MATLAB
49 http://www.mathworks.com/matlabcentral/fileexchange/10922
50 https://en.wikipedia.org/wiki/Perl
51 https://metacpan.org/module/Graph
52 https://en.wikipedia.org/wiki/Python_(programming_language)
53 https://en.wikipedia.org/wiki/SciPy
54 http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csgraph.floyd_warshall.html#scipy.sparse.csgraph.floyd_warshall
55 https://en.wikipedia.org/wiki/NetworkX
56 https://en.wikipedia.org/wiki/R_programming_language
57 https://cran.r-project.org/web/packages/e1071/index.html
58 https://cran.r-project.org/web/packages/Rfast/index.html
59 https://en.wikipedia.org/wiki/Dense_graph
60 https://en.wikipedia.org/wiki/Sparse_graph
61 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
62 https://en.wikipedia.org/wiki/Fibonacci_heap
63 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
64 https://en.wikipedia.org/wiki/Fast_matrix_multiplication


81.10 References
1. C, T H.65 ; L, C E.66 ; R, R L.67 (1990).
Introduction to Algorithms68 (1 .). MIT P  MG-H. ISBN69 0-
262-03141-870 . See in particular Section 26.2, ”The Floyd–Warshall algorithm”,
pp. 558–565 and Section 26.4, ”A general framework for solving path problems in
directed graphs”, pp. 570–576.
2. K H. R (2003). Discrete Mathematics and Its Applications, 5th Edi-
tion. Addison Wesley. ISBN71 978-0-07-119881-372 .
3. F, R W.73 (J 1962). ”A 97: S P”. Commu-
nications of the ACM74 . 5 (6): 345. doi75 :10.1145/367766.36816876 .
4. R, B77 (1959). ”T  ”78 . C. R. Acad. Sci.
Paris79 ( F). 249: 216–218.
5. W, S (J 1962). ”A   B ”.
Journal of the ACM80 . 9 (1): 11–12. doi81 :10.1145/321105.32110782 .
6. W, E W.83 ”F-W A”84 . MathWorld85 .
7. K, S. C.86 (1956). ”R       
”. I C. E. S87  J. MC88 (.). Automata Studies.
Princeton University Press. pp. 3–42.
8. I, P Z. (N 1962). ”A 141: P M”.
Communications of the ACM89 . 5 (11): 556. doi90 :10.1145/368996.36901691 .

65 https://en.wikipedia.org/wiki/Thomas_H._Cormen
66 https://en.wikipedia.org/wiki/Charles_E._Leiserson
67 https://en.wikipedia.org/wiki/Ron_Rivest
68 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
69 https://en.wikipedia.org/wiki/ISBN_(identifier)
70 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03141-8
71 https://en.wikipedia.org/wiki/ISBN_(identifier)
72 https://en.wikipedia.org/wiki/Special:BookSources/978-0-07-119881-3
73 https://en.wikipedia.org/wiki/Robert_W._Floyd
74 https://en.wikipedia.org/wiki/Communications_of_the_ACM
75 https://en.wikipedia.org/wiki/Doi_(identifier)
76 https://doi.org/10.1145%2F367766.368168
77 https://en.wikipedia.org/wiki/Bernard_Roy
78 https://gallica.bnf.fr/ark:/12148/bpt6k3201c/f222.image
79 https://en.wikipedia.org/wiki/C._R._Acad._Sci._Paris
80 https://en.wikipedia.org/wiki/Journal_of_the_ACM
81 https://en.wikipedia.org/wiki/Doi_(identifier)
82 https://doi.org/10.1145%2F321105.321107
83 https://en.wikipedia.org/wiki/Eric_W._Weisstein
84 https://mathworld.wolfram.com/Floyd-WarshallAlgorithm.html
85 https://en.wikipedia.org/wiki/MathWorld
86 https://en.wikipedia.org/wiki/Stephen_Cole_Kleene
87 https://en.wikipedia.org/wiki/Claude_Elwood_Shannon
88 https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)
89 https://en.wikipedia.org/wiki/Communications_of_the_ACM
90 https://en.wikipedia.org/wiki/Doi_(identifier)
91 https://doi.org/10.1145%2F368996.369016


9. H, D92 (2014). ”S 8.9: F-W  


   ”93 (PDF94 ). Lecture Notes for IEOR 266: Graph Algo-
rithms and Network Flows. Department of Industrial Engineering and Operations
Research, University of California, Berkeley95 .
10. S H (A 2010). ”T F–W  
   ”. Information Processing Letters. 110 (8–9): 279–
281. doi96 :10.1016/j.ipl.2010.02.00197 .
11. 98

12. G, J L.; Y, J (2003), Handbook of Graph The-
ory99 , D M  I A, CRC P, . 65,
ISBN100 9780203490204101 .
13. P, R. ”A S  T C”.
CSX102 10.1.1.71.7650103 . Cite journal requires |journal= (help104 )
14. Z, U105 (M 2002), ”A      
   ”, Journal of the ACM106 , 49 (3): 289–
317, arXiv107 :cs/0008011108 , doi109 :10.1145/567112.567114110 .
15. C, T M.111 (J 2010), ”M   -
    ”, SIAM Journal on Computing112 , 39 (5):
2075–2089, CiteSeerX113 10.1.1.153.6864114 , doi115 :10.1137/08071990x116 .

81.11 External links

Wikimedia Commons has media related to Floyd-Warshall algorithm117 .

92 https://en.wikipedia.org/wiki/Dorit_S._Hochbaum
93 http://www.ieor.berkeley.edu/~hochbaum/files/ieor266-2014.pdf
94 https://en.wikipedia.org/wiki/PDF
95 https://en.wikipedia.org/wiki/University_of_California,_Berkeley
96 https://en.wikipedia.org/wiki/Doi_(identifier)
97 https://doi.org/10.1016%2Fj.ipl.2010.02.001
98 https://books.goalkicker.com/AlgorithmsBook/
99 https://books.google.com/books?id=mKkIGIea_BkC&pg=PA65
100 https://en.wikipedia.org/wiki/ISBN_(identifier)
101 https://en.wikipedia.org/wiki/Special:BookSources/9780203490204
102 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
103 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.7650
104 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
105 https://en.wikipedia.org/wiki/Uri_Zwick
106 https://en.wikipedia.org/wiki/Journal_of_the_ACM
107 https://en.wikipedia.org/wiki/ArXiv_(identifier)
108 http://arxiv.org/abs/cs/0008011
109 https://en.wikipedia.org/wiki/Doi_(identifier)
110 https://doi.org/10.1145%2F567112.567114
111 https://en.wikipedia.org/wiki/Timothy_M._Chan
112 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
113 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
114 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.153.6864
115 https://en.wikipedia.org/wiki/Doi_(identifier)
116 https://doi.org/10.1137%2F08071990x
117 https://commons.wikimedia.org/wiki/Category:Floyd-Warshall_algorithm


• Interactive animation of the Floyd–Warshall algorithm118


• Interactive animation of the Floyd–Warshall algorithm (Technical University of Mu-
nich)119


118 http://www.pms.informatik.uni-muenchen.de/lehre/compgeometry/Gosper/shortest_path/shortest_path.html#visualization
119 https://www-m9.ma.tum.de/graph-algorithms/spp-floyd-warshall/index_en.html

82 Force-directed graph drawing

Figure 205 Social network visualization using a force-directed graph drawing algorithm.[1]


Figure 206 Visualization of links between pages on a wiki using a force-directed layout.

Force-directed graph drawing algorithms are a class of algorithms1 for drawing graphs2
in an aesthetically-pleasing way. Their purpose is to position the nodes of a graph3 in two-
dimensional or three-dimensional space so that all the edges are of more or less equal length
and there are as few crossing edges as possible, by assigning forces among the set of edges
and the set of nodes, based on their relative positions, and then using these forces either to
simulate the motion of the edges and nodes or to minimize their energy.[2]

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Graph_drawing
3 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)


While graph drawing can be a difficult problem, force-directed algorithms, being physical
simulations, usually require no special knowledge about graph theory such as planarity4 .

82.1 Forces

Force-directed graph drawing algorithms assign forces among the set of edges and the set of
nodes of a graph drawing5 . Typically, spring6 -like attractive forces based on Hooke's law7
are used to attract pairs of endpoints of the graph's edges towards each other, while simul-
taneously repulsive forces like those of electrically charged8 particles based on Coulomb's
law9 are used to separate all pairs of nodes. In equilibrium states for this system of forces,
the edges tend to have uniform length (because of the spring forces), and nodes that are not
connected by an edge tend to be drawn further apart (because of the electrical repulsion).
Edge attraction and vertex repulsion forces may be defined using functions that are not
based on the physical behavior of springs and particles; for instance, some force-directed
systems use springs whose attractive force is logarithmic rather than linear.
An alternative model considers a spring-like force for every pair of nodes (i, j) where the ideal
length δij of each spring is proportional to the graph-theoretic distance between nodes i and
j, without using a separate repulsive force. Minimizing the difference (usually the squared
difference) between Euclidean10 and ideal distances between nodes is then equivalent to a
metric multidimensional scaling11 problem.
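The quantity being minimized in this spring model is the (squared-difference) stress of a layout. A minimal sketch of its computation follows; the node positions and ideal distances δij used in the example are arbitrary illustrative values:

```python
from math import dist as euclid  # Euclidean distance, Python 3.8+

def stress(pos, ideal):
    """Sum of squared differences between Euclidean and ideal distances.

    pos:   dict mapping node -> (x, y) coordinates
    ideal: dict mapping frozenset({i, j}) -> ideal spring length delta_ij
    A layout that realizes every ideal distance exactly has stress 0.
    """
    total = 0.0
    for pair, d_ij in ideal.items():
        i, j = tuple(pair)
        total += (euclid(pos[i], pos[j]) - d_ij) ** 2
    return total
```

For two nodes placed 5 units apart with ideal distance 5 the stress is 0; moving one node so the distance becomes 10 raises the stress to (10 − 5)² = 25.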
A force-directed graph can involve forces other than mechanical springs and electrical re-
pulsion. A force analogous to gravity may be used to pull vertices towards a fixed point of
the drawing space; this may be used to pull together different connected components12 of a
disconnected graph, which would otherwise tend to fly apart from each other because of the
repulsive forces, and to draw nodes with greater centrality13 to more central positions in the
drawing;[3] it may also affect the vertex spacing within a single component. Analogues of
magnetic fields may be used for directed graphs. Repulsive forces may be placed on edges as
well as on nodes in order to avoid overlap or near-overlap in the final drawing. In drawings
with curved edges such as circular arcs14 or spline curves15 , forces may also be placed on
the control points of these curves, for instance to improve their angular resolution16 .[4]

4 https://en.wikipedia.org/wiki/Planar_graph
5 https://en.wikipedia.org/wiki/Graph_drawing
6 https://en.wikipedia.org/wiki/Spring_(device)
7 https://en.wikipedia.org/wiki/Hooke%27s_law
8 https://en.wikipedia.org/wiki/Electric_charge
9 https://en.wikipedia.org/wiki/Coulomb%27s_law
10 https://en.wikipedia.org/wiki/Euclidean_distance
11 https://en.wikipedia.org/wiki/Multidimensional_scaling
12 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
13 https://en.wikipedia.org/wiki/Centrality
14 https://en.wikipedia.org/wiki/Circular_arc
15 https://en.wikipedia.org/wiki/Spline_curve
16 https://en.wikipedia.org/wiki/Angular_resolution_(graph_drawing)


82.2 Methods

Once the forces on the nodes and edges of a graph have been defined, the behavior of the
entire graph under these forces may then be simulated as if it were a physical system.
In such a simulation, the forces are applied to the nodes, pulling them closer together
or pushing them further apart. This is repeated iteratively until the system comes to
a mechanical equilibrium17 state; i.e., the nodes' relative positions do not change anymore from
one iteration to the next. The positions of the nodes in this equilibrium are used to generate
a drawing of the graph.
For forces defined from springs whose ideal length is proportional to the graph-theoretic
distance, stress majorization18 gives a very well-behaved (i.e., monotonically convergent19 )[5]
and mathematically elegant way to minimise20 these differences and, hence, find a good
layout for the graph.
It is also possible to employ mechanisms that search more directly for energy minima, either
instead of or in conjunction with physical simulation. Such mechanisms, which are exam-
ples of general global optimization21 methods, include simulated annealing22 and genetic
algorithms23 .
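The basic simulation loop described above can be sketched in a few dozen lines. In this sketch the spring constant, rest length, repulsion constant, per-iteration displacement cap, and iteration count are arbitrary illustrative choices, not values from any published algorithm:

```python
import math
import random

def force_directed_layout(nodes, edges, iterations=300, k_spring=0.1,
                          rest_len=1.0, k_repel=0.5, max_disp=0.05):
    """Hooke-style attraction along edges plus Coulomb-style repulsion
    between all node pairs; capping each node's displacement per
    iteration keeps the system from overshooting.  Returns final 2D
    positions as a dict node -> [x, y]."""
    rng = random.Random(0)                 # fixed seed: reproducible layout
    pos = {v: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for v in nodes}
    for _ in range(iterations):
        force = {v: [0.0, 0.0] for v in nodes}
        # Coulomb-like repulsion between every pair of nodes
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k_repel / (d * d)
                fx, fy = f * dx / d, f * dy / d
                force[u][0] += fx; force[u][1] += fy
                force[v][0] -= fx; force[v][1] -= fy
        # Hooke-like attraction along each edge toward the rest length
        for u, v in edges:
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k_spring * (d - rest_len)
            fx, fy = f * dx / d, f * dy / d
            force[u][0] += fx; force[u][1] += fy
            force[v][0] -= fx; force[v][1] -= fy
        # move each node along its net force, capped per iteration
        for v in nodes:
            fx, fy = force[v]
            mag = math.hypot(fx, fy)
            if mag > max_disp:
                fx, fy = fx / mag * max_disp, fy / mag * max_disp
            pos[v][0] += fx
            pos[v][1] += fy
    return pos
```

Run on a triangle (three mutually connected nodes), the layout settles into a roughly equilateral configuration whose side length balances the spring attraction against the repulsion.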

82.3 Advantages

The following are among the most important advantages of force-directed algorithms:
Good-quality results
At least for graphs of medium size (up to 50–500 vertices), the results obtained are usually
very good based on the following criteria: uniform edge length, uniform vertex
distribution and showing symmetry. This last criterion is among the most important ones
and is hard to achieve with any other type of algorithm.
Flexibility
Force-directed algorithms can be easily adapted and extended to fulfill additional aesthetic
criteria. This makes them the most versatile class of graph drawing algorithms. Examples
of existing extensions include the ones for directed graphs, 3D graph drawing,[6] cluster
graph drawing, constrained graph drawing, and dynamic graph drawing.
Intuitive
Since they are based on physical analogies of common objects, like springs, the behavior
of the algorithms is relatively easy to predict and understand. This is not the case with
other types of graph-drawing algorithms.

17 https://en.wikipedia.org/wiki/Mechanical_equilibrium
18 https://en.wikipedia.org/wiki/Stress_majorization
19 https://en.wikipedia.org/wiki/Limit_of_a_sequence
20 https://en.wikipedia.org/wiki/Optimization_(mathematics)
21 https://en.wikipedia.org/wiki/Global_optimization
22 https://en.wikipedia.org/wiki/Simulated_annealing
23 https://en.wikipedia.org/wiki/Genetic_algorithm


Simplicity
Typical force-directed algorithms are simple and can be implemented in a few lines of code.
Other classes of graph-drawing algorithms, like the ones for orthogonal layouts, are usually
much more involved.
Interactivity
Another advantage of this class of algorithm is the interactive aspect. By drawing the
intermediate stages of the graph, the user can follow how the graph evolves, seeing it
unfold from a tangled mess into a good-looking configuration. In some interactive graph
drawing tools, the user can pull one or more nodes out of their equilibrium state and watch
them migrate back into position. This makes them a preferred choice for dynamic and
online24 graph-drawing systems.
Strong theoretical foundations
While simple ad-hoc force-directed algorithms often appear in the literature and in practice
(because they are relatively easy to understand), more reasoned approaches are starting
to gain traction. Statisticians have been solving similar problems in multidimensional
scaling25 (MDS) since the 1930s, and physicists also have a long history of working with
related n-body26 problems, so extremely mature approaches exist. As an example, the
stress majorization27 approach to metric MDS can be applied to graph drawing as described
above. This has been proven to converge monotonically.[5] Monotonic convergence, the
property that the algorithm will at each iteration decrease the stress or cost of the layout,
is important because it guarantees that the layout will eventually reach a local minimum
and stop. Damping schedules cause the algorithm to stop, but cannot guarantee that a
true local minimum is reached.

82.4 Disadvantages

The main disadvantages of force-directed algorithms include the following:


High running time28
The typical force-directed algorithms are in general considered to have a running time
equivalent to O(n³), where n is the number of nodes of the input graph. This is because
the number of iterations is estimated to be O(n), and in every iteration, all pairs of nodes
need to be visited and their mutual repulsive forces computed. This is related to the N-body
problem29 in physics. However, since repulsive forces are local in nature the graph can
be partitioned such that only neighboring vertices are considered. Common techniques
used by algorithms for determining the layout of large graphs include high-dimensional
embedding,[7] multi-layer drawing and other methods related to N-body simulation30 . For

24 https://en.wikipedia.org/wiki/Online_algorithm
25 https://en.wikipedia.org/wiki/Multidimensional_scaling
26 https://en.wikipedia.org/wiki/N-body
27 https://en.wikipedia.org/wiki/Stress_majorization
28 https://en.wikipedia.org/wiki/Running_time
29 https://en.wikipedia.org/wiki/N-body_problem
30 https://en.wikipedia.org/wiki/N-body_simulation


example, the Barnes–Hut simulation31 -based method FADE[8] can improve running time
to n log n per iteration. As a rough guide, in a few seconds one can expect to draw at
most 1,000 nodes with a standard n² per iteration technique, and 100,000 with an n log n
per iteration technique.[8] Force-directed algorithms, when combined with a multilevel
approach, can draw graphs of millions of nodes.[9]
Poor local minima
It is easy to see that force-directed algorithms produce a graph with minimal energy,
in particular one whose total energy is only a local minimum32 . The local minimum
found can be, in many cases, considerably worse than a global minimum, which translates
into a low-quality drawing. For many algorithms, especially the ones that allow only
down-hill moves of the vertices, the final result can be strongly influenced by the initial
layout, that in most cases is randomly generated. The problem of poor local minima
becomes more important as the number of vertices of the graph increases. A combined
application of different algorithms is helpful to solve this problem.[10] For example, using
the Kamada–Kawai algorithm[11] to quickly generate a reasonable initial layout and then
the Fruchterman–Reingold algorithm[12] to improve the placement of neighbouring nodes.
Another technique to achieve a global minimum is to use a multilevel approach.[13]

82.5 History

Force-directed methods in graph drawing date back to the work of Tutte (1963)33 , who
showed that polyhedral graphs34 may be drawn in the plane with all faces convex by fixing
the vertices of the outer face of a planar embedding of the graph into convex position35 ,
placing a spring-like attractive force on each edge, and letting the system settle into an
equilibrium.[14] Because of the simple nature of the forces in this case, the system cannot
get stuck in local minima, but rather converges to a unique global optimum configuration.
Because of this work, embeddings of planar graphs with convex faces are sometimes called
Tutte embeddings36 .
The combination of attractive forces on adjacent vertices, and repulsive forces on all vertices,
was first used by Eades (1984)37 ;[15] additional pioneering work on this type of force-directed
layout was done by Fruchterman & Reingold (1991)38 .[12] The idea of using only spring forces
between all pairs of vertices, with ideal spring lengths equal to the vertices' graph-theoretic
distance, is from Kamada & Kawai (1989)39 .[11]

31 https://en.wikipedia.org/wiki/Barnes%E2%80%93Hut_simulation
32 https://en.wikipedia.org/wiki/Local_minimum
33 #CITEREFTutte1963
34 https://en.wikipedia.org/wiki/Polyhedral_graph
35 https://en.wikipedia.org/wiki/Convex_position
36 https://en.wikipedia.org/wiki/Tutte_embedding
37 #CITEREFEades1984
38 #CITEREFFruchtermanReingold1991
39 #CITEREFKamadaKawai1989


82.6 See also


• Cytoscape40 , software for visualising biological networks. The base package includes
force-directed layouts as one of the built-in methods.
• Gephi41 , an interactive visualization and exploration platform for all kinds of networks
and complex systems, dynamic and hierarchical graphs.
• Graphviz42 , software that implements a multilevel force-directed layout algorithm (among
many others) capable of handling very large graphs.
• Tulip43 , software that implements most of the force-directed layout algorithms (GEM,
LGL, GRIP, FM³).
• Prefuse44

82.7 References
1. G, M (2015), ”I     ,
'    ”, Geschichte und Informatik 18/1945 (PDF),
. 109–128
2. K, S G. (2012), Spring Embedders and Force-Directed Graph
Drawing Algorithms, arXiv46 :1201.301147 , Bibcode48 :2012arXiv1201.3011K49 .
3. B, M. J.; E, D.50 ; G, M. T.51 ; T, L.
(2012), ”F-       -
”, Proc. 20th Int. Symp. Graph Drawing, arXiv52 :1209.074853 , Bib-
54
code :2012arXiv1209.0748B . 55

4. C, R.; C, K.; G, M. T.56 ; K, S. G.;


T, L. (2011), ”F- L-  ”, Proc.
19th Symposium on Graph Drawing57 (PDF), . 78–90.
5.  L, J (1988), ”C     
 ”, Journal of Classification, Springer, 5 (2): 163–180,
doi58 :10.1007/BF0189716259 .

40 https://en.wikipedia.org/wiki/Cytoscape
41 https://en.wikipedia.org/wiki/Gephi
42 https://en.wikipedia.org/wiki/Graphviz
43 https://en.wikipedia.org/wiki/Tulip_(software)
44 https://en.wikipedia.org/wiki/Prefuse
45 http://www.martingrandjean.ch/wp-content/uploads/2015/09/Grandjean2015.pdf
46 https://en.wikipedia.org/wiki/ArXiv_(identifier)
47 http://arxiv.org/abs/1201.3011
48 https://en.wikipedia.org/wiki/Bibcode_(identifier)
49 https://ui.adsabs.harvard.edu/abs/2012arXiv1201.3011K
50 https://en.wikipedia.org/wiki/David_Eppstein
51 https://en.wikipedia.org/wiki/Michael_T._Goodrich
52 https://en.wikipedia.org/wiki/ArXiv_(identifier)
53 http://arxiv.org/abs/1209.0748
54 https://en.wikipedia.org/wiki/Bibcode_(identifier)
55 https://ui.adsabs.harvard.edu/abs/2012arXiv1209.0748B
56 https://en.wikipedia.org/wiki/Michael_T._Goodrich
57 http://www.cs.arizona.edu/~kobourov/fdl.pdf
58 https://en.wikipedia.org/wiki/Doi_(identifier)
59 https://doi.org/10.1007%2FBF01897162


6. V, A. ”3D P T V”60 . R 3 J


61
2012.[permanent dead link ]
7. H, D62 ; K, Y (2002), ”G   -
 ”, Proceedings of the 9th International Symposium on Graph
Drawing, pp. 207–219, CiteSeerX63 10.1.1.20.539064 , ISBN65 3-540-00158-166
8. Q, A; E, P67 (2001), ”FADE: G D, C-
,  V A”, Proceedings of the 8th International Symposium on
Graph Drawing68 (PDF), . 197–210, ISBN69 3-540-41554-870 ,   
71 (PDF)  2006-05-21.
9. ”A G  L G”72 . R 22 O 2017.
10. C, C; K, S; N, J; P, J;
W, K (2003), ”A S  G- V  
E  S”, Proceedings of the 2003 ACM Symposium on Software
Visualization (SoftVis '03)73 , N Y, NY, USA: ACM, . 77–86,   .
212, 74 :10.1145/774833.77484475 , ISBN76 1-58113-642-077 , To achieve an aestheti-
cally pleasing layout of the graph it is also necessary to employ modified Fruchterman–
Reingold forces, as the Kamada–Kawai method does not achieve satisfactory methods
by itself but rather creates a good approximate layout so that the Fruchterman-Reingold
calculations can quickly ”tidy up” the layout.
11. K, T; K, S (1989), ”A    -
  ”, Information Processing Letters, Elsevier, 31 (1): 7–15,
doi78 :10.1016/0020-0190(89)90102-679 .
12. F, T M. J.; R, E M.80 (1991), ”G D-
  F-D P”, Software − Practice & Experience, Wiley,
21 (11): 1129–1164, doi81 :10.1002/spe.438021110282 .
13. 83 A Multilevel Algorithm for Force-Directed Graph-Drawing

60 http://www.aaronvose.com/phytree3d/
62 https://en.wikipedia.org/wiki/David_Harel
63 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
64 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.5390
65 https://en.wikipedia.org/wiki/ISBN_(identifier)
66 https://en.wikipedia.org/wiki/Special:BookSources/3-540-00158-1
67 https://en.wikipedia.org/wiki/Peter_Eades
68 https://web.archive.org/web/20060521183023/http://www.cs.ucd.ie/staff/aquigley/home/downloads/aq-gd2000.pdf
69 https://en.wikipedia.org/wiki/ISBN_(identifier)
70 https://en.wikipedia.org/wiki/Special:BookSources/3-540-41554-8
71 http://www.cs.ucd.ie/staff/aquigley/home/downloads/aq-gd2000.pdf
72 http://yifanhu.net/GALLERY/GRAPHS/
73 https://www.researchgate.net/publication/2851716
74 https://en.wikipedia.org/wiki/Doi_(identifier)
75 https://doi.org/10.1145%2F774833.774844
76 https://en.wikipedia.org/wiki/ISBN_(identifier)
77 https://en.wikipedia.org/wiki/Special:BookSources/1-58113-642-0
78 https://en.wikipedia.org/wiki/Doi_(identifier)
79 https://doi.org/10.1016%2F0020-0190%2889%2990102-6
80 https://en.wikipedia.org/wiki/Edward_Reingold
81 https://en.wikipedia.org/wiki/Doi_(identifier)
82 https://doi.org/10.1002%2Fspe.4380211102
83 http://jgaa.info/accepted/2003/Walshaw2003.7.3.pdf


14. T, W. T.84 (1963), ”H    ”, Proceedings of the London
Mathematical Society, 13 (52): 743–768, doi85 :10.1112/plms/s3-13.1.74386 .
15. E, P87 (1984), ”A H  G D”, Congressus Nu-
merantium, 42 (11): 149–160.

82.8 Further reading


•  B, G; P E88 ; R T89 ; I G. T-
 (1999), Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall,
ISBN90 978-0-13-301615-491
• K, M; W, D92 , . (2001), Drawing graphs: methods
and models, Lecture Notes in Computer Science 2025, 2025, Springer, doi93 :10.1007/3-
540-44969-894 , ISBN95 978-3-540-42062-096

82.9 External links


• Video of Spring Algorithm97 [dead link as of 27 May 2016]
• Live visualisation in flash + source code and description98
• Daniel Tunkelang's dissertation99 on force-directed graph layout (source code available
on Github100 )
• Hyperassociative Map Algorithm101
• Interactive and real-time force-directed graphing algorithms used in an online database
modeling tool102

84 https://en.wikipedia.org/wiki/W._T._Tutte
85 https://en.wikipedia.org/wiki/Doi_(identifier)
86 https://doi.org/10.1112%2Fplms%2Fs3-13.1.743
87 https://en.wikipedia.org/wiki/Peter_Eades
88 https://en.wikipedia.org/wiki/Peter_Eades
89 https://en.wikipedia.org/wiki/Roberto_Tamassia
90 https://en.wikipedia.org/wiki/ISBN_(identifier)
91 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-301615-4
92 https://en.wikipedia.org/wiki/Dorothea_Wagner
93 https://en.wikipedia.org/wiki/Doi_(identifier)
94 https://doi.org/10.1007%2F3-540-44969-8
95 https://en.wikipedia.org/wiki/ISBN_(identifier)
96 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-42062-0
97 https://web.archive.org/web/20090930051635/http://www.cs.usyd.edu.au/~aquigley/avi/spring.avi
98 http://blog.ivank.net/force-based-graph-drawing-in-as3.html
99 http://reports-archive.adm.cs.cmu.edu/anon/1998/abstracts/98-189.html
100 https://github.com/dtunkelang/jiggle
101 http://jeffreyfreeman.me/hyperassociative-map-explanation/
102 http://www.anchormodeling.com/modeler

83 Ford–Fulkerson algorithm

The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy
algorithm1 that computes the maximum flow2 in a flow network3 . It is sometimes called a
”method” instead of an ”algorithm” as the approach to finding augmenting paths in a residual
graph is not fully specified[1] or it is specified in several implementations with different
running times.[2] It was published in 1956 by L. R. Ford Jr.4 and D. R. Fulkerson5 .[3] The
name ”Ford−Fulkerson” is often also used for the Edmonds–Karp algorithm6 , which is a
fully defined implementation of the Ford−Fulkerson method.
The idea behind the algorithm is as follows: as long as there is a path from the source
(start node) to the sink (end node), with available capacity on all edges in the path, we
send flow along one of the paths. Then we find another path, and so on. A path with
available capacity is called an augmenting path7 .
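The search for an augmenting path can be sketched as a breadth-first search over edges with positive residual capacity. Choosing BFS in particular corresponds to the Edmonds–Karp refinement mentioned above; the plain Ford–Fulkerson method allows any search order. The dictionary-based graph representation here is an illustrative assumption:

```python
from collections import deque

def find_augmenting_path(capacity, flow, s, t):
    """BFS for a path s -> t whose every edge has positive residual
    capacity.  capacity and flow are dicts keyed by ordered pairs
    (u, v).  Returns the path as a list of vertices, or None."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []                    # rebuild the path from parents
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for (a, b), c in capacity.items():
            if a == u and b not in parent and c - flow.get((a, b), 0) > 0:
                parent[b] = u
                queue.append(b)
    return None
```

On a tiny network s→a→t with unit capacities and zero flow, this finds the path [s, a, t]; once that path is saturated, no augmenting path remains and it returns None.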

83.1 Algorithm

Let G(V, E) be a graph, and for each edge from u to v, let c(u, v) be the capacity and f (u, v)
be the flow. We want to find the maximum flow from the source s to the sink t. After every
step in the algorithm the following is maintained:

Capacity constraints      ∀(u, v) ∈ E : f (u, v) ≤ c(u, v)      The flow along an edge cannot exceed its capacity.
Skew symmetry             ∀(u, v) ∈ E : f (u, v) = −f (v, u)      The net flow from u to v must be the opposite of the net flow from v to u (see example).
Flow conservation         ∀u ∈ V : u ̸= s and u ̸= t ⇒ ∑w∈V f (u, w) = 0      The net flow to a node is zero, except for the source, which ”produces” flow, and the sink, which ”consumes” flow.

1 https://en.wikipedia.org/wiki/Greedy_algorithm
2 https://en.wikipedia.org/wiki/Maximum_flow_problem
3 https://en.wikipedia.org/wiki/Flow_network
4 https://en.wikipedia.org/wiki/L._R._Ford_Jr.
5 https://en.wikipedia.org/wiki/D._R._Fulkerson
6 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
7 https://en.wikipedia.org/wiki/Augmenting_path

973

Value(f )                 ∑(s,u)∈E f (s, u) = ∑(v,t)∈E f (v, t)      The flow leaving from s must be equal to the flow arriving at t.

This means that the flow through the network is a legal flow after each round in the al-
gorithm. We define the residual network Gf (V, Ef ) to be the network with capacity
cf (u, v) = c(u, v) − f (u, v) and no flow. Notice that it can happen that a flow from v to u
is allowed in the residual network, though disallowed in the original network: if f (u, v) > 0
and c(v, u) = 0 then cf (v, u) = c(v, u) − f (v, u) = f (u, v) > 0.

Algorithm Ford−Fulkerson

Inputs Given a Network G = (V, E) with flow capacity c, a source node s, and a sink
node t
Output Compute a flow f from s to t of maximum value
1. f (u, v) ← 0 for all edges (u, v)
2. While there is a path p from s to t in Gf , such that cf (u, v) > 0 for all edges
(u, v) ∈ p:
a) Find cf (p) = min{cf (u, v) : (u, v) ∈ p}
b) For each edge (u, v) ∈ p
i. f (u, v) ← f (u, v) + cf (p) (Send flow along the path)
ii. f (v, u) ← f (v, u) − cf (p) (The flow might be ”returned” later)
• ”←” denotes assignment8 . For instance, ”largest ← item” means that the value of
largest changes to the value of item.
• ”return” terminates the algorithm and outputs the following value.

The path in step 2 can be found with, for example, a breadth-first search9 (BFS) or a depth-
first search10 in Gf (V, Ef ). If you use the former, the algorithm is called Edmonds–Karp11 .
When no more paths in step 2 can be found, s will not be able to reach t in the residual
network. If S is the set of nodes reachable by s in the residual network, then the total
capacity in the original network of edges from S to the remainder of V is on the one hand
equal to the total flow we found from s to t, and on the other hand serves as an upper
bound for all such flows. This proves that the flow we found is maximal. See also Max-flow
Min-cut theorem12 .
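The method above can be sketched directly in Python. The sketch below is illustrative (not from the original sources): it uses a depth-first search to find augmenting paths and assumes integer capacities given as an adjacency matrix; a BFS-based Edmonds–Karp implementation appears later in this chapter.

```python
def ford_fulkerson(capacity, s, t):
    """Maximum flow via DFS augmenting paths on an integer capacity matrix."""
    n = len(capacity)
    residual = [row[:] for row in capacity]  # residual capacities c_f

    def dfs(u, bottleneck, visited):
        # Try to push flow from u to t; return the amount pushed (0 if none).
        if u == t:
            return bottleneck
        visited.add(u)
        for v in range(n):
            if v not in visited and residual[u][v] > 0:
                pushed = dfs(v, min(bottleneck, residual[u][v]), visited)
                if pushed > 0:
                    residual[u][v] -= pushed
                    residual[v][u] += pushed  # flow may be "returned" later
                    return pushed
        return 0

    flow = 0
    while True:
        pushed = dfs(s, float("inf"), set())
        if pushed == 0:
            return flow
        flow += pushed

# The worst-case network from the integral example below (source A=0, sink D=3):
capacity = [[0, 1000, 1000, 0],
            [0, 0, 1, 1000],
            [0, 0, 0, 1000],
            [0, 0, 0, 0]]
print(ford_fulkerson(capacity, 0, 3))  # 2000
```

With integer capacities every augmentation increases the flow by at least 1, so this sketch always terminates; the irrational-capacity caveat discussed under Complexity does not apply here.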
If the graph G(V, E) has multiple sources and sinks, we act as follows: Suppose that
T = {t | t is a sink} and S = {s | s is a source}. Add a new source s∗ with an edge
(s∗ , s) from s∗ to every node s ∈ S, with capacity c(s∗ , s) = ds , where ds = ∑(s,u)∈E c(s, u). And

9 https://en.wikipedia.org/wiki/Breadth-first_search
10 https://en.wikipedia.org/wiki/Depth-first_search
11 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
12 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem

974

add a new sink t∗ with an edge (t, t∗ ) from every node t ∈ T to t∗ , with capacity
c(t, t∗ ) = dt , where dt = ∑(v,t)∈E c(v, t). Then apply the Ford−Fulkerson algorithm.

Also, if a node u has capacity constraint du , we replace this node with two nodes uin , uout ,
and an edge (uin , uout ), with capacity c(uin , uout ) = du . Then apply the Ford−Fulkerson
algorithm.
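Both reductions are mechanical to apply. The following sketch (the helper name add_super_terminals is illustrative) shows the super-source/super-sink construction on an adjacency-matrix representation:

```python
def add_super_terminals(capacity, sources, sinks):
    """Extend an n x n capacity matrix with a super-source (index n) and a
    super-sink (index n+1), wired as described above."""
    n = len(capacity)
    big = [row[:] + [0, 0] for row in capacity] + [[0] * (n + 2) for _ in range(2)]
    for s in sources:
        big[n][s] = sum(capacity[s])                     # d_s: total capacity leaving s
    for t in sinks:
        big[t][n + 1] = sum(row[t] for row in capacity)  # d_t: total capacity entering t
    return big
```

Running any single-source, single-sink max-flow algorithm between indices n and n+1 of the extended matrix then solves the multi-source, multi-sink problem.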

83.2 Complexity

By adding the flow augmenting path to the flow already established in the graph, the
maximum flow will be reached when no more flow augmenting paths can be found in the
graph. However, there is no certainty that this situation will ever be reached, so the best
that can be guaranteed is that the answer will be correct if the algorithm terminates. In
the case that the algorithm runs forever, the flow might not even converge towards the
maximum flow. However, this situation only occurs with irrational flow values. When the
capacities are integers, the runtime of Ford–Fulkerson is bounded by O(Ef ) (see big O
notation13 ), where E is the number of edges in the graph and f is the maximum flow in the
graph. This is because each augmenting path can be found in O(E) time and increases the
flow by an integer amount of at least 1, with the upper bound f .
A variation of the Ford−Fulkerson algorithm with guaranteed termination and a runtime
independent of the maximum flow value is the Edmonds–Karp algorithm14 , which runs in
O(V E 2 ) time.

83.3 Integral example

The following example shows the first steps of Ford–Fulkerson in a flow network with 4
nodes, source A and sink D. This example shows the worst-case behaviour of the algorithm.
In each step, only a flow of 1 is sent across the network. If breadth-first-search were used
instead, only two steps would be needed.

Path Capacity Resulting flow network

Initial flow network

Figure 207

13 https://en.wikipedia.org/wiki/Big_O_notation
14 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm

975

Path Capacity Resulting flow network

min(cf (A, B), cf (B, C), cf (C, D))


A, B, C, D = min(c(A, B) − f (A, B), c(B, C) − f (B, C), c(C, D) − f (C, D))
= min(1000 − 0, 1 − 0, 1000 − 0) = 1

Figure 208

min(cf (A, C), cf (C, B), cf (B, D))


A, C, B, D = min(c(A, C) − f (A, C), c(C, B) − f (C, B), c(B, D) − f (B, D))
= min(1000 − 0, 0 − (−1), 1000 − 0) = 1

Figure 209
After 1998 more steps ...

Final flow network

Figure 210

Notice how flow is ”pushed back” from C to B when finding the path A, C, B, D.

976

83.4 Non-terminating example

Figure 211

Consider the flow network shown on √ the right, with source s, sink t, capacities of edges
e1 , e2 and e3 respectively 1, r = ( 5 − 1)/2 and 1 and the capacity of all other edges
some integer M ≥ 2. The constant r was chosen so that r2 = 1 − r. We use augmenting
paths according to the following table, where p1 = {s, v4 , v3 , v2 , v1 , t}, p2 = {s, v2 , v3 , v4 , t}
and p3 = {s, v1 , v2 , v3 , t}.
Residual capacities
Step   Augmenting path      Sent flow   e1       e2    e3
0                                       r0 = 1   r     1
1      {s, v2 , v3 , t}     1           r0       r1    0
2      p1                   r1          r2       0     r1
3      p2                   r1          r2       r1    0
4      p1                   r2          0        r3    r2
5      p3                   r2          r2       r3    0

Note that after step 1 as well as after step 5, the residual capacities of edges e1 , e2 and
e3 are in the form rn , rn+1 and 0, respectively, for some n ∈ N. This means that we can
use augmenting paths p1 , p2 , p1 and p3 infinitely many times and residual capacities of
these edges will always be in the same form. Total flow in the network after step 5 is
1 + 2(r1 + r2 ). If we continue to use augmenting paths as above, the total flow converges
to 1 + 2 ∑∞i=1 ri = 3 + 2r. However, note that there is a flow of value 2M + 1, by sending
M units of flow along sv1 t, 1 unit of flow along sv2 v3 t, and M units of flow along sv4 t.
Therefore, the algorithm never terminates and the flow does not even converge to the
maximum flow.[4]
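The arithmetic behind this example is easy to check numerically; the snippet below verifies the defining property of r and that the limiting flow 3 + 2r stays strictly below the true maximum 2M + 1 (M = 2 is used for concreteness):

```python
import math

# Capacity of edge e2, chosen so that r**2 == 1 - r (the golden-ratio conjugate).
r = (math.sqrt(5) - 1) / 2
assert abs(r**2 - (1 - r)) < 1e-12

# The flow sent in successive rounds shrinks geometrically, so the total
# converges to 1 + 2*(r + r^2 + ...) = 3 + 2r.
total = 1 + 2 * sum(r**i for i in range(1, 200))

M = 2  # smallest allowed capacity for the remaining edges
assert total < 2 * M + 1  # the limit never reaches the maximum flow 2M + 1
```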

977

Another non-terminating example based on the Euclidean algorithm15 is given by Backman
& Huynh (2018)16 , where they also show that the worst-case running time of the
Ford−Fulkerson algorithm on a network G(V, E) in ordinal numbers17 is ω^Θ(|E|) .

83.5 Python implementation of Edmonds–Karp algorithm

import collections

# This class represents a directed graph using adjacency matrix representation


class Graph:

    def __init__(self, graph):
        self.graph = graph  # residual graph
        self.ROW = len(graph)

    def bfs(self, s, t, parent):
        """Returns true if there is a path from source s to sink t in the
        residual graph. Also fills parent[] to store the path."""
        # Mark all the vertices as not visited
        visited = [False] * self.ROW

        # Create a queue for BFS, mark the source node as visited and enqueue it
        queue = collections.deque()
        queue.append(s)
        visited[s] = True

        # Standard BFS loop
        while queue:
            u = queue.popleft()
            # Get all adjacent vertices of the dequeued vertex u.
            # If an adjacent vertex has not been visited, mark it
            # visited and enqueue it.
            for ind, val in enumerate(self.graph[u]):
                if not visited[ind] and val > 0:
                    queue.append(ind)
                    visited[ind] = True
                    parent[ind] = u

        # If we reached sink in BFS starting from source, then return
        # true, else false
        return visited[t]

    # Returns the maximum flow from s to t in the given graph
    def edmonds_karp(self, source, sink):
        # This array is filled by BFS to store the path
        parent = [-1] * self.ROW

        max_flow = 0  # There is no flow initially

        # Augment the flow while there is a path from source to sink
        while self.bfs(source, sink, parent):
            # Find the minimum residual capacity of the edges along the
            # path filled by BFS; that is, the maximum flow through the
            # path found.
            path_flow = float("Inf")
            s = sink
            while s != source:
                path_flow = min(path_flow, self.graph[parent[s]][s])
                s = parent[s]

            # Add path flow to overall flow
            max_flow += path_flow

            # Update residual capacities of the edges and reverse edges
            # along the path
            v = sink
            while v != source:
                u = parent[v]
                self.graph[u][v] -= path_flow
                self.graph[v][u] += path_flow
                v = parent[v]

        return max_flow

15 https://en.wikipedia.org/wiki/Euclidean_algorithm
16 #CITEREFBackmanHuynh2018
17 https://en.wikipedia.org/wiki/Ordinal_numbers

978

83.6 See also


• Approximate max-flow min-cut theorem18
• Turn restriction routing19

83.7 Notes
1. L-T W, Y-W C, K-T (T) C (2009).
Electronic Design Automation: Synthesis, Verification, and Test. Morgan Kaufmann.
p. 204. ISBN20 008092200721 .CS1 maint: multiple names: authors list (link22 )
2. T H. C; C E. L; R L. R; C
S (2009). Introduction to Algorithms. MIT Press. p. 714. ISBN23 026225810224 .
3. F, L. R.25 ; F, D. R.26 (1956). ”M  
 ” 27 (PDF). Canadian Journal of Mathematics28 . 8: 399–404.
doi :10.4153/CJM-1956-045-530 .
29

18 https://en.wikipedia.org/wiki/Approximate_max-flow_min-cut_theorem
19 https://en.wikipedia.org/wiki/Turn_restriction_routing
20 https://en.wikipedia.org/wiki/ISBN_(identifier)
21 https://en.wikipedia.org/wiki/Special:BookSources/0080922007
22 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
23 https://en.wikipedia.org/wiki/ISBN_(identifier)
24 https://en.wikipedia.org/wiki/Special:BookSources/0262258102
25 https://en.wikipedia.org/wiki/L._R._Ford_Jr.
26 https://en.wikipedia.org/wiki/D._R._Fulkerson
27 http://www.cs.yale.edu/homes/lans/readings/routing/ford-max_flow-1956.pdf
28 https://en.wikipedia.org/wiki/Canadian_Journal_of_Mathematics
29 https://en.wikipedia.org/wiki/Doi_(identifier)
30 https://doi.org/10.4153%2FCJM-1956-045-5

979

4. Z, U31 (21 A 1995). ”T     
F–F       ”. Theoret-
ical Computer Science32 . 148 (1): 165–170. doi33 :10.1016/0304-3975(95)00022-O34 .

83.8 References
• C, T H.35 ; L, C E.36 ; R, R L.37 ; S,
C38 (2001). ”S 26.2: T F−F ”. Introduction to
Algorithms39 (S .). MIT P  MG−H. . 651–664. ISBN40 0-
262-03293-741 .
• G T. H; G P; S S (2008). ”C 8:N-
 F A”. Algorithms in a Nutshell. Oreilly Media42 . pp. 226–250.
ISBN43 978-0-596-51624-644 .
• J K; É T (2006). ”C 7:E   M-
F P”. Algorithm Design45 . P E. . 378–38446 . ISBN47 0-
321-29535-848 .
• S G (2009). ENGRI 1101. Cornell University.
• B, S; H, T (2018). ”T F–F
   ”. Computability. 7 (4): 341–347. arXiv49 :1504.0436350 .
doi51 :10.3233/COM-18008252 .

83.9 External links


• A tutorial explaining the Ford–Fulkerson method to solve the max-flow problem53
• Another Java animation54

31 https://en.wikipedia.org/wiki/Uri_Zwick
32 https://en.wikipedia.org/wiki/Theoretical_Computer_Science_(journal)
33 https://en.wikipedia.org/wiki/Doi_(identifier)
34 https://doi.org/10.1016%2F0304-3975%2895%2900022-O
35 https://en.wikipedia.org/wiki/Thomas_H._Cormen
36 https://en.wikipedia.org/wiki/Charles_E._Leiserson
37 https://en.wikipedia.org/wiki/Ronald_L._Rivest
38 https://en.wikipedia.org/wiki/Clifford_Stein
39 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
40 https://en.wikipedia.org/wiki/ISBN_(identifier)
41 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
42 https://en.wikipedia.org/wiki/Oreilly_Media
43 https://en.wikipedia.org/wiki/ISBN_(identifier)
44 https://en.wikipedia.org/wiki/Special:BookSources/978-0-596-51624-6
45 https://archive.org/details/algorithmdesign0000klei/page/378
46 https://archive.org/details/algorithmdesign0000klei/page/378
47 https://en.wikipedia.org/wiki/ISBN_(identifier)
48 https://en.wikipedia.org/wiki/Special:BookSources/0-321-29535-8
49 https://en.wikipedia.org/wiki/ArXiv_(identifier)
50 http://arxiv.org/abs/1504.04363
51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.3233%2FCOM-180082
53 http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=maxFlow
54 http://www.cs.pitt.edu/~kirk/cs1501/animations/Network.html

980

• Java Web Start application55

Media related to Ford-Fulkerson's algorithm56 at Wikimedia Commons

55 http://rrusin.blogspot.com/2011/03/implementing-graph-editor-in-javafx.html
56 https://commons.wikimedia.org/wiki/Category:Ford-Fulkerson%27s_algorithm

981
84 Fringe search


983

Graph and tree


search algorithms

• α–β
• A*
• B*
• Backtracking
• Beam
• Bellman–Ford
• Best-first
• Bidirectional
• Borůvka
• Branch & bound
• BFS
• British Museum
• D*
• DFS
• Dijkstra
• Edmonds
• Floyd–Warshall
• Fringe search
• Hill climbing
• IDA*
• Iterative deepening
• Johnson
• Jump point
• Kruskal
• Lexicographic BFS
• LPA*
• Prim
• SMA*
• SPFA

Listings

• Graph algorithms
• Search algorithms
• List of graph algorithms

Related topics

• Dynamic programming
• Graph traversal
• Tree traversal
• Search games

984

In computer science7 , fringe search is a graph search algorithm8 that finds the least-cost
path from a given initial node9 to one goal node10 .
In essence, fringe search is a middle ground between A*11 and the iterative deepening A*12
variant (IDA*).
If g(x) is the cost of the search path from the first node to the current, and h(x) is the
heuristic13 estimate of the cost from the current node to the goal, then ƒ(x) = g(x) + h(x),
and h* is the actual path cost to the goal. Consider IDA*, which does a recursive14 left-
to-right depth-first search15 from the root node, stopping the recursion once the goal has
been found or the nodes have reached a maximum value ƒ. If no goal is found within the first
threshold ƒ, the threshold is then increased and the algorithm searches again; i.e., it iterates
on the threshold.
There are three major inefficiencies with IDA*. First, IDA* will repeat states when there
are multiple (sometimes non-optimal) paths to a goal node - this is often solved by keeping
a cache of visited states. IDA* thus altered is denoted as memory-enhanced IDA* (ME-
IDA*), since it uses some storage. Furthermore, IDA* repeats all previous operations in a
search when it iterates in a new threshold, which is necessary to operate with no storage.
By storing the leaf nodes of a previous iteration and using them as the starting position
of the next, IDA*'s efficiency is significantly improved (otherwise, in the last iteration it
would always have to visit every node in the tree).
Fringe search implements these improvements on IDA* by making use of a data structure
that is more or less two lists16 to iterate over the frontier or fringe of the search tree. One
list, now, stores the current iteration, and the other list, later, stores the immediate next
iteration. So from the root node of the search tree, now will be the root and later will be
empty. Then the algorithm takes one of two actions: If ƒ(head) is greater than the current
threshold, remove head from now and append it to the end of later; i.e. save head for the
next iteration. Otherwise, if ƒ(head) is less than or equal to the threshold, expand head and
discard head, consider its children, adding them to the beginning of now. At the end of an
iteration, the threshold is increased, the later list becomes the now list, and later is emptied.
An important difference here between fringe and A* is that the contents of the lists in fringe
do not necessarily have to be sorted - a significant gain over A*, which requires the often
expensive maintenance of order in its open list. Unlike A*, however, fringe will have to
visit the same nodes repeatedly, but the cost for each such visit is constant compared to
the worst-case logarithmic time of sorting the list in A*.

7 https://en.wikipedia.org/wiki/Computer_science
8 https://en.wikipedia.org/wiki/Graph_search_algorithm
9 https://en.wikipedia.org/wiki/Node_(graph_theory)
10 https://en.wikipedia.org/wiki/Goal_node
11 https://en.wikipedia.org/wiki/A*_search_algorithm
12 https://en.wikipedia.org/wiki/Iterative_deepening_A*
13 https://en.wikipedia.org/wiki/Heuristic_algorithm
14 https://en.wikipedia.org/wiki/Recursion_(computer_science)
15 https://en.wikipedia.org/wiki/Depth-first_search
16 https://en.wikipedia.org/wiki/List_(computing)

985

84.1 Pseudocode

Both lists are implemented in one doubly linked list, where nodes that precede the current
node are the later portion and all the rest are the now list. Using an array of pre-allocated
nodes in the list for each node in the grid, access time to nodes in the list is reduced to a
constant. Similarly, a marker array allows lookup of a node in the list to be done in constant
time. g is stored as a hash table, and a last-marker array is stored for constant-time lookup
of whether or not a node has been visited before and whether a cache entry is valid.
init(start, goal)
fringe F = start
cache C[start] = (0, null)
flimit = h(start)
found = false

while (found == false) AND (F not empty)


fmin = ∞
for node in F, from left to right
(g, parent) = C[node]
f = g + h(node)
if f > flimit
fmin = min(f, fmin)
continue
if node == goal
found = true
break
for child in children(node), from right to left
g_child = g + cost(node, child)
if C[child] != null
(g_cached, parent) = C[child]
if g_child >= g_cached
continue
if child in F
remove child from F
insert child in F past node
C[child] = (g_child, node)
remove node from F
flimit = fmin

if found == true
reverse_path(goal)

Reverse pseudo-code.

reverse_path(node)
(g, parent) = C[node]
if parent != null
reverse_path(parent)
print node
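The pseudocode above can be exercised as runnable Python. The sketch below is illustrative, not the paper's implementation: it uses a plain list with index bookkeeping in place of the doubly linked list, and takes the graph as caller-supplied successors and h functions (hypothetical names):

```python
def fringe_search(start, goal, successors, h):
    """successors(n) -> iterable of (child, step_cost); h(n) -> heuristic."""
    fringe = [start]            # "now" is fringe[i:], "later" is fringe[:i]
    cache = {start: (0, None)}  # node -> (g, parent)
    flimit = h(start)
    found = False
    while not found and fringe:
        fmin = float("inf")
        i = 0
        while i < len(fringe):  # scan the fringe left to right
            node = fringe[i]
            g, _ = cache[node]
            if g + h(node) > flimit:   # defer node to the next iteration
                fmin = min(g + h(node), fmin)
                i += 1
                continue
            if node == goal:
                found = True
                break
            # insert children just past node; reversed so their order is kept
            for child, step in reversed(list(successors(node))):
                g_child = g + step
                if child in cache and g_child >= cache[child][0]:
                    continue
                if child in fringe:    # re-position an already-queued node
                    j = fringe.index(child)
                    del fringe[j]
                    if j < i:
                        i -= 1
                fringe.insert(i + 1, child)
                cache[child] = (g_child, node)
            del fringe[i]              # expand-and-discard node
        flimit = fmin                  # raise the threshold for the next pass
    if not found:
        return None
    path, node = [], goal
    while node is not None:            # walk parents back to the start
        path.append(node)
        node = cache[node][1]
    return path[::-1]
```

For example, on a line of nodes 0…5 with unit step costs and h(n) = |goal − n|, fringe_search(0, 3, ...) returns the path [0, 1, 2, 3].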

84.2 Experiments

When tested on grid-based environments typical of computer games including impassable


obstacles, fringe outperformed A* by some 10 percent to 40 percent, depending on use of
tiles or octiles. Possible further improvements include use of a data structure that lends
itself more easily to caches.

986

84.3 References
• Björnsson, Yngvi; Enzenberger, Markus; Holte, Robert C.; Schaeffer, Jonathan. Fringe
Search: Beating A* at Pathfinding on Game Maps. Proceedings of the 2005 IEEE Sym-
posium on Computational Intelligence and Games (CIG05). Essex University, Colchester,
Essex, UK, 4−6 April, 2005. IEEE 2005. 17

84.4 External links


• Jesús Manuel Mager Hois's implementation of Fringe Search in C 18

17 https://web.archive.org/web/20090219220415/http://www.cs.ualberta.ca/~games/pathfind/publications/cig2005.pdf
18 https://github.com/pywirrarika/fringesearch

987
85 Girvan–Newman algorithm

The Girvan–Newman algorithm (named after Michelle Girvan1 and Mark Newman2 ) is
a hierarchical method used to detect communities3 in complex systems4 .[1]

85.1 Edge betweenness and community structure

The Girvan–Newman algorithm detects communities by progressively removing edges from
the original network. The connected components of the remaining network are the
communities. Instead of trying to construct a measure that tells us which edges are the most
central to communities, the Girvan–Newman algorithm focuses on edges that are most likely
”between” communities.
Vertex betweenness5 is an indicator of highly central6 nodes in networks. For any node i,
vertex betweenness is defined as the number of shortest paths between pairs of nodes that
run through it. It is relevant to models where the network modulates transfer of goods
between known start and end points, under the assumption that such transfer seeks the
shortest available route.
The Girvan–Newman algorithm extends this definition to the case of edges, defining the
”edge betweenness” of an edge as the number of shortest paths between pairs of nodes that
run along it. If there is more than one shortest path between a pair of nodes, each path
is assigned equal weight such that the total weight of all of the paths is equal to unity. If
a network contains communities or groups that are only loosely connected by a few inter-
group edges, then all shortest paths between different communities must go along one of
these few edges. Thus, the edges connecting communities will have high edge betweenness
(at least one of them). By removing these edges, the groups are separated from one another
and so the underlying community structure of the network is revealed.
The algorithm's steps for community detection are summarized below:
1. The betweenness of all existing edges in the network is calculated first.
2. The edge(s) with the highest betweenness are removed.
3. The betweenness of all edges affected by the removal is recalculated.
4. Steps 2 and 3 are repeated until no edges remain.
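The steps above can be sketched in Python. This is an illustrative implementation, not the authors' code: edge betweenness is computed with a Brandes-style BFS accumulation (valid for unweighted graphs), and edges are removed until the network first splits into more components.

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    """Edge betweenness for an unweighted, undirected graph given as
    {node: set(neighbours)}, via BFS shortest-path counting per source."""
    bet = defaultdict(float)
    for s in adj:
        dist, sigma = {s: 0}, defaultdict(float)
        sigma[s] = 1.0
        preds, order = defaultdict(list), []
        q = deque([s])
        while q:                            # BFS counting shortest paths
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:  # v lies on a shortest path to w
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)
        for w in reversed(order):           # accumulate pair dependencies
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                bet[frozenset((v, w))] += c
                delta[v] += c
    # each unordered pair was counted once from each endpoint, so halve
    return {e: b / 2 for e, b in bet.items()}

def girvan_newman_split(adj):
    """Remove highest-betweenness edges (recomputing each time, as the
    algorithm requires) until the graph splits; mutates adj."""
    def components():
        seen, comps = set(), []
        for s in adj:
            if s in seen:
                continue
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend(adj[v])
            seen |= comp
            comps.append(comp)
        return comps
    start = len(components())
    while len(components()) == start:
        eb = edge_betweenness(adj)
        u, v = max(eb, key=eb.get)   # the most "between" edge
        adj[u].discard(v)
        adj[v].discard(u)
    return components()
```

On two triangles joined by a single bridge, the bridge carries all nine cross-community shortest paths (betweenness 9), so it is removed first and the two triangles emerge as communities.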

1 https://en.wikipedia.org/w/index.php?title=Michelle_Girvan&action=edit&redlink=1
2 https://en.wikipedia.org/wiki/Mark_Newman
3 https://en.wikipedia.org/wiki/Community_structure
4 https://en.wikipedia.org/wiki/Complex_system
5 https://en.wikipedia.org/wiki/Betweenness_centrality
6 https://en.wikipedia.org/wiki/Centrality

989

Recalculating only the betweennesses of the edges affected by a removal may lessen the
running time of the algorithm's simulation in computers. However, the betweenness
centrality must be recalculated with each step, or severe errors occur. The reason is that
the network adapts itself to the new conditions set after the edge removal. For instance, if
two communities are connected by more than one edge, then there is no guarantee that all
of these edges will have high betweenness. According to the method, we know that at least
one of them will, but nothing more than that is known. By recalculating betweennesses after
the removal of each edge, it is ensured that at least one of the remaining edges between two
communities will always have a high value.
The end result of the Girvan–Newman algorithm is a dendrogram7 . As the Girvan–Newman
algorithm runs, the dendrogram is produced from the top down (i.e. the network splits
up into different communities with the successive removal of links). The leaves of the
dendrogram are individual nodes.

85.2 See also


• Closeness8
• Hierarchical clustering9
• Modularity10

85.3 References
1. Girvan M. and Newman M. E. J., Community structure in social and biological net-
works11 , Proc. Natl. Acad. Sci. USA 99, 7821–7826 (2002)

7 https://en.wikipedia.org/wiki/Dendrogram
8 https://en.wikipedia.org/wiki/Closeness_(mathematics)
9 https://en.wikipedia.org/wiki/Hierarchical_clustering
10 https://en.wikipedia.org/wiki/Modularity_(networks)
11 https://dx.doi.org/10.1073/pnas.122653799

990
86 Goal node (computer science)


In computer science7 , a goal node is a node8 in a graph9 that meets defined criteria for
success or termination.
Heuristic10 artificial intelligence11 algorithms, like A*12 and B*13 , attempt to reach such
nodes in optimal time by defining the distance to the goal node. When the goal node is
reached, A* defines the distance to the goal node as 0 and all other nodes' distances as
positive values.

86.1 References
• N.J. Nilsson Principles of Artificial Intelligence (1982 Birkhäuser) p. 63

86.2 See also


• Tree traversal14

7 https://en.wikipedia.org/wiki/Computer_science
8 https://en.wikipedia.org/wiki/Node_(computer_science)
9 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
10 https://en.wikipedia.org/wiki/Heuristic_algorithm
11 https://en.wikipedia.org/wiki/Artificial_intelligence
12 https://en.wikipedia.org/wiki/A*_search_algorithm
13 https://en.wikipedia.org/wiki/B*
14 https://en.wikipedia.org/wiki/Tree_traversal

991



992
87 Gomory–Hu tree

In combinatorial optimization1 , the Gomory–Hu tree[1] of an undirected graph with
capacities is a weighted tree2 that represents the minimum s-t cuts for all s-t pairs in the graph.
The Gomory–Hu tree can be constructed in | V | − 1 maximum flow3 computations.

87.1 Definition

Let G = ((VG , EG ), c) be an undirected graph, where c(u,v) is the capacity of the edge
(u,v).
Denote the minimum capacity of an s-t cut by λst for each s, t ∈VG .
Let T = (VT ,ET ) be a tree with VT = VG , denote the set of edges in an s-t path by Pst
for each s,t ∈VT .
Then T is said to be a Gomory–Hu tree of G if
λst = mine∈Pst c(Se , Te ) for all s, t ∈VG ,
where
1. Se and Te are the two connected components of T∖{e} in the sense that (Se , Te ) form
an s-t cut in G, and
2. c(Se , Te ) is the capacity of the cut in G.

87.2 Algorithm

Gomory–Hu Algorithm
Input: A weighted undirected graph G = ((VG , EG ), c)
Output: A Gomory–Hu Tree T = (VT , ET ).
1. Set VT = {VG } and ET = ∅.
2. Choose some X∈VT with | X | ≥2 if such X exists. Otherwise, go to step 6.
3. For each connected component C = (VC , EC ) in T∖X. Let SC = ∪vT ∈VC vT . Let S = {
SC | C is a connected component in T∖X}.

1 https://en.wikipedia.org/wiki/Combinatorial_optimization
2 https://en.wikipedia.org/wiki/Tree_(graph_theory)
3 https://en.wikipedia.org/wiki/Maximum_flow_problem

993

Contract the components to form G' = ((VG' , EG' ), c'), where


VG' = X ∪S.
EG' = EG |X×X ∪ {(u, SC ) ∈ X×S | (u,v)∈EG for some v∈SC } ∪ {(SC1 , SC2 ) ∈ S×S | (u,v)∈EG
for some u∈SC1 and v∈SC2 }.
c' : VG' ×VG' →R+ is the capacity function defined as,
1. if (u,SC )∈EG |X×S , c'(u,SC ) = Σv∈SC :(u,v)∈EG c(u,v),
2. if (SC1 ,SC2 )∈EG |S×S , c'(SC1 ,SC2 ) = Σ(u,v)∈EG :u∈SC1 ∧v∈SC2 c(u,v),
3. c'(u,v) = c(u,v) otherwise.
4. Choose two vertices s, t ∈X and find a minimum s-t cut (A',B') in G'.
Set A = (∪SC ∈A'∩S SC ) ∪ (A' ∩ X) and B = (∪SC ∈B'∩S SC ) ∪ (B' ∩ X).
5. Set VT = (VT ∖X) ∪ {A ∩X, B ∩X}.
For each e = (X, Y) ∈ ET do
If Y⊂A, set e' = (A ∩X, Y), else set e' = (B ∩X, Y).
Set ET = (ET ∖{e}) ∪ {e'} and w(e') = w(e).
Set ET = ET ∪ {(A∩X, B∩X)}.
Set w((A∩X, B∩X)) = c'(A', B').
Go to step 2.
6. Replace each {v} ∈ VT by v and each ({u},{v}) ∈ ET by (u,v). Output T.
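The contraction steps above are intricate to implement directly. A well-known simplification due to Gusfield builds an equivalent flow tree (same minimum-cut values λst , possibly a different tree) using |V| − 1 max-flow computations on the original graph, with no contractions. The sketch below follows that variant, with illustrative helper names and Edmonds–Karp as the max-flow subroutine:

```python
from collections import deque, defaultdict

def min_cut(cap, s, t):
    """Edmonds–Karp max flow; returns (flow value, source side of a min cut)."""
    n = len(cap)
    res = [row[:] for row in cap]  # residual capacities

    def bfs():
        par = [-1] * n
        par[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if par[v] == -1 and res[u][v] > 0:
                    par[v] = u
                    q.append(v)
        return par

    flow = 0
    while True:
        par = bfs()
        if par[t] == -1:           # t unreachable: the reachable set is a min cut
            return flow, {v for v in range(n) if par[v] != -1}
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, res[par[v]][v])
            v = par[v]
        v = t
        while v != s:              # augment along the BFS path
            res[par[v]][v] -= bottleneck
            res[v][par[v]] += bottleneck
            v = par[v]
        flow += bottleneck

def gusfield_tree(cap):
    """Equivalent flow tree: parent[] and w[] with w[i] = lambda(i, parent[i])."""
    n = len(cap)
    parent, w = [0] * n, [0] * n
    for i in range(1, n):
        w[i], side = min_cut(cap, i, parent[i])
        for j in range(i + 1, n):  # re-hang later vertices on i's side
            if j in side and parent[j] == parent[i]:
                parent[j] = i
    return parent, w

def tree_min_cut(parent, w, s, t):
    """lambda(s, t) = minimum edge weight on the unique s-t tree path."""
    adj = defaultdict(list)
    for v in range(1, len(parent)):
        adj[v].append((parent[v], w[v]))
        adj[parent[v]].append((v, w[v]))
    best, q = {s: float("inf")}, deque([s])
    while q:
        u = q.popleft()
        for v, wt in adj[u]:
            if v not in best:
                best[v] = min(best[u], wt)
                q.append(v)
    return best[t]
```

On the 6-vertex example simulated below, the tree produced this way yields the same λst values as direct max-flow computations; e.g. λ(0, 2) = 8, matching the cut that isolates vertex 0 (capacity 1 + 7).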

87.3 Analysis

Using the submodular4 property of the capacity function c, one has


c(X) + c(Y) ≥ c(X ∩Y) + c(X ∪Y).
Then it can be shown that the minimum s-t cut in G' is also a minimum s-t cut in G for
any s, t ∈X.
To show that for all (P, Q) ∈ ET , w(P,Q) = λpq for some p ∈P, q ∈Q throughout the algo-
rithm, one makes use of the following Lemma,
For any i, j, k in VG , λik ≥ min(λij , λjk ).
The Lemma can be used again repeatedly to show that the output T satisfies the properties
of a Gomory–Hu Tree.

4 https://en.wikipedia.org/wiki/Submodular

994

87.4 Example

The following is a simulation of the Gomory–Hu's algorithm, where


1. green circles are vertices of T.
2. red and blue circles are the vertices in G'.
3. grey vertices are the chosen s and t.
4. red and blue coloring represents the s-t cut.
5. dashed edges are the s-t cut-set.
6. A is the set of vertices circled in red and B is the set of vertices circled in blue.

G' T

Figure 214 Figure 215

1. Set VT = {VG } = { {0, 1, 2, 3, 4, 5} } and ET = ∅.


2. Since VT has only one vertex, choose X = VG = {0, 1, 2, 3, 4, 5}. Note that
| X | = 6 ≥2.

1.

Figure 216 Figure 217

3. Since T∖X = ∅, there is no contraction and therefore G' = G.


4. Choose s = 1 and t = 5. The minimum s-t cut (A', B') is ({0, 1, 2, 4}, {3,
5}) with c'(A', B') = 6.
Set A = {0, 1, 2, 4} and B = {3, 5}.
5. Set VT = (VT ∖X) ∪ {A ∩X, B ∩X} = { {0, 1, 2, 4}, {3, 5} }.
Set ET = { ({0, 1, 2, 4}, {3, 5}) }.
Set w( ({0, 1, 2, 4}, {3, 5}) ) = c'(A', B') = 6.
Go to step 2.
2. Choose X = {3, 5}. Note that | X | = 2 ≥2.

995

G' T

2.

Figure 218 Figure 219

3. {0, 1, 2, 4} is the connected component in T∖X and thus S = { {0, 1, 2, 4}


}.
Contract {0, 1, 2, 4} to form G', where
c'( (3, {0, 1, 2 ,4}) ) = c( (3, 1) ) + c( (3, 4) ) = 4.
c'( (5, {0, 1, 2, 4}) ) = c( (5, 4) ) = 2.
c'( (3, 5)) = c( (3, 5) ) = 6.
4. Choose s = 3, t = 5. The minimum s-t cut (A', B') in G' is ( {{0, 1, 2, 4},
3}, {5} ) with c'(A', B') = 8.
Set A = {0, 1, 2, 3, 4} and B = {5}.
5. Set VT = (VT ∖X) ∪ {A ∩X, B ∩X} = { {0, 1, 2, 4}, {3}, {5} }.
Since (X, {0, 1, 2, 4}) ∈ ET and {0, 1, 2, 4} ⊂ A, replace it with (A ∩X, Y)
= ({3}, {0, 1, 2 ,4}).
Set ET = { ({3}, {0, 1, 2 ,4}), ({3}, {5}) } with
w(({3}, {0, 1, 2 ,4})) = w((X, {0, 1, 2, 4})) = 6.
w(({3}, {5})) = c'(A', B') = 8.
Go to step 2.
2. Choose X = {0, 1, 2, 4}. Note that | X | = 4 ≥2.

3.

Figure 220 Figure 221

996

G' T

3. { {3}, {5} } is the connected component in T∖X and thus S = { {3, 5} }.


Contract {3, 5} to form G', where
c'( (1, {3, 5}) ) = c( (1, 3) ) = 3.
c'( (4, {3, 5}) ) = c( (4, 3) ) + c( (4, 5) ) = 3.
c'(u,v) = c(u,v) for all u,v ∈X.
4. Choose s = 1, t = 2. The minimum s-t cut (A', B') in G' is ( {1, {3, 5}, 4},
{0, 2} ) with c'(A', B') = 6.
Set A = {1, 3, 4, 5} and B = {0, 2}.
5. Set VT = (VT ∖X) ∪ {A ∩X, B ∩X} = { {3}, {5}, {1, 4}, {0, 2} }.
Since (X, {3}) ∈ ET and {3} ⊂ A, replace it with (A ∩X, Y) = ({1, 4}, {3}).
Set ET = { ({1, 4}, {3}), ({3}, {5}), ({0, 2}, {1, 4}) } with
w(({1, 4}, {3})) = w((X, {3})) = 6.
w(({0, 2}, {1, 4})) = c'(A', B') = 6.
Go to step 2.
2. Choose X = {1, 4}. Note that |X| = 2 ≥ 2.

Iteration 4: Figures 222 (G') and 223 (T)

3. { {3}, {5} } and { {0, 2} } are the connected components in T∖X, and thus S = { {0, 2}, {3, 5} }.
Contract {0, 2} and {3, 5} to form G', where
c'( (1, {3, 5}) ) = c( (1, 3) ) = 3.
c'( (4, {3, 5}) ) = c( (4, 3) ) + c( (4, 5) ) = 3.
c'( (1, {0, 2}) ) = c( (1, 0) ) + c( (1, 2) ) = 2.
c'( (4, {0, 2}) ) = c( (4, 2) ) = 4.
c'(u,v) = c(u,v) for all u,v ∈X.
4. Choose s = 1, t = 4. The minimum s-t cut (A', B') in G' is ( {1, {3, 5}},
{{0, 2}, 4} ) with c'(A', B') = 7.
Set A = {1, 3, 5} and B = {0, 2, 4}.
5. Set VT = (VT ∖X) ∪ {A ∩X, B ∩X} = { {3}, {5}, {0, 2}, {1}, {4} }.
Since (X, {3}) ∈ ET and {3} ⊂ A, replace it with (A ∩X, Y) = ({1}, {3}).
Since (X, {0, 2}) ∈ ET and {0, 2} ⊂ B, replace it with (B ∩X, Y) = ({4}, {0,
2}).
Set ET = { ({1}, {3}), ({3}, {5}), ({4}, {0, 2}), ({1}, {4}) } with
w(({1}, {3})) = w((X, {3})) = 6.
w(({4}, {0, 2})) = w((X, {0, 2})) = 6.
w(({1}, {4})) = c'(A', B') = 7.
Go to step 2.
2. Choose X = {0, 2}. Note that |X| = 2 ≥ 2.

Iteration 5: Figures 224 (G') and 225 (T)

3. { {1}, {3}, {4}, {5} } is the connected component in T∖X and thus S = {
{1, 3, 4, 5} }.
Contract {1, 3, 4, 5} to form G', where
c'( (0, {1, 3, 4, 5}) ) = c( (0, 1) ) = 1.
c'( (2, {1, 3, 4, 5}) ) = c( (2, 1) ) + c( (2, 4) ) = 5.
c'( (0, 2) ) = c( (0, 2) ) = 7.
4. Choose s = 0, t = 2. The minimum s-t cut (A', B') in G' is ( {0}, {2, {1, 3,
4, 5}} ) with c'(A', B') = 8.
Set A = {0} and B = {1, 2, 3, 4, 5}.
5. Set VT = (VT ∖X) ∪ {A ∩X, B ∩X} = { {3}, {5}, {1}, {4}, {0}, {2} }.
Since (X, {4}) ∈ ET and {4} ⊂ B, replace it with (B ∩X, Y) = ({2}, {4}).
Set ET = { ({1}, {3}), ({3}, {5}), ({2}, {4}), ({1}, {4}), ({0}, {2}) } with
w(({2}, {4})) = w((X, {4})) = 6.
w(({0}, {2})) = c'(A', B') = 8.
Go to step 2.
2. There does not exist X ∈ VT with |X| ≥ 2. Hence, go to step 6.

Iteration 6: Figure 226 (T)

6. Replace VT = { {3}, {5}, {1}, {4}, {0}, {2} } by {3, 5, 1, 4, 0, 2}.


Replace ET = { ({1}, {3}), ({3}, {5}), ({2}, {4}), ({1}, {4}), ({0}, {2}) }
by { (1, 3), (3, 5), (2, 4), (1, 4), (0, 2) }.
Output T. Note that exactly |V| − 1 = 6 − 1 = 5 min-cut computations are performed.

87.5 Implementations: Sequential and Parallel

Gusfield's algorithm can be used to find a Gomory–Hu tree without any vertex contraction, within the same running-time complexity, which simplifies the implementation of constructing a Gomory–Hu tree.
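As a concrete illustration (our own sketch, not the implementations cited below), Gusfield's algorithm can be driven by any max-flow routine; here a simple Edmonds–Karp max flow is used, and the sketch is run on the six-vertex graph of the worked example above, whose capacities can be read off the contraction formulas in the walkthrough.

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: shortest augmenting paths on a residual matrix.
    # Returns (flow value, residual matrix); the vertices reachable
    # from s in the residual form the s-side of a minimum s-t cut.
    n = len(cap)
    res = [row[:] for row in cap]
    flow = 0
    while True:
        pred = [-1] * n
        pred[s] = s
        q = deque([s])
        while q and pred[t] == -1:
            u = q.popleft()
            for v in range(n):
                if pred[v] == -1 and res[u][v] > 0:
                    pred[v] = u
                    q.append(v)
        if pred[t] == -1:
            return flow, res
        aug, v = float("inf"), t
        while v != s:
            aug = min(aug, res[pred[v]][v])
            v = pred[v]
        v = t
        while v != s:
            res[pred[v]][v] -= aug
            res[v][pred[v]] += aug
            v = pred[v]
        flow += aug

def gusfield_gomory_hu(cap):
    # Gusfield (1990): n-1 min-cut computations, no contraction.
    # For i >= 1 the output tree has an edge {i, parent[i]} of
    # capacity weight[i].
    n = len(cap)
    parent = [0] * n
    weight = [0] * n
    for s in range(1, n):
        t = parent[s]
        f, res = max_flow(cap, s, t)
        in_x = [False] * n          # X = side of s in the min cut
        in_x[s] = True
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if not in_x[v] and res[u][v] > 0:
                    in_x[v] = True
                    q.append(v)
        weight[s] = f
        for v in range(n):
            if v != s and in_x[v] and parent[v] == t:
                parent[v] = s
        if in_x[parent[t]]:
            parent[s] = parent[t]
            parent[t] = s
            weight[s] = weight[t]
            weight[t] = f
    return parent, weight

# The six-vertex graph from the example above (capacities recovered
# from the contraction formulas in the walkthrough).
n = 6
cap = [[0] * n for _ in range(n)]
for (u, v), c in {(0, 1): 1, (0, 2): 7, (1, 2): 1, (1, 3): 3, (1, 4): 2,
                  (2, 4): 4, (3, 4): 1, (3, 5): 6, (4, 5): 2}.items():
    cap[u][v] = cap[v][u] = c
parent, weight = gusfield_gomory_hu(cap)

# Defining property of a Gomory-Hu tree: the minimum edge weight on
# the tree path between u and v equals the u-v minimum cut in G.
adj = [[] for _ in range(n)]
for v in range(1, n):
    adj[v].append((parent[v], weight[v]))
    adj[parent[v]].append((v, weight[v]))

def tree_mincut(u, v):
    # Walk the (unique) tree path from u to v, tracking the minimum.
    stack = [(u, -1, float("inf"))]
    while stack:
        x, prev, m = stack.pop()
        if x == v:
            return m
        for y, w in adj[x]:
            if y != prev:
                stack.append((y, x, min(m, w)))
```

On this input the sketch happens to reproduce the edge weights of the tree constructed in the example; in general Gusfield's algorithm may output a different, but equally valid, Gomory–Hu tree.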


Andrew V. Goldberg5 and K. Tsioutsiouliklis implemented the Gomory–Hu algorithm and Gusfield's algorithm. Experimental results comparing these algorithms are reported in [2]. Source code is available here6 .
Cohen et al.[3] report results on two parallel implementations of Gusfield's algorithm using
OpenMP and MPI, respectively. Source code of these implementations is available here:
Parallel Cut Tree Algorithms Page7 .

87.6 History

The Gomory–Hu tree was introduced by R. E. Gomory8 and T. C. Hu in 1961.

87.7 Related concepts

In planar graphs9 , the Gomory–Hu tree is dual to the minimum weight cycle basis10 , in the
sense that the cuts of the Gomory–Hu tree are dual to a collection of cycles in the dual
graph11 that form a minimum-weight cycle basis.[4]

87.8 See also


• Cut (graph theory)12
• Max-flow min-cut theorem13
• Maximum flow problem14

87.9 References
1. G, R. E.15 ; H, T. C. (1961). ”M-  ”.
Journal of the Society for Industrial and Applied Mathematics. 9 (4): 551–570.
doi16 :10.1137/010904717 .

5 https://en.wikipedia.org/wiki/Andrew_V._Goldberg
6 https://web.archive.org/web/20130225053024/http://www.cs.princeton.edu/~kt/cut-tree/
7 https://archive.is/20130101000727/http://www.inf.ufpr.br/jaime/parallel-cuttree.html
8 https://en.wikipedia.org/wiki/Ralph_E._Gomory
9 https://en.wikipedia.org/wiki/Planar_graph
10 https://en.wikipedia.org/wiki/Cycle_basis
11 https://en.wikipedia.org/wiki/Dual_graph
12 https://en.wikipedia.org/wiki/Cut_(graph_theory)
13 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
14 https://en.wikipedia.org/wiki/Maximum_flow_problem
15 https://en.wikipedia.org/wiki/Ralph_Edward_Gomory
16 https://en.wikipedia.org/wiki/Doi_(identifier)
17 https://doi.org/10.1137%2F0109047


2. G, A. V.; T, K. (2001). ”C T A-


: A E S”. Journal of Algorithms. 38 (1): 51–83.
doi18 :10.1006/jagm.2000.113619 .
3. C, J.; L. A. R; F. S; R. C; A. G; E. P. D J.
(2011). ”P I  G' C T A”. Lec-
ture Notes in Computer Science (LNCS). 7016. Springer. 7016 (11th International
Conference Algorithms and Architectures for Parallel Processing (ICA3PP)): 258–269.
doi20 :10.1007/978-3-642-24650-0_2221 . ISBN22 978-3-642-24649-423 . ISSN24 0302-
974325 .
4. H, D.; M, R. (1994). ”T -    
       ”. SIAM J. Discrete Math.
7 (3): 403–418. doi26 :10.1137/S089548019017704227 ..
• D G (1990). ”V S M  A P N F
A”. SIAM J. Comput. 19 (1): 143–155. doi28 :10.1137/021900929 .
• B. H. K, J V (2008). ”8.6 G–H T”. Combinatorial Opti-
mization: Theory and Algorithms (Algorithms and Combinatorics, 21). Springer Berlin
Heidelberg. pp. 180–186. ISBN30 978-3-540-71844-431 .

18 https://en.wikipedia.org/wiki/Doi_(identifier)
19 https://doi.org/10.1006%2Fjagm.2000.1136
20 https://en.wikipedia.org/wiki/Doi_(identifier)
21 https://doi.org/10.1007%2F978-3-642-24650-0_22
22 https://en.wikipedia.org/wiki/ISBN_(identifier)
23 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-24649-4
24 https://en.wikipedia.org/wiki/ISSN_(identifier)
25 http://www.worldcat.org/issn/0302-9743
26 https://en.wikipedia.org/wiki/Doi_(identifier)
27 https://doi.org/10.1137%2FS0895480190177042
28 https://en.wikipedia.org/wiki/Doi_(identifier)
29 https://doi.org/10.1137%2F0219009
30 https://en.wikipedia.org/wiki/ISBN_(identifier)
31 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-71844-4

88 Graph bandwidth

In graph theory1 , the graph bandwidth problem is to label the n vertices vi of a graph G with distinct integers f(vi ) so that the quantity max{ |f(vi ) − f(vj )| : vi vj ∈ E } is minimized (E is the edge set of G).[1] The problem may be visualized as placing the vertices of a graph at distinct integer points along the x-axis so that the length of the longest edge is minimized. Such placement is called linear graph arrangement, linear graph layout or linear graph placement.[2]
The weighted graph bandwidth problem is a generalization wherein the edges are assigned weights wij and the cost function2 to be minimized is max{ wij |f(vi ) − f(vj )| : vi vj ∈ E }.
In terms of matrices, the (unweighted) graph bandwidth is the bandwidth3 of the symmetric
matrix4 which is the adjacency matrix5 of the graph. The bandwidth may also be defined
as one less than the maximum clique6 size in a proper interval7 supergraph of the given
graph, chosen to minimize its clique size (Kaplan & Shamir 19968 ).

88.1 Bandwidth formulas for some graphs

For several families of graphs, the bandwidth φ(G) is given by an explicit formula.
The bandwidth of a path graph9 Pn on n vertices is 1, and for a complete graph Kn we have φ(Kn ) = n − 1. For the complete bipartite graph10 Km,n ,
φ(Km,n ) = ⌊(m − 1)/2⌋ + n, assuming m ≥ n ≥ 1,
which was proved by Chvátal.[3] As a special case of this formula, the star graph11 Sk = Kk,1 on k + 1 vertices has bandwidth φ(Sk ) = ⌊(k − 1)/2⌋ + 1.
For the hypercube graph12 Qn on 2n vertices the bandwidth was determined by Harper
(1966)13 to be

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Loss_function
3 https://en.wikipedia.org/wiki/Bandwidth_(matrix_theory)
4 https://en.wikipedia.org/wiki/Symmetric_matrix
5 https://en.wikipedia.org/wiki/Adjacency_matrix
6 https://en.wikipedia.org/wiki/Maximum_clique
7 https://en.wikipedia.org/wiki/Proper_interval_graph
8 #CITEREFKaplanShamir1996
9 https://en.wikipedia.org/wiki/Path_graph
10 https://en.wikipedia.org/wiki/Complete_bipartite_graph
11 https://en.wikipedia.org/wiki/Star_graph
12 https://en.wikipedia.org/wiki/Hypercube_graph
13 #CITEREFHarper1966


φ(Qn ) = ∑_{m=0}^{n−1} C(m, ⌊m/2⌋),
where C(m, ⌊m/2⌋) denotes the binomial coefficient ”m choose ⌊m/2⌋”.

Chvátalová showed[4] that the bandwidth of the m × n square grid graph14 Pm × Pn , that is, the Cartesian product15 of two path graphs on m and n vertices, is equal to min{m,n}.
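The closed-form values above can be spot-checked by exhaustive search over all labelings (a standalone sketch; trying all n! labelings is only feasible for toy sizes, and the graph encodings below are ours):

```python
from itertools import permutations

def bandwidth(n, edges):
    # Brute force over all n! labelings f: V -> {0, ..., n-1}.
    return min(max(abs(f[u] - f[v]) for u, v in edges)
               for f in permutations(range(n)))

# Path P5: phi = 1.
path = [(i, i + 1) for i in range(4)]
# Complete graph K4: phi = n - 1 = 3.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
# Complete bipartite K_{3,2}, parts {0,1,2} and {3,4}:
# phi = floor((3 - 1)/2) + 2 = 3, per Chvatal's formula.
k32 = [(u, v) for u in range(3) for v in range(3, 5)]
# 2x3 grid P2 x P3 (rows {0,1,2} and {3,4,5}): phi = min(2, 3) = 2.
grid = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
```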

88.2 Bounds

The bandwidth of a graph can be bounded in terms of various other graph parameters. For
instance, letting χ(G) denote the chromatic number16 of G,
φ(G) ≥ χ(G) − 1;
letting diam(G) denote the diameter17 of G, the following inequalities hold:[5]
⌈(n − 1)/ diam(G)⌉ ≤ φ(G) ≤ n − diam(G),
where n is the number of vertices in G.
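For a small example, the 6-cycle C6 has bandwidth 2 and diameter 3, and both inequalities hold: ⌈5/3⌉ = 2 ≤ 2 ≤ 6 − 3 = 3. A standalone brute-force check (our own sketch, not part of the article):

```python
from itertools import permutations
from collections import deque
import math

def bandwidth(n, edges):
    # Exhaustive search over all labelings (toy sizes only).
    return min(max(abs(f[u] - f[v]) for u, v in edges)
               for f in permutations(range(n)))

def diameter(n, edges):
    # Largest BFS distance over all vertex pairs.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    diam = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        diam = max(diam, max(dist.values()))
    return diam

# The 6-cycle C6.
n, edges = 6, [(i, (i + 1) % 6) for i in range(6)]
phi, d = bandwidth(n, edges), diameter(n, edges)
```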
If a graph G has bandwidth k, then its pathwidth18 is at most k (Kaplan & Shamir 199619 ),
and its tree-depth20 is at most k log(n/k) (Gruber 201221 ). In contrast, as noted in the
previous section, the star graph Sk , a structurally very simple example of a tree22 , has
comparatively large bandwidth. Observe that the pathwidth23 of Sk is 1, and its tree-depth
is 2.
Some graph families of bounded degree have sublinear bandwidth: Chung (1988)24 proved that if T is a tree of maximum degree at most ∆, then
φ(T ) ≤ 5n / log∆ n.
More generally, for planar graphs25 of maximum degree at most ∆, a similar bound holds (cf. Böttcher et al. 201026 ):
φ(G) ≤ 20n / log∆ n.

14 https://en.wikipedia.org/wiki/Lattice_graph
15 https://en.wikipedia.org/wiki/Cartesian_product_of_graphs
16 https://en.wikipedia.org/wiki/Chromatic_number
17 https://en.wikipedia.org/wiki/Diameter_(graph_theory)
18 https://en.wikipedia.org/wiki/Pathwidth
19 #CITEREFKaplanShamir1996
20 https://en.wikipedia.org/wiki/Tree-depth
21 #CITEREFGruber2012
22 https://en.wikipedia.org/wiki/Tree_(graph_theory)
23 https://en.wikipedia.org/wiki/Pathwidth
24 #CITEREFChung1988
25 https://en.wikipedia.org/wiki/Planar_graph
26 #CITEREFB%C3%B6ttcherPruessmannTarazW%C3%BCrfl2010


88.3 Computing the bandwidth

Both the unweighted and weighted versions are special cases of the quadratic bottleneck
assignment problem27 . The bandwidth problem is NP-hard28 , even for some special cases.[6]
Regarding the existence of efficient approximation algorithms29 , it is known that the band-
width is NP-hard to approximate30 within any constant, and this even holds when the input
graphs are restricted to caterpillar trees31 with maximum hair length 2 (Dubey, Feige &
Unger 201032 ). For the case of dense graphs, a 3-approximation algorithm was designed by
Karpinski, Wirtgen & Zelikovsky (1997)33 . On the other hand, a number of polynomially solvable special cases are known.[2] A heuristic34 algorithm for obtaining linear graph layouts of low bandwidth is the Cuthill–McKee algorithm35 . A fast multilevel algorithm for graph bandwidth computation was proposed in [7].
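A minimal sketch of the Cuthill–McKee heuristic mentioned above (one common variant: BFS from a minimum-degree start vertex, visiting neighbors in order of increasing degree; the example graph and helper names are ours):

```python
from collections import deque

def cuthill_mckee(n, adj):
    # Returns a vertex ordering; labeling vertices by their position
    # in this ordering tends to produce a low-bandwidth layout.
    deg = [len(adj[v]) for v in range(n)]
    order, seen = [], [False] * n
    for start in sorted(range(n), key=deg.__getitem__):
        if seen[start]:
            continue
        seen[start] = True
        q = deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            for v in sorted(adj[u], key=deg.__getitem__):
                if not seen[v]:
                    seen[v] = True
                    q.append(v)
    return order

def layout_bandwidth(n, edges, order):
    # Bandwidth of the layout that puts order[k] at position k.
    pos = {v: k for k, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

# A path whose vertices carry scrambled names: 0-2-4-1-3.
edges = [(0, 2), (2, 4), (4, 1), (1, 3)]
adj = [[2], [4, 3], [0, 4], [1], [2, 1]]
order = cuthill_mckee(5, adj)
```

On this toy input the identity labeling has bandwidth 3, while the Cuthill–McKee ordering recovers the path structure and achieves bandwidth 1; being a heuristic, it offers no such guarantee in general.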

88.4 Applications

Interest in this problem comes from several application areas.


One area is sparse matrix36 /band matrix37 handling, and general algorithms from this area,
such as Cuthill–McKee algorithm38 , may be applied to find approximate solutions for the
graph bandwidth problem.
Another application domain is in electronic design automation39 . In standard cell40 design
methodology, typically standard cells have the same height, and their placement41 is arranged in a number of rows. In this context, the graph bandwidth problem models the problem
of placement of a set of standard cells in a single row with the goal of minimizing the
maximal propagation delay42 (which is assumed to be proportional to wire length).

88.5 See also


• Pathwidth43 , a different NP-complete optimization problem involving linear layouts of
graphs.

27 https://en.wikipedia.org/wiki/Quadratic_bottleneck_assignment_problem
28 https://en.wikipedia.org/wiki/NP-hard
29 https://en.wikipedia.org/wiki/Approximation_algorithm
30 https://en.wikipedia.org/wiki/Hardness_of_approximation
31 https://en.wikipedia.org/wiki/Caterpillar_tree
32 #CITEREFDubeyFeigeUnger2010
33 #CITEREFKarpinskiWirtgenZelikovsky1997
34 https://en.wikipedia.org/wiki/Heuristic
35 https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm
36 https://en.wikipedia.org/wiki/Sparse_matrix
37 https://en.wikipedia.org/wiki/Band_matrix
38 https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm
39 https://en.wikipedia.org/wiki/Electronic_design_automation
40 https://en.wikipedia.org/wiki/Standard_cell
41 https://en.wikipedia.org/wiki/Placement_(EDA)
42 https://en.wikipedia.org/wiki/Propagation_delay
43 https://en.wikipedia.org/wiki/Pathwidth


88.6 References
1. (Chinn et al. 198244 )
2. ”Coping with the NP-Hardness of the Graph Bandwidth Problem”, Uriel Feige, Lecture
Notes in Computer Science45 , Volume 1851, 2000, pp. 129-145, doi46 :10.1007/3-540-
44985-X_247
3. A remark on a problem of Harary. V. Chvátal, Czechoslovak Mathematical
Journal 20(1):109−111, 1970. http://dml.cz/dmlcz/100949
4. Optimal Labelling of a product of two paths. J. Chvatálová, Discrete Mathematics 11,
249−253, 1975.
5. Chinn et al. 198248
6. Garey–Johnson: GT40
7. I S  D R  A B (2008). ”M A-
  L O P”. ACM Journal of Experimental Algo-
rithmics. 13: 1.4–1.20. doi49 :10.1145/1412228.141223250 .
• B, J.; P, K. P.; T, A.; W, A. (2010). ”B,
, ,     -
”. European Journal of Combinatorics. 31: 1217–1227. arXiv51 :0910.301452 .
doi53 :10.1016/j.ejc.2009.10.01054 .CS1 maint: ref=harv (link55 )
• C, P. Z.56 ; C, J.; D, A. K.57 ; G, N. E. (1982). ”T -
     — ”. Journal of Graph Theory.
6: 223–254. doi58 :10.1002/jgt.319006030259 .CS1 maint: ref=harv (link60 )
• C, F R. K.61 (1988), ”L  G”,  B, L W.;
W, R J. (.), Selected Topics in Graph Theory62 (PDF), A P,
. 151–168, ISBN63 978-0-12-086203-064 CS1 maint: ref=harv (link65 )

44 #CITEREFChinnChv%C3%A1talov%C3%A1DewdneyGibbs1982
45 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
46 https://en.wikipedia.org/wiki/Doi_(identifier)
47 https://doi.org/10.1007%2F3-540-44985-X_2
48 #CITEREFChinnChv%C3%A1talov%C3%A1DewdneyGibbs1982
49 https://en.wikipedia.org/wiki/Doi_(identifier)
50 https://doi.org/10.1145%2F1412228.1412232
51 https://en.wikipedia.org/wiki/ArXiv_(identifier)
52 http://arxiv.org/abs/0910.3014
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1016%2Fj.ejc.2009.10.010
56 https://en.wikipedia.org/wiki/Phyllis_Chinn
57 https://en.wikipedia.org/wiki/Alexander_Dewdney
58 https://en.wikipedia.org/wiki/Doi_(identifier)
59 https://doi.org/10.1002%2Fjgt.3190060302
61 https://en.wikipedia.org/wiki/Fan_Chung
62 http://www.math.ucsd.edu/~fan/mypaps/fanpap/86log.PDF
63 https://en.wikipedia.org/wiki/ISBN_(identifier)
64 https://en.wikipedia.org/wiki/Special:BookSources/978-0-12-086203-0

1006
External links

• D, C.; F, U.; U, W. (2010). ”H   -
  ”. Journal of Computer and System Sciences. 77: 62–90.
doi66 :10.1016/j.jcss.2010.06.00667 .CS1 maint: ref=harv (link68 )
• G, M.R.69 ; J, D.S.70 (1979). Computers and Intractability: A Guide to
the Theory of NP-Completeness71 . N Y: W.H. F. ISBN72 0-7167-1045-
573 .CS1 maint: ref=harv (link74 )
• G, H (2012), ”O B S, T, 
C R”, Journal of Combinatorics, 3 (4): 669–682, arXiv75 :1012.134476 ,
doi77 :10.4310/joc.2012.v3.n4.a578 CS1 maint: ref=harv (link79 )
• H, L. (1966). ”O     
”. Journal of Combinatorial Theory. 1: 385–393. doi80 :10.1016/S0021-
9800(66)80059-581 .CS1 maint: ref=harv (link82 )
• K, H; S, R (1996), ”P, ,  
       ”, SIAM Journal on
Computing83 , 25 (3): 540–561, doi84 :10.1137/s009753979325814385 CS1 maint: ref=harv
(link86 )
• K, M; W, J; Z, A (1997). ”A A-
 A   B P  D G”87 . Elec-
tronic Colloquium on Computational Complexity. 4 (17).CS1 maint: ref=harv (link88 )

88.7 External links


• Minimum bandwidth problem89 , in: Pierluigi Crescenzi and Viggo Kann (eds.), A com-
pendium of NP optimization problems. Accessed May 26, 2010.

66 https://en.wikipedia.org/wiki/Doi_(identifier)
67 https://doi.org/10.1016%2Fj.jcss.2010.06.006
69 https://en.wikipedia.org/wiki/Michael_Garey
70 https://en.wikipedia.org/wiki/David_S._Johnson
https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_
71
NP-Completeness
72 https://en.wikipedia.org/wiki/ISBN_(identifier)
73 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-1045-5
75 https://en.wikipedia.org/wiki/ArXiv_(identifier)
76 http://arxiv.org/abs/1012.1344
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.4310%2Fjoc.2012.v3.n4.a5
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1016%2FS0021-9800%2866%2980059-5
83 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
84 https://en.wikipedia.org/wiki/Doi_(identifier)
85 https://doi.org/10.1137%2Fs0097539793258143
87 http://eccc.hpi-web.de/report/1997/017/
89 http://www.csc.kth.se/~viggo/wwwcompendium/node53.html

89 Graph embedding



In topological graph theory1 , an embedding (also spelled imbedding) of a graph2 G on a
surface3 Σ is a representation of G on Σ in which points of Σ are associated with vertices4
and simple arcs (homeomorphic5 images of [0, 1]) are associated with edges6 in such a way
that:
• the endpoints of the arc associated with an edge e are the points associated with the end
vertices of e,
• no arcs include points associated with other vertices,
• two arcs never intersect at a point which is interior to either of the arcs.
Here a surface is a compact7 , connected8 2-manifold9 .
Informally, an embedding of a graph into a surface is a drawing of the graph on the surface
in such a way that its edges may intersect only at their endpoints. It is well known that
any finite graph can be embedded in 3-dimensional Euclidean space R3 [1] . A planar graph10
is one that can be embedded in 2-dimensional Euclidean space R2 .
Often, an embedding is regarded as an equivalence class (under homeomorphisms of Σ)
of representations of the kind just described.
Some authors define a weaker version of the definition of ”graph embedding” by omitting
the non-intersection condition for edges. In such contexts the stricter definition is described
as ”non-crossing graph embedding”.[2]
This article deals only with the strict definition of graph embedding. The weaker definition
is discussed in the articles ”graph drawing11 ” and ”crossing number12 ”.

1 https://en.wikipedia.org/wiki/Topological_graph_theory
2 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
3 https://en.wikipedia.org/wiki/Surface_(mathematics)
4 https://en.wikipedia.org/wiki/Graph_theory
5 https://en.wikipedia.org/wiki/Homeomorphism
6 https://en.wikipedia.org/wiki/Graph_theory
7 https://en.wikipedia.org/wiki/Compact_space
8 https://en.wikipedia.org/wiki/Connected_space
9 https://en.wikipedia.org/wiki/Manifold
10 https://en.wikipedia.org/wiki/Planar_graph
11 https://en.wikipedia.org/wiki/Graph_drawing
12 https://en.wikipedia.org/wiki/Crossing_number_(graph_theory)


89.1 Terminology

If a graph G is embedded on a closed surface Σ, the complement of the union of the points
and arcs associated with the vertices and edges of G is a family of regions (or faces13 ).[3]
A 2-cell embedding, cellular embedding or map is an embedding in which every face
is homeomorphic to an open disk.[4] A closed 2-cell embedding is an embedding in which
the closure of every face is homeomorphic to a closed disk.
The genus of a graph14 is the minimal integer n such that the graph can be embedded
in a surface of genus15 n. In particular, a planar graph16 has genus 0, because it can be
drawn on a sphere without self-crossing. The non-orientable genus of a graph17 is the
minimal integer n such that the graph can be embedded in a non-orientable surface of
(non-orientable) genus n.[3]
The Euler genus of a graph is the minimal integer n such that the graph can be embedded
in an orientable surface of (orientable) genus n/2 or in a non-orientable surface of (non-
orientable) genus n. A graph is orientably simple if its Euler genus is smaller than its
non-orientable genus.
The maximum genus of a graph18 is the maximal integer n such that the graph can be
2-cell embedded in an orientable surface of genus19 n.
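Although not stated above, these notions connect to Euler's formula: a cellular embedding in an orientable surface of genus g satisfies V − E + F = 2 − 2g, and since every face of a simple graph has at least three sides (so 3F ≤ 2E), the genus of a simple connected graph with V ≥ 3 is at least ⌈(E − 3V + 6)/6⌉. A small sketch (the helper name is ours):

```python
import math

def genus_lower_bound(v, e):
    # From Euler's formula V - E + F = 2 - 2g with 3F <= 2E
    # (simple connected graph, V >= 3): g >= (E - 3V + 6)/6,
    # rounded up and clamped at 0.
    return max(0, math.ceil((e - 3 * v + 6) / 6))

# K5 (V=5, E=10): bound ceil(1/6) = 1; the genus of K5 is exactly 1.
# K7 (V=7, E=21): bound (21 - 21 + 6)/6 = 1; the genus of K7 is 1.
# Any planar graph satisfies E <= 3V - 6, so its bound is 0 (e.g. K4).
```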

89.2 Combinatorial embedding

Main article: Rotation system20
An embedded graph uniquely defines cyclic orders21 of
edges incident to the same vertex. The set of all these cyclic orders is called a rotation
system22 . Embeddings with the same rotation system are considered to be equivalent and
the corresponding equivalence class of embeddings is called combinatorial embedding (as
opposed to the term topological embedding, which refers to the previous definition in
terms of points and curves). Sometimes, the rotation system itself is called a ”combinatorial
embedding”.[5][6][7]
An embedded graph also defines natural cyclic orders of edges which constitute the boundaries of the faces of the embedding. However, handling these face-based orders is less straightforward, since in some cases some edges may be traversed twice along a face boundary. For example, this is always the case for embeddings of trees, which have a single face. To overcome this combinatorial nuisance, one may consider that every edge is ”split” lengthwise in two ”half-edges”, or ”sides”. Under this convention, in all face boundary traversals each
13 https://en.wikipedia.org/wiki/Face_(graph_theory)
14 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
15 https://en.wikipedia.org/wiki/Genus_(mathematics)
16 https://en.wikipedia.org/wiki/Planar_graph
17 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
18 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
19 https://en.wikipedia.org/wiki/Genus_(mathematics)
20 https://en.wikipedia.org/wiki/Rotation_system
21 https://en.wikipedia.org/wiki/Cyclic_order
22 https://en.wikipedia.org/wiki/Rotation_system


half-edge is traversed only once and the two half-edges of the same edge are always traversed
in opposite directions.
Other equivalent representations for cellular embeddings include the ribbon graph23 , a topo-
logical space formed by gluing together topological disks for the vertices and edges of an
embedded graph, and the graph-encoded map24 , an edge-colored cubic graph25 with four
vertices for each edge of the embedded graph.
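The face boundaries determined by a rotation system can be traced mechanically: a directed edge (u, v) is followed by (v, w), where w is the neighbor that comes after u in the cyclic order at v. A sketch, using one of several valid planar rotation systems for K4; counting the resulting faces recovers the genus via Euler's formula:

```python
def faces_from_rotation(rotation):
    # rotation[v] is the cyclic (say, counterclockwise) list of
    # neighbors of v.  Each directed edge (u, v) lies on exactly one
    # face boundary; its successor is (v, w) where w follows u in
    # the rotation at v.  The orbits of this successor map are the
    # face boundaries.
    darts = {(u, v) for u in rotation for v in rotation[u]}
    faces = []
    while darts:
        start = next(iter(darts))
        face, cur = [], start
        while True:
            face.append(cur)
            darts.discard(cur)
            u, v = cur
            ring = rotation[v]
            cur = (v, ring[(ring.index(u) + 1) % len(ring)])
            if cur == start:
                break
        faces.append(face)
    return faces

# A planar rotation system for K4 (vertex 3 drawn inside triangle 012).
rotation = {0: [1, 3, 2], 1: [2, 3, 0], 2: [0, 3, 1], 3: [0, 1, 2]}
faces = faces_from_rotation(rotation)
V, E, F = 4, 6, len(faces)
genus = (2 - (V - E + F)) // 2   # Euler's formula: V - E + F = 2 - 2g
```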

89.3 Computational complexity

The problem of finding the graph genus is NP-hard26 (the problem of determining whether
an n-vertex graph has genus g is NP-complete27 ).[8]
At the same time, the graph genus problem is fixed-parameter tractable28 , i.e., polynomial
time29 algorithms are known to check whether a graph can be embedded into a surface of
a given fixed genus as well as to find the embedding.
The first breakthrough in this respect happened in 1979, when algorithms of time complex-
ity30 O(nO(g) ) were independently submitted to the Annual ACM Symposium on Theory of
Computing31 : one by I. Filotti and G.L. Miller32 and another one by John Reif33 . Their
approaches were quite different, but upon the suggestion of the program committee they
presented a joint paper.[9] However, Wendy Myrvold34 and William Kocay35 proved in 2011
that the algorithm given by Filotti, Miller and Reif was incorrect.[10]
In 1999 it was reported that the fixed-genus case can be solved in time linear36 in the graph
size and doubly exponential37 in the genus.[11]

89.4 Embeddings of graphs into higher-dimensional spaces

It is known that any finite graph can be embedded into a three-dimensional space.[1]
One method for doing this is to place the points on any line in space and to draw the edges
as curves each of which lies in a distinct halfplane38 , with all halfplanes having that line as

23 https://en.wikipedia.org/wiki/Ribbon_graph
24 https://en.wikipedia.org/wiki/Graph-encoded_map
25 https://en.wikipedia.org/wiki/Cubic_graph
26 https://en.wikipedia.org/wiki/NP-hard
27 https://en.wikipedia.org/wiki/NP-complete
28 https://en.wikipedia.org/wiki/Fixed-parameter_tractability
29 https://en.wikipedia.org/wiki/Polynomial_time
30 https://en.wikipedia.org/wiki/Time_complexity
31 https://en.wikipedia.org/wiki/ACM_Symposium_on_Theory_of_Computing
32 https://en.wikipedia.org/wiki/Gary_Miller_(computer_scientist)
33 https://en.wikipedia.org/wiki/John_Reif
34 https://en.wikipedia.org/wiki/Wendy_Myrvold
35 https://en.wikipedia.org/wiki/William_Lawrence_Kocay
36 https://en.wikipedia.org/wiki/Linear_time
37 https://en.wikipedia.org/wiki/Double_exponential_function
38 https://en.wikipedia.org/wiki/Halfplane


their common boundary. An embedding like this in which the edges are drawn on halfplanes
is called a book embedding39 of the graph. This metaphor40 comes from imagining that
each of the planes where an edge is drawn is like a page of a book. It was observed that in
fact several edges may be drawn in the same ”page”; the book thickness of the graph is the
minimum number of halfplanes needed for such a drawing.
Alternatively, any finite graph can be drawn with straight-line edges in three dimensions
without crossings by placing its vertices in general position41 so that no four are coplanar.
For instance, this may be achieved by placing the ith vertex at the point (i,i2 ,i3 ) of the
moment curve42 .
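That no four points of the moment curve are coplanar — so straight-line edges between them can only meet at shared endpoints — can be verified with a standard determinant test (a standalone check; the helper names are ours):

```python
from itertools import combinations

def moment_point(i):
    # The ith vertex is placed at (i, i^2, i^3) on the moment curve.
    return (i, i * i, i ** 3)

def coplanar(p, q, r, s):
    # Four points are coplanar iff det[q-p, r-p, s-p] is zero.
    a = [q[k] - p[k] for k in range(3)]
    b = [r[k] - p[k] for k in range(3)]
    c = [s[k] - p[k] for k in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det == 0

# No four distinct moment-curve points are coplanar (the determinant
# is a Vandermonde-type expression, nonzero for distinct parameters).
ok = all(not coplanar(*(moment_point(i) for i in quad))
         for quad in combinations(range(1, 9), 4))
```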
An embedding of a graph into three-dimensional space in which no two of the cycles are
topologically linked is called a linkless embedding43 . A graph has a linkless embedding if
and only if it does not have one of the seven graphs of the Petersen family44 as a minor45 .

89.5 See also


• Embedding46 , for other kinds of embeddings
• Book thickness47
• Graph thickness48
• Doubly connected edge list49 , a data structure to represent a graph embedding in the
plane50
• Regular map (graph theory)51
• Fáry's theorem52 , which says that a straight line planar embedding of a planar graph is
always possible.
• Triangulation (geometry)53

89.6 References
1. C, R F.; E, P54 ; L, T; R, F55 (1995),
”T-  ”,  T, R56 ; T, I-

39 https://en.wikipedia.org/wiki/Book_embedding
40 https://en.wikipedia.org/wiki/Metaphor
41 https://en.wikipedia.org/wiki/General_position
42 https://en.wikipedia.org/wiki/Moment_curve
43 https://en.wikipedia.org/wiki/Linkless_embedding
44 https://en.wikipedia.org/wiki/Petersen_family
45 https://en.wikipedia.org/wiki/Minor_(graph_theory)
46 https://en.wikipedia.org/wiki/Embedding
47 https://en.wikipedia.org/wiki/Book_thickness
48 https://en.wikipedia.org/wiki/Graph_thickness
49 https://en.wikipedia.org/wiki/Doubly_connected_edge_list
50 https://en.wikipedia.org/wiki/Plane_(geometry)
51 https://en.wikipedia.org/wiki/Regular_map_(graph_theory)
52 https://en.wikipedia.org/wiki/F%C3%A1ry%27s_theorem
53 https://en.wikipedia.org/wiki/Triangulation_(geometry)
54 https://en.wikipedia.org/wiki/Peter_Eades
55 https://en.wikipedia.org/wiki/Frank_Ruskey
56 https://en.wikipedia.org/wiki/Roberto_Tamassia


 G. (.), Graph Drawing: DIMACS International Workshop, GD '94 Princeton,


New Jersey, USA, October 10–12, 1994, Proceedings, Lecture Notes in Computer
Science57 , 894, Springer, pp. 1–11, doi58 :10.1007/3-540-58950-3_35159 , ISBN60 978-
3-540-58950-161 .
2. K, N; T, S- (2007), ”E C
N- G S T”, Computing and Combinatorics,
13th Annual International Conference, COCOON 2007, Banff, Canada, July 16-
19, 2007, Proceedings, Lecture Notes in Computer Science62 , 4598, Springer-
Verlag, pp. 243–253, CiteSeerX63 10.1.1.483.87464 , doi65 :10.1007/978-3-540-73545-
8_2566 , ISBN67 978-3-540-73544-168 .
3. G, J; T, T W.69 (2001), Topological Graph Theory,
Dover Publications, ISBN70 978-0-486-41741-771 .
4. L, S K.; Z, A K. (2004), Graphs on Surfaces and their
Applications, Springer-Verlag, ISBN72 978-3-540-00203-173 .
5. M, P74 ; W, R (2000), ”C O E-
  P G”, Computing and Combinatorics, 6th Annual Inter-
national Conference, COCOON 2000, Sydney, Australia, July 26–28, 2000, Pro-
ceedings, Lecture Notes in Computer Science, 1858, Springer-Verlag, pp. 95–104,
doi75 :10.1007/3-540-44968-X_1076 , ISBN77 978-3-540-67787-178 .
6. D, H N. (1995), ”O       ”,
Graph Drawing, DIMACS International Workshop, GD '94, Princeton, New Jersey,
USA, October 10–12, 1994, Proceedings79 , L N  C S,
894, Springer-Verlag, pp. 76–83, doi80 :10.1007/3-540-58950-3_35881 , ISBN82 978-3-
540-58950-183 .

57 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
58 https://en.wikipedia.org/wiki/Doi_(identifier)
59 https://doi.org/10.1007%2F3-540-58950-3_351
60 https://en.wikipedia.org/wiki/ISBN_(identifier)
61 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-58950-1
62 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
63 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
64 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.483.874
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1007%2F978-3-540-73545-8_25
67 https://en.wikipedia.org/wiki/ISBN_(identifier)
68 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-73544-1
69 https://en.wikipedia.org/wiki/Thomas_W._Tucker
70 https://en.wikipedia.org/wiki/ISBN_(identifier)
71 https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-41741-7
72 https://en.wikipedia.org/wiki/ISBN_(identifier)
73 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-00203-1
74 https://en.wikipedia.org/wiki/Petra_Mutzel
75 https://en.wikipedia.org/wiki/Doi_(identifier)
76 https://doi.org/10.1007%2F3-540-44968-X_10
77 https://en.wikipedia.org/wiki/ISBN_(identifier)
78 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-67787-1
79 https://en.wikipedia.org/wiki/International_Symposium_on_Graph_Drawing
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1007%2F3-540-58950-3_358
82 https://en.wikipedia.org/wiki/ISBN_(identifier)
83 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-58950-1


7. D, C; G, M T.84 ; K, S (2010),


”P D  H-G G”, Graph Drawing, 17th Interna-
tional Symposium, GD 2009, Chicago, IL, USA, September 22-25, 2009, Revised Pa-
pers85 , L N  C S, 5849, Springer-Verlag, pp. 45–56,
arXiv86 :0908.160887 , doi88 :10.1007/978-3-642-11805-0_789 , ISBN90 978-3-642-11804-
391 .
8. T, C92 (1989), ”T     NP-”,
Journal of Algorithms, 10 (4): 568–576, doi93 :10.1016/0196-6774(89)90006-094
9. F, I. S.; M, G L.95 ; R, J96 (1979), ”O -
       O( O()) (P R)”,
Proc. 11th Annu. ACM Symposium on Theory of Computing97 , . 27–37,
98 :10.1145/800135.80439599 .
10. M, W100 ; K, W101 (M 1, 2011). ”E  G
E A”. Journal of Computer and System Sciences. 2 (77): 430–
438. doi102 :10.1016/j.jcss.2010.06.002103 .
11. M, B104 (1999), ”A      
   ”, SIAM Journal on Discrete Mathematics105 , 12 (1): 6–26,
CiteSeerX106 10.1.1.97.9588107 , doi108 :10.1137/S089548019529248X109

84 https://en.wikipedia.org/wiki/Michael_T._Goodrich
85 https://en.wikipedia.org/wiki/International_Symposium_on_Graph_Drawing
86 https://en.wikipedia.org/wiki/ArXiv_(identifier)
87 http://arxiv.org/abs/0908.1608
88 https://en.wikipedia.org/wiki/Doi_(identifier)
89 https://doi.org/10.1007%2F978-3-642-11805-0_7
90 https://en.wikipedia.org/wiki/ISBN_(identifier)
91 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-11804-3
92 https://en.wikipedia.org/wiki/Carsten_Thomassen
93 https://en.wikipedia.org/wiki/Doi_(identifier)
94 https://doi.org/10.1016%2F0196-6774%2889%2990006-0
95 https://en.wikipedia.org/wiki/Gary_Miller_(computer_scientist)
96 https://en.wikipedia.org/wiki/John_Reif
97 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
98 https://en.wikipedia.org/wiki/Doi_(identifier)
99 https://doi.org/10.1145%2F800135.804395
100 https://en.wikipedia.org/wiki/Wendy_Myrvold
101 https://en.wikipedia.org/wiki/William_Lawrence_Kocay
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1016%2Fj.jcss.2010.06.002
104 https://en.wikipedia.org/wiki/Bojan_Mohar
105 https://en.wikipedia.org/wiki/SIAM_Journal_on_Discrete_Mathematics
106 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
107 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.97.9588
108 https://en.wikipedia.org/wiki/Doi_(identifier)
109 https://doi.org/10.1137%2FS089548019529248X

90 Graph isomorphism

In graph theory1 , an isomorphism of graphs2 G and H is a bijection3 between the vertex sets of G and H,
f : V (G) → V (H),
such that any two vertices u and v of G are adjacent4 in G if and only if5 f(u) and f(v) are adjacent in H. This kind of bijection is commonly described as an ”edge-preserving bijection”, in accordance with the general notion of isomorphism6 as a structure-preserving bijection.
If an isomorphism7 exists between two graphs, then the graphs are called isomorphic and
denoted as G ≃ H. In the case when the bijection is a mapping of a graph onto itself, i.e.,
when G and H are one and the same graph, the bijection is called an automorphism8 of G.
Graph isomorphism is an equivalence relation9 on graphs and as such it partitions the class10
of all graphs into equivalence classes11 . A set of graphs isomorphic to each other is called
an isomorphism class12 of graphs.
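The definition above yields a direct, if exponential-time, isomorphism test: search all bijections between the vertex sets and check edge preservation. A minimal Python sketch (the function name and graph encoding here are illustrative, not from the text):

```python
from itertools import permutations

def are_isomorphic(vertices_g, edges_g, vertices_h, edges_h):
    """Brute-force test: try every bijection f: V(G) -> V(H) and
    check that it maps the edge set of G exactly onto that of H."""
    if len(vertices_g) != len(vertices_h) or len(edges_g) != len(edges_h):
        return False
    eh = {frozenset(e) for e in edges_h}
    for image in permutations(vertices_h):
        f = dict(zip(vertices_g, image))
        # Since |E(G)| = |E(H)|, mapping E(G) onto E(H) also means
        # non-edges go to non-edges, so f is an isomorphism.
        if {frozenset({f[u], f[v]}) for u, v in edges_g} == eh:
            return True
    return False

# A 4-cycle and a relabeled 4-cycle are isomorphic; a path and a star are not.
print(are_isomorphic(['a', 'b', 'c', 'd'],
                     [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')],
                     [1, 2, 3, 4],
                     [(1, 3), (3, 2), (2, 4), (4, 1)]))  # True
```

With n! bijections to try this is only usable for very small graphs, which is exactly why the algorithmic question discussed later in this chapter is interesting.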
The two graphs shown below are isomorphic, despite their different looking drawings13 .

Graph G; Graph H; an isomorphism between G and H:

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
3 https://en.wikipedia.org/wiki/Bijection
4 https://en.wikipedia.org/wiki/Adjacent_(graph_theory)
5 https://en.wikipedia.org/wiki/If_and_only_if
6 https://en.wikipedia.org/wiki/Isomorphism
7 https://en.wikipedia.org/wiki/Isomorphism
8 https://en.wikipedia.org/wiki/Graph_automorphism
9 https://en.wikipedia.org/wiki/Equivalence_relation
10 https://en.wikipedia.org/wiki/Class_(set_theory)
11 https://en.wikipedia.org/wiki/Equivalence_class
12 https://en.wikipedia.org/wiki/Isomorphism_class
13 https://en.wikipedia.org/wiki/Graph_drawing


f(a) = 1, f(b) = 6, f(c) = 8, f(d) = 3, f(g) = 5, f(h) = 2, f(i) = 4, f(j) = 7

Figure 227, Figure 228

90.1 Variations

In the above definition, graphs are understood to be undirected14 , unlabeled15 , unweighted16 graphs. However, the notion of isomorphism may be applied to all other variants of the notion of graph, by adding the requirement that the bijection preserve the corresponding additional elements of structure: arc directions, edge weights, etc., with the following exception.

90.1.1 Isomorphism of labeled graphs

For labeled graphs17 , two definitions of isomorphism are in use.
Under one definition, an isomorphism is a vertex bijection which is both edge-preserving and label-preserving.[1][2]
Under another definition, an isomorphism is an edge-preserving vertex bijection which pre-
serves equivalence classes of labels, i.e., vertices with equivalent (e.g., the same) labels are
mapped onto the vertices with equivalent labels and vice versa; same with edge labels.[3]
For example, the K2 graph with the two vertices labeled 1 and 2 has a single automorphism under the first definition, but under the second definition there are two automorphisms.

14 https://en.wikipedia.org/wiki/Directed_graph
15 https://en.wikipedia.org/wiki/Labeled_graph
16 https://en.wikipedia.org/wiki/Weighted_graph
17 https://en.wikipedia.org/wiki/Labeled_graph


The second definition is assumed in certain situations when graphs are endowed with unique
labels commonly taken from the integer range 1,...,n, where n is the number of vertices
of the graph, used only to uniquely identify the vertices. In such cases two labeled graphs
are sometimes said to be isomorphic if the corresponding underlying unlabeled graphs are
isomorphic (otherwise the definition of isomorphism would be trivial).

90.2 Motivation

The formal notion of ”isomorphism”, e.g., of ”graph isomorphism”, captures the informal
notion that some objects have ”the same structure” if one ignores individual distinctions of
”atomic” components of objects in question. Whenever individuality of ”atomic” components
(vertices and edges, for graphs) is important for correct representation of whatever is mod-
eled by graphs, the model is refined by imposing additional restrictions on the structure,
and other mathematical objects are used: digraphs18 , labeled graphs19 , colored graphs20 ,
rooted trees21 and so on. The isomorphism relation may also be defined for all these gen-
eralizations of graphs: the isomorphism bijection must preserve the elements of structure
which define the object type in question: arcs22 , labels, vertex/edge colors, the root of the
rooted tree, etc.
The notion of ”graph isomorphism” allows us to distinguish graph properties23 inherent to
the structures of graphs themselves from properties associated with graph representations:
graph drawings24 , data structures for graphs25 , graph labelings26 , etc. For example, if a
graph has exactly one cycle27 , then all graphs in its isomorphism class also have exactly
one cycle. On the other hand, in the common case when the vertices of a graph are
(represented by) the integers28 1, 2, ..., n, then the expression

∑_{v ∈ V(G)} v · deg(v)

may be different for two isomorphic graphs.
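This can be checked concretely; the following sketch (illustrative code, not part of the article) evaluates the expression for two labelings of the path on three vertices and obtains different values:

```python
def labeled_degree_sum(vertices, edges):
    """Compute sum over v of v * deg(v) for integer-labeled vertices."""
    deg = {v: 0 for v in vertices}
    for u, w in edges:
        deg[u] += 1
        deg[w] += 1
    return sum(v * deg[v] for v in vertices)

# Two isomorphic paths on vertex set {1, 2, 3}: center labeled 2 vs. center labeled 1.
a = labeled_degree_sum([1, 2, 3], [(1, 2), (2, 3)])  # degrees 1, 2, 1
b = labeled_degree_sum([1, 2, 3], [(2, 1), (1, 3)])  # degrees 2, 1, 1
print(a, b)  # 8 7
```

The two graphs are isomorphic, yet the expression evaluates to 8 for one labeling and 7 for the other, so it is a property of the representation, not of the isomorphism class.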

90.3 Whitney theorem

Main article: Whitney graph isomorphism theorem29

18 https://en.wikipedia.org/wiki/Digraph_(mathematics)
19 https://en.wikipedia.org/wiki/Labeled_graph
20 https://en.wikipedia.org/wiki/Colored_graph
21 https://en.wikipedia.org/wiki/Rooted_tree
22 https://en.wikipedia.org/wiki/Arc_(graph_theory)
23 https://en.wikipedia.org/wiki/Graph_properties
24 https://en.wikipedia.org/wiki/Graph_drawing
25 https://en.wikipedia.org/wiki/Graph_(data_structure)
26 https://en.wikipedia.org/wiki/Graph_labeling
27 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
28 https://en.wikipedia.org/wiki/Integer
29 https://en.wikipedia.org/wiki/Whitney_graph_isomorphism_theorem


Figure 229 The exception of Whitney's theorem: these two graphs are not isomorphic
but have isomorphic line graphs.

The Whitney graph isomorphism theorem,[4] shown by Hassler Whitney30 , states that
two connected graphs are isomorphic if and only if their line graphs31 are isomorphic, with
a single exception: K3 , the complete graph32 on three vertices, and the complete bipartite
graph33 K1,3 , which are not isomorphic but both have K3 as their line graph. The Whitney
graph theorem can be extended to hypergraphs34 .[5]
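The exceptional pair can be verified mechanically. The sketch below (self-contained, illustrative) constructs the line graphs of K3 and K1,3 and confirms that each is a triangle (3 vertices, 3 edges, hence K3):

```python
from itertools import combinations

def line_graph(edges):
    """Vertices of L(G) are the edges of G; two are adjacent
    iff the underlying edges share an endpoint."""
    verts = [frozenset(e) for e in edges]
    lg_edges = [(a, b) for a, b in combinations(verts, 2) if a & b]
    return verts, lg_edges

k3 = [(1, 2), (2, 3), (1, 3)]     # triangle
k13 = [(0, 1), (0, 2), (0, 3)]    # star with center 0
for g in (k3, k13):
    v, e = line_graph(g)
    # A graph on 3 vertices with 3 edges is complete, i.e. K3.
    print(len(v), len(e))  # 3 3 in both cases
```

In K3 every pair of edges meets in a vertex, and in K1,3 every pair of edges meets at the center, so both line graphs are complete on three vertices even though K3 and K1,3 are not isomorphic.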

90.4 Recognition of graph isomorphism

Main article: Graph isomorphism problem35
While graph isomorphism may be studied in a classical mathematical way, as exemplified by the Whitney theorem, it is also recognized as a problem to be tackled with an algorithmic approach. The computational problem of determining whether two finite graphs are isomorphic is called the graph isomorphism problem.
Its practical applications include primarily cheminformatics36 , mathematical chemistry37
(identification of chemical compounds), and electronic design automation38 (verification of
equivalence of various representations of the design of an electronic circuit39 ).
The graph isomorphism problem is one of a few standard problems in computational complexity theory40 belonging to NP41 , but not known to belong to either of its well-known (and, if

30 https://en.wikipedia.org/wiki/Hassler_Whitney
31 https://en.wikipedia.org/wiki/Line_graph
32 https://en.wikipedia.org/wiki/Complete_graph
33 https://en.wikipedia.org/wiki/Complete_bipartite_graph
34 https://en.wikipedia.org/wiki/Hypergraph
35 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
36 https://en.wikipedia.org/wiki/Cheminformatics
37 https://en.wikipedia.org/wiki/Mathematical_chemistry
38 https://en.wikipedia.org/wiki/Electronic_design_automation
39 https://en.wikipedia.org/wiki/Electronic_circuit
40 https://en.wikipedia.org/wiki/Computational_complexity_theory
41 https://en.wikipedia.org/wiki/NP_(complexity)


P ≠ NP42 , disjoint) subsets: P43 and NP-complete44 . It is one of only two, out of 12 total,
problems listed in Garey & Johnson (1979)45 whose complexity remains unresolved, the
other being integer factorization46 . It is however known that if the problem is NP-complete
then the polynomial hierarchy47 collapses to a finite level.[6]
In November 2015, László Babai48 , a mathematician and computer scientist at the University of Chicago, claimed to have proven that the graph isomorphism problem is solvable in quasi-polynomial time49 . As of November 2015, this work had not yet been vetted.[7][8] In January 2017, Babai briefly retracted the quasi-polynomiality claim and stated a sub-exponential time51 complexity bound instead. He restored the original claim five days later.[9]
Its generalization, the subgraph isomorphism problem52 , is known to be NP-complete.
The main areas of research for the problem are design of fast algorithms and theoretical
investigations of its computational complexity53 , both for the general problem and for special
classes of graphs.

90.5 See also


• Graph homomorphism54
• Graph automorphism problem55
• Graph isomorphism problem56
• Graph canonization57

90.6 Notes
1. p.42458
2. ”Efficient Method to Perform Isomorphism Testing of Labeled Graphs”59 in: Computational Science and Its Applications – ICCSA 2006, pp. 422–431

42 https://en.wikipedia.org/wiki/P_versus_NP_problem
43 https://en.wikipedia.org/wiki/P_(complexity)
44 https://en.wikipedia.org/wiki/NP-complete
45 #CITEREFGareyJohnson1979
46 https://en.wikipedia.org/wiki/Integer_factorization
47 https://en.wikipedia.org/wiki/Polynomial_hierarchy
48 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
49 https://en.wikipedia.org/wiki/Quasi-polynomial_time
51 https://en.wikipedia.org/wiki/Sub-exponential_time
52 https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
53 https://en.wikipedia.org/wiki/Analysis_of_algorithms
54 https://en.wikipedia.org/wiki/Graph_homomorphism
55 https://en.wikipedia.org/wiki/Graph_automorphism_problem
56 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
57 https://en.wikipedia.org/wiki/Graph_canonization
58 https://books.google.com/books?id=14138OJXzy4C&pg=PA424
59 https://link.springer.com/chapter/10.1007/11751649_46


3. Pierre-Antoine Champin, Christine Solnon, ”Measuring the Similarity of Labeled Graphs”60 in: Lecture Notes in Computer Science61 , vol. 2689, pp. 80–95
4. W, H (J 1932). ”C G   C-
  G”. American Journal of Mathematics. 54 (1): 150–168.
doi62 :10.2307/237108663 . hdl64 :10338.dmlcz/10106765 . JSTOR66 237108667 .
5. Dirk L. Vertigan, Geoffrey P. Whittle: A 2-Isomorphism Theorem for Hypergraphs.
J. Comb. Theory, Ser. B 71(2): 215–230. 1997.
6. S, U (1988). ”G      ”.
Journal of Computer and System Sciences68 . 37 (3): 312–323. doi69 :10.1016/0022-
0000(88)90010-470 .
7. C, A (N 10, 2015), ”M  
  ”, Science71 , 72 :10.1126/.741673 .
8. K, E (D 14, 2015), ”L A B 30-
Y I”74 , Quanta Magazine75
9. B, L (J 9, 2017), Graph isomorphism update76

90.7 References
• G, M R.77 ; J, D S.78 (1979), Computers and Intractability: A
Guide to the Theory of NP-Completeness79 , W. H. F80 , ISBN81 0-7167-1045-582

60 https://link.springer.com/chapter/10.1007/3-540-45006-8_9
61 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.2307%2F2371086
64 https://en.wikipedia.org/wiki/Hdl_(identifier)
65 http://hdl.handle.net/10338.dmlcz%2F101067
66 https://en.wikipedia.org/wiki/JSTOR_(identifier)
67 http://www.jstor.org/stable/2371086
68 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
69 https://en.wikipedia.org/wiki/Doi_(identifier)
70 https://doi.org/10.1016%2F0022-0000%2888%2990010-4
71 https://en.wikipedia.org/wiki/Science_(journal)
72 https://en.wikipedia.org/wiki/Doi_(identifier)
73 https://doi.org/10.1126%2Fscience.aad7416
74 https://www.quantamagazine.org/20151214-graph-isomorphism-algorithm/
75 https://en.wikipedia.org/wiki/Quanta_Magazine
76 http://people.cs.uchicago.edu/~laci/update.html
77 https://en.wikipedia.org/wiki/Michael_Garey
78 https://en.wikipedia.org/wiki/David_S._Johnson
79 https://en.wikipedia.org/wiki/Computers_and_Intractability
80 https://en.wikipedia.org/wiki/W._H._Freeman_and_Company
81 https://en.wikipedia.org/wiki/ISBN_(identifier)
82 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-1045-5

91 Graph isomorphism problem

Unsolved problem in computer science:
Can the graph isomorphism problem be solved in polynomial time? (more unsolved problems in computer science1 )

The graph isomorphism problem is the computational problem2 of determining whether two finite graphs3 are isomorphic4 .
The problem is not known to be solvable in polynomial time5 nor to be NP-complete6 , and
therefore may be in the computational complexity class7 NP-intermediate8 . It is known
that the graph isomorphism problem is in the low hierarchy9 of class NP10 , which implies
that it is not NP-complete unless the polynomial time hierarchy11 collapses to its second
level.[1] At the same time, isomorphism for many special classes of graphs can be solved in
polynomial time, and in practice graph isomorphism can often be solved efficiently.[2]
This problem is a special case of the subgraph isomorphism problem12 ,[3] which asks whether
a given graph G contains a subgraph that is isomorphic to another given graph H; this
problem is known to be NP-complete. It is also known to be a special case of the non-
abelian13 hidden subgroup problem14 over the symmetric group15 .[4]
In the area of image recognition16 it is known as the exact graph matching17 .[5]

1 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
2 https://en.wikipedia.org/wiki/Computational_problem
3 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
4 https://en.wikipedia.org/wiki/Graph_isomorphism
5 https://en.wikipedia.org/wiki/Polynomial_time
6 https://en.wikipedia.org/wiki/NP-complete
7 https://en.wikipedia.org/wiki/Complexity_class
8 https://en.wikipedia.org/wiki/NP-intermediate
9 https://en.wikipedia.org/wiki/Low_hierarchy
10 https://en.wikipedia.org/wiki/Class_NP
11 https://en.wikipedia.org/wiki/Polynomial_time_hierarchy
12 https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
13 https://en.wikipedia.org/wiki/Abelian_group
14 https://en.wikipedia.org/wiki/Hidden_subgroup_problem
15 https://en.wikipedia.org/wiki/Symmetric_group
16 https://en.wikipedia.org/wiki/Image_recognition
17 https://en.wikipedia.org/wiki/Exact_graph_matching


91.1 State of the art

The best currently accepted theoretical algorithm is due to Babai & Luks (1983)18 , and is based on the earlier work by Luks (1982)19 combined with a subfactorial algorithm of V. N. Zemlyachenko (Zemlyachenko, Korneenko & Tyshkevich 198520 ). The algorithm has run time 2^O(√(n log n)) for graphs with n vertices and relies on the classification of finite simple groups21 . Without the CFSG theorem22 , a slightly weaker bound 2^O(√n log² n) was obtained first for strongly regular graphs23 by László Babai24 (198025 ), and then extended to general graphs by Babai & Luks (1983)26 . Improvement of the √n exponent is a major open problem; for strongly regular graphs this was done by Spielman (1996)27 . For hypergraphs28 of bounded rank, a subexponential29 upper bound matching the case of graphs was obtained by Babai & Codenotti (2008)30 .
In November 2015, Babai announced a quasipolynomial time31 algorithm for all graphs, that is, one with running time 2^O((log n)^c) for some fixed c > 0.[6][7][8] On January 4, 2017, Babai retracted the quasi-polynomial claim and stated a sub-exponential time32 bound instead after Harald Helfgott33 discovered a flaw in the proof. On January 9, 2017, Babai announced a correction (published in full on January 19) and restored the quasi-polynomial claim, with Helfgott confirming the fix.[9][10] Helfgott further claims that one can take c = 3, so the running time is 2^O((log n)^3) .[11][12] The new proof has not been fully peer-reviewed yet.
There are several competing practical algorithms for graph isomorphism, such as those due
to McKay (1981)34 , Schmidt & Druffel (1976)35 , and Ullman (1976)36 . While they seem to
perform well on random graphs37 , a major drawback of these algorithms is their exponential
time performance in the worst case38 .[13]
The graph isomorphism problem is computationally equivalent to the problem of computing
the automorphism group39 of a graph[14][15] , and is weaker than the permutation group40

18 #CITEREFBabaiLuks1983
19 #CITEREFLuks1982
20 #CITEREFZemlyachenkoKorneenkoTyshkevich1985
21 https://en.wikipedia.org/wiki/Classification_of_finite_simple_groups
22 https://en.wikipedia.org/wiki/Classification_of_finite_simple_groups
23 https://en.wikipedia.org/wiki/Strongly_regular_graph
24 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
25 #CITEREFBabai1980
26 #CITEREFBabaiLuks1983
27 #CITEREFSpielman1996
28 https://en.wikipedia.org/wiki/Hypergraph
29 https://en.wikipedia.org/wiki/Sub-exponential_time
30 #CITEREFBabaiCodenotti2008
31 https://en.wikipedia.org/wiki/Time_complexity#Quasi-polynomial_time
32 https://en.wikipedia.org/wiki/Sub-exponential_time
33 https://en.wikipedia.org/wiki/Harald_Helfgott
34 #CITEREFMcKay1981
35 #CITEREFSchmidtDruffel1976
36 #CITEREFUllman1976
37 https://en.wikipedia.org/wiki/Random_graphs
38 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
39 https://en.wikipedia.org/wiki/Automorphism_group
40 https://en.wikipedia.org/wiki/Permutation_group


isomorphism problem and the permutation group intersection problem. For the latter two
problems, Babai, Kantor & Luks (1983)41 obtained complexity bounds similar to that for
graph isomorphism.

91.2 Solved special cases

A number of important special cases of the graph isomorphism problem have efficient,
polynomial-time solutions:
• Trees42[16][17]
• Planar graphs43[18] (In fact, planar graph isomorphism is in log space44 ,[19] a class con-
tained in P45 )
• Interval graphs46[20]
• Permutation graphs47[21]
• Circulant graphs48[22]
• Bounded-parameter graphs
• Graphs of bounded treewidth49[23]
• Graphs of bounded genus50[24] (Planar graphs are graphs of genus 0.)
• Graphs of bounded degree51[25]
• Graphs with bounded eigenvalue52 multiplicity[26]
• k-Contractible graphs (a generalization of bounded degree and bounded genus)[27]
• Color-preserving isomorphism of colored graphs53 with bounded color multiplicity (i.e.,
at most k vertices have the same color for a fixed k) is in class NC54 , which is a subclass
of P55[28]
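For trees, a classical linear-time approach (the AHU encoding of Aho, Hopcroft and Ullman) assigns each rooted tree a canonical string bottom-up; two rooted trees are isomorphic exactly when their strings agree, and free trees can be handled by first rooting at a centroid. A minimal sketch for the rooted case (the encoding and names are illustrative):

```python
def canonical_code(tree, root):
    """tree: dict mapping each node to a list of its children.
    Returns a canonical string: equal strings <=> isomorphic rooted trees.
    Children's codes are sorted so the result is order-independent."""
    children = tree.get(root, [])
    return "(" + "".join(sorted(canonical_code(tree, c) for c in children)) + ")"

# The same rooted tree shape under two different labelings:
t1 = {0: [1, 2], 1: [3, 4], 2: []}
t2 = {'a': ['b', 'c'], 'c': ['d', 'e'], 'b': []}
print(canonical_code(t1, 0) == canonical_code(t2, 'a'))  # True
```

Sorting the children's codes at every node is what removes the dependence on labeling and child order; a careful implementation replaces repeated strings with integer identifiers level by level to stay linear-time.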

91.3 Complexity class GI

Since the graph isomorphism problem is neither known to be NP-complete nor known
to be tractable, researchers have sought to gain insight into the problem by defining a
new class GI, the set of problems with a polynomial-time Turing reduction56 to the graph

41 #CITEREFBabaiKantorLuks1983
42 https://en.wikipedia.org/wiki/Tree_(graph_theory)
43 https://en.wikipedia.org/wiki/Planar_graph
44 https://en.wikipedia.org/wiki/L_(complexity)
45 https://en.wikipedia.org/wiki/P_(complexity)
46 https://en.wikipedia.org/wiki/Interval_graph
47 https://en.wikipedia.org/wiki/Permutation_graph
48 https://en.wikipedia.org/wiki/Circulant_graph
49 https://en.wikipedia.org/wiki/Treewidth
50 https://en.wikipedia.org/wiki/Genus_(mathematics)
51 https://en.wikipedia.org/wiki/Degree_(graph_theory)
52 https://en.wikipedia.org/wiki/Eigenvalue
53 https://en.wikipedia.org/wiki/Colored_graph
54 https://en.wikipedia.org/wiki/NC_(complexity)
55 https://en.wikipedia.org/wiki/P_(complexity)
56 https://en.wikipedia.org/wiki/Polynomial-time_Turing_reduction


isomorphism problem.[29] If in fact the graph isomorphism problem is solvable in polynomial time, GI would equal P57 .
As is common for complexity classes58 within the polynomial time hierarchy59 , a problem
is called GI-hard if there is a polynomial-time Turing reduction60 from any problem in
GI to that problem, i.e., a polynomial-time solution to a GI-hard problem would yield a
polynomial-time solution to the graph isomorphism problem (and so all problems in GI).
A problem X is called complete61 for GI, or GI-complete, if it is both GI-hard and a
polynomial-time solution to the GI problem would yield a polynomial-time solution to X.
The graph isomorphism problem is contained in both NP and co-AM62 . GI is contained in
and low63 for Parity P64 , as well as contained in the potentially much smaller class SPP.[30]
That it lies in Parity P means that the graph isomorphism problem is no harder than
determining whether a polynomial-time nondeterministic Turing machine65 has an even
or odd number of accepting paths. GI is also contained in and low for ZPP^NP66 .[31] This
essentially means that an efficient Las Vegas algorithm67 with access to an NP oracle68 can
solve graph isomorphism so easily that it gains no power from being given the ability to do
so in constant time.

91.3.1 GI-complete and GI-hard problems

Isomorphism of other objects

There are a number of classes of mathematical objects for which the problem of isomorphism
is a GI-complete problem. A number of them are graphs endowed with additional properties
or restrictions:[32]
• digraphs69[32]
• labelled graphs70 , with the proviso that an isomorphism is not required to preserve the
labels,[32] but only the equivalence relation71 consisting of pairs of vertices with the same
label
• ”polarized graphs” (made of a complete graph72 Km and an empty graph73 Kn plus some
edges connecting the two; their isomorphism must preserve the partition)[32]

57 https://en.wikipedia.org/wiki/P_(complexity)
58 https://en.wikipedia.org/wiki/Complexity_class
59 https://en.wikipedia.org/wiki/Polynomial_time_hierarchy
60 https://en.wikipedia.org/wiki/Polynomial-time_Turing_reduction
61 https://en.wikipedia.org/wiki/Complete_(complexity)
62 https://en.wikipedia.org/wiki/AM_(complexity)
63 https://en.wikipedia.org/wiki/Low_(complexity)
64 https://en.wikipedia.org/wiki/Parity_P
65 https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine
66 https://en.wikipedia.org/wiki/ZPP_(complexity)
67 https://en.wikipedia.org/wiki/Las_Vegas_algorithm
68 https://en.wikipedia.org/wiki/Oracle_machine
69 https://en.wikipedia.org/wiki/Directed_graph
70 https://en.wikipedia.org/wiki/Labelled_graph
71 https://en.wikipedia.org/wiki/Equivalence_relation
72 https://en.wikipedia.org/wiki/Complete_graph
73 https://en.wikipedia.org/wiki/Empty_graph


• 2-colored graphs74[32]
• explicitly given finite structures75[32]
• multigraphs76[32]
• hypergraphs77[32]
• finite automata78[32]
• Markov Decision Processes79[33]
• commutative80 class 3 nilpotent81 (i.e., xyz = 0 for every elements x, y, z) semigroups82[32]
• finite rank associative83 algebras84 over a fixed algebraically closed field with zero squared
radical and commutative factor over the radical.[32][34]
• context-free grammars85[32]
• balanced incomplete block designs86[32]
• Recognizing combinatorial isomorphism87 of convex polytopes88 represented by vertex-
facet incidences.[35]

GI-complete classes of graphs

A class of graphs is called GI-complete if recognition of isomorphism for graphs from this
subclass is a GI-complete problem. The following classes are GI-complete:[32]
• connected graphs91[32]
• graphs of diameter92 2 and radius93 1[32]
• directed acyclic graphs94[32]
• regular graphs95[32]
• bipartite graphs96 without non-trivial strongly regular subgraphs97[32]

74 https://en.wikipedia.org/wiki/Colored_graph
75 https://en.wikipedia.org/wiki/Structure_(mathematical_logic)
76 https://en.wikipedia.org/wiki/Multigraph
77 https://en.wikipedia.org/wiki/Hypergraph
78 https://en.wikipedia.org/wiki/Finite_automata
79 https://en.wikipedia.org/wiki/Markov_decision_process
80 https://en.wikipedia.org/wiki/Commutative
81 https://en.wikipedia.org/wiki/Nilpotent
82 https://en.wikipedia.org/wiki/Semigroup
83 https://en.wikipedia.org/wiki/Associative
84 https://en.wikipedia.org/wiki/Algebra_over_a_field
85 https://en.wikipedia.org/wiki/Context-free_grammar
86 https://en.wikipedia.org/wiki/Balanced_incomplete_block_design
87 https://en.wikipedia.org/wiki/Combinatorial_isomorphism
88 https://en.wikipedia.org/wiki/Convex_polytope
91 https://en.wikipedia.org/wiki/Connected_graph
92 https://en.wikipedia.org/wiki/Diameter_(graph_theory)
93 https://en.wikipedia.org/wiki/Radius_(graph_theory)
94 https://en.wikipedia.org/wiki/Directed_acyclic_graph
95 https://en.wikipedia.org/wiki/Regular_graph
96 https://en.wikipedia.org/wiki/Bipartite_graph
97 https://en.wikipedia.org/wiki/Strongly_regular_graph


• bipartite Eulerian graphs98[32]


• bipartite regular graphs[32]
• line graphs99[32]
• split graphs100[36]
• chordal graphs101[32]
• regular self-complementary graphs102[32]
• polytopal graphs103 of general, simple104 , and simplicial105 convex polytopes106 in arbi-
trary dimensions.[37]
Many classes of digraphs are also GI-complete.

Other GI-complete problems

There are other nontrivial GI-complete problems in addition to isomorphism problems.
• The recognition of self-complementarity of a graph or digraph.[38]
• A clique problem109 for a class of so-called M-graphs. It is shown that finding an isomorphism for n-vertex graphs is equivalent to finding an n-clique in an M-graph of size n². This fact is interesting because the problem of finding an (n − ε)-clique in an M-graph of size n² is NP-complete for arbitrarily small positive ε.[39]
• The problem of homeomorphism of 2-complexes.[40]
• The problem of homeomorphism of 2-complexes.[40]

GI-hard problems
• The problem of counting the number of isomorphisms between two graphs is polynomial-
time equivalent to the problem of telling whether even one exists.[41]
• The problem of deciding whether two convex polytopes110 given by either the V-
description111 or H-description112 are projectively or affinely isomorphic. The latter means
existence of a projective or affine map between the spaces that contain the two poly-
topes (not necessarily of the same dimension) which induces a bijection between the
polytopes.[37]

98 https://en.wikipedia.org/wiki/Eulerian_graph
99 https://en.wikipedia.org/wiki/Line_graph
100 https://en.wikipedia.org/wiki/Split_graph
101 https://en.wikipedia.org/wiki/Chordal_graph
102 https://en.wikipedia.org/wiki/Self-complementary_graph
103 https://en.wikipedia.org/wiki/Polytopal_graph
104 https://en.wikipedia.org/wiki/Simple_polytope
105 https://en.wikipedia.org/wiki/Simplicial_polytope
106 https://en.wikipedia.org/wiki/Convex_polytope
109 https://en.wikipedia.org/wiki/Clique_problem
110 https://en.wikipedia.org/wiki/Convex_polytope
111 https://en.wikipedia.org/wiki/V-description
112 https://en.wikipedia.org/wiki/H-description


91.4 Program checking

Manuel Blum113 and Sampath Kannan (1995114 ) have shown a probabilistic checker for
programs for graph isomorphism. Suppose P is a claimed polynomial-time procedure that
checks if two graphs are isomorphic, but it is not trusted. To check if G and H are isomor-
phic:
• Ask P whether G and H are isomorphic.
• If the answer is ”yes”:
• Attempt to construct an isomorphism using P as a subroutine. Mark a vertex u in G and v in H, and modify the graphs to make them distinctive (with a small local change). Ask P if the modified graphs are isomorphic. If no, change v to a different vertex. Continue searching.
• Either the isomorphism will be found (and can be verified), or P will contradict itself.
• If the answer is ”no”:
• Perform the following 100 times. Choose randomly G or H, and randomly permute its vertices. Ask P if the graph is isomorphic to G and H. (As in the AM115 protocol for graph nonisomorphism.)
• If any of the tests fail, judge P to be an invalid program. Otherwise, answer ”no”.
This procedure is polynomial-time and gives the correct answer if P is a correct program for graph isomorphism. If P is not a correct program, but answers correctly on G and H, the checker will either give the correct answer, or detect invalid behaviour of P. If P is not a correct program, and answers incorrectly on G and H, the checker will detect invalid behaviour of P with high probability, or answer wrong with probability 2^−100 .
Notably, P is used only as a black box.
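The ”no” branch can be sketched as follows, with the untrusted program P treated as a black-box predicate (all names here are illustrative, and for brevity only the random-permutation test is shown): a randomly relabeled copy of G is isomorphic to G by construction, so if P claimed G and H non-isomorphic, it must say yes for the source graph and no for the other one.

```python
import random

def check_no_answer(P, g_vertices, g_edges, h_vertices, h_edges, rounds=100):
    """P(v1, e1, v2, e2) -> bool is the untrusted isomorphism program,
    which has already claimed G and H are non-isomorphic.
    Returns 'no' if P passes all consistency tests, else 'buggy'."""
    for _ in range(rounds):
        use_g = random.random() < 0.5
        verts, edges = (g_vertices, g_edges) if use_g else (h_vertices, h_edges)
        # Random relabeling: the permuted graph is isomorphic to its source.
        relabel = dict(zip(verts, random.sample(list(verts), len(verts))))
        perm_edges = [(relabel[u], relabel[v]) for u, v in edges]
        # P must recognize the source graph and reject the other one.
        if P(g_vertices, g_edges, verts, perm_edges) != use_g:
            return 'buggy'
        if P(h_vertices, h_edges, verts, perm_edges) == use_g:
            return 'buggy'
    return 'no'
```

If G and H really are isomorphic, the permuted graph is isomorphic to both, so any P that answers consistently must eventually contradict its own ”no” claim; this is the same idea as the Arthur–Merlin protocol for graph nonisomorphism mentioned above.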

91.5 Applications

Graphs are commonly used to encode structural information in many fields, including computer vision116 and pattern recognition117 , and graph matching118 , i.e., identification of similarities between graphs, is an important tool in these areas. In these areas the graph isomorphism problem is known as the exact graph matching problem.[42]
In cheminformatics119 and in mathematical chemistry120 , graph isomorphism testing is used
to identify a chemical compound121 within a chemical database122 .[43] Also, in organic

113 https://en.wikipedia.org/wiki/Manuel_Blum
114 #CITEREFBlumKannan1995
115 https://en.wikipedia.org/wiki/AM_(complexity)
116 https://en.wikipedia.org/wiki/Computer_vision
117 https://en.wikipedia.org/wiki/Pattern_recognition
118 https://en.wikipedia.org/wiki/Graph_matching
119 https://en.wikipedia.org/wiki/Cheminformatics
120 https://en.wikipedia.org/wiki/Mathematical_chemistry
121 https://en.wikipedia.org/wiki/Chemical_compound
122 https://en.wikipedia.org/wiki/Chemical_database

mathematical chemistry graph isomorphism testing is useful for generation of molecular graphs123
and for computer synthesis124 .
Chemical database search is an example of graphical data mining125, where the graph canonization126 approach is often used.[44] In particular, a number of identifiers127 for chemical substances128, such as SMILES129 and InChI130, designed to provide a standard and human-readable way to encode molecular information and to facilitate the search for such information in databases and on the web, use a canonization step in their computation, which is essentially the canonization of the graph that represents the molecule.
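As a toy illustration of the canonization idea (not the actual SMILES or InChI algorithm, which uses far more efficient canonical-labelling procedures), one can take the lexicographically smallest edge list over all vertex relabellings as a canonical key, so that any two labellings of the same structure map to the same database key:

```python
from itertools import permutations

def canonical_form(adj):
    """Canonical form of a small undirected graph (neighbour-set list):
    the lexicographically smallest sorted edge list over all vertex
    relabellings.  Exponential in the number of vertices, so only a
    toy stand-in for the canonization step of real identifiers."""
    n = len(adj)
    best = None
    for perm in permutations(range(n)):
        edges = sorted(
            (min(perm[u], perm[v]), max(perm[u], perm[v]))
            for u in range(n)
            for v in adj[u]
            if u < v          # each undirected edge counted once
        )
        key = tuple(edges)
        if best is None or key < best:
            best = key
    return best

# Any two labellings of the same graph share one key, so a dict keyed
# by canonical form acts as a tiny structure database.
```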
In electronic design automation131, graph isomorphism is the basis of the Layout Versus Schematic132 (LVS) circuit design step, which verifies whether the electric circuits133 represented by a circuit schematic134 and by an integrated circuit layout135 are the same.[45]

91.6 See also


• Graph automorphism problem136
• Graph canonization137

91.7 Notes
1. Schöning (1987)138 .
2. McKay (1981)139 .
3. Ullman (1976)140 .
4. Moore, Russell & Schulman (2008)141 .
5. Endika Bengoetxea, "Inexact Graph Matching Using Estimation of Distribution Algorithms"142, Ph.D. thesis, 2002, Chapter 2: The graph matching problem143 (retrieved June 28, 2017)

123 https://en.wikipedia.org/wiki/Molecular_graph
124 https://en.wikipedia.org/wiki/Combinatorial_chemistry
125 https://en.wikipedia.org/wiki/Data_mining
126 https://en.wikipedia.org/wiki/Graph_canonization
127 https://en.wikipedia.org/wiki/Identifier
128 https://en.wikipedia.org/wiki/Chemical_substance
129 https://en.wikipedia.org/wiki/SMILES
130 https://en.wikipedia.org/wiki/InChI
131 https://en.wikipedia.org/wiki/Electronic_design_automation
132 https://en.wikipedia.org/wiki/Layout_Versus_Schematic
133 https://en.wikipedia.org/wiki/Electric_circuit
134 https://en.wikipedia.org/wiki/Circuit_diagram
135 https://en.wikipedia.org/wiki/Integrated_circuit_layout
136 https://en.wikipedia.org/wiki/Graph_automorphism_problem
137 https://en.wikipedia.org/wiki/Graph_canonization
138 #CITEREFSch%C3%B6ning1987
139 #CITEREFMcKay1981
140 #CITEREFUllman1976
141 #CITEREFMooreRussellSchulman2008
142 http://www.sc.ehu.es/acwbecae/ikerkuntza/these/
143 http://www.sc.ehu.es/acwbecae/ikerkuntza/these/Ch2.pdf


6. "Mathematician claims breakthrough in complexity theory"144. Science. November 10, 2015.
7. Babai (2015)145
8. Video of first 2015 lecture linked from Babai's home page146
9. B, L (J 9, 2017), Graph isomorphism update147
10. Erica Klarreich148, "Graph Isomorphism Vanquished — Again", Quanta Magazine, January 14, 2017; see here149
11. H, H (J 16, 2017), Isomorphismes de graphes en temps
quasi-polynomial (d'après Babai et Luks, Weisfeiler-Leman...), arXiv150 :1701.04372151 ,
Bibcode152 :2017arXiv170104372A153
12. D, D; B, J; H, H A (O 12,
2017). ”G   - ”. X154 :1710.04574155
[.GR156 ].
13. Foggia, Sansone & Vento (2001)157 .
14. L, E (1993-09-01). ”P   -
”. DIMACS Series in Discrete Mathematics and Theoretical Computer
Science. 11. Providence, Rhode Island: American Mathematical Society. pp. 139–
175. doi158 :10.1090/dimacs/011/11159 . ISBN160 978-0-8218-6599-6161 . ISSN162 1052-
1798163 .
15. Algeboy164, Graph isomorphism and the automorphism group, URL165 (version: 2018-09-20)
16. Kelly (1957)166 .
17. Aho, Hopcroft & Ullman (1974)167, pp. 84–86.
18. Hopcroft & Wong (1974)168 .
19. Datta et al. (2009)169 .

144 http://news.sciencemag.org/math/2015/11/mathematician-claims-breakthrough-complexity-theory
145 #CITEREFBabai2015
146 http://people.cs.uchicago.edu/~laci/
147 http://people.cs.uchicago.edu/~laci/update.html
148 https://en.wikipedia.org/wiki/Erica_Klarreich
149 https://www.quantamagazine.org/20170114-graph-isomorphism-babai-fix/
150 https://en.wikipedia.org/wiki/ArXiv_(identifier)
151 http://arxiv.org/abs/1701.04372
152 https://en.wikipedia.org/wiki/Bibcode_(identifier)
153 https://ui.adsabs.harvard.edu/abs/2017arXiv170104372A
154 https://en.wikipedia.org/wiki/ArXiv_(identifier)
155 http://arxiv.org/abs/1710.04574
156 http://arxiv.org/archive/math.GR
157 #CITEREFFoggiaSansoneVento2001
158 https://en.wikipedia.org/wiki/Doi_(identifier)
159 https://doi.org/10.1090%2Fdimacs%2F011%2F11
160 https://en.wikipedia.org/wiki/ISBN_(identifier)
161 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-6599-6
162 https://en.wikipedia.org/wiki/ISSN_(identifier)
163 http://www.worldcat.org/issn/1052-1798
164 https://cs.stackexchange.com/users/90177/algeboy
165 https://cs.stackexchange.com/q/97575
166 #CITEREFKelly1957
167 #CITEREFAhoHopcroftUllman1974
168 #CITEREFHopcroftWong1974
169 #CITEREFDattaLimayeNimbhorkarThierauf2009


20. Booth & Lueker (1979)170 .


21. Colbourn (1981)171 .
22. Muzychuk (2004)172 .
23. Bodlaender (1990)173 .
24. Miller 1980174 ; Filotti & Mayer 1980175 .
25. Luks (1982)176 .
26. Babai, Grigoryev & Mount (1982)177 .
27. Miller (1983)178 .
28. Luks (1986)179 .
29. Booth & Colbourn 1977180 ; Köbler, Schöning & Torán 1993181 .
30. Köbler, Schöning & Torán 1992182 ; Arvind & Kurur 2006183
31. Arvind & Köbler (2000)184 .
32. Zemlyachenko, Korneenko & Tyshkevich (1985)185
33. Narayanamurthy & Ravindran (2008)186 .
34. Grigor'ev (1981)187 .
35. Johnson (2005)188 ; Kaibel & Schwartz (2003)189 .
36. Chung (1985)190 .
37. Kaibel & Schwartz (2003)191 .
38. Colbourn & Colbourn (1978)192 .
39. Kozen (1978)193 .
40. Shawe-Taylor & Pisanski (1994)194 .
41. Mathon (1979)195 ; Johnson 2005196 .
42. Endika Bengoetxea, Ph.D., Abstract197

170 #CITEREFBoothLueker1979
171 #CITEREFColbourn1981
172 #CITEREFMuzychuk2004
173 #CITEREFBodlaender1990
174 #CITEREFMiller1980
175 #CITEREFFilottiMayer1980
176 #CITEREFLuks1982
177 #CITEREFBabaiGrigoryevMount1982
178 #CITEREFMiller1983
179 #CITEREFLuks1986
180 #CITEREFBoothColbourn1977
181 #CITEREFK%C3%B6blerSch%C3%B6ningTor%C3%A1n1993
182 #CITEREFK%C3%B6blerSch%C3%B6ningTor%C3%A1n1992
183 #CITEREFArvindKurur2006
184 #CITEREFArvindK%C3%B6bler2000
185 #CITEREFZemlyachenkoKorneenkoTyshkevich1985
186 #CITEREFNarayanamurthyRavindran2008
187 #CITEREFGrigor&#39;ev1981
188 #CITEREFJohnson2005
189 #CITEREFKaibelSchwartz2003
190 #CITEREFChung1985
191 #CITEREFKaibelSchwartz2003
192 #CITEREFColbournColbourn1978
193 #CITEREFKozen1978
194 #CITEREFShawe-TaylorPisanski1994
195 #CITEREFMathon1979
196 #CITEREFJohnson2005
197 http://www.sc.ehu.es/acwbecae/ikerkuntza/these/


43. Irniger (2005)198 .


44. Cook & Holder (2007)199 .
45. Baird & Cho (1975)200 .

91.8 References
• A, A V.201 ; H, J202 ; U, J D.203 (1974), The Design
and Analysis of Computer Algorithms, Reading, MA: Addison-Wesley.
• A, V; K, J (2000), ”G    
ZPP(NP)    .”, Proceedings of the 17th Annual Symposium on
Theoretical Aspects of Computer Science204 , L N  C S205 ,
1770, Springer-Verlag, pp. 431–442206 , doi207 :10.1007/3-540-46541-3_36208 , ISBN209 3-
540-67141-2210 , MR211 1781752212 .
• A, V; K, P P. (2006), ”G    SPP”,
Information and Computation213 , 204 (5): 835–852, doi214 :10.1016/j.ic.2006.02.002215 ,
MR216 2226371217 .
• B, L218 (1980), ”O      
  ”, SIAM Journal on Computing219 , 9 (1): 212–216,
doi220 :10.1137/0209018221 , MR222 0557839223 .
• B, L224 ; C, P (2008), ”I   
     ”225 (PDF), Proceedings of the 49th An-
nual IEEE Symposium on Foundations of Computer Science (FOCS 2008), IEEE Com-

198 #CITEREFIrniger2005
199 #CITEREFCookHolder2007
200 #CITEREFBairdCho1975
201 https://en.wikipedia.org/wiki/Alfred_Aho
202 https://en.wikipedia.org/wiki/John_Hopcroft
203 https://en.wikipedia.org/wiki/Jeffrey_Ullman
204 https://archive.org/details/stacs200017thann0000annu/page/431
205 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
206 https://archive.org/details/stacs200017thann0000annu/page/431
207 https://en.wikipedia.org/wiki/Doi_(identifier)
208 https://doi.org/10.1007%2F3-540-46541-3_36
209 https://en.wikipedia.org/wiki/ISBN_(identifier)
210 https://en.wikipedia.org/wiki/Special:BookSources/3-540-67141-2
211 https://en.wikipedia.org/wiki/MR_(identifier)
212 http://www.ams.org/mathscinet-getitem?mr=1781752
213 https://en.wikipedia.org/wiki/Information_and_Computation
214 https://en.wikipedia.org/wiki/Doi_(identifier)
215 https://doi.org/10.1016%2Fj.ic.2006.02.002
216 https://en.wikipedia.org/wiki/MR_(identifier)
217 http://www.ams.org/mathscinet-getitem?mr=2226371
218 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
219 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
220 https://en.wikipedia.org/wiki/Doi_(identifier)
221 https://doi.org/10.1137%2F0209018
222 https://en.wikipedia.org/wiki/MR_(identifier)
223 http://www.ams.org/mathscinet-getitem?mr=0557839
224 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
225 https://www.cs.uchicago.edu/~laci/papers/hypergraphiso.pdf

1031
Graph isomorphism problem

puter Society, pp. 667–676, doi226 :10.1109/FOCS.2008.80227 , ISBN228 978-0-7695-3436-


7229 .
• B, L230 ; G, D. Y.231 ; M, D M.232 (1982), ”I-
      ”, Proceedings
of the 14th Annual ACM Symposium on Theory of Computing, pp. 310–324,
doi233 :10.1145/800070.802206234 , ISBN235 0-89791-070-2236 .
• B, L237 ; K, W238 ; L, E239 (1983), ”C
       ”, Proceedings of the
24th Annual Symposium on Foundations of Computer Science (FOCS), pp. 162–171,
doi240 :10.1109/SFCS.1983.10241 .
• B, L242 ; L, E M.243 (1983), ”C   ”,
Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing (STOC
'83), pp. 171–183, doi244 :10.1145/800061.808746245 , ISBN246 0-89791-099-0247 .
• B, L (2015), Graph Isomorphism in Quasipolynomial Time,
arXiv248 :1512.03547249 , Bibcode250 :2015arXiv151203547B251 CS1 maint: ref=harv
(link252 )
• B, H. S.; C, Y. E. (1975), ”A    ”253 ,
Proceedings of the 12th Design Automation Conference (DAC '75), Piscataway, NJ, USA:
IEEE Press, pp. 414–420.

226 https://en.wikipedia.org/wiki/Doi_(identifier)
227 https://doi.org/10.1109%2FFOCS.2008.80
228 https://en.wikipedia.org/wiki/ISBN_(identifier)
229 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7695-3436-7
230 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
231 https://en.wikipedia.org/wiki/Dima_Grigoriev
232 https://en.wikipedia.org/wiki/David_Mount
233 https://en.wikipedia.org/wiki/Doi_(identifier)
234 https://doi.org/10.1145%2F800070.802206
235 https://en.wikipedia.org/wiki/ISBN_(identifier)
236 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-070-2
237 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
238 https://en.wikipedia.org/wiki/William_Kantor
239 https://en.wikipedia.org/wiki/Eugene_M._Luks
240 https://en.wikipedia.org/wiki/Doi_(identifier)
241 https://doi.org/10.1109%2FSFCS.1983.10
242 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
243 https://en.wikipedia.org/wiki/Eugene_M._Luks
244 https://en.wikipedia.org/wiki/Doi_(identifier)
245 https://doi.org/10.1145%2F800061.808746
246 https://en.wikipedia.org/wiki/ISBN_(identifier)
247 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-099-0
248 https://en.wikipedia.org/wiki/ArXiv_(identifier)
249 http://arxiv.org/abs/1512.03547
250 https://en.wikipedia.org/wiki/Bibcode_(identifier)
251 https://ui.adsabs.harvard.edu/abs/2015arXiv151203547B
252 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
253 http://dl.acm.org/citation.cfm?id=800261.809095


• B, M254 ; K, S (1995), ”D  


  ”255 , Journal of the ACM256 , 42 (1): 269–291, Cite-
SeerX257 10.1.1.38.2537258 , doi259 :10.1145/200836.200880260 .
• B, H261 (1990), ”P    
     k-trees”, Journal of Algorithms, 11 (4): 631–643,
doi262 :10.1016/0196-6774(90)90013-5263 , MR264 1079454265 .
• B, K S.; C, C. J.266 (1977), Problems polynomially equivalent to
graph isomorphism, Technical Report, CS-77-04, Computer Science Department, Univer-
sity of Waterloo.
• B, K S.; L, G S. (1979), ”A    
   ”, Journal of the ACM267 , 26 (2): 183–195,
doi268 :10.1145/322123.322125269 , MR270 0528025271 .
• B, C.; L, D. (2006), Graph isomorphism completeness for perfect graphs
and subclasses of perfect graphs272 (PDF), T R, CS-2006-32, C
S D, U  W.
• C, F R. K.273 (1985), ”O      -
   ”, SIAM Journal on Algebraic and Discrete Methods, 6 (2): 268–277,
doi274 :10.1137/0606026275 , MR276 0778007277 .
• C, C. J.278 (1981), ”O     ”,
Networks, 11: 13–21, doi279 :10.1002/net.3230110103280 , MR281 0608916282 .

254 https://en.wikipedia.org/wiki/Manuel_Blum
255 ftp://ftp.cis.upenn.edu/pub/kannan/jacm.ps.gz
256 https://en.wikipedia.org/wiki/Journal_of_the_ACM
257 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
258 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.2537
259 https://en.wikipedia.org/wiki/Doi_(identifier)
260 https://doi.org/10.1145%2F200836.200880
261 https://en.wikipedia.org/wiki/Hans_L._Bodlaender
262 https://en.wikipedia.org/wiki/Doi_(identifier)
263 https://doi.org/10.1016%2F0196-6774%2890%2990013-5
264 https://en.wikipedia.org/wiki/MR_(identifier)
265 http://www.ams.org/mathscinet-getitem?mr=1079454
266 https://en.wikipedia.org/wiki/Charles_Colbourn
267 https://en.wikipedia.org/wiki/Journal_of_the_ACM
268 https://en.wikipedia.org/wiki/Doi_(identifier)
269 https://doi.org/10.1145%2F322123.322125
270 https://en.wikipedia.org/wiki/MR_(identifier)
271 http://www.ams.org/mathscinet-getitem?mr=0528025
272 http://www.cs.uwaterloo.ca/research/tr/2006/CS-2006-32.pdf
273 https://en.wikipedia.org/wiki/Fan_Chung
274 https://en.wikipedia.org/wiki/Doi_(identifier)
275 https://doi.org/10.1137%2F0606026
276 https://en.wikipedia.org/wiki/MR_(identifier)
277 http://www.ams.org/mathscinet-getitem?mr=0778007
278 https://en.wikipedia.org/wiki/Charles_Colbourn
279 https://en.wikipedia.org/wiki/Doi_(identifier)
280 https://doi.org/10.1002%2Fnet.3230110103
281 https://en.wikipedia.org/wiki/MR_(identifier)
282 http://www.ams.org/mathscinet-getitem?mr=0608916


• C, M J; C, C J.283 (1978), ”G -


  - ”, ACM SIGACT News284 , 10 (1): 25–29,
doi285 :10.1145/1008605.1008608286 .
• C, D J.; H, L B. (2007), ”S 6.2.1: C L-
”287 , Mining Graph Data, Wiley, pp. 120–122, ISBN288 978-0-470-07303-2289 .
• D, S.; L, N.; N, P.; T, T.; W, F.
(2009), ”P     -”, 2009 24th Annual
IEEE Conference on Computational Complexity, p. 203, arXiv290 :0809.2319291 ,
doi292 :10.1109/CCC.2009.16293 , ISBN294 978-0-7695-3717-7295 .
• F, I. S.; M, J N. (1980), ”A - 
        ”, Proceedings
of the 12th Annual ACM Symposium on Theory of Computing296 , . 236–243,
297 :10.1145/800141.804671298 , ISBN299 0-89791-017-6300 .
• F, P.; S, C.; V, M. (2001), ”A    
   ”301 (PDF), Proc. 3rd IAPR-TC15 Workshop
Graph-Based Representations in Pattern Recognition, pp. 188–199.
• G, M R.302 ; J, D S.303 (1979), Computers and Intractability:
A Guide to the Theory of NP-Completeness304 , W. H. F, ISBN305 978-0-7167-
1045-5306 .
• G', D. J. (1981), ”C  ''     
    ”, Zapiski Nauchnykh Seminarov Leningrad-
skogo Otdeleniya Matematicheskogo Instituta Imeni V. A. Steklova Akademii Nauk SSSR
(LOMI) (in Russian), 105: 10–17, 198, MR307 0628981308 . English translation in Journal
of Mathematical Sciences 22 (3): 1285–1289, 1983.

283 https://en.wikipedia.org/wiki/Charles_Colbourn
284 https://en.wikipedia.org/wiki/ACM_SIGACT_News
285 https://en.wikipedia.org/wiki/Doi_(identifier)
286 https://doi.org/10.1145%2F1008605.1008608
287 https://books.google.com/books?id=bHGy0_H0g8QC&pg=PA120
288 https://en.wikipedia.org/wiki/ISBN_(identifier)
289 https://en.wikipedia.org/wiki/Special:BookSources/978-0-470-07303-2
290 https://en.wikipedia.org/wiki/ArXiv_(identifier)
291 http://arxiv.org/abs/0809.2319
292 https://en.wikipedia.org/wiki/Doi_(identifier)
293 https://doi.org/10.1109%2FCCC.2009.16
294 https://en.wikipedia.org/wiki/ISBN_(identifier)
295 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7695-3717-7
296 https://hal.inria.fr/inria-00076553/document
297 https://en.wikipedia.org/wiki/Doi_(identifier)
298 https://doi.org/10.1145%2F800141.804671
299 https://en.wikipedia.org/wiki/ISBN_(identifier)
300 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-017-6
301 http://www.engr.uconn.edu/~vkk06001/GraphIsomorphism/Papers/VF_SD_NAUTY_Ullman_Experiments.pdf
302 https://en.wikipedia.org/wiki/Michael_Garey
303 https://en.wikipedia.org/wiki/David_S._Johnson
304 https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_NP-Completeness
305 https://en.wikipedia.org/wiki/ISBN_(identifier)
306 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7167-1045-5
307 https://en.wikipedia.org/wiki/MR_(identifier)
308 http://www.ams.org/mathscinet-getitem?mr=0628981


• H, J309 ; W, J. (1974), ”L    


  ”, Proceedings of the Sixth Annual ACM Symposium on Theory of
Computing, pp. 172–184, doi310 :10.1145/800119.803896311 .
• I, C-A M (2005), Graph Matching: Filtering Databases of
Graphs Using Machine Learning, Dissertationen zur künstlichen Intelligenz, 293, AKA,
ISBN312 1-58603-557-6313 .
• K, V; S, A (2003), ”O   
  ”314 , Graphs and Combinatorics315 , 19 (2): 215–
230, arXiv316 :math/0106093317 , doi318 :10.1007/s00373-002-0503-y319 , MR320 1996205321 ,
archived from the original322 on 2015-07-21.
• K, P J.323 (1957), ”A    ”, Pacific Journal of
Mathematics324 , 7: 961–968, doi325 :10.2140/pjm.1957.7.961326 , MR327 0087949328 .
• K, J; S, U329 ; T, J (1992), ”G
    PP”, Computational Complexity, 2 (4): 301–330,
330 331 332
doi :10.1007/BF01200427 , MR 1215315 . 333

• K, D334 (1978), ”A      -


”, ACM SIGACT News335 , 10 (2): 50–52, doi336 :10.1145/990524.990529337 .

309 https://en.wikipedia.org/wiki/John_Hopcroft
310 https://en.wikipedia.org/wiki/Doi_(identifier)
311 https://doi.org/10.1145%2F800119.803896
312 https://en.wikipedia.org/wiki/ISBN_(identifier)
313 https://en.wikipedia.org/wiki/Special:BookSources/1-58603-557-6
314 https://web.archive.org/web/20150721175904/http://eprintweb.org/S/authors/All/ka/Kaibel/16
315 https://en.wikipedia.org/wiki/Graphs_and_Combinatorics
316 https://en.wikipedia.org/wiki/ArXiv_(identifier)
317 http://arxiv.org/abs/math/0106093
318 https://en.wikipedia.org/wiki/Doi_(identifier)
319 https://doi.org/10.1007%2Fs00373-002-0503-y
320 https://en.wikipedia.org/wiki/MR_(identifier)
321 http://www.ams.org/mathscinet-getitem?mr=1996205
322 http://eprintweb.org/S/authors/All/ka/Kaibel/16
323 https://en.wikipedia.org/wiki/Paul_Kelly_(mathematician)
324 https://en.wikipedia.org/wiki/Pacific_Journal_of_Mathematics
325 https://en.wikipedia.org/wiki/Doi_(identifier)
326 https://doi.org/10.2140%2Fpjm.1957.7.961
327 https://en.wikipedia.org/wiki/MR_(identifier)
328 http://www.ams.org/mathscinet-getitem?mr=0087949
329 https://en.wikipedia.org/wiki/Uwe_Sch%C3%B6ning
330 https://en.wikipedia.org/wiki/Doi_(identifier)
331 https://doi.org/10.1007%2FBF01200427
332 https://en.wikipedia.org/wiki/MR_(identifier)
333 http://www.ams.org/mathscinet-getitem?mr=1215315
334 https://en.wikipedia.org/wiki/Dexter_Kozen
335 https://en.wikipedia.org/wiki/ACM_SIGACT_News
336 https://en.wikipedia.org/wiki/Doi_(identifier)
337 https://doi.org/10.1145%2F990524.990529


• L, E M.338 (1982), ”I      


    ”, Journal of Computer and System Sciences339 , 25:
42–65, doi340 :10.1016/0022-0000(82)90009-5341 , MR342 0685360343 .
• L, E M.344 (1986), ”P    
  ”, Proc. IEEE Symp. Foundations of Computer Science,
pp. 292–302.
• M, R (1979), ”A      
”, Information Processing Letters345 , 8 (3): 131–132, doi346 :10.1016/0020-
0190(79)90004-8347 , MR348 0526453349 .
• MK, B D.350 (1981), ”P  ”351 , 10th. Mani-
toba Conference on Numerical Mathematics and Computing (Winnipeg, 1980), Congressus
Numerantium, 30, pp. 45–87, MR352 0635936353 .
• M, G354 (1980), ”I      ”,
Proceedings of the 12th Annual ACM Symposium on Theory of Computing, pp. 225–235,
doi355 :10.1145/800141.804670356 , ISBN357 0-89791-017-6358 .
• M, G L.359 (1983), ”I      k-
contractable graphs (a generalization of bounded valence and bounded genus)”, Proc.
Int. Conf. on Foundations of Computer Theory, Lecture Notes in Computer Science360 ,
158, pp. 310–327, doi361 :10.1007/3-540-12689-9_114362 . Full paper in Information and
Control363 56 (1–2): 1–20, 1983.
• M, C364 ; R, A; S, L J.365
(2008), ”T     F ”, SIAM

338 https://en.wikipedia.org/wiki/Eugene_M._Luks
339 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
340 https://en.wikipedia.org/wiki/Doi_(identifier)
341 https://doi.org/10.1016%2F0022-0000%2882%2990009-5
342 https://en.wikipedia.org/wiki/MR_(identifier)
343 http://www.ams.org/mathscinet-getitem?mr=0685360
344 https://en.wikipedia.org/wiki/Eugene_M._Luks
345 https://en.wikipedia.org/wiki/Information_Processing_Letters
346 https://en.wikipedia.org/wiki/Doi_(identifier)
347 https://doi.org/10.1016%2F0020-0190%2879%2990004-8
348 https://en.wikipedia.org/wiki/MR_(identifier)
349 http://www.ams.org/mathscinet-getitem?mr=0526453
350 https://en.wikipedia.org/wiki/Brendan_McKay
351 http://cs.anu.edu.au/~bdm/nauty/PGI/
352 https://en.wikipedia.org/wiki/MR_(identifier)
353 http://www.ams.org/mathscinet-getitem?mr=0635936
354 https://en.wikipedia.org/wiki/Gary_Miller_(computer_scientist)
355 https://en.wikipedia.org/wiki/Doi_(identifier)
356 https://doi.org/10.1145%2F800141.804670
357 https://en.wikipedia.org/wiki/ISBN_(identifier)
358 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-017-6
359 https://en.wikipedia.org/wiki/Gary_L._Miller_(mathematician)
360 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
361 https://en.wikipedia.org/wiki/Doi_(identifier)
362 https://doi.org/10.1007%2F3-540-12689-9_114
363 https://en.wikipedia.org/wiki/Information_and_Control
364 https://en.wikipedia.org/wiki/Cristopher_Moore
365 https://en.wikipedia.org/wiki/Leonard_Schulman


Journal on Computing366, 37 (6): 1842–1864, arXiv367:quant-ph/0501056368, doi369:10.1137/050644896370, MR371 2386215372.
• M, M (2004), ”A S   I P-
  C G”, Proc. London Math. Soc., 88: 1–41,
doi373 :10.1112/s0024611503014412374 , MR375 2018956376 .
• N, S. M.; R, B. (2008), ”O    
  M  ”377 (PDF), Proceedings of the Twenty-fifth
International Conference on Machine Learning (ICML 2008), pp. 688–696.
• S, D C.; D, L E. (1976), ”A   -
         ”, Jour-
nal of the ACM378 , 23 (3): 433–445, doi379 :10.1145/321958.321963380 , MR381 0411230382 .
• S, U383 (1987), ”G      ”, Pro-
ceedings of the 4th Annual Symposium on Theoretical Aspects of Computer Science384 ,
. 114–124; also Journal of Computer and System Sciences 37: 312–323, 1988.
• S-T, J; P, T385 (1994), ”H  2-
    ”, SIAM Journal on Computing386 ,
23 (1): 120–132, doi387 :10.1137/S0097539791198900388 , MR389 1258998390 .
• S, D A.391 (1996), ”F     -
 ”, Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of
Computing (STOC '96), ACM, pp. 576–584, ISBN392 978-0-89791-785-8393 .

366 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
367 https://en.wikipedia.org/wiki/ArXiv_(identifier)
368 http://arxiv.org/abs/quant-ph/0501056
369 https://en.wikipedia.org/wiki/Doi_(identifier)
370 https://doi.org/10.1137%2F050644896
371 https://en.wikipedia.org/wiki/MR_(identifier)
372 http://www.ams.org/mathscinet-getitem?mr=2386215
373 https://en.wikipedia.org/wiki/Doi_(identifier)
374 https://doi.org/10.1112%2Fs0024611503014412
375 https://en.wikipedia.org/wiki/MR_(identifier)
376 http://www.ams.org/mathscinet-getitem?mr=2018956
377 http://www.cse.iitm.ac.in/~ravi/papers/Shravan-ICML08.pdf
378 https://en.wikipedia.org/wiki/Journal_of_the_ACM
379 https://en.wikipedia.org/wiki/Doi_(identifier)
380 https://doi.org/10.1145%2F321958.321963
381 https://en.wikipedia.org/wiki/MR_(identifier)
382 http://www.ams.org/mathscinet-getitem?mr=0411230
383 https://en.wikipedia.org/wiki/Uwe_Sch%C3%B6ning
384 https://en.wikipedia.org/wiki/Symposium_on_Theoretical_Aspects_of_Computer_Science
385 https://en.wikipedia.org/wiki/Toma%C5%BE_Pisanski
386 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
387 https://en.wikipedia.org/wiki/Doi_(identifier)
388 https://doi.org/10.1137%2FS0097539791198900
389 https://en.wikipedia.org/wiki/MR_(identifier)
390 http://www.ams.org/mathscinet-getitem?mr=1258998
391 https://en.wikipedia.org/wiki/Daniel_Spielman
392 https://en.wikipedia.org/wiki/ISBN_(identifier)
393 https://en.wikipedia.org/wiki/Special:BookSources/978-0-89791-785-8


• U, J R. (1976), ”A    ”394


(PDF), Journal of the ACM395 , 23: 31–42, CiteSeerX396 10.1.1.361.7741397 ,
doi :10.1145/321921.321925 , MR 0495173401 .
398 399 400

91.8.1 Surveys and monographs


• R, R C.; C, D G.402 (1977), ”T   -
”, Journal of Graph Theory, 1 (4): 339–363, doi403 :10.1002/jgt.3190010410404 ,
MR405 0485586406 .
• G, G. (1979), ”F      -
”, Journal of Graph Theory, 3 (2): 95–109, doi407 :10.1002/jgt.3190030202408 .
• Z, V. N.; K, N. M.; T, R. I.409 (1985),
”G  ”, Journal of Mathematical Sciences, 29 (4): 1426–
1481, doi410 :10.1007/BF02104746411 . (Translated from Zapiski Nauchnykh Seminarov
Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN
SSSR (Records of Seminars of the Leningrad Department of Steklov Institute of Mathe-
matics of the USSR Academy of Sciences412 ), Vol. 118, pp. 83–158, 1982.)
• A, V.; T, J (2005), ”I : P 
 ”413 (PDF), Bulletin of the European Association for Theoretical Com-
puter Science, 86: 66–84. (A brief survey of open questions related to the isomorphism
problem for graphs, rings and groups.)
• K, J; S, U414 ; T, J (1993), The Graph Iso-
morphism Problem: Its Structural Complexity, Birkhäuser, ISBN415 978-0-8176-3680-7416 .
(From the book cover: The books focuses on the issue of the computational complexity
of the problem and presents several recent results that provide a better understanding of
the relative position of the problem in the class NP as well as in other complexity classes.)

394 http://www.cs.bgu.ac.il/~dinitz/Course/SS-12/Ullman_Algorithm.pdf
395 https://en.wikipedia.org/wiki/Journal_of_the_ACM
396 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
397 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.361.7741
398 https://en.wikipedia.org/wiki/Doi_(identifier)
399 https://doi.org/10.1145%2F321921.321925
400 https://en.wikipedia.org/wiki/MR_(identifier)
401 http://www.ams.org/mathscinet-getitem?mr=0495173
402 https://en.wikipedia.org/wiki/Derek_Corneil
403 https://en.wikipedia.org/wiki/Doi_(identifier)
404 https://doi.org/10.1002%2Fjgt.3190010410
405 https://en.wikipedia.org/wiki/MR_(identifier)
406 http://www.ams.org/mathscinet-getitem?mr=0485586
407 https://en.wikipedia.org/wiki/Doi_(identifier)
408 https://doi.org/10.1002%2Fjgt.3190030202
409 https://en.wikipedia.org/wiki/Regina_Tyshkevich
410 https://en.wikipedia.org/wiki/Doi_(identifier)
411 https://doi.org/10.1007%2FBF02104746
412 https://en.wikipedia.org/wiki/Leningrad_Department_of_Steklov_Institute_of_Mathematics_of_the_USSR_Academy_of_Sciences
413 http://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.190/Mitarbeiter/toran/beatcs/column86.pdf
414 https://en.wikipedia.org/wiki/Uwe_Sch%C3%B6ning
415 https://en.wikipedia.org/wiki/ISBN_(identifier)
416 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8176-3680-7


• J, D S.417 (2005), ”T NP-C C”, ACM Transactions


on Algorithms418 , 1 (1): 160–176, doi419 :10.1145/1077464.1077476420 . (This 24th edition
of the Column discusses the state of the art for the open problems from the book Com-
puters and Intractability421 and previous columns, in particular, for Graph Isomorphism.)
• T, J; W, F (2009), ”T    
”422 (PDF), Bulletin of the European Association for Theoretical Computer
Science, 97, archived from the original423 (PDF) on 2010-09-20, retrieved 2010-06-03.

91.8.2 Software
• Graph Isomorphism424 , review of implementations, The Stony Brook Algorithm Reposi-
tory425 .

417 https://en.wikipedia.org/wiki/David_S._Johnson
418 https://en.wikipedia.org/wiki/ACM_Transactions_on_Algorithms
419 https://en.wikipedia.org/wiki/Doi_(identifier)
420 https://doi.org/10.1145%2F1077464.1077476
421 https://en.wikipedia.org/wiki/Computers_and_Intractability
422 https://web.archive.org/web/20100920154742/http://theorie.informatik.uni-ulm.de/Personen/toran/beatcs/column97.pdf
423 http://theorie.informatik.uni-ulm.de/Personen/toran/beatcs/column97.pdf
424 http://www.cs.sunysb.edu/~algorith/files/graph-isomorphism.shtml
425 http://www.cs.sunysb.edu/~algorith

92 Graph kernel

This article is about machine learning. For the graph-theoretical notion, see Glossary of
graph theory1 . In structure mining2 , a domain of learning on structured data objects in
machine learning3 , a graph kernel is a kernel function4 that computes an inner prod-
uct5 on graphs6 .[1] Graph kernels can be intuitively understood as functions measuring the
similarity of pairs of graphs. They allow kernelized7 learning algorithms such as support
vector machines8 to work directly on graphs, without having to do feature extraction9 to
transform them to fixed-length, real-valued feature vectors10 . They find applications in
bioinformatics11 , in chemoinformatics12 (as a type of molecule kernels13[2] ), and in social
network analysis14 .[1]
Concepts of graph kernels have been around since 1999, when D. Haussler[3] introduced
convolution kernels on discrete structures. The term graph kernels was more formally
coined in 2002 by R. I. Kondor and John Lafferty[4] as kernels on graphs, i.e. similarity
functions between the nodes of a single graph, with the World Wide Web15 hyperlink16
graph as a suggested application. In 2003, Gaertner et al.[5] and Kashima et al.[6] defined
kernels between graphs. In 2010, Vishwanathan et al. gave their unified framework.[1] In
2018, Ghosh et al.[7] described the history of graph kernels and their evolution over two decades.
An example of a kernel between graphs is the random walk kernel[5][6] , which conceptually
performs random walks17 on two graphs simultaneously, then counts the number of paths18
that were produced by both walks. This is equivalent to doing random walks on the direct
product19 of the pair of graphs, and from this, a kernel can be derived that can be efficiently
computed.[1]
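As an illustration of the product-graph construction, the walk count can be sketched in a few lines of Python. The two toy graphs, the decay parameter lam, and the truncation depth k_max are hypothetical choices of this example; practical implementations exploit linear-algebra identities instead of this explicit count.

```python
from itertools import product

def direct_product_adj(g1, g2):
    """Adjacency matrix of the direct (tensor) product graph: vertices
    are pairs (u, v); (u1, v1) -> (u2, v2) is an edge iff u1 -> u2 in g1
    and v1 -> v2 in g2."""
    nodes = list(product(g1, g2))
    index = {p: i for i, p in enumerate(nodes)}
    n = len(nodes)
    adj = [[0] * n for _ in range(n)]
    for (u1, v1) in nodes:
        for u2 in g1[u1]:
            for v2 in g2[v1]:
                adj[index[(u1, v1)]][index[(u2, v2)]] = 1
    return adj

def random_walk_kernel(g1, g2, lam=0.1, k_max=4):
    """Truncated walk kernel: the sum over k of lam^k times the number
    of length-k walks in the product graph (i.e. 1^T A^k 1)."""
    a = direct_product_adj(g1, g2)
    n = len(a)
    vec = [1.0] * n                      # A^0 1: one zero-step walk per vertex
    total = sum(vec)                     # k = 0 term
    for k in range(1, k_max + 1):
        vec = [sum(a[i][j] * vec[j] for j in range(n)) for i in range(n)]
        total += (lam ** k) * sum(vec)
    return total

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}             # toy input graphs
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(random_walk_kernel(triangle, path))
```

The decay lam keeps the sum from being dominated by long walks; a full (non-truncated) kernel sums the geometric series in closed form.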

1 https://en.wikipedia.org/wiki/Glossary_of_graph_theory
2 https://en.wikipedia.org/wiki/Structure_mining
3 https://en.wikipedia.org/wiki/Machine_learning
4 https://en.wikipedia.org/wiki/Positive-definite_kernel
5 https://en.wikipedia.org/wiki/Inner_product_space
6 https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
7 https://en.wikipedia.org/wiki/Kernel_trick
8 https://en.wikipedia.org/wiki/Support_vector_machine
9 https://en.wikipedia.org/wiki/Feature_extraction
10 https://en.wikipedia.org/wiki/Feature_vector
11 https://en.wikipedia.org/wiki/Bioinformatics
12 https://en.wikipedia.org/wiki/Chemoinformatics
13 https://en.wikipedia.org/wiki/Molecule_kernel
14 https://en.wikipedia.org/wiki/Social_network_analysis
15 https://en.wikipedia.org/wiki/World_Wide_Web
16 https://en.wikipedia.org/wiki/Hyperlink
17 https://en.wikipedia.org/wiki/Random_walk
18 https://en.wikipedia.org/wiki/Path_(graph_theory)
19 https://en.wikipedia.org/wiki/Tensor_product_of_graphs

1041

92.1 Applications

The marginalized graph kernel has been shown to allow accurate predictions of the atom-
ization energy of small organic molecules[8] .

92.2 References
1. S.V. N. V; N N. S; R K; K M.
B (2010). ”G ”20 (PDF). Journal of Machine Learning Re-
search21 . 11: 1201–1242.
2. L. R; S. J. S; H. S; P. B (2005). ”G -
   ”. Neural Networks. 18 (8): 1093–1110.
doi22 :10.1016/j.neunet.2005.07.00923 . PMID24 1615747125 .
3. H, D (1999). Convolution Kernels on Discrete Structures. Cite-
SeerX26 10.1.1.110.63827 .
4. R I K; J L (2002). Diffusion Kernels on Graphs and
Other Discrete Input Spaces28 (PDF). P. I' C.  M L
(ICML).
5. T G; P F; S W (2003). On graph ker-
nels: Hardness results and efficient alternatives. Proc. the 16th Annual Confer-
ence on Computational Learning Theory (COLT) and the 7th Kernel Workshop.
doi29 :10.1007/978-3-540-45167-9_1130 .
6. H K; K T; A I (2003). Marginalized kernels
between labeled graphs31 (PDF). P.  20 I C 
M L (ICML).
7. G, S; D, N; G, T; Q,
P; K, M (2018). ”T    -
   ”. Computer Science Review. 27: 88–111.
doi32 :10.1016/j.cosrev.2017.11.00233 .
8. Y-H T; W A.  J (2019). ”P   -
      ”. The Journal of Chemical

20 http://jmlr.csail.mit.edu/papers/volume11/vishwanathan10a/vishwanathan10a.pdf
21 https://en.wikipedia.org/wiki/Journal_of_Machine_Learning_Research
22 https://en.wikipedia.org/wiki/Doi_(identifier)
23 https://doi.org/10.1016%2Fj.neunet.2005.07.009
24 https://en.wikipedia.org/wiki/PMID_(identifier)
25 http://pubmed.ncbi.nlm.nih.gov/16157471
26 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
27 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.638
28 http://people.cs.uchicago.edu/~risi/papers/diffusion-kernels.pdf
29 https://en.wikipedia.org/wiki/Doi_(identifier)
30 https://doi.org/10.1007%2F978-3-540-45167-9_11
31 http://www.aaai.org/Papers/ICML/2003/ICML03-044.pdf
32 https://en.wikipedia.org/wiki/Doi_(identifier)
33 https://doi.org/10.1016%2Fj.cosrev.2017.11.002

1042

Physics. 150 (4): 044107. arXiv34 :1810.0731035 . Bibcode36 :2019JChPh.150d4107T37 .


doi38 :10.1063/1.507864039 . PMID40 3070928641 .

92.3 See also


• Tree kernel42, as a special case of non-cyclic graphs
• Molecule mining43, as a special case of small multi-label graphs


34 https://en.wikipedia.org/wiki/ArXiv_(identifier)
35 http://arxiv.org/abs/1810.07310
36 https://en.wikipedia.org/wiki/Bibcode_(identifier)
37 https://ui.adsabs.harvard.edu/abs/2019JChPh.150d4107T
38 https://en.wikipedia.org/wiki/Doi_(identifier)
39 https://doi.org/10.1063%2F1.5078640
40 https://en.wikipedia.org/wiki/PMID_(identifier)
41 http://pubmed.ncbi.nlm.nih.gov/30709286
42 https://en.wikipedia.org/wiki/Tree_kernel
43 https://en.wikipedia.org/wiki/Molecule_mining

1043
93 Graph reduction

This article is about the computer science term. For the graph theory use, see transitive reduction1.
In computer science2, graph reduction implements an efficient version of
non-strict evaluation, an evaluation strategy3 where the arguments to a function are not
immediately evaluated. This form of non-strict evaluation is also known as lazy evaluation4
and used in functional programming languages5 . The technique was first developed by Chris
Wadsworth6 in 1971.

93.1 Motivation

A simple example of evaluating an arithmetic expression follows:


((2 + 2) + (2 + 2)) + (3 + 3)
=((2 + 2) + (2 + 2)) + 6
=((2 + 2) + 4) + 6
=(4 + 4) + 6
=8 + 6
=14
The above reduction sequence employs a strategy known as outermost tree reduction7 . The
same expression can be evaluated using innermost tree reduction8 , yielding the reduction
sequence:
((2 + 2) + (2 + 2)) + (3 + 3)
=((2 + 2) + 4) + (3 + 3)
=(4 + 4) + (3 + 3)
=(4 + 4) + 6
=8 + 6
=14

1 https://en.wikipedia.org/wiki/Transitive_reduction
2 https://en.wikipedia.org/wiki/Computer_science
3 https://en.wikipedia.org/wiki/Evaluation_strategy
4 https://en.wikipedia.org/wiki/Lazy_evaluation
5 https://en.wikipedia.org/wiki/Functional_programming
6 https://en.wikipedia.org/w/index.php?title=Chris_Wadsworth&action=edit&redlink=1
7 https://en.wikipedia.org/w/index.php?title=Outermost_tree_reduction&action=edit&redlink=1
8 https://en.wikipedia.org/w/index.php?title=Innermost_tree_reduction&action=edit&redlink=1

1045

Notice that the reduction order is made explicit by the addition of parentheses. This expres-
sion could also have been simply evaluated right to left, because addition is an associative9
operation.
Represented as a tree10 , the expression above looks like this:

Figure 231

This is where the term tree reduction comes from. When represented as a tree, we can
think of innermost reduction as working from the bottom up, while outermost works from
the top down.
The expression can also be represented as a directed acyclic graph11 , allowing sub-
expressions to be shared:

9 https://en.wikipedia.org/wiki/Associative
10 https://en.wikipedia.org/wiki/Tree_data_structure
11 https://en.wikipedia.org/wiki/Directed_acyclic_graph

1046

Figure 232

As for trees, outermost and innermost reduction also applies to graphs. Hence we have
graph reduction.
Now evaluation with outermost graph reduction can proceed as follows:

Figure 233

1047

Notice that evaluation now only requires four steps. Outermost graph reduction is referred
to as lazy evaluation12 and innermost graph reduction is referred to as eager evaluation13 .
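The step counts above can be reproduced with a small sketch. The Node class and the innermost evaluation order are illustrative choices of this example, not a real graph-reduction machine; the point is that overwriting a shared node with its value makes the shared (2 + 2) cost only one reduction.

```python
from operator import add

class Node:
    """A node in the expression graph: either a leaf holding a value,
    or an application of a binary op to two child nodes."""
    def __init__(self, op=None, left=None, right=None, value=None):
        self.op, self.left, self.right, self.value = op, left, right, value

def reduce_graph(node, steps):
    """Innermost graph reduction: evaluate the children, then overwrite
    this node with its value, so every parent sharing the node sees the
    already-computed result."""
    if node.value is None:
        reduce_graph(node.left, steps)
        reduce_graph(node.right, steps)
        node.value = node.op(node.left.value, node.right.value)
        steps.append(node)               # one reduction step performed
    return node.value

two_plus_two = Node(add, Node(value=2), Node(value=2))   # shared subgraph
expr = Node(add,
            Node(add, two_plus_two, two_plus_two),       # (2 + 2) + (2 + 2)
            Node(add, Node(value=3), Node(value=3)))     # 3 + 3
steps = []
print(reduce_graph(expr, steps), len(steps))             # 14 in 4 steps
```

Evaluating the same expression as a tree would reduce (2 + 2) twice, for five steps; sharing brings it down to the four steps shown in the figure.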

93.2 Combinator graph reduction

Combinator graph reduction is a fundamental implementation technique for functional programming14 languages, in which a program is converted into a combinator15 representation which is mapped to a directed graph16 data structure17 in computer memory, and program execution then consists of rewriting parts of this graph (”reducing” it) so as to move towards useful results.
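A toy sketch of the idea, assuming the classic S, K and I combinators and a plain tuple representation of application; both choices are ours for illustration, not a production representation:

```python
# Combinator terms: the atoms 'S', 'K', 'I' (or free variables), and
# application represented as a pair (function, argument).
def step(t):
    """Perform one outermost (leftmost) reduction step; return the new
    term and whether anything was reduced."""
    if isinstance(t, tuple):
        f, x = t
        if f == 'I':                                   # I x  ->  x
            return x, True
        if isinstance(f, tuple) and f[0] == 'K':       # K a b  ->  a
            return f[1], True
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):                   # S a b c -> a c (b c)
            a, b, c = f[0][1], f[1], x
            return ((a, c), (b, c)), True
        nf, r = step(f)                                # else reduce inside
        if r:
            return (nf, x), True
        nx, r = step(x)
        return ((f, nx) if r else t), r
    return t, False

def normalize(t, limit=1000):
    """Repeat reduction steps until no redex remains (or a step limit)."""
    for _ in range(limit):
        t, reduced = step(t)
        if not reduced:
            break
    return t

# S K K behaves like I: applied to a variable 'v' it reduces to 'v'.
print(normalize(((('S', 'K'), 'K'), 'v')))
```

A real implementation would rewrite shared graph nodes in place rather than rebuilding tuples, which is exactly what makes graph reduction efficient.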

93.3 History

The concept of a graph reduction that allows evaluated values to be shared was first devel-
oped by Chris Wadsworth18 in his 1971 Ph.D. dissertation.[1] This dissertation was cited by
Peter Henderson and James H. Morris Jr. in their 1976 paper, “A lazy evaluator”,[2] which introduced the notion of lazy evaluation. In 1976 David Turner19 incorporated lazy evaluation
into SASL20 using combinators.[3] SASL was an early functional programming language first
developed by Turner in 1972.

93.4 See also


• Graph reduction machine21
• SECD machine22

12 https://en.wikipedia.org/wiki/Lazy_evaluation
13 https://en.wikipedia.org/wiki/Eager_evaluation
14 https://en.wikipedia.org/wiki/Functional_programming
15 https://en.wikipedia.org/wiki/Combinator
16 https://en.wikipedia.org/wiki/Directed_graph
17 https://en.wikipedia.org/wiki/Data_structure
18 https://en.wikipedia.org/w/index.php?title=Chris_Wadsworth&action=edit&redlink=1
19 https://en.wikipedia.org/wiki/David_Turner_(computer_scientist)
20 https://en.wikipedia.org/wiki/SASL_programming_language
21 https://en.wikipedia.org/wiki/Graph_reduction_machine
22 https://en.wikipedia.org/wiki/SECD_machine

1048

93.5 Notes
1. H, P (S 1989). ”C, ,  
   ”. ACM23 Computing Surveys. 21 (3):
359–411. CiteSeerX24 10.1.1.83.650525 . doi26 :10.1145/72551.7255427 .
2. A lazy evaluator28
3. H, P; H, J; P J, S; W, P. ”A H-
  H: B L  C”29 . History of Programming Languages
Conference 2007.

93.6 References
• B, R (1998). Introduction to Functional Programming using Haskell. Pren-
tice Hall. ISBN30 0-13-484346-031 .

93.7 Further reading


• Simon Peyton Jones32 , The Implementation of Functional Programming Languages, Pren-
tice Hall, 1987. Full text online.[1]33

23 https://en.wikipedia.org/wiki/Association_for_Computing_Machinery
24 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
25 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.6505
26 https://en.wikipedia.org/wiki/Doi_(identifier)
27 https://doi.org/10.1145%2F72551.72554
28 http://portal.acm.org/citation.cfm?id=811543
29 http://haskell.org/haskellwiki/History_of_Haskell
30 https://en.wikipedia.org/wiki/ISBN_(identifier)
31 https://en.wikipedia.org/wiki/Special:BookSources/0-13-484346-0
32 https://en.wikipedia.org/wiki/Simon_Peyton_Jones
33 http://research.microsoft.com/users/simonpj/papers/slpj-book-1987/index.htm

1049
94 Graph traversal

”Graph search” redirects here. It is not to be confused with Facebook Graph Search1 .


1 https://en.wikipedia.org/wiki/Facebook_Graph_Search

1051


1052

In computer science12 , graph traversal (also known as graph search) refers to the process
of visiting (checking and/or updating) each vertex in a graph13 . Such traversals are classified
by the order in which the vertices are visited. Tree traversal14 is a special case of graph
traversal.

94.1 Redundancy

Unlike tree traversal, graph traversal may require that some vertices be visited more than
once, since it is not necessarily known before transitioning to a vertex that it has already
been explored. As graphs become more dense15 , this redundancy becomes more prevalent,
causing computation time to increase; as graphs become more sparse, the opposite holds
true.
Thus, it is usually necessary to remember which vertices have already been explored by the
algorithm, so that vertices are revisited as infrequently as possible (or in the worst case, to
prevent the traversal from continuing indefinitely). This may be accomplished by associating
each vertex of the graph with a ”color” or ”visitation” state during the traversal, which is
then checked and updated as the algorithm visits each vertex. If the vertex has already
been visited, it is ignored and the path is pursued no further; otherwise, the algorithm
checks/updates the vertex and continues down its current path.
Several special cases of graphs imply the visitation of other vertices in their structure,
and thus do not require that visitation be explicitly recorded during the traversal. An
important example of this is a tree: during a traversal it may be assumed that all ”ancestor”
vertices of the current vertex (and others depending on the algorithm) have already been
visited. Both the depth-first16 and breadth-first graph searches17 are adaptations of tree-
based algorithms, distinguished primarily by the lack of a structurally determined ”root”
vertex and the addition of a data structure to record the traversal's visitation state.

94.2 Graph traversal algorithms

Note. — If each vertex in a graph is to be traversed by a tree-based algorithm (such as DFS


or BFS), then the algorithm must be called at least once for each connected component18
of the graph. This is easily accomplished by iterating through all the vertices of the graph,
performing the algorithm on each vertex that is still unvisited when examined.

12 https://en.wikipedia.org/wiki/Computer_science
13 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
14 https://en.wikipedia.org/wiki/Tree_traversal
15 https://en.wikipedia.org/wiki/Dense_graph
16 https://en.wikipedia.org/wiki/Depth-first_search
17 https://en.wikipedia.org/wiki/Breadth-first_search
18 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)

1053

Figure 235 A non-verbal description of three graph traversal algorithms: randomly, depth-first search, and breadth-first search.

94.2.1 Depth-first search

Main article: Depth-first search19
A depth-first search (DFS) is an algorithm for traversing a finite graph. DFS visits the child vertices before visiting the sibling vertices; that is, it
traverses the depth of any particular path before exploring its breadth. A stack (often the
program's call stack20 via recursion21 ) is generally used when implementing the algorithm.
The algorithm begins with a chosen ”root” vertex; it then iteratively transitions from the
current vertex to an adjacent, unvisited vertex, until it can no longer find an unexplored
vertex to transition to from its current location. The algorithm then backtracks22 along
previously visited vertices, until it finds a vertex connected to yet more uncharted territory.
It will then proceed down the new path as it had before, backtracking as it encounters
dead-ends, and ending only when the algorithm has backtracked past the original ”root”
vertex from the very first step.

19 https://en.wikipedia.org/wiki/Depth-first_search
20 https://en.wikipedia.org/wiki/Call_stack
21 https://en.wikipedia.org/wiki/Recursion_(computer_science)
22 https://en.wikipedia.org/wiki/Backtracking

1054

DFS is the basis for many graph-related algorithms, including topological sorts23 and pla-
narity testing24 .

Pseudocode
• Input: A graph G and a vertex v of G.
• Output: A labeling of the edges in the connected component of v as discovery edges and
back edges.
procedure DFS(G, v) is
label v as explored
for all edges e in G.incidentEdges(v) do
if edge e is unexplored then
w ← G.adjacentVertex(v, e)
if vertex w is unexplored then
label e as a discovered edge
recursively call DFS(G, w)
else
label e as a back edge
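A direct Python rendering of this pseudocode might look as follows; the adjacency-dict input format and the frozenset edge keys are illustrative choices of this sketch.

```python
def dfs_label_edges(graph, v, explored=None, labels=None):
    """DFS on an undirected graph given as {vertex: [neighbours]},
    labelling each edge in v's connected component as a 'discovery'
    edge or a 'back' edge.  Edges are keyed by their endpoint pair."""
    if explored is None:
        explored, labels = set(), {}
    explored.add(v)                         # label v as explored
    for w in graph[v]:
        e = frozenset((v, w))
        if e not in labels:                 # edge e is unexplored
            if w not in explored:           # w unexplored: discovery edge
                labels[e] = 'discovery'
                dfs_label_edges(graph, w, explored, labels)
            else:                           # w already explored: back edge
                labels[e] = 'back'
    return labels

g = {1: [2, 3], 2: [1, 3], 3: [1, 2]}       # a triangle
labels = dfs_label_edges(g, 1)
print(sorted(labels.values()))              # one back edge closes the cycle
```

On the triangle, two edges are discovery edges and the edge closing the cycle is labelled a back edge.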

94.2.2 Breadth-first search

Main article: Breadth-first search25


A breadth-first search (BFS) is another technique for traversing a finite graph. BFS visits
the sibling vertices before visiting the child vertices, and a queue27 is used in the search
process. This algorithm is often used to find the shortest path from one vertex to another.

Pseudocode
• Input: A graph G and a vertex v of G.
• Output: The closest vertex to v satisfying some conditions, or null if no such vertex exists.
procedure BFS(G, v) is
create a queue Q
enqueue v onto Q
mark v
while Q is not empty do
w ← Q.dequeue()
if w is what we are looking for then
return w
for all edges e in G.adjacentEdges(w) do
x ← G.adjacentVertex(w, e)
if x is not marked then
mark x
enqueue x onto Q

23 https://en.wikipedia.org/wiki/Topological_sort
24 https://en.wikipedia.org/wiki/Planarity_testing
25 https://en.wikipedia.org/wiki/Breadth-first_search
27 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)

1055

return null
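The same pseudocode, rendered as a Python sketch; the adjacency-dict input format and the is_goal predicate are assumptions of this example.

```python
from collections import deque

def bfs_find(graph, v, is_goal):
    """BFS on a graph given as {vertex: [neighbours]}: return the
    closest vertex to v satisfying is_goal, or None if none exists."""
    q = deque([v])
    marked = {v}                       # mark v
    while q:                           # while Q is not empty
        w = q.popleft()
        if is_goal(w):                 # w is what we are looking for
            return w
        for x in graph[w]:
            if x not in marked:
                marked.add(x)
                q.append(x)            # enqueue x onto Q
    return None

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(bfs_find(g, 1, lambda w: w == 4))     # reached via a shortest path
print(bfs_find(g, 1, lambda w: w == 99))    # no such vertex -> None
```

Because vertices are dequeued in order of distance from v, the first goal vertex found is one at minimum distance.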

94.3 Applications

Breadth-first search can be used to solve many problems in graph theory, for example:
• finding all vertices within one connected component28 ;
• Cheney's algorithm29 ;
• finding the shortest path30 between two vertices;
• testing a graph for bipartiteness31 ;
• Cuthill–McKee algorithm32 mesh numbering;
• Ford–Fulkerson algorithm33 for computing the maximum flow34 in a flow network35 ;
• serialization/deserialization of a binary tree vs serialization in sorted order (allows the
tree to be re-constructed in an efficient manner);
• maze generation algorithms36 ;
• flood fill37 algorithm for marking contiguous regions of a two dimensional image or n-
dimensional array;
• analysis of networks and relationships.

94.4 Graph exploration

The problem of graph exploration can be seen as a variant of graph traversal. It is an online problem, meaning that the information about the graph is only revealed during the
runtime of the algorithm. A common model is as follows: given a connected graph G =
(V, E) with non-negative edge weights. The algorithm starts at some vertex, and knows all
incident outgoing edges and the vertices at the end of these edges—but not more. When
a new vertex is visited, then again all incident outgoing edges and the vertices at the end
are known. The goal is to visit all n vertices and return to the starting vertex, but the
sum of the weights of the tour should be as small as possible. The problem can also be
understood as a specific version of the travelling salesman problem38 , where the salesman
has to discover the graph on the go.
For general graphs, the best known algorithm for both undirected and directed graphs is a simple greedy algorithm39:

28 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
29 https://en.wikipedia.org/wiki/Cheney%27s_algorithm
30 https://en.wikipedia.org/wiki/Shortest_path
31 https://en.wikipedia.org/wiki/Bipartite_graph
32 https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm
33 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
34 https://en.wikipedia.org/wiki/Maximum_flow_problem
35 https://en.wikipedia.org/wiki/Flow_network
36 https://en.wikipedia.org/wiki/Maze_generation_algorithm
37 https://en.wikipedia.org/wiki/Flood_fill
38 https://en.wikipedia.org/wiki/Travelling_salesman_problem
39 https://en.wikipedia.org/wiki/Greedy_algorithm

1056

• In the undirected case, the greedy tour is at most O(ln n)-times longer than an optimal
tour.[1] The best lower bound known for any deterministic online algorithm is 2.5 − ε;[2]
• In the directed case, the greedy tour is at most (n − 1)-times longer than an optimal
tour. This matches the lower bound of n − 1.[3] An analogous competitive lower bound
of Ω(n) also holds for randomized algorithms that know the coordinates of each node in
a geometric embedding. If instead of visiting all nodes just a single ”treasure” node has
to be found, the competitive bounds are Θ(n^2) on unit weight directed graphs, for both
deterministic and randomized algorithms.
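For unit edge weights, the greedy strategy can be sketched as follows. The offline simulation below sees the whole graph, so it only models the tour the online algorithm would take, not the information constraint itself; the example graph is a toy choice.

```python
from collections import deque

def greedy_exploration_tour(graph, start):
    """Greedy exploration sketch on an unweighted, undirected graph:
    repeatedly walk to the nearest not-yet-visited vertex until all
    vertices are visited, then return to the start.  Returns the total
    number of edges traversed by the tour."""
    def nearest(src, targets):
        # BFS: distance to, and identity of, the closest target vertex.
        q, dist = deque([src]), {src: 0}
        while q:
            u = q.popleft()
            if u in targets:
                return dist[u], u
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)

    visited, cur, length = {start}, start, 0
    while len(visited) < len(graph):
        d, nxt = nearest(cur, set(graph) - visited)
        visited.add(nxt)
        cur, length = nxt, length + d
    d, _ = nearest(cur, {start})         # walk back to the start
    return length + d

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}      # a 4-cycle
print(greedy_exploration_tour(ring, 0))
```

On the 4-cycle the greedy tour happens to be optimal; the bounds quoted above show how far it can be from optimal in general.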

94.5 Universal traversal sequences


A universal traversal sequence is a sequence of instructions comprising a graph traversal for


any regular graph41 with a set number of vertices and for any starting vertex. A probabilistic proof was used by Aleliunas et al. to show that there exists a universal traversal sequence with number of instructions proportional to O(n^5) for any regular graph with n vertices.[4] The steps specified in the sequence are relative to the current node, not absolute. For example, if the current node is v_j, and v_j has d neighbors, then the traversal sequence will specify the next node to visit, v_{j+1}, as the i-th neighbor of v_j, where 1 ≤ i ≤ d.
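A short sketch of how such a sequence is followed. The example graph, its neighbour labelling, and the sequence are toy choices of this illustration; whether a given sequence is universal depends on the labelling.

```python
def follow_sequence(neighbors, start, sequence):
    """Follow a traversal sequence on a d-regular graph: instruction i
    (1 <= i <= d) moves from the current vertex to its i-th neighbour
    in that vertex's fixed, local neighbour ordering."""
    cur, visited = start, {start}
    for i in sequence:
        cur = neighbors[cur][i - 1]
        visited.add(cur)
    return visited

# A labelled 2-regular graph (a 4-cycle): each vertex lists its first
# and second neighbour.
labeled_cycle = {0: [1, 3], 1: [2, 0], 2: [3, 1], 3: [0, 2]}
seq = [1, 1, 1]                # "go to your first neighbour", three times
print([sorted(follow_sequence(labeled_cycle, s, seq)) for s in labeled_cycle])
```

With this labelling, the same three instructions visit every vertex from any starting vertex, which is exactly the property a universal traversal sequence must have.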

94.6 References
1. R, D J.; S, R E.; L, II, P M. (1977).
”A A  S H   T S P”.
SIAM Journal on Computing. 6 (3): 563–581. doi42 :10.1137/020604143 .
2. D, S; K, R; M, E (2012). ”O
G E  A”. Proc. Of the 19th International Colloquium
on Structural Information and Communication Complexity (SIROCCO). Lecture
Notes in Computer Science. 7355: 267–278. doi44 :10.1007/978-3-642-31104-8_2345 .
ISBN46 978-3-642-31103-147 .
3. F, K-T; W, R (D 2016). ”L
        ”48 .
Theoretical Computer Science. 655: 15–29. doi49 :10.1016/j.tcs.2015.11.01750 .

41 https://en.wikipedia.org/wiki/Regular_graph
42 https://en.wikipedia.org/wiki/Doi_(identifier)
43 https://doi.org/10.1137%2F0206041
44 https://en.wikipedia.org/wiki/Doi_(identifier)
45 https://doi.org/10.1007%2F978-3-642-31104-8_23
46 https://en.wikipedia.org/wiki/ISBN_(identifier)
47 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-31103-1
48 https://zenodo.org/record/895821
49 https://en.wikipedia.org/wiki/Doi_(identifier)
50 https://doi.org/10.1016%2Fj.tcs.2015.11.017

1057

4. A, R.; K, R.; L, R.; L, L.; R, C. (1979). ”R-
 ,   ,     
”. 20th Annual Symposium on Foundations of Computer Science (SFCS
1979): 218–223. doi51 :10.1109/SFCS.1979.3452 .

94.7 See also


• External memory graph traversal53

51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.1109%2FSFCS.1979.34
53 https://en.wikipedia.org/wiki/External_memory_graph_traversal

1058
95 Hierarchical clustering of networks

Hierarchical clustering1 is one method for finding community structures2 in a network3 . The
technique arranges the network into a hierarchy of groups according to a specified weight
function. The data can then be represented in a tree structure known as a dendrogram4 .
Hierarchical clustering can either be agglomerative5 or divisive6 depending on whether one
proceeds through the algorithm by adding links to or removing links from the network,
respectively. One divisive technique is the Girvan–Newman algorithm7 .

95.1 Algorithm

In the hierarchical clustering algorithm, a weight8 Wij is first assigned to each pair of
vertices9 (i, j) in the network. The weight, which can vary depending on implementation
(see section below), is intended to indicate how closely related the vertices are. Then,
starting with all the nodes in the network disconnected, begin pairing nodes in order of
decreasing weight between the pairs (in the divisive case, start from the original network
and remove links in order of decreasing weight). As links are added, connected subsets
begin to form. These represent the network's community structures.
The components at each iterative step are always a subset of other structures. Hence, the
subsets can be represented using a tree diagram, or dendrogram10 . Horizontal slices of the
tree at a given level indicate the communities that exist above and below a value of the
weight.
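The agglomerative variant can be sketched with a union-find structure; the weight table below is invented toy data of this example.

```python
def agglomerative_communities(n, weights):
    """Agglomerative hierarchical clustering sketch: process vertex
    pairs in order of decreasing weight W_ij, merging the components
    containing them (union-find) and recording each merge; the list of
    merges describes the dendrogram."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    merges = []
    for (i, j), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                         # the pair joins two components
            parent[rj] = ri
            merges.append((w, i, j))
    return merges

# Hypothetical weights W_ij for a 4-vertex network: {0,1} and {2,3} are
# tight groups with a weak tie between them.
w = {(0, 1): 0.9, (2, 3): 0.8, (1, 2): 0.3, (0, 3): 0.1}
print(agglomerative_communities(4, w))
```

Cutting the dendrogram above the weakest merge (weight 0.3) recovers the two communities {0, 1} and {2, 3}.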

95.2 Weights

There are many possible weights for use in hierarchical clustering algorithms. The specific
weight used is dictated by the data as well as considerations for computational speed.
Additionally, the communities found in the network are highly dependent on the choice of

1 https://en.wikipedia.org/wiki/Hierarchical_clustering
2 https://en.wikipedia.org/wiki/Community_structure
3 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
4 https://en.wikipedia.org/wiki/Dendrogram
5 https://en.wikipedia.org/wiki/Agglomerative_clustering
6 https://en.wikipedia.org/wiki/Divisive_clustering
7 https://en.wikipedia.org/wiki/Girvan%E2%80%93Newman_algorithm
8 https://en.wikipedia.org/wiki/Weight_(mathematics)
9 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
10 https://en.wikipedia.org/wiki/Dendrogram

1059

weighting function. Hence, when compared to real-world data with a known community
structure, the various weighting techniques have been met with varying degrees of success.
Two weights that have been used previously with varying success are the number of node-
independent paths between each pair of vertices and the total number of paths between
vertices weighted by the length of the path. One disadvantage of these weights, however, is
that both weighting schemes tend to separate single peripheral vertices from their rightful
communities because of the small number of paths going to these vertices. For this reason,
their use in hierarchical clustering techniques is far from optimal.[1]
Edge betweenness centrality11 has been used successfully as a weight in the Girvan–Newman
algorithm12 .[1] This technique is similar to a divisive hierarchical clustering algorithm, except
the weights are recalculated with each step.
The change in modularity13 of the network with the addition of a node has also been used
successfully as a weight.[2] This method provides a computationally less-costly alternative
to the Girvan-Newman algorithm while yielding similar results.

95.3 See also


• Network topology14
• Numerical taxonomy15
• Tree structure16

95.4 References
1. M. Girvan and M. E. J. Newman. Community structure in social and biological
networks17 . Proc. Natl. Acad. Sci. USA 99, 7821–7826 (2002).
2. M. E. J. Newman. Fast algorithm for detecting community structure in networks18 .
Phys. Rev. E 69, 066133 (2004).

11 https://en.wikipedia.org/wiki/Betweenness_centrality
12 https://en.wikipedia.org/wiki/Girvan%E2%80%93Newman_algorithm
13 https://en.wikipedia.org/wiki/Modularity_(networks)
14 https://en.wikipedia.org/wiki/Network_topology
15 https://en.wikipedia.org/wiki/Numerical_taxonomy
16 https://en.wikipedia.org/wiki/Tree_structure
17 https://arxiv.org/abs/cond-mat/0112110/
18 https://arxiv.org/abs/cond-mat/0309508/

1060
96 Hopcroft–Karp algorithm

Hopcroft–Karp algorithm
Class: Graph algorithm
Data structure: Graph
Worst-case performance: O(E√V)
Worst-case space complexity: O(V)

In computer science1 , the Hopcroft–Karp algorithm (sometimes more accurately called


the Hopcroft–Karp–Karzanov algorithm)[1] is an algorithm2 that takes as input a bipartite graph3 and produces as output a maximum cardinality matching4 – a set of as many edges as possible with the property that no two edges share an endpoint. It runs in O(|E|√|V|) time in the worst case5, where E is the set of edges in the graph, V is the set of vertices of the graph, and it is assumed that |E| = Ω(|V|). In the case of dense graphs6 the time bound becomes O(|V|^2.5), and for sparse random graphs7 it runs in near-linear (in |E|) time[citation needed].
The algorithm was found by John Hopcroft9 and Richard Karp10 (197311 ) and independently
by Alexander Karzanov12 (197313 ).[2] As in previous methods for matching such as the
Hungarian algorithm14 and the work of Edmonds (1965)15 , the Hopcroft–Karp algorithm
repeatedly increases the size of a partial matching by finding augmenting paths. These
paths are sequences of edges of the graph, which alternate between edges in the matching
and edges out of the partial matching, and where the initial and final edge are not in
the partial matching. Finding an augmenting path allows us to increment the size of the
partial matching, by simply toggling the edges of the augmenting path (putting in the
partial matching those that were not, and vice versa). Simpler algorithms for bipartite

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Bipartite_graph
4 https://en.wikipedia.org/wiki/Maximum_cardinality_matching
5 https://en.wikipedia.org/wiki/Worst_case_analysis
6 https://en.wikipedia.org/wiki/Dense_graph
7 https://en.wikipedia.org/wiki/Random_graph
9 https://en.wikipedia.org/wiki/John_Hopcroft
10 https://en.wikipedia.org/wiki/Richard_Karp
11 #CITEREFHopcroftKarp1973
12 https://en.wikipedia.org/wiki/Alexander_V._Karzanov
13 #CITEREFKarzanov1973
14 https://en.wikipedia.org/wiki/Hungarian_algorithm
15 #CITEREFEdmonds1965

1061

matching, such as the Ford–Fulkerson algorithm16, find one augmenting path per iteration; the Hopcroft–Karp algorithm instead finds a maximal set of shortest augmenting paths, so as to ensure that only O(√|V|) iterations are needed instead of O(|V|) iterations. The same performance of O(|E|√|V|) can be achieved to find maximum cardinality matchings in arbitrary graphs, with the more complicated algorithm of Micali and Vazirani[3].
The Hopcroft–Karp algorithm can be seen as a special case of Dinic's algorithm17 for the
maximum flow problem18 .[4]

96.1 Augmenting paths

A vertex that is not the endpoint of an edge in some partial matching M is called a free
vertex. The basic concept that the algorithm relies on is that of an augmenting path, a
path that starts at a free vertex, ends at a free vertex, and alternates between unmatched
and matched edges within the path. It follows from this definition that, except for the
endpoints, all other vertices (if any) in an augmenting path must be non-free vertices. An
augmenting path could consist of only two vertices (both free) and a single unmatched edge
between them.
If M is a matching, and P is an augmenting path relative to M , then the symmetric
difference19 of the two sets of edges, M ⊕ P , would form a matching with size |M | + 1.
Thus, by finding augmenting paths, an algorithm may increase the size of the matching.
Conversely, suppose that a matching M is not optimal, and let P be the symmetric difference
M ⊕ M ∗ where M ∗ is an optimal matching. Because M and M ∗ are both matchings, every
vertex has degree at most 2 in P . So P must form a collection of disjoint cycles, of paths
with an equal number of matched and unmatched edges in M , of augmenting paths for
M , and of augmenting paths for M ∗ ; but the latter is impossible because M ∗ is optimal.
Now, the cycles and the paths with equal numbers of matched and unmatched edges
do not contribute to the difference in size between M and M ∗ , so this difference is equal
to the number of augmenting paths for M in P . Thus, whenever there exists a matching
M ∗ larger than the current matching M , there must also exist an augmenting path. If no
augmenting path can be found, an algorithm may safely terminate, since in this case M
must be optimal.
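The augmentation step is easy to state in code. The following sketch (Python; the vertex names and edge sets are made up for illustration) represents the matching and the path as sets of undirected edges and toggles the path with a single symmetric-difference operation:

```python
def edge(u, v):
    # Undirected edges as frozensets, so that edge(u, v) == edge(v, u).
    return frozenset((u, v))

# A hypothetical partial matching M and an augmenting path P relative to it:
# u1 (free) - v1 - u2 - v2 (free), where v1-u2 is the one matched edge.
M = {edge("u2", "v1")}
P = {edge("u1", "v1"), edge("v1", "u2"), edge("u2", "v2")}

# Toggling the path's edges is exactly the symmetric difference M ⊕ P.
M_new = M ^ P
print(len(M_new))  # 2: the matching grew by one edge
```

The matched edge v1-u2 drops out of the matching while the two formerly unmatched edges enter it, so |M ⊕ P| = |M| + 1, as the text states.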
An augmenting path in a matching problem is closely related to the augmenting paths20
arising in maximum flow problems21 , paths along which one may increase the amount of
flow between the terminals of the flow. It is possible to transform the bipartite matching
problem into a maximum flow instance, such that the alternating paths of the matching
problem become augmenting paths of the flow problem.[5] In fact, a generalization of the

16 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
17 https://en.wikipedia.org/wiki/Dinic%27s_algorithm
18 https://en.wikipedia.org/wiki/Maximum_flow_problem
19 https://en.wikipedia.org/wiki/Symmetric_difference
20 https://en.wikipedia.org/wiki/Augmenting_path
21 https://en.wikipedia.org/wiki/Maximum_flow_problem


technique used in the Hopcroft–Karp algorithm to arbitrary flow networks is known as Dinic's
algorithm22 .

96.2 Algorithm

The algorithm may be expressed in the following pseudocode23 .


Input: Bipartite graph G(U ∪ V, E)
Output: Matching M ⊆ E
M ←∅
repeat
P ← {P1 , P2 , . . . , Pk } maximal set of vertex-disjoint shortest augmenting paths
M ← M ⊕ (P1 ∪ P2 ∪ · · · ∪ Pk )
until P = ∅
In more detail, let U and V be the two sets in the bipartition of G, and let the matching
from U to V at any time be represented as the set M . The algorithm is run in phases. Each
phase consists of the following steps.
• A breadth-first search24 partitions the vertices of the graph into layers. The free vertices in
U are used as the starting vertices of this search and form the first layer of the partitioning.
At the first level of the search, there are only unmatched edges, since the free vertices
in U are by definition not adjacent to any matched edges. At subsequent levels of the
search, the traversed edges are required to alternate between matched and unmatched.
That is, when searching for successors from a vertex in U , only unmatched edges may be
traversed, while from a vertex in V only matched edges may be traversed. The search
terminates at the first layer k where one or more free vertices in V are reached.
• All free vertices in V at layer k are collected into a set F . That is, a vertex v is put into
F if and only if it ends a shortest augmenting path.
• The algorithm finds a maximal set of vertex-disjoint augmenting paths of length k.
(Maximal means that no more such paths can be added. This is different from find-
ing the maximum number of such paths, which would be harder to do. Fortunately, it
is sufficient here to find a maximal set of paths.) This set may be computed by depth
first search25 (DFS) from F to the free vertices in U , using the breadth first layering to
guide the search: the DFS is only allowed to follow edges that lead to an unused vertex
in the previous layer, and paths in the DFS tree must alternate between matched and
unmatched edges. Once an augmenting path is found that involves one of the vertices in
F , the DFS is continued from the next starting vertex. Any vertex encountered during
the DFS can immediately be marked as used, since if there is no path from it to U at the
current point in the DFS, then that vertex can't be used to reach U at any other point

22 https://en.wikipedia.org/wiki/Dinic%27s_algorithm
23 https://en.wikipedia.org/wiki/Pseudocode
24 https://en.wikipedia.org/wiki/Breadth-first_search
25 https://en.wikipedia.org/wiki/Depth_first_search

1063
Hopcroft–Karp algorithm

in the DFS. This ensures O(|E|) running time for the DFS. It is also possible to work in
the other direction, from free vertices in U to those in V , which is the variant used in the
pseudocode.
• Every one of the paths found in this way is used to enlarge M .
The algorithm terminates when no more augmenting paths are found in the breadth first
search part of one of the phases.

96.3 Analysis

Each phase consists of a single breadth first search and a single depth first search. Thus,
a single phase may be implemented in O(|E|) time. Therefore, the first √|V | phases, in a
graph with |V | vertices and |E| edges, take time O(|E|√|V |).
Each phase increases the length of the shortest augmenting path by at least one: the phase
finds a maximal set of augmenting paths of the given length, so any remaining augmenting
path must be longer. Therefore, once the initial √|V | phases of the algorithm are complete,
the shortest remaining augmenting path has at least √|V | edges in it. However, the
symmetric difference26 of the eventual optimal matching and of the partial matching M found
by the initial phases forms a collection of vertex-disjoint augmenting paths and alternating
cycles. If each of the paths in this collection has length at least √|V |, there can be at most
√|V | paths in the collection, and the size of the optimal matching can differ from the size
of M by at most √|V | edges. Since each phase of the algorithm increases the size of the
matching by at least one, there can be at most √|V | additional phases before the algorithm
terminates.
Since the algorithm performs a total of at most 2√|V | phases, it takes a total time of
O(|E|√|V |) in the worst case.
In many instances, however, the algorithm may run faster than this worst-case analysis
indicates. For instance, in the average case27 for sparse28 bipartite random graphs29 , Bast
et al. (2006)30 (improving a previous result of Motwani 199431 ) showed that with high
probability all non-optimal matchings have augmenting paths of logarithmic32 length. As
a consequence, for these graphs, the Hopcroft–Karp algorithm takes O(log |V |) phases and
O(|E| log |V |) total time.

26 https://en.wikipedia.org/wiki/Symmetric_difference
27 https://en.wikipedia.org/wiki/Average_case_analysis
28 https://en.wikipedia.org/wiki/Sparse_graph
29 https://en.wikipedia.org/wiki/Random_graph
30 #CITEREFBastMehlhornSchaferTamaki2006
31 #CITEREFMotwani1994
32 https://en.wikipedia.org/wiki/Logarithm


96.4 Comparison with other bipartite matching algorithms

For sparse graphs33 , the Hopcroft–Karp algorithm continues to have the best known worst-
case performance, but for dense graphs (|E| = Ω(|V |2 )) a more recent algorithm by Alt et
al. (1991)34 achieves a slightly better time bound, O(|V |1.5 √(|E|/ log |V |)). Their algorithm
is based on using a push-relabel maximum flow algorithm35 and then, when the match-
ing created by this algorithm becomes close to optimum, switching to the Hopcroft–Karp
method.
Several authors have performed experimental comparisons of bipartite matching algorithms.
Their results in general tend to show that the Hopcroft–Karp method is not as good in
practice as it is in theory: it is outperformed both by simpler breadth-first and depth-first
strategies for finding augmenting paths, and by push-relabel techniques.[6]

96.5 Non-bipartite graphs

The same idea of finding a maximal set of shortest augmenting paths works also for find-
ing maximum cardinality matchings in non-bipartite graphs, and for the same reasons the
algorithms based on this idea take O(√|V |) phases. However, for non-bipartite graphs, the
task of finding the augmenting paths within each phase is more difficult. Building on the
work of several slower predecessors, Micali & Vazirani (1980)36 showed how to implement
a phase in linear time, resulting in a non-bipartite matching algorithm with the same time
bound as the Hopcroft–Karp algorithm for bipartite graphs. The Micali–Vazirani tech-
nique is complex, and its authors did not provide full proofs of their results; subsequently,
a ”clear exposition” was published by Peterson & Loui (1988)37 and alternative methods
were described by other authors.[7] In 2012, Vazirani offered a new simplified proof of the
Micali–Vazirani algorithm.[8]

96.6 Pseudocode
/*
G = U ∪ V ∪ {NIL}
where U and V are the left and right sides of the bipartite graph and NIL is a
special null vertex
*/

function BFS() is
for each u in U do
if Pair_U[u] = NIL then

33 https://en.wikipedia.org/wiki/Sparse_graph
34 #CITEREFAltBlumMehlhornPaul1991
35 https://en.wikipedia.org/wiki/Push-relabel_maximum_flow_algorithm
36 #CITEREFMicaliVazirani1980
37 #CITEREFPetersonLoui1988


Dist[u] := 0
Enqueue(Q, u)
else
Dist[u] := ∞
Dist[NIL] := ∞
while Empty(Q) = false do
u := Dequeue(Q)
if Dist[u] < Dist[NIL] then
for each v in Adj[u] do
if Dist[Pair_V[v]] = ∞ then
Dist[Pair_V[v]] := Dist[u] + 1
Enqueue(Q, Pair_V[v])
return Dist[NIL] ≠ ∞

function DFS(u) is
if u ≠ NIL then
for each v in Adj[u] do
if Dist[Pair_V[v]] = Dist[u] + 1 then
if DFS(Pair_V[v]) = true then
Pair_V[v] := u
Pair_U[u] := v
return true
Dist[u] := ∞
return false
return true

function Hopcroft–Karp is
for each u in U do
Pair_U[u] := NIL
for each v in V do
Pair_V[v] := NIL
matching := 0
while BFS() = true do
for each u in U do
if Pair_U[u] = NIL then
if DFS(u) = true then
matching := matching + 1
return matching
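The pseudocode above can be transcribed into a runnable form. The following Python sketch is a direct transcription rather than a tuned implementation; the adjacency representation (a dict mapping each vertex of U to its neighbours in V) and the use of None for the NIL vertex are choices of this sketch, not part of the original pseudocode:

```python
from collections import deque

INF = float("inf")

def hopcroft_karp(U, V, adj):
    """Maximum-cardinality matching in a bipartite graph G(U ∪ V, E).

    adj maps each vertex of U to its neighbours in V; None plays the
    role of the NIL vertex from the pseudocode above."""
    pair_u = {u: None for u in U}
    pair_v = {v: None for v in V}
    dist = {}

    def bfs():
        # Layer the graph starting from the free vertices of U.
        q = deque()
        for u in U:
            if pair_u[u] is None:
                dist[u] = 0
                q.append(u)
            else:
                dist[u] = INF
        dist[None] = INF
        while q:
            u = q.popleft()
            if dist[u] < dist[None]:
                for v in adj.get(u, ()):
                    w = pair_v[v]          # None when v is free
                    if dist[w] == INF:
                        dist[w] = dist[u] + 1
                        q.append(w)
        return dist[None] != INF           # true iff an augmenting path exists

    def dfs(u):
        # Follow only edges that advance exactly one BFS layer at a time.
        if u is None:
            return True
        for v in adj.get(u, ()):
            if dist[pair_v[v]] == dist[u] + 1 and dfs(pair_v[v]):
                pair_v[v] = u
                pair_u[u] = v
                return True
        dist[u] = INF                      # dead end: never revisit u this phase
        return False

    matching = 0
    while bfs():
        for u in U:
            if pair_u[u] is None and dfs(u):
                matching += 1
    return matching, pair_u, pair_v
```

For example, with U = {1, 2, 3}, V = {a, b, c} and edges 1–a, 1–b, 2–a, 3–c, the call finds a perfect matching of size 3.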


Figure 238 Execution on an example graph showing input graph and matching after
intermediate iteration 1 and final iteration 2.

96.6.1 Explanation

Let the vertices of our graph be partitioned in U and V, and consider a partial matching,
as indicated by the Pair_U and Pair_V tables that contain the one vertex to which each
vertex of U and of V is matched, or NIL for unmatched vertices. The key idea is to add two
dummy vertices on each side of the graph: uDummy connected to all unmatched vertices
in U and vDummy connected to all unmatched vertices in V. Now, if we run a breadth-first
search39 (BFS) from uDummy to vDummy then we can get the paths of minimal length that
connect currently unmatched vertices in U to currently unmatched vertices in V. Note that,
as the graph is bipartite, these paths always alternate between vertices in U and vertices in
V, and we require in our BFS that when going from V to U, we always select a matched
edge. If we reach an unmatched vertex of V, then we end at vDummy and the search for
paths in the BFS terminates. To summarize, the BFS starts at unmatched vertices in U,
goes to all their neighbors in V, if all are matched then it goes back to the vertices in U to
which all these vertices are matched (and which were not visited before), then it goes to all
the neighbors of these vertices, etc., until one of the vertices reached in V is unmatched.
Observe in particular that BFS marks the unmatched nodes of U with distance 0, then
increments the distance every time it comes back to U. This guarantees that the paths con-
sidered in the BFS are of minimal length to connect unmatched vertices of U to unmatched
vertices of V while always going back from V to U on edges that are currently part of the
matching. In particular, the special NIL vertex, which corresponds to vDummy, then gets

39 https://en.wikipedia.org/wiki/Breadth-first_search


assigned a finite distance, so the BFS function returns true iff some path has been found.
If no path has been found, then there are no augmenting paths left and the matching is
maximal.
If BFS returns true, then we can go ahead and update the pairing for vertices on the
minimal-length paths found from U to V: we do so using a depth-first search40 (DFS). Note
that each vertex in V on such a path, except for the last one, is currently matched. So we can
explore with the DFS, making sure that the paths that we follow correspond to the distances
computed in the BFS. We update along every such path by removing from the matching all
edges of the path that are currently in the matching, and adding to the matching all edges
of the path that are currently not in the matching: as this is an augmenting path (the first
and last edges of the path were not part of the matching, and the path alternated between
matched and unmatched edges), then this increases the number of edges in the matching.
This is the same as replacing the current matching by the symmetric difference between the
current matching and the entire path.
Note that the code ensures that all augmenting paths that we consider are vertex disjoint.
Indeed, after doing the symmetric difference for a path, none of its vertices can be con-
sidered again in the DFS, because Dist[Pair_V[v]] will not be equal to Dist[u] + 1 (it
would be exactly Dist[u]).
Also observe that the DFS does not visit the same vertex multiple times. This is thanks to
the following lines:
Dist[u] := ∞
return false

When we were not able to find any shortest augmenting path from a vertex u, then the DFS
marks vertex u by setting Dist[u] to infinity, so that these vertices are not visited again.
One last observation is that we actually don't need uDummy: its role is simply to put
all unmatched vertices of U in the queue when we start the BFS. As for vDummy, it is
denoted as NIL in the pseudocode above.

96.7 See also


• Maximum cardinality matching41 , the problem solved by the algorithm, and its general-
ization to non-bipartite graphs
• Assignment problem42 , a generalization of this problem on weighted graphs43 , solved e.g.
by the Hungarian algorithm44
• Edmonds–Karp algorithm45 for finding maximum flow, a generalization of the Hopcroft–
Karp algorithm

40 https://en.wikipedia.org/wiki/Depth-first_search
41 https://en.wikipedia.org/wiki/Maximum_cardinality_matching
42 https://en.wikipedia.org/wiki/Assignment_problem
43 https://en.wikipedia.org/w/index.php?title=Weighted_graphs&action=edit&redlink=1
44 https://en.wikipedia.org/wiki/Hungarian_algorithm
45 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm

1068
Notes

96.8 Notes
1. Gabow (2017)46 ; Annamalai (2018)47
2. Dinitz (2006)48 .
3. P, P A.; L, M C. (1988-11-01). ”T  
     ”49 . Algorithmica. 3 (1): 511–533.
doi50 :10.1007/BF0176212951 . ISSN52 1432-054153 .
4. T, R E (1983-01-01). Data Structures and Network Algorithms54 .
CBMS-NSF R C S  A M. S
 I  A M. 55 :10.1137/1.978161197026556 .
ISBN57 978-0-89871-187-558 ., p102
5. Ahuja, Magnanti & Orlin (1993)59 , section 12.3, bipartite cardinality matching prob-
lem, pp. 469–470.
6. Chang & McCormick (1990)60 ; Darby-Dowman (1980)61 ; Setubal (1993)62 ; Setubal
(1996)63 .
7. Gabow & Tarjan (1991)64 .
8. Vazirani (2012)65

96.9 References
• A, R K.66 ; M, T L.67 ; O, J B.68 (1993), Net-
work Flows: Theory, Algorithms and Applications, Prentice-Hall.

46 #CITEREFGabow2017
47 #CITEREFAnnamalai2018
48 #CITEREFDinitz2006
49 https://doi.org/10.1007/BF01762129
50 https://en.wikipedia.org/wiki/Doi_(identifier)
51 https://doi.org/10.1007%2FBF01762129
52 https://en.wikipedia.org/wiki/ISSN_(identifier)
53 http://www.worldcat.org/issn/1432-0541
54 https://epubs.siam.org/doi/book/10.1137/1.9781611970265
55 https://en.wikipedia.org/wiki/Doi_(identifier)
56 https://doi.org/10.1137%2F1.9781611970265
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/978-0-89871-187-5
59 #CITEREFAhujaMagnantiOrlin1993
60 #CITEREFChangMcCormick1990
61 #CITEREFDarby-Dowman1980
62 #CITEREFSetubal1993
63 #CITEREFSetubal1996
64 #CITEREFGabowTarjan1991
65 #CITEREFVazirani2012
66 https://en.wikipedia.org/wiki/Ravindra_K._Ahuja
67 https://en.wikipedia.org/wiki/Thomas_L._Magnanti
68 https://en.wikipedia.org/wiki/James_B._Orlin


• A, H.; B, N.; M, K.69 ; P, M. (1991), ”C


(  
√ m )
        O n 1.5
log n
”, Information
Processing Letters, 37 (4): 237–240, doi70 :10.1016/0020-0190(91)90195-N71 .
• A, C (2018), ”F    -
 ”, Combinatorica, 38 (6): 1285–1307, arXiv72 :1509.0700773 ,
doi74 :10.1007/s00493-017-3567-275 , MR76 391087677
• B, H78 ; M, K79 ; S, G; T, H (2006),
”M       ”, Theory of Computing
Systems, 39 (1): 3–14, doi80 :10.1007/s00224-005-1254-y81 .
• C, S. F; MC, S. T (1990), A faster implementation of a
bipartite cardinality matching algorithm, Tech. Rep. 90-MSC-005, Faculty of Commerce
and Business Administration, Univ. of British Columbia. As cited by Setubal (1996)82 .
• D-D, K (1980), The exploitation of sparsity in large scale lin-
ear programming problems – Data structures and restructuring algorithms, Ph.D. thesis,
Brunel University. As cited by Setubal (1996)83 .
• D, Y (2006), ”D' A: T O V  E'
V”,  G, O84 ; R, A L.85 ; S, A L.
(.), Theoretical Computer Science: Essays in Memory of Shimon Even, Lec-
ture Notes in Computer Science, 3895, Berlin and Heidelberg: Springer, pp. 218–240,
doi86 :10.1007/11685654_1087 .
• E, J88 (1965), ”P, T  F”, Canadian Journal of Math-
ematics, 17: 449–467, doi89 :10.4153/CJM-1965-045-490 , MR91 017790792 .

69 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
70 https://en.wikipedia.org/wiki/Doi_(identifier)
71 https://doi.org/10.1016%2F0020-0190%2891%2990195-N
72 https://en.wikipedia.org/wiki/ArXiv_(identifier)
73 http://arxiv.org/abs/1509.07007
74 https://en.wikipedia.org/wiki/Doi_(identifier)
75 https://doi.org/10.1007%2Fs00493-017-3567-2
76 https://en.wikipedia.org/wiki/MR_(identifier)
77 http://www.ams.org/mathscinet-getitem?mr=3910876
78 https://en.wikipedia.org/wiki/Hannah_Bast
79 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1007%2Fs00224-005-1254-y
82 #CITEREFSetubal1996
83 #CITEREFSetubal1996
84 https://en.wikipedia.org/wiki/Oded_Goldreich
85 https://en.wikipedia.org/wiki/Arnold_L._Rosenberg
86 https://en.wikipedia.org/wiki/Doi_(identifier)
87 https://doi.org/10.1007%2F11685654_10
88 https://en.wikipedia.org/wiki/Jack_Edmonds
89 https://en.wikipedia.org/wiki/Doi_(identifier)
90 https://doi.org/10.4153%2FCJM-1965-045-4
91 https://en.wikipedia.org/wiki/MR_(identifier)
92 http://www.ams.org/mathscinet-getitem?mr=0177907


• G, H N. (2017), ”T     -


  ”, Fundamenta Informaticae, 154 (1–4): 109–130,
arXiv93 :1703.0399894 , doi95 :10.3233/FI-2017-155596 , MR97 369057398
• G, H N.; T, R E.99 (1991), ”F  
    ”, Journal of the ACM, 38 (4): 815–853,
doi100 :10.1145/115234.115366101 .
• H, J E.102 ; K, R M.103 (1973), ”A n5/2 algorithm for max-
imum matchings in bipartite graphs”, SIAM Journal on Computing, 2 (4): 225–231,
doi104 :10.1137/0202019105 . Previously announced at the 12th Annual Symposium on
Switching and Automata Theory, 1971.
• K, A. V.106 (1973), ”A        
 ,      ”, Problems in Cyber-
netics, 5: 66–70. Previously announced at the Seminar on Combinatorial Mathematics
(Moscow, 1971).

• M, S.107 ; V, V. V.108 (1980), ”A O( |V |·|E|)   
    ”, Proc. 21st IEEE Symp. Foundations of
Computer Science109 , . 17–27, 110 :10.1109/SFCS.1980.12111 .
• P, P A.; L, M C. (1988), ”T   -
   M  V”, Algorithmica112 , 3 (1–4): 511–533, Cite-
SeerX113 10.1.1.228.9625114 , doi115 :10.1007/BF01762129116 .
• M, R117 (1994), ”A-    
   ”, Journal of the ACM, 41 (6): 1329–1356,
doi118 :10.1145/195613.195663119 .

93 https://en.wikipedia.org/wiki/ArXiv_(identifier)
94 http://arxiv.org/abs/1703.03998
95 https://en.wikipedia.org/wiki/Doi_(identifier)
96 https://doi.org/10.3233%2FFI-2017-1555
97 https://en.wikipedia.org/wiki/MR_(identifier)
98 http://www.ams.org/mathscinet-getitem?mr=3690573
99 https://en.wikipedia.org/wiki/Robert_Tarjan
100 https://en.wikipedia.org/wiki/Doi_(identifier)
101 https://doi.org/10.1145%2F115234.115366
102 https://en.wikipedia.org/wiki/John_Hopcroft
103 https://en.wikipedia.org/wiki/Richard_Karp
104 https://en.wikipedia.org/wiki/Doi_(identifier)
105 https://doi.org/10.1137%2F0202019
106 https://en.wikipedia.org/wiki/Alexander_V._Karzanov
107 https://en.wikipedia.org/wiki/Silvio_Micali
108 https://en.wikipedia.org/wiki/Vijay_Vazirani
109 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
110 https://en.wikipedia.org/wiki/Doi_(identifier)
111 https://doi.org/10.1109%2FSFCS.1980.12
112 https://en.wikipedia.org/wiki/Algorithmica
113 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
114 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.228.9625
115 https://en.wikipedia.org/wiki/Doi_(identifier)
116 https://doi.org/10.1007%2FBF01762129
117 https://en.wikipedia.org/wiki/Rajeev_Motwani
118 https://en.wikipedia.org/wiki/Doi_(identifier)
119 https://doi.org/10.1145%2F195613.195663


• S, J C. (1993), ”N     ”,


Proc. Netflow93, Dept. of Informatics, Univ. of Pisa, pp. 211–216. As cited by Setubal
(1996)120 .
• S, J C. (1996), Sequential and parallel experimental results with bipartite
matching algorithms, Tech. Rep. IC-96-09, Inst. of Computing, Univ. of Campinas,
CiteSeerX121 10.1.1.48.3539122 .
• V, V (2012), An Improved Definition of Blossoms and a Simpler Proof
of the MV Matching Algorithm, CoRR abs/1210.4594, arXiv123 :1210.4594124 , Bib-
code125 :2012arXiv1210.4594V126 .

120 #CITEREFSetubal1996
121 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
122 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.3539
123 https://en.wikipedia.org/wiki/ArXiv_(identifier)
124 http://arxiv.org/abs/1210.4594
125 https://en.wikipedia.org/wiki/Bibcode_(identifier)
126 https://ui.adsabs.harvard.edu/abs/2012arXiv1210.4594V

97 Iterative deepening A*


Iterative deepening A*
Class: Search algorithm
Data structure: Tree, Graph
Worst-case space complexity: O(d)





Iterative deepening A* (IDA*) is a graph traversal and path search4 algorithm that
can find the shortest path5 between a designated start node and any member of a set of
goal nodes in a weighted graph. It is a variant of iterative deepening depth-first search6
that borrows the idea of using a heuristic function to evaluate the remaining cost to get to the
goal from the A* search algorithm7 . Since it is a depth-first search algorithm, its memory
usage is lower than in A*, but unlike ordinary iterative deepening search, it concentrates
on exploring the most promising nodes and thus does not go to the same depth everywhere
in the search tree. Unlike A*, IDA* does not utilize dynamic programming8 and therefore
often ends up exploring the same nodes many times.
While the standard iterative deepening depth-first search uses search depth as the cutoff
for each iteration, the IDA* uses the more informative f (n) = g(n) + h(n), where g(n) is
the cost to travel from the root to node n and h(n) is a problem-specific heuristic estimate
of the cost to travel from n to the goal.
The algorithm was first described by Richard Korf in 1985.[1]

97.1 Description

Iterative-deepening-A* works as follows: at each iteration, perform a depth-first search,
cutting off a branch when its total cost f (n) = g(n) + h(n) exceeds a given threshold. This
threshold starts at the estimate of the cost at the initial state, and increases for each
iteration of the algorithm. At each iteration, the threshold used for the next iteration is the
minimum cost of all values that exceeded the current threshold.[2]
As in A*, the heuristic has to have particular properties to guarantee optimality (shortest
paths). See Properties9 below.

97.2 Pseudocode
path current search path (acts like a stack)
node current node (last node in current path)
g the cost to reach current node
f estimated cost of the cheapest path (root..node..goal)
h(node) estimated cost of the cheapest path (node..goal)
cost(node, succ) step cost function
is_goal(node) goal test
successors(node) node expanding function, expand nodes ordered by g + h(node)
ida_star(root) return either NOT_FOUND or a pair with the best path and its cost

procedure ida_star(root)
bound := h(root)
path := [root]
loop
t := search(path, 0, bound)

4 https://en.wikipedia.org/wiki/Pathfinding
5 https://en.wikipedia.org/wiki/Shortest_path_problem
6 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
7 https://en.wikipedia.org/wiki/A*_search_algorithm
8 https://en.wikipedia.org/wiki/Dynamic_programming
9 #Properties


if t = FOUND then return (path, bound)
if t = ∞ then return NOT_FOUND
bound := t
end loop
end procedure

function search(path, g, bound)
node := path.last
f := g + h(node)
if f > bound then return f
if is_goal(node) then return FOUND
min := ∞
for succ in successors(node) do
if succ not in path then
path.push(succ)
t := search(path, g + cost(node, succ), bound)
if t = FOUND then return FOUND
if t < min then min := t
path.pop()
end if
end for
return min
end function
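The pseudocode above can be transcribed into Python almost line for line. In the sketch below, the example graph, heuristic values, and helper names are illustrative assumptions, not part of the original description:

```python
FOUND = object()   # sentinel distinguishing "goal found" from a cost bound
INF = float("inf")

def ida_star(root, h, cost, is_goal, successors):
    """Returns (path, bound) on success, or None if no goal is reachable."""
    bound = h(root)
    path = [root]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f
        if is_goal(node):
            return FOUND
        minimum = INF
        for succ in successors(node):
            if succ not in path:           # avoid cycles on the current path
                path.append(succ)
                t = search(g + cost(node, succ), bound)
                if t is FOUND:
                    return FOUND
                if t < minimum:
                    minimum = t
                path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t is FOUND:
            return path, bound
        if t == INF:
            return None
        bound = t                          # smallest f that exceeded the old bound

# A tiny worked instance: weighted digraph with goal D.
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
h_values = {"A": 2, "B": 1, "C": 1, "D": 0}   # admissible estimates to D
path, cost_found = ida_star(
    "A",
    h=lambda n: h_values[n],
    cost=lambda a, b: graph[a][b],
    is_goal=lambda n: n == "D",
    successors=lambda n: graph[n],
)
# path == ["A", "B", "C", "D"], cost_found == 3
```

On this instance the first iteration uses threshold h(A) = 2, every branch exceeds it with minimum f = 3, and the second iteration with threshold 3 reaches the goal along the cheapest path A–B–C–D of cost 3.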

97.3 Properties

Like A*, IDA* is guaranteed to find the shortest path leading from the given start node to
any goal node in the problem graph, if the heuristic function h is admissible10 ,[2] that is
h(n) ≤ h∗ (n)
for all nodes n, where h* is the true cost of the shortest path from n to the nearest goal
(the ”perfect heuristic”).[3]
IDA* is beneficial when the problem is memory constrained. A* search keeps a large queue
of unexplored nodes that can quickly fill up memory. By contrast, because IDA* does
not remember any node except the ones on the current path11 , it requires an amount of
memory12 that is only linear in the length of the solution that it constructs. Its time
complexity is analyzed by Korf et al. under the assumption that the heuristic cost estimate
h is consistent, meaning that
h(n) ≤ cost(n, n′ ) + h(n′ )
for all nodes n and all neighbors n' of n; they conclude that compared to a brute-force
tree search over an exponential-sized problem, IDA* achieves a smaller search depth (by a
constant factor), but not a smaller branching factor.[4]
Recursive best-first search13 is another memory-constrained version of A* search that can
be faster in practice than IDA*, since it requires less regenerating of nodes.[3]:282–289
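Admissibility can be checked mechanically on a small instance by computing the true costs-to-goal h* with Dijkstra's algorithm on the reversed graph and comparing them with h. The graph and heuristic values in the sketch below are made-up illustrations:

```python
import heapq

def true_costs_to_goal(graph, goal):
    """h*(n): exact cheapest cost from each node to the goal,
    computed by Dijkstra's algorithm on the edge-reversed graph."""
    rev = {n: {} for n in graph}
    for u, nbrs in graph.items():
        for v, w in nbrs.items():
            rev[v][u] = w
    dist = {n: float("inf") for n in graph}
    dist[goal] = 0
    pq = [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in rev[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 2, "B": 1, "C": 1, "D": 0}
h_star = true_costs_to_goal(graph, "D")
admissible = all(h[n] <= h_star[n] for n in graph)
print(admissible)  # True: this h never overestimates
```

A heuristic that passes this check on every instance of interest guarantees, per the property above, that IDA* returns a shortest path.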

10 https://en.wikipedia.org/wiki/Admissible_heuristic
11 https://en.wikipedia.org/wiki/Path_(graph_theory)
12 https://en.wikipedia.org/wiki/Space_complexity
13 https://en.wikipedia.org/w/index.php?title=Recursive_best-first_search&action=edit&redlink=1

1076
Applications

97.4 Applications

Applications of IDA* are found in such problems as planning14 .[5]

97.5 References
1. K, R E. (1985). ”D- I-D: A O-
 A T S”15 (PDF). Artificial Intelligence16 . 27: 97–109.
doi17 :10.1016/0004-3702(85)90084-018 .
2. K, R E. (1985). ”D-  ”19 (PDF): 7.
Cite journal requires |journal= (help20 )
3. B, I21 (2001). Prolog Programming for Artificial Intelligence. Pearson
Education.
4. K, R E.; R, M; E, S (2001). ”T -
  --A∗”. Artificial Intelligence. 129 (1–2): 199–218.
doi22 :10.1016/S0004-3702(01)00094-723 .
5. B, B; G, H C. (2001). ”P   ”.
Artificial Intelligence. 129 (1–2): 5–33. doi24 :10.1016/S0004-3702(01)00108-425 .
hdl26 :10230/3632527 .

14 https://en.wikipedia.org/wiki/Automated_planning_and_scheduling
15 http://www.cse.sc.edu/~mgv/csce580f09/gradPres/korf_IDAStar_1985.pdf
16 https://en.wikipedia.org/wiki/Artificial_Intelligence_(journal)
17 https://en.wikipedia.org/wiki/Doi_(identifier)
18 https://doi.org/10.1016%2F0004-3702%2885%2990084-0
19 https://cse.sc.edu/~mgv/csce580f09/gradPres/korf_IDAStar_1985.pdf
21 https://en.wikipedia.org/wiki/Ivan_Bratko_(computer_scientist)
22 https://en.wikipedia.org/wiki/Doi_(identifier)
23 https://doi.org/10.1016%2FS0004-3702%2801%2900094-7
24 https://en.wikipedia.org/wiki/Doi_(identifier)
25 https://doi.org/10.1016%2FS0004-3702%2801%2900108-4
26 https://en.wikipedia.org/wiki/Hdl_(identifier)
27 http://hdl.handle.net/10230%2F36325

98 Iterative deepening depth-first search


Iterative deepening depth-first search
Class: Search algorithm
Data structure: Tree, Graph
Worst-case performance: O(bd ), where b is the branching factor and d is the depth of the shallowest solution
Worst-case space complexity: O(d)[1]:5



Graph and tree


search algorithms

• α–β
• A*
• B*
• Backtracking
• Beam
• Bellman–Ford
• Best-first
• Bidirectional
• Borůvka
• Branch & bound
• BFS
• British Museum
• D*
• DFS
• Dijkstra
• Edmonds
• Floyd–Warshall
• Fringe search
• Hill climbing
• IDA*
• Iterative deepening
• Johnson
• Jump point
• Kruskal
• Lexicographic BFS
• LPA*
• Prim
• SMA*
• SPFA

Listings

• Graph algorithms
• Search algorithms
• List of graph algorithms

Related topics

• Dynamic programming
• Graph traversal
• Tree traversal
• Search games


In computer science11 , iterative deepening search or more specifically iterative
deepening depth-first search[2] (IDS or IDDFS) is a state space12 /graph search strategy in
which a depth-limited version of depth-first search13 is run repeatedly with increasing depth
limits until the goal is found. IDDFS is optimal like breadth-first search14 , but uses much
less memory; at each iteration, it visits the nodes15 in the search tree16 in the same order
as depth-first search, but the cumulative order in which nodes are first visited is effectively
breadth-first.

98.1 Algorithm for directed graphs

The following pseudocode shows IDDFS implemented in terms of a recursive depth-limited
DFS (called DLS) for directed graphs17 . This implementation of IDDFS does not account
for already-visited nodes and therefore does not work for undirected graphs18 .
function IDDFS(root) is
    for depth from 0 to ∞ do
        found, remaining ← DLS(root, depth)
        if found ≠ null then
            return found
        else if not remaining then
            return null

function DLS(node, depth) is
    if depth = 0 then
        if node is a goal then
            return (node, true)
        else
            return (null, true) (Not found, but may have children)
    else if depth > 0 then
        any_remaining ← false
        foreach child of node do
            found, remaining ← DLS(child, depth−1)
            if found ≠ null then
                return (found, true)
            if remaining then
                any_remaining ← true (At least one node found at depth, let IDDFS deepen)
        return (null, any_remaining)

If the goal node is found, then DLS unwinds the recursion returning with no further iter-
ations. Otherwise, if at least one node exists at that level of depth, the remaining flag will
let IDDFS continue.

11 https://en.wikipedia.org/wiki/Computer_science
12 https://en.wikipedia.org/wiki/State_space_search
13 https://en.wikipedia.org/wiki/Depth-first_search
14 https://en.wikipedia.org/wiki/Breadth-first_search
15 https://en.wikipedia.org/wiki/Node_(computer_science)
16 https://en.wikipedia.org/wiki/Search_tree
17 https://en.wikipedia.org/wiki/Directed_graph
18 https://en.wikipedia.org/wiki/Undirected_graph


2-tuples19 are useful as a return value to signal IDDFS whether to continue deepening or stop,
in case tree depth and goal membership are unknown a priori. Another solution could use sentinel
values20 instead to represent not-found or remaining-level results.
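As a concrete illustration, the pseudocode above translates fairly directly into Python. The sketch below is an assumption-laden rendering, not part of the original: the graph is given as a `children` adjacency mapping and the goal as an `is_goal` predicate, both illustrative names.

```python
from typing import Callable, Hashable, Mapping, Optional, Sequence, Tuple


def iddfs(root: Hashable,
          children: Mapping[Hashable, Sequence[Hashable]],
          is_goal: Callable[[Hashable], bool]) -> Optional[Hashable]:
    """Iterative deepening DFS; mirrors the IDDFS/DLS pseudocode above."""

    def dls(node: Hashable, depth: int) -> Tuple[Optional[Hashable], bool]:
        # Returns (found, remaining): the goal node if one is found at this
        # depth limit, and whether any node may still exist below the limit.
        if depth == 0:
            if is_goal(node):
                return node, True
            return None, True  # not found, but the node may have children
        any_remaining = False
        for child in children.get(node, ()):
            found, remaining = dls(child, depth - 1)
            if found is not None:
                return found, True
            if remaining:
                any_remaining = True  # a node exists at the limit: deepen
        return None, any_remaining

    depth = 0
    while True:
        found, remaining = dls(root, depth)
        if found is not None:
            return found
        if not remaining:
            return None  # the whole tree was exhausted without a goal
        depth += 1
```

As in the pseudocode, the `remaining` flag is what lets the search terminate with `None` on finite trees instead of deepening forever.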

98.2 Properties

IDDFS combines depth-first search's space-efficiency and breadth-first search's completeness
(when the branching factor21 is finite). If a solution exists, it will find a solution path with
the fewest arcs.[3]
the fewest arcs.[3]
Since iterative deepening visits states multiple times, it may seem wasteful, but it turns out
to be not so costly, since in a tree most of the nodes are in the bottom level, so it does not
matter much if the upper levels are visited multiple times.[4]
The main advantage of IDDFS in game tree22 searching is that the earlier searches tend
to improve the commonly used heuristics, such as the killer heuristic23 and alpha-beta
pruning24 , so that a more accurate estimate of the score of various nodes at the final depth
search can occur, and the search completes more quickly since it is done in a better order.
For example, alpha-beta pruning is most efficient if it searches the best moves first.[4]
A second advantage is the responsiveness of the algorithm. Because early iterations use
small values for d, they execute extremely quickly. This allows the algorithm to supply
early indications of the result almost immediately, followed by refinements as d increases.
When used in an interactive setting, such as in a chess25 -playing program, this facility
allows the program to play at any time with the current best move found in the search
it has completed so far. This can be phrased as each depth of the search corecursively26
producing a better approximation of the solution, though the work done at each step is
recursive. This is not possible with a traditional depth-first search, which does not produce
intermediate results.

98.3 Asymptotic analysis

98.3.1 Time complexity

The time complexity27 of IDDFS in a (well-balanced) tree works out to be the same as
breadth-first search, i.e. O(b^d),[1]:5 where b is the branching factor and d is the depth of the
goal.

19 https://en.wikipedia.org/wiki/Tuple
20 https://en.wikipedia.org/wiki/Sentinel_value
21 https://en.wikipedia.org/wiki/Branching_factor
22 https://en.wikipedia.org/wiki/Game_tree
23 https://en.wikipedia.org/wiki/Killer_heuristic
24 https://en.wikipedia.org/wiki/Alpha-beta_pruning
25 https://en.wikipedia.org/wiki/Chess
26 https://en.wikipedia.org/wiki/Corecursive
27 https://en.wikipedia.org/wiki/Time_complexity


Proof

In an iterative deepening search, the nodes at depth d are expanded once, those at depth
d − 1 are expanded twice, and so on up to the root of the search tree, which is expanded
d + 1 times.[1]:5 So the total number of expansions in an iterative deepening search is

b^d + 2b^(d−1) + 3b^(d−2) + · · · + (d − 1)b^2 + db + (d + 1) = ∑_{i=0}^{d} (d + 1 − i)b^i

where b^d is the number of expansions at depth d, 2b^(d−1) is the number of expansions at
depth d − 1, and so on. Factoring out b^d gives

b^d (1 + 2b^(−1) + 3b^(−2) + · · · + (d − 1)b^(2−d) + db^(1−d) + (d + 1)b^(−d))

Now let x = 1/b = b^(−1). Then we have

b^d (1 + 2x + 3x^2 + · · · + (d − 1)x^(d−2) + dx^(d−1) + (d + 1)x^d)

This is less than the infinite series

b^d (1 + 2x + 3x^2 + 4x^3 + · · · ) = b^d ∑_{n=1}^{∞} nx^(n−1)

which converges28 to

b^d (1 − x)^(−2) = b^d · 1/(1 − x)^2, for |x| < 1

That is, we have

b^d (1 + 2x + 3x^2 + · · · + (d − 1)x^(d−2) + dx^(d−1) + (d + 1)x^d) ≤ b^d (1 − x)^(−2), for |x| < 1

Since (1 − x)^(−2) = (1 − 1/b)^(−2) is a constant independent of d (the depth), if b > 1 (i.e., if the
branching factor is greater than 1), the running time of the depth-first iterative deepening
search is O(b^d).

Example

For b = 10 and d = 5 the number of expansions is

∑_{i=0}^{5} (5 + 1 − i)10^i = 6 + 50 + 400 + 3000 + 20000 + 100000 = 123456

All together, an iterative deepening search from depth 1 all the way down to depth d expands
only about 11% more nodes than a single breadth-first or depth-limited search to depth d,
when b = 10.[5]
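The arithmetic in the example is easy to verify mechanically. This short Python sketch (function names are illustrative) recomputes the expansion counts and the roughly 11% overhead:

```python
def iddfs_expansions(b: int, d: int) -> int:
    # Nodes at depth i are expanded (d + 1 - i) times, once per iteration
    # whose depth limit reaches them.
    return sum((d + 1 - i) * b**i for i in range(d + 1))


def single_search_expansions(b: int, d: int) -> int:
    # A single breadth-first or depth-limited search expands each node once.
    return sum(b**i for i in range(d + 1))


print(iddfs_expansions(10, 5))           # 123456
print(single_search_expansions(10, 5))   # 111111
print(iddfs_expansions(10, 5) / single_search_expansions(10, 5))  # ≈ 1.111
```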
The higher the branching factor, the lower the overhead of repeatedly expanded states,[1]:6
but even when the branching factor is 2, iterative deepening search only takes about twice

28 https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series


as long as a complete breadth-first search. This means that the time complexity of iterative
deepening is still O(b^d).

98.3.2 Space complexity

The space complexity29 of IDDFS is O(d),[1]:5 where d is the depth of the goal.

Proof

Since IDDFS, at any point, is engaged in a depth-first search, it need only store a stack of
nodes which represents the branch of the tree it is expanding. Since it finds a solution of
optimal length, the maximum depth of this stack is d, and hence the maximum amount of
space is O(d).
In general, iterative deepening is the preferred search method when there is a large search
space and the depth of the solution is not known.[4]

98.4 Example

For the following graph:

Figure 240

a depth-first search starting at A, assuming that the left edges in the shown graph are
chosen before right edges, and assuming the search remembers previously-visited nodes and
will not repeat them (since this is a small graph), will visit the nodes in the following order:
A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree30 , a structure
with important applications in graph theory31 .

29 https://en.wikipedia.org/wiki/Space_complexity
30 https://en.wikipedia.org/wiki/Tr%C3%A9maux_tree
31 https://en.wikipedia.org/wiki/Graph_theory


Performing the same search without remembering previously visited nodes results in visiting
nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E
cycle and never reaching C or G.
Iterative deepening prevents this loop and will reach the following nodes on the following
depths, assuming it proceeds left-to-right as above:
• 0: A
• 1: A, B, C, E
(Iterative deepening has now seen C, when a conventional depth-first search did not.)
• 2: A, B, D, F, C, G, E, F
(It still sees C, but now it comes later. It also sees E via a different path, and loops back to
F twice.)
• 3: A, B, D, F, E, C, G, E, F, B
For this graph, as more depth is added, the two cycles ”ABFE” and ”AEFB” will simply get
longer before the algorithm gives up and tries another branch.

98.5 Related algorithms

Similar to iterative deepening is a search strategy called iterative lengthening search32 that
works with increasing path-cost limits instead of depth-limits. It expands nodes in the order
of increasing path cost; therefore the first goal it encounters is the one with the cheapest
path cost. But iterative lengthening incurs substantial overhead that makes it less useful
than iterative deepening.[4]
Iterative deepening A*33 is a best-first search that performs iterative deepening based on
”f”-values similar to the ones computed in the A* algorithm34 .

98.5.1 Bidirectional IDDFS

IDDFS has a bidirectional counterpart,[1]:6 which alternates two searches: one starting from
the source node and moving along the directed arcs, and another one starting from the
target node and proceeding along the directed arcs in opposite direction (from the arc's
head node to the arc's tail node). The search process first checks that the source node
and the target node are same, and if so, returns the trivial path consisting of a single
source/target node. Otherwise, the forward search process expands the child nodes of the
source node (set A), the backward search process expands the parent nodes of the target
node (set B), and it is checked whether A and B intersect. If so, a shortest path is found.
Otherwise, the search depth is incremented and the same computation takes place.

https://en.wikipedia.org/w/index.php?title=Iterative_lengthening_search&action=edit&
32
redlink=1
33 https://en.wikipedia.org/wiki/Iterative_deepening_A*
34 https://en.wikipedia.org/wiki/A*_algorithm


One limitation of the algorithm is that a shortest path consisting of an odd number of arcs
will not be detected. Suppose we have a shortest path ⟨s, u, v, t⟩. When the depth reaches
two hops along the arcs, the forward search will proceed to v from u, and the backward
search will proceed from v to u. Pictorially, the search frontiers will pass through each other,
and instead a suboptimal path consisting of an even number of arcs will be returned. This
is illustrated in the diagrams below:

Figure 241 Bidirectional IDDFS

As for space complexity, the algorithm colors the deepest nodes in the forward
search process in order to detect the existence of the middle node where the two search processes
meet.
An additional difficulty in applying bidirectional IDDFS is that if the source and the target
nodes are in different strongly connected components, say s ∈ S and t ∈ T , and there is no arc
leaving S and entering T , the search will never terminate.


Time and space complexities

The running time of bidirectional IDDFS is given by

2 ∑_{k=0}^{n/2} b^k

and the space complexity is given by

b^(n/2),

where n is the number of nodes in the shortest s, t-path. Since the running time complexity
of iterative deepening depth-first search is ∑_{k=0}^{n} b^k, the speedup is roughly

∑_{k=0}^{n} b^k / (2 ∑_{k=0}^{n/2} b^k) = ((1 − b^(n+1))/(1 − b)) / (2(1 − b^(n/2+1))/(1 − b)) = (b^(n+1) − 1)/(2(b^(n/2+1) − 1)) ≈ b^(n+1)/(2b^(n/2+1)) = Θ(b^(n/2)) = Θ(√(b^n)).
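As a quick numeric sanity check of this speedup estimate, the following sketch compares the two costs for illustrative values b = 10, n = 6 (chosen for this example, not taken from the original):

```python
b, n = 10, 6  # branching factor and path length, chosen for illustration

unidirectional = sum(b**k for k in range(n + 1))          # IDDFS cost
bidirectional = 2 * sum(b**k for k in range(n // 2 + 1))  # two half-depth searches

speedup = unidirectional / bidirectional
print(speedup)            # ≈ 500, matching the Θ(b^(n/2)) estimate
print(b**(n // 2) / 2)    # b^(n/2)/2 = 500.0
```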

Pseudocode

function Build-Path(s, µ, B) is
    π ← Find-Shortest-Path(s, µ) (Recursively compute the path to the relay node)
    remove the last node from π
    return π ∘ B (Append the backward search stack)

function Depth-Limited-Search-Forward(u, ∆, F) is
    if ∆ = 0 then
        F ← F ∪ {u} (Mark the node)
        return
    foreach child of u do
        Depth-Limited-Search-Forward(child, ∆ − 1, F)

function Depth-Limited-Search-Backward(u, ∆, B, F) is
    prepend u to B
    if ∆ = 0 then
        if u in F then
            return u (Reached the marked node, use it as a relay node)
        remove the head node of B
        return null
    foreach parent of u do
        µ ← Depth-Limited-Search-Backward(parent, ∆ − 1, B, F)
        if µ ≠ null then
            return µ
    remove the head node of B
    return null

function Find-Shortest-Path(s, t) is
    if s = t then
        return <s>
    F, B, ∆ ← ∅, ∅, 0
    forever do
        Depth-Limited-Search-Forward(s, ∆, F)
        foreach δ = ∆, ∆ + 1 do
            µ ← Depth-Limited-Search-Backward(t, δ, B, F)
            if µ ≠ null then
                return Build-Path(s, µ, B) (Found a relay node)
        B ← ∅
        F, ∆ ← ∅, ∆ + 1

98.6 References
1. KORF, R E. (1985). ”D-  ”35 (PDF). Cite
journal requires |journal= (help36 )
2. K, R (1985). ”D- I-D: A O A-
 T S”. Artificial Intelligence. 27: 97–109. doi37 :10.1016/0004-
3702(85)90084-038 .
3. D P; A M. ”3.5.3 I D‣ Chapter 3
Searching for Solutions ‣Artificial Intelligence: Foundations of Computational Agents,
2nd Edition”39 . artint.info. Retrieved 29 November 2018.
4. R, S J.40 ; N, P41 (2003), Artificial Intelligence: A Mod-
ern Approach42 (2 .), U S R, N J: P H,
ISBN43 0-13-790395-244
5. R; N (1994). Artificial Intelligence: A Modern Approach.

35 https://cse.sc.edu/~mgv/csce580f09/gradPres/korf_IDAStar_1985.pdf
36 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
37 https://en.wikipedia.org/wiki/Doi_(identifier)
38 https://doi.org/10.1016%2F0004-3702%2885%2990084-0
39 https://artint.info/2e/html/ArtInt2e.Ch3.S5.SS3.html
40 https://en.wikipedia.org/wiki/Stuart_J._Russell
41 https://en.wikipedia.org/wiki/Peter_Norvig
42 http://aima.cs.berkeley.edu/
43 https://en.wikipedia.org/wiki/ISBN_(identifier)
44 https://en.wikipedia.org/wiki/Special:BookSources/0-13-790395-2

99 Johnson's algorithm

Johnson's algorithm

Class                   All-pairs shortest path problem (for weighted graphs)
Data structure          Graph
Worst-case performance  O(|V|^2 log |V| + |V||E|)

For the scheduling algorithm of the same name, see Job shop scheduling1 .

1 https://en.wikipedia.org/wiki/Job_shop_scheduling



Johnson's algorithm is a way to find the shortest paths2 between all pairs of vertices3
in an edge-weighted4 , directed graph5 . It allows some of the edge weights to be negative
numbers6 , but no negative-weight cycles7 may exist. It works by using the Bellman–Ford al-
gorithm8 to compute a transformation of the input graph that removes all negative weights,
allowing Dijkstra's algorithm9 to be used on the transformed graph.[1][2] It is named after
Donald B. Johnson10 , who first published the technique in 1977.[3]
A similar reweighting technique is also used in Suurballe's algorithm11 for finding two dis-
joint paths of minimum total length between the same two vertices in a graph with non-
negative edge weights.[4]

99.1 Algorithm description

Johnson's algorithm consists of the following steps:[1][2]


1. First, a new node12 q is added to the graph, connected by zero-weight edges13 to each
of the other nodes.
2. Second, the Bellman–Ford algorithm14 is used, starting from the new vertex q, to find
for each vertex v the minimum weight h(v) of a path from q to v. If this step detects
a negative cycle, the algorithm is terminated.
3. Next the edges of the original graph are reweighted using the values computed by the
Bellman–Ford algorithm: an edge from u to v, having length w(u, v), is given the new
length w(u,v) + h(u) − h(v).
4. Finally, q is removed, and Dijkstra's algorithm15 is used to find the shortest paths
from each node s to every other vertex in the reweighted graph.
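The four steps above can be sketched in Python. This is an illustrative implementation under stated assumptions (the `johnson` function name and the graph-as-nested-dict representation are mine, not from the original), and it uses an ordinary binary heap rather than the Fibonacci heap assumed by the stated running-time bound:

```python
import heapq
from math import inf


def johnson(graph):
    """All-pairs shortest paths for a weighted digraph given as
    {u: {v: weight, ...}, ...}.  A sketch of the four steps above, not a
    tuned implementation; raises ValueError on a negative-weight cycle."""
    nodes = list(graph)

    # Steps 1-2: Bellman-Ford from a virtual node q that has a zero-weight
    # edge to every node; initializing h(v) = 0 plays the role of relaxing
    # those edges, so at most |V| further rounds are needed.
    h = {v: 0 for v in nodes}
    for _ in range(len(nodes)):
        changed = False
        for u in nodes:
            for v, w in graph[u].items():
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
                    changed = True
        if not changed:
            break
    else:
        # A |V|-th round still relaxed an edge: negative cycle.
        raise ValueError("graph contains a negative-weight cycle")

    # Step 3: reweight each edge to w(u, v) + h(u) - h(v), which is
    # non-negative.  Step 4: run Dijkstra from every source on the
    # reweighted graph, then undo the reweighting on the distances.
    dist = {}
    for s in nodes:
        d = {v: inf for v in nodes}
        d[s] = 0
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:
                continue  # stale heap entry
            for v, w in graph[u].items():
                nd = du + w + h[u] - h[v]
                if nd < d[v]:
                    d[v] = nd
                    heapq.heappush(pq, (nd, v))
        dist[s] = {v: d[v] - h[s] + h[v] if d[v] < inf else inf
                   for v in nodes}
    return dist
```

For example, on `{'x': {'y': 3, 'z': 8}, 'y': {'z': -4}, 'z': {}}` the sketch reports the x-to-z distance as −1, via the negative edge, rather than the direct weight 8.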

99.2 Example

The first three stages of Johnson's algorithm are depicted in the illustration below.

2 https://en.wikipedia.org/wiki/Shortest_path
3 https://en.wikipedia.org/wiki/All-pairs_shortest_path_problem
4 https://en.wikipedia.org/wiki/Weighted_graph
5 https://en.wikipedia.org/wiki/Directed_graph
6 https://en.wikipedia.org/wiki/Negative_number
7 https://en.wikipedia.org/wiki/Cycle_(graph_theory)
8 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
9 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
10 https://en.wikipedia.org/wiki/Donald_B._Johnson
11 https://en.wikipedia.org/wiki/Suurballe%27s_algorithm
12 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
13 https://en.wikipedia.org/wiki/Edge_(graph_theory)
14 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
15 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm


Figure 242

The graph on the left of the illustration has two negative edges, but no negative cycles. At
the center is shown the new vertex q, a shortest path tree as computed by the Bellman–Ford
algorithm with q as starting vertex, and the values h(v) computed at each other node as the
length of the shortest path from q to that node. Note that these values are all non-positive,
because q has a length-zero edge to each vertex and the shortest path can be no longer
than that edge. On the right is shown the reweighted graph, formed by replacing each
edge weight w(u, v) by w(u,v) + h(u) − h(v). In this reweighted graph, all edge weights
are non-negative, but the shortest path between any two nodes uses the same sequence
of edges as the shortest path between the same two nodes in the original graph. The
algorithm concludes by applying Dijkstra's algorithm to each of the four starting nodes in
the reweighted graph.

99.3 Correctness

In the reweighted graph, all paths between a pair s and t of nodes have the same quantity
h(s) − h(t) added to them. The previous statement can be proven as follows: Let p be an
s − t path. Its weight W in the reweighted graph is given by the following expression:

(w(s, p1) + h(s) − h(p1)) + (w(p1, p2) + h(p1) − h(p2)) + ... + (w(pn, t) + h(pn) − h(t)).

Every +h(pi) is cancelled by −h(pi) in the previous bracketed expression; therefore, we are
left with the following expression for W:

(w(s, p1) + w(p1, p2) + ... + w(pn, t)) + h(s) − h(t)

The bracketed expression is the weight of p in the original weighting.
Since the reweighting adds the same amount to the weight of every s − t path, a path is a
shortest path in the original weighting if and only if it is a shortest path after reweighting.
The weight of edges that belong to a shortest path from q to any node is zero, and therefore
the lengths of the shortest paths from q to every node become zero in the reweighted graph;
however, they still remain shortest paths. Therefore, there can be no negative edges: if
edge uv had a negative weight after the reweighting, then the zero-length path from q to


u together with this edge would form a negative-length path from q to v, contradicting
the fact that all vertices have zero distance from q. The non-existence of negative edges
ensures the optimality of the paths found by Dijkstra's algorithm. The distances in the
original graph may be calculated from the distances calculated by Dijkstra's algorithm in
the reweighted graph by reversing the reweighting transformation.[1]

99.4 Analysis

The time complexity16 of this algorithm, using Fibonacci heaps17 in the implementation of
Dijkstra's algorithm, is O(|V|^2 log |V| + |V||E|): the algorithm uses O(|V||E|) time for the
Bellman–Ford stage of the algorithm, and O(|V| log |V| + |E|) for each of the |V| instantiations
of Dijkstra's algorithm. Thus, when the graph is sparse18 , the total time can be faster
than the Floyd–Warshall algorithm19 , which solves the same problem in time O(|V|^3).[1]

99.5 References
1. C, T H.20 ; L, C E.21 ; R, R L.22 ; S,
C23 (2001), Introduction to Algorithms24 , MIT P  MG-H,
ISBN25 978-0-262-03293-326 . Section 25.3, ”Johnson's algorithm for sparse graphs”,
pp. 636–640.
2. B, P E. (2004), ”J' A”, Dictionary of Algorithms and
Data Structures27 , N I  S  T28 .
3. J, D B.29 (1977), ”E   -
    ”, Journal of the ACM30 , 24 (1): 1–13,
doi31 :10.1145/321992.32199332 .
4. S, J. W. (1974), ”D    ”, Networks, 14 (2):
125–145, doi33 :10.1002/net.323004020434 .

16 https://en.wikipedia.org/wiki/Time_complexity
17 https://en.wikipedia.org/wiki/Fibonacci_heap
18 https://en.wikipedia.org/wiki/Sparse_graph
19 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
20 https://en.wikipedia.org/wiki/Thomas_H._Cormen
21 https://en.wikipedia.org/wiki/Charles_E._Leiserson
22 https://en.wikipedia.org/wiki/Ron_Rivest
23 https://en.wikipedia.org/wiki/Clifford_Stein
24 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
25 https://en.wikipedia.org/wiki/ISBN_(identifier)
26 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03293-3
27 https://xlinux.nist.gov/dads/HTML/johnsonsAlgorithm.html
28 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
29 https://en.wikipedia.org/wiki/Donald_B._Johnson
30 https://en.wikipedia.org/wiki/Journal_of_the_ACM
31 https://en.wikipedia.org/wiki/Doi_(identifier)
32 https://doi.org/10.1145%2F321992.321993
33 https://en.wikipedia.org/wiki/Doi_(identifier)
34 https://doi.org/10.1002%2Fnet.3230040204


99.6 External links


• Boost: All Pairs Shortest Paths35

35 http://www.boost.org/doc/libs/1_40_0/libs/graph/doc/johnson_all_pairs_shortest.html

100 Journal of Graph Algorithms and
Applications

Academic journal

Journal of Graph Algorithms and Applications


Discipline Graph algorithms
Language English
Edited by Giuseppe Liotta
Publication details
History 1997–present
Open access Yes
ISO 4 abbreviation  J. Graph Algorithms Appl.
ISSN                1526-1719
LCCN                2007208750
OCLC no.            605074748
Links

• Journal homepage
• Online access

The Journal of Graph Algorithms and Applications is an open access1 peer-reviewed2
scientific journal3 covering the subject of graph algorithms4 and graph drawing5 .[1][2] The

1 https://en.wikipedia.org/wiki/Open_access
2 https://en.wikipedia.org/wiki/Peer-reviewed
3 https://en.wikipedia.org/wiki/Scientific_journal
4 https://en.wikipedia.org/wiki/Graph_algorithm
5 https://en.wikipedia.org/wiki/Graph_drawing


journal was established in 1997 and the editor-in-chief6 is Giuseppe Liotta (University of
Perugia7 ).[3] It is abstracted and indexed by Scopus8 and MathSciNet9 .[4]

100.1 References
1. H, I; M, G; M, M. S (2000). ”G -
     : A ”10 . IEEE
Transactions on Information Visualization and Computer Graphics. 6 (1): 24–43.
doi11 :10.1109/2945.84111912 .
2. W, P C; C, G.; F, H.; M, P.; T, J.
(2006), ”H G – A V A F  L S-
 G”13 (PDF), IEEE Symposium on Visual Analytics Science and Tech-
nology, pp. 67–74, CiteSeerX14 10.1.1.452.483615 , doi16 :10.1109/VAST.2006.26143217 ,
ISBN18 978-1-4244-0591-619
3. ”J  G A  A”20 . J.. R-
 2014-09-27.
4. Journal Information for ”Journal of Graph Algorithms and Applica-
22
tions”21[permanent dead link ] , MathSciNet, retrieved 2011-03-02.

100.2 External links


• Official website23

6 https://en.wikipedia.org/wiki/Editor-in-chief
7 https://en.wikipedia.org/wiki/University_of_Perugia
8 https://en.wikipedia.org/wiki/Scopus
9 https://en.wikipedia.org/wiki/MathSciNet
10 http://cumincad.architexturez.net//doc/oai-cumincadworks-id-b836
11 https://en.wikipedia.org/wiki/Doi_(identifier)
12 https://doi.org/10.1109%2F2945.841119
13 http://www.purdue.edu/discoverypark/vaccine/publications/pdf/Have%20Green.pdf
14 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
15 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.452.4836
16 https://en.wikipedia.org/wiki/Doi_(identifier)
17 https://doi.org/10.1109%2FVAST.2006.261432
18 https://en.wikipedia.org/wiki/ISBN_(identifier)
19 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4244-0591-6
20 http://jgaa.info/index.jsp
21 http://www.ams.org/mathscinet/search/journaldoc.html?jc=JGRAA
23 http://jgaa.info/

101 Jump point search


In computer science1 , jump point search (JPS) is an optimization to the A* search al-
gorithm2 for uniform-cost grids. It reduces symmetries in the search procedure by means
of graph pruning,[1] eliminating certain nodes in the grid based on assumptions that can be
made about the current node's neighbors, as long as certain conditions relating to the grid
are satisfied. As a result, the algorithm can consider long ”jumps” along straight (horizontal,
vertical and diagonal) lines in the grid, rather than the small steps from one grid position
to the next that ordinary A* considers.[2]
Jump point search preserves A*'s optimality, while potentially reducing its running time by
an order of magnitude.[1]

101.1 History

Harabor and Grastien's original publication provides algorithms for neighbour pruning
and identifying successors.[1] The original algorithm for neighbour pruning allowed corner-cutting
to occur, which meant the algorithm could only be used for moving agents with
zero width, limiting its applicability in real-life agents (e.g. robotics) and simulations
(e.g. many games).
The authors presented modified pruning rules for applications where corner-cutting is not
allowed the following year.[3] This paper also presents an algorithm for pre-processing a grid
in order to minimise online search times.
A number of further optimisations were published by the authors in 2014.[4] These optimiza-
tions include exploring columns or rows of nodes instead of individual nodes, pre-computing
”jumps” on the grid, and stronger pruning rules.

101.2 Future work

Although jump point search is limited to uniform-cost grids and homogeneously sized agents,
the authors plan future research on combining JPS with existing grid-based speed-up
techniques such as hierarchical grids.[4][5]

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/A*_search_algorithm


101.3 References
1. D. H; A. G (2011). Online Graph Pruning for Pathfinding on Grid
Maps3 (PDF). 25 N C  A I. AAAI.
2. W, N (5 M 2013). ”J P S E”4 . zerowidth
positive lookahead. Retrieved 9 March 2014.
3. D. H; A. G (2012). The JPS Pathfinding System5 . 26 N
C  A I. AAAI.
4. H, D; G, A. ”I J P S”6
(PDF). Australian National University College of Engineering and Computer Science.
Association for the Advancement of Artificial Intelligence (www.aaai.org). Retrieved
11 July 2015.
5. A B; M M (2004). ”N O H P-
F”7 (PDF). University of Alberta. University of Alberta.

3 http://users.cecs.anu.edu.au/~dharabor/data/papers/harabor-grastien-aaai11.pdf
4 http://zerowidth.com/2013/05/05/jump-point-search-explained.html
5 http://www.aaai.org/ocs/index.php/SOCS/SOCS12/paper/viewFile/5396/5212
6 http://users.cecs.anu.edu.au/~dharabor/data/papers/harabor-grastien-icaps14.pdf
7 https://webdocs.cs.ualberta.ca/~mmueller/ps/hpastar.pdf

102 k shortest path routing

The k shortest path routing problem is a generalization of the shortest path routing
problem1 in a given network.2 It asks not only for a shortest path but also for the
next k − 1 shortest paths (which may be longer than the shortest path). A variation of the
problem is the loopless3 k shortest paths.
Finding k shortest paths is possible by extending the Dijkstra algorithm4 or the Bellman–Ford
algorithm5 to find more than one path.

102.1 History

Since 1957 many papers have been published on the k shortest path routing problem. Most of
the fundamental work was done between the 1960s and 2001. Since then, most of the research
has been on the problem's applications and its variants. In 2010, Michael Günther et al.
published a book on Symbolic calculation of k-shortest paths and related measures with the
stochastic process algebra tool CASPA.[1]

102.2 Algorithm

The Dijkstra algorithm6 can be generalized to find the k shortest paths.

1 https://en.wikipedia.org/wiki/Shortest-path_routing
2 https://en.wikipedia.org/wiki/Network_theory
3 https://en.wikipedia.org/wiki/Loopless_algorithm
4 https://en.wikipedia.org/wiki/Dijkstra_algorithm
5 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
6 https://en.wikipedia.org/wiki/Dijkstra_algorithm


Definitions:
• G(V, E): weighted directed graph, with set of vertices7 V and set of directed edges8 E,
• w(u, v): cost of directed edge from node u to node v (costs are non-negative).
Links that do not satisfy constraints on the shortest path are removed from the graph.
• s: the source node
• t: the destination node
• K: the number of shortest paths to find
• Pu : a path from s to u
• B is a heap data structure containing paths
• P: set of shortest paths from s to t
• countu : number of shortest paths found to node u
Algorithm:
P = empty
countu = 0, for all u in V
insert path Ps = {s} into B with cost 0
while B is not empty and countt < K:
    – let Pu be the shortest cost path in B with cost C
    – B = B − {Pu }, countu = countu + 1
    – if u = t then P = P ∪ {Pu }
    – if countu ≤ K then
        • for each vertex v adjacent to u:
            – let Pv be a new path with cost C + w(u, v) formed by concatenating edge (u, v) to path Pu
            – insert Pv into B
return P
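The heap-driven loop above translates almost line for line into Python. The following is a minimal sketch, not a production implementation: the adjacency-list format (`node -> list of (neighbor, weight)` pairs) is an assumed representation, and each candidate path is stored explicitly, so memory grows with K:

```python
import heapq

def k_shortest_paths(graph, s, t, K):
    """Find up to K shortest s-t paths (paths may revisit nodes).

    graph: dict mapping each node to a list of (neighbor, weight)
    pairs with non-negative weights (an assumed representation).
    Returns a list of (cost, path) pairs in non-decreasing cost order.
    """
    P = []                            # shortest s-t paths found so far
    count = {u: 0 for u in graph}     # how often each node was settled
    B = [(0, [s])]                    # heap of (cost, path) candidates

    while B and count[t] < K:
        cost, path = heapq.heappop(B)     # cheapest candidate path
        u = path[-1]
        count[u] += 1
        if u == t:
            P.append((cost, path))
        if count[u] <= K:                 # only expand up to K times
            for v, w in graph[u]:
                heapq.heappush(B, (cost + w, path + [v]))
    return P
```

Note that paths found this way may revisit nodes; the loopless variant requires the extra bookkeeping of Yen's algorithm.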

102.3 Variations

There are two main variations of the k shortest path routing problem. In one variation,
paths are allowed to visit the same node more than once, thus creating loops. In another
variation, paths are required to be simple and loopless9 . The loopy version is solvable using
Eppstein's algorithm[2] and the loopless variation is solvable by Yen's algorithm10 .[3][4]

102.3.1 Loopy variant

In this variant, the problem is simplified by not requiring paths to be loopless.[4] A solution
was given by B. L. Fox in 1975 in which the k-shortest paths are determined in O(m +
kn log n) asymptotic time complexity11 (using big O notation12 ).[5] In 1998, David Eppstein13
reported an approach that maintains an asymptotic complexity of O(m + n log n +

7 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
8 https://en.wikipedia.org/wiki/Edge_(geometry)
9 https://en.wikipedia.org/wiki/Simple_graph
10 https://en.wikipedia.org/wiki/Yen%27s_algorithm
11 https://en.wikipedia.org/wiki/Asymptotic_time_complexity
12 https://en.wikipedia.org/wiki/Big_O_notation
13 https://en.wikipedia.org/wiki/David_Eppstein


k) by computing an implicit representation of the paths, each of which can be output in


O(n) extra time.[2][4] In 2007, John Hershberger14 and Subhash Suri15 proposed a replace-
ment paths algorithm, a more efficient implementation of Eppstein's algorithm with O(n)
improvement in time.[6] In 2015, Akiba et al. devised an indexing method as a significantly
faster alternative for Eppstein's algorithm, in which a data structure called an index is
constructed from a graph and then top-k distances between arbitrary pairs of vertices can
be rapidly obtained.[7]

102.3.2 Loopless variant

In the loopless variant, the paths are forbidden to contain loops which adds an additional
level of complexity.[4] It can be solved using Yen's algorithm16[3][4] to find the lengths of
all shortest paths from a fixed node to all other nodes in an n-node non negative-distance
network, a technique requiring only 2n² additions and n² comparisons, fewer than other
available shortest path algorithms17 need. The running time complexity is pseudo-polynomial18 ,
being O(kn(m + n log n)) (where m and n represent the number of edges and vertices,
respectively).[3][4]

102.4 Some examples and description

102.4.1 Example #1

The following example makes use of Yen’s model to find k shortest paths between commu-
nicating end nodes. That is, it finds a shortest path, second shortest path, etc. up to the
Kth shortest path. More details can be found here19 . The code provided in this example
attempts to solve the k shortest path routing problem for a 15-node network containing a
combination of unidirectional and bidirectional links:

14 https://en.wikipedia.org/wiki/John_Hershberger
15 https://en.wikipedia.org/wiki/Subhash_Suri
16 https://en.wikipedia.org/wiki/Yen%27s_algorithm
17 https://en.wikipedia.org/wiki/Shortest_path_algorithms
18 https://en.wikipedia.org/wiki/Pseudo-polynomial_time
19 http://www.technical-recipes.com/2012/the-k-shortest-paths-algorithm-in-c/#more-2432


Figure 243 15-node network containing a combination of bi-directional and
uni-directional links

102.4.2 Example #2

Another example is the use of the k shortest paths algorithm to track multiple objects. The
technique implements a multiple object tracker based on the k shortest paths routing algo-
rithm. A set of probabilistic occupancy maps, produced by an object detector, is used as
input.
The complete details can be found at ”Computer Vision Laboratory20 – CVLAB”.

102.4.3 Example #3

Another use of k shortest paths algorithms is to design a transit network that enhances
passengers' experience in public transportation systems. Such a transit network can be
constructed by taking travel time into consideration. In addition to travel time, other
conditions may be considered, depending on economic and geographical limitations. Despite
variations in parameters, k shortest path algorithms find near-optimal solutions that satisfy
almost all user needs. Such applications of k shortest path algorithms are becoming common;
recently, Xu, He, Song, and Chaudhry (2012) studied the k shortest path problems in transit
network systems. [8]

20 http://cvlab.epfl.ch/software/ksp/


102.5 Applications

The k shortest path routing is a good alternative for:


• Geographic path planning21
• Network routing, especially in optical mesh network22 where there are additional con-
straints that cannot be solved by using ordinary shortest path algorithms23 .
• Hypothesis generation in computational linguistics24
• Sequence alignment and metabolic pathway finding in bioinformatics25
• Multiple object tracking26 as described above
• Road Networks: road junctions are the nodes (vertices) and each edge (link) of the graph
is associated with a road segment between two junctions.

102.6 Related problems


• The breadth-first search algorithm27 is used when the search is only limited to two oper-
ations.
• The Floyd–Warshall algorithm28 solves all pairs shortest paths.
• Johnson's algorithm29 solves all pairs' shortest paths, and may be faster than Floyd–
Warshall on sparse graphs30 .
• Perturbation theory31 finds (at worst) the locally shortest path.
Cherkassky et al.[9] provide more algorithms and associated evaluations.

102.7 See also


• Constrained shortest path routing32

102.8 Notes
1. Michael Günther et al.: “Symbolic calculation of k-shortest paths and related measures
with the stochastic process algebra tool CASPA”. In: Int’l Workshop on Dynamic
Aspects in Dependability Models for Fault-Tolerant Systems (DYADEM-FTS), ACM
Press (2010) 13–18.

21 http://raweb.inria.fr/rapportsactivite/RA2009/aspi/uid62.html
22 https://en.wikipedia.org/wiki/Optical_mesh_network
23 https://en.wikipedia.org/wiki/Shortest_path_algorithms
24 https://en.wikipedia.org/wiki/Computational_linguistics
http://bioinformatics.oxfordjournals.org/content/21/16/3401.full.pdf?keytype=ref&
25
ijkey=LBKAnjRh0mW0xP4
https://web.archive.org/web/20130108024800/http://cvlab.epfl.ch/publications/
26
publications/2011/BerclazFTF11.pdf
27 https://en.wikipedia.org/wiki/Breadth-first_search
28 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
29 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
30 https://en.wikipedia.org/wiki/Sparse_graph
31 https://en.wikipedia.org/wiki/Perturbation_theory
32 https://en.wikipedia.org/wiki/Constrained_Shortest_Path_First


2. E, D33 (1998). ”F  k Shortest Paths”34 (PDF). SIAM J.


Comput.35 28 (2): 652–673. doi36 :10.1137/S009753979529047737 .
3. Y, J. Y. (1971). ”F  k-Shortest Loopless Paths in a Network”. Man-
agement Science38 . 1 7 (11): 712–716. doi39 :10.1287/mnsc.17.11.71240 ..
4. B, E; E, G; L, J-F; R-
, R (2007). ”P R − P 2: H”41 . Path Routing in
Mesh Optical Networks. John Wiley & Sons42 . pp. 125–138. ISBN43 978047001565044 .
5. F, B. L. (1975). ”Kth shortest paths and applications to the probabilistic net-
works”. ORSA/TIMS Joint National Meeting45 . 23: B263. CiNii National Article
ID46 : 10012857200.
6. H, J47 ; M, M; S, S48 (2007). ”F-
  k Shortest Simple Paths: A New Algorithm and its Implementation”49
(PDF). ACM Transactions on Algorithms50 . 3 (4). Article 45 (19 pages).
doi51 :10.1145/1290672.129068252 .
7. A, T; H, T; N, N; I, Y; Y,
Y (J 2015). ”E T-k Shortest-Path Distance Queries on Large
Networks by Pruned Landmark Labeling”53 . Proceedings of the Twenty-Ninth AAAI
Conference on Artificial Intelligence54 . A, TX: A   A-
  A I55 . . 2–8.
8. Xu, W., He, S., Song, R., & Chaudhry, S. (2012). Finding the k shortest paths in a
schedule-based transit network. Computers & Operations Research, 39(8), 1812-1826.
doi:10.1016/j.cor.2010.02.005
9. Cherkassky, Boris V.; Goldberg, Andrew V.56 ; Radzik, Tomasz (1996). ”Shortest
paths algorithms: theory and experimental evaluation”. Mathematical Programming.
Ser. A 73 (2): 129–174.

33 https://en.wikipedia.org/wiki/David_Eppstein
34 https://www.ics.uci.edu/~eppstein/pubs/Epp-SJC-98.pdf
35 https://en.wikipedia.org/wiki/SIAM_J._Comput.
36 https://en.wikipedia.org/wiki/Doi_(identifier)
37 https://doi.org/10.1137%2FS0097539795290477
38 https://en.wikipedia.org/wiki/Management_Science_(journal)
39 https://en.wikipedia.org/wiki/Doi_(identifier)
40 https://doi.org/10.1287%2Fmnsc.17.11.712
41 https://books.google.com/books?id=zSSjFf-jZT8C&pg=PG129
42 https://en.wikipedia.org/wiki/John_Wiley_%26_Sons
43 https://en.wikipedia.org/wiki/ISBN_(identifier)
44 https://en.wikipedia.org/wiki/Special:BookSources/9780470015650
https://en.wikipedia.org/wiki/Institute_for_Operations_Research_and_the_Management_
45
Sciences
46 https://en.wikipedia.org/wiki/CiNii
47 https://en.wikipedia.org/wiki/John_Hershberger
48 https://en.wikipedia.org/wiki/Subhash_Suri
49 https://archive.siam.org/meetings/alenex03/Abstracts/jhershberger.pdf
50 https://en.wikipedia.org/wiki/ACM_Transactions_on_Algorithms
51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.1145%2F1290672.1290682
53 https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9320/9217
54 https://www.aaai.org/ocs/index.php/AAAI/AAAI15/schedConf/presentations
https://en.wikipedia.org/wiki/Association_for_the_Advancement_of_Artificial_
55
Intelligence
56 https://en.wikipedia.org/wiki/Andrew_V._Goldberg


102.9 External links


• Implementation of Yen's algorithm57
• The k shortest paths algorithm in C++58
• Multiple objects tracking technique using K-shortest path algorithm59
• Computer Vision Laboratory60

57 https://code.google.com/p/k-shortest-paths/
58 http://www.technical-recipes.com/2012/the-k-shortest-paths-algorithm-in-c/#more-2432
59 http://cvlab.epfl.ch/software/ksp/
60 http://cvlab.epfl.ch/software/ksp/

103 Karger's algorithm

Figure 244 A graph and two of its cuts. The dotted line in red is a cut with three
crossing edges. The dashed line in green is a min-cut of this graph, crossing only two
edges.

In computer science1 and graph theory2 , Karger's algorithm is a randomized algorithm3


to compute a minimum cut4 of a connected graph5 . It was invented by David Karger6 and
first published in 1993.[1]

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Randomized_algorithm
4 https://en.wikipedia.org/wiki/Minimum_cut
5 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
6 https://en.wikipedia.org/wiki/David_Karger


The idea of the algorithm is based on the concept of contraction of an edge7 (u, v) in an
undirected graph G = (V, E). Informally speaking, the contraction of an edge merges the
nodes u and v into one, reducing the total number of nodes of the graph by one. All other
edges connecting either u or v are ”reattached” to the merged node, effectively producing
a multigraph8 . Karger's basic algorithm iteratively contracts randomly chosen edges until
only two nodes remain; those nodes represent a cut9 in the original graph. By iterating
this basic algorithm a sufficient number of times, a minimum cut can be found with high
probability.

103.1 The global minimum cut problem

Main article: Minimum cut10

A cut (S, T ) in an undirected graph G = (V, E) is a partition
of the vertices V into two non-empty, disjoint sets S ∪ T = V . The cutset of a cut consists
of the edges { uv ∈ E : u ∈ S, v ∈ T } between the two parts. The size (or weight) of a cut in
an unweighted graph is the cardinality of the cutset, i.e., the number of edges between the
two parts,
w(S, T ) = |{ uv ∈ E : u ∈ S, v ∈ T }| .
There are 2^|V| ways of choosing for each vertex whether it belongs to S or to T , but two
of these choices make S or T empty and do not give rise to cuts. Among the remaining
choices, swapping the roles of S and T does not change the cut, so each cut is counted twice;
therefore, there are 2^(|V|−1) − 1 distinct cuts. The minimum cut problem is to find a cut of
smallest size among these cuts.
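The count 2^(|V|−1) − 1 of distinct cuts can be checked by brute force for small vertex counts. This sketch identifies each unordered cut {S, T} with the side S that contains vertex 0:

```python
def count_distinct_cuts(n):
    """Count cuts of n labelled vertices by enumerating bitmask subsets.

    A cut {S, T} is counted exactly once by requiring vertex 0 ∈ S;
    the empty set and the full vertex set are excluded.
    """
    full = (1 << n) - 1
    count = 0
    for s in range(1, full):   # skip S = ∅ and S = V (no cut)
        if s & 1:              # vertex 0 ∈ S, so {S, T} counted once
            count += 1
    return count
```

For every n the enumeration agrees with the closed form, e.g. `count_distinct_cuts(4)` yields 7 = 2³ − 1.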
For weighted graphs with positive edge weights w : E → R⁺ the weight of the cut is the
sum of the weights of edges between vertices in each part,

w(S, T ) = ∑_{uv ∈ E : u ∈ S, v ∈ T} w(uv) ,

which agrees with the unweighted definition for w = 1.


A cut is sometimes called a “global cut” to distinguish it from an “s-t cut” for a given
pair of vertices, which has the additional requirement that s ∈ S and t ∈ T . Every global
cut is an s-t cut for some s, t ∈ V . Thus, the minimum cut problem can be solved in
polynomial time11 by iterating over all choices of s, t ∈ V and solving the resulting minimum
s-t cut problem using the max-flow min-cut theorem12 and a polynomial time algorithm
for maximum flow13 , such as the push-relabel algorithm14 , though this approach is not
optimal. Better deterministic algorithms for the global minimum cut problem include the
Stoer–Wagner algorithm15 , which has a running time of O(mn + n² log n).[2]

7 https://en.wikipedia.org/wiki/Edge_contraction
8 https://en.wikipedia.org/wiki/Multigraph
9 https://en.wikipedia.org/wiki/Cut_(graph_theory)
10 https://en.wikipedia.org/wiki/Minimum_cut
11 https://en.wikipedia.org/wiki/Polynomial_time
12 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
13 https://en.wikipedia.org/wiki/Maximum_flow
14 https://en.wikipedia.org/wiki/Push%E2%80%93relabel_maximum_flow_algorithm
15 https://en.wikipedia.org/wiki/Stoer%E2%80%93Wagner_algorithm


103.2 Contraction algorithm

The fundamental operation of Karger’s algorithm is a form of edge contraction16 . The


result of contracting the edge e = {u, v} is a new node uv. Every edge {w, u} or {w, v} for
w ∉ {u, v} to the endpoints of the contracted edge is replaced by an edge {w, uv} to the
new node. Finally, the contracted nodes u and v with all their incident edges are removed.
In particular, the resulting graph contains no self-loops. The result of contracting edge e is
denoted G/e.

Figure 245 The marked edge


is contracted into a single node.

The contraction algorithm repeatedly contracts random edges in the graph, until only two
nodes remain, at which point there is only a single cut.

Figure 246 Successful run of Karger’s algorithm on a 10-vertex graph. The minimum
cut has size 3.

procedure contract(G = (V, E)):
    while |V| > 2
        choose e ∈ E uniformly at random
        G ← G/e
    return the only cut in G

16 https://en.wikipedia.org/wiki/Edge_contraction


When the graph is represented using adjacency lists17 or an adjacency matrix18 , a single
edge contraction operation can be implemented with a linear number of updates to the data
structure, for a total running time of O(|V |2 ). Alternatively, the procedure can be viewed
as an execution of Kruskal’s algorithm19 for constructing the minimum spanning tree20 in
a graph where the edges have weights w(ei ) = π(i) according to a random permutation π.
Removing the heaviest edge of this tree results in two components that describe a cut. In
this way, the contraction procedure can be implemented like Kruskal’s algorithm21 in time
O(|E| log |E|).

Figure 247 The random edge choices in Karger’s algorithm correspond to an execution
of Kruskal’s algorithm on a graph with random edge ranks until only two components
remain.

The best known implementations use O(|E|) time and space, or O(|E| log |E|) time and
O(|V |) space, respectively.[1]

103.2.1 Success probability of the contraction algorithm

In a graph G = (V, E) with n = |V| vertices, the contraction algorithm returns a minimum
cut with polynomially small probability (n choose 2)⁻¹. Every graph has 2^(n−1) − 1 cuts,[3] among
which at most (n choose 2) can be minimum cuts. Therefore, the success probability for this algo-
rithm is much better than the probability for picking a cut at random, which is at most
(n choose 2)/(2^(n−1) − 1).
For instance, the cycle graph22 on n vertices has exactly (n choose 2) minimum cuts, given by every
choice of 2 edges. The contraction procedure finds each of these with equal probability.
To establish the bound on the success probability in general, let C denote the edges of a
specific minimum cut of size k. The contraction algorithm returns C if none of the random
edges belongs to the cutset of C. In particular, the first edge contraction avoids C, which

17 https://en.wikipedia.org/wiki/Adjacency_list
18 https://en.wikipedia.org/wiki/Adjacency_matrix
19 https://en.wikipedia.org/wiki/Kruskal%E2%80%99s_algorithm
20 https://en.wikipedia.org/wiki/Minimum_spanning_tree
21 https://en.wikipedia.org/wiki/Kruskal%E2%80%99s_algorithm
22 https://en.wikipedia.org/wiki/Cycle_graph


happens with probability 1 − k/|E|. The minimum degree of G is at least k (otherwise a
minimum degree vertex would induce a smaller cut), so |E| ≥ nk/2. Thus, the probability
that the contraction algorithm picks an edge from C is

k/|E| ≤ k/(nk/2) = 2/n .

The probability p_n that the contraction algorithm on an n-vertex graph avoids C satisfies
the recurrence p_n ≥ (1 − 2/n) p_{n−1}, with p_2 = 1, which can be expanded as

p_n ≥ ∏_{i=0}^{n−3} (1 − 2/(n−i)) = ∏_{i=0}^{n−3} (n−i−2)/(n−i)
    = (n−2)/n · (n−3)/(n−1) · (n−4)/(n−2) ⋯ 3/5 · 2/4 · 1/3
    = (n choose 2)⁻¹ = 2/(n(n−1)) .
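The telescoping product in this bound can be checked numerically against the closed form 2/(n(n−1)):

```python
from math import comb, isclose, prod

def contraction_survival_bound(n):
    """Lower bound ∏_{i=0}^{n-3} (1 - 2/(n - i)) on the probability that
    a run of the contraction algorithm avoids a fixed minimum cut.
    Telescopes to 1 / C(n, 2) = 2 / (n (n - 1))."""
    return prod(1 - 2 / (n - i) for i in range(n - 2))

# the product agrees with the binomial closed form for every n
for n in range(2, 20):
    assert isclose(contraction_survival_bound(n), 1 / comb(n, 2))
```

For n = 2 the product is empty and equals 1, matching p₂ = 1 above.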


103.2.2 Repeating the contraction algorithm

Figure 248 10 repetitions of the contraction procedure. The 5th repetition finds the
minimum cut of size 3.

By repeating the contraction algorithm T = (n choose 2) ln n times with independent random
choices and returning the smallest cut, the probability of not finding a minimum cut is

[1 − (n choose 2)⁻¹]^T ≤ 1/e^(ln n) = 1/n .


The total running time for T repetitions for a graph with n vertices and m edges is
O(T m) = O(n² m log n).

103.3 Karger–Stein algorithm

An extension of Karger’s algorithm due to David Karger23 and Clifford Stein24 achieves an
order of magnitude improvement.[4]
The basic idea is to perform the contraction procedure until the graph reaches t vertices.
procedure contract(G = (V, E), t):
    while |V| > t
        choose e ∈ E uniformly at random
        G ← G/e
    return G

The probability p_{n,t} that this contraction procedure avoids a specific cut C in an n-vertex
graph is

p_{n,t} ≥ ∏_{i=0}^{n−t−1} (1 − 2/(n−i)) = (t choose 2)/(n choose 2) .

This expression is approximately t²/n² and becomes less than 1/2 around t = n/√2. In
particular, the probability that an edge from C is contracted grows towards the end. This
motivates the idea of switching to a slower algorithm after a certain number of contraction
steps.
procedure fastmincut(G = (V, E)):
    if |V| ≤ 6:
        return mincut(V)
    else:
        t ← ⌈1 + |V|/√2⌉
        G1 ← contract(G, t)
        G2 ← contract(G, t)
        return min {fastmincut(G1), fastmincut(G2)}

23 https://en.wikipedia.org/wiki/David_Karger
24 https://en.wikipedia.org/wiki/Clifford_Stein
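The recursive scheme can be sketched in Python. This is a sketch under simplifying assumptions: the graph is assumed connected, and the base case substitutes a handful of plain contractions for an exact small-graph minimum cut:

```python
import math
import random

def contract_to(edges, vertices, t, rng):
    """Contract uniformly random edges until only t supernodes remain."""
    parent = {v: v for v in vertices}

    def find(x):                      # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = len(vertices)
    while remaining > t:
        u, v = rng.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                  # contract; self-loops are skipped
            parent[ru] = rv
            remaining -= 1
    # rebuild the contracted multigraph, dropping self-loops
    new_edges = [(find(u), find(v)) for u, v in edges if find(u) != find(v)]
    return new_edges, {find(v) for v in vertices}

def fastmincut(edges, vertices, rng):
    n = len(vertices)
    if n <= 6:
        # base case: repeated plain contraction stands in for mincut(V)
        return min(len(contract_to(edges, vertices, 2, rng)[0])
                   for _ in range(30))
    t = math.ceil(1 + n / math.sqrt(2))
    g1, v1 = contract_to(edges, vertices, t, rng)   # two independent
    g2, v2 = contract_to(edges, vertices, t, rng)   # contracted copies
    return min(fastmincut(g1, v1, rng), fastmincut(g2, v2, rng))
```

Every value returned is the size of some real cut, so repeating the whole procedure and taking the minimum only improves the estimate, exactly as in the analysis below.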

103.3.1 Analysis

The probability P(n) that the algorithm finds a specific cutset C is given by the recurrence
relation

P(n) = 1 − (1 − (1/2) P(⌈1 + n/√2⌉))²

with solution P(n) = Ω(1/log n). The running time of fastmincut satisfies

T(n) = 2 T(⌈1 + n/√2⌉) + O(n²)

with solution T(n) = O(n² log n). To achieve error probability O(1/n), the algorithm can
be repeated O(log n/P(n)) times, for an overall running time of T(n) · log n/P(n) = O(n² log³ n).
This is an order of magnitude improvement over Karger's original algorithm.
This is an order of magnitude improvement over Karger’s original algorithm.

103.3.2 Finding all min-cuts

Theorem: With high probability we can find all min cuts in the running time of O(n² ln³ n).

Proof: Since P(n) = Ω(1/ln n), say P(n) ≥ c/ln n, after running this algorithm O(ln² n) times
the probability of missing a specific min-cut is

Pr[miss a specific min-cut] = (1 − P(n))^{O(ln² n)} ≤ (1 − c/ln n)^{(3/c) ln² n} ≤ e^{−3 ln n} = 1/n³ .

And there are at most (n choose 2) min-cuts, hence the probability of missing any min-cut is

Pr[miss any min-cut] ≤ (n choose 2) · 1/n³ = O(1/n) .


The probability of failure is considerably small when n is large enough. ∎

103.3.3 Improvement bound

To determine a min-cut, one has to touch every edge in the graph at least once, which is
Θ(n²) time in a dense graph25 . The Karger–Stein min-cut algorithm runs in
O(n² ln^{O(1)} n) time, which is very close to that.

103.4 References
1. K, D (1993). ”G M-  RNC  O R
  S M A”26 . Proc. 4th Annual ACM-SIAM Symposium on
Discrete Algorithms.
2. S, M.; W, F. (1997). ”A  - ”. Journal of the
ACM. 44 (4): 585. doi27 :10.1145/263867.26387228 .
3. P, M; P, M (2001), ”T   
- ”,  B, A29 ; L, V B (.),
Graph-Theoretic Concepts in Computer Science: 27th International Workshop, WG
2001, Boltenhagen, Germany, June 14-16, 2001, Proceedings, Lecture Notes in
Computer Science, 2204, Berlin: Springer, pp. 284–295, doi30 :10.1007/3-540-45477-
2_2631 , MR32 190564033 .
4. K, D R.34 ; S, C35 (1996). ”A   
   ”36 (PDF). Journal of the ACM. 43 (4): 601.
doi37 :10.1145/234533.23453438 .

25 https://en.wikipedia.org/wiki/Dense_graph
26 http://people.csail.mit.edu/karger/Papers/mincut.ps
27 https://en.wikipedia.org/wiki/Doi_(identifier)
28 https://doi.org/10.1145%2F263867.263872
29 https://en.wikipedia.org/wiki/Andreas_Brandst%C3%A4dt
30 https://en.wikipedia.org/wiki/Doi_(identifier)
31 https://doi.org/10.1007%2F3-540-45477-2_26
32 https://en.wikipedia.org/wiki/MR_(identifier)
33 http://www.ams.org/mathscinet-getitem?mr=1905640
34 https://en.wikipedia.org/wiki/David_Karger
35 https://en.wikipedia.org/wiki/Clifford_Stein
36 http://www.columbia.edu/~cs2035/courses/ieor6614.S09/Contraction.pdf
37 https://en.wikipedia.org/wiki/Doi_(identifier)
38 https://doi.org/10.1145%2F234533.234534

104 Knight's tour

Figure 249 An open knight's tour of a chessboard


Figure 250 An animation of an open knight's tour on a 5 × 5 board

A knight's tour is a sequence of moves of a knight1 on a chessboard2 such that the knight
visits every square exactly once. If the knight ends on a square that is one knight's move
from the beginning square (so that it could tour the board again immediately, following the
same path), the tour is closed; otherwise, it is open.[1][2]
The knight's tour problem is the mathematical problem3 of finding a knight's tour.
Creating a program4 to find a knight's tour is a common problem given to computer science5

1 https://en.wikipedia.org/wiki/Knight_(chess)
2 https://en.wikipedia.org/wiki/Chessboard
3 https://en.wikipedia.org/wiki/Mathematical_chess_problem
4 https://en.wikipedia.org/wiki/Computer_program
5 https://en.wikipedia.org/wiki/Computer_science


students.[3] Variations of the knight's tour problem involve chessboards of different sizes than
the usual 8 × 8, as well as irregular (non-rectangular) boards.

104.1 Theory

Figure 251 Knight's graph showing all possible paths for a knight's tour on a standard
8 × 8 chessboard. The numbers on each node indicate the number of possible moves that
can be made from that position.


The knight's tour problem is an instance of the more general Hamiltonian path problem6 in
graph theory7 . The problem of finding a closed knight's tour is similarly an instance of the
Hamiltonian cycle problem8 . Unlike the general Hamiltonian path problem, the knight's
tour problem can be solved in linear time9 .[4]

104.2 History

Figure 252 The knight's tour as solved by the Turk, a chess-playing machine hoax.
This particular solution is closed (circular), and can thus be completed from any point on
the board.

6 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
7 https://en.wikipedia.org/wiki/Graph_theory
8 https://en.wikipedia.org/wiki/Hamiltonian_cycle_problem
9 https://en.wikipedia.org/wiki/Linear_time


The earliest known reference to the knight's tour problem dates back to the 9th century AD.
In Rudraṭa's10 Kavyalankara[5] (5.15), a Sanskrit work on Poetics, the pattern of a knight's
tour on a half-board has been presented as an elaborate poetic figure (citra-alaṅkāra) called
the turagapadabandha or 'arrangement in the steps of a horse'. The same verse in four lines
of eight syllables each can be read from left to right or by following the path of the knight
on tour. Since the Indic writing systems11 used for Sanskrit are syllabic, each syllable can
be thought of as representing a square on a chessboard. Rudrata's example is as follows:
से ना ली ली ली ना ना ली
ली ना ना ना ना ली ली ली
न ली ना ली ले ना ली ना
ली ली ली ना ना ना ना ली

transliterated:
se nā lī lī lī nā nā lī
lī nā nā nā nā lī lī lī
na lī nā lī le nā lī nā
lī lī lī nā nā nā nā lī

For example, the first line can be read from left to right or by moving from the first square
to the second line, third syllable (2.3) and then to 1.5 to 2.7 to 4.8 to 3.6 to 4.4 to 3.2.
During the 14th century, the Sri Vaishnava12 poet and philosopher Vedanta Desika13 , in his
1,008-verse magnum opus praising Lord Ranganatha14 's divine sandals of Srirangam15 , the
Paduka Sahasram (in chapter 30: Chitra Paddhati), composed two consecutive Sanskrit16
verses containing 32 letters each (in Anushtubh17 meter) where the second verse can be
derived from the first verse by performing a knight's tour on a 4 × 8 board, starting from
the top-left corner.[6] The transliterated 19th verse is as follows:

sThi(1) rA(30) ga(9) sAm(20) sa(3) dhA(24) rA(11) dhyA(26)
vi(16) ha(19) thA(2) ka(29) tha(10) thA(27) ma(4) thA(23)
sa(31) thpA(8) dhu(17) kE(14) sa(21) rA(6) sA(25) mA(12)
ran(18) ga(15) rA(32) ja(7) pa(28) dha(13) nna(22) ya(5)

The 20th verse that can be obtained by performing Knight's tour on the above verse is as
follows:
sThi thA sa ma ya rA ja thpA
ga tha rA mA dha kE ga vi |
dhu ran ha sAm sa nna thA dhA
sA dhyA thA pa ka rA sa rA ||

10 https://en.wikipedia.org/wiki/Rudrata
11 https://en.wikipedia.org/wiki/Indic_Writing_Systems
12 https://en.wikipedia.org/wiki/Sri_Vaishnavism
13 https://en.wikipedia.org/wiki/Vedanta_Desika
14 https://en.wikipedia.org/wiki/Ranganatha
15 https://en.wikipedia.org/wiki/Ranganathaswamy_Temple,_Srirangam
16 https://en.wikipedia.org/wiki/Sanskrit
17 https://en.wikipedia.org/wiki/Anu%E1%B9%A3%E1%B9%ADubh


It is believed that Desika composed all 1008 verses (including the special Chaturanga Tu-
ranga Padabandham mentioned above) in a single night as a challenge.[7]
The fifth book of the Bhagavantabhaskara by Bhat Nilakantha, a cyclopedic work in
Sanskrit on ritual, law and politics written either about 1600 or about 1700, describes
three knight's tours. The tours are not only reentrant but also symmetrical, and the
verses are based on the same tour, starting from different squares.[8] Bhat Nilakantha's
work is an extraordinary achievement: a fully symmetric closed tour, predating the work
of Euler (1759) by at least 60 years.
One of the first mathematicians to investigate the knight's tour was Leonhard Euler18 . The
first procedure for completing the knight's tour was Warnsdorff's rule, first described in
1823 by H. C. von Warnsdorff.
In the 20th century, the Oulipo19 group of writers used it, among many others. The most
notable example is the 10 × 10 knight's tour which sets the order of the chapters in Georges
Perec20 's novel Life a User's Manual21 .
The sixth game of the World Chess Championship 201022 between Viswanathan Anand23
and Veselin Topalov24 saw Anand making 13 consecutive knight moves (albeit using both
knights); online commentators jested that Anand was trying to solve the knight's tour
problem during the game.

18 https://en.wikipedia.org/wiki/Leonhard_Euler
19 https://en.wikipedia.org/wiki/Oulipo
20 https://en.wikipedia.org/wiki/Georges_Perec
21 https://en.wikipedia.org/wiki/Life_a_User%27s_Manual
22 https://en.wikipedia.org/wiki/World_Chess_Championship_2010
23 https://en.wikipedia.org/wiki/Viswanathan_Anand
24 https://en.wikipedia.org/wiki/Veselin_Topalov


104.3 Existence

Figure 253 A radially symmetric closed knight's tour

Schwenk[9] proved that for any m × n board with m ≤ n, a closed knight's tour is always
possible unless one or more of these three conditions are met:
1. m and n are both odd
2. m = 1, 2, or 4
3. m = 3 and n = 4, 6, or 8.
Cull et al. and Conrad et al. proved that on any rectangular board whose smaller dimension
is at least 5, there is a (possibly open) knight's tour.[4][10]
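Schwenk's three conditions are easy to check mechanically. A minimal Python sketch (the function name is my own) that decides whether a closed tour exists on an m × n board:

```python
# Schwenk's characterization: a closed knight's tour exists on an
# m x n board (with m <= n) unless one of three conditions holds.
def closed_tour_exists(m, n):
    m, n = min(m, n), max(m, n)       # normalize so that m <= n
    if m % 2 == 1 and n % 2 == 1:     # condition 1: both dimensions odd
        return False
    if m in (1, 2, 4):                # condition 2: m = 1, 2, or 4
        return False
    if m == 3 and n in (4, 6, 8):     # condition 3: m = 3 with n = 4, 6, 8
        return False
    return True
```

For example, this reports that the standard 8 × 8 board admits a closed tour, while a 3 × 8 board does not.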


104.4 Number of tours

On an 8 × 8 board, there are exactly 26,534,728,821,064 directed25 closed tours (i.e. two
tours along the same path that travel in opposite directions are counted separately, as are
rotations26 and reflections27 ).[11][12][13] The number of undirected28 closed tours is half this
number, since every tour can be traced in reverse. There are 9,862 undirected closed tours
on a 6 × 6 board.[14]

n    Number of directed tours (open and closed) on an n × n board
     (sequence A16513429 in the OEIS30 )
1    1
2    0
3    0
4    0
5    1,728
6    6,637,920
7    165,575,218,320
8    19,591,828,170,979,904
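The small entries of this table can be reproduced by exhaustive backtracking over all simple knight paths, counting every directed Hamiltonian path from every starting square (an illustrative sketch; this scales only to tiny boards):

```python
# Count directed tours (open and closed) on an n x n board by
# backtracking from every starting square. Each tour direction and
# each starting square is counted separately, matching the table.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def count_tours(n):
    # Precompute the legal moves from every square.
    nbrs = {(r, c): [(r + dr, c + dc) for dr, dc in MOVES
                     if 0 <= r + dr < n and 0 <= c + dc < n]
            for r in range(n) for c in range(n)}
    visited = set()

    def extend(sq, depth):
        if depth == n * n:          # every square visited: one full tour
            return 1
        total = 0
        for nxt in nbrs[sq]:
            if nxt not in visited:
                visited.add(nxt)
                total += extend(nxt, depth + 1)
                visited.remove(nxt)
        return total

    count = 0
    for start in nbrs:
        visited = {start}
        count += extend(start, 1)
    return count
```

`count_tours(5)` reproduces the 1,728 of the table; boards much beyond 5 × 5 are out of reach for this approach, as the brute-force discussion below explains.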

104.5 Finding tours with computers

There are several ways to find a knight's tour on a given board with a computer. Some of
these methods are algorithms31 while others are heuristics32 .

104.5.1 Brute-force algorithms

A brute-force search33 for a knight's tour is impractical on all but the smallest boards.[15] For
example, there are approximately 4×10⁵¹ possible move sequences on an 8 × 8 board,[16]
and it is well beyond the capacity of modern computers (or networks of computers) to
perform operations on such a large set. However, the size of this number is not indicative
of the difficulty of the problem, which can be solved "by using human insight and ingenuity
... without much difficulty."[15]

25 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#D
26 https://en.wikipedia.org/wiki/Rotation_(mathematics)
27 https://en.wikipedia.org/wiki/Reflection_(mathematics)
28 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#D
31 https://en.wikipedia.org/wiki/Algorithm
32 https://en.wikipedia.org/wiki/Heuristic
33 https://en.wikipedia.org/wiki/Brute-force_search


104.5.2 Divide-and-conquer algorithms

By dividing the board into smaller pieces, constructing tours on each piece, and patching
the pieces together, one can construct tours on most rectangular boards in linear time34 –
that is, in a time proportional to the number of squares on the board.[10][17]

104.5.3 Warnsdorff's rule


A graphical representation of Warnsdorff's Rule. Each square contains an integer giving
the number of moves that the knight could make from that square. In this case, the rule
tells us to move to the square with the smallest integer in it, namely 2.

34 https://en.wikipedia.org/wiki/Time_complexity#Linear_time


Figure 254 A very large (130 × 130) square open knight's tour created using
Warnsdorff's Rule

Warnsdorff's rule is a heuristic35 for finding a single knight's tour. The knight is moved so
that it always proceeds to the square from which the knight will have the fewest onward
moves. When calculating the number of onward moves for each candidate square, we do
not count moves that revisit any square already visited. It is possible to have two or more
choices for which the number of onward moves is equal; there are various methods for
breaking such ties, including one devised by Pohl[18] and another by Squirrel and Cull.[19]
This rule may also more generally be applied to any graph. In graph-theoretic terms, each
move is made to the adjacent vertex with the least degree36 .[20] Although the Hamiltonian

35 https://en.wikipedia.org/wiki/Heuristic
36 https://en.wikipedia.org/wiki/Degree_(graph_theory)


path problem37 is NP-hard38 in general, on many graphs that occur in practice this heuristic
is able to successfully locate a solution in linear time39 .[18] The knight's tour is such a special
case.[21]
The heuristic40 was first described in "Des Rösselsprungs einfachste und allgemeinste
Lösung" by H. C. von Warnsdorff in 1823.[21]
A computer program that finds a knight's tour for any starting position using Warnsdorff's
rule was written by Gordon Horsington and published in 1984 in the book Century/Acorn
User Book of Computer Puzzles.[22]
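The rule can be sketched in a few lines of Python. This is an illustrative implementation with naive first-wins tie-breaking, not Pohl's or Squirrel–Cull's refinements, so it can occasionally stall on some starting squares:

```python
# Warnsdorff's rule: always move to the unvisited square from which
# the knight would have the fewest onward moves.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def neighbours(square, n, visited):
    # Unvisited squares reachable by one knight's move on an n x n board.
    r, c = square
    return [(r + dr, c + dc) for dr, dc in MOVES
            if 0 <= r + dr < n and 0 <= c + dc < n
            and (r + dr, c + dc) not in visited]

def warnsdorff_tour(n, start=(0, 0)):
    visited = {start}
    tour = [start]
    while len(tour) < n * n:
        cands = neighbours(tour[-1], n, visited)
        if not cands:
            return None  # heuristic got stuck before covering the board
        # The rule: pick the candidate with the fewest onward moves
        # (ties broken by enumeration order in this sketch).
        nxt = min(cands, key=lambda s: len(neighbours(s, n, visited)))
        visited.add(nxt)
        tour.append(nxt)
    return tour

tour = warnsdorff_tour(8)  # a 64-square tour, or None if the rule stalls
```

On an 8 × 8 board this typically produces a complete tour while examining only a handful of candidates per step, in contrast to the brute-force search discussed above.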

37 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
38 https://en.wikipedia.org/wiki/NP-hardness
39 https://en.wikipedia.org/wiki/Linear_time
40 https://en.wikipedia.org/wiki/Heuristic


104.5.4 Neural network solutions

Figure 255 Closed knight's tour on a 24 × 24 board solved by a neural network

The knight's tour problem also lends itself to being solved by a neural network41
implementation.[23] The network is set up such that every legal knight's move is represented
by a neuron42 , and each neuron is initialized randomly to be either ”active” or ”inactive”
(output of 1 or 0), with 1 implying that the neuron is part of the solution. Each neuron
also has a state function (described below) which is initialized to 0.

41 https://en.wikipedia.org/wiki/Neural_network
42 https://en.wikipedia.org/wiki/Artificial_neuron


When the network is allowed to run, each neuron can change its state and output based on
the states and outputs of its neighbors (those exactly one knight's move away) according
to the following transition rules:

Ut+1 (Ni,j ) = Ut (Ni,j ) + 2 − Σ_{N ∈ G(Ni,j )} Vt (N)

Vt+1 (Ni,j ) = 1 if Ut+1 (Ni,j ) > 3,
              0 if Ut+1 (Ni,j ) < 0,
              Vt (Ni,j ) otherwise,

where t represents discrete intervals of time, U (Ni,j ) is the state of the neuron connecting
square i to square j, V (Ni,j ) is the output of the neuron from i to j, and G(Ni,j ) is the set
of neighbors of the neuron.
Although divergent cases are possible, the network should eventually converge, which occurs
when no neuron changes its state from time t to t + 1. When the network converges, the
network encodes either a knight's tour or a series of two or more independent circuits within
the same board.
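The update rule can be made concrete with a toy implementation (variable and function names are my own; to keep the run deterministic it initializes every neuron to zero rather than randomly, so it only illustrates the dynamics rather than performing a full tour search):

```python
# One neuron per legal knight's move (undirected edge) on an n x n board.
def knight_edges(n):
    moves = [(1, 2), (2, 1), (2, -1), (1, -2)]  # each edge enumerated once
    edges = []
    for r in range(n):
        for c in range(n):
            for dr, dc in moves:
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < n and 0 <= c2 < n:
                    edges.append(((r, c), (r2, c2)))
    return edges

def step(edges, U, V):
    # G(N): neurons sharing a board square with N, excluding N itself.
    newU, newV = {}, {}
    for e in edges:
        nbrs = [f for f in edges if f != e and (set(f) & set(e))]
        newU[e] = U[e] + 2 - sum(V[f] for f in nbrs)
        if newU[e] > 3:
            newV[e] = 1
        elif newU[e] < 0:
            newV[e] = 0
        else:
            newV[e] = V[e]
    return newU, newV

edges = knight_edges(5)
U = {e: 0 for e in edges}
V = {e: 0 for e in edges}
U, V = step(edges, U, V)   # every U becomes 2, outputs stay 0
U, V = step(edges, U, V)   # every U becomes 4 (> 3), outputs switch to 1
```

From the all-zero state, the first two updates activate every neuron; from the third step on, the subtracted neighbour term starts suppressing moves whose endpoints carry many active moves, which is the mechanism that prunes the move set toward degree-2 circuits.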

104.6 See also


• Abu Bakr bin Yahya al-Suli43
• George Koltanowski44
• Longest uncrossed knight's path45
• Eight queens puzzle46

104.7 Notes
1. Brown, Alfred James (2017). "Knight's Tours and Zeta Functions"47 (PDF). San José
State University. p. 3. Retrieved 2019-04-13.
2. Hooper, David48 ; Whyld, Kenneth49 (1996) [First pub. 1992]. "Knight's tour". The
Oxford Companion to Chess50 (2nd ed.). Oxford University Press51 .
p. 204. ISBN52 0-19-280049-353 .

43 https://en.wikipedia.org/wiki/Abu_Bakr_bin_Yahya_al-Suli
44 https://en.wikipedia.org/wiki/George_Koltanowski
45 https://en.wikipedia.org/wiki/Longest_uncrossed_knight%27s_path
46 https://en.wikipedia.org/wiki/Eight_queens_puzzle
47 https://scholarworks.sjsu.edu/cgi/viewcontent.cgi?article=8383&context=etd_theses
48 https://en.wikipedia.org/wiki/David_Vincent_Hooper
49 https://en.wikipedia.org/wiki/Kenneth_Whyld
50 https://en.wikipedia.org/wiki/The_Oxford_Companion_to_Chess
51 https://en.wikipedia.org/wiki/Oxford_University_Press
52 https://en.wikipedia.org/wiki/ISBN_(identifier)
53 https://en.wikipedia.org/wiki/Special:BookSources/0-19-280049-3


3. D, H. M.; D, P. J. (2003). Java How To Program Fifth Edition54 (5
.). P H55 . . 326–32856 . ISBN57 978-013101621758 .
4. C, A.; H, T.; M, H. & W, I. (1994). ”S 
 K' H P P  C”. Discrete Applied
Mathematics. 50 (2): 125–134. doi59 :10.1016/0166-218X(92)00170-Q60 .
5. S, C. Kavyalankara of Rudrata (Sanskrit text, with Hindi trans-
lation);61 . D: P S S N. 30.
6. ”I I  I T, B”62 .
www.iiitb.ac.in. Retrieved 2019-10-11.
7. B- (2011-08-05). ”B-I: P S  V
D”63 . Bridge-India. Retrieved 2019-10-16.
8. A History of Chess by Murray
9. A J. S (1991). ”W R C H 
K' T?”64 (PDF). Mathematics Magazine: 325–332.
10. C, P.; D C, J. (1978). ”K' T R”65 (PDF). Fibonacci
Quarterly. 16: 276–285.
11. M L; I W (1996). ”T N  K'
T E 33,439,123,484,294 — C  B D D-
”. The Electronic Journal of Combinatorics. 3 (1): R5. doi66 :10.37236/122967 .
Remark: The authors later admitted68 that the announced number is incorrect. Ac-
cording to McKay's report, the correct number is 13,267,364,410,532 and this number
is repeated in Wegener's 2000 book.
12. B MK69 (1997). ”K' T   8 × 8 C”70 . Tech-
nical Report TR-CS-97-03. Department of Computer Science, Australian National
University. Archived from the original71 on 2013-09-28. Retrieved 2013-09-22.
13. W, I. (2000). Branching Programs and Binary Decision Diagrams72 . S-
  I & A M. ISBN73 978-0-89871-458-674 .

54 https://archive.org/details/javahowtoprogram00deit_1/page/326
55 https://en.wikipedia.org/wiki/Prentice_Hall
56 https://archive.org/details/javahowtoprogram00deit_1/page/326
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/978-0131016217
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1016%2F0166-218X%2892%2900170-Q
61 https://en.wikipedia.org/wiki/Hindi
62 https://www.iiitb.ac.in/CSL/projects/paduka/paduka.html
63 http://bridge-india.blogspot.com/2011/08/paduka-sahasram-by-vedanta-desika.html
64 https://pdfs.semanticscholar.org/c3f5/e69e771771de1be50a8a8bf2561804026d69.pdf
65 https://www.fq.math.ca/Scanned/16-3/cull.pdf
66 https://en.wikipedia.org/wiki/Doi_(identifier)
67 https://doi.org/10.37236%2F1229
68 http://www.combinatorics.org/ojs/index.php/eljc/article/view/v3i1r5/comment
69 https://en.wikipedia.org/wiki/Brendan_McKay
https://web.archive.org/web/20130928001649/http://www.combinatorics.org/ojs/index.
70
php/eljc/article/downloadSuppFile/v3i1r5/mckay
71 http://www.combinatorics.org/ojs/index.php/eljc/article/downloadSuppFile/v3i1r5/mckay
72 https://books.google.com/books?id=-DZjVz9E4f8C&pg=PA369&dq=532
73 https://en.wikipedia.org/wiki/ISBN_(identifier)
74 https://en.wikipedia.org/wiki/Special:BookSources/978-0-89871-458-6


14. W, E W.75 ”K' T”76 . MathWorld77 .


15. S, D (2013), Evolutionary Optimization Algorithms78 , J W & S,
. 449–450, ISBN79 978111865950280 , The knight's tour problem is a classic combi-
natorial optimization problem. ... The cardinality Nx of x (the size of the search
space) is over 3.3×1013 (Löbbing and Wegener, 1995). We would not want to try to
solve this problem using brute force, but by using human insight and ingenuity we
can solve the knight's tour without much difficulty. We see that the cardinality of a
combinatorial optimization problem is not necessarily indicative of its difficulty.
16. ”E  K' T”81 . A   82 
2019-06-15.
17. P, I (1997). ”A E A   K'
83
T P” (PDF). Discrete Applied Mathematics. 73 (3): 251–260.
doi84 :10.1016/S0166-218X(96)00010-885 .
18. P, I (J 1967). ”A    H  
K' ”. Communications of the ACM. 10 (7): 446–449. Cite-
SeerX86 10.1.1.412.841087 . doi88 :10.1145/363427.36346389 .
19. S, D; C, P. (1996). ”A W-R A 
K' T  S B”90 (PDF). R 2011-08-21.
20. V H, G; O, R; S, J; V  B, D
(2018). A Predictive Data Analytic for the Hardness of Hamiltonian Cycle Problem
Instances91 (PDF). DATA ANALYTICS 2018: T S I C-
  D A. A, : XPS92 . . 91–96. ISBN93 978-1-
61208-681-194 . R 2018-11-27.
21. A, K; W, K. (1992). Finding Re-entrant Knight's Tours on N-
by-M Boards. ACM Southeast Regional Conference. New York, New York: ACM95 .
pp. 377–382. doi96 :10.1145/503720.50380697 .

75 https://en.wikipedia.org/wiki/Eric_W._Weisstein
76 https://mathworld.wolfram.com/KnightsTour.html
77 https://en.wikipedia.org/wiki/MathWorld
78 https://books.google.com/books?id=gwUwIEPqk30C&pg=PA449
79 https://en.wikipedia.org/wiki/ISBN_(identifier)
80 https://en.wikipedia.org/wiki/Special:BookSources/9781118659502
81 https://web.archive.org/web/20190615155047/http://www.josiahland.com/archives/781
82 http://www.josiahland.com/archives/781
83 https://core.ac.uk/download/pdf/81964499.pdf
84 https://en.wikipedia.org/wiki/Doi_(identifier)
85 https://doi.org/10.1016%2FS0166-218X%2896%2900010-8
86 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
87 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.412.8410
88 https://en.wikipedia.org/wiki/Doi_(identifier)
89 https://doi.org/10.1145%2F363427.363463
90 https://github.com/douglassquirrel/warnsdorff/blob/master/5_Squirrel96.pdf?raw=true
91 https://hamiltoncycle.gijsvanhorn.nl/data/vanHornetal-aPredictiveDataAnalytic.pdf
https://en.wikipedia.org/w/index.php?title=Xpert_Publishing_Services&action=edit&
92
redlink=1
93 https://en.wikipedia.org/wiki/ISBN_(identifier)
94 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61208-681-1
95 https://en.wikipedia.org/wiki/Association_for_Computing_Machinery
96 https://en.wikipedia.org/wiki/Doi_(identifier)
97 https://doi.org/10.1145%2F503720.503806


22. D, S, . (1984). Century/Acorn User Book of Computer Puzzles.
ISBN98 978-071260541099 .
23. Y. Takefuji, K. C. Lee. ”Neural network computing for knight's tour problems.” Neu-
rocomputing, 4(5):249–254, 1992.

104.8 External links

Wikimedia Commons has media related to Knight's Tours100 .

• OEIS sequence A001230 (Number of undirected closed knight's tours on a 2n × 2n chess-
board)101
• H. C. von Warnsdorf 1823 in Google Books102
• Introduction to Knight's tours by George Jelliss103
• Knight's tours complete notes by George Jelliss104
• P, A (2013). ”A G P-K? T A-
  E   I”. IEEE Potentials. 32 (6): 10–16.
105
doi :10.1109/MPOT.2012.2219651 . 106

98 https://en.wikipedia.org/wiki/ISBN_(identifier)
99 https://en.wikipedia.org/wiki/Special:BookSources/978-0712605410
100 https://commons.wikimedia.org/wiki/Category:Knight%27s_Tours
101 https://oeis.org/A001230
https://books.google.com/books?id=w5FZAAAAYAAJ&printsec=frontcover&dq=h.+c.+von+
102 warnsdorf&hl=en&sa=X&ei=2QZmU9-jJIaXO8-hgKgL&ved=0CDEQ6AEwAA#v=onepage&q=h.%20c.
%20von%20warnsdorf&f=false
103 http://www.mayhematics.com/t/1n.htm
104 http://www.mayhematics.com/t/t.htm
105 https://en.wikipedia.org/wiki/Doi_(identifier)
106 https://doi.org/10.1109%2FMPOT.2012.2219651

105 Kosaraju's algorithm

In computer science1 , Kosaraju's algorithm (also known as the Kosaraju–Sharir algorithm)
is a linear time2 algorithm3 to find the strongly connected components4 of a directed
graph5 . Aho6 , Hopcroft7 and Ullman8 credit it to S. Rao Kosaraju9 and Micha Sharir10 .
Kosaraju11 suggested it in 1978 but did not publish it, while Sharir12 independently dis-
covered it and published it in 1981. It makes use of the fact that the transpose graph13
(the same graph with the direction of every edge reversed) has exactly the same strongly
connected components as the original graph.

105.1 The algorithm

The primitive graph operations that the algorithm uses are to enumerate the vertices of
the graph, to store data per vertex (if not in the graph data structure itself, then in some
table that can use vertices as indices), to enumerate the out-neighbours of a vertex (traverse
edges in the forward direction), and to enumerate the in-neighbours of a vertex (traverse
edges in the backward direction); however, the last can be dispensed with, at the price of
constructing a representation of the transpose graph during the forward traversal phase.
The only additional data structure needed by the algorithm is an ordered list L of graph
vertices, that will grow to contain each vertex once.
If strong components are to be represented by appointing a separate root vertex for each
component, and assigning to each vertex the root vertex of its component, then Kosaraju's
algorithm can be stated as follows.
1. For each vertex u of the graph, mark u as unvisited. Let L be empty.
2. For each vertex u of the graph do Visit(u), where Visit(u) is the recursive subroutine:
If u is unvisited then:
a) Mark u as visited.
b) For each out-neighbour v of u, do Visit(v).

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Linear_time
3 https://en.wikipedia.org/wiki/Algorithm
4 https://en.wikipedia.org/wiki/Strongly_connected_component
5 https://en.wikipedia.org/wiki/Directed_graph
6 https://en.wikipedia.org/wiki/Alfred_V._Aho
7 https://en.wikipedia.org/wiki/John_E._Hopcroft
8 https://en.wikipedia.org/wiki/Jeffrey_D._Ullman
9 https://en.wikipedia.org/wiki/S._Rao_Kosaraju
10 https://en.wikipedia.org/wiki/Micha_Sharir
11 https://en.wikipedia.org/wiki/S._Rao_Kosaraju
12 https://en.wikipedia.org/wiki/Micha_Sharir
13 https://en.wikipedia.org/wiki/Transpose_graph


c) Prepend u to L.
Otherwise do nothing.
3. For each element u of L in order, do Assign(u,u) where Assign(u,root) is the recursive
subroutine:
If u has not been assigned to a component then:
a) Assign u as belonging to the component whose root is root.
b) For each in-neighbour v of u, do Assign(v,root).
Otherwise do nothing.
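The two phases above translate almost line-for-line into Python (a sketch of the stated formulation; the graph is assumed to be a dict mapping each vertex to its list of out-neighbours, and recursion stands in for an explicit stack):

```python
def kosaraju(graph):
    visited, order = set(), []          # order plays the role of the list L

    def visit(u):                       # phase 2: forward traversal
        if u in visited:
            return
        visited.add(u)
        for v in graph[u]:
            visit(v)
        order.insert(0, u)              # prepend: reversed post-order

    for u in graph:
        visit(u)

    # Build the transpose graph so Assign can follow in-neighbours.
    transpose = {u: [] for u in graph}
    for u in graph:
        for v in graph[u]:
            transpose[v].append(u)

    component = {}

    def assign(u, root):                # phase 3: backward traversal
        if u in component:
            return
        component[u] = root
        for v in transpose[u]:
            assign(v, root)

    for u in order:
        assign(u, u)
    return component                    # maps each vertex to its root

# Small example: 1 -> 2 -> 3 -> 1 is one cycle, 4 <-> 5 another.
g = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}
comp = kosaraju(g)
```

On this example the algorithm assigns vertices 1–3 to one strong component and 4–5 to another.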
Trivial variations are to instead assign a component number to each vertex, or to construct
per-component lists of the vertices that belong to it. The unvisited/visited indication may
share storage location with the final assignment of root for a vertex.
The key point of the algorithm is that during the first (forward) traversal of the graph
edges, vertices are prepended to the list L in post-order14 relative to the search tree being
explored. This means it does not matter whether a vertex v was first Visited because it
appeared in the enumeration of all vertices or because it was the out-neighbour of another
vertex u that got Visited; either way v will be prepended to L before u is, so if there is a
forward path from u to v then u will appear before v on the final list L (unless u and v both
belong to the same strong component, in which case their relative order in L is arbitrary).
As given above, the algorithm for simplicity employs depth-first search15 , but it could just
as well use breadth-first search16 as long as the post-order property is preserved.
The algorithm can be understood as identifying the strong component of a vertex u as
the set of vertices which are reachable from u both by backwards and forwards traversal.
Writing F (u) for the set of vertices reachable from u by forward traversal, B(u) for the set
of vertices reachable from u by backwards traversal, and P (u) for the set of vertices which
appear strictly before u on the list L after phase 2 of the algorithm, the strong component
containing a vertex u appointed as root is
B(u) ∩ F (u) = B(u) \ (B(u) \ F (u)) = B(u) \ P (u).
Set intersection is computationally costly, but it is logically equivalent to a double set
difference17 , and since B(u) \ F (u) ⊆ P (u) it becomes sufficient to test whether a newly
encountered element of B(u) has already been assigned to a component or not.

105.2 Complexity

Provided the graph is described using an adjacency list18 , Kosaraju's algorithm performs
two complete traversals of the graph and so runs in Θ(V+E) (linear) time, which is asymp-
totically optimal19 because there is a matching lower bound (any algorithm must examine
all vertices and edges). It is the conceptually simplest efficient algorithm, but is not as effi-

14 https://en.wikipedia.org/wiki/Tree_traversal#Post-order
15 https://en.wikipedia.org/wiki/Depth-first_search
16 https://en.wikipedia.org/wiki/Breadth-first_search
17 https://en.wikipedia.org/wiki/Set_difference
18 https://en.wikipedia.org/wiki/Adjacency_list
19 https://en.wikipedia.org/wiki/Asymptotically_optimal


cient in practice as Tarjan's strongly connected components algorithm20 and the path-based
strong component algorithm21 , which perform only one traversal of the graph.
If the graph is represented as an adjacency matrix22 , the algorithm requires O(V²) time.

105.3 References
• Alfred V. Aho23 , John E. Hopcroft24 , Jeffrey D. Ullman25 . Data Structures and Algo-
rithms. Addison-Wesley, 1983.
• Thomas H. Cormen26 , Charles E. Leiserson27 , Ronald L. Rivest28 , Clifford Stein29 . In-
troduction to Algorithms30 , 3rd edition. The MIT Press, 2009. ISBN31 0-262-03384-432 .
• Micha Sharir33 . A strong-connectivity algorithm and its applications to data flow analysis.
Computers and Mathematics with Applications 7(1):67–72, 1981.

105.4 External links


• Good Math, Bad Math: Computing Strongly Connected Components34

20 https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
21 https://en.wikipedia.org/wiki/Path-based_strong_component_algorithm
22 https://en.wikipedia.org/wiki/Adjacency_matrix
23 https://en.wikipedia.org/wiki/Alfred_V._Aho
24 https://en.wikipedia.org/wiki/John_E._Hopcroft
25 https://en.wikipedia.org/wiki/Jeffrey_D._Ullman
26 https://en.wikipedia.org/wiki/Thomas_H._Cormen
27 https://en.wikipedia.org/wiki/Charles_E._Leiserson
28 https://en.wikipedia.org/wiki/Ronald_L._Rivest
29 https://en.wikipedia.org/wiki/Clifford_Stein
30 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
31 https://en.wikipedia.org/wiki/ISBN_(identifier)
32 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
33 https://en.wikipedia.org/wiki/Micha_Sharir
34 http://scienceblogs.com/goodmath/2007/10/30/computing-strongly-connected-c/

1137
106 Kruskal's algorithm




Kruskal's algorithm is a minimum-spanning-tree algorithm11 which finds an edge of the
least possible weight that connects any two trees in the forest.[1] It is a greedy algorithm12
in graph theory13 as it finds a minimum spanning tree14 for a connected15 weighted graph16 ,
adding increasing-cost arcs at each step.[1] This means it finds a subset of the edges17 that
forms a tree that includes every vertex18 , where the total weight of all the edges in the tree
is minimized. If the graph is not connected, then it finds a minimum spanning forest (a
minimum spanning tree for each connected component19 ).
This algorithm first appeared in Proceedings of the American Mathematical Society20 ,
pp. 48–50 in 1956, and was written by Joseph Kruskal21 .[2]
Other algorithms for this problem include Prim's algorithm22 , Reverse-delete algorithm23 ,
and Borůvka's algorithm24 .

106.1 Algorithm
• create a forest F (a set of trees), where each vertex in the graph is a separate tree25
• create a set S containing all the edges in the graph
• while S is nonempty26 and F is not yet spanning27 :
  • remove an edge with minimum weight from S
  • if the removed edge connects two different trees then add it to the forest F, combining
    two trees into a single tree
At the termination of the algorithm, the forest forms a minimum spanning forest of the
graph. If the graph is connected, the forest has a single component and forms a minimum
spanning tree.

11 https://en.wikipedia.org/wiki/Minimum_spanning_tree#Algorithms
12 https://en.wikipedia.org/wiki/Greedy_algorithm
13 https://en.wikipedia.org/wiki/Graph_theory
14 https://en.wikipedia.org/wiki/Minimum_spanning_tree
15 https://en.wikipedia.org/wiki/Connectivity_(graph_theory)
16 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#Weighted_graphs_and_networks
17 https://en.wikipedia.org/wiki/Edge_(graph_theory)
18 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
19 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
20 https://en.wikipedia.org/wiki/Proceedings_of_the_American_Mathematical_Society
21 https://en.wikipedia.org/wiki/Joseph_Kruskal
22 https://en.wikipedia.org/wiki/Prim%27s_algorithm
23 https://en.wikipedia.org/wiki/Reverse-delete_algorithm
24 https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm
25 https://en.wikipedia.org/wiki/Tree_(graph_theory)
26 https://en.wikipedia.org/wiki/Nonempty
27 https://en.wikipedia.org/wiki/Spanning_tree


106.2 Pseudocode

Figure 257 A demo for Kruskal's algorithm based on Euclidean distance.

The following code is implemented with disjoint-set data structure28 :


algorithm Kruskal(G) is
A := ∅
for each v ∈ G.V do
MAKE-SET(v)
for each (u, v) in G.E ordered by weight(u, v), increasing do
if FIND-SET(u) ≠ FIND-SET(v) then
A := A ∪ {(u, v)}
UNION(FIND-SET(u), FIND-SET(v))

28 https://en.wikipedia.org/wiki/Disjoint-set_data_structure


return A
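The pseudocode maps directly onto a short Python version (a sketch: the disjoint-set structure here uses only path compression, without union by rank, which is enough to illustrate the algorithm but weakens the complexity bound discussed in the next section):

```python
def kruskal(vertices, edges):
    """edges is a list of (weight, u, v) triples."""
    parent = {v: v for v in vertices}          # MAKE-SET for every vertex

    def find(x):                               # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # halve the path as we walk up
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):              # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                           # edge joins two different trees
            mst.append((w, u, v))
            parent[ru] = rv                    # UNION the two trees
    return mst

# Edge AC is rejected: by the time it is examined, A and C are
# already connected through B.
mst = kruskal(["A", "B", "C", "D"],
              [(3, "A", "C"), (1, "A", "B"), (2, "B", "C"), (4, "C", "D")])
```

On this four-vertex example the result is the three edges AB, BC, and CD, with total weight 7.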

106.3 Complexity

Kruskal's algorithm can be shown to run in O29 (E log30 E) time, or equivalently, O(E log
V) time, where E is the number of edges in the graph and V is the number of vertices, all
with simple data structures. These running times are equivalent because:
• E is at most V² and log V² = 2 log V ∈ O(log V).
• Each isolated vertex is a separate component of the minimum spanning forest. If we
ignore isolated vertices we obtain V ≤ 2E, so log V is O(log E).
We can achieve this bound as follows: first sort the edges by weight using a comparison
sort31 in O(E log E) time; this allows the step ”remove an edge with minimum weight from
S” to operate in constant time. Next, we use a disjoint-set data structure32 to keep track of
which vertices are in which components. We need to perform O(V) operations, as in each
iteration we connect a vertex to the spanning tree, two 'find' operations and possibly one
union for each edge. Even a simple disjoint-set data structure such as disjoint-set forests
with union by rank can perform O(V) operations in O(V log V) time. Thus the total time
is O(E log E) = O(E log V).
Provided that the edges are either already sorted or can be sorted in linear time (for example
with counting sort33 or radix sort34 ), the algorithm can use a more sophisticated disjoint-set
data structure35 to run in O(E α(V)) time, where α is the extremely slowly growing inverse
of the single-valued Ackermann function36 .

106.4 Example

Image Description

29 https://en.wikipedia.org/wiki/Big-O_notation
30 https://en.wikipedia.org/wiki/Binary_logarithm
31 https://en.wikipedia.org/wiki/Comparison_sort
32 https://en.wikipedia.org/wiki/Disjoint-set_data_structure
33 https://en.wikipedia.org/wiki/Counting_sort
34 https://en.wikipedia.org/wiki/Radix_sort
35 https://en.wikipedia.org/wiki/Disjoint-set_data_structure
36 https://en.wikipedia.org/wiki/Ackermann_function


AD and CE are the shortest edges, with length 5, and AD has been arbitrar-
ily37 chosen, so it is highlighted.

Figure 258

CE is now the shortest edge that does not form a cycle, with length 5, so it is
highlighted as the second edge.

Figure 259

The next edge, DF with length 6, is highlighted using much the same method.

Figure 260

37 https://en.wikipedia.org/wiki/Arbitrary


The next-shortest edges are AB and BE, both with length 7. AB is chosen
arbitrarily, and is highlighted. The edge BD has been highlighted in red,
because there already exists a path (in green) between B and D, so it would
form a cycle (ABD) if it were chosen.

Figure 261

The process continues to highlight the next-smallest edge, BE with length 7.
Many more edges are highlighted in red at this stage: BC because it
would form the loop BCE, DE because it would form the loop DEBA, and
FE because it would form FEBAD.

Figure 262

Finally, the process finishes with the edge EG of length 9, and the minimum
spanning tree is found.

Figure 263


106.5 Proof of correctness

The proof consists of two parts. First, it is proved that the algorithm produces a spanning
tree38 . Second, it is proved that the constructed spanning tree is of minimal weight.

106.5.1 Spanning tree

Let G be a connected, weighted graph and let Y be the subgraph of G produced by the
algorithm. Y cannot have a cycle, since an edge is only added when its endpoints lie in two
different trees of the forest built so far, never within one tree. Y cannot be disconnected,
since the first encountered edge that joins two components of Y would have been added by
the algorithm. Thus, Y is a spanning tree of G.

106.5.2 Minimality

We show that the following proposition P is true by induction39 : If F is the set of edges
chosen at any stage of the algorithm, then there is some minimum spanning tree that
contains F and none of the edges rejected by the algorithm.
• Clearly P is true at the beginning, when F is empty: any minimum spanning tree will do,
and there exists one because a weighted connected graph always has a minimum spanning
tree.
• Now assume P is true for some non-final edge set F and let T be a minimum spanning
tree that contains F.
• If the next chosen edge e is also in T, then P is true for F + e.
• Otherwise, if e is not in T then T + e has a cycle C. This cycle contains edges which
do not belong to F, since e does not form a cycle when added to F but does in T. Let
f be an edge which is in C but not in F + e. Note that f also belongs to T, and by
P has not been considered by the algorithm. f must therefore have a weight at least as
large as e. Then T − f + e is a tree, and it has the same or smaller weight than T. So
T − f + e is a minimum spanning tree containing F + e and again P holds.
• Therefore, by the principle of induction, P holds when F has become a spanning tree,
which is only possible if F is a minimum spanning tree itself.

106.6 Parallel algorithm

Kruskal's algorithm is inherently sequential and hard to parallelize. It is, however, possible
to perform the initial sorting of the edges in parallel or, alternatively, to use a parallel
implementation of a binary heap40 to extract the minimum-weight edge in every iteration.[3]
As parallel sorting is possible in time O(n) on O(log n) processors,[4] the runtime of Kruskal's
algorithm can be reduced to O(E α(V)), where α again is the inverse of the single-valued
Ackermann function41 .

38 https://en.wikipedia.org/wiki/Spanning_tree
39 https://en.wikipedia.org/wiki/Mathematical_induction
40 https://en.wikipedia.org/wiki/Binary_heap
41 https://en.wikipedia.org/wiki/Ackermann_function


A variant of Kruskal's algorithm, named Filter-Kruskal, has been described by Osipov et
al.[5] and is better suited for parallelization. The basic idea behind Filter-Kruskal is to
partition the edges in a similar way to quicksort42 and filter out edges that connect vertices
of the same tree to reduce the cost of sorting. The following pseudocode43 demonstrates
this.
function filter_kruskal(G) is
    if |G.E| < kruskal_threshold:
        return kruskal(G)
    pivot = choose_random(G.E)
    E≤, E> = partition(G.E, pivot)
    A = filter_kruskal(E≤)
    E> = filter(E>)
    A = A ∪ filter_kruskal(E>)
    return A

function partition(E, pivot) is
    E≤ = ∅, E> = ∅
    foreach (u, v) in E do
        if weight(u, v) ≤ pivot then
            E≤ = E≤ ∪ {(u, v)}
        else
            E> = E> ∪ {(u, v)}
    return E≤, E>

function filter(E) is
    E_filtered = ∅
    foreach (u, v) in E do
        if find_set(u) ≠ find_set(v) then
            E_filtered = E_filtered ∪ {(u, v)}
    return E_filtered

42 https://en.wikipedia.org/wiki/Quicksort
43 https://en.wikipedia.org/wiki/Pseudocode
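The pseudocode above translates almost line for line into Python (a sketch: the shared union-find array, the (weight, u, v) edge tuples, and the threshold value are assumptions not fixed by the pseudocode, and a guard is added for the case where every edge weight falls at or below the pivot):

```python
import random

def find(parent, x):
    """'find' with path halving on the shared union-find array."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal_base(edges, parent, mst):
    """Plain Kruskal on a small edge subset, using the shared state."""
    for w, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[rv] = ru
            mst.append((u, v, w))

def filter_kruskal(edges, parent, mst, threshold=8):
    """Filter-Kruskal sketch.  edges: list of (weight, u, v) tuples;
    parent: shared union-find array; mst: output edge list."""
    if len(edges) < threshold:
        kruskal_base(edges, parent, mst)
        return
    pivot = random.choice(edges)[0]      # a random edge weight
    light = [e for e in edges if e[0] <= pivot]
    heavy = [e for e in edges if e[0] > pivot]
    if len(light) == len(edges):         # all weights <= pivot:
        kruskal_base(light, parent, mst) # avoid infinite recursion
        return
    filter_kruskal(light, parent, mst, threshold)
    # filter: drop heavy edges whose endpoints are already connected
    heavy = [e for e in heavy if find(parent, e[1]) != find(parent, e[2])]
    filter_kruskal(heavy, parent, mst, threshold)
```

Because every light edge weighs no more than every heavy edge, and the filter step runs between the two recursive calls, edges are still considered in nondecreasing weight order, so the result matches plain Kruskal.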

Filter-Kruskal lends itself better to parallelization, as sorting, filtering, and partitioning
can easily be performed in parallel by distributing the edges between the processors.[5]
Finally, other variants of a parallel implementation of Kruskal's algorithm have been ex-
plored. Examples include a scheme that uses helper threads to remove edges that are
definitely not part of the MST in the background,[6] and a variant which runs the sequen-
tial algorithm on p subgraphs, then merges those subgraphs until only one, the final MST,
remains.[7]

106.7 See also


• Prim's algorithm44
• Dijkstra's algorithm45
• Borůvka's algorithm46
• Reverse-delete algorithm47
• Single-linkage clustering48
• Greedy geometric spanner49

106.8 References
1. C, T; C E L, R L R, C S
(2009). Introduction To Algorithms (Third ed.). MIT Press. p. 631. ISBN50 978-
026225810451 .CS1 maint: multiple names: authors list (link52 )
2. K, J. B.53 (1956). ”O       
    ”. Proceedings of the American Mathe-

44 https://en.wikipedia.org/wiki/Prim%27s_algorithm
45 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
46 https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm
47 https://en.wikipedia.org/wiki/Reverse-delete_algorithm
48 https://en.wikipedia.org/wiki/Single-linkage_clustering
49 https://en.wikipedia.org/wiki/Greedy_geometric_spanner
50 https://en.wikipedia.org/wiki/ISBN_(identifier)
51 https://en.wikipedia.org/wiki/Special:BookSources/978-0262258104
52 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
53 https://en.wikipedia.org/wiki/Joseph_Kruskal

1148
References

matical Society54 . 7 (1): 48–50. doi55 :10.1090/S0002-9939-1956-0078686-756 . JS-


TOR57 203324158 .
3. Q, M J.; D, N (1984). ”P  ”.
ACM Computing Surveys. 16 (3): 319–348. doi59 :10.1145/2514.251560 .
4. G, A; G, A; K, G; K, V (2003).
Introduction to Parallel Computing. pp. 412–413. ISBN61 978-020164865262 .
5. O, V; S, P; S, J (2009). ”T -
    ”63 (PDF). Proceedings of the
Eleventh Workshop on Algorithm Engineering and Experiments (ALENEX). Society
for Industrial and Applied Mathematics: 52–61.
6. K, A; A, N; K, N;
K, N (2012). ”A    ' -
   ”64 (PDF). Parallel and Distributed Processing Sym-
posium Workshops & PHD Forum (IPDPSW), 2012 IEEE 26th International: 1601–
1610.
7. LČ, V; ŠĆ, S; B, A (2014). ”P
 M S T A U D M A-
”65 . Transactions on Engineering Technologies.: 543–554.
• Thomas H. Cormen66 , Charles E. Leiserson67 , Ronald L. Rivest68 , and Clifford Stein69 .
Introduction to Algorithms70 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN71 0-262-03293-772 . Section 23.2: The algorithms of Kruskal and Prim, pp. 567−574.
• Michael T. Goodrich73 and Roberto Tamassia74 . Data Structures and Algorithms in Java,
Fourth Edition. John Wiley & Sons, Inc., 2006. ISBN75 0-471-73884-076 . Section 13.7.1:
Kruskal's Algorithm, pp. 632..

54 https://en.wikipedia.org/wiki/Proceedings_of_the_American_Mathematical_Society
55 https://en.wikipedia.org/wiki/Doi_(identifier)
56 https://doi.org/10.1090%2FS0002-9939-1956-0078686-7
57 https://en.wikipedia.org/wiki/JSTOR_(identifier)
58 http://www.jstor.org/stable/2033241
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1145%2F2514.2515
61 https://en.wikipedia.org/wiki/ISBN_(identifier)
62 https://en.wikipedia.org/wiki/Special:BookSources/978-0201648652
63 http://algo2.iti.kit.edu/documents/fkruskal.pdf
64 http://tarjomefa.com/wp-content/uploads/2017/10/7793-English-TarjomeFa.pdf
65 https://www.researchgate.net/publication/235994104
66 https://en.wikipedia.org/wiki/Thomas_H._Cormen
67 https://en.wikipedia.org/wiki/Charles_E._Leiserson
68 https://en.wikipedia.org/wiki/Ronald_L._Rivest
69 https://en.wikipedia.org/wiki/Clifford_Stein
70 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
71 https://en.wikipedia.org/wiki/ISBN_(identifier)
72 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
73 https://en.wikipedia.org/wiki/Michael_T._Goodrich
74 https://en.wikipedia.org/wiki/Roberto_Tamassia
75 https://en.wikipedia.org/wiki/ISBN_(identifier)
76 https://en.wikipedia.org/wiki/Special:BookSources/0-471-73884-0


106.9 External links


• Data for the article's example77 .
• Gephi Plugin For Calculating a Minimum Spanning Tree78 source code79 .
• Kruskal's Algorithm with example and program in c++80
• Kruskal's Algorithm code in C++ as applied to random numbers81

77 https://github.com/carlschroedl/kruskals-minimum-spanning-tree-algorithm-example-data
78 https://gephi.org/plugins/#/plugin/spanning-tree-plugin
79 https://github.com/carlschroedl/gephi-plugins/tree/minimum-spanning-tree-plugin/modules/MinimumSpanningTree
80 http://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-using-stl-in-c/
81 https://meyavuz.wordpress.com/2017/03/11/how-does-kruskals-algorithm-progress/

107 Lexicographic breadth-first search


In computer science1 , lexicographic breadth-first search or Lex-BFS is a linear time2
algorithm for ordering the vertices3 of a graph4 . The algorithm is different from a
breadth-first search5 , but it produces an ordering that is consistent with breadth-first
search.
The lexicographic breadth-first search algorithm is based on the idea of partition refinement6
and was first developed by Donald J. Rose, Robert E. Tarjan7 , and George S. Lueker (19768 ).
A more detailed survey of the topic is presented by Corneil (2004)9 . It has been used as
a subroutine in other graph algorithms including the recognition of chordal graphs10 , and
optimal coloring11 of distance-hereditary graphs12 .

107.1 Background

The breadth-first search13 algorithm is commonly defined by the following process:


• Initialize a queue14 of graph vertices, with the starting vertex of the graph as the queue's
only element.
• While the queue is non-empty, remove (dequeue) a vertex v from the queue, and add to
the queue (enqueue) all the other vertices that can be reached by an edge from v that
have not already been added in earlier steps.
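The queue-based process above can be sketched in a few lines (assuming the graph is given as an adjacency-list dict):

```python
from collections import deque

def bfs_order(adj, start):
    """Return the vertices of adj in breadth-first order from start."""
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()           # dequeue the next vertex
        order.append(v)
        for w in adj[v]:              # enqueue unvisited neighbors
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order
```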
However, rather than defining the vertex to choose at each step in an imperative15 way as
the one produced by the dequeue operation of a queue, one can define the same sequence of

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Linear_time
3 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
4 https://en.wikipedia.org/wiki/Graph_theory
5 https://en.wikipedia.org/wiki/Breadth-first_search
6 https://en.wikipedia.org/wiki/Partition_refinement
7 https://en.wikipedia.org/wiki/Robert_Tarjan
8 #CITEREFRoseTarjanLueker1976
9 #CITEREFCorneil2004
10 https://en.wikipedia.org/wiki/Chordal_graph
11 https://en.wikipedia.org/wiki/Graph_coloring
12 https://en.wikipedia.org/wiki/Distance-hereditary_graph
13 https://en.wikipedia.org/wiki/Breadth-first_search
14 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
15 https://en.wikipedia.org/wiki/Imperative_programming


vertices declaratively by the properties of these vertices. That is, a standard breadth-first
search is just the result of repeatedly applying this rule:
• Repeatedly output a vertex v, choosing at each step a vertex v that has not already been
chosen and that has a predecessor (a vertex that has an edge to v) as early in the output
as possible.
In some cases, this ordering of vertices by the output positions of their predecessors may
have ties — two different vertices have the same earliest predecessor. In this case, the
order in which those two vertices are chosen may be arbitrary. The output of lexicographic
breadth-first search differs from a standard breadth-first search in having a consistent rule
for breaking such ties. In lexicographic breadth-first search, the output ordering is the order
that would be produced by the rule:
• Repeatedly output a vertex v, choosing at each step a vertex v that has not already
been chosen and whose entire set of already-output predecessors is as small as possible
in lexicographic order16 .
So, when two vertices v and w have the same earliest predecessor, earlier than any other
unchosen vertices, the standard breadth-first search algorithm will order them arbitrarily.
Instead, in this case, the LexBFS algorithm would choose between v and w by the output
ordering of their second-earliest predecessors. If only one of them has a second-earliest
predecessor that has already been output, that one is chosen. If both v and w have the
same second-earliest predecessor, then the tie is broken by considering their third-earliest
predecessors, and so on.
Applying this rule directly, by comparing the predecessor sets of vertices, would lead to an
inefficient algorithm. Instead, lexicographic breadth-first search uses a set partitioning
data structure in order to produce the same ordering more efficiently, just as a standard
breadth-first search uses a queue data structure to produce its ordering efficiently.

107.2 Algorithm

The lexicographic breadth-first search algorithm replaces the queue17 of vertices of a stan-
dard breadth-first search with an ordered sequence of sets of vertices. The sets in the
sequence form a partition18 of the remaining vertices. At each step, a vertex v from the first
set in the sequence is removed from that set, and if that removal causes the set to become
empty then the set is removed from the sequence. Then, each set in the sequence is replaced
by two subsets: the neighbors of v and the non-neighbors of v. The subset of neighbors
is placed earlier in the sequence than the subset of non-neighbors. In pseudocode19 , the
algorithm can be expressed as follows:
• Initialize a sequence Σ of sets, to contain a single set containing all vertices.
• Initialize the output sequence of vertices to be empty.
• While Σ is non-empty:

16 https://en.wikipedia.org/wiki/Lexicographic_order
17 https://en.wikipedia.org/wiki/Queue_(data_structure)
18 https://en.wikipedia.org/wiki/Partition_(set_theory)
19 https://en.wikipedia.org/wiki/Pseudocode


• Find and remove a vertex v from the first set in Σ
• If the first set in Σ is now empty, remove it from Σ
• Add v to the end of the output sequence.
• For each edge v-w such that w still belongs to a set S in Σ:
• If the set S containing w has not yet been replaced while processing v, create a new
empty replacement set T and place it prior to S in the sequence; otherwise, let T be
the set prior to S.
• Move w from S to T, and if this causes S to become empty remove S from Σ.
Each vertex is processed once, each edge is examined only when its two endpoints are
processed, and (with an appropriate representation for the sets in Σ that allows items to be
moved from one set to another in constant time) each iteration of the inner loop takes only
constant time. Therefore, like simpler graph search algorithms such as breadth-first search
and depth first search20 , this algorithm takes linear time.
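The partition-refinement steps above can be written out directly (a readable sketch rather than the linear-time implementation: Python lists stand in for the ordered sequence of sets, so the set splitting here costs more than constant time per item):

```python
def lex_bfs(adj):
    """Lexicographic BFS ordering of the vertices of adj, an
    adjacency-list dict.  Follows the partition-refinement steps:
    take a vertex from the first set, then split every remaining
    set into neighbors (placed earlier) and non-neighbors."""
    sigma = [list(adj)]               # one set containing all vertices
    order = []
    while sigma:
        v = sigma[0].pop(0)           # vertex from the first set
        if not sigma[0]:
            del sigma[0]              # drop the set if now empty
        order.append(v)
        new_sigma = []
        for s in sigma:               # split each set by adjacency to v
            neighbors = [w for w in s if w in adj[v]]
            rest = [w for w in s if w not in adj[v]]
            if neighbors:
                new_sigma.append(neighbors)   # neighbors come first
            if rest:
                new_sigma.append(rest)
        sigma = new_sigma
    return order
```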
The algorithm is called lexicographic breadth-first search because the order it produces is
an ordering that could also have been produced by a breadth-first search, and because if
the ordering is used to index the rows and columns of an adjacency matrix21 of a graph
then the algorithm sorts22 the rows and columns into lexicographical order23 .

107.3 Applications

107.3.1 Chordal graphs

A graph G is defined to be chordal24 if its vertices have a perfect elimination ordering, an
ordering such that for any vertex v the neighbors that occur later in the ordering form
a clique. In a chordal graph, the reverse of a lexicographic ordering is always a perfect
elimination ordering. Therefore, one can test whether a graph is chordal in linear time by
the following algorithm:
• Use lexicographic breadth-first search to find a lexicographic ordering of G
• For each vertex v:
• Let w be the neighbor of v occurring prior to v, as close to v in the sequence as possible
• (Continue to the next vertex v if there is no such w)
• If the set of earlier neighbors of v (excluding w itself) is not a subset of the set of earlier
neighbors of w, the graph is not chordal
• If the loop terminates without showing that the graph is not chordal, then it is chordal.
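Given the ordering produced by lexicographic breadth-first search, the subset test in the loop above is short to write down (a sketch; `adj` is an adjacency-list dict, and `order` is assumed to be a LexBFS ordering computed separately):

```python
def is_chordal(adj, order):
    """Test chordality of adj given a LexBFS ordering `order`."""
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        if not earlier:
            continue                   # no earlier neighbor: nothing to check
        w = max(earlier, key=pos.get)  # earlier neighbor closest to v
        earlier_w = {u for u in adj[w] if pos[u] < pos[w]}
        if not (earlier - {w}) <= earlier_w:
            return False               # subset condition fails: not chordal
    return True
```

For example, a triangle passes the test, while a four-cycle (whose LexBFS orderings all violate the subset condition at the last vertex) fails it.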
This application was the original motivation that led Rose, Tarjan & Lueker (1976)25 to
develop the lexicographic breadth first search algorithm.[1]

20 https://en.wikipedia.org/wiki/Depth_first_search
21 https://en.wikipedia.org/wiki/Adjacency_matrix
22 https://en.wikipedia.org/wiki/Sorting_algorithm
23 https://en.wikipedia.org/wiki/Lexicographical_order
24 https://en.wikipedia.org/wiki/Chordal_graph
25 #CITEREFRoseTarjanLueker1976


107.3.2 Graph coloring

A graph G is said to be perfectly orderable if there is a sequence of its vertices with the
property that, for any induced subgraph26 of G, a greedy coloring27 algorithm that colors
the vertices in the induced sequence ordering is guaranteed to produce an optimal coloring.
For a chordal graph, a perfect elimination ordering is a perfect ordering: the number of the
color used for any vertex is the size of the clique formed by it and its earlier neighbors, so
the maximum number of colors used is equal to the size of the largest clique in the graph,
and no coloring can use fewer colors. An induced subgraph of a chordal graph is chordal and
the induced subsequence of its perfect elimination ordering is a perfect elimination ordering
on the subgraph, so chordal graphs are perfectly orderable, and lexicographic breadth-first
search can be used to optimally color them.
The same property is true for a larger class of graphs, the distance-hereditary graphs28 :
distance-hereditary graphs are perfectly orderable, with a perfect ordering given by the
reverse of a lexicographic ordering, so lexicographic breadth-first search can be used in
conjunction with greedy coloring algorithms to color them optimally in linear time.[2]

107.3.3 Other applications

Bretscher et al. (2008)29 describe an extension of lexicographic breadth-first search that
breaks any additional ties using the complement graph30 of the input graph. As they show,
this can be used to recognize cographs31 in linear time. Habib et al. (2000)32 describe
additional applications of lexicographic breadth-first search including the recognition of
comparability graphs33 and interval graphs34 .

107.4 LexBFS ordering

An enumeration of the vertices of a graph is said to be a LexBFS ordering if it is a possible
output of the application of LexBFS to this graph.
Let G = (V, E) be a graph with n vertices. Recall that N (v) is the set of neighbors of v.
Let σ = (v1 , . . . , vn ) be an enumeration of the vertices of V . The enumeration σ is a LexBFS
ordering (with source v1 ) if, for all 1 ≤ i < j < k ≤ n with vi ∈ N (vk ) \ N (vj ), there exists
m < i such that vm ∈ N (vj ) \ N (vk ).
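For small graphs, this characterization (if vi ∈ N(vk) \ N(vj), then some vm with m < i lies in N(vj) \ N(vk)) can be verified by brute force over all triples i < j < k (a sketch, cubic time, purely for checking the definition):

```python
def is_lexbfs_ordering(adj, sigma):
    """Brute-force check of the LexBFS-ordering characterization."""
    n = len(sigma)
    nbr = {v: set(adj[v]) for v in adj}
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                vi, vj, vk = sigma[i], sigma[j], sigma[k]
                if vi in nbr[vk] and vi not in nbr[vj]:
                    # need an earlier v_m adjacent to v_j but not v_k
                    if not any(sigma[m] in nbr[vj] and sigma[m] not in nbr[vk]
                               for m in range(i)):
                        return False
    return True
```

On the path 0-1-2-3, for instance, [0, 1, 2, 3] satisfies the condition while [0, 2, 1, 3] does not (vertex 2 is taken before 0's neighbor 1 with no earlier witness).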

26 https://en.wikipedia.org/wiki/Induced_subgraph
27 https://en.wikipedia.org/wiki/Greedy_coloring
28 https://en.wikipedia.org/wiki/Distance-hereditary_graph
29 #CITEREFBretscherCorneilHabibPaul2008
30 https://en.wikipedia.org/wiki/Complement_graph
31 https://en.wikipedia.org/wiki/Cograph
32 #CITEREFHabibMcConnellPaulViennot2000
33 https://en.wikipedia.org/wiki/Comparability_graph
34 https://en.wikipedia.org/wiki/Interval_graph


107.5 Notes
1. Corneil (2004)35 .
2. Brandstädt, Le & Spinrad (1999)36 , Theorem 5.2.4, p. 71.

107.6 References
• B, A37 ; L, V B; S, J (1999), Graph Classes:
A Survey38 , SIAM M  D M  A,
ISBN39 0-89871-432-X40 .
• B, A; C, D41 ; H, M; P, C
(2008), ”A    LBFS   ”42 , SIAM
Journal on Discrete Mathematics43 , 22 (4): 1277–1296, CiteSeerX44 10.1.1.188.501645 ,
doi46 :10.1137/06066469047 .
• C, D G.48 (2004), ”L    –  ”,
Graph-Theoretic Methods in Computer Science: 30th International Workshop, WG 2004,
Bad Honnef, Germany, June 21-23, 2004, Revised Papers, Lecture Notes in Computer
Science, 3353, Springer-Verlag, pp. 1–19, doi49 :10.1007/978-3-540-30559-0_150 .
• H, M; MC, R; P, C; V, L (2000),
”L-BFS   ,     -
,       ”51 (PDF),
Theoretical Computer Science, 234 (1–2): 59–84, doi52 :10.1016/S0304-3975(97)00241-
753 , archived from the original54 (PDF) on 2011-07-26.
• R, D. J.; T, R. E.55 ; L, G. S. (1976), ”A  
   ”, SIAM Journal on Computing56 , 5 (2): 266–283,
doi57 :10.1137/020502158 .

35 #CITEREFCorneil2004
36 #CITEREFBrandst%C3%A4dtLeSpinrad1999
37 https://en.wikipedia.org/wiki/Andreas_Brandst%C3%A4dt
38 https://archive.org/details/graphclassessurv0000bran
39 https://en.wikipedia.org/wiki/ISBN_(identifier)
40 https://en.wikipedia.org/wiki/Special:BookSources/0-89871-432-X
41 https://en.wikipedia.org/wiki/Derek_Corneil
42 http://www.liafa.jussieu.fr/~habib/Documents/cograph.ps
43 https://en.wikipedia.org/wiki/SIAM_Journal_on_Discrete_Mathematics
44 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
45 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5016
46 https://en.wikipedia.org/wiki/Doi_(identifier)
47 https://doi.org/10.1137%2F060664690
48 https://en.wikipedia.org/wiki/Derek_Corneil
49 https://en.wikipedia.org/wiki/Doi_(identifier)
50 https://doi.org/10.1007%2F978-3-540-30559-0_1
https://web.archive.org/web/20110726091134/http://www.cecm.sfu.ca/~cchauve/MATH445/
51
PROJECTS/MATH445-TCS-234-59.pdf
52 https://en.wikipedia.org/wiki/Doi_(identifier)
53 https://doi.org/10.1016%2FS0304-3975%2897%2900241-7
54 http://www.cecm.sfu.ca/~cchauve/MATH445/PROJECTS/MATH445-TCS-234-59.pdf
55 https://en.wikipedia.org/wiki/Robert_Tarjan
56 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
57 https://en.wikipedia.org/wiki/Doi_(identifier)
58 https://doi.org/10.1137%2F0205021

108 Longest path problem

In graph theory1 and theoretical computer science2 , the longest path problem is the
problem of finding a simple path3 of maximum length in a given graph. A path is called
simple if it does not have any repeated vertices; the length of a path may either be measured
by its number of edges, or (in weighted graphs4 ) by the sum of the weights of its edges. In
contrast to the shortest path problem5 , which can be solved in polynomial time in graphs
without negative-weight cycles, the longest path problem is NP-hard6 and the decision
version of the problem, which asks whether a path exists of at least some given length, is
NP-complete7 . This means that the decision problem cannot be solved in polynomial time8
for arbitrary graphs unless P = NP9 . Stronger hardness results are also known showing
that it is difficult to approximate10 . However, it has a linear time11 solution for directed
acyclic graphs12 , which has important applications in finding the critical path13 in scheduling
problems.

108.1 NP-hardness

The NP-hardness of the unweighted longest path problem can be shown using a reduction
from the Hamiltonian path problem14 : a graph G has a Hamiltonian path if and only if
its longest path has length n − 1, where n is the number of vertices in G. Because the
Hamiltonian path problem is NP-complete, this reduction shows that the decision version15
of the longest path problem is also NP-complete. In this decision problem, the input is a
graph G and a number k; the desired output is "yes" if G contains a path of k or more edges,
and "no" otherwise.[1]
If the longest path problem could be solved in polynomial time, it could be used to solve this
decision problem, by finding a longest path and then comparing its length to the number k.

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Theoretical_computer_science
3 https://en.wikipedia.org/wiki/Path_(graph_theory)
4 https://en.wikipedia.org/wiki/Weighted_graph
5 https://en.wikipedia.org/wiki/Shortest_path_problem
6 https://en.wikipedia.org/wiki/NP-hard
7 https://en.wikipedia.org/wiki/NP-complete
8 https://en.wikipedia.org/wiki/Polynomial_time
9 https://en.wikipedia.org/wiki/P_%3D_NP
10 https://en.wikipedia.org/wiki/Approximation_algorithm
11 https://en.wikipedia.org/wiki/Linear_time
12 https://en.wikipedia.org/wiki/Directed_acyclic_graph
13 https://en.wikipedia.org/wiki/Critical_path_method
14 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
15 https://en.wikipedia.org/wiki/Decision_problem


Therefore, the longest path problem is NP-hard. The question "does there exist a simple
path in a given graph with at least k edges" is NP-complete.[2]
In weighted complete graphs16 with non-negative edge weights, the weighted longest path
problem is the same as the Travelling salesman path problem17 , because the longest path
always includes all vertices.[3]

108.2 Acyclic graphs and critical paths

A longest path between two given vertices s and t in a weighted graph G is the same thing
as a shortest path in a graph −G derived from G by changing every weight to its negation.
Therefore, if shortest paths can be found in −G, then longest paths can also be found in
G.[4]
For most graphs, this transformation is not useful because it creates cycles of negative
length in −G. But if G is a directed acyclic graph18 , then no negative cycles can be created,
and a longest path in G can be found in linear time19 by applying a linear time algorithm
for shortest paths in −G, which is also a directed acyclic graph.[4] For instance, for each
vertex v in a given DAG, the length of the longest path ending at v may be obtained by the
following steps:
1. Find a topological ordering20 of the given DAG.
2. For each vertex v of the DAG, in the topological ordering, compute the length of the
longest path ending at v by looking at its incoming neighbors and adding one to the
maximum length recorded for those neighbors. If v has no incoming neighbors, set
the length of the longest path ending at v to zero. In either case, record this number
so that later steps of the algorithm can access it.
Once this has been done, the longest path in the whole DAG may be obtained by starting
at the vertex v with the largest recorded value, then repeatedly stepping backwards to its
incoming neighbor with the largest recorded value, and reversing the sequence of vertices
found in this way.
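The three steps above (topological ordering, dynamic programming, and the backward walk) can be sketched as follows (assuming the DAG is an adjacency-list dict of outgoing neighbors with every vertex as a key; pushing lengths forward along outgoing edges is equivalent to the text's incoming-neighbor formulation once vertices are processed in topological order):

```python
def longest_path_dag(adj):
    """Length of a longest path in a DAG, and one such path.
    adj: dict mapping each vertex to its list of outgoing neighbors."""
    # Step 1: topological order (Kahn's method)
    indeg = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indeg[w] += 1
    stack = [v for v in adj if indeg[v] == 0]
    topo = []
    while stack:
        v = stack.pop()
        topo.append(v)
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    # Step 2: longest path ending at each vertex, with predecessor links
    length = {v: 0 for v in adj}
    pred = {v: None for v in adj}
    for v in topo:
        for w in adj[v]:
            if length[v] + 1 > length[w]:
                length[w] = length[v] + 1
                pred[w] = v
    # Step 3: walk backwards from the vertex with the largest value
    end = max(length, key=length.get)
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return len(path) - 1, path[::-1]
```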
The critical path method21 for scheduling a set of activities involves the construction of
a directed acyclic graph in which the vertices represent project milestones and the edges
represent activities that must be performed after one milestone and before another; each
edge is weighted by an estimate of the amount of time the corresponding activity will take
to complete. In such a graph, the longest path from the first milestone to the last one is
the critical path, which describes the total time for completing the project.[4]
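The critical path length is the weighted version of the same dynamic program. In this illustrative sketch the milestone numbering and activity durations are hypothetical, and milestones are assumed to be numbered in a valid topological order (a real scheduler would topologically sort first):

```python
def critical_path(n, activities):
    """Total project time: the longest weighted path in the milestone DAG.

    `activities` maps (u, v) edges to estimated durations; milestones
    0..n-1 are assumed to already be numbered in topological order.
    """
    finish = [0.0] * n  # earliest finish time = longest weighted path ending at v
    # Process edges grouped by their head vertex, in topological order,
    # so finish[u] is final before any edge out of u is used.
    for (u, v), dur in sorted(activities.items(), key=lambda e: e[0][1]):
        finish[v] = max(finish[v], finish[u] + dur)
    return max(finish)
```

With activities {(0, 1): 3, (0, 2): 2, (1, 3): 4, (2, 3): 1}, the critical path is 0 → 1 → 3 with total time 7.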
Longest paths of directed acyclic graphs may also be applied in layered graph drawing22 :
assigning each vertex v of a directed acyclic graph G to the layer whose number is the

16 https://en.wikipedia.org/wiki/Complete_graph
17 https://en.wikipedia.org/wiki/Travelling_salesman_problem
18 https://en.wikipedia.org/wiki/Directed_acyclic_graph
19 https://en.wikipedia.org/wiki/Linear_time
20 https://en.wikipedia.org/wiki/Topological_ordering
21 https://en.wikipedia.org/wiki/Critical_path_method
22 https://en.wikipedia.org/wiki/Layered_graph_drawing


length of the longest path ending at v results in a layer assignment for G with the minimum
possible number of layers.[5]

108.3 Approximation

Björklund, Husfeldt & Khanna (2004)23 write that the longest path problem in un-
weighted undirected graphs ”is notorious for the difficulty of understanding its
approximation hardness”.[6] The best polynomial time approximation algorithm known for
this case achieves only a very weak approximation ratio, n/exp(Ω(√(log n))).[7] For all
ϵ > 0, it is not possible to approximate the longest path to within a factor of
2^((log n)^(1−ϵ)) unless NP is
contained within quasi-polynomial deterministic time24 ; however, there is a big gap between
this inapproximability result and the known approximation algorithms for this problem.[8]
In the case of unweighted but directed graphs, strong inapproximability results are known.
For every ϵ > 0 the problem cannot be approximated to within a factor of n^(1−ϵ) unless P
= NP, and with stronger complexity-theoretic assumptions it cannot be approximated to
within a factor of n/log^(2+ϵ) n.[6] The color-coding25 technique can be used to find paths of
logarithmic length, if they exist, but this gives an approximation ratio of only O(n/ log n).[9]

108.4 Parameterized complexity

The longest path problem is fixed-parameter tractable26 when parameterized by the length
of the path. For instance, it can be solved in time linear in the size of the input graph (but
exponential in the length of the path), by an algorithm that performs the following steps:
1. Perform a depth-first search27 of the graph. Let d be the depth of the resulting
depth-first search tree28 .
2. Use the sequence of root-to-leaf paths of the depth-first search tree, in the order in
which they were traversed by the search, to construct a path decomposition29 of the
graph, with pathwidth d.
3. Apply dynamic programming30 to this path decomposition to find a longest path in
time O(d! 2^d n), where n is the number of vertices in the graph.
Since the output path has length at least as large as d, the running time is also bounded by
O(ℓ! 2^ℓ n), where ℓ is the length of the longest path.[10] Using color-coding, the dependence
on path length can be reduced to singly exponential.[9][11][12][13] A similar dynamic program-
ming technique shows that the longest path problem is also fixed-parameter tractable when
parameterized by the treewidth31 of the graph.
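The color-coding idea just mentioned can be sketched as follows. This is a minimal randomized version for unweighted undirected graphs; the function name, adjacency-dict encoding, and trial count are illustrative assumptions:

```python
import random

def has_path_with_k_vertices(adj, k, trials=300, seed=1):
    """Randomized color-coding test for a simple path on k vertices.

    Each trial colors the vertices with k colors uniformly at random,
    then runs dynamic programming over sets of colors: a path is kept
    only if it is 'colorful' (all vertex colors distinct), which forces
    it to be simple.  A fixed k-vertex path is colorful with probability
    k!/k^k, so repeating the trial enough times finds an existing path
    with high probability; a 'False' answer is always correct.
    """
    rng = random.Random(seed)
    vertices = list(adj)
    for _ in range(trials):
        color = {v: rng.randrange(k) for v in vertices}
        # Color sets of colorful paths of the current length ending at v.
        ending = {v: {frozenset([color[v]])} for v in vertices}
        for _ in range(k - 1):
            longer = {v: set() for v in vertices}
            for u in vertices:
                for used in ending[u]:
                    for w in adj[u]:
                        if color[w] not in used:
                            longer[w].add(used | {color[w]})
            ending = longer
        if any(ending[v] for v in vertices):
            return True
    return False
```

On a four-vertex path graph this reports a 4-vertex path; on a triangle, which has no simple path with four vertices, it reports none.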

23 #CITEREFBj%C3%B6rklundHusfeldtKhanna2004
24 https://en.wikipedia.org/wiki/Time_complexity
25 https://en.wikipedia.org/wiki/Color-coding
26 https://en.wikipedia.org/wiki/Parameterized_complexity
27 https://en.wikipedia.org/wiki/Depth-first_search
28 https://en.wikipedia.org/wiki/Tr%C3%A9maux_tree
29 https://en.wikipedia.org/wiki/Path_decomposition
30 https://en.wikipedia.org/wiki/Dynamic_programming
31 https://en.wikipedia.org/wiki/Treewidth


For graphs of bounded clique-width32 , the longest path can also be solved by a
polynomial time dynamic programming algorithm. However, the exponent of the
polynomial depends on the clique-width of the graph, so this algorithm is not
fixed-parameter tractable.
The longest path problem, parameterized by clique-width, is hard for the parameterized
complexity33 class W [1], showing that a fixed-parameter tractable algorithm is unlikely to
exist.[14]

108.5 Special classes of graphs

A linear-time algorithm for finding a longest path in a tree was proposed by Dijkstra in
the 1960s, and a formal proof of its correctness was published in 2002.[15] Furthermore, a
longest path can be computed in polynomial time on weighted trees, on block graphs34 , on
cacti35 ,[16] on bipartite36 permutation graphs37 ,[17] and on Ptolemaic graphs38 .[18]
For the class of interval graphs39 , an O(n^4)-time algorithm is known, which uses a dynamic
programming approach.[19] This dynamic programming approach has been exploited to
obtain polynomial-time algorithms on the larger classes of circular-arc graphs40[20] and of
co-comparability graphs (i.e. of the complements41 of comparability graphs42 , which also
contain permutation graphs43 ),[21] both having the same running time O(n^4). The latter
algorithm is based on special properties of the Lexicographic Depth First Search (LDFS)
vertex ordering[22] of co-comparability graphs. For co-comparability graphs, an alternative
polynomial-time algorithm with higher running time O(n^7) is also known, which is based
on the Hasse diagram44 of the partially ordered set45 defined by the complement46 of the
input co-comparability graph.[23]
Furthermore, the longest path problem is solvable in polynomial time on any class of graphs
with bounded treewidth or bounded clique-width, such as the distance-hereditary graphs47 .
Finally, it is clearly NP-hard on all graph classes on which the Hamiltonian path problem
is NP-hard, such as on split graphs48 , circle graphs49 , and planar graphs50 .

32 https://en.wikipedia.org/wiki/Clique-width
33 https://en.wikipedia.org/wiki/Parameterized_complexity
34 https://en.wikipedia.org/wiki/Block_graph
35 https://en.wikipedia.org/wiki/Cactus_graph
36 https://en.wikipedia.org/wiki/Bipartite_graph
37 https://en.wikipedia.org/wiki/Permutation_graph
38 https://en.wikipedia.org/wiki/Ptolemaic_graph
39 https://en.wikipedia.org/wiki/Interval_graph
40 https://en.wikipedia.org/wiki/Circular-arc_graph
41 https://en.wikipedia.org/wiki/Complement_graph
42 https://en.wikipedia.org/wiki/Comparability_graph
43 https://en.wikipedia.org/wiki/Permutation_graph
44 https://en.wikipedia.org/wiki/Hasse_diagram
45 https://en.wikipedia.org/wiki/Partially_ordered_set
46 https://en.wikipedia.org/wiki/Complement_graph
47 https://en.wikipedia.org/wiki/Distance-hereditary_graph
48 https://en.wikipedia.org/wiki/Split_graph
49 https://en.wikipedia.org/wiki/Circle_graph
50 https://en.wikipedia.org/wiki/Planar_graph


108.6 See also


• Gallai–Hasse–Roy–Vitaver theorem51 , a duality relation between longest paths and graph
coloring52
• Longest uncrossed knight's path53
• Snake-in-the-box54 , the longest induced path55 in a hypercube graph56

108.7 References
1. S, A57 (2003), Combinatorial Optimization: Polyhedra and
Efficiency, Volume 158 , A  C, 24, Springer, p. 114,
ISBN59 978354044389660 .
2. C, T H.61 ; L, C E.62 ; R, R L.63 ; S,
C64 (2001), Introduction To Algorithms65 (2 .), MIT P, . 978,
ISBN66 978026203293367 .
3. L, E L.68 (2001), Combinatorial Optimization: Networks and Ma-
troids69 , C D P, . 64, ISBN70 978048641453971 .
4. S, R72 ; W, K D (2011), Algorithms73 (4 .),
A-W P, . 661–666, ISBN74 978032157351375 .
5. D B, G; E, P76 ; T, R77 ; T, I-
 G. (1998), ”L D  D”, Graph Drawing: Algorithms for

https://en.wikipedia.org/wiki/Gallai%E2%80%93Hasse%E2%80%93Roy%E2%80%93Vitaver_
51
theorem
52 https://en.wikipedia.org/wiki/Graph_coloring
53 https://en.wikipedia.org/wiki/Longest_uncrossed_knight%27s_path
54 https://en.wikipedia.org/wiki/Snake-in-the-box
55 https://en.wikipedia.org/wiki/Induced_path
56 https://en.wikipedia.org/wiki/Hypercube_graph
57 https://en.wikipedia.org/wiki/Alexander_Schrijver
58 https://books.google.com/books?id=mqGeSQ6dJycC&pg=PA114
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/9783540443896
61 https://en.wikipedia.org/wiki/Thomas_H._Cormen
62 https://en.wikipedia.org/wiki/Charles_E._Leiserson
63 https://en.wikipedia.org/wiki/Ron_Rivest
64 https://en.wikipedia.org/wiki/Clifford_Stein
65 https://books.google.com/books?id=NLngYyWFl_YC&pg=PA978
66 https://en.wikipedia.org/wiki/ISBN_(identifier)
67 https://en.wikipedia.org/wiki/Special:BookSources/9780262032933
68 https://en.wikipedia.org/wiki/Eugene_Lawler
69 https://books.google.com/books?id=m4MvtFenVjEC&pg=PA64
70 https://en.wikipedia.org/wiki/ISBN_(identifier)
71 https://en.wikipedia.org/wiki/Special:BookSources/9780486414539
72 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
73 https://books.google.com/books?id=idUdqdDXqnAC&pg=PA661
74 https://en.wikipedia.org/wiki/ISBN_(identifier)
75 https://en.wikipedia.org/wiki/Special:BookSources/9780321573513
76 https://en.wikipedia.org/wiki/Peter_Eades
77 https://en.wikipedia.org/wiki/Roberto_Tamassia


the Visualization of Graphs, Prentice Hall78 , pp. 265–302, ISBN79 978-0-13-301615-480 .
6. B, A; H, T; K, S (2004), ”A-
     ”, Proc. Int. Coll. Automata,
Languages and Programming (ICALP 2004), Lecture Notes in Computer Science81 ,
3142, Berlin: Springer-Verlag, pp. 222–233, MR82 216093583 .
7. G, H N.; N, S (2008), ”F  ,  
”, International Symposium on Algorithms and Computation, Lecture Notes
in Computer Science, 5369, Berlin: Springer, pp. 752–763, doi84 :10.1007/978-3-540-
92182-0_6685 , ISBN86 978-3-540-92181-387 , MR88 253996889 . For earlier work with
even weaker approximation bounds, see
G, H N. (2007), ”F     -
 ”90 (PDF), SIAM Journal on Computing, 36 (6): 1648–1671,
doi91 :10.1137/S009753970444536692 , MR93 229941894 and
B, A; H, T (2003), ”F    -
 ”95 , SIAM Journal on Computing96 , 32 (6): 1395–1402,
doi97 :10.1137/S009753970241676198 , MR99 2034242100 .
8. K, D101 ; M, R102 ; R, G. D. S. (1997), ”O
      ”, Algorithmica103 , 18 (1): 82–98,
doi104 :10.1007/BF02523689105 , MR106 1432030107 .

78 https://en.wikipedia.org/wiki/Prentice_Hall
79 https://en.wikipedia.org/wiki/ISBN_(identifier)
80 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-301615-4
81 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
82 https://en.wikipedia.org/wiki/MR_(identifier)
83 http://www.ams.org/mathscinet-getitem?mr=2160935
84 https://en.wikipedia.org/wiki/Doi_(identifier)
85 https://doi.org/10.1007%2F978-3-540-92182-0_66
86 https://en.wikipedia.org/wiki/ISBN_(identifier)
87 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-92181-3
88 https://en.wikipedia.org/wiki/MR_(identifier)
89 http://www.ams.org/mathscinet-getitem?mr=2539968
90 http://www.cs.colorado.edu/~hal/u.pdf
91 https://en.wikipedia.org/wiki/Doi_(identifier)
92 https://doi.org/10.1137%2FS0097539704445366
93 https://en.wikipedia.org/wiki/MR_(identifier)
94 http://www.ams.org/mathscinet-getitem?mr=2299418
95 http://lup.lub.lu.se/record/526604
96 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
97 https://en.wikipedia.org/wiki/Doi_(identifier)
98 https://doi.org/10.1137%2FS0097539702416761
99 https://en.wikipedia.org/wiki/MR_(identifier)
100 http://www.ams.org/mathscinet-getitem?mr=2034242
101 https://en.wikipedia.org/wiki/David_Karger
102 https://en.wikipedia.org/wiki/Rajeev_Motwani
103 https://en.wikipedia.org/wiki/Algorithmica
104 https://en.wikipedia.org/wiki/Doi_(identifier)
105 https://doi.org/10.1007%2FBF02523689
106 https://en.wikipedia.org/wiki/MR_(identifier)
107 http://www.ams.org/mathscinet-getitem?mr=1432030


9. A, N108 ; Y, R; Z, U109 (1995), ”C-”,


Journal of the ACM110 , 42 (4): 844–856, doi111 :10.1145/210332.210337112 ,
113
MR 1411787 . 114

10. B, H L.115 (1993), ”O    
 - ”, Journal of Algorithms, 14 (1): 1–23,
doi116 :10.1006/jagm.1993.1001117 , MR118 1199244119 . For an earlier FPT algo-
rithm with slightly better dependence on the path length, but worse dependence on
the size of the graph, see
M, B. (1985), ”H     ”, Analysis and design
of algorithms for combinatorial problems (Udine, 1982), North-Holland Math. Stud.,
109, Amsterdam: North-Holland, pp. 239–254, doi120 :10.1016/S0304-0208(08)73110-
4121 , ISBN122 9780444876997123 , MR124 0808004125 .
11. C, J; L, S; S, S-H; Z, F (2007), ”I-
   , ,   ”, Proc. 18th
ACM-SIAM Symposium on Discrete algorithms (SODA '07)126 (PDF), . 298–307.
12. K, I (2008), ”F     
 ”, International Colloquium on Automata, Languages and Pro-
gramming127 (PDF), L N  C S, 5125, Berlin:
Springer, pp. 575–586, CiteSeerX128 10.1.1.141.6899129 , doi130 :10.1007/978-3-540-
70575-8_47131 , ISBN132 978-3-540-70574-1133 , MR134 2500302135 , archived from the
original136 (PDF) on 2017-08-09, retrieved 2013-08-09.

108 https://en.wikipedia.org/wiki/Noga_Alon
109 https://en.wikipedia.org/wiki/Uri_Zwick
110 https://en.wikipedia.org/wiki/Journal_of_the_ACM
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1145%2F210332.210337
113 https://en.wikipedia.org/wiki/MR_(identifier)
114 http://www.ams.org/mathscinet-getitem?mr=1411787
115 https://en.wikipedia.org/wiki/Hans_L._Bodlaender
116 https://en.wikipedia.org/wiki/Doi_(identifier)
117 https://doi.org/10.1006%2Fjagm.1993.1001
118 https://en.wikipedia.org/wiki/MR_(identifier)
119 http://www.ams.org/mathscinet-getitem?mr=1199244
120 https://en.wikipedia.org/wiki/Doi_(identifier)
121 https://doi.org/10.1016%2FS0304-0208%2808%2973110-4
122 https://en.wikipedia.org/wiki/ISBN_(identifier)
123 https://en.wikipedia.org/wiki/Special:BookSources/9780444876997
124 https://en.wikipedia.org/wiki/MR_(identifier)
125 http://www.ams.org/mathscinet-getitem?mr=0808004
126 http://faculty.cse.tamu.edu/shsze/papers/kpath.pdf
https://web.archive.org/web/20170809084237/http://ccom.uprrp.edu/~ikoutis/papers/
127
MultilinearDetection.pdf
128 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
129 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.141.6899
130 https://en.wikipedia.org/wiki/Doi_(identifier)
131 https://doi.org/10.1007%2F978-3-540-70575-8_47
132 https://en.wikipedia.org/wiki/ISBN_(identifier)
133 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-70574-1
134 https://en.wikipedia.org/wiki/MR_(identifier)
135 http://www.ams.org/mathscinet-getitem?mr=2500302
136 http://ccom.uprrp.edu/~ikoutis/papers/MultilinearDetection.pdf


13. W, R137 (2009), ”F    k in O*(2k )


time”, Information Processing Letters, 109 (6): 315–318, arXiv138 :0807.3026139 ,
doi140 :10.1016/j.ipl.2008.11.004141 , MR142 2493730143 .
14. F, F V.; G, P A.; L, D; S, S
(2009), ”C-:     ”, Proc. 20th ACM-SIAM
Symposium on Discrete Algorithms (SODA '09)144 (PDF), . 825–834, 
  145 (PDF)  2012-10-18,  2012-12-01.
15. B, R.W.;   S, F.W.; Z, G.; V, T.; 
G, A.J.M. (2002), ”O       ”, Informa-
tion Processing Letters, 81 (2): 93–96, doi146 :10.1016/S0020-0190(01)00198-3147 .
16. U, R; U, Y (2004), ”E   
  ”, Isaac 2004, Lecture Notes in Computer Science, 3341:
871–883, doi148 :10.1007/978-3-540-30551-4_74149 , ISBN150 978-3-540-24131-7151 .
17. U, R; V, G (2007), ”L  
       ”, In-
formation Processing Letters, 103 (2): 71–77, CiteSeerX152 10.1.1.101.96153 ,
doi154 :10.1016/j.ipl.2007.02.010155 .
18. T, Y; T, S; U, R (2008),
”L    P ”, IEICE Transactions, 91-D (2):
170–177, doi156 :10.1093/ietisy/e91-d.2.170157 .
19. I, K; M, G B.; N, S D.
(2011), ”T         -
 ”, Algorithmica, 61 (2): 320–341, CiteSeerX158 10.1.1.224.4927159 ,
doi160 :10.1007/s00453-010-9411-3161 .

137 https://en.wikipedia.org/wiki/Ryan_Williams_(computer_scientist)
138 https://en.wikipedia.org/wiki/ArXiv_(identifier)
139 http://arxiv.org/abs/0807.3026
140 https://en.wikipedia.org/wiki/Doi_(identifier)
141 https://doi.org/10.1016%2Fj.ipl.2008.11.004
142 https://en.wikipedia.org/wiki/MR_(identifier)
143 http://www.ams.org/mathscinet-getitem?mr=2493730
https://web.archive.org/web/20121018164522/http://siam.org/proceedings/soda/2009/
144
SODA09_090_fominf.pdf
145 https://www.siam.org/proceedings/soda/2009/SODA09_090_fominf.pdf
146 https://en.wikipedia.org/wiki/Doi_(identifier)
147 https://doi.org/10.1016%2FS0020-0190%2801%2900198-3
148 https://en.wikipedia.org/wiki/Doi_(identifier)
149 https://doi.org/10.1007%2F978-3-540-30551-4_74
150 https://en.wikipedia.org/wiki/ISBN_(identifier)
151 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-24131-7
152 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
153 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.101.96
154 https://en.wikipedia.org/wiki/Doi_(identifier)
155 https://doi.org/10.1016%2Fj.ipl.2007.02.010
156 https://en.wikipedia.org/wiki/Doi_(identifier)
157 https://doi.org/10.1093%2Fietisy%2Fe91-d.2.170
158 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
159 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.224.4927
160 https://en.wikipedia.org/wiki/Doi_(identifier)
161 https://doi.org/10.1007%2Fs00453-010-9411-3


20. M, G B.; B, I (2014), ”C  -
    -    ”, Dis-
crete Applied Mathematics, 164 (2): 383–399, CiteSeerX162 10.1.1.224.779163 ,
doi164 :10.1016/j.dam.2012.08.024165 .
21. M, G B.; C, D G. (2012), ”A  
        ”,
SIAM Journal on Discrete Mathematics, 26 (3): 940–963, arXiv166 :1004.4560167 ,
doi168 :10.1137/100793529169 .
22. C, D G.; K, R (2008), ”A   
 ”, SIAM Journal on Discrete Mathematics, 22 (4): 1259–1276,
doi170 :10.1137/050623498171 .
23. I, K; N, S D. (2011), ”T  
     ”172 (PDF), Algorithmica,
65: 177–205, CiteSeerX173 10.1.1.415.9996174 , doi175 :10.1007/s00453-011-9583-5176 .

108.8 External links


• ”Find the Longest Path”177 , song by Dan Barrett178

162 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
163 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.224.779
164 https://en.wikipedia.org/wiki/Doi_(identifier)
165 https://doi.org/10.1016%2Fj.dam.2012.08.024
166 https://en.wikipedia.org/wiki/ArXiv_(identifier)
167 http://arxiv.org/abs/1004.4560
168 https://en.wikipedia.org/wiki/Doi_(identifier)
169 https://doi.org/10.1137%2F100793529
170 https://en.wikipedia.org/wiki/Doi_(identifier)
171 https://doi.org/10.1137%2F050623498
172 http://www.cs.uoi.gr/~stavros/J-Papers/J-2012-ALGO.pdf
173 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
174 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.415.9996
175 https://en.wikipedia.org/wiki/Doi_(identifier)
176 https://doi.org/10.1007%2Fs00453-011-9583-5
177 http://valis.cs.uiuc.edu/~sariel/misc/funny/longestpath.mp3
178 https://en.wikipedia.org/wiki/Daniel_J._Barrett

109 Minimax

This article is about the decision theory concept. For other uses, see Minimax (disambigua-
tion)1 . Minimax (sometimes MinMax, MM[1] or saddle point[2] ) is a decision rule used
in artificial intelligence2 , decision theory3 , game theory4 , statistics5 , and philosophy6 for
minimizing the possible loss7 for a worst case (maximum loss) scenario8 . When dealing
with gains, it is referred to as ”maximin”—to maximize the minimum gain. Originally for-
mulated for two-player zero-sum9 game theory10 , covering both the cases where players take
alternate moves and those where they make simultaneous moves, it has also been extended
to more complex games and to general decision-making in the presence of uncertainty.

109.1 Game theory

109.1.1 In general games

The maximin value is the highest value that the player can be sure to get without knowing
the actions of the other players; equivalently, it is the lowest value the other players can
force the player to receive when they know the player's action. Its formal definition is:[3]
v̲i = max_{ai} min_{a−i} vi (ai , a−i )

Where:
• i is the index of the player of interest.
• −i denotes all other players except player i.
• ai is the action taken by player i.
• a−i denotes the actions taken by all other players.
• vi is the value function of player i.
Calculating the maximin value of a player is done in a worst-case approach: for each possible
action of the player, we check all possible actions of the other players and determine the
worst possible combination of actions—the one that gives player i the smallest value. Then,

1 https://en.wikipedia.org/wiki/Minimax_(disambiguation)
2 https://en.wikipedia.org/wiki/Artificial_intelligence
3 https://en.wikipedia.org/wiki/Decision_theory
4 https://en.wikipedia.org/wiki/Game_theory
5 https://en.wikipedia.org/wiki/Statistics
6 https://en.wikipedia.org/wiki/Philosophy
7 https://en.wikipedia.org/wiki/Loss_function
8 https://en.wikipedia.org/wiki/Worst-case_scenario
9 https://en.wikipedia.org/wiki/Zero-sum
10 https://en.wikipedia.org/wiki/Game_theory


we determine which action player i can take in order to make sure that this smallest value
is the highest possible.
For example, consider the following game for two players, where the first player (”row
player”) may choose any of three moves, labelled T, M, or B, and the second player (”column
player”) may choose either of two moves, L or R. The result of the combination of both moves
is expressed in a payoff table:

L R
T 3,1 2,-20
M 5,0 -10,1
B -100,2 4,4

(where the first number in each cell is the pay-out of the row player and the second number
is the pay-out of the column player).
For the sake of example, we consider only pure strategies. Check each player in turn:
• The row player can play T, which guarantees them a payoff of at least 2 (playing B is
risky since it can lead to payoff −100, and playing M can result in a payoff of −10).
Hence: v̲row = 2.
• The column player can play L and secure a payoff of at least 0 (playing R puts them in
the risk of getting −20). Hence: v̲col = 0.
If both players play their respective maximin strategies (T, L), the payoff vector is (3, 1).
The minimax value of a player is the smallest value that the other players can force the
player to receive, without knowing the player's actions; equivalently, it is the largest value
the player can be sure to get when they know the actions of the other players. Its formal
definition is:[3]
v̄i = min_{a−i} max_{ai} vi (ai , a−i )

The definition is very similar to that of the maximin value—only the order of the maximum
and minimum operators is reversed. In the above example:
• The row player can get a maximum value of 4 (if the other player plays R) or 5 (if the
other player plays L), so: v̄row = 4.
• The column player can get a maximum value of 1 (if the other player plays T), 1 (if M)
or 4 (if B). Hence: v̄col = 1.
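Both values can be computed mechanically from the payoff table above; the helper names are illustrative, and each player's payoffs are arranged with that player's own actions as rows:

```python
def maximin(payoff):
    """Maximin: the highest value a player can guarantee, committing to a
    row before seeing the opponents' (column) choice."""
    return max(min(row) for row in payoff)

def minimax_value(payoff):
    """Minimax: the smallest value the opponents can force, committing to
    a column before seeing the player's (row) choice."""
    return min(max(col) for col in zip(*payoff))

# From the table above: the row player's actions T, M, B against L, R,
# and the column player's actions L, R against T, M, B.
row_payoff = [[3, 2], [5, -10], [-100, 4]]
col_payoff = [[1, 0, 2], [-20, 1, 4]]
```

Evaluating these reproduces the values in the text: maximin 2 and minimax 4 for the row player, maximin 0 and minimax 1 for the column player.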
For every player i, the maximin value is at most the minimax value:
v̲i ≤ v̄i
Intuitively, in maximin the maximization comes before the minimization, so player i tries to
maximize their value before knowing what the others will do; in minimax the maximization
comes after the minimization, so player i is in a much better position—they maximize their
value knowing what the others did.
Another way to understand the notation is by reading from right to left: when we write
v̄i = min_{a−i} max_{ai} vi (ai , a−i ) = min_{a−i} ( max_{ai} vi (ai , a−i ) )


the initial set of outcomes vi (ai , a−i ) depends on both ai and a−i . We first marginalize
away ai from vi (ai , a−i ), by maximizing over ai (for every possible value of a−i ) to yield a
set of marginal outcomes vi′ (a−i ), which depends only on a−i . We then minimize over a−i
across these outcomes. (Conversely for maximin.)
Although it is always the case that v̲row ≤ v̄row and v̲col ≤ v̄col , the payoff vector resulting
from both players playing their minimax strategies, (2, −20) in the case of (T, R) or (−10, 1)
in the case of (M, R), cannot similarly be ranked against the payoff vector (3, 1) resulting
from both players playing their maximin strategy.

109.1.2 In zero-sum games

In two-player zero-sum games11 , the minimax solution is the same as the Nash equilibrium12 .
14 ]
In the context of zero-sum games, the minimax theorem13 is equivalent to:[4][failed verification
For every two-person, zero-sum15 game with finitely many strategies, there exists a value
V and a mixed strategy for each player, such that
(a) Given player 2's strategy, the best payoff possible for player 1 is V, and
(b) Given player 1's strategy, the best payoff possible for player 2 is −V.
Equivalently, Player 1's strategy guarantees them a payoff of V regardless of Player 2's
strategy, and similarly Player 2 can guarantee themselves a payoff of −V. The name minimax
arises because each player minimizes the maximum payoff possible for the other—since the
game is zero-sum, they also minimize their own maximum loss (i.e. maximize their minimum
payoff). See also example of a game without a value16 .

109.1.3 Example

B chooses B1 B chooses B2 B chooses B3


A chooses A1 +3 −2 +2
A chooses A2 −1 +0 +4
A chooses A3 −4 −3 +1

The following example of a zero-sum game, where A and B make simultaneous moves,
illustrates minimax solutions. Suppose each player has three choices and consider the payoff
matrix17 for A displayed above. Assume the payoff matrix for B is the same matrix
with the signs reversed (i.e. if the choices are A1 and B1 then B pays 3 to A). Then, the
minimax choice for A is A2 since the worst possible result is then having to pay 1, while
the simple minimax choice for B is B2 since the worst possible result is then no payment.

11 https://en.wikipedia.org/wiki/Zero-sum_game
12 https://en.wikipedia.org/wiki/Nash_equilibrium
13 https://en.wikipedia.org/wiki/Minimax_theorem
15 https://en.wikipedia.org/wiki/Zero-sum
16 https://en.wikipedia.org/wiki/Example_of_a_game_without_a_value
17 https://en.wikipedia.org/wiki/Payoff_matrix


However, this solution is not stable, since if B believes A will choose A2 then B will choose
B1 to gain 1; then if A believes B will choose B1 then A will choose A1 to gain 3; and then
B will choose B2; and eventually both players will realize the difficulty of making a choice.
So a more stable strategy is needed.
Some choices are dominated by others and can be eliminated: A will not choose A3 since
either A1 or A2 will produce a better result, no matter what B chooses; B will not choose B3
since some mixtures of B1 and B2 will produce a better result, no matter what A chooses.
A can avoid having to make an expected payment of more than 1∕3 by choosing A1 with
probability 1∕6 and A2 with probability 5∕6: The expected payoff for A would be 3 ×
(1∕6) − 1 × (5∕6) = −1∕3 in case B chose B1 and −2 × (1∕6) + 0 × (5∕6) = −1/3 in
case B chose B2. Similarly, B can ensure an expected gain of at least 1/3, no matter what
A chooses, by using a randomized strategy of choosing B1 with probability 1∕3 and B2 with
probability 2∕3. These mixed18 minimax strategies are now stable and cannot be improved.
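The expected-payoff arithmetic above can be checked mechanically; exact fractions avoid floating-point noise, and the function name is illustrative:

```python
from fractions import Fraction as F

# Payoff matrix for A from the example (rows A1..A3, columns B1..B3).
payoff_A = [[3, -2, 2], [-1, 0, 4], [-4, -3, 1]]

def expected_payoff(p_row, q_col):
    """Expected payoff to A when A mixes over rows with probabilities
    p_row and B mixes over columns with probabilities q_col."""
    return sum(p * q * payoff_A[i][j]
               for i, p in enumerate(p_row)
               for j, q in enumerate(q_col))

# A's mixed strategy: A1 with probability 1/6, A2 with probability 5/6.
p = [F(1, 6), F(5, 6), F(0)]
# Against either undominated pure reply of B, A's expected payoff is -1/3.
b1 = [F(1), F(0), F(0)]
b2 = [F(0), F(1), F(0)]
assert expected_payoff(p, b1) == expected_payoff(p, b2) == F(-1, 3)
```

The same check against B's mix of B1 with probability 1/3 and B2 with probability 2/3 confirms that A's expected payoff is at most −1/3 whichever pure strategy A plays.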

109.1.4 Maximin

Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-
sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game19 ,
this is identical to minimizing one's own maximum loss, and to maximizing one's own
minimum gain.
”Maximin” is a term commonly used for non-zero-sum games to describe the strategy which
maximizes one's own minimum payoff. In non-zero-sum games, this is not generally the
same as minimizing the opponent's maximum gain, nor the same as the Nash equilibrium20
strategy.

109.1.5 In repeated games

The minimax values are very important in the theory of repeated games21 . One of the
central theorems in this theory, the folk theorem22 , relies on the minimax values.

109.2 Combinatorial game theory

In combinatorial game theory23 , there is a minimax algorithm for game solutions.


A simple version of the minimax algorithm, stated below, deals with games such as tic-
tac-toe24 , where each player can win, lose, or draw. If player A can win in one move, their
best move is that winning move. If player B knows that one move will lead to the situation

18 https://en.wikipedia.org/wiki/Mixed_strategy
19 https://en.wikipedia.org/wiki/Zero-sum_game
20 https://en.wikipedia.org/wiki/Nash_equilibrium
21 https://en.wikipedia.org/wiki/Repeated_games
22 https://en.wikipedia.org/wiki/Folk_theorem_(game_theory)
23 https://en.wikipedia.org/wiki/Combinatorial_game_theory
24 https://en.wikipedia.org/wiki/Tic-tac-toe


where player A can win in one move, while another move will lead to the situation where
player A can, at best, draw, then player B's best move is the one leading to a draw. Late in
the game, it's easy to see what the ”best” move is. The Minimax algorithm helps find the
best move, by working backwards from the end of the game. At each step it assumes that
player A is trying to maximize the chances of A winning, while on the next turn player
B is trying to minimize the chances of A winning (i.e., to maximize B's own chances of
winning).

109.2.1 Minimax algorithm with alternate moves

A minimax algorithm[5] is a recursive algorithm25 for choosing the next move in an
n-player game26 , usually a two-player game. A value is associated with each position or state
of the game. This value is computed by means of a position evaluation function27 and it
indicates how good it would be for a player to reach that position. The player then makes
the move that maximizes the minimum value of the position resulting from the opponent's
possible following moves. If it is A's turn to move, A gives a value to each of their legal
moves.
One possible allocation method assigns a certain win for A the value +1 and a certain win for B the value
−1. This leads to combinatorial game theory28 as developed by John Horton Conway29 . An
alternative is using a rule that if the result of a move is an immediate win for A it is assigned
positive infinity and if it is an immediate win for B, negative infinity. The value to A of
any other move is the maximum of the values resulting from each of B's possible replies.
For this reason, A is called the maximizing player and B is called the minimizing player,
hence the name minimax algorithm. The above algorithm will assign a value of positive or
negative infinity to any position since the value of every position will be the value of some
final winning or losing position. In practice, this is only possible at the very end of
complicated games such as chess30 or go31 , since it is not computationally feasible to look
ahead as far as the completion of the game, except towards the end, and instead, positions
are given finite values as estimates of the degree of belief that they will lead to a win for
one player or another.
This can be extended if we can supply a heuristic32 evaluation function which gives values
to non-final game states without considering all possible following complete sequences. We
can then limit the minimax algorithm to look only at a certain number of moves ahead. This
number is called the ”look-ahead”, measured in ”plies33 ”. For example, the chess computer
Deep Blue34 (the first one to beat a reigning world champion, Garry Kasparov35 at that
time) looked ahead at least 12 plies, then applied a heuristic evaluation function.[6]

25 https://en.wikipedia.org/wiki/Algorithm
26 https://en.wikipedia.org/wiki/Game_theory
27 https://en.wikipedia.org/wiki/Evaluation_function
28 https://en.wikipedia.org/wiki/Combinatorial_game_theory
29 https://en.wikipedia.org/wiki/John_Horton_Conway
30 https://en.wikipedia.org/wiki/Chess
31 https://en.wikipedia.org/wiki/Go_(board_game)
32 https://en.wikipedia.org/wiki/Heuristic
33 https://en.wikipedia.org/wiki/Ply_(chess)
34 https://en.wikipedia.org/wiki/IBM_Deep_Blue
35 https://en.wikipedia.org/wiki/Garry_Kasparov

1171
Minimax

The algorithm can be thought of as exploring the nodes36 of a game tree37 . The effective
branching factor38 of the tree is the average number of children39 of each node (i.e., the
average number of legal moves in a position). The number of nodes to be explored usually
increases exponentially40 with the number of plies (it is less than exponential if evaluating
forced moves41 or repeated positions). The number of nodes to be explored for the analysis
of a game is therefore approximately the branching factor raised to the power of the number
of plies. It is therefore impractical42 to completely analyze games such as chess using the
minimax algorithm.
The performance of the naïve minimax algorithm may be improved dramatically, without
affecting the result, by the use of alpha-beta pruning43 . Other heuristic pruning methods
can also be used, but not all of them are guaranteed to give the same result as the un-pruned
search.
A naïve minimax algorithm may be trivially modified to additionally return an entire Prin-
cipal Variation44 along with a minimax score.

109.2.2 Pseudocode

The pseudocode45 for the depth limited minimax algorithm is given below.
function minimax(node, depth, maximizingPlayer) is
    if depth = 0 or node is a terminal node then
        return the heuristic value of node
    if maximizingPlayer then
        value := −∞
        for each child of node do
            value := max(value, minimax(child, depth − 1, FALSE))
        return value
    else (* minimizing player *)
        value := +∞
        for each child of node do
            value := min(value, minimax(child, depth − 1, TRUE))
        return value

(* Initial call *)
minimax(origin, depth, TRUE)
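The pseudocode translates almost directly into Python. The sketch below is an illustration, not from the source: it assumes a game tree given as nested lists, where a leaf's number is taken as its heuristic value.

```python
import math

def minimax(node, depth, maximizing_player):
    """Depth-limited minimax over a game tree given as nested lists.

    A leaf (a plain number) is treated as its own heuristic value;
    an internal node is a list of child subtrees.
    """
    if depth == 0 or not isinstance(node, list):
        return node  # heuristic value of the node
    if maximizing_player:
        value = -math.inf
        for child in node:
            value = max(value, minimax(child, depth - 1, False))
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, minimax(child, depth - 1, True))
        return value

# Initial call on a small 2-ply tree: the maximizer picks the branch
# whose worst (minimized) leaf is best.
tree = [[3, 5], [2, 9]]
print(minimax(tree, 2, True))  # 3: max(min(3, 5), min(2, 9))
```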

The minimax function returns a heuristic value for leaf nodes46 (terminal nodes and nodes
at the maximum search depth). Non leaf nodes inherit their value from a descendant
leaf node. The heuristic value is a score measuring the favorability of the node for the
maximizing player. Hence nodes resulting in a favorable outcome, such as a win, for the

36 https://en.wikipedia.org/wiki/Node_(computer_science)
37 https://en.wikipedia.org/wiki/Game_tree
38 https://en.wikipedia.org/wiki/Branching_factor
39 https://en.wikipedia.org/wiki/Child_node
40 https://en.wikipedia.org/wiki/Exponential_growth
41 https://en.wikipedia.org/wiki/Forced_move
42 https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability
43 https://en.wikipedia.org/wiki/Alpha-beta_pruning
44 https://en.wikipedia.org/wiki/Variation_(game_tree)#Principal_variation
45 https://en.wikipedia.org/wiki/Pseudocode
46 https://en.wikipedia.org/wiki/Leaf_nodes


maximizing player have higher scores than nodes more favorable for the minimizing player.
The heuristic values for terminal (game ending) leaf nodes are scores corresponding to win,
loss, or draw, for the maximizing player. For non terminal leaf nodes at the maximum search
depth, an evaluation function estimates a heuristic value for the node. The quality of this
estimate and the search depth determine the quality and accuracy of the final minimax
result.
Minimax treats the two players (the maximizing player and the minimizing player) sepa-
rately in its code. Based on the observation that max(a, b) = − min(−a, −b), minimax may
often be simplified into the negamax47 algorithm.
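Using the identity max(a, b) = −min(−a, −b), the two branches of the pseudocode collapse into one. A minimal negamax sketch follows, under the same illustrative nested-list tree convention as above, with leaf values scored from the maximizing player's viewpoint:

```python
import math

def negamax(node, depth, color):
    """Negamax: minimax simplified via max(a, b) = -min(-a, -b).

    `color` is +1 for the maximizing player and -1 for the minimizing
    player; leaf values are scored from the maximizer's viewpoint.
    """
    if depth == 0 or not isinstance(node, list):
        return color * node
    value = -math.inf
    for child in node:
        value = max(value, -negamax(child, depth - 1, -color))
    return value

print(negamax([[3, 5], [2, 9]], 2, 1))  # 3, matching plain minimax
```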

109.2.3 Example

Figure 264 A minimax tree example

47 https://en.wikipedia.org/wiki/Negamax


Figure 265 An animated pedagogical example that attempts to be human-friendly by
substituting initial infinite (or arbitrarily large) values for emptiness and by avoiding
using the negamax coding simplifications.

Suppose the game being played has a maximum of two possible moves per player each
turn. The algorithm generates the tree48 on the right, where the circles represent the moves
of the player running the algorithm (maximizing player), and squares represent the moves
of the opponent (minimizing player). Because of the limitation of computation resources,
as explained above, the tree is limited to a look-ahead of 4 moves.
The algorithm evaluates each leaf node49 using a heuristic evaluation function, obtaining
the values shown. The moves where the maximizing player wins are assigned positive
infinity, while the moves that lead to a win for the minimizing player are assigned
negative infinity. At level 3, the algorithm will choose, for each node, the smallest of the
child node50 values, and assign it to that same node (e.g. the node on the left will choose
the minimum between ”10” and ”+∞”, therefore assigning the value ”10” to itself). The next
step, in level 2, consists of choosing for each node the largest of the child node values. Once
again, the values are assigned to each parent node51 . The algorithm continues evaluating
the maximum and minimum values of the child nodes alternately until it reaches the root

48 https://en.wikipedia.org/wiki/Game_tree
49 https://en.wikipedia.org/wiki/Leaf_node
50 https://en.wikipedia.org/wiki/Child_node
51 https://en.wikipedia.org/wiki/Parent_node


node52 , where it chooses the move with the largest value (represented in the figure with
a blue arrow). This is the move that the player should make in order to minimize the
maximum possible loss53 .

109.3 Minimax for individual decisions

109.3.1 Minimax in the face of uncertainty

Minimax theory has been extended to decisions where there is no other player, but where
the consequences of decisions depend on unknown facts. For example, deciding to prospect
for minerals entails a cost which will be wasted if the minerals are not present, but will
bring major rewards if they are. One approach is to treat this as a game against nature (see
move by nature54 ), and using a similar mindset as Murphy's law55 or resistentialism56 , take
an approach which minimizes the maximum expected loss, using the same techniques as in
the two-person zero-sum games.
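The ”game against nature” framing above can be sketched in a few lines. The loss figures below are invented for illustration (suppose prospecting costs 10 and finding minerals is worth 100), as is the `minimax_decision` helper name:

```python
# Hypothetical losses for each (decision, state-of-nature) pair:
# rows are decisions, columns are "minerals present" / "absent".
losses = {
    "prospect":        {"present": -90, "absent": 10},  # cost 10, reward 100
    "do_not_prospect": {"present": 0,   "absent": 0},
}

def minimax_decision(losses):
    """Pick the decision that minimizes the maximum loss over all
    states of nature (the pessimistic, Murphy's-law criterion)."""
    return min(losses, key=lambda d: max(losses[d].values()))

print(minimax_decision(losses))  # "do_not_prospect": worst case 0 beats 10
```

Note how the criterion ignores how likely each state of nature is; an expected-value analysis with a high enough probability of minerals would instead favour prospecting.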
In addition, expectiminimax trees57 have been developed, for two-player games in which
chance (for example, dice) is a factor.

109.3.2 Minimax criterion in statistical decision theory

Main article: Minimax estimator58
In classical statistical decision theory59 , we have an estimator60 δ that is used to estimate
a parameter61 θ ∈ Θ. We also assume a risk function62
R(θ, δ), usually specified as the integral of a loss function63 . In this framework, δ̃ is called
minimax if it satisfies
sup_θ R(θ, δ̃) = inf_δ sup_θ R(θ, δ).

An alternative criterion in the decision theoretic framework is the Bayes estimator64 in the
presence of a prior distribution65 Π. An estimator is Bayes if it minimizes the average66 risk

∫_Θ R(θ, δ) dΠ(θ).

52 https://en.wikipedia.org/wiki/Root_node
53 https://en.wikipedia.org/wiki/Loss_function
54 https://en.wikipedia.org/wiki/Move_by_nature
55 https://en.wikipedia.org/wiki/Murphy%27s_law
56 https://en.wikipedia.org/wiki/Resistentialism
57 https://en.wikipedia.org/wiki/Expectiminimax_tree
58 https://en.wikipedia.org/wiki/Minimax_estimator
59 https://en.wikipedia.org/wiki/Decision_theory
60 https://en.wikipedia.org/wiki/Estimator
61 https://en.wikipedia.org/wiki/Parameter
62 https://en.wikipedia.org/wiki/Risk_function
63 https://en.wikipedia.org/wiki/Loss_function
64 https://en.wikipedia.org/wiki/Bayes_estimator
65 https://en.wikipedia.org/wiki/Prior_distribution
66 https://en.wikipedia.org/wiki/Average


109.3.3 Non-probabilistic decision theory

A key feature of minimax decision making is being non-probabilistic: in contrast to decisions
using expected value67 or expected utility68 , it makes no assumptions about the probabilities
of various outcomes, just scenario analysis69 of what the possible outcomes are. It is thus
robust70 to changes in the assumptions, as these other decision techniques are not. Various
extensions of this non-probabilistic approach exist, notably minimax regret71 and Info-gap
decision theory72 .
Further, minimax only requires ordinal measurement73 (that outcomes be compared and
ranked), not interval measurements (that outcomes include ”how much better or worse”),
and returns ordinal data, using only the modeled outcomes: the conclusion of a minimax
analysis is: ”this strategy is minimax, as the worst case is (outcome), which is less bad than
any other strategy”. Compare to expected value analysis, whose conclusion is of the form:
”this strategy yields E(X)=n.” Minimax thus can be used on ordinal data, and can be more
transparent.

109.4 Maximin in philosophy

In philosophy, the term ”maximin” is often used in the context of John Rawls74 's A Theory
of Justice75 , where he refers to it (Rawls 1971, p. 152) in the context of The Difference
Principle76 . Rawls defined this principle as the rule which states that social and economic
inequalities should be arranged so that ”they are to be of the greatest benefit to the least-
advantaged members of society”.[7][8]

109.5 See also


• Alpha-beta pruning77
• Expectiminimax78
• Negamax79
• Sion's minimax theorem80
• Minimax Condorcet81

67 https://en.wikipedia.org/wiki/Expected_value
68 https://en.wikipedia.org/wiki/Expected_utility
69 https://en.wikipedia.org/wiki/Scenario_analysis
70 https://en.wiktionary.org/wiki/robust
71 https://en.wikipedia.org/wiki/Minimax_regret
72 https://en.wikipedia.org/wiki/Info-gap_decision_theory
73 https://en.wikipedia.org/wiki/Ordinal_measurement
74 https://en.wikipedia.org/wiki/John_Rawls
75 https://en.wikipedia.org/wiki/A_Theory_of_Justice
76 https://en.wikipedia.org/wiki/Difference_Principle
77 https://en.wikipedia.org/wiki/Alpha-beta_pruning
78 https://en.wikipedia.org/wiki/Expectiminimax
79 https://en.wikipedia.org/wiki/Negamax
80 https://en.wikipedia.org/wiki/Sion%27s_minimax_theorem
81 https://en.wikipedia.org/wiki/Minimax_Condorcet


• Computer chess82
• Horizon effect83
• Monte Carlo tree search84
• Minimax regret85
• Negascout86
• Tit for Tat87
• Transposition table88
• Wald's maximin model89

109.6 Notes
1. Provincial Healthcare Index 201390 (Bacchus Barua, Fraser Institute, January 2013;
see page 25)
2. Turing and von Neumann - Professor Raymond Flood - Gresham College at 12:0091
3. M M, E S92 & S Z (2013). Game Theory.
Cambridge University Press93 . pp. 176–180. ISBN94 978110700548895 .CS1 maint:
uses authors parameter (link96 )
4. Osborne, Martin J., and Ariel Rubinstein97 . A Course in Game Theory. Cambridge,
MA: MIT, 1994. Print.
5. R, S J.98 ; N, P99 (2003), Artificial Intelligence: A Mod-
ern Approach100 (2 .), U S R, N J: P H,
. 163–171, ISBN101 0-13-790395-2102
6. H, F-H (1999), ”IBM' D B C G C”,
IEEE Micro, Los Alamitos, CA, USA: IEEE Computer Society, 19 (2): 70–81,
doi103 :10.1109/40.755469104 , During the 1997 match, the software search extended the

82 https://en.wikipedia.org/wiki/Computer_chess
83 https://en.wikipedia.org/wiki/Horizon_effect
84 https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
85 https://en.wikipedia.org/wiki/Regret_(decision_theory)
86 https://en.wikipedia.org/wiki/Negascout
87 https://en.wikipedia.org/wiki/Tit_for_Tat
88 https://en.wikipedia.org/wiki/Transposition_table
89 https://en.wikipedia.org/wiki/Wald%27s_maximin_model
90 http://www.fraserinstitute.org/uploadedFiles/fraser-ca/Content/research-news/
research/publications/provincial-healthcare-index-2013.pdf
91 https://www.youtube.com/watch?v=fJltiCjPeMA&t=12m0s
92 https://en.wikipedia.org/wiki/Eilon_Solan
93 https://en.wikipedia.org/wiki/Cambridge_University_Press
94 https://en.wikipedia.org/wiki/ISBN_(identifier)
95 https://en.wikipedia.org/wiki/Special:BookSources/9781107005488
97 https://en.wikipedia.org/wiki/Ariel_Rubinstein
98 https://en.wikipedia.org/wiki/Stuart_J._Russell
99 https://en.wikipedia.org/wiki/Peter_Norvig
100 http://aima.cs.berkeley.edu/
101 https://en.wikipedia.org/wiki/ISBN_(identifier)
102 https://en.wikipedia.org/wiki/Special:BookSources/0-13-790395-2
103 https://en.wikipedia.org/wiki/Doi_(identifier)
104 https://doi.org/10.1109%2F40.755469


search to about 40 plies along the forcing lines, even though the nonextended search
reached only about 12 plies.
7. Arrow105 , ”Some Ordinalist-Utilitarian Notes on Rawls's Theory of Justice”106 , Journal
of Philosophy 70, 9 (May 1973), pp. 245–263.
8. Harsanyi107 , ”Can the Maximin Principle Serve as a Basis for Morality? A Critique
of John Rawls's Theory”108 , American Political Science Review 69, 2 (June 1975), pp.
594–606.

109.7 External links

Look up minimax109 in Wiktionary, the free dictionary.

Wikiquote has quotations related to: Minimax110

• H, M111 , . (2001) [1994], ”M ”112 , Encyclopedia


of Mathematics113 , S S+B M B.V. / K A
P, ISBN114 978-1-55608-010-4115
• A visualization applet116
• Maximin principle117 at Dictionary of Philosophical Terms and Names
• Play a betting-and-bluffing game against a mixed minimax strategy118
• Minimax119 at Dictionary of Algorithms and Data Structures120
• Minimax121 (with or without alpha-beta pruning) algorithm visualization — game tree
solving (Java Applet), for balance or off-balance trees.
• Minimax Tutorial with a Numerical Solution Platform122
• Java implementation used in a Checkers Game123

105 https://en.wikipedia.org/wiki/Kenneth_Arrow
106 https://www.pdcnet.org/jphil/content/jphil_1973_0070_0009_0245_0263
107 https://en.wikipedia.org/wiki/John_Harsanyi
108 http://piketty.pse.ens.fr/files/Harsanyi1975.pdf
109 https://en.wiktionary.org/wiki/Special:Search/minimax
110 https://en.wikiquote.org/wiki/Special:Search/Minimax
111 https://en.wikipedia.org/wiki/Michiel_Hazewinkel
112 https://www.encyclopediaofmath.org/index.php?title=p/m063950
113 https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics
114 https://en.wikipedia.org/wiki/ISBN_(identifier)
115 https://en.wikipedia.org/wiki/Special:BookSources/978-1-55608-010-4
116 http://www.cut-the-knot.org/Curriculum/Games/MixedStrategies.shtml
117 https://web.archive.org/web/20060307183023/http://www.swif.uniba.it/lei/foldop/
foldoc.cgi?maximin+principle
118 http://www.bewersdorff-online.de/quaak/rules.htm
119 https://xlinux.nist.gov/dads/HTML/minimax.html
120 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
121 http://ksquared.de/gamevisual/launch.php
122 http://apmonitor.com/me575/index.php/Main/MiniMax
123 https://github.com/ykaragol/checkersmaster/blob/master/CheckersMaster/src/checkers/
algorithm/MinimaxAlgorithm.java


Topics in game theory

110 Minimum cut

Figure 266 A graph and two of its cuts. The dotted line in red represents a cut with
three crossing edges. The dashed line in green represents one of the minimum cuts of this
graph, crossing only two edges.[1]

In graph theory1 , a minimum cut or min-cut of a graph2 is a cut3 (a partition4 of the
vertices of a graph into two disjoint subsets) that is minimal in some sense.
Variations of the minimum cut problem consider weighted graphs, directed graphs, termi-
nals, and partitioning the vertices into more than two sets.

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
3 https://en.wikipedia.org/wiki/Cut_(graph_theory)
4 https://en.wikipedia.org/wiki/Partition_of_a_set


110.1 Without terminal nodes

The minimum cut problem in undirected5 , weighted graphs can be solved in polynomial
time by the Stoer-Wagner algorithm6 . In the special case when the graph is unweighted,
Karger's algorithm7 provides an efficient randomized method for finding the cut. In this
case, the minimum cut equals the edge connectivity8 of the graph.
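Karger's contraction idea can be sketched in a few lines: repeatedly contract a random edge until two super-vertices remain, and repeat the experiment many times. This is an unoptimized illustration under our own conventions (edge-list input, unweighted graph), not the published algorithm's exact form:

```python
import random

def karger_min_cut(edges, trials=200, seed=0):
    """Estimate the (unweighted) minimum cut by random contraction.

    Each trial contracts random edges (tracked with union-find) until
    two groups remain; the edges crossing the groups form a cut.
    Enough independent trials make finding a minimum cut very likely.
    """
    rng = random.Random(seed)
    best = len(edges) + 1
    for _ in range(trials):
        parent = {v: v for e in edges for v in e}

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        remaining = len(parent)
        while remaining > 2:
            u, v = rng.choice(edges)
            ru, rv = find(u), find(v)
            if ru != rv:          # skip edges already contracted away
                parent[ru] = rv
                remaining -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best

# A 4-cycle plus one chord: the minimum cut has 2 edges.
print(karger_min_cut([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```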
A generalization of the minimum cut problem without terminals is the minimum k-cut9 , in
which the goal is to partition the graph into at least k connected components by removing
as few edges as possible. For a fixed value of k, this problem can be solved in polynomial
time, though the algorithm is not practical for large k.[2]

110.2 With terminal nodes

When two terminal nodes are given, they are typically referred to as the source and the
sink. In a directed, weighted flow network10 , the minimum cut separates the source and
sink vertices and minimizes the total weight on the edges that are directed from the source
side of the cut to the sink side of the cut. As shown in the max-flow min-cut theorem11 , the
weight of this cut equals the maximum amount of flow that can be sent from the source to
the sink in the given network.
In a weighted, undirected network, it is possible to calculate the cut that separates a par-
ticular pair of vertices from each other and has minimum possible weight. A system of
cuts that solves this problem for every possible vertex pair can be collected into a structure
known as the Gomory–Hu tree12 of the graph.
A generalization of the minimum cut problem with terminals is the k-terminal cut, or
multiterminal cut. This problem is NP-hard13 , even for k = 3.[3]

110.3 Applications

Graph partition14 problems are a family of combinatorial optimization problems in which
a graph is to be partitioned into two or more parts with additional constraints such as
balancing the sizes of the two sides of the cut.

5 https://en.wikipedia.org/wiki/Undirected_graph
6 https://en.wikipedia.org/wiki/Stoer-Wagner_algorithm
7 https://en.wikipedia.org/wiki/Karger%27s_algorithm
8 https://en.wikipedia.org/wiki/K-edge-connected_graph
9 https://en.wikipedia.org/wiki/Minimum_k-cut
10 https://en.wikipedia.org/wiki/Flow_network
11 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
12 https://en.wikipedia.org/wiki/Gomory%E2%80%93Hu_tree
13 https://en.wikipedia.org/wiki/NP-hardness
14 https://en.wikipedia.org/wiki/Graph_partition


By the max-flow min-cut theorem15 , the minimum cut value between two nodes is equal
to their maximum flow16 value. Consequently, algorithms for the maximum flow problem
can also be used to solve this problem.
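That observation can be made concrete with a short Edmonds-Karp sketch (BFS augmenting paths). The network and capacities below are invented for illustration; by the max-flow min-cut theorem the returned flow value equals the weight of a minimum s-t cut:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict cap[u][v]; the returned value
    is also the weight of a minimum s-t cut."""
    # Build residual capacities, adding zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path s -> t.
        prev = {s: None}
        q = deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in prev:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return flow
        # Find the bottleneck along the path, then update residuals.
        path = []
        v = t
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

capacities = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
print(max_flow(capacities, "s", "t"))  # 4: cut {(s,b), (a,t)} has weight 4
```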

110.4 Number of minimum cuts


A graph with n vertices can have at most (n choose 2) = n(n − 1)/2 distinct minimum cuts.
This bound is tight in the sense that a (simple) cycle17 on n vertices has exactly n(n − 1)/2
minimum cuts.
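The bound can be checked by brute force on small graphs. The sketch below (our own illustration; it enumerates every vertex bipartition, so it is exponential and only for tiny examples) counts the distinct minimum cuts of a cycle:

```python
from itertools import combinations

def min_cuts(n, edges):
    """Return the set of minimum cuts of a small graph on vertices
    0..n-1, each cut represented as a frozenset of crossing edges."""
    cuts = []
    for k in range(1, n):
        for side in combinations(range(n), k):
            s = set(side)
            crossing = frozenset(e for e in edges
                                 if (e[0] in s) != (e[1] in s))
            cuts.append(crossing)
    smallest = min(len(c) for c in cuts)
    return {c for c in cuts if len(c) == smallest}

# A simple cycle on n vertices has exactly n(n - 1)/2 minimum cuts:
# every pair of its n edges separates the cycle into two arcs.
n = 5
cycle = [(i, (i + 1) % n) for i in range(n)]
print(len(min_cuts(n, cycle)))  # 10 == 5 * 4 / 2
```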

110.5 See also


• Maximum cut18
• Vertex separator19 , an analogous concept to minimum cuts for vertices instead of edges

110.6 References
1. ”4 M-C A”20 .
2. ”A P A   - P  F ”21 . Cite
journal requires |journal= (help22 )
3. ”T C  M C”23 (PDF). Cite journal requires
|journal= (help24 )


15 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
16 https://en.wikipedia.org/wiki/Maxflow
17 https://en.wikipedia.org/wiki/Cycle_(graph_theory)#Definitions
18 https://en.wikipedia.org/wiki/Maximum_cut
19 https://en.wikipedia.org/wiki/Vertex_separator
20 http://www.cs.dartmouth.edu/~ac/Teach/CS105-Winter05/Notes/loomis-scribe.ps
21 https://pubsonline.informs.org/doi/pdf/10.1287/moor.19.1.24
23 https://pdfs.semanticscholar.org/17ff/d84480267785c6a9987211a8a86a58cea1a9.pdf

111 Nearest neighbour algorithm

This article is about an approximation algorithm to solve the travelling salesman problem1 .
For other uses, see Nearest neighbor2 .

Nearest neighbour algorithm


Class Approximation
algorithm
Data structure Graph
Worst-case perfor- Θ(N 2 )
mance
Worst-case space Θ(N )
complexity

The nearest neighbour algorithm was one of the first algorithms3 used to solve the
travelling salesman problem4 approximately. In that problem, the salesman starts at a
random city and repeatedly visits the nearest city until all have been visited. The algorithm
quickly yields a short tour, but usually not the optimal one.

111.1 Algorithm

These are the steps of the algorithm:


1. Initialize all vertices as unvisited.
2. Select an arbitrary vertex, set it as the current vertex u. Mark u as visited.
3. Find out the shortest edge connecting the current vertex u and an unvisited vertex v.
4. Set v as the current vertex u. Mark v as visited.
5. If all the vertices in the domain are visited, then terminate. Else, go to step 3.
The sequence of the visited vertices is the output of the algorithm.
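The steps above can be sketched directly in Python. This is a hedged illustration that assumes points in the plane with Euclidean distances (the function name is ours):

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy nearest-neighbour tour over 2-D points.

    Follows the five steps above: mark the start visited, then
    repeatedly hop to the nearest unvisited point until none remain.
    The returned list is the sequence of visited vertices.
    """
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        u = tour[-1]
        v = min(unvisited, key=lambda w: math.dist(points[u], points[w]))
        tour.append(v)
        unvisited.remove(v)
    return tour

# Four points on a unit square: the greedy tour walks around the square.
print(nearest_neighbour_tour([(0, 0), (0, 1), (1, 1), (1, 0)]))
```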
The nearest neighbour algorithm is easy to implement and executes quickly, but it can
sometimes miss shorter routes which are easily noticed with human insight, due to its
”greedy” nature. As a general guide, if the last few stages of the tour are comparable in
length to the first stages, then the tour is reasonable; if they are much greater, then it is
likely that much better tours exist. Another check is to use an algorithm such as the lower
bound5 algorithm to estimate if this tour is good enough.

1 https://en.wikipedia.org/wiki/Travelling_salesman_problem
2 https://en.wikipedia.org/wiki/Nearest_neighbor_(disambiguation)
3 https://en.wikipedia.org/wiki/Algorithm
4 https://en.wikipedia.org/wiki/Travelling_salesman_problem
5 https://en.wikipedia.org/wiki/Upper_and_lower_bounds


In the worst case, the algorithm results in a tour that is much longer than the optimal tour.
To be precise, for every constant r there is an instance of the traveling salesman problem
such that the length of the tour computed by the nearest neighbour algorithm is greater
than r times the length of the optimal tour. Moreover, for each number of cities there is an
assignment of distances between the cities for which the nearest neighbor heuristic produces
the unique worst possible tour. (If the algorithm is applied on every vertex as the starting
vertex, the best path found will be better than at least N/2 − 1 other tours, where N is the
number of vertices.)[1]
The nearest neighbour algorithm may not find a feasible tour at all, even when one exists.

111.2 Notes
1. G. Gutin, A. Yeo and A. Zverovich, 2002

111.3 References
• G. Gutin, A. Yeo and A. Zverovich, Traveling salesman should not be greedy: domination
analysis of greedy-type heuristics for the TSP6 . Discrete Applied Mathematics 117 (2002),
81–86.
• J. Bang-Jensen, G. Gutin and A. Yeo, When the greedy algorithm fails7 . Discrete Opti-
mization 1 (2004), 121–127.
• G. Bendall and F. Margot, Greedy Type Resistance of Combinatorial Problems8 , Discrete
Optimization 3 (2006), 288–298.

6 http://www.sciencedirect.com/science/article/pii/S0166218X01001950
7 http://www.sciencedirect.com/science/article/pii/S1572528604000222
8 http://www.sciencedirect.com/science/article/pii/S1572528606000430

112 Nonblocking minimal spanning
switch




Figure 269 A substitute for a 16x16 crossbar switch made from 12 4x4 crossbar
switches.

A nonblocking minimal spanning switch is a device that can connect N inputs to N
outputs in any combination. The most familiar use of switches of this type is in a telephone
exchange11 . The term ”non-blocking” means that if it is not defective, it can always make
the connection. The term ”minimal” means that it has the fewest possible components, and
therefore the minimal expense.
Historically, in telephone switches, connections between callers were arranged with large,
expensive banks of electromechanical relays12 , Strowger switches13 . The basic mathematical

11 https://en.wikipedia.org/wiki/Telephone_exchange
12 https://en.wikipedia.org/wiki/Relay
13 https://en.wikipedia.org/wiki/Strowger_switch


property of Strowger switches is that for each input to the switch, there is exactly one output.
Much of the mathematical switching circuit theory14 attempts to use this property to reduce
the total number of switches needed to connect a combination of inputs to a combination
of outputs.
In the 1940s and 1950s, engineers in Bell Laboratories15 began an extended series of math-
ematical investigations into methods for reducing the size and expense of the ”switched
fabric16 ” needed to implement a telephone exchange. One early, successful mathematical
analysis was performed by Charles Clos (French pronunciation: [ʃaʁl klo]17 ), and a switched
fabric18 constructed of smaller switches is called a Clos network19 .[1]

112.1 Background: switching topologies

112.1.1 The crossbar switch

The crossbar switch20 has the property of being able to connect N inputs to N outputs in any
one-to-one21 combination, so it can connect any caller to any non-busy receiver, a property
given the technical term ”nonblocking”. Being nonblocking, it could always complete a call
(to a non-busy receiver), which would maximize service availability.
However, the crossbar switch does so at the expense of using N² (N squared) simple SPST
switches22 . For large N (and the practical requirements of a phone switch are considered
large) this growth was too expensive. Further, large crossbar switches had physical prob-
lems. Not only did the switch require too much space, but the metal bars containing the
switch contacts would become so long that they would sag and become unreliable. Engi-
neers also noticed that at any time, each bar of a crossbar switch was only making a single
connection. The other contacts on the two bars were unused. This seemed to imply that
most of the switching fabric of a crossbar switch was wasted.
The obvious way to emulate a crossbar switch was to find some way to build it from smaller
crossbar switches. If a crossbar switch could be emulated by some arrangement of smaller
crossbar switches, then these smaller crossbar switches could also, in turn be emulated
by even smaller crossbar switches. The switching fabric could become very efficient, and
possibly even be created from standardized parts. This is called a Clos network23 .

14 https://en.wikipedia.org/wiki/Switching_circuit_theory
15 https://en.wikipedia.org/wiki/Bell_Laboratories
16 https://en.wikipedia.org/wiki/Switched_fabric
17 https://en.wikipedia.org/wiki/Help:IPA/French
18 https://en.wikipedia.org/wiki/Switched_fabric
19 https://en.wikipedia.org/wiki/Clos_network
20 https://en.wikipedia.org/wiki/Crossbar_switch
21 https://en.wikipedia.org/wiki/Bijection
22 https://en.wikipedia.org/wiki/Switch#Contact_terminology
23 https://en.wikipedia.org/wiki/Clos_network


112.1.2 Completely connected 3-layer switches

The next approach was to break apart the crossbar switch into three layers of smaller
crossbar switches. There would be an ”input layer”, a ”middle layer” and an ”output layer.”
The smaller switches are less massive, more reliable, and generally easier to build, and
therefore less expensive.
A telephone system only has to make a one-to-one connection. Intuitively this seems to
mean that the number of inputs and the number of outputs can always be equal in each
subswitch, but intuition does not prove this can be done nor does it tell us how to do
so. Suppose we want to synthesize a 16 by 16 crossbar switch. The design could have
4 subswitches on the input side, each with 4 inputs, for 16 total inputs. Further, on the
output side, we could also have 4 output subswitches, each with 4 outputs, for a total of
16 outputs. It is desirable that the design use as few wires as possible, because wires cost
real money. The least possible number of wires that can connect two subswitches is a single
wire. So, each input subswitch will have a single wire to each middle subswitch. Also, each
middle subswitch will have a single wire to each output subswitch.
The question is how many middle subswitches are needed, and therefore how many total
wires should connect the input layer to the middle layer. Since telephone switches are
symmetric (callers and callees are interchangeable), the same logic will apply to the output
layer, and the middle subswitches will be ”square”, having the same number of inputs as
outputs.
The number of middle subswitches depends on the algorithm used to allocate connection
to them. The basic algorithm for managing a three-layer switch is to search the middle
subswitches for a middle subswitch that has unused wires to the needed input and output
switches. Once a connectible middle subswitch is found, connecting to the correct inputs
and outputs in the input and output switches is trivial.
Theoretically, in the example, only four central switches are needed, each with exactly
one connection to each input switch and one connection to each output switch. This is
called a ”minimal spanning switch,” and managing it was the holy grail of the Bell Labs'
investigations.
However, a bit of work with a pencil and paper will show that it is easy to get such a
minimal switch into conditions in which no single middle switch has a connection to both
the needed input switch and the needed output switch. It only takes four calls to partially
block the switch. If an input switch is half-full, it has connections via two middle switches.
If an output switch is also half full with connections from the other two middle switches,
then there is no remaining middle switch which can provide a path between that input and
output.
For this reason, a ”simply connected nonblocking” 16×16 switch with four input
subswitches and four output subswitches was thought to require 7 middle switches; in the
worst case an almost-full input subswitch would use three middle switches, an almost-full
output subswitch would use three different ones, and the seventh would be guaranteed to
be free to make the last connection. Such an arrangement is therefore sometimes
called a ”2n−1 switch”, where n is the number of input ports of each input subswitch.
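The blocking condition described above can be checked mechanically. The following Python sketch is our own illustrative model (not Bell's implementation): with four middle switches, a half-full input subswitch and a half-full output subswitch that use the other two middle switches leave no middle switch able to complete the call.

```python
# Illustrative model: used_by_input / used_by_output are the sets of middle
# subswitches already carrying a connection for the relevant input or output
# subswitch. A call can only be completed through a middle subswitch that is
# free on both sides.
def free_middles(used_by_input, used_by_output, n_middles):
    """Middle subswitches with an unused wire to both the input and output side."""
    return set(range(n_middles)) - used_by_input - used_by_output

# Four middle switches; the input subswitch is half-full via middles {0, 1},
# the output subswitch is half-full via the other two middles {2, 3}:
print(free_middles({0, 1}, {2, 3}, 4))  # -> set(): the new call is blocked
```

With only four calls placed, no fifth middle switch exists to carry the new connection, which is exactly the partial-blocking scenario sketched with pencil and paper above.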


The example is intentionally small, and in such a small example, the reorganization does not
save many switches. A 16×16 crossbar has 256 contacts, while a 16×16 minimal spanning
switch has 4×4×4×3 = 192 contacts.
As the numbers get larger, the savings increase. For example, a 10,000 line exchange would
need 100 million contacts to implement a full crossbar. But three layers of 100 100×100
subswitches would use only 300 subswitches of 10,000 contacts each, or 3 million contacts.
Those subswitches could in turn each be made of 3×10 10×10 crossbars, a total of 3000
contacts, making 900,000 for the whole exchange; that is a far smaller number than 100
million.
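These contact counts can be verified with a few lines of arithmetic. The sketch below (function names are ours) recomputes the figures quoted above: an n×n crossbar needs n² contacts, while a three-layer design needs three layers of square subswitches.

```python
# Contact-count arithmetic from the text, recomputed.
def crossbar_contacts(n):
    # a full n-by-n crossbar needs one contact per input/output pair
    return n * n

def three_stage_contacts(subswitches_per_layer, subswitch_size):
    # three layers, each of `subswitches_per_layer` square crossbars
    return 3 * subswitches_per_layer * crossbar_contacts(subswitch_size)

print(crossbar_contacts(16))              # 256 for the 16x16 crossbar
print(three_stage_contacts(4, 4))         # 192 for the minimal spanning switch
print(crossbar_contacts(10_000))          # 100,000,000 for a full 10,000-line crossbar
print(three_stage_contacts(100, 100))     # 3,000,000 with 100x100 subswitches
print(300 * three_stage_contacts(10, 10)) # 900,000 when each subswitch is built
                                          # from three layers of 10x10 crossbars
```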

112.1.3 Managing a minimal spanning switch

The crucial discovery was a way to reorganize connections in the middle switches to ”trade
wires” so that a new connection could be completed.
The first step is to find an unused link from the input subswitch to a middle-layer subswitch
(which we shall call A), and an unused link from a middle-layer subswitch (which we shall
call B) to the desired output subswitch. Since, prior to the arrival of the new connection,
the input and output subswitches each had at least one unused connection, both of these
unused links must exist.
If A and B happen to be the same middle-layer switch, then the connection can be made
immediately just as in the ”2n−1” switch case. However, if A and B are different middle-
layer subswitches, more work is required. The algorithm finds a new arrangement of the
connections through the middle subswitches A and B which includes all of the existing
connections, plus the desired new connection.
Make a list of all of the desired connections that pass through A or B. That is, all of
the existing connections to be maintained and the new connection. The algorithm proper
only cares about the internal connections from input to output switch, although a practical
implementation also has to keep track of the correct input and output switch connections.
In this list, each input subswitch can appear in at most two connections: one to subswitch A,
and one to subswitch B. The options are zero, one, or two. Likewise, each output subswitch
appears in at most two connections.
Each connection is linked to at most two others by a shared input or output subswitch,
forming one link in a ”chain” of connections.
Next, begin with the new connection. Assign it the path from its input subswitch, through
middle subswitch A, to its output subswitch. If this first connection's output subswitch has
a second connection, assign that second connection a path from its input subswitch through
subswitch B. If that input subswitch has another connection, assign that third connection
a path through subswitch A. Continue back and forth in this manner, alternating between
middle subswitches A and B. Eventually one of two things must happen:
1. the chain terminates in a subswitch with only one connection, or
2. the chain loops back to the originally chosen connection.


In the first case, go back to the new connection's input subswitch and follow its chain
backward, assigning connections to paths through middle subswitches B and A in the same
alternating pattern.
When this is done, each input or output subswitch in the chain has at most two connections
passing through it, and they are assigned to different middle switches. Thus, all the required
links are available.
There may be additional connections through subswitches A and B which are not part of
the chain including the new connection; those connections may be left as-is.
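The chain-repainting procedure above can be sketched in code. This is a simplified model of our own (connections are bare (input, output) subswitch pairs, assumed distinct): it reassigns every connection in the new connection's chain alternately to middle subswitches A and B, walking forward from the new connection's output side and then, if the chain did not close into a loop, backward from its input side.

```python
from itertools import cycle

def reroute(existing, new):
    """existing: (input_sw, output_sw) pairs currently routed through middle
    subswitch A or B; new: the connection to add. Returns a dict assigning
    every connection in new's chain to 'A' or 'B'."""
    conns = list(existing) + [new]
    assign = {new: 'A'}                  # the new connection goes through A

    def neighbor(conn, end, exclude):
        # the other connection sharing conn's subswitch on side `end`
        # (0 = input subswitch, 1 = output subswitch), if any
        for c in conns:
            if c != conn and c != exclude and c[end] == conn[end]:
                return c
        return None

    # Forward along the output side (B, A, B, ...), then backward along the
    # input side in the same alternating pattern. In the loop case the second
    # pass merely re-walks the cycle, producing the same consistent labels.
    for first_end in (1, 0):
        labels = cycle('BA')
        end = first_end
        prev, cur = new, neighbor(new, first_end, None)
        while cur is not None and cur != new:
            assign[cur] = next(labels)
            end = 1 - end                # arrived on one side, leave on the other
            prev, cur = cur, neighbor(cur, end, prev)
    return assign

# Input subswitch 0 and output subswitch 1 are each shared by two connections;
# the repainting gives every shared subswitch one path through A and one through B.
print(reroute([(0, 1), (2, 1), (2, 3)], (0, 0)))
```

After the call, each input or output subswitch in the chain carries at most one connection through A and one through B, so all the required links are available, matching the argument above.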
After the new connection pattern is designed in the software, then the electronics of the
switch can actually be reprogrammed, physically moving the connections. The electronic
switches are designed internally so that the new configuration can be written into the
electronics without disturbing the existing connection, and then take effect with a single
logic pulse. The result is that the connection moves instantaneously, with an imperceptible
interruption to the conversation. In older electromechanical switches, one occasionally heard
a clank of ”switching noise.”
This algorithm is a form of topological sort24 , and is the heart of the algorithm that controls
a minimal spanning switch.

112.2 Practical implementations of switches

As soon as the algorithm was discovered, Bell system engineers and managers began dis-
cussing it. After several years, Bell engineers began designing electromechanical switches
that could be controlled by it. At the time, computers used tubes25 and were not reliable
enough to control a phone system (phone system switches are safety-critical, and they are
designed to have an unplanned failure about once per thirty years). Relay26 -based com-
puters were too slow to implement the algorithm. However, the entire system could be
designed so that when computers were reliable enough, they could be retrofitted to existing
switching systems.
It's not difficult to make composite switches fault-tolerant27 . When a subswitch fails, the
callers simply redial. So, on each new connection, the software tries the next free connection
in each subswitch rather than reusing the most recently released one. The new connection
is more likely to work because it uses different circuitry.
Therefore, in a busy switch, when a particular PCB lacks any connections, it is an excellent
candidate for testing.
To test or remove a particular printed circuit card from service, there is a well-known
algorithm. As fewer connections pass through the card's subswitch, the software routes
more test signals through the subswitch to a measurement device, and then reads the
measurement. This does not interrupt old calls, which remain working.

24 https://en.wikipedia.org/wiki/Topological_sort
25 https://en.wikipedia.org/wiki/Vacuum_tube
26 https://en.wikipedia.org/wiki/Relay
27 https://en.wikipedia.org/wiki/Fault-tolerant


If a test fails, the software isolates the exact circuit board by reading the failure from several
external switches. It then marks the free circuits in the failing circuitry as busy. As calls
using the faulty circuitry are ended, those circuits are also marked busy. Some time later,
when no calls pass through the faulty circuitry, the computer lights a light on the circuit
board that needs replacement, and a technician can replace the circuit board. Shortly after
replacement, the next test succeeds, the connections to the repaired subswitch are marked
”not busy,” and the switch returns to full operation.
The diagnostics on Bell's early electronic switches would actually light a green light on each
good printed circuit board, and light a red light on each failed printed circuit board. The
printed circuits were designed so that they could be removed and replaced without turning
off the whole switch.
The eventual result was the Bell 1ESS28 . This was controlled by a CPU called the Central
Control (CC), a lock-step29 , Harvard architecture30 dual computer using reliable diode–
transistor logic31 . In the 1ESS CPU, two computers performed each step, checking each
other. When they disagreed, they would diagnose themselves, and the correctly running
computer would take up switch operation while the other would disqualify itself and request
repair. The 1ESS switch was still in limited use as of 2012, and had a verified reliability
of less than one unscheduled hour of failure in each thirty years of operation, validating its
design.
Initially it was installed on long distance trunks in major cities, the most heavily used parts
of each telephone exchange. On the first Mother's Day that major cities operated with it,
the Bell system set a record for total network capacity, both in calls completed, and total
calls per second per switch. This resulted in a record for total revenue per trunk.

112.3 Digital switches

A practical implementation of a switch can be created from an odd number of layers of
smaller subswitches. Conceptually, the crossbar switches of the three-stage switch can each
be further decomposed into smaller crossbar switches. Although each subswitch has limited
multiplexing capability, working together they synthesize the effect of a larger N×N crossbar
switch.
In a modern digital telephone switch, application of two different multiplexer approaches in
alternate layers further reduces the cost of the switching fabric:
1. space-division multiplexers32 are something like the crossbar switches33 already de-
scribed, or some arrangement of crossover switches34 or banyan switches35 . Any single
output can select from any input. In digital switches, this is usually an arrangement of

28 https://en.wikipedia.org/wiki/1ESS
29 https://en.wikipedia.org/wiki/Lockstep_(computing)
30 https://en.wikipedia.org/wiki/Harvard_architecture
31 https://en.wikipedia.org/wiki/Diode%E2%80%93transistor_logic
32 https://en.wikipedia.org/wiki/Multiplexer
33 https://en.wikipedia.org/wiki/Crossbar_switch
34 https://en.wikipedia.org/wiki/Crossover_switch
35 https://en.wikipedia.org/wiki/Banyan_switch


AND gates36 . 8000 times per second, the connection is reprogrammed to connect par-
ticular wires for the duration of a time slot37 . Design advantage: In space-division
systems the number of space-division connections is divided by the number of time
slots in the time-division multiplexing system. This dramatically reduces the size and
expense of the switching fabric. It also increases the reliability, because there are far
fewer physical connections to fail.
2. time-division multiplexers38 each have a memory which is read in a fixed order and
written in a programmable order (or vice versa). This type of switch permutes time-
slots in a time-division multiplexed signal39 that goes to the space-division multiplex-
ers in its adjacent layers. Design advantage: Time-division switches have only one
input and output wire. Since they have far fewer electrical connections to fail, they are
far more reliable than space-division switches, and are therefore the preferred switches
for the outer (input and output) layers of modern telephone switches.
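The read/write behavior of a time-division multiplexer can be modeled in a few lines. This toy sketch (ours, not production switch code) permutes the slots of one frame: slots are written into memory in a fixed order and read back out in a programmed order.

```python
# A toy time-slot interchange: write phase fills memory in fixed slot order,
# read phase outputs slots in a programmable order.
def time_slot_interchange(frame, read_order):
    memory = list(frame)                     # write in fixed order
    return [memory[s] for s in read_order]   # read in programmed order

# Swap the first and last slots of a 3-slot frame:
print(time_slot_interchange(['a', 'b', 'c'], [2, 1, 0]))  # -> ['c', 'b', 'a']
```

Reprogramming `read_order` once per frame is what lets such a switch permute time slots without any additional wires.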
Practical digital telephonic switches minimize the size and expense of the electronics. First,
it is typical to ”fold” the switch, so that both the input and output connections to a
subscriber-line are handled by the same control logic. Then, a time-division switch is used
in the outer layer. The outer layer is implemented in subscriber-line interface cards (SLICs)
in the local presence street-side boxes. Under remote control from the central switch, the
cards connect to time slots in a time-multiplexed line to a central switch. In the U.S. the
multiplexed line is a multiple of a T-1 line40 . In Europe and many other countries it is a
multiple of an E-1 line41 .
The scarce resources in a telephone switch42 are the connections between layers of sub-
switches. These connections can be either time slots or wires, depending on the type of
multiplexing. The control logic43 has to allocate these connections, and the basic method
is the algorithm already discussed. The subswitches are logically arranged so that they
synthesize larger subswitches. Each subswitch, and synthesized subswitch is controlled
(recursively44 ) by logic derived from Clos's mathematics. The computer code decomposes
larger multiplexers into smaller multiplexers.
If the recursion is taken to the limit, breaking down the crossbar to the minimum possible
number of switching elements, the resulting device is sometimes called a crossover switch45
or a banyan switch46 depending on its topology.
Switches typically interface to other switches and fiber optic networks via fast multiplexed
data lines such as SONET47 .

36 https://en.wikipedia.org/wiki/AND_gate
37 https://en.wikipedia.org/wiki/Time-division_multiplexing
38 https://en.wikipedia.org/wiki/Time-slot_interchange
39 https://en.wikipedia.org/wiki/Time-division_multiplexing
40 https://en.wikipedia.org/wiki/T-carrier
41 https://en.wikipedia.org/wiki/E-carrier
42 https://en.wikipedia.org/wiki/Telephone_switch
43 https://en.wikipedia.org/wiki/Control_logic
44 https://en.wikipedia.org/wiki/Recursion
45 https://en.wikipedia.org/wiki/Crossover_switch
46 https://en.wikipedia.org/wiki/Banyan_switch
47 https://en.wikipedia.org/wiki/SONET


Each line of a switch may be periodically tested by the computer, by sending test data
through it. If one of a switch's lines fails, all of that switch's lines are marked as in use. Multiplexer lines
are allocated in a first-in-first-out way, so that new connections find new switch elements.
When all connections are gone from a defective switch, the defective switch can be avoided,
and later replaced.
As of 2018, such switches are no longer made. They are being replaced by high-speed
Internet Protocol48 routers.

112.4 Example of rerouting a switch

Figure 270 Signals A, B, C, D are routed, but signal E is blocked unless a signal, such
as D (shown in purple), is rerouted

48 https://en.wikipedia.org/wiki/Internet_Protocol


Figure 271 After D (in purple) is rerouted, signal E can be routed, and all the
original signals plus E are connected

112.5 See also


• Time-slot interchange49
• Clos network50
• Crossbar switch51
• Banyan switch52
• Fat tree53
• Omega network54

49 https://en.wikipedia.org/wiki/Time-slot_interchange
50 https://en.wikipedia.org/wiki/Clos_network
51 https://en.wikipedia.org/wiki/Crossbar_switch
52 https://en.wikipedia.org/wiki/Banyan_switch
53 https://en.wikipedia.org/wiki/Fat_tree
54 https://en.wikipedia.org/wiki/Omega_network


112.6 References
1. C, C (M 1953). ”A   - 
”55 (PDF). Bell System Technical Journal56 . 32 (2): 406–424.
doi57 :10.1002/j.1538-7305.1953.tb01433.x58 . ISSN59 0005-858060 . Retrieved 22 March
2011.

55 http://www.alcatel-lucent.com/bstj/vol32-1953/articles/bstj32-2-406.pdf
56 https://en.wikipedia.org/wiki/Bell_Labs_Technical_Journal
57 https://en.wikipedia.org/wiki/Doi_(identifier)
58 https://doi.org/10.1002%2Fj.1538-7305.1953.tb01433.x
59 https://en.wikipedia.org/wiki/ISSN_(identifier)
60 http://www.worldcat.org/issn/0005-8580

113 Path-based strong component
algorithm

In graph theory1 , the strongly connected components2 of a directed graph3 may be found
using an algorithm that uses depth-first search4 in combination with two stacks5 , one to
keep track of the vertices in the current component and the second to keep track of the
current search path.[1] Versions of this algorithm have been proposed by Purdom (1970)6 ,
Munro (1971)7 , Dijkstra (1976)8 , Cheriyan & Mehlhorn (1996)9 , and Gabow (2000)10 ; of
these, Dijkstra's version was the first to achieve linear time11 .[2]

113.1 Description

The algorithm performs a depth-first search of the given graph G, maintaining as it does
two stacks S and P (in addition to the normal call stack for a recursive function). Stack
S contains all the vertices that have not yet been assigned to a strongly connected com-
ponent, in the order in which the depth-first search reaches the vertices. Stack P contains
vertices that have not yet been determined to belong to different strongly connected com-
ponents from each other. It also uses a counter C of the number of vertices reached so far,
which it uses to compute the preorder numbers of the vertices.
When the depth-first search reaches a vertex v, the algorithm performs the following steps:
1. Set the preorder number of v to C, and increment C.
2. Push v onto S and also onto P.
3. For each edge from v to a neighboring vertex w:
• If the preorder number of w has not yet been assigned, recursively search w;
• Otherwise, if w has not yet been assigned to a strongly connected component:
• Repeatedly pop vertices from P until the top element of P has a preorder number
less than or equal to the preorder number of w.
4. If v is the top element of P:

1 https://en.wikipedia.org/wiki/Graph_theory
2 https://en.wikipedia.org/wiki/Strongly_connected_component
3 https://en.wikipedia.org/wiki/Directed_graph
4 https://en.wikipedia.org/wiki/Depth-first_search
5 https://en.wikipedia.org/wiki/Stack_(data_structure)
6 #CITEREFPurdom1970
7 #CITEREFMunro1971
8 #CITEREFDijkstra1976
9 #CITEREFCheriyanMehlhorn1996
10 #CITEREFGabow2000
11 https://en.wikipedia.org/wiki/Linear_time


• Pop vertices from S until v has been popped, and assign the popped vertices to a
new component.
• Pop v from P.
The overall algorithm consists of a loop through the vertices of the graph, calling this
recursive search on each vertex that does not yet have a preorder number assigned to it.
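The steps above translate directly into code. The following recursive Python sketch (the dict-of-successors graph encoding is our choice) follows the description step for step, including the outer loop over unvisited vertices.

```python
def strong_components(graph):
    """graph: dict mapping every vertex to an iterable of its successors.
    Returns the strongly connected components as lists of vertices."""
    preorder, assigned = {}, set()
    S, P, components = [], [], []
    counter = 0

    def search(v):
        nonlocal counter
        preorder[v] = counter              # step 1: assign preorder number
        counter += 1
        S.append(v)                        # step 2: push v onto S and P
        P.append(v)
        for w in graph[v]:                 # step 3: each edge v -> w
            if w not in preorder:
                search(w)                  # recursively search w
            elif w not in assigned:
                # pop P until its top has preorder number <= preorder of w
                while preorder[P[-1]] > preorder[w]:
                    P.pop()
        if P[-1] == v:                     # step 4: v is the top of P
            P.pop()
            component = []
            while True:                    # pop S until v, forming a component
                u = S.pop()
                component.append(u)
                if u == v:
                    break
            assigned.update(component)
            components.append(component)

    for v in graph:                        # overall loop over all vertices
        if v not in preorder:
            search(v)
    return components

# Three components: {1,2,3} (a cycle), {4,5} (a 2-cycle), and {6} (a sink).
print(strong_components({1: [2], 2: [3], 3: [1], 4: [3, 5], 5: [4, 6], 6: []}))
```

Because the recursion mirrors the call-stack description in the text, very deep graphs would need an iterative rewrite (or a raised recursion limit) in practice.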

113.2 Related algorithms

Like this algorithm, Tarjan's strongly connected components algorithm12 also uses depth-first
search together with a stack to keep track of vertices that have not yet been assigned
to a component, and moves these vertices into a new component when it finishes expanding
the final vertex of its component. However, in place of the stack P, Tarjan's algorithm uses
a vertex-indexed array13 of preorder numbers, assigned in the order that vertices are first
visited in the depth-first search14 . The preorder array is used to keep track of when to form
a new component.

113.3 Notes
1. Sedgewick (2004)15 .
2. History of Path-based DFS for Strong Components16 , Harold N. Gabow, accessed
2012-04-24.

113.4 References
• C, J.; M, K.17 (1996), ”A    
     ”, Algorithmica18 , 15: 521–549,
doi19 :10.1007/BF0194088020 .
• D, E21 (1976), A Discipline of Programming, NJ: Prentice Hall, Ch. 25.
• G, H N. (2000), ”P- -    
 ”, Information Processing Letters, 74 (3–4): 107–114,
doi22 :10.1016/S0020-0190(00)00051-X23 , MR24 176155125 .

12 https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
13 https://en.wikipedia.org/wiki/Array_(data_type)
14 https://en.wikipedia.org/wiki/Depth-first_search
15 #CITEREFSedgewick2004
16 http://www.cs.colorado.edu/~hal/Papers/DFS/pbDFShistory.html
17 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
18 https://en.wikipedia.org/wiki/Algorithmica
19 https://en.wikipedia.org/wiki/Doi_(identifier)
20 https://doi.org/10.1007%2FBF01940880
21 https://en.wikipedia.org/wiki/Edsger_Dijkstra
22 https://en.wikipedia.org/wiki/Doi_(identifier)
23 https://doi.org/10.1016%2FS0020-0190%2800%2900051-X
24 https://en.wikipedia.org/wiki/MR_(identifier)
25 http://www.ams.org/mathscinet-getitem?mr=1761551


• M, I (1971), ”E      


  ”, Information Processing Letters, 1: 56–58, doi26 :10.1016/0020-
0190(71)90006-827 .
• P, P., J. (1970), ”A   ”, BIT, 10: 76–94,
doi28 :10.1007/bf0194089229 .
• S, R. (2004), ”19.8 S C  D”, Algorithms in
Java, Part 5 – Graph Algorithms (3rd ed.), Cambridge MA: Addison-Wesley, pp. 205–
216.

26 https://en.wikipedia.org/wiki/Doi_(identifier)
27 https://doi.org/10.1016%2F0020-0190%2871%2990006-8
28 https://en.wikipedia.org/wiki/Doi_(identifier)
29 https://doi.org/10.1007%2Fbf01940892

114 Prim's algorithm

Figure 272 A demo for Prim's algorithm based on Euclidean distance.


In computer science1 , Prim's (also known as Jarník's) algorithm is a greedy algorithm2
that finds a minimum spanning tree3 for a weighted4 undirected graph5 . This means it finds
a subset of the edges6 that forms a tree7 that includes every vertex8 , where the total weight
of all the edges9 in the tree is minimized. The algorithm operates by building this tree
one vertex at a time, from an arbitrary starting vertex, at each step adding the cheapest
possible connection from the tree to another vertex.
The algorithm was developed in 1930 by Czech10 mathematician Vojtěch Jarník11[1] and
later rediscovered and republished by computer scientists12 Robert C. Prim13 in 1957[2]
and Edsger W. Dijkstra14 in 1959.[3] Therefore, it is also sometimes called Jarník's
algorithm,[4] the Prim–Jarník algorithm,[5] the Prim–Dijkstra algorithm[6] or the DJP
algorithm.[7]
Other well-known algorithms for this problem include Kruskal's algorithm15 and Borůvka's
algorithm16 .[8] These algorithms find the minimum spanning forest in a possibly discon-
nected graph; in contrast, the most basic form of Prim's algorithm only finds minimum
spanning trees in connected graphs. However, running Prim's algorithm separately for each
connected component17 of the graph, it can also be used to find the minimum spanning
forest.[9] In terms of their asymptotic time complexity18 , these three algorithms are equally
fast for sparse graphs19 , but slower than other more sophisticated algorithms.[7][6] However,
for graphs that are sufficiently dense, Prim's algorithm can be made to run in linear time20 ,
meeting or improving the time bounds for other algorithms.[10]

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Greedy_algorithm
3 https://en.wikipedia.org/wiki/Minimum_spanning_tree
4 https://en.wikipedia.org/wiki/Weighted_graph
5 https://en.wikipedia.org/wiki/Undirected_graph
6 https://en.wikipedia.org/wiki/Edge_(graph_theory)
7 https://en.wikipedia.org/wiki/Tree_(graph_theory)
8 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
9 https://en.wikipedia.org/wiki/Graph_theory
10 https://en.wikipedia.org/wiki/Czech_people
11 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_Jarn%C3%ADk
12 https://en.wikipedia.org/wiki/Computer_scientist
13 https://en.wikipedia.org/wiki/Robert_C._Prim
14 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
15 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
16 https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm
17 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
18 https://en.wikipedia.org/wiki/Time_complexity
19 https://en.wikipedia.org/wiki/Sparse_graph
20 https://en.wikipedia.org/wiki/Linear_time


Figure 273 Prim's algorithm starting at vertex A. In the third step, edges BD and AB
both have weight 2, so BD is chosen arbitrarily. After that step, AB is no longer a
candidate for addition to the tree because it links two nodes that are already in the tree.

114.1 Description

The algorithm may informally be described as performing the following steps:


1. Initialize a tree with a single vertex, chosen arbitrarily from the graph.
2. Grow the tree by one edge: of the edges that connect the tree to vertices not yet in
the tree, find the minimum-weight edge, and transfer it to the tree.


3. Repeat step 2 (until all vertices are in the tree).


In more detail, it may be implemented following the pseudocode21 below.
1. Associate with each vertex v of the graph a number C[v] (the cheapest cost of a
connection to v) and an edge E[v] (the edge providing that cheapest connection). To
initialize these values, set all values of C[v] to +∞ (or to any number larger than the
maximum edge weight) and set each E[v] to a special flag value22 indicating that there
is no edge connecting v to earlier vertices.
2. Initialize an empty forest F and a set Q of vertices that have not yet been included
in F (initially, all vertices).
3. Repeat the following steps until Q is empty:
a) Find and remove a vertex v from Q having the minimum possible value of C[v]
b) Add v to F and, if E[v] is not the special flag value, also add E[v] to F
c) Loop over the edges vw connecting v to other vertices w. For each such edge, if
w still belongs to Q and vw has smaller weight than C[w], perform the following
steps:
i. Set C[w] to the cost of edge vw
ii. Set E[w] to point to edge vw.
4. Return F
As described above, the starting vertex for the algorithm will be chosen arbitrarily, because
the first iteration of the main loop of the algorithm will have a set of vertices in Q that
all have equal weights, and the algorithm will automatically start a new tree in F when it
completes a spanning tree of each connected component of the input graph. The algorithm
may be modified to start with any particular vertex s by setting C[s] to be a number
smaller than the other values of C (for instance, zero), and it may be modified to only find
a single spanning tree rather than an entire spanning forest (matching more closely the
informal description) by stopping whenever it encounters another vertex flagged as having
no associated edge.
Different variations of the algorithm differ from each other in how the set Q is implemented:
as a simple linked list23 or array24 of vertices, or as a more complicated priority queue25
data structure. This choice leads to differences in the time complexity26 of the algorithm.
In general, a priority queue will be quicker at finding the vertex v with minimum cost, but
will entail more expensive updates when the value of C[w] changes.
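For illustration, here is one common way to realize the algorithm in Python. Note that this is a ”lazy deletion” binary-heap variant rather than a literal transcription of the decrease-key pseudocode above: instead of updating C[w], it pushes a new heap entry and discards stale entries on extraction. The (weight, neighbor) adjacency-list encoding is our assumption.

```python
import heapq

def prim(graph, start):
    """graph: dict vertex -> list of (weight, neighbor) pairs, with each
    undirected edge listed from both endpoints. Returns the MST edges of
    start's connected component as (weight, u, v) tuples."""
    in_tree = {start}
    frontier = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(frontier)
    mst = []
    while frontier:
        w, u, v = heapq.heappop(frontier)  # cheapest edge leaving the tree
        if v in in_tree:
            continue                       # stale entry: both ends already in tree
        in_tree.add(v)
        mst.append((w, u, v))
        for wx, x in graph[v]:
            if x not in in_tree:
                heapq.heappush(frontier, (wx, v, x))
    return mst

# Edges AB=1, BC=2, AC=4, CD=3, BD=5: the MST is {AB, BC, CD}, total weight 6.
g = {'A': [(1, 'B'), (4, 'C')],
     'B': [(1, 'A'), (2, 'C'), (5, 'D')],
     'C': [(4, 'A'), (2, 'B'), (3, 'D')],
     'D': [(3, 'C'), (5, 'B')]}
print(prim(g, 'A'))
```

The lazy variant leaves stale entries in the heap, so it matches the O(|E| log |E|) edge-heap bound discussed in the next section rather than the decrease-key bounds.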

21 https://en.wikipedia.org/wiki/Pseudocode
22 https://en.wikipedia.org/wiki/Flag_value
23 https://en.wikipedia.org/wiki/Linked_list
24 https://en.wikipedia.org/wiki/Array_data_structure
25 https://en.wikipedia.org/wiki/Priority_queue
26 https://en.wikipedia.org/wiki/Time_complexity


114.2 Time complexity

Prim's algorithm has many applications, such as in the generation29 of this maze27 28 ,
which applies Prim's algorithm to a randomly weighted grid graph30 . The time
complexity of Prim's algorithm depends on the data structures used for the graph and for
ordering the edges by weight, which can be done using a priority queue31 . The following
table shows the typical choices:

Minimum edge weight data structure Time complexity (total)


adjacency matrix32 , searching O(|V |2 )
binary heap33 and adjacency list34 O((|V | + |E|) log |V |) = O(|E| log |V |)
Fibonacci heap35 and adjacency list36 O(|E| + |V | log |V |)

A simple implementation of Prim's, using an adjacency matrix37 or an adjacency list38 graph
representation and linearly searching an array of weights to find the minimum weight edge to
add, requires O39 (|V|2 ) running time. However, this running time can be greatly improved
further by using heaps40 to implement finding minimum weight edges in the algorithm's
inner loop.
A first improved version uses a heap to store all edges of the input graph, ordered by their
weight. This leads to an O(|E| log |E|) worst-case running time. But storing vertices instead
of edges can improve it still further. The heap should order the vertices by the smallest edge-
weight that connects them to any vertex in the partially constructed minimum spanning
tree41 (MST) (or infinity if no such edge exists). Every time a vertex v is chosen and added
to the MST, a decrease-key operation is performed on all vertices w outside the partial MST
such that v is connected to w, setting the key to the minimum of its previous value and the
edge cost of (v,w).
Using a simple binary heap42 data structure, Prim's algorithm can now be shown to run in
time O43 (|E| log |V|) where |E| is the number of edges and |V| is the number of vertices.
Using a more sophisticated Fibonacci heap44 , this can be brought down to O45 (|E| + |V| log

27 http://upload.wikimedia.org/wikipedia/commons/b/b1/MAZE_30x20_Prim.ogv
28 https://en.wikipedia.org/wiki/File:MAZE_30x20_Prim.ogv
29 https://en.wikipedia.org/wiki/Maze_generation
30 https://en.wikipedia.org/wiki/Grid_graph
31 https://en.wikipedia.org/wiki/Priority_queue
32 https://en.wikipedia.org/wiki/Adjacency_matrix
33 https://en.wikipedia.org/wiki/Binary_heap
34 https://en.wikipedia.org/wiki/Adjacency_list
35 https://en.wikipedia.org/wiki/Fibonacci_heap
36 https://en.wikipedia.org/wiki/Adjacency_list
37 https://en.wikipedia.org/wiki/Adjacency_matrix
38 https://en.wikipedia.org/wiki/Adjacency_list
39 https://en.wikipedia.org/wiki/Big-O_notation
40 https://en.wikipedia.org/wiki/Heap_(data_structure)
41 https://en.wikipedia.org/wiki/Minimum_spanning_tree
42 https://en.wikipedia.org/wiki/Binary_heap
43 https://en.wikipedia.org/wiki/Big-O_notation
44 https://en.wikipedia.org/wiki/Fibonacci_heap
45 https://en.wikipedia.org/wiki/Big-O_notation


|V|), which is asymptotically faster46 when the graph is dense47 enough that |E| is ω48 (|V|),
and linear time49 when |E| is at least |V| log |V|. For graphs of even greater density (having
at least |V|c edges for some c > 1), Prim's algorithm can be made to run in linear time even
more simply, by using a d-ary heap50 in place of a Fibonacci heap.[10][11]

Figure 274 Demonstration of proof. In this case, the graph Y1 = Y − f + e is already
equal to Y. In general, the process may need to be repeated.

46 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
47 https://en.wikipedia.org/wiki/Dense_graph
https://en.wikipedia.org/wiki/Big-O_notation#Family_of_Bachmann.E2.80.93Landau_
48
notations
49 https://en.wikipedia.org/wiki/Linear_time
50 https://en.wikipedia.org/wiki/D-ary_heap


114.3 Proof of correctness

Let P be a connected, weighted graph51 . At every iteration of Prim's algorithm, an edge
must be found that connects a vertex in a subgraph to a vertex outside the subgraph.
Since P is connected, there will always be a path to every vertex. The output Y of Prim's
algorithm is a tree52 , because the edge and vertex added to tree Y are connected. Let
Y1 be a minimum spanning tree of graph P. If Y1 =Y then Y is a minimum spanning tree.
Otherwise, let e be the first edge added during the construction of tree Y that is not in tree
Y1 , and V be the set of vertices connected by the edges added before edge e. Then one
endpoint of edge e is in set V and the other is not. Since tree Y1 is a spanning tree of graph
P, there is a path in tree Y1 joining the two endpoints. As one travels along the path, one
must encounter an edge f joining a vertex in set V to one that is not in set V. Now, at the
iteration when edge e was added to tree Y, edge f could also have been added and it would
be added instead of edge e if its weight were less than that of e; since edge f was not added, we
conclude that
w(f ) ≥ w(e).
Let tree Y2 be the graph obtained by removing edge f from and adding edge e to tree Y1 .
It is easy to show that tree Y2 is connected, has the same number of edges as tree Y1 ,
and the total weight of its edges is no larger than that of tree Y1 ; therefore it is also a
minimum spanning tree of graph P and it contains edge e and all the edges added before
it during the construction of set V. Repeat the steps above and we will eventually obtain a
minimum spanning tree of graph P that is identical to tree Y. This shows Y is a minimum
spanning tree. The minimum spanning tree allows for the first subset of the sub-region to
be expanded into a smaller subset X, which we assume to be the minimum.
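The tree construction that the proof reasons about can be sketched concretely. The following is a minimal, illustrative O(V²) adjacency-matrix implementation of Prim's algorithm (the function and variable names such as `prim` and `in_tree`, and the sample graph used below, are ours, not from the text); it repeatedly adds the cheapest edge crossing from the current tree Y to the rest of the graph, exactly the kind of edge the proof's exchange argument considers.

```c
#include <assert.h>
#include <limits.h>

#define N 5
#define NONE (-1)

/* dist[v]: cheapest known edge weight connecting v to the growing tree Y;
   parent[v]: the tree endpoint of that edge. Returns the total MST weight.
   Assumes graph[][] is a symmetric adjacency matrix with 0 meaning "no edge"
   and that the graph is connected. */
int prim(const int graph[N][N], int parent[N]) {
    int dist[N], in_tree[N] = {0};
    for (int v = 0; v < N; v++) { dist[v] = INT_MAX; parent[v] = NONE; }
    dist[0] = 0;                       /* start the tree at vertex 0 */
    int total = 0;
    for (int k = 0; k < N; k++) {
        /* pick the vertex outside Y with the cheapest crossing edge */
        int u = NONE;
        for (int v = 0; v < N; v++)
            if (!in_tree[v] && (u == NONE || dist[v] < dist[u]))
                u = v;
        in_tree[u] = 1;
        total += dist[u];
        /* relax the crossing edges created by adding u to the tree */
        for (int v = 0; v < N; v++)
            if (graph[u][v] && !in_tree[v] && graph[u][v] < dist[v]) {
                dist[v] = graph[u][v];
                parent[v] = u;
            }
    }
    return total;
}
```

A binary or d-ary heap over the dist values, as discussed earlier in the chapter, replaces the inner selection scan when better asymptotic behavior is needed.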

51 https://en.wikipedia.org/wiki/Graph_theory
52 https://en.wikipedia.org/wiki/Tree_(graph_theory)

1210
Parallel algorithm

114.4 Parallel algorithm

Figure 275 The adjacency matrix distributed between multiple processors for parallel
Prim's algorithm. In each iteration of the algorithm, every processor updates its part of
C by inspecting the row of the newly inserted vertex in its set of columns in the adjacency
matrix. The results are then collected and the next vertex to include in the MST is
selected globally.

The main loop of Prim's algorithm is inherently sequential and thus not parallelizable53 .
However, the inner loop54 , which determines the next edge of minimum weight that does not

53 https://en.wikipedia.org/wiki/Parallel_algorithm
54 #step3c

1211
Prim's algorithm

form a cycle, can be parallelized by dividing the vertices and edges between the available
processors.[12] The following pseudocode55 demonstrates this.
1. Assign each processor Pi a set Vi of consecutive vertices of length |V |/|P |.
2. Create C, E, F, and Q as in the sequential algorithm56 and divide C, E, as well as the
graph between all processors such that each processor holds the incoming edges to its
set of vertices. Let Ci , Ei denote the parts of C, E stored on processor Pi .
3. Repeat the following steps until Q is empty:
a) On every processor: find the vertex vi having the minimum value in Ci [vi ] (local
solution).
b) Min-reduce57 the local solutions to find the vertex v having the minimum possible
value of C[v] (global solution).
c) Broadcast58 the selected node to every processor.
d) Add v to F and, if E[v] is not the special flag value, also add E[v] to F.
e) On every processor: update Ci and Ei as in the sequential algorithm.
4. Return F
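The local-minimum and reduction steps (3a–3b) can be illustrated with a plain sequential sketch. A real distributed implementation would use a message-passing primitive such as MPI_Allreduce in place of the simulated loop below; the constants and names here are ours, chosen only for illustration.

```c
#include <assert.h>
#include <limits.h>

#define NV 8   /* vertices */
#define NP 4   /* "processors"; each owns NV/NP consecutive vertices */

/* Step 3a: each processor Pi scans only its slice Ci of the key array C
   and reports its local argmin. Step 3b: a min-reduction over the local
   answers yields the global next vertex. Both steps are simulated
   sequentially here. in_q[v] marks vertices still in Q. */
int next_vertex(const int C[NV], const int in_q[NV]) {
    int best_v = -1, best_c = INT_MAX;
    for (int p = 0; p < NP; p++) {
        /* local solution of processor p */
        int lv = -1, lc = INT_MAX;
        for (int v = p * (NV / NP); v < (p + 1) * (NV / NP); v++)
            if (in_q[v] && C[v] < lc) { lc = C[v]; lv = v; }
        /* min-reduce the local solutions into the global one */
        if (lv != -1 && lc < best_c) { best_c = lc; best_v = lv; }
    }
    return best_v;   /* step 3c would broadcast this to every processor */
}
```

Each "processor" touches only its own |V|/|P| keys, which is where the O(|V|²/|P|) term in the running time comes from.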
This algorithm can generally be implemented on distributed machines[12] as well as on
shared memory machines.[13] It has also been implemented on graphical processing units
(GPUs).[14] The running time is O(|V |2 /|P |) + O(|V | log |P |), assuming that the reduce and
broadcast operations can be performed in O(log |P |).[12] A variant of Prim's algorithm for
shared memory machines, in which Prim's sequential algorithm is being run in parallel,
starting from different vertices, has also been explored.[15] It should, however, be noted
that more sophisticated algorithms exist to solve the distributed minimum spanning tree59
problem in a more efficient manner.

114.5 See also


• Dijkstra's algorithm60 , a very similar algorithm for the shortest path problem61
• Greedoids62 offer a general way to understand the correctness of Prim's algorithm

114.6 References
1. J, V.63 (1930), ”O   ” [A   -
 ], Práce Moravské Přírodovědecké Společnosti (in Czech), 6 (4): 57–63,
hdl64 :10338.dmlcz/50072665 .

55 https://en.wikipedia.org/wiki/Pseudocode
56 #sequential_algorithm
57 https://en.wikipedia.org/wiki/Reduction_Operator
58 https://en.wikipedia.org/wiki/Broadcasting_(computing)
59 https://en.wikipedia.org/wiki/Distributed_minimum_spanning_tree
60 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
61 https://en.wikipedia.org/wiki/Shortest_path_problem
62 https://en.wikipedia.org/wiki/Greedoid
63 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_Jarn%C3%ADk
64 https://en.wikipedia.org/wiki/Hdl_(identifier)
65 http://hdl.handle.net/10338.dmlcz%2F500726

1212
References

2. P, R. C.66 (N 1957), ”S   A 


”67 , Bell System Technical Journal68 , 36 (6): 1389–1401, Bib-
code69 :1957BSTJ...36.1389P70 , doi71 :10.1002/j.1538-7305.1957.tb01515.x72 .
3. D, E. W.73 (D 1959), ”A      -
  ”74 (PDF), Numerische Mathematik, 1 (1): 269–271, Cite-
SeerX75 10.1.1.165.757776 , doi77 :10.1007/BF0138639078 .
4. S, R79 ; W, K D (2011), Algorithms80 (4 .),
A-W, . 628, ISBN81 978-0-321-57351-382 .
5. R, K (2011), Discrete Mathematics and Its Applications83 (7 .),
MG-H S, . 798.
6. C, D84 ; T, R E85 (1976), ”F -
  ”, SIAM Journal on Computing86 , 5 (4): 724–742,
doi87 :10.1137/020505188 , MR89 044645890 .
7. P, S; R, V (J 2002), ”A  -
   ”91 (PDF), Journal of the ACM, 49 (1): 16–34,
CiteSeerX92 10.1.1.110.767093 , doi94 :10.1145/505241.50524395 , MR96 214843197 .
8. T, R E98 (1983), ”C 6. M  . 6.2.
T  ”, Data Structures and Network Algorithms, CBMS-

66 https://en.wikipedia.org/wiki/Robert_C._Prim
67 https://archive.org/details/bstj36-6-1389
68 https://en.wikipedia.org/wiki/Bell_System_Technical_Journal
69 https://en.wikipedia.org/wiki/Bibcode_(identifier)
70 https://ui.adsabs.harvard.edu/abs/1957BSTJ...36.1389P
71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1002%2Fj.1538-7305.1957.tb01515.x
73 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
74 http://www-m3.ma.tum.de/twiki/pub/MN0506/WebHome/dijkstra.pdf
75 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
76 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.7577
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.1007%2FBF01386390
79 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
80 https://books.google.com/books?id=MTpsAQAAQBAJ&pg=PA628
81 https://en.wikipedia.org/wiki/ISBN_(identifier)
82 https://en.wikipedia.org/wiki/Special:BookSources/978-0-321-57351-3
83 https://books.google.com/books?id=6EJOCAAAQBAJ&pg=PA798
84 https://en.wikipedia.org/wiki/David_Cheriton
85 https://en.wikipedia.org/wiki/Robert_Tarjan
86 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
87 https://en.wikipedia.org/wiki/Doi_(identifier)
88 https://doi.org/10.1137%2F0205051
89 https://en.wikipedia.org/wiki/MR_(identifier)
90 http://www.ams.org/mathscinet-getitem?mr=0446458
91 http://www.cs.utexas.edu/~vlr/papers/optmsf-jacm.pdf
92 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
93 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.110.7670
94 https://en.wikipedia.org/wiki/Doi_(identifier)
95 https://doi.org/10.1145%2F505241.505243
96 https://en.wikipedia.org/wiki/MR_(identifier)
97 http://www.ams.org/mathscinet-getitem?mr=2148431
98 https://en.wikipedia.org/wiki/Robert_Tarjan

1213
Prim's algorithm

NSF Regional Conference Series in Applied Mathematics, 44, Society for Industrial
and Applied Mathematics99 , pp. 72–77.
9. K, J; G, J (2011), Graph Algorithms in the Language of
Linear Algebra100 , S, E,  T, 22, Society for Industrial
and Applied Mathematics101 , p. 55, ISBN102 9780898719901103 .
10. Tarjan (1983)104 , p. 77.
11. J, D B.105 (D 1975), ”P    
   ”, Information Processing Letters, 4 (3): 53–57,
doi106 :10.1016/0020-0190(75)90001-0107 .
12. G, A; G, A; K, G; K, V (2003).
Introduction to Parallel Computing. pp. 444–446. ISBN108 978-0201648652109 .
13. Q, M J.; D, N (1984). ”P  ”.
ACM Computing Surveys. 16 (3): 319–348. doi110 :10.1145/2514.2515111 .
14. W, W; H, Y; G, S (2011). ”D  I-
  GPU-B P' A”. International Journal of Mod-
ern Education and Computer Science 3.4.
15. S, R (2009). ”A      
 ”. Proc. International Conference on High Performance Computing
(HiPC).

114.7 External links


• Prim's Algorithm progress on randomly distributed points112
• Media related to Prim's algorithm113 at Wikimedia Commons


99 https://en.wikipedia.org/wiki/Society_for_Industrial_and_Applied_Mathematics
100 https://books.google.com/books?id=JBXDc83jRBwC&pg=PA55
101 https://en.wikipedia.org/wiki/Society_for_Industrial_and_Applied_Mathematics
102 https://en.wikipedia.org/wiki/ISBN_(identifier)
103 https://en.wikipedia.org/wiki/Special:BookSources/9780898719901
104 #CITEREFTarjan1983
105 https://en.wikipedia.org/wiki/Donald_B._Johnson
106 https://en.wikipedia.org/wiki/Doi_(identifier)
107 https://doi.org/10.1016%2F0020-0190%2875%2990001-0
108 https://en.wikipedia.org/wiki/ISBN_(identifier)
109 https://en.wikipedia.org/wiki/Special:BookSources/978-0201648652
110 https://en.wikipedia.org/wiki/Doi_(identifier)
111 https://doi.org/10.1145%2F2514.2515
112 https://meyavuz.wordpress.com/2017/03/10/prims-algorithm-animation-for-randomly-distributed-points
113 https://commons.wikimedia.org/wiki/Category:Prim%27s_algorithm

1214
External links


1215
115 Proof-number search

Proof-number search (short: PN search) is a game tree1 search algorithm2 invented by
Victor Allis3 ,[1] with applications mostly in endgame solvers4 , but also for sub-goals during
games.
Using a binary goal (e.g. first player wins the game), game trees of two-person perfect-
information games5 can be mapped to an and–or tree6 . Maximizing nodes become OR-
nodes, minimizing nodes are mapped to AND-nodes. For all nodes, proof and disproof
numbers are stored and updated during the search.
To each node of the partially expanded game tree the proof number and disproof number are
associated. A proof number represents the minimum number of leaf nodes which have to be
proved in order to prove the node. Analogously, a disproof number represents the minimum
number of leaves which have to be disproved in order to disprove the node. Because the
goal of the tree is to prove a forced win, winning nodes are regarded as proved. Therefore,
they have proof number 0 and disproof number ∞. Lost or drawn nodes are regarded as
disproved. They have proof number ∞ and disproof number 0. Unknown leaf nodes have a
proof and disproof number of unity. The proof number of an internal AND node is equal
to the sum of its children's proof numbers, since to prove an AND node all the children
have to be proved. The disproof number of an AND node is equal to the minimum of its
children's disproof numbers. The disproof number of an internal OR node is equal to the
sum of its children's disproof numbers, since to disprove an OR node all the children have
to be disproved. Its proof number is equal to the minimum of its children's proof numbers.
The procedure of selecting the most-proving node to expand is the following. We start at
the root. Then, at each OR node the child with the lowest proof number is selected as
successor, and at each AND node the child with the lowest disproof number is selected as
successor. Finally, when a leaf node is reached, it is expanded and its children are evaluated.
The proof and disproof numbers represent lower bounds on the number of nodes to be evalu-
ated to prove (or disprove) certain nodes. By always selecting the most proving (disproving)
node to expand, an efficient search is generated.
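The update rules described above can be sketched in code. This is a minimal illustration (the node layout and field names are ours, not from any published implementation), with INF standing in for the ∞ of proved and disproved nodes:

```c
#include <assert.h>
#include <limits.h>

#define INF INT_MAX

enum { AND_NODE, OR_NODE };

struct node {
    int type;              /* AND_NODE or OR_NODE */
    int nchildren;         /* 0 for a leaf */
    struct node *child[4];
    int pn, dn;            /* proof and disproof numbers */
};

/* Saturating addition so that a proved/disproved child keeps the sum at INF. */
static int sat_add(int a, int b) { return (a == INF || b == INF) ? INF : a + b; }

/* Recompute proof/disproof numbers bottom-up. At an OR node the proof
   number is the minimum over the children and the disproof number the sum;
   at an AND node the roles are swapped, as described in the text. */
void update(struct node *n) {
    if (n->nchildren == 0) return;     /* leaf: pn/dn set by evaluation */
    int min_pn = INF, min_dn = INF, sum_pn = 0, sum_dn = 0;
    for (int i = 0; i < n->nchildren; i++) {
        struct node *c = n->child[i];
        update(c);
        if (c->pn < min_pn) min_pn = c->pn;
        if (c->dn < min_dn) min_dn = c->dn;
        sum_pn = sat_add(sum_pn, c->pn);
        sum_dn = sat_add(sum_dn, c->dn);
    }
    if (n->type == OR_NODE) { n->pn = min_pn; n->dn = sum_dn; }
    else                    { n->pn = sum_pn; n->dn = min_dn; }
}
```

Selecting the most-proving node then amounts to descending from the root, following the child realizing the minimum pn at OR nodes and the minimum dn at AND nodes.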
Some variants of proof-number search, such as dfPN, PN2 , and PDS-PN,[2] have been
developed to address the algorithm's substantial memory requirements.

1 https://en.wikipedia.org/wiki/Game_tree
2 https://en.wikipedia.org/wiki/Search_algorithm
3 https://en.wikipedia.org/wiki/Victor_Allis
4 https://en.wikipedia.org/w/index.php?title=Endgame_solver&action=edit&redlink=1
5 https://en.wikipedia.org/wiki/Perfect-information_game
6 https://en.wikipedia.org/wiki/And%E2%80%93or_tree

1217
Proof-number search

115.1 References
1. A, L V. Searching for Solutions in Games and Artificial Intelligence.
PhD Thesis7 . ISBN8 90-9007488-09 . A     2004-12-04.
R 24 O 2014.CS1 maint: BOT: original-url status unknown (link10 )
2. M H.M. W, J W.H.M. U,  H. J   H11
(2003). ”PDS-PN: A N P-N S A”12 (PDF). Lecture
Notes in Computer Science.CS1 maint: multiple names: authors list (link13 )

115.2 Further reading

A. Kishimoto, M.H.M. Winands, M. Müller, and J-T. Saito (2012) Game-tree search using
proof numbers: The first twenty years, ICGA, 35(3):131–156, pdf14

7 https://web.archive.org/web/20041204235835/http://www.cs.vu.nl/~victor/thesis.html
8 https://en.wikipedia.org/wiki/ISBN_(identifier)
9 https://en.wikipedia.org/wiki/Special:BookSources/90-9007488-0
11 https://en.wikipedia.org/wiki/H._Jaap_van_den_Herik
12 https://dke.maastrichtuniversity.nl/m.winands/documents/PDSPNCG2002.pdf
14 https://webdocs.cs.ualberta.ca/~mmueller/ps/ICGA2012PNS.pdf

1218
116 Push–relabel maximum flow
algorithm

In mathematical optimization1 , the push–relabel algorithm (alternatively, preflow–
push algorithm) is an algorithm for computing maximum flows2 in a flow network3 . The
name ”push–relabel” comes from the two basic operations used in the algorithm. Throughout
its execution, the algorithm maintains a ”preflow” and gradually converts it into a maxi-
mum flow by moving flow locally between neighboring nodes using push operations under
the guidance of an admissible network maintained by relabel operations. In comparison, the
Ford–Fulkerson algorithm4 performs global augmentations that send flow following paths
from the source all the way to the sink.[1]
The push–relabel algorithm is considered one of the most efficient maximum flow algo-
rithms. The generic algorithm has a strongly polynomial5 O(V 2 E) time complexity, which
is asymptotically more efficient than the O(VE 2 ) Edmonds–Karp algorithm6 .[2] Specific
variants of the algorithm achieve even lower time complexities. The variant based on the
highest label node selection rule has O(V 2 √E) time complexity and is generally regarded
as the benchmark for maximum flow algorithms.[3][4] Subcubic O(VE log(V 2 /E)) time com-
plexity can be achieved using dynamic trees7 , although in practice it is less efficient.[2]
The push–relabel algorithm has been extended to compute minimum cost flows8 .[5] The idea
of distance labels has led to a more efficient augmenting path algorithm, which in turn can
be incorporated back into the push–relabel algorithm to create a variant with even higher
empirical performance.[4][6]

116.1 History

The concept of a preflow was originally designed by Alexander V. Karzanov9 and was
published in 1974 in Soviet Mathematics Doklady 15. This pre-flow algorithm also used a
push operation; however, it used distances in the auxiliary network to determine where to
push the flow instead of a labeling system.[2][7]

1 https://en.wikipedia.org/wiki/Mathematical_optimization
2 https://en.wikipedia.org/wiki/Maximum_flow
3 https://en.wikipedia.org/wiki/Flow_network
4 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
5 https://en.wikipedia.org/wiki/Strongly_polynomial
6 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
7 https://en.wikipedia.org/wiki/Link-cut_tree
8 https://en.wikipedia.org/wiki/Minimum_cost_flow
9 https://en.wikipedia.org/wiki/Alexander_V._Karzanov

1219
Push–relabel maximum flow algorithm

The push-relabel algorithm was designed by Andrew V. Goldberg10 and Robert Tarjan11 .
The algorithm was initially presented in November 1986 in STOC '86: Proceedings of
the eighteenth annual ACM symposium on Theory of computing, and then officially in
October 1988 as an article in the Journal of the ACM. Both papers detail a generic
form of the algorithm terminating in O(V 2 E) along with an O(V 3 ) sequential implemen-
tation, an O(VE log(V 2 /E)) implementation using dynamic trees, and a parallel/distributed
implementation.[2][8] As explained in [9], Goldberg and Tarjan introduced distance labels by
incorporating them into the parallel maximum flow algorithm of Yossi Shiloach and Uzi
Vishkin12 .[10]

116.2 Concepts

116.2.1 Definitions and notations

Main article: Flow network13 Let:


• G = (V, E) be a network with capacity function c: V × V → ℝ∞ ,
• F = (G, c, s, t) a flow network, where s ∈V and t ∈V are chosen source and sink vertices
respectively,
• f : V × V → ℝ denote a pre-flow14 in F,
• xf : V → ℝ denote the excess function with respect to the flow f, defined by xf (u) =
∑v ∈V f (v, u) − ∑v ∈V f (u, v),
• cf : V × V → ℝ∞ denote the residual capacity function with respect to the flow f, defined
by cf (e) = c(e) − f (e),
and
• Gf (V, Ef ) denote the residual network15 of G with respect to the flow f.
The push–relabel algorithm uses a nonnegative integer valid labeling function 𝓁 which
makes use of distance labels, or heights, on nodes to determine which arcs should be selected
for the push operation. This labeling function is denoted by 𝓁 : V → ℕ. This function must
satisfy the following conditions in order to be considered valid:
Valid labeling:
𝓁(u) ≤ 𝓁(v) + 1 for all (u, v) ∈ Ef
Source condition:
𝓁(s) = | V |
Sink conservation:
𝓁(t) = 0

10 https://en.wikipedia.org/wiki/Andrew_V._Goldberg
11 https://en.wikipedia.org/wiki/Robert_Tarjan
12 https://en.wikipedia.org/wiki/Uzi_Vishkin
13 https://en.wikipedia.org/wiki/Flow_network
14 https://en.wikipedia.org/wiki/Flow_network#Flows
15 https://en.wikipedia.org/wiki/Flow_network#Residuals

1220
Concepts

In the algorithm, the label values of s and t are fixed. 𝓁(u) is a lower bound of the
unweighted distance from u to t in Gf if t is reachable from u. If u has been disconnected
from t, then 𝓁(u) − | V | is a lower bound of the unweighted distance from u to s. As a
result, if a valid labeling function exists, there are no s-t paths in Gf because no such paths
can be longer than | V | − 1.
An arc (u, v) ∈ Ef is called admissible if 𝓁(u) = 𝓁(v) + 1. The admissible
network G̃f (V, Ẽf ) is composed of the set of arcs e ∈Ef that are admissible. The ad-
missible network is acyclic.
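As a small illustration, the three validity conditions can be checked directly against a residual graph. The encoding below (cf[u][v] > 0 meaning (u, v) ∈ Ef) and all names are our own:

```c
#include <assert.h>

#define NN 4   /* number of nodes */

/* Returns 1 iff h is a valid labeling (the 𝓁 of the text) for the residual
   capacities cf: h[s] = |V|, h[t] = 0, and h[u] <= h[v] + 1 holds on every
   residual arc (u, v), i.e. whenever cf[u][v] > 0. */
int is_valid_labeling(const int cf[NN][NN], const int h[NN], int s, int t) {
    if (h[s] != NN || h[t] != 0) return 0;
    for (int u = 0; u < NN; u++)
        for (int v = 0; v < NN; v++)
            if (cf[u][v] > 0 && h[u] > h[v] + 1)
                return 0;   /* a steep residual arc violates validity */
    return 1;
}
```

A labeling rejected by this check contains a steep arc, which the relabel operation is designed never to create.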

116.2.2 Operations

Initialization

The algorithm starts by creating a residual graph, initializing the preflow values to zero
and performing a set of saturating push operations on residual arcs exiting the source, (s,
v) where v ∈V \ {s}. Similarly, the labels are initialized such that the label at the source
is the number of nodes in the graph, 𝓁(s) = | V |, and all other nodes are given a label of
zero. Once the initialization is complete the algorithm repeatedly performs either the push
or relabel operations against active nodes until no applicable operation can be performed.

Push

The push operation applies on an admissible out-arc (u, v) of an active node u in Gf . It
moves min{xf (u), cf (u,v)} units of flow from u to v.
push(u, v):
assert xf [u] > 0 and 𝓁[u] == 𝓁[v] + 1
Δ = min(xf [u], c[u][v] - f[u][v])
f[u][v] += Δ
f[v][u] -= Δ
xf [u] -= Δ
xf [v] += Δ

A push operation that causes f (u, v) to reach c(u, v) is called a saturating push since
it uses up all the available capacity of the residual arc. Otherwise, all of the excess at the
node is pushed across the residual arc. This is called an unsaturating or non-saturating
push.

Relabel

The relabel operation applies on an active node u without any admissible out-arcs in Gf .
It modifies 𝓁(u) to be the minimum value such that an admissible out-arc is created. Note
that this always increases 𝓁(u) and never creates a steep arc, which is an arc (u, v) such
that cf (u, v) > 0, and 𝓁(u) > 𝓁(v) + 1.
relabel(u):
assert xf [u] > 0 and 𝓁[u] <= 𝓁[v] for all v such that c[u][v] - f[u][v] > 0
𝓁[u] = min(𝓁[v] for all v such that c[u][v] - f[u][v] > 0) + 1

1221
Push–relabel maximum flow algorithm

Effects of push and relabel

After a push or relabel operation, 𝓁 remains a valid labeling function with respect to f.
For a push operation on an admissible arc (u, v), it may add an arc (v, u) to Ef , where
𝓁(v) = 𝓁(u) − 1 ≤ 𝓁(u) + 1; it may also remove the arc (u, v) from Ef , where it effectively
removes the constraint 𝓁(u) ≤ 𝓁(v) + 1.
To see that a relabel operation on node u preserves the validity of 𝓁(u), notice that this is
trivially guaranteed by definition for the out-arcs of u in Gf . For the in-arcs of u in Gf , the
increased 𝓁(u) can only satisfy the constraints less tightly, not violate them.

116.3 The generic push–relabel algorithm

The generic push–relabel algorithm is used as a proof of concept only and does not contain
implementation details on how to select an active node for the push and relabel operations.
This generic version of the algorithm will terminate in O(V 2 E).
Since 𝓁(s) = | V |, 𝓁(t) = 0, and there are no paths longer than | V | − 1 in Gf , in order for
𝓁(s) to satisfy the valid labeling condition s must be disconnected from t. At initialisation,
the algorithm fulfills this requirement by creating a pre-flow f that saturates all out-arcs of s,
after which 𝓁(v) = 0 is trivially valid for all v ∈V \ {s, t}. After initialisation, the algorithm
repeatedly executes an applicable push or relabel operation until no such operations apply,
at which point the pre-flow has been converted into a maximum flow.
generic-push-relabel(G, c, s, t):
create a pre-flow f that saturates all out-arcs of s
let 𝓁[s] = |V|
let 𝓁[v] = 0 for all v ∈ V \ {s}
while there is an applicable push or relabel operation do
execute the operation

116.3.1 Correctness

The algorithm maintains the condition that 𝓁 is a valid labeling during its execution. This
can be proven true by examining the effects of the push and relabel operations on the label
function 𝓁. The relabel operation increases the label value by the associated minimum plus
one, which will always satisfy the 𝓁(u) ≤ 𝓁(v) + 1 constraint. The push operation can send
flow from u to v if 𝓁(u) = 𝓁(v) + 1. This may add (v, u) to Gf and may delete (u, v)
from Gf . The addition of (v, u) to Gf will not affect the valid labeling since 𝓁(v) = 𝓁(u)
− 1. The deletion of (u, v) from Gf removes the corresponding constraint since the valid
labeling property 𝓁(u) ≤ 𝓁(v) + 1 only applies to residual arcs in Gf .[8]
If a preflow f and a valid labeling for f exists then there is no augmenting path from s
to t in the residual graph Gf . This can be proven by contradiction based on inequalities
which arise in the labeling function when supposing that an augmenting path does exist. If
the algorithm terminates, then all nodes in V \ {s, t} are not active. This means all v ∈V \
{s, t} have no excess flow, and with no excess the preflow f obeys the flow conservation

1222
The generic push–relabel algorithm

constraint and can be considered a normal flow. This flow is the maximum flow according
to the max-flow min-cut theorem16 since there is no augmenting path from s to t.[8]
Therefore, the algorithm will return the maximum flow upon termination.

116.3.2 Time complexity

In order to bound the time complexity of the algorithm, we must analyze the number of
push and relabel operations which occur within the main loop. The numbers of relabel,
saturating push and nonsaturating push operations are analyzed separately.
In the algorithm, the relabel operation can be performed at most (2| V | − 1)(| V | − 2) <
2| V |2 times. This is because the label value 𝓁(u) for any node u can never decrease,
and the maximum label value is at most 2| V | − 1 for all nodes. This means the relabel
operation could potentially be performed 2| V | − 1 times for all nodes V \ {s, t} (i.e. | V |
− 2). This results in a bound of O(V 2 ) for the relabel operation.
Each saturating push on an admissible arc (u, v) removes the arc from Gf . For the arc
to be reinserted into Gf for another saturating push, v must first be relabeled, followed
by a push on the arc (v, u), then u must be relabeled. In the process, 𝓁(u) increases by
at least two. Therefore, there are O(V) saturating pushes on (u, v), and the total number
of saturating pushes is at most 2| V || E |. This results in a time bound of O(VE) for the
saturating push operations.
Bounding the number of nonsaturating pushes can be achieved via a potential argument17 .
We use the potential function Φ = ∑[u ∈V ∧xf (u) > 0] 𝓁(u) (i.e. Φ is the sum of the labels of
all active nodes). It is obvious that Φ is 0 initially and stays nonnegative throughout the
execution of the algorithm. Both relabels and saturating pushes can increase Φ. However,
the value of Φ must be equal to 0 at termination since there cannot be any remaining
active nodes at the end of the algorithm's execution. This means that over the execution
of the algorithm, the nonsaturating pushes must make up the difference of the relabel and
saturating push operations in order for Φ to terminate with a value of 0. The relabel
operation can increase Φ by at most (2| V | − 1)(| V | − 2). A saturating push on (u, v)
activates v if it was inactive before the push, increasing Φ by at most 2| V | − 1. Hence, the
total contribution of all saturating pushes operations to Φ is at most (2| V | − 1)(2| V || E |).
A nonsaturating push on (u, v) always deactivates u, but it can also activate v as in a
saturating push. As a result, it decreases Φ by at least 𝓁(u) − 𝓁(v) = 1. Since relabels and
saturating pushes increase Φ, the total number of nonsaturating pushes must make up the
difference of (2| V | − 1)(| V | − 2) + (2| V | − 1)(2| V || E |) ≤ 4| V |2 | E |. This results
in a time bound of O(V 2 E) for the nonsaturating push operations.
In sum, the algorithm executes O(V 2 ) relabels, O(VE) saturating pushes and O(V 2 E)
nonsaturating pushes. Data structures can be designed to pick and execute an applicable
operation in O(1) time. Therefore, the time complexity of the algorithm is O(V 2 E).[1][8]

16 https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
17 https://en.wikipedia.org/wiki/Potential_method

1223
Push–relabel maximum flow algorithm

116.3.3 Example

The following is a sample execution of the generic push-relabel algorithm, as defined above,
on the following simple network flow graph diagram.

Figure 277

Initial flow network graph

Figure 278

1224
The generic push–relabel algorithm

Final maximum flow network graph


In the example, the h and e values denote the label and excess xf , respectively, of the node
during the execution of the algorithm. Each residual graph in the example only contains the
residual arcs with a capacity larger than zero. Each residual graph may contain multiple
iterations of the perform operation loop.
Algorithm Operation(s) Residual Graph

Initialise the residual graph by setting the preflow values to 0 and initialis-
ing the labeling.

Figure 279 Step 1

Initial saturating push is performed across all preflow arcs out of the source,
s.

Figure 280 Step 2


Node a is relabeled in order to push its excess flow towards the sink, t. The
excess at a is then pushed to b then d in two subsequent saturating pushes,
which still leaves a with some excess.

Figure 281 Step 3


Once again, a is relabeled in order to push its excess along its last remaining
positive residual (i.e. push the excess back to s). The node a is then removed
from the set of active nodes.

1225
Push–relabel maximum flow algorithm

Algorithm Operation(s) Residual Graph

Figure 282 Step 4

Relabel b and then push its excess to t and c.

Figure 283 Step 5

Relabel c and then push its excess to d.

Figure 284 Step 6

Relabel d and then push its excess to t.

Figure 285 Step 7


This leaves the node b as the only remaining active node, but it cannot
push its excess flow towards the sink. Relabel b and then push its excess
towards the source, s, via the node a.

1226
Practical implementations

Algorithm Operation(s) Residual Graph

Figure 286 Step 8


Push the last bit of excess at a back to the source, s. There are no remaining
active nodes. The algorithm terminates and returns the maximum flow of
the graph (as seen above).

Figure 287 Step 9

The example (but with initial flow of 0) can be run here18 interactively.

116.4 Practical implementations

While the generic push–relabel algorithm has O(V 2 E) time complexity, efficient implemen-
tations achieve O(V 3 ) or lower time complexity by enforcing appropriate rules in selecting
applicable push and relabel operations. The empirical performance can be further improved
by heuristics.

116.4.1 ”Current-arc” data structure and discharge operation

The ”current-arc” data structure is a mechanism for visiting the in- and out-neighbors of
a node in the flow network in a static circular order. If a singly linked list of neighbors is
created for a node, the data structure can be as simple as a pointer into the list that steps
through the list and rewinds to the head when it runs off the end.
Based on the ”current-arc” data structure, the discharge operation can be defined. A dis-
charge operation applies on an active node and repeatedly pushes flow from the node until
it becomes inactive, relabeling it as necessary to create admissible arcs in the process.

18 http://www.adrian-haarbach.de/idp-graph-algorithms/implementation/maxflow-push-relabel/index_en.html

1227
Push–relabel maximum flow algorithm

discharge(u):
while xf [u] > 0 do
if current-arc[u] has run off the end of neighbors[u] then
relabel(u)
rewind current-arc[u]
else
let (u, v) = current-arc[u]
if (u, v) is admissible then
push(u, v)
let current-arc[u] point to the next neighbor of u

116.4.2 Active node selection rules

Definition of the discharge operation reduces the push–relabel algorithm to repeatedly se-
lecting an active node to discharge. Depending on the selection rule, the algorithm exhibits
different time complexities. For the sake of brevity, we ignore s and t when referring to the
nodes in the following discussion.

FIFO selection rule

The FIFO19 push–relabel algorithm[2] organizes the active nodes into a queue. The initial
active nodes can be inserted in arbitrary order. The algorithm always removes the node
at the front of the queue for discharging. Whenever an inactive node becomes active, it is
appended to the back of the queue.
The algorithm has O(V 3 ) time complexity.

Relabel-to-front selection rule

The relabel-to-front push–relabel algorithm[1] organizes all nodes into a linked list and
maintains the invariant that the list is topologically sorted20 with respect to the admissible
network. The algorithm scans the list from front to back and performs a discharge operation
on the current node if it is active. If the node is relabeled, it is moved to the front of the
list, and the scan is restarted from the front.
The algorithm also has O(V 3 ) time complexity.

Highest label selection rule

The highest-label push–relabel algorithm[11] organizes all nodes into buckets indexed by their
labels. The algorithm always selects an active node with the largest label to discharge.

The algorithm has O(V 2 √E) time complexity. If the lowest-label selection rule is used
instead, the time complexity becomes O(V 2 E).[3]

19 https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)
20 https://en.wikipedia.org/wiki/Topological_sorting

1228
Sample implementations

116.4.3 Implementation techniques

Although in the description of the generic push–relabel algorithm above, 𝓁(u) is set to zero
for each node u other than s and t at the beginning, it is preferable to perform a backward
breadth-first search21 from t to compute exact labels.[2]
The algorithm is typically separated into two phases. Phase one computes a maximum
pre-flow by discharging only active nodes whose labels are below n. Phase two converts the
maximum preflow into a maximum flow by returning excess flow that cannot reach t to s.
It can be shown that phase two has O(VE) time complexity regardless of the order of push
and relabel operations and is therefore dominated by phase one. Alternatively, it can be
implemented using flow decomposition.[9]
Heuristics are crucial to improving the empirical performance of the algorithm.[12] Two
commonly used heuristics are the gap heuristic and the global relabeling heuristic.[2][13] The
gap heuristic detects gaps in the labeling function. If there is a label 0 < 𝓁′ < |V| for
which there is no node u such that 𝓁(u) = 𝓁′, then any node u with 𝓁′ < 𝓁(u) < |V|
has been disconnected from t and can be relabeled to (|V| + 1) immediately. The global
relabeling heuristic periodically performs backward breadth-first search from t in Gf to
compute the exact labels of the nodes. Both heuristics skip unhelpful relabel operations,
which are a bottleneck of the algorithm and contribute to the ineffectiveness of dynamic
trees.[4]
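To make the gap heuristic concrete, here is a small illustrative sketch (the function name and the flat list-of-labels representation are our own, not from the article): given the list of node labels and n = |V|, it looks for an empty label and lifts every node stranded above that gap to n + 1.

```python
def apply_gap_heuristic(labels, n):
    # Count how many nodes carry each label (labels range up to 2n - 1).
    count = [0] * (2 * n + 1)
    for l in labels:
        count[l] += 1
    # If some label g with 0 < g < n is unused, every node whose label
    # lies strictly between g and n is cut off from t and can be lifted
    # to n + 1 in one step.
    for g in range(1, n):
        if count[g] == 0:
            return [n + 1 if g < l < n else l for l in labels]
    return labels
```

With n = 6 and labels [6, 0, 1, 4, 4, 0], label 2 is unused, so the two nodes at label 4 jump to 7 at once instead of being relabeled one unit at a time.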

116.5 Sample implementations

C22 implementation

#include <stdlib.h>
#include <stdio.h>

#define NODES 6
#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
#define INFINITE 10000000

void push(const int * const * C, int ** F, int *excess, int u, int v) {


int send = MIN(excess[u], C[u][v] - F[u][v]);
F[u][v] += send;
F[v][u] -= send;
excess[u] -= send;
excess[v] += send;
}

void relabel(const int * const * C, const int * const * F, int *height, int u) {
int v;
int min_height = INFINITE;
for (v = 0; v < NODES; v++) {
if (C[u][v] - F[u][v] > 0) {
min_height = MIN(min_height, height[v]);
height[u] = min_height + 1;
}
}
}

21 https://en.wikipedia.org/wiki/Breadth-first_search
22 https://en.wikipedia.org/wiki/C_(programming_language)


void discharge(const int * const * C, int ** F, int *excess, int *height, int
*seen, int u) {
while (excess[u] > 0) {
if (seen[u] < NODES) {
int v = seen[u];
if ((C[u][v] - F[u][v] > 0) && (height[u] > height[v])) {
push(C, F, excess, u, v);
} else {
seen[u] += 1;
}
} else {
relabel(C, F, height, u);
seen[u] = 0;
}
}
}

void moveToFront(int i, int *A) {


int temp = A[i];
int n;
for (n = i; n > 0; n--) {
A[n] = A[n-1];
}
A[0] = temp;
}

int pushRelabel(const int * const * C, int ** F, int source, int sink) {


int *excess, *height, *list, *seen, i, p;

excess = (int *) calloc(NODES, sizeof(int));


height = (int *) calloc(NODES, sizeof(int));
seen = (int *) calloc(NODES, sizeof(int));

list = (int *) calloc((NODES-2), sizeof(int));

for (i = 0, p = 0; i < NODES; i++){


if ((i != source) && (i != sink)) {
list[p] = i;
p++;
}
}

height[source] = NODES;
excess[source] = INFINITE;
for (i = 0; i < NODES; i++)
push(C, F, excess, source, i);

p = 0;
while (p < NODES - 2) {
int u = list[p];
int old_height = height[u];
discharge(C, F, excess, height, seen, u);
if (height[u] > old_height) {
moveToFront(p, list);
p = 0;
} else {
p += 1;
}
}
int maxflow = 0;
for (i = 0; i < NODES; i++)
maxflow += F[source][i];

free(list);

free(seen);
free(height);


free(excess);

return maxflow;
}

void printMatrix(const int * const * M) {


int i, j;
for (i = 0; i < NODES; i++) {
for (j = 0; j < NODES; j++)
printf("%d\t",M[i][j]);
printf("\n");
}
}

int main(void) {
int **flow, **capacities, i;
flow = (int **) calloc(NODES, sizeof(int*));
capacities = (int **) calloc(NODES, sizeof(int*));
for (i = 0; i < NODES; i++) {
flow[i] = (int *) calloc(NODES, sizeof(int));
capacities[i] = (int *) calloc(NODES, sizeof(int));
}

// Sample graph
capacities[0][1] = 2;
capacities[0][2] = 9;
capacities[1][2] = 1;
capacities[1][3] = 0;
capacities[1][4] = 0;
capacities[2][4] = 7;
capacities[3][5] = 7;
capacities[4][5] = 4;

printf("Capacity:\n");
printMatrix(capacities);

printf("Max Flow:\n%d\n", pushRelabel(capacities, flow, 0, 5));

printf("Flows:\n");
printMatrix(flow);

return 0;
}

Python23 implementation
def relabel_to_front(C, source: int, sink: int) -> int:
    n = len(C)  # C is the capacity matrix
    F = [[0] * n for _ in range(n)]
    # residual capacity from u to v is C[u][v] - F[u][v]

    height = [0] * n  # height of node
    excess = [0] * n  # flow into node minus flow from node
    seen = [0] * n  # neighbours seen since last relabel
    # node "queue"
    nodelist = [i for i in range(n) if i != source and i != sink]

    def push(u, v):
        send = min(excess[u], C[u][v] - F[u][v])
        F[u][v] += send
        F[v][u] -= send
        excess[u] -= send
        excess[v] += send

23 https://en.wikipedia.org/wiki/Python_(programming_language)


    def relabel(u):
        # Find smallest new height making a push possible,
        # if such a push is possible at all.
        min_height = float('inf')
        for v in range(n):
            if C[u][v] - F[u][v] > 0:
                min_height = min(min_height, height[v])
        height[u] = min_height + 1

    def discharge(u):
        while excess[u] > 0:
            if seen[u] < n:  # check next neighbour
                v = seen[u]
                if C[u][v] - F[u][v] > 0 and height[u] > height[v]:
                    push(u, v)
                else:
                    seen[u] += 1
            else:  # we have checked all neighbours. must relabel
                relabel(u)
                seen[u] = 0

    height[source] = n  # longest path from source to sink is less than n long
    excess[source] = float('inf')  # send as much flow as possible to neighbours of source
    for v in range(n):
        push(source, v)

    p = 0
    while p < len(nodelist):
        u = nodelist[p]
        old_height = height[u]
        discharge(u)
        if height[u] > old_height:
            nodelist.insert(0, nodelist.pop(p))  # move to front of list
            p = 0  # start from front of list
        else:
            p += 1

    return sum(F[source])
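For a quick check, the routine can be run on the same sample graph as the C program above. The following condensed, self-contained copy is a demonstration sketch (assuming Python 3, with float('inf') standing in for ∞):

```python
def relabel_to_front(C, source, sink):
    n = len(C)
    F = [[0] * n for _ in range(n)]  # flow matrix
    height = [0] * n
    excess = [0] * n
    seen = [0] * n
    nodelist = [i for i in range(n) if i != source and i != sink]

    def push(u, v):
        send = min(excess[u], C[u][v] - F[u][v])
        F[u][v] += send
        F[v][u] -= send
        excess[u] -= send
        excess[v] += send

    def relabel(u):
        min_height = float('inf')
        for v in range(n):
            if C[u][v] - F[u][v] > 0:
                min_height = min(min_height, height[v])
        height[u] = min_height + 1

    def discharge(u):
        while excess[u] > 0:
            if seen[u] < n:
                v = seen[u]
                if C[u][v] - F[u][v] > 0 and height[u] > height[v]:
                    push(u, v)
                else:
                    seen[u] += 1
            else:
                relabel(u)
                seen[u] = 0

    height[source] = n
    excess[source] = float('inf')
    for v in range(n):
        push(source, v)

    p = 0
    while p < len(nodelist):
        u = nodelist[p]
        old_height = height[u]
        discharge(u)
        if height[u] > old_height:
            nodelist.insert(0, nodelist.pop(p))  # move to front, rescan
            p = 0
        else:
            p += 1
    return sum(F[source])

# Same capacities as the C sample graph (source 0, sink 5).
capacities = [[0] * 6 for _ in range(6)]
capacities[0][1] = 2; capacities[0][2] = 9; capacities[1][2] = 1
capacities[2][4] = 7; capacities[3][5] = 7; capacities[4][5] = 4
```

The call relabel_to_front(capacities, 0, 5) returns 4, the maximum flow of this graph, matching the value printed by the C program.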

116.6 References
1. C, T. H.24 ; L, C. E.25 ; R, R. L.26 ; S, C.27 (2001). ”§26
M ”. Introduction to Algorithms28 (2 .). T MIT P. . 643–
698. ISBN29 978-026203293330 .

24 https://en.wikipedia.org/wiki/Thomas_H._Cormen
25 https://en.wikipedia.org/wiki/Charles_E._Leiserson
26 https://en.wikipedia.org/wiki/Ron_Rivest
27 https://en.wikipedia.org/wiki/Clifford_Stein
28 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
29 https://en.wikipedia.org/wiki/ISBN_(identifier)
30 https://en.wikipedia.org/wiki/Special:BookSources/978-0262032933


2. G, A V; T, R E (1986). ”A      


”. Proceedings of the eighteenth annual ACM symposium on Theory of com-
puting – STOC '86. p. 136. doi31 :10.1145/12130.1214432 . ISBN33 978-089791193134 .
3. A, R K.; K, M; M, A K.; O, J B.
(1997). ”C     ”. Eu-
ropean Journal of Operational Research. 97 (3): 509. CiteSeerX35 10.1.1.297.294536 .
doi37 :10.1016/S0377-2217(96)00269-X38 .
4. G, A V. (2008). ”T P A–R A-
   M F P”. Algorithms – ESA 2008. Lecture
Notes in Computer Science. 5193. pp. 466–477. CiteSeerX39 10.1.1.150.510340 .
doi41 :10.1007/978-3-540-87744-8_3942 . ISBN43 978-3-540-87743-144 .
5. G, A V (1997). ”A E I   S-
 M-C F A”. Journal of Algorithms. 22: 1–29.
doi45 :10.1006/jagm.1995.080546 .
6. A, R K.; O, J B. (1991). ”D- -
         
”. Naval Research Logistics. 38 (3): 413. CiteSeerX47 10.1.1.297.569848 .
doi49 :10.1002/1520-6750(199106)38:3<413::AID-NAV3220380310>3.0.CO;2-J50 .
7. G, A V.; T, R E. (2014). ”E -
  ”. Communications of the ACM. 57 (8): 82.
doi51 :10.1145/262803652 .
8. G, A V.; T, R E. (1988). ”A  
  - ”. Journal of the ACM. 35 (4): 921.
doi53 :10.1145/48014.6105154 .

31 https://en.wikipedia.org/wiki/Doi_(identifier)
32 https://doi.org/10.1145%2F12130.12144
33 https://en.wikipedia.org/wiki/ISBN_(identifier)
34 https://en.wikipedia.org/wiki/Special:BookSources/978-0897911931
35 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
36 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.297.2945
37 https://en.wikipedia.org/wiki/Doi_(identifier)
38 https://doi.org/10.1016%2FS0377-2217%2896%2900269-X
39 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
40 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.150.5103
41 https://en.wikipedia.org/wiki/Doi_(identifier)
42 https://doi.org/10.1007%2F978-3-540-87744-8_39
43 https://en.wikipedia.org/wiki/ISBN_(identifier)
44 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-87743-1
45 https://en.wikipedia.org/wiki/Doi_(identifier)
46 https://doi.org/10.1006%2Fjagm.1995.0805
47 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
48 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.297.5698
49 https://en.wikipedia.org/wiki/Doi_(identifier)
https://doi.org/10.1002%2F1520-6750%28199106%2938%3A3%3C413%3A%3AAID-NAV3220380310%
50
3E3.0.CO%3B2-J
51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.1145%2F2628036
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1145%2F48014.61051


9. A, R. K.; M, T. L.; O, J. B. (1993). Network Flows: Theory,


Algorithms, and Applications (1st ed.). Prentice Hall. ISBN55 978-013617549056 .
10. S, Y; V, U (1982). ”A O(2 )  -
 ”. Journal of Algorithms. 3 (2): 128–146. doi57 :10.1016/0196-
6774(82)90013-X58 .
11. C, J.; M, S. N. (1988). ”A    -
    ”. Foundations of Software Technology and
Theoretical Computer Science. Lecture Notes in Computer Science. 338. p. 30.
doi59 :10.1007/3-540-50517-2_6960 . ISBN61 978-3-540-50517-462 .
12. C, B V.; G, A V. (1995). ”O 
-      ”. Integer Programming
and Combinatorial Optimization. Lecture Notes in Computer Science. 920. p. 157.
CiteSeerX63 10.1.1.150.360964 . doi65 :10.1007/3-540-59408-6_4966 . ISBN67 978-3-540-
59408-668 .
13. D, U.; M, W. (1989). ”I G' --
 ? A  ”. ZOR Zeitschrift für Op-
erations Research Methods and Models of Operations Research. 33 (6): 383.
doi69 :10.1007/BF0141593770 .


55 https://en.wikipedia.org/wiki/ISBN_(identifier)
56 https://en.wikipedia.org/wiki/Special:BookSources/978-0136175490
57 https://en.wikipedia.org/wiki/Doi_(identifier)
58 https://doi.org/10.1016%2F0196-6774%2882%2990013-X
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1007%2F3-540-50517-2_69
61 https://en.wikipedia.org/wiki/ISBN_(identifier)
62 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-50517-4
63 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
64 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.150.3609
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1007%2F3-540-59408-6_49
67 https://en.wikipedia.org/wiki/ISBN_(identifier)
68 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-59408-6
69 https://en.wikipedia.org/wiki/Doi_(identifier)
70 https://doi.org/10.1007%2FBF01415937

117 Reverse-delete algorithm

The reverse-delete algorithm is an algorithm1 in graph theory2 used to obtain a min-


imum spanning tree3 from a given connected, edge-weighted graph4 . It first appeared in
Kruskal (1956)5 , but it should not be confused with Kruskal's algorithm6 which appears in
the same paper. If the graph is disconnected, this algorithm will find a minimum spanning
tree for each disconnected part of the graph. The set of these minimum spanning trees is
called a minimum spanning forest, which contains every vertex in the graph.
This algorithm is a greedy algorithm7 , making the locally optimal choice at each step. It is
the reverse of Kruskal's algorithm8 , which is another greedy algorithm to find a minimum
spanning tree. Kruskal’s algorithm starts with an empty graph and adds edges while the
Reverse-Delete algorithm starts with the original graph and deletes edges from it. The
algorithm works as follows:
• Start with graph G, which contains a list of edges E.
• Go through E in decreasing order of edge weights.
• For each edge, check if deleting the edge will further disconnect the graph.
• Perform any deletion that does not lead to additional disconnection.

117.1 Pseudocode
function ReverseDelete(edges[] E) is
    sort E in decreasing order
    Define an index i ← 0

    while i < size(E) do
        Define edge ← E[i]
        delete E[i]
        if graph is not connected then
            E[i] ← edge
        i ← i + 1

    return edges[] E

In the above the graph is the set of edges E with each edge containing a weight and connected
vertices v1 and v2.
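The steps above can be sketched in Python (an illustrative sketch, not from the article; the function name, the (weight, v1, v2) edge encoding, and the plain depth-first connectivity check are our own choices, and the DFS check is far slower than the dynamic-connectivity structure mentioned in the running-time section):

```python
def reverse_delete(edges, n):
    """edges: list of (weight, v1, v2) tuples; n: number of vertices.
    Assumes the input graph is connected."""
    def connected(edge_set):
        # Plain depth-first search from vertex 0.
        adj = {i: [] for i in range(n)}
        for _, u, v in edge_set:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return len(seen) == n

    remaining = sorted(edges, reverse=True)   # decreasing weight
    for e in sorted(edges, reverse=True):
        remaining.remove(e)                   # tentatively delete e
        if not connected(remaining):
            remaining.append(e)               # deletion disconnects: undo
    return remaining
```

On a 4-cycle 0-1-2-3-0 with weights 1, 2, 3, 4 plus a diagonal 0-2 of weight 5, the heaviest edges 5 and 4 are deleted and the edges of weight 1, 2, 3 remain, a spanning tree of total weight 6.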

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Minimum_spanning_tree
4 https://en.wikipedia.org/wiki/Weighted_graph
5 #CITEREFKruskal1956
6 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
7 https://en.wikipedia.org/wiki/Greedy_algorithm
8 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm


117.2 Example

In the following example green edges are being evaluated by the algorithm and red edges
have been deleted.

This is our original graph. The numbers near the edges indicate their edge
weight.

Figure 288

The algorithm will start with the maximum weighted edge, which in this case
is DE with an edge weight of 15. Since deleting edge DE does not further
disconnect the graph, it is deleted.

Figure 289

The next largest edge is FG so the algorithm will check if deleting this edge will
further disconnect the graph. Since deleting the edge will not further disconnect
the graph, the edge is then deleted.

Figure 290


The next largest edge is edge BD so the algorithm will check this edge and
delete the edge.

Figure 291

The next edge to check is edge EG, which will not be deleted since it would
disconnect node G from the graph. Therefore, the next edge to delete is edge
BC.

Figure 292

The next largest edge is edge EF so the algorithm will check this edge and
delete the edge.

Figure 293


The algorithm will then search the remaining edges and will not find another
edge to delete; therefore this is the final graph returned by the algorithm.

Figure 294

117.3 Running time

The algorithm can be shown to run in O(E log V (log log V)³) time (using big-O notation9 ),
where E is the number of edges and V is the number of vertices. This bound is achieved as
follows:
• Sorting the edges by weight using a comparison sort takes O(E log E) time, which can
be simplified to O(E log V) using the fact that the largest E can be is V².
• There are E iterations of the loop.
• Deleting an edge, checking the connectivity of the resulting graph, and (if it is discon-
nected) re-inserting the edge can be done in O(log V (log log V)³) time per operation
(Thorup 200010 ).

117.4 Proof of correctness

It is recommended to read the proof of Kruskal's algorithm11 first.


The proof consists of two parts. First, it is proved that the edges that remain after the
algorithm is applied form a spanning tree. Second, it is proved that the spanning tree is of
minimal weight.

117.4.1 Spanning tree

The remaining subgraph g produced by the algorithm is connected, since the algorithm
checks for connectivity before every deletion. The resulting subgraph also cannot contain a
cycle: if it did, then when scanning the edges in decreasing order the algorithm would have
encountered the maximum-weight edge of that cycle and deleted it, as its deletion cannot
disconnect the graph. Thus g must be a spanning tree of the main graph G.

9 https://en.wikipedia.org/wiki/Big-O_notation
10 #CITEREFThorup2000
11 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm


117.4.2 Minimality

We show that the following proposition P is true by induction: if F is the set of edges
remaining at the end of the while loop, then there is some minimum spanning tree whose
edges are a subset of F.
1. Clearly P holds before the start of the while loop: a weighted connected graph always
has a minimum spanning tree, and since F initially contains all the edges of the graph,
this minimum spanning tree must be a subset of F.
2. Now assume P is true for some non-final edge set F, and let T be a minimum spanning
tree contained in F. We must show that after the algorithm deletes an edge e, there
exists some (possibly other) minimum spanning tree T' that is a subset of F.
a) If the deleted edge e does not belong to T, then T' = T is still a subset of F and
P holds.
b) Otherwise, if e belongs to T: first note that the algorithm only removes edges
whose deletion does not disconnect F, so deleting e leaves F connected. But
deleting e does disconnect the tree T (since e is one of its edges); say e separates
T into subtrees t1 and t2. Since the whole graph remains connected after deleting
e, there must exist a path between t1 and t2 other than e, so there must have
been a cycle C in F before e was removed. This cycle must contain some edge f
that is in F but not in T (if every edge of C were in T, then T would contain a
cycle and would not be a tree). We now claim that T' = T − e + f is a minimum
spanning tree that is a subset of F.
c) First we prove that T' is a spanning tree. Deleting an edge of a tree and adding
another edge that does not create a cycle yields another tree on the same vertices.
Since T was a spanning tree, T' must be a spanning tree too: adding f creates
no cycle because e has been removed, and T contains all the vertices of the graph.
d) Second we prove that T' is a minimum spanning tree. With wt the weight
function, there are three cases for the edges e and f.
i. wt(f) < wt(e) is impossible, since then the weight of T' would be strictly less
than that of T, contradicting the minimality of T.
ii. wt(f) > wt(e) is also impossible: the algorithm goes through edges in
decreasing order of weight, so it would have examined f before e; because of
the cycle C, removing f would not have disconnected F, so the algorithm would
already have removed f. Then f would not be in F, contradicting step b), where
we showed that f exists in F.
iii. Therefore wt(f) = wt(e), so T' is also a minimum spanning tree, and again
P holds.
3. So P holds when the while loop is done, that is, once every edge has been examined.
We proved above that the final F is a spanning tree, and F has a minimum spanning
tree as a subset, so F must be that minimum spanning tree itself.


117.5 See also


• Kruskal's algorithm12
• Prim's algorithm13
• Borůvka's algorithm14
• Dijkstra's algorithm15

117.6 References
• K, J; T, É16 (2006), Algorithm Design, New York: Pearson Ed-
ucation, Inc..
• K, J B.17 (1956), ”O       
    ”, Proceedings of the American Mathematical
Society18 , 7 (1): 48–50, doi19 :10.2307/203324120 , JSTOR21 203324122 .
• T, M23 (2000), ”N- -  -
”, Proc. 32nd ACM Symposium on Theory of Computing24 , . 343–350,
25 :10.1145/335305.33534526 .

12 https://en.wikipedia.org/wiki/Kruskal%27s_algorithm
13 https://en.wikipedia.org/wiki/Prim%27s_algorithm
14 https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm
15 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
16 https://en.wikipedia.org/wiki/%C3%89va_Tardos
17 https://en.wikipedia.org/wiki/Joseph_Kruskal
18 https://en.wikipedia.org/wiki/Proceedings_of_the_American_Mathematical_Society
19 https://en.wikipedia.org/wiki/Doi_(identifier)
20 https://doi.org/10.2307%2F2033241
21 https://en.wikipedia.org/wiki/JSTOR_(identifier)
22 http://www.jstor.org/stable/2033241
23 https://en.wikipedia.org/wiki/Mikkel_Thorup
24 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
25 https://en.wikipedia.org/wiki/Doi_(identifier)
26 https://doi.org/10.1145%2F335305.335345

118 Sethi–Ullman algorithm

In computer science1 , the Sethi–Ullman algorithm is an algorithm2 named after Ravi


Sethi3 and Jeffrey D. Ullman4 , its inventors, for translating abstract syntax trees5 into
machine code6 that uses as few registers7 as possible.

118.1 Overview

When generating code8 for arithmetic expressions, the compiler9 has to decide which is
the best way to translate the expression in terms of number of instructions used as well
as number of registers needed to evaluate a certain subtree. Especially in the case that
free registers are scarce, the order of evaluation10 can be important to the length of the
generated code, because different orderings may lead to larger or smaller numbers of inter-
mediate values being spilled11 to memory and then restored. The Sethi–Ullman algorithm
(also known as Sethi–Ullman numbering) fulfills the property of producing code which
needs the fewest instructions possible as well as the fewest storage references (under the
assumption that at the most commutativity12 and associativity13 apply to the operators
used, but distributive laws i.e. a ∗ b + a ∗ c = a ∗ (b + c) do not hold). Please note that the
algorithm succeeds as well if neither commutativity14 nor associativity15 hold for the
expressions used, and therefore arithmetic transformations cannot be applied. The algorithm
also does not take advantage of common subexpressions or apply directly to expressions
represented as general directed acyclic graphs rather than trees.

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Ravi_Sethi
4 https://en.wikipedia.org/wiki/Jeffrey_D._Ullman
5 https://en.wikipedia.org/wiki/Abstract_syntax_tree
6 https://en.wikipedia.org/wiki/Machine_code
7 https://en.wikipedia.org/wiki/Processor_register
8 https://en.wikipedia.org/wiki/Code_generation_(compiler)
9 https://en.wikipedia.org/wiki/Compiler
10 https://en.wikipedia.org/w/index.php?title=Order_of_evaluation&action=edit&redlink=1
11 https://en.wikipedia.org/wiki/Register_allocation
12 https://en.wikipedia.org/wiki/Commutativity
13 https://en.wikipedia.org/wiki/Associativity
14 https://en.wikipedia.org/wiki/Commutativity
15 https://en.wikipedia.org/wiki/Associativity


118.2 Simple Sethi–Ullman algorithm

The simple Sethi–Ullman algorithm works as follows (for a load/store architecture16 ):


1. Traverse the abstract syntax tree17 in pre- or postorder
a) For every non-constant leaf node, assign a 1 (i.e. 1 register is needed to hold the
variable/field/etc.) if it is the left child of its parent else assign a 0. For every
constant leaf node (RHS of an operation – literals, values), assign a 0.
b) For every non-leaf node n, assign the number of registers needed to evaluate the
respective subtrees of n. If the number of registers needed in the left subtree
(l) are not equal to the number of registers needed in the right subtree (r), the
number of registers needed for the current node n is max(l, r). If l == r, then
the number of registers needed for the current node is r + 1.
2. Code emission
a) If the number of registers needed to compute the left subtree of node n is big-
ger than the number of registers for the right subtree, then the left subtree is
evaluated first (since it may be possible that the one more register needed by
the right subtree to save the result makes the left subtree spill18 ). If the right
subtree needs more registers than the left subtree, the right subtree is evaluated
first accordingly. If both subtrees need equal as much registers, then the order
of evaluation is irrelevant.

118.2.1 Example

For an arithmetic expression a = (b + c + f ∗ g) ∗ (d + 3), the abstract syntax tree19 looks like
this:
=
/ \
a *
/ \
/ \
+ +
/ \ / \
/ \ d 3
+ *
/ \ / \
b c f g

To continue with the algorithm, we need only to examine the arithmetic expression
(b + c + f ∗ g) ∗ (d + 3), i.e. we only have to look at the right subtree of the assignment
'=':
*
/ \
/ \
+ +
/ \ / \

16 https://en.wikipedia.org/wiki/Load/store_architecture
17 https://en.wikipedia.org/wiki/Abstract_syntax_tree
18 https://en.wikipedia.org/wiki/Register_spilling
19 https://en.wikipedia.org/wiki/Abstract_syntax_tree


/ \ d 3
+ *
/ \ / \
b c f g

Now we start traversing the tree (in preorder for now), assigning the number of reg-
isters needed to evaluate each subtree (note that the last summand in the expression
(b + c + f ∗ g) ∗ (d + 3) is a constant):
*2
/\
/ \
+2 +1
/\ /\
/ \ d1 30
+1 *1
/\ /\
b1 c0 f1 g0

From this tree it can be seen that we need 2 registers to compute the left subtree of the '*',
but only 1 register to compute the right subtree. Nodes 'c' and 'g' do not need registers
for the following reasons: If T is a tree leaf, then the number of registers to evaluate T is
either 1 or 0 depending whether T is a left or a right subtree (since an operation such as
add R1, A can handle the right component A directly without storing it into a register).
Therefore we shall start to emit code for the left subtree first, because we might run into
the situation that we only have 2 registers left to compute the whole expression. If we now
computed the right subtree first (which needs only 1 register), we would then need a register
to hold the result of the right subtree while computing the left subtree (which would still
need 2 registers), therefore needing 3 registers concurrently. Computing the left subtree
first needs 2 registers, but the result can be stored in 1, and since the right subtree needs
only 1 register to compute, the evaluation of the expression can do with only 2 registers
left.
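The numbering rules above can be written as a short recursive routine (an illustrative sketch; the nested-tuple tree encoding and the function name are our own, not from the article):

```python
def needed_registers(tree, is_left=True):
    """Sethi-Ullman number of an expression tree.
    A tree is a tuple (op, left, right), a string (variable leaf),
    or an int (constant leaf)."""
    if not isinstance(tree, tuple):
        # Constants and right-hand leaves can be used in place
        # (e.g. "add R1, A"), so they cost no register.
        if isinstance(tree, int) or not is_left:
            return 0
        return 1                       # left-child variable leaf
    _, left, right = tree
    l = needed_registers(left, True)
    r = needed_registers(right, False)
    return l + 1 if l == r else max(l, r)
```

For the example expression (b + c + f ∗ g) ∗ (d + 3), encoded as ('*', ('+', ('+', 'b', 'c'), ('*', 'f', 'g')), ('+', 'd', 3)), the routine reproduces the labels in the annotated tree above: 2 for the left subtree, 1 for the right, and 2 for the root.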

118.3 Advanced Sethi–Ullman algorithm

In an advanced version of the Sethi–Ullman algorithm, the arithmetic expressions are


first transformed, exploiting the algebraic properties of the operators used.

118.4 See also


• Strahler number20 , the minimum number of registers needed to evaluate an expression
without any external storage
• Ershov Number21 , basically the same concept as Strahler number

20 https://en.wikipedia.org/wiki/Strahler_number
21 https://en.wikipedia.org/wiki/Ershov_Number


118.5 References
• S, R22 ; U, J D.23 (1970), ”T G  O C
 A E”, Journal of the Association for Computing Machinery24 ,
17 (4): 715–728, doi25 :10.1145/321607.32162026 , hdl27 :10338.dmlcz/10120728 .

118.6 External links


• Code Generation for Trees29

22 https://en.wikipedia.org/wiki/Ravi_Sethi
23 https://en.wikipedia.org/wiki/Jeffrey_D._Ullman
24 https://en.wikipedia.org/wiki/Journal_of_the_Association_for_Computing_Machinery
25 https://en.wikipedia.org/wiki/Doi_(identifier)
26 https://doi.org/10.1145%2F321607.321620
27 https://en.wikipedia.org/wiki/Hdl_(identifier)
28 http://hdl.handle.net/10338.dmlcz%2F101207
29 http://lambda.uta.edu/cse5317/fall02/notes/node40.html

119 Shortest Path Faster Algorithm


The Shortest Path Faster Algorithm (SPFA) is an improvement of the Bellman–Ford


algorithm1 which computes single-source shortest paths in a weighted directed graph. The
algorithm is believed to work well on random sparse graphs and is particularly suitable for
graphs that contain negative-weight edges.[1] However, the worst-case complexity of SPFA
is the same as that of Bellman–Ford, so for graphs with nonnegative edge weights Dijkstra's
algorithm2 is preferred. The SPFA algorithm was first published by Edward F. Moore3 in
1959, as a generalization of breadth first search4 ;[2] the same algorithm was rediscovered in
1994 by Fanding Duan.[3]

119.1 Algorithm

Given a weighted directed graph G = (V, E) and a source vertex s, the SPFA algorithm
finds the shortest path from s to each vertex v in the graph. The length of the shortest
path from s to v is stored in d(v) for each vertex v.
The basic idea of SPFA is the same as Bellman–Ford algorithm in that each vertex is used as
a candidate to relax its adjacent vertices. The improvement over the latter is that instead of
trying all vertices blindly, SPFA maintains a queue of candidate vertices and adds a vertex
to the queue only if that vertex is relaxed. This process repeats until no more vertex can
be relaxed.
Below is the pseudo-code of the algorithm.[4] Here Q is a first-in, first-out queue of candidate
vertices, and w(u, v) is the edge weight of (u, v).

1 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
2 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
3 https://en.wikipedia.org/wiki/Edward_F._Moore
4 https://en.wikipedia.org/wiki/Breadth_first_search


Figure 295 A demo of SPFA based on Euclidean distance. Red lines are the shortest
path covering (so far observed). Blue lines indicate where relaxing happens, i.e.,
connecting v with a node u in Q, which gives a shorter path from the source to v.

procedure Shortest-Path-Faster-Algorithm(G, s)
1 for each vertex v ≠ s in V(G)
2 d(v) := ∞
3 d(s) := 0
4 offer s into Q
5 while Q is not empty do
6 u := poll Q
7 for each edge (u, v) in E(G) do
8 if d(u) + w(u, v) < d(v) then
9 d(v) := d(u) + w(u, v)
10 if v is not in Q then
11 offer v into Q


The algorithm can also be applied to an undirected graph by replacing each undirected edge
with two directed edges of opposite directions.
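The pseudo-code translates almost line for line into Python (a sketch; the adjacency-list dict representation and membership flags are our own choices, not from the article):

```python
from collections import deque
import math

def spfa(graph, s):
    """graph: dict mapping each vertex to a list of (neighbour, weight) pairs."""
    d = {u: math.inf for u in graph}
    d[s] = 0
    q = deque([s])                     # FIFO queue of candidate vertices
    in_queue = {u: u == s for u in graph}
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if d[u] + w < d[v]:        # relax edge (u, v)
                d[v] = d[u] + w
                if not in_queue[v]:    # enqueue v only if it is absent
                    q.append(v)
                    in_queue[v] = True
    return d
```

On a small graph with a negative edge, say {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, -2)], 3: []}, the routine returns d = {0: 0, 1: -1, 2: 1, 3: 0}.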

119.2 Proof of correctness

We will prove that the algorithm never computes incorrect shortest path lengths.
Lemma: Whenever the queue is checked for emptiness, any vertex currently capable of
causing relaxation is in the queue.
Proof: We want to show that if dist[w] > dist[u] + wt(u, w) for any two vertices u and w at
the time the condition is checked, then u is in the queue. We do so by induction on the number
of iterations of the loop that have already occurred. First we note that this certainly
holds before the loop is entered: if u ≠ s, then relaxation is not possible; relaxation is
possible from u = s, and this vertex is added to the queue immediately before the while loop
is entered. Now, consider what happens inside the loop. A vertex u is popped and is
used to relax all its neighbors, if possible. Therefore, immediately after that iteration of
the loop, u is not capable of causing any more relaxations (and does not have to be in the
queue anymore). However, the relaxation by u might cause some other vertices to become
capable of causing relaxation. If there exists some vertex x such that dist[x] > dist[w] + wt(w, x)
before the current loop iteration, then w is already in the queue. If this condition becomes
true during the current loop iteration, then either dist[x] increased, which is impossible, or
dist[w] decreased, implying that w was relaxed. But after w is relaxed, it is added to the
queue if it is not already present.
Corollary: The algorithm terminates when and only when no further relaxations are pos-
sible.
Proof: If no further relaxations are possible, the algorithm continues to remove vertices
from the queue, but does not add any more into the queue, because vertices are added only
upon successful relaxations. Therefore the queue becomes empty and the algorithm termi-
nates. If any further relaxations are possible, the queue is not empty, and the algorithm
continues to run.
The algorithm fails to terminate if negative-weight cycles are reachable from the source. See
here5 for a proof that relaxations are always possible when negative-weight cycles exist. In a
graph with no cycles of negative weight, when no more relaxations are possible, the correct
shortest paths have been computed (proof6 ). Therefore in graphs containing no cycles of
negative weight, the algorithm will never terminate with incorrect shortest path lengths[5] .

5 https://wcipeg.com/wiki/Bellman%E2%80%93Ford_algorithm#Proof_of_detection_of_negative-weight_cycles
6 https://wcipeg.com/wiki/Shortest_path#Relaxation


119.3 Running time

The worst-case running time of the algorithm is O(|V | · |E|), just like the standard Bellman-
Ford7 algorithm.[1] Experiments suggest that the average running time is O(|E|)[3] , but this
bound on the average run time has not been proved.

119.4 Optimization techniques

The performance of the algorithm is strongly determined by the order in which candidate
vertices are used to relax other vertices. In fact, if Q is a priority queue, then the algorithm
closely resembles Dijkstra's algorithm. However, since a priority queue is not used here, two
techniques are sometimes employed to improve the quality of the queue, which in turn im-
proves the average-case performance (but not the worst-case performance). Both techniques
rearrange the order of elements in Q so that vertices closer to the source are processed first.
Therefore, when implementing these techniques, Q is no longer a first-in, first-out queue,
but rather a normal doubly linked list or double-ended queue.
Small Label First (SLF) technique. In line 11, instead of always pushing vertex v to the
end of the queue, we compare d(v) to d(front(Q)), and insert v at the front of the queue
if d(v) is smaller. The pseudo-code for this technique is (after pushing v to the end of the
queue in line 11):
procedure Small-Label-First(G, Q)
if d(back(Q)) < d(front(Q)) then
u := pop back of Q
push u into front of Q

Large Label Last (LLL) technique. After line 11, we update the queue so that the first
element is smaller than the average, and any element larger than the average is moved to
the end of the queue. The pseudo-code is:
procedure Large-Label-Last(G, Q)
x := average of d(v) for all v in Q
while d(front(Q)) > x
u := pop front of Q
push u to back of Q
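The queue discipline described above can be made concrete with a short sketch. The following Python code is an illustrative implementation, not the chapter's own pseudocode: the adjacency-dict graph representation and the function name are assumptions. It runs SPFA on a deque and applies the SLF rule at push time.

```python
from collections import deque

def spfa_slf(graph, source):
    """SPFA with the Small Label First heuristic.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source.
    Assumes no negative-weight cycle is reachable from source.
    """
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    queue = deque([source])
    in_queue = {v: False for v in graph}
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:          # relaxation step
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    # SLF: push to the front if the new label is
                    # smaller than the current front's label.
                    if queue and dist[v] < dist[queue[0]]:
                        queue.appendleft(v)
                    else:
                        queue.append(v)
                    in_queue[v] = True
    return dist
```

Note that SLF only changes the processing order, not correctness: the lemma above holds regardless of where in the queue a vertex is inserted.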

119.5 References
1. About the so-called SPFA algorithm8
2. M, E F.9 (1959). ”T     ”. Proceed-
ings of the International Symposium on the Theory of Switching. Harvard University
Press. pp. 285–292. SPFA is Moore's “Algorithm D”.

7 https://en.wikipedia.org/wiki/Bellman-Ford
8 http://poj.org/showmessage?message_id=136458
9 https://en.wikipedia.org/wiki/Edward_F._Moore


3. D, F (1994), ”关于最短路径的SPFA快速算法”10 ,


西南交通大学学报 [Journal of Southwest Jiaotong University], 29 (2): 207–212
4. 11

5. ”S P F A”12 . wcipeg.

10 http://wenku.baidu.com/view/3b8c5d778e9951e79a892705.html
11 http://codeforces.com/blog/entry/16221
12 https://wcipeg.com/wiki/Shortest_Path_Faster_Algorithm

120 Shortest path problem


Figure 296 Shortest path (A, C, E, D, F) between vertices A and F in the weighted
directed graph

In graph theory6 , the shortest path problem is the problem of finding a path7 between
two vertices8 (or nodes) in a graph9 such that the sum of the weights10 of its constituent
edges is minimized.

6 https://en.wikipedia.org/wiki/Graph_theory
7 https://en.wikipedia.org/wiki/Path_(graph_theory)
8 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
9 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
10 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#weighted_graph


The problem of finding the shortest path between two intersections on a road map may
be modeled as a special case of the shortest path problem in graphs, where the vertices
correspond to intersections and the edges correspond to road segments, each weighted by
the length of the segment.

120.1 Definition

The shortest path problem can be defined for graphs11 whether undirected12 , directed13 , or
mixed14 . It is defined here for undirected graphs; for directed graphs the definition of path
requires that consecutive vertices be connected by an appropriate directed edge.
Two vertices are adjacent when they are both incident to a common edge. A path15 in an
undirected graph is a sequence16 of vertices P = (v1 , v2 , . . . , vn ) ∈ V × V × · · · × V such that
vi is adjacent to vi+1 for 1 ≤ i < n. Such a path P is called a path of length n − 1 from v1
to vn . (The vi are variables; their numbering here relates to their position in the sequence
and need not relate to any canonical labeling of the vertices.)
Let ei,j be the edge incident to both vi and vj . Given a real-valued17 weight function
f : E → R, and an undirected (simple) graph G, the shortest path from v to v ′ is the path
P = (v1 , v2 , . . . , vn ) (where v1 = v and vn = v ′ ) that over all possible n minimizes the sum
\sum_{i=1}^{n-1} f(e_{i,i+1}). When each edge in the graph has unit weight or f : E → {1}, this is equivalent
to finding the path with fewest edges.
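In the unit-weight case just described, breadth-first search already finds a minimizing path. The sketch below is an illustration added here, not part of the article; the adjacency-dict encoding and function name are assumptions.

```python
from collections import deque

def bfs_shortest_path(adj, start, goal):
    """Return a fewest-edges path from start to goal in an
    unweighted graph, or None if goal is unreachable.

    adj: dict mapping vertex -> iterable of neighbors.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            # Reconstruct the path by walking parent pointers back.
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:     # first visit is the shortest
                parent[v] = u
                queue.append(v)
    return None
```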
The problem is also sometimes called the single-pair shortest path problem, to distin-
guish it from the following variations:
• The single-source shortest path problem, in which we have to find shortest paths
from a source vertex v to all other vertices in the graph.
• The single-destination shortest path problem, in which we have to find shortest
paths from all vertices in the directed graph to a single destination vertex v. This can be
reduced to the single-source shortest path problem by reversing the arcs in the directed
graph.
• The all-pairs shortest path problem, in which we have to find shortest paths between
every pair of vertices v, v' in the graph.
These generalizations have significantly more efficient algorithms than the simplistic ap-
proach of running a single-pair shortest path algorithm on all relevant pairs of vertices.

11 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
12 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)#Undirected_graph
13 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)#Directed_graph
14 https://en.wikipedia.org/wiki/Mixed_graph
15 https://en.wikipedia.org/wiki/Path_(graph_theory)
16 https://en.wikipedia.org/wiki/Sequence
17 https://en.wikipedia.org/wiki/Function_(mathematics)#Real-valued_functions


120.2 Algorithms

The most important algorithms for solving this problem are:


• Dijkstra's algorithm18 solves the single-source shortest path problem with non-negative
edge weight.
• Bellman–Ford algorithm19 solves the single-source problem if edge weights may be nega-
tive.
• A* search algorithm20 solves for single pair shortest path using heuristics to try to speed
up the search.
• Floyd–Warshall algorithm21 solves all pairs shortest paths.
• Johnson's algorithm22 solves all pairs shortest paths, and may be faster than Floyd–
Warshall on sparse graphs23 .
• Viterbi algorithm24 solves the shortest stochastic path problem with an additional prob-
abilistic weight on each node.
Additional algorithms and associated evaluations may be found in Cherkassky, Goldberg &
Radzik (1996)25 .
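As a concrete sketch of the first entry in the list, here is a compact Dijkstra implementation with a binary heap. This is a standard "lazy deletion" variant added for illustration; the graph encoding is an assumption, not taken from the article.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest distances for non-negative edge
    weights, using a binary heap with lazy deletion.

    graph: dict mapping vertex -> list of (neighbor, weight).
    Returns a dict of distances for reachable vertices.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry, skip
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd              # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```

With a binary heap this runs in O((E + V) log V), matching the Johnson 1977 row in the tables below.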

120.3 Single-source shortest paths

120.3.1 Undirected graphs

Weights   Time complexity26   Author

ℝ+        O(V2)               Dijkstra 195927
ℝ+        O((E + V) log V)    Johnson 197728 (binary heap29)
ℝ+        O(E + V log V)      Fredman & Tarjan 198430 (Fibonacci heap31)
ℕ         O(E)                Thorup 199932 (requires constant-time multiplication)

120.3.2 Unweighted graphs

18 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
19 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
20 https://en.wikipedia.org/wiki/A*_search_algorithm
21 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
22 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
23 https://en.wikipedia.org/wiki/Sparse_graph
24 https://en.wikipedia.org/wiki/Viterbi_algorithm
25 #CITEREFCherkasskyGoldbergRadzik1996
27 #CITEREFDijkstra1959
28 #CITEREFJohnson1977
29 https://en.wikipedia.org/wiki/Binary_heap
30 #CITEREFFredmanTarjan1984
31 https://en.wikipedia.org/wiki/Fibonacci_heap
32 #CITEREFThorup1999


Algorithm                Time complexity   Author

Breadth-first search33   O(E + V)

120.3.3 Directed acyclic graphs (DAGs)

An algorithm using topological sorting34 can solve the single-source shortest path problem
in linear time, Θ(E + V), in weighted DAGs.
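That linear-time method can be sketched as follows: compute a topological order (here with Kahn's algorithm), then relax each vertex's outgoing edges in that order. The representation below is an assumption for illustration; note that, because a DAG has no cycles, negative edge weights are also handled correctly.

```python
from collections import defaultdict, deque

def dag_shortest_paths(edges, source):
    """Single-source shortest distances in a weighted DAG in
    O(E + V): topologically sort, then relax edges in that order.

    edges: list of (u, v, weight) triples.
    """
    adj = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm yields a topological order.
    order = []
    queue = deque(n for n in nodes if indeg[n] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = {n: float('inf') for n in nodes}
    dist[source] = 0
    for u in order:                       # relax in topological order
        if dist[u] < float('inf'):
            for v, w in adj[u]:
                dist[v] = min(dist[v], dist[u] + w)
    return dist
```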

120.3.4 Directed graphs with nonnegative weights

The following table is taken from Schrijver (2004)35 , with some corrections and additions.
A green background indicates an asymptotically best bound in the table; L is the maximum
length (or weight) among all edges, assuming integer edge weights.

Algorithm Time complexity Author


O(V 2 EL) Ford 195636
Bellman–Ford algorithm37 O(VE) Shimbel 195538 , Bellman
195839 , Moore 195940
O(V 2 log V) Dantzig 196041
Dijkstra's algorithm42 with list            O(V 2)             Leyzorek et al. 195743 , Dijkstra 195944 , Minty (see Pollack & Wiebenson 196045 ), Whiting & Hillier 196046
Dijkstra's algorithm47 with binary heap48   O((E + V) log V)   Johnson 197749

33 https://en.wikipedia.org/wiki/Breadth-first_search
https://en.wikipedia.org/wiki/Topological_sorting#Application_to_shortest_path_
34
finding
35 #CITEREFSchrijver2004
36 #CITEREFFord1956
37 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
38 #CITEREFShimbel1955
39 #CITEREFBellman1958
40 #CITEREFMoore1959
41 #CITEREFDantzig1960
42 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
43 #CITEREFLeyzorekGrayJohnsonLadew1957
44 #CITEREFDijkstra1959
45 #CITEREFPollackWiebenson1960
46 #CITEREFWhitingHillier1960
47 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
48 https://en.wikipedia.org/wiki/Binary_heap
49 #CITEREFJohnson1977



Dijkstra's algorithm50 with Fibonacci heap51   O(E + V log V)       Fredman & Tarjan 198452 , Fredman & Tarjan 198753
                                               O(E log log L)       Johnson 198154 , Karlsson & Poblete 198355
Gabow's algorithm56                            O(E log_{E/V} L)     Gabow 198357 , Gabow 198558
                                               O(E + V √(log L))    Ahuja et al. 199059
Thorup                                         O(E + V log log V)   Thorup 200460


120.3.5 Directed graphs with arbitrary weights without negative cycles

Algorithm                  Time complexity   Author

                           O(V 2 EL)         Ford 195663
Bellman–Ford algorithm64   O(VE)             Shimbel 195565 , Bellman 195866 , Moore 195967


50 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
51 https://en.wikipedia.org/wiki/Fibonacci_heap
52 #CITEREFFredmanTarjan1984
53 #CITEREFFredmanTarjan1987
54 #CITEREFJohnson1981
55 #CITEREFKarlssonPoblete1983
https://en.wikipedia.org/w/index.php?title=Gabow%27s_algorithm_(single-source_
56
shortest_paths)&action=edit&redlink=1
57 #CITEREFGabow1983
58 #CITEREFGabow1985
59 #CITEREFAhujaMehlhornOrlinTarjan1990
60 #CITEREFThorup2004
63 #CITEREFFord1956
64 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
65 #CITEREFShimbel1955
66 #CITEREFBellman1958
67 #CITEREFMoore1959


120.3.6 Planar directed graphs with arbitrary weights

120.4 All-pairs shortest paths

The all-pairs shortest path problem finds the shortest paths between every pair of vertices
v, v' in the graph. The all-pairs shortest paths problem for unweighted directed graphs was
introduced by Shimbel (1953)70 , who observed that it could be solved by a linear number
of matrix multiplications that takes a total time of O(V^4).
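The Θ(V³) Floyd–Warshall algorithm that heads both tables below can be sketched compactly. The code is an illustration added here (vertex labels 0..n−1 and the edge-list encoding are assumptions), not the book's own pseudocode.

```python
def floyd_warshall(n, edges):
    """All-pairs shortest distances in O(V^3).

    n: number of vertices, labeled 0..n-1.
    edges: list of (u, v, weight) triples (directed).
    Assumes no negative cycle.
    """
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):          # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```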

120.4.1 Undirected graph

Weights    Time complexity              Algorithm

ℝ+         O(V3)                        Floyd–Warshall algorithm71
{+1, +∞}   O(V^ω log V)                 Seidel's algorithm72 (expected running time)
ℕ          O(V3 / 2^Ω((log n)^{1/2}))   Williams 201473
ℝ+         O(EV log α(E,V))             Pettie & Ramachandran 200274
ℕ          O(EV)                        Thorup 199975 applied to every vertex (requires constant-time multiplication)

120.4.2 Directed graph

Weights                  Time complexity              Algorithm

ℝ (no negative cycles)   O(V3)                        Floyd–Warshall algorithm76
ℕ                        O(V3 / 2^Ω((log n)^{1/2}))   Williams 201477
ℝ (no negative cycles)   O(EV + V2 log V)             Johnson–Dijkstra78
ℝ (no negative cycles)   O(EV + V2 log log V)         Pettie 200479
ℕ                        O(EV + V2 log log V)         Hagerup 200080

70 #CITEREFShimbel1953
71 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
72 https://en.wikipedia.org/wiki/Seidel%27s_algorithm
73 #CITEREFWilliams2014
74 #CITEREFPettieRamachandran2002
75 #CITEREFThorup1999
76 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
77 #CITEREFWilliams2014
78 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
79 #CITEREFPettie2004
80 #CITEREFHagerup2000


120.5 Applications

Shortest path algorithms are applied to automatically find directions between physical lo-
cations, such as driving directions on web mapping81 websites like MapQuest82 or Google
Maps83 . For this application fast specialized algorithms are available.[1]
If one represents a nondeterministic abstract machine84 as a graph where vertices describe
states and edges describe possible transitions, shortest path algorithms can be used to find
an optimal sequence of choices to reach a certain goal state, or to establish lower bounds
on the time needed to reach a given state. For example, if vertices represent the states of a
puzzle like a Rubik's Cube85 and each directed edge corresponds to a single move or turn,
shortest path algorithms can be used to find a solution that uses the minimum possible
number of moves.
In a networking86 or telecommunications87 context, this shortest path problem is sometimes
called the min-delay path problem and is usually tied to a widest path problem88 . For
example, the algorithm may seek the shortest (min-delay) widest path, or widest shortest
(min-delay) path.
A more lighthearted application is the game of "six degrees of separation89 ", which tries to find
the shortest path in graphs like movie stars appearing in the same film.
Other applications, often studied in operations research90 , include plant and facility layout,
robotics91 , transportation92 , and VLSI93 design.[2]

120.5.1 Road networks

A road network can be considered as a graph with positive weights. The nodes represent
road junctions and each edge of the graph is associated with a road segment between two
junctions. The weight of an edge may correspond to the length of the associated road
segment, the time needed to traverse the segment, or the cost of traversing the segment.
Using directed edges it is also possible to model one-way streets. Such graphs are special
in the sense that some edges are more important than others for long distance travel (e.g.
highways). This property has been formalized using the notion of highway dimension.[3]
There are a great number of algorithms that exploit this property and are therefore able to
compute the shortest path much faster than would be possible on general graphs.

81 https://en.wikipedia.org/wiki/Web_mapping
82 https://en.wikipedia.org/wiki/MapQuest
83 https://en.wikipedia.org/wiki/Google_Maps
84 https://en.wikipedia.org/wiki/Abstract_machine
85 https://en.wikipedia.org/wiki/Rubik%27s_Cube
86 https://en.wikipedia.org/wiki/Computer_network
87 https://en.wikipedia.org/wiki/Telecommunications_network
88 https://en.wikipedia.org/wiki/Widest_path_problem
89 https://en.wikipedia.org/wiki/Six_degrees_of_separation
90 https://en.wikipedia.org/wiki/Operations_research
91 https://en.wikipedia.org/wiki/Robotics
92 https://en.wikipedia.org/wiki/Transportation
93 https://en.wikipedia.org/wiki/Very-large-scale_integration


All of these algorithms work in two phases. In the first phase, the graph is preprocessed
without knowing the source or target node. The second phase is the query phase. In this
phase, source and target node are known. The idea is that the road network is static, so
the preprocessing phase can be done once and used for a large number of queries on the
same road network.
The algorithm with the fastest known query time is called hub labeling and is able to
compute a shortest path on the road networks of Europe or the USA in a fraction of a
microsecond.[4] Other techniques that have been used are:
• ALT (A* search94 , landmarks, and triangle inequality95 )
• Arc flags
• Contraction hierarchies96
• Transit node routing
• Reach-based pruning
• Labeling
• Hub labels97

120.6 Related problems

For shortest path problems in computational geometry98 , see Euclidean shortest path99 .
The travelling salesman problem100 is the problem of finding the shortest path that goes
through every vertex exactly once, and returns to the start. Unlike the shortest path
problem, which can be solved in polynomial time in graphs without negative cycles, the
travelling salesman problem is NP-complete101 and, as such, is believed not to be efficiently
solvable for large sets of data (see P = NP problem102 ). The problem of finding the longest
path103 in a graph is also NP-complete.
The Canadian traveller problem104 and the stochastic shortest path problem are general-
izations where either the graph isn't completely known to the mover, changes over time, or
where actions (traversals) are probabilistic.
The shortest multiple disconnected path [5] is a representation of the primitive path network
within the framework of Reptation theory105 .

94 https://en.wikipedia.org/wiki/A*_search
95 https://en.wikipedia.org/wiki/Triangle_inequality
96 https://en.wikipedia.org/wiki/Contraction_hierarchies
97 https://en.wikipedia.org/wiki/Hub_labels
98 https://en.wikipedia.org/wiki/Computational_geometry
99 https://en.wikipedia.org/wiki/Euclidean_shortest_path
100 https://en.wikipedia.org/wiki/Traveling_salesman_problem
101 https://en.wikipedia.org/wiki/NP-complete
102 https://en.wikipedia.org/wiki/P_%3D_NP_problem
103 https://en.wikipedia.org/wiki/Longest_path_problem
104 https://en.wikipedia.org/wiki/Canadian_traveller_problem
105 https://en.wikipedia.org/wiki/Reptation_theory


The widest path problem106 seeks a path so that the minimum label of any edge is as large
as possible.

120.6.1 Strategic shortest-paths


Sometimes, the edges in a graph have personalities: each edge has its own selfish interest.
An example is a communication network, in which each edge is a computer that possibly
belongs to a different person. Different computers have different transmission speeds, so
every edge in the network has a numeric weight equal to the number of milliseconds it takes
to transmit a message. Our goal is to send a message between two points in the network in
the shortest time possible. If we know the transmission-time of each computer (the weight
of each edge), then we can use a standard shortest-paths algorithm. If we do not know the
transmission times, then we have to ask each computer to tell us its transmission-time. But,
the computers may be selfish: a computer might tell us that its transmission time is very
long, so that we will not bother it with our messages. A possible solution to this problem
is to use a variant of the VCG mechanism119 , which gives the computers an incentive to
reveal their true weights.

106 https://en.wikipedia.org/wiki/Widest_path_problem
https://en.wikipedia.org/wiki/Vickrey%E2%80%93Clarke%E2%80%93Groves_mechanism#
119
quickest_paths


120.7 Linear programming formulation

There is a natural linear programming120 formulation for the shortest path problem, given
below. It is very simple compared to most other uses of linear programs in discrete
optimization121 ; however, it illustrates connections to other concepts.
Given a directed graph (V, A) with source node s, target node t, and cost wij for each edge
(i, j) in A, consider the program with variables xij


minimize ∑_{ij∈A} w_{ij} x_{ij}  subject to  x ≥ 0  and, for all i,

    ∑_j x_{ij} − ∑_j x_{ji} = 1, if i = s;  −1, if i = t;  0, otherwise.
The intuition behind this is that xij is an indicator variable for whether edge (i, j) is
part of the shortest path: 1 when it is, and 0 if it is not. We wish to select the set
of edges with minimal weight, subject to the constraint that this set forms a path from
s to t (represented by the equality constraint: for all vertices except s and t, the number of
incoming and outgoing edges that are part of the path must be the same, i.e., the selected
edge set should form a path from s to t).
This LP has the special property that it is integral; more specifically, every basic optimal
solution122 (when one exists) has all variables equal to 0 or 1, and the set of edges whose
variables equal 1 form an s-t dipath123 . See Ahuja et al.[6] for one proof, although the origin
of this approach dates back to mid-20th century.
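The flow-conservation constraints above can be checked mechanically. The sketch below is a hand-rolled illustration, not a solver; the edge-tuple encoding and function names are assumptions. It verifies that the 0/1 indicator vector of an s–t path is feasible for the program, while a broken path is not.

```python
def flow_balance(edges, x, node):
    """Net outflow of `node` under edge values x (dict keyed by
    (u, v) edge). The LP requires this to equal +1 at s, -1 at t,
    and 0 at every other vertex."""
    out_flow = sum(x[e] for e in edges if e[0] == node)
    in_flow = sum(x[e] for e in edges if e[1] == node)
    return out_flow - in_flow

def is_feasible(edges, nodes, x, s, t):
    """Check the shortest-path LP constraints for a candidate x."""
    if any(x[e] < 0 for e in edges):      # x >= 0
        return False
    for i in nodes:
        want = 1 if i == s else (-1 if i == t else 0)
        if flow_balance(edges, x, i) != want:
            return False
    return True
```

The integrality property quoted above then says that among all feasible x, some basic optimum is exactly such a 0/1 path indicator.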
The dual for this linear program is
maximize yt − ys subject to for all ij, yj − yi ≤ wij
and feasible duals correspond to the concept of a consistent heuristic124 for the A* algo-

rithm125 for shortest paths. For any feasible dual y the reduced costs126 wij = wij − yj + yi
127 128
are nonnegative and A* essentially runs Dijkstra's algorithm on these reduced costs.

120.8 General algebraic framework on semirings: the algebraic path problem


120 https://en.wikipedia.org/wiki/Linear_programming
121 https://en.wikipedia.org/wiki/Discrete_optimization
122 https://en.wikipedia.org/wiki/Linear_programming#Theory
123 https://en.wikipedia.org/wiki/Dipath
124 https://en.wikipedia.org/wiki/Consistent_heuristic
125 https://en.wikipedia.org/wiki/A-star_algorithm
126 https://en.wikipedia.org/wiki/Reduced_cost
127 https://en.wikipedia.org/wiki/A-star_algorithm
128 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm


Many problems can be framed as a form of the shortest path for some suitably substituted
notions of addition along a path and taking the minimum. The general approach to these
is to consider the two operations to be those of a semiring130 . Semiring multiplication is
done along the path, and the addition is between paths. This general framework is known
as the algebraic path problem131 .[7][8][9]
Most of the classic shortest-path algorithms (and new ones) can be formulated as solving
linear systems over such algebraic structures.[10]
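For the tropical (min, +) semiring this looks as follows: "multiplication" along a path is addition of weights, and "addition" between paths is taking the minimum. Repeated squaring of the weight matrix under the min-plus product then yields all-pairs shortest distances. The code is a standard illustration added here, with the matrix encoding assumed.

```python
def min_plus(A, B):
    """Matrix 'product' over the (min, +) semiring:
    C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def all_pairs(W):
    """All-pairs shortest distances by repeated min-plus squaring
    of the weight matrix W (W[i][i] = 0, inf for missing edges).
    After ceil(log2(n-1)) squarings, paths of up to n-1 edges are
    covered, which suffices when there are no negative cycles."""
    n = len(W)
    D, steps = W, 1
    while steps < n - 1:
        D = min_plus(D, D)
        steps *= 2
    return D
```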
More recently, an even more general framework for solving these (and much less obviously
related problems) has been developed under the banner of valuation algebras132 .[11]

120.9 Shortest path in stochastic time-dependent networks

In real-life situations, the transportation network is usually stochastic and time-dependent.


In fact, a traveler traversing a link daily may experience different travel times on that link
due not only to the fluctuations in travel demand (origin-destination matrix) but also due
to such incidents as work zones, bad weather conditions, accidents and vehicle breakdowns.
As a result, a stochastic time-dependent (STD) network is a more realistic representation
of an actual road network compared with the deterministic one.[12][13]
Despite considerable progress during the course of the past decade, it remains a controversial
question how an optimal path should be defined and identified in stochastic road networks.
In other words, there is no unique definition of an optimal path under uncertainty. One
possible and common answer to this question is to find a path with the minimum expected
travel time. The main advantage of using this approach is that efficient shortest path
algorithms introduced for the deterministic networks can be readily employed to identify
the path with the minimum expected travel time in a stochastic network. However, the
resulting optimal path identified by this approach may not be reliable, because this approach
fails to address travel time variability. To tackle this issue, some researchers use the distribution
of travel time instead of its expected value, and so find the probability distribution of
total travelling time using different optimization methods such as dynamic programming133
and Dijkstra's algorithm134 .[14] These methods use stochastic optimization135 , specifically
stochastic dynamic programming to find the shortest path in networks with probabilistic
arc length.[15] The concept of travel time reliability is used interchangeably with travel time
variability in the transportation research literature, so that, in general, one can say that
the higher the variability in travel time, the lower the reliability would be, and vice versa.
In order to account for travel time reliability more accurately, two common alternative def-
initions for an optimal path under uncertainty have been suggested. Some have introduced
the concept of the most reliable path, aiming to maximize the probability of arriving on
time or earlier than a given travel time budget. Others, alternatively, have put forward

130 https://en.wikipedia.org/wiki/Semiring
https://en.wikipedia.org/w/index.php?title=Algebraic_path_problem&action=edit&
131
redlink=1
132 https://en.wikipedia.org/wiki/Valuation_algebra
133 https://en.wikipedia.org/wiki/Dynamic_programming
134 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
135 https://en.wikipedia.org/wiki/Stochastic_optimization


the concept of an α-reliable path based on which they intended to minimize the travel time
budget required to ensure a pre-specified on-time arrival probability.

120.10 See also


• Pathfinding136
• IEEE 802.1aq137
• Flow network138
• Shortest path tree139
• Euclidean shortest path140
• K shortest path routing141
• Min-plus matrix multiplication142
• Bidirectional search143 , an algorithm that finds the shortest path between two vertices on
a directed graph

120.11 References

120.11.1 Notes
1. S, P144 (M 23, 2009). ”F  ”145 . G
T T. Cite journal requires |journal= (help146 )CS1 maint: ref=harv (link147 )
2. C, D Z. (D 1996). ”D   
    ”. ACM Computing Surveys. 28 (4es).
Article 18. doi148 :10.1145/242224.242246149 .CS1 maint: ref=harv (link150 )
3. Abraham, Ittai; Fiat, Amos; Goldberg, Andrew V.151 ; Werneck, Renato F. ”High-
way Dimension, Shortest Paths, and Provably Efficient Algorithms”152 . ACM-SIAM
Symposium on Discrete Algorithms, pages 782–793, 2010.
4. Abraham, Ittai; Delling, Daniel; Goldberg, Andrew V.153 ; Werneck, Renato F.
"A Hub-Based Labeling Algorithm

136 https://en.wikipedia.org/wiki/Pathfinding
137 https://en.wikipedia.org/wiki/IEEE_802.1aq
138 https://en.wikipedia.org/wiki/Flow_network
139 https://en.wikipedia.org/wiki/Shortest_path_tree
140 https://en.wikipedia.org/wiki/Euclidean_shortest_path
141 https://en.wikipedia.org/wiki/K_shortest_path_routing
142 https://en.wikipedia.org/wiki/Min-plus_matrix_multiplication
143 https://en.wikipedia.org/wiki/Bidirectional_search
144 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
145 https://www.youtube.com/watch?v=-0ErpE8tQbw
148 https://en.wikipedia.org/wiki/Doi_(identifier)
149 https://doi.org/10.1145%2F242224.242246
151 https://en.wikipedia.org/wiki/Andrew_V._Goldberg
http://research.microsoft.com/pubs/115272/soda10.pdf%20research.microsoft.com/pubs/
152
115272/soda10.pdf
153 https://en.wikipedia.org/wiki/Andrew_V._Goldberg


for Shortest Paths on Road Networks"154 . Symposium on Experimental Algorithms,
pages 230–241, 2011.
5. K, M (2005). ”S    
     -  - -
 ”. Computer Physics Communications. 168 (3): 209–232.
Bibcode155 :2005CoPhC.168..209K156 . doi157 :10.1016/j.cpc.2005.01.020158 .CS1 maint:
ref=harv (link159 )
6. A, R K.160 ; M, T L.161 ; O, J B.162 (1993).
Network Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN163 978-0-
13-617549-0164 .
7. P, C (1967), ”S       -
     (O      -
 )”,  R (.), Théorie des graphes (journées internationales
d'études) -- Theory of Graphs (international symposium), Rome (Italy), July 1966:
Dunod (Paris) et Gordon and Breach (New York), p. 271CS1 maint: location (link165 )
8. D, J C; P, C (1971), Problèmes de cheminement dans
les graphes (Path Problems in Graphs), Dunod (Paris)
9. B, J; T, G (4 A 2010). Path Problems in
Networks166 . M & C P. . 9–. ISBN167 978-1-59829-924-
3168 .
10. G, M; M, M (2008). Graphs, Dioids and Semirings: New
Models and Algorithms. Springer Science & Business Media. chapter 4. ISBN169 978-
0-387-75450-5170 .
11. P, M; K, J (2011). Generic Inference: A Unifying Theory for
Automated Reasoning. John Wiley & Sons. Chapter 6. Valuation Algebras for Path
Problems. ISBN171 978-1-118-01086-0172 .
12. Loui, R.P., 1983. Optimal paths in graphs with stochastic or multidimensional
weights. Communications of the ACM, 26(9), pp.670-676.
13. R-B, M; S-M, A; B,
M; A, C W (2015). ”M-   
 -    - 

154 http://research.microsoft.com/pubs/142356/HL-TR.pdf
155 https://en.wikipedia.org/wiki/Bibcode_(identifier)
156 https://ui.adsabs.harvard.edu/abs/2005CoPhC.168..209K
157 https://en.wikipedia.org/wiki/Doi_(identifier)
158 https://doi.org/10.1016%2Fj.cpc.2005.01.020
159 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
160 https://en.wikipedia.org/wiki/Ravindra_K._Ahuja
161 https://en.wikipedia.org/wiki/Thomas_L._Magnanti
162 https://en.wikipedia.org/wiki/James_B._Orlin
163 https://en.wikipedia.org/wiki/ISBN_(identifier)
164 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-617549-0
165 https://en.wikipedia.org/wiki/Category:CS1_maint:_location
166 https://books.google.com/books?id=fZJeAQAAQBAJ&pg=PA9
167 https://en.wikipedia.org/wiki/ISBN_(identifier)
168 https://en.wikipedia.org/wiki/Special:BookSources/978-1-59829-924-3
169 https://en.wikipedia.org/wiki/ISBN_(identifier)
170 https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-75450-5
171 https://en.wikipedia.org/wiki/ISBN_(identifier)
172 https://en.wikipedia.org/wiki/Special:BookSources/978-1-118-01086-0

1263
Shortest path problem

 ”. Expert Systems with Applications. 42 (12): 5056–5064.


doi173 :10.1016/j.eswa.2015.02.046174 .
14. O, M H (2014). ”F     
 –     ”. International
Journal of Operational Research. 21 (1): 25–37. doi175 :10.1504/IJOR.2014.064020176 .
15. O, M H (2014). ”A D'  
       
 ”. International Journal of Operational Research. 21 (2): 143–154.
doi177 :10.1504/IJOR.2014.064541178 .

120.11.2 Bibliography
• A, R K.; M, K; O, J; T, R
E.179 (A 1990). ”F      -
”180 . Journal of the ACM. ACM. 37 (2): 213–223. doi181 :10.1145/77600.77615182 .
hdl183 :1721.1/47994184 .CS1 maint: ref=harv (link185 )
• B, R186 (1958). ”O   ”. Quarterly of Applied
Mathematics. 16: 87–90. doi187 :10.1090/qam/102435188 . MR189 0102435190 .CS1 maint:
ref=harv (link191 )
• C, B V.; G, A V.192 ; R, T (1996).
”S  :    ”193 . Mathe-
matical Programming. Ser. A. 73 (2): 129–174. doi194 :10.1016/0025-5610(95)00021-6195 .
MR196 1392160197 .CS1 maint: ref=harv (link198 )

• C, T H.199 ; L, C E.200 ; R, R L.201 ; S,
C202 (2001) [1990]. ”S-S S P  A-P S-
 P”. Introduction to Algorithms203 (2 .). MIT P  MG-H.
. 580–642. ISBN204 0-262-03293-7205 .
• D, G. B. (J 1960). ”O  S R   N-
”. Management Science. 6 (2): 187–190. doi206 :10.1287/mnsc.6.2.187207 .CS1
maint: ref=harv (link208 )
• D, J C; P, C (1971), Problèmes de cheminement dans les
graphes (Path Problems in Graphs), Dunod (Paris)
• D, E. W.209 (1959). ”A       
”. Numerische Mathematik. 1: 269–271. doi210 :10.1007/BF01386390211 .CS1
maint: ref=harv (link212 )
• F, L. R. (1956). ”N F T”213 . R C. P-923.
Cite journal requires |journal= (help214 )CS1 maint: ref=harv (link215 )
• F, M L216 ; T, R E.217 (1984). Fibonacci
heaps and their uses in improved network optimization algorithms. 25th An-
nual Symposium on Foundations of Computer Science. 218
IEEE . pp. 338–346.
doi219 :10.1109/SFCS.1984.715934220 . ISBN221 0-8186-0591-X222 .CS1 maint: ref=harv
(link223 )
• F, M L224 ; T, R E.225 (1987). ”F-
         -

199 https://en.wikipedia.org/wiki/Thomas_H._Cormen
200 https://en.wikipedia.org/wiki/Charles_E._Leiserson
201 https://en.wikipedia.org/wiki/Ron_Rivest
202 https://en.wikipedia.org/wiki/Clifford_Stein
203 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
204 https://en.wikipedia.org/wiki/ISBN_(identifier)
205 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
206 https://en.wikipedia.org/wiki/Doi_(identifier)
207 https://doi.org/10.1287%2Fmnsc.6.2.187
208 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
209 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
210 https://en.wikipedia.org/wiki/Doi_(identifier)
211 https://doi.org/10.1007%2FBF01386390
212 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
213 http://www.rand.org/pubs/papers/P923.html
214 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
215 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
216 https://en.wikipedia.org/wiki/Michael_Fredman
217 https://en.wikipedia.org/wiki/Robert_Tarjan
218 https://en.wikipedia.org/wiki/IEEE
219 https://en.wikipedia.org/wiki/Doi_(identifier)
220 https://doi.org/10.1109%2FSFCS.1984.715934
221 https://en.wikipedia.org/wiki/ISBN_(identifier)
222 https://en.wikipedia.org/wiki/Special:BookSources/0-8186-0591-X
223 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
224 https://en.wikipedia.org/wiki/Michael_Fredman
225 https://en.wikipedia.org/wiki/Robert_Tarjan

1265
Shortest path problem

”. Journal of the Association for Computing Machinery. 34 (3): 596–615.


doi226 :10.1145/28869.28874227 .CS1 maint: ref=harv (link228 )
• G, H. N. (1983). ”S    ”. Proceedings
of the 24th Annual Symposium on Foundations of Computer Science (FOCS 1983)229
(PDF). . 248–258. 230 :10.1109/SFCS.1983.68231 .CS1 maint: ref=harv (link232 )
• G, H N. (1985). ”S    ”.
Journal of Computer and System Sciences233 . 31 (2): 148–168. doi234 :10.1016/0022-
0000(85)90039-X235 . MR236 0828519237 .CS1 maint: ref=harv (link238 )
• H, T (2000). M, U; R, J D. P.; W, E
(.). Improved Shortest Paths on the Word RAM239 . Proceedings of the 27th Interna-
tional Colloquium on Automata, Languages and Programming. pp. 61–72. ISBN240 978-
3-540-67715-4241 .CS1 maint: ref=harv (link242 )
• J, D B.243 (1977). ”E   -
    ”. Journal of the ACM244 . 24 (1): 1–13.
doi245 :10.1145/321992.321993246 .
• J, D B.247 (D 1981). ”A     -
     O(log log D) time”. Mathematical Systems
Theory. 15 (1): 295–309. doi248 :10.1007/BF01786986249 . MR250 0683047251 .CS1 maint:
ref=harv (link252 )

226 https://en.wikipedia.org/wiki/Doi_(identifier)
227 https://doi.org/10.1145%2F28869.28874
228 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
http://www.eecs.umich.edu/%7Epettie/matching/Gabow-scaling-algorithms-for-network-
229
problems.pdf
230 https://en.wikipedia.org/wiki/Doi_(identifier)
231 https://doi.org/10.1109%2FSFCS.1983.68
232 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
233 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
234 https://en.wikipedia.org/wiki/Doi_(identifier)
235 https://doi.org/10.1016%2F0022-0000%2885%2990039-X
236 https://en.wikipedia.org/wiki/MR_(identifier)
237 http://www.ams.org/mathscinet-getitem?mr=0828519
238 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
239 http://dl.acm.org/citation.cfm?id=686343&CFID=563073233&CFTOKEN=28801665
240 https://en.wikipedia.org/wiki/ISBN_(identifier)
241 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-67715-4
242 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
243 https://en.wikipedia.org/wiki/Donald_B._Johnson
244 https://en.wikipedia.org/wiki/Journal_of_the_ACM
245 https://en.wikipedia.org/wiki/Doi_(identifier)
246 https://doi.org/10.1145%2F321992.321993
247 https://en.wikipedia.org/wiki/Donald_B._Johnson
248 https://en.wikipedia.org/wiki/Doi_(identifier)
249 https://doi.org/10.1007%2FBF01786986
250 https://en.wikipedia.org/wiki/MR_(identifier)
251 http://www.ams.org/mathscinet-getitem?mr=0683047
252 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv

1266
References

• K, R G.; P, P V. (1983). ”A O(m log log D) algorithm
for shortest paths”. Discrete Applied Mathematics253 . 6 (1): 91–93. doi254 :10.1016/0166-
218X(83)90104-X255 . MR256 0700028257 .CS1 maint: ref=harv (link258 )
• L, M.; G, R. S.; J, A. A.; L, W. C.; M, S. R., J.;
P, R. M.; S, R. N. (1957). Investigation of Model Techniques — First Annual
Report — 6 June 1956 — 1 July 1957 — A Study of Model Techniques for Communication
Systems. Cleveland, Ohio: Case Institute of Technology.CS1 maint: ref=harv (link259 )
• M, E. F.260 (1959). ”T     ”. Proceedings of an
International Symposium on the Theory of Switching (Cambridge, Massachusetts, 2–5
April 1957). Cambridge: Harvard University Press. pp. 285–292.CS1 maint: ref=harv
(link261 )
• P, S; R, V (2002). Computing shortest paths with com-
parisons and additions262 . Proceedings of the Thirteenth Annual ACM-SIAM Sympo-
sium on Discrete Algorithms. pp. 267–276263 . ISBN264 978-0-89871-513-2265 .CS1 maint:
ref=harv (link266 )
• P, S (26 J 2004). ”A    - 
  - ”. Theoretical Computer Science. 312 (1): 47–74.
doi267 :10.1016/s0304-3975(03)00402-x268 .CS1 maint: ref=harv (link269 )
• P, M; W, W (M–A 1960). ”S
  S-R P—A R”. Oper. Res. 8 (2): 224–230.
doi270 :10.1287/opre.8.2.224271 .CS1 maint: ref=harv (link272 ) Attributes Dijkstra's algo-
rithm to Minty (”private communication”) on p.225.
• S, A (2004). Combinatorial Optimization — Polyhedra and Effi-
ciency. Algorithms and Combinatorics. 24. Springer. ISBN273 978-3-540-20456-5274 .CS1
maint: ref=harv (link275 ) Here: vol.A, sect.7.5b, p. 103

253 https://en.wikipedia.org/wiki/Discrete_Applied_Mathematics
254 https://en.wikipedia.org/wiki/Doi_(identifier)
255 https://doi.org/10.1016%2F0166-218X%2883%2990104-X
256 https://en.wikipedia.org/wiki/MR_(identifier)
257 http://www.ams.org/mathscinet-getitem?mr=0700028
258 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
259 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
260 https://en.wikipedia.org/wiki/Edward_F._Moore
261 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
262 https://archive.org/details/proceedingsofthi2002acms/page/267
263 https://archive.org/details/proceedingsofthi2002acms/page/267
264 https://en.wikipedia.org/wiki/ISBN_(identifier)
265 https://en.wikipedia.org/wiki/Special:BookSources/978-0-89871-513-2
266 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
267 https://en.wikipedia.org/wiki/Doi_(identifier)
268 https://doi.org/10.1016%2Fs0304-3975%2803%2900402-x
269 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
270 https://en.wikipedia.org/wiki/Doi_(identifier)
271 https://doi.org/10.1287%2Fopre.8.2.224
272 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
273 https://en.wikipedia.org/wiki/ISBN_(identifier)
274 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-20456-5
275 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv

1267
Shortest path problem

• S, A (1953). ”S   -


 ”. Bulletin of Mathematical Biophysics. 15 (4): 501–507.
doi276 :10.1007/BF02476438277 .CS1 maint: ref=harv (link278 )
• S, A. (1955). Structure in communication nets. Proceedings of the Symposium
on Information Networks. New York, NY: Polytechnic Press of the Polytechnic Institute
of Brooklyn. pp. 199–203.CS1 maint: ref=harv (link279 )
• T, M (1999). ”U -    -
     ”. Journal of the ACM. 46 (3): 362–394.
doi280 :10.1145/316542.316548281 .CS1 maint: ref=harv (link282 )
• T, M (2004). ”I       -
        ”283 . Journal of Com-
puter and System Sciences. 69 (3): 330–353. doi284 :10.1016/j.jcss.2004.04.003285 .CS1
maint: ref=harv (link286 )
• W, P. D.; H, J. A. (M–J 1960). ”A M  F
 S R   R N”. Operational Research Quarterly.
11 (1/2): 37–40. doi287 :10.1057/jors.1960.32288 .CS1 maint: ref=harv (link289 )
• W, R290 (2014). ”F -     -
”. Proceedings of the 46th Annual ACM Symposium on Theory of Com-
puting (STOC '14)291 . N Y: ACM. . 664–673. X292 :1312.6680293 .
294 :10.1145/2591796.2591811295 . MR296 3238994297 .CS1 maint: ref=harv (link298 )

120.12 Further reading


• F, D.; M-S, A.; N, U. (1998). ”F -
       ”. Proc. 7th

276 https://en.wikipedia.org/wiki/Doi_(identifier)
277 https://doi.org/10.1007%2FBF02476438
278 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
279 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
280 https://en.wikipedia.org/wiki/Doi_(identifier)
281 https://doi.org/10.1145%2F316542.316548
282 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
283 http://dl.acm.org/citation.cfm?id=1039326
284 https://en.wikipedia.org/wiki/Doi_(identifier)
285 https://doi.org/10.1016%2Fj.jcss.2004.04.003
286 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
287 https://en.wikipedia.org/wiki/Doi_(identifier)
288 https://doi.org/10.1057%2Fjors.1960.32
289 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
290 https://en.wikipedia.org/wiki/Ryan_Williams_(computer_scientist)
291 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
292 https://en.wikipedia.org/wiki/ArXiv_(identifier)
293 http://arxiv.org/abs/1312.6680
294 https://en.wikipedia.org/wiki/Doi_(identifier)
295 https://doi.org/10.1145%2F2591796.2591811
296 https://en.wikipedia.org/wiki/MR_(identifier)
297 http://www.ams.org/mathscinet-getitem?mr=3238994
298 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv

1268
Further reading

Annu. ACM-SIAM Symp. Discrete Algorithms. Atlanta, GA. pp. 212–221. Cite-
SeerX299 10.1.1.32.9856300 .
• D, S. E. (O 1967). A A  S S P A-
301 (PDF) (R). P R. U S A F. RM-5433-
PR. DTIC AD-661265.

299 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
300 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.9856
301 http://www.dtic.mil/dtic/tr/fulltext/u2/661265.pdf

1269
121 SMA*



SMA* or Simplified Memory Bounded A* is a shortest path algorithm14 based on the
A*15 algorithm. The main advantage of SMA* is that it uses bounded memory, while
the A* algorithm might need exponential memory. All other characteristics of SMA* are
inherited from A*.

121.1 Process

Like A*, it expands the most promising branches according to the heuristic. What sets
SMA* apart is that it prunes nodes whose expansion has proved less promising than
expected. This approach allows the algorithm to explore branches and to backtrack in
order to explore other branches.
Expansion and pruning of nodes is driven by keeping two values of f for every node. Node
x stores a value f (x) which estimates the cost of reaching the goal by taking a path through
that node. The lower the value, the higher the priority. As in A* this value is initialized to
h(x) + g(x), but will then be updated to reflect changes to this estimate when its children
are expanded. A fully expanded node will have an f value at least as high as that of its
successors. In addition, the node stores the f value of the best forgotten successor. This
value is restored if the forgotten successor is revealed to be the most promising successor.
Starting with the first node, it maintains OPEN, ordered lexicographically by f and depth.
When choosing a node to expand, it chooses the best according to that order. When
selecting a node to prune, it chooses the worst.

121.2 Properties

SMA* has the following properties:


• It works with a heuristic16 , just as A*
• It is complete if the allowed memory is high enough to store the shallowest solution
• It is optimal if the allowed memory is high enough to store the shallowest optimal solution,
otherwise it will return the best solution that fits in the allowed memory
• It avoids repeated states as long as the memory bound allows it
• It will use all memory available
• Enlarging the memory bound of the algorithm will only speed up the calculation
• When enough memory is available to contain the entire search tree, then calculation has
an optimal speed

14 https://en.wikipedia.org/wiki/Shortest_path_algorithm
15 https://en.wikipedia.org/wiki/A*_search_algorithm
16 https://en.wikipedia.org/wiki/Heuristic

1273
SMA*

121.3 Implementation

The implementation of SMA* is very similar to that of A*; the only difference is that
when there isn't any space left, nodes with the highest f-cost are pruned from the queue.
Because those nodes are deleted, SMA* also has to remember the f-cost of the best
forgotten child in the parent node. When it seems that all explored paths are worse than
such a forgotten path, the path is re-generated.[1]
Pseudo code:

function SMA-star(problem): path
  queue: set of nodes, ordered by f-cost;
begin
  queue.insert(problem.root-node);
  while True do begin
    if queue.empty() then return failure;  // there is no solution that fits in the given memory
    node := queue.begin();                 // the min-f-cost node
    if problem.is-goal(node) then return success;

    s := next-successor(node)
    if !problem.is-goal(s) && depth(s) == max_depth then
      f(s) := inf;
      // there is no memory left to go past s, so the entire path is useless
    else
      f(s) := max(f(node), g(s) + h(s));
      // the f-value of the successor is the maximum of
      //   the f-value of the parent and
      //   the heuristic of the successor + the path length to the successor
    endif
    if no more successors then
      update f-cost of node and those of its ancestors if needed

    if node.successors ⊆ queue then queue.remove(node);
    // all children have already been added to the queue via a shorter way
    if memory is full then begin
      badNode := shallowest node with highest f-cost;
      for parent in badNode.parents do begin
        parent.successors.remove(badNode);
        if needed then queue.insert(parent);
      endfor
    endif

    queue.insert(s);
  endwhile
end
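The pruning idea above can also be sketched in runnable form. The following toy Python function is not a full SMA* (it does not back up forgotten f-values into parent nodes, and it stores whole paths on the frontier), but it shows the two SMA* ingredients named in the pseudocode: the pathmax update f(s) := max(f(node), g(s) + h(s)) and the eviction of the highest-f frontier entry when memory runs out. The graph and heuristic values are made up for illustration.

```python
import heapq

def bounded_best_first(start, successors, h, is_goal, max_nodes):
    """Toy memory-bounded best-first search (illustration only).

    Unlike full SMA*, forgotten f-values are not backed up into parents,
    so pruned branches are simply lost rather than re-generable.
    successors(state) -> iterable of (next_state, step_cost) pairs.
    """
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt, cost in successors(state):
            g2 = g + cost
            f2 = max(f, g2 + h(nxt))   # pathmax: a child's f never drops below its parent's
            heapq.heappush(frontier, (f2, g2, nxt, path + [nxt]))
        if len(frontier) > max_nodes:
            # memory is full: forget the least promising (highest-f) entry
            worst = max(range(len(frontier)), key=lambda i: frontier[i][0])
            frontier.pop(worst)
            heapq.heapify(frontier)
    return None

# Hypothetical weighted digraph and admissible heuristic, for illustration:
graph = {'A': [('B', 1), ('C', 4), ('E', 2)],
         'B': [('D', 2)], 'C': [('D', 1)], 'E': [('F', 9)],
         'D': [], 'F': []}
estimate = {'A': 2, 'B': 2, 'C': 1, 'D': 0, 'E': 5, 'F': 9}
path_found = bounded_best_first('A', lambda s: graph[s], estimate.get,
                                lambda s: s == 'D', max_nodes=2)
```

With `max_nodes=2` the unpromising branch through E is forgotten when the frontier overflows, yet the cheap path A–B–D is still found.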

121.4 References
1. R, S. (1992). ”E -  ”. I N-
, B. (.). Proceedings of the 10th European Conference on Artificial in-
telligence. Vienna, Austria: John Wiley & Sons, New York, NY. pp. 1–5. Cite-
SeerX17 10.1.1.105.783918 .

17 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
18 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.105.7839

1274
122 Spectral layout

Spectral layout is a class of algorithms1 for drawing graphs2 . The layout uses the eigen-
vectors3 of a matrix, such as the Laplace matrix4 of the graph, as Cartesian coordinates5 of
the graph's vertices.
The idea of the layout is to compute the two largest (or smallest) eigenvalues and corre-
sponding eigenvectors of the Laplacian matrix of the graph and then use those for actually
placing the nodes. Usually nodes are placed in the two-dimensional plane; an embedding into
more dimensions can be found by using more eigenvectors. To place a node that corresponds
to a row/column i of the (symmetric) Laplacian matrix, the x-coordinate is taken to be the
i-th component of the first eigenvector; correspondingly, the i-th component of the second
eigenvector gives the y-coordinate of the point i.
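As a minimal sketch of this procedure (assuming NumPy's dense symmetric eigensolver; for large graphs a sparse solver would be used instead, and the function name here is illustrative):

```python
import numpy as np

def spectral_layout(adjacency):
    """Place graph vertices using eigenvectors of the graph Laplacian.

    adjacency: symmetric 0/1 array for an undirected graph.
    Returns an (n, 2) array of x, y coordinates.
    """
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A     # Laplacian matrix L = D - A
    # eigh returns eigenvalues in ascending order for symmetric matrices;
    # the smallest eigenvalue is 0 with a constant eigenvector, so skip it
    # and use the eigenvectors of the next two eigenvalues as coordinates.
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:3]

# 4-cycle: the layout spreads the vertices around the origin
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
coords = spectral_layout(C4)
```

Because every chosen eigenvector is orthogonal to the constant eigenvector, each coordinate column sums to zero, so the drawing is automatically centered at the origin.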

122.1 References
• B, B (1994), Theory of Spectral Graph Layout6 , T. R MSR-
TR-94-04, M R.
• K, Y (2005), ”D   :   -
”7 (PDF), Computers & Mathematics with Applications, 49 (11–12): 1867–1888,
doi8 :10.1016/j.camwa.2004.08.0159 , MR10 215469111 .

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Graph_drawing
3 https://en.wikipedia.org/wiki/Eigenvectors
4 https://en.wikipedia.org/wiki/Laplace_matrix
5 https://en.wikipedia.org/wiki/Cartesian_coordinate
6 http://research.microsoft.com/apps/pubs/default.aspx?id=69611
7 https://pdfs.semanticscholar.org/64d8/c8099fef68bdfd7bab4c57b9c5e5f5aa21a6.pdf
8 https://en.wikipedia.org/wiki/Doi_(identifier)
9 https://doi.org/10.1016%2Fj.camwa.2004.08.015
10 https://en.wikipedia.org/wiki/MR_(identifier)
11 http://www.ams.org/mathscinet-getitem?mr=2154691

123 Strongly connected component

Figure 301 Graph with strongly connected components marked


In the mathematical theory of directed graphs1 , a graph is said to be strongly connected if
every vertex is reachable2 from every other vertex. The strongly connected components3
of an arbitrary directed graph form a partition4 into subgraphs that are themselves
strongly connected. It is possible to test the strong connectivity5 of a graph, or to find its
strongly connected components, in linear time6 (that is, Θ(V+E)).

123.1 Definitions

A directed graph7 is called strongly connected if there is a path8 in each direction between
each pair of vertices of the graph. That is, a path exists from the first vertex in the pair
to the second, and another path exists from the second vertex to the first. In a directed
graph G that may not itself be strongly connected, a pair of vertices u and v are said to be
strongly connected to each other if there is a path in each direction between them.
The binary relation9 of being strongly connected is an equivalence relation10 , and the in-
duced subgraphs11 of its equivalence classes12 are called strongly connected compo-
nents. Equivalently, a strongly connected component of a directed graph G is a sub-
graph that is strongly connected, and is maximal13 with this property: no additional edges
or vertices from G can be included in the subgraph without breaking its property of being
strongly connected. The collection of strongly connected components forms a partition14 of
the set of vertices of G.

1 https://en.wikipedia.org/wiki/Directed_graph
2 https://en.wikipedia.org/wiki/Reachability
3 https://en.wikipedia.org/wiki/Component_(graph_theory)
4 https://en.wikipedia.org/wiki/Partition_of_a_set
5 https://en.wikipedia.org/wiki/Connectivity_(graph_theory)
6 https://en.wikipedia.org/wiki/Linear_time
7 https://en.wikipedia.org/wiki/Directed_graph
8 https://en.wikipedia.org/wiki/Path_(graph_theory)
9 https://en.wikipedia.org/wiki/Binary_relation
10 https://en.wikipedia.org/wiki/Equivalence_relation
11 https://en.wikipedia.org/wiki/Induced_subgraph
12 https://en.wikipedia.org/wiki/Equivalence_class
13 https://en.wikipedia.org/wiki/Maximal_element
14 https://en.wikipedia.org/wiki/Partition_of_a_set


Figure 302 The yellow directed acyclic graph is the condensation of the blue directed
graph. It is formed by contracting each strongly connected component of the blue graph
into a single yellow vertex.

If each strongly connected component is contracted15 to a single vertex, the resulting graph
is a directed acyclic graph16 , the condensation of G. A directed graph is acyclic if and only
if it has no strongly connected subgraphs with more than one vertex, because a directed
cycle is strongly connected and every nontrivial strongly connected component contains at
least one directed cycle.

123.2 Algorithms

123.2.1 DFS-based linear-time algorithms

Several algorithms based on depth first search17 compute strongly connected components
in linear time18 .
• Kosaraju's algorithm19 uses two passes of depth first search20 . The first, in the original
graph, is used to choose the order in which the outer loop of the second depth first search

15 https://en.wikipedia.org/wiki/Vertex_contraction
16 https://en.wikipedia.org/wiki/Directed_acyclic_graph
17 https://en.wikipedia.org/wiki/Depth_first_search
18 https://en.wikipedia.org/wiki/Linear_time
19 https://en.wikipedia.org/wiki/Kosaraju%27s_algorithm
20 https://en.wikipedia.org/wiki/Depth_first_search


tests vertices for having been visited already and recursively explores them if not. The
second depth first search is on the transpose graph21 of the original graph, and each
recursive exploration finds a single new strongly connected component.[1][2] It is named
after S. Rao Kosaraju22 , who described it (but did not publish his results) in 1978; Micha
Sharir23 later published it in 1981.[3]
• Tarjan's strongly connected components algorithm24 , published by Robert Tarjan25 in
1972,[4] performs a single pass of depth first search. It maintains a stack26 of vertices that
have been explored by the search but not yet assigned to a component, and calculates
”low numbers” of each vertex (an index number of the highest ancestor reachable in one
step from a descendant of the vertex) which it uses to determine when a set of vertices
should be popped off the stack into a new component.
• The path-based strong component algorithm27 uses a depth first search, like Tarjan's
algorithm, but with two stacks. One of the stacks is used to keep track of the vertices
not yet assigned to components, while the other keeps track of the current path in the
depth first search tree. The first linear time version of this algorithm was published by
Edsger W. Dijkstra28 in 1976.[5]
Although Kosaraju's algorithm is conceptually simple, Tarjan's and the path-based algo-
rithm require only one depth-first search29 rather than two.
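The two-pass scheme of Kosaraju's algorithm described above can be sketched compactly in Python; the recursion of textbook presentations is replaced here by explicit stacks, and the example graph is made up for illustration.

```python
def kosaraju_scc(graph):
    """Strongly connected components via Kosaraju's two-pass DFS.

    graph: dict mapping each vertex to a list of successors.
    Returns a list of components, each a set of vertices.
    """
    # Pass 1: DFS on the original graph, recording vertices in order of
    # completion (iterative, to avoid recursion limits on deep graphs).
    visited, order = set(), []
    for start in graph:
        if start in visited:
            continue
        visited.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(graph[w])))
                    advanced = True
                    break
            if not advanced:
                order.append(v)
                stack.pop()

    # Build the transpose graph (all edges reversed).
    transpose = {v: [] for v in graph}
    for v, succs in graph.items():
        for w in succs:
            transpose[w].append(v)

    # Pass 2: DFS on the transpose in reverse finishing order; each
    # exploration tree is exactly one strongly connected component.
    assigned, components = set(), []
    for start in reversed(order):
        if start in assigned:
            continue
        assigned.add(start)
        component, stack = set(), [start]
        while stack:
            v = stack.pop()
            component.add(v)
            for w in transpose[v]:
                if w not in assigned:
                    assigned.add(w)
                    stack.append(w)
        components.append(component)
    return components

# Two 2-cycles joined by one-way edges, plus a sink vertex:
g = {1: [2], 2: [1, 3], 3: [4], 4: [3, 5], 5: []}
sccs = kosaraju_scc(g)  # components {1, 2}, {3, 4} and {5}
```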

123.2.2 Reachability-based Algorithms

Previous linear-time algorithms are based on depth-first search30 which is generally consid-
ered hard to parallelize. Fleischer et al.[6] in 2000 proposed a divide-and-conquer31 approach
based on reachability32 queries, and such algorithms are usually called reachability-based
SCC algorithms. The idea of this approach is to pick a random pivot vertex and apply
forward and backward reachability queries from this vertex. The two queries partition the
vertex set into 4 subsets: vertices reached by both, either one, or none of the searches. One
can show that a strongly connected component has to be contained in one of the subsets.
The vertex subset reached by both searches forms a strongly connected component, and
the algorithm then recurses on the other 3 subsets.
The expected sequential running time of this algorithm is shown to be O(n log n), a factor of
O(log n) more than the classic algorithms. The parallelism comes from: (1) the reachability
queries can be parallelized more easily (e.g. by a BFS33 , and it can be fast if the diameter
of the graph is small); and (2) the independence between the subtasks in the divide-and-

21 https://en.wikipedia.org/wiki/Transpose_graph
22 https://en.wikipedia.org/wiki/S._Rao_Kosaraju
23 https://en.wikipedia.org/wiki/Micha_Sharir
24 https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
25 https://en.wikipedia.org/wiki/Robert_Tarjan
26 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
27 https://en.wikipedia.org/wiki/Path-based_strong_component_algorithm
28 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
29 https://en.wikipedia.org/wiki/Depth-first_search
30 https://en.wikipedia.org/wiki/Depth-first_search
31 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
32 https://en.wikipedia.org/wiki/Reachability
33 https://en.wikipedia.org/wiki/Breadth-first_search

1280
Applications

conquer process. This algorithm performs well on real-world graphs,[2] but does not have
a theoretical guarantee on the parallelism (if a graph has no edges, the algorithm
requires O(n) levels of recursion).
Blelloch et al.[7] in 2016 show that if the reachability queries are applied in a random order,
the cost bound of O(n log n) still holds. Furthermore, the queries can then be batched in a
prefix-doubling manner (i.e. 1, 2, 4, 8 queries at a time) and run simultaneously in one round.
The overall span34 of this algorithm is log2 n reachability queries, which is probably the
optimal parallelism that can be achieved using the reachability-based approach.

123.2.3 Generating random strongly-connected graphs

Peter M. Maurer describes an algorithm for generating random strongly connected graphs,[8]
based on a modification of Tarjan's algorithm to create a spanning tree and adding a minimum
number of edges such that the result becomes strongly connected. When used in conjunction
with the Gilbert or Erdős–Rényi models with node relabelling, the algorithm is capable of
generating any strongly connected graph on n nodes, without restriction on the kinds of
structures that can be generated.

123.3 Applications

Algorithms for finding strongly connected components may be used to solve 2-satisfiability35
problems (systems of Boolean variables with constraints on the values of pairs of variables):
as Aspvall, Plass & Tarjan (1979)36 showed, a 2-satisfiability37 instance is unsatisfiable if
and only if there is a variable v such that v and its complement are both contained in the
same strongly connected component of the implication graph38 of the instance.[9]
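The Aspvall–Plass–Tarjan criterion can be turned directly into a satisfiability test. The sketch below is illustrative: it encodes literal 2i for variable i and 2i+1 for its negation (an encoding chosen here for convenience), and uses Kosaraju's two-pass algorithm for the SCC step.

```python
def two_sat_satisfiable(num_vars, clauses):
    """clauses: list of literal pairs (a, b), each meaning (a OR b).
    Literal 2*i is variable i; literal 2*i + 1 is its negation."""
    n = 2 * num_vars
    graph = [[] for _ in range(n)]
    rgraph = [[] for _ in range(n)]
    for a, b in clauses:
        # (a OR b) gives the implications (NOT a -> b), (NOT b -> a).
        for u, v in ((a ^ 1, b), (b ^ 1, a)):
            graph[u].append(v)
            rgraph[v].append(u)

    # Kosaraju pass 1: record vertices in order of DFS completion.
    order, seen = [], [False] * n
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            pushed = False
            for nxt in it:
                if not seen[nxt]:
                    seen[nxt] = True
                    stack.append((nxt, iter(graph[nxt])))
                    pushed = True
                    break
            if not pushed:
                order.append(node)
                stack.pop()

    # Kosaraju pass 2: label components on the transpose graph.
    comp, label = [-1] * n, 0
    for start in reversed(order):
        if comp[start] != -1:
            continue
        comp[start] = label
        stack = [start]
        while stack:
            u = stack.pop()
            for v in rgraph[u]:
                if comp[v] == -1:
                    comp[v] = label
                    stack.append(v)
        label += 1

    # Unsatisfiable iff some variable shares an SCC with its negation.
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(num_vars))
```

For example, the clauses (x₀ ∨ x₁) and (¬x₀ ∨ ¬x₁) are satisfiable, whereas forcing both x and ¬x with the clauses (x ∨ x) and (¬x ∨ ¬x) puts the two literals in one strongly connected component and is reported unsatisfiable.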
Strongly connected components are also used to compute the Dulmage–Mendelsohn decom-
position39 , a classification of the edges of a bipartite graph40 , according to whether or not
they can be part of a perfect matching41 in the graph.[10]

123.4 Related results

A directed graph is strongly connected if and only if it has an ear decomposition42 , a parti-
tion of the edges into a sequence of directed paths and cycles such that the first subgraph in
the sequence is a cycle, and each subsequent subgraph is either a cycle sharing one vertex
with previous subgraphs, or a path sharing its two endpoints with previous subgraphs.

34 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms
35 https://en.wikipedia.org/wiki/2-satisfiability
36 #CITEREFAspvallPlassTarjan1979
37 https://en.wikipedia.org/wiki/2-satisfiability
38 https://en.wikipedia.org/wiki/Implication_graph
39 https://en.wikipedia.org/wiki/Dulmage%E2%80%93Mendelsohn_decomposition
40 https://en.wikipedia.org/wiki/Bipartite_graph
41 https://en.wikipedia.org/wiki/Perfect_matching
42 https://en.wikipedia.org/wiki/Ear_decomposition


According to Robbins' theorem43 , an undirected graph may be oriented44 in such a way that
it becomes strongly connected if and only if it is 2-edge-connected45 . One way to prove this
result is to find an ear decomposition of the underlying undirected graph and then orient
each ear consistently.[11]

123.5 See also


• Clique46
• Connected component47
• Modular decomposition48

123.6 References
1. Thomas H. Cormen49 , Charles E. Leiserson50 , Ronald L. Rivest51 , and Clifford Stein52 .
Introduction to Algorithms53 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN54 0-262-03293-755 . Section 22.5, pp. 552–557.
2. H, S; R, N C.; O, K (2013), ”O 
      (SCC)  -
 ”56 (PDF), Proceedings of the International Conference on High
Performance Computing, Networking, Storage and Analysis - SC '13, pp. 1–11,
doi57 :10.1145/2503210.250324658 , ISBN59 978145032378960
3. S, M61 (1981), ”A -    -
    ”, Computers & Mathematics with Applications, 7:
67–72, doi62 :10.1016/0898-1221(81)90008-063

43 https://en.wikipedia.org/wiki/Robbins%27_theorem
44 https://en.wikipedia.org/wiki/Graph_orientation
45 https://en.wikipedia.org/wiki/K-edge-connected_graph
46 https://en.wikipedia.org/wiki/Clique_(graph_theory)
47 https://en.wikipedia.org/wiki/Connected_component_(graph_theory)
48 https://en.wikipedia.org/wiki/Modular_decomposition
49 https://en.wikipedia.org/wiki/Thomas_H._Cormen
50 https://en.wikipedia.org/wiki/Charles_E._Leiserson
51 https://en.wikipedia.org/wiki/Ronald_L._Rivest
52 https://en.wikipedia.org/wiki/Clifford_Stein
53 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
54 https://en.wikipedia.org/wiki/ISBN_(identifier)
55 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
56 https://ppl.stanford.edu/papers/sc13-hong.pdf
57 https://en.wikipedia.org/wiki/Doi_(identifier)
58 https://doi.org/10.1145%2F2503210.2503246
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/9781450323789
61 https://en.wikipedia.org/wiki/Micha_Sharir
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.1016%2F0898-1221%2881%2990008-0


4. T, R. E.64 (1972), ”D-     ”,


SIAM Journal on Computing65 , 1 (2): 146–160, doi66 :10.1137/020101067
5. D, E68 (1976), A Discipline of Programming, NJ: Prentice Hall,
Ch. 25.
6. F, L K.; H, B; P, A (2000), ”O I-
 S C C  P”69 (PDF), Parallel and
Distributed Processing, Lecture Notes in Computer Science, 1800, pp. 505–511,
doi70 :10.1007/3-540-45591-4_6871 , ISBN72 978-3-540-67442-973
7. B, G E.; G, Y; S, J; S, Y (2016), ”P
 R I A”74 (PDF), Proceedings of the 28th ACM
Symposium on Parallelism in Algorithms and Architectures - SPAA '16, pp. 467–478,
doi75 :10.1145/2935764.293576676 , ISBN77 978145034210078 .
8. M, P. M., Generating strongly connected random graphs79 (PDF), I'
C. M, S.  V. M MSV'17, CSREA P, ISBN80 1-
60132-465-081 ,  D 27, 2019
9. A, B; P, M F.; T, R E.82 (1979), ”A -
         
”, Information Processing Letters, 8 (3): 121–123, doi83 :10.1016/0020-
0190(79)90002-484 .
10. D, A. L. & M, N. S.85 (1958), ”C  
”, Can. J. Math., 10: 517–534, doi86 :10.4153/cjm-1958-052-087 .

64 https://en.wikipedia.org/wiki/Robert_Tarjan
65 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
66 https://en.wikipedia.org/wiki/Doi_(identifier)
67 https://doi.org/10.1137%2F0201010
68 https://en.wikipedia.org/wiki/Edsger_Dijkstra
69 https://www.sandia.gov/~apinar/papers/irreg00.pdf
70 https://en.wikipedia.org/wiki/Doi_(identifier)
71 https://doi.org/10.1007%2F3-540-45591-4_68
72 https://en.wikipedia.org/wiki/ISBN_(identifier)
73 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-67442-9
74 https://people.csail.mit.edu/guyan/paper/SPAA16/Incremental.pdf
75 https://en.wikipedia.org/wiki/Doi_(identifier)
76 https://doi.org/10.1145%2F2935764.2935766
77 https://en.wikipedia.org/wiki/ISBN_(identifier)
78 https://en.wikipedia.org/wiki/Special:BookSources/9781450342100
79 https://csce.ucmss.com/cr/books/2017/LFS/CSREA2017/MSV3359.pdf
80 https://en.wikipedia.org/wiki/ISBN_(identifier)
81 https://en.wikipedia.org/wiki/Special:BookSources/1-60132-465-0
82 https://en.wikipedia.org/wiki/Robert_Tarjan
83 https://en.wikipedia.org/wiki/Doi_(identifier)
84 https://doi.org/10.1016%2F0020-0190%2879%2990002-4
85 https://en.wikipedia.org/wiki/Nathan_Mendelsohn
86 https://en.wikipedia.org/wiki/Doi_(identifier)
87 https://doi.org/10.4153%2Fcjm-1958-052-0


11. R, H. E.88 (1939), ”A   ,     


   ”, American Mathematical Monthly89 , 46 (5): 281–
283, doi90 :10.2307/230389791 , hdl92 :10338.dmlcz/10151793 , JSTOR94 230389795 .

123.7 External links


• Java implementation for computation of strongly connected components96 in the jBPT
library (see StronglyConnectedComponents class).
• C++ implementation of Strongly Connected Components97

88 https://en.wikipedia.org/wiki/Herbert_Robbins
89 https://en.wikipedia.org/wiki/American_Mathematical_Monthly
90 https://en.wikipedia.org/wiki/Doi_(identifier)
91 https://doi.org/10.2307%2F2303897
92 https://en.wikipedia.org/wiki/Hdl_(identifier)
93 http://hdl.handle.net/10338.dmlcz%2F101517
94 https://en.wikipedia.org/wiki/JSTOR_(identifier)
95 http://www.jstor.org/stable/2303897
96 https://code.google.com/p/jbpt/
97 http://www.geeksforgeeks.org/tarjan-algorithm-find-strongly-connected-components/

124 Subgraph isomorphism problem

In theoretical computer science1 , the subgraph isomorphism problem is a computational


task in which two graphs2 G and H are given as input, and one must determine whether
G contains a subgraph3 that is isomorphic4 to H. Subgraph isomorphism is a generalization
of both the maximum clique problem5 and the problem of testing whether a graph con-
tains a Hamiltonian cycle6 , and is therefore NP-complete7 .[1] However certain other cases
of subgraph isomorphism may be solved in polynomial time.[2]
Sometimes the name subgraph matching is also used for the same problem. This name
puts emphasis on finding such a subgraph as opposed to the bare decision problem.

124.1 Decision problem and computational complexity

To prove subgraph isomorphism is NP-complete, it must be formulated as a decision prob-


lem8 . The input to the decision problem is a pair of graphs G and H. The answer to the
problem is positive if H is isomorphic to a subgraph of G, and negative otherwise.
Formal question:
Let G = (V, E) and H = (V′, E′) be graphs. Is there a subgraph
G0 = (V0, E0) with V0 ⊆ V and E0 ⊆ E ∩ (V0 × V0) such that G0 ≅ H? That is, does there exist a
bijection9 f : V0 → V′ such that {v1, v2} ∈ E0 ⟺ {f(v1), f(v2)} ∈ E′?
The proof of subgraph isomorphism being NP-complete is simple and based on reduction
of the clique problem10 , an NP-complete decision problem in which the input is a single
graph G and a number k, and the question is whether G contains a complete subgraph11
with k vertices. To translate this to a subgraph isomorphism problem, simply let H be the
complete graph Kk ; then the answer to the subgraph isomorphism problem for G and H is
equal to the answer to the clique problem for G and k. Since the clique problem is NP-
complete, this polynomial-time many-one reduction12 shows that subgraph isomorphism is
also NP-complete.[3]
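The reduction is short enough to state concretely; the following is an illustrative sketch, and the function name is ours.

```python
def clique_to_subgraph_isomorphism(G, k):
    """Map a clique instance (G, k) to a subgraph isomorphism
    instance (G, H), where H is the complete graph K_k, built here
    as a dict from each vertex to its set of neighbors."""
    H = {i: set(range(k)) - {i} for i in range(k)}
    return G, H
```

G then contains a k-clique exactly when it contains a subgraph isomorphic to H, so any subgraph isomorphism solver decides the clique instance.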

1 https://en.wikipedia.org/wiki/Theoretical_computer_science
2 https://en.wikipedia.org/wiki/Undirected_graph
3 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#subgraph
4 https://en.wikipedia.org/wiki/Graph_isomorphism
5 https://en.wikipedia.org/wiki/Clique_problem
6 https://en.wikipedia.org/wiki/Hamiltonian_cycle
7 https://en.wikipedia.org/wiki/NP-complete
8 https://en.wikipedia.org/wiki/Decision_problem
9 https://en.wikipedia.org/wiki/Bijection
10 https://en.wikipedia.org/wiki/Clique_problem
11 https://en.wikipedia.org/wiki/Complete_graph
12 https://en.wikipedia.org/wiki/Polynomial-time_many-one_reduction


An alternative reduction from the Hamiltonian cycle13 problem translates a graph G which
is to be tested for Hamiltonicity into the pair of graphs G and H, where H is a cycle having
the same number of vertices as G. Because the Hamiltonian cycle problem is NP-complete
even for planar graphs14 , this shows that subgraph isomorphism remains NP-complete even
in the planar case.[4]
Subgraph isomorphism is a generalization of the graph isomorphism problem15 , which asks
whether G is isomorphic to H: the answer to the graph isomorphism problem is true if
and only if G and H both have the same numbers of vertices and edges and the subgraph
isomorphism problem for G and H is true. However the complexity-theoretic status of graph
isomorphism remains an open question.
In the context of the Aanderaa–Karp–Rosenberg conjecture16 on the query complexity17
of monotone graph properties, Gröger (1992)18 showed that any subgraph isomorphism
problem has query complexity Ω(n^(3/2)); that is, solving the subgraph isomorphism problem
requires an algorithm to check the presence or absence in the input of Ω(n^(3/2)) different
edges in the graph.[5]

124.2 Algorithms

Ullmann (1976)19 describes a recursive backtracking procedure for solving the subgraph
isomorphism problem. Although its running time is, in general, exponential, it takes poly-
nomial time for any fixed choice of H (with a polynomial that depends on the choice of H).
When G is a planar graph20 (or more generally a graph of bounded expansion21 ) and H is
fixed, the running time of subgraph isomorphism can be reduced to linear time22 .[2]
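A greatly simplified recursive backtracking test in this spirit (not Ullmann's algorithm itself, which adds a refinement step to prune the search) might look as follows; representing each graph as a dictionary from vertex to neighbor set is an assumption made here.

```python
def has_subgraph_isomorphic_to(G, H):
    """G, H: undirected graphs as dicts vertex -> set of neighbors.
    True iff some (not necessarily induced) subgraph of G is
    isomorphic to H."""
    h_order = list(H)

    def extend(mapping):
        if len(mapping) == len(h_order):
            return True                      # all of H embedded
        h = h_order[len(mapping)]
        for g in G:
            if g in mapping.values():
                continue                     # keep the map injective
            # Every H-edge between h and an already-mapped vertex
            # must correspond to an edge of G.
            if all(g in G[mapping[h2]] for h2 in H[h] if h2 in mapping):
                if extend({**mapping, h: g}):
                    return True
        return False

    return extend({})
```

The exponential worst case is visible here: each vertex of H tries every vertex of G, but once H is fixed the nesting depth is fixed, matching the polynomial-time claim above for fixed H.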
Ullmann (2010)23 is a substantial update to the 1976 subgraph isomorphism algorithm
paper.
Cordella (2004)24 proposed another algorithm based on Ullmann's, VF2, which
improves the refinement process using different heuristics and uses significantly less memory.
Bonnici (2013)25 proposed a better
algorithm, which improves the initial order of the vertices using some heuristics.

13 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
14 https://en.wikipedia.org/wiki/Planar_graphs
15 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
16 https://en.wikipedia.org/wiki/Aanderaa%E2%80%93Karp%E2%80%93Rosenberg_conjecture
17 https://en.wikipedia.org/wiki/Query_complexity
18 #CITEREFGr%C3%B6ger1992
19 #CITEREFUllmann1976
20 https://en.wikipedia.org/wiki/Planar_graph
21 https://en.wikipedia.org/wiki/Bounded_expansion
22 https://en.wikipedia.org/wiki/Linear_time
23 #CITEREFUllmann2010
24 #CITEREFCordella2004
25 #CITEREFBonnici2013
26 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors


For large graphs, state-of-the-art algorithms include CFL-Match and Turboiso, and
extensions thereof such as DAF by Han (2019)27 .

124.3 Applications

Subgraph isomorphism has been applied in the area of cheminformatics29 to find
similarities between chemical compounds from their structural formula; in this area the
term substructure search is often used.[6] A query structure is often defined graphically using
a structure editor30 program; SMILES31 based database systems typically define queries
using SMARTS32 , a SMILES33 extension.
The closely related problem of counting the number of isomorphic copies of a graph H in
a larger graph G has been applied to pattern discovery in databases,[7] the bioinformatics34
of protein-protein interaction networks,[8] and in exponential random graph35 methods for
mathematically modeling social networks36 .[9]
Ohlrich et al. (1993)37 describe an application of subgraph isomorphism in the computer-
aided design38 of electronic circuits39 . Subgraph matching is also a substep in graph rewrit-
ing40 (the most runtime-intensive), and thus offered by graph rewrite tools41 .
The problem is also of interest in artificial intelligence42 , where it is considered part of an
array of pattern matching43 in graphs problems; an extension of subgraph isomorphism
known as graph mining44 is also of interest in that area.[10]

124.4 See also


• Frequent subtree mining45
• Induced subgraph isomorphism problem46

27 #CITEREFHan2019
28 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
29 https://en.wikipedia.org/wiki/Cheminformatics
30 https://en.wikipedia.org/wiki/Structure_editor
31 https://en.wikipedia.org/wiki/SMILES
32 https://en.wikipedia.org/wiki/Smiles_arbitrary_target_specification
33 https://en.wikipedia.org/wiki/SMILES
34 https://en.wikipedia.org/wiki/Bioinformatics
https://en.wikipedia.org/w/index.php?title=Exponential_random_graph&action=edit&
35
redlink=1
36 https://en.wikipedia.org/wiki/Social_network
37 #CITEREFOhlrichEbelingGintingSather1993
38 https://en.wikipedia.org/wiki/Computer-aided_design
39 https://en.wikipedia.org/wiki/Electronic_circuits
40 https://en.wikipedia.org/wiki/Graph_rewriting
41 https://en.wikipedia.org/wiki/Graph_rewriting#Implementations_and_applications
42 https://en.wikipedia.org/wiki/Artificial_intelligence
43 https://en.wikipedia.org/wiki/Pattern_matching
44 https://en.wikipedia.org/wiki/Structure_mining
45 https://en.wikipedia.org/wiki/Frequent_subtree_mining
46 https://en.wikipedia.org/wiki/Induced_subgraph_isomorphism_problem


• Maximum common edge subgraph problem47


• Maximum common subgraph isomorphism problem48

124.5 Notes
1. The original Cook (1971)49 paper that proves the Cook–Levin theorem50 already
showed subgraph isomorphism to be NP-complete, using a reduction from 3-SAT51
involving cliques.
2. Eppstein (1999)52 ; Nešetřil & Ossona de Mendez (2012)53
3. W, I (2005), Complexity Theory: Exploring the Limits of Efficient
Algorithms54 , S, . 81, ISBN55 978354021045056 .
4.   H, C; J, J-C; S, É;
D, G; S, C (2013), ”P  
     ”57 (PDF), Theoretical Computer
Science, 498: 76–99, doi58 :10.1016/j.tcs.2013.05.02659 , MR60 308351561 , It is known
since the mid-70’s that the isomorphism problem is solvable in polynomial time for
plane graphs. However, it has also been noted that the subisomorphism problem is still
N P-complete, in particular because the Hamiltonian cycle problem is NP-complete
for planar graphs.
5. Here Ω invokes Big Omega notation62 .
6. Ullmann (1976)63
7. Kuramochi & Karypis (2001)64 .
8. Pržulj, Corneil & Jurisica (2006)65 .
9. Snijders et al. (2006)66 .
10. 67 ; expanded version at 68

47 https://en.wikipedia.org/wiki/Maximum_common_edge_subgraph_problem
48 https://en.wikipedia.org/wiki/Maximum_common_subgraph_isomorphism_problem
49 #CITEREFCook1971
50 https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem
51 https://en.wikipedia.org/wiki/3-SAT
52 #CITEREFEppstein1999
53 #CITEREFNe%C5%A1et%C5%99ilOssona_de_Mendez2012
54 https://books.google.com/books?id=1fo7_KoFUPsC&pg=PA81
55 https://en.wikipedia.org/wiki/ISBN_(identifier)
56 https://en.wikipedia.org/wiki/Special:BookSources/9783540210450
57 https://www.ibisc.univ-evry.fr/~janodet/pub/hjsds13.pdf
58 https://en.wikipedia.org/wiki/Doi_(identifier)
59 https://doi.org/10.1016%2Fj.tcs.2013.05.026
60 https://en.wikipedia.org/wiki/MR_(identifier)
61 http://www.ams.org/mathscinet-getitem?mr=3083515
62 https://en.wikipedia.org/wiki/Big_O_notation
63 #CITEREFUllmann1976
64 #CITEREFKuramochiKarypis2001
65 #CITEREFPr%C5%BEuljCorneilJurisica2006
66 #CITEREFSnijdersPattisonRobinsHandcock2006
67 http://www.aaai.org/Papers/Symposia/Fall/2006/FS-06-02/FS06-02-007.pdf
68 https://e-reports-ext.llnl.gov/pdf/332302.pdf


124.6 References
• C, S. A.69 (1971), ”T   - -
”70 , Proc. 3rd ACM Symposium on Theory of Computing71 , . 151–158,
 :10.1145/800157.80504773 .
72

• E, D74 (1999), ”S      -


 ”75 (PDF), Journal of Graph Algorithms and Applications76 , 3 (3): 1–
27, arXiv77 :cs.DS/991100378 , doi79 :10.7155/jgaa.0001480 .
• G, M R.81 ; J, D S.82 (1979), Computers and Intractability: A
Guide to the Theory of NP-Completeness83 , W.H. F, ISBN84 978-0-7167-1045-
585 . A1.4: GT48, pg.202.
• G, H D (1992), ”O     
 ”86 (PDF), Acta Cybernetica, 10 (3): 119–127.
• H, M; K, H; G, G; P, K; H, W
(2019), ”E S M: H D P,
A M O,  F S T”87 , SIGMOD
• K, M; K, G (2001), ”F  -
”, 1st IEEE International Conference on Data Mining, p. 313, Cite-
SeerX88 10.1.1.22.499289 , doi90 :10.1109/ICDM.2001.98953491 , ISBN92 978-0-7695-1119-
193 .
• O, M; E, C; G, E; S, L (1993), ”S-
G:        -

69 https://en.wikipedia.org/wiki/Stephen_Cook
70 http://4mhz.de/cook.html
71 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
72 https://en.wikipedia.org/wiki/Doi_(identifier)
73 https://doi.org/10.1145%2F800157.805047
74 https://en.wikipedia.org/wiki/David_Eppstein
75 http://www.cs.brown.edu/publications/jgaa/accepted/99/Eppstein99.3.3.pdf
76 https://en.wikipedia.org/wiki/Journal_of_Graph_Algorithms_and_Applications
77 https://en.wikipedia.org/wiki/ArXiv_(identifier)
78 http://arxiv.org/abs/cs.DS/9911003
79 https://en.wikipedia.org/wiki/Doi_(identifier)
80 https://doi.org/10.7155%2Fjgaa.00014
81 https://en.wikipedia.org/wiki/Michael_R._Garey
82 https://en.wikipedia.org/wiki/David_S._Johnson
https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_
83
NP-Completeness
84 https://en.wikipedia.org/wiki/ISBN_(identifier)
85 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7167-1045-5
http://www.inf.u-szeged.hu/actacybernetica/edb/vol10n3/pdf/Groger_1992_
86
ActaCybernetica.pdf
87 https://dl.acm.org/doi/10.1145/3299869.3319880
88 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
89 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.4992
90 https://en.wikipedia.org/wiki/Doi_(identifier)
91 https://doi.org/10.1109%2FICDM.2001.989534
92 https://en.wikipedia.org/wiki/ISBN_(identifier)
93 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7695-1119-1


”, Proceedings of the 30th international Design Automation Conference, pp. 31–37,
doi94 :10.1145/157485.16455695 , ISBN96 978-0-89791-577-997 .
• NŘ, J98 ; O  M, P99 (2012), ”18.3 T 
   B ”, Sparsity: Graphs, Structures, and Al-
gorithms, Algorithms and Combinatorics, 28, Springer, pp. 400–401, doi100 :10.1007/978-
3-642-27875-4101 , ISBN102 978-3-642-27874-7103 , MR104 2920058105 .
• P, N.; C, D. G.106 ; J, I. (2006), ”E 
     –  -
”, Bioinformatics, 22 (8): 974–980, doi107 :10.1093/bioinformatics/btl030108 ,
PMID109 16452112110 .
• S, T. A. B.; P, P. E.; R, G.; H, M. S. (2006),
”N      ”, Sociological
Methodology, 36 (1): 99–153, CiteSeerX111 10.1.1.62.7975112 , doi113 :10.1111/j.1467-
9531.2006.00176.x114 .
• U, J R. (1976), ”A    ”, Journal
of the ACM115 , 23 (1): 31–42, doi116 :10.1145/321921.321925117 .
• J, H (2011), ”C S I Q  S-
 U  M G S”, 26th ACM Symposium on
Applied Computing, pp. 1058–1063.
• U, J R. (2010), ”B-    
   ”, Journal of Experimental Algorithmics118 ,
15: 1.1, CiteSeerX119 10.1.1.681.8766120 , doi121 :10.1145/1671970.1921702122 .

94 https://en.wikipedia.org/wiki/Doi_(identifier)
95 https://doi.org/10.1145%2F157485.164556
96 https://en.wikipedia.org/wiki/ISBN_(identifier)
97 https://en.wikipedia.org/wiki/Special:BookSources/978-0-89791-577-9
98 https://en.wikipedia.org/wiki/Jaroslav_Ne%C5%A1et%C5%99il
99 https://en.wikipedia.org/wiki/Patrice_Ossona_de_Mendez
100 https://en.wikipedia.org/wiki/Doi_(identifier)
101 https://doi.org/10.1007%2F978-3-642-27875-4
102 https://en.wikipedia.org/wiki/ISBN_(identifier)
103 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-27874-7
104 https://en.wikipedia.org/wiki/MR_(identifier)
105 http://www.ams.org/mathscinet-getitem?mr=2920058
106 https://en.wikipedia.org/wiki/Derek_Corneil
107 https://en.wikipedia.org/wiki/Doi_(identifier)
108 https://doi.org/10.1093%2Fbioinformatics%2Fbtl030
109 https://en.wikipedia.org/wiki/PMID_(identifier)
110 http://pubmed.ncbi.nlm.nih.gov/16452112
111 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
112 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.62.7975
113 https://en.wikipedia.org/wiki/Doi_(identifier)
114 https://doi.org/10.1111%2Fj.1467-9531.2006.00176.x
115 https://en.wikipedia.org/wiki/Journal_of_the_ACM
116 https://en.wikipedia.org/wiki/Doi_(identifier)
117 https://doi.org/10.1145%2F321921.321925
https://en.wikipedia.org/w/index.php?title=Journal_of_Experimental_Algorithmics&
118
action=edit&redlink=1
119 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
120 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.681.8766
121 https://en.wikipedia.org/wiki/Doi_(identifier)
122 https://doi.org/10.1145%2F1671970.1921702


• C, L P. (2004), ”A ()     -


  ”, IEEE Transactions on Pattern Analysis and Machine Intelligence,
26 (10): 1367–1372, CiteSeerX123 10.1.1.101.5342124 , doi125 :10.1109/tpami.2004.75126 ,
PMID127 15641723128
• B, V.; G, R. (2013), ”A     
   ”, BMC Bioinformatics, 14(Suppl7) (13): S13,
doi129 :10.1186/1471-2105-14-s7-s13130 , PMC131 3633016132 , PMID133 23815292134

123 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
124 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.101.5342
125 https://en.wikipedia.org/wiki/Doi_(identifier)
126 https://doi.org/10.1109%2Ftpami.2004.75
127 https://en.wikipedia.org/wiki/PMID_(identifier)
128 http://pubmed.ncbi.nlm.nih.gov/15641723
129 https://en.wikipedia.org/wiki/Doi_(identifier)
130 https://doi.org/10.1186%2F1471-2105-14-s7-s13
131 https://en.wikipedia.org/wiki/PMC_(identifier)
132 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3633016
133 https://en.wikipedia.org/wiki/PMID_(identifier)
134 http://pubmed.ncbi.nlm.nih.gov/23815292

125 Suurballe's algorithm

In theoretical computer science1 and network routing2 , Suurballe's algorithm is an al-


gorithm for finding two disjoint paths in a nonnegatively-weighted directed graph3 , so that
both paths connect the same pair of vertices4 and have minimum total length.[1] The al-
gorithm was conceived by John W. Suurballe and published in 1974.[2] The main idea of
Suurballe's algorithm is to use Dijkstra's algorithm5 to find one path, to modify the weights
of the graph edges, and then to run Dijkstra's algorithm a second time. The output of the
algorithm is formed by combining these two paths, discarding edges that are traversed in
opposite directions by the paths, and using the remaining edges to form the two paths to
return as the output. The modification to the weights is similar to the weight modification
in Johnson's algorithm6 , and preserves the non-negativity of the weights while allowing the
second instance of Dijkstra's algorithm to find the correct second path.
The problem of finding two disjoint paths of minimum weight can be seen as a special
case of a minimum cost flow7 problem, in which there are two units of ”flow”
and each node has unit ”capacity”. Suurballe's algorithm can also be seen as a special case
of a minimum cost flow algorithm that repeatedly pushes the maximum possible amount
of flow along a shortest augmenting path. The first path found by Suurballe's algorithm
is the shortest augmenting path for the initial (zero) flow, and the second path found by
Suurballe's algorithm is the shortest augmenting path for the residual graph8 left after
pushing one unit of flow along the first path.

125.1 Definitions

Let G be a weighted directed graph with vertex set V and edge set E (figure A); let s be a
designated source vertex in G, and let t be a designated destination vertex. Let each edge
(u,v) in E, from vertex u to vertex v, have a non-negative cost w(u,v).
Define d(s,u) to be the cost of the shortest path9 to vertex u from vertex s in the shortest
path tree rooted at s (figure C).
Note: Node and Vertex are often used interchangeably.

1 https://en.wikipedia.org/wiki/Theoretical_computer_science
2 https://en.wikipedia.org/wiki/Network_routing
3 https://en.wikipedia.org/wiki/Directed_graph
4 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
5 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
6 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
7 https://en.wikipedia.org/wiki/Minimum_cost_flow
8 https://en.wikipedia.org/wiki/Residual_graph
9 https://en.wikipedia.org/wiki/Shortest_path


125.2 Algorithm

Suurballe's algorithm performs the following steps:


1. Find the shortest path tree T rooted at node s by running Dijkstra's algorithm (figure
C). This tree contains for every vertex u, a shortest path from s to u. Let P1 be the
shortest cost path from s to t (figure B). The edges in T are called tree edges and the
remaining edges (the edges missing from figure C) are called non-tree edges.
2. Modify the cost of each edge in the graph by replacing the cost w(u,v) of every edge
(u,v) by w′(u,v) = w(u,v) − d(s,v) + d(s,u). According to the resulting modified cost
function, all tree edges have a cost of 0, and non-tree edges have a non-negative cost.
For example:
If u=B, v=E, then w′(u,v) = w(B,E) − d(A,E) + d(A,B) = 2 − 3 + 1 = 0
If u=E, v=B, then w′(u,v) = w(E,B) − d(A,B) + d(A,E) = 2 − 1 + 3 = 4
3. Create a residual graph Gt formed from G by removing the edges of G on path P1
that are directed into s and then reversing the direction of the zero-length edges along
path P1 (figure D).
4. Find the shortest path P2 in the residual graph Gt by running Dijkstra's algorithm
(figure E).
5. Discard the reversed edges of P2 from both paths. The remaining edges of P1 and
P2 form a subgraph with two outgoing edges at s, two incoming edges at t, and one
incoming and one outgoing edge at each remaining vertex. Therefore, this subgraph
consists of two edge-disjoint paths from s to t and possibly some additional (zero-
length) cycles. Return the two disjoint paths from the subgraph.
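The five steps above can be sketched as follows. This is a simplified illustration rather than a production implementation: it assumes a graph given as a dict of dicts ({u: {v: weight}}) with every vertex reachable from s, and it does not separately handle extra zero-length cycles in the final subgraph.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances and predecessors from `source`."""
    dist, prev = {source: 0}, {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def path_to(prev, s, t):
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]

def suurballe(graph, s, t):
    # Step 1: shortest-path tree and first path P1.
    dist, prev = dijkstra(graph, s)
    p1 = path_to(prev, s, t)
    p1_edges = set(zip(p1, p1[1:]))

    # Steps 2-3: reduced costs w'(u,v) = w(u,v) - d(s,v) + d(s,u),
    # with the (zero-reduced-cost) edges of P1 reversed.
    residual = {}
    for u in graph:
        for v, w in graph[u].items():
            if (u, v) in p1_edges:
                residual.setdefault(v, {})[u] = 0   # reversed P1 edge
            else:
                residual.setdefault(u, {})[v] = w - dist[v] + dist[u]

    # Step 4: second path, in the residual graph.
    _, prev2 = dijkstra(residual, s)
    p2 = path_to(prev2, s, t)
    p2_edges = set(zip(p2, p2[1:]))

    # Step 5: edges traversed in opposite directions cancel out;
    # walk the remaining edges from s twice to recover both paths.
    edges = {e for e in p1_edges if (e[1], e[0]) not in p2_edges}
    edges |= {e for e in p2_edges if (e[1], e[0]) not in p1_edges}
    out = {}
    for u, v in edges:
        out.setdefault(u, []).append(v)
    paths = []
    for _ in range(2):
        node, path = s, [s]
        while node != t:
            node = out[node].pop()
            path.append(node)
        paths.append(path)
    return paths
```

On the worked example of the next section (edges A→B=1, A→C=2, B→D=1, B→E=2, C→D=2, D→F=1, E→B=2, E→F=2), this sketch reproduces the two disjoint paths A–B–E–F and A–C–D–F.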

125.3 Example

The following example shows how Suurballe's algorithm finds the shortest pair of disjoint
paths from A to F.


Figure 303

Figure A illustrates a weighted graph G.


Figure B calculates the shortest path P1 from A to F (A–B–D–F).
Figure C illustrates the shortest path tree T rooted at A, and the computed distances from
A to every vertex u.
Figure D shows the residual graph Gt with the updated cost of each edge and the edges of
path P1 reversed.
Figure E calculates path P2 in the residual graph Gt (A–C–D–B–E–F).
Figure F illustrates both path P1 and path P2 .
Figure G finds the shortest pair of disjoint paths by combining the edges of paths P1 and
P2 and then discarding the common reversed edges between both paths (B–D). As a result,
we get the shortest pair of disjoint paths: (A–B–E–F) and (A–C–D–F).

125.4 Correctness

The weight of any path from s to t in the modified system of weights equals the weight in the
original graph, minus d(s,t). Therefore, the shortest two disjoint paths under the modified


weights are the same paths as the shortest two paths in the original graph, although they
have different weights.
Suurballe's algorithm may be seen as a special case of the successive shortest paths method
for finding a minimum cost flow10 with total flow amount two from s to t. The modification
to the weights does not affect the sequence of paths found by this method, only their weights.
Therefore, the correctness of the algorithm follows from the correctness of the successive
shortest paths method.

125.5 Analysis and running time

This algorithm requires two iterations of Dijkstra's algorithm. Using Fibonacci heaps11 ,
both iterations can be performed in time O(|E| + |V| log |V|), where |V| and |E| are the
number of vertices and edges respectively. Therefore, the same time bound applies to
Suurballe's algorithm.

125.6 Variations

The version of Suurballe's algorithm as described above finds paths that have disjoint edges,
but that may share vertices. It is possible to use the same algorithm to find vertex-disjoint
paths, by replacing each vertex by a pair of adjacent vertices, one with all of the incoming
adjacencies u-in of the original vertex, and one with all of the outgoing adjacencies u-
out. Two edge-disjoint paths in this modified graph necessarily correspond to two vertex-
disjoint paths in the original graph, and vice versa, so applying Suurballe's algorithm to the
modified graph results in the construction of two vertex-disjoint paths in the original graph.
Suurballe's original 1974 algorithm was for the vertex-disjoint version of the problem, and
was extended in 1984 by Suurballe and Tarjan to the edge-disjoint version.[3]
By using a modified version of Dijkstra's algorithm that simultaneously computes the dis-
tances to each vertex t in the graphs Gt , it is also possible to find the total lengths of the
shortest pairs of paths from a given source vertex s to every other vertex in the graph, in
an amount of time that is proportional to a single instance of Dijkstra's algorithm.
Note: The pair of adjacent vertices resulting from the split are connected by a zero cost
uni-directional edge from the incoming to outgoing vertex. The source vertex becomes
s-out and the destination vertex becomes t-in.
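The vertex-splitting transformation just described can be sketched as follows (a minimal illustration; the adjacency-dict representation and the function name are assumptions for this sketch, not part of Suurballe's original presentation):

```python
def split_vertices(adj, source, target):
    """Split each vertex u into (u, 'in') and (u, 'out'), joined by a
    zero-cost uni-directional edge, so that edge-disjoint paths in the
    new graph correspond to vertex-disjoint paths in the original.
    adj maps each vertex u to a dict {v: weight} of outgoing edges."""
    new_adj = {}
    for u in adj:
        # zero-cost edge from the incoming copy to the outgoing copy
        new_adj[(u, 'in')] = {(u, 'out'): 0}
        new_adj.setdefault((u, 'out'), {})
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            # an original edge u -> v now runs from u_out to v_in
            new_adj[(u, 'out')][(v, 'in')] = w
    # the source becomes s-out and the destination becomes t-in
    return new_adj, (source, 'out'), (target, 'in')
```

Running Suurballe's algorithm on the returned graph between the returned source and destination then yields vertex-disjoint paths of the original graph.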

125.7 See also


• Edge disjoint shortest pair algorithm12

10 https://en.wikipedia.org/wiki/Minimum_cost_flow
11 https://en.wikipedia.org/wiki/Fibonacci_heap
12 https://en.wikipedia.org/wiki/Edge_disjoint_shortest_pair_algorithm


125.8 References
1. B, R (1999), ”S'   ”, Sur-
vivable Networks: Algorithms for Diverse Routing, Springer-Verlag, pp. 86–91,
ISBN13 978-0-7923-8381-914 .
2. S, J. W. (1974), ”D    ”, Networks, 4 (2): 125–
145, doi15 :10.1002/net.323004020416 .
3. S, J. W.; T, R. E.17 (1984), ”A    -
     ”18 (PDF), Networks, 14 (2): 325–336,
doi19 :10.1002/net.323014020920 .

13 https://en.wikipedia.org/wiki/ISBN_(identifier)
14 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7923-8381-9
15 https://en.wikipedia.org/wiki/Doi_(identifier)
16 https://doi.org/10.1002%2Fnet.3230040204
17 https://en.wikipedia.org/wiki/Robert_Tarjan
18 http://www.cse.yorku.ca/course_archive/2007-08/F/6590/Notes/surballe_alg.pdf
19 https://en.wikipedia.org/wiki/Doi_(identifier)
20 https://doi.org/10.1002%2Fnet.3230140209

126 Tarjan's off-line lowest common
ancestors algorithm

Not to be confused with Tarjan's strongly connected components algorithm1 .
In computer science2 , Tarjan's off-line lowest common ancestors algorithm is an algorithm3 for
computing lowest common ancestors4 for pairs of nodes in a tree, based on the union-find5
data structure. The lowest common ancestor of two nodes d and e in a rooted tree6 T is
the node g that is an ancestor of both d and e and that has the greatest depth in T. It is
named after Robert Tarjan7 , who discovered the technique in 1979. Tarjan's algorithm is
an offline algorithm; that is, unlike other lowest common ancestor algorithms, it requires
that all pairs of nodes for which the lowest common ancestor is desired must be specified in
advance. The simplest version of the algorithm uses the union-find data structure, which
unlike other lowest common ancestor data structures can take more than constant time
per operation when the number of pairs of nodes is similar in magnitude to the number of
nodes. A later refinement by Gabow & Tarjan (1983)8 speeds the algorithm up to linear
time9 .

126.1 Pseudocode

The pseudocode below determines the lowest common ancestor of each pair in P, given
the root r of a tree in which the children of node n are in the set n.children. For this
offline algorithm, the set P must be specified in advance. It uses the MakeSet, Find,
and Union functions of a disjoint-set forest10 . MakeSet(u) places u in a singleton set,
Find(u) returns the standard representative of the set containing u, and Union(u,v) merges
the set containing u with the set containing v. TarjanOLCA(r) is first called on the root r.
function TarjanOLCA(u) is
    MakeSet(u)
    u.ancestor := u
    for each v in u.children do
        TarjanOLCA(v)
        Union(u, v)
        Find(u).ancestor := u

1 https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
2 https://en.wikipedia.org/wiki/Computer_science
3 https://en.wikipedia.org/wiki/Algorithm
4 https://en.wikipedia.org/wiki/Lowest_common_ancestor
5 https://en.wikipedia.org/wiki/Union-find
6 https://en.wikipedia.org/wiki/Rooted_tree
7 https://en.wikipedia.org/wiki/Robert_Tarjan
8 #CITEREFGabowTarjan1983
9 https://en.wikipedia.org/wiki/Linear_time
10 https://en.wikipedia.org/wiki/Disjoint-set_data_structure#Disjoint-set_forests


    u.color := black
    for each v such that {u, v} in P do
        if v.color == black then
            print ”Tarjan's Lowest Common Ancestor of ” + u +
                ” and ” + v + ” is ” + Find(v).ancestor + ”.”

Each node is initially white, and is colored black after it and all its children have been
visited.
For each node pair {u,v} to be investigated:
• When v is already black (i.e., when v comes before u in a post-order traversal of the
tree): after u is colored black, the lowest common ancestor of this pair is available as
Find(v).ancestor, but only as long as the LCA of u and v is not colored black.
• Otherwise: once v is colored black, the LCA will be available as Find(u).ancestor, again
only as long as the LCA is not colored black.
For reference, here are optimized versions of MakeSet, Find, and Union for a disjoint-set
forest11 :
function MakeSet(x) is
    x.parent := x
    x.rank := 1

function Union(x, y) is
    xRoot := Find(x)
    yRoot := Find(y)
    if xRoot.rank > yRoot.rank then
        yRoot.parent := xRoot
    else if xRoot.rank < yRoot.rank then
        xRoot.parent := yRoot
    else if xRoot.rank == yRoot.rank then
        yRoot.parent := xRoot
        xRoot.rank := xRoot.rank + 1

function Find(x) is
    if x.parent != x then
        x.parent := Find(x.parent)
    return x.parent
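Putting the pseudocode together, a runnable sketch might look like the following (the dict-based tree representation, the function names, and the query format are illustrative assumptions, not part of the original presentation):

```python
def tarjan_olca(root, children, pairs):
    """children: node -> list of child nodes; pairs: iterable of (u, v)
    query pairs. Returns {frozenset((u, v)): lca} for each queried pair."""
    parent, rank, ancestor, color, answers = {}, {}, {}, {}, {}
    queries = {}                          # node -> list of query partners
    for u, v in pairs:
        queries.setdefault(u, []).append(v)
        queries.setdefault(v, []).append(u)

    def find(x):                          # Find with path compression
        if parent[x] != x:
            parent[x] = find(parent[x])
        return parent[x]

    def union(x, y):                      # Union by rank
        xr, yr = find(x), find(y)
        if xr == yr:
            return
        if rank[xr] < rank[yr]:
            xr, yr = yr, xr
        parent[yr] = xr
        if rank[xr] == rank[yr]:
            rank[xr] += 1

    def dfs(u):                           # TarjanOLCA from the pseudocode
        parent[u], rank[u] = u, 0         # MakeSet(u)
        ancestor[u] = u
        for v in children.get(u, []):
            dfs(v)
            union(u, v)
            ancestor[find(u)] = u
        color[u] = 'black'
        for v in queries.get(u, []):
            if color.get(v) == 'black':
                answers[frozenset((u, v))] = ancestor[find(v)]

    dfs(root)
    return answers
```

For deep trees, the recursive depth-first search would need to be converted to an iterative form to stay within recursion limits.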

126.2 References
• G, H. N.; T, R. E.12 (1983), ”A -    
    ”, Proceedings of the 15th ACM Symposium on Theory of
Computing (STOC), pp. 246–251, doi13 :10.1145/800061.80875314 .
• T, R. E.15 (1979), ”A      ”,
Journal of the ACM16 , 26 (4): 690–715, doi17 :10.1145/322154.32216118 .

11 https://en.wikipedia.org/wiki/Disjoint-set_data_structure#Disjoint-set_forests
12 https://en.wikipedia.org/wiki/Robert_Tarjan
13 https://en.wikipedia.org/wiki/Doi_(identifier)
14 https://doi.org/10.1145%2F800061.808753
15 https://en.wikipedia.org/wiki/Robert_Tarjan
16 https://en.wikipedia.org/wiki/Journal_of_the_ACM
17 https://en.wikipedia.org/wiki/Doi_(identifier)
18 https://doi.org/10.1145%2F322154.322161

127 Tarjan's strongly connected
components algorithm

Tarjan's strongly connected components algorithm

Tarjan's algorithm animation
Data structure: Graph
Worst-case performance: O(|V | + |E|)

Tarjan's algorithm is an algorithm1 in graph theory2 for finding the strongly connected
components3 of a directed graph4 . It runs in linear time5 , matching the time bound for
alternative methods including Kosaraju's algorithm6 and the path-based strong component
algorithm7 . Tarjan's algorithm is named for its inventor, Robert Tarjan8 .[1]

127.1 Overview

The algorithm takes a directed graph9 as input, and produces a partition10 of the graph's
vertices11 into the graph's strongly connected components. Each vertex of the graph appears
in exactly one of the strongly connected components. Any vertex that is not on a directed
cycle forms a strongly connected component all by itself: for example, a vertex whose
in-degree or out-degree is 0, or any vertex of an acyclic graph.
The basic idea of the algorithm is this: a depth-first search begins from an arbitrary start
node (and subsequent depth-first searches are conducted on any nodes that have not yet
been found). As usual with depth-first search, the search visits every node of the graph
exactly once, declining to revisit any node that has already been visited. Thus, the collection
of search trees is a spanning forest12 of the graph. The strongly connected components will
be recovered as certain subtrees of this forest. The roots of these subtrees are called the
”roots” of the strongly connected components. Any node of a strongly connected component

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Graph_theory
3 https://en.wikipedia.org/wiki/Strongly_connected_component
4 https://en.wikipedia.org/wiki/Directed_graph
5 https://en.wikipedia.org/wiki/Linear_time
6 https://en.wikipedia.org/wiki/Kosaraju%27s_algorithm
7 https://en.wikipedia.org/wiki/Path-based_strong_component_algorithm
8 https://en.wikipedia.org/wiki/Robert_Tarjan
9 https://en.wikipedia.org/wiki/Directed_graph
10 https://en.wikipedia.org/wiki/Partition_of_a_set
11 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
12 https://en.wikipedia.org/wiki/Spanning_forest#Spanning_forests


might serve as a root, if it happens to be the first node of a component that is discovered
by search.

127.1.1 Stack invariant

Nodes are placed on a stack13 in the order in which they are visited. When the depth-first
search recursively visits a node v and its descendants, those nodes are not all necessarily
popped from the stack when this recursive call returns. The crucial invariant property14 is
that a node remains on the stack after it has been visited if and only if there exists a path
in the input graph from it to some node earlier on the stack. In other words, a node is
removed from the stack only after every path leading out of it has been explored; when the
depth-first search backtracks past a node that has no path back to an earlier stack entry,
that node is popped along with the rest of its strongly connected component.
At the end of the call that visits v and its descendants, we know whether v itself has a
path to any node earlier on the stack. If so, the call returns, leaving v on the stack to
preserve the invariant. If not, then v must be the root of its strongly connected component,
which consists of v together with any nodes later on the stack than v (such nodes all have
paths back to v but not to any earlier node, because if they had paths to earlier nodes then
v would also have paths to earlier nodes which is false). The connected component rooted
at v is then popped from the stack and returned, again preserving the invariant.

127.1.2 Bookkeeping

Each node v is assigned a unique integer v.index, which numbers the nodes consecutively in
the order in which they are discovered. Each node also maintains a value v.lowlink that represents
the smallest index of any node known to be reachable from v through v's DFS subtree,
including v itself. Therefore v must be left on the stack if v.lowlink < v.index, whereas v
must be removed as the root of a strongly connected component if v.lowlink == v.index.
The value v.lowlink is computed during the depth-first search from v, as this finds the
nodes that are reachable from v.

127.2 The algorithm in pseudocode


algorithm tarjan is
    input: graph G = (V, E)
    output: set of strongly connected components (sets of vertices)

    index := 0
    S := empty stack
    for each v in V do
        if v.index is undefined then
            strongconnect(v)
        end if
    end for

13 https://en.wikipedia.org/wiki/Stack_(data_structure)
14 https://en.wikipedia.org/wiki/Invariant_(computer_science)


function strongconnect(v)
    // Set the depth index for v to the smallest unused index
    v.index := index
    v.lowlink := index
    index := index + 1
    S.push(v)
    v.onStack := true

    // Consider successors of v
    for each (v, w) in E do
        if w.index is undefined then
            // Successor w has not yet been visited; recurse on it
            strongconnect(w)
            v.lowlink := min(v.lowlink, w.lowlink)
        else if w.onStack then
            // Successor w is in stack S and hence in the current SCC
            // If w is not on the stack, then (v, w) is an edge pointing to an SCC already found and must be ignored
            // Note: The next line may look odd - but is correct.
            // It says w.index, not w.lowlink; that is deliberate and from the original paper
            v.lowlink := min(v.lowlink, w.index)
        end if
    end for

    // If v is a root node, pop the stack and generate an SCC
    if v.lowlink = v.index then
        start a new strongly connected component
        repeat
            w := S.pop()
            w.onStack := false
            add w to current strongly connected component
        while w ≠ v
        output the current strongly connected component
    end if
end function

The index variable is the depth-first search node number counter. S is the node stack,
which starts out empty and stores the history of nodes explored but not yet committed to
a strongly connected component. Note that this is not the normal depth-first search stack,
as nodes are not popped as the search returns up the tree; they are only popped when an
entire strongly connected component has been found.
The outermost loop searches each node that has not yet been visited, ensuring that nodes
which are not reachable from the first node are still eventually traversed. The function
strongconnect performs a single depth-first search of the graph, finding all successors
from the node v, and reporting all strongly connected components of that subgraph.
When each node finishes recursing, if its lowlink is still set to its index, then it is the root
node of a strongly connected component, formed by all of the nodes above it on the stack.
The algorithm pops the stack up to and including the current node, and presents all of
these nodes as a strongly connected component.
Note that v.lowlink := min(v.lowlink, w.index) is the correct way to update
v.lowlink if w is on stack. Because w is on the stack already, (v, w) is a back-edge in
the DFS tree and therefore w is not in the subtree of v. Because v.lowlink takes into
account nodes reachable only through the nodes in the subtree of v we must stop at w and
use w.index instead of w.lowlink.
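The pseudocode above translates fairly directly into a runnable sketch (the successor-dict graph representation is an assumed convention; a production version would use an explicit stack instead of recursion to avoid depth limits):

```python
def tarjan_scc(graph):
    """Tarjan's SCC algorithm, following the pseudocode above.
    graph: dict mapping each vertex to an iterable of successors.
    Returns the SCCs in reverse topological order of the condensation."""
    counter = [0]                       # depth-first search node counter
    index, lowlink, on_stack = {}, {}, {}
    stack, sccs = [], []

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack[v] = True
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)        # successor not yet visited
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif on_stack.get(w):
                # back-edge into the current SCC: use w's index, not lowlink
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:      # v is the root of an SCC
            component = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return sccs
```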


127.3 Complexity

Time Complexity: The Tarjan procedure is called once for each node; the forall statement
considers each edge at most once. The algorithm's running time is therefore linear in the
number of edges and nodes in G, i.e. O(|V | + |E|).
In order to achieve this complexity, the test for whether w is on the stack should be done in
constant time. This may be done, for example, by storing a flag on each node that indicates
whether it is on the stack, and performing this test by examining the flag.
Space Complexity: The Tarjan procedure requires two words of supplementary data per
vertex for the index and lowlink fields, along with one bit for onStack and another for
determining when index is undefined. In addition, one word is required on each stack frame
to hold v and another for the current position in the edge list. Finally, the worst-case size
of the stack S must be |V | (i.e. when the graph is one giant component). This gives a final
analysis of O(|V | · (2 + 5w)) where w is the machine word size. The variation of Nuutila
and Soisalon-Soininen reduced this to O(|V | · (1 + 4w)) and, subsequently, that of Pearce
requires only O(|V | · (1 + 3w)).[2][3]

127.4 Additional remarks

While there is nothing special about the order of the nodes within each strongly connected
component, one useful property of the algorithm is that no strongly connected component
will be identified before any of its successors. Therefore, the order in which the strongly
connected components are identified constitutes a reverse topological sort15 of the DAG16
formed by the strongly connected components.[4]
Donald Knuth17 described Tarjan's algorithm as one of his favorite implementations in the
book The Stanford GraphBase.[5]
He also wrote:[6]
The data structures that he devised for this problem fit together in an amazingly beau-
tiful way, so that the quantities you need to look at while exploring a directed graph
are always magically at your fingertips. And his algorithm also does topological sorting
as a byproduct.

127.5 References
1. T, R. E.18 (1972), ”D-     ”,
SIAM Journal on Computing19 , 1 (2): 146–160, doi20 :10.1137/020101021

15 https://en.wikipedia.org/wiki/Topological_sorting
16 https://en.wikipedia.org/wiki/Directed_acyclic_graph
17 https://en.wikipedia.org/wiki/Donald_Knuth
18 https://en.wikipedia.org/wiki/Robert_Tarjan
19 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
20 https://en.wikipedia.org/wiki/Doi_(identifier)
21 https://doi.org/10.1137%2F0201010


2. N, E (1994). ”O F  S C C-


   D G”. Information Processing Letters. 49 (1): 9–14.
doi22 :10.1016/0020-0190(94)90047-723 .
3. P, D. ”A S E A  D S
C C”. Information Processing Letters. 116 (1): 47–52.
doi24 :10.1016/j.ipl.2015.08.01025 .
4. H, P. ”R    T' 
 P”26 . R 9 F 2011.
5. Knuth, The Stanford GraphBase, pages 512–519.
6. K, D (2014-05-20). Twenty Questions for Donald Knuth27 .

22 https://en.wikipedia.org/wiki/Doi_(identifier)
23 https://doi.org/10.1016%2F0020-0190%2894%2990047-7
24 https://en.wikipedia.org/wiki/Doi_(identifier)
25 https://doi.org/10.1016%2Fj.ipl.2015.08.010
26 http://www.logarithmic.net/pfh/blog/01208083168
http://www.informit.com/articles/article.aspx?p=2213858&WT.mc_id=Author_Knuth_
27
20Questions

128 Topological sorting

”Dependency resolution” redirects here. For other uses, see Dependency (disambiguation)1 .
In computer science2 , a topological sort or topological ordering of a directed graph3
is a linear ordering4 of its vertices5 such that for every directed edge uv from vertex u to
vertex v, u comes before v in the ordering. For instance, the vertices of the graph may
represent tasks to be performed, and the edges may represent constraints that one task
must be performed before another; in this application, a topological ordering is just a valid
sequence for the tasks. A topological ordering is possible if and only if the graph has no
directed cycles6 , that is, if it is a directed acyclic graph7 (DAG). Any DAG has at least one
topological ordering, and algorithms8 are known for constructing a topological ordering of
any DAG in linear time9 .

128.1 Examples

The canonical application of topological sorting is in scheduling10 a sequence of jobs or
tasks based on their dependencies11 . The jobs are represented by vertices, and there is
an edge from x to y if job x must be completed before job y can be started (for example,
when washing clothes, the washing machine must finish before we put the clothes in the
dryer). Then, a topological sort gives an order in which to perform the jobs. A closely
related application of topological sorting algorithms was first studied in the early 1960s
in the context of the PERT12 technique for scheduling in project management13 (Jarnagin
196014 ); in this application, the vertices of a graph represent the milestones of a project,
and the edges represent tasks that must be performed between one milestone and another.
Topological sorting forms the basis of linear-time algorithms for finding the critical path15
of the project, a sequence of milestones and tasks that controls the length of the overall
project schedule.

1 https://en.wikipedia.org/wiki/Dependency_(disambiguation)
2 https://en.wikipedia.org/wiki/Computer_science
3 https://en.wikipedia.org/wiki/Directed_graph
4 https://en.wikipedia.org/wiki/Total_order
5 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
6 https://en.wikipedia.org/wiki/Directed_cycle
7 https://en.wikipedia.org/wiki/Directed_acyclic_graph
8 https://en.wikipedia.org/wiki/Algorithm
9 https://en.wikipedia.org/wiki/Linear_time
10 https://en.wikipedia.org/wiki/Job_shop_scheduling
11 https://en.wikipedia.org/wiki/Dependency_graph
12 https://en.wikipedia.org/wiki/Program_Evaluation_and_Review_Technique
13 https://en.wikipedia.org/wiki/Project_management
14 #CITEREFJarnagin1960
15 https://en.wikipedia.org/wiki/Critical_path_method


In computer science, applications of this type arise in instruction scheduling16 , ordering of
formula cell evaluation when recomputing formula values in spreadsheets17 , logic synthesis18 ,
determining the order of compilation tasks to perform in makefiles19 , data serialization20 ,
and resolving symbol dependencies in linkers21 . It is also used to decide in which order to
load tables with foreign keys in databases.

Figure 304

The graph shown in Figure 304 has many valid topological sorts, including:
• 5, 7, 3, 11, 8, 2, 9, 10 (visual left-to-right, top-to-bottom)
• 3, 5, 7, 8, 11, 2, 9, 10 (smallest-numbered available vertex first)
• 5, 7, 3, 8, 11, 10, 9, 2 (fewest edges first)
• 7, 5, 11, 3, 10, 8, 9, 2 (largest-numbered available vertex first)
• 5, 7, 11, 2, 3, 8, 9, 10 (attempting top-to-bottom, left-to-right)
• 3, 7, 8, 5, 11, 10, 2, 9 (arbitrary)

128.2 Algorithms

The usual algorithms for topological sorting have running time linear in the number of
nodes plus the number of edges, asymptotically, O(|V | + |E|).

128.2.1 Kahn's algorithm

One of these algorithms, first described by Kahn (1962)22 , works by choosing vertices in
the same order as the eventual topological sort. First, find a list of ”start nodes” which
have no incoming edges and insert them into a set S; at least one such node must exist in
a non-empty acyclic graph. Then:

16 https://en.wikipedia.org/wiki/Instruction_scheduling
17 https://en.wikipedia.org/wiki/Spreadsheet
18 https://en.wikipedia.org/wiki/Logic_synthesis
19 https://en.wikipedia.org/wiki/Makefile
20 https://en.wikipedia.org/wiki/Serialization
21 https://en.wikipedia.org/wiki/Linker_(computing)
22 #CITEREFKahn1962


L ← Empty list that will contain the sorted elements
S ← Set of all nodes with no incoming edge

while S is non-empty do
    remove a node n from S
    add n to tail of L
    for each node m with an edge e from n to m do
        remove edge e from the graph
        if m has no other incoming edges then
            insert m into S

if graph has edges then
    return error (graph has at least one cycle)
else
    return L (a topologically sorted order)

If the graph is a DAG23 , a solution will be contained in the list L (the solution is not
necessarily unique). Otherwise, the graph must have at least one cycle and therefore a
topological sort is impossible.
Reflecting the non-uniqueness of the resulting sort, the structure S can be simply a set or a
queue or a stack. Depending on the order that nodes n are removed from set S, a different
solution is created. A variation of Kahn's algorithm that breaks ties lexicographically24
forms a key component of the Coffman–Graham algorithm25 for parallel scheduling and
layered graph drawing26 .
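A minimal runnable sketch of Kahn's algorithm, assuming the graph is given as a successor dict (here S is a FIFO queue, one of the permissible choices discussed above):

```python
from collections import deque

def kahn_topological_sort(graph):
    """Kahn's algorithm as in the pseudocode above.
    graph: dict mapping each vertex to an iterable of successors.
    Raises ValueError if the graph has at least one cycle."""
    indegree = {}
    for u, succs in graph.items():
        indegree.setdefault(u, 0)
        for v in succs:
            indegree[v] = indegree.get(v, 0) + 1
    S = deque(v for v, d in indegree.items() if d == 0)  # "start nodes"
    L = []
    while S:
        n = S.popleft()
        L.append(n)
        for m in graph.get(n, ()):
            indegree[m] -= 1            # "remove edge e from the graph"
            if indegree[m] == 0:
                S.append(m)
    if len(L) != len(indegree):
        raise ValueError("graph has at least one cycle")
    return L
```

Replacing the deque with a stack or a priority queue changes which of the valid topological orders is produced, without affecting correctness.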

128.2.2 Depth-first search

An alternative algorithm for topological sorting is based on depth-first search27 . The algo-
rithm loops through each node of the graph, in an arbitrary order, initiating a depth-first
search that terminates when it hits any node that has already been visited since the begin-
ning of the topological sort or the node has no outgoing edges (i.e. a leaf node):
L ← Empty list that will contain the sorted nodes
while exists nodes without a permanent mark do
    select an unmarked node n
    visit(n)

function visit(node n)
    if n has a permanent mark then
        return
    if n has a temporary mark then
        stop (not a DAG)

    mark n with a temporary mark

    for each node m with an edge from n to m do
        visit(m)

    remove temporary mark from n
    mark n with a permanent mark
    add n to head of L
23 https://en.wikipedia.org/wiki/Directed_acyclic_graph
24 https://en.wikipedia.org/wiki/Lexicographic_order
25 https://en.wikipedia.org/wiki/Coffman%E2%80%93Graham_algorithm
26 https://en.wikipedia.org/wiki/Layered_graph_drawing
27 https://en.wikipedia.org/wiki/Depth-first_search


Each node n gets prepended to the output list L only after considering all other nodes which
depend on n (all descendants of n in the graph). Specifically, when the algorithm adds node
n, we are guaranteed that all nodes which depend on n are already in the output list L: they
were added to L either by the recursive call to visit() which ended before the call to visit n,
or by a call to visit() which started even before the call to visit n. Since each edge and node
is visited once, the algorithm runs in linear time. This depth-first-search-based algorithm is
the one described by Cormen et al. (2001)28 ; it seems to have been first described in print
by Tarjan (1976)29 .
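The depth-first approach can be sketched as follows (again assuming a successor-dict representation; for deep graphs an iterative version would be preferable to avoid recursion limits):

```python
def dfs_topological_sort(graph):
    """Depth-first topological sort following the marking scheme above.
    graph: dict mapping each vertex to an iterable of successors.
    Raises ValueError if a cycle (temporary mark) is detected."""
    TEMPORARY, PERMANENT = 1, 2
    mark = {}
    L = []

    def visit(n):
        state = mark.get(n)
        if state == PERMANENT:
            return
        if state == TEMPORARY:
            raise ValueError("not a DAG")
        mark[n] = TEMPORARY
        for m in graph.get(n, ()):
            visit(m)
        mark[n] = PERMANENT
        L.insert(0, n)   # prepend: all of n's descendants are already in L

    for n in graph:
        if mark.get(n) != PERMANENT:
            visit(n)
    return L
```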

128.2.3 Parallel algorithms

On a parallel random-access machine30 , a topological ordering can be constructed in
O(log2 n) time using a polynomial number of processors, putting the problem into the
complexity class NC2 31 (Cook 198532 ). One method for doing this is to repeatedly square the
adjacency matrix33 of the given graph, logarithmically many times, using min-plus matrix
multiplication34 with maximization in place of minimization. The resulting matrix describes
the longest path35 distances in the graph. Sorting the vertices by the lengths of their longest
incoming paths produces a topological ordering (Dekel, Nassimi & Sahni 198136 ).

An algorithm for parallel topological sorting on distributed memory37 machines parallelizes
the algorithm of Kahn for a DAG38 G = (V, E)[1] . On a high level, the algorithm of Kahn
repeatedly removes the vertices of indegree 0 and adds them to the topological sorting in
the order in which they were removed. Since the outgoing edges of the removed vertices
are also removed, there will be a new set of vertices of indegree 0, where the procedure is
repeated until no vertices are left. This algorithm performs D + 1 iterations, where D is
the longest path in G. Each iteration can be parallelized, which is the idea of the following
algorithm.
In the following it is assumed that the graph partition is stored on p processing elements
(PE) which are labeled 0, . . . , p − 1. Each PE i initializes a set of local vertices Q1i with
indegree39 0, where the upper index represents the current iteration. Since all vertices in
the local sets Q10 , . . . , Q1p−1 have indegree 0, i.e. they are not adjacent, they can be given in
an arbitrary order for a valid topological sorting. To assign a global index to each vertex,

28 #CITEREFCormenLeisersonRivestStein2001
29 #CITEREFTarjan1976
30 https://en.wikipedia.org/wiki/Parallel_random-access_machine
31 https://en.wikipedia.org/wiki/NC_(complexity)
32 #CITEREFCook1985
33 https://en.wikipedia.org/wiki/Adjacency_matrix
34 https://en.wikipedia.org/wiki/Min-plus_matrix_multiplication
35 https://en.wikipedia.org/wiki/Longest_path_problem
36 #CITEREFDekelNassimiSahni1981
37 https://en.wikipedia.org/wiki/Distributed_memory
38 https://en.wikipedia.org/wiki/Directed_acyclic_graph
39 https://en.wikipedia.org/wiki/Indegree

a prefix sum40 is calculated over the sizes of Q_0^1 , . . . , Q_{p−1}^1 . So in each step, there are
∑_{i=0}^{p−1} |Q_i | vertices added to the topological sorting.

Figure 305 Execution of the parallel topological sorting algorithm on a DAG with two
processing elements.


In the first step, PE j assigns the indices ∑_{i=0}^{j−1} |Q_i^1 |, . . . , (∑_{i=0}^{j} |Q_i^1 |) − 1 to the local vertices
in Q_j^1 . These vertices in Q_j^1 are removed, together with their corresponding outgoing edges.
For each outgoing edge (u, v) with endpoint v in another PE l, j ̸= l, the message (u, v) is
posted to PE l. After all vertices in Q1j are removed, the posted messages are sent to their
corresponding PE. Each message (u, v) received updates the indegree of the local vertex v.
If the indegree drops to zero, v is added to Q2j . Then the next iteration starts.


In step k, PE j assigns the indices a_{k−1} + ∑_{i=0}^{j−1} |Q_i^k |, . . . , a_{k−1} + (∑_{i=0}^{j} |Q_i^k |) − 1, where a_{k−1}
is the total amount of processed vertices after step k−1. This procedure repeats until there are no
vertices left to process, hence ∑_{i=0}^{p−1} |Q_i^{D+1} | = 0. Below is a high-level, single program, multiple
data41 pseudo code overview of this algorithm.

40 https://en.wikipedia.org/wiki/Prefix_sum
41 https://en.wikipedia.org/wiki/SPMD

Note that the prefix sum42 for the local offsets a_{k−1} + ∑_{i=0}^{j−1} |Q_i^k |, . . . , a_{k−1} + (∑_{i=0}^{j} |Q_i^k |) − 1
can be efficiently calculated in parallel.
p processing elements with IDs from 0 to p-1
Input: G = (V, E) DAG, distributed to PEs, PE index j ∈ {0, . . . , p − 1}
Output: topological sorting of G

function traverseDAGDistributed
    δ incoming degree of local vertices V
    Q = {v ∈ V | δ[v] = 0} // All vertices with indegree 0
    nrOfVerticesProcessed = 0
    do
        global build prefix sum over size of Q // get offsets and total amount of vertices in this step
        offset = nrOfVerticesProcessed + ∑_{i=0}^{j−1} |Q_i | // j is the processor index
        foreach u ∈ Q
            localOrder[u] = index++;
            foreach (u, v) ∈ E do post message (u, v) to PE owning vertex v
        nrOfVerticesProcessed += ∑_{i=0}^{p−1} |Q_i |
        deliver all messages to neighbors of vertices in Q
        receive messages for local vertices V
        remove all vertices in Q
        foreach message (u, v) received:
            if --δ[v] = 0
                add v to Q
    while global size of Q > 0
    return localOrder

The communication cost depends heavily on the given graph partition. As for runtime,
on a CRCW-PRAM43 model that allows fetch-and-decrement in constant time, this algo-
rithm runs in O((m + n)/p + D(∆ + log n)), where D is again the longest path in G and ∆ the
maximum degree.[1]

42 https://en.wikipedia.org/wiki/Prefix_sum#Parallel_algorithms
43 https://en.wikipedia.org/wiki/CRCW_PRAM


128.3 Application to shortest path finding

The topological ordering can also be used to quickly compute shortest paths44 through a
weighted45 directed acyclic graph. Let V be the list of vertices in such a graph, in topological
order. Then the following algorithm computes the shortest path from some source vertex s
to all other vertices:[2]
• Let d be an array of the same length as V; this will hold the shortest-path distances from
s. Set d[s] = 0, all other d[u] = ∞.
• Let p be an array of the same length as V, with all elements initialized to nil. Each p[u]
will hold the predecessor of u in the shortest path from s to u.
• Loop over the vertices u as ordered in V, starting from s:
• For each vertex v directly following u (i.e., there exists an edge from u to v):
• Let w be the weight of the edge from u to v.
• Relax the edge: if d[v] > d[u] + w, set
• d[v] ← d[u] + w,
• p[v] ← u.
On a graph of n vertices and m edges, this algorithm takes Θ(n + m), i.e., linear46 , time.[2]
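The steps above can be sketched as follows (the list/dict representation and function name are assumed conventions for this sketch):

```python
import math

def dag_shortest_paths(topo_order, weights, s):
    """Single-source shortest paths in a weighted DAG, following the
    relaxation scheme above. topo_order: vertices in topological order;
    weights: dict u -> {v: w} of weighted edges; s: source vertex."""
    d = {u: math.inf for u in topo_order}   # shortest-path distances
    p = {u: None for u in topo_order}       # shortest-path predecessors
    d[s] = 0
    for u in topo_order:
        if d[u] == math.inf:                # unreachable (or before s)
            continue
        for v, w in weights.get(u, {}).items():
            if d[v] > d[u] + w:             # relax the edge u -> v
                d[v] = d[u] + w
                p[v] = u
    return d, p
```

Because each edge is relaxed exactly once, in topological order, no priority queue is needed, which is what makes this linear-time rather than Dijkstra-time.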

128.4 Uniqueness

If a topological sort has the property that all pairs of consecutive vertices in the sorted
order are connected by edges, then these edges form a directed Hamiltonian path47 in the
DAG48 . If a Hamiltonian path exists, the topological sort order is unique; no other order
respects the edges of the path. Conversely, if a topological sort does not form a Hamiltonian
path, the DAG will have two or more valid topological orderings, for in this case it is always
possible to form a second valid ordering by swapping two consecutive vertices that are not
connected by an edge to each other. Therefore, it is possible to test in linear time whether a
unique ordering exists, and whether a Hamiltonian path exists, despite the NP-hardness49
of the Hamiltonian path problem for more general directed graphs (Vernet & Markenzon
199750 ).
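This linear-time test can be sketched in Python (illustrative names; one topological order and the edge set are assumed given):

```python
def has_unique_topological_order(order, edges):
    """True iff every pair of consecutive vertices in the topological
    order `order` is joined by an edge in `edges` (a set of (u, v)
    pairs), i.e. the order traces a directed Hamiltonian path."""
    return all((u, v) in edges for u, v in zip(order, order[1:]))
```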

44 https://en.wikipedia.org/wiki/Shortest_path_problem
45 https://en.wikipedia.org/wiki/Weighted_graph
46 https://en.wikipedia.org/wiki/Linear_time
47 https://en.wikipedia.org/wiki/Hamiltonian_path
48 https://en.wikipedia.org/wiki/Directed_acyclic_graph
49 https://en.wikipedia.org/wiki/NP-hard
50 #CITEREFVernetMarkenzon1997


128.5 Relation to partial orders

Topological orderings are also closely related to the concept of a linear extension51 of a par-
tial order52 in mathematics. In high-level terms, there is an adjunction53 between directed
graphs and partial orders.[3]
A partially ordered set is just a set of objects together with a definition of the "≤" inequality
relation, satisfying the axioms of reflexivity (x ≤ x), antisymmetry (if x ≤ y and y ≤ x then
x = y) and transitivity54 (if x ≤ y and y ≤ z, then x ≤ z). A total order55 is a partial order
in which, for every two objects x and y in the set, either x ≤ y or y ≤ x. Total orders are
familiar in computer science as the comparison operators needed to perform comparison
sorting56 algorithms. For finite sets, total orders may be identified with linear sequences of
objects, where the "≤" relation is true whenever the first object precedes the second object
in the order; a comparison sorting algorithm may be used to convert a total order into a
sequence in this way. A linear extension of a partial order is a total order that is compatible
with it, in the sense that, if x ≤ y in the partial order, then x ≤ y in the total order as well.
One can define a partial ordering from any DAG by letting the set of objects be the vertices
of the DAG, and defining x ≤ y to be true, for any two vertices x and y, whenever there
exists a directed path57 from x to y; that is, whenever y is reachable58 from x. With these
definitions, a topological ordering of the DAG is the same thing as a linear extension of
this partial order. Conversely, any partial ordering on a finite set may be defined as the
reachability relation in a DAG. One way of doing this is to define a DAG that has a vertex
for every object in the partially ordered set, and an edge xy for every pair of objects for
which x ≤ y. An alternative way of doing this is to use the transitive reduction59 of the
partial ordering; in general, this produces DAGs with fewer edges, but the reachability
relation in these DAGs is still the same partial order. By using these constructions, one can
use topological ordering algorithms to find linear extensions of partial orders.
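As an illustrative sketch (function and variable names are hypothetical), Kahn's algorithm applied to the generating pairs of a partial order yields one linear extension:

```python
from collections import defaultdict, deque

def linear_extension(elements, less_than):
    """Return one linear extension of a finite partial order.

    elements: iterable of objects
    less_than: set of (x, y) pairs with x < y (any generating set;
    transitive consequences are respected automatically)
    Uses Kahn's algorithm on the DAG whose arcs are the given pairs."""
    succ = defaultdict(list)
    indeg = {x: 0 for x in elements}
    for x, y in less_than:
        succ[x].append(y)
        indeg[y] += 1
    queue = deque(x for x in elements if indeg[x] == 0)
    out = []
    while queue:
        x = queue.popleft()
        out.append(x)
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                queue.append(y)
    return out
```

For instance, on {1, 2, 3, 6} ordered by divisibility, any returned sequence places 1 first and 6 last.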

128.6 See also


• tsort60 , a Unix program for topological sorting
• Feedback arc set61 , a set of edges whose removal allows the remaining subgraph to be
topologically sorted
• Tarjan's strongly connected components algorithm62 , an algorithm that gives the topo-
logically sorted list of strongly connected components in a graph

51 https://en.wikipedia.org/wiki/Linear_extension
52 https://en.wikipedia.org/wiki/Partial_order
53 https://en.wikipedia.org/wiki/Adjunction_(category_theory)
54 https://en.wikipedia.org/wiki/Transitive_relation
55 https://en.wikipedia.org/wiki/Total_order
56 https://en.wikipedia.org/wiki/Comparison_sort
57 https://en.wikipedia.org/wiki/Directed_path
58 https://en.wikipedia.org/wiki/Reachability
59 https://en.wikipedia.org/wiki/Transitive_reduction
60 https://en.wikipedia.org/wiki/Tsort
61 https://en.wikipedia.org/wiki/Feedback_arc_set
62 https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm


• Pre-topological order63

128.7 References
1. S, P; M, K; D, M; D,
R (2019). Sequential and Parallel Algorithms and Data Structures: The Basic
Toolbox64 . S I P. ISBN65 978-3-030-25208-366 .
2. C, T H.67 ; L, C E.68 ; R, R L.69 ; S,
C70 (2009) [1990]. Introduction to Algorithms71 (3 .). MIT P 
MG-H. . 655–657. ISBN72 0-262-03384-473 .
3. S, D I. (2014). Category Theory for the Sciences. MIT Press.
• C, S A.74 (1985), ”A T  P  F P-
 A”, Information and Control, 64 (1–3): 2–22, doi75 :10.1016/S0019-
9958(85)80041-376 .
• C, T H.77 ; L, C E.78 ; R, R L.79 ; S,
C80 (2001), ”S 22.4: T ”, Introduction to Algorithms81
(2 .), MIT P  MG-H, . 549–552, ISBN82 0-262-03293-783 .
• D, E; N, D; S, S (1981), ”P -
   ”, SIAM Journal on Computing, 10 (4): 657–675,
doi84 :10.1137/021004985 , MR86 063542487 .
• J, M. P. (1960), Automatic machine methods of testing PERT networks for
consistency, Technical Memorandum No. K-24/60, Dahlgren, Virginia: U. S. Naval
Weapons Laboratory.

63 https://en.wikipedia.org/wiki/Pre-topological_order
64 https://www.springer.com/gp/book/9783030252083
65 https://en.wikipedia.org/wiki/ISBN_(identifier)
66 https://en.wikipedia.org/wiki/Special:BookSources/978-3-030-25208-3
67 https://en.wikipedia.org/wiki/Thomas_H._Cormen
68 https://en.wikipedia.org/wiki/Charles_E._Leiserson
69 https://en.wikipedia.org/wiki/Ron_Rivest
70 https://en.wikipedia.org/wiki/Clifford_Stein
71 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
72 https://en.wikipedia.org/wiki/ISBN_(identifier)
73 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
74 https://en.wikipedia.org/wiki/Stephen_Cook
75 https://en.wikipedia.org/wiki/Doi_(identifier)
76 https://doi.org/10.1016%2FS0019-9958%2885%2980041-3
77 https://en.wikipedia.org/wiki/Thomas_H._Cormen
78 https://en.wikipedia.org/wiki/Charles_E._Leiserson
79 https://en.wikipedia.org/wiki/Ronald_L._Rivest
80 https://en.wikipedia.org/wiki/Clifford_Stein
81 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
82 https://en.wikipedia.org/wiki/ISBN_(identifier)
83 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
84 https://en.wikipedia.org/wiki/Doi_(identifier)
85 https://doi.org/10.1137%2F0210049
86 https://en.wikipedia.org/wiki/MR_(identifier)
87 http://www.ams.org/mathscinet-getitem?mr=0635424


• K, A B. (1962), ”T    ”, Commu-


nications of the ACM, 5 (11): 558–562, doi88 :10.1145/368996.36902589 .
• T, R E.90 (1976), ”E-    -
”, Acta Informatica, 6 (2): 171–185, doi91 :10.1007/BF0026849992 .
• V, O; M, L (1997), ”H   -
 ”, Proc. 17th International Conference of the Chilean Computer
Science Society (SCCC '97)93 (PDF), . 264–267, 94 :10.1109/SCCC.1997.63709995 .

128.8 Further reading


• D. E. Knuth96 , The Art of Computer Programming97 , Volume 1, section 2.2.3, which
gives an algorithm for topological sorting of a partial ordering, and a brief history.

128.9 External links


• NIST Dictionary of Algorithms and Data Structures: topological sort98
• W, E W.99 ”TS”100 . MathWorld101 .


88 https://en.wikipedia.org/wiki/Doi_(identifier)
89 https://doi.org/10.1145%2F368996.369025
90 https://en.wikipedia.org/wiki/Robert_Tarjan
91 https://en.wikipedia.org/wiki/Doi_(identifier)
92 https://doi.org/10.1007%2FBF00268499
93 http://pantheon.ufrj.br/bitstream/11422/2585/4/02_97_000575787.pdf
94 https://en.wikipedia.org/wiki/Doi_(identifier)
95 https://doi.org/10.1109%2FSCCC.1997.637099
96 https://en.wikipedia.org/wiki/D._E._Knuth
97 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
98 https://xlinux.nist.gov/dads/HTML/topologicalSort.html
99 https://en.wikipedia.org/wiki/Eric_W._Weisstein
100 https://mathworld.wolfram.com/TopologicalSort.html
101 https://en.wikipedia.org/wiki/MathWorld

129 Transitive closure

For other uses, see Closure (disambiguation)1 . This article is about the transitive closure of
a binary relation. For the transitive closure of a set, see transitive set § Transitive closure2 .
In mathematics3 , the transitive closure of a binary relation4 R on a set5 X is the smallest
relation on X that contains R and is transitive6 .
For example, if X is a set of airports and xRy means "there is a direct flight from airport
x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+
such that x R+ y means "it is possible to fly from x to y in one or more flights". Informally,
the transitive closure gives you the set of all places you can get to from any starting place.
More formally, the transitive closure of a binary relation R on a set X is the transitive
relation7 R+ on set X such that R+ contains R and R+ is minimal (Lidl & Pilz 19988,
p. 337). If the binary relation itself is transitive, then the transitive closure is that same
binary relation; otherwise, the transitive closure is a different relation.

129.1 Transitive relations and examples

A relation R on a set X is transitive if, for all x, y, z in X, whenever x R y and y R z then x R z.
Examples of transitive relations include the equality relation on any set, the "less than
or equal" relation on any linearly ordered set, and the relation "x was born before y" on the
set of all people. Symbolically, this can be denoted as: if x < y and y < z then x < z.
One example of a non-transitive relation is "city x can be reached via a direct flight from
city y" on the set of all cities. Simply because there is a direct flight from one city to a
second city, and a direct flight from the second city to the third, does not imply there is
a direct flight from the first city to the third. The transitive closure of this relation is a
different relation, namely "there is a sequence of direct flights that begins at city x and ends
at city y". Every relation can be extended in a similar way to a transitive relation.
An example of a non-transitive relation with a less meaningful transitive closure is "x is the
day of the week9 after y". The transitive closure of this relation is "some day x comes after

1 https://en.wikipedia.org/wiki/Closure_(disambiguation)
2 https://en.wikipedia.org/wiki/Transitive_set#Transitive_closure
3 https://en.wikipedia.org/wiki/Mathematics
4 https://en.wikipedia.org/wiki/Binary_relation
5 https://en.wikipedia.org/wiki/Set_(mathematics)
6 https://en.wikipedia.org/wiki/Transitive_relation
7 https://en.wikipedia.org/wiki/Transitive_relation
8 #CITEREFLidlPilz1998
9 https://en.wikipedia.org/wiki/Day_of_the_week


a day y on the calendar", which is trivially true for all days of the week x and y (and thus
equivalent to the Cartesian square10, which is "x and y are both days of the week").

129.2 Existence and description

For any relation R, the transitive closure of R always exists. To see this, note that the
intersection11 of any family12 of transitive relations is again transitive. Furthermore, there
exists13 at least one transitive relation containing R, namely the trivial one: X × X. The
transitive closure of R is then given by the intersection of all transitive relations containing
R.
For finite sets, we can construct the transitive closure step by step, starting from R and
adding transitive edges. This gives the intuition for a general construction. For any set X,
we can prove that the transitive closure is given by the following expression

R+ = R1 ∪ R2 ∪ R3 ∪ · · · = ⋃i≥1 Ri

where Ri is the i-th power of R, defined inductively by


R1 = R
and, for i > 0,
Ri+1 = R ◦ Ri
where ◦ denotes composition of relations14 .
To show that the above definition of R+ is the least transitive relation containing R, we
show that it contains R, that it is transitive, and that it is the smallest set with both of
those characteristics.
• R ⊆ R+ : R+ contains all of the Ri , so in particular R+ contains R.
• R+ is transitive: If (s1 , s2 ), (s2 , s3 ) ∈ R+ , then (s1 , s2 ) ∈ Rj and (s2 , s3 ) ∈ Rk for some
j, k by definition of R+ . Since composition is associative, Rj+k = Rj ◦ Rk ; hence
(s1 , s3 ) ∈ Rj+k ⊆ R+ by definition of ◦ and R+ .
• R+ is minimal, that is, if T is any transitive relation containing R, then R+ ⊆ T :
Given any such T , induction15 on i can be used to show Ri ⊆ T for all i as follows:
Base: R1 = R ⊆ T by assumption. Step: If Ri ⊆ T holds, and (s1 , s3 ) ∈ Ri+1 = R ◦ Ri ,
then (s1 , s2 ) ∈ R and (s2 , s3 ) ∈ Ri for some s2 , by definition of ◦. Hence,
(s1 , s2 ), (s2 , s3 ) ∈ T by assumption and by induction hypothesis. Hence (s1 , s3 ) ∈ T by
transitivity of T ; this completes the induction. Finally, Ri ⊆ T for all i implies R+ ⊆ T
by definition of R+ .
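For finite relations represented as sets of pairs, this construction can be sketched in Python (names are illustrative; iteration stops at the fixed point, which finiteness guarantees):

```python
def compose(r, s):
    """Composition of relations given as sets of pairs: (a, c) is
    included when (a, b) in r and (b, c) in s for some b."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def transitive_closure(r):
    """Transitive closure R+ = R ∪ R² ∪ R³ ∪ … of a finite relation."""
    closure = set(r)
    power = set(r)
    while True:
        power = compose(power, r)        # next power R^(i+1) = R^i ∘ R
        if power <= closure:             # no new pairs: fixed point reached
            return closure
        closure |= power
```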

10 https://en.wikipedia.org/wiki/Cartesian_product
11 https://en.wikipedia.org/wiki/Intersection_(set_theory)
12 https://en.wikipedia.org/wiki/Indexed_family
13 https://en.wikipedia.org/wiki/There_exists
14 https://en.wikipedia.org/wiki/Composition_of_relations
15 https://en.wikipedia.org/wiki/Mathematical_induction


129.3 Properties

The intersection16 of two transitive relations is transitive.


The union17 of two transitive relations need not be transitive. To preserve transitivity, one
must take the transitive closure. This occurs, for example, when taking the union of two
equivalence relations18 or two preorders19 . To obtain a new equivalence relation or preorder
one must take the transitive closure (reflexivity and symmetry—in the case of equivalence
relations—are automatic).

129.4 In graph theory

Figure 306 Transitive closure constructs the output graph from the input graph.

In computer science20, the concept of transitive closure can be thought of as constructing a
data structure that makes it possible to answer reachability21 questions. That is, can one
get from node a to node d in one or more hops? A binary relation tells you only that node
a is connected to node b, and that node b is connected to node c, etc. After the transitive
closure is constructed, as depicted in the accompanying figure, in an O(1) operation one may
determine that node d is reachable from node a. The data structure is typically stored as a
matrix, so if matrix[1][4] = 1, then it is the case that node 1 can reach node 4 through one
or more hops.
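One way to build such a matrix is the boolean (Warshall) variant of the Floyd–Warshall algorithm; a Python sketch, with vertex numbering chosen for illustration:

```python
def reachability_matrix(n, edges):
    """Boolean n×n transitive-closure matrix via Warshall's algorithm:
    m[u][v] becomes True iff v is reachable from u in one or more hops."""
    m = [[False] * n for _ in range(n)]
    for u, v in edges:
        m[u][v] = True
    for k in range(n):                   # allow k as an intermediate vertex
        for i in range(n):
            if m[i][k]:
                for j in range(n):
                    if m[k][j]:
                        m[i][j] = True
    return m
```

After the matrix is built, checking whether node 1 can reach node 4 is the single lookup m[1][4].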

16 https://en.wikipedia.org/wiki/Intersection_(set_theory)
17 https://en.wikipedia.org/wiki/Union_(set_theory)
18 https://en.wikipedia.org/wiki/Equivalence_relation
19 https://en.wikipedia.org/wiki/Preorder
20 https://en.wikipedia.org/wiki/Computer_science
21 https://en.wikipedia.org/wiki/Reachability


The transitive closure of the adjacency relation of a directed acyclic graph22 (DAG) is the
reachability relation of the DAG and a strict partial order23 .

129.5 In logic and computational complexity

The transitive closure of a binary relation cannot, in general, be expressed in first-order
logic24 (FO). This means that one cannot write a formula using predicate symbols R and
T that will be satisfied in any model if and only if T is the transitive closure of R. In
finite model theory25 , first-order logic (FO) extended with a transitive closure operator is
usually called transitive closure logic, and abbreviated FO(TC) or just TC. TC is a
sub-type of fixpoint logics26 . The fact that FO(TC) is strictly more expressive than FO was
discovered by Ronald Fagin27 in 1974; the result was then rediscovered by Alfred Aho28 and
Jeffrey Ullman29 in 1979, who proposed to use fixpoint logic as a database query language30
(Libkin 2004:vii). With more recent concepts of finite model theory, proof that FO(TC)
is strictly more expressive than FO follows immediately from the fact that FO(TC) is not
Gaifman-local31 (Libkin 2004:49).
In computational complexity theory32 , the complexity class33 NL34 corresponds precisely
to the set of logical sentences expressible in TC. This is because the transitive closure
property has a close relationship with the NL-complete35 problem STCON36 for finding
directed paths37 in a graph. Similarly, the class L38 is first-order logic with the commutative,
transitive closure. When transitive closure is added to second-order logic39 instead, we
obtain PSPACE40 .

22 https://en.wikipedia.org/wiki/Directed_acyclic_graph
23 https://en.wikipedia.org/wiki/Strict_partial_order
24 https://en.wikipedia.org/wiki/First-order_logic
25 https://en.wikipedia.org/wiki/Finite_model_theory
26 https://en.wikipedia.org/wiki/Fixpoint_logic
27 https://en.wikipedia.org/wiki/Ronald_Fagin
28 https://en.wikipedia.org/wiki/Alfred_Aho
29 https://en.wikipedia.org/wiki/Jeffrey_Ullman
30 https://en.wikipedia.org/wiki/Database_query_language
31 https://en.wikipedia.org/w/index.php?title=Gaifman-local&action=edit&redlink=1
32 https://en.wikipedia.org/wiki/Computational_complexity_theory
33 https://en.wikipedia.org/wiki/Complexity_class
34 https://en.wikipedia.org/wiki/NL_(complexity)
35 https://en.wikipedia.org/wiki/NL-complete
36 https://en.wikipedia.org/wiki/STCON
37 https://en.wikipedia.org/wiki/Directed_path
38 https://en.wikipedia.org/wiki/L_(complexity)
39 https://en.wikipedia.org/wiki/Second-order_logic
40 https://en.wikipedia.org/wiki/PSPACE


129.6 In database query languages

Further information: Hierarchical and recursive queries in SQL41 Since the 1980s Oracle
Database42 has implemented a proprietary SQL43 extension CONNECT BY... START
WITH that allows the computation of a transitive closure as part of a declarative query.
The SQL 344 (1999) standard added a more general WITH RECURSIVE construct also
allowing transitive closures to be computed inside the query processor; as of 2011 the
latter is implemented in IBM DB245 , Microsoft SQL Server46 , Oracle47 , and PostgreSQL48 ,
although not in MySQL49 (Benedikt and Senellart 2011:189).
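As a hedged sketch (the table and column names are made up), SQLite, which also implements WITH RECURSIVE, can compute a transitive closure inside the query processor:

```python
import sqlite3

# In-memory database with a small 'flights' relation: a -> b -> c.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (src TEXT, dst TEXT)")
conn.executemany("INSERT INTO flights VALUES (?, ?)",
                 [("a", "b"), ("b", "c")])

# Transitive closure: all (src, dst) pairs connected by one or more flights.
closure = conn.execute("""
    WITH RECURSIVE reach(src, dst) AS (
        SELECT src, dst FROM flights
        UNION
        SELECT r.src, f.dst FROM reach r JOIN flights f ON r.dst = f.src
    )
    SELECT src, dst FROM reach ORDER BY src, dst
""").fetchall()
```

Here `closure` contains (a, b), (a, c), and (b, c); using UNION rather than UNION ALL deduplicates rows so the recursion terminates even on cyclic data.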
Datalog50 also implements transitive closure computations (Silberschatz et al. 2010:C.3.6).

129.7 Algorithms

Efficient algorithms for computing the transitive closure of the adjacency relation of a graph
can be found in Nuutila (1995). The fastest worst-case methods, which are not practical,
reduce the problem to matrix multiplication51 . The problem can also be solved by the
Floyd–Warshall algorithm52 , or by repeated breadth-first search53 or depth-first search54
starting from each node of the graph.
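The search-based approach can be sketched in Python (illustrative names; one depth-first search per vertex):

```python
def closure_by_dfs(vertices, succ):
    """Transitive closure by one DFS per vertex.

    succ: dict mapping each vertex to a list of direct successors.
    Returns a dict mapping each vertex to the set of vertices reachable
    from it via one or more edges."""
    reach = {}
    for s in vertices:
        seen = set()
        stack = list(succ.get(s, []))    # start from s's direct successors
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(succ.get(u, []))
        reach[s] = seen
    return reach
```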
More recent research has explored efficient ways of computing transitive closure on dis-
tributed systems based on the MapReduce55 paradigm (Afrati et al. 2011).

129.8 See also


• Ancestral relation56
• Deductive closure57
• Reflexive closure58
• Symmetric closure59

41 https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL
42 https://en.wikipedia.org/wiki/Oracle_Database
43 https://en.wikipedia.org/wiki/SQL
44 https://en.wikipedia.org/wiki/SQL_3
45 https://en.wikipedia.org/wiki/IBM_DB2
46 https://en.wikipedia.org/wiki/Microsoft_SQL_Server
47 https://en.wikipedia.org/wiki/Oracle_Database
48 https://en.wikipedia.org/wiki/PostgreSQL
49 https://en.wikipedia.org/wiki/MySQL
50 https://en.wikipedia.org/wiki/Datalog
51 https://en.wikipedia.org/wiki/Matrix_multiplication
52 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
53 https://en.wikipedia.org/wiki/Breadth-first_search
54 https://en.wikipedia.org/wiki/Depth-first_search
55 https://en.wikipedia.org/wiki/MapReduce
56 https://en.wikipedia.org/wiki/Ancestral_relation
57 https://en.wikipedia.org/wiki/Deductive_closure
58 https://en.wikipedia.org/wiki/Reflexive_closure
59 https://en.wikipedia.org/wiki/Symmetric_closure


• Transitive reduction60 (a smallest relation having the transitive closure of R as its tran-
sitive closure)

129.9 References
• L, R.; P, G. (1998), A  , U T 
M61 (2 .), S, ISBN62 0-387-98290-663
• Keller, U., 2004, Some Remarks on the Definability of Transitive Closure in First-order
Logic and Datalog64 (unpublished manuscript)
• E G; P G. K; L L; M M; J
S; M Y. V; Y V; S W (2007). Finite Model
Theory and Its Applications. Springer. pp. 151–152. ISBN65 978-3-540-68804-466 .
• L, L67 (2004), Elements of Finite Model Theory68 , S, ISBN69 978-
3-540-21202-770
• H-D E; J F (1999). Finite Model Theory71 (2 .).
S. . 12372 –124, 151–161, 220–235. ISBN73 978-3-540-28787-274 .
• A, A. V.; U, J. D. (1979). ”U    ”.
Proceedings of the 6th ACM SIGACT-SIGPLAN Symposium on Principles of program-
ming languages - POPL '79. pp. 110–119. doi75 :10.1145/567752.56776376 .
• B, M.; S, P. (2011). ”D”. I B, E K.; A,
A V. (.). Computer Science. The Hardware, Software and Heart of It. pp. 169–
229. doi77 :10.1007/978-1-4614-1168-0_1078 . ISBN79 978-1-4614-1167-380 .
• Nuutila, E., Efficient Transitive Closure Computation in Large Digraphs.81 Acta Poly-
technica Scandinavica, Mathematics and Computing in Engineering Series No. 74,

60 https://en.wikipedia.org/wiki/Transitive_reduction
61 https://en.wikipedia.org/wiki/Undergraduate_Texts_in_Mathematics
62 https://en.wikipedia.org/wiki/ISBN_(identifier)
63 https://en.wikipedia.org/wiki/Special:BookSources/0-387-98290-6
64 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.127.8266
65 https://en.wikipedia.org/wiki/ISBN_(identifier)
66 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-68804-4
67 https://en.wikipedia.org/wiki/Leonid_Libkin
68 https://archive.org/details/elementsoffinite00libk
69 https://en.wikipedia.org/wiki/ISBN_(identifier)
70 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-21202-7
71 https://archive.org/details/finitemodeltheor0000ebbi
72 https://archive.org/details/finitemodeltheor0000ebbi/page/123
73 https://en.wikipedia.org/wiki/ISBN_(identifier)
74 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-28787-2
75 https://en.wikipedia.org/wiki/Doi_(identifier)
76 https://doi.org/10.1145%2F567752.567763
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.1007%2F978-1-4614-1168-0_10
79 https://en.wikipedia.org/wiki/ISBN_(identifier)
80 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4614-1167-3
81 http://www.cs.hut.fi/~enu/thesis.html


Helsinki 1995, 124 pages. Published by the Finnish Academy of Technology. ISBN82 951-666-451-283, ISSN84 1237-240485, UDC 681.3.
• Abraham Silberschatz; Henry Korth; S. Sudarshan (2010). Database System Concepts86 (6th ed.). McGraw-Hill. ISBN87 978-0-07-352332-388. Appendix C89 (online only)
• Foto N. Afrati, Vinayak Borkar, Michael Carey, Neoklis Polyzotis, Jeffrey D. Ullman, Map-Reduce Extensions and Recursive Queries90, EDBT 2011, March 22–24, 2011, Uppsala, Sweden, ISBN91 978-1-4503-0528-092

129.10 External links


• "Transitive closure and reduction93", The Stony Brook Algorithm Repository, Steven Skiena.
• "Apti Algoritmi94", an example and some C++ implementations of algorithms that calculate the transitive closure of a given binary relation, Vreda Pieterse.

82 https://en.wikipedia.org/wiki/ISBN_(identifier)
83 https://en.wikipedia.org/wiki/Special:BookSources/951-666-451-2
84 https://en.wikipedia.org/wiki/ISSN_(identifier)
85 https://www.worldcat.org/search?fq=x0:jrnl&q=n2:1237-2404
86 https://en.wikipedia.org/wiki/Database_System_Concepts
87 https://en.wikipedia.org/wiki/ISBN_(identifier)
88 https://en.wikipedia.org/wiki/Special:BookSources/978-0-07-352332-3
89 http://codex.cs.yale.edu/avi/db-book/db6/appendices-dir/c.pdf
https://web.archive.org/web/20140810063150/http://www.edbt.org/Proceedings/2011-
90
Uppsala/papers/edbt/a1-afrati.pdf
91 https://en.wikipedia.org/wiki/ISBN_(identifier)
92 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4503-0528-0
93 http://www.cs.sunysb.edu/~algorith/files/transitive-closure.shtml
94 http://www.cs.up.ac.za/cs/vpieterse/AptiAlgo/AptiAlgoritmi.html

130 Transitive reduction

In mathematics1, a transitive reduction of a directed graph2 D is another directed graph
with the same vertices and as few edges as possible, such that if there is a (directed) path
from vertex v to vertex w in D, then there is also such a path in the reduction. Transitive
reductions were introduced by Aho, Garey & Ullman (1972)3 , who provided tight bounds
on the computational complexity of constructing them.
More technically, the reduction is a directed graph that has the same reachability4 relation
as D. Equivalently, D and its transitive reduction should have the same transitive closure5
as each other, and its transitive reduction should have as few edges as possible among all
graphs with this property.
The transitive reduction of a finite directed acyclic graph6 (a directed graph without directed
cycles) is unique and is a subgraph7 of the given graph. However, uniqueness fails for graphs
with (directed) cycles, and for infinite graphs not even existence is guaranteed.
The closely related concept of a minimum equivalent graph is a subgraph of D that
has the same reachability relation and as few edges as possible.[1] The difference is that
a transitive reduction does not have to be a subgraph of D. For finite directed acyclic
graphs, the minimum equivalent graph is the same as the transitive reduction. However,
for graphs that may contain cycles, minimum equivalent graphs are NP-hard8 to construct,
while transitive reductions can be constructed in polynomial time9 .
Transitive reduction can be defined for an abstract binary relation10 on a set11 , by inter-
preting the pairs of the relation as arcs in a directed graph.

130.1 In acyclic directed graphs

The transitive reduction of a finite directed graph12 G is a graph with the fewest possible
edges that has the same reachability13 relation as the original graph. That is, if there is a

1 https://en.wikipedia.org/wiki/Mathematics
2 https://en.wikipedia.org/wiki/Directed_graph
3 #CITEREFAhoGareyUllman1972
4 https://en.wikipedia.org/wiki/Reachability
5 https://en.wikipedia.org/wiki/Transitive_closure
6 https://en.wikipedia.org/wiki/Directed_acyclic_graph
7 https://en.wikipedia.org/wiki/Induced_subgraph
8 https://en.wikipedia.org/wiki/NP-hard
9 https://en.wikipedia.org/wiki/Polynomial_time
10 https://en.wikipedia.org/wiki/Binary_relation
11 https://en.wikipedia.org/wiki/Set_(mathematics)
12 https://en.wikipedia.org/wiki/Directed_graph
13 https://en.wikipedia.org/wiki/Reachability


path from a vertex x to a vertex y in graph G, there must also be a path from x to y in the
transitive reduction of G, and vice versa. The following image displays drawings of graphs
corresponding to a non-transitive binary relation (on the left) and its transitive reduction
(on the right).

Figure 307 and Figure 308: a non-transitive binary relation (left) and its transitive reduction (right).

The transitive reduction of a finite directed acyclic graph14 G is unique, and consists of the
edges of G that form the only path between their endpoints. In particular, it is always a
subgraph15 of the given graph. For this reason, the transitive reduction coincides with the
minimum equivalent graph in this case.
In the mathematical theory of binary relations16 , any relation R on a set X may be thought
of as a directed graph17 that has the set X as its vertex set and that has an arc xy for every
ordered pair18 of elements that are related in R. In particular, this method lets partially
ordered sets19 be reinterpreted as directed acyclic graphs, in which there is an arc xy in
the graph whenever there is an order relation x < y between the given pair of elements of
the partial order. When the transitive reduction operation is applied to a directed acyclic
graph that has been constructed in this way, it generates the covering relation20 of the
partial order, which is frequently given visual expression by means of a Hasse diagram21 .
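For a finite strict partial order given as a transitively closed set of pairs, the covering relation can be sketched in Python (names are illustrative):

```python
def covering_relation(pairs):
    """Transitive reduction of a finite strict partial order.

    pairs: set of (x, y) with x < y, assumed transitively closed.
    Keeps (x, y) only if no z satisfies x < z < y, i.e. exactly the
    edges that would be drawn in the Hasse diagram."""
    middles = {b for (_, b) in pairs}    # candidate intermediate elements
    return {(x, y) for (x, y) in pairs
            if not any((x, z) in pairs and (z, y) in pairs
                       for z in middles)}
```

On divisibility over {1, 2, 4}, the pair (1, 4) is dropped because 2 lies strictly between 1 and 4.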
Transitive reduction has been used on networks which can be represented as directed acyclic
graphs (e.g. citation graphs22 or citation networks23 ) to reveal structural differences between
networks.[2]

14 https://en.wikipedia.org/wiki/Directed_acyclic_graph
15 https://en.wikipedia.org/wiki/Induced_subgraph
16 https://en.wikipedia.org/wiki/Binary_relation
17 https://en.wikipedia.org/wiki/Directed_graph
18 https://en.wikipedia.org/wiki/Ordered_pair
19 https://en.wikipedia.org/wiki/Partially_ordered_set
20 https://en.wikipedia.org/wiki/Covering_relation
21 https://en.wikipedia.org/wiki/Hasse_diagram
22 https://en.wikipedia.org/wiki/Citation_graph
23 https://en.wikipedia.org/wiki/Citation_graph


130.2 In graphs with cycles

In a finite graph that may have cycles, the transitive reduction is not unique: there may
be more than one graph on the same vertex set that has a minimum number of edges and
has the same reachability relation as the given graph. Additionally, it may be the case
that none of these minimum graphs is a subgraph of the given graph. Nevertheless, it is
straightforward to characterize the minimum graphs with the same reachability relation as
the given graph G.[3] If G is an arbitrary directed graph, and H is a graph with the minimum
possible number of edges having the same reachability relation as G, then H consists of
• A directed cycle24 for each strongly connected component25 of G, connecting together the
vertices in this component
• An edge xy for each edge XY of the transitive reduction of the condensation26 of G, where
X and Y are two strongly connected components of G that are connected by an edge in
the condensation, x is any vertex in component X, and y is any vertex in component Y.
The condensation of G is a directed acyclic graph that has a vertex for every strongly
connected component of G and an edge for every two components that are connected by
an edge in G. In particular, because it is acyclic, its transitive reduction can be defined
as in the previous section.
The total number of edges in this type of transitive reduction is then equal to the number
of edges in the transitive reduction of the condensation, plus the number of vertices in
nontrivial strongly connected components (components with more than one vertex).
The edges of the transitive reduction that correspond to condensation edges can always be
chosen to be a subgraph of the given graph G. However, the cycle within each strongly
connected component can only be chosen to be a subgraph of G if that component has a
Hamiltonian cycle27 , something that is not always true and is difficult to check. Because
of this difficulty, it is NP-hard28 to find the smallest subgraph of a given graph G with the
same reachability (its minimum equivalent graph).[3]

130.3 Computational complexity

As Aho et al. show,[3] when the time complexity29 of graph algorithms is measured only as
a function of the number n of vertices in the graph, and not as a function of the number
of edges, transitive closure and transitive reduction of directed acyclic graphs have the
same complexity. It had already been shown that transitive closure and multiplication30 of
Boolean matrices31 of size n × n had the same complexity as each other,[4] so this result
put transitive reduction into the same class. The fastest known exact algorithms for matrix

24 https://en.wikipedia.org/wiki/Directed_cycle
25 https://en.wikipedia.org/wiki/Strongly_connected_component
26 https://en.wikipedia.org/wiki/Strongly_connected_component
27 https://en.wikipedia.org/wiki/Hamiltonian_cycle
28 https://en.wikipedia.org/wiki/NP-hard
29 https://en.wikipedia.org/wiki/Time_complexity
30 https://en.wikipedia.org/wiki/Matrix_multiplication
31 https://en.wikipedia.org/wiki/Logical_matrix

1327
Transitive reduction

multiplication, as of 2015, take time O(n^2.3729 ),[5] and this gives the fastest known worst-case
time bound for transitive reduction in dense graphs.

130.3.1 Computing the reduction using the closure

To prove that transitive reduction is as easy as transitive closure, Aho et al. rely on
the already-known equivalence with Boolean matrix multiplication. They let A be the
adjacency matrix32 of the given directed acyclic graph, and B be the adjacency matrix of
its transitive closure (computed using any standard transitive closure algorithm). Then an
edge uv belongs to the transitive reduction if and only if there is a nonzero entry in row
u and column v of matrix A, and there is a zero entry in the same position of the matrix
product AB. In this construction, the nonzero elements of the matrix AB represent pairs of
vertices connected by paths of length two or more.[3]
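The construction above can be sketched directly with boolean adjacency matrices. A minimal illustration, assuming a small DAG given as a nested-list 0/1 matrix (the function names are our own; a real implementation would use a fast matrix multiplication routine rather than the cubic loops shown here):

```python
def transitive_closure(adj):
    """Boolean transitive closure B of a DAG (Warshall-style relaxation)."""
    n = len(adj)
    b = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            if b[i][k]:
                for j in range(n):
                    if b[k][j]:
                        b[i][j] = 1
    return b

def transitive_reduction(adj):
    """Keep edge (u, v) iff A[u][v] != 0 and (A B)[u][v] == 0, where B is
    the closure: a nonzero entry of A*B marks a path of length >= 2."""
    n = len(adj)
    b = transitive_closure(adj)
    ab = [[any(adj[u][k] and b[k][v] for k in range(n)) for v in range(n)]
          for u in range(n)]
    return [[1 if adj[u][v] and not ab[u][v] else 0 for v in range(n)]
            for u in range(n)]
```

For the triangle 0→1, 1→2, 0→2 the shortcut edge 0→2 is removed, as expected.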

130.3.2 Computing the closure using the reduction

To prove that transitive reduction is as hard as transitive closure, Aho et al. construct from
a given directed acyclic graph G another graph H, in which each vertex of G is replaced by
a path of three vertices, and each edge of G corresponds to an edge in H connecting the
corresponding middle vertices of these paths. In addition, in the graph H, Aho et al. add an
edge from every path start to every path end. In the transitive reduction of H, there is an
edge from the path start for u to the path end for v, if and only if edge uv does not belong
to the transitive closure of G. Therefore, if the transitive reduction of H can be computed
efficiently, the transitive closure of G can be read off directly from it.[3]

130.3.3 Computing the reduction in sparse graphs

When measured both in terms of the number n of vertices and the number m of edges in
a directed acyclic graph, transitive reductions can also be found in time O(nm), a bound
that may be faster than the matrix multiplication methods for sparse graphs33 . To do so,
apply a linear time34 longest path algorithm35 in the given directed acyclic graph, for each
possible choice of starting vertex. From the computed longest paths, keep only those of
length one (single edge); in other words, keep those edges (u,v) for which there exists no
other path from u to v. This O(nm) time bound matches the complexity of constructing
transitive closures by using depth first search36 or breadth first search37 to find the vertices
reachable from every choice of starting vertex, so again with these assumptions transitive
closures and transitive reductions can be found in the same amount of time.
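The O(nm) procedure just described might be sketched as follows, assuming the DAG is given as a vertex count and an edge list (the helper names are illustrative, not from any library):

```python
from collections import defaultdict, deque

def transitive_reduction_sparse(n, edges):
    """O(nm) reduction of a DAG: run a longest-path DP from every source
    vertex and keep an edge (u, v) only if the longest path u -> v has
    length one, i.e. no other path from u to v exists."""
    succ = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # One topological order (Kahn's algorithm) serves all starting vertices.
    order, q = [], deque(i for i in range(n) if indeg[i] == 0)
    while q:
        x = q.popleft()
        order.append(x)
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                q.append(y)
    keep = set()
    for src in range(n):
        dist = [float("-inf")] * n
        dist[src] = 0
        for x in order:                      # longest-path DP from src
            if dist[x] > float("-inf"):
                for y in succ[x]:
                    dist[y] = max(dist[y], dist[x] + 1)
        keep.update((src, v) for v in succ[src] if dist[v] == 1)
    return keep
```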

32 https://en.wikipedia.org/wiki/Adjacency_matrix
33 https://en.wikipedia.org/wiki/Sparse_graph
34 https://en.wikipedia.org/wiki/Linear_time
35 https://en.wikipedia.org/wiki/Longest_path_problem
36 https://en.wikipedia.org/wiki/Depth_first_search
37 https://en.wikipedia.org/wiki/Breadth_first_search

1328
Notes

130.4 Notes
1. Moyles & Thompson (1969)38 .
2. Clough et al. (2015)39 .
3. Aho, Garey & Ullman (1972)40
4. Aho et al. credit this result to an unpublished 1971 manuscript of Ian Munro, and to
a 1970 Russian-language paper by M. E. Furman.
5. Le Gall (2014)41 .

130.5 References
• A, A. V.42 ; G, M. R.43 ; U, J. D.44 (1972), ”T  -
    ”, SIAM Journal on Computing45 , 1 (2): 131–137,
doi46 :10.1137/020100847 , MR48 030603249 .
• C, J. R.; G, J.; L, T. V.; E, T. S. (2015), ”T
   ”, Journal of Complex Networks, 3 (2): 189–203,
arXiv50 :1310.822451 , doi52 :10.1093/comnet/cnu03953 .
• M, D M.; T, G L. (1969), ”A A  F 
M E G   D”, Journal of the ACM54 , 16 (3): 455–460,
doi55 :10.1145/321526.32153456 .
• L G, F (2014), ”P  T  F M M-
”, Proc. 39th International Symposium on Symbolic and Algebraic Computation
(ISSAC '14), pp. 296–303, doi57 :10.1145/2608628.260866458 .

38 #CITEREFMoylesThompson1969
39 #CITEREFCloughGollingsLoachEvans2015
40 #CITEREFAhoGareyUllman1972
41 #CITEREFLe_Gall2014
42 https://en.wikipedia.org/wiki/Alfred_Aho
43 https://en.wikipedia.org/wiki/Michael_Garey
44 https://en.wikipedia.org/wiki/Jeffrey_Ullman
45 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
46 https://en.wikipedia.org/wiki/Doi_(identifier)
47 https://doi.org/10.1137%2F0201008
48 https://en.wikipedia.org/wiki/MR_(identifier)
49 http://www.ams.org/mathscinet-getitem?mr=0306032
50 https://en.wikipedia.org/wiki/ArXiv_(identifier)
51 http://arxiv.org/abs/1310.8224
52 https://en.wikipedia.org/wiki/Doi_(identifier)
53 https://doi.org/10.1093%2Fcomnet%2Fcnu039
54 https://en.wikipedia.org/wiki/Journal_of_the_ACM
55 https://en.wikipedia.org/wiki/Doi_(identifier)
56 https://doi.org/10.1145%2F321526.321534
57 https://en.wikipedia.org/wiki/Doi_(identifier)
58 https://doi.org/10.1145%2F2608628.2608664

1329
Transitive reduction

130.6 External links


• W, E W.59 ”T R”60 . MathWorld61 .

59 https://en.wikipedia.org/wiki/Eric_W._Weisstein
60 https://mathworld.wolfram.com/TransitiveReduction.html
61 https://en.wikipedia.org/wiki/MathWorld

1330
131 Travelling salesman problem

Figure 309 Solution of a travelling salesman problem: the black line shows the shortest
possible loop that connects every red dot.

The travelling salesman problem (also called the travelling salesperson problem[1]
or TSP) asks the following question: ”Given a list of cities and the distances between
each pair of cities, what is the shortest possible route that visits each city and returns to

1331
Travelling salesman problem

the origin city?” It is an NP-hard1 problem in combinatorial optimization2 , important in


operations research3 and theoretical computer science4 .
The travelling purchaser problem5 and the vehicle routing problem6 are both generalizations
of TSP.
In the theory of computational complexity7 , the decision version of the TSP (where given a
length L, the task is to decide whether the graph has a tour of length at most L) belongs to the class
of NP-complete8 problems. Thus, it is possible that the worst-case9 running time10 for any
algorithm for the TSP increases superpolynomially11 (but no more than exponentially12 )
with the number of cities.
The problem was first formulated in 1930 and is one of the most intensively studied prob-
lems in optimization. It is used as a benchmark13 for many optimization methods. Even
though the problem is computationally difficult, many heuristics14 and exact algorithms15
are known, so that some instances with tens of thousands of cities can be solved completely
and even problems with millions of cities can be approximated within a small fraction of
1%.[2]
The TSP has several applications even in its purest formulation, such as planning16 , lo-
gistics17 , and the manufacture of microchips18 . Slightly modified, it appears as a sub-
problem in many areas, such as DNA sequencing19 . In these applications, the concept
city represents, for example, customers, soldering points, or DNA fragments, and the con-
cept distance represents travelling times or cost, or a similarity measure20 between DNA
fragments. The TSP also appears in astronomy, as astronomers observing many sources
will want to minimize the time spent moving the telescope between the sources. In many
applications, additional constraints such as limited resources or time windows may be im-
posed.

1 https://en.wikipedia.org/wiki/NP-hardness
2 https://en.wikipedia.org/wiki/Combinatorial_optimization
3 https://en.wikipedia.org/wiki/Operations_research
4 https://en.wikipedia.org/wiki/Theoretical_computer_science
5 https://en.wikipedia.org/wiki/Traveling_purchaser_problem
6 https://en.wikipedia.org/wiki/Vehicle_routing_problem
7 https://en.wikipedia.org/wiki/Computational_complexity_theory
8 https://en.wikipedia.org/wiki/NP-completeness
9 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
10 https://en.wikipedia.org/wiki/Time_complexity
11 https://en.wikipedia.org/wiki/Time_complexity#Superpolynomial_time
12 https://en.wikipedia.org/wiki/Exponential_time_hypothesis
13 https://en.wikipedia.org/wiki/Benchmark_(computing)
14 https://en.wikipedia.org/wiki/Heuristic
15 https://en.wikipedia.org/wiki/Exact_algorithm
16 https://en.wikipedia.org/wiki/Planning
17 https://en.wikipedia.org/wiki/Logistics
18 https://en.wikipedia.org/wiki/Integrated_circuit
19 https://en.wikipedia.org/wiki/DNA_sequencing
20 https://en.wikipedia.org/wiki/Similarity_measure

1332
History

131.1 History

The origins of the travelling salesman problem are unclear. A handbook for travelling
salesmen from 1832 mentions the problem and includes example tours through Germany21
and Switzerland22 , but contains no mathematical treatment.[3]

Figure 310 William Rowan Hamilton

21 https://en.wikipedia.org/wiki/Germany
22 https://en.wikipedia.org/wiki/Switzerland

1333
Travelling salesman problem

The travelling salesman problem was mathematically formulated in the 1800s by the Irish
mathematician W.R. Hamilton23 and by the British mathematician Thomas Kirkman24 .
Hamilton’s Icosian Game25 was a recreational puzzle based on finding a Hamiltonian cy-
cle26 .[4] The general form of the TSP appears to have been first studied by mathematicians
during the 1930s in Vienna and at Harvard, notably by Karl Menger27 , who defines the
problem, considers the obvious brute-force algorithm, and observes the non-optimality of
the nearest neighbour heuristic:
We denote by messenger problem (since in practice this question should be solved by
each postman, anyway also by many travelers) the task to find, for finitely many points
whose pairwise distances are known, the shortest route connecting the points. Of course,
this problem is solvable by finitely many trials. Rules which would push the number of
trials below the number of permutations of the given points, are not known. The rule
that one first should go from the starting point to the closest point, then to the point
closest to this, etc., in general does not yield the shortest route.
[5]

It was first considered mathematically in the 1930s by Merrill M. Flood28 who was looking to
solve a school bus routing problem.[6] Hassler Whitney29 at Princeton University30 generated
interest in the problem, which he called the ”48 states problem”. The earliest publication
using the phrase ”traveling salesman problem” was the 1949 Rand Corporation31 report by
Julia Robinson32 , ”On the Hamiltonian game (a traveling salesman problem).”[7][8]
In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Eu-
rope and the USA after the RAND Corporation33 in Santa Monica34 offered prizes for steps
in solving the problem.[6] Notable contributions were made by George Dantzig35 , Delbert
Ray Fulkerson36 and Selmer M. Johnson37 from the RAND Corporation, who expressed the
problem as an integer linear program38 and developed the cutting plane39 method for its
solution. They wrote what is considered the seminal paper on the subject in which with
these new methods they solved an instance with 49 cities to optimality by constructing a
tour and proving that no other tour could be shorter. Dantzig, Fulkerson and Johnson,
however, speculated that given a near optimal solution we may be able to find optimality or
prove optimality by adding a small number of extra inequalities (cuts). They used this idea

23 https://en.wikipedia.org/wiki/William_Rowan_Hamilton
24 https://en.wikipedia.org/wiki/Thomas_Kirkman
25 https://en.wikipedia.org/wiki/Icosian_game
26 https://en.wikipedia.org/wiki/Hamiltonian_path
27 https://en.wikipedia.org/wiki/Karl_Menger
28 https://en.wikipedia.org/wiki/Merrill_M._Flood
29 https://en.wikipedia.org/wiki/Hassler_Whitney
30 https://en.wikipedia.org/wiki/Princeton_University
31 https://en.wikipedia.org/wiki/Rand_Corporation
32 https://en.wikipedia.org/wiki/Julia_Robinson
33 https://en.wikipedia.org/wiki/RAND_Corporation
34 https://en.wikipedia.org/wiki/Santa_Monica
35 https://en.wikipedia.org/wiki/George_Dantzig
36 https://en.wikipedia.org/wiki/Delbert_Ray_Fulkerson
37 https://en.wikipedia.org/wiki/Selmer_M._Johnson
38 https://en.wikipedia.org/wiki/Integer_linear_program
39 https://en.wikipedia.org/wiki/Cutting-plane_method

1334
History

to solve their initial 49 city problem using a string model. They found they only needed
26 cuts to come to a solution for their 49 city problem. While this paper did not give an
algorithmic approach to TSP problems, the ideas that lay within it were indispensable to
later creating exact solution methods for the TSP, though it would take 15 years to find an
algorithmic approach in creating these cuts.[6] As well as cutting plane methods, Dantzig,
Fulkerson and Johnson used branch and bound40 algorithms perhaps for the first time.[6]
In 1959, Jillian Beardwood41 , J.H. Halton and John Hammersley42 published an article
entitled “The Shortest Path Through Many Points” in the journal of the Cambridge Philo-
sophical Society.[9] The Beardwood–Halton–Hammersley theorem provides a practical so-
lution to the traveling salesman problem. The authors derived an asymptotic formula to
determine the length of the shortest route for a salesman who starts at a home or office and
visits a fixed number of locations before returning to the start.
In the following decades, the problem was studied by many researchers from mathematics43 ,
computer science44 , chemistry45 , physics46 , and other sciences. In the 1960s, however, a
new approach was created: instead of seeking optimal solutions, one would produce a
solution whose length is provably bounded by a multiple of the optimal length, and in
doing so obtain lower bounds for the problem; these may then be used with branch and
bound approaches. One method of doing this was to create a minimum spanning tree47 of
the graph and then double all its edges, which produces the bound that the length of an
optimal tour is at most twice the weight of a minimum spanning tree.[6]
Christofides made a big advance in this direction by giving an algorithm with a known
worst-case guarantee: the Christofides algorithm48 , published in 1976, produces a tour that
is at worst 1.5 times longer than the optimal solution. As the algorithm was so simple and
quick, many hoped it would give way to a near-optimal solution method. This remains the
method with the best worst-case guarantee. However, for a fairly general special case of
the problem it was beaten by a tiny margin in 2011.[10]
Richard M. Karp49 showed in 1972 that the Hamiltonian cycle50 problem was NP-
complete51 , which implies the NP-hardness52 of TSP. This supplied a mathematical ex-
planation for the apparent computational difficulty of finding optimal tours.
Great progress was made in the late 1970s and 1980, when Grötschel, Padberg, Rinaldi and
others managed to exactly solve instances with up to 2,392 cities, using cutting planes and
branch and bound53 .

40 https://en.wikipedia.org/wiki/Branch_and_bound
41 https://en.wikipedia.org/wiki/Jillian_Beardwood
42 https://en.wikipedia.org/wiki/John_Hammersley
43 https://en.wikipedia.org/wiki/Mathematics
44 https://en.wikipedia.org/wiki/Computer_science
45 https://en.wikipedia.org/wiki/Chemistry
46 https://en.wikipedia.org/wiki/Physics
47 https://en.wikipedia.org/wiki/Minimum_spanning_tree
48 https://en.wikipedia.org/wiki/Christofides_algorithm
49 https://en.wikipedia.org/wiki/Richard_M._Karp
50 https://en.wikipedia.org/wiki/Hamiltonian_cycle
51 https://en.wikipedia.org/wiki/NP-complete
52 https://en.wikipedia.org/wiki/NP-hard
53 https://en.wikipedia.org/wiki/Branch_and_bound

1335
Travelling salesman problem

In the 1990s, Applegate54 , Bixby55 , Chvátal56 , and Cook57 developed the program
Concorde that has been used in many recent record solutions. Gerhard Reinelt published
the TSPLIB in 1991, a collection of benchmark instances of varying difficulty, which has
been used by many research groups for comparing results. In 2006, Cook and others com-
puted an optimal tour through an 85,900-city instance given by a microchip layout problem,
currently the largest solved TSPLIB instance. For many other instances with millions of
cities, solutions can be found that are guaranteed to be within 2-3% of an optimal tour.[11]

131.2 Description

131.2.1 As a graph problem

Figure 311 Symmetric TSP with four cities

54 https://en.wikipedia.org/wiki/David_Applegate
55 https://en.wikipedia.org/wiki/Robert_E._Bixby
56 https://en.wikipedia.org/wiki/Va%C5%A1ek_Chv%C3%A1tal
57 https://en.wikipedia.org/wiki/William_J._Cook

1336
Description

TSP can be modelled as an undirected weighted graph58 , such that cities are the graph's
vertices59 , paths are the graph's edges60 , and a path's distance is the edge's weight. It is
a minimization problem starting and finishing at a specified vertex61 after having visited
each other vertex62 exactly once. Often, the model is a complete graph63 (i.e. each pair of
vertices is connected by an edge). If no path exists between two cities, adding an arbitrarily
long edge will complete the graph without affecting the optimal tour.

131.2.2 Asymmetric and symmetric

In the symmetric TSP, the distance between two cities is the same in each opposite direction,
forming an undirected graph64 . This symmetry halves the number of possible solutions. In
the asymmetric TSP, paths may not exist in both directions or the distances might be
different, forming a directed graph65 . Traffic collisions66 , one-way streets67 , and airfares for
cities with different departure and arrival fees are examples of how this symmetry could
break down.

131.2.3 Related problems


• An equivalent formulation in terms of graph theory68 is: Given a complete weighted
graph69 (where the vertices would represent the cities, the edges would represent the
roads, and the weights would be the cost or distance of that road), find a Hamiltonian
cycle70 with the least weight.
• The requirement of returning to the starting city does not change the computational
complexity71 of the problem, see Hamiltonian path problem72 .
• Another related problem is the Bottleneck traveling salesman problem73 (bottleneck
TSP): Find a Hamiltonian cycle in a weighted graph74 with the minimal weight of the
weightiest edge75 . For example, avoiding narrow streets with big buses.[12] The problem
is of considerable practical importance, apart from evident transportation and logistics
areas. A classic example is in printed circuit76 manufacturing: scheduling of a route of

58 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
59 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
60 https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms
61 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
62 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
63 https://en.wikipedia.org/wiki/Complete_graph
64 https://en.wikipedia.org/wiki/Undirected_graph
65 https://en.wikipedia.org/wiki/Directed_graph
66 https://en.wikipedia.org/wiki/Traffic_collision
67 https://en.wikipedia.org/wiki/One-way_traffic
68 https://en.wikipedia.org/wiki/Graph_theory
69 https://en.wikipedia.org/wiki/Glossary_of_graph_theory
70 https://en.wikipedia.org/wiki/Hamiltonian_cycle
71 https://en.wikipedia.org/wiki/Computational_complexity_theory
72 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
73 https://en.wikipedia.org/wiki/Bottleneck_traveling_salesman_problem
74 https://en.wikipedia.org/wiki/Glossary_of_graph_theory
75 https://en.wikipedia.org/wiki/Edge_(graph_theory)
76 https://en.wikipedia.org/wiki/Printed_circuit_board

1337
Travelling salesman problem

the drill77 machine to drill holes in a PCB. In robotic machining or drilling applications,
the ”cities” are parts to machine or holes (of different sizes) to drill, and the ”cost of travel”
includes time for retooling the robot (single machine job sequencing problem).[13]
• The generalized travelling salesman problem78 , also known as the ”travelling politician
problem”, deals with ”states” that have (one or more) ”cities” and the salesman has to visit
exactly one ”city” from each ”state”. One application is encountered in ordering a solution
to the cutting stock problem79 in order to minimize knife changes. Another is concerned
with drilling in semiconductor80 manufacturing, see e.g., U.S. Patent 7,054,79881 . Noon
and Bean demonstrated that the generalized travelling salesman problem can be trans-
formed into a standard travelling salesman problem with the same number of cities, but
a modified distance matrix82 .
• The sequential ordering problem deals with the problem of visiting a set of cities where
precedence relations between the cities exist.
• A common interview question at Google is how to route data among data processing
nodes; routes vary by time to transfer the data, but nodes also differ by their computing
power and storage, compounding the problem of where to send data.
• The travelling purchaser problem83 deals with a purchaser who is charged with purchasing
a set of products. He can purchase these products in several cities, but at different prices
and not all cities offer the same products. The objective is to find a route between a
subset of the cities, which minimizes total cost (travel cost + purchasing cost) and which
enables the purchase of all required products.

131.3 Integer linear programming formulations

The TSP can be formulated as an integer linear program84 .[14][15][16] Several formulations
are known. Two notable formulations are the Miller–Tucker–Zemlin (MTZ) formulation
and the Dantzig–Fulkerson–Johnson (DFJ) formulation. The DFJ formulation is stronger,
though the MTZ formulation is still useful in certain settings.[17][18]

131.3.1 Miller–Tucker–Zemlin formulation

Label the cities with the numbers 1, . . ., n and define:


\[
x_{ij} = \begin{cases}
1 & \text{the path goes from city } i \text{ to city } j \\
0 & \text{otherwise}
\end{cases}
\]
For i = 1, . . ., n, let ui be a dummy variable, and finally take cij to be the distance from city
i to city j. Then TSP can be written as the following integer linear programming problem:

77 https://en.wikipedia.org/wiki/Drill
78 https://en.wikipedia.org/wiki/Set_TSP_problem
79 https://en.wikipedia.org/wiki/Cutting_stock_problem
80 https://en.wikipedia.org/wiki/Semiconductor
81 http://www.google.com/patents/US7054798
82 https://en.wikipedia.org/wiki/Distance_matrix
83 https://en.wikipedia.org/wiki/Traveling_purchaser_problem
84 https://en.wikipedia.org/wiki/Integer_programming

1338
Integer linear programming formulations


\[
\begin{aligned}
\min \sum_{i=1}^{n} \sum_{j \neq i,\, j=1}^{n} c_{ij} x_{ij} \colon & \\
x_{ij} \in \{0, 1\} & \qquad i, j = 1, \ldots, n; \\
u_i \in \mathbf{Z} & \qquad i = 2, \ldots, n; \\
\sum_{i=1,\, i \neq j}^{n} x_{ij} = 1 & \qquad j = 1, \ldots, n; \\
\sum_{j=1,\, j \neq i}^{n} x_{ij} = 1 & \qquad i = 1, \ldots, n; \\
u_i - u_j + n x_{ij} \leq n - 1 & \qquad 2 \leq i \neq j \leq n; \\
0 \leq u_i \leq n - 1 & \qquad 2 \leq i \leq n.
\end{aligned}
\]
The first set of equalities requires that each city is arrived at from exactly one other city,
and the second set of equalities requires that from each city there is a departure to exactly
one other city. The last constraints enforce that there is only a single tour covering all
cities, and not two or more disjointed tours that only collectively cover all cities. To prove
this, it is shown below (1) that every feasible solution contains only one closed sequence of
cities, and (2) that for every single tour covering all cities, there are values for the dummy
variables ui that satisfy the constraints.
To prove that every feasible solution contains only one closed sequence of cities, it suffices
to show that every subtour in a feasible solution passes through city 1 (noting that the
equalities ensure there can only be one such tour). For if we sum all the inequalities
corresponding to xij = 1 for any subtour of k steps not passing through city 1, we obtain:
nk ≤ (n − 1)k,
which is a contradiction.
It now must be shown that for every single tour covering all cities, there are values for the
dummy variables ui that satisfy the constraints.
Without loss of generality, define the tour as originating (and ending) at city 1. Choose
ui = t if city i is visited in step t (i, t = 1, 2, ..., n). Then
ui − uj ≤ n − 1,
since ui can be no greater than n and uj can be no less than 1; hence the constraints are
satisfied whenever xij = 0. For xij = 1, we have:
ui − uj + nxij = (t) − (t + 1) + n = n − 1,
satisfying the constraint.
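The argument above can be checked mechanically: given a single tour originating at city 1, assign each dummy variable the step at which its city is visited and verify every MTZ constraint. A small sketch (0-indexed, so "city 1" becomes index 0 and the tour must start there; the function name is ours):

```python
def mtz_check(tour):
    """Verify the MTZ constraints u_i - u_j + n*x_ij <= n - 1 (for all
    i != j among cities other than the start) when u_i is the visit step
    of city i; `tour` is a permutation of 0..n-1 beginning with city 0."""
    n = len(tour)
    step = {city: t for t, city in enumerate(tour)}          # u_i = visit step
    arcs = {(tour[t], tour[(t + 1) % n]) for t in range(n)}  # arcs with x_ij = 1
    return all(
        step[i] - step[j] + n * ((i, j) in arcs) <= n - 1
        for i in range(1, n) for j in range(1, n) if i != j
    )
```

Any single tour starting at city 0 passes the check, matching the proof; a tour listed with a different starting city need not, since the constraints deliberately exempt the start.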

131.3.2 Dantzig–Fulkerson–Johnson formulation

Label the cities with the numbers 1, . . ., n and define:


\[
x_{ij} = \begin{cases}
1 & \text{the path goes from city } i \text{ to city } j \\
0 & \text{otherwise}
\end{cases}
\]

1339
Travelling salesman problem

Take cij to be the distance from city i to city j. Then TSP can be written as the following
integer linear programming problem:

\[
\begin{aligned}
\min \sum_{i=1}^{n} \sum_{j \neq i,\, j=1}^{n} c_{ij} x_{ij} \colon & \\
0 \leq x_{ij} \leq 1 & \qquad i, j = 1, \ldots, n; \\
\sum_{i=1,\, i \neq j}^{n} x_{ij} = 1 & \qquad j = 1, \ldots, n; \\
\sum_{j=1,\, j \neq i}^{n} x_{ij} = 1 & \qquad i = 1, \ldots, n; \\
\sum_{i \in Q} \sum_{j \neq i,\, j \in Q} x_{ij} \leq |Q| - 1 & \qquad \forall Q \subsetneq \{1, \ldots, n\}, |Q| \geq 2.
\end{aligned}
\]

The last constraint of the DFJ formulation ensures that there are no sub-tours among the
non-starting vertices, so the solution returned is a single tour and not the union of smaller
tours. Because this leads to an exponential number of possible constraints, in practice it is
solved with delayed column generation85 .
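The exponential family of subtour constraints can be illustrated by brute-force enumeration over all vertex subsets Q. This is only feasible for tiny instances, but it shows what a delayed-generation scheme must separate (the function name is our own):

```python
from itertools import combinations

def violated_subtour_constraints(arcs, n):
    """Yield every vertex set Q (2 <= |Q| < n) whose DFJ constraint
    sum over i, j in Q of x_ij <= |Q| - 1 is violated by the 0/1
    solution given as the arc set `arcs`. Exponential in n."""
    for size in range(2, n):
        for Q in combinations(range(n), size):
            qset = set(Q)
            inside = sum(1 for i, j in arcs if i in qset and j in qset)
            if inside > size - 1:
                yield Q
```

A union of two disjoint 2-cycles on four cities violates the constraint for each 2-city subset it visits, while a genuine tour violates none.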

131.4 Computing a solution

The traditional lines of attack for the NP-hard problems are the following:
• Devising exact algorithms86 , which work reasonably fast only for small problem sizes.
• Devising ”suboptimal” or heuristic algorithms87 , i.e., algorithms that deliver approximated
solutions in a reasonable time.
• Finding special cases for the problem (”subproblems”) for which either better or exact
heuristics are possible.

131.4.1 Exact algorithms

The most direct solution would be to try all permutations88 (ordered combinations) and see
which one is cheapest (using brute-force search89 ). The running time for this approach lies
within a polynomial factor of O(n!), the factorial90 of the number of cities, so this solution
becomes impractical even for only 20 cities.
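The brute-force approach might be sketched as follows, fixing the starting city so that only (n−1)! permutations are tried (the names and the distance-matrix input convention are illustrative):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try all (n-1)! tours that fix city 0 as the start; O(n!) overall.
    `dist` is an n x n matrix of pairwise distances."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour
```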
One of the earliest applications of dynamic programming91 is the Held−Karp algorithm92
that solves the problem in time O(n^2 2^n ).[19] This bound has also been reached by
Exclusion-Inclusion in an attempt preceding the dynamic programming approach.
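A sketch of the Held−Karp recurrence, with dp[S][j] the length of the shortest path that starts at city 0, visits exactly the cities in bitmask S, and ends at city j (the variable names are ours; distances are assumed given as a matrix):

```python
def held_karp(dist):
    """Held-Karp dynamic program: O(n^2 * 2^n) time, O(n * 2^n) space."""
    n = len(dist)
    full = 1 << n
    INF = float("inf")
    dp = [[INF] * n for _ in range(full)]
    dp[1][0] = 0                        # start at city 0; visited set = {0}
    for S in range(full):
        if not S & 1:
            continue                    # every state must contain city 0
        for j in range(n):
            if dp[S][j] == INF:
                continue
            for k in range(n):
                if S & (1 << k):
                    continue            # k already visited
                nS = S | (1 << k)
                cand = dp[S][j] + dist[j][k]
                if cand < dp[nS][k]:
                    dp[nS][k] = cand
    # Close the tour by returning to city 0.
    return min(dp[full - 1][j] + dist[j][0] for j in range(1, n))
```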

85 https://en.wikipedia.org/wiki/Column_generation
86 https://en.wikipedia.org/wiki/Exact_algorithm
87 https://en.wikipedia.org/wiki/Heuristic_(computer_science)
88 https://en.wikipedia.org/wiki/Permutation
89 https://en.wikipedia.org/wiki/Brute-force_search
90 https://en.wikipedia.org/wiki/Factorial
91 https://en.wikipedia.org/wiki/Dynamic_programming
92 https://en.wikipedia.org/wiki/Held%E2%80%93Karp_algorithm

1340
Computing a solution

Figure 312 Solution to a symmetric TSP with 7 cities using brute force search. Note:
Number of permutations: (7-1)!/2 = 360

Improving these time bounds seems to be difficult. For example, it has not been determined
whether an exact algorithm93 for TSP that runs in time O(1.9999^n ) exists.[20]
Other approaches include:
• Various branch-and-bound94 algorithms, which can be used to process TSPs containing
40–60 cities.

Figure 313 Solution of a TSP with 7 cities using a simple Branch and bound algorithm.
Note: The number of permutations is much less than Brute force search

93 https://en.wikipedia.org/wiki/Exact_algorithm
94 https://en.wikipedia.org/wiki/Branch_and_bound

1341
Travelling salesman problem

• Progressive improvement algorithms, which use techniques reminiscent of linear
programming95 ; these work well for up to 200 cities.
• Implementations of branch-and-bound96 and problem-specific cut generation (branch-
and-cut97[21] ); this is the method of choice for solving large instances. This approach
holds the current record, solving an instance with 85,900 cities, see Applegate et al.
(2006)98 .
An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the
cutting-plane method99 proposed by George Dantzig100 , Ray Fulkerson101 , and Selmer M.
Johnson102 in 1954, based on linear programming103 . The computations were performed on
a network of 110 processors located at Rice University104 and Princeton University105 . The
total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor106 .
In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was
solved: a tour of length approximately 72,500 kilometres was found and it was proven
that no shorter tour exists.[22] In March 2005, the travelling salesman problem of visiting
all 33,810 points in a circuit board was solved using Concorde TSP Solver107 : a tour of
length 66,048,945 units was found and it was proven that no shorter tour exists. The
computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an
instance with 85,900 points was solved using Concorde TSP Solver, taking over 136 CPU-
years, see Applegate et al. (2006)108 .

131.4.2 Heuristic and approximation algorithms

Various heuristics109 and approximation algorithms110 , which quickly yield good solutions,
have been devised. These include the Multi-fragment algorithm111 . Modern methods can
find solutions for extremely large problems (millions of cities) within a reasonable time that
are with high probability just 2–3% away from the optimal solution.[11]
Several categories of heuristics are recognized.

95 https://en.wikipedia.org/wiki/Linear_programming
96 https://en.wikipedia.org/wiki/Branch_and_bound
97 https://en.wikipedia.org/wiki/Branch_and_cut
98 #CITEREFApplegateBixbyChv%C3%A1talCook2006
99 https://en.wikipedia.org/wiki/Cutting-plane_method
100 https://en.wikipedia.org/wiki/George_Dantzig
101 https://en.wikipedia.org/wiki/D._R._Fulkerson
102 https://en.wikipedia.org/wiki/Selmer_M._Johnson
103 https://en.wikipedia.org/wiki/Linear_programming
104 https://en.wikipedia.org/wiki/Rice_University
105 https://en.wikipedia.org/wiki/Princeton_University
106 https://en.wikipedia.org/wiki/Alpha_processor
107 https://en.wikipedia.org/wiki/Concorde_TSP_Solver
108 #CITEREFApplegateBixbyChv%C3%A1talCook2006
109 https://en.wikipedia.org/wiki/Heuristic_(computer_science)
110 https://en.wikipedia.org/wiki/Approximation_algorithm
111 https://en.wikipedia.org/wiki/Multi-fragment_algorithm

1342
Computing a solution

Constructive heuristics

Figure 314 Nearest Neighbour algorithm for a TSP with 7 cities. The solution changes
as the starting point is changed

The nearest neighbour (NN) algorithm112 (a greedy algorithm113 ) lets the salesman choose
the nearest unvisited city as his next move. This algorithm quickly yields an effectively short
route. For N cities randomly distributed on a plane, the algorithm on average yields a path
25% longer than the shortest possible path.[23] However, there exist many specially arranged
city distributions which make the NN algorithm give the worst route.[24] This is true for both
asymmetric and symmetric TSPs.[25] Rosenkrantz et al.[26] showed that the NN algorithm
has the approximation factor Θ(log |V |) for instances satisfying the triangle inequality. A
variation of the NN algorithm, called the Nearest Fragment (NF) operator, which connects
a group (fragment) of nearest unvisited cities, can find a shorter route with successive
iterations.[27] The NF operator can also be applied to an initial solution obtained by the
NN algorithm for further improvement in an elitist model, where only better solutions are
accepted.
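The nearest neighbour rule itself is straightforward to sketch (assuming a symmetric distance matrix as input; the function name is illustrative):

```python
def nearest_neighbour(dist, start=0):
    """Greedy NN heuristic: always move to the closest unvisited city,
    then return to the start. `dist` is an n x n distance matrix."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, cur = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[cur][c])  # greedy choice
        unvisited.remove(nxt)
        tour.append(nxt)
        cur = nxt
    length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
    return tour, length
```

As the figure notes, the result depends on the starting city, and on adversarial instances the greedy choice can produce the worst possible tour.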

112 https://en.wikipedia.org/wiki/Nearest_neighbour_algorithm
113 https://en.wikipedia.org/wiki/Greedy_algorithm

1343
Travelling salesman problem

The bitonic tour114 of a set of points is the minimum-perimeter monotone polygon115 that
has the points as its vertices; it can be computed efficiently by dynamic programming116 .
Another constructive heuristic117 , Match Twice and Stitch (MTS), performs two sequential
matchings118 , where the second matching is executed after deleting all the edges of the first
matching, to yield a set of cycles. The cycles are then stitched to produce the final tour.[28]

Christofides algorithm
The Christofides algorithm119 follows a similar outline but combines the minimum spanning
tree with a solution of another problem, minimum-weight perfect matching120 . This gives
a TSP tour which is at most 1.5 times the optimal. The Christofides algorithm was one
of the first approximation algorithms121 , and was in part responsible for drawing atten-
tion to approximation algorithms as a practical approach to intractable problems. As a
matter of fact, the term ”algorithm” was not commonly extended to approximation algo-
rithms until later; the Christofides algorithm was initially referred to as the Christofides
heuristic.[citation needed]
This algorithm takes a different approach, using a result from graph theory to improve on
the lower bound (LB) of the TSP which originated from doubling the cost of the minimum
spanning tree. Given an Eulerian graph123 we can find an Eulerian tour124 in O(n) time.[6]
So if we had an Eulerian graph with cities from a TSP as vertices, then we can easily see
that we could use such a method for finding an Eulerian tour to find a TSP solution. By the
triangular inequality125 we know that the TSP tour can be no longer than the Eulerian tour,
and as such we have a LB for the TSP. Such a method is described below.

114 https://en.wikipedia.org/wiki/Bitonic_tour
115 https://en.wikipedia.org/wiki/Monotone_polygon
116 https://en.wikipedia.org/wiki/Dynamic_programming
117 https://en.wikipedia.org/wiki/Constructive_heuristic
118 https://en.wikipedia.org/wiki/Matching_(graph_theory)
119 https://en.wikipedia.org/wiki/Christofides_algorithm
120 https://en.wikipedia.org/wiki/Perfect_matching
121 https://en.wikipedia.org/wiki/Approximation_algorithm
123 https://en.wikipedia.org/wiki/Eulerian_graph
124 https://en.wikipedia.org/wiki/Eulerian_tour
125 https://en.wikipedia.org/wiki/Triangular_inequality


Figure 315 Using a shortcut heuristic on the graph created by the matching below

1. Find a minimum spanning tree for the problem


2. Create duplicates for every edge to create an Eulerian graph
3. Find an Eulerian tour for this graph
4. Convert to TSP: if a city is visited twice, create a shortcut from the city before this
in the tour to the one after this.
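The four steps above (the double-tree 2-approximation that Christofides' algorithm later refines) can be sketched in pure Python. This is an illustrative sketch for small Euclidean instances, using Prim's algorithm for step 1 and Hierholzer's algorithm for step 3; all names are made up for the example:

```python
import math
from collections import defaultdict

def double_tree_tour(points):
    """2-approximation sketch: MST (Prim), double the edges, Euler tour
    (Hierholzer), then shortcut repeats via the triangle inequality."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])

    # Step 1: minimum spanning tree with Prim's algorithm
    parent = {}
    best = {j: (d(0, j), 0) for j in range(1, n)}
    while best:
        j = min(best, key=lambda k: best[k][0])
        parent[j] = best.pop(j)[1]
        for k in best:
            if d(j, k) < best[k][0]:
                best[k] = (d(j, k), j)

    # Step 2: duplicate every edge, so every vertex has even degree
    adj = defaultdict(list)
    for j, p in parent.items():
        adj[j] += [p, p]
        adj[p] += [j, j]

    # Step 3: Hierholzer's algorithm for an Eulerian tour
    stack, euler = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)
            stack.append(u)
        else:
            euler.append(stack.pop())

    # Step 4: shortcut cities already visited
    seen = set()
    return [v for v in euler if v not in seen and not seen.add(v)]
```

The returned tour visits every city once and, by the doubling argument, is at most twice the weight of the minimum spanning tree.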
To improve the lower bound, a better way of creating an Eulerian graph is needed. By
triangular inequality, the best Eulerian graph must have the same cost as the best travelling
salesman tour, hence finding optimal Eulerian graphs is at least as hard as TSP. One way
of doing this is by minimum weight matching126 using algorithms of O(n3 ).[6]

126 https://en.wikipedia.org/wiki/Matching_(graph_theory)


Figure 316 Creating a matching

Making a graph into an Eulerian graph starts with the minimum spanning tree. Then all
the vertices of odd order must be made even. So a matching for the odd-degree vertices
must be added, which increases the order of every odd-degree vertex by one.[6] This leaves
us with a graph where every vertex is of even order, and the graph is thus Eulerian. Adapting the
above method gives Christofides' algorithm:
1. Find a minimum spanning tree for the problem
2. Create a matching for the problem with the set of cities of odd order.
3. Find an Eulerian tour for this graph
4. Convert to TSP using shortcuts.


Iterative improvement

Figure 317 An example of a 2-opt iteration

Pairwise exchange
The pairwise exchange or 2-opt127 technique involves iteratively removing two edges and
replacing these with two different edges that reconnect the fragments created by edge re-
moval into a new and shorter tour. Similarly, the 3-opt128 technique removes 3 edges and
reconnects them to form a shorter tour. These are special cases of the k-opt method. The
label Lin–Kernighan is an often heard misnomer for 2-opt. Lin–Kernighan is actually the
more general k-opt method.
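The 2-opt move itself is simple to state in code: remove two edges, then reconnect the two fragments the other way round by reversing one of them. A minimal sketch (illustrative names; distances are Euclidean):

```python
import math

def two_opt(points, tour):
    """Repeat improving 2-opt moves (remove two edges, reconnect the two
    fragments the other way) until the tour is 2-optimal."""
    d = lambda a, b: math.dist(points[tour[a]], points[tour[b]])
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j = n-1 when i == 0: those two edges share city tour[0]
            for j in range(i + 2, n - (i == 0)):
                delta = (d(i, j) + d(i + 1, (j + 1) % n)
                         - d(i, i + 1) - d(j, (j + 1) % n))
                if delta < -1e-12:          # strictly shorter reconnection
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

In the Euclidean plane an improving 2-opt move always exists whenever two tour edges cross, which is why the heuristic is good at removing crossings.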
For Euclidean instances, 2-opt heuristics give on average solutions that are about 5% better
than Christofides' algorithm. If we start with an initial solution made with a greedy algo-
rithm129 , the average number of moves greatly decreases again and is O(n). For random
starts, however, the average number of moves is O(n log(n)). Although this is only a small
asymptotic increase, the initial number of moves for small problems is about 10 times as big

127 https://en.wikipedia.org/wiki/2-opt
128 https://en.wikipedia.org/wiki/3-opt
129 https://en.wikipedia.org/wiki/Greedy_algorithm


for a random start compared to one made from a greedy heuristic. This is because such
2-opt heuristics exploit ‘bad' parts of a solution such as crossings. These types of heuristics
are often used within Vehicle routing problem130 heuristics to reoptimize route solutions.[23]

k-opt heuristic, or Lin–Kernighan heuristics


The Lin–Kernighan heuristic is a special case of the V-opt or variable-opt technique. It
involves the following steps:
1. Given a tour, delete k mutually disjoint edges.
2. Reassemble the remaining fragments into a tour, leaving no disjoint subtours (that
is, don't connect a fragment's endpoints together). This in effect simplifies the TSP
under consideration into a much simpler problem.
3. Each fragment endpoint can be connected to 2k − 2 other possibilities: of 2k total
fragment endpoints available, the two endpoints of the fragment under consideration
are disallowed. Such a constrained 2k-city TSP can then be solved with brute force
methods to find the least-cost recombination of the original fragments.
The most popular of the k-opt methods is 3-opt, introduced by Shen Lin of Bell Labs131
in 1965. A special case of 3-opt is where the edges are not disjoint (two of the edges are
adjacent to one another). In practice, it is often possible to achieve substantial improvement
over 2-opt without the combinatorial cost of the general 3-opt by restricting the 3-changes
to this special subset where two of the removed edges are adjacent. This so-called two-
and-a-half-opt typically falls roughly midway between 2-opt and 3-opt, both in terms of the
quality of tours achieved and the time required to achieve those tours.

V-opt heuristic
The variable-opt method is related to, and a generalization of, the k-opt method. Whereas
the k-opt methods remove a fixed number (k) of edges from the original tour, the variable-
opt methods do not fix the size of the edge set to remove. Instead they grow the set as
the search process continues. The best known method in this family is the Lin–Kernighan
method (mentioned above as a misnomer for 2-opt). Shen Lin132 and Brian Kernighan133
first published their method in 1972, and it was the most reliable heuristic for solving
travelling salesman problems for nearly two decades. More advanced variable-opt methods
were developed at Bell Labs in the late 1980s by David Johnson and his research team.
These methods (sometimes called Lin–Kernighan–Johnson134 ) build on the Lin–Kernighan
method, adding ideas from tabu search135 and evolutionary computing136 . The basic Lin–
Kernighan technique gives results that are guaranteed to be at least 3-opt. The Lin–
Kernighan–Johnson methods compute a Lin–Kernighan tour, then perturb it by
what has been described as a mutation that removes at least four edges and reconnects
the tour in a different way, and then V-opt the new tour. The mutation is often enough to

130 https://en.wikipedia.org/wiki/Vehicle_routing_problem
131 https://en.wikipedia.org/wiki/Bell_Labs
132 https://en.wikipedia.org/w/index.php?title=Shen_Lin&action=edit&redlink=1
133 https://en.wikipedia.org/wiki/Brian_Kernighan
https://en.wikipedia.org/w/index.php?title=Lin%E2%80%93Kernighan%E2%80%93Johnson&
134
action=edit&redlink=1
135 https://en.wikipedia.org/wiki/Tabu_search
136 https://en.wikipedia.org/wiki/Evolutionary_computing


move the tour from the local minimum137 identified by Lin–Kernighan. V-opt methods are
widely considered the most powerful heuristics for the problem, and are able to address
special cases, such as the Hamilton Cycle Problem and other non-metric TSPs that other
heuristics fail on. For many years Lin–Kernighan–Johnson had identified optimal solutions
for all TSPs where an optimal solution was known and had identified the best known
solutions for all other TSPs on which the method had been tried.

Randomized improvement

Optimized Markov chain138 algorithms which use local searching heuristic sub-algorithms
can find a route extremely close to the optimal route for 700 to 800 cities.
TSP is a touchstone for many general heuristics devised for combinatorial optimization such
as genetic algorithms139 , simulated annealing140 , tabu search141 , ant colony optimization142 ,
river formation dynamics143 (see swarm intelligence144 ) and the cross entropy method145 .

Ant colony optimization


Main article: Ant colony optimization algorithms146
Artificial intelligence147 researcher Marco Dorigo148 described in 1993 a method of
heuristically generating ”good solutions” to
the TSP using a simulation of an ant colony149 called ACS (ant colony system).[29] It models
behaviour observed in real ants to find short paths between food sources and their nest,
an emergent150 behaviour resulting from each ant's preference to follow trail pheromones151
deposited by other ants.
ACS sends out a large number of virtual ant agents to explore many possible routes on the
map. Each ant probabilistically chooses the next city to visit based on a heuristic combining
the distance to the city and the amount of virtual pheromone deposited on the edge to the
city. The ants explore, depositing pheromone on each edge that they cross, until they have
all completed a tour. At this point the ant which completed the shortest tour deposits
virtual pheromone along its complete tour route (global trail updating). The amount of
pheromone deposited is inversely proportional to the tour length: the shorter the tour, the
more it deposits.
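The loop described above can be sketched as a toy ant-colony heuristic. This is a simplified, generic sketch of the scheme (not Dorigo's exact ACS update rules), with illustrative parameter values:

```python
import math
import random

def aco_tsp(points, n_ants=10, n_iters=30, alpha=1.0, beta=3.0,
            rho=0.1, seed=0):
    """Toy ant-colony loop: each ant builds a tour city by city, picking the
    next city with probability ~ pheromone**alpha * (1/distance)**beta;
    after each iteration the best-so-far tour deposits pheromone 1/length."""
    rng = random.Random(seed)
    n = len(points)
    dist = [[math.dist(p, q) or 1e-12 for q in points] for p in points]
    tau = [[1.0] * n for _ in range(n)]              # pheromone levels
    best_tour, best_len = None, float("inf")

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        for _ant in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = sorted(unvisited)
                w = [tau[i][j] ** alpha / dist[i][j] ** beta for j in cand]
                nxt = rng.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = tour_len(tour)
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                           # evaporation ...
            for j in range(n):
                tau[i][j] *= 1 - rho
        for i in range(n):                           # ... and global update
            a, b = best_tour[i], best_tour[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best_tour, best_len
```

Depositing 1/length makes shorter tours reinforce their edges more strongly, which is the ”inversely proportional” rule described above.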

137 https://en.wikipedia.org/wiki/Local_minimum
138 https://en.wikipedia.org/wiki/Markov_chain
139 https://en.wikipedia.org/wiki/Genetic_algorithm
140 https://en.wikipedia.org/wiki/Simulated_annealing
141 https://en.wikipedia.org/wiki/Tabu_search
142 https://en.wikipedia.org/wiki/Ant_colony_optimization
https://en.wikipedia.org/w/index.php?title=River_formation_dynamics&action=edit&
143
redlink=1
144 https://en.wikipedia.org/wiki/Swarm_intelligence
145 https://en.wikipedia.org/wiki/Cross_entropy_method
146 https://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms
147 https://en.wikipedia.org/wiki/Artificial_intelligence
148 https://en.wikipedia.org/wiki/Marco_Dorigo
149 https://en.wikipedia.org/wiki/Ant_colony_optimization
150 https://en.wikipedia.org/wiki/Emergence
151 https://en.wikipedia.org/wiki/Pheromone#Trail


Figure 318

Figure 319 Ant colony optimization algorithm for a TSP with 7 cities: Red and thick
lines in the pheromone map indicate presence of more pheromone

131.5 Special cases

131.5.1 Metric

In the metric TSP, also known as delta-TSP or Δ-TSP, the intercity distances satisfy the
triangle inequality152 .
A very natural restriction of the TSP is to require that the distances between cities form a
metric153 satisfying the triangle inequality154 ; that is, the direct connection from A to B is
never farther than the route via intermediate C:
dAB ≤ dAC + dCB .

152 https://en.wikipedia.org/wiki/Triangle_inequality
153 https://en.wikipedia.org/wiki/Metric_(mathematics)
154 https://en.wikipedia.org/wiki/Triangle_inequality


The edge spans then build a metric155 on the set of vertices. When the cities are viewed as
points in the plane, many natural distance functions156 are metrics, and so many natural
instances of TSP satisfy this constraint.
The following are some examples of metric TSPs for various metrics.
• In the Euclidean TSP (see below) the distance between two cities is the Euclidean dis-
tance157 between the corresponding points.
• In the rectilinear TSP the distance between two cities is the sum of the absolute values
of the differences of their x- and y-coordinates. This metric is often called the Manhattan
distance158 or city-block metric.
• In the maximum metric159 , the distance between two points is the maximum of the
absolute values of differences of their x- and y-coordinates.
The last two metrics appear, for example, in routing a machine that drills a given set of
holes in a printed circuit board160 . The Manhattan metric corresponds to a machine that
adjusts first one co-ordinate, and then the other, so the time to move to a new point is the
sum of both movements. The maximum metric corresponds to a machine that adjusts both
co-ordinates simultaneously, so the time to move to a new point is the slower of the two
movements.
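The three metrics listed above are each a one-liner; this small sketch (illustrative function names) makes the differences concrete:

```python
def euclidean(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def manhattan(p, q):      # rectilinear / city-block metric
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def maximum(p, q):        # maximum (Chebyshev) metric
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

For the points (0, 0) and (3, 4) these give 5, 7 and 4 respectively, matching the drilling-machine interpretations: the sum of both axis movements for the Manhattan machine, the slower of the two for the simultaneous one.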
In its definition, the TSP does not allow cities to be visited twice, but many applications do
not need this constraint. In such cases, a symmetric, non-metric instance can be reduced to
a metric one. This replaces the original graph with a complete graph in which the inter-city
distance dAB is replaced by the shortest path161 between A and B in the original graph.
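The reduction to a metric instance amounts to computing the metric closure of the cost matrix, e.g. with Floyd–Warshall. A hedged sketch (the function name is illustrative; costs are assumed finite and symmetric):

```python
def metric_closure(d):
    """Replace every entry of a cost matrix with the shortest-path distance
    (Floyd-Warshall). The result satisfies the triangle inequality, and a
    tour in the closure maps back to a walk in the original graph."""
    n = len(d)
    sp = [row[:] for row in d]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if sp[i][k] + sp[k][j] < sp[i][j]:
                    sp[i][j] = sp[i][k] + sp[k][j]
    return sp
```

A tour that uses a shortened entry corresponds, in the original graph, to a walk that passes through the intermediate cities again, which is exactly the repeated visiting the reduction is meant to allow.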

131.5.2 Euclidean

When the input numbers can be arbitrary real numbers, Euclidean TSP is a particular
case of metric TSP, since distances in a plane obey the triangle inequality. When the
input numbers must be integers, comparing lengths of tours involves comparing sums of
square-roots.
Like the general TSP, Euclidean TSP is NP-hard in either case. With rational coordinates
and discretized metric (distances rounded up to an integer), the problem is NP-complete.[30]
With rational coordinates and the actual Euclidean metric, Euclidean TSP is known to
be in the Counting Hierarchy,[31] a subclass of PSPACE. With arbitrary real coordinates,
Euclidean TSP cannot be in such classes, since there are uncountably many possible inputs.
However, Euclidean TSP is probably the easiest version for approximation.[32] For example,
the minimum spanning tree of the graph associated with an instance of the Euclidean TSP
is a Euclidean minimum spanning tree162 , and so can be computed in expected O (n log

155 https://en.wikipedia.org/wiki/Metric_space
156 https://en.wikipedia.org/wiki/Distance_function
157 https://en.wikipedia.org/wiki/Euclidean_distance
158 https://en.wikipedia.org/wiki/Manhattan_distance
159 https://en.wikipedia.org/wiki/Maximum_metric
160 https://en.wikipedia.org/wiki/Printed_circuit_board
161 https://en.wikipedia.org/wiki/Shortest_path
162 https://en.wikipedia.org/wiki/Euclidean_minimum_spanning_tree


n) time for n points (considerably less than the number of edges). This enables the simple
2-approximation algorithm for TSP with triangle inequality above to operate more quickly.
In general, for any c > 0, where d is the number of dimensions in the Euclidean space,
there is a polynomial-time algorithm that finds a tour of length at most (1 + 1/c) times
the optimal for geometric instances of TSP in

O(n (log n)^((O(c√d))^(d−1)))

time; this is called a polynomial-time approximation scheme163 (PTAS).[33] Sanjeev Arora164
and Joseph S. B. Mitchell165 were awarded the Gödel Prize166 in 2010 for their concurrent
discovery of a PTAS for the Euclidean TSP.
In practice, simpler heuristics with weaker guarantees continue to be used.

131.5.3 Asymmetric

In most cases, the distance between two nodes in the TSP network is the same in both
directions. The case where the distance from A to B is not equal to the distance from
B to A is called asymmetric TSP. A practical application of an asymmetric TSP is route
optimization using street-level routing (which is made asymmetric by one-way streets, slip-
roads, motorways, etc.).

Conversion to symmetric

Solving an asymmetric TSP graph can be somewhat complex. The following is a 3×3 matrix
containing all possible path weights between the nodes A, B and C. One option is to turn
an asymmetric matrix of size N into a symmetric matrix of size 2N.[34]

Asymmetric path weights

      A   B   C
A     -   1   2
B     6   -   3
C     5   4   -

To double the size, each of the nodes in the graph is duplicated, creating a second ghost
node, linked to the original node with a ”ghost” edge of very low (possibly negative) weight,
here denoted −w. (Alternatively, the ghost edges have weight 0, and weight w is added to
all other edges.) The original 3×3 matrix shown above is visible in the bottom left and
the transpose of the original in the top-right. Both copies of the matrix have had their

163 https://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme
164 https://en.wikipedia.org/wiki/Sanjeev_Arora
165 https://en.wikipedia.org/wiki/Joseph_S._B._Mitchell
166 https://en.wikipedia.org/wiki/G%C3%B6del_Prize


diagonals replaced by the low-cost hop paths, represented by −w. In the new graph, no
edge directly links original nodes and no edge directly links ghost nodes.
Symmetric path weights

      A    B    C    A′   B′   C′
A     -    -    -    −w   6    5
B     -    -    -    1    −w   4
C     -    -    -    2    3    −w
A′    −w   1    2    -    -    -
B′    6    −w   3    -    -    -
C′    5    4    −w   -    -    -

The weight −w of the ”ghost” edges linking the ghost nodes to the corresponding origi-
nal nodes must be low enough to ensure that all ghost edges must belong to any optimal
symmetric TSP solution on the new graph (w=0 is not always low enough). As a conse-
quence, in the optimal symmetric tour, each original node appears next to its ghost node
(e.g. a possible path is A → A′ → C → C′ → B → B′ → A) and by merging the original and
ghost nodes again we get an (optimal) solution of the original asymmetric problem (in our
example, A → C → B → A).
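The ghost-node construction above is mechanical enough to write down directly. A minimal sketch (illustrative names; missing edges are encoded as infinity, which a symmetric TSP solver would never select):

```python
INF = float("inf")

def symmetrize(d, w=0.0):
    """Build the 2N x 2N symmetric matrix: ghost edges of weight -w on the
    node/ghost diagonal, d in the bottom-left block, its transpose in the
    top-right block, and no edges inside the two remaining blocks."""
    n = len(d)
    s = [[INF] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                s[i][n + j] = d[j][i]      # top-right: transpose of d
                s[n + i][j] = d[i][j]      # bottom-left: d itself
        s[i][n + i] = s[n + i][i] = -w     # ghost edge i <-> i'
    return s
```

Running this on the 3×3 example above reproduces the 6×6 symmetric matrix: for instance the entry for (A, B′) is d(B → A) = 6, and the entry for (A′, B) is d(A → B) = 1.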

131.5.4 Analyst's problem

There is an analogous problem in geometric measure theory167 which asks the following:
under what conditions may a subset E of Euclidean space168 be contained in a rectifiable
curve169 (that is, when is there a curve with finite length that visits every point in E)? This
problem is known as the analyst's travelling salesman problem170 .

131.5.5 Path length for random sets of points in a square

Suppose X1 , . . . , Xn are n independent random variables with uniform distribution in the
square [0, 1]^2 , and let L∗n be the shortest path length (i.e. TSP solution) for this set of
points, according to the usual Euclidean distance171 . It is known[35] that, almost surely,

L∗n /√n → β when n → ∞,

where β is a positive constant that is not known explicitly. Since L∗n ≤ 2√n + 2 (see below),
it follows from the bounded convergence theorem172 that β = lim(n→∞) E[L∗n ]/√n, hence
lower and upper bounds on β follow from bounds on E[L∗n ].

167 https://en.wikipedia.org/wiki/Geometric_measure_theory
168 https://en.wikipedia.org/wiki/Euclidean_space
169 https://en.wikipedia.org/wiki/Rectifiable_curve
170 https://en.wikipedia.org/wiki/Analyst%27s_traveling_salesman_theorem
171 https://en.wikipedia.org/wiki/Euclidean_distance
172 https://en.wikipedia.org/wiki/Bounded_convergence_theorem


The almost sure limit L∗n /√n → β as n → ∞ may not exist if the independent locations
X1 , . . . , Xn are replaced with observations from a stationary ergodic process with uniform
marginals.[36]

Upper bound

• One has L∗n ≤ 2√n + 2, and therefore β ≤ 2, by using a naive path which visits monoton-
ically the points inside each of √n slices of width 1/√n in the square.
• Few[37] proved L∗n ≤ √(2n) + 1.75, hence β ≤ √2, later improved by Karloff (1987):
β ≤ 0.984√2.
• Some study reported[38] an upper bound that β ≤ 0.92 . . ..
• Some study reported[39] an upper bound that β ≤ 0.73 . . ..

Lower bound
• By observing that E[L∗n ] is greater than n times the distance between X0 and the closest
point Xi ̸= X0 , one gets (after a short computation)

E[L∗n ] ≥ (1/2)√n.

• A better lower bound is obtained[35] by observing that E[L∗n ] is greater than (1/2)n times
the sum of the distances between X0 and the closest and second closest points Xi , Xj ̸= X0 ,
which gives

E[L∗n ] ≥ (1/4 + 3/8)√n = (5/8)√n.

• The currently[38] best lower bound is

E[L∗n ] ≥ (5/8 + 19/5184)√n.

• Held and Karp[40] gave a polynomial-time algorithm that provides numerical lower bounds
for L∗n , and thus for β (≃ L∗n /√n), which seem to be good up to more or less 1%.[41] In
particular, David S. Johnson[42] obtained a lower bound by computer experiment:

L∗n ≳ 0.7080√n + 0.522,

where 0.522 comes from the points near the square boundary, which have fewer neighbours,
and Christine L. Valenzuela and Antonia J. Jones[43] obtained the following other numerical
lower bound:

L∗n ≳ 0.7078√n + 0.551.


131.6 Computational complexity

The problem has been shown to be NP-hard173 (more precisely, it is complete for the
complexity class174 FPNP ; see function problem175 ), and the decision problem176 version
(”given the costs and a number x, decide whether there is a round-trip route cheaper than
x”) is NP-complete177 . The bottleneck traveling salesman problem178 is also NP-hard. The
problem remains NP-hard even for the case when the cities are in the plane with Euclidean
distances179 , as well as in a number of other restrictive cases. Removing the condition of
visiting each city ”only once” does not remove the NP-hardness, since it is easily seen that
in the planar case there is an optimal tour that visits each city only once (otherwise, by
the triangle inequality180 , a shortcut that skips a repeated visit would not increase the tour
length).

131.6.1 Complexity of approximation

In the general case, finding a shortest travelling salesman tour is NPO181 -complete.[44] If
the distance measure is a metric182 (and thus symmetric), the problem becomes APX183 -
complete[45] and Christofides’s algorithm184 approximates it within 1.5.[46] The best known
inapproximability bound is 123/122 .[47]
If the distances are restricted to 1 and 2 (but still are a metric) the approximation ratio
becomes 8/7.[48] In the asymmetric case with triangle inequality185 , only logarithmic perfor-
mance guarantees are known, the best current algorithm achieves performance ratio 0.814
log(n);[49] it is an open question if a constant factor approximation exists. The best known
inapproximability bound is 75/74 .[47]
The corresponding maximization problem of finding the longest travelling salesman tour is
approximable within 63/38.[50] If the distance function is symmetric, the longest tour can
be approximated within 4/3 by a deterministic algorithm[51] and within (1/25)(33 + ε) by a
randomized algorithm.[52]

173 https://en.wikipedia.org/wiki/NP-hard
174 https://en.wikipedia.org/wiki/Complexity_class
175 https://en.wikipedia.org/wiki/Function_problem
176 https://en.wikipedia.org/wiki/Decision_problem
177 https://en.wikipedia.org/wiki/NP-complete
178 https://en.wikipedia.org/wiki/Bottleneck_traveling_salesman_problem
179 https://en.wikipedia.org/wiki/Euclidean_distance
180 https://en.wikipedia.org/wiki/Triangle_inequality
181 https://en.wikipedia.org/wiki/Optimization_problem#NP_optimization_problem
182 https://en.wikipedia.org/wiki/Metric_(mathematics)
183 https://en.wikipedia.org/wiki/APX
184 https://en.wikipedia.org/wiki/Christofides_algorithm
185 https://en.wikipedia.org/wiki/Triangle_inequality


131.7 Human and animal performance

The TSP, in particular the Euclidean186 variant of the problem, has attracted the atten-
tion of researchers in cognitive psychology187 . It has been observed that humans are able
to produce near-optimal solutions quickly, in a close-to-linear fashion, with performance
that ranges from 1% less efficient, for graphs with 10–20 nodes, to 11% less efficient for
graphs with 120 nodes.[53][54] The apparent ease with which humans accurately generate
near-optimal solutions to the problem has led researchers to hypothesize that humans use
one or more heuristics, with the two most popular theories arguably being the convex-hull
hypothesis and the crossing-avoidance heuristic.[55][56][57] However, additional evidence sug-
gests that human performance is quite varied, and individual differences as well as graph
geometry appear to affect performance in the task.[58][59][60] Nevertheless, results suggest
that computer performance on the TSP may be improved by understanding and emulating
the methods used by humans for these problems,[61] and have also led to new insights into
the mechanisms of human thought.[62] The first issue of the Journal of Problem Solving was
devoted to the topic of human performance on TSP,[63] and a 2011 review listed dozens of
papers on the subject.[62]
A 2011 study in animal cognition188 entitled “Let the Pigeon Drive the Bus,” named after
the children's book Don't Let the Pigeon Drive the Bus!189 , examined spatial cognition in
pigeons by studying their flight patterns between multiple feeders in a laboratory in relation
to the travelling salesman problem. In the first experiment, pigeons were placed in the corner
of a lab room and allowed to fly to nearby feeders containing peas. The researchers found
that pigeons largely used proximity to determine which feeder they would select next. In the
second experiment, the feeders were arranged in such a way that flying to the nearest feeder
at every opportunity would be largely inefficient if the pigeons needed to visit every feeder.
The results of the second experiment indicate that pigeons, while still favoring proximity-
based solutions, “can plan several steps ahead along the route when the differences in travel
costs between efficient and less efficient routes based on proximity become larger.”[64] These
results are consistent with other experiments done with non-primates, which have proven
that some non-primates were able to plan complex travel routes. This suggests non-primates
may possess a relatively sophisticated spatial cognitive ability.

131.8 Natural computation

When presented with a spatial configuration of food sources, the amoeboid190 Physarum
polycephalum191 adapts its morphology to create an efficient path between the food sources
which can also be viewed as an approximate solution to TSP.[65] This is considered to present
interesting possibilities, and it has been studied in the area of natural computing192 .

186 https://en.wikipedia.org/wiki/Euclidean_distance
187 https://en.wikipedia.org/wiki/Cognitive_psychology
188 https://en.wikipedia.org/wiki/Animal_cognition
189 https://en.wikipedia.org/wiki/Don%27t_Let_the_Pigeon_Drive_the_Bus!
190 https://en.wikipedia.org/wiki/Amoeba
191 https://en.wikipedia.org/wiki/Physarum_polycephalum
192 https://en.wikipedia.org/wiki/Natural_computing


131.9 Benchmarks

For benchmarking TSP algorithms, TSPLIB193 , a library of sample instances of the
TSP and related problems, is maintained; see the TSPLIB external reference. Many of them
are lists of actual cities and layouts of actual printed circuits194 .

131.10 Popular culture


• Travelling Salesman195 , by director Timothy Lanzone, is the story of four mathematicians
hired by the U.S. government to solve the most elusive problem in computer-science
history: P vs. NP196 .[66]

131.11 See also


• Canadian traveller problem197
• Exact algorithm198
• Route inspection problem199 (also known as ”Chinese postman problem”)
• Set TSP problem200
• Seven Bridges of Königsberg201
• Steiner travelling salesman problem202
• Subway Challenge203
• Tube Challenge204
• Vehicle routing problem205
• Graph exploration206

131.12 Notes
1. Google scholar search for ”Traveling Salesperson Problem”207 returns thousands of re-
sults of articles spanning twenty years. Retrieved November 23, 2019.
2. See the TSP world tour problem which has already been solved to within 0.05% of
the optimal solution. [1]208

193 http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/
194 https://en.wikipedia.org/wiki/Printed_circuit_board
195 https://en.wikipedia.org/wiki/Travelling_Salesman_(2012_film)
196 https://en.wikipedia.org/wiki/P_vs._NP
197 https://en.wikipedia.org/wiki/Canadian_traveller_problem
198 https://en.wikipedia.org/wiki/Exact_algorithm
199 https://en.wikipedia.org/wiki/Route_inspection_problem
200 https://en.wikipedia.org/wiki/Set_TSP_problem
201 https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg
202 https://en.wikipedia.org/wiki/Steiner_travelling_salesman_problem
203 https://en.wikipedia.org/wiki/Subway_Challenge
204 https://en.wikipedia.org/wiki/Tube_Challenge
205 https://en.wikipedia.org/wiki/Vehicle_routing_problem
206 https://en.wikipedia.org/wiki/Graph_traversal#Graph_exploration
207 https://scholar.google.com/scholar?q=%22traveling+salesperson+problem%22
208 http://www.math.uwaterloo.ca/tsp/world/


3. ”Der Handlungsreisende – wie er sein soll und was er zu tun hat, um Aufträge zu
erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu sein – von einem
alten Commis-Voyageur”209 (The travelling salesman — how he must be and what he
should do in order to get commissions and be sure of the happy success in his business
— by an old commis-voyageur)
4. A discussion of the early work of Hamilton and Kirkman can be found in Graph
Theory 1736–1936
5. Cited and English translation in Schrijver (2005)210 . Original German: ”Wir bezeichnen als
Botenproblem (weil diese Frage in der Praxis von jedem Postboten, übrigens auch von
vielen Reisenden zu lösen ist) die Aufgabe, für endlich viele Punkte, deren paarweise
Abstände bekannt sind, den kürzesten die Punkte verbindenden Weg zu finden. Dieses
Problem ist natürlich stets durch endlich viele Versuche lösbar. Regeln, welche die An-
zahl der Versuche unter die Anzahl der Permutationen der gegebenen Punkte herun-
terdrücken würden, sind nicht bekannt. Die Regel, man solle vom Ausgangspunkt
erst zum nächstgelegenen Punkt, dann zu dem diesem nächstgelegenen Punkt gehen
usw., liefert im allgemeinen nicht den kürzesten Weg.”
6. L, E. L. (1985). The Travelling Salesman Problem: A Guided Tour of Com-
binatorial Optimization212 (R.  . .). J W & .
ISBN213 978-0471904137214 .
7. R, J (5 D 1949). ”O  H  ( -
  )”215 (RM-303). S M, CA: T R C-
. R 2 M 2020. Cite journal requires |journal= (help216 )
8. A detailed treatment of the connection between Menger and Whitney as well as the
growth in the study of TSP can be found in Alexander Schrijver217 's 2005 paper ”On
the history of combinatorial optimization (till 1960)”, in Handbook of Discrete Optimiza-
tion (K. Aardal218 , G.L. Nemhauser219 , R. Weismantel, eds.), Elsevier, Amsterdam,
2005, pp. 1–68. PS220 , PDF221
9. B, J; H, J. H.; H, J. M. (O
1959). ”T     ”222 . Mathemati-

209 https://zs.thulb.uni-jena.de/receive/jportal_jparticle_00248075
210 #CITEREFSchrijver2005
211 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
212 https://books.google.com/books?id=qbFlMwEACAAJ
213 https://en.wikipedia.org/wiki/ISBN_(identifier)
214 https://en.wikipedia.org/wiki/Special:BookSources/978-0471904137
215 http://www.dtic.mil/get-tr-doc/pdf?AD=AD0204961
216 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
217 https://en.wikipedia.org/wiki/Alexander_Schrijver
218 https://en.wikipedia.org/wiki/Karen_Aardal
219 https://en.wikipedia.org/wiki/George_Nemhauser
220 http://homepages.cwi.nl/~lex/files/histco.ps
221 http://homepages.cwi.nl/~lex/files/histco.pdf
https://www.cambridge.org/core/product/identifier/S0305004100034095/type/journal_
222
article


cal Proceedings of the Cambridge Philosophical Society. 55 (4): 299–327.
doi223 :10.1017/S0305004100034095224 . ISSN225 0305-0041226 .

10. K, E (30 J 2013). ”C S F N
S  I T S P”227 . WIRED. Retrieved
14 June 2015.
11. R, C; G, D; G, F; O, C (2011),
”T   :  , -
   ”, European Journal of Operational Research, 211 (3):
427–441, doi228 :10.1016/j.ejor.2010.09.010229 , MR230 2774420231 .
12. ”How Do You Fix School Bus Routes? Call MIT” in Wall Street Journal232 (PDF).
13. B, A; M, M (2002), ”N E T-
   G T S P  T
S P”, Proceedings of the 15th International Conference of Systems
Engineering (Las Vegas)
14. P, C.H.; S, K. (1998), Combinatorial optimization: algo-
rithms and complexity, Mineola, NY: Dover, pp.308-309.
15. Tucker, A. W. (1960), ”On Directed Graphs and Integer Programs”, IBM Mathemat-
ical research Project (Princeton University)
16. Dantzig, George B. (1963), Linear Programming and Extensions, Princeton, NJ:
Princeton UP, pp. 545–7, ISBN233 0-691-08000-3234 , sixth printing, 1974.
17. V, M (2017). ”S     DFJ
     MTZ    A T-
 S P”. Operations Research Letters. 45 (4): 323–324.
arXiv235 :1805.06997236 . doi237 :10.1016/j.orl.2017.04.010238 .
18. BŞ, T; G, L (2014). ”R   M–T–
Z   ?”. European Journal of Operational
Research. 236 (3): 820–832. doi239 :10.1016/j.ejor.2013.07.038240 .
19. Bellman (1960)241 , Bellman (1962)242 , Held & Karp (1962)243

223 https://en.wikipedia.org/wiki/Doi_(identifier)
224 https://doi.org/10.1017%2FS0305004100034095
225 https://en.wikipedia.org/wiki/ISSN_(identifier)
226 http://www.worldcat.org/issn/0305-0041
227 https://www.wired.com/2013/01/traveling-salesman-problem/
228 https://en.wikipedia.org/wiki/Doi_(identifier)
229 https://doi.org/10.1016%2Fj.ejor.2010.09.010
230 https://en.wikipedia.org/wiki/MR_(identifier)
231 http://www.ams.org/mathscinet-getitem?mr=2774420
232 http://online.WSJ.com/public/resources/documents/print/WSJ_-A002-20170812.pdf
233 https://en.wikipedia.org/wiki/ISBN_(identifier)
234 https://en.wikipedia.org/wiki/Special:BookSources/0-691-08000-3
235 https://en.wikipedia.org/wiki/ArXiv_(identifier)
236 http://arxiv.org/abs/1805.06997
237 https://en.wikipedia.org/wiki/Doi_(identifier)
238 https://doi.org/10.1016%2Fj.orl.2017.04.010
239 https://en.wikipedia.org/wiki/Doi_(identifier)
240 https://doi.org/10.1016%2Fj.ejor.2013.07.038
241 #CITEREFBellman1960
242 #CITEREFBellman1962
243 #CITEREFHeldKarp1962

1359
Travelling salesman problem

20. Woeginger (2003)244


21. Padberg & Rinaldi (1991)245
22. Work by David Applegate, AT&T Labs – Research, Robert Bixby, ILOG246 and Rice
University, Vašek Chvátal, Concordia University, William Cook, University of Water-
loo, and Keld Helsgaun, Roskilde University is discussed on their project web page
hosted by the University of Waterloo and last updated in June 2004, here [2]247
23. J, D. S.248 ; MG, L. A. (1997). ”T T S P-
: A C S  L O”249 (PDF). I A, E. H. L.;
L, J. K.250 (.). Local Search in Combinatorial Optimisation. London:
John Wiley and Sons Ltd. pp. 215–310.
24. G, G; Y, A; Z, A (15 M 2002).
”T     :   
-    TSP”. Discrete Applied Mathematics. 117 (1–3):
81–86. doi251 :10.1016/S0166-218X(01)00195-0252 .>
25. Z, A; Z, W; Y, A; MG, L
A.; G, G; J, D S. (2007), ”E A
 H   ATSP”, The Traveling Salesman Problem and Its Vari-
ations, Combinatorial Optimization, Springer, Boston, MA, pp. 445–487, Cite-
SeerX253 10.1.1.24.2386254 , doi255 :10.1007/0-306-48213-4_10256 , ISBN257 978-0-387-
44459-8258
26. R, D. J.; S, R. E.; L, P. M. (14–16 O 1974). Ap-
proximate algorithms for the traveling salesperson problem. 15th Annual Symposium
on Switching and Automata Theory (swat 1974). doi259 :10.1109/SWAT.1974.4260 .
27. R, S. S.; B, S.; P, S. K. (2007). ”G O-
  C O  TSP  M G O-
”. Applied Intelligence. 26 (3): 183–195. CiteSeerX261 10.1.1.151.132262 .
doi263 :10.1007/s10489-006-0018-y264 .

244 #CITEREFWoeginger2003
245 #CITEREFPadbergRinaldi1991
246 https://en.wikipedia.org/wiki/ILOG
247 http://www.math.uwaterloo.ca/tsp/sweden/
248 https://en.wikipedia.org/wiki/David_S._Johnson
249 https://www.cs.ubc.ca/~hutter/previous-earg/EmpAlgReadingGroup/TSP-JohMcg97.pdf
250 https://en.wikipedia.org/wiki/Jan_Karel_Lenstra
251 https://en.wikipedia.org/wiki/Doi_(identifier)
252 https://doi.org/10.1016%2FS0166-218X%2801%2900195-0
253 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
254 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.2386
255 https://en.wikipedia.org/wiki/Doi_(identifier)
256 https://doi.org/10.1007%2F0-306-48213-4_10
257 https://en.wikipedia.org/wiki/ISBN_(identifier)
258 https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-44459-8
259 https://en.wikipedia.org/wiki/Doi_(identifier)
260 https://doi.org/10.1109%2FSWAT.1974.4
261 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
262 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.151.132
263 https://en.wikipedia.org/wiki/Doi_(identifier)
264 https://doi.org/10.1007%2Fs10489-006-0018-y


28. K, A. B.; R, S. (2004). ”M T  S: A N TSP
T C H”. Operations Research Letters. 32 (6): 499–509.
doi265 :10.1016/j.orl.2004.04.001266 .
29. Marco Dorigo. ”Ant Colonies for the Traveling Salesman Problem”. IRIDIA, Université
Libre de Bruxelles. IEEE Transactions on Evolutionary Computation, 1(1):53−66.
1997. 267
30. Papadimitriou (1977)268 .
31. Allender et al. (2007)269
32. Larson & Odoni (1981)270
33. Arora (1998)271 .
34. J, R; V, T (1983). ”T  
   ”. Operations Research Letters273 .
2 (161–163): 1983. doi274 :10.1016/0167-6377(83)90048-2275 .
35. Beardwood, Halton & Hammersley (1959)276
36. A, A; S, J. M278 (2016), ”B–H–
H     :  -
”, The Annals of Applied Probability, 26 (4): 2141–2168, arXiv279 :1307.0221280 ,
doi281 :10.1214/15-AAP1142282
37. F, L. (1955). ”T        
”. Mathematika. 2 (2): 141–144. doi283 :10.1112/s0025579300000784284 .
38. S, S. (2015). ”N      -
”. Advances in Applied Probability. 47 (1). arXiv285 :1311.6338286 . Bib-
code287 :2013arXiv1311.6338S288 .

265 https://en.wikipedia.org/wiki/Doi_(identifier)
266 https://doi.org/10.1016%2Fj.orl.2004.04.001
267 http://citeseer.ist.psu.edu/86357.html
268 #CITEREFPapadimitriou1977
269 #CITEREFAllenderB%C3%BCrgisserKjeldgaard-PedersenMitersen2007
270 #CITEREFLarsonOdoni1981
271 #CITEREFArora1998
272 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
273 https://en.wikipedia.org/wiki/Operations_Research_Letters
274 https://en.wikipedia.org/wiki/Doi_(identifier)
275 https://doi.org/10.1016%2F0167-6377%2883%2990048-2
276 #CITEREFBeardwoodHaltonHammersley1959
277 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
278 https://en.wikipedia.org/wiki/J._Michael_Steele
279 https://en.wikipedia.org/wiki/ArXiv_(identifier)
280 http://arxiv.org/abs/1307.0221
281 https://en.wikipedia.org/wiki/Doi_(identifier)
282 https://doi.org/10.1214%2F15-AAP1142
283 https://en.wikipedia.org/wiki/Doi_(identifier)
284 https://doi.org/10.1112%2Fs0025579300000784
285 https://en.wikipedia.org/wiki/ArXiv_(identifier)
286 http://arxiv.org/abs/1311.6338
287 https://en.wikipedia.org/wiki/Bibcode_(identifier)
288 https://ui.adsabs.harvard.edu/abs/2013arXiv1311.6338S


39. F, C.-N. (1994). ”A      
  ”. Disc. Applied Math. 51 (3): 243–267.
doi289 :10.1016/0166-218X(92)00033-I290 .
40. H, M.; K, R.M. (1970). ”T T S P
 M S T”. Operations Research. 18 (6): 1138–1162.
doi291 :10.1287/opre.18.6.1138292 .
41. G, M.; B, D. (1991). ”P    H
 K     E   ”.
Mathematics of Operations Research. 16 (1): 72–89. doi293 :10.1287/moor.16.1.72294 .
42. ””295 . about.att.com.
43. Christine L. Valenzuela and Antonia J. Jones296 Archived297 25 October 2007 at the
Wayback Machine298
44. Orponen (1987)299
45. Papadimitriou (1983)301
46. Christofides (1976)303
47. Karpinski, Lampis & Schmied (2015)304
48. Berman & Karpinski (2006)305 .
49. Kaplan (2004)306
50. Kosaraju (1994)308
51. Serdyukov (1984)310
52. Hassin (2000)311

289 https://en.wikipedia.org/wiki/Doi_(identifier)
290 https://doi.org/10.1016%2F0166-218X%2892%2900033-I
291 https://en.wikipedia.org/wiki/Doi_(identifier)
292 https://doi.org/10.1287%2Fopre.18.6.1138
293 https://en.wikipedia.org/wiki/Doi_(identifier)
294 https://doi.org/10.1287%2Fmoor.16.1.72
295 https://about.att.com/error.html
296 http://users.cs.cf.ac.uk/Antonia.J.Jones/Papers/EJORHeldKarp/HeldKarp.pdf
https://web.archive.org/web/20071025205411/http://users.cs.cf.ac.uk/Antonia.J.Jones/
297
Papers/EJORHeldKarp/HeldKarp.pdf
298 https://en.wikipedia.org/wiki/Wayback_Machine
299 #CITEREFOrponen1987
300 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
301 #CITEREFPapadimitriou1983
302 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
303 #CITEREFChristofides1976
304 #CITEREFKarpinskiLampisSchmied2015
305 #CITEREFBermanKarpinski2006
306 #CITEREFKaplan2004
307 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
308 #CITEREFKosaraju1994
309 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
310 #CITEREFSerdyukov1984
311 #CITEREFHassin2000
312 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors


53. M, J. N.; O, T. (J 1996), ”H   


  ”, Perception & Psychophysics, 58 (4): 527–539,
doi313 :10.3758/BF03213088314 , PMID315 8934685316 .
54. D, M; L, M D.; V, D; H, P (2006).
”H P  V P T S P-
  V N  N”. The Journal of Problem Solving. 1 (1).
CiteSeerX317 10.1.1.360.9763318 . doi319 :10.7771/1932-6246.1004320 . ISSN321 1932-
6246322 .
55. R, I V; S, U; S, A (1 M 2003). ”C-
       E  
: I    ”. Memory & Cogni-
tion. 31 (2): 215–220. CiteSeerX323 10.1.1.12.6117324 . doi325 :10.3758/bf03194380326 .
ISSN327 0090-502X328 . PMID329 12749463330 .
56. MG, J N.; C, Y (2011). ”H P  
T S  R P: A R”. The Journal of Prob-
lem Solving. 3 (2). doi331 :10.7771/1932-6246.1090332 . ISSN333 1932-6246334 .
57. MG, J N.; C, E P.; O, T C. (1
M 2004). ”C    ? S  
   ”. Memory & Cognition. 32 (2): 260–270.
doi335 :10.3758/bf03196857336 . ISSN337 0090-502X338 . PMID339 15190718340 .
58. V, D; M, T; H, M; L, M D;
H, P (2004). ”I     -
        ”.

313 https://en.wikipedia.org/wiki/Doi_(identifier)
314 https://doi.org/10.3758%2FBF03213088
315 https://en.wikipedia.org/wiki/PMID_(identifier)
316 http://pubmed.ncbi.nlm.nih.gov/8934685
317 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
318 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.360.9763
319 https://en.wikipedia.org/wiki/Doi_(identifier)
320 https://doi.org/10.7771%2F1932-6246.1004
321 https://en.wikipedia.org/wiki/ISSN_(identifier)
322 http://www.worldcat.org/issn/1932-6246
323 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
324 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.12.6117
325 https://en.wikipedia.org/wiki/Doi_(identifier)
326 https://doi.org/10.3758%2Fbf03194380
327 https://en.wikipedia.org/wiki/ISSN_(identifier)
328 http://www.worldcat.org/issn/0090-502X
329 https://en.wikipedia.org/wiki/PMID_(identifier)
330 http://pubmed.ncbi.nlm.nih.gov/12749463
331 https://en.wikipedia.org/wiki/Doi_(identifier)
332 https://doi.org/10.7771%2F1932-6246.1090
333 https://en.wikipedia.org/wiki/ISSN_(identifier)
334 http://www.worldcat.org/issn/1932-6246
335 https://en.wikipedia.org/wiki/Doi_(identifier)
336 https://doi.org/10.3758%2Fbf03196857
337 https://en.wikipedia.org/wiki/ISSN_(identifier)
338 http://www.worldcat.org/issn/0090-502X
339 https://en.wikipedia.org/wiki/PMID_(identifier)
340 http://pubmed.ncbi.nlm.nih.gov/15190718


Personality and Individual Differences. 36 (5): 1059–1071. doi341:10.1016/s0191-
8869(03)00200-9342.
59. K, M; G, S R.; F, E (12 J 2017).
”A -    
 E   ”. Psychological Research.
82 (5): 997–1009. doi343 :10.1007/s00426-017-0881-7344 . ISSN345 0340-0727346 .
PMID347 28608230348 .
60. K, M; B, G; G, S; V,
V-A (11 J 2017). ”S    -
       E  -
 ”349 . Heliyon. 3 (11): e00461. doi350 :10.1016/j.heliyon.2017.e00461351 .
PMC352 5727545353 . PMID354 29264418355 .
61. K, M; G, S R.; F, E; D, S-
 U (D 2018). ”H    E T-
 S P: C   -
   ”. Cognitive Systems Research. 52: 387–399.
356
doi :10.1016/j.cogsys.2018.07.027 . 357

62. MG, J N.; C, Y (2011), ”H    -
    : A ”358 , Journal of Problem Solv-
ing, 3 (2), doi359 :10.7771/1932-6246.1090360 .
63. Journal of Problem Solving 1(1)361 , 2006, retrieved 2014-06-06.
64. G, B; W, M; K, D (1 M 2012). ”L
    :        ”. Ani-

341 https://en.wikipedia.org/wiki/Doi_(identifier)
342 https://doi.org/10.1016%2Fs0191-8869%2803%2900200-9
343 https://en.wikipedia.org/wiki/Doi_(identifier)
344 https://doi.org/10.1007%2Fs00426-017-0881-7
345 https://en.wikipedia.org/wiki/ISSN_(identifier)
346 http://www.worldcat.org/issn/0340-0727
347 https://en.wikipedia.org/wiki/PMID_(identifier)
348 http://pubmed.ncbi.nlm.nih.gov/28608230
349 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5727545
350 https://en.wikipedia.org/wiki/Doi_(identifier)
351 https://doi.org/10.1016%2Fj.heliyon.2017.e00461
352 https://en.wikipedia.org/wiki/PMC_(identifier)
353 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5727545
354 https://en.wikipedia.org/wiki/PMID_(identifier)
355 http://pubmed.ncbi.nlm.nih.gov/29264418
356 https://en.wikipedia.org/wiki/Doi_(identifier)
357 https://doi.org/10.1016%2Fj.cogsys.2018.07.027
358 https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1090&context=jps
359 https://en.wikipedia.org/wiki/Doi_(identifier)
360 https://doi.org/10.7771%2F1932-6246.1090
361 https://docs.lib.purdue.edu/jps/vol1/iss1/

1364
References

mal Cognition. 15 (3): 379–391. doi362 :10.1007/s10071-011-0463-9363 . ISSN364 1435-


9456365 . PMID366 21965161367 .
65. J, J; A, A (2014), ”C   
     ”368 (PDF), Natural Computing: 2, 13,
arXiv369 :1303.4969370 , Bibcode371 :2013arXiv1303.4969J372
66. G, D (26 A 2012). ”'T S'  
   P  NP”373 . Wired UK. Retrieved 26 April 2012.

131.13 References
• A, D. L.; B, R. M.; C, V.; C, W. J.374 (2006), The Traveling
Salesman Problem, ISBN375 978-0-691-12993-8376 .
• A, E; B, P; K-P, J;
M, P B (2007), ”O  C  N A-
”377 (PDF), SIAM J. Comput.378 , 38 (5): 1987–2006, CiteSeerX379 10.1.1.167.5495380 ,
doi381 :10.1137/070697926382 .
• A, S383 (1998), ”P     E-
      ”, Journal of the
ACM384 , 45 (5): 753–782, doi385 :10.1145/290179.290180386 , MR387 1668147388 .

362 https://en.wikipedia.org/wiki/Doi_(identifier)
363 https://doi.org/10.1007%2Fs10071-011-0463-9
364 https://en.wikipedia.org/wiki/ISSN_(identifier)
365 http://www.worldcat.org/issn/1435-9456
366 https://en.wikipedia.org/wiki/PMID_(identifier)
367 http://pubmed.ncbi.nlm.nih.gov/21965161
http://www.phychip.eu/wp-content/uploads/2013/03/Computation-of-the-travelling-
368
salesman-problem-by-a-shrinking-blob.pdf
369 https://en.wikipedia.org/wiki/ArXiv_(identifier)
370 http://arxiv.org/abs/1303.4969
371 https://en.wikipedia.org/wiki/Bibcode_(identifier)
372 https://ui.adsabs.harvard.edu/abs/2013arXiv1303.4969J
373 https://www.wired.co.uk/news/archive/2012-04/26/travelling-salesman
374 https://en.wikipedia.org/wiki/William_J._Cook
375 https://en.wikipedia.org/wiki/ISBN_(identifier)
376 https://en.wikipedia.org/wiki/Special:BookSources/978-0-691-12993-8
377 https://www3.math.tu-berlin.de/algebra/work/focs7.pdf
378 https://en.wikipedia.org/wiki/SIAM_J._Comput.
379 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
380 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.167.5495
381 https://en.wikipedia.org/wiki/Doi_(identifier)
382 https://doi.org/10.1137%2F070697926
383 https://en.wikipedia.org/wiki/Sanjeev_Arora
384 https://en.wikipedia.org/wiki/Journal_of_the_ACM
385 https://en.wikipedia.org/wiki/Doi_(identifier)
386 https://doi.org/10.1145%2F290179.290180
387 https://en.wikipedia.org/wiki/MR_(identifier)
388 http://www.ams.org/mathscinet-getitem?mr=1668147


• B, J.; H, J.H.; H, J.M. (1959), ”T S P
T M P”, Proceedings of the Cambridge Philosophical Society, 55 (4):
299–327, Bibcode389 :1959PCPS...55..299B390 , doi391 :10.1017/s0305004100034095392 .
• B, R. (1960), ”C P  D P”,
 B, R.; H, M. J. (.), Combinatorial Analysis, Proceedings of Symposia
in Applied Mathematics 10, American Mathematical Society, pp. 217–249.
• B, R. (1962), ”D P T   T-
 S P”, J. Assoc. Comput. Mach., 9: 61–63,
393
doi :10.1145/321105.321111 . 394

• B, P; K, M395 (2006), ”8/7- -


  (1,2)-TSP”, Proc. 17th ACM-SIAM Symposium on Dis-
crete Algorithms (SODA '06) , . 641–648, CSX397 10.1.1.430.2224398 ,
396

399 :10.1145/1109557.1109627400 , ISBN401 978-0898716054402 , ECCC403 TR05-069404 .


• C, N. (1976), Worst-case analysis of a new heuristic for the travelling
salesman problem, Technical Report 388, Graduate School of Industrial Administration,
Carnegie-Mellon University, Pittsburgh.
• H, R.; R, S. (2000), ”B    TSP”,
Information Processing Letters, 75 (4): 181–186, CiteSeerX405 10.1.1.35.7209406 ,
doi407 :10.1016/S0020-0190(00)00097-1408 .
• H, M.409 ; K, R. M.410 (1962), ”A D P A 
S P”, Journal of the Society for Industrial and Applied Mathematics,
10 (1): 196–210, doi411 :10.1137/0110015412 , hdl413 :10338.dmlcz/103900414 .
• K, H.; L, L.; S, N.; S, M. (2004), ”A-
 A  A TSP  D D R
M”, In Proc. 44th IEEE Symp. on Foundations of Comput. Sci, pp. 56–65.

389 https://en.wikipedia.org/wiki/Bibcode_(identifier)
390 https://ui.adsabs.harvard.edu/abs/1959PCPS...55..299B
391 https://en.wikipedia.org/wiki/Doi_(identifier)
392 https://doi.org/10.1017%2Fs0305004100034095
393 https://en.wikipedia.org/wiki/Doi_(identifier)
394 https://doi.org/10.1145%2F321105.321111
395 https://en.wikipedia.org/wiki/Marek_Karpinski
396 http://eccc.hpi-web.de/report/2005/069/revision/2/download/
397 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
398 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.430.2224
399 https://en.wikipedia.org/wiki/Doi_(identifier)
400 https://doi.org/10.1145%2F1109557.1109627
401 https://en.wikipedia.org/wiki/ISBN_(identifier)
402 https://en.wikipedia.org/wiki/Special:BookSources/978-0898716054
403 https://en.wikipedia.org/wiki/Electronic_Colloquium_on_Computational_Complexity
404 https://eccc.weizmann.ac.il/report/2005/069/
405 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
406 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.7209
407 https://en.wikipedia.org/wiki/Doi_(identifier)
408 https://doi.org/10.1016%2FS0020-0190%2800%2900097-1
409 https://en.wikipedia.org/w/index.php?title=Michael_Held&action=edit&redlink=1
410 https://en.wikipedia.org/wiki/Richard_Karp
411 https://en.wikipedia.org/wiki/Doi_(identifier)
412 https://doi.org/10.1137%2F0110015
413 https://en.wikipedia.org/wiki/Hdl_(identifier)
414 http://hdl.handle.net/10338.dmlcz%2F103900


• K, M.; L, M.; S, R. (2015), ”N I


  TSP”, Journal of Computer and System Sciences, 81 (8): 1665–1677,
arXiv415 :1303.6437416 , doi417 :10.1016/j.jcss.2015.06.003418
• K, S. R.; P, J. K.; S, C. (1994), ”L    -
'”, Proc. 35th Ann. IEEE Symp. on Foundations of Comput. Sci, IEEE
Computer Society, pp. 166–177.
• O, P.; M, H.419 (1987), ”O   -
: C    '”, Technical Report C-1987–28,
Department of Computer Science, University of Helsinki.
• L, R C.; O, A R. (1981), ”6.4.7: A  N-
 M § R P §§ E TSP”420 , Urban Operations Re-
search, Prentice-Hall, ISBN421 9780139394478422 , OCLC423 6331426424 .
• P, M.; R, G. (1991), ”A B--C A   R-
  L-S S T S P”425 , SIAM
Review, 33: 60–100, doi426 :10.1137/1033004427 .
• P, C H.428 (1977), ”T E  -
   NP-”, Theoretical Computer Science, 4 (3): 237–244,
doi429 :10.1016/0304-3975(77)90012-3430 , MR431 0455550432 .
• P, C. H.; Y, M. (1993), ”T  
     ”, Math. Oper. Res., 18: 1–11,
doi433 :10.1287/moor.18.1.1434 .
• S, A. I. (1984), ”A       
    '”, Upravlyaemye Sistemy, 25: 80–86.
• S, S (2015), ”N B   T S-
 C”, Advances in Applied Probability, 47: 27–36, arXiv435 :1311.6338436 ,
doi437 :10.1239/aap/1427814579438 .

415 https://en.wikipedia.org/wiki/ArXiv_(identifier)
416 http://arxiv.org/abs/1303.6437
417 https://en.wikipedia.org/wiki/Doi_(identifier)
418 https://doi.org/10.1016%2Fj.jcss.2015.06.003
419 https://en.wikipedia.org/wiki/Heikki_Mannila
420 http://web.mit.edu/urban_or_book/www/book/chapter6/6.4.7.html
421 https://en.wikipedia.org/wiki/ISBN_(identifier)
422 https://en.wikipedia.org/wiki/Special:BookSources/9780139394478
423 https://en.wikipedia.org/wiki/OCLC_(identifier)
424 http://www.worldcat.org/oclc/6331426
425 https://semanticscholar.org/paper/966580e91b06f55605869b7155784c2fc0c444bb
426 https://en.wikipedia.org/wiki/Doi_(identifier)
427 https://doi.org/10.1137%2F1033004
428 https://en.wikipedia.org/wiki/Christos_Papadimitriou
429 https://en.wikipedia.org/wiki/Doi_(identifier)
430 https://doi.org/10.1016%2F0304-3975%2877%2990012-3
431 https://en.wikipedia.org/wiki/MR_(identifier)
432 http://www.ams.org/mathscinet-getitem?mr=0455550
433 https://en.wikipedia.org/wiki/Doi_(identifier)
434 https://doi.org/10.1287%2Fmoor.18.1.1
435 https://en.wikipedia.org/wiki/ArXiv_(identifier)
436 http://arxiv.org/abs/1311.6338
437 https://en.wikipedia.org/wiki/Doi_(identifier)
438 https://doi.org/10.1239%2Faap%2F1427814579


• W, G.J.439 (2003), ”E A  NP-H P: A S-


”, Combinatorial Optimization – Eureka, You Shrink! Lecture notes in computer
science, vol. 2570, Springer, pp. 185–207.

131.14 Further reading


• A, L440 (1994), ”M C  S-
 T C P”441 (PDF), Science, 266 (5187):
1021–4, Bibcode442 :1994Sci...266.1021A443 , CiteSeerX444 10.1.1.54.2565445 ,
doi446 :10.1126/science.7973651447 , PMID448 7973651449 , archived from the original450
(PDF) on 6 February 2005
• A, S.451 (1998), ”P     E
     ”452 (PDF), Journal of the
ACM, 45 (5): 753–782, doi453 :10.1145/290179.290180454
• B, G; D, S; L, G (2005), ”I-
   O- H   S T S
P”, The Journal of the Operational Research Society, Cahiers du GERAD,
Montreal: Group for Research in Decision Analysis, G-2005-02 (3): 402–407, Cite-
SeerX455 10.1.1.89.9953456 , JSTOR457 4622707458
• C, W459 (2012). In Pursuit of the Traveling Salesman: Mathematics at the
Limits of Computation460 . P U P. ISBN461 9780691152707462 .

439 https://en.wikipedia.org/wiki/Gerhard_J._Woeginger
440 https://en.wikipedia.org/wiki/Leonard_Adleman
https://web.archive.org/web/20050206144827/http://www.usc.edu/dept/molecular-
441
science/papers/fp-sci94.pdf
442 https://en.wikipedia.org/wiki/Bibcode_(identifier)
443 https://ui.adsabs.harvard.edu/abs/1994Sci...266.1021A
444 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
445 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.2565
446 https://en.wikipedia.org/wiki/Doi_(identifier)
447 https://doi.org/10.1126%2Fscience.7973651
448 https://en.wikipedia.org/wiki/PMID_(identifier)
449 http://pubmed.ncbi.nlm.nih.gov/7973651
450 http://www.usc.edu/dept/molecular-science/papers/fp-sci94.pdf
451 https://en.wikipedia.org/wiki/Sanjeev_Arora
452 http://graphics.stanford.edu/courses/cs468-06-winter/Papers/arora-tsp.pdf
453 https://en.wikipedia.org/wiki/Doi_(identifier)
454 https://doi.org/10.1145%2F290179.290180
455 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
456 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.89.9953
457 https://en.wikipedia.org/wiki/JSTOR_(identifier)
458 http://www.jstor.org/stable/4622707
459 https://en.wikipedia.org/wiki/William_J._Cook
460 https://en.wikipedia.org/wiki/In_Pursuit_of_the_Traveling_Salesman
461 https://en.wikipedia.org/wiki/ISBN_(identifier)
462 https://en.wikipedia.org/wiki/Special:BookSources/9780691152707


• C, W463 ; E, D; G, M (2007), ”C


 -    TSP”, INFORMS Journal on Computing,
19 (3): 356–365, doi464 :10.1287/ijoc.1060.0204465
• C, T H.466 ; L, C E.467 ; R, R L.468 ; S,
C469 (31 J 2009). ”35.2: T - ”470 . Introduc-
tion to Algorithms471 (2 .). MIT P. . 1027–1033. ISBN472 9780262033848473 .
• D, G. B.474 ; F, R.475 ; J, S. M.476 (1954), ”S  
-   ”477 , Operations Research, 2 (4): 393–410,
doi478 :10.1287/opre.2.4.393479 , JSTOR480 166695481
• G, M R.; J, D S. (1979). ”A2.3: ND22–24”. Computers and
Intractability: A Guide to the Theory of NP-completeness482 . W. H. F. . 211–
212. ISBN483 9780716710448484 .
• G, D. E. (1989), ”G A  S, O &
M L”, Reading: Addison-Wesley, New York: Addison-Wesley, Bib-
code485 :1989gaso.book.....G486 , ISBN487 978-0-201-15767-3488
• G, G.; Y, A.; Z, A. (15 M 2002). ”T 
   :    -  
 TSP”. Discrete Applied Mathematics. 117 (1–3): 81–86. doi489 :10.1016/S0166-
218X(01)00195-0490 . ISSN491 0166-218X492 .

463 https://en.wikipedia.org/wiki/William_J._Cook
464 https://en.wikipedia.org/wiki/Doi_(identifier)
465 https://doi.org/10.1287%2Fijoc.1060.0204
466 https://en.wikipedia.org/wiki/Thomas_H._Cormen
467 https://en.wikipedia.org/wiki/Charles_E._Leiserson
468 https://en.wikipedia.org/wiki/Ronald_L._Rivest
469 https://en.wikipedia.org/wiki/Clifford_Stein
470 https://books.google.com/books?id=i-bUBQAAQBAJ
471 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
472 https://en.wikipedia.org/wiki/ISBN_(identifier)
473 https://en.wikipedia.org/wiki/Special:BookSources/9780262033848
474 https://en.wikipedia.org/wiki/George_Dantzig
475 https://en.wikipedia.org/wiki/D._R._Fulkerson
476 https://en.wikipedia.org/wiki/Selmer_M._Johnson
477 https://semanticscholar.org/paper/0fa37740fe865bcac0d9687c0e5f65131f4492c8
478 https://en.wikipedia.org/wiki/Doi_(identifier)
479 https://doi.org/10.1287%2Fopre.2.4.393
480 https://en.wikipedia.org/wiki/JSTOR_(identifier)
481 http://www.jstor.org/stable/166695
482 https://books.google.com/books?id=fjxGAQAAIAAJ
483 https://en.wikipedia.org/wiki/ISBN_(identifier)
484 https://en.wikipedia.org/wiki/Special:BookSources/9780716710448
485 https://en.wikipedia.org/wiki/Bibcode_(identifier)
486 https://ui.adsabs.harvard.edu/abs/1989gaso.book.....G
487 https://en.wikipedia.org/wiki/ISBN_(identifier)
488 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-15767-3
489 https://en.wikipedia.org/wiki/Doi_(identifier)
490 https://doi.org/10.1016%2FS0166-218X%2801%2900195-0
491 https://en.wikipedia.org/wiki/ISSN_(identifier)
492 http://www.worldcat.org/issn/0166-218X


• G, G.; P, A. P. (18 M 2007). The Traveling Salesman Problem and Its
Variations493 . S US. ISBN494 9780387444598495 .</ref>
G, G.; P, A. P. (2006), The Traveling Salesman Problem and Its Variations,
Springer, ISBN496 978-0-387-44459-8497
• J, D. S.498 ; MG, L. A. (1997), ”T T S P:
A C S  L O”,  A, E. H. L.; L, J. K.499 (.),
Local Search in Combinatorial Optimisation500 (PDF), J W  S L.,
. 215–310
• L, E. L.; S, D. B.; K, A. H. G. R; L, J. K.
(1985). The Traveling Salesman Problem501 . J W & S, I.
ISBN502 9780471904137503 .
• MG, J. N.; O, T. (1996), ”H    -
  ”504 (PDF), Perception & Psychophysics, 58 (4): 527–539,
doi505 :10.3758/BF03213088506 , PMID507 8934685508 , archived from the original509 (PDF)
on 29 December 2009
• M, J. S. B.510 (1999), ”G   
: A  -    
TSP, k-MST, and related problems”511 , SIAM Journal on Computing, 28 (4): 1298–1309,
doi512 :10.1137/S0097539796309764513
• R, S.; S, W. (1998), ”A    ''
 ''”, Proceedings514 , . 540–550, CSX515 10.1.1.51.8676516
• R, D J.; S, R E.; L, P M., II (1977).
”A A  S H   T S P”517 .

493 https://books.google.com/books?id=pfRSPwAACAAJ
494 https://en.wikipedia.org/wiki/ISBN_(identifier)
495 https://en.wikipedia.org/wiki/Special:BookSources/9780387444598
496 https://en.wikipedia.org/wiki/ISBN_(identifier)
497 https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-44459-8
498 https://en.wikipedia.org/wiki/David_S._Johnson
499 https://en.wikipedia.org/wiki/Jan_Karel_Lenstra
500 https://www.cs.ubc.ca/~hutter/previous-earg/EmpAlgReadingGroup/TSP-JohMcg97.pdf
501 https://books.google.com/books?id=BXBGAAAAYAAJ
502 https://en.wikipedia.org/wiki/ISBN_(identifier)
503 https://en.wikipedia.org/wiki/Special:BookSources/9780471904137
504 https://web.archive.org/web/20091229053516/http://www.psych.lancs.ac.uk/people/uploads/TomOrmerod20030716T112601.pdf
505 https://en.wikipedia.org/wiki/Doi_(identifier)
506 https://doi.org/10.3758%2FBF03213088
507 https://en.wikipedia.org/wiki/PMID_(identifier)
508 http://pubmed.ncbi.nlm.nih.gov/8934685
509 http://www.psych.lancs.ac.uk/people/uploads/TomOrmerod20030716T112601.pdf
510 https://en.wikipedia.org/wiki/Joseph_S._B._Mitchell
511 http://citeseer.ist.psu.edu/622594.html
512 https://en.wikipedia.org/wiki/Doi_(identifier)
513 https://doi.org/10.1137%2FS0097539796309764
514 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
515 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
516 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.8676
517 https://semanticscholar.org/paper/ac5e93895ab03bf47070dab04f62f58717442f0d

• M, A; L, M; B, M; V, D-
 (1 F 2001). ”H    
T S ”. Psychological Research. 65 (1): 34–45.
doi520 :10.1007/s004260000031521 . ISSN522 1430-2772523 . PMID524 11505612525 .
• W, C (2000), A Multilevel Approach to the Travelling Salesman Problem,
CMS Press
• W, C (2001), A Multilevel Lin-Kernighan-Helsgaun Algorithm for the
Travelling Salesman Problem526 , CMS P

131.15 External links

Wikimedia Commons has media related to Traveling salesman problem527 .

• Traveling Salesman Problem528 at the Wayback Machine529 (archived 2013-12-17) at University of Waterloo530
• TSPLIB531 at the University of Heidelberg532
• Traveling Salesman Problem533 by Jon McLoone at the Wolfram Demonstrations Project
• TSP visualization tool534

518 https://en.wikipedia.org/wiki/Doi_(identifier)
519 https://doi.org/10.1137%2F0206041
520 https://en.wikipedia.org/wiki/Doi_(identifier)
521 https://doi.org/10.1007%2Fs004260000031
522 https://en.wikipedia.org/wiki/ISSN_(identifier)
523 http://www.worldcat.org/issn/1430-2772
524 https://en.wikipedia.org/wiki/PMID_(identifier)
525 http://pubmed.ncbi.nlm.nih.gov/11505612
526 http://dimacs.rutgers.edu/Challenges/TSP/WalshawTR8001.ps
527 https://commons.wikimedia.org/wiki/Category:Traveling_salesman_problem
528 https://web.archive.org/web/20131217224319/http://www.math.uwaterloo.ca/tsp/index.html
529 https://en.wikipedia.org/wiki/Wayback_Machine
530 https://en.wikipedia.org/wiki/University_of_Waterloo
531 http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/
532 https://en.wikipedia.org/wiki/University_of_Heidelberg
533 http://demonstrations.wolfram.com/TravelingSalesmanProblem/
534 https://tspvis.com/

132 Tree traversal

”Tree search” redirects here. It is not to be confused with Search tree1 .


1 https://en.wikipedia.org/wiki/Search_tree


Graph and tree search algorithms

• α–β
• A*
• B*
• Backtracking
• Beam
• Bellman–Ford
• Best-first
• Bidirectional
• Borůvka
• Branch & bound
• BFS
• British Museum
• D*
• DFS
• Dijkstra
• Edmonds
• Floyd–Warshall
• Fringe search
• Hill climbing
• IDA*
• Iterative deepening
• Johnson
• Jump point
• Kruskal
• Lexicographic BFS
• LPA*
• Prim
• SMA*
• SPFA

Listings

• Graph algorithms
• Search algorithms
• List of graph algorithms

Related topics

• Dynamic programming
• Graph traversal
• Tree traversal
• Search games


In computer science12 , tree traversal (also known as tree search and walking the tree)
is a form of graph traversal13 and refers to the process of visiting (checking and/or updating)
each node in a tree data structure14 , exactly once. Such traversals are classified by the order
in which the nodes are visited. The following algorithms are described for a binary tree15 ,
but they may be generalized to other trees as well.

132.1 Types

Unlike linked lists16 , one-dimensional arrays17 and other linear data structures18 , which
are canonically traversed in linear order, trees may be traversed in multiple ways. They
may be traversed in depth-first or breadth-first order. There are three common ways to
traverse them in depth-first order: in-order, pre-order and post-order.[1] Beyond these basic
traversals, various more complex or hybrid schemes are possible, such as depth-limited
searches19 like iterative deepening depth-first search20 . The latter, as well as breadth-first
search, can also be used to traverse infinite trees, see below21 .

132.1.1 Data structures for tree traversal


12 https://en.wikipedia.org/wiki/Computer_science
13 https://en.wikipedia.org/wiki/Graph_traversal
14 https://en.wikipedia.org/wiki/Tree_(data_structure)
15 https://en.wikipedia.org/wiki/Binary_tree
16 https://en.wikipedia.org/wiki/Linked_list
17 https://en.wikipedia.org/wiki/Array_data_structure
18 https://en.wikipedia.org/wiki/List_of_data_structures#Linear_data_structures
19 https://en.wikipedia.org/wiki/Depth-limited_search
20 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search
21 #Infinite_trees


Traversing a tree involves iterating over all nodes in some manner. Because a given
node can have more than one possible next node (a tree is not a linear data structure),
some nodes must, assuming sequential (not parallel) computation, be deferred—stored in
some way for later visiting. This is often done via a stack34 (LIFO) or queue35 (FIFO). As
a tree is a self-referential (recursively defined) data structure, traversal can be defined by
recursion36 or, more subtly, corecursion37 , in a very natural and clear fashion; in these cases
the deferred nodes are stored implicitly in the call stack38 .
Depth-first search is easily implemented via a stack, including recursively (via the call stack),
while breadth-first search is easily implemented via a queue, including corecursively.

Figure 322 Depth-first traversal of an example tree: pre-order (red): F, B, A, D, C, E, G, I, H; in-order (yellow): A, B, C, D, E, F, G, H, I; post-order (green): A, C, E, D, B, H, I, G, F.

34 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
35 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
36 https://en.wikipedia.org/wiki/Recursion
37 https://en.wikipedia.org/wiki/Corecursion
38 https://en.wikipedia.org/wiki/Call_stack


132.1.2 Depth-first search of binary tree

These searches are referred to as depth-first search (DFS), since the search tree is deepened
as much as possible on each child before going to the next sibling. For a binary tree, they are
defined as access operations at each node, starting with the current node, whose algorithm
is as follows:[2] [3]
The general recursive pattern for traversing a binary tree is this:

Go down one level to the recursive argument N. If N exists (is non-empty) execute the
following three operations in a certain order:
(L) Recursively traverse N's left subtree.
(R) Recursively traverse N's right subtree.
(N) Process the current node N itself.
Return by going up one level and arriving at the parent node of N.

In the examples (L) is mostly performed before (R). But (R) before (L) is also possible, see
(RNL).

Pre-order (NLR)
1. Access the data part of the current node.
2. Traverse the left subtree by recursively calling the pre-order function.
3. Traverse the right subtree by recursively calling the pre-order function.
Pre-order traversal is a topological sort39 of the tree, because a parent node is processed
before any of its child nodes.

In-order (LNR)
1. Traverse the left subtree by recursively calling the in-order function.
2. Access the data part of the current node.
3. Traverse the right subtree by recursively calling the in-order function.
In a binary search tree40 ordered such that in each node the key is greater than all keys in
its left subtree and less than all keys in its right subtree, in-order traversal retrieves the
keys in ascending sorted order.[4]
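As a quick illustration (a minimal Python sketch, not taken from the article; the `Node`/`insert` helpers are assumptions for the example), inserting keys into a binary search tree and then walking it in-order yields the keys in ascending order:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Standard unbalanced BST insertion: smaller keys go left.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(node, out):
    # LNR: left subtree, then the node itself, then right subtree.
    if node is not None:
        inorder(node.left, out)
        out.append(node.key)
        inorder(node.right, out)
    return out

root = None
for k in [7, 3, 9, 1, 5]:
    root = insert(root, k)
print(inorder(root, []))  # [1, 3, 5, 7, 9]
```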

Reverse in-order (RNL)


1. Traverse the right subtree by recursively calling the reverse in-order function.
2. Access the data part of the current node.
3. Traverse the left subtree by recursively calling the reverse in-order function.

39 https://en.wikipedia.org/wiki/Topological_sorting
40 https://en.wikipedia.org/wiki/Binary_search_tree


In a binary search tree41 , reverse in-order traversal retrieves the keys in descending sorted
order.

Post-order (LRN)
1. Traverse the left subtree by recursively calling the post-order function.
2. Traverse the right subtree by recursively calling the post-order function.
3. Access the data part of the current node.
The trace of a traversal is called a sequentialisation of the tree. The traversal trace is a list
of each visited root. No single sequentialisation according to pre-, in- or post-order describes
the underlying tree uniquely. Given a tree with distinct elements, either pre-order or post-
order paired with in-order is sufficient to describe the tree uniquely. However, pre-order
with post-order leaves some ambiguity in the tree structure.[5]
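The uniqueness claim can be made concrete with a short Python sketch (an illustration, not from the article) that rebuilds a binary tree with distinct elements from its pre-order and in-order sequentialisations:

```python
def build(preorder, inorder):
    # The first pre-order element is the root; its position in the
    # in-order list splits the rest into left and right subtrees.
    # Elements must be distinct for index() to be unambiguous.
    if not preorder:
        return None
    root = preorder[0]
    i = inorder.index(root)
    return (root,
            build(preorder[1:i + 1], inorder[:i]),
            build(preorder[i + 1:], inorder[i + 1:]))

# Pre-order B A C and in-order A B C determine the tree (B (A) (C)):
print(build(['B', 'A', 'C'], ['A', 'B', 'C']))
```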

Generic tree

To traverse any tree with depth-first search, perform the following operations recursively at
each node:
1. Perform pre-order operation.
2. For each i from 1 to the number of children do:
a) Visit i-th, if present.
b) Perform in-order operation.
3. Perform post-order operation.
Depending on the problem at hand, the pre-order, in-order or post-order operations may be
void, or you may only want to visit a specific child, so these operations are optional. Also,
in practice more than one of pre-order, in-order and post-order operations may be required.
For example, when inserting into a ternary tree, a pre-order operation is performed by
comparing items. A post-order operation may be needed afterwards to re-balance the tree.
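The steps above can be sketched in Python (an illustrative skeleton; the `TreeNode` class and hook names are assumptions, and any hook left as `None` is a void operation):

```python
class TreeNode:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

def traverse(node, pre=None, mid=None, post=None):
    # Generic depth-first walk over a tree with any number of children.
    if node is None:
        return
    if pre:
        pre(node)                      # 1. pre-order operation
    for child in node.children:       # 2. for each child...
        traverse(child, pre, mid, post)
        if mid:
            mid(node)                  #    ...an in-order operation after it
    if post:
        post(node)                     # 3. post-order operation

root = TreeNode('root', [TreeNode('a'), TreeNode('b')])
order = []
traverse(root, pre=lambda n: order.append(n.name))
print(order)  # ['root', 'a', 'b']
```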

41 https://en.wikipedia.org/wiki/Binary_search_tree


132.1.3 Breadth-first search / level order

Figure 323 Level-order: F, B, G, A, D, I, C, E, H.

Main article: Breadth-first search42
Trees can also be traversed in level-order, where we visit every node on a level before going
to a lower level. This search is referred to as breadth-first search (BFS), as the search tree
is broadened as much as possible on each depth before going to the next depth.

132.1.4 Other types

There are also tree traversal algorithms that classify as neither depth-first search nor
breadth-first search. One such algorithm is Monte Carlo tree search43 , which concentrates
on analyzing the most promising moves, basing the expansion of the search tree44 on random
sampling45 of the search space.

42 https://en.wikipedia.org/wiki/Breadth-first_search
43 https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
44 https://en.wikipedia.org/wiki/Search_tree
45 https://en.wikipedia.org/wiki/Monte_Carlo_method


132.2 Applications

Figure 324 Tree representing the arithmetic expression A*(B-C) + (D+E)

Pre-order traversal can be used to make a prefix expression (Polish notation46 ) from expres-
sion trees47 : traverse the expression tree pre-orderly. For example, traversing the depicted
arithmetic expression in pre-order yields ”+ * A - B C + D E”.
Post-order traversal can generate a postfix representation (Reverse Polish notation48 ) of a
binary tree. Traversing the depicted arithmetic expression in post-order yields ”A B C -
* D E + +”; the latter can easily be transformed into machine code49 to evaluate the
expression by a stack machine50 .
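Both expression examples can be reproduced with a small Python sketch (the tuple representation of nodes is an assumption for illustration):

```python
# Each node is (value, left, right); leaves have None children.
expr = ('+',
        ('*', ('A', None, None),
              ('-', ('B', None, None), ('C', None, None))),
        ('+', ('D', None, None), ('E', None, None)))

def pre(n):
    # NLR order gives prefix (Polish) notation.
    return [] if n is None else [n[0]] + pre(n[1]) + pre(n[2])

def post(n):
    # LRN order gives postfix (reverse Polish) notation.
    return [] if n is None else post(n[1]) + post(n[2]) + [n[0]]

print(' '.join(pre(expr)))   # + * A - B C + D E
print(' '.join(post(expr)))  # A B C - * D E + +
```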

46 https://en.wikipedia.org/wiki/Polish_notation
47 https://en.wikipedia.org/wiki/Parse_tree
48 https://en.wikipedia.org/wiki/Reverse_Polish_notation
49 https://en.wikipedia.org/wiki/Machine_code
50 https://en.wikipedia.org/wiki/Stack_machine


In-order traversal is very commonly used on binary search trees51 because it returns values
from the underlying set in order, according to the comparator that set up the binary search
tree.
Post-order traversal, while deleting or freeing nodes and values, can delete or free an entire
binary tree: each node is freed after its children have been freed.
Duplicating a binary tree also yields a post-order sequence of actions, because the pointer
to the copy of a node is assigned to the corresponding child field N.child within the copy
of its parent N immediately after the recursive procedure returns the copy. This
means that the parent cannot be finished before all children are finished.
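A hypothetical Python sketch of the duplication just described, where the parent's copy is completed only after both child copies exist:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def duplicate(node):
    # Post-order: both subtrees are copied before the parent copy is
    # finished, so a parent is never complete before its children.
    if node is None:
        return None
    left_copy = duplicate(node.left)
    right_copy = duplicate(node.right)
    return Node(node.value, left_copy, right_copy)

original = Node('F', Node('B'), Node('G'))
clone = duplicate(original)
print(clone is not original and clone.left.value == 'B')  # True
```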

132.3 Implementations


132.3.1 Depth-first search

Pre-order

51 https://en.wikipedia.org/wiki/Binary_search_tree


preorder(node)
    if (node == null)
        return
    visit(node)
    preorder(node.left)
    preorder(node.right)

iterativePreorder(node)
    if (node == null)
        return
    s ← empty stack
    s.push(node)
    while (not s.isEmpty())
        node ← s.pop()
        visit(node)
        // right child is pushed first so that left is processed first
        if node.right ≠ null
            s.push(node.right)
        if node.left ≠ null
            s.push(node.left)

In-order

inorder(node)
    if (node == null)
        return
    inorder(node.left)
    visit(node)
    inorder(node.right)

iterativeInorder(node)
    s ← empty stack
    while (not s.isEmpty() or node ≠ null)
        if (node ≠ null)
            s.push(node)
            node ← node.left
        else
            node ← s.pop()
            visit(node)
            node ← node.right

Post-order

postorder(node)
    if (node == null)
        return
    postorder(node.left)
    postorder(node.right)
    visit(node)

iterativePostorder(node)
    s ← empty stack
    lastNodeVisited ← null
    while (not s.isEmpty() or node ≠ null)
        if (node ≠ null)
            s.push(node)
            node ← node.left
        else
            peekNode ← s.peek()
            // if right child exists and traversing node
            // from left child, then move right
            if (peekNode.right ≠ null and lastNodeVisited ≠ peekNode.right)
                node ← peekNode.right
            else
                visit(peekNode)
                lastNodeVisited ← s.pop()
                node ← null

All the above implementations require stack space proportional to the height of the tree,
which is a call stack64 for the recursive implementations and a parent stack for the iterative
ones. In a poorly

64 https://en.wikipedia.org/wiki/Call_stack


balanced tree, this can be considerable. With the iterative implementations we can remove
the stack requirement by maintaining parent pointers in each node, or by threading the
tree65 (next section).

Morris in-order traversal using threading

Main article: Threaded binary tree66 A binary tree is threaded by making every left child
pointer (that would otherwise be null) point to the in-order predecessor of the node (if it
exists) and every right child pointer (that would otherwise be null) point to the in-order
successor of the node (if it exists).
Advantages:
1. Avoids recursion, which uses a call stack and consumes memory and time.
2. The node keeps a record of its parent.
Disadvantages:
1. The tree is more complex.
2. We can make only one traversal at a time.
3. It is more prone to errors when both the children are not present and both values of
nodes point to their ancestors.
Morris traversal is an implementation of in-order traversal that uses threading:[6]
1. Create links to the in-order successor.
2. Print the data using these links.
3. Revert the changes to restore original tree.
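A Python sketch of Morris in-order traversal following those three steps (an illustration, not from the article; the threads are created and removed on the fly, so the original tree is restored when the traversal finishes):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def morris_inorder(root):
    out, node = [], root
    while node is not None:
        if node.left is None:
            out.append(node.value)   # visit, then follow right child/thread
            node = node.right
        else:
            # Find the in-order predecessor: rightmost node of left subtree.
            pred = node.left
            while pred.right is not None and pred.right is not node:
                pred = pred.right
            if pred.right is None:
                pred.right = node    # step 1: create thread to successor
                node = node.left
            else:
                pred.right = None    # step 3: revert, restoring the tree
                out.append(node.value)
                node = node.right
    return out

tree = Node('B', Node('A'), Node('C'))
print(morris_inorder(tree))  # ['A', 'B', 'C']
```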

132.3.2 Breadth-first search

Listed below is pseudocode for a simple queue67 -based level-order traversal; it requires
space proportional to the maximum number of nodes at a given depth, which can be as
much as half the total number of nodes. A more space-efficient approach for this type of
traversal can be implemented using an iterative deepening depth-first search68 .
levelorder(root)
    q ← empty queue
    q.enqueue(root)
    while not q.isEmpty() do
        node ← q.dequeue()
        visit(node)
        if node.left ≠ null then
            q.enqueue(node.left)
        if node.right ≠ null then
            q.enqueue(node.right)

65 #Morris_in-order_traversal_using_threading
66 https://en.wikipedia.org/wiki/Threaded_binary_tree
67 https://en.wikipedia.org/wiki/Queue_(data_structure)
68 https://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search


132.4 Infinite trees

While traversal is usually done for trees with a finite number of nodes (and hence finite
depth and finite branching factor69 ) it can also be done for infinite trees. This is of particular
interest in functional programming70 (particularly with lazy evaluation71 ), as infinite data
structures can often be easily defined and worked with, though they are not (strictly)
evaluated, as this would take infinite time. Some finite trees are too large to represent
explicitly, such as the game tree72 for chess73 or go74 , and so it is useful to analyze them as
if they were infinite.
A basic requirement for traversal is to visit every node eventually. For infinite trees, simple
algorithms often fail this. For example, given a binary tree of infinite depth, a depth-first
search will go down one side (by convention the left side) of the tree, never visiting the
rest, and indeed an in-order or post-order traversal will never visit any nodes, as it has not
reached a leaf (and in fact never will). By contrast, a breadth-first (level-order) traversal
will traverse a binary tree of infinite depth without problem, and indeed will traverse any
tree with bounded branching factor.
On the other hand, given a tree of depth 2, where the root has infinitely many children,
and each of these children has two children, a depth-first search will visit all nodes, as once
it exhausts the grandchildren (children of children of one node), it will move on to the next
(assuming it is not post-order, in which case it never reaches the root). By contrast, a
breadth-first search will never reach the grandchildren, as it seeks to exhaust the children
first.
A more sophisticated analysis of running time can be given via infinite ordinal numbers75 ;
for example, the breadth-first search of the depth 2 tree above will take ω76 ·2 steps: ω for
the first level, and then another ω for the second level.
Thus, simple depth-first or breadth-first searches do not traverse every infinite tree, and
are not efficient on very large trees. However, hybrid methods can traverse any (countably)
infinite tree, essentially via a diagonal argument77 (”diagonal”—a combination of vertical
and horizontal—corresponds to a combination of depth and breadth).
Concretely, given the infinitely branching tree of infinite depth, label the root (), the children
of the root (1), (2), …, the grandchildren (1, 1), (1, 2), …, (2, 1), (2, 2), …, and so on. The
nodes are thus in a one-to-one78 correspondence with finite (possibly empty) sequences of
positive numbers, which are countable and can be placed in order first by sum of entries,
and then by lexicographic order79 within a given sum (only finitely many sequences sum to a

69 https://en.wikipedia.org/wiki/Branching_factor
70 https://en.wikipedia.org/wiki/Functional_programming
71 https://en.wikipedia.org/wiki/Lazy_evaluation
72 https://en.wikipedia.org/wiki/Game_tree
73 https://en.wikipedia.org/wiki/Chess
74 https://en.wikipedia.org/wiki/Go_(game)
75 https://en.wikipedia.org/wiki/Ordinal_number
76 https://en.wikipedia.org/wiki/Ordinal_number#Ordinals_extend_the_natural_numbers
77 https://en.wikipedia.org/wiki/Diagonal_argument_(disambiguation)
78 https://en.wikipedia.org/wiki/Bijection
79 https://en.wikipedia.org/wiki/Lexicographic_order


given value, so all entries are reached—formally there are a finite number of compositions80
of a given natural number, specifically 2^(n−1) compositions of n ≥ 1), which gives a traversal.
Explicitly:
0: ()
1: (1)
2: (1, 1) (2)
3: (1, 1, 1) (1, 2) (2, 1) (3)
4: (1, 1, 1, 1) (1, 1, 2) (1, 2, 1) (1, 3) (2, 1, 1) (2, 2) (3, 1) (4)

etc.
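The enumeration above (finite sequences of positive integers, ordered first by sum of entries and then lexicographically within each sum) can be generated with a short Python sketch:

```python
def compositions(total):
    # Yield all tuples of positive integers summing to `total`,
    # in lexicographic order; the empty tuple () is the root.
    if total == 0:
        yield ()
        return
    for first in range(1, total + 1):
        for rest in compositions(total - first):
            yield (first,) + rest

for s in range(4):
    print(s, list(compositions(s)))
# sum 3 gives (1, 1, 1), (1, 2), (2, 1), (3,): 2**(n-1) tuples for n >= 1
```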
This can be interpreted as mapping the infinite depth binary tree onto this tree and then
applying breadth-first search: replace the ”down” edges connecting a parent node to its
second and later children with ”right” edges from the first child to the second child, from
the second child to the third child, etc. Thus at each step one can either go down (append
a (, 1) to the end) or go right (add one to the last number) (except the root, which is extra
and can only go down), which shows the correspondence between the infinite binary tree
and the above numbering; the sum of the entries (minus one) corresponds to the distance
from the root, which agrees with the 2^(n−1) nodes at depth n − 1 in the infinite binary tree
(2 corresponds to binary).

132.5 References
1. ”L 8, T T”81 . R 2 M 2015.
2. 82

3. ”P T A”83 . R 2 M 2015.


4. W, T. ”T T”84 (PDF). UCLA85 Math. Archived from the
original86 (PDF) on February 13, 2015. Retrieved January 2, 2016.
5. ”A, W   -, -  - -
  ?, C S S E”87 . R
2 M 2015.
6. M, J M. (1979). ”T     ”.
Information Processing Letters88 . 9 (5). doi89 :10.1016/0020-0190(79)90068-190 .
General
• Dale, Nell. Lilly, Susan D. ”Pascal Plus Data Structures”. D. C. Heath and Company.
Lexington, MA. 1995. Fourth Edition.

80 https://en.wikipedia.org/wiki/Composition_(number_theory)
81 http://webdocs.cs.ualberta.ca/~holte/T26/tree-traversal.html
82 http://www.cise.ufl.edu/~sahni/cop3530/slides/lec216.pdf
83 http://www.programmerinterview.com/index.php/data-structures/preorder-traversal-algorithm/
84 https://web.archive.org/web/20150213195803/http://www.math.ucla.edu/~wittman/10b.1.10w/Lectures/Lec18.pdf
85 https://en.wikipedia.org/wiki/UCLA
86 https://www.math.ucla.edu/~wittman/10b.1.10w/Lectures/Lec18.pdf
87 https://cs.stackexchange.com/q/439
88 https://en.wikipedia.org/wiki/Information_Processing_Letters
89 https://en.wikipedia.org/wiki/Doi_(identifier)
90 https://doi.org/10.1016%2F0020-0190%2879%2990068-1


• Drozdek, Adam. ”Data Structures and Algorithms in C++”. Brook/Cole. Pacific Grove,
CA. 2001. Second edition.
• 91

132.6 External links


• Storing Hierarchical Data in a Database92 with traversal examples in PHP
• Managing Hierarchical Data in MySQL93
• Working with Graphs in MySQL94
• Sample code for recursive and iterative tree traversal implemented in C.95
• Sample code for recursive tree traversal in C#.96
• See tree traversal implemented in various programming language97 on Rosetta Code98
• Tree traversal without recursion99

91 http://www.math.northwestern.edu/~mlerma/courses/cs310-05s/notes/dm-treetran
92 http://www.sitepoint.com/hierarchical-data-database/
93 https://web.archive.org/web/20110606032941/http://dev.mysql.com/tech-resources/articles/hierarchical-data.html
94 http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html
95 https://code.google.com/p/treetraversal/
96 http://arachnode.net/blogs/programming_challenges/archive/2009/09/25/recursive-tree-traversal-orders.aspx
97 http://rosettacode.org/wiki/Tree_traversal
98 https://en.wikipedia.org/wiki/Rosetta_Code
99 http://www.perlmonks.org/?node_id=600456

133 Dijkstra's algorithm

Graph search algorithm Not to be confused with Dykstra's projection algorithm1 .

Dijkstra's algorithm
Dijkstra's algorithm to find the shortest path between a and b. It picks the unvisited
vertex with the lowest distance, calculates the distance through it to each unvisited
neighbor, and updates the neighbor's distance if smaller. Mark visited (set to red)
when done with neighbors.
Class: Search algorithm, Greedy algorithm, Dynamic programming[1]
Data structure: Graph
Worst-case performance: O(|E| + |V| log |V|)

1 https://en.wikipedia.org/wiki/Dykstra%27s_projection_algorithm




Dijkstra's algorithm (or Dijkstra's Shortest Path First algorithm, SPF algo-
rithm)[2] is an algorithm2 for finding the shortest paths3 between nodes4 in a graph5 ,
which may represent, for example, road networks6 . It was conceived by computer scientist7
Edsger W. Dijkstra8 in 1956 and published three years later.[3][4][5]
The algorithm exists in many variants. Dijkstra's original algorithm found the shortest path
between two given nodes,[5] but a more common variant fixes a single node as the ”source”
node and finds shortest paths from the source to all other nodes in the graph, producing a
shortest-path tree9 .
For a given source node in the graph, the algorithm finds the shortest path between that
node and every other.[6]:196–206 It can also be used for finding the shortest paths from a
single node to a single destination node by stopping the algorithm once the shortest path to
the destination node has been determined. For example, if the nodes of the graph represent
cities and edge path costs represent driving distances between pairs of cities connected by a
direct road (for simplicity, ignore red lights, stop signs, toll roads and other obstructions),
Dijkstra's algorithm can be used to find the shortest route between one city and all other
cities. A widely used application of shortest path algorithm is network routing protocols10 ,
most notably IS-IS11 (Intermediate System to Intermediate System) and Open Shortest
Path First (OSPF12 ). It is also employed as a subroutine13 in other algorithms such as
Johnson's14 .
The Dijkstra algorithm uses labels that are positive integers or real numbers, which are
totally ordered15 . It can be generalized to use any labels that are partially ordered16 ,
provided the subsequent labels (a subsequent label is produced when traversing an edge) are
monotonically17 non-decreasing. This generalization is called the generic Dijkstra shortest-
path algorithm.[7]
Dijkstra's algorithm uses a data structure for storing and querying partial solutions sorted by
distance from the start. While the original algorithm uses a min-priority queue18 and runs in
time19 O((|V| + |E|) log |V|) (where |V| is the number of nodes and |E| is the number of edges),
it can also be implemented in O(|V|²) using an array. The idea of this algorithm is also given

2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Shortest_path_problem
4 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
5 https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
6 https://en.wikipedia.org/wiki/Road_network
7 https://en.wikipedia.org/wiki/Computer_scientist
8 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
9 https://en.wikipedia.org/wiki/Shortest-path_tree
10 https://en.wikipedia.org/wiki/Routing_protocol
11 https://en.wikipedia.org/wiki/IS-IS
12 https://en.wikipedia.org/wiki/OSPF
13 https://en.wikipedia.org/wiki/Subroutine
14 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
15 https://en.wikipedia.org/wiki/Total_order
16 https://en.wikipedia.org/wiki/Partially_ordered_set
17 https://en.wikipedia.org/wiki/Monotonic_function
18 https://en.wikipedia.org/wiki/Min-priority_queue
19 https://en.wikipedia.org/wiki/Time_complexity


in Leyzorek et al. 195720 . Fredman & Tarjan 198421 propose using a Fibonacci heap22 min-
priority queue to optimize the running time complexity to O(|E| + |V | log |V |) (where |E| is
the number of edges). This is asymptotically23 the fastest known single-source shortest-path
algorithm24 for arbitrary directed graphs25 with unbounded non-negative weights. However,
specialized cases (such as bounded/integer weights, directed acyclic graphs etc.) can indeed
be improved further as detailed in Specialized variants26 .
In some fields, artificial intelligence27 in particular, Dijkstra's algorithm or a variant of it is
known as uniform cost search and formulated as an instance of the more general idea of
best-first search28 .[8]

133.1 History

What is the shortest way to travel from Rotterdam29 to Groningen30 , in general: from
given city to given city. It is the algorithm for the shortest path31 , which I designed in
about twenty minutes. One morning I was shopping in Amsterdam32 with my young
fiancée, and tired, we sat down on the café terrace to drink a cup of coffee and I was
just thinking about whether I could do this, and I then designed the algorithm for the
shortest path. As I said, it was a twenty-minute invention. In fact, it was published in
'59, three years later. The publication is still readable, it is, in fact, quite nice. One of
the reasons that it is so nice was that I designed it without pencil and paper. I learned
later that one of the advantages of designing without pencil and paper is that you are
almost forced to avoid all avoidable complexities. Eventually that algorithm became, to
my great amazement, one of the cornerstones of my fame.

E D,     P L. F, C 
 ACM, 2001[4]
Dijkstra thought about the shortest path problem when working at the Mathematical Center
in Amsterdam33 in 1956 as a programmer to demonstrate the capabilities of a new computer
called ARMAC.[9] His objective was to choose both a problem and a solution (that would
be produced by computer) that non-computing people could understand. He designed
the shortest path algorithm and later implemented it for ARMAC for a slightly simplified
transportation map of 64 cities in the Netherlands (64, so that 6 bits would be sufficient

20 #CITEREFLeyzorekGrayJohnsonLadew1957
21 #CITEREFFredmanTarjan1984
22 https://en.wikipedia.org/wiki/Fibonacci_heap
23 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
24 https://en.wikipedia.org/wiki/Shortest_path_problem
25 https://en.wikipedia.org/wiki/Directed_graph
26 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm#Specialized_variants
27 https://en.wikipedia.org/wiki/Artificial_intelligence
28 https://en.wikipedia.org/wiki/Best-first_search
29 https://en.wikipedia.org/wiki/Rotterdam
30 https://en.wikipedia.org/wiki/Groningen
31 https://en.wikipedia.org/wiki/Shortest_path_problem
32 https://en.wikipedia.org/wiki/Amsterdam
33 https://en.wikipedia.org/wiki/Centrum_Wiskunde_%26_Informatica


to encode the city number).[4] A year later, he came across another problem from hardware
engineers working on the institute's next computer: minimize the amount of wire needed
to connect the pins on the back panel of the machine. As a solution, he re-discovered the
algorithm known as Prim's minimal spanning tree algorithm34 (known earlier to Jarník35 ,
and also rediscovered by Prim36 ).[10][11] Dijkstra published the algorithm in 1959, two years
after Prim and 29 years after Jarník.[12][13]

34 https://en.wikipedia.org/wiki/Prim%27s_algorithm
35 https://en.wikipedia.org/wiki/Vojt%C4%9Bch_Jarn%C3%ADk
36 https://en.wikipedia.org/wiki/Robert_C._Prim


133.2 Algorithm

Figure 326 Illustration of Dijkstra's algorithm finding a path from a start node (lower
left, red) to a goal node (upper right, green) in a robot motion planning problem. Open
nodes represent the ”tentative” set (aka set of ”unvisited” nodes). Filled nodes are visited
ones, with color representing the distance: the greener, the closer. Nodes in all the
different directions are explored uniformly, appearing more-or-less as a circular wavefront
as Dijkstra's algorithm uses a heuristic identically equal to 0.

Let the node at which we are starting be called the initial node. Let the distance of
node Y be the distance from the initial node to Y. Dijkstra's algorithm will assign some
initial distance values and will try to improve them step by step.
1. Mark all nodes unvisited. Create a set of all the unvisited nodes called the unvisited
set.


2. Assign to every node a tentative distance value: set it to zero for our initial node and
to infinity for all other nodes. Set the initial node as current.[14]
3. For the current node, consider all of its unvisited neighbours and calculate their
tentative distances through the current node. Compare the newly calculated
tentative distance to the current assigned value and assign the smaller one. For ex-
ample, if the current node A is marked with a distance of 6, and the edge connecting
it with a neighbour B has length 2, then the distance to B through A will be 6 + 2
= 8. If B was previously marked with a distance greater than 8 then change it to 8.
Otherwise, the current value will be kept.
4. When we are done considering all of the unvisited neighbours of the current node,
mark the current node as visited and remove it from the unvisited set. A visited node
will never be checked again.
5. If the destination node has been marked visited (when planning a route between two
specific nodes) or if the smallest tentative distance among the nodes in the unvisited
set is infinity (when planning a complete traversal; occurs when there is no connection
between the initial node and remaining unvisited nodes), then stop. The algorithm
has finished.
6. Otherwise, select the unvisited node that is marked with the smallest tentative dis-
tance, set it as the new ”current node”, and go back to step 3.
When planning a route, it is actually not necessary to wait until the destination node
is ”visited” as above: the algorithm can stop once the destination node has the smallest
tentative distance among all ”unvisited” nodes (and thus could be selected as the next
”current”).
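The numbered steps above can be sketched in Python. The dict-of-dicts graph representation and the function name are assumptions of this illustration, not part of the original description; a linear scan stands in for step 6's "select the smallest" to mirror the prose exactly.

```python
import math

def dijkstra_steps(graph, source):
    """Dijkstra's algorithm following the numbered steps above.

    `graph` is assumed to map each node to a dict {neighbor: edge_length}
    with non-negative lengths. Returns a dict of tentative distances;
    unreachable nodes keep infinity.
    """
    # Steps 1-2: all nodes unvisited, distances infinity except the source.
    unvisited = set(graph)
    dist = {v: math.inf for v in graph}
    dist[source] = 0

    while unvisited:
        # Step 6: pick the unvisited node with the smallest tentative distance.
        current = min(unvisited, key=dist.get)
        if dist[current] == math.inf:
            break  # Step 5: the remaining unvisited nodes are unreachable.
        # Step 3: relax every unvisited neighbour through the current node.
        for neighbor, length in graph[current].items():
            if neighbor in unvisited:
                dist[neighbor] = min(dist[neighbor], dist[current] + length)
        # Step 4: mark current as visited; it is never checked again.
        unvisited.remove(current)
    return dist
```

For the three-city example used in step 3 (A–B of length 2, B–C of length 3, A–C of length 6), the call `dijkstra_steps(graph, 'A')` finds the route A–B–C of total length 5 rather than the direct A–C edge.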

133.3 Description

Note: For ease of understanding, this discussion uses the terms intersection, road and
map – however, in formal terminology these terms are vertex, edge and graph, respectively.

Suppose you would like to find the shortest path between two intersections37 on a city
map: a starting point and a destination. Dijkstra's algorithm initially marks the distance
(from the starting point) to every other intersection on the map with infinity. This is done
not to imply that there is an infinite distance, but to note that those intersections have not
been visited yet. Some variants of this method leave the intersections' distances unlabeled.
Now select the current intersection at each iteration. For the first iteration, the current
intersection will be the starting point, and the distance to it (the intersection's label) will
be zero. For subsequent iterations (after the first), the current intersection will be a closest
unvisited intersection to the starting point (this will be easy to find).
From the current intersection, update the distance to every unvisited intersection that is
directly connected to it. This is done by determining the sum of the distance between an
unvisited intersection and the value of the current intersection and then relabeling38 the
unvisited intersection with this value (the sum) if it is less than the unvisited intersection's
current value. In effect, the intersection is relabeled if the path to it through the current

37 https://en.wikipedia.org/wiki/Intersection_(road)
38 https://en.wikipedia.org/wiki/Graph_labeling


intersection is shorter than the previously known paths. To facilitate shortest path identifi-
cation, in pencil, mark the road with an arrow pointing to the relabeled intersection if you
label/relabel it, and erase all others pointing to it. After you have updated the distances to
each neighboring intersection39 , mark the current intersection as visited and select an unvis-
ited intersection with minimal distance (from the starting point) – or the lowest label – as
the current intersection. Intersections marked as visited are labeled with the shortest path
from the starting point to them and will not be revisited or returned to.
Continue this process of updating the neighboring intersections with the shortest distances,
marking the current intersection as visited, and moving onto a closest unvisited intersection
until you have marked the destination as visited. Once you have marked the destination as
visited (as is the case with any visited intersection), you have determined the shortest path
to it from the starting point and can trace your way back following the arrows in reverse.
In the algorithm's implementations, this is usually done (after the algorithm has reached
the destination node) by following the nodes' parents from the destination node up to the
starting node; that's why we also keep track of each node's parent.
This algorithm makes no attempt at direct ”exploration” towards the destination as one
might expect. Rather, the sole consideration in determining the next ”current” intersection
is its distance from the starting point. This algorithm therefore expands outward from the
starting point, iteratively considering every node that is closer in terms of shortest path
distance until it reaches the destination. When understood in this way, it is clear how
the algorithm necessarily finds the shortest path. However, it may also reveal one of the
algorithm's weaknesses: its relative slowness in some topologies.

133.4 Pseudocode

In the following algorithm, the code u ← vertex in Q with min dist[u], searches for the
vertex u in the vertex set Q that has the least dist[u] value. length(u, v) returns the length
of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable
alt on line 18 is the length of the path from the root node to the neighbor node v if it were
to go through u. If this path is shorter than the current shortest path recorded for v, that
current path is replaced with this alt path. The prev array is populated with a pointer to
the ”next-hop” node on the source graph to get the shortest route to the source.

39 https://en.wikipedia.org/wiki/Neighbourhood_(graph_theory)


Figure 327 A demo of Dijkstra's algorithm based on Euclidean distance. Red lines are
the shortest path covering, i.e., connecting u and prev[u]. Blue lines indicate where
relaxing happens, i.e., connecting v with a node u in Q, which gives a shorter path from
the source to v.

1 function Dijkstra(Graph, source):


2
3 create vertex set Q
4
5 for each vertex v in Graph:
6 dist[v] ← INFINITY
7 prev[v] ← UNDEFINED
8 add v to Q
9
10 dist[source] ← 0
11
12 while Q is not empty:
13 u ← vertex in Q with min dist[u]
14
15 remove u from Q


16
17 for each neighbor v of u: // only v that are still in Q
18 alt ← dist[u] + length(u, v)
19 if alt < dist[v]:
20 dist[v] ← alt
21 prev[v] ← u
22
23 return dist[], prev[]

If we are only interested in a shortest path between vertices source and target, we can
terminate the search after line 15 if u = target. Now we can read the shortest path from
source to target by reverse iteration:
1 S ← empty sequence
2 u ← target
3 if prev[u] is defined or u = source: // Do something only if the vertex is reachable
4 while u is defined: // Construct the shortest path with a stack S
5 insert u at the beginning of S // Push the vertex onto the stack
6 u ← prev[u] // Traverse from target to source

Now sequence S is the list of vertices constituting one of the shortest paths from source to
target, or the empty sequence if no path exists.
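The reverse iteration above can be sketched in Python; the predecessor map `prev` (with None standing in for UNDEFINED) is an assumed representation for this illustration.

```python
def reconstruct_path(prev, source, target):
    """Rebuild one shortest path from the predecessor map `prev`
    produced by Dijkstra's algorithm (None marks UNDEFINED here).

    Returns the list of vertices from source to target, or [] if the
    target is unreachable.
    """
    if prev.get(target) is None and target != source:
        return []                 # Do something only if the vertex is reachable
    path = []
    u = target
    while u is not None:          # Construct the path as a stack:
        path.append(u)            # push each vertex,
        u = prev.get(u)           # traversing from target back to source.
    path.reverse()                # Reverse to get source -> target order.
    return path
```

For example, with `prev = {'A': None, 'B': 'A', 'C': 'B'}`, the call `reconstruct_path(prev, 'A', 'C')` yields the sequence A, B, C.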
A more general problem would be to find all the shortest paths between source and
target (there might be several different ones of the same length). Then instead of storing
only a single node in each entry of prev[] we would store all nodes satisfying the relaxation
condition. For example, if both r and source connect to target and both of them lie on
different shortest paths through target (because the edge cost is the same in both cases),
then we would add both r and source to prev[target]. When the algorithm completes, the prev[]
data structure will actually describe a graph that is a subset of the original graph with
some edges removed. Its key property will be that if the algorithm was run with some
starting node, then every path from that node to any other node in the new graph will be
the shortest path between those nodes in the original graph, and all paths of that length
from the original graph will be present in the new graph. Then to actually find all these
shortest paths between two given nodes we would use a path finding algorithm on the new
graph, such as depth-first search40 .
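This multi-predecessor idea can be sketched as follows; the adjacency-map input and all names are hypothetical choices of this illustration. Each prev[v] stores every predecessor that achieves the minimum distance, and a depth-first walk over prev[] then enumerates all shortest paths.

```python
import heapq
import math

def all_shortest_paths(graph, source, target):
    """Enumerate every shortest source->target path.

    Sketch assumption: `graph` maps nodes to {neighbor: length} with
    non-negative lengths. prev[v] keeps *all* predecessors that satisfy
    the relaxation condition with equality, so ties are preserved.
    """
    dist = {v: math.inf for v in graph}
    prev = {v: [] for v in graph}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue                    # stale queue entry
        for v, length in graph[u].items():
            alt = d + length
            if alt < dist[v]:
                dist[v] = alt
                prev[v] = [u]           # strictly better: replace
                heapq.heappush(queue, (alt, v))
            elif alt == dist[v] and u not in prev[v]:
                prev[v].append(u)       # equally good: keep both

    paths = []
    def walk(v, tail):                  # depth-first search over prev[]
        if v == source:
            paths.append([source] + tail)
            return
        for p in prev[v]:
            walk(p, [v] + tail)
    if dist[target] < math.inf:
        walk(target, [])
    return paths
```

On a diamond-shaped graph where A reaches D through B or C at equal cost, both paths are reported.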

133.4.1 Using a priority queue

A min-priority queue is an abstract data type that provides three basic operations:
add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using
such a data structure can lead to faster computing times than using a basic queue. Notably,
Fibonacci heap41 (Fredman & Tarjan 198442 ) or Brodal queue43 offer optimal implementations
for those three operations. As the algorithm is slightly different, we mention it here, in
pseudocode as well:

40 https://en.wikipedia.org/wiki/Depth-first_search
41 https://en.wikipedia.org/wiki/Fibonacci_heap
42 #CITEREFFredmanTarjan1984
43 https://en.wikipedia.org/wiki/Brodal_queue
1 function Dijkstra(Graph, source):
2 dist[source] ← 0 // Initialization
3
4 create vertex priority queue Q
5
6 for each vertex v in Graph:
7 if v ≠ source
8 dist[v] ← INFINITY // Unknown distance from source to v
9 prev[v] ← UNDEFINED // Predecessor of v
10
11 Q.add_with_priority(v, dist[v])
12
13
14 while Q is not empty: // The main loop
15 u ← Q.extract_min() // Remove and return best vertex
16 for each neighbor v of u: // only v that are still in Q
17 alt ← dist[u] + length(u, v)
18 if alt < dist[v]
19 dist[v] ← alt
20 prev[v] ← u
21 Q.decrease_priority(v, alt)
22
23 return dist, prev

Instead of filling the priority queue with all nodes in the initialization phase, it is
also possible to initialize it to contain only source; then, inside the if alt < dist[v]
block, the node must be inserted if not already in the queue (instead of performing a
decrease_priority operation).[6]:198
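Python's standard heapq module provides no decrease_priority operation, so a common sketch of this queue-based variant uses exactly the insertion strategy just described, skipping stale entries on extraction ("lazy deletion"). The dict-of-dicts graph format is an assumption of this illustration.

```python
import heapq
import math

def dijkstra_pq(graph, source):
    """Priority-queue Dijkstra, initializing the queue with only the source.

    Sketch assumptions: `graph` maps each node to {neighbor: length} with
    non-negative lengths. Since heapq lacks decrease_priority, a node may
    appear in the queue several times; outdated entries are skipped.
    """
    dist = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    queue = [(0, source)]                        # (priority, node)
    while queue:
        d, u = heapq.heappop(queue)              # extract_min
        if d > dist[u]:
            continue                             # stale entry: skip it
        for v, length in graph[u].items():
            alt = d + length
            if alt < dist[v]:                    # relaxation step
                dist[v] = alt
                prev[v] = u
                heapq.heappush(queue, (alt, v))  # insert instead of decrease
    return dist, prev
```

The stale-entry check replaces the "only v that are still in Q" test of the pseudocode: once a node's final distance is known, any later, larger entry for it is discarded on pop.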
Other data structures can be used to achieve even faster computing times in practice.[15]

133.5 Proof of correctness

The proof of Dijkstra's algorithm is constructed by induction on the number of visited nodes.

Invariant hypothesis: For each visited node v, dist[v] is considered the shortest distance
from source to v; and for each unvisited node u, dist[u] is assumed to be the shortest distance
when traveling via visited nodes only, from source to u. This assumption is only considered
if a path exists; otherwise the distance is set to infinity. (Note: we do not assume dist[u]
is the actual shortest distance for unvisited nodes.)
The base case is when there is just one visited node, namely the initial node source, in
which case the hypothesis is trivial44 .
Otherwise, assume the hypothesis for n−1 visited nodes. In that case, we choose an edge
vu where u has the least dist[u] of any unvisited node and the edge vu is such that dist[u]
= dist[v] + length[v,u]. dist[u] is considered to be the shortest distance from source to u
because if there were a shorter path, and if w was the first unvisited node on that path then
by the original hypothesis dist[w] > dist[u] which creates a contradiction. Similarly if there

44 https://en.wikipedia.org/wiki/Triviality_(mathematics)


were a shorter path to u without using unvisited nodes, and if the last but one node on that
path were w, then we would have had dist[u] = dist[w] + length[w,u], also a contradiction.
After processing u it will still be true that for each unvisited node w, dist[w] will be the
shortest distance from source to w using visited nodes only, because if there were a shorter
path that doesn't go by u we would have found it previously, and if there were a shorter
path using u we would have updated it when processing u.

133.6 Running time

Bounds of the running time of Dijkstra's algorithm on a graph with edges E and vertices
V can be expressed as a function of the number of edges, denoted |E|, and the number
of vertices, denoted |V |, using big-O notation45 . The complexity bound depends mainly
on the data structure used to represent the set Q. In the following, upper bounds can be
simplified because |E| is O(|V |2 ) for any graph, but that simplification disregards the fact
that in some problems, other upper bounds on |E| may hold.
For any data structure for the vertex set Q, the running time is in
O(|E| · Tdk + |V | · Tem ),
where Tdk and Tem are the complexities of the decrease-key and extract-minimum operations
in Q, respectively. The simplest version of Dijkstra's algorithm stores the vertex set Q as
an ordinary linked list or array, and extract-minimum is simply a linear search through all
vertices in Q. In this case, the running time is O(|E| + |V |2 ) = O(|V |2 ).
If the graph is stored as an adjacency list, the running time for a dense graph (i.e., where
|E| ∈ O(|V |2 )) is
Θ(|V |2 log |V |).
For sparse graphs46 , that is, graphs with far fewer than |V |2 edges, Dijkstra's algorithm
can be implemented more efficiently by storing the graph in the form of adjacency lists47
and using a self-balancing binary search tree48 , binary heap49 , pairing heap50 , or Fibonacci
heap51 as a priority queue52 to implement extracting minimum efficiently. To perform
decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data
structure that maps each vertex to its position in the heap, and to keep this structure up
to date as the priority queue Q changes. With a self-balancing binary search tree or binary
heap, the algorithm requires
Θ((|E| + |V |) log |V |)

45 https://en.wikipedia.org/wiki/Big-O_notation
46 https://en.wikipedia.org/wiki/Sparse_graph
47 https://en.wikipedia.org/wiki/Adjacency_list
48 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
49 https://en.wikipedia.org/wiki/Binary_heap
50 https://en.wikipedia.org/wiki/Pairing_heap
51 https://en.wikipedia.org/wiki/Fibonacci_heap
52 https://en.wikipedia.org/wiki/Priority_queue


time in the worst case (where log denotes the binary logarithm log2 ); for connected graphs
this time bound can be simplified to Θ(|E| log |V |). The Fibonacci heap53 improves this to
O(|E| + |V | log |V |).
When using binary heaps, the average case54 time complexity is lower than the worst-case:
assuming edge costs are drawn independently from a common probability distribution55 ,
the expected number of decrease-key operations is bounded by O(|V | log(|E|/|V |)), giving
a total running time of[6]:199–200
O(|E| + |V | log(|E|/|V |) log |V |).

133.6.1 Practical optimizations and infinite graphs

In common presentations of Dijkstra's algorithm, initially all nodes are entered into the
priority queue. This is, however, not necessary: the algorithm can start with a priority
queue that contains only one item, and insert new items as they are discovered (instead of
doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise
insert it).[6]:198 This variant has the same worst-case bounds as the common variant, but
maintains a smaller priority queue in practice, speeding up the queue operations.[8]
Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to
find the shortest path from a single source to the closest of a set of target nodes on infinite
graphs or those too large to represent in memory. The resulting algorithm is called uniform-
cost search (UCS) in the artificial intelligence literature[8][16][17] and can be expressed in
pseudocode as
procedure uniform_cost_search(Graph, start, goal) is
    node ← start
    frontier ← priority queue ordered by path cost, containing node only
    explored ← empty set
    do
        if frontier is empty then
            return failure
        node ← frontier.pop()
        if node is goal then
            return solution
        explored.add(node)
        for each of node's neighbors n do
            if n is not in explored and not in frontier then
                frontier.add(n)
            else if n is in frontier with higher cost then
                replace existing frontier node with n

The complexity of this algorithm can be expressed in an alternative way for very large
graphs: when C* is the length of the shortest path from the start node to any node satisfying
the ”goal” predicate, each edge has cost at least ε, and the number of neighbors per node
is bounded by b, then the algorithm's worst-case time and space complexity are both in
O(b^(1+⌊C*/ε⌋)).[16]
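A minimal runnable sketch of uniform-cost search is shown below, under the assumption that the graph is a dict mapping nodes to {neighbor: cost} and with the pseudocode's implicit cost bookkeeping made explicit. Lazy deletion again stands in for in-place frontier updates.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search: Dijkstra seeded with only the start node.

    Returns the cost of a cheapest start->goal path, or None on failure.
    Assumes `graph` maps nodes to {neighbor: cost}, costs non-negative.
    """
    frontier = [(0, start)]            # priority queue of (path cost, node)
    explored = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost                # goal test on expansion keeps this optimal
        if node in explored:
            continue                   # duplicate entry already expanded
        explored.add(node)
        for n, step in graph[node].items():
            if n not in explored:
                heapq.heappush(frontier, (cost + step, n))
    return None                        # frontier exhausted: failure
```

Testing the goal on expansion, not on generation, is what guarantees the returned cost is optimal: a node may enter the frontier several times, but it is only expanded at its cheapest cost.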

53 https://en.wikipedia.org/wiki/Fibonacci_heap
54 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
55 https://en.wikipedia.org/wiki/Probability_distribution


Further optimizations of Dijkstra's algorithm for the single-target case include bidirec-
tional56 variants, goal-directed variants such as the A* algorithm57 (see § Related problems
and algorithms58 ), graph pruning to determine which nodes are likely to form the middle
segment of shortest paths (reach-based routing), and hierarchical decompositions of the in-
put graph that reduce s–t routing to connecting s and t to their respective ”transit nodes59 ”
followed by shortest-path computation between these transit nodes using a ”highway”.[18]
Combinations of such techniques may be needed for optimal practical performance on spe-
cific problems.[19]

133.6.2 Specialized variants

When arc weights are small integers (bounded by a parameter C), a monotone priority
queue60 can be used to speed up Dijkstra's algorithm. The first algorithm of this type was
Dial's algorithm, which used a bucket queue61 to obtain a running time O(|E| + diam(G))
that depends on the weighted diameter62 of a graph with integer edge weights (Dial
196963 ). The use of a Van Emde Boas tree64 as the priority queue brings the complexity to
O(|E| log log C) (Ahuja et al. 199065 ). Another interesting variant based on a combination
of a new radix heap66 and the well-known Fibonacci heap runs in time O(|E| + |V |√log C)
(Ahuja et al. 199067 ). Finally, the best algorithms in this special case are as follows. The
algorithm given by (Thorup 200068 ) runs in O(|E| log log |V |) time and the algorithm given
by (Raman 199769 ) runs in O(|E| + |V | min{(log |V |)1/3+ε , (log C)1/4+ε }) time.
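The bucket-queue idea behind Dial's algorithm can be sketched as follows; the graph format, function name, and bucket sizing are assumptions of this illustration. Distances are bucketed by value, so extract-min becomes a scan over bucket indices rather than a heap operation.

```python
import math

def dial_dijkstra(graph, source, max_weight):
    """Dial's variant: Dijkstra with a bucket queue for small integer weights.

    Sketch assumptions: `graph` maps nodes to {neighbor: weight} with
    integer weights in [0, max_weight]. Every finite distance is at most
    (n - 1) * max_weight, so that many buckets suffice.
    """
    n = len(graph)
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    buckets = [[] for _ in range(n * max_weight + 1)]
    buckets[0].append(source)
    for d in range(len(buckets)):       # scan distance values in order
        for u in buckets[d]:
            if d > dist[u]:
                continue                # stale entry: u was settled earlier
            for v, w in graph[u].items():
                if d + w < dist[v]:
                    dist[v] = d + w
                    buckets[d + w].append(v)
    return dist
```

Because the outer loop walks distance values in increasing order, each node is settled the first time its bucket is reached, exactly as a monotone priority queue requires.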
Also, for directed acyclic graphs70 , it is possible to find shortest paths from a given starting
vertex in linear O(|E| + |V |) time, by processing the vertices in a topological order71 , and
calculating the path length for each vertex to be the minimum length obtained via any of
its incoming edges.[20][21]
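The linear-time DAG method can be sketched as below, with the topological order computed here via Kahn's algorithm; the adjacency-map input is an assumption of this illustration.

```python
import math
from collections import deque

def dag_shortest_paths(graph, source):
    """Single-source shortest paths on a DAG in O(|E| + |V|) time.

    Processes vertices in a topological order and relaxes each outgoing
    edge exactly once. Assumes `graph` maps every vertex to a dict
    {successor: weight}; weights may even be negative, since no edge is
    relaxed before its tail's distance is final.
    """
    indegree = {v: 0 for v in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    order = deque(v for v in graph if indegree[v] == 0)  # Kahn's algorithm
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    while order:
        u = order.popleft()
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:       # relax via incoming edge
                dist[v] = dist[u] + w
            indegree[v] -= 1
            if indegree[v] == 0:
                order.append(v)             # all predecessors processed
    return dist
```

No priority queue is needed: topological order already guarantees that every vertex is processed after all of its predecessors.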
In the special case of integer weights and undirected connected graphs, Dijkstra's algorithm
can be avoided entirely in favor of a linear O(|E|)-time algorithm, given by (Thorup
199972 ).

56 https://en.wikipedia.org/wiki/Bidirectional_search
57 https://en.wikipedia.org/wiki/A*_algorithm
58 #Related_problems_and_algorithms
59 https://en.wikipedia.org/wiki/Transit_Node_Routing
60 https://en.wikipedia.org/wiki/Monotone_priority_queue
61 https://en.wikipedia.org/wiki/Bucket_queue
62 https://en.wikipedia.org/wiki/Distance_(graph_theory)
63 #CITEREFDial1969
64 https://en.wikipedia.org/wiki/Van_Emde_Boas_tree
65 #CITEREFAhujaMehlhornOrlinTarjan1990
66 https://en.wikipedia.org/wiki/Radix_heap
67 #CITEREFAhujaMehlhornOrlinTarjan1990
68 #CITEREFThorup2000
69 #CITEREFRaman1997
70 https://en.wikipedia.org/wiki/Directed_acyclic_graph
71 https://en.wikipedia.org/wiki/Topological_sorting
72 #CITEREFThorup1999


133.7 Related problems and algorithms

The functionality of Dijkstra's original algorithm can be extended with a variety of mod-
ifications. For example, sometimes it is desirable to present solutions which are less than
mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal
solution is first calculated. A single edge appearing in the optimal solution is removed from
the graph, and the optimum solution to this new graph is calculated. Each edge of the
original solution is suppressed in turn and a new shortest-path calculated. The secondary
solutions are then ranked and presented after the first optimal solution.
Dijkstra's algorithm is usually the working principle behind link-state routing protocols73 ,
OSPF74 and IS-IS75 being the most common ones.
Unlike Dijkstra's algorithm, the Bellman–Ford algorithm76 can be used on graphs with
negative edge weights, as long as the graph contains no negative cycle77 reachable from the
source vertex s. The presence of such cycles means there is no shortest path, since the total
weight becomes lower each time the cycle is traversed. (This statement assumes that a ”path”
is allowed to repeat vertices. In graph theory78 that is normally not allowed. In theoretical
computer science79 it often is allowed.) It is possible to adapt Dijkstra's algorithm to handle
negative-weight edges by combining it with the Bellman–Ford algorithm (to remove negative
edges and detect negative cycles); such a combined algorithm is called Johnson's algorithm80 .
The A* algorithm81 is a generalization of Dijkstra's algorithm that cuts down on the size
of the subgraph that must be explored, if additional information is available that provides
a lower bound on the ”distance” to the target. This approach can be viewed from the per-
spective of linear programming82 : there is a natural linear program for computing shortest
paths83 , and solutions to its dual linear program84 are feasible if and only if they form a
consistent heuristic85 (speaking roughly, since the sign conventions differ from place to place
in the literature). This feasible dual / consistent heuristic defines a non-negative reduced
cost86 and A* is essentially running Dijkstra's algorithm with these reduced costs. If the
dual satisfies the weaker condition of admissibility87 , then A* is instead more akin to the
Bellman–Ford algorithm.

73 https://en.wikipedia.org/wiki/Link-state_routing_protocol
74 https://en.wikipedia.org/wiki/OSPF
75 https://en.wikipedia.org/wiki/IS-IS
76 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
77 https://en.wikipedia.org/wiki/Negative_cycle
78 https://en.wikipedia.org/wiki/Graph_theory
79 https://en.wikipedia.org/wiki/Theoretical_computer_science
80 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
81 https://en.wikipedia.org/wiki/A-star_algorithm
82 https://en.wikipedia.org/wiki/Linear_programming
83 https://en.wikipedia.org/wiki/Shortest_path_problem#Linear_programming_formulation
84 https://en.wikipedia.org/wiki/Dual_linear_program
85 https://en.wikipedia.org/wiki/Consistent_heuristic
86 https://en.wikipedia.org/wiki/Reduced_cost
87 https://en.wikipedia.org/wiki/Admissible_heuristic


The process that underlies Dijkstra's algorithm is similar to the greedy88 process used in
Prim's algorithm89 . Prim's purpose is to find a minimum spanning tree90 that connects all
nodes in the graph; Dijkstra is concerned with only two nodes. Prim's does not evaluate
the total weight of the path from the starting node, only the individual edges.
Breadth-first search91 can be viewed as a special-case of Dijkstra's algorithm on unweighted
graphs, where the priority queue degenerates into a FIFO queue.
The fast marching method92 can be viewed as a continuous version of Dijkstra's algorithm
which computes the geodesic distance on a triangle mesh.

133.7.1 Dynamic programming perspective

From a dynamic programming93 point of view, Dijkstra's algorithm is a successive
approximation scheme that solves the dynamic programming functional equation for the
shortest path problem by the Reaching method.[22][23][24]
In fact, Dijkstra's explanation of the logic behind the algorithm,[25] namely
Problem 2. Find the path of minimum total length between two given nodes P and Q.
We use the fact that, if R is a node on the minimal path from P to Q, knowledge of the
latter implies the knowledge of the minimal path from P to R.
is a paraphrasing of Bellman's94 famous Principle of Optimality95 in the context of the
shortest path problem.

133.8 See also


• A* search algorithm96
• Bellman–Ford algorithm97
• Euclidean shortest path98
• Flood fill99
• Floyd–Warshall algorithm100
• Johnson's algorithm101
• Longest path problem102

88 https://en.wikipedia.org/wiki/Greedy_algorithm
89 https://en.wikipedia.org/wiki/Prim%27s_algorithm
90 https://en.wikipedia.org/wiki/Minimum_spanning_tree
91 https://en.wikipedia.org/wiki/Breadth-first_search
92 https://en.wikipedia.org/wiki/Fast_marching_method
93 https://en.wikipedia.org/wiki/Dynamic_programming
94 https://en.wikipedia.org/wiki/Richard_Bellman
95 https://en.wikipedia.org/wiki/Principle_of_Optimality
96 https://en.wikipedia.org/wiki/A*_search_algorithm
97 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm
98 https://en.wikipedia.org/wiki/Euclidean_shortest_path
99 https://en.wikipedia.org/wiki/Flood_fill
100 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
101 https://en.wikipedia.org/wiki/Johnson%27s_algorithm
102 https://en.wikipedia.org/wiki/Longest_path_problem


• Parallel all-pairs shortest path algorithm103

133.9 Notes
1. Controversial; see Moshe Sniedovich (2006). ”Dijkstra's algorithm revisited:
the dynamic programming connexion”104 . Control and Cybernetics. 35: 599–620, and
the section below.
2. ”OSPF I SPF”105 . Cisco.
3. R, H. ”E W D”106 . A.M. Turing Award. Asso-
ciation for Computing Machinery. Retrieved 16 October 2017. At the Mathematical
Centre a major project was building the ARMAC computer. For its official inaugura-
tion in 1956, Dijkstra devised a program to solve a problem interesting to a nontech-
nical audience: Given a network of roads connecting cities, what is the shortest route
between two designated cities?
4. F, P (A 2010). ”A I  E W. D”.
Communications of the ACM. 53 (8): 41–47. doi107 :10.1145/1787234.1787249108 .
5. D, E. W.109 (1959). ”A      -
  ”110 (PDF). Numerische Mathematik. 1: 269–271.
111 112
doi :10.1007/BF01386390 .CS1 maint: ref=harv (link ) 113

6. M, K114 ; S, P115 (2008). ”C 10. S


P”116 (PDF). Algorithms and Data Structures: The Basic Toolbox. Springer.
doi117 :10.1007/978-3-540-77978-0118 . ISBN119 978-3-540-77977-3120 .
7. SŚ, I; J, A; WŹ-SŚ, BŻ
(2019). ”G D   ”. Journal of Optical
Communications and Networking. 11 (11): 568–577. arXiv121 :1810.04481122 .
doi123 :10.1364/JOCN.11.000568124 .

103 https://en.wikipedia.org/wiki/Parallel_all-pairs_shortest_path_algorithm
104 https://www.infona.pl/resource/bwmeta1.element.baztech-article-BAT5-0013-0005/tab/summary
105 https://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/ospfispf.html
106 http://amturing.acm.org/award_winners/dijkstra_1053701.cfm
107 https://en.wikipedia.org/wiki/Doi_(identifier)
108 https://doi.org/10.1145%2F1787234.1787249
109 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
110 http://www-m3.ma.tum.de/twiki/pub/MN0506/WebHome/dijkstra.pdf
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1007%2FBF01386390
113 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
114 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
115 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
116 http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/ShortestPaths.pdf
117 https://en.wikipedia.org/wiki/Doi_(identifier)
118 https://doi.org/10.1007%2F978-3-540-77978-0
119 https://en.wikipedia.org/wiki/ISBN_(identifier)
120 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-77977-3
121 https://en.wikipedia.org/wiki/ArXiv_(identifier)
122 http://arxiv.org/abs/1810.04481
123 https://en.wikipedia.org/wiki/Doi_(identifier)
124 https://doi.org/10.1364%2FJOCN.11.000568

1403
Dijkstra's algorithm

8. F, A (2011). Position Paper: Dijkstra's Algorithm versus Uniform Cost
Search or a Case Against Dijkstra's Algorithm125 . P. 4 I' S. 
C S. In a route-finding problem, Felner finds that the queue
can be a factor 500–600 smaller, taking some 40% of the running time.
9. ”ARMAC”126 . Unsung Heroes in Dutch Computing History. 2007. Archived from
the original127 on 13 November 2013.
10. D, E W., Reflections on ”A note on two problems in connexion with
graphs128 (PDF)
11. T, R E129 (1983), Data Structures and Network Algorithms,
CBMS_NSF Regional Conference Series in Applied Mathematics, 44, Society for
Industrial and Applied Mathematics, p. 75, The third classical minimum spanning
tree algorithm was discovered by Jarník and rediscovered by Prim and Dikstra; it is
commonly known as Prim's algorithm.
12. P, R.C. (1957). ”S     -
” 130 (PDF). Bell System Technical Journal. 36 (6): 1389–1401.
Bibcode131 :1957BSTJ...36.1389P132 . doi133 :10.1002/j.1538-7305.1957.tb01515.x134 .
Archived from the original135 (PDF) on 18 July 2017. Retrieved 18 July 2017.
13. V. Jarník: O jistém problému minimálním [About a certain minimal problem], Práce
Moravské Přírodovědecké Společnosti, 6, 1930, pp. 57–63. (in Czech)
14. G, S; F, M (2013). ”D' A”. Encyclopedia of
Operations Research and Management Science. Springer. 1. doi136 :10.1007/978-1-
4419-1153-7137 . ISBN138 978-1-4419-1137-7139 − via Springer Link.
15. C, M.; C, R. A.; R, V.; R, D. L.; T, L.
(2007). Priority Queues and Dijkstra's Algorithm – UTCS Technical Report TR-07-
54 – 12 October 2007140 (PDF). A, T: T U  T 
A, D  C S.

125 http://www.aaai.org/ocs/index.php/SOCS/SOCS11/paper/view/4017/4357
126 https://web.archive.org/web/20131113021126/http://www-set.win.tue.nl/UnsungHeroes/machines/armac.html
127 http://www-set.win.tue.nl/UnsungHeroes/machines/armac.html
128 https://www.cs.utexas.edu/users/EWD/ewd08xx/EWD841a.PDF
129 https://en.wikipedia.org/wiki/Robert_Endre_Tarjan
130 https://web.archive.org/web/20170718230207/http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/Prim1957.pdf
131 https://en.wikipedia.org/wiki/Bibcode_(identifier)
132 https://ui.adsabs.harvard.edu/abs/1957BSTJ...36.1389P
133 https://en.wikipedia.org/wiki/Doi_(identifier)
134 https://doi.org/10.1002%2Fj.1538-7305.1957.tb01515.x
135 http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/Prim1957.pdf
136 https://en.wikipedia.org/wiki/Doi_(identifier)
137 https://doi.org/10.1007%2F978-1-4419-1153-7
138 https://en.wikipedia.org/wiki/ISBN_(identifier)
139 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4419-1137-7
140 http://www.cs.sunysb.edu/~rezaul/papers/TR-07-54.pdf


16. R, S141 ; N, P142 (2009) [1995]. Artificial Intelligence: A


Modern Approach143 (3 .). P H. . 75, 81. ISBN144 978-0-13-
604259-4145 .
17. Sometimes also least-cost-first search:
N, D S. (1983). ”E  ”146 (PDF). Computer. IEEE.
16 (2): 63–85. doi147 :10.1109/mc.1983.1654302148 .
18. W, D; W, T (2007). Speed-up techniques for
shortest-path computations. STACS. pp. 23–36.
19. B, R; D, D; S, P; S, D-
; S, D; W, D (2010). ”C 
 - -   D' ”149 . J.
Experimental Algorithmics. 15: 2.1. doi150 :10.1145/1671970.1671976151 .
20. ”B G L: D A G S P –
1.44.0”152 . www.boost.org.
21. Cormen et al. 2001153 , p. 655
22. S, M. (2006). ”D'  :   -
 ”154 (PDF). Journal of Control and Cybernetics. 35 (3): 599–
620. Online version of the paper with interactive computational modules.155
23. D, E.V. (2003). Dynamic Programming: Models and Applications. Mine-
ola, NY: Dover Publications156 . ISBN157 978-0-486-42810-9158 .
24. S, M. (2010). Dynamic Programming: Foundations and Principles.
Francis & Taylor159 . ISBN160 978-0-8247-4099-3161 .
25. Dijkstra 1959162 , p. 270

141 https://en.wikipedia.org/wiki/Stuart_J._Russell
142 https://en.wikipedia.org/wiki/Peter_Norvig
143 https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
144 https://en.wikipedia.org/wiki/ISBN_(identifier)
145 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-604259-4
146 https://www.cs.umd.edu/~nau/papers/nau1983expert.pdf
147 https://en.wikipedia.org/wiki/Doi_(identifier)
148 https://doi.org/10.1109%2Fmc.1983.1654302
149 https://publikationen.bibliothek.kit.edu/1000014952
150 https://en.wikipedia.org/wiki/Doi_(identifier)
151 https://doi.org/10.1145%2F1671970.1671976
152 https://www.boost.org/doc/libs/1_44_0/libs/graph/doc/dag_shortest_paths.html
153 #CITEREFCormenLeisersonRivestStein2001
154 http://matwbn.icm.edu.pl/ksiazki/cc/cc35/cc3536.pdf
155 http://www.ifors.ms.unimelb.edu.au/tutorial/dijkstra_new/index.html
156 https://en.wikipedia.org/wiki/Dover_Publications
157 https://en.wikipedia.org/wiki/ISBN_(identifier)
158 https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-42810-9
159 https://en.wikipedia.org/w/index.php?title=Francis_%26_Taylor&action=edit&redlink=1
160 https://en.wikipedia.org/wiki/ISBN_(identifier)
161 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8247-4099-3
162 #CITEREFDijkstra1959


133.10 References
• C, T H.163 ; L, C E.164 ; R, R L.165 ; S,
C166 (2001). ”S 24.3: D' ”. Introduction to Algo-
rithms167 (S .). MIT P168  MG–H169 . . 595–601. ISBN170 0-
262-03293-7171 .CS1 maint: ref=harv (link172 )
• D, R B. (1969). ”A 360: S-  
  [H]”. Communications of the ACM173 . 12 (11): 632–633.
doi174 :10.1145/363269.363610175 .CS1 maint: ref=harv (link176 )
• F, M L177 ; T, R E.178 (1984). Fibonacci
heaps and their uses in improved network optimization algorithms. 25th An-
nual Symposium on Foundations of Computer Science. IEEE179 . pp. 338–346.
doi180 :10.1109/SFCS.1984.715934181 .CS1 maint: ref=harv (link182 )
• F, M L183 ; T, R E.184 (1987). ”F-
         -
”. Journal of the Association for Computing Machinery. 34 (3): 596–615.
doi185 :10.1145/28869.28874186 .CS1 maint: ref=harv (link187 )
• Z, F. B; N, C E. (F 1998). ”S P A-
: A E U R R N”188 . Transportation Science189 .
32 (1): 65–73. doi190 :10.1287/trsc.32.1.65191 .
• L, M.; G, R. S.; J, A. A.; L, W. C.; M, J., S. R.;
P, R. M.; S, R. N. (1957). Investigation of Model Techniques – First Annual

163 https://en.wikipedia.org/wiki/Thomas_H._Cormen
164 https://en.wikipedia.org/wiki/Charles_E._Leiserson
165 https://en.wikipedia.org/wiki/Ronald_L._Rivest
166 https://en.wikipedia.org/wiki/Clifford_Stein
167 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
168 https://en.wikipedia.org/wiki/MIT_Press
169 https://en.wikipedia.org/wiki/McGraw%E2%80%93Hill
170 https://en.wikipedia.org/wiki/ISBN_(identifier)
171 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
173 https://en.wikipedia.org/wiki/Communications_of_the_ACM
174 https://en.wikipedia.org/wiki/Doi_(identifier)
175 https://doi.org/10.1145%2F363269.363610
177 https://en.wikipedia.org/wiki/Michael_Fredman
178 https://en.wikipedia.org/wiki/Robert_Tarjan
179 https://en.wikipedia.org/wiki/IEEE
180 https://en.wikipedia.org/wiki/Doi_(identifier)
181 https://doi.org/10.1109%2FSFCS.1984.715934
183 https://en.wikipedia.org/wiki/Michael_Fredman
184 https://en.wikipedia.org/wiki/Robert_Tarjan
185 https://en.wikipedia.org/wiki/Doi_(identifier)
186 https://doi.org/10.1145%2F28869.28874
188 https://semanticscholar.org/paper/c71301816cfe1e0c7ed1a04fddd7740ceb2e8197
189 https://en.wikipedia.org/wiki/Transportation_Science
190 https://en.wikipedia.org/wiki/Doi_(identifier)
191 https://doi.org/10.1287%2Ftrsc.32.1.65


Report – 6 June 1956 – 1 July 1957 – A Study of Model Techniques for Communication
Systems. Cleveland, Ohio: Case Institute of Technology.CS1 maint: ref=harv (link192 )
• K, D.E.193 (1977). ”A G  D' A”. Informa-
tion Processing Letters194 . 6 (1): 1–5. doi195 :10.1016/0020-0190(77)90002-3196 .
• A, R K.; M, K; O, J B.; T, R
E. (A 1990). ”F A   S P P”197
(PDF). Journal of the ACM. 37 (2): 213–223. doi198 :10.1145/77600.77615199 .
200 201
hdl :1721.1/47994 .CS1 maint: ref=harv (link ) 202

• R, R (1997). ”R    -  


”. SIGACT News. 28 (2): 81–87. doi203 :10.1145/261342.261352204 .CS1 maint:
ref=harv (link205 )
• T, M (2000). ”O RAM  Q”. SIAM Journal on Comput-
ing. 30 (1): 86–109. doi206 :10.1137/S0097539795288246207 .CS1 maint: ref=harv (link208 )
• T, M (1999). ”U -    -
     ”209 . Journal of the ACM. 46 (3): 362–394.
doi210 :10.1145/316542.316548211 .CS1 maint: ref=harv (link212 )

133.11 External links

Wikimedia Commons has media related to Dijkstra's algorithm213 .

• Oral history interview with Edsger W. Dijkstra214 , Charles Babbage Institute215 ,
University of Minnesota, Minneapolis.

193 https://en.wikipedia.org/wiki/Donald_Knuth
194 https://en.wikipedia.org/wiki/Information_Processing_Letters
195 https://en.wikipedia.org/wiki/Doi_(identifier)
196 https://doi.org/10.1016%2F0020-0190%2877%2990002-3
197 https://dspace.mit.edu/bitstream/1721.1/47994/1/fasteralgorithms00sloa.pdf
198 https://en.wikipedia.org/wiki/Doi_(identifier)
199 https://doi.org/10.1145%2F77600.77615
200 https://en.wikipedia.org/wiki/Hdl_(identifier)
201 http://hdl.handle.net/1721.1%2F47994
203 https://en.wikipedia.org/wiki/Doi_(identifier)
204 https://doi.org/10.1145%2F261342.261352
206 https://en.wikipedia.org/wiki/Doi_(identifier)
207 https://doi.org/10.1137%2FS0097539795288246
209 http://www.diku.dk/~mthorup/PAPERS/sssp.ps.gz
210 https://en.wikipedia.org/wiki/Doi_(identifier)
211 https://doi.org/10.1145%2F316542.316548
213 https://commons.wikimedia.org/wiki/Category:Dijkstra%27s_algorithm
214 http://purl.umn.edu/107247
215 https://en.wikipedia.org/wiki/Charles_Babbage_Institute


• Implementation of Dijkstra's algorithm using TDD216 , Robert Cecil Martin217 , The Clean
Code Blog
• Graphical explanation of Dijkstra's algorithm step-by-step on an example218 , Gilles
Bertrand219 , A step by step graphical explanation of Dijkstra's algorithm operations


216 http://blog.cleancoder.com/uncle-bob/2016/10/26/DijkstrasAlg.html
217 https://en.wikipedia.org/wiki/Robert_Cecil_Martin
218 http://www.gilles-bertrand.com/2014/03/disjkstra-algorithm-description-shortest-path-pseudo-code-data-structure-example-image.html
219 https://en.wikipedia.org/w/index.php?title=Gilles_Bertrand&action=edit&redlink=1

134 Widest path problem

Figure 328 In this graph, the widest path from Maldon to Feering has bandwidth 29,
and passes through Clacton, Tiptree, Harwich, and Blaxhall.


In graph algorithms1 , the widest path problem is the problem of finding a path2 between
two designated vertices3 in a weighted graph4 , maximizing the weight of the minimum-
weight edge in the path. The widest path problem is also known as the bottleneck short-
est path problem or the maximum capacity path problem. It is possible to adapt
most shortest path5 algorithms to compute widest paths, by modifying them to use the
bottleneck distance instead of path length.[1] However, in many cases even faster algorithms
are possible.
For instance, in a graph that represents connections between routers6 in the Internet7 , where
the weight of an edge represents the bandwidth8 of a connection between two routers, the
widest path problem is the problem of finding an end-to-end path between two Internet
nodes that has the maximum possible bandwidth.[2] The smallest edge weight on this path
is known as the capacity or bandwidth of the path. As well as its applications in network
routing, the widest path problem is also an important component of the Schulze method9
for deciding the winner of a multiway election,[3] and has been applied to digital composit-
ing10 ,[4] metabolic pathway analysis11 ,[5] and the computation of maximum flows12 .[6]
A closely related problem, the minimax path problem, asks for the path that minimizes
the maximum weight of any of its edges. It has applications that include transportation
planning13 .[7] Any algorithm for the widest path problem can be transformed into an algo-
rithm for the minimax path problem, or vice versa, by reversing the sense of all the weight
comparisons performed by the algorithm, or equivalently by replacing every edge weight by
its negation.
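This equivalence can be checked directly on a small example. The following Python sketch (the graph, names, and brute-force enumeration are purely illustrative; real instances would use the faster algorithms described below) computes both path values and verifies that negating every weight swaps the two problems:

```python
from itertools import permutations

# Toy weighted undirected graph; the data and all names here are invented
# purely for illustration.
EDGES = {("a", "b"): 4, ("b", "c"): 3, ("a", "c"): 1, ("c", "d"): 5, ("b", "d"): 2}
NODES = ["a", "b", "c", "d"]

def weight(edges, u, v):
    # Undirected lookup: try both orientations; None means no edge.
    return edges.get((u, v), edges.get((v, u)))

def simple_paths(edges, src, dst):
    # Brute-force enumeration of simple src-dst paths (fine for a toy graph).
    inner = [n for n in NODES if n not in (src, dst)]
    for k in range(len(inner) + 1):
        for mid in permutations(inner, k):
            path = (src,) + mid + (dst,)
            if all(weight(edges, u, v) is not None for u, v in zip(path, path[1:])):
                yield path

def widest(edges, src, dst):
    # Widest path value: maximize the minimum edge weight along the path.
    return max(min(weight(edges, u, v) for u, v in zip(p, p[1:]))
               for p in simple_paths(edges, src, dst))

def minimax(edges, src, dst):
    # Minimax path value: minimize the maximum edge weight along the path.
    return min(max(weight(edges, u, v) for u, v in zip(p, p[1:]))
               for p in simple_paths(edges, src, dst))

# Reversing every weight comparison is the same as negating every weight:
negated = {e: -w for e, w in EDGES.items()}
assert minimax(EDGES, "a", "d") == -widest(negated, "a", "d")
```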

134.1 Undirected graphs

In an undirected graph14 , a widest path may be found as the path between the two vertices
in the maximum spanning tree15 of the graph, and a minimax path may be found as the
path between the two vertices in the minimum spanning tree.[8][9][10]
In any graph, directed or undirected, there is a straightforward algorithm for finding a
widest path once the weight of its minimum-weight edge is known: simply delete all smaller
edges and search for any path among the remaining edges using breadth first search16 or

1 https://en.wikipedia.org/wiki/Graph_algorithm
2 https://en.wikipedia.org/wiki/Path_(graph_theory)
3 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
4 https://en.wikipedia.org/wiki/Weighted_graph
5 https://en.wikipedia.org/wiki/Shortest_path
6 https://en.wikipedia.org/wiki/Router_(computing)
7 https://en.wikipedia.org/wiki/Internet
8 https://en.wikipedia.org/wiki/Bandwidth_(computing)
9 https://en.wikipedia.org/wiki/Schulze_method
10 https://en.wikipedia.org/wiki/Digital_compositing
11 https://en.wikipedia.org/wiki/Metabolic_network_modelling#Metabolic_network_simulation
12 https://en.wikipedia.org/wiki/Maximum_flow
13 https://en.wikipedia.org/wiki/Transportation_planning
14 https://en.wikipedia.org/wiki/Undirected_graph
15 https://en.wikipedia.org/wiki/Minimum_spanning_tree
16 https://en.wikipedia.org/wiki/Breadth_first_search


depth first search17 . Based on this test, there also exists a linear time18 algorithm19 for
finding a widest s-t path in an undirected graph, that does not use the maximum spanning
tree. The main idea of the algorithm is to apply the linear-time path-finding algorithm
to the median20 edge weight in the graph, and then either to delete all smaller edges or
contract all larger edges according to whether a path does or does not exist, and recurse in
the resulting smaller graph.[9][11][12]
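The threshold test underlying this algorithm can be sketched as follows (hypothetical Python; a simple scan over all distinct weights stands in for the median-finding recursion, so this version is easy to check but not linear-time):

```python
from collections import deque

def connected_using_heavy_edges(edges, s, t, threshold):
    # Keep only edges of weight >= threshold, then look for any s-t path
    # among the remaining edges with breadth first search.
    adj = {}
    for (u, v), w in edges.items():
        if w >= threshold:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def widest_path_value(edges, s, t):
    # The widest s-t path value is the largest weight w such that s and t
    # remain connected using only edges of weight >= w.
    return max(w for w in set(edges.values())
               if connected_using_heavy_edges(edges, s, t, w))
```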
Fernandez, Garfinkel & Arbiol (1998)21 use undirected bottleneck shortest paths in order to
form composite22 aerial photographs23 that combine multiple images of overlapping areas.
In the subproblem to which the widest path problem applies, two images have already been
transformed into a common coordinate system24 ; the remaining task is to select a seam, a
curve that passes through the region of overlap and divides one of the two images from the
other. Pixels on one side of the seam will be copied from one of the images, and pixels on
the other side of the seam will be copied from the other image. Unlike other compositing
methods that average pixels from both images, this produces a valid photographic image of
every part of the region being photographed. They weight the edges of a grid graph25 by
a numeric estimate of how visually apparent a seam across that edge would be, and find a
bottleneck shortest path for these weights. Using this path as the seam, rather than a more
conventional shortest path, causes their system to find a seam that is difficult to discern
at all of its points, rather than allowing it to trade off greater visibility in one part of the
image for lesser visibility elsewhere.[4]
A solution to the minimax path problem between the two opposite corners of a grid graph26
can be used to find the weak Fréchet distance27 between two polygonal chains28 . Here, each
grid graph vertex represents a pair of line segments, one from each chain, and the weight
of an edge represents the Fréchet distance needed to pass from one pair of segments to
another.[13]
If all edge weights of an undirected graph are positive29 , then the minimax distances be-
tween pairs of points (the maximum edge weights of minimax paths) form an ultrametric30 ;
conversely every finite ultrametric space comes from minimax distances in this way.[14] A
data structure31 constructed from the minimum spanning tree allows the minimax distance
between any pair of vertices to be queried in constant time per query, using lowest com-
mon ancestor32 queries in a Cartesian tree33 . The root of the Cartesian tree represents

17 https://en.wikipedia.org/wiki/Depth_first_search
18 https://en.wikipedia.org/wiki/Linear_time
19 https://en.wikipedia.org/wiki/Algorithm
20 https://en.wikipedia.org/wiki/Median
21 #CITEREFFernandezGarfinkelArbiol1998
22 https://en.wikipedia.org/wiki/Digital_compositing
23 https://en.wikipedia.org/wiki/Aerial_photography
24 https://en.wikipedia.org/wiki/Image_registration
25 https://en.wikipedia.org/wiki/Grid_graph
26 https://en.wikipedia.org/wiki/Grid_graph
27 https://en.wikipedia.org/wiki/Fr%C3%A9chet_distance
28 https://en.wikipedia.org/wiki/Polygonal_chain
29 https://en.wikipedia.org/wiki/Positive_number
30 https://en.wikipedia.org/wiki/Ultrametric
31 https://en.wikipedia.org/wiki/Data_structure
32 https://en.wikipedia.org/wiki/Lowest_common_ancestor
33 https://en.wikipedia.org/wiki/Cartesian_tree


the heaviest minimum spanning tree edge, and the children of the root are Cartesian trees
recursively34 constructed from the subtrees of the minimum spanning tree formed by remov-
ing the heaviest edge. The leaves of the Cartesian tree represent the vertices of the input
graph, and the minimax distance between two vertices equals the weight of the Cartesian
tree node that is their lowest common ancestor. Once the minimum spanning tree edges
have been sorted, this Cartesian tree can be constructed in linear time.[15]
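An equivalent way to batch such queries, without building the Cartesian tree explicitly, is a Kruskal-style sweep: adding edges in increasing weight order with a union-find structure, the minimax distance between two vertices is the weight of the edge whose addition first puts them in the same component (the same value the Cartesian tree stores at their lowest common ancestor). A hypothetical Python sketch:

```python
def minimax_distances(n, edges, queries):
    # edges: list of (u, v, weight) triples over vertices 0..n-1;
    # queries: list of (u, v) pairs whose minimax distance is wanted.
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pending = set(queries)
    answer = {}
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        parent[find(u)] = find(v)  # union the two components
        for q in list(pending):
            if find(q[0]) == find(q[1]):
                answer[q] = w  # first edge weight that connects the pair
                pending.discard(q)
    return answer
```

This sweep answers all queries in one pass but, unlike the Cartesian-tree structure, does not give constant time per later query.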

134.2 Directed graphs

In directed graphs35 , the maximum spanning tree solution cannot be used. Instead, several
different algorithms are known; the choice of which algorithm to use depends on whether
a start or destination vertex for the path is fixed, or whether paths for many start or
destination vertices must be found simultaneously.

134.2.1 All pairs

The all-pairs widest path problem has applications in the Schulze method36 for choosing a
winner in multiway elections37 in which voters rank the candidates in preference order38 .
The Schulze method constructs a complete directed graph39 in which the vertices represent
the candidates and every two vertices are connected by an edge. Each edge is directed from
the winner to the loser of a pairwise contest between the two candidates it connects, and
is labeled with the margin of victory of that contest. Then the method computes widest
paths between all pairs of vertices, and the winner is the candidate whose vertex has wider
paths to each opponent than vice versa.[3] The results of an election using this method
are consistent with the Condorcet method40 – a candidate who wins all pairwise contests
automatically wins the whole election – but it generally allows a winner to be selected, even
in situations where the Condorcet method itself fails.[16] The Schulze method has been used
by several organizations including the Wikimedia Foundation41 .[17]
To compute the widest path widths for all pairs of nodes in a dense42 directed graph, such as
the ones that arise in the voting application, the asymptotically43 fastest known approach
takes time O(n^((3+ω)/2)) where ω is the exponent for fast matrix multiplication44 . Using the
best known algorithms for matrix multiplication, this time bound becomes O(n^2.688).[18]
Instead, the reference implementation for the Schulze method uses a modified version of the

34 https://en.wikipedia.org/wiki/Recursion
35 https://en.wikipedia.org/wiki/Directed_graph
36 https://en.wikipedia.org/wiki/Schulze_method
37 https://en.wikipedia.org/wiki/Election
38 https://en.wikipedia.org/wiki/Ranked_voting_systems
39 https://en.wikipedia.org/wiki/Tournament_(graph_theory)
40 https://en.wikipedia.org/wiki/Condorcet_method
41 https://en.wikipedia.org/wiki/Wikimedia_Foundation
42 https://en.wikipedia.org/wiki/Dense_graph
43 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
44 https://en.wikipedia.org/wiki/Fast_matrix_multiplication


simpler Floyd–Warshall algorithm45 , which takes O(n3 ) time.[3] For sparse graphs46 , it may
be more efficient to repeatedly apply a single-source widest path algorithm.
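The modified Floyd–Warshall pass replaces the usual min-plus relaxation with max-min. A minimal Python sketch (names are invented; edge weights are assumed positive, with 0 standing for "no edge"):

```python
def all_pairs_widest(nodes, weight):
    # width[u][v] starts as the direct edge weight (0 = no edge); a
    # Floyd-Warshall-style pass then relaxes with "maximum of minimums"
    # instead of the usual "minimum of sums".
    width = {u: {v: weight.get((u, v), 0) for v in nodes} for u in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if i != j:
                    width[i][j] = max(width[i][j],
                                      min(width[i][k], width[k][j]))
    return width
```

In the voting application, the weights would be the pairwise margins of victory between candidates.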

134.2.2 Single source

If the edges are sorted by their weights, then a modified version of Dijkstra's algorithm47
can compute the bottlenecks between a designated start vertex and every other vertex in
the graph, in linear time. The key idea behind the speedup over a conventional version
of Dijkstra's algorithm is that the sequence of bottleneck distances to each vertex, in the
order that the vertices are considered by this algorithm, is a monotonic48 subsequence of
the sorted sequence of edge weights; therefore, the priority queue49 of Dijkstra's algorithm
can be implemented as a bucket queue50 : an array indexed by the numbers from 1 to m (the
number of edges in the graph), where array cell i contains the vertices whose bottleneck
distance is the weight of the edge with position i in the sorted order. This method allows the
widest path problem to be solved as quickly as sorting51 ; for instance, if the edge weights
are represented as integers, then the time bounds for integer sorting52 a list of m integers
would apply also to this problem.[12]
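The max-min relaxation itself is easy to state. The sketch below uses an ordinary binary heap rather than the bucket queue that yields the linear-time bound, so it runs in O(m log n), but it shows the modified relaxation step (the names and the adjacency-list format are assumptions):

```python
import heapq

def widest_from(adj, source):
    # Dijkstra variant: relax with max-min instead of min-plus.  A binary
    # heap keeps this sketch short; the linear-time version described in
    # the text would use a bucket queue indexed by edge-weight rank.
    best = {source: float("inf")}     # widest known bottleneck to each vertex
    heap = [(-best[source], source)]  # max-heap via negated widths
    while heap:
        negw, u = heapq.heappop(heap)
        if -negw < best.get(u, 0):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            cand = min(-negw, w)  # bottleneck of extending the path to v
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best
```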

134.2.3 Single source and single destination

Berman & Handler (1987)53 suggest that service vehicles and emergency vehicles should use
minimax paths when returning from a service call to their base. In this application, the time
to return is less important than the response time if another service call occurs while the
vehicle is in the process of returning. By using a minimax path, where the weight of an edge
is the maximum travel time from a point on the edge to the farthest possible service call, one
can plan a route that minimizes the maximum possible delay between receipt of a service
call and arrival of a responding vehicle.[7] Ullah, Lee & Hassoun (2009)54 use maximin paths
to model the dominant reaction chains in metabolic networks55 ; in their model, the weight
of an edge is the free energy of the metabolic reaction represented by the edge.[5]
Another application of widest paths arises in the Ford–Fulkerson algorithm56 for the max-
imum flow problem57 . Repeatedly augmenting a flow along a maximum capacity path in
the residual network of the flow leads to a small bound, O(m log U), on the number of
augmentations needed to find a maximum flow; here, the edge capacities are assumed to be
integers that are at most U. However, this analysis does not depend on finding a path that

45 https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
46 https://en.wikipedia.org/wiki/Sparse_graph
47 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
48 https://en.wikipedia.org/wiki/Monotonic
49 https://en.wikipedia.org/wiki/Priority_queue
50 https://en.wikipedia.org/wiki/Bucket_queue
51 https://en.wikipedia.org/wiki/Sorting_algorithm
52 https://en.wikipedia.org/wiki/Integer_sorting
53 #CITEREFBermanHandler1987
54 #CITEREFUllahLeeHassoun2009
55 https://en.wikipedia.org/wiki/Metabolic_network
56 https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
57 https://en.wikipedia.org/wiki/Maximum_flow_problem


has the exact maximum of capacity; any path whose capacity is within a constant factor of
the maximum suffices. Combining this approximation idea with the shortest path augmen-
tation method of the Edmonds–Karp algorithm58 leads to a maximum flow algorithm with
running time O(mn log U).[6]
It is possible to find maximum-capacity paths and minimax paths with a single source and
single destination very efficiently even in models of computation that allow only comparisons
of the input graph's edge weights and not arithmetic on them.[12][19] The algorithm maintains
a set S of edges that are known to contain the bottleneck edge of the optimal path; initially,
S is just the set of all m edges of the graph. At each iteration of the algorithm, it splits S
into an ordered sequence of subsets S1 , S2 , ... of approximately equal size; the number of
subsets in this partition is chosen in such a way that all of the split points between subsets
can be found by repeated median-finding in time O(m). The algorithm then reweights each
edge of the graph by the index of the subset containing the edge, and uses the modified
Dijkstra algorithm on the reweighted graph; based on the results of this computation, it
can determine in linear time which of the subsets contains the bottleneck edge weight. It
then replaces S by the subset Si that it has determined to contain the bottleneck weight,
and starts the next iteration with this new set S. The number of subsets into which S can
be split increases exponentially with each step, so the number of iterations is proportional
to the iterated logarithm59 function, O(log*60 n), and the total time is O(m log*61 n).[19] In
a model of computation where each edge weight is a machine integer, the use of repeated
bisection in this algorithm can be replaced by a list-splitting technique of Han & Thorup

(2002)62 , allowing S to be split into O(√m) smaller sets Si in a single step and leading to a
linear overall time bound.[20]

58 https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
59 https://en.wikipedia.org/wiki/Iterated_logarithm
60 https://en.wikipedia.org/wiki/Iterated_logarithm
61 https://en.wikipedia.org/wiki/Iterated_logarithm
62 #CITEREFHanThorup2002


134.3 Euclidean point sets

Figure 329 The dark blue band separates pairs of Gaussian prime numbers whose
minimax path length is 2 or more.

A variant of the minimax path problem has also been considered for sets of points in
the Euclidean plane63 . As in the undirected graph problem, this Euclidean minimax path
problem can be solved efficiently by finding a Euclidean minimum spanning tree64 : every
path in the tree is a minimax path. However, the problem becomes more complicated when
a path is desired that not only minimizes the hop length but also, among paths with the

63 https://en.wikipedia.org/wiki/Euclidean_plane
64 https://en.wikipedia.org/wiki/Euclidean_minimum_spanning_tree


same hop length, minimizes or approximately minimizes the total length of the path. The
solution can be approximated using geometric spanners65 .[21]
In number theory66 , the unsolved Gaussian moat67 problem asks whether or not minimax
paths in the Gaussian prime numbers68 have bounded or unbounded minimax length. That
is, does there exist a constant B such that, for every pair of points p and q in the infinite
Euclidean point set defined by the Gaussian primes, the minimax path in the Gaussian
primes between p and q has minimax edge length at most B?[22]

134.4 References
1. P, M (1960), ”T     -
”, Operations Research69 , 8 (5): 733–736, doi70 :10.1287/opre.8.5.73371 , JS-
TOR72 16738773
2. S, N. (1992), ”M    ”,
IEEE International Conference on Communications (ICC '92), 3, pp. 1217–
1221, doi74 :10.1109/ICC.1992.26804775 , hdl76 :2060/1999001764677 , ISBN78 978-0-
7803-0599-179 ;
W, Z; C, J. (1995), ”B-   -
”, IEEE Global Telecommunications Conference (GLOBECOM '95), 3,
pp. 2129–2133, doi80 :10.1109/GLOCOM.1995.50278081 , ISBN82 978-0-7803-2509-883
3. S, M (2011), ”A  , -, 
,  C- -  ”,
Social Choice and Welfare84 , 36 (2): 267–303, doi85 :10.1007/s00355-010-0475-486
4. F, E; G, R; A, R (1998), ”M
         

65 https://en.wikipedia.org/wiki/Geometric_spanner
66 https://en.wikipedia.org/wiki/Number_theory
67 https://en.wikipedia.org/wiki/Gaussian_moat
68 https://en.wikipedia.org/wiki/Gaussian_integer
69 https://en.wikipedia.org/wiki/Operations_Research_(journal)
70 https://en.wikipedia.org/wiki/Doi_(identifier)
71 https://doi.org/10.1287%2Fopre.8.5.733
72 https://en.wikipedia.org/wiki/JSTOR_(identifier)
73 http://www.jstor.org/stable/167387
74 https://en.wikipedia.org/wiki/Doi_(identifier)
75 https://doi.org/10.1109%2FICC.1992.268047
76 https://en.wikipedia.org/wiki/Hdl_(identifier)
77 http://hdl.handle.net/2060%2F19990017646
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7803-0599-1
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1109%2FGLOCOM.1995.502780
82 https://en.wikipedia.org/wiki/ISBN_(identifier)
83 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7803-2509-8
84 https://en.wikipedia.org/wiki/Social_Choice_and_Welfare
85 https://en.wikipedia.org/wiki/Doi_(identifier)
86 https://doi.org/10.1007%2Fs00355-010-0475-4


”, Operations Research87 , 46 (3): 293–304, doi88 :10.1287/opre.46.3.29389 ,
JSTOR90 22282391
5. U, E.; L, K; H, S. (2009), ”A   -
 -  ”, IEEE/ACM International Conference
on Computer-Aided Design (ICCAD 2009)92 , . 144–150
6. A, R K.93 ; M, T L.94 ; O, J B.95 (1993), ”7.3
C S A”, Network Flows: Theory, Algorithms and Applica-
tions, Prentice Hall, pp. 210–212, ISBN96 978-0-13-617549-097
7. B, O; H, G Y. (1987), ”O M P  
S S U   N  N D”, Transporta-
tion Science98 , 21 (2): 115–122, doi99 :10.1287/trsc.21.2.115100
8. H, T. C. (1961), ”T    ”, Operations Re-
search101 , 9 (6): 898–900, doi102 :10.1287/opre.9.6.898103 , JSTOR104 167055105
9. P, A P. (1991), ”A      
  ”, European Journal of Operational Research106 , 53 (3):
402–404, doi107 :10.1016/0377-2217(91)90073-5108
10. M, N; C, J (2002), ”A    -
    ”, Information Processing Letters109 , 83 (3):
175–180, doi110 :10.1016/S0020-0190(01)00323-4111 , MR112 1904226113
11. C, P. M. (1978), ”T -     
”, Information Processing Letters114 , 7 (1): 10–14, doi115 :10.1016/0020-
0190(78)90030-3116

87 https://en.wikipedia.org/wiki/Operations_Research_(journal)
88 https://en.wikipedia.org/wiki/Doi_(identifier)
89 https://doi.org/10.1287%2Fopre.46.3.293
90 https://en.wikipedia.org/wiki/JSTOR_(identifier)
91 http://www.jstor.org/stable/222823
92 https://ieeexplore.ieee.org/document/5361299
93 https://en.wikipedia.org/wiki/Ravindra_K._Ahuja
94 https://en.wikipedia.org/wiki/Thomas_L._Magnanti
95 https://en.wikipedia.org/wiki/James_B._Orlin
96 https://en.wikipedia.org/wiki/ISBN_(identifier)
97 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-617549-0
98 https://en.wikipedia.org/wiki/Transportation_Science
99 https://en.wikipedia.org/wiki/Doi_(identifier)
100 https://doi.org/10.1287%2Ftrsc.21.2.115
101 https://en.wikipedia.org/wiki/Operations_Research_(journal)
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1287%2Fopre.9.6.898
104 https://en.wikipedia.org/wiki/JSTOR_(identifier)
105 http://www.jstor.org/stable/167055
106 https://en.wikipedia.org/wiki/European_Journal_of_Operational_Research
107 https://en.wikipedia.org/wiki/Doi_(identifier)
108 https://doi.org/10.1016%2F0377-2217%2891%2990073-5
109 https://en.wikipedia.org/wiki/Information_Processing_Letters
110 https://en.wikipedia.org/wiki/Doi_(identifier)
111 https://doi.org/10.1016%2FS0020-0190%2801%2900323-4
112 https://en.wikipedia.org/wiki/MR_(identifier)
113 http://www.ams.org/mathscinet-getitem?mr=1904226
114 https://en.wikipedia.org/wiki/Information_Processing_Letters
115 https://en.wikipedia.org/wiki/Doi_(identifier)
116 https://doi.org/10.1016%2F0020-0190%2878%2990030-3


12. K, V; P, M A. F. (2006), On the bottleneck shortest


path problem117 (PDF), ZIB-R 06-22, K-Z-Z  I-
 B
13. A, H; G, M (1995), ”C  F
    ”118 (PDF), International
Journal of Computational Geometry and Applications, 5 (1–2): 75–91,
119
doi :10.1142/S0218195995000064 . 120

14. L, B (1981), ”D   ”,


Centre de Mathématique Sociale. École Pratique des Hautes Études. Mathématiques
et Sciences Humaines (in French) (73): 5–37, 127, MR121 0623034122
15. D, E D.123 ; L, G M.124 ; W, O (2009), ”O C-
     ”, Automata, Languages and Programming,
36th International Colloquium, ICALP 2009, Rhodes, Greece, July 5-12, 2009, Lec-
ture Notes in Computer Science, 5555, pp. 341–353, doi125 :10.1007/978-3-642-02927-
1_29126 , hdl127 :1721.1/61963128 , ISBN129 978-3-642-02926-4130
16. More specifically, the only kind of tie that the Schulze method fails to break is between
two candidates who have equally wide paths to each other.
17. See Jesse Plamondon-Willard, Board election to use preference voting131 , May 2008;
Mark Ryan, 2008 Wikimedia Board Election results132 , June 2008; 2008 Board Elec-
tions133 , June 2008; and 2009 Board Elections134 , August 2009.
18. D, R; P, S (2009), ”F   (, )-
    ”, Proceedings of the 20th An-
nual ACM-SIAM Symposium on Discrete Algorithms (SODA '09)135 , . 384–391.
For an earlier algorithm that also used fast matrix multiplication to speed up all pairs
widest paths, see
V, V136 ; W, R137 ; Y, R (2007),
”A-        -
”, Proceedings of the 39th Annual ACM Symposium on Theory of Com-

117 https://opus4.kobv.de/opus4-zib/files/916/ZR-06-22.pdf
118 http://www.cs.uu.nl/people/marc/asci/ag-cfdbt-95.pdf
119 https://en.wikipedia.org/wiki/Doi_(identifier)
120 https://doi.org/10.1142%2FS0218195995000064
121 https://en.wikipedia.org/wiki/MR_(identifier)
122 http://www.ams.org/mathscinet-getitem?mr=0623034
123 https://en.wikipedia.org/wiki/Erik_Demaine
124 https://en.wikipedia.org/wiki/Gad_Landau
125 https://en.wikipedia.org/wiki/Doi_(identifier)
126 https://doi.org/10.1007%2F978-3-642-02927-1_29
127 https://en.wikipedia.org/wiki/Hdl_(identifier)
128 http://hdl.handle.net/1721.1%2F61963
129 https://en.wikipedia.org/wiki/ISBN_(identifier)
130 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-02926-4
131 https://lists.wikimedia.org/pipermail/foundation-l/2008-May/043134.html
132 https://lists.wikimedia.org/pipermail/foundation-l/2008-June/044361.html
133 https://meta.wikimedia.org/wiki/Board_elections/2008/Results/en
134 https://meta.wikimedia.org/wiki/Board_elections/2009/Results/en
135 http://portal.acm.org/citation.cfm?id=1496813
136 https://en.wikipedia.org/wiki/Virginia_Vassilevska_Williams
137 https://en.wikipedia.org/wiki/Ryan_Williams_(computer_scientist)


puting (STOC '07)138, New York: ACM, pp. 585–589, CiteSeerX139 10.1.1.164.9808140, doi141:10.1145/1250790.1250876142, ISBN143 9781595936318144, MR145 2402484146 and Chapter 5 of
Vassilevska, Virginia (2008), Efficient Algorithms for Path Problems in Weighted Graphs147 (PDF), Ph.D. thesis, Report CMU-CS-08-147, Carnegie Mellon University School of Computer Science
19. G, H N.; T, R E.148 (1988), ”A  
  ”149 , Journal of Algorithms, 9 (3): 411–417,
doi150 :10.1016/0196-6774(88)90031-4151 , MR152 0955149153

20. H, Y; T, M.154 (2002), ”I   O(n log log n) expected
time and linear space”, Proc. 43rd Annual Symposium on Foundations of Com-
puter Science (FOCS 2002)155 , pp. 135–144, doi156 :10.1109/SFCS.2002.1181890157 ,
ISBN158 978-0-7695-1822-0159 .
21. B, P160 ; M, A; N, G; S, M;
Z, N (2004), ”A   
”, Computational Geometry. Theory and Applications161 , 29 (3): 233–249,
doi162 :10.1016/j.comgeo.2004.04.003163 , MR164 2095376165

138 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
139 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
140 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.164.9808
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1145%2F1250790.1250876
143 https://en.wikipedia.org/wiki/ISBN_(identifier)
144 https://en.wikipedia.org/wiki/Special:BookSources/9781595936318
145 https://en.wikipedia.org/wiki/MR_(identifier)
146 http://www.ams.org/mathscinet-getitem?mr=2402484
147 https://www.cs.cmu.edu/afs/cs/Web/People/virgi/thesis.pdf
148 https://en.wikipedia.org/wiki/Robert_Tarjan
149 https://zenodo.org/record/1258419
150 https://en.wikipedia.org/wiki/Doi_(identifier)
151 https://doi.org/10.1016%2F0196-6774%2888%2990031-4
152 https://en.wikipedia.org/wiki/MR_(identifier)
153 http://www.ams.org/mathscinet-getitem?mr=0955149
154 https://en.wikipedia.org/wiki/Mikkel_Thorup
155 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
156 https://en.wikipedia.org/wiki/Doi_(identifier)
157 https://doi.org/10.1109%2FSFCS.2002.1181890
158 https://en.wikipedia.org/wiki/ISBN_(identifier)
159 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7695-1822-0
160 https://en.wikipedia.org/wiki/Jit_Bose
161 https://en.wikipedia.org/wiki/Computational_Geometry_(journal)
162 https://en.wikipedia.org/wiki/Doi_(identifier)
163 https://doi.org/10.1016%2Fj.comgeo.2004.04.003
164 https://en.wikipedia.org/wiki/MR_(identifier)
165 http://www.ams.org/mathscinet-getitem?mr=2095376


22. G, E; W, S166 ; W, B (1998), ”A  
 G ”, American Mathematical Monthly167 , 105 (4): 327–337,
doi168 :10.2307/2589708169 , JSTOR170 2589708171 , MR172 1614871173 .

166 https://en.wikipedia.org/wiki/Stan_Wagon
167 https://en.wikipedia.org/wiki/American_Mathematical_Monthly
168 https://en.wikipedia.org/wiki/Doi_(identifier)
169 https://doi.org/10.2307%2F2589708
170 https://en.wikipedia.org/wiki/JSTOR_(identifier)
171 http://www.jstor.org/stable/2589708
172 https://en.wikipedia.org/wiki/MR_(identifier)
173 http://www.ams.org/mathscinet-getitem?mr=1614871

135 Yen's algorithm

Yen's algorithm computes single-source K-shortest loopless paths for a graph1 with non-
negative edge2 cost.[1] The algorithm was published by Jin Y. Yen in 1971 and employs any
shortest path algorithm3 to find the best path, then proceeds to find K − 1 deviations of
the best path.[2]

1 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
2 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#Basics
3 https://en.wikipedia.org/wiki/Shortest_path_algorithm




135.1 Algorithm

135.1.1 Terminology and notation

Notation   Description
N          The size of the graph, i.e., the number of nodes in the network.
(i)        The ith node of the graph, where i ranges from 1 to N. This means that (1) is
           the source node of the graph and (N) is the sink node of the graph.
d_ij       The cost of the edge between (i) and (j), assuming that (i) ≠ (j) and d_ij ≥ 0.
A^k        The kth shortest path from (1) to (N), where k ranges from 1 to K. Then
           A^k = (1) − (2^k) − (3^k) − · · · − (Q_k^k) − (N), where (2^k) is the 2nd node
           of the kth shortest path and (3^k) is the 3rd node of the kth shortest path,
           and so on.
A^k_i      A deviation path from A^(k−1) at node (i), where i ranges from 1 to Q_k. Note
           that the maximum value of i is Q_k, the node just before the sink in the kth
           shortest path, so the deviation path cannot deviate from the (k−1)th shortest
           path at the sink. The paths A^k_i and A^(k−1) follow the same route until the
           ith node; the (i) − (i+1) edge of A^k_i is then different from the
           corresponding edge of any path A^j, where j ranges from 1 to k − 1.
R^k_i      The root path of A^k_i, which follows A^(k−1) until the ith node of A^(k−1).
S^k_i      The spur path of A^k_i, which starts at the ith node of A^k_i and ends at the
           sink.

135.1.2 Description

The algorithm can be broken down into two parts: determining the first k-shortest path4, A^1, and then determining all other k-shortest paths5. It is assumed that the container A will hold the k-shortest paths, whereas the container B will hold the potential k-shortest paths. To determine A^1, the shortest path6 from the source to the sink, any efficient shortest path algorithm7 can be used.
To find A^k, where k ranges from 2 to K, the algorithm assumes that all paths from A^1 to A^(k−1) have previously been found. The kth iteration can be divided into two processes: finding all the deviations A^k_i, and choosing a minimum-length path to become A^k. Note that in this iteration, i ranges from 1 to Q_k.
The first process can be further subdivided into three operations: choosing the root path R^k_i, finding the spur path S^k_i, and then adding A^k_i to the container B. The root path, R^k_i, is chosen by finding the subpath in A^(k−1) that follows the first i nodes of A^j, where j ranges from 1 to k − 1. Then, if such a path is found, the cost of the edge d_i(i+1) of A^j is set to infinity. Next, the spur path, S^k_i, is found by computing the shortest path from the spur node, node i, to the sink. The

4 https://en.wikipedia.org/wiki/K_shortest_path_routing
5 https://en.wikipedia.org/wiki/Shortest_path
6 https://en.wikipedia.org/wiki/Shortest_path
7 https://en.wikipedia.org/wiki/Shortest_path_algorithm


removal of previously used edges from (i) to (i + 1) ensures that the spur path is different. A^k_i = R^k_i + S^k_i, the concatenation of the root path and the spur path, is added to B. Next, the edges that were removed, i.e. had their cost set to infinity, are restored to their initial values.
The second process determines a suitable path for A^k by finding the path in container B with the lowest cost. This path is removed from container B, inserted into container A, and the algorithm continues to the next iteration.

135.1.3 Pseudocode

The algorithm assumes that the Dijkstra algorithm is used to find the shortest path between
two nodes, but any shortest path algorithm can be used in its place.
function YenKSP(Graph, source, sink, K):
    // Determine the shortest path from the source to the sink.
    A[0] = Dijkstra(Graph, source, sink);
    // Initialize the set to store the potential kth shortest path.
    B = [];

    for k from 1 to K:
        // The spur node ranges from the first node to the next-to-last node in the previous k-shortest path.
        for i from 0 to size(A[k − 1]) − 2:

            // Spur node is retrieved from the previous k-shortest path, k − 1.
            spurNode = A[k−1].node(i);
            // The sequence of nodes from the source to the spur node of the previous k-shortest path.
            rootPath = A[k−1].nodes(0, i);

            for each path p in A:
                if rootPath == p.nodes(0, i):
                    // Remove the links that are part of the previous shortest paths which share the same root path.
                    remove p.edge(i, i + 1) from Graph;

            for each node rootPathNode in rootPath except spurNode:
                remove rootPathNode from Graph;

            // Calculate the spur path from the spur node to the sink.
            spurPath = Dijkstra(Graph, spurNode, sink);

            // Entire path is made up of the root path and spur path.
            totalPath = rootPath + spurPath;
            // Add the potential k-shortest path to the heap.
            if (totalPath not in B):
                B.append(totalPath);

            // Add back the edges and nodes that were removed from the graph.
            restore edges to Graph;
            restore nodes in rootPath to Graph;

        if B is empty:
            // This handles the case of there being no spur paths, or no spur paths left.
            // This could happen if the spur paths have already been exhausted (added to A),
            // or there are no spur paths at all - such as when both the source and sink vertices
            // lie along a "dead end".
            break;
        // Sort the potential k-shortest paths by cost.
        B.sort();
        // The lowest-cost path becomes the k-shortest path.
        A[k] = B[0];
        // In fact we should rather use shift, since we are removing the first element.
        B.pop();


    return A;
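The pseudocode can be turned into a short, runnable Python sketch. The graph representation as a dict of adjacency dicts and the names `dijkstra`/`yen_ksp` are our own choices, not part of the original; container B is kept as a heap, as suggested under Improvements below. The demo weights at the bottom are assumed values chosen to be consistent with the costs quoted in the worked example.

```python
import heapq

def dijkstra(graph, source, sink):
    # graph: dict mapping node -> {neighbor: edge cost}.
    # Returns (cost, path) for the cheapest source->sink path,
    # or (inf, None) if the sink is unreachable.
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == sink:
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), None

def yen_ksp(graph, source, sink, K):
    # Returns up to K loopless paths as (cost, node list), cheapest first.
    cost, path = dijkstra(graph, source, sink)
    if path is None:
        return []
    A = [(cost, path)]   # accepted k-shortest paths
    B = []               # heap of candidate paths
    for k in range(1, K):
        prev_path = A[-1][1]
        for i in range(len(prev_path) - 1):
            spur_node = prev_path[i]
            root_path = prev_path[: i + 1]
            removed_edges = []
            removed_nodes = {}
            # Remove the next edge of every accepted path sharing this root.
            for _, p in A:
                if len(p) > i + 1 and p[: i + 1] == root_path:
                    u, v = p[i], p[i + 1]
                    if v in graph.get(u, {}):
                        removed_edges.append((u, v, graph[u].pop(v)))
            # Remove root-path nodes (except the spur node) so the
            # spur path cannot revisit them.
            for node in root_path[:-1]:
                removed_nodes[node] = graph.pop(node, {})
            spur_cost, spur_path = dijkstra(graph, spur_node, sink)
            # Restore the graph before trying the next spur node.
            graph.update(removed_nodes)
            for u, v, w in removed_edges:
                graph[u][v] = w
            if spur_path is not None:
                total_path = root_path[:-1] + spur_path
                total_cost = sum(graph[total_path[j]][total_path[j + 1]]
                                 for j in range(len(total_path) - 1))
                cand = (total_cost, total_path)
                if cand not in B and all(total_path != p for _, p in A):
                    heapq.heappush(B, cand)
        if not B:
            break
        A.append(heapq.heappop(B))
    return A

if __name__ == "__main__":
    # Hypothetical weights consistent with the example: (C)-(E)-(F)-(H) costs 5, etc.
    g = {"C": {"D": 3, "E": 2}, "D": {"F": 4},
         "E": {"D": 1, "F": 2, "G": 3},
         "F": {"G": 2, "H": 1}, "G": {"H": 2}, "H": {}}
    print(yen_ksp(g, "C", "H", 3))
```

With these assumed weights the three paths returned match the worked example below: cost 5 via (E)-(F), cost 7 via (E)-(G), cost 8 via (D)-(F).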

135.1.4 Example

Figure 330 Yen's k-shortest path algorithm, K = 3, A to F

The example uses Yen's K-Shortest Path Algorithm to compute three paths from (C) to (H). Dijkstra's algorithm8 is used to calculate the best path from (C) to (H), which is (C) − (E) − (F) − (H) with cost 5. This path is appended to container A and becomes the first k-shortest path, A^1.
Node (C) of A^1 becomes the spur node with a root path of itself, R^2_1 = (C). The edge (C) − (E) is removed because it coincides with the root path and a path in container A. Dijkstra's algorithm9 is used to compute the spur path S^2_1, which is (C) − (D) − (F) − (H), with a cost of 8. A^2_1 = R^2_1 + S^2_1 = (C) − (D) − (F) − (H) is added to container B as a potential k-shortest path.
Node (E) of A^1 becomes the spur node with R^2_2 = (C) − (E). The edge (E) − (F) is removed because it coincides with the root path and a path in container A. Dijkstra's algorithm10 is used to compute the spur path S^2_2, which is (E) − (G) − (H), with a cost of 7. A^2_2 = R^2_2 + S^2_2 = (C) − (E) − (G) − (H) is added to container B as a potential k-shortest path.
Node (F) of A^1 becomes the spur node with a root path R^2_3 = (C) − (E) − (F). The edge (F) − (H) is removed because it coincides with the root path and a path in container A. Dijkstra's algorithm11 is used to compute the spur path S^2_3, which is (F) − (G) − (H), with

8 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
9 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
10 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
11 https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm


a cost of 8. A^2_3 = R^2_3 + S^2_3 = (C) − (E) − (F) − (G) − (H) is added to container B as a potential k-shortest path.
Of the three paths in container B, A^2_2 is chosen to become A^2 because it has the lowest cost of 7. This process is continued for the 3rd k-shortest path. However, within this 3rd iteration, note that some spur paths do not exist, and the path that is chosen to become A^3 is (C) − (D) − (F) − (H).

135.2 Features

135.2.1 Space complexity

To store the edges of the graph, the shortest path list A, and the potential shortest path list B, N^2 + KN memory addresses are required.[2] In the worst case, every node in the graph has an edge to every other node, so N^2 addresses are needed. Only KN addresses are needed for both lists A and B, because at most K paths will be stored,[2] where it is possible for each path to have N nodes.

135.2.2 Time complexity

The time complexity of Yen's algorithm depends on the shortest path algorithm used in the computation of the spur paths, so Dijkstra's algorithm is assumed here. Dijkstra's algorithm has a worst-case time complexity of O(N^2), but using a Fibonacci heap it becomes O(M + N log N),[3] where M is the number of edges in the graph. Yen's algorithm makes Kl calls to Dijkstra in computing the spur paths, where l is the length of the spur paths. In a condensed graph, the expected value of l is O(log N), while the worst case is N, so the time complexity becomes O(KN(M + N log N)).[4]

135.3 Improvements

Yen's algorithm can be improved by using a heap to store B, the set of potential k-shortest paths. Using a heap instead of a list improves the performance of the algorithm, but not its complexity.[5] One method to slightly decrease complexity is to skip the nodes with no spur paths. This case arises when all the spur paths from a spur node have been used in a previous A^k. Also, if container B has K − k paths of minimum length, in reference to those in container A, then they can be extracted and inserted into container A, since no shorter paths will be found.

135.3.1 Lawler's modification

Eugene Lawler12 proposed a modification to Yen's algorithm in which duplicate paths are not calculated, as opposed to the original algorithm, where they are calculated and then discarded

12 https://en.wikipedia.org/wiki/Eugene_Lawler


when they are found to be duplicates.[6] These duplicate paths result from calculating spur paths of nodes in the root of A^k. For instance, A^k deviates from A^(k−1) at some node (i). Any spur path S^k_j, where j = 0, . . . , i, that is calculated will be a duplicate, because it has already been calculated during the (k − 1)th iteration. Therefore, only spur paths for nodes that were on the spur path of A^(k−1) must be calculated, i.e. only S^k_h where h ranges from (i + 1) to Q_(k−1). To perform this operation for A^k, a record is needed to identify the node where A^(k−1) branched from A^(k−2).

135.4 See also


• Yen's improvement to the Bellman–Ford algorithm13

135.5 References
1. Y, J Y. (1970). ”A       
        ”. Quarterly of Ap-
plied Mathematics. 27 (4): 526–530. doi14 :10.1090/qam/25382215 . MR16 025382217 .
2. Y, J Y. (J 1971). ”F  k Shortest Loopless Paths in a Net-
work”. Management Science. 17 (11): 712–716. doi18 :10.1287/mnsc.17.11.71219 .
JSTOR20 262931221 .
3. F, M L22 ; T, R E.23 (1984). Fibonacci
heaps and their uses in improved network optimization algorithms. 25th An-
nual Symposium on Foundations of Computer Science. IEEE24 . pp. 338–346.
doi25 :10.1109/SFCS.1984.71593426 .CS1 maint: ref=harv (link27 )
4. B, E (2007). Path routing in mesh optical networks. Chichester, Eng-
land: John Wiley & Sons. ISBN28 978047003298529 .
5. B, A W; S, M C. A comparative study ofk-
shortest path algorithms. Department of Electronic Systems Engineering, University
of Essex, 1995.

13 https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm#Yen.27s_improvement
14 https://en.wikipedia.org/wiki/Doi_(identifier)
15 https://doi.org/10.1090%2Fqam%2F253822
16 https://en.wikipedia.org/wiki/MR_(identifier)
17 http://www.ams.org/mathscinet-getitem?mr=0253822
18 https://en.wikipedia.org/wiki/Doi_(identifier)
19 https://doi.org/10.1287%2Fmnsc.17.11.712
20 https://en.wikipedia.org/wiki/JSTOR_(identifier)
21 http://www.jstor.org/stable/2629312
22 https://en.wikipedia.org/wiki/Michael_Fredman
23 https://en.wikipedia.org/wiki/Robert_Tarjan
24 https://en.wikipedia.org/wiki/IEEE
25 https://en.wikipedia.org/wiki/Doi_(identifier)
26 https://doi.org/10.1109%2FSFCS.1984.715934
28 https://en.wikipedia.org/wiki/ISBN_(identifier)
29 https://en.wikipedia.org/wiki/Special:BookSources/9780470032985


6. L, EL (1972). ”A        


         
”. Management Science. 18 (7): 401–405. doi30 :10.1287/mnsc.18.7.40131 .

135.6 External links


• Open Source Python Implementation32 on GitHub
• Open Source C++ Implementation33
• Open Source C++ Implementation using Boost Graph Library34

30 https://en.wikipedia.org/wiki/Doi_(identifier)
31 https://doi.org/10.1287%2Fmnsc.18.7.401
32 https://github.com/Pent00/YenKSP
33 http://thinkingscale.com/k-shortest-paths-cpp-version/
34 https://svn.boost.org/trac/boost/ticket/11838

136 Hungarian algorithm

The Hungarian method is a combinatorial optimization1 algorithm2 that solves the as-
signment problem3 in polynomial time4 and which anticipated later primal-dual methods5 .
It was developed and published in 1955 by Harold Kuhn6 , who gave the name ”Hungarian
method” because the algorithm was largely based on the earlier works of two Hungarian7
mathematicians: Dénes Kőnig8 and Jenő Egerváry9 .[1][2]
James Munkres10 reviewed the algorithm in 1957 and observed that it is (strongly) polynomial11.[3] Since then the algorithm has also been known as the Kuhn–Munkres algorithm or the Munkres assignment algorithm. The time complexity12 of the original algorithm was O(n^4); however, Edmonds13 and Karp14, and independently Tomizawa, noticed that it can be modified to achieve an O(n^3) running time.[4][5] One of the most popular O(n^3) variants is the Jonker–Volgenant algorithm.[6] Ford17 and
Fulkerson18 extended the method to general maximum flow problems in form of the Ford-
Fulkerson algorithm19 . In 2006, it was discovered that Carl Gustav Jacobi20 had solved the
assignment problem in the 19th century, and the solution had been published posthumously
in 1890 in Latin.[7]

136.1 The problem

Main article: Assignment problem21

1 https://en.wikipedia.org/wiki/Combinatorial_optimization
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Assignment_problem
4 https://en.wikipedia.org/wiki/Polynomial_time
5 https://en.wikipedia.org/wiki/Duality_(optimization)
6 https://en.wikipedia.org/wiki/Harold_Kuhn
7 https://en.wikipedia.org/wiki/Hungary
8 https://en.wikipedia.org/wiki/D%C3%A9nes_K%C5%91nig
9 https://en.wikipedia.org/wiki/Jen%C5%91_Egerv%C3%A1ry
10 https://en.wikipedia.org/wiki/James_Munkres
11 https://en.wikipedia.org/wiki/Time_complexity#Strongly_and_weakly_polynomial_time
https://en.wikipedia.org/wiki/Computational_complexity_theory#Time_and_space_
12
complexity
13 https://en.wikipedia.org/wiki/Jack_Edmonds
14 https://en.wikipedia.org/wiki/Richard_Karp
17 https://en.wikipedia.org/wiki/L._R._Ford,_Jr.
18 https://en.wikipedia.org/wiki/D._R._Fulkerson
19 https://en.wikipedia.org/wiki/Ford-Fulkerson_algorithm
20 https://en.wikipedia.org/wiki/Carl_Gustav_Jacobi
21 https://en.wikipedia.org/wiki/Assignment_problem


136.1.1 Example

In this simple example there are three workers: Paul, Dave, and Chris. One of them has to clean the bathroom, another has to sweep the floors, and the third has to wash the windows, but they each demand different pay for the various tasks. The problem is to find the lowest-cost way to assign the jobs. The problem can be represented in a matrix22 of the costs of the workers doing the jobs. For example:

Clean bathroom Sweep floors Wash windows


Paul $2 $3 $3
Dave $3 $2 $3
Chris $3 $3 $2

The Hungarian method, when applied to the above table, would give the minimum cost:
this is $6, achieved by having Paul clean the bathroom, Dave sweep the floors, and Chris
wash the windows.
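For a matrix this small, the optimum can be verified by brute force over all 3! = 6 possible assignments. This is only a sanity check, not the Hungarian method itself:

```python
from itertools import permutations

cost = [[2, 3, 3],   # Paul:  bathroom, floors, windows
        [3, 2, 3],   # Dave
        [3, 3, 2]]   # Chris

# Try every one-to-one assignment of workers to tasks and keep the cheapest.
best = min((sum(cost[w][t] for w, t in enumerate(p)), p)
           for p in permutations(range(3)))
print(best)  # (6, (0, 1, 2)): Paul cleans, Dave sweeps, Chris washes
```

Brute force is O(n!) and only feasible for tiny n; the Hungarian method reaches the same answer in polynomial time.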

136.1.2 Matrix formulation

In the matrix formulation, we are given a nonnegative n×n matrix23 , where the element in
the i-th row and j-th column represents the cost of assigning the j-th job to the i-th worker.
We have to find an assignment of the jobs to the workers, such that each job is assigned to
one worker and each worker is assigned one job, such that the total cost of assignment is
minimum.
This can be expressed as permuting the rows and columns of a cost matrix C to minimize
the trace of a matrix:
min_{L,R} Tr(LCR)

where L and R are permutation matrices24 .


If the goal is to find the assignment that yields the maximum cost, the problem can be solved by negating the cost matrix C.

136.1.3 Bigraph formulation

The algorithm is easier to describe if we formulate the problem using a bipartite graph. We
have a complete bipartite graph25 G = (S, T ; E) with n worker vertices (S) and n job vertices
(T ), and each edge has a nonnegative cost c(i, j). We want to find a perfect matching26
with a minimum total cost.

22 https://en.wikipedia.org/wiki/Matrix_(mathematics)
23 https://en.wikipedia.org/wiki/Matrix_(mathematics)
24 https://en.wikipedia.org/wiki/Permutation_matrices
25 https://en.wikipedia.org/wiki/Complete_bipartite_graph
26 https://en.wikipedia.org/wiki/Perfect_matching


136.2 The algorithm in terms of bipartite graphs

Let us call a function y : (S ∪ T) → R a potential if y(i) + y(j) ≤ c(i, j) for each i ∈ S, j ∈ T.
The value of potential y is the sum of the potential over all vertices: ∑_{v∈S∪T} y(v).

It is easy to see that the cost of each perfect matching is at least the value of each potential:
the total cost of the matching is the sum of costs of all edges; the cost of each edge is at
least the sum of potentials of its endpoints; since the matching is perfect, each vertex is an
endpoint of exactly one edge; hence the total cost is at least the total potential.
The Hungarian method finds a perfect matching and a potential such that the matching
cost equals the potential value. This proves that both of them are optimal. In fact, the
Hungarian method finds a perfect matching of tight edges: an edge ij is called tight for a
potential y if y(i) + y(j) = c(i, j). Let us denote the subgraph27 of tight edges by Gy . The
cost of a perfect matching in Gy (if there is one) equals the value of y.
During the algorithm we maintain a potential y and an orientation of Gy (denoted by →Gy)28 which has the property that the edges oriented from T to S form a matching M. Initially, y is 0 everywhere, and all edges are oriented from S to T (so M is empty). In each step, either we modify y so that its value increases, or we modify the orientation to obtain a matching with more edges. We maintain the invariant that all the edges of M are tight. We are done if M is a perfect matching.
In a general step, let RS ⊆ S and RT ⊆ T be the vertices not covered by M (so RS consists of the vertices in S with no incoming edge and RT consists of the vertices in T with no outgoing edge). Let Z be the set of vertices reachable in →Gy from RS by a directed path only following edges that are tight. This can be computed by breadth-first search29.


If RT ∩ Z is nonempty, then reverse the orientation of a directed path in Gy from RS to
RT . Thus the size of the corresponding matching increases by 1.
If RT ∩ Z is empty, then let
∆ := min{c(i, j) − y(i) − y(j) : i ∈ Z ∩ S, j ∈ T \ Z}.
∆ is positive because there are no tight edges between Z ∩ S and T \ Z. Increase y by ∆ on
the vertices of Z ∩ S and decrease y by ∆ on the vertices of Z ∩ T . The resulting y is still a
potential, and although the graph Gy changes, it still contains M (see the next subsections).
We orient the new edges from S to T. By the definition of ∆ the set Z of vertices reachable
from RS increases (note that the number of tight edges does not necessarily increase).
We repeat these steps until M is a perfect matching, in which case it gives a minimum cost assignment. The running time of this version of the method is O(n^4): M is augmented n times, and in a phase where M is unchanged, there are at most n potential changes (since Z increases every time). The time sufficient for a potential change is O(n^2).
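The O(n^4) bound can be improved to O(n^3) by maintaining, for each unvisited column, the minimum slack seen so far, so each potential change costs O(n) instead of O(n^2). A compact array-based sketch of that variant follows; this is a standard shortest-augmenting-path formulation, not the exact presentation above, and the name `hungarian` is our own:

```python
def hungarian(cost):
    # Minimum-cost perfect assignment for an n x n cost matrix
    # (list of lists). Returns (total_cost, assignment), where
    # assignment[i] is the column assigned to row i.
    n = len(cost)
    INF = float("inf")
    u = [0] * (n + 1)      # row potentials
    v = [0] * (n + 1)      # column potentials
    p = [0] * (n + 1)      # p[j]: row matched to column j (0 = free)
    way = [0] * (n + 1)    # alternating-path bookkeeping
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)   # minimum slack per column
        used = [False] * (n + 1)
        while True:
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            # Adjust potentials by the smallest slack; tight edges stay tight.
            for j in range(n + 1):
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:   # reached a free column: augment
                break
        while j0:            # flip the alternating path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    assignment = [0] * n
    for j in range(1, n + 1):
        assignment[p[j] - 1] = j - 1
    total = sum(cost[i][assignment[i]] for i in range(n))
    return total, assignment
```

On the worker/task example of §136.1.1 this returns a total cost of 6 with the identity assignment.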

27 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#Subgraphs
28 https://en.wikipedia.org/wiki/Glossary_of_graph_theory#orientation
29 https://en.wikipedia.org/wiki/Breadth-first_search


136.2.1 Proof that adjusting the potential y leaves M unchanged

To show that every edge in M remains after adjusting y, it suffices to show that for an
arbitrary edge in M, either both of its endpoints, or neither of them, are in Z. To this end
let vu be an edge in M from T to S. It is easy to see that if v is in Z then u must be too,
since every edge in M is tight. Now suppose, toward contradiction, that u ∈ Z but v ∉ Z.
u itself cannot be in RS because it is the endpoint of a matched edge, so there must be
some directed path of tight edges from a vertex in RS to u. This path must avoid v, since
that is by assumption not in Z, so the vertex immediately preceding u in this path is some
other vertex v ′ ∈ T . v ′ u is a tight edge from T to S and is thus in M. But then M contains
two edges that share the vertex u, contradicting the fact that M is a matching. Thus every
edge in M has either both endpoints or neither endpoint in Z.

136.2.2 Proof that y remains a potential

To show that y remains a potential after being adjusted, it suffices to show that no edge
has its total potential increased beyond its cost. This is already established for edges
in M by the preceding paragraph, so consider an arbitrary edge uv from S to T. If y(u) is
increased by ∆, then either v ∈ Z ∩ T , in which case y(v) is decreased by ∆, leaving the total
potential of the edge unchanged, or v ∈ T \ Z, in which case the definition of ∆ guarantees
that y(u) + y(v) + ∆ ≤ c(u, v). Thus y remains a potential.

136.3 Matrix interpretation


Given n workers and n tasks, and an n×n matrix containing the cost of assigning each worker
to a task, find the cost-minimizing assignment.
First the problem is written in matrix form as given below

a1 a2 a3 a4
b1 b2 b3 b4
c1 c2 c3 c4
d1 d2 d3 d4


1432
Matrix interpretation

where a, b, c and d are the workers who have to perform tasks 1, 2, 3 and 4. a1, a2, a3, a4
denote the penalties incurred when worker ”a” does task 1, 2, 3, 4 respectively. The same
holds true for the other symbols as well. The matrix is square, so each worker can perform
only one task.
Step 1
Then we perform row operations on the matrix: the lowest of all ai (for i = 1 to 4) is
subtracted from each element in that row. This produces at least one zero in that row
(multiple zeros appear when two equal elements are both the lowest in their row). The
procedure is repeated for all rows. We now have a matrix with at least one zero per row.
Next we try to assign tasks to agents such that each agent performs only one task and the
penalty incurred in each case is zero. This is illustrated below.

0 a2' a3' a4'
b1' b2' b3' 0
c1' 0 c3' c4'
d1' d2' 0 d4'

The zeros that are indicated as 0 are the assigned tasks.


Step 2
Sometimes it may turn out that the matrix at this stage cannot be used for assigning, as is
the case for the matrix below.

0 a2' a3' a4'
b1' b2' b3' 0
0 c2' c3' c4'
d1' 0 d3' d4'

In the above case, no assignment can be made. Note that task 1 is done efficiently by both
agents a and c, but both cannot be assigned the same task. Also note that no one does task
3 efficiently. To overcome this, we repeat the above procedure for all columns (i.e. the
minimum element in each column is subtracted from all the elements in that column) and
then check if an assignment is possible.
In most situations this will give the result, but if it is still not possible we need to keep
going.
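The row and column reductions of Steps 1 and 2 are straightforward to express in code. The sketch below is illustrative (the 3×3 cost matrix and the function names are assumptions, not from the article); it reduces the matrix and then searches by brute force for an assignment whose reduced penalties are all zero.

```python
# Illustrative sketch of Steps 1-2 on a hypothetical 3x3 cost matrix.
from itertools import permutations

def reduce_matrix(m):
    # Step 1: subtract each row's minimum from every element of the row.
    m = [[x - min(row) for x in row] for row in m]
    # Step 2: subtract each column's minimum from every element of the column.
    col_min = [min(col) for col in zip(*m)]
    return [[x - c for x, c in zip(row, col_min)] for row in m]

def zero_assignment(m):
    # Brute-force search for a permutation whose entries are all zero.
    n = len(m)
    for perm in permutations(range(n)):
        if all(m[i][perm[i]] == 0 for i in range(n)):
            return perm
    return None  # covering lines (Step 3) would be needed in this case

cost = [[2, 1, 3],
        [3, 2, 1],
        [1, 3, 2]]
reduced = reduce_matrix(cost)
assignment = zero_assignment(reduced)  # worker i gets task assignment[i]
```

For this matrix the reductions already expose a zero-penalty assignment (worker a → task 2, b → task 3, c → task 1, total original cost 3); when `zero_assignment` returns None, Steps 3 and 4 are required.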
Step 3
All zeros in the matrix must be covered by marking as few rows and/or columns as possible.
The following procedure is one way to accomplish this:


First, assign as many tasks as possible.


• Row 1 has one zero, so it is assigned. The 0 in row 3 is crossed out because it is in the
same column.
• Row 2 has one zero, so it is assigned.
• Row 3's only zero has been crossed out, so nothing is assigned.
• Row 4 has two uncrossed zeros. Either one can be assigned, and the other zero is crossed
out.
Alternatively, the 0 in row 3 may be assigned, causing the 0 in row 1 to be crossed instead.

0' a2' a3' a4'
b1' b2' b3' 0'
0 c2' c3' c4'
d1' 0' 0 d4'

Now to the drawing part.


• Mark all rows having no assignments (row 3).
• Mark all columns having zeros in newly marked row(s) (column 1).
• Mark all rows having assignments in newly marked columns (row 1).
• Repeat the steps outlined in the previous 2 bullets until there are no new rows or columns
being marked.

   ×
0' a2' a3' a4'   ×
b1' b2' b3' 0'
0 c2' c3' c4'    ×
d1' 0' 0 d4'

(The × above column 1 marks that column; the × to the right of rows 1 and 3 marks those rows.)

Now draw lines through all marked columns and unmarked rows: here, a vertical line through
column 1 and horizontal lines through rows 2 and 4, covering all the 0s with three lines.

   ×
0' a2' a3' a4'   ×
b1' b2' b3' 0'   (covered by a line)
0 c2' c3' c4'    ×
d1' 0' 0 d4'     (covered by a line)

The aforementioned detailed description is just one way to draw the minimum number of
lines to cover all the 0s. Other methods work as well.
Step 4
From the elements that are not covered by any line, find the lowest value. Subtract it from
every uncovered element and add it to every element covered by two lines.


Repeat steps 3–4 until an assignment is possible; this is when the minimum number of lines
used to cover all the 0s is equal to max(number of people, number of tasks), assuming
dummy rows or columns (usually filled with the maximum cost) are used when the number of
people differs from the number of tasks.
In effect, each repetition exposes the next-smallest cost among the remaining choices; the
procedure is repeated until the workers can be distinguished in terms of least cost.
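The Step 4 adjustment can be sketched as a single function. This is an illustrative sketch, not the article's code: the example matrix is the row- and column-reduced form of a hypothetical cost matrix [[4,1,3],[2,0,5],[3,2,2]], for which the minimum line cover found in Step 3 is column 1 and row 2 (0-based indices).

```python
# Illustrative sketch of the Step 4 adjustment (0-based row/column indices).

def adjust(m, covered_rows, covered_cols):
    """Subtract the smallest uncovered value from every uncovered element
    and add it to every element covered by two lines."""
    n = len(m)
    d = min(m[i][j] for i in range(n) for j in range(n)
            if i not in covered_rows and j not in covered_cols)
    out = [row[:] for row in m]
    for i in range(n):
        for j in range(n):
            times_covered = (i in covered_rows) + (j in covered_cols)
            if times_covered == 0:
                out[i][j] -= d
            elif times_covered == 2:
                out[i][j] += d
    return out

# Reduced form of the hypothetical cost matrix [[4,1,3],[2,0,5],[3,2,2]];
# its zeros cannot all be matched, and the minimum cover uses two lines:
# column 1 and row 2.
reduced = [[2, 0, 2],
           [1, 0, 5],
           [0, 0, 0]]
adjusted = adjust(reduced, covered_rows={2}, covered_cols={1})
# adjusted now admits the zero assignment 0->1, 1->0, 2->2, which has
# original cost 1 + 2 + 2 = 5.
```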

136.4 Bibliography
• R.E. Burkard, M. Dell'Amico, S. Martello: Assignment Problems (Revised reprint).
SIAM, Philadelphia (PA.) 2012. ISBN34 978-1-61197-222-135
• M. Fischetti, ”Lezioni di Ricerca Operativa”, Edizioni Libreria Progetto Padova, Italia,
1995.
• R. Ahuja36 , T. Magnanti37 , J. Orlin38 , ”Network Flows”, Prentice Hall, 1993.
• S. Martello, ”Jeno Egerváry: from the origins of the Hungarian algorithm to satellite
communication”. Central European Journal of Operational Research 18, 47–58, 2010

136.5 References
1. Harold W. Kuhn, ”The Hungarian Method for the assignment problem”, Naval Re-
search Logistics Quarterly39 , 2: 83–97, 1955. Kuhn's original publication.
2. Harold W. Kuhn, ”Variants of the Hungarian method for assignment problems”, Naval
Research Logistics Quarterly, 3: 253–258, 1956.
3. J. Munkres, ”Algorithms for the Assignment and Transportation Problems”, Journal
of the Society for Industrial and Applied Mathematics40 , 5(1):32–38, 1957 March.
4. EJ; M, KR (1 A 1972). ”T I-
  A E  N F P”41 . Journal
of the ACM (JACM). doi42 :10.1145/321694.32169943 .
5. T, N. (1971). ”O     
   ”44 . Networks. 1 (2): 173–194.
doi45 :10.1002/net.323001020646 . ISSN47 1097-003748 .

34 https://en.wikipedia.org/wiki/ISBN_(identifier)
35 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61197-222-1
36 https://en.wikipedia.org/wiki/Ravindra_K._Ahuja
37 https://en.wikipedia.org/wiki/Thomas_L._Magnanti
38 https://en.wikipedia.org/wiki/James_B._Orlin
39 https://en.wikipedia.org/wiki/Naval_Research_Logistics_Quarterly
https://en.wikipedia.org/wiki/Journal_of_the_Society_for_Industrial_and_Applied_
40
Mathematics
41 https://dl.acm.org/doi/abs/10.1145/321694.321699
42 https://en.wikipedia.org/wiki/Doi_(identifier)
43 https://doi.org/10.1145%2F321694.321699
44 https://onlinelibrary.wiley.com/doi/abs/10.1002/net.3230010206
45 https://en.wikipedia.org/wiki/Doi_(identifier)
46 https://doi.org/10.1002%2Fnet.3230010206
47 https://en.wikipedia.org/wiki/ISSN_(identifier)
48 http://www.worldcat.org/issn/1097-0037


6. J, R.; V, A. (D 1987). ”A   


       ”. Computing.
38 (4): 325–340. doi49 :10.1007/BF0227871050 .
7. 51

136.6 External links


• Bruff, Derek, The Assignment Problem and the Hungarian Method52 [dead link53 ] (matrix
formalism).
• Mordecai J. Golin, Bipartite Matching and the Hungarian Method54 (bigraph formalism),
Course Notes, Hong Kong University of Science and Technology55 .
• Hungarian maximum matching algorithm56 (both formalisms), in Brilliant website.
• R. A. Pilgrim57 , Munkres' Assignment Algorithm. Modified for Rectangular Matrices58 ,
Course notes, Murray State University59 .
• Mike Dawes60 , The Optimal Assignment Problem61 , Course notes, University of Western
Ontario62 .
• On Kuhn's Hungarian Method – A tribute from Hungary63 , András Frank64 , Egervary
Research Group, Pazmany P. setany 1/C, H1117, Budapest, Hungary.
• Lecture: Fundamentals of Operations Research - Assignment Problem - Hungarian Algo-
rithm65 , Prof. G. Srinivasan, Department of Management Studies, IIT Madras.
• Extension: Assignment sensitivity analysis (with O(n^4) time complexity)66 , Liu, Shell.
• Solve any Assignment Problem online67 , provides a step by step explanation of the Hun-
garian Algorithm.

136.6.1 Implementations

Note that not all of these satisfy the O(n3 ) time complexity, even if they claim so. Some
may contain errors, implement the slower O(n4 ) algorithm, or have other inefficiencies. In
the worst case, a code example linked from Wikipedia could later be modified to include

49 https://en.wikipedia.org/wiki/Doi_(identifier)
50 https://doi.org/10.1007%2FBF02278710
51 http://www.lix.polytechnique.fr/~ollivier/JACOBI/presentationlEngl.htm
52 http://www.math.harvard.edu/archive/20_spring_05/handouts/assignment_overheads.pdf
54 http://www.cse.ust.hk/~golin/COMP572/Notes/Matching.pdf
55 https://en.wikipedia.org/wiki/Hong_Kong_University_of_Science_and_Technology
56 https://brilliant.org/wiki/hungarian-matching
57 https://en.wikipedia.org/w/index.php?title=R._A._Pilgrim&action=edit&redlink=1
58 http://csclab.murraystate.edu/bob.pilgrim/445/munkres.html
59 https://en.wikipedia.org/wiki/Murray_State_University
60 https://en.wikipedia.org/wiki/Mike_Dawes
https://web.archive.org/web/20060812030313/http://www.math.uwo.ca/~mdawes/courses/
61
344/kuhn-munkres.pdf
62 https://en.wikipedia.org/wiki/University_of_Western_Ontario
63 http://www.cs.elte.hu/egres/tr/egres-04-14.pdf
64 https://en.wikipedia.org/wiki/Andr%C3%A1s_Frank
65 https://www.youtube.com/watch?v=BUGIhEecipE
66 http://www.roboticsproceedings.org/rss06/p16.html
67 http://www.hungarianalgorithm.com/solve.php


exploit code. Verification and benchmarking are necessary when using such code examples
from unknown authors.
• C implementation claiming O(n3 ) time complexity68
• Java implementation claiming O(n3 ) time complexity69
• Matlab implementation claiming O(n3 ) time complexity70 (public domain)
• Python implementation71
• Ruby implementation with unit tests72
• C# implementation claiming O(n3 ) time complexity73
• D implementation with unit tests (port of a Java version claiming O(n3 ))74
• Online interactive implementation75
• Serial and parallel implementations.76
• Matlab and C77
• Perl implementation78
• C++ implementation79
• C++ implementation claiming O(n3 ) time complexity80 (BSD style open source licensed)
• MATLAB implementation81
• C implementation82
• JavaScript implementation with unit tests (port of a Java version claiming O(n3 ) time
complexity)83
• Clue R package proposes an implementation, solve_LSAP84
• Node.js implementation on GitHub85
• Python implementation in scipy package86

68 https://github.com/maandree/hungarian-algorithm-n3/blob/master/hungarian.c
https://github.com/KevinStern/software-and-algorithms/blob/master/src/main/java/
69
blogspot/software_and_algorithms/stern_library/optimization/HungarianAlgorithm.java
https://github.com/USNavalResearchLaboratory/TrackerComponentLibrary/blob/master/
70
Assignment%20Algorithms/2D%20Assignment/assign2D.m
71 http://software.clapper.org/munkres/
https://github.com/evansenter/gene/blob/f515fd73cb9d6a22b4d4b146d70b6c2ec6a5125b/
72
objects/extensions/hungarian.rb
73 https://github.com/antifriz/hungarian-algorithm-n3
74 http://www.fantascienza.net/leonardo/so/hungarian.d
75 http://www.ifors.ms.unimelb.edu.au/tutorial/hungarian/welcome_frame.html
76 http://www.netlib.org/utk/lsi/pcwLSI/text/node220.html
77 http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=6543
78 https://metacpan.org/module/Algorithm::Munkres
79 https://github.com/saebyn/munkres-cpp
80 http://dlib.net/optimization.html#max_cost_assignment
http://www.mathworks.com/matlabcentral/fileexchange/20652-hungarian-algorithm-for-
81
linear-assignment-problems--v2-3-
82 https://launchpad.net/lib-bipartite-match
83 https://github.com/Gerjo/esoteric/blob/master/Hungarian.js
84 https://cran.r-project.org/web/packages/clue/
85 https://github.com/addaleax/munkres-js
https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.linear_
86
sum_assignment.html

137 Prüfer sequence

In combinatorial1 mathematics2 , the Prüfer sequence (also Prüfer code or Prüfer


numbers) of a labeled tree3 is a unique sequence4 associated with the tree. The sequence
for a tree on n vertices has length n − 2, and can be generated by a simple iterative al-
gorithm. Prüfer sequences were first used by Heinz Prüfer5 to prove Cayley's formula6 in
1918.[1]

137.1 Algorithm to convert a tree into a Prüfer sequence

One can generate a labeled tree's Prüfer sequence by iteratively removing vertices from the
tree until only two vertices remain. Specifically, consider a labeled tree T with vertices {1,
2, ..., n}. At step i, remove the leaf with the smallest label and set the ith element of the
Prüfer sequence to be the label of this leaf's neighbour.
The Prüfer sequence of a labeled tree is unique and has length n − 2.

1 https://en.wikipedia.org/wiki/Combinatorics
2 https://en.wikipedia.org/wiki/Mathematics
3 https://en.wikipedia.org/wiki/Labeled_tree
4 https://en.wikipedia.org/wiki/Sequence
5 https://en.wikipedia.org/wiki/Heinz_Pr%C3%BCfer
6 https://en.wikipedia.org/wiki/Cayley%27s_formula


137.1.1 Example

Figure 331 A labeled tree with Prüfer sequence {4,4,4,5}.

Consider the above algorithm run on the tree shown to the right. Initially, vertex 1 is
the leaf with the smallest label, so it is removed first and 4 is put in the Prüfer sequence.
Vertices 2 and 3 are removed next, so 4 is added twice more. Vertex 4 is now a leaf and
has the smallest label, so it is removed and we append 5 to the sequence. We are left with
only two vertices, so we stop. The tree's sequence is {4,4,4,5}.
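The removal procedure above is short to implement. The sketch below is illustrative; the edge list is one tree consistent with the worked example (leaves 1, 2, 3 attached to vertex 4, then edges 4–5 and 5–6).

```python
# Illustrative sketch of the tree-to-Prüfer-sequence algorithm.

def prufer_sequence(n, edges):
    """Prüfer sequence of a labeled tree on vertices 1..n given its edges."""
    neighbors = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    seq = []
    for _ in range(n - 2):
        # Remove the leaf with the smallest label...
        leaf = min(v for v in neighbors if len(neighbors[v]) == 1)
        # ...and record its unique neighbour.
        parent = next(iter(neighbors[leaf]))
        seq.append(parent)
        neighbors[parent].remove(leaf)
        del neighbors[leaf]
    return seq

edges = [(1, 4), (2, 4), (3, 4), (4, 5), (5, 6)]
seq = prufer_sequence(6, edges)  # [4, 4, 4, 5], matching the example
```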


137.2 Algorithm to convert a Prüfer sequence into a tree

Let {a[1], a[2], ..., a[n]} be a Prüfer sequence:


The tree will have n+2 nodes, numbered from 1 to n+2. For each node set its degree to the
number of times it appears in the sequence plus 1. For instance, in pseudo-code:
Convert-Prüfer-to-Tree(a)
1 n ← length[a]
2 T ← a graph with n + 2 isolated nodes, numbered 1 to n + 2
3 degree ← an array of integers
4 for each node i in T do
5 degree[i] ← 1
6 for each value i in a do
7 degree[i] ← degree[i] + 1

Next, for each number in the sequence a[i], find the first (lowest-numbered) node, j, with
degree equal to 1, add the edge (j, a[i]) to the tree, and decrement the degrees of j and
a[i]. In pseudo-code:
8 for each value i in a do
9 for each node j in T do
10 if degree[j] = 1 then
11 Insert edge[i, j] into T
12 degree[i] ← degree[i] - 1
13 degree[j] ← degree[j] - 1
14 break

At the end of this loop two nodes with degree 1 will remain (call them u, v). Lastly, add
the edge (u,v) to the tree.[2]
15 u ← v ← 0
16 for each node i in T
17 if degree[i] = 1 then
18 if u = 0 then
19 u←i
20 else
21 v←i
22 break
23 Insert edge[u, v] into T
24 degree[u] ← degree[u] - 1
25 degree[v] ← degree[v] - 1
26 return T
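The pseudocode translates directly into Python. This is an illustrative sketch: vertex numbering 1..n+2 follows the text, and the quadratic leaf search is kept exactly as written rather than optimized.

```python
# Direct Python port of the pseudocode above (comments cite its line numbers).

def prufer_to_tree(a):
    """Rebuild the labeled tree (as an edge list) from Prüfer sequence a."""
    n = len(a)
    degree = {v: 1 for v in range(1, n + 3)}   # lines 1-5
    for value in a:                            # lines 6-7
        degree[value] += 1
    edges = []
    for value in a:                            # lines 8-14
        for j in sorted(degree):
            if degree[j] == 1:
                edges.append((value, j))
                degree[value] -= 1
                degree[j] -= 1
                break
    u, v = [x for x in sorted(degree) if degree[x] == 1]  # lines 15-22
    edges.append((u, v))                       # line 23
    return edges

tree = prufer_to_tree([4, 4, 4, 5])
# Recovers the example tree: edges {1,4}, {2,4}, {3,4}, {4,5}, {5,6}.
```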

137.3 Cayley's formula

The Prüfer sequence of a labeled tree on n vertices is a unique sequence of length n − 2 on


the labels 1 to n. For a given sequence S of length n−2 on the labels 1 to n, there is a
unique labeled tree whose Prüfer sequence is S.
The immediate consequence is that Prüfer sequences provide a bijection7 between the set
of labeled trees on n vertices and the set of sequences of length n − 2 on the labels 1 to n.

7 https://en.wikipedia.org/wiki/Bijection


The latter set has size $n^{n-2}$, so the existence of this bijection proves Cayley's formula8 , i.e.
that there are $n^{n-2}$ labeled trees on n vertices.

137.4 Other applications[3]


• Cayley's formula can be strengthened to prove the following claim:
The number of spanning trees in a complete graph Kn with a degree di specified for each
vertex i is equal to the multinomial coefficient9
$$\binom{n-2}{d_1-1,\, d_2-1,\, \ldots,\, d_n-1} = \frac{(n-2)!}{(d_1-1)!\,(d_2-1)!\cdots(d_n-1)!}.$$
The proof follows by observing that in the Prüfer sequence, vertex i appears exactly
(di − 1) times.
• Cayley's formula can be generalized: a labeled tree is in fact a spanning tree10 of the
labeled complete graph11 . By placing restrictions on the enumerated Prüfer sequences,
similar methods can give the number of spanning trees of a complete bipartite graph12 . If
G is the complete bipartite graph with vertices 1 to n1 in one partition and vertices n1 + 1
to n in the other partition, the number of labeled spanning trees of G is $n_1^{\,n_2-1} n_2^{\,n_1-1}$, where
n2 = n − n1 .
• Generating uniformly distributed random Prüfer sequences and converting them into
the corresponding trees is a straightforward method of generating uniformly distributed
random labelled trees.
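The last bullet can be sketched directly: draw the n − 2 sequence entries uniformly at random and decode them. This is illustrative code (standard library only), not from the article.

```python
# Illustrative sketch: uniform random labeled trees via random Prüfer sequences.
import random

def random_labeled_tree(n, seed=None):
    """Uniformly random labeled tree on vertices 1..n, as an edge list."""
    rng = random.Random(seed)
    a = [rng.randint(1, n) for _ in range(n - 2)]  # uniform Prüfer sequence
    degree = {v: 1 for v in range(1, n + 1)}
    for value in a:
        degree[value] += 1
    edges = []
    for value in a:
        j = min(v for v in degree if degree[v] == 1)  # smallest leaf
        edges.append((j, value))
        degree[value] -= 1
        degree[j] -= 1
    u, v = [x for x in degree if degree[x] == 1]
    edges.append((u, v))
    return edges

tree = random_labeled_tree(8, seed=0)  # a spanning tree: 7 edges on vertices 1..8
```

Because Prüfer sequences are in bijection with labeled trees, sampling the sequence uniformly samples the tree uniformly.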

137.5 References
1. P, H. (1918). ”N B  S  P”. Arch.
Math. Phys. 27: 742–744.
2. J G; B A. J; G R. R; F R.
(2001). ”P : A      
 ”13 (PDF). Proceedings of the Genetic and Evolutionary Com-
putation Conference (GECCO-2001): 343–350. Archived from the original14 (PDF)
on 2006-09-26.
3. K, H. (2003). ”A E   P C  A
 C G  T B”. Graphs and Combinatorics15 . 19:
231–239. doi16 :10.1007/s00373-002-0499-317 .

8 https://en.wikipedia.org/wiki/Cayley%27s_formula
9 https://en.wikipedia.org/wiki/Multinomial_coefficient
10 https://en.wikipedia.org/wiki/Spanning_tree_(mathematics)
11 https://en.wikipedia.org/wiki/Complete_graph
12 https://en.wikipedia.org/wiki/Bipartite_graph
https://web.archive.org/web/20060926171652/http://www.ads.tuwien.ac.at/publications/
13
bib/pdf/gottlieb-01.pdf
14 http://www.ads.tuwien.ac.at/publications/bib/pdf/gottlieb-01.pdf
15 https://en.wikipedia.org/wiki/Graphs_and_Combinatorics
16 https://en.wikipedia.org/wiki/Doi_(identifier)
17 https://doi.org/10.1007%2Fs00373-002-0499-3


137.6 External links


• Prüfer code18 – from MathWorld19

18 http://mathworld.wolfram.com/PrueferCode.html
19 https://en.wikipedia.org/wiki/MathWorld

138 Graph drawing

This article is about the general subject of graph drawing. For the annual research sympo-
sium, see International Symposium on Graph Drawing1 .

Figure 332 Graphic representation of a minute fraction of the WWW, demonstrating
hyperlinks.

Graph drawing is an area of mathematics2 and computer science3 combining methods


from geometric graph theory4 and information visualization5 to derive two-dimensional de-

1 https://en.wikipedia.org/wiki/International_Symposium_on_Graph_Drawing
2 https://en.wikipedia.org/wiki/Mathematics
3 https://en.wikipedia.org/wiki/Computer_science
4 https://en.wikipedia.org/wiki/Geometric_graph_theory
5 https://en.wikipedia.org/wiki/Information_visualization


pictions of graphs6 arising from applications such as social network analysis7 , cartography8 ,
linguistics9 , and bioinformatics10 .[1]
A drawing of a graph or network diagram is a pictorial representation of the vertices11
and edges12 of a graph. This drawing should not be confused with the graph itself: very
different layouts can correspond to the same graph.[2] In the abstract, all that matters is
which pairs of vertices are connected by edges. In the concrete, however, the arrangement of
these vertices and edges within a drawing affects its understandability, usability, fabrication
cost, and aesthetics13 .[3] The problem gets worse if the graph changes over time by adding
and deleting edges (dynamic graph drawing) and the goal is to preserve the user's mental
map.[4]

6 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
7 https://en.wikipedia.org/wiki/Social_network_analysis
8 https://en.wikipedia.org/wiki/Cartography
9 https://en.wikipedia.org/wiki/Linguistics
10 https://en.wikipedia.org/wiki/Bioinformatics
11 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
12 https://en.wikipedia.org/wiki/Edge_(graph_theory)
13 https://en.wikipedia.org/wiki/Aesthetics


138.1 Graphical conventions

Figure 333 Directed graph with arrowheads showing edge directions

Graphs are frequently drawn as node–link diagrams in which the vertices are represented as
disks, boxes, or textual labels and the edges are represented as line segments14 , polylines15 ,
or curves in the Euclidean plane16 .[3] Node–link diagrams can be traced back to the 13th

14 https://en.wikipedia.org/wiki/Line_segment
15 https://en.wikipedia.org/wiki/Polygonal_chain
16 https://en.wikipedia.org/wiki/Euclidean_plane


century work of Ramon Llull17 , who drew diagrams of this type for complete graphs18 in
order to analyze all pairwise combinations among sets of metaphysical concepts.[5]
In the case of directed graphs19 , arrowheads20 form a commonly used graphical convention
to show their orientation21 ;[2] however, user studies have shown that other conventions such
as tapering provide this information more effectively.[6] Upward planar drawing22 uses the
convention that every edge is oriented from a lower vertex to a higher vertex, making
arrowheads unnecessary.[7]
Alternative conventions to node–link diagrams include adjacency representations such as
circle packings23 , in which vertices are represented by disjoint regions in the plane and
edges are represented by adjacencies between regions; intersection representations24 in which
vertices are represented by non-disjoint geometric objects and edges are represented by their
intersections; visibility representations in which vertices are represented by regions in the
plane and edges are represented by regions that have an unobstructed line of sight to
each other; confluent drawings, in which edges are represented as smooth curves within
mathematical train tracks25 ; fabrics, in which nodes are represented as horizontal lines and
edges as vertical lines;[8] and visualizations of the adjacency matrix26 of the graph.

138.2 Quality measures

Many different quality measures have been defined for graph drawings, in an attempt to find
objective means of evaluating their aesthetics and usability.[9] In addition to guiding the
choice between different layout methods for the same graph, some layout methods attempt
to directly optimize these measures.

17 https://en.wikipedia.org/wiki/Ramon_Llull
18 https://en.wikipedia.org/wiki/Complete_graph
19 https://en.wikipedia.org/wiki/Directed_graph
20 https://en.wikipedia.org/wiki/Arrow_(symbol)
21 https://en.wikipedia.org/wiki/Orientability
22 https://en.wikipedia.org/wiki/Upward_planar_drawing
23 https://en.wikipedia.org/wiki/Circle_packing_theorem
24 https://en.wikipedia.org/wiki/Intersection_graph
25 https://en.wikipedia.org/wiki/Train_track_(mathematics)
26 https://en.wikipedia.org/wiki/Adjacency_matrix


Figure 334 Planar graph drawn without overlapping edges

• The crossing number27 of a drawing is the number of pairs of edges that cross each
other. If the graph is planar28 , then it is often convenient to draw it without any edge
intersections; that is, in this case, a graph drawing represents a graph embedding29 .
However, nonplanar graphs frequently arise in applications, so graph drawing algorithms
must generally allow for edge crossings.[10]

27 https://en.wikipedia.org/wiki/Crossing_number_(graph_theory)
28 https://en.wikipedia.org/wiki/Planar_graph
29 https://en.wikipedia.org/wiki/Graph_embedding


• The area30 of a drawing is the size of its smallest bounding box31 , relative to the closest
distance between any two vertices. Drawings with smaller area are generally preferable
to those with larger area, because they allow the features of the drawing to be shown at
greater size and therefore more legibly. The aspect ratio32 of the bounding box may also
be important.
• Symmetry display is the problem of finding symmetry groups33 within a given graph,
and finding a drawing that displays as much of the symmetry as possible. Some layout
methods automatically lead to symmetric drawings; alternatively, some drawing methods
start by finding symmetries in the input graph and using them to construct a drawing.[11]
• It is important that edges have shapes that are as simple as possible, to make it easier for
the eye to follow them. In polyline drawings, the complexity of an edge may be measured
by its number of bends34 , and many methods aim to provide drawings with few total
bends or few bends per edge. Similarly for spline curves the complexity of an edge may
be measured by the number of control points on the edge.
• Several commonly used quality measures concern lengths of edges: it is generally desirable
to minimize the total length of the edges as well as the maximum length of any edge.
Additionally, it may be preferable for the lengths of edges to be uniform rather than
highly varied.
• Angular resolution35 is a measure of the sharpest angles in a graph drawing. If a graph
has vertices with high degree36 then it necessarily will have small angular resolution, but
the angular resolution can be bounded below by a function of the degree.[12]
• The slope number37 of a graph is the minimum number of distinct edge slopes needed
in a drawing with straight line segment edges (allowing crossings). Cubic graphs38 have
slope number at most four, but graphs of degree five may have unbounded slope number;
it remains open whether the slope number of degree-4 graphs is bounded.[12]

30 https://en.wikipedia.org/wiki/Area_(graph_drawing)
31 https://en.wikipedia.org/wiki/Bounding_box
32 https://en.wikipedia.org/wiki/Aspect_ratio
33 https://en.wikipedia.org/wiki/Graph_automorphism
34 https://en.wikipedia.org/wiki/Bend_minimization
35 https://en.wikipedia.org/wiki/Angular_resolution_(graph_drawing)
36 https://en.wikipedia.org/wiki/Degree_(graph_theory)
37 https://en.wikipedia.org/wiki/Slope_number
38 https://en.wikipedia.org/wiki/Cubic_graph


138.3 Layout methods

Figure 335 A force-based network visualization.[13]

There are many different graph layout strategies:


• In force-based layout39 systems, the graph drawing software modifies an initial vertex
placement by continuously moving the vertices according to a system of forces based
on physical metaphors related to systems of springs40 or molecular mechanics41 . Typi-
cally, these systems combine attractive forces between adjacent vertices with repulsive
forces between all pairs of vertices, in order to seek a layout in which edge lengths are
small while vertices are well-separated. These systems may perform gradient descent42
based minimization of an energy function43 , or they may translate the forces directly into
velocities or accelerations for the moving vertices.[14]

39 https://en.wikipedia.org/wiki/Force-based_layout
40 https://en.wikipedia.org/wiki/Spring_(device)
41 https://en.wikipedia.org/wiki/Molecular_mechanics
42 https://en.wikipedia.org/wiki/Gradient_descent
43 https://en.wikipedia.org/wiki/Energy_function


• Spectral layout44 methods use as coordinates the eigenvectors45 of a matrix46 such as the
Laplacian47 derived from the adjacency matrix48 of the graph.[15]
• Orthogonal layout methods, which allow the edges of the graph to run horizontally or
vertically, parallel to the coordinate axes of the layout. These methods were originally
designed for VLSI49 and PCB50 layout problems but they have also been adapted for
graph drawing. They typically involve a multiphase approach in which an input graph
is planarized by replacing crossing points by vertices, a topological embedding of the
planarized graph is found, edge orientations are chosen to minimize bends, vertices are
placed consistently with these orientations, and finally a layout compaction stage reduces
the area of the drawing.[16]
• Tree layout algorithms show a rooted tree51 -like formation, suitable for trees52 .
Often, in a technique called ”balloon layout”, the children of each node in the tree are
drawn on a circle surrounding the node, with the radii of these circles diminishing at
lower levels in the tree so that these circles do not overlap.[17]
• Layered graph drawing53 methods (often called Sugiyama-style drawing) are best suited
for directed acyclic graphs54 or graphs that are nearly acyclic, such as the graphs of
dependencies between modules or functions in a software system. In these methods,
the nodes of the graph are arranged into horizontal layers using methods such as the
Coffman–Graham algorithm55 , in such a way that most edges go downwards from one
layer to the next; after this step, the nodes within each layer are arranged in order to
minimize crossings.[18]

44 https://en.wikipedia.org/wiki/Spectral_layout
45 https://en.wikipedia.org/wiki/Eigenvector
46 https://en.wikipedia.org/wiki/Matrix_(mathematics)
47 https://en.wikipedia.org/wiki/Discrete_Laplace_operator
48 https://en.wikipedia.org/wiki/Adjacency_matrix
49 https://en.wikipedia.org/wiki/VLSI
50 https://en.wikipedia.org/wiki/Printed_circuit_board
51 https://en.wikipedia.org/wiki/Tree_structure
52 https://en.wikipedia.org/wiki/Tree_(graph_theory)
53 https://en.wikipedia.org/wiki/Layered_graph_drawing
54 https://en.wikipedia.org/wiki/Directed_acyclic_graph
55 https://en.wikipedia.org/wiki/Coffman%E2%80%93Graham_algorithm


Figure 336 Arc diagram

• Arc diagrams56 , a layout style dating back to the 1960s,[19] place vertices on a line; edges
may be drawn as semicircles above or below the line, or as smooth curves linked together
from multiple semicircles.
• Circular layout57 methods place the vertices of the graph on a circle, choosing carefully the
ordering of the vertices around the circle to reduce crossings and place adjacent vertices
close to each other. Edges may be drawn either as chords of the circle or as arcs inside
or outside of the circle. In some cases, multiple circles may be used.[20]
• Dominance drawing58 places vertices in such a way that one vertex is upwards, rightwards,
or both of another if and only if it is reachable59 from the other vertex. In this way, the
layout style makes the reachability relation of the graph visually apparent.[21]
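As a minimal illustration of the force-based approach described in the first bullet, the sketch below implements a Fruchterman–Reingold-style spring embedder. All constants, names, and the example graph are illustrative assumptions, not taken from any particular system.

```python
# Illustrative spring-embedder sketch: repulsion between all vertex pairs,
# attraction along edges, applied as small damped displacement steps.
import math
import random

def force_layout(vertices, edges, iterations=200, k=1.0, seed=0):
    rng = random.Random(seed)
    pos = {v: [rng.random(), rng.random()] for v in vertices}
    for _ in range(iterations):
        disp = {v: [0.0, 0.0] for v in vertices}
        for u in vertices:                    # repulsion between every pair
            for v in vertices:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d                 # repulsive magnitude
                disp[u][0] += f * dx / d
                disp[u][1] += f * dy / d
        for u, v in edges:                    # attraction along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k                     # attractive magnitude
            disp[u][0] -= f * dx / d
            disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d
            disp[v][1] += f * dy / d
        for v in vertices:                    # damped, capped step
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            step = min(d, 0.05)
            pos[v][0] += dx / d * step
            pos[v][1] += dy / d * step
    return pos

pos = force_layout([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
```

Production systems add cooling schedules, spatial data structures for the O(n²) repulsion term, and multilevel coarsening for large graphs.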

56 https://en.wikipedia.org/wiki/Arc_diagram
57 https://en.wikipedia.org/wiki/Circular_layout
58 https://en.wikipedia.org/wiki/Dominance_drawing
59 https://en.wikipedia.org/wiki/Reachability


138.4 Application-specific graph drawings

Graphs and graph drawings arising in other areas of application include


• Sociograms60 , drawings of a social network61 , as often offered by social network analysis
software62[22]
• Hasse diagrams63 , a type of graph drawing specialized to partial orders64[23]
• Dessin d'enfants65 , a type of graph drawing used in algebraic geometry66[24]
• State diagrams67 , graphical representations of finite-state machines68[25]
• Computer network diagrams69 , depictions of the nodes and connections in a computer
network70[26]
• Flowcharts71 and drakon-charts72 , drawings in which the nodes represent the steps of an
algorithm73 and the edges represent control flow74 between steps.
• Data-flow diagrams75 , drawings in which the nodes represent the components of an in-
formation system76 and the edges represent the movement of information from one com-
ponent to another.
• Bioinformatics77 including phylogenetic trees78 , protein–protein interaction79 networks,
and metabolic pathways80 .[27]
In addition, the placement81 and routing82 steps of electronic design automation83 (EDA)
are similar in many ways to graph drawing, as is the problem of greedy embedding84 in
distributed computing85 , and the graph drawing literature includes several results borrowed
from the EDA literature. However, these problems also differ in several important ways: for
instance, in EDA, area minimization and signal length are more important than aesthetics,
and the routing problem in EDA may have more than two terminals per net while the
analogous problem in graph drawing generally only involves pairs of vertices for each edge.

60 https://en.wikipedia.org/wiki/Sociogram
61 https://en.wikipedia.org/wiki/Social_network
62 https://en.wikipedia.org/wiki/Social_network_analysis_software
63 https://en.wikipedia.org/wiki/Hasse_diagram
64 https://en.wikipedia.org/wiki/Partial_order
65 https://en.wikipedia.org/wiki/Dessin_d%27enfant
66 https://en.wikipedia.org/wiki/Algebraic_geometry
67 https://en.wikipedia.org/wiki/State_diagram
68 https://en.wikipedia.org/wiki/Finite-state_machine
69 https://en.wikipedia.org/wiki/Computer_network_diagram
70 https://en.wikipedia.org/wiki/Computer_network
71 https://en.wikipedia.org/wiki/Flowchart
72 https://en.wikipedia.org/wiki/DRAKON
73 https://en.wikipedia.org/wiki/Algorithm
74 https://en.wikipedia.org/wiki/Control_flow
75 https://en.wikipedia.org/wiki/Data-flow_diagram
76 https://en.wikipedia.org/wiki/Information_system
77 https://en.wikipedia.org/wiki/Bioinformatics
78 https://en.wikipedia.org/wiki/Phylogenetic_tree
79 https://en.wikipedia.org/wiki/Protein%E2%80%93protein_interaction
80 https://en.wikipedia.org/wiki/Metabolic_pathway
81 https://en.wikipedia.org/wiki/Placement_(electronic_design_automation)
82 https://en.wikipedia.org/wiki/Routing_(electronic_design_automation)
83 https://en.wikipedia.org/wiki/Electronic_design_automation
84 https://en.wikipedia.org/wiki/Greedy_embedding
85 https://en.wikipedia.org/wiki/Distributed_computing


138.5 Software

Figure 337 A graph drawing interface (Gephi 0.9.1)

Software, systems, and providers of systems for drawing graphs include:


• BioFabric86 , open-source software for visualizing large networks by drawing nodes as horizontal lines.
• Cytoscape87 , open-source software for visualizing molecular interaction networks
• Gephi88 , open-source network analysis and visualization software
• graph-tool89 , a free/libre90 Python91 library for analysis of graphs.
• Graphviz92 , an open-source graph drawing system from AT&T Corporation93[28]
• Linkurious94 , commercial network analysis and visualization software for graph databases95
• Mathematica96 , a general purpose computation tool that includes 2D and 3D graph vi-
sualization and graph analysis tools.[29][30]

86 https://en.wikipedia.org/wiki/BioFabric
87 https://en.wikipedia.org/wiki/Cytoscape
88 https://en.wikipedia.org/wiki/Gephi
89 https://en.wikipedia.org/wiki/Graph-tool
90 https://en.wikipedia.org/wiki/Free_Software
91 https://en.wikipedia.org/wiki/Python_(programming_language)
92 https://en.wikipedia.org/wiki/Graphviz
93 https://en.wikipedia.org/wiki/AT%26T_Corporation
94 https://en.wikipedia.org/wiki/Linkurious
95 https://en.wikipedia.org/wiki/Graph_databases
96 https://en.wikipedia.org/wiki/Mathematica


• Microsoft Automatic Graph Layout97 , open-source .NET library (formerly called GLEE)
for laying out graphs[31]
• NetworkX98 is a Python library for studying graphs and networks.
• Tom Sawyer Software99[32] : Tom Sawyer Perspectives is graphics-based software for building enterprise-class graph and data visualization and analysis applications. It is a Software Development Kit (SDK) with a graphics-based design and preview environment.
• Tulip (software)100 ,[33] an open source data visualization tool
• yEd101 , a graph editor with graph layout functionality[34]
• PGF/TikZ102 3.0 with the graphdrawing package (requires LuaTeX103 ).[35]
• LaNet-vi104 , an open-source large network visualization software
• Edraw Max105 , 2D business technical diagramming software

138.6 References

Footnotes
1. Di Battista et al. (1994)106 , pp. vii–viii; Herman, Melançon & Marshall (2000)107 ,
Section 1.1, ”Typical Application Areas”.
2. Di Battista et al. (1994)108 , p. 6.
3. Di Battista et al. (1994)109 , p. viii.
4. Misue et al. (1995)110
5. K, D E.111 (2013), ”T    ”, 
W, R; W, J J. (.), Combinatorics: Ancient and Modern,
Oxford University Press, pp. 7–37.
6. Holten & van Wijk (2009)112 ; Holten et al. (2011)113 .
7. Garg & Tamassia (1995)114 .
8. Longabaugh (2012)115 .

97 https://en.wikipedia.org/wiki/Microsoft_Automatic_Graph_Layout
98 https://en.wikipedia.org/wiki/NetworkX
99 https://en.wikipedia.org/wiki/Tom_Sawyer_Software
100 https://en.wikipedia.org/wiki/Tulip_(software)
101 https://en.wikipedia.org/wiki/YEd
102 https://en.wikipedia.org/wiki/PGF/TikZ
103 https://en.wikipedia.org/wiki/LuaTeX
104 https://en.wikipedia.org/wiki/LaNet-vi
105 https://en.wikipedia.org/wiki/Edraw_Max
106 #CITEREFDi_BattistaEadesTamassiaTollis1994
107 #CITEREFHermanMelan%C3%A7onMarshall2000
108 #CITEREFDi_BattistaEadesTamassiaTollis1994
109 #CITEREFDi_BattistaEadesTamassiaTollis1994
110 #CITEREFMisueEadesLaiSugiyama1995
111 https://en.wikipedia.org/wiki/Donald_Knuth
112 #CITEREFHoltenvan_Wijk2009
113 #CITEREFHoltenIsenbergvan_WijkFekete2011
114 #CITEREFGargTamassia1995
115 #CITEREFLongabaugh2012


9. Di Battista et al. (1994)116 , Section 2.1.2, Aesthetics, pp. 14–16; Purchase, Cohen &
James (1997)117 .
10. Di Battista et al. (1994)118 , p. 14.
11. Di Battista et al. (1994)119 , p. 16.
12. Pach & Sharir (2009)120 .
13. Published in Grandjean, Martin (2014). ”La connaissance est un réseau”121 . Les Cahiers du Numérique. 10 (3): 37–54. doi122 :10.3166/lcn.10.3.37-54123 . Retrieved 2014-10-15.
14. Di Battista et al. (1994)124 , Section 2.7, ”The Force-Directed Approach”, pp. 29–30,
and Chapter 10, ”Force-Directed Methods”, pp. 303–326.
15. Beckman (1994)125 ; Koren (2005)126 .
16. Di Battista et al. (1994)127 , Chapter 5, ”Flow and Orthogonal Drawings”, pp. 137–170;
(Eiglsperger, Fekete & Klau 2001128 ).
17. Herman, Melançon & Marshall (2000)129 , Section 2.2, ”Traditional Layout – An
Overview”.
18. Sugiyama, Tagawa & Toda (1981)130 ; Bastert & Matuszewski (2001)131 ; Di Battista
et al. (1994)132 , Chapter 9, ”Layered Drawings of Digraphs”, pp. 265–302.
19. Saaty (1964)133 .
20. Doğrusöz, Madden & Madden (1997)134 .
21. Di Battista et al. (1994)135 , Section 4.7, ”Dominance Drawings”, pp. 112–127.
22. Scott (2000)136 ; Brandes, Freeman & Wagner (2014)137 .
23. Di Battista et al. (1994)138 , pp. 15–16, and Chapter 6, ”Flow and Upward Planarity”,
pp. 171–214; Freese (2004)139 .
24. Zapponi (2003)140 .
25. Anderson & Head (2006)141 .

116 #CITEREFDi_BattistaEadesTamassiaTollis1994
117 #CITEREFPurchaseCohenJames1997
118 #CITEREFDi_BattistaEadesTamassiaTollis1994
119 #CITEREFDi_BattistaEadesTamassiaTollis1994
120 #CITEREFPachSharir2009
121 http://www.cairn.info/resume.php?ID_ARTICLE=LCN_103_0037
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.3166%2Flcn.10.3.37-54
124 #CITEREFDi_BattistaEadesTamassiaTollis1994
125 #CITEREFBeckman1994
126 #CITEREFKoren2005
127 #CITEREFDi_BattistaEadesTamassiaTollis1994
128 #CITEREFEiglspergerFeketeKlau2001
129 #CITEREFHermanMelan%C3%A7onMarshall2000
130 #CITEREFSugiyamaTagawaToda1981
131 #CITEREFBastertMatuszewski2001
132 #CITEREFDi_BattistaEadesTamassiaTollis1994
133 #CITEREFSaaty1964
134 #CITEREFDo%C4%9Frus%C3%B6zMaddenMadden1997
135 #CITEREFDi_BattistaEadesTamassiaTollis1994
136 #CITEREFScott2000
137 #CITEREFBrandesFreemanWagner2014
138 #CITEREFDi_BattistaEadesTamassiaTollis1994
139 #CITEREFFreese2004
140 #CITEREFZapponi2003
141 #CITEREFAndersonHead2006


26. Di Battista & Rimondini (2014)142 .


27. Bachmaier, Brandes & Schreiber (2014)143 .
28. ”Graphviz and Dynagraph – Static and Dynamic Graph Drawing Tools”, by John
Ellson, Emden R. Gansner, Eleftherios Koutsofios, Stephen C. North, and Gordon
Woodhull, in Jünger & Mutzel (2004)144 .
29. GraphPlot145 Mathematica documentation
30. Graph drawing tutorial146
31. Nachmanson, Robertson & Lee (2008)147 .
32. Madden et al. (1996)148 .
33. ”Tulip – A Huge Graph Visualization Framework”, by David Auber, in Jünger &
Mutzel (2004)149 .
34. ”yFiles – Visualization and Automatic Layout of Graphs”, by Roland Wiese, Markus
Eiglsperger, and Michael Kaufmann, in Jünger & Mutzel (2004)150 .
35. Tantau (2013)151 ; see also the older GD 2012 presentation152
General references
• D B, G; E, P153 ; T, R154 ; T, I-
 G. (1994), ”A  D G:  A B-
”155 , Computational Geometry: Theory and Applications156 , 4 (5): 235–282,
doi157 :10.1016/0925-7721(94)00014-x158 .
• D B, G; E, P159 ; T, R160 ; T, I
G. (1998), Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall161 ,
ISBN162 978-0-13-301615-4163 .
• H, I; M, G; M, M. S (2000), ”G V-
  N  I V: A S”,
IEEE Transactions on Visualization and Computer Graphics, 6 (1): 24–43,
doi164 :10.1109/2945.841119165 .

142 #CITEREFDi_BattistaRimondini2014
143 #CITEREFBachmaierBrandesSchreiber2014
144 #CITEREFJ%C3%BCngerMutzel2004
145 http://reference.wolfram.com/mathematica/ref/GraphPlot.html
146 http://reference.wolfram.com/mathematica/tutorial/GraphDrawingIntroduction.html
147 #CITEREFNachmansonRobertsonLee2008
148 #CITEREFMaddenMaddenPowersHimsolt1996
149 #CITEREFJ%C3%BCngerMutzel2004
150 #CITEREFJ%C3%BCngerMutzel2004
151 #CITEREFTantau2013
152 http://www.tcs.uni-luebeck.de/downloads/mitarbeiter/tantau/2012-gd-presentation.pdf
153 https://en.wikipedia.org/wiki/Peter_Eades
154 https://en.wikipedia.org/wiki/Roberto_Tamassia
155 http://www.cs.brown.edu/people/rt/gd.html
156 https://en.wikipedia.org/wiki/Computational_Geometry_(journal)
157 https://en.wikipedia.org/wiki/Doi_(identifier)
158 https://doi.org/10.1016%2F0925-7721%2894%2900014-x
159 https://en.wikipedia.org/wiki/Peter_Eades
160 https://en.wikipedia.org/wiki/Roberto_Tamassia
161 https://en.wikipedia.org/wiki/Prentice_Hall
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-301615-4
164 https://en.wikipedia.org/wiki/Doi_(identifier)
165 https://doi.org/10.1109%2F2945.841119


• J, M; M, P166 (2004), Graph Drawing Software, Springer-


Verlag, ISBN167 978-3-540-00881-1168 .
• K, M; W, D169 , . (2001), Drawing Graphs:
Methods and Models, Lecture Notes in Computer Science170 , 2025, Springer-Verlag,
doi171 :10.1007/3-540-44969-8172 , ISBN173 978-3-540-42062-0174 .
• T, R175 , . (2014), Handbook of Graph Drawing and Visualization176 ,
CRC P,    177  2013-08-15,  2013-08-28.
Specialized subtopics
• A, J A; H, T J. (2006), Automata Theory with Modern
Applications178 , C U P, . 38–41, ISBN179 978-0-521-84887-
9180 .
• B, C; B, U; S, F (2014), ”B
N”,  T, R181 (.), Handbook of Graph Drawing and Visual-
ization, CRC Press, pp. 621–651.
• B, O; M, C (2001), ”L   -
”,  K, M; W, D182 (.), Drawing Graphs:
Methods and Models, Lecture Notes in Computer Science, 2025, Springer-Verlag, pp. 87–
120, doi183 :10.1007/3-540-44969-8_5184 , ISBN185 978-3-540-42062-0186 .
• B, B (1994), Theory of Spectral Graph Layout187 , T. R MSR-
TR-94-04, M R.
• B, U; F, L C.; W, D188 (2014), ”S
N”,  T, R189 (.), Handbook of Graph Drawing and Visual-
ization, CRC Press, pp. 805–839.

166 https://en.wikipedia.org/wiki/Petra_Mutzel
167 https://en.wikipedia.org/wiki/ISBN_(identifier)
168 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-00881-1
169 https://en.wikipedia.org/wiki/Dorothea_Wagner
170 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
171 https://en.wikipedia.org/wiki/Doi_(identifier)
172 https://doi.org/10.1007%2F3-540-44969-8
173 https://en.wikipedia.org/wiki/ISBN_(identifier)
174 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-42062-0
175 https://en.wikipedia.org/wiki/Roberto_Tamassia
176 https://web.archive.org/web/20130815181243/http://cs.brown.edu/~rt/gdhandbook/
177 http://cs.brown.edu/~rt/gdhandbook/
178 https://books.google.com/books?id=ikS8BLdLDxIC&pg=PA38
179 https://en.wikipedia.org/wiki/ISBN_(identifier)
180 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-84887-9
181 https://en.wikipedia.org/wiki/Roberto_Tamassia
182 https://en.wikipedia.org/wiki/Dorothea_Wagner
183 https://en.wikipedia.org/wiki/Doi_(identifier)
184 https://doi.org/10.1007%2F3-540-44969-8_5
185 https://en.wikipedia.org/wiki/ISBN_(identifier)
186 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-42062-0
187 http://research.microsoft.com/apps/pubs/default.aspx?id=69611
188 https://en.wikipedia.org/wiki/Dorothea_Wagner
189 https://en.wikipedia.org/wiki/Roberto_Tamassia


• D B, G; R, M (2014), ”C N”, 


T, R190 (.), Handbook of Graph Drawing and Visualization, CRC
Press, pp. 763–803.
• DĞ, UĞ; M, B; M, P (1997), ”C -
   G L ”,  N, S (.), Symposium on
Graph Drawing, GD '96 Berkeley, California, USA, September 18–20, 1996, Proceed-
ings191 , L N  C S, 1190, Springer-Verlag, pp. 92–100,
doi192 :10.1007/3-540-62495-3_40193 , ISBN194 978-3-540-62495-0195 .
• E, M; F, S; K, G (2001), ”O
 ”,  K, M; W, D196 (.), Drawing
Graphs, Lecture Notes in Computer Science, 2025, Springer Berlin / Heidelberg, pp. 121–
171, doi197 :10.1007/3-540-44969-8_6198 , ISBN199 978-3-540-42062-0200 .
• F, R (2004), ”A  ”,  E, P
(.), Concept Lattices: Second International Conference on Formal Concept Anal-
ysis, ICFCA 2004, Sydney, Australia, February 23-26, 2004, Proceedings201 (PDF),
L N  C S, 2961, Springer-Verlag, pp. 589–590, Cite-
SeerX202 10.1.1.69.6245203 , doi204 :10.1007/978-3-540-24651-0_12205 , ISBN206 978-3-540-
21043-6207 .
• G, A; T, R (1995), ”U  ”, Or-
der208 , 12 (2): 109–133, CiteSeerX209 10.1.1.10.2237210 , doi211 :10.1007/BF01108622212 ,
MR213 1354797214 .
• H, D; I, P;  W, J J.215 ; F, J-
D (2011), ”A       , -
,   -   - ”,

190 https://en.wikipedia.org/wiki/Roberto_Tamassia
191 https://en.wikipedia.org/wiki/International_Symposium_on_Graph_Drawing
192 https://en.wikipedia.org/wiki/Doi_(identifier)
193 https://doi.org/10.1007%2F3-540-62495-3_40
194 https://en.wikipedia.org/wiki/ISBN_(identifier)
195 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-62495-0
196 https://en.wikipedia.org/wiki/Dorothea_Wagner
197 https://en.wikipedia.org/wiki/Doi_(identifier)
198 https://doi.org/10.1007%2F3-540-44969-8_6
199 https://en.wikipedia.org/wiki/ISBN_(identifier)
200 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-42062-0
201 http://www.math.hawaii.edu/~ralph/Preprints/latdrawing.pdf
202 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
203 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.69.6245
204 https://en.wikipedia.org/wiki/Doi_(identifier)
205 https://doi.org/10.1007%2F978-3-540-24651-0_12
206 https://en.wikipedia.org/wiki/ISBN_(identifier)
207 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-21043-6
208 https://en.wikipedia.org/wiki/Order_(journal)
209 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
210 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.2237
211 https://en.wikipedia.org/wiki/Doi_(identifier)
212 https://doi.org/10.1007%2FBF01108622
213 https://en.wikipedia.org/wiki/MR_(identifier)
214 http://www.ams.org/mathscinet-getitem?mr=1354797
215 https://en.wikipedia.org/wiki/Jack_van_Wijk


IEEE Pacific Visualization Symposium (PacificVis 2011)216 (PDF), pp. 195–202, doi217 :10.1109/PACIFICVIS.2011.5742390218 , ISBN219 978-1-61284-935-5220 .
• H, D;  W, J J.221 (2009), ”A    -
    ”, Proceedings of the 27th International
Conference on Human Factors in Computing Systems (CHI '09)222 (PDF),
. 2299–2308, CSX223 10.1.1.212.5461224 , 225 :10.1145/1518701.1519054226 ,
ISBN227 9781605582467228 ,    229 (PDF)  2011-11-06.
• K, Y (2005), ”D   :   -
”230 (PDF), Computers & Mathematics with Applications, 49 (11–12): 1867–1888,
doi231 :10.1016/j.camwa.2004.08.015232 , MR233 2154691234 , archived from the original235
(PDF) on 2012-04-02, retrieved 2011-09-17.
• L, W (2012), ”C    BF:  
     ”236 (PDF), BMC Bioinformatics,
13: 275, doi237 :10.1186/1471-2105-13-275238 , PMC239 3574047240 , PMID241 23102059242 .
• M, B; M, P; P, S; H, M
(1996), ”P    ”,  B, F J. (.),
Graph Drawing: Symposium on Graph Drawing, GD '95, Passau, Germany, September
20–22, 1995, Proceedings243 , L N  C S, 1027, Springer-
Verlag, pp. 385–395, doi244 :10.1007/BFb0021822245 , ISBN246 978-3-540-60723-6247 .

216 http://www.lri.fr/~isenberg/publications/papers/Holten_2011_AEP.pdf
217 https://en.wikipedia.org/wiki/Doi_(identifier)
218 https://doi.org/10.1109%2FPACIFICVIS.2011.5742390
219 https://en.wikipedia.org/wiki/ISBN_(identifier)
220 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61284-935-5
221 https://en.wikipedia.org/wiki/Jack_van_Wijk
222 https://web.archive.org/web/20111106004500/http://www.win.tue.nl/~dholten/papers/directed_edges_chi.pdf
223 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
224 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.212.5461
225 https://en.wikipedia.org/wiki/Doi_(identifier)
226 https://doi.org/10.1145%2F1518701.1519054
227 https://en.wikipedia.org/wiki/ISBN_(identifier)
228 https://en.wikipedia.org/wiki/Special:BookSources/9781605582467
229 http://www.win.tue.nl/~dholten/papers/directed_edges_chi.pdf
230 https://web.archive.org/web/20120402131402/https://akpublic.research.att.com/areas/visualization/papers_videos/pdf/DBLP-journals-camwa-Koren05.pdf
231 https://en.wikipedia.org/wiki/Doi_(identifier)
232 https://doi.org/10.1016%2Fj.camwa.2004.08.015
233 https://en.wikipedia.org/wiki/MR_(identifier)
234 http://www.ams.org/mathscinet-getitem?mr=2154691
235 https://akpublic.research.att.com/areas/visualization/papers_videos/pdf/DBLP-journals-camwa-Koren05.pdf
236 http://www.biomedcentral.com/content/pdf/1471-2105-13-275.pdf
237 https://en.wikipedia.org/wiki/Doi_(identifier)
238 https://doi.org/10.1186%2F1471-2105-13-275
239 https://en.wikipedia.org/wiki/PMC_(identifier)
240 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3574047
241 https://en.wikipedia.org/wiki/PMID_(identifier)
242 http://pubmed.ncbi.nlm.nih.gov/23102059
243 https://en.wikipedia.org/wiki/International_Symposium_on_Graph_Drawing
244 https://en.wikipedia.org/wiki/Doi_(identifier)
245 https://doi.org/10.1007%2FBFb0021822
246 https://en.wikipedia.org/wiki/ISBN_(identifier)
247 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-60723-6


• M, K.; E, P.; L, W.; S, K. (1995), ”L A 
 M M”, Journal of Visual Languages and Computing, 6 (2): 183–210,
doi248 :10.1006/jvlc.1995.1010249 .
• N, L; R, G; L, B (2008), ”D
G  GLEE”250 (PDF),  H, S-H; N, T251 ; Q,
W (.), Graph Drawing, 15th International Symposium, GD 2007, Sydney, Aus-
tralia, September 24–26, 2007, Revised Papers252 , L N  C
S, 4875, Springer-Verlag, pp. 389–394, doi253 :10.1007/978-3-540-77537-9_38254 ,
ISBN255 978-3-540-77536-2256 .
• P, J257 ; S, M258 (2009), ”5.5 A   ”,
Combinatorial Geometry and Its Algorithmic Applications: The Alcalá Lectures, Mathe-
matical Surveys and Monographs, 152, American Mathematical Society259 , pp. 126–127.
• P, H. C.260 ; C, R. F.; J, M. I. (1997), ”A  
      ”261 (PDF), Journal of Experimental
264
Algorithmics, 2, Article 4, doi262 :10.1145/264216.264222263[permanent dead link ] .
• S, T L.265 (1964), ”T     
 ”, Proc. Natl. Acad. Sci. U.S.A., 52 (3): 688–690,
doi266 :10.1073/pnas.52.3.688267 , PMC268 300329269 , PMID270 16591215271 .
• S, J (2000), ”S  G T”, Social network analysis: a
handbook272 (2 .), S, . 64–69, ISBN273 978-0-7619-6339-4274 .
• S, K275 ; T, S; T, M (1981), ”M-
       ”, IEEE

248 https://en.wikipedia.org/wiki/Doi_(identifier)
249 https://doi.org/10.1006%2Fjvlc.1995.1010
250 ftp://ftp.research.microsoft.com/pub/TR/TR-2007-72.pdf
251 https://en.wikipedia.org/wiki/Takao_Nishizeki
252 https://en.wikipedia.org/wiki/International_Symposium_on_Graph_Drawing
253 https://en.wikipedia.org/wiki/Doi_(identifier)
254 https://doi.org/10.1007%2F978-3-540-77537-9_38
255 https://en.wikipedia.org/wiki/ISBN_(identifier)
256 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-77536-2
257 https://en.wikipedia.org/wiki/J%C3%A1nos_Pach
258 https://en.wikipedia.org/wiki/Micha_Sharir
259 https://en.wikipedia.org/wiki/American_Mathematical_Society
260 https://en.wikipedia.org/wiki/Helen_Purchase
261 https://secure.cs.uvic.ca/twiki/pub/Research/Chisel/ComputationalAestheticsProject/Vol2Nbr4.pdf
262 https://en.wikipedia.org/wiki/Doi_(identifier)
263 https://doi.org/10.1145%2F264216.264222
265 https://en.wikipedia.org/wiki/Thomas_L._Saaty
266 https://en.wikipedia.org/wiki/Doi_(identifier)
267 https://doi.org/10.1073%2Fpnas.52.3.688
268 https://en.wikipedia.org/wiki/PMC_(identifier)
269 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC300329
270 https://en.wikipedia.org/wiki/PMID_(identifier)
271 http://pubmed.ncbi.nlm.nih.gov/16591215
272 https://books.google.com/books?id=Ww3_bKcz6kgC&pg=PA
273 https://en.wikipedia.org/wiki/ISBN_(identifier)
274 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7619-6339-4
275 https://en.wikipedia.org/wiki/Kozo_Sugiyama


Transactions on Systems, Man, and Cybernetics276 , SMC-11 (2): 109–125, doi277 :10.1109/TSMC.1981.4308636278 , MR279 0611436280 .

• T, T (2013), ”G D  TZ”, Journal of Graph Algorithms and
Applications281 , 17 (4): 495–513, doi282 :10.7155/jgaa.00301283 .
• Z, L (A 2003), ”W   D 'E”284 (PDF), No-
tices of the American Mathematical Society285 , 50: 788–789.

138.7 External links


• GraphX library for .NET286 : open-source WPF library for graph calculation and visual-
ization. Supports many layout and edge routing algorithms.
• Graph drawing e-print archive287 : including information on papers from all Graph Draw-
ing symposia288 .
• Graph drawing289 at Curlie290 for many additional links related to graph drawing.


276 https://en.wikipedia.org/wiki/IEEE_Systems,_Man_%26_Cybernetics_Society
277 https://en.wikipedia.org/wiki/Doi_(identifier)
278 https://doi.org/10.1109%2FTSMC.1981.4308636
279 https://en.wikipedia.org/wiki/MR_(identifier)
280 http://www.ams.org/mathscinet-getitem?mr=0611436
281 https://en.wikipedia.org/wiki/Journal_of_Graph_Algorithms_and_Applications
282 https://en.wikipedia.org/wiki/Doi_(identifier)
283 https://doi.org/10.7155%2Fjgaa.00301
284 http://www.ams.org/notices/200307/what-is.pdf
285 https://en.wikipedia.org/wiki/Notices_of_the_American_Mathematical_Society
286 http://graphx.codeplex.com
287 http://gdea.informatik.uni-koeln.de/
288 https://en.wikipedia.org/wiki/International_Symposium_on_Graph_Drawing
289 https://curlie.org/Science/Math/Combinatorics/Software/Graph_Drawing
290 https://en.wikipedia.org/wiki/Curlie

139 Analysis of algorithms



Figure 338 For looking up a given entry in a given ordered list, both the binary and the
linear search algorithm (which ignores ordering) can be used. The analysis of the former
and the latter algorithm shows that it takes at most log₂(n) and n check steps,
respectively, for a list of length n. In the depicted example list of length 33, searching for
”Morin, Arthur” takes 5 and 28 steps with binary (shown in cyan) and linear (magenta)
search, respectively.


Figure 339 Graphs of functions commonly used in the analysis of algorithms, showing
the number of operations N versus input size n for each function

In computer science6 , the analysis of algorithms is the process of finding the computa-
tional complexity7 of algorithms – the amount of time, storage, or other resources needed
to execute them8 . Usually, this involves determining a function9 that relates the length of
an algorithm's input to the number of steps it takes (its time complexity10 ) or the number
of storage locations it uses (its space complexity11 ). An algorithm is said to be efficient

6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Computational_complexity
8 https://en.wikipedia.org/wiki/Computation
9 https://en.wikipedia.org/wiki/Function_(mathematics)
10 https://en.wikipedia.org/wiki/Time_complexity
11 https://en.wikipedia.org/wiki/Space_complexity


when this function's values are small, or grow slowly compared to a growth in the size of
the input. Different inputs of the same length may cause the algorithm to have different
behavior, so best, worst and average case12 descriptions might all be of practical interest.
When not otherwise specified, the function describing the performance of an algorithm is
usually an upper bound13 , determined from the worst case inputs to the algorithm.
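For example, a linear search performs very different numbers of comparisons on two inputs of the same length, depending on where (or whether) the target occurs. The following sketch is illustrative only; the function name and the step-counting convention are assumptions for this example, not part of any standard library.

```python
def linear_search_steps(lst, target):
    """Count the comparisons a linear search makes before it stops."""
    steps = 0
    for item in lst:
        steps += 1
        if item == target:
            break
    return steps

data = list(range(100))
best = linear_search_steps(data, 0)     # target is first: best case, 1 step
worst = linear_search_steps(data, -1)   # target absent: worst case, 100 steps
```

Both calls receive an input of length 100, yet the step counts differ by a factor of 100; this is why, unless stated otherwise, quoted bounds describe the worst case.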
The term ”analysis of algorithms” was coined by Donald Knuth14 .[1] Algorithm analysis is an
important part of a broader computational complexity theory15 , which provides theoretical
estimates for the resources needed by any algorithm which solves a given computational
problem16 . These estimates provide an insight into reasonable directions of search for effi-
cient algorithms17 .
In theoretical analysis of algorithms it is common to estimate their complexity in the asymp-
totic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O no-
tation18 , Big-omega notation19 and Big-theta notation20 are used to this end. For instance,
binary search21 is said to run in a number of steps proportional to the logarithm of the
length of the sorted list being searched, or in O(log(n)), colloquially ”in logarithmic time22 ”.
Usually asymptotic23 estimates are used because different implementations24 of the same algorithm may differ in efficiency. However, the efficiencies of any two ”reasonable” implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually
require certain assumptions concerning the particular implementation of the algorithm,
called model of computation25 . A model of computation may be defined in terms of an
abstract computer26 , e.g., Turing machine27 , and/or by postulating that certain operations
are executed in unit time. For example, if the sorted list to which we apply binary search
has n elements, and we can guarantee that each lookup of an element in the list can be
done in unit time, then at most log₂ n + 1 time units are needed to return an answer.
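Under that unit-cost assumption the bound can be checked empirically. The sketch below charges one unit per loop iteration (one element lookup); it is an illustration of the counting argument, not a reference implementation.

```python
import math

def binary_search_steps(sorted_list, target):
    """Count unit-time probes made by an iterative binary search."""
    lo, hi = 0, len(sorted_list) - 1
    steps = 0
    while lo <= hi:
        steps += 1                      # one element lookup per iteration
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            break
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

n = 1000
values = list(range(n))
# even the worst search (including failing ones) stays within log2(n) + 1
worst = max(binary_search_steps(values, t) for t in range(-1, n + 1))
```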

139.1 Cost models

Time efficiency estimates depend on what we define to be a step. For the analysis to
correspond usefully to the actual execution time, the time required to perform a step must

12 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
13 https://en.wikipedia.org/wiki/Upper_bound
14 https://en.wikipedia.org/wiki/Donald_Knuth
15 https://en.wikipedia.org/wiki/Computational_complexity_theory
16 https://en.wikipedia.org/wiki/Computational_problem
17 https://en.wikipedia.org/wiki/Algorithmic_efficiency
18 https://en.wikipedia.org/wiki/Big_O_notation
19 https://en.wikipedia.org/wiki/Big-omega_notation
20 https://en.wikipedia.org/wiki/Big-theta_notation
21 https://en.wikipedia.org/wiki/Binary_search
22 https://en.wikipedia.org/wiki/Logarithmic_time
23 https://en.wikipedia.org/wiki/Asymptotic_analysis
24 https://en.wikipedia.org/wiki/Implementation
25 https://en.wikipedia.org/wiki/Model_of_computation
26 https://en.wikipedia.org/wiki/Abstract_machine
27 https://en.wikipedia.org/wiki/Turing_machine


be guaranteed to be bounded above by a constant. One must be careful here; for instance,
some analyses count an addition of two numbers as one step. This assumption may not be
warranted in certain contexts. For example, if the numbers involved in a computation may
be arbitrarily large, the time required by a single addition can no longer be assumed to be
constant.
Two cost models are generally used:[2][3][4][5][6]
• the uniform cost model, also called uniform-cost measurement (and similar vari-
ations), assigns a constant cost to every machine operation, regardless of the size of the
numbers involved
• the logarithmic cost model, also called logarithmic-cost measurement (and similar
variations), assigns a cost to every machine operation proportional to the number of bits
involved
The latter is more cumbersome to use, so it's only employed when necessary, for example in
the analysis of arbitrary-precision arithmetic28 algorithms, like those used in cryptography29 .
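The difference between the two models can be made concrete with arbitrary-precision integers. Under the uniform model, computing n! by repeated multiplication costs n − 1 operations; a logarithmic-cost accounting, crudely approximated here by charging each multiplication the bit-lengths of its operands, grows much faster because the intermediate product keeps widening. The cost-charging scheme below is a simplified illustration, not a standard facility.

```python
def factorial_costs(n):
    """Compare uniform vs. (approximate) logarithmic cost for computing n!."""
    uniform = 0       # one unit per multiplication, regardless of operand size
    logarithmic = 0   # charge by the number of bits in the two operands
    acc = 1
    for i in range(2, n + 1):
        uniform += 1
        logarithmic += acc.bit_length() + i.bit_length()
        acc *= i
    return uniform, logarithmic

u_cost, log_cost = factorial_costs(50)
# uniform model: 49 multiplications; logarithmic model: far larger, since
# the running product grows to hundreds of bits
```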
A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that could be used in practice, and therefore there are algorithms that are faster than what would naively be thought possible.[7]

139.2 Run-time analysis

Run-time analysis is a theoretical classification that estimates and anticipates the increase
in running time30 (or run-time) of an algorithm31 as its input size32 (usually denoted as n)
increases. Run-time efficiency is a topic of great interest in computer science33 : A program34
can take seconds, hours, or even years to finish executing, depending on which algorithm it
implements. While software profiling35 techniques can be used to measure an algorithm's
run-time in practice, they cannot provide timing data for all infinitely many possible inputs;
the latter can only be achieved by the theoretical methods of run-time analysis.

139.2.1 Shortcomings of empirical metrics

Since algorithms are platform-independent36 (i.e. a given algorithm can be implemented in


an arbitrary programming language37 on an arbitrary computer38 running an arbitrary

28 https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
29 https://en.wikipedia.org/wiki/Cryptography
30 https://en.wikipedia.org/wiki/DTIME
31 https://en.wikipedia.org/wiki/Algorithm
32 https://en.wikipedia.org/wiki/Information
33 https://en.wikipedia.org/wiki/Computer_science
34 https://en.wikipedia.org/wiki/Computer_program
35 https://en.wikipedia.org/wiki/Software_profiling
36 https://en.wikipedia.org/wiki/Platform-independent
37 https://en.wikipedia.org/wiki/Programming_language
38 https://en.wikipedia.org/wiki/Computer

operating system39 ), there are additional significant drawbacks to using an empirical40 approach
to gauge the comparative performance of a given set of algorithms.
Take as an example a program that looks up a specific entry in a sorted41 list42 of size n.
Suppose this program were implemented on Computer A, a state-of-the-art machine, using
a linear search43 algorithm, and on Computer B, a much slower machine, using a binary
search algorithm44 . Benchmark testing45 on the two computers running their respective
programs might look something like the following:

n (list size)    Computer A run-time     Computer B run-time
                 (in nanoseconds46 )     (in nanoseconds47 )
16               8                       100,000
63               32                      150,000
250              125                     200,000
1,000            500                     250,000

Based on these metrics, it would be easy to jump to the conclusion that Computer A is
running an algorithm that is far superior in efficiency to that of Computer B. However, if
the size of the input list is increased to a sufficient number, that conclusion proves to be
dramatically in error:

n (list size)       Computer A run-time      Computer B run-time
                    (in nanoseconds48 )      (in nanoseconds49 )
16                  8                        100,000
63                  32                       150,000
250                 125                      200,000
1,000               500                      250,000
...                 ...                      ...
1,000,000           500,000                  500,000
4,000,000           2,000,000                550,000
16,000,000          8,000,000                600,000
...                 ...                      ...
63,072 × 10¹²       31,536 × 10¹² ns,        1,375,000 ns,
                    or 1 year                or 1.375 milliseconds

Computer A, running the linear search program, exhibits a linear50 growth rate. The
program's run-time is directly proportional to its input size. Doubling the input size doubles
the run time, quadrupling the input size quadruples the run-time, and so forth. On the other
hand, Computer B, running the binary search program, exhibits a logarithmic51 growth rate.

39 https://en.wikipedia.org/wiki/Operating_system
40 https://en.wikipedia.org/wiki/Empirical
41 https://en.wikipedia.org/wiki/Collation
42 https://en.wikipedia.org/wiki/List_(computing)
43 https://en.wikipedia.org/wiki/Linear_search
44 https://en.wikipedia.org/wiki/Binary_search_algorithm
45 https://en.wikipedia.org/wiki/Benchmark_(computing)
50 https://en.wikipedia.org/wiki/Linear
51 https://en.wikipedia.org/wiki/Logarithm


Quadrupling the input size only increases the run time by a constant52 amount (in this
example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer
B will inevitably surpass Computer A in run-time because it's running an algorithm with
a much slower growth rate.
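The crossover in the tables above can be reproduced by counting worst-case steps directly. The following Python sketch uses assumed per-step costs (0.5 ns per step for Computer A and 25,000 ns per step for Computer B, chosen only so the small-n figures resemble the tables, not measured values):

```python
import math

def linear_search_steps(n):
    # Worst case: every one of the n elements is examined.
    return n

def binary_search_steps(n):
    # Worst case: the search interval is halved until it is empty.
    return math.ceil(math.log2(n))

# Assumed per-step costs, chosen to mimic the tables above.
def run_time_a(n):
    return 0.5 * linear_search_steps(n)       # ns on the "fast" machine

def run_time_b(n):
    return 25_000 * binary_search_steps(n)    # ns on the "slow" machine

for n in [16, 63, 250, 1_000, 1_000_000, 16_000_000]:
    print(n, run_time_a(n), run_time_b(n))
```

For small lists Computer A finishes first, but its linear cost eventually overtakes Computer B's logarithmic one, just as the tables show.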

139.2.2 Orders of growth

Main article: Big O notation53
Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical
function54 if beyond a certain input size n, the function f(n) times a positive constant
provides an upper bound or limit55 for the run-time of that algorithm. In other words, for
a given input size n greater than some n0 and a constant c, the running time of that
algorithm will never be larger than c × f(n). This concept is frequently expressed using
Big O notation. For example, since the run-time of insertion sort56 grows quadratically57
as its input size increases, insertion sort can be said to be of order O(n²).
Big O notation is a convenient way to express the worst-case scenario58 for a given algorithm,
although it can also be used to express the average case: for example, the worst-case
scenario for quicksort59 is O(n²), but the average-case run-time is O(n log n).
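The gap between quicksort's worst and average cases can be observed by counting comparisons. The sketch below uses a deliberately naive first-element-pivot quicksort (an assumption made for illustration; production quicksorts choose pivots more carefully, which avoids the sorted-input worst case shown here):

```python
import random

def quicksort_comparisons(arr):
    # Simple quicksort with the first element as pivot; returns the
    # number of element comparisons made while sorting.
    if len(arr) <= 1:
        return 0
    pivot, rest = arr[0], arr[1:]
    less = [x for x in rest if x < pivot]
    more = [x for x in rest if x >= pivot]
    # Partitioning compares the pivot against each remaining element once.
    return len(rest) + quicksort_comparisons(less) + quicksort_comparisons(more)

n = 400
random.seed(42)
worst = quicksort_comparisons(list(range(n)))            # already-sorted input
typical = quicksort_comparisons(random.sample(range(n), n))
print(worst)    # n(n-1)/2 = 79800 comparisons: quadratic behaviour
print(typical)  # far fewer: roughly proportional to n log n
```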

139.2.3 Empirical orders of growth

Assuming the execution time follows the power rule, t ≈ k·nᵃ , the coefficient a can be found [8]
by taking empirical measurements of run time {t1 , t2 } at some problem-size points {n1 , n2 },
and calculating t2 /t1 = (n2 /n1 )ᵃ , so that a = log(t2 /t1 )/ log(n2 /n1 ). In other words, this
measures the slope of the empirical line on the log–log plot60 of execution time vs. problem
size at some size point. If the order of growth indeed follows the power rule (and so the
line on the log–log plot is indeed a straight line), the empirical value of a will stay constant
at different ranges, and if not, it will change (and the line is a curved line); but the value
could still serve for comparison of any two given algorithms as to their empirical local orders
of growth behaviour. Applied to the above table:

n (list size)    Computer A run-time    Local order of        Computer B run-time    Local order of
                 (in nanoseconds61 )    growth (exponent a)   (in nanoseconds62 )    growth (exponent a)
15               7                                            100,000
65               32                     1.04                  150,000                0.28
250              125                    1.01                  200,000                0.21
1,000            500                    1.00                  250,000                0.16
...              ...                                          ...
1,000,000        500,000                1.00                  500,000                0.10
4,000,000        2,000,000              1.00                  550,000                0.07
16,000,000       8,000,000              1.00                  600,000                0.06
...              ...                                          ...

52 https://en.wiktionary.org/wiki/Constant
53 https://en.wikipedia.org/wiki/Big_O_notation
54 https://en.wikipedia.org/wiki/Function_(mathematics)
55 https://en.wikipedia.org/wiki/Asymptotic_analysis
56 https://en.wikipedia.org/wiki/Insertion_sort
57 https://en.wikipedia.org/wiki/Quadratic_growth
58 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
59 https://en.wikipedia.org/wiki/Quicksort
60 https://en.wikipedia.org/wiki/Log%E2%80%93log_plot

It is clearly seen that the first algorithm exhibits a linear order of growth, indeed following
the power rule. The empirical values for the second one are diminishing rapidly, suggesting
it follows another rule of growth and in any case has, empirically, much lower local orders
of growth (improving further still) than the first one.
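The local-order-of-growth columns can be recomputed from the formula above; a small Python check, using the measurements from the table:

```python
import math

def local_order_of_growth(n1, t1, n2, t2):
    # Slope between (n1, t1) and (n2, t2) on a log-log plot:
    # if t ~ k * n^a, then a = log(t2/t1) / log(n2/n1).
    return math.log(t2 / t1) / math.log(n2 / n1)

# Run-times (in nanoseconds) taken from the table above.
a_linear = local_order_of_growth(15, 7, 65, 32)                # Computer A
a_sublinear = local_order_of_growth(15, 100_000, 65, 150_000)  # Computer B
print(round(a_linear, 2))     # 1.04, consistent with linear growth
print(round(a_sublinear, 2))  # 0.28, far below linear
```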

139.2.4 Evaluating run-time complexity

The run-time complexity for the worst-case scenario of a given algorithm can sometimes
be evaluated by examining the structure of the algorithm and making some simplifying
assumptions. Consider the following pseudocode63 :
1   get a positive integer n from input
2   if n > 10
3       print ”This might take a while...”
4   for i = 1 to n
5       for j = 1 to i
6           print i * j
7   print ”Done!”

A given computer will take a discrete amount of time64 to execute each of the instructions65
involved with carrying out this algorithm. The specific amount of time to carry out a given
instruction will vary depending on which instruction is being executed and which computer
is executing it, but on a conventional computer, this amount will be deterministic66 .[9] Say
that the actions carried out in step 1 are considered to consume time T1 , step 2 uses time
T2 , and so forth.
In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation,
it should be assumed that step 3 will be run as well. Thus the total amount of time to run
steps 1-3 and step 7 is:
T1 + T2 + T3 + T7 .
The loops67 in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will
execute ( n + 1 ) times (note that an extra step is required to terminate the for loop, hence
n + 1 and not n executions), which will consume T4 ( n + 1 ) time. The inner loop, on the

63 https://en.wikipedia.org/wiki/Pseudocode
64 https://en.wikipedia.org/wiki/DTIME
65 https://en.wikipedia.org/wiki/Instruction_(computer_science)
66 https://en.wikipedia.org/wiki/Deterministic_system_(mathematics)
67 https://en.wikipedia.org/wiki/Program_loop


other hand, is governed by the value of j, which iterates68 from 1 to i. On the first pass
through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running
the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes
2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner
loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and
the inner loop test (step 5) consumes 3T5 time.
Altogether, the total time required to run the inner loop body can be expressed as an
arithmetic progression69 :
T6 + 2T6 + 3T6 + · · · + (n − 1)T6 + nT6
which can be factored70[10] as
T6 [1 + 2 + 3 + · · · + (n − 1) + n] = ½(n² + n) T6
The total time required to run the outer loop test can be evaluated similarly:
2T5 + 3T5 + 4T5 + · · · + (n − 1)T5 + nT5 + (n + 1)T5
= T5 + 2T5 + 3T5 + 4T5 + · · · + (n − 1)T5 + nT5 + (n + 1)T5 − T5
which can be factored as
T5 [1 + 2 + 3 + · · · + (n − 1) + n + (n + 1)] − T5
= ½(n² + n) T5 + (n + 1)T5 − T5
= ½(n² + n) T5 + nT5
= ½(n² + 3n) T5
Therefore, the total running time for this algorithm is:
f(n) = T1 + T2 + T3 + T7 + (n + 1)T4 + ½(n² + n) T6 + ½(n² + 3n) T5
which reduces71 to
f(n) = ½(n² + n) T6 + ½(n² + 3n) T5 + (n + 1)T4 + T1 + T2 + T3 + T7
As a rule-of-thumb72 , one can assume that the highest-order term in any given function
dominates its rate of growth and thus defines its run-time order. In this example, n² is the
highest-order term, so one can conclude that f(n) = O(n²). Formally this can be proven as
follows:
Prove that ½(n² + n) T6 + ½(n² + 3n) T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ cn² , n ≥ n0

68 https://en.wikipedia.org/wiki/Iteration
69 https://en.wikipedia.org/wiki/Arithmetic_progression
70 https://en.wikipedia.org/wiki/Factorization
71 https://en.wikipedia.org/wiki/Reduction_(mathematics)
72 https://en.wikipedia.org/wiki/Rule-of-thumb


½(n² + n) T6 + ½(n² + 3n) T5 + (n + 1)T4 + T1 + T2 + T3 + T7
≤ (n² + n)T6 + (n² + 3n)T5 + (n + 1)T4 + T1 + T2 + T3 + T7 (for n ≥ 0)

Let k be a constant greater than or equal to each of T1 , ..., T7

T6 (n² + n) + T5 (n² + 3n) + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ k(n² + n) + k(n² + 3n) + kn + 5k
= 2kn² + 5kn + 5k ≤ 2kn² + 5kn² + 5kn² (for n ≥ 1) = 12kn²

Therefore ½(n² + n) T6 + ½(n² + 3n) T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ cn² , n ≥ n0 for c = 12k, n0 = 1
A more elegant73 approach to analyzing this algorithm would be to declare that [T1 ..T7 ]
are all equal to one unit of time, in a system of units chosen so that one unit is greater than
or equal to the actual times for these steps. This would mean that the algorithm's running
time breaks down as follows:[11]

n ∑
n
4+ i ≤ 4+ n = 4 + n2 ≤ 5n2 (for n ≥ 1) = O(n2 ).
i=1 i=1

139.2.5 Growth rate analysis of other resources

The methodology of run-time analysis can also be utilized for predicting other growth rates,
such as consumption of memory space74 . As an example, consider the following pseudocode
which manages and reallocates memory usage by a program based on the size of a file75
which that program manages:
while file is still open:
    let n = size of file
    for every 100,000 kilobytes76 of increase in file size
        double the amount of memory reserved

In this instance, as the file size n increases, memory will be consumed at an exponential
growth77 rate, which is order O(2ⁿ). This is an extremely rapid and most likely
unmanageable growth rate for consumption of memory resources78 .
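A quick simulation of this policy makes the growth visible. Here the initial reservation of 1,000 kB is an assumed figure; only the doubling rule comes from the pseudocode above:

```python
def reserved_memory_kb(file_size_kb, base_kb=1_000):
    # Reserved memory doubles once per full 100,000 kB of file growth,
    # so the reservation is base * 2^(n / 100,000): exponential in n.
    doublings = file_size_kb // 100_000
    return base_kb * 2 ** doublings

for size in [0, 100_000, 500_000, 1_000_000]:
    print(size, reserved_memory_kb(size))
```

A tenfold increase in file size (100,000 kB to 1,000,000 kB) multiplies the reservation by 2⁹, not by 10.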

139.3 Relevance

Algorithm analysis is important in practice because the accidental or unintentional use
of an inefficient algorithm can significantly impact system performance. In time-sensitive
applications, an algorithm taking too long to run can render its results outdated or useless.

73 https://en.wikipedia.org/wiki/Elegance
74 https://en.wikipedia.org/wiki/DSPACE
75 https://en.wikipedia.org/wiki/Computer_file
76 https://en.wikipedia.org/wiki/Kilobyte
77 https://en.wikipedia.org/wiki/Exponential_growth
78 https://en.wikipedia.org/wiki/Resource_(computer_science)

An inefficient algorithm can also end up requiring an uneconomical amount of computing
power or storage in order to run, again rendering it practically useless.

139.4 Constant factors

Analysis of algorithms typically focuses on the asymptotic performance, particularly at the
elementary level, but in practical applications constant factors are important, and real-world
data is in practice always limited in size. The limit is typically the size of addressable
memory, so on 32-bit machines 2³² = 4 GiB (greater if segmented memory79 is used) and
on 64-bit machines 2⁶⁴ = 16 EiB. Thus given a limited size, an order of growth (time or
space) can be replaced by a constant factor, and in this sense all practical algorithms are
O(1) for a large enough constant, or for small enough data.
This interpretation is primarily useful for functions that grow extremely slowly: (binary)
iterated logarithm80 (log* ) is less than 5 for all practical data (2⁶⁵⁵³⁶ bits); (binary) log-log
(log log n) is less than 6 for virtually all practical data (2⁶⁴ bits); and binary log (log n)
is less than 64 for virtually all practical data (2⁶⁴ bits). An algorithm with non-constant
complexity may nonetheless be more efficient than an algorithm with constant complexity
on practical data if the overhead of the constant-time algorithm results in a larger constant
factor, e.g., one may have K > k log log n so long as K/k > 6 and n < 2^(2⁶) = 2⁶⁴.
For large data linear or quadratic factors cannot be ignored, but for small data an
asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid
algorithms81 , like Timsort82 , which use an asymptotically efficient algorithm (here merge
sort83 , with time complexity n log n), but switch to an asymptotically inefficient algorithm
(here insertion sort84 , with time complexity n²) for small data, as the simpler algorithm is
faster on small data.
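The hybrid pattern can be sketched as a merge sort that hands small subarrays to insertion sort. This is a simplified illustration of the idea, not Timsort itself, and the cutoff of 32 is an assumed tuning constant:

```python
import random

def hybrid_sort(a, cutoff=32):
    # Below the cutoff, use insertion sort: O(n^2) asymptotically,
    # but with very low overhead on small inputs.
    if len(a) <= cutoff:
        for i in range(1, len(a)):
            x, j = a[i], i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x
        return a
    # Otherwise, recurse as ordinary merge sort: O(n log n) overall.
    mid = len(a) // 2
    left = hybrid_sort(a[:mid], cutoff)
    right = hybrid_sort(a[mid:], cutoff)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = random.sample(range(10_000), 1_000)
print(hybrid_sort(list(data)) == sorted(data))
```

The best cutoff is machine- and workload-dependent; it is typically found by benchmarking rather than by asymptotic analysis.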

139.5 See also


• Amortized analysis85
• Analysis of parallel algorithms86
• Asymptotic computational complexity87
• Best, worst and average case88
• Big O notation89

79 https://en.wikipedia.org/wiki/Segmented_memory
80 https://en.wikipedia.org/wiki/Iterated_logarithm
81 https://en.wikipedia.org/wiki/Hybrid_algorithm
82 https://en.wikipedia.org/wiki/Timsort
83 https://en.wikipedia.org/wiki/Merge_sort
84 https://en.wikipedia.org/wiki/Insertion_sort
85 https://en.wikipedia.org/wiki/Amortized_analysis
86 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms
87 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
88 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
89 https://en.wikipedia.org/wiki/Big_O_notation


• Computational complexity theory90
• Master theorem (analysis of algorithms)91
• NP-Complete92
• Numerical analysis93
• Polynomial time94
• Program optimization95
• Profiling (computer programming)96
• Scalability97
• Smoothed analysis98
• Termination analysis99 — the subproblem of checking whether a program will terminate
at all
• Time complexity100 — includes table of orders of growth for common algorithms
• Information-based complexity101

139.6 Notes
1. ”K: R N”102 . web.archive.org. 28 August 2016.
2. A V. A; J E. H; J D. U (1974). The design
and analysis of computer algorithms103 . A-W P. C., section 1.3
3. J HČ (2004). Theoretical computer science: introduction to Au-
tomata, computability, complexity, algorithmics, randomization, communication, and
cryptography104 . S. . 177–178. ISBN105 978-3-540-14015-3106 .
4. G A (1999). Complexity and approximation: combinatorial op-
timization problems and their approximability properties107 . S. . 3–8.
ISBN108 978-3-540-65431-5109 .

90 https://en.wikipedia.org/wiki/Computational_complexity_theory
91 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
92 https://en.wikipedia.org/wiki/NP-Complete
93 https://en.wikipedia.org/wiki/Numerical_analysis
94 https://en.wikipedia.org/wiki/Polynomial_time
95 https://en.wikipedia.org/wiki/Program_optimization
96 https://en.wikipedia.org/wiki/Profiling_(computer_programming)
97 https://en.wikipedia.org/wiki/Scalability
98 https://en.wikipedia.org/wiki/Smoothed_analysis
99 https://en.wikipedia.org/wiki/Termination_analysis
100 https://en.wikipedia.org/wiki/Time_complexity
101 https://en.wikipedia.org/wiki/Information-based_complexity
102 https://web.archive.org/web/20160828152021/http://www-cs-faculty.stanford.edu/~uno/news.html
103 https://archive.org/details/designanalysisof00ahoarich
104 https://books.google.com/books?id=KpNet-n262QC&pg=PA177
105 https://en.wikipedia.org/wiki/ISBN_(identifier)
106 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-14015-3
107 https://books.google.com/books?id=Yxxw90d9AuMC&pg=PA3
108 https://en.wikipedia.org/wiki/ISBN_(identifier)
109 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-65431-5


5. W, I (2005), Complexity theory: exploring the limits of efficient al-
gorithms110 , B, N Y: S-V111 , . 20, ISBN112 978-3-540-
21045-0113
6. R E T114 (1983). Data structures and network algorithms115 .
SIAM. . 3–7. ISBN116 978-0-89871-187-5117 .
7. Examples of the price of abstraction?118 , cstheory.stackexchange.com
8. How To Avoid O-Abuse and Bribes119 Archived120 2017-03-08 at the Wayback Ma-
chine121 , at the blog ”Gödel's Lost Letter and P=NP” by R. J. Lipton, professor of
Computer Science at Georgia Tech, recounting idea by Robert Sedgewick
9. However, this is not the case with a quantum computer122
10. It can be proven by induction123 that 1 + 2 + 3 + · · · + (n − 1) + n = n(n + 1)/2
11. This approach, unlike the above approach, neglects the constant time consumed by
the loop tests which terminate their respective loops, but it is trivial124 to prove that
such omission does not affect the final result

139.7 References
• C, T H.125 ; L, C E.126 ; R, R L.127 & S,
C128 (2001). Introduction to Algorithms129 . C 1: F (S
.). C, MA: MIT P  MG-H. . 3–122. ISBN130 0-262-
03293-7131 .

110 https://books.google.com/books?id=u7DZSDSUYlQC&pg=PA20
111 https://en.wikipedia.org/wiki/Springer-Verlag
112 https://en.wikipedia.org/wiki/ISBN_(identifier)
113 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-21045-0
114 https://en.wikipedia.org/wiki/Robert_Endre_Tarjan
115 https://books.google.com/books?id=JiC7mIqg-X4C&pg=PA3
116 https://en.wikipedia.org/wiki/ISBN_(identifier)
117 https://en.wikipedia.org/wiki/Special:BookSources/978-0-89871-187-5
118 https://cstheory.stackexchange.com/q/608
119 http://rjlipton.wordpress.com/2009/07/24/how-to-avoid-o-abuse-and-bribes/
120 https://web.archive.org/web/20170308175036/https://rjlipton.wordpress.com/2009/07/24/how-to-avoid-o-abuse-and-bribes/
121 https://en.wikipedia.org/wiki/Wayback_Machine
122 https://en.wikipedia.org/wiki/Quantum_computer
123 https://en.wikipedia.org/wiki/Mathematical_induction
124 https://en.wikipedia.org/wiki/Trivial_(mathematics)
125 https://en.wikipedia.org/wiki/Thomas_H._Cormen
126 https://en.wikipedia.org/wiki/Charles_E._Leiserson
127 https://en.wikipedia.org/wiki/Ronald_L._Rivest
128 https://en.wikipedia.org/wiki/Clifford_Stein
129 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7


• S, R132 (1998). Algorithms in C, Parts 1-4: Fundamentals, Data


Structures, Sorting, Searching133 (3 .). R, MA: A-W P-
. ISBN134 978-0-201-31452-6135 .
• K, D136 . The Art of Computer Programming137 . A-W.
• G, D A.; K, D E. (1982). Mathematics for the Analysis of
Algorithms (Second ed.). Birkhäuser. ISBN138 3-7643-3102-X139 .
• G, O140 (2010). Computational Complexity: A Conceptual Perspective.
Cambridge University Press141 . ISBN142 978-0-521-88473-0143 .

139.8 External links

• Media related to Analysis of algorithms144 at Wikimedia Commons

Computer science

132 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
133 https://archive.org/details/algorithmsinc00sedg
134 https://en.wikipedia.org/wiki/ISBN_(identifier)
135 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-31452-6
136 https://en.wikipedia.org/wiki/Donald_Knuth
137 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
138 https://en.wikipedia.org/wiki/ISBN_(identifier)
139 https://en.wikipedia.org/wiki/Special:BookSources/3-7643-3102-X
140 https://en.wikipedia.org/wiki/Oded_Goldreich
141 https://en.wikipedia.org/wiki/Cambridge_University_Press
142 https://en.wikipedia.org/wiki/ISBN_(identifier)
143 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-88473-0
144 https://commons.wikimedia.org/wiki/Category:Analysis_of_algorithms

140 Time complexity

An estimate of time taken for running an algorithm
”Running time” redirects here. For the film, see Running Time (film)1 .

Figure 341 Graphs of functions commonly used in the analysis of algorithms, showing
the number of operations N versus input size n for each function

1 https://en.wikipedia.org/wiki/Running_Time_(film)


In computer science2 , the time complexity is the computational complexity3 that de-
scribes the amount of time it takes to run an algorithm4 . Time complexity is commonly
estimated by counting the number of elementary operations performed by the algorithm,
supposing that each elementary operation takes a fixed amount of time to perform. Thus,
the amount of time taken and the number of elementary operations performed by the algo-
rithm are taken to differ by at most a constant factor5 .
Since an algorithm's running time may vary among different inputs of the same size, one
commonly considers the worst-case time complexity6 , which is the maximum amount of
time required for inputs of a given size. Less common, and usually specified explicitly, is
the average-case complexity7 , which is the average of the time taken on inputs of a given
size (this makes sense because there are only a finite number of possible inputs of a given
size). In both cases, the time complexity is generally expressed as a function8 of the size of
the input.[1]:226 Since this function is generally difficult to compute exactly, and the running
time for small inputs is usually not consequential, one commonly focuses on the behavior
of the complexity when the input size increases—that is, the asymptotic behavior9 of the
complexity. Therefore, the time complexity is commonly expressed using big O notation10 ,
typically O(n), O(n log n), O(nᵅ ), O(2ⁿ ), etc., where n is the input size in units of bits11
needed to represent the input.
Algorithmic complexities are classified according to the type of function appearing in the
big O notation. For example, an algorithm with time complexity O(n) is a linear time
algorithm, and an algorithm with time complexity O(nᵅ ) for some constant α > 1 is a
polynomial time algorithm.

140.1 Table of common time complexities

Further information: Computational complexity of mathematical operations12
The following table summarizes some classes of commonly encountered time complexities.
In the table, poly(x) = x^O(1) , i.e., polynomial in x.
Name (complexity class13 ): running time T(n); examples of running times; example algorithms.

• constant time: O(1); e.g. 10. Finding the median value in a sorted array14 of numbers; calculating (−1)ⁿ .
• inverse Ackermann15 time: O(α(n)). Amortized time16 per operation using a disjoint set17 .
• iterated logarithmic18 time: O(log*19 n). Distributed coloring of cycles20 .
• log-logarithmic time: O(log log n). Amortized time per operation using a bounded priority queue21 .[2]
• logarithmic time (DLOGTIME22 ): O(log n); e.g. log n, log(n²). Binary search23 .
• polylogarithmic time: poly(log n); e.g. (log n)².
• fractional power: O(nᶜ ) where 0 < c < 1; e.g. n^(1/2) , n^(2/3) . Searching in a kd-tree24 .
• linear time: O(n); e.g. n, 2n + 5. Finding the smallest or largest item in an unsorted array25 ; Kadane's algorithm26 .
• ”n log-star n” time: O(n log*27 n). Seidel28 's polygon triangulation29 algorithm.
• linearithmic time: O(n log n); e.g. n log n, log n!. Fastest possible comparison sort30 ; Fast Fourier transform31 .
• quasilinear time: n poly(log n).
• quadratic time: O(n²); e.g. n². Bubble sort32 ; insertion sort33 ; direct convolution34 .
• cubic time: O(n³); e.g. n³. Naive multiplication of two n×n matrices; calculating partial correlation35 .
• polynomial time (P36 ): 2^O(log n) = poly(n); e.g. n² + n, n¹⁰ . Karmarkar's algorithm37 for linear programming38 ; AKS primality test39 .[3] [4]
• quasi-polynomial time (QP): 2^poly(log n) ; e.g. n^(log log n) , n^(log n) . Best-known O(log² n)-approximation algorithm40 for the directed Steiner tree problem41 .
• sub-exponential time, first definition (SUBEXP): O(2^(n^ε) ) for all ε > 0; e.g. O(2^((log n)^(log log n)) ). Contains BPP42 unless EXPTIME (see below) equals MA43 .[5]
• sub-exponential time, second definition: 2^o(n) ; e.g. 2^(n^(1/3)) . Best-known algorithm for integer factorization44 ; formerly-best algorithm for graph isomorphism45 .
• exponential time with linear exponent (E46 ): 2^O(n) ; e.g. 1.1ⁿ , 10ⁿ . Solving the traveling salesman problem47 using dynamic programming48 .
• exponential time (EXPTIME49 ): 2^poly(n) ; e.g. 2ⁿ , 2^(n²) . Solving matrix chain multiplication50 via brute-force search51 .
• factorial time: O(n!); e.g. n!. Solving the traveling salesman problem52 via brute-force search53 .
• double exponential time (2-EXPTIME54 ): 2^(2^poly(n)) ; e.g. 2^(2ⁿ) . Deciding the truth of a given statement in Presburger arithmetic55 .

2 https://en.wikipedia.org/wiki/Computer_science
3 https://en.wikipedia.org/wiki/Computational_complexity
4 https://en.wikipedia.org/wiki/Algorithm
5 https://en.wikipedia.org/wiki/Constant_factor
6 https://en.wikipedia.org/wiki/Worst-case_complexity
7 https://en.wikipedia.org/wiki/Average-case_complexity
8 https://en.wikipedia.org/wiki/Function_(mathematics)
9 https://en.wikipedia.org/wiki/Asymptotic_analysis
10 https://en.wikipedia.org/wiki/Big_O_notation
11 https://en.wikipedia.org/wiki/Bit
12 https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
14 https://en.wikipedia.org/wiki/Array_data_structure
15 https://en.wikipedia.org/wiki/Inverse_Ackermann_function
16 https://en.wikipedia.org/wiki/Amortized_time
17 https://en.wikipedia.org/wiki/Disjoint_set_data_structure

18 https://en.wikipedia.org/wiki/Iterated_logarithm
19 https://en.wikipedia.org/wiki/Iterated_logarithm
20 https://en.wikipedia.org/wiki/Cole-Vishkin_algorithm
21 https://en.wikipedia.org/wiki/Priority_queue
22 https://en.wikipedia.org/wiki/DLOGTIME
23 https://en.wikipedia.org/wiki/Binary_search
24 https://en.wikipedia.org/wiki/Kd-tree
25 https://en.wikipedia.org/wiki/Array_data_structure
26 https://en.wikipedia.org/wiki/Kadane%27s_algorithm
27 https://en.wikipedia.org/wiki/Iterated_logarithm
28 https://en.wikipedia.org/wiki/Raimund_Seidel
29 https://en.wikipedia.org/wiki/Polygon_triangulation
30 https://en.wikipedia.org/wiki/Comparison_sort
31 https://en.wikipedia.org/wiki/Fast_Fourier_transform
32 https://en.wikipedia.org/wiki/Bubble_sort
33 https://en.wikipedia.org/wiki/Insertion_sort
34 https://en.wikipedia.org/wiki/Convolution_theorem
35 https://en.wikipedia.org/wiki/Partial_correlation
36 https://en.wikipedia.org/wiki/P_(complexity)
37 https://en.wikipedia.org/wiki/Karmarkar%27s_algorithm
38 https://en.wikipedia.org/wiki/Linear_programming
39 https://en.wikipedia.org/wiki/AKS_primality_test
40 https://en.wikipedia.org/wiki/Approximation_algorithm
41 https://en.wikipedia.org/wiki/Steiner_tree_problem
42 https://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
43 https://en.wikipedia.org/wiki/Arthur%E2%80%93Merlin_protocol#MA
44 https://en.wikipedia.org/wiki/Integer_factorization
45 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
46 https://en.wikipedia.org/wiki/E_(complexity)
47 https://en.wikipedia.org/wiki/Traveling_salesman_problem
48 https://en.wikipedia.org/wiki/Dynamic_programming
49 https://en.wikipedia.org/wiki/EXPTIME
50 https://en.wikipedia.org/wiki/Matrix_chain_multiplication
51 https://en.wikipedia.org/wiki/Brute-force_search
52 https://en.wikipedia.org/wiki/Travelling_salesman_problem
53 https://en.wikipedia.org/wiki/Brute-force_search
54 https://en.wikipedia.org/wiki/2-EXPTIME
55 https://en.wikipedia.org/wiki/Presburger_arithmetic


140.2 Constant time

An algorithm is said to be constant time (also written as O(1) time) if the value of T(n)
is bounded by a value that does not depend on the size of the input. For example, accessing
any single element in an array56 takes constant time as only one operation57 has to be
performed to locate it. In a similar manner, finding the minimal value in an array sorted in
ascending order; it is the first element. However, finding the minimal value in an unordered
array is not a constant time operation as scanning over each element58 in the array is needed
in order to determine the minimal value. Hence it is a linear time operation, taking O(n)
time. If the number of elements is known in advance and does not change, however, such
an algorithm can still be said to run in constant time.
Despite the name ”constant time”, the running time does not have to be independent of the
problem size, but an upper bound for the running time has to be bounded independently
of the problem size. For example, the task ”exchange the values of a and b if necessary so
that a ≤ b” is called constant time even though the time may depend on whether or not it
is already true that a ≤ b. However, there is some constant t such that the time required is
always at most t.
Here are some examples of code fragments that run in constant time:
int index = 5;
int item = list[index];

if (condition true) then
    perform some operation that runs in constant time
else
    perform some other operation that runs in constant time

for i = 1 to 100
    for j = 1 to 200
        perform some operation that runs in constant time

If T(n) is O(c) for some constant value c, this is equivalent to, and written in standard
notation as, T(n) being O(1).
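The contrast drawn above can be made concrete. In this sketch the single array access in min_of_sorted costs the same regardless of length, while min_of_unsorted must touch every element:

```python
def min_of_sorted(a):
    # O(1): in an ascending array the minimum is always the first element.
    return a[0]

def min_of_unsorted(a):
    # O(n): every element must be examined once.
    best = a[0]
    for x in a[1:]:
        if x < best:
            best = x
    return best

print(min_of_sorted([1, 3, 7, 9]))    # 1
print(min_of_unsorted([7, 1, 9, 3]))  # 1
```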

140.3 Logarithmic time

Further information: Logarithmic growth59
An algorithm is said to take logarithmic time when T(n) = O(log n). Since log_a n and
log_b n are related by a constant multiplier60 , and such a multiplier is irrelevant61 to big-O
classification, the standard usage for logarithmic-time algorithms is O(log n) regardless of
the base of the logarithm appearing in the expression of T.

56 https://en.wikipedia.org/wiki/Array_data_structure
57 https://en.wikipedia.org/wiki/Instruction_(computer_science)
58 https://en.wikipedia.org/wiki/Element_(math)
59 https://en.wikipedia.org/wiki/Logarithmic_growth
60 https://en.wikipedia.org/wiki/Logarithmic_identities#Changing_the_base
61 https://en.wikipedia.org/wiki/Big_O_notation#Multiplication_by_a_constant


Algorithms taking logarithmic time are commonly found in operations on binary trees62 or
when using binary search63 .
An O(log n) algorithm is considered highly efficient, as the ratio of the number of operations
to the size of the input decreases and tends to zero when n increases. An algorithm that
must access all elements of its input cannot take logarithmic time, as the time taken for
reading an input of size n is of the order of n.
An example of logarithmic time is given by dictionary search. Consider a dictionary64
D which contains n entries, sorted by alphabetical order65 . We suppose that, for 1 ≤ k ≤ n,
one may access the kth entry of the dictionary in constant time. Let D(k) denote this
kth entry. Under these hypotheses, testing whether a word w is in the dictionary may be
done in logarithmic time: consider D(⌊n/2⌋), where ⌊⌋ denotes the floor function66 . If
w = D(⌊n/2⌋), then we are done. Else, if w < D(⌊n/2⌋), continue the search in the same
way in the left half of the dictionary, otherwise continue similarly with the right half of the
dictionary. This algorithm is similar to the method often used to find an entry in a paper
dictionary.
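The halving procedure just described is ordinary binary search; a Python sketch over a small word list (the words themselves are arbitrary examples):

```python
def dictionary_lookup(D, w):
    # Binary search over an alphabetically sorted list D, following the
    # halving procedure above. Returns the index of w, or -1 if absent,
    # after O(log n) comparisons.
    lo, hi = 0, len(D) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # the D(floor(n/2)) of the current range
        if D[mid] == w:
            return mid
        elif w < D[mid]:
            hi = mid - 1              # continue in the left half
        else:
            lo = mid + 1              # continue in the right half
    return -1

words = ["algorithm", "binary", "complexity", "logarithm", "search"]
print(dictionary_lookup(words, "logarithm"))  # 3
print(dictionary_lookup(words, "zeta"))       # -1
```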

140.4 Polylogarithmic time

An algorithm is said to run in polylogarithmic67 time if T(n) = O((log n)ᵏ ), for some
constant k. For example, matrix chain ordering68 can be solved in polylogarithmic time on
a parallel random-access machine69 .[6]

140.5 Sub-linear time

An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) =
o(n). In particular this includes algorithms with the time complexities defined above.
Typical algorithms that are exact and yet run in sub-linear time use parallel processing70 (as
the NC1 matrix determinant calculation does), or alternatively have guaranteed assumptions
on the input structure (as the logarithmic time binary search71 and many tree maintenance
algorithms do). However, formal languages72 such as the set of all strings that have a 1-bit
in the position indicated by the first log(n) bits of the string may depend on every bit of
the input and yet be computable in sub-linear time.

62 https://en.wikipedia.org/wiki/Binary_tree
63 https://en.wikipedia.org/wiki/Binary_search
64 https://en.wikipedia.org/wiki/Dictionary
65 https://en.wikipedia.org/wiki/Alphabetical_order
66 https://en.wikipedia.org/wiki/Floor_function
67 https://en.wikipedia.org/wiki/Polylogarithmic_function
68 https://en.wikipedia.org/wiki/Matrix_chain_multiplication
69 https://en.wikipedia.org/wiki/Parallel_random-access_machine
70 https://en.wikipedia.org/wiki/Parallel_algorithm
71 https://en.wikipedia.org/wiki/Binary_search_algorithm
72 https://en.wikipedia.org/wiki/Formal_language


The specific term sublinear time algorithm is usually reserved for algorithms that, unlike
the above, run on classical serial machine models and are not allowed prior
assumptions on the input.[7] They are however allowed to be randomized73 , and indeed must
be randomized for all but the most trivial of tasks.
As such an algorithm must provide an answer without reading the entire input, its par-
ticulars heavily depend on the access allowed to the input. Usually for an input that is
represented as a binary string b1 ,...,bk it is assumed that the algorithm can in time O(1)
request and obtain the value of bi for any i.
Sub-linear time algorithms are typically randomized, and provide only approximate74 solu-
tions. In fact, the property of a binary string having only zeros (and no ones) can be easily
proved not to be decidable by a (non-approximate) sub-linear time algorithm. Sub-linear
time algorithms arise naturally in the investigation of property testing75 .
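As an assumption-laden illustration of this flavor of algorithm (not an example drawn from the text above), a randomized tester can distinguish the all-zeros string from strings with many ones while reading only a constant number of bits; the function name and the sample count below are hypothetical choices:

```python
import random

def all_zeros_tester(get_bit, n, eps=0.1, trials=None):
    """Property-testing sketch: accepts the all-zeros string, and rejects
    with high probability any string with more than eps*n ones, while
    reading only O(1/eps) bits of the input (one-sided error)."""
    if trials is None:
        trials = int(3 / eps)     # constant number of samples, independent of n
    for _ in range(trials):
        i = random.randrange(n)   # O(1) access to bit b_i, as assumed above
        if get_bit(i) == 1:
            return False          # found a one: certainly not all zeros
    return True                   # probably all zeros

bits = [0] * 1000
print(all_zeros_tester(lambda i: bits[i], len(bits)))  # → True
```

Deciding the all-zeros property exactly, by contrast, requires reading every bit.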

140.6 Linear time

An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n).
Informally, this means that the running time increases at most linearly with the size of the
input. More precisely, this means that there is a constant c such that the running time is
at most cn for every input of size n. For example, a procedure that adds up all elements of
a list requires time proportional to the length of the list, if the adding time is constant, or,
at least, bounded by a constant.
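The list-summing procedure mentioned above, assuming constant-time addition:

```python
def total(xs):
    """O(n): one constant-time addition per element of the list."""
    s = 0
    for x in xs:   # n iterations for a list of length n
        s += x
    return s

print(total([3, 1, 4, 1, 5]))  # → 14
```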
Linear time is the best possible time complexity in situations where the algorithm has to
sequentially read its entire input. Therefore, much research has been invested into dis-
covering algorithms exhibiting linear time or, at least, nearly linear time. This research
includes both software and hardware methods. There are several hardware technologies
which exploit parallelism76 to provide this. An example is content-addressable memory77 .
This concept of linear time is used in string matching algorithms such as the Boyer–Moore
algorithm78 and Ukkonen's algorithm79 .

140.7 Quasilinear time

An algorithm is said to run in quasilinear time (also referred to as log-linear time) if


T(n) = O(n log^k n) for some positive constant k; linearithmic time is the case k = 1.[8][9]
Using soft O notation80 these algorithms are Õ(n). Quasilinear time algorithms are also
O(n^(1+ε)) for every constant ε > 0, and thus run faster than any polynomial time algorithm
whose time bound includes a term n^c for any c > 1.

73 https://en.wikipedia.org/wiki/Randomized_algorithm
74 https://en.wikipedia.org/wiki/Approximation_algorithm
75 https://en.wikipedia.org/wiki/Property_testing
76 https://en.wikipedia.org/wiki/Parallel_computing
77 https://en.wikipedia.org/wiki/Content-addressable_memory
78 https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm
79 https://en.wikipedia.org/wiki/Ukkonen%27s_algorithm
80 https://en.wikipedia.org/wiki/Soft_O_notation


Algorithms which run in quasilinear time include:


• In-place merge sort81 , O(n log^2 n)
• Quicksort82 , O(n log n): its randomized version has a running time that is O(n log n)
in expectation on the worst-case input; its non-randomized version has an O(n log n)
running time only when considering average-case complexity.
• Heapsort83 , O(n log n), merge sort84 , introsort85 , binary tree sort, smoothsort86 , patience
sorting87 , etc. in the worst case
• Fast Fourier transforms88 , O(n log n)
• Monge array89 calculation, O(n log n)
In many cases, the n · log n running time is simply the result of performing a Θ(log n)
operation n times (for the notation, see Big O notation § Family of Bachmann–Landau
notations90 ). For example, binary tree sort91 creates a binary tree92 by inserting each
element of the n-sized array one by one. Since the insert operation on a self-balancing
binary search tree93 takes O(log n) time, the entire algorithm takes O(n log n) time.
Comparison sorts94 require at least Ω(n log n) comparisons in the worst case because log(n!)
= Θ(n log n), by Stirling's approximation95 . They also frequently arise from the recurrence
relation96 T(n) = 2T(n/2) + O(n).
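The recurrence T(n) = 2T(n/2) + O(n) is exactly the shape of merge sort: two recursive calls on halves followed by a linear-time merge. A minimal sketch:

```python
def merge_sort(xs):
    """O(n log n): T(n) = 2T(n/2) + O(n)."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # T(n/2)
    right = merge_sort(xs[mid:])   # T(n/2)
    merged = []                    # O(n) merge of the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # → [1, 2, 3, 4, 5, 6]
```

The recursion has Θ(log n) levels and does O(n) work per level, giving O(n log n) overall.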

140.8 Sub-quadratic time

An algorithm97 is said to be subquadratic time if T(n) = o(n^2).


For example, simple, comparison-based sorting algorithms98 are quadratic (e.g. insertion
sort99 ), but more advanced algorithms can be found that are subquadratic (e.g. Shell
sort100 ). No general-purpose sorts run in linear time, but the change from quadratic to
sub-quadratic is of great practical importance.

81 https://en.wikipedia.org/wiki/In-place_merge_sort
82 https://en.wikipedia.org/wiki/Quicksort
83 https://en.wikipedia.org/wiki/Heapsort
84 https://en.wikipedia.org/wiki/Merge_sort
85 https://en.wikipedia.org/wiki/Introsort
86 https://en.wikipedia.org/wiki/Smoothsort
87 https://en.wikipedia.org/wiki/Patience_sorting
88 https://en.wikipedia.org/wiki/Fast_Fourier_transform
89 https://en.wikipedia.org/wiki/Monge_array
90 https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann%E2%80%93Landau_notations
91 https://en.wikipedia.org/wiki/Binary_tree_sort
92 https://en.wikipedia.org/wiki/Binary_tree
93 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
94 https://en.wikipedia.org/wiki/Comparison_sort
95 https://en.wikipedia.org/wiki/Stirling%27s_approximation
96 https://en.wikipedia.org/wiki/Recurrence_relation
97 https://en.wikipedia.org/wiki/Algorithm
98 https://en.wikipedia.org/wiki/Sorting_algorithm
99 https://en.wikipedia.org/wiki/Insertion_sort
100 https://en.wikipedia.org/wiki/Shell_sort


140.9 Polynomial time

An algorithm is said to be of polynomial time if its running time is upper bounded101
by a polynomial expression102 in the size of the input for the algorithm, i.e., T(n) = O(n^k)
for some positive constant k.[1][10] Problems103 for which a deterministic polynomial time
algorithm exists belong to the complexity class104 P105 , which is central in the field of
computational complexity theory106 . Cobham's thesis107 states that polynomial time is a
synonym for ”tractable”, ”feasible”, ”efficient”, or ”fast”.[11]
Some examples of polynomial time algorithms:
• The selection sort108 sorting algorithm on n integers performs An^2 operations for some
constant A. Thus it runs in time O(n^2) and is a polynomial time algorithm.
• All the basic arithmetic operations (addition, subtraction, multiplication, division, and
comparison) can be done in polynomial time.
• Maximum matchings109 in graphs110 can be found in polynomial time.
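The selection sort bullet above can be made concrete; the nested loops below perform on the order of n^2 comparisons:

```python
def selection_sort(a):
    """O(n^2): for each position, scan the rest of the list for the minimum."""
    a = list(a)
    n = len(a)
    for i in range(n):             # n outer iterations
        m = i
        for j in range(i + 1, n):  # up to n - 1 inner comparisons
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]    # swap the minimum into place
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # → [11, 12, 22, 25, 64]
```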

140.9.1 Strongly and weakly polynomial time

In some contexts, especially in optimization111 , one differentiates between strongly
polynomial time and weakly polynomial time algorithms. These two concepts are only
relevant if the inputs to the algorithms consist of integers.
Strongly polynomial time is defined in the arithmetic model of computation. In this model of
computation the basic arithmetic operations (addition, subtraction, multiplication, division,
and comparison) take a unit time step to perform, regardless of the sizes of the operands.
The algorithm runs in strongly polynomial time if[12]
1. the number of operations in the arithmetic model of computation is bounded by a
polynomial in the number of integers in the input instance; and
2. the space used by the algorithm is bounded by a polynomial in the size of the input.
Any algorithm with these two properties can be converted to a polynomial time algorithm
by replacing the arithmetic operations by suitable algorithms for performing the arithmetic
operations on a Turing machine112 . If the second of the above requirements is not met,
then this is no longer true. Given the integer 2^n (which takes up space proportional to
n in the Turing machine model), it is possible to compute 2^(2^n) with n multiplications using

101 https://en.wikipedia.org/wiki/Upper_bound
102 https://en.wikipedia.org/wiki/Polynomial_expression
103 https://en.wikipedia.org/wiki/Decision_problem
104 https://en.wikipedia.org/wiki/Complexity_class
105 https://en.wikipedia.org/wiki/P_(complexity)
106 https://en.wikipedia.org/wiki/Computational_complexity_theory
107 https://en.wikipedia.org/wiki/Cobham%27s_thesis
108 https://en.wikipedia.org/wiki/Selection_sort
109 https://en.wikipedia.org/wiki/Maximum_matching
110 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
111 https://en.wikipedia.org/wiki/Optimization_(mathematics)
112 https://en.wikipedia.org/wiki/Turing_machine


repeated squaring113 . However, the space used to represent 2^(2^n) is proportional to 2^n , and
thus exponential rather than polynomial in the space used to represent the input. Hence,
it is not possible to carry out this computation in polynomial time on a Turing machine,
but it is possible to compute it by polynomially many arithmetic operations.
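The computation described above can be sketched directly; the arithmetic model counts only the n squarings, while the result occupies 2^n + 1 bits:

```python
def two_to_two_to(n):
    """Compute 2^(2^n) with n squarings: n arithmetic operations,
    but the result needs 2^n + 1 bits, exponential in the input size."""
    x = 2
    for _ in range(n):
        x = x * x   # one squaring doubles the bit length
    return x

print(two_to_two_to(3))               # → 256, i.e. 2^(2^3) = 2^8
print(two_to_two_to(5).bit_length())  # → 33, i.e. 2^5 + 1 bits
```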
Conversely, there are algorithms that run in a number of Turing machine steps bounded by
a polynomial in the length of binary-encoded input, but do not take a number of arithmetic
operations bounded by a polynomial in the number of input numbers. The Euclidean
algorithm114 for computing the greatest common divisor115 of two integers is one example.
Given two integers a and b, the algorithm performs O(log a + log b) arithmetic operations
on numbers with at most O(log a + log b) bits. At the same time, the number of arithmetic
operations cannot be bounded by the number of integers in the input (which is constant in
this case, there are always only two integers in the input). Due to the latter observation,
the algorithm does not run in strongly polynomial time. Its real running time depends on
the magnitudes of a and b and not only on the number of integers in the input.
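A minimal sketch of the Euclidean algorithm; the number of loop iterations, and hence the operation count, grows with the magnitudes of a and b rather than with the count of input integers:

```python
def gcd(a, b):
    """Euclidean algorithm: O(log a + log b) arithmetic operations
    on numbers of at most O(log a + log b) bits."""
    while b:
        a, b = b, a % b   # the pair shrinks by a constant factor every two steps
    return a

print(gcd(252, 105))  # → 21
```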
An algorithm that runs in polynomial time but that is not strongly polynomial is said to run
in weakly polynomial time.[13] A well-known example of a problem for which a weakly
polynomial-time algorithm is known, but is not known to admit a strongly polynomial-time
algorithm, is linear programming116 . Weakly polynomial-time should not be confused with
pseudo-polynomial time117 .

140.9.2 Complexity classes

The concept of polynomial time leads to several complexity classes in computational com-
plexity theory. Some important classes defined using polynomial time are the following.
P118 The complexity class119 of decision problems120 that can be solved on a deter-
ministic Turing machine121 in polynomial time
NP122 The complexity class of decision problems that can be solved on a non-
deterministic Turing machine123 in polynomial time
ZPP124 The complexity class of decision problems that can be solved with zero error on
a probabilistic Turing machine125 in polynomial time
RP126 The complexity class of decision problems that can be solved with 1-sided error
on a probabilistic Turing machine in polynomial time.
BPP127 The complexity class of decision problems that can be solved with 2-sided error
on a probabilistic Turing machine in polynomial time

113 https://en.wikipedia.org/wiki/Repeated_squaring
114 https://en.wikipedia.org/wiki/Euclidean_algorithm
115 https://en.wikipedia.org/wiki/Greatest_common_divisor
116 https://en.wikipedia.org/wiki/Linear_programming
117 https://en.wikipedia.org/wiki/Pseudo-polynomial_time
118 https://en.wikipedia.org/wiki/P_(complexity)
119 https://en.wikipedia.org/wiki/Complexity_class
120 https://en.wikipedia.org/wiki/Decision_problem
121 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
122 https://en.wikipedia.org/wiki/NP_(complexity)
123 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
124 https://en.wikipedia.org/wiki/ZPP_(complexity)
125 https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
126 https://en.wikipedia.org/wiki/RP_(complexity)
127 https://en.wikipedia.org/wiki/BPP_(complexity)


BQP128 The complexity class of decision problems that can be solved with 2-sided error
on a quantum Turing machine129 in polynomial time

P is the smallest time-complexity class on a deterministic machine which is robust130 in
terms of machine model changes. (For example, a change from a single-tape Turing machine
to a multi-tape machine can lead to a quadratic speedup, but any algorithm that
runs in polynomial time under one model also does so on the other.) Any given abstract
machine131 will have a complexity class corresponding to the problems which can be solved
in polynomial time on that machine.

140.10 Superpolynomial time

An algorithm is said to take superpolynomial time if T(n) is not bounded above by any
polynomial. Using little omega notation132 , it is ω(n^c) time for all constants c, where n is
the input parameter, typically the number of bits in the input.
For example, an algorithm that runs for 2^n steps on an input of size n requires superpoly-
nomial time (more specifically, exponential time).
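As a standard illustration (not specific to this text), a brute-force subset-sum search examines all 2^n subsets of an n-element input:

```python
def subset_sum(nums, target):
    """Exponential time: tries all 2^n subsets of nums."""
    n = len(nums)
    for mask in range(1 << n):  # 2^n bitmasks, one per subset
        if sum(nums[i] for i in range(n) if mask >> i & 1) == target:
            return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # → True (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # → False
```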
An algorithm that uses exponential resources is clearly superpolynomial, but some algo-
rithms are only very weakly superpolynomial. For example, the Adleman–Pomerance–
Rumely primality test133 runs for n^O(log log n) time on n-bit inputs; this grows faster than
any polynomial for large enough n, but the input size must become impractically large
before it cannot be dominated by a polynomial with small degree.
An algorithm that requires superpolynomial time lies outside the complexity class134 P135 .
Cobham's thesis136 posits that these algorithms are impractical, and in many cases they
are. Since the P versus NP problem137 is unresolved, no algorithm for an NP-complete138
problem is currently known to run in polynomial time.

140.11 Quasi-polynomial time

Quasi-polynomial time algorithms are algorithms that run slower than polynomial time,
yet not so slow as to be exponential time. The worst case running time of a quasi-polynomial
time algorithm is 2^O((log n)^c) for some fixed c > 0. For c = 1 we get a polynomial time
algorithm; for c < 1 we get a sub-linear time algorithm.

128 https://en.wikipedia.org/wiki/BQP
129 https://en.wikipedia.org/wiki/Quantum_Turing_machine
130 https://en.wikipedia.org/wiki/Robustness_(computer_science)
131 https://en.wikipedia.org/wiki/Abstract_machine
132 https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann%E2%80%93Landau_notations
133 https://en.wikipedia.org/wiki/Adleman%E2%80%93Pomerance%E2%80%93Rumely_primality_test
134 https://en.wikipedia.org/wiki/Complexity_class
135 https://en.wikipedia.org/wiki/P_(complexity)
136 https://en.wikipedia.org/wiki/Cobham%27s_thesis
137 https://en.wikipedia.org/wiki/P_versus_NP_problem
138 https://en.wikipedia.org/wiki/NP-complete


Quasi-polynomial time algorithms typically arise in reductions139 from an NP-hard140
problem to another problem. For example, one can take an instance of an NP-hard problem, say
3SAT141 , and convert it to an instance of another problem B, but the size of the instance
becomes 2^O((log n)^c) . In that case, this reduction does not prove that problem B is NP-hard;
this reduction only shows that there is no polynomial time algorithm for B unless there is
a quasi-polynomial time algorithm for 3SAT (and thus all of NP142 ). Similarly, there are
some problems for which we know quasi-polynomial time algorithms, but no polynomial
time algorithm is known. Such problems arise in approximation algorithms; a famous ex-
ample is the directed Steiner tree problem143 , for which there is a quasi-polynomial time
approximation algorithm achieving an approximation factor of O(log^3 n) (n being the num-
ber of vertices), but showing the existence of such a polynomial time algorithm is an open
problem.
Other computational problems with quasi-polynomial time solutions but no known poly-
nomial time solution include the planted clique144 problem in which the goal is to find a
large clique145 in the union of a clique and a random graph146 . Although quasi-polynomially
solvable, it has been conjectured that the planted clique problem has no polynomial time
solution; this planted clique conjecture has been used as a computational hardness assump-
tion147 to prove the difficulty of several other problems in computational game theory148 ,
property testing149 , and machine learning150 .[14]
The complexity class QP consists of all problems that have quasi-polynomial time algo-
rithms. It can be defined in terms of DTIME151 as follows.[15]
QP = ⋃c∈N DTIME(2^((log n)^c))

140.11.1 Relation to NP-complete problems

In complexity theory, the unsolved P versus NP152 problem asks if all problems in NP
have polynomial-time algorithms. All the best-known algorithms for NP-complete153 prob-
lems like 3SAT etc. take exponential time. Indeed, it is conjectured for many natural
NP-complete problems that they do not have sub-exponential time algorithms. Here ”sub-
exponential time” is taken to mean the second definition presented below. (On the other
hand, many graph problems represented in the natural way by adjacency matrices are solv-

139 https://en.wikipedia.org/wiki/Reduction_(complexity)
140 https://en.wikipedia.org/wiki/NP-hard
141 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
142 https://en.wikipedia.org/wiki/NP_(complexity)
143 https://en.wikipedia.org/wiki/Steiner_tree_problem
144 https://en.wikipedia.org/wiki/Planted_clique
145 https://en.wikipedia.org/wiki/Clique_problem
146 https://en.wikipedia.org/wiki/Random_graph
147 https://en.wikipedia.org/wiki/Computational_hardness_assumption
148 https://en.wikipedia.org/wiki/Game_theory
149 https://en.wikipedia.org/wiki/Property_testing
150 https://en.wikipedia.org/wiki/Machine_learning
151 https://en.wikipedia.org/wiki/DTIME
152 https://en.wikipedia.org/wiki/P_versus_NP
153 https://en.wikipedia.org/wiki/NP-complete


able in subexponential time simply because the size of the input is the square of the number of
vertices.) This conjecture (for the k-SAT problem) is known as the exponential time hypoth-
esis154 .[16] Since it is conjectured that NP-complete problems do not have quasi-polynomial
time algorithms, some inapproximability results in the field of approximation algorithms155
make the assumption that NP-complete problems do not have quasi-polynomial time algo-
rithms. For example, see the known inapproximability results for the set cover156 problem.

140.12 Sub-exponential time

The term sub-exponential157 time is used to express that the running time of some algo-
rithm may grow faster than any polynomial but is still significantly smaller than an expo-
nential. In this sense, problems that have sub-exponential time algorithms are somewhat
more tractable than those that only have exponential algorithms. The precise definition
of ”sub-exponential” is not generally agreed upon,[17] and we list the two most widely used
ones below.

140.12.1 First definition

A problem is said to be sub-exponential time solvable if it can be solved in running times
whose logarithms grow smaller than any given polynomial. More precisely, a problem is in
sub-exponential time if for every ε > 0 there exists an algorithm which solves the problem
in time O(2^(n^ε)). The set of all such problems is the complexity class SUBEXP which can
be defined in terms of DTIME158 as follows.[5][18][19][20]
SUBEXP = ⋂ε>0 DTIME(2^(n^ε))

This notion of sub-exponential is non-uniform in terms of ε in the sense that ε is not part
of the input and each ε may have its own algorithm for the problem.

140.12.2 Second definition

Some authors define sub-exponential time as running times in 2^o(n) .[16][21][22] This definition
allows larger running times than the first definition of sub-exponential time. An example
of such a sub-exponential time algorithm is the best-known classical algorithm for integer
factorization, the general number field sieve159 , which runs in time about 2^Õ(n^(1/3)) , where
the length of the input is n. Another example is the graph isomorphism problem160 , where
Luks's algorithm runs in time 2^O(√(n log n)) . (In 2015–2017, Babai reduced the complexity of
this problem to quasi-polynomial time.)

154 https://en.wikipedia.org/wiki/Exponential_time_hypothesis
155 https://en.wikipedia.org/wiki/Approximation_algorithms
156 https://en.wikipedia.org/wiki/Set_cover
157 https://en.wikipedia.org/wiki/Infra-exponential
158 https://en.wikipedia.org/wiki/DTIME
159 https://en.wikipedia.org/wiki/General_number_field_sieve
160 https://en.wikipedia.org/wiki/Graph_isomorphism_problem


It makes a difference whether the algorithm is allowed to be sub-exponential in the size of
the instance, the number of vertices, or the number of edges. In parameterized complex-
ity161 , this difference is made explicit by considering pairs (L, k) of decision problems162
and parameters k. SUBEPT is the class of all parameterized problems that run in time
sub-exponential in k and polynomial in the input size n:[23]
SUBEPT = DTIME(2^o(k) · poly(n)).

More precisely, SUBEPT is the class of all parameterized problems (L, k) for which there is
a computable function163 f : N → N with f ∈ o(k) and an algorithm that decides L in time
2^f(k) · poly(n).

Exponential time hypothesis

Main article: Exponential time hypothesis164 The exponential time hypothesis (ETH)
is that 3SAT165 , the satisfiability problem of Boolean formulas in conjunctive normal form166
with, at most, three literals per clause and with n variables, cannot be solved in time 2^o(n) .
More precisely, the hypothesis is that there is some absolute constant c > 0 such that 3SAT
cannot be decided in time 2^cn by any deterministic Turing machine. With m denoting the
number of clauses, ETH is equivalent to the hypothesis that kSAT cannot be solved in time
2^o(m) for any integer k ≥ 3.[24] The exponential time hypothesis implies P ≠ NP167 .

140.13 Exponential time

An algorithm is said to be exponential time, if T(n) is upper bounded by 2^poly(n) , where
poly(n) is some polynomial in n. More formally, an algorithm is exponential time if T(n) is
bounded by O(2^(n^k)) for some constant k. Problems which admit exponential time algorithms
on a deterministic Turing machine form the complexity class known as EXP168 .

EXP = ⋃c∈N DTIME(2^(n^c))

Sometimes, exponential time is used to refer to algorithms that have T(n) = 2^O(n) , where
the exponent is at most a linear function of n. This gives rise to the complexity class E169 .

E = ⋃c∈N DTIME(2^(cn))

161 https://en.wikipedia.org/wiki/Parameterized_complexity
162 https://en.wikipedia.org/wiki/Decision_problem
163 https://en.wikipedia.org/wiki/Computable_function
164 https://en.wikipedia.org/wiki/Exponential_time_hypothesis
165 https://en.wikipedia.org/wiki/3SAT
166 https://en.wikipedia.org/wiki/Conjunctive_normal_form
167 https://en.wikipedia.org/wiki/P_%E2%89%A0_NP
168 https://en.wikipedia.org/wiki/EXP
169 https://en.wikipedia.org/wiki/E_(complexity)


140.14 Factorial time

An example of an algorithm that runs in factorial time is bogosort170 , a notoriously inef-
ficient sorting algorithm based on trial and error171 . Bogosort sorts a list of n items by
repeatedly shuffling172 the list until it is found to be sorted. In the average case, each pass
through the bogosort algorithm will examine one of the n! orderings of the n items. If the
items are distinct, only one such ordering is sorted. Bogosort shares patrimony with the
infinite monkey theorem173 .
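A minimal sketch of bogosort as described above:

```python
import random

def is_sorted(a):
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def bogosort(a):
    """Factorial time on average: shuffle until the list happens to be sorted."""
    a = list(a)
    while not is_sorted(a):
        random.shuffle(a)   # pick one of the n! orderings uniformly at random
    return a

print(bogosort([3, 1, 2]))  # → [1, 2, 3]
```

With distinct items, each shuffle succeeds with probability 1/n!, so the expected number of passes is n!.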

140.15 Double exponential time


An algorithm is said to be double exponential174 time if T(n) is upper bounded by 2^(2^poly(n)) ,
where poly(n) is some polynomial in n. Such algorithms belong to the complexity class
2-EXPTIME175 .

2-EXPTIME = ⋃c∈N DTIME(2^(2^(n^c)))

Well-known double exponential time algorithms include:


• Decision procedures for Presburger arithmetic176
• Computing a Gröbner basis177 (in the worst case[25] )
• Quantifier elimination178 on real closed fields179 takes at least double exponential time,[26]
and can be done in this time.[27]

140.16 See also


• Block swap algorithms180
• L-notation181
• Space complexity182

170 https://en.wikipedia.org/wiki/Bogosort
171 https://en.wikipedia.org/wiki/Trial_and_error
172 https://en.wikipedia.org/wiki/Shuffling
173 https://en.wikipedia.org/wiki/Infinite_monkey_theorem
174 https://en.wikipedia.org/wiki/Double_exponential_function
175 https://en.wikipedia.org/wiki/2-EXPTIME
176 https://en.wikipedia.org/wiki/Presburger_arithmetic
177 https://en.wikipedia.org/wiki/Gr%C3%B6bner_basis
178 https://en.wikipedia.org/wiki/Quantifier_elimination
179 https://en.wikipedia.org/wiki/Real_closed_field
180 https://en.wikipedia.org/wiki/Block_swap_algorithms
181 https://en.wikipedia.org/wiki/L-notation
182 https://en.wikipedia.org/wiki/Space_complexity


140.17 References
1. S, M183 (2006). Introduction to the Theory of Computation. Course
Technology Inc. ISBN184 0-619-21764-2185 .
2. M, K; N, S (1990). ”B  
 O(  N)   O() ”. Information Processing Letters. 35 (4):
183–189. doi186 :10.1016/0020-0190(90)90022-P187 .
3. T, T188 (2010). ”1.11 T AKS  ”189 . An epsilon
of room, II: Pages from year three of a mathematical blog. Graduate Studies in
Mathematics. 117. Providence, RI: American Mathematical Society. pp. 82–86.
doi190 :10.1090/gsm/117191 . ISBN192 978-0-8218-5280-4193 . MR194 2780010195 .
4. L, J., H.W.196 ; P, C197 (11 D 2016). ”P-
   G ”198 (PDF). Cite journal requires
|journal= (help199 )
5. B, L200 ; F, L201 ; N, N.202 ; W, A203 (1993).
”BPP      EXPTIME  -
 ”. Computational Complexity. Berlin, New York: Springer-Verlag204 .
3 (4): 307–318. doi205 :10.1007/BF01275486206 .
6. B, P G.; R, G J. E.; S, G E. (1998).
”E M C O  P T”. SIAM Journal on Com-
puting. Philadelphia: Society for Industrial and Applied Mathematics207 . 27 (2):
466–490. doi208 :10.1137/S0097539794270698209 . ISSN210 1095-7111211 .

183 https://en.wikipedia.org/wiki/Michael_Sipser
184 https://en.wikipedia.org/wiki/ISBN_(identifier)
185 https://en.wikipedia.org/wiki/Special:BookSources/0-619-21764-2
186 https://en.wikipedia.org/wiki/Doi_(identifier)
187 https://doi.org/10.1016%2F0020-0190%2890%2990022-P
188 https://en.wikipedia.org/wiki/Terence_Tao
189 https://terrytao.wordpress.com/2009/08/11/the-aks-primality-test/
190 https://en.wikipedia.org/wiki/Doi_(identifier)
191 https://doi.org/10.1090%2Fgsm%2F117
192 https://en.wikipedia.org/wiki/ISBN_(identifier)
193 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-5280-4
194 https://en.wikipedia.org/wiki/MR_(identifier)
195 http://www.ams.org/mathscinet-getitem?mr=2780010
196 https://en.wikipedia.org/wiki/Hendrik_Lenstra
197 https://en.wikipedia.org/wiki/Carl_Pomerance
198 https://math.dartmouth.edu/~carlp/aks111216.pdf
200 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
201 https://en.wikipedia.org/wiki/Lance_Fortnow
202 https://en.wikipedia.org/wiki/Noam_Nisan
203 https://en.wikipedia.org/wiki/Avi_Wigderson
204 https://en.wikipedia.org/wiki/Springer-Verlag
205 https://en.wikipedia.org/wiki/Doi_(identifier)
206 https://doi.org/10.1007%2FBF01275486
207 https://en.wikipedia.org/wiki/Society_for_Industrial_and_Applied_Mathematics
208 https://en.wikipedia.org/wiki/Doi_(identifier)
209 https://doi.org/10.1137%2FS0097539794270698
210 https://en.wikipedia.org/wiki/ISSN_(identifier)
211 http://www.worldcat.org/issn/1095-7111


7. K, R; R, R (2003). ”S  ”212


(PDF). SIGACT News. 34 (4): 57–67. doi213 :10.1145/954092.954103214 .
8. N, A V.; R, K W.; S, D. (1995). ”O Q-
 T C T”215 (PDF). Theoretical Computer Science. 148 (2):
325–349. doi216 :10.1016/0304-3975(95)00031-q217 . Retrieved 23 February 2015.
9. Sedgewick, R. and Wayne K (2011). Algorithms, 4th Ed218 . p. 186. Pearson Educa-
tion, Inc.
10. P, C H.219 (1994). Computational complexity. Reading,
Mass.: Addison-Wesley. ISBN220 0-201-53082-1221 .
11. C, A222 (1965). ”T     -
”. Proc. Logic, Methodology, and Philosophy of Science II. North Holland.
12. G, M223 ; L L224 ; A S225 (1988).
”C, O,  N C”. Geometric Algorithms
and Combinatorial Optimization. Springer. ISBN226 0-387-13624-X227 .
13. S, A228 (2003). ”P    C-
”. Combinatorial Optimization: Polyhedra and Efficiency. 1. Springer.
ISBN229 3-540-44389-4230 .
14. B, M; K, Y K; R, A; W,
O (2015), ETH hardness for densest-k-subgraph with perfect completeness,
arXiv231 :1504.08352232 , Bibcode233 :2015arXiv150408352B234 .
15. Complexity Zoo235 : Class QP: Quasipolynomial-Time236

212 http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/kumarR-survey.pdf
213 https://en.wikipedia.org/wiki/Doi_(identifier)
214 https://doi.org/10.1145%2F954092.954103
215 http://www.cse.buffalo.edu/~regan/papers/pdf/NRS95.pdf
216 https://en.wikipedia.org/wiki/Doi_(identifier)
217 https://doi.org/10.1016%2F0304-3975%2895%2900031-q
218 http://algs4.cs.princeton.edu/home/
219 https://en.wikipedia.org/wiki/Christos_H._Papadimitriou
220 https://en.wikipedia.org/wiki/ISBN_(identifier)
221 https://en.wikipedia.org/wiki/Special:BookSources/0-201-53082-1
222 https://en.wikipedia.org/wiki/Alan_Cobham
223 https://en.wikipedia.org/wiki/Martin_Gr%C3%B6tschel
224 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Lov%C3%A1sz
225 https://en.wikipedia.org/wiki/Alexander_Schrijver
226 https://en.wikipedia.org/wiki/ISBN_(identifier)
227 https://en.wikipedia.org/wiki/Special:BookSources/0-387-13624-X
228 https://en.wikipedia.org/wiki/Alexander_Schrijver
229 https://en.wikipedia.org/wiki/ISBN_(identifier)
230 https://en.wikipedia.org/wiki/Special:BookSources/3-540-44389-4
231 https://en.wikipedia.org/wiki/ArXiv_(identifier)
232 http://arxiv.org/abs/1504.08352
233 https://en.wikipedia.org/wiki/Bibcode_(identifier)
234 https://ui.adsabs.harvard.edu/abs/2015arXiv150408352B
235 https://en.wikipedia.org/wiki/Complexity_Zoo
236 https://complexityzoo.uwaterloo.ca/Complexity_Zoo:Q#qp


16. I, R.; P, R. (2001). ”O    -SAT”.


Journal of Computer and System Sciences. Elsevier237 . 62 (2): 367–375.
238 239 240
doi :10.1006/jcss.2000.1727 . ISSN 1090-2724 . 241

17. A, S (5 A 2009). ”A -- ”242 .


Shtetl-Optimized. Retrieved 2 December 2009.
18. Complexity Zoo243 : Class SUBEXP: Deterministic Subexponential-Time244
19. M, P. (2003). ”B' C  S C C”. I
A L, B J. N (.). Fundamentals of Computation Theory:
14th International Symposium, FCT 2003, Malmö, Sweden, August 12-15, 2003, Pro-
ceedings. Lecture Notes in Computer Science245 . 2751. Berlin, New York: Springer-
Verlag. pp. 333–342. doi246 :10.1007/978-3-540-45077-1_31247 . ISBN248 978-3-540-
40543-6249 . ISSN250 0302-9743251 .CS1 maint: uses editors parameter (link252 )
20. M, P.B. (2001). ”D C C”. Handbook of
Randomized Computing. Combinatorial Optimization. Kluwer Academic Pub. 9:
843. doi253 :10.1007/978-1-4615-0013-1_19254 . ISBN255 978-1-4613-4886-3256 .
21. K, G (2005). ”A S-T Q A-
   D H S P”. SIAM Journal
on Computing. Philadelphia. 35 (1): 188. arXiv257 :quant-ph/0302112258 .
doi259 :10.1137/s0097539703436345260 . ISSN261 1095-7111262 .
22. O R (2004). ”A S T A   D-
 H S P  P S”. X263 :-
/04061511264 .

237 https://en.wikipedia.org/wiki/Elsevier
238 https://en.wikipedia.org/wiki/Doi_(identifier)
239 https://doi.org/10.1006%2Fjcss.2000.1727
240 https://en.wikipedia.org/wiki/ISSN_(identifier)
241 http://www.worldcat.org/issn/1090-2724
242 http://scottaaronson.com/blog/?p=394
243 https://en.wikipedia.org/wiki/Complexity_Zoo
244 https://complexityzoo.uwaterloo.ca/Complexity_Zoo:S#subexp
245 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
246 https://en.wikipedia.org/wiki/Doi_(identifier)
247 https://doi.org/10.1007%2F978-3-540-45077-1_31
248 https://en.wikipedia.org/wiki/ISBN_(identifier)
249 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-40543-6
250 https://en.wikipedia.org/wiki/ISSN_(identifier)
251 http://www.worldcat.org/issn/0302-9743
253 https://en.wikipedia.org/wiki/Doi_(identifier)
254 https://doi.org/10.1007%2F978-1-4615-0013-1_19
255 https://en.wikipedia.org/wiki/ISBN_(identifier)
256 https://en.wikipedia.org/wiki/Special:BookSources/978-1-4613-4886-3
257 https://en.wikipedia.org/wiki/ArXiv_(identifier)
258 http://arxiv.org/abs/quant-ph/0302112
259 https://en.wikipedia.org/wiki/Doi_(identifier)
260 https://doi.org/10.1137%2Fs0097539703436345
261 https://en.wikipedia.org/wiki/ISSN_(identifier)
262 http://www.worldcat.org/issn/1095-7111
263 https://en.wikipedia.org/wiki/ArXiv_(identifier)
264 http://arxiv.org/abs/quant-ph/0406151v1


23. F, J; G, M265 (2006). Parameterized Complexity Theory266 .


S. . 417. ISBN267 978-3-540-29952-3268 .CS1 maint: ref=harv (link269 )
24. I, R.270 ; P, R.; Z, F. (2001). ”W  
  ?”. Journal of Computer and System Sci-
ences271 . 63 (4): 512–530. doi272 :10.1006/jcss.2001.1774273 .
25. Mayr, E. & Meyer, A.: The Complexity of the Word Problem for Commutative Semigroups and Polynomial Ideals. Adv. in Math. 46 (1982) pp. 305–329
26. J.H. Davenport & J. Heintz: Real Quantifier Elimination is Doubly Exponential. J. Symbolic Comp. 5 (1988) pp. 29–35.
27. G.E. Collins: Quantifier Elimination for Real Closed Fields by Cylindrical Algebraic
Decomposition. Proc. 2nd. GI Conference Automata Theory & Formal Languages
(Springer Lecture Notes in Computer Science 33) pp. 134–183

265 https://en.wikipedia.org/wiki/Martin_Grohe
266 https://www.springer.com/east/home/generic/search/results?SGWID=5-40109-22-141358322-0
267 https://en.wikipedia.org/wiki/ISBN_(identifier)
268 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-29952-3
270 https://en.wikipedia.org/wiki/Russell_Impagliazzo
271 https://en.wikipedia.org/wiki/Journal_of_Computer_and_System_Sciences
272 https://en.wikipedia.org/wiki/Doi_(identifier)
273 https://doi.org/10.1006%2Fjcss.2001.1774

141 Space complexity

In computer science1 , the space complexity of an algorithm2 or a computer program3 is the amount of memory space required to solve an instance of the computational problem4 as a function of the size of the input5 . It is the memory an algorithm requires to execute and produce its output.[1]
Similar to time complexity6 , space complexity is often expressed asymptotically in big O notation7 , such as O(n), O(n log n), O(nα ), O(2n ), etc., where n is the input size in units of bits8 needed to represent the input.
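As an illustration (ours, not part of the original article), the distinction between O(n) and O(1) auxiliary space can be seen in two ways of summing the integers 0 to n − 1; the function names are hypothetical:

```python
def sum_with_list(n):
    # Materializes all n values in a list: O(n) auxiliary space.
    values = [i for i in range(n)]
    return sum(values)

def sum_with_counter(n):
    # Keeps only a running total: O(1) auxiliary space.
    total = 0
    for i in range(n):
        total += i
    return total

print(sum_with_list(1000) == sum_with_counter(1000))  # both compute 499500
```

Both functions have linear time complexity, but their space complexities differ.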

141.1 Space complexity classes

Analogously to time complexity classes DTIME(f(n))9 and NTIME(f(n))10 , the complexity


classes DSPACE(f(n))11 and NSPACE(f(n))12 are the sets of languages that are decidable by
deterministic (respectively, non-deterministic) Turing machines13 that use O(f (n)) space.
The complexity classes PSPACE14 and NPSPACE15 allow f to be any polynomial, analo-
gously to P16 and NP17 . That is,

PSPACE = ∪c∈Z+ DSPACE(nc )

and

NPSPACE = ∪c∈Z+ NSPACE(nc )

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Algorithm
3 https://en.wikipedia.org/wiki/Computer_program
4 https://en.wikipedia.org/wiki/Computational_problem
5 https://en.wikipedia.org/wiki/Problem_size
6 https://en.wikipedia.org/wiki/Time_complexity
7 https://en.wikipedia.org/wiki/Big_O_notation
8 https://en.wikipedia.org/wiki/Bit
9 https://en.wikipedia.org/wiki/DTIME
10 https://en.wikipedia.org/wiki/NTIME
11 https://en.wikipedia.org/wiki/DSPACE
12 https://en.wikipedia.org/wiki/NSPACE
13 https://en.wikipedia.org/wiki/Turing_machine
14 https://en.wikipedia.org/wiki/PSPACE
15 https://en.wikipedia.org/wiki/NPSPACE
16 https://en.wikipedia.org/wiki/P_(complexity)
17 https://en.wikipedia.org/wiki/NP_(complexity)


141.2 Relationships between classes

The space hierarchy theorem18 states that, for all space-constructible functions19 f (n), there
exists a problem that can be solved by a machine with f (n) memory space, but cannot be
solved by a machine with asymptotically less than f (n) space.
The following containments between complexity classes hold.[2]
DTIME(f (n)) ⊆ DSPACE(f (n)) ⊆ NSPACE(f (n)) ⊆ DTIME(2O(f (n)) )

Furthermore, Savitch's theorem20 gives the reverse containment that if f ∈ Ω(log(n)),


NSPACE(f (n)) ⊆ DSPACE((f (n))2 ).

As a direct corollary, PSPACE = NPSPACE. This result is surprising because it suggests that
non-determinism can reduce the space necessary to solve a problem only by a small amount.
In contrast, the exponential time hypothesis21 conjectures that for time complexity, there
can be an exponential gap between deterministic and non-deterministic complexity.
The Immerman–Szelepcsényi theorem22 states that, again for f ∈ Ω(log(n)), NSPACE(f (n))
is closed under complementation. This shows another qualitative difference between time
and space complexity classes, as nondeterministic time complexity classes are not believed
to be closed under complementation; for instance, it is conjectured that NP ≠co-NP23 .[3][4]

141.3 LOGSPACE

L24 or LOGSPACE is the set of problems that can be solved by a deterministic Turing
machine using only O(log n) memory space with regards to input size. Even a single counter
that can index the entire n-bit input requires log n space, so LOGSPACE algorithms can
maintain only a constant number of counters or other variables of similar bit complexity.
LOGSPACE and other sub-linear space complexities are useful when processing large data
that cannot fit into a computer's RAM25 . They are related to streaming algorithms26 ,
but restrict only how much memory can be used, while streaming algorithms place further
constraints on how the input is fed into the algorithm. This class also sees use in the field of
pseudorandomness27 and derandomization28 , where researchers consider the open problem
of whether L29 = RL30 .[5][6]
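As a sketch of the ”constant number of counters” idea (our example, not from the article): deciding whether a parenthesis string is balanced needs only a single counter whose value never exceeds the input length n, and therefore only O(log n) bits of working memory:

```python
def balanced(s):
    # One counter suffices; its value is at most len(s), so it fits in
    # O(log n) bits of working memory -- a LOGSPACE-style algorithm.
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:       # more ')' than '(' seen so far
                return False
    return depth == 0

print(balanced("(()())"))  # True
print(balanced("())("))    # False
```

The input is only read, never stored, which is exactly the accounting convention under which sub-linear space classes are defined.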

18 https://en.wikipedia.org/wiki/Space_hierarchy_theorem
19 https://en.wikipedia.org/wiki/Space-constructible_function
20 https://en.wikipedia.org/wiki/Savitch%27s_theorem
21 https://en.wikipedia.org/wiki/Exponential_time_hypothesis
22 https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi_theorem
23 https://en.wikipedia.org/wiki/Co-NP
24 https://en.wikipedia.org/wiki/L_(complexity)
25 https://en.wikipedia.org/wiki/Random_access_memory
26 https://en.wikipedia.org/wiki/Streaming_algorithm
27 https://en.wikipedia.org/wiki/Pseudorandomness
28 https://en.wikipedia.org/wiki/Derandomization
29 https://en.wikipedia.org/wiki/L_(complexity)
30 https://en.wikipedia.org/wiki/RL_(complexity)


The corresponding nondeterministic space complexity class is NL31 .

141.4 References
1. K, W; Z, M J. (2003), Optimal Reliability Modeling: Principles and
Applications32 , J W & S, . 62, ISBN33 978047127545934
2. A, S35 ; B, B (2007), Computational Complexity : A Modern
Approach36 (PDF) ( .), . 76, ISBN37 978051180409038
3. I, N39 (1988), ”N    
”40 (PDF), SIAM Journal on Computing, 17 (5): 935–938,
doi41 :10.1137/021705842 , MR43 096104944
4. S, R45 (1987), ”T     -
 ”, Bulletin of the EATCS, 33: 96–100
5. N, N46 (1992), ”RL ⊆ SC”, Proceedings of the 24th ACM Symposium on
Theory of computing (STOC '92), Victoria, British Columbia, Canada, pp. 619–623,
doi47 :10.1145/129712.12977248 .
6. R, O49 ; T, L50 ; V, S51 (2006), ”P-
       RL . L ”52 (PDF),
STOC'06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing,
New York: ACM, pp. 457–466, doi53 :10.1145/1132516.113258354 , MR55 227717156

31 https://en.wikipedia.org/wiki/NL_(complexity)
32 https://books.google.com/books?id=vdZ4Bm-LnHMC&pg=PA62
33 https://en.wikipedia.org/wiki/ISBN_(identifier)
34 https://en.wikipedia.org/wiki/Special:BookSources/9780471275459
35 https://en.wikipedia.org/wiki/Sanjeev_Arora
36 https://theory.cs.princeton.edu/complexity/book.pdf
37 https://en.wikipedia.org/wiki/ISBN_(identifier)
38 https://en.wikipedia.org/wiki/Special:BookSources/9780511804090
39 https://en.wikipedia.org/wiki/Neil_Immerman
40 http://www.cs.umass.edu/~immerman/pub/space.pdf
41 https://en.wikipedia.org/wiki/Doi_(identifier)
42 https://doi.org/10.1137%2F0217058
43 https://en.wikipedia.org/wiki/MR_(identifier)
44 http://www.ams.org/mathscinet-getitem?mr=0961049
45 https://en.wikipedia.org/wiki/R%C3%B3bert_Szelepcs%C3%A9nyi
46 https://en.wikipedia.org/wiki/Noam_Nisan
47 https://en.wikipedia.org/wiki/Doi_(identifier)
48 https://doi.org/10.1145%2F129712.129772
49 https://en.wikipedia.org/wiki/Omer_Reingold
50 https://en.wikipedia.org/wiki/Luca_Trevisan
51 https://en.wikipedia.org/wiki/Salil_Vadhan
52 http://people.seas.harvard.edu/~salil/research/regular.pdf
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1145%2F1132516.1132583
55 https://en.wikipedia.org/wiki/MR_(identifier)
56 http://www.ams.org/mathscinet-getitem?mr=2277171

142 Big O notation



Figure 342 Example of Big O notation: f(x) ∈ O(g(x)) as there exists c > 0 (e.g., c = 1)
and x0 (e.g., x0 = 5) such that f(x) ≤ cg(x) whenever x ≥ x0 .

Big O notation is a mathematical notation that describes the limiting behavior1 of a


function2 when the argument3 tends towards a particular value or infinity. Big O is a
member of a family of notations invented by Paul Bachmann4 ,[1] Edmund Landau5 ,[2] and
others, collectively called Bachmann–Landau notation or asymptotic notation.

1 https://en.wikipedia.org/wiki/Asymptotic_analysis
2 https://en.wikipedia.org/wiki/Function_(mathematics)
3 https://en.wikipedia.org/wiki/Argument_of_a_function
4 https://en.wikipedia.org/wiki/Paul_Gustav_Heinrich_Bachmann
5 https://en.wikipedia.org/wiki/Edmund_Landau


In computer science6 , big O notation is used to classify algorithms7 according to how their
run time or space requirements grow as the input size grows.[3] In analytic number theory8 ,
big O notation is often used to express a bound on the difference between an arithmetical
function9 and a better understood approximation; a famous example of such a difference is
the remainder term in the prime number theorem10 .
Big O notation characterizes functions according to their growth rates: different functions
with the same growth rate may be represented using the same O notation.
The letter O is used because the growth rate of a function is also referred to as the order of
the function. A description of a function in terms of big O notation usually only provides
an upper bound11 on the growth rate of the function. Associated with big O notation are
several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of
bounds on asymptotic growth rates.
Big O notation is also used in many other fields to provide similar estimates.

142.1 Formal definition

Let f be a real or complex valued function and g a real valued function. Let both functions
be defined on some unbounded subset of the real positive numbers12 , and g(x) be strictly
positive for all large enough values of x.[4] One writes
f (x) = O(g(x)) as x → ∞
if and only if13 for all sufficiently large values of x, the absolute value of f(x) is at most
a positive constant multiple of g(x). That is, f(x) = O(g(x)) if and only if there exists a
positive real number M and a real number x0 such that
|f (x)| ≤ M g(x) for all x ≥ x0 .
In many contexts, the assumption that we are interested in the growth rate as the variable
x goes to infinity is left unstated, and one writes more simply that
f (x) = O(g(x)).
The notation can also be used to describe the behavior of f near some real number a (often,
a = 0): we say
f (x) = O(g(x)) as x → a
if and only if there exist positive numbers δ and M such that
|f (x)| ≤ M g(x) when 0 < |x − a| < δ.

6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Computational_complexity_theory
8 https://en.wikipedia.org/wiki/Analytic_number_theory
9 https://en.wikipedia.org/wiki/Arithmetic_function
10 https://en.wikipedia.org/wiki/Prime_number_theorem
11 https://en.wikipedia.org/wiki/Upper_bound
12 https://en.wikipedia.org/wiki/Real_number
13 https://en.wikipedia.org/wiki/If_and_only_if


As g(x) is chosen to be non-zero for values of x sufficiently close14 to a, both of these


definitions can be unified using the limit superior15 :
f (x) = O(g(x)) as x → a
if and only if

lim supx→a |f (x)|/g(x) < ∞.

142.2 Example

In typical usage the O notation is asymptotical, that is, it refers to very large x. In this
setting, the contribution of the terms that grow ”most quickly” will eventually make the
other ones irrelevant. As a result, the following simplification rules can be applied:
• If f(x) is a sum of several terms, if there is one with largest growth rate, it can be kept,
and all others omitted.
• If f(x) is a product of several factors, any constants (terms in the product that do not
depend on x) can be omitted.
For example, let f(x) = 6x4 − 2x3 + 5, and suppose we wish to simplify this function, using
O notation, to describe its growth rate as x approaches infinity. This function is the sum of
three terms: 6x4 , −2x3 , and 5. Of these three terms, the one with the highest growth rate
is the one with the largest exponent as a function of x, namely 6x4 . Now one may apply
the second rule: 6x4 is a product of 6 and x4 in which the first factor does not depend on
x. Omitting this factor results in the simplified form x4 . Thus, we say that f(x) is a ”big-
oh” of (x4 ). Mathematically, we can write f(x) = O(x4 ). One may confirm this calculation
using the formal definition: let f(x) = 6x4 − 2x3 + 5 and g(x) = x4 . Applying the formal
definition16 from above, the statement that f(x) = O(x4 ) is equivalent to its expansion,
|f (x)| ≤ M x4
for some suitable choice of x0 and M and for all x > x0 . To prove this, let x0 = 1 and
M = 13. Then, for all x > x0 :
|6x4 − 2x3 + 5| ≤ 6x4 + |2x3 | + 5
≤ 6x4 + 2x4 + 5x4
= 13x4
so
|6x4 − 2x3 + 5| ≤ 13 x4 .
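The constants x0 = 1 and M = 13 chosen above can also be checked numerically; this is only a spot check of the inequality at sample points, not a proof:

```python
f = lambda x: 6 * x**4 - 2 * x**3 + 5
g = lambda x: x**4
M, x0 = 13, 1

# Spot-check |f(x)| <= M*g(x) at sample points with x > x0.
samples = [1.5, 2, 10, 100, 1e6]
assert all(abs(f(x)) <= M * g(x) for x in samples)
print("bound holds at all samples")
```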

14 https://en.wikipedia.org/wiki/Mathematical_jargon#sufficiently_large
15 https://en.wikipedia.org/wiki/Limit_superior
16 #Formal_definition


142.3 Usage

Big O notation has two main areas of application:


• In mathematics17 , it is commonly used to describe how closely a finite series approximates
a given function18 , especially in the case of a truncated Taylor series19 or asymptotic
expansion20
• In computer science21 , it is useful in the analysis of algorithms22
In both applications, the function g(x) appearing within the O(...) is typically chosen to be
as simple as possible, omitting constant factors and lower order terms.
There are two formally close, but noticeably different, usages of this notation:
• infinite23 asymptotics
• infinitesimal24 asymptotics.
This distinction is only in application and not in principle, however—the formal definition
for the ”big O” is the same for both cases, only with different limits for the function argument.

17 https://en.wikipedia.org/wiki/Mathematics
18 https://en.wikipedia.org/wiki/Big_O_notation#Infinitesimal_asymptotics
19 https://en.wikipedia.org/wiki/Taylor_series
20 https://en.wikipedia.org/wiki/Asymptotic_expansion
21 https://en.wikipedia.org/wiki/Computer_science
22 https://en.wikipedia.org/wiki/Big_O_notation#Infinite_asymptotics
23 https://en.wikipedia.org/wiki/Infinity
24 https://en.wikipedia.org/wiki/Infinitesimal


142.3.1 Infinite asymptotics

Figure 343 Graphs of functions commonly used in the analysis of algorithms, showing
the number of operations N versus input size n for each function

Big O notation is useful when analyzing algorithms25 for efficiency. For example, the time
(or the number of steps) it takes to complete a problem of size n might be found to be T(n)
= 4n2 − 2n + 2. As n grows large, the n2 term26 will come to dominate, so that all other
terms can be neglected—for instance when n = 500, the term 4n2 is 1000 times as large
as the 2n term. Ignoring the latter would have negligible effect on the expression's value
for most purposes. Further, the coefficients27 become irrelevant if we compare to any other

25 https://en.wikipedia.org/wiki/Analysis_of_algorithms
26 https://en.wikipedia.org/wiki/Term_(mathematics)
27 https://en.wikipedia.org/wiki/Coefficient


order28 of expression, such as an expression containing a term n3 or n4 . Even if T(n) =


1,000,000n2 , if U(n) = n3 , the latter will always exceed the former once n grows larger than
1,000,000 (T(1,000,000) = 1,000,0003 = U(1,000,000)). Additionally, the number of steps
depends on the details of the machine model on which the algorithm runs, but different
types of machines typically vary by only a constant factor in the number of steps needed
to execute an algorithm. So the big O notation captures what remains: we write either
T (n) = O(n2 )
or
T (n) ∈ O(n2 )
and say that the algorithm has order of n2 time complexity. The sign ”=” is not meant to
express ”is equal to” in its normal mathematical sense, but rather a more colloquial ”is”,
so the second expression is sometimes considered more accurate (see the ”Equals sign29 ”
discussion below) while the first is considered by some as an abuse of notation30 .[5]
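The dominance argument above can be reproduced directly (an illustrative sketch; T is the example running time from the text):

```python
def T(n):
    # Example running time from the text: T(n) = 4n^2 - 2n + 2.
    return 4 * n**2 - 2 * n + 2

n = 500
quadratic, linear = 4 * n**2, 2 * n
print(quadratic // linear)   # 1000: at n = 500 the 4n^2 term is 1000x the 2n term
print(T(n) / n**2)           # ~3.996: T(n) stays within a constant factor of n^2
```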

142.3.2 Infinitesimal asymptotics

Big O can also be used to describe the error term31 in an approximation to a mathematical
function. The most significant terms are written explicitly, and then the least-significant
terms are summarized in a single big O term. Consider, for example, the exponential series32
and two expressions of it that are valid when x is small:
ex = 1 + x + x2 /2! + x3 /3! + x4 /4! + · · ·   for all x
   = 1 + x + x2 /2 + O(x3 )   as x → 0
   = 1 + x + O(x2 )   as x → 0
The second expression (the one with O(x3 )) means the absolute-value of the error ex − (1
+ x + x2 /2) is at most some constant times |x3 | when x is close enough to 0.
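One can probe this claim numerically: if the error is O(x3 ) as x → 0, the ratio |ex − (1 + x + x2 /2)| / |x|3 must stay bounded (in fact it tends to 1/3! = 1/6). A sketch:

```python
import math

def remainder_ratio(x):
    # |e^x - (1 + x + x^2/2)| / |x|^3 is bounded near 0
    # exactly when the error term is O(x^3).
    return abs(math.exp(x) - (1 + x + x**2 / 2)) / abs(x) ** 3

for x in [0.1, 0.01, 0.001]:
    print(x, remainder_ratio(x))   # ratios approach 1/6 ≈ 0.167 as x shrinks
```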

142.4 Properties

If the function f can be written as a finite sum of other functions, then the fastest growing
one determines the order of f(n). For example,
f (n) = 9 log n + 5(log n)4 + 3n2 + 2n3 = O(n3 ) as n → ∞.
In particular, if a function may be bounded by a polynomial in n, then as n tends to infinity,
one may disregard lower-order terms of the polynomial. The sets O(nc ) and O(cn ) are very
different. If c is greater than one, then the latter grows much faster. A function that

28 https://en.wikipedia.org/wiki/Orders_of_approximation
29 #Equals_sign
30 https://en.wikipedia.org/wiki/Abuse_of_notation
31 https://en.wikipedia.org/wiki/Taylor_series#Approximation_error_and_convergence
32 https://en.wikipedia.org/wiki/Exponential_function#Formal_definition


grows faster than nc for any c is called superpolynomial. One that grows more slowly than
any exponential function of the form cn is called subexponential. An algorithm can require
time that is both superpolynomial and subexponential; examples of this include the fastest
known algorithms for integer factorization33 and the function nlog n .
We may ignore any powers of n inside of the logarithms. The set O(log n) is exactly the
same as O(log(nc )). The logarithms differ only by a constant factor (since log(nc ) = c log
n) and thus the big O notation ignores that. Similarly, logs with different constant bases
are equivalent. On the other hand, exponentials with different bases are not of the same
order. For example, 2n and 3n are not of the same order.
Changing units may or may not affect the order of the resulting algorithm. Changing units
is equivalent to multiplying the appropriate variable by a constant wherever it appears. For
example, if an algorithm runs in the order of n2 , replacing n by cn means the algorithm
runs in the order of c2 n2 , and the big O notation ignores the constant c2 . This can be
written as c2 n2 = O(n2 ). If, however, an algorithm runs in the order of 2n , replacing n with
cn gives 2cn = (2c )n . This is not equivalent to 2n in general. Changing variables may also
affect the order of the resulting algorithm. For example, if an algorithm's run time is O(n)
when measured in terms of the number n of digits of an input number x, then its run time
is O(log x) when measured as a function of the input number x itself, because n = O(log x).

142.4.1 Product

f1 = O(g1 ) and f2 = O(g2 ) ⇒ f1 f2 = O(g1 g2 )


f · O(g) = O(f g)

142.4.2 Sum

f1 = O(g1 ) and f2 = O(g2 ) ⇒ f1 + f2 = O(max(g1 , g2 ))


This implies f1 = O(g) and f2 = O(g) ⇒ f1 + f2 ∈ O(g), which means that O(g) is a convex
cone34 .

142.4.3 Multiplication by a constant

Let k be constant. Then:


O(|k|g) = O(g) if k is nonzero.
f = O(g) ⇒ kf = O(g).

33 https://en.wikipedia.org/wiki/Integer_factorization
34 https://en.wikipedia.org/wiki/Convex_cone


142.5 Multiple variables

Big O (and little o, Ω, etc.) can also be used with multiple variables. To define big
O formally for multiple variables, suppose f and g are two functions defined on some subset
of Rn . We say
f (⃗x) is O(g(⃗x)) as ⃗x → ∞
if and only if[6]
∃M ∃C > 0 such that for all ⃗x with xi ≥ M for some i, |f (⃗x)| ≤ C|g(⃗x)|.
Equivalently, the condition that xi ≥ M for some i can be replaced with the condition that
∥⃗x∥∞ ≥ M , where ∥⃗x∥∞ denotes the Chebyshev norm35 . For example, the statement
f (n, m) = n2 + m3 + O(n + m) as n, m → ∞
asserts that there exist constants C and M such that
∀∥(n, m)∥∞ ≥ M : |g(n, m)| ≤ C|n + m|,
where g(n,m) is defined by
f (n, m) = n2 + m3 + g(n, m).
This definition allows all of the coordinates of ⃗x to increase to infinity. In particular, the
statement
f (n, m) = O(nm ) as n, m → ∞
(i.e., ∃C ∃M ∀n ∀m . . .) is quite different from
∀m : f (n, m) = O(nm ) as n → ∞
(i.e., ∀m ∃C ∃M ∀n . . .).
Under this definition, the subset on which a function is defined is significant when gener-
alizing statements from the univariate setting to the multivariate setting. For example, if
f (n, m) = 1 and g(n, m) = n, then f (n, m) = O(g(n, m)) if we restrict f and g to [1, ∞)2 ,
but not if they are defined on [0, ∞)2 .
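A small numeric check (our sketch) of the domain-dependence example with f(n, m) = 1 and g(n, m) = n: the bound |f| ≤ C|g| holds with C = 1 whenever n ≥ 1, but fails at every point with n = 0, no matter how large m is:

```python
# f(n, m) = 1, g(n, m) = n.  With C = 1, |f| <= C*|g| holds whenever n >= 1,
# so f = O(g) on [1, inf)^2 -- but the bound fails wherever n = 0.
f = lambda n, m: 1
g = lambda n, m: n

C = 1
on_restricted = all(abs(f(n, m)) <= C * abs(g(n, m))
                    for n in range(1, 50) for m in range(1, 50))
at_n_zero = abs(f(0, 7)) <= C * abs(g(0, 7))
print(on_restricted, at_n_zero)   # True False
```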
This is not the only generalization of big O to multivariate functions, and in practice, there
is some inconsistency in the choice of definition.[7]

142.6 Matters of notation

142.6.1 Equals sign

The statement ”f(x) is O(g(x))” as defined above is usually written as f(x) = O(g(x)). Some
consider this to be an abuse of notation36 , since the use of the equals sign could be mis-

35 https://en.wikipedia.org/wiki/Chebyshev_norm
36 https://en.wikipedia.org/wiki/Abuse_of_notation


leading as it suggests a symmetry that this statement does not have. As de Bruijn37 says,
O(x) = O(x2 ) is true but O(x2 ) = O(x) is not.[8] Knuth38 describes such statements as ”one-
way equalities”, since if the sides could be reversed, ”we could deduce ridiculous things like
n = n2 from the identities n = O(n2 ) and n2 = O(n2 ).”[9]
For these reasons, it would be more precise to use set notation39 and write f(x) ∈ O(g(x)),
thinking of O(g(x)) as the class of all functions h(x) such that |h(x)| ≤ C|g(x)| for some
constant C.[9] However, the use of the equals sign is customary. Knuth pointed out that
”mathematicians customarily use the = sign as they use the word 'is' in English: Aristotle
is a man, but a man isn't necessarily Aristotle.”[10]

142.6.2 Other arithmetic operators

Big O notation can also be used in conjunction with other arithmetic operators in more
complicated equations. For example, h(x) + O(f(x)) denotes the collection of functions
having the growth of h(x) plus a part whose growth is limited to that of f(x). Thus,
g(x) = h(x) + O(f (x))
expresses the same as
g(x) − h(x) = O(f (x)) .

Example

Suppose an algorithm40 is being developed to operate on a set of n elements. Its developers


are interested in finding a function T(n) that will express how long the algorithm will take
to run (in some arbitrary measurement of time) in terms of the number of elements in the
input set. The algorithm works by first calling a subroutine to sort the elements in the
set and then perform its own operations. The sort has a known time complexity of O(n2 ),
and after the subroutine runs the algorithm must take an additional 55n3 + 2n + 10 steps
before it terminates. Thus the overall time complexity of the algorithm can be expressed
as T(n) = 55n3 + O(n2 ). Here the terms 2n+10 are subsumed within the faster-growing
O(n2 ). Again, this usage disregards some of the formal meaning of the ”=” symbol, but it
does allow one to use the big O notation as a kind of convenient placeholder.

142.6.3 Multiple uses

In more complicated usage, O(...) can appear in different places in an equation, even several
times on each side. For example, the following are true for n → ∞
(n + 1)2 = n2 + O(n)
(n + O(n1/2 ))(n + O(log n))2 = n3 + O(n5/2 )

37 https://en.wikipedia.org/wiki/Nicolaas_Govert_de_Bruijn
38 https://en.wikipedia.org/wiki/Donald_Knuth
39 https://en.wikipedia.org/wiki/Set_notation
40 https://en.wikipedia.org/wiki/Algorithm


nO(1) = O(en ).
The meaning of such statements is as follows: for any functions which satisfy each O(...)
on the left side, there are some functions satisfying each O(...) on the right side, such that
substituting all these functions into the equation makes the two sides equal. For example,
the third equation above means: ”For any function f(n) = O(1), there is some function
g(n) = O(en ) such that nf(n) = g(n).” In terms of the ”set notation” above, the meaning is
that the class of functions represented by the left side is a subset of the class of functions
represented by the right side. In this use the ”=” is a formal symbol that unlike the usual
use of ”=” is not a symmetric relation41 . Thus for example nO(1) = O(en ) does not imply
the false statement O(en ) = nO(1) .

142.6.4 Typesetting

Big O consists of just an uppercase ”O”. Unlike Greek-named Bachmann–Landau notations,


it needs no special symbol. Yet, commonly used calligraphic variants, like O, are available
in LaTeX42 and derived typesetting systems.[11]

142.7 Orders of common functions

Further information: Time complexity § Table of common time complexities43

Here is a list of classes of functions that are commonly encountered when analyzing the
running time of an algorithm. In each case, c is a positive constant and n increases without
bound. The slower-growing functions are generally listed first.

Notation | Name | Example

O(1) | constant44 | Determining if a binary number is even or odd; calculating (−1)n ; using a constant-size lookup table45
O(log log n) | double logarithmic | Number of comparisons spent finding an item using interpolation search46 in a sorted array of uniformly distributed values
O(log n) | logarithmic47 | Finding an item in a sorted array with a binary search48 or a balanced search tree49 as well as all operations in a binomial heap50

41 https://en.wikipedia.org/wiki/Symmetric_relation
42 https://en.wikipedia.org/wiki/LaTeX
43 https://en.wikipedia.org/wiki/Time_complexity#Table_of_common_time_complexities
44 https://en.wikipedia.org/wiki/Constant_time
45 https://en.wikipedia.org/wiki/Lookup_table
46 https://en.wikipedia.org/wiki/Interpolation_search
47 https://en.wikipedia.org/wiki/Logarithmic_time
48 https://en.wikipedia.org/wiki/Binary_search_algorithm
49 https://en.wikipedia.org/wiki/Tree_data_structure
50 https://en.wikipedia.org/wiki/Binomial_heap


O((log n)c ), c > 1 | polylogarithmic51 | Matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine52
O(nc ), 0 < c < 1 | fractional power | Searching in a k-d tree53
O(n) | linear54 | Finding an item in an unsorted list or in an unsorted array; adding two n-bit integers by ripple carry55
O(n log∗ n) | n log-star56 n | Performing triangulation57 of a simple polygon using Seidel's algorithm58 , or the union–find algorithm59 . Note that log∗ (n) = 0 if n ≤ 1, and log∗ (n) = 1 + log∗ (log n) if n > 1
O(n log n) = O(log n!) | linearithmic60 , loglinear, quasilinear, or ”n log n” | Performing a fast Fourier transform61 ; fastest possible comparison sort62 ; heapsort63 and merge sort64
O(n2 ) | quadratic65 | Multiplying two n-digit numbers by a simple algorithm; simple sorting algorithms, such as bubble sort66 , selection sort67 and insertion sort68 ; (worst case) bound on some usually faster sorting algorithms such as quicksort69 , Shellsort70 , and tree sort71

51 https://en.wikipedia.org/wiki/Polylogarithmic_time
52 https://en.wikipedia.org/wiki/Parallel_random-access_machine
53 https://en.wikipedia.org/wiki/K-d_tree
54 https://en.wikipedia.org/wiki/Linear_time
55 https://en.wikipedia.org/wiki/Ripple_carry_adder
56 https://en.wikipedia.org/wiki/Log-star
57 https://en.wikipedia.org/wiki/Polygon_triangulation
58 https://en.wikipedia.org/wiki/Kirkpatrick%E2%80%93Seidel_algorithm
59 https://en.wikipedia.org/wiki/Proof_of_O(log*n)_time_complexity_of_union%E2%80%93find
60 https://en.wikipedia.org/wiki/Linearithmic_time
61 https://en.wikipedia.org/wiki/Fast_Fourier_transform
62 https://en.wikipedia.org/wiki/Comparison_sort
63 https://en.wikipedia.org/wiki/Heapsort
64 https://en.wikipedia.org/wiki/Merge_sort
65 https://en.wikipedia.org/wiki/Quadratic_time
66 https://en.wikipedia.org/wiki/Bubble_sort
67 https://en.wikipedia.org/wiki/Selection_sort
68 https://en.wikipedia.org/wiki/Insertion_sort
69 https://en.wikipedia.org/wiki/Quicksort
70 https://en.wikipedia.org/wiki/Shellsort
71 https://en.wikipedia.org/wiki/Tree_sort


Notation; Name; Example

• O(n^c): polynomial72 or algebraic. Tree-adjoining grammar73 parsing; maximum matching74 for bipartite graphs75; finding the determinant76 with LU decomposition77.
• L_n[α, c] = e^((c+o(1)) (ln n)^α (ln ln n)^(1−α)), 0 < α < 1: L-notation78 or sub-exponential79. Factoring a number using the quadratic sieve80 or number field sieve81.
• O(c^n), c > 1: exponential82. Finding the (exact) solution to the travelling salesman problem83 using dynamic programming84; determining if two logical statements are equivalent using brute-force search85.
• O(n!): factorial86. Solving the travelling salesman problem87 via brute-force search; generating all unrestricted permutations of a poset88; finding the determinant89 with Laplace expansion90; enumerating all partitions of a set91.

The statement f(n) = O(n!) is sometimes weakened to f(n) = O(n^n) to derive simpler formulas for asymptotic complexity. For any k > 0 and c > 0, O(n^c (log n)^k) is a subset of O(n^(c+ε)) for any ε > 0, so it may be considered a polynomial of slightly larger order.

142.8 Related asymptotic notations

Big O is the most commonly used asymptotic notation for comparing functions.[citation needed] Together with some other related notations, it forms the family of Bachmann–Landau notations.

72 https://en.wikipedia.org/wiki/Polynomial_time
73 https://en.wikipedia.org/wiki/Tree-adjoining_grammar
74 https://en.wikipedia.org/wiki/Matching_(graph_theory)
75 https://en.wikipedia.org/wiki/Bipartite_graph
76 https://en.wikipedia.org/wiki/Determinant
77 https://en.wikipedia.org/wiki/LU_decomposition
78 https://en.wikipedia.org/wiki/L-notation
79 https://en.wikipedia.org/wiki/Sub-exponential_time
80 https://en.wikipedia.org/wiki/Quadratic_sieve
81 https://en.wikipedia.org/wiki/Number_field_sieve
82 https://en.wikipedia.org/wiki/Exponential_time
83 https://en.wikipedia.org/wiki/Travelling_salesman_problem
84 https://en.wikipedia.org/wiki/Dynamic_programming
85 https://en.wikipedia.org/wiki/Brute-force_search
86 https://en.wikipedia.org/wiki/Factorial
87 https://en.wikipedia.org/wiki/Travelling_salesman_problem
88 https://en.wikipedia.org/wiki/Partially_ordered_set
89 https://en.wikipedia.org/wiki/Determinant
90 https://en.wikipedia.org/wiki/Laplace_expansion
91 https://en.wikipedia.org/wiki/Bell_number


142.8.1 Little-o notation

”Little o” redirects here. For the baseball player, see Omar Vizquel93.
Intuitively, the assertion ”f(x) is o(g(x))” (read ”f(x) is little-o of g(x)”) means that g(x) grows much faster than f(x). Let, as before, f be a real or complex valued function and g a real valued function, both defined on some unbounded subset of the real positive numbers94, such that g(x) is strictly positive for all large enough values of x. One writes
f (x) = o(g(x)) as x → ∞
if for every positive constant ε there exists a constant N such that
|f (x)| ≤ εg(x) for all x ≥ N .[12]
For example, one has
2x = o(x²) and 1/x = o(1).
The difference between the earlier definition95 of big-O notation and the present definition of little-o is that while the former has to be true for at least one constant M, the latter must hold for every positive constant ε, however small.[13] In this way, little-o notation makes a stronger statement than the corresponding big-O notation: every function that is little-o of g is also big-O of g, but not every function that is big-O of g is also little-o of g.
For example, 2x² = O(x²) but 2x² ≠ o(x²).
As g(x) is nonzero, or at least becomes nonzero beyond a certain point, the relation f(x) = o(g(x)) is equivalent to
lim_{x→∞} f(x)/g(x) = 0 (and this is in fact how Landau[12] originally defined the little-o notation).

Little-o respects a number of arithmetic operations. For example,


if c is a nonzero constant and f = o(g) then c · f = o(g), and
if f = o(F ) and g = o(G) then f · g = o(F · G).
It also satisfies a transitivity relation:
if f = o(g) and g = o(h) then f = o(h).
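The limit characterization above can be illustrated numerically: when f = o(g) the sampled ratios f(x)/g(x) shrink toward 0, while for a function that is big-O but not little-o of g they stall at a constant. A minimal sketch (illustrative sampling only, not a proof):

```python
def ratios(f, g, xs):
    """Sample the ratio f(x)/g(x), which must tend to 0 when f = o(g)."""
    return [f(x) / g(x) for x in xs]

xs = [10, 100, 1000, 10_000]

# 2x = o(x^2): the ratio 2x/x^2 = 2/x shrinks toward 0
small_o = ratios(lambda x: 2 * x, lambda x: x ** 2, xs)
print(small_o)       # [0.2, 0.02, 0.002, 0.0002]

# 2x^2 = O(x^2) but 2x^2 != o(x^2): the ratio is stuck at the constant 2
not_small_o = ratios(lambda x: 2 * x ** 2, lambda x: x ** 2, xs)
print(not_small_o)   # [2.0, 2.0, 2.0, 2.0]
```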

142.8.2 Big Omega notation

There are two very widespread and incompatible definitions of the statement
f (x) = Ω(g(x)) (x → a),
where a is some real number, ∞, or −∞, where f and g are real functions defined in a
neighbourhood of a, and where g is positive in this neighbourhood.

93 https://en.wikipedia.org/wiki/Omar_Vizquel
94 https://en.wikipedia.org/wiki/Real_number
95 #Formal_definition


The first one (chronologically) is used in analytic number theory96 , and the other one in
computational complexity theory97 . When the two subjects meet, this situation is bound
to generate confusion.

The Hardy–Littlewood definition

In 1914 Godfrey Harold Hardy98 and John Edensor Littlewood99 introduced the new symbol
Ω,[14] which is defined as follows:

f(x) = Ω(g(x)) (x → ∞) ⇔ lim sup_{x→∞} |f(x)|/g(x) > 0.

Thus f (x) = Ω(g(x)) is the negation of f (x) = o(g(x)).


In 1916 the same authors introduced the two new symbols ΩR and ΩL, defined as:[15]
f(x) = ΩR(g(x)) (x → ∞) ⇔ lim sup_{x→∞} f(x)/g(x) > 0;
f(x) = ΩL(g(x)) (x → ∞) ⇔ lim inf_{x→∞} f(x)/g(x) < 0.
These symbols were used by Edmund Landau100, with the same meanings, in 1924.[16] After Landau, the notations were never used again exactly thus; ΩR became Ω+ and ΩL became Ω−.[citation needed]
These three symbols Ω, Ω+, Ω−, as well as f(x) = Ω±(g(x)) (meaning that f(x) = Ω+(g(x)) and f(x) = Ω−(g(x)) are both satisfied), are now used in analytic number theory102.[17][18]

Simple examples
We have
sin x = Ω(1) (x → ∞),
and more precisely
sin x = Ω± (1) (x → ∞).
We have
sin x + 1 = Ω(1) (x → ∞),
and more precisely
sin x + 1 = Ω+ (1) (x → ∞);
however

96 https://en.wikipedia.org/wiki/Analytic_number_theory
97 https://en.wikipedia.org/wiki/Computational_complexity_theory
98 https://en.wikipedia.org/wiki/Godfrey_Harold_Hardy
99 https://en.wikipedia.org/wiki/John_Edensor_Littlewood
100 https://en.wikipedia.org/wiki/Edmund_Landau
102 https://en.wikipedia.org/wiki/Analytic_number_theory


sin x + 1 ≠ Ω−(1) (x → ∞).
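These statements about sin x can be observed numerically. The sketch below samples sin on a grid; a finite sample can of course only suggest, not prove, the lim sup and lim inf:

```python
import math

xs = [k * 0.1 for k in range(1, 200_000)]
sin_vals = [math.sin(x) for x in xs]

# lim sup sin x = 1 > 0 and lim inf sin x = -1 < 0, matching
# sin x = Omega_+(1) and sin x = Omega_-(1): values near +1 and -1
# recur infinitely often.
print(max(sin_vals), min(sin_vals))

# sin x + 1 never drops below 0, so it cannot be Omega_-(1):
# its lim inf is 0, not negative.
print(min(v + 1 for v in sin_vals))
```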

The Knuth definition

In 1976 Donald Knuth103 published a paper to justify his use of the Ω-symbol to describe
a stronger property. Knuth wrote: ”For all the applications I have seen so far in computer
science, a stronger requirement […] is much more appropriate”. He defined
f (x) = Ω(g(x)) ⇔ g(x) = O(f (x))
with the comment: ”Although I have changed Hardy and Littlewood's definition of Ω, I feel
justified in doing so because their definition is by no means in wide use, and because there
are other ways to say what they want to say in the comparatively rare cases when their
definition applies.”[19]

142.8.3 Family of Bachmann–Landau notations

Notation; Name[19]; Description; Formal definition[20][21][22][19][14]; Limit definition

• f(n) = O(g(n)). Big O; Big Oh; Big Omicron. |f| is bounded above by g (up to constant factor) asymptotically. Formal definition: ∃k > 0 ∃n0 ∀n > n0 : |f(n)| ≤ k·g(n). Limit definition: lim sup_{n→∞} |f(n)|/g(n) < ∞.
• f(n) = Θ(g(n)). Big Theta. f is bounded both above and below by g asymptotically. Formal definition: ∃k1 > 0 ∃k2 > 0 ∃n0 ∀n > n0 : k1·g(n) ≤ f(n) ≤ k2·g(n). Limit definition: f(n) = O(g(n)) and f(n) = Ω(g(n)) (Knuth version).
• f(n) = Ω(g(n)) (Knuth). Big Omega in complexity theory. f is bounded below by g asymptotically. Formal definition: ∃k > 0 ∃n0 ∀n > n0 : f(n) ≥ k·g(n). Limit definition: lim inf_{n→∞} f(n)/g(n) > 0.
• f(n) = o(g(n)). Small O; Small Oh. f is dominated by g asymptotically. Formal definition: ∀k > 0 ∃n0 ∀n > n0 : |f(n)| < k·g(n). Limit definition: lim_{n→∞} f(n)/g(n) = 0.
• f(n) ∼ g(n). On the order of. f is equal to g asymptotically. Formal definition: ∀ε > 0 ∃n0 ∀n > n0 : |f(n)/g(n) − 1| < ε. Limit definition: lim_{n→∞} f(n)/g(n) = 1.
• f(n) = ω(g(n)). Small Omega. f dominates g asymptotically. Formal definition: ∀k > 0 ∃n0 ∀n > n0 : |f(n)| > k·|g(n)|. Limit definition: lim_{n→∞} f(n)/g(n) = ∞.
• f(n) = Ω(g(n)) (Hardy–Littlewood). Big Omega in number theory. |f| is not dominated by g asymptotically. Formal definition: ∃k > 0 ∀n0 ∃n > n0 : |f(n)| ≥ k·g(n). Limit definition: lim sup_{n→∞} |f(n)|/g(n) > 0.

The limit definitions assume g(n) ≠ 0 for sufficiently large n. The table is (partly) sorted from smallest to largest, in the sense that o, O, Θ, ∼, (Knuth's version of) Ω, ω on functions correspond to <, ≤, ≈, =, ≥, > on the real line[22] (the Hardy–Littlewood version of Ω, however, doesn't correspond to any such description).
Computer science uses the big O, big Theta Θ, little o, little omega ω and Knuth's big Omega
Ω notations.[23] Analytic number theory often uses the big O, small o, Hardy–Littlewood's
big Omega Ω (with or without the +, - or ± subscripts) and ∼ notations.[17] The small
omega ω notation is not used as often in analysis.[24]
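The limit column of the table suggests a crude numerical classifier. The helper below is a hypothetical sketch: it samples f(n)/g(n) at a single large n with arbitrary thresholds, so it is a heuristic for simple monomials rather than a proof of any asymptotic relation:

```python
def classify(f, g, n=10 ** 8):
    """Rough numerical reading of the limit column above:
    a vanishing ratio suggests o, an exploding ratio suggests omega,
    anything in between suggests Theta (heuristic, not a proof)."""
    r = f(n) / g(n)
    if r < 1e-6:
        return "f = o(g)"
    if r > 1e6:
        return "f = omega(g)"
    return "f = Theta(g)"

print(classify(lambda n: n, lambda n: n * n))          # f = o(g)
print(classify(lambda n: 5 * n * n, lambda n: n * n))  # f = Theta(g)
print(classify(lambda n: n ** 3, lambda n: n * n))     # f = omega(g)
```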

103 https://en.wikipedia.org/wiki/Donald_Knuth


142.8.4 Use in computer science

Further information: Analysis of algorithms104
Informally, especially in computer science, the big O notation often can be used somewhat differently to describe an asymptotically tight105 bound where using big Theta Θ notation might be more factually appropriate in a given context.[citation needed] For example, when considering a function T(n) = 73n³ + 22n² + 58, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below).
1. T(n) = O(n^100)
2. T(n) = O(n³)
3. T(n) = Θ(n³)
The equivalent English statements are respectively:
1. T(n) grows asymptotically no faster than n^100
2. T(n) grows asymptotically no faster than n³
3. T(n) grows asymptotically as fast as n³.
So while all three statements are true, progressively more information is contained in each.
In some fields, however, the big O notation (number 2 in the lists above) would be used more
commonly than the big Theta notation (items numbered 3 in the lists above). For example,
if T(n) represents the running time of a newly developed algorithm for input size n, the
inventors and users of the algorithm might be more inclined to put an upper asymptotic
bound on how long it will take to run without making an explicit statement about the lower
asymptotic bound.
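For this T(n) the ratio T(n)/n³ tends to the leading coefficient 73, which is exactly why Θ(n³) is the tight bound; a quick numerical check:

```python
def T(n):
    # The example function from the text: T(n) = 73n^3 + 22n^2 + 58
    return 73 * n ** 3 + 22 * n ** 2 + 58

# As n grows, the lower-order terms 22n^2 + 58 become negligible
# and the ratio T(n)/n^3 approaches 73.
for n in (10, 1_000, 100_000):
    print(n, T(n) / n ** 3)
```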

142.8.5 Other notation

In their book Introduction to Algorithms107, Cormen108, Leiserson109, Rivest110 and Stein111 consider the set of functions f which satisfy
f(n) = O(g(n)) (n → ∞).
In a correct notation this set can, for instance, be called O(g), where
O(g) = {f : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}.[25]
The authors state that the use of the equality operator (=) to denote set membership rather than the set-membership operator (∈) is an abuse of notation, but that doing so has advantages.[5] Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the set O(g), which eliminates lower-order terms and helps to reduce inessential clutter in equations, for example:[26]

104 https://en.wikipedia.org/wiki/Analysis_of_algorithms
105 https://en.wikipedia.org/wiki/Upper_and_lower_bounds#Tight_bounds
107 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
108 https://en.wikipedia.org/wiki/Thomas_H._Cormen
109 https://en.wikipedia.org/wiki/Charles_E._Leiserson
110 https://en.wikipedia.org/wiki/Ronald_L._Rivest
111 https://en.wikipedia.org/wiki/Clifford_Stein


2n² + 3n + 1 = 2n² + O(n).
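The set definition of O(g) above can be tested directly on a finite sample. `in_big_o` below is a hypothetical helper, not part of any standard library; note that finite sampling can refute a claimed witness (c, n0) but only suggest membership:

```python
def in_big_o(f, g, c, n0, samples=range(1, 1001)):
    """Finite-sample check of the set definition
    O(g) = {f : exist c > 0, n0 with 0 <= f(n) <= c*g(n) for all n >= n0}."""
    return all(0 <= f(n) <= c * g(n) for n in samples if n >= n0)

# 3n + 1 is in O(n), witnessed by c = 4, n0 = 1, since 3n + 1 <= 4n for n >= 1
print(in_big_o(lambda n: 3 * n + 1, lambda n: n, c=4, n0=1))   # True
# n^2 is not in O(n): no fixed c works; c = 100 already fails at n = 101
print(in_big_o(lambda n: n * n, lambda n: n, c=100, n0=1))     # False
```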

142.8.6 Extensions to the Bachmann–Landau notations

Another notation sometimes used in computer science is Õ (read soft-O): f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n) log^k g(n)) for some k.[27] Essentially, it is big O notation ignoring logarithmic factors, because the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large input sizes that matters more for predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). This notation is often used to obviate the ”nitpicking” within growth rates that are stated as too tightly bounded for the matters at hand (since log^k n is always o(n^ε) for any constant k and any ε > 0).
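The parenthetical claim that log^k n is always o(n^ε) can be observed numerically. The sketch computes the ratio via exp/log so that very large inputs never overflow a float; note the ratio can grow at first and only eventually decays:

```python
import math

def ratio(n, k=3, eps=0.1):
    """(log n)^k / n^eps, with n^eps computed as exp(eps * log n)
    to sidestep overflowing floats for huge n."""
    return math.log(n) ** k / math.exp(eps * math.log(n))

# Past the hump, the ratio falls off rapidly toward 0,
# illustrating (log n)^3 = o(n^0.1).
vals = [ratio(10 ** e) for e in (20, 100, 1000)]
print(vals)
```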
Also the L notation112, defined as
L_n[α, c] = e^((c+o(1)) (ln n)^α (ln ln n)^(1−α)),
is convenient for functions that are between polynomial113 and exponential114 in terms of ln n.

142.9 Generalizations and related usages

The generalization to functions taking values in any normed vector space115 is straightforward (replacing absolute values by norms), where f and g need not take their values in the same space. A generalization to functions g taking values in any topological group116 is also possible.[citation needed] The ”limiting process” x → x0 can also be generalized by introducing an arbitrary filter base118, i.e. to directed nets119 f and g. The o notation can be used to define derivatives120 and differentiability121 in quite general spaces, and also (asymptotic) equivalence of functions,
f ∼ g ⇐⇒ (f − g) ∈ o(g),
which is an equivalence relation122 and a more restrictive notion than the relationship ”f is Θ(g)” from above. (It reduces to lim f/g = 1 if f and g are positive real valued functions.) For example, 2x is Θ(x), but 2x − x is not o(x).
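As an instance of the o notation defining derivatives, differentiability of f at x amounts to the existence of a number f′(x) with

```latex
f(x+h) = f(x) + f'(x)\,h + o(h) \qquad (h \to 0),
```

that is, the error of the linear approximation must be negligible compared with h itself; this is the formulation that carries over to the general spaces mentioned above.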

112 https://en.wikipedia.org/wiki/L-notation
113 https://en.wikipedia.org/wiki/Time_complexity#Polynomial_time
114 https://en.wikipedia.org/wiki/Time_complexity#Exponential_time
115 https://en.wikipedia.org/wiki/Normed_vector_space
116 https://en.wikipedia.org/wiki/Topological_group
118 https://en.wikipedia.org/wiki/Filter_base
119 https://en.wikipedia.org/wiki/Net_(mathematics)
120 https://en.wikipedia.org/wiki/Derivative
121 https://en.wikipedia.org/wiki/Differentiability
122 https://en.wikipedia.org/wiki/Equivalence_relation


142.10 History (Bachmann–Landau, Hardy, and Vinogradov notations)

The symbol O was first introduced by number theorist Paul Bachmann123 in 1894, in the
second volume of his book Analytische Zahlentheorie (”analytic number theory124 ”).[1] The
number theorist Edmund Landau125 adopted it, and was thus inspired to introduce in 1909
the notation o;[2] hence both are now called Landau symbols. These notations were used
in applied mathematics during the 1950s for asymptotic analysis.[28] The symbol Ω (in the
sense ”is not an o of”) was introduced in 1914 by Hardy and Littlewood.[14] Hardy and
Littlewood also introduced in 1918 the symbols ΩR (”right”) and ΩL (”left”),[15] precursors
of the modern symbols Ω+ (”is not smaller than a small o of”) and Ω− (”is not larger than
a small o of”). Thus the Omega symbols (with their original meanings) are sometimes also
referred to as ”Landau symbols”. This notation Ω has been commonly used in number theory since at least the 1950s.[29] In the 1970s the big O was popularized in computer science
by Donald Knuth126 , who introduced the related Theta notation, and proposed a different
definition for the Omega notation.[19]
Landau never used the big Theta and small omega symbols.
Hardy's symbols were (in terms of the modern O notation)
f ≼ g ⇐⇒ f ∈ O(g) and f ≺ g ⇐⇒ f ∈ o(g);
(Hardy however never defined or used the notation ≺≺, nor ≪, as it has been sometimes
reported). Hardy introduced the symbols ≼ and ≺ (as well as some other symbols) in his
1910 tract ”Orders of Infinity”, and made use of them only in three papers (1910–1913). In
his nearly 400 remaining papers and books he consistently used the Landau symbols O and
o.
Hardy's notation is not used anymore. On the other hand, in the 1930s,[30] the Russian
number theorist Ivan Matveyevich Vinogradov127 introduced his notation ≪, which has
been increasingly used in number theory instead of the O notation. We have
f ≪ g ⇐⇒ f ∈ O(g),
and frequently both notations are used in the same paper.
The big-O originally stands for ”order of” (”Ordnung”, Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever called it ”Omicron”. The symbol was much later (1976) viewed by Knuth as a capital omicron128,[19] probably in reference to his definition of the symbol Omega129. The digit zero130 should not be used.

123 https://en.wikipedia.org/wiki/Paul_Bachmann
124 https://en.wikipedia.org/wiki/Analytic_number_theory
125 https://en.wikipedia.org/wiki/Edmund_Landau
126 https://en.wikipedia.org/wiki/Donald_Knuth
127 https://en.wikipedia.org/wiki/Ivan_Matveyevich_Vinogradov
128 https://en.wikipedia.org/wiki/Omicron
129 https://en.wikipedia.org/wiki/Omega
130 https://en.wikipedia.org/wiki/0_(number)


142.11 See also


• Asymptotic expansion131 : Approximation of functions generalizing Taylor's formula
• Asymptotically optimal algorithm132 : A phrase frequently used to describe an algorithm
that has an upper bound asymptotically within a constant of a lower bound for the
problem
• Big O in probability notation133 : Op ,op
• Limit superior and limit inferior134 : An explanation of some of the limit notation used in
this article
• Nachbin's theorem135 : A precise method of bounding complex analytic136 functions so
that the domain of convergence of integral transforms137 can be stated
• Orders of approximation138
• Computational complexity of mathematical operations139

142.12 References and notes


1. B, P140 (1894). Analytische Zahlentheorie141 [Analytic Number
Theory] (in German). 2. Leipzig: Teubner.
2. L, E142 (1909). Handbuch der Lehre von der Verteilung der
Primzahlen143 [Handbook on the theory of the distribution of the primes] (in German).
Leipzig: B. G. Teubner. p. 883.
3. M, A. ”Q C  C T  T
 C”144 (PDF). . 2. R 7 J 2014.
4. L, E145 (1909). Handbuch der Lehre von der Verteilung der
Primzahlen146 [Handbook on the theory of the distribution of the primes] (in German).
Leipzig: B. G. Teubner. p. 31.
5. T H. C; C E. L; R L. R (2009). Intro-
duction to Algorithms (3rd ed.). Cambridge/MA: MIT Press. p. 45. ISBN147 978-0-
262-53305-8148 . Because θ(g(n)) is a set, we could write ”f(n) ∈ θ(g(n))” to indicate
that f(n) is a member of θ(g(n)). Instead, we will usually write f(n) = θ(g(n)) to

131 https://en.wikipedia.org/wiki/Asymptotic_expansion
132 https://en.wikipedia.org/wiki/Asymptotically_optimal_algorithm
133 https://en.wikipedia.org/wiki/Big_O_in_probability_notation
134 https://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior
135 https://en.wikipedia.org/wiki/Nachbin%27s_theorem
136 https://en.wikipedia.org/wiki/Complex_analytic
137 https://en.wikipedia.org/wiki/Integral_transform
138 https://en.wikipedia.org/wiki/Orders_of_approximation
139 https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
140 https://en.wikipedia.org/wiki/Paul_Bachmann
141 https://archive.org/stream/dieanalytischeza00bachuoft#page/402/mode/2up
142 https://en.wikipedia.org/wiki/Edmund_Landau
143 https://archive.org/details/handbuchderlehre01landuoft
144 http://www.austinmohr.com/Work_files/complexity.pdf
145 https://en.wikipedia.org/wiki/Edmund_Landau
146 https://archive.org/stream/handbuchderlehre01landuoft#page/31/mode/2up
147 https://en.wikipedia.org/wiki/ISBN_(identifier)
148 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-53305-8

1520
References and notes

express the same notion. You might be confused because we abuse equality in this way,
but we shall see later in this section that doing so has its advantages.
6. C, T; L, C; R, R; S, C
(2009). Introduction to Algorithms (Third ed.). MIT. p. 53.
7. H, R. ”O A N  M V”149
(PDF). R 2015-04-23.
8. N. G.  B150 (1958). Asymptotic Methods in Analysis151 . A:
N-H. . 5–7. ISBN152 978-0-486-64221-5153 .
9. G, R154 ; K, D155 ; P, O156 (1994). Concrete
Mathematics157 (2 .). R, M: A–W. . 446.
ISBN158 978-0-201-55802-9159 .
10. D K (J–J 1998). ”T C  B O”160 (PDF).
Notices of the American Mathematical Society161 . 45 (6): 687. (Unabridged ver-
sion162 )
11. T (24 J 2014). ”B O     LTX”163 . texblog.
12. L, E164 (1909). Handbuch der Lehre von der Verteilung der
Primzahlen165 [Handbook on the theory of the distribution of the primes] (in German).
Leipzig: B. G. Teubner. p. 61.
13. Thomas H. Cormen et al., 2001, Introduction to Algorithms, Second Edi-
167
tion166[page needed ]
14. H, G. H.; L, J. E. (1914). ”S    -
: P II. T      -
 ϑ-”168 . Acta Mathematica. 37: 225. doi169 :10.1007/BF02401834170 .
15. G. H. Hardy and J. E. Littlewood, « Contribution to the theory of the Riemann zeta-
function and the theory of the distribution of primes », Acta Mathematica171 , vol. 41,
1916.
16. E. Landau, ”Über die Anzahl der Gitterpunkte in gewissen Bereichen. IV.” Nachr.
Gesell. Wiss. Gött. Math-phys. Kl. 1924, 137–150.

149 http://people.cis.ksu.edu/~rhowell/asymptotic.pdf
150 https://en.wikipedia.org/wiki/N._G._de_Bruijn
151 https://books.google.com/?id=_tnwmvHmVwMC&pg=PA5&vq=%22The+trouble+is%22
152 https://en.wikipedia.org/wiki/ISBN_(identifier)
153 https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-64221-5
154 https://en.wikipedia.org/wiki/Ronald_Graham
155 https://en.wikipedia.org/wiki/Donald_Knuth
156 https://en.wikipedia.org/wiki/Oren_Patashnik
157 https://books.google.com/?id=pntQAAAAMAAJ
158 https://en.wikipedia.org/wiki/ISBN_(identifier)
159 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-55802-9
160 http://www.ams.org/notices/199806/commentary.pdf
161 https://en.wikipedia.org/wiki/Notices_of_the_American_Mathematical_Society
162 http://www-cs-staff.stanford.edu/~knuth/ocalc.tex
163 https://texblog.org/2014/06/24/big-o-and-related-notations-in-latex/
164 https://en.wikipedia.org/wiki/Edmund_Landau
165 https://archive.org/stream/handbuchderlehre01landuoft#page/61/mode/2up
166 http://highered.mcgraw-hill.com/sites/0070131511/
168 http://projecteuclid.org/download/pdf_1/euclid.acta/1485887376
169 https://en.wikipedia.org/wiki/Doi_(identifier)
170 https://doi.org/10.1007%2FBF02401834
171 https://en.wikipedia.org/wiki/Acta_Mathematica


17. Aleksandar Ivić. The Riemann Zeta-Function, chapter 9. John Wiley & Sons, 1985.
18. Gérald Tenenbaum, Introduction to Analytic and Probabilistic Number Theory, Chapter I.5. American Mathematical Society, Providence RI, 2015.
19. Donald Knuth. ”Big Omicron and big Omega and big Theta172”, SIGACT News, Apr.–June 1976, 18–24.
20. Balcázar, José L.; Gabarró, Joaquim. ”Nonuniform complexity classes specified by lower and upper bounds”173 (PDF). RAIRO – Theoretical Informatics and Applications – Informatique Théorique et Applications. 23 (2): 180. ISSN174 0988-3754175. Retrieved 14 March 2017.
21. Cucker, Felipe; Bürgisser, Peter (2013). ”A.1 Big Oh, Little Oh, and Other Comparisons”. Condition: The Geometry of Numerical Algorithms. Berlin, Heidelberg: Springer. pp. 467–468. doi176:10.1007/978-3-642-38896-5177. ISBN178 978-3-642-38896-5179.
22. Vitányi, Paul180; Meertens, Lambert181 (April 1985). ”Big Omega versus the wild functions”182 (PDF). ACM SIGACT News. 16 (4): 56–59. CiteSeerX183 10.1.1.694.3072184. doi185:10.1145/382242.382835186.
23. Cormen, Thomas H.187; Leiserson, Charles E.188; Rivest, Ronald L.189; Stein, Clifford190 (2001) [1990]. Introduction to Algorithms191 (2nd ed.). MIT Press and McGraw-Hill. pp. 41–50. ISBN192 0-262-03293-7193.
24. For example it is omitted in: Hildebrand, A.J. ”Asymptotic Notations”194 (PDF). Asymptotic Methods in Analysis. Math 595, Fall 2009. Retrieved 14 March 2017.
25. Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest (2009). Introduction to Algorithms (3rd ed.). Cambridge/MA: MIT Press. p. 47. ISBN195 978-0-262-53305-8196. When we have only an asymptotic upper bound, we use O-notation.

172 http://www.phil.uu.nl/datastructuren/10-11/knuth_big_omicron.pdf
173 http://archive.numdam.org/article/ITA_1989__23_2_177_0.pdf
174 https://en.wikipedia.org/wiki/ISSN_(identifier)
175 http://www.worldcat.org/issn/0988-3754
176 https://en.wikipedia.org/wiki/Doi_(identifier)
177 https://doi.org/10.1007%2F978-3-642-38896-5
178 https://en.wikipedia.org/wiki/ISBN_(identifier)
179 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-38896-5
180 https://en.wikipedia.org/wiki/Paul_Vitanyi
181 https://en.wikipedia.org/wiki/Lambert_Meertens
182 http://www.kestrel.edu/home/people/meertens/publications/papers/Big_Omega_contra_the_wild_functions.pdf
183 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
184 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.694.3072
185 https://en.wikipedia.org/wiki/Doi_(identifier)
186 https://doi.org/10.1145%2F382242.382835
187 https://en.wikipedia.org/wiki/Thomas_H._Cormen
188 https://en.wikipedia.org/wiki/Charles_E._Leiserson
189 https://en.wikipedia.org/wiki/Ron_Rivest
190 https://en.wikipedia.org/wiki/Clifford_Stein
191 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
192 https://en.wikipedia.org/wiki/ISBN_(identifier)
193 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
194 http://www.math.uiuc.edu/~ajh/595ama/ama-ch2.pdf
195 https://en.wikipedia.org/wiki/ISBN_(identifier)
196 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-53305-8


For a given function g(n), we denote by O(g(n)) (pronounced “big-oh of g of n” or sometimes just “oh of g of n”) the set of functions O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
26. Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest (2009). Introduction to Algorithms (3rd ed.). Cambridge/MA: MIT Press. p. 49. ISBN197 978-0-262-53305-8198. When the asymptotic notation stands alone (that is, not within a larger formula) on the right-hand side of an equation (or inequality), as in n = O(n²), we have already defined the equal sign to mean set membership: n ∈ O(n²). In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula 2n² + 3n + 1 = 2n² + Θ(n) means that 2n² + 3n + 1 = 2n² + f(n), where f(n) is some function in the set Θ(n). In this case, we let f(n) = 3n + 1, which is indeed in Θ(n). Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation.
27. Introduction to Algorithms. Cormen, Thomas H. (Third ed.). Cambridge, Mass.: MIT Press. 2009. p. 63. ISBN199 978-0-262-27083-0200. OCLC201 676697295202.
28. Erdélyi, A. (1956). Asymptotic Expansions. ISBN204 978-0-486-60318-6205.
29. E. C. Titchmarsh, The Theory of the Riemann Zeta-Function (Oxford: Clarendon Press, 1951).
30. See for instance ”A new estimate for G(n) in Waring's problem” (Russian). Doklady Akademii Nauk SSSR 5, No 5-6 (1934), 249–253. Translated in English in: Selected Works / Ivan Matveevič Vinogradov; prepared by the Steklov Mathematical Institute of the Academy of Sciences of the USSR on the occasion of his 90th birthday. Springer-Verlag, 1985.

142.13 Further reading


• H, G. H.206 (1910). Orders of Infinity: The 'Infinitärcalcül' of Paul du Bois-
Reymond207 . C U P208 .
• K, D209 (1997). ”1.2.11: A R”. Fundamen-
tal Algorithms. The Art of Computer Programming. 1 (3rd ed.). Addison–Wesley.
ISBN210 978-0-201-89683-1211 .

197 https://en.wikipedia.org/wiki/ISBN_(identifier)
198 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-53305-8
199 https://en.wikipedia.org/wiki/ISBN_(identifier)
200 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-27083-0
201 https://en.wikipedia.org/wiki/OCLC_(identifier)
202 http://www.worldcat.org/oclc/676697295
203 https://en.wikipedia.org/wiki/Category:CS1_maint:_others
204 https://en.wikipedia.org/wiki/ISBN_(identifier)
205 https://en.wikipedia.org/wiki/Special:BookSources/978-0-486-60318-6
206 https://en.wikipedia.org/wiki/G._H._Hardy
207 https://archive.org/details/ordersofinfinity00harduoft
208 https://en.wikipedia.org/wiki/Cambridge_University_Press
209 https://en.wikipedia.org/wiki/Donald_Knuth
210 https://en.wikipedia.org/wiki/ISBN_(identifier)
211 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89683-1


• C, T H.212 ; L, C E.213 ; R, R L.214 ; S,
C215 (2001). ”3.1: A ”. Introduction to Algorithms216 (2
.). MIT P  MG–H. ISBN217 978-0-262-03293-3218 .
• S, M219 (1997). Introduction to the Theory of Computation. PWS Pub-
lishing. pp. 226–228. ISBN220 978-0-534-94728-6221 .
• A, J; D, K (2004). Formalizing O notation in Is-
abelle/HOL222 (PDF). I J C  A R-
. 223 :10.1007/978-3-540-25984-8_27224 .
• B, P E. (11 M 2005). B, P E. (.). ”-O ”225 .
Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and
Technology. Retrieved December 16, 2006.
• B, P E. (17 D 2004). B, P E. (.). ”- -
”226 . Dictionary of Algorithms and Data Structures. U.S. National Institute of
Standards and Technology. Retrieved December 16, 2006.
• B, P E. (17 D 2004). B, P E. (.). ”Ω”227 . Dictionary of
Algorithms and Data Structures. U.S. National Institute of Standards and Technology.
Retrieved December 16, 2006.
• B, P E. (17 D 2004). B, P E. (.). ”ω”228 . Dictionary of
Algorithms and Data Structures. U.S. National Institute of Standards and Technology.
Retrieved December 16, 2006.
• B, P E. (17 D 2004). B, P E. (.). ”Θ”229 . Dictionary of
Algorithms and Data Structures. U.S. National Institute of Standards and Technology.
Retrieved December 16, 2006.

142.14 External links

The Wikibook Data Structures230 has a page on the topic of: Big-O Notation231

212 https://en.wikipedia.org/wiki/Thomas_H._Cormen
213 https://en.wikipedia.org/wiki/Charles_E._Leiserson
214 https://en.wikipedia.org/wiki/Ronald_L._Rivest
215 https://en.wikipedia.org/wiki/Clifford_Stein
216 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
217 https://en.wikipedia.org/wiki/ISBN_(identifier)
218 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03293-3
219 https://en.wikipedia.org/wiki/Michael_Sipser
220 https://en.wikipedia.org/wiki/ISBN_(identifier)
221 https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-94728-6
222 http://www.andrew.cmu.edu/~avigad/Papers/bigo.pdf
223 https://en.wikipedia.org/wiki/Doi_(identifier)
224 https://doi.org/10.1007%2F978-3-540-25984-8_27
225 https://xlinux.nist.gov/dads/HTML/bigOnotation.html
226 https://xlinux.nist.gov/dads/HTML/littleOnotation.html
227 https://xlinux.nist.gov/dads/HTML/omegaCapital.html
228 https://xlinux.nist.gov/dads/HTML/omega.html
229 https://xlinux.nist.gov/dads/HTML/theta.html
230 https://en.wikibooks.org/wiki/Data_Structures
231 https://en.wikibooks.org/wiki/Data_Structures/Asymptotic_Notation#Big-O_Notation


• Growth of sequences — OEIS (Online Encyclopedia of Integer Sequences) Wiki232


• Introduction to Asymptotic Notations233
• Landau Symbols234
• Big-O Notation – What is it good for235
• Big O Notation explained in plain english236
• An example of Big O in accuracy of central divided difference scheme for first derivative237
• A Gentle Introduction to Algorithm Complexity Analysis238

232 http://oeis.org/wiki/Growth_of_sequences
233 https://classes.soe.ucsc.edu/classes/cmps102/Spring04/TantaloAsymp.pdf
234 http://mathworld.wolfram.com/LandauSymbols.html
235 http://www.perlmonks.org/?node_id=573138
236 https://stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation/50288253#50288253
237 https://autarkaw.org/2013/01/30/making-sense-of-the-big-oh/
238 https://discrete.gr/complexity/

143 Master theorem

In mathematics, a theorem that covers a variety of cases is sometimes called a master theorem.
Some theorems called master theorems in their fields include:
• Master theorem (analysis of algorithms)1 , analyzing the asymptotic behavior of divide-
and-conquer algorithms
• Ramanujan's master theorem2 , providing an analytic expression for the Mellin transform
of an analytic function
• MacMahon master theorem3 (MMT), in enumerative combinatorics and linear algebra
• Glasser's master theorem4 in integral calculus

This disambiguation5 page lists articles associated with the title Master theorem.
If an internal link6 led you here, you may wish to change the link to point directly to the
intended article.

1 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
2 https://en.wikipedia.org/wiki/Ramanujan%27s_master_theorem
3 https://en.wikipedia.org/wiki/MacMahon_master_theorem
4 https://en.wikipedia.org/wiki/Glasser%27s_master_theorem
5 https://en.wikipedia.org/wiki/Help:Disambiguation
6 https://en.wikipedia.org/w/index.php?title=Special:WhatLinksHere/Master_theorem&namespace=0

144 Best, worst and average case

A measure of how efficiently algorithms use resources


In computer science11 , best, worst, and average cases of a given algorithm12 express what
the resource13 usage is at least, at most and on average, respectively. Usually the resource
being considered is running time, i.e. time complexity14 , but it could also be memory or
another resource. The best case is the function which performs the minimum number of
steps on input data of n elements; the worst case is the function which performs the
maximum number of steps on input data of size n; and the average case is the function
which performs an average number of steps on input data of n elements.
In real-time computing15 , the worst-case execution time16 is often of particular concern since
it is important to know how much time might be needed in the worst case to guarantee that
the algorithm will always finish on time.

11 https://en.wikipedia.org/wiki/Computer_science
12 https://en.wikipedia.org/wiki/Algorithm
13 https://en.wikipedia.org/wiki/Resource_(computer_science)
14 https://en.wikipedia.org/wiki/Time_complexity
15 https://en.wikipedia.org/wiki/Real-time_computing
16 https://en.wikipedia.org/wiki/Worst-case_execution_time


Average performance and worst-case performance are the most used in algorithm analysis.
Less widely found is best-case performance17 , but it does have uses: for example, where the
best cases of individual tasks are known, they can be used to improve the accuracy of an
overall worst-case analysis. Computer scientists18 use probabilistic analysis19 techniques,
especially expected value20 , to determine expected running times.
The terms are used in other contexts; for example the worst- and best-case outcome of
a planned-for epidemic, worst-case temperature to which an electronic circuit element is
exposed, etc. Where components of specified tolerance21 are used, devices must be designed
to work properly with the worst-case combination of tolerances and external conditions.

144.1 Best case performance for algorithm

The term best-case performance is used in computer science to describe an algorithm's
behavior under optimal conditions. For example, the best case for a simple linear search on
a list occurs when the desired element is the first element of the list.
Development and choice of algorithms is rarely based on best-case performance: most
academic and commercial enterprises are more interested in improving average-case
complexity22 and worst-case performance23 . Algorithms may also be trivially modified to
have good best-case running time by hard-coding solutions to a finite set of inputs, making
the measure almost meaningless.[1]
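The linear-search contrast can be made concrete with an instrumented search. The helper below is an illustrative sketch (the name `linear_search_with_count` is ours, not a standard API):

```ruby
# Linear search instrumented to count comparisons (illustrative helper).
# Returns the index found (or nil) and the number of comparisons made.
def linear_search_with_count(list, target)
  list.each_with_index do |element, i|
    return [i, i + 1] if element == target
  end
  [nil, list.length]
end

list = [7, 3, 9, 4, 1]
linear_search_with_count(list, 7)    # best case: found at index 0, 1 comparison
linear_search_with_count(list, 42)   # worst case: not found, n = 5 comparisons
```

The best case touches one element regardless of n; the worst case touches all n.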

144.2 Worst-case versus average-case performance

17 https://en.wikipedia.org/wiki/Best-case_performance
18 https://en.wikipedia.org/wiki/Computer_scientist
19 https://en.wikipedia.org/wiki/Probabilistic_analysis
20 https://en.wikipedia.org/wiki/Expected_value
21 https://en.wikipedia.org/wiki/Engineering_tolerance
22 https://en.wikipedia.org/wiki/Average-case_complexity
23 https://en.wikipedia.org/wiki/Worst-case_performance



Worst-case performance analysis and average-case performance analysis have some similar-
ities, but in practice usually require different tools and approaches.
Determining what typical input means is difficult, and often that average input has proper-
ties which make it difficult to characterise mathematically (consider, for instance, algorithms
that are designed to operate on strings36 of text). Similarly, even when a sensible descrip-
tion of a particular ”average case” (which will probably only be applicable for some uses of
the algorithm) is possible, it tends to result in a more difficult analysis.[2]
Worst-case analysis gives a safe analysis (the worst case is never underestimated), but one
which can be overly pessimistic, since there may be no (realistic) input that would take this
many steps.
In some situations it may be necessary to use a pessimistic analysis in order to guarantee
safety. Often however, a pessimistic analysis may be too pessimistic, so an analysis that gets
closer to the real value but may be optimistic (perhaps with some known low probability
of failure) can be a much more practical approach. One modern approach in academic
theory to bridge the gap between worst-case and average-case analysis is called smoothed
analysis37 .
When analyzing algorithms which often take a small time to complete, but periodically
require a much larger time, amortized analysis38 can be used to determine the worst-case
running time over a (possibly infinite) series of operations39 . This amortized worst-

36 https://en.wikipedia.org/wiki/String_(computer_science)
37 https://en.wikipedia.org/wiki/Smoothed_analysis
38 https://en.wikipedia.org/wiki/Amortized_analysis
39 https://en.wikipedia.org/wiki/Operation_(mathematics)


case cost can be much closer to the average case cost, while still providing a guaranteed
upper limit on the running time.
The worst-case analysis is related to the worst-case complexity40 .[3]

144.3 Practical consequences

Many algorithms with bad worst-case performance have good average-case performance. For
problems we want to solve, this is a good thing: we can hope that the particular instances
we care about are average. For cryptography41 , this is very bad: we want typical instances
of a cryptographic problem to be hard. Here methods like random self-reducibility42 can be
used for some specific problems to show that the worst case is no harder than the average
case, or, equivalently, that the average case is no easier than the worst case.
On the other hand, some data structures like hash tables43 have very poor worst case
behaviors, but a well written hash table of sufficient size will statistically never give the
worst case; the average number of operations performed follows an exponential decay curve,
and so the run time of an operation is statistically bounded.

144.4 Examples

144.4.1 Sorting algorithms

See also: Sorting algorithm § Comparison of algorithms44


Algorithm        Data structure   Time: Best     Time: Average   Time: Worst   Space: Worst
Quick sort       Array            O(n log(n))    O(n log(n))     O(n²)         O(n)
Merge sort       Array            O(n log(n))    O(n log(n))     O(n log(n))   O(n)
Heap sort        Array            O(n log(n))    O(n log(n))     O(n log(n))   O(1)
Smooth sort      Array            O(n)           O(n log(n))     O(n log(n))   O(1)
Bubble sort      Array            O(n)           O(n²)           O(n²)         O(1)
Insertion sort   Array            O(n)           O(n²)           O(n²)         O(1)
Selection sort   Array            O(n²)          O(n²)           O(n²)         O(1)
Bogo sort        Array            O(n)           O(n·n!)         O(∞)          O(1)

40 https://en.wikipedia.org/wiki/Worst-case_complexity
41 https://en.wikipedia.org/wiki/Cryptography
42 https://en.wikipedia.org/wiki/Random_self-reducibility
43 https://en.wikipedia.org/wiki/Hash_table
44 https://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms


Figure 347 Graphs of functions commonly used in the analysis of algorithms, showing
the number of operations N versus input size n for each function

• Insertion sort45 applied to a list of n elements, assumed to be all different and initially in
random order. On average, half the elements in a list A1 ... Aj are less than element Aj+1 ,
and half are greater. Therefore, the algorithm compares the (j + 1)th element to be
inserted on the average with half the already sorted sub-list, so tj = j/2. Working out
the resulting average-case running time yields a quadratic function of the input size, just
like the worst-case running time.
• Quicksort46 applied to a list of n elements, again assumed to be all different and initially
in random order. This popular sorting algorithm47 has an average-case performance of

45 https://en.wikipedia.org/wiki/Insertion_sort
46 https://en.wikipedia.org/wiki/Quicksort
47 https://en.wikipedia.org/wiki/Sorting_algorithm


O(n log(n)), which contributes to making it a very fast algorithm in practice. But given
a worst-case input, its performance degrades to O(n2 ). Also, when implemented with the
”shortest first” policy, the worst-case space complexity is instead bounded by O(log(n)).
• Heapsort has O(n) time when all elements are the same. Heapify takes O(n) time and
then removing elements from the heap is O(1) time for each of the n elements. The run
time grows to O(n log(n)) if all elements must be distinct.
• Bogosort48 has O(n) time when the elements are sorted on the first iteration. In each
iteration all elements are checked if in order. There are n! possible permutations; with a
balanced random number generator, almost every permutation of the array is yielded within
about n! iterations. Computers have limited memory, so the generated numbers cycle; it
might not be possible to reach every permutation. In the worst case this leads to O(∞)
time, an infinite loop.
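The insertion-sort bullet above claims n − 1 comparisons in the best case and roughly quadratic growth on average. That can be checked empirically with an instrumented sort; the following Ruby sketch (an illustration, not from the cited texts) counts comparisons:

```ruby
# Insertion sort instrumented to count element comparisons
# (illustrative helper; returns the sorted array and the count).
def insertion_sort_comparisons(array)
  a = array.dup
  comparisons = 0
  (1...a.length).each do |j|
    key = a[j]
    i = j - 1
    while i >= 0
      comparisons += 1
      break if a[i] <= key   # compare the key with the sorted prefix
      a[i + 1] = a[i]        # shift larger elements right
      i -= 1
    end
    a[i + 1] = key
  end
  [a, comparisons]
end

# Best case (already sorted): n - 1 comparisons.
# Average over random permutations: roughly n^2 / 4, i.e. quadratic,
# matching the t_j = j/2 argument above.
avg = 200.times.sum { insertion_sort_comparisons((1..32).to_a.shuffle).last } / 200.0
```

For n = 32 the average lands near n²/4 ≈ 256, while the already-sorted best case uses only 31 comparisons.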

144.4.2 Data structures

See also: Search data structure § Asymptotic amortized worst-case analysis49


Data structure        Avg: Index  Avg: Search  Avg: Insert  Avg: Delete  Worst: Index  Worst: Search  Worst: Insert  Worst: Delete  Space: Worst
Basic array50         O(1)        O(n)         —            —            O(1)          O(n)           —              —              O(n)
Dynamic array51       O(1)        O(n)         O(n)         —            O(1)          O(n)           O(n)           —              O(n)
Singly linked list52  O(n)        O(n)         O(1)         O(1)         O(n)          O(n)           O(1)           O(1)           O(n)
Doubly linked list53  O(n)        O(n)         O(1)         O(1)         O(n)          O(n)           O(1)           O(1)           O(n)
Hash table54          —           O(1)         O(1)         O(1)         —             O(n)           O(n)           O(n)           O(n)
Binary search tree55  —           O(log (n))   O(log (n))   O(log (n))   —             O(n)           O(n)           O(n)           O(n)
B-tree56              —           O(log (n))   O(log (n))   O(log (n))   —             O(log (n))     O(log (n))     O(log (n))     O(n)
Red-black tree57      —           O(log (n))   O(log (n))   O(log (n))   —             O(log (n))     O(log (n))     O(log (n))     O(n)
AVL tree58            —           O(log (n))   O(log (n))   O(log (n))   —             O(log (n))     O(log (n))     O(log (n))     O(n)

• Linear search59 on a list of n elements. In the absolute worst case, the search must visit
every element once. This happens when the value being searched for is either the last
element in the list, or is not in the list. However, on average, assuming the value searched
for is in the list and each list element is equally likely to be the value searched for, the
search visits only n/2 elements.
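The n/2 figure can be verified exactly by averaging over every possible target; a small Ruby sketch (illustrative helper names):

```ruby
# Average probes for a successful linear search, computed exactly by
# averaging over every possible (equally likely) target in the list.
def probes_until_found(list, target)
  list.each_with_index { |element, i| return i + 1 if element == target }
  list.length
end

n = 101
list = (1..n).to_a
average = list.sum { |t| probes_until_found(list, t) } / n.to_f
# Averaging 1, 2, ..., n probes gives (n + 1) / 2, here 51.
```

The exact average is (n + 1)/2, which is n/2 up to a constant term.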

144.5 See also


• Sorting algorithm60 − an area where there is a great deal of performance analysis of
various algorithms.

48 https://en.wikipedia.org/wiki/Bogosort
49 https://en.wikipedia.org/wiki/Search_data_structure#Asymptotic_amortized_worst-case_analysis
50 https://en.wikipedia.org/wiki/Array
51 https://en.wikipedia.org/wiki/Dynamic_array
52 https://en.wikipedia.org/wiki/Singly_linked_list
53 https://en.wikipedia.org/wiki/Doubly_linked_list
54 https://en.wikipedia.org/wiki/Hash_table
55 https://en.wikipedia.org/wiki/Binary_search_tree
56 https://en.wikipedia.org/wiki/B-tree
57 https://en.wikipedia.org/wiki/Red-black_tree
58 https://en.wikipedia.org/wiki/AVL_tree
59 https://en.wikipedia.org/wiki/Linear_search
60 https://en.wikipedia.org/wiki/Sorting_algorithm


• Search data structure61 − any data structure that allows the efficient retrieval of specific
items
• Worst-case circuit analysis62
• Smoothed analysis63
• Interval finite element64
• Big O notation65

144.6 References
1. Introduction to Algorithms (Cormen, Leiserson, Rivest, and Stein) 2001, Chapter 2
”Getting Started”. In Best-case complexity66 , it gives the lower bound on the running
time of the algorithm for any instance of the input.
2. Spielman, Daniel67 ; Teng, Shang-Hua68 (2009), ”Smoothed analysis: an attempt to
explain the behavior of algorithms in practice”69 (PDF), Communications of the ACM,
ACM, 52 (10): 76–84, doi70 :10.1145/1562764.156278571
3. ”Worst-case analysis”72 (PDF). Archived73 (PDF) from the original on 2011-07-21.
Retrieved 2008-11-30.

61 https://en.wikipedia.org/wiki/Search_data_structure
62 https://en.wikipedia.org/wiki/Worst-case_circuit_analysis
63 https://en.wikipedia.org/wiki/Smoothed_analysis
64 https://en.wikipedia.org/wiki/Interval_finite_element
65 https://en.wikipedia.org/wiki/Big_O_notation
66 https://en.wikipedia.org/wiki/Best-case_complexity
67 https://en.wikipedia.org/wiki/Daniel_Spielman
68 https://en.wikipedia.org/w/index.php?title=Shangua_Teng&action=edit&redlink=1
69 http://cs-www.cs.yale.edu/homes/spielman/Research/cacmSmooth.pdf
70 https://en.wikipedia.org/wiki/Doi_(identifier)
71 https://doi.org/10.1145%2F1562764.1562785
72 http://www.fsz.bme.hu/~szirmay/ray6.pdf
73 https://web.archive.org/web/20110721103906/http://www.fsz.bme.hu/~szirmay/ray6.pdf

145 Amortized analysis

”Amortized” redirects here. For other uses, see Amortization1 .
In computer science2 , amortized analysis is a method for analyzing3 a given algorithm's
complexity4 , or how much of a resource, especially time or memory, it takes to execute5 .
analysis is that looking at the worst-case run time per operation, rather than per algorithm,
can be too pessimistic.[1]
While certain operations for a given algorithm may have a significant cost in resources, other
operations may not be as costly. The amortized analysis considers both the costly and less
costly operations together over the whole series of operations of the algorithm. This may
include accounting for different types of input, length of the input, and other factors that
affect its performance.[2]

145.1 History

Amortized analysis initially emerged from a method called aggregate analysis, which is
now subsumed by amortized analysis. The technique was first formally introduced by
Robert Tarjan6 in his 1985 paper Amortized Computational Complexity,[3] which addressed
the need for a more useful form of analysis than the common probabilistic methods used.
Amortization was initially used for very specific types of algorithms, particularly those
involving binary trees7 and union8 operations. However, it is now ubiquitous and comes
into play when analyzing many other algorithms as well.[2]

145.2 Method

Main articles: accounting method9 and potential method10
Amortized analysis requires
knowledge of which series of operations are possible. This is most commonly the case with
data structures11 , which have state12 that persists between operations. The basic idea is

1 https://en.wikipedia.org/wiki/Amortization
2 https://en.wikipedia.org/wiki/Computer_science
3 https://en.wikipedia.org/wiki/Analysis_of_algorithms
4 https://en.wikipedia.org/wiki/Computational_complexity_theory
5 https://en.wikipedia.org/wiki/Execution_(computing)
6 https://en.wikipedia.org/wiki/Robert_Tarjan
7 https://en.wikipedia.org/wiki/Binary_tree
8 https://en.wikipedia.org/wiki/Union_(computer_science)
9 https://en.wikipedia.org/wiki/Accounting_method
10 https://en.wikipedia.org/wiki/Potential_method
11 https://en.wikipedia.org/wiki/Data_structure
12 https://en.wikipedia.org/wiki/State_(computer_science)


that a worst-case operation can alter the state in such a way that the worst case cannot
occur again for a long time, thus ”amortizing” its cost.
There are generally three methods for performing amortized analysis: the aggregate method,
the accounting method13 , and the potential method14 . All of these give correct answers; the
choice of which to use depends on which is most convenient for a particular situation.[4]
• Aggregate analysis determines the upper bound T(n) on the total cost of a sequence of
n operations, then calculates the amortized cost to be T(n) / n.[4]
• The accounting method15 is a form of aggregate analysis which assigns to each operation
an amortized cost which may differ from its actual cost. Early operations have an amor-
tized cost higher than their actual cost, which accumulates a saved ”credit” that pays
for later operations having an amortized cost lower than their actual cost. Because the
credit begins at zero, the actual cost of a sequence of operations equals the amortized
cost minus the accumulated credit. Because the credit is required to be non-negative,
the amortized cost is an upper bound on the actual cost. Usually, many short-running
operations accumulate such credit in small increments, while rare long-running operations
decrease it drastically.[4]
• The potential method16 is a form of the accounting method where the saved credit is
computed as a function (the ”potential”) of the state of the data structure. The amortized
cost is the immediate cost plus the change in potential.[4]
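The potential method can be shown on a concrete sequence. As an illustrative sketch (not from the cited lecture notes), the potential Φ = 2·count − capacity makes every push onto a doubling array cost an amortized 3:

```ruby
# Potential-method bookkeeping for pushes onto a doubling array (a sketch).
# The actual cost of a push is 1, plus `count` copies when the array is full.
# With potential phi = 2 * count - capacity, the amortized cost of every
# push (actual cost plus change in potential) works out to exactly 3.
count, capacity = 0, 1
phi = 2 * count - capacity
amortized_costs = []
16.times do
  actual = 1
  if count == capacity       # full: double and copy every element
    actual += count
    capacity *= 2
  end
  count += 1
  new_phi = 2 * count - capacity
  amortized_costs << actual + (new_phi - phi)
  phi = new_phi
end
```

The expensive doubling pushes are paid for by the drop in potential, so every entry of `amortized_costs` is the same constant.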

13 https://en.wikipedia.org/wiki/Accounting_method
14 https://en.wikipedia.org/wiki/Potential_method
15 https://en.wikipedia.org/wiki/Accounting_method
16 https://en.wikipedia.org/wiki/Potential_method


145.3 Examples

145.3.1 Dynamic array

Figure 348 Amortized analysis of the push operation for a dynamic array

Consider a dynamic array17 that grows in size as more elements are added to it, such as
ArrayList in Java or std::vector in C++. If we started out with a dynamic array of size
4, we could push 4 elements onto it, and each operation would take constant time18 . Yet
pushing a fifth element onto that array would take longer as the array would have to create
a new array of double the current size (8), copy the old elements onto the new array, and
then add the new element. The next three push operations would similarly take constant
time, and then the subsequent addition would require another slow doubling of the array
size.

17 https://en.wikipedia.org/wiki/Dynamic_array
18 https://en.wikipedia.org/wiki/Constant_time


In general, if we consider an arbitrary number of pushes n + 1 to an array of size n, we
notice that push operations take constant time except for the last one, which takes Θ(n)19
time to perform the size-doubling operation. Since there were n + 1 operations total, we
can take the average and find that pushing elements onto the dynamic array takes
(nΘ(1) + Θ(n))/(n + 1) = Θ(1), constant time.[4]
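The same conclusion can be reached by aggregate analysis on an explicit cost model; the sketch below (illustrative, starting from capacity 4 as in the example above) totals the cost of n pushes:

```ruby
# Aggregate analysis of n pushes onto a doubling array (illustrative).
# Each push writes one element; when the array is full, all current
# elements are first copied into an array of twice the capacity.
def total_push_cost(n)
  capacity = 4   # initial size, as in the example above
  count = 0
  cost = 0
  n.times do
    if count == capacity
      cost += count     # copies performed by the resize
      capacity *= 2
    end
    cost += 1           # writing the new element
    count += 1
  end
  cost
end
# The total never exceeds 3n, so T(n)/n -- the amortized cost -- is O(1).
```

For n = 1000 the total is 2020 (1000 writes plus 4 + 8 + … + 512 = 1020 copies), well under 3n.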

145.3.2 Queue

Shown is a Ruby implementation of a Queue20 , a FIFO data structure21 :

# (In real code, avoid reusing the name of Ruby's built-in Queue.)
class Queue
  def initialize
    @input = []    # newly enqueued elements, newest last
    @output = []   # elements ready to dequeue, oldest last
  end

  def enqueue(element)
    @input << element
  end

  def dequeue
    if @output.empty?
      # Refill by reversing @input onto @output, so the
      # oldest element ends up on top of @output.
      while @input.any?
        @output << @input.pop
      end
    end

    @output.pop
  end
end

The enqueue operation just pushes an element onto the input array; this operation does
not depend on the lengths of either input or output and therefore runs in constant time.
However the dequeue operation is more complicated. If the output array already has some
elements in it, then dequeue runs in constant time; otherwise, dequeue takes O(n) time to
add all the elements onto the output array from the input array, where n is the current
length of the input array. After copying n elements from input, we can perform n dequeue
operations, each taking constant time, before the output array is empty again. Thus, we
can perform a sequence of n dequeue operations in only O(n) time, which implies that the
amortized time of each dequeue operation is O(1).[5]
Alternatively, we can charge the cost of copying any item from the input array to the
output array to the earlier enqueue operation for that item. This charging scheme doubles
the amortized time for enqueue but reduces the amortized time for dequeue to O(1).
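The bound implied by the charging argument — each element is moved at most twice — can be checked by counting moves directly. The following standalone sketch mirrors the two-array queue above:

```ruby
# Counting element moves for a two-array queue like the one above:
# each element is appended to `input` once and moved to `output` at most
# once, so n enqueues followed by n dequeues make at most 2n moves.
def queue_moves(n)
  input, output, moves = [], [], 0
  n.times do |i|            # n enqueues
    input << i
    moves += 1
  end
  n.times do                # n dequeues
    if output.empty?
      while input.any?      # the occasional O(n) refill
        output << input.pop
        moves += 1
      end
    end
    output.pop
  end
  moves
end
```

Ten enqueues followed by ten dequeues perform exactly 20 moves, i.e. an amortized O(1) per operation.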

145.4 Common use


• In common usage, an ”amortized algorithm” is one that an amortized analysis has shown
to perform well.

19 https://en.wikipedia.org/wiki/Big_O_notation
20 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
21 https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)


• Online algorithms22 commonly use amortized analysis.

145.5 References
• A B23  R E-Y (1998). Online Computation and Competitive
Analysis24 . C U P. . 20, 141.
1. ”L 7: A A”25 (PDF). C M U26 .
R 14 M 2015.
2. R F (2007), Amortized Analysis Explained27 (PDF),  3
M 2011
3. T, R E28 (A 1985). ”A C C-
”29 (PDF). SIAM Journal on Algebraic and Discrete Methods. 6 (2): 306–318.
doi30 :10.1137/060603131 .
4. K, D (S 2011). ”CS 3110 L 20: A A”32 .
C U33 . R 14 M 2015.
5. G, D. ”CSE332: D A”34 (PDF). cs.washington.edu.
Retrieved 14 March 2015.

22 https://en.wikipedia.org/wiki/Online_algorithm
23 https://en.wikipedia.org/wiki/Allan_Borodin
24 https://www.cs.technion.ac.il/~rani/book.html
25 https://www.cs.cmu.edu/afs/cs/academic/class/15451-s07/www/lecture_notes/lect0206.pdf
26 https://en.wikipedia.org/wiki/Carnegie_Mellon_University
27 http://www.cs.princeton.edu/~fiebrink/423/AmortizedAnalysisExplained_Fiebrink.pdf
28 https://en.wikipedia.org/wiki/Robert_Tarjan
29 http://www.cs.duke.edu/courses/fall11/cps234/reading/Tarjan85_AmortizedComplexity.pdf
30 https://en.wikipedia.org/wiki/Doi_(identifier)
31 https://doi.org/10.1137%2F0606031
32 http://www.cs.cornell.edu/courses/cs3110/2011sp/lectures/lec20-amortized/amortized.htm
33 https://en.wikipedia.org/wiki/Cornell_University
34 http://courses.cs.washington.edu/courses/cse332/10sp/lectures/lecture21.pdf

146 Computational complexity theory

Study of inherent difficulty of computational problems


Computational complexity theory focuses on classifying computational problems ac-
cording to their inherent difficulty, and relating these classes to each other. A computational
problem is a task solved by a computer. A computational problem is solvable by mechanical
application of mathematical steps, such as an algorithm1 .
A problem is regarded as inherently difficult if its solution requires significant resources,
whatever the algorithm used. The theory formalizes this intuition, by introducing math-
ematical models of computation2 to study these problems and quantifying their computa-
tional complexity3 , i.e., the amount of resources needed to solve them, such as time and
storage. Other measures of complexity are also used, such as the amount of communication
(used in communication complexity4 ), the number of gates5 in a circuit (used in circuit
complexity6 ) and the number of processors (used in parallel computing7 ). One of the roles
of computational complexity theory is to determine the practical limits on what comput-
ers can and cannot do. The P versus NP problem8 , one of the seven Millennium Prize
Problems9 , is dedicated to the field of computational complexity.[1]
Closely related fields in theoretical computer science are analysis of algorithms10 and com-
putability theory11 . A key distinction between analysis of algorithms and computational
complexity theory is that the former is devoted to analyzing the amount of resources needed
by a particular algorithm to solve a problem, whereas the latter asks a more general ques-
tion about all possible algorithms that could be used to solve the same problem. More
precisely, computational complexity theory tries to classify problems that can or cannot be
solved with appropriately restricted resources. In turn, imposing restrictions on the avail-
able resources is what distinguishes computational complexity from computability theory:
the latter theory asks what kind of problems can, in principle, be solved algorithmically.

1 https://en.wikipedia.org/wiki/Algorithm
2 https://en.wikipedia.org/wiki/Models_of_computation
3 https://en.wikipedia.org/wiki/Computational_complexity
4 https://en.wikipedia.org/wiki/Communication_complexity
5 https://en.wikipedia.org/wiki/Logic_gate
6 https://en.wikipedia.org/wiki/Circuit_complexity
7 https://en.wikipedia.org/wiki/Parallel_computing
8 https://en.wikipedia.org/wiki/P_versus_NP_problem
9 https://en.wikipedia.org/wiki/Millennium_Prize_Problems
10 https://en.wikipedia.org/wiki/Analysis_of_algorithms
11 https://en.wikipedia.org/wiki/Computability_theory


146.1 Computational problems

Figure 349 A traveling salesman tour through 14 German cities.

146.1.1 Problem instances

A computational problem12 can be viewed as an infinite collection of instances together with
a solution for every instance. The input string for a computational problem is referred to as

12 https://en.wikipedia.org/wiki/Computational_problem


a problem instance, and should not be confused with the problem itself. In computational
complexity theory, a problem refers to the abstract question to be solved. In contrast, an
instance of this problem is a rather concrete utterance, which can serve as the input for a
decision problem. For example, consider the problem of primality testing13 . The instance
is a number (e.g., 15) and the solution is ”yes” if the number is prime and ”no” otherwise
(in this case, 15 is not prime and the answer is ”no”). Stated another way, the instance is a
particular input to the problem, and the solution is the output corresponding to the given
input.
To further highlight the difference between a problem and an instance, consider the following
instance of the decision version of the traveling salesman problem14 : Is there a route of at
most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative
answer to this particular problem instance is of little use for solving other instances of the
problem, such as asking for a round trip through all sites in Milan15 whose total length is
at most 10 km. For this reason, complexity theory addresses computational problems and
not particular problem instances.

146.1.2 Representing problem instances

When considering computational problems, a problem instance is a string16 over an alphabet17 .
Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and
thus the strings are bitstrings18 . As in a real-world computer19 , mathematical objects other
than bitstrings must be suitably encoded. For example, integers20 can be represented in
binary notation21 , and graphs22 can be encoded directly via their adjacency matrices23 , or
by encoding their adjacency lists24 in binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete
choice of input encoding, one tries to keep the discussion abstract enough to be independent
of the choice of encoding. This can be achieved by ensuring that different representations
can be transformed into each other efficiently.

13 https://en.wikipedia.org/wiki/Primality_testing
14 https://en.wikipedia.org/wiki/Traveling_salesman_problem
15 https://en.wikipedia.org/wiki/Milan
16 https://en.wikipedia.org/wiki/String_(computer_science)
17 https://en.wikipedia.org/wiki/Alphabet_(computer_science)
18 https://en.wikipedia.org/wiki/Bitstring
19 https://en.wikipedia.org/wiki/Computer
20 https://en.wikipedia.org/wiki/Integer
21 https://en.wikipedia.org/wiki/Binary_notation
22 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
23 https://en.wikipedia.org/wiki/Adjacency_matrix
24 https://en.wikipedia.org/wiki/Adjacency_list


146.1.3 Decision problems as formal languages

Figure 350 A decision problem has only two possible outputs, yes or no (or alternately
1 or 0) on any input.

Decision problems25 are one of the central objects of study in computational complexity
theory. A decision problem is a special type of computational problem whose answer is
either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal
language26 , where the members of the language are instances whose output is yes, and the

25 https://en.wikipedia.org/wiki/Decision_problem
26 https://en.wikipedia.org/wiki/Formal_language


non-members are those instances whose output is no. The objective is to decide, with the
aid of an algorithm27 , whether a given input string is a member of the formal language
under consideration. If the algorithm deciding this problem returns the answer yes, the
algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrary graph28 . The
problem consists in deciding whether the given graph is connected29 or not. The formal
language associated with this decision problem is then the set of all connected graphs — to
obtain a precise definition of this language, one has to decide how graphs are encoded as
binary strings.
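Under one concrete encoding choice (a hypothetical adjacency-list representation rather than a raw binary string), the decision procedure for connectivity can be sketched as a breadth-first search:

```python
from collections import deque

def is_connected(adj):
    """Decide the language of connected graphs, with the graph encoded
    as an adjacency list mapping each vertex to its neighbours (one
    arbitrary but convenient choice of encoding)."""
    if not adj:
        return True               # the empty graph: accept by convention
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:                  # standard breadth-first search
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)  # accept iff every vertex was reached
```

Here `is_connected({0: [1], 1: [0, 2], 2: [1]})` accepts, while a graph containing an isolated vertex is rejected.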

146.1.4 Function problems

A function problem30 is a computational problem where a single output (of a total func-
tion31 ) is expected for every input, but the output is more complex than that of a decision
problem32 —that is, the output isn't just yes or no. Notable examples include the traveling
salesman problem33 and the integer factorization problem34 .
It is tempting to think that the notion of function problems is much richer than the notion
of decision problems. However, this is not really the case, since function problems can be
recast as decision problems. For example, the multiplication of two integers can be expressed
as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a
given triple is a member of this set corresponds to solving the problem of multiplying two
numbers.
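A sketch of this recasting (the bound passed to the recovery routine is a hypothetical a-priori limit on the product, not part of the original formulation):

```python
def in_mult_language(a, b, c):
    """Decide membership in the set of triples {(a, b, c) : a * b = c}."""
    return a * b == c

def multiply_via_decider(a, b, bound):
    """Recover the function problem by querying the decision procedure.

    `bound` is an assumed a-priori limit on the product; a real recasting
    would locate c more cleverly (e.g. bit by bit) instead of scanning
    every candidate."""
    for c in range(bound + 1):
        if in_mult_language(a, b, c):
            return c
    return None
```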

146.1.5 Measuring the size of an instance

To measure the difficulty of solving a computational problem, one may wish to see how
much time the best algorithm requires to solve the problem. However, the running time
may, in general, depend on the instance. In particular, larger instances will require more
time to solve. Thus the time required to solve a problem (or the space required, or any
measure of complexity) is calculated as a function of the size of the instance. This is usually
taken to be the size of the input in bits. Complexity theory is interested in how algorithms
scale with an increase in the input size. For instance, in the problem of finding whether a
graph is connected, how much more time does it take to solve a problem for a graph with
2n vertices compared to the time taken for a graph with n vertices?
If the input size is n, the time taken can be expressed as a function of n. Since the time taken
on different inputs of the same size can be different, the worst-case time complexity T(n) is
defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in

27 https://en.wikipedia.org/wiki/Algorithm
28 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
29 https://en.wikipedia.org/wiki/Connectivity_(graph_theory)
30 https://en.wikipedia.org/wiki/Function_problem
31 https://en.wikipedia.org/wiki/Total_function
32 https://en.wikipedia.org/wiki/Decision_problem
33 https://en.wikipedia.org/wiki/Traveling_salesman_problem
34 https://en.wikipedia.org/wiki/Integer_factorization_problem


n, then the algorithm is said to be a polynomial time35 algorithm. Cobham's thesis36 argues
that a problem can be solved with a feasible amount of resources if it admits a polynomial
time algorithm.
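The definition of T(n) as a maximum over all same-size inputs can be made concrete with a toy decider ("does the input string contain a 1?"); the problem and the step-counting convention are illustrative choices, not taken from the text:

```python
from itertools import product

def contains_one(bits):
    """Toy decider: accept iff the input string contains a '1'.
    Returns the answer together with an illustrative step count:
    one step per symbol examined."""
    steps = 0
    for b in bits:
        steps += 1
        if b == "1":
            return True, steps
    return False, steps

def worst_case_T(n):
    """T(n): the maximum number of steps over all 2**n inputs of size n."""
    return max(contains_one("".join(w))[1] for w in product("01", repeat=n))
```

The all-zero string forces a full scan, so T(n) = n for this decider.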

146.2 Machine models and complexity measures

146.2.1 Turing machine

Main article: Turing machine37

Figure 351 An illustration of a Turing machine

A Turing machine is a mathematical model of a general computing machine. It is a theoretical
device that manipulates symbols contained on a strip of tape. Turing machines are not
intended as a practical computing technology, but rather as a general model of a computing
machine—anything from an advanced supercomputer to a mathematician with a pencil and
paper. It is believed that if a problem can be solved by an algorithm, there exists a Tur-
ing machine that solves the problem. Indeed, this is the statement of the Church–Turing
thesis38 . Furthermore, it is known that everything that can be computed on other models
of computation known to us today, such as a RAM machine39 , Conway's Game of Life40 ,
cellular automata41 or any programming language can be computed on a Turing machine.
Since Turing machines are easy to analyze mathematically, and are believed to be as pow-
erful as any other model of computation, the Turing machine is the most commonly used
model in complexity theory.
Many types of Turing machines are used to define complexity classes, such as deterministic
Turing machines42 , probabilistic Turing machines43 , non-deterministic Turing machines44 ,
quantum Turing machines45 , symmetric Turing machines46 and alternating Turing ma-

35 https://en.wikipedia.org/wiki/Polynomial_time
36 https://en.wikipedia.org/wiki/Cobham%27s_thesis
37 https://en.wikipedia.org/wiki/Turing_machine
38 https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
39 https://en.wikipedia.org/wiki/RAM_machine
40 https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
41 https://en.wikipedia.org/wiki/Cellular_automata
42 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
43 https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
44 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
45 https://en.wikipedia.org/wiki/Quantum_Turing_machine
46 https://en.wikipedia.org/wiki/Symmetric_Turing_machine


chines47 . They are all equally powerful in principle, but when resources (such as time
or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set
of rules to determine its future actions. A probabilistic Turing machine is a deterministic
Turing machine with an extra supply of random bits. The ability to make probabilistic de-
cisions often helps algorithms solve problems more efficiently. Algorithms that use random
bits are called randomized algorithms48 . A non-deterministic Turing machine is a deter-
ministic Turing machine with an added feature of non-determinism, which allows a Turing
machine to have multiple possible future actions from a given state. One way to view non-
determinism is that the Turing machine branches into many possible computational paths
at each step, and if it solves the problem in any of these branches, it is said to have solved
the problem. Clearly, this model is not meant to be a physically realizable model, it is
just a theoretically interesting abstract machine that gives rise to particularly interesting
complexity classes. For examples, see non-deterministic algorithm49 .
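A deterministic simulation of this branching view can be sketched for one illustrative NP problem (subset sum, chosen here purely as an example): at each step the machine "chooses" to take or skip the next number, and the input is accepted iff any branch accepts:

```python
def nd_accepts(numbers, target, partial=0):
    """Deterministically explore every branch of a non-deterministic
    machine for subset sum. The input is accepted iff ANY branch of
    take/skip choices reaches the target -- exactly the acceptance
    rule described above."""
    if partial == target:
        return True          # this branch accepts, so the machine accepts
    if not numbers:
        return False         # this branch is exhausted without accepting
    head, rest = numbers[0], numbers[1:]
    return (nd_accepts(rest, target, partial + head)   # branch: take head
            or nd_accepts(rest, target, partial))      # branch: skip head
```

Note the exponential blow-up of this deterministic simulation: the non-deterministic machine "solves" the problem in linearly many steps along each branch, but exploring all branches takes up to 2^n calls.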

146.2.2 Other machine models

Many machine models different from the standard multi-tape Turing machines50 have been
proposed in the literature, for example random access machines51 . Perhaps surprisingly,
each of these models can be converted to another without providing any extra computational
power. The time and memory consumption of these alternate models may vary.[2] What all
these models have in common is that the machines operate deterministically52 .
However, some computational problems are easier to analyze in terms of more unusual
resources. For example, a non-deterministic Turing machine is a computational model that
is allowed to branch out to check many different possibilities at once. The non-deterministic
Turing machine has very little to do with how we physically want to compute algorithms,
but its branching exactly captures many of the mathematical models we want to analyze,
so that non-deterministic time53 is a very important resource in analyzing computational
problems.

146.2.3 Complexity measures

For a precise definition of what it means to solve a problem using a given amount of time
and space, a computational model such as the deterministic Turing machine54 is used. The
time required by a deterministic Turing machine M on input x is the total number of state
transitions, or steps, the machine makes before it halts and outputs the answer (”yes” or
”no”). A Turing machine M is said to operate within time f(n), if the time required by

47 https://en.wikipedia.org/wiki/Alternating_Turing_machine
48 https://en.wikipedia.org/wiki/Randomized_algorithm
49 https://en.wikipedia.org/wiki/Non-deterministic_algorithm
50 https://en.wikipedia.org/wiki/Turing_machine_equivalents#Multi-tape_Turing_machines
51 https://en.wikipedia.org/wiki/Random_access_machine
52 https://en.wikipedia.org/wiki/Deterministic_algorithm
53 https://en.wikipedia.org/wiki/Non-deterministic_time
54 https://en.wikipedia.org/wiki/Deterministic_Turing_machine


M on each input of length n is at most f(n). A decision problem A can be solved in time
f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since
complexity theory is interested in classifying problems based on their difficulty, one defines
sets of problems based on some criteria. For instance, the set of problems solvable within
time f(n) on a deterministic Turing machine is then denoted by DTIME55 (f(n)).
Analogous definitions can be made for space requirements. Although time and space are the
most well-known complexity resources, any complexity measure56 can be viewed as a compu-
tational resource. Complexity measures are very generally defined by the Blum complexity
axioms57 . Other complexity measures used in complexity theory include communication
complexity58 , circuit complexity59 , and decision tree complexity60 .
The complexity of an algorithm is often expressed using big O notation61 .

55 https://en.wikipedia.org/wiki/DTIME
56 https://en.wikipedia.org/wiki/Complexity
57 https://en.wikipedia.org/wiki/Blum_complexity_axioms
58 https://en.wikipedia.org/wiki/Communication_complexity
59 https://en.wikipedia.org/wiki/Circuit_complexity
60 https://en.wikipedia.org/wiki/Decision_tree_complexity
61 https://en.wikipedia.org/wiki/Big_O_notation


146.2.4 Best, worst and average case complexity

Figure 352 Visualization of the quicksort algorithm that has average case performance
O(n log n).

The best, worst and average case62 complexity refer to three different ways of measuring
the time complexity (or any other complexity measure) of different inputs of the same size.
Since some inputs of size n may be faster to solve than others, we define the following
complexities:
1. Best-case complexity: This is the complexity of solving the problem for the best input
of size n.
2. Average-case complexity: This is the complexity of solving the problem on an average.
This complexity is only defined with respect to a probability distribution63 over the
inputs. For instance, if all inputs of the same size are assumed to be equally likely
to appear, the average case complexity can be defined with respect to the uniform
distribution over all inputs of size n.
3. Amortized analysis64 : Amortized analysis considers both the costly and less costly
operations together over the whole series of operations of the algorithm.

62 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
63 https://en.wikipedia.org/wiki/Probability_distribution
64 https://en.wikipedia.org/wiki/Amortized_analysis


4. Worst-case complexity: This is the complexity of solving the problem for the worst
input of size n.
The order from cheap to costly is: Best, average (of discrete uniform distribution65 ), amor-
tized, worst.
For example, consider the deterministic sorting algorithm quicksort66 . This solves the prob-
lem of sorting a list of integers that is given as the input. The worst case is when the input
is sorted or sorted in reverse order, and the algorithm takes time O67 (n2 ) for this case. If
we assume that all possible permutations of the input list are equally likely, the average
time taken for sorting is O(n log n). The best case occurs when each pivoting divides the
list in half, also needing O(n log n) time.
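The gap between these cases can be observed by counting comparisons in a minimal first-element-pivot quicksort (a teaching sketch, not a production sort):

```python
import random

def quicksort_comparisons(a):
    """First-element-pivot quicksort that also counts comparisons,
    to expose the best/average/worst case analysis."""
    count = 0

    def qs(lst):
        nonlocal count
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        count += len(rest)                 # one comparison per element
        lo = [x for x in rest if x < pivot]
        hi = [x for x in rest if x >= pivot]
        return qs(lo) + [pivot] + qs(hi)

    return qs(list(a)), count

# An already-sorted input hits the quadratic worst case: 199 + 198 + ... + 1
_, worst = quicksort_comparisons(range(200))
# A random permutation exhibits the ~n log n typical behaviour:
_, typical = quicksort_comparisons(random.sample(range(200), 200))
```

On a sorted input of 200 elements this variant performs 199 · 200 / 2 = 19 900 comparisons, while a random permutation typically needs only a couple of thousand.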

146.2.5 Upper and lower bounds on the complexity of problems

To classify the computation time (or similar resources, such as space consumption), one is
interested in proving upper and lower bounds on the maximum amount of time required
by the most efficient algorithm solving a given problem. The complexity of an algorithm
is usually taken to be its worst-case complexity, unless specified otherwise. Analyzing a
particular algorithm falls under the field of analysis of algorithms68 . To show an upper
bound T(n) on the time complexity of a problem, one needs to show only that there is a
particular algorithm with running time at most T(n). However, proving lower bounds is
much more difficult, since lower bounds make a statement about all possible algorithms that
solve a given problem. The phrase ”all possible algorithms” includes not just the algorithms
known today, but any algorithm that might be discovered in the future. To show a lower
bound of T(n) for a problem requires showing that no algorithm can have time complexity
lower than T(n).
Upper and lower bounds are usually stated using the big O notation69 , which hides constant
factors and smaller terms. This makes the bounds independent of the specific details of the
computational model used. For instance, if T(n) = 7n2 + 15n + 40, in big O notation one
would write T(n) = O(n2 ).
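For this T(n), one concrete witness pair is c = 8 and n0 = 18; a quick numeric check (the constants are our choice, and any larger ones also work):

```python
def T(n):
    """The running time from the text: T(n) = 7n^2 + 15n + 40."""
    return 7 * n ** 2 + 15 * n + 40

# Witnesses for T(n) = O(n^2): with c = 8, T(n) <= c * n^2 for all n >= 18,
# and n0 = 18 is tight for this particular c.
assert all(T(n) <= 8 * n ** 2 for n in range(18, 5000))
assert T(17) > 8 * 17 ** 2
```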

146.3 Complexity classes

Main article: Complexity class70

65 https://en.wikipedia.org/wiki/Discrete_uniform_distribution
66 https://en.wikipedia.org/wiki/Quicksort
67 https://en.wikipedia.org/wiki/Big_O_notation
68 https://en.wikipedia.org/wiki/Analysis_of_algorithms
69 https://en.wikipedia.org/wiki/Big_O_notation
70 https://en.wikipedia.org/wiki/Complexity_class


146.3.1 Defining complexity classes

A complexity class is a set of problems of related complexity. Simpler complexity classes
are defined by the following factors:
• The type of computational problem: The most commonly used problems are decision
problems. However, complexity classes can be defined based on function problems71 ,
counting problems72 , optimization problems73 , promise problems74 , etc.
• The model of computation: The most common model of computation is the deterministic
Turing machine, but many complexity classes are based on non-deterministic Turing
machines, Boolean circuits75 , quantum Turing machines76 , monotone circuits77 , etc.
• The resource (or resources) that is being bounded and the bound: These two properties
are usually stated together, such as ”polynomial time”, ”logarithmic space”, ”constant
depth”, etc.
Some complexity classes have complicated definitions that do not fit into this framework.
Thus, a typical complexity class has a definition like the following:
The set of decision problems solvable by a deterministic Turing machine within time f(n).
(This complexity class is known as DTIME(f(n)).)
But bounding the computation time above by some concrete function f(n) often yields
complexity classes that depend on the chosen machine model. For instance, the language
{xx | x is any binary string} can be solved in linear time78 on a multi-tape Turing machine,
but necessarily requires quadratic time in the model of single-tape Turing machines. If
we allow polynomial variations in running time, the Cobham–Edmonds thesis79 states that ”the
time complexities in any two reasonable and general models of computation are polynomially
related” (Goldreich 200880 , Chapter 1.2). This forms the basis for the complexity class P81 ,
which is the set of decision problems solvable by a deterministic Turing machine within
polynomial time. The corresponding set of function problems is FP82 .
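For illustration, the language {xx | x is any binary string} has an obvious linear-time decision procedure in a random-access setting (the quadratic single-tape lower bound is a property of that machine model and is not visible in this sketch):

```python
def in_xx_language(w):
    """Decide membership in {xx | x is any binary string}: accept iff
    the input splits into two identical halves. With random access this
    is linear time; a single-tape Turing machine provably needs
    quadratic time for the same language."""
    if len(w) % 2 == 1:
        return False
    half = len(w) // 2
    return w[:half] == w[half:]
```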

71 https://en.wikipedia.org/wiki/Function_problem
72 https://en.wikipedia.org/wiki/Counting_problem_(complexity)
73 https://en.wikipedia.org/wiki/Optimization_problem
74 https://en.wikipedia.org/wiki/Promise_problem
75 https://en.wikipedia.org/wiki/Boolean_circuit
76 https://en.wikipedia.org/wiki/Quantum_Turing_machine
77 https://en.wikipedia.org/wiki/Monotone_circuit
78 https://en.wikipedia.org/wiki/Linear_time
79 https://en.wikipedia.org/wiki/Cobham%27s_thesis
80 #CITEREFGoldreich2008
81 https://en.wikipedia.org/wiki/P_(complexity)
82 https://en.wikipedia.org/wiki/FP_(complexity)


146.3.2 Important complexity classes

Figure 353 A representation of the relation among complexity classes

Many important complexity classes can be defined by bounding the time or space used
by the algorithm. Some important complexity classes of decision problems defined in this
manner are the following:
Complexity class     Model of computation               Resource constraint

Deterministic time:
DTIME83 (f(n))       Deterministic Turing machine       Time O(f(n))
P84                  Deterministic Turing machine       Time O(poly(n))
EXPTIME85            Deterministic Turing machine       Time O(2^poly(n))

Non-deterministic time:
NTIME86 (f(n))       Non-deterministic Turing machine   Time O(f(n))
NP87                 Non-deterministic Turing machine   Time O(poly(n))
NEXPTIME88           Non-deterministic Turing machine   Time O(2^poly(n))

Deterministic space:
DSPACE89 (f(n))      Deterministic Turing machine       Space O(f(n))
L90                  Deterministic Turing machine       Space O(log n)
PSPACE91             Deterministic Turing machine       Space O(poly(n))
EXPSPACE92           Deterministic Turing machine       Space O(2^poly(n))

Non-deterministic space:
NSPACE93 (f(n))      Non-deterministic Turing machine   Space O(f(n))
NL94                 Non-deterministic Turing machine   Space O(log n)
NPSPACE95            Non-deterministic Turing machine   Space O(poly(n))
NEXPSPACE96          Non-deterministic Turing machine   Space O(2^poly(n))


The logarithmic-space classes (necessarily) do not take into account the space needed to
represent the problem.
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's
theorem97 .
Other important complexity classes include BPP98 , ZPP99 and RP100 , which are defined us-
ing probabilistic Turing machines101 ; AC102 and NC103 , which are defined using Boolean cir-
cuits; and BQP104 and QMA105 , which are defined using quantum Turing machines. #P106
is an important complexity class of counting problems (not decision problems). Classes like
IP107 and AM108 are defined using Interactive proof systems109 . ALL110 is the class of all
decision problems.

146.3.3 Hierarchy theorems

Main articles: time hierarchy theorem111 and space hierarchy theorem112
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on
(say) computation time indeed defines a bigger set of problems. In particular, although
DTIME(n) is contained in DTIME(n2 ), it would be interesting to know if the inclusion is
strict. For time and space requirements, the answer to such questions is given by the time
and space hierarchy theorems respectively. They are called hierarchy theorems because they
induce a proper hierarchy on the classes defined by constraining the respective resources.
Thus there are pairs of complexity classes such that one is properly included in the other.
Having deduced such proper set inclusions, we can proceed to make quantitative statements
about how much more additional time or space is needed in order to increase the number
of problems that can be solved.
More precisely, the time hierarchy theorem113 states that
DTIME(f(n)) ⊊ DTIME(f(n) · log²(f(n))).
The space hierarchy theorem114 states that
DSPACE(f(n)) ⊊ DSPACE(f(n) · log(f(n))).

97 https://en.wikipedia.org/wiki/Savitch%27s_theorem
98 https://en.wikipedia.org/wiki/BPP_(complexity)
99 https://en.wikipedia.org/wiki/ZPP_(complexity)
100 https://en.wikipedia.org/wiki/RP_(complexity)
101 https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
102 https://en.wikipedia.org/wiki/AC_(complexity)
103 https://en.wikipedia.org/wiki/NC_(complexity)
104 https://en.wikipedia.org/wiki/BQP
105 https://en.wikipedia.org/wiki/QMA
106 https://en.wikipedia.org/wiki/Sharp-P
107 https://en.wikipedia.org/wiki/IP_(complexity)
108 https://en.wikipedia.org/wiki/AM_(complexity)
109 https://en.wikipedia.org/wiki/Interactive_proof_system
110 https://en.wikipedia.org/wiki/ALL_(complexity)
111 https://en.wikipedia.org/wiki/Time_hierarchy_theorem
112 https://en.wikipedia.org/wiki/Space_hierarchy_theorem
113 https://en.wikipedia.org/wiki/Time_hierarchy_theorem
114 https://en.wikipedia.org/wiki/Space_hierarchy_theorem


The time and space hierarchy theorems form the basis for most separation results of com-
plexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained
in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in
PSPACE.

146.3.4 Reduction

Main article: Reduction (complexity)115
Many complexity classes are defined using the
concept of a reduction. A reduction is a transformation of one problem into another problem.
It captures the informal notion of a problem being at most as difficult as another problem.
For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult
than Y, and we say that X reduces to Y. There are many different types of reductions, based
on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions,
and the bound on the complexity of reductions, such as polynomial-time reductions116 or
log-space reductions117 .
The most commonly used reduction is a polynomial-time reduction. This means that the
reduction process takes polynomial time. For example, the problem of squaring an integer
can be reduced to the problem of multiplying two integers. This means an algorithm for
multiplying two integers can be used to square an integer. Indeed, this can be done by giving
the same input to both inputs of the multiplication algorithm. Thus we see that squaring
is not more difficult than multiplication, since squaring can be reduced to multiplication.
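The reduction is short enough to write out; `multiply` here is a stand-in for an arbitrary integer-multiplication algorithm:

```python
def multiply(a, b):
    """Stand-in for any integer-multiplication algorithm; Python's
    built-in product is used purely as a placeholder."""
    return a * b

def square(a):
    """The reduction: feed the same input to both arguments of the
    multiplication algorithm."""
    return multiply(a, a)
```

Any improvement to `multiply` immediately transfers to `square`, which is the point of the reduction: squaring is at most as hard as multiplication.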
This motivates the concept of a problem being hard for a complexity class. A problem X is
hard for a class of problems C if every problem in C can be reduced to X. Thus no problem
in C is harder than X, since an algorithm for X allows us to solve any problem in C. The
notion of hard problems depends on the type of reduction being used. For complexity
classes larger than P, polynomial-time reductions are commonly used. In particular, the set
of problems that are hard for NP is the set of NP-hard118 problems.
If a problem X is in C and hard for C, then X is said to be complete119 for C. This means
that X is the hardest problem in C. (Since many problems could be equally hard, one might
say that X is one of the hardest problems in C.) Thus the class of NP-complete120 problems
contains the most difficult problems in NP, in the sense that they are the ones most likely
not to be in P. Because the problem P = NP is not solved, being able to reduce a known
NP-complete problem, Π2 , to another problem, Π1 , would indicate that there is no known
polynomial-time solution for Π1 . This is because a polynomial-time solution to Π1 would
yield a polynomial-time solution to Π2 . Similarly, because all NP problems can be reduced
to the set, finding an NP-complete121 problem that can be solved in polynomial time would
mean that P = NP.[3]

115 https://en.wikipedia.org/wiki/Reduction_(complexity)
116 https://en.wikipedia.org/wiki/Polynomial-time_reduction
117 https://en.wikipedia.org/wiki/Log-space_reduction
118 https://en.wikipedia.org/wiki/NP-hard
119 https://en.wikipedia.org/wiki/Complete_(complexity)
120 https://en.wikipedia.org/wiki/NP-complete
121 https://en.wikipedia.org/wiki/NP-complete


146.4 Important open problems

Figure 354 Diagram of complexity classes provided that P ≠ NP. The existence of
problems in NP outside both P and NP-complete in this case was established by Ladner.[4]

146.4.1 P versus NP problem

Main article: P versus NP problem122
The complexity class P is often seen as a mathematical
abstraction modeling those computational tasks that admit an efficient algorithm. This
hypothesis is called the Cobham–Edmonds thesis123 . The complexity class NP124 , on the
other hand, contains many problems that people would like to solve efficiently, but for
which no efficient algorithm is known, such as the Boolean satisfiability problem125 , the
Hamiltonian path problem126 and the vertex cover problem127 . Since deterministic Turing
machines are special non-deterministic Turing machines, it is easily observed that each
problem in P is also member of the class NP.
The question of whether P equals NP is one of the most important open questions in
theoretical computer science because of the wide implications of a solution.[3] If the answer
is yes, many important problems can be shown to have more efficient solutions. These

122 https://en.wikipedia.org/wiki/P_versus_NP_problem
123 https://en.wikipedia.org/wiki/Cobham%E2%80%93Edmonds_thesis
124 https://en.wikipedia.org/wiki/NP_(complexity)
125 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
126 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
127 https://en.wikipedia.org/wiki/Vertex_cover_problem


include various types of integer programming128 problems in operations research129 , many
problems in logistics130 , protein structure prediction131 in biology132 ,[5] and the ability to
find formal proofs of pure mathematics133 theorems.[6] The P versus NP problem is one of
the Millennium Prize Problems134 proposed by the Clay Mathematics Institute135 . There
is a US$1,000,000 prize for resolving the problem.[7]

146.4.2 Problems in NP not known to be in P or NP-complete

It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in
P nor NP-complete.[4] Such problems are called NP-intermediate136 problems. The graph
isomorphism problem137 , the discrete logarithm problem138 and the integer factorization
problem139 are examples of problems believed to be NP-intermediate. They are some of the
very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem140 is the computational problem of determining whether
two finite graphs141 are isomorphic142 . An important unsolved problem in complexity theory
is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The
answer is not known, but it is believed that the problem is at least not NP-complete.[8] If
graph isomorphism is NP-complete, the polynomial time hierarchy143 collapses to its second
level.[9] Since it is widely believed that the polynomial hierarchy does not collapse to any
finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm
for this problem, due to László Babai144 and Eugene Luks145 , has run time O(2^√(n log n)) for
graphs with n vertices, although some recent work by Babai offers some potentially new
perspectives on this.[10]
The integer factorization problem146 is the computational problem of determining the prime
factorization147 of a given integer. Phrased as a decision problem, it is the problem of de-
ciding whether the input has a prime factor less than k. No efficient integer factorization
algorithm is known, and this fact forms the basis of several modern cryptographic sys-
tems, such as the RSA148 algorithm. The integer factorization problem is in NP and in

128 https://en.wikipedia.org/wiki/Integer_programming
129 https://en.wikipedia.org/wiki/Operations_research
130 https://en.wikipedia.org/wiki/Logistics
131 https://en.wikipedia.org/wiki/Protein_structure_prediction
132 https://en.wikipedia.org/wiki/Biology
133 https://en.wikipedia.org/wiki/Pure_mathematics
134 https://en.wikipedia.org/wiki/Millennium_Prize_Problems
135 https://en.wikipedia.org/wiki/Clay_Mathematics_Institute
136 https://en.wikipedia.org/wiki/NP-intermediate
137 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
138 https://en.wikipedia.org/wiki/Discrete_logarithm_problem
139 https://en.wikipedia.org/wiki/Integer_factorization_problem
140 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
141 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
142 https://en.wikipedia.org/wiki/Graph_isomorphism
143 https://en.wikipedia.org/wiki/Polynomial_time_hierarchy
144 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
145 https://en.wikipedia.org/wiki/Eugene_Luks
146 https://en.wikipedia.org/wiki/Integer_factorization_problem
147 https://en.wikipedia.org/wiki/Prime_factorization
148 https://en.wikipedia.org/wiki/RSA_(algorithm)


co-NP (and even in UP and co-UP[11] ). If the problem is NP-complete, the polyno-
mial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best
known algorithm for integer factorization is the general number field sieve149 , which takes
time O(e^(∛(64/9) · (log n)^(1/3) · (log log n)^(2/3)))[12] to factor an odd integer n. However, the best known
quantum algorithm 150 for this problem, Shor's algorithm151 , does run in polynomial time.
Unfortunately, this fact doesn't say much about where the problem lies with respect to
non-quantum complexity classes.
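The decision version can be sketched with trial division (which is exponential in the bit-length of n, so this only illustrates the problem statement, not an efficient algorithm):

```python
def has_prime_factor_below(n, k):
    """Decide whether n has a prime factor less than k, by trial division.

    The loop finds the SMALLEST prime factor of n; if even that is >= k,
    no prime factor is below k. If no divisor is found, n itself is prime
    (for n > 1) and is its only prime factor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d < k
        d += 1
    return n > 1 and n < k
```

Binary search over k with this decider recovers the smallest prime factor, illustrating again how a function problem reduces to its decision version.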

146.4.3 Separations between other complexity classes

Many known complexity classes are suspected to be unequal, but this has not been proved.
For instance P ⊆ NP ⊆ PP152 ⊆ PSPACE, but it is possible that P = PSPACE. If P is
not equal to NP, then P is not equal to PSPACE either. Since there are many known
complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH,
etc., it is possible that all these complexity classes collapse to one class. Proving that any
of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines, co-NP153 is the class containing the complement154 problems (i.e.
problems with the yes/no answers reversed) of NP problems. It is believed[13] that NP is
not equal to co-NP; however, it has not yet been proven. It is clear that if these two
complexity classes are not equal, then P is not equal to NP, since P = co-P. Thus if P = NP
we would have co-P = co-NP, whence NP = P = co-P = co-NP.
Similarly, it is not known if L (the set of all problems that can be solved in logarithmic
space) is strictly contained in P or equal to P. Again, there are many complexity classes
between the two, such as NL and NC, and it is not known if they are distinct or equal
classes.
It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP.

146.5 Intractability

See also: Combinatorial explosion155

Look up tractable156 , feasible157 , intractability158 , or infeasible159 in Wiktionary,
the free dictionary.

149 https://en.wikipedia.org/wiki/General_number_field_sieve
150 https://en.wikipedia.org/wiki/Quantum_algorithm
151 https://en.wikipedia.org/wiki/Shor%27s_algorithm
152 https://en.wikipedia.org/wiki/PP_(complexity)
153 https://en.wikipedia.org/wiki/Co-NP
154 https://en.wikipedia.org/wiki/Complement_(complexity)
155 https://en.wikipedia.org/wiki/Combinatorial_explosion
156 https://en.wiktionary.org/wiki/tractable
157 https://en.wiktionary.org/wiki/feasible
158 https://en.wiktionary.org/wiki/intractability
159 https://en.wiktionary.org/wiki/infeasible


A problem that can be solved in theory (e.g. given large but finite resources, especially time),
but for which in practice any solution takes too many resources to be useful, is known as
an intractable problem.[14] Conversely, a problem that can be solved in practice is called
a tractable problem, literally ”a problem that can be handled”. The term infeasible160
(literally ”cannot be done”) is sometimes used interchangeably with intractable161 ,[15] though
this risks confusion with a feasible solution162 in mathematical optimization163 .[16]
Tractable problems are frequently identified with problems that have polynomial-time so-
lutions (P, PTIME); this is known as the Cobham–Edmonds thesis164 . Problems that are
known to be intractable in this sense include those that are EXPTIME165 -hard. If NP is
not the same as P, then NP-hard166 problems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large degree or
large leading coefficient grows quickly, and may be impractical for practical size problems;
conversely, an exponential-time solution that grows slowly may be practical on realistic
input, or a solution that takes a long time in the worst case may take a short time in most
cases or the average case, and thus still be practical. Saying that a problem is not in P does
not imply that all large cases of the problem are hard or even that most of them are. For
example, the decision problem in Presburger arithmetic167 has been shown not to be in P,
yet algorithms have been written that solve the problem in reasonable times in most cases.
Similarly, algorithms can solve the NP-complete knapsack problem168 over a wide range of
sizes in less than quadratic time and SAT solvers169 routinely handle large instances of the
NP-complete Boolean satisfiability problem170 .
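The knapsack case illustrates why NP-completeness does not preclude practical algorithms: the standard dynamic-programming solution runs in O(nW) time, where W is the capacity. This is pseudo-polynomial (polynomial in the value of W, not in its bit length), yet fast on typical instances. A minimal sketch in Python (the function name and sample data are illustrative):

```python
def knapsack_max_value(items, capacity):
    """0/1 knapsack via dynamic programming.

    items: list of (weight, value) pairs; capacity: integer weight limit.
    Runs in O(len(items) * capacity) time -- pseudo-polynomial, since
    capacity can be exponential in the bit length of the input.
    """
    best = [0] * (capacity + 1)  # best[w] = max value within weight budget w
    for weight, value in items:
        # iterate budgets downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack_max_value([(3, 4), (4, 5), (2, 3)], 6))  # -> 8 (take weights 4 and 2)
```

The catch, and the reason this does not contradict NP-completeness, is that W can be exponentially large relative to the number of bits needed to write it down.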
To see why exponential-time algorithms are generally unusable in practice, consider a pro-
gram that makes 2^n operations before halting. For small n, say 100, and assuming for the
sake of example that the computer does 10^12 operations each second, the program would
run for about 4 × 10^10 years, which is the same order of magnitude as the age of the uni-
verse171. Even with a much faster computer, the program would only be useful for very small
instances and in that sense the intractability of a problem is somewhat independent of tech-
nological progress. However, an exponential-time algorithm that takes 1.0001^n operations
is practical until n gets relatively large.
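The arithmetic behind these estimates is easy to reproduce. The sketch below assumes, as in the text, a machine performing 10^12 operations per second:

```python
OPS_PER_SECOND = 10**12
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 3.16 * 10^7

def years_to_run(total_ops):
    """Wall-clock years needed to execute total_ops primitive operations."""
    return total_ops / OPS_PER_SECOND / SECONDS_PER_YEAR

print(years_to_run(2**100))                  # about 4e10 years: 2^n at n = 100
print(years_to_run(round(1.0001**100_000)))  # negligible: 1.0001^n stays modest much longer
print(years_to_run(100**15))                 # about 3e10 years: even polynomial n^15 is hopeless at n = 100
```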
Similarly, a polynomial time algorithm is not always practical. If its running time is,
say, n^15, it is unreasonable to consider it efficient and it is still useless except on small
instances. Indeed, in practice even n^3 or n^2 algorithms are often impractical on realistic
sizes of problems.

160 https://en.wiktionary.org/wiki/infeasible
161 https://en.wiktionary.org/wiki/intractable
162 https://en.wikipedia.org/wiki/Feasible_solution
163 https://en.wikipedia.org/wiki/Mathematical_optimization
164 https://en.wikipedia.org/wiki/Cobham%E2%80%93Edmonds_thesis
165 https://en.wikipedia.org/wiki/EXPTIME
166 https://en.wikipedia.org/wiki/NP-hard
167 https://en.wikipedia.org/wiki/Presburger_arithmetic
168 https://en.wikipedia.org/wiki/Knapsack_problem
169 https://en.wikipedia.org/wiki/SAT_solver
170 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
171 https://en.wikipedia.org/wiki/Age_of_the_universe


146.6 Continuous complexity theory

Continuous complexity theory can refer to complexity theory of problems that involve con-
tinuous functions that are approximated by discretizations, as studied in numerical anal-
ysis172 . One approach to complexity theory of numerical analysis[17] is information based
complexity173 .
Continuous complexity theory can also refer to complexity theory of the use of analog com-
putation174 , which uses continuous dynamical systems175 and differential equations176 .[18]
Control theory177 can be considered a form of computation and differential equations are
used in the modelling of continuous-time and hybrid discrete-continuous-time systems.[19]

146.7 History

An early example of algorithm complexity analysis is the running time analysis of the
Euclidean algorithm178 done by Gabriel Lamé179 in 1844.
Before the actual research explicitly devoted to the complexity of algorithmic problems
started off, numerous foundations were laid out by various researchers. Most influential
among these was the definition of Turing machines by Alan Turing180 in 1936, which turned
out to be a very robust and flexible simplification of a computer.
The beginning of systematic studies in computational complexity is attributed to the sem-
inal 1965 paper ”On the Computational Complexity of Algorithms” by Juris Hartmanis181
and Richard E. Stearns182 , which laid out the definitions of time complexity183 and space
complexity184, and proved the hierarchy theorems.[20] In addition, in 1965 Edmonds185 suggested
considering a ”good” algorithm to be one with running time bounded by a polynomial
of the input size.[21]
Earlier papers studying problems solvable by Turing machines with specific bounded re-
sources include[20] John Myhill186 's definition of linear bounded automata187 (Myhill 1960),
Raymond Smullyan188 's study of rudimentary sets (1961), as well as Hisao Yamada189 's

172 https://en.wikipedia.org/wiki/Numerical_analysis
173 https://en.wikipedia.org/wiki/Information_based_complexity
174 https://en.wikipedia.org/wiki/Analog_computation
175 https://en.wikipedia.org/wiki/Dynamical_system
176 https://en.wikipedia.org/wiki/Differential_equation
177 https://en.wikipedia.org/wiki/Control_theory
178 https://en.wikipedia.org/wiki/Euclidean_algorithm
179 https://en.wikipedia.org/wiki/Gabriel_Lam%C3%A9
180 https://en.wikipedia.org/wiki/Alan_Turing
181 https://en.wikipedia.org/wiki/Juris_Hartmanis
182 https://en.wikipedia.org/wiki/Richard_E._Stearns
183 https://en.wikipedia.org/wiki/Time_complexity
184 https://en.wikipedia.org/wiki/Space_complexity
185 https://en.wikipedia.org/wiki/Jack_Edmonds
186 https://en.wikipedia.org/wiki/John_Myhill
187 https://en.wikipedia.org/wiki/Linear_bounded_automata
188 https://en.wikipedia.org/wiki/Raymond_Smullyan
189 https://en.wikipedia.org/wiki/Hisao_Yamada


paper[22] on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot190 (1956),
a pioneer in the field from the USSR, studied another specific complexity measure.[23] As
he remembers:
However, [my] initial interest [in automata theory] was increasingly set aside in favor of
computational complexity, an exciting fusion of combinatorial methods, inherited from
switching theory191 , with the conceptual arsenal of the theory of algorithms. These ideas
had occurred to me earlier in 1955 when I coined the term ”signalizing function”, which
is nowadays commonly known as ”complexity measure”.[24]
In 1967, Manuel Blum192 formulated a set of axioms193 (now known as Blum axioms194 )
specifying desirable properties of complexity measures on the set of computable functions
and proved an important result, the so-called speed-up theorem195 . The field began to
flourish in 1971 when Stephen Cook196 and Leonid Levin197 proved198 the existence
of practically relevant problems that are NP-complete199 . In 1972, Richard Karp200 took
this idea a leap forward with his landmark paper, ”Reducibility Among Combinatorial
Problems”, in which he showed that 21 diverse combinatorial201 and graph theoretical202
problems, each infamous for its computational intractability, are NP-complete.[25]
In the 1980s, much work was done on the average difficulty of solving NP-complete
problems—both exactly and approximately. At that time, computational complexity theory
was at its height, and it was widely believed that if a problem turned out to be NP-complete,
then there was little chance of being able to work with the problem in a practical situation.
However, it became increasingly clear that this is not always the case,[citation needed] and
some authors claimed that general asymptotic results are often unimportant for typical
problems arising in practice.[26]

146.8 See also


• Context of computational complexity204
• Descriptive complexity theory205
• Game complexity206
• Leaf language207

190 https://en.wikipedia.org/wiki/Boris_Trakhtenbrot
191 https://en.wikipedia.org/wiki/Switching_theory
192 https://en.wikipedia.org/wiki/Manuel_Blum
193 https://en.wikipedia.org/wiki/Axiom
194 https://en.wikipedia.org/wiki/Blum_axioms
195 https://en.wikipedia.org/wiki/Blum%27s_speedup_theorem
196 https://en.wikipedia.org/wiki/Stephen_Cook
197 https://en.wikipedia.org/wiki/Leonid_Levin
198 https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem
199 https://en.wikipedia.org/wiki/NP-complete
200 https://en.wikipedia.org/wiki/Richard_Karp
201 https://en.wikipedia.org/wiki/Combinatorics
202 https://en.wikipedia.org/wiki/Graph_theory
204 https://en.wikipedia.org/wiki/Context_of_computational_complexity
205 https://en.wikipedia.org/wiki/Descriptive_complexity_theory
206 https://en.wikipedia.org/wiki/Game_complexity
207 https://en.wikipedia.org/wiki/Leaf_language


• List of complexity classes208


• List of computability and complexity topics209
• List of important publications in theoretical computer science210
• List of unsolved problems in computer science211
• Parameterized complexity212
• Proof complexity213
• Quantum complexity theory214
• Structural complexity theory215
• Transcomputational problem216
• Computational complexity of mathematical operations217

146.9 Works on Complexity


• W, S; D, F A., . (2020), Unravelling Complex-
ity: The Life and Work of Gregory Chaitin, World Scientific, doi218 :10.1142/11270219 ,
ISBN220 978-981-12-0006-9221

146.10 References

146.10.1 Citations
1. ”P  NP P | C M I”222 . www.claymath.org.
2. See Arora & Barak 2009223 , Chapter 1: The computational model and why it doesn't
matter
3. See Sipser 2006224 , Chapter 7: Time complexity
4. L, R E. (1975), ”O    -
  ”, Journal of the ACM225 , 22 (1): 151–171,
226
doi :10.1145/321864.321877 . 227

208 https://en.wikipedia.org/wiki/List_of_complexity_classes
209 https://en.wikipedia.org/wiki/List_of_computability_and_complexity_topics
210 https://en.wikipedia.org/wiki/List_of_important_publications_in_theoretical_computer_science
211 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
212 https://en.wikipedia.org/wiki/Parameterized_complexity
213 https://en.wikipedia.org/wiki/Proof_complexity
214 https://en.wikipedia.org/wiki/Quantum_complexity_theory
215 https://en.wikipedia.org/wiki/Structural_complexity_theory
216 https://en.wikipedia.org/wiki/Transcomputational_problem
217 https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
218 https://en.wikipedia.org/wiki/Doi_(identifier)
219 https://doi.org/10.1142%2F11270
220 https://en.wikipedia.org/wiki/ISBN_(identifier)
221 https://en.wikipedia.org/wiki/Special:BookSources/978-981-12-0006-9
222 http://www.claymath.org/millennium-problems/p-vs-np-problem
223 #CITEREFAroraBarak2009
224 #CITEREFSipser2006
225 https://en.wikipedia.org/wiki/Journal_of_the_ACM
226 https://en.wikipedia.org/wiki/Doi_(identifier)
227 https://doi.org/10.1145%2F321864.321877


5. B, B A.228 ; L, T229 (1998), ”P  


 - (HP)   NP-”, Jour-
nal of Computational Biology, 5 (1): 27–40, CiteSeerX230 10.1.1.139.5547231 ,
doi232 :10.1089/cmb.1998.5.27233 , PMID234 9541869235 .
6. C, S236 (A 2000), The P versus NP Problem237 (PDF), C
M I238 ,    239 (PDF)  D-
 12, 2010,  O 18, 2006.
7. J, A M.240 (2006), ”T M G C  M-
”241 (PDF), Notices of the AMS, 53 (6), retrieved October 18, 2006.
8. A, V; K, P P. (2006), ”G    SPP”,
Information and Computation, 204 (5): 835–852, doi242 :10.1016/j.ic.2006.02.002243 .
9. S, U244 (1987). Graph isomorphism is in the low hierarchy. Proceed-
ings of the 4th Annual Symposium on Theoretical Aspects of Computer Science. Lec-
ture Notes in Computer Science. 1987. pp. 114–124. doi245 :10.1007/bfb0039599246 .
ISBN247 978-3-540-17219-2248 .
10. B, L (2016). ”G I  Q T”.
X249 :1512.03547250 [.DS251 ].
11. Lance Fortnow252 . Computational Complexity Blog: Complexity Class of the Week:
Factoring. September 13, 2002. 253
12. Wolfram MathWorld: Number Field Sieve254
13. Boaz Barak's course on Computational Complexity255 Lecture 2256

228 https://en.wikipedia.org/wiki/Bonnie_Berger
229 https://en.wikipedia.org/wiki/F._Thomson_Leighton
230 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
231 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.139.5547
232 https://en.wikipedia.org/wiki/Doi_(identifier)
233 https://doi.org/10.1089%2Fcmb.1998.5.27
234 https://en.wikipedia.org/wiki/PMID_(identifier)
235 http://pubmed.ncbi.nlm.nih.gov/9541869
236 https://en.wikipedia.org/wiki/Stephen_Cook
237 https://web.archive.org/web/20101212035424/http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf
238 https://en.wikipedia.org/wiki/Clay_Mathematics_Institute
239 http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf
240 https://en.wikipedia.org/wiki/Arthur_Jaffe
241 http://www.ams.org/notices/200606/fea-jaffe.pdf
242 https://en.wikipedia.org/wiki/Doi_(identifier)
243 https://doi.org/10.1016%2Fj.ic.2006.02.002
244 https://en.wikipedia.org/wiki/Uwe_Sch%C3%B6ning
245 https://en.wikipedia.org/wiki/Doi_(identifier)
246 https://doi.org/10.1007%2Fbfb0039599
247 https://en.wikipedia.org/wiki/ISBN_(identifier)
248 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-17219-2
249 https://en.wikipedia.org/wiki/ArXiv_(identifier)
250 http://arxiv.org/abs/1512.03547
251 http://arxiv.org/archive/cs.DS
252 https://en.wikipedia.org/wiki/Lance_Fortnow
253 http://weblog.fortnow.com/2002/09/complexity-class-of-week-factoring.html
254 http://mathworld.wolfram.com/NumberFieldSieve.html
255 http://www.cs.princeton.edu/courses/archive/spr06/cos522/
256 http://www.cs.princeton.edu/courses/archive/spr06/cos522/lec2.pdf


14. Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2007) Introduction to Automata The-
ory, Languages, and Computation257 , Addison Wesley, Boston/San Francisco/New
York (page 368)
15. M, G (2014). Algorithms and Complexity. p. p. 4258 . ISBN259 978-
0-08093391-7260 .
16. Z, J (2015). Writing for Computer Science. p. 132261 . ISBN262 978-1-
44716639-9263 .
17. S, S (1997). ”C T  N A”. Acta
Numerica. Cambridge Univ Press. 6: 523–551. Bibcode264 :1997AcNum...6..523S265 .
doi266 :10.1017/s0962492900002774267 . CiteSeerx268 : 10.1.1.33.4678269 .
18. B, L; C, M (2009). ”A S  C
T C”. X270 :0907.3117271 [.CC272 ].
19. T, C J.; M, I; B, A M.; O,
M (J 2003). ”C T   V-
  H S”. Proceedings of the IEEE. 91 (7): 986–1001.
doi273 :10.1109/jproc.2003.814621274 . CiteSeerx275 : 10.1.1.70.4296276 .
20. Fortnow & Homer (2003)277
21. Richard M. Karp, ”Combinatorics, Complexity, and Randomness278 ”, 1985 Turing
Award Lecture
22. Y, H. (1962). ”R-T C  R F N
R-T C”. IEEE Transactions on Electronic Computers. EC-11 (6):
753–760. doi279 :10.1109/TEC.1962.5219459280 .
23. Trakhtenbrot, B.A.: Signalizing functions and tabular operators. Uchionnye Zapiski
Penzenskogo Pedinstituta (Transactions of the Penza Pedagogical Institute) 4, 75–87
(1956) (in Russian)

257 https://en.wikipedia.org/wiki/Introduction_to_Automata_Theory,_Languages,_and_Computation
258 https://books.google.com/books?id=6WriBQAAQBAJ&pg=PA4&dq=computational+feasibility+tractability
259 https://en.wikipedia.org/wiki/ISBN_(identifier)
260 https://en.wikipedia.org/wiki/Special:BookSources/978-0-08093391-7
261 https://books.google.com/books?id=LWCYBgAAQBAJ&pg=PA132&dq=intractable+infeasible
262 https://en.wikipedia.org/wiki/ISBN_(identifier)
263 https://en.wikipedia.org/wiki/Special:BookSources/978-1-44716639-9
264 https://en.wikipedia.org/wiki/Bibcode_(identifier)
265 https://ui.adsabs.harvard.edu/abs/1997AcNum...6..523S
266 https://en.wikipedia.org/wiki/Doi_(identifier)
267 https://doi.org/10.1017%2Fs0962492900002774
268 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
269 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.4678
270 https://en.wikipedia.org/wiki/ArXiv_(identifier)
271 http://arxiv.org/abs/0907.3117
272 http://arxiv.org/archive/cs.CC
273 https://en.wikipedia.org/wiki/Doi_(identifier)
274 https://doi.org/10.1109%2Fjproc.2003.814621
275 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
276 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.4296
277 #CITEREFFortnowHomer2003
278 http://cecas.clemson.edu/~shierd/Shier/MthSc816/turing-karp.pdf
279 https://en.wikipedia.org/wiki/Doi_(identifier)
280 https://doi.org/10.1109%2FTEC.1962.5219459


24. Boris Trakhtenbrot, ”From Logic to Theoretical Computer Science – An Update281”.
In: Pillars of Computer Science, LNCS 4800, Springer 2008.
25. R M. K (1972), ”R A C P-
”282 (PDF),  R. E. M; J. W. T (.), Complexity of Computer
Computations, New York: Plenum, pp. 85–103
26. W, S (2002). A New Kind of Science283 . W M, I.
. 1143284 . ISBN285 978-1-57955-008-0286 .

146.10.2 Textbooks
• A, S287 ; B, B (2009), Computational Complexity: A
Modern Approach288 , C U P, ISBN289 978-0-521-42426-4290 ,
Z291 1193.68112292
• D, R; F, M293 (1999), Parameterized complexity294 ,
M  C S, B, N Y: S-V,
ISBN295 9780387948836296
• D, D-Z; K, K-I (2000), Theory of Computational Complexity, John Wiley
& Sons, ISBN297 978-0-471-34506-0298
• G, M R.299 ; J, D S.300 (1979), Computers and Intractability:
A Guide to the Theory of NP-Completeness301 , W. H. F302 , ISBN303 0-7167-1045-
5304
• G, O305 (2008), Computational Complexity: A Conceptual Perspective306 ,
C U P

281 https://books.google.com/books?id=GFX2qiLuRAMC&pg=PA1&dq=%22From+Logic+to+Theoretical+Computer+Science+%E2%80%93+An+Update%22&hl=en&sa=X&ved=0ahUKEwivkOPkt-TjAhVHRqwKHUNnAekQ6AEIKjAA#v=onepage&q=%22From%20Logic%20to%20Theoretical%20Computer%20Science%20%E2%80%93%20An%20Update%22&f=false
282 http://www.cs.berkeley.edu/~luca/cs172/karp.pdf
283 https://archive.org/details/newkindofscience00wolf/page/1143
284 https://archive.org/details/newkindofscience00wolf/page/1143
285 https://en.wikipedia.org/wiki/ISBN_(identifier)
286 https://en.wikipedia.org/wiki/Special:BookSources/978-1-57955-008-0
287 https://en.wikipedia.org/wiki/Sanjeev_Arora
288 http://www.cs.princeton.edu/theory/complexity/
289 https://en.wikipedia.org/wiki/ISBN_(identifier)
290 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-42426-4
291 https://en.wikipedia.org/wiki/Zbl_(identifier)
292 http://zbmath.org/?format=complete&q=an:1193.68112
293 https://en.wikipedia.org/wiki/Michael_Fellows
294 https://www.springer.com/sgw/cda/frontpage/0,11855,5-0-22-1519914-0,00.html
295 https://en.wikipedia.org/wiki/ISBN_(identifier)
296 https://en.wikipedia.org/wiki/Special:BookSources/9780387948836
297 https://en.wikipedia.org/wiki/ISBN_(identifier)
298 https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-34506-0
299 https://en.wikipedia.org/wiki/Michael_Garey
300 https://en.wikipedia.org/wiki/David_S._Johnson
301 https://en.wikipedia.org/wiki/Computers_and_Intractability
302 https://en.wikipedia.org/wiki/W._H._Freeman_and_Company
303 https://en.wikipedia.org/wiki/ISBN_(identifier)
304 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-1045-5
305 https://en.wikipedia.org/wiki/Oded_Goldreich
306 http://www.wisdom.weizmann.ac.il/~oded/cc-book.html


•  L, J307 , . (1990), Handbook of theoretical computer science (vol. A):
algorithms and complexity, MIT Press, ISBN308 978-0-444-88071-0309
• P, C310 (1994), Computational Complexity (1st ed.), Addison
Wesley, ISBN311 978-0-201-53082-7312
• S, M313 (2006), Introduction to the Theory of Computation314 (2 .),
USA: T C T, ISBN315 978-0-534-95097-2316

146.10.3 Surveys
• K, H; U, D317 (1976), ”A R  C S  C-
  A  P D E”318 , Proceedings of
the Annual Conference on - ACM 76, ACM '76: 197–201, doi319 :10.1145/800191.805573320
• C, S321 (1983), ”A    -
”322 (PDF), Commun. ACM, 26 (6): 400–408, doi323 :10.1145/358141.358144324 ,
ISSN325 0001-0782326
• F, L; H, S (2003), ”A S H  C
C”327 (PDF), Bulletin of the EATCS, 80: 95–133
• M, S (2002), ”C C  P”,
Computing in Science and Eng., 4 (3): 31–47, arXiv328 :cond-mat/0012185329 , Bib-
code330 :2002CSE.....4c..31M331 , doi332 :10.1109/5992.998639333 , ISSN334 1521-9615335

307 https://en.wikipedia.org/wiki/Jan_van_Leeuwen
308 https://en.wikipedia.org/wiki/ISBN_(identifier)
309 https://en.wikipedia.org/wiki/Special:BookSources/978-0-444-88071-0
310 https://en.wikipedia.org/wiki/Christos_Papadimitriou
311 https://en.wikipedia.org/wiki/ISBN_(identifier)
312 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-53082-7
313 https://en.wikipedia.org/wiki/Michael_Sipser
314 https://en.wikipedia.org/wiki/Introduction_to_the_Theory_of_Computation
315 https://en.wikipedia.org/wiki/ISBN_(identifier)
316 https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-95097-2
317 https://en.wikipedia.org/wiki/Dana_Ulery
318 http://portal.acm.org/citation.cfm?id=800191.805573
319 https://en.wikipedia.org/wiki/Doi_(identifier)
320 https://doi.org/10.1145%2F800191.805573
321 https://en.wikipedia.org/wiki/Stephen_Cook
322 http://www.europrog.ru/paper/sc1982e.pdf
323 https://en.wikipedia.org/wiki/Doi_(identifier)
324 https://doi.org/10.1145%2F358141.358144
325 https://en.wikipedia.org/wiki/ISSN_(identifier)
326 http://www.worldcat.org/issn/0001-0782
327 http://people.cs.uchicago.edu/~fortnow/papers/history.pdf
328 https://en.wikipedia.org/wiki/ArXiv_(identifier)
329 http://arxiv.org/abs/cond-mat/0012185
330 https://en.wikipedia.org/wiki/Bibcode_(identifier)
331 https://ui.adsabs.harvard.edu/abs/2002CSE.....4c..31M
332 https://en.wikipedia.org/wiki/Doi_(identifier)
333 https://doi.org/10.1109%2F5992.998639
334 https://en.wikipedia.org/wiki/ISSN_(identifier)
335 http://www.worldcat.org/issn/1521-9615


146.11 External links

Wikimedia Commons has media related to Computational complexity theory336.

• The Complexity Zoo337


• H, M338 , . (2001) [1994], ”C 
” , Encyclopedia of Mathematics340 , S S+B M
339

B.V. / K A P, ISBN341 978-1-55608-010-4342


• What are the most important results (and papers) in complexity theory that every one
should know?343
• Scott Aaronson: Why Philosophers Should Care About Computational Complexity344


336 https://commons.wikimedia.org/wiki/Category:Computational_complexity_theory
337 https://complexityzoo.uwaterloo.ca/Complexity_Zoo
338 https://en.wikipedia.org/wiki/Michiel_Hazewinkel
339 https://www.encyclopediaofmath.org/index.php?title=p/c130160
340 https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics
341 https://en.wikipedia.org/wiki/ISBN_(identifier)
342 https://en.wikipedia.org/wiki/Special:BookSources/978-1-55608-010-4
343 https://mathoverflow.net/q/34487
344 https://www.scottaaronson.com/papers/philos.pdf

147 Complexity class

Figure 355: A representation of the relationships between several important complexity classes


In computational complexity theory1, a complexity class is a set of problems2 of related
resource-based complexity3. The two most common resources considered are time4 and
memory5.
In general, a complexity class is defined in terms of a type of computational problem, a
model of computation6 , and a bounded resource like time7 or memory8 . In particular,
most complexity classes concern decision problems9 solved by a Turing machine10 , and are
differentiated by their time or space (memory) requirements. For instance, the class P11
is the class of decision problems solvable by a deterministic Turing machine in polynomial
time12 . There are, however, many complexity classes defined in terms of other types of
problems (e.g. counting problems13 and function problems14 ) and complexity classes defined
using other models of computation (e.g. probabilistic Turing machines15 , interactive proof
systems16 , Boolean circuits17 , and quantum computers18 ).
The study of the relationships between complexity classes is a major area of research in
theoretical computer science. There are often general hierarchies of complexity classes; for
example, it is known that the basic time and space complexity classes relate to each other
in the following way: NL19 ⊆ P20 ⊆ NP21 ⊆ PSPACE22 ⊆ EXPTIME23 ⊆ EXPSPACE24.
However, many relationships are not yet known; for example, one of the most famous open
problems in computer science concerns whether or not P equals NP25 . The relationships
between classes often answer questions about the fundamental nature of computation. For
example, the P versus NP problem is directly related to questions of whether nondetermin-
ism26 adds any computational power to computers and whether problems whose solution
can be quickly checked for correctness can also be quickly solved.
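The ”quickly checked” half of that question is the easy one. For an NP problem such as subset sum, a proposed certificate can be verified in time linear in its length, while the obvious search explores exponentially many subsets. A sketch (the problem choice and names are illustrative):

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Check a claimed certificate in time linear in its length.

    (Multiset subtleties are ignored for brevity.)
    """
    return all(x in numbers for x in certificate) and sum(certificate) == target

def search(numbers, target):
    """Brute force: examines up to 2^len(numbers) candidate subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = search(nums, 9)
print(cert, verify(nums, 9, cert))  # -> [4, 5] True
```

Whether every problem with such fast verification also admits fast search is exactly the P versus NP question.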

1 https://en.wikipedia.org/wiki/Computational_complexity_theory
2 https://en.wikipedia.org/wiki/Computational_problem
3 https://en.wikipedia.org/wiki/Computational_complexity
4 https://en.wikipedia.org/wiki/Time_complexity
5 https://en.wikipedia.org/wiki/Space_complexity
6 https://en.wikipedia.org/wiki/Model_of_computation
7 https://en.wikipedia.org/wiki/Time_complexity
8 https://en.wikipedia.org/wiki/Space_complexity
9 https://en.wikipedia.org/wiki/Decision_problem
10 https://en.wikipedia.org/wiki/Turing_machine
11 https://en.wikipedia.org/wiki/P_(complexity)
12 https://en.wikipedia.org/wiki/Polynomial_time
13 https://en.wikipedia.org/wiki/Counting_problem_(complexity)
14 https://en.wikipedia.org/wiki/Function_problem
15 https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
16 https://en.wikipedia.org/wiki/Interactive_proof_system
17 https://en.wikipedia.org/wiki/Boolean_circuit
18 https://en.wikipedia.org/wiki/Quantum_computer
19 https://en.wikipedia.org/wiki/NL_(complexity)
20 https://en.wikipedia.org/wiki/P_(complexity)
21 https://en.wikipedia.org/wiki/NP_(complexity)
22 https://en.wikipedia.org/wiki/PSPACE
23 https://en.wikipedia.org/wiki/EXPTIME
24 https://en.wikipedia.org/wiki/EXPSPACE
25 https://en.wikipedia.org/wiki/P_versus_NP
26 https://en.wikipedia.org/wiki/Nondeterministic_algorithm


147.1 Background

Complexity classes are concerned with the rate of growth of the resources required as
the input size n increases. This is an abstract measurement: it does not give time or space
requirements in terms of seconds or bytes, which would require knowledge of implementation
specifics. The function inside the O(...) expression could be a constant, for algorithms27
that are unaffected by the size of n; an expression involving a logarithm28; an expression
involving a power of n, i.e. a polynomial29 expression; and many others. The O is read as
”order of...”. For the purposes of computational complexity theory, some of the details of the
function can be ignored; for instance, many possible polynomials can be grouped together
as a class.
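The reason many polynomials can be grouped into one class is that constant factors and lower-order terms stop mattering as n grows: the ratio of a messy polynomial to its dominant power tends to a constant. A quick numerical check (the functions are chosen purely for illustration):

```python
def ratio(f, g, n):
    """How many times larger f(n) is than g(n)."""
    return f(n) / g(n)

f = lambda n: 5 * n**2 + 40 * n + 7  # an arbitrary quadratic with clutter
g = lambda n: n**2                   # the representative growth rate, n^2

for n in (10, 1_000, 100_000):
    print(n, ratio(f, g, n))  # approaches the constant 5 as n grows
```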
The resource in question can either be time, essentially the number of primitive operations
on an abstract machine, or (storage) space. For example, the class NP30 is the set of decision
problems31 whose solutions can be determined by a non-deterministic Turing machine32 in
polynomial time33 , while the class PSPACE34 is the set of decision problems that can be
solved by a deterministic Turing machine35 in polynomial space36 .

147.1.1 Characterization

The simplest complexity classes are defined by the type of computational problem, the
model of computation, and the resource (or resources) that are being bounded and the
bounds. The resource and bounds are usually stated together, such as ”polynomial time”,
”logarithmic space”, ”constant depth”, etc.
Many complexity classes can be characterized in terms of the mathematical logic37 needed
to express them; see descriptive complexity38 .

27 https://en.wikipedia.org/wiki/Algorithm
28 https://en.wikipedia.org/wiki/Logarithm
29 https://en.wikipedia.org/wiki/Polynomial
30 https://en.wikipedia.org/wiki/NP_(complexity)
31 https://en.wikipedia.org/wiki/Decision_problem
32 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
33 https://en.wikipedia.org/wiki/Polynomial_time
34 https://en.wikipedia.org/wiki/PSPACE
35 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
36 https://en.wikipedia.org/wiki/Polynomial_space
37 https://en.wikipedia.org/wiki/Mathematical_logic
38 https://en.wikipedia.org/wiki/Descriptive_complexity


Computational problem

The most commonly used problems are decision problems39 . However, complexity classes
can be defined based on function problems40 (an example is FP41 ), counting problems42
(e.g. #P43 ), optimization problems44 , promise problems45 , etc.

Model of computation

The most common model of computation is the deterministic Turing machine46 , but many
complexity classes are based on nondeterministic Turing machines47 , boolean circuits48 ,
quantum Turing machines49 , monotone circuits50 , etc.

Resource bounds

Bounding the computation time above by some concrete function f(n) often yields com-
plexity classes that depend on the chosen machine model. For instance, the language {xx |
x is any binary string} can be solved in linear time51 on a multi-tape Turing machine, but
necessarily requires quadratic time in the model of single-tape Turing machines. If we allow
polynomial variations in running time, the Cobham–Edmonds thesis52 states that ”the time
complexities in any two reasonable and general models of computation are polynomially
related” (Goldreich 200853, Chapter 1.2). This forms the basis for the complexity class P55, which is the set of decision
problems solvable by a deterministic Turing machine within polynomial time. The corre-
sponding set of function problems is FP56 .
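On a random-access machine, the language {xx | x is any binary string} from the example above is decided by a single comparison of the two halves, which is why the linear-time bound depends on the model; a single-tape Turing machine, by contrast, must shuttle back and forth between the two halves. A minimal sketch:

```python
def is_square_word(s: str) -> bool:
    """Decide membership in { xx : x a binary string } in O(len(s)) time
    on a random-access model: compare the two halves directly."""
    n = len(s)
    if n % 2 != 0:
        return False
    return s[: n // 2] == s[n // 2:]

print(is_square_word("0101"))  # True: "01" + "01"
print(is_square_word("0110"))  # False
```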
The Blum axioms57 can be used to define complexity classes without referring to a concrete
computational model58 .

39 https://en.wikipedia.org/wiki/Decision_problem
40 https://en.wikipedia.org/wiki/Function_problem
41 https://en.wikipedia.org/wiki/FP_(complexity)
42 https://en.wikipedia.org/wiki/Counting_problem_(complexity)
43 https://en.wikipedia.org/wiki/Sharp-P
44 https://en.wikipedia.org/wiki/Optimization_problem
45 https://en.wikipedia.org/wiki/Promise_problem
46 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
47 https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine
48 https://en.wikipedia.org/wiki/Boolean_circuit
49 https://en.wikipedia.org/wiki/Quantum_Turing_machine
50 https://en.wikipedia.org/wiki/Monotone_circuit
51 https://en.wikipedia.org/wiki/Linear_time
52 https://en.wikipedia.org/wiki/Cobham%27s_thesis
53 #CITEREFGoldreich2008
55 https://en.wikipedia.org/wiki/P_(complexity)
56 https://en.wikipedia.org/wiki/FP_(complexity)
57 https://en.wikipedia.org/wiki/Blum_axioms
58 https://en.wikipedia.org/wiki/Computational_model


147.2 Common complexity classes

Main article: List of complexity classes59
ALL60 is the class of all decision problems61.
Many important complexity classes can be defined by bounding the time62 or space63 used
by the algorithm. Some important complexity classes of decision problems defined in this
manner are the following:

147.2.1 Time-complexity classes

Model of computation              Time constraint f(n)   Time constraint poly(n)   Time constraint 2^poly(n)
Deterministic Turing machine      DTIME64                P65                       EXPTIME66
Non-deterministic Turing machine  NTIME67                NP68                      NEXPTIME69

147.2.2 Space-complexity classes

Model of computation                Space constraint f(n)   Space constraint O(log n)   Space constraint poly(n)   Space constraint 2^poly(n)
Deterministic Turing machine        DSPACE70                L71                         PSPACE72                   EXPSPACE73
Non-deterministic Turing machine    NSPACE74                NL75                        NPSPACE76                  NEXPSPACE77

147.2.3 Other models of computation

While the deterministic and non-deterministic Turing machines78 are the most commonly
utilized models of computation, many complexity classes are defined in terms of other
computational models. In particular,

59 https://en.wikipedia.org/wiki/List_of_complexity_classes
60 https://en.wikipedia.org/wiki/ALL_(complexity)
61 https://en.wikipedia.org/wiki/Decision_problem
62 https://en.wikipedia.org/wiki/Time_complexity
63 https://en.wikipedia.org/wiki/Space_complexity
64 https://en.wikipedia.org/wiki/DTIME
65 https://en.wikipedia.org/wiki/P_(complexity)
66 https://en.wikipedia.org/wiki/EXPTIME
67 https://en.wikipedia.org/wiki/NTIME
68 https://en.wikipedia.org/wiki/NP_(complexity)
69 https://en.wikipedia.org/wiki/NEXPTIME
70 https://en.wikipedia.org/wiki/DSPACE
71 https://en.wikipedia.org/wiki/L_(complexity)
72 https://en.wikipedia.org/wiki/PSPACE
73 https://en.wikipedia.org/wiki/EXPSPACE
74 https://en.wikipedia.org/wiki/NSPACE
75 https://en.wikipedia.org/wiki/NL_(complexity)
76 https://en.wikipedia.org/wiki/NPSPACE
77 https://en.wikipedia.org/wiki/NEXPSPACE
78 https://en.wikipedia.org/wiki/Turing_machine

1575
Complexity class

• The classes BPP79 , PP80 , RP81 , and ZPP82 are defined using the probabilistic Turing
machine83
• The classes IP84 , MA85 , and AM86 are defined using the interactive proof system87
• The class P/poly88 and its subclasses NC89 and AC90 are defined using the boolean
circuit91
• The classes BQP92 and QMA93 are defined using the quantum Turing machine94
These are explained in greater detail below.

Randomized computation

Main article: Probabilistic Turing machine95 A number of important complexity classes are
defined using the probabilistic Turing machine96 , a variant of the Turing machine97 that
can toss random coins. These classes help to better describe the complexity of randomized
algorithms98 .
Unlike with standard Turing machines, the definition of a probabilistic Turing machine
introduces the potential for error; that is, strings that the Turing machine is meant to
accept may on some occasions be rejected and strings that the Turing machine is meant to
reject may on some occasions be accepted. As a result, the complexity classes based on the
probabilistic Turing machine are defined in large part around the amount of error that is
allowed. In particular, they are defined using an error probability ϵ: a probabilistic Turing
machine M is said to recognize a language L with error probability ϵ if:
1. a string w in L implies that Pr[M accepts w] ≥ 1 − ϵ
2. a string w not in L implies that Pr[M rejects w] ≥ 1 − ϵ

79 https://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
80 https://en.wikipedia.org/wiki/PP_(complexity)
81 https://en.wikipedia.org/wiki/RP_(complexity)
82 https://en.wikipedia.org/wiki/ZPP_(complexity)
83 https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
84 https://en.wikipedia.org/wiki/IP_(complexity)
85 https://en.wikipedia.org/wiki/MA_(complexity)
86 https://en.wikipedia.org/wiki/AM_(complexity)
87 https://en.wikipedia.org/wiki/Interactive_proof_system
88 https://en.wikipedia.org/wiki/P/poly
89 https://en.wikipedia.org/wiki/NC_(complexity)
90 https://en.wikipedia.org/wiki/AC_(complexity)
91 https://en.wikipedia.org/wiki/Boolean_circuit
92 https://en.wikipedia.org/wiki/BQP
93 https://en.wikipedia.org/wiki/QMA
94 https://en.wikipedia.org/wiki/Quantum_Turing_machine
95 https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
96 https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
97 https://en.wikipedia.org/wiki/Turing_machine
98 https://en.wikipedia.org/wiki/Randomized_algorithm


Figure 356 The relationships between the fundamental probabilistic complexity classes.
BQP is a probabilistic quantum complexity class and is described in the quantum
computing section.

A straightforward class based upon this definition is ZPP99 (zero-error probabilistic poly-
nomial time), the class of problems solvable in polynomial time by a probabilistic Turing
machine with error probability 0. Intuitively, this is the strictest class of probabilistic
problems because it demands no error whatsoever.
A slightly looser class is RP100 (randomized polynomial time), which maintains no error
for strings not in the language but allows bounded error for strings in the language. More
formally, a language is in RP if there is a probabilistic polynomial-time Turing machine M
such that if a string is not in the language then M always rejects and if a string is in the lan-
guage then M accepts with a probability at least 1/2. The class co-RP is similarly defined
except the roles are flipped: error is not allowed for strings in the language but is allowed
for strings not in the language. Taken together, the classes RP and co-RP encompass all
of the problems that can be solved by probabilistic Turing machines with one-sided error101 .
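
The one-sided error of RP can be illustrated with randomized primality testing, a standard example not drawn from this text. The sketch below uses the Miller–Rabin test: if n is prime, a round never declares it composite (no error on strings outside the language), and if n is composite, a random base exposes it with probability at least 3/4; function names and the round count are illustrative choices.

```python
import random

def miller_rabin_round(n, a):
    # One round of the Miller-Rabin test with base a (n odd, n >= 5).
    # If n is prime this always returns False: the "no error on one
    # side" property of RP-style algorithms. If n is composite, a
    # random base a returns True with probability >= 3/4.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False  # n passes this round; looks prime
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False
    return True  # a is a witness that n is composite

def is_composite(n, rounds=20):
    # Amplification: repeating a one-sided-error test shrinks the
    # error probability exponentially (here to at most 4**-rounds).
    if n < 4:
        return False
    if n % 2 == 0:
        return True
    return any(miller_rabin_round(n, random.randrange(2, n - 1))
               for _ in range(rounds))
```

Repeating the round and accepting if any run accepts preserves the one-sided guarantee: a prime input can never be wrongly declared composite, no matter how many rounds are run.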
Loosening the error requirements further to allow for two-sided error102 yields the class
BPP103 (bounded-error probabilistic polynomial time), the class of problems solvable in

99 https://en.wikipedia.org/wiki/ZPP_(complexity)
100 https://en.wikipedia.org/wiki/RP_(complexity)
101 https://en.wikipedia.org/wiki/One-sided_error
102 https://en.wikipedia.org/wiki/Two-sided_error
103 https://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial


polynomial time by a probabilistic Turing machine with error probability less than 1/3 (for
both strings in the language and not in the language). BPP is the most practically relevant
of the probabilistic complexity classes—most problems of interest in BPP have efficient
randomized algorithms104 that can be run quickly on real computers. BPP is also at the
center of the important unsolved question in computer science of whether P=BPP105 ,
which if true would mean that randomness does not increase the computational power of
computers, i.e. any probabilistic Turing machine could be simulated by a deterministic
Turing machine with at most polynomial slowdown.
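
The bound of 1/3 in the definition of BPP is not special: running a BPP algorithm several times and taking the majority answer drives the error down exponentially (by a Chernoff bound). A minimal sketch of this amplification follows; the decision procedure passed in is a hypothetical placeholder, not a routine from this text.

```python
def majority_vote(randomized_decider, x, repeats=101):
    # Error amplification for a BPP-style algorithm: run independent
    # trials and report the majority answer. If one run errs with
    # probability at most 1/3, the majority is wrong with probability
    # exponentially small in `repeats`.
    votes = sum(randomized_decider(x) for _ in range(repeats))
    return 1 if 2 * votes > repeats else 0
```
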
The broadest class of efficiently-solvable probabilistic problems is PP106 (probabilistic poly-
nomial time), the set of languages solvable by a probabilistic Turing machine in polynomial
time with an error probability of less than 1/2 for all strings.
ZPP, RP and co-RP are all subsets of BPP. The reason for this is intuitive: the classes
allowing zero error and only one-sided error are all contained within the class that allows
two-sided error. BPP is a subset of PP. ZPP relates to RP and co-RP in the following
way: ZPP=RP∩co-RP, that is, ZPP consists exactly of those problems that are in both
RP and co-RP. Intuitively, this follows from the fact that RP and co-RP allow only one-
sided error. A problem in RP allows error only for strings in the language, and a problem
in co-RP allows error only for strings not in the language, i.e. it disallows error for strings
in the language. Hence, if a problem is in both RP and co-RP, then there must be no
error for strings in the language. A similar argument further demonstrates that no error
is allowed for strings not in the language. Together, this means that no error is allowed
whatsoever, which is exactly the definition of ZPP.

Interactive proof systems

Main article: Interactive proof system107 A number of complexity classes are defined using
interactive proof systems. Interactive proofs generalize the proof-based definition of the complex-
ity class NP108 and yield insights into cryptography109 , approximation algorithms110 , and
formal verification111 .

104 https://en.wikipedia.org/wiki/Randomized_algorithm
105 https://en.wikipedia.org/wiki/BPP_(complexity)#Problems
106 https://en.wikipedia.org/wiki/PP_(complexity)
107 https://en.wikipedia.org/wiki/Interactive_proof_system
108 https://en.wikipedia.org/wiki/NP_(complexity)
109 https://en.wikipedia.org/wiki/Cryptography
110 https://en.wikipedia.org/wiki/Approximation_algorithm
111 https://en.wikipedia.org/wiki/Formal_verification


Figure 357 General representation of an interactive proof protocol.

Interactive proof systems are abstract machines112 that model computation as the exchange
of messages between two parties: a prover P and a verifier V . The parties interact by
exchanging messages, and an input string is accepted by the system if the verifier decides
to accept the input on the basis of the messages it has received from the prover. The prover
P has unlimited computational power while the verifier has bounded computational power
(the standard definition of interactive proof systems defines the verifier to be polynomially-
time bounded). The prover, however, is untrustworthy (this prevents all languages from
being trivially recognized by the proof system by having the computationally unbounded
prover determine if a string is in a language and then sending a trustworthy ”YES” or ”NO”
to the verifier), so the verifier must conduct an ”interrogation” of the prover by ”asking it”
successive rounds of questions, accepting only if it develops a high degree of confidence that
the string is in the language.[1]
The class NP113 corresponds to a simple proof system in which the verifier is restricted to
being a deterministic, polynomial-time machine and the procedure is restricted to one round
(that is, the prover sends only a single, full proof114 to the verifier). Put another way, the
definition of the class NP (the set of decision problems for which the problem instances,
when the answer is ”YES”, have proofs verifiable in polynomial time by a deterministic
Turing machine) describes a proof system in which the proof is constructed by an unmentioned
prover and the deterministic Turing machine is the verifier.
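
This one-round view of NP can be made concrete with a small polynomial-time verifier for Boolean satisfiability, sketched below. The clause encoding (DIMACS-style signed integers) and all names are illustrative choices, not a standard API.

```python
def verify_sat(clauses, assignment):
    # NP-style verifier: `assignment` plays the role of the prover's
    # certificate, and the check runs in time linear in the formula
    # size. Each clause is a list of nonzero integers; literal k
    # refers to variable |k|, negated when k < 0.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3), with certificate x1=1, x2=0, x3=1
clauses = [[1, -2], [2, 3]]
certificate = {1: True, 2: False, 3: True}
```

The verifier accepts exactly when the certificate satisfies every clause; a bad certificate is rejected, matching the completeness/soundness split of the proof-system view.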

112 https://en.wikipedia.org/wiki/Abstract_machine
113 https://en.wikipedia.org/wiki/NP_(complexity)
114 https://en.wikipedia.org/wiki/Certificate_(complexity)


It turns out that NP describes the full power of interactive proof systems with deterministic
(polynomial-time) verifiers because, for any proof system with a deterministic verifier, it is
never necessary to exchange more than a single round of messages between the prover and the
verifier. Interactive proof systems that provide computational power beyond standard
complexity classes thus require probabilistic verifiers, which means that the verifiers' ques-
tions to provers are computed using probabilistic algorithms115 . As noted in the section
above, probabilistic algorithms introduce error into the system, so complexity classes based
on probabilistic proof systems are defined in terms of the error probability.
The most general complexity class arising out of this characterization is the class IP116
(interactive polynomial time), which is the class of all problems solvable by an interactive
proof system (P, V ), where V is probabilistic polynomial-time and the proof system satisfies
two properties: for a language L ∈ IP
1. (Completeness) a string w in L implies Pr[V accepts w after interacting with P ] ≥ 2/3
2. (Soundness) a string w not in L implies Pr[V accepts w after interacting with P ] ≤ 1/3
An important feature of IP is that it equals PSPACE117 .
A modification of the protocol for IP produces another important complexity class: AM118
(Arthur–Merlin protocol). In the definition of interactive proof systems used by IP, the
prover was not able to see the coins utilized by the verifier in its probabilistic computation—
it was only able to see the messages that the verifier produced with these coins. For this
reason, the coins are called private random coins. The interactive proof system can be
constrained so that the coins used by the verifier are public random coins, that is the prover
is able to see the coins. Formally, AM is defined as the class of languages with an interactive
proof in which the verifier sends a random string to the prover, the prover responds with a
message, and the verifier either accepts or rejects by applying a deterministic polynomial-
time function to the message from the prover. AM can be generalized to AM[k], where k
is the number of messages exchanged (so in the generalized form the standard AM defined
above is AM[2]). However, it is the case that for all k≥2, AM[k]=AM[2]. It is also the
case that AM[k]⊆IP[k].
Other complexity classes defined using interactive proof systems include MIP119 (multi-
prover interactive polynomial time) and QIP120 (quantum interactive polynomial time).

Boolean circuits

Main article: Circuit complexity121

115 https://en.wikipedia.org/wiki/Probabilistic_algorithm
116 https://en.wikipedia.org/wiki/IP_(complexity)
117 https://en.wikipedia.org/wiki/PSPACE
118 https://en.wikipedia.org/wiki/AM_(complexity)
119 https://en.wikipedia.org/wiki/Interactive_proof_system#MIP
120 https://en.wikipedia.org/wiki/QIP_(complexity)
121 https://en.wikipedia.org/wiki/Circuit_complexity


Figure 358 Example Boolean circuit computing the Boolean function fC (x1 , x2 , x3 ),
where x1 = 0, x2 = 1, and x3 = 0. The ∧ nodes are AND gates, the ∨ nodes are OR gates,
and the ¬ nodes are NOT gates.

An alternative model of computation to the Turing machine122 is the Boolean circuit123 , a
simplified model of the digital circuits124 used in modern computers125 . Not only does this
model provide a more intuitive connection between computation in theory and computation
in practice, but it is also a natural model for non-uniform computation126 (different input
sizes within the same problem use different algorithms) that leads to a number of new
complexity classes.

122 https://en.wikipedia.org/wiki/Turing_machine
123 https://en.wikipedia.org/wiki/Boolean_circuit
124 https://en.wikipedia.org/wiki/Digital_circuit
125 https://en.wikipedia.org/wiki/Computer
126 https://en.wikipedia.org/w/index.php?title=Non-uniform_computation&action=edit&redlink=1


Formally, a Boolean circuit C is a directed acyclic graph127 in which edges represent
wires (which carry the bit128 values 0 and 1), the input bits are represented by source
vertices (vertices with no incoming edges), and all non-source vertices represent logic gates129
(generally the AND130 , OR131 , and NOT gates132 ). One logic gate is designated the output
gate, and represents the end of the computation. The input/output behavior of a circuit
C with n input variables is represented by the Boolean function133 fC : {0, 1}n → {0, 1}; for
example, on input bits x1 , x2 , ..., xn , the output bit b of the circuit is represented mathe-
matically as b = fC (x1 , x2 , ..., xn ). The circuit C is said to compute the Boolean function
fC .
A particular circuit has a fixed number of input vertices, so it can only act on inputs of that
size. Languages134 (the formal representation of decision problems135 ), however, contain
strings of different lengths, so languages cannot be fully captured by a single circuit (in
contrast to the Turing machine model, in which a language is fully described by a single
Turing machine that can act on any input size). A language is instead represented by a
circuit family. A circuit family is an infinite list of circuits (C0 , C1 , C2 , ...), where Cn has
n input variables. A circuit family is said to decide a language L if, for every string w, w
is in the language L if and only if Cn (w) = 1, where n is the length of w. In other words,
a language is the set of strings which, when applied to the circuits corresponding to their
lengths, evaluate to 1.
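
The definitions above can be made concrete with a small circuit evaluator. The DAG encoding below (a gate name mapped to an operation and its children, with input gates holding a bit index) is an illustrative choice, not a standard representation.

```python
def eval_circuit(gates, inputs, out="out"):
    # Evaluate a Boolean circuit given as a DAG. `gates` maps a gate
    # name to (op, children): op is "IN", "NOT", "AND" or "OR"; an
    # "IN" gate's children slot holds its input-bit index instead.
    memo = {}
    def val(g):
        if g not in memo:
            op, kids = gates[g]
            if op == "IN":
                memo[g] = inputs[kids]
            elif op == "NOT":
                memo[g] = 1 - val(kids[0])
            elif op == "AND":
                memo[g] = int(all(val(k) for k in kids))
            else:  # "OR"
                memo[g] = int(any(val(k) for k in kids))
        return memo[g]
    return val(out)

# This circuit computes f(x1, x2, x3) = (x1 AND x2) OR (NOT x3).
gates = {
    "x1": ("IN", 0), "x2": ("IN", 1), "x3": ("IN", 2),
    "g1": ("AND", ["x1", "x2"]),
    "g2": ("NOT", ["x3"]),
    "out": ("OR", ["g1", "g2"]),
}
```

A circuit family would supply one such `gates` table for each input length n, and the family decides a language when the n-input circuit outputs 1 exactly on the length-n strings in the language.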
While complexity classes defined using Turing machines are described in terms of time
complexity136 , circuit complexity classes are defined in terms of circuit size — the number
of vertices in the circuit. The size complexity of a circuit family (C0 , C1 , ...) is the function
f : N → N, where f (n) is the circuit size of Cn . The familiar function classes follow naturally
from this; for example, a polynomial-size complexity is one such that the function f is a
polynomial137 .
This leads directly to the complexity class P/poly138 , the set of languages that are decidable
by polynomial-size circuit families. It turns out that there is a natural connection between
circuit complexity and time complexity. Intuitively, a language with small time complexity
(requires relatively few sequential operations on a Turing machine), also has a small circuit
complexity (requires relatively few Boolean operations). Formally, it can be shown that if
a language is in TIME(t(n)), where t is a function t : N → N, then it has circuit complexity
O(t²(n)).[2] It follows directly from this that P139 ⊆P/poly. In other words, any problem
that can be computed in polynomial time by a deterministic Turing machine can also be

127 https://en.wikipedia.org/wiki/Directed_acyclic_graph
128 https://en.wikipedia.org/wiki/Bit
129 https://en.wikipedia.org/wiki/Logic_gate
130 https://en.wikipedia.org/wiki/AND_gate
131 https://en.wikipedia.org/wiki/OR_gate
132 https://en.wikipedia.org/wiki/NOT_gate
133 https://en.wikipedia.org/wiki/Boolean_function
134 https://en.wikipedia.org/wiki/Formal_language
135 https://en.wikipedia.org/wiki/Decision_problem
136 https://en.wikipedia.org/wiki/Time_complexity
137 https://en.wikipedia.org/wiki/Polynomial
138 https://en.wikipedia.org/wiki/P/poly
139 https://en.wikipedia.org/wiki/P_(complexity)


computed by a polynomial-size circuit family. It is further the case that the inclusion is
proper (P⊂P/poly) because there are undecidable problems that are in P/poly.
P/poly turns out to have a number of properties that make it highly useful in the study
of the relationships between complexity classes. In particular, it is helpful in investigating
problems related to P versus NP140 . For example, if there is any language in NP that is not
in P/poly, then P̸=NP.[3] P/poly also helps to investigate properties of the polynomial
hierarchy141 . For example, if NP142 ⊆ P/poly, then PH collapses to Σ₂P . A full description
of the relations between P/poly and other complexity classes is available at ”Importance
of P/poly143 ”. P/poly is also helpful in the general study of the properties of Turing
machines144 , as it can be equivalently defined as the class of languages recognized by a
polynomial-time Turing machine with a polynomial-bounded advice function145 .
Two subclasses of P/poly that have interesting properties in their own right are NC146
and AC147 . These classes are defined not only in terms of their circuit size but also in terms
of their depth. The depth of a circuit is the length of the longest directed path148 from an
input node to the output node. The class NC is the set of languages that can be solved
by circuit families that are restricted not only to having polynomial-size but also to having
polylogarithmic depth. The class AC is defined similarly to NC, however gates are allowed
to have unbounded fan-in (that is, the AND and OR gates can be applied to more than
two bits). NC is an important class because it turns out that it represents the class of
languages that have efficient parallel algorithms149 .
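
Circuit depth, the resource that NC and AC restrict, is easy to compute from a circuit DAG. The encoding below (a gate name mapped to an operation and its children, input gates holding a bit index) is an illustrative choice.

```python
def circuit_depth(gates, g="out"):
    # Depth of a gate = length of the longest path from an input to
    # that gate. NC and AC require circuit families whose output-gate
    # depth grows only polylogarithmically with the input length.
    op, kids = gates[g]
    if op == "IN":
        return 0
    return 1 + max(circuit_depth(gates, k) for k in kids)

# (x1 AND x2) OR (NOT x3): AND and NOT sit at depth 1, OR at depth 2.
gates = {
    "x1": ("IN", 0), "x2": ("IN", 1), "x3": ("IN", 2),
    "g1": ("AND", ["x1", "x2"]),
    "g2": ("NOT", ["x3"]),
    "out": ("OR", ["g1", "g2"]),
}
```
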

Quantum Turing machines

The classes BQP150 and QMA151 , which are of key importance in quantum information
science152 , are defined using quantum Turing machines153 .


140 https://en.wikipedia.org/wiki/P_versus_NP
141 https://en.wikipedia.org/wiki/Polynomial_hierarchy
142 https://en.wikipedia.org/wiki/NP_(complexity)
143 https://en.wikipedia.org/wiki/P/poly#Importance_of_P/poly
144 https://en.wikipedia.org/wiki/Turing_machine
145 https://en.wikipedia.org/wiki/Advice_(complexity)
146 https://en.wikipedia.org/wiki/NC_(complexity)
147 https://en.wikipedia.org/wiki/AC_(complexity)
148 https://en.wikipedia.org/wiki/Directed_path
149 https://en.wikipedia.org/wiki/Parallel_algorithm
150 https://en.wikipedia.org/wiki/BQP
151 https://en.wikipedia.org/wiki/QMA
152 https://en.wikipedia.org/wiki/Quantum_information_science
153 https://en.wikipedia.org/wiki/Quantum_Turing_machine


Counting problems

#P155 is an important complexity class of counting problems (not decision problems).


147.2.4 Enumeration algorithms

Several output-sensitive157 classes have been defined for enumeration algorithms158 .


147.3 Reduction

Main article: Reduction (complexity)160 Many complexity classes are defined using the
concept of a reduction. A reduction is a transformation of one problem into another problem.
It captures the informal notion of a problem being at least as difficult as another problem.
For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult
than Y, and we say that X reduces to Y. There are many different types of reductions,
based on the method of reduction, such as Cook reductions161 , Karp reductions162 and
Levin reductions163 , and the bound on the complexity of reductions, such as polynomial-
time reductions164 or log-space reductions165 .
The most commonly used reduction is a polynomial-time reduction. This means that the
reduction process takes polynomial time. For example, the problem of squaring an integer
can be reduced to the problem of multiplying two integers. This means an algorithm for
multiplying two integers can be used to square an integer. Indeed, this can be done by giving
the same input to both inputs of the multiplication algorithm. Thus we see that squaring
is not more difficult than multiplication, since squaring can be reduced to multiplication.
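
The squaring-to-multiplication reduction described above fits in a few lines; `multiply` here is a stand-in for whatever integer-multiplication algorithm is available.

```python
def multiply(a, b):
    # Stand-in for any integer-multiplication algorithm.
    return a * b

def square(x):
    # The reduction: squaring is solved by a single call to the
    # multiplication routine with the same input on both arguments,
    # so squaring is no harder than multiplication.
    return multiply(x, x)
```
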
This motivates the concept of a problem being hard for a complexity class. A problem
X is hard for a class of problems C if every problem in C can be reduced to X. Thus no
problem in C is harder than X, since an algorithm for X allows us to solve any problem in
C. Of course, the notion of hard problems depends on the type of reduction being used.

155 https://en.wikipedia.org/wiki/Sharp-P
157 https://en.wikipedia.org/wiki/Output-sensitive_algorithm
158 https://en.wikipedia.org/wiki/Enumeration_algorithm
160 https://en.wikipedia.org/wiki/Reduction_(complexity)
161 https://en.wikipedia.org/wiki/Cook_reduction
162 https://en.wikipedia.org/wiki/Karp_reduction
163 https://en.wikipedia.org/wiki/Levin_reduction
164 https://en.wikipedia.org/wiki/Polynomial-time_reduction
165 https://en.wikipedia.org/wiki/Log-space_reduction


For complexity classes larger than P, polynomial-time reductions are commonly used. In
particular, the set of problems that are hard for NP is the set of NP-hard166 problems.
If a problem X is in C and is hard for C, then X is said to be complete167 for C. This
means that X is the hardest problem in C (since there could be many problems which
are equally hard, one might say that X is one of the hardest problems in C). Thus the
class of NP-complete168 problems contains the most difficult problems in NP, in the sense
that they are the ones most likely not to be in P. Because the problem P = NP is not
solved, being able to reduce a known NP-complete problem, Π2 , to another problem, Π1 ,
would indicate that there is no known polynomial-time solution for Π1 . This is because a
polynomial-time solution to Π1 would yield a polynomial-time solution to Π2 . Similarly,
because all NP problems can be reduced to the set, finding an NP-complete problem that
can be solved in polynomial time would mean that P = NP.
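
A classic polynomial-time (Karp-style) reduction between NP-complete problems, sketched below for illustration, transforms Independent Set into Clique: a graph on n vertices has an independent set of size k exactly when its complement graph has a clique of size k.

```python
def complement_graph(n, edges):
    # Build the complement of an undirected graph on vertices
    # 0..n-1. This takes O(n^2) time, polynomial in the input size,
    # so the transformation is a polynomial-time reduction.
    all_pairs = {(u, v) for u in range(n) for v in range(u + 1, n)}
    present = {tuple(sorted(e)) for e in edges}
    return all_pairs - present
```

Any clique-finding algorithm run on the complement therefore solves the independent-set question on the original graph, so Independent Set is no harder than Clique (and, by symmetry of complementation, vice versa).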

147.4 Closure properties of classes

Complexity classes have a variety of closure properties; for example, decision classes may
be closed under negation169 , disjunction170 , conjunction171 , or even under all Boolean op-
erations172 . Moreover, they might also be closed under a variety of quantification schemes.
P, for instance, is closed under all Boolean operations, and under quantification over poly-
nomially sized domains. However, it is most likely not closed under quantification over
exponentially sized domains.
Each class X that is not closed under negation has a complement class co-X, which consists
of the complements of the languages contained in X. Similarly one can define the Boolean
closure of a class, and so on; this is, however, less commonly done.
One possible route to separating two complexity classes is to find some closure property
possessed by one and not by the other.
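
Closure of P under negation, for instance, can be seen directly: flipping the answers of a polynomial-time decider yields a decider for the complement language in the same time bound. The decision procedure below is a hypothetical placeholder.

```python
def complement_decider(decider):
    # If `decider` runs in polynomial time for a language L, swapping
    # its 0/1 answers decides the complement of L within the same
    # time bound, so P is closed under negation.
    return lambda x: 1 - decider(x)

has_a = lambda s: 1 if "a" in s else 0  # toy polynomial-time decider
no_a = complement_decider(has_a)        # decides the complement
```
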

147.5 Relationships between complexity classes

147.5.1 Savitch's theorem

Main article: Savitch's theorem173 Savitch's theorem establishes that PSPACE =
NPSPACE and EXPSPACE = NEXPSPACE. One central question of complexity the-
ory is whether nondeterminism adds significant power to a computational model. This is
central to the open P versus NP problem in the context of time. Savitch's theorem shows

166 https://en.wikipedia.org/wiki/NP-hard
167 https://en.wikipedia.org/wiki/Complete_(complexity)
168 https://en.wikipedia.org/wiki/NP-complete
169 https://en.wikipedia.org/wiki/Negation
170 https://en.wikipedia.org/wiki/Disjunction
171 https://en.wikipedia.org/wiki/Logical_conjunction
172 https://en.wikipedia.org/wiki/Logical_connective
173 https://en.wikipedia.org/wiki/Savitch%27s_theorem


that for space, nondeterminism does not add significantly more power, where ”significant”
means the difference between polynomial and superpolynomial resource requirements (or,
for EXPSPACE, the difference between exponential and superexponential). For example,
Savitch's theorem proves that no problem that requires exponential space for a deterministic
Turing machine can be solved by a nondeterministic polynomial space Turing machine.

147.5.2 Other relations

The following table shows some of the classes of problems (or languages, or grammars) that
are considered in complexity theory. If class X is a strict subset174 of Y, then X is shown
below Y, with a dark line connecting them. If X is a subset, but it is unknown whether
they are equal sets, then the line is lighter and is dotted. Technically, the breakdown into
decidable and undecidable pertains more to the study of computability theory175 but is
useful for putting the complexity classes in perspective.
Decision Problem176

  Undecidable178
  Type 0 (Recursively enumerable)177
    Decidable179
      EXPSPACE180
        NEXPTIME181
          EXPTIME182
            PSPACE183
              Type 1 (Context Sensitive)184
                co-NP185    BQP186    NP187
                  BPP188
                    P189
                      NC190
                        Type 2 (Context Free)191
                          Type 3 (Regular)192

174 https://en.wikipedia.org/wiki/Subset
175 https://en.wikipedia.org/wiki/Computability_theory

147.5.3 Hierarchy theorems

Main articles: Time hierarchy theorem193 and Space hierarchy theorem194 For the complex-
ity classes defined in this way, it is desirable to prove that relaxing the requirements on

193 https://en.wikipedia.org/wiki/Time_hierarchy_theorem
194 https://en.wikipedia.org/wiki/Space_hierarchy_theorem


(say) computation time indeed defines a bigger set of problems. In particular, although
DTIME(n) is contained in DTIME(n2 ), it would be interesting to know if the inclusion is
strict. For time and space requirements, the answer to such questions is given by the time
and space hierarchy theorems respectively. They are called hierarchy theorems because they
induce a proper hierarchy on the classes defined by constraining the respective resources.
Thus there are pairs of complexity classes such that one is properly included in the other.
Having deduced such proper set inclusions, we can proceed to make quantitative statements
about how much more additional time or space is needed in order to increase the number
of problems that can be solved.
More precisely, the time hierarchy theorem195 states that
DTIME(f(n)) ⊊ DTIME(f(n) · log²(f(n))).
The space hierarchy theorem196 states that
DSPACE(f(n)) ⊊ DSPACE(f(n) · log(f(n))).
The time and space hierarchy theorems form the basis for most separation results of com-
plexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained
in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in
PSPACE.

147.6 See also


• List of complexity classes197

147.7 Notes
1. A, S198 ; B, B199 (2009). Computational Complexity: A Mod-
ern Approach. Cambridge University Press. p. 144. ISBN200 978-0-521-42426-4201 .
The verifier conducts an interrogation of the prover, repeatedly asking questions and
listening to the prover's responses.
2. S, M202 (2006). Introduction to the Theory of Computation203 (2
.). USA: T C T. . 355. ISBN204 978-0-534-95097-2205 .
3. Arora and Barak p. 286

195 https://en.wikipedia.org/wiki/Time_hierarchy_theorem
196 https://en.wikipedia.org/wiki/Space_hierarchy_theorem
197 https://en.wikipedia.org/wiki/List_of_complexity_classes
198 https://en.wikipedia.org/wiki/Sanjeev_Arora
199 https://en.wikipedia.org/wiki/Boaz_Barak
200 https://en.wikipedia.org/wiki/ISBN_(identifier)
201 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-42426-4
202 https://en.wikipedia.org/wiki/Michael_Sipser
203 https://en.wikipedia.org/wiki/Introduction_to_the_Theory_of_Computation
204 https://en.wikipedia.org/wiki/ISBN_(identifier)
205 https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-95097-2


147.8 References
• A, S206 ; B, B207 (2009). Computational Complexity: A Modern
Approach. Cambridge University Press. ISBN208 978-0-521-42426-4209 .
• S, M210 (2006). Introduction to the Theory of Computation211 (2 .).
USA: T C T. ISBN212 978-0-534-95097-2213 .

147.9 Further reading


• The Complexity Zoo214 : A huge list of complexity classes, a reference for experts.
• Neil Immerman215 . ”Computational Complexity Theory”216 . Archived from the
original217 on 2016-04-16. Includes a diagram showing the hierarchy of complexity
classes and how they fit together.
• Michael Garey218 , and David S. Johnson219 : Computers and Intractability: A Guide to
the Theory of NP-Completeness. New York: W. H. Freeman & Co., 1979. The standard
reference on NP-complete problems, an important category of problems whose solutions
appear to require an impractically long time to compute.


206 https://en.wikipedia.org/wiki/Sanjeev_Arora
207 https://en.wikipedia.org/wiki/Boaz_Barak
208 https://en.wikipedia.org/wiki/ISBN_(identifier)
209 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-42426-4
210 https://en.wikipedia.org/wiki/Michael_Sipser
211 https://en.wikipedia.org/wiki/Introduction_to_the_Theory_of_Computation
212 https://en.wikipedia.org/wiki/ISBN_(identifier)
213 https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-95097-2
214 https://complexityzoo.uwaterloo.ca/Complexity_Zoo
215 https://en.wikipedia.org/wiki/Neil_Immerman
216 https://web.archive.org/web/20160416021243/https://people.cs.umass.edu/~immerman/complexity_theory.html
217 http://www.cs.umass.edu/~immerman/complexity_theory.html
218 https://en.wikipedia.org/wiki/Michael_Garey
219 https://en.wikipedia.org/wiki/David_S._Johnson

148 P (complexity)

In computational complexity theory1 , P, also known as PTIME or DTIME2 (n^O(1) ), is
a fundamental complexity class3 . It contains all decision problems4 that can be solved
by a deterministic Turing machine5 using a polynomial6 amount of computation time7 , or
polynomial time8 .
Cobham's thesis9 holds that P is the class of computational problems that are ”efficiently
solvable” or ”tractable10 ”. This is inexact: in practice, some problems not known to be in P
have practical solutions, and some that are in P do not, but this is a useful rule of thumb.

148.1 Definition

A language11 L is in P if and only if there exists a deterministic Turing machine M, such
that
• M runs for polynomial time on all inputs
• For all x in L, M outputs 1
• For all x not in L, M outputs 0
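A minimal sketch of such a machine M, for an illustrative language not taken from the text (the palindromes over a finite alphabet): the decider below runs in O(|x|) time, comfortably within a polynomial bound, and outputs 1 exactly for the strings in the language.

```python
# Illustrative polynomial-time decider M for the assumed example language
# PALINDROME = { x : x reads the same forwards and backwards }.
# The three conditions of the definition are mirrored here: M halts in
# polynomial time on all inputs, outputs 1 on members, and 0 otherwise.
def M(x: str) -> int:
    n = len(x)
    for i in range(n // 2):          # O(n) comparisons: polynomial in |x|
        if x[i] != x[n - 1 - i]:
            return 0                 # x not in L -> output 0
    return 1                         # x in L     -> output 1
```

For example, `M("racecar")` returns 1 and `M("abca")` returns 0.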
P can also be viewed as a uniform family of boolean circuits12 . A language L is in P if
and only if there exists a polynomial-time uniform13 family of boolean circuits {Cn : n ∈ N},
such that
• For all n ∈ N, Cn takes n bits as input and outputs 1 bit
• For all x in L, C|x| (x) = 1
• For all x not in L, C|x| (x) = 0
The circuit definition can be weakened to use only a logspace uniform14 family without
changing the complexity class.

1 https://en.wikipedia.org/wiki/Computational_complexity_theory
2 https://en.wikipedia.org/wiki/DTIME
3 https://en.wikipedia.org/wiki/Complexity_class
4 https://en.wikipedia.org/wiki/Decision_problem
5 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
6 https://en.wikipedia.org/wiki/Polynomial
7 https://en.wikipedia.org/wiki/Computation_time
8 https://en.wikipedia.org/wiki/Polynomial_time
9 https://en.wikipedia.org/wiki/Cobham%27s_thesis
10 https://en.wikipedia.org/wiki/Tractable_problem
11 https://en.wikipedia.org/wiki/Formal_language
12 https://en.wikipedia.org/wiki/Boolean_circuit
13 https://en.wikipedia.org/wiki/Circuit_complexity#Polynomial-time_uniform
14 https://en.wikipedia.org/wiki/Circuit_complexity#Logspace_uniform


148.2 Notable problems in P

P is known to contain many natural problems, including the decision versions of linear
programming15 , calculating the greatest common divisor16 , and finding a maximum match-
ing17 . In 2002, it was shown that the problem of determining if a number is prime18 is in
P.[1] The related class of function problems19 is FP20 .
Several natural problems are complete for P, including st-connectivity21 (or reachability22 )
on alternating graphs.[2] The article on P-complete problems23 lists further relevant problems
in P.
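For instance, the greatest common divisor mentioned above can be computed by Euclid's algorithm, which uses O(log min(a, b)) division steps, a number polynomial in the bit length of the input, so the associated decision problems are in P. A brief sketch:

```python
# Euclid's algorithm: each iteration at least halves one argument within
# two steps, so the loop runs O(log min(a, b)) times -- polynomial in the
# bit length of the input. Hence GCD computation (and decision questions
# phrased in terms of it) lies in P.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a
```

For example, `gcd(1071, 462)` returns 21 after three division steps.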

148.3 Relationships to other classes

A generalization of P is NP24 , which is the class of decision problems25 decidable by a
non-deterministic Turing machine26 that runs in polynomial time27 . Equivalently, it is the
class of decision problems where each ”yes” instance has a polynomial size certificate, and
certificates can be checked by a polynomial time deterministic Turing machine. The class
of problems for which this is true for the ”no” instances is called co-NP28 . P is trivially a
subset of NP and of co-NP; most experts believe it is a proper subset,[3] although this (the
P ⊊ NP hypothesis29 ) remains unproven. Another open problem is whether NP = co-NP (a
negative answer would imply P ⊊ NP).
P is also known to be at least as large as L30 , the class of problems decidable in a loga-
rithmic31 amount of memory space32 . A decider using O(log n) space cannot use more than
2O(log n) = nO(1) time, because this is the total number of possible configurations; thus, L is
a subset of P. Another important problem is whether L = P. We do know that P = AL,
the set of problems solvable in logarithmic memory by alternating Turing machines33 . P is
also known to be no larger than PSPACE34 , the class of problems decidable in polynomial
space. Again, whether P = PSPACE is an open problem. To summarize:

15 https://en.wikipedia.org/wiki/Linear_programming
16 https://en.wikipedia.org/wiki/Greatest_common_divisor
17 https://en.wikipedia.org/wiki/Maximum_matching
18 https://en.wikipedia.org/wiki/Prime_number
19 https://en.wikipedia.org/wiki/Function_problem
20 https://en.wikipedia.org/wiki/FP_(complexity)
21 https://en.wikipedia.org/wiki/St-connectivity
22 https://en.wikipedia.org/wiki/Reachability
23 https://en.wikipedia.org/wiki/P-complete
24 https://en.wikipedia.org/wiki/NP_(complexity)
25 https://en.wikipedia.org/wiki/Decision_problem
26 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
27 https://en.wikipedia.org/wiki/Polynomial_time
28 https://en.wikipedia.org/wiki/Co-NP
29 https://en.wikipedia.org/wiki/P_versus_NP_problem
30 https://en.wikipedia.org/wiki/L_(complexity)
31 https://en.wikipedia.org/wiki/Logarithm
32 https://en.wikipedia.org/wiki/Memory_space_(computational_resource)
33 https://en.wikipedia.org/wiki/Alternating_Turing_machine
34 https://en.wikipedia.org/wiki/PSPACE


L ⊆ AL = P ⊆ NP ⊆ PSPACE ⊆ EXPTIME.
Here, EXPTIME35 is the class of problems solvable in exponential time. Of all the classes
shown above, only two strict containments are known:
• P is strictly contained in EXPTIME. Consequently, all EXPTIME-hard problems lie
outside P, and at least one of the containments to the right of P above is strict (in fact,
it is widely believed that all three are strict).
• L is strictly contained in PSPACE.
The most difficult problems in P are P-complete36 problems.
Another generalization of P is P/poly37 , or Nonuniform Polynomial-Time. If a problem is
in P/poly, then it can be solved in deterministic polynomial time provided that an advice
string38 is given that depends only on the length of the input. Unlike for NP, however,
the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a
verifier. P/poly is a large class containing nearly all practical problems, including all of
BPP39 . If it contains NP, then the polynomial hierarchy40 collapses to the second level. On
the other hand, it also contains some impractical problems, including some undecidable
problems41 such as the unary version of any undecidable problem.
In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara42 , showed
that if there exists a sparse language43 that is P-complete, then L = P.[4]

148.4 Properties

Polynomial-time algorithms are closed under composition. Intuitively, this says that if one
writes a function that is polynomial-time assuming that function calls are constant-time,
and if those called functions themselves require polynomial time, then the entire algorithm
takes polynomial time. One consequence of this is that P is low44 for itself. This is also one
of the main reasons that P is considered to be a machine-independent class; any machine
”feature”, such as random access45 , that can be simulated in polynomial time can simply
be composed with the main polynomial-time algorithm to reduce it to a polynomial-time
algorithm on a more basic machine.
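The closure argument can be made quantitative (a sketch, with p and q as assumed polynomial bounds not named in the text): if the outer algorithm runs in at most p(n) steps, it makes at most p(n) subroutine calls, each on an input of length at most p(n); if each call costs at most q(m) steps on an input of length m, the total running time is bounded by

```latex
T(n) \;\le\; p(n) + p(n)\cdot q\bigl(p(n)\bigr),
```

which is again polynomial in n, since polynomials are closed under products and composition.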

35 https://en.wikipedia.org/wiki/EXPTIME
36 https://en.wikipedia.org/wiki/P-complete
37 https://en.wikipedia.org/wiki/P/poly
38 https://en.wikipedia.org/wiki/Advice_(complexity)
39 https://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
40 https://en.wikipedia.org/wiki/Polynomial_hierarchy
41 https://en.wikipedia.org/wiki/Undecidable_problem
42 https://en.wikipedia.org/w/index.php?title=Mitsunori_Ogihara&action=edit&redlink=1
43 https://en.wikipedia.org/wiki/Sparse_language
44 https://en.wikipedia.org/wiki/Low_(complexity)
45 https://en.wikipedia.org/wiki/Random_access


Languages in P are also closed under reversal, intersection46 , union47 , concatenation48 ,
Kleene closure49 , inverse homomorphism50 , and complementation51 .[5]

148.5 Pure existence proofs of polynomial-time algorithms

Some problems are known to be solvable in polynomial time, but no concrete algorithm is
known for solving them. For example, the Robertson–Seymour theorem52 guarantees that
there is a finite list of forbidden minors53 that characterizes (for example) the set of graphs
that can be embedded on a torus; moreover, Robertson and Seymour showed that there is
an O(n3 ) algorithm for determining whether a graph has a given graph as a minor. This
yields a nonconstructive proof54 that there is a polynomial-time algorithm for determining
if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is
known for this problem.

148.6 Alternative characterizations

In descriptive complexity55 , P can be described as the problems expressible in FO(LFP)56 ,
the first-order logic57 with a least fixed point58 operator added to it, on ordered structures.
In Immerman's 1999 textbook on descriptive complexity,[6] Immerman ascribes this result
to Vardi[7] and to Immerman.[8]
It was published in 2001 that PTIME corresponds to (positive) range concatenation gram-
mars59 .[9]

148.7 History

Kozen60[10] states that Cobham61 and Edmonds62 are ”generally credited with the invention
of the notion of polynomial time.” Cobham invented the class as a robust way of character-

46 https://en.wikipedia.org/wiki/Intersection_(set_theory)
47 https://en.wikipedia.org/wiki/Union_(set_theory)
48 https://en.wikipedia.org/wiki/Concatenation
49 https://en.wikipedia.org/wiki/Kleene_closure
50 https://en.wikipedia.org/wiki/Homomorphism
51 https://en.wikipedia.org/wiki/Complement_(complexity)
52 https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour_theorem
53 https://en.wikipedia.org/wiki/Forbidden_minor
54 https://en.wikipedia.org/wiki/Nonconstructive_proof
55 https://en.wikipedia.org/wiki/Descriptive_complexity
56 https://en.wikipedia.org/wiki/FO(LFP)
57 https://en.wikipedia.org/wiki/First-order_logic
58 https://en.wikipedia.org/wiki/Least_fixed_point
59 https://en.wikipedia.org/wiki/Range_concatenation_grammars
60 https://en.wikipedia.org/wiki/Dexter_Kozen
61 https://en.wikipedia.org/w/index.php?title=Alan_Cobham_(mathematician)&action=edit&redlink=1
62 https://en.wikipedia.org/wiki/Jack_Edmonds


izing efficient algorithms, leading to Cobham's thesis63 . However, H. C. Pocklington64 , in a
1910 paper,[11][12] analyzed two algorithms for solving quadratic congruences, and observed
that one took time ”proportional to a power of the logarithm of the modulus” and contrasted
this with one that took time proportional ”to the modulus itself or its square root”, thus
explicitly drawing a distinction between an algorithm that ran in polynomial time versus
one that did not.

148.8 Notes
1. Manindra Agrawal, Neeraj Kayal, Nitin Saxena, ”PRIMES is in P65 ”, Annals of
Mathematics 160 (2004), no. 2, pp. 781–793.
2. I, N66 (1999). Descriptive Complexity. New York: Springer-Verlag.
ISBN67 978-0-387-98600-568 .
3. Johnsonbaugh, Richard69 ; Schaefer, Marcus, Algorithms, 2004 Pearson Education,
page 458, ISBN70 0-02-360692-471
4. Jin-Yi Cai and D. Sivakumar. Sparse hard sets for P: resolution of a conjecture
of Hartmanis. Journal of Computer and System Sciences, volume 58, issue 2,
pp.280−296. 1999. ISSN72 0022-000073 . At Citeseer74
5. H, J E.; R M; J D. U (2001). Introduc-
tion to automata theory, languages, and computation (2. ed.). Boston: Addison-
Wesley. pp. 425–426. ISBN75 978-020144124676 .
6. I, N77 (1999). Descriptive Complexity. New York: Springer-Verlag.
p. 66. ISBN78 978-0-387-98600-579 .
7. V, M Y. (1982). ”T C  R Q L-
”. STOC '82: Proceedings of the fourteenth annual ACM symposium on
Theory of computing. pp. 137–146. doi80 :10.1145/800070.80218681 .
8. I, N (1982). ”R Q C  P
T”. STOC '82: Proceedings of the fourteenth annual ACM symposium on The-

63 https://en.wikipedia.org/wiki/Cobham%27s_thesis
64 https://en.wikipedia.org/wiki/Henry_Cabourn_Pocklington
65 http://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf
66 https://en.wikipedia.org/wiki/Neil_Immerman
67 https://en.wikipedia.org/wiki/ISBN_(identifier)
68 https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-98600-5
69 https://en.wikipedia.org/wiki/Richard_Johnsonbaugh
70 https://en.wikipedia.org/wiki/ISBN_(identifier)
71 https://en.wikipedia.org/wiki/Special:BookSources/0-02-360692-4
72 https://en.wikipedia.org/wiki/ISSN_(identifier)
73 https://www.worldcat.org/search?fq=x0:jrnl&q=n2:0022-0000
74 http://citeseer.ist.psu.edu/501645.html
75 https://en.wikipedia.org/wiki/ISBN_(identifier)
76 https://en.wikipedia.org/wiki/Special:BookSources/978-0201441246
77 https://en.wikipedia.org/wiki/Neil_Immerman
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/978-0-387-98600-5
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1145%2F800070.802186


ory of computing. pp. 147–152. doi82 :10.1145/800070.80218783 . Revised version in
Information and Control, 68 (1986), 86–104.
9. L K (2010). Parsing Beyond Context-Free Grammars. Springer
Science & Business Media. pp. 5 and 37. ISBN84 978-3-642-14846-085 . citing 86 for
the proof
10. K, D C. (2006). Theory of Computation. Springer. p. 4. ISBN87 978-
1-84628-297-388 .
11. P, H. C.89 (1910–1912). ”T     
   ,      ,
     ”. Proc. Camb. Phil. Soc. 16: 1–5.
12. G, W90 (1994). Mathematics of computation, 1943–1993: a half-
century of computational mathematics: Mathematics of Computation 50th Anniver-
sary Symposium, August 9–13, 1993, Vancouver, British Columbia. Providence, RI:
American Mathematical Society. pp. 503–504. ISBN91 978-0-8218-0291-592 .

148.9 References
• C, A93 (1965). ”T     -
”. Proc. Logic, Methodology, and Philosophy of Science II. North Holland.
• Thomas H. Cormen94 , Charles E. Leiserson95 , Ronald L. Rivest96 , and Clifford Stein97 .
Introduction to Algorithms98 , Second Edition. MIT Press and McGraw–Hill, 2001.
ISBN99 0-262-03293-7100 . Section 34.1: Polynomial time, pp. 971−979.
• P, C H.101 (1994). Computational complexity. Reading, Mass.:
Addison–Wesley. ISBN102 978-0-201-53082-7103 .

82 https://en.wikipedia.org/wiki/Doi_(identifier)
83 https://doi.org/10.1145%2F800070.802187
84 https://en.wikipedia.org/wiki/ISBN_(identifier)
85 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-14846-0
86 http://mjn.host.cs.st-andrews.ac.uk/publications/2001d.pdf
87 https://en.wikipedia.org/wiki/ISBN_(identifier)
88 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84628-297-3
89 https://en.wikipedia.org/wiki/H._C._Pocklington
90 https://en.wikipedia.org/wiki/Walter_Gautschi
91 https://en.wikipedia.org/wiki/ISBN_(identifier)
92 https://en.wikipedia.org/wiki/Special:BookSources/978-0-8218-0291-5
93 https://en.wikipedia.org/wiki/Alan_Cobham
94 https://en.wikipedia.org/wiki/Thomas_H._Cormen
95 https://en.wikipedia.org/wiki/Charles_E._Leiserson
96 https://en.wikipedia.org/wiki/Ronald_L._Rivest
97 https://en.wikipedia.org/wiki/Clifford_Stein
98 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
99 https://en.wikipedia.org/wiki/ISBN_(identifier)
100 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
101 https://en.wikipedia.org/wiki/Christos_H._Papadimitriou
102 https://en.wikipedia.org/wiki/ISBN_(identifier)
103 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-53082-7


• S, M104 (2006). Introduction to the Theory of Computation, 2nd Edi-


tion. Course Technology Inc. ISBN105 978-0-534-95097-2106 . Section 7.2: The Class P,
pp. 256−263;.

148.10 External links


• Complexity Zoo107 : Class P108
• Complexity Zoo109 : Class P/poly110


104 https://en.wikipedia.org/wiki/Michael_Sipser
105 https://en.wikipedia.org/wiki/ISBN_(identifier)
106 https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-95097-2
107 https://en.wikipedia.org/wiki/Complexity_Zoo
108 https://complexityzoo.uwaterloo.ca/Complexity_Zoo:P#p
109 https://en.wikipedia.org/wiki/Complexity_Zoo
110 https://complexityzoo.uwaterloo.ca/Complexity_Zoo:P#ppoly

149 NP (complexity)


Unsolved problem in computer science: Is P = NP? (more unsolved problems in computer science)6

Figure 408 Euler diagram for the P, NP, NP-complete, and NP-hard sets of problems.
Under the assumption that P≠NP, the existence of problems within NP but outside both
P and NP-complete was established by Ladner.[1]

1 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources
2 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources#Inline_citations
3 https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Fact_and_Reference_Check
4 https://en.wikipedia.org/wiki/Wikipedia:When_to_cite
5 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
6 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science


In computational complexity theory7 , NP (nondeterministic polynomial time) is a
complexity class8 used to classify decision problems9 . NP is the set10 of decision problems
for which the problem instances11 , where the answer is ”yes”, have proofs12 verifiable in
polynomial time13 by a deterministic Turing machine14 .[2][Note 1]
An equivalent definition of NP is the set of decision problems solvable in polynomial time
by a non-deterministic Turing machine15 . This definition is the basis for the abbreviation
NP: ”nondeterministic16 polynomial time”. These two definitions are equivalent because the
algorithm based on the Turing machine consists of two phases, the first of which consists of
a guess about the solution, which is generated in a non-deterministic way, while the second
phase consists of a deterministic algorithm that verifies if the guess is a solution to the
problem.[3]
Decision problems are assigned complexity classes (such as NP) based on the fastest known
algorithms. Therefore, decision problems may change classes if faster algorithms are dis-
covered.
It is easy to see that the complexity class P17 (all problems solvable, deterministically, in
polynomial time) is contained in NP (problems where solutions can be verified in poly-
nomial time), because if a problem is solvable in polynomial time then a solution is also
verifiable in polynomial time by simply solving the problem. But NP contains many more
problems[Note 2] , the hardest of which are called NP-complete18 problems. An algorithm
solving such a problem in polynomial time is also able to solve any other NP problem in
polynomial time. The P versus NP (“P = NP?”) problem19 , the most important open question
in the field, asks whether polynomial-time algorithms exist for solving NP-complete, and by
corollary, all NP problems. It is widely believed that this is not the case.[4]
The complexity class NP is related to the complexity class co-NP20 for which the answer
”no” can be verified in polynomial time. Whether or not NP = co-NP is another outstanding
question in complexity theory.[5]

149.1 Formal definition

The complexity class NP can be defined in terms of NTIME21 as follows:

7 https://en.wikipedia.org/wiki/Computational_complexity_theory
8 https://en.wikipedia.org/wiki/Complexity_class
9 https://en.wikipedia.org/wiki/Decision_problem
10 https://en.wikipedia.org/wiki/Set_(mathematics)
11 https://en.wikipedia.org/wiki/Computational_complexity_theory#Problem_instances
12 https://en.wikipedia.org/wiki/Mathematical_proof
13 https://en.wikipedia.org/wiki/Polynomial_time
14 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
15 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
16 https://en.wikipedia.org/wiki/Nondeterministic_algorithm
17 https://en.wikipedia.org/wiki/P_(complexity)
18 https://en.wikipedia.org/wiki/NP-complete
19 https://en.wikipedia.org/wiki/P_versus_NP_problem
20 https://en.wikipedia.org/wiki/Co-NP
21 https://en.wikipedia.org/wiki/NTIME



NP = ⋃k∈N NTIME(nk ).

where NTIME(nk ) is the set of decision problems that can be solved by a non-deterministic
Turing machine22 in O(nk ) time.
Alternatively, NP can be defined using deterministic Turing machines as verifiers. A lan-
guage23 L is in NP if and only if there exist polynomials p and q, and a deterministic Turing
machine M, such that
• For all x and y, the machine M runs in time p(|x|) on input (x, y)
• For all x in L, there exists a string y of length q(|x|) such that M (x, y) = 1
• For all x not in L and all strings y of length q(|x|), M (x, y) = 0

149.2 Background

Many computer science24 problems are contained in NP, like decision versions of many
search25 and optimization problems.

149.2.1 Verifier-based definition

In order to explain the verifier-based definition of NP, consider the subset sum problem26 :
Assume that we are given some integers27 , {−7, −3, −2, 5, 8}, and we wish to know whether
some of these integers sum up to zero. Here, the answer is ”yes”, since the integers {−3,
−2, 5} correspond to the sum (−3) + (−2) + 5 = 0. The task of deciding whether such a
subset with zero sum exists is called the subset sum problem.
To answer whether some of the integers add to zero, we can create an algorithm that obtains
all the possible subsets. As the number of integers that we feed into the algorithm becomes
larger, both the number of subsets and the computation time grow exponentially.
But notice that if we are given a particular subset, we can efficiently verify whether the
subset sum is zero by summing the integers of the subset. If the sum is zero, that subset
is a proof or witness28 that the answer is ”yes”. An algorithm that verifies whether a given
subset has sum zero is a verifier. Clearly, summing the integers of a subset can be done in
polynomial time, and the subset sum problem is therefore in NP.
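The contrast described above can be sketched in Python (the function names are illustrative): the verifier runs in time linear in the size of the witness, while the naive search examines all 2ⁿ − 1 non-empty subsets.

```python
from itertools import chain, combinations

def verify(witness):
    """Polynomial-time verifier: accept iff the witness is a non-empty
    subset summing to zero."""
    return len(witness) > 0 and sum(witness) == 0

def exhaustive_search(nums):
    """Exponential-time solver: try all 2**n - 1 non-empty subsets,
    accepting if any one of them passes the verifier."""
    subsets = chain.from_iterable(
        combinations(nums, r) for r in range(1, len(nums) + 1))
    return any(verify(s) for s in subsets)
```

With the instance from the text, `verify((-3, -2, 5))` accepts the witness, and `exhaustive_search([-7, -3, -2, 5, 8])` finds it, but only after exploring exponentially many candidates in the worst case.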
The above example can be generalized for any decision problem. Given any instance I of
problem Π and witness W, if there exists a verifier V so that given the ordered pair (I, W)
as input, V returns ”yes” in polynomial time if the witness proves that the answer is ”yes”
and ”no” in polynomial time otherwise, then Π is in NP.

22 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
23 https://en.wikipedia.org/wiki/Formal_language
24 https://en.wikipedia.org/wiki/Computer_science
25 https://en.wikipedia.org/wiki/Search_problem
26 https://en.wikipedia.org/wiki/Subset_sum_problem
27 https://en.wikipedia.org/wiki/Integer
28 https://en.wikipedia.org/wiki/Witness_(mathematics)


The ”no”-answer version of this problem is stated as: ”given a finite set of integers, does every
non-empty subset have a nonzero sum?”. The verifier-based definition of NP does not require
an efficient verifier for the ”no”-answers. The class of problems with such verifiers for the
”no”-answers is called co-NP. In fact, it is an open question whether all problems in NP also
have verifiers for the ”no”-answers and thus are in co-NP.
In some literature the verifier is called the ”certifier” and the witness the ”certificate29 ”.[2]

149.2.2 Machine-definition

Equivalent to the verifier-based definition is the following characterization: NP is the class
of decision problems30 solvable by a non-deterministic Turing machine31 that runs in poly-
nomial time32 . That is to say, a problem Π is in NP whenever Π is recognized by some
polynomial-time non-deterministic Turing machine M with an existential acceptance condition,
meaning that w ∈ Π if and only if some computation path of M(w) leads to an accepting state. This
definition is equivalent to the verifier-based definition because a non-deterministic Turing
machine could solve an NP problem in polynomial time by non-deterministically selecting
a certificate and running the verifier on the certificate. Similarly, if such a machine exists,
then a polynomial time verifier can naturally be constructed from it.
In this light, we can define co-NP dually as the class of decision problems Π recognizable by
polynomial-time non-deterministic Turing machines with an existential rejection condition.
Since an existential rejection condition is exactly the same thing as a universal accep-
tance condition, we can understand the NP vs. co-NP question as asking whether the
existential and universal acceptance conditions have the same expressive power for the class
of polynomial-time non-deterministic Turing machines.

149.3 Properties

NP is closed under union33 , intersection34 , concatenation35 , Kleene star36 and reversal37 . It
is not known whether NP is closed under complement38 (this question is the so-called ”NP
versus co-NP” question).

29 https://en.wikipedia.org/wiki/Certificate_(complexity)
30 https://en.wikipedia.org/wiki/Decision_problem
31 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
32 https://en.wikipedia.org/wiki/Polynomial_time
33 https://en.wikipedia.org/wiki/Union_(set_theory)
34 https://en.wikipedia.org/wiki/Intersection
35 https://en.wikipedia.org/wiki/Concatenation
36 https://en.wikipedia.org/wiki/Kleene_star
37 https://en.wikipedia.org/wiki/Formal_language#Operations_on_languages
38 https://en.wikipedia.org/wiki/Complement_(set_theory)


149.4 Why some NP problems are hard to solve

Because of the many important problems in this class, there have been extensive efforts to
find polynomial-time algorithms for problems in NP. However, there remain a large number
of problems in NP that defy such attempts, seeming to require super-polynomial time39 .
Whether these problems are not decidable in polynomial time is one of the greatest open
questions in computer science40 (see P versus NP (”P=NP”) problem41 for an in-depth
discussion).
An important notion in this context is the set of NP-complete42 decision problems, which
is a subset of NP and might be informally described as the ”hardest” problems in NP. If
there is a polynomial-time algorithm for even one of them, then there is a polynomial-time
algorithm for all the problems in NP. Because of this, and because dedicated research has
failed to find a polynomial algorithm for any NP-complete problem, once a problem has been
proven to be NP-complete this is widely regarded as a sign that a polynomial algorithm for
this problem is unlikely to exist.
However, in practical uses, instead of spending computational resources looking for an
optimal solution, a good enough (but potentially suboptimal) solution may often be found
in polynomial time. Also, the real-life applications of some problems are easier than their
theoretical equivalents.

149.5 Equivalence of definitions

The two definitions of NP as the class of problems solvable by a nondeterministic Turing
machine43 (TM) in polynomial time and the class of problems verifiable by a determin-
istic Turing machine in polynomial time are equivalent. The proof is described by many
textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3.
To show this, first suppose we have a deterministic verifier. A nondeterministic machine
can simply nondeterministically run the verifier on all possible proof strings (this requires
only polynomially many steps because it can nondeterministically choose the next character
in the proof string in each step, and the length of the proof string must be polynomially
bounded). If any proof is valid, some path will accept; if no proof is valid, the string is not
in the language and it will reject.
Conversely, suppose we have a nondeterministic TM called A accepting a given language
L. At each of its polynomially many steps, the machine's computation tree44 branches in
at most a finite number of directions. There must be at least one accepting path, and
the string describing this path is the proof supplied to the verifier. The verifier can then
deterministically simulate A, following only the accepting path, and verifying that it accepts

39 https://en.wikipedia.org/wiki/Super-polynomial_time
40 https://en.wikipedia.org/wiki/Computer_science
41 https://en.wikipedia.org/wiki/P_versus_NP_problem
42 https://en.wikipedia.org/wiki/NP-complete
43 https://en.wikipedia.org/wiki/Turing_machine
44 https://en.wikipedia.org/wiki/Computation_tree


at the end. If A rejects the input, there is no accepting path, and the verifier will always
reject.

149.6 Relationship to other classes

NP contains all problems in P45 , since one can verify any instance of the problem by simply
ignoring the proof and solving it. NP is contained in PSPACE46 —to show this, it suffices
to construct a PSPACE machine that loops over all proof strings and feeds each one to a
polynomial-time verifier. Since a polynomial-time machine can only read polynomially many
bits, it cannot use more than polynomial space, nor can it read a proof string occupying
more than polynomial space (so we do not have to consider proofs longer than this). NP is
also contained in EXPTIME47 , since the same algorithm operates in exponential time.
co-NP contains those problems that have a simple proof for ”no” instances, sometimes called
counterexamples. For example, primality testing48 trivially lies in co-NP, since one can
refute the primality of an integer by merely supplying a nontrivial factor. NP and co-NP
together form the first level in the polynomial hierarchy49 , higher only than P.
The verifier-based definition of NP uses only deterministic machines. If we permit the verifier
to be probabilistic (this, however, is not necessarily a BPP machine[6] ), we get the class MA,
solvable using an Arthur-Merlin protocol50 with no communication from Arthur to Merlin.
NP is a class of decision problems51 ; the analogous class of function problems is FNP52 .
The only known strict inclusions come from the time hierarchy theorem53 and the space
hierarchy theorem54 ; they are, respectively, NP ⊊ NEXPTIME and NP ⊊ EXPSPACE.

149.7 Other characterizations

In terms of descriptive complexity theory55 , NP corresponds precisely to the set of languages
definable by existential second-order logic56 (Fagin's theorem57 ).
NP can be seen as a very simple type of interactive proof system58 , where the prover comes
up with the proof certificate and the verifier is a deterministic polynomial-time machine

45 https://en.wikipedia.org/wiki/P_(complexity)
46 https://en.wikipedia.org/wiki/PSPACE
47 https://en.wikipedia.org/wiki/EXPTIME
48 https://en.wikipedia.org/wiki/Primality_test
49 https://en.wikipedia.org/wiki/Polynomial_hierarchy
50 https://en.wikipedia.org/wiki/Arthur-Merlin_protocol
51 https://en.wikipedia.org/wiki/Decision_problem
52 https://en.wikipedia.org/wiki/FNP_(complexity)
53 https://en.wikipedia.org/wiki/Time_hierarchy_theorem
54 https://en.wikipedia.org/wiki/Space_hierarchy_theorem
55 https://en.wikipedia.org/wiki/Descriptive_complexity_theory
56 https://en.wikipedia.org/wiki/Second-order_logic
57 https://en.wikipedia.org/wiki/Fagin%27s_theorem
58 https://en.wikipedia.org/wiki/Interactive_proof_system


that checks it. It is complete because the right proof string will make it accept if there is
one, and it is sound because the verifier cannot accept if there is no acceptable proof string.
A major result of complexity theory is that NP can be characterized as the problems solvable
by probabilistically checkable proofs59 where the verifier uses O(log n) random bits and
examines only a constant number of bits of the proof string (the class PCP(log n, 1)).
More informally, this means that the NP verifier described above can be replaced with one
that just ”spot-checks” a few places in the proof string, and using a limited number of coin
flips can determine the correct answer with high probability. This allows several results
about the hardness of approximation algorithms60 to be proven.

149.8 Example

This is a list of some problems that are in NP:
All problems in P61 , denoted P ⊆ NP. Given a certificate for a problem in P, we can ignore
the certificate and just solve the problem in polynomial time.
The decision version of the travelling salesman problem62 is in NP. Given an input matrix
of distances between n cities, the problem is to determine if there is a route visiting all cities
with total distance less than k.
A proof can simply be a list of the cities. Then verification can clearly be done in polynomial
time. It simply adds the matrix entries corresponding to the paths between the cities.
A non-deterministic Turing machine63 can find such a route as follows:
• At each city it visits it will ”guess” the next city to visit, until it has visited every city.
If it gets stuck, it stops immediately.
• At the end it verifies that the route it has taken has cost less than k in O64 (n) time.
One can think of each guess as ”forking65 ” a new copy of the Turing machine to follow each
of the possible paths forward, and if at least one machine finds a route of distance less than
k, that machine accepts the input. (Equivalently, this can be thought of as a single Turing
machine that always guesses correctly.)
A binary search66 on the range of possible distances can convert the decision version of
Traveling Salesman to the optimization version, by calling the decision version repeatedly
(a polynomial number of times).
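Both halves of this argument are easy to make concrete. The sketch below (in Python, assuming integer distances; `decide` stands in for any oracle for the decision version) shows a polynomial-time verifier for a certificate route, together with the binary-search conversion from the decision version to the optimization version:

```python
def verify_tsp_certificate(dist, route, k):
    """Polynomial-time verifier: is `route` a tour of all n cities
    with total distance strictly less than k?"""
    n = len(dist)
    if sorted(route) != list(range(n)):   # must visit every city exactly once
        return False
    total = sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))
    return total < k

def tsp_optimum(dist, decide):
    """Convert decision to optimization: binary-search the range of
    possible tour lengths, calling the decision oracle `decide(dist, k)`
    (is there a tour of total distance < k?) a polynomial number of times."""
    lo, hi = 0, sum(sum(row) for row in dist) + 1   # any tour length is below hi
    while lo < hi:                                  # invariant: optimum in [lo, hi]
        mid = (lo + hi) // 2
        if decide(dist, mid + 1):   # a tour of length <= mid exists
            hi = mid
        else:
            lo = mid + 1
    return lo
```

The binary search makes O(log D) oracle calls, where D bounds the total distance, so a polynomial-time decision procedure would yield a polynomial-time optimizer.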
The decision problem version of the integer factorization problem67 : given integers n and
k, is there a factor f with 1 < f < k and f dividing n?
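This decision problem is easy to state in code. In the sketch below, the brute-force search is exponential in the bit length of n, while checking a proposed factor (the certificate) is the easy, polynomial-time part:

```python
def has_small_factor(n, k):
    """Decision version of integer factorization: does n have a factor f
    with 1 < f < k?  Brute-force trial division, exponential in the bit
    length of n."""
    return any(n % f == 0 for f in range(2, k))

def verify_factor(n, k, f):
    """NP-style verification: the certificate is just the factor itself,
    and checking it is a single division."""
    return 1 < f < k and n % f == 0
```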

59 https://en.wikipedia.org/wiki/Probabilistically_checkable_proof
60 https://en.wikipedia.org/wiki/Approximation_algorithm
61 https://en.wikipedia.org/wiki/P_(complexity)
62 https://en.wikipedia.org/wiki/Travelling_salesman_problem
63 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
64 https://en.wikipedia.org/wiki/Big-O_notation
65 https://en.wikipedia.org/wiki/Fork_(system_call)
66 https://en.wikipedia.org/wiki/Binary_search
67 https://en.wikipedia.org/wiki/Integer_factorization_problem


The Subgraph isomorphism problem68 of determining whether graph G contains a subgraph


that is isomorphic to graph H.
The boolean satisfiability problem69 , where we want to know whether or not a certain
formula in propositional logic70 with boolean variables is true for some value of the variables.

149.9 See also


• Turing machine71

149.10 Notes
1. Polynomial time refers to how quickly the number of operations needed by an algo-
rithm grows relative to the size of the problem. It is therefore a measure of the
algorithm's efficiency.
2. Under the assumption that P≠NP.

149.11 References
1. L, R. E. (1975). ”O      ”.
J. ACM. 22: 151–171. doi72 :10.1145/321864.32187773 . Corollary 1.1.
2. K, J; T, É (2006). Algorithm Design74 (2 .). A-
W. . 46475 . ISBN76 0-321-37291-377 .
3. Alsuwaiyel, M. H.: Algorithms: Design Techniques and Analysis, p. 28378
4. W G79 (J 2002). ”T P=?NP ”80 (PDF). SIGACT News.
33 (2): 34–47. doi81 :10.1145/1052796.105280482 . Retrieved 2008-12-29.
5. K, J; T, É (2006). Algorithm Design83 (2 .). . 49684 .
ISBN85 0-321-37291-386 .

68 https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
69 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
70 https://en.wikipedia.org/wiki/Propositional_logic
71 https://en.wikipedia.org/wiki/Turing_machine
72 https://en.wikipedia.org/wiki/Doi_(identifier)
73 https://doi.org/10.1145%2F321864.321877
74 https://archive.org/details/algorithmdesign0000klei
75 https://archive.org/details/algorithmdesign0000klei/page/464
76 https://en.wikipedia.org/wiki/ISBN_(identifier)
77 https://en.wikipedia.org/wiki/Special:BookSources/0-321-37291-3
78 https://books.google.com/books?id=SPx4iHZEOugC&lpg=PP1&pg=PA283#v=onepage&q&f=false
79 https://en.wikipedia.org/wiki/William_Gasarch
80 http://www.cs.umd.edu/~gasarch/papers/poll.pdf
81 https://en.wikipedia.org/wiki/Doi_(identifier)
82 https://doi.org/10.1145%2F1052796.1052804
83 https://archive.org/details/algorithmdesign0000klei
84 https://archive.org/details/algorithmdesign0000klei/page/496
85 https://en.wikipedia.org/wiki/ISBN_(identifier)
86 https://en.wikipedia.org/wiki/Special:BookSources/0-321-37291-3


6. ”C Z:E - C Z”87 . complexityzoo.uwaterloo.ca. Re-


trieved 23 March 2018.

149.12 Further reading


• Thomas H. Cormen88 , Charles E. Leiserson89 , Ronald L. Rivest90 , and Clifford Stein91 .
Introduction to Algorithms92 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN93 0-262-03293-794 . Section 34.2: Polynomial-time verification, pp. 979−983.
• M S95 (1997). Introduction to the Theory of Computation96 . PWS P-
. ISBN97 0-534-94728-X98 . Sections 7.3−7.5 (The Class NP, NP-completeness,
Additional NP-complete Problems), pp. 241−271.
• David Harel99 , Yishai Feldman100 . Algorithmics: The Spirit of Computing, Addison-
Wesley, Reading, MA, 3rd edition, 2004.

149.13 External links


• Complexity Zoo101 : NP102
• Graph of NP-complete Problems103 [dead link]
• American Scientist105 primer on traditional and recent complexity theory research: ”Ac-
cidental Algorithms”106


87 https://complexityzoo.uwaterloo.ca/Complexity_Zoo:E#existsbpp
88 https://en.wikipedia.org/wiki/Thomas_H._Cormen
89 https://en.wikipedia.org/wiki/Charles_E._Leiserson
90 https://en.wikipedia.org/wiki/Ronald_L._Rivest
91 https://en.wikipedia.org/wiki/Clifford_Stein
92 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
93 https://en.wikipedia.org/wiki/ISBN_(identifier)
94 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
95 https://en.wikipedia.org/wiki/Michael_Sipser
96 https://archive.org/details/introductiontoth00sips
97 https://en.wikipedia.org/wiki/ISBN_(identifier)
98 https://en.wikipedia.org/wiki/Special:BookSources/0-534-94728-X
99 https://en.wikipedia.org/wiki/David_Harel
100 https://en.wikipedia.org/w/index.php?title=Yishai_Feldman&action=edit&redlink=1
101 https://en.wikipedia.org/wiki/Complexity_Zoo
102 https://complexityzoo.uwaterloo.ca/Complexity_Zoo:N#np
103 http://page.mi.fu-berlin.de/aneumann/npc.html
105 https://en.wikipedia.org/wiki/American_Scientist
106 https://web.archive.org/web/20081012155440/http://www.americanscientist.org/issues/pub/accidental-algorithms

150 NP-hardness

For a gentler introduction, see P versus NP problem1 .

Figure 409 Euler diagram for P, NP, NP-complete, and NP-hard set of problems. The
left side is valid under the assumption that P≠NP, while the right side is valid under the
assumption that P=NP (except that the empty language and its complement are never
NP-complete)

NP-hardness (non-deterministic polynomial-time2 hardness) is, in computational com-


plexity theory3 , the defining property of a class of problems that are informally ”at least
as hard as the hardest problems in NP”. A simple example of an NP-hard problem is the
subset sum problem4 .
A more precise specification is: a problem H is NP-hard when every problem L in NP
can be reduced5 in polynomial time6 to H; that is, assuming a solution for H takes 1 unit

1 https://en.wikipedia.org/wiki/P_versus_NP_problem
2 https://en.wikipedia.org/wiki/NP_(complexity)
3 https://en.wikipedia.org/wiki/Computational_complexity_theory
4 https://en.wikipedia.org/wiki/Subset_sum_problem
5 https://en.wikipedia.org/wiki/Reduction_(complexity)
6 https://en.wikipedia.org/wiki/Polynomial_time


time, H's solution can be used to solve L in polynomial time.[1][2] As a consequence, finding
a polynomial time algorithm to solve any NP-hard problem would give polynomial time
algorithms for all the problems in NP, which is unlikely as many of them are considered
difficult.[3]
A common misconception is that the NP in ”NP-hard” stands for ”non-polynomial” when
in fact it stands for ”non-deterministic7 polynomial acceptable problems”.[4] It is suspected
that there are no polynomial-time algorithms for NP-hard problems, but that has not been
proven.[5] Moreover, the class P8 , in which all problems can be solved in polynomial time,
is contained in the NP9 class.[6]

150.1 Definition

A decision problem10 H is NP-hard when for every problem L in NP, there is a polynomial-
time many-one reduction11 from L to H.[1]:80 An equivalent definition is to require that every
problem L in NP can be solved in polynomial time12 by an oracle machine13 with an oracle
for H.[7] Informally, one can think of an algorithm that calls such an oracle machine as a
subroutine for solving H, and that solves L in polynomial time if each subroutine call takes
only one step to compute.
Another definition is to require that there be a polynomial-time reduction from an NP-
complete14 problem G to H.[1]:91 As any problem L in NP reduces in polynomial time to G,
L reduces in turn to H in polynomial time so this new definition implies the previous one.
Awkwardly, it does not restrict the class NP-hard to decision problems, and it also includes
search problems15 or optimization problems16 .

150.2 Consequences

If P ≠ NP, then NP-hard problems cannot be solved in polynomial time.


Some NP-hard optimization problems can be polynomial-time approximated17 up to some
constant approximation ratio (in particular, those in APX18 ) or even up to any approxima-
tion ratio (those in PTAS19 or FPTAS20 ).

7 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
8 https://en.wikipedia.org/wiki/P_(complexity)
9 https://en.wikipedia.org/wiki/NP_(complexity)
10 https://en.wikipedia.org/wiki/Decision_problem
11 https://en.wikipedia.org/wiki/Many-one_reduction
12 https://en.wikipedia.org/wiki/Polynomial_time
13 https://en.wikipedia.org/wiki/Oracle_machine
14 https://en.wikipedia.org/wiki/NP-complete
15 https://en.wikipedia.org/wiki/Search_problem
16 https://en.wikipedia.org/wiki/Optimization_problem
17 https://en.wikipedia.org/wiki/Approximation_algorithm
18 https://en.wikipedia.org/wiki/APX
19 https://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme#As_a_complexity_class
20 https://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme#Deterministic


150.3 Examples

An example of an NP-hard problem is the decision subset sum problem21 : given a set of
integers, does any non-empty subset of them add up to zero? That is a decision problem22
and happens to be NP-complete. Another example of an NP-hard problem is the optimiza-
tion problem of finding the least-cost cyclic route through all nodes of a weighted graph.
This is commonly known as the traveling salesman problem23 .[8]
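For subset sum, the certificate is simply the subset itself, and checking it is a single linear-time pass, which is what places the decision version in NP. A minimal sketch:

```python
def verify_subset_sum(numbers, subset_indices):
    """Certificate check for the decision subset sum problem: does the
    given non-empty subset of `numbers` add up to zero?  Runs in linear
    time in the size of the input."""
    if not subset_indices:
        return False                   # the subset must be non-empty
    return sum(numbers[i] for i in subset_indices) == 0
```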
There are decision problems that are NP-hard but not NP-complete, such as the halting
problem24 : given a program and its input, will it run forever? That is a yes/no question,
and so it is a decision problem. It is easy to prove that the
halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability
problem25 can be reduced to the halting problem by transforming it to the description of
a Turing machine26 that tries all truth value27 assignments and when it finds one that
satisfies the formula it halts and otherwise it goes into an infinite loop. It is also easy to
see that the halting problem is not in NP since all problems in NP are decidable in a finite
number of operations, but the halting problem, in general, is undecidable28 . There are
also NP-hard problems that are neither NP-complete nor undecidable. For instance, the
language of true quantified Boolean formulas29 is decidable in polynomial space30 , but not
in non-deterministic polynomial time (unless NP = PSPACE31 ).[9]
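The reduction from satisfiability to the halting problem described above can be sketched directly. In this illustrative Python version, a CNF formula is a list of clauses, where the literal i stands for variable i and -i for its negation:

```python
from itertools import product

def sat_machine(clauses, num_vars):
    """The program produced by the reduction: it halts if and only if the
    CNF formula is satisfiable, and loops forever otherwise.  Deciding
    whether this program halts therefore decides SAT."""
    for assignment in product([False, True], repeat=num_vars):
        value = {i + 1: assignment[i] for i in range(num_vars)}
        if all(any(value[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment          # satisfying assignment found: halt
    while True:                        # unsatisfiable: loop forever
        pass
```

An oracle for the halting problem applied to this program thus answers the satisfiability question, which is the sense in which halting is NP-hard.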

150.4 NP-naming convention

NP-hard problems do not have to be elements of the complexity class NP. As NP plays a
central role in computational complexity32 , it is used as the basis of several classes:
NP33
Class of computational decision problems for which a given yes-solution can be verified as
a solution in polynomial time by a deterministic Turing machine (or solvable by a non-
deterministic Turing machine in polynomial time).
NP-hard
Class of problems which are at least as hard as the hardest problems in NP. Problems that
are NP-hard do not have to be elements of NP; indeed, they may not even be decidable.

21 https://en.wikipedia.org/wiki/Subset_sum_problem
22 https://en.wikipedia.org/wiki/Decision_problem
23 https://en.wikipedia.org/wiki/Traveling_salesman_problem
24 https://en.wikipedia.org/wiki/Halting_problem
25 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
26 https://en.wikipedia.org/wiki/Turing_machine
27 https://en.wikipedia.org/wiki/Truth_value
28 https://en.wikipedia.org/wiki/Undecidable_problem
29 https://en.wikipedia.org/wiki/True_quantified_Boolean_formula
30 https://en.wikipedia.org/wiki/PSPACE
31 https://en.wikipedia.org/wiki/PSPACE
32 https://en.wikipedia.org/wiki/Computational_complexity_theory
33 https://en.wikipedia.org/wiki/NP_(complexity)


NP-complete34
Class of decision problems which contains the hardest problems in NP. Each NP-complete
problem has to be in NP.
NP-easy35
At most as hard as NP, but not necessarily in NP.
NP-equivalent36
Decision problems that are both NP-hard and NP-easy, but not necessarily in NP.
NP-intermediate37
If P and NP are different, then there exist decision problems in the region of NP that
fall between P and the NP-complete problems. (If P and NP are the same class, then
NP-intermediate problems do not exist because in this case every NP-complete problem
would fall in P, and by definition, every problem in NP can be reduced to an NP-complete
problem.)

150.5 Application areas

NP-hard problems are often tackled with rules-based languages in areas including:
• Approximate computing38
• Configuration39
• Cryptography40
• Data mining41
• Decision support42
• Phylogenetics43
• Planning44
• Process monitoring and control
• Rosters or schedules
• Routing/vehicle routing
• Scheduling45

34 https://en.wikipedia.org/wiki/NP-complete
35 https://en.wikipedia.org/wiki/NP-easy
36 https://en.wikipedia.org/wiki/NP-equivalent
37 https://en.wikipedia.org/wiki/NP-intermediate
38 https://en.wikipedia.org/wiki/Approximate_computing
39 https://en.wikipedia.org/wiki/Configuration_management
40 https://en.wikipedia.org/wiki/Cryptography
41 https://en.wikipedia.org/wiki/Data_mining
42 https://en.wikipedia.org/wiki/Decision_support_system
43 https://en.wikipedia.org/wiki/Phylogenetics
44 https://en.wikipedia.org/wiki/Planning
45 https://en.wikipedia.org/wiki/Schedule


150.6 References
1. L, J 46 , . (1998). Handbook of Theoretical Computer Science.
Vol. A, Algorithms and complexity. Amsterdam: Elsevier. ISBN47 026272014048 .
OCLC49 24793436850 .
2. K, D (1974). ”P  NP- ”. ACM
SIGACT News. 6 (2): 15–16. doi51 :10.1145/1008304.100830552 .
3. D P B; P C (1994). Introduction to the Theory
of Complexity. Prentice Hall. p. 69. ISBN53 0-13-915380-254 .
4. ”P  NP”55 . www.cs.uky.edu. Archived from the original56 on 2016-09-16. Re-
trieved 2016-09-25.
5. ”S-O » B A » T S C  P≠NP”57 .
www.scottaaronson.com. Retrieved 2016-09-25.
6. ”PHYS771 L 6: P, NP,  F”58 . www.scottaaronson.com. Re-
trieved 2016-09-25.
7. V. J. R-S (1986). A First Course in Computability. Blackwell. p. 159.
ISBN59 0-632-01307-960 .
8. L, E. L.61 ; L, J. K.62 ; R K, A. H. G.; S, D. B.
(1985), The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimiza-
tion63 , J W & S, ISBN64 0-471-90413-965 .
9. More precisely, this language is PSPACE-complete66 ; see, for example,
W, I (2005), Complexity Theory: Exploring the Limits of Efficient Algo-
rithms67 , S, . 189, ISBN68 978354021045069 .

46 https://en.wikipedia.org/wiki/Jan_van_Leeuwen
47 https://en.wikipedia.org/wiki/ISBN_(identifier)
48 https://en.wikipedia.org/wiki/Special:BookSources/0262720140
49 https://en.wikipedia.org/wiki/OCLC_(identifier)
50 http://www.worldcat.org/oclc/247934368
51 https://en.wikipedia.org/wiki/Doi_(identifier)
52 https://doi.org/10.1145%2F1008304.1008305
53 https://en.wikipedia.org/wiki/ISBN_(identifier)
54 https://en.wikipedia.org/wiki/Special:BookSources/0-13-915380-2
55 https://web.archive.org/web/20160919023326/http://www.cs.uky.edu/~lewis/cs-heuristic/text/class/p-np.html
56 http://www.cs.uky.edu/~lewis/cs-heuristic/text/class/p-np.html
57 http://www.scottaaronson.com/blog/?p=1720
58 http://www.scottaaronson.com/democritus/lec6.html
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/0-632-01307-9
61 https://en.wikipedia.org/wiki/Eugene_Lawler
62 https://en.wikipedia.org/wiki/Jan_Karel_Lenstra
63 https://archive.org/details/travelingsalesma00lawl
64 https://en.wikipedia.org/wiki/ISBN_(identifier)
65 https://en.wikipedia.org/wiki/Special:BookSources/0-471-90413-9
66 https://en.wikipedia.org/wiki/PSPACE-complete
67 https://books.google.com/books?id=1fo7_KoFUPsC&pg=PA189
68 https://en.wikipedia.org/wiki/ISBN_(identifier)
69 https://en.wikipedia.org/wiki/Special:BookSources/9783540210450


• M R. G70  D S. J71 (1979). Computers and Intractability:


A Guide to the Theory of NP-Completeness72 . W.H. F. ISBN73 0-7167-1045-574 .


70 https://en.wikipedia.org/wiki/Michael_R._Garey
71 https://en.wikipedia.org/wiki/David_S._Johnson
72 https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_NP-Completeness
73 https://en.wikipedia.org/wiki/ISBN_(identifier)
74 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-1045-5

151 NP-completeness


Figure 410 Euler diagram for P, NP, NP-complete, and NP-hard set of problems. The
left side is valid under the assumption that P≠NP, while the right side is valid under the
assumption that P=NP (except that the empty language and its complement are never
NP-complete, and in general, not every problem in P or NP is NP-complete)



In computational complexity theory5 , a problem is NP-complete when it can be solved


by a restricted class of brute force search6 algorithms and it can be used to simulate any
other problem with a similar algorithm. More precisely, each input to the problem should
be associated with a set of solutions of polynomial length, whose validity can be tested
quickly (in polynomial time7 ),[1] such that the output for any input is ”yes” if the solution
set is non-empty and ”no” if it is empty. The complexity class of problems of this form
is called NP8 , an abbreviation for ”nondeterministic9 polynomial time”. A problem is said
to be NP-hard10 if everything in NP can be transformed in polynomial time into it, and
a problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems
represent the hardest problems in NP. If any NP-complete problem has a polynomial time
algorithm, all problems in NP do. The set of NP-complete problems is often denoted by
NP-C or NPC.
Although a solution to an NP-complete problem can be verified ”quickly”, there is no known
way to find a solution quickly. That is, the time required to solve the problem using any
currently known algorithm11 increases rapidly as the size of the problem grows. As a
consequence, determining whether it is possible to solve these problems quickly, called the
P versus NP problem12 , is one of the fundamental unsolved problems in computer science13
today.
While a method for computing the solutions to NP-complete problems quickly remains
undiscovered, computer scientists14 and programmers15 still frequently encounter NP-
complete problems. NP-complete problems are often addressed by using heuristic16 methods
and approximation algorithms17 .

151.1 Overview

NP-complete problems are in NP18 , the set of all decision problems19 whose solutions can be
verified in polynomial time; NP may be equivalently defined as the set of decision problems
that can be solved in polynomial time on a non-deterministic Turing machine20 . A problem
p in NP is NP-complete if every other problem in NP can be transformed (or reduced) into
p in polynomial time.

5 https://en.wikipedia.org/wiki/Computational_complexity_theory
6 https://en.wikipedia.org/wiki/Brute_force_search
7 https://en.wikipedia.org/wiki/Polynomial_time
8 https://en.wikipedia.org/wiki/NP_(complexity)
https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine#Deterministic_Turing_
9
Machine
10 https://en.wikipedia.org/wiki/NP-hard
11 https://en.wikipedia.org/wiki/Algorithm
12 https://en.wikipedia.org/wiki/P_versus_NP_problem
13 https://en.wikipedia.org/wiki/List_of_open_problems_in_computer_science
14 https://en.wikipedia.org/wiki/Computer_scientist
15 https://en.wikipedia.org/wiki/Computer_programmer
16 https://en.wikipedia.org/wiki/Heuristic_(computer_science)
17 https://en.wikipedia.org/wiki/Approximation_algorithm
18 https://en.wikipedia.org/wiki/NP_(complexity)
19 https://en.wikipedia.org/wiki/Decision_problem
20 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine


It is not known whether every problem in NP can be quickly solved—this is called the P
versus NP problem21 . But if any NP-complete problem can be solved quickly, then every
problem in NP can, because the definition of an NP-complete problem states that every
problem in NP must be quickly reducible to every NP-complete problem (that is, it can be
reduced in polynomial time). Because of this, it is often said that NP-complete problems
are harder or more difficult than NP problems in general.

151.2 Formal definition

See also: formal definition for NP-completeness (article P = NP)22
A decision problem C is NP-complete if:
1. C is in NP, and
2. Every problem in NP is reducible23 to C in polynomial time.[2]
C can be shown to be in NP by demonstrating that a candidate solution to C can be verified
in polynomial time.
Note that a problem satisfying condition 2 is said to be NP-hard24 , whether or not it satisfies
condition 1.[3]
A consequence of this definition is that if we had a polynomial time algorithm (on a UTM25 ,
or any other Turing-equivalent26 abstract machine27 ) for C , we could solve all problems in
NP in polynomial time.

151.3 Background

The concept of NP-completeness was introduced in 1971 (see Cook–Levin theorem28 ),


though the term NP-complete was introduced later. At the 1971 STOC29 conference, there
was a fierce debate between the computer scientists about whether NP-complete problems
could be solved in polynomial time on a deterministic30 Turing machine31 . John Hopcroft32
brought everyone at the conference to a consensus that the question of whether NP-complete
problems are solvable in polynomial time should be put off to be solved at some later date,
since nobody had any formal proofs for their claims one way or the other. This is known
as the question of whether P=NP.

21 https://en.wikipedia.org/wiki/P_versus_NP_problem
22 https://en.wikipedia.org/wiki/P_%3D_NP_problem#NP-completeness
23 https://en.wikipedia.org/wiki/Many-one_reduction
24 https://en.wikipedia.org/wiki/NP-hard
25 https://en.wikipedia.org/wiki/Universal_Turing_machine
26 https://en.wikipedia.org/wiki/Turing_completeness
27 https://en.wikipedia.org/wiki/Abstract_machine
28 https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem
29 https://en.wikipedia.org/wiki/STOC
30 https://en.wikipedia.org/wiki/Deterministic
31 https://en.wikipedia.org/wiki/Turing_machine
32 https://en.wikipedia.org/wiki/John_Hopcroft


Nobody has yet been able to determine conclusively whether NP-complete problems are
in fact solvable in polynomial time, making this one of the great unsolved problems of
mathematics33 . The Clay Mathematics Institute34 is offering a US$1 million reward to
anyone who has a formal proof that P=NP or that P≠NP.
The Cook–Levin theorem35 states that the Boolean satisfiability problem36 is NP-complete.
In 1972, Richard Karp37 proved that several other problems were also NP-complete (see
Karp's 21 NP-complete problems38 ); thus there is a class of NP-complete problems (besides
the Boolean satisfiability problem). Since the original results, thousands of other problems
have been shown to be NP-complete by reductions from other problems previously shown
to be NP-complete; many of these problems are collected in Garey39 and Johnson's40 1979
book Computers and Intractability: A Guide to the Theory of NP-Completeness41 .[4]

33 https://en.wikipedia.org/wiki/Unsolved_problems_of_mathematics
34 https://en.wikipedia.org/wiki/Clay_Mathematics_Institute
35 https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem
36 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
37 https://en.wikipedia.org/wiki/Richard_Karp
38 https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems
39 https://en.wikipedia.org/wiki/Michael_Garey
40 https://en.wikipedia.org/wiki/David_S._Johnson
https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_
41
NP-Completeness


151.4 NP-complete problems

Figure 411 Some NP-complete problems, indicating the reductions typically used to
prove their NP-completeness

Main article: List of NP-complete problems42
An interesting example is the graph isomorphism problem43 , the graph theory44 problem of determining whether a graph isomorphism45

42 https://en.wikipedia.org/wiki/List_of_NP-complete_problems
43 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
44 https://en.wikipedia.org/wiki/Graph_theory
45 https://en.wikipedia.org/wiki/Graph_isomorphism


exists between two graphs. Two graphs are isomorphic46 if one can be transformed47 into
the other simply by renaming vertices48 . Consider these two problems:
• Graph Isomorphism: Is graph G1 isomorphic to graph G2 ?
• Subgraph Isomorphism: Is graph G1 isomorphic to a subgraph of graph G2 ?
The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is
suspected to be neither in P nor NP-complete, though it is in NP. This is an example of a
problem that is thought to be hard, but is not thought to be NP-complete.
The easiest way to prove that some new problem is NP-complete is first to prove that it is
in NP, and then to reduce some known NP-complete problem to it. Therefore, it is useful
to know a variety of NP-complete problems. The list below contains some well-known
problems that are NP-complete when expressed as decision problems.
• Boolean satisfiability problem (SAT)49
• Knapsack problem50
• Hamiltonian path problem51
• Travelling salesman problem52 (decision version)
• Subgraph isomorphism problem53
• Subset sum problem54
• Clique problem55
• Vertex cover problem56
• Independent set problem57
• Dominating set problem58
• Graph coloring problem59
To the right is a diagram of some of the problems and the reductions60 typically used
to prove their NP-completeness. In this diagram, problems are reduced from bottom to
top. Note that this diagram is misleading as a description of the mathematical relationship
between these problems, as there exists a polynomial-time reduction61 between any two NP-
complete problems; but it indicates where demonstrating this polynomial-time reduction has
been easiest.

46 https://en.wikipedia.org/wiki/Isomorphic
47 https://en.wikipedia.org/wiki/Isomorphism
48 https://en.wikipedia.org/wiki/Vertex_(graph_theory)
49 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
50 https://en.wikipedia.org/wiki/Knapsack_problem
51 https://en.wikipedia.org/wiki/Hamiltonian_path_problem
52 https://en.wikipedia.org/wiki/Travelling_salesman_problem
53 https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem
54 https://en.wikipedia.org/wiki/Subset_sum_problem
55 https://en.wikipedia.org/wiki/Clique_problem
56 https://en.wikipedia.org/wiki/Vertex_cover_problem
57 https://en.wikipedia.org/wiki/Independent_set_problem
58 https://en.wikipedia.org/wiki/Dominating_set_problem
59 https://en.wikipedia.org/wiki/Graph_coloring_problem
60 https://en.wikipedia.org/wiki/Reduction_(complexity)
61 https://en.wikipedia.org/wiki/Polynomial-time_reduction


There is often only a small difference between a problem in P and an NP-complete problem.
For example, the 3-satisfiability62 problem, a restriction of the boolean satisfiability problem,
remains NP-complete, whereas the slightly more restricted 2-satisfiability63 problem is in P
(specifically, NL-complete64 ), and the slightly more general max. 2-sat. problem is again
NP-complete. Determining whether a graph can be colored with 2 colors is in P, but with
3 colors is NP-complete, even when restricted to planar graphs65 . Determining if a graph
is a cycle66 or is bipartite67 is very easy (in L68 ), but finding a maximum bipartite or a
maximum cycle subgraph is NP-complete. A solution of the knapsack problem69 within any
fixed percentage of the optimal solution can be computed in polynomial time, but finding
the optimal solution is NP-complete.
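The 2- versus 3-coloring gap can be made concrete: deciding 2-colorability (equivalently, bipartiteness) is just a breadth-first search for an odd cycle, as in this sketch, where `adj` maps each vertex to its list of neighbours:

```python
from collections import deque

def two_color(adj):
    """Decide 2-colorability in linear time with BFS, returning a valid
    coloring or None if an odd cycle makes one impossible.  By contrast,
    deciding 3-colorability is NP-complete."""
    color = {}
    for start in adj:
        if start in color:
            continue                   # this component is already colored
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # alternate colors across edges
                    queue.append(v)
                elif color[v] == color[u]:
                    return None        # odd cycle: no 2-coloring exists
    return color
```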

151.5 Solving NP-complete problems

At present, all known algorithms for NP-complete problems require time that is superpoly-
nomial70 in the input size, and it is unknown whether there are any faster algorithms.
The following techniques can be applied to solve computational problems in general, and
they often give rise to substantially faster algorithms:
• Approximation71 : Instead of searching for an optimal solution, search for a solution that
is at most a factor from an optimal one.
• Randomization72 : Use randomness to get a faster average running time73 , and allow the
algorithm to fail with some small probability. Note: The Monte Carlo method74 is not an
example of an efficient algorithm in this specific sense, although evolutionary approaches
like Genetic algorithms75 may be.
• Restriction: By restricting the structure of the input (e.g., to planar graphs), faster
algorithms are usually possible.
• Parameterization76 : Often there are fast algorithms if certain parameters of the input are
fixed.
• Heuristic77 : An algorithm that works ”reasonably well” in many cases, but for which there
is no proof that it is both always fast and always produces a good result. Metaheuristic78
approaches are often used.

62 https://en.wikipedia.org/wiki/3-satisfiability
63 https://en.wikipedia.org/wiki/2-satisfiability
64 https://en.wikipedia.org/wiki/NL-complete
65 https://en.wikipedia.org/wiki/Planar_graph
66 https://en.wikipedia.org/wiki/Cycle_graph
67 https://en.wikipedia.org/wiki/Bipartite_graph
68 https://en.wikipedia.org/wiki/L_(complexity)
69 https://en.wikipedia.org/wiki/Knapsack_problem
70 https://en.wikipedia.org/wiki/Superpolynomial
71 https://en.wikipedia.org/wiki/Approximation_algorithm
72 https://en.wikipedia.org/wiki/Randomized_algorithm
73 https://en.wikipedia.org/wiki/Running_time
74 https://en.wikipedia.org/wiki/Monte_Carlo_method
75 https://en.wikipedia.org/wiki/Genetic_algorithm
76 https://en.wikipedia.org/wiki/Parameterized_complexity
77 https://en.wikipedia.org/wiki/Heuristic_(computer_science)
78 https://en.wikipedia.org/wiki/Metaheuristic


One example of a heuristic algorithm is a suboptimal O(n log n) greedy coloring algorithm79
used for graph coloring80 during the register allocation81 phase of some compilers, a tech-
nique called graph-coloring global register allocation82 . Each vertex is a variable, edges are
drawn between variables which are being used at the same time, and colors indicate the reg-
ister assigned to each variable. Because most RISC83 machines have a fairly large number
of general-purpose registers, even a heuristic approach is effective for this application.
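The greedy coloring heuristic described above can be sketched in a few lines. This is an illustrative toy, not a real register allocator; the interference graph below is invented for the example:

```python
def greedy_coloring(graph):
    """Color vertices one at a time, giving each vertex the smallest
    color not already used by a colored neighbor.  This may use more
    colors than the optimum -- which is what makes it a heuristic --
    but it runs quickly and works well on sparse graphs."""
    colors = {}
    for v in graph:
        used = {colors[n] for n in graph[v] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Interference graph: variables live at the same time share an edge;
# colors stand for registers.
interference = {
    'a': ['b', 'c', 'd'],
    'b': ['a', 'c'],
    'c': ['a', 'b'],
    'd': ['a'],
}
print(greedy_coloring(interference))  # {'a': 0, 'b': 1, 'c': 2, 'd': 1}
```

Note how 'd' reuses color 1: two variables that are never live simultaneously can share a register.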

151.6 Completeness under different types of reduction

In the definition of NP-complete given above, the term reduction was used in the technical
meaning of a polynomial-time many-one reduction84 .
Another type of reduction is polynomial-time Turing reduction. A problem X is polynomial-
time Turing-reducible to a problem Y if, given a subroutine that solves Y in polynomial time,
one could write a program that calls this subroutine and solves X in polynomial time. This
contrasts with many-one reducibility, which has the restriction that the program can only
call the subroutine once, and the return value of the subroutine must be the return value
of the program.
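A many-one reduction can be made concrete with a textbook example: Independent Set reduces to Clique by complementing the graph. The helper names and the brute-force Clique "oracle" below are illustrative assumptions; what matters is the shape of the reduction — one transformation, one subroutine call, and the subroutine's answer returned unchanged:

```python
from itertools import combinations

def reduce_indset_to_clique(vertices, edges, k):
    """Many-one reduction: (G, k) has an independent set of size k
    iff the complement graph has a clique of size k.  The instance
    is transformed once; the Clique solver's answer IS the answer."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    comp = all_pairs - {frozenset(e) for e in edges}
    return vertices, comp, k

def has_clique(vertices, edges, k):
    # Brute-force Clique decider, standing in for a subroutine for Y.
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(vertices, k))

# An Independent Set query answered by a single call to the Clique
# subroutine, whose return value is passed through unchanged:
v = [1, 2, 3, 4]
e = [(1, 2), (2, 3), (3, 4)]
print(has_clique(*reduce_indset_to_clique(v, e, 2)))  # True: {1, 3} is independent
```

A Turing reduction, by contrast, would be free to call `has_clique` many times and post-process the answers.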
If one defines the analogue to NP-complete with Turing reductions instead of many-one
reductions, the resulting set of problems won't be smaller than NP-complete; it is an open
question whether it will be any larger.
Another type of reduction that is also often used to define NP-completeness is the
logarithmic-space many-one reduction85 which is a many-one reduction that can be com-
puted with only a logarithmic amount of space. Since every computation that can be
done in logarithmic space86 can also be done in polynomial time it follows that if there is a
logarithmic-space many-one reduction then there is also a polynomial-time many-one reduc-
tion. This type of reduction is more refined than the more usual polynomial-time many-one
reductions and it allows us to distinguish more classes such as P-complete87 . Whether under
these types of reductions the definition of NP-complete changes is still an open problem.
All currently known NP-complete problems are NP-complete under log space reductions. Indeed, all
currently known NP-complete problems remain NP-complete even under much weaker
reductions.[5] It is known, however, that AC088 reductions define a strictly smaller class than
polynomial-time reductions.[6]

79 https://en.wikipedia.org/wiki/Greedy_coloring
80 https://en.wikipedia.org/wiki/Graph_coloring_problem
81 https://en.wikipedia.org/wiki/Register_allocation
https://en.wikipedia.org/w/index.php?title=Graph-coloring_global_register_allocation&
82
action=edit&redlink=1
83 https://en.wikipedia.org/wiki/RISC
84 https://en.wikipedia.org/wiki/Many-one_reduction
85 https://en.wikipedia.org/wiki/Logarithmic-space_many-one_reduction
86 https://en.wikipedia.org/wiki/Logarithmic_space
87 https://en.wikipedia.org/wiki/P-complete
88 https://en.wikipedia.org/wiki/AC0


151.7 Naming

According to Donald Knuth89 , the name ”NP-complete” was popularized by Alfred Aho90 ,
John Hopcroft91 and Jeffrey Ullman92 in their celebrated textbook ”The Design and Analysis
of Computer Algorithms”. He reports that they introduced the change in the galley proofs93
for the book (from ”polynomially-complete”), in accordance with the results of a poll he had
conducted of the theoretical computer science94 community.[7] Other suggestions made in
the poll[8] included ”Herculean95 ”, ”formidable”, Steiglitz96 's ”hard-boiled” in honor of Cook,
and Shen Lin's acronym ”PET”, which stood for ”probably exponential time”, but depending
on which way the P versus NP problem97 went, could stand for ”provably exponential time”
or ”previously exponential time”.[9]

151.8 Common misconceptions

The following misconceptions are frequent.[10]


• ”NP-complete problems are the most difficult known problems.” Since NP-complete prob-
lems are in NP, their running time is at most exponential. However, some problems
provably require more time, for example Presburger arithmetic98 .
• ”NP-complete problems are difficult because there are so many different solutions.” On the
one hand, there are many problems that have a solution space just as large, but can be
solved in polynomial time (for example minimum spanning tree99 ). On the other hand,
there are NP-problems with at most one solution that are NP-hard under randomized
polynomial-time reduction (see Valiant–Vazirani theorem100 ).
• ”Solving NP-complete problems requires exponential time.” First, this would imply P ≠ NP,
which is still an unsolved question. Further, some NP-complete problems actually have
algorithms running in superpolynomial, but subexponential time, such as O(2^√n n). For
example, the independent set101 and dominating set102 problems for planar graphs103
are NP-complete, but can be solved in subexponential time using the planar separator
theorem104 .[11]
• ”All instances of an NP-complete problem are difficult.” Often some instances, or even
most instances, may be easy to solve within polynomial time. However, unless P=NP,

89 https://en.wikipedia.org/wiki/Donald_Knuth
90 https://en.wikipedia.org/wiki/Alfred_Aho
91 https://en.wikipedia.org/wiki/John_Hopcroft
92 https://en.wikipedia.org/wiki/Jeffrey_Ullman
93 https://en.wikipedia.org/wiki/Galley_proofs
94 https://en.wikipedia.org/wiki/Theoretical_computer_science
95 https://en.wikipedia.org/wiki/Labours_of_Hercules
96 https://en.wikipedia.org/wiki/Kenneth_Steiglitz
97 https://en.wikipedia.org/wiki/P_versus_NP_problem
98 https://en.wikipedia.org/wiki/Presburger_arithmetic
99 https://en.wikipedia.org/wiki/Minimum_spanning_tree
100 https://en.wikipedia.org/wiki/Valiant%E2%80%93Vazirani_theorem
101 https://en.wikipedia.org/wiki/Independent_set_problem
102 https://en.wikipedia.org/wiki/Dominating_set_problem
103 https://en.wikipedia.org/wiki/Planar_graph
104 https://en.wikipedia.org/wiki/Planar_separator_theorem


any polynomial-time algorithm must asymptotically be wrong on more than polynomially
many of the exponentially many inputs of a certain size.[12]
• ”If P=NP, all cryptographic ciphers can be broken.” A polynomial-time problem can be
very difficult to solve in practice if the polynomial's degree or constants are large enough.
For example, ciphers with a fixed key length, such as Advanced Encryption Standard105 ,
can all be broken in constant time (and are thus already known to be in P), though
with current technology that constant may exceed the age of the universe. In addition,
information-theoretic security106 provides cryptographic methods that cannot be broken
even with unlimited computing power.

151.9 Properties

Viewing a decision problem107 as a formal language in some fixed encoding, the set NPC of
all NP-complete problems is not closed under:
• union108
• intersection109
• concatenation110
• Kleene star111
It is not known whether NPC is closed under complementation112 , since NPC=co-NPC113
if and only if NP=co-NP114 , and NP=co-NP is an open question115 .[13]

151.10 See also


• Almost complete116
• Gadget (computer science)117
• Ladner's theorem118
• List of NP-complete problems119
• NP-hard120
• P = NP problem121

105 https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
106 https://en.wikipedia.org/wiki/Information-theoretic_security
107 https://en.wikipedia.org/wiki/Decision_problem#Definition
108 https://en.wikipedia.org/wiki/Union_(set_theory)
109 https://en.wikipedia.org/wiki/Intersection
110 https://en.wikipedia.org/wiki/Concatenation
111 https://en.wikipedia.org/wiki/Kleene_star
112 https://en.wikipedia.org/wiki/Complement_(complexity)
113 https://en.wikipedia.org/wiki/Co-NP-complete
114 https://en.wikipedia.org/wiki/Co-NP
115 https://en.wikipedia.org/wiki/Open_problem
116 https://en.wikipedia.org/wiki/Almost_complete
117 https://en.wikipedia.org/wiki/Gadget_(computer_science)
118 https://en.wikipedia.org/wiki/Ladner%27s_theorem
119 https://en.wikipedia.org/wiki/List_of_NP-complete_problems
120 https://en.wikipedia.org/wiki/NP-hard
121 https://en.wikipedia.org/wiki/P_%3D_NP_problem


• Strongly NP-complete122
• Travelling Salesman (2012 film)123

151.11 References

151.11.1 Citations
1. Cobham, Alan124 (1965). ”The intrinsic computational difficulty of functions”. Proc. Logic, Methodology, and Philosophy of Science II. North Holland.
2. J. van Leeuwen (1998). Handbook of Theoretical Computer Science. Elsevier.
p. 84. ISBN125 978-0-262-72014-4126 .
3. J. van Leeuwen (1998). Handbook of Theoretical Computer Science. Elsevier.
p. 80. ISBN127 978-0-262-72014-4128 .
4. Garey, Michael R.129 ; Johnson, David S.130 (1979). Victor Klee131 (ed.).
Computers and Intractability: A Guide to the Theory of NP-Completeness132 .
A Series of Books in the Mathematical Sciences. San Francisco,
Calif.: W. H. Freeman and Co.133 pp. x+338134 . ISBN 978-0-7167-1045-5135 .
MR136 0519066137 .
5. Agrawal, M.139 ; Allender, E.; Rudich, Steven140 (1998). ”Reductions in
Circuit Complexity: An Isomorphism Theorem and a Gap Theorem”. Journal
of Computer and System Sciences. 57 (2): 127–143. doi141 :10.1006/jcss.1998.1583142 .
ISSN143 1090-2724144 .

122 https://en.wikipedia.org/wiki/Strongly_NP-complete
123 https://en.wikipedia.org/wiki/Travelling_Salesman_(2012_film)
124 https://en.wikipedia.org/wiki/Alan_Cobham
125 https://en.wikipedia.org/wiki/ISBN_(identifier)
126 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-72014-4
127 https://en.wikipedia.org/wiki/ISBN_(identifier)
128 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-72014-4
129 https://en.wikipedia.org/wiki/Michael_R._Garey
130 https://en.wikipedia.org/wiki/David_S._Johnson
131 https://en.wikipedia.org/wiki/Victor_Klee
https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_
132
NP-Completeness
133 https://archive.org/details/computersintract0000gare/page/
134 https://en.wikipedia.org/wiki/ISBN_(identifier)
135 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7167-1045-5
136 https://en.wikipedia.org/wiki/MR_(identifier)
137 http://www.ams.org/mathscinet-getitem?mr=0519066
138 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
139 https://en.wikipedia.org/wiki/Manindra_Agrawal
140 https://en.wikipedia.org/wiki/Steven_Rudich
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1006%2Fjcss.1998.1583
143 https://en.wikipedia.org/wiki/ISSN_(identifier)
144 http://www.worldcat.org/issn/1090-2724
145 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv


6. A, M.146 ; A, E.; I, R.; P, T.147 ; R,


S148 (2001). ”R    ”. Computational
Complexity. 10 (2): 117–138. doi149 :10.1007/s00037-001-8191-1150 . ISSN151 1016-
3328152 .CS1 maint: ref=harv (link153 )
7. Don Knuth154 , Tracy Larrabee, and Paul M. Roberts, Mathematical Writ-
ing155 Archived156 2010-08-27 at the Wayback Machine157 § 25, MAA Notes No. 14,
MAA, 1989 (also Stanford158 Technical Report, 1987).
8. K, D. F. (1974). ”A  ”. SIGACT News. 6 (1):
12–18. doi159 :10.1145/1811129.1811130160 .CS1 maint: ref=harv (link161 )
9. See the poll, or [1]162 .
10. B, P. ”DNA    ”163 .
164 :10.1038/000113-10165 .
11. Bern (1990)166 ; Deĭneko, Klinz & Woeginger (2006)167 ; Dorn et al. (2005)168 harvtxt
error: no target: CITEREFDornPenninksBodlaenderFomin2005 (help169 ); Lipton &
Tarjan (1980)170 .
12. H, L. A.; W, R. (2012). ”SIGACT N C-
 T C 76”. ACM SIGACT News. 43 (4): 70.
doi171 :10.1145/2421119.2421135172 .
13. T, J; W, D. J. A.173 (2006), Complexity and Cryptography: An
Introduction174 , C U P, . 57, ISBN175 9780521617710176 ,

146 https://en.wikipedia.org/wiki/Manindra_Agrawal
147 https://en.wikipedia.org/wiki/Toniann_Pitassi
148 https://en.wikipedia.org/wiki/Steven_Rudich
149 https://en.wikipedia.org/wiki/Doi_(identifier)
150 https://doi.org/10.1007%2Fs00037-001-8191-1
151 https://en.wikipedia.org/wiki/ISSN_(identifier)
152 http://www.worldcat.org/issn/1016-3328
153 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
154 https://en.wikipedia.org/wiki/Don_Knuth
155 http://tex.loria.fr/typographie/mathwriting.pdf
https://web.archive.org/web/20100827044400/http://tex.loria.fr/typographie/
156
mathwriting.pdf
157 https://en.wikipedia.org/wiki/Wayback_Machine
158 https://en.wikipedia.org/wiki/Stanford_University
159 https://en.wikipedia.org/wiki/Doi_(identifier)
160 https://doi.org/10.1145%2F1811129.1811130
161 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
162 http://www.cs.princeton.edu/~wayne/kleinberg-tardos/08np-complete-2x2.pdf
163 http://www.nature.com/news/2000/000113/full/news000113-10.html
164 https://en.wikipedia.org/wiki/Doi_(identifier)
165 https://doi.org/10.1038%2Fnews000113-10
166 #CITEREFBern1990
167 #CITEREFDe%C4%ADnekoKlinzWoeginger2006
168 #CITEREFDornPenninksBodlaenderFomin2005
169 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
170 #CITEREFLiptonTarjan1980
171 https://en.wikipedia.org/wiki/Doi_(identifier)
172 https://doi.org/10.1145%2F2421119.2421135
173 https://en.wikipedia.org/wiki/Dominic_Welsh
174 https://books.google.com/books?id=y_ZwupY8pzUC&pg=PA57
175 https://en.wikipedia.org/wiki/ISBN_(identifier)
176 https://en.wikipedia.org/wiki/Special:BookSources/9780521617710


”The question of whether NP and co-NP are equal is probably the second most important
open problem in complexity theory, after the P versus NP question.”

151.11.2 Sources
• Garey, M.R.177 ; Johnson, D.S.178 (1979). Computers and Intractability: A Guide
to the Theory of NP-Completeness179 . New York: W.H. Freeman. ISBN180 978-0-
7167-1045-5181 . This book is a classic, developing the theory, then cataloguing many NP-
Complete problems.
• Cook, S.A.182 (1971). ”The complexity of theorem-proving procedures”. Pro-
ceedings, Third Annual ACM Symposium on the Theory of Computing, ACM, New York.
pp. 151–158. doi183 :10.1145/800157.805047184 .
• Dunne, P.E. ”An annotated list of selected NP-complete problems”185 .
COMP202, Dept. of Computer Science, University of Liverpool186 . Retrieved
2008-06-21.
• Crescenzi, P.; Kann, V.; Halldórsson, M.; Karpinski, M.187 ; Woeginger, G.188
”A compendium of NP optimization problems”189 . KTH NADA, Stockholm. Re-
trieved 2008-06-21.
• Dahlke, K. ”NP-complete problems”190 . Math Reference Project. Retrieved 2008-
06-21.
• Karlsson, R. ”Lecture 8: NP-complete problems”191 (PDF). Dept. of Com-
puter Science, Lund University, Sweden. Archived from the original192 (PDF)
on April 19, 2009. Retrieved 2008-06-21.
• Sun, H.M. ”The theory of NP-completeness”193 (PPT). Information Security
Laboratory, Dept. of Computer Science, National Tsing Hua University194 ,
Hsinchu City, Taiwan. Retrieved 2008-06-21.
• Jiang, J.R. ”The theory of NP-completeness”195 (PPT). Dept. of Com-
puter Science and Information Engineering, National Central University196 ,
Jhongli City, Taiwan. Retrieved 2008-06-21.

177 https://en.wikipedia.org/wiki/Michael_Garey
178 https://en.wikipedia.org/wiki/David_S._Johnson
https://en.wikipedia.org/wiki/Computers_and_Intractability:_A_Guide_to_the_Theory_of_
179
NP-Completeness
180 https://en.wikipedia.org/wiki/ISBN_(identifier)
181 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7167-1045-5
182 https://en.wikipedia.org/wiki/Stephen_A._Cook
183 https://en.wikipedia.org/wiki/Doi_(identifier)
184 https://doi.org/10.1145%2F800157.805047
185 http://www.csc.liv.ac.uk/~ped/teachadmin/COMP202/annotated_np.html
186 https://en.wikipedia.org/wiki/University_of_Liverpool
187 https://en.wikipedia.org/wiki/Marek_Karpinski
188 https://en.wikipedia.org/wiki/Gerhard_J._Woeginger
189 http://www.nada.kth.se/~viggo/problemlist/compendium.html
190 http://www.mathreference.com/lan-cx-np,intro.html
https://web.archive.org/web/20090419082030/http://www.cs.lth.se/home/Rolf_Karlsson/
191
bk/lect8.pdf
192 http://www.cs.lth.se/home/Rolf_Karlsson/bk/lect8.pdf
193 http://is.cs.nthu.edu.tw/course/2008Spring/cs431102/hmsunCh08.ppt
194 https://en.wikipedia.org/wiki/National_Tsing_Hua_University
195 http://www.csie.ncu.edu.tw/%7Ejrjiang/alg2006/NPC-3.ppt
196 https://en.wikipedia.org/wiki/National_Central_University


• C, T.H.197 ; L, C.E.198 ; R, R.L.199 ; S, C.200 (2001). ”C-
 34: NP–C”. Introduction to Algorithms201 (2 .). MIT P
 MG-H. . 966–1021. ISBN202 978-0-262-03293-3203 .
• S, M.204 (1997). ”S 7.4–7.5 (NP-, A NP-
 P)”205 . Introduction to the Theory of Computation. PWS Pub-
lishing. pp. 248–271206 . ISBN207 978-0-534-94728-6208 .
• P, C.209 (1994). ”C 9 (NP- )”. Computa-
tional Complexity (1st ed.). Addison Wesley. pp. 181–218. ISBN210 978-0-201-53082-
7211 .
• Computational Complexity of Games and Puzzles212
• Tetris is Hard, Even to Approximate213
• Minesweeper is NP-complete!214
• B, M (1990). ”F    S   -
 ”. Networks. 20 (1): 109–120. doi215 :10.1002/net.3230200110216 .CS1
maint: ref=harv (link217 ).
• Dĭ, V G.; K, B; W, G J.218 (2006). ”E-
    H     ”. Oper-
ations Research Letters. 34 (3): 269–274. doi219 :10.1016/j.orl.2005.04.013220 .CS1 maint:
ref=harv (link221 ).
• D, F; P, E; B, H L.222 ; F, F V.
(2005). ”E E A  P G: E S
C B D”. Proc. 13th European Symposium on Algorithms
(ESA '05). Lecture Notes in Computer Science. 3669. Springer-Verlag. pp. 95–

197 https://en.wikipedia.org/wiki/Thomas_H._Cormen
198 https://en.wikipedia.org/wiki/Charles_E._Leiserson
199 https://en.wikipedia.org/wiki/Ronald_L._Rivest
200 https://en.wikipedia.org/wiki/Clifford_Stein
201 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
202 https://en.wikipedia.org/wiki/ISBN_(identifier)
203 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03293-3
204 https://en.wikipedia.org/wiki/Michael_Sipser
205 https://archive.org/details/introductiontoth00sips/page/248
206 https://archive.org/details/introductiontoth00sips/page/248
207 https://en.wikipedia.org/wiki/ISBN_(identifier)
208 https://en.wikipedia.org/wiki/Special:BookSources/978-0-534-94728-6
209 https://en.wikipedia.org/wiki/Christos_Papadimitriou
210 https://en.wikipedia.org/wiki/ISBN_(identifier)
211 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-53082-7
212 http://www.ics.uci.edu/~eppstein/cgt/hard.html
213 https://arxiv.org/abs/cs.CC/0210020
https://web.archive.org/web/20061216121200/http://for.mat.bham.ac.uk/R.W.Kaye/minesw/
214
ordmsw.htm
215 https://en.wikipedia.org/wiki/Doi_(identifier)
216 https://doi.org/10.1002%2Fnet.3230200110
217 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
218 https://en.wikipedia.org/wiki/Gerhard_J._Woeginger
219 https://en.wikipedia.org/wiki/Doi_(identifier)
220 https://doi.org/10.1016%2Fj.orl.2005.04.013
221 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
222 https://en.wikipedia.org/wiki/Hans_L._Bodlaender


106. doi223 :10.1007/11561071_11224 . ISBN225 978-3-540-29118-3226 .
• Lipton, Richard J.228 ; Tarjan, Robert E.229 (1980). ”Applications of a pla-
nar separator theorem”. SIAM Journal on Computing230 . 9 (3): 615–627.
doi231 :10.1137/0209046232 .

151.12 Further reading


• Scott Aaronson234 , NP-complete Problems and Physical Reality235 , ACM SIGACT236
News, Vol. 36, No. 1. (March 2005), pp. 30–52.
• Lance Fortnow237 , The status of the P versus NP problem238 , Commun. ACM239 , Vol.
52, No. 9. (2009), pp. 78–86.


223 https://en.wikipedia.org/wiki/Doi_(identifier)
224 https://doi.org/10.1007%2F11561071_11
225 https://en.wikipedia.org/wiki/ISBN_(identifier)
226 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-29118-3
227 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
228 https://en.wikipedia.org/wiki/Richard_J._Lipton
229 https://en.wikipedia.org/wiki/Robert_Tarjan
230 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
231 https://en.wikipedia.org/wiki/Doi_(identifier)
232 https://doi.org/10.1137%2F0209046
233 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
234 https://en.wikipedia.org/wiki/Scott_Aaronson
235 https://arxiv.org/abs/quant-ph/0502072
236 https://en.wikipedia.org/wiki/SIGACT
237 https://en.wikipedia.org/wiki/Lance_Fortnow
238 http://people.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf
239 https://en.wikipedia.org/wiki/Commun._ACM

152 PSPACE

Unsolved problem in computer science:
Is P = PSPACE? (more unsolved problems in computer science1 )

In computational complexity theory2 , PSPACE is the set of all decision problems3 that
can be solved by a Turing machine4 using a polynomial5 amount of space6 .

152.1 Formal definition

If we denote by SPACE(t(n)) the set of all problems that can be solved by Turing machines7
using O(t(n)) space for some function t of the input size n, then we can define PSPACE
formally as[1]

PSPACE = ⋃_{k∈ℕ} SPACE(n^k).

PSPACE is a strict superset of the set of context-sensitive languages8 .


It turns out that allowing the Turing machine to be nondeterministic9 does not add any extra
power. Because of Savitch's theorem10 ,[2] NPSPACE is equivalent to PSPACE, essentially
because a deterministic Turing machine can simulate a non-deterministic Turing machine11
without needing much more space (even though it may use much more time).[3] Also, the
complements12 of all problems in PSPACE are also in PSPACE, meaning that co-PSPACE
= PSPACE.
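The space saving behind Savitch's theorem can be sketched as a recursive midpoint search for reachability. The function below is an illustrative toy (the names and graph encoding are our own, not from the original text): only O(log steps) stack frames are alive at once, each holding a constant number of vertices, which is the source of the quadratic space bound.

```python
def can_reach(adj, u, v, steps):
    """Savitch-style reachability: is there a path from u to v of
    length at most `steps`?  Instead of exploring paths, guess a
    midpoint w and recurse on both halves.  Recursion depth is
    O(log steps), and each frame stores O(1) vertices."""
    if steps <= 1:
        return u == v or v in adj[u]
    half = (steps + 1) // 2
    return any(can_reach(adj, u, w, half) and can_reach(adj, w, v, steps - half)
               for w in adj)

# A simple directed path 0 -> 1 -> 2 -> 3.
adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(can_reach(adj, 0, 3, 4))  # True
```

Applied to the configuration graph of a nondeterministic machine, this reuse of space across the two recursive calls is what shows NPSPACE ⊆ PSPACE.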

1 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
2 https://en.wikipedia.org/wiki/Computational_complexity_theory
3 https://en.wikipedia.org/wiki/Decision_problem
4 https://en.wikipedia.org/wiki/Turing_machine
5 https://en.wikipedia.org/wiki/Polynomial
6 https://en.wikipedia.org/wiki/Space_complexity
7 https://en.wikipedia.org/wiki/Turing_machines
8 https://en.wikipedia.org/wiki/Context-sensitive_language
9 https://en.wikipedia.org/wiki/Nondeterministic_algorithm
10 https://en.wikipedia.org/wiki/Savitch%27s_theorem
11 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
12 https://en.wikipedia.org/wiki/Complement_(complexity)


152.2 Relation among other classes

Figure 412 A representation of the relation among complexity classes

The following relations are known between PSPACE and the complexity classes NL13 , P14 ,
NP15 , PH16 , EXPTIME17 and EXPSPACE18 (note that ⊊, meaning strict containment, is
not the same as ⊈):

13 https://en.wikipedia.org/wiki/NL_(complexity)
14 https://en.wikipedia.org/wiki/P_(complexity)
15 https://en.wikipedia.org/wiki/NP_(complexity)
16 https://en.wikipedia.org/wiki/PH_(complexity)
17 https://en.wikipedia.org/wiki/EXPTIME
18 https://en.wikipedia.org/wiki/EXPSPACE


NL ⊆ P ⊆ NP ⊆ PH ⊆ PSPACE
PSPACE ⊆ EXPTIME ⊆ EXPSPACE
NL ⊊ PSPACE ⊊ EXPSPACE
P ⊊ EXPTIME
It is known that in the first and second line, at least one of the set containments must be
strict, but it is not known which. It is widely suspected that all are strict.
The containments in the third line are both known to be strict. The first follows from
direct diagonalization (the space hierarchy theorem19 , NL ⊊ NPSPACE) and the fact that
PSPACE = NPSPACE via Savitch's theorem20 . The second follows simply from the space
hierarchy theorem.
The hardest problems in PSPACE are the PSPACE-Complete problems. See PSPACE-
Complete21 for examples of problems that are suspected to be in PSPACE but not in NP.

152.3 Closure properties

The class PSPACE is closed under operations union22 , complementation23 , and Kleene
star24 .

152.4 Other characterizations

An alternative characterization of PSPACE is the set of problems decidable by an alternating
Turing machine25 in polynomial time, sometimes called APTIME or just AP.[4]
A logical characterization of PSPACE from descriptive complexity26 theory is that it is the
set of problems expressible in second-order logic27 with the addition of a transitive closure28
operator. A full transitive closure is not needed; a commutative transitive closure and
even weaker forms suffice. It is the addition of this operator that (possibly) distinguishes
PSPACE from PH29 .
A major result of complexity theory is that PSPACE can be characterized as all the lan-
guages recognizable by a particular interactive proof system30 , the one defining the class
IP31 . In this system, there is an all-powerful prover trying to convince a randomized
polynomial-time verifier that a string is in the language. It should be able to convince

19 https://en.wikipedia.org/wiki/Space_hierarchy_theorem
20 https://en.wikipedia.org/wiki/Savitch%27s_theorem
21 https://en.wikipedia.org/wiki/PSPACE-Complete
22 https://en.wikipedia.org/wiki/Union_(set_theory)
23 https://en.wikipedia.org/wiki/Complement_(set_theory)
24 https://en.wikipedia.org/wiki/Kleene_star
25 https://en.wikipedia.org/wiki/Alternating_Turing_machine
26 https://en.wikipedia.org/wiki/Descriptive_complexity
27 https://en.wikipedia.org/wiki/Second-order_logic
28 https://en.wikipedia.org/wiki/Transitive_closure
29 https://en.wikipedia.org/wiki/PH_(complexity)
30 https://en.wikipedia.org/wiki/Interactive_proof_system
31 https://en.wikipedia.org/wiki/IP_(complexity)


the verifier with high probability if the string is in the language, but should not be able to
convince it except with low probability if the string is not in the language.
PSPACE can be characterized as the quantum complexity class QIP32 .[5]
PSPACE is also equal to PCTC , problems solvable by classical computers using closed time-
like curves33 ,[6] as well as to BQPCTC , problems solvable by quantum computers34 using
closed timelike curves.[7]

152.5 PSPACE-completeness

Main article: PSPACE-complete35

A language B is PSPACE-complete36 if it is in PSPACE
and it is PSPACE-hard, which means for all A ∈ PSPACE, A ≤p B, where A ≤p B means
that there is a polynomial-time many-one reduction37 from A to B. PSPACE-complete
problems are of great importance to studying PSPACE problems because they represent
the most difficult problems in PSPACE. Finding a simple solution to a PSPACE-complete
problem would mean we have a simple solution to all other problems in PSPACE because
all PSPACE problems could be reduced to a PSPACE-complete problem.[8]
An example of a PSPACE-complete problem is the quantified Boolean formula problem38
(usually abbreviated to QBF or TQBF; the T stands for ”true”).[8]
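A sketch of why TQBF is in PSPACE: evaluate the formula recursively, reusing the same space for both branches of each quantifier. The encoding of formulas below (quantifier prefix plus CNF clauses as literal lists) is an assumption made for the example:

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a quantified Boolean formula.  The recursion tries
    both truth values of each quantified variable, but its depth
    (and hence the space used) is linear in the number of variables,
    even though the running time is exponential -- the standard
    argument that TQBF is in PSPACE."""
    assignment = assignment or {}
    if not prefix:
        # A literal (v, val) is satisfied when assignment[v] == val.
        return all(any(assignment[v] == val for v, val in clause)
                   for clause in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, var: b})
                for b in (False, True))
    return all(branches) if q == 'forall' else any(branches)

# ∀x ∃y ((x ∨ y) ∧ (¬x ∨ ¬y)) -- true: pick y = ¬x.
prefix = [('forall', 'x'), ('exists', 'y')]
clauses = [[('x', True), ('y', True)], [('x', False), ('y', False)]]
print(eval_qbf(prefix, clauses))  # True
```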

152.6 References
1. Arora & Barak (2009) p. 81
2. Arora & Barak (2009) p. 85
3. Arora & Barak (2009) p. 86
4. Arora & Barak (2009) p. 100
5. Rahul Jain; Zhengfeng Ji; Sarvagya Upadhyay; John Watrous39 (July
2009). ”QIP = PSPACE”. arXiv40 :0907.473741 .
6. S. Aaronson (March 2005). ”NP-complete problems and physical reality”.
SIGACT News. arXiv42 :quant-ph/050207243 . Bibcode44 :2005quant.ph..2072A45 .
7. Watrous, John; Aaronson, Scott (2009). ”Closed timelike curves
make quantum and classical computing equivalent”. Proceedings

32 https://en.wikipedia.org/wiki/QIP_(complexity)
33 https://en.wikipedia.org/wiki/Closed_timelike_curve
34 https://en.wikipedia.org/wiki/Quantum_computer
35 https://en.wikipedia.org/wiki/PSPACE-complete
36 https://en.wikipedia.org/wiki/PSPACE-complete
37 https://en.wikipedia.org/wiki/Polynomial-time_many-one_reduction
38 https://en.wikipedia.org/wiki/Quantified_Boolean_formula_problem
39 https://en.wikipedia.org/wiki/John_Watrous_(computer_scientist)
40 https://en.wikipedia.org/wiki/ArXiv_(identifier)
41 http://arxiv.org/abs/0907.4737
42 https://en.wikipedia.org/wiki/ArXiv_(identifier)
43 http://arxiv.org/abs/quant-ph/0502072
44 https://en.wikipedia.org/wiki/Bibcode_(identifier)
45 https://ui.adsabs.harvard.edu/abs/2005quant.ph..2072A


of the Royal Society A: Mathematical, Physical and Engineering Sciences.
465 (2102): 631. arXiv46 :0808.266947 . Bibcode48 :2009RSPSA.465..631A49 .
doi50 :10.1098/rspa.2008.035051 .
8. Arora & Barak (2009) p. 83
• Arora, Sanjeev52 ; Barak, Boaz (2009). Computational Complexity: A Modern Ap-
proach. Cambridge University Press53 . ISBN54 978-0-521-42426-455 . Zbl56 1193.6811257 .
• Sipser, Michael58 (1997). Introduction to the Theory of Computation59 . PWS
Publishing. ISBN60 0-534-94728-X61 . Section 8.2–8.3 (The Class PSPACE, PSPACE-
completeness), pp. 281–294.
• Papadimitriou, Christos62 (1993). Computational Complexity (1st ed.). Addison
Wesley. ISBN63 0-201-53082-164 . Chapter 19: Polynomial space, pp. 455–490.
• Sipser, Michael65 (2006). Introduction to the Theory of Computation (2nd ed.).
Thomson Course Technology. ISBN66 0-534-95097-367 . Chapter 8: Space Complexity.
• Complexity Zoo68 : PSPACE69


46 https://en.wikipedia.org/wiki/ArXiv_(identifier)
47 http://arxiv.org/abs/0808.2669
48 https://en.wikipedia.org/wiki/Bibcode_(identifier)
49 https://ui.adsabs.harvard.edu/abs/2009RSPSA.465..631A
50 https://en.wikipedia.org/wiki/Doi_(identifier)
51 https://doi.org/10.1098%2Frspa.2008.0350
52 https://en.wikipedia.org/wiki/Sanjeev_Arora
53 https://en.wikipedia.org/wiki/Cambridge_University_Press
54 https://en.wikipedia.org/wiki/ISBN_(identifier)
55 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-42426-4
56 https://en.wikipedia.org/wiki/Zbl_(identifier)
57 http://zbmath.org/?format=complete&q=an:1193.68112
58 https://en.wikipedia.org/wiki/Michael_Sipser
59 https://archive.org/details/introductiontoth00sips
60 https://en.wikipedia.org/wiki/ISBN_(identifier)
61 https://en.wikipedia.org/wiki/Special:BookSources/0-534-94728-X
62 https://en.wikipedia.org/wiki/Christos_Papadimitriou
63 https://en.wikipedia.org/wiki/ISBN_(identifier)
64 https://en.wikipedia.org/wiki/Special:BookSources/0-201-53082-1
65 https://en.wikipedia.org/wiki/Michael_Sipser
66 https://en.wikipedia.org/wiki/ISBN_(identifier)
67 https://en.wikipedia.org/wiki/Special:BookSources/0-534-95097-3
68 https://en.wikipedia.org/wiki/Complexity_Zoo
69 https://complexityzoo.uwaterloo.ca/Complexity_Zoo:P#pspace

153 EXPSPACE

In computational complexity theory1 , EXPSPACE is the set2 of all decision problems3
solvable by a deterministic Turing machine4 in exponential5 space6 , i.e., in O(2^p(n)) space,
where p(n) is a polynomial function of n. Some authors restrict p(n) to be a linear function7 ,
but most authors instead call the resulting class ESPACE8 . If we use a nondeterministic
machine instead, we get the class NEXPSPACE, which is equal to EXPSPACE by Savitch's
theorem9 .
A decision problem is EXPSPACE-complete if it is in EXPSPACE, and every problem in
EXPSPACE has a polynomial-time many-one reduction10 to it. In other words, there is
a polynomial-time algorithm11 that transforms instances of one to instances of the other
with the same answer. EXPSPACE-complete problems might be thought of as the hardest
problems in EXPSPACE.
EXPSPACE is a strict superset of PSPACE12 , NP13 , and P14 and is believed to be a strict
superset of EXPTIME15 .

153.1 Formal definition

In terms of DSPACE16 and NSPACE17 ,

EXPSPACE = ⋃_{k∈ℕ} DSPACE(2^(n^k)) = ⋃_{k∈ℕ} NSPACE(2^(n^k))

1 https://en.wikipedia.org/wiki/Computational_complexity_theory
2 https://en.wikipedia.org/wiki/Set_(mathematics)
3 https://en.wikipedia.org/wiki/Decision_problem
4 https://en.wikipedia.org/wiki/Turing_machine
5 https://en.wikipedia.org/wiki/Exponential_function
6 https://en.wikipedia.org/wiki/Space_complexity
7 https://en.wikipedia.org/wiki/Linear_function
8 https://en.wikipedia.org/wiki/ESPACE
9 https://en.wikipedia.org/wiki/Savitch%27s_theorem
10 https://en.wikipedia.org/wiki/Polynomial-time_many-one_reduction
11 https://en.wikipedia.org/wiki/Algorithm
12 https://en.wikipedia.org/wiki/PSPACE
13 https://en.wikipedia.org/wiki/NP_(complexity)
14 https://en.wikipedia.org/wiki/P_(complexity)
15 https://en.wikipedia.org/wiki/EXPTIME
16 https://en.wikipedia.org/wiki/DSPACE
17 https://en.wikipedia.org/wiki/NSPACE


153.2 Examples of problems

An example of an EXPSPACE-complete problem is the problem of recognizing whether two
regular expressions18 represent different languages, where the expressions are limited to four
operators: union, concatenation19 , the Kleene star20 (zero or more copies of an expression),
and squaring (two copies of an expression).[1]
If the Kleene star is left out, then that problem becomes NEXPTIME21 -complete, which
is like EXPTIME-complete, except it is defined in terms of non-deterministic Turing ma-
chines22 rather than deterministic.
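The succinctness that the squaring operator buys can be made concrete. The sketch below (the representation and function names are ours, not the article's) expands each square X² into the plain concatenation XX: an expression with n nested squares is only O(n) symbols long, but its squaring-free equivalent is exponentially longer, which is the intuition behind the exponential-space lower bound.

```python
# Hypothetical sketch: expand the squaring operator of a regular
# expression into plain concatenation, doubling the size each level.

def expand(expr):
    """Rewrite squaring into concatenation.

    expr is either a string (literal) or a tuple:
      ('sq', e)       -- squaring: two copies of e
      ('cat', e1, e2) -- concatenation
      ('or', e1, e2)  -- union
    """
    if isinstance(expr, str):
        return expr
    op = expr[0]
    if op == 'sq':
        inner = expand(expr[1])        # expand once, emit twice
        return '(' + inner + ')(' + inner + ')'
    if op == 'cat':
        return expand(expr[1]) + expand(expr[2])
    if op == 'or':
        return '(' + expand(expr[1]) + '|' + expand(expr[2]) + ')'
    raise ValueError(op)

# n nested squares describe the single string a^(2^n) in O(n) symbols...
e = 'a'
for _ in range(4):
    e = ('sq', e)
# ...but the squaring-free expansion contains 2^n copies of 'a'.
print(expand(e).count('a'))  # 16
```

Each level of expansion roughly doubles the output, so removing squaring from an expression of size n can cost 2^n space, which is exactly the succinctness the completeness proof exploits.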
It has also been shown by L. Berman in 1980 that the problem of verifying/falsifying any
first-order23 statement about real numbers24 that involves only addition25 and comparison
(but no multiplication26 ) is in EXPSPACE.
Alur and Henzinger extended linear temporal logic with (integer) timing constraints and
proved that the validity problem of their logic is EXPSPACE-complete.[2]
The reachability problem for Petri nets27 is EXPSPACE-hard.[3]

153.3 Relationship to other classes

EXPSPACE is known to be a strict superset of PSPACE28, NP29, and P30. It is further
suspected to be a strict superset of EXPTIME31; however, this is not known.

153.4 See also


• Game complexity32

18 https://en.wikipedia.org/wiki/Regular_expression
19 https://en.wikipedia.org/wiki/Concatenation
20 https://en.wikipedia.org/wiki/Kleene_star
21 https://en.wikipedia.org/wiki/NEXPTIME
22 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
23 https://en.wikipedia.org/wiki/First-order_logic
24 https://en.wikipedia.org/wiki/Real_number
25 https://en.wikipedia.org/wiki/Addition
26 https://en.wikipedia.org/wiki/Multiplication
27 https://en.wikipedia.org/wiki/Petri_Nets
28 https://en.wikipedia.org/wiki/PSPACE
29 https://en.wikipedia.org/wiki/NP_(complexity)
30 https://en.wikipedia.org/wiki/P_(complexity)
31 https://en.wikipedia.org/wiki/EXPTIME
32 https://en.wikipedia.org/wiki/Game_complexity


153.5 References
1. Meyer, A.R. and L. Stockmeyer33. The equivalence problem for regular expressions
   with squaring requires exponential space34. 13th IEEE Symposium on Switching and
   Automata Theory, Oct 1972, pp. 125–129.
2. Alur, Rajeev; Henzinger, Thomas A. (1994-01-01). ”A Really Temporal Logic”.
   J. ACM. 41 (1): 181–203. doi35:10.1145/174644.174651. ISSN37 0004-5411.
3. Lipton, R. (1976). ”The Reachability Problem Requires Exponential Space”39.
   Technical Report 62. Yale University.
• L. Berman. The complexity of logical theories40. Theoretical Computer Science
  11:71–78, 1980.
• Michael Sipser41 (1997). Introduction to the Theory of Computation42. PWS
  Publishing. ISBN43 0-534-94728-X44. Section 9.1.1: Exponential space completeness,
  pp. 313–317. Demonstrates that determining equivalence of regular expressions with
  exponentiation is EXPSPACE-complete.


33 https://en.wikipedia.org/wiki/Larry_Stockmeyer
34 http://people.csail.mit.edu/meyer/rsq.pdf
35 https://en.wikipedia.org/wiki/Doi_(identifier)
36 https://doi.org/10.1145%2F174644.174651
37 https://en.wikipedia.org/wiki/ISSN_(identifier)
38 http://www.worldcat.org/issn/0004-5411
39 http://citeseer.ist.psu.edu/contextsummary/115623/0
https://www.sciencedirect.com/science/article/pii/0304397580900377/pdf?md5=
40
8baff11bdc8680262f944afb9067dbe3&pid=1-s2.0-0304397580900377-main.pdf&_valck=1
41 https://en.wikipedia.org/wiki/Michael_Sipser
42 https://archive.org/details/introductiontoth00sips
43 https://en.wikipedia.org/wiki/ISBN_(identifier)
44 https://en.wikipedia.org/wiki/Special:BookSources/0-534-94728-X

154 P versus NP problem

Unsolved problem in computer science:
If the solution to a problem is easy to check for correctness, must the problem be
easy to solve? (more unsolved problems in computer science)1

Figure 413 Diagram of complexity classes provided that P ≠ NP. The existence of
problems within NP but outside both P and NP-complete, under that assumption, was
established by Ladner's theorem.[1]

1 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science


Millennium Prize Problems

• P versus NP problem
• Hodge conjecture
• Poincaré conjecture (solved)
• Riemann hypothesis
• Yang–Mills existence and mass gap
• Navier–Stokes existence and smoothness
• Birch and Swinnerton-Dyer conjecture

The P versus NP problem is a major unsolved problem in computer science2. It asks
whether every problem whose solution can be quickly verified can also be solved quickly.
It is one of the seven Millennium Prize Problems3 selected by the Clay Mathematics Insti-
tute4 , each of which carries a US$1,000,000 prize for the first correct solution.
The informal term quickly, used above, means the existence of an algorithm5 solving the task
that runs in polynomial time6 , such that the time to complete the task varies as a polynomial
function7 on the size of the input to the algorithm (as opposed to, say, exponential time8 ).
The general class of questions for which some algorithm can provide an answer in polynomial
time is called ”class P” or just ”P9 ”. For some questions, there is no known way to find an
answer quickly, but if one is provided with information showing what the answer is, it is
possible to verify the answer quickly. The class of questions for which an answer can be
verified in polynomial time is called NP10 , which stands for ”nondeterministic polynomial
time”.[Note 1]
An answer to the P = NP question would determine whether problems that can be verified
in polynomial time can also be solved in polynomial time. If it turned out that P ≠ NP,
which is widely believed, it would mean that there are problems in NP that are harder to
compute than to verify: they could not be solved in polynomial time, but the answer could
be verified in polynomial time.
Aside from being an important problem in computational theory, a proof either way would
have profound implications for mathematics, cryptography, algorithm research, artificial

2 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
3 https://en.wikipedia.org/wiki/Millennium_Prize_Problems
4 https://en.wikipedia.org/wiki/Clay_Mathematics_Institute
5 https://en.wikipedia.org/wiki/Algorithm
6 https://en.wikipedia.org/wiki/Polynomial_time
7 https://en.wikipedia.org/wiki/Polynomial_function
8 https://en.wikipedia.org/wiki/Exponential_time
9 https://en.wikipedia.org/wiki/P_(complexity)
10 https://en.wikipedia.org/wiki/NP_(complexity)


intelligence11, game theory12, multimedia processing, philosophy13, economics14 and many
other fields.[2]

154.1 Example

Consider Sudoku15 , a game where the player is given a partially filled-in grid of numbers and
attempts to complete the grid following certain rules. Given an incomplete Sudoku grid, of
any size, is there at least one legal solution? Any proposed solution is easily verified, and
the time to check a solution grows slowly (polynomially) as the grid gets bigger. However,
all known algorithms for finding solutions take, for difficult examples, time that grows
exponentially as the grid gets bigger. So, Sudoku is in NP (quickly checkable) but does not
seem to be in P (quickly solvable). Thousands of other problems seem similar, in that they
are fast to check but slow to solve. Researchers have shown that many of the problems in
NP have the extra property that a fast solution to any one of them could be used to build a
quick solution to any other problem in NP, a property called NP-completeness16 . Decades
of searching have not yielded a fast solution to any of these problems, so most scientists
suspect that none of these problems can be solved quickly. This, however, has never been
proven.
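The asymmetry described above can be illustrated directly: checking a completed Sudoku grid is a single pass over its rows, columns, and boxes, i.e. polynomial in the grid size, even though no known algorithm finds a completion in polynomial time. The sketch below (a 9×9 verifier; the grid-generating formula is our own illustrative choice, not from the article) shows the "quickly checkable" half.

```python
# Minimal sketch: verifying a filled Sudoku grid is polynomial time --
# one scan of rows, columns, and n x n boxes -- in contrast to the
# apparently exponential cost of *finding* a completion.

def is_valid_solution(grid):
    """Check a filled n^2 x n^2 Sudoku grid (here 9x9, so n = 3)."""
    n = 3
    size = n * n
    want = set(range(1, size + 1))
    rows_ok = all(set(row) == want for row in grid)
    cols_ok = all(set(grid[r][c] for r in range(size)) == want
                  for c in range(size))
    boxes_ok = all(
        set(grid[n * br + i][n * bc + j]
            for i in range(n) for j in range(n)) == want
        for br in range(n) for bc in range(n))
    return rows_ok and cols_ok and boxes_ok

# A valid grid built from a shift pattern (illustrative construction).
grid = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(is_valid_solution(grid))  # True
```

The verifier touches each of the 81 cells a constant number of times, so its cost grows polynomially with grid size; the solver's search space, by contrast, grows exponentially.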

154.2 History

The underlying issues were first discussed in the 1950s, in letters from John Forbes Nash Jr.17
to the National Security Agency18 , and from Kurt Gödel19 to John von Neumann20 . The
precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook21
in his seminal paper ”The complexity of theorem proving procedures”[3] (and independently
by Leonid Levin22 in 1973[4] ) and is considered by many to be the most important open
problem in computer science23 .[5]
Although the P versus NP problem was formally defined in 1971, there were previous
inklings of the problems involved, the difficulty of proof, and the potential consequences.
In 1955, mathematician John Nash wrote a letter to the NSA, where he speculated that
cracking a sufficiently complex code would require time exponential in the length of the
key.[6] If proved (and Nash was suitably skeptical) this would imply what is now called
P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention

11 https://en.wikipedia.org/wiki/Artificial_intelligence
12 https://en.wikipedia.org/wiki/Game_theory
13 https://en.wikipedia.org/wiki/Philosophy
14 https://en.wikipedia.org/wiki/Economics
15 https://en.wikipedia.org/wiki/Sudoku
16 https://en.wikipedia.org/wiki/NP-complete
17 https://en.wikipedia.org/wiki/John_Forbes_Nash_Jr.
18 https://en.wikipedia.org/wiki/National_Security_Agency
19 https://en.wikipedia.org/wiki/Kurt_G%C3%B6del
20 https://en.wikipedia.org/wiki/John_von_Neumann
21 https://en.wikipedia.org/wiki/Stephen_Cook
22 https://en.wikipedia.org/wiki/Leonid_Levin
23 https://en.wikipedia.org/wiki/Computer_science


of the underlying problem occurred in a 1956 letter written by Kurt Gödel24 to John von
Neumann25 . Gödel asked whether theorem-proving (now known to be co-NP-complete26 )
could be solved in quadratic27 or linear time28 ,[7] and pointed out one of the most important
consequences—that if so, then the discovery of mathematical proofs could be automated.

154.3 Context

The relation between the complexity classes29 P and NP is studied in computational com-
plexity theory30 , the part of the theory of computation31 dealing with the resources required
during computation to solve a given problem. The most common resources are time (how
many steps it takes to solve a problem) and space (how much memory it takes to solve a
problem).
In such analysis, a model of the computer for which time must be analyzed is required.
Typically such models assume that the computer is deterministic32 (given the computer's
present state and any inputs, there is only one possible action that the computer might
take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all those decision problems33 (defined below34 ) that can
be solved on a deterministic sequential machine in an amount of time that is polynomial35
in the size of the input; the class NP36 consists of all those decision problems whose positive
solutions can be verified in polynomial time37 given the right information, or equivalently,
whose solution can be found in polynomial time on a non-deterministic38 machine.[8] Clearly,
P ⊆ NP. Arguably the biggest open question in theoretical computer science39 concerns the
relationship between those two classes:
Is P equal to NP?
Since 2002, William Gasarch40 has conducted three polls of researchers concerning this
and related questions.[9][10][11] Confidence that P ≠ NP has been increasing — in 2019, 88%
believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts,
the 2019 answers became 99% believe P ≠ NP.[11]

24 https://en.wikipedia.org/wiki/Kurt_G%C3%B6del
25 https://en.wikipedia.org/wiki/John_von_Neumann
26 https://en.wikipedia.org/wiki/Co-NP-complete
27 https://en.wikipedia.org/wiki/Quadratic_time
28 https://en.wikipedia.org/wiki/Linear_time
29 https://en.wikipedia.org/wiki/Complexity_class
30 https://en.wikipedia.org/wiki/Computational_complexity_theory
31 https://en.wikipedia.org/wiki/Theory_of_computation
32 https://en.wikipedia.org/wiki/Deterministic_computation
33 https://en.wikipedia.org/wiki/Decision_problem
34 #Formal_definitions
35 https://en.wikipedia.org/wiki/Polynomial
36 https://en.wikipedia.org/wiki/NP_(complexity)
37 https://en.wikipedia.org/wiki/Polynomial_time
38 https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
39 https://en.wikipedia.org/wiki/Theoretical_computer_science
40 https://en.wikipedia.org/wiki/William_Gasarch


154.4 NP-completeness

Figure 414 Euler diagram for P, NP, NP-complete, and NP-hard set of problems
(excluding the empty language and its complement, which belong to P but are not
NP-complete)

Main article: NP-completeness41
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete
problems are a set of problems to each of which any other NP problem can be reduced in
polynomial time and whose solution may still be verified in polynomial time. That is, any
NP problem can be transformed into any of the NP-complete problems. Informally, an
NP-complete problem is an NP problem that is at least as ”tough” as any other problem
in NP.
NP-hard42 problems are those at least as hard as NP problems, i.e., all NP problems can
be reduced (in polynomial time) to them. NP-hard problems need not be in NP, i.e., they
need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem43 is NP-complete by the Cook–Levin the-
orem44 , so any instance of any problem in NP can be transformed mechanically into an
instance of the Boolean satisfiability problem in polynomial time. The Boolean satisfiabil-
ity problem is one of many such NP-complete problems. If any NP-complete problem is

41 https://en.wikipedia.org/wiki/NP-completeness
42 https://en.wikipedia.org/wiki/NP-hard
43 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
44 https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem


in P, then it would follow that P = NP. However, many important problems have been
shown to be NP-complete, and no fast algorithm for any of them is known.
Based on the definition alone it is not obvious that NP-complete problems exist; however,
a trivial and contrived NP-complete problem can be formulated as follows: given a de-
scription of a Turing machine45 M guaranteed to halt in polynomial time, does there exist a
polynomial-size input that M will accept?[12] It is in NP because (given an input) it is sim-
ple to check whether M accepts the input by simulating M; it is NP-complete because the
verifier for any particular instance of a problem in NP can be encoded as a polynomial-time
machine M that takes the solution to be verified as input. Then the question of whether
the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability prob-
lem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that
satisfiability is NP-complete contains technical details about Turing machines as they re-
late to the definition of NP. However, after this problem was proved to be NP-complete,
proof by reduction46 provided a simpler way to show that many other problems are also
NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows
that a solution of Sudoku in polynomial time could also be used to complete Latin squares47
in polynomial time.[13] This in turn gives a solution to the problem of partitioning tri-partite
graphs48 into triangles,[14] which could then be used to find solutions for the special case of
SAT known as 3-SAT,[15] which then provides a solution for general Boolean satisfiability.
So a polynomial time solution to Sudoku leads, by a series of mechanical transformations,
to a polynomial time solution of satisfiability, which in turn can be used to solve any other
NP-problem in polynomial time. Using transformations like this, a vast class of seemingly
unrelated problems are all reducible to one another, and are in a sense ”the same problem”.
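The gap at the heart of these reductions is visible even in a toy setting. The sketch below (the CNF formula is a hypothetical example of ours) contrasts the two costs for Boolean satisfiability: checking a proposed assignment takes time linear in the formula size, while the only known general way to find one is to search through up to 2^n assignments.

```python
# Hedged illustration, not the article's construction: polynomial-time
# verification vs. exponential-time search for Boolean satisfiability.

from itertools import product

# CNF formula as a list of clauses; literal k means variable |k|,
# negated if k < 0.  This particular formula is a made-up example.
cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
n_vars = 3

def satisfies(assignment, cnf):
    """Polynomial-time check: O(total formula size)."""
    return all(any(assignment[abs(k)] == (k > 0) for k in clause)
               for clause in cnf)

def brute_force(cnf, n_vars):
    """Exponential-time search: tries up to 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if satisfies(assignment, cnf):
            return assignment
    return None

print(brute_force(cnf, n_vars))
```

By the Cook–Levin theorem, any NP problem's instances can be mechanically translated into formulas like `cnf`, which is why a polynomial-time `brute_force` replacement for SAT would collapse all of NP into P.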

154.5 Harder problems

See also: Complexity class49
Although it is unknown whether P = NP, problems outside of P are known. Just as the
class P is defined in terms of polynomial running time, the class EXPTIME50 is the set
of all decision problems that have exponential running time. In other words, any problem
in EXPTIME is solvable by a deterministic Turing machine51 in O52(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem
is EXPTIME-complete53 if it is in EXPTIME, and every problem in EXPTIME has
a polynomial-time many-one reduction54 to it. A number of problems are known to be
EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are

45 https://en.wikipedia.org/wiki/Turing_machine
46 https://en.wikipedia.org/wiki/Reduction_(complexity)
47 https://en.wikipedia.org/wiki/Latin_square
48 https://en.wikipedia.org/wiki/Multipartite_graph
49 https://en.wikipedia.org/wiki/Complexity_class
50 https://en.wikipedia.org/wiki/EXPTIME
51 https://en.wikipedia.org/wiki/Deterministic_Turing_machine
52 https://en.wikipedia.org/wiki/Big_O_notation
53 https://en.wikipedia.org/wiki/EXPTIME#EXPTIME-complete
54 https://en.wikipedia.org/wiki/Polynomial-time_many-one_reduction


outside P, and so require more than polynomial time. In fact, by the time hierarchy theo-
rem55 , they cannot be solved in significantly less than exponential time. Examples include
finding a perfect strategy for chess56 positions on an N × N board[16] and similar problems
for other board games.[17]
The problem of deciding the truth of a statement in Presburger arithmetic57 requires even
more time. Fischer and Rabin58 proved in 1974[18] that every algorithm that decides the
truth of Presburger statements of length n has a runtime of at least 2^(2^(cn)) for some constant c.
Hence, the problem is known to need more than exponential run time. Even more difficult
are the undecidable problems59 , such as the halting problem60 . They cannot be completely
solved by any algorithm, in the sense that for any particular algorithm there is at least one
input for which that algorithm will not produce the right answer; it will either produce the
wrong answer, finish without giving a conclusive answer, or otherwise run forever without
producing any answer at all.
It is also possible to consider questions other than decision problems. One such class,
consisting of counting problems, is called #P61 : whereas an NP problem asks ”Are there any
solutions?”, the corresponding #P problem asks ”How many solutions are there?” Clearly,
a #P problem must be at least as hard as the corresponding NP problem, since a count
of solutions immediately tells if at least one solution exists, if the count is greater than
zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy
(for example linear-time) P problems.[19] For these problems, it is very easy to tell whether
solutions exist, but thought to be very hard to tell how many. Many of these problems are
#P-complete62 , and hence among the hardest problems in #P, since a polynomial time
solution to any of them would allow a polynomial time solution to all other #P problems.
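A classic instance of the surprise noted above, sketched below with a hypothetical toy graph of ours: deciding whether a bipartite graph has a perfect matching is in P (augmenting-path algorithms run in polynomial time), but counting the perfect matchings is computing the permanent of the biadjacency matrix, a #P-complete problem with no known polynomial-time algorithm.

```python
# Hedged sketch: existence of a perfect matching (easy, in P) versus
# counting perfect matchings (the permanent, #P-complete).  The naive
# counter below sums over all n! pairings.

from itertools import permutations

# Biadjacency matrix of a toy bipartite graph (made-up example):
# adj[u][v] == 1 iff left vertex u is joined to right vertex v.
adj = [[1, 1, 0],
       [0, 1, 1],
       [1, 0, 1]]

def has_perfect_matching(adj):
    """Polynomial time: augmenting-path (Hungarian-style) search."""
    n = len(adj)
    match = [-1] * n                 # match[v] = left vertex using right v
    def try_assign(u, seen):
        for v in range(n):
            if adj[u][v] and v not in seen:
                seen.add(v)
                if match[v] == -1 or try_assign(match[v], seen):
                    match[v] = u
                    return True
        return False
    return all(try_assign(u, set()) for u in range(n))

def count_perfect_matchings(adj):
    """Exponential time: the permanent, summed over all n! pairings."""
    n = len(adj)
    return sum(all(adj[u][p[u]] for u in range(n))
               for p in permutations(range(n)))

print(has_perfect_matching(adj), count_perfect_matchings(adj))  # True 2
```

A positive count always certifies existence, which is the sense in which the #P question is at least as hard as the NP question; the converse direction is what fails in practice.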

154.6 Problems in NP not known to be in P or


NP-complete

Main article: NP-intermediate63
In 1975, Richard E. Ladner64 showed that if P ≠ NP then
there exist problems in NP that are neither in P nor NP-complete.[1] Such problems are
called NP-intermediate problems. The graph isomorphism problem65 , the discrete loga-
rithm problem66 and the integer factorization problem67 are examples of problems believed
to be NP-intermediate. They are some of the very few NP problems not known to be in
P or to be NP-complete.

55 https://en.wikipedia.org/wiki/Time_hierarchy_theorem
56 https://en.wikipedia.org/wiki/Chess
57 https://en.wikipedia.org/wiki/Presburger_arithmetic
58 https://en.wikipedia.org/wiki/Michael_O._Rabin
59 https://en.wikipedia.org/wiki/Undecidable_problem
60 https://en.wikipedia.org/wiki/Halting_problem
61 https://en.wikipedia.org/wiki/Sharp-P#P
62 https://en.wikipedia.org/wiki/Sharp-P-complete
63 https://en.wikipedia.org/wiki/NP-intermediate
64 https://en.wikipedia.org/wiki/Richard_E._Ladner
65 https://en.wikipedia.org/wiki/Graph_isomorphism_problem
66 https://en.wikipedia.org/wiki/Discrete_logarithm_problem
67 https://en.wikipedia.org/wiki/Integer_factorization_problem


The graph isomorphism problem is the computational problem of determining whether two
finite graphs68 are isomorphic69 . An important unsolved problem in complexity theory is
whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The
answer is not known, but it is believed that the problem is at least not NP-complete.[20] If
graph isomorphism is NP-complete, the polynomial time hierarchy70 collapses to its second
level.[21][22] Since it is widely believed that the polynomial hierarchy does not collapse to any
finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm
for this problem, due to László Babai71 and Eugene Luks72, has run time 2^(O(√(n log n))) for
graphs with n vertices.
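To see why a sub-exponential bound like the one above is notable, compare it with the naive approach, sketched below with made-up example graphs (this is emphatically not the Babai–Luks algorithm): trying every bijection between the vertex sets costs n! time.

```python
# Illustrative brute force for graph isomorphism: try all n! vertex
# relabelings.  Example graphs are hypothetical 4-vertex graphs.

from itertools import permutations

def are_isomorphic(edges_g, edges_h, n):
    """Return True iff some bijection of {0..n-1} maps G onto H."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    if len(g) != len(h):             # cheap invariant: edge counts
        return False
    for perm in permutations(range(n)):
        if {frozenset((perm[u], perm[v])) for u, v in g} == h:
            return True
    return False

# Two differently labeled 4-cycles: 0-1-2-3-0 and 0-2-1-3-0.
c4_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
c4_b = [(0, 2), (2, 1), (1, 3), (3, 0)]
print(are_isomorphic(c4_a, c4_b, 4))  # True
```

The loop body is a polynomial-time check, so a candidate bijection is easy to verify; the n! search is what places the naive method far outside P and makes the 2^(O(√(n log n))) bound a genuine achievement.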
The integer factorization problem73 is the computational problem of determining the prime
factorization74 of a given integer. Phrased as a decision problem, it is the problem of deciding
whether the input has a factor less than k. No efficient integer factorization algorithm is
known, and this fact forms the basis of several modern cryptographic systems, such as the
RSA75 algorithm. The integer factorization problem is in NP and in co-NP76 (and even
in UP and co-UP[23] ). If the problem is NP-complete, the polynomial time hierarchy
will collapse to its first level (i.e., NP = co-NP). The best known algorithm for integer
factorization is the general number field sieve77 , which takes expected time
O(exp(((64n/9) log 2)^(1/3) (log(n log 2))^(2/3)))

to factor an n-bit integer. However, the best known quantum algorithm78 for this problem,
Shor's algorithm79 , does run in polynomial time, although this does not indicate where the
problem lies with respect to non-quantum complexity classes.
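The decision form described above can be sketched concretely (the numbers below are illustrative choices of ours): asking whether N has a nontrivial factor below k is answerable by trial division, but that search is exponential in the bit length of N, whereas verifying a claimed factor is a single division.

```python
# Minimal sketch of the factoring decision problem: search is slow,
# verification of a claimed factor is a one-line polynomial-time check.

def has_factor_below(n, k):
    """Trial division up to k: exponential in the bit length of n."""
    return any(n % d == 0 for d in range(2, min(k, n)))

def verify_factor(n, d):
    """Polynomial-time verification of a claimed nontrivial factor."""
    return 1 < d < n and n % d == 0

n = 2 ** 16 + 1                     # 65537, a Fermat prime
print(has_factor_below(n, 1000))    # False: no factor below 1000
print(verify_factor(91, 7))         # True: 7 divides 91
```

This easy-to-verify, hard-to-find structure (the factor is the certificate) is what places factoring in NP ∩ co-NP and makes its hardness usable by cryptosystems such as RSA.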

68 https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)
69 https://en.wikipedia.org/wiki/Graph_isomorphism
70 https://en.wikipedia.org/wiki/Polynomial_time_hierarchy
71 https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3_Babai
72 https://en.wikipedia.org/wiki/Eugene_Luks
73 https://en.wikipedia.org/wiki/Integer_factorization_problem
74 https://en.wikipedia.org/wiki/Prime_factorization
75 https://en.wikipedia.org/wiki/RSA_(algorithm)
76 https://en.wikipedia.org/wiki/Co-NP
77 https://en.wikipedia.org/wiki/General_number_field_sieve
78 https://en.wikipedia.org/wiki/Quantum_algorithm
79 https://en.wikipedia.org/wiki/Shor%27s_algorithm


154.7 Does P mean ”easy”?

Figure 415 The graph shows time (average of 100 instances in ms using a 933 MHz
Pentium III) vs. problem size for knapsack problems, for a state-of-the-art specialized
algorithm. The quadratic fit suggests that the empirical algorithmic complexity for
instances with 50–10,000 variables is O((log(n))^2).[24]

All of the above discussion has assumed that P means ”easy” and ”not in P” means ”hard”,
an assumption known as Cobham's thesis80 . It is a common and reasonably accurate as-
sumption in complexity theory; however, it has some caveats.
First, it is not always true in practice. A theoretical polynomial algorithm may have ex-
tremely large constant factors or exponents thus rendering it impractical. For example, the
problem of deciding81 whether a graph G contains H as a minor82, where H is fixed, can
be solved in a running time of O(n^2),[25] where n is the number of vertices in G. However,
the big O notation83 hides a constant that depends superexponentially on H. The constant
is greater than 2↑↑(2↑↑(2↑↑(h/2))) (using Knuth's up-arrow notation84), where h is
the number of vertices in H.[26]
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP,
there may still be effective approaches to tackling the problem in practice. There are
algorithms for many NP-complete problems, such as the knapsack problem85 , the traveling

80 https://en.wikipedia.org/wiki/Cobham%27s_thesis
81 https://en.wikipedia.org/wiki/Decision_problem
82 https://en.wikipedia.org/wiki/Graph_minor
83 https://en.wikipedia.org/wiki/Big_O_notation
84 https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation
85 https://en.wikipedia.org/wiki/Knapsack_problem


salesman problem86 and the Boolean satisfiability problem87 , that can solve to optimality
many real-world instances in reasonable time. The empirical average-case complexity88
(time vs. problem size) of such algorithms can be surprisingly low. An example is the
simplex algorithm89 in linear programming90 , which works surprisingly well in practice;
despite having exponential worst-case time complexity91 it runs on par with the best known
polynomial-time algorithms.[27]
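The knapsack problem mentioned above is a good concrete case of NP-completeness coexisting with practical solvability, sketched below with made-up item weights and values: the classic dynamic-programming algorithm runs in O(n·W) time, which is polynomial in the numeric value of the capacity W (though not in its bit length), so instances with moderate capacities are solved to optimality routinely.

```python
# Hedged sketch: pseudo-polynomial dynamic programming for 0/1 knapsack.
# NP-complete in general, yet fast whenever the capacity W is moderate.

def knapsack(weights, values, capacity):
    """Maximum total value achievable with total weight <= capacity."""
    best = [0] * (capacity + 1)      # best[w]: max value using weight <= w
    for wt, val in zip(weights, values):
        # Descending order so each item is used at most once (0/1).
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]

# Illustrative instance: items (weight, value) = (3,30), (4,50), (5,60).
print(knapsack([3, 4, 5], [30, 50, 60], 8))  # 90: take weights 3 and 5
```

The table has W+1 entries and each item updates it once, hence O(n·W); writing W in binary makes this exponential in the input size, which is how the algorithm evades the NP-completeness of the general problem while still handling many real-world instances.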
Finally, there are types of computations which do not conform to the Turing machine
model on which P and NP are defined, such as quantum computation92 and randomized
algorithms93 .

154.8 Reasons to believe P ≠ NP or P = NP

According to polls,[9][28] most computer scientists believe that P ≠ NP. A key reason for
this belief is that after decades of studying these problems no one has been able to find
a polynomial-time algorithm for any of more than 3000 important known NP-complete
problems (see List of NP-complete problems94 ). These algorithms were sought long before
the concept of NP-completeness was even defined (Karp's 21 NP-complete problems95 ,
among the first found, were all well-known existing problems at the time they were shown
to be NP-complete). Furthermore, the result P = NP would imply many other startling
results that are currently believed to be false, such as NP = co-NP96 and P = PH97 .
It is also intuitively argued that the existence of problems that are hard to solve but for
which the solutions are easy to verify matches real-world experience.[29]
If P = NP, then the world would be a profoundly different place than we usually assume
it to be. There would be no special value in ”creative leaps,” no fundamental gap between
solving a problem and recognizing the solution once it's found.

S A98 , UT A99
On the other hand, some researchers believe that there is overconfidence in believing
P ≠ NP and that researchers should explore proofs of P = NP as well. For example, in
2002 these statements were made:[9]

86 https://en.wikipedia.org/wiki/Traveling_salesman_problem
87 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
88 https://en.wikipedia.org/wiki/Average-case_complexity
89 https://en.wikipedia.org/wiki/Simplex_algorithm
90 https://en.wikipedia.org/wiki/Linear_programming
91 https://en.wikipedia.org/wiki/Time_complexity
92 https://en.wikipedia.org/wiki/Quantum_computation
93 https://en.wikipedia.org/wiki/Randomized_algorithm
94 https://en.wikipedia.org/wiki/List_of_NP-complete_problems
95 https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems
96 https://en.wikipedia.org/wiki/Co-NP
97 https://en.wikipedia.org/wiki/PH_(complexity)
98 https://en.wikipedia.org/wiki/Scott_Aaronson
99 https://en.wikipedia.org/wiki/UT_Austin


The main argument in favor of P ≠ NP is the total lack of fundamental progress in the
area of exhaustive search. This is, in my opinion, a very weak argument. The space of
algorithms is very large and we are only at the beginning of its exploration. [...] The
resolution of Fermat's Last Theorem100 also shows that very simple questions may be
settled only by very deep theories.

M Y. V101 , R U102
Being attached to a speculation is not a good guide to research planning. One should
always try both directions of every problem. Prejudice has caused famous mathemati-
cians to fail to solve famous problems whose solution was opposite to their expectations,
even though they had developed all the methods required.

A N103 , C U104

154.9 Consequences of solution

One of the reasons the problem attracts so much attention is the consequences of the
answer. Either direction of resolution would advance theory enormously, and perhaps have
huge practical consequences as well.

154.9.1 P = NP

A proof that P = NP could have stunning practical consequences if the proof leads to
efficient methods for solving some of the important problems in NP. It is also possible
that a proof would not lead directly to efficient methods, perhaps if the proof is non-
constructive105 , or the size of the bounding polynomial is too big to be efficient in practice.
The consequences, both positive and negative, arise since various NP-complete problems
are fundamental in many fields.
Cryptography, for example, relies on certain problems being difficult. A constructive and
efficient solution[Note 2] to an NP-complete problem such as 3-SAT106 would break most
existing cryptosystems including:
• Existing implementations of public-key cryptography107 ,[30] a foundation for many modern
security applications such as secure financial transactions over the Internet.

100 https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem
101 https://en.wikipedia.org/wiki/Moshe_Y._Vardi
102 https://en.wikipedia.org/wiki/Rice_University
103 https://en.wikipedia.org/wiki/Anil_Nerode
104 https://en.wikipedia.org/wiki/Cornell_University
105 https://en.wikipedia.org/wiki/Non-constructive_proof
106 https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#3-satisfiability
107 https://en.wikipedia.org/wiki/Public-key_cryptography


• Symmetric ciphers108 such as AES109 or 3DES110 ,[31] used for the encryption of commu-
nications data.
• Cryptographic hashing111 , which underlies blockchain112 cryptocurrencies113 such as Bit-
coin114 , and is used to authenticate software updates. For these applications, the problem
of finding a pre-image that hashes to a given value must be difficult in order to be use-
ful, and ideally should require exponential time. However, if P = NP, then finding a
pre-image M can be done in polynomial time, through reduction to SAT.[32]
These would need to be modified or replaced by information-theoretically secure115 solutions
not inherently based on P-NP inequivalence.
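The hashing point above can be sketched in miniature (the 16-bit truncated hash below is a deliberately weakened toy of ours, not a real construction): checking one candidate pre-image is a single hash evaluation, but the only generic attack is exhaustive search over the message space, which is what a constructive P = NP proof would short-circuit via reduction to SAT.

```python
# Hedged toy: brute-force pre-image search against a truncated SHA-256.
# Checking a candidate is cheap; the search space is what provides
# security (2^16 here, astronomically larger in real systems).

import hashlib
from itertools import product

def h(msg: bytes) -> bytes:
    """A 16-bit toy hash: the first two bytes of SHA-256."""
    return hashlib.sha256(msg).digest()[:2]

target = h(b'ok')

def find_preimage(target, length):
    """Exhaustive search over all byte strings of the given length."""
    for cand in product(range(256), repeat=length):
        msg = bytes(cand)
        if h(msg) == target:         # one cheap check per candidate
            return msg
    return None

print(find_preimage(target, 2) is not None)  # True
```

With a full 256-bit digest the same loop would need on the order of 2^256 evaluations, which is why pre-image resistance is considered an exponential-time barrier unless P = NP yields a polynomial-time SAT solver.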
On the other hand, there are enormous positive consequences that would follow from ren-
dering tractable many currently mathematically intractable problems. For instance, many
problems in operations research116 are NP-complete, such as some types of integer pro-
gramming117 and the travelling salesman problem118 . Efficient solutions to these problems
would have enormous implications for logistics. Many other important problems, such as
some problems in protein structure prediction119 , are also NP-complete;[33] if these problems
were efficiently solvable it could spur considerable advances in life sciences and biotechnol-
ogy.
But such changes may pale in significance compared to the revolution an efficient method
for solving NP-complete problems would cause in mathematics itself. Gödel, in his early
thoughts on computational complexity, noted that a mechanical method that could solve
any problem would revolutionize mathematics:[34][35]
If there really were a machine with φ(n) ∼ k·n (or even ∼ k·n^2), this would have consequences
of the greatest importance. Namely, it would obviously mean that in spite of
the undecidability of the Entscheidungsproblem120 , the mental work of a mathematician
concerning Yes-or-No questions could be completely replaced by a machine. After all,
one would simply have to choose the natural number n so large that when the machine
does not deliver a result, it makes no sense to think more about the problem.
Similarly, Stephen Cook121 says[36]
... it would transform mathematics by allowing a computer to find a formal proof of
any theorem which has a proof of a reasonable length, since formal proofs can easily be

108 https://en.wikipedia.org/wiki/Symmetric_cipher
109 https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
110 https://en.wikipedia.org/wiki/Triple_DES
111 https://en.wikipedia.org/wiki/Cryptographic_hash_function
112 https://en.wikipedia.org/wiki/Blockchain
113 https://en.wikipedia.org/wiki/Cryptocurrency
114 https://en.wikipedia.org/wiki/Bitcoin
115 https://en.wikipedia.org/wiki/Information-theoretic_security
116 https://en.wikipedia.org/wiki/Operations_research
117 https://en.wikipedia.org/wiki/Integer_programming
118 https://en.wikipedia.org/wiki/Travelling_salesman_problem
119 https://en.wikipedia.org/wiki/Protein_structure_prediction
120 https://en.wikipedia.org/wiki/Entscheidungsproblem
121 https://en.wikipedia.org/wiki/Stephen_Cook


recognized in polynomial time. Example problems may well include all of the CMI prize
problems122 .
Research mathematicians spend their careers trying to prove theorems, and some proofs
have taken decades or even centuries to find after problems have been stated—for instance,
Fermat's Last Theorem123 took over three centuries to prove. A method that is guaranteed
to find proofs to theorems, should one exist of a ”reasonable” size, would essentially end this
struggle.
Donald Knuth124 has stated that he has come to believe that P = NP, but is reserved
about the impact of a possible proof:[37]
[...] I don't believe that the equality P = NP will turn out to be helpful even if it is
proved, because such a proof will almost surely be nonconstructive.

154.9.2 P ≠ NP

A proof that showed that P ≠ NP would lack the practical computational benefits of a proof
that P = NP, but would nevertheless represent a very significant advance in computational
complexity theory and provide guidance for future research. It would allow one to show in a
formal way that many common problems cannot be solved efficiently, so that the attention
of researchers can be focused on partial solutions or solutions to other problems. Due to
widespread belief in P ≠ NP, much of this focusing of research has already taken place.[38]
Also P ≠ NP still leaves open the average-case complexity125 of hard problems in NP. For
example, it is possible that SAT requires exponential time in the worst case, but that
almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo126
has described five hypothetical ”worlds” that could result from different possible resolutions
to the average-case complexity question.[39] These range from ”Algorithmica”, where P =
NP and problems like SAT can be solved efficiently in all instances, to ”Cryptomania”,
where P ≠ NP and generating hard instances of problems outside P is easy, with three
intermediate possibilities reflecting different possible distributions of difficulty over instances
of NP-hard problems. The ”world” where P ≠ NP but all problems in NP are tractable in
the average case is called ”Heuristica” in the paper. A Princeton University127 workshop in
2009 studied the status of the five worlds.[40]

154.10 Results about difficulty of proof

Although the P = NP problem itself remains open despite a million-dollar prize and a huge
amount of dedicated research, efforts to solve the problem have led to several new techniques.
In particular, some of the most fruitful research related to the P = NP problem has been

122 https://en.wikipedia.org/wiki/Clay_Math_Institute#Millennium_Prize_Problems
123 https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem
124 https://en.wikipedia.org/wiki/Donald_Knuth
125 https://en.wikipedia.org/wiki/Average-case_complexity
126 https://en.wikipedia.org/wiki/Russell_Impagliazzo
127 https://en.wikipedia.org/wiki/Princeton_University


in showing that existing proof techniques are not powerful enough to answer the question,
thus suggesting that novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof
techniques in computational complexity theory128 fall into one of the following classifications,
each of which is known to be insufficient to prove that P ≠ NP:
• Relativizing proofs129 : Imagine a world where every algorithm is allowed to make queries
to some fixed subroutine called an oracle130 (a black box which can answer a fixed set of
questions in constant time, such as a black box that solves any given traveling salesman
problem in 1 step), and the running time of the oracle is not counted against the running
time of the algorithm. Most proofs (especially classical ones) apply uniformly in a world
with oracles regardless of what the oracle does. These proofs are called relativizing. In
1975, Baker, Gill, and Solovay131 showed that P = NP with respect to some oracles, while
P ≠ NP for other oracles.[41] Since relativizing proofs can only prove statements that are
uniformly true with respect to all possible oracles, this showed that relativizing techniques
cannot resolve P = NP.
• Natural proofs132 : In 1993, Alexander Razborov133 and Steven Rudich134 defined a general
class of proof techniques for circuit complexity lower bounds, called natural proofs135 .[42]
At the time all previously known circuit lower bounds were natural, and circuit complexity
was considered a very promising approach for resolving P = NP. However, Razborov and
Rudich showed that, if one-way functions136 exist, then no natural proof method can
distinguish between P and NP. Although one-way functions have never been formally
proven to exist, most mathematicians believe that they do, and a proof of their existence
would be a much stronger statement than P ≠ NP. Thus it is unlikely that natural proofs
alone can resolve P = NP.
• Algebrizing proofs: After the Baker-Gill-Solovay result, new non-relativizing proof
techniques were successfully used to prove that IP137 = PSPACE138 . However, in 2008, Scott
Aaronson139 and Avi Wigderson140 showed that the main technical tool used in the
IP = PSPACE proof, known as arithmetization, was also insufficient to resolve P = NP.[43]

These barriers are another reason why NP-complete problems are useful: if a polynomial-
time algorithm can be demonstrated for an NP-complete problem, this would solve the
P = NP problem in a way not excluded by the above results.
These barriers have also led some computer scientists to suggest that the P versus
NP problem may be independent141 of standard axiom systems like ZFC142 (cannot be
proved or disproved within them). The interpretation of an independence result could be
that either no polynomial-time algorithm exists for any NP-complete problem, and such
a proof cannot be constructed in (e.g.) ZFC, or that polynomial-time algorithms for NP-
complete problems may exist, but it is impossible to prove in ZFC that such algorithms
are correct.[44] However, if it can be shown, using techniques of the sort that are currently
known to be applicable, that the problem cannot be decided even with much weaker as-

128 https://en.wikipedia.org/wiki/Computational_complexity_theory
129 https://en.wikipedia.org/w/index.php?title=Relativizing_proof&action=edit&redlink=1
130 https://en.wikipedia.org/wiki/Oracle_machine
131 https://en.wikipedia.org/wiki/Robert_M._Solovay
132 https://en.wikipedia.org/wiki/Natural_proof
133 https://en.wikipedia.org/wiki/Alexander_Razborov
134 https://en.wikipedia.org/wiki/Steven_Rudich
135 https://en.wikipedia.org/wiki/Natural_proof
136 https://en.wikipedia.org/wiki/One-way_functions
137 https://en.wikipedia.org/wiki/IP_(complexity)
138 https://en.wikipedia.org/wiki/PSPACE
139 https://en.wikipedia.org/wiki/Scott_Aaronson
140 https://en.wikipedia.org/wiki/Avi_Wigderson
141 https://en.wikipedia.org/wiki/Independence_(mathematical_logic)
142 https://en.wikipedia.org/wiki/ZFC


sumptions extending the Peano axioms143 (PA) for integer arithmetic, then there would
necessarily exist nearly-polynomial-time algorithms for every problem in NP.[45] Therefore,
if one believes (as most complexity theorists do) that not all problems in NP have efficient
algorithms, it would follow that proofs of independence using those techniques cannot be
possible. Additionally, this result implies that proving independence from PA or ZFC using
currently known techniques is no easier than proving the existence of efficient algorithms
for all problems in NP.

154.11 Claimed solutions

While the P versus NP problem is generally considered unsolved,[46] many amateur and
some professional researchers have claimed solutions. Gerhard J. Woeginger144 maintains
a list that, as of 2018, contains 62 purported proofs of P = NP, 50 proofs of P ≠ NP, two
proofs that the problem is unprovable, and one proof that it is undecidable.[47] Some attempts at
resolving P versus NP have received brief media attention,[48] though these attempts have
since been refuted.

154.12 Logical characterizations

The P = NP problem can be restated in terms of the expressibility of certain classes of logical
statements, as a result of work in descriptive complexity145 .
Consider all languages of finite structures with a fixed signature146 including a linear order147
relation. Then, all such languages in P can be expressed in first-order logic148 with the
addition of a suitable least fixed-point combinator149 . Effectively, this, in combination with
the order, allows the definition of recursive functions. As long as the signature contains at
least one predicate or function in addition to the distinguished order relation, so that the
amount of space taken to store such finite structures is actually polynomial in the number
of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic150 —that is,
second-order logic restricted to exclude universal quantification151 over relations, functions,
and subsets. The languages in the polynomial hierarchy152 , PH153 , correspond to all of
second-order logic. Thus, the question ”is P a proper subset of NP” can be reformulated
as ”is existential second-order logic able to describe languages (of finite linearly ordered
structures with nontrivial signature) that first-order logic with least fixed point cannot?”.[49]

143 https://en.wikipedia.org/wiki/Peano_axioms
144 https://en.wikipedia.org/wiki/Gerhard_J._Woeginger
145 https://en.wikipedia.org/wiki/Descriptive_complexity
146 https://en.wikipedia.org/wiki/Signature_(logic)
147 https://en.wikipedia.org/wiki/Linear_order
148 https://en.wikipedia.org/wiki/First-order_logic
149 https://en.wikipedia.org/wiki/Fixed-point_combinator
150 https://en.wikipedia.org/wiki/Second-order_logic
151 https://en.wikipedia.org/wiki/Universal_quantification
152 https://en.wikipedia.org/wiki/Polynomial_hierarchy
153 https://en.wikipedia.org/wiki/PH_(complexity)


The word ”existential” can even be dropped from the previous characterization, since P =
NP if and only if P = PH (as the former would establish that NP = co-NP, which in
turn implies that NP = PH).
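By Fagin's theorem, this correspondence between NP and existential second-order logic can be made concrete. As a standard worked example (ours, not from the text above): 3-colorability of a graph, viewed as a finite structure with edge relation E, is defined by existentially quantifying three unary relations R, G, B (the color classes) followed by a purely first-order check:

```latex
\exists R\,\exists G\,\exists B\;\Bigl[\,\forall x\,\bigl(R(x)\lor G(x)\lor B(x)\bigr)
\;\land\;\forall x\,\forall y\,\Bigl(E(x,y)\rightarrow
\bigl(\neg(R(x)\land R(y))\land\neg(G(x)\land G(y))\land\neg(B(x)\land B(y))\bigr)\Bigr)\Bigr]
```

Given candidate relations R, G, B, the inner first-order part can be evaluated in polynomial time; the existentially quantified relations play exactly the role of the certificate in the verifier definition of NP.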

154.13 Polynomial-time algorithms

No algorithm for any NP-complete problem is known to run in polynomial time. However,
there are algorithms known for NP-complete problems with the property that if P = NP,
then the algorithm runs in polynomial time on accepting instances (although with enormous
constants, making the algorithm impractical). These algorithms nevertheless do not qualify as
polynomial time, because their running time on rejecting instances is not polynomial. The
following algorithm, due to Levin154 (without any citation), is such an example. It
correctly accepts the NP-complete language SUBSET-SUM155 . It runs in polynomial time
on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// this is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
    FOR M = 1...K
        Run program number M for K steps with input S
        IF the program outputs a list of distinct integers
              AND the integers are all in S
              AND the integers sum to 0
        THEN
            OUTPUT "yes" and HALT

If, and only if, P = NP, then this is a polynomial-time algorithm accepting an NP-complete
language. ”Accepting” means it gives ”yes” answers in polynomial time, but is allowed to
run forever when the answer is ”no” (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that
can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at
least 2ᵇ − 1 other programs first.
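The dovetailing structure above can be sketched in Python. The enumeration of all programs is mocked by a fixed two-element list (an assumption for illustration only; `program_2` is an exponential-time solver standing in for the hypothetical polynomial-time program whose existence P = NP would guarantee), but the K-over-M scheduling and the final verification step mirror the pseudocode:

```python
from itertools import combinations

def program_1(s, budget):
    return None  # a program that never outputs anything

def program_2(s, budget):
    # Exponential-time SUBSET-SUM solver, obeying a step budget of K.
    steps = 0
    for r in range(1, len(s) + 1):
        for combo in combinations(s, r):
            steps += 1
            if steps > budget:   # exceeded its K-step allowance
                return None
            if sum(combo) == 0:
                return list(combo)
    return None

PROGRAMS = [program_1, program_2]

def levin_accept(s, max_k=10_000):
    # Dovetail as in the pseudocode: for K = 1, 2, ... run each of the
    # first K programs for K steps, then verify any claimed output.
    for k in range(1, max_k + 1):
        for m in range(min(k, len(PROGRAMS))):
            out = PROGRAMS[m](s, k)
            if (out is not None and len(out) == len(set(out))
                    and set(out) <= set(s) and sum(out) == 0):
                return "yes"
    return None  # the real algorithm would loop forever instead

print(levin_accept([3, -2, -1, 7]))  # → yes, since 3 + (-2) + (-1) = 0
```

The `max_k` cutoff is only there to keep the toy terminating; the actual semi-algorithm has no such bound and simply never halts on rejecting instances.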

154 https://en.wikipedia.org/wiki/Leonid_Levin
155 https://en.wikipedia.org/wiki/Subset_sum_problem


154.14 Formal definitions

154.14.1 P and NP

Conceptually speaking, a decision problem is a problem that takes as input some string156
w over an alphabet Σ, and outputs ”yes” or ”no”. If there is an algorithm157 (say a Turing
machine158 , or a computer program159 with unbounded memory) that can produce the
correct answer for any input string of length n in at most cnᵏ steps, where k and c are
constants independent of the input string, then we say that the problem can be solved
in polynomial time and we place it in the class P. Formally, P is defined as the set of all
languages that can be decided by a deterministic polynomial-time Turing machine. That
is,
P = {L : L = L(M ) for some deterministic polynomial-time Turing machine M }
where
L(M ) = {w ∈ Σ∗ : M accepts w}
and a deterministic polynomial-time Turing machine is a deterministic Turing machine
M that satisfies the following two conditions:
1. M halts on all inputs w and
2. there exists k ∈ N such that T_M(n) ∈ O(nᵏ), where O refers to the big O notation160
and
T_M(n) = max{t_M(w) : w ∈ Σ∗ , |w| = n}
t_M(w) = the number of steps M takes to halt on input w.
NP can be defined similarly using nondeterministic Turing machines (the traditional way).
However, a modern approach to define NP is to use the concept of certificate161 and verifier.
Formally, NP is defined as the set of languages over a finite alphabet that have a verifier
that runs in polynomial time, where the notion of ”verifier” is defined as follows.
Let L be a language over a finite alphabet, Σ.
L ∈NP if, and only if, there exists a binary relation R ⊂ Σ∗ × Σ∗ and a positive integer
k such that the following two conditions are satisfied:
1. For all x ∈ Σ∗ , x ∈ L ⇔ ∃y ∈ Σ∗ such that (x, y) ∈ R and |y| ∈ O(|x|ᵏ); and
2. the language LR = {x#y : (x, y) ∈ R} over Σ ∪ {#} is decidable by a deterministic
Turing machine in polynomial time.
A Turing machine that decides LR is called a verifier for L and a y such that (x, y) ∈ R is
called a certificate of membership of x in L.

156 https://en.wikipedia.org/wiki/String_(computer_science)
157 https://en.wikipedia.org/wiki/Algorithm
158 https://en.wikipedia.org/wiki/Turing_machine
159 https://en.wikipedia.org/wiki/Computer_programming
160 https://en.wikipedia.org/wiki/Big_O_notation#Formal_definition
161 https://en.wikipedia.org/wiki/Certificate_(complexity)


In general, a verifier does not have to be polynomial-time. However, for L to be in NP,
there must be a verifier that runs in polynomial time.

Example

Let
COMPOSITE = {x ∈ N | x = pq for integers p, q > 1}
R = {(x, y) ∈ N × N | 1 < y ≤ √x and y divides x}.
Clearly, the question of whether a given x is composite162 is equivalent to the question
of whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by
verifying that it satisfies the above definition (if we identify natural numbers with their
binary representations).
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS
primality test163 .[50]
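The verifier in this example can be made concrete. The sketch below (the function name is ours) decides the relation R above, and both checks run in time polynomial in the bit-length of x:

```python
from math import isqrt

def composite_verifier(x: int, y: int) -> bool:
    # Decides membership of x#y in L_R: checks 1 < y <= √x and y | x.
    # Integer square root and modular division are polynomial in the
    # number of bits of x.
    return 1 < y <= isqrt(x) and x % y == 0

print(composite_verifier(15, 3))                             # → True
print(any(composite_verifier(13, y) for y in range(2, 13)))  # → False
```

The certificate y = 3 witnesses that 15 ∈ COMPOSITE, while no y certifies the prime 13, matching the definition of a verifier.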

154.14.2 NP-completeness

Main article: NP-completeness164 There are many equivalent ways of describing NP-
completeness.
Let L be a language over a finite alphabet Σ.
L is NP-complete if, and only if, the following two conditions are satisfied:
1. L ∈NP; and
2. any L′ in NP is polynomial-time-reducible to L (written as L′ ≤p L), where L′ ≤p L
if, and only if, the following two conditions are satisfied:
a) There exists f : Σ* → Σ* such that for all w in Σ* we have: (w ∈ L′ ⇔ f (w) ∈ L);
and
b) there exists a polynomial-time Turing machine that halts with f(w) on its tape
on any input w.
Alternatively, if L ∈NP, and there is another NP-complete problem that can be polynomial-
time reduced to L, then L is NP-complete. This is a common way of proving some new
problem is NP-complete.
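As a toy illustration of such a polynomial-time reduction f (our own example, not the generic construction): a graph on n vertices has an independent set of size k exactly when it has a vertex cover of size n − k, so INDEPENDENT-SET reduces to VERTEX-COVER by complementation:

```python
from itertools import combinations

def reduce_is_to_vc(edges, n, k):
    # The reduction f: an INDEPENDENT-SET instance (G, k) maps to the
    # VERTEX-COVER instance (G, n - k). S is independent iff V \ S covers
    # every edge, so the answers agree; computing f is trivially polynomial.
    return edges, n, n - k

def has_independent_set(edges, n, k):
    # Brute-force oracle, for checking the reduction on tiny graphs only.
    return any(all(not (u in s and v in s) for u, v in edges)
               for s in map(set, combinations(range(n), k)))

def has_vertex_cover(edges, n, k):
    return any(all(u in s or v in s for u, v in edges)
               for s in map(set, combinations(range(n), k)))

# Triangle: independent sets have size ≤ 1, vertex covers need size ≥ 2.
edges, n = [(0, 1), (1, 2), (0, 2)], 3
for k in range(n + 1):
    assert has_independent_set(edges, n, k) == \
           has_vertex_cover(*reduce_is_to_vc(edges, n, k))
```

The mapping runs in constant time here; what matters for ≤p is only that f is computable in time polynomial in the instance size and preserves yes/no answers.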

154.15 Popular culture

The film Travelling Salesman165 , by director Timothy Lanzone, is the story of four mathe-
maticians hired by the US government to solve the P versus NP problem.[51]

162 https://en.wikipedia.org/wiki/Composite_number
163 https://en.wikipedia.org/wiki/AKS_primality_test
164 https://en.wikipedia.org/wiki/NP-completeness
165 https://en.wikipedia.org/wiki/Travelling_Salesman_(2012_film)


In the sixth episode of The Simpsons166 ' seventh season ”Treehouse of Horror VI167 ”,
the equation P=NP is seen shortly after Homer accidentally stumbles into the ”third
dimension”.[52][53]
The second episode of season 2 of Elementary168 , ”Solve for X”169 , revolves around Sherlock
and Watson investigating the murders of mathematicians who were attempting to solve
P versus NP.[54][55]

154.16 See also


• Game complexity170
• List of unsolved problems in mathematics171
• Unique games conjecture172
• Unsolved problems in computer science173

154.17 Notes
1. A nondeterministic Turing machine174 can move to a state that is not determined by
the previous state. Such a machine could solve an NP problem in polynomial time by
falling into the correct answer state (by luck), then conventionally verifying it. Such
machines are not practical for solving realistic problems but can be used as theoretical
models.
2. Exactly how efficient a solution must be to pose a threat to cryptography depends on
the details. A solution of O(N²) with a reasonable constant term would be disastrous.
On the other hand, a solution that is Ω(N⁴) in almost all cases would not pose an
immediate practical danger.

154.18 References
1. R. E. Ladner ”On the structure of polynomial time reducibility,” Journal of the ACM175
22, pp. 151–171, 1975. Corollary 1.1. ACM site176 .
2. Fortnow, Lance (2013). The Golden Ticket: P, NP, and the Search for the
Impossible. Princeton, NJ: Princeton University Press. ISBN177 9780691156491178 .

166 https://en.wikipedia.org/wiki/The_Simpsons
167 https://en.wikipedia.org/wiki/Treehouse_of_Horror_VI
168 https://en.wikipedia.org/wiki/Elementary_(TV_series)
169 https://en.wikipedia.org/wiki/List_of_Elementary_episodes#Season_2_(2013%E2%80%9314)
170 https://en.wikipedia.org/wiki/Game_complexity
171 https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics
172 https://en.wikipedia.org/wiki/Unique_games_conjecture
173 https://en.wikipedia.org/wiki/Unsolved_problems_in_computer_science
174 https://en.wikipedia.org/wiki/Nondeterministic_Turing_machine
175 https://en.wikipedia.org/wiki/Journal_of_the_ACM
http://portal.acm.org/citation.cfm?id=321877&dl=ACM&coll=&CFID=15151515&CFTOKEN=
176
6184618
177 https://en.wikipedia.org/wiki/ISBN_(identifier)
178 https://en.wikipedia.org/wiki/Special:BookSources/9780691156491


3. Cook, Stephen179 (1971). ”The complexity of theorem-proving procedures”180 .
Proceedings of the Third Annual ACM Symposium on Theory of Computing. pp. 151–158.
4. L. A. Levin (1973). ”Universal search problems”181 (in Russian). Problems of
Information Transmission. 9 (3): 115–116.
5. Fortnow, Lance183 (2009). ”The status of the P versus NP problem”184
(PDF). Communications of the ACM. 52 (9): 78–86. CiteSeerX185 10.1.1.156.767186 .
doi187 :10.1145/1562164.1562186188 . Archived from the original189 (PDF) on 24 Febru-
ary 2011. Retrieved 26 January 2010.
6. NSA (2012). ”Letters of John Nash”190 (PDF).
7. Hartmanis, Juris. ”Gödel, von Neumann, and the P = NP problem”191
(PDF). Bulletin of the European Association for Theoretical Computer Science. 38:
101–107.
8. Sipser, Michael: Introduction to the Theory of Computation, Second Edition, Inter-
national Edition, page 270. Thomson Course Technology, 2006. Definition 7.19 and
Theorem 7.20.
9. William I. Gasarch192 (June 2002). ”The P=?NP poll”193 (PDF).
SIGACT News194 . 33 (2): 34–47. CiteSeerX195 10.1.1.172.1005196 .
doi197 :10.1145/564585.564599198 . Retrieved 26 September 2018.
10. William I. Gasarch199 . ”The Second P=?NP poll”200 (PDF). SIGACT News.
74.
11. ”Guest Column: The Third P =? NP Poll”201 (PDF).
12. Scott Aaronson. ”PHYS771 Lecture 6: P, NP, and Friends”202 . Retrieved
27 August 2007.

179 https://en.wikipedia.org/wiki/Stephen_Cook
180 http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=805047
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=ppi&paperid=914&option_
181
lang=rus
182 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
183 https://en.wikipedia.org/wiki/Lance_Fortnow
https://wayback.archive-it.org/all/20110224135332/http://www.cs.uchicago.edu/
184
~fortnow/papers/pnp-cacm.pdf
185 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
186 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.767
187 https://en.wikipedia.org/wiki/Doi_(identifier)
188 https://doi.org/10.1145%2F1562164.1562186
189 http://www.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf
https://www.nsa.gov/Portals/70/documents/news-features/declassified-documents/nash-
190
letters/nash_letters1.pdf
191 http://ecommons.library.cornell.edu/bitstream/1813/6910/1/89-994.pdf
192 https://en.wikipedia.org/wiki/William_Gasarch
193 http://www.cs.umd.edu/~gasarch/papers/poll.pdf
194 https://en.wikipedia.org/wiki/SIGACT_News
195 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
196 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.172.1005
197 https://en.wikipedia.org/wiki/Doi_(identifier)
198 https://doi.org/10.1145%2F564585.564599
199 https://en.wikipedia.org/wiki/William_Gasarch
200 http://www.cs.umd.edu/~gasarch/papers/poll2012.pdf
201 https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pollpaper3.pdf
202 http://www.scottaaronson.com/democritus/lec6.html


13. ”NP-completeness of Sudoku”203 .
14. Colbourn, Charles J. (1984). ”The complexity of completing partial
Latin squares”. Discrete Applied Mathematics. 8 (1): 25–30.
15. I. Holyer (1981). ”The NP-completeness of some edge-partition problems”. SIAM
J. Comput. 10: 713–717.
16. Aviezri Fraenkel204 and D. Lichtenstein (1981). ”Computing a perfect
strategy for n × n chess requires time exponential in n”. Journal of Combinatorial
Theory, Series A205 . 31 (2): 199–214. doi206 :10.1016/0097-3165(81)90016-9207 .
17. David Eppstein208 . ”Computational Complexity of Games and Puzzles”209 .
18. Fischer, Michael J.210 ; Rabin, Michael O.211 (1974). ”Super-Exponential
Complexity of Presburger Arithmetic”212 . Proceedings of the SIAM-AMS Sym-
posium in Applied Mathematics. 7: 27–41. Archived from the original213 on 15
September 2006. Retrieved 15 October 2017.
19. Valiant, Leslie G. (1979). ”The complexity of enumeration and reliability
problems”. SIAM Journal on Computing. 8 (3): 410–421.
doi215 :10.1137/0208032216 .
20. Arvind, Vikraman; Kurur, Piyush P. (2006). ”Graph isomorphism is in SPP”.
Information and Computation. 204 (5): 835–852. doi217 :10.1016/j.ic.2006.02.002218 .
21. Schöning, Uwe219 (1987). Graph isomorphism is in the low hierarchy. Proceed-
ings of the 4th Annual Symposium on Theoretical Aspects of Computer Science. Lec-
ture Notes in Computer Science. 1987. pp. 114–124. doi220 :10.1007/bfb0039599221 .
ISBN222 978-3-540-17219-2223 .
22. Schöning, Uwe224 (1988). ”Graph isomorphism is in the low hierarchy”.
Journal of Computer and System Sciences. 37 (3): 312–323.
doi225 :10.1016/0022-0000(88)90010-4226 .

203 http://www.cs.ox.ac.uk/people/paul.goldberg/FCS/sudoku.html
204 https://en.wikipedia.org/wiki/Aviezri_Fraenkel
205 https://en.wikipedia.org/wiki/Journal_of_Combinatorial_Theory
206 https://en.wikipedia.org/wiki/Doi_(identifier)
207 https://doi.org/10.1016%2F0097-3165%2881%2990016-9
208 https://en.wikipedia.org/wiki/David_Eppstein
209 http://www.ics.uci.edu/~eppstein/cgt/hard.html
210 https://en.wikipedia.org/wiki/Michael_J._Fischer
211 https://en.wikipedia.org/wiki/Michael_O._Rabin
https://web.archive.org/web/20060915010325/http://www.lcs.mit.edu/publications/pubs/
212
ps/MIT-LCS-TM-043.ps
213 http://www.lcs.mit.edu/publications/pubs/ps/MIT-LCS-TM-043.ps
214 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
215 https://en.wikipedia.org/wiki/Doi_(identifier)
216 https://doi.org/10.1137%2F0208032
217 https://en.wikipedia.org/wiki/Doi_(identifier)
218 https://doi.org/10.1016%2Fj.ic.2006.02.002
219 https://en.wikipedia.org/wiki/Uwe_Sch%C3%B6ning
220 https://en.wikipedia.org/wiki/Doi_(identifier)
221 https://doi.org/10.1007%2Fbfb0039599
222 https://en.wikipedia.org/wiki/ISBN_(identifier)
223 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-17219-2
224 https://en.wikipedia.org/wiki/Uwe_Sch%C3%B6ning
225 https://en.wikipedia.org/wiki/Doi_(identifier)
226 https://doi.org/10.1016%2F0022-0000%2888%2990010-4


23. Lance Fortnow227 . Computational Complexity Blog: Complexity Class of the Week:
Factoring228 . 13 September 2002.
24. Pisinger, D. 2003. ”Where are the hard knapsack problems?” Technical Report
2003/08, Department of Computer Science, University of Copenhagen, Copenhagen,
Denmark
25. Kawarabayashi, K. I.; Kobayashi, Y.; Reed, B. (2012). ”The disjoint
paths problem in quadratic time”. Journal of Combinatorial Theory, Series B. 102 (2):
424–435. doi229 :10.1016/j.jctb.2011.07.004230 .
26. Johnson, David S. (1987). ”The NP-completeness column: An ongoing
guide (edition 19)”. Journal of Algorithms. 8 (2): 285–303. Cite-
SeerX232 10.1.1.114.3864233 . doi234 :10.1016/0196-6774(87)90043-5235 .
27. Gondzio, Jacek; Terlaky, Tamás (1996). ”3 A computational view of interior-
point methods”236 . In J. E. Beasley (ed.). Advances in linear and integer
programming. Oxford Lecture Series in Mathematics and its Applications. 4. New
York: Oxford University Press. pp. 103–144. MR237 1438311238 . Postscript file at
website of Gondzio239 and at McMaster University website of Terlaky240 .
28. Rosenberger, Jack (May 2012). ”P vs. NP poll results”242 . Communications of
the ACM. 55 (5): 10.
29. Scott Aaronson. ”Reasons to believe”243 , point 9.
30. See
Horie, Satoshi; Watanabe, Osamu (1997). Hard instance generation for
SAT. Algorithms and Computation. Lecture Notes in Computer Science. 1350.
Springer. pp. 22–31. arXiv244 :cs/9809117245 . Bibcode246 :1998cs........9117H247 .
doi248 :10.1007/3-540-63890-3_4249 . ISBN250 978-3-540-63890-2251 .

227 https://en.wikipedia.org/wiki/Lance_Fortnow
228 http://weblog.fortnow.com/2002/09/complexity-class-of-week-factoring.html
229 https://en.wikipedia.org/wiki/Doi_(identifier)
230 https://doi.org/10.1016%2Fj.jctb.2011.07.004
231 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
232 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
233 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.3864
234 https://en.wikipedia.org/wiki/Doi_(identifier)
235 https://doi.org/10.1016%2F0196-6774%2887%2990043-5
236 http://www.maths.ed.ac.uk/~gondzio/CV/oxford.ps
237 https://en.wikipedia.org/wiki/MR_(identifier)
238 http://www.ams.org/mathscinet-getitem?mr=1438311
239 http://www.maths.ed.ac.uk/~gondzio/CV/oxford.ps
240 http://www.cas.mcmaster.ca/~terlaky/files/dut-twi-94-73.ps.gz
241 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
242 http://mags.acm.org/communications/201205?pg=12
243 http://scottaaronson.com/blog/?p=122
244 https://en.wikipedia.org/wiki/ArXiv_(identifier)
245 http://arxiv.org/abs/cs/9809117
246 https://en.wikipedia.org/wiki/Bibcode_(identifier)
247 https://ui.adsabs.harvard.edu/abs/1998cs........9117H
248 https://en.wikipedia.org/wiki/Doi_(identifier)
249 https://doi.org/10.1007%2F3-540-63890-3_4
250 https://en.wikipedia.org/wiki/ISBN_(identifier)
251 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-63890-2


for a reduction of factoring to SAT. A 512 bit
factoring problem (8400 MIPS-years when factored) translates to a SAT problem of
63,652 variables and 406,860 clauses.
31. See, for example,
M, F. & M, L. (2000). ”L    SAT
”. Journal of Automated Reasoning. 24 (1): 165–203. Cite-
253 254 255 256
SeerX 10.1.1.104.962 . doi :10.1023/A:1006326723002 . in which an instance
of DES is encoded as a SAT problem with 10336 variables and 61935 clauses. A 3DES
problem instance would be about 3 times this size.
32. D, D  K, A  V, R-
 (2007). ”I       SAT
”. S. . 377–382. 257 :10.1007/978-3-540-72788-0_36258 .CS1
maint: multiple names: authors list (link259 )
33. B B260 , L T261 (1998). ”P    -
 (HP)   NP-complete”. J. Comput. Biol. 5 (1): 27–40. Cite-
SeerX262 10.1.1.139.5547263 . doi264 :10.1089/cmb.1998.5.27265 . PMID266 9541869267 .
34. History of this letter and its translation from
M S. ”T H  S   P versus NP question”268
(PDF).
35. D S. J. ”A B H  NP-C, 1954–2012”269
(PDF). From pages 359–376 of Optimization Stories, M. Grötschel270 (editor), a spe-
cial issue of ¨ Documenta Mathematica, published in August 2012 and distributed
to attendees at the 21st International Symposium on Mathematical Programming in
Berlin.
36. C, S271 (A 2000). ”T P versus NP Problem”272 (PDF). C
M I273 . R 18 O 2006. Cite journal requires
|journal= (help274 )

252 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
253 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
254 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.104.962
255 https://en.wikipedia.org/wiki/Doi_(identifier)
256 https://doi.org/10.1023%2FA%3A1006326723002
257 https://en.wikipedia.org/wiki/Doi_(identifier)
258 https://doi.org/10.1007%2F978-3-540-72788-0_36
259 https://en.wikipedia.org/wiki/Category:CS1_maint:_multiple_names:_authors_list
260 https://en.wikipedia.org/wiki/Bonnie_Berger
261 https://en.wikipedia.org/wiki/F._Thomson_Leighton
262 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
263 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.139.5547
264 https://en.wikipedia.org/wiki/Doi_(identifier)
265 https://doi.org/10.1089%2Fcmb.1998.5.27
266 https://en.wikipedia.org/wiki/PMID_(identifier)
267 http://pubmed.ncbi.nlm.nih.gov/9541869
268 http://cs.stanford.edu/people/trevisan/cs172-07/sipser92history.pdf
269 http://www.research.att.com/techdocs/TD_100899.pdf
270 https://en.wikipedia.org/wiki/Martin_Gr%C3%B6tschel
271 https://en.wikipedia.org/wiki/Stephen_Cook
272 http://www.claymath.org/sites/default/files/pvsnp.pdf
273 https://en.wikipedia.org/wiki/Clay_Mathematics_Institute
274 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical

P versus NP problem

37. K, D E.275 (20 M 2014). ”T Q  D
K”276 . informit.com. InformIT277 . Retrieved 20 July 2014.
38. L. R. Foulds (October 1983). "The Heuristic Problem-Solving Approach". Journal of the Operational Research Society278. 34 (10): 927–934. doi279:10.2307/2580891280. JSTOR281 2580891282.
39. R. Impagliazzo, "A personal view of average-case complexity"283, p. 134, 10th Annual Structure in Complexity Theory Conference (SCT'95), 1995.
40. "Tentative program for the workshop on "Complexity and Cryptography: Status of Impagliazzo's Worlds""284. Archived from the original285 on 15 November 2013.
41. T. P. B; J. G; R. S. (1975). ”R  
P =? NP Question”. SIAM Journal on Computing286 . 4 (4): 431–442.
287
doi :10.1137/0204037 . 288

42. R, A A.; S R (1997). ”N ”. Journal
of Computer and System Sciences. 55 (1): 24–35. doi289 :10.1006/jcss.1997.1494290 .
43. S. Aaronson & A. Wigderson (2008). Algebrization: A New Barrier in Complexity Theory291 (PDF). Proceedings of ACM STOC'2008. pp. 731–740. doi292:10.1145/1374376.1374481293.
44. A, S294 . ”I P Versus NP Formally Independent?”295 (PDF)..
45. B-D, S; H, S (1992). ”O    P versus
NP”296 . T R. 714. Technion. Cite journal requires
297
|journal= (help ).
46. J M298 (8 O 2009). ”P A,  P-NP P H
C”299 . The New York Times.

275 https://en.wikipedia.org/wiki/Donald_Knuth
276 http://www.informit.com/articles/article.aspx?p=2213858&WT.rss_f=Article&WT.rss_a=Twenty%20Questions%20for%20Donald%20Knuth&WT.rss_ev=a
277 https://en.wikipedia.org/wiki/InformIT_(publisher)
278 https://en.wikipedia.org/wiki/Journal_of_the_Operational_Research_Society
279 https://en.wikipedia.org/wiki/Doi_(identifier)
280 https://doi.org/10.2307%2F2580891
281 https://en.wikipedia.org/wiki/JSTOR_(identifier)
282 http://www.jstor.org/stable/2580891
283 http://cseweb.ucsd.edu/~russell/average.ps
284 https://web.archive.org/web/20131115034042/http://intractability.princeton.edu/blog/2009/05/program-for-workshop-on-impagliazzos-worlds/
285 http://intractability.princeton.edu/blog/2009/05/program-for-workshop-on-impagliazzos-worlds/
286 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
287 https://en.wikipedia.org/wiki/Doi_(identifier)
288 https://doi.org/10.1137%2F0204037
289 https://en.wikipedia.org/wiki/Doi_(identifier)
290 https://doi.org/10.1006%2Fjcss.1997.1494
291 http://www.scottaaronson.com/papers/alg.pdf
292 https://en.wikipedia.org/wiki/Doi_(identifier)
293 https://doi.org/10.1145%2F1374376.1374481
294 https://en.wikipedia.org/wiki/Scott_Aaronson
295 http://www.scottaaronson.com/papers/indep.pdf
296 https://www.cs.technion.ac.il/~shai/ph.ps.gz
297 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
298 https://en.wikipedia.org/wiki/John_Markoff
299 https://www.nytimes.com/2009/10/08/science/Wpolynom.html


47. G J. W300 . ”T P-versus-NP page”301 . R 24 J


2018.
48. M, J (16 A 2010). ”S 1: P E P. S 2:
W F”302 . The New York Times. Retrieved 20 September 2010.
49. Elvira Mayordomo. "P versus NP"303. Archived304 16 February 2012 at the Wayback Machine305. Monografías de la Real Academia de Ciencias de Zaragoza 26: 57–68 (2004).
50. A, M; K, N; S, N (2004). ”PRIMES
  P”306 (PDF). Annals of Mathematics307 . 160 (2): 781–793.
doi308 :10.4007/annals.2004.160.781309 . JSTOR310 3597229311 .
51. G, D (26 A 2012). ”'T S'  
   P  NP”312 . Wired UK. Retrieved 26 April 2012.
52. H, L. ”E: P vs. NP”313 .
53. S, A. ”W   P vs. NP problem? Why is it important?”314 .
54. G, W (7 O 2013). ”P  NP  E? N— P 
NP  ON E”315 . blog.computationalcomplexity.org. Retrieved 6 July 2018.
55. K, N (4 O 2013). ”E S  X R:
S  M”316 . TV.com. Retrieved 6 July 2018.

154.19 Further reading


• C, T (2001). Introduction to Algorithms317 . C: MIT P318 .
ISBN319 978-0-262-03293-3320 .

300 https://en.wikipedia.org/wiki/Gerhard_J._Woeginger
301 http://www.win.tue.nl/~gwoegi/P-versus-NP.htm
302 https://www.nytimes.com/2010/08/17/science/17proof.html?_r=1
303 http://www.unizar.es/acz/05Publicaciones/Monografias/MonografiasPublicadas/Monografia26/057Mayordomo.pdf
304 https://web.archive.org/web/20120216154228/http://www.unizar.es/acz/05Publicaciones/Monografias/MonografiasPublicadas/Monografia26/057Mayordomo.pdf
305 https://en.wikipedia.org/wiki/Wayback_Machine
306 http://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf
307 https://en.wikipedia.org/wiki/Annals_of_Mathematics
308 https://en.wikipedia.org/wiki/Doi_(identifier)
309 https://doi.org/10.4007%2Fannals.2004.160.781
310 https://en.wikipedia.org/wiki/JSTOR_(identifier)
311 http://www.jstor.org/stable/3597229
312 https://www.wired.co.uk/news/archive/2012-04/26/travelling-salesman
313 https://news.mit.edu/2009/explainer-pnp
314 http://science.nd.edu/news/what-is-the-p-vs-np-problem-and-why-is-it-important/
315 https://blog.computationalcomplexity.org/2013/10/p-vs-np-is-elementary-no-p-vs-np-is-on.html
316 http://www.tv.com/news/elementary-solve-for-x-review-sines-of-murder-138084402962/
317 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
318 https://en.wikipedia.org/wiki/MIT_Press
319 https://en.wikipedia.org/wiki/ISBN_(identifier)
320 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03293-3


• G, M; J, D (1979). Computers and Intractability: A Guide to


the Theory of NP-Completeness321 . S F: W. H. F  C322 .
ISBN323 978-0-7167-1045-5324 .
• G, O (2010). P, NP, and NP-Completeness. Cambridge: Cambridge
University Press. ISBN325 978-0-521-12254-2326 . Online drafts327
• I, N. (1987). ”L    ”.
SIAM Journal on Computing. 16 (4): 760–778. CiteSeerX328 10.1.1.75.3035329 .
doi330 :10.1137/0216051331 .
• P, C (1994). Computational Complexity. Boston: Addison-
Wesley. ISBN332 978-0-201-53082-7333 .

154.20 External links

P versus NP problem at Wikipedia's sister projects334


• Quotations335 from Wikiquote
• F, L.; G, W. ”C ”336 .
• Aviad Rubinstein's Hardness of Approximation Between P and NP337, winner of the ACM338's 2017 Doctoral Dissertation Award339.
• ”P . NP   C C Z”340 . 26 A 2014 − 
YT341 .


321 https://archive.org/details/computersintract0000gare
322 https://en.wikipedia.org/wiki/W._H._Freeman_and_Company
323 https://en.wikipedia.org/wiki/ISBN_(identifier)
324 https://en.wikipedia.org/wiki/Special:BookSources/978-0-7167-1045-5
325 https://en.wikipedia.org/wiki/ISBN_(identifier)
326 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-12254-2
327 http://www.wisdom.weizmann.ac.il/~oded/bc-drafts.html
328 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
329 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.75.3035
330 https://en.wikipedia.org/wiki/Doi_(identifier)
331 https://doi.org/10.1137%2F0216051
332 https://en.wikipedia.org/wiki/ISBN_(identifier)
333 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-53082-7
334 https://en.wikipedia.org/wiki/Wikipedia:Wikimedia_sister_projects
335 https://en.wikiquote.org/wiki/P_versus_NP_problem
336 http://weblog.fortnow.com
337 https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-146.pdf
338 https://en.wikipedia.org/wiki/Association_for_Computing_Machinery
339 https://awards.acm.org/about/2017-doctoral-dissertation
340 https://www.youtube.com/watch?v=YX40hbAHx3s
341 https://en.wikipedia.org/wiki/YouTube


155 Contributors

Edits User
1 !dea4u1
3 $Mathe94$2
2 *thing goes3
1 -- April4
1 -Midorihana-5
1 -OOPSIE-6
1 -Ozone-7
1 .anacondabot8
2 09
1 0x3011410
1 0xF8E811
1 0xsbock12
1 1 point 7 point 413
1 10metreh14
2 10toshiro15
1 11cookeaw116
3 124Nick17
1 1337Space18
3 16@r19
24 1706abhi20

1 https://en.wikipedia.org/wiki/User:!dea4u
2 https://en.wikipedia.org/w/index.php%3ftitle=User:$Mathe94$&action=edit&redlink=1
3 https://en.wikipedia.org/w/index.php%3ftitle=User:*thing_goes&action=edit&redlink=1
4 https://en.wikipedia.org/wiki/User:--_April
5 https://en.wikipedia.org/wiki/User:-Midorihana-
6 https://en.wikipedia.org/w/index.php%3ftitle=User:-OOPSIE-&action=edit&redlink=1
7 https://en.wikipedia.org/wiki/User:-Ozone-
8 https://en.wikipedia.org/wiki/User:.anacondabot
9 https://en.wikipedia.org/wiki/User:0
10 https://en.wikipedia.org/wiki/User:0x30114
11 https://en.wikipedia.org/wiki/User:0xF8E8
12 https://en.wikipedia.org/w/index.php%3ftitle=User:0xsbock&action=edit&redlink=1
13 https://en.wikipedia.org/w/index.php%3ftitle=User:1_point_7_point_4&action=edit&redlink=1
14 https://en.wikipedia.org/wiki/User:10metreh
15 https://en.wikipedia.org/w/index.php%3ftitle=User:10toshiro&action=edit&redlink=1
16 https://en.wikipedia.org/wiki/User:11cookeaw1
17 https://en.wikipedia.org/wiki/User:124Nick
18 https://en.wikipedia.org/wiki/User:1337Space
19 https://en.wikipedia.org/wiki/User:16@r
20 https://en.wikipedia.org/w/index.php%3ftitle=User:1706abhi&action=edit&redlink=1


1 1qaz-pl21
1 24ip22
1 28421u2232nfenfcenc23
2 28bot24
2 28bytes25
3 2bam26
1 2help27
1 2over028
16 2pem29
1 316 student30
7 37KZ31
20 3ghef32
19 3mta333
1 40EBFD34
2 4368a35
1 49TL36
1 4Jays103437
1 4c27f8e656bb34703d936fc59ede9a38
1 4get39
1 4hodmt40
1 4ndyD41
1 4pq1injbok42
26 4v4l0n4243
1 564dude44
3 5kyw0rm 245

21 https://en.wikipedia.org/w/index.php%3ftitle=User:1qaz-pl&action=edit&redlink=1
22 https://en.wikipedia.org/w/index.php%3ftitle=User:24ip&action=edit&redlink=1
23 https://en.wikipedia.org/w/index.php%3ftitle=User:28421u2232nfenfcenc&action=edit&redlink=1
24 https://en.wikipedia.org/wiki/User:28bot
25 https://en.wikipedia.org/wiki/User:28bytes
26 https://en.wikipedia.org/wiki/User:2bam
27 https://en.wikipedia.org/wiki/User:2help
28 https://en.wikipedia.org/wiki/User:2over0
29 https://en.wikipedia.org/wiki/User:2pem
30 https://en.wikipedia.org/w/index.php%3ftitle=User:316_student&action=edit&redlink=1
31 https://en.wikipedia.org/wiki/User:37KZ
32 https://en.wikipedia.org/w/index.php%3ftitle=User:3ghef&action=edit&redlink=1
33 https://en.wikipedia.org/wiki/User:3mta3
34 https://en.wikipedia.org/wiki/User:40EBFD
35 https://en.wikipedia.org/wiki/User:4368a
36 https://en.wikipedia.org/wiki/User:49TL
37 https://en.wikipedia.org/w/index.php%3ftitle=User:4Jays1034&action=edit&redlink=1
38 https://en.wikipedia.org/wiki/User:4c27f8e656bb34703d936fc59ede9a
39 https://en.wikipedia.org/w/index.php%3ftitle=User:4get&action=edit&redlink=1
40 https://en.wikipedia.org/wiki/User:4hodmt
41 https://en.wikipedia.org/wiki/User:4ndyD
42 https://en.wikipedia.org/wiki/User:4pq1injbok
43 https://en.wikipedia.org/wiki/User:4v4l0n42
44 https://en.wikipedia.org/wiki/User:564dude
45 https://en.wikipedia.org/w/index.php%3ftitle=User:5kyw0rm_2&action=edit&redlink=1


1 5rockhopper446
1 6e5f8bd4247
7 7&6=thirteen48
1 71vikrant9449
6 7250
5 7804j51
1 7Sidz52
1 7gün24saat53
1 7uuuuuu54
2 94rain55
1 9719856
1 A Great Catholic Person57
1 A Meteorite58
1 A Nobody59
3 A bit iffy60
1 A lad insane61
3 A-Ge062
2 A. Pichler63
1 A.Ou64
1 A.amitkumar65
1 A.bit66
2 A11131767
2 A3ng3l68
50 A3nm69
2 A4bot70

46 https://en.wikipedia.org/w/index.php%3ftitle=User:5rockhopper4&action=edit&redlink=1
47 https://en.wikipedia.org/wiki/User:6e5f8bd42
48 https://en.wikipedia.org/wiki/User:7%25266%253Dthirteen
49 https://en.wikipedia.org/w/index.php%3ftitle=User:71vikrant94&action=edit&redlink=1
50 https://en.wikipedia.org/wiki/User:72
51 https://en.wikipedia.org/wiki/User:7804j
52 https://en.wikipedia.org/wiki/User:7Sidz
53 https://en.wikipedia.org/w/index.php%3ftitle=User:7g%25C3%25BCn24saat&action=edit&redlink=1
54 https://en.wikipedia.org/w/index.php%3ftitle=User:7uuuuuu&action=edit&redlink=1
55 https://en.wikipedia.org/wiki/User:94rain
56 https://en.wikipedia.org/wiki/User:97198
57 https://en.wikipedia.org/wiki/User:A_Great_Catholic_Person
58 https://en.wikipedia.org/w/index.php%3ftitle=User:A_Meteorite&action=edit&redlink=1
59 https://en.wikipedia.org/wiki/User:A_Nobody
60 https://en.wikipedia.org/wiki/User:A_bit_iffy
61 https://en.wikipedia.org/wiki/User:A_lad_insane
62 https://en.wikipedia.org/wiki/User:A-Ge0
63 https://en.wikipedia.org/wiki/User:A._Pichler
64 https://en.wikipedia.org/wiki/User:A.Ou
65 https://en.wikipedia.org/wiki/User:A.amitkumar
66 https://en.wikipedia.org/wiki/User:A.bit
67 https://en.wikipedia.org/w/index.php%3ftitle=User:A111317&action=edit&redlink=1
68 https://en.wikipedia.org/w/index.php%3ftitle=User:A3ng3l&action=edit&redlink=1
69 https://en.wikipedia.org/wiki/User:A3nm
70 https://en.wikipedia.org/wiki/User:A4bot


1 A571
7 A5b72
3 A87673
1 A93091374
1 AABEENA75
1 AANaimi76
2 ABCD77
1 ACSE78
4 AHMartin79
2 AHusain314180
1 AI RPer81
2 AJim82
1 AL SAM83
1 ALongDream84
26 AManWithNoPlan85
3 ANHealey86
7 ANONYMOUS COWARD0xC0DE87
2 AP Shinobi88
5 APH89
1 ASchmoo90
11 AStrathman91
1 ATBS92
1 AVand93
1 Aacombarro8994
3 Aaditya 795

71 https://en.wikipedia.org/wiki/User:A5
72 https://en.wikipedia.org/wiki/User:A5b
73 https://en.wikipedia.org/w/index.php%3ftitle=User:A876&action=edit&redlink=1
74 https://en.wikipedia.org/wiki/User:A930913
75 https://en.wikipedia.org/wiki/User:AABEENA
76 https://en.wikipedia.org/w/index.php%3ftitle=User:AANaimi&action=edit&redlink=1
77 https://en.wikipedia.org/wiki/User:ABCD
78 https://en.wikipedia.org/w/index.php%3ftitle=User:ACSE&action=edit&redlink=1
79 https://en.wikipedia.org/wiki/User:AHMartin
80 https://en.wikipedia.org/w/index.php%3ftitle=User:AHusain3141&action=edit&redlink=1
81 https://en.wikipedia.org/wiki/User:AI_RPer
82 https://en.wikipedia.org/wiki/User:AJim
83 https://en.wikipedia.org/wiki/User:AL_SAM
84 https://en.wikipedia.org/w/index.php%3ftitle=User:ALongDream&action=edit&redlink=1
85 https://en.wikipedia.org/wiki/User:AManWithNoPlan
86 https://en.wikipedia.org/wiki/User:ANHealey
87 https://en.wikipedia.org/w/index.php%3ftitle=User:ANONYMOUS_COWARD0xC0DE&action=edit&redlink=1
88 https://en.wikipedia.org/wiki/User:AP_Shinobi
89 https://en.wikipedia.org/wiki/User:APH
90 https://en.wikipedia.org/w/index.php%3ftitle=User:ASchmoo&action=edit&redlink=1
91 https://en.wikipedia.org/w/index.php%3ftitle=User:AStrathman&action=edit&redlink=1
92 https://en.wikipedia.org/w/index.php%3ftitle=User:ATBS&action=edit&redlink=1
93 https://en.wikipedia.org/wiki/User:AVand
94 https://en.wikipedia.org/w/index.php%3ftitle=User:Aacombarro89&action=edit&redlink=1
95 https://en.wikipedia.org/wiki/User:Aaditya_7


2 Aadnan.tufail96
2 Aajaja97
1 Aalok.sathe98
3 Aarchiba99
1 Aaron Hurst100
5 Aaron McDaid101
1 Aaron Nitro Danielson102
5 Aaron Rotenberg103
1 Aaron Will104
1 AaronLeeIV105
3 Aaronbrick106
1 Aaronchall107
1 Aaronmhamilton108
2 Aaronzat109
2 Aayjaay26110
2 Abatasigh111
1 Abatishchev112
1 Abaumgar113
1 Abc45624114
8 Abcarter115
1 Abce2116
1 Abd117
1 Abdeaitali118
3 Abdull119
1 Abdullahfrq120

96 https://en.wikipedia.org/w/index.php%3ftitle=User:Aadnan.tufail&action=edit&redlink=1
97 https://en.wikipedia.org/w/index.php%3ftitle=User:Aajaja&action=edit&redlink=1
98 https://en.wikipedia.org/wiki/User:Aalok.sathe
99 https://en.wikipedia.org/wiki/User:Aarchiba
100 https://en.wikipedia.org/wiki/User:Aaron_Hurst
101 https://en.wikipedia.org/wiki/User:Aaron_McDaid
102 https://en.wikipedia.org/w/index.php%3ftitle=User:Aaron_Nitro_Danielson&action=edit&redlink=1
103 https://en.wikipedia.org/wiki/User:Aaron_Rotenberg
104 https://en.wikipedia.org/wiki/User:Aaron_Will
105 https://en.wikipedia.org/w/index.php%3ftitle=User:AaronLeeIV&action=edit&redlink=1
106 https://en.wikipedia.org/wiki/User:Aaronbrick
107 https://en.wikipedia.org/wiki/User:Aaronchall
108 https://en.wikipedia.org/wiki/User:Aaronmhamilton
109 https://en.wikipedia.org/w/index.php%3ftitle=User:Aaronzat&action=edit&redlink=1
110 https://en.wikipedia.org/w/index.php%3ftitle=User:Aayjaay26&action=edit&redlink=1
111 https://en.wikipedia.org/w/index.php%3ftitle=User:Abatasigh&action=edit&redlink=1
112 https://en.wikipedia.org/wiki/User:Abatishchev
113 https://en.wikipedia.org/w/index.php%3ftitle=User:Abaumgar&action=edit&redlink=1
114 https://en.wikipedia.org/w/index.php%3ftitle=User:Abc45624&action=edit&redlink=1
115 https://en.wikipedia.org/wiki/User:Abcarter
116 https://en.wikipedia.org/wiki/User:Abce2
117 https://en.wikipedia.org/wiki/User:Abd
118 https://en.wikipedia.org/wiki/User:Abdeaitali
119 https://en.wikipedia.org/wiki/User:Abdull
120 https://en.wikipedia.org/w/index.php%3ftitle=User:Abdullahfrq&action=edit&redlink=1

1673
Contributors

5 Abdulsathar.mec121
1 Abeb leu122
4 Abednigo123
1 Abeg92124
8 Abelmoschus Esculentus125
1 Abelson126
1 Aberdeen01127
2 Abhi137128
1 AbhiMukh97129
1 Abhiday130
1 Abhilash Mhaisne131
1 Abhinav.in132
1 Abhinav316133
1 Abhishek.kumar.ak134
1 Abhishekupadhya135
1 Abit4it136
2 Ablonus137
2 Abouelsaoud138
1 Abovechief139
1 Abraham, B.S.140
1 Abrech141
2 Absemindprof142
1 Absgomz66143
1 Abtinb144
14 Abu adam~enwiki145

121 https://en.wikipedia.org/w/index.php%3ftitle=User:Abdulsathar.mec&action=edit&redlink=1
122 https://en.wikipedia.org/w/index.php%3ftitle=User:Abeb_leu&action=edit&redlink=1
123 https://en.wikipedia.org/w/index.php%3ftitle=User:Abednigo&action=edit&redlink=1
124 https://en.wikipedia.org/wiki/User:Abeg92
125 https://en.wikipedia.org/wiki/User:Abelmoschus_Esculentus
126 https://en.wikipedia.org/wiki/User:Abelson
127 https://en.wikipedia.org/wiki/User:Aberdeen01
128 https://en.wikipedia.org/w/index.php%3ftitle=User:Abhi137&action=edit&redlink=1
129 https://en.wikipedia.org/wiki/User:AbhiMukh97
130 https://en.wikipedia.org/w/index.php%3ftitle=User:Abhiday&action=edit&redlink=1
131 https://en.wikipedia.org/wiki/User:Abhilash_Mhaisne
132 https://en.wikipedia.org/wiki/User:Abhinav.in
133 https://en.wikipedia.org/w/index.php%3ftitle=User:Abhinav316&action=edit&redlink=1
134 https://en.wikipedia.org/wiki/User:Abhishek.kumar.ak
135 https://en.wikipedia.org/wiki/User:Abhishekupadhya
136 https://en.wikipedia.org/w/index.php%3ftitle=User:Abit4it&action=edit&redlink=1
137 https://en.wikipedia.org/wiki/User:Ablonus
138 https://en.wikipedia.org/w/index.php%3ftitle=User:Abouelsaoud&action=edit&redlink=1
139 https://en.wikipedia.org/w/index.php%3ftitle=User:Abovechief&action=edit&redlink=1
140 https://en.wikipedia.org/wiki/User:Abraham,_B.S.
141 https://en.wikipedia.org/wiki/User:Abrech
142 https://en.wikipedia.org/wiki/User:Absemindprof
143 https://en.wikipedia.org/w/index.php%3ftitle=User:Absgomz66&action=edit&redlink=1
144 https://en.wikipedia.org/wiki/User:Abtinb
145 https://en.wikipedia.org/wiki/User:Abu_adam~enwiki


1 AbuSH1993146
1 Abusomani147
1 Acabashi148
2 Acalamari149
3 Acasteigts150
3 Accelerometer151
1 AceMyth152
2 Acebulf153
1 Achalddave154
1 Achraf52155
2 Acidburnjd156
2 Acopyeditor157
1 Acroterion158
2 Action Jackson IV159
1 Activebird160
2 Ad Orientem161
1 Ad2004162
1 AdSR163
1 Adair2324164
1 Adam McMaster165
1 Adam Schloss166
5 Adam Zivner167
2 Adam majewski168
2 Adam1729169
2 Adam5703170

146 https://en.wikipedia.org/w/index.php%3ftitle=User:AbuSH1993&action=edit&redlink=1
147 https://en.wikipedia.org/w/index.php%3ftitle=User:Abusomani&action=edit&redlink=1
148 https://en.wikipedia.org/wiki/User:Acabashi
149 https://en.wikipedia.org/wiki/User:Acalamari
150 https://en.wikipedia.org/w/index.php%3ftitle=User:Acasteigts&action=edit&redlink=1
151 https://en.wikipedia.org/wiki/User:Accelerometer
152 https://en.wikipedia.org/wiki/User:AceMyth
153 https://en.wikipedia.org/wiki/User:Acebulf
154 https://en.wikipedia.org/wiki/User:Achalddave
155 https://en.wikipedia.org/wiki/User:Achraf52
156 https://en.wikipedia.org/w/index.php%3ftitle=User:Acidburnjd&action=edit&redlink=1
157 https://en.wikipedia.org/wiki/User:Acopyeditor
158 https://en.wikipedia.org/wiki/User:Acroterion
159 https://en.wikipedia.org/w/index.php%3ftitle=User:Action_Jackson_IV&action=edit&redlink=1
160 https://en.wikipedia.org/w/index.php%3ftitle=User:Activebird&action=edit&redlink=1
161 https://en.wikipedia.org/wiki/User:Ad_Orientem
162 https://en.wikipedia.org/w/index.php%3ftitle=User:Ad2004&action=edit&redlink=1
163 https://en.wikipedia.org/wiki/User:AdSR
164 https://en.wikipedia.org/wiki/User:Adair2324
165 https://en.wikipedia.org/wiki/User:Adam_McMaster
166 https://en.wikipedia.org/wiki/User:Adam_Schloss
167 https://en.wikipedia.org/wiki/User:Adam_Zivner
168 https://en.wikipedia.org/wiki/User:Adam_majewski
169 https://en.wikipedia.org/wiki/User:Adam1729
170 https://en.wikipedia.org/w/index.php%3ftitle=User:Adam5703&action=edit&redlink=1


1 AdamAtlas171
2 AdamBignell172
1 AdamProcter173
1 AdamRetchless174
2 AdamTReineke175
2 Adamant.pwn176
2 Adamarnesen177
2 Adambro178
2 Adamianash179
1 Adammathias180
1 Adamsandberg181
8 Adamuu182
1 Adashiel183
8 Adavidb184
1 Adbharadwaj185
1 AddWittyNameHere186
131 Addbot187
1 Addps4cat188
1 Adegbolaadeola189
3 Adharsh Krishnan190
1 Adhikari IM191
2 Adicarlo192
1 Adiel193
1 Aditi.chaurasia194
1 Adityad195

171 https://en.wikipedia.org/wiki/User:AdamAtlas
172 https://en.wikipedia.org/w/index.php%3ftitle=User:AdamBignell&action=edit&redlink=1
173 https://en.wikipedia.org/wiki/User:AdamProcter
174 https://en.wikipedia.org/wiki/User:AdamRetchless
175 https://en.wikipedia.org/wiki/User:AdamTReineke
176 https://en.wikipedia.org/w/index.php%3ftitle=User:Adamant.pwn&action=edit&redlink=1
177 https://en.wikipedia.org/wiki/User:Adamarnesen
178 https://en.wikipedia.org/w/index.php%3ftitle=User:Adambro&action=edit&redlink=1
179 https://en.wikipedia.org/w/index.php%3ftitle=User:Adamianash&action=edit&redlink=1
180 https://en.wikipedia.org/wiki/User:Adammathias
181 https://en.wikipedia.org/wiki/User:Adamsandberg
182 https://en.wikipedia.org/wiki/User:Adamuu
183 https://en.wikipedia.org/wiki/User:Adashiel
184 https://en.wikipedia.org/wiki/User:Adavidb
185 https://en.wikipedia.org/w/index.php%3ftitle=User:Adbharadwaj&action=edit&redlink=1
186 https://en.wikipedia.org/wiki/User:AddWittyNameHere
187 https://en.wikipedia.org/wiki/User:Addbot
188 https://en.wikipedia.org/wiki/User:Addps4cat
189 https://en.wikipedia.org/w/index.php%3ftitle=User:Adegbolaadeola&action=edit&redlink=1
190 https://en.wikipedia.org/w/index.php%3ftitle=User:Adharsh_Krishnan&action=edit&redlink=1
191 https://en.wikipedia.org/w/index.php%3ftitle=User:Adhikari_IM&action=edit&redlink=1
192 https://en.wikipedia.org/wiki/User:Adicarlo
193 https://en.wikipedia.org/wiki/User:Adiel
194 https://en.wikipedia.org/w/index.php%3ftitle=User:Aditi.chaurasia&action=edit&redlink=1
195 https://en.wikipedia.org/wiki/User:Adityad


1 Adityapasalapudi196
2 Adityasinghhhhhh197
1 Adityazutshi198
9 Adking80199
1 Adlingepa200
1 Adolphus79201
1 Adoniscik202
2 Adreliha203
1 Adreno204
1 Adrian.hawryluk205
25 Adrianwn206
1 Advance512207
1 AdventurousSquirrel208
1 Aechase1209
2 Aedieder210
3 Aednichols211
1 Aelvin212
8 Aenar213
1 Aene214
1 Aenima23215
4 Aeons216
1 Aeonx217
1 Aeriform218
1 Aerion219
1 Aerophare263220

196 https://en.wikipedia.org/w/index.php%3ftitle=User:Adityapasalapudi&action=edit&redlink=1
197 https://en.wikipedia.org/wiki/User:Adityasinghhhhhh
198 https://en.wikipedia.org/w/index.php%3ftitle=User:Adityazutshi&action=edit&redlink=1
199 https://en.wikipedia.org/wiki/User:Adking80
200 https://en.wikipedia.org/w/index.php%3ftitle=User:Adlingepa&action=edit&redlink=1
201 https://en.wikipedia.org/wiki/User:Adolphus79
202 https://en.wikipedia.org/wiki/User:Adoniscik
203 https://en.wikipedia.org/w/index.php%3ftitle=User:Adreliha&action=edit&redlink=1
204 https://en.wikipedia.org/w/index.php%3ftitle=User:Adreno&action=edit&redlink=1
205 https://en.wikipedia.org/w/index.php%3ftitle=User:Adrian.hawryluk&action=edit&redlink=1
206 https://en.wikipedia.org/wiki/User:Adrianwn
207 https://en.wikipedia.org/wiki/User:Advance512
208 https://en.wikipedia.org/wiki/User:AdventurousSquirrel
209 https://en.wikipedia.org/w/index.php%3ftitle=User:Aechase1&action=edit&redlink=1
210 https://en.wikipedia.org/w/index.php%3ftitle=User:Aedieder&action=edit&redlink=1
211 https://en.wikipedia.org/wiki/User:Aednichols
212 https://en.wikipedia.org/w/index.php%3ftitle=User:Aelvin&action=edit&redlink=1
213 https://en.wikipedia.org/wiki/User:Aenar
214 https://en.wikipedia.org/wiki/User:Aene
215 https://en.wikipedia.org/w/index.php%3ftitle=User:Aenima23&action=edit&redlink=1
216 https://en.wikipedia.org/wiki/User:Aeons
217 https://en.wikipedia.org/wiki/User:Aeonx
218 https://en.wikipedia.org/wiki/User:Aeriform
219 https://en.wikipedia.org/wiki/User:Aerion
220 https://en.wikipedia.org/w/index.php%3ftitle=User:Aerophare263&action=edit&redlink=1


1 Aerosprite221
1 Afog222
1 Afrozenator223
2 Afshinzarei224
1 Aftermath1983225
1 Aftersox226
1 Agabrielson227
9 AgadaUrbanit228
6 Agaditya229
1 Agaib230
3 Agamir231
2 AgarwalSumeet232
1 Agatecat2700233
1 Agateller234
3 Agcala~enwiki235
1 Age Happens236
1 Agent X2237
2 Agent00x238
2 AgentBon239
1 AgentPeppermint240
1 AgentSnoop241
1 Agentydragon242
2 Agorf243
2 Agro1986244
3 Agthorr245

221 https://en.wikipedia.org/wiki/User:Aerosprite
222 https://en.wikipedia.org/w/index.php%3ftitle=User:Afog&action=edit&redlink=1
223 https://en.wikipedia.org/wiki/User:Afrozenator
224 https://en.wikipedia.org/w/index.php%3ftitle=User:Afshinzarei&action=edit&redlink=1
225 https://en.wikipedia.org/w/index.php%3ftitle=User:Aftermath1983&action=edit&redlink=1
226 https://en.wikipedia.org/w/index.php%3ftitle=User:Aftersox&action=edit&redlink=1
227 https://en.wikipedia.org/w/index.php%3ftitle=User:Agabrielson&action=edit&redlink=1
228 https://en.wikipedia.org/wiki/User:AgadaUrbanit
229 https://en.wikipedia.org/w/index.php%3ftitle=User:Agaditya&action=edit&redlink=1
230 https://en.wikipedia.org/wiki/User:Agaib
231 https://en.wikipedia.org/w/index.php%3ftitle=User:Agamir&action=edit&redlink=1
232 https://en.wikipedia.org/wiki/User:AgarwalSumeet
233 https://en.wikipedia.org/wiki/User:Agatecat2700
234 https://en.wikipedia.org/wiki/User:Agateller
235 https://en.wikipedia.org/wiki/User:Agcala~enwiki
236 https://en.wikipedia.org/wiki/User:Age_Happens
237 https://en.wikipedia.org/w/index.php%3ftitle=User:Agent_X2&action=edit&redlink=1
238 https://en.wikipedia.org/w/index.php%3ftitle=User:Agent00x&action=edit&redlink=1
239 https://en.wikipedia.org/w/index.php%3ftitle=User:AgentBon&action=edit&redlink=1
240 https://en.wikipedia.org/wiki/User:AgentPeppermint
241 https://en.wikipedia.org/w/index.php%3ftitle=User:AgentSnoop&action=edit&redlink=1
242 https://en.wikipedia.org/wiki/User:Agentydragon
243 https://en.wikipedia.org/wiki/User:Agorf
244 https://en.wikipedia.org/wiki/User:Agro1986
245 https://en.wikipedia.org/wiki/User:Agthorr


3 Aguydude246
2 Ahaider3247
1 Aham1234248
1 Ahari4249
1 Ahmad Faridi250
3 Ahmad.829251
1 Ahmad87252
1 Ahmadabdolkader253
1 Ahmadiesa abu254
2 Ahmadsh255
1 Ahmed saeed256
1 Ahmed virk257
5 Ahmedelbatran258
11 Ahoerstemeier259
1 Ahonc260
2 Ahshabazz261
1 Ahughes6262
25 Ahy1263
2 Aibyou chan264
1 AidaFernandaUFPE265
2 Aidanf266
2 Aimal.rextin267
2 Aindurti268
2 Airalcorn2269
1 Airbornemihir270

246 https://en.wikipedia.org/wiki/User:Aguydude
247 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahaider3&action=edit&redlink=1
248 https://en.wikipedia.org/w/index.php%3ftitle=User:Aham1234&action=edit&redlink=1
249 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahari4&action=edit&redlink=1
250 https://en.wikipedia.org/wiki/User:Ahmad_Faridi
251 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahmad.829&action=edit&redlink=1
252 https://en.wikipedia.org/wiki/User:Ahmad87
253 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahmadabdolkader&action=edit&redlink=1
254 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahmadiesa_abu&action=edit&redlink=1
255 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahmadsh&action=edit&redlink=1
256 https://en.wikipedia.org/wiki/User:Ahmed_saeed
257 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahmed_virk&action=edit&redlink=1
258 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahmedelbatran&action=edit&redlink=1
259 https://en.wikipedia.org/wiki/User:Ahoerstemeier
260 https://en.wikipedia.org/wiki/User:Ahonc
261 https://en.wikipedia.org/wiki/User:Ahshabazz
262 https://en.wikipedia.org/w/index.php%3ftitle=User:Ahughes6&action=edit&redlink=1
263 https://en.wikipedia.org/wiki/User:Ahy1
264 https://en.wikipedia.org/w/index.php%3ftitle=User:Aibyou_chan&action=edit&redlink=1
265 https://en.wikipedia.org/w/index.php%3ftitle=User:AidaFernandaUFPE&action=edit&redlink=1
266 https://en.wikipedia.org/wiki/User:Aidanf
267 https://en.wikipedia.org/w/index.php%3ftitle=User:Aimal.rextin&action=edit&redlink=1
268 https://en.wikipedia.org/w/index.php%3ftitle=User:Aindurti&action=edit&redlink=1
269 https://en.wikipedia.org/wiki/User:Airalcorn2
270 https://en.wikipedia.org/wiki/User:Airbornemihir


2 Airplaneman271
1 Airtucha272
2 Airwickdamian273
1 Aj8uppal274
7 Ajalvare275
1 Ajay raj sh276
1 Ajihood277
1 Ajitk278
6 Ajnosek279
2 Ajo Mama280
1 Ajweinstein281
1 Ak1990282
1 Akaloger283
1 Akamad284
1 Akapoorx00285
3 Akatib286
21 Akavariat287
1 Akavel~enwiki288
1 Akerans289
2 Akerbos290
2 Akhan.iipsmca291
1 Akhil999in292
1 Akiezun293
1 Akim Demaille294
1 Akmenon295

271 https://en.wikipedia.org/wiki/User:Airplaneman
272 https://en.wikipedia.org/w/index.php%3ftitle=User:Airtucha&action=edit&redlink=1
273 https://en.wikipedia.org/w/index.php%3ftitle=User:Airwickdamian&action=edit&redlink=1
274 https://en.wikipedia.org/wiki/User:Aj8uppal
275 https://en.wikipedia.org/w/index.php%3ftitle=User:Ajalvare&action=edit&redlink=1
276 https://en.wikipedia.org/w/index.php%3ftitle=User:Ajay_raj_sh&action=edit&redlink=1
277 https://en.wikipedia.org/wiki/User:Ajihood
278 https://en.wikipedia.org/w/index.php%3ftitle=User:Ajitk&action=edit&redlink=1
279 https://en.wikipedia.org/wiki/User:Ajnosek
280 https://en.wikipedia.org/w/index.php%3ftitle=User:Ajo_Mama&action=edit&redlink=1
281 https://en.wikipedia.org/w/index.php%3ftitle=User:Ajweinstein&action=edit&redlink=1
282 https://en.wikipedia.org/w/index.php%3ftitle=User:Ak1990&action=edit&redlink=1
283 https://en.wikipedia.org/w/index.php%3ftitle=User:Akaloger&action=edit&redlink=1
284 https://en.wikipedia.org/wiki/User:Akamad
285 https://en.wikipedia.org/w/index.php%3ftitle=User:Akapoorx00&action=edit&redlink=1
286 https://en.wikipedia.org/w/index.php%3ftitle=User:Akatib&action=edit&redlink=1
287 https://en.wikipedia.org/wiki/User:Akavariat
288 https://en.wikipedia.org/wiki/User:Akavel~enwiki
289 https://en.wikipedia.org/wiki/User:Akerans
290 https://en.wikipedia.org/w/index.php%3ftitle=User:Akerbos&action=edit&redlink=1
291 https://en.wikipedia.org/w/index.php%3ftitle=User:Akhan.iipsmca&action=edit&redlink=1
292 https://en.wikipedia.org/w/index.php%3ftitle=User:Akhil999in&action=edit&redlink=1
293 https://en.wikipedia.org/w/index.php%3ftitle=User:Akiezun&action=edit&redlink=1
294 https://en.wikipedia.org/w/index.php%3ftitle=User:Akim_Demaille&action=edit&redlink=1
295 https://en.wikipedia.org/w/index.php%3ftitle=User:Akmenon&action=edit&redlink=1


2 Aknxy296
1 Akokskis297
3 Aks1521298
1 AkselA299
1 Akshay.morye300
1 Aktotaa301
1 Akuchaev302
2 Akuchling303
1 Akukanov304
7 Akutagawa10305
5 Aladdin.chettouh306
10 Alaibot307
7 Alain Amiouni308
3 AlainD309
1 Alan Au310
1 Alan smithee311
1 Alan.ca312
3 AlanBarrett313
1 AlanS1951314
5 AlanSherlock315
1 AlanTonisson316
10 AlanUS317
6 Alanb318
1 Alanbrowne319
1 Alanhuang122320

296 https://en.wikipedia.org/wiki/User:Aknxy
297 https://en.wikipedia.org/wiki/User:Akokskis
298 https://en.wikipedia.org/wiki/User:Aks1521
299 https://en.wikipedia.org/wiki/User:AkselA
300 https://en.wikipedia.org/w/index.php%3ftitle=User:Akshay.morye&action=edit&redlink=1
301 https://en.wikipedia.org/w/index.php%3ftitle=User:Aktotaa&action=edit&redlink=1
302 https://en.wikipedia.org/w/index.php%3ftitle=User:Akuchaev&action=edit&redlink=1
303 https://en.wikipedia.org/wiki/User:Akuchling
304 https://en.wikipedia.org/w/index.php%3ftitle=User:Akukanov&action=edit&redlink=1
305 https://en.wikipedia.org/w/index.php%3ftitle=User:Akutagawa10&action=edit&redlink=1
306 https://en.wikipedia.org/w/index.php%3ftitle=User:Aladdin.chettouh&action=edit&redlink=1
307 https://en.wikipedia.org/wiki/User:Alaibot
308 https://en.wikipedia.org/w/index.php%3ftitle=User:Alain_Amiouni&action=edit&redlink=1
309 https://en.wikipedia.org/wiki/User:AlainD
310 https://en.wikipedia.org/wiki/User:Alan_Au
311 https://en.wikipedia.org/wiki/User:Alan_smithee
312 https://en.wikipedia.org/wiki/User:Alan.ca
313 https://en.wikipedia.org/wiki/User:AlanBarrett
314 https://en.wikipedia.org/w/index.php%3ftitle=User:AlanS1951&action=edit&redlink=1
315 https://en.wikipedia.org/w/index.php%3ftitle=User:AlanSherlock&action=edit&redlink=1
316 https://en.wikipedia.org/wiki/User:AlanTonisson
317 https://en.wikipedia.org/wiki/User:AlanUS
318 https://en.wikipedia.org/w/index.php%3ftitle=User:Alanb&action=edit&redlink=1
319 https://en.wikipedia.org/w/index.php%3ftitle=User:Alanbrowne&action=edit&redlink=1
320 https://en.wikipedia.org/wiki/User:Alanhuang122

1681
Contributors

20 Alansohn321
1 Alassius322
4 Alastair Carnegie323
1 Alayah64324
1 Alberto da Calvairate~enwiki325
22 Albertzeyer326
2 Albmedina327
1 Albmont328
1 Albuseer329
2 Alcat33330
2 Alcauchy331
1 Alcazar84332
1 Alcides333
3 Alcidesfonseca334
2 AlcoholVat335
1 Aldie336
3 Ale jrb337
4 Ale2006338
1 AlecTaylor339
1 Aleenf1340
1 Alejandro.isaza341
4 Aleks-ger342
1 Aleks80343
1 Aleksandr.kraichuk344

321 https://en.wikipedia.org/wiki/User:Alansohn
322 https://en.wikipedia.org/wiki/User:Alassius
323 https://en.wikipedia.org/w/index.php%3ftitle=User:Alastair_Carnegie&action=edit&redlink=1
324 https://en.wikipedia.org/w/index.php%3ftitle=User:Alayah64&action=edit&redlink=1
325 https://en.wikipedia.org/wiki/User:Alberto_da_Calvairate~enwiki
326 https://en.wikipedia.org/wiki/User:Albertzeyer
327 https://en.wikipedia.org/w/index.php%3ftitle=User:Albmedina&action=edit&redlink=1
328 https://en.wikipedia.org/wiki/User:Albmont
329 https://en.wikipedia.org/wiki/User:Albuseer
330 https://en.wikipedia.org/w/index.php%3ftitle=User:Alcat33&action=edit&redlink=1
331 https://en.wikipedia.org/w/index.php%3ftitle=User:Alcauchy&action=edit&redlink=1
332 https://en.wikipedia.org/wiki/User:Alcazar84
333 https://en.wikipedia.org/wiki/User:Alcides
334 https://en.wikipedia.org/w/index.php%3ftitle=User:Alcidesfonseca&action=edit&redlink=1
335 https://en.wikipedia.org/wiki/User:AlcoholVat
336 https://en.wikipedia.org/wiki/User:Aldie
337 https://en.wikipedia.org/wiki/User:Ale_jrb
338 https://en.wikipedia.org/wiki/User:Ale2006
339 https://en.wikipedia.org/w/index.php%3ftitle=User:AlecTaylor&action=edit&redlink=1
340 https://en.wikipedia.org/wiki/User:Aleenf1
341 https://en.wikipedia.org/w/index.php%3ftitle=User:Alejandro.isaza&action=edit&redlink=1
342 https://en.wikipedia.org/w/index.php%3ftitle=User:Aleks-ger&action=edit&redlink=1
343 https://en.wikipedia.org/w/index.php%3ftitle=User:Aleks80&action=edit&redlink=1
344 https://en.wikipedia.org/w/index.php%3ftitle=User:Aleksandr.kraichuk&action=edit&redlink=1

1 Alemaaltevinden345
2 Alertrferman346
1 Alessandro57347
1 Alex Bakharev348
1 Alex Dainiak349
1 Alex Krainov350
2 Alex Pimenov351
21 Alex Selby352
2 Alex.atkins353
1 Alex.g354
1 Alex.koturanov355
4 Alex.mccarthy356
1 Alex2go357
1 Alex43223358
1 Alex86450359
2 AlexAlex360
2 AlexCovarrubias361
7 AlexPlank362
1 AlexTG363
1 Alexander364
1 Alexander Anoprienko365
1 Alexander Brock366
1 Alexander Davronov367
1 Alexander Misel368

345 https://en.wikipedia.org/w/index.php%3ftitle=User:Alemaaltevinden&action=edit&redlink=1
346 https://en.wikipedia.org/w/index.php%3ftitle=User:Alertrferman&action=edit&redlink=1
347 https://en.wikipedia.org/wiki/User:Alessandro57
348 https://en.wikipedia.org/wiki/User:Alex_Bakharev
349 https://en.wikipedia.org/wiki/User:Alex_Dainiak
350 https://en.wikipedia.org/wiki/User:Alex_Krainov
351 https://en.wikipedia.org/w/index.php%3ftitle=User:Alex_Pimenov&action=edit&redlink=1
352 https://en.wikipedia.org/w/index.php%3ftitle=User:Alex_Selby&action=edit&redlink=1
353 https://en.wikipedia.org/w/index.php%3ftitle=User:Alex.atkins&action=edit&redlink=1
354 https://en.wikipedia.org/wiki/User:Alex.g
355 https://en.wikipedia.org/w/index.php%3ftitle=User:Alex.koturanov&action=edit&redlink=1
356 https://en.wikipedia.org/w/index.php%3ftitle=User:Alex.mccarthy&action=edit&redlink=1
357 https://en.wikipedia.org/w/index.php%3ftitle=User:Alex2go&action=edit&redlink=1
358 https://en.wikipedia.org/wiki/User:Alex43223
359 https://en.wikipedia.org/w/index.php%3ftitle=User:Alex86450&action=edit&redlink=1
360 https://en.wikipedia.org/wiki/User:AlexAlex
361 https://en.wikipedia.org/wiki/User:AlexCovarrubias
362 https://en.wikipedia.org/wiki/User:AlexPlank
363 https://en.wikipedia.org/wiki/User:AlexTG
364 https://en.wikipedia.org/wiki/User:Alexander
365 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexander_Anoprienko&action=edit&redlink=1
366 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexander_Brock&action=edit&redlink=1
367 https://en.wikipedia.org/wiki/User:Alexander_Davronov
368 https://en.wikipedia.org/wiki/User:Alexander_Misel

1 Alexander Shekhovtsov369
1 Alexander256370
2 AlexanderZoul371
3 AlexandreZ372
1 Alexandru.Olteanu373
15 Alexbot374
1 Alexbrandts375
1 Alexcollins376
1 Alexdow377
3 Alexei Kopylov378
1 Alexeicolin379
17 Alexey Muranov380
1 Alexey.kudinkin381
3 Alexeytuzhilin382
2 Alexf383
1 Alexgotsis384
1 Alexisastupidnoob385
11 Alexius08386
1 Alexjshepler387
1 Alexkorn388
3 Alexkyoung389
2 Alexlambson390
1 Alexmorter391
1 Alexrakia392

369 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexander_Shekhovtsov&action=edit&redlink=1
370 https://en.wikipedia.org/wiki/User:Alexander256
371 https://en.wikipedia.org/w/index.php%3ftitle=User:AlexanderZoul&action=edit&redlink=1
372 https://en.wikipedia.org/w/index.php%3ftitle=User:AlexandreZ&action=edit&redlink=1
373 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexandru.Olteanu&action=edit&redlink=1
374 https://en.wikipedia.org/wiki/User:Alexbot
375 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexbrandts&action=edit&redlink=1
376 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexcollins&action=edit&redlink=1
377 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexdow&action=edit&redlink=1
378 https://en.wikipedia.org/wiki/User:Alexei_Kopylov
379 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexeicolin&action=edit&redlink=1
380 https://en.wikipedia.org/wiki/User:Alexey_Muranov
381 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexey.kudinkin&action=edit&redlink=1
382 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexeytuzhilin&action=edit&redlink=1
383 https://en.wikipedia.org/wiki/User:Alexf
384 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexgotsis&action=edit&redlink=1
385 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexisastupidnoob&action=edit&redlink=1
386 https://en.wikipedia.org/wiki/User:Alexius08
387 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexjshepler&action=edit&redlink=1
388 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexkorn&action=edit&redlink=1
389 https://en.wikipedia.org/wiki/User:Alexkyoung
390 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexlambson&action=edit&redlink=1
391 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexmorter&action=edit&redlink=1
392 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexrakia&action=edit&redlink=1

7 Alexs2008393
2 Alexsh394
1 Alextlu395
1 Alexwatson396
3 Alexwhite42000397
1 Alexwho314398
1 Alfalfahotshots399
1 Alfie400
2 Alfie66401
1 AlfieNet402
1 Alfio403
1 Alfredo J. Herrera Lago404
2 Algebra123230405
4 Algebraist406
2 Algebran407
8 Algo open408
2 Algoman101409
7 Alhoori410
1 Ali hadian411
1 Alienus412
2 Alieseraj413
1 Alinempc414
1 Aliok ao415
2 Aliqk1603416
1 Alison417

393 https://en.wikipedia.org/wiki/User:Alexs2008
394 https://en.wikipedia.org/wiki/User:Alexsh
395 https://en.wikipedia.org/wiki/User:Alextlu
396 https://en.wikipedia.org/wiki/User:Alexwatson
397 https://en.wikipedia.org/w/index.php%3ftitle=User:Alexwhite42000&action=edit&redlink=1
398 https://en.wikipedia.org/wiki/User:Alexwho314
399 https://en.wikipedia.org/wiki/User:Alfalfahotshots
400 https://en.wikipedia.org/wiki/User:Alfie
401 https://en.wikipedia.org/wiki/User:Alfie66
402 https://en.wikipedia.org/wiki/User:AlfieNet
403 https://en.wikipedia.org/wiki/User:Alfio
404 https://en.wikipedia.org/wiki/User:Alfredo_J._Herrera_Lago
405 https://en.wikipedia.org/w/index.php%3ftitle=User:Algebra123230&action=edit&redlink=1
406 https://en.wikipedia.org/wiki/User:Algebraist
407 https://en.wikipedia.org/wiki/User:Algebran
408 https://en.wikipedia.org/w/index.php%3ftitle=User:Algo_open&action=edit&redlink=1
409 https://en.wikipedia.org/w/index.php%3ftitle=User:Algoman101&action=edit&redlink=1
410 https://en.wikipedia.org/wiki/User:Alhoori
411 https://en.wikipedia.org/w/index.php%3ftitle=User:Ali_hadian&action=edit&redlink=1
412 https://en.wikipedia.org/wiki/User:Alienus
413 https://en.wikipedia.org/wiki/User:Alieseraj
414 https://en.wikipedia.org/w/index.php%3ftitle=User:Alinempc&action=edit&redlink=1
415 https://en.wikipedia.org/wiki/User:Aliok_ao
416 https://en.wikipedia.org/wiki/User:Aliqk1603
417 https://en.wikipedia.org/wiki/User:Alison

1 Alistair1978418
1 AlistairMcMillan419
1 Alkarex420
3 Alksentrs421
2 Alksub422
1 AllProgramming423
2 AllUltima424
13 Allan McInnes425
2 Allan speck426
3 AllanBz427
10 AlleborgoBot428
1 Allefant429
1 Allen3430
1 AllenDowney431
4 AllenWalker61432
1 AllenZh433
2 Allens434
3 Allforrous435
1 Allmightyduck436
3 Allstarecho437
1 Allthefoxes438
1 AllyUnion439
3 Almi440
3 Almit39441
1 Almy442

418 https://en.wikipedia.org/wiki/User:Alistair1978
419 https://en.wikipedia.org/wiki/User:AlistairMcMillan
420 https://en.wikipedia.org/wiki/User:Alkarex
421 https://en.wikipedia.org/wiki/User:Alksentrs
422 https://en.wikipedia.org/wiki/User:Alksub
423 https://en.wikipedia.org/w/index.php%3ftitle=User:AllProgramming&action=edit&redlink=1
424 https://en.wikipedia.org/wiki/User:AllUltima
425 https://en.wikipedia.org/wiki/User:Allan_McInnes
426 https://en.wikipedia.org/w/index.php%3ftitle=User:Allan_speck&action=edit&redlink=1
427 https://en.wikipedia.org/w/index.php%3ftitle=User:AllanBz&action=edit&redlink=1
428 https://en.wikipedia.org/wiki/User:AlleborgoBot
429 https://en.wikipedia.org/wiki/User:Allefant
430 https://en.wikipedia.org/wiki/User:Allen3
431 https://en.wikipedia.org/w/index.php%3ftitle=User:AllenDowney&action=edit&redlink=1
432 https://en.wikipedia.org/w/index.php%3ftitle=User:AllenWalker61&action=edit&redlink=1
433 https://en.wikipedia.org/wiki/User:AllenZh
434 https://en.wikipedia.org/wiki/User:Allens
435 https://en.wikipedia.org/wiki/User:Allforrous
436 https://en.wikipedia.org/wiki/User:Allmightyduck
437 https://en.wikipedia.org/wiki/User:Allstarecho
438 https://en.wikipedia.org/wiki/User:Allthefoxes
439 https://en.wikipedia.org/wiki/User:AllyUnion
440 https://en.wikipedia.org/wiki/User:Almi
441 https://en.wikipedia.org/w/index.php%3ftitle=User:Almit39&action=edit&redlink=1
442 https://en.wikipedia.org/wiki/User:Almy

2 AlnoktaBOT443
2 Aloftus2444
1 Aloksukhwani445
1 Alotau446
1 Alotropico447
1 AlphaBetaGamma01448
5 Alphachimpbot449
1 Alphama450
3 Alquantor451
1 Alroth452
1 Alsee453
1 Altay8454
335 Altenmann455
1 AltoStev456
1 Alvestrand457
1 Alvin Seville458
1 Alvin-cs459
1 AlwaysAngry460
2 Alyssaq461
1 Alyssonbruno462
2 Amadys463
4 Amakuha464
1 Amakuru465
2 Aman krc466
1 Amandal1810467

443 https://en.wikipedia.org/wiki/User:AlnoktaBOT
444 https://en.wikipedia.org/wiki/User:Aloftus2
445 https://en.wikipedia.org/w/index.php%3ftitle=User:Aloksukhwani&action=edit&redlink=1
446 https://en.wikipedia.org/wiki/User:Alotau
447 https://en.wikipedia.org/w/index.php%3ftitle=User:Alotropico&action=edit&redlink=1
448 https://en.wikipedia.org/wiki/User:AlphaBetaGamma01
449 https://en.wikipedia.org/wiki/User:Alphachimpbot
450 https://en.wikipedia.org/wiki/User:Alphama
451 https://en.wikipedia.org/wiki/User:Alquantor
452 https://en.wikipedia.org/w/index.php%3ftitle=User:Alroth&action=edit&redlink=1
453 https://en.wikipedia.org/wiki/User:Alsee
454 https://en.wikipedia.org/w/index.php%3ftitle=User:Altay8&action=edit&redlink=1
455 https://en.wikipedia.org/wiki/User:Altenmann
456 https://en.wikipedia.org/wiki/User:AltoStev
457 https://en.wikipedia.org/wiki/User:Alvestrand
458 https://en.wikipedia.org/wiki/User:Alvin_Seville
459 https://en.wikipedia.org/wiki/User:Alvin-cs
460 https://en.wikipedia.org/wiki/User:AlwaysAngry
461 https://en.wikipedia.org/w/index.php%3ftitle=User:Alyssaq&action=edit&redlink=1
462 https://en.wikipedia.org/w/index.php%3ftitle=User:Alyssonbruno&action=edit&redlink=1
463 https://en.wikipedia.org/wiki/User:Amadys
464 https://en.wikipedia.org/wiki/User:Amakuha
465 https://en.wikipedia.org/wiki/User:Amakuru
466 https://en.wikipedia.org/w/index.php%3ftitle=User:Aman_krc&action=edit&redlink=1
467 https://en.wikipedia.org/w/index.php%3ftitle=User:Amandal1810&action=edit&redlink=1

3 Amanjain110893468
1 AmarChandra469
2 Amarpalsinghkapoor470
4 Amaury471
1 Amazing thunder storm472
1 Ambassador1473
1 Amcshane474
8 Amelio Vázquez475
2 Amenel476
5 Amicable always477
2 Amiceli478
2 Amin-mlm479
3 Amine.marref480
2 Amiodusz481
3 AmirOnWiki482
1 Amirmalekzadeh483
1 Amirobot484
1 Amitkumarp485
1 Ammubhave486
1 Amol Agrawal487
1 Amol2960488
1 Amon16489
1 Amortias490
1 Ampsthatgoto11491

468 https://en.wikipedia.org/w/index.php%3ftitle=User:Amanjain110893&action=edit&redlink=1
469 https://en.wikipedia.org/wiki/User:AmarChandra
470 https://en.wikipedia.org/w/index.php%3ftitle=User:Amarpalsinghkapoor&action=edit&redlink=1
471 https://en.wikipedia.org/wiki/User:Amaury
472 https://en.wikipedia.org/w/index.php%3ftitle=User:Amazing_thunder_storm&action=edit&redlink=1
473 https://en.wikipedia.org/w/index.php%3ftitle=User:Ambassador1&action=edit&redlink=1
474 https://en.wikipedia.org/w/index.php%3ftitle=User:Amcshane&action=edit&redlink=1
475 https://en.wikipedia.org/wiki/User:Amelio_V%25C3%25A1zquez
476 https://en.wikipedia.org/wiki/User:Amenel
477 https://en.wikipedia.org/wiki/User:Amicable_always
478 https://en.wikipedia.org/w/index.php%3ftitle=User:Amiceli&action=edit&redlink=1
479 https://en.wikipedia.org/w/index.php%3ftitle=User:Amin-mlm&action=edit&redlink=1
480 https://en.wikipedia.org/w/index.php%3ftitle=User:Amine.marref&action=edit&redlink=1
481 https://en.wikipedia.org/w/index.php%3ftitle=User:Amiodusz&action=edit&redlink=1
482 https://en.wikipedia.org/wiki/User:AmirOnWiki
483 https://en.wikipedia.org/w/index.php%3ftitle=User:Amirmalekzadeh&action=edit&redlink=1
484 https://en.wikipedia.org/wiki/User:Amirobot
485 https://en.wikipedia.org/w/index.php%3ftitle=User:Amitkumarp&action=edit&redlink=1
486 https://en.wikipedia.org/wiki/User:Ammubhave
487 https://en.wikipedia.org/w/index.php%3ftitle=User:Amol_Agrawal&action=edit&redlink=1
488 https://en.wikipedia.org/w/index.php%3ftitle=User:Amol2960&action=edit&redlink=1
489 https://en.wikipedia.org/w/index.php%3ftitle=User:Amon16&action=edit&redlink=1
490 https://en.wikipedia.org/wiki/User:Amortias
491 https://en.wikipedia.org/w/index.php%3ftitle=User:Ampsthatgoto11&action=edit&redlink=1

1 Amrish deep492
2 AmritasyaPutra493
1 Ams80494
1 Amux495
1 An Autist496
4 AnAj497
1 AnOddName498
1 Anabus499
4 Anachronist500
3 Anadverb501
1 AnakngAraw502
3 Anas1712503
4 Anaxial504
1 Ancheta Wis505
11 Anchor Link Bot506
1 AncientToaster507
1 Anclation~enwiki508
1 Andejons509
7 Anders.ahlgren510
1 AndersBot511
2 Anderson512
1 AndiPersti513
1 Andipi514
19 Andre Engels515
1 Andre.bittar516

492 https://en.wikipedia.org/w/index.php%3ftitle=User:Amrish_deep&action=edit&redlink=1
493 https://en.wikipedia.org/wiki/User:AmritasyaPutra
494 https://en.wikipedia.org/wiki/User:Ams80
495 https://en.wikipedia.org/wiki/User:Amux
496 https://en.wikipedia.org/wiki/User:An_Autist
497 https://en.wikipedia.org/wiki/User:AnAj
498 https://en.wikipedia.org/wiki/User:AnOddName
499 https://en.wikipedia.org/w/index.php%3ftitle=User:Anabus&action=edit&redlink=1
500 https://en.wikipedia.org/wiki/User:Anachronist
501 https://en.wikipedia.org/w/index.php%3ftitle=User:Anadverb&action=edit&redlink=1
502 https://en.wikipedia.org/wiki/User:AnakngAraw
503 https://en.wikipedia.org/wiki/User:Anas1712
504 https://en.wikipedia.org/wiki/User:Anaxial
505 https://en.wikipedia.org/wiki/User:Ancheta_Wis
506 https://en.wikipedia.org/wiki/User:Anchor_Link_Bot
507 https://en.wikipedia.org/wiki/User:AncientToaster
508 https://en.wikipedia.org/wiki/User:Anclation~enwiki
509 https://en.wikipedia.org/wiki/User:Andejons
510 https://en.wikipedia.org/wiki/User:Anders.ahlgren
511 https://en.wikipedia.org/wiki/User:AndersBot
512 https://en.wikipedia.org/wiki/User:Anderson
513 https://en.wikipedia.org/wiki/User:AndiPersti
514 https://en.wikipedia.org/w/index.php%3ftitle=User:Andipi&action=edit&redlink=1
515 https://en.wikipedia.org/wiki/User:Andre_Engels
516 https://en.wikipedia.org/w/index.php%3ftitle=User:Andre.bittar&action=edit&redlink=1

2 Andre.holzner517
1 Andrea105518
85 Andreas Kaufmann519
1 Andreas Rejbrand520
3 AndreasWittenstein521
1 Andreasabel522
2 Andreasneumann523
1 Andreasr2d2524
5 Andrei Stroe525
1 Andrejj526
1 Andrejskraba527
2 Andres528
1 Andresambrois529
1 Andrew Helwer530
1 Andrew Rodland531
1 Andrew00786532
1 AndrewBuck533
1 AndrewCmcauliffe534
1 AndrewHowse535
2 AndrewWTaylor536
1 Andrewa537
1 Andrewaskew538
3 Andrewman327539
1 Andrewmu540
1 Andrewpmk541

517 https://en.wikipedia.org/w/index.php%3ftitle=User:Andre.holzner&action=edit&redlink=1
518 https://en.wikipedia.org/wiki/User:Andrea105
519 https://en.wikipedia.org/wiki/User:Andreas_Kaufmann
520 https://en.wikipedia.org/wiki/User:Andreas_Rejbrand
521 https://en.wikipedia.org/w/index.php%3ftitle=User:AndreasWittenstein&action=edit&redlink=1
522 https://en.wikipedia.org/w/index.php%3ftitle=User:Andreasabel&action=edit&redlink=1
523 https://en.wikipedia.org/wiki/User:Andreasneumann
524 https://en.wikipedia.org/wiki/User:Andreasr2d2
525 https://en.wikipedia.org/wiki/User:Andrei_Stroe
526 https://en.wikipedia.org/wiki/User:Andrejj
527 https://en.wikipedia.org/w/index.php%3ftitle=User:Andrejskraba&action=edit&redlink=1
528 https://en.wikipedia.org/wiki/User:Andres
529 https://en.wikipedia.org/w/index.php%3ftitle=User:Andresambrois&action=edit&redlink=1
530 https://en.wikipedia.org/wiki/User:Andrew_Helwer
531 https://en.wikipedia.org/wiki/User:Andrew_Rodland
532 https://en.wikipedia.org/w/index.php%3ftitle=User:Andrew00786&action=edit&redlink=1
533 https://en.wikipedia.org/wiki/User:AndrewBuck
534 https://en.wikipedia.org/w/index.php%3ftitle=User:AndrewCmcauliffe&action=edit&redlink=1
535 https://en.wikipedia.org/wiki/User:AndrewHowse
536 https://en.wikipedia.org/wiki/User:AndrewWTaylor
537 https://en.wikipedia.org/wiki/User:Andrewa
538 https://en.wikipedia.org/wiki/User:Andrewaskew
539 https://en.wikipedia.org/wiki/User:Andrewman327
540 https://en.wikipedia.org/w/index.php%3ftitle=User:Andrewmu&action=edit&redlink=1
541 https://en.wikipedia.org/wiki/User:Andrewpmk

1 Andrewrosenberg542
1 Andrewrp543
1 Andrey.a.mitin544
1 Andriamitondra545
25 Andris546
2 Andy Dingley547
3 Andy M. Wang548
2 Andy19601549
2 AndyBQ550
2 AndyKali551
5 Andyhowlett552
2 Ang3lboy2001553
5 Angela554
1 Angelababy00555
1 AngleWyrm556
3 Angus Lepper557
1 Angus1986558
1 Anharrington559
2 Anikah560
1 Aniketmk561
1 Aniksarkar6562
1 Aninhumer563
1 Anish karimaloor564
6 Anishphilljoe565
1 Aniskhan001566

542 https://en.wikipedia.org/w/index.php%3ftitle=User:Andrewrosenberg&action=edit&redlink=1
543 https://en.wikipedia.org/wiki/User:Andrewrp
544 https://en.wikipedia.org/wiki/User:Andrey.a.mitin
545 https://en.wikipedia.org/w/index.php%3ftitle=User:Andriamitondra&action=edit&redlink=1
546 https://en.wikipedia.org/wiki/User:Andris
547 https://en.wikipedia.org/wiki/User:Andy_Dingley
548 https://en.wikipedia.org/wiki/User:Andy_M._Wang
549 https://en.wikipedia.org/w/index.php%3ftitle=User:Andy19601&action=edit&redlink=1
550 https://en.wikipedia.org/wiki/User:AndyBQ
551 https://en.wikipedia.org/wiki/User:AndyKali
552 https://en.wikipedia.org/w/index.php%3ftitle=User:Andyhowlett&action=edit&redlink=1
553 https://en.wikipedia.org/w/index.php%3ftitle=User:Ang3lboy2001&action=edit&redlink=1
554 https://en.wikipedia.org/wiki/User:Angela
555 https://en.wikipedia.org/w/index.php%3ftitle=User:Angelababy00&action=edit&redlink=1
556 https://en.wikipedia.org/w/index.php%3ftitle=User:AngleWyrm&action=edit&redlink=1
557 https://en.wikipedia.org/wiki/User:Angus_Lepper
558 https://en.wikipedia.org/wiki/User:Angus1986
559 https://en.wikipedia.org/w/index.php%3ftitle=User:Anharrington&action=edit&redlink=1
560 https://en.wikipedia.org/w/index.php%3ftitle=User:Anikah&action=edit&redlink=1
561 https://en.wikipedia.org/wiki/User:Aniketmk
562 https://en.wikipedia.org/w/index.php%3ftitle=User:Aniksarkar6&action=edit&redlink=1
563 https://en.wikipedia.org/w/index.php%3ftitle=User:Aninhumer&action=edit&redlink=1
564 https://en.wikipedia.org/wiki/User:Anish_karimaloor
565 https://en.wikipedia.org/w/index.php%3ftitle=User:Anishphilljoe&action=edit&redlink=1
566 https://en.wikipedia.org/wiki/User:Aniskhan001

1 Aniu~enwiki567
1 Anizzomc568
1 Ankit kulkarni569
1 Ankitbhatt570
19 Ankitsachan333571
5 Anks9b572
3 Ankursri494573
2 Anmlr574
1 Ann O'nyme575
6 AnnGabrieli576
1 Anna Frodesiak577
1 Anne Bauval578
1 Annetroy93579
1 Annick Robberecht580
3 Anog581
1 Anomalocaris582
1 Anomie583
233 AnomieBOT584
1 Anon423585
1 Anonrabbi586
1 Anonymann587
15 Anonymous Dissident588
2 Anonymous Random Person589
1 Anonymous101590
6 Anonymousacademic591

567 https://en.wikipedia.org/w/index.php%3ftitle=User:Aniu~enwiki&action=edit&redlink=1
568 https://en.wikipedia.org/wiki/User:Anizzomc
569 https://en.wikipedia.org/w/index.php%3ftitle=User:Ankit_kulkarni&action=edit&redlink=1
570 https://en.wikipedia.org/wiki/User:Ankitbhatt
571 https://en.wikipedia.org/w/index.php%3ftitle=User:Ankitsachan333&action=edit&redlink=1
572 https://en.wikipedia.org/w/index.php%3ftitle=User:Anks9b&action=edit&redlink=1
573 https://en.wikipedia.org/w/index.php%3ftitle=User:Ankursri494&action=edit&redlink=1
574 https://en.wikipedia.org/w/index.php%3ftitle=User:Anmlr&action=edit&redlink=1
575 https://en.wikipedia.org/wiki/User:Ann_O%2527nyme
576 https://en.wikipedia.org/wiki/User:AnnGabrieli
577 https://en.wikipedia.org/wiki/User:Anna_Frodesiak
578 https://en.wikipedia.org/w/index.php%3ftitle=User:Anne_Bauval&action=edit&redlink=1
579 https://en.wikipedia.org/wiki/User:Annetroy93
580 https://en.wikipedia.org/w/index.php%3ftitle=User:Annick_Robberecht&action=edit&redlink=1
581 https://en.wikipedia.org/w/index.php%3ftitle=User:Anog&action=edit&redlink=1
582 https://en.wikipedia.org/wiki/User:Anomalocaris
583 https://en.wikipedia.org/wiki/User:Anomie
584 https://en.wikipedia.org/wiki/User:AnomieBOT
585 https://en.wikipedia.org/wiki/User:Anon423
586 https://en.wikipedia.org/w/index.php%3ftitle=User:Anonrabbi&action=edit&redlink=1
587 https://en.wikipedia.org/w/index.php%3ftitle=User:Anonymann&action=edit&redlink=1
588 https://en.wikipedia.org/wiki/User:Anonymous_Dissident
589 https://en.wikipedia.org/wiki/User:Anonymous_Random_Person
590 https://en.wikipedia.org/wiki/User:Anonymous101
591 https://en.wikipedia.org/wiki/User:Anonymousacademic

4 Anonymousone2592
1 Anoopjohnson593
1 AnotherPerson 2594
1 Anrewar595
4 Anrnusna596
6 Ansatz597
2 Ansell598
2 Anselmotalotta599
2 Ansuman04600
1 AntPraxis601
1 Antaeus FeIdspar602
9 Antaeus Feldspar603
2 Antandrus604
3 Anthony Appleyard605
2 Anthonynow12606
1 Anti-Nationalist607
2 AntiSpamBot608
23 AntiVandalBot609
2 Antientropic610
1 Antiqueight611
1 Antiufo612
1 Antiuser613
4 Antkro614
1 AntonDevil615
11 Antonbharkamsan616

592 https://en.wikipedia.org/w/index.php%3ftitle=User:Anonymousone2&action=edit&redlink=1
593 https://en.wikipedia.org/wiki/User:Anoopjohnson
594 https://en.wikipedia.org/w/index.php%3ftitle=User:AnotherPerson_2&action=edit&redlink=1
595 https://en.wikipedia.org/w/index.php%3ftitle=User:Anrewar&action=edit&redlink=1
596 https://en.wikipedia.org/wiki/User:Anrnusna
597 https://en.wikipedia.org/wiki/User:Ansatz
598 https://en.wikipedia.org/wiki/User:Ansell
599 https://en.wikipedia.org/wiki/User:Anselmotalotta
600 https://en.wikipedia.org/w/index.php%3ftitle=User:Ansuman04&action=edit&redlink=1
601 https://en.wikipedia.org/w/index.php%3ftitle=User:AntPraxis&action=edit&redlink=1
602 https://en.wikipedia.org/wiki/User:Antaeus_FeIdspar
603 https://en.wikipedia.org/wiki/User:Antaeus_Feldspar
604 https://en.wikipedia.org/wiki/User:Antandrus
605 https://en.wikipedia.org/wiki/User:Anthony_Appleyard
606 https://en.wikipedia.org/wiki/User:Anthonynow12
607 https://en.wikipedia.org/wiki/User:Anti-Nationalist
608 https://en.wikipedia.org/wiki/User:AntiSpamBot
609 https://en.wikipedia.org/wiki/User:AntiVandalBot
610 https://en.wikipedia.org/w/index.php%3ftitle=User:Antientropic&action=edit&redlink=1
611 https://en.wikipedia.org/wiki/User:Antiqueight
612 https://en.wikipedia.org/w/index.php%3ftitle=User:Antiufo&action=edit&redlink=1
613 https://en.wikipedia.org/wiki/User:Antiuser
614 https://en.wikipedia.org/w/index.php%3ftitle=User:Antkro&action=edit&redlink=1
615 https://en.wikipedia.org/w/index.php%3ftitle=User:AntonDevil&action=edit&redlink=1
616 https://en.wikipedia.org/wiki/User:Antonbharkamsan

1 Antonielly617
1 Antoninweb618
1 Antonio Lopez619
1 Anukul.mohil620
2 Anupam207621
20 Anupchowdary622
1 Anurag.x.singh623
2 Anuragbms624
1 Anurmi~enwiki625
1 Anventia626
1 Anyeverybody627
3 Aostrander628
1 Aotsfan007629
2 Ap630
2 Apalamarchuk631
1 Apalsola632
14 Apanag633
4 Aperisic634
1 Aphaia635
1 Apoc2400636
1 Apoctyliptic637
2 Apoorbo638
4 Apoorva181192639
1 Appleseed640
1 Applrpn641

617 https://en.wikipedia.org/wiki/User:Antonielly
618 https://en.wikipedia.org/wiki/User:Antoninweb
619 https://en.wikipedia.org/wiki/User:Antonio_Lopez
620 https://en.wikipedia.org/w/index.php%3ftitle=User:Anukul.mohil&action=edit&redlink=1
621 https://en.wikipedia.org/w/index.php%3ftitle=User:Anupam207&action=edit&redlink=1
622 https://en.wikipedia.org/w/index.php%3ftitle=User:Anupchowdary&action=edit&redlink=1
623 https://en.wikipedia.org/w/index.php%3ftitle=User:Anurag.x.singh&action=edit&redlink=1
624 https://en.wikipedia.org/w/index.php%3ftitle=User:Anuragbms&action=edit&redlink=1
625 https://en.wikipedia.org/w/index.php%3ftitle=User:Anurmi~enwiki&action=edit&redlink=1
626 https://en.wikipedia.org/wiki/User:Anventia
627 https://en.wikipedia.org/wiki/User:Anyeverybody
628 https://en.wikipedia.org/wiki/User:Aostrander
629 https://en.wikipedia.org/wiki/User:Aotsfan007
630 https://en.wikipedia.org/wiki/User:Ap
631 https://en.wikipedia.org/w/index.php%3ftitle=User:Apalamarchuk&action=edit&redlink=1
632 https://en.wikipedia.org/wiki/User:Apalsola
633 https://en.wikipedia.org/wiki/User:Apanag
634 https://en.wikipedia.org/wiki/User:Aperisic
635 https://en.wikipedia.org/wiki/User:Aphaia
636 https://en.wikipedia.org/wiki/User:Apoc2400
637 https://en.wikipedia.org/wiki/User:Apoctyliptic
638 https://en.wikipedia.org/w/index.php%3ftitle=User:Apoorbo&action=edit&redlink=1
639 https://en.wikipedia.org/w/index.php%3ftitle=User:Apoorva181192&action=edit&redlink=1
640 https://en.wikipedia.org/wiki/User:Appleseed
641 https://en.wikipedia.org/wiki/User:Applrpn

1 Aquae642
3 Arabani643
1 Arafateajaz644
5 Aragorn2645
1 Arakunem646
2 Aranya647
1 Araqsya Manukyan648
4 Arauzo649
3 Arbalest Mike650
5 Arbor651
11 Arbor to SJ652
1 Arcadeanalytics653
1 Arcades654
1 Arcaruron655
1 Arcfrk656
1 ArchaeanDragon657
1 Archibald Fitzchesterfield658
2 Archimerged659
2 Archiver2660
3 Arcorann661
1 Arcturus4669662
1 Ardnew663
1 Arefindk664
1 Aresch~enwiki665
7 ArglebargleIV666

642 https://en.wikipedia.org/w/index.php%3ftitle=User:Aquae&action=edit&redlink=1
643 https://en.wikipedia.org/wiki/User:Arabani
644 https://en.wikipedia.org/w/index.php%3ftitle=User:Arafateajaz&action=edit&redlink=1
645 https://en.wikipedia.org/wiki/User:Aragorn2
646 https://en.wikipedia.org/wiki/User:Arakunem
647 https://en.wikipedia.org/wiki/User:Aranya
648 https://en.wikipedia.org/w/index.php%3ftitle=User:Araqsya_Manukyan&action=edit&redlink=1
649 https://en.wikipedia.org/wiki/User:Arauzo
650 https://en.wikipedia.org/wiki/User:Arbalest_Mike
651 https://en.wikipedia.org/wiki/User:Arbor
652 https://en.wikipedia.org/wiki/User:Arbor_to_SJ
653 https://en.wikipedia.org/w/index.php%3ftitle=User:Arcadeanalytics&action=edit&redlink=1
654 https://en.wikipedia.org/wiki/User:Arcades
655 https://en.wikipedia.org/w/index.php%3ftitle=User:Arcaruron&action=edit&redlink=1
656 https://en.wikipedia.org/wiki/User:Arcfrk
657 https://en.wikipedia.org/w/index.php%3ftitle=User:ArchaeanDragon&action=edit&redlink=1
658 https://en.wikipedia.org/wiki/User:Archibald_Fitzchesterfield
659 https://en.wikipedia.org/wiki/User:Archimerged
660 https://en.wikipedia.org/w/index.php%3ftitle=User:Archiver2&action=edit&redlink=1
661 https://en.wikipedia.org/w/index.php%3ftitle=User:Arcorann&action=edit&redlink=1
662 https://en.wikipedia.org/wiki/User:Arcturus4669
663 https://en.wikipedia.org/wiki/User:Ardnew
664 https://en.wikipedia.org/w/index.php%3ftitle=User:Arefindk&action=edit&redlink=1
665 https://en.wikipedia.org/w/index.php%3ftitle=User:Aresch~enwiki&action=edit&redlink=1
666 https://en.wikipedia.org/wiki/User:ArglebargleIV

1 Argumzio667
2 Arichnad668
2 Arid Zkwelty669
3 Arided670
3 Ariel.671
5 Ariel.jacobs672
1 Arik181673
1 Ario674
2 AristosM675
1 Arjarj676
21 Arjayay677
2 Arjun G. Menon678
1 Arjunaraoc679
1 Arjunaugu680
1 Arka sett681
2 Arkenflame682
2 Arlekean683
3 Arlene47684
1 Arlolra685
1 Armine badalyan686
2 Army1987687
3 ArniDagur688
1 Arno Matthias689
59 ArnoldReinhold690
4 Aroben691

667 https://en.wikipedia.org/wiki/User:Argumzio
668 https://en.wikipedia.org/wiki/User:Arichnad
669 https://en.wikipedia.org/w/index.php%3ftitle=User:Arid_Zkwelty&action=edit&redlink=1
670 https://en.wikipedia.org/wiki/User:Arided
671 https://en.wikipedia.org/wiki/User:Ariel.
672 https://en.wikipedia.org/wiki/User:Ariel.jacobs
673 https://en.wikipedia.org/w/index.php%3ftitle=User:Arik181&action=edit&redlink=1
674 https://en.wikipedia.org/wiki/User:Ario
675 https://en.wikipedia.org/wiki/User:AristosM
676 https://en.wikipedia.org/wiki/User:Arjarj
677 https://en.wikipedia.org/wiki/User:Arjayay
678 https://en.wikipedia.org/wiki/User:Arjun_G._Menon
679 https://en.wikipedia.org/wiki/User:Arjunaraoc
680 https://en.wikipedia.org/wiki/User:Arjunaugu
681 https://en.wikipedia.org/w/index.php%3ftitle=User:Arka_sett&action=edit&redlink=1
682 https://en.wikipedia.org/w/index.php%3ftitle=User:Arkenflame&action=edit&redlink=1
683 https://en.wikipedia.org/w/index.php%3ftitle=User:Arlekean&action=edit&redlink=1
684 https://en.wikipedia.org/wiki/User:Arlene47
685 https://en.wikipedia.org/wiki/User:Arlolra
686 https://en.wikipedia.org/w/index.php%3ftitle=User:Armine_badalyan&action=edit&redlink=1
687 https://en.wikipedia.org/wiki/User:Army1987
688 https://en.wikipedia.org/wiki/User:ArniDagur
689 https://en.wikipedia.org/wiki/User:Arno_Matthias
690 https://en.wikipedia.org/wiki/User:ArnoldReinhold
691 https://en.wikipedia.org/w/index.php%3ftitle=User:Aroben&action=edit&redlink=1

1 Aronisstav692
5 Arpi Ter-Araqelyan693
1 Arpi0292694
1 Arpitm695
5 Arrandale696
2 Arrenlex697
1 Arrogant262698
2 Arronax50699
4 Arsstyleh700
6 Artagnon701
1 Artdadamo702
4 Artemisart W703
1 Artemohanjanyan704
1 Arthaey705
1 Arthas702706
6 Arthena707
10 Arthur Frayn708
6 Arthur MILCHIOR709
105 Arthur Rubin710
23 ArthurBot711
4 ArthurDenture712
1 Arthuralbano713
2 Artichoker714
1 Artin map715
2 Artinmm716

692 https://en.wikipedia.org/wiki/User:Aronisstav
693 https://en.wikipedia.org/w/index.php%3ftitle=User:Arpi_Ter-Araqelyan&action=edit&redlink=1
694 https://en.wikipedia.org/w/index.php%3ftitle=User:Arpi0292&action=edit&redlink=1
695 https://en.wikipedia.org/w/index.php%3ftitle=User:Arpitm&action=edit&redlink=1
696 https://en.wikipedia.org/w/index.php%3ftitle=User:Arrandale&action=edit&redlink=1
697 https://en.wikipedia.org/w/index.php%3ftitle=User:Arrenlex&action=edit&redlink=1
698 https://en.wikipedia.org/w/index.php%3ftitle=User:Arrogant262&action=edit&redlink=1
699 https://en.wikipedia.org/wiki/User:Arronax50
700 https://en.wikipedia.org/w/index.php%3ftitle=User:Arsstyleh&action=edit&redlink=1
701 https://en.wikipedia.org/wiki/User:Artagnon
702 https://en.wikipedia.org/w/index.php%3ftitle=User:Artdadamo&action=edit&redlink=1
703 https://en.wikipedia.org/w/index.php%3ftitle=User:Artemisart_W&action=edit&redlink=1
704 https://en.wikipedia.org/w/index.php%3ftitle=User:Artemohanjanyan&action=edit&redlink=1
705 https://en.wikipedia.org/wiki/User:Arthaey
706 https://en.wikipedia.org/w/index.php%3ftitle=User:Arthas702&action=edit&redlink=1
707 https://en.wikipedia.org/wiki/User:Arthena
708 https://en.wikipedia.org/wiki/User:Arthur_Frayn
709 https://en.wikipedia.org/wiki/User:Arthur_MILCHIOR
710 https://en.wikipedia.org/wiki/User:Arthur_Rubin
711 https://en.wikipedia.org/wiki/User:ArthurBot
712 https://en.wikipedia.org/wiki/User:ArthurDenture
713 https://en.wikipedia.org/w/index.php%3ftitle=User:Arthuralbano&action=edit&redlink=1
714 https://en.wikipedia.org/wiki/User:Artichoker
715 https://en.wikipedia.org/w/index.php%3ftitle=User:Artin_map&action=edit&redlink=1
716 https://en.wikipedia.org/w/index.php%3ftitle=User:Artinmm&action=edit&redlink=1

1 Arto B717
1 Artoonie718
12 Artoria2e5719
1 ArturShaik720
1 Artyom Kalinin721
2 Arun APEC722
1 Arunmoezhi723
1 Arunsingh16724
64 Arvindn725
1 AryaTargaryen726
1 ArzelaAscoli727
9 As the glorious weep728
1 Asacarny729
1 AsceticRose730
9 Ascánder731
2 Asdfghjklkjhgfdsasdfghjklkjhgfds732
6 Asdquefty733
6 Aseld734
2 Asgowrisankar735
4 Ashawley736
1 Ashewmaker737
1 Ashiato738
1 Ashikbekal739
1 Ashingeorge740
6 Ashish Negi 001741

717 https://en.wikipedia.org/wiki/User:Arto_B
718 https://en.wikipedia.org/wiki/User:Artoonie
719 https://en.wikipedia.org/wiki/User:Artoria2e5
720 https://en.wikipedia.org/w/index.php%3ftitle=User:ArturShaik&action=edit&redlink=1
721 https://en.wikipedia.org/w/index.php%3ftitle=User:Artyom_Kalinin&action=edit&redlink=1
722 https://en.wikipedia.org/w/index.php%3ftitle=User:Arun_APEC&action=edit&redlink=1
723 https://en.wikipedia.org/wiki/User:Arunmoezhi
724 https://en.wikipedia.org/wiki/User:Arunsingh16
725 https://en.wikipedia.org/wiki/User:Arvindn
726 https://en.wikipedia.org/w/index.php%3ftitle=User:AryaTargaryen&action=edit&redlink=1
727 https://en.wikipedia.org/wiki/User:ArzelaAscoli
728 https://en.wikipedia.org/wiki/User:As_the_glorious_weep
729 https://en.wikipedia.org/w/index.php%3ftitle=User:Asacarny&action=edit&redlink=1
730 https://en.wikipedia.org/w/index.php%3ftitle=User:AsceticRose&action=edit&redlink=1
731 https://en.wikipedia.org/wiki/User:Asc%25C3%25A1nder
732 https://en.wikipedia.org/w/index.php%3ftitle=User:Asdfghjklkjhgfdsasdfghjklkjhgfds&action=edit&redlink=1
733 https://en.wikipedia.org/wiki/User:Asdquefty
734 https://en.wikipedia.org/wiki/User:Aseld
735 https://en.wikipedia.org/wiki/User:Asgowrisankar
736 https://en.wikipedia.org/wiki/User:Ashawley
737 https://en.wikipedia.org/w/index.php%3ftitle=User:Ashewmaker&action=edit&redlink=1
738 https://en.wikipedia.org/w/index.php%3ftitle=User:Ashiato&action=edit&redlink=1
739 https://en.wikipedia.org/w/index.php%3ftitle=User:Ashikbekal&action=edit&redlink=1
740 https://en.wikipedia.org/w/index.php%3ftitle=User:Ashingeorge&action=edit&redlink=1
741 https://en.wikipedia.org/w/index.php%3ftitle=User:Ashish_Negi_001&action=edit&redlink=1

1 AshishMbm2012742
2 Ashmoo743
1 Ashvio744
2 Ashwin745
1 Asiir746
1 Asinop747
1 Askerlee748
1 Askewchan749
1 Asmeurer750
1 Asp-GentooLinux~enwiki751
1 Aspiziri752
1 Asplode753
1 Asqueella754
1 Asred9755
2 Assimil8or~enwiki756
1 Astral757
2 AstroCS758
2 AstroNomer759
1 Astroceltica760
1 Astronouth7303761
1 Astrophil762
1 Asturius763
1 Asuuberg764
1 Aswincweety765
4 Asxxx766

742 https://en.wikipedia.org/w/index.php%3ftitle=User:AshishMbm2012&action=edit&redlink=1
743 https://en.wikipedia.org/wiki/User:Ashmoo
744 https://en.wikipedia.org/w/index.php%3ftitle=User:Ashvio&action=edit&redlink=1
745 https://en.wikipedia.org/wiki/User:Ashwin
746 https://en.wikipedia.org/wiki/User:Asiir
747 https://en.wikipedia.org/w/index.php%3ftitle=User:Asinop&action=edit&redlink=1
748 https://en.wikipedia.org/w/index.php%3ftitle=User:Askerlee&action=edit&redlink=1
749 https://en.wikipedia.org/w/index.php%3ftitle=User:Askewchan&action=edit&redlink=1
750 https://en.wikipedia.org/wiki/User:Asmeurer
751 https://en.wikipedia.org/w/index.php%3ftitle=User:Asp-GentooLinux~enwiki&action=edit&redlink=1
752 https://en.wikipedia.org/w/index.php%3ftitle=User:Aspiziri&action=edit&redlink=1
753 https://en.wikipedia.org/wiki/User:Asplode
754 https://en.wikipedia.org/w/index.php%3ftitle=User:Asqueella&action=edit&redlink=1
755 https://en.wikipedia.org/w/index.php%3ftitle=User:Asred9&action=edit&redlink=1
756 https://en.wikipedia.org/w/index.php%3ftitle=User:Assimil8or~enwiki&action=edit&redlink=1
757 https://en.wikipedia.org/wiki/User:Astral
758 https://en.wikipedia.org/w/index.php%3ftitle=User:AstroCS&action=edit&redlink=1
759 https://en.wikipedia.org/wiki/User:AstroNomer
760 https://en.wikipedia.org/wiki/User:Astroceltica
761 https://en.wikipedia.org/wiki/User:Astronouth7303
762 https://en.wikipedia.org/wiki/User:Astrophil
763 https://en.wikipedia.org/w/index.php%3ftitle=User:Asturius&action=edit&redlink=1
764 https://en.wikipedia.org/w/index.php%3ftitle=User:Asuuberg&action=edit&redlink=1
765 https://en.wikipedia.org/w/index.php%3ftitle=User:Aswincweety&action=edit&redlink=1
766 https://en.wikipedia.org/w/index.php%3ftitle=User:Asxxx&action=edit&redlink=1

2 Atari44767
1 Atcold768
1 Atethnekos769
1 Atiliogomes770
4 Atlant771
2 Atlantia772
2 Atlantic306773
1 Atlantis2002774
1 Atmilios775
1 AtomCrusher776
1 Atomicbeachball777
4 Atoponce778
1 Atthaphong779
1 Atulsingh7890780
2 Audriusa781
6 Augurar782
1 Auminski783
3 Aureooms784
4 Auric785
1 Aurinegro~enwiki786
1 AustenWHead787
5 AustinBuchanan788
1 Autumnalmonk789
1 AvalerionV790
1 Avatar791

767 https://en.wikipedia.org/w/index.php%3ftitle=User:Atari44&action=edit&redlink=1
768 https://en.wikipedia.org/w/index.php%3ftitle=User:Atcold&action=edit&redlink=1
769 https://en.wikipedia.org/wiki/User:Atethnekos
770 https://en.wikipedia.org/w/index.php%3ftitle=User:Atiliogomes&action=edit&redlink=1
771 https://en.wikipedia.org/wiki/User:Atlant
772 https://en.wikipedia.org/wiki/User:Atlantia
773 https://en.wikipedia.org/wiki/User:Atlantic306
774 https://en.wikipedia.org/w/index.php%3ftitle=User:Atlantis2002&action=edit&redlink=1
775 https://en.wikipedia.org/wiki/User:Atmilios
776 https://en.wikipedia.org/wiki/User:AtomCrusher
777 https://en.wikipedia.org/wiki/User:Atomicbeachball
778 https://en.wikipedia.org/wiki/User:Atoponce
779 https://en.wikipedia.org/w/index.php%3ftitle=User:Atthaphong&action=edit&redlink=1
780 https://en.wikipedia.org/w/index.php%3ftitle=User:Atulsingh7890&action=edit&redlink=1
781 https://en.wikipedia.org/wiki/User:Audriusa
782 https://en.wikipedia.org/wiki/User:Augurar
783 https://en.wikipedia.org/wiki/User:Auminski
784 https://en.wikipedia.org/wiki/User:Aureooms
785 https://en.wikipedia.org/wiki/User:Auric
786 https://en.wikipedia.org/w/index.php%3ftitle=User:Aurinegro~enwiki&action=edit&redlink=1
787 https://en.wikipedia.org/w/index.php%3ftitle=User:AustenWHead&action=edit&redlink=1
788 https://en.wikipedia.org/wiki/User:AustinBuchanan
789 https://en.wikipedia.org/wiki/User:Autumnalmonk
790 https://en.wikipedia.org/wiki/User:AvalerionV
791 https://en.wikipedia.org/wiki/User:Avatar

1 Avatarmonkeykirby792
1 Avaya1793
2 Avb794
1 Aveeare795
1 Averisk796
1 Avermapub797
1 Averruncus798
1 Avi1mizrahi799
2 AvicAWB800
2 Avichouhan1801
1 Avidarj802
1 Avihu803
1 Avisnacks804
2 Avkulkarni805
1 Avmr806
1 AvnishIT807
1 Avnjay808
4 AvocatoBot809
3 Awaterl810
1 Awesomeness811
2 Awu812
81 AxelBoldt813
1 Axl814
4 Axlrosen815
1 Axule816

792 https://en.wikipedia.org/w/index.php%3ftitle=User:Avatarmonkeykirby&action=edit&redlink=1
793 https://en.wikipedia.org/wiki/User:Avaya1
794 https://en.wikipedia.org/wiki/User:Avb
795 https://en.wikipedia.org/w/index.php%3ftitle=User:Aveeare&action=edit&redlink=1
796 https://en.wikipedia.org/w/index.php%3ftitle=User:Averisk&action=edit&redlink=1
797 https://en.wikipedia.org/w/index.php%3ftitle=User:Avermapub&action=edit&redlink=1
798 https://en.wikipedia.org/wiki/User:Averruncus
799 https://en.wikipedia.org/w/index.php%3ftitle=User:Avi1mizrahi&action=edit&redlink=1
800 https://en.wikipedia.org/wiki/User:AvicAWB
801 https://en.wikipedia.org/w/index.php%3ftitle=User:Avichouhan1&action=edit&redlink=1
802 https://en.wikipedia.org/wiki/User:Avidarj
803 https://en.wikipedia.org/wiki/User:Avihu
804 https://en.wikipedia.org/w/index.php%3ftitle=User:Avisnacks&action=edit&redlink=1
805 https://en.wikipedia.org/w/index.php%3ftitle=User:Avkulkarni&action=edit&redlink=1
806 https://en.wikipedia.org/w/index.php%3ftitle=User:Avmr&action=edit&redlink=1
807 https://en.wikipedia.org/wiki/User:AvnishIT
808 https://en.wikipedia.org/wiki/User:Avnjay
809 https://en.wikipedia.org/wiki/User:AvocatoBot
810 https://en.wikipedia.org/wiki/User:Awaterl
811 https://en.wikipedia.org/wiki/User:Awesomeness
812 https://en.wikipedia.org/wiki/User:Awu
813 https://en.wikipedia.org/wiki/User:AxelBoldt
814 https://en.wikipedia.org/wiki/User:Axl
815 https://en.wikipedia.org/wiki/User:Axlrosen
816 https://en.wikipedia.org/w/index.php%3ftitle=User:Axule&action=edit&redlink=1

4 AySz88817
1 Ayda D818
6 Aydee819
2 Ayman820
2 Ayushyogi821
2 Azhand1979822
1 Azimuth1823
1 Azndragon1987824
1 Azotlichid825
1 AzrgExplorers826
4 Azx0987827
1 B828
1 B.burton123829
1 B1993alram830
2 B3N831
5 B3virq3b832
7 B4hand833
8 B6s~enwiki834
17 BACbKA835
3 BAxelrod836
2 BB-Froggy837
1 BB6953838
61 BD2412839
97 BG19bot840
1 BHC841

817 https://en.wikipedia.org/wiki/User:AySz88
818 https://en.wikipedia.org/wiki/User:Ayda_D
819 https://en.wikipedia.org/wiki/User:Aydee
820 https://en.wikipedia.org/wiki/User:Ayman
821 https://en.wikipedia.org/w/index.php%3ftitle=User:Ayushyogi&action=edit&redlink=1
822 https://en.wikipedia.org/w/index.php%3ftitle=User:Azhand1979&action=edit&redlink=1
823 https://en.wikipedia.org/wiki/User:Azimuth1
824 https://en.wikipedia.org/w/index.php%3ftitle=User:Azndragon1987&action=edit&redlink=1
825 https://en.wikipedia.org/wiki/User:Azotlichid
826 https://en.wikipedia.org/w/index.php%3ftitle=User:AzrgExplorers&action=edit&redlink=1
827 https://en.wikipedia.org/w/index.php%3ftitle=User:Azx0987&action=edit&redlink=1
828 https://en.wikipedia.org/wiki/User:B
829 https://en.wikipedia.org/wiki/User:B.burton123
830 https://en.wikipedia.org/w/index.php%3ftitle=User:B1993alram&action=edit&redlink=1
831 https://en.wikipedia.org/wiki/User:B3N
832 https://en.wikipedia.org/wiki/User:B3virq3b
833 https://en.wikipedia.org/wiki/User:B4hand
834 https://en.wikipedia.org/wiki/User:B6s~enwiki
835 https://en.wikipedia.org/wiki/User:BACbKA
836 https://en.wikipedia.org/wiki/User:BAxelrod
837 https://en.wikipedia.org/wiki/User:BB-Froggy
838 https://en.wikipedia.org/w/index.php%3ftitle=User:BB6953&action=edit&redlink=1
839 https://en.wikipedia.org/wiki/User:BD2412
840 https://en.wikipedia.org/wiki/User:BG19bot
841 https://en.wikipedia.org/wiki/User:BHC

1 BIS Suma842
1 BKfi843
2 BMB844
4 BOT-Superzerocool845
3 BOTarate846
6 BPositive847
3 BRW848
8 B^4849
1 Ba2kell850
1 Baane247851
1 Baartvaark852
1 Baba Arouj853
3 Babak.barati854
2 BabbaQ855
2 Babbage856
4 Babitaarora857
1 Baby112342858
1 Baby123412859
1 Babymissfortune860
1 Backslash Forwardslash861
1 Bact862
1 Bad Romance863
1 Baden-Paul864
1 Baderyp865
1 Badtoothfairy866

842 https://en.wikipedia.org/wiki/User:BIS_Suma
843 https://en.wikipedia.org/wiki/User:BKfi
844 https://en.wikipedia.org/w/index.php%3ftitle=User:BMB&action=edit&redlink=1
845 https://en.wikipedia.org/wiki/User:BOT-Superzerocool
846 https://en.wikipedia.org/wiki/User:BOTarate
847 https://en.wikipedia.org/wiki/User:BPositive
848 https://en.wikipedia.org/wiki/User:BRW
849 https://en.wikipedia.org/wiki/User:B%255E4
850 https://en.wikipedia.org/w/index.php%3ftitle=User:Ba2kell&action=edit&redlink=1
851 https://en.wikipedia.org/wiki/User:Baane247
852 https://en.wikipedia.org/w/index.php%3ftitle=User:Baartvaark&action=edit&redlink=1
853 https://en.wikipedia.org/w/index.php%3ftitle=User:Baba_Arouj&action=edit&redlink=1
854 https://en.wikipedia.org/w/index.php%3ftitle=User:Babak.barati&action=edit&redlink=1
855 https://en.wikipedia.org/wiki/User:BabbaQ
856 https://en.wikipedia.org/wiki/User:Babbage
857 https://en.wikipedia.org/wiki/User:Babitaarora
858 https://en.wikipedia.org/w/index.php%3ftitle=User:Baby112342&action=edit&redlink=1
859 https://en.wikipedia.org/w/index.php%3ftitle=User:Baby123412&action=edit&redlink=1
860 https://en.wikipedia.org/wiki/User:Babymissfortune
861 https://en.wikipedia.org/wiki/User:Backslash_Forwardslash
862 https://en.wikipedia.org/wiki/User:Bact
863 https://en.wikipedia.org/w/index.php%3ftitle=User:Bad_Romance&action=edit&redlink=1
864 https://en.wikipedia.org/wiki/User:Baden-Paul
865 https://en.wikipedia.org/w/index.php%3ftitle=User:Baderyp&action=edit&redlink=1
866 https://en.wikipedia.org/wiki/User:Badtoothfairy

3 Baerentp867
2 Baeriivan868
1 Bagoto869
1 Bagsc870
2 Bahaabadi871
2 Baiyubin872
1 Baka toroi873
2 Bakanov874
14 Baking Soda875
9 Balabiot876
1 Balajiivish877
1 Balamud21878
1 Baliame879
1 Balloonguy880
5 Baltar, Gaius881
4 Balu.ertl882
2 Bammie73883
3 Bamyers99884
2 BananaFiend885
1 BananaLanguage886
1 Bananastalktome887
2 Banaticus888
2 BandW2011889
4 Banedon890
1 Bangsholt891

867 https://en.wikipedia.org/wiki/User:Baerentp
868 https://en.wikipedia.org/w/index.php%3ftitle=User:Baeriivan&action=edit&redlink=1
869 https://en.wikipedia.org/wiki/User:Bagoto
870 https://en.wikipedia.org/wiki/User:Bagsc
871 https://en.wikipedia.org/w/index.php%3ftitle=User:Bahaabadi&action=edit&redlink=1
872 https://en.wikipedia.org/w/index.php%3ftitle=User:Baiyubin&action=edit&redlink=1
873 https://en.wikipedia.org/w/index.php%3ftitle=User:Baka_toroi&action=edit&redlink=1
874 https://en.wikipedia.org/wiki/User:Bakanov
875 https://en.wikipedia.org/wiki/User:Baking_Soda
876 https://en.wikipedia.org/wiki/User:Balabiot
877 https://en.wikipedia.org/w/index.php%3ftitle=User:Balajiivish&action=edit&redlink=1
878 https://en.wikipedia.org/w/index.php%3ftitle=User:Balamud21&action=edit&redlink=1
879 https://en.wikipedia.org/w/index.php%3ftitle=User:Baliame&action=edit&redlink=1
880 https://en.wikipedia.org/wiki/User:Balloonguy
881 https://en.wikipedia.org/w/index.php%3ftitle=User:Baltar,_Gaius&action=edit&redlink=1
882 https://en.wikipedia.org/wiki/User:Balu.ertl
883 https://en.wikipedia.org/w/index.php%3ftitle=User:Bammie73&action=edit&redlink=1
884 https://en.wikipedia.org/wiki/User:Bamyers99
885 https://en.wikipedia.org/wiki/User:BananaFiend
886 https://en.wikipedia.org/wiki/User:BananaLanguage
887 https://en.wikipedia.org/wiki/User:Bananastalktome
888 https://en.wikipedia.org/wiki/User:Banaticus
889 https://en.wikipedia.org/wiki/User:BandW2011
890 https://en.wikipedia.org/w/index.php%3ftitle=User:Banedon&action=edit&redlink=1
891 https://en.wikipedia.org/wiki/User:Bangsholt

1 Barabum892
4 Barak Sh893
1 Barbarab111894
1 Barcex895
1 Barefootguru896
1 Barfooz897
1 Barlw898
1 Barolololo899
1 Barometz900
1 Baronjonas901
1 Barras902
1 BarrelProof903
2 BarretB904
3 BarroColorado905
1 BarryFruitman906
2 Bart Massey907
1 Bart133908
1 Bartledan909
5 Bartoron2910
1 Bartosz911
4 Baruneju912
1 BasVanSchaik913
3 Base698914
1 Basetarp915
1 Bassbonerocks916

892 https://en.wikipedia.org/wiki/User:Barabum
893 https://en.wikipedia.org/wiki/User:Barak_Sh
894 https://en.wikipedia.org/w/index.php%3ftitle=User:Barbarab111&action=edit&redlink=1
895 https://en.wikipedia.org/wiki/User:Barcex
896 https://en.wikipedia.org/wiki/User:Barefootguru
897 https://en.wikipedia.org/wiki/User:Barfooz
898 https://en.wikipedia.org/w/index.php%3ftitle=User:Barlw&action=edit&redlink=1
899 https://en.wikipedia.org/w/index.php%3ftitle=User:Barolololo&action=edit&redlink=1
900 https://en.wikipedia.org/wiki/User:Barometz
901 https://en.wikipedia.org/wiki/User:Baronjonas
902 https://en.wikipedia.org/wiki/User:Barras
903 https://en.wikipedia.org/wiki/User:BarrelProof
904 https://en.wikipedia.org/wiki/User:BarretB
905 https://en.wikipedia.org/wiki/User:BarroColorado
906 https://en.wikipedia.org/w/index.php%3ftitle=User:BarryFruitman&action=edit&redlink=1
907 https://en.wikipedia.org/wiki/User:Bart_Massey
908 https://en.wikipedia.org/w/index.php%3ftitle=User:Bart133&action=edit&redlink=1
909 https://en.wikipedia.org/wiki/User:Bartledan
910 https://en.wikipedia.org/wiki/User:Bartoron2
911 https://en.wikipedia.org/wiki/User:Bartosz
912 https://en.wikipedia.org/wiki/User:Baruneju
913 https://en.wikipedia.org/w/index.php%3ftitle=User:BasVanSchaik&action=edit&redlink=1
914 https://en.wikipedia.org/w/index.php%3ftitle=User:Base698&action=edit&redlink=1
915 https://en.wikipedia.org/w/index.php%3ftitle=User:Basetarp&action=edit&redlink=1
916 https://en.wikipedia.org/wiki/User:Bassbonerocks

1 Basstrekker87917
2 Bastetswarrior918
1 Bathysphere919
2 Batkins920
2 Battamer921
18 BattyBot922
2 Baudway923
1 Bavla924
7 Bayard~enwiki925
2 Bazonka926
2 Bazuz927
3 Bbb23928
3 Bbi5291929
7 Bbik930
1 Bbr88931
2 Bbukh932
3 Bcnof933
1 Bcopp.wiki934
1 Bcurcio87935
1 Bcward936
3 Bdawson1982937
3 Bdesham938
1 Bdj939
1 Bdragon940
1 BeLuckyDaf941

917 https://en.wikipedia.org/w/index.php%3ftitle=User:Basstrekker87&action=edit&redlink=1
918 https://en.wikipedia.org/w/index.php%3ftitle=User:Bastetswarrior&action=edit&redlink=1
919 https://en.wikipedia.org/w/index.php%3ftitle=User:Bathysphere&action=edit&redlink=1
920 https://en.wikipedia.org/wiki/User:Batkins
921 https://en.wikipedia.org/wiki/User:Battamer
922 https://en.wikipedia.org/wiki/User:BattyBot
923 https://en.wikipedia.org/w/index.php%3ftitle=User:Baudway&action=edit&redlink=1
924 https://en.wikipedia.org/w/index.php%3ftitle=User:Bavla&action=edit&redlink=1
925 https://en.wikipedia.org/w/index.php%3ftitle=User:Bayard~enwiki&action=edit&redlink=1
926 https://en.wikipedia.org/wiki/User:Bazonka
927 https://en.wikipedia.org/wiki/User:Bazuz
928 https://en.wikipedia.org/wiki/User:Bbb23
929 https://en.wikipedia.org/wiki/User:Bbi5291
930 https://en.wikipedia.org/wiki/User:Bbik
931 https://en.wikipedia.org/w/index.php%3ftitle=User:Bbr88&action=edit&redlink=1
932 https://en.wikipedia.org/w/index.php%3ftitle=User:Bbukh&action=edit&redlink=1
933 https://en.wikipedia.org/wiki/User:Bcnof
934 https://en.wikipedia.org/w/index.php%3ftitle=User:Bcopp.wiki&action=edit&redlink=1
935 https://en.wikipedia.org/w/index.php%3ftitle=User:Bcurcio87&action=edit&redlink=1
936 https://en.wikipedia.org/w/index.php%3ftitle=User:Bcward&action=edit&redlink=1
937 https://en.wikipedia.org/w/index.php%3ftitle=User:Bdawson1982&action=edit&redlink=1
938 https://en.wikipedia.org/wiki/User:Bdesham
939 https://en.wikipedia.org/w/index.php%3ftitle=User:Bdj&action=edit&redlink=1
940 https://en.wikipedia.org/wiki/User:Bdragon
941 https://en.wikipedia.org/w/index.php%3ftitle=User:BeLuckyDaf&action=edit&redlink=1

1 Bear-rings942
3 Bearcat943
1 Beardybloke944
1 Beatkist945
4 Beatle Fab Four946
2 Beau~enwiki947
1 Beb0s948
1 Bebbo1977949
1 Beda42950
1 Beefman951
1 Beekeepingschool952
1 Beenorgone953
8 Beetstra954
2 Behco955
1 Behnam956
1 BehnamFarid957
2 Beijingxilu958
29 Beland959
1 Bella Swan Wolf960
1 Bellerophon5685961
5 Bellowhead678962
19 Ben Standeven963
1 Ben pcc964
26 BenFrantzDale965
4 BenKovitz966

942 https://en.wikipedia.org/wiki/User:Bear-rings
943 https://en.wikipedia.org/wiki/User:Bearcat
944 https://en.wikipedia.org/wiki/User:Beardybloke
945 https://en.wikipedia.org/w/index.php%3ftitle=User:Beatkist&action=edit&redlink=1
946 https://en.wikipedia.org/wiki/User:Beatle_Fab_Four
947 https://en.wikipedia.org/w/index.php%3ftitle=User:Beau~enwiki&action=edit&redlink=1
948 https://en.wikipedia.org/w/index.php%3ftitle=User:Beb0s&action=edit&redlink=1
949 https://en.wikipedia.org/w/index.php%3ftitle=User:Bebbo1977&action=edit&redlink=1
950 https://en.wikipedia.org/wiki/User:Beda42
951 https://en.wikipedia.org/wiki/User:Beefman
952 https://en.wikipedia.org/w/index.php%3ftitle=User:Beekeepingschool&action=edit&redlink=1
953 https://en.wikipedia.org/w/index.php%3ftitle=User:Beenorgone&action=edit&redlink=1
954 https://en.wikipedia.org/wiki/User:Beetstra
955 https://en.wikipedia.org/wiki/User:Behco
956 https://en.wikipedia.org/wiki/User:Behnam
957 https://en.wikipedia.org/wiki/User:BehnamFarid
958 https://en.wikipedia.org/w/index.php%3ftitle=User:Beijingxilu&action=edit&redlink=1
959 https://en.wikipedia.org/wiki/User:Beland
960 https://en.wikipedia.org/w/index.php%3ftitle=User:Bella_Swan_Wolf&action=edit&redlink=1
961 https://en.wikipedia.org/wiki/User:Bellerophon5685
962 https://en.wikipedia.org/wiki/User:Bellowhead678
963 https://en.wikipedia.org/wiki/User:Ben_Standeven
964 https://en.wikipedia.org/wiki/User:Ben_pcc
965 https://en.wikipedia.org/wiki/User:BenFrantzDale
966 https://en.wikipedia.org/wiki/User:BenKovitz

8 BenRG967
1 Benash968
2 Benbread969
43 Bender the Bot970
1 Bender05971
59 Bender235972
1 Bender250973
73 Bender2k14974
1 Benehsv975
2 Benfab976
2 Bengski68977
1 Benhen1997978
1 Benhiller979
1 BeniBela980
1 Benja981
1 Benjis1999982
2 Benjohnbarnes983
1 Bennv3771984
2 Bensin985
3 Benson Muite986
1 Benstown987
1 Bento00988
1 Bentsm989
4 BenzolBot990
1 Berdidaine991

967 https://en.wikipedia.org/wiki/User:BenRG
968 https://en.wikipedia.org/w/index.php%3ftitle=User:Benash&action=edit&redlink=1
969 https://en.wikipedia.org/wiki/User:Benbread
970 https://en.wikipedia.org/wiki/User:Bender_the_Bot
971 https://en.wikipedia.org/wiki/User:Bender05
972 https://en.wikipedia.org/wiki/User:Bender235
973 https://en.wikipedia.org/w/index.php%3ftitle=User:Bender250&action=edit&redlink=1
974 https://en.wikipedia.org/wiki/User:Bender2k14
975 https://en.wikipedia.org/w/index.php%3ftitle=User:Benehsv&action=edit&redlink=1
976 https://en.wikipedia.org/w/index.php%3ftitle=User:Benfab&action=edit&redlink=1
977 https://en.wikipedia.org/wiki/User:Bengski68
978 https://en.wikipedia.org/wiki/User:Benhen1997
979 https://en.wikipedia.org/wiki/User:Benhiller
980 https://en.wikipedia.org/w/index.php%3ftitle=User:BeniBela&action=edit&redlink=1
981 https://en.wikipedia.org/w/index.php%3ftitle=User:Benja&action=edit&redlink=1
982 https://en.wikipedia.org/w/index.php%3ftitle=User:Benjis1999&action=edit&redlink=1
983 https://en.wikipedia.org/w/index.php%3ftitle=User:Benjohnbarnes&action=edit&redlink=1
984 https://en.wikipedia.org/wiki/User:Bennv3771
985 https://en.wikipedia.org/wiki/User:Bensin
986 https://en.wikipedia.org/wiki/User:Benson_Muite
987 https://en.wikipedia.org/wiki/User:Benstown
988 https://en.wikipedia.org/wiki/User:Bento00
989 https://en.wikipedia.org/w/index.php%3ftitle=User:Bentsm&action=edit&redlink=1
990 https://en.wikipedia.org/wiki/User:BenzolBot
991 https://en.wikipedia.org/wiki/User:Berdidaine

1 Bereajan992
4 Berean Hunter993
4 Bereziny994
4 Bergstra995
1 Berlinguyinca996
1 Bernard François997
1 Bernard Teo998
2 Bernard van der Wees999
6 BernardZ1000
5 BernardoSulzbach1001
2 Bernardopacheco1002
2 Berteun1003
1 Bertrc1004
2 Beryllium1005
1 Bestwolf19831006
2 Beta161007
1 Beta791008
2 BetacommandBot1009
6 BethNaught1010
7 Bethnim1011
2 Betseg1012
1 Betterusername1013
1 Beurocraticobama21014
1 Bevo1015
1 Bewildebeast1016

992 https://en.wikipedia.org/w/index.php%3ftitle=User:Bereajan&action=edit&redlink=1
993 https://en.wikipedia.org/wiki/User:Berean_Hunter
994 https://en.wikipedia.org/w/index.php%3ftitle=User:Bereziny&action=edit&redlink=1
995 https://en.wikipedia.org/w/index.php%3ftitle=User:Bergstra&action=edit&redlink=1
996 https://en.wikipedia.org/w/index.php%3ftitle=User:Berlinguyinca&action=edit&redlink=1
997 https://en.wikipedia.org/w/index.php%3ftitle=User:Bernard_Fran%25C3%25A7ois&action=edit&redlink=1
998 https://en.wikipedia.org/wiki/User:Bernard_Teo
999 https://en.wikipedia.org/w/index.php%3ftitle=User:Bernard_van_der_Wees&action=edit&redlink=1
1000 https://en.wikipedia.org/wiki/User:BernardZ
1001 https://en.wikipedia.org/wiki/User:BernardoSulzbach
1002 https://en.wikipedia.org/w/index.php%3ftitle=User:Bernardopacheco&action=edit&redlink=1
1003 https://en.wikipedia.org/wiki/User:Berteun
1004 https://en.wikipedia.org/wiki/User:Bertrc
1005 https://en.wikipedia.org/wiki/User:Beryllium
1006 https://en.wikipedia.org/wiki/User:Bestwolf1983
1007 https://en.wikipedia.org/wiki/User:Beta16
1008 https://en.wikipedia.org/w/index.php%3ftitle=User:Beta79&action=edit&redlink=1
1009 https://en.wikipedia.org/wiki/User:BetacommandBot
1010 https://en.wikipedia.org/wiki/User:BethNaught
1011 https://en.wikipedia.org/wiki/User:Bethnim
1012 https://en.wikipedia.org/wiki/User:Betseg
1013 https://en.wikipedia.org/wiki/User:Betterusername
1014 https://en.wikipedia.org/wiki/User:Beurocraticobama2
1015 https://en.wikipedia.org/wiki/User:Bevo
1016 https://en.wikipedia.org/wiki/User:Bewildebeast

1 Bexing1017
1 Bfinn1018
1 Bfishburne1019
1 Bfjf1020
5 Bg99891021
1 Bgmort1022
1 Bgrish651023
1 Bgruber1024
19 Bgwhite1025
1 Bharatshettybarkur1026
1 Bhaveshnande1027
1 Bhh19881028
1 Bhny1029
1 Bhunacat101030
8 BiT1031
23 Bibcode Bot1032
1 Bidabadi~enwiki1033
1 Biffta1034
2 BigDwiki1035
1 Bigmonachus1036
1 Bignose1037
2 Bigtimepeace1038
1 Bihco1039
2 Biker Biker1040
1 Bilal Hoor1041

1017 https://en.wikipedia.org/w/index.php%3ftitle=User:Bexing&action=edit&redlink=1
1018 https://en.wikipedia.org/wiki/User:Bfinn
1019 https://en.wikipedia.org/w/index.php%3ftitle=User:Bfishburne&action=edit&redlink=1
1020 https://en.wikipedia.org/w/index.php%3ftitle=User:Bfjf&action=edit&redlink=1
1021 https://en.wikipedia.org/w/index.php%3ftitle=User:Bg9989&action=edit&redlink=1
1022 https://en.wikipedia.org/w/index.php%3ftitle=User:Bgmort&action=edit&redlink=1
1023 https://en.wikipedia.org/w/index.php%3ftitle=User:Bgrish65&action=edit&redlink=1
1024 https://en.wikipedia.org/w/index.php%3ftitle=User:Bgruber&action=edit&redlink=1
1025 https://en.wikipedia.org/w/index.php%3ftitle=User:Bgwhite&action=edit&redlink=1
1026 https://en.wikipedia.org/w/index.php%3ftitle=User:Bharatshettybarkur&action=edit&redlink=1
1027 https://en.wikipedia.org/w/index.php%3ftitle=User:Bhaveshnande&action=edit&redlink=1
1028 https://en.wikipedia.org/w/index.php%3ftitle=User:Bhh1988&action=edit&redlink=1
1029 https://en.wikipedia.org/wiki/User:Bhny
1030 https://en.wikipedia.org/wiki/User:Bhunacat10
1031 https://en.wikipedia.org/wiki/User:BiT
1032 https://en.wikipedia.org/wiki/User:Bibcode_Bot
1033 https://en.wikipedia.org/w/index.php%3ftitle=User:Bidabadi~enwiki&action=edit&redlink=1
1034 https://en.wikipedia.org/w/index.php%3ftitle=User:Biffta&action=edit&redlink=1
1035 https://en.wikipedia.org/w/index.php%3ftitle=User:BigDwiki&action=edit&redlink=1
1036 https://en.wikipedia.org/wiki/User:Bigmonachus
1037 https://en.wikipedia.org/wiki/User:Bignose
1038 https://en.wikipedia.org/wiki/User:Bigtimepeace
1039 https://en.wikipedia.org/wiki/User:Bihco
1040 https://en.wikipedia.org/wiki/User:Biker_Biker
1041 https://en.wikipedia.org/w/index.php%3ftitle=User:Bilal_Hoor&action=edit&redlink=1

2 Bill wang12341042
1 Bill william compton1043
1 Billaaa1044
1 Billhpike1045
8 Billinghurst1046
1 Billyfrankjoe1047
4 Billylikeswikis1048
2 Bilorv1049
1 Binishjoshi1050
1 Bintu syam1051
2 BioPupil1052
1 BioTronic1053
1 Biografer1054
2 BiomolecularGraphics4All1055
1 Bipul Binni1056
1 Bird of paradox1057
2 Biruitorul1058
1 BitCrazed1059
1 Biyectivo1060
2 Bjarneh1061
1 BjarteSorensen1062
3 Bjorn Reese1063
2 Bjornson811064
1 Bjozen1065
12 Bjturedwind1066

1042 https://en.wikipedia.org/w/index.php%3ftitle=User:Bill_wang1234&action=edit&redlink=1
1043 https://en.wikipedia.org/wiki/User:Bill_william_compton
1044 https://en.wikipedia.org/w/index.php%3ftitle=User:Billaaa&action=edit&redlink=1
1045 https://en.wikipedia.org/wiki/User:Billhpike
1046 https://en.wikipedia.org/wiki/User:Billinghurst
1047 https://en.wikipedia.org/w/index.php%3ftitle=User:Billyfrankjoe&action=edit&redlink=1
1048 https://en.wikipedia.org/w/index.php%3ftitle=User:Billylikeswikis&action=edit&redlink=1
1049 https://en.wikipedia.org/wiki/User:Bilorv
1050 https://en.wikipedia.org/wiki/User:Binishjoshi
1051 https://en.wikipedia.org/w/index.php%3ftitle=User:Bintu_syam&action=edit&redlink=1
1052 https://en.wikipedia.org/wiki/User:BioPupil
1053 https://en.wikipedia.org/w/index.php%3ftitle=User:BioTronic&action=edit&redlink=1
1054 https://en.wikipedia.org/wiki/User:Biografer
1055 https://en.wikipedia.org/w/index.php%3ftitle=User:BiomolecularGraphics4All&action=edit&redlink=1
1056 https://en.wikipedia.org/w/index.php%3ftitle=User:Bipul_Binni&action=edit&redlink=1
1057 https://en.wikipedia.org/wiki/User:Bird_of_paradox
1058 https://en.wikipedia.org/wiki/User:Biruitorul
1059 https://en.wikipedia.org/w/index.php%3ftitle=User:BitCrazed&action=edit&redlink=1
1060 https://en.wikipedia.org/w/index.php%3ftitle=User:Biyectivo&action=edit&redlink=1
1061 https://en.wikipedia.org/wiki/User:Bjarneh
1062 https://en.wikipedia.org/wiki/User:BjarteSorensen
1063 https://en.wikipedia.org/w/index.php%3ftitle=User:Bjorn_Reese&action=edit&redlink=1
1064 https://en.wikipedia.org/w/index.php%3ftitle=User:Bjornson81&action=edit&redlink=1
1065 https://en.wikipedia.org/w/index.php%3ftitle=User:Bjozen&action=edit&redlink=1
1066 https://en.wikipedia.org/w/index.php%3ftitle=User:Bjturedwind&action=edit&redlink=1

1 Bk3141591067
45 Bkell1068
8 Bkkbrad1069
7 Black Falcon1070
1 BlackFingolfin1071
2 Blackbombchu1072
1 BlackholeWA1073
1 Blacksonne1074
2 Blahedo1075
1 Blair Azzopardi1076
7 Blaisorblade1077
1 Blake-1078
1 Blamelooseradiolabel1079
1 BlankVerse1080
1 Blashyrk1081
1 Blaxthos1082
1 Blazotron1083
5 BlckKnght1084
1 Bleakgadfly1085
2 Bledwith1086
2 Bliljerk1011087
1 BlitzGreg1088
1 Bloghog231089
1 Blogjack1090
13 Blokhead1091

1067 https://en.wikipedia.org/wiki/User:Bk314159
1068 https://en.wikipedia.org/wiki/User:Bkell
1069 https://en.wikipedia.org/wiki/User:Bkkbrad
1070 https://en.wikipedia.org/wiki/User:Black_Falcon
1071 https://en.wikipedia.org/wiki/User:BlackFingolfin
1072 https://en.wikipedia.org/wiki/User:Blackbombchu
1073 https://en.wikipedia.org/wiki/User:BlackholeWA
1074 https://en.wikipedia.org/w/index.php%3ftitle=User:Blacksonne&action=edit&redlink=1
1075 https://en.wikipedia.org/wiki/User:Blahedo
1076 https://en.wikipedia.org/w/index.php%3ftitle=User:Blair_Azzopardi&action=edit&redlink=1
1077 https://en.wikipedia.org/wiki/User:Blaisorblade
1078 https://en.wikipedia.org/wiki/User:Blake-
1079 https://en.wikipedia.org/wiki/User:Blamelooseradiolabel
1080 https://en.wikipedia.org/wiki/User:BlankVerse
1081 https://en.wikipedia.org/w/index.php%3ftitle=User:Blashyrk&action=edit&redlink=1
1082 https://en.wikipedia.org/wiki/User:Blaxthos
1083 https://en.wikipedia.org/wiki/User:Blazotron
1084 https://en.wikipedia.org/wiki/User:BlckKnght
1085 https://en.wikipedia.org/w/index.php%3ftitle=User:Bleakgadfly&action=edit&redlink=1
1086 https://en.wikipedia.org/w/index.php%3ftitle=User:Bledwith&action=edit&redlink=1
1087 https://en.wikipedia.org/w/index.php%3ftitle=User:Bliljerk101&action=edit&redlink=1
1088 https://en.wikipedia.org/wiki/User:BlitzGreg
1089 https://en.wikipedia.org/wiki/User:Bloghog23
1090 https://en.wikipedia.org/w/index.php%3ftitle=User:Blogjack&action=edit&redlink=1
1091 https://en.wikipedia.org/wiki/User:Blokhead


2 Bloodfox 11092
1 Bloodshedder1093
2 Blotwell1094
1 BlueFenixReborn1095
4 BlueNovember1096
1 Bluear1097
24 Bluebot1098
1 Bluechimera01099
1 Blueclaw1100
1 Bluefog681101
1 Bluegrass1102
3 Bluemoose1103
1 Bluerasberry1104
2 Blueshifting1105
1 Bluewaves1106
1 Bluhd1107
1 Bmanwiki1108
1 Bmatheny1109
1 Bmearns1110
8 Bmears111111
2 Bo Jacoby1112
2 Bo.chen.cool1113
1 Boatsdesk1114
1 Bob.v.R1115
1 Bob13121116

1092 https://en.wikipedia.org/w/index.php%3ftitle=User:Bloodfox_1&action=edit&redlink=1
1093 https://en.wikipedia.org/wiki/User:Bloodshedder
1094 https://en.wikipedia.org/wiki/User:Blotwell
1095 https://en.wikipedia.org/wiki/User:BlueFenixReborn
1096 https://en.wikipedia.org/wiki/User:BlueNovember
1097 https://en.wikipedia.org/wiki/User:Bluear
1098 https://en.wikipedia.org/wiki/User:Bluebot
1099 https://en.wikipedia.org/w/index.php%3ftitle=User:Bluechimera0&action=edit&redlink=1
1100 https://en.wikipedia.org/wiki/User:Blueclaw
1101 https://en.wikipedia.org/w/index.php%3ftitle=User:Bluefog68&action=edit&redlink=1
1102 https://en.wikipedia.org/w/index.php%3ftitle=User:Bluegrass&action=edit&redlink=1
1103 https://en.wikipedia.org/wiki/User:Bluemoose
1104 https://en.wikipedia.org/wiki/User:Bluerasberry
1105 https://en.wikipedia.org/w/index.php%3ftitle=User:Blueshifting&action=edit&redlink=1
1106 https://en.wikipedia.org/w/index.php%3ftitle=User:Bluewaves&action=edit&redlink=1
1107 https://en.wikipedia.org/wiki/User:Bluhd
1108 https://en.wikipedia.org/w/index.php%3ftitle=User:Bmanwiki&action=edit&redlink=1
1109 https://en.wikipedia.org/wiki/User:Bmatheny
1110 https://en.wikipedia.org/wiki/User:Bmearns
1111 https://en.wikipedia.org/wiki/User:Bmears11
1112 https://en.wikipedia.org/wiki/User:Bo_Jacoby
1113 https://en.wikipedia.org/w/index.php%3ftitle=User:Bo.chen.cool&action=edit&redlink=1
1114 https://en.wikipedia.org/wiki/User:Boatsdesk
1115 https://en.wikipedia.org/wiki/User:Bob.v.R
1116 https://en.wikipedia.org/w/index.php%3ftitle=User:Bob1312&action=edit&redlink=1


1 Bob1960evens1117
1 Bob3051118
3 Bob59721119
1 Bobblewik1120
1 Bobcat641121
8 Bobo1921122
1 Bobogoobo1123
3 Bobrayner1124
1 BodhisattvaBot1125
3 Boemanneke1126
2 Bofa1127
3 Boffob1128
1 Bogdan24121129
1 Bogdangiusca1130
1 Boggle33431131
1 Boing! said Zebedee1132
1 Bojannestorovic1133
4 Boky901134
1 BoldLuis1135
2 Bolero~enwiki1136
3 Boleslav Bobcik1137
1 Boleyn1138
2 Boleyn.su1139
1 Bollyjeff1140
1 Bomac1141

1117 https://en.wikipedia.org/wiki/User:Bob1960evens
1118 https://en.wikipedia.org/wiki/User:Bob305
1119 https://en.wikipedia.org/wiki/User:Bob5972
1120 https://en.wikipedia.org/wiki/User:Bobblewik
1121 https://en.wikipedia.org/w/index.php%3ftitle=User:Bobcat64&action=edit&redlink=1
1122 https://en.wikipedia.org/wiki/User:Bobo192
1123 https://en.wikipedia.org/wiki/User:Bobogoobo
1124 https://en.wikipedia.org/wiki/User:Bobrayner
1125 https://en.wikipedia.org/wiki/User:BodhisattvaBot
1126 https://en.wikipedia.org/w/index.php%3ftitle=User:Boemanneke&action=edit&redlink=1
1127 https://en.wikipedia.org/w/index.php%3ftitle=User:Bofa&action=edit&redlink=1
1128 https://en.wikipedia.org/wiki/User:Boffob
1129 https://en.wikipedia.org/w/index.php%3ftitle=User:Bogdan2412&action=edit&redlink=1
1130 https://en.wikipedia.org/wiki/User:Bogdangiusca
1131 https://en.wikipedia.org/w/index.php%3ftitle=User:Boggle3343&action=edit&redlink=1
1132 https://en.wikipedia.org/wiki/User:Boing!_said_Zebedee
1133 https://en.wikipedia.org/wiki/User:Bojannestorovic
1134 https://en.wikipedia.org/w/index.php%3ftitle=User:Boky90&action=edit&redlink=1
1135 https://en.wikipedia.org/wiki/User:BoldLuis
1136 https://en.wikipedia.org/wiki/User:Bolero~enwiki
1137 https://en.wikipedia.org/wiki/User:Boleslav_Bobcik
1138 https://en.wikipedia.org/wiki/User:Boleyn
1139 https://en.wikipedia.org/w/index.php%3ftitle=User:Boleyn.su&action=edit&redlink=1
1140 https://en.wikipedia.org/wiki/User:Bollyjeff
1141 https://en.wikipedia.org/wiki/User:Bomac


5 Bomazi1142
1 Bomberzocker1143
2 Bonadea1144
1 Boneill1145
2 Bongwarrior1146
4 Bonny TM1147
5 BonzaiThePenguin1148
1 Boom12345671149
1 BoomerAB1150
2 Boostage911151
1 Boothinator1152
1 Boothy4431153
1 Bootvis1154
2 Bor751155
1 Borat fan1156
2 Boredzo1157
1 Borgatts1158
5 Borgx1159
1 Borice1160
1 Borislav1161
3 Bosmon1162
1 Bot-Schafter1163
4 BotKung1164
11 Bota471165
1 Botaki1166

1142 https://en.wikipedia.org/wiki/User:Bomazi
1143 https://en.wikipedia.org/w/index.php%3ftitle=User:Bomberzocker&action=edit&redlink=1
1144 https://en.wikipedia.org/wiki/User:Bonadea
1145 https://en.wikipedia.org/w/index.php%3ftitle=User:Boneill&action=edit&redlink=1
1146 https://en.wikipedia.org/wiki/User:Bongwarrior
1147 https://en.wikipedia.org/wiki/User:Bonny_TM
1148 https://en.wikipedia.org/w/index.php%3ftitle=User:BonzaiThePenguin&action=edit&redlink=1
1149 https://en.wikipedia.org/w/index.php%3ftitle=User:Boom1234567&action=edit&redlink=1
1150 https://en.wikipedia.org/wiki/User:BoomerAB
1151 https://en.wikipedia.org/w/index.php%3ftitle=User:Boostage91&action=edit&redlink=1
1152 https://en.wikipedia.org/w/index.php%3ftitle=User:Boothinator&action=edit&redlink=1
1153 https://en.wikipedia.org/wiki/User:Boothy443
1154 https://en.wikipedia.org/wiki/User:Bootvis
1155 https://en.wikipedia.org/wiki/User:Bor75
1156 https://en.wikipedia.org/w/index.php%3ftitle=User:Borat_fan&action=edit&redlink=1
1157 https://en.wikipedia.org/wiki/User:Boredzo
1158 https://en.wikipedia.org/w/index.php%3ftitle=User:Borgatts&action=edit&redlink=1
1159 https://en.wikipedia.org/wiki/User:Borgx
1160 https://en.wikipedia.org/w/index.php%3ftitle=User:Borice&action=edit&redlink=1
1161 https://en.wikipedia.org/wiki/User:Borislav
1162 https://en.wikipedia.org/wiki/User:Bosmon
1163 https://en.wikipedia.org/wiki/User:Bot-Schafter
1164 https://en.wikipedia.org/wiki/User:BotKung
1165 https://en.wikipedia.org/wiki/User:Bota47
1166 https://en.wikipedia.org/wiki/User:Botaki


1 Bouke~enwiki1167
14 Boulevardier1168
1 Boute1169
2 BozMo1170
4 Bpapa21171
1 Bpeps1172
1 Bpkmadden1173
1 Bporopat1174
1 Bputman1175
1 Bracton1176
10 Brad77771177
4 BradAustin21178
1 Braddjwinter1179
2 Bradleykuszmaul1180
1 Bradwindy1181
1 Bradyoung011182
1 Brahle1183
1 Braincricket1184
1 Braindrain00001185
1 Brainix1186
1 Brainresearch1187
3 Brainulator91188
1 Brambleclawx1189
1 Bramdj1190
3 BranStark1191

1167 https://en.wikipedia.org/wiki/User:Bouke~enwiki
1168 https://en.wikipedia.org/wiki/User:Boulevardier
1169 https://en.wikipedia.org/wiki/User:Boute
1170 https://en.wikipedia.org/wiki/User:BozMo
1171 https://en.wikipedia.org/w/index.php%3ftitle=User:Bpapa2&action=edit&redlink=1
1172 https://en.wikipedia.org/w/index.php%3ftitle=User:Bpeps&action=edit&redlink=1
1173 https://en.wikipedia.org/w/index.php%3ftitle=User:Bpkmadden&action=edit&redlink=1
1174 https://en.wikipedia.org/w/index.php%3ftitle=User:Bporopat&action=edit&redlink=1
1175 https://en.wikipedia.org/w/index.php%3ftitle=User:Bputman&action=edit&redlink=1
1176 https://en.wikipedia.org/wiki/User:Bracton
1177 https://en.wikipedia.org/wiki/User:Brad7777
1178 https://en.wikipedia.org/w/index.php%3ftitle=User:BradAustin2&action=edit&redlink=1
1179 https://en.wikipedia.org/w/index.php%3ftitle=User:Braddjwinter&action=edit&redlink=1
1180 https://en.wikipedia.org/w/index.php%3ftitle=User:Bradleykuszmaul&action=edit&redlink=1
1181 https://en.wikipedia.org/wiki/User:Bradwindy
1182 https://en.wikipedia.org/w/index.php%3ftitle=User:Bradyoung01&action=edit&redlink=1
1183 https://en.wikipedia.org/w/index.php%3ftitle=User:Brahle&action=edit&redlink=1
1184 https://en.wikipedia.org/wiki/User:Braincricket
1185 https://en.wikipedia.org/wiki/User:Braindrain0000
1186 https://en.wikipedia.org/wiki/User:Brainix
1187 https://en.wikipedia.org/w/index.php%3ftitle=User:Brainresearch&action=edit&redlink=1
1188 https://en.wikipedia.org/wiki/User:Brainulator9
1189 https://en.wikipedia.org/wiki/User:Brambleclawx
1190 https://en.wikipedia.org/w/index.php%3ftitle=User:Bramdj&action=edit&redlink=1
1191 https://en.wikipedia.org/wiki/User:BranStark


1 BrandonLeeRichards1192
1 Brandonrhodes1193
1 Branonm1194
1 Braphael1195
1 Bravandi21196
2 Brayan Jaimes1197
2 Brazzy1198
2 Breawycker1199
1 Brendanl791200
3 Breno1201
1 Brent Gulanowski1202
1 Brett Alexander Hunter1203
15 Brewhaha@edmc.net1204
4 Brews ohare1205
3 Bri1206
1 Brian Gunderson1207
1 Brian Kerwin Williams1208
2 Brian09181209
1 BrianRice1210
2 BrianWilloughby1211
2 Brianbjparker1212
8 Briandamgaard1213
1 Brianga1214
3 Brianjd1215

1192 https://en.wikipedia.org/w/index.php%3ftitle=User:BrandonLeeRichards&action=edit&redlink=1
1193 https://en.wikipedia.org/w/index.php%3ftitle=User:Brandonrhodes&action=edit&redlink=1
1194 https://en.wikipedia.org/w/index.php%3ftitle=User:Branonm&action=edit&redlink=1
1195 https://en.wikipedia.org/w/index.php%3ftitle=User:Braphael&action=edit&redlink=1
1196 https://en.wikipedia.org/w/index.php%3ftitle=User:Bravandi2&action=edit&redlink=1
1197 https://en.wikipedia.org/wiki/User:Brayan_Jaimes
1198 https://en.wikipedia.org/wiki/User:Brazzy
1199 https://en.wikipedia.org/wiki/User:Breawycker
1200 https://en.wikipedia.org/wiki/User:Brendanl79
1201 https://en.wikipedia.org/wiki/User:Breno
1202 https://en.wikipedia.org/wiki/User:Brent_Gulanowski
1203 https://en.wikipedia.org/w/index.php%3ftitle=User:Brett_Alexander_Hunter&action=edit&redlink=1
1204 https://en.wikipedia.org/w/index.php%3ftitle=User:Brewhaha@edmc.net&action=edit&redlink=1
1205 https://en.wikipedia.org/wiki/User:Brews_ohare
1206 https://en.wikipedia.org/wiki/User:Bri
1207 https://en.wikipedia.org/wiki/User:Brian_Gunderson
1208 https://en.wikipedia.org/w/index.php%3ftitle=User:Brian_Kerwin_Williams&action=edit&redlink=1
1209 https://en.wikipedia.org/wiki/User:Brian0918
1210 https://en.wikipedia.org/wiki/User:BrianRice
1211 https://en.wikipedia.org/wiki/User:BrianWilloughby
1212 https://en.wikipedia.org/w/index.php%3ftitle=User:Brianbjparker&action=edit&redlink=1
1213 https://en.wikipedia.org/w/index.php%3ftitle=User:Briandamgaard&action=edit&redlink=1
1214 https://en.wikipedia.org/wiki/User:Brianga
1215 https://en.wikipedia.org/wiki/User:Brianjd


2 Briansdumb1216
1 Brianv1217
4 Brick Thrower1218
3 Brickbeard1219
1 BridgelessQiu1220
2 BrightR1221
3 Brighterorange1222
2 Brightgalrs1223
2 BringsVictory1224
2 Brion VIBBER1225
2 Brirush1226
1 Britcruise1227
1 British Potato1228
1 Bro1229
3 Broadbot1230
17 BrokenSegue1231
1 Broketheinterweb1232
1 Bromistaguy1233
67 Brona1234
4 Brookie1235
7 BrotherE1236
4 BrownHairedGirl1237
1 Brownsteve1238
1 Bruce Lee's Relative Chin1239
2 Bruce1ee1240

1216 https://en.wikipedia.org/w/index.php%3ftitle=User:Briansdumb&action=edit&redlink=1
1217 https://en.wikipedia.org/w/index.php%3ftitle=User:Brianv&action=edit&redlink=1
1218 https://en.wikipedia.org/wiki/User:Brick_Thrower
1219 https://en.wikipedia.org/wiki/User:Brickbeard
1220 https://en.wikipedia.org/w/index.php%3ftitle=User:BridgelessQiu&action=edit&redlink=1
1221 https://en.wikipedia.org/wiki/User:BrightR
1222 https://en.wikipedia.org/wiki/User:Brighterorange
1223 https://en.wikipedia.org/wiki/User:Brightgalrs
1224 https://en.wikipedia.org/w/index.php%3ftitle=User:BringsVictory&action=edit&redlink=1
1225 https://en.wikipedia.org/wiki/User:Brion_VIBBER
1226 https://en.wikipedia.org/wiki/User:Brirush
1227 https://en.wikipedia.org/wiki/User:Britcruise
1228 https://en.wikipedia.org/w/index.php%3ftitle=User:British_Potato&action=edit&redlink=1
1229 https://en.wikipedia.org/wiki/User:Bro
1230 https://en.wikipedia.org/wiki/User:Broadbot
1231 https://en.wikipedia.org/wiki/User:BrokenSegue
1232 https://en.wikipedia.org/wiki/User:Broketheinterweb
1233 https://en.wikipedia.org/wiki/User:Bromistaguy
1234 https://en.wikipedia.org/wiki/User:Brona
1235 https://en.wikipedia.org/wiki/User:Brookie
1236 https://en.wikipedia.org/wiki/User:BrotherE
1237 https://en.wikipedia.org/wiki/User:BrownHairedGirl
1238 https://en.wikipedia.org/wiki/User:Brownsteve
1239 https://en.wikipedia.org/wiki/User:Bruce_Lee%2527s_Relative_Chin
1240 https://en.wikipedia.org/wiki/User:Bruce1ee


1 Brucevdk1241
1 Bruno Unna1242
2 Brunodantas31243
2 Brusegadi1244
2 Bruyninc~enwiki1245
1 Brw121246
11 Bryan Derksen1247
1 BryghtShadow1248
1 Bsadowski11249
1 Bsdaemon1250
2 Bse31251
2 Bsilverthorn1252
1 Bsotomay1253
1 Bte991254
1 Btw order1255
2 Btwied1256
5 Btyner1257
4 BuZZdEE.BuzZ1258
1 Buaagg1259
73 Bubba731260
1 Bubble snipe1261
1 Buchibaba1262
1 Bucketsofg1263
2 Budalla1264
2 Buddhikaeport1265

1241 https://en.wikipedia.org/wiki/User:Brucevdk
1242 https://en.wikipedia.org/wiki/User:Bruno_Unna
1243 https://en.wikipedia.org/w/index.php%3ftitle=User:Brunodantas3&action=edit&redlink=1
1244 https://en.wikipedia.org/wiki/User:Brusegadi
1245 https://en.wikipedia.org/w/index.php%3ftitle=User:Bruyninc~enwiki&action=edit&redlink=1
1246 https://en.wikipedia.org/wiki/User:Brw12
1247 https://en.wikipedia.org/wiki/User:Bryan_Derksen
1248 https://en.wikipedia.org/wiki/User:BryghtShadow
1249 https://en.wikipedia.org/wiki/User:Bsadowski1
1250 https://en.wikipedia.org/wiki/User:Bsdaemon
1251 https://en.wikipedia.org/wiki/User:Bse3
1252 https://en.wikipedia.org/wiki/User:Bsilverthorn
1253 https://en.wikipedia.org/wiki/User:Bsotomay
1254 https://en.wikipedia.org/wiki/User:Bte99
1255 https://en.wikipedia.org/wiki/User:Btw_order
1256 https://en.wikipedia.org/wiki/User:Btwied
1257 https://en.wikipedia.org/wiki/User:Btyner
1258 https://en.wikipedia.org/w/index.php%3ftitle=User:BuZZdEE.BuzZ&action=edit&redlink=1
1259 https://en.wikipedia.org/w/index.php%3ftitle=User:Buaagg&action=edit&redlink=1
1260 https://en.wikipedia.org/wiki/User:Bubba73
1261 https://en.wikipedia.org/w/index.php%3ftitle=User:Bubble_snipe&action=edit&redlink=1
1262 https://en.wikipedia.org/wiki/User:Buchibaba
1263 https://en.wikipedia.org/wiki/User:Bucketsofg
1264 https://en.wikipedia.org/w/index.php%3ftitle=User:Budalla&action=edit&redlink=1
1265 https://en.wikipedia.org/wiki/User:Buddhikaeport


2 Buehrenm781266
1 Bughunter21267
1 Bug~enwiki1268
1 Bulgaria mitko1269
2 Bumbulski1270
1 Bunnyhop111271
2 Buptcharlie1272
2 Burakov1273
1 Burburger01274
1 Burkenyo1275
4 Burn1276
1 BurntSky1277
1 Burschik1278
1 BurtAlert1279
1 Bushytails1280
1 Busimus1281
2 Buster791282
2 Butros1283
2 Butterwell1284
4 Bvbellomo1285
5 BwDraco1286
1 Bwabes1287
1 Bwegs141288
1 Bweilz1289
5 Byhoe1290

1266 https://en.wikipedia.org/w/index.php%3ftitle=User:Buehrenm78&action=edit&redlink=1
1267 https://en.wikipedia.org/wiki/User:Bughunter2
1268 https://en.wikipedia.org/w/index.php%3ftitle=User:Bug~enwiki&action=edit&redlink=1
1269 https://en.wikipedia.org/w/index.php%3ftitle=User:Bulgaria_mitko&action=edit&redlink=1
1270 https://en.wikipedia.org/wiki/User:Bumbulski
1271 https://en.wikipedia.org/wiki/User:Bunnyhop11
1272 https://en.wikipedia.org/w/index.php%3ftitle=User:Buptcharlie&action=edit&redlink=1
1273 https://en.wikipedia.org/w/index.php%3ftitle=User:Burakov&action=edit&redlink=1
1274 https://en.wikipedia.org/wiki/User:Burburger0
1275 https://en.wikipedia.org/wiki/User:Burkenyo
1276 https://en.wikipedia.org/wiki/User:Burn
1277 https://en.wikipedia.org/wiki/User:BurntSky
1278 https://en.wikipedia.org/wiki/User:Burschik
1279 https://en.wikipedia.org/w/index.php%3ftitle=User:BurtAlert&action=edit&redlink=1
1280 https://en.wikipedia.org/wiki/User:Bushytails
1281 https://en.wikipedia.org/w/index.php%3ftitle=User:Busimus&action=edit&redlink=1
1282 https://en.wikipedia.org/wiki/User:Buster79
1283 https://en.wikipedia.org/wiki/User:Butros
1284 https://en.wikipedia.org/w/index.php%3ftitle=User:Butterwell&action=edit&redlink=1
1285 https://en.wikipedia.org/w/index.php%3ftitle=User:Bvbellomo&action=edit&redlink=1
1286 https://en.wikipedia.org/wiki/User:BwDraco
1287 https://en.wikipedia.org/wiki/User:Bwabes
1288 https://en.wikipedia.org/w/index.php%3ftitle=User:Bwegs14&action=edit&redlink=1
1289 https://en.wikipedia.org/wiki/User:Bweilz
1290 https://en.wikipedia.org/wiki/User:Byhoe


2 Byrnedj121291
2 Bzorro1292
4 C S1293
12 C. A. Russell1294
2 C. Siebert~enwiki1295
9 C. lorenz1296
7 C.Fred1297
5 C.LloydHuggins1298
2 C0dergirl1299
1 C452071300
3 C4K31301
1 C4chandu1302
12 C7protal1303
2 CALR1304
12 CAPTAIN RAJU1305
32 CBKAtTopsails1306
2 CBM21307
1 CFCF1308
2 CGamesPlay1309
14 CJLL Wright1310
1 CKlunck1311
17 CLCStudent1312
1 COBot1313
108 CRGreathouse1314
2 CSWarren1315

1291 https://en.wikipedia.org/w/index.php%3ftitle=User:Byrnedj12&action=edit&redlink=1
1292 https://en.wikipedia.org/w/index.php%3ftitle=User:Bzorro&action=edit&redlink=1
1293 https://en.wikipedia.org/wiki/User:C_S
1294 https://en.wikipedia.org/wiki/User:C._A._Russell
1295 https://en.wikipedia.org/w/index.php%3ftitle=User:C._Siebert~enwiki&action=edit&redlink=1
1296 https://en.wikipedia.org/wiki/User:C._lorenz
1297 https://en.wikipedia.org/wiki/User:C.Fred
1298 https://en.wikipedia.org/w/index.php%3ftitle=User:C.LloydHuggins&action=edit&redlink=1
1299 https://en.wikipedia.org/w/index.php%3ftitle=User:C0dergirl&action=edit&redlink=1
1300 https://en.wikipedia.org/wiki/User:C45207
1301 https://en.wikipedia.org/wiki/User:C4K3
1302 https://en.wikipedia.org/wiki/User:C4chandu
1303 https://en.wikipedia.org/w/index.php%3ftitle=User:C7protal&action=edit&redlink=1
1304 https://en.wikipedia.org/wiki/User:CALR
1305 https://en.wikipedia.org/wiki/User:CAPTAIN_RAJU
1306 https://en.wikipedia.org/wiki/User:CBKAtTopsails
1307 https://en.wikipedia.org/wiki/User:CBM2
1308 https://en.wikipedia.org/wiki/User:CFCF
1309 https://en.wikipedia.org/w/index.php%3ftitle=User:CGamesPlay&action=edit&redlink=1
1310 https://en.wikipedia.org/wiki/User:CJLL_Wright
1311 https://en.wikipedia.org/wiki/User:CKlunck
1312 https://en.wikipedia.org/wiki/User:CLCStudent
1313 https://en.wikipedia.org/wiki/User:COBot
1314 https://en.wikipedia.org/wiki/User:CRGreathouse
1315 https://en.wikipedia.org/wiki/User:CSWarren


2 CV99331316
2 CWenger1317
1 CWii1318
1 CYD1319
1 Cab.jones1320
5 Cachedio1321
1 Cacophony1322
2 Cactus.man1323
3 Cadillac0001324
15 Caesura1325
1 Caiaffa1326
1 Cairomax1327
2 Cal 12341328
2 Calabe19921329
3 Calbaer1330
4 Calculuslover1331
1 Calculuslover8001332
1 Calixte1333
1 Callanecc1334
1 Callistan1335
1 Caltas1336
1 Calwiki1337
3 CambridgeBayWeather1338
3 Camembert1339
2 Camilord1340

1316 https://en.wikipedia.org/wiki/User:CV9933
1317 https://en.wikipedia.org/wiki/User:CWenger
1318 https://en.wikipedia.org/w/index.php%3ftitle=User:CWii&action=edit&redlink=1
1319 https://en.wikipedia.org/wiki/User:CYD
1320 https://en.wikipedia.org/wiki/User:Cab.jones
1321 https://en.wikipedia.org/wiki/User:Cachedio
1322 https://en.wikipedia.org/wiki/User:Cacophony
1323 https://en.wikipedia.org/wiki/User:Cactus.man
1324 https://en.wikipedia.org/wiki/User:Cadillac000
1325 https://en.wikipedia.org/wiki/User:Caesura
1326 https://en.wikipedia.org/wiki/User:Caiaffa
1327 https://en.wikipedia.org/w/index.php%3ftitle=User:Cairomax&action=edit&redlink=1
1328 https://en.wikipedia.org/wiki/User:Cal_1234
1329 https://en.wikipedia.org/wiki/User:Calabe1992
1330 https://en.wikipedia.org/wiki/User:Calbaer
1331 https://en.wikipedia.org/wiki/User:Calculuslover
1332 https://en.wikipedia.org/w/index.php%3ftitle=User:Calculuslover800&action=edit&redlink=1
1333 https://en.wikipedia.org/wiki/User:Calixte
1334 https://en.wikipedia.org/wiki/User:Callanecc
1335 https://en.wikipedia.org/w/index.php%3ftitle=User:Callistan&action=edit&redlink=1
1336 https://en.wikipedia.org/wiki/User:Caltas
1337 https://en.wikipedia.org/w/index.php%3ftitle=User:Calwiki&action=edit&redlink=1
1338 https://en.wikipedia.org/wiki/User:CambridgeBayWeather
1339 https://en.wikipedia.org/wiki/User:Camembert
1340 https://en.wikipedia.org/w/index.php%3ftitle=User:Camilord&action=edit&redlink=1


1 Camipco1341
1 Camsbury1342
8 Camw1343
4 Can't sleep, clown will eat me1344
2 CanOfWorms1345
4 CanadianLinuxUser1346
2 Canavalia1347
7 CanisRufus1348
1 Cannolis1349
2 Canon1350
1 Canopus Grandiflora1351
3 Cant google me1352
1 Cantalamessa1353
2 Canthusus1354
2 Canuckian891355
1 Canyq1356
1 Caot~enwiki1357
2 CapitalR1358
5 Capitalist1359
5 Capricorn421360
2 Captain Fortran1361
1 Captain Segfault1362
1 Captain-n00dle1363
1 CaptainEek1364
2 Captainfranz1365

1341 https://en.wikipedia.org/wiki/User:Camipco
1342 https://en.wikipedia.org/w/index.php%3ftitle=User:Camsbury&action=edit&redlink=1
1343 https://en.wikipedia.org/wiki/User:Camw
1344 https://en.wikipedia.org/wiki/User:Can%2527t_sleep,_clown_will_eat_me
1345 https://en.wikipedia.org/w/index.php%3ftitle=User:CanOfWorms&action=edit&redlink=1
1346 https://en.wikipedia.org/wiki/User:CanadianLinuxUser
1347 https://en.wikipedia.org/w/index.php%3ftitle=User:Canavalia&action=edit&redlink=1
1348 https://en.wikipedia.org/wiki/User:CanisRufus
1349 https://en.wikipedia.org/wiki/User:Cannolis
1350 https://en.wikipedia.org/wiki/User:Canon
1351 https://en.wikipedia.org/w/index.php%3ftitle=User:Canopus_Grandiflora&action=edit&redlink=1
1352 https://en.wikipedia.org/wiki/User:Cant_google_me
1353 https://en.wikipedia.org/wiki/User:Cantalamessa
1354 https://en.wikipedia.org/wiki/User:Canthusus
1355 https://en.wikipedia.org/wiki/User:Canuckian89
1356 https://en.wikipedia.org/wiki/User:Canyq
1357 https://en.wikipedia.org/w/index.php%3ftitle=User:Caot~enwiki&action=edit&redlink=1
1358 https://en.wikipedia.org/wiki/User:CapitalR
1359 https://en.wikipedia.org/wiki/User:Capitalist
1360 https://en.wikipedia.org/wiki/User:Capricorn42
1361 https://en.wikipedia.org/wiki/User:Captain_Fortran
1362 https://en.wikipedia.org/wiki/User:Captain_Segfault
1363 https://en.wikipedia.org/wiki/User:Captain-n00dle
1364 https://en.wikipedia.org/wiki/User:CaptainEek
1365 https://en.wikipedia.org/w/index.php%3ftitle=User:Captainfranz&action=edit&redlink=1


1 Caramdir~enwiki1366
2 Carbo12001367
1 Cardiff.Shirley.19951368
1 CardinalDan1369
10 Carey Evans1370
3 Carl2981371
1 CarlH1372
2 CarlManaster1373
1 Carlette1374
1 Carlj71375
1 CarlosHoyos~enwiki1376
1 CarlosJiménezSánchezC5121377
2 Carlschroedl1378
2 Carlwitt1379
2 Carmichael1380
6 CarminPolitano1381
2 Carmitsp1382
1 CaroleHenson1383
1 CarpeCerevisi1384
1 Carribeiro1385
1 Carrot official1386
5 CarsracBot1387
1 Carter1388
1 CarterFendley1389
1 Casbah~enwiki1390

1366 https://en.wikipedia.org/wiki/User:Caramdir~enwiki
1367 https://en.wikipedia.org/w/index.php%3ftitle=User:Carbo1200&action=edit&redlink=1
1368 https://en.wikipedia.org/w/index.php%3ftitle=User:Cardiff.Shirley.1995&action=edit&redlink=1
1369 https://en.wikipedia.org/wiki/User:CardinalDan
1370 https://en.wikipedia.org/wiki/User:Carey_Evans
1371 https://en.wikipedia.org/w/index.php%3ftitle=User:Carl298&action=edit&redlink=1
1372 https://en.wikipedia.org/w/index.php%3ftitle=User:CarlH&action=edit&redlink=1
1373 https://en.wikipedia.org/wiki/User:CarlManaster
1374 https://en.wikipedia.org/w/index.php%3ftitle=User:Carlette&action=edit&redlink=1
1375 https://en.wikipedia.org/wiki/User:Carlj7
1376 https://en.wikipedia.org/wiki/User:CarlosHoyos~enwiki
1377 https://en.wikipedia.org/w/index.php%3ftitle=User:CarlosJim%25C3%25A9nezS%25C3%25A1nchezC512&action=edit&redlink=1
1378 https://en.wikipedia.org/wiki/User:Carlschroedl
1379 https://en.wikipedia.org/w/index.php%3ftitle=User:Carlwitt&action=edit&redlink=1
1380 https://en.wikipedia.org/wiki/User:Carmichael
1381 https://en.wikipedia.org/wiki/User:CarminPolitano
1382 https://en.wikipedia.org/w/index.php%3ftitle=User:Carmitsp&action=edit&redlink=1
1383 https://en.wikipedia.org/wiki/User:CaroleHenson
1384 https://en.wikipedia.org/w/index.php%3ftitle=User:CarpeCerevisi&action=edit&redlink=1
1385 https://en.wikipedia.org/wiki/User:Carribeiro
1386 https://en.wikipedia.org/wiki/User:Carrot_official
1387 https://en.wikipedia.org/wiki/User:CarsracBot
1388 https://en.wikipedia.org/wiki/User:Carter
1389 https://en.wikipedia.org/w/index.php%3ftitle=User:CarterFendley&action=edit&redlink=1
1390 https://en.wikipedia.org/w/index.php%3ftitle=User:Casbah~enwiki&action=edit&redlink=1


2 Casperbp1391
1 Cassiopeia1392
1 Casted1393
1 Cat Parade1394
1 Cataxxx1395
2 Catbar1396
1 Cate1397
1 Categulario1398
1 Catlemur1399
2 Catskul1400
3 Catslash1401
1 Causa sui1402
1 Cazort1403
1 Cbane1404
1 Cbarlow31405
6 Cburnett1406
31 Ccalmen1407
1 Ccn1408
2 Cdhio1409
2 Cdiggins1410
1 Cdvr19931411
1 Ceacy1412
1 Cebus1413
2 CecilWard1414
117 Cedar1011415

1391 https://en.wikipedia.org/wiki/User:Casperbp
1392 https://en.wikipedia.org/wiki/User:Cassiopeia
1393 https://en.wikipedia.org/w/index.php%3ftitle=User:Casted&action=edit&redlink=1
1394 https://en.wikipedia.org/wiki/User:Cat_Parade
1395 https://en.wikipedia.org/w/index.php%3ftitle=User:Cataxxx&action=edit&redlink=1
1396 https://en.wikipedia.org/wiki/User:Catbar
1397 https://en.wikipedia.org/wiki/User:Cate
1398 https://en.wikipedia.org/w/index.php%3ftitle=User:Categulario&action=edit&redlink=1
1399 https://en.wikipedia.org/wiki/User:Catlemur
1400 https://en.wikipedia.org/wiki/User:Catskul
1401 https://en.wikipedia.org/w/index.php%3ftitle=User:Catslash&action=edit&redlink=1
1402 https://en.wikipedia.org/wiki/User:Causa_sui
1403 https://en.wikipedia.org/wiki/User:Cazort
1404 https://en.wikipedia.org/wiki/User:Cbane
1405 https://en.wikipedia.org/wiki/User:Cbarlow3
1406 https://en.wikipedia.org/wiki/User:Cburnett
1407 https://en.wikipedia.org/w/index.php%3ftitle=User:Ccalmen&action=edit&redlink=1
1408 https://en.wikipedia.org/wiki/User:Ccn
1409 https://en.wikipedia.org/wiki/User:Cdhio
1410 https://en.wikipedia.org/w/index.php%3ftitle=User:Cdiggins&action=edit&redlink=1
1411 https://en.wikipedia.org/w/index.php%3ftitle=User:Cdvr1993&action=edit&redlink=1
1412 https://en.wikipedia.org/w/index.php%3ftitle=User:Ceacy&action=edit&redlink=1
1413 https://en.wikipedia.org/w/index.php%3ftitle=User:Cebus&action=edit&redlink=1
1414 https://en.wikipedia.org/wiki/User:CecilWard
1415 https://en.wikipedia.org/w/index.php%3ftitle=User:Cedar101&action=edit&redlink=1


5 Cedgar231416
1 Cedric dlb1417
4 Cema1418
1 Cembirler1419
2 CentralTime3011420
5 Centrx1421
4 Cerabot~enwiki1422
1 Cerasti1423
1 Cerber1424
1 Cerebellum1425
1 Cerniagigante1426
15 Ceros1427
8 Certes1428
2 Certicom1429
17 CesarB1430
1 Cesarramsan1431
2 Cesarsorm~enwiki1432
1 Ceyockey1433
1 Cf. Hay1434
1 Cfailde1435
1 Cfeet771436
2 Cffk1437
1 Cgma1438
1 Chadernook1439
1 Chadhutchins101440

1416 https://en.wikipedia.org/wiki/User:Cedgar23
1417 https://en.wikipedia.org/wiki/User:Cedric_dlb
1418 https://en.wikipedia.org/wiki/User:Cema
1419 https://en.wikipedia.org/w/index.php%3ftitle=User:Cembirler&action=edit&redlink=1
1420 https://en.wikipedia.org/wiki/User:CentralTime301
1421 https://en.wikipedia.org/wiki/User:Centrx
1422 https://en.wikipedia.org/wiki/User:Cerabot~enwiki
1423 https://en.wikipedia.org/w/index.php%3ftitle=User:Cerasti&action=edit&redlink=1
1424 https://en.wikipedia.org/wiki/User:Cerber
1425 https://en.wikipedia.org/wiki/User:Cerebellum
1426 https://en.wikipedia.org/wiki/User:Cerniagigante
1427 https://en.wikipedia.org/wiki/User:Ceros
1428 https://en.wikipedia.org/wiki/User:Certes
1429 https://en.wikipedia.org/wiki/User:Certicom
1430 https://en.wikipedia.org/wiki/User:CesarB
1431 https://en.wikipedia.org/w/index.php%3ftitle=User:Cesarramsan&action=edit&redlink=1
1432 https://en.wikipedia.org/wiki/User:Cesarsorm~enwiki
1433 https://en.wikipedia.org/wiki/User:Ceyockey
1434 https://en.wikipedia.org/wiki/User:Cf._Hay
1435 https://en.wikipedia.org/wiki/User:Cfailde
1436 https://en.wikipedia.org/w/index.php%3ftitle=User:Cfeet77&action=edit&redlink=1
1437 https://en.wikipedia.org/wiki/User:Cffk
1438 https://en.wikipedia.org/w/index.php%3ftitle=User:Cgma&action=edit&redlink=1
1439 https://en.wikipedia.org/w/index.php%3ftitle=User:Chadernook&action=edit&redlink=1
1440 https://en.wikipedia.org/w/index.php%3ftitle=User:Chadhutchins10&action=edit&redlink=1


2 Chadyoung1441
1 Chadzii241442
6 Chalst1443
1 Chameleon1444
7 ChamithN1445
1 Chamoisblanc1446
1 Chandhooguy1447
2 ChandlerMapBot1448
1 ChandraKarChandra1449
1 Chandraguptamaurya1450
1 Chandres1451
2 ChangChienFu1452
1 ChanningWalton1453
2 Channonnahc1454
1 Chao Xu1455
1 Chaohuang1456
1 Chaojoker1457
6 Chaos50231458
2 ChaosCon1459
1 Chaosdruid1460
67 Charles Matthews1461
2 Charles yiming1462
1 CharlesC1463
12 CharlesGillingham1464
3 CharlesHBennett1465

1441 https://en.wikipedia.org/wiki/User:Chadyoung
1442 https://en.wikipedia.org/w/index.php%3ftitle=User:Chadzii24&action=edit&redlink=1
1443 https://en.wikipedia.org/wiki/User:Chalst
1444 https://en.wikipedia.org/wiki/User:Chameleon
1445 https://en.wikipedia.org/wiki/User:ChamithN
1446 https://en.wikipedia.org/w/index.php%3ftitle=User:Chamoisblanc&action=edit&redlink=1
1447 https://en.wikipedia.org/w/index.php%3ftitle=User:Chandhooguy&action=edit&redlink=1
1448 https://en.wikipedia.org/wiki/User:ChandlerMapBot
1449 https://en.wikipedia.org/w/index.php%3ftitle=User:ChandraKarChandra&action=edit&redlink=1
1450 https://en.wikipedia.org/w/index.php%3ftitle=User:Chandraguptamaurya&action=edit&redlink=1
1451 https://en.wikipedia.org/wiki/User:Chandres
1452 https://en.wikipedia.org/wiki/User:ChangChienFu
1453 https://en.wikipedia.org/wiki/User:ChanningWalton
1454 https://en.wikipedia.org/w/index.php%3ftitle=User:Channonnahc&action=edit&redlink=1
1455 https://en.wikipedia.org/wiki/User:Chao_Xu
1456 https://en.wikipedia.org/w/index.php%3ftitle=User:Chaohuang&action=edit&redlink=1
1457 https://en.wikipedia.org/wiki/User:Chaojoker
1458 https://en.wikipedia.org/wiki/User:Chaos5023
1459 https://en.wikipedia.org/w/index.php%3ftitle=User:ChaosCon&action=edit&redlink=1
1460 https://en.wikipedia.org/wiki/User:Chaosdruid
1461 https://en.wikipedia.org/wiki/User:Charles_Matthews
1462 https://en.wikipedia.org/w/index.php%3ftitle=User:Charles_yiming&action=edit&redlink=1
1463 https://en.wikipedia.org/wiki/User:CharlesC
1464 https://en.wikipedia.org/wiki/User:CharlesGillingham
1465 https://en.wikipedia.org/wiki/User:CharlesHBennett


2 Charlotte4561231466
1 CharlotteWebb1467
1 Charmoniumq1468
23 Charvest1469
8 Chas zzz brown1470
1 ChaseKR1471
1 ChazBeckett1472
2 Chealer1473
11 Cheater no11474
1 Checkhelps6661475
2 Checkingfax1476
1 Ched1477
2 Cheeser11478
1 Cheethdj1479
1 Cheezycrust1480
1 Cheezykins1481
2 Chehabz1482
1 Chenglongjiang1483
8 Chenopodiaceous1484
4 Chenxiaoqino1485
5 Cherkash1486
1 Chery1487
1 Chester Markel1488
1 Chet Gray1489
1 Chibby0ne1490

1466 https://en.wikipedia.org/w/index.php%3ftitle=User:Charlotte456123&action=edit&redlink=1
1467 https://en.wikipedia.org/wiki/User:CharlotteWebb
1468 https://en.wikipedia.org/wiki/User:Charmoniumq
1469 https://en.wikipedia.org/wiki/User:Charvest
1470 https://en.wikipedia.org/wiki/User:Chas_zzz_brown
1471 https://en.wikipedia.org/wiki/User:ChaseKR
1472 https://en.wikipedia.org/wiki/User:ChazBeckett
1473 https://en.wikipedia.org/wiki/User:Chealer
1474 https://en.wikipedia.org/wiki/User:Cheater_no1
1475 https://en.wikipedia.org/w/index.php%3ftitle=User:Checkhelps666&action=edit&redlink=1
1476 https://en.wikipedia.org/wiki/User:Checkingfax
1477 https://en.wikipedia.org/wiki/User:Ched
1478 https://en.wikipedia.org/wiki/User:Cheeser1
1479 https://en.wikipedia.org/w/index.php%3ftitle=User:Cheethdj&action=edit&redlink=1
1480 https://en.wikipedia.org/w/index.php%3ftitle=User:Cheezycrust&action=edit&redlink=1
1481 https://en.wikipedia.org/wiki/User:Cheezykins
1482 https://en.wikipedia.org/w/index.php%3ftitle=User:Chehabz&action=edit&redlink=1
1483 https://en.wikipedia.org/w/index.php%3ftitle=User:Chenglongjiang&action=edit&redlink=1
1484 https://en.wikipedia.org/wiki/User:Chenopodiaceous
1485 https://en.wikipedia.org/wiki/User:Chenxiaoqino
1486 https://en.wikipedia.org/w/index.php%3ftitle=User:Cherkash&action=edit&redlink=1
1487 https://en.wikipedia.org/wiki/User:Chery
1488 https://en.wikipedia.org/wiki/User:Chester_Markel
1489 https://en.wikipedia.org/wiki/User:Chet_Gray
1490 https://en.wikipedia.org/wiki/User:Chibby0ne

42 Chiefhuggybear1491
3 ChildofMidnight1492
16 Chinju1493
1 Chip Wildon Forster1494
4 Chipchap1495
1 Chirag zyro1496
1 Chire1497
2 Chlewbot1498
3 Chmarkine1499
2 Chmod0071500
1 Chmod6441501
57 Chobot1502
1 Chochopk1503
2 Chocolateboy1504
5 Choess1505
3 Chompy Ace1506
12 Chongkian1507
2 Choor monster1508
1 Chopchopwhitey1509
1 Chopstickles1510
4 Chowbok1511
1 Chricho1512
12 Chris Capoccia1513
1 Chris G1514
2 Chris Kirov1515

1491 https://en.wikipedia.org/wiki/User:Chiefhuggybear
1492 https://en.wikipedia.org/wiki/User:ChildofMidnight
1493 https://en.wikipedia.org/wiki/User:Chinju
1494 https://en.wikipedia.org/wiki/User:Chip_Wildon_Forster
1495 https://en.wikipedia.org/w/index.php%3ftitle=User:Chipchap&action=edit&redlink=1
1496 https://en.wikipedia.org/w/index.php%3ftitle=User:Chirag_zyro&action=edit&redlink=1
1497 https://en.wikipedia.org/wiki/User:Chire
1498 https://en.wikipedia.org/wiki/User:Chlewbot
1499 https://en.wikipedia.org/w/index.php%3ftitle=User:Chmarkine&action=edit&redlink=1
1500 https://en.wikipedia.org/wiki/User:Chmod007
1501 https://en.wikipedia.org/w/index.php%3ftitle=User:Chmod644&action=edit&redlink=1
1502 https://en.wikipedia.org/wiki/User:Chobot
1503 https://en.wikipedia.org/wiki/User:Chochopk
1504 https://en.wikipedia.org/wiki/User:Chocolateboy
1505 https://en.wikipedia.org/wiki/User:Choess
1506 https://en.wikipedia.org/wiki/User:Chompy_Ace
1507 https://en.wikipedia.org/wiki/User:Chongkian
1508 https://en.wikipedia.org/w/index.php%3ftitle=User:Choor_monster&action=edit&redlink=1
1509 https://en.wikipedia.org/wiki/User:Chopchopwhitey
1510 https://en.wikipedia.org/wiki/User:Chopstickles
1511 https://en.wikipedia.org/wiki/User:Chowbok
1512 https://en.wikipedia.org/wiki/User:Chricho
1513 https://en.wikipedia.org/wiki/User:Chris_Capoccia
1514 https://en.wikipedia.org/wiki/User:Chris_G
1515 https://en.wikipedia.org/w/index.php%3ftitle=User:Chris_Kirov&action=edit&redlink=1

1 Chris Lundberg1516
1 Chris Peikert1517
1 Chris Pressey1518
22 Chris the speller1519
1 Chris-gore1520
79 Chris-martin1521
1 Chris551522
1 ChrisCork1523
1 ChrisForno1524
12 ChrisGualtieri1525
5 Chrisahn1526
1 Chrischan~enwiki1527
1 Chrisdone1528
1 Chrisfeilbach1529
2 Chrisjrn1530
1 Chrisjwmartin1531
1 Chrislk021532
1 Chrism1533
2 Chrismcevoy1534
1 Christan801535
1 Christer.berg1536
1 Christian.fritz1537
4 Christian751538
1 Christofpaar1539

1516 https://en.wikipedia.org/w/index.php%3ftitle=User:Chris_Lundberg&action=edit&redlink=1
1517 https://en.wikipedia.org/wiki/User:Chris_Peikert
1518 https://en.wikipedia.org/w/index.php%3ftitle=User:Chris_Pressey&action=edit&redlink=1
1519 https://en.wikipedia.org/wiki/User:Chris_the_speller
1520 https://en.wikipedia.org/wiki/User:Chris-gore
1521 https://en.wikipedia.org/wiki/User:Chris-martin
1522 https://en.wikipedia.org/wiki/User:Chris55
1523 https://en.wikipedia.org/wiki/User:ChrisCork
1524 https://en.wikipedia.org/wiki/User:ChrisForno
1525 https://en.wikipedia.org/wiki/User:ChrisGualtieri
1526 https://en.wikipedia.org/wiki/User:Chrisahn
1527 https://en.wikipedia.org/w/index.php%3ftitle=User:Chrischan~enwiki&action=edit&redlink=1
1528 https://en.wikipedia.org/wiki/User:Chrisdone
1529 https://en.wikipedia.org/w/index.php%3ftitle=User:Chrisfeilbach&action=edit&redlink=1
1530 https://en.wikipedia.org/wiki/User:Chrisjrn
1531 https://en.wikipedia.org/wiki/User:Chrisjwmartin
1532 https://en.wikipedia.org/wiki/User:Chrislk02
1533 https://en.wikipedia.org/wiki/User:Chrism
1534 https://en.wikipedia.org/wiki/User:Chrismcevoy
1535 https://en.wikipedia.org/w/index.php%3ftitle=User:Christan80&action=edit&redlink=1
1536 https://en.wikipedia.org/w/index.php%3ftitle=User:Christer.berg&action=edit&redlink=1
1537 https://en.wikipedia.org/w/index.php%3ftitle=User:Christian.fritz&action=edit&redlink=1
1538 https://en.wikipedia.org/wiki/User:Christian75
1539 https://en.wikipedia.org/w/index.php%3ftitle=User:Christofpaar&action=edit&redlink=1

3 Christoph Dürr1540
1 ChristophE1541
1 Christopher E. Thompson1542
3 Christopher Parham1543
2 Christopher19681544
1 Christopherlin1545
1 Chrisvls1546
1 Chronolegion1547
1 Chrzęszczyboczek1548
1 Chsahit1549
1 Chub~enwiki1550
1 Chuck Simmons1551
1 Chuckmasterrhymes1552
18 ChuispastonBot1553
1 Chutzpan1554
1 Chuunen Baka1555
3 Chval01556
1 Chzz1557
174 CiaPan1558
1 Ciaccona1559
1 Ciacho101560
5 Cic1561
2 Cicconetti1562
1 Cimon Avaro1563
6 Cincoutprabu1564

1540 https://en.wikipedia.org/w/index.php%3ftitle=User:Christoph_D%25C3%25BCrr&action=edit&redlink=1
1541 https://en.wikipedia.org/wiki/User:ChristophE
1542 https://en.wikipedia.org/wiki/User:Christopher_E._Thompson
1543 https://en.wikipedia.org/wiki/User:Christopher_Parham
1544 https://en.wikipedia.org/wiki/User:Christopher1968
1545 https://en.wikipedia.org/wiki/User:Christopherlin
1546 https://en.wikipedia.org/wiki/User:Chrisvls
1547 https://en.wikipedia.org/wiki/User:Chronolegion
1548 https://en.wikipedia.org/w/index.php%3ftitle=User:Chrz%25C4%2599szczyboczek&action=edit&redlink=1
1549 https://en.wikipedia.org/w/index.php%3ftitle=User:Chsahit&action=edit&redlink=1
1550 https://en.wikipedia.org/w/index.php%3ftitle=User:Chub~enwiki&action=edit&redlink=1
1551 https://en.wikipedia.org/wiki/User:Chuck_Simmons
1552 https://en.wikipedia.org/w/index.php%3ftitle=User:Chuckmasterrhymes&action=edit&redlink=1
1553 https://en.wikipedia.org/wiki/User:ChuispastonBot
1554 https://en.wikipedia.org/wiki/User:Chutzpan
1555 https://en.wikipedia.org/wiki/User:Chuunen_Baka
1556 https://en.wikipedia.org/w/index.php%3ftitle=User:Chval0&action=edit&redlink=1
1557 https://en.wikipedia.org/wiki/User:Chzz
1558 https://en.wikipedia.org/wiki/User:CiaPan
1559 https://en.wikipedia.org/wiki/User:Ciaccona
1560 https://en.wikipedia.org/wiki/User:Ciacho10
1561 https://en.wikipedia.org/w/index.php%3ftitle=User:Cic&action=edit&redlink=1
1562 https://en.wikipedia.org/w/index.php%3ftitle=User:Cicconetti&action=edit&redlink=1
1563 https://en.wikipedia.org/wiki/User:Cimon_Avaro
1564 https://en.wikipedia.org/w/index.php%3ftitle=User:Cincoutprabu&action=edit&redlink=1

1 Cinfa781565
1 Cipher10241566
1 Ciphergoth1567
6 Ciphers1568
1 Cirala281569
7 Circular171570
231 Citation bot1571
69 Citation bot 11572
18 CitationCleanerBot1573
2 Citizen Canine1574
3 Citrus5381575
1 Civil Engineer III1576
1 Cjsmith.us1577
2 Cjtonde1578
1 Ck lostsword1579
1 Ckere1580
2 Ckorff1581
2 Ckplato1582
1 Cl2ha1583
4 Clacker1584
2 Clancybufton1585
2 Clancy~enwiki1586
2 Clangin1587
1 ClansOfIntrigue1588
5 Classicalecon1589

1565 https://en.wikipedia.org/w/index.php%3ftitle=User:Cinfa78&action=edit&redlink=1
1566 https://en.wikipedia.org/w/index.php%3ftitle=User:Cipher1024&action=edit&redlink=1
1567 https://en.wikipedia.org/wiki/User:Ciphergoth
1568 https://en.wikipedia.org/wiki/User:Ciphers
1569 https://en.wikipedia.org/w/index.php%3ftitle=User:Cirala28&action=edit&redlink=1
1570 https://en.wikipedia.org/wiki/User:Circular17
1571 https://en.wikipedia.org/wiki/User:Citation_bot
1572 https://en.wikipedia.org/wiki/User:Citation_bot_1
1573 https://en.wikipedia.org/wiki/User:CitationCleanerBot
1574 https://en.wikipedia.org/wiki/User:Citizen_Canine
1575 https://en.wikipedia.org/wiki/User:Citrus538
1576 https://en.wikipedia.org/wiki/User:Civil_Engineer_III
1577 https://en.wikipedia.org/wiki/User:Cjsmith.us
1578 https://en.wikipedia.org/w/index.php%3ftitle=User:Cjtonde&action=edit&redlink=1
1579 https://en.wikipedia.org/wiki/User:Ck_lostsword
1580 https://en.wikipedia.org/wiki/User:Ckere
1581 https://en.wikipedia.org/w/index.php%3ftitle=User:Ckorff&action=edit&redlink=1
1582 https://en.wikipedia.org/w/index.php%3ftitle=User:Ckplato&action=edit&redlink=1
1583 https://en.wikipedia.org/w/index.php%3ftitle=User:Cl2ha&action=edit&redlink=1
1584 https://en.wikipedia.org/w/index.php%3ftitle=User:Clacker&action=edit&redlink=1
1585 https://en.wikipedia.org/w/index.php%3ftitle=User:Clancybufton&action=edit&redlink=1
1586 https://en.wikipedia.org/wiki/User:Clancy~enwiki
1587 https://en.wikipedia.org/wiki/User:Clangin
1588 https://en.wikipedia.org/w/index.php%3ftitle=User:ClansOfIntrigue&action=edit&redlink=1
1589 https://en.wikipedia.org/wiki/User:Classicalecon

1 Clayart-terracotta1590
2 Claygate1591
1 Clayrat1592
1 Cleancutkid1593
1 CleanupService1594
3 Clecio~enwiki1595
2 Clementi1596
1 Clementina1597
3 Clemmy1598
1 Clerkva1599
1 Cliff smith1600
1 Cliffb1601
1 ClockworkSoul1602
11 Closedmouth1603
2 Cloud2001604
2 CloudNine1605
2 Cloudjpk1606
1 Cloudrunner1607
77 ClueBot1608
508 ClueBot NG1609
3 Clx3211610
1 Cmapku1611
5 Cmbay1612
1 Cmcfarland1613
10 CmdrObot1614

1590 https://en.wikipedia.org/w/index.php%3ftitle=User:Clayart-terracotta&action=edit&redlink=1
1591 https://en.wikipedia.org/wiki/User:Claygate
1592 https://en.wikipedia.org/wiki/User:Clayrat
1593 https://en.wikipedia.org/wiki/User:Cleancutkid
1594 https://en.wikipedia.org/w/index.php%3ftitle=User:CleanupService&action=edit&redlink=1
1595 https://en.wikipedia.org/w/index.php%3ftitle=User:Clecio~enwiki&action=edit&redlink=1
1596 https://en.wikipedia.org/wiki/User:Clementi
1597 https://en.wikipedia.org/wiki/User:Clementina
1598 https://en.wikipedia.org/wiki/User:Clemmy
1599 https://en.wikipedia.org/w/index.php%3ftitle=User:Clerkva&action=edit&redlink=1
1600 https://en.wikipedia.org/wiki/User:Cliff_smith
1601 https://en.wikipedia.org/wiki/User:Cliffb
1602 https://en.wikipedia.org/wiki/User:ClockworkSoul
1603 https://en.wikipedia.org/wiki/User:Closedmouth
1604 https://en.wikipedia.org/wiki/User:Cloud200
1605 https://en.wikipedia.org/wiki/User:CloudNine
1606 https://en.wikipedia.org/w/index.php%3ftitle=User:Cloudjpk&action=edit&redlink=1
1607 https://en.wikipedia.org/wiki/User:Cloudrunner
1608 https://en.wikipedia.org/wiki/User:ClueBot
1609 https://en.wikipedia.org/wiki/User:ClueBot_NG
1610 https://en.wikipedia.org/w/index.php%3ftitle=User:Clx321&action=edit&redlink=1
1611 https://en.wikipedia.org/w/index.php%3ftitle=User:Cmapku&action=edit&redlink=1
1612 https://en.wikipedia.org/wiki/User:Cmbay
1613 https://en.wikipedia.org/wiki/User:Cmcfarland
1614 https://en.wikipedia.org/wiki/User:CmdrObot

1733
Contributors

1 Cmdrjameson1615
11 Cmglee1616
2 Cmskog1617
1 Cncmaster1618
19 Cngoulimis1619
1 Cnoonphd1620
1 Cnwilliams1621
5 CobaltBlue1622
1 Cobaltbluetony1623
2 Cobblet1624
8 Cobi1625
1 Coblin1626
1 Cochito~enwiki1627
1 Cocoaghost1628
1 Cocohead7811629
3 Coconut75941630
2 CocuBot1631
3 CodeHive1632
2 Codecrux1633
1 Codemaker20151634
3 Codeman381635
1 Codethinkers1636
1 Coding.mike1637
1 Codwiki1638
1 Coemgenus1639

1615 https://en.wikipedia.org/wiki/User:Cmdrjameson
1616 https://en.wikipedia.org/wiki/User:Cmglee
1617 https://en.wikipedia.org/w/index.php%3ftitle=User:Cmskog&action=edit&redlink=1
1618 https://en.wikipedia.org/wiki/User:Cncmaster
1619 https://en.wikipedia.org/wiki/User:Cngoulimis
1620 https://en.wikipedia.org/w/index.php%3ftitle=User:Cnoonphd&action=edit&redlink=1
1621 https://en.wikipedia.org/wiki/User:Cnwilliams
1622 https://en.wikipedia.org/wiki/User:CobaltBlue
1623 https://en.wikipedia.org/wiki/User:Cobaltbluetony
1624 https://en.wikipedia.org/wiki/User:Cobblet
1625 https://en.wikipedia.org/wiki/User:Cobi
1626 https://en.wikipedia.org/wiki/User:Coblin
1627 https://en.wikipedia.org/w/index.php%3ftitle=User:Cochito~enwiki&action=edit&redlink=1
1628 https://en.wikipedia.org/w/index.php%3ftitle=User:Cocoaghost&action=edit&redlink=1
1629 https://en.wikipedia.org/wiki/User:Cocohead781
1630 https://en.wikipedia.org/w/index.php%3ftitle=User:Coconut7594&action=edit&redlink=1
1631 https://en.wikipedia.org/wiki/User:CocuBot
1632 https://en.wikipedia.org/w/index.php%3ftitle=User:CodeHive&action=edit&redlink=1
1633 https://en.wikipedia.org/wiki/User:Codecrux
1634 https://en.wikipedia.org/wiki/User:Codemaker2015
1635 https://en.wikipedia.org/wiki/User:Codeman38
1636 https://en.wikipedia.org/wiki/User:Codethinkers
1637 https://en.wikipedia.org/w/index.php%3ftitle=User:Coding.mike&action=edit&redlink=1
1638 https://en.wikipedia.org/wiki/User:Codwiki
1639 https://en.wikipedia.org/wiki/User:Coemgenus

2 Coffee1640
2 Coffee2theorems1641
1 Coffeepusher1642
1 Cogdat1643
1 CohesionBot1644
2 Cointyro1645
1 Colapeninsula1646
1 ColdFusion6501647
1 ColdShine1648
3 Coldfire821649
2 ColdthroatEskimo1650
2 Cole Kitchen1651
2 Colemanfoley1652
3 Colfer21653
2 Colfulus1654
3 Colin Barrett1655
1 Colin Greene1656
9 Colin M1657
1 Colinb1658
1 Collin Stocks1659
22 Colonies Chris1660
1 Colonna1661
1 Colossus13~enwiki1662
1 CombAuc1663
3 Cometstyles1664

1640 https://en.wikipedia.org/wiki/User:Coffee
1641 https://en.wikipedia.org/wiki/User:Coffee2theorems
1642 https://en.wikipedia.org/wiki/User:Coffeepusher
1643 https://en.wikipedia.org/w/index.php%3ftitle=User:Cogdat&action=edit&redlink=1
1644 https://en.wikipedia.org/wiki/User:CohesionBot
1645 https://en.wikipedia.org/wiki/User:Cointyro
1646 https://en.wikipedia.org/wiki/User:Colapeninsula
1647 https://en.wikipedia.org/wiki/User:ColdFusion650
1648 https://en.wikipedia.org/wiki/User:ColdShine
1649 https://en.wikipedia.org/wiki/User:Coldfire82
1650 https://en.wikipedia.org/wiki/User:ColdthroatEskimo
1651 https://en.wikipedia.org/w/index.php%3ftitle=User:Cole_Kitchen&action=edit&redlink=1
1652 https://en.wikipedia.org/w/index.php%3ftitle=User:Colemanfoley&action=edit&redlink=1
1653 https://en.wikipedia.org/wiki/User:Colfer2
1654 https://en.wikipedia.org/wiki/User:Colfulus
1655 https://en.wikipedia.org/wiki/User:Colin_Barrett
1656 https://en.wikipedia.org/wiki/User:Colin_Greene
1657 https://en.wikipedia.org/wiki/User:Colin_M
1658 https://en.wikipedia.org/wiki/User:Colinb
1659 https://en.wikipedia.org/wiki/User:Collin_Stocks
1660 https://en.wikipedia.org/wiki/User:Colonies_Chris
1661 https://en.wikipedia.org/w/index.php%3ftitle=User:Colonna&action=edit&redlink=1
1662 https://en.wikipedia.org/w/index.php%3ftitle=User:Colossus13~enwiki&action=edit&redlink=1
1663 https://en.wikipedia.org/w/index.php%3ftitle=User:CombAuc&action=edit&redlink=1
1664 https://en.wikipedia.org/wiki/User:Cometstyles

2 Commander Keane bot1665
4 CommonsDelinker1666
8 Comocomocomocomo1667
16 Comp.arch1668
1 Comp1231669
1 Compfreak71670
2 Compie1671
1 Composingliger1672
2 Compotatoj1673
6 Compsim1674
1 Computilizer1675
1 Compynerd2551676
1 Comrade0091677
1 ConceptExp1678
1 ConceptuallyComplex1679
2 Concubine1191680
3 Congml1681
6 Connelly1682
1 Connor mezza1683
1 Connormulcahey1684
1 Constructive editor1685
1 ContemporaryOne1686
1 ContentNerd1687
1 Contrasedative1688

1665 https://en.wikipedia.org/wiki/User:Commander_Keane_bot
1666 https://en.wikipedia.org/wiki/User:CommonsDelinker
1667 https://en.wikipedia.org/w/index.php%3ftitle=User:Comocomocomocomo&action=edit&redlink=1
1668 https://en.wikipedia.org/wiki/User:Comp.arch
1669 https://en.wikipedia.org/w/index.php%3ftitle=User:Comp123&action=edit&redlink=1
1670 https://en.wikipedia.org/wiki/User:Compfreak7
1671 https://en.wikipedia.org/w/index.php%3ftitle=User:Compie&action=edit&redlink=1
1672 https://en.wikipedia.org/wiki/User:Composingliger
1673 https://en.wikipedia.org/wiki/User:Compotatoj
1674 https://en.wikipedia.org/w/index.php%3ftitle=User:Compsim&action=edit&redlink=1
1675 https://en.wikipedia.org/w/index.php%3ftitle=User:Computilizer&action=edit&redlink=1
1676 https://en.wikipedia.org/w/index.php%3ftitle=User:Compynerd255&action=edit&redlink=1
1677 https://en.wikipedia.org/wiki/User:Comrade009
1678 https://en.wikipedia.org/w/index.php%3ftitle=User:ConceptExp&action=edit&redlink=1
1679 https://en.wikipedia.org/w/index.php%3ftitle=User:ConceptuallyComplex&action=edit&redlink=1
1680 https://en.wikipedia.org/w/index.php%3ftitle=User:Concubine119&action=edit&redlink=1
1681 https://en.wikipedia.org/w/index.php%3ftitle=User:Congml&action=edit&redlink=1
1682 https://en.wikipedia.org/wiki/User:Connelly
1683 https://en.wikipedia.org/wiki/User:Connor_mezza
1684 https://en.wikipedia.org/w/index.php%3ftitle=User:Connormulcahey&action=edit&redlink=1
1685 https://en.wikipedia.org/wiki/User:Constructive_editor
1686 https://en.wikipedia.org/w/index.php%3ftitle=User:ContemporaryOne&action=edit&redlink=1
1687 https://en.wikipedia.org/wiki/User:ContentNerd
1688 https://en.wikipedia.org/w/index.php%3ftitle=User:Contrasedative&action=edit&redlink=1

26 Conversion script1689
1 Cookie4869~enwiki1690
1 CoolieCoolster1691
1 Copyeditor421692
1 Copysan1693
1 CoralisTree1694
3 Coralmizu1695
1 Coreydragon1696
1 Corn cheese1697
1 Cornflake pirate1698
1 Corpx1699
1 CorrectKissinTime1700
2 Correction box1701
11 Corti1702
3 Corwinjoy1703
1 Coshoi1704
1 CosineKitty1705
1 Cosmi1706
1 Cosmia Nebula1707
3 Cosmic Clouds1708
1 Coulson21709
3 CountMacula1710
1 CounterVandalismBot1711
6 CountingPine1712
1 Courcelles1713

1689 https://en.wikipedia.org/wiki/User:Conversion_script
1690 https://en.wikipedia.org/wiki/User:Cookie4869~enwiki
1691 https://en.wikipedia.org/wiki/User:CoolieCoolster
1692 https://en.wikipedia.org/w/index.php%3ftitle=User:Copyeditor42&action=edit&redlink=1
1693 https://en.wikipedia.org/wiki/User:Copysan
1694 https://en.wikipedia.org/wiki/User:CoralisTree
1695 https://en.wikipedia.org/wiki/User:Coralmizu
1696 https://en.wikipedia.org/wiki/User:Coreydragon
1697 https://en.wikipedia.org/wiki/User:Corn_cheese
1698 https://en.wikipedia.org/w/index.php%3ftitle=User:Cornflake_pirate&action=edit&redlink=1
1699 https://en.wikipedia.org/wiki/User:Corpx
1700 https://en.wikipedia.org/w/index.php%3ftitle=User:CorrectKissinTime&action=edit&redlink=1
1701 https://en.wikipedia.org/w/index.php%3ftitle=User:Correction_box&action=edit&redlink=1
1702 https://en.wikipedia.org/wiki/User:Corti
1703 https://en.wikipedia.org/w/index.php%3ftitle=User:Corwinjoy&action=edit&redlink=1
1704 https://en.wikipedia.org/wiki/User:Coshoi
1705 https://en.wikipedia.org/wiki/User:CosineKitty
1706 https://en.wikipedia.org/wiki/User:Cosmi
1707 https://en.wikipedia.org/wiki/User:Cosmia_Nebula
1708 https://en.wikipedia.org/w/index.php%3ftitle=User:Cosmic_Clouds&action=edit&redlink=1
1709 https://en.wikipedia.org/w/index.php%3ftitle=User:Coulson2&action=edit&redlink=1
1710 https://en.wikipedia.org/w/index.php%3ftitle=User:CountMacula&action=edit&redlink=1
1711 https://en.wikipedia.org/wiki/User:CounterVandalismBot
1712 https://en.wikipedia.org/wiki/User:CountingPine
1713 https://en.wikipedia.org/wiki/User:Courcelles

3 Covidely1714
1 Cowgod141715
1 Cowprophet1716
2 Cowsandmilk1717
1 Cpiral1718
2 Cpl Syx1719
3 Cprakash1720
4 Cpt Wise1721
2 CptViraj1722
2 Cqql1723
1 Cquan1724
1 Cquimper1725
1 CraftedPixelWiki1726
1 Craig Baker1727
3 Craig Barkhouse1728
1 Craig Stuntz1729
2 Craig t moore1730
1 Craigyjack1731
1 Craptree1732
1 CrasherX1733
5 Crashmatrix1734
1 Crashthatch1735
1 Crasshopper1736
1 Cratylus31737
2 Craw-daddy1738

1714 https://en.wikipedia.org/w/index.php%3ftitle=User:Covidely&action=edit&redlink=1
1715 https://en.wikipedia.org/wiki/User:Cowgod14
1716 https://en.wikipedia.org/wiki/User:Cowprophet
1717 https://en.wikipedia.org/wiki/User:Cowsandmilk
1718 https://en.wikipedia.org/wiki/User:Cpiral
1719 https://en.wikipedia.org/wiki/User:Cpl_Syx
1720 https://en.wikipedia.org/w/index.php%3ftitle=User:Cprakash&action=edit&redlink=1
1721 https://en.wikipedia.org/w/index.php%3ftitle=User:Cpt_Wise&action=edit&redlink=1
1722 https://en.wikipedia.org/wiki/User:CptViraj
1723 https://en.wikipedia.org/w/index.php%3ftitle=User:Cqql&action=edit&redlink=1
1724 https://en.wikipedia.org/wiki/User:Cquan
1725 https://en.wikipedia.org/w/index.php%3ftitle=User:Cquimper&action=edit&redlink=1
1726 https://en.wikipedia.org/w/index.php%3ftitle=User:CraftedPixelWiki&action=edit&redlink=1
1727 https://en.wikipedia.org/wiki/User:Craig_Baker
1728 https://en.wikipedia.org/w/index.php%3ftitle=User:Craig_Barkhouse&action=edit&redlink=1
1729 https://en.wikipedia.org/wiki/User:Craig_Stuntz
1730 https://en.wikipedia.org/wiki/User:Craig_t_moore
1731 https://en.wikipedia.org/wiki/User:Craigyjack
1732 https://en.wikipedia.org/wiki/User:Craptree
1733 https://en.wikipedia.org/w/index.php%3ftitle=User:CrasherX&action=edit&redlink=1
1734 https://en.wikipedia.org/wiki/User:Crashmatrix
1735 https://en.wikipedia.org/w/index.php%3ftitle=User:Crashthatch&action=edit&redlink=1
1736 https://en.wikipedia.org/wiki/User:Crasshopper
1737 https://en.wikipedia.org/wiki/User:Cratylus3
1738 https://en.wikipedia.org/wiki/User:Craw-daddy

1 Craxic1739
1 Crazy Ivan1740
2 Crazy george1741
6 Crazy2be1742
1 Crazycomputers1743
2 Crc321744
2 Crecy991745
1 Creffett1746
2 Crefrog1747
31 Creidieki1748
10 Cretog81749
1 Cribe1750
3 Crisbodnar1751
1 Crispmuncher1752
1 CristianCantoro1753
1 CrniBombarder!!!1754
1 Croc hunter1755
7 Cronholm1441756
1 Cronium1757
1 Crorodriguezro1758
1 Crosis1011759
1 Crouchbk1760
1 Crowsnest1761
4 Crowst1762
3 CruiserAbhi1763

1739 https://en.wikipedia.org/w/index.php%3ftitle=User:Craxic&action=edit&redlink=1
1740 https://en.wikipedia.org/wiki/User:Crazy_Ivan
1741 https://en.wikipedia.org/wiki/User:Crazy_george
1742 https://en.wikipedia.org/wiki/User:Crazy2be
1743 https://en.wikipedia.org/wiki/User:Crazycomputers
1744 https://en.wikipedia.org/wiki/User:Crc32
1745 https://en.wikipedia.org/wiki/User:Crecy99
1746 https://en.wikipedia.org/wiki/User:Creffett
1747 https://en.wikipedia.org/w/index.php%3ftitle=User:Crefrog&action=edit&redlink=1
1748 https://en.wikipedia.org/wiki/User:Creidieki
1749 https://en.wikipedia.org/wiki/User:Cretog8
1750 https://en.wikipedia.org/w/index.php%3ftitle=User:Cribe&action=edit&redlink=1
1751 https://en.wikipedia.org/w/index.php%3ftitle=User:Crisbodnar&action=edit&redlink=1
1752 https://en.wikipedia.org/wiki/User:Crispmuncher
1753 https://en.wikipedia.org/wiki/User:CristianCantoro
1754 https://en.wikipedia.org/wiki/User:CrniBombarder!!!
1755 https://en.wikipedia.org/w/index.php%3ftitle=User:Croc_hunter&action=edit&redlink=1
1756 https://en.wikipedia.org/wiki/User:Cronholm144
1757 https://en.wikipedia.org/wiki/User:Cronium
1758 https://en.wikipedia.org/w/index.php%3ftitle=User:Crorodriguezro&action=edit&redlink=1
1759 https://en.wikipedia.org/w/index.php%3ftitle=User:Crosis101&action=edit&redlink=1
1760 https://en.wikipedia.org/w/index.php%3ftitle=User:Crouchbk&action=edit&redlink=1
1761 https://en.wikipedia.org/wiki/User:Crowsnest
1762 https://en.wikipedia.org/w/index.php%3ftitle=User:Crowst&action=edit&redlink=1
1763 https://en.wikipedia.org/w/index.php%3ftitle=User:CruiserAbhi&action=edit&redlink=1

1 Crumpuppet1764
2 Cryptic C621765
4 CryptoDerk1766
1 Cryptoid1767
1 Cryptomaniac21768
1 Cryptopocalypse1769
2 Crystallizedcarbon1770
1 Crywalt1771
1 Cscott1772
1 Cshr1773
3 Csl771774
3 Cspooner1775
4 Css1776
1 Cst171777
1 Cstanford.math1778
2 Csurguine1779
1 Cthe1780
3 Ctlaux1781
9 Ctxppc1782
1 Cuaxdon1783
14 Cuberoot311784
3 Cubiksoundz1785
1 Cubism441786
8 Cuddlyable31787
1 CudduCunnu1231788

1764 https://en.wikipedia.org/w/index.php%3ftitle=User:Crumpuppet&action=edit&redlink=1
1765 https://en.wikipedia.org/wiki/User:Cryptic_C62
1766 https://en.wikipedia.org/wiki/User:CryptoDerk
1767 https://en.wikipedia.org/wiki/User:Cryptoid
1768 https://en.wikipedia.org/wiki/User:Cryptomaniac2
1769 https://en.wikipedia.org/wiki/User:Cryptopocalypse
1770 https://en.wikipedia.org/wiki/User:Crystallizedcarbon
1771 https://en.wikipedia.org/w/index.php%3ftitle=User:Crywalt&action=edit&redlink=1
1772 https://en.wikipedia.org/wiki/User:Cscott
1773 https://en.wikipedia.org/w/index.php%3ftitle=User:Cshr&action=edit&redlink=1
1774 https://en.wikipedia.org/wiki/User:Csl77
1775 https://en.wikipedia.org/w/index.php%3ftitle=User:Cspooner&action=edit&redlink=1
1776 https://en.wikipedia.org/wiki/User:Css
1777 https://en.wikipedia.org/wiki/User:Cst17
1778 https://en.wikipedia.org/wiki/User:Cstanford.math
1779 https://en.wikipedia.org/wiki/User:Csurguine
1780 https://en.wikipedia.org/wiki/User:Cthe
1781 https://en.wikipedia.org/wiki/User:Ctlaux
1782 https://en.wikipedia.org/wiki/User:Ctxppc
1783 https://en.wikipedia.org/wiki/User:Cuaxdon
1784 https://en.wikipedia.org/wiki/User:Cuberoot31
1785 https://en.wikipedia.org/w/index.php%3ftitle=User:Cubiksoundz&action=edit&redlink=1
1786 https://en.wikipedia.org/wiki/User:Cubism44
1787 https://en.wikipedia.org/wiki/User:Cuddlyable3
1788 https://en.wikipedia.org/w/index.php%3ftitle=User:CudduCunnu123&action=edit&redlink=1

1 Cuihtlauac1789
4 Culipan1790
1 CultureDrone1791
2 Culturefanatic121792
2 Cumthsc1793
1 Cup o' Java1794
3 Curb Chain1795
1 Curdflappers1796
1 Curps1797
1 Curpsbot-unicodify1798
1 Curtmack1799
2 Cusku'i1800
3 Cutelyaware1801
2 Cuzelac1802
2 Cuzkatzimhut1803
1 CvyvvZkmSUDowVf1804
1 Cwitty1805
1 Cwl21pitt1806
1 Cwowo1807
1 Cyan1808
1 Cybdestroyer1809
5 CyberShadow1810
2 CyberSkull1811
9 Cyberbot II1812
1 Cyberboys911813

1789 https://en.wikipedia.org/wiki/User:Cuihtlauac
1790 https://en.wikipedia.org/w/index.php%3ftitle=User:Culipan&action=edit&redlink=1
1791 https://en.wikipedia.org/wiki/User:CultureDrone
1792 https://en.wikipedia.org/wiki/User:Culturefanatic12
1793 https://en.wikipedia.org/w/index.php%3ftitle=User:Cumthsc&action=edit&redlink=1
1794 https://en.wikipedia.org/wiki/User:Cup_o%2527_Java
1795 https://en.wikipedia.org/wiki/User:Curb_Chain
1796 https://en.wikipedia.org/wiki/User:Curdflappers
1797 https://en.wikipedia.org/wiki/User:Curps
1798 https://en.wikipedia.org/wiki/User:Curpsbot-unicodify
1799 https://en.wikipedia.org/wiki/User:Curtmack
1800 https://en.wikipedia.org/wiki/User:Cusku%2527i
1801 https://en.wikipedia.org/wiki/User:Cutelyaware
1802 https://en.wikipedia.org/w/index.php%3ftitle=User:Cuzelac&action=edit&redlink=1
1803 https://en.wikipedia.org/wiki/User:Cuzkatzimhut
1804 https://en.wikipedia.org/w/index.php%3ftitle=User:CvyvvZkmSUDowVf&action=edit&redlink=1
1805 https://en.wikipedia.org/wiki/User:Cwitty
1806 https://en.wikipedia.org/w/index.php%3ftitle=User:Cwl21pitt&action=edit&redlink=1
1807 https://en.wikipedia.org/w/index.php%3ftitle=User:Cwowo&action=edit&redlink=1
1808 https://en.wikipedia.org/wiki/User:Cyan
1809 https://en.wikipedia.org/w/index.php%3ftitle=User:Cybdestroyer&action=edit&redlink=1
1810 https://en.wikipedia.org/wiki/User:CyberShadow
1811 https://en.wikipedia.org/wiki/User:CyberSkull
1812 https://en.wikipedia.org/wiki/User:Cyberbot_II
1813 https://en.wikipedia.org/w/index.php%3ftitle=User:Cyberboys91&action=edit&redlink=1

211 Cybercobra1814
1 Cyberjoac1815
4 CyborgTosser1816
5 Cyborgbadger1817
2 Cyc1151818
1 Cycling-professor1819
2 Cyclopaedic1820
8 Cyde1821
29 Cydebot1822
3 Cyhawk1823
1 Cymbalta1824
1 Cynddl1825
1 Cyp1826
3 Cypherquest1827
2 Cyrius1828
1 Cyrus Grisham1829
1 Czarkoff1830
1 Czavoianu1831
1 Czxttkl1832
1 D1833
1 D A Patriarche1834
1 D climacus1835
3 D'ohBot1836
65 D.Lazard1837
1 D.M. from Ukraine1838

1814 https://en.wikipedia.org/wiki/User:Cybercobra
1815 https://en.wikipedia.org/w/index.php%3ftitle=User:Cyberjoac&action=edit&redlink=1
1816 https://en.wikipedia.org/wiki/User:CyborgTosser
1817 https://en.wikipedia.org/w/index.php%3ftitle=User:Cyborgbadger&action=edit&redlink=1
1818 https://en.wikipedia.org/w/index.php%3ftitle=User:Cyc115&action=edit&redlink=1
1819 https://en.wikipedia.org/w/index.php%3ftitle=User:Cycling-professor&action=edit&redlink=1
1820 https://en.wikipedia.org/wiki/User:Cyclopaedic
1821 https://en.wikipedia.org/wiki/User:Cyde
1822 https://en.wikipedia.org/wiki/User:Cydebot
1823 https://en.wikipedia.org/wiki/User:Cyhawk
1824 https://en.wikipedia.org/wiki/User:Cymbalta
1825 https://en.wikipedia.org/wiki/User:Cynddl
1826 https://en.wikipedia.org/wiki/User:Cyp
1827 https://en.wikipedia.org/wiki/User:Cypherquest
1828 https://en.wikipedia.org/wiki/User:Cyrius
1829 https://en.wikipedia.org/wiki/User:Cyrus_Grisham
1830 https://en.wikipedia.org/wiki/User:Czarkoff
1831 https://en.wikipedia.org/w/index.php%3ftitle=User:Czavoianu&action=edit&redlink=1
1832 https://en.wikipedia.org/wiki/User:Czxttkl
1833 https://en.wikipedia.org/wiki/User:D
1834 https://en.wikipedia.org/wiki/User:D_A_Patriarche
1835 https://en.wikipedia.org/wiki/User:D_climacus
1836 https://en.wikipedia.org/wiki/User:D%2527ohBot
1837 https://en.wikipedia.org/wiki/User:D.Lazard
1838 https://en.wikipedia.org/wiki/User:D.M._from_Ukraine

2 D.scain.farenzena1839
1 D07621840
1 D4g0thur1841
2 D61842
1 D753041843
1 DABoffey1844
2 DAMOXGYAMER1845
1 DARTH SIDIOUS 21846
1 DASDBILL21847
1 DAndC1848
1 DCDuring1849
8 DFRussia1850
2 DFS4541851
1 DGG1852
9 DHN1853
46 DHN-bot~enwiki1854
1 DIY~enwiki1855
1 DKEdwards1856
1 DKMell1857
1 DLUrner1858
1 DMCer1859
2 DMacks1860
1 DNewhall1861
5 DOwenWilliams1862
1 DPoon1863

1839 https://en.wikipedia.org/w/index.php%3ftitle=User:D.scain.farenzena&action=edit&redlink=1
1840 https://en.wikipedia.org/w/index.php%3ftitle=User:D0762&action=edit&redlink=1
1841 https://en.wikipedia.org/wiki/User:D4g0thur
1842 https://en.wikipedia.org/wiki/User:D6
1843 https://en.wikipedia.org/w/index.php%3ftitle=User:D75304&action=edit&redlink=1
1844 https://en.wikipedia.org/wiki/User:DABoffey
1845 https://en.wikipedia.org/w/index.php%3ftitle=User:DAMOXGYAMER&action=edit&redlink=1
1846 https://en.wikipedia.org/wiki/User:DARTH_SIDIOUS_2
1847 https://en.wikipedia.org/w/index.php%3ftitle=User:DASDBILL2&action=edit&redlink=1
1848 https://en.wikipedia.org/wiki/User:DAndC
1849 https://en.wikipedia.org/wiki/User:DCDuring
1850 https://en.wikipedia.org/wiki/User:DFRussia
1851 https://en.wikipedia.org/w/index.php%3ftitle=User:DFS454&action=edit&redlink=1
1852 https://en.wikipedia.org/wiki/User:DGG
1853 https://en.wikipedia.org/wiki/User:DHN
1854 https://en.wikipedia.org/wiki/User:DHN-bot~enwiki
1855 https://en.wikipedia.org/w/index.php%3ftitle=User:DIY~enwiki&action=edit&redlink=1
1856 https://en.wikipedia.org/wiki/User:DKEdwards
1857 https://en.wikipedia.org/wiki/User:DKMell
1858 https://en.wikipedia.org/w/index.php%3ftitle=User:DLUrner&action=edit&redlink=1
1859 https://en.wikipedia.org/wiki/User:DMCer
1860 https://en.wikipedia.org/wiki/User:DMacks
1861 https://en.wikipedia.org/wiki/User:DNewhall
1862 https://en.wikipedia.org/wiki/User:DOwenWilliams
1863 https://en.wikipedia.org/wiki/User:DPoon

3 DRAGON BOOSTER1864
5 DRLB1865
1 DSatz1866
3 DSisyphBot1867
17 DVdm1868
1 DWay1869
1 Da nuke1870
1 DaBler1871
2 DaBrown951872
1 DaGizza1873
1 DaVinci1874
1 Dachshund1875
1 Dacian herbei1876
1 Dadudadu1877
1 DaedalusInfinity1878
1 Daehrednud1879
1 Daekharel1880
1 Daev1881
1 Dafyddg1882
1 Dagaspar1883
1 Dagrooms2521884
2 Daisymoobeer1885
6 Daiyuda1886
3 Daiyusha1887
1 Dake~enwiki1888

1864 https://en.wikipedia.org/wiki/User:DRAGON_BOOSTER
1865 https://en.wikipedia.org/wiki/User:DRLB
1866 https://en.wikipedia.org/wiki/User:DSatz
1867 https://en.wikipedia.org/wiki/User:DSisyphBot
1868 https://en.wikipedia.org/wiki/User:DVdm
1869 https://en.wikipedia.org/wiki/User:DWay
1870 https://en.wikipedia.org/wiki/User:Da_nuke
1871 https://en.wikipedia.org/wiki/User:DaBler
1872 https://en.wikipedia.org/w/index.php%3ftitle=User:DaBrown95&action=edit&redlink=1
1873 https://en.wikipedia.org/wiki/User:DaGizza
1874 https://en.wikipedia.org/wiki/User:DaVinci
1875 https://en.wikipedia.org/wiki/User:Dachshund
1876 https://en.wikipedia.org/w/index.php%3ftitle=User:Dacian_herbei&action=edit&redlink=1
1877 https://en.wikipedia.org/wiki/User:Dadudadu
1878 https://en.wikipedia.org/wiki/User:DaedalusInfinity
1879 https://en.wikipedia.org/w/index.php%3ftitle=User:Daehrednud&action=edit&redlink=1
1880 https://en.wikipedia.org/wiki/User:Daekharel
1881 https://en.wikipedia.org/wiki/User:Daev
1882 https://en.wikipedia.org/w/index.php%3ftitle=User:Dafyddg&action=edit&redlink=1
1883 https://en.wikipedia.org/w/index.php%3ftitle=User:Dagaspar&action=edit&redlink=1
1884 https://en.wikipedia.org/w/index.php%3ftitle=User:Dagrooms252&action=edit&redlink=1
1885 https://en.wikipedia.org/w/index.php%3ftitle=User:Daisymoobeer&action=edit&redlink=1
1886 https://en.wikipedia.org/w/index.php%3ftitle=User:Daiyuda&action=edit&redlink=1
1887 https://en.wikipedia.org/wiki/User:Daiyusha
1888 https://en.wikipedia.org/wiki/User:Dake~enwiki


2 Dakhoox1889
1 Dale Gerdemann1890
3 Dalhamir1891
1 Daliumosah1892
1 Daltenty1893
8 Dalton Quinn1894
2 DaltonCastle1895
1 Dam120701la1896
29 Damian Yerrick1897
2 Damien Karras1898
1 Damiens.rf1899
2 Dammit1900
4 Damonkohler1901
1 Damotclese1902
1 Dan D. Ric1903
3 Dan Koehl1904
2 Dan Wang1905
1 Dan1001906
2 DanBishop1907
1 DanHakimi1908
1 DanTrent1909
1 Danadocus1910
136 Danakil1911
1 Dancefire~enwiki1912
1 DancingPhilosopher1913

1889 https://en.wikipedia.org/w/index.php%3ftitle=User:Dakhoox&action=edit&redlink=1
1890 https://en.wikipedia.org/wiki/User:Dale_Gerdemann
1891 https://en.wikipedia.org/w/index.php%3ftitle=User:Dalhamir&action=edit&redlink=1
1892 https://en.wikipedia.org/w/index.php%3ftitle=User:Daliumosah&action=edit&redlink=1
1893 https://en.wikipedia.org/w/index.php%3ftitle=User:Daltenty&action=edit&redlink=1
1894 https://en.wikipedia.org/wiki/User:Dalton_Quinn
1895 https://en.wikipedia.org/wiki/User:DaltonCastle
1896 https://en.wikipedia.org/w/index.php%3ftitle=User:Dam120701la&action=edit&redlink=1
1897 https://en.wikipedia.org/wiki/User:Damian_Yerrick
1898 https://en.wikipedia.org/wiki/User:Damien_Karras
1899 https://en.wikipedia.org/wiki/User:Damiens.rf
1900 https://en.wikipedia.org/wiki/User:Dammit
1901 https://en.wikipedia.org/w/index.php%3ftitle=User:Damonkohler&action=edit&redlink=1
1902 https://en.wikipedia.org/wiki/User:Damotclese
1903 https://en.wikipedia.org/wiki/User:Dan_D._Ric
1904 https://en.wikipedia.org/wiki/User:Dan_Koehl
1905 https://en.wikipedia.org/wiki/User:Dan_Wang
1906 https://en.wikipedia.org/wiki/User:Dan100
1907 https://en.wikipedia.org/wiki/User:DanBishop
1908 https://en.wikipedia.org/wiki/User:DanHakimi
1909 https://en.wikipedia.org/w/index.php%3ftitle=User:DanTrent&action=edit&redlink=1
1910 https://en.wikipedia.org/w/index.php%3ftitle=User:Danadocus&action=edit&redlink=1
1911 https://en.wikipedia.org/wiki/User:Danakil
1912 https://en.wikipedia.org/w/index.php%3ftitle=User:Dancefire~enwiki&action=edit&redlink=1
1913 https://en.wikipedia.org/wiki/User:DancingPhilosopher


2 Dandorid1914
2 Dandv1915
1 DangerousPanda1916
1 Dangling Reference1917
3 Dangulo1918
1 Danhash1919
2 Dani di Neudo1920
1 Daniel Bonniot de Ruisselet1921
2 Daniel Brockman1922
1 Daniel Dandrada1923
2 Daniel Geisler1924
3 Daniel Karapetyan1925
4 Daniel Mietchen1926
26 Daniel Quinlan1927
3 Daniel.Cardenas1928
4 Daniel5Ko1929
1 DanielKO1930
1 DanielMayer1931
3 DanielRuben1932
4 DanielWaterworth1933
2 Danielcamiel1934
1 Danieldanielcolo1935
1 Daniele.tampieri1936
1 Danielms1937

1914 https://en.wikipedia.org/wiki/User:Dandorid
1915 https://en.wikipedia.org/wiki/User:Dandv
1916 https://en.wikipedia.org/wiki/User:DangerousPanda
1917 https://en.wikipedia.org/wiki/User:Dangling_Reference
1918 https://en.wikipedia.org/wiki/User:Dangulo
1919 https://en.wikipedia.org/wiki/User:Danhash
1920 https://en.wikipedia.org/wiki/User:Dani_di_Neudo
1921 https://en.wikipedia.org/w/index.php%3ftitle=User:Daniel_Bonniot_de_Ruisselet&action=edit&redlink=1
1922 https://en.wikipedia.org/wiki/User:Daniel_Brockman
1923 https://en.wikipedia.org/w/index.php%3ftitle=User:Daniel_Dandrada&action=edit&redlink=1
1924 https://en.wikipedia.org/wiki/User:Daniel_Geisler
1925 https://en.wikipedia.org/w/index.php%3ftitle=User:Daniel_Karapetyan&action=edit&redlink=1
1926 https://en.wikipedia.org/wiki/User:Daniel_Mietchen
1927 https://en.wikipedia.org/wiki/User:Daniel_Quinlan
1928 https://en.wikipedia.org/wiki/User:Daniel.Cardenas
1929 https://en.wikipedia.org/w/index.php%3ftitle=User:Daniel5Ko&action=edit&redlink=1
1930 https://en.wikipedia.org/wiki/User:DanielKO
1931 https://en.wikipedia.org/wiki/User:DanielMayer
1932 https://en.wikipedia.org/w/index.php%3ftitle=User:DanielRuben&action=edit&redlink=1
1933 https://en.wikipedia.org/w/index.php%3ftitle=User:DanielWaterworth&action=edit&redlink=1
1934 https://en.wikipedia.org/w/index.php%3ftitle=User:Danielcamiel&action=edit&redlink=1
1935 https://en.wikipedia.org/wiki/User:Danieldanielcolo
1936 https://en.wikipedia.org/wiki/User:Daniele.tampieri
1937 https://en.wikipedia.org/wiki/User:Danielms


1 Danieloliveira561938
3 Danielx1939
3 Danilcha1940
3 Danim1941
1 Danivila951942
1 Dankogai1943
2 Danmaz741944
1 Danmoberly1945
1 Dannaf1946
1 Danno uk1947
1 Danny1948
1 Danny Rathjens1949
1 DannyAsher1950
6 DannyS7121951
7 Dannyniu1952
1 Dannyps1953
1 Danrah1954
1 Dansiman1955
1 Danski4541956
1 Dante Shamest1957
4 Dantheox1958
1 Danwizard2081959
1 Danyaljj1960
23 Daoudamjad1961
1 Dappawit1962

1938 https://en.wikipedia.org/w/index.php%3ftitle=User:Danieloliveira56&action=edit&redlink=1
1939 https://en.wikipedia.org/wiki/User:Danielx
1940 https://en.wikipedia.org/w/index.php%3ftitle=User:Danilcha&action=edit&redlink=1
1941 https://en.wikipedia.org/wiki/User:Danim
1942 https://en.wikipedia.org/w/index.php%3ftitle=User:Danivila95&action=edit&redlink=1
1943 https://en.wikipedia.org/w/index.php%3ftitle=User:Dankogai&action=edit&redlink=1
1944 https://en.wikipedia.org/w/index.php%3ftitle=User:Danmaz74&action=edit&redlink=1
1945 https://en.wikipedia.org/wiki/User:Danmoberly
1946 https://en.wikipedia.org/w/index.php%3ftitle=User:Dannaf&action=edit&redlink=1
1947 https://en.wikipedia.org/wiki/User:Danno_uk
1948 https://en.wikipedia.org/wiki/User:Danny
1949 https://en.wikipedia.org/wiki/User:Danny_Rathjens
1950 https://en.wikipedia.org/wiki/User:DannyAsher
1951 https://en.wikipedia.org/wiki/User:DannyS712
1952 https://en.wikipedia.org/wiki/User:Dannyniu
1953 https://en.wikipedia.org/wiki/User:Dannyps
1954 https://en.wikipedia.org/wiki/User:Danrah
1955 https://en.wikipedia.org/wiki/User:Dansiman
1956 https://en.wikipedia.org/wiki/User:Danski454
1957 https://en.wikipedia.org/wiki/User:Dante_Shamest
1958 https://en.wikipedia.org/wiki/User:Dantheox
1959 https://en.wikipedia.org/w/index.php%3ftitle=User:Danwizard208&action=edit&redlink=1
1960 https://en.wikipedia.org/w/index.php%3ftitle=User:Danyaljj&action=edit&redlink=1
1961 https://en.wikipedia.org/wiki/User:Daoudamjad
1962 https://en.wikipedia.org/wiki/User:Dappawit


1 Darangho1963
1 Darco1964
1 Darcourse1965
4 Dardasavta1966
1 Darguz Parsilvan1967
1 Daria04201968
1 Darij1969
1 Dariopy1970
4 Dark Charles1971
1 Dark Silver Crow1972
1 Dark knight1973
5 Dark-World251974
1 DarkAudit1975
1 DarkFalls1976
1 Darkest ruby1977
2 Darkicebot1978
1 Darklilac1979
1 Darktemplar1980
5 Darkwind1981
1 Darrel francis1982
1 Darren Strash1983
11 DarrylNester1984
1 Darth Mike1985
1 Darthhappyface1986
1 Darvii1987

1963 https://en.wikipedia.org/w/index.php%3ftitle=User:Darangho&action=edit&redlink=1
1964 https://en.wikipedia.org/wiki/User:Darco
1965 https://en.wikipedia.org/wiki/User:Darcourse
1966 https://en.wikipedia.org/wiki/User:Dardasavta
1967 https://en.wikipedia.org/wiki/User:Darguz_Parsilvan
1968 https://en.wikipedia.org/w/index.php%3ftitle=User:Daria0420&action=edit&redlink=1
1969 https://en.wikipedia.org/wiki/User:Darij
1970 https://en.wikipedia.org/w/index.php%3ftitle=User:Dariopy&action=edit&redlink=1
1971 https://en.wikipedia.org/wiki/User:Dark_Charles
1972 https://en.wikipedia.org/wiki/User:Dark_Silver_Crow
1973 https://en.wikipedia.org/wiki/User:Dark_knight
1974 https://en.wikipedia.org/wiki/User:Dark-World25
1975 https://en.wikipedia.org/wiki/User:DarkAudit
1976 https://en.wikipedia.org/wiki/User:DarkFalls
1977 https://en.wikipedia.org/w/index.php%3ftitle=User:Darkest_ruby&action=edit&redlink=1
1978 https://en.wikipedia.org/wiki/User:Darkicebot
1979 https://en.wikipedia.org/wiki/User:Darklilac
1980 https://en.wikipedia.org/w/index.php%3ftitle=User:Darktemplar&action=edit&redlink=1
1981 https://en.wikipedia.org/wiki/User:Darkwind
1982 https://en.wikipedia.org/w/index.php%3ftitle=User:Darrel_francis&action=edit&redlink=1
1983 https://en.wikipedia.org/wiki/User:Darren_Strash
1984 https://en.wikipedia.org/w/index.php%3ftitle=User:DarrylNester&action=edit&redlink=1
1985 https://en.wikipedia.org/wiki/User:Darth_Mike
1986 https://en.wikipedia.org/wiki/User:Darthhappyface
1987 https://en.wikipedia.org/wiki/User:Darvii


6 DarwIn1988
7 DarwinPeacock1989
1 Darylgolden1990
1 DasBrose~enwiki1991
1 Dasboe1992
1 Dastgirpojee1993
1 Dastoger Bashar1994
1 DatGoodDude3421995
9 DataWraith1996
1 Datumizer1997
1 Dav!dB1998
1 DavRosen1999
1 Dave Bass2000
1 Dave.Dunford2001
1 Dave6832002
1 DaveTheRed2003
20 Daveagp2004
4 Davekaminski2005
3 Davemck2006
2 Davepape2007
3 Davetcoleman2008
1 Davew haverford2009
1 David Buchan2010
1 David Bunde2011
1 David Chouinard2012

1988 https://en.wikipedia.org/wiki/User:DarwIn
1989 https://en.wikipedia.org/wiki/User:DarwinPeacock
1990 https://en.wikipedia.org/wiki/User:Darylgolden
1991 https://en.wikipedia.org/w/index.php%3ftitle=User:DasBrose~enwiki&action=edit&redlink=1
1992 https://en.wikipedia.org/w/index.php%3ftitle=User:Dasboe&action=edit&redlink=1
1993 https://en.wikipedia.org/w/index.php%3ftitle=User:Dastgirpojee&action=edit&redlink=1
1994 https://en.wikipedia.org/w/index.php%3ftitle=User:Dastoger_Bashar&action=edit&redlink=1
1995 https://en.wikipedia.org/wiki/User:DatGoodDude342
1996 https://en.wikipedia.org/wiki/User:DataWraith
1997 https://en.wikipedia.org/wiki/User:Datumizer
1998 https://en.wikipedia.org/w/index.php%3ftitle=User:Dav!dB&action=edit&redlink=1
1999 https://en.wikipedia.org/wiki/User:DavRosen
2000 https://en.wikipedia.org/wiki/User:Dave_Bass
2001 https://en.wikipedia.org/wiki/User:Dave.Dunford
2002 https://en.wikipedia.org/w/index.php%3ftitle=User:Dave683&action=edit&redlink=1
2003 https://en.wikipedia.org/wiki/User:DaveTheRed
2004 https://en.wikipedia.org/wiki/User:Daveagp
2005 https://en.wikipedia.org/wiki/User:Davekaminski
2006 https://en.wikipedia.org/wiki/User:Davemck
2007 https://en.wikipedia.org/wiki/User:Davepape
2008 https://en.wikipedia.org/wiki/User:Davetcoleman
2009 https://en.wikipedia.org/wiki/User:Davew_haverford
2010 https://en.wikipedia.org/w/index.php%3ftitle=User:David_Buchan&action=edit&redlink=1
2011 https://en.wikipedia.org/w/index.php%3ftitle=User:David_Bunde&action=edit&redlink=1
2012 https://en.wikipedia.org/wiki/User:David_Chouinard


2 David Condrey2013
3 David Cooke2014
2371 David Eppstein2015
5 David Gerard2016
3 David Haslam2017
1 David Jordan2018
1 David Martland2019
1 David Newton2020
1 David Pal2021
1 David Souther2022
1 David of Earth2023
6 David spector2024
1 David.Mestel2025
3 David.Monniaux2026
2 David.daileyatsrudotedu2027
7 David.hillshafer2028
10 David.moreno722029
1 David95502030
2 DavidBiesack2031
2 DavidBrooks2032
1 DavidCBryant2033
37 DavidCary2034
2 DavidConrad2035
4 DavidGrayson2036
2 DavidGries2037

2013 https://en.wikipedia.org/wiki/User:David_Condrey
2014 https://en.wikipedia.org/w/index.php%3ftitle=User:David_Cooke&action=edit&redlink=1
2015 https://en.wikipedia.org/wiki/User:David_Eppstein
2016 https://en.wikipedia.org/wiki/User:David_Gerard
2017 https://en.wikipedia.org/wiki/User:David_Haslam
2018 https://en.wikipedia.org/wiki/User:David_Jordan
2019 https://en.wikipedia.org/wiki/User:David_Martland
2020 https://en.wikipedia.org/wiki/User:David_Newton
2021 https://en.wikipedia.org/wiki/User:David_Pal
2022 https://en.wikipedia.org/wiki/User:David_Souther
2023 https://en.wikipedia.org/w/index.php%3ftitle=User:David_of_Earth&action=edit&redlink=1
2024 https://en.wikipedia.org/wiki/User:David_spector
2025 https://en.wikipedia.org/wiki/User:David.Mestel
2026 https://en.wikipedia.org/wiki/User:David.Monniaux
2027 https://en.wikipedia.org/w/index.php%3ftitle=User:David.daileyatsrudotedu&action=edit&redlink=1
2028 https://en.wikipedia.org/wiki/User:David.hillshafer
2029 https://en.wikipedia.org/wiki/User:David.moreno72
2030 https://en.wikipedia.org/w/index.php%3ftitle=User:David9550&action=edit&redlink=1
2031 https://en.wikipedia.org/wiki/User:DavidBiesack
2032 https://en.wikipedia.org/wiki/User:DavidBrooks
2033 https://en.wikipedia.org/wiki/User:DavidCBryant
2034 https://en.wikipedia.org/wiki/User:DavidCary
2035 https://en.wikipedia.org/wiki/User:DavidConrad
2036 https://en.wikipedia.org/wiki/User:DavidGrayson
2037 https://en.wikipedia.org/w/index.php%3ftitle=User:DavidGries&action=edit&redlink=1


1 DavidHarkness2038
2 DavidLeighEllis2039
1 DavidRF2040
2 DavidSJ2041
1 DavidWBrooks2042
4 Daviddwd2043
1 Davidfoley2044
2 Davidfstr2045
29 Davidgothberg2046
1 Davidhand2047
3 Davidhorman2048
10 Davidhu0903ex32049
1 Davidlyness2050
2 Davidvandebunte2051
1 Davidwt2052
4 Davidyeotb2053
1 Davinci8x82054
2 Davisonio2055
1 Davitf2056
1 Davorian2057
2 Davub2058
2 Dawn Bard2059
10 Dawnseeker20002060
3 Dayewalker2061
1 Dazappa2062

2038 https://en.wikipedia.org/w/index.php%3ftitle=User:DavidHarkness&action=edit&redlink=1
2039 https://en.wikipedia.org/wiki/User:DavidLeighEllis
2040 https://en.wikipedia.org/wiki/User:DavidRF
2041 https://en.wikipedia.org/wiki/User:DavidSJ
2042 https://en.wikipedia.org/wiki/User:DavidWBrooks
2043 https://en.wikipedia.org/wiki/User:Daviddwd
2044 https://en.wikipedia.org/w/index.php%3ftitle=User:Davidfoley&action=edit&redlink=1
2045 https://en.wikipedia.org/wiki/User:Davidfstr
2046 https://en.wikipedia.org/wiki/User:Davidgothberg
2047 https://en.wikipedia.org/w/index.php%3ftitle=User:Davidhand&action=edit&redlink=1
2048 https://en.wikipedia.org/wiki/User:Davidhorman
2049 https://en.wikipedia.org/wiki/User:Davidhu0903ex3
2050 https://en.wikipedia.org/w/index.php%3ftitle=User:Davidlyness&action=edit&redlink=1
2051 https://en.wikipedia.org/wiki/User:Davidvandebunte
2052 https://en.wikipedia.org/wiki/User:Davidwt
2053 https://en.wikipedia.org/w/index.php%3ftitle=User:Davidyeotb&action=edit&redlink=1
2054 https://en.wikipedia.org/w/index.php%3ftitle=User:Davinci8x8&action=edit&redlink=1
2055 https://en.wikipedia.org/wiki/User:Davisonio
2056 https://en.wikipedia.org/w/index.php%3ftitle=User:Davitf&action=edit&redlink=1
2057 https://en.wikipedia.org/w/index.php%3ftitle=User:Davorian&action=edit&redlink=1
2058 https://en.wikipedia.org/w/index.php%3ftitle=User:Davub&action=edit&redlink=1
2059 https://en.wikipedia.org/wiki/User:Dawn_Bard
2060 https://en.wikipedia.org/wiki/User:Dawnseeker2000
2061 https://en.wikipedia.org/wiki/User:Dayewalker
2062 https://en.wikipedia.org/w/index.php%3ftitle=User:Dazappa&action=edit&redlink=1


6 Daztekk2063
1 Dbabbitt2064
3 Dbagnall2065
1 Dbeatty2066
6 Dbenbenn2067
7 Dbfirs2068
2 Dbingham2069
1 Dbmag92070
2 Dbmikus2071
1 Dbroadwell2072
5 Dc9872073
1 DcTheLeet2074
1 Dcallen2075
1 Dcdanny2076
1 Dcfleck2077
21 Dcirovic2078
12 Dcljr2079
1253 Dcoetzee2080
11 DcoetzeeBot~enwiki2081
1 Dd49322082
1 Ddawson2083
6 Ddccc2084
2 DePiep2085
82 Deacon Vorbis2086
1 Deacon of Pndapetzim2087

2063 https://en.wikipedia.org/w/index.php%3ftitle=User:Daztekk&action=edit&redlink=1
2064 https://en.wikipedia.org/wiki/User:Dbabbitt
2065 https://en.wikipedia.org/w/index.php%3ftitle=User:Dbagnall&action=edit&redlink=1
2066 https://en.wikipedia.org/w/index.php%3ftitle=User:Dbeatty&action=edit&redlink=1
2067 https://en.wikipedia.org/wiki/User:Dbenbenn
2068 https://en.wikipedia.org/wiki/User:Dbfirs
2069 https://en.wikipedia.org/wiki/User:Dbingham
2070 https://en.wikipedia.org/wiki/User:Dbmag9
2071 https://en.wikipedia.org/wiki/User:Dbmikus
2072 https://en.wikipedia.org/wiki/User:Dbroadwell
2073 https://en.wikipedia.org/wiki/User:Dc987
2074 https://en.wikipedia.org/w/index.php%3ftitle=User:DcTheLeet&action=edit&redlink=1
2075 https://en.wikipedia.org/w/index.php%3ftitle=User:Dcallen&action=edit&redlink=1
2076 https://en.wikipedia.org/wiki/User:Dcdanny
2077 https://en.wikipedia.org/wiki/User:Dcfleck
2078 https://en.wikipedia.org/wiki/User:Dcirovic
2079 https://en.wikipedia.org/wiki/User:Dcljr
2080 https://en.wikipedia.org/wiki/User:Dcoetzee
2081 https://en.wikipedia.org/wiki/User:DcoetzeeBot~enwiki
2082 https://en.wikipedia.org/w/index.php%3ftitle=User:Dd4932&action=edit&redlink=1
2083 https://en.wikipedia.org/wiki/User:Ddawson
2084 https://en.wikipedia.org/w/index.php%3ftitle=User:Ddccc&action=edit&redlink=1
2085 https://en.wikipedia.org/wiki/User:DePiep
2086 https://en.wikipedia.org/wiki/User:Deacon_Vorbis
2087 https://en.wikipedia.org/wiki/User:Deacon_of_Pndapetzim


1 Deadcode2088
1 Dean p foster2089
3 Deanonwiki2090
1 Deanphd2091
1 Dearingj2092
3 Debackerl2093
6 Debamf2094
1 Debenben2095
1 Debeo Morium2096
1 Debivort2097
3 Deborahjay2098
1 Debresser2099
1 Decay72100
3 Decltype2101
1 Deco Engel2102
2 Deconvolution2103
1 Decoy2104
4 Decrease7892105
21 Decrypt32106
2 Dede.exe2107
2 Dedicatedecoy2108
5 Deeday-UK2109
6 DeepReasoner2110
16 Deepakabhyankar2111
2 Deepakjoy2112

2088 https://en.wikipedia.org/w/index.php%3ftitle=User:Deadcode&action=edit&redlink=1
2089 https://en.wikipedia.org/wiki/User:Dean_p_foster
2090 https://en.wikipedia.org/w/index.php%3ftitle=User:Deanonwiki&action=edit&redlink=1
2091 https://en.wikipedia.org/w/index.php%3ftitle=User:Deanphd&action=edit&redlink=1
2092 https://en.wikipedia.org/wiki/User:Dearingj
2093 https://en.wikipedia.org/w/index.php%3ftitle=User:Debackerl&action=edit&redlink=1
2094 https://en.wikipedia.org/w/index.php%3ftitle=User:Debamf&action=edit&redlink=1
2095 https://en.wikipedia.org/wiki/User:Debenben
2096 https://en.wikipedia.org/wiki/User:Debeo_Morium
2097 https://en.wikipedia.org/wiki/User:Debivort
2098 https://en.wikipedia.org/wiki/User:Deborahjay
2099 https://en.wikipedia.org/wiki/User:Debresser
2100 https://en.wikipedia.org/w/index.php%3ftitle=User:Decay7&action=edit&redlink=1
2101 https://en.wikipedia.org/wiki/User:Decltype
2102 https://en.wikipedia.org/wiki/User:Deco_Engel
2103 https://en.wikipedia.org/w/index.php%3ftitle=User:Deconvolution&action=edit&redlink=1
2104 https://en.wikipedia.org/wiki/User:Decoy
2105 https://en.wikipedia.org/wiki/User:Decrease789
2106 https://en.wikipedia.org/wiki/User:Decrypt3
2107 https://en.wikipedia.org/w/index.php%3ftitle=User:Dede.exe&action=edit&redlink=1
2108 https://en.wikipedia.org/w/index.php%3ftitle=User:Dedicatedecoy&action=edit&redlink=1
2109 https://en.wikipedia.org/wiki/User:Deeday-UK
2110 https://en.wikipedia.org/w/index.php%3ftitle=User:DeepReasoner&action=edit&redlink=1
2111 https://en.wikipedia.org/w/index.php%3ftitle=User:Deepakabhyankar&action=edit&redlink=1
2112 https://en.wikipedia.org/w/index.php%3ftitle=User:Deepakjoy&action=edit&redlink=1


1 Deeparnab2113
2 Deeptrivia2114
1 Deflagg2115
2 Deineka2116
11 Dekart2117
7 Delaszk2118
3 Deli nk2119
3 Delirium2120
5 Delldot2121
2 Delmet2122
1 Delocalizer2123
1 Delog2124
1 Delt012125
1 Delta 512126
12 Deltahedron2127
1 Deltaway2128
1 Delusion232129
1 Demicx2130
1 Demiraven2131
6 Demiurge10002132
6 DemocraticLuntz2133
1 Demonic19932134
4 Demonkoryu2135
1 Demosta2136
1 Den fjättrade ankan~enwiki2137

2113 https://en.wikipedia.org/wiki/User:Deeparnab
2114 https://en.wikipedia.org/wiki/User:Deeptrivia
2115 https://en.wikipedia.org/w/index.php%3ftitle=User:Deflagg&action=edit&redlink=1
2116 https://en.wikipedia.org/wiki/User:Deineka
2117 https://en.wikipedia.org/wiki/User:Dekart
2118 https://en.wikipedia.org/wiki/User:Delaszk
2119 https://en.wikipedia.org/wiki/User:Deli_nk
2120 https://en.wikipedia.org/wiki/User:Delirium
2121 https://en.wikipedia.org/wiki/User:Delldot
2122 https://en.wikipedia.org/w/index.php%3ftitle=User:Delmet&action=edit&redlink=1
2123 https://en.wikipedia.org/w/index.php%3ftitle=User:Delocalizer&action=edit&redlink=1
2124 https://en.wikipedia.org/wiki/User:Delog
2125 https://en.wikipedia.org/wiki/User:Delt01
2126 https://en.wikipedia.org/wiki/User:Delta_51
2127 https://en.wikipedia.org/wiki/User:Deltahedron
2128 https://en.wikipedia.org/w/index.php%3ftitle=User:Deltaway&action=edit&redlink=1
2129 https://en.wikipedia.org/wiki/User:Delusion23
2130 https://en.wikipedia.org/wiki/User:Demicx
2131 https://en.wikipedia.org/w/index.php%3ftitle=User:Demiraven&action=edit&redlink=1
2132 https://en.wikipedia.org/wiki/User:Demiurge1000
2133 https://en.wikipedia.org/wiki/User:DemocraticLuntz
2134 https://en.wikipedia.org/w/index.php%3ftitle=User:Demonic1993&action=edit&redlink=1
2135 https://en.wikipedia.org/wiki/User:Demonkoryu
2136 https://en.wikipedia.org/wiki/User:Demosta
2137 https://en.wikipedia.org/wiki/User:Den_fj%25C3%25A4ttrade_ankan~enwiki


10 Denisarona2138
1 Deniscostadsc2139
4 Denispir2140
1 Denkkar2141
2 Dennis Brown2142
1 Dennis.warner2143
1 Denny2144
1 Denshade2145
4 DeprecatedFixerBot2146
1 Deputyduck2147
1 Der Golem2148
1 Der schiefe Turm2149
1 DerGraph~enwiki2150
1 DerHexer2151
1 Derbeth2152
1 Dereckson2153
3 Derek M2154
8 Derek Parnell2155
3 Derek R Bullamore2156
14 Derek Ross2157
10 Derek farn2158
1 Derek whp2159
6 Deret19872160
4 Derickvigne2161
3 Derlay2162

2138 https://en.wikipedia.org/wiki/User:Denisarona
2139 https://en.wikipedia.org/w/index.php%3ftitle=User:Deniscostadsc&action=edit&redlink=1
2140 https://en.wikipedia.org/wiki/User:Denispir
2141 https://en.wikipedia.org/w/index.php%3ftitle=User:Denkkar&action=edit&redlink=1
2142 https://en.wikipedia.org/wiki/User:Dennis_Brown
2143 https://en.wikipedia.org/w/index.php%3ftitle=User:Dennis.warner&action=edit&redlink=1
2144 https://en.wikipedia.org/wiki/User:Denny
2145 https://en.wikipedia.org/w/index.php%3ftitle=User:Denshade&action=edit&redlink=1
2146 https://en.wikipedia.org/wiki/User:DeprecatedFixerBot
2147 https://en.wikipedia.org/w/index.php%3ftitle=User:Deputyduck&action=edit&redlink=1
2148 https://en.wikipedia.org/wiki/User:Der_Golem
2149 https://en.wikipedia.org/wiki/User:Der_schiefe_Turm
2150 https://en.wikipedia.org/w/index.php%3ftitle=User:DerGraph~enwiki&action=edit&redlink=1
2151 https://en.wikipedia.org/wiki/User:DerHexer
2152 https://en.wikipedia.org/wiki/User:Derbeth
2153 https://en.wikipedia.org/wiki/User:Dereckson
2154 https://en.wikipedia.org/wiki/User:Derek_M
2155 https://en.wikipedia.org/wiki/User:Derek_Parnell
2156 https://en.wikipedia.org/wiki/User:Derek_R_Bullamore
2157 https://en.wikipedia.org/wiki/User:Derek_Ross
2158 https://en.wikipedia.org/wiki/User:Derek_farn
2159 https://en.wikipedia.org/w/index.php%3ftitle=User:Derek_whp&action=edit&redlink=1
2160 https://en.wikipedia.org/w/index.php%3ftitle=User:Deret1987&action=edit&redlink=1
2161 https://en.wikipedia.org/w/index.php%3ftitle=User:Derickvigne&action=edit&redlink=1
2162 https://en.wikipedia.org/wiki/User:Derlay


1 Desalgo2163
2 Deshraj2164
1 Desilvai2165
1 Deskana2166
4 Dessertsheep2167
1 Dessources2168
1 DestroyerOfSense2169
4 Destynova2170
1 DevUrandom2171
1 Devanshuhpandey2172
23 DevastatorIIC2173
1 Deveedutta2174
2 DevilsAdvocate2175
4 Deviprasad22552176
2 Devis2177
5 Devourer092178
1 Dewnetu2179
11 Dewritech2180
38 Dexbot2181
1 Df747jet2182
6 Dfarrell072183
1 DferDaisy2184
3 Dfeuer2185
1 Dfinkel2186
1 Dfletter2187

2163 https://en.wikipedia.org/w/index.php%3ftitle=User:Desalgo&action=edit&redlink=1
2164 https://en.wikipedia.org/w/index.php%3ftitle=User:Deshraj&action=edit&redlink=1
2165 https://en.wikipedia.org/w/index.php%3ftitle=User:Desilvai&action=edit&redlink=1
2166 https://en.wikipedia.org/wiki/User:Deskana
2167 https://en.wikipedia.org/w/index.php%3ftitle=User:Dessertsheep&action=edit&redlink=1
2168 https://en.wikipedia.org/wiki/User:Dessources
2169 https://en.wikipedia.org/w/index.php%3ftitle=User:DestroyerOfSense&action=edit&redlink=1
2170 https://en.wikipedia.org/wiki/User:Destynova
2171 https://en.wikipedia.org/w/index.php%3ftitle=User:DevUrandom&action=edit&redlink=1
2172 https://en.wikipedia.org/w/index.php%3ftitle=User:Devanshuhpandey&action=edit&redlink=1
2173 https://en.wikipedia.org/wiki/User:DevastatorIIC
2174 https://en.wikipedia.org/wiki/User:Deveedutta
2175 https://en.wikipedia.org/wiki/User:DevilsAdvocate
2176 https://en.wikipedia.org/w/index.php%3ftitle=User:Deviprasad2255&action=edit&redlink=1
2177 https://en.wikipedia.org/w/index.php%3ftitle=User:Devis&action=edit&redlink=1
2178 https://en.wikipedia.org/wiki/User:Devourer09
2179 https://en.wikipedia.org/w/index.php%3ftitle=User:Dewnetu&action=edit&redlink=1
2180 https://en.wikipedia.org/wiki/User:Dewritech
2181 https://en.wikipedia.org/wiki/User:Dexbot
2182 https://en.wikipedia.org/wiki/User:Df747jet
2183 https://en.wikipedia.org/wiki/User:Dfarrell07
2184 https://en.wikipedia.org/wiki/User:DferDaisy
2185 https://en.wikipedia.org/wiki/User:Dfeuer
2186 https://en.wikipedia.org/w/index.php%3ftitle=User:Dfinkel&action=edit&redlink=1
2187 https://en.wikipedia.org/wiki/User:Dfletter


5 Dfrankow2188
3 Dggreen2189
1 Dglushkov662190
1 Dgpop2191
1 Dgrant2192
1 Dgse872193
1 Dhb1012194
1 Dhrav2195
8 Dhrm772196
1 Dhruven16002197
1 Dhruvmalik92198
1 Dhuss2199
6 Diannaa2200
2 Diberri2201
2 Dick tingler2202
24 Dicklyon2203
2 DidierStevens2204
3 DieSwartzPunkt2205
16 Diego Moya2206
4 Diego UFCG~enwiki2207
3 Diego diaz espinoza2208
10 Dieter Simon2209
3 Digisus2210
1 Digital Organism2211
1 Digital infinity2212

2188 https://en.wikipedia.org/wiki/User:Dfrankow
2189 https://en.wikipedia.org/w/index.php%3ftitle=User:Dggreen&action=edit&redlink=1
2190 https://en.wikipedia.org/w/index.php%3ftitle=User:Dglushkov66&action=edit&redlink=1
2191 https://en.wikipedia.org/wiki/User:Dgpop
2192 https://en.wikipedia.org/wiki/User:Dgrant
2193 https://en.wikipedia.org/w/index.php%3ftitle=User:Dgse87&action=edit&redlink=1
2194 https://en.wikipedia.org/wiki/User:Dhb101
2195 https://en.wikipedia.org/w/index.php%3ftitle=User:Dhrav&action=edit&redlink=1
2196 https://en.wikipedia.org/wiki/User:Dhrm77
2197 https://en.wikipedia.org/w/index.php%3ftitle=User:Dhruven1600&action=edit&redlink=1
2198 https://en.wikipedia.org/w/index.php%3ftitle=User:Dhruvmalik9&action=edit&redlink=1
2199 https://en.wikipedia.org/wiki/User:Dhuss
2200 https://en.wikipedia.org/wiki/User:Diannaa
2201 https://en.wikipedia.org/wiki/User:Diberri
2202 https://en.wikipedia.org/w/index.php%3ftitle=User:Dick_tingler&action=edit&redlink=1
2203 https://en.wikipedia.org/wiki/User:Dicklyon
2204 https://en.wikipedia.org/w/index.php%3ftitle=User:DidierStevens&action=edit&redlink=1
2205 https://en.wikipedia.org/wiki/User:DieSwartzPunkt
2206 https://en.wikipedia.org/wiki/User:Diego_Moya
2207 https://en.wikipedia.org/wiki/User:Diego_UFCG~enwiki
2208 https://en.wikipedia.org/w/index.php%3ftitle=User:Diego_diaz_espinoza&action=edit&redlink=1
2209 https://en.wikipedia.org/wiki/User:Dieter_Simon
2210 https://en.wikipedia.org/wiki/User:Digisus
https://en.wikipedia.org/w/index.php%3ftitle=User:Digital_Organism&action=edit&
2211
redlink=1
2212 https://en.wikipedia.org/wiki/User:Digital_infinity


7 Digwuren2213
1 Dila2214
1 Dileep984902215
1 Dillard4212216
1 Dillon2562217
2 Dima david2218
2 Dima12219
1 Dimchord2220
1 Dimpu12342221
1 Dina2222
7 Dinamik-bot2223
2 Dingolover69692224
1 Dino2225
1 Dinoen2226
1 Dinomite2227
5 Diomidis Spinellis2228
19 Dionyziz2229
1 Dipunj2230
3 DirkOliverTheis2231
1 Dirkbb2232
2 Dirkjhogan2233
1 DisasterManX2234
4 Disavian2235
1 DiscantX2236
3 Discboy2237

2213 https://en.wikipedia.org/w/index.php%3ftitle=User:Digwuren&action=edit&redlink=1
2214 https://en.wikipedia.org/w/index.php%3ftitle=User:Dila&action=edit&redlink=1
2215 https://en.wikipedia.org/w/index.php%3ftitle=User:Dileep98490&action=edit&redlink=1
2216 https://en.wikipedia.org/wiki/User:Dillard421
2217 https://en.wikipedia.org/w/index.php%3ftitle=User:Dillon256&action=edit&redlink=1
2218 https://en.wikipedia.org/w/index.php%3ftitle=User:Dima_david&action=edit&redlink=1
2219 https://en.wikipedia.org/wiki/User:Dima1
2220 https://en.wikipedia.org/w/index.php%3ftitle=User:Dimchord&action=edit&redlink=1
2221 https://en.wikipedia.org/w/index.php%3ftitle=User:Dimpu1234&action=edit&redlink=1
2222 https://en.wikipedia.org/wiki/User:Dina
2223 https://en.wikipedia.org/wiki/User:Dinamik-bot
2224 https://en.wikipedia.org/w/index.php%3ftitle=User:Dingolover6969&action=edit&redlink=1
2225 https://en.wikipedia.org/wiki/User:Dino
2226 https://en.wikipedia.org/w/index.php%3ftitle=User:Dinoen&action=edit&redlink=1
2227 https://en.wikipedia.org/wiki/User:Dinomite
2228 https://en.wikipedia.org/wiki/User:Diomidis_Spinellis
2229 https://en.wikipedia.org/wiki/User:Dionyziz
2230 https://en.wikipedia.org/wiki/User:Dipunj
2231 https://en.wikipedia.org/w/index.php%3ftitle=User:DirkOliverTheis&action=edit&redlink=1
2232 https://en.wikipedia.org/wiki/User:Dirkbb
2233 https://en.wikipedia.org/w/index.php%3ftitle=User:Dirkjhogan&action=edit&redlink=1
2234 https://en.wikipedia.org/wiki/User:DisasterManX
2235 https://en.wikipedia.org/wiki/User:Disavian
2236 https://en.wikipedia.org/wiki/User:DiscantX
2237 https://en.wikipedia.org/wiki/User:Discboy


13 Discospinster2238
1 Dishantpandya7772239
4 Dishonesty Test2240
1 DisillusionedBitterAndKnackered2241
1 Dispenser2242
4 Dissident2243
1 Disuja19752244
1 Dittaeva2245
1 Dittymathew2246
3 DivideByZero142247
2 DivineAlpha2248
2 DivineBurner2249
2 Divyanshj.162250
2 DixonDBot2251
8 Dixtosa2252
1 Djbwiki2253
3 Djcollom2254
1 Djh24002255
3 Djhulme2256
1 Djice912257
1 Djszapi2258
1 Dk10272259
1 Dkasak2260
1 Dkf112261
3 Dkolegov2262

2238 https://en.wikipedia.org/wiki/User:Discospinster
2239 https://en.wikipedia.org/w/index.php%3ftitle=User:Dishantpandya777&action=edit&redlink=1
2240 https://en.wikipedia.org/w/index.php%3ftitle=User:Dishonesty_Test&action=edit&redlink=1
2241 https://en.wikipedia.org/wiki/User:DisillusionedBitterAndKnackered
2242 https://en.wikipedia.org/wiki/User:Dispenser
2243 https://en.wikipedia.org/wiki/User:Dissident
2244 https://en.wikipedia.org/w/index.php%3ftitle=User:Disuja1975&action=edit&redlink=1
2245 https://en.wikipedia.org/wiki/User:Dittaeva
2246 https://en.wikipedia.org/wiki/User:Dittymathew
2247 https://en.wikipedia.org/wiki/User:DivideByZero14
2248 https://en.wikipedia.org/w/index.php%3ftitle=User:DivineAlpha&action=edit&redlink=1
2249 https://en.wikipedia.org/wiki/User:DivineBurner
2250 https://en.wikipedia.org/w/index.php%3ftitle=User:Divyanshj.16&action=edit&redlink=1
2251 https://en.wikipedia.org/wiki/User:DixonDBot
2252 https://en.wikipedia.org/wiki/User:Dixtosa
2253 https://en.wikipedia.org/wiki/User:Djbwiki
2254 https://en.wikipedia.org/w/index.php%3ftitle=User:Djcollom&action=edit&redlink=1
2255 https://en.wikipedia.org/w/index.php%3ftitle=User:Djh2400&action=edit&redlink=1
2256 https://en.wikipedia.org/w/index.php%3ftitle=User:Djhulme&action=edit&redlink=1
2257 https://en.wikipedia.org/w/index.php%3ftitle=User:Djice91&action=edit&redlink=1
2258 https://en.wikipedia.org/w/index.php%3ftitle=User:Djszapi&action=edit&redlink=1
2259 https://en.wikipedia.org/w/index.php%3ftitle=User:Dk1027&action=edit&redlink=1
2260 https://en.wikipedia.org/wiki/User:Dkasak
2261 https://en.wikipedia.org/wiki/User:Dkf11
2262 https://en.wikipedia.org/w/index.php%3ftitle=User:Dkolegov&action=edit&redlink=1


2 Dkostic2263
1 Dkrepid2264
1 Dl20002265
8 Dlakavi2266
3 Dllu2267
1 Dlu7762268
29 Dmaciej2269
1 Dmaclach2270
4 Dmbstudio2271
1 Dmcandre2272
1 Dmccarty2273
1 Dmcmanam2274
1 Dmcomer2275
1 Dmercer2276
3 Dmforcier2277
1 DmitTrix2278
10 Dmitri L. Slabk.2279
1 Dmitri pavlov2280
2 Dmitri6662281
15 Dmitry Brant2282
2 Dmitry Dzhus2283
1 Dmn2284
1 Dmoss2285
2 Dmr22286
9 Dmwpowers2287

2263 https://en.wikipedia.org/wiki/User:Dkostic
2264 https://en.wikipedia.org/w/index.php%3ftitle=User:Dkrepid&action=edit&redlink=1
2265 https://en.wikipedia.org/wiki/User:Dl2000
2266 https://en.wikipedia.org/w/index.php%3ftitle=User:Dlakavi&action=edit&redlink=1
2267 https://en.wikipedia.org/wiki/User:Dllu
2268 https://en.wikipedia.org/wiki/User:Dlu776
2269 https://en.wikipedia.org/w/index.php%3ftitle=User:Dmaciej&action=edit&redlink=1
2270 https://en.wikipedia.org/w/index.php%3ftitle=User:Dmaclach&action=edit&redlink=1
2271 https://en.wikipedia.org/wiki/User:Dmbstudio
2272 https://en.wikipedia.org/wiki/User:Dmcandre
2273 https://en.wikipedia.org/w/index.php%3ftitle=User:Dmccarty&action=edit&redlink=1
2274 https://en.wikipedia.org/wiki/User:Dmcmanam
2275 https://en.wikipedia.org/w/index.php%3ftitle=User:Dmcomer&action=edit&redlink=1
2276 https://en.wikipedia.org/wiki/User:Dmercer
2277 https://en.wikipedia.org/wiki/User:Dmforcier
2278 https://en.wikipedia.org/w/index.php%3ftitle=User:DmitTrix&action=edit&redlink=1
2279 https://en.wikipedia.org/wiki/User:Dmitri_L._Slabk.
2280 https://en.wikipedia.org/w/index.php%3ftitle=User:Dmitri_pavlov&action=edit&redlink=1
2281 https://en.wikipedia.org/w/index.php%3ftitle=User:Dmitri666&action=edit&redlink=1
2282 https://en.wikipedia.org/wiki/User:Dmitry_Brant
2283 https://en.wikipedia.org/wiki/User:Dmitry_Dzhus
2284 https://en.wikipedia.org/wiki/User:Dmn
2285 https://en.wikipedia.org/wiki/User:Dmoss
2286 https://en.wikipedia.org/wiki/User:Dmr2
2287 https://en.wikipedia.org/wiki/User:Dmwpowers


2 Dmyersturnbull2288
3 Dmytro2289
3 Dn45952290
1 DnetSvg2291
6 Dngnogu2292
1 DniQ2293
5 DocWatson422294
17 Doccolinni2295
1 DoctorKubla2296
1 Doctorbozzball2297
1 Doctordiehard2298
3 Doctorhook2299
2 Doctormatt2300
8 Docu2301
1 Dodno2302
1 DoebLoggs2303
4 Doicanhden2304
5 DokReggar2305
2 Dolotta2306
1 Dolphinigle~enwiki2307
4 DomQ2308
1 Domesticenginerd2309
23 Domingos2310
11 Dominus2311
1 Domokato2312

2288 https://en.wikipedia.org/wiki/User:Dmyersturnbull
2289 https://en.wikipedia.org/wiki/User:Dmytro
2290 https://en.wikipedia.org/wiki/User:Dn4595
2291 https://en.wikipedia.org/wiki/User:DnetSvg
2292 https://en.wikipedia.org/w/index.php%3ftitle=User:Dngnogu&action=edit&redlink=1
2293 https://en.wikipedia.org/wiki/User:DniQ
2294 https://en.wikipedia.org/wiki/User:DocWatson42
2295 https://en.wikipedia.org/w/index.php%3ftitle=User:Doccolinni&action=edit&redlink=1
2296 https://en.wikipedia.org/wiki/User:DoctorKubla
2297 https://en.wikipedia.org/w/index.php%3ftitle=User:Doctorbozzball&action=edit&redlink=1
2298 https://en.wikipedia.org/wiki/User:Doctordiehard
2299 https://en.wikipedia.org/wiki/User:Doctorhook
2300 https://en.wikipedia.org/wiki/User:Doctormatt
2301 https://en.wikipedia.org/wiki/User:Docu
2302 https://en.wikipedia.org/wiki/User:Dodno
2303 https://en.wikipedia.org/wiki/User:DoebLoggs
2304 https://en.wikipedia.org/w/index.php%3ftitle=User:Doicanhden&action=edit&redlink=1
2305 https://en.wikipedia.org/wiki/User:DokReggar
2306 https://en.wikipedia.org/wiki/User:Dolotta
2307 https://en.wikipedia.org/w/index.php%3ftitle=User:Dolphinigle~enwiki&action=edit&redlink=1
2308 https://en.wikipedia.org/wiki/User:DomQ
2309 https://en.wikipedia.org/wiki/User:Domesticenginerd
2310 https://en.wikipedia.org/w/index.php%3ftitle=User:Domingos&action=edit&redlink=1
2311 https://en.wikipedia.org/wiki/User:Dominus
2312 https://en.wikipedia.org/wiki/User:Domokato


1 Domswaine2313
4 Don4of42314
4 DonaldLflr2315
3 Donarreiskoffer2316
1 Donbraffitt2317
1 Donfbreed2318
13 Donhalcon2319
14 Donner602320
2 Doodyo1232321
2 Doorbuster2172322
2 Dopehead92323
2 Dopexxx2324
2 Doprendek2325
70 Doradus2326
4 Dorbec2327
3 DorganBot2328
4 DoriSmith2329
4 Dorsetonian2330
2 Dosman2331
1 Dotcapitalized2332
8 Doug Bell2333
1 Doug42334
7 Dough342335
5 Dougher2336
2 Douglas R. White2337

2313 https://en.wikipedia.org/wiki/User:Domswaine
2314 https://en.wikipedia.org/wiki/User:Don4of4
2315 https://en.wikipedia.org/w/index.php%3ftitle=User:DonaldLflr&action=edit&redlink=1
2316 https://en.wikipedia.org/wiki/User:Donarreiskoffer
2317 https://en.wikipedia.org/wiki/User:Donbraffitt
2318 https://en.wikipedia.org/w/index.php%3ftitle=User:Donfbreed&action=edit&redlink=1
2319 https://en.wikipedia.org/wiki/User:Donhalcon
2320 https://en.wikipedia.org/wiki/User:Donner60
2321 https://en.wikipedia.org/w/index.php%3ftitle=User:Doodyo123&action=edit&redlink=1
2322 https://en.wikipedia.org/w/index.php%3ftitle=User:Doorbuster217&action=edit&redlink=1
2323 https://en.wikipedia.org/w/index.php%3ftitle=User:Dopehead9&action=edit&redlink=1
2324 https://en.wikipedia.org/wiki/User:Dopexxx
2325 https://en.wikipedia.org/wiki/User:Doprendek
2326 https://en.wikipedia.org/wiki/User:Doradus
2327 https://en.wikipedia.org/w/index.php%3ftitle=User:Dorbec&action=edit&redlink=1
2328 https://en.wikipedia.org/wiki/User:DorganBot
2329 https://en.wikipedia.org/wiki/User:DoriSmith
2330 https://en.wikipedia.org/wiki/User:Dorsetonian
2331 https://en.wikipedia.org/wiki/User:Dosman
2332 https://en.wikipedia.org/w/index.php%3ftitle=User:Dotcapitalized&action=edit&redlink=1
2333 https://en.wikipedia.org/wiki/User:Doug_Bell
2334 https://en.wikipedia.org/wiki/User:Doug4
2335 https://en.wikipedia.org/w/index.php%3ftitle=User:Dough34&action=edit&redlink=1
2336 https://en.wikipedia.org/w/index.php%3ftitle=User:Dougher&action=edit&redlink=1
2337 https://en.wikipedia.org/wiki/User:Douglas_R._White


1 Douglas Ray2338
1 Douglas Wilhelm Harder2339
1 DouglasHeld2340
1 Douglasnaphas2341
4 Doulos Christos2342
15 Download2343
1 Downtown dan seattle2344
1 Dpbert2345
2 Dpiddy13372346
1 Dpleibovitz2347
1 Dr. Dictum Dictus2348
2 Dr. Gonzo2349
1 Dr. Persi2350
2 Dr. Universe2351
4 Dr.Koljan2352
1 Dr.RMills2353
1 Dr.S.Ramachandran2354
1 Dr0b3rts2355
1 Dr3s2356
1 DrAndrewColes2357
1 DrBozzball2358
1 DrHow2359
2 DrJunge2360
1 Draco flavus2361
2 Dragon guy2362

2338 https://en.wikipedia.org/wiki/User:Douglas_Ray
2339 https://en.wikipedia.org/wiki/User:Douglas_Wilhelm_Harder
2340 https://en.wikipedia.org/wiki/User:DouglasHeld
2341 https://en.wikipedia.org/w/index.php%3ftitle=User:Douglasnaphas&action=edit&redlink=1
2342 https://en.wikipedia.org/w/index.php%3ftitle=User:Doulos_Christos&action=edit&redlink=1
2343 https://en.wikipedia.org/wiki/User:Download
2344 https://en.wikipedia.org/wiki/User:Downtown_dan_seattle
2345 https://en.wikipedia.org/w/index.php%3ftitle=User:Dpbert&action=edit&redlink=1
2346 https://en.wikipedia.org/wiki/User:Dpiddy1337
2347 https://en.wikipedia.org/wiki/User:Dpleibovitz
2348 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr._Dictum_Dictus&action=edit&redlink=1
2349 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr._Gonzo&action=edit&redlink=1
2350 https://en.wikipedia.org/wiki/User:Dr._Persi
2351 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr._Universe&action=edit&redlink=1
2352 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr.Koljan&action=edit&redlink=1
2353 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr.RMills&action=edit&redlink=1
2354 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr.S.Ramachandran&action=edit&redlink=1
2355 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr0b3rts&action=edit&redlink=1
2356 https://en.wikipedia.org/w/index.php%3ftitle=User:Dr3s&action=edit&redlink=1
2357 https://en.wikipedia.org/wiki/User:DrAndrewColes
2358 https://en.wikipedia.org/w/index.php%3ftitle=User:DrBozzball&action=edit&redlink=1
2359 https://en.wikipedia.org/w/index.php%3ftitle=User:DrHow&action=edit&redlink=1
2360 https://en.wikipedia.org/wiki/User:DrJunge
2361 https://en.wikipedia.org/wiki/User:Draco_flavus
2362 https://en.wikipedia.org/wiki/User:Dragon_guy


8 DragonBot2363
1 Dragonchilde2364
1 Dragonflare822365
5 DragonflySixtyseven2366
1 Dragunov4022367
1 Drake Redcrest2368
3 Dralea2369
1 Drange net2370
2 Dranorter2371
3 Dratman2372
2 Drbreznjev2373
1 Drdevil442374
1 DreamGuy2375
1 Dreamyshade2376
1 Dreftymac2377
4 Dreske2378
1 Drewmutt2379
1 Drewnoakes2380
1 Drfloob2381
8 Dricherby2382
5 DrilBot2383
2 Drilnoth2384
2 Drishti Wali2385
1 Drizzd~enwiki2386
1 Drjan2387

2363 https://en.wikipedia.org/wiki/User:DragonBot
2364 https://en.wikipedia.org/wiki/User:Dragonchilde
2365 https://en.wikipedia.org/w/index.php%3ftitle=User:Dragonflare82&action=edit&redlink=1
2366 https://en.wikipedia.org/wiki/User:DragonflySixtyseven
2367 https://en.wikipedia.org/wiki/User:Dragunov402
2368 https://en.wikipedia.org/w/index.php%3ftitle=User:Drake_Redcrest&action=edit&redlink=1
2369 https://en.wikipedia.org/w/index.php%3ftitle=User:Dralea&action=edit&redlink=1
2370 https://en.wikipedia.org/wiki/User:Drange_net
2371 https://en.wikipedia.org/wiki/User:Dranorter
2372 https://en.wikipedia.org/wiki/User:Dratman
2373 https://en.wikipedia.org/wiki/User:Drbreznjev
2374 https://en.wikipedia.org/w/index.php%3ftitle=User:Drdevil44&action=edit&redlink=1
2375 https://en.wikipedia.org/wiki/User:DreamGuy
2376 https://en.wikipedia.org/wiki/User:Dreamyshade
2377 https://en.wikipedia.org/wiki/User:Dreftymac
2378 https://en.wikipedia.org/w/index.php%3ftitle=User:Dreske&action=edit&redlink=1
2379 https://en.wikipedia.org/wiki/User:Drewmutt
2380 https://en.wikipedia.org/w/index.php%3ftitle=User:Drewnoakes&action=edit&redlink=1
2381 https://en.wikipedia.org/w/index.php%3ftitle=User:Drfloob&action=edit&redlink=1
2382 https://en.wikipedia.org/w/index.php%3ftitle=User:Dricherby&action=edit&redlink=1
2383 https://en.wikipedia.org/wiki/User:DrilBot
2384 https://en.wikipedia.org/wiki/User:Drilnoth
2385 https://en.wikipedia.org/w/index.php%3ftitle=User:Drishti_Wali&action=edit&redlink=1
2386 https://en.wikipedia.org/wiki/User:Drizzd~enwiki
2387 https://en.wikipedia.org/wiki/User:Drjan


1 Drkarger2388
1 Drobilla2389
1 Droll2390
1 Dronning Chæna2391
6 Drostie2392
2 Drpaule2393
1 Drphilharmonic2394
46 Drrilll2395
1 Drsteve2396
1 Drt12452397
1 Drtom2398
1 Drttm2399
1 DrummingDS12400
1 Dry.2401
1 Dryguy2402
2 Dscholte2403
1 Dscottboggs2404
77 Dsimic2405
1 Dsingularity2406
1 Dsm2407
1 Dsmo223322902408
1 Dsnpost2409
2 Dsp de2410
1 Dspart2411
1 Dstein642412

2388 https://en.wikipedia.org/wiki/User:Drkarger
2389 https://en.wikipedia.org/w/index.php%3ftitle=User:Drobilla&action=edit&redlink=1
2390 https://en.wikipedia.org/wiki/User:Droll
2391 https://en.wikipedia.org/wiki/User:Dronning_Ch%25C3%25A6na
2392 https://en.wikipedia.org/wiki/User:Drostie
2393 https://en.wikipedia.org/wiki/User:Drpaule
2394 https://en.wikipedia.org/w/index.php%3ftitle=User:Drphilharmonic&action=edit&redlink=1
2395 https://en.wikipedia.org/w/index.php%3ftitle=User:Drrilll&action=edit&redlink=1
2396 https://en.wikipedia.org/w/index.php%3ftitle=User:Drsteve&action=edit&redlink=1
2397 https://en.wikipedia.org/w/index.php%3ftitle=User:Drt1245&action=edit&redlink=1
2398 https://en.wikipedia.org/w/index.php%3ftitle=User:Drtom&action=edit&redlink=1
2399 https://en.wikipedia.org/wiki/User:Drttm
2400 https://en.wikipedia.org/w/index.php%3ftitle=User:DrummingDS1&action=edit&redlink=1
2401 https://en.wikipedia.org/wiki/User:Dry.
2402 https://en.wikipedia.org/wiki/User:Dryguy
2403 https://en.wikipedia.org/w/index.php%3ftitle=User:Dscholte&action=edit&redlink=1
2404 https://en.wikipedia.org/w/index.php%3ftitle=User:Dscottboggs&action=edit&redlink=1
2405 https://en.wikipedia.org/wiki/User:Dsimic
2406 https://en.wikipedia.org/w/index.php%3ftitle=User:Dsingularity&action=edit&redlink=1
2407 https://en.wikipedia.org/wiki/User:Dsm
2408 https://en.wikipedia.org/w/index.php%3ftitle=User:Dsmo22332290&action=edit&redlink=1
2409 https://en.wikipedia.org/w/index.php%3ftitle=User:Dsnpost&action=edit&redlink=1
2410 https://en.wikipedia.org/wiki/User:Dsp_de
2411 https://en.wikipedia.org/wiki/User:Dspart
2412 https://en.wikipedia.org/wiki/User:Dstein64


2 Dthomsen82413
4 Dtotoo2414
1 Dtrebbien2415
2 Dtunkelang2416
1 Duagloth2417
1 DuaneLAnderson2418
1 Duck11232419
2 Duckbill2420
1 Ducker2421
2 Dudu902422
4 Dudzcom2423
2 DuineSidhe2424
1 Dukebox2425
2 Dukon2426
21 DumZiBoT2427
5 DumbBOT2428
1 Dummyfarmer2429
1 Duncan2430
1 Duncan Hope2431
3 Duncan.Hull2432
26 Duncancumming2433
1 Duncant2434
3 Duncharris2435
1 Dungahk2436
13 Duoduoduo2437

2413 https://en.wikipedia.org/wiki/User:Dthomsen8
2414 https://en.wikipedia.org/w/index.php%3ftitle=User:Dtotoo&action=edit&redlink=1
2415 https://en.wikipedia.org/wiki/User:Dtrebbien
2416 https://en.wikipedia.org/wiki/User:Dtunkelang
2417 https://en.wikipedia.org/wiki/User:Duagloth
2418 https://en.wikipedia.org/w/index.php%3ftitle=User:DuaneLAnderson&action=edit&redlink=1
2419 https://en.wikipedia.org/w/index.php%3ftitle=User:Duck1123&action=edit&redlink=1
2420 https://en.wikipedia.org/wiki/User:Duckbill
2421 https://en.wikipedia.org/wiki/User:Ducker
2422 https://en.wikipedia.org/wiki/User:Dudu90
2423 https://en.wikipedia.org/wiki/User:Dudzcom
2424 https://en.wikipedia.org/w/index.php%3ftitle=User:DuineSidhe&action=edit&redlink=1
2425 https://en.wikipedia.org/w/index.php%3ftitle=User:Dukebox&action=edit&redlink=1
2426 https://en.wikipedia.org/wiki/User:Dukon
2427 https://en.wikipedia.org/wiki/User:DumZiBoT
2428 https://en.wikipedia.org/wiki/User:DumbBOT
2429 https://en.wikipedia.org/w/index.php%3ftitle=User:Dummyfarmer&action=edit&redlink=1
2430 https://en.wikipedia.org/wiki/User:Duncan
2431 https://en.wikipedia.org/w/index.php%3ftitle=User:Duncan_Hope&action=edit&redlink=1
2432 https://en.wikipedia.org/wiki/User:Duncan.Hull
2433 https://en.wikipedia.org/wiki/User:Duncancumming
2434 https://en.wikipedia.org/wiki/User:Duncant
2435 https://en.wikipedia.org/wiki/User:Duncharris
2436 https://en.wikipedia.org/w/index.php%3ftitle=User:Dungahk&action=edit&redlink=1
2437 https://en.wikipedia.org/wiki/User:Duoduoduo


1 Duplicity2438
1 Durron5972439
1 Dustball2440
43 Duvavic12441
1 Dvanatta2442
1 Dvoina132443
1 Dwedit2444
1 Dwemthy2445
1 Dwhdwh2446
1 Dwheeler2447
1 Dwija Prasad2448
2 Dwkrueger22449
2 Dyadron2450
1 Dybdahl2451
2 Dycedarg2452
2 Dylan Thurston2453
1 Dylan m362454
1 Dymaio2455
3 Dze272456
1 Dzenanz2457
1 Dzf19922458
1 Dzhim2459
1 Dzikasosna2460
1 Déjà Vu2461
3 Długosz2462

2438 https://en.wikipedia.org/wiki/User:Duplicity
2439 https://en.wikipedia.org/wiki/User:Durron597
2440 https://en.wikipedia.org/wiki/User:Dustball
2441 https://en.wikipedia.org/w/index.php%3ftitle=User:Duvavic1&action=edit&redlink=1
2442 https://en.wikipedia.org/w/index.php%3ftitle=User:Dvanatta&action=edit&redlink=1
2443 https://en.wikipedia.org/w/index.php%3ftitle=User:Dvoina13&action=edit&redlink=1
2444 https://en.wikipedia.org/wiki/User:Dwedit
2445 https://en.wikipedia.org/w/index.php%3ftitle=User:Dwemthy&action=edit&redlink=1
2446 https://en.wikipedia.org/w/index.php%3ftitle=User:Dwhdwh&action=edit&redlink=1
2447 https://en.wikipedia.org/wiki/User:Dwheeler
2448 https://en.wikipedia.org/w/index.php%3ftitle=User:Dwija_Prasad&action=edit&redlink=1
2449 https://en.wikipedia.org/w/index.php%3ftitle=User:Dwkrueger2&action=edit&redlink=1
2450 https://en.wikipedia.org/wiki/User:Dyadron
2451 https://en.wikipedia.org/wiki/User:Dybdahl
2452 https://en.wikipedia.org/wiki/User:Dycedarg
2453 https://en.wikipedia.org/w/index.php%3ftitle=User:Dylan_Thurston&action=edit&redlink=1
2454 https://en.wikipedia.org/w/index.php%3ftitle=User:Dylan_m36&action=edit&redlink=1
2455 https://en.wikipedia.org/w/index.php%3ftitle=User:Dymaio&action=edit&redlink=1
2456 https://en.wikipedia.org/wiki/User:Dze27
2457 https://en.wikipedia.org/wiki/User:Dzenanz
2458 https://en.wikipedia.org/w/index.php%3ftitle=User:Dzf1992&action=edit&redlink=1
2459 https://en.wikipedia.org/wiki/User:Dzhim
2460 https://en.wikipedia.org/w/index.php%3ftitle=User:Dzikasosna&action=edit&redlink=1
2461 https://en.wikipedia.org/wiki/User:D%25C3%25A9j%25C3%25A0_Vu
2462 https://en.wikipedia.org/wiki/User:D%25C5%2582ugosz


2 E David Moyer2463
1 E.prn742464
5 E.ruzi2465
1 E23~enwiki2466
1 E2eamon2467
3 E7em2468
4 EAspenwood2469
2 EAtsala2470
1 EBusiness2471
1 EChamilakis2472
3 EEMANUEL0012473
1 EEMIV2474
8 EEng2475
1 EIFY2476
1 ELApro2477
2 ELLinng2478
1 ERIKER2479
5 ESkog2480
3 EUStudent62481
4 EXPTIME-complete2482
1 EZio2483
2 EagleFan2484
1 Ealdgyth2485
1 Eallende2486
1 Eapache2487

2463 https://en.wikipedia.org/wiki/User:E_David_Moyer
2464 https://en.wikipedia.org/w/index.php%3ftitle=User:E.prn74&action=edit&redlink=1
2465 https://en.wikipedia.org/w/index.php%3ftitle=User:E.ruzi&action=edit&redlink=1
2466 https://en.wikipedia.org/wiki/User:E23~enwiki
2467 https://en.wikipedia.org/w/index.php%3ftitle=User:E2eamon&action=edit&redlink=1
2468 https://en.wikipedia.org/w/index.php%3ftitle=User:E7em&action=edit&redlink=1
2469 https://en.wikipedia.org/w/index.php%3ftitle=User:EAspenwood&action=edit&redlink=1
2470 https://en.wikipedia.org/w/index.php%3ftitle=User:EAtsala&action=edit&redlink=1
2471 https://en.wikipedia.org/w/index.php%3ftitle=User:EBusiness&action=edit&redlink=1
2472 https://en.wikipedia.org/w/index.php%3ftitle=User:EChamilakis&action=edit&redlink=1
2473 https://en.wikipedia.org/w/index.php%3ftitle=User:EEMANUEL001&action=edit&redlink=1
2474 https://en.wikipedia.org/wiki/User:EEMIV
2475 https://en.wikipedia.org/wiki/User:EEng
2476 https://en.wikipedia.org/wiki/User:EIFY
2477 https://en.wikipedia.org/wiki/User:ELApro
2478 https://en.wikipedia.org/wiki/User:ELLinng
2479 https://en.wikipedia.org/w/index.php%3ftitle=User:ERIKER&action=edit&redlink=1
2480 https://en.wikipedia.org/wiki/User:ESkog
2481 https://en.wikipedia.org/wiki/User:EUStudent6
2482 https://en.wikipedia.org/w/index.php%3ftitle=User:EXPTIME-complete&action=edit&redlink=1
2483 https://en.wikipedia.org/wiki/User:EZio
2484 https://en.wikipedia.org/wiki/User:EagleFan
2485 https://en.wikipedia.org/wiki/User:Ealdgyth
2486 https://en.wikipedia.org/w/index.php%3ftitle=User:Eallende&action=edit&redlink=1
2487 https://en.wikipedia.org/wiki/User:Eapache


1 EapenZhan2488
6 Earl King2489
1 Earlster2490
1 Earobinson2491
1 Earthcosmos2492
8 Eassin2493
4 Easwarno12494
1 Easyas12c2495
1 Eaterjolly2496
1 Eb0nx9GwixAsyaSf2497
3 Ebehn2498
1 Ebony Jackson2499
1 Ebrahim2500
2 EbrushERB2501
1 Ebyabe2502
17 Ecb292503
13 Eclecticos2504
1 Ecliptica2505
1 Ecnerwala2506
1 Ecomaikgolf2507
4 EconoPhysicist2508
3 Econterms2509
1 Ecpiandy2510
1 Ecruzhe2511
5 Ed Poor2512

2488 https://en.wikipedia.org/w/index.php%3ftitle=User:EapenZhan&action=edit&redlink=1
2489 https://en.wikipedia.org/wiki/User:Earl_King
2490 https://en.wikipedia.org/w/index.php%3ftitle=User:Earlster&action=edit&redlink=1
2491 https://en.wikipedia.org/w/index.php%3ftitle=User:Earobinson&action=edit&redlink=1
2492 https://en.wikipedia.org/wiki/User:Earthcosmos
2493 https://en.wikipedia.org/wiki/User:Eassin
2494 https://en.wikipedia.org/wiki/User:Easwarno1
2495 https://en.wikipedia.org/wiki/User:Easyas12c
2496 https://en.wikipedia.org/wiki/User:Eaterjolly
2497 https://en.wikipedia.org/w/index.php%3ftitle=User:Eb0nx9GwixAsyaSf&action=edit&redlink=1
2498 https://en.wikipedia.org/w/index.php%3ftitle=User:Ebehn&action=edit&redlink=1
2499 https://en.wikipedia.org/wiki/User:Ebony_Jackson
2500 https://en.wikipedia.org/wiki/User:Ebrahim
2501 https://en.wikipedia.org/w/index.php%3ftitle=User:EbrushERB&action=edit&redlink=1
2502 https://en.wikipedia.org/wiki/User:Ebyabe
2503 https://en.wikipedia.org/w/index.php%3ftitle=User:Ecb29&action=edit&redlink=1
2504 https://en.wikipedia.org/w/index.php%3ftitle=User:Eclecticos&action=edit&redlink=1
2505 https://en.wikipedia.org/w/index.php%3ftitle=User:Ecliptica&action=edit&redlink=1
2506 https://en.wikipedia.org/w/index.php%3ftitle=User:Ecnerwala&action=edit&redlink=1
2507 https://en.wikipedia.org/w/index.php%3ftitle=User:Ecomaikgolf&action=edit&redlink=1
2508 https://en.wikipedia.org/wiki/User:EconoPhysicist
2509 https://en.wikipedia.org/wiki/User:Econterms
2510 https://en.wikipedia.org/wiki/User:Ecpiandy
2511 https://en.wikipedia.org/w/index.php%3ftitle=User:Ecruzhe&action=edit&redlink=1
2512 https://en.wikipedia.org/wiki/User:Ed_Poor


2 EdChem2513
3 EdC~enwiki2514
1 EdEColbert2515
1 EdGl2516
4 EdH2517
2 EdSchouten2518
8 Edaeda2519
2 Edaelon2520
2 Edantes-usa2521
3 Edchi2522
1 Edcolins2523
1 EddEdmondson2524
3 Eddideigel2525
1 Eddiearin1232526
1 Eddof132527
5 Edemaine2528
1 Edepot2529
4 Edgar1812530
4 Edgetitleman2531
2 Edgy42532
1 EdiTor2533
1 Edible Melon2534
9 Editor Guy Dude2535
33 Edlee2536
2 EdoBot2537

2513 https://en.wikipedia.org/wiki/User:EdChem
2514 https://en.wikipedia.org/wiki/User:EdC~enwiki
2515 https://en.wikipedia.org/wiki/User:EdEColbert
2516 https://en.wikipedia.org/wiki/User:EdGl
2517 https://en.wikipedia.org/wiki/User:EdH
2518 https://en.wikipedia.org/wiki/User:EdSchouten
2519 https://en.wikipedia.org/w/index.php%3ftitle=User:Edaeda&action=edit&redlink=1
2520 https://en.wikipedia.org/wiki/User:Edaelon
2521 https://en.wikipedia.org/w/index.php%3ftitle=User:Edantes-usa&action=edit&redlink=1
2522 https://en.wikipedia.org/wiki/User:Edchi
2523 https://en.wikipedia.org/wiki/User:Edcolins
2524 https://en.wikipedia.org/wiki/User:EddEdmondson
2525 https://en.wikipedia.org/wiki/User:Eddideigel
2526 https://en.wikipedia.org/w/index.php%3ftitle=User:Eddiearin123&action=edit&redlink=1
2527 https://en.wikipedia.org/w/index.php%3ftitle=User:Eddof13&action=edit&redlink=1
2528 https://en.wikipedia.org/wiki/User:Edemaine
2529 https://en.wikipedia.org/wiki/User:Edepot
2530 https://en.wikipedia.org/wiki/User:Edgar181
2531 https://en.wikipedia.org/w/index.php%3ftitle=User:Edgetitleman&action=edit&redlink=1
2532 https://en.wikipedia.org/wiki/User:Edgy4
2533 https://en.wikipedia.org/wiki/User:EdiTor
2534 https://en.wikipedia.org/w/index.php%3ftitle=User:Edible_Melon&action=edit&redlink=1
2535 https://en.wikipedia.org/w/index.php%3ftitle=User:Editor_Guy_Dude&action=edit&redlink=1
2536 https://en.wikipedia.org/wiki/User:Edlee
2537 https://en.wikipedia.org/wiki/User:EdoBot


1 Edongliu2538
4 Edratzer2539
1 Edupedro2540
13 Edward2541
6 Edward Z. Yang2542
2 EdwardH2543
1 EdwardHades2544
5 Edwininlondon2545
1 Eeekster2546
1 EeeveeeFrost2547
18 Eequor2548
2 Eescardo2549
1 Eewild2550
3 Efansoftware2551
4 Efficacious2552
1 Efnar2553
2 Egamal2554
1 Eggishorn2555
1 Eggman642556
1 Eggsyntax2557
1 Egli2558
3 Egnever2559
1 Egomes jason2560
1 Egriffin2561
1 Egsan Bacon2562

2538 https://en.wikipedia.org/w/index.php%3ftitle=User:Edongliu&action=edit&redlink=1
2539 https://en.wikipedia.org/wiki/User:Edratzer
2540 https://en.wikipedia.org/wiki/User:Edupedro
2541 https://en.wikipedia.org/wiki/User:Edward
2542 https://en.wikipedia.org/wiki/User:Edward_Z._Yang
2543 https://en.wikipedia.org/wiki/User:EdwardH
2544 https://en.wikipedia.org/wiki/User:EdwardHades
2545 https://en.wikipedia.org/wiki/User:Edwininlondon
2546 https://en.wikipedia.org/wiki/User:Eeekster
2547 https://en.wikipedia.org/wiki/User:EeeveeeFrost
2548 https://en.wikipedia.org/wiki/User:Eequor
2549 https://en.wikipedia.org/w/index.php%3ftitle=User:Eescardo&action=edit&redlink=1
2550 https://en.wikipedia.org/w/index.php%3ftitle=User:Eewild&action=edit&redlink=1
2551 https://en.wikipedia.org/w/index.php%3ftitle=User:Efansoftware&action=edit&redlink=1
2552 https://en.wikipedia.org/wiki/User:Efficacious
2553 https://en.wikipedia.org/w/index.php%3ftitle=User:Efnar&action=edit&redlink=1
2554 https://en.wikipedia.org/wiki/User:Egamal
2555 https://en.wikipedia.org/wiki/User:Eggishorn
2556 https://en.wikipedia.org/wiki/User:Eggman64
2557 https://en.wikipedia.org/w/index.php%3ftitle=User:Eggsyntax&action=edit&redlink=1
2558 https://en.wikipedia.org/wiki/User:Egli
2559 https://en.wikipedia.org/wiki/User:Egnever
2560 https://en.wikipedia.org/w/index.php%3ftitle=User:Egomes_jason&action=edit&redlink=1
2561 https://en.wikipedia.org/wiki/User:Egriffin
2562 https://en.wikipedia.org/wiki/User:Egsan_Bacon


1 Ehamberg2563
2 Ehrenkater2564
1 Ehsan~enwiki2565
1 Eigensoul2566
1 Eijkhout2567
1 Eisnel2568
1 Ej2569
1 Eje2112570
1 Ejrh2571
2 Ejtttje2572
1 Ekeilty172573
1 Ekeyme2574
2 Eklipse2575
1 Ekotkie2576
1 Ekujupr2577
26 El C2578
1 El Charpi~enwiki2579
1 El Roih2580
1 El bot de la dieta2581
1 El cid, el campeador2582
1 El10t2583
1 ElBenevolente2584
1 ElKevbo2585
1 ElWatchDog2586
1 Elanguescence2587

2563 https://en.wikipedia.org/wiki/User:Ehamberg
2564 https://en.wikipedia.org/wiki/User:Ehrenkater
2565 https://en.wikipedia.org/wiki/User:Ehsan~enwiki
2566 https://en.wikipedia.org/w/index.php%3ftitle=User:Eigensoul&action=edit&redlink=1
2567 https://en.wikipedia.org/wiki/User:Eijkhout
2568 https://en.wikipedia.org/wiki/User:Eisnel
2569 https://en.wikipedia.org/w/index.php%3ftitle=User:Ej&action=edit&redlink=1
2570 https://en.wikipedia.org/wiki/User:Eje211
2571 https://en.wikipedia.org/wiki/User:Ejrh
2572 https://en.wikipedia.org/w/index.php%3ftitle=User:Ejtttje&action=edit&redlink=1
2573 https://en.wikipedia.org/w/index.php%3ftitle=User:Ekeilty17&action=edit&redlink=1
2574 https://en.wikipedia.org/w/index.php%3ftitle=User:Ekeyme&action=edit&redlink=1
2575 https://en.wikipedia.org/wiki/User:Eklipse
2576 https://en.wikipedia.org/wiki/User:Ekotkie
2577 https://en.wikipedia.org/w/index.php%3ftitle=User:Ekujupr&action=edit&redlink=1
2578 https://en.wikipedia.org/wiki/User:El_C
2579 https://en.wikipedia.org/wiki/User:El_Charpi~enwiki
2580 https://en.wikipedia.org/w/index.php%3ftitle=User:El_Roih&action=edit&redlink=1
2581 https://en.wikipedia.org/wiki/User:El_bot_de_la_dieta
2582 https://en.wikipedia.org/wiki/User:El_cid,_el_campeador
2583 https://en.wikipedia.org/wiki/User:El10t
2584 https://en.wikipedia.org/wiki/User:ElBenevolente
2585 https://en.wikipedia.org/wiki/User:ElKevbo
2586 https://en.wikipedia.org/w/index.php%3ftitle=User:ElWatchDog&action=edit&redlink=1
2587 https://en.wikipedia.org/wiki/User:Elanguescence


1 Elassint2588
1 Elchris4142589
1 Elcielo9172590
20 Eldar2591
1 Eldruin2592
1 Electriccatfish22593
1 Electricmaster2594
1 Electro2595
4 Electron92596
2 ElectronicsEnthusiast2597
2 Electrum2598
1 EleferenBot2599
2 Elektrolurch2600
1 Elektron2601
1 Elektropepi2602
3 Elembis2603
1 Elephant in a tornado2604
3 Eleschinski20002605
1 Eleuther2606
1 Eleveneleven2607
1 Elf2608
1 Elharo2609
5 Eli4ph2610
5 Elias2611
1 Elijej2612

2588 https://en.wikipedia.org/wiki/User:Elassint
2589 https://en.wikipedia.org/w/index.php%3ftitle=User:Elchris414&action=edit&redlink=1
2590 https://en.wikipedia.org/w/index.php%3ftitle=User:Elcielo917&action=edit&redlink=1
2591 https://en.wikipedia.org/w/index.php%3ftitle=User:Eldar&action=edit&redlink=1
2592 https://en.wikipedia.org/wiki/User:Eldruin
2593 https://en.wikipedia.org/w/index.php%3ftitle=User:Electriccatfish2&action=edit&redlink=1
2594 https://en.wikipedia.org/wiki/User:Electricmaster
2595 https://en.wikipedia.org/w/index.php%3ftitle=User:Electro&action=edit&redlink=1
2596 https://en.wikipedia.org/wiki/User:Electron9
2597 https://en.wikipedia.org/wiki/User:ElectronicsEnthusiast
2598 https://en.wikipedia.org/w/index.php%3ftitle=User:Electrum&action=edit&redlink=1
2599 https://en.wikipedia.org/wiki/User:EleferenBot
2600 https://en.wikipedia.org/w/index.php%3ftitle=User:Elektrolurch&action=edit&redlink=1
2601 https://en.wikipedia.org/wiki/User:Elektron
2602 https://en.wikipedia.org/w/index.php%3ftitle=User:Elektropepi&action=edit&redlink=1
2603 https://en.wikipedia.org/wiki/User:Elembis
2604 https://en.wikipedia.org/wiki/User:Elephant_in_a_tornado
2605 https://en.wikipedia.org/wiki/User:Eleschinski2000
2606 https://en.wikipedia.org/wiki/User:Eleuther
2607 https://en.wikipedia.org/wiki/User:Eleveneleven
2608 https://en.wikipedia.org/wiki/User:Elf
2609 https://en.wikipedia.org/w/index.php%3ftitle=User:Elharo&action=edit&redlink=1
2610 https://en.wikipedia.org/w/index.php%3ftitle=User:Eli4ph&action=edit&redlink=1
2611 https://en.wikipedia.org/wiki/User:Elias
2612 https://en.wikipedia.org/w/index.php%3ftitle=User:Elijej&action=edit&redlink=1


1 Elirankoren2613
1 Elisfkc2614
1 Elitre2615
1 Eliz812616
1 Elizabeyth2617
1 Elkfrawy2618
2 Elkman2619
3 Ellywa2620
3 ElnaserAbdelwahab2621
2 ElonNarai2622
1 ElricMelvar2623
7 Elwoz2624
1 Elyazalmahfouz2625
2 Emadix2626
1 Emallove2627
2 Emanueru2628
76 EmausBot2629
1 Embanner2630
1 Emc22631
1 Emdtechnology2632
1 Emergie2633
2 EmersonLowry2634
1 Emil Sander Bak2635
1 Emil22636
189 EmilJ2637

2613 https://en.wikipedia.org/w/index.php%3ftitle=User:Elirankoren&action=edit&redlink=1
2614 https://en.wikipedia.org/wiki/User:Elisfkc
2615 https://en.wikipedia.org/wiki/User:Elitre
2616 https://en.wikipedia.org/wiki/User:Eliz81
2617 https://en.wikipedia.org/wiki/User:Elizabeyth
2618 https://en.wikipedia.org/w/index.php%3ftitle=User:Elkfrawy&action=edit&redlink=1
2619 https://en.wikipedia.org/wiki/User:Elkman
2620 https://en.wikipedia.org/wiki/User:Ellywa
2621 https://en.wikipedia.org/w/index.php%3ftitle=User:ElnaserAbdelwahab&action=edit&redlink=1
2622 https://en.wikipedia.org/w/index.php%3ftitle=User:ElonNarai&action=edit&redlink=1
2623 https://en.wikipedia.org/w/index.php%3ftitle=User:ElricMelvar&action=edit&redlink=1
2624 https://en.wikipedia.org/wiki/User:Elwoz
2625 https://en.wikipedia.org/w/index.php%3ftitle=User:Elyazalmahfouz&action=edit&redlink=1
2626 https://en.wikipedia.org/wiki/User:Emadix
2627 https://en.wikipedia.org/wiki/User:Emallove
2628 https://en.wikipedia.org/w/index.php%3ftitle=User:Emanueru&action=edit&redlink=1
2629 https://en.wikipedia.org/wiki/User:EmausBot
2630 https://en.wikipedia.org/wiki/User:Embanner
2631 https://en.wikipedia.org/wiki/User:Emc2
2632 https://en.wikipedia.org/w/index.php%3ftitle=User:Emdtechnology&action=edit&redlink=1
2633 https://en.wikipedia.org/w/index.php%3ftitle=User:Emergie&action=edit&redlink=1
2634 https://en.wikipedia.org/wiki/User:EmersonLowry
2635 https://en.wikipedia.org/w/index.php%3ftitle=User:Emil_Sander_Bak&action=edit&redlink=1
2636 https://en.wikipedia.org/wiki/User:Emil2
2637 https://en.wikipedia.org/wiki/User:EmilJ


1 Emilgoldsmith2638
1 Emilkeyder2639
2 Emimull2640
1 Emin632641
3 Emir of Wikipedia2642
1 Emj2643
1 Emmanuel5h2644
1 Empaisley2645
2 Emrysk2646
1 Emurphy422647
3 EncMstr2648
1 Encyclopediamonitor2649
3 EndgameCondition2650
1 Energy Dome2651
2 Engheta2652
1 EngineerFromVega2653
1 EngineerScotty2654
1 Ennorehling2655
1 Enobrev2656
21 Enochlau2657
5 Enoksrd2658
1 Enricorpg2659
1 Enrique.benimeli2660
6 Enterprisey2661
1 Enthdegree2662

2638 https://en.wikipedia.org/w/index.php%3ftitle=User:Emilgoldsmith&action=edit&redlink=1
2639 https://en.wikipedia.org/w/index.php%3ftitle=User:Emilkeyder&action=edit&redlink=1
2640 https://en.wikipedia.org/w/index.php%3ftitle=User:Emimull&action=edit&redlink=1
2641 https://en.wikipedia.org/w/index.php%3ftitle=User:Emin63&action=edit&redlink=1
2642 https://en.wikipedia.org/wiki/User:Emir_of_Wikipedia
2643 https://en.wikipedia.org/wiki/User:Emj
2644 https://en.wikipedia.org/wiki/User:Emmanuel5h
2645 https://en.wikipedia.org/w/index.php%3ftitle=User:Empaisley&action=edit&redlink=1
2646 https://en.wikipedia.org/w/index.php%3ftitle=User:Emrysk&action=edit&redlink=1
2647 https://en.wikipedia.org/w/index.php%3ftitle=User:Emurphy42&action=edit&redlink=1
2648 https://en.wikipedia.org/wiki/User:EncMstr
2649 https://en.wikipedia.org/w/index.php%3ftitle=User:Encyclopediamonitor&action=edit&redlink=1
2650 https://en.wikipedia.org/w/index.php%3ftitle=User:EndgameCondition&action=edit&redlink=1
2651 https://en.wikipedia.org/wiki/User:Energy_Dome
2652 https://en.wikipedia.org/wiki/User:Engheta
2653 https://en.wikipedia.org/wiki/User:EngineerFromVega
2654 https://en.wikipedia.org/wiki/User:EngineerScotty
2655 https://en.wikipedia.org/wiki/User:Ennorehling
2656 https://en.wikipedia.org/w/index.php%3ftitle=User:Enobrev&action=edit&redlink=1
2657 https://en.wikipedia.org/wiki/User:Enochlau
2658 https://en.wikipedia.org/w/index.php%3ftitle=User:Enoksrd&action=edit&redlink=1
2659 https://en.wikipedia.org/wiki/User:Enricorpg
2660 https://en.wikipedia.org/w/index.php%3ftitle=User:Enrique.benimeli&action=edit&redlink=1
2661 https://en.wikipedia.org/wiki/User:Enterprisey
2662 https://en.wikipedia.org/wiki/User:Enthdegree

1775
Contributors

6 Entranced982663
3 Entropy2664
1 Enyokoyama2665
1 EnzaiBot2666
2 Enzoferber2667
1 EoRdE62668
2 Epachamo2669
6 Epbr1232670
4 Epeefleche2671
3 Epicgenius2672
1 EpochFail2673
1 Epok2674
1 Epsilon601982675
1 Eptalon2676
1 Eptin2677
1 Equadoros2678
2 Er Komandante2679
1 Eraserhead12680
1 Erasmussen2681
1 Erdic2682
2 Erdogany2683
3 Erebus Morgaine2684
41 Erel Segal2685
2 Erentar20022686
1 Ergotius2687

2663 https://en.wikipedia.org/wiki/User:Entranced98
2664 https://en.wikipedia.org/wiki/User:Entropy
2665 https://en.wikipedia.org/wiki/User:Enyokoyama
2666 https://en.wikipedia.org/wiki/User:EnzaiBot
2667 https://en.wikipedia.org/w/index.php%3ftitle=User:Enzoferber&action=edit&redlink=1
2668 https://en.wikipedia.org/wiki/User:EoRdE6
2669 https://en.wikipedia.org/wiki/User:Epachamo
2670 https://en.wikipedia.org/wiki/User:Epbr123
2671 https://en.wikipedia.org/wiki/User:Epeefleche
2672 https://en.wikipedia.org/wiki/User:Epicgenius
2673 https://en.wikipedia.org/wiki/User:EpochFail
2674 https://en.wikipedia.org/wiki/User:Epok
2675 https://en.wikipedia.org/wiki/User:Epsilon60198
2676 https://en.wikipedia.org/wiki/User:Eptalon
2677 https://en.wikipedia.org/wiki/User:Eptin
2678 https://en.wikipedia.org/w/index.php%3ftitle=User:Equadoros&action=edit&redlink=1
2679 https://en.wikipedia.org/wiki/User:Er_Komandante
2680 https://en.wikipedia.org/wiki/User:Eraserhead1
2681 https://en.wikipedia.org/w/index.php%3ftitle=User:Erasmussen&action=edit&redlink=1
2682 https://en.wikipedia.org/wiki/User:Erdic
2683 https://en.wikipedia.org/w/index.php%3ftitle=User:Erdogany&action=edit&redlink=1
2684 https://en.wikipedia.org/wiki/User:Erebus_Morgaine
2685 https://en.wikipedia.org/wiki/User:Erel_Segal
2686 https://en.wikipedia.org/wiki/User:Erentar2002
2687 https://en.wikipedia.org/wiki/User:Ergotius


2 Eric Burnett2688
2 Eric Kvaalen2689
1 Eric Le Bigot2690
1 Eric Rowland2691
1 Eric.weigle2692
22 Eric1192693
2 EricTalevich2694
1 Ericamick2695
1 ErichS82696
1 Ericl2342697
1 Ericpony2698
2 Ericvalero22699
1 Erik Sjölund2700
4 Erik.Bjareholt2701
8 Erik9bot2702
1 ErikHaugen2703
1 Erikthomp2704
4 ErikvanB2705
2 Erkan Yilmaz2706
4 ErnestSDavis2707
1 Ernie shoemaker2708
1 Ernir2709
1 Erodium2710
1 Erpingham2711
1 ErrantX2712

2688 https://en.wikipedia.org/wiki/User:Eric_Burnett
2689 https://en.wikipedia.org/wiki/User:Eric_Kvaalen
2690 https://en.wikipedia.org/w/index.php%3ftitle=User:Eric_Le_Bigot&action=edit&redlink=1
2691 https://en.wikipedia.org/wiki/User:Eric_Rowland
2692 https://en.wikipedia.org/wiki/User:Eric.weigle
2693 https://en.wikipedia.org/wiki/User:Eric119
2694 https://en.wikipedia.org/wiki/User:EricTalevich
2695 https://en.wikipedia.org/wiki/User:Ericamick
2696 https://en.wikipedia.org/w/index.php%3ftitle=User:ErichS8&action=edit&redlink=1
2697 https://en.wikipedia.org/wiki/User:Ericl234
2698 https://en.wikipedia.org/w/index.php%3ftitle=User:Ericpony&action=edit&redlink=1
2699 https://en.wikipedia.org/w/index.php%3ftitle=User:Ericvalero2&action=edit&redlink=1
2700 https://en.wikipedia.org/w/index.php%3ftitle=User:Erik_Sj%25C3%25B6lund&action=edit&redlink=1
2701 https://en.wikipedia.org/wiki/User:Erik.Bjareholt
2702 https://en.wikipedia.org/wiki/User:Erik9bot
2703 https://en.wikipedia.org/wiki/User:ErikHaugen
2704 https://en.wikipedia.org/w/index.php%3ftitle=User:Erikthomp&action=edit&redlink=1
2705 https://en.wikipedia.org/wiki/User:ErikvanB
2706 https://en.wikipedia.org/wiki/User:Erkan_Yilmaz
2707 https://en.wikipedia.org/w/index.php%3ftitle=User:ErnestSDavis&action=edit&redlink=1
2708 https://en.wikipedia.org/wiki/User:Ernie_shoemaker
2709 https://en.wikipedia.org/wiki/User:Ernir
2710 https://en.wikipedia.org/wiki/User:Erodium
2711 https://en.wikipedia.org/wiki/User:Erpingham
2712 https://en.wikipedia.org/wiki/User:ErrantX


3 Erraph2713
1 ErtySeidohl2714
1 Ertzeid2715
2 Erudecorp2716
1 Eruionnyron2717
1 Erwin2718
1 Erxnmedia2719
3 Esap2720
6 Esayan-Nare2721
1 Esb2722
1 Escape Orbit2723
5 Escarbot2724
1 Eser.aygun2725
2 Eserra2726
8 EsfirK2727
5 Eskimbot2728
1 Esmond.pitt2729
3 Esokullu2730
2 Espresso-hound2731
1 Esqg2732
435 Esquivalience2733
1 Esquivalience (alt)2734
2 Esrogs2735
6 EssRon2736
1 Essabowser2737

2713 https://en.wikipedia.org/w/index.php%3ftitle=User:Erraph&action=edit&redlink=1
2714 https://en.wikipedia.org/w/index.php%3ftitle=User:ErtySeidohl&action=edit&redlink=1
2715 https://en.wikipedia.org/w/index.php%3ftitle=User:Ertzeid&action=edit&redlink=1
2716 https://en.wikipedia.org/wiki/User:Erudecorp
2717 https://en.wikipedia.org/w/index.php%3ftitle=User:Eruionnyron&action=edit&redlink=1
2718 https://en.wikipedia.org/wiki/User:Erwin
2719 https://en.wikipedia.org/wiki/User:Erxnmedia
2720 https://en.wikipedia.org/wiki/User:Esap
2721 https://en.wikipedia.org/w/index.php%3ftitle=User:Esayan-Nare&action=edit&redlink=1
2722 https://en.wikipedia.org/wiki/User:Esb
2723 https://en.wikipedia.org/wiki/User:Escape_Orbit
2724 https://en.wikipedia.org/wiki/User:Escarbot
2725 https://en.wikipedia.org/w/index.php%3ftitle=User:Eser.aygun&action=edit&redlink=1
2726 https://en.wikipedia.org/w/index.php%3ftitle=User:Eserra&action=edit&redlink=1
2727 https://en.wikipedia.org/w/index.php%3ftitle=User:EsfirK&action=edit&redlink=1
2728 https://en.wikipedia.org/wiki/User:Eskimbot
2729 https://en.wikipedia.org/w/index.php%3ftitle=User:Esmond.pitt&action=edit&redlink=1
2730 https://en.wikipedia.org/w/index.php%3ftitle=User:Esokullu&action=edit&redlink=1
2731 https://en.wikipedia.org/w/index.php%3ftitle=User:Espresso-hound&action=edit&redlink=1
2732 https://en.wikipedia.org/wiki/User:Esqg
2733 https://en.wikipedia.org/wiki/User:Esquivalience
2734 https://en.wikipedia.org/wiki/User:Esquivalience_(alt)
2735 https://en.wikipedia.org/wiki/User:Esrogs
2736 https://en.wikipedia.org/w/index.php%3ftitle=User:EssRon&action=edit&redlink=1
2737 https://en.wikipedia.org/w/index.php%3ftitle=User:Essabowser&action=edit&redlink=1


1 Essess2738
2 Esszet2739
9 EstablishedCalculus2740
1 Estamel Tharchon2741
4 Estill-math2742
3 Estirabot2743
2 EtaoinWu2744
2 Ethanbas2745
1 Ethanlin12746
1 Ethanpet1132747
1 Ethanwonton2748
1 Ethereal-Blade2749
1 EthereumEtherscan BitcoinBitcore2750
1 Etopocketo2751
2 Ettrig2752
1 EuPhyte2753
3 Eubot2754
2 Eubulides2755
3 Eug2756
1 Eugeneiiim2757
12 Euloanty2758
1 Eumat1142759
1 Euphilos2760
1 Eurooppa~enwiki2761
1 Eurosong2762

2738 https://en.wikipedia.org/w/index.php%3ftitle=User:Essess&action=edit&redlink=1
2739 https://en.wikipedia.org/w/index.php%3ftitle=User:Esszet&action=edit&redlink=1
2740 https://en.wikipedia.org/wiki/User:EstablishedCalculus
2741 https://en.wikipedia.org/wiki/User:Estamel_Tharchon
2742 https://en.wikipedia.org/w/index.php%3ftitle=User:Estill-math&action=edit&redlink=1
2743 https://en.wikipedia.org/wiki/User:Estirabot
2744 https://en.wikipedia.org/wiki/User:EtaoinWu
2745 https://en.wikipedia.org/wiki/User:Ethanbas
2746 https://en.wikipedia.org/w/index.php%3ftitle=User:Ethanlin1&action=edit&redlink=1
2747 https://en.wikipedia.org/wiki/User:Ethanpet113
2748 https://en.wikipedia.org/w/index.php%3ftitle=User:Ethanwonton&action=edit&redlink=1
2749 https://en.wikipedia.org/wiki/User:Ethereal-Blade
2750 https://en.wikipedia.org/w/index.php%3ftitle=User:EthereumEtherscan_BitcoinBitcore&action=edit&redlink=1
2751 https://en.wikipedia.org/w/index.php%3ftitle=User:Etopocketo&action=edit&redlink=1
2752 https://en.wikipedia.org/wiki/User:Ettrig
2753 https://en.wikipedia.org/wiki/User:EuPhyte
2754 https://en.wikipedia.org/wiki/User:Eubot
2755 https://en.wikipedia.org/wiki/User:Eubulides
2756 https://en.wikipedia.org/wiki/User:Eug
2757 https://en.wikipedia.org/wiki/User:Eugeneiiim
2758 https://en.wikipedia.org/w/index.php%3ftitle=User:Euloanty&action=edit&redlink=1
2759 https://en.wikipedia.org/wiki/User:Eumat114
2760 https://en.wikipedia.org/w/index.php%3ftitle=User:Euphilos&action=edit&redlink=1
2761 https://en.wikipedia.org/w/index.php%3ftitle=User:Eurooppa~enwiki&action=edit&redlink=1
2762 https://en.wikipedia.org/wiki/User:Eurosong


1 Euryalus2763
3 Eus Kevin2764
2 Evan Aad2765
4 Evand2766
1 Evanh20082767
1 Evanjbowling2768
3 Eve Hall2769
1 Ever wonder2770
1 Everard Proudfoot2771
5 Evercat2772
4 Everedux2773
2 Evergrey~enwiki2774
3 Evershade2775
3 Everyking2776
2 Evgeney Knyazhev2777
1 Evgeni Sergeev2778
1 Evgeny Kapun2779
1 Evil Andy2780
1 Evil Monkey2781
1 Evileye732782
23 Evolution and evolvability2783
2 Evolve psi2784
1 Ewa50502785
2 Ewedistrict2786
1 Ewen2787

2763 https://en.wikipedia.org/wiki/User:Euryalus
2764 https://en.wikipedia.org/wiki/User:Eus_Kevin
2765 https://en.wikipedia.org/w/index.php%3ftitle=User:Evan_Aad&action=edit&redlink=1
2766 https://en.wikipedia.org/wiki/User:Evand
2767 https://en.wikipedia.org/wiki/User:Evanh2008
2768 https://en.wikipedia.org/w/index.php%3ftitle=User:Evanjbowling&action=edit&redlink=1
2769 https://en.wikipedia.org/wiki/User:Eve_Hall
2770 https://en.wikipedia.org/wiki/User:Ever_wonder
2771 https://en.wikipedia.org/wiki/User:Everard_Proudfoot
2772 https://en.wikipedia.org/wiki/User:Evercat
2773 https://en.wikipedia.org/wiki/User:Everedux
2774 https://en.wikipedia.org/w/index.php%3ftitle=User:Evergrey~enwiki&action=edit&redlink=1
2775 https://en.wikipedia.org/w/index.php%3ftitle=User:Evershade&action=edit&redlink=1
2776 https://en.wikipedia.org/wiki/User:Everyking
2777 https://en.wikipedia.org/w/index.php%3ftitle=User:Evgeney_Knyazhev&action=edit&redlink=1
2778 https://en.wikipedia.org/wiki/User:Evgeni_Sergeev
2779 https://en.wikipedia.org/wiki/User:Evgeny_Kapun
2780 https://en.wikipedia.org/w/index.php%3ftitle=User:Evil_Andy&action=edit&redlink=1
2781 https://en.wikipedia.org/wiki/User:Evil_Monkey
2782 https://en.wikipedia.org/w/index.php%3ftitle=User:Evileye73&action=edit&redlink=1
2783 https://en.wikipedia.org/wiki/User:Evolution_and_evolvability
2784 https://en.wikipedia.org/w/index.php%3ftitle=User:Evolve_psi&action=edit&redlink=1
2785 https://en.wikipedia.org/w/index.php%3ftitle=User:Ewa5050&action=edit&redlink=1
2786 https://en.wikipedia.org/w/index.php%3ftitle=User:Ewedistrict&action=edit&redlink=1
2787 https://en.wikipedia.org/wiki/User:Ewen


2 Ewlyahoocom2788
6 Ewx2789
10 Excirial2790
2 Exercisephys2791
3 Exile oi2792
1 Exitmoose2793
1 Exmortis2232794
1 Exoji2e2795
4 Experiment1232796
4 Expiring frog2797
1 Explorer092798
1 Extensive~enwiki2799
1 Extra9992800
1 Eyal02801
1 Eyer2802
1 Eyesnore2803
5 Eyreland2804
5 Eyrian2805
1 EzequielBalmori2806
1 Ezhiki2807
2 Ezrakilty2808
4 Ezubaric2809
2 F.Raab2810
1 FACBot2811
2 FANSTARbot2812

2788 https://en.wikipedia.org/wiki/User:Ewlyahoocom
2789 https://en.wikipedia.org/w/index.php%3ftitle=User:Ewx&action=edit&redlink=1
2790 https://en.wikipedia.org/wiki/User:Excirial
2791 https://en.wikipedia.org/wiki/User:Exercisephys
2792 https://en.wikipedia.org/w/index.php%3ftitle=User:Exile_oi&action=edit&redlink=1
2793 https://en.wikipedia.org/wiki/User:Exitmoose
2794 https://en.wikipedia.org/w/index.php%3ftitle=User:Exmortis223&action=edit&redlink=1
2795 https://en.wikipedia.org/w/index.php%3ftitle=User:Exoji2e&action=edit&redlink=1
2796 https://en.wikipedia.org/wiki/User:Experiment123
2797 https://en.wikipedia.org/w/index.php%3ftitle=User:Expiring_frog&action=edit&redlink=1
2798 https://en.wikipedia.org/w/index.php%3ftitle=User:Explorer09&action=edit&redlink=1
2799 https://en.wikipedia.org/w/index.php%3ftitle=User:Extensive~enwiki&action=edit&redlink=1
2800 https://en.wikipedia.org/wiki/User:Extra999
2801 https://en.wikipedia.org/wiki/User:Eyal0
2802 https://en.wikipedia.org/wiki/User:Eyer
2803 https://en.wikipedia.org/wiki/User:Eyesnore
2804 https://en.wikipedia.org/w/index.php%3ftitle=User:Eyreland&action=edit&redlink=1
2805 https://en.wikipedia.org/wiki/User:Eyrian
2806 https://en.wikipedia.org/w/index.php%3ftitle=User:EzequielBalmori&action=edit&redlink=1
2807 https://en.wikipedia.org/wiki/User:Ezhiki
2808 https://en.wikipedia.org/wiki/User:Ezrakilty
2809 https://en.wikipedia.org/w/index.php%3ftitle=User:Ezubaric&action=edit&redlink=1
2810 https://en.wikipedia.org/w/index.php%3ftitle=User:F.Raab&action=edit&redlink=1
2811 https://en.wikipedia.org/wiki/User:FACBot
2812 https://en.wikipedia.org/wiki/User:FANSTARbot


4 FLuisWP2813
2 FULANO20192814
1 FULBERT2815
1 Fabercap2816
1 Fabian Steeg~enwiki2817
1 Fabihola912818
6 Fabior19842819
1 Fabricebaro2820
1 FactoidCow2821
1 FactorialG2822
1 Fagstein2823
1 Fahadmunir322824
2 Faham1232825
1 FairFare2826
5 Faizan2827
1 Faizbash2828
2 Falcon87652829
1 Falcone~enwiki2830
1 Falcongl2831
3 Falk Lieder2832
1 Falkirks2833
1 False vacuum2834
7 Falsedef2835
1 Famtaro2836
2 Fancitron2837

2813 https://en.wikipedia.org/wiki/User:FLuisWP
2814 https://en.wikipedia.org/w/index.php%3ftitle=User:FULANO2019&action=edit&redlink=1
2815 https://en.wikipedia.org/wiki/User:FULBERT
2816 https://en.wikipedia.org/w/index.php%3ftitle=User:Fabercap&action=edit&redlink=1
2817 https://en.wikipedia.org/wiki/User:Fabian_Steeg~enwiki
2818 https://en.wikipedia.org/w/index.php%3ftitle=User:Fabihola91&action=edit&redlink=1
2819 https://en.wikipedia.org/wiki/User:Fabior1984
2820 https://en.wikipedia.org/wiki/User:Fabricebaro
2821 https://en.wikipedia.org/wiki/User:FactoidCow
2822 https://en.wikipedia.org/w/index.php%3ftitle=User:FactorialG&action=edit&redlink=1
2823 https://en.wikipedia.org/wiki/User:Fagstein
2824 https://en.wikipedia.org/w/index.php%3ftitle=User:Fahadmunir32&action=edit&redlink=1
2825 https://en.wikipedia.org/w/index.php%3ftitle=User:Faham123&action=edit&redlink=1
2826 https://en.wikipedia.org/wiki/User:FairFare
2827 https://en.wikipedia.org/wiki/User:Faizan
2828 https://en.wikipedia.org/w/index.php%3ftitle=User:Faizbash&action=edit&redlink=1
2829 https://en.wikipedia.org/wiki/User:Falcon8765
2830 https://en.wikipedia.org/wiki/User:Falcone~enwiki
2831 https://en.wikipedia.org/w/index.php%3ftitle=User:Falcongl&action=edit&redlink=1
2832 https://en.wikipedia.org/wiki/User:Falk_Lieder
2833 https://en.wikipedia.org/wiki/User:Falkirks
2834 https://en.wikipedia.org/wiki/User:False_vacuum
2835 https://en.wikipedia.org/wiki/User:Falsedef
2836 https://en.wikipedia.org/w/index.php%3ftitle=User:Famtaro&action=edit&redlink=1
2837 https://en.wikipedia.org/w/index.php%3ftitle=User:Fancitron&action=edit&redlink=1


4 Fangyuan1st2838
3 Fangz2839
2 Fanis842840
1 Faolin422841
5 Farach2842
1 Faradayplank2843
2 Farazbhinder2844
1 Farazy2845
9 Faridani2846
2 Fartfart01012847
3 Farzan Taj2848
6 Farzaneh2849
2 Fashnek2850
1 Fasten2851
10 Fastfission2852
3 Fasthogrider2853
1 Fastily2854
1 Fastilysock (usurped)2855
1 Fatal19552856
1 Fatjedi892857
1 Fatphil2858
9 Faure.thomas~enwiki2859
2 FauxFaux2860
103 Favonian2861
6 Fawcett52862

2838 https://en.wikipedia.org/wiki/User:Fangyuan1st
2839 https://en.wikipedia.org/wiki/User:Fangz
2840 https://en.wikipedia.org/wiki/User:Fanis84
2841 https://en.wikipedia.org/wiki/User:Faolin42
2842 https://en.wikipedia.org/w/index.php%3ftitle=User:Farach&action=edit&redlink=1
2843 https://en.wikipedia.org/wiki/User:Faradayplank
2844 https://en.wikipedia.org/w/index.php%3ftitle=User:Farazbhinder&action=edit&redlink=1
2845 https://en.wikipedia.org/wiki/User:Farazy
2846 https://en.wikipedia.org/wiki/User:Faridani
2847 https://en.wikipedia.org/w/index.php%3ftitle=User:Fartfart0101&action=edit&redlink=1
2848 https://en.wikipedia.org/w/index.php%3ftitle=User:Farzan_Taj&action=edit&redlink=1
2849 https://en.wikipedia.org/wiki/User:Farzaneh
2850 https://en.wikipedia.org/wiki/User:Fashnek
2851 https://en.wikipedia.org/wiki/User:Fasten
2852 https://en.wikipedia.org/wiki/User:Fastfission
2853 https://en.wikipedia.org/w/index.php%3ftitle=User:Fasthogrider&action=edit&redlink=1
2854 https://en.wikipedia.org/wiki/User:Fastily
2855 https://en.wikipedia.org/w/index.php%3ftitle=User:Fastilysock_(usurped)&action=edit&redlink=1
2856 https://en.wikipedia.org/w/index.php%3ftitle=User:Fatal1955&action=edit&redlink=1
2857 https://en.wikipedia.org/w/index.php%3ftitle=User:Fatjedi89&action=edit&redlink=1
2858 https://en.wikipedia.org/w/index.php%3ftitle=User:Fatphil&action=edit&redlink=1
2859 https://en.wikipedia.org/wiki/User:Faure.thomas~enwiki
2860 https://en.wikipedia.org/w/index.php%3ftitle=User:FauxFaux&action=edit&redlink=1
2861 https://en.wikipedia.org/wiki/User:Favonian
2862 https://en.wikipedia.org/wiki/User:Fawcett5


5 Fawly2863
2 Fayenatic london2864
1 FayssalF2865
1 Fbz.ict2866
2 Fcady20072867
2 Fchristo2868
3 Feanor9812869
1 Fedayee2870
1 Feeeshboy2871
1 Feigenbaum2872
1 Felagund2873
1 Felipe Sobreira Abrahão2874
1 FelipeVargasRigo2875
1 Felix Hohne2876
1 Felix Wiemann2877
1 Fender01074012878
2 Fender1232879
3 FenixFeather2880
2 Fennec2881
1 Feradz2882
13 FeralOink2883
1 Ferengi2884
2 Ferkelparade2885
1 Fernanluyano2886
1 Ferris372887

2863 https://en.wikipedia.org/wiki/User:Fawly
2864 https://en.wikipedia.org/wiki/User:Fayenatic_london
2865 https://en.wikipedia.org/wiki/User:FayssalF
2866 https://en.wikipedia.org/w/index.php%3ftitle=User:Fbz.ict&action=edit&redlink=1
2867 https://en.wikipedia.org/wiki/User:Fcady2007
2868 https://en.wikipedia.org/w/index.php%3ftitle=User:Fchristo&action=edit&redlink=1
2869 https://en.wikipedia.org/w/index.php%3ftitle=User:Feanor981&action=edit&redlink=1
2870 https://en.wikipedia.org/wiki/User:Fedayee
2871 https://en.wikipedia.org/wiki/User:Feeeshboy
2872 https://en.wikipedia.org/wiki/User:Feigenbaum
2873 https://en.wikipedia.org/wiki/User:Felagund
2874 https://en.wikipedia.org/wiki/User:Felipe_Sobreira_Abrah%25C3%25A3o
2875 https://en.wikipedia.org/wiki/User:FelipeVargasRigo
2876 https://en.wikipedia.org/wiki/User:Felix_Hohne
2877 https://en.wikipedia.org/w/index.php%3ftitle=User:Felix_Wiemann&action=edit&redlink=1
2878 https://en.wikipedia.org/wiki/User:Fender0107401
2879 https://en.wikipedia.org/w/index.php%3ftitle=User:Fender123&action=edit&redlink=1
2880 https://en.wikipedia.org/wiki/User:FenixFeather
2881 https://en.wikipedia.org/wiki/User:Fennec
2882 https://en.wikipedia.org/wiki/User:Feradz
2883 https://en.wikipedia.org/wiki/User:FeralOink
2884 https://en.wikipedia.org/wiki/User:Ferengi
2885 https://en.wikipedia.org/wiki/User:Ferkelparade
2886 https://en.wikipedia.org/w/index.php%3ftitle=User:Fernanluyano&action=edit&redlink=1
2887 https://en.wikipedia.org/wiki/User:Ferris37


1 Ferritecore2888
1 Fewmenleft2889
1 Feynman812890
1 Feynmanliang2891
1 Fezzerof2892
1 Ffaarr2893
1 Ffcccc2894
10 Fgnievinski2895
1 Fiachra100032896
1 FiachraByrne2897
1 Fiarr2898
1 Fibonacci2899
1 Fifteen Minutes Of Fame2900
1 Figod2901
17 Fij2902
1 Fijal2903
2 FilBenLeafBoy2904
1 Filip Euler2905
1 Filip.bartek2906
1 Filip130419822907
1 FilipeS2908
1 Filu~enwiki2909
1 FinalMinuet2910
7 Finell2911
1 Finem2912

2888 https://en.wikipedia.org/wiki/User:Ferritecore
2889 https://en.wikipedia.org/wiki/User:Fewmenleft
2890 https://en.wikipedia.org/wiki/User:Feynman81
2891 https://en.wikipedia.org/w/index.php%3ftitle=User:Feynmanliang&action=edit&redlink=1
2892 https://en.wikipedia.org/wiki/User:Fezzerof
2893 https://en.wikipedia.org/wiki/User:Ffaarr
2894 https://en.wikipedia.org/w/index.php%3ftitle=User:Ffcccc&action=edit&redlink=1
2895 https://en.wikipedia.org/wiki/User:Fgnievinski
2896 https://en.wikipedia.org/wiki/User:Fiachra10003
2897 https://en.wikipedia.org/wiki/User:FiachraByrne
2898 https://en.wikipedia.org/w/index.php%3ftitle=User:Fiarr&action=edit&redlink=1
2899 https://en.wikipedia.org/wiki/User:Fibonacci
2900 https://en.wikipedia.org/wiki/User:Fifteen_Minutes_Of_Fame
2901 https://en.wikipedia.org/w/index.php%3ftitle=User:Figod&action=edit&redlink=1
2902 https://en.wikipedia.org/wiki/User:Fij
2903 https://en.wikipedia.org/w/index.php%3ftitle=User:Fijal&action=edit&redlink=1
2904 https://en.wikipedia.org/wiki/User:FilBenLeafBoy
2905 https://en.wikipedia.org/wiki/User:Filip_Euler
2906 https://en.wikipedia.org/w/index.php%3ftitle=User:Filip.bartek&action=edit&redlink=1
2907 https://en.wikipedia.org/w/index.php%3ftitle=User:Filip13041982&action=edit&redlink=1
2908 https://en.wikipedia.org/wiki/User:FilipeS
2909 https://en.wikipedia.org/wiki/User:Filu~enwiki
2910 https://en.wikipedia.org/wiki/User:FinalMinuet
2911 https://en.wikipedia.org/wiki/User:Finell
2912 https://en.wikipedia.org/w/index.php%3ftitle=User:Finem&action=edit&redlink=1


1 Finity2913
1 FinnishOverlord2914
4 Finnusertop2915
2 Fintler2916
2 Fintor2917
3 Fioravante Patrone2918
1 FirefoxRocks2919
2 Firenu2920
1 Firestarforever2921
1 Firsfron2922
1 Firvqipo2923
2 Fish and karate2924
1 Fishy314152925
2 Fivelittlemonkeys2926
6 Fizbin72927
2 Fizo862928
4 Fkodama2929
1 Fkoenig~enwiki2930
30 FlaBot2931
13 FlagrantUsername2932
6 Flammifer2933
1 Flandidlydanders2934
1 Flapitrr2935
1 Flappychappy2936
1 Flarity2937

2913 https://en.wikipedia.org/wiki/User:Finity
2914 https://en.wikipedia.org/wiki/User:FinnishOverlord
2915 https://en.wikipedia.org/wiki/User:Finnusertop
2916 https://en.wikipedia.org/wiki/User:Fintler
2917 https://en.wikipedia.org/wiki/User:Fintor
2918 https://en.wikipedia.org/wiki/User:Fioravante_Patrone
2919 https://en.wikipedia.org/wiki/User:FirefoxRocks
2920 https://en.wikipedia.org/wiki/User:Firenu
2921 https://en.wikipedia.org/wiki/User:Firestarforever
2922 https://en.wikipedia.org/wiki/User:Firsfron
2923 https://en.wikipedia.org/w/index.php%3ftitle=User:Firvqipo&action=edit&redlink=1
2924 https://en.wikipedia.org/wiki/User:Fish_and_karate
2925 https://en.wikipedia.org/w/index.php%3ftitle=User:Fishy31415&action=edit&redlink=1
2926 https://en.wikipedia.org/wiki/User:Fivelittlemonkeys
2927 https://en.wikipedia.org/wiki/User:Fizbin7
2928 https://en.wikipedia.org/w/index.php%3ftitle=User:Fizo86&action=edit&redlink=1
2929 https://en.wikipedia.org/w/index.php%3ftitle=User:Fkodama&action=edit&redlink=1
2930 https://en.wikipedia.org/w/index.php%3ftitle=User:Fkoenig~enwiki&action=edit&redlink=1
2931 https://en.wikipedia.org/wiki/User:FlaBot
2932 https://en.wikipedia.org/wiki/User:FlagrantUsername
2933 https://en.wikipedia.org/wiki/User:Flammifer
2934 https://en.wikipedia.org/w/index.php%3ftitle=User:Flandidlydanders&action=edit&redlink=1
2935 https://en.wikipedia.org/w/index.php%3ftitle=User:Flapitrr&action=edit&redlink=1
2936 https://en.wikipedia.org/wiki/User:Flappychappy
2937 https://en.wikipedia.org/w/index.php%3ftitle=User:Flarity&action=edit&redlink=1


4 Flatline2938
1 Flaviut2939
1 Flex2940
1 FlippyFlink2941
3 Flok2942
1 Flooded with them hundreds2943
1 Floodyberry2944
1 Flordeneu2945
1 Florentls2946
1 Floridada2947
1 Flosfa2948
1 Flouran2949
1 Flowi2950
2 Flt entendre2951
27 Flyer22 Frozen2952
2 Flyingspuds2953
20 Fmadd2954
1 Fmccown2955
1 Fmicaeli2956
1 Fmorstatter2957
8 Fnielsen2958
1 Fojr2959
1 Folic Acid2960
1 Folletto2961
7 Foo1232962

2938 https://en.wikipedia.org/wiki/User:Flatline
2939 https://en.wikipedia.org/w/index.php%3ftitle=User:Flaviut&action=edit&redlink=1
2940 https://en.wikipedia.org/wiki/User:Flex
2941 https://en.wikipedia.org/wiki/User:FlippyFlink
2942 https://en.wikipedia.org/wiki/User:Flok
2943 https://en.wikipedia.org/wiki/User:Flooded_with_them_hundreds
2944 https://en.wikipedia.org/w/index.php%3ftitle=User:Floodyberry&action=edit&redlink=1
2945 https://en.wikipedia.org/wiki/User:Flordeneu
2946 https://en.wikipedia.org/w/index.php%3ftitle=User:Florentls&action=edit&redlink=1
2947 https://en.wikipedia.org/wiki/User:Floridada
2948 https://en.wikipedia.org/wiki/User:Flosfa
2949 https://en.wikipedia.org/w/index.php%3ftitle=User:Flouran&action=edit&redlink=1
2950 https://en.wikipedia.org/wiki/User:Flowi
2951 https://en.wikipedia.org/w/index.php%3ftitle=User:Flt_entendre&action=edit&redlink=1
2952 https://en.wikipedia.org/wiki/User:Flyer22_Frozen
2953 https://en.wikipedia.org/w/index.php%3ftitle=User:Flyingspuds&action=edit&redlink=1
2954 https://en.wikipedia.org/wiki/User:Fmadd
2955 https://en.wikipedia.org/wiki/User:Fmccown
2956 https://en.wikipedia.org/w/index.php%3ftitle=User:Fmicaeli&action=edit&redlink=1
2957 https://en.wikipedia.org/wiki/User:Fmorstatter
2958 https://en.wikipedia.org/wiki/User:Fnielsen
2959 https://en.wikipedia.org/wiki/User:Fojr
2960 https://en.wikipedia.org/wiki/User:Folic_Acid
2961 https://en.wikipedia.org/wiki/User:Folletto
2962 https://en.wikipedia.org/w/index.php%3ftitle=User:Foo123&action=edit&redlink=1


3 Foobar2963
1 Foobarnix2964
5 Foobaz2965
1 Foot2966
1 Forbsey2967
5 Forderud2968
1 Forehann86512969
1 ForestAlpaca2970
1 Forgot to put name2971
1 Fortdj332972
4 Fortnow2973
1 Fotvoren2974
1 Four Dog Night2975
1 FourteenDays2976
1 Fox Wilson2977
2 FoxBot2978
1 Foxandpotatoes2979
1 Foxjwill2980
1 Fparnon2981
2 Frafl2982
1 Fragapanagos2983
8 Fraggle812984
4 Fragglet2985
1 France34702986
1 Francinum2987

2963 https://en.wikipedia.org/wiki/User:Foobar
2964 https://en.wikipedia.org/wiki/User:Foobarnix
2965 https://en.wikipedia.org/wiki/User:Foobaz
2966 https://en.wikipedia.org/w/index.php%3ftitle=User:Foot&action=edit&redlink=1
2967 https://en.wikipedia.org/w/index.php%3ftitle=User:Forbsey&action=edit&redlink=1
2968 https://en.wikipedia.org/wiki/User:Forderud
2969 https://en.wikipedia.org/wiki/User:Forehann8651
2970 https://en.wikipedia.org/wiki/User:ForestAlpaca
2971 https://en.wikipedia.org/wiki/User:Forgot_to_put_name
2972 https://en.wikipedia.org/wiki/User:Fortdj33
2973 https://en.wikipedia.org/w/index.php%3ftitle=User:Fortnow&action=edit&redlink=1
2974 https://en.wikipedia.org/w/index.php%3ftitle=User:Fotvoren&action=edit&redlink=1
2975 https://en.wikipedia.org/wiki/User:Four_Dog_Night
2976 https://en.wikipedia.org/w/index.php%3ftitle=User:FourteenDays&action=edit&redlink=1
2977 https://en.wikipedia.org/wiki/User:Fox_Wilson
2978 https://en.wikipedia.org/wiki/User:FoxBot
2979 https://en.wikipedia.org/wiki/User:Foxandpotatoes
2980 https://en.wikipedia.org/wiki/User:Foxjwill
2981 https://en.wikipedia.org/w/index.php%3ftitle=User:Fparnon&action=edit&redlink=1
2982 https://en.wikipedia.org/w/index.php%3ftitle=User:Frafl&action=edit&redlink=1
2983 https://en.wikipedia.org/w/index.php%3ftitle=User:Fragapanagos&action=edit&redlink=1
2984 https://en.wikipedia.org/wiki/User:Fraggle81
2985 https://en.wikipedia.org/wiki/User:Fragglet
2986 https://en.wikipedia.org/wiki/User:France3470
2987 https://en.wikipedia.org/wiki/User:Francinum


3 Francis Tyers2988
6 Franciscouzo2989
1 FrancoGG2990
1 Francxzshu2991
3 FrankTobia2992
4 Frankenviking2993
11 Frankrod442994
1 Frantisek.jandos2995
2 François Pitt2996
8 François Robere2997
73 Frap2998
1 Frasmog2999
1 Fraxtil3000
1 Frazzydee3001
2 Freakingtips3002
2 Freakofnurture3003
25 Frecklefoot3004
1 Fred Bauder3005
3 Fred Bradstadt3006
1 Fred Gandt3007
2 Fred J3008
1 Fred20133009
1 Fredanator3010
69 Fredrik3011
1 FreePeter30003012

2988 https://en.wikipedia.org/wiki/User:Francis_Tyers
2989 https://en.wikipedia.org/w/index.php%3ftitle=User:Franciscouzo&action=edit&redlink=1
2990 https://en.wikipedia.org/wiki/User:FrancoGG
2991 https://en.wikipedia.org/w/index.php%3ftitle=User:Francxzshu&action=edit&redlink=1
2992 https://en.wikipedia.org/wiki/User:FrankTobia
2993 https://en.wikipedia.org/w/index.php%3ftitle=User:Frankenviking&action=edit&redlink=1
2994 https://en.wikipedia.org/w/index.php%3ftitle=User:Frankrod44&action=edit&redlink=1
2995 https://en.wikipedia.org/wiki/User:Frantisek.jandos
2996 https://en.wikipedia.org/w/index.php%3ftitle=User:Fran%25C3%25A7ois_Pitt&action=edit&redlink=1
2997 https://en.wikipedia.org/w/index.php%3ftitle=User:Fran%25C3%25A7ois_Robere&action=edit&redlink=1
2998 https://en.wikipedia.org/wiki/User:Frap
2999 https://en.wikipedia.org/w/index.php%3ftitle=User:Frasmog&action=edit&redlink=1
3000 https://en.wikipedia.org/wiki/User:Fraxtil
3001 https://en.wikipedia.org/wiki/User:Frazzydee
3002 https://en.wikipedia.org/w/index.php%3ftitle=User:Freakingtips&action=edit&redlink=1
3003 https://en.wikipedia.org/wiki/User:Freakofnurture
3004 https://en.wikipedia.org/wiki/User:Frecklefoot
3005 https://en.wikipedia.org/wiki/User:Fred_Bauder
3006 https://en.wikipedia.org/wiki/User:Fred_Bradstadt
3007 https://en.wikipedia.org/wiki/User:Fred_Gandt
3008 https://en.wikipedia.org/wiki/User:Fred_J
3009 https://en.wikipedia.org/wiki/User:Fred2013
3010 https://en.wikipedia.org/w/index.php%3ftitle=User:Fredanator&action=edit&redlink=1
3011 https://en.wikipedia.org/wiki/User:Fredrik
3012 https://en.wikipedia.org/w/index.php%3ftitle=User:FreePeter3000&action=edit&redlink=1


1 Freederick3013
2 Frehley3014
36 Frencheigh3015
2 FreplySpang3016
1 Fresal3017
81 FrescoBot3018
36 Fresheneesz3019
2 Freshman4043020
2 Fridayda13~enwiki3021
15 Frietjes3022
8 Frikle3023
18 Frizzil3024
2 Frl9873025
3 Fropuff3026
7 Frosty3027
5 Froth3028
1 Frozen43223029
1 Frozenport3030
1 Frungi3031
2 Fryed-peach3032
1 Frze3033
1 Fsaada013034
1 Fschwarzentruber3035
3 Fsiler3036
1 Ftiercel3037

3013 https://en.wikipedia.org/wiki/User:Freederick
3014 https://en.wikipedia.org/wiki/User:Frehley
3015 https://en.wikipedia.org/w/index.php%3ftitle=User:Frencheigh&action=edit&redlink=1
3016 https://en.wikipedia.org/wiki/User:FreplySpang
3017 https://en.wikipedia.org/w/index.php%3ftitle=User:Fresal&action=edit&redlink=1
3018 https://en.wikipedia.org/wiki/User:FrescoBot
3019 https://en.wikipedia.org/wiki/User:Fresheneesz
3020 https://en.wikipedia.org/wiki/User:Freshman404
3021 https://en.wikipedia.org/w/index.php%3ftitle=User:Fridayda13~enwiki&action=edit&redlink=1
3022 https://en.wikipedia.org/wiki/User:Frietjes
3023 https://en.wikipedia.org/wiki/User:Frikle
3024 https://en.wikipedia.org/wiki/User:Frizzil
3025 https://en.wikipedia.org/w/index.php%3ftitle=User:Frl987&action=edit&redlink=1
3026 https://en.wikipedia.org/wiki/User:Fropuff
3027 https://en.wikipedia.org/wiki/User:Frosty
3028 https://en.wikipedia.org/w/index.php%3ftitle=User:Froth&action=edit&redlink=1
3029 https://en.wikipedia.org/wiki/User:Frozen4322
3030 https://en.wikipedia.org/wiki/User:Frozenport
3031 https://en.wikipedia.org/wiki/User:Frungi
3032 https://en.wikipedia.org/wiki/User:Fryed-peach
3033 https://en.wikipedia.org/wiki/User:Frze
3034 https://en.wikipedia.org/w/index.php%3ftitle=User:Fsaada01&action=edit&redlink=1
3035 https://en.wikipedia.org/w/index.php%3ftitle=User:Fschwarzentruber&action=edit&redlink=1
3036 https://en.wikipedia.org/wiki/User:Fsiler
3037 https://en.wikipedia.org/wiki/User:Ftiercel


1 Fts463038
1 Fuddle3039
2 Fuelbottle3040
1 Fugitron3041
1 Fukuoka V3042
1 Fumitol3043
2 Funandtrvl3044
3 Func3045
3 FurnaldHall3046
1 Furrfu3047
38 Furrykef3048
1 FusionNow3049
1 FuturePrefect3050
2 FutureTrillionaire3051
1 Fuujuhi3052
1 Fuxx3053
1 Fuzheado3054
1 Fuzzie3055
1 FuzziusMaximus3056
1 Fuzzy3057
1 Fuzzypeg3058
5 FvdP3059
2 Fvw3060
2 Fx23061
1 Fxer3062

3038 https://en.wikipedia.org/w/index.php%3ftitle=User:Fts46&action=edit&redlink=1
3039 https://en.wikipedia.org/wiki/User:Fuddle
3040 https://en.wikipedia.org/wiki/User:Fuelbottle
3041 https://en.wikipedia.org/wiki/User:Fugitron
3042 https://en.wikipedia.org/w/index.php%3ftitle=User:Fukuoka_V&action=edit&redlink=1
3043 https://en.wikipedia.org/wiki/User:Fumitol
3044 https://en.wikipedia.org/wiki/User:Funandtrvl
3045 https://en.wikipedia.org/wiki/User:Func
3046 https://en.wikipedia.org/w/index.php%3ftitle=User:FurnaldHall&action=edit&redlink=1
3047 https://en.wikipedia.org/wiki/User:Furrfu
3048 https://en.wikipedia.org/wiki/User:Furrykef
3049 https://en.wikipedia.org/w/index.php%3ftitle=User:FusionNow&action=edit&redlink=1
3050 https://en.wikipedia.org/wiki/User:FuturePrefect
3051 https://en.wikipedia.org/wiki/User:FutureTrillionaire
3052 https://en.wikipedia.org/w/index.php%3ftitle=User:Fuujuhi&action=edit&redlink=1
3053 https://en.wikipedia.org/wiki/User:Fuxx
3054 https://en.wikipedia.org/wiki/User:Fuzheado
3055 https://en.wikipedia.org/wiki/User:Fuzzie
3056 https://en.wikipedia.org/wiki/User:FuzziusMaximus
3057 https://en.wikipedia.org/wiki/User:Fuzzy
3058 https://en.wikipedia.org/wiki/User:Fuzzypeg
3059 https://en.wikipedia.org/wiki/User:FvdP
3060 https://en.wikipedia.org/wiki/User:Fvw
3061 https://en.wikipedia.org/wiki/User:Fx2
3062 https://en.wikipedia.org/wiki/User:Fxer

1791
Contributors

1 Fxm123063
8 Fylwind3064
2 Fyrael3065
2 Fyyer3066
2 Fæ3067
1 G. Moore3068
2 G1213069
2 G7163070
1 G8moon3071
1 GBL3072
2 GCRhoads3073
3 GDallimore3074
1 GGT3075
1 GGordonWorleyIII3076
1 GLmathgrant3077
6 GPHemsley3078
4 GPJ3079
2 GPhilip3080
1 GRHooked3081
5 GRuban3082
1 GS44443083
2 GSMR3084
3 GSS3085
1 GTBacchus3086
1 GVOLTT3087

3063 https://en.wikipedia.org/wiki/User:Fxm12
3064 https://en.wikipedia.org/wiki/User:Fylwind
3065 https://en.wikipedia.org/wiki/User:Fyrael
3066 https://en.wikipedia.org/wiki/User:Fyyer
3067 https://en.wikipedia.org/wiki/User:F%25C3%25A6
3068 https://en.wikipedia.org/wiki/User:G._Moore
3069 https://en.wikipedia.org/w/index.php%3ftitle=User:G121&action=edit&redlink=1
3070 https://en.wikipedia.org/wiki/User:G716
3071 https://en.wikipedia.org/w/index.php%3ftitle=User:G8moon&action=edit&redlink=1
3072 https://en.wikipedia.org/w/index.php%3ftitle=User:GBL&action=edit&redlink=1
3073 https://en.wikipedia.org/w/index.php%3ftitle=User:GCRhoads&action=edit&redlink=1
3074 https://en.wikipedia.org/wiki/User:GDallimore
3075 https://en.wikipedia.org/wiki/User:GGT
3076 https://en.wikipedia.org/wiki/User:GGordonWorleyIII
3077 https://en.wikipedia.org/wiki/User:GLmathgrant
3078 https://en.wikipedia.org/wiki/User:GPHemsley
3079 https://en.wikipedia.org/w/index.php%3ftitle=User:GPJ&action=edit&redlink=1
3080 https://en.wikipedia.org/w/index.php%3ftitle=User:GPhilip&action=edit&redlink=1
3081 https://en.wikipedia.org/wiki/User:GRHooked
3082 https://en.wikipedia.org/wiki/User:GRuban
3083 https://en.wikipedia.org/wiki/User:GS4444
3084 https://en.wikipedia.org/wiki/User:GSMR
3085 https://en.wikipedia.org/wiki/User:GSS
3086 https://en.wikipedia.org/wiki/User:GTBacchus
3087 https://en.wikipedia.org/wiki/User:GVOLTT

2 Gabbe3088
1 Gabefair3089
1 Gabetarian3090
1 Gabn13091
1 GaborUrbanics3092
4 Gadfium3093
2 Gadig3094
1 Gaeddal3095
1 Gaelan3096
1 Gaganbansal1233097
4 Gah43098
2 Gaiacarra3099
1 Gail3100
9 Gaius Cornelius3101
1 Gajeam3102
1 Gak3103
2 Gakrivas3104
2 Galanom3105
1 Galaxiaad3106
1 Galaxy073107
1 Galeru3108
2 Galobtter3109
1 Galobtter's sock3110
4 Galoubet3111
1 Galzigler3112

3088 https://en.wikipedia.org/wiki/User:Gabbe
3089 https://en.wikipedia.org/wiki/User:Gabefair
3090 https://en.wikipedia.org/wiki/User:Gabetarian
3091 https://en.wikipedia.org/w/index.php%3ftitle=User:Gabn1&action=edit&redlink=1
3092 https://en.wikipedia.org/w/index.php%3ftitle=User:GaborUrbanics&action=edit&redlink=1
3093 https://en.wikipedia.org/wiki/User:Gadfium
3094 https://en.wikipedia.org/wiki/User:Gadig
3095 https://en.wikipedia.org/wiki/User:Gaeddal
3096 https://en.wikipedia.org/wiki/User:Gaelan
3097 https://en.wikipedia.org/w/index.php%3ftitle=User:Gaganbansal123&action=edit&redlink=1
3098 https://en.wikipedia.org/wiki/User:Gah4
3099 https://en.wikipedia.org/wiki/User:Gaiacarra
3100 https://en.wikipedia.org/wiki/User:Gail
3101 https://en.wikipedia.org/wiki/User:Gaius_Cornelius
3102 https://en.wikipedia.org/w/index.php%3ftitle=User:Gajeam&action=edit&redlink=1
3103 https://en.wikipedia.org/wiki/User:Gak
3104 https://en.wikipedia.org/wiki/User:Gakrivas
3105 https://en.wikipedia.org/w/index.php%3ftitle=User:Galanom&action=edit&redlink=1
3106 https://en.wikipedia.org/wiki/User:Galaxiaad
3107 https://en.wikipedia.org/wiki/User:Galaxy07
3108 https://en.wikipedia.org/w/index.php%3ftitle=User:Galeru&action=edit&redlink=1
3109 https://en.wikipedia.org/wiki/User:Galobtter
3110 https://en.wikipedia.org/wiki/User:Galobtter%2527s_sock
3111 https://en.wikipedia.org/wiki/User:Galoubet
3112 https://en.wikipedia.org/wiki/User:Galzigler

1 Gam33113
2 Gamall Wednesday Ida3114
1 Gambhir.jagmeet3115
19 GamePlayerAI3116
2 Gamesbot3117
1 Gametheorist773118
3 Gamma~enwiki3119
2 GanKeyu3120
1 Ganapathi Bappa3121
18 Gandalf613122
1 GangofOne3123
1 Gankro3124
1 Gap95513125
8 Garamond Lethe3126
3 Garas3127
2 GarbledLecture9333128
1 Gardar Rurak3129
5 Gareth Jones3130
3 Gareth McCaughan3131
1 Garfeild3132
9 Garfield Garfield3133
3 Gargaj3134
2 Garo3135
3 Garoth3136
4 Garrettw873137

3113 https://en.wikipedia.org/wiki/User:Gam3
3114 https://en.wikipedia.org/wiki/User:Gamall_Wednesday_Ida
3115 https://en.wikipedia.org/w/index.php%3ftitle=User:Gambhir.jagmeet&action=edit&redlink=1
3116 https://en.wikipedia.org/w/index.php%3ftitle=User:GamePlayerAI&action=edit&redlink=1
3117 https://en.wikipedia.org/w/index.php%3ftitle=User:Gamesbot&action=edit&redlink=1
3118 https://en.wikipedia.org/w/index.php%3ftitle=User:Gametheorist77&action=edit&redlink=1
3119 https://en.wikipedia.org/wiki/User:Gamma~enwiki
3120 https://en.wikipedia.org/wiki/User:GanKeyu
3121 https://en.wikipedia.org/w/index.php%3ftitle=User:Ganapathi_Bappa&action=edit&redlink=1
3122 https://en.wikipedia.org/wiki/User:Gandalf61
3123 https://en.wikipedia.org/wiki/User:GangofOne
3124 https://en.wikipedia.org/w/index.php%3ftitle=User:Gankro&action=edit&redlink=1
3125 https://en.wikipedia.org/wiki/User:Gap9551
3126 https://en.wikipedia.org/wiki/User:Garamond_Lethe
3127 https://en.wikipedia.org/wiki/User:Garas
3128 https://en.wikipedia.org/wiki/User:GarbledLecture933
3129 https://en.wikipedia.org/wiki/User:Gardar_Rurak
3130 https://en.wikipedia.org/wiki/User:Gareth_Jones
3131 https://en.wikipedia.org/wiki/User:Gareth_McCaughan
3132 https://en.wikipedia.org/w/index.php%3ftitle=User:Garfeild&action=edit&redlink=1
3133 https://en.wikipedia.org/wiki/User:Garfield_Garfield
3134 https://en.wikipedia.org/wiki/User:Gargaj
3135 https://en.wikipedia.org/wiki/User:Garo
3136 https://en.wikipedia.org/wiki/User:Garoth
3137 https://en.wikipedia.org/wiki/User:Garrettw87

2 Gary3138
1 Garydoranjr3139
3 Garygagliardi3140
11 Garyzx3141
2 Gasarch3142
1 Gascenciom983143
2 Gaspercat~enwiki3144
2 GateKeeper3145
1 Gatemansgc3146
1 Gatinha3147
11 Gatoatigrado3148
1 Gattom3149
6 Gauravxpress3150
2 Gautham tpsz3151
1 Gavenkoa3152
14 Gavia immer3153
1 Gawain Bolton3154
1 Gawi3155
1 Gazimoff3156
8 Gazpacho3157
1 Gbeauregard3158
2 Gbrose853159
1 Gbruin3160
1 Gbutler693161
2 Gcaee3162

3138 https://en.wikipedia.org/wiki/User:Gary
3139 https://en.wikipedia.org/w/index.php%3ftitle=User:Garydoranjr&action=edit&redlink=1
3140 https://en.wikipedia.org/w/index.php%3ftitle=User:Garygagliardi&action=edit&redlink=1
3141 https://en.wikipedia.org/wiki/User:Garyzx
3142 https://en.wikipedia.org/wiki/User:Gasarch
3143 https://en.wikipedia.org/w/index.php%3ftitle=User:Gascenciom98&action=edit&redlink=1
3144 https://en.wikipedia.org/w/index.php%3ftitle=User:Gaspercat~enwiki&action=edit&redlink=1
3145 https://en.wikipedia.org/wiki/User:GateKeeper
3146 https://en.wikipedia.org/wiki/User:Gatemansgc
3147 https://en.wikipedia.org/wiki/User:Gatinha
3148 https://en.wikipedia.org/wiki/User:Gatoatigrado
3149 https://en.wikipedia.org/wiki/User:Gattom
3150 https://en.wikipedia.org/w/index.php%3ftitle=User:Gauravxpress&action=edit&redlink=1
3151 https://en.wikipedia.org/w/index.php%3ftitle=User:Gautham_tpsz&action=edit&redlink=1
3152 https://en.wikipedia.org/wiki/User:Gavenkoa
3153 https://en.wikipedia.org/wiki/User:Gavia_immer
3154 https://en.wikipedia.org/wiki/User:Gawain_Bolton
3155 https://en.wikipedia.org/w/index.php%3ftitle=User:Gawi&action=edit&redlink=1
3156 https://en.wikipedia.org/wiki/User:Gazimoff
3157 https://en.wikipedia.org/wiki/User:Gazpacho
3158 https://en.wikipedia.org/w/index.php%3ftitle=User:Gbeauregard&action=edit&redlink=1
3159 https://en.wikipedia.org/w/index.php%3ftitle=User:Gbrose85&action=edit&redlink=1
3160 https://en.wikipedia.org/w/index.php%3ftitle=User:Gbruin&action=edit&redlink=1
3161 https://en.wikipedia.org/w/index.php%3ftitle=User:Gbutler69&action=edit&redlink=1
3162 https://en.wikipedia.org/w/index.php%3ftitle=User:Gcaee&action=edit&redlink=1

1 Gcarvelli3163
2 Gdamyanov3164
1 Gdavidp3165
3 Gdelente3166
2 Gdessy3167
2 Gdewilde3168
2 Gdm3169
1 Gdo013170
43 Gdr3171
1 Geeee3172
1 Geekdiva3173
2 Geeker873174
1 GeeksHaveFeelings3175
2 Gehenna15103176
1 Gelingvistoj3177
3 Gelwood3178
1 GenYesJV3179
1 Gene Nygaard3180
2 Gene Thomas3181
2 Gene Ward Smith3182
1 Gene Wilson3183
2 Gene.arboit3184
1 General Wesc3185
1 GeneralMac3186
1 GeneralizationsAreBad3187

3163 https://en.wikipedia.org/w/index.php%3ftitle=User:Gcarvelli&action=edit&redlink=1
3164 https://en.wikipedia.org/w/index.php%3ftitle=User:Gdamyanov&action=edit&redlink=1
3165 https://en.wikipedia.org/wiki/User:Gdavidp
3166 https://en.wikipedia.org/w/index.php%3ftitle=User:Gdelente&action=edit&redlink=1
3167 https://en.wikipedia.org/w/index.php%3ftitle=User:Gdessy&action=edit&redlink=1
3168 https://en.wikipedia.org/wiki/User:Gdewilde
3169 https://en.wikipedia.org/wiki/User:Gdm
3170 https://en.wikipedia.org/wiki/User:Gdo01
3171 https://en.wikipedia.org/wiki/User:Gdr
3172 https://en.wikipedia.org/wiki/User:Geeee
3173 https://en.wikipedia.org/wiki/User:Geekdiva
3174 https://en.wikipedia.org/wiki/User:Geeker87
3175 https://en.wikipedia.org/wiki/User:GeeksHaveFeelings
3176 https://en.wikipedia.org/wiki/User:Gehenna1510
3177 https://en.wikipedia.org/w/index.php%3ftitle=User:Gelingvistoj&action=edit&redlink=1
3178 https://en.wikipedia.org/wiki/User:Gelwood
3179 https://en.wikipedia.org/w/index.php%3ftitle=User:GenYesJV&action=edit&redlink=1
3180 https://en.wikipedia.org/wiki/User:Gene_Nygaard
3181 https://en.wikipedia.org/w/index.php%3ftitle=User:Gene_Thomas&action=edit&redlink=1
3182 https://en.wikipedia.org/wiki/User:Gene_Ward_Smith
3183 https://en.wikipedia.org/wiki/User:Gene_Wilson
3184 https://en.wikipedia.org/wiki/User:Gene.arboit
3185 https://en.wikipedia.org/wiki/User:General_Wesc
3186 https://en.wikipedia.org/w/index.php%3ftitle=User:GeneralMac&action=edit&redlink=1
3187 https://en.wikipedia.org/wiki/User:GeneralizationsAreBad

3 Gentleman wiki3188
11 Genusfour3189
1 Geoffadams3190
1 Geoffrey.landis3191
1 GeoffreyT20003192
2 Geoffreybernardo3193
1 Geoffrey~enwiki3194
2 Geofftech3195
19 Geolinkios3196
1 Geomatique3197
1 GeordieMcBain3198
1 George Burgess3199
1 George1003200
1 George126~enwiki3201
3 GeorgeBills3202
2 Georgiraichovgeorgiev3203
2 Geospizafortis3204
4 Gerakibot3205
13 Gerbrant3206
4 Gerel3207
1 Gerfriedc3208
1 GergM3209
6 Gergelypalla3210
1 Gerhardvalentin3211

3188 https://en.wikipedia.org/w/index.php%3ftitle=User:Gentleman_wiki&action=edit&redlink=1
3189 https://en.wikipedia.org/wiki/User:Genusfour
3190 https://en.wikipedia.org/w/index.php%3ftitle=User:Geoffadams&action=edit&redlink=1
3191 https://en.wikipedia.org/wiki/User:Geoffrey.landis
3192 https://en.wikipedia.org/wiki/User:GeoffreyT2000
3193 https://en.wikipedia.org/wiki/User:Geoffreybernardo
3194 https://en.wikipedia.org/wiki/User:Geoffrey~enwiki
3195 https://en.wikipedia.org/wiki/User:Geofftech
3196 https://en.wikipedia.org/w/index.php%3ftitle=User:Geolinkios&action=edit&redlink=1
3197 https://en.wikipedia.org/w/index.php%3ftitle=User:Geomatique&action=edit&redlink=1
3198 https://en.wikipedia.org/w/index.php%3ftitle=User:GeordieMcBain&action=edit&redlink=1
3199 https://en.wikipedia.org/wiki/User:George_Burgess
3200 https://en.wikipedia.org/wiki/User:George100
3201 https://en.wikipedia.org/w/index.php%3ftitle=User:George126~enwiki&action=edit&redlink=1
3202 https://en.wikipedia.org/wiki/User:GeorgeBills
3203 https://en.wikipedia.org/w/index.php%3ftitle=User:Georgiraichovgeorgiev&action=edit&redlink=1
3204 https://en.wikipedia.org/w/index.php%3ftitle=User:Geospizafortis&action=edit&redlink=1
3205 https://en.wikipedia.org/wiki/User:Gerakibot
3206 https://en.wikipedia.org/wiki/User:Gerbrant
3207 https://en.wikipedia.org/wiki/User:Gerel
3208 https://en.wikipedia.org/w/index.php%3ftitle=User:Gerfriedc&action=edit&redlink=1
3209 https://en.wikipedia.org/w/index.php%3ftitle=User:GergM&action=edit&redlink=1
3210 https://en.wikipedia.org/w/index.php%3ftitle=User:Gergelypalla&action=edit&redlink=1
3211 https://en.wikipedia.org/wiki/User:Gerhardvalentin

2 GermanJoe3212
1 GermanX3213
1 Germanrh3214
2 Germyb3215
2 Geron823216
6 Gerrit3217
1 Gerweck3218
14 Get Learnt3219
1 Getonyourfeet3220
3 GeypycGn3221
1 Gf uip3222
2 Gfis3223
2 Gfoley43224
1 Gfox883225
2 Gg4u3226
2 Ggia3227
1 Ghazer~enwiki3228
5 Ghettoblaster3229
3 Ghodsnia3230
5 Ghost303231
3 GhostML3232
1 GhostyFromTheMoon3233
4 GiM3234
1 Gianfranco3235
1 Giants20083236

3212 https://en.wikipedia.org/wiki/User:GermanJoe
3213 https://en.wikipedia.org/wiki/User:GermanX
3214 https://en.wikipedia.org/w/index.php%3ftitle=User:Germanrh&action=edit&redlink=1
3215 https://en.wikipedia.org/w/index.php%3ftitle=User:Germyb&action=edit&redlink=1
3216 https://en.wikipedia.org/w/index.php%3ftitle=User:Geron82&action=edit&redlink=1
3217 https://en.wikipedia.org/wiki/User:Gerrit
3218 https://en.wikipedia.org/w/index.php%3ftitle=User:Gerweck&action=edit&redlink=1
3219 https://en.wikipedia.org/wiki/User:Get_Learnt
3220 https://en.wikipedia.org/wiki/User:Getonyourfeet
3221 https://en.wikipedia.org/w/index.php%3ftitle=User:GeypycGn&action=edit&redlink=1
3222 https://en.wikipedia.org/w/index.php%3ftitle=User:Gf_uip&action=edit&redlink=1
3223 https://en.wikipedia.org/wiki/User:Gfis
3224 https://en.wikipedia.org/wiki/User:Gfoley4
3225 https://en.wikipedia.org/w/index.php%3ftitle=User:Gfox88&action=edit&redlink=1
3226 https://en.wikipedia.org/w/index.php%3ftitle=User:Gg4u&action=edit&redlink=1
3227 https://en.wikipedia.org/wiki/User:Ggia
3228 https://en.wikipedia.org/wiki/User:Ghazer~enwiki
3229 https://en.wikipedia.org/wiki/User:Ghettoblaster
3230 https://en.wikipedia.org/w/index.php%3ftitle=User:Ghodsnia&action=edit&redlink=1
3231 https://en.wikipedia.org/wiki/User:Ghost30
3232 https://en.wikipedia.org/w/index.php%3ftitle=User:GhostML&action=edit&redlink=1
3233 https://en.wikipedia.org/wiki/User:GhostyFromTheMoon
3234 https://en.wikipedia.org/wiki/User:GiM
3235 https://en.wikipedia.org/wiki/User:Gianfranco
3236 https://en.wikipedia.org/wiki/User:Giants2008

1 Gidonb3237
1 GiftOfmGabb3238
224 Giftlite3239
2 Gilderien3240
1 Gildos3241
38 Gilliam3242
1 Gillyweed3243
7 Gilo19693244
1 Gimco3245
1 GimliDotNet3246
4 Gimmetrow3247
3 Ginsuloft3248
1 GioeleBarabucci3249
1 Gioto3250
11 GiovanniSidwell3251
16 Giovatardu3252
5 Giraffedata3253
1 Gire 3pich20053254
1 Girlwithglasses3255
1 Girlwithgreeneyes3256
2 Girth Summit3257
1 GiveAFishABone3258
1 Gjbayes3259
1 Gjd0013260
2 Gklambauer3261

3237 https://en.wikipedia.org/wiki/User:Gidonb
3238 https://en.wikipedia.org/w/index.php%3ftitle=User:GiftOfmGabb&action=edit&redlink=1
3239 https://en.wikipedia.org/wiki/User:Giftlite
3240 https://en.wikipedia.org/wiki/User:Gilderien
3241 https://en.wikipedia.org/w/index.php%3ftitle=User:Gildos&action=edit&redlink=1
3242 https://en.wikipedia.org/wiki/User:Gilliam
3243 https://en.wikipedia.org/wiki/User:Gillyweed
3244 https://en.wikipedia.org/wiki/User:Gilo1969
3245 https://en.wikipedia.org/w/index.php%3ftitle=User:Gimco&action=edit&redlink=1
3246 https://en.wikipedia.org/wiki/User:GimliDotNet
3247 https://en.wikipedia.org/wiki/User:Gimmetrow
3248 https://en.wikipedia.org/wiki/User:Ginsuloft
3249 https://en.wikipedia.org/wiki/User:GioeleBarabucci
3250 https://en.wikipedia.org/wiki/User:Gioto
3251 https://en.wikipedia.org/wiki/User:GiovanniSidwell
3252 https://en.wikipedia.org/w/index.php%3ftitle=User:Giovatardu&action=edit&redlink=1
3253 https://en.wikipedia.org/wiki/User:Giraffedata
3254 https://en.wikipedia.org/wiki/User:Gire_3pich2005
3255 https://en.wikipedia.org/w/index.php%3ftitle=User:Girlwithglasses&action=edit&redlink=1
3256 https://en.wikipedia.org/wiki/User:Girlwithgreeneyes
3257 https://en.wikipedia.org/wiki/User:Girth_Summit
3258 https://en.wikipedia.org/w/index.php%3ftitle=User:GiveAFishABone&action=edit&redlink=1
3259 https://en.wikipedia.org/w/index.php%3ftitle=User:Gjbayes&action=edit&redlink=1
3260 https://en.wikipedia.org/wiki/User:Gjd001
3261 https://en.wikipedia.org/w/index.php%3ftitle=User:Gklambauer&action=edit&redlink=1

1 Glacialfox3262
3 Glaisher3263
1 Glane233264
1 Glassmage3265
1 Glengordon013266
3 Glenn3267
13 GlennLawyer3268
1 Glenstamp3269
6 Glinos3270
1 Globbet3271
2 Glomerule3272
1 Glow0083273
1148 Glrx3274
1 Gluttton3275
1 Gmagkots3276
1 Gmarsden3277
2 Gmaxwell3278
3 Gmazeroff3279
1 Gmelli3280
1 Gmentat3281
6 Gmharhar3282
1 Gmile~enwiki3283
1 Gms3284
1 Gnaggnoyil3285
1 Gnomz0073286

3262 https://en.wikipedia.org/wiki/User:Glacialfox
3263 https://en.wikipedia.org/wiki/User:Glaisher
3264 https://en.wikipedia.org/wiki/User:Glane23
3265 https://en.wikipedia.org/wiki/User:Glassmage
3266 https://en.wikipedia.org/wiki/User:Glengordon01
3267 https://en.wikipedia.org/wiki/User:Glenn
3268 https://en.wikipedia.org/wiki/User:GlennLawyer
3269 https://en.wikipedia.org/w/index.php%3ftitle=User:Glenstamp&action=edit&redlink=1
3270 https://en.wikipedia.org/wiki/User:Glinos
3271 https://en.wikipedia.org/wiki/User:Globbet
3272 https://en.wikipedia.org/w/index.php%3ftitle=User:Glomerule&action=edit&redlink=1
3273 https://en.wikipedia.org/w/index.php%3ftitle=User:Glow008&action=edit&redlink=1
3274 https://en.wikipedia.org/wiki/User:Glrx
3275 https://en.wikipedia.org/w/index.php%3ftitle=User:Gluttton&action=edit&redlink=1
3276 https://en.wikipedia.org/w/index.php%3ftitle=User:Gmagkots&action=edit&redlink=1
3277 https://en.wikipedia.org/wiki/User:Gmarsden
3278 https://en.wikipedia.org/wiki/User:Gmaxwell
3279 https://en.wikipedia.org/wiki/User:Gmazeroff
3280 https://en.wikipedia.org/wiki/User:Gmelli
3281 https://en.wikipedia.org/w/index.php%3ftitle=User:Gmentat&action=edit&redlink=1
3282 https://en.wikipedia.org/w/index.php%3ftitle=User:Gmharhar&action=edit&redlink=1
3283 https://en.wikipedia.org/w/index.php%3ftitle=User:Gmile~enwiki&action=edit&redlink=1
3284 https://en.wikipedia.org/wiki/User:Gms
3285 https://en.wikipedia.org/w/index.php%3ftitle=User:Gnaggnoyil&action=edit&redlink=1
3286 https://en.wikipedia.org/wiki/User:Gnomz007

1 Gnusbiz3287
1 GoCooL3288
1 GoShow3289
1 Goatasaur3290
2 Gogo Dodo3291
1 Gogobera3292
1 Gogolwold3293
6 GoingBatty3294
1 Gojun0773295
1 Gokayhuz3296
1 Golbez3297
1 Gold44443298
1 Goldfndr3299
1 Golle953300
1 Gonzolito3301
2 Gooaccname3302
3 Goochelaar3303
1 GoodDay3304
1 Gooeyforms3305
8 Googl3306
2 Googol303307
5 Gopalkrishnan833308
1 GordonFremen3309
2 Gordonnovak3310
3 GorillaWarfare3311

3287 https://en.wikipedia.org/wiki/User:Gnusbiz
3288 https://en.wikipedia.org/w/index.php%3ftitle=User:GoCooL&action=edit&redlink=1
3289 https://en.wikipedia.org/wiki/User:GoShow
3290 https://en.wikipedia.org/wiki/User:Goatasaur
3291 https://en.wikipedia.org/wiki/User:Gogo_Dodo
3292 https://en.wikipedia.org/wiki/User:Gogobera
3293 https://en.wikipedia.org/wiki/User:Gogolwold
3294 https://en.wikipedia.org/wiki/User:GoingBatty
3295 https://en.wikipedia.org/wiki/User:Gojun077
3296 https://en.wikipedia.org/wiki/User:Gokayhuz
3297 https://en.wikipedia.org/wiki/User:Golbez
3298 https://en.wikipedia.org/w/index.php%3ftitle=User:Gold4444&action=edit&redlink=1
3299 https://en.wikipedia.org/wiki/User:Goldfndr
3300 https://en.wikipedia.org/w/index.php%3ftitle=User:Golle95&action=edit&redlink=1
3301 https://en.wikipedia.org/wiki/User:Gonzolito
3302 https://en.wikipedia.org/w/index.php%3ftitle=User:Gooaccname&action=edit&redlink=1
3303 https://en.wikipedia.org/wiki/User:Goochelaar
3304 https://en.wikipedia.org/wiki/User:GoodDay
3305 https://en.wikipedia.org/w/index.php%3ftitle=User:Gooeyforms&action=edit&redlink=1
3306 https://en.wikipedia.org/wiki/User:Googl
3307 https://en.wikipedia.org/wiki/User:Googol30
3308 https://en.wikipedia.org/w/index.php%3ftitle=User:Gopalkrishnan83&action=edit&redlink=1
3309 https://en.wikipedia.org/wiki/User:GordonFremen
3310 https://en.wikipedia.org/w/index.php%3ftitle=User:Gordonnovak&action=edit&redlink=1
3311 https://en.wikipedia.org/wiki/User:GorillaWarfare

1 Gorrri3312
1 Gorthian3313
1 Gou72143093314
1 Goutamrocks3315
1 Gpollock3316
1 Gpraveenkumar53317
1 Gpvos3318
1 Gr pbi3319
3 GrEp3320
1 Graboy3321
1 Gracefool3322
3 Graeme Bartlett3323
6 GraemeL3324
1 GraemeMcRae3325
2 Grafen3326
1 Gragragra3327
40 Graham873328
1 GrammarAndShape3329
5 Grammarbot3330
3 Grantstevens3331
2 Grapesoda223332
2 Graph Theory page blanker3333
1 GraphTheoryPwns3334
7 Graue3335

3312 https://en.wikipedia.org/wiki/User:Gorrri
3313 https://en.wikipedia.org/wiki/User:Gorthian
3314 https://en.wikipedia.org/w/index.php%3ftitle=User:Gou7214309&action=edit&redlink=1
3315 https://en.wikipedia.org/w/index.php%3ftitle=User:Goutamrocks&action=edit&redlink=1
3316 https://en.wikipedia.org/wiki/User:Gpollock
https://en.wikipedia.org/w/index.php%3ftitle=User:Gpraveenkumar5&action=edit&redlink=
3317
1
3318 https://en.wikipedia.org/wiki/User:Gpvos
3319 https://en.wikipedia.org/w/index.php%3ftitle=User:Gr_pbi&action=edit&redlink=1
3320 https://en.wikipedia.org/wiki/User:GrEp
3321 https://en.wikipedia.org/w/index.php%3ftitle=User:Graboy&action=edit&redlink=1
3322 https://en.wikipedia.org/wiki/User:Gracefool
3323 https://en.wikipedia.org/wiki/User:Graeme_Bartlett
3324 https://en.wikipedia.org/wiki/User:GraemeL
3325 https://en.wikipedia.org/wiki/User:GraemeMcRae
3326 https://en.wikipedia.org/wiki/User:Grafen
3327 https://en.wikipedia.org/w/index.php%3ftitle=User:Gragragra&action=edit&redlink=1
3328 https://en.wikipedia.org/wiki/User:Graham87
3329 https://en.wikipedia.org/w/index.php%3ftitle=User:GrammarAndShape&action=edit&redlink=1
3330 https://en.wikipedia.org/wiki/User:Grammarbot
3331 https://en.wikipedia.org/w/index.php%3ftitle=User:Grantstevens&action=edit&redlink=1
3332 https://en.wikipedia.org/wiki/User:Grapesoda22
3333 https://en.wikipedia.org/w/index.php%3ftitle=User:Graph_Theory_page_blanker&action=edit&redlink=1
3334 https://en.wikipedia.org/w/index.php%3ftitle=User:GraphTheoryPwns&action=edit&redlink=1
3335 https://en.wikipedia.org/wiki/User:Graue

1 GravityUp3336
1 Grayfell3337
1 GreatWhiteNortherner3338
3 Greatdiwei3339
21 GreenC bot3340
1 GreenWeasel113341
1 Greenleaf~enwiki3342
2 Greenmatter3343
1 Greenplastictree3344
7 Greenrd3345
1 GregRM3346
3 Gregbard3347
11 Gregdu3348
1 Gregman23349
56 GregorB3350
4 Gregsinclair3351
1 Gremagor3352
1 Gremel1233353
6 Grendelkhan3354
1 Grey ghost3355
2 Grick3356
1 Grim233357
1 GrimFang43358
1 Grimguard3359
2 GrinBot~enwiki3360

3336 https://en.wikipedia.org/wiki/User:GravityUp
3337 https://en.wikipedia.org/wiki/User:Grayfell
3338 https://en.wikipedia.org/wiki/User:GreatWhiteNortherner
3339 https://en.wikipedia.org/wiki/User:Greatdiwei
3340 https://en.wikipedia.org/wiki/User:GreenC_bot
3341 https://en.wikipedia.org/wiki/User:GreenWeasel11
3342 https://en.wikipedia.org/wiki/User:Greenleaf~enwiki
3343 https://en.wikipedia.org/wiki/User:Greenmatter
3344 https://en.wikipedia.org/wiki/User:Greenplastictree
3345 https://en.wikipedia.org/wiki/User:Greenrd
3346 https://en.wikipedia.org/wiki/User:GregRM
3347 https://en.wikipedia.org/wiki/User:Gregbard
3348 https://en.wikipedia.org/wiki/User:Gregdu
3349 https://en.wikipedia.org/w/index.php%3ftitle=User:Gregman2&action=edit&redlink=1
3350 https://en.wikipedia.org/wiki/User:GregorB
3351 https://en.wikipedia.org/wiki/User:Gregsinclair
3352 https://en.wikipedia.org/w/index.php%3ftitle=User:Gremagor&action=edit&redlink=1
3353 https://en.wikipedia.org/wiki/User:Gremel123
3354 https://en.wikipedia.org/wiki/User:Grendelkhan
3355 https://en.wikipedia.org/wiki/User:Grey_ghost
3356 https://en.wikipedia.org/w/index.php%3ftitle=User:Grick&action=edit&redlink=1
3357 https://en.wikipedia.org/wiki/User:Grim23
3358 https://en.wikipedia.org/wiki/User:GrimFang4
3359 https://en.wikipedia.org/w/index.php%3ftitle=User:Grimguard&action=edit&redlink=1
3360 https://en.wikipedia.org/wiki/User:GrinBot~enwiki

1 Grinning Fool3361
1 Groffles3362
1 Grog~enwiki3363
6 GromXXVII3364
3 Gronk Oz3365
2 Grotendeels Onschadelijk3366
5 GrouchoBot3367
13 Groupthink3368
1 GroveGuy3369
1 Grover cleveland3370
1 Groxx3371
1 Grsbmd3372
1 Gruauder3373
2 Grubber3374
1 Grumpyland3375
19 GrundyCamellia3376
1 Grunt3377
3 Gryspnik3378
2 Gråbergs Gråa Sång3379
1 Gsantor3380
2 Gscshoyru3381
1 Gtcostello3382
1 Gtong323383
1 Guahnala3384
11 Guanabot3385

3361 https://en.wikipedia.org/wiki/User:Grinning_Fool
3362 https://en.wikipedia.org/w/index.php%3ftitle=User:Groffles&action=edit&redlink=1
3363 https://en.wikipedia.org/w/index.php%3ftitle=User:Grog~enwiki&action=edit&redlink=1
3364 https://en.wikipedia.org/wiki/User:GromXXVII
3365 https://en.wikipedia.org/wiki/User:Gronk_Oz
3366 https://en.wikipedia.org/wiki/User:Grotendeels_Onschadelijk
3367 https://en.wikipedia.org/wiki/User:GrouchoBot
3368 https://en.wikipedia.org/wiki/User:Groupthink
3369 https://en.wikipedia.org/wiki/User:GroveGuy
3370 https://en.wikipedia.org/wiki/User:Grover_cleveland
3371 https://en.wikipedia.org/wiki/User:Groxx
3372 https://en.wikipedia.org/w/index.php%3ftitle=User:Grsbmd&action=edit&redlink=1
3373 https://en.wikipedia.org/wiki/User:Gruauder
3374 https://en.wikipedia.org/wiki/User:Grubber
3375 https://en.wikipedia.org/wiki/User:Grumpyland
3376 https://en.wikipedia.org/wiki/User:GrundyCamellia
3377 https://en.wikipedia.org/wiki/User:Grunt
3378 https://en.wikipedia.org/wiki/User:Gryspnik
3379 https://en.wikipedia.org/wiki/User:Gr%25C3%25A5bergs_Gr%25C3%25A5a_S%25C3%25A5ng
3380 https://en.wikipedia.org/w/index.php%3ftitle=User:Gsantor&action=edit&redlink=1
3381 https://en.wikipedia.org/wiki/User:Gscshoyru
3382 https://en.wikipedia.org/w/index.php%3ftitle=User:Gtcostello&action=edit&redlink=1
3383 https://en.wikipedia.org/w/index.php%3ftitle=User:Gtong32&action=edit&redlink=1
3384 https://en.wikipedia.org/w/index.php%3ftitle=User:Guahnala&action=edit&redlink=1
3385 https://en.wikipedia.org/wiki/User:Guanabot

3 Guanaco3386
3 Guillaume23033387
6 GulDan3388
11 Gulliveig3389
1 Gulumeemee3390
1 Gumby6003391
1 Gundersen533392
1 Guneshwor3393
1 Gunnar Larsson3394
2 Guoguo123395
2 Guppyfinsoup3396
3 Guray90003397
3 Gurch3398
2 Gurpreetkaur883399
2 Gursimransinghhanspal3400
1 Gurt Posh3401
2 Gustavb3402
3 Gustedt3403
1 Guturu Bhuvanamitra3404
3 Gutworth3405
13 Gutza3406
18 Guy Macon3407
2 Guy vandegrift3408
1 Guybrush3409
1 Guysapire3410

3386 https://en.wikipedia.org/wiki/User:Guanaco
3387 https://en.wikipedia.org/w/index.php%3ftitle=User:Guillaume2303&action=edit&redlink=1
3388 https://en.wikipedia.org/wiki/User:GulDan
3389 https://en.wikipedia.org/w/index.php%3ftitle=User:Gulliveig&action=edit&redlink=1
3390 https://en.wikipedia.org/wiki/User:Gulumeemee
3391 https://en.wikipedia.org/w/index.php%3ftitle=User:Gumby600&action=edit&redlink=1
3392 https://en.wikipedia.org/wiki/User:Gundersen53
3393 https://en.wikipedia.org/wiki/User:Guneshwor
3394 https://en.wikipedia.org/wiki/User:Gunnar_Larsson
3395 https://en.wikipedia.org/wiki/User:Guoguo12
3396 https://en.wikipedia.org/wiki/User:Guppyfinsoup
3397 https://en.wikipedia.org/w/index.php%3ftitle=User:Guray9000&action=edit&redlink=1
3398 https://en.wikipedia.org/wiki/User:Gurch
3399 https://en.wikipedia.org/w/index.php%3ftitle=User:Gurpreetkaur88&action=edit&redlink=1
3400 https://en.wikipedia.org/w/index.php%3ftitle=User:Gursimransinghhanspal&action=edit&redlink=1
3401 https://en.wikipedia.org/wiki/User:Gurt_Posh
3402 https://en.wikipedia.org/wiki/User:Gustavb
3403 https://en.wikipedia.org/w/index.php%3ftitle=User:Gustedt&action=edit&redlink=1
3404 https://en.wikipedia.org/w/index.php%3ftitle=User:Guturu_Bhuvanamitra&action=edit&redlink=1
3405 https://en.wikipedia.org/wiki/User:Gutworth
3406 https://en.wikipedia.org/wiki/User:Gutza
3407 https://en.wikipedia.org/wiki/User:Guy_Macon
3408 https://en.wikipedia.org/wiki/User:Guy_vandegrift
3409 https://en.wikipedia.org/wiki/User:Guybrush
3410 https://en.wikipedia.org/w/index.php%3ftitle=User:Guysapire&action=edit&redlink=1

2 Guywan3411
1 Guzzrocha3412
26 Gwern3413
2 Gwernol3414
1 Gwg93415
1 Gwzz3416
1 Gyan3417
1 Gyllstromk3418
1 Gylpm3419
1 Gymel3420
1 Gzabers3421
1 Gökhan3422
10 GünniX3423
4 H.ehsaan3424
2 H3llBot3425
1 H3nry3426
1 HFuruseth3427
10 HJ Mitchell3428
1 HMSLavender3429
5 HMSSolent3430
3 HPA3431
2 HPRappaport3432
2 HRoestBot3433
1 HXZBZSHDHDHED3434
1 Habboud13435

3411 https://en.wikipedia.org/wiki/User:Guywan
3412 https://en.wikipedia.org/w/index.php%3ftitle=User:Guzzrocha&action=edit&redlink=1
3413 https://en.wikipedia.org/wiki/User:Gwern
3414 https://en.wikipedia.org/wiki/User:Gwernol
3415 https://en.wikipedia.org/w/index.php%3ftitle=User:Gwg9&action=edit&redlink=1
3416 https://en.wikipedia.org/w/index.php%3ftitle=User:Gwzz&action=edit&redlink=1
3417 https://en.wikipedia.org/wiki/User:Gyan
3418 https://en.wikipedia.org/wiki/User:Gyllstromk
3419 https://en.wikipedia.org/w/index.php%3ftitle=User:Gylpm&action=edit&redlink=1
3420 https://en.wikipedia.org/wiki/User:Gymel
3421 https://en.wikipedia.org/wiki/User:Gzabers
3422 https://en.wikipedia.org/wiki/User:G%25C3%25B6khan
3423 https://en.wikipedia.org/wiki/User:G%25C3%25BCnniX
3424 https://en.wikipedia.org/wiki/User:H.ehsaan
3425 https://en.wikipedia.org/wiki/User:H3llBot
3426 https://en.wikipedia.org/w/index.php%3ftitle=User:H3nry&action=edit&redlink=1
3427 https://en.wikipedia.org/wiki/User:HFuruseth
3428 https://en.wikipedia.org/wiki/User:HJ_Mitchell
3429 https://en.wikipedia.org/wiki/User:HMSLavender
3430 https://en.wikipedia.org/wiki/User:HMSSolent
3431 https://en.wikipedia.org/wiki/User:HPA
3432 https://en.wikipedia.org/wiki/User:HPRappaport
3433 https://en.wikipedia.org/wiki/User:HRoestBot
3434 https://en.wikipedia.org/w/index.php%3ftitle=User:HXZBZSHDHDHED&action=edit&redlink=1
3435 https://en.wikipedia.org/w/index.php%3ftitle=User:Habboud1&action=edit&redlink=1

1 Habib cse ruet3436
7 Hadal3437
2 Hadrianheugh3438
4 HaeB3439
5 Haeinous3440
1 Haeynzen3441
1 Hagerman3442
6 Hahahafr3443
7 Haham hanuka3444
2 Haidang0013445
1 Haigee20073446
1 Haikz3447
1 Hair Commodore3448
1 Hairhorn3449
19 Hairy Dude3450
3 HairyFotr3451
1 Haiviet~enwiki3452
1 Hajijohn3453
1 Hajile 003454
4 Hakanai3455
1 Hakanhaberdar3456
1 Haker4o3457
1 Hakkinen3458
1 HalHal3459
4 Halawu3460

3436 https://en.wikipedia.org/w/index.php%3ftitle=User:Habib_cse_ruet&action=edit&redlink=1
3437 https://en.wikipedia.org/wiki/User:Hadal
3438 https://en.wikipedia.org/wiki/User:Hadrianheugh
3439 https://en.wikipedia.org/wiki/User:HaeB
3440 https://en.wikipedia.org/wiki/User:Haeinous
3441 https://en.wikipedia.org/w/index.php%3ftitle=User:Haeynzen&action=edit&redlink=1
3442 https://en.wikipedia.org/wiki/User:Hagerman
3443 https://en.wikipedia.org/w/index.php%3ftitle=User:Hahahafr&action=edit&redlink=1
3444 https://en.wikipedia.org/wiki/User:Haham_hanuka
3445 https://en.wikipedia.org/w/index.php%3ftitle=User:Haidang001&action=edit&redlink=1
3446 https://en.wikipedia.org/w/index.php%3ftitle=User:Haigee2007&action=edit&redlink=1
3447 https://en.wikipedia.org/wiki/User:Haikz
3448 https://en.wikipedia.org/wiki/User:Hair_Commodore
3449 https://en.wikipedia.org/wiki/User:Hairhorn
3450 https://en.wikipedia.org/wiki/User:Hairy_Dude
3451 https://en.wikipedia.org/wiki/User:HairyFotr
3452 https://en.wikipedia.org/w/index.php%3ftitle=User:Haiviet~enwiki&action=edit&redlink=1
3453 https://en.wikipedia.org/w/index.php%3ftitle=User:Hajijohn&action=edit&redlink=1
3454 https://en.wikipedia.org/w/index.php%3ftitle=User:Hajile_00&action=edit&redlink=1
3455 https://en.wikipedia.org/w/index.php%3ftitle=User:Hakanai&action=edit&redlink=1
3456 https://en.wikipedia.org/w/index.php%3ftitle=User:Hakanhaberdar&action=edit&redlink=1
3457 https://en.wikipedia.org/w/index.php%3ftitle=User:Haker4o&action=edit&redlink=1
3458 https://en.wikipedia.org/wiki/User:Hakkinen
3459 https://en.wikipedia.org/wiki/User:HalHal
3460 https://en.wikipedia.org/w/index.php%3ftitle=User:Halawu&action=edit&redlink=1


1 Halcyonhazard3461
1 HalfShadow3462
1 HalfW3463
2 Hallows AG3464
3 Ham Pastrami3465
2 Hamaad.s3466
1 HamburgerRadio3467
1 Hamed.moeeni3468
2 Hammadhaleem3469
1 Hamsterlopithecus3470
3 Hamza18863471
1 HandMnr13472
1 Handige Harrie3473
2 HandsomeFella3474
1 HanielBarbosa3475
2 Hankwang3476
1 HannahBGSM3477
2 Hannan12123478
2 Hannasnow3479
3 Hannes Eder3480
5 Hans Adler3481
2 Hansamurai3482
2 Hao2lian3483
6 Haoyao3484
2 Haparsi3485

3461 https://en.wikipedia.org/wiki/User:Halcyonhazard
3462 https://en.wikipedia.org/wiki/User:HalfShadow
3463 https://en.wikipedia.org/wiki/User:HalfW
3464 https://en.wikipedia.org/wiki/User:Hallows_AG
3465 https://en.wikipedia.org/wiki/User:Ham_Pastrami
3466 https://en.wikipedia.org/w/index.php%3ftitle=User:Hamaad.s&action=edit&redlink=1
3467 https://en.wikipedia.org/wiki/User:HamburgerRadio
3468 https://en.wikipedia.org/w/index.php%3ftitle=User:Hamed.moeeni&action=edit&redlink=1
3469 https://en.wikipedia.org/wiki/User:Hammadhaleem
3470 https://en.wikipedia.org/wiki/User:Hamsterlopithecus
3471 https://en.wikipedia.org/w/index.php%3ftitle=User:Hamza1886&action=edit&redlink=1
3472 https://en.wikipedia.org/w/index.php%3ftitle=User:HandMnr1&action=edit&redlink=1
3473 https://en.wikipedia.org/w/index.php%3ftitle=User:Handige_Harrie&action=edit&redlink=1
3474 https://en.wikipedia.org/wiki/User:HandsomeFella
3475 https://en.wikipedia.org/w/index.php%3ftitle=User:HanielBarbosa&action=edit&redlink=1
3476 https://en.wikipedia.org/wiki/User:Hankwang
3477 https://en.wikipedia.org/w/index.php%3ftitle=User:HannahBGSM&action=edit&redlink=1
3478 https://en.wikipedia.org/w/index.php%3ftitle=User:Hannan1212&action=edit&redlink=1
3479 https://en.wikipedia.org/wiki/User:Hannasnow
3480 https://en.wikipedia.org/wiki/User:Hannes_Eder
3481 https://en.wikipedia.org/w/index.php%3ftitle=User:Hans_Adler&action=edit&redlink=1
3482 https://en.wikipedia.org/wiki/User:Hansamurai
3483 https://en.wikipedia.org/wiki/User:Hao2lian
3484 https://en.wikipedia.org/w/index.php%3ftitle=User:Haoyao&action=edit&redlink=1
3485 https://en.wikipedia.org/w/index.php%3ftitle=User:Haparsi&action=edit&redlink=1


3 Happybunny953486
3 Happypal3487
9 Happyuk3488
1 Hardmath3489
3 Hari3490
1 Hari63893491
13 Harish victory3492
7 Hariva3493
3 Harmil3494
1 Harp3495
1 Harpreet Osahan3496
4 Harrigan3497
3 Harrisonmetz3498
1 Harriv3499
1 Harro3500
2 Harry0xBd3501
1 Harryboyles3502
2 HarshKhatore3503
1 HarshalVTripathi3504
2 Harshaljahagirdar3505
5 Harshitm263506
1 Harthur3507
3 Hart~enwiki3508
2 Harvi0043509

3486 https://en.wikipedia.org/w/index.php%3ftitle=User:Happybunny95&action=edit&redlink=1
3487 https://en.wikipedia.org/wiki/User:Happypal
3488 https://en.wikipedia.org/w/index.php%3ftitle=User:Happyuk&action=edit&redlink=1
3489 https://en.wikipedia.org/w/index.php%3ftitle=User:Hardmath&action=edit&redlink=1
3490 https://en.wikipedia.org/wiki/User:Hari
3491 https://en.wikipedia.org/wiki/User:Hari6389
3492 https://en.wikipedia.org/w/index.php%3ftitle=User:Harish_victory&action=edit&redlink=1
3493 https://en.wikipedia.org/wiki/User:Hariva
3494 https://en.wikipedia.org/wiki/User:Harmil
3495 https://en.wikipedia.org/wiki/User:Harp
3496 https://en.wikipedia.org/w/index.php%3ftitle=User:Harpreet_Osahan&action=edit&redlink=1
3497 https://en.wikipedia.org/wiki/User:Harrigan
3498 https://en.wikipedia.org/w/index.php%3ftitle=User:Harrisonmetz&action=edit&redlink=1
3499 https://en.wikipedia.org/wiki/User:Harriv
3500 https://en.wikipedia.org/w/index.php%3ftitle=User:Harro&action=edit&redlink=1
3501 https://en.wikipedia.org/wiki/User:Harry0xBd
3502 https://en.wikipedia.org/wiki/User:Harryboyles
3503 https://en.wikipedia.org/w/index.php%3ftitle=User:HarshKhatore&action=edit&redlink=1
3504 https://en.wikipedia.org/w/index.php%3ftitle=User:HarshalVTripathi&action=edit&redlink=1
3505 https://en.wikipedia.org/w/index.php%3ftitle=User:Harshaljahagirdar&action=edit&redlink=1
3506 https://en.wikipedia.org/wiki/User:Harshitm26
3507 https://en.wikipedia.org/w/index.php%3ftitle=User:Harthur&action=edit&redlink=1
3508 https://en.wikipedia.org/w/index.php%3ftitle=User:Hart~enwiki&action=edit&redlink=1
3509 https://en.wikipedia.org/w/index.php%3ftitle=User:Harvi004&action=edit&redlink=1


1 Hasanadnantaha3510
2 Hasanjamil3511
6 Hashar3512
12 HasharBot~enwiki3513
2 Hashbrowncipher3514
2 Hashproduct3515
1 Hasive3516
1 Hate123veteran3517
1 Haterade1113518
3 Hathawayc3519
1 Hatmatbbat103520
1 Havanafreestone3521
2 Havardk3522
1 Hawk7773523
1 Haxwell3524
1 Hayazin3525
1 HaydenWong3526
2 Hayman303527
1 Hayral~enwiki3528
1 Hbruhn3529
1 Hcethatsme3530
7 Hcsradek3531
1 Hdanak3532
6 Hdante3533
1 Hdc11123534

3510 https://en.wikipedia.org/wiki/User:Hasanadnantaha
3511 https://en.wikipedia.org/w/index.php%3ftitle=User:Hasanjamil&action=edit&redlink=1
3512 https://en.wikipedia.org/wiki/User:Hashar
3513 https://en.wikipedia.org/wiki/User:HasharBot~enwiki
3514 https://en.wikipedia.org/wiki/User:Hashbrowncipher
3515 https://en.wikipedia.org/wiki/User:Hashproduct
3516 https://en.wikipedia.org/wiki/User:Hasive
3517 https://en.wikipedia.org/w/index.php%3ftitle=User:Hate123veteran&action=edit&redlink=1
3518 https://en.wikipedia.org/w/index.php%3ftitle=User:Haterade111&action=edit&redlink=1
3519 https://en.wikipedia.org/wiki/User:Hathawayc
3520 https://en.wikipedia.org/wiki/User:Hatmatbbat10
3521 https://en.wikipedia.org/wiki/User:Havanafreestone
3522 https://en.wikipedia.org/wiki/User:Havardk
3523 https://en.wikipedia.org/wiki/User:Hawk777
3524 https://en.wikipedia.org/wiki/User:Haxwell
3525 https://en.wikipedia.org/wiki/User:Hayazin
3526 https://en.wikipedia.org/wiki/User:HaydenWong
3527 https://en.wikipedia.org/wiki/User:Hayman30
3528 https://en.wikipedia.org/w/index.php%3ftitle=User:Hayral~enwiki&action=edit&redlink=1
3529 https://en.wikipedia.org/wiki/User:Hbruhn
3530 https://en.wikipedia.org/wiki/User:Hcethatsme
3531 https://en.wikipedia.org/wiki/User:Hcsradek
3532 https://en.wikipedia.org/w/index.php%3ftitle=User:Hdanak&action=edit&redlink=1
3533 https://en.wikipedia.org/wiki/User:Hdante
3534 https://en.wikipedia.org/w/index.php%3ftitle=User:Hdc1112&action=edit&redlink=1


1 He863535
2 HeMath3536
4 Head3537
96 Headbomb3538
1 Headlessplatter3539
2 HebrewHammerTime3540
3 Hede20003541
1 Hefo~enwiki3542
1 Hegariz3543
5 Heineman3544
1 Heliac3545
1 Helios3546
2 Helios2k63547
6 Helix843548
4 Hell1123423549
1 Hell114213550
8 Hellknowz3551
1 Helohe3552
3 HelpUsStopSpam3553
46 Helpful Pixie Bot3554
1 Hemanshu3555
6 Hemant19cse3556
1 Henke373557
2 Hennerhubel3558
14 Henning Makholm3559

3535 https://en.wikipedia.org/w/index.php%3ftitle=User:He86&action=edit&redlink=1
3536 https://en.wikipedia.org/w/index.php%3ftitle=User:HeMath&action=edit&redlink=1
3537 https://en.wikipedia.org/wiki/User:Head
3538 https://en.wikipedia.org/wiki/User:Headbomb
3539 https://en.wikipedia.org/w/index.php%3ftitle=User:Headlessplatter&action=edit&redlink=1
3540 https://en.wikipedia.org/wiki/User:HebrewHammerTime
3541 https://en.wikipedia.org/wiki/User:Hede2000
3542 https://en.wikipedia.org/wiki/User:Hefo~enwiki
3543 https://en.wikipedia.org/w/index.php%3ftitle=User:Hegariz&action=edit&redlink=1
3544 https://en.wikipedia.org/w/index.php%3ftitle=User:Heineman&action=edit&redlink=1
3545 https://en.wikipedia.org/wiki/User:Heliac
3546 https://en.wikipedia.org/wiki/User:Helios
3547 https://en.wikipedia.org/w/index.php%3ftitle=User:Helios2k6&action=edit&redlink=1
3548 https://en.wikipedia.org/wiki/User:Helix84
3549 https://en.wikipedia.org/w/index.php%3ftitle=User:Hell112342&action=edit&redlink=1
3550 https://en.wikipedia.org/w/index.php%3ftitle=User:Hell11421&action=edit&redlink=1
3551 https://en.wikipedia.org/wiki/User:Hellknowz
3552 https://en.wikipedia.org/wiki/User:Helohe
3553 https://en.wikipedia.org/wiki/User:HelpUsStopSpam
3554 https://en.wikipedia.org/wiki/User:Helpful_Pixie_Bot
3555 https://en.wikipedia.org/wiki/User:Hemanshu
3556 https://en.wikipedia.org/wiki/User:Hemant19cse
3557 https://en.wikipedia.org/w/index.php%3ftitle=User:Henke37&action=edit&redlink=1
3558 https://en.wikipedia.org/wiki/User:Hennerhubel
3559 https://en.wikipedia.org/wiki/User:Henning_Makholm


1 HenningFernau3560
2 HenningThielemann3561
1 Henry Delforn (old)3562
1 HenryLi3563
7 Henrygb3564
1 Henryksloan3565
1 Henryy3213566
3 Hephaestos3567
3 Herbee3568
1 Herbstein3569
1 HerculeBot3570
5 HereToHelp3571
70 Hermel3572
1 Hernan mvs3573
2 Heron3574
3 Herry123575
1 Herry12343576
1 Herry431133577
1 Hertz18883578
1 Hesamsaberi3579
22 Hetori3580
2 Heyzeuss3581
1 Hfalc3582
6 Hfastedge3583
1 Hftf3584

3560 https://en.wikipedia.org/w/index.php%3ftitle=User:HenningFernau&action=edit&redlink=1
3561 https://en.wikipedia.org/w/index.php%3ftitle=User:HenningThielemann&action=edit&redlink=1
3562 https://en.wikipedia.org/wiki/User:Henry_Delforn_(old)
3563 https://en.wikipedia.org/wiki/User:HenryLi
3564 https://en.wikipedia.org/wiki/User:Henrygb
3565 https://en.wikipedia.org/w/index.php%3ftitle=User:Henryksloan&action=edit&redlink=1
3566 https://en.wikipedia.org/w/index.php%3ftitle=User:Henryy321&action=edit&redlink=1
3567 https://en.wikipedia.org/wiki/User:Hephaestos
3568 https://en.wikipedia.org/wiki/User:Herbee
3569 https://en.wikipedia.org/w/index.php%3ftitle=User:Herbstein&action=edit&redlink=1
3570 https://en.wikipedia.org/wiki/User:HerculeBot
3571 https://en.wikipedia.org/wiki/User:HereToHelp
3572 https://en.wikipedia.org/wiki/User:Hermel
3573 https://en.wikipedia.org/w/index.php%3ftitle=User:Hernan_mvs&action=edit&redlink=1
3574 https://en.wikipedia.org/wiki/User:Heron
3575 https://en.wikipedia.org/w/index.php%3ftitle=User:Herry12&action=edit&redlink=1
3576 https://en.wikipedia.org/w/index.php%3ftitle=User:Herry1234&action=edit&redlink=1
3577 https://en.wikipedia.org/w/index.php%3ftitle=User:Herry43113&action=edit&redlink=1
3578 https://en.wikipedia.org/wiki/User:Hertz1888
3579 https://en.wikipedia.org/w/index.php%3ftitle=User:Hesamsaberi&action=edit&redlink=1
3580 https://en.wikipedia.org/w/index.php%3ftitle=User:Hetori&action=edit&redlink=1
3581 https://en.wikipedia.org/wiki/User:Heyzeuss
3582 https://en.wikipedia.org/w/index.php%3ftitle=User:Hfalc&action=edit&redlink=1
3583 https://en.wikipedia.org/wiki/User:Hfastedge
3584 https://en.wikipedia.org/w/index.php%3ftitle=User:Hftf&action=edit&redlink=1


1 Hgranqvist3585
1 Hhbs3586
1 Hideyuki3587
1 Hiemstra3588
1 High-quality323589
5 Hiihammuk3590
4 Hiiiiiiiiiiiiiiiiiiiii3591
6 Hike3953592
1 Himanshub163593
3 Himanshubeniwal3594
1 Hintss3595
6 Hipal3596
1 Hirak 993597
1 Hirsutism3598
4 Hirzel3599
1 Histrion3600
5 Hjfreyer3601
1 Hkkyun3602
1 Hkleinnl3603
5 Hlg3604
1 Hmonroe3605
1 Hmwith3606
3 Hnixon013607
1 Hobart3608
2 HoboMcJoe3609

3585 https://en.wikipedia.org/wiki/User:Hgranqvist
3586 https://en.wikipedia.org/w/index.php%3ftitle=User:Hhbs&action=edit&redlink=1
3587 https://en.wikipedia.org/wiki/User:Hideyuki
3588 https://en.wikipedia.org/wiki/User:Hiemstra
3589 https://en.wikipedia.org/w/index.php%3ftitle=User:High-quality32&action=edit&redlink=1
3590 https://en.wikipedia.org/w/index.php%3ftitle=User:Hiihammuk&action=edit&redlink=1
3591 https://en.wikipedia.org/w/index.php%3ftitle=User:Hiiiiiiiiiiiiiiiiiiiii&action=edit&redlink=1
3592 https://en.wikipedia.org/wiki/User:Hike395
3593 https://en.wikipedia.org/w/index.php%3ftitle=User:Himanshub16&action=edit&redlink=1
3594 https://en.wikipedia.org/w/index.php%3ftitle=User:Himanshubeniwal&action=edit&redlink=1
3595 https://en.wikipedia.org/wiki/User:Hintss
3596 https://en.wikipedia.org/wiki/User:Hipal
3597 https://en.wikipedia.org/wiki/User:Hirak_99
3598 https://en.wikipedia.org/wiki/User:Hirsutism
3599 https://en.wikipedia.org/wiki/User:Hirzel
3600 https://en.wikipedia.org/wiki/User:Histrion
3601 https://en.wikipedia.org/wiki/User:Hjfreyer
3602 https://en.wikipedia.org/w/index.php%3ftitle=User:Hkkyun&action=edit&redlink=1
3603 https://en.wikipedia.org/wiki/User:Hkleinnl
3604 https://en.wikipedia.org/w/index.php%3ftitle=User:Hlg&action=edit&redlink=1
3605 https://en.wikipedia.org/w/index.php%3ftitle=User:Hmonroe&action=edit&redlink=1
3606 https://en.wikipedia.org/wiki/User:Hmwith
3607 https://en.wikipedia.org/w/index.php%3ftitle=User:Hnixon01&action=edit&redlink=1
3608 https://en.wikipedia.org/wiki/User:Hobart
3609 https://en.wikipedia.org/w/index.php%3ftitle=User:HoboMcJoe&action=edit&redlink=1


1 Hobophobe3610
1 Hofingerandi3611
2 Hofmic3612
1 Hofoen3613
1 Hojjatjafary3614
1 Hola q hace123533615
3 HolyCookie3616
1 Homei3617
1 HongxuChen3618
2 Honnza3619
3 Honza Záruba3620
2 Hoof13413621
1 Hoonose3622
3 Hooperbloob3623
1 Hooshdaran3624
6 Horcrux3625
1 Hornbydd3626
1 Horst-schlaemma3627
1 Hosamaly3628
2 HoserHead3629
2 HotdogPi3630
1 HowardBGolden3631
2 Hoyda13632
1 Hritcu3633
2 Hrushikesh Tilak3634

3610 https://en.wikipedia.org/wiki/User:Hobophobe
3611 https://en.wikipedia.org/w/index.php%3ftitle=User:Hofingerandi&action=edit&redlink=1
3612 https://en.wikipedia.org/wiki/User:Hofmic
3613 https://en.wikipedia.org/wiki/User:Hofoen
3614 https://en.wikipedia.org/w/index.php%3ftitle=User:Hojjatjafary&action=edit&redlink=1
3615 https://en.wikipedia.org/w/index.php%3ftitle=User:Hola_q_hace12353&action=edit&redlink=1
3616 https://en.wikipedia.org/wiki/User:HolyCookie
3617 https://en.wikipedia.org/wiki/User:Homei
3618 https://en.wikipedia.org/wiki/User:HongxuChen
3619 https://en.wikipedia.org/w/index.php%3ftitle=User:Honnza&action=edit&redlink=1
3620 https://en.wikipedia.org/wiki/User:Honza_Z%25C3%25A1ruba
3621 https://en.wikipedia.org/w/index.php%3ftitle=User:Hoof1341&action=edit&redlink=1
3622 https://en.wikipedia.org/wiki/User:Hoonose
3623 https://en.wikipedia.org/w/index.php%3ftitle=User:Hooperbloob&action=edit&redlink=1
3624 https://en.wikipedia.org/wiki/User:Hooshdaran
3625 https://en.wikipedia.org/wiki/User:Horcrux
3626 https://en.wikipedia.org/w/index.php%3ftitle=User:Hornbydd&action=edit&redlink=1
3627 https://en.wikipedia.org/wiki/User:Horst-schlaemma
3628 https://en.wikipedia.org/wiki/User:Hosamaly
3629 https://en.wikipedia.org/w/index.php%3ftitle=User:HoserHead&action=edit&redlink=1
3630 https://en.wikipedia.org/wiki/User:HotdogPi
3631 https://en.wikipedia.org/wiki/User:HowardBGolden
3632 https://en.wikipedia.org/wiki/User:Hoyda1
3633 https://en.wikipedia.org/wiki/User:Hritcu
3634 https://en.wikipedia.org/w/index.php%3ftitle=User:Hrushikesh_Tilak&action=edit&redlink=1


2 Htmnssn3635
2 Hu123636
2 Huazheng3637
2 HubbaKuba3638
3 Hugh Aguilar3639
1 Hughdbrown3640
1 HugoHelp3641
1 Hugowolf3642
1 Hullaballoo Wolfowitz3643
1 Humanengr3644
1 Hummeling3645
1 Hummerrocket3646
8 HumphreyW3647
1 Hungaricus3648
1 HuntHello3649
1 Hunterm2673650
1 Hunyadym3651
2 Huon3652
1 Hurricane1113653
4 Husky3654
1 Hutchison de3655
2 Huttarl3656
1 Huxleys3657
2 Huynl3658
5 Hv3659

3635 https://en.wikipedia.org/wiki/User:Htmnssn
3636 https://en.wikipedia.org/wiki/User:Hu12
3637 https://en.wikipedia.org/w/index.php%3ftitle=User:Huazheng&action=edit&redlink=1
3638 https://en.wikipedia.org/w/index.php%3ftitle=User:HubbaKuba&action=edit&redlink=1
3639 https://en.wikipedia.org/w/index.php%3ftitle=User:Hugh_Aguilar&action=edit&redlink=1
3640 https://en.wikipedia.org/wiki/User:Hughdbrown
3641 https://en.wikipedia.org/wiki/User:HugoHelp
3642 https://en.wikipedia.org/w/index.php%3ftitle=User:Hugowolf&action=edit&redlink=1
3643 https://en.wikipedia.org/wiki/User:Hullaballoo_Wolfowitz
3644 https://en.wikipedia.org/wiki/User:Humanengr
3645 https://en.wikipedia.org/w/index.php%3ftitle=User:Hummeling&action=edit&redlink=1
3646 https://en.wikipedia.org/wiki/User:Hummerrocket
3647 https://en.wikipedia.org/w/index.php%3ftitle=User:HumphreyW&action=edit&redlink=1
3648 https://en.wikipedia.org/w/index.php%3ftitle=User:Hungaricus&action=edit&redlink=1
3649 https://en.wikipedia.org/wiki/User:HuntHello
3650 https://en.wikipedia.org/wiki/User:Hunterm267
3651 https://en.wikipedia.org/wiki/User:Hunyadym
3652 https://en.wikipedia.org/wiki/User:Huon
3653 https://en.wikipedia.org/wiki/User:Hurricane111
3654 https://en.wikipedia.org/wiki/User:Husky
3655 https://en.wikipedia.org/wiki/User:Hutchison_de
3656 https://en.wikipedia.org/w/index.php%3ftitle=User:Huttarl&action=edit&redlink=1
3657 https://en.wikipedia.org/w/index.php%3ftitle=User:Huxleys&action=edit&redlink=1
3658 https://en.wikipedia.org/w/index.php%3ftitle=User:Huynl&action=edit&redlink=1
3659 https://en.wikipedia.org/wiki/User:Hv


2 Hvn04133660
1 Hwymeers3661
2 Hyacinth3662
7 Hyad3663
2 Hydrargyrum3664
4 Hydromania3665
2 Hydrox3666
1 Hyegolfer3667
3 Hyperbolick3668
1 Hyperdivision3669
1 Hyperneural3670
1 Hypersonic123671
2 Hypnosifl3672
2 Hypotroph3673
4 Hzyzcq3674
1 I Hate Banner adds3675
6 I am One of Many3676
1 I do not exist3677
25 I dream of horses3678
1 I'm Aya Syameimaru!3679
1 ICrann153680
3 ILikeThings3681
1 IMSoP3682
2 IMalc3683
1 IMalinowski3684

3660 https://en.wikipedia.org/wiki/User:Hvn0413
3661 https://en.wikipedia.org/w/index.php%3ftitle=User:Hwymeers&action=edit&redlink=1
3662 https://en.wikipedia.org/wiki/User:Hyacinth
3663 https://en.wikipedia.org/wiki/User:Hyad
3664 https://en.wikipedia.org/wiki/User:Hydrargyrum
3665 https://en.wikipedia.org/wiki/User:Hydromania
3666 https://en.wikipedia.org/wiki/User:Hydrox
3667 https://en.wikipedia.org/w/index.php%3ftitle=User:Hyegolfer&action=edit&redlink=1
3668 https://en.wikipedia.org/wiki/User:Hyperbolick
3669 https://en.wikipedia.org/wiki/User:Hyperdivision
3670 https://en.wikipedia.org/wiki/User:Hyperneural
3671 https://en.wikipedia.org/wiki/User:Hypersonic12
3672 https://en.wikipedia.org/wiki/User:Hypnosifl
3673 https://en.wikipedia.org/wiki/User:Hypotroph
3674 https://en.wikipedia.org/w/index.php%3ftitle=User:Hzyzcq&action=edit&redlink=1
3675 https://en.wikipedia.org/wiki/User:I_Hate_Banner_adds
3676 https://en.wikipedia.org/wiki/User:I_am_One_of_Many
3677 https://en.wikipedia.org/wiki/User:I_do_not_exist
3678 https://en.wikipedia.org/wiki/User:I_dream_of_horses
3679 https://en.wikipedia.org/wiki/User:I%2527m_Aya_Syameimaru!
3680 https://en.wikipedia.org/w/index.php%3ftitle=User:ICrann15&action=edit&redlink=1
3681 https://en.wikipedia.org/wiki/User:ILikeThings
3682 https://en.wikipedia.org/wiki/User:IMSoP
3683 https://en.wikipedia.org/w/index.php%3ftitle=User:IMalc&action=edit&redlink=1
3684 https://en.wikipedia.org/w/index.php%3ftitle=User:IMalinowski&action=edit&redlink=1


2 INkubusse3685
1 IOLJeff3686
1 IRP3687
1 IRelayer3688
6 IRockStone3689
2 IRudyak3690
1 IW.HG3691
1 IWOLF3692
1 IagoQnsi3693
3 Iain.dalton3694
1 Iamfaster3695
1 Iamhigh3696
1 Iamseiko3697
1 Ian Ashley3698
2 Ian Rose3699
3 Ian10003700
14 IanOsgood3701
1 Ianb3702
5 Ianb14693703
1 Ianhowlett3704
2 Ibadibam3705
6 Ibmua3706
4 Ibrar Ahmad Shinwari3707
5 Icairns3708
2 Icaoberg3709

3685 https://en.wikipedia.org/wiki/User:INkubusse
3686 https://en.wikipedia.org/wiki/User:IOLJeff
3687 https://en.wikipedia.org/wiki/User:IRP
3688 https://en.wikipedia.org/wiki/User:IRelayer
3689 https://en.wikipedia.org/wiki/User:IRockStone
3690 https://en.wikipedia.org/w/index.php%3ftitle=User:IRudyak&action=edit&redlink=1
3691 https://en.wikipedia.org/wiki/User:IW.HG
3692 https://en.wikipedia.org/w/index.php%3ftitle=User:IWOLF&action=edit&redlink=1
3693 https://en.wikipedia.org/wiki/User:IagoQnsi
3694 https://en.wikipedia.org/w/index.php%3ftitle=User:Iain.dalton&action=edit&redlink=1
3695 https://en.wikipedia.org/wiki/User:Iamfaster
3696 https://en.wikipedia.org/w/index.php%3ftitle=User:Iamhigh&action=edit&redlink=1
3697 https://en.wikipedia.org/wiki/User:Iamseiko
3698 https://en.wikipedia.org/w/index.php%3ftitle=User:Ian_Ashley&action=edit&redlink=1
3699 https://en.wikipedia.org/wiki/User:Ian_Rose
3700 https://en.wikipedia.org/wiki/User:Ian1000
3701 https://en.wikipedia.org/wiki/User:IanOsgood
3702 https://en.wikipedia.org/wiki/User:Ianb
3703 https://en.wikipedia.org/w/index.php%3ftitle=User:Ianb1469&action=edit&redlink=1
3704 https://en.wikipedia.org/w/index.php%3ftitle=User:Ianhowlett&action=edit&redlink=1
3705 https://en.wikipedia.org/wiki/User:Ibadibam
3706 https://en.wikipedia.org/w/index.php%3ftitle=User:Ibmua&action=edit&redlink=1
3707 https://en.wikipedia.org/w/index.php%3ftitle=User:Ibrar_Ahmad_Shinwari&action=edit&redlink=1
3708 https://en.wikipedia.org/wiki/User:Icairns
3709 https://en.wikipedia.org/w/index.php%3ftitle=User:Icaoberg&action=edit&redlink=1


1 Iceblock3710
1 Ich3711
1 Ichernev3712
2 IchibanPL3713
1 Ideogram3714
2 Idioma-bot3715
2 Idiosyncratic-bumblebee3716
2 Iekpo3717
3 Ieopo3718
1 Ifarzana3719
1 Iffy3720
2 Ifnord3721
1 Iggy the Swan3722
1 Ignacioerrico3723
1 Ignatzmice3724
1 Igodard3725
2 Igoldste3726
1 Igor Yalovecky3727
1 Igor.demura3728
1 IgorRodchenkov3729
61 Igorpak3730
1 Igsou3731
9 IgushevEdward3732
13 Ihardlythinkso3733
3 Ihope1273734

3710 https://en.wikipedia.org/wiki/User:Iceblock
3711 https://en.wikipedia.org/wiki/User:Ich
3712 https://en.wikipedia.org/w/index.php%3ftitle=User:Ichernev&action=edit&redlink=1
3713 https://en.wikipedia.org/wiki/User:IchibanPL
3714 https://en.wikipedia.org/wiki/User:Ideogram
3715 https://en.wikipedia.org/wiki/User:Idioma-bot
3716 https://en.wikipedia.org/w/index.php%3ftitle=User:Idiosyncratic-bumblebee&action=edit&redlink=1
3717 https://en.wikipedia.org/wiki/User:Iekpo
3718 https://en.wikipedia.org/wiki/User:Ieopo
3719 https://en.wikipedia.org/w/index.php%3ftitle=User:Ifarzana&action=edit&redlink=1
3720 https://en.wikipedia.org/wiki/User:Iffy
3721 https://en.wikipedia.org/wiki/User:Ifnord
3722 https://en.wikipedia.org/wiki/User:Iggy_the_Swan
3723 https://en.wikipedia.org/wiki/User:Ignacioerrico
3724 https://en.wikipedia.org/wiki/User:Ignatzmice
3725 https://en.wikipedia.org/w/index.php%3ftitle=User:Igodard&action=edit&redlink=1
3726 https://en.wikipedia.org/wiki/User:Igoldste
3727 https://en.wikipedia.org/wiki/User:Igor_Yalovecky
3728 https://en.wikipedia.org/w/index.php%3ftitle=User:Igor.demura&action=edit&redlink=1
3729 https://en.wikipedia.org/w/index.php%3ftitle=User:IgorRodchenkov&action=edit&redlink=1
3730 https://en.wikipedia.org/wiki/User:Igorpak
3731 https://en.wikipedia.org/w/index.php%3ftitle=User:Igsou&action=edit&redlink=1
3732 https://en.wikipedia.org/w/index.php%3ftitle=User:IgushevEdward&action=edit&redlink=1
3733 https://en.wikipedia.org/wiki/User:Ihardlythinkso
3734 https://en.wikipedia.org/wiki/User:Ihope127


1 Iiii I I I3735
1 Ijab.zhan3736
1 Ijgt3737
1 Ijon3738
7 IkamusumeFan3739
18 Ikcotyck3740
1 Ikemccaslin3741
1 IlMatematicoUNO3742
2 Ilana3743
1 Iliazm3744
1 Illia Connell3745
2 Illnab10243746
6 Illuminatedwax3747
11 Ilmari Karonen3748
1 Ilya3749
2 Ilya presman3750
1 Ilya.gazman3751
1 Ilyathemuromets3752
26 ImTheIP3753
1 Imadeitmyself3754
1 Iman.saleh3755
1 Imjooseo3756
2 Imjustin3757
1 Imminent773758
3 Immunize3759

3735 https://en.wikipedia.org/wiki/User:Iiii_I_I_I
3736 https://en.wikipedia.org/w/index.php%3ftitle=User:Ijab.zhan&action=edit&redlink=1
3737 https://en.wikipedia.org/w/index.php%3ftitle=User:Ijgt&action=edit&redlink=1
3738 https://en.wikipedia.org/wiki/User:Ijon
3739 https://en.wikipedia.org/wiki/User:IkamusumeFan
3740 https://en.wikipedia.org/wiki/User:Ikcotyck
3741 https://en.wikipedia.org/w/index.php%3ftitle=User:Ikemccaslin&action=edit&redlink=1
3742 https://en.wikipedia.org/wiki/User:IlMatematicoUNO
3743 https://en.wikipedia.org/wiki/User:Ilana
3744 https://en.wikipedia.org/wiki/User:Iliazm
3745 https://en.wikipedia.org/w/index.php%3ftitle=User:Illia_Connell&action=edit&redlink=1
3746 https://en.wikipedia.org/wiki/User:Illnab1024
3747 https://en.wikipedia.org/wiki/User:Illuminatedwax
3748 https://en.wikipedia.org/wiki/User:Ilmari_Karonen
3749 https://en.wikipedia.org/wiki/User:Ilya
3750 https://en.wikipedia.org/w/index.php%3ftitle=User:Ilya_presman&action=edit&redlink=1
3751 https://en.wikipedia.org/w/index.php%3ftitle=User:Ilya.gazman&action=edit&redlink=1
3752 https://en.wikipedia.org/w/index.php%3ftitle=User:Ilyathemuromets&action=edit&redlink=1
3753 https://en.wikipedia.org/wiki/User:ImTheIP
3754 https://en.wikipedia.org/wiki/User:Imadeitmyself
3755 https://en.wikipedia.org/w/index.php%3ftitle=User:Iman.saleh&action=edit&redlink=1
3756 https://en.wikipedia.org/wiki/User:Imjooseo
3757 https://en.wikipedia.org/w/index.php%3ftitle=User:Imjustin&action=edit&redlink=1
3758 https://en.wikipedia.org/wiki/User:Imminent77
3759 https://en.wikipedia.org/wiki/User:Immunize


1 Imnotminkus3760
3 ImperfectlyInformed3761
2 Imposing3762
9 Imran3763
1 Imyourfoot3764
1 Imz3765
1 InShaneee3766
1 Inavda3767
2 Incnis Mrsi3768
4 Incompetence3769
1 Incredible Shrinking Dandy3770
3 Indeblo3771
12 Indeed1233772
1 Indeterminate3773
1 Infinity ive3774
1 Infinity03775
1 InflationIncentive3776
1 Inforealism3777
1 Informavoreglutton3778
1 Ink-Jetty3779
9 Inkling3780
1 Ino5hiro3781
1 Inphynite3782
2 Inquam3783
1 Int19h3784

3760 https://en.wikipedia.org/wiki/User:Imnotminkus
3761 https://en.wikipedia.org/wiki/User:ImperfectlyInformed
3762 https://en.wikipedia.org/w/index.php%3ftitle=User:Imposing&action=edit&redlink=1
3763 https://en.wikipedia.org/wiki/User:Imran
3764 https://en.wikipedia.org/wiki/User:Imyourfoot
3765 https://en.wikipedia.org/wiki/User:Imz
3766 https://en.wikipedia.org/wiki/User:InShaneee
3767 https://en.wikipedia.org/wiki/User:Inavda
3768 https://en.wikipedia.org/wiki/User:Incnis_Mrsi
3769 https://en.wikipedia.org/wiki/User:Incompetence
3770 https://en.wikipedia.org/w/index.php%3ftitle=User:Incredible_Shrinking_Dandy&action=edit&redlink=1
3771 https://en.wikipedia.org/w/index.php%3ftitle=User:Indeblo&action=edit&redlink=1
3772 https://en.wikipedia.org/wiki/User:Indeed123
3773 https://en.wikipedia.org/wiki/User:Indeterminate
3774 https://en.wikipedia.org/w/index.php%3ftitle=User:Infinity_ive&action=edit&redlink=1
3775 https://en.wikipedia.org/wiki/User:Infinity0
3776 https://en.wikipedia.org/w/index.php%3ftitle=User:InflationIncentive&action=edit&redlink=1
3777 https://en.wikipedia.org/w/index.php%3ftitle=User:Inforealism&action=edit&redlink=1
3778 https://en.wikipedia.org/w/index.php%3ftitle=User:Informavoreglutton&action=edit&redlink=1
3779 https://en.wikipedia.org/w/index.php%3ftitle=User:Ink-Jetty&action=edit&redlink=1
3780 https://en.wikipedia.org/wiki/User:Inkling
3781 https://en.wikipedia.org/wiki/User:Ino5hiro
3782 https://en.wikipedia.org/wiki/User:Inphynite
3783 https://en.wikipedia.org/w/index.php%3ftitle=User:Inquam&action=edit&redlink=1
3784 https://en.wikipedia.org/wiki/User:Int19h


4 Intangir3785
3 Integr8e3786
1 IntegralPython3787
1 Intel40043788
2 Intellec73789
123 InternetArchiveBot3790
2 Intervallic3791
98 Intgr3792
1 Intheshadowplay3793
1 Intr1993794
1 InvalidOS3795
1 Inventitech3796
23 InverseHypercube3797
5 InvinciblyHarshit3798
4 Ioannis ar3799
1 Iohannes Animosus3800
2 Iokevins3801
1 Ios.manor3802
1 IosCreeper3803
1 Ioscius3804
3 Iotatau3805
2 Ipeirotis3806
13 Ira Leviton3807
1 Iron Wallaby3808
2 IronGargoyle3809

3785 https://en.wikipedia.org/wiki/User:Intangir
3786 https://en.wikipedia.org/w/index.php%3ftitle=User:Integr8e&action=edit&redlink=1
3787 https://en.wikipedia.org/wiki/User:IntegralPython
3788 https://en.wikipedia.org/w/index.php%3ftitle=User:Intel4004&action=edit&redlink=1
3789 https://en.wikipedia.org/wiki/User:Intellec7
3790 https://en.wikipedia.org/wiki/User:InternetArchiveBot
3791 https://en.wikipedia.org/w/index.php%3ftitle=User:Intervallic&action=edit&redlink=1
3792 https://en.wikipedia.org/wiki/User:Intgr
3793 https://en.wikipedia.org/w/index.php%3ftitle=User:Intheshadowplay&action=edit&redlink=1
3794 https://en.wikipedia.org/wiki/User:Intr199
3795 https://en.wikipedia.org/wiki/User:InvalidOS
3796 https://en.wikipedia.org/w/index.php%3ftitle=User:Inventitech&action=edit&redlink=1
3797 https://en.wikipedia.org/wiki/User:InverseHypercube
3798 https://en.wikipedia.org/w/index.php%3ftitle=User:InvinciblyHarshit&action=edit&redlink=1
3799 https://en.wikipedia.org/w/index.php%3ftitle=User:Ioannis_ar&action=edit&redlink=1
3800 https://en.wikipedia.org/wiki/User:Iohannes_Animosus
3801 https://en.wikipedia.org/w/index.php%3ftitle=User:Iokevins&action=edit&redlink=1
3802 https://en.wikipedia.org/wiki/User:Ios.manor
3803 https://en.wikipedia.org/wiki/User:IosCreeper
3804 https://en.wikipedia.org/wiki/User:Ioscius
3805 https://en.wikipedia.org/wiki/User:Iotatau
3806 https://en.wikipedia.org/wiki/User:Ipeirotis
3807 https://en.wikipedia.org/wiki/User:Ira_Leviton
3808 https://en.wikipedia.org/wiki/User:Iron_Wallaby
3809 https://en.wikipedia.org/wiki/User:IronGargoyle


1 Ironholds3810
2 Ironmagma3811
1 Irrevenant3812
4 Isaac3813
1 Isacdaavid3814
1 Isambard Kingdom3815
8 Isaottn3816
1 Isb04593817
1 Isilanes3818
4 Isis~enwiki3819
1 Island Monkey3820
3 Isnow3821
1 Isomorph3822
14 Isomorphismus3823
2 Istanton3824
1 IstvanWolf3825
2 Ita1401883826
5 Itai3827
1 Itamarcu3828
1 ItaniuMatrix3829
2 Itmozart3830
1 Itnilesh3831
20 ItsMutual3832
1 ItsProgrammable3833
18 Itsameen-bc1031123834

3810 https://en.wikipedia.org/wiki/User:Ironholds
3811 https://en.wikipedia.org/w/index.php%3ftitle=User:Ironmagma&action=edit&redlink=1
3812 https://en.wikipedia.org/wiki/User:Irrevenant
3813 https://en.wikipedia.org/wiki/User:Isaac
3814 https://en.wikipedia.org/wiki/User:Isacdaavid
3815 https://en.wikipedia.org/wiki/User:Isambard_Kingdom
3816 https://en.wikipedia.org/w/index.php%3ftitle=User:Isaottn&action=edit&redlink=1
3817 https://en.wikipedia.org/wiki/User:Isb0459
3818 https://en.wikipedia.org/wiki/User:Isilanes
3819 https://en.wikipedia.org/wiki/User:Isis~enwiki
3820 https://en.wikipedia.org/wiki/User:Island_Monkey
3821 https://en.wikipedia.org/wiki/User:Isnow
3822 https://en.wikipedia.org/wiki/User:Isomorph
3823 https://en.wikipedia.org/w/index.php%3ftitle=User:Isomorphismus&action=edit&redlink=1
3824 https://en.wikipedia.org/w/index.php%3ftitle=User:Istanton&action=edit&redlink=1
3825 https://en.wikipedia.org/wiki/User:IstvanWolf
3826 https://en.wikipedia.org/wiki/User:Ita140188
3827 https://en.wikipedia.org/wiki/User:Itai
3828 https://en.wikipedia.org/w/index.php%3ftitle=User:Itamarcu&action=edit&redlink=1
3829 https://en.wikipedia.org/wiki/User:ItaniuMatrix
3830 https://en.wikipedia.org/w/index.php%3ftitle=User:Itmozart&action=edit&redlink=1
3831 https://en.wikipedia.org/w/index.php%3ftitle=User:Itnilesh&action=edit&redlink=1
3832 https://en.wikipedia.org/w/index.php%3ftitle=User:ItsMutual&action=edit&redlink=1
3833 https://en.wikipedia.org/w/index.php%3ftitle=User:ItsProgrammable&action=edit&redlink=1
3834 https://en.wikipedia.org/w/index.php%3ftitle=User:Itsameen-bc103112&action=edit&redlink=1


1 Itslynn3835
2 Itub3836
2 IvR3837
1 Ivan Akira3838
6 Ivan Kuckir3839
1 Ivan Pozdeev3840
1 Ivan Ukhov3841
1 Ivan Štambuk3842
1 Ivan.Savov3843
1 IvanAndreevich3844
1 Ivansanchez3845
1 Ivionday3846
1 Iwaterpolo3847
25 Ixfd643848
1 Izkala3849
3 Izno3850
3 IznoRepeat3851
1 J thomas moros3852
4 J-Wiki3853
7 J. Finkelstein3854
1 J.Dong8203855
2 J.N.3856
1 J.Verstrynge3857
1 J.Voss3858
14 J.delanoy3859

3835 https://en.wikipedia.org/w/index.php%3ftitle=User:Itslynn&action=edit&redlink=1
3836 https://en.wikipedia.org/wiki/User:Itub
3837 https://en.wikipedia.org/w/index.php%3ftitle=User:IvR&action=edit&redlink=1
3838 https://en.wikipedia.org/wiki/User:Ivan_Akira
3839 https://en.wikipedia.org/w/index.php%3ftitle=User:Ivan_Kuckir&action=edit&redlink=1
3840 https://en.wikipedia.org/w/index.php%3ftitle=User:Ivan_Pozdeev&action=edit&redlink=1
3841 https://en.wikipedia.org/w/index.php%3ftitle=User:Ivan_Ukhov&action=edit&redlink=1
3842 https://en.wikipedia.org/wiki/User:Ivan_%25C5%25A0tambuk
3843 https://en.wikipedia.org/wiki/User:Ivan.Savov
3844 https://en.wikipedia.org/wiki/User:IvanAndreevich
3845 https://en.wikipedia.org/wiki/User:Ivansanchez
3846 https://en.wikipedia.org/wiki/User:Ivionday
3847 https://en.wikipedia.org/wiki/User:Iwaterpolo
3848 https://en.wikipedia.org/wiki/User:Ixfd64
3849 https://en.wikipedia.org/w/index.php%3ftitle=User:Izkala&action=edit&redlink=1
3850 https://en.wikipedia.org/wiki/User:Izno
3851 https://en.wikipedia.org/wiki/User:IznoRepeat
3852 https://en.wikipedia.org/w/index.php%3ftitle=User:J_thomas_moros&action=edit&redlink=1
3853 https://en.wikipedia.org/wiki/User:J-Wiki
3854 https://en.wikipedia.org/wiki/User:J._Finkelstein
3855 https://en.wikipedia.org/wiki/User:J.Dong820
3856 https://en.wikipedia.org/wiki/User:J.N.
3857 https://en.wikipedia.org/w/index.php%3ftitle=User:J.Verstrynge&action=edit&redlink=1
3858 https://en.wikipedia.org/wiki/User:J.Voss
3859 https://en.wikipedia.org/wiki/User:J.delanoy

1823
Contributors

2 J04n3860
1 J201606283861
3 J2kun3862
1 J36miles3863
7 J4 james3864
39 JAnDbot3865
1 JBW3866
1 JBakaka3867
2 JBocco3868
34 JCW-CleanerBot3869
2 JCarlos3870
1 JCarriker3871
1 JDOCH3872
2 JDspeeder13873
2 JF Bastien3874
5 JForget3875
1 JHMM133876
1 JHunterJ3877
3 JIP3878
3 JJ Harrison3879
12 JJMC893880
4 JJuran3881
3 JL-Bot3882
9 JLaTondre3883
4 JMCorey3884

3860 https://en.wikipedia.org/wiki/User:J04n
3861 https://en.wikipedia.org/wiki/User:J20160628
3862 https://en.wikipedia.org/wiki/User:J2kun
3863 https://en.wikipedia.org/wiki/User:J36miles
3864 https://en.wikipedia.org/w/index.php%3ftitle=User:J4_james&action=edit&redlink=1
3865 https://en.wikipedia.org/wiki/User:JAnDbot
3866 https://en.wikipedia.org/wiki/User:JBW
3867 https://en.wikipedia.org/wiki/User:JBakaka
3868 https://en.wikipedia.org/w/index.php%3ftitle=User:JBocco&action=edit&redlink=1
3869 https://en.wikipedia.org/wiki/User:JCW-CleanerBot
3870 https://en.wikipedia.org/wiki/User:JCarlos
3871 https://en.wikipedia.org/wiki/User:JCarriker
3872 https://en.wikipedia.org/w/index.php%3ftitle=User:JDOCH&action=edit&redlink=1
3873 https://en.wikipedia.org/w/index.php%3ftitle=User:JDspeeder1&action=edit&redlink=1
3874 https://en.wikipedia.org/wiki/User:JF_Bastien
3875 https://en.wikipedia.org/wiki/User:JForget
3876 https://en.wikipedia.org/wiki/User:JHMM13
3877 https://en.wikipedia.org/wiki/User:JHunterJ
3878 https://en.wikipedia.org/wiki/User:JIP
3879 https://en.wikipedia.org/wiki/User:JJ_Harrison
3880 https://en.wikipedia.org/wiki/User:JJMC89
3881 https://en.wikipedia.org/w/index.php%3ftitle=User:JJuran&action=edit&redlink=1
3882 https://en.wikipedia.org/wiki/User:JL-Bot
3883 https://en.wikipedia.org/wiki/User:JLaTondre
3884 https://en.wikipedia.org/wiki/User:JMCorey


1 JMOprof3885
23 JMP EAX3886
1 JMRodrigues3887
2 JMiall3888
3 JMyrleFuller3889
1 JNGNYC3890
2 JNW3891
1 JP827479193892
4 JPRBW3893
1 JPopyack3894
2 JRB-Europe3895
1 JRGJON3896
30 JRSpriggs3897
3 JRavn3898
1 JTN3899
2 JVK's3900
1 JW 000003901
2 JWilk3902
4 JYBot3903
2 Ja49cs693904
2 JaGa3905
1 JaJa0N113906
2 Jabanabba3907
1 Jabernal3908
9 Jacch1233909

3885 https://en.wikipedia.org/wiki/User:JMOprof
3886 https://en.wikipedia.org/wiki/User:JMP_EAX
3887 https://en.wikipedia.org/w/index.php%3ftitle=User:JMRodrigues&action=edit&redlink=1
3888 https://en.wikipedia.org/wiki/User:JMiall
3889 https://en.wikipedia.org/wiki/User:JMyrleFuller
3890 https://en.wikipedia.org/w/index.php%3ftitle=User:JNGNYC&action=edit&redlink=1
3891 https://en.wikipedia.org/wiki/User:JNW
3892 https://en.wikipedia.org/wiki/User:JP82747919
3893 https://en.wikipedia.org/wiki/User:JPRBW
3894 https://en.wikipedia.org/wiki/User:JPopyack
3895 https://en.wikipedia.org/w/index.php%3ftitle=User:JRB-Europe&action=edit&redlink=1
3896 https://en.wikipedia.org/w/index.php%3ftitle=User:JRGJON&action=edit&redlink=1
3897 https://en.wikipedia.org/wiki/User:JRSpriggs
3898 https://en.wikipedia.org/wiki/User:JRavn
3899 https://en.wikipedia.org/wiki/User:JTN
3900 https://en.wikipedia.org/w/index.php%3ftitle=User:JVK%2527s&action=edit&redlink=1
3901 https://en.wikipedia.org/wiki/User:JW_00000
3902 https://en.wikipedia.org/wiki/User:JWilk
3903 https://en.wikipedia.org/wiki/User:JYBot
3904 https://en.wikipedia.org/w/index.php%3ftitle=User:Ja49cs69&action=edit&redlink=1
3905 https://en.wikipedia.org/wiki/User:JaGa
3906 https://en.wikipedia.org/wiki/User:JaJa0N11
3907 https://en.wikipedia.org/w/index.php%3ftitle=User:Jabanabba&action=edit&redlink=1
3908 https://en.wikipedia.org/wiki/User:Jabernal
3909 https://en.wikipedia.org/w/index.php%3ftitle=User:Jacch123&action=edit&redlink=1


1 Jacektomas3910
6 Jachto3911
2 Jack Greenmaven3912
1 Jack-A-Roe3913
1 Jack90s153914
3 JackH3915
1 JackSchmidt3916
1 Jackbars3917
3 Jackcanty3918
4 JackieBot3919
2 JackintheBox3920
1 Jackjackjackbruce3921
3 Jackson tale3922
1 Jackzhp3923
1 Jacob Finn3924
2 Jacob grace3925
1 JacobIsrael183926
1 Jacobcroope3927
1 Jacobjames103928
1 Jacobko3929
1 Jacobkwitkoski3930
6 Jacobolus3931
1 JacobsonUCI3932
2 Jacona3933
1 Jadrian3934

3910 https://en.wikipedia.org/w/index.php%3ftitle=User:Jacektomas&action=edit&redlink=1
3911 https://en.wikipedia.org/w/index.php%3ftitle=User:Jachto&action=edit&redlink=1
3912 https://en.wikipedia.org/wiki/User:Jack_Greenmaven
3913 https://en.wikipedia.org/wiki/User:Jack-A-Roe
3914 https://en.wikipedia.org/wiki/User:Jack90s15
3915 https://en.wikipedia.org/wiki/User:JackH
3916 https://en.wikipedia.org/wiki/User:JackSchmidt
3917 https://en.wikipedia.org/w/index.php%3ftitle=User:Jackbars&action=edit&redlink=1
3918 https://en.wikipedia.org/w/index.php%3ftitle=User:Jackcanty&action=edit&redlink=1
3919 https://en.wikipedia.org/wiki/User:JackieBot
3920 https://en.wikipedia.org/wiki/User:JackintheBox
3921 https://en.wikipedia.org/w/index.php%3ftitle=User:Jackjackjackbruce&action=edit&redlink=1
3922 https://en.wikipedia.org/w/index.php%3ftitle=User:Jackson_tale&action=edit&redlink=1
3923 https://en.wikipedia.org/wiki/User:Jackzhp
3924 https://en.wikipedia.org/w/index.php%3ftitle=User:Jacob_Finn&action=edit&redlink=1
3925 https://en.wikipedia.org/wiki/User:Jacob_grace
3926 https://en.wikipedia.org/w/index.php%3ftitle=User:JacobIsrael18&action=edit&redlink=1
3927 https://en.wikipedia.org/w/index.php%3ftitle=User:Jacobcroope&action=edit&redlink=1
3928 https://en.wikipedia.org/w/index.php%3ftitle=User:Jacobjames10&action=edit&redlink=1
3929 https://en.wikipedia.org/wiki/User:Jacobko
3930 https://en.wikipedia.org/w/index.php%3ftitle=User:Jacobkwitkoski&action=edit&redlink=1
3931 https://en.wikipedia.org/wiki/User:Jacobolus
3932 https://en.wikipedia.org/w/index.php%3ftitle=User:JacobsonUCI&action=edit&redlink=1
3933 https://en.wikipedia.org/wiki/User:Jacona
3934 https://en.wikipedia.org/w/index.php%3ftitle=User:Jadrian&action=edit&redlink=1


13 Jafet3935
5 Jagadeesh hooli3936
2 Jagged 853937
1 Jaguaraci3938
1 Jai dit3939
3 Jaimecohen3940
1 Jake Wartenberg3941
1 JakeD4093942
1 Jakito3943
1 Jakob Voss3944
1 Jaksmata3945
1 Jakub Vrána3946
4 Jalal03947
1 Jalal4763948
2 Jaleks3949
1 Jalestro3950
2 Jalpar753951
2 Jamartinh3952
19 Jamelan3953
1 James Jim Moriarty3954
1 James Kemp3955
5 James pic3956
1 James.S3957
1 James43883958
1 JamesGecko3959

3935 https://en.wikipedia.org/wiki/User:Jafet
3936 https://en.wikipedia.org/w/index.php%3ftitle=User:Jagadeesh_hooli&action=edit&redlink=1
3937 https://en.wikipedia.org/wiki/User:Jagged_85
3938 https://en.wikipedia.org/w/index.php%3ftitle=User:Jaguaraci&action=edit&redlink=1
3939 https://en.wikipedia.org/wiki/User:Jai_dit
3940 https://en.wikipedia.org/w/index.php%3ftitle=User:Jaimecohen&action=edit&redlink=1
3941 https://en.wikipedia.org/wiki/User:Jake_Wartenberg
3942 https://en.wikipedia.org/w/index.php%3ftitle=User:JakeD409&action=edit&redlink=1
3943 https://en.wikipedia.org/wiki/User:Jakito
3944 https://en.wikipedia.org/wiki/User:Jakob_Voss
3945 https://en.wikipedia.org/wiki/User:Jaksmata
3946 https://en.wikipedia.org/wiki/User:Jakub_Vr%25C3%25A1na
3947 https://en.wikipedia.org/wiki/User:Jalal0
3948 https://en.wikipedia.org/w/index.php%3ftitle=User:Jalal476&action=edit&redlink=1
3949 https://en.wikipedia.org/wiki/User:Jaleks
3950 https://en.wikipedia.org/w/index.php%3ftitle=User:Jalestro&action=edit&redlink=1
3951 https://en.wikipedia.org/w/index.php%3ftitle=User:Jalpar75&action=edit&redlink=1
3952 https://en.wikipedia.org/w/index.php%3ftitle=User:Jamartinh&action=edit&redlink=1
3953 https://en.wikipedia.org/wiki/User:Jamelan
3954 https://en.wikipedia.org/wiki/User:James_Jim_Moriarty
3955 https://en.wikipedia.org/wiki/User:James_Kemp
3956 https://en.wikipedia.org/wiki/User:James_pic
3957 https://en.wikipedia.org/wiki/User:James.S
3958 https://en.wikipedia.org/w/index.php%3ftitle=User:James4388&action=edit&redlink=1
3959 https://en.wikipedia.org/wiki/User:JamesGecko


2 JamesHDavenport3960
1 JamesMishra3961
1 JamesNZ3962
3 Jamesd90073963
7 Jamesday3964
2 Jameshfisher3965
1 Jamesmcoughlan3966
1 Jamesmontalvo33967
1 JameswKinser113968
3 Jamesx123453969
5 Jamgoodman3970
1 Jamie King3971
3 Jamjam3373972
11 Jan Hidders3973
1 Jan Spousta3974
1 Jan Wassenberg3975
6 Jan Winnicki3976
1 Jan.Kamenicek3977
2 JanKuipers3978
1 JanSegre3979
3 JanSuchy3980
2 Jandalhandler3981
1 Janes353982
1 Jangirke3983
1 Janm673984

3960 https://en.wikipedia.org/wiki/User:JamesHDavenport
3961 https://en.wikipedia.org/wiki/User:JamesMishra
3962 https://en.wikipedia.org/wiki/User:JamesNZ
3963 https://en.wikipedia.org/w/index.php%3ftitle=User:Jamesd9007&action=edit&redlink=1
3964 https://en.wikipedia.org/wiki/User:Jamesday
3965 https://en.wikipedia.org/wiki/User:Jameshfisher
3966 https://en.wikipedia.org/w/index.php%3ftitle=User:Jamesmcoughlan&action=edit&redlink=1
3967 https://en.wikipedia.org/wiki/User:Jamesmontalvo3
3968 https://en.wikipedia.org/wiki/User:JameswKinser11
3969 https://en.wikipedia.org/wiki/User:Jamesx12345
3970 https://en.wikipedia.org/wiki/User:Jamgoodman
3971 https://en.wikipedia.org/wiki/User:Jamie_King
3972 https://en.wikipedia.org/w/index.php%3ftitle=User:Jamjam337&action=edit&redlink=1
3973 https://en.wikipedia.org/wiki/User:Jan_Hidders
3974 https://en.wikipedia.org/wiki/User:Jan_Spousta
3975 https://en.wikipedia.org/wiki/User:Jan_Wassenberg
3976 https://en.wikipedia.org/wiki/User:Jan_Winnicki
3977 https://en.wikipedia.org/wiki/User:Jan.Kamenicek
3978 https://en.wikipedia.org/w/index.php%3ftitle=User:JanKuipers&action=edit&redlink=1
3979 https://en.wikipedia.org/wiki/User:JanSegre
3980 https://en.wikipedia.org/wiki/User:JanSuchy
3981 https://en.wikipedia.org/wiki/User:Jandalhandler
3982 https://en.wikipedia.org/w/index.php%3ftitle=User:Janes35&action=edit&redlink=1
3983 https://en.wikipedia.org/w/index.php%3ftitle=User:Jangirke&action=edit&redlink=1
3984 https://en.wikipedia.org/wiki/User:Janm67


1 Janus.debondt3985
1 Janzert3986
4 Jao3987
1 Japanese Searobin3988
1 Japo3989
2 Jaraiya3990
1 Jarajapu3991
58 Jarble3992
23 Jaredwf3993
1 JasSingh993994
2 Jashar3995
2 Jasidaoui3996
1 Jasmjc23997
1 Jason Davies3998
4 Jason Quinn3999
1 Jason S Heise4000
1 Jason-derp864001
2 Jason.Rafe.Miller4002
1 Jason.surratt4003
4 Jasonb054004
1 Jasonwlh3144005
1 Jasonyo4006
13 Jasper Deng4007
1 Jassyt4008
1 Jauclair~enwiki4009

3985 https://en.wikipedia.org/w/index.php%3ftitle=User:Janus.debondt&action=edit&redlink=1
3986 https://en.wikipedia.org/w/index.php%3ftitle=User:Janzert&action=edit&redlink=1
3987 https://en.wikipedia.org/wiki/User:Jao
3988 https://en.wikipedia.org/wiki/User:Japanese_Searobin
3989 https://en.wikipedia.org/wiki/User:Japo
3990 https://en.wikipedia.org/w/index.php%3ftitle=User:Jaraiya&action=edit&redlink=1
3991 https://en.wikipedia.org/wiki/User:Jarajapu
3992 https://en.wikipedia.org/wiki/User:Jarble
3993 https://en.wikipedia.org/wiki/User:Jaredwf
3994 https://en.wikipedia.org/w/index.php%3ftitle=User:JasSingh99&action=edit&redlink=1
3995 https://en.wikipedia.org/w/index.php%3ftitle=User:Jashar&action=edit&redlink=1
3996 https://en.wikipedia.org/wiki/User:Jasidaoui
3997 https://en.wikipedia.org/w/index.php%3ftitle=User:Jasmjc2&action=edit&redlink=1
3998 https://en.wikipedia.org/wiki/User:Jason_Davies
3999 https://en.wikipedia.org/wiki/User:Jason_Quinn
4000 https://en.wikipedia.org/wiki/User:Jason_S_Heise
4001 https://en.wikipedia.org/w/index.php%3ftitle=User:Jason-derp86&action=edit&redlink=1
4002 https://en.wikipedia.org/w/index.php%3ftitle=User:Jason.Rafe.Miller&action=edit&redlink=1
4003 https://en.wikipedia.org/wiki/User:Jason.surratt
4004 https://en.wikipedia.org/wiki/User:Jasonb05
4005 https://en.wikipedia.org/w/index.php%3ftitle=User:Jasonwlh314&action=edit&redlink=1
4006 https://en.wikipedia.org/wiki/User:Jasonyo
4007 https://en.wikipedia.org/wiki/User:Jasper_Deng
4008 https://en.wikipedia.org/w/index.php%3ftitle=User:Jassyt&action=edit&redlink=1
4009 https://en.wikipedia.org/wiki/User:Jauclair~enwiki


1 Jauerback4010
2 JavaRogers4011
1 Javalenok4012
1 JavierMC4013
1 Javierito924014
1 Javit4015
1 Jawillens4016
2 Jax Omen4017
1 Jaxelrod4018
1 Jaxl4019
1 Jay Litman4020
1 Jay.Param4021
6 Jayamohan4022
1 Jayaneethatj4023
1 Jayanta Sen4024
1 Jaybuffington4025
109 Jaydavidmartin4026
1 Jayendran.j4027
1 JaykGrey4028
3 Jayme4029
1 Jazmatician4030
1 Jbalint4031
2 Jbander4032
2 Jbellessa874033
2 Jbonneau4034

4010 https://en.wikipedia.org/wiki/User:Jauerback
4011 https://en.wikipedia.org/w/index.php%3ftitle=User:JavaRogers&action=edit&redlink=1
4012 https://en.wikipedia.org/wiki/User:Javalenok
4013 https://en.wikipedia.org/wiki/User:JavierMC
4014 https://en.wikipedia.org/wiki/User:Javierito92
4015 https://en.wikipedia.org/wiki/User:Javit
4016 https://en.wikipedia.org/w/index.php%3ftitle=User:Jawillens&action=edit&redlink=1
4017 https://en.wikipedia.org/w/index.php%3ftitle=User:Jax_Omen&action=edit&redlink=1
4018 https://en.wikipedia.org/w/index.php%3ftitle=User:Jaxelrod&action=edit&redlink=1
4019 https://en.wikipedia.org/wiki/User:Jaxl
4020 https://en.wikipedia.org/wiki/User:Jay_Litman
4021 https://en.wikipedia.org/w/index.php%3ftitle=User:Jay.Param&action=edit&redlink=1
4022 https://en.wikipedia.org/w/index.php%3ftitle=User:Jayamohan&action=edit&redlink=1
4023 https://en.wikipedia.org/wiki/User:Jayaneethatj
4024 https://en.wikipedia.org/wiki/User:Jayanta_Sen
4025 https://en.wikipedia.org/wiki/User:Jaybuffington
4026 https://en.wikipedia.org/wiki/User:Jaydavidmartin
4027 https://en.wikipedia.org/w/index.php%3ftitle=User:Jayendran.j&action=edit&redlink=1
4028 https://en.wikipedia.org/wiki/User:JaykGrey
4029 https://en.wikipedia.org/w/index.php%3ftitle=User:Jayme&action=edit&redlink=1
4030 https://en.wikipedia.org/wiki/User:Jazmatician
4031 https://en.wikipedia.org/wiki/User:Jbalint
4032 https://en.wikipedia.org/w/index.php%3ftitle=User:Jbander&action=edit&redlink=1
4033 https://en.wikipedia.org/wiki/User:Jbellessa87
4034 https://en.wikipedia.org/w/index.php%3ftitle=User:Jbonneau&action=edit&redlink=1


2 Jbragadeesh4035
2 Jby Yeah4036
1 Jcarroll4037
1 Jclin4038
2 Jc~enwiki4039
1 Jdaloner4040
1 Jdanz4041
1 Jdaudier4042
1 Jdb67usa4043
1 Jdcomix4044
1 Jdelatorr4045
4 Jdfekete4046
1 Jdforrester4047
3 Jdh304048
3 Jdhedden4049
1 Jdm644050
3 Jdpipe4051
2 Jdurham64052
5 JeLuF4053
1 Jeaise4054
1 Jean-Gabriel Young4055
1 Jean-Pierre de la Croix4056
1 Jean.julius4057
1 Jeberle4058
2 Jecraig@yahoo.com4059

4035 https://en.wikipedia.org/w/index.php%3ftitle=User:Jbragadeesh&action=edit&redlink=1
4036 https://en.wikipedia.org/w/index.php%3ftitle=User:Jby_Yeah&action=edit&redlink=1
4037 https://en.wikipedia.org/wiki/User:Jcarroll
4038 https://en.wikipedia.org/wiki/User:Jclin
4039 https://en.wikipedia.org/wiki/User:Jc~enwiki
4040 https://en.wikipedia.org/w/index.php%3ftitle=User:Jdaloner&action=edit&redlink=1
4041 https://en.wikipedia.org/w/index.php%3ftitle=User:Jdanz&action=edit&redlink=1
4042 https://en.wikipedia.org/w/index.php%3ftitle=User:Jdaudier&action=edit&redlink=1
4043 https://en.wikipedia.org/w/index.php%3ftitle=User:Jdb67usa&action=edit&redlink=1
4044 https://en.wikipedia.org/wiki/User:Jdcomix
4045 https://en.wikipedia.org/w/index.php%3ftitle=User:Jdelatorr&action=edit&redlink=1
4046 https://en.wikipedia.org/wiki/User:Jdfekete
4047 https://en.wikipedia.org/wiki/User:Jdforrester
4048 https://en.wikipedia.org/w/index.php%3ftitle=User:Jdh30&action=edit&redlink=1
4049 https://en.wikipedia.org/w/index.php%3ftitle=User:Jdhedden&action=edit&redlink=1
4050 https://en.wikipedia.org/wiki/User:Jdm64
4051 https://en.wikipedia.org/wiki/User:Jdpipe
4052 https://en.wikipedia.org/wiki/User:Jdurham6
4053 https://en.wikipedia.org/wiki/User:JeLuF
4054 https://en.wikipedia.org/w/index.php%3ftitle=User:Jeaise&action=edit&redlink=1
4055 https://en.wikipedia.org/w/index.php%3ftitle=User:Jean-Gabriel_Young&action=edit&redlink=1
4056 https://en.wikipedia.org/w/index.php%3ftitle=User:Jean-Pierre_de_la_Croix&action=edit&redlink=1
4057 https://en.wikipedia.org/wiki/User:Jean.julius
4058 https://en.wikipedia.org/wiki/User:Jeberle
4059 https://en.wikipedia.org/w/index.php%3ftitle=User:Jecraig@yahoo.com&action=edit&redlink=1


3 Jedaily4060
1 JediAmrit4061
6 Jediknil4062
1 Jeenuv4063
1 JeepdaySock4064
1 Jef413414065
1 Jefepat4066
1 Jeff Dahl4067
19 Jeff Erickson4068
5 Jeff G.4069
3 Jeff30004070
2 JeffAEdmonds4071
7 JeffDonner4072
1 Jeffbadge4073
1 Jeffhoy4074
1 Jeffmilner4075
1 Jeffq4076
2 Jeffra454077
1 Jeffrd104078
2 Jeffrey Mall4079
1 JeffyTheDragonSlayer4080
9 Jeh4081
1 Jellysandwich04082
18 Jellyworld4083
1 Jelsova4084

4060 https://en.wikipedia.org/wiki/User:Jedaily
4061 https://en.wikipedia.org/w/index.php%3ftitle=User:JediAmrit&action=edit&redlink=1
4062 https://en.wikipedia.org/wiki/User:Jediknil
4063 https://en.wikipedia.org/w/index.php%3ftitle=User:Jeenuv&action=edit&redlink=1
4064 https://en.wikipedia.org/wiki/User:JeepdaySock
4065 https://en.wikipedia.org/w/index.php%3ftitle=User:Jef41341&action=edit&redlink=1
4066 https://en.wikipedia.org/w/index.php%3ftitle=User:Jefepat&action=edit&redlink=1
4067 https://en.wikipedia.org/wiki/User:Jeff_Dahl
4068 https://en.wikipedia.org/wiki/User:Jeff_Erickson
4069 https://en.wikipedia.org/wiki/User:Jeff_G.
4070 https://en.wikipedia.org/wiki/User:Jeff3000
4071 https://en.wikipedia.org/w/index.php%3ftitle=User:JeffAEdmonds&action=edit&redlink=1
4072 https://en.wikipedia.org/wiki/User:JeffDonner
4073 https://en.wikipedia.org/wiki/User:Jeffbadge
4074 https://en.wikipedia.org/wiki/User:Jeffhoy
4075 https://en.wikipedia.org/wiki/User:Jeffmilner
4076 https://en.wikipedia.org/wiki/User:Jeffq
4077 https://en.wikipedia.org/wiki/User:Jeffra45
4078 https://en.wikipedia.org/wiki/User:Jeffrd10
4079 https://en.wikipedia.org/wiki/User:Jeffrey_Mall
4080 https://en.wikipedia.org/wiki/User:JeffyTheDragonSlayer
4081 https://en.wikipedia.org/wiki/User:Jeh
4082 https://en.wikipedia.org/w/index.php%3ftitle=User:Jellysandwich0&action=edit&redlink=1
4083 https://en.wikipedia.org/w/index.php%3ftitle=User:Jellyworld&action=edit&redlink=1
4084 https://en.wikipedia.org/wiki/User:Jelsova


6 Jeltz4085
2 Jemfinch4086
2 Jenerick4087
5 Jengelh4088
2 Jenny Lam4089
1 Jeodesic4090
1 JeremiahY4091
1 Jeremy W Powell4092
2 Jeremy rutman4093
1 Jeremymy4094
8 Jerf4095
1 Jeri Aulurtve4096
1 Jerk1114097
2 Jerodlycett4098
1 Jerome zhu4099
4 Jeronimo4100
3 Jerry Hintze4101
3 JerryFriedman4102
7 Jerryobject4103
1 Jersey Devil4104
1 JerseyChewi4105
10 Jesin4106
1 Jespdj4107
2 Jesse V.4108
14 Jesse Viviano4109

4085 https://en.wikipedia.org/wiki/User:Jeltz
4086 https://en.wikipedia.org/w/index.php%3ftitle=User:Jemfinch&action=edit&redlink=1
4087 https://en.wikipedia.org/wiki/User:Jenerick
4088 https://en.wikipedia.org/wiki/User:Jengelh
4089 https://en.wikipedia.org/w/index.php%3ftitle=User:Jenny_Lam&action=edit&redlink=1
4090 https://en.wikipedia.org/wiki/User:Jeodesic
4091 https://en.wikipedia.org/wiki/User:JeremiahY
4092 https://en.wikipedia.org/w/index.php%3ftitle=User:Jeremy_W_Powell&action=edit&redlink=1
4093 https://en.wikipedia.org/wiki/User:Jeremy_rutman
4094 https://en.wikipedia.org/w/index.php%3ftitle=User:Jeremymy&action=edit&redlink=1
4095 https://en.wikipedia.org/wiki/User:Jerf
4096 https://en.wikipedia.org/w/index.php%3ftitle=User:Jeri_Aulurtve&action=edit&redlink=1
4097 https://en.wikipedia.org/w/index.php%3ftitle=User:Jerk111&action=edit&redlink=1
4098 https://en.wikipedia.org/wiki/User:Jerodlycett
4099 https://en.wikipedia.org/w/index.php%3ftitle=User:Jerome_zhu&action=edit&redlink=1
4100 https://en.wikipedia.org/wiki/User:Jeronimo
4101 https://en.wikipedia.org/wiki/User:Jerry_Hintze
4102 https://en.wikipedia.org/wiki/User:JerryFriedman
4103 https://en.wikipedia.org/wiki/User:Jerryobject
4104 https://en.wikipedia.org/wiki/User:Jersey_Devil
4105 https://en.wikipedia.org/wiki/User:JerseyChewi
4106 https://en.wikipedia.org/wiki/User:Jesin
4107 https://en.wikipedia.org/w/index.php%3ftitle=User:Jespdj&action=edit&redlink=1
4108 https://en.wikipedia.org/wiki/User:Jesse_V.
4109 https://en.wikipedia.org/wiki/User:Jesse_Viviano


2 JesseStone4110
2 JesseW4111
1 JetCrusherTorpedo4112
1 Jetroberts4113
4 Jevy12344114
2 Jewillco4115
1 Jey424116
4 Jez99994117
1 Jfd344118
3 Jflabourdette4119
8 Jfmantis4120
1 Jfraumen4121
1 Jfreyreg4122
1 Jfriedly4123
2 Jfroelich4124
1 Jftuga4125
1 Jfwk4126
1 Jgabe5504127
6 Jgarrett4128
23 Jheiv4129
2 Jheld884130
8 Jhertel4131
1 Jheuristic4132
1 Jhill2704133
2 Jhorthos4134

4110 https://en.wikipedia.org/wiki/User:JesseStone
4111 https://en.wikipedia.org/wiki/User:JesseW
4112 https://en.wikipedia.org/wiki/User:JetCrusherTorpedo
4113 https://en.wikipedia.org/wiki/User:Jetroberts
4114 https://en.wikipedia.org/w/index.php%3ftitle=User:Jevy1234&action=edit&redlink=1
4115 https://en.wikipedia.org/w/index.php%3ftitle=User:Jewillco&action=edit&redlink=1
4116 https://en.wikipedia.org/w/index.php%3ftitle=User:Jey42&action=edit&redlink=1
4117 https://en.wikipedia.org/w/index.php%3ftitle=User:Jez9999&action=edit&redlink=1
4118 https://en.wikipedia.org/w/index.php%3ftitle=User:Jfd34&action=edit&redlink=1
4119 https://en.wikipedia.org/wiki/User:Jflabourdette
4120 https://en.wikipedia.org/wiki/User:Jfmantis
4121 https://en.wikipedia.org/wiki/User:Jfraumen
4122 https://en.wikipedia.org/w/index.php%3ftitle=User:Jfreyreg&action=edit&redlink=1
4123 https://en.wikipedia.org/wiki/User:Jfriedly
4124 https://en.wikipedia.org/wiki/User:Jfroelich
4125 https://en.wikipedia.org/w/index.php%3ftitle=User:Jftuga&action=edit&redlink=1
4126 https://en.wikipedia.org/w/index.php%3ftitle=User:Jfwk&action=edit&redlink=1
4127 https://en.wikipedia.org/w/index.php%3ftitle=User:Jgabe550&action=edit&redlink=1
4128 https://en.wikipedia.org/w/index.php%3ftitle=User:Jgarrett&action=edit&redlink=1
4129 https://en.wikipedia.org/wiki/User:Jheiv
4130 https://en.wikipedia.org/wiki/User:Jheld88
4131 https://en.wikipedia.org/wiki/User:Jhertel
4132 https://en.wikipedia.org/wiki/User:Jheuristic
4133 https://en.wikipedia.org/wiki/User:Jhill270
4134 https://en.wikipedia.org/wiki/User:Jhorthos


7 JhsBot4135
1 JialinOuyang4136
3 Jiang4137
2 Jianhui674138
1 Jiayq4139
1 Jidanni4140
1 JidongChen4141
2 Jiesiren4142
1 Jiffles14143
2 Jihlim4144
1 Jihuni4145
2 Jiklo15694146
1 Jillglen4147
1 Jim Huggins4148
4 Jim McKeeth4149
1 Jim baker4150
2 Jim wales jr4151
1 Jim.Callahan,Orlando4152
1 Jim.henderson4153
44 Jim11384154
2 JimD4155
1 JimDeLaHunt4156
1 JimJJewett4157
2 JimVC34158
1 Jimblackler4159

4135 https://en.wikipedia.org/wiki/User:JhsBot
4136 https://en.wikipedia.org/w/index.php%3ftitle=User:JialinOuyang&action=edit&redlink=1
4137 https://en.wikipedia.org/wiki/User:Jiang
4138 https://en.wikipedia.org/wiki/User:Jianhui67
4139 https://en.wikipedia.org/w/index.php%3ftitle=User:Jiayq&action=edit&redlink=1
4140 https://en.wikipedia.org/wiki/User:Jidanni
4141 https://en.wikipedia.org/w/index.php%3ftitle=User:JidongChen&action=edit&redlink=1
4142 https://en.wikipedia.org/w/index.php%3ftitle=User:Jiesiren&action=edit&redlink=1
4143 https://en.wikipedia.org/w/index.php%3ftitle=User:Jiffles1&action=edit&redlink=1
4144 https://en.wikipedia.org/w/index.php%3ftitle=User:Jihlim&action=edit&redlink=1
4145 https://en.wikipedia.org/wiki/User:Jihuni
4146 https://en.wikipedia.org/wiki/User:Jiklo1569
4147 https://en.wikipedia.org/w/index.php%3ftitle=User:Jillglen&action=edit&redlink=1
4148 https://en.wikipedia.org/wiki/User:Jim_Huggins
4149 https://en.wikipedia.org/wiki/User:Jim_McKeeth
4150 https://en.wikipedia.org/w/index.php%3ftitle=User:Jim_baker&action=edit&redlink=1
4151 https://en.wikipedia.org/w/index.php%3ftitle=User:Jim_wales_jr&action=edit&redlink=1
4152 https://en.wikipedia.org/wiki/User:Jim.Callahan,Orlando
4153 https://en.wikipedia.org/wiki/User:Jim.henderson
4154 https://en.wikipedia.org/wiki/User:Jim1138
4155 https://en.wikipedia.org/wiki/User:JimD
4156 https://en.wikipedia.org/wiki/User:JimDeLaHunt
4157 https://en.wikipedia.org/w/index.php%3ftitle=User:JimJJewett&action=edit&redlink=1
4158 https://en.wikipedia.org/wiki/User:JimVC3
4159 https://en.wikipedia.org/wiki/User:Jimblackler


5 Jimbreed4160
1 JimmyChen4161
1 Jimmypizza4162
1 Jimmytharpe4163
1 Jimp4164
8 Jimw3384165
1 JindalApoorv4166
4 JingguoYao4167
6 JingleSting4168
1 Jinma4169
5 Jin~enwiki4170
1 Jiri Pavelka4171
14 Jirka64172
1 Jiten D4173
38 Jitse Niesen4174
1 Jittat~enwiki4175
8 Jiuguang Wang4176
1 Jixani4177
4 Jiyeyuran4178
1 Jj1374179
2 Jjhat14180
1 Jjjjjjjjjj4181
1 Jjsoos4182
3 Jjurski4183
8 Jk2q3jrklse4184

4160 https://en.wikipedia.org/wiki/User:Jimbreed
4161 https://en.wikipedia.org/w/index.php%3ftitle=User:JimmyChen&action=edit&redlink=1
4162 https://en.wikipedia.org/w/index.php%3ftitle=User:Jimmypizza&action=edit&redlink=1
4163 https://en.wikipedia.org/wiki/User:Jimmytharpe
4164 https://en.wikipedia.org/wiki/User:Jimp
4165 https://en.wikipedia.org/wiki/User:Jimw338
4166 https://en.wikipedia.org/w/index.php%3ftitle=User:JindalApoorv&action=edit&redlink=1
4167 https://en.wikipedia.org/w/index.php%3ftitle=User:JingguoYao&action=edit&redlink=1
4168 https://en.wikipedia.org/w/index.php%3ftitle=User:JingleSting&action=edit&redlink=1
4169 https://en.wikipedia.org/wiki/User:Jinma
4170 https://en.wikipedia.org/wiki/User:Jin~enwiki
4171 https://en.wikipedia.org/w/index.php%3ftitle=User:Jiri_Pavelka&action=edit&redlink=1
4172 https://en.wikipedia.org/wiki/User:Jirka6
4173 https://en.wikipedia.org/wiki/User:Jiten_D
4174 https://en.wikipedia.org/wiki/User:Jitse_Niesen
4175 https://en.wikipedia.org/w/index.php%3ftitle=User:Jittat~enwiki&action=edit&redlink=1
4176 https://en.wikipedia.org/wiki/User:Jiuguang_Wang
4177 https://en.wikipedia.org/w/index.php%3ftitle=User:Jixani&action=edit&redlink=1
4178 https://en.wikipedia.org/w/index.php%3ftitle=User:Jiyeyuran&action=edit&redlink=1
4179 https://en.wikipedia.org/wiki/User:Jj137
4180 https://en.wikipedia.org/wiki/User:Jjhat1
4181 https://en.wikipedia.org/wiki/User:Jjjjjjjjjj
4182 https://en.wikipedia.org/w/index.php%3ftitle=User:Jjsoos&action=edit&redlink=1
4183 https://en.wikipedia.org/w/index.php%3ftitle=User:Jjurski&action=edit&redlink=1
4184 https://en.wikipedia.org/w/index.php%3ftitle=User:Jk2q3jrklse&action=edit&redlink=1


1 Jkl4185
4 Jleedev4186
5 Jlind04187
3 Jll4188
1 Jlpinar834189
10 Jludwig4190
8 Jlyons244191
1 Jmacglashan4192
5 Jmadison1174193
1 Jmagdanz4194
1 Jmah4195
2 Jmalki4196
1 Jmallios4197
2 Jman07264198
1 Jmartinezot4199
1 Jmath6664200
1 Jmcc1504201
2 Jmechevarria4202
4 Jmencisom4203
3 Jmgibson33124204
1 Jmhain4205
3 Jmluy174206
1 Jms494207
3 Jmuessig4208
1 Jmw028244209

4185 https://en.wikipedia.org/wiki/User:Jkl
4186 https://en.wikipedia.org/wiki/User:Jleedev
4187 https://en.wikipedia.org/wiki/User:Jlind0
4188 https://en.wikipedia.org/wiki/User:Jll
4189 https://en.wikipedia.org/wiki/User:Jlpinar83
4190 https://en.wikipedia.org/wiki/User:Jludwig
4191 https://en.wikipedia.org/w/index.php%3ftitle=User:Jlyons24&action=edit&redlink=1
4192 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmacglashan&action=edit&redlink=1
4193 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmadison117&action=edit&redlink=1
4194 https://en.wikipedia.org/wiki/User:Jmagdanz
4195 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmah&action=edit&redlink=1
4196 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmalki&action=edit&redlink=1
4197 https://en.wikipedia.org/wiki/User:Jmallios
4198 https://en.wikipedia.org/w/index.php%3ftitle=User:Jman0726&action=edit&redlink=1
4199 https://en.wikipedia.org/wiki/User:Jmartinezot
4200 https://en.wikipedia.org/wiki/User:Jmath666
4201 https://en.wikipedia.org/wiki/User:Jmcc150
4202 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmechevarria&action=edit&redlink=1
4203 https://en.wikipedia.org/wiki/User:Jmencisom
4204 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmgibson3312&action=edit&redlink=1
4205 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmhain&action=edit&redlink=1
4206 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmluy17&action=edit&redlink=1
4207 https://en.wikipedia.org/w/index.php%3ftitle=User:Jms49&action=edit&redlink=1
4208 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmuessig&action=edit&redlink=1
4209 https://en.wikipedia.org/w/index.php%3ftitle=User:Jmw02824&action=edit&redlink=1


2 Jncraton4210
4 Jnestorius4211
1 Jnhnum14212
1 Jni4213
1 Jnlin4214
1 Jnordwick4215
1 Jnoring4216
3 Joachimvalente4217
1 Jobu01014218
106 Jochen Burghardt4219
2 Jochgem4220
2 Joe Schmedley4221
4 JoeKearney4222
4 Joefromrandb4223
152 Joel B. Lewis4224
2 Joel Brennan4225
1 Joeleoj1234226
4 Joelimlimit4227
1 Joelpt4228
1 Joerg Bader4229
5 Joerg Kurt Wegner4230
2 JoergenB4231
1 Joespiff4232
1 Joestape894233
1 Joey-das-WBF4234

4210 https://en.wikipedia.org/wiki/User:Jncraton
4211 https://en.wikipedia.org/wiki/User:Jnestorius
4212 https://en.wikipedia.org/w/index.php%3ftitle=User:Jnhnum1&action=edit&redlink=1
4213 https://en.wikipedia.org/wiki/User:Jni
4214 https://en.wikipedia.org/wiki/User:Jnlin
4215 https://en.wikipedia.org/w/index.php%3ftitle=User:Jnordwick&action=edit&redlink=1
4216 https://en.wikipedia.org/wiki/User:Jnoring
4217 https://en.wikipedia.org/w/index.php%3ftitle=User:Joachimvalente&action=edit&redlink=1
4218 https://en.wikipedia.org/wiki/User:Jobu0101
4219 https://en.wikipedia.org/wiki/User:Jochen_Burghardt
4220 https://en.wikipedia.org/w/index.php%3ftitle=User:Jochgem&action=edit&redlink=1
4221 https://en.wikipedia.org/wiki/User:Joe_Schmedley
4222 https://en.wikipedia.org/wiki/User:JoeKearney
4223 https://en.wikipedia.org/wiki/User:Joefromrandb
4224 https://en.wikipedia.org/wiki/User:Joel_B._Lewis
4225 https://en.wikipedia.org/w/index.php%3ftitle=User:Joel_Brennan&action=edit&redlink=1
4226 https://en.wikipedia.org/w/index.php%3ftitle=User:Joeleoj123&action=edit&redlink=1
4227 https://en.wikipedia.org/wiki/User:Joelimlimit
4228 https://en.wikipedia.org/w/index.php%3ftitle=User:Joelpt&action=edit&redlink=1
4229 https://en.wikipedia.org/w/index.php%3ftitle=User:Joerg_Bader&action=edit&redlink=1
4230 https://en.wikipedia.org/wiki/User:Joerg_Kurt_Wegner
4231 https://en.wikipedia.org/wiki/User:JoergenB
4232 https://en.wikipedia.org/w/index.php%3ftitle=User:Joespiff&action=edit&redlink=1
4233 https://en.wikipedia.org/w/index.php%3ftitle=User:Joestape89&action=edit&redlink=1
4234 https://en.wikipedia.org/wiki/User:Joey-das-WBF


1 Jogers4235
8 Jogloran4236
1 Johan.de.Ruiter4237
1 Johand1994238
1 Johanna4239
1 John4240
3 John ”Hannibal” Smith4241
1 John Comeau4242
1 John Reed Riley4243
3 John Vandenberg4244
1 John lilburne4245
1 John lindgren4246
23 John of Reading4247
1 John854248
45 JohnBlackburne4249
1 JohnBoyerPhd4250
1 JohnI4251
4 JohnOwens4252
1 JohnWStockwell4253
2 Johndburger4254
1 Johndhackensacker3d4255
1 Johngouf854256
1 Johnjianfang4257
2 Johnl14794258
1 Johnleach4259

4235 https://en.wikipedia.org/wiki/User:Jogers
4236 https://en.wikipedia.org/wiki/User:Jogloran
4237 https://en.wikipedia.org/w/index.php%3ftitle=User:Johan.de.Ruiter&action=edit&redlink=1
4238 https://en.wikipedia.org/wiki/User:Johand199
4239 https://en.wikipedia.org/wiki/User:Johanna
4240 https://en.wikipedia.org/wiki/User:John
4241 https://en.wikipedia.org/wiki/User:John_%2522Hannibal%2522_Smith
4242 https://en.wikipedia.org/wiki/User:John_Comeau
4243 https://en.wikipedia.org/wiki/User:John_Reed_Riley
4244 https://en.wikipedia.org/wiki/User:John_Vandenberg
4245 https://en.wikipedia.org/wiki/User:John_lilburne
4246 https://en.wikipedia.org/w/index.php%3ftitle=User:John_lindgren&action=edit&redlink=1
4247 https://en.wikipedia.org/wiki/User:John_of_Reading
4248 https://en.wikipedia.org/w/index.php%3ftitle=User:John85&action=edit&redlink=1
4249 https://en.wikipedia.org/wiki/User:JohnBlackburne
4250 https://en.wikipedia.org/w/index.php%3ftitle=User:JohnBoyerPhd&action=edit&redlink=1
4251 https://en.wikipedia.org/wiki/User:JohnI
4252 https://en.wikipedia.org/wiki/User:JohnOwens
4253 https://en.wikipedia.org/w/index.php%3ftitle=User:JohnWStockwell&action=edit&redlink=1
4254 https://en.wikipedia.org/wiki/User:Johndburger
4255 https://en.wikipedia.org/wiki/User:Johndhackensacker3d
4256 https://en.wikipedia.org/w/index.php%3ftitle=User:Johngouf85&action=edit&redlink=1
4257 https://en.wikipedia.org/w/index.php%3ftitle=User:Johnjianfang&action=edit&redlink=1
4258 https://en.wikipedia.org/wiki/User:Johnl1479
4259 https://en.wikipedia.org/wiki/User:Johnleach


1 Johnny Zoo4260
1 Johnsopc4261
18 Johnuniq4262
2 Johnwon4263
2 Jojalozzo4264
2 Jojan4265
1 Jojit fb4266
11 Jok20004267
6 Jokes Free4Me4268
1 Joli Tambour4269
2 Jolsfa1234270
1 Jolt5274271
43 Jon Awbrey4272
1 Jon har4273
1 Jon.c.anderson4274
3 JonDePlume4275
1 JonGinny4276
1 JonH4277
8 JonHarder4278
1 JonJacobsen4279
1 JonMarkPerry4280
6 Jonadab~enwiki4281
1 Jonahugh4282
2 Jonas Kölker4283
1 Jonas.b.jensen4284

4260 https://en.wikipedia.org/wiki/User:Johnny_Zoo
4261 https://en.wikipedia.org/wiki/User:Johnsopc
4262 https://en.wikipedia.org/wiki/User:Johnuniq
4263 https://en.wikipedia.org/w/index.php%3ftitle=User:Johnwon&action=edit&redlink=1
4264 https://en.wikipedia.org/wiki/User:Jojalozzo
4265 https://en.wikipedia.org/wiki/User:Jojan
4266 https://en.wikipedia.org/wiki/User:Jojit_fb
4267 https://en.wikipedia.org/wiki/User:Jok2000
4268 https://en.wikipedia.org/wiki/User:Jokes_Free4Me
4269 https://en.wikipedia.org/w/index.php%3ftitle=User:Joli_Tambour&action=edit&redlink=1
4270 https://en.wikipedia.org/w/index.php%3ftitle=User:Jolsfa123&action=edit&redlink=1
4271 https://en.wikipedia.org/w/index.php%3ftitle=User:Jolt527&action=edit&redlink=1
4272 https://en.wikipedia.org/wiki/User:Jon_Awbrey
4273 https://en.wikipedia.org/wiki/User:Jon_har
4274 https://en.wikipedia.org/w/index.php%3ftitle=User:Jon.c.anderson&action=edit&redlink=1
4275 https://en.wikipedia.org/wiki/User:JonDePlume
4276 https://en.wikipedia.org/w/index.php%3ftitle=User:JonGinny&action=edit&redlink=1
4277 https://en.wikipedia.org/wiki/User:JonH
4278 https://en.wikipedia.org/wiki/User:JonHarder
4279 https://en.wikipedia.org/w/index.php%3ftitle=User:JonJacobsen&action=edit&redlink=1
4280 https://en.wikipedia.org/w/index.php%3ftitle=User:JonMarkPerry&action=edit&redlink=1
4281 https://en.wikipedia.org/wiki/User:Jonadab~enwiki
4282 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonahugh&action=edit&redlink=1
4283 https://en.wikipedia.org/wiki/User:Jonas_K%25C3%25B6lker
4284 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonas.b.jensen&action=edit&redlink=1


1 Jonasbn4285
1 Jonathan de Boyne Pollard4286
1 JonathanFreed4287
1 Jonathanzung4288
2 Jonchen424289
1 Joncop4290
13 Jonesey954291
2 Jongman.koo4292
1 Jonhall4293
1 Jonhkr4294
2 Joniscool984295
1 Jonmcauliffe4296
1 Jonnty4297
1 Jonny Diamond4298
1 Jonnypurgatory4299
1 Jonon4300
3 Jonsafari4301
1 Joostvanpinxten4302
5 Jopxton4303
1 Jordanbray4304
1 JordiGH4305
5 Jordsan4306
240 Jorge Stolfi4307
1 Jorgecarleitao4308
4 Jorgelin104309

4285 https://en.wikipedia.org/wiki/User:Jonasbn
4286 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonathan_de_Boyne_Pollard&action=edit&redlink=1
4287 https://en.wikipedia.org/wiki/User:JonathanFreed
4288 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonathanzung&action=edit&redlink=1
4289 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonchen42&action=edit&redlink=1
4290 https://en.wikipedia.org/w/index.php%3ftitle=User:Joncop&action=edit&redlink=1
4291 https://en.wikipedia.org/wiki/User:Jonesey95
4292 https://en.wikipedia.org/w/index.php%3ftitle=User:Jongman.koo&action=edit&redlink=1
4293 https://en.wikipedia.org/wiki/User:Jonhall
4294 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonhkr&action=edit&redlink=1
4295 https://en.wikipedia.org/w/index.php%3ftitle=User:Joniscool98&action=edit&redlink=1
4296 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonmcauliffe&action=edit&redlink=1
4297 https://en.wikipedia.org/wiki/User:Jonnty
4298 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonny_Diamond&action=edit&redlink=1
4299 https://en.wikipedia.org/w/index.php%3ftitle=User:Jonnypurgatory&action=edit&redlink=1
4300 https://en.wikipedia.org/wiki/User:Jonon
4301 https://en.wikipedia.org/wiki/User:Jonsafari
4302 https://en.wikipedia.org/w/index.php%3ftitle=User:Joostvanpinxten&action=edit&redlink=1
4303 https://en.wikipedia.org/w/index.php%3ftitle=User:Jopxton&action=edit&redlink=1
4304 https://en.wikipedia.org/w/index.php%3ftitle=User:Jordanbray&action=edit&redlink=1
4305 https://en.wikipedia.org/wiki/User:JordiGH
4306 https://en.wikipedia.org/w/index.php%3ftitle=User:Jordsan&action=edit&redlink=1
4307 https://en.wikipedia.org/wiki/User:Jorge_Stolfi
4308 https://en.wikipedia.org/wiki/User:Jorgecarleitao
4309 https://en.wikipedia.org/w/index.php%3ftitle=User:Jorgelin10&action=edit&redlink=1


1 Jorgenev4310
2 Jorgenumata4311
5 Joriki4312
2 Jorvis4313
1 Jose.m.c.torres4314
1 Jose.yallouz4315
15 Josecgomez4316
1 Josell24317
1 Joseph A. Spadaro4318
1 JosephGrimaldi4319
1 JosephMDecock4320
1 Josephdanna4321
1 Josephjeevan4322
1 Josephmarty4323
1 Josephshanak4324
1 Josephsieh4325
1 Josephskeller4326
5 Josephvk4327
14 Josh Kehn4328
1 Josh35804329
1 JoshReeves24330
2 Joshgans4331
2 Joshgilkerson4332
2 Joshi19834333
5 Joshk4334

4310 https://en.wikipedia.org/wiki/User:Jorgenev
4311 https://en.wikipedia.org/wiki/User:Jorgenumata
4312 https://en.wikipedia.org/wiki/User:Joriki
4313 https://en.wikipedia.org/wiki/User:Jorvis
4314 https://en.wikipedia.org/w/index.php%3ftitle=User:Jose.m.c.torres&action=edit&redlink=1
4315 https://en.wikipedia.org/w/index.php%3ftitle=User:Jose.yallouz&action=edit&redlink=1
4316 https://en.wikipedia.org/w/index.php%3ftitle=User:Josecgomez&action=edit&redlink=1
4317 https://en.wikipedia.org/w/index.php%3ftitle=User:Josell2&action=edit&redlink=1
4318 https://en.wikipedia.org/wiki/User:Joseph_A._Spadaro
4319 https://en.wikipedia.org/wiki/User:JosephGrimaldi
4320 https://en.wikipedia.org/w/index.php%3ftitle=User:JosephMDecock&action=edit&redlink=1
4321 https://en.wikipedia.org/w/index.php%3ftitle=User:Josephdanna&action=edit&redlink=1
4322 https://en.wikipedia.org/w/index.php%3ftitle=User:Josephjeevan&action=edit&redlink=1
4323 https://en.wikipedia.org/wiki/User:Josephmarty
4324 https://en.wikipedia.org/wiki/User:Josephshanak
4325 https://en.wikipedia.org/w/index.php%3ftitle=User:Josephsieh&action=edit&redlink=1
4326 https://en.wikipedia.org/w/index.php%3ftitle=User:Josephskeller&action=edit&redlink=1
4327 https://en.wikipedia.org/wiki/User:Josephvk
4328 https://en.wikipedia.org/wiki/User:Josh_Kehn
4329 https://en.wikipedia.org/wiki/User:Josh3580
4330 https://en.wikipedia.org/w/index.php%3ftitle=User:JoshReeves2&action=edit&redlink=1
4331 https://en.wikipedia.org/w/index.php%3ftitle=User:Joshgans&action=edit&redlink=1
4332 https://en.wikipedia.org/w/index.php%3ftitle=User:Joshgilkerson&action=edit&redlink=1
4333 https://en.wikipedia.org/wiki/User:Joshi1983
4334 https://en.wikipedia.org/wiki/User:Joshk


1 Joshsny4335
4 Joshua Issac4336
4 JoshuaZ4337
1 Joshuaali4338
2 Joshuagmath4339
9 Joshxyz4340
3 Josijohal4341
2 Jossi4342
2 Josteinaj4343
1 Josuedalboni4344
5 Josve05a4345
1 Jowan20054346
2 Joy4347
1 Joy19634348
2 Jozefgajdos4349
1 Jp.gle4350
3 Jp26jp4351
2 Jpbowen4352
1 Jpiroto4353
1 Jpk4354
1 Jpkotta4355
1 Jplauri4356
2 Jpl~enwiki4357
1 Jpmelos4358
1 Jpmunz4359

4335 https://en.wikipedia.org/w/index.php%3ftitle=User:Joshsny&action=edit&redlink=1
4336 https://en.wikipedia.org/wiki/User:Joshua_Issac
4337 https://en.wikipedia.org/wiki/User:JoshuaZ
4338 https://en.wikipedia.org/w/index.php%3ftitle=User:Joshuaali&action=edit&redlink=1
4339 https://en.wikipedia.org/w/index.php%3ftitle=User:Joshuagmath&action=edit&redlink=1
4340 https://en.wikipedia.org/w/index.php%3ftitle=User:Joshxyz&action=edit&redlink=1
4341 https://en.wikipedia.org/w/index.php%3ftitle=User:Josijohal&action=edit&redlink=1
4342 https://en.wikipedia.org/wiki/User:Jossi
4343 https://en.wikipedia.org/w/index.php%3ftitle=User:Josteinaj&action=edit&redlink=1
4344 https://en.wikipedia.org/w/index.php%3ftitle=User:Josuedalboni&action=edit&redlink=1
4345 https://en.wikipedia.org/wiki/User:Josve05a
4346 https://en.wikipedia.org/wiki/User:Jowan2005
4347 https://en.wikipedia.org/wiki/User:Joy
4348 https://en.wikipedia.org/wiki/User:Joy1963
4349 https://en.wikipedia.org/w/index.php%3ftitle=User:Jozefgajdos&action=edit&redlink=1
4350 https://en.wikipedia.org/w/index.php%3ftitle=User:Jp.gle&action=edit&redlink=1
4351 https://en.wikipedia.org/wiki/User:Jp26jp
4352 https://en.wikipedia.org/wiki/User:Jpbowen
4353 https://en.wikipedia.org/w/index.php%3ftitle=User:Jpiroto&action=edit&redlink=1
4354 https://en.wikipedia.org/wiki/User:Jpk
4355 https://en.wikipedia.org/wiki/User:Jpkotta
4356 https://en.wikipedia.org/w/index.php%3ftitle=User:Jplauri&action=edit&redlink=1
4357 https://en.wikipedia.org/w/index.php%3ftitle=User:Jpl~enwiki&action=edit&redlink=1
4358 https://en.wikipedia.org/w/index.php%3ftitle=User:Jpmelos&action=edit&redlink=1
4359 https://en.wikipedia.org/w/index.php%3ftitle=User:Jpmunz&action=edit&redlink=1


1 Jprg19664360
1 Jrachiele4361
1 Jrgauthier4362
10 Jrheller14363
2 Jrmcdaniel4364
1 Jrouquie4365
1 Jrwhitehill4366
1 Jsamarziya4367
1 Jschievink4368
1 Jschnur4369
1 Jschwa14370
1 JsePrometheus4371
1 Jsejcksn4372
1 Jsimone4373
2 Jsmuller14374
1 Jtcho4375
4 Jthemphill4376
1 Jthillik4377
1 Jtir4378
1 Jtle5154379
1 Jtwdog4380
2 Ju123584381
1 Juancarlosgolos4382
1 Juanmamb4383
1 Juanpabloaj4384

4360 https://en.wikipedia.org/wiki/User:Jprg1966
4361 https://en.wikipedia.org/w/index.php%3ftitle=User:Jrachiele&action=edit&redlink=1
4362 https://en.wikipedia.org/wiki/User:Jrgauthier
4363 https://en.wikipedia.org/wiki/User:Jrheller1
4364 https://en.wikipedia.org/w/index.php%3ftitle=User:Jrmcdaniel&action=edit&redlink=1
4365 https://en.wikipedia.org/wiki/User:Jrouquie
4366 https://en.wikipedia.org/w/index.php%3ftitle=User:Jrwhitehill&action=edit&redlink=1
4367 https://en.wikipedia.org/wiki/User:Jsamarziya
4368 https://en.wikipedia.org/w/index.php%3ftitle=User:Jschievink&action=edit&redlink=1
4369 https://en.wikipedia.org/wiki/User:Jschnur
4370 https://en.wikipedia.org/wiki/User:Jschwa1
4371 https://en.wikipedia.org/w/index.php%3ftitle=User:JsePrometheus&action=edit&redlink=1
4372 https://en.wikipedia.org/wiki/User:Jsejcksn
4373 https://en.wikipedia.org/wiki/User:Jsimone
4374 https://en.wikipedia.org/w/index.php%3ftitle=User:Jsmuller1&action=edit&redlink=1
4375 https://en.wikipedia.org/w/index.php%3ftitle=User:Jtcho&action=edit&redlink=1
4376 https://en.wikipedia.org/wiki/User:Jthemphill
4377 https://en.wikipedia.org/w/index.php%3ftitle=User:Jthillik&action=edit&redlink=1
4378 https://en.wikipedia.org/wiki/User:Jtir
4379 https://en.wikipedia.org/w/index.php%3ftitle=User:Jtle515&action=edit&redlink=1
4380 https://en.wikipedia.org/wiki/User:Jtwdog
4381 https://en.wikipedia.org/w/index.php%3ftitle=User:Ju12358&action=edit&redlink=1
4382 https://en.wikipedia.org/w/index.php%3ftitle=User:Juancarlosgolos&action=edit&redlink=1
4383 https://en.wikipedia.org/w/index.php%3ftitle=User:Juanmamb&action=edit&redlink=1
4384 https://en.wikipedia.org/w/index.php%3ftitle=User:Juanpabloaj&action=edit&redlink=1


1 JubalFortran4385
2 Judsonotis4386
1 Juegdtj6384387
1 Jugander4388
1 Juhachi4389
1 Jujutacular4390
12 Julesd4391
1 JuliaMelody4392
1 Julian Mendez4393
3 Juliancolton4394
1 Juliand4395
1 Julianiacoponi4396
1 Juliano4397
1 Julietdeltalima4398
1 JuliusClimacus4399
2 Juliusz Gonera4400
1 Julyo4401
11 Jumbuck4402
5 JumpDiscont4403
1 Junkyardsparkle4404
3 Jupoulton4405
1 Jusjih4406
3 JustAHappyCamper4407
1 JustAMuggle4408

4385 https://en.wikipedia.org/w/index.php%3ftitle=User:JubalFortran&action=edit&redlink=1
4386 https://en.wikipedia.org/w/index.php%3ftitle=User:Judsonotis&action=edit&redlink=1
4387 https://en.wikipedia.org/wiki/User:Juegdtj638
4388 https://en.wikipedia.org/wiki/User:Jugander
4389 https://en.wikipedia.org/wiki/User:Juhachi
4390 https://en.wikipedia.org/wiki/User:Jujutacular
4391 https://en.wikipedia.org/wiki/User:Julesd
4392 https://en.wikipedia.org/wiki/User:JuliaMelody
4393 https://en.wikipedia.org/wiki/User:Julian_Mendez
4394 https://en.wikipedia.org/wiki/User:Juliancolton
4395 https://en.wikipedia.org/wiki/User:Juliand
4396 https://en.wikipedia.org/w/index.php%3ftitle=User:Julianiacoponi&action=edit&redlink=1
4397 https://en.wikipedia.org/wiki/User:Juliano
4398 https://en.wikipedia.org/wiki/User:Julietdeltalima
4399 https://en.wikipedia.org/w/index.php%3ftitle=User:JuliusClimacus&action=edit&redlink=1
4400 https://en.wikipedia.org/w/index.php%3ftitle=User:Juliusz_Gonera&action=edit&redlink=1
4401 https://en.wikipedia.org/wiki/User:Julyo
4402 https://en.wikipedia.org/wiki/User:Jumbuck
4403 https://en.wikipedia.org/w/index.php%3ftitle=User:JumpDiscont&action=edit&redlink=1
4404 https://en.wikipedia.org/wiki/User:Junkyardsparkle
4405 https://en.wikipedia.org/w/index.php%3ftitle=User:Jupoulton&action=edit&redlink=1
4406 https://en.wikipedia.org/wiki/User:Jusjih
4407 https://en.wikipedia.org/w/index.php%3ftitle=User:JustAHappyCamper&action=edit&redlink=1
4408 https://en.wikipedia.org/wiki/User:JustAMuggle


1 JustBerry4409
1 Justeditingtoday4410
2 Justin Mauger4411
1 Justin Stafford4412
124 Justin W Smith4413
1 Justin15w4414
3 JustinWick4415
1 Justincheng12345-bot4416
2 Justinhj4417
4 Justinlebar4418
2 JustmeChrisO4419
3 Justonce124420
1 Jvilledieu4421
1 Jvs4422
1 Jwgoh4423
1 Jwh3354424
10 Jwikip4425
1 Jwissick4426
1 Jwlee4427
1 Jwuthe24428
2 Jyoteshrc4429
1 Jytug4430
7 JzG4431
2 Jérôme4432
1 Jéské Couriano4433

4409 https://en.wikipedia.org/wiki/User:JustBerry
4410 https://en.wikipedia.org/wiki/User:Justeditingtoday
4411 https://en.wikipedia.org/wiki/User:Justin_Mauger
4412 https://en.wikipedia.org/wiki/User:Justin_Stafford
4413 https://en.wikipedia.org/wiki/User:Justin_W_Smith
4414 https://en.wikipedia.org/wiki/User:Justin15w
4415 https://en.wikipedia.org/wiki/User:JustinWick
4416 https://en.wikipedia.org/wiki/User:Justincheng12345-bot
4417 https://en.wikipedia.org/w/index.php%3ftitle=User:Justinhj&action=edit&redlink=1
4418 https://en.wikipedia.org/wiki/User:Justinlebar
4419 https://en.wikipedia.org/w/index.php%3ftitle=User:JustmeChrisO&action=edit&redlink=1
4420 https://en.wikipedia.org/w/index.php%3ftitle=User:Justonce12&action=edit&redlink=1
4421 https://en.wikipedia.org/wiki/User:Jvilledieu
4422 https://en.wikipedia.org/wiki/User:Jvs
4423 https://en.wikipedia.org/w/index.php%3ftitle=User:Jwgoh&action=edit&redlink=1
4424 https://en.wikipedia.org/wiki/User:Jwh335
4425 https://en.wikipedia.org/wiki/User:Jwikip
4426 https://en.wikipedia.org/w/index.php%3ftitle=User:Jwissick&action=edit&redlink=1
4427 https://en.wikipedia.org/w/index.php%3ftitle=User:Jwlee&action=edit&redlink=1
4428 https://en.wikipedia.org/wiki/User:Jwuthe2
4429 https://en.wikipedia.org/w/index.php%3ftitle=User:Jyoteshrc&action=edit&redlink=1
4430 https://en.wikipedia.org/w/index.php%3ftitle=User:Jytug&action=edit&redlink=1
4431 https://en.wikipedia.org/wiki/User:JzG
4432 https://en.wikipedia.org/wiki/User:J%25C3%25A9r%25C3%25B4me
4433 https://en.wikipedia.org/wiki/User:J%25C3%25A9sk%25C3%25A9_Couriano


2 Jóna Þórunn4434
1 Jørdan4435
1 K-evariste4436
2 K.A.Sayed4437
2 K22344438
7 K3rb4439
4 K6ka4440
3 KAP034441
1 KBKarma4442
1 KConWiki4443
1 KGV4444
5 KGirlTrucker814445
1 KHAAAAAAAAAAN4446
1 KI4447
2 KJK::Hyperion4448
1 KJS774449
3 KLBot24450
4 KMeyer4451
1 KRPent4452
1 KSFT4453
2 KSmrq4454
1 KYN4455
1 KYve488ETIzrtiYS4456
1 Kaare4457
2 Kache44458

4434 https://en.wikipedia.org/wiki/User:J%25C3%25B3na_%25C3%259E%25C3%25B3runn
4435 https://en.wikipedia.org/wiki/User:J%25C3%25B8rdan
4436 https://en.wikipedia.org/w/index.php%3ftitle=User:K-evariste&action=edit&redlink=1
4437 https://en.wikipedia.org/w/index.php%3ftitle=User:K.A.Sayed&action=edit&redlink=1
4438 https://en.wikipedia.org/w/index.php%3ftitle=User:K2234&action=edit&redlink=1
4439 https://en.wikipedia.org/wiki/User:K3rb
4440 https://en.wikipedia.org/wiki/User:K6ka
4441 https://en.wikipedia.org/wiki/User:KAP03
4442 https://en.wikipedia.org/wiki/User:KBKarma
4443 https://en.wikipedia.org/wiki/User:KConWiki
4444 https://en.wikipedia.org/wiki/User:KGV
4445 https://en.wikipedia.org/wiki/User:KGirlTrucker81
4446 https://en.wikipedia.org/wiki/User:KHAAAAAAAAAAN
4447 https://en.wikipedia.org/wiki/User:KI
4448 https://en.wikipedia.org/w/index.php%3ftitle=User:KJK::Hyperion&action=edit&redlink=1
4449 https://en.wikipedia.org/wiki/User:KJS77
4450 https://en.wikipedia.org/wiki/User:KLBot2
4451 https://en.wikipedia.org/wiki/User:KMeyer
4452 https://en.wikipedia.org/w/index.php%3ftitle=User:KRPent&action=edit&redlink=1
4453 https://en.wikipedia.org/wiki/User:KSFT
4454 https://en.wikipedia.org/wiki/User:KSmrq
4455 https://en.wikipedia.org/wiki/User:KYN
4456 https://en.wikipedia.org/w/index.php%3ftitle=User:KYve488ETIzrtiYS&action=edit&redlink=1
4457 https://en.wikipedia.org/wiki/User:Kaare
4458 https://en.wikipedia.org/wiki/User:Kache4


2 Kaeso4459
1 Kagundu4460
1 Kaicarver4461
5 Kaidul4462
1 Kaihsu4463
1 Kaimiddleton4464
1 Kainino4465
1 KaisaL4466
1 Kaiserb4467
4 Kaladis4468
1 Kalathalan4469
2 Kalebdf4470
2 Kallerdis4471
1 Kalogeropoulos4472
10 Kalraritz4473
1 Kaltenmeyer4474
1 Kaly J.4475
1 Kalyani B attluri4476
6 Kamac1244477
6 KamikazeBot4478
1 Kamron.Batman4479
1 KamuiShirou4480
3 Kanargias4481
1 Kane51874482
3 Kangaroosrule4483

4459 https://en.wikipedia.org/w/index.php%3ftitle=User:Kaeso&action=edit&redlink=1
4460 https://en.wikipedia.org/wiki/User:Kagundu
4461 https://en.wikipedia.org/wiki/User:Kaicarver
4462 https://en.wikipedia.org/wiki/User:Kaidul
4463 https://en.wikipedia.org/wiki/User:Kaihsu
4464 https://en.wikipedia.org/wiki/User:Kaimiddleton
4465 https://en.wikipedia.org/wiki/User:Kainino
4466 https://en.wikipedia.org/wiki/User:KaisaL
4467 https://en.wikipedia.org/wiki/User:Kaiserb
4468 https://en.wikipedia.org/w/index.php%3ftitle=User:Kaladis&action=edit&redlink=1
4469 https://en.wikipedia.org/wiki/User:Kalathalan
4470 https://en.wikipedia.org/w/index.php%3ftitle=User:Kalebdf&action=edit&redlink=1
4471 https://en.wikipedia.org/wiki/User:Kallerdis
4472 https://en.wikipedia.org/wiki/User:Kalogeropoulos
4473 https://en.wikipedia.org/w/index.php%3ftitle=User:Kalraritz&action=edit&redlink=1
4474 https://en.wikipedia.org/wiki/User:Kaltenmeyer
4475 https://en.wikipedia.org/w/index.php%3ftitle=User:Kaly_J.&action=edit&redlink=1
4476 https://en.wikipedia.org/w/index.php%3ftitle=User:Kalyani_B_attluri&action=edit&redlink=1
4477 https://en.wikipedia.org/w/index.php%3ftitle=User:Kamac124&action=edit&redlink=1
4478 https://en.wikipedia.org/wiki/User:KamikazeBot
4479 https://en.wikipedia.org/wiki/User:Kamron.Batman
4480 https://en.wikipedia.org/wiki/User:KamuiShirou
4481 https://en.wikipedia.org/w/index.php%3ftitle=User:Kanargias&action=edit&redlink=1
4482 https://en.wikipedia.org/wiki/User:Kane5187
4483 https://en.wikipedia.org/wiki/User:Kangaroosrule


1 Kanguole4484
1 Kanie4485
1 Kanitani4486
1 Kanterme4487
1 Kaobear4488
2 Kapgains4489
4 Kapil.xerox4490
2 Kapildalwani4491
8 Karada4492
1 Karam.Anthony.K4493
1 Karaminchan4494
1 Karanam raghuvardhan4495
1 Kareekacha4496
1 Karenjc4497
3 Karl Dickman4498
1 Karl Stroetmann4499
6 Karl-Henner4500
1 KarlFrei4501
1 Karlhendrikse4502
8 Karnan4503
2 Karspider4504
5 Karuthedam4505
3 Kasirbot4506
4 KasparBot4507
2 Kasper.laursen4508

4484 https://en.wikipedia.org/wiki/User:Kanguole
4485 https://en.wikipedia.org/wiki/User:Kanie
4486 https://en.wikipedia.org/w/index.php%3ftitle=User:Kanitani&action=edit&redlink=1
4487 https://en.wikipedia.org/w/index.php%3ftitle=User:Kanterme&action=edit&redlink=1
4488 https://en.wikipedia.org/wiki/User:Kaobear
4489 https://en.wikipedia.org/wiki/User:Kapgains
4490 https://en.wikipedia.org/wiki/User:Kapil.xerox
4491 https://en.wikipedia.org/w/index.php%3ftitle=User:Kapildalwani&action=edit&redlink=1
4492 https://en.wikipedia.org/wiki/User:Karada
4493 https://en.wikipedia.org/wiki/User:Karam.Anthony.K
4494 https://en.wikipedia.org/w/index.php%3ftitle=User:Karaminchan&action=edit&redlink=1
4495 https://en.wikipedia.org/w/index.php%3ftitle=User:Karanam_raghuvardhan&action=edit&redlink=1
4496 https://en.wikipedia.org/w/index.php%3ftitle=User:Kareekacha&action=edit&redlink=1
4497 https://en.wikipedia.org/wiki/User:Karenjc
4498 https://en.wikipedia.org/wiki/User:Karl_Dickman
4499 https://en.wikipedia.org/wiki/User:Karl_Stroetmann
4500 https://en.wikipedia.org/wiki/User:Karl-Henner
4501 https://en.wikipedia.org/wiki/User:KarlFrei
4502 https://en.wikipedia.org/w/index.php%3ftitle=User:Karlhendrikse&action=edit&redlink=1
4503 https://en.wikipedia.org/wiki/User:Karnan
4504 https://en.wikipedia.org/w/index.php%3ftitle=User:Karspider&action=edit&redlink=1
4505 https://en.wikipedia.org/w/index.php%3ftitle=User:Karuthedam&action=edit&redlink=1
4506 https://en.wikipedia.org/wiki/User:Kasirbot
4507 https://en.wikipedia.org/wiki/User:KasparBot
4508 https://en.wikipedia.org/w/index.php%3ftitle=User:Kasper.laursen&action=edit&redlink=1


1 Kasslert4509
3 Kastchei4510
1 Katakana2504511
3 Kate4512
1 Kathmandu guy4513
8 Katieh55844514
2 Katsushi4515
4 Kaushik04024516
1 Kaustuv4517
1 KavehYousefi20194518
1 Kawautar4519
1 Kayau4520
1 KaylenTPillay4521
2 Kazastankas4522
5 Kazvorpal4523
2 Kb034524
2 Kbdank714525
2 Kbdankbot4526
4 Kbk4527
2 Kbkorb4528
1 Kbolino4529
8 Kbrose4530
2 Kc034531
1 Kcm17004532
1 Kcrca4533

4509 https://en.wikipedia.org/wiki/User:Kasslert
4510 https://en.wikipedia.org/w/index.php%3ftitle=User:Kastchei&action=edit&redlink=1
4511 https://en.wikipedia.org/w/index.php%3ftitle=User:Katakana250&action=edit&redlink=1
4512 https://en.wikipedia.org/wiki/User:Kate
4513 https://en.wikipedia.org/w/index.php%3ftitle=User:Kathmandu_guy&action=edit&redlink=1
4514 https://en.wikipedia.org/wiki/User:Katieh5584
4515 https://en.wikipedia.org/w/index.php%3ftitle=User:Katsushi&action=edit&redlink=1
4516 https://en.wikipedia.org/w/index.php%3ftitle=User:Kaushik0402&action=edit&redlink=1
4517 https://en.wikipedia.org/wiki/User:Kaustuv
4518 https://en.wikipedia.org/w/index.php%3ftitle=User:KavehYousefi2019&action=edit&redlink=1
4519 https://en.wikipedia.org/w/index.php%3ftitle=User:Kawautar&action=edit&redlink=1
4520 https://en.wikipedia.org/wiki/User:Kayau
4521 https://en.wikipedia.org/w/index.php%3ftitle=User:KaylenTPillay&action=edit&redlink=1
4522 https://en.wikipedia.org/w/index.php%3ftitle=User:Kazastankas&action=edit&redlink=1
4523 https://en.wikipedia.org/wiki/User:Kazvorpal
4524 https://en.wikipedia.org/wiki/User:Kb03
4525 https://en.wikipedia.org/wiki/User:Kbdank71
4526 https://en.wikipedia.org/wiki/User:Kbdankbot
4527 https://en.wikipedia.org/wiki/User:Kbk
4528 https://en.wikipedia.org/w/index.php%3ftitle=User:Kbkorb&action=edit&redlink=1
4529 https://en.wikipedia.org/wiki/User:Kbolino
4530 https://en.wikipedia.org/wiki/User:Kbrose
4531 https://en.wikipedia.org/w/index.php%3ftitle=User:Kc03&action=edit&redlink=1
4532 https://en.wikipedia.org/w/index.php%3ftitle=User:Kcm1700&action=edit&redlink=1
4533 https://en.wikipedia.org/w/index.php%3ftitle=User:Kcrca&action=edit&redlink=1


3 Kdakin4534
3 Kdau4535
3 Kdnewton4536
1 Ke4roh4537
1 Kebede4474538
6 Kedar.mhaswade4539
2 Keenan Pepper4540
1 Keepssouth4541
1 Keesal4542
1 Keg274543
1 Kehrbykid4544
1 KeijiBranshi4545
5 Keilana4546
1 Keith Cascio4547
4 Keith D4548
2 Keith4random4549
1 Keithhalonen4550
2 Keithphw4551
1 Keithttaylor4552
1 Kejv24553
1 Kelapstick4554
1 Kelly2224555
2 KellyCoinGuy4556
1 KelvSYC4557
1 Ken Appleby4558

4534 https://en.wikipedia.org/wiki/User:Kdakin
4535 https://en.wikipedia.org/wiki/User:Kdau
4536 https://en.wikipedia.org/wiki/User:Kdnewton
4537 https://en.wikipedia.org/wiki/User:Ke4roh
4538 https://en.wikipedia.org/w/index.php%3ftitle=User:Kebede447&action=edit&redlink=1
4539 https://en.wikipedia.org/wiki/User:Kedar.mhaswade
4540 https://en.wikipedia.org/wiki/User:Keenan_Pepper
4541 https://en.wikipedia.org/wiki/User:Keepssouth
4542 https://en.wikipedia.org/w/index.php%3ftitle=User:Keesal&action=edit&redlink=1
4543 https://en.wikipedia.org/w/index.php%3ftitle=User:Keg27&action=edit&redlink=1
4544 https://en.wikipedia.org/wiki/User:Kehrbykid
4545 https://en.wikipedia.org/w/index.php%3ftitle=User:KeijiBranshi&action=edit&redlink=1
4546 https://en.wikipedia.org/wiki/User:Keilana
4547 https://en.wikipedia.org/wiki/User:Keith_Cascio
4548 https://en.wikipedia.org/wiki/User:Keith_D
4549 https://en.wikipedia.org/w/index.php%3ftitle=User:Keith4random&action=edit&redlink=1
4550 https://en.wikipedia.org/w/index.php%3ftitle=User:Keithhalonen&action=edit&redlink=1
4551 https://en.wikipedia.org/w/index.php%3ftitle=User:Keithphw&action=edit&redlink=1
4552 https://en.wikipedia.org/wiki/User:Keithttaylor
4553 https://en.wikipedia.org/w/index.php%3ftitle=User:Kejv2&action=edit&redlink=1
4554 https://en.wikipedia.org/wiki/User:Kelapstick
4555 https://en.wikipedia.org/w/index.php%3ftitle=User:Kelly222&action=edit&redlink=1
4556 https://en.wikipedia.org/wiki/User:KellyCoinGuy
4557 https://en.wikipedia.org/wiki/User:KelvSYC
4558 https://en.wikipedia.org/w/index.php%3ftitle=User:Ken_Appleby&action=edit&redlink=1


1 KenPR4559
2 Kenb2154560
1 Kencf06184561
4 Kendrick74562
1 Kenelm Erfith4563
2 Kenfyre4564
3 Kenneth M Burke4565
1 Kenny2wiki4566
1 Kenrgoss4567
1 Kensor4568
16 Kenyon4569
2 Keo-san4570
1 Kercker4571
16 Kesla4572
1 Kessa 7134573
1 Kesyka4574
2 Ketiltrout4575
5 Kevin4576
2 Kevin Albert4577
1 Kevin12xd4578
1 Kevin1kevin1k4579
1 KevinNLeeds4580
1 Kevinsystrom4581
1 Kewlito4582
1 Keynell4583

4559 https://en.wikipedia.org/wiki/User:KenPR
4560 https://en.wikipedia.org/wiki/User:Kenb215
4561 https://en.wikipedia.org/wiki/User:Kencf0618
4562 https://en.wikipedia.org/wiki/User:Kendrick7
4563 https://en.wikipedia.org/wiki/User:Kenelm_Erfith
4564 https://en.wikipedia.org/wiki/User:Kenfyre
4565 https://en.wikipedia.org/wiki/User:Kenneth_M_Burke
4566 https://en.wikipedia.org/wiki/User:Kenny2wiki
4567 https://en.wikipedia.org/w/index.php%3ftitle=User:Kenrgoss&action=edit&redlink=1
4568 https://en.wikipedia.org/wiki/User:Kensor
4569 https://en.wikipedia.org/wiki/User:Kenyon
4570 https://en.wikipedia.org/w/index.php%3ftitle=User:Keo-san&action=edit&redlink=1
4571 https://en.wikipedia.org/wiki/User:Kercker
4572 https://en.wikipedia.org/wiki/User:Kesla
4573 https://en.wikipedia.org/w/index.php%3ftitle=User:Kessa_713&action=edit&redlink=1
4574 https://en.wikipedia.org/w/index.php%3ftitle=User:Kesyka&action=edit&redlink=1
4575 https://en.wikipedia.org/wiki/User:Ketiltrout
4576 https://en.wikipedia.org/wiki/User:Kevin
4577 https://en.wikipedia.org/wiki/User:Kevin_Albert
4578 https://en.wikipedia.org/wiki/User:Kevin12xd
4579 https://en.wikipedia.org/w/index.php%3ftitle=User:Kevin1kevin1k&action=edit&redlink=1
4580 https://en.wikipedia.org/w/index.php%3ftitle=User:KevinNLeeds&action=edit&redlink=1
4581 https://en.wikipedia.org/w/index.php%3ftitle=User:Kevinsystrom&action=edit&redlink=1
4582 https://en.wikipedia.org/wiki/User:Kewlito
4583 https://en.wikipedia.org/wiki/User:Keynell


1 Kf4bdy4584
2 Kgautam284585
1 Kgeza74586
1 Kgfleischmann4587
8 Kh naba4588
1 Kh313114589
1 Khaister4590
1 Khalid4591
1 Khanser4592
14 Khassan du4593
4 Khazar24594
1 Kheto4595
1 Khondrion4596
1 Khromegnome4597
1 Khukri4598
1 Kiamlaluno4599
1 Kiand4600
1 Kickboy4601
1 Kiefer4602
11 Kiefer.Wolfowitz4603
1 Kieronoldham4604
1 Kievite4605
1 Kigelim4606
2 Killarnee4607
1 KillerGardevoir4608

4584 https://en.wikipedia.org/wiki/User:Kf4bdy
4585 https://en.wikipedia.org/w/index.php%3ftitle=User:Kgautam28&action=edit&redlink=1
4586 https://en.wikipedia.org/w/index.php%3ftitle=User:Kgeza7&action=edit&redlink=1
4587 https://en.wikipedia.org/wiki/User:Kgfleischmann
4588 https://en.wikipedia.org/w/index.php%3ftitle=User:Kh_naba&action=edit&redlink=1
4589 https://en.wikipedia.org/w/index.php%3ftitle=User:Kh31311&action=edit&redlink=1
4590 https://en.wikipedia.org/wiki/User:Khaister
4591 https://en.wikipedia.org/wiki/User:Khalid
4592 https://en.wikipedia.org/w/index.php%3ftitle=User:Khanser&action=edit&redlink=1
4593 https://en.wikipedia.org/w/index.php%3ftitle=User:Khassan_du&action=edit&redlink=1
4594 https://en.wikipedia.org/wiki/User:Khazar2
4595 https://en.wikipedia.org/wiki/User:Kheto
4596 https://en.wikipedia.org/wiki/User:Khondrion
4597 https://en.wikipedia.org/wiki/User:Khromegnome
4598 https://en.wikipedia.org/wiki/User:Khukri
4599 https://en.wikipedia.org/wiki/User:Kiamlaluno
4600 https://en.wikipedia.org/wiki/User:Kiand
4601 https://en.wikipedia.org/wiki/User:Kickboy
4602 https://en.wikipedia.org/w/index.php%3ftitle=User:Kiefer&action=edit&redlink=1
4603 https://en.wikipedia.org/wiki/User:Kiefer.Wolfowitz
4604 https://en.wikipedia.org/wiki/User:Kieronoldham
4605 https://en.wikipedia.org/w/index.php%3ftitle=User:Kievite&action=edit&redlink=1
4606 https://en.wikipedia.org/w/index.php%3ftitle=User:Kigelim&action=edit&redlink=1
4607 https://en.wikipedia.org/wiki/User:Killarnee
4608 https://en.wikipedia.org/wiki/User:KillerGardevoir


2 KillerWave4609
1 Killiondude4610
1 KiloByte4611
4 Kilom6914612
1 Kinetic374613
3 King Bee4614
2 King awe17074615
2 King of Hearts4616
1 King of Scorpions4617
1 KingPuppy4618
3 Kingfishr4619
1 Kingjames iv4620
1 Kinglag4621
1 Kingofqcumber4622
7 Kingpin134623
1 Kingpin9x4624
4 Kingsindian4625
2 Kingturtle4626
3 Kinkydarkbird4627
1 Kinu4628
5 Kipb94629
1 Kiril Simeonovski4630
1 Kirill Borisenko4631
1 Kirpo4632
1 Kirrages4633

4609 https://en.wikipedia.org/w/index.php%3ftitle=User:KillerWave&action=edit&redlink=1
4610 https://en.wikipedia.org/wiki/User:Killiondude
4611 https://en.wikipedia.org/wiki/User:KiloByte
4612 https://en.wikipedia.org/w/index.php%3ftitle=User:Kilom691&action=edit&redlink=1
4613 https://en.wikipedia.org/wiki/User:Kinetic37
4614 https://en.wikipedia.org/wiki/User:King_Bee
4615 https://en.wikipedia.org/w/index.php%3ftitle=User:King_awe1707&action=edit&redlink=1
4616 https://en.wikipedia.org/wiki/User:King_of_Hearts
4617 https://en.wikipedia.org/wiki/User:King_of_Scorpions
4618 https://en.wikipedia.org/wiki/User:KingPuppy
4619 https://en.wikipedia.org/w/index.php%3ftitle=User:Kingfishr&action=edit&redlink=1
4620 https://en.wikipedia.org/w/index.php%3ftitle=User:Kingjames_iv&action=edit&redlink=1
4621 https://en.wikipedia.org/wiki/User:Kinglag
4622 https://en.wikipedia.org/w/index.php%3ftitle=User:Kingofqcumber&action=edit&redlink=1
4623 https://en.wikipedia.org/wiki/User:Kingpin13
4624 https://en.wikipedia.org/w/index.php%3ftitle=User:Kingpin9x&action=edit&redlink=1
4625 https://en.wikipedia.org/wiki/User:Kingsindian
4626 https://en.wikipedia.org/wiki/User:Kingturtle
4627 https://en.wikipedia.org/wiki/User:Kinkydarkbird
4628 https://en.wikipedia.org/wiki/User:Kinu
4629 https://en.wikipedia.org/w/index.php%3ftitle=User:Kipb9&action=edit&redlink=1
4630 https://en.wikipedia.org/wiki/User:Kiril_Simeonovski
4631 https://en.wikipedia.org/wiki/User:Kirill_Borisenko
4632 https://en.wikipedia.org/w/index.php%3ftitle=User:Kirpo&action=edit&redlink=1
4633 https://en.wikipedia.org/wiki/User:Kirrages

1 Kirtag Hratiba4634
1 Kissedsmiley4635
1 KitMarlow4636
1 Kitarak4637
1 KittyKAY44638
1 Kiudee4639
3 Kiwi1284640
1 Kiwi1374641
1 Kiwipidae4642
3 Kiyarashfarivar4643
1 Kizor4644
4 Kjells4645
2 Kjerish4646
1 Kjetil r4647
2 Kjmitch4648
3 Kkanwariya4649
2 Kkmurray4650
19 Kku4651
1 Kl4m4652
3 Klahnako4653
1 Klapi4654
1 KlappCK4655
1 Klapper524656
1 Klausikm~enwiki4657
2 Klausness4658

4634 https://en.wikipedia.org/w/index.php%3ftitle=User:Kirtag_Hratiba&action=edit&redlink=1
4635 https://en.wikipedia.org/wiki/User:Kissedsmiley
4636 https://en.wikipedia.org/w/index.php%3ftitle=User:KitMarlow&action=edit&redlink=1
4637 https://en.wikipedia.org/w/index.php%3ftitle=User:Kitarak&action=edit&redlink=1
4638 https://en.wikipedia.org/wiki/User:KittyKAY4
4639 https://en.wikipedia.org/w/index.php%3ftitle=User:Kiudee&action=edit&redlink=1
4640 https://en.wikipedia.org/wiki/User:Kiwi128
4641 https://en.wikipedia.org/wiki/User:Kiwi137
4642 https://en.wikipedia.org/wiki/User:Kiwipidae
4643 https://en.wikipedia.org/w/index.php%3ftitle=User:Kiyarashfarivar&action=edit&redlink=1
4644 https://en.wikipedia.org/wiki/User:Kizor
4645 https://en.wikipedia.org/wiki/User:Kjells
4646 https://en.wikipedia.org/wiki/User:Kjerish
4647 https://en.wikipedia.org/wiki/User:Kjetil_r
4648 https://en.wikipedia.org/wiki/User:Kjmitch
4649 https://en.wikipedia.org/w/index.php%3ftitle=User:Kkanwariya&action=edit&redlink=1
4650 https://en.wikipedia.org/wiki/User:Kkmurray
4651 https://en.wikipedia.org/wiki/User:Kku
4652 https://en.wikipedia.org/wiki/User:Kl4m
4653 https://en.wikipedia.org/w/index.php%3ftitle=User:Klahnako&action=edit&redlink=1
4654 https://en.wikipedia.org/wiki/User:Klapi
4655 https://en.wikipedia.org/wiki/User:KlappCK
4656 https://en.wikipedia.org/w/index.php%3ftitle=User:Klapper52&action=edit&redlink=1
4657 https://en.wikipedia.org/w/index.php%3ftitle=User:Klausikm~enwiki&action=edit&redlink=1
4658 https://en.wikipedia.org/wiki/User:Klausness

6 Klbrain4659
1 Kleinschlemm4660
2 Klemen Kocjancic4661
1 Kletos4662
1 Kleuske4663
3 Klodr4664
1 Klossner4665
10 Klrste4666
4 Klutzy4667
1 Kmag~enwiki4668
1 Kmgpratyush4669
1 Kmhapa4670
1 Kmhkmh4671
2 Kmmsfg4672
1 Kmote4673
4 Kndiaye4674
1 Kndimov4675
6 Knecht034676
1 KneeLess4677
2 KnightMove4678
2 KnightRider~enwiki4679
3 Knivd4680
2 Knowledge-is-power4681
1 KnowledgeOfSelf4682
2 Knuckles4683

4659 https://en.wikipedia.org/wiki/User:Klbrain
4660 https://en.wikipedia.org/w/index.php%3ftitle=User:Kleinschlemm&action=edit&redlink=1
4661 https://en.wikipedia.org/wiki/User:Klemen_Kocjancic
4662 https://en.wikipedia.org/wiki/User:Kletos
4663 https://en.wikipedia.org/wiki/User:Kleuske
4664 https://en.wikipedia.org/w/index.php%3ftitle=User:Klodr&action=edit&redlink=1
4665 https://en.wikipedia.org/w/index.php%3ftitle=User:Klossner&action=edit&redlink=1
4666 https://en.wikipedia.org/w/index.php%3ftitle=User:Klrste&action=edit&redlink=1
4667 https://en.wikipedia.org/wiki/User:Klutzy
4668 https://en.wikipedia.org/w/index.php%3ftitle=User:Kmag~enwiki&action=edit&redlink=1
4669 https://en.wikipedia.org/w/index.php%3ftitle=User:Kmgpratyush&action=edit&redlink=1
4670 https://en.wikipedia.org/w/index.php%3ftitle=User:Kmhapa&action=edit&redlink=1
4671 https://en.wikipedia.org/wiki/User:Kmhkmh
4672 https://en.wikipedia.org/w/index.php%3ftitle=User:Kmmsfg&action=edit&redlink=1
4673 https://en.wikipedia.org/wiki/User:Kmote
4674 https://en.wikipedia.org/wiki/User:Kndiaye
4675 https://en.wikipedia.org/wiki/User:Kndimov
4676 https://en.wikipedia.org/w/index.php%3ftitle=User:Knecht03&action=edit&redlink=1
4677 https://en.wikipedia.org/wiki/User:KneeLess
4678 https://en.wikipedia.org/wiki/User:KnightMove
4679 https://en.wikipedia.org/wiki/User:KnightRider~enwiki
4680 https://en.wikipedia.org/w/index.php%3ftitle=User:Knivd&action=edit&redlink=1
4681 https://en.wikipedia.org/wiki/User:Knowledge-is-power
4682 https://en.wikipedia.org/wiki/User:KnowledgeOfSelf
4683 https://en.wikipedia.org/wiki/User:Knuckles

15 Knutux4684
4 Koavf4685
1 Kobrabones4686
6 KocjoBot~enwiki4687
2 Koczy4688
1 Koen de Mare4689
1 KoenDelaere4690
1 Koerkra4691
2 Koertefa4692
1 Koffieyahoo4693
5 Kogorman4694
4 Koja kjx4695
1 Koko904696
2 Kolbasz4697
38 KolbertBot4698
1 Kondavarsha4699
1 Konne884700
1 Konnetikut4701
1 Konstantin Pest4702
1 Konstantin Veretennicov4703
2 Koolnik904704
2 Kooo4705
9 Kope4706
1 Koreshok20004707
2 KorinoChikara4708

4684 https://en.wikipedia.org/wiki/User:Knutux
4685 https://en.wikipedia.org/wiki/User:Koavf
4686 https://en.wikipedia.org/wiki/User:Kobrabones
4687 https://en.wikipedia.org/wiki/User:KocjoBot~enwiki
4688 https://en.wikipedia.org/wiki/User:Koczy
4689 https://en.wikipedia.org/w/index.php%3ftitle=User:Koen_de_Mare&action=edit&redlink=1
4690 https://en.wikipedia.org/wiki/User:KoenDelaere
4691 https://en.wikipedia.org/w/index.php%3ftitle=User:Koerkra&action=edit&redlink=1
4692 https://en.wikipedia.org/wiki/User:Koertefa
4693 https://en.wikipedia.org/wiki/User:Koffieyahoo
4694 https://en.wikipedia.org/wiki/User:Kogorman
4695 https://en.wikipedia.org/w/index.php%3ftitle=User:Koja_kjx&action=edit&redlink=1
4696 https://en.wikipedia.org/wiki/User:Koko90
4697 https://en.wikipedia.org/wiki/User:Kolbasz
4698 https://en.wikipedia.org/wiki/User:KolbertBot
4699 https://en.wikipedia.org/wiki/User:Kondavarsha
4700 https://en.wikipedia.org/w/index.php%3ftitle=User:Konne88&action=edit&redlink=1
4701 https://en.wikipedia.org/wiki/User:Konnetikut
4702 https://en.wikipedia.org/wiki/User:Konstantin_Pest
4703 https://en.wikipedia.org/w/index.php%3ftitle=User:Konstantin_Veretennicov&action=edit&redlink=1
4704 https://en.wikipedia.org/w/index.php%3ftitle=User:Koolnik90&action=edit&redlink=1
4705 https://en.wikipedia.org/wiki/User:Kooo
4706 https://en.wikipedia.org/w/index.php%3ftitle=User:Kope&action=edit&redlink=1
4707 https://en.wikipedia.org/w/index.php%3ftitle=User:Koreshok2000&action=edit&redlink=1
4708 https://en.wikipedia.org/wiki/User:KorinoChikara

1 Korrawit4709
2 Korval4710
7 Kostmo4711
1 Kotasik4712
1 Kotha arun20054713
3 Kotika984714
17 Kotniski4715
2 Kozuch4716
1 Kpememory4717
2 Kpjas4718
5 Kracekumar4719
11 Kragen4720
1 Krakhan4721
3 Kralizec!4722
2 Kranix4723
3 Krauss4724
12 Kraven007mega4725
53 Kri4726
1 Krishna.914727
5 Krishna5534728
2 Krishnachandranvn4729
1 Krishnakanth.694730
2 Kriskra~enwiki4731
1 Kristjan Wager4732
3 Kristjan.Jonasson4733

4709 https://en.wikipedia.org/w/index.php%3ftitle=User:Korrawit&action=edit&redlink=1
4710 https://en.wikipedia.org/w/index.php%3ftitle=User:Korval&action=edit&redlink=1
4711 https://en.wikipedia.org/wiki/User:Kostmo
4712 https://en.wikipedia.org/wiki/User:Kotasik
4713 https://en.wikipedia.org/wiki/User:Kotha_arun2005
4714 https://en.wikipedia.org/wiki/User:Kotika98
4715 https://en.wikipedia.org/w/index.php%3ftitle=User:Kotniski&action=edit&redlink=1
4716 https://en.wikipedia.org/wiki/User:Kozuch
4717 https://en.wikipedia.org/w/index.php%3ftitle=User:Kpememory&action=edit&redlink=1
4718 https://en.wikipedia.org/wiki/User:Kpjas
4719 https://en.wikipedia.org/wiki/User:Kracekumar
4720 https://en.wikipedia.org/wiki/User:Kragen
4721 https://en.wikipedia.org/w/index.php%3ftitle=User:Krakhan&action=edit&redlink=1
4722 https://en.wikipedia.org/wiki/User:Kralizec!
4723 https://en.wikipedia.org/wiki/User:Kranix
4724 https://en.wikipedia.org/wiki/User:Krauss
4725 https://en.wikipedia.org/w/index.php%3ftitle=User:Kraven007mega&action=edit&redlink=1
4726 https://en.wikipedia.org/wiki/User:Kri
4727 https://en.wikipedia.org/w/index.php%3ftitle=User:Krishna.91&action=edit&redlink=1
4728 https://en.wikipedia.org/w/index.php%3ftitle=User:Krishna553&action=edit&redlink=1
4729 https://en.wikipedia.org/wiki/User:Krishnachandranvn
4730 https://en.wikipedia.org/w/index.php%3ftitle=User:Krishnakanth.69&action=edit&redlink=1
4731 https://en.wikipedia.org/w/index.php%3ftitle=User:Kriskra~enwiki&action=edit&redlink=1
4732 https://en.wikipedia.org/wiki/User:Kristjan_Wager
4733 https://en.wikipedia.org/wiki/User:Kristjan.Jonasson

1 Kristoferye4734
1 Krithin4735
1 Krivokon.dmitry4736
1 KrizzyB4737
1 Kromped4738
2 Kroq-gar784739
1 Krotera4740
1 Krp~enwiki4741
2 Krueger.joey4742
2 Kruskal4743
1 Kruusamägi4744
2 Ksana4745
1 Kstone9994746
1 Ksyrie4747
3 Kthejoker4748
1 Ktims4749
1 Kubanczyk4750
1 Kubek154751
1 Kufat4752
1 Kulp4753
1 Kulvko4754
1 Kungfuadam4755
2 Kupferhirn4756
1 Kurapix4757
1 Kurige4758

4734 https://en.wikipedia.org/w/index.php%3ftitle=User:Kristoferye&action=edit&redlink=1
4735 https://en.wikipedia.org/wiki/User:Krithin
4736 https://en.wikipedia.org/w/index.php%3ftitle=User:Krivokon.dmitry&action=edit&redlink=1
4737 https://en.wikipedia.org/wiki/User:KrizzyB
4738 https://en.wikipedia.org/w/index.php%3ftitle=User:Kromped&action=edit&redlink=1
4739 https://en.wikipedia.org/wiki/User:Kroq-gar78
4740 https://en.wikipedia.org/wiki/User:Krotera
4741 https://en.wikipedia.org/w/index.php%3ftitle=User:Krp~enwiki&action=edit&redlink=1
4742 https://en.wikipedia.org/w/index.php%3ftitle=User:Krueger.joey&action=edit&redlink=1
4743 https://en.wikipedia.org/wiki/User:Kruskal
4744 https://en.wikipedia.org/wiki/User:Kruusam%25C3%25A4gi
4745 https://en.wikipedia.org/w/index.php%3ftitle=User:Ksana&action=edit&redlink=1
4746 https://en.wikipedia.org/wiki/User:Kstone999
4747 https://en.wikipedia.org/wiki/User:Ksyrie
4748 https://en.wikipedia.org/wiki/User:Kthejoker
4749 https://en.wikipedia.org/wiki/User:Ktims
4750 https://en.wikipedia.org/wiki/User:Kubanczyk
4751 https://en.wikipedia.org/wiki/User:Kubek15
4752 https://en.wikipedia.org/wiki/User:Kufat
4753 https://en.wikipedia.org/w/index.php%3ftitle=User:Kulp&action=edit&redlink=1
4754 https://en.wikipedia.org/w/index.php%3ftitle=User:Kulvko&action=edit&redlink=1
4755 https://en.wikipedia.org/wiki/User:Kungfuadam
4756 https://en.wikipedia.org/w/index.php%3ftitle=User:Kupferhirn&action=edit&redlink=1
4757 https://en.wikipedia.org/w/index.php%3ftitle=User:Kurapix&action=edit&redlink=1
4758 https://en.wikipedia.org/w/index.php%3ftitle=User:Kurige&action=edit&redlink=1

1 Kurivaim4759
1 Kurochka4760
1 KuroiShiroi4761
4 Kuru4762
1 Kurykh4763
1 Kushal khadka4764
1 Kusma4765
3 Kusunose4766
4 Kuszi4767
4 Kutiepie944768
2 Kvamsi824769
6 Kven644770
4 Kvihill4771
5 Kvikram4772
6 Kvng4773
2 Kwamikagami4774
1 Kwiki user4775
1 Kx11864776
54 Kxx4777
1 Ky0$h1n4778
2 Kyellan4779
1 Kyle MoJo4780
3 Kyle the bot4781
5 Kyle10094782
1 Kyledalefrederick4783

4759 https://en.wikipedia.org/w/index.php%3ftitle=User:Kurivaim&action=edit&redlink=1
4760 https://en.wikipedia.org/wiki/User:Kurochka
4761 https://en.wikipedia.org/w/index.php%3ftitle=User:KuroiShiroi&action=edit&redlink=1
4762 https://en.wikipedia.org/wiki/User:Kuru
4763 https://en.wikipedia.org/wiki/User:Kurykh
4764 https://en.wikipedia.org/w/index.php%3ftitle=User:Kushal_khadka&action=edit&redlink=1
4765 https://en.wikipedia.org/wiki/User:Kusma
4766 https://en.wikipedia.org/wiki/User:Kusunose
4767 https://en.wikipedia.org/wiki/User:Kuszi
4768 https://en.wikipedia.org/w/index.php%3ftitle=User:Kutiepie94&action=edit&redlink=1
4769 https://en.wikipedia.org/w/index.php%3ftitle=User:Kvamsi82&action=edit&redlink=1
4770 https://en.wikipedia.org/w/index.php%3ftitle=User:Kven64&action=edit&redlink=1
4771 https://en.wikipedia.org/wiki/User:Kvihill
4772 https://en.wikipedia.org/wiki/User:Kvikram
4773 https://en.wikipedia.org/wiki/User:Kvng
4774 https://en.wikipedia.org/wiki/User:Kwamikagami
4775 https://en.wikipedia.org/w/index.php%3ftitle=User:Kwiki_user&action=edit&redlink=1
4776 https://en.wikipedia.org/wiki/User:Kx1186
4777 https://en.wikipedia.org/wiki/User:Kxx
4778 https://en.wikipedia.org/w/index.php%3ftitle=User:Ky0$h1n&action=edit&redlink=1
4779 https://en.wikipedia.org/wiki/User:Kyellan
4780 https://en.wikipedia.org/wiki/User:Kyle_MoJo
4781 https://en.wikipedia.org/wiki/User:Kyle_the_bot
4782 https://en.wikipedia.org/w/index.php%3ftitle=User:Kyle1009&action=edit&redlink=1
4783 https://en.wikipedia.org/w/index.php%3ftitle=User:Kyledalefrederick&action=edit&redlink=1

2 Kylemcinnes4784
17 KylieTastic4785
1 KylimeDragon4786
1 KymFarnik4787
2 Kyokpae~enwiki4788
2 Kyousuke.k4789
1 Kyuko4790
1 KyuubiSeal4791
1 Kzkzb4792
1 Kzzl4793
1 L293D4794
1 L337 kybldmstr4795
1 L8 ManeValidus4796
1 LA24797
2 LBand4798
87 LC~enwiki4799
1 LDGraham4800
1 LFaraone4801
3 LJosil4802
1 LMLB4803
2 LMolr4804
42 LOL4805
1 LPS and MLP Fan4806
2 LaaknorBot4807
1 Lab-oratory4808

4784 https://en.wikipedia.org/wiki/User:Kylemcinnes
4785 https://en.wikipedia.org/wiki/User:KylieTastic
4786 https://en.wikipedia.org/w/index.php%3ftitle=User:KylimeDragon&action=edit&redlink=1
4787 https://en.wikipedia.org/wiki/User:KymFarnik
4788 https://en.wikipedia.org/wiki/User:Kyokpae~enwiki
4789 https://en.wikipedia.org/w/index.php%3ftitle=User:Kyousuke.k&action=edit&redlink=1
4790 https://en.wikipedia.org/wiki/User:Kyuko
4791 https://en.wikipedia.org/w/index.php%3ftitle=User:KyuubiSeal&action=edit&redlink=1
4792 https://en.wikipedia.org/w/index.php%3ftitle=User:Kzkzb&action=edit&redlink=1
4793 https://en.wikipedia.org/wiki/User:Kzzl
4794 https://en.wikipedia.org/wiki/User:L293D
4795 https://en.wikipedia.org/wiki/User:L337_kybldmstr
4796 https://en.wikipedia.org/w/index.php%3ftitle=User:L8_ManeValidus&action=edit&redlink=1
4797 https://en.wikipedia.org/wiki/User:LA2
4798 https://en.wikipedia.org/w/index.php%3ftitle=User:LBand&action=edit&redlink=1
4799 https://en.wikipedia.org/wiki/User:LC~enwiki
4800 https://en.wikipedia.org/w/index.php%3ftitle=User:LDGraham&action=edit&redlink=1
4801 https://en.wikipedia.org/wiki/User:LFaraone
4802 https://en.wikipedia.org/wiki/User:LJosil
4803 https://en.wikipedia.org/w/index.php%3ftitle=User:LMLB&action=edit&redlink=1
4804 https://en.wikipedia.org/w/index.php%3ftitle=User:LMolr&action=edit&redlink=1
4805 https://en.wikipedia.org/wiki/User:LOL
4806 https://en.wikipedia.org/wiki/User:LPS_and_MLP_Fan
4807 https://en.wikipedia.org/wiki/User:LaaknorBot
4808 https://en.wikipedia.org/wiki/User:Lab-oratory

1 Labachevskij4809
1 Labalius4810
1 Laberinto154811
1 Lacipac4812
4 Lacis alfredo4813
5 Lady-shirakawa4814
1 Lage~enwiki4815
2 Lailsonbm4816
1 Lajsikonik4817
1 Laleppa4818
1 Lalithsuresh4819
1 Lamb994820
1 Lambda Fairy4821
44 Lambiam4822
1 Lambin~enwiki4823
3 Lamdk4824
1 Lamro4825
1 Lancebop4826
1 Lancekt4827
1 Landroo4828
4 Lanem4829
2 Lanov4830
1 Lansey4831
1 Lanthaler4832
6 Lantonov4833

4809 https://en.wikipedia.org/w/index.php%3ftitle=User:Labachevskij&action=edit&redlink=1
4810 https://en.wikipedia.org/wiki/User:Labalius
4811 https://en.wikipedia.org/w/index.php%3ftitle=User:Laberinto15&action=edit&redlink=1
4812 https://en.wikipedia.org/w/index.php%3ftitle=User:Lacipac&action=edit&redlink=1
4813 https://en.wikipedia.org/w/index.php%3ftitle=User:Lacis_alfredo&action=edit&redlink=1
4814 https://en.wikipedia.org/wiki/User:Lady-shirakawa
4815 https://en.wikipedia.org/wiki/User:Lage~enwiki
4816 https://en.wikipedia.org/w/index.php%3ftitle=User:Lailsonbm&action=edit&redlink=1
4817 https://en.wikipedia.org/wiki/User:Lajsikonik
4818 https://en.wikipedia.org/wiki/User:Laleppa
4819 https://en.wikipedia.org/w/index.php%3ftitle=User:Lalithsuresh&action=edit&redlink=1
4820 https://en.wikipedia.org/wiki/User:Lamb99
4821 https://en.wikipedia.org/w/index.php%3ftitle=User:Lambda_Fairy&action=edit&redlink=1
4822 https://en.wikipedia.org/wiki/User:Lambiam
4823 https://en.wikipedia.org/w/index.php%3ftitle=User:Lambin~enwiki&action=edit&redlink=1
4824 https://en.wikipedia.org/wiki/User:Lamdk
4825 https://en.wikipedia.org/wiki/User:Lamro
4826 https://en.wikipedia.org/w/index.php%3ftitle=User:Lancebop&action=edit&redlink=1
4827 https://en.wikipedia.org/wiki/User:Lancekt
4828 https://en.wikipedia.org/wiki/User:Landroo
4829 https://en.wikipedia.org/wiki/User:Lanem
4830 https://en.wikipedia.org/wiki/User:Lanov
4831 https://en.wikipedia.org/w/index.php%3ftitle=User:Lansey&action=edit&redlink=1
4832 https://en.wikipedia.org/wiki/User:Lanthaler
4833 https://en.wikipedia.org/wiki/User:Lantonov

3 Laoris4834
2 LapoLuchini4835
3 Lark ascending4836
2 Larosek4837
2 Larry laptop4838
4 LarryLACa4839
2 Larryv4840
1 Lars Trebing4841
1 LarsMarius4842
1 Larsborn4843
1 Larsholmjensen4844
1 Lartoven4845
1 Laschatzer4846
5 Laser brain4847
1 Lasz4848
8 Latex-yow4849
1 Latin.ufmg4850
1 Laubrau~enwiki4851
2 Laudaka4852
1 Laughsinthestocks4853
1 Launchballer4854
1 Launchpadx4855
2 Lauren maggio4856
2 Laurens~enwiki4857
1 Laurentiuu4858

4834 https://en.wikipedia.org/w/index.php%3ftitle=User:Laoris&action=edit&redlink=1
4835 https://en.wikipedia.org/wiki/User:LapoLuchini
4836 https://en.wikipedia.org/w/index.php%3ftitle=User:Lark_ascending&action=edit&redlink=1
4837 https://en.wikipedia.org/w/index.php%3ftitle=User:Larosek&action=edit&redlink=1
4838 https://en.wikipedia.org/wiki/User:Larry_laptop
4839 https://en.wikipedia.org/wiki/User:LarryLACa
4840 https://en.wikipedia.org/wiki/User:Larryv
4841 https://en.wikipedia.org/wiki/User:Lars_Trebing
4842 https://en.wikipedia.org/wiki/User:LarsMarius
4843 https://en.wikipedia.org/wiki/User:Larsborn
4844 https://en.wikipedia.org/wiki/User:Larsholmjensen
4845 https://en.wikipedia.org/wiki/User:Lartoven
4846 https://en.wikipedia.org/w/index.php%3ftitle=User:Laschatzer&action=edit&redlink=1
4847 https://en.wikipedia.org/wiki/User:Laser_brain
4848 https://en.wikipedia.org/w/index.php%3ftitle=User:Lasz&action=edit&redlink=1
4849 https://en.wikipedia.org/w/index.php%3ftitle=User:Latex-yow&action=edit&redlink=1
4850 https://en.wikipedia.org/w/index.php%3ftitle=User:Latin.ufmg&action=edit&redlink=1
4851 https://en.wikipedia.org/wiki/User:Laubrau~enwiki
4852 https://en.wikipedia.org/wiki/User:Laudaka
4853 https://en.wikipedia.org/w/index.php%3ftitle=User:Laughsinthestocks&action=edit&redlink=1
4854 https://en.wikipedia.org/wiki/User:Launchballer
4855 https://en.wikipedia.org/w/index.php%3ftitle=User:Launchpadx&action=edit&redlink=1
4856 https://en.wikipedia.org/wiki/User:Lauren_maggio
4857 https://en.wikipedia.org/w/index.php%3ftitle=User:Laurens~enwiki&action=edit&redlink=1
4858 https://en.wikipedia.org/w/index.php%3ftitle=User:Laurentiuu&action=edit&redlink=1

2 Laurențiu Dascălu4859
6 Laurinkus4860
4 Laurusnobilis4861
1 Lauwr4862
3 Lavaka4863
1 Lavenderbunny4864
1 Lavv174865
1 Lawlzlawlz4866
2 Lawpjc4867
1 Lawrence25074868
1 Ldecola4869
1 Ldoron4870
4 Ldthai4871
1 Le-idiot4872
1 Leaflord4873
6 Leafyplant4874
1 LearnerGenius4875
1 Leastfixedpoint4876
1 LeaveSleaves4877
1 Leberbaum4878
1 Lebroyl4879
3 Lee Carre4880
7 Lee Daniel Crocker4881
4 Lee J Haywood4882
1 Leedeth4883

4859 https://en.wikipedia.org/wiki/User:Lauren%25C8%259Biu_Dasc%25C4%2583lu
4860 https://en.wikipedia.org/wiki/User:Laurinkus
4861 https://en.wikipedia.org/w/index.php%3ftitle=User:Laurusnobilis&action=edit&redlink=1
4862 https://en.wikipedia.org/wiki/User:Lauwr
4863 https://en.wikipedia.org/wiki/User:Lavaka
4864 https://en.wikipedia.org/w/index.php%3ftitle=User:Lavenderbunny&action=edit&redlink=1
4865 https://en.wikipedia.org/w/index.php%3ftitle=User:Lavv17&action=edit&redlink=1
4866 https://en.wikipedia.org/w/index.php%3ftitle=User:Lawlzlawlz&action=edit&redlink=1
4867 https://en.wikipedia.org/wiki/User:Lawpjc
4868 https://en.wikipedia.org/w/index.php%3ftitle=User:Lawrence2507&action=edit&redlink=1
4869 https://en.wikipedia.org/wiki/User:Ldecola
4870 https://en.wikipedia.org/w/index.php%3ftitle=User:Ldoron&action=edit&redlink=1
4871 https://en.wikipedia.org/w/index.php%3ftitle=User:Ldthai&action=edit&redlink=1
4872 https://en.wikipedia.org/wiki/User:Le-idiot
4873 https://en.wikipedia.org/w/index.php%3ftitle=User:Leaflord&action=edit&redlink=1
4874 https://en.wikipedia.org/wiki/User:Leafyplant
4875 https://en.wikipedia.org/wiki/User:LearnerGenius
4876 https://en.wikipedia.org/wiki/User:Leastfixedpoint
4877 https://en.wikipedia.org/wiki/User:LeaveSleaves
4878 https://en.wikipedia.org/w/index.php%3ftitle=User:Leberbaum&action=edit&redlink=1
4879 https://en.wikipedia.org/wiki/User:Lebroyl
4880 https://en.wikipedia.org/wiki/User:Lee_Carre
4881 https://en.wikipedia.org/wiki/User:Lee_Daniel_Crocker
4882 https://en.wikipedia.org/wiki/User:Lee_J_Haywood
4883 https://en.wikipedia.org/wiki/User:Leedeth

9 Leegrc4884
1 Legoalphapanzer4885
21 Legobot4886
1 Leibniz4887
1 Leirbag.arc4888
1 Leithp4889
1 Lejyby4890
5 Lekkio4891
1 Lekro4892
7 Leland McInnes4893
1 Lemontea4894
1 Lemzwerg4895
2 Lennysz4896
1 Lenschulwitz4897
31 Leon math4898
1 LeonMaat4899
10 Leonard G.4900
2 LeonardoGregianin4901
4 LeonardoRob0t4902
80 Leonhard Fortier4903
5 Leonxlin4904
1 Lephrim4905
2 Leprof 72724906
1 Lerdthenerd4907
6 Lescanomiguel19984908

4884 https://en.wikipedia.org/w/index.php%3ftitle=User:Leegrc&action=edit&redlink=1
4885 https://en.wikipedia.org/w/index.php%3ftitle=User:Legoalphapanzer&action=edit&redlink=1
4886 https://en.wikipedia.org/wiki/User:Legobot
4887 https://en.wikipedia.org/wiki/User:Leibniz
4888 https://en.wikipedia.org/w/index.php%3ftitle=User:Leirbag.arc&action=edit&redlink=1
4889 https://en.wikipedia.org/wiki/User:Leithp
4890 https://en.wikipedia.org/w/index.php%3ftitle=User:Lejyby&action=edit&redlink=1
4891 https://en.wikipedia.org/w/index.php%3ftitle=User:Lekkio&action=edit&redlink=1
4892 https://en.wikipedia.org/wiki/User:Lekro
4893 https://en.wikipedia.org/wiki/User:Leland_McInnes
4894 https://en.wikipedia.org/wiki/User:Lemontea
4895 https://en.wikipedia.org/w/index.php%3ftitle=User:Lemzwerg&action=edit&redlink=1
4896 https://en.wikipedia.org/w/index.php%3ftitle=User:Lennysz&action=edit&redlink=1
4897 https://en.wikipedia.org/wiki/User:Lenschulwitz
4898 https://en.wikipedia.org/wiki/User:Leon_math
4899 https://en.wikipedia.org/w/index.php%3ftitle=User:LeonMaat&action=edit&redlink=1
4900 https://en.wikipedia.org/wiki/User:Leonard_G.
4901 https://en.wikipedia.org/wiki/User:LeonardoGregianin
4902 https://en.wikipedia.org/wiki/User:LeonardoRob0t
4903 https://en.wikipedia.org/wiki/User:Leonhard_Fortier
4904 https://en.wikipedia.org/wiki/User:Leonxlin
4905 https://en.wikipedia.org/w/index.php%3ftitle=User:Lephrim&action=edit&redlink=1
4906 https://en.wikipedia.org/wiki/User:Leprof_7272
4907 https://en.wikipedia.org/wiki/User:Lerdthenerd
4908 https://en.wikipedia.org/w/index.php%3ftitle=User:Lescanomiguel1998&action=edit&redlink=1

1 Leschnei4909
1 Lesonyrra4910
7 Lesser Cartographies4911
2 Leszek Jańczuk4912
2 Let4time4913
1 Letcreate1234914
1 Lethe4915
6 Levelmartinmusatov4916
1 Leventov4917
1 Levin4918
1 Levmatta4919
2 Lewis Goudy4920
1 Lewis69h4921
1 Lewix4922
1 Lexor4923
1 LeyNon4924
2 Leycec4925
1 Leyo4926
11 Lfstevens4927
2 Lhuang34928
4 LiDaobing4929
1 Liam McM4930
1 Liam21054931
1 Liam9874932
2 Liandrei4933

4909 https://en.wikipedia.org/wiki/User:Leschnei
4910 https://en.wikipedia.org/wiki/User:Lesonyrra
4911 https://en.wikipedia.org/wiki/User:Lesser_Cartographies
4912 https://en.wikipedia.org/wiki/User:Leszek_Ja%25C5%2584czuk
4913 https://en.wikipedia.org/w/index.php%3ftitle=User:Let4time&action=edit&redlink=1
4914 https://en.wikipedia.org/wiki/User:Letcreate123
4915 https://en.wikipedia.org/wiki/User:Lethe
4916 https://en.wikipedia.org/wiki/User:Levelmartinmusatov
4917 https://en.wikipedia.org/w/index.php%3ftitle=User:Leventov&action=edit&redlink=1
4918 https://en.wikipedia.org/wiki/User:Levin
4919 https://en.wikipedia.org/w/index.php%3ftitle=User:Levmatta&action=edit&redlink=1
4920 https://en.wikipedia.org/wiki/User:Lewis_Goudy
4921 https://en.wikipedia.org/w/index.php%3ftitle=User:Lewis69h&action=edit&redlink=1
4922 https://en.wikipedia.org/wiki/User:Lewix
4923 https://en.wikipedia.org/wiki/User:Lexor
4924 https://en.wikipedia.org/w/index.php%3ftitle=User:LeyNon&action=edit&redlink=1
4925 https://en.wikipedia.org/wiki/User:Leycec
4926 https://en.wikipedia.org/wiki/User:Leyo
4927 https://en.wikipedia.org/wiki/User:Lfstevens
4928 https://en.wikipedia.org/w/index.php%3ftitle=User:Lhuang3&action=edit&redlink=1
4929 https://en.wikipedia.org/wiki/User:LiDaobing
4930 https://en.wikipedia.org/wiki/User:Liam_McM
4931 https://en.wikipedia.org/w/index.php%3ftitle=User:Liam2105&action=edit&redlink=1
4932 https://en.wikipedia.org/wiki/User:Liam987
4933 https://en.wikipedia.org/wiki/User:Liandrei

22 Liao4934
1 LibLord4935
2 LiberatorG4936
1 Liberatus4937
1 Liberlogos4938
1 Libertyrights4939
4 Libor Vilímek4940
6 Lida Hayrapetyan4941
1 Lidden~enwiki4942
1 Life of Riley4943
1 Liftarn4944
1 Liger424945
19 Lightbot4946
1 Lighthouse644947
2 Lightmouse4948
3 Lightst4949
7 Ligulembot4950
6 Liko814951
12 LilHelpa4952
1 Lillanes4953
1 Lilnepatiz4954
1 Lilygrimm2574955
3 Lim Wei Quan4956
1 Limaner4957
5 Limit-theorem4958

4934 https://en.wikipedia.org/wiki/User:Liao
4935 https://en.wikipedia.org/wiki/User:LibLord
4936 https://en.wikipedia.org/wiki/User:LiberatorG
4937 https://en.wikipedia.org/w/index.php%3ftitle=User:Liberatus&action=edit&redlink=1
4938 https://en.wikipedia.org/wiki/User:Liberlogos
4939 https://en.wikipedia.org/wiki/User:Libertyrights
4940 https://en.wikipedia.org/wiki/User:Libor_Vil%25C3%25ADmek
4941 https://en.wikipedia.org/w/index.php%3ftitle=User:Lida_Hayrapetyan&action=edit&redlink=1
4942 https://en.wikipedia.org/w/index.php%3ftitle=User:Lidden~enwiki&action=edit&redlink=1
4943 https://en.wikipedia.org/wiki/User:Life_of_Riley
4944 https://en.wikipedia.org/wiki/User:Liftarn
4945 https://en.wikipedia.org/w/index.php%3ftitle=User:Liger42&action=edit&redlink=1
4946 https://en.wikipedia.org/wiki/User:Lightbot
4947 https://en.wikipedia.org/wiki/User:Lighthouse64
4948 https://en.wikipedia.org/wiki/User:Lightmouse
4949 https://en.wikipedia.org/wiki/User:Lightst
4950 https://en.wikipedia.org/wiki/User:Ligulembot
4951 https://en.wikipedia.org/wiki/User:Liko81
4952 https://en.wikipedia.org/wiki/User:LilHelpa
4953 https://en.wikipedia.org/w/index.php%3ftitle=User:Lillanes&action=edit&redlink=1
4954 https://en.wikipedia.org/wiki/User:Lilnepatiz
4955 https://en.wikipedia.org/w/index.php%3ftitle=User:Lilygrimm257&action=edit&redlink=1
4956 https://en.wikipedia.org/w/index.php%3ftitle=User:Lim_Wei_Quan&action=edit&redlink=1
4957 https://en.wikipedia.org/w/index.php%3ftitle=User:Limaner&action=edit&redlink=1
4958 https://en.wikipedia.org/wiki/User:Limit-theorem

12 Linas4959
3 Linasargsyan4960
1 Lindsaywinkler4961
1 Linealchamp4962
1 Linguica4963
1 LinguistAtLarge4964
1 Lingust4965
21 Lingwanjae4966
4 LinkFA-Bot4967
1 Linkato14968
5 Linket4969
1 Linux9814970
1 Linuxbabu~enwiki4971
1 Lioinnisfree4972
1 Liorma4973
1 LiranKatzir4974
2 Liridon4975
2 Liron004976
1 Liso4977
1 Litdayss4978
1 LithiumFlash4979
2 Lithopsian4980
2 Lithui4981
1 Little Mountain 54982
1 LittleBenW4983

4959 https://en.wikipedia.org/wiki/User:Linas
4960 https://en.wikipedia.org/w/index.php%3ftitle=User:Linasargsyan&action=edit&redlink=1
4961 https://en.wikipedia.org/w/index.php%3ftitle=User:Lindsaywinkler&action=edit&redlink=1
4962 https://en.wikipedia.org/wiki/User:Linealchamp
4963 https://en.wikipedia.org/w/index.php%3ftitle=User:Linguica&action=edit&redlink=1
4964 https://en.wikipedia.org/wiki/User:LinguistAtLarge
4965 https://en.wikipedia.org/wiki/User:Lingust
4966 https://en.wikipedia.org/w/index.php%3ftitle=User:Lingwanjae&action=edit&redlink=1
4967 https://en.wikipedia.org/wiki/User:LinkFA-Bot
4968 https://en.wikipedia.org/w/index.php%3ftitle=User:Linkato1&action=edit&redlink=1
4969 https://en.wikipedia.org/w/index.php%3ftitle=User:Linket&action=edit&redlink=1
4970 https://en.wikipedia.org/w/index.php%3ftitle=User:Linux981&action=edit&redlink=1
4971 https://en.wikipedia.org/w/index.php%3ftitle=User:Linuxbabu~enwiki&action=edit&redlink=1
4972 https://en.wikipedia.org/w/index.php%3ftitle=User:Lioinnisfree&action=edit&redlink=1
4973 https://en.wikipedia.org/w/index.php%3ftitle=User:Liorma&action=edit&redlink=1
4974 https://en.wikipedia.org/w/index.php%3ftitle=User:LiranKatzir&action=edit&redlink=1
4975 https://en.wikipedia.org/wiki/User:Liridon
4976 https://en.wikipedia.org/wiki/User:Liron00
4977 https://en.wikipedia.org/wiki/User:Liso
4978 https://en.wikipedia.org/w/index.php%3ftitle=User:Litdayss&action=edit&redlink=1
4979 https://en.wikipedia.org/wiki/User:LithiumFlash
4980 https://en.wikipedia.org/wiki/User:Lithopsian
4981 https://en.wikipedia.org/w/index.php%3ftitle=User:Lithui&action=edit&redlink=1
4982 https://en.wikipedia.org/wiki/User:Little_Mountain_5
4983 https://en.wikipedia.org/wiki/User:LittleBenW

7 LittleDan4984
2 Liu Yuezhang4985
6 LiuZhaoliang4986
1 Liuofficial4987
2 Lixinso4988
1 LizardJr84989
20 LizardWizard4990
1 Lizheng024991
1 LjL4992
1 Ljtale4993
2 Lkajsdflkj4994
1 Lkjhgfdsa4995
2 Llightex4996
1 Lmfantome4997
1 Lmonson264998
1 Loabok4999
1 Loading5000
8 Loadmaster5001
2 Localh775002
1 Lockley5003
3 Locobot5004
1 LodeRunner5005
3 Lofty abyss5006
4 Logan5007
1 Logan.aggregate5008

4984 https://en.wikipedia.org/wiki/User:LittleDan
4985 https://en.wikipedia.org/w/index.php%3ftitle=User:Liu_Yuezhang&action=edit&redlink=1
4986 https://en.wikipedia.org/w/index.php%3ftitle=User:LiuZhaoliang&action=edit&redlink=1
4987 https://en.wikipedia.org/wiki/User:Liuofficial
4988 https://en.wikipedia.org/wiki/User:Lixinso
4989 https://en.wikipedia.org/wiki/User:LizardJr8
4990 https://en.wikipedia.org/wiki/User:LizardWizard
4991 https://en.wikipedia.org/w/index.php%3ftitle=User:Lizheng02&action=edit&redlink=1
4992 https://en.wikipedia.org/wiki/User:LjL
4993 https://en.wikipedia.org/w/index.php%3ftitle=User:Ljtale&action=edit&redlink=1
4994 https://en.wikipedia.org/w/index.php%3ftitle=User:Lkajsdflkj&action=edit&redlink=1
4995 https://en.wikipedia.org/wiki/User:Lkjhgfdsa
4996 https://en.wikipedia.org/wiki/User:Llightex
4997 https://en.wikipedia.org/w/index.php%3ftitle=User:Lmfantome&action=edit&redlink=1
4998 https://en.wikipedia.org/w/index.php%3ftitle=User:Lmonson26&action=edit&redlink=1
4999 https://en.wikipedia.org/w/index.php%3ftitle=User:Loabok&action=edit&redlink=1
5000 https://en.wikipedia.org/wiki/User:Loading
5001 https://en.wikipedia.org/wiki/User:Loadmaster
5002 https://en.wikipedia.org/w/index.php%3ftitle=User:Localh77&action=edit&redlink=1
5003 https://en.wikipedia.org/wiki/User:Lockley
5004 https://en.wikipedia.org/wiki/User:Locobot
5005 https://en.wikipedia.org/wiki/User:LodeRunner
5006 https://en.wikipedia.org/wiki/User:Lofty_abyss
5007 https://en.wikipedia.org/wiki/User:Logan
5008 https://en.wikipedia.org/wiki/User:Logan.aggregate

1 LoganZhou5009
3 Logx795010
2 Loisel5011
3 LokeshRavindranathan5012
7 LokiClock5013
1 LokiTheatreChick5014
1 Lonezor5015
2 Longhair5016
3 Loodog5017
1 Looie4965018
2 Look2See15019
1 Loopology5020
7 Looxix~enwiki5021
1 Lor5022
8 Loraof5023
2 Lord Bolingbroke5024
1 Lord Emsworth5025
1 LordAnubisBOT5026
2 LordArtemis5027
1 Lordmetroid5028
1 Lordvadr5029
1 Loren.wilton5030
1 Loreto~enwiki5031
3 Lotje5032
1 LottsoLuck5033

5009 https://en.wikipedia.org/w/index.php%3ftitle=User:LoganZhou&action=edit&redlink=1
5010 https://en.wikipedia.org/w/index.php%3ftitle=User:Logx79&action=edit&redlink=1
5011 https://en.wikipedia.org/wiki/User:Loisel
5012 https://en.wikipedia.org/wiki/User:LokeshRavindranathan
5013 https://en.wikipedia.org/wiki/User:LokiClock
5014 https://en.wikipedia.org/w/index.php%3ftitle=User:LokiTheatreChick&action=edit&redlink=1
5015 https://en.wikipedia.org/wiki/User:Lonezor
5016 https://en.wikipedia.org/wiki/User:Longhair
5017 https://en.wikipedia.org/wiki/User:Loodog
5018 https://en.wikipedia.org/wiki/User:Looie496
5019 https://en.wikipedia.org/wiki/User:Look2See1
5020 https://en.wikipedia.org/w/index.php%3ftitle=User:Loopology&action=edit&redlink=1
5021 https://en.wikipedia.org/wiki/User:Looxix~enwiki
5022 https://en.wikipedia.org/wiki/User:Lor
5023 https://en.wikipedia.org/wiki/User:Loraof
5024 https://en.wikipedia.org/wiki/User:Lord_Bolingbroke
5025 https://en.wikipedia.org/wiki/User:Lord_Emsworth
5026 https://en.wikipedia.org/wiki/User:LordAnubisBOT
5027 https://en.wikipedia.org/wiki/User:LordArtemis
5028 https://en.wikipedia.org/wiki/User:Lordmetroid
5029 https://en.wikipedia.org/w/index.php%3ftitle=User:Lordvadr&action=edit&redlink=1
5030 https://en.wikipedia.org/wiki/User:Loren.wilton
5031 https://en.wikipedia.org/w/index.php%3ftitle=User:Loreto~enwiki&action=edit&redlink=1
5032 https://en.wikipedia.org/wiki/User:Lotje
5033 https://en.wikipedia.org/w/index.php%3ftitle=User:LottsoLuck&action=edit&redlink=1

1870
External links

1 Lotu5034
103 LouScheffer5035
1 Loufranco5036
2 Louigi5037
1 Louis Kyu Won Ryu5038
2 Louis flood5039
1 LouisWins5040
1 Louispencer325041
4 Louperibot5042
2 Lourakis5043
1 Lousyd5044
1 Love debian5045
1 LoveEncounterFlow5046
1 LoverushAM5047
24 Low-frequency internal5048
5 Lowellian5049
2 Lowercase Sigma5050
3 Lp.vitor5051
3 Lpgeffen5052
3 Lqqhh5053
1 Lqs5054
1 Lr0^^k5055
1 Lserni5056
10 Lt-wiki-bot5057
1 Ltomic5058

5034 https://en.wikipedia.org/wiki/User:Lotu
5035 https://en.wikipedia.org/wiki/User:LouScheffer
5036 https://en.wikipedia.org/w/index.php%3ftitle=User:Loufranco&action=edit&redlink=1
5037 https://en.wikipedia.org/wiki/User:Louigi
5038 https://en.wikipedia.org/wiki/User:Louis_Kyu_Won_Ryu
5039 https://en.wikipedia.org/w/index.php%3ftitle=User:Louis_flood&action=edit&redlink=1
5040 https://en.wikipedia.org/w/index.php%3ftitle=User:LouisWins&action=edit&redlink=1
5041 https://en.wikipedia.org/w/index.php%3ftitle=User:Louispencer32&action=edit&redlink=1
5042 https://en.wikipedia.org/wiki/User:Louperibot
5043 https://en.wikipedia.org/wiki/User:Lourakis
5044 https://en.wikipedia.org/wiki/User:Lousyd
5045 https://en.wikipedia.org/w/index.php%3ftitle=User:Love_debian&action=edit&redlink=1
5046 https://en.wikipedia.org/w/index.php%3ftitle=User:LoveEncounterFlow&action=edit&redlink=1
5047 https://en.wikipedia.org/wiki/User:LoverushAM
5048 https://en.wikipedia.org/w/index.php%3ftitle=User:Low-frequency_internal&action=edit&redlink=1
5049 https://en.wikipedia.org/wiki/User:Lowellian
5050 https://en.wikipedia.org/wiki/User:Lowercase_Sigma
5051 https://en.wikipedia.org/wiki/User:Lp.vitor
5052 https://en.wikipedia.org/wiki/User:Lpgeffen
5053 https://en.wikipedia.org/wiki/User:Lqqhh
5054 https://en.wikipedia.org/wiki/User:Lqs
5055 https://en.wikipedia.org/wiki/User:Lr0%255E%255Ek
5056 https://en.wikipedia.org/w/index.php%3ftitle=User:Lserni&action=edit&redlink=1
5057 https://en.wikipedia.org/wiki/User:Lt-wiki-bot
5058 https://en.wikipedia.org/w/index.php%3ftitle=User:Ltomic&action=edit&redlink=1

1 LuK35059
4 Luc4~enwiki5060
1 LucasBrown5061
1 Luchostein5062
2 LucienBOT5063
1 Luckas Blade5064
125 Luckas-bot5065
1 Lucyollielove5066
1 Ludicfallacy5067
1 Ludwig Boltzmann5068
6 LudwikSzymonJaniuk5069
6 Lugia24535070
1 Lugnad5071
3 Luis Sanchez5072
1 Luisa Valencia5073
3 Luk5074
2 Lukas32265075
3 Luke Gustafson5076
1 Luke13375077
4 Luke18:2-85078
2 Lukecassidy85079
1 Lukeh15080
1 Lukejacksonbn5081
1 LuluKiffer5082
2 Luna Santin5083

5059 https://en.wikipedia.org/wiki/User:LuK3
5060 https://en.wikipedia.org/wiki/User:Luc4~enwiki
5061 https://en.wikipedia.org/wiki/User:LucasBrown
5062 https://en.wikipedia.org/w/index.php%3ftitle=User:Luchostein&action=edit&redlink=1
5063 https://en.wikipedia.org/wiki/User:LucienBOT
5064 https://en.wikipedia.org/wiki/User:Luckas_Blade
5065 https://en.wikipedia.org/wiki/User:Luckas-bot
5066 https://en.wikipedia.org/w/index.php%3ftitle=User:Lucyollielove&action=edit&redlink=1
5067 https://en.wikipedia.org/w/index.php%3ftitle=User:Ludicfallacy&action=edit&redlink=1
5068 https://en.wikipedia.org/wiki/User:Ludwig_Boltzmann
5069 https://en.wikipedia.org/wiki/User:LudwikSzymonJaniuk
5070 https://en.wikipedia.org/wiki/User:Lugia2453
5071 https://en.wikipedia.org/wiki/User:Lugnad
5072 https://en.wikipedia.org/wiki/User:Luis_Sanchez
5073 https://en.wikipedia.org/w/index.php%3ftitle=User:Luisa_Valencia&action=edit&redlink=1
5074 https://en.wikipedia.org/wiki/User:Luk
5075 https://en.wikipedia.org/wiki/User:Lukas3226
5076 https://en.wikipedia.org/wiki/User:Luke_Gustafson
5077 https://en.wikipedia.org/wiki/User:Luke1337
5078 https://en.wikipedia.org/wiki/User:Luke18:2-8
5079 https://en.wikipedia.org/w/index.php%3ftitle=User:Lukecassidy8&action=edit&redlink=1
5080 https://en.wikipedia.org/w/index.php%3ftitle=User:Lukeh1&action=edit&redlink=1
5081 https://en.wikipedia.org/w/index.php%3ftitle=User:Lukejacksonbn&action=edit&redlink=1
5082 https://en.wikipedia.org/w/index.php%3ftitle=User:LuluKiffer&action=edit&redlink=1
5083 https://en.wikipedia.org/wiki/User:Luna_Santin

1 Lunae5084
2 Lunakid5085
4 LunaticFringe5086
1 Lunch5087
2 LungZeno5088
1 Lupin5089
1 Lupin VII5090
7 Luqui5091
1 Luther935092
2 LutzL5093
1 Luv2run5094
1 Luxem5095
3 Luís Felipe Braga5096
1 Lvntcs5097
4 Lvsmart5098
9 Lwr3145099
1 Lycaon5100
2 Lycurgus5101
2 Lyn2406902345102
1 Lynx6075103
1 LynxTufts5104
1 Lynxoid845105
1 Lyondif025106
11 Lyonsam5107
2 Lyuflamb5108

5084 https://en.wikipedia.org/w/index.php%3ftitle=User:Lunae&action=edit&redlink=1
5085 https://en.wikipedia.org/wiki/User:Lunakid
5086 https://en.wikipedia.org/wiki/User:LunaticFringe
5087 https://en.wikipedia.org/wiki/User:Lunch
5088 https://en.wikipedia.org/wiki/User:LungZeno
5089 https://en.wikipedia.org/wiki/User:Lupin
5090 https://en.wikipedia.org/wiki/User:Lupin_VII
5091 https://en.wikipedia.org/wiki/User:Luqui
5092 https://en.wikipedia.org/w/index.php%3ftitle=User:Luther93&action=edit&redlink=1
5093 https://en.wikipedia.org/wiki/User:LutzL
5094 https://en.wikipedia.org/wiki/User:Luv2run
5095 https://en.wikipedia.org/w/index.php%3ftitle=User:Luxem&action=edit&redlink=1
5096 https://en.wikipedia.org/wiki/User:Lu%25C3%25ADs_Felipe_Braga
5097 https://en.wikipedia.org/w/index.php%3ftitle=User:Lvntcs&action=edit&redlink=1
5098 https://en.wikipedia.org/w/index.php%3ftitle=User:Lvsmart&action=edit&redlink=1
5099 https://en.wikipedia.org/w/index.php%3ftitle=User:Lwr314&action=edit&redlink=1
5100 https://en.wikipedia.org/wiki/User:Lycaon
5101 https://en.wikipedia.org/wiki/User:Lycurgus
5102 https://en.wikipedia.org/w/index.php%3ftitle=User:Lyn240690234&action=edit&redlink=1
5103 https://en.wikipedia.org/w/index.php%3ftitle=User:Lynx607&action=edit&redlink=1
5104 https://en.wikipedia.org/wiki/User:LynxTufts
5105 https://en.wikipedia.org/wiki/User:Lynxoid84
5106 https://en.wikipedia.org/wiki/User:Lyondif02
5107 https://en.wikipedia.org/wiki/User:Lyonsam
5108 https://en.wikipedia.org/wiki/User:Lyuflamb

1 Lzap5109
1 Lzur5110
1 M.Bitton5111
1 M.O.X5112
1 M.aznaveh5113
1 M1ss1ontomars2k45114
1 M2millenium5115
2 M412k5116
1 M7595117
2 M7bot5118
1 MATThematical5119
2 MBlaze Lightning5120
12 MC105121
54 MCiura5122
1 MEOGLOBAL5123
6 MER-C5124
4 MFH5125
2 MForster5126
4 MH~enwiki5127
7 MIT Trekkie5128
3 MITTER985129
2 MLIS Student5130
4 MMarcuzzo5131
1 MONGO5132
1 MRD20145133

5109 https://en.wikipedia.org/wiki/User:Lzap
5110 https://en.wikipedia.org/wiki/User:Lzur
5111 https://en.wikipedia.org/wiki/User:M.Bitton
5112 https://en.wikipedia.org/wiki/User:M.O.X
5113 https://en.wikipedia.org/w/index.php%3ftitle=User:M.aznaveh&action=edit&redlink=1
5114 https://en.wikipedia.org/wiki/User:M1ss1ontomars2k4
5115 https://en.wikipedia.org/w/index.php%3ftitle=User:M2millenium&action=edit&redlink=1
5116 https://en.wikipedia.org/wiki/User:M412k
5117 https://en.wikipedia.org/w/index.php%3ftitle=User:M759&action=edit&redlink=1
5118 https://en.wikipedia.org/wiki/User:M7bot
5119 https://en.wikipedia.org/wiki/User:MATThematical
5120 https://en.wikipedia.org/wiki/User:MBlaze_Lightning
5121 https://en.wikipedia.org/wiki/User:MC10
5122 https://en.wikipedia.org/wiki/User:MCiura
5123 https://en.wikipedia.org/wiki/User:MEOGLOBAL
5124 https://en.wikipedia.org/wiki/User:MER-C
5125 https://en.wikipedia.org/wiki/User:MFH
5126 https://en.wikipedia.org/w/index.php%3ftitle=User:MForster&action=edit&redlink=1
5127 https://en.wikipedia.org/wiki/User:MH~enwiki
5128 https://en.wikipedia.org/wiki/User:MIT_Trekkie
5129 https://en.wikipedia.org/w/index.php%3ftitle=User:MITTER98&action=edit&redlink=1
5130 https://en.wikipedia.org/w/index.php%3ftitle=User:MLIS_Student&action=edit&redlink=1
5131 https://en.wikipedia.org/w/index.php%3ftitle=User:MMarcuzzo&action=edit&redlink=1
5132 https://en.wikipedia.org/wiki/User:MONGO
5133 https://en.wikipedia.org/wiki/User:MRD2014

1 MRFraga5134
1 MRqtH25135
7 MSBOT5136
11 MSGJ5137
4 MSJapan5138
2 MSheshera5139
1 MTA~enwiki5140
5 MTSbot~enwiki5141
1 MTwTm5142
1 MZMcBride5143
1 Ma8thew5144
1 Mabdul5145
1 MacShrike5146
1 Macarion5147
1 Maciek.nowakowski5148
2 Maco14215149
1 Macofe5150
46 Macrakis5151
1 MacsBug5152
1 Macy5153
1 Mad Jaqk5154
1 Mad7285155
1 MadScientistVX5156
1 Madanpiyush5157
1 Madcoverboy5158

5134 https://en.wikipedia.org/wiki/User:MRFraga
5135 https://en.wikipedia.org/wiki/User:MRqtH2
5136 https://en.wikipedia.org/wiki/User:MSBOT
5137 https://en.wikipedia.org/wiki/User:MSGJ
5138 https://en.wikipedia.org/wiki/User:MSJapan
5139 https://en.wikipedia.org/wiki/User:MSheshera
5140 https://en.wikipedia.org/wiki/User:MTA~enwiki
5141 https://en.wikipedia.org/wiki/User:MTSbot~enwiki
5142 https://en.wikipedia.org/w/index.php%3ftitle=User:MTwTm&action=edit&redlink=1
5143 https://en.wikipedia.org/wiki/User:MZMcBride
5144 https://en.wikipedia.org/wiki/User:Ma8thew
5145 https://en.wikipedia.org/wiki/User:Mabdul
5146 https://en.wikipedia.org/w/index.php%3ftitle=User:MacShrike&action=edit&redlink=1
5147 https://en.wikipedia.org/wiki/User:Macarion
5148 https://en.wikipedia.org/w/index.php%3ftitle=User:Maciek.nowakowski&action=edit&redlink=1
5149 https://en.wikipedia.org/w/index.php%3ftitle=User:Maco1421&action=edit&redlink=1
5150 https://en.wikipedia.org/w/index.php%3ftitle=User:Macofe&action=edit&redlink=1
5151 https://en.wikipedia.org/wiki/User:Macrakis
5152 https://en.wikipedia.org/w/index.php%3ftitle=User:MacsBug&action=edit&redlink=1
5153 https://en.wikipedia.org/wiki/User:Macy
5154 https://en.wikipedia.org/wiki/User:Mad_Jaqk
5155 https://en.wikipedia.org/w/index.php%3ftitle=User:Mad728&action=edit&redlink=1
5156 https://en.wikipedia.org/wiki/User:MadScientistVX
5157 https://en.wikipedia.org/wiki/User:Madanpiyush
5158 https://en.wikipedia.org/wiki/User:Madcoverboy

1 Maddendalybrokaw5159
2 Maddyabr5160
2 MadeYourReadThis5161
1 Madewokherd5162
1 Madhan virgo5163
1 Madmardigan535164
1 Madnag4u5165
1 Maelin5166
2 Maeln5167
1 Maformatiker5168
1 Magasjukur5169
1 Maged9185170
65 Maggyero5171
5 Maghnus5172
15 Magic links bot5173
1 MagicMatt10215174
1 Magicalbendini5175
2 Magicbronson5176
56 Magioladitis5177
5 Magister Mathematicae5178
2 Magmi5179
1 Magnus Bakken5180
1 MagnusA.Bot5181
3 Magriteappleface5182
1 Magyar255183

5159 https://en.wikipedia.org/w/index.php%3ftitle=User:Maddendalybrokaw&action=edit&redlink=1
5160 https://en.wikipedia.org/w/index.php%3ftitle=User:Maddyabr&action=edit&redlink=1
5161 https://en.wikipedia.org/wiki/User:MadeYourReadThis
5162 https://en.wikipedia.org/w/index.php%3ftitle=User:Madewokherd&action=edit&redlink=1
5163 https://en.wikipedia.org/w/index.php%3ftitle=User:Madhan_virgo&action=edit&redlink=1
5164 https://en.wikipedia.org/wiki/User:Madmardigan53
5165 https://en.wikipedia.org/w/index.php%3ftitle=User:Madnag4u&action=edit&redlink=1
5166 https://en.wikipedia.org/wiki/User:Maelin
5167 https://en.wikipedia.org/w/index.php%3ftitle=User:Maeln&action=edit&redlink=1
5168 https://en.wikipedia.org/wiki/User:Maformatiker
5169 https://en.wikipedia.org/wiki/User:Magasjukur
5170 https://en.wikipedia.org/w/index.php%3ftitle=User:Maged918&action=edit&redlink=1
5171 https://en.wikipedia.org/w/index.php%3ftitle=User:Maggyero&action=edit&redlink=1
5172 https://en.wikipedia.org/wiki/User:Maghnus
5173 https://en.wikipedia.org/wiki/User:Magic_links_bot
5174 https://en.wikipedia.org/w/index.php%3ftitle=User:MagicMatt1021&action=edit&redlink=1
5175 https://en.wikipedia.org/wiki/User:Magicalbendini
5176 https://en.wikipedia.org/w/index.php%3ftitle=User:Magicbronson&action=edit&redlink=1
5177 https://en.wikipedia.org/wiki/User:Magioladitis
5178 https://en.wikipedia.org/wiki/User:Magister_Mathematicae
5179 https://en.wikipedia.org/w/index.php%3ftitle=User:Magmi&action=edit&redlink=1
5180 https://en.wikipedia.org/wiki/User:Magnus_Bakken
5181 https://en.wikipedia.org/wiki/User:MagnusA.Bot
5182 https://en.wikipedia.org/w/index.php%3ftitle=User:Magriteappleface&action=edit&redlink=1
5183 https://en.wikipedia.org/w/index.php%3ftitle=User:Magyar25&action=edit&redlink=1

2 Mahagaja5184
1 Mahahahaneapneap5185
1 Mahak library5186
7 Mahanga5187
3 Mahanthi15188
1 Mahlerite5189
1 Mahlon5190
2 Mahue5191
7 Maid in wales5192
1 MainFrame5193
1 Maitchy5194
6 Maju wiki5195
2 Makaimc5196
2 Makecat-bot5197
2 Makeemlighter5198
4 Makeswell5199
1 Maklemak5200
8 Maksim-e~enwiki5201
1 Makyen5202
3 MalachiPhillips5203
6 MalafayaBot5204
5 Malbrain5205
3 Malcohol5206
2 Malcolm5207
1 Malcolm Farmer5208

5184 https://en.wikipedia.org/wiki/User:Mahagaja
5185 https://en.wikipedia.org/wiki/User:Mahahahaneapneap
5186 https://en.wikipedia.org/wiki/User:Mahak_library
5187 https://en.wikipedia.org/wiki/User:Mahanga
5188 https://en.wikipedia.org/w/index.php%3ftitle=User:Mahanthi1&action=edit&redlink=1
5189 https://en.wikipedia.org/wiki/User:Mahlerite
5190 https://en.wikipedia.org/w/index.php%3ftitle=User:Mahlon&action=edit&redlink=1
5191 https://en.wikipedia.org/w/index.php%3ftitle=User:Mahue&action=edit&redlink=1
5192 https://en.wikipedia.org/w/index.php%3ftitle=User:Maid_in_wales&action=edit&redlink=1
5193 https://en.wikipedia.org/wiki/User:MainFrame
5194 https://en.wikipedia.org/wiki/User:Maitchy
5195 https://en.wikipedia.org/w/index.php%3ftitle=User:Maju_wiki&action=edit&redlink=1
5196 https://en.wikipedia.org/w/index.php%3ftitle=User:Makaimc&action=edit&redlink=1
5197 https://en.wikipedia.org/wiki/User:Makecat-bot
5198 https://en.wikipedia.org/wiki/User:Makeemlighter
5199 https://en.wikipedia.org/wiki/User:Makeswell
5200 https://en.wikipedia.org/w/index.php%3ftitle=User:Maklemak&action=edit&redlink=1
5201 https://en.wikipedia.org/wiki/User:Maksim-e~enwiki
5202 https://en.wikipedia.org/wiki/User:Makyen
5203 https://en.wikipedia.org/w/index.php%3ftitle=User:MalachiPhillips&action=edit&redlink=1
5204 https://en.wikipedia.org/wiki/User:MalafayaBot
5205 https://en.wikipedia.org/wiki/User:Malbrain
5206 https://en.wikipedia.org/wiki/User:Malcohol
5207 https://en.wikipedia.org/w/index.php%3ftitle=User:Malcolm&action=edit&redlink=1
5208 https://en.wikipedia.org/wiki/User:Malcolm_Farmer

2 Malcolma5209
1 Male19795210
11 Malerooster5211
1 Malhonen5212
1 Malinion5213
2 Malsaqer5214
1 Malyctenar5215
4 Mameisam5216
1 Mamling5217
7 Mandarax5218
5 Mandeepsandhu5219
2 Mandelbrotdescent5220
1 Mandyhan5221
1 Maneeshsinha5222
6 Mangarah5223
5 Mange015224
3 Mangojuice5225
2 Mani15226
7 Manik7620075227
2 Manish tezu5228
4 Manish1811925229
3 Manjuadoor5230
2 Manmanmamamamama5231
1 Mannerheimcross5232
3 Manoguru5233

5209 https://en.wikipedia.org/wiki/User:Malcolma
5210 https://en.wikipedia.org/wiki/User:Male1979
5211 https://en.wikipedia.org/wiki/User:Malerooster
5212 https://en.wikipedia.org/wiki/User:Malhonen
5213 https://en.wikipedia.org/wiki/User:Malinion
5214 https://en.wikipedia.org/w/index.php%3ftitle=User:Malsaqer&action=edit&redlink=1
5215 https://en.wikipedia.org/wiki/User:Malyctenar
5216 https://en.wikipedia.org/wiki/User:Mameisam
5217 https://en.wikipedia.org/wiki/User:Mamling
5218 https://en.wikipedia.org/wiki/User:Mandarax
5219 https://en.wikipedia.org/w/index.php%3ftitle=User:Mandeepsandhu&action=edit&redlink=1
5220 https://en.wikipedia.org/w/index.php%3ftitle=User:Mandelbrotdescent&action=edit&redlink=1
5221 https://en.wikipedia.org/w/index.php%3ftitle=User:Mandyhan&action=edit&redlink=1
5222 https://en.wikipedia.org/w/index.php%3ftitle=User:Maneeshsinha&action=edit&redlink=1
5223 https://en.wikipedia.org/wiki/User:Mangarah
5224 https://en.wikipedia.org/wiki/User:Mange01
5225 https://en.wikipedia.org/wiki/User:Mangojuice
5226 https://en.wikipedia.org/wiki/User:Mani1
5227 https://en.wikipedia.org/w/index.php%3ftitle=User:Manik762007&action=edit&redlink=1
5228 https://en.wikipedia.org/w/index.php%3ftitle=User:Manish_tezu&action=edit&redlink=1
5229 https://en.wikipedia.org/w/index.php%3ftitle=User:Manish181192&action=edit&redlink=1
5230 https://en.wikipedia.org/w/index.php%3ftitle=User:Manjuadoor&action=edit&redlink=1
5231 https://en.wikipedia.org/w/index.php%3ftitle=User:Manmanmamamamama&action=edit&redlink=1
5232 https://en.wikipedia.org/wiki/User:Mannerheimcross
5233 https://en.wikipedia.org/wiki/User:Manoguru

1 Manoj kumar regar5234
4 Manstie5235
2 Mansur565236
5 Mantipula5237
1 Manuae5238
2 Manuel Anastácio5239
1 Manuel.mas125240
2 Manul5241
1 Manuscript5242
1 Manushand5243
1 Manway5244
1 Maowtm5245
2 Maplesoon5246
5 Maproom5247
1 Mapsax5248
6 MarSch5249
2 MarSukiasyan5250
8 Marble machine5251
1 Marc Venot5252
16 Marc van Leeuwen5253
1 MarcelB6125254
1 Marcelkcs5255
1 Marcgal5256
1 Marcin Suwalczan5257
1 Marcin.olek5258

5234 https://en.wikipedia.org/w/index.php%3ftitle=User:Manoj_kumar_regar&action=edit&redlink=1
5235 https://en.wikipedia.org/w/index.php%3ftitle=User:Manstie&action=edit&redlink=1
5236 https://en.wikipedia.org/w/index.php%3ftitle=User:Mansur56&action=edit&redlink=1
5237 https://en.wikipedia.org/wiki/User:Mantipula
5238 https://en.wikipedia.org/wiki/User:Manuae
5239 https://en.wikipedia.org/wiki/User:Manuel_Anast%25C3%25A1cio
5240 https://en.wikipedia.org/wiki/User:Manuel.mas12
5241 https://en.wikipedia.org/wiki/User:Manul
5242 https://en.wikipedia.org/w/index.php%3ftitle=User:Manuscript&action=edit&redlink=1
5243 https://en.wikipedia.org/wiki/User:Manushand
5244 https://en.wikipedia.org/wiki/User:Manway
5245 https://en.wikipedia.org/wiki/User:Maowtm
5246 https://en.wikipedia.org/w/index.php%3ftitle=User:Maplesoon&action=edit&redlink=1
5247 https://en.wikipedia.org/wiki/User:Maproom
5248 https://en.wikipedia.org/wiki/User:Mapsax
5249 https://en.wikipedia.org/wiki/User:MarSch
5250 https://en.wikipedia.org/wiki/User:MarSukiasyan
5251 https://en.wikipedia.org/wiki/User:Marble_machine
5252 https://en.wikipedia.org/wiki/User:Marc_Venot
5253 https://en.wikipedia.org/wiki/User:Marc_van_Leeuwen
5254 https://en.wikipedia.org/wiki/User:MarcelB612
5255 https://en.wikipedia.org/w/index.php%3ftitle=User:Marcelkcs&action=edit&redlink=1
5256 https://en.wikipedia.org/w/index.php%3ftitle=User:Marcgal&action=edit&redlink=1
5257 https://en.wikipedia.org/wiki/User:Marcin_Suwalczan
5258 https://en.wikipedia.org/w/index.php%3ftitle=User:Marcin.olek&action=edit&redlink=1

2 Marcisoo5259
1 Marco Polo5260
4 Marcoavjr5261
4 Marcocapelle5262
2 Marek695263
2 Marekpetrik5264
3 MarginalCost5265
3 Margosbot~enwiki5266
2 Maria Johnson christ5267
2 Mariam Asatryan5268
1 Marianna2515269
6 Marianocecowski5270
2 Marie Poise5271
1 Mario777Zelda5272
9 MarioS5273
1 Mariolj5274
10 Mariosal5275
1 Marj Tiefert5276
1 Mark Foskey5277
1 Mark L MacDonald5278
1 Mark Lentczner5279
1 Mark Martinec5280
10 Mark Renier5281
2 Mark Schröder5282
2 Mark T5283

5259 https://en.wikipedia.org/wiki/User:Marcisoo
5260 https://en.wikipedia.org/wiki/User:Marco_Polo
5261 https://en.wikipedia.org/w/index.php%3ftitle=User:Marcoavjr&action=edit&redlink=1
5262 https://en.wikipedia.org/wiki/User:Marcocapelle
5263 https://en.wikipedia.org/wiki/User:Marek69
5264 https://en.wikipedia.org/wiki/User:Marekpetrik
5265 https://en.wikipedia.org/w/index.php%3ftitle=User:MarginalCost&action=edit&redlink=1
5266 https://en.wikipedia.org/wiki/User:Margosbot~enwiki
5267 https://en.wikipedia.org/w/index.php%3ftitle=User:Maria_Johnson_christ&action=edit&redlink=1
5268 https://en.wikipedia.org/wiki/User:Mariam_Asatryan
5269 https://en.wikipedia.org/wiki/User:Marianna251
5270 https://en.wikipedia.org/wiki/User:Marianocecowski
5271 https://en.wikipedia.org/wiki/User:Marie_Poise
5272 https://en.wikipedia.org/wiki/User:Mario777Zelda
5273 https://en.wikipedia.org/wiki/User:MarioS
5274 https://en.wikipedia.org/w/index.php%3ftitle=User:Mariolj&action=edit&redlink=1
5275 https://en.wikipedia.org/wiki/User:Mariosal
5276 https://en.wikipedia.org/wiki/User:Marj_Tiefert
5277 https://en.wikipedia.org/w/index.php%3ftitle=User:Mark_Foskey&action=edit&redlink=1
5278 https://en.wikipedia.org/wiki/User:Mark_L_MacDonald
5279 https://en.wikipedia.org/wiki/User:Mark_Lentczner
5280 https://en.wikipedia.org/w/index.php%3ftitle=User:Mark_Martinec&action=edit&redlink=1
5281 https://en.wikipedia.org/wiki/User:Mark_Renier
5282 https://en.wikipedia.org/w/index.php%3ftitle=User:Mark_Schr%25C3%25B6der&action=edit&redlink=1
5283 https://en.wikipedia.org/wiki/User:Mark_T

1 Mark cummins5284
29 Mark viking5285
1 Mark14215286
1 Mark222075287
1 MarkGyver5288
1 MarkH215289
2 MarkOlah5290
17 MarkSweep5291
1 MarkWegman5292
8 Marked5293
3 MarketDesigner5294
1 Markgforbes5295
1 MarkisLandis5296
2 Markjin19905297
1 Markk4745298
1 Marknau5299
2 Markoid5300
1 Markov Odometer5301
1 Markskil5302
4 Markulf5303
1 Markus Krötzsch5304
1 Markus Kuhn5305
4 Markvs885306
1 Marlon'n'marion5307
8 Marnie Hawes5308

5284 https://en.wikipedia.org/w/index.php%3ftitle=User:Mark_cummins&action=edit&redlink=1
5285 https://en.wikipedia.org/wiki/User:Mark_viking
5286 https://en.wikipedia.org/w/index.php%3ftitle=User:Mark1421&action=edit&redlink=1
5287 https://en.wikipedia.org/w/index.php%3ftitle=User:Mark22207&action=edit&redlink=1
5288 https://en.wikipedia.org/wiki/User:MarkGyver
5289 https://en.wikipedia.org/wiki/User:MarkH21
5290 https://en.wikipedia.org/w/index.php%3ftitle=User:MarkOlah&action=edit&redlink=1
5291 https://en.wikipedia.org/wiki/User:MarkSweep
5292 https://en.wikipedia.org/w/index.php%3ftitle=User:MarkWegman&action=edit&redlink=1
5293 https://en.wikipedia.org/wiki/User:Marked
5294 https://en.wikipedia.org/w/index.php%3ftitle=User:MarketDesigner&action=edit&redlink=1
5295 https://en.wikipedia.org/w/index.php%3ftitle=User:Markgforbes&action=edit&redlink=1
5296 https://en.wikipedia.org/w/index.php%3ftitle=User:MarkisLandis&action=edit&redlink=1
5297 https://en.wikipedia.org/w/index.php%3ftitle=User:Markjin1990&action=edit&redlink=1
5298 https://en.wikipedia.org/w/index.php%3ftitle=User:Markk474&action=edit&redlink=1
5299 https://en.wikipedia.org/wiki/User:Marknau
5300 https://en.wikipedia.org/wiki/User:Markoid
5301 https://en.wikipedia.org/wiki/User:Markov_Odometer
5302 https://en.wikipedia.org/w/index.php%3ftitle=User:Markskil&action=edit&redlink=1
5303 https://en.wikipedia.org/w/index.php%3ftitle=User:Markulf&action=edit&redlink=1
5304 https://en.wikipedia.org/wiki/User:Markus_Kr%25C3%25B6tzsch
5305 https://en.wikipedia.org/wiki/User:Markus_Kuhn
5306 https://en.wikipedia.org/wiki/User:Markvs88
5307 https://en.wikipedia.org/w/index.php%3ftitle=User:Marlon%2527n%2527marion&action=edit&redlink=1
5308 https://en.wikipedia.org/wiki/User:Marnie_Hawes

1 Maroicj895309
1 Marokwitz5310
1 Marozols5311
1 MarshBot5312
1 Marshmeli5313
2 Mart0715314
1 Martani5315
2 Martarius5316
1 Martin Bjeldbak5317
1 Martin Bravenboer5318
1 Martin Urbanec5319
11 MartinBot5320
3 MartinHarper5321
2 MartinMusatov5322
2 Martinalex0005323
4 Martinkunev5324
1 Martious5325
4 Martnym5326
5 Martynas Patasius5327
1 Martyr25665328
4 Marvel2015329
2 Marvellous Spider-Man5330
1 MarvinCZ5331
2 Marvon7Newby5332
2 MarvonNewby5333

5309 https://en.wikipedia.org/w/index.php%3ftitle=User:Maroicj89&action=edit&redlink=1
5310 https://en.wikipedia.org/wiki/User:Marokwitz
5311 https://en.wikipedia.org/wiki/User:Marozols
5312 https://en.wikipedia.org/wiki/User:MarshBot
5313 https://en.wikipedia.org/w/index.php%3ftitle=User:Marshmeli&action=edit&redlink=1
5314 https://en.wikipedia.org/wiki/User:Mart071
5315 https://en.wikipedia.org/wiki/User:Martani
5316 https://en.wikipedia.org/wiki/User:Martarius
5317 https://en.wikipedia.org/wiki/User:Martin_Bjeldbak
5318 https://en.wikipedia.org/wiki/User:Martin_Bravenboer
5319 https://en.wikipedia.org/wiki/User:Martin_Urbanec
5320 https://en.wikipedia.org/wiki/User:MartinBot
5321 https://en.wikipedia.org/wiki/User:MartinHarper
5322 https://en.wikipedia.org/wiki/User:MartinMusatov
5323 https://en.wikipedia.org/w/index.php%3ftitle=User:Martinalex000&action=edit&redlink=1
5324 https://en.wikipedia.org/wiki/User:Martinkunev
5325 https://en.wikipedia.org/w/index.php%3ftitle=User:Martious&action=edit&redlink=1
5326 https://en.wikipedia.org/w/index.php%3ftitle=User:Martnym&action=edit&redlink=1
5327 https://en.wikipedia.org/wiki/User:Martynas_Patasius
5328 https://en.wikipedia.org/wiki/User:Martyr2566
5329 https://en.wikipedia.org/w/index.php%3ftitle=User:Marvel201&action=edit&redlink=1
5330 https://en.wikipedia.org/wiki/User:Marvellous_Spider-Man
5331 https://en.wikipedia.org/wiki/User:MarvinCZ
5332 https://en.wikipedia.org/w/index.php%3ftitle=User:Marvon7Newby&action=edit&redlink=1
5333 https://en.wikipedia.org/w/index.php%3ftitle=User:MarvonNewby&action=edit&redlink=1

1 Mas.morozov5334
1 Masamage5335
1 Masao5336
6 Maschelos5337
1 Mashiah Davidson5338
1 Masnevets5339
2 MassGalactusUniversum5340
1 MassimoLauria5341
2 Massood.khaari5342
3 Masssly5343
1 Mastarh5344
1 Master Lenman5345
1 MasterRadius5346
2 Mastergreg825347
6 MastiBot5348
1 Masumrezarock1005349
2 Mat-C5350
1 Matecsaj5351
1 Matekm5352
86 Materialscientist5353
2 Matgrioni5354
47 MathMartin5355
1 MathStuf5356
30 Mathbot5357
1 Mathel5358

5334 https://en.wikipedia.org/w/index.php%3ftitle=User:Mas.morozov&action=edit&redlink=1
5335 https://en.wikipedia.org/wiki/User:Masamage
5336 https://en.wikipedia.org/wiki/User:Masao
5337 https://en.wikipedia.org/w/index.php%3ftitle=User:Maschelos&action=edit&redlink=1
5338 https://en.wikipedia.org/wiki/User:Mashiah_Davidson
5339 https://en.wikipedia.org/wiki/User:Masnevets
5340 https://en.wikipedia.org/wiki/User:MassGalactusUniversum
5341 https://en.wikipedia.org/wiki/User:MassimoLauria
5342 https://en.wikipedia.org/wiki/User:Massood.khaari
5343 https://en.wikipedia.org/wiki/User:Masssly
5344 https://en.wikipedia.org/w/index.php%3ftitle=User:Mastarh&action=edit&redlink=1
5345 https://en.wikipedia.org/w/index.php%3ftitle=User:Master_Lenman&action=edit&redlink=1
5346 https://en.wikipedia.org/wiki/User:MasterRadius
5347 https://en.wikipedia.org/w/index.php%3ftitle=User:Mastergreg82&action=edit&redlink=1
5348 https://en.wikipedia.org/wiki/User:MastiBot
5349 https://en.wikipedia.org/wiki/User:Masumrezarock100
5350 https://en.wikipedia.org/wiki/User:Mat-C
5351 https://en.wikipedia.org/w/index.php%3ftitle=User:Matecsaj&action=edit&redlink=1
5352 https://en.wikipedia.org/wiki/User:Matekm
5353 https://en.wikipedia.org/wiki/User:Materialscientist
5354 https://en.wikipedia.org/wiki/User:Matgrioni
5355 https://en.wikipedia.org/wiki/User:MathMartin
5356 https://en.wikipedia.org/wiki/User:MathStuf
5357 https://en.wikipedia.org/wiki/User:Mathbot
5358 https://en.wikipedia.org/w/index.php%3ftitle=User:Mathel&action=edit&redlink=1

1 Matheus Faria5359
1 Mathgorges5360
1 Mathias-S5361
6 Mathiastck5362
1 Mathieu ottawa5363
4 Mathmensch5364
1 Mathmike5365
2 Mathnerd3141595366
3 Mathrick5367
1 MathsIsFun5368
1 Matiasholte5369
2 Matiasmoreno5370
1 Matju25371
3 MatrixFrog5372
1 Matrixmike25373
7 Matroids5374
1 Mats Kindahl5375
3 Mats.sxz5376
20 Matt Crypto5377
2 Matt Gies5378
1 Matt Heard5379
1 Matt Kwan5380
1 Matt.smart5381
38 MattGiuca5382
2 MattIPv45383

5359 https://en.wikipedia.org/wiki/User:Matheus_Faria
5360 https://en.wikipedia.org/w/index.php%3ftitle=User:Mathgorges&action=edit&redlink=1
5361 https://en.wikipedia.org/wiki/User:Mathias-S
5362 https://en.wikipedia.org/wiki/User:Mathiastck
5363 https://en.wikipedia.org/wiki/User:Mathieu_ottawa
5364 https://en.wikipedia.org/wiki/User:Mathmensch
5365 https://en.wikipedia.org/w/index.php%3ftitle=User:Mathmike&action=edit&redlink=1
5366 https://en.wikipedia.org/wiki/User:Mathnerd314159
5367 https://en.wikipedia.org/wiki/User:Mathrick
5368 https://en.wikipedia.org/wiki/User:MathsIsFun
5369 https://en.wikipedia.org/wiki/User:Matiasholte
5370 https://en.wikipedia.org/w/index.php%3ftitle=User:Matiasmoreno&action=edit&redlink=1
5371 https://en.wikipedia.org/wiki/User:Matju2
5372 https://en.wikipedia.org/wiki/User:MatrixFrog
5373 https://en.wikipedia.org/wiki/User:Matrixmike2
5374 https://en.wikipedia.org/w/index.php%3ftitle=User:Matroids&action=edit&redlink=1
5375 https://en.wikipedia.org/wiki/User:Mats_Kindahl
5376 https://en.wikipedia.org/w/index.php%3ftitle=User:Mats.sxz&action=edit&redlink=1
5377 https://en.wikipedia.org/wiki/User:Matt_Crypto
5378 https://en.wikipedia.org/wiki/User:Matt_Gies
5379 https://en.wikipedia.org/wiki/User:Matt_Heard
5380 https://en.wikipedia.org/wiki/User:Matt_Kwan
5381 https://en.wikipedia.org/wiki/User:Matt.smart
5382 https://en.wikipedia.org/wiki/User:MattGiuca
5383 https://en.wikipedia.org/wiki/User:MattIPv4

3 MattWade5384
3 Mattbuck5385
4 Mattesnider5386
2 Mattflaschen5387
2 Matthew Woodcraft5388
1 Matthew Yeager5389
3 Matthew rutland5390
1 Matthew00285391
1 MatthewAwesome5392
1 MatthewBauer5393
1 MatthewBuchwalder5394
2 MatthewH5395
6 Matthiaspaul5396
2 Matthieu Vergne5397
1 Matthijskooijman5398
4 Mattjohnson5399
1 Mattplumlee5400
1 Matttoothman5401
4 Matusz5402
8 Matěj Grabovský5403
5 Mauls5404
5 Maurice Carbonaro5405
2 Maurobio5406
9 Mav5407
3 Mavis Damea5408

5384 https://en.wikipedia.org/wiki/User:MattWade
5385 https://en.wikipedia.org/wiki/User:Mattbuck
5386 https://en.wikipedia.org/w/index.php%3ftitle=User:Mattesnider&action=edit&redlink=1
5387 https://en.wikipedia.org/wiki/User:Mattflaschen
5388 https://en.wikipedia.org/wiki/User:Matthew_Woodcraft
5389 https://en.wikipedia.org/wiki/User:Matthew_Yeager
5390 https://en.wikipedia.org/w/index.php%3ftitle=User:Matthew_rutland&action=edit&redlink=1
5391 https://en.wikipedia.org/wiki/User:Matthew0028
5392 https://en.wikipedia.org/wiki/User:MatthewAwesome
5393 https://en.wikipedia.org/w/index.php%3ftitle=User:MatthewBauer&action=edit&redlink=1
5394 https://en.wikipedia.org/wiki/User:MatthewBuchwalder
5395 https://en.wikipedia.org/wiki/User:MatthewH
5396 https://en.wikipedia.org/wiki/User:Matthiaspaul
5397 https://en.wikipedia.org/w/index.php%3ftitle=User:Matthieu_Vergne&action=edit&redlink=1
5398 https://en.wikipedia.org/w/index.php%3ftitle=User:Matthijskooijman&action=edit&redlink=1
5399 https://en.wikipedia.org/wiki/User:Mattjohnson
5400 https://en.wikipedia.org/w/index.php%3ftitle=User:Mattplumlee&action=edit&redlink=1
5401 https://en.wikipedia.org/wiki/User:Matttoothman
5402 https://en.wikipedia.org/wiki/User:Matusz
5403 https://en.wikipedia.org/wiki/User:Mat%25C4%259Bj_Grabovsk%25C3%25BD
5404 https://en.wikipedia.org/wiki/User:Mauls
5405 https://en.wikipedia.org/wiki/User:Maurice_Carbonaro
5406 https://en.wikipedia.org/w/index.php%3ftitle=User:Maurobio&action=edit&redlink=1
5407 https://en.wikipedia.org/wiki/User:Mav
5408 https://en.wikipedia.org/w/index.php%3ftitle=User:Mavis_Damea&action=edit&redlink=1

1 Max Libbrecht5409
16 Max Longint5410
1 Max.goedjen5411
1 MaxEnt5412
3 MaxRadin5413
59 Maxal5414
1 Maxalister5415
1 Maxdan945416
1 Maxim5417
1 Maxim Masiutin5418
1 Maximalgo5419
2 Maximaximax5420
1 Maxime.Debosschere5421
1 Maximin~enwiki5422
5 Maximus Rex5423
1 Maxironpayne5424
1 Maxsharples5425
3 Maxwell bernard5426
1 Mayazcherquoi5427
1 Mayfanning75428
1 Mayooranathan5429
1 Mayrel5430
1 Maysak5431
1 Mazi5432
1 Mazin075433

5409 https://en.wikipedia.org/w/index.php%3ftitle=User:Max_Libbrecht&action=edit&redlink=1
5410 https://en.wikipedia.org/wiki/User:Max_Longint
5411 https://en.wikipedia.org/w/index.php%3ftitle=User:Max.goedjen&action=edit&redlink=1
5412 https://en.wikipedia.org/wiki/User:MaxEnt
5413 https://en.wikipedia.org/w/index.php%3ftitle=User:MaxRadin&action=edit&redlink=1
5414 https://en.wikipedia.org/w/index.php%3ftitle=User:Maxal&action=edit&redlink=1
5415 https://en.wikipedia.org/w/index.php%3ftitle=User:Maxalister&action=edit&redlink=1
5416 https://en.wikipedia.org/w/index.php%3ftitle=User:Maxdan94&action=edit&redlink=1
5417 https://en.wikipedia.org/wiki/User:Maxim
5418 https://en.wikipedia.org/wiki/User:Maxim_Masiutin
5419 https://en.wikipedia.org/w/index.php%3ftitle=User:Maximalgo&action=edit&redlink=1
5420 https://en.wikipedia.org/wiki/User:Maximaximax
5421 https://en.wikipedia.org/wiki/User:Maxime.Debosschere
5422 https://en.wikipedia.org/w/index.php%3ftitle=User:Maximin~enwiki&action=edit&redlink=1
5423 https://en.wikipedia.org/wiki/User:Maximus_Rex
5424 https://en.wikipedia.org/w/index.php%3ftitle=User:Maxironpayne&action=edit&redlink=1
5425 https://en.wikipedia.org/w/index.php%3ftitle=User:Maxsharples&action=edit&redlink=1
5426 https://en.wikipedia.org/wiki/User:Maxwell_bernard
5427 https://en.wikipedia.org/w/index.php%3ftitle=User:Mayazcherquoi&action=edit&redlink=1
5428 https://en.wikipedia.org/wiki/User:Mayfanning7
5429 https://en.wikipedia.org/wiki/User:Mayooranathan
5430 https://en.wikipedia.org/wiki/User:Mayrel
5431 https://en.wikipedia.org/w/index.php%3ftitle=User:Maysak&action=edit&redlink=1
5432 https://en.wikipedia.org/wiki/User:Mazi
5433 https://en.wikipedia.org/wiki/User:Mazin07

1 Mb10005434
1 Mbarrenecheajr5435
1 Mbell5436
2 Mbennett5555437
3 Mbernard7075438
6 Mblumber5439
2 Mboverload5440
4 Mbtnt5441
1 Mbutts5442
1 Mc mosa5443
1 McGucket5444
1 McIntosh Natura5445
89 McKay5446
1 Mcarling5447
1 Mcb20015448
2 Mccapra5449
2 Mccraig5450
3 Mcculley5451
3 Mcdonaldjosh75452
2 Mcichelli5453
2 Mciura5454
1 MclareN2125455
44 Mcld5456
1 Mcom3205457
5 McoreD5458

5434 https://en.wikipedia.org/wiki/User:Mb1000
5435 https://en.wikipedia.org/w/index.php%3ftitle=User:Mbarrenecheajr&action=edit&redlink=1
5436 https://en.wikipedia.org/wiki/User:Mbell
5437 https://en.wikipedia.org/w/index.php%3ftitle=User:Mbennett555&action=edit&redlink=1
5438 https://en.wikipedia.org/w/index.php%3ftitle=User:Mbernard707&action=edit&redlink=1
5439 https://en.wikipedia.org/wiki/User:Mblumber
5440 https://en.wikipedia.org/wiki/User:Mboverload
5441 https://en.wikipedia.org/w/index.php%3ftitle=User:Mbtnt&action=edit&redlink=1
5442 https://en.wikipedia.org/w/index.php%3ftitle=User:Mbutts&action=edit&redlink=1
5443 https://en.wikipedia.org/w/index.php%3ftitle=User:Mc_mosa&action=edit&redlink=1
5444 https://en.wikipedia.org/wiki/User:McGucket
5445 https://en.wikipedia.org/w/index.php%3ftitle=User:McIntosh_Natura&action=edit&redlink=1
5446 https://en.wikipedia.org/wiki/User:McKay
5447 https://en.wikipedia.org/wiki/User:Mcarling
5448 https://en.wikipedia.org/w/index.php%3ftitle=User:Mcb2001&action=edit&redlink=1
5449 https://en.wikipedia.org/wiki/User:Mccapra
5450 https://en.wikipedia.org/w/index.php%3ftitle=User:Mccraig&action=edit&redlink=1
5451 https://en.wikipedia.org/w/index.php%3ftitle=User:Mcculley&action=edit&redlink=1
5452 https://en.wikipedia.org/w/index.php%3ftitle=User:Mcdonaldjosh7&action=edit&redlink=1
5453 https://en.wikipedia.org/w/index.php%3ftitle=User:Mcichelli&action=edit&redlink=1
5454 https://en.wikipedia.org/w/index.php%3ftitle=User:Mciura&action=edit&redlink=1
5455 https://en.wikipedia.org/wiki/User:MclareN212
5456 https://en.wikipedia.org/wiki/User:Mcld
5457 https://en.wikipedia.org/w/index.php%3ftitle=User:Mcom320&action=edit&redlink=1
5458 https://en.wikipedia.org/wiki/User:McoreD

1 Mcoupal5459
2 Mcsee5460
11 Mcstrother5461
4 Mctpyt5462
5 Md haris4u5463
1 Md.aftabuddin5464
1 Mdasim5465
16 Mdd5466
1 Mdd46965467
1 Mdebets5468
3 Mdf5469
1 Mdkess5470
1 Mdnahas5471
1 Mdtr5472
2 Mdvs5473
10 Me and5474
14 Me, Myself, and I are Here5475
1 Meachamus.Prime5476
1 Mean as custard5477
3 Mecanismo5478
15 Mecej45479
1 MediKate5480
3 Medich19855481
1 Medinoc5482
2 Meekohi5483

5459 https://en.wikipedia.org/w/index.php%3ftitle=User:Mcoupal&action=edit&redlink=1
5460 https://en.wikipedia.org/wiki/User:Mcsee
5461 https://en.wikipedia.org/wiki/User:Mcstrother
5462 https://en.wikipedia.org/w/index.php%3ftitle=User:Mctpyt&action=edit&redlink=1
5463 https://en.wikipedia.org/wiki/User:Md_haris4u
5464 https://en.wikipedia.org/w/index.php%3ftitle=User:Md.aftabuddin&action=edit&redlink=1
5465 https://en.wikipedia.org/w/index.php%3ftitle=User:Mdasim&action=edit&redlink=1
5466 https://en.wikipedia.org/wiki/User:Mdd
5467 https://en.wikipedia.org/wiki/User:Mdd4696
5468 https://en.wikipedia.org/wiki/User:Mdebets
5469 https://en.wikipedia.org/wiki/User:Mdf
5470 https://en.wikipedia.org/wiki/User:Mdkess
5471 https://en.wikipedia.org/wiki/User:Mdnahas
5472 https://en.wikipedia.org/w/index.php%3ftitle=User:Mdtr&action=edit&redlink=1
5473 https://en.wikipedia.org/w/index.php%3ftitle=User:Mdvs&action=edit&redlink=1
5474 https://en.wikipedia.org/wiki/User:Me_and
5475 https://en.wikipedia.org/wiki/User:Me,_Myself,_and_I_are_Here
5476 https://en.wikipedia.org/w/index.php%3ftitle=User:Meachamus.Prime&action=edit&redlink=1
5477 https://en.wikipedia.org/wiki/User:Mean_as_custard
5478 https://en.wikipedia.org/wiki/User:Mecanismo
5479 https://en.wikipedia.org/w/index.php%3ftitle=User:Mecej4&action=edit&redlink=1
5480 https://en.wikipedia.org/w/index.php%3ftitle=User:MediKate&action=edit&redlink=1
5481 https://en.wikipedia.org/w/index.php%3ftitle=User:Medich1985&action=edit&redlink=1
5482 https://en.wikipedia.org/w/index.php%3ftitle=User:Medinoc&action=edit&redlink=1
5483 https://en.wikipedia.org/wiki/User:Meekohi

1 Meerpirat5484
3 MeesterDaan5485
28 MegaHasher5486
5 Megajosh25487
2 Megajuice5488
1 Megaritzmom5489
1 Megatherium5490
2 Megharajv5491
2 Mehrotraparth5492
1 Meinhard Benn5493
1 Meiskam5494
1 Mekeor5495
1 Melab-15496
8 MelbourneStar5497
3 Melchoir5498
4 Melcombe5499
1 Melcous5500
2 Meldraft5501
101 Mellum5502
2 Melonkelon5503
1 Melsaran5504
6 MementoVivere5505
2 Memfrob5506
2 Memming5507
1 Memodude5508

5484 https://en.wikipedia.org/wiki/User:Meerpirat
5485 https://en.wikipedia.org/w/index.php%3ftitle=User:MeesterDaan&action=edit&redlink=1
5486 https://en.wikipedia.org/wiki/User:MegaHasher
5487 https://en.wikipedia.org/wiki/User:Megajosh2
5488 https://en.wikipedia.org/wiki/User:Megajuice
5489 https://en.wikipedia.org/w/index.php%3ftitle=User:Megaritzmom&action=edit&redlink=1
5490 https://en.wikipedia.org/w/index.php%3ftitle=User:Megatherium&action=edit&redlink=1
5491 https://en.wikipedia.org/w/index.php%3ftitle=User:Megharajv&action=edit&redlink=1
5492 https://en.wikipedia.org/w/index.php%3ftitle=User:Mehrotraparth&action=edit&redlink=1
5493 https://en.wikipedia.org/wiki/User:Meinhard_Benn
5494 https://en.wikipedia.org/wiki/User:Meiskam
5495 https://en.wikipedia.org/wiki/User:Mekeor
5496 https://en.wikipedia.org/wiki/User:Melab-1
5497 https://en.wikipedia.org/wiki/User:MelbourneStar
5498 https://en.wikipedia.org/wiki/User:Melchoir
5499 https://en.wikipedia.org/wiki/User:Melcombe
5500 https://en.wikipedia.org/wiki/User:Melcous
5501 https://en.wikipedia.org/wiki/User:Meldraft
5502 https://en.wikipedia.org/wiki/User:Mellum
5503 https://en.wikipedia.org/wiki/User:Melonkelon
5504 https://en.wikipedia.org/wiki/User:Melsaran
5505 https://en.wikipedia.org/wiki/User:MementoVivere
5506 https://en.wikipedia.org/wiki/User:Memfrob
5507 https://en.wikipedia.org/wiki/User:Memming
5508 https://en.wikipedia.org/w/index.php%3ftitle=User:Memodude&action=edit&redlink=1

1 Meneth5509
4 Meng65510
1 Meni Rosenfeld5511
2 MenoBot5512
1 MensanDeltiologist5513
4 Mentibot5514
1 Meredyth5515
2 Merendoglu5516
1 Merinjose5517
1 MerlIwBot5518
1 MerlLinkBot5519
2 Merlinme5520
4 Merlion4445521
2 Merocastle5522
3 Merovingian5523
1 Merritt.alex5524
2 MertyWiki5525
5 Mesospheric5526
1 Mess5527
1 Messy Thinking5528
1 Mesterharm5529
1 Metalmax5530
1 Metaprimer5531
2 Metaxal5532
1 Meteficha5533

5509 https://en.wikipedia.org/wiki/User:Meneth
5510 https://en.wikipedia.org/wiki/User:Meng6
5511 https://en.wikipedia.org/wiki/User:Meni_Rosenfeld
5512 https://en.wikipedia.org/wiki/User:MenoBot
5513 https://en.wikipedia.org/wiki/User:MensanDeltiologist
5514 https://en.wikipedia.org/wiki/User:Mentibot
5515 https://en.wikipedia.org/wiki/User:Meredyth
5516 https://en.wikipedia.org/wiki/User:Merendoglu
5517 https://en.wikipedia.org/w/index.php%3ftitle=User:Merinjose&action=edit&redlink=1
5518 https://en.wikipedia.org/wiki/User:MerlIwBot
5519 https://en.wikipedia.org/wiki/User:MerlLinkBot
5520 https://en.wikipedia.org/wiki/User:Merlinme
5521 https://en.wikipedia.org/wiki/User:Merlion444
5522 https://en.wikipedia.org/w/index.php%3ftitle=User:Merocastle&action=edit&redlink=1
5523 https://en.wikipedia.org/wiki/User:Merovingian
5524 https://en.wikipedia.org/w/index.php%3ftitle=User:Merritt.alex&action=edit&redlink=1
5525 https://en.wikipedia.org/wiki/User:MertyWiki
5526 https://en.wikipedia.org/wiki/User:Mesospheric
5527 https://en.wikipedia.org/wiki/User:Mess
5528 https://en.wikipedia.org/w/index.php%3ftitle=User:Messy_Thinking&action=edit&redlink=1
5529 https://en.wikipedia.org/w/index.php%3ftitle=User:Mesterharm&action=edit&redlink=1
5530 https://en.wikipedia.org/w/index.php%3ftitle=User:Metalmax&action=edit&redlink=1
5531 https://en.wikipedia.org/w/index.php%3ftitle=User:Metaprimer&action=edit&redlink=1
5532 https://en.wikipedia.org/w/index.php%3ftitle=User:Metaxal&action=edit&redlink=1
5533 https://en.wikipedia.org/w/index.php%3ftitle=User:Meteficha&action=edit&redlink=1

5 Meters5534
2 Methcub5535
1 MetsBot5536
3 Meyavuz5537
1 Mezhaka5538
1 Mezzanine~enwiki5539
1 MfR5540
1 Mfb5541
5 Mfwitten5542
2 Mgasparin5543
1 Mgccl5544
1 Mghgtg5545
1 Mgiganteus15546
3 Mgius5547
2 Mgraham8315548
13 Mgreenbe5549
2 Mgwalker5550
1 Mh265551
4 Mhahsler5552
1 Mhavard9995553
2 Mhilferink5554
3 Mhou15555
1 Mhss5556
89 Mhym5557
2 MiNiWolF5558

5534 https://en.wikipedia.org/wiki/User:Meters
5535 https://en.wikipedia.org/wiki/User:Methcub
5536 https://en.wikipedia.org/wiki/User:MetsBot
5537 https://en.wikipedia.org/wiki/User:Meyavuz
5538 https://en.wikipedia.org/w/index.php%3ftitle=User:Mezhaka&action=edit&redlink=1
5539 https://en.wikipedia.org/w/index.php%3ftitle=User:Mezzanine~enwiki&action=edit&redlink=1
5540 https://en.wikipedia.org/w/index.php%3ftitle=User:MfR&action=edit&redlink=1
5541 https://en.wikipedia.org/wiki/User:Mfb
5542 https://en.wikipedia.org/wiki/User:Mfwitten
5543 https://en.wikipedia.org/wiki/User:Mgasparin
5544 https://en.wikipedia.org/wiki/User:Mgccl
5545 https://en.wikipedia.org/w/index.php%3ftitle=User:Mghgtg&action=edit&redlink=1
5546 https://en.wikipedia.org/wiki/User:Mgiganteus1
5547 https://en.wikipedia.org/w/index.php%3ftitle=User:Mgius&action=edit&redlink=1
5548 https://en.wikipedia.org/w/index.php%3ftitle=User:Mgraham831&action=edit&redlink=1
5549 https://en.wikipedia.org/wiki/User:Mgreenbe
5550 https://en.wikipedia.org/wiki/User:Mgwalker
5551 https://en.wikipedia.org/wiki/User:Mh26
5552 https://en.wikipedia.org/wiki/User:Mhahsler
5553 https://en.wikipedia.org/w/index.php%3ftitle=User:Mhavard999&action=edit&redlink=1
5554 https://en.wikipedia.org/w/index.php%3ftitle=User:Mhilferink&action=edit&redlink=1
5555 https://en.wikipedia.org/wiki/User:Mhou1
5556 https://en.wikipedia.org/wiki/User:Mhss
5557 https://en.wikipedia.org/wiki/User:Mhym
5558 https://en.wikipedia.org/w/index.php%3ftitle=User:MiNiWolF&action=edit&redlink=1

1 Miana5559
2 Miaow Miaow5560
1 MicGrigni5561
1 MicTwi5562
1 Micahcowan5563
1 Micahsaint5564
1 MichaK5565
1 Michael Barera5566
3 Michael Devore5567
1 Michael Greiner5568
355 Michael Hardy5569
1 Michael Lee Baker5570
4 Michael Rogers5571
1 Michael Ross5572
1 Michael Shields5573
30 Michael Slone5574
5 Michael Veksler5575
1 Michael nju5576
7 Michael-stanton5577
1 Michael.jaeger5578
1 Michael935555579
1 MichaelBillington5580
5 MichaelMcGuffin5581
1 MichaelPloujnikov5582
3 Michaelbluejay5583

5559 https://en.wikipedia.org/w/index.php%3ftitle=User:Miana&action=edit&redlink=1
5560 https://en.wikipedia.org/wiki/User:Miaow_Miaow
5561 https://en.wikipedia.org/wiki/User:MicGrigni
5562 https://en.wikipedia.org/w/index.php%3ftitle=User:MicTwi&action=edit&redlink=1
5563 https://en.wikipedia.org/w/index.php%3ftitle=User:Micahcowan&action=edit&redlink=1
5564 https://en.wikipedia.org/w/index.php%3ftitle=User:Micahsaint&action=edit&redlink=1
5565 https://en.wikipedia.org/wiki/User:MichaK
5566 https://en.wikipedia.org/wiki/User:Michael_Barera
5567 https://en.wikipedia.org/wiki/User:Michael_Devore
5568 https://en.wikipedia.org/wiki/User:Michael_Greiner
5569 https://en.wikipedia.org/wiki/User:Michael_Hardy
5570 https://en.wikipedia.org/wiki/User:Michael_Lee_Baker
5571 https://en.wikipedia.org/w/index.php%3ftitle=User:Michael_Rogers&action=edit&redlink=1
5572 https://en.wikipedia.org/wiki/User:Michael_Ross
5573 https://en.wikipedia.org/wiki/User:Michael_Shields
5574 https://en.wikipedia.org/wiki/User:Michael_Slone
5575 https://en.wikipedia.org/wiki/User:Michael_Veksler
5576 https://en.wikipedia.org/w/index.php%3ftitle=User:Michael_nju&action=edit&redlink=1
5577 https://en.wikipedia.org/wiki/User:Michael-stanton
5578 https://en.wikipedia.org/w/index.php%3ftitle=User:Michael.jaeger&action=edit&redlink=1
5579 https://en.wikipedia.org/wiki/User:Michael93555
5580 https://en.wikipedia.org/wiki/User:MichaelBillington
5581 https://en.wikipedia.org/wiki/User:MichaelMcGuffin
5582 https://en.wikipedia.org/w/index.php%3ftitle=User:MichaelPloujnikov&action=edit&redlink=1
5583 https://en.wikipedia.org/wiki/User:Michaelbluejay

1 Michaelfavor5584
1 Michaelhilton5585
4 Michaelhurwicz5586
1 Michal.burda5587
1 MichealH5588
4 Michele bon5589
2 Michele.dallachiesa5590
1 Micheleorsi5591
1 Michi5592
1 MichiHenning5593
1 Michocio5594
1 Mickster8105595
5 Midas025596
1 Midleading5597
1 Midnightcomm5598
1 Midoreigh5599
19 Mighty Firebat5600
4 Miguel~enwiki5601
4 Mihai Capotă5602
1 Mihai Damian5603
14 MihalOrela5604
1 Miia5605
1 Mik01aj5606
9 Mikaey5607
4 Mike Christie5608

5584 https://en.wikipedia.org/wiki/User:Michaelfavor
5585 https://en.wikipedia.org/wiki/User:Michaelhilton
5586 https://en.wikipedia.org/wiki/User:Michaelhurwicz
5587 https://en.wikipedia.org/w/index.php%3ftitle=User:Michal.burda&action=edit&redlink=1
5588 https://en.wikipedia.org/wiki/User:MichealH
5589 https://en.wikipedia.org/w/index.php%3ftitle=User:Michele_bon&action=edit&redlink=1
5590 https://en.wikipedia.org/w/index.php%3ftitle=User:Michele.dallachiesa&action=edit&redlink=1
5591 https://en.wikipedia.org/w/index.php%3ftitle=User:Micheleorsi&action=edit&redlink=1
5592 https://en.wikipedia.org/wiki/User:Michi
5593 https://en.wikipedia.org/wiki/User:MichiHenning
5594 https://en.wikipedia.org/w/index.php%3ftitle=User:Michocio&action=edit&redlink=1
5595 https://en.wikipedia.org/w/index.php%3ftitle=User:Mickster810&action=edit&redlink=1
5596 https://en.wikipedia.org/wiki/User:Midas02
5597 https://en.wikipedia.org/w/index.php%3ftitle=User:Midleading&action=edit&redlink=1
5598 https://en.wikipedia.org/wiki/User:Midnightcomm
5599 https://en.wikipedia.org/wiki/User:Midoreigh
5600 https://en.wikipedia.org/w/index.php%3ftitle=User:Mighty_Firebat&action=edit&redlink=1
5601 https://en.wikipedia.org/wiki/User:Miguel~enwiki
5602 https://en.wikipedia.org/wiki/User:Mihai_Capot%25C4%2583
5603 https://en.wikipedia.org/wiki/User:Mihai_Damian
5604 https://en.wikipedia.org/wiki/User:MihalOrela
5605 https://en.wikipedia.org/wiki/User:Miia
5606 https://en.wikipedia.org/w/index.php%3ftitle=User:Mik01aj&action=edit&redlink=1
5607 https://en.wikipedia.org/wiki/User:Mikaey
5608 https://en.wikipedia.org/wiki/User:Mike_Christie

1 Mike Novikoff5609
3 Mike Rosoft5610
5 Mike Schwartz5611
2 Mike.aizatsky5612
2 Mike.lifeguard5613
1 Mike.pr5614
3 Mike00015615
1 Mike005005616
1 Mike12425617
2 Mike13415618
1 Mike221205619
1 Mike2vil5620
20 Mikeblas5621
1 Mikeedla5622
2 Mikeo5623
1 Mikepelley5624
1 Mikeputnam5625
2 Mikewebkist5626
1 Mikeynap5627
16 Mikhail Ryazanov5628
1 Mikhat5629
2 Mikkel2thorup5630
1 Miknight5631
1 Miko3k5632
1 Mikofski5633

5609 https://en.wikipedia.org/wiki/User:Mike_Novikoff
5610 https://en.wikipedia.org/wiki/User:Mike_Rosoft
5611 https://en.wikipedia.org/wiki/User:Mike_Schwartz
5612 https://en.wikipedia.org/wiki/User:Mike.aizatsky
5613 https://en.wikipedia.org/wiki/User:Mike.lifeguard
5614 https://en.wikipedia.org/w/index.php%3ftitle=User:Mike.pr&action=edit&redlink=1
5615 https://en.wikipedia.org/w/index.php%3ftitle=User:Mike0001&action=edit&redlink=1
5616 https://en.wikipedia.org/w/index.php%3ftitle=User:Mike00500&action=edit&redlink=1
5617 https://en.wikipedia.org/w/index.php%3ftitle=User:Mike1242&action=edit&redlink=1
5618 https://en.wikipedia.org/w/index.php%3ftitle=User:Mike1341&action=edit&redlink=1
5619 https://en.wikipedia.org/wiki/User:Mike22120
5620 https://en.wikipedia.org/wiki/User:Mike2vil
5621 https://en.wikipedia.org/wiki/User:Mikeblas
5622 https://en.wikipedia.org/w/index.php%3ftitle=User:Mikeedla&action=edit&redlink=1
5623 https://en.wikipedia.org/wiki/User:Mikeo
5624 https://en.wikipedia.org/wiki/User:Mikepelley
5625 https://en.wikipedia.org/wiki/User:Mikeputnam
5626 https://en.wikipedia.org/w/index.php%3ftitle=User:Mikewebkist&action=edit&redlink=1
5627 https://en.wikipedia.org/w/index.php%3ftitle=User:Mikeynap&action=edit&redlink=1
5628 https://en.wikipedia.org/wiki/User:Mikhail_Ryazanov
5629 https://en.wikipedia.org/wiki/User:Mikhat
5630 https://en.wikipedia.org/w/index.php%3ftitle=User:Mikkel2thorup&action=edit&redlink=1
5631 https://en.wikipedia.org/wiki/User:Miknight
5632 https://en.wikipedia.org/wiki/User:Miko3k
5633 https://en.wikipedia.org/wiki/User:Mikofski

4 Mikrosam Akademija 25634
4 Mikymaione5635
4 Milcke~enwiki5636
16 Mild Bill Hiccup5637
4 Miles5638
1 Milicevic015639
1 MiljanV5640
1 Milko Zec5641
1 Millerdl5642
1 Mimibar5643
2 Mina865644
1 Minchenko Michael5645
7 MindAfterMath5646
1 Mindbleach5647
1 Mindbuilder5648
173 Mindmatrix5649
7 Mindotaur5650
1 Mindstalk5651
1 Minecraft1000005652
1 Minervous5653
7 Minesweeper5654
1 Minghong5655
2 Minhaskamal5656
1 Mini-Geek5657
2 Minidiable5658

5634 https://en.wikipedia.org/w/index.php%3ftitle=User:Mikrosam_Akademija_2&action=edit&redlink=1
5635 https://en.wikipedia.org/w/index.php%3ftitle=User:Mikymaione&action=edit&redlink=1
5636 https://en.wikipedia.org/w/index.php%3ftitle=User:Milcke~enwiki&action=edit&redlink=1
5637 https://en.wikipedia.org/wiki/User:Mild_Bill_Hiccup
5638 https://en.wikipedia.org/w/index.php%3ftitle=User:Miles&action=edit&redlink=1
5639 https://en.wikipedia.org/wiki/User:Milicevic01
5640 https://en.wikipedia.org/w/index.php%3ftitle=User:MiljanV&action=edit&redlink=1
5641 https://en.wikipedia.org/wiki/User:Milko_Zec
5642 https://en.wikipedia.org/wiki/User:Millerdl
5643 https://en.wikipedia.org/w/index.php%3ftitle=User:Mimibar&action=edit&redlink=1
5644 https://en.wikipedia.org/w/index.php%3ftitle=User:Mina86&action=edit&redlink=1
5645 https://en.wikipedia.org/w/index.php%3ftitle=User:Minchenko_Michael&action=edit&redlink=1
5646 https://en.wikipedia.org/wiki/User:MindAfterMath
5647 https://en.wikipedia.org/w/index.php%3ftitle=User:Mindbleach&action=edit&redlink=1
5648 https://en.wikipedia.org/wiki/User:Mindbuilder
5649 https://en.wikipedia.org/wiki/User:Mindmatrix
5650 https://en.wikipedia.org/w/index.php%3ftitle=User:Mindotaur&action=edit&redlink=1
5651 https://en.wikipedia.org/wiki/User:Mindstalk
5652 https://en.wikipedia.org/w/index.php%3ftitle=User:Minecraft100000&action=edit&redlink=1
5653 https://en.wikipedia.org/w/index.php%3ftitle=User:Minervous&action=edit&redlink=1
5654 https://en.wikipedia.org/wiki/User:Minesweeper
5655 https://en.wikipedia.org/wiki/User:Minghong
5656 https://en.wikipedia.org/wiki/User:Minhaskamal
5657 https://en.wikipedia.org/wiki/User:Mini-Geek
5658 https://en.wikipedia.org/w/index.php%3ftitle=User:Minidiable&action=edit&redlink=1

1 Minimac5659
1 Minime123585660
2 Minimiscience5661
3 Minna Sora no Shita5662
2 Minsuper5663
1 Mintleaf~enwiki5664
1 MinusBot5665
6 Mipadi5666
5 MiqayelMinasyan5667
2 Miracle Pen5668
7 Miracle1735669
1 Mirddes5670
1 Mirer5671
1 MiroBrada5672
1 Mironearth5673
1 Mischling5674
2 Miscode5675
7 Miserlou5676
1 Misha Vargas5677
1 Mishlai5678
2 Miskaton5679
19 Miskin5680
2 Misof5681
1 MistWiz5682
22 MisterSheik5683

5659 https://en.wikipedia.org/wiki/User:Minimac
5660 https://en.wikipedia.org/wiki/User:Minime12358
5661 https://en.wikipedia.org/w/index.php%3ftitle=User:Minimiscience&action=edit&redlink=1
5662 https://en.wikipedia.org/wiki/User:Minna_Sora_no_Shita
5663 https://en.wikipedia.org/w/index.php%3ftitle=User:Minsuper&action=edit&redlink=1
5664 https://en.wikipedia.org/wiki/User:Mintleaf~enwiki
5665 https://en.wikipedia.org/wiki/User:MinusBot
5666 https://en.wikipedia.org/wiki/User:Mipadi
5667 https://en.wikipedia.org/wiki/User:MiqayelMinasyan
5668 https://en.wikipedia.org/w/index.php%3ftitle=User:Miracle_Pen&action=edit&redlink=1
5669 https://en.wikipedia.org/wiki/User:Miracle173
5670 https://en.wikipedia.org/w/index.php%3ftitle=User:Mirddes&action=edit&redlink=1
5671 https://en.wikipedia.org/wiki/User:Mirer
5672 https://en.wikipedia.org/w/index.php%3ftitle=User:MiroBrada&action=edit&redlink=1
5673 https://en.wikipedia.org/w/index.php%3ftitle=User:Mironearth&action=edit&redlink=1
5674 https://en.wikipedia.org/wiki/User:Mischling
5675 https://en.wikipedia.org/w/index.php%3ftitle=User:Miscode&action=edit&redlink=1
5676 https://en.wikipedia.org/wiki/User:Miserlou
5677 https://en.wikipedia.org/wiki/User:Misha_Vargas
5678 https://en.wikipedia.org/wiki/User:Mishlai
5679 https://en.wikipedia.org/wiki/User:Miskaton
5680 https://en.wikipedia.org/wiki/User:Miskin
5681 https://en.wikipedia.org/wiki/User:Misof
5682 https://en.wikipedia.org/wiki/User:MistWiz
5683 https://en.wikipedia.org/wiki/User:MisterSheik

2 Misza135684
1 Miszatomic5685
6 Mitch Ames5686
2 Mitchellduffy5687
1 Mitchoyoshitaka5688
2 MithrandirAgain5689
1 Mitsou~dewiki5690
142 Miym5691
4 Mizaoku5692
1 Mjaredd5693
1 Mjbmrbot5694
1 Mjethalia65695
1 Mjoyce30125696
2 Mjs19915697
1 Mkarutz5698
1 Mkhan31895699
5 Mkroeger5700
2 Mkw8135701
1 MladenWiki5702
2 Mlhetland5703
14 Mlpkr5704
2 Mm405705
1 Mmernex5706
1 Mmhyamin5707
1 Mmmready5708

5684 https://en.wikipedia.org/wiki/User:Misza13
5685 https://en.wikipedia.org/wiki/User:Miszatomic
5686 https://en.wikipedia.org/wiki/User:Mitch_Ames
5687 https://en.wikipedia.org/w/index.php%3ftitle=User:Mitchellduffy&action=edit&redlink=1
5688 https://en.wikipedia.org/wiki/User:Mitchoyoshitaka
5689 https://en.wikipedia.org/wiki/User:MithrandirAgain
5690 https://en.wikipedia.org/w/index.php%3ftitle=User:Mitsou~dewiki&action=edit&redlink=1
5691 https://en.wikipedia.org/wiki/User:Miym
5692 https://en.wikipedia.org/w/index.php%3ftitle=User:Mizaoku&action=edit&redlink=1
5693 https://en.wikipedia.org/w/index.php%3ftitle=User:Mjaredd&action=edit&redlink=1
5694 https://en.wikipedia.org/wiki/User:Mjbmrbot
5695 https://en.wikipedia.org/w/index.php%3ftitle=User:Mjethalia6&action=edit&redlink=1
5696 https://en.wikipedia.org/w/index.php%3ftitle=User:Mjoyce3012&action=edit&redlink=1
5697 https://en.wikipedia.org/wiki/User:Mjs1991
5698 https://en.wikipedia.org/w/index.php%3ftitle=User:Mkarutz&action=edit&redlink=1
5699 https://en.wikipedia.org/wiki/User:Mkhan3189
5700 https://en.wikipedia.org/w/index.php%3ftitle=User:Mkroeger&action=edit&redlink=1
5701 https://en.wikipedia.org/wiki/User:Mkw813
5702 https://en.wikipedia.org/w/index.php%3ftitle=User:MladenWiki&action=edit&redlink=1
5703 https://en.wikipedia.org/w/index.php%3ftitle=User:Mlhetland&action=edit&redlink=1
5704 https://en.wikipedia.org/wiki/User:Mlpkr
5705 https://en.wikipedia.org/wiki/User:Mm40
5706 https://en.wikipedia.org/wiki/User:Mmernex
5707 https://en.wikipedia.org/w/index.php%3ftitle=User:Mmhyamin&action=edit&redlink=1
5708 https://en.wikipedia.org/w/index.php%3ftitle=User:Mmmready&action=edit&redlink=1

1 Mmo~enwiki5709
2 Mmtux5710
2 Mnbf9rca5711
2 Mneedes5712
2 Mntlchaos5713
4 MoMaT5714
1 MoRsE5715
1 Moa18e5716
1 MobileWill5717
1 Mobileseth5718
5 Mobius5719
1 ModalPeak5720
1 Modeha5721
1 Modest Genius5722
1 Modiashutosh5723
4 Modster5724
1 Moe Epsilon5725
1 Mogers5726
5 Mogism5727
1 Mogren5728
2 Mohamedafattah5729
1 Mohammadali695730
1 Mohit050119925731
1 Moink5732
5 Mojo Hand5733

5709 https://en.wikipedia.org/w/index.php%3ftitle=User:Mmo~enwiki&action=edit&redlink=1
5710 https://en.wikipedia.org/wiki/User:Mmtux
5711 https://en.wikipedia.org/wiki/User:Mnbf9rca
5712 https://en.wikipedia.org/w/index.php%3ftitle=User:Mneedes&action=edit&redlink=1
5713 https://en.wikipedia.org/wiki/User:Mntlchaos
5714 https://en.wikipedia.org/w/index.php%3ftitle=User:MoMaT&action=edit&redlink=1
5715 https://en.wikipedia.org/wiki/User:MoRsE
5716 https://en.wikipedia.org/w/index.php%3ftitle=User:Moa18e&action=edit&redlink=1
5717 https://en.wikipedia.org/w/index.php%3ftitle=User:MobileWill&action=edit&redlink=1
5718 https://en.wikipedia.org/wiki/User:Mobileseth
5719 https://en.wikipedia.org/wiki/User:Mobius
5720 https://en.wikipedia.org/w/index.php%3ftitle=User:ModalPeak&action=edit&redlink=1
5721 https://en.wikipedia.org/wiki/User:Modeha
5722 https://en.wikipedia.org/wiki/User:Modest_Genius
5723 https://en.wikipedia.org/w/index.php%3ftitle=User:Modiashutosh&action=edit&redlink=1
5724 https://en.wikipedia.org/wiki/User:Modster
5725 https://en.wikipedia.org/wiki/User:Moe_Epsilon
5726 https://en.wikipedia.org/w/index.php%3ftitle=User:Mogers&action=edit&redlink=1
5727 https://en.wikipedia.org/w/index.php%3ftitle=User:Mogism&action=edit&redlink=1
5728 https://en.wikipedia.org/wiki/User:Mogren
5729 https://en.wikipedia.org/w/index.php%3ftitle=User:Mohamedafattah&action=edit&redlink=1
5730 https://en.wikipedia.org/w/index.php%3ftitle=User:Mohammadali69&action=edit&redlink=1
5731 https://en.wikipedia.org/w/index.php%3ftitle=User:Mohit05011992&action=edit&redlink=1
5732 https://en.wikipedia.org/wiki/User:Moink
5733 https://en.wikipedia.org/wiki/User:Mojo_Hand

4 Mojoworker5734
1 Mollmerx5735
1 Momeara5736
1 Mommi845737
1 Mona Borham5738
4 Mona.tehrani5739
5 Monadial5740
1 Monchoman455741
4 MondalorBot5742
1 Mondhir~enwiki5743
2 Moneky5744
1 Monfornot5745
3 Mongol5746
78 Monkbot5747
2 Monmnohas5748
1 Mononomic5749
3 Monsday5750
1 Monstergurkan5751
1 Monty8455752
1 Monza66 1405753
1 Moogwrench5754
1 Mooncake20125755
1 Moosebumps5756
3 MopTop5757
3 MoraSique5758

5734 https://en.wikipedia.org/wiki/User:Mojoworker
5735 https://en.wikipedia.org/wiki/User:Mollmerx
5736 https://en.wikipedia.org/w/index.php%3ftitle=User:Momeara&action=edit&redlink=1
5737 https://en.wikipedia.org/wiki/User:Mommi84
5738 https://en.wikipedia.org/w/index.php%3ftitle=User:Mona_Borham&action=edit&redlink=1
5739 https://en.wikipedia.org/w/index.php%3ftitle=User:Mona.tehrani&action=edit&redlink=1
5740 https://en.wikipedia.org/w/index.php%3ftitle=User:Monadial&action=edit&redlink=1
5741 https://en.wikipedia.org/wiki/User:Monchoman45
5742 https://en.wikipedia.org/wiki/User:MondalorBot
5743 https://en.wikipedia.org/wiki/User:Mondhir~enwiki
5744 https://en.wikipedia.org/wiki/User:Moneky
5745 https://en.wikipedia.org/wiki/User:Monfornot
5746 https://en.wikipedia.org/wiki/User:Mongol
5747 https://en.wikipedia.org/wiki/User:Monkbot
5748 https://en.wikipedia.org/w/index.php%3ftitle=User:Monmnohas&action=edit&redlink=1
5749 https://en.wikipedia.org/wiki/User:Mononomic
5750 https://en.wikipedia.org/w/index.php%3ftitle=User:Monsday&action=edit&redlink=1
5751 https://en.wikipedia.org/w/index.php%3ftitle=User:Monstergurkan&action=edit&redlink=1
5752 https://en.wikipedia.org/wiki/User:Monty845
5753 https://en.wikipedia.org/w/index.php%3ftitle=User:Monza66_140&action=edit&redlink=1
5754 https://en.wikipedia.org/wiki/User:Moogwrench
5755 https://en.wikipedia.org/w/index.php%3ftitle=User:Mooncake2012&action=edit&redlink=1
5756 https://en.wikipedia.org/wiki/User:Moosebumps
5757 https://en.wikipedia.org/wiki/User:MopTop
5758 https://en.wikipedia.org/wiki/User:MoraSique

2 Mordomo5759
4 MoreNet5760
1 Morel5761
2 MorganGreen5762
5 Morgoth1065763
7 Mormegil5764
2 Morn5765
1 Morphh5766
1 Morris Kurz5767
10 Mortdeus5768
1 Mortee5769
6 Mortense5770
2 Mortusmox5771
2 Morty915772
1 Mosrod5773
1 Mostafa.vafi5774
1 Mostafamahmoud 075775
2 Mostargue5776
2 Motyzk5777
1 Mountain5778
1 MountainGoat85779
6 Mousehousemd5780
1 Mousetrails5781
1 Movses-bot5782
1 Moxfyre5783

5759 https://en.wikipedia.org/wiki/User:Mordomo
5760 https://en.wikipedia.org/wiki/User:MoreNet
5761 https://en.wikipedia.org/wiki/User:Morel
5762 https://en.wikipedia.org/w/index.php%3ftitle=User:MorganGreen&action=edit&redlink=1
5763 https://en.wikipedia.org/w/index.php%3ftitle=User:Morgoth106&action=edit&redlink=1
5764 https://en.wikipedia.org/wiki/User:Mormegil
5765 https://en.wikipedia.org/wiki/User:Morn
5766 https://en.wikipedia.org/wiki/User:Morphh
5767 https://en.wikipedia.org/w/index.php%3ftitle=User:Morris_Kurz&action=edit&redlink=1
5768 https://en.wikipedia.org/w/index.php%3ftitle=User:Mortdeus&action=edit&redlink=1
5769 https://en.wikipedia.org/wiki/User:Mortee
5770 https://en.wikipedia.org/wiki/User:Mortense
5771 https://en.wikipedia.org/w/index.php%3ftitle=User:Mortusmox&action=edit&redlink=1
5772 https://en.wikipedia.org/w/index.php%3ftitle=User:Morty91&action=edit&redlink=1
5773 https://en.wikipedia.org/wiki/User:Mosrod
5774 https://en.wikipedia.org/w/index.php%3ftitle=User:Mostafa.vafi&action=edit&redlink=1
5775 https://en.wikipedia.org/w/index.php%3ftitle=User:Mostafamahmoud_07&action=edit&redlink=1
5776 https://en.wikipedia.org/wiki/User:Mostargue
5777 https://en.wikipedia.org/w/index.php%3ftitle=User:Motyzk&action=edit&redlink=1
5778 https://en.wikipedia.org/wiki/User:Mountain
5779 https://en.wikipedia.org/w/index.php%3ftitle=User:MountainGoat8&action=edit&redlink=1
5780 https://en.wikipedia.org/w/index.php%3ftitle=User:Mousehousemd&action=edit&redlink=1
5781 https://en.wikipedia.org/w/index.php%3ftitle=User:Mousetrails&action=edit&redlink=1
5782 https://en.wikipedia.org/wiki/User:Movses-bot
5783 https://en.wikipedia.org/wiki/User:Moxfyre

2 Moxy5784
1 Mpagano5785
1 Mpatel5786
3 Mpeisenbr5787
2 Mpkuse5788
1 Mqchen5789
3 Mr Elmo5790
2 Mr Stephen5791
1 Mr flea5792
1 Mr. Fitzwilliam Darcy5793
1 Mr.nickdaking5794
1 MrBlok5795
2 MrDemeanour5796
1 MrMaThaMi5797
1 MrSeasword5798
1 MrShoggoth5799
1 MrSomeone5800
4 MrVanBot5801
2 Mrauto565802
2 Mrck@charter.net5803
1 Mrincodi5804
3 Mrjeff5805
1 Mrmagoo20065806
1 Mrocklin5807
1 Mrtngslr5808

5784 https://en.wikipedia.org/wiki/User:Moxy
5785 https://en.wikipedia.org/wiki/User:Mpagano
5786 https://en.wikipedia.org/wiki/User:Mpatel
5787 https://en.wikipedia.org/wiki/User:Mpeisenbr
5788 https://en.wikipedia.org/w/index.php%3ftitle=User:Mpkuse&action=edit&redlink=1
5789 https://en.wikipedia.org/wiki/User:Mqchen
5790 https://en.wikipedia.org/wiki/User:Mr_Elmo
5791 https://en.wikipedia.org/wiki/User:Mr_Stephen
5792 https://en.wikipedia.org/wiki/User:Mr_flea
5793 https://en.wikipedia.org/w/index.php%3ftitle=User:Mr._Fitzwilliam_Darcy&action=edit&redlink=1
5794 https://en.wikipedia.org/w/index.php%3ftitle=User:Mr.nickdaking&action=edit&redlink=1
5795 https://en.wikipedia.org/w/index.php%3ftitle=User:MrBlok&action=edit&redlink=1
5796 https://en.wikipedia.org/wiki/User:MrDemeanour
5797 https://en.wikipedia.org/wiki/User:MrMaThaMi
5798 https://en.wikipedia.org/w/index.php%3ftitle=User:MrSeasword&action=edit&redlink=1
5799 https://en.wikipedia.org/wiki/User:MrShoggoth
5800 https://en.wikipedia.org/wiki/User:MrSomeone
5801 https://en.wikipedia.org/wiki/User:MrVanBot
5802 https://en.wikipedia.org/w/index.php%3ftitle=User:Mrauto56&action=edit&redlink=1
5803 https://en.wikipedia.org/w/index.php%3ftitle=User:Mrck@charter.net&action=edit&redlink=1
5804 https://en.wikipedia.org/wiki/User:Mrincodi
5805 https://en.wikipedia.org/wiki/User:Mrjeff
5806 https://en.wikipedia.org/wiki/User:Mrmagoo2006
5807 https://en.wikipedia.org/wiki/User:Mrocklin
5808 https://en.wikipedia.org/wiki/User:Mrtngslr

1901
Contributors

2 Mrypsilon5809
2 Ms2ger5810
2 MsIternity5811
1 Msb7315812
3 Mscuthbert5813
1 Msg5555814
3 Msh2105815
1 Mshebanow5816
3 Mshonle~enwiki5817
1 Mslusky5818
2 Msmith09575819
4 Msnicki5820
2 Msoos5821
5 Msswp5822
1 Mstrahl5823
3 Mstuomel5824
2 Mswen5825
1 Mtahmed5826
4 Mtching5827
1 Mtcv5828
1 Mtnorthpoplar5829
1 Mtodinov5830
1 Mtzguido5831
2 Mudd15832
4 Muditjai5833

5809 https://en.wikipedia.org/w/index.php%3ftitle=User:Mrypsilon&action=edit&redlink=1
5810 https://en.wikipedia.org/wiki/User:Ms2ger
5811 https://en.wikipedia.org/w/index.php%3ftitle=User:MsIternity&action=edit&redlink=1
5812 https://en.wikipedia.org/wiki/User:Msb731
5813 https://en.wikipedia.org/wiki/User:Mscuthbert
5814 https://en.wikipedia.org/wiki/User:Msg555
5815 https://en.wikipedia.org/wiki/User:Msh210
5816 https://en.wikipedia.org/w/index.php%3ftitle=User:Mshebanow&action=edit&redlink=1
5817 https://en.wikipedia.org/wiki/User:Mshonle~enwiki
5818 https://en.wikipedia.org/w/index.php%3ftitle=User:Mslusky&action=edit&redlink=1
5819 https://en.wikipedia.org/w/index.php%3ftitle=User:Msmith0957&action=edit&redlink=1
5820 https://en.wikipedia.org/wiki/User:Msnicki
5821 https://en.wikipedia.org/wiki/User:Msoos
5822 https://en.wikipedia.org/w/index.php%3ftitle=User:Msswp&action=edit&redlink=1
5823 https://en.wikipedia.org/w/index.php%3ftitle=User:Mstrahl&action=edit&redlink=1
5824 https://en.wikipedia.org/wiki/User:Mstuomel
5825 https://en.wikipedia.org/w/index.php%3ftitle=User:Mswen&action=edit&redlink=1
5826 https://en.wikipedia.org/w/index.php%3ftitle=User:Mtahmed&action=edit&redlink=1
5827 https://en.wikipedia.org/w/index.php%3ftitle=User:Mtching&action=edit&redlink=1
5828 https://en.wikipedia.org/wiki/User:Mtcv
5829 https://en.wikipedia.org/w/index.php%3ftitle=User:Mtnorthpoplar&action=edit&redlink=1
5830 https://en.wikipedia.org/w/index.php%3ftitle=User:Mtodinov&action=edit&redlink=1
5831 https://en.wikipedia.org/wiki/User:Mtzguido
5832 https://en.wikipedia.org/wiki/User:Mudd1
5833 https://en.wikipedia.org/w/index.php%3ftitle=User:Muditjai&action=edit&redlink=1


1 Muffincorp5834
1 Muhends5835
1 Mukerjee5836
1 Mukov5837
3 MulberryBeacon5838
1 Mulder416sBot5839
1 Mullacy5840
1 MultiPoly5841
9 Multipundit5842
1 Mundhenk5843
3 Muon5844
1 MureninC5845
1 Muro de Aguas5846
3 Murphychen5847
4 Murray Langton5848
1 Murraycu5849
1 Murtaza.aliakbar5850
6 Musashiaharon5851
2 Mushroom5852
1 Music Sorter5853
3 MusicScience5854
2 MusikAnimal5855
9 MusikBot5856
9 Musiphil5857
1 Mussab ElDash5858

5834 https://en.wikipedia.org/w/index.php%3ftitle=User:Muffincorp&action=edit&redlink=1
5835 https://en.wikipedia.org/w/index.php%3ftitle=User:Muhends&action=edit&redlink=1
5836 https://en.wikipedia.org/wiki/User:Mukerjee
5837 https://en.wikipedia.org/w/index.php%3ftitle=User:Mukov&action=edit&redlink=1
5838 https://en.wikipedia.org/w/index.php%3ftitle=User:MulberryBeacon&action=edit&redlink=1
5839 https://en.wikipedia.org/wiki/User:Mulder416sBot
5840 https://en.wikipedia.org/w/index.php%3ftitle=User:Mullacy&action=edit&redlink=1
5841 https://en.wikipedia.org/wiki/User:MultiPoly
5842 https://en.wikipedia.org/w/index.php%3ftitle=User:Multipundit&action=edit&redlink=1
5843 https://en.wikipedia.org/wiki/User:Mundhenk
5844 https://en.wikipedia.org/wiki/User:Muon
5845 https://en.wikipedia.org/wiki/User:MureninC
5846 https://en.wikipedia.org/wiki/User:Muro_de_Aguas
5847 https://en.wikipedia.org/w/index.php%3ftitle=User:Murphychen&action=edit&redlink=1
5848 https://en.wikipedia.org/wiki/User:Murray_Langton
5849 https://en.wikipedia.org/w/index.php%3ftitle=User:Murraycu&action=edit&redlink=1
5850 https://en.wikipedia.org/wiki/User:Murtaza.aliakbar
5851 https://en.wikipedia.org/wiki/User:Musashiaharon
5852 https://en.wikipedia.org/wiki/User:Mushroom
5853 https://en.wikipedia.org/wiki/User:Music_Sorter
5854 https://en.wikipedia.org/w/index.php%3ftitle=User:MusicScience&action=edit&redlink=1
5855 https://en.wikipedia.org/wiki/User:MusikAnimal
5856 https://en.wikipedia.org/wiki/User:MusikBot
5857 https://en.wikipedia.org/wiki/User:Musiphil
5858 https://en.wikipedia.org/w/index.php%3ftitle=User:Mussab_ElDash&action=edit&redlink=1


2 Mutantoe5859
2 Mutinus5860
1 Muurder5861
5 MwGamera5862
2 Mwarren us5863
2 Mweber~enwiki5864
1 Mwilso245865
1 Mwk soul5866
2 Mwoelkde5867
2 Mwtoews5868
6 Mxn5869
1 My-2-bits5870
2 MyKingKong5871
4 Myanw5872
13 Myasuda5873
1 Mycer1nus5874
50 Myconix5875
3 Mykhal5876
1 Myleslong5877
1 MynameisJayden5878
1 Mynameisntbob15879
2 Mynkash165880
1 Myoglobin5881
2 Mypa4me5882
1 Mys 721tx5883

5859 https://en.wikipedia.org/wiki/User:Mutantoe
5860 https://en.wikipedia.org/w/index.php%3ftitle=User:Mutinus&action=edit&redlink=1
5861 https://en.wikipedia.org/w/index.php%3ftitle=User:Muurder&action=edit&redlink=1
5862 https://en.wikipedia.org/w/index.php%3ftitle=User:MwGamera&action=edit&redlink=1
5863 https://en.wikipedia.org/wiki/User:Mwarren_us
5864 https://en.wikipedia.org/w/index.php%3ftitle=User:Mweber~enwiki&action=edit&redlink=1
5865 https://en.wikipedia.org/wiki/User:Mwilso24
5866 https://en.wikipedia.org/w/index.php%3ftitle=User:Mwk_soul&action=edit&redlink=1
5867 https://en.wikipedia.org/wiki/User:Mwoelkde
5868 https://en.wikipedia.org/wiki/User:Mwtoews
5869 https://en.wikipedia.org/wiki/User:Mxn
5870 https://en.wikipedia.org/wiki/User:My-2-bits
5871 https://en.wikipedia.org/w/index.php%3ftitle=User:MyKingKong&action=edit&redlink=1
5872 https://en.wikipedia.org/wiki/User:Myanw
5873 https://en.wikipedia.org/wiki/User:Myasuda
5874 https://en.wikipedia.org/w/index.php%3ftitle=User:Mycer1nus&action=edit&redlink=1
5875 https://en.wikipedia.org/wiki/User:Myconix
5876 https://en.wikipedia.org/wiki/User:Mykhal
5877 https://en.wikipedia.org/wiki/User:Myleslong
5878 https://en.wikipedia.org/wiki/User:MynameisJayden
5879 https://en.wikipedia.org/w/index.php%3ftitle=User:Mynameisntbob1&action=edit&redlink=1
5880 https://en.wikipedia.org/w/index.php%3ftitle=User:Mynkash16&action=edit&redlink=1
5881 https://en.wikipedia.org/wiki/User:Myoglobin
5882 https://en.wikipedia.org/wiki/User:Mypa4me
5883 https://en.wikipedia.org/wiki/User:Mys_721tx


1 Myselfine5884
1 Mysid5885
10 MystBot5886
1 Myth0101015887
1 Mytochondria5888
1 Mzamora25889
1 N Shar5890
3 N Vale5891
3 N.Esayan5892
2 N12345n5893
2 N26ankur5894
1 N2e5895
1 N4nojohn5896
1 N4ut1lus5897
1 N5iln5898
1 NBMATT5899
1 NC4PK5900
1 NFD90015901
3 NTF5902
1 NYKevin5903
2 Nabla5904
2 Nad5905
1 Naddy5906
4 Naderra5907
2 Naerbnic5908

5884 https://en.wikipedia.org/w/index.php%3ftitle=User:Myselfine&action=edit&redlink=1
5885 https://en.wikipedia.org/wiki/User:Mysid
5886 https://en.wikipedia.org/wiki/User:MystBot
5887 https://en.wikipedia.org/w/index.php%3ftitle=User:Myth010101&action=edit&redlink=1
5888 https://en.wikipedia.org/wiki/User:Mytochondria
5889 https://en.wikipedia.org/wiki/User:Mzamora2
5890 https://en.wikipedia.org/wiki/User:N_Shar
5891 https://en.wikipedia.org/w/index.php%3ftitle=User:N_Vale&action=edit&redlink=1
5892 https://en.wikipedia.org/w/index.php%3ftitle=User:N.Esayan&action=edit&redlink=1
5893 https://en.wikipedia.org/wiki/User:N12345n
5894 https://en.wikipedia.org/w/index.php%3ftitle=User:N26ankur&action=edit&redlink=1
5895 https://en.wikipedia.org/wiki/User:N2e
5896 https://en.wikipedia.org/wiki/User:N4nojohn
5897 https://en.wikipedia.org/w/index.php%3ftitle=User:N4ut1lus&action=edit&redlink=1
5898 https://en.wikipedia.org/wiki/User:N5iln
5899 https://en.wikipedia.org/wiki/User:NBMATT
5900 https://en.wikipedia.org/wiki/User:NC4PK
5901 https://en.wikipedia.org/wiki/User:NFD9001
5902 https://en.wikipedia.org/wiki/User:NTF
5903 https://en.wikipedia.org/wiki/User:NYKevin
5904 https://en.wikipedia.org/wiki/User:Nabla
5905 https://en.wikipedia.org/wiki/User:Nad
5906 https://en.wikipedia.org/wiki/User:Naddy
5907 https://en.wikipedia.org/wiki/User:Naderra
5908 https://en.wikipedia.org/wiki/User:Naerbnic


1 Naereen5909
1 Naff895910
53 Nageh5911
4 Nagualdesign5912
1 Nahabedere5913
1 Nahum Reduta5914
1 Najeeb10105915
2 Nakarumaka5916
2 Naku~enwiki5917
1 Nalgene1235918
2 Nallimbot5919
2 NameIsRon5920
2 Namesisbala5921
3 Nandesuka5922
17 Nanobear~enwiki5923
7 Nanshu5924
1 Narayanese5925
1 Nard the Bard5926
2 Narendrak5927
5 Narky Blert5928
2 Narmtarkib5929
2 Naroza5930
1 Narsil5931
1 Nashn5932
1 Nasnema5933

5909 https://en.wikipedia.org/wiki/User:Naereen
5910 https://en.wikipedia.org/wiki/User:Naff89
5911 https://en.wikipedia.org/wiki/User:Nageh
5912 https://en.wikipedia.org/wiki/User:Nagualdesign
5913 https://en.wikipedia.org/wiki/User:Nahabedere
5914 https://en.wikipedia.org/wiki/User:Nahum_Reduta
5915 https://en.wikipedia.org/wiki/User:Najeeb1010
5916 https://en.wikipedia.org/w/index.php%3ftitle=User:Nakarumaka&action=edit&redlink=1
5917 https://en.wikipedia.org/w/index.php%3ftitle=User:Naku~enwiki&action=edit&redlink=1
5918 https://en.wikipedia.org/w/index.php%3ftitle=User:Nalgene123&action=edit&redlink=1
5919 https://en.wikipedia.org/wiki/User:Nallimbot
5920 https://en.wikipedia.org/wiki/User:NameIsRon
5921 https://en.wikipedia.org/w/index.php%3ftitle=User:Namesisbala&action=edit&redlink=1
5922 https://en.wikipedia.org/wiki/User:Nandesuka
5923 https://en.wikipedia.org/wiki/User:Nanobear~enwiki
5924 https://en.wikipedia.org/wiki/User:Nanshu
5925 https://en.wikipedia.org/wiki/User:Narayanese
5926 https://en.wikipedia.org/wiki/User:Nard_the_Bard
5927 https://en.wikipedia.org/w/index.php%3ftitle=User:Narendrak&action=edit&redlink=1
5928 https://en.wikipedia.org/wiki/User:Narky_Blert
5929 https://en.wikipedia.org/w/index.php%3ftitle=User:Narmtarkib&action=edit&redlink=1
5930 https://en.wikipedia.org/w/index.php%3ftitle=User:Naroza&action=edit&redlink=1
5931 https://en.wikipedia.org/wiki/User:Narsil
5932 https://en.wikipedia.org/w/index.php%3ftitle=User:Nashn&action=edit&redlink=1
5933 https://en.wikipedia.org/wiki/User:Nasnema


1 Nasradu85934
1 Nassrat5935
3 Nate Wessel5936
2 Natematic5937
1 Nateofark5938
1 Nathan Yim5939
1 Nathan11g5940
1 Nathan20555941
1 NathanHurst5942
1 Nathaniel.buck5943
1 Natuur125944
1 Naugtur5945
1 Nausher5946
2 Naveenkodali1235947
2 Navigatr855948
1 Navneet Singhvi5949
4 Navstar555950
1 NavyFlyer13255951
4 NawlinWiki5952
8 Nayuki5953
71 Nbarth5954
3 Nbhatia9115955
1 Nbhatla5956
100 Nbro5957
2 Nburden5958

5934 https://en.wikipedia.org/w/index.php%3ftitle=User:Nasradu8&action=edit&redlink=1
5935 https://en.wikipedia.org/w/index.php%3ftitle=User:Nassrat&action=edit&redlink=1
5936 https://en.wikipedia.org/wiki/User:Nate_Wessel
5937 https://en.wikipedia.org/wiki/User:Natematic
5938 https://en.wikipedia.org/wiki/User:Nateofark
5939 https://en.wikipedia.org/w/index.php%3ftitle=User:Nathan_Yim&action=edit&redlink=1
5940 https://en.wikipedia.org/w/index.php%3ftitle=User:Nathan11g&action=edit&redlink=1
5941 https://en.wikipedia.org/wiki/User:Nathan2055
5942 https://en.wikipedia.org/wiki/User:NathanHurst
5943 https://en.wikipedia.org/w/index.php%3ftitle=User:Nathaniel.buck&action=edit&redlink=1
5944 https://en.wikipedia.org/wiki/User:Natuur12
5945 https://en.wikipedia.org/wiki/User:Naugtur
5946 https://en.wikipedia.org/wiki/User:Nausher
5947 https://en.wikipedia.org/w/index.php%3ftitle=User:Naveenkodali123&action=edit&redlink=1
5948 https://en.wikipedia.org/wiki/User:Navigatr85
5949 https://en.wikipedia.org/w/index.php%3ftitle=User:Navneet_Singhvi&action=edit&redlink=1
5950 https://en.wikipedia.org/w/index.php%3ftitle=User:Navstar55&action=edit&redlink=1
5951 https://en.wikipedia.org/w/index.php%3ftitle=User:NavyFlyer1325&action=edit&redlink=1
5952 https://en.wikipedia.org/wiki/User:NawlinWiki
5953 https://en.wikipedia.org/wiki/User:Nayuki
5954 https://en.wikipedia.org/wiki/User:Nbarth
5955 https://en.wikipedia.org/w/index.php%3ftitle=User:Nbhatia911&action=edit&redlink=1
5956 https://en.wikipedia.org/wiki/User:Nbhatla
5957 https://en.wikipedia.org/wiki/User:Nbro
5958 https://en.wikipedia.org/w/index.php%3ftitle=User:Nburden&action=edit&redlink=1


3 Nealmcb5959
1 Nearffxx5960
2 Necklace5961
1 Ned145962
12 Nedaljo5963
1 Neel basu5964
2 Neelix5965
10 Negrulio5966
2 Nehalem5967
2 NehpestTheFirst5968
1 Neil P. Quinn5969
1 Neil Smithline5970
2 Neil.steiner5971
2 NeilFraser5972
2 NeilMacLeanCanada5973
26 Neilc5974
1 NeilenMarais5975
7 Neils515976
2 Nejko5977
1 Neko-chan5978
1 Nelhage5979
2 NellieBly5980
1 Nemnkim5981
1 Nemo Null5982
25 Nemo bis5983

5959 https://en.wikipedia.org/wiki/User:Nealmcb
5960 https://en.wikipedia.org/w/index.php%3ftitle=User:Nearffxx&action=edit&redlink=1
5961 https://en.wikipedia.org/w/index.php%3ftitle=User:Necklace&action=edit&redlink=1
5962 https://en.wikipedia.org/wiki/User:Ned14
5963 https://en.wikipedia.org/w/index.php%3ftitle=User:Nedaljo&action=edit&redlink=1
5964 https://en.wikipedia.org/w/index.php%3ftitle=User:Neel_basu&action=edit&redlink=1
5965 https://en.wikipedia.org/wiki/User:Neelix
5966 https://en.wikipedia.org/wiki/User:Negrulio
5967 https://en.wikipedia.org/wiki/User:Nehalem
5968 https://en.wikipedia.org/wiki/User:NehpestTheFirst
5969 https://en.wikipedia.org/wiki/User:Neil_P._Quinn
5970 https://en.wikipedia.org/wiki/User:Neil_Smithline
5971 https://en.wikipedia.org/wiki/User:Neil.steiner
5972 https://en.wikipedia.org/wiki/User:NeilFraser
5973 https://en.wikipedia.org/w/index.php%3ftitle=User:NeilMacLeanCanada&action=edit&redlink=1
5974 https://en.wikipedia.org/wiki/User:Neilc
5975 https://en.wikipedia.org/wiki/User:NeilenMarais
5976 https://en.wikipedia.org/wiki/User:Neils51
5977 https://en.wikipedia.org/wiki/User:Nejko
5978 https://en.wikipedia.org/wiki/User:Neko-chan
5979 https://en.wikipedia.org/w/index.php%3ftitle=User:Nelhage&action=edit&redlink=1
5980 https://en.wikipedia.org/wiki/User:NellieBly
5981 https://en.wikipedia.org/w/index.php%3ftitle=User:Nemnkim&action=edit&redlink=1
5982 https://en.wikipedia.org/wiki/User:Nemo_Null
5983 https://en.wikipedia.org/wiki/User:Nemo_bis


10 Nemo81305984
1 Neo1395985
2 NeoUrfahraner5986
1 Neodymium-1425987
1 Neoglez5988
1 Neohaven5989
1 Neollie5990
5 NeonMerlin5991
4 Neonfreon5992
3 Nerdgerl5993
1 Nero hu5994
1 Nesasio5995
1 NetBot5996
2 NetRolller 3D5997
1 Netha Hussain5998
3 Netheril965999
30 Nethgirb6000
1 Netoholic6001
4 Netojinn6002
2 Netrapt6003
3 Netvor6004
13 Netzwerkerin6005
1 Neupane Pratik6006
3 Neuralwarp6007
1 Neuralwiki6008

5984 https://en.wikipedia.org/wiki/User:Nemo8130
5985 https://en.wikipedia.org/wiki/User:Neo139
5986 https://en.wikipedia.org/wiki/User:NeoUrfahraner
5987 https://en.wikipedia.org/w/index.php%3ftitle=User:Neodymium-142&action=edit&redlink=1
5988 https://en.wikipedia.org/wiki/User:Neoglez
5989 https://en.wikipedia.org/wiki/User:Neohaven
5990 https://en.wikipedia.org/w/index.php%3ftitle=User:Neollie&action=edit&redlink=1
5991 https://en.wikipedia.org/wiki/User:NeonMerlin
5992 https://en.wikipedia.org/wiki/User:Neonfreon
5993 https://en.wikipedia.org/w/index.php%3ftitle=User:Nerdgerl&action=edit&redlink=1
5994 https://en.wikipedia.org/w/index.php%3ftitle=User:Nero_hu&action=edit&redlink=1
5995 https://en.wikipedia.org/w/index.php%3ftitle=User:Nesasio&action=edit&redlink=1
5996 https://en.wikipedia.org/wiki/User:NetBot
5997 https://en.wikipedia.org/wiki/User:NetRolller_3D
5998 https://en.wikipedia.org/wiki/User:Netha_Hussain
5999 https://en.wikipedia.org/wiki/User:Netheril96
6000 https://en.wikipedia.org/wiki/User:Nethgirb
6001 https://en.wikipedia.org/wiki/User:Netoholic
6002 https://en.wikipedia.org/w/index.php%3ftitle=User:Netojinn&action=edit&redlink=1
6003 https://en.wikipedia.org/wiki/User:Netrapt
6004 https://en.wikipedia.org/wiki/User:Netvor
6005 https://en.wikipedia.org/wiki/User:Netzwerkerin
6006 https://en.wikipedia.org/w/index.php%3ftitle=User:Neupane_Pratik&action=edit&redlink=1
6007 https://en.wikipedia.org/wiki/User:Neuralwarp
6008 https://en.wikipedia.org/w/index.php%3ftitle=User:Neuralwiki&action=edit&redlink=1


1 Neurodino6009
30 Neurodivergent6010
1 Neurogeek6011
4 Neuromancer376012
1 Neuroneutron6013
1 Neutrality6014
3 Neutronstar26015
1 Neuviemeporte6016
1 Nevakee116017
13 Neverwinterx6018
1 NevilleDNZ6019
1 New Thought6020
7 NewEnglandYankee6021
1 NewTestLeper796022
1 Newfraferz876023
1 Newone6024
1 Newslinger6025
1 Newuserwiki6026
1 Neypot6027
1 Nezzima6028
1 Neøn6029
1 Nfm6030
1 Ngorade6031
2 Ngs1116032
5 Nguyen Thanh Quang6033

6009 https://en.wikipedia.org/w/index.php%3ftitle=User:Neurodino&action=edit&redlink=1
6010 https://en.wikipedia.org/wiki/User:Neurodivergent
6011 https://en.wikipedia.org/wiki/User:Neurogeek
6012 https://en.wikipedia.org/w/index.php%3ftitle=User:Neuromancer37&action=edit&redlink=1
6013 https://en.wikipedia.org/w/index.php%3ftitle=User:Neuroneutron&action=edit&redlink=1
6014 https://en.wikipedia.org/wiki/User:Neutrality
6015 https://en.wikipedia.org/wiki/User:Neutronstar2
6016 https://en.wikipedia.org/wiki/User:Neuviemeporte
6017 https://en.wikipedia.org/w/index.php%3ftitle=User:Nevakee11&action=edit&redlink=1
6018 https://en.wikipedia.org/wiki/User:Neverwinterx
6019 https://en.wikipedia.org/wiki/User:NevilleDNZ
6020 https://en.wikipedia.org/wiki/User:New_Thought
6021 https://en.wikipedia.org/wiki/User:NewEnglandYankee
6022 https://en.wikipedia.org/w/index.php%3ftitle=User:NewTestLeper79&action=edit&redlink=1
6023 https://en.wikipedia.org/wiki/User:Newfraferz87
6024 https://en.wikipedia.org/wiki/User:Newone
6025 https://en.wikipedia.org/wiki/User:Newslinger
6026 https://en.wikipedia.org/w/index.php%3ftitle=User:Newuserwiki&action=edit&redlink=1
6027 https://en.wikipedia.org/w/index.php%3ftitle=User:Neypot&action=edit&redlink=1
6028 https://en.wikipedia.org/w/index.php%3ftitle=User:Nezzima&action=edit&redlink=1
6029 https://en.wikipedia.org/wiki/User:Ne%25C3%25B8n
6030 https://en.wikipedia.org/wiki/User:Nfm
6031 https://en.wikipedia.org/w/index.php%3ftitle=User:Ngorade&action=edit&redlink=1
6032 https://en.wikipedia.org/w/index.php%3ftitle=User:Ngs111&action=edit&redlink=1
6033 https://en.wikipedia.org/wiki/User:Nguyen_Thanh_Quang


10 Nhinchey6034
1 Nialsh6035
5 Nibbio846036
1 NibrasWasTaken6037
1 NicApicella6038
12 Niceguyedc6039
2 Nichehole6040
2 Nicholasbishop6041
2 Nichtich~enwiki6042
2 Nick6043
6 Nick Levine6044
1 Nick Moyes6045
3 Nick twisper6046
1 NickCatal6047
1 NickGarvey6048
1 NickLewycky6049
2 NickShaforostoff6050
2 NickT9886051
1 NickW5576052
1 Nickaschenbach6053
2 Nickls6054
1 Nickmalik6055
2 Nickthequik6056
2 Nicktohzyu6057
102 NickyMcLean6058

6034 https://en.wikipedia.org/wiki/User:Nhinchey
6035 https://en.wikipedia.org/wiki/User:Nialsh
6036 https://en.wikipedia.org/wiki/User:Nibbio84
6037 https://en.wikipedia.org/wiki/User:NibrasWasTaken
6038 https://en.wikipedia.org/wiki/User:NicApicella
6039 https://en.wikipedia.org/wiki/User:Niceguyedc
6040 https://en.wikipedia.org/w/index.php%3ftitle=User:Nichehole&action=edit&redlink=1
6041 https://en.wikipedia.org/w/index.php%3ftitle=User:Nicholasbishop&action=edit&redlink=1
6042 https://en.wikipedia.org/wiki/User:Nichtich~enwiki
6043 https://en.wikipedia.org/wiki/User:Nick
6044 https://en.wikipedia.org/wiki/User:Nick_Levine
6045 https://en.wikipedia.org/wiki/User:Nick_Moyes
6046 https://en.wikipedia.org/w/index.php%3ftitle=User:Nick_twisper&action=edit&redlink=1
6047 https://en.wikipedia.org/wiki/User:NickCatal
6048 https://en.wikipedia.org/wiki/User:NickGarvey
6049 https://en.wikipedia.org/w/index.php%3ftitle=User:NickLewycky&action=edit&redlink=1
6050 https://en.wikipedia.org/wiki/User:NickShaforostoff
6051 https://en.wikipedia.org/w/index.php%3ftitle=User:NickT988&action=edit&redlink=1
6052 https://en.wikipedia.org/wiki/User:NickW557
6053 https://en.wikipedia.org/w/index.php%3ftitle=User:Nickaschenbach&action=edit&redlink=1
6054 https://en.wikipedia.org/wiki/User:Nickls
6055 https://en.wikipedia.org/wiki/User:Nickmalik
6056 https://en.wikipedia.org/w/index.php%3ftitle=User:Nickthequik&action=edit&redlink=1
6057 https://en.wikipedia.org/w/index.php%3ftitle=User:Nicktohzyu&action=edit&redlink=1
6058 https://en.wikipedia.org/w/index.php%3ftitle=User:NickyMcLean&action=edit&redlink=1


2 Nicmcd6059
1 Nicolaennio6060
3 Nicolasbock6061
1 Nicolaum6062
2 Nicovank6063
2 Nigellwh6064
3 Night Gyr6065
1 Nightkhaos6066
1 Nightscream6067
1 Nigos6068
3 Nihiltres6069
2 Nihlus6070
1 NihlusBOT6071
2 Nihola6072
1 Nik 0 06073
2 Nik-t6074
4 Nikai6075
1 Nikevcowsky6076
2 Nikhil.cs6077
1 Nikhilneo16078
2 Nikhilvavs6079
3 Nikkimaria6080
1 Nikola Smolenski6081
1 Nile6082
1 Nillerdk6083

6059 https://en.wikipedia.org/w/index.php%3ftitle=User:Nicmcd&action=edit&redlink=1
6060 https://en.wikipedia.org/wiki/User:Nicolaennio
6061 https://en.wikipedia.org/w/index.php%3ftitle=User:Nicolasbock&action=edit&redlink=1
6062 https://en.wikipedia.org/wiki/User:Nicolaum
6063 https://en.wikipedia.org/w/index.php%3ftitle=User:Nicovank&action=edit&redlink=1
6064 https://en.wikipedia.org/w/index.php%3ftitle=User:Nigellwh&action=edit&redlink=1
6065 https://en.wikipedia.org/wiki/User:Night_Gyr
6066 https://en.wikipedia.org/wiki/User:Nightkhaos
6067 https://en.wikipedia.org/wiki/User:Nightscream
6068 https://en.wikipedia.org/wiki/User:Nigos
6069 https://en.wikipedia.org/wiki/User:Nihiltres
6070 https://en.wikipedia.org/wiki/User:Nihlus
6071 https://en.wikipedia.org/wiki/User:NihlusBOT
6072 https://en.wikipedia.org/wiki/User:Nihola
6073 https://en.wikipedia.org/w/index.php%3ftitle=User:Nik_0_0&action=edit&redlink=1
6074 https://en.wikipedia.org/wiki/User:Nik-t
6075 https://en.wikipedia.org/wiki/User:Nikai
6076 https://en.wikipedia.org/w/index.php%3ftitle=User:Nikevcowsky&action=edit&redlink=1
6077 https://en.wikipedia.org/wiki/User:Nikhil.cs
6078 https://en.wikipedia.org/w/index.php%3ftitle=User:Nikhilneo1&action=edit&redlink=1
6079 https://en.wikipedia.org/w/index.php%3ftitle=User:Nikhilvavs&action=edit&redlink=1
6080 https://en.wikipedia.org/wiki/User:Nikkimaria
6081 https://en.wikipedia.org/wiki/User:Nikola_Smolenski
6082 https://en.wikipedia.org/wiki/User:Nile
6083 https://en.wikipedia.org/wiki/User:Nillerdk


1 Niloofar piroozi6084
70 Nils Grimsmo6085
1 Nils schmidt hamburg6086
2 Nima1016087
2 Nina Cerutti6088
1 Ninaddb6089
1 Ninenines6090
2 Niner9116091
1 NinjaRobotPirate6092
2 Ninjagecko6093
1 Ninjakannon6094
2 Ninjalectual6095
3 Ninjatummen~enwiki6096
4 Ninly6097
2 Nippashish6098
2 Niraj Aher6099
2 NisansaDdS6100
1 Nisargtanna6101
1 Nish00096102
4 Nishaniayola6103
1 Nishant ingle6104
1 Nishantjr6105
1 Nishantsny6106
1 Nissenbenyitskhak6107
1 Nitayj6108

6084 https://en.wikipedia.org/w/index.php%3ftitle=User:Niloofar_piroozi&action=edit&redlink=1
6085 https://en.wikipedia.org/wiki/User:Nils_Grimsmo
6086 https://en.wikipedia.org/wiki/User:Nils_schmidt_hamburg
6087 https://en.wikipedia.org/wiki/User:Nima101
6088 https://en.wikipedia.org/w/index.php%3ftitle=User:Nina_Cerutti&action=edit&redlink=1
6089 https://en.wikipedia.org/w/index.php%3ftitle=User:Ninaddb&action=edit&redlink=1
6090 https://en.wikipedia.org/w/index.php%3ftitle=User:Ninenines&action=edit&redlink=1
6091 https://en.wikipedia.org/w/index.php%3ftitle=User:Niner911&action=edit&redlink=1
6092 https://en.wikipedia.org/wiki/User:NinjaRobotPirate
6093 https://en.wikipedia.org/wiki/User:Ninjagecko
6094 https://en.wikipedia.org/wiki/User:Ninjakannon
6095 https://en.wikipedia.org/w/index.php%3ftitle=User:Ninjalectual&action=edit&redlink=1
6096 https://en.wikipedia.org/w/index.php%3ftitle=User:Ninjatummen~enwiki&action=edit&redlink=1
6097 https://en.wikipedia.org/wiki/User:Ninly
6098 https://en.wikipedia.org/wiki/User:Nippashish
6099 https://en.wikipedia.org/w/index.php%3ftitle=User:Niraj_Aher&action=edit&redlink=1
6100 https://en.wikipedia.org/wiki/User:NisansaDdS
6101 https://en.wikipedia.org/w/index.php%3ftitle=User:Nisargtanna&action=edit&redlink=1
6102 https://en.wikipedia.org/w/index.php%3ftitle=User:Nish0009&action=edit&redlink=1
6103 https://en.wikipedia.org/w/index.php%3ftitle=User:Nishaniayola&action=edit&redlink=1
6104 https://en.wikipedia.org/w/index.php%3ftitle=User:Nishant_ingle&action=edit&redlink=1
6105 https://en.wikipedia.org/wiki/User:Nishantjr
6106 https://en.wikipedia.org/w/index.php%3ftitle=User:Nishantsny&action=edit&redlink=1
6107 https://en.wikipedia.org/w/index.php%3ftitle=User:Nissenbenyitskhak&action=edit&redlink=1
6108 https://en.wikipedia.org/w/index.php%3ftitle=User:Nitayj&action=edit&redlink=1


1 Nitefood6109
1 Niteshj6110
1 Nith Sahor6111
1 Nithin6112
1 Nithin.A.P6113
1 NitishP6114
1 Nitishch6115
3 Nitishkorula6116
21 Nixdorf6117
2 Nixeagle6118
2 Nizonstolz6119
1 Njahnke6120
1 Njanani6121
2 NjardarBot6122
1 Njol6123
1 Nkarthiks6124
2 Nknight6125
4 Nkojuharov6126
1 Nl746127
2 Nmndlr6128
11 Nmnogueira6129
1 Nnarasimhakaushik6130
13 Nneonneo6131
1 Nnomi6132
1 Nnp6133

6109 https://en.wikipedia.org/wiki/User:Nitefood
6110 https://en.wikipedia.org/w/index.php%3ftitle=User:Niteshj&action=edit&redlink=1
6111 https://en.wikipedia.org/w/index.php%3ftitle=User:Nith_Sahor&action=edit&redlink=1
6112 https://en.wikipedia.org/wiki/User:Nithin
6113 https://en.wikipedia.org/w/index.php%3ftitle=User:Nithin.A.P&action=edit&redlink=1
6114 https://en.wikipedia.org/w/index.php%3ftitle=User:NitishP&action=edit&redlink=1
6115 https://en.wikipedia.org/wiki/User:Nitishch
6116 https://en.wikipedia.org/wiki/User:Nitishkorula
6117 https://en.wikipedia.org/wiki/User:Nixdorf
6118 https://en.wikipedia.org/wiki/User:Nixeagle
6119 https://en.wikipedia.org/wiki/User:Nizonstolz
6120 https://en.wikipedia.org/wiki/User:Njahnke
6121 https://en.wikipedia.org/w/index.php%3ftitle=User:Njanani&action=edit&redlink=1
6122 https://en.wikipedia.org/wiki/User:NjardarBot
6123 https://en.wikipedia.org/wiki/User:Njol
6124 https://en.wikipedia.org/w/index.php%3ftitle=User:Nkarthiks&action=edit&redlink=1
6125 https://en.wikipedia.org/wiki/User:Nknight
6126 https://en.wikipedia.org/wiki/User:Nkojuharov
6127 https://en.wikipedia.org/wiki/User:Nl74
6128 https://en.wikipedia.org/w/index.php%3ftitle=User:Nmndlr&action=edit&redlink=1
6129 https://en.wikipedia.org/wiki/User:Nmnogueira
6130 https://en.wikipedia.org/w/index.php%3ftitle=User:Nnarasimhakaushik&action=edit&redlink=1
6131 https://en.wikipedia.org/wiki/User:Nneonneo
6132 https://en.wikipedia.org/w/index.php%3ftitle=User:Nnomi&action=edit&redlink=1
6133 https://en.wikipedia.org/wiki/User:Nnp


1 NoSoulApophis6134
4 NoToleranceForIntolerance6135
1 Noaa6136
5 Noah Salzman6137
2 Noam feinstein6138
1 Noamz6139
3 Nobbie6140
1 Nocrno6141
1 Noformation6142
5 Nog336143
8 Nohat6144
7 Noisylo656145
1 Nokib Sarkar6146
1 Nolandda6147
6 Noldoaran6148
2 NoldorinElf6149
2 Noloader6150
1 NomePang6151
49 Nomen4Omen6152
1 Nomis806153
2 Nonamea7746154
5 Nonenmac6155
1 Nonforma6156
1 NongBot~enwiki6157
1 Nonhermitian6158

6134 https://en.wikipedia.org/w/index.php%3ftitle=User:NoSoulApophis&action=edit&redlink=1
6135 https://en.wikipedia.org/wiki/User:NoToleranceForIntolerance
6136 https://en.wikipedia.org/wiki/User:Noaa
6137 https://en.wikipedia.org/wiki/User:Noah_Salzman
6138 https://en.wikipedia.org/w/index.php%3ftitle=User:Noam_feinstein&action=edit&redlink=1
6139 https://en.wikipedia.org/wiki/User:Noamz
6140 https://en.wikipedia.org/wiki/User:Nobbie
6141 https://en.wikipedia.org/w/index.php%3ftitle=User:Nocrno&action=edit&redlink=1
6142 https://en.wikipedia.org/wiki/User:Noformation
6143 https://en.wikipedia.org/wiki/User:Nog33
6144 https://en.wikipedia.org/wiki/User:Nohat
6145 https://en.wikipedia.org/w/index.php%3ftitle=User:Noisylo65&action=edit&redlink=1
6146 https://en.wikipedia.org/wiki/User:Nokib_Sarkar
6147 https://en.wikipedia.org/wiki/User:Nolandda
6148 https://en.wikipedia.org/wiki/User:Noldoaran
6149 https://en.wikipedia.org/w/index.php%3ftitle=User:NoldorinElf&action=edit&redlink=1
6150 https://en.wikipedia.org/wiki/User:Noloader
6151 https://en.wikipedia.org/w/index.php%3ftitle=User:NomePang&action=edit&redlink=1
6152 https://en.wikipedia.org/w/index.php%3ftitle=User:Nomen4Omen&action=edit&redlink=1
6153 https://en.wikipedia.org/w/index.php%3ftitle=User:Nomis80&action=edit&redlink=1
6154 https://en.wikipedia.org/wiki/User:Nonamea774
6155 https://en.wikipedia.org/wiki/User:Nonenmac
6156 https://en.wikipedia.org/wiki/User:Nonforma
6157 https://en.wikipedia.org/wiki/User:NongBot~enwiki
6158 https://en.wikipedia.org/w/index.php%3ftitle=User:Nonhermitian&action=edit&redlink=1


1 Nonsanity6159
1 Nonsenseferret6160
1 Nooby person16161
1 Noosentaal6162
1 Norbornene6163
1 Nordald6164
1 Norgas6165
1 Norlesh6166
5 Norm mit6167
1 NormDor6168
1 Nornagon~enwiki6169
2 Northamerica10006170
1 Northumbrian6171
1 Nosbig6172
1 Nosebagbear6173
2 Nostalgius6174
1 NostinAdrek6175
2 Not-just-yeti6176
2 NotARusski6177
2 NotAnonymous06178
1 NotQuiteEXPComplete6179
4 Notheruser6180
2 Nothing12126181
4 Notmyjob886182
1 Notsniwiast6183

6159 https://en.wikipedia.org/wiki/User:Nonsanity
6160 https://en.wikipedia.org/wiki/User:Nonsenseferret
6161 https://en.wikipedia.org/wiki/User:Nooby_person1
6162 https://en.wikipedia.org/wiki/User:Noosentaal
6163 https://en.wikipedia.org/wiki/User:Norbornene
6164 https://en.wikipedia.org/wiki/User:Nordald
6165 https://en.wikipedia.org/w/index.php%3ftitle=User:Norgas&action=edit&redlink=1
6166 https://en.wikipedia.org/w/index.php%3ftitle=User:Norlesh&action=edit&redlink=1
6167 https://en.wikipedia.org/wiki/User:Norm_mit
6168 https://en.wikipedia.org/wiki/User:NormDor
6169 https://en.wikipedia.org/w/index.php%3ftitle=User:Nornagon~enwiki&action=edit&redlink=1
6170 https://en.wikipedia.org/wiki/User:Northamerica1000
6171 https://en.wikipedia.org/wiki/User:Northumbrian
6172 https://en.wikipedia.org/wiki/User:Nosbig
6173 https://en.wikipedia.org/wiki/User:Nosebagbear
6174 https://en.wikipedia.org/w/index.php%3ftitle=User:Nostalgius&action=edit&redlink=1
6175 https://en.wikipedia.org/wiki/User:NostinAdrek
6176 https://en.wikipedia.org/wiki/User:Not-just-yeti
6177 https://en.wikipedia.org/wiki/User:NotARusski
6178 https://en.wikipedia.org/wiki/User:NotAnonymous0
6179 https://en.wikipedia.org/wiki/User:NotQuiteEXPComplete
6180 https://en.wikipedia.org/wiki/User:Notheruser
6181 https://en.wikipedia.org/wiki/User:Nothing1212
6182 https://en.wikipedia.org/w/index.php%3ftitle=User:Notmyjob88&action=edit&redlink=1
6183 https://en.wikipedia.org/wiki/User:Notsniwiast


1 NottNott6184
1 NovaDog6185
2 Novas0x2a6186
1 NovellGuy6187
1 Novwik6188
4 Nowhere man6189
2 Nr96190
1 Nsheth176191
1 Nskernel6192
1 Nskillen6193
2 Nsrao2k6194
2 Nthns436195
3 Ntsimp6196
1 NubKnacker6197
3 NuclearWarfare6198
2 Nucleosynth6199
2 Nuggetboy6200
7 Nullzero6201
3 Number7746202
3 Numbermaniac6203
2 Numbo3-bot6204
10 Nuno Tavares6205
1 Nurbudapest6206
5 Nutcracker20076207
1 Nutster6208

6184 https://en.wikipedia.org/wiki/User:NottNott
6185 https://en.wikipedia.org/wiki/User:NovaDog
6186 https://en.wikipedia.org/w/index.php%3ftitle=User:Novas0x2a&action=edit&redlink=1
6187 https://en.wikipedia.org/wiki/User:NovellGuy
6188 https://en.wikipedia.org/wiki/User:Novwik
6189 https://en.wikipedia.org/wiki/User:Nowhere_man
6190 https://en.wikipedia.org/wiki/User:Nr9
6191 https://en.wikipedia.org/w/index.php%3ftitle=User:Nsheth17&action=edit&redlink=1
6192 https://en.wikipedia.org/w/index.php%3ftitle=User:Nskernel&action=edit&redlink=1
6193 https://en.wikipedia.org/wiki/User:Nskillen
6194 https://en.wikipedia.org/w/index.php%3ftitle=User:Nsrao2k&action=edit&redlink=1
6195 https://en.wikipedia.org/w/index.php%3ftitle=User:Nthns43&action=edit&redlink=1
6196 https://en.wikipedia.org/wiki/User:Ntsimp
6197 https://en.wikipedia.org/wiki/User:NubKnacker
6198 https://en.wikipedia.org/wiki/User:NuclearWarfare
6199 https://en.wikipedia.org/wiki/User:Nucleosynth
6200 https://en.wikipedia.org/wiki/User:Nuggetboy
6201 https://en.wikipedia.org/wiki/User:Nullzero
6202 https://en.wikipedia.org/wiki/User:Number774
6203 https://en.wikipedia.org/wiki/User:Numbermaniac
6204 https://en.wikipedia.org/wiki/User:Numbo3-bot
6205 https://en.wikipedia.org/wiki/User:Nuno_Tavares
6206 https://en.wikipedia.org/wiki/User:Nurbudapest
6207 https://en.wikipedia.org/w/index.php%3ftitle=User:Nutcracker2007&action=edit&redlink=1
6208 https://en.wikipedia.org/wiki/User:Nutster

1 Nux6209
1 Nuxnut6210
4 Nvrmnd6211
1 Nwalton1256212
1 Nwbeeson6213
1 Nwerneck6214
16 Nxavar6215
4 Nyenyec6216
2 Nyh6217
3 Nyook6218
9 Nyq6219
2 Nyttend6220
2 Nyxos6221
1 Nøkkenbuer6222
1 O Pavlos6223
1 O(n log n)6224
4 O.Koslowski6225
1 O1ive6226
80 OAbot6227
5 OKBot6228
3 OMurgo6229
2 OVooVo6230
1 Oaf26231
1 Oakwood6232
2 Obakeneko6233

6209 https://en.wikipedia.org/wiki/User:Nux
6210 https://en.wikipedia.org/w/index.php%3ftitle=User:Nuxnut&action=edit&redlink=1
6211 https://en.wikipedia.org/wiki/User:Nvrmnd
6212 https://en.wikipedia.org/w/index.php%3ftitle=User:Nwalton125&action=edit&redlink=1
6213 https://en.wikipedia.org/wiki/User:Nwbeeson
6214 https://en.wikipedia.org/wiki/User:Nwerneck
6215 https://en.wikipedia.org/w/index.php%3ftitle=User:Nxavar&action=edit&redlink=1
6216 https://en.wikipedia.org/wiki/User:Nyenyec
6217 https://en.wikipedia.org/wiki/User:Nyh
6218 https://en.wikipedia.org/wiki/User:Nyook
6219 https://en.wikipedia.org/wiki/User:Nyq
6220 https://en.wikipedia.org/wiki/User:Nyttend
6221 https://en.wikipedia.org/wiki/User:Nyxos
6222 https://en.wikipedia.org/wiki/User:N%25C3%25B8kkenbuer
6223 https://en.wikipedia.org/wiki/User:O_Pavlos
6224 https://en.wikipedia.org/w/index.php%3ftitle=User:O(n_log_n)&action=edit&redlink=1
6225 https://en.wikipedia.org/wiki/User:O.Koslowski
6226 https://en.wikipedia.org/wiki/User:O1ive
6227 https://en.wikipedia.org/wiki/User:OAbot
6228 https://en.wikipedia.org/wiki/User:OKBot
6229 https://en.wikipedia.org/w/index.php%3ftitle=User:OMurgo&action=edit&redlink=1
6230 https://en.wikipedia.org/w/index.php%3ftitle=User:OVooVo&action=edit&redlink=1
6231 https://en.wikipedia.org/w/index.php%3ftitle=User:Oaf2&action=edit&redlink=1
6232 https://en.wikipedia.org/wiki/User:Oakwood
6233 https://en.wikipedia.org/wiki/User:Obakeneko

2 Obankston6234
1 Oberiko6235
3 Obersachsebot6236
1 ObfuscatePenguin6237
1 Obina6238
2 Oblatenhaller6239
1 Obli6240
16 Obradovic Goran6241
3 Obscurans6242
4 Obscuranym6243
2 Occamster6244
4 OckRaz6245
2 Ocli55686246
3 Ocolon6247
8 Octahedron806248
1 Octalc0de6249
2 Octavdruta6250
8 Octotron6251
1 Od Mishehu6252
1 Oddbod76253
3 Oddity-6254
1 Oecology6255
1 Oeihiko6256
8 Oeoi6257
1 Oerjan6258

6234 https://en.wikipedia.org/wiki/User:Obankston
6235 https://en.wikipedia.org/wiki/User:Oberiko
6236 https://en.wikipedia.org/wiki/User:Obersachsebot
6237 https://en.wikipedia.org/wiki/User:ObfuscatePenguin
6238 https://en.wikipedia.org/wiki/User:Obina
6239 https://en.wikipedia.org/w/index.php%3ftitle=User:Oblatenhaller&action=edit&redlink=1
6240 https://en.wikipedia.org/w/index.php%3ftitle=User:Obli&action=edit&redlink=1
6241 https://en.wikipedia.org/wiki/User:Obradovic_Goran
6242 https://en.wikipedia.org/wiki/User:Obscurans
6243 https://en.wikipedia.org/w/index.php%3ftitle=User:Obscuranym&action=edit&redlink=1
6244 https://en.wikipedia.org/w/index.php%3ftitle=User:Occamster&action=edit&redlink=1
6245 https://en.wikipedia.org/wiki/User:OckRaz
6246 https://en.wikipedia.org/w/index.php%3ftitle=User:Ocli5568&action=edit&redlink=1
6247 https://en.wikipedia.org/wiki/User:Ocolon
6248 https://en.wikipedia.org/wiki/User:Octahedron80
6249 https://en.wikipedia.org/wiki/User:Octalc0de
6250 https://en.wikipedia.org/w/index.php%3ftitle=User:Octavdruta&action=edit&redlink=1
6251 https://en.wikipedia.org/wiki/User:Octotron
6252 https://en.wikipedia.org/wiki/User:Od_Mishehu
6253 https://en.wikipedia.org/w/index.php%3ftitle=User:Oddbod7&action=edit&redlink=1
6254 https://en.wikipedia.org/wiki/User:Oddity-
6255 https://en.wikipedia.org/wiki/User:Oecology
6256 https://en.wikipedia.org/w/index.php%3ftitle=User:Oeihiko&action=edit&redlink=1
6257 https://en.wikipedia.org/wiki/User:Oeoi
6258 https://en.wikipedia.org/wiki/User:Oerjan

5 OfekRon6259
3 Officialhopsof6260
1 Ogai6261
1 OgreBot6262
1 Ohad trabelsi6263
4 Ohconfucius6264
42 Ohnoitsjamie6265
2 Oicumayberight6266
1 Oisguad6267
1 Ojan6268
1 Okosenkov6269
2 OlEnglish6270
1 Olaitanroberts6271
1 Olaolaola~enwiki6272
7 Olathe6273
1 Old Naval Rooftops6274
2 OldManNIck6275
1 Oleaster6276
76 Oleg Alexandrov6277
3 Oleksandr Skliarchuk6278
182 Oli Filth6279
2 OliAtlason6280
29 Oliphaunt6281
1 OlivePasta6282

6259 https://en.wikipedia.org/w/index.php%3ftitle=User:OfekRon&action=edit&redlink=1
6260 https://en.wikipedia.org/w/index.php%3ftitle=User:Officialhopsof&action=edit&redlink=1
6261 https://en.wikipedia.org/wiki/User:Ogai
6262 https://en.wikipedia.org/wiki/User:OgreBot
6263 https://en.wikipedia.org/w/index.php%3ftitle=User:Ohad_trabelsi&action=edit&redlink=1
6264 https://en.wikipedia.org/wiki/User:Ohconfucius
6265 https://en.wikipedia.org/wiki/User:Ohnoitsjamie
6266 https://en.wikipedia.org/wiki/User:Oicumayberight
6267 https://en.wikipedia.org/w/index.php%3ftitle=User:Oisguad&action=edit&redlink=1
6268 https://en.wikipedia.org/wiki/User:Ojan
6269 https://en.wikipedia.org/w/index.php%3ftitle=User:Okosenkov&action=edit&redlink=1
6270 https://en.wikipedia.org/wiki/User:OlEnglish
6271 https://en.wikipedia.org/w/index.php%3ftitle=User:Olaitanroberts&action=edit&redlink=1
6272 https://en.wikipedia.org/w/index.php%3ftitle=User:Olaolaola~enwiki&action=edit&redlink=1
6273 https://en.wikipedia.org/wiki/User:Olathe
6274 https://en.wikipedia.org/wiki/User:Old_Naval_Rooftops
6275 https://en.wikipedia.org/w/index.php%3ftitle=User:OldManNIck&action=edit&redlink=1
6276 https://en.wikipedia.org/wiki/User:Oleaster
6277 https://en.wikipedia.org/wiki/User:Oleg_Alexandrov
6278 https://en.wikipedia.org/w/index.php%3ftitle=User:Oleksandr_Skliarchuk&action=edit&redlink=1
6279 https://en.wikipedia.org/wiki/User:Oli_Filth
6280 https://en.wikipedia.org/wiki/User:OliAtlason
6281 https://en.wikipedia.org/wiki/User:Oliphaunt
6282 https://en.wikipedia.org/w/index.php%3ftitle=User:OlivePasta&action=edit&redlink=1

2 Olivernina6283
1 Oliver~enwiki6284
1 OliviaGuest6285
5 Olivier6286
1 Olivier Debre6287
1 Ollie3146288
6 Ollieinc6289
1 Ollj6290
1 Olsen-Fan6291
1 Om Sao6292
1 OmarEmaraDev6293
1 Omaramin19926294
2 Omargamil6295
1 OmegaTwiddle6296
2 Omegatron6297
1 Omer Abu El Haija6298
1 Omgigotanaccount6299
1 Omicron186300
2 Omicronpersei86301
1 Omkar2111966302
15 Omnipaedista6303
1 Omri.mor6304
2 OmriSegal6305
2 Onco p536306
1 Ondra.pelech6307

6283 https://en.wikipedia.org/w/index.php%3ftitle=User:Olivernina&action=edit&redlink=1
6284 https://en.wikipedia.org/wiki/User:Oliver~enwiki
6285 https://en.wikipedia.org/wiki/User:OliviaGuest
6286 https://en.wikipedia.org/wiki/User:Olivier
6287 https://en.wikipedia.org/wiki/User:Olivier_Debre
6288 https://en.wikipedia.org/w/index.php%3ftitle=User:Ollie314&action=edit&redlink=1
6289 https://en.wikipedia.org/wiki/User:Ollieinc
6290 https://en.wikipedia.org/wiki/User:Ollj
6291 https://en.wikipedia.org/wiki/User:Olsen-Fan
6292 https://en.wikipedia.org/w/index.php%3ftitle=User:Om_Sao&action=edit&redlink=1
6293 https://en.wikipedia.org/w/index.php%3ftitle=User:OmarEmaraDev&action=edit&redlink=1
6294 https://en.wikipedia.org/w/index.php%3ftitle=User:Omaramin1992&action=edit&redlink=1
6295 https://en.wikipedia.org/w/index.php%3ftitle=User:Omargamil&action=edit&redlink=1
6296 https://en.wikipedia.org/w/index.php%3ftitle=User:OmegaTwiddle&action=edit&redlink=1
6297 https://en.wikipedia.org/wiki/User:Omegatron
6298 https://en.wikipedia.org/w/index.php%3ftitle=User:Omer_Abu_El_Haija&action=edit&redlink=1
6299 https://en.wikipedia.org/w/index.php%3ftitle=User:Omgigotanaccount&action=edit&redlink=1
6300 https://en.wikipedia.org/wiki/User:Omicron18
6301 https://en.wikipedia.org/wiki/User:Omicronpersei8
6302 https://en.wikipedia.org/w/index.php%3ftitle=User:Omkar211196&action=edit&redlink=1
6303 https://en.wikipedia.org/wiki/User:Omnipaedista
6304 https://en.wikipedia.org/w/index.php%3ftitle=User:Omri.mor&action=edit&redlink=1
6305 https://en.wikipedia.org/w/index.php%3ftitle=User:OmriSegal&action=edit&redlink=1
6306 https://en.wikipedia.org/wiki/User:Onco_p53
6307 https://en.wikipedia.org/w/index.php%3ftitle=User:Ondra.pelech&action=edit&redlink=1

1 One half 35446308
2 OnePt6186309
2 Oneappletwoideas6310
5 Onel59696311
3 Oni Lukos6312
1 Onkaratnitk6313
1 Only2sea6314
2 Onomou6315
2 Onorem6316
1 Ontarioboy6317
2 Ontariolot6318
3 OoS6319
3 OpenToppedBus6320
7 Opencooper6321
1 Opera fera6322
5 Opium6323
1 Opraveen6324
2 Optakeover6325
1 Optigan136326
12 Optikos6327
4 Optim6328
2 OranL6329
4 Orange Suede Sofa6330
1 OrangeDog6331
2 Oravec6332

6308 https://en.wikipedia.org/wiki/User:One_half_3544
6309 https://en.wikipedia.org/wiki/User:OnePt618
6310 https://en.wikipedia.org/w/index.php%3ftitle=User:Oneappletwoideas&action=edit&redlink=1
6311 https://en.wikipedia.org/wiki/User:Onel5969
6312 https://en.wikipedia.org/wiki/User:Oni_Lukos
6313 https://en.wikipedia.org/w/index.php%3ftitle=User:Onkaratnitk&action=edit&redlink=1
6314 https://en.wikipedia.org/wiki/User:Only2sea
6315 https://en.wikipedia.org/wiki/User:Onomou
6316 https://en.wikipedia.org/w/index.php%3ftitle=User:Onorem&action=edit&redlink=1
6317 https://en.wikipedia.org/wiki/User:Ontarioboy
6318 https://en.wikipedia.org/w/index.php%3ftitle=User:Ontariolot&action=edit&redlink=1
6319 https://en.wikipedia.org/wiki/User:OoS
6320 https://en.wikipedia.org/wiki/User:OpenToppedBus
6321 https://en.wikipedia.org/wiki/User:Opencooper
6322 https://en.wikipedia.org/wiki/User:Opera_fera
6323 https://en.wikipedia.org/wiki/User:Opium
6324 https://en.wikipedia.org/w/index.php%3ftitle=User:Opraveen&action=edit&redlink=1
6325 https://en.wikipedia.org/wiki/User:Optakeover
6326 https://en.wikipedia.org/wiki/User:Optigan13
6327 https://en.wikipedia.org/wiki/User:Optikos
6328 https://en.wikipedia.org/wiki/User:Optim
6329 https://en.wikipedia.org/wiki/User:OranL
6330 https://en.wikipedia.org/wiki/User:Orange_Suede_Sofa
6331 https://en.wikipedia.org/wiki/User:OrangeDog
6332 https://en.wikipedia.org/wiki/User:Oravec

1 Orbst6333
1 Orca4566334
3 Orenburg16335
1 Orendona6336
1 Orenmn6337
1 Orenzah6338
1 Orfest6339
1 OrgasGirl6340
1 Orielno6341
1 Orkybash6342
1 Oronsay6343
1 Oroso6344
1 Orphan Wiki6345
7 OrphanBot6346
2 Orthogeek6347
1 Orubt6348
1 Orz6349
1 Orzechowskid6350
1 Oshkosher6351
31 Oshwah6352
2 Osias6353
1 Oskar Flordal6354
19 Oskar Sigvardsson6355
1 OsmanRF346356
1 Osric6357

6333 https://en.wikipedia.org/wiki/User:Orbst
6334 https://en.wikipedia.org/w/index.php%3ftitle=User:Orca456&action=edit&redlink=1
6335 https://en.wikipedia.org/wiki/User:Orenburg1
6336 https://en.wikipedia.org/wiki/User:Orendona
6337 https://en.wikipedia.org/w/index.php%3ftitle=User:Orenmn&action=edit&redlink=1
6338 https://en.wikipedia.org/w/index.php%3ftitle=User:Orenzah&action=edit&redlink=1
6339 https://en.wikipedia.org/w/index.php%3ftitle=User:Orfest&action=edit&redlink=1
6340 https://en.wikipedia.org/w/index.php%3ftitle=User:OrgasGirl&action=edit&redlink=1
6341 https://en.wikipedia.org/wiki/User:Orielno
6342 https://en.wikipedia.org/w/index.php%3ftitle=User:Orkybash&action=edit&redlink=1
6343 https://en.wikipedia.org/wiki/User:Oronsay
6344 https://en.wikipedia.org/wiki/User:Oroso
6345 https://en.wikipedia.org/wiki/User:Orphan_Wiki
6346 https://en.wikipedia.org/wiki/User:OrphanBot
6347 https://en.wikipedia.org/w/index.php%3ftitle=User:Orthogeek&action=edit&redlink=1
6348 https://en.wikipedia.org/w/index.php%3ftitle=User:Orubt&action=edit&redlink=1
6349 https://en.wikipedia.org/wiki/User:Orz
6350 https://en.wikipedia.org/wiki/User:Orzechowskid
6351 https://en.wikipedia.org/w/index.php%3ftitle=User:Oshkosher&action=edit&redlink=1
6352 https://en.wikipedia.org/wiki/User:Oshwah
6353 https://en.wikipedia.org/wiki/User:Osias
6354 https://en.wikipedia.org/wiki/User:Oskar_Flordal
6355 https://en.wikipedia.org/wiki/User:Oskar_Sigvardsson
6356 https://en.wikipedia.org/wiki/User:OsmanRF34
6357 https://en.wikipedia.org/wiki/User:Osric

2 Ossi~enwiki6358
4 Ost3166359
1 Ostomachion6360
1 Oszkar006361
2 Otisjimmy16362
39 Ott26363
1 OtterSmith6364
1 Ottoshmidt6365
4 Otus6366
1 Ouishoebean6367
1 Outraged duck6368
4 Outriggr6369
8 OverlordQ6370
2 Owen6371
3 Owen2146372
4 OwenX6373
1 Owenjonesuk6374
2 Owl-syme6375
3 Owlf6376
4 Owlstead6377
12 Oxaric6378
1 Oxfordwang6379
1 Oylenshpeegul6380
1 Ozziev6381
13 Oğuz Ergin6382

6358 https://en.wikipedia.org/w/index.php%3ftitle=User:Ossi~enwiki&action=edit&redlink=1
6359 https://en.wikipedia.org/wiki/User:Ost316
6360 https://en.wikipedia.org/w/index.php%3ftitle=User:Ostomachion&action=edit&redlink=1
6361 https://en.wikipedia.org/w/index.php%3ftitle=User:Oszkar00&action=edit&redlink=1
6362 https://en.wikipedia.org/wiki/User:Otisjimmy1
6363 https://en.wikipedia.org/wiki/User:Ott2
6364 https://en.wikipedia.org/wiki/User:OtterSmith
6365 https://en.wikipedia.org/w/index.php%3ftitle=User:Ottoshmidt&action=edit&redlink=1
6366 https://en.wikipedia.org/wiki/User:Otus
6367 https://en.wikipedia.org/wiki/User:Ouishoebean
6368 https://en.wikipedia.org/w/index.php%3ftitle=User:Outraged_duck&action=edit&redlink=1
6369 https://en.wikipedia.org/wiki/User:Outriggr
6370 https://en.wikipedia.org/wiki/User:OverlordQ
6371 https://en.wikipedia.org/wiki/User:Owen
6372 https://en.wikipedia.org/wiki/User:Owen214
6373 https://en.wikipedia.org/wiki/User:OwenX
6374 https://en.wikipedia.org/wiki/User:Owenjonesuk
6375 https://en.wikipedia.org/wiki/User:Owl-syme
6376 https://en.wikipedia.org/wiki/User:Owlf
6377 https://en.wikipedia.org/wiki/User:Owlstead
6378 https://en.wikipedia.org/w/index.php%3ftitle=User:Oxaric&action=edit&redlink=1
6379 https://en.wikipedia.org/wiki/User:Oxfordwang
6380 https://en.wikipedia.org/w/index.php%3ftitle=User:Oylenshpeegul&action=edit&redlink=1
6381 https://en.wikipedia.org/w/index.php%3ftitle=User:Ozziev&action=edit&redlink=1
6382 https://en.wikipedia.org/wiki/User:O%25C4%259Fuz_Ergin

2 P b19996383
2 P0nc6384
1 P3d4r06385
2 PA Math Prof6386
1 PAStheLoD6387
2 PDFbot6388
1 PGSONIC6389
1 PGWG6390
7 PIerre.Lescanne6391
1 PJ Cabral6392
1 PJY6393
1 PL2906394
1 PListing6395
1 PNattrass6396
1 PS.6397
3 PV=nRT6398
1 PWilkinson6399
2 Paanini6400
1 Pabix6401
4 Pablo.cl6402
4 Pacerier6403
2 PackMecEng6404
7 Paddu6405
3 Paddy31186406
1 PaePae6407

6383 https://en.wikipedia.org/wiki/User:P_b1999
6384 https://en.wikipedia.org/w/index.php%3ftitle=User:P0nc&action=edit&redlink=1
6385 https://en.wikipedia.org/w/index.php%3ftitle=User:P3d4r0&action=edit&redlink=1
6386 https://en.wikipedia.org/wiki/User:PA_Math_Prof
6387 https://en.wikipedia.org/wiki/User:PAStheLoD
6388 https://en.wikipedia.org/wiki/User:PDFbot
6389 https://en.wikipedia.org/wiki/User:PGSONIC
6390 https://en.wikipedia.org/wiki/User:PGWG
6391 https://en.wikipedia.org/w/index.php%3ftitle=User:PIerre.Lescanne&action=edit&redlink=1
6392 https://en.wikipedia.org/wiki/User:PJ_Cabral
6393 https://en.wikipedia.org/wiki/User:PJY
6394 https://en.wikipedia.org/wiki/User:PL290
6395 https://en.wikipedia.org/w/index.php%3ftitle=User:PListing&action=edit&redlink=1
6396 https://en.wikipedia.org/w/index.php%3ftitle=User:PNattrass&action=edit&redlink=1
6397 https://en.wikipedia.org/wiki/User:PS.
6398 https://en.wikipedia.org/wiki/User:PV%253DnRT
6399 https://en.wikipedia.org/wiki/User:PWilkinson
6400 https://en.wikipedia.org/w/index.php%3ftitle=User:Paanini&action=edit&redlink=1
6401 https://en.wikipedia.org/wiki/User:Pabix
6402 https://en.wikipedia.org/wiki/User:Pablo.cl
6403 https://en.wikipedia.org/wiki/User:Pacerier
6404 https://en.wikipedia.org/wiki/User:PackMecEng
6405 https://en.wikipedia.org/wiki/User:Paddu
6406 https://en.wikipedia.org/wiki/User:Paddy3118
6407 https://en.wikipedia.org/wiki/User:PaePae

13 Pagh6408
1 Paine Ellsworth6409
1 Painted Fox6410
1 Paintman6411
3 Pajz6412
2 Pakaraki6413
11 Pakaran6414
4 Pako6415
2 Pale blue dot6416
7 Palica6417
3 Palit Suchitto6418
3 Pamulapati6419
1 PanLevan6420
1 PanRagon6421
2 Panarchy6422
1 Panchobook6423
2 Pandemias6424
1 Panecasareccio6425
1 Pano386426
1 Panoptical6427
1 Panos19626428
1 Panu-Kristian Poiksalo6429
4 Panzi6430
3 Paolo Lipparini6431
1 Pappu00076432

6408 https://en.wikipedia.org/w/index.php%3ftitle=User:Pagh&action=edit&redlink=1
6409 https://en.wikipedia.org/wiki/User:Paine_Ellsworth
6410 https://en.wikipedia.org/w/index.php%3ftitle=User:Painted_Fox&action=edit&redlink=1
6411 https://en.wikipedia.org/wiki/User:Paintman
6412 https://en.wikipedia.org/wiki/User:Pajz
6413 https://en.wikipedia.org/wiki/User:Pakaraki
6414 https://en.wikipedia.org/wiki/User:Pakaran
6415 https://en.wikipedia.org/wiki/User:Pako
6416 https://en.wikipedia.org/wiki/User:Pale_blue_dot
6417 https://en.wikipedia.org/wiki/User:Palica
6418 https://en.wikipedia.org/w/index.php%3ftitle=User:Palit_Suchitto&action=edit&redlink=1
6419 https://en.wikipedia.org/w/index.php%3ftitle=User:Pamulapati&action=edit&redlink=1
6420 https://en.wikipedia.org/w/index.php%3ftitle=User:PanLevan&action=edit&redlink=1
6421 https://en.wikipedia.org/wiki/User:PanRagon
6422 https://en.wikipedia.org/w/index.php%3ftitle=User:Panarchy&action=edit&redlink=1
6423 https://en.wikipedia.org/wiki/User:Panchobook
6424 https://en.wikipedia.org/wiki/User:Pandemias
6425 https://en.wikipedia.org/w/index.php%3ftitle=User:Panecasareccio&action=edit&redlink=1
6426 https://en.wikipedia.org/wiki/User:Pano38
6427 https://en.wikipedia.org/wiki/User:Panoptical
6428 https://en.wikipedia.org/w/index.php%3ftitle=User:Panos1962&action=edit&redlink=1
6429 https://en.wikipedia.org/wiki/User:Panu-Kristian_Poiksalo
6430 https://en.wikipedia.org/wiki/User:Panzi
6431 https://en.wikipedia.org/wiki/User:Paolo_Lipparini
6432 https://en.wikipedia.org/w/index.php%3ftitle=User:Pappu0007&action=edit&redlink=1

1 PappyK6433
1 PaprikaDreams6434
12 ParAlgMergeSort6435
2 Para150006436
2 Paradoctor6437
1 Paradoxolog6438
1 Parakkum6439
1 Parallelized6440
1 Param Mudgal6441
2 Paranoid6442
1 Parcly Taxel6443
1 Parham ap6444
2 Parloviz6445
2 ParotWise6446
2 Parpaluck6447
2 Parsons.eu6448
1 Parudox6449
19 Pascal.Tesson6450
1 Pascal6666451
1 Pashley6452
6 Pashute6453
1 Pasixxxx6454
12 PatPeter6455
3 Patelhiren.1016456
2 Patelm6457

6433 https://en.wikipedia.org/w/index.php%3ftitle=User:PappyK&action=edit&redlink=1
6434 https://en.wikipedia.org/wiki/User:PaprikaDreams
6435 https://en.wikipedia.org/w/index.php%3ftitle=User:ParAlgMergeSort&action=edit&redlink=1
6436 https://en.wikipedia.org/w/index.php%3ftitle=User:Para15000&action=edit&redlink=1
6437 https://en.wikipedia.org/wiki/User:Paradoctor
6438 https://en.wikipedia.org/w/index.php%3ftitle=User:Paradoxolog&action=edit&redlink=1
6439 https://en.wikipedia.org/w/index.php%3ftitle=User:Parakkum&action=edit&redlink=1
6440 https://en.wikipedia.org/wiki/User:Parallelized
6441 https://en.wikipedia.org/wiki/User:Param_Mudgal
6442 https://en.wikipedia.org/wiki/User:Paranoid
6443 https://en.wikipedia.org/wiki/User:Parcly_Taxel
6444 https://en.wikipedia.org/w/index.php%3ftitle=User:Parham_ap&action=edit&redlink=1
6445 https://en.wikipedia.org/w/index.php%3ftitle=User:Parloviz&action=edit&redlink=1
6446 https://en.wikipedia.org/w/index.php%3ftitle=User:ParotWise&action=edit&redlink=1
6447 https://en.wikipedia.org/w/index.php%3ftitle=User:Parpaluck&action=edit&redlink=1
6448 https://en.wikipedia.org/wiki/User:Parsons.eu
6449 https://en.wikipedia.org/wiki/User:Parudox
6450 https://en.wikipedia.org/wiki/User:Pascal.Tesson
6451 https://en.wikipedia.org/wiki/User:Pascal666
6452 https://en.wikipedia.org/wiki/User:Pashley
6453 https://en.wikipedia.org/wiki/User:Pashute
6454 https://en.wikipedia.org/wiki/User:Pasixxxx
6455 https://en.wikipedia.org/wiki/User:PatPeter
6456 https://en.wikipedia.org/w/index.php%3ftitle=User:Patelhiren.101&action=edit&redlink=1
6457 https://en.wikipedia.org/w/index.php%3ftitle=User:Patelm&action=edit&redlink=1

21 Patmorin6458
1 Patr0rc6459
27 Patrick6460
1 Patrick Lucas6461
1 Patrick O'Leary6462
1 Patrick Roncagliolo6463
1 PatrickFisher6464
48 Paul August6465
1 Paul Carpenter 746466
4 Paul Ebermann6467
1 Paul G6468
2 Paul Koning6469
2 Paul Kube6470
1 Paul Mackay~enwiki6471
4 Paul Murray6472
2 Paul Richter6473
1 Paul Silverman6474
2 Paul.Ogilvie.nl6475
1 Paul.cabot6476
9 Paul25206477
1 PaulHoadley6478
1 PaulKeeperson6479
19 PaulTanenbaum6480
1 Pauldinhqd6481

6458 https://en.wikipedia.org/wiki/User:Patmorin
6459 https://en.wikipedia.org/w/index.php%3ftitle=User:Patr0rc&action=edit&redlink=1
6460 https://en.wikipedia.org/wiki/User:Patrick
6461 https://en.wikipedia.org/wiki/User:Patrick_Lucas
6462 https://en.wikipedia.org/wiki/User:Patrick_O%2527Leary
6463 https://en.wikipedia.org/w/index.php%3ftitle=User:Patrick_Roncagliolo&action=edit&redlink=1
6464 https://en.wikipedia.org/wiki/User:PatrickFisher
6465 https://en.wikipedia.org/wiki/User:Paul_August
6466 https://en.wikipedia.org/w/index.php%3ftitle=User:Paul_Carpenter_74&action=edit&redlink=1
6467 https://en.wikipedia.org/wiki/User:Paul_Ebermann
6468 https://en.wikipedia.org/wiki/User:Paul_G
6469 https://en.wikipedia.org/wiki/User:Paul_Koning
6470 https://en.wikipedia.org/w/index.php%3ftitle=User:Paul_Kube&action=edit&redlink=1
6471 https://en.wikipedia.org/w/index.php%3ftitle=User:Paul_Mackay~enwiki&action=edit&redlink=1
6472 https://en.wikipedia.org/wiki/User:Paul_Murray
6473 https://en.wikipedia.org/wiki/User:Paul_Richter
6474 https://en.wikipedia.org/wiki/User:Paul_Silverman
6475 https://en.wikipedia.org/w/index.php%3ftitle=User:Paul.Ogilvie.nl&action=edit&redlink=1
6476 https://en.wikipedia.org/w/index.php%3ftitle=User:Paul.cabot&action=edit&redlink=1
6477 https://en.wikipedia.org/wiki/User:Paul2520
6478 https://en.wikipedia.org/wiki/User:PaulHoadley
6479 https://en.wikipedia.org/wiki/User:PaulKeeperson
6480 https://en.wikipedia.org/wiki/User:PaulTanenbaum
6481 https://en.wikipedia.org/w/index.php%3ftitle=User:Pauldinhqd&action=edit&redlink=1

2 Paulduif6482
5 Pauli1336483
1 PauliKL6484
1 Paulorcjr6485
1 Paulschou6486
8 Paulsheer6487
1 Paushali6488
1 Pavel Fusu6489
2 PavelY6490
1 Pax:Vobiscum6491
2 Paxcoder6492
2 Pazabo6493
1 Pbassan6494
2 Pbruneau6495
5 Pbsouthwood6496
21 Pcap6497
7 Pcb216498
28 Pce3@ij.net6499
1 Pcp0710986500
3 Pcuff6501
3 Pdcook6502
1 Pdelong6503
1 Pdokj6504
1 Pdvyas6505
1 PeR6506

6482 https://en.wikipedia.org/w/index.php%3ftitle=User:Paulduif&action=edit&redlink=1
6483 https://en.wikipedia.org/w/index.php%3ftitle=User:Pauli133&action=edit&redlink=1
6484 https://en.wikipedia.org/wiki/User:PauliKL
6485 https://en.wikipedia.org/wiki/User:Paulorcjr
6486 https://en.wikipedia.org/w/index.php%3ftitle=User:Paulschou&action=edit&redlink=1
6487 https://en.wikipedia.org/wiki/User:Paulsheer
6488 https://en.wikipedia.org/w/index.php%3ftitle=User:Paushali&action=edit&redlink=1
6489 https://en.wikipedia.org/w/index.php%3ftitle=User:Pavel_Fusu&action=edit&redlink=1
6490 https://en.wikipedia.org/w/index.php%3ftitle=User:PavelY&action=edit&redlink=1
6491 https://en.wikipedia.org/wiki/User:Pax:Vobiscum
6492 https://en.wikipedia.org/wiki/User:Paxcoder
6493 https://en.wikipedia.org/w/index.php%3ftitle=User:Pazabo&action=edit&redlink=1
6494 https://en.wikipedia.org/w/index.php%3ftitle=User:Pbassan&action=edit&redlink=1
6495 https://en.wikipedia.org/w/index.php%3ftitle=User:Pbruneau&action=edit&redlink=1
6496 https://en.wikipedia.org/wiki/User:Pbsouthwood
6497 https://en.wikipedia.org/wiki/User:Pcap
6498 https://en.wikipedia.org/wiki/User:Pcb21
6499 https://en.wikipedia.org/w/index.php%3ftitle=User:Pce3@ij.net&action=edit&redlink=1
6500 https://en.wikipedia.org/wiki/User:Pcp071098
6501 https://en.wikipedia.org/w/index.php%3ftitle=User:Pcuff&action=edit&redlink=1
6502 https://en.wikipedia.org/wiki/User:Pdcook
6503 https://en.wikipedia.org/wiki/User:Pdelong
6504 https://en.wikipedia.org/wiki/User:Pdokj
6505 https://en.wikipedia.org/w/index.php%3ftitle=User:Pdvyas&action=edit&redlink=1
6506 https://en.wikipedia.org/wiki/User:PeR

1 Peacedance6507
1 Peaceray6508
2 Peak6509
1 Pearle6510
6 Peasaep6511
8 Peatar6512
10 PedjaNbg6513
1 Pedro6514
2 PedroContipelli6515
1 Pegasusbupt6516
1 Pegua6517
3 PeizonChen6518
2 Pekinensis6519
1 Pekrau6520
1 Pelirodri12486521
1 Pelirojopajaro6522
1 Pelister6523
1 Pelzflorian6524
1 Penguins Are Animals 53276525
2 Pengyifan6526
1 Peng~enwiki6527
1 Penumbroso6528
1 Peppy Paneer6529
1 PerVognsen6530
1 Peregrine9816531

6507 https://en.wikipedia.org/wiki/User:Peacedance
6508 https://en.wikipedia.org/wiki/User:Peaceray
6509 https://en.wikipedia.org/wiki/User:Peak
6510 https://en.wikipedia.org/wiki/User:Pearle
6511 https://en.wikipedia.org/wiki/User:Peasaep
6512 https://en.wikipedia.org/wiki/User:Peatar
6513 https://en.wikipedia.org/wiki/User:PedjaNbg
6514 https://en.wikipedia.org/wiki/User:Pedro
6515 https://en.wikipedia.org/w/index.php%3ftitle=User:PedroContipelli&action=edit&redlink=1
6516 https://en.wikipedia.org/w/index.php%3ftitle=User:Pegasusbupt&action=edit&redlink=1
6517 https://en.wikipedia.org/wiki/User:Pegua
6518 https://en.wikipedia.org/w/index.php%3ftitle=User:PeizonChen&action=edit&redlink=1
6519 https://en.wikipedia.org/wiki/User:Pekinensis
6520 https://en.wikipedia.org/w/index.php%3ftitle=User:Pekrau&action=edit&redlink=1
6521 https://en.wikipedia.org/w/index.php%3ftitle=User:Pelirodri1248&action=edit&redlink=1
6522 https://en.wikipedia.org/wiki/User:Pelirojopajaro
6523 https://en.wikipedia.org/wiki/User:Pelister
6524 https://en.wikipedia.org/w/index.php%3ftitle=User:Pelzflorian&action=edit&redlink=1
6525 https://en.wikipedia.org/wiki/User:Penguins_Are_Animals_5327
6526 https://en.wikipedia.org/w/index.php%3ftitle=User:Pengyifan&action=edit&redlink=1
6527 https://en.wikipedia.org/wiki/User:Peng~enwiki
6528 https://en.wikipedia.org/wiki/User:Penumbroso
6529 https://en.wikipedia.org/wiki/User:Peppy_Paneer
6530 https://en.wikipedia.org/w/index.php%3ftitle=User:PerVognsen&action=edit&redlink=1
6531 https://en.wikipedia.org/wiki/User:Peregrine981

2 Perey6532
2 Peristarkawan6533
1 Perlmonger426534
3 Perpetuo26535
1 Perry Bebbington6536
1 PerryTachett6537
1 Persian Poet Gal6538
1 Person who formerly started with ”216”6539
1 Personman6540
1 Perstar6541
1 Peruvianllama6542
1 Peskoj6543
2 PesoSwe6544
3 Pete.Hurd6545
6 Pete1426546
1 Pete45126547
1 Petebu6548
1 Peter Alan McAllister~enwiki6549
5 Peter Flass6550
1 Peter Horn6551
1 Peter Karlsen6552
12 Peter Kwok6553
1 Peter M Gerdes6554
1 Peter Winnberg6555
1 Peter bertok6556

6532 https://en.wikipedia.org/wiki/User:Perey
6533 https://en.wikipedia.org/w/index.php%3ftitle=User:Peristarkawan&action=edit&redlink=1
6534 https://en.wikipedia.org/w/index.php%3ftitle=User:Perlmonger42&action=edit&redlink=1
6535 https://en.wikipedia.org/w/index.php%3ftitle=User:Perpetuo2&action=edit&redlink=1
6536 https://en.wikipedia.org/wiki/User:Perry_Bebbington
6537 https://en.wikipedia.org/wiki/User:PerryTachett
6538 https://en.wikipedia.org/wiki/User:Persian_Poet_Gal
6539 https://en.wikipedia.org/wiki/User:Person_who_formerly_started_with_%2522216%2522
6540 https://en.wikipedia.org/wiki/User:Personman
6541 https://en.wikipedia.org/wiki/User:Perstar
6542 https://en.wikipedia.org/wiki/User:Peruvianllama
6543 https://en.wikipedia.org/w/index.php%3ftitle=User:Peskoj&action=edit&redlink=1
6544 https://en.wikipedia.org/w/index.php%3ftitle=User:PesoSwe&action=edit&redlink=1
6545 https://en.wikipedia.org/wiki/User:Pete.Hurd
6546 https://en.wikipedia.org/wiki/User:Pete142
6547 https://en.wikipedia.org/w/index.php%3ftitle=User:Pete4512&action=edit&redlink=1
6548 https://en.wikipedia.org/w/index.php%3ftitle=User:Petebu&action=edit&redlink=1
6549 https://en.wikipedia.org/w/index.php%3ftitle=User:Peter_Alan_McAllister~enwiki&action=edit&redlink=1
6550 https://en.wikipedia.org/wiki/User:Peter_Flass
6551 https://en.wikipedia.org/wiki/User:Peter_Horn
6552 https://en.wikipedia.org/wiki/User:Peter_Karlsen
6553 https://en.wikipedia.org/wiki/User:Peter_Kwok
6554 https://en.wikipedia.org/wiki/User:Peter_M_Gerdes
6555 https://en.wikipedia.org/wiki/User:Peter_Winnberg
6556 https://en.wikipedia.org/wiki/User:Peter_bertok

1 Peter0101016557
1 Peter1c6558
1 PeterC6559
1 PeterCanthropus6560
1 Peterc266561
8 Peterdjones6562
1 Peterjoel6563
1 Peterkos6564
6 Peterl6565
1 Peterven6566
1 Petiatil6567
1 Petr Matas6568
2 Petranteatr1396569
4 Petrb6570
6 Petri Krohn6571
1 Petrus~enwiki6572
2 Petter Strandmark6573
1 Peturb6574
2 Peyre6575
25 Pfalstad6576
14 Pfunk426577
1 Pgallert6578
64 Pgan0026579
2 Pgimeno~enwiki6580
5 Pgr946581

6557 https://en.wikipedia.org/wiki/User:Peter010101
6558 https://en.wikipedia.org/wiki/User:Peter1c
6559 https://en.wikipedia.org/w/index.php%3ftitle=User:PeterC&action=edit&redlink=1
6560 https://en.wikipedia.org/wiki/User:PeterCanthropus
6561 https://en.wikipedia.org/w/index.php%3ftitle=User:Peterc26&action=edit&redlink=1
6562 https://en.wikipedia.org/wiki/User:Peterdjones
6563 https://en.wikipedia.org/w/index.php%3ftitle=User:Peterjoel&action=edit&redlink=1
6564 https://en.wikipedia.org/w/index.php%3ftitle=User:Peterkos&action=edit&redlink=1
6565 https://en.wikipedia.org/wiki/User:Peterl
6566 https://en.wikipedia.org/w/index.php%3ftitle=User:Peterven&action=edit&redlink=1
6567 https://en.wikipedia.org/wiki/User:Petiatil
6568 https://en.wikipedia.org/wiki/User:Petr_Matas
6569 https://en.wikipedia.org/w/index.php%3ftitle=User:Petranteatr139&action=edit&redlink=1
6570 https://en.wikipedia.org/wiki/User:Petrb
6571 https://en.wikipedia.org/wiki/User:Petri_Krohn
6572 https://en.wikipedia.org/w/index.php%3ftitle=User:Petrus~enwiki&action=edit&redlink=1
6573 https://en.wikipedia.org/wiki/User:Petter_Strandmark
6574 https://en.wikipedia.org/w/index.php%3ftitle=User:Peturb&action=edit&redlink=1
6575 https://en.wikipedia.org/wiki/User:Peyre
6576 https://en.wikipedia.org/wiki/User:Pfalstad
6577 https://en.wikipedia.org/w/index.php%3ftitle=User:Pfunk42&action=edit&redlink=1
6578 https://en.wikipedia.org/wiki/User:Pgallert
6579 https://en.wikipedia.org/wiki/User:Pgan002
6580 https://en.wikipedia.org/wiki/User:Pgimeno~enwiki
6581 https://en.wikipedia.org/wiki/User:Pgr94

1 Ph.eyes6582
1 Phamnhatkhanh6583
3 Phamtrongthang1236584
4 Phanikumarv6585
1 PhantomTech6586
2 Pharaoh of the Wizards6587
1 Phasma Felis6588
1 Phatsphere6589
2 Phc16590
3 Phcho86591
1 Phe6592
3 Phe-bot6593
1 Pheartheceal6594
1 Phelanpt6595
2 Phenolla6596
1 Phi1618pi3146597
1 Phiarc6598
18 Phil Boswell6599
2 Phil Sandifer6600
2 Phil Spectre6601
1 PhilHibbs6602
7 PhilKnight6603
18 Philip Trueman6604
2 PhilipMW6605
2 PhilippWeissenbacher6606

6582 https://en.wikipedia.org/wiki/User:Ph.eyes
6583 https://en.wikipedia.org/w/index.php%3ftitle=User:Phamnhatkhanh&action=edit&redlink=1
6584 https://en.wikipedia.org/w/index.php%3ftitle=User:Phamtrongthang123&action=edit&redlink=1
6585 https://en.wikipedia.org/wiki/User:Phanikumarv
6586 https://en.wikipedia.org/wiki/User:PhantomTech
6587 https://en.wikipedia.org/wiki/User:Pharaoh_of_the_Wizards
6588 https://en.wikipedia.org/w/index.php%3ftitle=User:Phasma_Felis&action=edit&redlink=1
6589 https://en.wikipedia.org/wiki/User:Phatsphere
6590 https://en.wikipedia.org/w/index.php%3ftitle=User:Phc1&action=edit&redlink=1
6591 https://en.wikipedia.org/w/index.php%3ftitle=User:Phcho8&action=edit&redlink=1
6592 https://en.wikipedia.org/wiki/User:Phe
6593 https://en.wikipedia.org/wiki/User:Phe-bot
6594 https://en.wikipedia.org/wiki/User:Pheartheceal
6595 https://en.wikipedia.org/wiki/User:Phelanpt
6596 https://en.wikipedia.org/wiki/User:Phenolla
6597 https://en.wikipedia.org/w/index.php%3ftitle=User:Phi1618pi314&action=edit&redlink=1
6598 https://en.wikipedia.org/wiki/User:Phiarc
6599 https://en.wikipedia.org/wiki/User:Phil_Boswell
6600 https://en.wikipedia.org/wiki/User:Phil_Sandifer
6601 https://en.wikipedia.org/wiki/User:Phil_Spectre
6602 https://en.wikipedia.org/wiki/User:PhilHibbs
6603 https://en.wikipedia.org/wiki/User:PhilKnight
6604 https://en.wikipedia.org/wiki/User:Philip_Trueman
6605 https://en.wikipedia.org/wiki/User:PhilipMW
6606 https://en.wikipedia.org/wiki/User:PhilippWeissenbacher

1933
Contributors

1 Phillip Griffith6607
2 Philnap6608
1 Philngo6609
4 Philologia Sæculārēs6610
1 Philomathoholic6611
1 Philoserf6612
7 Phils6613
1 Phishman35796614
1 Phlusko6615
3 Phoe66616
2 Phoenixthebird6617
4 Photonique6618
2 Phuzion6619
1 Phyburn6620
10 Pi Delport6621
1 Pi bot6622
4 Piano non troppo6623
1 PianoSpleen6624
1 Piccolomomo~enwiki6625
10 Pichpich6626
1 PieMan.EXE6627
1 Pierre de Lyon6628
10 PierreAbbat6629
2 PierreBoudes6630
1 PierreCA226631

6607 https://en.wikipedia.org/w/index.php%3ftitle=User:Phillip_Griffith&action=edit&redlink=1
6608 https://en.wikipedia.org/w/index.php%3ftitle=User:Philnap&action=edit&redlink=1
6609 https://en.wikipedia.org/w/index.php%3ftitle=User:Philngo&action=edit&redlink=1
6610 https://en.wikipedia.org/wiki/User:Philologia_S%25C3%25A6cul%25C4%2581r%25C4%2593s
6611 https://en.wikipedia.org/wiki/User:Philomathoholic
6612 https://en.wikipedia.org/wiki/User:Philoserf
6613 https://en.wikipedia.org/wiki/User:Phils
6614 https://en.wikipedia.org/w/index.php%3ftitle=User:Phishman3579&action=edit&redlink=1
6615 https://en.wikipedia.org/w/index.php%3ftitle=User:Phlusko&action=edit&redlink=1
6616 https://en.wikipedia.org/wiki/User:Phoe6
6617 https://en.wikipedia.org/wiki/User:Phoenixthebird
6618 https://en.wikipedia.org/wiki/User:Photonique
6619 https://en.wikipedia.org/wiki/User:Phuzion
6620 https://en.wikipedia.org/w/index.php%3ftitle=User:Phyburn&action=edit&redlink=1
6621 https://en.wikipedia.org/wiki/User:Pi_Delport
6622 https://en.wikipedia.org/wiki/User:Pi_bot
6623 https://en.wikipedia.org/wiki/User:Piano_non_troppo
6624 https://en.wikipedia.org/w/index.php%3ftitle=User:PianoSpleen&action=edit&redlink=1
6625 https://en.wikipedia.org/wiki/User:Piccolomomo~enwiki
6626 https://en.wikipedia.org/wiki/User:Pichpich
6627 https://en.wikipedia.org/wiki/User:PieMan.EXE
6628 https://en.wikipedia.org/wiki/User:Pierre_de_Lyon
6629 https://en.wikipedia.org/wiki/User:PierreAbbat
6630 https://en.wikipedia.org/wiki/User:PierreBoudes
6631 https://en.wikipedia.org/w/index.php%3ftitle=User:PierreCA22&action=edit&redlink=1


1 PierreSelim6632
1 PierreSenellart6633
3 Pierrotvelo6634
1 Pierzz6635
2 Pifactorial6636
1 PigFlu Oink6637
2 Pigorsch6638
1 Pigsonthewing6639
2 Piguy1016640
1 Pikamander26641
1 Piledhigheranddeeper6642
2 Pilode6643
1 Pilotguy6644
1 Pinar6645
1 Pinball226646
2 Pine6647
10 Pinethicket6648
2 Pingveno6649
1 PinoEire6650
1 Pintman6651
36 Pintoch6652
3 Piojo6653
3 Pion6654
1 Piotrus6655
2 Pipasharto6656

6632 https://en.wikipedia.org/wiki/User:PierreSelim
6633 https://en.wikipedia.org/wiki/User:PierreSenellart
6634 https://en.wikipedia.org/w/index.php%3ftitle=User:Pierrotvelo&action=edit&redlink=1
6635 https://en.wikipedia.org/w/index.php%3ftitle=User:Pierzz&action=edit&redlink=1
6636 https://en.wikipedia.org/wiki/User:Pifactorial
6637 https://en.wikipedia.org/wiki/User:PigFlu_Oink
6638 https://en.wikipedia.org/w/index.php%3ftitle=User:Pigorsch&action=edit&redlink=1
6639 https://en.wikipedia.org/wiki/User:Pigsonthewing
6640 https://en.wikipedia.org/wiki/User:Piguy101
6641 https://en.wikipedia.org/wiki/User:Pikamander2
6642 https://en.wikipedia.org/wiki/User:Piledhigheranddeeper
6643 https://en.wikipedia.org/w/index.php%3ftitle=User:Pilode&action=edit&redlink=1
6644 https://en.wikipedia.org/wiki/User:Pilotguy
6645 https://en.wikipedia.org/wiki/User:Pinar
6646 https://en.wikipedia.org/wiki/User:Pinball22
6647 https://en.wikipedia.org/wiki/User:Pine
6648 https://en.wikipedia.org/wiki/User:Pinethicket
6649 https://en.wikipedia.org/wiki/User:Pingveno
6650 https://en.wikipedia.org/wiki/User:PinoEire
6651 https://en.wikipedia.org/wiki/User:Pintman
6652 https://en.wikipedia.org/wiki/User:Pintoch
6653 https://en.wikipedia.org/w/index.php%3ftitle=User:Piojo&action=edit&redlink=1
6654 https://en.wikipedia.org/wiki/User:Pion
6655 https://en.wikipedia.org/wiki/User:Piotrus
6656 https://en.wikipedia.org/w/index.php%3ftitle=User:Pipasharto&action=edit&redlink=1


1 Pipedreambomb6657
2 Pipemore6658
4 PipepBot6659
1 Piperh6660
2 Pipetricker6661
1 Pitel6662
2 Pit~enwiki6663
10 PixelBot6664
2 Pixelu6665
3 Pixie Farmer6666
4 Pixor6667
1 Pizza10166668
18 PizzaMargherita6669
1 Pjoef6670
3 Pjrm6671
4 Pkbwcgs6672
5 Pkirlin6673
3 Pkrecker6674
2 Plaba1236675
2 Plandu6676
1 Planet-man8286677
1 Planetscape6678
1 Plasticphilosopher6679
1 Plasticup6680
1 Plastikspork6681

6657 https://en.wikipedia.org/wiki/User:Pipedreambomb
6658 https://en.wikipedia.org/w/index.php%3ftitle=User:Pipemore&action=edit&redlink=1
6659 https://en.wikipedia.org/wiki/User:PipepBot
6660 https://en.wikipedia.org/wiki/User:Piperh
6661 https://en.wikipedia.org/wiki/User:Pipetricker
6662 https://en.wikipedia.org/wiki/User:Pitel
6663 https://en.wikipedia.org/wiki/User:Pit~enwiki
6664 https://en.wikipedia.org/wiki/User:PixelBot
6665 https://en.wikipedia.org/w/index.php%3ftitle=User:Pixelu&action=edit&redlink=1
6666 https://en.wikipedia.org/w/index.php%3ftitle=User:Pixie_Farmer&action=edit&redlink=1
6667 https://en.wikipedia.org/wiki/User:Pixor
6668 https://en.wikipedia.org/wiki/User:Pizza1016
6669 https://en.wikipedia.org/wiki/User:PizzaMargherita
6670 https://en.wikipedia.org/wiki/User:Pjoef
6671 https://en.wikipedia.org/w/index.php%3ftitle=User:Pjrm&action=edit&redlink=1
6672 https://en.wikipedia.org/wiki/User:Pkbwcgs
6673 https://en.wikipedia.org/wiki/User:Pkirlin
6674 https://en.wikipedia.org/wiki/User:Pkrecker
6675 https://en.wikipedia.org/wiki/User:Plaba123
6676 https://en.wikipedia.org/wiki/User:Plandu
6677 https://en.wikipedia.org/w/index.php%3ftitle=User:Planet-man828&action=edit&redlink=1
6678 https://en.wikipedia.org/w/index.php%3ftitle=User:Planetscape&action=edit&redlink=1
6679 https://en.wikipedia.org/w/index.php%3ftitle=User:Plasticphilosopher&action=edit&redlink=1
6680 https://en.wikipedia.org/wiki/User:Plasticup
6681 https://en.wikipedia.org/wiki/User:Plastikspork


1 Plattler016682
2 Platypus2226683
2 Player 036684
1 Playerwithgraphs6685
2 Playswithfire6686
8 Plbogen6687
12 Pleasantville6688
1 Pleiotrop36689
1 Plokmijnuhby6690
1 Plop6691
1 Pluggi6692
1 Plugwash6693
5 Plurmiscuous6694
1 Plustgarten6695
1 Plutor6696
3 PlyrStar936697
2 Pm2156698
1 Pmaccabe6699
1 Pmadrid6700
1 Pmanderson6701
1 Pmcjones6702
1 Pmdboi6703
1 Pmdusso6704
1 Pmezard6705
2 Pmlineditor6706

6682 https://en.wikipedia.org/wiki/User:Plattler01
6683 https://en.wikipedia.org/wiki/User:Platypus222
6684 https://en.wikipedia.org/wiki/User:Player_03
6685 https://en.wikipedia.org/wiki/User:Playerwithgraphs
6686 https://en.wikipedia.org/w/index.php%3ftitle=User:Playswithfire&action=edit&redlink=1
6687 https://en.wikipedia.org/wiki/User:Plbogen
6688 https://en.wikipedia.org/wiki/User:Pleasantville
6689 https://en.wikipedia.org/w/index.php%3ftitle=User:Pleiotrop3&action=edit&redlink=1
6690 https://en.wikipedia.org/w/index.php%3ftitle=User:Plokmijnuhby&action=edit&redlink=1
6691 https://en.wikipedia.org/wiki/User:Plop
6692 https://en.wikipedia.org/w/index.php%3ftitle=User:Pluggi&action=edit&redlink=1
6693 https://en.wikipedia.org/wiki/User:Plugwash
6694 https://en.wikipedia.org/wiki/User:Plurmiscuous
6695 https://en.wikipedia.org/wiki/User:Plustgarten
6696 https://en.wikipedia.org/wiki/User:Plutor
6697 https://en.wikipedia.org/wiki/User:PlyrStar93
6698 https://en.wikipedia.org/w/index.php%3ftitle=User:Pm215&action=edit&redlink=1
6699 https://en.wikipedia.org/wiki/User:Pmaccabe
6700 https://en.wikipedia.org/wiki/User:Pmadrid
6701 https://en.wikipedia.org/wiki/User:Pmanderson
6702 https://en.wikipedia.org/wiki/User:Pmcjones
6703 https://en.wikipedia.org/wiki/User:Pmdboi
6704 https://en.wikipedia.org/w/index.php%3ftitle=User:Pmdusso&action=edit&redlink=1
6705 https://en.wikipedia.org/w/index.php%3ftitle=User:Pmezard&action=edit&redlink=1
6706 https://en.wikipedia.org/wiki/User:Pmlineditor


1 Pmokeefe6707
1 Pmronchi6708
11 Pmussler6709
1 Pndfam056710
5 Pne6711
4 Pnm6712
1 Pnmdln6713
2 PohranicniStraze6714
6 Pointillist6715
2 PoisonedQuill6716
10 Pol0986717
4 Policron6718
1 PolicyReformer6719
2 PoliticalJunkie6720
1 Polo6721
3 Polyguo6722
1 Polymath96366723
1 Polymerbringer6724
2 Pombredanne6725
2 Ponggr6726
1 Pontificalibus6727
1 Ponyo6728
1 PoolGuy6729
24 Poor Yorick6730
2 Popcrate6731

6707 https://en.wikipedia.org/wiki/User:Pmokeefe
6708 https://en.wikipedia.org/wiki/User:Pmronchi
6709 https://en.wikipedia.org/wiki/User:Pmussler
6710 https://en.wikipedia.org/w/index.php%3ftitle=User:Pndfam05&action=edit&redlink=1
6711 https://en.wikipedia.org/wiki/User:Pne
6712 https://en.wikipedia.org/wiki/User:Pnm
6713 https://en.wikipedia.org/w/index.php%3ftitle=User:Pnmdln&action=edit&redlink=1
6714 https://en.wikipedia.org/wiki/User:PohranicniStraze
6715 https://en.wikipedia.org/wiki/User:Pointillist
6716 https://en.wikipedia.org/wiki/User:PoisonedQuill
6717 https://en.wikipedia.org/wiki/User:Pol098
6718 https://en.wikipedia.org/wiki/User:Policron
6719 https://en.wikipedia.org/wiki/User:PolicyReformer
6720 https://en.wikipedia.org/wiki/User:PoliticalJunkie
6721 https://en.wikipedia.org/wiki/User:Polo
6722 https://en.wikipedia.org/w/index.php%3ftitle=User:Polyguo&action=edit&redlink=1
6723 https://en.wikipedia.org/w/index.php%3ftitle=User:Polymath9636&action=edit&redlink=1
6724 https://en.wikipedia.org/wiki/User:Polymerbringer
6725 https://en.wikipedia.org/wiki/User:Pombredanne
6726 https://en.wikipedia.org/w/index.php%3ftitle=User:Ponggr&action=edit&redlink=1
6727 https://en.wikipedia.org/wiki/User:Pontificalibus
6728 https://en.wikipedia.org/wiki/User:Ponyo
6729 https://en.wikipedia.org/wiki/User:PoolGuy
6730 https://en.wikipedia.org/wiki/User:Poor_Yorick
6731 https://en.wikipedia.org/wiki/User:Popcrate


1 Poponuro6732
1 Popovitsj6733
18 Populus6734
2 Porges6735
2 Porphyro6736
1 Porqin6737
1 Portalian6738
1 Posix4e6739
1 Postdlf6740
3 Postrach6741
1 Potterman28wxcv6742
2 Powerdesi6743
1 Powerqm6744
1 Powerthirst1236745
1 Pownuk6746
8 Powo6747
3 Pparent6748
1 Pppery6749
1 Pps6750
1 Pragmocialist6751
2 Prakashmeansvictory6752
1 Prakhar99986753
2 PramodSPhadke6754
1 Pramodk-git6755

6732 https://en.wikipedia.org/w/index.php%3ftitle=User:Poponuro&action=edit&redlink=1
6733 https://en.wikipedia.org/w/index.php%3ftitle=User:Popovitsj&action=edit&redlink=1
6734 https://en.wikipedia.org/wiki/User:Populus
6735 https://en.wikipedia.org/wiki/User:Porges
6736 https://en.wikipedia.org/wiki/User:Porphyro
6737 https://en.wikipedia.org/wiki/User:Porqin
6738 https://en.wikipedia.org/wiki/User:Portalian
6739 https://en.wikipedia.org/w/index.php%3ftitle=User:Posix4e&action=edit&redlink=1
6740 https://en.wikipedia.org/wiki/User:Postdlf
6741 https://en.wikipedia.org/wiki/User:Postrach
6742 https://en.wikipedia.org/w/index.php%3ftitle=User:Potterman28wxcv&action=edit&redlink=1
6743 https://en.wikipedia.org/w/index.php%3ftitle=User:Powerdesi&action=edit&redlink=1
6744 https://en.wikipedia.org/w/index.php%3ftitle=User:Powerqm&action=edit&redlink=1
6745 https://en.wikipedia.org/w/index.php%3ftitle=User:Powerthirst123&action=edit&redlink=1
6746 https://en.wikipedia.org/wiki/User:Pownuk
6747 https://en.wikipedia.org/wiki/User:Powo
6748 https://en.wikipedia.org/wiki/User:Pparent
6749 https://en.wikipedia.org/wiki/User:Pppery
6750 https://en.wikipedia.org/wiki/User:Pps
6751 https://en.wikipedia.org/w/index.php%3ftitle=User:Pragmocialist&action=edit&redlink=1
6752 https://en.wikipedia.org/w/index.php%3ftitle=User:Prakashmeansvictory&action=edit&redlink=1
6753 https://en.wikipedia.org/w/index.php%3ftitle=User:Prakhar9998&action=edit&redlink=1
6754 https://en.wikipedia.org/w/index.php%3ftitle=User:PramodSPhadke&action=edit&redlink=1
6755 https://en.wikipedia.org/w/index.php%3ftitle=User:Pramodk-git&action=edit&redlink=1


1 Pranav.deotale6756
2 Pranay Varma6757
1 Pranvk6758
2 Prara6759
4 Prari6760
2 Prasanth.moothedath6761
1 Prashantvatsjha6762
1 Prashmohan6763
5 Pratyya Ghosh6764
5 Praxidicae6765
1 Prayerfortheworld6766
2 Pred6767
1 Predator1066768
1 Predawn6769
2 Preetham A6770
1 Premil6771
4 Prestonmag6772
1 Primal dual6773
2 PrimeBOT6774
3 PrimeHunter6775
2 Primefac6776
2 Primergrey6777
1 Prinsgezinde6778
1 Priya17796779
2 ProBoj!6780

6756 https://en.wikipedia.org/w/index.php%3ftitle=User:Pranav.deotale&action=edit&redlink=1
6757 https://en.wikipedia.org/w/index.php%3ftitle=User:Pranay_Varma&action=edit&redlink=1
6758 https://en.wikipedia.org/wiki/User:Pranvk
6759 https://en.wikipedia.org/w/index.php%3ftitle=User:Prara&action=edit&redlink=1
6760 https://en.wikipedia.org/wiki/User:Prari
6761 https://en.wikipedia.org/wiki/User:Prasanth.moothedath
6762 https://en.wikipedia.org/w/index.php%3ftitle=User:Prashantvatsjha&action=edit&redlink=1
6763 https://en.wikipedia.org/wiki/User:Prashmohan
6764 https://en.wikipedia.org/wiki/User:Pratyya_Ghosh
6765 https://en.wikipedia.org/wiki/User:Praxidicae
6766 https://en.wikipedia.org/wiki/User:Prayerfortheworld
6767 https://en.wikipedia.org/wiki/User:Pred
6768 https://en.wikipedia.org/wiki/User:Predator106
6769 https://en.wikipedia.org/w/index.php%3ftitle=User:Predawn&action=edit&redlink=1
6770 https://en.wikipedia.org/wiki/User:Preetham_A
6771 https://en.wikipedia.org/w/index.php%3ftitle=User:Premil&action=edit&redlink=1
6772 https://en.wikipedia.org/wiki/User:Prestonmag
6773 https://en.wikipedia.org/w/index.php%3ftitle=User:Primal_dual&action=edit&redlink=1
6774 https://en.wikipedia.org/wiki/User:PrimeBOT
6775 https://en.wikipedia.org/wiki/User:PrimeHunter
6776 https://en.wikipedia.org/wiki/User:Primefac
6777 https://en.wikipedia.org/wiki/User:Primergrey
6778 https://en.wikipedia.org/wiki/User:Prinsgezinde
6779 https://en.wikipedia.org/w/index.php%3ftitle=User:Priya1779&action=edit&redlink=1
6780 https://en.wikipedia.org/w/index.php%3ftitle=User:ProBoj!&action=edit&redlink=1


2 Prochron6781
1 Prodego6782
7 ProfessionalProgrammer6783
3 ProfessorMK6784
1 Profvalente6785
1 ProgVal6786
2 Programmerdouts6787
1 Prolinko6788
1 Prolo 16789
4 PrologFan6790
2 Promethean6791
10 Prosfilaes6792
1 ProtoFire6793
4 Protonk6794
1 Proxyma6795
2 Prumpf6796
1 Pryrnjn6797
1 PrzemekMajewski6798
1 Pschaus6799
1 Psdey16800
3 Pseudomonas6801
2 Pshanka6802
2 Psherm856803
2 Pshirishreddy6804
1 PsiXi6805

6781 https://en.wikipedia.org/wiki/User:Prochron
6782 https://en.wikipedia.org/wiki/User:Prodego
6783 https://en.wikipedia.org/wiki/User:ProfessionalProgrammer
6784 https://en.wikipedia.org/w/index.php%3ftitle=User:ProfessorMK&action=edit&redlink=1
6785 https://en.wikipedia.org/wiki/User:Profvalente
6786 https://en.wikipedia.org/w/index.php%3ftitle=User:ProgVal&action=edit&redlink=1
6787 https://en.wikipedia.org/w/index.php%3ftitle=User:Programmerdouts&action=edit&redlink=1
6788 https://en.wikipedia.org/w/index.php%3ftitle=User:Prolinko&action=edit&redlink=1
6789 https://en.wikipedia.org/w/index.php%3ftitle=User:Prolo_1&action=edit&redlink=1
6790 https://en.wikipedia.org/wiki/User:PrologFan
6791 https://en.wikipedia.org/wiki/User:Promethean
6792 https://en.wikipedia.org/wiki/User:Prosfilaes
6793 https://en.wikipedia.org/wiki/User:ProtoFire
6794 https://en.wikipedia.org/wiki/User:Protonk
6795 https://en.wikipedia.org/wiki/User:Proxyma
6796 https://en.wikipedia.org/wiki/User:Prumpf
6797 https://en.wikipedia.org/w/index.php%3ftitle=User:Pryrnjn&action=edit&redlink=1
6798 https://en.wikipedia.org/w/index.php%3ftitle=User:PrzemekMajewski&action=edit&redlink=1
6799 https://en.wikipedia.org/w/index.php%3ftitle=User:Pschaus&action=edit&redlink=1
6800 https://en.wikipedia.org/w/index.php%3ftitle=User:Psdey1&action=edit&redlink=1
6801 https://en.wikipedia.org/wiki/User:Pseudomonas
6802 https://en.wikipedia.org/w/index.php%3ftitle=User:Pshanka&action=edit&redlink=1
6803 https://en.wikipedia.org/w/index.php%3ftitle=User:Psherm85&action=edit&redlink=1
6804 https://en.wikipedia.org/w/index.php%3ftitle=User:Pshirishreddy&action=edit&redlink=1
6805 https://en.wikipedia.org/wiki/User:PsiXi


4 Psilopteros6806
2 Psiphiorg6807
4 Pskjs~enwiki6808
1 Psorakis6809
1 PsyberS6810
1 PsycheMan6811
1 Psychlohexane6812
1 PsychoAlienDog6813
1 Psym6814
4 Pt6815
6 Ptbotgourou6816
5 Ptrillian6817
1 Publichealthguru6818
1 Publius19896819
4 Puckly6820
1 Puffinry6821
2 Pugget6822
1 Puhfyn6823
1 Pukeye~enwiki6824
1 Pulkitism42156825
23 Pullpull1236826
1 Pulveriser6827
1 Pup5006828
2 Pur3r4ngelw6829
3 Purgy Purgatorio6830

6806 https://en.wikipedia.org/w/index.php%3ftitle=User:Psilopteros&action=edit&redlink=1
6807 https://en.wikipedia.org/wiki/User:Psiphiorg
6808 https://en.wikipedia.org/w/index.php%3ftitle=User:Pskjs~enwiki&action=edit&redlink=1
6809 https://en.wikipedia.org/w/index.php%3ftitle=User:Psorakis&action=edit&redlink=1
6810 https://en.wikipedia.org/wiki/User:PsyberS
6811 https://en.wikipedia.org/w/index.php%3ftitle=User:PsycheMan&action=edit&redlink=1
6812 https://en.wikipedia.org/wiki/User:Psychlohexane
6813 https://en.wikipedia.org/wiki/User:PsychoAlienDog
6814 https://en.wikipedia.org/wiki/User:Psym
6815 https://en.wikipedia.org/wiki/User:Pt
6816 https://en.wikipedia.org/wiki/User:Ptbotgourou
6817 https://en.wikipedia.org/w/index.php%3ftitle=User:Ptrillian&action=edit&redlink=1
6818 https://en.wikipedia.org/w/index.php%3ftitle=User:Publichealthguru&action=edit&redlink=1
6819 https://en.wikipedia.org/w/index.php%3ftitle=User:Publius1989&action=edit&redlink=1
6820 https://en.wikipedia.org/wiki/User:Puckly
6821 https://en.wikipedia.org/w/index.php%3ftitle=User:Puffinry&action=edit&redlink=1
6822 https://en.wikipedia.org/wiki/User:Pugget
6823 https://en.wikipedia.org/wiki/User:Puhfyn
6824 https://en.wikipedia.org/w/index.php%3ftitle=User:Pukeye~enwiki&action=edit&redlink=1
6825 https://en.wikipedia.org/w/index.php%3ftitle=User:Pulkitism4215&action=edit&redlink=1
6826 https://en.wikipedia.org/wiki/User:Pullpull123
6827 https://en.wikipedia.org/wiki/User:Pulveriser
6828 https://en.wikipedia.org/w/index.php%3ftitle=User:Pup500&action=edit&redlink=1
6829 https://en.wikipedia.org/w/index.php%3ftitle=User:Pur3r4ngelw&action=edit&redlink=1
6830 https://en.wikipedia.org/wiki/User:Purgy_Purgatorio


2 Purnendu Karmakar6831
1 Purple acid6832
1 Purplie6833
1 Pushingbits6834
1 Puzne~enwiki6835
2 PuzzletChung6836
1 Pvamsiece6837
1 Pvza6838
1 Pvza856839
1 Pwp3336840
4 Pxtreme756841
2 Pyfan6842
1 Pypmannetjies6843
1 Pyrothepenguin6844
1 Pyrrhonist056845
2 Pyschobbens6846
1 Pythonwiz6847
4 Pzoxicuvybtnrm6848
1 Pzrq6849
2 QEDK6850
1 QFlyer6851
1 Qaramazov6852
1 Qazi Ammar Arshad6853
1 Qcomp6854
1 Qef6855

6831 https://en.wikipedia.org/wiki/User:Purnendu_Karmakar
6832 https://en.wikipedia.org/wiki/User:Purple_acid
6833 https://en.wikipedia.org/w/index.php%3ftitle=User:Purplie&action=edit&redlink=1
6834 https://en.wikipedia.org/w/index.php%3ftitle=User:Pushingbits&action=edit&redlink=1
6835 https://en.wikipedia.org/w/index.php%3ftitle=User:Puzne~enwiki&action=edit&redlink=1
6836 https://en.wikipedia.org/wiki/User:PuzzletChung
6837 https://en.wikipedia.org/w/index.php%3ftitle=User:Pvamsiece&action=edit&redlink=1
6838 https://en.wikipedia.org/w/index.php%3ftitle=User:Pvza&action=edit&redlink=1
6839 https://en.wikipedia.org/w/index.php%3ftitle=User:Pvza85&action=edit&redlink=1
6840 https://en.wikipedia.org/w/index.php%3ftitle=User:Pwp333&action=edit&redlink=1
6841 https://en.wikipedia.org/wiki/User:Pxtreme75
6842 https://en.wikipedia.org/wiki/User:Pyfan
6843 https://en.wikipedia.org/w/index.php%3ftitle=User:Pypmannetjies&action=edit&redlink=1
6844 https://en.wikipedia.org/wiki/User:Pyrothepenguin
6845 https://en.wikipedia.org/w/index.php%3ftitle=User:Pyrrhonist05&action=edit&redlink=1
6846 https://en.wikipedia.org/w/index.php%3ftitle=User:Pyschobbens&action=edit&redlink=1
6847 https://en.wikipedia.org/w/index.php%3ftitle=User:Pythonwiz&action=edit&redlink=1
6848 https://en.wikipedia.org/wiki/User:Pzoxicuvybtnrm
6849 https://en.wikipedia.org/w/index.php%3ftitle=User:Pzrq&action=edit&redlink=1
6850 https://en.wikipedia.org/wiki/User:QEDK
6851 https://en.wikipedia.org/wiki/User:QFlyer
6852 https://en.wikipedia.org/wiki/User:Qaramazov
6853 https://en.wikipedia.org/w/index.php%3ftitle=User:Qazi_Ammar_Arshad&action=edit&redlink=1
6854 https://en.wikipedia.org/wiki/User:Qcomp
6855 https://en.wikipedia.org/wiki/User:Qef


1 Qinnamiami6856
1 Qiq~enwiki6857
1 Qiutongs6858
1 Qleafriver6859
2 Qleem6860
1 QmunkE6861
1 Qorilla6862
1 Qqliu6863
8 Qrancik6864
2 QrczakMK6865
2 Qsq6866
1 Qst6867
4 Qu1j0t36868
4 Quackor6869
1 Quackriot6870
1 Quadell6871
4 Quadrescence6872
8 Quaeler6873
1 Quaestor~enwiki6874
1 Quaizar.vohra6875
16 Quale6876
1 Quang thai6877
1 QuangThong816878
1 QuantifiedElf6879
2 QuantitativeFinanceKinderChocolate6880

6856 https://en.wikipedia.org/w/index.php%3ftitle=User:Qinnamiami&action=edit&redlink=1
6857 https://en.wikipedia.org/wiki/User:Qiq~enwiki
6858 https://en.wikipedia.org/w/index.php%3ftitle=User:Qiutongs&action=edit&redlink=1
6859 https://en.wikipedia.org/w/index.php%3ftitle=User:Qleafriver&action=edit&redlink=1
6860 https://en.wikipedia.org/wiki/User:Qleem
6861 https://en.wikipedia.org/wiki/User:QmunkE
6862 https://en.wikipedia.org/wiki/User:Qorilla
6863 https://en.wikipedia.org/w/index.php%3ftitle=User:Qqliu&action=edit&redlink=1
6864 https://en.wikipedia.org/w/index.php%3ftitle=User:Qrancik&action=edit&redlink=1
6865 https://en.wikipedia.org/w/index.php%3ftitle=User:QrczakMK&action=edit&redlink=1
6866 https://en.wikipedia.org/w/index.php%3ftitle=User:Qsq&action=edit&redlink=1
6867 https://en.wikipedia.org/wiki/User:Qst
6868 https://en.wikipedia.org/w/index.php%3ftitle=User:Qu1j0t3&action=edit&redlink=1
6869 https://en.wikipedia.org/wiki/User:Quackor
6870 https://en.wikipedia.org/wiki/User:Quackriot
6871 https://en.wikipedia.org/wiki/User:Quadell
6872 https://en.wikipedia.org/wiki/User:Quadrescence
6873 https://en.wikipedia.org/wiki/User:Quaeler
6874 https://en.wikipedia.org/w/index.php%3ftitle=User:Quaestor~enwiki&action=edit&redlink=1
6875 https://en.wikipedia.org/w/index.php%3ftitle=User:Quaizar.vohra&action=edit&redlink=1
6876 https://en.wikipedia.org/wiki/User:Quale
6877 https://en.wikipedia.org/w/index.php%3ftitle=User:Quang_thai&action=edit&redlink=1
6878 https://en.wikipedia.org/w/index.php%3ftitle=User:QuangThong81&action=edit&redlink=1
6879 https://en.wikipedia.org/wiki/User:QuantifiedElf
6880 https://en.wikipedia.org/w/index.php%3ftitle=User:QuantitativeFinanceKinderChocolate&action=edit&redlink=1


2 Quantumelixir6881
1 Quarague6882
2 Quarl6883
7 Quaternion6884
4 Quebec996885
1 Queeg6886
1 Quendus6887
2 Quenhitran6888
1 Quentonamos6889
2 Quercus solaris6890
2 Quest for Truth6891
1 Quiark6892
4 Quibik6893
1 Quickfingers6894
6 Quickwik6895
1 Quiddity6896
4 Quidquam6897
3 Quietbritishjim6898
12 Quill186899
2 Quinton Feldberg6900
2 Quintus3146901
3 Quittle6902
3 Quondum6903
4 Quota6904
1 Quoth-226905

6881 https://en.wikipedia.org/w/index.php%3ftitle=User:Quantumelixir&action=edit&redlink=1
6882 https://en.wikipedia.org/w/index.php%3ftitle=User:Quarague&action=edit&redlink=1
6883 https://en.wikipedia.org/wiki/User:Quarl
6884 https://en.wikipedia.org/w/index.php%3ftitle=User:Quaternion&action=edit&redlink=1
6885 https://en.wikipedia.org/wiki/User:Quebec99
6886 https://en.wikipedia.org/w/index.php%3ftitle=User:Queeg&action=edit&redlink=1
6887 https://en.wikipedia.org/wiki/User:Quendus
6888 https://en.wikipedia.org/wiki/User:Quenhitran
6889 https://en.wikipedia.org/w/index.php%3ftitle=User:Quentonamos&action=edit&redlink=1
6890 https://en.wikipedia.org/wiki/User:Quercus_solaris
6891 https://en.wikipedia.org/wiki/User:Quest_for_Truth
6892 https://en.wikipedia.org/w/index.php%3ftitle=User:Quiark&action=edit&redlink=1
6893 https://en.wikipedia.org/wiki/User:Quibik
6894 https://en.wikipedia.org/wiki/User:Quickfingers
6895 https://en.wikipedia.org/wiki/User:Quickwik
6896 https://en.wikipedia.org/wiki/User:Quiddity
6897 https://en.wikipedia.org/w/index.php%3ftitle=User:Quidquam&action=edit&redlink=1
6898 https://en.wikipedia.org/wiki/User:Quietbritishjim
6899 https://en.wikipedia.org/w/index.php%3ftitle=User:Quill18&action=edit&redlink=1
6900 https://en.wikipedia.org/wiki/User:Quinton_Feldberg
6901 https://en.wikipedia.org/wiki/User:Quintus314
6902 https://en.wikipedia.org/w/index.php%3ftitle=User:Quittle&action=edit&redlink=1
6903 https://en.wikipedia.org/wiki/User:Quondum
6904 https://en.wikipedia.org/wiki/User:Quota
6905 https://en.wikipedia.org/wiki/User:Quoth-22


2 Quotient group6906
5 Qutezuce6907
1 Quuux6908
36 Quuxplusone6909
1 Qwertpi6910
1 Qwertysam956911
596 Qwertyus6912
2 Qwfp6913
3 Qwyrxian6914
4 Qx20206915
2 Qzd6916
26 R'n'B6917
2 R. J. Mathar6918
28 R. S. Shaw6919
19 R.e.b.6920
1 R06921
1 R1D16922
1 R271828186923
8 R3m0t6924
4 R61446925
5 RA08086926
1 RAX MECH066927
1 RAYDENFilipp6928
1 RAYEES Ahmad Ganaie6929
8 RDBury6930

6906 https://en.wikipedia.org/wiki/User:Quotient_group
6907 https://en.wikipedia.org/wiki/User:Qutezuce
6908 https://en.wikipedia.org/wiki/User:Quuux
6909 https://en.wikipedia.org/wiki/User:Quuxplusone
6910 https://en.wikipedia.org/w/index.php%3ftitle=User:Qwertpi&action=edit&redlink=1
6911 https://en.wikipedia.org/w/index.php%3ftitle=User:Qwertysam95&action=edit&redlink=1
6912 https://en.wikipedia.org/wiki/User:Qwertyus
6913 https://en.wikipedia.org/wiki/User:Qwfp
6914 https://en.wikipedia.org/wiki/User:Qwyrxian
6915 https://en.wikipedia.org/wiki/User:Qx2020
6916 https://en.wikipedia.org/wiki/User:Qzd
6917 https://en.wikipedia.org/wiki/User:R%2527n%2527B
6918 https://en.wikipedia.org/wiki/User:R._J._Mathar
6919 https://en.wikipedia.org/wiki/User:R._S._Shaw
6920 https://en.wikipedia.org/wiki/User:R.e.b.
6921 https://en.wikipedia.org/wiki/User:R0
6922 https://en.wikipedia.org/w/index.php%3ftitle=User:R1D1&action=edit&redlink=1
6923 https://en.wikipedia.org/wiki/User:R27182818
6924 https://en.wikipedia.org/wiki/User:R3m0t
6925 https://en.wikipedia.org/wiki/User:R6144
6926 https://en.wikipedia.org/wiki/User:RA0808
6927 https://en.wikipedia.org/w/index.php%3ftitle=User:RAX_MECH06&action=edit&redlink=1
6928 https://en.wikipedia.org/w/index.php%3ftitle=User:RAYDENFilipp&action=edit&redlink=1
6929 https://en.wikipedia.org/w/index.php%3ftitle=User:RAYEES_Ahmad_Ganaie&action=edit&redlink=1
6930 https://en.wikipedia.org/wiki/User:RDBury


2 RFBailey6931
1 RHaworth6932
2 RISHARTHA6933
1 RJB6934
25 RJFJR6935
2 RJK19846936
1 RJaguar36937
2 RMCD bot6938
1 RMFan16939
1 RSaunders6940
1 RTBarnard6941
3 RTC6942
1 RUVARD6943
1 RW Marloe6944
3 Ra17296945
1 Raaghu036946
1 Raanoo6947
1 Rabarberski6948
1 RachulAdmas6949
3 Radagast36950
4 Radagast836951
2 Radbug6952
1 Radiated Humanoid6953
2 Radim Baca6954
4 Radiozilla6955

6931 https://en.wikipedia.org/wiki/User:RFBailey
6932 https://en.wikipedia.org/wiki/User:RHaworth
6933 https://en.wikipedia.org/w/index.php%3ftitle=User:RISHARTHA&action=edit&redlink=1
6934 https://en.wikipedia.org/wiki/User:RJB
6935 https://en.wikipedia.org/wiki/User:RJFJR
6936 https://en.wikipedia.org/w/index.php%3ftitle=User:RJK1984&action=edit&redlink=1
6937 https://en.wikipedia.org/wiki/User:RJaguar3
6938 https://en.wikipedia.org/wiki/User:RMCD_bot
6939 https://en.wikipedia.org/wiki/User:RMFan1
6940 https://en.wikipedia.org/wiki/User:RSaunders
6941 https://en.wikipedia.org/wiki/User:RTBarnard
6942 https://en.wikipedia.org/wiki/User:RTC
6943 https://en.wikipedia.org/w/index.php%3ftitle=User:RUVARD&action=edit&redlink=1
6944 https://en.wikipedia.org/wiki/User:RW_Marloe
6945 https://en.wikipedia.org/w/index.php%3ftitle=User:Ra1729&action=edit&redlink=1
6946 https://en.wikipedia.org/w/index.php%3ftitle=User:Raaghu03&action=edit&redlink=1
6947 https://en.wikipedia.org/wiki/User:Raanoo
6948 https://en.wikipedia.org/w/index.php%3ftitle=User:Rabarberski&action=edit&redlink=1
6949 https://en.wikipedia.org/wiki/User:RachulAdmas
6950 https://en.wikipedia.org/w/index.php%3ftitle=User:Radagast3&action=edit&redlink=1
6951 https://en.wikipedia.org/wiki/User:Radagast83
6952 https://en.wikipedia.org/wiki/User:Radbug
6953 https://en.wikipedia.org/w/index.php%3ftitle=User:Radiated_Humanoid&action=edit&redlink=1
6954 https://en.wikipedia.org/w/index.php%3ftitle=User:Radim_Baca&action=edit&redlink=1
6955 https://en.wikipedia.org/wiki/User:Radiozilla


1 RadixRadiant6956
1 RafG6957
4 Rafaec6958
2 Rafaelgm6959
1 Rafaldawid6960
1 Raffamaiden6961
4 Rafikamal6962
1 Rafomo6963
1 Raghaw6964
10 Raghuv.adhepalli6965
3 Ragzouken6966
2 Rahulj123456967
6 Rahulngaikwad6968
1 Rahulov6969
2 Raiden096970
1 RainR6971
1 RainbowCrane6972
1 Rainwarrior6973
4 Rajah6974
1 Rajarammallya6975
1 Rajathsbhat6976
1 Rajesh Kumar Ukaro6977
1 Rajesh.krissh6978
11 Rajgopalbh46979
1 Rajpaj6980

6956 https://en.wikipedia.org/w/index.php%3ftitle=User:RadixRadiant&action=edit&redlink=1
6957 https://en.wikipedia.org/wiki/User:RafG
6958 https://en.wikipedia.org/wiki/User:Rafaec
6959 https://en.wikipedia.org/w/index.php%3ftitle=User:Rafaelgm&action=edit&redlink=1
6960 https://en.wikipedia.org/w/index.php%3ftitle=User:Rafaldawid&action=edit&redlink=1
6961 https://en.wikipedia.org/wiki/User:Raffamaiden
6962 https://en.wikipedia.org/w/index.php%3ftitle=User:Rafikamal&action=edit&redlink=1
6963 https://en.wikipedia.org/w/index.php%3ftitle=User:Rafomo&action=edit&redlink=1
6964 https://en.wikipedia.org/w/index.php%3ftitle=User:Raghaw&action=edit&redlink=1
6965 https://en.wikipedia.org/w/index.php%3ftitle=User:Raghuv.adhepalli&action=edit&redlink=1
6966 https://en.wikipedia.org/wiki/User:Ragzouken
6967 https://en.wikipedia.org/w/index.php%3ftitle=User:Rahulj12345&action=edit&redlink=1
6968 https://en.wikipedia.org/w/index.php%3ftitle=User:Rahulngaikwad&action=edit&redlink=1
6969 https://en.wikipedia.org/w/index.php%3ftitle=User:Rahulov&action=edit&redlink=1
6970 https://en.wikipedia.org/wiki/User:Raiden09
6971 https://en.wikipedia.org/wiki/User:RainR
6972 https://en.wikipedia.org/wiki/User:RainbowCrane
6973 https://en.wikipedia.org/wiki/User:Rainwarrior
6974 https://en.wikipedia.org/wiki/User:Rajah
6975 https://en.wikipedia.org/w/index.php%3ftitle=User:Rajarammallya&action=edit&redlink=1
6976 https://en.wikipedia.org/wiki/User:Rajathsbhat
6977 https://en.wikipedia.org/wiki/User:Rajesh_Kumar_Ukaro
6978 https://en.wikipedia.org/w/index.php%3ftitle=User:Rajesh.krissh&action=edit&redlink=1
6979 https://en.wikipedia.org/w/index.php%3ftitle=User:Rajgopalbh4&action=edit&redlink=1
6980 https://en.wikipedia.org/wiki/User:Rajpaj


2 Raju1986.patel6981
3 Raknarf446982
1 RalfKoch6983
3 Ralph Corderoy6984
1 Ramaksoud20006985
1 Ramesh Ramaiah6986
2 Rameshngbot6987
30 Rami R6988
2 RamiWissa6989
1 Ramiyam6990
1 Ramkumaran76991
1 Ramzysamman6992
1 Ranching6993
1 Random contributor6994
3 RandomAct6995
1 RandomNoob1436996
5 Randykitty6997
2 Randywombat6998
1 Ranmocy6999
3 Ranværing7000
3 Raph Levien7001
1 Raphael02027002
1 Raphaelbwiki7003
1 RapidR7004
1 Rapidflash7005

6981 https://en.wikipedia.org/w/index.php%3ftitle=User:Raju1986.patel&action=edit&redlink=1
6982 https://en.wikipedia.org/w/index.php%3ftitle=User:Raknarf44&action=edit&redlink=1
6983 https://en.wikipedia.org/wiki/User:RalfKoch
6984 https://en.wikipedia.org/w/index.php%3ftitle=User:Ralph_Corderoy&action=edit&redlink=1
6985 https://en.wikipedia.org/wiki/User:Ramaksoud2000
6986 https://en.wikipedia.org/wiki/User:Ramesh_Ramaiah
6987 https://en.wikipedia.org/wiki/User:Rameshngbot
6988 https://en.wikipedia.org/wiki/User:Rami_R
6989 https://en.wikipedia.org/w/index.php%3ftitle=User:RamiWissa&action=edit&redlink=1
6990 https://en.wikipedia.org/w/index.php%3ftitle=User:Ramiyam&action=edit&redlink=1
6991 https://en.wikipedia.org/wiki/User:Ramkumaran7
6992 https://en.wikipedia.org/w/index.php%3ftitle=User:Ramzysamman&action=edit&redlink=1
6993 https://en.wikipedia.org/wiki/User:Ranching
6994 https://en.wikipedia.org/wiki/User:Random_contributor
6995 https://en.wikipedia.org/wiki/User:RandomAct
6996 https://en.wikipedia.org/w/index.php%3ftitle=User:RandomNoob143&action=edit&redlink=1
6997 https://en.wikipedia.org/wiki/User:Randykitty
6998 https://en.wikipedia.org/wiki/User:Randywombat
6999 https://en.wikipedia.org/w/index.php%3ftitle=User:Ranmocy&action=edit&redlink=1
7000 https://en.wikipedia.org/w/index.php%3ftitle=User:Ranv%25C3%25A6ring&action=edit&redlink=1
7001 https://en.wikipedia.org/wiki/User:Raph_Levien
7002 https://en.wikipedia.org/w/index.php%3ftitle=User:Raphael0202&action=edit&redlink=1
7003 https://en.wikipedia.org/w/index.php%3ftitle=User:Raphaelbwiki&action=edit&redlink=1
7004 https://en.wikipedia.org/wiki/User:RapidR
7005 https://en.wikipedia.org/w/index.php%3ftitle=User:Rapidflash&action=edit&redlink=1

1949
Contributors

1 RaptureBot7006
2 Rasalimbu7007
1 Rasinj7008
4 Rasmus Faber7009
1 RasmusRST7010
3 Rasmusdf7011
2 Rasmusgude7012
2 Rasulnrasul7013
1 Ratfox7014
2 Ratheesh nan7015
1 Rathfelder7016
1 Ratiocinate7017
1 Rattatosk7018
1 RattusMaximus7019
7 Raul6547020
1 RaulMetumtam7021
1 Raveendra Lakpriya7022
1 Raven4x4x7023
1 Ravindra53377024
1 Raviryan847025
8 Ravishanker.bit7026
1 Rawafmail7027
2 Rax20957028
1 Raxrion7029
8 Ray Van De Walker7030

7006 https://en.wikipedia.org/wiki/User:RaptureBot
7007 https://en.wikipedia.org/w/index.php%3ftitle=User:Rasalimbu&action=edit&redlink=1
7008 https://en.wikipedia.org/w/index.php%3ftitle=User:Rasinj&action=edit&redlink=1
7009 https://en.wikipedia.org/wiki/User:Rasmus_Faber
7010 https://en.wikipedia.org/w/index.php%3ftitle=User:RasmusRST&action=edit&redlink=1
7011 https://en.wikipedia.org/wiki/User:Rasmusdf
7012 https://en.wikipedia.org/w/index.php%3ftitle=User:Rasmusgude&action=edit&redlink=1
7013 https://en.wikipedia.org/wiki/User:Rasulnrasul
7014 https://en.wikipedia.org/wiki/User:Ratfox
7015 https://en.wikipedia.org/w/index.php%3ftitle=User:Ratheesh_nan&action=edit&redlink=1
7016 https://en.wikipedia.org/wiki/User:Rathfelder
7017 https://en.wikipedia.org/wiki/User:Ratiocinate
7018 https://en.wikipedia.org/w/index.php%3ftitle=User:Rattatosk&action=edit&redlink=1
7019 https://en.wikipedia.org/wiki/User:RattusMaximus
7020 https://en.wikipedia.org/wiki/User:Raul654
7021 https://en.wikipedia.org/wiki/User:RaulMetumtam
7022 https://en.wikipedia.org/w/index.php%3ftitle=User:Raveendra_Lakpriya&action=edit&redlink=1
7023 https://en.wikipedia.org/wiki/User:Raven4x4x
7024 https://en.wikipedia.org/w/index.php%3ftitle=User:Ravindra5337&action=edit&redlink=1
7025 https://en.wikipedia.org/wiki/User:Raviryan84
7026 https://en.wikipedia.org/w/index.php%3ftitle=User:Ravishanker.bit&action=edit&redlink=1
7027 https://en.wikipedia.org/wiki/User:Rawafmail
7028 https://en.wikipedia.org/w/index.php%3ftitle=User:Rax2095&action=edit&redlink=1
7029 https://en.wikipedia.org/w/index.php%3ftitle=User:Raxrion&action=edit&redlink=1
7030 https://en.wikipedia.org/wiki/User:Ray_Van_De_Walker


1 Raymond9311187031
2 Raypereda7032
1 Razibot7033
2 Razimantv7034
3 RazorICE7035
1 Razorflame7036
1 Rbaezay7037
3 Rbarreira7038
1 Rbix7039
1 Rbka7040
2 Rboehning7041
1 Rbonvall7042
1 Rbraunwa7043
1 Rbrewer427044
1 Rcbarnes7045
190 Rcgldr7046
1 Rchrd7047
2 Rcrr7048
4 Rcsprinter1237049
1 Rctngl7050
3 Rdargent7051
1 Rdemar7052
4 Rdhettinger7053
5 Rdnk7054
1 Rdohna7055

7031 https://en.wikipedia.org/w/index.php%3ftitle=User:Raymond931118&action=edit&redlink=1
7032 https://en.wikipedia.org/w/index.php%3ftitle=User:Raypereda&action=edit&redlink=1
7033 https://en.wikipedia.org/wiki/User:Razibot
7034 https://en.wikipedia.org/wiki/User:Razimantv
7035 https://en.wikipedia.org/wiki/User:RazorICE
7036 https://en.wikipedia.org/wiki/User:Razorflame
7037 https://en.wikipedia.org/w/index.php%3ftitle=User:Rbaezay&action=edit&redlink=1
7038 https://en.wikipedia.org/wiki/User:Rbarreira
7039 https://en.wikipedia.org/wiki/User:Rbix
7040 https://en.wikipedia.org/wiki/User:Rbka
7041 https://en.wikipedia.org/w/index.php%3ftitle=User:Rboehning&action=edit&redlink=1
7042 https://en.wikipedia.org/wiki/User:Rbonvall
7043 https://en.wikipedia.org/wiki/User:Rbraunwa
7044 https://en.wikipedia.org/wiki/User:Rbrewer42
7045 https://en.wikipedia.org/wiki/User:Rcbarnes
7046 https://en.wikipedia.org/wiki/User:Rcgldr
7047 https://en.wikipedia.org/wiki/User:Rchrd
7048 https://en.wikipedia.org/w/index.php%3ftitle=User:Rcrr&action=edit&redlink=1
7049 https://en.wikipedia.org/wiki/User:Rcsprinter123
7050 https://en.wikipedia.org/wiki/User:Rctngl
7051 https://en.wikipedia.org/w/index.php%3ftitle=User:Rdargent&action=edit&redlink=1
7052 https://en.wikipedia.org/w/index.php%3ftitle=User:Rdemar&action=edit&redlink=1
7053 https://en.wikipedia.org/w/index.php%3ftitle=User:Rdhettinger&action=edit&redlink=1
7054 https://en.wikipedia.org/wiki/User:Rdnk
7055 https://en.wikipedia.org/w/index.php%3ftitle=User:Rdohna&action=edit&redlink=1


1 Rdsmith47056
7 Readams7057
4 Readyheavygo7058
2 Readytohelp7059
1 RealFakeKim7060
2 RealFoxX7061
3 Reaper Eternal7062
2 Rebecca hoysted7063
1 Rebroad7064
2 Recognizance7065
1 Reconsider the static7066
1 Red Director7067
1 Red Jay7068
2 Red Thrush7069
1 Red-eyed demon7070
14 RedBot7071
1 RedHouse187072
1 RedLyons7073
4 RedWolf7074
3 Redactyll7075
1 Reddbredd7076
1 Reddevyl7077
1 Reddi7078
2 Reddishmariposa7079
1 Redet-G7080

7056 https://en.wikipedia.org/wiki/User:Rdsmith4
7057 https://en.wikipedia.org/wiki/User:Readams
7058 https://en.wikipedia.org/wiki/User:Readyheavygo
7059 https://en.wikipedia.org/w/index.php%3ftitle=User:Readytohelp&action=edit&redlink=1
7060 https://en.wikipedia.org/wiki/User:RealFakeKim
7061 https://en.wikipedia.org/w/index.php%3ftitle=User:RealFoxX&action=edit&redlink=1
7062 https://en.wikipedia.org/wiki/User:Reaper_Eternal
7063 https://en.wikipedia.org/w/index.php%3ftitle=User:Rebecca_hoysted&action=edit&redlink=1
7064 https://en.wikipedia.org/wiki/User:Rebroad
7065 https://en.wikipedia.org/wiki/User:Recognizance
7066 https://en.wikipedia.org/wiki/User:Reconsider_the_static
7067 https://en.wikipedia.org/wiki/User:Red_Director
7068 https://en.wikipedia.org/wiki/User:Red_Jay
7069 https://en.wikipedia.org/wiki/User:Red_Thrush
7070 https://en.wikipedia.org/wiki/User:Red-eyed_demon
7071 https://en.wikipedia.org/wiki/User:RedBot
7072 https://en.wikipedia.org/wiki/User:RedHouse18
7073 https://en.wikipedia.org/wiki/User:RedLyons
7074 https://en.wikipedia.org/wiki/User:RedWolf
7075 https://en.wikipedia.org/wiki/User:Redactyll
7076 https://en.wikipedia.org/wiki/User:Reddbredd
7077 https://en.wikipedia.org/wiki/User:Reddevyl
7078 https://en.wikipedia.org/wiki/User:Reddi
7079 https://en.wikipedia.org/wiki/User:Reddishmariposa
7080 https://en.wikipedia.org/w/index.php%3ftitle=User:Redet-G&action=edit&redlink=1


3 Redjamjar7081
1 Rednas12347082
1 Redrobsche7083
1 Reedbeta7084
5 Reedy7085
1 Reedy Bot7086
2 Registreernu7087
21 Regnaron~enwiki7088
3 Regulov7089
4 Rehua7090
5 Rei-bot7091
1 Reidhoch7092
5 Reinderien7093
4 ReiniUrban7094
1 Reinyday7095
1 Rekinser7096
1 RekishiEJ7097
2 Relikangus7098
3 Rememberway7099
1 Remko~enwiki7100
1 Remohammadi7101
1 Remram447102
2 Remy B7103
1 Renamed User 08hig4er8njo9w0nti7104
6 Renamed user 7hq09uwypo226qfc7105

7081 https://en.wikipedia.org/wiki/User:Redjamjar
7082 https://en.wikipedia.org/wiki/User:Rednas1234
7083 https://en.wikipedia.org/wiki/User:Redrobsche
7084 https://en.wikipedia.org/wiki/User:Reedbeta
7085 https://en.wikipedia.org/wiki/User:Reedy
7086 https://en.wikipedia.org/wiki/User:Reedy_Bot
7087 https://en.wikipedia.org/w/index.php%3ftitle=User:Registreernu&action=edit&redlink=1
7088 https://en.wikipedia.org/wiki/User:Regnaron~enwiki
7089 https://en.wikipedia.org/wiki/User:Regulov
7090 https://en.wikipedia.org/wiki/User:Rehua
7091 https://en.wikipedia.org/wiki/User:Rei-bot
7092 https://en.wikipedia.org/wiki/User:Reidhoch
7093 https://en.wikipedia.org/wiki/User:Reinderien
7094 https://en.wikipedia.org/wiki/User:ReiniUrban
7095 https://en.wikipedia.org/wiki/User:Reinyday
7096 https://en.wikipedia.org/w/index.php%3ftitle=User:Rekinser&action=edit&redlink=1
7097 https://en.wikipedia.org/wiki/User:RekishiEJ
7098 https://en.wikipedia.org/w/index.php%3ftitle=User:Relikangus&action=edit&redlink=1
7099 https://en.wikipedia.org/wiki/User:Rememberway
7100 https://en.wikipedia.org/wiki/User:Remko~enwiki
7101 https://en.wikipedia.org/wiki/User:Remohammadi
7102 https://en.wikipedia.org/wiki/User:Remram44
7103 https://en.wikipedia.org/wiki/User:Remy_B
7104 https://en.wikipedia.org/w/index.php%3ftitle=User:Renamed_User_08hig4er8njo9w0nti&action=edit&redlink=1
7105 https://en.wikipedia.org/wiki/User:Renamed_user_7hq09uwypo226qfc


1 Renamed user 943a06d1c37106
2 Renamed user KdYpUvMgT7107
1 Renamed user U1krw4txwPvuEp3lqV382vOcqa77108
1 Renamed user Y7tw0Ef0XR7109
2 Renamed user awfwvowjvwrvnwio7110
1 RenamedUser013020137111
1 Rend~enwiki7112
1 Renice7113
2 Renku7114
3 Requestion7115
1 Reshadipoor7116
1 Resident Mario7117
1 Restname7118
10 Retimuko7119
2 Retro7120
1 RetroCraft3147121
1 Retrolord7122
1 Rettetast7123
1 Rev3nant7124
1 Revathi7277125
2 Revelation607126
3 Revent7127
1 Revolus7128
2 Revolver7129

7106 https://en.wikipedia.org/w/index.php%3ftitle=User:Renamed_user_943a06d1c3&action=edit&redlink=1
7107 https://en.wikipedia.org/w/index.php%3ftitle=User:Renamed_user_KdYpUvMgT&action=edit&redlink=1
7108 https://en.wikipedia.org/w/index.php%3ftitle=User:Renamed_user_U1krw4txwPvuEp3lqV382vOcqa7&action=edit&redlink=1
7109 https://en.wikipedia.org/w/index.php%3ftitle=User:Renamed_user_Y7tw0Ef0XR&action=edit&redlink=1
7110 https://en.wikipedia.org/wiki/User:Renamed_user_awfwvowjvwrvnwio
7111 https://en.wikipedia.org/w/index.php%3ftitle=User:RenamedUser01302013&action=edit&redlink=1
7112 https://en.wikipedia.org/w/index.php%3ftitle=User:Rend~enwiki&action=edit&redlink=1
7113 https://en.wikipedia.org/wiki/User:Renice
7114 https://en.wikipedia.org/wiki/User:Renku
7115 https://en.wikipedia.org/wiki/User:Requestion
7116 https://en.wikipedia.org/wiki/User:Reshadipoor
7117 https://en.wikipedia.org/wiki/User:Resident_Mario
7118 https://en.wikipedia.org/wiki/User:Restname
7119 https://en.wikipedia.org/wiki/User:Retimuko
7120 https://en.wikipedia.org/wiki/User:Retro
7121 https://en.wikipedia.org/wiki/User:RetroCraft314
7122 https://en.wikipedia.org/wiki/User:Retrolord
7123 https://en.wikipedia.org/wiki/User:Rettetast
7124 https://en.wikipedia.org/w/index.php%3ftitle=User:Rev3nant&action=edit&redlink=1
7125 https://en.wikipedia.org/w/index.php%3ftitle=User:Revathi727&action=edit&redlink=1
7126 https://en.wikipedia.org/w/index.php%3ftitle=User:Revelation60&action=edit&redlink=1
7127 https://en.wikipedia.org/wiki/User:Revent
7128 https://en.wikipedia.org/wiki/User:Revolus
7129 https://en.wikipedia.org/wiki/User:Revolver


5 RexNL7130
3 ReyBrujo7131
38 Reyk7132
9 Rezabot7133
1 Reziprok7134
3 Rfl7135
1 Rfl02167136
1 Rgamble7137
2 Rgdboer7138
2 Rgiuly7139
1 Rgoodermote7140
1 Rhaleblian7141
15 Rhanekom7142
1 Rhcarvalho7143
1 Rhebus7144
2 Rheun7145
1 Rhinestone K7146
1 Rhobite7147
1 Rholton7148
1 Rhyain7149
1 Rhythm7150
1 Rian1187151
1 Riana7152
12 RibotBOT7153
2 Ricardo Ferreira de Oliveira7154

7130 https://en.wikipedia.org/wiki/User:RexNL
7131 https://en.wikipedia.org/wiki/User:ReyBrujo
7132 https://en.wikipedia.org/wiki/User:Reyk
7133 https://en.wikipedia.org/wiki/User:Rezabot
7134 https://en.wikipedia.org/w/index.php%3ftitle=User:Reziprok&action=edit&redlink=1
7135 https://en.wikipedia.org/wiki/User:Rfl
7136 https://en.wikipedia.org/wiki/User:Rfl0216
7137 https://en.wikipedia.org/wiki/User:Rgamble
7138 https://en.wikipedia.org/wiki/User:Rgdboer
7139 https://en.wikipedia.org/w/index.php%3ftitle=User:Rgiuly&action=edit&redlink=1
7140 https://en.wikipedia.org/wiki/User:Rgoodermote
7141 https://en.wikipedia.org/wiki/User:Rhaleblian
7142 https://en.wikipedia.org/w/index.php%3ftitle=User:Rhanekom&action=edit&redlink=1
7143 https://en.wikipedia.org/w/index.php%3ftitle=User:Rhcarvalho&action=edit&redlink=1
7144 https://en.wikipedia.org/wiki/User:Rhebus
7145 https://en.wikipedia.org/w/index.php%3ftitle=User:Rheun&action=edit&redlink=1
7146 https://en.wikipedia.org/wiki/User:Rhinestone_K
7147 https://en.wikipedia.org/wiki/User:Rhobite
7148 https://en.wikipedia.org/wiki/User:Rholton
7149 https://en.wikipedia.org/w/index.php%3ftitle=User:Rhyain&action=edit&redlink=1
7150 https://en.wikipedia.org/wiki/User:Rhythm
7151 https://en.wikipedia.org/w/index.php%3ftitle=User:Rian118&action=edit&redlink=1
7152 https://en.wikipedia.org/wiki/User:Riana
7153 https://en.wikipedia.org/wiki/User:RibotBOT
7154 https://en.wikipedia.org/wiki/User:Ricardo_Ferreira_de_Oliveira


1 Riccardo.fabris7155
1 Riceplaytexas7156
38 Rich Farmbrough7157
2 Rich Smith7158
1 Rich.lewis7159
1 RichF7160
2 Richard Ogier7161
11 Richard Yin7162
2 RichardMarioFratini7163
1 RichardVeryard7164
1 Richardchaven7165
4 Richardj3117166
1 Richfaber7167
1 RichiH7168
1 Richie7169
2 Richmeister7170
1 Richss7171
3 Richwales7172
1 Rick Block7173
1 Rick Norwood7174
1 Rickjpelleg7175
1 Ricksmt7176
1 Ricky816827177
2 RicoRico7178
1 Riedel~enwiki7179

7155 https://en.wikipedia.org/w/index.php%3ftitle=User:Riccardo.fabris&action=edit&redlink=1
7156 https://en.wikipedia.org/wiki/User:Riceplaytexas
7157 https://en.wikipedia.org/wiki/User:Rich_Farmbrough
7158 https://en.wikipedia.org/wiki/User:Rich_Smith
7159 https://en.wikipedia.org/wiki/User:Rich.lewis
7160 https://en.wikipedia.org/wiki/User:RichF
7161 https://en.wikipedia.org/wiki/User:Richard_Ogier
7162 https://en.wikipedia.org/wiki/User:Richard_Yin
7163 https://en.wikipedia.org/w/index.php%3ftitle=User:RichardMarioFratini&action=edit&redlink=1
7164 https://en.wikipedia.org/wiki/User:RichardVeryard
7165 https://en.wikipedia.org/w/index.php%3ftitle=User:Richardchaven&action=edit&redlink=1
7166 https://en.wikipedia.org/w/index.php%3ftitle=User:Richardj311&action=edit&redlink=1
7167 https://en.wikipedia.org/w/index.php%3ftitle=User:Richfaber&action=edit&redlink=1
7168 https://en.wikipedia.org/wiki/User:RichiH
7169 https://en.wikipedia.org/wiki/User:Richie
7170 https://en.wikipedia.org/wiki/User:Richmeister
7171 https://en.wikipedia.org/wiki/User:Richss
7172 https://en.wikipedia.org/wiki/User:Richwales
7173 https://en.wikipedia.org/wiki/User:Rick_Block
7174 https://en.wikipedia.org/wiki/User:Rick_Norwood
7175 https://en.wikipedia.org/wiki/User:Rickjpelleg
7176 https://en.wikipedia.org/w/index.php%3ftitle=User:Ricksmt&action=edit&redlink=1
7177 https://en.wikipedia.org/wiki/User:Ricky81682
7178 https://en.wikipedia.org/wiki/User:RicoRico
7179 https://en.wikipedia.org/wiki/User:Riedel~enwiki


10 Riemann'sZeta7180
1 Rigadoun7181
3 Riitoken7182
1 RileyBugz7183
1 Rileyjmurray7184
1 Rinaku7185
3 Ringbang7186
4 Ripchip Bot7187
3 Ripe7188
14 Ripper2347189
2 Risc647190
1 Rishi.bedi7191
1 Risos7192
1 Ritchy7193
10 Rivascalps27194
7 Rjlipton7195
184 Rjwilmsi7196
38 RjwilmsiBot7197
3 Rks227198
3 Rl7199
1 Rlendog7200
1 Rlneumiller7201
1 Rmhermen7202
1 Rmhughes7203
1 Rmiesen7204

7180 https://en.wikipedia.org/w/index.php%3ftitle=User:Riemann%2527sZeta&action=edit&redlink=1
7181 https://en.wikipedia.org/wiki/User:Rigadoun
7182 https://en.wikipedia.org/wiki/User:Riitoken
7183 https://en.wikipedia.org/wiki/User:RileyBugz
7184 https://en.wikipedia.org/wiki/User:Rileyjmurray
7185 https://en.wikipedia.org/wiki/User:Rinaku
7186 https://en.wikipedia.org/wiki/User:Ringbang
7187 https://en.wikipedia.org/wiki/User:Ripchip_Bot
7188 https://en.wikipedia.org/wiki/User:Ripe
7189 https://en.wikipedia.org/wiki/User:Ripper234
7190 https://en.wikipedia.org/wiki/User:Risc64
7191 https://en.wikipedia.org/wiki/User:Rishi.bedi
7192 https://en.wikipedia.org/w/index.php%3ftitle=User:Risos&action=edit&redlink=1
7193 https://en.wikipedia.org/w/index.php%3ftitle=User:Ritchy&action=edit&redlink=1
7194 https://en.wikipedia.org/w/index.php%3ftitle=User:Rivascalps2&action=edit&redlink=1
7195 https://en.wikipedia.org/w/index.php%3ftitle=User:Rjlipton&action=edit&redlink=1
7196 https://en.wikipedia.org/wiki/User:Rjwilmsi
7197 https://en.wikipedia.org/wiki/User:RjwilmsiBot
7198 https://en.wikipedia.org/wiki/User:Rks22
7199 https://en.wikipedia.org/wiki/User:Rl
7200 https://en.wikipedia.org/wiki/User:Rlendog
7201 https://en.wikipedia.org/wiki/User:Rlneumiller
7202 https://en.wikipedia.org/wiki/User:Rmhermen
7203 https://en.wikipedia.org/w/index.php%3ftitle=User:Rmhughes&action=edit&redlink=1
7204 https://en.wikipedia.org/w/index.php%3ftitle=User:Rmiesen&action=edit&redlink=1


1 Rnezami7205
1 Rninty7206
1 Rnsanchez7207
2 Roachmeister7208
1 Roadrunner7209
2 Rob Bednark7210
6 Rob Zako7211
1 Robbar~enwiki7212
35 Robbot7213
1 Robd757214
2 Robert Dober7215
1 Robert Geisberger7216
25 Robert Holte7217
1 Robert Illes7218
2 Robert K S7219
1 Robert L7220
2 Robert McClenon7221
28 Robert Merkel7222
2 Robert Nitsch7223
1 Robert Samal7224
2 Robert Southworth7225
2 Robert The Rebuilder7226
1 RobertBorgersen7227
3 RobertHannah897228
1 Robertadkins007229

7205 https://en.wikipedia.org/w/index.php%3ftitle=User:Rnezami&action=edit&redlink=1
7206 https://en.wikipedia.org/wiki/User:Rninty
7207 https://en.wikipedia.org/wiki/User:Rnsanchez
7208 https://en.wikipedia.org/wiki/User:Roachmeister
7209 https://en.wikipedia.org/wiki/User:Roadrunner
7210 https://en.wikipedia.org/wiki/User:Rob_Bednark
7211 https://en.wikipedia.org/w/index.php%3ftitle=User:Rob_Zako&action=edit&redlink=1
7212 https://en.wikipedia.org/wiki/User:Robbar~enwiki
7213 https://en.wikipedia.org/wiki/User:Robbot
7214 https://en.wikipedia.org/w/index.php%3ftitle=User:Robd75&action=edit&redlink=1
7215 https://en.wikipedia.org/wiki/User:Robert_Dober
7216 https://en.wikipedia.org/w/index.php%3ftitle=User:Robert_Geisberger&action=edit&redlink=1
7217 https://en.wikipedia.org/w/index.php%3ftitle=User:Robert_Holte&action=edit&redlink=1
7218 https://en.wikipedia.org/wiki/User:Robert_Illes
7219 https://en.wikipedia.org/wiki/User:Robert_K_S
7220 https://en.wikipedia.org/w/index.php%3ftitle=User:Robert_L&action=edit&redlink=1
7221 https://en.wikipedia.org/wiki/User:Robert_McClenon
7222 https://en.wikipedia.org/wiki/User:Robert_Merkel
7223 https://en.wikipedia.org/w/index.php%3ftitle=User:Robert_Nitsch&action=edit&redlink=1
7224 https://en.wikipedia.org/w/index.php%3ftitle=User:Robert_Samal&action=edit&redlink=1
7225 https://en.wikipedia.org/wiki/User:Robert_Southworth
7226 https://en.wikipedia.org/wiki/User:Robert_The_Rebuilder
7227 https://en.wikipedia.org/wiki/User:RobertBorgersen
7228 https://en.wikipedia.org/wiki/User:RobertHannah89
7229 https://en.wikipedia.org/w/index.php%3ftitle=User:Robertadkins00&action=edit&redlink=1


51 Robertocfc17230
2 Robertvan17231
2 Robin S7232
3 Robin klein7233
233 RobinK7234
9 Robinh7235
1 Robmccoll7236
1 Robofish7237
1 Robost7238
7 RobotE7239
1 RobotJcb7240
1 Robotje7241
10 Roboto de Ajvol7242
1 Robrohan7243
1 Rocafort87244
1 Rocarvaj7245
1 Rocastelo7246
1 Rocchini7247
1 RockMFR7248
1 RockMagnetist7249
2 Rocketrod19607250
1 Rockiesfan197251
2 Rockingravi7252
1 RodC7253
1 Rodhullandemu7254

7230 https://en.wikipedia.org/wiki/User:Robertocfc1
7231 https://en.wikipedia.org/wiki/User:Robertvan1
7232 https://en.wikipedia.org/wiki/User:Robin_S
7233 https://en.wikipedia.org/wiki/User:Robin_klein
7234 https://en.wikipedia.org/wiki/User:RobinK
7235 https://en.wikipedia.org/wiki/User:Robinh
7236 https://en.wikipedia.org/w/index.php%3ftitle=User:Robmccoll&action=edit&redlink=1
7237 https://en.wikipedia.org/wiki/User:Robofish
7238 https://en.wikipedia.org/wiki/User:Robost
7239 https://en.wikipedia.org/wiki/User:RobotE
7240 https://en.wikipedia.org/wiki/User:RobotJcb
7241 https://en.wikipedia.org/wiki/User:Robotje
7242 https://en.wikipedia.org/wiki/User:Roboto_de_Ajvol
7243 https://en.wikipedia.org/w/index.php%3ftitle=User:Robrohan&action=edit&redlink=1
7244 https://en.wikipedia.org/w/index.php%3ftitle=User:Rocafort8&action=edit&redlink=1
7245 https://en.wikipedia.org/w/index.php%3ftitle=User:Rocarvaj&action=edit&redlink=1
7246 https://en.wikipedia.org/wiki/User:Rocastelo
7247 https://en.wikipedia.org/wiki/User:Rocchini
7248 https://en.wikipedia.org/wiki/User:RockMFR
7249 https://en.wikipedia.org/wiki/User:RockMagnetist
7250 https://en.wikipedia.org/wiki/User:Rocketrod1960
7251 https://en.wikipedia.org/wiki/User:Rockiesfan19
7252 https://en.wikipedia.org/w/index.php%3ftitle=User:Rockingravi&action=edit&redlink=1
7253 https://en.wikipedia.org/wiki/User:RodC
7254 https://en.wikipedia.org/wiki/User:Rodhullandemu


1 Rodion Gork7255
59 Rodion.Efremov7256
1 Rodney Topor7257
5 RodrigoAiEs7258
2 RodrigoCamargo7259
1 Rodspade7260
2 Rodzilla7261
1 Roenbaeck7262
1 Roentgenium1117263
3 Roger Hui7264
11 Rogerdpack7265
2 RogierBrussee7266
2 Rohit03037267
1 Rohra.devkishan7268
1 Roi 19867269
3 Roland wiese7270
4 RolandH7271
1 RolandKluge7272
1 Roll-Morton7273
1 Rolpa7274
1 Romain Thouvenin7275
1 Roman Munich7276
1 Roman V. Odaisky7277
2 RomanSpa7278
1 RomanZeyde7279

7255 https://en.wikipedia.org/wiki/User:Rodion_Gork
7256 https://en.wikipedia.org/wiki/User:Rodion.Efremov
7257 https://en.wikipedia.org/w/index.php%3ftitle=User:Rodney_Topor&action=edit&redlink=1
7258 https://en.wikipedia.org/w/index.php%3ftitle=User:RodrigoAiEs&action=edit&redlink=1
7259 https://en.wikipedia.org/wiki/User:RodrigoCamargo
7260 https://en.wikipedia.org/w/index.php%3ftitle=User:Rodspade&action=edit&redlink=1
7261 https://en.wikipedia.org/wiki/User:Rodzilla
7262 https://en.wikipedia.org/wiki/User:Roenbaeck
7263 https://en.wikipedia.org/wiki/User:Roentgenium111
7264 https://en.wikipedia.org/wiki/User:Roger_Hui
7265 https://en.wikipedia.org/wiki/User:Rogerdpack
7266 https://en.wikipedia.org/wiki/User:RogierBrussee
7267 https://en.wikipedia.org/w/index.php%3ftitle=User:Rohit0303&action=edit&redlink=1
7268 https://en.wikipedia.org/w/index.php%3ftitle=User:Rohra.devkishan&action=edit&redlink=1
7269 https://en.wikipedia.org/w/index.php%3ftitle=User:Roi_1986&action=edit&redlink=1
7270 https://en.wikipedia.org/w/index.php%3ftitle=User:Roland_wiese&action=edit&redlink=1
7271 https://en.wikipedia.org/wiki/User:RolandH
7272 https://en.wikipedia.org/w/index.php%3ftitle=User:RolandKluge&action=edit&redlink=1
7273 https://en.wikipedia.org/wiki/User:Roll-Morton
7274 https://en.wikipedia.org/w/index.php%3ftitle=User:Rolpa&action=edit&redlink=1
7275 https://en.wikipedia.org/wiki/User:Romain_Thouvenin
7276 https://en.wikipedia.org/w/index.php%3ftitle=User:Roman_Munich&action=edit&redlink=1
7277 https://en.wikipedia.org/w/index.php%3ftitle=User:Roman_V._Odaisky&action=edit&redlink=1
7278 https://en.wikipedia.org/wiki/User:RomanSpa
7279 https://en.wikipedia.org/w/index.php%3ftitle=User:RomanZeyde&action=edit&redlink=1


5 Romanm7280
1 Romatt7281
2 Rome was built in a day7282
1 RomeW7283
1 Ron B. Thomson7284
1 RonC7285
2 Ronaldo~enwiki7286
1 Ronaz7287
3 Ronhjones7288
1 Rony fhebrian7289
1 Ronzii7290
1 Root4(one)7291
2 Ropez7292
4 Rorro7293
6 Rory O'Kane7294
5 Rory0967295
1 Roseperrone7296
1 Rosguill7297
1 Rosiedawn7298
9 Ross Fraser7299
6 Rossdw747300
1 Rotational7301
4 Rottenpotatoes1007302
1 Rounaksalim19957303
1 Roux7304

7280 https://en.wikipedia.org/wiki/User:Romanm
7281 https://en.wikipedia.org/w/index.php%3ftitle=User:Romatt&action=edit&redlink=1
7282 https://en.wikipedia.org/w/index.php%3ftitle=User:Rome_was_built_in_a_day&action=edit&redlink=1
7283 https://en.wikipedia.org/wiki/User:RomeW
7284 https://en.wikipedia.org/wiki/User:Ron_B._Thomson
7285 https://en.wikipedia.org/w/index.php%3ftitle=User:RonC&action=edit&redlink=1
7286 https://en.wikipedia.org/wiki/User:Ronaldo~enwiki
7287 https://en.wikipedia.org/wiki/User:Ronaz
7288 https://en.wikipedia.org/wiki/User:Ronhjones
7289 https://en.wikipedia.org/wiki/User:Rony_fhebrian
7290 https://en.wikipedia.org/w/index.php%3ftitle=User:Ronzii&action=edit&redlink=1
7291 https://en.wikipedia.org/wiki/User:Root4(one)
7292 https://en.wikipedia.org/wiki/User:Ropez
7293 https://en.wikipedia.org/wiki/User:Rorro
7294 https://en.wikipedia.org/wiki/User:Rory_O%2527Kane
7295 https://en.wikipedia.org/wiki/User:Rory096
7296 https://en.wikipedia.org/w/index.php%3ftitle=User:Roseperrone&action=edit&redlink=1
7297 https://en.wikipedia.org/wiki/User:Rosguill
7298 https://en.wikipedia.org/w/index.php%3ftitle=User:Rosiedawn&action=edit&redlink=1
7299 https://en.wikipedia.org/wiki/User:Ross_Fraser
7300 https://en.wikipedia.org/wiki/User:Rossdw74
7301 https://en.wikipedia.org/wiki/User:Rotational
7302 https://en.wikipedia.org/wiki/User:Rottenpotatoes100
7303 https://en.wikipedia.org/w/index.php%3ftitle=User:Rounaksalim1995&action=edit&redlink=1
7304 https://en.wikipedia.org/wiki/User:Roux


1 Rovenhot7305
2 Roxy the dog7306
1 Roy hu7307
2 RoyBoy7308
11 RoySmith7309
2 Royote7310
1 Rozmichelle7311
8 Rp7312
2 Rponamgi7313
2 Rps7314
2 Rrburke7315
5 Rror7316
1 Rrufai7317
1 Rrusin7318
1 Rrwright7319
1 Rsanchezsaez7320
3 Rsarge7321
1 Rsathish7322
4 Rschwieb7323
1 Rshrinivas7324
31 Rspeer7325
1 Rspence12347326
3 Rsrikanth057327
2 Rssr sy7328
1 Rswarbrick7329

7305 https://en.wikipedia.org/wiki/User:Rovenhot
7306 https://en.wikipedia.org/wiki/User:Roxy_the_dog
7307 https://en.wikipedia.org/w/index.php%3ftitle=User:Roy_hu&action=edit&redlink=1
7308 https://en.wikipedia.org/wiki/User:RoyBoy
7309 https://en.wikipedia.org/wiki/User:RoySmith
7310 https://en.wikipedia.org/wiki/User:Royote
7311 https://en.wikipedia.org/wiki/User:Rozmichelle
7312 https://en.wikipedia.org/wiki/User:Rp
7313 https://en.wikipedia.org/w/index.php%3ftitle=User:Rponamgi&action=edit&redlink=1
7314 https://en.wikipedia.org/w/index.php%3ftitle=User:Rps&action=edit&redlink=1
7315 https://en.wikipedia.org/wiki/User:Rrburke
7316 https://en.wikipedia.org/wiki/User:Rror
7317 https://en.wikipedia.org/w/index.php%3ftitle=User:Rrufai&action=edit&redlink=1
7318 https://en.wikipedia.org/w/index.php%3ftitle=User:Rrusin&action=edit&redlink=1
7319 https://en.wikipedia.org/w/index.php%3ftitle=User:Rrwright&action=edit&redlink=1
7320 https://en.wikipedia.org/wiki/User:Rsanchezsaez
7321 https://en.wikipedia.org/w/index.php%3ftitle=User:Rsarge&action=edit&redlink=1
7322 https://en.wikipedia.org/w/index.php%3ftitle=User:Rsathish&action=edit&redlink=1
7323 https://en.wikipedia.org/wiki/User:Rschwieb
7324 https://en.wikipedia.org/wiki/User:Rshrinivas
7325 https://en.wikipedia.org/wiki/User:Rspeer
7326 https://en.wikipedia.org/w/index.php%3ftitle=User:Rspence1234&action=edit&redlink=1
7327 https://en.wikipedia.org/wiki/User:Rsrikanth05
7328 https://en.wikipedia.org/w/index.php%3ftitle=User:Rssr_sy&action=edit&redlink=1
7329 https://en.wikipedia.org/wiki/User:Rswarbrick


4 Rtc7330
1 Rtcasey7331
2 Ruakh7332
1 Rubbish computer7333
2 Rubenlagus7334
12 Rubinbot7335
1 Rubyjunk7336
1 Rudo.Thomas7337
1 Rufous7338
1 RuleWorks7339
1 Rulnick7340
2 Rummelsworth7341
2 Runawayangel7342
1 Runefurb7343
7 Runner19287344
1 Running7345
3 Runtime7346
2 Ruolin597347
1 Rupert Clayton7348
15 Rursus7349
2 RushilU7350
1 Ruslan.Sennov7351
1 Ruslo7352
1 Russ3Z7353
1 RussBlau7354

7330 https://en.wikipedia.org/w/index.php%3ftitle=User:Rtc&action=edit&redlink=1
7331 https://en.wikipedia.org/w/index.php%3ftitle=User:Rtcasey&action=edit&redlink=1
7332 https://en.wikipedia.org/wiki/User:Ruakh
7333 https://en.wikipedia.org/wiki/User:Rubbish_computer
7334 https://en.wikipedia.org/wiki/User:Rubenlagus
7335 https://en.wikipedia.org/wiki/User:Rubinbot
7336 https://en.wikipedia.org/wiki/User:Rubyjunk
7337 https://en.wikipedia.org/wiki/User:Rudo.Thomas
7338 https://en.wikipedia.org/wiki/User:Rufous
7339 https://en.wikipedia.org/w/index.php%3ftitle=User:RuleWorks&action=edit&redlink=1
7340 https://en.wikipedia.org/w/index.php%3ftitle=User:Rulnick&action=edit&redlink=1
7341 https://en.wikipedia.org/w/index.php%3ftitle=User:Rummelsworth&action=edit&redlink=1
7342 https://en.wikipedia.org/wiki/User:Runawayangel
7343 https://en.wikipedia.org/w/index.php%3ftitle=User:Runefurb&action=edit&redlink=1
7344 https://en.wikipedia.org/wiki/User:Runner1928
7345 https://en.wikipedia.org/wiki/User:Running
7346 https://en.wikipedia.org/wiki/User:Runtime
7347 https://en.wikipedia.org/wiki/User:Ruolin59
7348 https://en.wikipedia.org/wiki/User:Rupert_Clayton
7349 https://en.wikipedia.org/wiki/User:Rursus
7350 https://en.wikipedia.org/w/index.php%3ftitle=User:RushilU&action=edit&redlink=1
7351 https://en.wikipedia.org/wiki/User:Ruslan.Sennov
7352 https://en.wikipedia.org/w/index.php%3ftitle=User:Ruslo&action=edit&redlink=1
7353 https://en.wikipedia.org/wiki/User:Russ3Z
7354 https://en.wikipedia.org/wiki/User:RussBlau


10 RussBot7355
2 RussHolsclaw7356
1 Russell C. Sibley7357
2 Rustednickel7358
1 Ruthans7359
1 Ruthhe7360
1 Rutilant7361
164 Ruud Koot7362
2 Ruyter7363
1 Rvprasad7364
2 Rwalker7365
1 Rwwww7366
1 RxS7367
2 Ryajinor7368
1 Ryan 17297369
5 Ryan Postlethwaite7370
5 Ryan Roos7371
2 Ryan Vesey7372
1 RyanCross7373
1 RyanEberhart7374
2 Ryancook20027375
2 Ryangerard7376
2 Ryanli7377
1 Ryanmeeru7378
3 Ryanoo7379

7355 https://en.wikipedia.org/wiki/User:RussBot
7356 https://en.wikipedia.org/wiki/User:RussHolsclaw
7357 https://en.wikipedia.org/wiki/User:Russell_C._Sibley
7358 https://en.wikipedia.org/w/index.php%3ftitle=User:Rustednickel&action=edit&redlink=1
7359 https://en.wikipedia.org/w/index.php%3ftitle=User:Ruthans&action=edit&redlink=1
7360 https://en.wikipedia.org/w/index.php%3ftitle=User:Ruthhe&action=edit&redlink=1
7361 https://en.wikipedia.org/wiki/User:Rutilant
7362 https://en.wikipedia.org/wiki/User:Ruud_Koot
7363 https://en.wikipedia.org/wiki/User:Ruyter
7364 https://en.wikipedia.org/w/index.php%3ftitle=User:Rvprasad&action=edit&redlink=1
7365 https://en.wikipedia.org/wiki/User:Rwalker
7366 https://en.wikipedia.org/wiki/User:Rwwww
7367 https://en.wikipedia.org/wiki/User:RxS
7368 https://en.wikipedia.org/wiki/User:Ryajinor
7369 https://en.wikipedia.org/w/index.php%3ftitle=User:Ryan_1729&action=edit&redlink=1
7370 https://en.wikipedia.org/wiki/User:Ryan_Postlethwaite
7371 https://en.wikipedia.org/wiki/User:Ryan_Roos
7372 https://en.wikipedia.org/wiki/User:Ryan_Vesey
7373 https://en.wikipedia.org/wiki/User:RyanCross
7374 https://en.wikipedia.org/wiki/User:RyanEberhart
7375 https://en.wikipedia.org/w/index.php%3ftitle=User:Ryancook2002&action=edit&redlink=1
7376 https://en.wikipedia.org/w/index.php%3ftitle=User:Ryangerard&action=edit&redlink=1
7377 https://en.wikipedia.org/wiki/User:Ryanli
7378 https://en.wikipedia.org/w/index.php%3ftitle=User:Ryanmeeru&action=edit&redlink=1
7379 https://en.wikipedia.org/wiki/User:Ryanoo


3 Ryguasu7380
2 Ryk7381
1 Rylandcml7382
1 Rynishere7383
1 Rytyho usa7384
1 Ryugecin7385
1 Ryulong7386
5 Ryuunoshounen7387
1 S Roper7388
1 S t hathliss7389
2 S.K.7390
1 S.zahiri7391
1 S00917392
1 S2000magician7393
3 S30007394
1 SAM153415347395
1 SAgasthiyan7396
1 SCEhardt7397
1 SCriBu7398
3 SERINE7399
1 SEWilcoBot7400
1 SGBailey7401
2 SJ Defender7402
1 SKATEMANKING7403
5 SLi7404

7380 https://en.wikipedia.org/w/index.php%3ftitle=User:Ryguasu&action=edit&redlink=1
7381 https://en.wikipedia.org/wiki/User:Ryk
7382 https://en.wikipedia.org/w/index.php%3ftitle=User:Rylandcml&action=edit&redlink=1
7383 https://en.wikipedia.org/w/index.php%3ftitle=User:Rynishere&action=edit&redlink=1
7384 https://en.wikipedia.org/wiki/User:Rytyho_usa
7385 https://en.wikipedia.org/w/index.php%3ftitle=User:Ryugecin&action=edit&redlink=1
7386 https://en.wikipedia.org/wiki/User:Ryulong
7387 https://en.wikipedia.org/wiki/User:Ryuunoshounen
7388 https://en.wikipedia.org/wiki/User:S_Roper
7389 https://en.wikipedia.org/wiki/User:S_t_hathliss
7390 https://en.wikipedia.org/w/index.php%3ftitle=User:S.K.&action=edit&redlink=1
7391 https://en.wikipedia.org/w/index.php%3ftitle=User:S.zahiri&action=edit&redlink=1
7392 https://en.wikipedia.org/wiki/User:S0091
7393 https://en.wikipedia.org/w/index.php%3ftitle=User:S2000magician&action=edit&redlink=1
7394 https://en.wikipedia.org/wiki/User:S3000
7395 https://en.wikipedia.org/wiki/User:SAM15341534
7396 https://en.wikipedia.org/w/index.php%3ftitle=User:SAgasthiyan&action=edit&redlink=1
7397 https://en.wikipedia.org/wiki/User:SCEhardt
7398 https://en.wikipedia.org/wiki/User:SCriBu
7399 https://en.wikipedia.org/wiki/User:SERINE
7400 https://en.wikipedia.org/wiki/User:SEWilcoBot
7401 https://en.wikipedia.org/wiki/User:SGBailey
7402 https://en.wikipedia.org/wiki/User:SJ_Defender
7403 https://en.wikipedia.org/w/index.php%3ftitle=User:SKATEMANKING&action=edit&redlink=1
7404 https://en.wikipedia.org/wiki/User:SLi

1965
Contributors

2 SMcCandlish7405
9 SPTWriter7406
2 SQGibbon7407
1 SRFerg7408
2 SS007S7409
1 ST477410
6 STBot7411
2 STBotD7412
1 STGM7413
1 SUM17414
1 SVoid7415
1 SWAdair7416
3 Saaska7417
4 Sabalka7418
4 Sabb0ur7419
1 Sabbut7420
1 Saccenti7421
1 Sachhaduh7422
1 Sacredmint7423
1 Sadads7424
7 Sae19627425
2 Saeed.Veradi7426
1 Saeed.gh.sh7427
1 Saehry7428
1 Safadalvi7429

7405 https://en.wikipedia.org/wiki/User:SMcCandlish
7406 https://en.wikipedia.org/w/index.php%3ftitle=User:SPTWriter&action=edit&redlink=1
7407 https://en.wikipedia.org/wiki/User:SQGibbon
7408 https://en.wikipedia.org/wiki/User:SRFerg
7409 https://en.wikipedia.org/w/index.php%3ftitle=User:SS007S&action=edit&redlink=1
7410 https://en.wikipedia.org/wiki/User:ST47
7411 https://en.wikipedia.org/wiki/User:STBot
7412 https://en.wikipedia.org/wiki/User:STBotD
7413 https://en.wikipedia.org/wiki/User:STGM
7414 https://en.wikipedia.org/wiki/User:SUM1
7415 https://en.wikipedia.org/wiki/User:SVoid
7416 https://en.wikipedia.org/wiki/User:SWAdair
7417 https://en.wikipedia.org/wiki/User:Saaska
7418 https://en.wikipedia.org/w/index.php%3ftitle=User:Sabalka&action=edit&redlink=1
7419 https://en.wikipedia.org/wiki/User:Sabb0ur
7420 https://en.wikipedia.org/wiki/User:Sabbut
7421 https://en.wikipedia.org/w/index.php%3ftitle=User:Saccenti&action=edit&redlink=1
7422 https://en.wikipedia.org/w/index.php%3ftitle=User:Sachhaduh&action=edit&redlink=1
7423 https://en.wikipedia.org/w/index.php%3ftitle=User:Sacredmint&action=edit&redlink=1
7424 https://en.wikipedia.org/wiki/User:Sadads
7425 https://en.wikipedia.org/wiki/User:Sae1962
7426 https://en.wikipedia.org/wiki/User:Saeed.Veradi
7427 https://en.wikipedia.org/w/index.php%3ftitle=User:Saeed.gh.sh&action=edit&redlink=1
7428 https://en.wikipedia.org/wiki/User:Saehry
7429 https://en.wikipedia.org/w/index.php%3ftitle=User:Safadalvi&action=edit&redlink=1


1 Safek7430
2 Saforrest7431
1 Saftschachtel7432
1 Sahuagin7433
1 Saibo7434
1 Saifhhasan7435
1 Saintrain7436
1 Saitanay7437
1 Sakaimover7438
1 Sakanarm7439
7 SakeUPenn7440
1 Sakura Cartelet7441
2 Salgueiro~enwiki7442
12 Salix alba7443
1 Sallen20067444
1 Sallupandit7445
1 Sallypally7446
2 Salon Essahj7447
2 Saloni.68917448
6 Salrizvy7449
1 Saltine7450
3 Salvar7451
2 Salzahrani7452
1 Sam Derbyshire7453
12 Sam Hocevar7454

7430 https://en.wikipedia.org/w/index.php%3ftitle=User:Safek&action=edit&redlink=1
7431 https://en.wikipedia.org/wiki/User:Saforrest
7432 https://en.wikipedia.org/w/index.php%3ftitle=User:Saftschachtel&action=edit&redlink=1
7433 https://en.wikipedia.org/wiki/User:Sahuagin
7434 https://en.wikipedia.org/wiki/User:Saibo
7435 https://en.wikipedia.org/w/index.php%3ftitle=User:Saifhhasan&action=edit&redlink=1
7436 https://en.wikipedia.org/wiki/User:Saintrain
7437 https://en.wikipedia.org/w/index.php%3ftitle=User:Saitanay&action=edit&redlink=1
7438 https://en.wikipedia.org/w/index.php%3ftitle=User:Sakaimover&action=edit&redlink=1
7439 https://en.wikipedia.org/w/index.php%3ftitle=User:Sakanarm&action=edit&redlink=1
7440 https://en.wikipedia.org/wiki/User:SakeUPenn
7441 https://en.wikipedia.org/wiki/User:Sakura_Cartelet
7442 https://en.wikipedia.org/w/index.php%3ftitle=User:Salgueiro~enwiki&action=edit&redlink=1
7443 https://en.wikipedia.org/wiki/User:Salix_alba
7444 https://en.wikipedia.org/wiki/User:Sallen2006
7445 https://en.wikipedia.org/w/index.php%3ftitle=User:Sallupandit&action=edit&redlink=1
7446 https://en.wikipedia.org/w/index.php%3ftitle=User:Sallypally&action=edit&redlink=1
7447 https://en.wikipedia.org/w/index.php%3ftitle=User:Salon_Essahj&action=edit&redlink=1
7448 https://en.wikipedia.org/w/index.php%3ftitle=User:Saloni.6891&action=edit&redlink=1
7449 https://en.wikipedia.org/w/index.php%3ftitle=User:Salrizvy&action=edit&redlink=1
7450 https://en.wikipedia.org/wiki/User:Saltine
7451 https://en.wikipedia.org/w/index.php%3ftitle=User:Salvar&action=edit&redlink=1
7452 https://en.wikipedia.org/w/index.php%3ftitle=User:Salzahrani&action=edit&redlink=1
7453 https://en.wikipedia.org/wiki/User:Sam_Derbyshire
7454 https://en.wikipedia.org/wiki/User:Sam_Hocevar


1 Sam Korn7455
2 Sam nead7456
2 Sam.ldite7457
1 SamB7458
1 SamHartman7459
2 SamaelBeThouMyAlly7460
3 SamatBot7461
2 Sambayless7462
15 Samboy7463
1 SamePaul7464
2 Sameer0s7465
2 Samf4u7466
1 Samiamqqq7467
1 Samkass7468
1 Samkohn7469
5 Sammy tp1237470
1 Samois987471
1 Sampalahnuk7472
1 Sampo7473
1 Samuel Buca7474
1 SamuelRiv7475
1 Samusoft19947476
1 Samwass7477
1 SanAnMan7478
4 Sandal bandit7479

7455 https://en.wikipedia.org/wiki/User:Sam_Korn
7456 https://en.wikipedia.org/wiki/User:Sam_nead
7457 https://en.wikipedia.org/wiki/User:Sam.ldite
7458 https://en.wikipedia.org/wiki/User:SamB
7459 https://en.wikipedia.org/wiki/User:SamHartman
7460 https://en.wikipedia.org/w/index.php%3ftitle=User:SamaelBeThouMyAlly&action=edit&redlink=1
7461 https://en.wikipedia.org/wiki/User:SamatBot
7462 https://en.wikipedia.org/w/index.php%3ftitle=User:Sambayless&action=edit&redlink=1
7463 https://en.wikipedia.org/wiki/User:Samboy
7464 https://en.wikipedia.org/wiki/User:SamePaul
7465 https://en.wikipedia.org/w/index.php%3ftitle=User:Sameer0s&action=edit&redlink=1
7466 https://en.wikipedia.org/wiki/User:Samf4u
7467 https://en.wikipedia.org/wiki/User:Samiamqqq
7468 https://en.wikipedia.org/wiki/User:Samkass
7469 https://en.wikipedia.org/w/index.php%3ftitle=User:Samkohn&action=edit&redlink=1
7470 https://en.wikipedia.org/w/index.php%3ftitle=User:Sammy_tp123&action=edit&redlink=1
7471 https://en.wikipedia.org/wiki/User:Samois98
7472 https://en.wikipedia.org/w/index.php%3ftitle=User:Sampalahnuk&action=edit&redlink=1
7473 https://en.wikipedia.org/wiki/User:Sampo
7474 https://en.wikipedia.org/wiki/User:Samuel_Buca
7475 https://en.wikipedia.org/wiki/User:SamuelRiv
7476 https://en.wikipedia.org/w/index.php%3ftitle=User:Samusoft1994&action=edit&redlink=1
7477 https://en.wikipedia.org/w/index.php%3ftitle=User:Samwass&action=edit&redlink=1
7478 https://en.wikipedia.org/wiki/User:SanAnMan
7479 https://en.wikipedia.org/w/index.php%3ftitle=User:Sandal_bandit&action=edit&redlink=1


2 Sandare7480
1 SandeepGfG7481
20 Sander1237482
2 Sanderd177483
1 Sander~enwiki7484
1 Sandos7485
1 Sandyjee7486
1 Sangak7487
1 Sangameshh7488
2 Sangdol7489
3 Sanjay7427490
2 Sanketh7491
1 Sanketpatel.3010907492
1 Sanoonan7493
3 Santhoshreddym7494
1 Santhy7495
1 Santo3157496
1 SantoshBot7497
2 Saparagus7498
1 Sapeli7499
1 Sapeur7500
104 Sapphorain7501
1 Sarabjot kaur7502
1 Sarcelles7503
2 Sardanaphalus7504

7480 https://en.wikipedia.org/w/index.php%3ftitle=User:Sandare&action=edit&redlink=1
7481 https://en.wikipedia.org/w/index.php%3ftitle=User:SandeepGfG&action=edit&redlink=1
7482 https://en.wikipedia.org/wiki/User:Sander123
7483 https://en.wikipedia.org/wiki/User:Sanderd17
7484 https://en.wikipedia.org/wiki/User:Sander~enwiki
7485 https://en.wikipedia.org/wiki/User:Sandos
7486 https://en.wikipedia.org/w/index.php%3ftitle=User:Sandyjee&action=edit&redlink=1
7487 https://en.wikipedia.org/wiki/User:Sangak
7488 https://en.wikipedia.org/w/index.php%3ftitle=User:Sangameshh&action=edit&redlink=1
7489 https://en.wikipedia.org/w/index.php%3ftitle=User:Sangdol&action=edit&redlink=1
7490 https://en.wikipedia.org/w/index.php%3ftitle=User:Sanjay742&action=edit&redlink=1
7491 https://en.wikipedia.org/w/index.php%3ftitle=User:Sanketh&action=edit&redlink=1
7492 https://en.wikipedia.org/wiki/User:Sanketpatel.301090
7493 https://en.wikipedia.org/w/index.php%3ftitle=User:Sanoonan&action=edit&redlink=1
7494 https://en.wikipedia.org/w/index.php%3ftitle=User:Santhoshreddym&action=edit&redlink=1
7495 https://en.wikipedia.org/w/index.php%3ftitle=User:Santhy&action=edit&redlink=1
7496 https://en.wikipedia.org/w/index.php%3ftitle=User:Santo315&action=edit&redlink=1
7497 https://en.wikipedia.org/wiki/User:SantoshBot
7498 https://en.wikipedia.org/wiki/User:Saparagus
7499 https://en.wikipedia.org/wiki/User:Sapeli
7500 https://en.wikipedia.org/wiki/User:Sapeur
7501 https://en.wikipedia.org/wiki/User:Sapphorain
7502 https://en.wikipedia.org/w/index.php%3ftitle=User:Sarabjot_kaur&action=edit&redlink=1
7503 https://en.wikipedia.org/wiki/User:Sarcelles
7504 https://en.wikipedia.org/wiki/User:Sardanaphalus


3 Sarkar1127505
2 Sarrtaj7506
5 Sarveshbhatnagar7507
2 SashaMarievskaya7508
3 SashatoBot7509
1 SassoBot7510
1 Sat259407511
1 Satellizer7512
1 Sathyanazre7513
1 Satishjoglekar7514
1 Satya612297515
1 Satyaanveshee7516
1 Saumaun7517
5 Saung Tadashi7518
11 Saurabh.harsh7519
1 Saurabhc7520
1 Saurabhnbanerji7521
2 SausageLady7522
1 SavantEdge7523
1 Saxton7524
1 SaxxonPike7525
330 Sbalfour7526
1 Sbjesse7527
1 Sbluen7528
6 Sbmeirow7529

7505 https://en.wikipedia.org/wiki/User:Sarkar112
7506 https://en.wikipedia.org/wiki/User:Sarrtaj
7507 https://en.wikipedia.org/w/index.php%3ftitle=User:Sarveshbhatnagar&action=edit&redlink=1
7508 https://en.wikipedia.org/wiki/User:SashaMarievskaya
7509 https://en.wikipedia.org/wiki/User:SashatoBot
7510 https://en.wikipedia.org/wiki/User:SassoBot
7511 https://en.wikipedia.org/w/index.php%3ftitle=User:Sat25940&action=edit&redlink=1
7512 https://en.wikipedia.org/wiki/User:Satellizer
7513 https://en.wikipedia.org/w/index.php%3ftitle=User:Sathyanazre&action=edit&redlink=1
7514 https://en.wikipedia.org/w/index.php%3ftitle=User:Satishjoglekar&action=edit&redlink=1
7515 https://en.wikipedia.org/w/index.php%3ftitle=User:Satya61229&action=edit&redlink=1
7516 https://en.wikipedia.org/w/index.php%3ftitle=User:Satyaanveshee&action=edit&redlink=1
7517 https://en.wikipedia.org/wiki/User:Saumaun
7518 https://en.wikipedia.org/wiki/User:Saung_Tadashi
7519 https://en.wikipedia.org/w/index.php%3ftitle=User:Saurabh.harsh&action=edit&redlink=1
7520 https://en.wikipedia.org/w/index.php%3ftitle=User:Saurabhc&action=edit&redlink=1
7521 https://en.wikipedia.org/w/index.php%3ftitle=User:Saurabhnbanerji&action=edit&redlink=1
7522 https://en.wikipedia.org/wiki/User:SausageLady
7523 https://en.wikipedia.org/w/index.php%3ftitle=User:SavantEdge&action=edit&redlink=1
7524 https://en.wikipedia.org/w/index.php%3ftitle=User:Saxton&action=edit&redlink=1
7525 https://en.wikipedia.org/wiki/User:SaxxonPike
7526 https://en.wikipedia.org/wiki/User:Sbalfour
7527 https://en.wikipedia.org/w/index.php%3ftitle=User:Sbjesse&action=edit&redlink=1
7528 https://en.wikipedia.org/wiki/User:Sbluen
7529 https://en.wikipedia.org/wiki/User:Sbmeirow


1 Sboosali7530
2 Sburke7531
1 Sbwoodside7532
1 ScWizard7533
3 ScaledLizard7534
1 Scalene7535
7 Scandum7536
1 Scasa1557537
2 Scebert7538
1 Schazjmd7539
1 SchfiftyThree7540
1 Schmiddtchen7541
1 Schmiteye7542
5 Schneelocke7543
2 Schnozzinkobenstein7544
1 Scholarlyworks7545
2 Schorzman787546
3 SchreiberBike7547
1 SchreyP7548
1 Schulllz7549
1 SchumacherTechnologies7550
2 SchuminWeb7551
1 Schuncal7552
1 Schwarzbichler7553

7530 https://en.wikipedia.org/wiki/User:Sboosali
7531 https://en.wikipedia.org/wiki/User:Sburke
7532 https://en.wikipedia.org/wiki/User:Sbwoodside
7533 https://en.wikipedia.org/wiki/User:ScWizard
7534 https://en.wikipedia.org/w/index.php%3ftitle=User:ScaledLizard&action=edit&redlink=1
7535 https://en.wikipedia.org/wiki/User:Scalene
7536 https://en.wikipedia.org/wiki/User:Scandum
7537 https://en.wikipedia.org/w/index.php%3ftitle=User:Scasa155&action=edit&redlink=1
7538 https://en.wikipedia.org/wiki/User:Scebert
7539 https://en.wikipedia.org/wiki/User:Schazjmd
7540 https://en.wikipedia.org/wiki/User:SchfiftyThree
7541 https://en.wikipedia.org/wiki/User:Schmiddtchen
7542 https://en.wikipedia.org/wiki/User:Schmiteye
7543 https://en.wikipedia.org/wiki/User:Schneelocke
7544 https://en.wikipedia.org/w/index.php%3ftitle=User:Schnozzinkobenstein&action=edit&redlink=1
7545 https://en.wikipedia.org/w/index.php%3ftitle=User:Scholarlyworks&action=edit&redlink=1
7546 https://en.wikipedia.org/w/index.php%3ftitle=User:Schorzman78&action=edit&redlink=1
7547 https://en.wikipedia.org/wiki/User:SchreiberBike
7548 https://en.wikipedia.org/wiki/User:SchreyP
7549 https://en.wikipedia.org/wiki/User:Schulllz
7550 https://en.wikipedia.org/w/index.php%3ftitle=User:SchumacherTechnologies&action=edit&redlink=1
7551 https://en.wikipedia.org/wiki/User:SchuminWeb
7552 https://en.wikipedia.org/w/index.php%3ftitle=User:Schuncal&action=edit&redlink=1
7553 https://en.wikipedia.org/w/index.php%3ftitle=User:Schwarzbichler&action=edit&redlink=1


1 Scientus7554
3 Scil1007555
1 Scipex7556
1 Sciurinæ7557
2 Scjessey7558
1 Scorbunny7559
2 Scorintha7560
1 ScotXW7561
1 ScotsmanRS7562
5 Scott Paeth7563
1 Scott Ritchie7564
1 Scott sauyet7565
2 ScottBurson7566
20 ScottDNelson7567
1 ScottJ7568
2 ScottNaturals7569
3 Scottcraig7570
1 Scottkwong7571
7 Scrabbler947572
1 Scravy~enwiki7573
3 Sct727574
1 Sctfn7575
1 ScudLee7576
1 Scullder7577
1 Sculliam7578

7554 https://en.wikipedia.org/wiki/User:Scientus
7555 https://en.wikipedia.org/w/index.php%3ftitle=User:Scil100&action=edit&redlink=1
7556 https://en.wikipedia.org/wiki/User:Scipex
7557 https://en.wikipedia.org/wiki/User:Sciurin%25C3%25A6
7558 https://en.wikipedia.org/wiki/User:Scjessey
7559 https://en.wikipedia.org/w/index.php%3ftitle=User:Scorbunny&action=edit&redlink=1
7560 https://en.wikipedia.org/w/index.php%3ftitle=User:Scorintha&action=edit&redlink=1
7561 https://en.wikipedia.org/wiki/User:ScotXW
7562 https://en.wikipedia.org/wiki/User:ScotsmanRS
7563 https://en.wikipedia.org/wiki/User:Scott_Paeth
7564 https://en.wikipedia.org/wiki/User:Scott_Ritchie
7565 https://en.wikipedia.org/w/index.php%3ftitle=User:Scott_sauyet&action=edit&redlink=1
7566 https://en.wikipedia.org/w/index.php%3ftitle=User:ScottBurson&action=edit&redlink=1
7567 https://en.wikipedia.org/w/index.php%3ftitle=User:ScottDNelson&action=edit&redlink=1
7568 https://en.wikipedia.org/wiki/User:ScottJ
7569 https://en.wikipedia.org/w/index.php%3ftitle=User:ScottNaturals&action=edit&redlink=1
7570 https://en.wikipedia.org/w/index.php%3ftitle=User:Scottcraig&action=edit&redlink=1
7571 https://en.wikipedia.org/w/index.php%3ftitle=User:Scottkwong&action=edit&redlink=1
7572 https://en.wikipedia.org/w/index.php%3ftitle=User:Scrabbler94&action=edit&redlink=1
7573 https://en.wikipedia.org/wiki/User:Scravy~enwiki
7574 https://en.wikipedia.org/wiki/User:Sct72
7575 https://en.wikipedia.org/wiki/User:Sctfn
7576 https://en.wikipedia.org/wiki/User:ScudLee
7577 https://en.wikipedia.org/wiki/User:Scullder
7578 https://en.wikipedia.org/w/index.php%3ftitle=User:Sculliam&action=edit&redlink=1


1 Sd2315g86435sdsdg7579
1 Sderose7580
1 Sdornan7581
1 Sdr7582
1 Sdrucker7583
1 Seahawks03127584
1 Sean Kelly7585
2 SeanAhern7586
3 Seanandjason7587
6 Seanhalle7588
1 Seano17589
2 SeanofThePierce7590
12 Seaphoto7591
1 Seattle Jörg7592
1 Seb7593
1 SebastianHelm7594
8 Sebastiangarth7595
3 Sebbe7596
1 Sebleblanc7597
1 Secfan7598
1 Secretlondon7599
3 SectionFinale7600
1 Seet827601
1 Seffer7602
2 Seizethedave7603

7579 https://en.wikipedia.org/wiki/User:Sd2315g86435sdsdg
7580 https://en.wikipedia.org/w/index.php%3ftitle=User:Sderose&action=edit&redlink=1
7581 https://en.wikipedia.org/wiki/User:Sdornan
7582 https://en.wikipedia.org/wiki/User:Sdr
7583 https://en.wikipedia.org/w/index.php%3ftitle=User:Sdrucker&action=edit&redlink=1
7584 https://en.wikipedia.org/w/index.php%3ftitle=User:Seahawks0312&action=edit&redlink=1
7585 https://en.wikipedia.org/wiki/User:Sean_Kelly
7586 https://en.wikipedia.org/w/index.php%3ftitle=User:SeanAhern&action=edit&redlink=1
7587 https://en.wikipedia.org/w/index.php%3ftitle=User:Seanandjason&action=edit&redlink=1
7588 https://en.wikipedia.org/wiki/User:Seanhalle
7589 https://en.wikipedia.org/wiki/User:Seano1
7590 https://en.wikipedia.org/w/index.php%3ftitle=User:SeanofThePierce&action=edit&redlink=1
7591 https://en.wikipedia.org/wiki/User:Seaphoto
7592 https://en.wikipedia.org/wiki/User:Seattle_J%25C3%25B6rg
7593 https://en.wikipedia.org/wiki/User:Seb
7594 https://en.wikipedia.org/wiki/User:SebastianHelm
7595 https://en.wikipedia.org/wiki/User:Sebastiangarth
7596 https://en.wikipedia.org/wiki/User:Sebbe
7597 https://en.wikipedia.org/wiki/User:Sebleblanc
7598 https://en.wikipedia.org/wiki/User:Secfan
7599 https://en.wikipedia.org/wiki/User:Secretlondon
7600 https://en.wikipedia.org/w/index.php%3ftitle=User:SectionFinale&action=edit&redlink=1
7601 https://en.wikipedia.org/w/index.php%3ftitle=User:Seet82&action=edit&redlink=1
7602 https://en.wikipedia.org/wiki/User:Seffer
7603 https://en.wikipedia.org/w/index.php%3ftitle=User:Seizethedave&action=edit&redlink=1


2 Selecsosi7604
1 Selfawareai7605
1 Sembrestels7606
3 SemiHypercube7607
1 SemperIocundus7608
3 Senator20297609
1 Senatorpjt7610
1 Senethior4597611
1 Senfo7612
1 Senitiel7613
2 Senu7614
1 Sephiroth BCR7615
2 Sephiroth storm7616
2 Sepreece7617
1 Septagrite7618
1 Sequoia 427619
3 Seraphimblade7620
1 Serbanalex22027621
1 Serge93937622
1 Sergey Bon.7623
1 Sergey5397624
3 Serggasp7625
1 Sergio017626
3 Serketan7627
1 Serknap7628

7604 https://en.wikipedia.org/w/index.php%3ftitle=User:Selecsosi&action=edit&redlink=1
7605 https://en.wikipedia.org/wiki/User:Selfawareai
7606 https://en.wikipedia.org/w/index.php%3ftitle=User:Sembrestels&action=edit&redlink=1
7607 https://en.wikipedia.org/wiki/User:SemiHypercube
7608 https://en.wikipedia.org/wiki/User:SemperIocundus
7609 https://en.wikipedia.org/wiki/User:Senator2029
7610 https://en.wikipedia.org/wiki/User:Senatorpjt
7611 https://en.wikipedia.org/w/index.php%3ftitle=User:Senethior459&action=edit&redlink=1
7612 https://en.wikipedia.org/wiki/User:Senfo
7613 https://en.wikipedia.org/w/index.php%3ftitle=User:Senitiel&action=edit&redlink=1
7614 https://en.wikipedia.org/wiki/User:Senu
7615 https://en.wikipedia.org/wiki/User:Sephiroth_BCR
7616 https://en.wikipedia.org/wiki/User:Sephiroth_storm
7617 https://en.wikipedia.org/wiki/User:Sepreece
7618 https://en.wikipedia.org/w/index.php%3ftitle=User:Septagrite&action=edit&redlink=1
7619 https://en.wikipedia.org/wiki/User:Sequoia_42
7620 https://en.wikipedia.org/wiki/User:Seraphimblade
7621 https://en.wikipedia.org/w/index.php%3ftitle=User:Serbanalex2202&action=edit&redlink=1
7622 https://en.wikipedia.org/w/index.php%3ftitle=User:Serge9393&action=edit&redlink=1
7623 https://en.wikipedia.org/w/index.php%3ftitle=User:Sergey_Bon.&action=edit&redlink=1
7624 https://en.wikipedia.org/w/index.php%3ftitle=User:Sergey539&action=edit&redlink=1
7625 https://en.wikipedia.org/wiki/User:Serggasp
7626 https://en.wikipedia.org/w/index.php%3ftitle=User:Sergio01&action=edit&redlink=1
7627 https://en.wikipedia.org/w/index.php%3ftitle=User:Serketan&action=edit&redlink=1
7628 https://en.wikipedia.org/wiki/User:Serknap


17 Serols7629
1 Serprex7630
1 Servalo7631
3 ServiceAT7632
2 Sesse7633
1 Set theorist7634
1 Sethleb7635
1 Sethwoodworth7636
1 Sevenp7637
1 SeventyThree7638
3 Sf2227639
1 Sfan00 IMG7640
1 Sfandino7641
2 Sg313d7642
1 Sgord5127643
1 Sh.pouriya7644
2 Shabda7645
1 Shaded07646
1 Shadinamrouti7647
1 Shadow433757648
10 Shadowjams7649
5 Shafaet7650
11 Shafigoldwasser7651
2 Shailen.sobhee7652
2 ShakespeareFan007653

7629 https://en.wikipedia.org/w/index.php%3ftitle=User:Serols&action=edit&redlink=1
7630 https://en.wikipedia.org/wiki/User:Serprex
7631 https://en.wikipedia.org/wiki/User:Servalo
7632 https://en.wikipedia.org/wiki/User:ServiceAT
7633 https://en.wikipedia.org/wiki/User:Sesse
7634 https://en.wikipedia.org/w/index.php%3ftitle=User:Set_theorist&action=edit&redlink=1
7635 https://en.wikipedia.org/w/index.php%3ftitle=User:Sethleb&action=edit&redlink=1
7636 https://en.wikipedia.org/wiki/User:Sethwoodworth
7637 https://en.wikipedia.org/w/index.php%3ftitle=User:Sevenp&action=edit&redlink=1
7638 https://en.wikipedia.org/wiki/User:SeventyThree
7639 https://en.wikipedia.org/wiki/User:Sf222
7640 https://en.wikipedia.org/wiki/User:Sfan00_IMG
7641 https://en.wikipedia.org/w/index.php%3ftitle=User:Sfandino&action=edit&redlink=1
7642 https://en.wikipedia.org/w/index.php%3ftitle=User:Sg313d&action=edit&redlink=1
7643 https://en.wikipedia.org/w/index.php%3ftitle=User:Sgord512&action=edit&redlink=1
7644 https://en.wikipedia.org/w/index.php%3ftitle=User:Sh.pouriya&action=edit&redlink=1
7645 https://en.wikipedia.org/wiki/User:Shabda
7646 https://en.wikipedia.org/wiki/User:Shaded0
7647 https://en.wikipedia.org/w/index.php%3ftitle=User:Shadinamrouti&action=edit&redlink=1
7648 https://en.wikipedia.org/w/index.php%3ftitle=User:Shadow43375&action=edit&redlink=1
7649 https://en.wikipedia.org/wiki/User:Shadowjams
7650 https://en.wikipedia.org/wiki/User:Shafaet
7651 https://en.wikipedia.org/w/index.php%3ftitle=User:Shafigoldwasser&action=edit&redlink=1
7652 https://en.wikipedia.org/w/index.php%3ftitle=User:Shailen.sobhee&action=edit&redlink=1
7653 https://en.wikipedia.org/wiki/User:ShakespeareFan00


1 ShakingSpirit7654
2 Shakir287655
3 Shalevku7656
7 Shalinmomin7657
1 Shalom Yechiel7658
7 Shanes7659
1 Shantanulmp7660
1 Shantavira7661
1 SharShar7662
1 Sharat sc7663
2 Sharcho7664
1 Sharouser7665
1 Shashi200087666
4 Shashmik117667
2 Shashwat26917668
1 Shashwat9867669
4 Shaun richard7670
1 Shawn comes7671
1 ShawnVW7672
1 Shawnc7673
8 Shd~enwiki7674
4 Shearsongs787675
2 Sheepeatgrass7676
1 SheldonYoung7677
6 ShelfSkewed7678

7654 https://en.wikipedia.org/wiki/User:ShakingSpirit
7655 https://en.wikipedia.org/w/index.php%3ftitle=User:Shakir28&action=edit&redlink=1
7656 https://en.wikipedia.org/w/index.php%3ftitle=User:Shalevku&action=edit&redlink=1
7657 https://en.wikipedia.org/w/index.php%3ftitle=User:Shalinmomin&action=edit&redlink=1
7658 https://en.wikipedia.org/wiki/User:Shalom_Yechiel
7659 https://en.wikipedia.org/wiki/User:Shanes
7660 https://en.wikipedia.org/w/index.php%3ftitle=User:Shantanulmp&action=edit&redlink=1
7661 https://en.wikipedia.org/wiki/User:Shantavira
7662 https://en.wikipedia.org/wiki/User:SharShar
7663 https://en.wikipedia.org/w/index.php%3ftitle=User:Sharat_sc&action=edit&redlink=1
7664 https://en.wikipedia.org/w/index.php%3ftitle=User:Sharcho&action=edit&redlink=1
7665 https://en.wikipedia.org/wiki/User:Sharouser
7666 https://en.wikipedia.org/w/index.php%3ftitle=User:Shashi20008&action=edit&redlink=1
7667 https://en.wikipedia.org/w/index.php%3ftitle=User:Shashmik11&action=edit&redlink=1
7668 https://en.wikipedia.org/w/index.php%3ftitle=User:Shashwat2691&action=edit&redlink=1
7669 https://en.wikipedia.org/wiki/User:Shashwat986
7670 https://en.wikipedia.org/w/index.php%3ftitle=User:Shaun_richard&action=edit&redlink=1
7671 https://en.wikipedia.org/w/index.php%3ftitle=User:Shawn_comes&action=edit&redlink=1
7672 https://en.wikipedia.org/wiki/User:ShawnVW
7673 https://en.wikipedia.org/wiki/User:Shawnc
7674 https://en.wikipedia.org/wiki/User:Shd~enwiki
7675 https://en.wikipedia.org/wiki/User:Shearsongs78
7676 https://en.wikipedia.org/w/index.php%3ftitle=User:Sheepeatgrass&action=edit&redlink=1
7677 https://en.wikipedia.org/wiki/User:SheldonYoung
7678 https://en.wikipedia.org/wiki/User:ShelfSkewed


1 Shelke.disha7679
1 Shellgirl7680
4 Shellreef7681
25 Shellwood7682
11 Shen7683
1 Shenme7684
1 Shentino7685
2 Shepazu7686
1 Sherbet-head7687
4 SheriffIsInTown7688
1 Sherool7689
4 Shi Hou7690
10 Shia19937691
3 Shifra9877692
2 Shiftchange7693
1 Shikhar19867694
1 Shil887695
1 Shimpu Borthakur7696
1 ShindoNana7697
1 Shingo numtech7698
2 Shingra7699
1 Shinigami37700
2 Shinkevich.robo7701
1 Shirifan7702
1 Shirik7703

7679 https://en.wikipedia.org/w/index.php%3ftitle=User:Shelke.disha&action=edit&redlink=1
7680 https://en.wikipedia.org/wiki/User:Shellgirl
7681 https://en.wikipedia.org/wiki/User:Shellreef
7682 https://en.wikipedia.org/wiki/User:Shellwood
7683 https://en.wikipedia.org/wiki/User:Shen
7684 https://en.wikipedia.org/wiki/User:Shenme
7685 https://en.wikipedia.org/wiki/User:Shentino
7686 https://en.wikipedia.org/wiki/User:Shepazu
7687 https://en.wikipedia.org/wiki/User:Sherbet-head
7688 https://en.wikipedia.org/wiki/User:SheriffIsInTown
7689 https://en.wikipedia.org/wiki/User:Sherool
7690 https://en.wikipedia.org/wiki/User:Shi_Hou
7691 https://en.wikipedia.org/w/index.php%3ftitle=User:Shia1993&action=edit&redlink=1
7692 https://en.wikipedia.org/w/index.php%3ftitle=User:Shifra987&action=edit&redlink=1
7693 https://en.wikipedia.org/wiki/User:Shiftchange
7694 https://en.wikipedia.org/w/index.php%3ftitle=User:Shikhar1986&action=edit&redlink=1
7695 https://en.wikipedia.org/w/index.php%3ftitle=User:Shil88&action=edit&redlink=1
7696 https://en.wikipedia.org/w/index.php%3ftitle=User:Shimpu_Borthakur&action=edit&redlink=1
7697 https://en.wikipedia.org/wiki/User:ShindoNana
7698 https://en.wikipedia.org/w/index.php%3ftitle=User:Shingo_numtech&action=edit&redlink=1
7699 https://en.wikipedia.org/wiki/User:Shingra
7700 https://en.wikipedia.org/w/index.php%3ftitle=User:Shinigami3&action=edit&redlink=1
7701 https://en.wikipedia.org/w/index.php%3ftitle=User:Shinkevich.robo&action=edit&redlink=1
7702 https://en.wikipedia.org/w/index.php%3ftitle=User:Shirifan&action=edit&redlink=1
7703 https://en.wikipedia.org/wiki/User:Shirik


1 Shirt587704
1 Shivajivarma7705
10 Shiyu Ji7706
6 Shizhao7707
22 Shizny7708
1 Shlomif7709
1 Shlomo Fingerer7710
1 Shmageggy7711
1 Shmomuffin7712
1 Shmor7713
1 ShmuelKohn7714
1 Shnowflake7715
3 Sho Uemura7716
1 Shoeofdeath7717
2 Shoessss7718
1 Short Circuit7719
1 Short0007720
1 Shoueiko7721
2 Shoujun7722
1 ShouldReboot7723
1 Shredwheat7724
3 Shreeniwasiyer7725
69 Shreevatsa7726
2 Shreya paste7727
2 Shreya.bits7728

7704 https://en.wikipedia.org/wiki/User:Shirt58
7705 https://en.wikipedia.org/wiki/User:Shivajivarma
7706 https://en.wikipedia.org/wiki/User:Shiyu_Ji
7707 https://en.wikipedia.org/wiki/User:Shizhao
7708 https://en.wikipedia.org/w/index.php%3ftitle=User:Shizny&action=edit&redlink=1
7709 https://en.wikipedia.org/wiki/User:Shlomif
7710 https://en.wikipedia.org/wiki/User:Shlomo_Fingerer
7711 https://en.wikipedia.org/w/index.php%3ftitle=User:Shmageggy&action=edit&redlink=1
7712 https://en.wikipedia.org/w/index.php%3ftitle=User:Shmomuffin&action=edit&redlink=1
7713 https://en.wikipedia.org/w/index.php%3ftitle=User:Shmor&action=edit&redlink=1
7714 https://en.wikipedia.org/w/index.php%3ftitle=User:ShmuelKohn&action=edit&redlink=1
7715 https://en.wikipedia.org/wiki/User:Shnowflake
7716 https://en.wikipedia.org/wiki/User:Sho_Uemura
7717 https://en.wikipedia.org/wiki/User:Shoeofdeath
7718 https://en.wikipedia.org/w/index.php%3ftitle=User:Shoessss&action=edit&redlink=1
7719 https://en.wikipedia.org/wiki/User:Short_Circuit
7720 https://en.wikipedia.org/wiki/User:Short000
7721 https://en.wikipedia.org/w/index.php%3ftitle=User:Shoueiko&action=edit&redlink=1
7722 https://en.wikipedia.org/wiki/User:Shoujun
7723 https://en.wikipedia.org/w/index.php%3ftitle=User:ShouldReboot&action=edit&redlink=1
7724 https://en.wikipedia.org/w/index.php%3ftitle=User:Shredwheat&action=edit&redlink=1
7725 https://en.wikipedia.org/w/index.php%3ftitle=User:Shreeniwasiyer&action=edit&redlink=1
7726 https://en.wikipedia.org/wiki/User:Shreevatsa
7727 https://en.wikipedia.org/w/index.php%3ftitle=User:Shreya_paste&action=edit&redlink=1
7728 https://en.wikipedia.org/w/index.php%3ftitle=User:Shreya.bits&action=edit&redlink=1


1 Shreyasjoshis7729
10 Shuaiqicao7730
1 Shubhrasankar7731
1 Shuchung~enwiki7732
1 Shuiberts7733
1 Shuisman7734
2 Shumface7735
1 Shunjaruff7736
3 Shuri org7737
28 Shuroo7738
8 Shwaathi7739
1 Shyamal7740
1 Sialtschuler7741
1 Sibian7742
4 Siddhant7743
1 Siddhant Goel7744
1 Siddharthgondhi7745
2 Siddharthist7746
1 Sidgalt7747
4 Sidonath~enwiki7748
42 SieBot7749
2 Sietse Snel7750
2 SigbertW7751
2 Sigkill7752
1 Sigma 77753

7729 https://en.wikipedia.org/wiki/User:Shreyasjoshis
7730 https://en.wikipedia.org/w/index.php%3ftitle=User:Shuaiqicao&action=edit&redlink=1
7731 https://en.wikipedia.org/w/index.php%3ftitle=User:Shubhrasankar&action=edit&redlink=1
7732 https://en.wikipedia.org/w/index.php%3ftitle=User:Shuchung~enwiki&action=edit&redlink=1
7733 https://en.wikipedia.org/wiki/User:Shuiberts
7734 https://en.wikipedia.org/w/index.php%3ftitle=User:Shuisman&action=edit&redlink=1
7735 https://en.wikipedia.org/w/index.php%3ftitle=User:Shumface&action=edit&redlink=1
7736 https://en.wikipedia.org/w/index.php%3ftitle=User:Shunjaruff&action=edit&redlink=1
7737 https://en.wikipedia.org/w/index.php%3ftitle=User:Shuri_org&action=edit&redlink=1
7738 https://en.wikipedia.org/wiki/User:Shuroo
7739 https://en.wikipedia.org/w/index.php%3ftitle=User:Shwaathi&action=edit&redlink=1
7740 https://en.wikipedia.org/wiki/User:Shyamal
7741 https://en.wikipedia.org/w/index.php%3ftitle=User:Sialtschuler&action=edit&redlink=1
7742 https://en.wikipedia.org/w/index.php%3ftitle=User:Sibian&action=edit&redlink=1
7743 https://en.wikipedia.org/wiki/User:Siddhant
7744 https://en.wikipedia.org/wiki/User:Siddhant_Goel
7745 https://en.wikipedia.org/w/index.php%3ftitle=User:Siddharthgondhi&action=edit&redlink=1
7746 https://en.wikipedia.org/wiki/User:Siddharthist
7747 https://en.wikipedia.org/w/index.php%3ftitle=User:Sidgalt&action=edit&redlink=1
7748 https://en.wikipedia.org/w/index.php%3ftitle=User:Sidonath~enwiki&action=edit&redlink=1
7749 https://en.wikipedia.org/wiki/User:SieBot
7750 https://en.wikipedia.org/wiki/User:Sietse_Snel
7751 https://en.wikipedia.org/w/index.php%3ftitle=User:SigbertW&action=edit&redlink=1
7752 https://en.wikipedia.org/wiki/User:Sigkill
7753 https://en.wikipedia.org/wiki/User:Sigma_7


1 SigmaEpsilon7754
1 Sigmalmtd7755
1 Sigmundur7756
2 Sigurd1207757
1 Sijarvis7758
1 SikFeng7759
2 Silas S. Brown7760
36 Silly rabbit7761
1 Silnarm7762
1 Silvercast2347763
4 Silverfish7764
1 Silvonen7765
7 SilvonenBot7766
1 Silvrous7767
9 Simeon7768
8 Simetrical7769
1 Simguru7770
4 Simon Fenney7771
1 Simon047772
1 Simon127773
1 SimonAlling7774
1 SimonMayer7775
1 SimonP7776
5 SimonTrew7777
1 SimoneBrigante7778

7754 https://en.wikipedia.org/wiki/User:SigmaEpsilon
7755 https://en.wikipedia.org/wiki/User:Sigmalmtd
7756 https://en.wikipedia.org/wiki/User:Sigmundur
7757 https://en.wikipedia.org/w/index.php%3ftitle=User:Sigurd120&action=edit&redlink=1
7758 https://en.wikipedia.org/wiki/User:Sijarvis
7759 https://en.wikipedia.org/w/index.php%3ftitle=User:SikFeng&action=edit&redlink=1
7760 https://en.wikipedia.org/wiki/User:Silas_S._Brown
7761 https://en.wikipedia.org/wiki/User:Silly_rabbit
7762 https://en.wikipedia.org/w/index.php%3ftitle=User:Silnarm&action=edit&redlink=1
7763 https://en.wikipedia.org/w/index.php%3ftitle=User:Silvercast234&action=edit&redlink=1
7764 https://en.wikipedia.org/wiki/User:Silverfish
7765 https://en.wikipedia.org/wiki/User:Silvonen
7766 https://en.wikipedia.org/wiki/User:SilvonenBot
7767 https://en.wikipedia.org/wiki/User:Silvrous
7768 https://en.wikipedia.org/wiki/User:Simeon
7769 https://en.wikipedia.org/wiki/User:Simetrical
7770 https://en.wikipedia.org/wiki/User:Simguru
7771 https://en.wikipedia.org/wiki/User:Simon_Fenney
7772 https://en.wikipedia.org/wiki/User:Simon04
7773 https://en.wikipedia.org/wiki/User:Simon12
7774 https://en.wikipedia.org/w/index.php%3ftitle=User:SimonAlling&action=edit&redlink=1
7775 https://en.wikipedia.org/wiki/User:SimonMayer
7776 https://en.wikipedia.org/wiki/User:SimonP
7777 https://en.wikipedia.org/wiki/User:SimonTrew
7778 https://en.wikipedia.org/w/index.php%3ftitle=User:SimoneBrigante&action=edit&redlink=1


2 Simoneau7779
1 Simonemainardi7780
1 Simonfl7781
2 Simonfqy7782
1 Simonides7783
6 Simonpratt7784
1 Simonsarris7785
9 Simplexity227786
2 Simpsons contributor7787
2 Simsong7788
1 Simulationelson7789
2 Sina.recherche7790
1 Sinanabaei7791
1 Sinar~enwiki7792
2 SingSighSep7793
1 Singerng7794
2 Singhriju7795
1 Singleheart7796
1 Sinha K7797
1 Sintharas7798
20 SiobhanHansa7799
1 Sioux.cz7800
2 Sippinbacardi7801
1 Sir Beluga7802
4 Sir Edward V7803

7779 https://en.wikipedia.org/wiki/User:Simoneau
7780 https://en.wikipedia.org/w/index.php%3ftitle=User:Simonemainardi&action=edit&redlink=1
7781 https://en.wikipedia.org/w/index.php%3ftitle=User:Simonfl&action=edit&redlink=1
7782 https://en.wikipedia.org/wiki/User:Simonfqy
7783 https://en.wikipedia.org/wiki/User:Simonides
7784 https://en.wikipedia.org/wiki/User:Simonpratt
7785 https://en.wikipedia.org/wiki/User:Simonsarris
7786 https://en.wikipedia.org/wiki/User:Simplexity22
7787 https://en.wikipedia.org/wiki/User:Simpsons_contributor
7788 https://en.wikipedia.org/wiki/User:Simsong
7789 https://en.wikipedia.org/w/index.php%3ftitle=User:Simulationelson&action=edit&redlink=1
7790 https://en.wikipedia.org/w/index.php%3ftitle=User:Sina.recherche&action=edit&redlink=1
7791 https://en.wikipedia.org/w/index.php%3ftitle=User:Sinanabaei&action=edit&redlink=1
7792 https://en.wikipedia.org/wiki/User:Sinar~enwiki
7793 https://en.wikipedia.org/w/index.php%3ftitle=User:SingSighSep&action=edit&redlink=1
7794 https://en.wikipedia.org/wiki/User:Singerng
7795 https://en.wikipedia.org/w/index.php%3ftitle=User:Singhriju&action=edit&redlink=1
7796 https://en.wikipedia.org/w/index.php%3ftitle=User:Singleheart&action=edit&redlink=1
7797 https://en.wikipedia.org/w/index.php%3ftitle=User:Sinha_K&action=edit&redlink=1
7798 https://en.wikipedia.org/wiki/User:Sintharas
7799 https://en.wikipedia.org/wiki/User:SiobhanHansa
7800 https://en.wikipedia.org/w/index.php%3ftitle=User:Sioux.cz&action=edit&redlink=1
7801 https://en.wikipedia.org/w/index.php%3ftitle=User:Sippinbacardi&action=edit&redlink=1
7802 https://en.wikipedia.org/wiki/User:Sir_Beluga
7803 https://en.wikipedia.org/wiki/User:Sir_Edward_V

1981
Contributors

5 Sir Nicholas de Mimsy-Porpington7804
1 SirJective7805
1 SirPigwig7806
2 Sira957807
2 Sirex987808
1 Sirfurboy7809
1 SiriusB7810
2 Siroxo7811
1 Sirreallysam7812
1 Sisodia7813
1 Sith Lord 137814
1 Sivaselviselvam7815
1 SixWingedSeraph7816
2 SixelaNoegip7817
1 Sizeofint7818
1 Sj7819
6 Sjakkalle7820
1 Sjones237821
1 Sjtu.bzhu7822
4 Sjö7823
8 Sk26137824
1 Skapur7825
2 Skaraoke7826
1 Skarz7827
2 Skatalites7828

7804 https://en.wikipedia.org/wiki/User:Sir_Nicholas_de_Mimsy-Porpington
7805 https://en.wikipedia.org/wiki/User:SirJective
7806 https://en.wikipedia.org/wiki/User:SirPigwig
7807 https://en.wikipedia.org/w/index.php%3ftitle=User:Sira95&action=edit&redlink=1
7808 https://en.wikipedia.org/wiki/User:Sirex98
7809 https://en.wikipedia.org/wiki/User:Sirfurboy
7810 https://en.wikipedia.org/wiki/User:SiriusB
7811 https://en.wikipedia.org/wiki/User:Siroxo
7812 https://en.wikipedia.org/w/index.php%3ftitle=User:Sirreallysam&action=edit&redlink=1
7813 https://en.wikipedia.org/wiki/User:Sisodia
7814 https://en.wikipedia.org/w/index.php%3ftitle=User:Sith_Lord_13&action=edit&redlink=1
7815 https://en.wikipedia.org/w/index.php%3ftitle=User:Sivaselviselvam&action=edit&redlink=1
7816 https://en.wikipedia.org/wiki/User:SixWingedSeraph
7817 https://en.wikipedia.org/w/index.php%3ftitle=User:SixelaNoegip&action=edit&redlink=1
7818 https://en.wikipedia.org/w/index.php%3ftitle=User:Sizeofint&action=edit&redlink=1
7819 https://en.wikipedia.org/wiki/User:Sj
7820 https://en.wikipedia.org/wiki/User:Sjakkalle
7821 https://en.wikipedia.org/wiki/User:Sjones23
7822 https://en.wikipedia.org/w/index.php%3ftitle=User:Sjtu.bzhu&action=edit&redlink=1
7823 https://en.wikipedia.org/wiki/User:Sj%25C3%25B6
7824 https://en.wikipedia.org/w/index.php%3ftitle=User:Sk2613&action=edit&redlink=1
7825 https://en.wikipedia.org/wiki/User:Skapur
7826 https://en.wikipedia.org/wiki/User:Skaraoke
7827 https://en.wikipedia.org/wiki/User:Skarz
7828 https://en.wikipedia.org/w/index.php%3ftitle=User:Skatalites&action=edit&redlink=1

1 Skatche7829
1 Skchawala7830
1 Skeptical scientist7831
2 Sketch-The-Fox7832
1 SkiddyX7833
1 Skier Dude7834
1 Skinsmoke7835
1 Skippydo7836
1 Skittleys7837
1 Skizzik7838
1 Sklender7839
2 Skolemnormalforma7840
3 Skwa7841
1 SkyWalker7842
8 Skyerise7843
1 Skysmith7844
1 Skyy20107845
1 SlackerMom7846
2 Sladen7847
2 Slakr7848
1 SlamDiego7849
1 Slamb7850
1 Slaniel7851
1 Slarson7852
1 Slartibartfastibast7853

7829 https://en.wikipedia.org/wiki/User:Skatche
7830 https://en.wikipedia.org/w/index.php%3ftitle=User:Skchawala&action=edit&redlink=1
7831 https://en.wikipedia.org/wiki/User:Skeptical_scientist
7832 https://en.wikipedia.org/w/index.php%3ftitle=User:Sketch-The-Fox&action=edit&redlink=1
7833 https://en.wikipedia.org/w/index.php%3ftitle=User:SkiddyX&action=edit&redlink=1
7834 https://en.wikipedia.org/wiki/User:Skier_Dude
7835 https://en.wikipedia.org/wiki/User:Skinsmoke
7836 https://en.wikipedia.org/wiki/User:Skippydo
7837 https://en.wikipedia.org/wiki/User:Skittleys
7838 https://en.wikipedia.org/wiki/User:Skizzik
7839 https://en.wikipedia.org/wiki/User:Sklender
7840 https://en.wikipedia.org/w/index.php%3ftitle=User:Skolemnormalforma&action=edit&redlink=1
7841 https://en.wikipedia.org/wiki/User:Skwa
7842 https://en.wikipedia.org/wiki/User:SkyWalker
7843 https://en.wikipedia.org/wiki/User:Skyerise
7844 https://en.wikipedia.org/wiki/User:Skysmith
7845 https://en.wikipedia.org/w/index.php%3ftitle=User:Skyy2010&action=edit&redlink=1
7846 https://en.wikipedia.org/wiki/User:SlackerMom
7847 https://en.wikipedia.org/wiki/User:Sladen
7848 https://en.wikipedia.org/wiki/User:Slakr
7849 https://en.wikipedia.org/wiki/User:SlamDiego
7850 https://en.wikipedia.org/wiki/User:Slamb
7851 https://en.wikipedia.org/wiki/User:Slaniel
7852 https://en.wikipedia.org/wiki/User:Slarson
7853 https://en.wikipedia.org/wiki/User:Slartibartfastibast

2 Slashme7854
1 Slaunger7855
2 Sleepinj7856
1 Sleepyrobot7857
10 Sleske7858
1 Slightsmile7859
18 Sligocki7860
1 Slon027861
1 Slonkar7862
2 SlowJog7863
1 SlumdogAramis7864
1 Sluzzelin7865
3 SlvrKy7866
1 Smack7867
2 Smaines7868
2 Smalljim7869
4 Smallman12q7870
1 Smartech~enwiki7871
5 Smasongarrison7872
3 Smckenna9997873
1 Smclemon7874
1 Smith6097875
1 Smj21187876
19 Smjg7877
5 Smk655367878

7854 https://en.wikipedia.org/wiki/User:Slashme
7855 https://en.wikipedia.org/wiki/User:Slaunger
7856 https://en.wikipedia.org/w/index.php%3ftitle=User:Sleepinj&action=edit&redlink=1
7857 https://en.wikipedia.org/wiki/User:Sleepyrobot
7858 https://en.wikipedia.org/wiki/User:Sleske
7859 https://en.wikipedia.org/wiki/User:Slightsmile
7860 https://en.wikipedia.org/wiki/User:Sligocki
7861 https://en.wikipedia.org/wiki/User:Slon02
7862 https://en.wikipedia.org/w/index.php%3ftitle=User:Slonkar&action=edit&redlink=1
7863 https://en.wikipedia.org/w/index.php%3ftitle=User:SlowJog&action=edit&redlink=1
7864 https://en.wikipedia.org/wiki/User:SlumdogAramis
7865 https://en.wikipedia.org/wiki/User:Sluzzelin
7866 https://en.wikipedia.org/wiki/User:SlvrKy
7867 https://en.wikipedia.org/wiki/User:Smack
7868 https://en.wikipedia.org/wiki/User:Smaines
7869 https://en.wikipedia.org/wiki/User:Smalljim
7870 https://en.wikipedia.org/wiki/User:Smallman12q
7871 https://en.wikipedia.org/w/index.php%3ftitle=User:Smartech~enwiki&action=edit&redlink=1
7872 https://en.wikipedia.org/wiki/User:Smasongarrison
7873 https://en.wikipedia.org/w/index.php%3ftitle=User:Smckenna999&action=edit&redlink=1
7874 https://en.wikipedia.org/w/index.php%3ftitle=User:Smclemon&action=edit&redlink=1
7875 https://en.wikipedia.org/wiki/User:Smith609
7876 https://en.wikipedia.org/w/index.php%3ftitle=User:Smj2118&action=edit&redlink=1
7877 https://en.wikipedia.org/wiki/User:Smjg
7878 https://en.wikipedia.org/wiki/User:Smk65536

4 Smmurphy7879
2 Smoke737880
1 SmokingCrop7881
1 Smpcole7882
1 Smremde7883
1 Smurfix7884
2 Smyth7885
1 Snailwalker7886
1 Snasna7887
1 Snaxe9207888
1 Sneftel7889
5 Snehalshekatkar7890
3 Snickel117891
11 Sniedo7892
2 Sniffnoy7893
1 Sniperboy7227894
2 SnippyHolloW7895
1 Snood12057896
1 Snookerr7897
2 Snoops~enwiki7898
1 Snori7899
5 Snotbot7900
2 Snow Blizzard7901
1 SnowFire7902
2 Snowgeek227903

7879 https://en.wikipedia.org/wiki/User:Smmurphy
7880 https://en.wikipedia.org/w/index.php%3ftitle=User:Smoke73&action=edit&redlink=1
7881 https://en.wikipedia.org/w/index.php%3ftitle=User:SmokingCrop&action=edit&redlink=1
7882 https://en.wikipedia.org/w/index.php%3ftitle=User:Smpcole&action=edit&redlink=1
7883 https://en.wikipedia.org/wiki/User:Smremde
7884 https://en.wikipedia.org/wiki/User:Smurfix
7885 https://en.wikipedia.org/wiki/User:Smyth
7886 https://en.wikipedia.org/wiki/User:Snailwalker
7887 https://en.wikipedia.org/w/index.php%3ftitle=User:Snasna&action=edit&redlink=1
7888 https://en.wikipedia.org/wiki/User:Snaxe920
7889 https://en.wikipedia.org/wiki/User:Sneftel
7890 https://en.wikipedia.org/wiki/User:Snehalshekatkar
7891 https://en.wikipedia.org/w/index.php%3ftitle=User:Snickel11&action=edit&redlink=1
7892 https://en.wikipedia.org/wiki/User:Sniedo
7893 https://en.wikipedia.org/w/index.php%3ftitle=User:Sniffnoy&action=edit&redlink=1
7894 https://en.wikipedia.org/wiki/User:Sniperboy722
7895 https://en.wikipedia.org/w/index.php%3ftitle=User:SnippyHolloW&action=edit&redlink=1
7896 https://en.wikipedia.org/wiki/User:Snood1205
7897 https://en.wikipedia.org/wiki/User:Snookerr
7898 https://en.wikipedia.org/w/index.php%3ftitle=User:Snoops~enwiki&action=edit&redlink=1
7899 https://en.wikipedia.org/wiki/User:Snori
7900 https://en.wikipedia.org/wiki/User:Snotbot
7901 https://en.wikipedia.org/wiki/User:Snow_Blizzard
7902 https://en.wikipedia.org/wiki/User:SnowFire
7903 https://en.wikipedia.org/wiki/User:Snowgeek22

4 Snowolf7904
1 Snoyes7905
1 SoCalSuperEagle7906
2 SoSivr7907
2 Sobreira7908
1 Sofia karampataki7909
1 Softtest1237910
1 Sohomdeep7911
2 Sokari7912
2 Solberg7913
2 Solde97914
1 SoledadKabocha7915
2 Solidpoint7916
1 SolifyDolphin~enwiki7917
1 Soliloquial7918
1 Solitude7919
15 Solomon79687920
1 Solon.KR7921
1 Solsan887922
4 Sombra Corp.7923
2 Some Gadget Geek7924
1 Some P. Erson7925
3 Some jerk on the Internet7926
1 Some standardized rigour7927
3 Somenathmaji7928

7904 https://en.wikipedia.org/wiki/User:Snowolf
7905 https://en.wikipedia.org/wiki/User:Snoyes
7906 https://en.wikipedia.org/wiki/User:SoCalSuperEagle
7907 https://en.wikipedia.org/wiki/User:SoSivr
7908 https://en.wikipedia.org/wiki/User:Sobreira
7909 https://en.wikipedia.org/w/index.php%3ftitle=User:Sofia_karampataki&action=edit&redlink=1
7910 https://en.wikipedia.org/wiki/User:Softtest123
7911 https://en.wikipedia.org/w/index.php%3ftitle=User:Sohomdeep&action=edit&redlink=1
7912 https://en.wikipedia.org/wiki/User:Sokari
7913 https://en.wikipedia.org/wiki/User:Solberg
7914 https://en.wikipedia.org/wiki/User:Solde9
7915 https://en.wikipedia.org/wiki/User:SoledadKabocha
7916 https://en.wikipedia.org/wiki/User:Solidpoint
7917 https://en.wikipedia.org/w/index.php%3ftitle=User:SolifyDolphin~enwiki&action=edit&redlink=1
7918 https://en.wikipedia.org/wiki/User:Soliloquial
7919 https://en.wikipedia.org/wiki/User:Solitude
7920 https://en.wikipedia.org/wiki/User:Solomon7968
7921 https://en.wikipedia.org/wiki/User:Solon.KR
7922 https://en.wikipedia.org/w/index.php%3ftitle=User:Solsan88&action=edit&redlink=1
7923 https://en.wikipedia.org/wiki/User:Sombra_Corp.
7924 https://en.wikipedia.org/wiki/User:Some_Gadget_Geek
7925 https://en.wikipedia.org/wiki/User:Some_P._Erson
7926 https://en.wikipedia.org/wiki/User:Some_jerk_on_the_Internet
7927 https://en.wikipedia.org/wiki/User:Some_standardized_rigour
7928 https://en.wikipedia.org/w/index.php%3ftitle=User:Somenathmaji&action=edit&redlink=1

2 Someone else7929
1 Sometree7930
1 Somtechcue7931
1 Sonett727932
1 Sonia7933
1 Sonictrey7934
2 Sonicwave327935
1 Sonicyouth867936
1 SoniyaR7937
9 Sonjaaa7938
1 Sootyboy7939
3 Sophus Bie7940
1 Sorrel7941
3 Soulbot7942
12 Soultaco7943
1 Soumyasch7944
1 Soupz7945
1 Sourabh Katagade7946
1 Sourasis7947
1 South Texas Waterboy7948
1 SouthernNights7949
3 SoxBot III7950
2 Soytuny7951
3 SpNeo7952
1 SpaceMoose7953

7929 https://en.wikipedia.org/wiki/User:Someone_else
7930 https://en.wikipedia.org/wiki/User:Sometree
7931 https://en.wikipedia.org/w/index.php%3ftitle=User:Somtechcue&action=edit&redlink=1
7932 https://en.wikipedia.org/wiki/User:Sonett72
7933 https://en.wikipedia.org/wiki/User:Sonia
7934 https://en.wikipedia.org/wiki/User:Sonictrey
7935 https://en.wikipedia.org/wiki/User:Sonicwave32
7936 https://en.wikipedia.org/wiki/User:Sonicyouth86
7937 https://en.wikipedia.org/wiki/User:SoniyaR
7938 https://en.wikipedia.org/wiki/User:Sonjaaa
7939 https://en.wikipedia.org/wiki/User:Sootyboy
7940 https://en.wikipedia.org/wiki/User:Sophus_Bie
7941 https://en.wikipedia.org/wiki/User:Sorrel
7942 https://en.wikipedia.org/wiki/User:Soulbot
7943 https://en.wikipedia.org/wiki/User:Soultaco
7944 https://en.wikipedia.org/wiki/User:Soumyasch
7945 https://en.wikipedia.org/w/index.php%3ftitle=User:Soupz&action=edit&redlink=1
7946 https://en.wikipedia.org/w/index.php%3ftitle=User:Sourabh_Katagade&action=edit&redlink=1
7947 https://en.wikipedia.org/w/index.php%3ftitle=User:Sourasis&action=edit&redlink=1
7948 https://en.wikipedia.org/wiki/User:South_Texas_Waterboy
7949 https://en.wikipedia.org/wiki/User:SouthernNights
7950 https://en.wikipedia.org/wiki/User:SoxBot_III
7951 https://en.wikipedia.org/wiki/User:Soytuny
7952 https://en.wikipedia.org/wiki/User:SpNeo
7953 https://en.wikipedia.org/wiki/User:SpaceMoose

1 Spacefarer7954
1 Spacemanaki7955
1 Spacetimewave7956
1 Spadgos7957
1 Spammyammy7958
1 Spangineer7959
1 Spariggio827960
1 Sparklehunt7961
1 Sparshong7962
1 Spearhead7963
1 Specs1127964
3 Speculatrix7965
1 SpellingBot7966
1 Spellsinger1807967
3 Spencer7968
8 Sperling7969
1 Sperxios7970
1 Sphenocorona7971
3 Spicemix7972
5 Spidern7973
3 Spiff~enwiki7974
1 Spikebrennan7975
5 Spinningspark7976
2 Spiritia7977
1 Spitfire7978

7954 https://en.wikipedia.org/wiki/User:Spacefarer
7955 https://en.wikipedia.org/w/index.php%3ftitle=User:Spacemanaki&action=edit&redlink=1
7956 https://en.wikipedia.org/wiki/User:Spacetimewave
7957 https://en.wikipedia.org/w/index.php%3ftitle=User:Spadgos&action=edit&redlink=1
7958 https://en.wikipedia.org/w/index.php%3ftitle=User:Spammyammy&action=edit&redlink=1
7959 https://en.wikipedia.org/wiki/User:Spangineer
7960 https://en.wikipedia.org/wiki/User:Spariggio82
7961 https://en.wikipedia.org/w/index.php%3ftitle=User:Sparklehunt&action=edit&redlink=1
7962 https://en.wikipedia.org/w/index.php%3ftitle=User:Sparshong&action=edit&redlink=1
7963 https://en.wikipedia.org/w/index.php%3ftitle=User:Spearhead&action=edit&redlink=1
7964 https://en.wikipedia.org/wiki/User:Specs112
7965 https://en.wikipedia.org/wiki/User:Speculatrix
7966 https://en.wikipedia.org/wiki/User:SpellingBot
7967 https://en.wikipedia.org/wiki/User:Spellsinger180
7968 https://en.wikipedia.org/wiki/User:Spencer
7969 https://en.wikipedia.org/wiki/User:Sperling
7970 https://en.wikipedia.org/wiki/User:Sperxios
7971 https://en.wikipedia.org/wiki/User:Sphenocorona
7972 https://en.wikipedia.org/wiki/User:Spicemix
7973 https://en.wikipedia.org/wiki/User:Spidern
7974 https://en.wikipedia.org/wiki/User:Spiff~enwiki
7975 https://en.wikipedia.org/wiki/User:Spikebrennan
7976 https://en.wikipedia.org/wiki/User:Spinningspark
7977 https://en.wikipedia.org/wiki/User:Spiritia
7978 https://en.wikipedia.org/wiki/User:Spitfire

2 Spitfire85207979
1 Spitzak7980
2 Spl7981
1 Splatg7982
1 Splessnosi7983
1 Splttingatms7984
2 Spock of Vulcan7985
9 Spoon!7986
4 SporkBot7987
1 Spotsoft7988
1 Spottedowl7989
2 Sprhodes7990
1 Spur7991
5 SpuriousQ7992
2 Spy-cicle7993
2 SpyMagician7994
2 Sqasim1907995
1 Squids and Chips7996
1 Squire557997
1 Squizzz~enwiki7998
1 Sr3d7999
1 Srbislav Nesic8000
1 Srchulo8001
2 Srchvrs8002
1 Sreejajayesh8003

7979 https://en.wikipedia.org/wiki/User:Spitfire8520
7980 https://en.wikipedia.org/w/index.php%3ftitle=User:Spitzak&action=edit&redlink=1
7981 https://en.wikipedia.org/wiki/User:Spl
7982 https://en.wikipedia.org/wiki/User:Splatg
7983 https://en.wikipedia.org/w/index.php%3ftitle=User:Splessnosi&action=edit&redlink=1
7984 https://en.wikipedia.org/w/index.php%3ftitle=User:Splttingatms&action=edit&redlink=1
7985 https://en.wikipedia.org/wiki/User:Spock_of_Vulcan
7986 https://en.wikipedia.org/wiki/User:Spoon!
7987 https://en.wikipedia.org/wiki/User:SporkBot
7988 https://en.wikipedia.org/w/index.php%3ftitle=User:Spotsoft&action=edit&redlink=1
7989 https://en.wikipedia.org/wiki/User:Spottedowl
7990 https://en.wikipedia.org/wiki/User:Sprhodes
7991 https://en.wikipedia.org/wiki/User:Spur
7992 https://en.wikipedia.org/wiki/User:SpuriousQ
7993 https://en.wikipedia.org/wiki/User:Spy-cicle
7994 https://en.wikipedia.org/wiki/User:SpyMagician
7995 https://en.wikipedia.org/w/index.php%3ftitle=User:Sqasim190&action=edit&redlink=1
7996 https://en.wikipedia.org/wiki/User:Squids_and_Chips
7997 https://en.wikipedia.org/w/index.php%3ftitle=User:Squire55&action=edit&redlink=1
7998 https://en.wikipedia.org/wiki/User:Squizzz~enwiki
7999 https://en.wikipedia.org/w/index.php%3ftitle=User:Sr3d&action=edit&redlink=1
8000 https://en.wikipedia.org/w/index.php%3ftitle=User:Srbislav_Nesic&action=edit&redlink=1
8001 https://en.wikipedia.org/w/index.php%3ftitle=User:Srchulo&action=edit&redlink=1
8002 https://en.wikipedia.org/wiki/User:Srchvrs
8003 https://en.wikipedia.org/w/index.php%3ftitle=User:Sreejajayesh&action=edit&redlink=1

9 Srich329778004
1 Sriharsh12348005
1 Srikeit8006
1 Srinivasasha8007
1 Srleffler8008
3 Sro238009
4 Srossd8010
2 Srrrgei8011
1 Ssbohio8012
1 Ssd8013
1 Ssnseawolf8014
1 Ssokota8015
8 Ssolanki078016
2 Sss418017
1 Ssudarshaniitb8018
1 Ssx3max8019
1 Ssydyc8020
2 StAnselm8021
5 Stack8022
1 Staecker8023
1 Stan Marian C-tin8024
1 Stan Shebs8025
4 StanLeeP8026
1 StanfordProgrammer8027
1 Stangaa8028

8004 https://en.wikipedia.org/wiki/User:Srich32977
8005 https://en.wikipedia.org/wiki/User:Sriharsh1234
8006 https://en.wikipedia.org/wiki/User:Srikeit
8007 https://en.wikipedia.org/wiki/User:Srinivasasha
8008 https://en.wikipedia.org/wiki/User:Srleffler
8009 https://en.wikipedia.org/wiki/User:Sro23
8010 https://en.wikipedia.org/w/index.php%3ftitle=User:Srossd&action=edit&redlink=1
8011 https://en.wikipedia.org/w/index.php%3ftitle=User:Srrrgei&action=edit&redlink=1
8012 https://en.wikipedia.org/wiki/User:Ssbohio
8013 https://en.wikipedia.org/wiki/User:Ssd
8014 https://en.wikipedia.org/wiki/User:Ssnseawolf
8015 https://en.wikipedia.org/wiki/User:Ssokota
8016 https://en.wikipedia.org/w/index.php%3ftitle=User:Ssolanki07&action=edit&redlink=1
8017 https://en.wikipedia.org/w/index.php%3ftitle=User:Sss41&action=edit&redlink=1
8018 https://en.wikipedia.org/w/index.php%3ftitle=User:Ssudarshaniitb&action=edit&redlink=1
8019 https://en.wikipedia.org/w/index.php%3ftitle=User:Ssx3max&action=edit&redlink=1
8020 https://en.wikipedia.org/w/index.php%3ftitle=User:Ssydyc&action=edit&redlink=1
8021 https://en.wikipedia.org/wiki/User:StAnselm
8022 https://en.wikipedia.org/wiki/User:Stack
8023 https://en.wikipedia.org/wiki/User:Staecker
8024 https://en.wikipedia.org/w/index.php%3ftitle=User:Stan_Marian_C-tin&action=edit&redlink=1
8025 https://en.wikipedia.org/wiki/User:Stan_Shebs
8026 https://en.wikipedia.org/wiki/User:StanLeeP
8027 https://en.wikipedia.org/wiki/User:StanfordProgrammer
8028 https://en.wikipedia.org/w/index.php%3ftitle=User:Stangaa&action=edit&redlink=1

1 Stannered8029
3 Stannic8030
2 Staplesauce8031
1 Stardust82128032
2 Stargazer71218033
1 Starkana8034
41 StarryGrandma8035
1 Starylon8036
47 Staszek Lem8037
6 StaticElectricity8038
1 Staticshakedown8039
5 Stdazi8040
2 Stderr.dk8041
3 StealthFox8042
1 Stebbins8043
2 Sted8044
2 Steel19438045
1 SteelPangolin8046
2 Steelgraham8047
1 Steemanrene8048
1 SteenthIWbot8049
1 Stefan Knauf8050
1 Stefan-S8051
1 Stefan.karpinski8052
1 StefanOllinger8053

8029 https://en.wikipedia.org/wiki/User:Stannered
8030 https://en.wikipedia.org/wiki/User:Stannic
8031 https://en.wikipedia.org/w/index.php%3ftitle=User:Staplesauce&action=edit&redlink=1
8032 https://en.wikipedia.org/wiki/User:Stardust8212
8033 https://en.wikipedia.org/w/index.php%3ftitle=User:Stargazer7121&action=edit&redlink=1
8034 https://en.wikipedia.org/w/index.php%3ftitle=User:Starkana&action=edit&redlink=1
8035 https://en.wikipedia.org/wiki/User:StarryGrandma
8036 https://en.wikipedia.org/wiki/User:Starylon
8037 https://en.wikipedia.org/wiki/User:Staszek_Lem
8038 https://en.wikipedia.org/w/index.php%3ftitle=User:StaticElectricity&action=edit&redlink=1
8039 https://en.wikipedia.org/wiki/User:Staticshakedown
8040 https://en.wikipedia.org/wiki/User:Stdazi
8041 https://en.wikipedia.org/w/index.php%3ftitle=User:Stderr.dk&action=edit&redlink=1
8042 https://en.wikipedia.org/wiki/User:StealthFox
8043 https://en.wikipedia.org/wiki/User:Stebbins
8044 https://en.wikipedia.org/w/index.php%3ftitle=User:Sted&action=edit&redlink=1
8045 https://en.wikipedia.org/wiki/User:Steel1943
8046 https://en.wikipedia.org/wiki/User:SteelPangolin
8047 https://en.wikipedia.org/wiki/User:Steelgraham
8048 https://en.wikipedia.org/w/index.php%3ftitle=User:Steemanrene&action=edit&redlink=1
8049 https://en.wikipedia.org/wiki/User:SteenthIWbot
8050 https://en.wikipedia.org/wiki/User:Stefan_Knauf
8051 https://en.wikipedia.org/wiki/User:Stefan-S
8052 https://en.wikipedia.org/w/index.php%3ftitle=User:Stefan.karpinski&action=edit&redlink=1
8053 https://en.wikipedia.org/w/index.php%3ftitle=User:StefanOllinger&action=edit&redlink=1

1 Stefano~enwiki8054
1 Stelian Dumitrascu8055
1 Steliotron8056
1 Stellaathena8057
2 Stemoc8058
3 Stemonitis8059
1 Stepanps8060
3 Stephan Leeds8061
15 Stephen B Streater8062
1 Stephen C. Carlson8063
1 Stephen Compall8064
1 Stephen Gilbert8065
2 Stephen Howe8066
1 Stephen Morley8067
2 Stephen70edwards8068
1 StephenDow8069
7 Stephenb8070
2 Stephengmatthews8071
1 StereoSanctity8072
2 Stern~enwiki8073
2 Stesmo8074
1 SteveAyre8075
1 SteveCoast8076
12 SteveJothen8077
1 SteveMao8078

8054 https://en.wikipedia.org/w/index.php%3ftitle=User:Stefano~enwiki&action=edit&redlink=1
8055 https://en.wikipedia.org/w/index.php%3ftitle=User:Stelian_Dumitrascu&action=edit&redlink=1
8056 https://en.wikipedia.org/w/index.php%3ftitle=User:Steliotron&action=edit&redlink=1
8057 https://en.wikipedia.org/w/index.php%3ftitle=User:Stellaathena&action=edit&redlink=1
8058 https://en.wikipedia.org/wiki/User:Stemoc
8059 https://en.wikipedia.org/wiki/User:Stemonitis
8060 https://en.wikipedia.org/w/index.php%3ftitle=User:Stepanps&action=edit&redlink=1
8061 https://en.wikipedia.org/wiki/User:Stephan_Leeds
8062 https://en.wikipedia.org/wiki/User:Stephen_B_Streater
8063 https://en.wikipedia.org/wiki/User:Stephen_C._Carlson
8064 https://en.wikipedia.org/wiki/User:Stephen_Compall
8065 https://en.wikipedia.org/wiki/User:Stephen_Gilbert
8066 https://en.wikipedia.org/w/index.php%3ftitle=User:Stephen_Howe&action=edit&redlink=1
8067 https://en.wikipedia.org/wiki/User:Stephen_Morley
8068 https://en.wikipedia.org/w/index.php%3ftitle=User:Stephen70edwards&action=edit&redlink=1
8069 https://en.wikipedia.org/w/index.php%3ftitle=User:StephenDow&action=edit&redlink=1
8070 https://en.wikipedia.org/wiki/User:Stephenb
8071 https://en.wikipedia.org/wiki/User:Stephengmatthews
8072 https://en.wikipedia.org/wiki/User:StereoSanctity
8073 https://en.wikipedia.org/wiki/User:Stern~enwiki
8074 https://en.wikipedia.org/wiki/User:Stesmo
8075 https://en.wikipedia.org/w/index.php%3ftitle=User:SteveAyre&action=edit&redlink=1
8076 https://en.wikipedia.org/wiki/User:SteveCoast
8077 https://en.wikipedia.org/wiki/User:SteveJothen
8078 https://en.wikipedia.org/w/index.php%3ftitle=User:SteveMao&action=edit&redlink=1

1 SteveMcKay8079
1 SteveT848080
1 Stevecooperorg8081
2 Steven Crossin8082
1 Steven jones8083
2 StevenBell8084
1 StevenOdd8085
39 Stevenj8086
1 Stevenmitchell8087
1 Steveprutz8088
4 Steverapaport8089
11 Stevertigo8090
1 Stevey77888091
2 Stevietheman8092
1 Stevo20018093
1 StewartMH8094
3 StewieK8095
1 Stfg8096
3 Stickee8097
1 Sticky Parkin8098
1 Stiel8099
3 Stifle8100
2 Stillnotelf8101
6 Stimpy8102
1 Stivlo8103

8079 https://en.wikipedia.org/wiki/User:SteveMcKay
8080 https://en.wikipedia.org/w/index.php%3ftitle=User:SteveT84&action=edit&redlink=1
8081 https://en.wikipedia.org/w/index.php%3ftitle=User:Stevecooperorg&action=edit&redlink=1
8082 https://en.wikipedia.org/wiki/User:Steven_Crossin
8083 https://en.wikipedia.org/wiki/User:Steven_jones
8084 https://en.wikipedia.org/wiki/User:StevenBell
8085 https://en.wikipedia.org/w/index.php%3ftitle=User:StevenOdd&action=edit&redlink=1
8086 https://en.wikipedia.org/wiki/User:Stevenj
8087 https://en.wikipedia.org/wiki/User:Stevenmitchell
8088 https://en.wikipedia.org/wiki/User:Steveprutz
8089 https://en.wikipedia.org/wiki/User:Steverapaport
8090 https://en.wikipedia.org/wiki/User:Stevertigo
8091 https://en.wikipedia.org/wiki/User:Stevey7788
8092 https://en.wikipedia.org/wiki/User:Stevietheman
8093 https://en.wikipedia.org/w/index.php%3ftitle=User:Stevo2001&action=edit&redlink=1
8094 https://en.wikipedia.org/wiki/User:StewartMH
8095 https://en.wikipedia.org/wiki/User:StewieK
8096 https://en.wikipedia.org/wiki/User:Stfg
8097 https://en.wikipedia.org/wiki/User:Stickee
8098 https://en.wikipedia.org/wiki/User:Sticky_Parkin
8099 https://en.wikipedia.org/w/index.php%3ftitle=User:Stiel&action=edit&redlink=1
8100 https://en.wikipedia.org/wiki/User:Stifle
8101 https://en.wikipedia.org/wiki/User:Stillnotelf
8102 https://en.wikipedia.org/wiki/User:Stimpy
8103 https://en.wikipedia.org/w/index.php%3ftitle=User:Stivlo&action=edit&redlink=1

1 Stkni8104
1 Stochastix8105
10 Stochata8106
2 Stokkink8107
1 Stopflingingthebull8108
2 Stormie8109
1 StoryMachine8110
1 Stpasha8111
1 Str82no18112
1 StradivariusTV8113
1 Strainu8114
5 Strait8115
1 StraussInTheHouse8116
1 StrawberryBlondy8117
5 StrayBolt8118
18 Strcat8119
10 Streak3248120
1 Streaver918121
1 Strife9118122
1 Strike Eagle8123
1 Strimo8124
3 Strobilomyces8125
1 StrongMan8126
2 Sttsao8127
7 StuRat8128

8104 https://en.wikipedia.org/w/index.php%3ftitle=User:Stkni&action=edit&redlink=1
8105 https://en.wikipedia.org/w/index.php%3ftitle=User:Stochastix&action=edit&redlink=1
8106 https://en.wikipedia.org/wiki/User:Stochata
8107 https://en.wikipedia.org/w/index.php%3ftitle=User:Stokkink&action=edit&redlink=1
8108 https://en.wikipedia.org/w/index.php%3ftitle=User:Stopflingingthebull&action=edit&redlink=1
8109 https://en.wikipedia.org/wiki/User:Stormie
8110 https://en.wikipedia.org/w/index.php%3ftitle=User:StoryMachine&action=edit&redlink=1
8111 https://en.wikipedia.org/wiki/User:Stpasha
8112 https://en.wikipedia.org/w/index.php%3ftitle=User:Str82no1&action=edit&redlink=1
8113 https://en.wikipedia.org/wiki/User:StradivariusTV
8114 https://en.wikipedia.org/wiki/User:Strainu
8115 https://en.wikipedia.org/wiki/User:Strait
8116 https://en.wikipedia.org/wiki/User:StraussInTheHouse
8117 https://en.wikipedia.org/w/index.php%3ftitle=User:StrawberryBlondy&action=edit&redlink=1
8118 https://en.wikipedia.org/wiki/User:StrayBolt
8119 https://en.wikipedia.org/wiki/User:Strcat
8120 https://en.wikipedia.org/w/index.php%3ftitle=User:Streak324&action=edit&redlink=1
8121 https://en.wikipedia.org/w/index.php%3ftitle=User:Streaver91&action=edit&redlink=1
8122 https://en.wikipedia.org/wiki/User:Strife911
8123 https://en.wikipedia.org/wiki/User:Strike_Eagle
8124 https://en.wikipedia.org/w/index.php%3ftitle=User:Strimo&action=edit&redlink=1
8125 https://en.wikipedia.org/wiki/User:Strobilomyces
8126 https://en.wikipedia.org/w/index.php%3ftitle=User:StrongMan&action=edit&redlink=1
8127 https://en.wikipedia.org/w/index.php%3ftitle=User:Sttsao&action=edit&redlink=1
8128 https://en.wikipedia.org/wiki/User:StuRat

1 Stualden8129
1 Stuart Morrow8130
1 Stuart P. Bentley8131
2 StuartBrady8132
1 Stucalcal8133
1 Stud3768134
3 StussyEdit8135
4 Stux8136
1 Stwalkerster8137
2 Styfle8138
1 Stygiansonic8139
1 Styner328140
4 Suanshsinghal8141
16 Subh838142
2 Subshiri8143
1 Subtilior8144
1 Sudozero8145
1 Suelru8146
1 Suffusion of Yellow8147
1 Suisui8148
1 Sumit b130338cs8149
1 Sumit2108150
2 Summentier8151
1 Sumsci8152
8 Sun Creator8153

8129 https://en.wikipedia.org/w/index.php%3ftitle=User:Stualden&action=edit&redlink=1
8130 https://en.wikipedia.org/wiki/User:Stuart_Morrow
8131 https://en.wikipedia.org/wiki/User:Stuart_P._Bentley
8132 https://en.wikipedia.org/wiki/User:StuartBrady
8133 https://en.wikipedia.org/w/index.php%3ftitle=User:Stucalcal&action=edit&redlink=1
8134 https://en.wikipedia.org/w/index.php%3ftitle=User:Stud376&action=edit&redlink=1
8135 https://en.wikipedia.org/w/index.php%3ftitle=User:StussyEdit&action=edit&redlink=1
8136 https://en.wikipedia.org/wiki/User:Stux
8137 https://en.wikipedia.org/wiki/User:Stwalkerster
8138 https://en.wikipedia.org/wiki/User:Styfle
8139 https://en.wikipedia.org/w/index.php%3ftitle=User:Stygiansonic&action=edit&redlink=1
8140 https://en.wikipedia.org/w/index.php%3ftitle=User:Styner32&action=edit&redlink=1
8141 https://en.wikipedia.org/w/index.php%3ftitle=User:Suanshsinghal&action=edit&redlink=1
8142 https://en.wikipedia.org/wiki/User:Subh83
8143 https://en.wikipedia.org/w/index.php%3ftitle=User:Subshiri&action=edit&redlink=1
8144 https://en.wikipedia.org/w/index.php%3ftitle=User:Subtilior&action=edit&redlink=1
8145 https://en.wikipedia.org/wiki/User:Sudozero
8146 https://en.wikipedia.org/wiki/User:Suelru
8147 https://en.wikipedia.org/wiki/User:Suffusion_of_Yellow
8148 https://en.wikipedia.org/wiki/User:Suisui
8149 https://en.wikipedia.org/w/index.php%3ftitle=User:Sumit_b130338cs&action=edit&redlink=1
8150 https://en.wikipedia.org/w/index.php%3ftitle=User:Sumit210&action=edit&redlink=1
8151 https://en.wikipedia.org/wiki/User:Summentier
8152 https://en.wikipedia.org/w/index.php%3ftitle=User:Sumsci&action=edit&redlink=1
8153 https://en.wikipedia.org/wiki/User:Sun_Creator

1 Sunapi3868154
1 Sunayanaa8155
10 Sundar8156
1 Sundar sando8157
1 Sunderland068158
4 Sundirac8159
1 Sunflower428160
1 Sungheeyun8161
2 Sunny2568162
1 Sunpengfeiyear8163
1 Sunrise8164
1 Supdiop8165
1 Super fish28166
1 Super-Magician8167
3 Super48paul8168
1 SuperFLoh8169
1 SuperMidget8170
1 SuperSack568171
7 Superamin8172
1 Superbeecat8173
1 SuperbowserX8174
1 Superdosh8175
1 Superninja8176
2 Supersteve14408177
1 Supertouch8178

8154 https://en.wikipedia.org/wiki/User:Sunapi386
8155 https://en.wikipedia.org/w/index.php%3ftitle=User:Sunayanaa&action=edit&redlink=1
8156 https://en.wikipedia.org/wiki/User:Sundar
8157 https://en.wikipedia.org/w/index.php%3ftitle=User:Sundar_sando&action=edit&redlink=1
8158 https://en.wikipedia.org/wiki/User:Sunderland06
8159 https://en.wikipedia.org/w/index.php%3ftitle=User:Sundirac&action=edit&redlink=1
8160 https://en.wikipedia.org/w/index.php%3ftitle=User:Sunflower42&action=edit&redlink=1
8161 https://en.wikipedia.org/wiki/User:Sungheeyun
8162 https://en.wikipedia.org/wiki/User:Sunny256
8163 https://en.wikipedia.org/w/index.php%3ftitle=User:Sunpengfeiyear&action=edit&redlink=1
8164 https://en.wikipedia.org/wiki/User:Sunrise
8165 https://en.wikipedia.org/wiki/User:Supdiop
8166 https://en.wikipedia.org/w/index.php%3ftitle=User:Super_fish2&action=edit&redlink=1
8167 https://en.wikipedia.org/wiki/User:Super-Magician
8168 https://en.wikipedia.org/wiki/User:Super48paul
8169 https://en.wikipedia.org/w/index.php%3ftitle=User:SuperFLoh&action=edit&redlink=1
8170 https://en.wikipedia.org/wiki/User:SuperMidget
8171 https://en.wikipedia.org/w/index.php%3ftitle=User:SuperSack56&action=edit&redlink=1
8172 https://en.wikipedia.org/w/index.php%3ftitle=User:Superamin&action=edit&redlink=1
8173 https://en.wikipedia.org/wiki/User:Superbeecat
8174 https://en.wikipedia.org/wiki/User:SuperbowserX
8175 https://en.wikipedia.org/wiki/User:Superdosh
8176 https://en.wikipedia.org/wiki/User:Superninja
8177 https://en.wikipedia.org/w/index.php%3ftitle=User:Supersteve1440&action=edit&redlink=1
8178 https://en.wikipedia.org/wiki/User:Supertouch

1 Suppendieb8179
1 Supreetha19948180
2 Supritdk8181
1 Surakmath8182
4 Surement8183
1 SurendraMatavalam8184
1 Surfer438185
1 Surikataru8186
3 Surlycyborg8187
3 Suruena8188
1 SusanChime8189
1 Suttiwit8190
1 Suzhouwuyue8191
1 SvartMan8192
16 Sven nestle28193
4 Sverdrup8194
1 Sverigekillen8195
126 Svick8196
1 Svivian8197
1 Svnpenn8198
3 Svnsvn8199
1 Svznamensk8200
1 Swamp Ig8201
1 Swanyboy28202
2 Swapsy8203

8179 https://en.wikipedia.org/w/index.php%3ftitle=User:Suppendieb&action=edit&redlink=1
8180 https://en.wikipedia.org/w/index.php%3ftitle=User:Supreetha1994&action=edit&redlink=1
8181 https://en.wikipedia.org/w/index.php%3ftitle=User:Supritdk&action=edit&redlink=1
8182 https://en.wikipedia.org/w/index.php%3ftitle=User:Surakmath&action=edit&redlink=1
8183 https://en.wikipedia.org/w/index.php%3ftitle=User:Surement&action=edit&redlink=1
8184 https://en.wikipedia.org/w/index.php%3ftitle=User:SurendraMatavalam&action=edit&redlink=1
8185 https://en.wikipedia.org/wiki/User:Surfer43
8186 https://en.wikipedia.org/w/index.php%3ftitle=User:Surikataru&action=edit&redlink=1
8187 https://en.wikipedia.org/wiki/User:Surlycyborg
8188 https://en.wikipedia.org/wiki/User:Suruena
8189 https://en.wikipedia.org/w/index.php%3ftitle=User:SusanChime&action=edit&redlink=1
8190 https://en.wikipedia.org/wiki/User:Suttiwit
8191 https://en.wikipedia.org/w/index.php%3ftitle=User:Suzhouwuyue&action=edit&redlink=1
8192 https://en.wikipedia.org/w/index.php%3ftitle=User:SvartMan&action=edit&redlink=1
8193 https://en.wikipedia.org/wiki/User:Sven_nestle2
8194 https://en.wikipedia.org/wiki/User:Sverdrup
8195 https://en.wikipedia.org/wiki/User:Sverigekillen
8196 https://en.wikipedia.org/wiki/User:Svick
8197 https://en.wikipedia.org/w/index.php%3ftitle=User:Svivian&action=edit&redlink=1
8198 https://en.wikipedia.org/wiki/User:Svnpenn
8199 https://en.wikipedia.org/w/index.php%3ftitle=User:Svnsvn&action=edit&redlink=1
8200 https://en.wikipedia.org/w/index.php%3ftitle=User:Svznamensk&action=edit&redlink=1
8201 https://en.wikipedia.org/wiki/User:Swamp_Ig
8202 https://en.wikipedia.org/w/index.php%3ftitle=User:Swanyboy2&action=edit&redlink=1
8203 https://en.wikipedia.org/w/index.php%3ftitle=User:Swapsy&action=edit&redlink=1

1 Swarnendu.biswas8204
1 Sweet tea van8205
16 Swfung88206
1 Swierczek8207
11 Swift8208
2 SwiftBot8209
3 Swifty slow8210
1 SwisterTwister8211
1 Switchercat8212
1 Swordsmankirby8213
1 Swoög8214
4 Swpb8215
2 Swrobinson268216
2 SyG8217
6 Sychen8218
1 Sycomonkey8219
1 Sydbarrett748220
2 Syhon8221
1 Sylverone8222
1 Sylvestersteele8223
1 Symane8224
4 Syncategoremata8225
1 Synergy8226
4 Synthebot8227
2 Syp8228

8204 https://en.wikipedia.org/w/index.php%3ftitle=User:Swarnendu.biswas&action=edit&redlink=1
8205 https://en.wikipedia.org/wiki/User:Sweet_tea_van
8206 https://en.wikipedia.org/wiki/User:Swfung8
8207 https://en.wikipedia.org/wiki/User:Swierczek
8208 https://en.wikipedia.org/wiki/User:Swift
8209 https://en.wikipedia.org/wiki/User:SwiftBot
8210 https://en.wikipedia.org/w/index.php%3ftitle=User:Swifty_slow&action=edit&redlink=1
8211 https://en.wikipedia.org/wiki/User:SwisterTwister
8212 https://en.wikipedia.org/wiki/User:Switchercat
8213 https://en.wikipedia.org/w/index.php%3ftitle=User:Swordsmankirby&action=edit&redlink=1
8214 https://en.wikipedia.org/w/index.php%3ftitle=User:Swo%25C3%25B6g&action=edit&redlink=1
8215 https://en.wikipedia.org/wiki/User:Swpb
8216 https://en.wikipedia.org/w/index.php%3ftitle=User:Swrobinson26&action=edit&redlink=1
8217 https://en.wikipedia.org/wiki/User:SyG
8218 https://en.wikipedia.org/w/index.php%3ftitle=User:Sychen&action=edit&redlink=1
8219 https://en.wikipedia.org/wiki/User:Sycomonkey
8220 https://en.wikipedia.org/wiki/User:Sydbarrett74
8221 https://en.wikipedia.org/wiki/User:Syhon
8222 https://en.wikipedia.org/w/index.php%3ftitle=User:Sylverone&action=edit&redlink=1
8223 https://en.wikipedia.org/wiki/User:Sylvestersteele
8224 https://en.wikipedia.org/wiki/User:Symane
8225 https://en.wikipedia.org/wiki/User:Syncategoremata
8226 https://en.wikipedia.org/wiki/User:Synergy
8227 https://en.wikipedia.org/wiki/User:Synthebot
8228 https://en.wikipedia.org/wiki/User:Syp

1 Syr08229
1 Syrak8230
1 Syrthiss8231
28 Sytelus8232
1 Syxb8233
1 Sz-iwbot8234
2 Szabolcs Nagy8235
2 Szepi~enwiki8236
1 Szozdakosvi8237
12 T Long8238
1 T.seppelt8239
2 T0ljan~enwiki8240
2 T0m8241
1 T0pem08242
1 T1458243
5 T2theurbo8244
1 T4bits8245
10 TAnthony8246
1 TBingmann8247
4 TBloemink8248
1 TEcHNOpl8249
1 TFA Protector Bot8250
1 THEFlint Shrubwood8251
2 THEN WHO WAS PHONE?8252
1 TMott8253

8229 https://en.wikipedia.org/w/index.php%3ftitle=User:Syr0&action=edit&redlink=1
8230 https://en.wikipedia.org/w/index.php%3ftitle=User:Syrak&action=edit&redlink=1
8231 https://en.wikipedia.org/wiki/User:Syrthiss
8232 https://en.wikipedia.org/wiki/User:Sytelus
8233 https://en.wikipedia.org/w/index.php%3ftitle=User:Syxb&action=edit&redlink=1
8234 https://en.wikipedia.org/wiki/User:Sz-iwbot
8235 https://en.wikipedia.org/w/index.php%3ftitle=User:Szabolcs_Nagy&action=edit&redlink=1
8236 https://en.wikipedia.org/w/index.php%3ftitle=User:Szepi~enwiki&action=edit&redlink=1
8237 https://en.wikipedia.org/wiki/User:Szozdakosvi
8238 https://en.wikipedia.org/w/index.php%3ftitle=User:T_Long&action=edit&redlink=1
8239 https://en.wikipedia.org/wiki/User:T.seppelt
8240 https://en.wikipedia.org/w/index.php%3ftitle=User:T0ljan~enwiki&action=edit&redlink=1
8241 https://en.wikipedia.org/wiki/User:T0m
8242 https://en.wikipedia.org/wiki/User:T0pem0
8243 https://en.wikipedia.org/w/index.php%3ftitle=User:T145&action=edit&redlink=1
8244 https://en.wikipedia.org/w/index.php%3ftitle=User:T2theurbo&action=edit&redlink=1
8245 https://en.wikipedia.org/w/index.php%3ftitle=User:T4bits&action=edit&redlink=1
8246 https://en.wikipedia.org/wiki/User:TAnthony
8247 https://en.wikipedia.org/w/index.php%3ftitle=User:TBingmann&action=edit&redlink=1
8248 https://en.wikipedia.org/wiki/User:TBloemink
8249 https://en.wikipedia.org/w/index.php%3ftitle=User:TEcHNOpl&action=edit&redlink=1
8250 https://en.wikipedia.org/wiki/User:TFA_Protector_Bot
8251 https://en.wikipedia.org/wiki/User:THEFlint_Shrubwood
8252 https://en.wikipedia.org/wiki/User:THEN_WHO_WAS_PHONE%253F
8253 https://en.wikipedia.org/wiki/User:TMott

1 TNARasslin8254
1 TOGASHI Jin8255
1 TPReal8256
1 TRauMa8257
2 TShilo128258
2 TSornalingam8259
5 TWiStErRob8260
30 TXiKiBoT8261
1 TYelliot8262
4 TaBOT-zerem8263
3 Tabletop8264
3 Tablizer8265
1 Tac-Tics8266
1 Tachyon018267
1 Tachyon778268
1 Tackline8269
1 Tadanaranu8270
1 TadejM8271
8 Taejo8272
1 Taeshadow8273
1 TaintedMustard8274
1 Takaitra8275
49 TakuyaMurata8276
1 Tal.senyor8277
2 Taleev Aalam8278

8254 https://en.wikipedia.org/w/index.php%3ftitle=User:TNARasslin&action=edit&redlink=1
8255 https://en.wikipedia.org/wiki/User:TOGASHI_Jin
8256 https://en.wikipedia.org/w/index.php%3ftitle=User:TPReal&action=edit&redlink=1
8257 https://en.wikipedia.org/wiki/User:TRauMa
8258 https://en.wikipedia.org/wiki/User:TShilo12
8259 https://en.wikipedia.org/wiki/User:TSornalingam
8260 https://en.wikipedia.org/wiki/User:TWiStErRob
8261 https://en.wikipedia.org/wiki/User:TXiKiBoT
8262 https://en.wikipedia.org/wiki/User:TYelliot
8263 https://en.wikipedia.org/wiki/User:TaBOT-zerem
8264 https://en.wikipedia.org/wiki/User:Tabletop
8265 https://en.wikipedia.org/w/index.php%3ftitle=User:Tablizer&action=edit&redlink=1
8266 https://en.wikipedia.org/w/index.php%3ftitle=User:Tac-Tics&action=edit&redlink=1
8267 https://en.wikipedia.org/wiki/User:Tachyon01
8268 https://en.wikipedia.org/wiki/User:Tachyon77
8269 https://en.wikipedia.org/w/index.php%3ftitle=User:Tackline&action=edit&redlink=1
8270 https://en.wikipedia.org/w/index.php%3ftitle=User:Tadanaranu&action=edit&redlink=1
8271 https://en.wikipedia.org/wiki/User:TadejM
8272 https://en.wikipedia.org/wiki/User:Taejo
8273 https://en.wikipedia.org/wiki/User:Taeshadow
8274 https://en.wikipedia.org/wiki/User:TaintedMustard
8275 https://en.wikipedia.org/wiki/User:Takaitra
8276 https://en.wikipedia.org/wiki/User:TakuyaMurata
8277 https://en.wikipedia.org/w/index.php%3ftitle=User:Tal.senyor&action=edit&redlink=1
8278 https://en.wikipedia.org/w/index.php%3ftitle=User:Taleev_Aalam&action=edit&redlink=1

2 Talgalili8279
3 Talldean8280
1 Talrias8281
1 Tambre8282
2 Tamer ih~enwiki8283
4 TamerShlash8284
2 Tameralkuly8285
8 Tamfang8286
1 Tampopo1st8287
1 Tanadeau8288
1 TangMaxin8289
5 Tangi-tamma8290
3 Tanumon8291
1 Tanvir Ahmmed8292
1 Tanzeel11228293
2 Tapanjk8294
2 Tapiocozzo8295
2 Tapirtrust8296
1 TappyDoggy3658297
6 Taral8298
6 Tardis8299
13 Tarotcards8300
2 Tarquin8301
5 Tas508302
1 Tashdeed8303

8279 https://en.wikipedia.org/wiki/User:Talgalili
8280 https://en.wikipedia.org/wiki/User:Talldean
8281 https://en.wikipedia.org/wiki/User:Talrias
8282 https://en.wikipedia.org/w/index.php%3ftitle=User:Tambre&action=edit&redlink=1
8283 https://en.wikipedia.org/wiki/User:Tamer_ih~enwiki
8284 https://en.wikipedia.org/wiki/User:TamerShlash
8285 https://en.wikipedia.org/w/index.php%3ftitle=User:Tameralkuly&action=edit&redlink=1
8286 https://en.wikipedia.org/wiki/User:Tamfang
8287 https://en.wikipedia.org/w/index.php%3ftitle=User:Tampopo1st&action=edit&redlink=1
8288 https://en.wikipedia.org/w/index.php%3ftitle=User:Tanadeau&action=edit&redlink=1
8289 https://en.wikipedia.org/w/index.php%3ftitle=User:TangMaxin&action=edit&redlink=1
8290 https://en.wikipedia.org/wiki/User:Tangi-tamma
8291 https://en.wikipedia.org/w/index.php%3ftitle=User:Tanumon&action=edit&redlink=1
8292 https://en.wikipedia.org/wiki/User:Tanvir_Ahmmed
8293 https://en.wikipedia.org/w/index.php%3ftitle=User:Tanzeel1122&action=edit&redlink=1
8294 https://en.wikipedia.org/w/index.php%3ftitle=User:Tapanjk&action=edit&redlink=1
8295 https://en.wikipedia.org/w/index.php%3ftitle=User:Tapiocozzo&action=edit&redlink=1
8296 https://en.wikipedia.org/wiki/User:Tapirtrust
8297 https://en.wikipedia.org/wiki/User:TappyDoggy365
8298 https://en.wikipedia.org/wiki/User:Taral
8299 https://en.wikipedia.org/wiki/User:Tardis
8300 https://en.wikipedia.org/wiki/User:Tarotcards
8301 https://en.wikipedia.org/wiki/User:Tarquin
8302 https://en.wikipedia.org/w/index.php%3ftitle=User:Tas50&action=edit&redlink=1
8303 https://en.wikipedia.org/wiki/User:Tashdeed

3 Tassedethe8304
1 Tastyduck8305
1 Tastyllama8306
2 Tatarize8307
1 Tauwasser8308
1 Tavianator8309
9 Taw8310
1 Tawker8311
35 Taxipom8312
2 Taxman8313
1 Tayste8314
12 Tb8315
3 Tbhotch8316
3 Tbvdm8317
1 Tckma8318
1 Tcl168319
1 Tcncv8320
1 Tcotco8321
1 Tcoulon20108322
4 Tcshasaposse8323
1 Tdecaluwe8324
1 Tdgs8325
144 Tea2min8326
1 Teabinger8327
4 Teacup8328

8304 https://en.wikipedia.org/wiki/User:Tassedethe
8305 https://en.wikipedia.org/wiki/User:Tastyduck
8306 https://en.wikipedia.org/wiki/User:Tastyllama
8307 https://en.wikipedia.org/wiki/User:Tatarize
8308 https://en.wikipedia.org/wiki/User:Tauwasser
8309 https://en.wikipedia.org/wiki/User:Tavianator
8310 https://en.wikipedia.org/wiki/User:Taw
8311 https://en.wikipedia.org/wiki/User:Tawker
8312 https://en.wikipedia.org/wiki/User:Taxipom
8313 https://en.wikipedia.org/wiki/User:Taxman
8314 https://en.wikipedia.org/wiki/User:Tayste
8315 https://en.wikipedia.org/wiki/User:Tb
8316 https://en.wikipedia.org/wiki/User:Tbhotch
8317 https://en.wikipedia.org/wiki/User:Tbvdm
8318 https://en.wikipedia.org/wiki/User:Tckma
8319 https://en.wikipedia.org/w/index.php%3ftitle=User:Tcl16&action=edit&redlink=1
8320 https://en.wikipedia.org/wiki/User:Tcncv
8321 https://en.wikipedia.org/wiki/User:Tcotco
8322 https://en.wikipedia.org/w/index.php%3ftitle=User:Tcoulon2010&action=edit&redlink=1
8323 https://en.wikipedia.org/wiki/User:Tcshasaposse
8324 https://en.wikipedia.org/w/index.php%3ftitle=User:Tdecaluwe&action=edit&redlink=1
8325 https://en.wikipedia.org/wiki/User:Tdgs
8326 https://en.wikipedia.org/wiki/User:Tea2min
8327 https://en.wikipedia.org/w/index.php%3ftitle=User:Teabinger&action=edit&redlink=1
8328 https://en.wikipedia.org/w/index.php%3ftitle=User:Teacup&action=edit&redlink=1

3 Teamtheo8329
2 Teapeat8330
1 Tech778331
2 Techie0078332
2 TechnicalWriting3768333
1 Ted Longstaffe8334
3 Ted.tem.parker8335
1 TedDunning8336
5 Tedder8337
1 Tedzdog8338
2 TeeEmCee8339
2 TehKeg8340
4 Tehwikipwnerer8341
1 Teika kazura8342
4 Teimu.tm8343
3 Tejas818344
1 Tekhnofiend8345
1 TeleTeddy8346
1 Telekid8347
8 Teles8348
1 Telespiza8349
1 Temblast8350
3 TempeteCoeur8351
8 Template namespace initialisation script8352

8329 https://en.wikipedia.org/wiki/User:Teamtheo
8330 https://en.wikipedia.org/wiki/User:Teapeat
8331 https://en.wikipedia.org/wiki/User:Tech77
8332 https://en.wikipedia.org/wiki/User:Techie007
8333 https://en.wikipedia.org/w/index.php%3ftitle=User:TechnicalWriting376&action=edit&redlink=1
8334 https://en.wikipedia.org/wiki/User:Ted_Longstaffe
8335 https://en.wikipedia.org/w/index.php%3ftitle=User:Ted.tem.parker&action=edit&redlink=1
8336 https://en.wikipedia.org/wiki/User:TedDunning
8337 https://en.wikipedia.org/wiki/User:Tedder
8338 https://en.wikipedia.org/wiki/User:Tedzdog
8339 https://en.wikipedia.org/w/index.php%3ftitle=User:TeeEmCee&action=edit&redlink=1
8340 https://en.wikipedia.org/w/index.php%3ftitle=User:TehKeg&action=edit&redlink=1
8341 https://en.wikipedia.org/w/index.php%3ftitle=User:Tehwikipwnerer&action=edit&redlink=1
8342 https://en.wikipedia.org/wiki/User:Teika_kazura
8343 https://en.wikipedia.org/wiki/User:Teimu.tm
8344 https://en.wikipedia.org/wiki/User:Tejas81
8345 https://en.wikipedia.org/wiki/User:Tekhnofiend
8346 https://en.wikipedia.org/w/index.php%3ftitle=User:TeleTeddy&action=edit&redlink=1
8347 https://en.wikipedia.org/wiki/User:Telekid
8348 https://en.wikipedia.org/wiki/User:Teles
8349 https://en.wikipedia.org/w/index.php%3ftitle=User:Telespiza&action=edit&redlink=1
8350 https://en.wikipedia.org/wiki/User:Temblast
8351 https://en.wikipedia.org/w/index.php%3ftitle=User:TempeteCoeur&action=edit&redlink=1
8352 https://en.wikipedia.org/wiki/User:Template_namespace_initialisation_script

5 Templatetypedef8353
7 Tentinator8354
2 Teorth8355
2 Terencehonles8356
4 TerraFrost8357
1 Terrek8358
2 TerribleTadpole8359
1 Territory8360
1 Terry02018361
4 Terrycojones8362
1 Terryn38363
1 Terryspitz8364
4 Tesse8365
1 TestEditBot8366
1 TestPilot8367
5 Tetha8368
1 Tetracube8369
1 Tetraedycal8370
1 Teutanic8371
1 Textdoc8372
1 Tezero8373
1 Tezeti8374
1 Tfr.didi8375
15 Tgdwyer8376
1 Tghe-retford8377

8353 https://en.wikipedia.org/w/index.php%3ftitle=User:Templatetypedef&action=edit&redlink=1
8354 https://en.wikipedia.org/wiki/User:Tentinator
8355 https://en.wikipedia.org/wiki/User:Teorth
8356 https://en.wikipedia.org/w/index.php%3ftitle=User:Terencehonles&action=edit&redlink=1
8357 https://en.wikipedia.org/wiki/User:TerraFrost
8358 https://en.wikipedia.org/wiki/User:Terrek
8359 https://en.wikipedia.org/w/index.php%3ftitle=User:TerribleTadpole&action=edit&redlink=1
8360 https://en.wikipedia.org/wiki/User:Territory
8361 https://en.wikipedia.org/wiki/User:Terry0201
8362 https://en.wikipedia.org/wiki/User:Terrycojones
8363 https://en.wikipedia.org/w/index.php%3ftitle=User:Terryn3&action=edit&redlink=1
8364 https://en.wikipedia.org/w/index.php%3ftitle=User:Terryspitz&action=edit&redlink=1
8365 https://en.wikipedia.org/w/index.php%3ftitle=User:Tesse&action=edit&redlink=1
8366 https://en.wikipedia.org/wiki/User:TestEditBot
8367 https://en.wikipedia.org/wiki/User:TestPilot
8368 https://en.wikipedia.org/w/index.php%3ftitle=User:Tetha&action=edit&redlink=1
8369 https://en.wikipedia.org/wiki/User:Tetracube
8370 https://en.wikipedia.org/wiki/User:Tetraedycal
8371 https://en.wikipedia.org/wiki/User:Teutanic
8372 https://en.wikipedia.org/w/index.php%3ftitle=User:Textdoc&action=edit&redlink=1
8373 https://en.wikipedia.org/wiki/User:Tezero
8374 https://en.wikipedia.org/wiki/User:Tezeti
8375 https://en.wikipedia.org/w/index.php%3ftitle=User:Tfr.didi&action=edit&redlink=1
8376 https://en.wikipedia.org/wiki/User:Tgdwyer
8377 https://en.wikipedia.org/wiki/User:Tghe-retford

1 Tgm10248378
1 Tgr8379
1 Th1rt3en8380
1 ThG8381
6 Thaddy8382
1 Thadius8568383
1 Thaiduongtrieuvu8384
1 Thailyn8385
1 Thanabhat.jo8386
6 Thanhdominica8387
1 Thargor Orlando8388
1 Tharwen8389
9 That Guy, From That Show!8390
2 ThatGuyCalledChris8391
1 Thatemooverthere8392
2 Thatsme3148393
3 Thayts8394
1 Thaïti Bob8395
63 The Anome8396
8 The Anomebot8397
2 The Arbiter8398
2 The Average Wikipedian8399
1 The Banner8400
2 The Cave Troll8401
2 The Earwig8402

8378 https://en.wikipedia.org/w/index.php%3ftitle=User:Tgm1024&action=edit&redlink=1
8379 https://en.wikipedia.org/wiki/User:Tgr
8380 https://en.wikipedia.org/wiki/User:Th1rt3en
8381 https://en.wikipedia.org/w/index.php%3ftitle=User:ThG&action=edit&redlink=1
8382 https://en.wikipedia.org/w/index.php%3ftitle=User:Thaddy&action=edit&redlink=1
8383 https://en.wikipedia.org/wiki/User:Thadius856
8384 https://en.wikipedia.org/w/index.php%3ftitle=User:Thaiduongtrieuvu&action=edit&redlink=1
8385 https://en.wikipedia.org/w/index.php%3ftitle=User:Thailyn&action=edit&redlink=1
8386 https://en.wikipedia.org/w/index.php%3ftitle=User:Thanabhat.jo&action=edit&redlink=1
8387 https://en.wikipedia.org/w/index.php%3ftitle=User:Thanhdominica&action=edit&redlink=1
8388 https://en.wikipedia.org/wiki/User:Thargor_Orlando
8389 https://en.wikipedia.org/w/index.php%3ftitle=User:Tharwen&action=edit&redlink=1
8390 https://en.wikipedia.org/wiki/User:That_Guy,_From_That_Show!
8391 https://en.wikipedia.org/w/index.php%3ftitle=User:ThatGuyCalledChris&action=edit&redlink=1
8392 https://en.wikipedia.org/wiki/User:Thatemooverthere
8393 https://en.wikipedia.org/wiki/User:Thatsme314
8394 https://en.wikipedia.org/wiki/User:Thayts
8395 https://en.wikipedia.org/w/index.php%3ftitle=User:Tha%25C3%25AFti_Bob&action=edit&redlink=1
8396 https://en.wikipedia.org/wiki/User:The_Anome
8397 https://en.wikipedia.org/wiki/User:The_Anomebot
8398 https://en.wikipedia.org/wiki/User:The_Arbiter
8399 https://en.wikipedia.org/wiki/User:The_Average_Wikipedian
8400 https://en.wikipedia.org/wiki/User:The_Banner
8401 https://en.wikipedia.org/wiki/User:The_Cave_Troll
8402 https://en.wikipedia.org/wiki/User:The_Earwig

1 The Eloquent Peasant8403
1 The Isiah8404
4 The Last Username In The World8405
1 The Nameless8406
1 The Nut8407
2 The Obfuscator8408
19 The Ostrich8409
3 The Rambling Man8410
8 The Thing That Should Not Be8411
3 The Transhumanist8412
1 The Wilschon8413
1 The last username left was taken8414
1 The stuart8415
1 The undertow8416
1 The wub8417
1 The.megapode8418
1 The12game8419
1 The9muse8420
1 TheAmazingVanishingCasper4208421
1 TheArguer8422
2 TheAwesomeHwyh8423
1 TheBiggestFootballFan8424
1 TheCois8425
1 TheCranberryMan588426

8403 https://en.wikipedia.org/wiki/User:The_Eloquent_Peasant
8404 https://en.wikipedia.org/w/index.php%3ftitle=User:The_Isiah&action=edit&redlink=1
8405 https://en.wikipedia.org/w/index.php%3ftitle=User:The_Last_Username_In_The_World&action=edit&redlink=1
8406 https://en.wikipedia.org/wiki/User:The_Nameless
8407 https://en.wikipedia.org/wiki/User:The_Nut
8408 https://en.wikipedia.org/w/index.php%3ftitle=User:The_Obfuscator&action=edit&redlink=1
8409 https://en.wikipedia.org/wiki/User:The_Ostrich
8410 https://en.wikipedia.org/wiki/User:The_Rambling_Man
8411 https://en.wikipedia.org/wiki/User:The_Thing_That_Should_Not_Be
8412 https://en.wikipedia.org/wiki/User:The_Transhumanist
8413 https://en.wikipedia.org/wiki/User:The_Wilschon
8414 https://en.wikipedia.org/wiki/User:The_last_username_left_was_taken
8415 https://en.wikipedia.org/wiki/User:The_stuart
8416 https://en.wikipedia.org/wiki/User:The_undertow
8417 https://en.wikipedia.org/wiki/User:The_wub
8418 https://en.wikipedia.org/wiki/User:The.megapode
8419 https://en.wikipedia.org/w/index.php%3ftitle=User:The12game&action=edit&redlink=1
8420 https://en.wikipedia.org/w/index.php%3ftitle=User:The9muse&action=edit&redlink=1
8421 https://en.wikipedia.org/w/index.php%3ftitle=User:TheAmazingVanishingCasper420&action=edit&redlink=1
8422 https://en.wikipedia.org/wiki/User:TheArguer
8423 https://en.wikipedia.org/wiki/User:TheAwesomeHwyh
8424 https://en.wikipedia.org/wiki/User:TheBiggestFootballFan
8425 https://en.wikipedia.org/w/index.php%3ftitle=User:TheCois&action=edit&redlink=1
8426 https://en.wikipedia.org/w/index.php%3ftitle=User:TheCranberryMan58&action=edit&redlink=1

1 TheDragonFire8427
2 TheFreddoT8428
1 TheFrog0018429
1 TheGoblin8430
1 TheGoodBadWorst8431
1 TheHardestAspectOfCreatingAnAccountIsAlwaysTheUsername8432
1 TheImaCow8433
2 TheJJJunk8434
2 TheKMan8435
3 TheMagikBOT8436
2 TheMagikCow8437
1 TheMandarin8438
2 TheMrBenstein8439
1 TheNightFly8440
1 TheParanoidOne8441
2 ThePianoGuy8442
2 ThePiston8443
1 ThePlatypusofDoom8444
7 TheRingess8445
1 TheSandDoctor8446
15 TheSeven8447
2 TheTraveler38448
2 Theanswertolifetheuniverseandeverything8449
2 Theatre gurl8450
1 Theclapp8451

8427 https://en.wikipedia.org/wiki/User:TheDragonFire
8428 https://en.wikipedia.org/w/index.php%3ftitle=User:TheFreddoT&action=edit&redlink=1
8429 https://en.wikipedia.org/wiki/User:TheFrog001
8430 https://en.wikipedia.org/wiki/User:TheGoblin
8431 https://en.wikipedia.org/wiki/User:TheGoodBadWorst
8432 https://en.wikipedia.org/wiki/User:TheHardestAspectOfCreatingAnAccountIsAlwaysTheUsername
8433 https://en.wikipedia.org/wiki/User:TheImaCow
8434 https://en.wikipedia.org/wiki/User:TheJJJunk
8435 https://en.wikipedia.org/wiki/User:TheKMan
8436 https://en.wikipedia.org/wiki/User:TheMagikBOT
8437 https://en.wikipedia.org/wiki/User:TheMagikCow
8438 https://en.wikipedia.org/wiki/User:TheMandarin
8439 https://en.wikipedia.org/w/index.php%3ftitle=User:TheMrBenstein&action=edit&redlink=1
8440 https://en.wikipedia.org/w/index.php%3ftitle=User:TheNightFly&action=edit&redlink=1
8441 https://en.wikipedia.org/wiki/User:TheParanoidOne
8442 https://en.wikipedia.org/w/index.php%3ftitle=User:ThePianoGuy&action=edit&redlink=1
8443 https://en.wikipedia.org/wiki/User:ThePiston
8444 https://en.wikipedia.org/wiki/User:ThePlatypusofDoom
8445 https://en.wikipedia.org/wiki/User:TheRingess
8446 https://en.wikipedia.org/wiki/User:TheSandDoctor
8447 https://en.wikipedia.org/wiki/User:TheSeven
8448 https://en.wikipedia.org/w/index.php%3ftitle=User:TheTraveler3&action=edit&redlink=1
8449 https://en.wikipedia.org/wiki/User:Theanswertolifetheuniverseandeverything
8450 https://en.wikipedia.org/w/index.php%3ftitle=User:Theatre_gurl&action=edit&redlink=1
8451 https://en.wikipedia.org/wiki/User:Theclapp

1 Theclawen8452
1 Thecoolsops8453
1 Thedrx8454
2 Theemathas8455
2 Thefourlinestar8456
2 Thegarybakery8457
4 Thegeneralguy8458
2 Thegreekgonzo8459
7 Thegzak8460
3 Thehelpfulbot8461
2 Thehotelambush8462
9 Theinstantmatrix8463
23 Themania8464
4 Themfromspace8465
1 Themidget178466
1 Themusicgod18467
2 Themysteriousimmigrant8468
1 Thenowhereman8469
1 Thenub3148470
1 Theodora.ser8471
1 Theodore Kloba8472
2 Theone2568473
1 Theonefoster8474
1 Theopolisme8475
1 Thepenguin98476

8452 https://en.wikipedia.org/wiki/User:Theclawen
8453 https://en.wikipedia.org/w/index.php%3ftitle=User:Thecoolsops&action=edit&redlink=1
8454 https://en.wikipedia.org/w/index.php%3ftitle=User:Thedrx&action=edit&redlink=1
8455 https://en.wikipedia.org/wiki/User:Theemathas
8456 https://en.wikipedia.org/wiki/User:Thefourlinestar
8457 https://en.wikipedia.org/w/index.php%3ftitle=User:Thegarybakery&action=edit&redlink=1
8458 https://en.wikipedia.org/w/index.php%3ftitle=User:Thegeneralguy&action=edit&redlink=1
8459 https://en.wikipedia.org/w/index.php%3ftitle=User:Thegreekgonzo&action=edit&redlink=1
8460 https://en.wikipedia.org/w/index.php%3ftitle=User:Thegzak&action=edit&redlink=1
8461 https://en.wikipedia.org/wiki/User:Thehelpfulbot
8462 https://en.wikipedia.org/wiki/User:Thehotelambush
8463 https://en.wikipedia.org/wiki/User:Theinstantmatrix
8464 https://en.wikipedia.org/w/index.php%3ftitle=User:Themania&action=edit&redlink=1
8465 https://en.wikipedia.org/wiki/User:Themfromspace
8466 https://en.wikipedia.org/wiki/User:Themidget17
8467 https://en.wikipedia.org/wiki/User:Themusicgod1
8468 https://en.wikipedia.org/w/index.php%3ftitle=User:Themysteriousimmigrant&action=edit&redlink=1
8469 https://en.wikipedia.org/wiki/User:Thenowhereman
8470 https://en.wikipedia.org/wiki/User:Thenub314
8471 https://en.wikipedia.org/w/index.php%3ftitle=User:Theodora.ser&action=edit&redlink=1
8472 https://en.wikipedia.org/wiki/User:Theodore_Kloba
8473 https://en.wikipedia.org/wiki/User:Theone256
8474 https://en.wikipedia.org/w/index.php%3ftitle=User:Theonefoster&action=edit&redlink=1
8475 https://en.wikipedia.org/wiki/User:Theopolisme
8476 https://en.wikipedia.org/wiki/User:Thepenguin9

1 Theprogrammer8477
1 Thermon8478
1 Thesevenseas8479
12 Thesilverbail8480
1 Thesquaregroot8481
3 Theta48482
1 Thewsomeguy8483
1 Thiagohirai8484
1 Thibaut1200948485
1 Thierry Abaléa8486
45 Thijs!bot8487
6 Thijswijs8488
1 Thinboy00P8489
3 Thingg8490
2 Thinking of England8491
4 Thinktdub8492
1 ThirdDolphin8493
2 Thirdright8494
1 ThirstyPanda8495
1 Thirtythreeforty8496
4 This lousy T-shirt8497
2 Thisisbossi8498
1 Thisisnotcam8499
2 Tholme8500
2 Thom27298501

8477 https://en.wikipedia.org/w/index.php%3ftitle=User:Theprogrammer&action=edit&redlink=1
8478 https://en.wikipedia.org/w/index.php%3ftitle=User:Thermon&action=edit&redlink=1
8479 https://en.wikipedia.org/wiki/User:Thesevenseas
8480 https://en.wikipedia.org/wiki/User:Thesilverbail
8481 https://en.wikipedia.org/w/index.php%3ftitle=User:Thesquaregroot&action=edit&redlink=1
8482 https://en.wikipedia.org/wiki/User:Theta4
8483 https://en.wikipedia.org/wiki/User:Thewsomeguy
8484 https://en.wikipedia.org/w/index.php%3ftitle=User:Thiagohirai&action=edit&redlink=1
8485 https://en.wikipedia.org/wiki/User:Thibaut120094
8486 https://en.wikipedia.org/w/index.php%3ftitle=User:Thierry_Abal%25C3%25A9a&action=edit&redlink=1
8487 https://en.wikipedia.org/wiki/User:Thijs!bot
8488 https://en.wikipedia.org/w/index.php%3ftitle=User:Thijswijs&action=edit&redlink=1
8489 https://en.wikipedia.org/wiki/User:Thinboy00P
8490 https://en.wikipedia.org/wiki/User:Thingg
8491 https://en.wikipedia.org/wiki/User:Thinking_of_England
8492 https://en.wikipedia.org/w/index.php%3ftitle=User:Thinktdub&action=edit&redlink=1
8493 https://en.wikipedia.org/wiki/User:ThirdDolphin
8494 https://en.wikipedia.org/wiki/User:Thirdright
8495 https://en.wikipedia.org/w/index.php%3ftitle=User:ThirstyPanda&action=edit&redlink=1
8496 https://en.wikipedia.org/wiki/User:Thirtythreeforty
8497 https://en.wikipedia.org/wiki/User:This_lousy_T-shirt
8498 https://en.wikipedia.org/wiki/User:Thisisbossi
8499 https://en.wikipedia.org/wiki/User:Thisisnotcam
8500 https://en.wikipedia.org/wiki/User:Tholme
8501 https://en.wikipedia.org/w/index.php%3ftitle=User:Thom2729&action=edit&redlink=1

1 ThomHImself8502
1 Thomas Hard8503
1 Thomas J. S. Greenfield8504
7 Thomas.W8505
2 ThomasGHenry8506
11 ThomasMueller8507
5 ThomasTomMueller8508
5 Thomasda8509
1 Thomasgravina8510
2 Thomasjurriaan8511
4 Thomasp19858512
1 Thompsonb248513
1 Thorbjørn Ravn Andersen8514
2 Thore8515
143 Thore Husfeldt8516
3 Thorwald8517
1 Thoughtpuzzle8518
2 Thsgrn8519
2 Thtriumph8520
12 Thue8521
4 Thumperward8522
1 Thunderbird28523
1 Thunderboltz8524
7 Thv8525
1 Thái Nhi8526

8502 https://en.wikipedia.org/wiki/User:ThomHImself
8503 https://en.wikipedia.org/wiki/User:Thomas_Hard
8504 https://en.wikipedia.org/wiki/User:Thomas_J._S._Greenfield
8505 https://en.wikipedia.org/wiki/User:Thomas.W
8506 https://en.wikipedia.org/w/index.php%3ftitle=User:ThomasGHenry&action=edit&redlink=1
8507 https://en.wikipedia.org/wiki/User:ThomasMueller
8508 https://en.wikipedia.org/w/index.php%3ftitle=User:ThomasTomMueller&action=edit&redlink=1
8509 https://en.wikipedia.org/w/index.php%3ftitle=User:Thomasda&action=edit&redlink=1
8510 https://en.wikipedia.org/w/index.php%3ftitle=User:Thomasgravina&action=edit&redlink=1
8511 https://en.wikipedia.org/w/index.php%3ftitle=User:Thomasjurriaan&action=edit&redlink=1
8512 https://en.wikipedia.org/w/index.php%3ftitle=User:Thomasp1985&action=edit&redlink=1
8513 https://en.wikipedia.org/w/index.php%3ftitle=User:Thompsonb24&action=edit&redlink=1
8514 https://en.wikipedia.org/w/index.php%3ftitle=User:Thorbj%25C3%25B8rn_Ravn_Andersen&action=edit&redlink=1
8515 https://en.wikipedia.org/w/index.php%3ftitle=User:Thore&action=edit&redlink=1
8516 https://en.wikipedia.org/wiki/User:Thore_Husfeldt
8517 https://en.wikipedia.org/wiki/User:Thorwald
8518 https://en.wikipedia.org/wiki/User:Thoughtpuzzle
8519 https://en.wikipedia.org/wiki/User:Thsgrn
8520 https://en.wikipedia.org/wiki/User:Thtriumph
8521 https://en.wikipedia.org/wiki/User:Thue
8522 https://en.wikipedia.org/wiki/User:Thumperward
8523 https://en.wikipedia.org/wiki/User:Thunderbird2
8524 https://en.wikipedia.org/wiki/User:Thunderboltz
8525 https://en.wikipedia.org/wiki/User:Thv
8526 https://en.wikipedia.org/wiki/User:Th%25C3%25A1i_Nhi

1 Tiagomlalves8527
2 Ticklemepink428528
1 Tiddly Tom8529
10 Tide rolls8530
1 Tigerqin8531
1 Tigre2008532
36 Tijfo0988533
2 Tijmen Wil8534
1 Tim Goodwyn8535
1 Tim19888536
67 Tim328537
1 Tim362728538
2 TimBentley8539
2 Timawesomeness8540
1 Timc8541
4 Timde8542
1 Timdumol8543
2 Timeroot8544
1 Timir28545
3 Timkaler8546
1 Timotei218547
4 Timothy Gu8548
1 Timrem8549
1 Timtb8550
2 Timtempleton8551

8527 https://en.wikipedia.org/w/index.php%3ftitle=User:Tiagomlalves&action=edit&redlink=1
8528 https://en.wikipedia.org/w/index.php%3ftitle=User:Ticklemepink42&action=edit&redlink=1
8529 https://en.wikipedia.org/wiki/User:Tiddly_Tom
8530 https://en.wikipedia.org/wiki/User:Tide_rolls
8531 https://en.wikipedia.org/w/index.php%3ftitle=User:Tigerqin&action=edit&redlink=1
8532 https://en.wikipedia.org/wiki/User:Tigre200
8533 https://en.wikipedia.org/wiki/User:Tijfo098
8534 https://en.wikipedia.org/w/index.php%3ftitle=User:Tijmen_Wil&action=edit&redlink=1
8535 https://en.wikipedia.org/wiki/User:Tim_Goodwyn
8536 https://en.wikipedia.org/wiki/User:Tim1988
8537 https://en.wikipedia.org/wiki/User:Tim32
8538 https://en.wikipedia.org/w/index.php%3ftitle=User:Tim36272&action=edit&redlink=1
8539 https://en.wikipedia.org/wiki/User:TimBentley
8540 https://en.wikipedia.org/wiki/User:Timawesomeness
8541 https://en.wikipedia.org/wiki/User:Timc
8542 https://en.wikipedia.org/wiki/User:Timde
8543 https://en.wikipedia.org/w/index.php%3ftitle=User:Timdumol&action=edit&redlink=1
8544 https://en.wikipedia.org/wiki/User:Timeroot
8545 https://en.wikipedia.org/w/index.php%3ftitle=User:Timir2&action=edit&redlink=1
8546 https://en.wikipedia.org/w/index.php%3ftitle=User:Timkaler&action=edit&redlink=1
8547 https://en.wikipedia.org/w/index.php%3ftitle=User:Timotei21&action=edit&redlink=1
8548 https://en.wikipedia.org/wiki/User:Timothy_Gu
8549 https://en.wikipedia.org/wiki/User:Timrem
8550 https://en.wikipedia.org/w/index.php%3ftitle=User:Timtb&action=edit&redlink=1
8551 https://en.wikipedia.org/wiki/User:Timtempleton

54 Timwi8552
2 Tinman8553
1 Tinsvagelj8554
1 Tinus748555
1 Tiptoety8556
1 Tirab8557
1 Titanic40008558
2 Titodutta8559
4 Tizio8560
3 TjBot8561
3 Tjdw8562
2 Tjwood8563
1 Tkgd20078564
9 Tlhslobus8565
1 Tmdean8566
3 Tmferrara8567
1 Tmladek8568
2 Tmt5148569
1 Tmusgrove8570
1 Tneller8571
1 Tnmikeinia8572
1 Tnullnull8573
4 To'hajiilee8574
3 ToBeFree8575
1 Tobby728576

8552 https://en.wikipedia.org/wiki/User:Timwi
8553 https://en.wikipedia.org/w/index.php%3ftitle=User:Tinman&action=edit&redlink=1
8554 https://en.wikipedia.org/w/index.php%3ftitle=User:Tinsvagelj&action=edit&redlink=1
8555 https://en.wikipedia.org/w/index.php%3ftitle=User:Tinus74&action=edit&redlink=1
8556 https://en.wikipedia.org/w/index.php%3ftitle=User:Tiptoety&action=edit&redlink=1
8557 https://en.wikipedia.org/wiki/User:Tirab
8558 https://en.wikipedia.org/w/index.php%3ftitle=User:Titanic4000&action=edit&redlink=1
8559 https://en.wikipedia.org/wiki/User:Titodutta
8560 https://en.wikipedia.org/wiki/User:Tizio
8561 https://en.wikipedia.org/wiki/User:TjBot
8562 https://en.wikipedia.org/wiki/User:Tjdw
8563 https://en.wikipedia.org/wiki/User:Tjwood
8564 https://en.wikipedia.org/wiki/User:Tkgd2007
8565 https://en.wikipedia.org/wiki/User:Tlhslobus
8566 https://en.wikipedia.org/wiki/User:Tmdean
8567 https://en.wikipedia.org/w/index.php%3ftitle=User:Tmferrara&action=edit&redlink=1
8568 https://en.wikipedia.org/wiki/User:Tmladek
8569 https://en.wikipedia.org/w/index.php%3ftitle=User:Tmt514&action=edit&redlink=1
8570 https://en.wikipedia.org/wiki/User:Tmusgrove
8571 https://en.wikipedia.org/w/index.php%3ftitle=User:Tneller&action=edit&redlink=1
8572 https://en.wikipedia.org/w/index.php%3ftitle=User:Tnmikeinia&action=edit&redlink=1
8573 https://en.wikipedia.org/w/index.php%3ftitle=User:Tnullnull&action=edit&redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:To%2527hajiilee&action=edit&
8574
redlink=1
8575 https://en.wikipedia.org/wiki/User:ToBeFree
8576 https://en.wikipedia.org/wiki/User:Tobby72

2012
External links

2 TobeBot8577
2 Tobei8578
1 Tobi Kellner8579
1 Tobias Hoevekamp8580
2 TobiasPersson8581
1 Toby Bartels8582
4 Tobych8583
3 Tocharianne8584
1 Toddcs8585
1 Toddst18586
1 Toddy18587
3 TodorBozhinov8588
2 ToePeu.bot8589
1 Tofergregg8590
2 Tohd8BohaithuGh18591
1 Tokataro8592
2 Tokenzero8593
4 Tokidokix8594
1 Tolcso8595
4 Tolly4bolly8596
1 Tom 998597
1 Tom Duff8598
1 Tom Morris8599
4 Tom harrison8600
15 Tom.Reding8601

8577 https://en.wikipedia.org/wiki/User:TobeBot
8578 https://en.wikipedia.org/wiki/User:Tobei
8579 https://en.wikipedia.org/wiki/User:Tobi_Kellner
8580 https://en.wikipedia.org/wiki/User:Tobias_Hoevekamp
8581 https://en.wikipedia.org/wiki/User:TobiasPersson
8582 https://en.wikipedia.org/wiki/User:Toby_Bartels
8583 https://en.wikipedia.org/wiki/User:Tobych
8584 https://en.wikipedia.org/wiki/User:Tocharianne
8585 https://en.wikipedia.org/w/index.php%3ftitle=User:Toddcs&action=edit&redlink=1
8586 https://en.wikipedia.org/wiki/User:Toddst1
8587 https://en.wikipedia.org/wiki/User:Toddy1
8588 https://en.wikipedia.org/wiki/User:TodorBozhinov
8589 https://en.wikipedia.org/wiki/User:ToePeu.bot
8590 https://en.wikipedia.org/wiki/User:Tofergregg
8591 https://en.wikipedia.org/wiki/User:Tohd8BohaithuGh1
8592 https://en.wikipedia.org/wiki/User:Tokataro
8593 https://en.wikipedia.org/wiki/User:Tokenzero
8594 https://en.wikipedia.org/w/index.php%3ftitle=User:Tokidokix&action=edit&redlink=1
8595 https://en.wikipedia.org/w/index.php%3ftitle=User:Tolcso&action=edit&redlink=1
8596 https://en.wikipedia.org/wiki/User:Tolly4bolly
8597 https://en.wikipedia.org/wiki/User:Tom_99
8598 https://en.wikipedia.org/wiki/User:Tom_Duff
8599 https://en.wikipedia.org/wiki/User:Tom_Morris
8600 https://en.wikipedia.org/wiki/User:Tom_harrison
8601 https://en.wikipedia.org/wiki/User:Tom.Reding

2013
Contributors

1 Tom31188602
1 Tom714uk8603
1 TomCAnthony8604
1 TomCerul8605
1 TomJF8606
1 TomStar818607
2 TomViza8608
2 Tomandjerry2118609
1 TomasRiker8610
2 Tomaxer8611
4 Tomcatjerrymouse8612
4 Tomchiukc8613
1 Tomgally8614
5 Tomgsmith998615
2 Tomhubbard8616
2 Tomisti8617
2 Tomixdf8618
2 Tommunist8619
4 Tommy20108620
6 TommyG8621
1 Tommyjb8622
11 Tomo8623
3 Tomp8624
2 Tompagenet8625
2 Tompop8888626

8602 https://en.wikipedia.org/w/index.php%3ftitle=User:Tom3118&action=edit&redlink=1
8603 https://en.wikipedia.org/w/index.php%3ftitle=User:Tom714uk&action=edit&redlink=1
8604 https://en.wikipedia.org/w/index.php%3ftitle=User:TomCAnthony&action=edit&redlink=1
8605 https://en.wikipedia.org/wiki/User:TomCerul
8606 https://en.wikipedia.org/w/index.php%3ftitle=User:TomJF&action=edit&redlink=1
8607 https://en.wikipedia.org/wiki/User:TomStar81
8608 https://en.wikipedia.org/wiki/User:TomViza
8609 https://en.wikipedia.org/wiki/User:Tomandjerry211
8610 https://en.wikipedia.org/w/index.php%3ftitle=User:TomasRiker&action=edit&redlink=1
8611 https://en.wikipedia.org/wiki/User:Tomaxer
https://en.wikipedia.org/w/index.php%3ftitle=User:Tomcatjerrymouse&action=edit&
8612
redlink=1
8613 https://en.wikipedia.org/wiki/User:Tomchiukc
8614 https://en.wikipedia.org/wiki/User:Tomgally
8615 https://en.wikipedia.org/w/index.php%3ftitle=User:Tomgsmith99&action=edit&redlink=1
8616 https://en.wikipedia.org/wiki/User:Tomhubbard
8617 https://en.wikipedia.org/wiki/User:Tomisti
8618 https://en.wikipedia.org/wiki/User:Tomixdf
8619 https://en.wikipedia.org/wiki/User:Tommunist
8620 https://en.wikipedia.org/wiki/User:Tommy2010
8621 https://en.wikipedia.org/w/index.php%3ftitle=User:TommyG&action=edit&redlink=1
8622 https://en.wikipedia.org/wiki/User:Tommyjb
8623 https://en.wikipedia.org/wiki/User:Tomo
8624 https://en.wikipedia.org/wiki/User:Tomp
8625 https://en.wikipedia.org/wiki/User:Tompagenet
8626 https://en.wikipedia.org/wiki/User:Tompop888

2014
External links

1 Tompw8627
2 Tomruen8628
1 Tomt228629
1 Tomthecool8630
2 Tomzx8631
3 Toncek8632
3 ToneDaBass8633
1 Tong chuang8634
5 Tonkawa688635
1 Tony Fox8636
3 Tony18637
2 TonyW8638
1 Tonyfaull8639
1 Tonyli000008640
1 Tonynater8641
1 Tonysan8642
1 Too Old8643
1 Toolan8644
1 Tooto8645
4 TopHatCroat8646
2 Tore.opsahl8647
1 Toresbe8648
2 Torla428649
3 Tornado chaser8650
1 Torsmo8651

8627 https://en.wikipedia.org/wiki/User:Tompw
8628 https://en.wikipedia.org/wiki/User:Tomruen
8629 https://en.wikipedia.org/w/index.php%3ftitle=User:Tomt22&action=edit&redlink=1
8630 https://en.wikipedia.org/wiki/User:Tomthecool
8631 https://en.wikipedia.org/wiki/User:Tomzx
8632 https://en.wikipedia.org/w/index.php%3ftitle=User:Toncek&action=edit&redlink=1
8633 https://en.wikipedia.org/w/index.php%3ftitle=User:ToneDaBass&action=edit&redlink=1
8634 https://en.wikipedia.org/w/index.php%3ftitle=User:Tong_chuang&action=edit&redlink=1
8635 https://en.wikipedia.org/w/index.php%3ftitle=User:Tonkawa68&action=edit&redlink=1
8636 https://en.wikipedia.org/wiki/User:Tony_Fox
8637 https://en.wikipedia.org/wiki/User:Tony1
8638 https://en.wikipedia.org/wiki/User:TonyW
8639 https://en.wikipedia.org/wiki/User:Tonyfaull
8640 https://en.wikipedia.org/w/index.php%3ftitle=User:Tonyli00000&action=edit&redlink=1
8641 https://en.wikipedia.org/w/index.php%3ftitle=User:Tonynater&action=edit&redlink=1
8642 https://en.wikipedia.org/w/index.php%3ftitle=User:Tonysan&action=edit&redlink=1
8643 https://en.wikipedia.org/wiki/User:Too_Old
8644 https://en.wikipedia.org/w/index.php%3ftitle=User:Toolan&action=edit&redlink=1
8645 https://en.wikipedia.org/wiki/User:Tooto
8646 https://en.wikipedia.org/w/index.php%3ftitle=User:TopHatCroat&action=edit&redlink=1
8647 https://en.wikipedia.org/wiki/User:Tore.opsahl
8648 https://en.wikipedia.org/wiki/User:Toresbe
8649 https://en.wikipedia.org/w/index.php%3ftitle=User:Torla42&action=edit&redlink=1
8650 https://en.wikipedia.org/wiki/User:Tornado_chaser
8651 https://en.wikipedia.org/w/index.php%3ftitle=User:Torsmo&action=edit&redlink=1

2015
Contributors

3 Torsten Will8652
1 Tortoise 748653
1 Torvin8654
4 Torzsmokus8655
2 Tosha8656
1 Tostie148657
2 Totan4208658
3 Touriste8659
2 Tourorist8660
1 Touyats8661
1 TowerDragon8662
2 Towopedia8663
1 Toxin10008664
1 Toyentory8665
2 Toypod1008666
1 Tr00st8667
1 Tracytheta8668
2 Trade nobody wolf8669
3 TrainUnderwater8670
2 TranquilHope8671
1 Transcendental calico8672
1 Transonlohk8673
6 Trantd.vn8674
28 Trappist the monk8675

8652 https://en.wikipedia.org/w/index.php%3ftitle=User:Torsten_Will&action=edit&redlink=1
8653 https://en.wikipedia.org/wiki/User:Tortoise_74
8654 https://en.wikipedia.org/w/index.php%3ftitle=User:Torvin&action=edit&redlink=1
8655 https://en.wikipedia.org/wiki/User:Torzsmokus
8656 https://en.wikipedia.org/wiki/User:Tosha
8657 https://en.wikipedia.org/wiki/User:Tostie14
8658 https://en.wikipedia.org/w/index.php%3ftitle=User:Totan420&action=edit&redlink=1
8659 https://en.wikipedia.org/wiki/User:Touriste
8660 https://en.wikipedia.org/wiki/User:Tourorist
8661 https://en.wikipedia.org/w/index.php%3ftitle=User:Touyats&action=edit&redlink=1
8662 https://en.wikipedia.org/wiki/User:TowerDragon
8663 https://en.wikipedia.org/w/index.php%3ftitle=User:Towopedia&action=edit&redlink=1
8664 https://en.wikipedia.org/w/index.php%3ftitle=User:Toxin1000&action=edit&redlink=1
8665 https://en.wikipedia.org/w/index.php%3ftitle=User:Toyentory&action=edit&redlink=1
8666 https://en.wikipedia.org/w/index.php%3ftitle=User:Toypod100&action=edit&redlink=1
8667 https://en.wikipedia.org/wiki/User:Tr00st
8668 https://en.wikipedia.org/w/index.php%3ftitle=User:Tracytheta&action=edit&redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Trade_nobody_wolf&action=edit&
8669
redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:TrainUnderwater&action=edit&
8670
redlink=1
8671 https://en.wikipedia.org/wiki/User:TranquilHope
https://en.wikipedia.org/w/index.php%3ftitle=User:Transcendental_calico&action=edit&
8672
redlink=1
8673 https://en.wikipedia.org/wiki/User:Transonlohk
8674 https://en.wikipedia.org/w/index.php%3ftitle=User:Trantd.vn&action=edit&redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Trappist_the_monk&action=edit&
8675
redlink=1

2016
External links

2 Traroth8676
1 Travelbird8677
1 Travuun8678
1 Treaster8679
2 TreasuryTag8680
1 Treesmill8681
1 TrentonLipscomb8682
1 TreveX8683
2 Trevor Andersen8684
4 Treyshonuff8685
2 Trianam8686
1 Trianam~enwiki8687
2 TricksterWolf8688
2 Trieper8689
48 Trimutius8690
3 Trinitrix8691
1 TripleF8692
1 Tristan Surtel8693
2 Tristanb8694
1 Triston J. Taylor8695
1 Trisweb8696
3 Tritium68697
1 Trivialist8698
1 Triwas8699
1 Triwbe8700

8676 https://en.wikipedia.org/wiki/User:Traroth
8677 https://en.wikipedia.org/wiki/User:Travelbird
8678 https://en.wikipedia.org/w/index.php%3ftitle=User:Travuun&action=edit&redlink=1
8679 https://en.wikipedia.org/w/index.php%3ftitle=User:Treaster&action=edit&redlink=1
8680 https://en.wikipedia.org/wiki/User:TreasuryTag
8681 https://en.wikipedia.org/wiki/User:Treesmill
8682 https://en.wikipedia.org/wiki/User:TrentonLipscomb
8683 https://en.wikipedia.org/wiki/User:TreveX
8684 https://en.wikipedia.org/wiki/User:Trevor_Andersen
8685 https://en.wikipedia.org/w/index.php%3ftitle=User:Treyshonuff&action=edit&redlink=1
8686 https://en.wikipedia.org/w/index.php%3ftitle=User:Trianam&action=edit&redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Trianam~enwiki&action=edit&redlink=
8687
1
8688 https://en.wikipedia.org/w/index.php%3ftitle=User:TricksterWolf&action=edit&redlink=1
8689 https://en.wikipedia.org/w/index.php%3ftitle=User:Trieper&action=edit&redlink=1
8690 https://en.wikipedia.org/wiki/User:Trimutius
8691 https://en.wikipedia.org/wiki/User:Trinitrix
8692 https://en.wikipedia.org/w/index.php%3ftitle=User:TripleF&action=edit&redlink=1
8693 https://en.wikipedia.org/wiki/User:Tristan_Surtel
8694 https://en.wikipedia.org/wiki/User:Tristanb
https://en.wikipedia.org/w/index.php%3ftitle=User:Triston_J._Taylor&action=edit&
8695
redlink=1
8696 https://en.wikipedia.org/wiki/User:Trisweb
8697 https://en.wikipedia.org/wiki/User:Tritium6
8698 https://en.wikipedia.org/wiki/User:Trivialist
8699 https://en.wikipedia.org/w/index.php%3ftitle=User:Triwas&action=edit&redlink=1
8700 https://en.wikipedia.org/wiki/User:Triwbe

2017
Contributors

1 Trixter8701
1 Trjumpet8702
1 Trlkly8703
1 Tromp8704
1 TroncTest8705
29 Trovatore8706
1 Trudslev8707
1 Truehalley~enwiki8708
2 Trumpsternator8709
6 Trunks1758710
3 Trusilver8711
13 TruthIIPower8712
1 Tryptophan48713
3 Tsca.bot8714
1 Tschmidt238715
1 Tsf8716
2 Tshubham8717
3 Tshuva8718
1 Tsiaojian lee8719
5 Tsirel8720
1 Tspguy8721
1 Tsplog8722
1 Tsunanet8723
1 Ttiotsw8724
1 Ttonyb18725

8701 https://en.wikipedia.org/w/index.php%3ftitle=User:Trixter&action=edit&redlink=1
8702 https://en.wikipedia.org/wiki/User:Trjumpet
8703 https://en.wikipedia.org/wiki/User:Trlkly
8704 https://en.wikipedia.org/wiki/User:Tromp
8705 https://en.wikipedia.org/w/index.php%3ftitle=User:TroncTest&action=edit&redlink=1
8706 https://en.wikipedia.org/wiki/User:Trovatore
8707 https://en.wikipedia.org/w/index.php%3ftitle=User:Trudslev&action=edit&redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Truehalley~enwiki&action=edit&
8708
redlink=1
8709 https://en.wikipedia.org/wiki/User:Trumpsternator
8710 https://en.wikipedia.org/w/index.php%3ftitle=User:Trunks175&action=edit&redlink=1
8711 https://en.wikipedia.org/wiki/User:Trusilver
8712 https://en.wikipedia.org/wiki/User:TruthIIPower
8713 https://en.wikipedia.org/w/index.php%3ftitle=User:Tryptophan4&action=edit&redlink=1
8714 https://en.wikipedia.org/wiki/User:Tsca.bot
8715 https://en.wikipedia.org/w/index.php%3ftitle=User:Tschmidt23&action=edit&redlink=1
8716 https://en.wikipedia.org/w/index.php%3ftitle=User:Tsf&action=edit&redlink=1
8717 https://en.wikipedia.org/w/index.php%3ftitle=User:Tshubham&action=edit&redlink=1
8718 https://en.wikipedia.org/wiki/User:Tshuva
8719 https://en.wikipedia.org/wiki/User:Tsiaojian_lee
8720 https://en.wikipedia.org/wiki/User:Tsirel
8721 https://en.wikipedia.org/w/index.php%3ftitle=User:Tspguy&action=edit&redlink=1
8722 https://en.wikipedia.org/w/index.php%3ftitle=User:Tsplog&action=edit&redlink=1
8723 https://en.wikipedia.org/wiki/User:Tsunanet
8724 https://en.wikipedia.org/wiki/User:Ttiotsw
8725 https://en.wikipedia.org/wiki/User:Ttonyb1

2018
External links

1 Ttwaring8726
4 Ttzz8727
1 TuHan-Bot8728
3 Tualha8729
1 Tuankiet658730
2 Tudor9878731
1 Tuketu78732
2 Tuna0278733
3 Tuonawa8734
1 Turbo pencil8735
4 Turketwh8736
1 Turn6858737
5 Tusharsoni0998738
2 Tuttu4u8739
2 TuukkaH8740
1 TuvicBot8741
1 Tuxianyu8742
1 Tvguide12348743
1 Tvidas8744
2 Twanvl8745
1 Twas Now8746
8 Twexcom8747
2 Twikir8748
5 Twin Bird8749
3 Twinmind8750

8726 https://en.wikipedia.org/wiki/User:Ttwaring
8727 https://en.wikipedia.org/wiki/User:Ttzz
8728 https://en.wikipedia.org/wiki/User:TuHan-Bot
8729 https://en.wikipedia.org/wiki/User:Tualha
8730 https://en.wikipedia.org/w/index.php%3ftitle=User:Tuankiet65&action=edit&redlink=1
8731 https://en.wikipedia.org/wiki/User:Tudor987
8732 https://en.wikipedia.org/w/index.php%3ftitle=User:Tuketu7&action=edit&redlink=1
8733 https://en.wikipedia.org/wiki/User:Tuna027
8734 https://en.wikipedia.org/w/index.php%3ftitle=User:Tuonawa&action=edit&redlink=1
8735 https://en.wikipedia.org/wiki/User:Turbo_pencil
8736 https://en.wikipedia.org/wiki/User:Turketwh
8737 https://en.wikipedia.org/wiki/User:Turn685
8738 https://en.wikipedia.org/w/index.php%3ftitle=User:Tusharsoni099&action=edit&redlink=1
8739 https://en.wikipedia.org/w/index.php%3ftitle=User:Tuttu4u&action=edit&redlink=1
8740 https://en.wikipedia.org/wiki/User:TuukkaH
8741 https://en.wikipedia.org/wiki/User:TuvicBot
8742 https://en.wikipedia.org/w/index.php%3ftitle=User:Tuxianyu&action=edit&redlink=1
8743 https://en.wikipedia.org/w/index.php%3ftitle=User:Tvguide1234&action=edit&redlink=1
8744 https://en.wikipedia.org/w/index.php%3ftitle=User:Tvidas&action=edit&redlink=1
8745 https://en.wikipedia.org/w/index.php%3ftitle=User:Twanvl&action=edit&redlink=1
8746 https://en.wikipedia.org/wiki/User:Twas_Now
8747 https://en.wikipedia.org/w/index.php%3ftitle=User:Twexcom&action=edit&redlink=1
8748 https://en.wikipedia.org/w/index.php%3ftitle=User:Twikir&action=edit&redlink=1
8749 https://en.wikipedia.org/wiki/User:Twin_Bird
8750 https://en.wikipedia.org/w/index.php%3ftitle=User:Twinmind&action=edit&redlink=1

2019
Contributors

1 TwistOfCain8751
7 Two Bananas8752
7 TwoTwoHello8753
1 Twocs8754
46 Twri8755
1 Twsx8756
1 Tyco.skinner8757
2 Tyilo8758
2 Tyir8759
7 Tyler McHenry8760
1 Tylerbittner8761
1 Tylerl8762
1 Tymon.r8763
2 Typinaway8764
1 Typobox8765
1 Tyranitar 648766
2 Tyytsang8767
2 Tzanger8768
2 U2fanboi8769
2 UKoch8770
1 UTSRelativity8771
1 Uadhd8772
2 UberWoman8773
1 Ubiquity8774
4 Ubsan8775

8751 https://en.wikipedia.org/wiki/User:TwistOfCain
8752 https://en.wikipedia.org/wiki/User:Two_Bananas
8753 https://en.wikipedia.org/wiki/User:TwoTwoHello
8754 https://en.wikipedia.org/wiki/User:Twocs
8755 https://en.wikipedia.org/wiki/User:Twri
8756 https://en.wikipedia.org/wiki/User:Twsx
8757 https://en.wikipedia.org/wiki/User:Tyco.skinner
8758 https://en.wikipedia.org/wiki/User:Tyilo
8759 https://en.wikipedia.org/wiki/User:Tyir
8760 https://en.wikipedia.org/wiki/User:Tyler_McHenry
8761 https://en.wikipedia.org/w/index.php%3ftitle=User:Tylerbittner&action=edit&redlink=1
8762 https://en.wikipedia.org/wiki/User:Tylerl
8763 https://en.wikipedia.org/wiki/User:Tymon.r
8764 https://en.wikipedia.org/wiki/User:Typinaway
8765 https://en.wikipedia.org/wiki/User:Typobox
8766 https://en.wikipedia.org/w/index.php%3ftitle=User:Tyranitar_64&action=edit&redlink=1
8767 https://en.wikipedia.org/w/index.php%3ftitle=User:Tyytsang&action=edit&redlink=1
8768 https://en.wikipedia.org/w/index.php%3ftitle=User:Tzanger&action=edit&redlink=1
8769 https://en.wikipedia.org/wiki/User:U2fanboi
8770 https://en.wikipedia.org/wiki/User:UKoch
8771 https://en.wikipedia.org/wiki/User:UTSRelativity
8772 https://en.wikipedia.org/w/index.php%3ftitle=User:Uadhd&action=edit&redlink=1
8773 https://en.wikipedia.org/w/index.php%3ftitle=User:UberWoman&action=edit&redlink=1
8774 https://en.wikipedia.org/wiki/User:Ubiquity
8775 https://en.wikipedia.org/w/index.php%3ftitle=User:Ubsan&action=edit&redlink=1

2020
External links

1 Ucucha8776
6 Udirock8777
1 Udo.bellack8778
1 UffeHThygesen8779
2 Ugog Nizdast8780
1 Ugur Basak Bot~enwiki8781
1 Ugusensei8782
3 UkPaolo8783
1 Ukexpat8784
4 Ukraroad8785
2 Ulfalizer8786
2 Ulfben8787
5 Ultimus8788
1 Ultra two8789
1 Ultramarine8790
1 Ululation8791
1 UmFuZ8792
3 Umbreus8793
1 Un11imig8794
1 UnCatBot8795
1 Unbitwise8796
1 Unbuttered Parsnip8797
1 Uncle Dick8798
3 UncleDouggie8799
1 Underbar dk8800

8776 https://en.wikipedia.org/wiki/User:Ucucha
8777 https://en.wikipedia.org/w/index.php%3ftitle=User:Udirock&action=edit&redlink=1
8778 https://en.wikipedia.org/w/index.php%3ftitle=User:Udo.bellack&action=edit&redlink=1
8779 https://en.wikipedia.org/wiki/User:UffeHThygesen
8780 https://en.wikipedia.org/wiki/User:Ugog_Nizdast
8781 https://en.wikipedia.org/wiki/User:Ugur_Basak_Bot~enwiki
8782 https://en.wikipedia.org/w/index.php%3ftitle=User:Ugusensei&action=edit&redlink=1
8783 https://en.wikipedia.org/wiki/User:UkPaolo
8784 https://en.wikipedia.org/wiki/User:Ukexpat
8785 https://en.wikipedia.org/w/index.php%3ftitle=User:Ukraroad&action=edit&redlink=1
8786 https://en.wikipedia.org/wiki/User:Ulfalizer
8787 https://en.wikipedia.org/wiki/User:Ulfben
8788 https://en.wikipedia.org/wiki/User:Ultimus
8789 https://en.wikipedia.org/wiki/User:Ultra_two
8790 https://en.wikipedia.org/wiki/User:Ultramarine
8791 https://en.wikipedia.org/w/index.php%3ftitle=User:Ululation&action=edit&redlink=1
8792 https://en.wikipedia.org/w/index.php%3ftitle=User:UmFuZ&action=edit&redlink=1
8793 https://en.wikipedia.org/w/index.php%3ftitle=User:Umbreus&action=edit&redlink=1
8794 https://en.wikipedia.org/wiki/User:Un11imig
8795 https://en.wikipedia.org/wiki/User:UnCatBot
8796 https://en.wikipedia.org/wiki/User:Unbitwise
8797 https://en.wikipedia.org/wiki/User:Unbuttered_Parsnip
8798 https://en.wikipedia.org/w/index.php%3ftitle=User:Uncle_Dick&action=edit&redlink=1
8799 https://en.wikipedia.org/wiki/User:UncleDouggie
8800 https://en.wikipedia.org/wiki/User:Underbar_dk

2021
Contributors

1 Underwater8801
2 Undsoweiter8802
1 Unflavoured8803
1 Ungzd8804
1 Uni4dfx8805
1 Universalss8806
1 UniverseBlueShadows8807
15 Unixxx8808
3 Uniyal.sumedha8809
1 Unknown1048810
4 Unknown14228811
3 Unmwiki8812
1 UnnamedUser8813
7 Unquietwiki8814
1 Unready8815
1 Unyoyega8816
1 Updatehelper8817
3 Updeshgarg8818
2 Uploader4u8819
1 Upulcranga8820
3 Uranographer8821
2 Urdutext8822
1 Urhixidur8823
3 Uriah1238824
1 Uriber8825

8801 https://en.wikipedia.org/wiki/User:Underwater
8802 https://en.wikipedia.org/wiki/User:Undsoweiter
8803 https://en.wikipedia.org/wiki/User:Unflavoured
8804 https://en.wikipedia.org/w/index.php%3ftitle=User:Ungzd&action=edit&redlink=1
8805 https://en.wikipedia.org/w/index.php%3ftitle=User:Uni4dfx&action=edit&redlink=1
8806 https://en.wikipedia.org/wiki/User:Universalss
https://en.wikipedia.org/w/index.php%3ftitle=User:UniverseBlueShadows&action=edit&
8807
redlink=1
8808 https://en.wikipedia.org/wiki/User:Unixxx
8809 https://en.wikipedia.org/wiki/User:Uniyal.sumedha
8810 https://en.wikipedia.org/wiki/User:Unknown104
8811 https://en.wikipedia.org/w/index.php%3ftitle=User:Unknown1422&action=edit&redlink=1
8812 https://en.wikipedia.org/w/index.php%3ftitle=User:Unmwiki&action=edit&redlink=1
8813 https://en.wikipedia.org/wiki/User:UnnamedUser
8814 https://en.wikipedia.org/wiki/User:Unquietwiki
8815 https://en.wikipedia.org/wiki/User:Unready
8816 https://en.wikipedia.org/wiki/User:Unyoyega
8817 https://en.wikipedia.org/wiki/User:Updatehelper
8818 https://en.wikipedia.org/w/index.php%3ftitle=User:Updeshgarg&action=edit&redlink=1
8819 https://en.wikipedia.org/w/index.php%3ftitle=User:Uploader4u&action=edit&redlink=1
8820 https://en.wikipedia.org/w/index.php%3ftitle=User:Upulcranga&action=edit&redlink=1
8821 https://en.wikipedia.org/wiki/User:Uranographer
8822 https://en.wikipedia.org/wiki/User:Urdutext
8823 https://en.wikipedia.org/wiki/User:Urhixidur
8824 https://en.wikipedia.org/w/index.php%3ftitle=User:Uriah123&action=edit&redlink=1
8825 https://en.wikipedia.org/wiki/User:Uriber

2022
External links

1 Urod8826
3 UrsaFoot8827
1 UrsusArctos8828
3 Usability8829
1 Usama7078830
2 Uselesswarrior8831
3 Uselv988832
1 User 388833
12 User A18834
7 User-duck8835
1 User59108836
1 UserGoogol8837
5 Userabc8838
1 Userask8839
1 UsernameNotTaken8840
1 Usernameasdf8841
1 Useurandom8842
2 Ushallnotedit8843
1 Ushkin N8844
1 Usien68845
2 Usmanity8846
1 Usrnme h8er8847
1 Ustadny8848
3 Utcursch8849
2 UtherSRG8850

8826 https://en.wikipedia.org/wiki/User:Urod
8827 https://en.wikipedia.org/w/index.php%3ftitle=User:UrsaFoot&action=edit&redlink=1
8828 https://en.wikipedia.org/wiki/User:UrsusArctos
8829 https://en.wikipedia.org/wiki/User:Usability
8830 https://en.wikipedia.org/wiki/User:Usama707
https://en.wikipedia.org/w/index.php%3ftitle=User:Uselesswarrior&action=edit&redlink=
8831
1
8832 https://en.wikipedia.org/w/index.php%3ftitle=User:Uselv98&action=edit&redlink=1
8833 https://en.wikipedia.org/wiki/User:User_38
8834 https://en.wikipedia.org/wiki/User:User_A1
8835 https://en.wikipedia.org/wiki/User:User-duck
8836 https://en.wikipedia.org/wiki/User:User5910
8837 https://en.wikipedia.org/wiki/User:UserGoogol
8838 https://en.wikipedia.org/w/index.php%3ftitle=User:Userabc&action=edit&redlink=1
8839 https://en.wikipedia.org/w/index.php%3ftitle=User:Userask&action=edit&redlink=1
8840 https://en.wikipedia.org/wiki/User:UsernameNotTaken
8841 https://en.wikipedia.org/w/index.php%3ftitle=User:Usernameasdf&action=edit&redlink=1
8842 https://en.wikipedia.org/w/index.php%3ftitle=User:Useurandom&action=edit&redlink=1
8843 https://en.wikipedia.org/w/index.php%3ftitle=User:Ushallnotedit&action=edit&redlink=1
8844 https://en.wikipedia.org/wiki/User:Ushkin_N
8845 https://en.wikipedia.org/wiki/User:Usien6
8846 https://en.wikipedia.org/w/index.php%3ftitle=User:Usmanity&action=edit&redlink=1
8847 https://en.wikipedia.org/wiki/User:Usrnme_h8er
8848 https://en.wikipedia.org/w/index.php%3ftitle=User:Ustadny&action=edit&redlink=1
8849 https://en.wikipedia.org/wiki/User:Utcursch
8850 https://en.wikipedia.org/wiki/User:UtherSRG

2023
Contributors

1 Utopcell8851
2 Utopiah8852
3 Utopianheaven8853
1 Utuado8854
1 Uurtamo8855
2 Uwcs8856
1 Uzdzislaw8857
2 Uziel3028858
2 Uzonyiakos8859
1 V-Teq~enwiki8860
1 V.Sindhuja8861
1 V0d01ey8862
2 V35b8863
4 VKokielov8864
11 VMS Mosaic8865
2 VNosenko8866
1 VOThijs8867
1 VS65078868
6 VTBassMatt8869
4 VVVBot8870
1 VZakharov8871
2 Vacation98872
1 Vacilandois8873
4 Vacio8874
1 Vaclav.brozek8875

8851 https://en.wikipedia.org/w/index.php%3ftitle=User:Utopcell&action=edit&redlink=1
8852 https://en.wikipedia.org/wiki/User:Utopiah
8853 https://en.wikipedia.org/wiki/User:Utopianheaven
8854 https://en.wikipedia.org/wiki/User:Utuado
8855 https://en.wikipedia.org/wiki/User:Uurtamo
8856 https://en.wikipedia.org/w/index.php%3ftitle=User:Uwcs&action=edit&redlink=1
8857 https://en.wikipedia.org/wiki/User:Uzdzislaw
8858 https://en.wikipedia.org/wiki/User:Uziel302
8859 https://en.wikipedia.org/w/index.php%3ftitle=User:Uzonyiakos&action=edit&redlink=1
8860 https://en.wikipedia.org/wiki/User:V-Teq~enwiki
8861 https://en.wikipedia.org/w/index.php%3ftitle=User:V.Sindhuja&action=edit&redlink=1
8862 https://en.wikipedia.org/w/index.php%3ftitle=User:V0d01ey&action=edit&redlink=1
8863 https://en.wikipedia.org/w/index.php%3ftitle=User:V35b&action=edit&redlink=1
8864 https://en.wikipedia.org/wiki/User:VKokielov
8865 https://en.wikipedia.org/wiki/User:VMS_Mosaic
8866 https://en.wikipedia.org/w/index.php%3ftitle=User:VNosenko&action=edit&redlink=1
8867 https://en.wikipedia.org/w/index.php%3ftitle=User:VOThijs&action=edit&redlink=1
8868 https://en.wikipedia.org/wiki/User:VS6507
8869 https://en.wikipedia.org/wiki/User:VTBassMatt
8870 https://en.wikipedia.org/wiki/User:VVVBot
8871 https://en.wikipedia.org/wiki/User:VZakharov
8872 https://en.wikipedia.org/wiki/User:Vacation9
8873 https://en.wikipedia.org/wiki/User:Vacilandois
8874 https://en.wikipedia.org/wiki/User:Vacio
8875 https://en.wikipedia.org/w/index.php%3ftitle=User:Vaclav.brozek&action=edit&redlink=1

2024
External links

1 Vadim Makarov8876
18 Vadimvadim8877
3 Vadmium8878
1 Vaibhav.misra278879
4 Vaibhav19928880
1 Vaidersith8881
2 Vald8882
1 Valenciano8883
1 Valentas.Kurauskas8884
1 Valentyn.boreiko8885
1 Valery.vv8886
1 Valkrynnus8887
2 Vampamp8888
2 Van Hohenheim8889
1 Van Parunak8890
3 Vanessaaleung8891
1 Vanisaac8892
1 Vanished User 8a9b4725f83768893
2 Vanished user 048894
2 Vanished user 399482828895
1 Vanished user wdjklasdjskla8896
1 VanishedUser sdu9aya9fasdsopa8897
3 VanishedUser sdu9aya9fs787sads8898

8876 https://en.wikipedia.org/wiki/User:Vadim_Makarov
8877 https://en.wikipedia.org/wiki/User:Vadimvadim
8878 https://en.wikipedia.org/wiki/User:Vadmium
https://en.wikipedia.org/w/index.php%3ftitle=User:Vaibhav.misra27&action=edit&
8879
redlink=1
8880 https://en.wikipedia.org/wiki/User:Vaibhav1992
8881 https://en.wikipedia.org/w/index.php%3ftitle=User:Vaidersith&action=edit&redlink=1
8882 https://en.wikipedia.org/wiki/User:Vald
8883 https://en.wikipedia.org/wiki/User:Valenciano
https://en.wikipedia.org/w/index.php%3ftitle=User:Valentas.Kurauskas&action=edit&
8884
redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Valentyn.boreiko&action=edit&
8885
redlink=1
8886 https://en.wikipedia.org/w/index.php%3ftitle=User:Valery.vv&action=edit&redlink=1
8887 https://en.wikipedia.org/w/index.php%3ftitle=User:Valkrynnus&action=edit&redlink=1
8888 https://en.wikipedia.org/wiki/User:Vampamp
8889 https://en.wikipedia.org/w/index.php%3ftitle=User:Van_Hohenheim&action=edit&redlink=1
8890 https://en.wikipedia.org/wiki/User:Van_Parunak
8891 https://en.wikipedia.org/w/index.php%3ftitle=User:Vanessaaleung&action=edit&redlink=1
8892 https://en.wikipedia.org/wiki/User:Vanisaac
https://en.wikipedia.org/w/index.php%3ftitle=User:Vanished_User_8a9b4725f8376&action=
8893
edit&redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Vanished_user_04&action=edit&
8894
redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Vanished_user_39948282&action=edit&
8895
redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Vanished_user_wdjklasdjskla&action=
8896
edit&redlink=1
8897 https://en.wikipedia.org/wiki/User:VanishedUser_sdu9aya9fasdsopa
https://en.wikipedia.org/w/index.php%3ftitle=User:VanishedUser_sdu9aya9fs787sads&
8898
action=edit&redlink=1

2025
Contributors

2 VanishedUserABC8899
3 Vanisheduser12a678900
1 Vanis~enwiki8901
1 Vanjagenije8902
1 Varma rockzz8903
2 Varocarbas8904
1 Varuna8905
2 Vasiľ8906
1 Vassloff8907
1 Vasudevan.selvaganesh8908
8 Vaughan Pratt8909
7 Vcfahrenbruck8910
3 Vdaghan8911
1 Vdm8912
2 Vecrumba8913
1 Vecter8914
1 Vectornaut8915
1 Vectorpaladin138916
12 Vedant8917
5 Veganfanatic8918
1 Vegas9498919
2 Vegasprof8920
4 Vegaswikian8921
1 Vegetator8922

https://en.wikipedia.org/w/index.php%3ftitle=User:VanishedUserABC&action=edit&
8899
redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Vanisheduser12a67&action=edit&
8900
redlink=1
8901 https://en.wikipedia.org/w/index.php%3ftitle=User:Vanis~enwiki&action=edit&redlink=1
8902 https://en.wikipedia.org/wiki/User:Vanjagenije
8903 https://en.wikipedia.org/w/index.php%3ftitle=User:Varma_rockzz&action=edit&redlink=1
8904 https://en.wikipedia.org/w/index.php%3ftitle=User:Varocarbas&action=edit&redlink=1
8905 https://en.wikipedia.org/wiki/User:Varuna
8906 https://en.wikipedia.org/wiki/User:Vasi%25C4%25BE
8907 https://en.wikipedia.org/w/index.php%3ftitle=User:Vassloff&action=edit&redlink=1
https://en.wikipedia.org/w/index.php%3ftitle=User:Vasudevan.selvaganesh&action=edit&
8908
redlink=1
8909 https://en.wikipedia.org/wiki/User:Vaughan_Pratt
8910 https://en.wikipedia.org/w/index.php%3ftitle=User:Vcfahrenbruck&action=edit&redlink=1
8911 https://en.wikipedia.org/w/index.php%3ftitle=User:Vdaghan&action=edit&redlink=1
8912 https://en.wikipedia.org/w/index.php%3ftitle=User:Vdm&action=edit&redlink=1
8913 https://en.wikipedia.org/wiki/User:Vecrumba
8914 https://en.wikipedia.org/wiki/User:Vecter
8915 https://en.wikipedia.org/wiki/User:Vectornaut
https://en.wikipedia.org/w/index.php%3ftitle=User:Vectorpaladin13&action=edit&
8916
redlink=1
8917 https://en.wikipedia.org/wiki/User:Vedant
8918 https://en.wikipedia.org/wiki/User:Veganfanatic
8919 https://en.wikipedia.org/wiki/User:Vegas949
8920 https://en.wikipedia.org/wiki/User:Vegasprof
8921 https://en.wikipedia.org/wiki/User:Vegaswikian
8922 https://en.wikipedia.org/wiki/User:Vegetator

2026
External links

1 Vegpuff8923
7 Velella8924
1 Velociostrich8925
1 Veluca938926
3 Venkatarun958927
8 Verbal8928
12 Verdy p8929
2 Vermooten8930
6 VernoWhitney8931
1 Versageek8932
2 Versus228933
1 VeryVerily8934
1 Veryangrypenguin8935
4 Verycuriousboy8936
2 Verydark258937
1 Vespristiano8938
4 Vevek8939
1 Vexations8940
7 Vexis8941
1 Vgy7ujm8942
1 Vicarious8943
1 Victolunik8944
2 Victor Chmara8945
1 Victor veitch8946
3 VictorAnyakin8947

8923 https://en.wikipedia.org/wiki/User:Vegpuff
8924 https://en.wikipedia.org/wiki/User:Velella
8925 https://en.wikipedia.org/w/index.php%3ftitle=User:Velociostrich&action=edit&redlink=1
8926 https://en.wikipedia.org/w/index.php%3ftitle=User:Veluca93&action=edit&redlink=1
8927 https://en.wikipedia.org/w/index.php%3ftitle=User:Venkatarun95&action=edit&redlink=1
8928 https://en.wikipedia.org/wiki/User:Verbal
8929 https://en.wikipedia.org/wiki/User:Verdy_p
8930 https://en.wikipedia.org/wiki/User:Vermooten
8931 https://en.wikipedia.org/wiki/User:VernoWhitney
8932 https://en.wikipedia.org/wiki/User:Versageek
8933 https://en.wikipedia.org/wiki/User:Versus22
8934 https://en.wikipedia.org/wiki/User:VeryVerily
8935 https://en.wikipedia.org/wiki/User:Veryangrypenguin
8936 https://en.wikipedia.org/wiki/User:Verycuriousboy
8937 https://en.wikipedia.org/w/index.php%3ftitle=User:Verydark25&action=edit&redlink=1
8938 https://en.wikipedia.org/wiki/User:Vespristiano
8939 https://en.wikipedia.org/w/index.php%3ftitle=User:Vevek&action=edit&redlink=1
8940 https://en.wikipedia.org/wiki/User:Vexations
8941 https://en.wikipedia.org/wiki/User:Vexis
8942 https://en.wikipedia.org/wiki/User:Vgy7ujm
8943 https://en.wikipedia.org/wiki/User:Vicarious
8944 https://en.wikipedia.org/w/index.php%3ftitle=User:Victolunik&action=edit&redlink=1
8945 https://en.wikipedia.org/wiki/User:Victor_Chmara
8946 https://en.wikipedia.org/wiki/User:Victor_veitch
8947 https://en.wikipedia.org/wiki/User:VictorAnyakin

2027
Contributors

2 VictorPorton8948
1 VictorYarema8949
1 Victorliuwiki8950
1 Viebel8951
1 Vieledgalaxy8952
1 Vieque8953
1 Vigi908954
24 Vigna8955
1 Vikas.menon8956
1 Vike20008957
2 Vikreykja8958
1 Vikrum8959
5 Villaone568960
2 Vin098961
2 Vina-iwbot~enwiki8962
1 Vinccool968963
14 Vince.jennings8964
1 Vincent Liu8965
1 VincentP8966
4 VineetKumar8967
2 Vineetzone8968
2 VinhyDahPooh8969
3 Vipinhari8970
1 Viriditas8971
1 VirtualDemon8972

8948 https://en.wikipedia.org/wiki/User:VictorPorton
8949 https://en.wikipedia.org/w/index.php%3ftitle=User:VictorYarema&action=edit&redlink=1
8950 https://en.wikipedia.org/w/index.php%3ftitle=User:Victorliuwiki&action=edit&redlink=1
8951 https://en.wikipedia.org/w/index.php%3ftitle=User:Viebel&action=edit&redlink=1
8952 https://en.wikipedia.org/w/index.php%3ftitle=User:Vieledgalaxy&action=edit&redlink=1
8953 https://en.wikipedia.org/wiki/User:Vieque
8954 https://en.wikipedia.org/w/index.php%3ftitle=User:Vigi90&action=edit&redlink=1
8955 https://en.wikipedia.org/w/index.php%3ftitle=User:Vigna&action=edit&redlink=1
8956 https://en.wikipedia.org/w/index.php%3ftitle=User:Vikas.menon&action=edit&redlink=1
8957 https://en.wikipedia.org/wiki/User:Vike2000
8958 https://en.wikipedia.org/wiki/User:Vikreykja
8959 https://en.wikipedia.org/wiki/User:Vikrum
8960 https://en.wikipedia.org/w/index.php%3ftitle=User:Villaone56&action=edit&redlink=1
8961 https://en.wikipedia.org/wiki/User:Vin09
8962 https://en.wikipedia.org/wiki/User:Vina-iwbot~enwiki
8963 https://en.wikipedia.org/w/index.php%3ftitle=User:Vinccool96&action=edit&redlink=1
8964 https://en.wikipedia.org/w/index.php%3ftitle=User:Vince.jennings&action=edit&redlink=1
8965 https://en.wikipedia.org/wiki/User:Vincent_Liu
8966 https://en.wikipedia.org/w/index.php%3ftitle=User:VincentP&action=edit&redlink=1
8967 https://en.wikipedia.org/wiki/User:VineetKumar
8968 https://en.wikipedia.org/w/index.php%3ftitle=User:Vineetzone&action=edit&redlink=1
8969 https://en.wikipedia.org/w/index.php%3ftitle=User:VinhyDahPooh&action=edit&redlink=1
8970 https://en.wikipedia.org/wiki/User:Vipinhari
8971 https://en.wikipedia.org/wiki/User:Viriditas
8972 https://en.wikipedia.org/w/index.php%3ftitle=User:VirtualDemon&action=edit&redlink=1


1 Virtualblackfox8973
1 Virus3268974
1 Virus928975
1 Vishaldbhat8976
1 Vishnu09198977
1 Visor8978
2 Vivekk8979
2 Vix288980
2 Viz8981
1 Vkuncak8982
1 Vlad Petrean8983
2 VladH8984
1 Vladhed8985
2 VladimirReshetnikov8986
1 Vladkornea8987
2 Vmohanaraj8988
9 VoABot II8989
1 Vocaro8990
8 Voidxor8991
2 Volkan YAZICI8992
32 VolkovBot8993
2 Voltran468994
2 Vonbrand8995
5 Vonkje8996
1 Voomoo8997

8973 https://en.wikipedia.org/wiki/User:Virtualblackfox
8974 https://en.wikipedia.org/w/index.php%3ftitle=User:Virus326&action=edit&redlink=1
8975 https://en.wikipedia.org/w/index.php%3ftitle=User:Virus92&action=edit&redlink=1
8976 https://en.wikipedia.org/w/index.php%3ftitle=User:Vishaldbhat&action=edit&redlink=1
8977 https://en.wikipedia.org/w/index.php%3ftitle=User:Vishnu0919&action=edit&redlink=1
8978 https://en.wikipedia.org/wiki/User:Visor
8979 https://en.wikipedia.org/wiki/User:Vivekk
8980 https://en.wikipedia.org/w/index.php%3ftitle=User:Vix28&action=edit&redlink=1
8981 https://en.wikipedia.org/wiki/User:Viz
8982 https://en.wikipedia.org/wiki/User:Vkuncak
8983 https://en.wikipedia.org/w/index.php%3ftitle=User:Vlad_Petrean&action=edit&redlink=1
8984 https://en.wikipedia.org/wiki/User:VladH
8985 https://en.wikipedia.org/wiki/User:Vladhed
8986 https://en.wikipedia.org/wiki/User:VladimirReshetnikov
8987 https://en.wikipedia.org/w/index.php%3ftitle=User:Vladkornea&action=edit&redlink=1
8988 https://en.wikipedia.org/w/index.php%3ftitle=User:Vmohanaraj&action=edit&redlink=1
8989 https://en.wikipedia.org/wiki/User:VoABot_II
8990 https://en.wikipedia.org/wiki/User:Vocaro
8991 https://en.wikipedia.org/wiki/User:Voidxor
8992 https://en.wikipedia.org/w/index.php%3ftitle=User:Volkan_YAZICI&action=edit&redlink=1
8993 https://en.wikipedia.org/wiki/User:VolkovBot
8994 https://en.wikipedia.org/w/index.php%3ftitle=User:Voltran46&action=edit&redlink=1
8995 https://en.wikipedia.org/w/index.php%3ftitle=User:Vonbrand&action=edit&redlink=1
8996 https://en.wikipedia.org/wiki/User:Vonkje
8997 https://en.wikipedia.org/w/index.php%3ftitle=User:Voomoo&action=edit&redlink=1


2 Voorlandt8998
1 Vorastrix8999
2 Vorn9000
1 Vortexrealm9001
1 Voyevoda9002
1 Vpdesai9003
5 Vpieterse~enwiki9004
1 Vpshastry9005
2 Vramasub9006
7 Vrenator9007
2 Vromascanu9008
1 Vroo9009
1 Vsethuooo9010
3 Vsh3r9011
1 Vsion9012
2 Vstarre9013
1 Vstarsky9014
7 Vueza9015
1 Vujkovica brdo9016
1 VukAnd129017
1 Vukašin9018
1 Vvarkey9019
2 Vwm9020
1 Vycl19949021
2 Vyznev Xnebara9022

8998 https://en.wikipedia.org/wiki/User:Voorlandt
8999 https://en.wikipedia.org/w/index.php%3ftitle=User:Vorastrix&action=edit&redlink=1
9000 https://en.wikipedia.org/wiki/User:Vorn
9001 https://en.wikipedia.org/wiki/User:Vortexrealm
9002 https://en.wikipedia.org/wiki/User:Voyevoda
9003 https://en.wikipedia.org/w/index.php%3ftitle=User:Vpdesai&action=edit&redlink=1
9004 https://en.wikipedia.org/w/index.php%3ftitle=User:Vpieterse~enwiki&action=edit&redlink=1
9005 https://en.wikipedia.org/w/index.php%3ftitle=User:Vpshastry&action=edit&redlink=1
9006 https://en.wikipedia.org/wiki/User:Vramasub
9007 https://en.wikipedia.org/wiki/User:Vrenator
9008 https://en.wikipedia.org/w/index.php%3ftitle=User:Vromascanu&action=edit&redlink=1
9009 https://en.wikipedia.org/w/index.php%3ftitle=User:Vroo&action=edit&redlink=1
9010 https://en.wikipedia.org/w/index.php%3ftitle=User:Vsethuooo&action=edit&redlink=1
9011 https://en.wikipedia.org/w/index.php%3ftitle=User:Vsh3r&action=edit&redlink=1
9012 https://en.wikipedia.org/wiki/User:Vsion
9013 https://en.wikipedia.org/wiki/User:Vstarre
9014 https://en.wikipedia.org/w/index.php%3ftitle=User:Vstarsky&action=edit&redlink=1
9015 https://en.wikipedia.org/w/index.php%3ftitle=User:Vueza&action=edit&redlink=1
9016 https://en.wikipedia.org/wiki/User:Vujkovica_brdo
9017 https://en.wikipedia.org/wiki/User:VukAnd12
9018 https://en.wikipedia.org/w/index.php%3ftitle=User:Vuka%25C5%25A1in&action=edit&redlink=1
9019 https://en.wikipedia.org/wiki/User:Vvarkey
9020 https://en.wikipedia.org/w/index.php%3ftitle=User:Vwm&action=edit&redlink=1
9021 https://en.wikipedia.org/wiki/User:Vycl1994
9022 https://en.wikipedia.org/wiki/User:Vyznev_Xnebara


2 VzjrZ9023
2 W Nowicki9024
1 W1k1p3dddd149025
1 W1r3d29026
7 W3bbo9027
1 WEF brat sad9028
1 WISo9029
1 WMC9030
2 WOSlinker9031
1 WPLR9032
1 WPPatrol9033
2 WaddSpoiley9034
2 Waddie969035
2 Wafulz9036
1 Wagino 201005169037
1 Wahrmund9038
1 Wakebrdkid9039
2 Wakimakirolls9040
13 Waldyrious9041
1 Walk&check9042
6 Walkerlala9043
1 Walkerma9044
1 Walkie9045
2 Walrus0689046
2 Walter Görlitz9047

9023 https://en.wikipedia.org/wiki/User:VzjrZ
9024 https://en.wikipedia.org/wiki/User:W_Nowicki
9025 https://en.wikipedia.org/w/index.php%3ftitle=User:W1k1p3dddd14&action=edit&redlink=1
9026 https://en.wikipedia.org/w/index.php%3ftitle=User:W1r3d2&action=edit&redlink=1
9027 https://en.wikipedia.org/w/index.php%3ftitle=User:W3bbo&action=edit&redlink=1
9028 https://en.wikipedia.org/w/index.php%3ftitle=User:WEF_brat_sad&action=edit&redlink=1
9029 https://en.wikipedia.org/wiki/User:WISo
9030 https://en.wikipedia.org/wiki/User:WMC
9031 https://en.wikipedia.org/wiki/User:WOSlinker
9032 https://en.wikipedia.org/wiki/User:WPLR
9033 https://en.wikipedia.org/w/index.php%3ftitle=User:WPPatrol&action=edit&redlink=1
9034 https://en.wikipedia.org/w/index.php%3ftitle=User:WaddSpoiley&action=edit&redlink=1
9035 https://en.wikipedia.org/wiki/User:Waddie96
9036 https://en.wikipedia.org/wiki/User:Wafulz
9037 https://en.wikipedia.org/wiki/User:Wagino_20100516
9038 https://en.wikipedia.org/wiki/User:Wahrmund
9039 https://en.wikipedia.org/wiki/User:Wakebrdkid
9040 https://en.wikipedia.org/wiki/User:Wakimakirolls
9041 https://en.wikipedia.org/wiki/User:Waldyrious
9042 https://en.wikipedia.org/wiki/User:Walk%2526check
9043 https://en.wikipedia.org/wiki/User:Walkerlala
9044 https://en.wikipedia.org/wiki/User:Walkerma
9045 https://en.wikipedia.org/wiki/User:Walkie
9046 https://en.wikipedia.org/wiki/User:Walrus068
9047 https://en.wikipedia.org/wiki/User:Walter_G%25C3%25B6rlitz


2 Walterandrei229048
7 Waltnmi9049
1 Waltpohl9050
3 Wandering0079051
1 Wangbing79289052
1 Wanli339053
3 Wantnot9054
1 Wap9055
1 Wapcaplet9056
1 Wasell9057
1 Washa rednecks9058
1 Washington Irving Esquire9059
1 Watarok9060
7 Watcher9061
1 Watersmeetfreak9062
1 Watlok9063
4 Watson Ladd9064
1 Watts3pt09065
7 WavePart9066
1 Waveguy9067
40 Wavelength9068
2 Wayne Slam9069
1 Wayward9070
2 Wazimuko9071
2 Wbeek9072

9048 https://en.wikipedia.org/w/index.php%3ftitle=User:Walterandrei22&action=edit&redlink=1
9049 https://en.wikipedia.org/wiki/User:Waltnmi
9050 https://en.wikipedia.org/wiki/User:Waltpohl
9051 https://en.wikipedia.org/w/index.php%3ftitle=User:Wandering007&action=edit&redlink=1
9052 https://en.wikipedia.org/w/index.php%3ftitle=User:Wangbing7928&action=edit&redlink=1
9053 https://en.wikipedia.org/wiki/User:Wanli33
9054 https://en.wikipedia.org/wiki/User:Wantnot
9055 https://en.wikipedia.org/wiki/User:Wap
9056 https://en.wikipedia.org/wiki/User:Wapcaplet
9057 https://en.wikipedia.org/wiki/User:Wasell
9058 https://en.wikipedia.org/w/index.php%3ftitle=User:Washa_rednecks&action=edit&redlink=1
9059 https://en.wikipedia.org/wiki/User:Washington_Irving_Esquire
9060 https://en.wikipedia.org/w/index.php%3ftitle=User:Watarok&action=edit&redlink=1
9061 https://en.wikipedia.org/wiki/User:Watcher
9062 https://en.wikipedia.org/w/index.php%3ftitle=User:Watersmeetfreak&action=edit&redlink=1
9063 https://en.wikipedia.org/w/index.php%3ftitle=User:Watlok&action=edit&redlink=1
9064 https://en.wikipedia.org/w/index.php%3ftitle=User:Watson_Ladd&action=edit&redlink=1
9065 https://en.wikipedia.org/w/index.php%3ftitle=User:Watts3pt0&action=edit&redlink=1
9066 https://en.wikipedia.org/wiki/User:WavePart
9067 https://en.wikipedia.org/wiki/User:Waveguy
9068 https://en.wikipedia.org/wiki/User:Wavelength
9069 https://en.wikipedia.org/wiki/User:Wayne_Slam
9070 https://en.wikipedia.org/wiki/User:Wayward
9071 https://en.wikipedia.org/w/index.php%3ftitle=User:Wazimuko&action=edit&redlink=1
9072 https://en.wikipedia.org/wiki/User:Wbeek


6 Wbm10589073
1 Wchargin9074
38 Wcherowi9075
1 Wctaiwan9076
1 Wdr19077
1 Wdvorak9078
1 Weaktofu9079
1 Weburbia9080
1 Wee Curry Monster9081
1 Weedwhacker1289082
2 WeggeBot9083
5 Wei.cs9084
2 WeileiZeng9085
1 Weixifan9086
1 Wekigo9087
1 Welsh9088
8 Wen D House9089
1 Wenteng9090
4 Werdna9091
4 WereSpielChequers9092
2 Wereon9093
1 Wernetom9094
2 Wernher9095
1 West.andrew.g9096
3 Westley Turner9097

9073 https://en.wikipedia.org/wiki/User:Wbm1058
9074 https://en.wikipedia.org/wiki/User:Wchargin
9075 https://en.wikipedia.org/wiki/User:Wcherowi
9076 https://en.wikipedia.org/wiki/User:Wctaiwan
9077 https://en.wikipedia.org/wiki/User:Wdr1
9078 https://en.wikipedia.org/wiki/User:Wdvorak
9079 https://en.wikipedia.org/wiki/User:Weaktofu
9080 https://en.wikipedia.org/wiki/User:Weburbia
9081 https://en.wikipedia.org/wiki/User:Wee_Curry_Monster
9082 https://en.wikipedia.org/wiki/User:Weedwhacker128
9083 https://en.wikipedia.org/wiki/User:WeggeBot
9084 https://en.wikipedia.org/wiki/User:Wei.cs
9085 https://en.wikipedia.org/w/index.php%3ftitle=User:WeileiZeng&action=edit&redlink=1
9086 https://en.wikipedia.org/w/index.php%3ftitle=User:Weixifan&action=edit&redlink=1
9087 https://en.wikipedia.org/w/index.php%3ftitle=User:Wekigo&action=edit&redlink=1
9088 https://en.wikipedia.org/wiki/User:Welsh
9089 https://en.wikipedia.org/wiki/User:Wen_D_House
9090 https://en.wikipedia.org/wiki/User:Wenteng
9091 https://en.wikipedia.org/wiki/User:Werdna
9092 https://en.wikipedia.org/wiki/User:WereSpielChequers
9093 https://en.wikipedia.org/wiki/User:Wereon
9094 https://en.wikipedia.org/w/index.php%3ftitle=User:Wernetom&action=edit&redlink=1
9095 https://en.wikipedia.org/wiki/User:Wernher
9096 https://en.wikipedia.org/wiki/User:West.andrew.g
9097 https://en.wikipedia.org/wiki/User:Westley_Turner


2 Wewtaco9098
1 Weyes9099
3 Wfaulk9100
3 Wfaxon9101
1 Wfbergmann9102
1 Wfunction9103
1 Whaa?9104
1 WhackTheWiki9105
1 Whacks9106
1 WhaleyTim9107
1 WhatisFeelings?9108
1 WheezePuppet9109
10 WhereAreMyPointersAt9110
1 Wheresthebrain9111
2 Whikie9112
1 White Trillium9113
2 WhiteCrane9114
1 WhiteNebula9115
1 WhiteOak20069116
2 Whiteknox9117
1 Whkoh9118
1 Who9119
1 Who then was a gentleman?9120
2 Whoami499121
2 Whoopieisbae9122

9098 https://en.wikipedia.org/wiki/User:Wewtaco
9099 https://en.wikipedia.org/wiki/User:Weyes
9100 https://en.wikipedia.org/wiki/User:Wfaulk
9101 https://en.wikipedia.org/w/index.php%3ftitle=User:Wfaxon&action=edit&redlink=1
9102 https://en.wikipedia.org/w/index.php%3ftitle=User:Wfbergmann&action=edit&redlink=1
9103 https://en.wikipedia.org/w/index.php%3ftitle=User:Wfunction&action=edit&redlink=1
9104 https://en.wikipedia.org/wiki/User:Whaa%253F
9105 https://en.wikipedia.org/w/index.php%3ftitle=User:WhackTheWiki&action=edit&redlink=1
9106 https://en.wikipedia.org/w/index.php%3ftitle=User:Whacks&action=edit&redlink=1
9107 https://en.wikipedia.org/wiki/User:WhaleyTim
9108 https://en.wikipedia.org/wiki/User:WhatisFeelings%253F
9109 https://en.wikipedia.org/wiki/User:WheezePuppet
9110 https://en.wikipedia.org/w/index.php%3ftitle=User:WhereAreMyPointersAt&action=edit&redlink=1
9111 https://en.wikipedia.org/wiki/User:Wheresthebrain
9112 https://en.wikipedia.org/wiki/User:Whikie
9113 https://en.wikipedia.org/wiki/User:White_Trillium
9114 https://en.wikipedia.org/w/index.php%3ftitle=User:WhiteCrane&action=edit&redlink=1
9115 https://en.wikipedia.org/w/index.php%3ftitle=User:WhiteNebula&action=edit&redlink=1
9116 https://en.wikipedia.org/wiki/User:WhiteOak2006
9117 https://en.wikipedia.org/wiki/User:Whiteknox
9118 https://en.wikipedia.org/wiki/User:Whkoh
9119 https://en.wikipedia.org/wiki/User:Who
9120 https://en.wikipedia.org/wiki/User:Who_then_was_a_gentleman%253F
9121 https://en.wikipedia.org/w/index.php%3ftitle=User:Whoami49&action=edit&redlink=1
9122 https://en.wikipedia.org/w/index.php%3ftitle=User:Whoopieisbae&action=edit&redlink=1


3 Whosyourjudas9123
1 Whouk9124
1 Whyfish9125
7 Wiae9126
1 Wickethewok9127
4 Wicknicks9128
49 Widefox9129
1 Wierdcowman9130
2 Wierdy10249131
2 Wik9132
1 Wik.Swar909133
1 Wiker639134
2 Wiki-229135
1 Wiki.Tango.Foxtrot9136
1 Wiki.phylo9137
1 Wiki.ryansmith9138
11 Wiki2016edit9139
4 WikiCleanerBot9140
1 WikiDreamer Bot9141
1 WikiEnthusiastNumberTwenty-Two9142
1 WikiLaurent9143
2 WikiPedant9144
3 WikiSlasher9145
2 WikiSzepi9146
1 WikiWizard9147

9123 https://en.wikipedia.org/wiki/User:Whosyourjudas
9124 https://en.wikipedia.org/wiki/User:Whouk
9125 https://en.wikipedia.org/w/index.php%3ftitle=User:Whyfish&action=edit&redlink=1
9126 https://en.wikipedia.org/wiki/User:Wiae
9127 https://en.wikipedia.org/wiki/User:Wickethewok
9128 https://en.wikipedia.org/w/index.php%3ftitle=User:Wicknicks&action=edit&redlink=1
9129 https://en.wikipedia.org/wiki/User:Widefox
9130 https://en.wikipedia.org/w/index.php%3ftitle=User:Wierdcowman&action=edit&redlink=1
9131 https://en.wikipedia.org/wiki/User:Wierdy1024
9132 https://en.wikipedia.org/wiki/User:Wik
9133 https://en.wikipedia.org/w/index.php%3ftitle=User:Wik.Swar90&action=edit&redlink=1
9134 https://en.wikipedia.org/wiki/User:Wiker63
9135 https://en.wikipedia.org/w/index.php%3ftitle=User:Wiki-22&action=edit&redlink=1
9136 https://en.wikipedia.org/wiki/User:Wiki.Tango.Foxtrot
9137 https://en.wikipedia.org/w/index.php%3ftitle=User:Wiki.phylo&action=edit&redlink=1
9138 https://en.wikipedia.org/w/index.php%3ftitle=User:Wiki.ryansmith&action=edit&redlink=1
9139 https://en.wikipedia.org/w/index.php%3ftitle=User:Wiki2016edit&action=edit&redlink=1
9140 https://en.wikipedia.org/wiki/User:WikiCleanerBot
9141 https://en.wikipedia.org/wiki/User:WikiDreamer_Bot
9142 https://en.wikipedia.org/wiki/User:WikiEnthusiastNumberTwenty-Two
9143 https://en.wikipedia.org/wiki/User:WikiLaurent
9144 https://en.wikipedia.org/wiki/User:WikiPedant
9145 https://en.wikipedia.org/wiki/User:WikiSlasher
9146 https://en.wikipedia.org/w/index.php%3ftitle=User:WikiSzepi&action=edit&redlink=1
9147 https://en.wikipedia.org/w/index.php%3ftitle=User:WikiWizard&action=edit&redlink=1


1 Wikibbexpi9148
7 Wikibot9149
2 Wikibuki9150
6 Wikid779151
1 Wikidsp9152
2 Wikidushyant9153
1 Wikiedit7389154
1 Wikiisgreat1239155
7 Wikiklrsc9156
18 Wikikoff9157
11 Wikilolo9158
1 Wikimol9159
1 Wikimunter9160
2 Wikinick~enwiki9161
1 Wikip rhyre9162
1 WikipedianMarlith9163
1 Wikipediatrist9164
11 Wikipelli9165
1 Wikirao9166
1 Wikirodde9167
2 Wikisian9168
23 WikitanvirBot9169
3 Wikiwikiwikipedi49170
7 Wikiwonky9171
1 Wikizoli9172

9148 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikibbexpi&action=edit&redlink=1
9149 https://en.wikipedia.org/wiki/User:Wikibot
9150 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikibuki&action=edit&redlink=1
9151 https://en.wikipedia.org/wiki/User:Wikid77
9152 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikidsp&action=edit&redlink=1
9153 https://en.wikipedia.org/wiki/User:Wikidushyant
9154 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikiedit738&action=edit&redlink=1
9155 https://en.wikipedia.org/wiki/User:Wikiisgreat123
9156 https://en.wikipedia.org/wiki/User:Wikiklrsc
9157 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikikoff&action=edit&redlink=1
9158 https://en.wikipedia.org/wiki/User:Wikilolo
9159 https://en.wikipedia.org/wiki/User:Wikimol
9160 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikimunter&action=edit&redlink=1
9161 https://en.wikipedia.org/wiki/User:Wikinick~enwiki
9162 https://en.wikipedia.org/wiki/User:Wikip_rhyre
9163 https://en.wikipedia.org/wiki/User:WikipedianMarlith
9164 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikipediatrist&action=edit&redlink=1
9165 https://en.wikipedia.org/wiki/User:Wikipelli
9166 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikirao&action=edit&redlink=1
9167 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikirodde&action=edit&redlink=1
9168 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikisian&action=edit&redlink=1
9169 https://en.wikipedia.org/wiki/User:WikitanvirBot
9170 https://en.wikipedia.org/w/index.php%3ftitle=User:Wikiwikiwikipedi4&action=edit&redlink=1
9171 https://en.wikipedia.org/wiki/User:Wikiwonky
9172 https://en.wikipedia.org/wiki/User:Wikizoli


2 Wilagobler9173
1 Wilberth9174
2 Wildcat dunny9175
1 Wile E. Heresiarch9176
1 Wilhelmina Will9177
3 Wilkerson.james729178
1 Will Beback9179
6 Will Faught9180
1 Will Gladstone9181
6 Will Orrick9182
1 Will.fiset9183
22 WillNess9184
1 WillUther9185
1 Willard7209186
1 Willdm349187
2 Willem9188
2 WillemienH9189
2 Willfindyou9190
1 William Avery9191
1 William Di Luigi9192
1 William Evans9193
1 William Ortiz9194
1 WilliamThweatt9195
6 Williamyf9196
2 Willielassiter9197

9173 https://en.wikipedia.org/w/index.php%3ftitle=User:Wilagobler&action=edit&redlink=1
9174 https://en.wikipedia.org/wiki/User:Wilberth
9175 https://en.wikipedia.org/wiki/User:Wildcat_dunny
9176 https://en.wikipedia.org/wiki/User:Wile_E._Heresiarch
9177 https://en.wikipedia.org/wiki/User:Wilhelmina_Will
9178 https://en.wikipedia.org/w/index.php%3ftitle=User:Wilkerson.james72&action=edit&redlink=1
9179 https://en.wikipedia.org/wiki/User:Will_Beback
9180 https://en.wikipedia.org/w/index.php%3ftitle=User:Will_Faught&action=edit&redlink=1
9181 https://en.wikipedia.org/wiki/User:Will_Gladstone
9182 https://en.wikipedia.org/wiki/User:Will_Orrick
9183 https://en.wikipedia.org/w/index.php%3ftitle=User:Will.fiset&action=edit&redlink=1
9184 https://en.wikipedia.org/wiki/User:WillNess
9185 https://en.wikipedia.org/wiki/User:WillUther
9186 https://en.wikipedia.org/w/index.php%3ftitle=User:Willard720&action=edit&redlink=1
9187 https://en.wikipedia.org/w/index.php%3ftitle=User:Willdm34&action=edit&redlink=1
9188 https://en.wikipedia.org/wiki/User:Willem
9189 https://en.wikipedia.org/wiki/User:WillemienH
9190 https://en.wikipedia.org/w/index.php%3ftitle=User:Willfindyou&action=edit&redlink=1
9191 https://en.wikipedia.org/wiki/User:William_Avery
9192 https://en.wikipedia.org/w/index.php%3ftitle=User:William_Di_Luigi&action=edit&redlink=1
9193 https://en.wikipedia.org/w/index.php%3ftitle=User:William_Evans&action=edit&redlink=1
9194 https://en.wikipedia.org/wiki/User:William_Ortiz
9195 https://en.wikipedia.org/wiki/User:WilliamThweatt
9196 https://en.wikipedia.org/w/index.php%3ftitle=User:Williamyf&action=edit&redlink=1
9197 https://en.wikipedia.org/w/index.php%3ftitle=User:Willielassiter&action=edit&redlink=1


2 Willking19799198
1 Willp41399199
2 Willtron9200
1 Wimt9201
5 WinBot9202
4 Winecellar9203
1 WinerFresh9204
1 Wingardiumleviosa29205
3 Wingedsubmariner9206
1 WingkeeLEE9207
2 Winston Chuen-Shih Yang9208
5 Winterheat9209
1 Winterst9210
4 Wintonian9211
1 Wintricular9212
1 Wireless Keyboard9213
1 Wisgary9214
1 Wismon9215
1 WissensDürster9216
5 WithWhich9217
1 Wittawat9218
2 Wizard1919219
2 Wizardman9220
3 Wizeguytristram9221
2 Wjaguar9222

9198 https://en.wikipedia.org/wiki/User:Willking1979
9199 https://en.wikipedia.org/w/index.php%3ftitle=User:Willp4139&action=edit&redlink=1
9200 https://en.wikipedia.org/wiki/User:Willtron
9201 https://en.wikipedia.org/wiki/User:Wimt
9202 https://en.wikipedia.org/wiki/User:WinBot
9203 https://en.wikipedia.org/w/index.php%3ftitle=User:Winecellar&action=edit&redlink=1
9204 https://en.wikipedia.org/w/index.php%3ftitle=User:WinerFresh&action=edit&redlink=1
9205 https://en.wikipedia.org/w/index.php%3ftitle=User:Wingardiumleviosa2&action=edit&redlink=1
9206 https://en.wikipedia.org/wiki/User:Wingedsubmariner
9207 https://en.wikipedia.org/wiki/User:WingkeeLEE
9208 https://en.wikipedia.org/w/index.php%3ftitle=User:Winston_Chuen-Shih_Yang&action=edit&redlink=1
9209 https://en.wikipedia.org/wiki/User:Winterheat
9210 https://en.wikipedia.org/wiki/User:Winterst
9211 https://en.wikipedia.org/wiki/User:Wintonian
9212 https://en.wikipedia.org/w/index.php%3ftitle=User:Wintricular&action=edit&redlink=1
9213 https://en.wikipedia.org/wiki/User:Wireless_Keyboard
9214 https://en.wikipedia.org/w/index.php%3ftitle=User:Wisgary&action=edit&redlink=1
9215 https://en.wikipedia.org/wiki/User:Wismon
9216 https://en.wikipedia.org/wiki/User:WissensD%25C3%25BCrster
9217 https://en.wikipedia.org/wiki/User:WithWhich
9218 https://en.wikipedia.org/wiki/User:Wittawat
9219 https://en.wikipedia.org/wiki/User:Wizard191
9220 https://en.wikipedia.org/wiki/User:Wizardman
9221 https://en.wikipedia.org/w/index.php%3ftitle=User:Wizeguytristram&action=edit&redlink=1
9222 https://en.wikipedia.org/w/index.php%3ftitle=User:Wjaguar&action=edit&redlink=1


3 Wjrl599223
2 Wkudrle9224
1 Wladston9225
1 Wlievens9226
4 Wmahan9227
3 Wmayner9228
3 Wmbolle9229
2 Wocky9230
2 WojciechSwiderski~enwiki9231
160 Wolfkeeper9232
1 Wolkykim9233
2 Wombleme9234
4 Womiller999235
13 WonderPhil9236
1 Woodlot9237
3 Woohookitty9238
2 WookieInHeat9239
2 Wootery9240
6 Worch9241
2 Wordsputtogether9242
1 Workaphobia9243
1 WormNut9244
1 Worthr9245
2 Woshiqiqiye9246
1 Wouldyoujust9247

9223 https://en.wikipedia.org/wiki/User:Wjrl59
9224 https://en.wikipedia.org/w/index.php%3ftitle=User:Wkudrle&action=edit&redlink=1
9225 https://en.wikipedia.org/w/index.php%3ftitle=User:Wladston&action=edit&redlink=1
9226 https://en.wikipedia.org/wiki/User:Wlievens
9227 https://en.wikipedia.org/wiki/User:Wmahan
9228 https://en.wikipedia.org/w/index.php%3ftitle=User:Wmayner&action=edit&redlink=1
9229 https://en.wikipedia.org/w/index.php%3ftitle=User:Wmbolle&action=edit&redlink=1
9230 https://en.wikipedia.org/wiki/User:Wocky
9231 https://en.wikipedia.org/wiki/User:WojciechSwiderski~enwiki
9232 https://en.wikipedia.org/wiki/User:Wolfkeeper
9233 https://en.wikipedia.org/w/index.php%3ftitle=User:Wolkykim&action=edit&redlink=1
9234 https://en.wikipedia.org/wiki/User:Wombleme
9235 https://en.wikipedia.org/w/index.php%3ftitle=User:Womiller99&action=edit&redlink=1
9236 https://en.wikipedia.org/w/index.php%3ftitle=User:WonderPhil&action=edit&redlink=1
9237 https://en.wikipedia.org/wiki/User:Woodlot
9238 https://en.wikipedia.org/wiki/User:Woohookitty
9239 https://en.wikipedia.org/wiki/User:WookieInHeat
9240 https://en.wikipedia.org/wiki/User:Wootery
9241 https://en.wikipedia.org/wiki/User:Worch
9242 https://en.wikipedia.org/w/index.php%3ftitle=User:Wordsputtogether&action=edit&redlink=1
9243 https://en.wikipedia.org/w/index.php%3ftitle=User:Workaphobia&action=edit&redlink=1
9244 https://en.wikipedia.org/w/index.php%3ftitle=User:WormNut&action=edit&redlink=1
9245 https://en.wikipedia.org/w/index.php%3ftitle=User:Worthr&action=edit&redlink=1
9246 https://en.wikipedia.org/w/index.php%3ftitle=User:Woshiqiqiye&action=edit&redlink=1
9247 https://en.wikipedia.org/w/index.php%3ftitle=User:Wouldyoujust&action=edit&redlink=1


1 Wow9248
4 Wphamilton9249
3 Wqwt9250
1 Wraithful9251
1 Wrapash9252
1 Writer on wiki9253
1 Writer1309254
1 WriterHound9255
1 Writtenonsand9256
3 Wrp1039257
1 Wrs18649258
1 Wrusan9259
3 Wshun9260
1 Wsloand9261
1 Wstomv9262
1 Wsu-dm-jb9263
3 Wsu-f9264
1 Wtarreau9265
1 Wtmf9266
15 Wtmitchell9267
6 Wtshymanski9268
1 Wtt9269
2 Wtuvell9270
1 WuBot9271
1 WuTheFWasThat9272

9248 https://en.wikipedia.org/w/index.php%3ftitle=User:Wow&action=edit&redlink=1
9249 https://en.wikipedia.org/wiki/User:Wphamilton
9250 https://en.wikipedia.org/wiki/User:Wqwt
9251 https://en.wikipedia.org/w/index.php%3ftitle=User:Wraithful&action=edit&redlink=1
9252 https://en.wikipedia.org/w/index.php%3ftitle=User:Wrapash&action=edit&redlink=1
9253 https://en.wikipedia.org/w/index.php%3ftitle=User:Writer_on_wiki&action=edit&redlink=1
9254 https://en.wikipedia.org/wiki/User:Writer130
9255 https://en.wikipedia.org/wiki/User:WriterHound
9256 https://en.wikipedia.org/wiki/User:Writtenonsand
9257 https://en.wikipedia.org/wiki/User:Wrp103
9258 https://en.wikipedia.org/wiki/User:Wrs1864
9259 https://en.wikipedia.org/w/index.php%3ftitle=User:Wrusan&action=edit&redlink=1
9260 https://en.wikipedia.org/wiki/User:Wshun
9261 https://en.wikipedia.org/w/index.php%3ftitle=User:Wsloand&action=edit&redlink=1
9262 https://en.wikipedia.org/w/index.php%3ftitle=User:Wstomv&action=edit&redlink=1
9263 https://en.wikipedia.org/w/index.php%3ftitle=User:Wsu-dm-jb&action=edit&redlink=1
9264 https://en.wikipedia.org/w/index.php%3ftitle=User:Wsu-f&action=edit&redlink=1
9265 https://en.wikipedia.org/w/index.php%3ftitle=User:Wtarreau&action=edit&redlink=1
9266 https://en.wikipedia.org/w/index.php%3ftitle=User:Wtmf&action=edit&redlink=1
9267 https://en.wikipedia.org/wiki/User:Wtmitchell
9268 https://en.wikipedia.org/wiki/User:Wtshymanski
9269 https://en.wikipedia.org/wiki/User:Wtt
9270 https://en.wikipedia.org/w/index.php%3ftitle=User:Wtuvell&action=edit&redlink=1
9271 https://en.wikipedia.org/wiki/User:WuBot
9272 https://en.wikipedia.org/w/index.php%3ftitle=User:WuTheFWasThat&action=edit&redlink=1


1 Wuhwuzdat9273
1 Wulfskin9274
6 Wullschj9275
1 Wumpus30009276
1 WurmWoode9277
1 Wurtech9278
1 Wutzofant9279
1 Wvbailey9280
48 Ww9281
1 Ww2censor9282
1 Wwagner9283
1 Wweck9284
7 Wwmbes9285
1 Wwwwolf9286
1 Wylie399287
1 WójcikBartosz9288
4 X-Fi69289
4 X10249290
5 X1539291
53 X7q9292
4 XJaM9293
66 XLinkBot9294
25 XOR'easter9295
3 XP19296
4 XVirus9297

9273 https://en.wikipedia.org/wiki/User:Wuhwuzdat
9274 https://en.wikipedia.org/w/index.php%3ftitle=User:Wulfskin&action=edit&redlink=1
9275 https://en.wikipedia.org/w/index.php%3ftitle=User:Wullschj&action=edit&redlink=1
9276 https://en.wikipedia.org/w/index.php%3ftitle=User:Wumpus3000&action=edit&redlink=1
9277 https://en.wikipedia.org/wiki/User:WurmWoode
9278 https://en.wikipedia.org/wiki/User:Wurtech
9279 https://en.wikipedia.org/wiki/User:Wutzofant
9280 https://en.wikipedia.org/wiki/User:Wvbailey
9281 https://en.wikipedia.org/wiki/User:Ww
9282 https://en.wikipedia.org/wiki/User:Ww2censor
9283 https://en.wikipedia.org/wiki/User:Wwagner
9284 https://en.wikipedia.org/w/index.php%3ftitle=User:Wweck&action=edit&redlink=1
9285 https://en.wikipedia.org/wiki/User:Wwmbes
9286 https://en.wikipedia.org/wiki/User:Wwwwolf
9287 https://en.wikipedia.org/wiki/User:Wylie39
9288 https://en.wikipedia.org/w/index.php%3ftitle=User:W%25C3%25B3jcikBartosz&action=edit&redlink=1
9289 https://en.wikipedia.org/wiki/User:X-Fi6
9290 https://en.wikipedia.org/w/index.php%3ftitle=User:X1024&action=edit&redlink=1
9291 https://en.wikipedia.org/wiki/User:X153
9292 https://en.wikipedia.org/wiki/User:X7q
9293 https://en.wikipedia.org/wiki/User:XJaM
9294 https://en.wikipedia.org/wiki/User:XLinkBot
9295 https://en.wikipedia.org/wiki/User:XOR%2527easter
9296 https://en.wikipedia.org/wiki/User:XP1
9297 https://en.wikipedia.org/wiki/User:XVirus


2 XZeroBot9298
1 Xack~plwiki9299
1 Xakari9300
1 Xanchester9301
3 Xanzzibar9302
1 Xaosflux9303
3 Xavier Combelle9304
13 Xbao9305
2 Xceptor9306
1 Xcez-be9307
2 Xdenizen9308
1 Xe7al9309
1 Xelgen9310
1 Xembel9311
5 Xeno89312
1 Xenonoxid9313
1 Xenophon (bot)9314
1 Xenxax9315
2 Xerox 5B9316
1 XethroG9317
1 Xevior9318
2 Xezbeth9319
1 Xguru9320
2 Xhackeranywhere9321
1 Xiangyujames9322

9298 https://en.wikipedia.org/wiki/User:XZeroBot
9299 https://en.wikipedia.org/w/index.php%3ftitle=User:Xack~plwiki&action=edit&redlink=1
9300 https://en.wikipedia.org/w/index.php%3ftitle=User:Xakari&action=edit&redlink=1
9301 https://en.wikipedia.org/wiki/User:Xanchester
9302 https://en.wikipedia.org/wiki/User:Xanzzibar
9303 https://en.wikipedia.org/wiki/User:Xaosflux
9304 https://en.wikipedia.org/wiki/User:Xavier_Combelle
9305 https://en.wikipedia.org/w/index.php%3ftitle=User:Xbao&action=edit&redlink=1
9306 https://en.wikipedia.org/w/index.php%3ftitle=User:Xceptor&action=edit&redlink=1
9307 https://en.wikipedia.org/w/index.php%3ftitle=User:Xcez-be&action=edit&redlink=1
9308 https://en.wikipedia.org/wiki/User:Xdenizen
9309 https://en.wikipedia.org/wiki/User:Xe7al
9310 https://en.wikipedia.org/wiki/User:Xelgen
9311 https://en.wikipedia.org/wiki/User:Xembel
9312 https://en.wikipedia.org/w/index.php%3ftitle=User:Xeno8&action=edit&redlink=1
9313 https://en.wikipedia.org/wiki/User:Xenonoxid
9314 https://en.wikipedia.org/wiki/User:Xenophon_(bot)
9315 https://en.wikipedia.org/wiki/User:Xenxax
9316 https://en.wikipedia.org/w/index.php%3ftitle=User:Xerox_5B&action=edit&redlink=1
9317 https://en.wikipedia.org/wiki/User:XethroG
9318 https://en.wikipedia.org/wiki/User:Xevior
9319 https://en.wikipedia.org/wiki/User:Xezbeth
9320 https://en.wikipedia.org/w/index.php%3ftitle=User:Xguru&action=edit&redlink=1
9321 https://en.wikipedia.org/wiki/User:Xhackeranywhere
9322 https://en.wikipedia.org/w/index.php%3ftitle=User:Xiangyujames&action=edit&redlink=1


2 Xiaodai~enwiki9323
1 Xiaojeng~enwiki9324
1 Xiaokaoy9325
2 Xiaoyang9326
5 Xijiahe9327
1 Xinbenlv9328
3 Xiong9329
1 Xitaowen9330
1 Xjcl9331
1 Xklpop9332
1 Xnk9333
3 Xnn9334
2 Xodarap009335
1 Xonev9336
1 Xonqnopp9337
1 Xpavlic49338
1 Xperimental~enwiki9339
1 Xprycker9340
36 Xqbot9341
1 Xrchz9342
1 XreDuex9343
1 Xstephen95x9344
2 Xueshengyao9345
2 Xuxing7169346
2 XxTDSxX9347

9323 https://en.wikipedia.org/wiki/User:Xiaodai~enwiki
9324 https://en.wikipedia.org/wiki/User:Xiaojeng~enwiki
9325 https://en.wikipedia.org/w/index.php%3ftitle=User:Xiaokaoy&action=edit&redlink=1
9326 https://en.wikipedia.org/w/index.php%3ftitle=User:Xiaoyang&action=edit&redlink=1
9327 https://en.wikipedia.org/w/index.php%3ftitle=User:Xijiahe&action=edit&redlink=1
9328 https://en.wikipedia.org/wiki/User:Xinbenlv
9329 https://en.wikipedia.org/wiki/User:Xiong
9330 https://en.wikipedia.org/w/index.php%3ftitle=User:Xitaowen&action=edit&redlink=1
9331 https://en.wikipedia.org/wiki/User:Xjcl
9332 https://en.wikipedia.org/w/index.php%3ftitle=User:Xklpop&action=edit&redlink=1
9333 https://en.wikipedia.org/w/index.php%3ftitle=User:Xnk&action=edit&redlink=1
9334 https://en.wikipedia.org/w/index.php%3ftitle=User:Xnn&action=edit&redlink=1
9335 https://en.wikipedia.org/wiki/User:Xodarap00
9336 https://en.wikipedia.org/w/index.php%3ftitle=User:Xonev&action=edit&redlink=1
9337 https://en.wikipedia.org/wiki/User:Xonqnopp
9338 https://en.wikipedia.org/w/index.php%3ftitle=User:Xpavlic4&action=edit&redlink=1
9339 https://en.wikipedia.org/w/index.php%3ftitle=User:Xperimental~enwiki&action=edit&redlink=1
9340 https://en.wikipedia.org/w/index.php%3ftitle=User:Xprycker&action=edit&redlink=1
9341 https://en.wikipedia.org/wiki/User:Xqbot
9342 https://en.wikipedia.org/wiki/User:Xrchz
9343 https://en.wikipedia.org/w/index.php%3ftitle=User:XreDuex&action=edit&redlink=1
9344 https://en.wikipedia.org/w/index.php%3ftitle=User:Xstephen95x&action=edit&redlink=1
9345 https://en.wikipedia.org/wiki/User:Xueshengyao
9346 https://en.wikipedia.org/w/index.php%3ftitle=User:Xuxing716&action=edit&redlink=1
9347 https://en.wikipedia.org/w/index.php%3ftitle=User:XxTDSxX&action=edit&redlink=1

2043
Contributors

1 XxjwuxX9348
2 Xypron9349
4 Xyzzy n9350
3 Y0n1cafebabe9351
1 YAEL GROSSNASS9352
8 YFdyh-bot9353
2 YQUVWynjszHUwDzv9354
1 YUL89YYZ9355
1 Yacs9356
1 Yaderbh9357
1 Yadra9358
2 Yadra~enwiki9359
1 Yaframa9360
4 YahoKa9361
1 YahyA9362
5 Yahya Abdal-Aziz9363
6 Yamaguchi先生9364
1 Yamaha59365
1 Yamla9366
1 Yan Kuligin9367
1 Yangtseyangtse9368
1 Yaniv.pariente9369
1 Yanpas9370
1 Yansa9371

9348 https://en.wikipedia.org/w/index.php%3ftitle=User:XxjwuxX&action=edit&redlink=1
9349 https://en.wikipedia.org/wiki/User:Xypron
9350 https://en.wikipedia.org/wiki/User:Xyzzy_n
9351 https://en.wikipedia.org/w/index.php%3ftitle=User:Y0n1cafebabe&action=edit&redlink=1
9352 https://en.wikipedia.org/w/index.php%3ftitle=User:YAEL_GROSSNASS&action=edit&redlink=1
9353 https://en.wikipedia.org/wiki/User:YFdyh-bot
9354 https://en.wikipedia.org/w/index.php%3ftitle=User:YQUVWynjszHUwDzv&action=edit&redlink=1
9355 https://en.wikipedia.org/wiki/User:YUL89YYZ
9356 https://en.wikipedia.org/w/index.php%3ftitle=User:Yacs&action=edit&redlink=1
9357 https://en.wikipedia.org/w/index.php%3ftitle=User:Yaderbh&action=edit&redlink=1
9358 https://en.wikipedia.org/w/index.php%3ftitle=User:Yadra&action=edit&redlink=1
9359 https://en.wikipedia.org/w/index.php%3ftitle=User:Yadra~enwiki&action=edit&redlink=1
9360 https://en.wikipedia.org/w/index.php%3ftitle=User:Yaframa&action=edit&redlink=1
9361 https://en.wikipedia.org/wiki/User:YahoKa
9362 https://en.wikipedia.org/wiki/User:YahyA
9363 https://en.wikipedia.org/wiki/User:Yahya_Abdal-Aziz
9364 https://en.wikipedia.org/wiki/User:Yamaguchi%25E5%2585%2588%25E7%2594%259F
9365 https://en.wikipedia.org/wiki/User:Yamaha5
9366 https://en.wikipedia.org/wiki/User:Yamla
9367 https://en.wikipedia.org/w/index.php%3ftitle=User:Yan_Kuligin&action=edit&redlink=1
9368 https://en.wikipedia.org/w/index.php%3ftitle=User:Yangtseyangtse&action=edit&redlink=1
9369 https://en.wikipedia.org/w/index.php%3ftitle=User:Yaniv.pariente&action=edit&redlink=1
9370 https://en.wikipedia.org/wiki/User:Yanpas
9371 https://en.wikipedia.org/wiki/User:Yansa

3 Yar Kramer9372
1 Yarin Kaul9373
1 Yaron1m9374
2 Yaroslav Nikitenko9375
1 Yashshah28209376
48 Yashykt9377
1 Yavoh9378
1 Yay2959379
1 Yb29380
1 Ybungalobill9381
9 Ycl69382
1 Ydw9383
1 Yeaske9384
1 Yecril9385
1 Yeda1239386
2 Yekaixiong9387
3 Yellowdesk9388
2 Yelod9389
1 Yetisyny9390
4 YiFeiBot9391
7 Yifanhu9392
1 Yihkrys9393
2 Yill5779394
7 Yintan9395
1 Yiqic19939396

9372 https://en.wikipedia.org/wiki/User:Yar_Kramer
9373 https://en.wikipedia.org/wiki/User:Yarin_Kaul
9374 https://en.wikipedia.org/wiki/User:Yaron1m
9375 https://en.wikipedia.org/wiki/User:Yaroslav_Nikitenko
9376 https://en.wikipedia.org/w/index.php%3ftitle=User:Yashshah2820&action=edit&redlink=1
9377 https://en.wikipedia.org/wiki/User:Yashykt
9378 https://en.wikipedia.org/wiki/User:Yavoh
9379 https://en.wikipedia.org/wiki/User:Yay295
9380 https://en.wikipedia.org/wiki/User:Yb2
9381 https://en.wikipedia.org/w/index.php%3ftitle=User:Ybungalobill&action=edit&redlink=1
9382 https://en.wikipedia.org/w/index.php%3ftitle=User:Ycl6&action=edit&redlink=1
9383 https://en.wikipedia.org/w/index.php%3ftitle=User:Ydw&action=edit&redlink=1
9384 https://en.wikipedia.org/w/index.php%3ftitle=User:Yeaske&action=edit&redlink=1
9385 https://en.wikipedia.org/wiki/User:Yecril
9386 https://en.wikipedia.org/w/index.php%3ftitle=User:Yeda123&action=edit&redlink=1
9387 https://en.wikipedia.org/w/index.php%3ftitle=User:Yekaixiong&action=edit&redlink=1
9388 https://en.wikipedia.org/wiki/User:Yellowdesk
9389 https://en.wikipedia.org/wiki/User:Yelod
9390 https://en.wikipedia.org/wiki/User:Yetisyny
9391 https://en.wikipedia.org/wiki/User:YiFeiBot
9392 https://en.wikipedia.org/w/index.php%3ftitle=User:Yifanhu&action=edit&redlink=1
9393 https://en.wikipedia.org/w/index.php%3ftitle=User:Yihkrys&action=edit&redlink=1
9394 https://en.wikipedia.org/w/index.php%3ftitle=User:Yill577&action=edit&redlink=1
9395 https://en.wikipedia.org/wiki/User:Yintan
9396 https://en.wikipedia.org/w/index.php%3ftitle=User:Yiqic1993&action=edit&redlink=1

1 YiruJiao9397
1 Yixin.cao9398
1 Ykhandor9399
1 Ykhwong9400
1 Yknott9401
1 Yksyksyks9402
14 Ylai9403
36 Ylloh9404
5 Yloreander9405
1 Ymgve9406
1 Ynaamad9407
1 Ynhockey9408
1 Yoavt9409
196 Yobot9410
14 Yodamgod9411
2 YonaBot9412
1 Yonestar9413
1 Yongcat9414
4 Yonidebot9415
1 Yoric~enwiki9416
1 Yorik sar9417
1 Yotex99418
2 Youandme9419
1 Young Pioneer9420
1 Youngster689421

9397 https://en.wikipedia.org/w/index.php%3ftitle=User:YiruJiao&action=edit&redlink=1
9398 https://en.wikipedia.org/w/index.php%3ftitle=User:Yixin.cao&action=edit&redlink=1
9399 https://en.wikipedia.org/w/index.php%3ftitle=User:Ykhandor&action=edit&redlink=1
9400 https://en.wikipedia.org/wiki/User:Ykhwong
9401 https://en.wikipedia.org/wiki/User:Yknott
9402 https://en.wikipedia.org/w/index.php%3ftitle=User:Yksyksyks&action=edit&redlink=1
9403 https://en.wikipedia.org/w/index.php%3ftitle=User:Ylai&action=edit&redlink=1
9404 https://en.wikipedia.org/wiki/User:Ylloh
9405 https://en.wikipedia.org/w/index.php%3ftitle=User:Yloreander&action=edit&redlink=1
9406 https://en.wikipedia.org/w/index.php%3ftitle=User:Ymgve&action=edit&redlink=1
9407 https://en.wikipedia.org/wiki/User:Ynaamad
9408 https://en.wikipedia.org/wiki/User:Ynhockey
9409 https://en.wikipedia.org/wiki/User:Yoavt
9410 https://en.wikipedia.org/wiki/User:Yobot
9411 https://en.wikipedia.org/wiki/User:Yodamgod
9412 https://en.wikipedia.org/wiki/User:YonaBot
9413 https://en.wikipedia.org/wiki/User:Yonestar
9414 https://en.wikipedia.org/w/index.php%3ftitle=User:Yongcat&action=edit&redlink=1
9415 https://en.wikipedia.org/wiki/User:Yonidebot
9416 https://en.wikipedia.org/wiki/User:Yoric~enwiki
9417 https://en.wikipedia.org/wiki/User:Yorik_sar
9418 https://en.wikipedia.org/wiki/User:Yotex9
9419 https://en.wikipedia.org/wiki/User:Youandme
9420 https://en.wikipedia.org/wiki/User:Young_Pioneer
9421 https://en.wikipedia.org/wiki/User:Youngster68

1 Your Lord and Master9422
3 Yoyo9423
1 Yparjis9424
6 Ysaad us9425
1 Ysangkok9426
1 Ysf ysf9427
1 Ytx21cn9428
1 Yufeizhao9429
4 Yugsdrawkcabeht9430
7 Yuide9431
4 Yujianzhao9432
1 YukinonKanade9433
1 Yukuairoy9434
1 Yulin119435
1 Yuri V.9436
102 YurikBot9437
2 Yurivict9438
1 Yut239439
2 Yutsi9440
2 Yuval Baror9441
2 Yuval madar9442
3 Yves berset9443
3 Ywang4169444
3 Ywaz9445
5 Yzzhang cs9446

9422 https://en.wikipedia.org/wiki/User:Your_Lord_and_Master
9423 https://en.wikipedia.org/w/index.php%3ftitle=User:Yoyo&action=edit&redlink=1
9424 https://en.wikipedia.org/w/index.php%3ftitle=User:Yparjis&action=edit&redlink=1
9425 https://en.wikipedia.org/w/index.php%3ftitle=User:Ysaad_us&action=edit&redlink=1
9426 https://en.wikipedia.org/wiki/User:Ysangkok
9427 https://en.wikipedia.org/w/index.php%3ftitle=User:Ysf_ysf&action=edit&redlink=1
9428 https://en.wikipedia.org/wiki/User:Ytx21cn
9429 https://en.wikipedia.org/w/index.php%3ftitle=User:Yufeizhao&action=edit&redlink=1
9430 https://en.wikipedia.org/wiki/User:Yugsdrawkcabeht
9431 https://en.wikipedia.org/wiki/User:Yuide
9432 https://en.wikipedia.org/w/index.php%3ftitle=User:Yujianzhao&action=edit&redlink=1
9433 https://en.wikipedia.org/w/index.php%3ftitle=User:YukinonKanade&action=edit&redlink=1
9434 https://en.wikipedia.org/w/index.php%3ftitle=User:Yukuairoy&action=edit&redlink=1
9435 https://en.wikipedia.org/w/index.php%3ftitle=User:Yulin11&action=edit&redlink=1
9436 https://en.wikipedia.org/wiki/User:Yuri_V.
9437 https://en.wikipedia.org/wiki/User:YurikBot
9438 https://en.wikipedia.org/wiki/User:Yurivict
9439 https://en.wikipedia.org/w/index.php%3ftitle=User:Yut23&action=edit&redlink=1
9440 https://en.wikipedia.org/wiki/User:Yutsi
9441 https://en.wikipedia.org/w/index.php%3ftitle=User:Yuval_Baror&action=edit&redlink=1
9442 https://en.wikipedia.org/wiki/User:Yuval_madar
9443 https://en.wikipedia.org/w/index.php%3ftitle=User:Yves_berset&action=edit&redlink=1
9444 https://en.wikipedia.org/w/index.php%3ftitle=User:Ywang416&action=edit&redlink=1
9445 https://en.wikipedia.org/w/index.php%3ftitle=User:Ywaz&action=edit&redlink=1
9446 https://en.wikipedia.org/wiki/User:Yzzhang_cs

1 Z10x9447
2 Z5eacom9448
1 ZAB9449
2 ZX819450
1 ZX959451
1 Zacharysyoung9452
1 Zachwaltman9453
2 Zachwf9454
2 Zachwlewis9455
3 Zackchase9456
1 Zacmitton9457
1 Zad689458
1 Zaffy8069459
2 Zahlentheorie9460
2 Zairwolf9461
1 Zakfong9462
1 Zakhalesh9463
5 Zaktan23199464
2 Zamfi9465
1 Zanetu9466
1 Zaphod Beeblebrox9467
1 Zaphraud9468
1 Zaradaqaw9469
1 Zarcadia9470
1 Zarrandreas9471

9447 https://en.wikipedia.org/wiki/User:Z10x
9448 https://en.wikipedia.org/w/index.php%3ftitle=User:Z5eacom&action=edit&redlink=1
9449 https://en.wikipedia.org/wiki/User:ZAB
9450 https://en.wikipedia.org/wiki/User:ZX81
9451 https://en.wikipedia.org/wiki/User:ZX95
9452 https://en.wikipedia.org/wiki/User:Zacharysyoung
9453 https://en.wikipedia.org/w/index.php%3ftitle=User:Zachwaltman&action=edit&redlink=1
9454 https://en.wikipedia.org/w/index.php%3ftitle=User:Zachwf&action=edit&redlink=1
9455 https://en.wikipedia.org/w/index.php%3ftitle=User:Zachwlewis&action=edit&redlink=1
9456 https://en.wikipedia.org/w/index.php%3ftitle=User:Zackchase&action=edit&redlink=1
9457 https://en.wikipedia.org/w/index.php%3ftitle=User:Zacmitton&action=edit&redlink=1
9458 https://en.wikipedia.org/wiki/User:Zad68
9459 https://en.wikipedia.org/w/index.php%3ftitle=User:Zaffy806&action=edit&redlink=1
9460 https://en.wikipedia.org/wiki/User:Zahlentheorie
9461 https://en.wikipedia.org/w/index.php%3ftitle=User:Zairwolf&action=edit&redlink=1
9462 https://en.wikipedia.org/w/index.php%3ftitle=User:Zakfong&action=edit&redlink=1
9463 https://en.wikipedia.org/wiki/User:Zakhalesh
9464 https://en.wikipedia.org/w/index.php%3ftitle=User:Zaktan2319&action=edit&redlink=1
9465 https://en.wikipedia.org/wiki/User:Zamfi
9466 https://en.wikipedia.org/wiki/User:Zanetu
9467 https://en.wikipedia.org/wiki/User:Zaphod_Beeblebrox
9468 https://en.wikipedia.org/wiki/User:Zaphraud
9469 https://en.wikipedia.org/w/index.php%3ftitle=User:Zaradaqaw&action=edit&redlink=1
9470 https://en.wikipedia.org/wiki/User:Zarcadia
9471 https://en.wikipedia.org/w/index.php%3ftitle=User:Zarrandreas&action=edit&redlink=1

5 Zarvok9472
36 Zaslav9473
2 Zaspagety9474
4 ZatoKentai9475
3 Zaunlen9476
6 Zawadx9477
3 Zawersh9478
2 Zbjornson9479
1 Zcia.9480
3 Zdeneks9481
1 Zdravozdravo69482
1 Zedeyepee9483
1 Zedla9484
2 Zeitgeist2.7189485
1 Zeke pbuh9486
3 Zemyla9487
4 Zeno Gantner9488
1 Zeno of Elea9489
2 Zephyrus Tavvier9490
1 Zer0dept9491
2 Zero sharp9492
32 Zero00009493
37 ZeroOne9494
1 Zerodamage9495
1 ZeroxAX9496

9472 https://en.wikipedia.org/wiki/User:Zarvok
9473 https://en.wikipedia.org/wiki/User:Zaslav
9474 https://en.wikipedia.org/w/index.php%3ftitle=User:Zaspagety&action=edit&redlink=1
9475 https://en.wikipedia.org/w/index.php%3ftitle=User:ZatoKentai&action=edit&redlink=1
9476 https://en.wikipedia.org/wiki/User:Zaunlen
9477 https://en.wikipedia.org/w/index.php%3ftitle=User:Zawadx&action=edit&redlink=1
9478 https://en.wikipedia.org/wiki/User:Zawersh
9479 https://en.wikipedia.org/w/index.php%3ftitle=User:Zbjornson&action=edit&redlink=1
9480 https://en.wikipedia.org/w/index.php%3ftitle=User:Zcia.&action=edit&redlink=1
9481 https://en.wikipedia.org/w/index.php%3ftitle=User:Zdeneks&action=edit&redlink=1
9482 https://en.wikipedia.org/w/index.php%3ftitle=User:Zdravozdravo6&action=edit&redlink=1
9483 https://en.wikipedia.org/w/index.php%3ftitle=User:Zedeyepee&action=edit&redlink=1
9484 https://en.wikipedia.org/wiki/User:Zedla
9485 https://en.wikipedia.org/w/index.php%3ftitle=User:Zeitgeist2.718&action=edit&redlink=1
9486 https://en.wikipedia.org/wiki/User:Zeke_pbuh
9487 https://en.wikipedia.org/wiki/User:Zemyla
9488 https://en.wikipedia.org/wiki/User:Zeno_Gantner
9489 https://en.wikipedia.org/wiki/User:Zeno_of_Elea
9490 https://en.wikipedia.org/wiki/User:Zephyrus_Tavvier
9491 https://en.wikipedia.org/wiki/User:Zer0dept
9492 https://en.wikipedia.org/wiki/User:Zero_sharp
9493 https://en.wikipedia.org/wiki/User:Zero0000
9494 https://en.wikipedia.org/wiki/User:ZeroOne
9495 https://en.wikipedia.org/wiki/User:Zerodamage
9496 https://en.wikipedia.org/w/index.php%3ftitle=User:ZeroxAX&action=edit&redlink=1

1 Zerpi9497
1 Zetifree9498
2 Zettaphone9499
1 Zeycus9500
4 Zhaladshar9501
1 Zhankus9502
3 Zhaocb9503
1 Zhefurui9504
1 Zholdas9505
1 Zieglerk9506
1 ZiggyMo9507
1 Zigswatson9508
2 Ziiv9509
1 Zilvador9510
4 Zingvin9511
2 Zinnober99512
1 Zipcodeman9513
1 Zipcube9514
2 ZipoBibrok5x10^89515
1 Zippanova9516
1 Zippedmartin9517
1 Zitronenquetscher9518
1 Zlangley9519
2 Zntrip9520
1 Znupi9521

9497 https://en.wikipedia.org/w/index.php%3ftitle=User:Zerpi&action=edit&redlink=1
9498 https://en.wikipedia.org/wiki/User:Zetifree
9499 https://en.wikipedia.org/wiki/User:Zettaphone
9500 https://en.wikipedia.org/wiki/User:Zeycus
9501 https://en.wikipedia.org/wiki/User:Zhaladshar
9502 https://en.wikipedia.org/w/index.php%3ftitle=User:Zhankus&action=edit&redlink=1
9503 https://en.wikipedia.org/w/index.php%3ftitle=User:Zhaocb&action=edit&redlink=1
9504 https://en.wikipedia.org/w/index.php%3ftitle=User:Zhefurui&action=edit&redlink=1
9505 https://en.wikipedia.org/w/index.php%3ftitle=User:Zholdas&action=edit&redlink=1
9506 https://en.wikipedia.org/wiki/User:Zieglerk
9507 https://en.wikipedia.org/w/index.php%3ftitle=User:ZiggyMo&action=edit&redlink=1
9508 https://en.wikipedia.org/w/index.php%3ftitle=User:Zigswatson&action=edit&redlink=1
9509 https://en.wikipedia.org/w/index.php%3ftitle=User:Ziiv&action=edit&redlink=1
9510 https://en.wikipedia.org/w/index.php%3ftitle=User:Zilvador&action=edit&redlink=1
9511 https://en.wikipedia.org/wiki/User:Zingvin
9512 https://en.wikipedia.org/wiki/User:Zinnober9
9513 https://en.wikipedia.org/wiki/User:Zipcodeman
9514 https://en.wikipedia.org/w/index.php%3ftitle=User:Zipcube&action=edit&redlink=1
9515 https://en.wikipedia.org/wiki/User:ZipoBibrok5x10%255E8
9516 https://en.wikipedia.org/wiki/User:Zippanova
9517 https://en.wikipedia.org/wiki/User:Zippedmartin
9518 https://en.wikipedia.org/w/index.php%3ftitle=User:Zitronenquetscher&action=edit&redlink=1
9519 https://en.wikipedia.org/w/index.php%3ftitle=User:Zlangley&action=edit&redlink=1
9520 https://en.wikipedia.org/wiki/User:Zntrip
9521 https://en.wikipedia.org/w/index.php%3ftitle=User:Znupi&action=edit&redlink=1

1 Zocke1r9522
1 Zodon9523
2 Zoicon59524
3 Zophar19525
2 Zorawar879526
5 Zorrobot9527
1 Zowayix9528
1 Zowch9529
1 Zppix9530
6 Zr2d29531
5 Zsoftua9532
8 Ztothefifth9533
1 Zudu299534
1 Zultan9535
1 Zuludogm9536
15 Zundark9537
3 Zvar9538
15 Zvika9539
1 ZweiOhren9540
1 Zweije9541
1 Zwliew9542
1 Zwobot9543
1 Zxcv20009544
1 ZxxZxxZ9545
3 Zyqqh9546

9522 https://en.wikipedia.org/wiki/User:Zocke1r
9523 https://en.wikipedia.org/wiki/User:Zodon
9524 https://en.wikipedia.org/wiki/User:Zoicon5
9525 https://en.wikipedia.org/wiki/User:Zophar1
9526 https://en.wikipedia.org/wiki/User:Zorawar87
9527 https://en.wikipedia.org/wiki/User:Zorrobot
9528 https://en.wikipedia.org/wiki/User:Zowayix
9529 https://en.wikipedia.org/w/index.php%3ftitle=User:Zowch&action=edit&redlink=1
9530 https://en.wikipedia.org/wiki/User:Zppix
9531 https://en.wikipedia.org/wiki/User:Zr2d2
9532 https://en.wikipedia.org/wiki/User:Zsoftua
9533 https://en.wikipedia.org/wiki/User:Ztothefifth
9534 https://en.wikipedia.org/w/index.php%3ftitle=User:Zudu29&action=edit&redlink=1
9535 https://en.wikipedia.org/wiki/User:Zultan
9536 https://en.wikipedia.org/w/index.php%3ftitle=User:Zuludogm&action=edit&redlink=1
9537 https://en.wikipedia.org/wiki/User:Zundark
9538 https://en.wikipedia.org/wiki/User:Zvar
9539 https://en.wikipedia.org/wiki/User:Zvika
9540 https://en.wikipedia.org/wiki/User:ZweiOhren
9541 https://en.wikipedia.org/w/index.php%3ftitle=User:Zweije&action=edit&redlink=1
9542 https://en.wikipedia.org/w/index.php%3ftitle=User:Zwliew&action=edit&redlink=1
9543 https://en.wikipedia.org/wiki/User:Zwobot
9544 https://en.wikipedia.org/w/index.php%3ftitle=User:Zxcv2000&action=edit&redlink=1
9545 https://en.wikipedia.org/wiki/User:ZxxZxxZ
9546 https://en.wikipedia.org/wiki/User:Zyqqh

3 Zzedar9547
3 Zziccardi9548
1 Zzuuzz9549
23 ZéroBot9550
1 \wowzeryest\9551
1 ~riley9552
1 Étale.cohomology9553
1 Île flottante9554
3 Þjarkur9555
8 Štefica Horvat9556
20 Ɯ9557
1 Александър9558
1 Владимир Паронджанов9559
1 Дарко Максимовић9560
3 Канеюку9561
2 Михајло Анђелковић9562
4 НСНУ9563
2 Олександр Кравчук9564
1 Тиверополник9565
1 9566

9547 https://en.wikipedia.org/wiki/User:Zzedar
9548 https://en.wikipedia.org/wiki/User:Zziccardi
9549 https://en.wikipedia.org/wiki/User:Zzuuzz
9550 https://en.wikipedia.org/wiki/User:Z%25C3%25A9roBot
9551 https://en.wikipedia.org/wiki/User:%255Cwowzeryest%255C
9552 https://en.wikipedia.org/wiki/User:~riley
9553 https://en.wikipedia.org/w/index.php%3ftitle=User:%25C3%2589tale.cohomology&action=edit&redlink=1
9554 https://en.wikipedia.org/wiki/User:%25C3%258Ele_flottante
9555 https://en.wikipedia.org/wiki/User:%25C3%259Ejarkur
9556 https://en.wikipedia.org/w/index.php%3ftitle=User:%25C5%25A0tefica_Horvat&action=edit&redlink=1
9557 https://en.wikipedia.org/wiki/User:%25C6%259C
9558 https://en.wikipedia.org/wiki/User:%25D0%2590%25D0%25BB%25D0%25B5%25D0%25BA%25D1%2581%25D0%25B0%25D0%25BD%25D0%25B4%25D1%258A%25D1%2580
9559 https://en.wikipedia.org/wiki/User:%25D0%2592%25D0%25BB%25D0%25B0%25D0%25B4%25D0%25B8%25D0%25BC%25D0%25B8%25D1%2580_%25D0%259F%25D0%25B0%25D1%2580%25D0%25BE%25D0%25BD%25D0%25B4%25D0%25B6%25D0%25B0%25D0%25BD%25D0%25BE%25D0%25B2
9560 https://en.wikipedia.org/wiki/User:%25D0%2594%25D0%25B0%25D1%2580%25D0%25BA%25D0%25BE_%25D0%259C%25D0%25B0%25D0%25BA%25D1%2581%25D0%25B8%25D0%25BC%25D0%25BE%25D0%25B2%25D0%25B8%25D1%259B
9561 https://en.wikipedia.org/wiki/User:%25D0%259A%25D0%25B0%25D0%25BD%25D0%25B5%25D1%258E%25D0%25BA%25D1%2583
9562 https://en.wikipedia.org/wiki/User:%25D0%259C%25D0%25B8%25D1%2585%25D0%25B0%25D1%2598%25D0%25BB%25D0%25BE_%25D0%2590%25D0%25BD%25D1%2592%25D0%25B5%25D0%25BB%25D0%25BA%25D0%25BE%25D0%25B2%25D0%25B8%25D1%259B
9563 https://en.wikipedia.org/wiki/User:%25D0%259D%25D0%25A1%25D0%259D%25D0%25A3
9564 https://en.wikipedia.org/wiki/User:%25D0%259E%25D0%25BB%25D0%25B5%25D0%25BA%25D1%2581%25D0%25B0%25D0%25BD%25D0%25B4%25D1%2580_%25D0%259A%25D1%2580%25D0%25B0%25D0%25B2%25D1%2587%25D1%2583%25D0%25BA
9565 https://en.wikipedia.org/wiki/User:%25D0%25A2%25D0%25B8%25D0%25B2%25D0%25B5%25D1%2580%25D0%25BE%25D0%25BF%25D0%25BE%25D0%25BB%25D0%25BD%25D0%25B8%25D0%25BA
9566 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D5%258D%25D5%25A1%25D5%25B0%25D5%25A1%25D5%25AF&action=edit&redlink=1

1 אנונימי179567
1 דוד9568
1 דוד שי9569
2 חובבשירה9570
1 חצרוני9571
3 יובל מדר9572
1 רועי.ס9573
1 جنگولک9574
1 جواد9575
1 روخو9576
1 سعی9577
1 قلی زادگان9578
2 پوویا9579
1 अनुनाद सिंह9580
1 शिव9581
1 タチコマ robot9582
1 ㅂㄱㅇ9583
1 我輩は犬である9584
4 无不详9585
1 白駒9586
2 董辰兴9587

9567 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D7%2590%25D7%25A0%25D7%2595%25D7%25A0%25D7%2599%25D7%259E%25D7%259917&action=edit&redlink=1
9568 https://en.wikipedia.org/wiki/User:%25D7%2593%25D7%2595%25D7%2593
9569 https://en.wikipedia.org/wiki/User:%25D7%2593%25D7%2595%25D7%2593_%25D7%25A9%25D7%2599
9570 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D7%2597%25D7%2595%25D7%2591%25D7%2591%25D7%25A9%25D7%2599%25D7%25A8%25D7%2594&action=edit&redlink=1
9571 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D7%2597%25D7%25A6%25D7%25A8%25D7%2595%25D7%25A0%25D7%2599&action=edit&redlink=1
9572 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D7%2599%25D7%2595%25D7%2591%25D7%259C_%25D7%259E%25D7%2593%25D7%25A8&action=edit&redlink=1
9573 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D7%25A8%25D7%2595%25D7%25A2%25D7%2599.%25D7%25A1&action=edit&redlink=1
9574 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D8%25AC%25D9%2586%25DA%25AF%25D9%2588%25D9%2584%25DA%25A9&action=edit&redlink=1
9575 https://en.wikipedia.org/wiki/User:%25D8%25AC%25D9%2588%25D8%25A7%25D8%25AF
9576 https://en.wikipedia.org/wiki/User:%25D8%25B1%25D9%2588%25D8%25AE%25D9%2588
9577 https://en.wikipedia.org/w/index.php%3ftitle=User:%25D8%25B3%25D8%25B9%25DB%258C&action=edit&redlink=1
9578 https://en.wikipedia.org/wiki/User:%25D9%2582%25D9%2584%25DB%258C_%25D8%25B2%25D8%25A7%25D8%25AF%25DA%25AF%25D8%25A7%25D9%2586
9579 https://en.wikipedia.org/wiki/User:%25D9%25BE%25D9%2588%25D9%2588%25DB%258C%25D8%25A7
9580 https://en.wikipedia.org/w/index.php%3ftitle=User:%25E0%25A4%2585%25E0%25A4%25A8%25E0%25A5%2581%25E0%25A4%25A8%25E0%25A4%25BE%25E0%25A4%25A6_%25E0%25A4%25B8%25E0%25A4%25BF%25E0%25A4%2582%25E0%25A4%25B9&action=edit&redlink=1
9581 https://en.wikipedia.org/wiki/User:%25E0%25A4%25B6%25E0%25A4%25BF%25E0%25A4%25B5
9582 https://en.wikipedia.org/wiki/User:%25E3%2582%25BF%25E3%2583%2581%25E3%2582%25B3%25E3%2583%259E_robot
9583 https://en.wikipedia.org/wiki/User:%25E3%2585%2582%25E3%2584%25B1%25E3%2585%2587
9584 https://en.wikipedia.org/wiki/User:%25E6%2588%2591%25E8%25BC%25A9%25E3%2581%25AF%25E7%258A%25AC%25E3%2581%25A7%25E3%2581%2582%25E3%2582%258B
9585 https://en.wikipedia.org/w/index.php%3ftitle=User:%25E6%2597%25A0%25E4%25B8%258D%25E8%25AF%25A6&action=edit&redlink=1
9586 https://en.wikipedia.org/wiki/User:%25E7%2599%25BD%25E9%25A7%2592
9587 https://en.wikipedia.org/wiki/User:%25E8%2591%25A3%25E8%25BE%25B0%25E5%2585%25B4

3 象道9588
2 辜乧政9589
2 정과9590
1 😂9591

9588 https://en.wikipedia.org/w/index.php%3ftitle=User:%25E8%25B1%25A1%25E9%2581%2593&action=edit&redlink=1
9589 https://en.wikipedia.org/w/index.php%3ftitle=User:%25E8%25BE%259C%25E4%25B9%25A7%25E6%2594%25BF&action=edit&redlink=1
9590 https://en.wikipedia.org/w/index.php%3ftitle=User:%25EC%25A0%2595%25EA%25B3%25BC&action=edit&redlink=1
9591 https://en.wikipedia.org/wiki/User:%25F0%259F%2598%2582

List of Figures

• GFDL: GNU Free Documentation License. http://www.gnu.org/licenses/fdl.html
• cc-by-sa-3.0: Creative Commons Attribution ShareAlike 3.0 License. http://
creativecommons.org/licenses/by-sa/3.0/
• cc-by-sa-2.5: Creative Commons Attribution ShareAlike 2.5 License. http://
creativecommons.org/licenses/by-sa/2.5/
• cc-by-sa-2.0: Creative Commons Attribution ShareAlike 2.0 License. http://
creativecommons.org/licenses/by-sa/2.0/
• cc-by-sa-1.0: Creative Commons Attribution ShareAlike 1.0 License. http://
creativecommons.org/licenses/by-sa/1.0/
• cc-by-2.0: Creative Commons Attribution 2.0 License. http://creativecommons.
org/licenses/by/2.0/
• cc-by-2.0: Creative Commons Attribution 2.0 License. http://creativecommons.
org/licenses/by/2.0/deed.en
• cc-by-2.5: Creative Commons Attribution 2.5 License. http://creativecommons.
org/licenses/by/2.5/deed.en
• cc-by-3.0: Creative Commons Attribution 3.0 License. http://creativecommons.
org/licenses/by/3.0/deed.en
• GPL: GNU General Public License. http://www.gnu.org/licenses/gpl-2.0.txt
• LGPL: GNU Lesser General Public License. http://www.gnu.org/licenses/lgpl.html
• PD: This image is in the public domain.
• ATTR: The copyright holder of this file allows anyone to use it for any purpose,
provided that the copyright holder is properly attributed. Redistribution, derivative
work, commercial use, and all other use is permitted.
• EURO: This is the common (reverse) face of a euro coin. The copyright on the design
of the common face of the euro coins belongs to the European Commission. Reproduction
in a format without relief (drawings, paintings, films) is authorised provided it is
not detrimental to the image of the euro.
• LFK: Lizenz Freie Kunst (Free Art License). http://artlibre.org/licence/lal/de
• CFR: Copyright free use.

• EPL: Eclipse Public License. http://www.eclipse.org/org/documents/epl-v10.php
Copies of the GPL, the LGPL as well as the GFDL are included in chapter Licenses9592 .
Please note that images in the public domain do not require attribution. You may click
on the image numbers in the following table to open the webpage of the images in your
web browser.

9592 Chapter 156 on page 2085

1 en:User:Saranphat.cha9593 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
2 User:Dcoetzee9594 , User:WDGraham9595 , User:Dcoetzee9596 ,
User:WDGraham9597
3 User:Dcoetzee9598 , User:WDGraham9599 , User:Dcoetzee9600 ,
User:WDGraham9601
4 Balu Ertl9602 , Balu Ertl9603
5 en:User:Saranphat.cha9604 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
6 Pmdumuid9605 , Pmdumuid9606
7 Personal photograph by User:Poussin jean9607 , Personal photograph by User:Poussin jean9608
8 Wikipedia:en:User:RolandH9609
9 en:Joestape899610
10 en:User:Saranphat.cha9611 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
11 Swfung89612 , Swfung89613
12 Dcoetzee9614 at English Wikipedia9615 , Dcoetzee9616 at English Wikipedia9617
13 Dcoetzee9618 at English Wikipedia9619

9593 https://en.wikipedia.org/wiki/User:Saranphat.cha
9594 http://commons.wikimedia.org/wiki/User:Dcoetzee
9595 http://commons.wikimedia.org/wiki/User:WDGraham
9596 https://commons.wikimedia.org/wiki/User:Dcoetzee
9597 https://commons.wikimedia.org/wiki/User:WDGraham
9598 http://commons.wikimedia.org/wiki/User:Dcoetzee
9599 http://commons.wikimedia.org/wiki/User:WDGraham
9600 https://commons.wikimedia.org/wiki/User:Dcoetzee
9601 https://commons.wikimedia.org/wiki/User:WDGraham
9602 http://commons.wikimedia.org/wiki/User:Balu.ertl
9603 https://commons.wikimedia.org/wiki/User:Balu.ertl
9604 https://en.wikipedia.org/wiki/User:Saranphat.cha
9605 http://commons.wikimedia.org/w/index.php?title=User:Pmdumuid&action=edit&redlink=1
9606 https://commons.wikimedia.org/w/index.php?title=User:Pmdumuid&action=edit&redlink=1
9607 http://commons.wikimedia.org/wiki/User:Poussin_jean
9608 https://commons.wikimedia.org/wiki/User:Poussin_jean
9609 https://en.wikipedia.org/wiki/en:User:RolandH
9610 https://en.wikipedia.org/wiki/Joestape89
9611 https://en.wikipedia.org/wiki/User:Saranphat.cha
9612 http://commons.wikimedia.org/w/index.php?title=User:Swfung8&action=edit&redlink=1
9613 https://commons.wikimedia.org/w/index.php?title=User:Swfung8&action=edit&redlink=1
9614 https://en.wikipedia.org/wiki/User:Dcoetzee
9615 https://en.wikipedia.org/wiki/
9616 https://en.wikipedia.org/wiki/User:Dcoetzee
9617 https://en.wikipedia.org/wiki/
9618 https://en.wikipedia.org/wiki/User:Dcoetzee
9619 https://en.wikipedia.org/wiki/

14 VineetKumar9620 at English Wikipedia9621 , VineetKumar9622 at English Wikipedia9623
15 NASA
16 CobaltBlue
17 Prof. Dr. Peter Sanders
18 VineetKumar9624 at English Wikipedia9625 , VineetKumar9626 at English Wikipedia9627
19 NASA
20 CobaltBlue
21 Prof. Dr. Peter Sanders
22 Znupi9628 , Znupi9629
23 GitHub
24 Explorer099630 , Explorer099631
25 Swfung89632 , Swfung89633
26 en:User:Saranphat.cha9634 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
27 Swfung89635 , Swfung89636
28 The original uploader was Nmnogueira9637 at English Wikipedia9638 .
29 Balu Ertl9639 , Balu Ertl9640
30 Mciura9641 , Mciura9642
31 Zieben0079643 , Zieben0079644
32 Zieben0079645 , Zieben0079646
33 National Institute of Standards and Technology

9620 https://en.wikipedia.org/wiki/User:VineetKumar
9621 https://en.wikipedia.org/wiki/
9622 https://en.wikipedia.org/wiki/User:VineetKumar
9623 https://en.wikipedia.org/wiki/
9624 https://en.wikipedia.org/wiki/User:VineetKumar
9625 https://en.wikipedia.org/wiki/
9626 https://en.wikipedia.org/wiki/User:VineetKumar
9627 https://en.wikipedia.org/wiki/
9628 http://commons.wikimedia.org/w/index.php?title=User:Znupi&action=edit&redlink=1
9629 https://commons.wikimedia.org/w/index.php?title=User:Znupi&action=edit&redlink=1
9630 http://commons.wikimedia.org/wiki/User_talk:Explorer09
9631 https://commons.wikimedia.org/wiki/User_talk:Explorer09
9632 http://commons.wikimedia.org/w/index.php?title=User:Swfung8&action=edit&redlink=1
9633 https://commons.wikimedia.org/w/index.php?title=User:Swfung8&action=edit&redlink=1
9634 https://en.wikipedia.org/wiki/User:Saranphat.cha
9635 http://commons.wikimedia.org/w/index.php?title=User:Swfung8&action=edit&redlink=1
9636 https://commons.wikimedia.org/w/index.php?title=User:Swfung8&action=edit&redlink=1
9637 https://en.wikipedia.org/wiki/User:Nmnogueira
9638 https://en.wikipedia.org/wiki/
9639 http://commons.wikimedia.org/wiki/User:Balu.ertl
9640 https://commons.wikimedia.org/wiki/User:Balu.ertl
9641 http://commons.wikimedia.org/w/index.php?title=User:Mciura&action=edit&redlink=1
9642 https://commons.wikimedia.org/w/index.php?title=User:Mciura&action=edit&redlink=1
9643 http://commons.wikimedia.org/w/index.php?title=User:Zieben007&action=edit&redlink=1
9644 https://commons.wikimedia.org/w/index.php?title=User:Zieben007&action=edit&redlink=1
9645 http://commons.wikimedia.org/w/index.php?title=User:Zieben007&action=edit&redlink=1
9646 https://commons.wikimedia.org/w/index.php?title=User:Zieben007&action=edit&redlink=1

34 Jorge Stolfi9647 , Jorge Stolfi9648


35 en:User:Saranphat.cha9649 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
36 Jorge Stolfi9650 , Jorge Stolfi9651
37 en:User:Saranphat.cha9652 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
38 en:User:Saranphat.cha9653 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
39 Esquivalience9654 , Esquivalience9655
40 Esquivalience9656 , Esquivalience9657
41 Esquivalience9658 , Esquivalience9659
42 Binary_search_tree.svg9660 : Booyabazooka9661
• derivative work: movax
, Binary_search_tree.svg9662 : Booyabazooka9663
• derivative work: movax

43 Esquivalience9664 , Esquivalience9665
44 Esquivalience9666 , Esquivalience9667
45 Esquivalience9668 , Esquivalience9669

9647 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9648 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9649 https://en.wikipedia.org/wiki/User:Saranphat.cha
9650 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9651 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9652 https://en.wikipedia.org/wiki/User:Saranphat.cha
9653 https://en.wikipedia.org/wiki/User:Saranphat.cha
9654 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9655 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9656 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9657 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9658 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9659 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9660 http://commons.wikimedia.org/wiki/File:Binary_search_tree.svg
9661 http://commons.wikimedia.org/wiki/User:Booyabazooka
9662 https://commons.wikimedia.org/wiki/File:Binary_search_tree.svg
9663 https://commons.wikimedia.org/wiki/User:Booyabazooka
9664 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9665 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9666 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9667 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9668 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9669 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
2059
List of Figures
46 Esquivalience9670 , Esquivalience9671
47 Esquivalience9672 , Esquivalience9673
48 No machine-readable author provided. Dcoetzee9674 as-
sumed (based on copyright claims)., No machine-readable
author provided. Dcoetzee9675 assumed (based on copyright
claims).
49 Nomen4Omen9676 , Nomen4Omen9677
50 Josell79678 , Josell79679
51 Booyabazooka9680 (based on PNG image by Deco9681 ). Mod-
ifications by Superm4019682 ., Booyabazooka9683 (based on
PNG image by Deco9684 ). Modifications by Superm4019685 .
52 Qwertyus9686 , Qwertyus9687
53 en:User:Saranphat.cha9688 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
54 Jorge Stolfi9689 , Jorge Stolfi9690
55 Jorge Stolfi9691 , Jorge Stolfi9692
56 Jorge Stolfi9693 , Jorge Stolfi9694
57 Jorge Stolfi9695 , Jorge Stolfi9696
58 Derrick Coetzee (User:Dcoetzee9697 ), Derrick Coetzee
(User:Dcoetzee9698 )
9670 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9671 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9672 http://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9673 https://commons.wikimedia.org/w/index.php?title=User:Esquivalience&action=edit&redlink=1
9674 http://commons.wikimedia.org/wiki/User:Dcoetzee
9675 https://commons.wikimedia.org/wiki/User:Dcoetzee
9676 http://commons.wikimedia.org/w/index.php?title=User:Nomen4Omen&action=edit&redlink=1
9677 https://commons.wikimedia.org/w/index.php?title=User:Nomen4Omen&action=edit&redlink=1
9678 http://commons.wikimedia.org/w/index.php?title=User:Josell7&action=edit&redlink=1
9679 https://commons.wikimedia.org/w/index.php?title=User:Josell7&action=edit&redlink=1
9680 https://en.wikipedia.org/wiki/User:Booyabazooka
9681 https://en.wikipedia.org/wiki/User:Deco
9682 http://commons.wikimedia.org/wiki/User:Superm401
9683 https://en.wikipedia.org/wiki/User:Booyabazooka
9684 https://en.wikipedia.org/wiki/User:Deco
9685 https://commons.wikimedia.org/wiki/User:Superm401
9686 http://commons.wikimedia.org/w/index.php?title=User:Qwertyus&action=edit&redlink=1
9687 https://commons.wikimedia.org/w/index.php?title=User:Qwertyus&action=edit&redlink=1
9688 https://en.wikipedia.org/wiki/User:Saranphat.cha
9689 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9690 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9691 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9692 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9693 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9694 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9695 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9696 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9697 http://commons.wikimedia.org/wiki/User:Dcoetzee
9698 https://commons.wikimedia.org/wiki/User:Dcoetzee
59 en:User:Saranphat.cha9699 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
60 en:User:Saranphat.cha9700 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
61 Jorge Stolfi9701 , Jorge Stolfi9702
62 en:User:Saranphat.cha9703 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
63 en:User:Saranphat.cha9704 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
64 en:User:Saranphat.cha9705 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
65 svg by Booyabazooka9706
original png by Wapcaplet9707 , svg by Booyabazooka9708
original png by Wapcaplet9709
66 Jorge Stolfi9710 , Jorge Stolfi9711
67 Jorge Stolfi9712 , Jorge Stolfi9713
68 Emijrpbot, Hazard-Bot, Helix84, JarektBot, Simeon87, Ve-
lociostrich, Xhienne
69 Emijrpbot, Hazard-Bot, Helix84, JarektBot, Simeon87, Ve-
lociostrich, Xhienne
70 Cryptic C629714 , Cryptic C629715
71 en:User:Saranphat.cha9716 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
72 SVG Jarkko Piiroinen9717 ; rights, design and origin Wikime-
dia Foundation, Anomie, Jo-Jo Eumerus, Mifter
73 Rasmus Pagh
9699 https://en.wikipedia.org/wiki/User:Saranphat.cha
9700 https://en.wikipedia.org/wiki/User:Saranphat.cha
9701 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9702 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9703 https://en.wikipedia.org/wiki/User:Saranphat.cha
9704 https://en.wikipedia.org/wiki/User:Saranphat.cha
9705 https://en.wikipedia.org/wiki/User:Saranphat.cha
9706 http://commons.wikimedia.org/wiki/User:Booyabazooka
9707 https://en.wikipedia.org/wiki/User:Wapcaplet
9708 https://commons.wikimedia.org/wiki/User:Booyabazooka
9709 https://en.wikipedia.org/wiki/User:Wapcaplet
9710 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9711 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9712 http://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9713 https://commons.wikimedia.org/wiki/User:Jorge_Stolfi
9714 http://commons.wikimedia.org/wiki/User:Cryptic_C62
9715 https://commons.wikimedia.org/wiki/User:Cryptic_C62
9716 https://en.wikipedia.org/wiki/User:Saranphat.cha
9717 https://commons.wikimedia.org/wiki/User:Jarkko_Piiroinen
74 en:User:Saranphat.cha9718 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
75 Stephen Silver
76 David Vignoni9719 (original icon); Flamurai9720 (SVG conversion);
bayo9721 (color), David Vignoni9722 (original icon);
Flamurai9723 (SVG conversion); bayo9724 (color)
77 Cmglee9725 , Cmglee9726
78 en:User:Saranphat.cha9727 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
79 User:JN~commonswiki9728 , User:JN~commonswiki9729
80 Anarkman commonswiki, Emijrpbot, Fastfission common-
swiki, Hazard-Bot, JarektBot
81 Cmglee9730 , Cmglee9731
82 Cmglee9732 , Cmglee9733
83 derivative work by Thumperward9734 / * File:Wiki let-
ter w.svg9735 : Jarkko Piiroinen9736 , derivative work by
Thumperward9737 / * File:Wiki letter w.svg9738 : Jarkko Pi-
iroinen9739
84 en:User:Saranphat.cha9740 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
85 Google
9718 https://en.wikipedia.org/wiki/User:Saranphat.cha
9719 https://en.wikipedia.org/wiki/David_Vignoni
9720 http://commons.wikimedia.org/wiki/User:Flamurai
9721 http://commons.wikimedia.org/wiki/User:Bayo
9722 https://en.wikipedia.org/wiki/David_Vignoni
9723 https://commons.wikimedia.org/wiki/User:Flamurai
9724 https://commons.wikimedia.org/wiki/User:Bayo
9725 http://commons.wikimedia.org/wiki/User:Cmglee
9726 https://commons.wikimedia.org/wiki/User:Cmglee
9727 https://en.wikipedia.org/wiki/User:Saranphat.cha
9728 http://commons.wikimedia.org/wiki/User:JN~commonswiki
9729 https://commons.wikimedia.org/wiki/User:JN~commonswiki
9730 http://commons.wikimedia.org/wiki/User:Cmglee
9731 https://commons.wikimedia.org/wiki/User:Cmglee
9732 http://commons.wikimedia.org/wiki/User:Cmglee
9733 https://commons.wikimedia.org/wiki/User:Cmglee
9734 http://commons.wikimedia.org/wiki/User:Thumperward
9735 http://commons.wikimedia.org/wiki/File:Wiki_letter_w.svg
9736 http://commons.wikimedia.org/wiki/User:Jarkko_Piiroinen
9737 https://commons.wikimedia.org/wiki/User:Thumperward
9738 https://commons.wikimedia.org/wiki/File:Wiki_letter_w.svg
9739 https://commons.wikimedia.org/wiki/User:Jarkko_Piiroinen
9740 https://en.wikipedia.org/wiki/User:Saranphat.cha
86 No machine-readable author provided. Andrew
Hyde~commonswiki9741 assumed (based on copyright
claims)., No machine-readable author provided. Andrew
Hyde~commonswiki9742 assumed (based on copyright
claims).
87 The original uploader was Jasampler9743 at Spanish
Wikipedia9744., The original uploader was Jasampler9745 at
Spanish Wikipedia9746.
88 Kilom6919747 , Kilom6919748
89 Leshabirukov9749 , Leshabirukov9750
90 KSmrq9751 , KSmrq9752
91 User:Rocchini9753 , simplification by User:Life of Riley9754 ,
User:Rocchini9755 , simplification by User:Life of Riley9756
92 Arichnad
93 Axa29757 , Axa29758
94 User:DTR9759 , User:DTR9760
95 Kilom6919761 , Kilom6919762
96 The original uploader was Robertwb9763 at English
Wikipedia9764., The original uploader was Robertwb9765 at
English Wikipedia9766.
9741 http://commons.wikimedia.org/w/index.php?title=User:Andrew_Hyde~commonswiki&action=edit&redlink=1
9742 https://commons.wikimedia.org/w/index.php?title=User:Andrew_Hyde~commonswiki&action=edit&redlink=1
9743 https://en.wikipedia.org/wiki/es:User:Jasampler
9744 https://en.wikipedia.org/wiki/es:
9745 https://en.wikipedia.org/wiki/es:User:Jasampler
9746 https://en.wikipedia.org/wiki/es:
9747 http://commons.wikimedia.org/wiki/User:Kilom691
9748 https://commons.wikimedia.org/wiki/User:Kilom691
9749 http://commons.wikimedia.org/w/index.php?title=User:Leshabirukov&action=edit&redlink=1
9750 https://commons.wikimedia.org/w/index.php?title=User:Leshabirukov&action=edit&redlink=1
9751 http://commons.wikimedia.org/w/index.php?title=User:KSmrq&action=edit&redlink=1
9752 https://commons.wikimedia.org/w/index.php?title=User:KSmrq&action=edit&redlink=1
9753 http://commons.wikimedia.org/wiki/User:Rocchini
9754 http://commons.wikimedia.org/wiki/User:Life_of_Riley
9755 https://commons.wikimedia.org/wiki/User:Rocchini
9756 https://commons.wikimedia.org/wiki/User:Life_of_Riley
9757 http://commons.wikimedia.org/w/index.php?title=User:Axa2&action=edit&redlink=1
9758 https://commons.wikimedia.org/w/index.php?title=User:Axa2&action=edit&redlink=1
9759 http://commons.wikimedia.org/wiki/User:DTR
9760 https://commons.wikimedia.org/wiki/User:DTR
9761 http://commons.wikimedia.org/wiki/User:Kilom691
9762 https://commons.wikimedia.org/wiki/User:Kilom691
9763 https://en.wikipedia.org/wiki/User:Robertwb
9764 https://en.wikipedia.org/wiki/
9765 https://en.wikipedia.org/wiki/User:Robertwb
9766 https://en.wikipedia.org/wiki/
97 David Vignoni9767 (original icon); Flamurai9768 (SVG conversion);
bayo9769 (color), David Vignoni9770 (original icon);
Flamurai9771 (SVG conversion); bayo9772 (color)
98 David Eppstein9773 , David Eppstein9774
99 David Eppstein9775 , David Eppstein9776
100 No machine-readable author provided. User A19777 assumed
(based on copyright claims)., No machine-readable author
provided. User A19778 assumed (based on copyright claims).
101 User:AzaToth9779 , User:AzaToth9780
102 Allforrous, BetacommandBot, CommonSupporter, Hazard-
Bot, JMCC1, JarektBot, Jmarchn, Josette, Kilom691, Lar-
bot, Watchduck
103 Allforrous, BetacommandBot, CommonSupporter,
Grafite commonswiki, Hazard-Bot, JarektBot, Jcb,
Jmarchn, Josette, Larbot, Watchduck
104 Computermacgyver9781 , Computermacgyver9782
105 Martin Grandjean9783 , Martin Grandjean9784
106 Bogdan Giuşcă
107 Aditya8795, Dcoetzee, JarektBot, Watchduck
108 Arbor9785 at English Wikipedia9786 (PNG file), Booyabazooka9787
at English Wikipedia9788 (corrections + SVG conversion),
Arbor9789 at English Wikipedia9790 (PNG file), Booyabazooka9791
at English Wikipedia9792 (corrections + SVG conversion)
109 Thore Husfeldt9793 (talk9794 )
9767 https://en.wikipedia.org/wiki/David_Vignoni
9768 http://commons.wikimedia.org/wiki/User:Flamurai
9769 http://commons.wikimedia.org/wiki/User:Bayo
9770 https://en.wikipedia.org/wiki/David_Vignoni
9771 https://commons.wikimedia.org/wiki/User:Flamurai
9772 https://commons.wikimedia.org/wiki/User:Bayo
9773 http://commons.wikimedia.org/wiki/User:David_Eppstein
9774 https://commons.wikimedia.org/wiki/User:David_Eppstein
9775 http://commons.wikimedia.org/wiki/User:David_Eppstein
9776 https://commons.wikimedia.org/wiki/User:David_Eppstein
9777 http://commons.wikimedia.org/wiki/User:User_A1
9778 https://commons.wikimedia.org/wiki/User:User_A1
9779 http://commons.wikimedia.org/wiki/User:AzaToth
9780 https://commons.wikimedia.org/wiki/User:AzaToth
9781 http://commons.wikimedia.org/wiki/User:Computermacgyver
9782 https://commons.wikimedia.org/wiki/User:Computermacgyver
9783 http://commons.wikimedia.org/wiki/User:SlvrKy
9784 https://commons.wikimedia.org/wiki/User:SlvrKy
9785 https://en.wikipedia.org/wiki/User:Arbor
9786 https://en.wikipedia.org/wiki/
9787 https://en.wikipedia.org/wiki/User:Booyabazooka
9788 https://en.wikipedia.org/wiki/
9789 https://en.wikipedia.org/wiki/User:Arbor
9790 https://en.wikipedia.org/wiki/
9791 https://en.wikipedia.org/wiki/User:Booyabazooka
9792 https://en.wikipedia.org/wiki/
9793 https://en.wikipedia.org/wiki/User:Thore_Husfeldt
9794 https://en.wikipedia.org/wiki/User_talk:Thore_Husfeldt
110 Thore Husfeldt9795 , Thore Husfeldt9796
111 SRI International
112 Subh839797
113 CountingPine9798 , CountingPine9799
114 Srossd9800 , Srossd9801
115 Subh839802 , Subh839803
116 Tristan Shin
117 Tristan Shin
118 Tristan Shin
119 Fractalone9804 , Fractalone9805
120 Jez99999806 at English Wikipedia9807, Jez99999808 at English
Wikipedia9809
121 Maschelos
122 David Eppstein9810 at English Wikipedia9811
123 David Eppstein9812 at English Wikipedia9813
124 HeMath9814 , HeMath9815
125 Horváth Árpád9816 , Horváth Árpád9817
126 Yves berset9818 , Yves berset9819
127 Arpad Horvath9820 , Arpad Horvath9821
128 Khassan du9822 , Khassan du9823
9795 http://commons.wikimedia.org/w/index.php?title=User:Thore_Husfeldt&action=edit&redlink=1
9796 https://commons.wikimedia.org/w/index.php?title=User:Thore_Husfeldt&action=edit&redlink=1
9797 https://commons.wikimedia.org/wiki/User:Subh83
9798 http://commons.wikimedia.org/w/index.php?title=User:CountingPine&action=edit&redlink=1
9799 https://commons.wikimedia.org/w/index.php?title=User:CountingPine&action=edit&redlink=1
9800 http://commons.wikimedia.org/w/index.php?title=User:Srossd&action=edit&redlink=1
9801 https://commons.wikimedia.org/w/index.php?title=User:Srossd&action=edit&redlink=1
9802 http://commons.wikimedia.org/wiki/User:Subh83
9803 https://commons.wikimedia.org/wiki/User:Subh83
9804 http://commons.wikimedia.org/w/index.php?title=User:Fractalone&action=edit&redlink=1
9805 https://commons.wikimedia.org/w/index.php?title=User:Fractalone&action=edit&redlink=1
9806 https://en.wikipedia.org/wiki/User:Jez9999
9807 https://en.wikipedia.org/wiki/
9808 https://en.wikipedia.org/wiki/User:Jez9999
9809 https://en.wikipedia.org/wiki/
9810 https://en.wikipedia.org/wiki/User:David_Eppstein
9811 https://en.wikipedia.org/wiki/
9812 https://en.wikipedia.org/wiki/User:David_Eppstein
9813 https://en.wikipedia.org/wiki/
9814 http://commons.wikimedia.org/w/index.php?title=User:HeMath&action=edit&redlink=1
9815 https://commons.wikimedia.org/w/index.php?title=User:HeMath&action=edit&redlink=1
9816 http://commons.wikimedia.org/wiki/User:Harp
9817 https://commons.wikimedia.org/wiki/User:Harp
9818 http://commons.wikimedia.org/w/index.php?title=User:Yves_berset&action=edit&redlink=1
9819 https://commons.wikimedia.org/w/index.php?title=User:Yves_berset&action=edit&redlink=1
9820 http://commons.wikimedia.org/wiki/User:Harp
9821 https://commons.wikimedia.org/wiki/User:Harp
9822 http://commons.wikimedia.org/w/index.php?title=User:Khassan_du&action=edit&redlink=1
9823 https://commons.wikimedia.org/w/index.php?title=User:Khassan_du&action=edit&redlink=1
129 Khassan du9824 , Khassan du9825
130 User:Dcoetzee9826 , User:Dcoetzee9827
131 en:User:Saranphat.cha9828 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
132 Alieseraj9829 , Alieseraj9830
133 User:Dcoetzee9831 , User:Maksim9832 , User:Alexander
Drichel9833 , User:Dcoetzee9834 , User:Maksim9835 ,
User:Alexander Drichel9836
134 User:Dcoetzee9837 , User:Maksim9838 , User:Alexander
Drichel9839 , User:Dcoetzee9840 , User:Maksim9841 ,
User:Alexander Drichel9842
135 User:Dcoetzee9843 , User:Maksim9844 , User:Alexander
Drichel9845 , User:Dcoetzee9846 , User:Maksim9847 ,
User:Alexander Drichel9848
136 en:User:Saranphat.cha9849 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
137 Blake Matheny
138 No machine-readable author provided. Reg-
naron~commonswiki9850 assumed (based on copyright
claims)., No machine-readable author provided. Reg-
naron~commonswiki9851 assumed (based on copyright
claims).
9824 http://commons.wikimedia.org/w/index.php?title=User:Khassan_du&action=edit&redlink=1
9825 https://commons.wikimedia.org/w/index.php?title=User:Khassan_du&action=edit&redlink=1
9826 http://commons.wikimedia.org/wiki/User:Dcoetzee
9827 https://commons.wikimedia.org/wiki/User:Dcoetzee
9828 https://en.wikipedia.org/wiki/User:Saranphat.cha
9829 http://commons.wikimedia.org/w/index.php?title=User:Alieseraj&action=edit&redlink=1
9830 https://commons.wikimedia.org/w/index.php?title=User:Alieseraj&action=edit&redlink=1
9831 http://commons.wikimedia.org/wiki/User:Dcoetzee
9832 http://commons.wikimedia.org/wiki/User:Maksim
9833 http://commons.wikimedia.org/wiki/User:Alexander_Drichel
9834 https://commons.wikimedia.org/wiki/User:Dcoetzee
9835 https://commons.wikimedia.org/wiki/User:Maksim
9836 https://commons.wikimedia.org/wiki/User:Alexander_Drichel
9837 http://commons.wikimedia.org/wiki/User:Dcoetzee
9838 http://commons.wikimedia.org/wiki/User:Maksim
9839 http://commons.wikimedia.org/wiki/User:Alexander_Drichel
9840 https://commons.wikimedia.org/wiki/User:Dcoetzee
9841 https://commons.wikimedia.org/wiki/User:Maksim
9842 https://commons.wikimedia.org/wiki/User:Alexander_Drichel
9843 http://commons.wikimedia.org/wiki/User:Dcoetzee
9844 http://commons.wikimedia.org/wiki/User:Maksim
9845 http://commons.wikimedia.org/wiki/User:Alexander_Drichel
9846 https://commons.wikimedia.org/wiki/User:Dcoetzee
9847 https://commons.wikimedia.org/wiki/User:Maksim
9848 https://commons.wikimedia.org/wiki/User:Alexander_Drichel
9849 https://en.wikipedia.org/wiki/User:Saranphat.cha
9850 http://commons.wikimedia.org/wiki/User:Regnaron~commonswiki
9851 https://commons.wikimedia.org/wiki/User:Regnaron~commonswiki
139 No machine-readable author provided. Reg-
naron~commonswiki9852 assumed (based on copyright
claims)., No machine-readable author provided. Reg-
naron~commonswiki9853 assumed (based on copyright
claims).
140 User:AzaToth9854 , User:AzaToth9855
141 Tomasz P. Michalak
142 Tapiocozzo9856 , Tapiocozzo9857
143 Claudio Rocchini
144 Ajalvare9858 , Ajalvare9859
145 Derrick Coetzee9860 , Derrick Coetzee9861
146 Heiner Oßwald9862 , Heiner Oßwald9863
147 Heiner Oßwald9864 , Heiner Oßwald9865
148 Heiner Oßwald9866 , Heiner Oßwald9867
149 Heiner Oßwald9868 , Heiner Oßwald9869
150 Heiner Oßwald9870 , Heiner Oßwald9871
151 Heiner Oßwald9872 , Heiner Oßwald9873
152 Heiner Oßwald9874 , Heiner Oßwald9875
153 Heiner Oßwald9876 , Heiner Oßwald9877
154 !Original:9878 Gergelypalla9879 Vector: Redrobsche9880 ,
!Original:9881 Gergelypalla9882 Vector: Redrobsche9883
9852 http://commons.wikimedia.org/wiki/User:Regnaron~commonswiki
9853 https://commons.wikimedia.org/wiki/User:Regnaron~commonswiki
9854 http://commons.wikimedia.org/wiki/User:AzaToth
9855 https://commons.wikimedia.org/wiki/User:AzaToth
9856 http://commons.wikimedia.org/w/index.php?title=User:Tapiocozzo&action=edit&redlink=1
9857 https://commons.wikimedia.org/w/index.php?title=User:Tapiocozzo&action=edit&redlink=1
9858 http://commons.wikimedia.org/w/index.php?title=User:Ajalvare&action=edit&redlink=1
9859 https://commons.wikimedia.org/w/index.php?title=User:Ajalvare&action=edit&redlink=1
9860 http://commons.wikimedia.org/wiki/User:Dcoetzee
9861 https://commons.wikimedia.org/wiki/User:Dcoetzee
9862 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9863 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9864 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9865 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9866 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9867 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9868 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9869 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9870 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9871 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9872 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9873 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9874 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9875 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9876 http://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9877 https://commons.wikimedia.org/w/index.php?title=User:Mastarh&action=edit&redlink=1
9878 http://commons.wikimedia.org/wiki/File:Illustration_of_overlapping_communities.jpg
9879 http://commons.wikimedia.org/wiki/User_talk:Gergelypalla
9880 http://commons.wikimedia.org/wiki/User:Redrobsche
9881 https://commons.wikimedia.org/wiki/File:Illustration_of_overlapping_communities.jpg
9882 https://commons.wikimedia.org/wiki/User_talk:Gergelypalla
9883 https://commons.wikimedia.org/wiki/User:Redrobsche
155 Faridani, ImageTaggingBot, Svenbot
156 WhereAreMyPointersAt9884 , WhereAreMyPointersAt9885
157 WhereAreMyPointersAt9886 , WhereAreMyPointersAt9887
158 WhereAreMyPointersAt9888 , WhereAreMyPointersAt9889
159 Kxx9890 , Kxx9891
160 Kxx9892 , Kxx9893
161 Raptor 2101
162 Raptor 2101
163 Raptor 2101
164 Raptor 2101
165 Raptor 2101
166 Raptor 2101
167 Raptor 2101
168 en:User:Saranphat.cha9894 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
169 Miles9895 , Miles9896
170 Stimpy9897 at English Wikipedia9898
171 BenRG9899 at English Wikipedia9900
172 en:User:Saranphat.cha9901 , Anomie, J.delanoy, Jo-Jo Eu-
merus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!,
Topbanana
173 Miles9902 , Miles9903
174 Rodion.Efremov9904 , Rodion.Efremov9905
175 Subh839906 , Subh839907
9884 http://commons.wikimedia.org/w/index.php?title=User:WhereAreMyPointersAt&action=edit&redlink=1
9885 https://commons.wikimedia.org/w/index.php?title=User:WhereAreMyPointersAt&action=edit&redlink=1
9886 http://commons.wikimedia.org/w/index.php?title=User:WhereAreMyPointersAt&action=edit&redlink=1
9887 https://commons.wikimedia.org/w/index.php?title=User:WhereAreMyPointersAt&action=edit&redlink=1
9888 http://commons.wikimedia.org/w/index.php?title=User:WhereAreMyPointersAt&action=edit&redlink=1
9889 https://commons.wikimedia.org/w/index.php?title=User:WhereAreMyPointersAt&action=edit&redlink=1
9890 http://commons.wikimedia.org/w/index.php?title=User:Kxx&action=edit&redlink=1
9891 https://commons.wikimedia.org/w/index.php?title=User:Kxx&action=edit&redlink=1
9892 http://commons.wikimedia.org/w/index.php?title=User:Kxx&action=edit&redlink=1
9893 https://commons.wikimedia.org/w/index.php?title=User:Kxx&action=edit&redlink=1
9894 https://en.wikipedia.org/wiki/User:Saranphat.cha
9895 http://commons.wikimedia.org/w/index.php?title=User:Miles&action=edit&redlink=1
9896 https://commons.wikimedia.org/w/index.php?title=User:Miles&action=edit&redlink=1
9897 https://en.wikipedia.org/wiki/User:Stimpy
9898 https://en.wikipedia.org/wiki/
9899 https://en.wikipedia.org/wiki/User:BenRG
9900 https://en.wikipedia.org/wiki/
9901 https://en.wikipedia.org/wiki/User:Saranphat.cha
9902 http://commons.wikimedia.org/w/index.php?title=User:Miles&action=edit&redlink=1
9903 https://commons.wikimedia.org/w/index.php?title=User:Miles&action=edit&redlink=1
9904 http://commons.wikimedia.org/wiki/User:Rodion.Efremov
9905 https://commons.wikimedia.org/wiki/User:Rodion.Efremov
9906 http://commons.wikimedia.org/wiki/User:Subh83
9907 https://commons.wikimedia.org/wiki/User:Subh83
176 Shiyu Ji9908 , Shiyu Ji9909
177 Chin Ho Lee
178 Chin Ho Lee.
Original uploader was Tcshasaposse9910 at en.wikipedia9911
179 Chin Ho Lee
180 Chin Ho Lee
181 Chin Ho Lee
182 Chin Ho Lee.
Original uploader was Tcshasaposse9912 at en.wikipedia9913
183 Chin Ho Lee
184 Chin Ho Lee
185 Chin Ho Lee.
Original uploader was Tcshasaposse9914 at en.wikipedia9915
186 Erel Segal9916 , Erel Segal9917
187 A3 nm9918 , A3 nm9919
188 A3 nm9920 , A3 nm9921
189 A3 nm9922 , A3 nm9923
190 A3 nm9924 , A3 nm9925
191 Markoid9926 at English Wikipedia9927
192 Markoid9928 at English Wikipedia9929
193 Markoid9930 at English Wikipedia9931
194 Markoid9932 at English Wikipedia9933
195 en:User:Cburnett9934
196 en:User:Cburnett9935
9908 http://commons.wikimedia.org/wiki/User:Shiyu_Ji
9909 https://commons.wikimedia.org/wiki/User:Shiyu_Ji
9910 https://en.wikipedia.org/wiki/User:Tcshasaposse
9911 https://en.wikipedia.org
9912 https://en.wikipedia.org/wiki/User:Tcshasaposse
9913 https://en.wikipedia.org
9914 https://en.wikipedia.org/wiki/User:Tcshasaposse
9915 https://en.wikipedia.org
9916 http://commons.wikimedia.org/wiki/User:Erel_Segal
9917 https://commons.wikimedia.org/wiki/User:Erel_Segal
9918 http://commons.wikimedia.org/wiki/User:A3_nm
9919 https://commons.wikimedia.org/wiki/User:A3_nm
9920 http://commons.wikimedia.org/wiki/User:A3_nm
9921 https://commons.wikimedia.org/wiki/User:A3_nm
9922 http://commons.wikimedia.org/wiki/User:A3_nm
9923 https://commons.wikimedia.org/wiki/User:A3_nm
9924 http://commons.wikimedia.org/wiki/User:A3_nm
9925 https://commons.wikimedia.org/wiki/User:A3_nm
9926 https://en.wikipedia.org/wiki/User:Markoid
9927 https://en.wikipedia.org/wiki/
9928 https://en.wikipedia.org/wiki/User:Markoid
9929 https://en.wikipedia.org/wiki/
9930 https://en.wikipedia.org/wiki/User:Markoid
9931 https://en.wikipedia.org/wiki/
9932 https://en.wikipedia.org/wiki/User:Markoid
9933 https://en.wikipedia.org/wiki/
9934 https://en.wikipedia.org/wiki/User:Cburnett
9935 https://en.wikipedia.org/wiki/User:Cburnett
197 en:User:Cburnett9936
198 en:User:Cburnett9937
199 en:User:Cburnett9938
200 David Eppstein9939 , David Eppstein9940
201 The original uploader was Bender2k149941 at English
Wikipedia9942., The original uploader was Bender2k149943 at
English Wikipedia9944.
202 Limaner9945 , Limaner9946
203
• Network_flow_residual.png9947 : Maksim9948
• derivative work: DustyComputer9949 (talk9950 )
,
• Network_flow_residual.png9951 : Maksim9952
• derivative work: DustyComputer9953 (talk9954 )
204 User:Dcoetzee9955 , User:Dcoetzee9956
205 Martin Grandjean9957 , Martin Grandjean9958
206 Chris Davis9959 at en.wikipedia9960
207 en:User:Cburnett9961
208 en:User:Cburnett9962
209 en:User:Cburnett9963
210 en:User:Cburnett9964
9936 https://en.wikipedia.org/wiki/User:Cburnett
9937 https://en.wikipedia.org/wiki/User:Cburnett
9938 https://en.wikipedia.org/wiki/User:Cburnett
9939 http://commons.wikimedia.org/wiki/User:David_Eppstein
9940 https://commons.wikimedia.org/wiki/User:David_Eppstein
9941 https://en.wikipedia.org/wiki/User:Bender2k14
9942 https://en.wikipedia.org/wiki/
9943 https://en.wikipedia.org/wiki/User:Bender2k14
9944 https://en.wikipedia.org/wiki/
9945 http://commons.wikimedia.org/w/index.php?title=User:Limaner&action=edit&redlink=1
9946 https://commons.wikimedia.org/w/index.php?title=User:Limaner&action=edit&redlink=1
9947 http://commons.wikimedia.org/wiki/File:Network_flow_residual.png
9948 http://commons.wikimedia.org/wiki/User:Maksim
9949 http://commons.wikimedia.org/wiki/User:DustyComputer
9950 http://commons.wikimedia.org/wiki/User_talk:DustyComputer
9951 https://commons.wikimedia.org/wiki/File:Network_flow_residual.png
9952 https://commons.wikimedia.org/wiki/User:Maksim
9953 https://commons.wikimedia.org/wiki/User:DustyComputer
9954 https://commons.wikimedia.org/wiki/User_talk:DustyComputer
9955 http://commons.wikimedia.org/wiki/User:Dcoetzee
9956 https://commons.wikimedia.org/wiki/User:Dcoetzee
9957 http://commons.wikimedia.org/wiki/User:SlvrKy
9958 https://commons.wikimedia.org/wiki/User:SlvrKy
9959 https://en.wikipedia.org/wiki/User:Mr3641
9960 https://en.wikipedia.org
9961 https://en.wikipedia.org/wiki/User:Cburnett
9962 https://en.wikipedia.org/wiki/User:Cburnett
9963 https://en.wikipedia.org/wiki/User:Cburnett
9964 https://en.wikipedia.org/wiki/User:Cburnett
211 Svick9965 , Svick9966
212 SVG version was created by User:Grunt9967 and cleaned up
by 32479968 , based on the earlier PNG version9969 , created
by Reidab9970 ., Anomie, Callanecc, CambridgeBayWeather,
Jo-Jo Eumerus, RHaworth
213 svg by Booyabazooka9971
original png by Wapcaplet9972 , svg by Booyabazooka9973
original png by Wapcaplet9974
214 BotMultichill, Helpful Pixie Bot, Tcshasaposse
215 BotMultichill, Helpful Pixie Bot, Tcshasaposse
216 BotMultichill, Helpful Pixie Bot, Tcshasaposse
217 BotMultichill, Helpful Pixie Bot, Tcshasaposse
218 BotMultichill, Helpful Pixie Bot, Tcshasaposse
219 BotMultichill, Helpful Pixie Bot, Tcshasaposse
220 BotMultichill, Helpful Pixie Bot, Tcshasaposse
221 BotMultichill, Helpful Pixie Bot, Tcshasaposse
222 BotMultichill, Helpful Pixie Bot, Tcshasaposse
223 BotMultichill, Helpful Pixie Bot, Tcshasaposse
224 BotMultichill, Helpful Pixie Bot, Tcshasaposse
225 BotMultichill, Helpful Pixie Bot, Tcshasaposse
226 BotMultichill, Helpful Pixie Bot, Tcshasaposse
227 Allforrous, BotMultichill, Emijrpbot, Hazard-Bot, Jarekt-
Bot, Jochen Burghardt, MGA73bot2, Tamorlan, Watch-
duck, robot
228 BotMultichill, Emijrpbot, Grafite commonswiki, Hazard-
Bot, JarektBot, MGA73bot2, Tamorlan, Watchduck,
robot
9965 http://commons.wikimedia.org/wiki/User:Svick
9966 https://commons.wikimedia.org/wiki/User:Svick
9967 https://commons.wikimedia.org/w/index.php?title=User:Grunt&action=edit&redlink=1
9968 https://commons.wikimedia.org/wiki/User:3247
9969 https://commons.wikimedia.org/wiki/File:Commons-logo.png
9970 https://meta.wikimedia.org/wiki/User:Reidab
9971 http://commons.wikimedia.org/wiki/User:Booyabazooka
9972 https://en.wikipedia.org/wiki/User:Wapcaplet
9973 https://commons.wikimedia.org/wiki/User:Booyabazooka
9974 https://en.wikipedia.org/wiki/User:Wapcaplet
229 User:Dcoetzee9975 *derivative work Dcoetzee9976
• File:Complete graph K3.svg9977 : Dbenbenn9978
• File:Complete bipartite graph K3,1.svg9979 : Dben-
benn9980
, User:Dcoetzee9981 *derivative work Dcoetzee9982
• File:Complete graph K3.svg9983 : Dbenbenn9984
• File:Complete bipartite graph K3,1.svg9985 : Dben-
benn9986
230 svg by Booyabazooka9987
original png by Wapcaplet9988 , svg by Booyabazooka9989
original png by Wapcaplet9990
231 originally uploaded by en:User:Miko3k9991 and transferred
to here by User:Μυρμηγκάκι9992 , originally uploaded
by en:User:Miko3k9993 and transferred to here by
User:Μυρμηγκάκι9994
232 originally uploaded by en:User:Miko3k9995 and transferred
to here by User:Μυρμηγκάκι9996 , originally uploaded
by en:User:Miko3k9997 and transferred to here by
User:Μυρμηγκάκι9998
9975 http://commons.wikimedia.org/wiki/User:Dcoetzee
9976 http://commons.wikimedia.org/wiki/User:Dcoetzee
9977 http://commons.wikimedia.org/wiki/File:Complete_graph_K3.svg
9978 http://commons.wikimedia.org/wiki/User:Dbenbenn
9979 http://commons.wikimedia.org/wiki/File:Complete_bipartite_graph_K3,1.svg
9980 http://commons.wikimedia.org/wiki/User:Dbenbenn
9981 https://commons.wikimedia.org/wiki/User:Dcoetzee
9982 https://commons.wikimedia.org/wiki/User:Dcoetzee
9983 https://commons.wikimedia.org/wiki/File:Complete_graph_K3.svg
9984 https://commons.wikimedia.org/wiki/User:Dbenbenn
9985 https://commons.wikimedia.org/wiki/File:Complete_bipartite_graph_K3,1.svg
9986 https://commons.wikimedia.org/wiki/User:Dbenbenn
9987 http://commons.wikimedia.org/wiki/User:Booyabazooka
9988 https://en.wikipedia.org/wiki/User:Wapcaplet
9989 https://commons.wikimedia.org/wiki/User:Booyabazooka
9990 https://en.wikipedia.org/wiki/User:Wapcaplet
9991 https://en.wikipedia.org/wiki/User:Miko3k
9992 http://commons.wikimedia.org/w/index.php?title=User:%CE%9C%CF%85%CF%81%CE%BC%CE%B7%CE%B3%CE%BA%CE%AC%CE%BA%CE%B9&action=edit&redlink=1
9993 https://en.wikipedia.org/wiki/User:Miko3k
9994 https://commons.wikimedia.org/w/index.php?title=User:%CE%9C%CF%85%CF%81%CE%BC%CE%B7%CE%B3%CE%BA%CE%AC%CE%BA%CE%B9&action=edit&redlink=1
9995 https://en.wikipedia.org/wiki/User:Miko3k
9996 http://commons.wikimedia.org/w/index.php?title=User:%CE%9C%CF%85%CF%81%CE%BC%CE%B7%CE%B3%CE%BA%CE%AC%CE%BA%CE%B9&action=edit&redlink=1
9997 https://en.wikipedia.org/wiki/User:Miko3k
9998 https://commons.wikimedia.org/w/index.php?title=User:%CE%9C%CF%85%CF%81%CE%BC%CE%B7%CE%B3%CE%BA%CE%AC%CE%BA%CE%B9&action=edit&redlink=1


233 originally uploaded by en:User:Miko3k9999 and transferred here by User:Μυρμηγκάκι10000 , originally uploaded by en:User:Miko3k10001 and transferred here by User:Μυρμηγκάκι10002
234 en:User:Saranphat.cha10003 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
235 Sándor P. Fekete, Sebastian Morr, and Sebastian Stiller
236 derivative work by Thumperward10004 / * File:Wiki letter w.svg10005 : Jarkko Piiroinen10006 , derivative work by Thumperward10007 / * File:Wiki letter w.svg10008 : Jarkko Piiroinen10009
237 derivative work by Thumperward10010 / * File:Wiki letter w.svg10011 : Jarkko Piiroinen10012 , derivative work by Thumperward10013 / * File:Wiki letter w.svg10014 : Jarkko Piiroinen10015
238 Sytelus10016 , Sytelus10017
239 en:User:Saranphat.cha10018 , Anomie, J.delanoy, Jo-Jo Eumerus, Kimchi.sg, Luk, MSGJ, PeterSymonds, Salvidrim!, Topbanana
240 Miles10019 , Miles10020
241 Rodion.Efremov10021 , Rodion.Efremov10022
242 David Eppstein10023 , David Eppstein10024
243 Ccalmen10025 , Ccalmen10026

9999 https://en.wikipedia.org/wiki/User:Miko3k
10000 http://commons.wikimedia.org/w/index.php?title=User:%CE%9C%CF%85%CF%81%CE%BC%CE%B7%CE%B3%CE%BA%CE%AC%CE%BA%CE%B9&action=edit&redlink=1
10001https://en.wikipedia.org/wiki/User:Miko3k
10002 https://commons.wikimedia.org/w/index.php?title=User:%CE%9C%CF%85%CF%81%CE%BC%CE%B7%CE%B3%CE%BA%CE%AC%CE%BA%CE%B9&action=edit&redlink=1
10003https://en.wikipedia.org/wiki/User:Saranphat.cha
10004http://commons.wikimedia.org/wiki/User:Thumperward
10005http://commons.wikimedia.org/wiki/File:Wiki_letter_w.svg
10006http://commons.wikimedia.org/wiki/User:Jarkko_Piiroinen
10007https://commons.wikimedia.org/wiki/User:Thumperward
10008https://commons.wikimedia.org/wiki/File:Wiki_letter_w.svg
10009https://commons.wikimedia.org/wiki/User:Jarkko_Piiroinen
10010http://commons.wikimedia.org/wiki/User:Thumperward
10011http://commons.wikimedia.org/wiki/File:Wiki_letter_w.svg
10012http://commons.wikimedia.org/wiki/User:Jarkko_Piiroinen
10013https://commons.wikimedia.org/wiki/User:Thumperward
10014https://commons.wikimedia.org/wiki/File:Wiki_letter_w.svg
10015https://commons.wikimedia.org/wiki/User:Jarkko_Piiroinen
10016http://commons.wikimedia.org/w/index.php?title=User:Sytelus&action=edit&redlink=1
10017https://commons.wikimedia.org/w/index.php?title=User:Sytelus&action=edit&redlink=1
10018https://en.wikipedia.org/wiki/User:Saranphat.cha
10019http://commons.wikimedia.org/w/index.php?title=User:Miles&action=edit&redlink=1
10020https://commons.wikimedia.org/w/index.php?title=User:Miles&action=edit&redlink=1
10021http://commons.wikimedia.org/wiki/User:Rodion.Efremov
10022https://commons.wikimedia.org/wiki/User:Rodion.Efremov
10023http://commons.wikimedia.org/wiki/User:David_Eppstein
10024https://commons.wikimedia.org/wiki/User:David_Eppstein
10025http://commons.wikimedia.org/w/index.php?title=User:Ccalmen&action=edit&redlink=1
10026https://commons.wikimedia.org/w/index.php?title=User:Ccalmen&action=edit&redlink=1


244 Kilom69110027 , Kilom69110028


245 Thore Husfeldt10029 , Thore Husfeldt10030
246 Thore Husfeldt10031 , Thore Husfeldt10032
247 Thore Husfeldt10033 , Thore Husfeldt10034
248 Thore Husfeldt10035 , Thore Husfeldt10036
249 Ilmari Karonen10037 , Ilmari Karonen10038
250 Hsilgneymerej10039 , Hsilgneymerej10040
251 R. A. Nonenmache
