
Software and Mind — related articles

The mechanistic myth and the software frauds

by Andrei Sorin

Using the mechanistic myth as warrant, the universities foster invalid software theories; and the
software companies create useless systems based on these theories. Along with industry experts
and professional associations, these institutions are promoting fraudulent software concepts in
order to prevent independence and expertise in software-related activities. While in reality they
have little to offer society, they have succeeded in convincing us that the best way to create and
use software is by depending on their theories and development systems. This article is a brief
discussion of the method used to induce this dependence. The full discussion can be found in my
book, Software and Mind (for example, in chapter 6, “Software as Weapon,” in chapter 7,
“Software Engineering,” and in the section “The Software Theories” in chapter 3). (The book, as
well as individual chapters and sections, can be downloaded free at www.softwareandmind.com.)

The mechanistic myth

The mechanistic myth is the belief that every phenomenon can be described as a hierarchical
structure of elements; that is, as elements within elements, on lower and lower levels. This is
the same as saying that every phenomenon can be explained. All we have to do is discover a
hierarchical structure that reduces it to simpler and simpler phenomena, one level at a time,
until we reach some trivial ones.

For example, a problem in academic research can be solved by breaking it down into simpler
problems, then breaking those down into even simpler ones, and so on, until we reach problems
simple enough to solve directly. A complicated machine can be built by designing it as levels of
subassemblies, as parts within parts, down to some simple parts that can be made directly. And
a software application can be developed by breaking it down into separate modules, each
module into separate constructs, and each construct into separate statements, which can then
be programmed directly. The term “mechanism” derives from the fact that in the seventeenth
century, when this ideology was born, mechanics was the only science that offered exact
explanations; so it was believed that every phenomenon could be explained by reducing it,
ultimately, to simple mechanical phenomena.
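The hierarchical decomposition described above can be sketched in code. The tiny billing example below, and all its names, are invented for the illustration; it only shows how statements combine into constructs, constructs into modules, and modules into an application:

```python
# A hypothetical sketch of hierarchical decomposition: an "application"
# built level by level from modules, constructs, and plain statements.

def read_total(prices):          # construct: an iterative construct
    total = 0
    for p in prices:             # statements: the lowest-level elements
        total += p
    return total

def apply_discount(total):       # construct: a conditional construct
    if total > 100:
        return total * 0.9
    return total

def billing_module(prices):      # module: built from the constructs below it
    return apply_discount(read_total(prices))

def application(prices):         # top element: built from modules
    return billing_module(prices)

print(application([60, 70]))     # → 117.0
```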

This ideology promises two benefits. The first one is its capacity to replace any challenge with a
series of relatively simple steps: instead of trying to solve a difficult problem, what we do now is
reduce it to simpler and simpler ones. Whether the problem entails a transition from high to low
levels or from low to high (that is, from the whole to its parts or from parts to the whole), it is
easier to solve it by dealing with one level at a time. Also, we can sever the links between levels
and separate a structure into smaller ones that remain, individually, correct hierarchies. Since
each element at a given level depends only on the lower-level elements below it, it is in effect
both the terminal (i.e., top) element of the partial structure below and one of the starting
elements for the partial structure above. We can simplify projects, therefore, by reducing the
number of levels in each one and combining the resulting structures hierarchically: what is the
terminal element in one project becomes one of the starting elements in another.

The second benefit is the potential to explain, precisely and completely, any phenomenon. Each
element in the structure depends only on the lower-level elements that make it up, so the
phenomenon is completely defined by the starting elements and the totality of hierarchical
relations. This means that we can represent any phenomenon with mathematical precision,
because mathematical systems are themselves based on hierarchical structures. (Each theorem
in a given system is expressed as a combination of simpler theorems, which are then reduced to
even simpler ones, and so on, until we reach the system’s premises, axioms, and basic entities.)
Thus, if we depict a phenomenon with a hierarchical structure, we will also have a corresponding
mathematical model, and will find answers to our problems by means of equations, graphs, and
the like. The power of mathematics can be enjoyed in any field, therefore, not just in the exact sciences.

The mechanistic ideology is a myth, because most phenomena cannot be represented with an
isolated hierarchical structure. This is easy to understand if we recall how these structures are
formed. A structure’s elements possess certain attributes, and the hierarchical relations between
them are based on these attributes. Elements that share one particular attribute (even if the
attribute has a different value for each one) are related and form an element at the next higher
level. Several such elements are formed from other groups of elements, related through other
attributes. Then the new elements are similarly related to form the next higher-level element,
and so on. The elements, therefore, possess their attributes hierarchically, as one within
another, and there are no other relations. For example, if one of the attributes were repeated
elsewhere in the structure, that would generate additional, non-hierarchical relations.

The mechanistic delusion should now be obvious: mechanism is valid only for phenomena that
fulfil the conditions just described; it cannot be simply assumed to work for any phenomenon.
Specifically, if a phenomenon is made up of elements whose attributes give rise to the relations
needed to create a single hierarchical structure, mechanism works; but if the elements possess
attributes that relate them also in ways that are not part of one structure, mechanism fails.

The elements that make up real-world phenomena possess countless attributes, and most of
them relate the elements in non-hierarchical ways. Thus, if we were to take them all into
account, no mechanistic representation would be found for any phenomenon. Fortunately, in
practice we can pick just the attributes that are important when the phenomenon is observed in
a particular context. The resulting structure is then an approximation of the actual phenomenon.
But if the approximation is close enough to be useful despite the missing attributes, we can say
that a mechanistic representation of the phenomenon has been found. On the other hand, if we
ignore some attributes that are in fact important, the approximation is not close enough to be
useful, and mechanism fails. It is this simple difference that the mechanists refuse to accept
when attempting to represent every phenomenon mechanistically.

A phenomenon that cannot be represented mechanistically needs several hierarchical structures if we want to include all important attributes. And these structures cannot be studied separately,
because it is their totality and their interactions that depict the phenomenon. This is the only
way to relate the same elements based on several attributes at the same time, and thus attain a
useful approximation of the phenomenon. But then we lose the benefits of mechanism, which
can only be attained with a single, isolated structure. In a system of interacting structures, each
element no longer depends just on the lower-level elements that make it up, as in an isolated
structure, but also on other elements. The phenomenon, therefore, can no longer be described
with mathematical precision; nor can we sever the links between levels and separate it into
several hierarchies. The only way to study such a phenomenon is as a whole, by processing all
structures at once and starting from their low levels.

The isolated structures and the phenomena they depict are called simple, or mechanistic; the systems of interacting structures and the phenomena they depict are called complex, or non-mechanistic.

An example of a mechanistic phenomenon is the manufacturing process. The parts that make up
an appliance possess many attributes: dimensions, weight, cost, colour, supplier, delivery date,
life expectancy, etc. But we purposely design the appliance in such a way that only those
attributes that determine the position and function of each part are important in the assembly
process, while attributes like cost and supplier can be ignored. The assembly operations can
then be correctly represented with a hierarchical structure. The delivery operations, or the
accounting operations, or the maintenance operations can also be represented with hierarchical
structures; but these are different structures, based on other attributes (supplier, delivery date,
cost, etc.). The parts with all their attributes are a non-mechanistic phenomenon and involve
several structures. But in this case it is possible to attain useful mechanistic approximations by
isolating the structures.

An example of a non-mechanistic phenomenon that cannot be usefully approximated with isolated structures is language. The things represented by words possess many attributes, and
are therefore related through many structures. Consequently, the words themselves are related
through these structures. To comprehend a sentence, our mind must process together all the
structures formed by its words, for this is the only way to discover the meaning of the sentence.
If we tried to isolate the structures and interpret them separately, we would only discover some
small and disconnected pieces of the message conveyed by the sentence. In the case of
language there is no useful mechanistic approximation.

In conclusion, mechanism works in domains like the exact sciences, engineering, manufacturing,
and construction because their phenomena can be accurately represented with isolated
structures; and it fails in domains like psychology, sociology, linguistics, economics, and
software because their phenomena consist of structures whose interactions cannot be ignored.

The traditional mechanistic frauds

To understand the software frauds we must start by examining the traditional ones, for their
fallacies are the same. The academics take the mechanistic myth as unquestionable truth. The
belief that every phenomenon can be studied by reducing it to isolated structures is seen as the
only valid method of science, and is the foundation of all academic research. Doubting this belief
is tantamount to doubting science. But if the phenomena involving minds and societies can only
be represented with systems of interacting structures, the mechanistic study of these
phenomena is a fraud. In three hundred years of mechanistic philosophy, not one mechanistic
model was successful in the human sciences.

The academics like mechanism because this myth affords them a privileged position in society
regardless of whether their activities are useful or not. As long as we accept mechanism
unquestioningly, all they have to do to gain our respect is practise mechanism. It is irrelevant
whether their theories work or not, or whether mechanism is valid at all in their field.

A good example of this corruption is the linguistic theory known as Universal Grammar.
Introduced in the 1950s, and starting with the premise that the grammatical structure is the
only important one, this theory attempts to represent mathematically all sentences that are
grammatically correct (and to recognize mathematically those that are not) in a natural
language like English. This absurd idea, derived from nothing more substantial than the
observation of a few linguistic patterns, was enough to legitimize a vast research program,
involving thousands of academics. Thus, for over half a century, the mechanistic dogma has
been the only justification for the pursuit of a fantasy in the world’s most prestigious universities.

Universal Grammar started as a small set of simple principles. But when this naive attempt failed
to explain more than a few English sentences, an endless series of new versions and sub-theories was contrived in order to isolate and study separately additional structures from the
complex phenomenon of language. This is a futile quest, of course, because when separating the
structures the mechanists lose the most important aspect of the phenomenon – their
interactions. Thus, simply by claiming that mechanism is a universal scientific principle,
academic charlatans can spend their entire career doing nothing useful, while being trusted and
respected by everyone. (The full discussion of this fraud can be found in the subsection
“Universal Grammar” in chapter 3 of Software and Mind.)

This theory also exemplifies how the mechanists delude themselves and the public about the
value of their work. They begin by announcing a theory that claims to explain with mathematical
precision a certain complex phenomenon. The theory is just a speculation at this point, although
it may work in a few simple situations. The mechanists merely noticed a pattern in the
phenomenon, and they even discovered perhaps a mathematical representation. But this is a
trivial achievement: all they did was extract one of the structures that make up the
phenomenon; and isolated structures, of course, can be represented mathematically. They did
not prove that the other structures are unimportant and can be ignored.

Then, since the theory is generally useless, the mechanists start an endless process of
“improvements”: they modify the theory to cover up its failures, again and again, while
describing this activity as research. In reality, what they do is acknowledge the importance of
the other structures, which they originally ignored. Also, they introduce artificial and increasingly
complicated means to restore the interactions between these structures and the one they
isolated. But this work is futile, because the interactions in a complex phenomenon cannot be
described with precision. The theory appears to improve, but it never attains the promised
benefits – a useful mechanistic representation of the phenomenon. So it is eventually
abandoned, usually when a new theory becomes fashionable in that domain; the whole process
is then repeated with the new theory. These theories, thus, are fraudulent from the beginning,
because we could always tell that the phenomenon consists of interacting structures and a
mechanistic theory cannot work.

An important aspect of the mechanistic myth is the process of peer review – the academic
system of controls believed by everyone to ensure rigour in research work. But peer review only
verifies that the work adheres to the mechanistic principles; it does not verify whether these
principles are valid in the field concerned. So peer review is in reality part of the fraud: since it is
grounded on the same premise as the research itself – the belief that mechanism is valid in all
fields – it is meaningless as a control. All it can do is confirm that the research is correct within the
mechanistic ideology. It is a self-serving process.

Another fact worth mentioning is that these theories are easily shown to be pseudoscientific
when analyzed with Karl Popper’s well-known principles of demarcation between science and
pseudoscience (see the related article “A summary of Popper’s principles of demarcation”). Thus,
when modifying the theory again and again, as previously described, the mechanists try to save
it from refutation by expanding it: they suppress the endless falsifications by incorporating them
into the theory in the guise of new features. And this violates Popper’s principles. The theory
starts with bold and exact claims, but when expanded in this fashion its exactness is gradually
reduced and it eventually becomes worthless.

The software mechanistic frauds

In the past, it was only in universities that individuals could pursue mechanistic fantasies that
looked like serious activities. Through software, however, the pursuit of mechanistic fantasies
has become possible everywhere. Here we are discussing the world of programming, but similar
software-related fantasies are spreading now in business, and even in our personal affairs.

This started around 1970, when the academics decided that the phenomena associated with
programming must be reduced to a mechanistic representation. Rather than depending on such
uncertain qualities as the knowledge and skills of programmers, said the academics, the
mechanistic ideology will permit even inexperienced persons to write software. Then, lacking
real-world programming experience, they confidently asserted that the development of software
applications is akin to the activities performed in a factory, renamed it “software engineering,”
and insisted that programmers restrict themselves to small and isolated tasks, just like factory
workers. Finally, based on these mechanistic ideas, they proceeded to invent a series of fantastic
theories, each one claiming to have revolutionized programming by turning it into an exact,
efficient, error-free activity similar to modern manufacturing.

Unlike the mechanistic theories in the human sciences, however, which had little bearing on our
activities outside academia, the mechanistic programming theories were embraced with
enthusiasm by individuals, businesses, and governments. Unaware of the long history of
mechanistic delusions in universities, millions of practitioners working in the real world believed
the claims made by the academics and actually tried to develop software applications using
these theories. This naivety was encouraged by respected computer associations and institutes,
and by renowned experts and gurus, who praised and taught the theories and the related
methodologies. Then the software companies started to create various development systems
that incorporated these concepts, and soon any programming done using just skills and
experience, rather than depending on the latest systems, was condemned as unprofessional.

Thus, software mechanistic concepts that are in fact as worthless as the traditional mechanistic
ones are now dominating the world of programming, preventing expertise and making software
development far more complicated and expensive than it ought to be. As a result, instead of a
true programming profession, a huge software bureaucracy has evolved. Just like the academic
bureaucrats, the software bureaucrats are trusted and respected by everyone simply because
they practise mechanism. It is irrelevant how inefficient their work is, and whether the resulting
applications are adequate or not.

Like the traditional mechanistic theories, the software theories and systems keep failing and are
continually modified in an attempt to make them useful. But this only makes them more
complicated. In the end, the only way to make them useful is by reinstating the traditional
concepts. Every mechanistic principle must be annulled, so the theories and systems lose all the
benefits claimed for them; but they continue to be promoted with the same claims. To
appreciate this fraudulent evolution, let us review first the nature of software applications, and
why it is impossible to represent them mechanistically.

The elements that make up an application (statements, blocks of statements, modules) possess
certain attributes. Any process that can affect more than one element gives rise to an attribute,
because it relates the elements logically: memory variables, database fields, file operations,
subroutine calls, business practices, and so on. And, as we saw earlier, the relations between
elements generate hierarchical structures. If we pick just one of these attributes, or just a few,
we may be able to depict the relations with one structure. But if we take all attributes into
account (which we must, because they are all important), we need many structures to depict the
relations. These structures exist at the same time and interact, because they share their
elements; they cannot be isolated or created separately.

Software applications, then, are complex structures. The reason is that they must reflect
accurately our personal, social, and business affairs, which themselves consist of interacting
structures. It is absurd to search for ways to represent applications with isolated structures, as
the software mechanists do, seeing that isolated structures cannot possibly provide accurate
approximations of our affairs. Thus, an application developed using strictly mechanistic principles
is necessarily useless. Language too consists of interacting structures, as we saw, and for the
same reason: it must reflect accurately our affairs. Both natural languages and programming
languages have the qualities needed to generate interacting structures; but both require also a
human mind, because only minds can process these structures. When separating the structures,
the mechanists forsake those qualities; so it is not surprising that their theories fail.

Everyone agrees that it is possible to create applications using just our minds and the traditional
programming languages and methods. We start with a combination of such elements as the
statements of a typical language, lower-level elements like the operations of an assembly
language, and higher-level elements like existing subroutines. And we combine these elements
to form larger and larger ones, creating higher and higher levels: constructs, blocks of
statements, modules, and finally the complete application. This is similar to the way we use
words to create sentences, ideas, and complete stories.

In both cases, we follow a concept we all understand intuitively: combining simple things into
more and more complex things in the form of a hierarchical structure. But unlike such structures
as the parts and subassemblies of an appliance, in the case of software and language we must
create several hierarchical structures from the same elements at the same time. For example,
even a small software element may include several memory variables and database fields, a file
operation, an accounting method, and some subroutine calls; and through each one of these
attributes it belongs to a structure that relates it logically to other elements that possess that
attribute. Thus, while creating that element we must also be aware of those structures and the
other elements. We do this by using our minds and the skills we acquired through practice.
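As a rough sketch of this idea, consider the following fragment; the inventory scenario and every name in it are invented for the illustration. Even these few lines participate in several structures at once, and cannot be designed in isolation:

```python
# A contrived sketch: one small element that belongs to several
# structures at the same time, through its attributes.

inventory = {"widget": 5}        # shared "database field": relates every
                                 # element that reads or updates it
journal = []                     # shared memory variable

def log(msg):                    # shared subroutine: relates every caller
    journal.append(msg)

def ship_order(qty):             # the element under discussion
    inventory["widget"] -= qty   # attribute 1: the inventory structure
    log(f"shipped {qty}")        # attribute 2: the logging structure
    return inventory["widget"]

def restock(qty):                # another element sharing those structures
    inventory["widget"] += qty
    log(f"restocked {qty}")
    return inventory["widget"]

# Changing how ship_order updates inventory, or what it logs, affects
# restock and every other element that shares those attributes.
```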

While not disputing the fact that we can create applications using nothing but the traditional
concepts, the software mechanists claim that it is possible to simplify this task, speed it up, and,
generally, turn it into an exact and predictable activity. Like all mechanists, they invoke the
benefits of the hierarchical structure, but without proving first that the phenomena associated
with software applications can be reduced to isolated structures. Depending on the theory, they
claim one of two things: either that the whole application can be treated as one hierarchical
structure, or that its constituent structures can be extracted and studied separately.

The next step is to impress naive and inexperienced practitioners by demonstrating the benefits
of the hierarchical structure with trivial examples, and by hailing this concept as a revolution, a
new paradigm, etc. In other words, they rediscover the hierarchical structure and its benefits
with each new theory. Then, even though no one can create real-world applications using the
theory, its mere promises generate enough enthusiasm for the practitioners to adopt it. The
theory is usually implemented in practice through various development systems, and the
software companies quickly create them for the hordes of practitioners who are now convinced
that it is these systems that they need, not greater programming skills.

These theories are a fraud from the beginning, because the claimed mechanistic benefits are
relevant only for isolated structures, not for the system of interacting structures that is the
application. Even when the theory extracts one of the structures and appears to work, the
benefits are lost when we combine that structure with the others to create the actual application.
One benefit, we saw, is the ability to use starting elements that already include other elements.
But this quality, even when valid for an isolated structure, is actually a handicap for the final
application, because fewer features can be implemented. Since even a small element is shared
by several structures through its attributes, if it is replaced with a higher-level element we can
no longer control those attributes and the resulting relations between elements. Thus, if we want
the freedom to implement all conceivable requirements, we must start with the low-level
elements of the traditional languages.

The claimed mathematical precision is also irrelevant. If the theory applies to the complex
structure that is the whole application, the claim is clearly invalid, because only isolated
structures can be represented mathematically. But even if it applies to isolated structures and
the mathematical benefit is real, we cannot develop these structures separately and then
combine them somehow, because they share their elements. Ultimately, even if using a
mechanistic theory, we must create the application’s elements by considering several structures
at the same time, the way we always did, and mathematics cannot help us.

As the practitioners struggle with each new mechanistic theory and with the methodologies and
development systems derived from it, they must reconcile the crippling deficiencies they note
with the intimidating propaganda conducted by the software charlatans. Their difficulties, the
practitioners are told for each theory, stem from clinging to old-fashioned programming habits.
The new concepts are so advanced that a whole new way of thinking is required. So they must
forget all they had learned before, have faith in these concepts, and soon they will get to enjoy
the promised benefits.

In reality, the difficulties are due to the need to solve complex real-world problems under
mechanistic restrictions. And the bad programming habits the practitioners are accused of are
actually the non-mechanistic methods they must resort to in order to bypass these restrictions.
In the end, faced with an endless series of situations where their theory is found to be
inadequate, the academics are compelled to modify it. Change after change is introduced over
the years to make the theory practical, and each one is described as a powerful new feature.
The experts and the computer associations praise the changes, the software companies
incorporate them into their systems, and the practitioners eagerly adopt the new versions.

When analyzed, though, the changes are not new features at all, but a return to the traditional
concepts. They are given fancy names and are described with pretentious terms, but these are
in fact ordinary features that were always available, through traditional programming languages.
So the changes are in reality a reversal of the mechanistic concepts – the concepts on the
strength of which the theory was originally promoted. The theory, and the systems derived from
it, are now silly and pointless. But the academics continue to extol their benefits, even as they
are canceling these benefits by annulling the mechanistic concepts. And the practitioners
continue to depend on them.

What the software elites have achieved through this stratagem is to dominate the world of
programming and software use. This domination is unjustified, because what they can give us –
software tools based on mechanistic principles – has no value. The only thing we need in order
to create and use software is personal skills and experience, and a few relatively simple tools.
But the elites manage to convince us that we can be more productive if we agree to depend
instead on some complicated theories, methodologies, and development systems.

Then they modify these expedients to make them useful, by replacing their mechanistic features
with non-mechanistic ones. The non-mechanistic features, while indeed useful, were always
available to us. But because we are now accessing them through some highly publicized
expedients, we believe the expedients are essential and we must depend on them.

Also, to enhance this dependence, the elites refuse to keep the traditional languages up to date:
features made possible by general advances in hardware or software, and which have nothing to
do with the mechanistic theories, are implemented only in the latest systems. This stratagem
makes the latest systems appear superior, when in fact the same features could be used also
with the traditional languages.

As programmers or as software users, we must practise our profession if we want to improve our skills. When we agree to depend on expedients instead, all we learn is how to use the
expedients. Our skills remain undeveloped, so we believe that the only way to advance is by
depending on newer expedients, in a process that feeds on itself. This ensures a continued
domination by the elites in our software-related activities.

This section discusses some of the best-known software theories and systems (fourth-generation
languages, structured programming, object-oriented programming, and the relational database
model) and shows that they are worthless and the elites lied about their benefits.

Fourth-generation languages

The so-called fourth-generation languages (4GL’s) were promoted as the logical next step that would supersede the traditional, third-generation languages (3GL’s), especially in business
applications. 3GL’s (COBOL, C, etc.) had successfully replaced the second-generation (assembly)
languages by providing higher-level starting elements. The higher levels simplify programming,
while still allowing us to implement all conceivable requirements. (This is true for most
applications; assembly languages remain important in those situations where the higher levels
are inadequate.) Now we were told that even higher starting levels were possible, so the same success could be repeated with 4GL’s: programming and applications would become even simpler.

This promise, however, was a fraud. Its purpose was to get businesses to abandon the ordinary
languages, and to depend instead on proprietary development systems controlled by the
software elites. In reality, these systems provide no benefits. The 4GL idea is a fraud because
the fourth generation is not, relative to the third, what the third is relative to the second.
Practically all features (the use of memory variables, conditional and iterative constructs,
subroutine calls, etc.) became simpler and entirely different in the transition from second to
third generation, but remained unchanged in the transition from third to fourth. The elites
started by promising a simpler, higher level, and ended by reverting to the previous level and to
the same programming challenges we faced before.

Higher starting levels are practical, perhaps, for specialized applications, in narrow domains; but
for typical business applications we cannot start from levels higher than those found in 3GL’s.
There are indeed a few higher-level features in 4GL’s (built-in operations for simple reports, user
interface, etc.), but these are about the same as those available in 3GL’s by way of subroutines.
To be a true higher level, a language must provide more than just a few higher-level functions;
it must provide a higher level for variables and arrays, for flow-control constructs, for
subroutines and their parameters, and so on. Since no such features can exist for general-purpose applications, the only way to make 4GL’s practical was to endow them with 3GL
features. Claims invoking the idea of a fourth generation are merely a way to get ignorant
practitioners to depend on complicated and unnecessary development systems.
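The point about subroutines can be illustrated with a sketch; the report format and all names here are invented. A “higher-level” report operation of the kind a 4GL advertised as built-in is, in a 3GL-style language, simply an ordinary subroutine written once and reused:

```python
# A hedged illustration: a "higher-level" report feature implemented
# as an ordinary subroutine -- no new language generation required.

def format_report(rows, columns):
    """Return the report as a list of lines: a header row followed
    by one line per record, columns separated by ' | '."""
    lines = [" | ".join(columns)]
    for row in rows:
        lines.append(" | ".join(str(row[c]) for c in columns))
    return lines

for line in format_report(
    [{"item": "widget", "qty": 5}, {"item": "gadget", "qty": 2}],
    ["item", "qty"],
):
    print(line)
```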

Structured programming

The theory known as structured programming claimed that the structure which represents the
application’s flow of execution is the only important one. It does not even mention other
structures. The flow of execution is determined by flow-control constructs, and each element
(statements, constructs, modules) is part of such a construct. Thus, said the academics, all we
have to do is make the application’s flow diagram a perfect hierarchical structure of flow-control
constructs. We will then be able to represent it mathematically, and prove that the application is
correct even before programming it. This is true because the ultimate flow of execution will
mirror the flow diagram.
To attain that hierarchical structure, we must restrict ourselves to three elementary flow-control
constructs – a sequential, a conditional, and an iterative one – which became known as the
standard constructs. All other flow-control constructs must be reduced to combinations of
standard ones. The goal of structured programming, then, is to represent applications with flow
diagrams consisting of nothing but standard flow-control constructs nested hierarchically, one
within another. Since these constructs can contain elements of any size, from individual
statements to entire modules, we can create perfect applications simply by building larger and
larger constructs, one level at a time, the way we assemble appliances in a factory.
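
The idea can be illustrated with a short sketch (Python is used here only for illustration; the function and its data are hypothetical). Every element of the fragment belongs to one of the three standard constructs, nested one within another:

```python
# A fragment built solely from the three standard constructs, nested
# hierarchically: sequence, conditional (if-else), and iteration (while).
def total_overtime_pay(hours_list, rate):
    total = 0.0                      # sequence: one statement after another
    i = 0
    while i < len(hours_list):       # iterative construct
        hours = hours_list[i]
        if hours > 40:               # conditional construct, nested in the loop
            total = total + (hours - 40) * rate * 1.5
        else:
            total = total + 0.0      # explicit else branch keeps the construct standard
        i = i + 1
    return total
```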

Structured programming is a very naive theory, and shows how little the academics understand
about the nature of software. Not only did they believe that the flow of execution can be
extracted from the system of structures that make up an application (and that the other
structures can be ignored), but they failed to see that the flow of execution is itself made up of
many structures. Thus, the idea that a neat flow diagram can mirror the complex flow of
execution is absurd.

Here is how this complexity arises. Each conditional and iterative construct includes a condition,
which is evaluated while the application is running. This yields a different value, and hence a
different path of execution, at different times. There are thousands of such constructs in a
serious application, and an infinity of combinations of condition values. Each combination gives
rise to a different flow-control structure, and the application’s flow of execution is the totality of
these structures. No mechanistic representation is possible for this complex structure.

And this is not all. The conditions involve such processes as memory variables, database fields,
file operations, and business practices; and each process gives rise to an attribute, which relates
some of the application’s elements through a structure. The conditions, therefore, link the flow-
control structures also to the other structures that make up the application. Finally, the
transformations needed to reduce the application to standard flow-control constructs end up
replacing flow-control structures with structures of other types (based on memory variables, for
instance). This results in even more structures that interact with the flow-control structures.
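
A minimal sketch of such a transformation (a hypothetical example in Python): the early exit, a non-standard construct, is replaced with a flag variable, so a flow-control structure is traded for a memory-variable structure:

```python
# Non-standard construct: a loop with an early exit.
def find_index_early_exit(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i                  # early exit -- not a standard construct
    return -1

# The "structured" transformation: the exit is replaced with a flag
# variable, so only standard constructs remain -- but the flow of
# execution now depends on a memory-variable structure instead.
def find_index_structured(items, target):
    found = False
    result = -1
    i = 0
    while i < len(items) and not found:
        if items[i] == target:
            found = True              # the flag replaces the early exit
            result = i
        i = i + 1
    return result
```

The two versions behave identically; the transformation merely shifts the complexity from the flow diagram into the relationships between variables.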

The transformations, it turned out, were so complicated that many software elements could not
be reduced effectively to standard constructs. Also, no one managed to represent
mathematically, as promised, even those elements that were reduced (because they involved
other structures apart from the flow of execution, obviously). In the end, the charlatans who
were promoting structured programming had no choice but to permit various non-standard flow-
control constructs; and these constructs were seen, from then on, as features of the theory. A
critical principle – the reduction to standard constructs – was thus annulled, so the promised
benefits were now lost even if we forget that they were never real. But instead of admitting that
structured programming had failed, the charlatans continued to promote it with the same
promises as before.

Structured programming, thus, was a fraud from the beginning. Since the flow of execution – to
say nothing of the whole application, with its infinity of other structures – cannot be reduced to a
simple hierarchical structure, the promised benefits were a fantasy. Nevertheless, practitioners
to this day are wasting their time and complicating their applications by obeying the theory, in
the hope of attaining those benefits. (The full discussion of this fraud can be found in the section
“Structured Programming” in chapter 7 of Software and Mind.)

Object-oriented programming

The theory known as object-oriented programming claimed that the notion of software reuse,
which was always appreciated but was poorly applied in practice, could be turned into a formal
body of principles and methods, supported by new languages, methodologies, and development
systems. The promise was to emulate in programming the modern manufacturing methods,
which rely on standard parts and prefabricated subassemblies. With these methods, we will be
as exact and efficient in creating software applications as we are in making appliances.

The secret behind this new technology, said the theorists, is the hierarchical concept: if we
implement our applications as strict hierarchical structures of software elements (now called
objects), we will be able to combine, easily and effectively, existing parts and new ones. One
day, developing a new application will entail little more than putting together ready-made parts.
The only thing we will have to program is the differences between our requirements and the
existing software.

This high level of reuse can be achieved, the theorists assured us, thanks to the hierarchical
property known as inheritance. Each element, in addition to possessing its own, unique
attributes, inherits the attributes of the element at the next higher level. And, since that element
inherits those from even higher levels, an element can benefit from the attributes of all the
elements above it in the hierarchy. (This is a result of the way hierarchies are created: an
element at a given level has only those attributes shared by the elements at the lower level. This
property, called abstraction, is simply the property of inheritance viewed in reverse.)

Thus, as we move from high to low levels, an element can have more and more attributes, even
though only a few are its own. Since the attributes of an element reflect the processes it is
capable of, we can attain any combination of processes by using both existing and new parts. If
we include an existing part at an appropriate level in the structure of elements, an element at a
lower level will be capable of that part’s processes in addition to those of higher-level elements
and its own. Another existing part can then be similarly included at a lower level, and so on. At
the lowest levels, we will attain all the processes required by the application while having to
program only those that did not already exist.
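
The claimed mechanism can be sketched with a small, hypothetical class hierarchy (Python stands in here for the object-oriented languages): the lowest-level element possesses its own attribute plus those inherited from every level above it:

```python
# Strict hierarchy: each class inherits the attributes (here, methods)
# of the class at the next higher level, and hence of all levels above it.
class Part:                              # top level: attributes shared by all below
    def identify(self):
        return "part"

class StoredPart(Part):                  # adds persistence, inherits identify()
    def save(self):
        return "saved"

class DisplayedStoredPart(StoredPart):   # adds display, inherits save() and identify()
    def show(self):
        return "shown"

p = DisplayedStoredPart()
# p has one attribute of its own and all those of the higher levels.
```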

Object-oriented programming is just another mechanistic fantasy. The main delusion is in the
claim that the processes implemented in an application are related hierarchically, as one within
another. The theorists do recognize that these processes are in effect the structures that make
up the application, but they fail to see that these structures must share their elements, so they
must interact. Even the individual hierarchies, included in the form of existing parts, are usually
complex structures; and their totality – the application – is always a complex structure.

It is possible, in principle, to combine these hierarchies as one within another and keep their
totality as a simple structure, as the theory says. But the resulting application would be useless,
because it would be capable of just a fraction of the interactions required between the
structures. Here is a simple example. Suppose we have three hierarchies, for accounting,
database, and display functions. Even if they are correct and complete, we cannot use them to
create an object-oriented accounting application. The reason is that we don’t need accounting
functions within the display or database functions, nor database functions within the display or
accounting functions, nor display functions within the database or accounting functions. What we
need is to use the three types of functions together, in any conceivable combination. And the
object-oriented concept restricts us to hierarchical combinations.

Another delusion is in ignoring the structure that is the flow of execution. As we saw under
structured programming, this is a complex structure, determined by the application’s flow-
control constructs and the links to other structures. And, for a given application, it is necessarily
the same in object-oriented programming as in any other type of programming. Since the flow
of execution does not parallel the other hierarchies, and since it is itself a system of interacting
structures, the object-oriented theory would be refuted even if we did combine all the other
hierarchies into one.

As usual, the software charlatans covered up the failure of object-oriented programming by
reinstating the traditional concepts and incorporating them into the theory in the guise of
improvements. All changes have one purpose: to enable us to relate software elements through
several hierarchies at the same time; in other words, to bypass the original restriction to one
hierarchy. While regaining this freedom, though, we necessarily lose the promised benefits: we
must develop and reuse software informally, the way we always did. Object-oriented
programming became a fraud when its promoters continued to make the same promises while
annulling its most important principles.

The original object-oriented systems enforced those principles, through special languages, and
as a result they were useless for general-purpose applications. The first change, therefore, was
to degrade the object-oriented concept into a mere extension to the traditional languages. So
now we can develop the entire application in a traditional language, and employ perhaps an
object-oriented feature here and there, when indeed useful.

In particular, those all-important existing parts are now provided similarly to the traditional
subroutine libraries, rather than as formal hierarchies; and we invoke them as we do
subroutines. This way, an element in the application can have any combination of attributes we
need, simply by invoking several parts; we are no longer restricted to combining attributes
hierarchically, one within another, through the notion of inheritance. Recall the example of three
hierarchies, for accounting, database, and display functions, and the need to use them in any
conceivable combination rather than as one within another. This requirement is easily
implemented now, since we can invoke them freely, as if they were subroutines.
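
A sketch of this degraded style (hypothetical functions in Python): the three types of functions are simply invoked together, in any combination and any order, exactly as subroutines always were:

```python
def post_invoice(amount, ledger):      # "accounting" function
    ledger.append(amount)
    return sum(ledger)

def store_record(db, key, value):      # "database" function
    db[key] = value

def format_line(text):                 # "display" function
    return "| " + text + " |"

# Any combination, in any order -- no hierarchical nesting required.
ledger, db = [], {}
balance = post_invoice(150.0, ledger)
store_record(db, "INV-1", balance)
line = format_line("balance: " + str(db["INV-1"]))
```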

The second change was to degrade the notion of inheritance, from a precise, immutable property
of hierarchical structures, to a vague operation that can do almost anything. Specifically, the
attributes inherited by a given element can be modified, and even omitted; also, the element
can inherit attributes from elements in other hierarchies too, not just from the higher levels of
its own hierarchy. With this change, therefore, an element can have any attributes we like. But
attributes determine how elements are related to form hierarchical structures. Thus, if an
element can be related to the others in any way we like, the application can have any number of
additional structures; it is no longer restricted to one hierarchy. By annulling the fundamental
principle of inheritance, this change has restored the freedom to implement requirements
without the hierarchical restrictions.
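
This degraded notion of inheritance can be sketched as follows (hypothetical classes in Python, whose multiple inheritance and overriding exemplify the change): an element inherits from two unrelated hierarchies at once and modifies an inherited attribute, so its relationships are no longer confined to one hierarchy:

```python
class Printable:                  # one hierarchy
    def render(self):
        return "plain"

class Storable:                   # a second, unrelated hierarchy
    def save(self):
        return "stored"

# Inherits from both hierarchies at once, and overrides (modifies)
# an inherited attribute.
class Report(Printable, Storable):
    def render(self):
        return "formatted"

r = Report()
```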

In the end, abolishing the object-oriented principles was the only way to make object-oriented
programming practical. The charlatans behind this fraud simply reinstated, in a complicated and
roundabout manner, principles that we always had in the traditional languages. (The full
discussion of this fraud can be found in the section “Object-Oriented Programming” in chapter 7
of Software and Mind.)

The relational database model

The theory behind the relational database model claimed that it is possible to turn database
design and use into an exact, error-free activity based on mathematical logic. All we have to do
is separate the database structures from the application’s other structures, and simplify them to
the point where they can be represented with mathematical entities. Any database requirement
can then be implemented through combinations of some simple mathematical operations. This
also guarantees that, if we start with correct data, the result of these operations will also be
correct.

To appreciate the absurdity of these claims, let us start with a brief summary of the traditional
database concepts. A business application’s data is stored mostly in the form of indexed data
files; that is, data files and the associated index files. The data files contain the actual data,
organized as records. Each record has a number of fields, which contain the basic values needed
by the application (customer number, product description, transaction date, and the like). The
index files are based on the values present in certain fields (called key fields), and are needed in
order to access specific data records, or to scan records in a specific sequence; thus, a data file
may have several index files. Most files must be interrelated, and this is done by using their
fields; for example, if an invoice file has fields for customer and invoice numbers, we will be able
to relate it to the customer file and to select the records representing a certain customer’s
invoices.

Several operations, provided by a file management system, are needed to manipulate indexed
data files; in particular, to add, read, modify, and delete records. In the application, these
operations, as well as the data records and fields, are handled using the regular features of a
programming language – the same features used to handle memory-based data and operations
(assignments, comparisons, iterations, display, etc.). Thus, there is a seamless integration of the
database and the rest of the application. This is an important quality of the traditional database
concepts, as it permits us to implement all conceivable interactions between the database
structures and the application’s other structures.
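
A minimal sketch of these concepts (in Python, with a list standing in for the data file and a dictionary for the index file; all names are hypothetical):

```python
invoices = []            # the "data file": records with fields
index_by_number = {}     # the "index file" on the invoice-number key field

def add_record(number, customer, amount):
    invoices.append({"number": number, "customer": customer, "amount": amount})
    index_by_number[number] = len(invoices) - 1   # keep the index up to date

def read_record(number):
    return invoices[index_by_number[number]]      # direct access via the index

add_record("INV-2", "ACME", 200.0)
add_record("INV-1", "ACME", 100.0)
# An index also lets us scan the records in key sequence:
amounts_in_key_order = [read_record(n)["amount"] for n in sorted(index_by_number)]
```

Note how the records and fields are handled with the ordinary features of the language, so the database blends seamlessly into the rest of the application.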

The theorists noticed a similarity between the elements of data files and those of the logic
system known as predicate calculus, and jumped to the conclusion that a practical database
model can be invented based on this similarity. The concept of data files was replaced with the
concept of tables, while the records and fields became the rows and columns of tables. There are
key fields just as in the traditional data files, but no index files (their function is intrinsic to the
new database operations).

Then, said the theorists, if we restrict the data stored in tables so as to match the entities
permitted in predicate calculus, the traditional file operations can be replaced with operations
based on mathematical logic: selecting certain rows or columns from a table, combining rows or
columns from two tables, and so forth. The operations start with one or two tables and produce
as result a new table. (To access individual records or fields, we create tables with only one row
or one column of a row.) Thus, compared with the traditional operations, the relational
operations are simpler. Also, since they are mathematical, the resulting data is guaranteed to be
an accurate reflection of the original data.
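
The relational operations can be sketched as functions that take one or two tables and return a new table (an illustrative Python model with tables as lists of row dictionaries, not an actual database system):

```python
def select_rows(table, predicate):            # relational "selection"
    return [row for row in table if predicate(row)]

def project(table, columns):                  # relational "projection"
    return [{c: row[c] for c in columns} for row in table]

def join(left, right, column):                # join on one common column
    return [{**l, **r} for l in left for r in right if l[column] == r[column]]

customers = [{"cust": "C1", "name": "ACME"}]
invoices  = [{"cust": "C1", "amount": 100.0}, {"cust": "C2", "amount": 50.0}]

# Each operation produces a table, so operations can be composed.
result = project(join(select_rows(invoices, lambda r: r["amount"] > 60),
                      customers, "cust"),
                 ["name", "amount"])
```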

The relational model is a typical mechanistic fantasy. In order to attain an exact, mechanistic
representation of the database, the theorists extracted it from the system of structures that
make up the application. Then they simplified it further by excluding all features that did not
correspond to their intended mathematical model. In the end, what was left of the database was
indeed mathematical – but it had lost all practical value.

The model of an isolated database is perhaps an interesting theoretical concept, but in practice
the database is part of an application, so this model is useless. The theorists deliberately
separated the database from the application in order to avoid the complexity of their
interactions, and then promoted the exactness of the resulting model as a benefit for the whole
application. It is not surprising that academic charlatans pursue such fraudulent ideas; but it is
shocking to see practitioners and businesses accept them.

Let us examine the mathematical claims. Since the relational model was meant from the start
for databases separated from their applications, the mathematical benefits were always a lie.
The model guarantees that the result of an operation is correct if the data in the database is
correct. But the correctness of the data depends on the application’s requirements, so it must be
determined in the application, outside the scope of the model; moreover, the requirements
cannot be expressed mathematically. In the end, data validity must be enforced the way it
always was. Thus, since a system cannot be more exact than the least exact of its parts, the
relational operations are ultimately no more exact than the traditional ones.

The theorists claim also that the database design process is mathematical, and this too is a lie.
This process, called normalization in the relational model, entails decisions for the format, use,
and dependency of fields in interrelated files. Relational textbooks impress us with difficult
terminology, definitions, theorems, formulas, and diagrams, but despite the mathematical tone,
normalization is not a mathematical theory. It is just a formal study of field dependencies, and
cannot help us in the design process. All design decisions must be made informally, using
personal skills, just as they are for traditional databases (because they depend on the
application’s requirements, which cannot be reduced to mathematics, and lie outside the scope
of the model in any case). Thus, the actual design examples shown in textbooks following the
many pages of formal discussion are, ironically, informal.

The separation of the database from the application caused more than the loss of the promised
mathematical benefits. Since an application must interact continuously with its database, many
changes had to be made over the years to reinstate the links that the original model prevented.
Another problem was the restriction to operations on files (tables). These operations are indeed
simpler, but most links with the application must be at the lower levels of records and fields, and
the additional operations required were found to be too complicated and too slow. Thus, changes
also had to be made to reinstate the low-level links that the original model prevented.

In the end, practically all relational concepts had to be abandoned. They were replaced with
concepts that allow us to do, in roundabout and complicated ways, exactly what the traditional
database concepts allowed us to do all along. The relational model continued to be promoted,
though, with the original promises. The following discussion is necessarily only a brief survey of
this degradation.

Strict data normalization was impractical, so this fundamental relational principle was annulled
and replaced with the pragmatic criteria of the traditional design methods (through new
“features” like the ludicrously named denormalization and non-first-normal-form); that is,
common-sense decisions simply aiming to achieve the most effective file relationships.

Then, in order to reduce the need to link the database entities to the application’s entities, and
also to provide the means to access individual records and fields, more and more parts of the
application were moved into the database system in the guise of new relational features. This
stratagem started with data validation functions, but quickly expanded to allow any operations.
And as the operations became increasingly varied, special languages were invented to support
them. Thus, operations that were naturally and logically implemented in the application using
ordinary languages were replaced with complicated alternatives for no other reason than to
bypass the relational restrictions. And all this time, the shift was presented as an enhancement
of the relational model.

SQL is known as the official relational database language, but in reality it is the official means,
not of implementing, but of overriding, the relational principles. From its modest beginning as a
query language, SQL has grown to its enormous size as a result of the enhancements introduced
in order to provide full-fledged programming capabilities, as explained above.

But SQL also allowed the relational charlatans to abolish the last remnants of their fantasy: the
relational database operations are no longer used, and we access files through new operations
that are practically identical to the traditional ones (for example, we can add, read, modify, and
delete individual records, scan records one at a time, and work with individual fields).

Finally, moving more and more parts from the application into the database system made
programming so complicated that a new feature had to be invented to move them back into the
application. This feature, known as embedded SQL, lets us implement in a traditional language
the entire application, including all database requirements, and invoke SQL statements here and
there as needed. So applications look now about the same as those that use the traditional
database concepts.
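
A sketch of this style (using Python and its built-in SQLite interface as a stand-in for embedded SQL; the table and functions are hypothetical): the application is written in an ordinary language, with SQL statements invoked here and there, much as the traditional file operations were:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (number TEXT PRIMARY KEY, amount REAL)")

def add_invoice(number, amount):
    # an embedded SQL statement, invoked like a subroutine
    conn.execute("INSERT INTO invoices VALUES (?, ?)", (number, amount))

def read_invoice(number):
    # access an individual record, just as with indexed data files
    row = conn.execute("SELECT amount FROM invoices WHERE number = ?",
                       (number,)).fetchone()
    return row[0] if row else None

add_invoice("INV-1", 100.0)
```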

The relational database model is one of the greatest frauds ever perpetrated in a modern
society. And the ease with which the software charlatans have persuaded practitioners to forsake
proven database management principles and to depend instead on the relational imbecilities is a
vivid demonstration of the incompetence and corruption that permeate the world of
programming. (The full discussion of this fraud can be found in the section “The Relational
Database Model” in chapter 7 of Software and Mind.)