From The Rational Edge: Booch briefly examines factors -- from fundamental to human -- that limit what technology can achieve.
From fundamental to human, these are the factors that define the limits of technology:
Sometimes we simply cannot solve a problem. There exist a number of non-computable problems, such as the halting problem. Such problems are undecidable: no algorithm can be constructed that solves them for every input.
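The classic diagonal argument behind the halting problem can even be sketched in code (my own illustration, not from the original text): given any claimed halting decider, we can build a program that does the opposite of whatever the decider predicts, so every concrete decider is wrong somewhere.

```python
def make_trouble(halts):
    """Given a claimed halting decider `halts(program) -> bool`,
    build a program the decider must misjudge."""
    def trouble():
        if halts(trouble):
            while True:      # decider said "halts" -> run forever
                pass
        # decider said "loops" -> halt immediately

    return trouble

# Any concrete decider fails on its own `trouble` program.
always_says_loops = lambda program: False
t = make_trouble(always_says_loops)
t()  # halts immediately -- yet the decider claimed it loops forever
```

No matter how the hypothetical `halts` is implemented, `trouble` contradicts its verdict, which is why no such decider can exist.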
Sometimes we can't afford to solve a problem. Many problems, such as sorting a list of values, lend themselves to a variety of algorithmic approaches, each with different time and space complexity. Some naive sorting algorithms run in quadratic time (meaning that their execution time is proportional to the square of the number of items to be sorted), whereas other, more sophisticated, algorithms run in O(n log n) time. Such algorithms may be tedious, but they are nonetheless tractable. However, there exist other classes of algorithms whose time complexity is exponential. For example, the classic Towers of Hanoi problem, as well as brute-force searches over all paths in a graph, cannot be solved in better than exponential time. Such problems are considered intractable: they have an algorithmic solution, but their time or space complexity is such that, even with a relatively small value for N, running that algorithm would take centuries (or more) to complete.
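To make the Towers of Hanoi example concrete (a minimal sketch of my own, not from the original text): the standard recursive solution requires 2^n - 1 moves, so each additional disk doubles the work.

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from source to target, recording each move in `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 2**3 - 1 = 7 moves
```

With 64 disks the same algorithm needs 2^64 - 1 moves -- at a move per second, roughly half a trillion years.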
Sometimes we just don't know how to do it efficiently. There exists a large class of problems -- finding the shortest tour that visits every city on a map (the traveling salesman problem), packing a bin with various-shaped objects, or matching class schedules to students and instructors, for example -- that are NP-complete: no one knows whether these problems can be solved in better than exponential time. At present, the best we can do is seek good approximate solutions, often by applying some simplifying assumptions or by trying more exotic approaches such as Monte Carlo methods, intense parallelism, or genetic programming.
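Here is a small sketch (my own illustration, not from the original text) of one such simplifying approach for the bin-packing example: the first-fit decreasing heuristic, which produces a good -- though not guaranteed optimal -- packing in polynomial time, whereas an exact optimum is NP-hard.

```python
def first_fit_decreasing(items, capacity):
    """Pack item sizes into bins of the given capacity.

    Heuristic: sort largest-first, then place each item in the first
    bin with room, opening a new bin only when none fits. Runs in O(n^2).
    """
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

print(first_fit_decreasing([7, 5, 4, 3, 1], capacity=10))
# two bins: [[7, 3], [5, 4, 1]]
```

For this input the heuristic happens to find the optimum; in the worst case it uses about 22% more bins than an optimal packing would.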
These laws of software are demanding enough, but the problem becomes even worse when we consider the implications of discontinuous systems. For example, if we toss a ball into the air, we can reliably predict its path because we know that, under normal conditions, certain laws of physics apply. We would be very surprised if, just because we threw the ball a little harder, halfway through its flight it suddenly stopped and shot straight up into the sky. Yet in a not-quite-debugged software simulation of this ball's motion, exactly that kind of behavior can easily occur.
Within a large application, there may be hundreds or even thousands of variables as well as
multiple threads of control. The entire collection of these variables, their current values, and the
current address and calling stack of each process and thread within the system constitute the
present state of the system. Because we execute our software on digital computers, we have a
system with discrete states. By contrast, analog systems such as the motion of the tossed ball
are continuous systems. Parnas suggests that "when we say that a system is described by a
continuous function, we are saying that it can contain no hidden surprises. Small changes in
inputs will always cause correspondingly small changes in outputs." On the other hand, discrete
systems by their very nature have a finite number of possible states; in large systems, there is a
combinatorial explosion that makes this number very large. We try to design our systems with a
separation of concerns, so that the behavior in one part of a system has minimal impact upon the
behavior in another. However, the fact remains that the phase transitions among discrete states
cannot be modeled by continuous functions. Each event external to a software system has the
potential of placing that system in a new state, and furthermore, the mapping from state to state
is not always deterministic. In the worst circumstances, an external event may corrupt the state
of a system, because its designers failed to take into account certain interactions among events.
For example, imagine a commercial airplane whose flight surfaces and cabin environment are
managed by a single computer. We would be very unhappy if, as a result of a passenger in seat
38J turning on an overhead light, the plane immediately executed a sharp dive. In continuous
systems this kind of behavior would be unlikely, but in discrete systems all external events can
affect any part of the system's internal state. Certainly, this is the primary motivation for vigorous
testing of our systems, but for all except the most trivial systems, exhaustive testing is impossible.
Since we have neither the formal mathematical tools nor the intellectual capacity to model the
complete behavior of large discrete systems, we must be content with acceptable levels of
confidence regarding their correctness.
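The scale of that combinatorial explosion -- and hence why exhaustive testing is hopeless -- is easy to estimate with back-of-the-envelope arithmetic (my own sketch; the system described is hypothetical): the number of distinct states is bounded by the product of each variable's possible values.

```python
from math import prod

# Hypothetical system: 100 variables, each a 32-bit integer,
# plus 4 threads, each at one of 1_000 possible program points.
variables = [2**32] * 100
threads = [1_000] * 4

states = prod(variables) * prod(threads)
print(f"upper bound on distinct states: about 10^{len(str(states)) - 1}")
```

Even this modest system has on the order of 10^975 states -- vastly more than the roughly 10^80 atoms in the observable universe -- so visiting each state in a test is out of the question.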
In the case of compression algorithms, we can state the theoretical limits of compressing an image, a waveform, video, or some raw stream of bits, but even so there are myriad choices if we allow some degree of information loss. The more we know about the use and form of the information we seek to compress, the closer we can get to this theoretical limit: finding a suitable compression algorithm for a given domain is largely a matter of hard work, some hairy mathematics, and some trial and error.
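As a trivial illustration of exploiting the form of the data (a sketch of my own, not from the original text), consider run-length encoding: it is lossless and compresses well only when the input contains long runs, so knowing the domain -- say, sparse bitmaps -- tells us whether it is suitable at all.

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Lossless run-length encoding: collapse repeats into (char, count) pairs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((ch, 1))              # start a new run
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in runs)

bitmap = "0000000011110000"   # long runs: compresses well
assert rle_decode(rle_encode(bitmap)) == bitmap
print(rle_encode(bitmap))     # [('0', 8), ('1', 4), ('0', 4)]
```

On data without runs, the same scheme expands the input -- a small example of why no single algorithm approaches the theoretical limit for every domain.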
In the case of photorealistic rendering, the field is also characterized by hard work, hairy
mathematics, and some trial and error. A few decades ago, it was quite an accomplishment just to
model pitchers and cups -- and even then, the results looked artificial and plastic. Today, the field
has gotten much better -- we can model scenes down to the level of individual hairs on a beast
and we can do a fairly good job of biologically real movement. However, rendering human faces,
ice, and water is not quite to the point where a careful observer can be fooled. There will likely
come a time when we can do all these things, and when we do, the solution will look simple, but in
the meantime, our lack of perfect knowledge adds complexity and compromise to our systems.
We'd like to believe that building a distributed system is only moderately harder than building a non-distributed one, but it is decidedly not, because the reality of the real world intrudes. As Peter Deutsch once noted, there are eight fallacies of distributed computing -- assumptions we'd like to believe are true, but which are definitely not:

1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.

The fact that none of these assumptions holds means that we have to add all sorts of protocols and mechanisms to our systems so that our applications can run as if they were true.
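For instance, because the network is not reliable, we wrap remote calls in retry logic. A minimal sketch (my own illustration; the flaky call is simulated, not a real network operation):

```python
import time

def retry(operation, attempts=3, delay=0.0):
    """Call `operation` until it succeeds or `attempts` are exhausted.

    One of the mechanisms we add so higher-level code can pretend
    the network is reliable.
    """
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay)  # real systems back off exponentially, with jitter

# Simulated flaky remote call: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("packet lost")
    return "payload"

print(retry(flaky_fetch))  # succeeds on the third attempt
```

Timeouts, acknowledgments, idempotency keys, and encryption are further mechanisms of exactly this kind, each compensating for one of the fallacies above.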
Simplicity is most often expressed in terms of Occam's Razor. William of Occam, a 14th-century logician and Franciscan friar, stated, "Entities should not be multiplied unnecessarily." Isaac Newton projected Occam's work into physics by noting, "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances." Put in contemporary terms, physicists often observe, "When you have two competing theories that make exactly the same predictions, the simpler one is the better." Finally, Albert Einstein declared that "everything should be made as simple as possible, but not simpler."
Often, you'll hear programmers talk about "elegance" and "beauty," both of which are projections
of simplicity in design. Don Knuth's work on literate programming -- wherein code reads like a
well-written novel -- attempts to bring beauty to code. Richard Gabriel's work on the "quality with
no name," building upon architect Christopher Alexander's work, also seeks to bring beauty and
elegance to systems. In fact, the very essence of the patterns movement encourages simplicity
in the presence of overwhelming complexity by the application of common solutions to common
problems.
The entire history of software engineering can perhaps be told by the languages, methods,
and tools that help us raise the level of abstraction within our systems, for abstraction is the
primary means whereby we can engineer the illusion of simplicity. At the level of our programming
languages, we seek idioms that codify beautiful writing. At the level of our designs, we seek good
classes and in turn good design patterns that yield a good separation of concerns and a balanced
distribution of responsibilities. At the level of our systems, we seek architectural mechanisms that
regulate societies of these classes and patterns.
The difficulty of design, therefore, is choosing which design and architectural patterns we should
use to best balance the forces that make software development complex. To put it in terms of the
laws of software, this general problem of design is probably NP-complete: there likely exists some
absolutely optimal design for any given problem in context, but pragmatically, we have to settle
for good enough. As our industry gains more experience with specific genres of problems, we collectively begin to understand a set of design and architectural patterns that are good
enough and that have proven themselves in practice. Thus, designing a new version of an old kind
of system is easier, because we have some idea of how to break it into meaningful parts. However,
designing a new version of a new kind of system with new kinds of forces is fundamentally hard,
because we really don't know the best way to break it into meaningful parts. The best we can do is
create a design based upon past experiences, plagiarize from parts that worked in similar kinds of
situations, and iterate until we get it good enough.
A further complication is the fact that, for industrial-strength software, there are typically a large
number of stakeholders who shape the development process, most of whom are completely
unimpressed by the underlying technology for technology's sake. These stakeholders will bring to
the table a multitude of hidden and not-so-hidden economic, strategic, and political agendas that
often warp the development process through the presence of competing concerns.
For software that matters, the requirements of a system will typically change during its
development -- not just because of reasons of technology churn or resilience -- but also because
the very existence of a software development project alters the rules of the problem. Seeing early
products, such as design documents and prototypes, and then using a system once it is installed
and operational, are forcing functions that lead users to better understand and articulate their real
needs. At the same time, this process helps developers master the problem domain, enabling
them to ask better questions that illuminate the dark corners of a system's desired behavior.
Because a large software system is a capital investment, we cannot afford to scrap an existing
system every time its requirements change. Planned or not, large systems tend to evolve
continuously over time, a condition that is often incorrectly labeled software maintenance. To
be more precise, it is maintenance when we correct errors; it is evolution when we respond to
changing requirements; it is preservation when we continue to use extraordinary means to keep an
ancient and decaying piece of software in operation. Unfortunately, experience suggests that an
inordinate percentage of software development resources are spent on software preservation.
In contemporary software development organizations, the problem is made worse by the reality
that the complete development team typically requires a large mix of skills. For example, in most
Web-centric projects, not only do you have your typical code warriors, but you often have a set
of them who speak different languages (e.g. HTML, XML, Visual Basic, Java, C++, C#, Perl,
Python, VBScript, JavaScript, Active Server Pages, Java Server Pages, SQL, and so on). On top
of that, you'll also have graphic designers who know a lot about HTML and technologies such as
Flash, but very little about traditional software engineering. Database administrators and security
managers will have their own empires with their own languages, as will the network engineers
who control the underlying hardware topology upon which everything runs. As such, the typical
development team is often quite fractured, making it challenging to form a real sense of team.
Not only are there issues of jelling the team, there are also points of friction from the perspective of the individual developer, friction that eats away at the individual's productivity in subtle ways.
Thus, all meaningful development is formed by the resonance of activities that beat at different
rhythms: the activities of the individual developer, the social dynamics among small sets of
developers, and the dynamics among teams of teams. Much like the problem of design, finding the
optimal organization at each of these three levels and deciding upon the right set of artifacts for
each to produce using the best workflows is challenging, and is deeply impacted by the specific
forces upon your project, its domain, and the current development culture. To some degree, every
team is self-organizing -- but always within the structure imposed and encouraged by its context,
and that means the organization as a whole and its management. Choosing that organization
structure is not a technical problem, but instead is a human problem, which by its very nature is
complex. As a human problem, there are naturally all the usual human dramas that play out, often
amplified by the stresses of development. Ultimately, however, what drives the structure of an
organization are its surrounding economics.
This economic reality can be captured in a simple equation:

    effort = (team)(tools)(complexity)^process

where effort is the overall cost of developing the system, team measures the capability of the people involved, tools measures the leverage of their development environment, complexity measures the size and difficulty of the problem, and process is the exponent governing how that complexity scales. From this equation, we can observe that the complexity of a system can either be amplified by a bad process or dampened by a good one, and that the nature of a team and its tools are equal contributors to the performance of a project.
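One way to model that observation numerically (my own sketch; the functional form, names, and values are all assumptions for illustration) is to treat team and tools as multipliers and process as an exponent on complexity:

```python
def project_effort(team, tools, complexity, process):
    """Toy effort model: team and tools multiply, process exponentiates."""
    return team * tools * complexity ** process

# Same team, tools, and problem size; only the process exponent differs.
bad = project_effort(team=1.0, tools=1.0, complexity=100, process=1.2)
good = project_effort(team=1.0, tools=1.0, complexity=100, process=0.9)
print(round(bad), round(good))  # 251 vs. 63 units of effort
```

Because process sits in the exponent, its effect compounds as the problem grows, while a better team or better tools scale effort only linearly.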
During the dot-com mania around the turn of the millennium, software economics was pretty much ignored -- which contributed to the dot-bomb collapse. Developing software costs money, and so for any sustainable business activity, any investment in software development must provide a good return on that investment. We might dream up specious uses of software that have no fundamental economic value, but to do so will ultimately end in economic collapse. Alternatively, we might dream up meaningful uses of software, and to the degree we can develop that software efficiently and use it as a strategic weapon in our business, the effort will yield business success.
In the best of worlds -- which unfortunately is a fairly narrow space -- an organization will leverage its investment in software development so that software is in fact a strategic weapon for the company. Anything less, and the software development team will fall short of the great things it could have provided.
(The article is excerpted from the forthcoming third edition of Object-Oriented Analysis and Design
with Applications).