Architecture of web applications

While the Web was originally designed to foster collaboration across distributed
networks, stronger requirements, such as fault tolerance, scalability, and peak
performance, were added later as the Web's existing communication mechanisms were
leveraged to support transaction-based applications.

The building blocks of the architecture have not changed much over time, but the
richness of the solution and the flexibility of the interaction between these blocks are
in constant evolution. Software modularization at the program level in the sixties was
extended to client-server architectures in the late seventies and early eighties that,
in turn, evolved into the Web architectures of the nineties. For example, the design
of traditional transaction processing engines is based on the decomposition of their
functions into isolated modules that communicate with the rest of the solution
through well-defined interfaces. Client-server engines embrace the same principle
and provide these functions by means of distributed objects. The architecture of the
solution on the Web still follows the basic decomposition rules, and adds the run-time
identification and binding of resources. The solution may look on paper similar to a
single-server classical application, but the degree of flexibility in the selection of the
components that will actually perform the work forces the designer to build in or
reuse services to integrate the solution.

Mainframes may benefit from economies of scale by centralizing functions that share
critical resources. The cost of designing an application from scratch is lower when its
components can share design elements. By contrast, distributed applications are a
better fit when the designer looks at the overall set of applications in which design
elements are shared among different solutions. In that case, single functions can be
assigned to specialized servers. For example, the need to discover and identify
resources in the application leads to name services, and the need to protect the
application from unwanted users leads to authorization servers and encryption
techniques. The end result for the user is that the application can be deployed faster
and is more flexible and scalable because individual components can be replicated or
scaled according to the needs of the solution.
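
As a sketch of how such a specialized service might look, the class below implements a
bare-bones name service that maps logical service names to network addresses at run
time; the class, service names, and host addresses are illustrative assumptions rather
than any particular product's API.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch of a name service: clients look up logical service
    // names at run time instead of hard-coding server addresses.
    public class NameService {
        private final Map<String, String> registry = new ConcurrentHashMap<>();

        // A server registers the address at which it offers a logical service.
        public void register(String serviceName, String address) {
            registry.put(serviceName, address);
        }

        // A client resolves the logical name just before binding to the server.
        public String resolve(String serviceName) {
            String address = registry.get(serviceName);
            if (address == null) {
                throw new IllegalStateException("No server registered for " + serviceName);
            }
            return address;
        }

        public static void main(String[] args) {
            NameService ns = new NameService();
            ns.register("catalog", "catalog-1.example.com:8080");    // hypothetical hosts
            ns.register("authorization", "auth-1.example.com:8443");
            System.out.println("catalog service is at " + ns.resolve("catalog"));
        }
    }

An authorization server fits the same pattern: a single specialized component consulted
at run time, so that the policy it enforces does not have to be duplicated in every
application.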

The Web is an Extension of the Client-Server Paradigm

Software modularization was introduced to reduce the cost of building complex
systems. Instead of having to consider the interactions of each component with all the
other components of the solution, modules present a first step in isolating the details
of the implementation of a given function and allowing multiple uses of the same
function. For example, if input data is validated outside of the application, the
resulting application is simpler to develop. Validation was done initially by the
application, then by separate routines in the program, then using intelligent
peripherals, and later on, on a different server. All these options are a consequence
of a natural evolution towards Web applications.
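
The validation example can be made concrete with a small sketch, assuming a
hypothetical order form with a single quantity field: the validation rule lives behind
its own interface, so the application logic never has to change when the rule does.

    // Sketch of validation isolated behind a well-defined interface; the
    // field name and rule are illustrative assumptions, not a real API.
    public class OrderValidation {

        interface Validator {
            boolean isValid(String input);
        }

        // One reusable validation function, independent of the application.
        static final Validator QUANTITY = input -> {
            try {
                return Integer.parseInt(input.trim()) > 0;
            } catch (NumberFormatException e) {
                return false;
            }
        };

        // The application only ever consumes already-validated data.
        static void placeOrder(String quantityField) {
            if (!QUANTITY.isValid(quantityField)) {
                System.out.println("Rejected before reaching the application logic");
                return;
            }
            System.out.println("Order accepted for quantity " + quantityField.trim());
        }

        public static void main(String[] args) {
            placeOrder("3");
            placeOrder("-1");
        }
    }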

In the case of data repositories, Web servers can be seen as instantiations of the
client-server paradigm where the Web server delivers multimedia content to clients
such as browsers, applets, and search robots. Content may be stored in the server or
built dynamically by the server, either from local data or, most typically, from
application and database servers that feed the data to the Web server. This
perspective focuses on the basic services the Web server has to
perform: control communications, control the data flow, and ensure the privacy of
the data if the client requests some form of encryption. Commercial applications
based on a Web-server interface also require the support of a large number of
concurrent users, the distribution of the Web services to different platforms, some
form of fail-safe operation, performance guarantees in terms of response time or
throughput, and the integration of heterogeneous components. Keeping some form
of state of the transaction in that environment has proven to be a challenge that has
limited the migration to the Web of many simple commercial applications that
otherwise would greatly benefit from the lower cost of operations in the Internet
environment.
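
A minimal sketch of that view of the Web server, built on the JDK's
com.sun.net.httpserver package, is shown below; it serves one stored page and one page
generated per request, and deliberately omits the encryption, scalability, and
fail-safe concerns just listed.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Minimal Web server sketch: one static resource and one resource
    // whose content is built dynamically for every request.
    public class TinyWebServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            server.createContext("/static", exchange -> {
                byte[] body = "<html><body>Stored content</body></html>"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });

            server.createContext("/dynamic", exchange -> {
                // In a real solution this content would come from application
                // and database servers feeding the Web server.
                byte[] body = ("<html><body>Generated at " + System.currentTimeMillis()
                        + "</body></html>").getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });

            server.start();
            System.out.println("Listening on http://localhost:8080");
        }
    }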

Scaling the solution from a few thousand users to hundreds of thousands of
concurrent users forces one either to change the algorithms used by the application
to maintain the state by, for example, distributing and replicating the services, or in
simple cases, to upgrade the platform. Resource replication has the added advantage
of providing extra processing power without bringing down the network and
incrementally increasing the resilience of the solution to server downtime.
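
A sketch of the simplest form of such replication follows, with hypothetical host
names: requests are dispatched round-robin over a list of replicas, so capacity is
added by extending the list and the loss of a single replica costs only part of the
capacity.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch of round-robin dispatch over replicated servers; the host
    // names are illustrative assumptions.
    public class ReplicaDispatcher {
        private final List<String> replicas;
        private final AtomicInteger next = new AtomicInteger();

        public ReplicaDispatcher(List<String> replicas) {
            this.replicas = replicas;
        }

        // Each request goes to the next replica in turn; adding a replica
        // to the list adds processing power without changing the callers.
        public String pickReplica() {
            int index = Math.floorMod(next.getAndIncrement(), replicas.size());
            return replicas.get(index);
        }

        public static void main(String[] args) {
            ReplicaDispatcher dispatcher = new ReplicaDispatcher(
                    List.of("app-1.example.com", "app-2.example.com", "app-3.example.com"));
            for (int i = 0; i < 5; i++) {
                System.out.println("request " + i + " -> " + dispatcher.pickReplica());
            }
        }
    }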

Replication does not come cost-free. Interfaces have to be standardized, and the
communication mechanisms must preserve the implicit assumptions made during
system design. For example, error checking ideally should be done for each data
unit, but just once. By duplicating the effort on both ends of the communications
pipe, the solution becomes more expensive without giving any extra benefit to the
application. Standardization--either formal or de facto--also encompasses among
other things file formats, markup languages, object brokering, network
management, and high-availability protection.
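
The check-once principle can be illustrated with a small sketch, assuming a CRC32
checksum as the error check: the sender computes a checksum per data unit, the receiver
verifies it exactly once at the boundary, and the code behind that boundary trusts the
data rather than repeating the check.

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    // Sketch of per-data-unit error checking done once, at the boundary.
    public class CheckedTransfer {

        static long checksum(byte[] data) {
            CRC32 crc = new CRC32();
            crc.update(data);
            return crc.getValue();
        }

        // Sender side: attach a checksum to the data unit.
        static long send(byte[] dataUnit) {
            return checksum(dataUnit);
        }

        // Receiver side: verify once; everything past this point trusts the data.
        static void receive(byte[] dataUnit, long declaredChecksum) {
            if (checksum(dataUnit) != declaredChecksum) {
                throw new IllegalStateException("Corrupted data unit");
            }
            System.out.println("Accepted: " + new String(dataUnit, StandardCharsets.UTF_8));
        }

        public static void main(String[] args) {
            byte[] unit = "one data unit".getBytes(StandardCharsets.UTF_8);
            long crc = send(unit);
            receive(unit, crc);    // verified exactly once, at the receiving end
        }
    }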

The decomposition of the application into separate and relatively independent
modules also leads to service specialization. The economies of scale from the
mainframe have become economies of specialization: servers are designed so that
they can accomplish simple tasks, and the traditional role of the systems architect is
being complemented by the systems integrator, who may use off-the-shelf
components to build a robust solution with relative ease.

From the user's perspective, except for the user interface, there is little or no
difference between placing an order through a Web site and the traditional process in
which all the steps are
done over the phone. On the other hand, the expectation that the state of the order
will be kept consistent across the steps of the user interaction is a consequence of
the fact that the user does not have to know what logical services are involved.
When a user accesses an electronic commerce site, the context where the user
operates has to be kept consistent. A session is defined as a set of logical transactions
with a consistent context.
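
A minimal sketch of that definition, with hypothetical class and method names: the
session owns the context, and every logical transaction reads and updates that same
context.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of a session as a set of logical transactions that share
    // one consistent context; names are illustrative assumptions.
    public class Session {
        private final Map<String, Object> context = new HashMap<>();
        private final List<String> completedTransactions = new ArrayList<>();

        // Every logical transaction runs against the same shared context.
        public void runTransaction(String name,
                java.util.function.Consumer<Map<String, Object>> work) {
            work.accept(context);
            completedTransactions.add(name);
        }

        public static void main(String[] args) {
            Session session = new Session();
            session.runTransaction("browse-catalog", ctx -> ctx.put("lastCategory", "books"));
            session.runTransaction("add-to-cart", ctx -> ctx.put("cartItems", 1));
            // Both transactions observed and updated the same context.
            System.out.println(session.context);
        }
    }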

In that example, the user accesses the home page of the Web site to check the
current offerings. For some users, the home page may change while they are still on
line, but there is an expectation that offers remain valid while the older page still can
be retrieved from the cache. Next, the user may browse through the catalog--while
the catalog is being updated--and put some items in the shopping cart, which is a
mechanism to preserve the state between logical transactions. When the user is
ready to check out, the system validates the credit and shipping information, current
inventory, offers, discounts, and any other information about the context of the user.
Finally, if the user confirms the order, the system executes transactions in other
systems, such as order processing, supply chain, and marketing statistics. Although
each transaction can be processed by a different server, sessions have to be
managed by or at least through one Web server. At this time, state preservation,
scalability, and high availability are the main concerns in the development of Web
applications. Solutions are being developed in the areas of code portability (for
example, by building Java libraries to support industry-dependent functionality),
efficient queuing mechanisms tied to multithreading schemes to improve the
scalability of the application, cookie-less state preservation methodologies, and
caching.
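
As a sketch of one cookie-less way to preserve such state, the cart below is kept
entirely on the server and keyed by a session identifier that would travel with each
request, for example as a parameter rewritten into every URL; the classes and
identifiers are illustrative assumptions.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of cookie-less state preservation: the shopping cart lives on
    // the server, and only the session id travels with each request
    // (for example, appended to every URL the server generates).
    public class CartStore {
        private final Map<String, List<String>> cartsBySession = new ConcurrentHashMap<>();

        // Called on the first request of a session; the returned id is
        // embedded in the links of every page sent back to the browser.
        public String openSession() {
            String sessionId = UUID.randomUUID().toString();
            cartsBySession.put(sessionId, new ArrayList<>());
            return sessionId;
        }

        // Later logical transactions present the id and find their context again.
        public void addItem(String sessionId, String item) {
            cartsBySession.get(sessionId).add(item);
        }

        public List<String> checkout(String sessionId) {
            return cartsBySession.remove(sessionId);
        }

        public static void main(String[] args) {
            CartStore store = new CartStore();
            String sessionId = store.openSession();            // id rewritten into URLs
            store.addItem(sessionId, "catalog item A");
            store.addItem(sessionId, "catalog item B");
            System.out.println("Order at checkout: " + store.checkout(sessionId));
        }
    }

Removing the cart entry at checkout mirrors the end of the session: once the order has
been handed off to the back-end transactions, the Web tier no longer needs to hold the
state.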

An Evolution-oriented Architecture for Web Applications

Abstract: The Web has become an efficient environment for application delivery. The
original idea of a distributed system for knowledge interchange has given way to
organizations offering their products and services using the Web as a global point of
sale. Although the resulting possibilities look promising, real-life Web development
remains ad hoc. The prevailing understanding of Web application development mostly
neglects architectural approaches, resulting in Web sites that fail to achieve typical
goals such as an evolvable and maintainable structure of the information space. Beyond
that, as the architecture of a Web application matures, more and more knowledge about
the domain becomes embedded in code and therefore burdens maintenance and reuse of
parts of the application. In this paper, we will propose an architecture and a
framework using the notion of services as model entities for Web application
development. The object-oriented WebComposition Markup Language, which is an
application of XML, will be presented as the basis for a generic, evolvable framework
for services. Finally, the results of its usage will be described in detail by giving
the example of a large-scale transnational Intranet where the framework is in use.
