
A distributed system is very complex to implement, as a number of challenges have to be addressed to achieve its final objective.


The first challenge is heterogeneity. Heterogeneity is one of the important design issues for distributed systems. In distributed systems, the communication framework consists of channels of different capacities, and different kinds of representation techniques may be imposed on the end-systems (K.S. Mishra & A.K. Tripathi, 2014).
The author will discuss the heterogeneity challenge in depth by using the latest studies in distributed systems. White, J., Dougherty, B., Schantz, R., Schmidt, D. C., Porter, A., & Corsaro, A. (2012) studied the challenge of encapsulating heterogeneity at scale for highly complex distributed systems (HCDSs) from a middleware perspective. A distributed system enables users to access services and runs on a variety of processor, operating system, and middleware platforms, interconnected by various types of protocols and networking technologies, with different constraints on quality-of-service (QoS) properties at each layer. In this study, the authors illustrated systems used to control smart grid production and large metro subway infrastructure as examples. These are called next-generation supervisory control and data acquisition (SCADA) systems. Such a system has to integrate an extensive variety of devices, ranging from conventional microcontrollers to the latest smartphones and tablet devices, while dealing with different network protocols and bus interconnects, software platforms, and processor types and capabilities. In addition, these systems can use utility computing systems that contain a huge number of distributed, compute-intensive, virtualized and storage-intensive elements supported by server frameworks.

Past researchers have analysed techniques for establishing and configuring HCDSs with huge numbers of heterogeneous components, providing management at scale of the various components that comprise HCDSs. For instance, detecting resource usage anomalies between heterogeneous components in large-scale distributed systems using visualization techniques has been suggested by Mello Schnorr et al. (2012). Another approach has been introduced by Albrecht et al. (2007); this technique provides a framework to manage distributed applications across a variety of networks and computing environments.
On the other hand, open problems also occur in deployment decisions, for example, when deciding which processor a software component should run on. This happens due to the variety of physical resources in heterogeneous HCDSs. In addition, it is also hard to configure software and hardware components, because the heterogeneous hardware and software resources need to be adjusted synchronously, both locally and globally across thousands of components.
The challenge of heterogeneity in distributed systems can be handled by concentrating on both interoperability and portability. Consequently, middleware technologies must be developed with interoperability and portability across and between applications in mind. The solution suggested by White, J., Dougherty, B., Schantz, R., Schmidt, D. C., Porter, A., & Corsaro, A. (2012) is that next-generation middleware needs integrated configuration approaches, for instance ones based on model-driven engineering (MDE). MDE can be used to assemble heterogeneous system components. Moreover, end-to-end system QoS goals can be met, as it can be used to optimize the connecting protocols and middleware. Manual techniques for designing these interconnecting elements scale poorly and lack the sophistication required to meet HCDS requirements. As a consequence, automated configuration methods capable of handling the complexity and scale of advancing HCDSs must be developed.
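The idea of automated, model-driven configuration can be illustrated with a minimal sketch. All node and component names, fields, and constraints below are hypothetical, invented for illustration; they are not drawn from any real MDE toolchain or the cited study. A declarative model of component requirements is matched automatically against a model of heterogeneous nodes, replacing the manual deployment decisions discussed above.

```python
# Hypothetical sketch of model-driven configuration: a declarative model of
# component requirements is matched automatically against heterogeneous nodes.
# All names and fields are illustrative assumptions, not a real MDE tool.

nodes = [
    {"name": "microcontroller", "os": "rtos", "ram_mb": 64},
    {"name": "tablet", "os": "android", "ram_mb": 2048},
    {"name": "server", "os": "linux", "ram_mb": 32768},
]

components = [
    {"name": "sensor_reader", "requires": {"ram_mb": 32}},
    {"name": "operator_ui", "requires": {"os": "android", "ram_mb": 1024}},
    {"name": "analytics", "requires": {"os": "linux", "ram_mb": 16384}},
]

def deploy(components, nodes):
    """Assign each component to the first node satisfying its constraints."""
    plan = {}
    for comp in components:
        req = comp["requires"]
        for node in nodes:
            # an absent "os" requirement matches any operating system
            if req.get("os", node["os"]) == node["os"] and node["ram_mb"] >= req["ram_mb"]:
                plan[comp["name"]] = node["name"]
                break
    return plan

print(deploy(components, nodes))
# {'sensor_reader': 'microcontroller', 'operator_ui': 'tablet', 'analytics': 'server'}
```

A real MDE approach would generate such deployment plans from richer platform and QoS models, but even this toy mapper shows why automation is needed: the matching logic, trivial for three components, must run over thousands of heterogeneous components in an HCDS.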
The second challenge is openness. According to K.S. Mishra & A.K. Tripathi (2014), openness refers to the degree to which a system is designed using standard protocols to support interoperability. Developers want to be able to include new features or replace subsystems in the future. To achieve this goal, a distributed system must have well-defined interfaces. They proposed that interfaces should be separated and freely accessible to enable easy extension of existing components and addition of new ones. In Y. Jiang's (2015) study, the author stated that distributed systems are often open, and some nodes and network structures may be unreliable. For instance, there are no efficient approaches to prevent malicious peers from joining open systems, hence distributed systems are very vulnerable to misuse by selfish and malicious users (P. Yi, Y. Wu, F. Zou & N. Liu, 2010). In addition, because of the autonomy of nodes, some nodes may fail to operate independently. Therefore, fault tolerance and reliability are significant for real open distributed systems.
In Stefan Poslad's (2016) study, the author stated that a distributed system is viewed as open if it is extensible; there is a range of degrees of openness, and diverse models and designs for it exist. Openness is connected to re-configurability. If interfaces to the system are clearly defined, and parts of the system framework are loosely coupled, then parts can be exchanged and improved. Ordinarily, the interfaces at the highest level of abstraction expose minimal openness. For instance, an agent platform service API can comprise a directory service API, an agent life-cycle management API, and a message transport system API. In an agent platform, communication facilitators coupled with the use of a rich message-passing protocol suite lead to natural support for an open service architecture. Service consumer agents and service provider agents can be dynamically bound and unbound using the facilitator.
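A minimal sketch can make the facilitator idea concrete. The class and method names below are assumptions for illustration only; they are not taken from any specific agent platform API. The facilitator keeps a registry of providers per service, and consumers bind and unbind against it at runtime rather than being wired to a fixed provider.

```python
# Illustrative sketch of a facilitator that dynamically binds service consumer
# agents to service provider agents. Names are hypothetical, not a real API.

class Facilitator:
    def __init__(self):
        self.providers = {}   # service name -> list of provider agent ids
        self.bindings = {}    # consumer id -> (service, provider)

    def register(self, service, provider):
        """A provider agent advertises a service with the facilitator."""
        self.providers.setdefault(service, []).append(provider)

    def bind(self, consumer, service):
        """Bind a consumer to any currently registered provider of a service."""
        candidates = self.providers.get(service, [])
        if not candidates:
            raise LookupError(f"no provider for {service}")
        self.bindings[consumer] = (service, candidates[0])
        return candidates[0]

    def unbind(self, consumer):
        """Release the consumer's binding; it may later rebind elsewhere."""
        self.bindings.pop(consumer, None)

f = Facilitator()
f.register("directory", "agent-A")
print(f.bind("client-1", "directory"))   # agent-A
f.unbind("client-1")
```

Because the consumer names only the service, not the provider, providers can be added or replaced without changing consumer code, which is the openness property the paragraph above describes.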
Diverse service domains can require a range of communication support, for example, more or less throughput, more or less negotiation about protocol characteristics, and more or less security. Openness at lower levels of abstraction in the platform enables services to be dynamically replaced or enhanced. For instance, a specific message transport could be substituted for, or used alongside, an alternate transport. Depending on the software language and the types of interaction, service connections between parts could be statically changed before the session starts or dynamically changed during a session.
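Transport substitution can be sketched as follows. The transport classes and their `send` interface are hypothetical, chosen only to illustrate the point: as long as callers depend on the common interface, one transport can be swapped in for, or wrapped around, another before the session starts.

```python
# Sketch of openness at a lower abstraction level: a message transport that
# can be substituted or layered without changing callers. Names are assumed.

class InProcTransport:
    """A trivial in-process transport that queues messages locally."""
    def __init__(self):
        self.inbox = []
    def send(self, dest, payload):
        self.inbox.append((dest, payload))

class LoggingTransport:
    """An alternate transport used alongside the first: it records traffic,
    then delegates delivery to the wrapped transport."""
    def __init__(self, inner):
        self.inner = inner
        self.log = []
    def send(self, dest, payload):
        self.log.append((dest, payload))
        self.inner.send(dest, payload)

# Statically changed before the session starts: callers only see .send().
transport = LoggingTransport(InProcTransport())
transport.send("agent-B", "hello")
```

The same shape would let a secure or higher-throughput transport replace `InProcTransport` during a session, provided both sides agree on the interface.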
The third challenge is security. How to apply security policies to interdependent systems is a great issue in distributed systems. The system must have strong security and privacy measures, since distributed systems deal with sensitive data and information. The collection of distributed system assets that need to be protected includes storage, communications, base resources and user-interface I/O, as well as higher-level composites of these assets, such as files, messages, processes, display windows and more complex objects. Security must be provided in distributed systems in terms of confidentiality, integrity and availability. The likely threats are denial of service, information leakage, integrity violation, and illegitimate usage. In addition, access to assets ought to be secured to ensure only known users can perform allowed operations.
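One of the threats listed above, integrity violation, can be countered with message authentication codes. The sketch below uses Python's standard `hmac` module; the shared key and message contents are illustrative assumptions. A receiver recomputes the tag and rejects any message whose tag does not match, detecting tampering in transit.

```python
# Minimal integrity protection using the standard-library hmac module.
# The key and message are illustrative; a real system would manage keys
# securely and also address confidentiality and availability separately.

import hashlib
import hmac

key = b"shared-secret"          # assumed pre-shared key between the parties
message = b"open valve 7"

# Sender computes an authentication tag over the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, tag):
    """Receiver recomputes the tag; constant-time compare avoids leaks."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)                 # genuine message accepted
assert not verify(key, b"open valve 8", tag)     # tampered message rejected
```

This addresses integrity for one-to-one communication only; as the discussion below notes, one-to-many communication needs different key-distribution and authentication schemes.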
According to Dan Nessett (1999), while engineers are at present occupied with creating solutions for the numerous issues that exist here, they are not addressing security issues that will emerge as massively distributed systems become prominent. In particular, massively distributed systems generally will support a large number of end-systems, a significant portion of which will be embedded in other equipment and used by technologically naive users. These systems will require management, which presumably will happen using sub-systems under the control of service providers. Since many end-systems will either create or contain information that clients consider private, the management technology for massively distributed systems must ensure, to a reasonable degree, that un-aggregated information is not compromised either by individuals operating the management sub-system or by the service providers as part of their corporate strategy, e.g., during the collection of marketing data. To meet this goal, end-systems must be client-anonymous with respect to the management sub-system. That is, there must be no tie between the identifiers used to manage end-systems and those used for billing, warranty or other services that require the identification of the individuals who own or use these end-systems.
A major role in massively distributed systems will be played by one-to-many communications. Therefore, engineers will have to address the issues of modification, deletion, replay, insertion, release of secret state, and masquerade for this sort of communication. Existing procedures are intended to secure one-to-one communications against these threats. New processing algorithms and protocols will be required to provide these services for one-to-many communications.
When confronted with the issue of data protection, existing distributed systems already have to cope with the controls governments have placed on cryptographic technology. These requirements have hampered engineers in providing suitable levels of security to the users of distributed systems. Massively distributed systems will create new issues in this area. Since communications will occur between large numbers of end-systems, security service implementations will be unable to determine which systems are constrained by specific national laws concerning encryption and what those constraints might be. As mobile systems become more predominant and are incorporated into massively distributed systems, enforcement of these constraints, even if they are known, would require tracking the position of a mobile system, identifying the constraints imposed by its locality, and communicating these limitations to all systems interacting with it. Given the number of end-systems that will engage in a massively distributed application, and the frequency with which a mobile system's position may change (e.g., those used from or embedded in an automobile, train, boat or airplane), this will in fact be infeasible. Since enforcing these legal limitations will be just as hard as implementing the technology that satisfies them, over time the constraints might be relaxed or even eliminated. In any case, while they are in force, the technical issues they raise will be formidable.
A large role in massively distributed applications will be played by commercial enterprises. Consequently, new threats will emerge as the value of these applications increases. One concern is the security of internetwork routing, and massively distributed systems will introduce new factors into this issue. The infrastructure required for such systems will essentially combine other large networks into a worldwide communications fabric. However, routing service providers may wish to confine the traffic that transits their networks, and subscribers may demand guarantees that the quality of service advertised by routing service providers is actually delivered. The former issue has received a great deal of consideration, leading to the formulation of policy routing procedures. The latter issue, on the other hand, has received less consideration and will require further study.

The fourth challenge is scalability. Scaling is one of the major issues of distributed systems. The scaling issue comprises dimensions such as communication capacity. The system ought to be designed in such a way that its capacity may be increased with increasing demand on the system. The system must remain effective when there is a significant increase in either the number of resources or the number of users. The design and algorithms must remain efficient under these conditions. According to Takada, H. & Sakamura, K. (1995), while current distributed systems are composed of hundreds of nodes, during the first decade of the 21st century massively distributed systems will be composed of thousands to billions of nodes, supporting hundreds to millions of users.

The scaling issue for massively distributed systems has a number of dimensions. In the area of communications capacity, recent events in the telecommunications industry promise to dramatically change the available worldwide bandwidth for networking. Power, water, natural gas, vehicle fuel and mass-market supply chains are naturally large scale. Traditional information supply chains (e.g., radio, television, motion pictures, the printed news media) are broadcast based and characterized by a small producer-to-consumer ratio. However, this is undergoing radical change. The growth of the World Wide Web has greatly increased the producer-to-consumer ratio in data communications, and this trend will continue. There may even come a time when the ratio approaches 1. This one factor has the potential to completely change how information supply chains are implemented and could be the major driver behind massively distributed systems.

Geographically, massively distributed applications supporting information supply chains will be global in extent. A single application running over a massively distributed system will interconnect users of significantly different languages, cultures and views. Communication costs will be structured so that they scale logarithmically with the number of destinations, thereby exploiting efficiencies inherent in broadcast and multicast communications. Individual subscribers will be able to communicate affordably with one to a hundred other subscribers. One-to-many communications will be the norm, as opposed to the one-to-one transfers that are most common today.
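The cost contrast between one-to-one and one-to-many delivery can be sketched numerically. The model below is a deliberately crude assumption, not from the cited text: unicast charges the sender one transmission per destination, while a balanced binary multicast tree charges the sender a cost proportional to the tree depth, which grows logarithmically with the number of destinations.

```python
# Back-of-envelope comparison of linear unicast cost versus a logarithmic
# cost model for a balanced binary multicast tree. The cost model is an
# illustrative assumption; real multicast pricing is more complicated.

import math

def unicast_cost(n_destinations, per_link=1.0):
    """One separate transmission per destination: cost grows linearly."""
    return n_destinations * per_link

def multicast_cost(n_destinations, per_link=1.0):
    """Sender-side cost proportional to tree depth: ~log2(n) forwarding hops,
    with replication pushed into the tree's interior nodes."""
    return math.ceil(math.log2(n_destinations)) * per_link

for n in (8, 1024, 1_000_000):
    print(n, unicast_cost(n), multicast_cost(n))
# 8 -> 8.0 vs 3.0; 1024 -> 1024.0 vs 10.0; 1_000_000 -> 1000000.0 vs 20.0
```

Even under this toy model, the gap between linear and logarithmic growth shows why broadcast and multicast efficiencies matter once one-to-many communication becomes the norm.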

The fifth challenge is failure handling. This concerns how failures of the system can be detected and repaired.
The sixth challenge is concurrency. Concurrency means that shared access to resources must be made available to the required processes without corrupting the shared state.
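The classic hazard of uncontrolled shared access is the lost update, and the classic remedy is mutual exclusion. The sketch below uses Python's standard `threading` module; the counter and thread counts are arbitrary illustrative values.

```python
# Sketch of controlled shared access: a lock serializes concurrent updates
# to a shared counter so that no increments are lost. Values are arbitrary.

import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:           # only one thread mutates the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000: all 4 * 10000 increments survive
```

In a distributed system the same requirement holds but the mechanism must work across machines (e.g., distributed locks or transactions), since there is no shared memory or single lock object to rely on.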

The seventh challenge is transparency. Transparency concerns the extent to which the distributed system should appear to the user as a single system. A distributed system must be designed to hide the complexity of the system to a great extent. The distributed system should be perceived by its users or application programmers as a single entity rather than as a collection of cooperating autonomous systems. Users should be unaware of where services are located, and the transfer from a local machine to a remote one should also be transparent.

There are different kinds of transparency that the distributed system has to incorporate. The following are the different transparencies encountered in distributed systems [3] [2].

1. Access Transparency: Clients should be unaware of the distribution of the files. The files could be present on a totally different set of servers which are physically far apart, and a single set of operations should be provided to access both remote and local files. Applications written for local files should be able to run on remote files as well. Examples illustrating this property are the file system in Network File System (NFS), SQL queries, and navigation of the web.
2. Location Transparency: Clients should see a uniform file name space. Files or groups of files may be relocated without changing their pathnames. A location-transparent name contains no information about the named object's physical location. This property is important to support the movement of resources and the availability of services. Location and access transparency together are sometimes referred to as network transparency. Examples are the file system in NFS and the pages of the web.
3. Concurrency Transparency: Users and applications should be able to access shared data or objects without interfering with each other. This requires very complex mechanisms in a distributed system, since there exists true concurrency rather than the simulated concurrency of a central system. Shared objects are accessed simultaneously, and concurrency control and its implementation are a hard task. Examples are NFS and an Automated Teller Machine (ATM) network.
4. Replication Transparency: This kind of transparency should mainly be incorporated in distributed file systems, which replicate data at two or more sites for greater reliability. The client generally should not be aware that a replicated copy of the data exists. Clients should also expect operations to return only one set of values. Examples are distributed DBMSs and mirroring of web pages.
5. Failure Transparency: [4] Enables the concealment of faults, allowing user and application programs to complete their tasks despite the failure of hardware or software components. Fault tolerance is provided by the mechanisms that relate to access transparency. Distributed systems are more prone to failures, as any component may fail, which may lead to degraded service or the total absence of that service. As the intricacies are hidden, the distinction between a failed and a slow-running process is difficult. Examples are database management systems.
6. Migration Transparency: This transparency allows information or processes to be moved within a system without affecting the operations of the users and the applications that are running. This mechanism allows for load balancing away from any particular node that might be overloaded. Systems that implement this transparency are NFS and web pages.
7. Performance Transparency: Allows the system to be reconfigured to improve performance as the load varies.
8. Scaling Transparency: A system should be able to grow without affecting application algorithms. Graceful growth and evolution is an important requirement for most enterprises. A system should also be capable of scaling down to small environments where required, and be space- and/or time-efficient as required. The best-known distributed system implementing this transparency is the World Wide Web.
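Location transparency (item 2 above) can be sketched in a few lines. The name-service mapping and server names below are hypothetical: clients use a stable, location-free pathname, and only the name service knows (and may change) where the file physically lives.

```python
# Illustrative sketch of location transparency: clients resolve a stable,
# location-free name through a name service, so files can be relocated
# without changing their pathnames. Server names are hypothetical.

name_service = {"/docs/report.txt": "serverA:/vol1/report.txt"}

def read(path):
    """Client-facing read: the physical location is never exposed."""
    location = name_service[path]
    return f"contents fetched from {location}"

print(read("/docs/report.txt"))
# contents fetched from serverA:/vol1/report.txt

name_service["/docs/report.txt"] = "serverB:/vol9/report.txt"   # relocated
print(read("/docs/report.txt"))    # same client pathname still works
# contents fetched from serverB:/vol9/report.txt
```

The same indirection underlies several of the other transparencies: replication transparency returns one value from many copies, and migration transparency updates the mapping while processes keep using the old names.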
The eighth challenge is quality of service. Quality of service concerns how to specify the quality of service given to system users and the acceptable level of quality of service delivered to them. Quality of service is heavily dependent on how processes are allocated to the processors in the system, resource distribution, hardware, adaptability of the system, the network, etc. Good performance, availability and reliability are required to reflect good quality of service.
