
Variability in Engineering

What engineering technology can benefit the advancement of human life?



Norshahrizan Bt Nordin, Mohammad Harith Almus, Muhammad Ervan Yulio Syamputra, Geraldine Fredina
Freddy Rozario, Lim Qiao Ying, Sang Gita Devi Kertara Ram
School of Business Innovation and Technopreneurship (PPIPT)
University Malaysia Perlis (UniMAP)
Perlis, Malaysia
{ervanyulio, fredina.geraldine, qylimstephy, sagide_taving}@yahoo.com


Abstract—The topic of variability in engineering describes the
possible variations in terms of the changes and improvements
that occur in the engineering fields. The purpose of this
paper is to provide a review of new engineering technology
with an emphasis on its benefits for the advancement of human life.
Following an introduction to variability in engineering, this
paper considers the improvements made by engineering
technology from the past to the present. A range of products,
applications and case histories are discussed. By superposing
variability mechanisms on current languages for variability
engineering, we envision the possibility of identifying
variability points to increase adaptability and optimize the use
of resources in variability engineering. Through an analysis of
the gaps in current variability engineering, we identify
challenges for the field.
Keywords—variability; new engineering technology; human
life advancement
I. INTRODUCTION
Variability in engineering can be approached through
statistical tools that detect excessive process variability
due to specific assignable causes that can be corrected. Such
tools serve to determine whether a process is in a state of
statistical control, that is, whether the variation in the output
of the process exceeds what is expected from the natural
statistical variability of the process.
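To make this idea concrete, the minimal sketch below flags measurements that fall outside control limits estimated from a stable baseline. The data values and the plain three-sigma rule are illustrative assumptions, not details taken from this paper.

```python
import statistics

def control_limits(baseline, k=3.0):
    """Three-sigma control limits estimated from in-control baseline data."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - k * sigma, mean + k * sigma

def out_of_control(samples, limits):
    """Indices of samples outside the limits, i.e. candidates for assignable causes."""
    lower, upper = limits
    return [i for i, x in enumerate(samples) if x < lower or x > upper]

# Illustrative numbers only: a baseline taken while the process was stable,
# followed by new measurements that include one suspicious reading.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
new_readings = [10.0, 10.2, 12.7, 9.9]

limits = control_limits(baseline)
print(out_of_control(new_readings, limits))  # -> [2], the 12.7 reading
```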
The role of service abstraction and service variability, and
their impact on requirements engineering for service-oriented
systems, concerns the time between the design of a business
process and the release of the corresponding application. The
vision of business process management is to lower the effort of
developing and releasing IT systems from business process
models. To support this aim, Service Oriented Architecture
(SOA) is considered a powerful partner. SOA restructures IT
systems, separating development into service engineering
and application engineering.
Moreover, by transforming business process models, new
business process applications can be developed. Business
processes still have to be translated manually into an
executable format due to a lack of semantics. The more
business-focused application engineering and the more
technology-oriented service engineering cannot be
completely separated, as otherwise there is no assurance that
service landscapes and business requirements will actually fit
together.
Reverse engineering variability in source code using
clone detection aims to identify commonalities and
variabilities through a massive comparison of code bases.
The case considered here originates from the Linux Foundation.
Member companies such as HP, Hitachi, IBM, LG and Samsung
all use the Linux kernel sources and want to improve
the reuse of Linux sources.
Thus, there is little common ground between the embedded
kernels used by different manufacturers. For example, Android
and Tizen are released roughly every six months with the latest
kernel, while the industry wants to reuse the same kernel for
several product generations. Device vendors cannot simply
reuse the code; they make their own changes because they are
more concerned with quality, and they typically modify device
code to improve system stability. Armijn Hemel carried out
research on how much the kernels used on various Android
Gingerbread handsets differed from the upstream kernel, in
order to identify areas of the Linux kernel for quality
improvement and to lower development cost.
Robots in the nuclear industry play a major role in
technologies and applications for environments that pose a
safety threat to humans. For instance, the nuclear industry is
limited in its use of workers where radioactive hazards are
frequently present: workers must stop working immediately
once their full-body dose exceeds 50 mSv per year. By using
robots instead of workers, the industry also saves a lot of
money, for example through a reduced quantity of protective
equipment and clothing and a decreased administrative
overhead.
There are more than 450 nuclear power plants in the
world, of which 210 are in Europe, where they provide
one-third of the total electricity.
For handling heavy radioactive loads such as fuel rods,
and for maintenance and repair operations in hot zones, the
industry prefers to use robots because of the good performance
they achieve. Robots are therefore very important for the
nuclear power industry.
Variability and rigour in service computing engineering
aim to lower the production cost of individual products by
letting them share an overall reference model of the product
family. The production process in product line engineering
(PLE) is organized to maximise the commonalities of the
products and, at the same time, minimise the cost of
variations; by identifying such variability points, a variety of
products can be derived from a single product platform.
Service Oriented Computing (SOC) is a paradigm for
distributed computing in which services are used to solve
specific tasks, ranging from answering simple requests to
executing complex business processes. SOC is supported by
two main technologies, Web Services and Grid Computing.
An emerging topic in software engineering lies between
Software Product Line Engineering (SPLE) and Service
Oriented Computing (SOC): developing an engineering
methodology for the systematic, large-scale provision and
market segmentation of software services. Its objectives
include extending existing formal design notations, defining
rigorous semantics for variability over behavioural models of
services, and developing analysis and verification techniques.
In variability reverse engineering, within the realm of
Product Line Engineering (PLE), variability management is
one of the key issues. The success of the whole product line
depends on the correctness of the variability models.
For instance, suppose your company operates in a solution
business and develops customer-specific applications. You
then have the responsibility to satisfy what the customers
need, wish and like, within their budget.
Reverse engineering raises several problems: there are
different kinds of artefacts, no explicit highlighting of
variability, and constraints and dependencies to consider.
Nevertheless, a sequence of steps can address them: decide on
input sources, compare them to derive differences, identify
candidates, select solution-side variability, map it back to
problem-side variability and, as a last step, complete the white
spots. This reverse engineering provides several benefits:
efficiency, derivation support, and a variability model that
reflects knowledge from past projects, among others.
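As a rough illustration of the "compare to derive differences" and "identify candidates" steps, the sketch below diffs the configuration items of two past customer projects and marks differing items as candidate variation points. The project data, item names and the simple set-based comparison are assumptions for illustration only, not the pattern's prescribed implementation.

```python
def candidate_variation_points(project_a, project_b):
    """Compare two artefact sets (name -> value) from past projects.

    Items present in both with equal values are commonality candidates;
    everything else becomes a candidate variation point.
    """
    common, candidates = {}, {}
    for name in project_a.keys() | project_b.keys():
        a, b = project_a.get(name), project_b.get(name)
        if a == b:
            common[name] = a
        else:
            candidates[name] = (a, b)  # differs, or exists in only one project
    return common, candidates

# Illustrative artefacts extracted from two delivered customer solutions
project_a = {"database": "PostgreSQL", "report_module": "on", "language": "EN"}
project_b = {"database": "Oracle", "report_module": "on", "language": "DE",
             "billing_module": "on"}

common, candidates = candidate_variation_points(project_a, project_b)
print(common)      # {'report_module': 'on'}
print(candidates)  # database, language and billing_module differ
```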

II. LITERATURE REVIEW
Variability in engineering, which describes the changes and
improvements in the engineering technology fields, is
essential and beneficial for the advancement of human life.
The changes and improvements in service-oriented
requirements engineering (SORE) position it as an appropriate
channel for mediating between service engineering and
application engineering. It is responsible for the identification
of services, i.e., which services are required for achieving a
certain business goal, but also for the specification of
services, i.e., which features and variants a new service
should provide in order to also be beneficial in future
contexts.
Product line engineering (PLE), which deals with the
strategic reuse of software systems sharing a common,
managed set of features, and software clone detection for
consumer electronic devices are also discussed.
In the area of peaceful uses of nuclear energy, there are
great concerns about nuclear accidents, damage from
nuclear radiation and the question of safe disposal of
radioactive waste. Hence robots are used in the nuclear
power industry, where they minimize operator exposure to
radiation and overcome the frequent difficulties humans face
in accessing critical components such as pressure vessel welds
and steam generator tubes. This advanced technology
improves the industry by making it more effective, more
viable and safer for the environment.
Within software engineering, the synergy between
Software Product Line Engineering (SPLE) and Service-
Oriented Computing (SOC) is discussed: how to develop
rigorous modelling techniques, analysis and verification
support tools for assisting organizations to plan, optimize,
and control the quality of software service provision, both at
design time and at run time. Besides that, variability reverse
engineering provides benefits such as efficiency, derivation
support, and a variability model that reflects knowledge from
past projects.

III. CRITICAL ISSUES FOR HUMAN LIFE ADVANCEMENT
Nowadays, engineers have successfully created many
products and services that improve human life on earth. We
describe some aspects of variability in engineering in terms
of the related issues, applications and solutions. We discuss
products such as robots in the nuclear industry and reverse
engineering variability in source code using clone detection,
as well as services based on Product Line Engineering (PLE)
and Service Oriented Requirements Engineering (SORE).
A. Products
At present, robots are needed in industry to improve
processes, make them run faster and reduce human risk, for
instance in the nuclear industry.
There are over 450 nuclear power plants in the world and
210 in Europe. The USA presently generates approximately
20 per cent of its power in this manner, and the figure in
Japan is 25 per cent. In view of the concerns surrounding
greenhouse gases arising from fossil-fuelled power
generation, the nuclear option is undergoing something of a
renaissance; the installed global capacity continues to rise
and is presently around 400. Further, while some countries,
such as the UK, are presently decommissioning many of
their old nuclear power stations, in other countries there are
efforts to extend the operating lives of older plants.
The Nuclear Regulatory Commission's regulation 10 CFR 20
stipulates that a worker cannot receive a full-body dose of
more than 50 mSv per year, and once this has been reached
they must stop working immediately. This necessitates an
increased number of workers, but by using robots that number
is minimized, which yields additional savings including a
reduction in the quantity of protective equipment and clothing
needed and a decreased administrative overhead.
One example of new robot technology for the nuclear
industry is the Nukem Neater 660, a radiation-tolerant six
degree-of-freedom robot with a range of 2 m and a payload of
25 kg. An interesting robotic application is the laser scabbling
of contaminated concrete and the cutting of pipework. The
system is also equipped with a radiation sensor, and following
the initial stripping it uses this to verify that any remaining
contamination is below the statutory limit values. If the limits
are exceeded, the robot automatically changes its tool and
reworks the relevant areas. This is repeated until the surface is
completely decontaminated and the structure may legally be
demolished. The nuclear industry is thus clearly a significant
user of robots, albeit most often of highly specialized units
produced in relatively small quantities. With the anticipated
construction of new plants in many territories, combined with
the massive decommissioning task ahead, prospects are
strong for innovative robotic solutions in this industry.
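Purely as an illustrative sketch of the sense-and-rework loop just described (the sensor interface, the limit value and the tool name are assumptions, not details of the Nukem system), the control logic could look like this:

```python
STATUTORY_LIMIT = 0.4  # assumed contamination limit, arbitrary units

def decontaminate(surface, read_contamination, strip_area, change_tool):
    """Repeat strip-and-measure until every area is below the statutory limit.

    `surface` is a list of area identifiers; the three callables stand in for
    the robot's sensor reading, stripping action and tool change.
    """
    while True:
        hot_areas = [a for a in surface if read_contamination(a) > STATUTORY_LIMIT]
        if not hot_areas:
            return  # surface fully decontaminated
        change_tool("laser_scabbling_head")
        for area in hot_areas:
            strip_area(area)

# Tiny simulation: contamination halves on each stripping pass.
levels = {"wall_1": 1.6, "wall_2": 0.3}
decontaminate(
    surface=list(levels),
    read_contamination=lambda a: levels[a],
    strip_area=lambda a: levels.__setitem__(a, levels[a] / 2),
    change_tool=lambda t: None,
)
print(levels)  # all areas now at or below the assumed limit
```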
Another product-related issue concerns consumer
electronic devices, namely the reuse of embedded Linux
software. Among the problems is the increasing
fragmentation of Linux derivatives. As a solution, large-scale
clone detection techniques were used to compare various
Linux versions to their vendor-specific variants, and many
changes were found that had not been ported back. The
analysis proceeds in three steps: first, list the places where
variability is expected; then report the variability of each
device separately; and finally give an overall assessment.
These are the steps:
1. Expected differences
We mostly expect to see hardware-specific differences,
such as chipset support, device drivers and flash specifics.
Flash file systems: one popular flash file system on
Android is YAFFS2. Because this file system had not been
merged upstream at kernel.org (it had been merged into the
Android kernel tree), and because YAFFS2 also depends on
the specifics of the flash chips being used, it is likely that
differences will be found here.
Architecture support and specific drivers: since some
hardware is very specific to a single device, such as GPIO
code, flash chip specifics, touch screens and board support,
many differences are likely to be found there.
2. Results per device
The results for every device are checked individually to
see where and how each one diverges from the upstream kernel.
3. Overall assessment
More interesting than the overall device results are the
actual code differences between devices. By analyzing all
unique files from all devices it becomes easier to see where
there is variability.
The results indicate that there is a need for vendors and the
upstream Linux kernel developers to work more closely
together. Some of the changes made by vendors might be
general enough to be integrated into mainline development.
Likewise, vendors should take more care to propagate bug
fixes into their development branches. Better communication
between Linux kernel developers and vendors would be a
first step.
B. Services
In the services area, variability management is one of
the key issues, and it is a common problem in computing
management. In variability reverse engineering, knowledge
about variability is not addressed explicitly but is embedded
in many development artefacts. This pattern provides an
approach to extract that hidden knowledge and transform it
into the required problem-side commonality/variability
model. How can the undocumented knowledge be made
accessible for future work, and contribute to a viable
commonality/variability model?
The following forces influence the solution:
Different kinds of artefacts. The existing base of
information from previous projects spans various types of
artefacts (documents, specifications, tools, code). All of
them might be a source for commonality and variability.
No explicit highlighting of variability. From the
perspective of a single project, variability does not exist,
since the customer wants a specific system. Therefore, each
project supports its own needs but does not care about the
differences to others. The copy/paste approach has helped to
get a quick start, but could not keep the different projects
from following their own, isolated paths, resulting in an
overly wide code base.
Constraints and dependencies. Typically, a
variability model does not only contain the possible
variation points and variants, but also the dependencies
between them, such as conflicting variants.
Another issue concerns Service Oriented Requirements
Engineering (SORE), as shown in Figure 1. The service-
oriented paradigm typically separates the development of
systems into service engineering and application engineering,
and SORE is used as a mediator between the two. Service
Oriented Architecture (SOA) is often considered a powerful
partner supporting this aim. The basic idea of SOA is the
restructuring of IT systems into loosely coupled, independent
services. These services, which can be recursively integrated
into more coarse-grained (business) services, should allow the
reuse of existing functionality in order to shorten the time
between process design and system implementation when
business requirements change. The authors point out the
necessity of providing service capabilities at the right level of
abstraction and variability, as otherwise an SOA will probably
not yield any business benefits. Based on their research in
progress, they identified basic principles that SORE must
build on and derived open questions. One of the main results
will be the alignment of business process requirements with
the services actually provided by the service landscape, in
order to streamline the transition between process design and
the deployment of related applications. For this purpose, they
plan to adapt established product line approaches in SORE to
assure a high degree of reusability and reuse already in an
early phase.


Figure 1: Integrated view of SOA and PLE
IV. DISCUSSION
The Consumer Electronics Working Group (CEWG) in
the Linux Foundation has identified several problems in the
reuse of embedded Linux software for consumer electronic
devices. The member companies of this group use the Linux
kernel sources for their embedded consumer electronic
devices. Their products have a shorter lifetime than enterprise
products and hence need shorter release cycles, so they
modify device driver code to improve system stability and
performance. Modified files were first detected with a
hash-based approach, in which each file is hashed, for
example using the SHA256 checksum, based on the text
contained in the file. This approach has some shortcomings,
however. First, it can detect only files that are identical
character by character; minor modifications such as layout or
comment changes lead to a different hash value. Possible
variabilities can be detected by finding pairs of files with the
same name and path but different hash codes, yet if files were
renamed or moved, the detection might fail. For that reason
the two code bases were prepared and then an inter-system
clone detection technique was used to compare them (a sketch
of the hash-based comparison appears after the preparation
steps below). The preparation steps are as follows:
(1) The two code bases are downloaded. In this case
study, these are the kernel code and the device code; the
terms device corpus and kernel corpus are used to distinguish
the two code bases further on.
(2) All files that are not part of the comparison are
removed from the two corpora. For instance, data,
documentation and configuration files can be removed if
they are not relevant for the comparison; they would only
increase disk usage and the time needed for directory tree
traversals.
(3) In the device corpus, all identical files but one are
removed, so every file is contained only once in identical
form in the device corpus. This step helps to reduce the
amount of data to be compared. Because the original sets of
identical filenames are kept, no information is lost: if there is
a match with a device file, the match also holds for all other
identical files in the original set.
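The minimal sketch below illustrates the hash-based part of this workflow under stated assumptions: it is not the CEWG tooling, the directory layout is hypothetical, and it only covers computing SHA256 checksums, deduplicating identical device files while remembering their names, and flagging same-path files whose hashes differ. The actual study additionally used inter-system clone detection to catch renamed or moved files.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """SHA256 checksum of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def index_corpus(root):
    """Map relative path -> hash for every file under `root`."""
    root = Path(root)
    return {str(p.relative_to(root)): sha256_of(p)
            for p in root.rglob("*") if p.is_file()}

def deduplicate(corpus):
    """Keep one representative per hash, remembering the identical paths."""
    by_hash = {}
    for path, digest in corpus.items():
        by_hash.setdefault(digest, []).append(path)
    return by_hash

def changed_files(device_corpus, kernel_corpus):
    """Same relative path, different hash: a candidate vendor modification."""
    return [p for p, h in device_corpus.items()
            if p in kernel_corpus and kernel_corpus[p] != h]

# Hypothetical local checkouts of the two code bases
device = index_corpus("corpora/device_kernel")
kernel = index_corpus("corpora/upstream_kernel")
print(changed_files(device, kernel))
```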
Several principles have to apply to service-oriented
requirements engineering (SORE). The first is to provide
service capabilities as business elements to increase reuse: a
service and its capabilities must always be represented as
business elements, independent of how they are internally
implemented. In particular, the alignment of business
activities with goals can help to identify appropriate business
services. The next is to plan service variants systematically to
increase reusability: reuse can be significantly increased
through the basic ideas of product line engineering, for
example the systematic identification, planning and
development of commonalities and variations. Determining in
advance the contexts or configurations in which a service will
be used helps to develop services that are reusable in different
situations. Besides that, implicit requirements should be
hidden to minimize specification effort: there are many
general and implicit requirements guiding the whole
enterprise, such as business rules, IT policies, or information
on the integration of technical services. These requirements
must also be considered, but they are not to be described
within business processes, in order to avoid overloading the
process descriptions. Finally, non-process requirements must
not be forgotten: there are requirements that are specific to a
particular application but not related to the underlying
business process, for example the usability of a service-oriented
e-commerce application.
Product Line Engineering (PLE) is an approach to
develop a family of products using a common platform and
mass customization. This engineering approach aims to
lower the production costs of the individual products by letting
them share an overall reference model of the product family,
while at the same time allowing them to differ with respect
to particular characteristics in order to serve, e.g., different
markets. As a result, the production process in PLE is
organized so as to maximize the commonalities of the products
and at the same time minimize the cost of variations. Product
variants can then be derived from the product family,
which allows both reuse and differentiation of products of the
family. Software Product Line Engineering (SPLE) is a
paradigm for developing a diversity of software products and
software-intensive systems based on the underlying
architecture of an organization's product platform.
Variability among products is made explicit by variation
points, i.e., places in design artifacts where a specific decision
is narrowed down to several features but the feature to be
chosen for a particular product is left open, such as optional,
mandatory, or alternative features.
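As a hedged illustration only (the feature names, the data structure and the product configuration are invented and not taken from any particular SPLE tool), the sketch below encodes variation points with mandatory, optional and alternative features and checks that a chosen product configuration resolves every variation point consistently.

```python
# Each variation point lists its candidate features and how they may be bound:
#   "mandatory":   the listed feature(s) must be selected in every product.
#   "optional":    the feature may or may not be selected.
#   "alternative": exactly one of the listed features must be selected.
VARIATION_POINTS = {
    "engine_control": ("mandatory", ["base_controller"]),
    "remote_diagnostics": ("optional", ["telemetry_module"]),
    "display": ("alternative", ["lcd_display", "oled_display"]),
}

def validate_product(selected):
    """Return a list of violations for a set of selected features."""
    errors = []
    for vp, (kind, features) in VARIATION_POINTS.items():
        chosen = [f for f in features if f in selected]
        if kind == "mandatory" and len(chosen) != len(features):
            errors.append(f"{vp}: mandatory feature(s) missing")
        elif kind == "alternative" and len(chosen) != 1:
            errors.append(f"{vp}: exactly one alternative must be chosen")
    return errors

# One concrete product derived from the platform
product = {"base_controller", "telemetry_module", "oled_display"}
print(validate_product(product))  # [] -> configuration is consistent
```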
Nuclear reactors are essentially subject to extensive test
and inspection regimes. Non-destructive testing (NDT) and
visual inspection methods play a vital role in assessing the
structural integrity of the critical components of nuclear
power plants. On the one hand, detecting and sizing flaws
during periodic in-service inspection provides input data for
fracture mechanics calculations; on the other, monitoring
component and material conditions can help to assess
degradation processes in terms of defect growth and thus
contributes to possible early warning of component failure.
The importance of robotic inspection to older nuclear power
plants was well illustrated by recent work on the Vermont
Yankee BWR. The steam dryer, which is used to extract
moisture from steam produced in the BWR before it is fed to
the turbine, had received particular scrutiny because of cracks
found in its structure.
One of the main purposes of robotics in decommissioning
is to reduce the radioactive dose to which workers are
exposed. The more emphasis is placed on immediate
decontamination and dismantling (DECON), as opposed to
safe storage, the more robotics will be required. In some
situations, owing to the degree of radiation and the very long
half-lives of the radioactive materials involved, robotics is the
only feasible option. However, most current robotic systems
employ virtually no autonomy or even programmed motion;
invariably there is a human in the control loop.
A lot of knowledge regarding commonality and
variability has been accumulated in the past but is not
explicitly documented. For this undocumented knowledge to
be made accessible for future work and to contribute to a
viable variability model, the following forces, already
outlined in Section III, influence the solution.
Different kinds of artefacts. The existing base of
information from previous projects spans various types of
artefacts, such as documents, specifications, tools and code.
All of them might be a source for commonality and
variability.
No explicit highlighting of variability. From the perspective
of a single project, variability does not exist, since the
customer wants a specific system. Therefore, each project
supports its own needs but does not care about the differences
to others. The copy-and-paste approach has helped to get a
quick start, but could not keep the different projects from
following their own, isolated paths, resulting in an overly
wide code base.
Constraints and dependencies. Typically, a variability model
does not only contain the possible variation points and
variants, but also the dependencies between them, such as
conflicting variants.
V. CONCLUSION
In this paper, we discussed variability in engineering,
including the possible variations in terms of the changes and
improvements in the engineering technology fields from the
past to the present. We have related the critical issues that
show significant relevance to the benefits of human life
advancement. Technological advancement has shown
substantial growth in every field, whether it be the service
engineering and application engineering of service-oriented
systems, electronic devices of daily use, robotic systems in
the nuclear industry, service computing engineering, product
line engineering (PLE), or others such as communication
systems, astronomy, semiconductor devices, automobiles,
building and architectural design techniques or computers.
These advancements are also accompanied by reductions
in the time, effort and cost of production of any product, from
microchips to state-of-the-art automobiles and from
sophisticated devices to mega structures, coupled with greater
ease of design and development.
Needless to add, these advancements also invigorate
economic development, as the effective use of technology
reduces material production costs and overhead charges,
which generates savings in the economy and thus contributes
to national development. Variability in engineering will
continue to advance and improve rapidly as we move into
the next millennium. The important part is to ensure that
these advances benefit humanity as a whole.
ACKNOWLEDGMENT
We would like to thank the supervisors of this technical
report project, Madam Norshahrizan Bt Nordin and Mr. Harith
bin Amlus, for their guidance and advice. They have greatly
inspired and motivated us in preparing this technical report,
and we thank them for showing us examples related to its
topic. We would also like to thank the authors of the journals
related to variability in engineering for providing the valuable
information that enabled us to complete this technical report.