
Fusion Engineering and Design 81 (2006) 1795–1798

Experience with RFX-mod data acquisition system


M. Cavinato, A. Luchetta, G. Manduchi, C. Taliercio*, F. Baldo, M. Breda,
R. Capobianco, M. Moressa, P. Simionato, E. Zampiva
Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, 35127 Padova, Italy
Available online 11 May 2006

Abstract
Operation of the new RFX machine started in fall 2004, using a completely renewed version of its MDSplus data acquisition system. The new system runs under Linux and is highly distributed, employing approximately 20 machines which perform local data acquisition and communicate with a central server. The overall experience after several months of operation has been quite successful. In particular, the following aspects represent the major changes, besides the migration from VMS to Linux:
- exclusive usage of the MDSplus scripting language TDI to write the device-specific application code;
- exclusive usage of Java, and in particular of jScope, for data display in the control room and in the offices;
- exclusive usage of Java for the coordination tools.
© 2006 Elsevier B.V. All rights reserved.
Keywords: Data acquisition; Distributed systems; Java interface

1. Introduction
RFX-mod began operation in December 2004, after a 5-year shutdown during which several major improvements were made. During the shutdown, the data acquisition system was completely renewed by migrating from VMS to a distributed Linux architecture. Both the old and the new data acquisition systems are based on the MDSplus data acquisition package. While retaining the original architecture, the new package has been completely re-written and currently represents the de facto standard for data access in the fusion community. MDSplus can be used in a variety of configurations thanks to its native support for remote access via mdsip, a communication protocol based on TCP/IP [1]. The configuration adopted at RFX-mod is described in [2]; in this paper we report our experience after more than 1 year of operation (including commissioning), which has confirmed the validity of the initial choices. We also report the experience gained in the extensive usage of the Java framework for data visualization and task coordination. RFX-mod is, in fact, the only experiment in which the Java components of MDSplus are used extensively.

* Corresponding author. Tel.: +39 049 8295039; fax: +39 049 8700718.
E-mail address: cesare.taliercio@igi.cnr.it (C. Taliercio).

0920-3796/$ – see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.fusengdes.2006.04.037

2. Hardware architecture and data organization
RFX-mod employs extensive CompactPCI (cPCI) data acquisition, based on 13 cPCI racks, each hosting a Pentium III CPU board [3], a timing board [4] and up to 12 transient recorders [5]. The timing board provides eight channels capable of generating timing signals, such as multi-speed clocks and triggers, with 100 ns time resolution. The timing boards are connected via a fibre optic link, so that a single 1 MHz clock synchronizes all the timing devices, including the older CAMAC timing devices still used for diagnostic data acquisition. The transient recorder boards provide 16 channels, with 2 Msamples of memory per channel and a sampling rate of up to 200 kHz.
The CPU boards run RedHat 8 Linux and MDSplus. A central Linux server (running RedHat Enterprise Linux) hosts all the pulse files except one, belonging to a diagnostic that produces a large amount of data (350 Mbytes) and is supervised by a Windows server. Windows PCs are used for some diagnostics, while most diagnostics use CAMAC data acquisition by means of two CAMAC Serial Highway drivers connected via SCSI to the central Linux server.
The usage of Windows instead of Linux is not encouraged, but there are situations in which it is unavoidable, because the required drivers are available only for Windows, or because of the in-house availability of interface boards and drivers, such as GPIB adapters. Although not actively involved in data acquisition, 22 Linux workstations are located in the central control room for the user interface. One workstation is used by the operator to supervise the setting of parameters and the operation of the data acquisition tasks. The others are used to display acquired waveforms via jScope and to run programs for data analysis and visualization.
Whereas CAMAC data acquisition is carried out by processes running on the central Linux server, which therefore read and write pulse file data locally, the rest of data acquisition is carried out by tasks running on the cPCI computer boards or on the diagnostic Windows PCs. In the latter two cases, network data transfer is required to read and write the pulse files. We considered the following configurations:
(1) export the server file system through NFS, so that every computer can access the pulse file as if it were local;
(2) let the computers access pulse files via mdsip, the MDSplus protocol for exporting pulse file data; no file system is exported in this case;
(3) store the pulse file hosting crate-specific data on the crate's local disk; in this case the pulse file is handled by MDSplus as a remote subtree.
The first solution was excluded because it did not provide good performance. To access pulse file data using the second solution, it is only necessary to change an environment variable used by MDSplus to locate the directory containing the pulse files. If such a variable contains an IP address, the data access layer of MDSplus transparently handles mdsip communication, and no change is needed in the user code. The third solution may look interesting because, in principle, it reduces network traffic and performs faster data storage. On the other hand, handling tens of different subtrees on different disks raises a serious management problem for backup and data integrity. For these reasons, the second solution was chosen, storing most pulse files in the central Linux server and using a RAID 0+1 configuration to ensure data integrity. The mdsip servers are activated as xinetd services and, despite the fact that 100 Mbytes of data are written into the pulse file per shot, we never experienced a failure in data access after more than 1 year of operation, once the system had been properly configured.
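As an illustration of the second solution, MDSplus locates the pulse files of a tree through an environment variable named after the tree; prefixing the path with a host name routes all data access through mdsip. A minimal sketch, assuming a hypothetical tree name mytree and server name central-server (the actual RFX tree and host names differ):

```shell
# Local access: the data layer reads and writes the pulse files directly.
export mytree_path=/trees/mytree

# Remote access: the same variable, prefixed with the server host name,
# makes the MDSplus data access layer open an mdsip connection instead;
# no change is needed in the user code.
export mytree_path=central-server::/trees/mytree
```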

3. TDI scripting language and Java device beans
The porting of the data acquisition system from VMS to Linux required more than the adoption of the new multiplatform version of MDSplus. To perform data acquisition, in fact, MDSplus needs to be integrated with device-specific components handling the configuration set-up, the data acquisition and the user interface for the hardware devices defined in the experiment. It was therefore necessary to port this code to the new platform. Such a process can be eased by a code generation tool, which produces the MDSplus-specific code segments (i.e. the code interacting with the MDSplus framework) and allows the developer to concentrate on the device-specific portion of the code. This approach was followed, for example, at Alcator C-Mod [6], when MDSplus was ported to Linux. We preferred, however, an alternative approach: using the MDSplus scripting language, called TDI, to develop all the device-specific code [7]. TDI is the language for expressions, ubiquitous in MDSplus. Expressions can be recursively composed to form TDI programs, whose syntax resembles that of scripting languages such as IDL and Matlab. Since TDI natively handles all the MDSplus data types, the resulting TDI code is very compact, and usually no more than a few tens of TDI statements are required to implement a device support routine.
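As a flavour of what such a routine looks like, the following is a sketch of a TDI store method for a hypothetical device (the device name, node layout and data values are illustrative, not the actual RFX code); build_signal, build_with_units, build_dimension, build_window, build_range and size are standard TDI builtins for composing MDSplus signals:

```
/* Hypothetical TDI store method for a device called MYADC;
   by MDSplus convention, a device method is a public fun
   named <device>__<method>. */
public fun MYADC__store(as_is _nid, optional _method)
{
    _data = [0., 1., 2.];   /* in a real device, read from the hardware driver */
    _trig = 0.;             /* trigger time, in seconds */
    _clock = build_range(*, *, 5E-6);  /* 200 kHz sampling clock */

    /* Compose a full MDSplus signal: data with units, plus a timebase
       built from the trigger window and the clock. */
    _dim = build_dimension(build_window(0, size(_data) - 1, _trig), _clock);
    _sig = build_signal(build_with_units(_data, "V"), *, _dim);

    /* Here the signal would be written into the pulse file
       under the device subtree (details omitted). */
    return(1);
}
```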
Even more dramatic is the reduction in the development time for the device-specific set-up forms, to be integrated in jTraverser, the Java graphical user interface of the experiment model. The Java Device Beans [8], developed to implement the interface between visual components and MDSplus, allow the construction of a new interface in a couple of hours, or even less.

4. Java tools
In RFX-mod, the jDispatcher [9] tool is used to coordinate the data acquisition tasks during the pulse sequence. In the old system most data acquisition tasks ran on the same servers hosting the pulse files, so the duty of the dispatcher tool was mainly the coordination of tasks within the same VMS cluster, and OS-specific communication was used. The current situation is quite different, since 12 cPCI CPUs and 3 Windows PCs are involved during the sequence. A coordination tool must therefore handle concurrent threads performing network communication, and such an implementation requires much care because of the possibility of race conditions and deadlocks. For this reason, we decided to develop a Java tool, even though a similar tool was already available in the MDSplus package. The experience confirmed our impression: thanks to the clean definition of threads and network communication in the Java framework, the jDispatcher tool was developed in less than 2 months, and since its initial delivery no bugs related to race conditions have appeared. The experience in using Java in this context has been so positive that we decided to implement in Java also the action servers, i.e. the tasks which receive execution commands from jDispatcher and actually carry out the data acquisition tasks.
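To illustrate why Java fits this task, the following sketch (not the actual jDispatcher code; class and method names are hypothetical) shows the core pattern: the actions of one phase of the pulse sequence are submitted to a thread pool, and the dispatcher blocks on a barrier until all of them have completed before moving to the next phase:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of phase-ordered action dispatching:
// actions within a phase run concurrently, but a phase starts
// only after every action of the previous phase has completed.
public class PhaseDispatcher {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    public void runPhase(List<Runnable> actions) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(actions.size());
        for (Runnable action : actions) {
            pool.submit(() -> {
                try {
                    action.run(); // e.g. send a command to an action server
                } finally {
                    done.countDown();
                }
            });
        }
        done.await(); // barrier: the phase ends when all actions finish
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

In the real tool each action corresponds to a message sent over TCP/IP to an action server; the point of the sketch is that the synchronization logic stays explicit and compact in Java, which is what makes race conditions easier to avoid than in hand-written native threading code.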
The requirements of jDispatcher are well addressed by a Java-based solution: a heavily multithreaded environment with support for TCP/IP communication, but neither memory-demanding nor computing-intensive. If we consider jScope, the Java-based visualization tool of MDSplus, memory consumption may represent a problem, and in fact, in our experience, this sometimes limited its usage. On the other hand, users were satisfied with the possibility of displaying multiple waveforms and using colors, features extensively used in almost all of the jScope applications running in the control room.
Careful tuning of the tool was therefore required when jScope came into extensive use, in order to remove memory leaks and to reduce the allocation and deallocation of large objects. There is, however, a lower bound of 512 Mbytes on the memory required to use jScope in real-world applications. This amount of memory is now available in all the Linux workstations running in the control room, and typically several tens of signals (sometimes hundreds) are displayed on every workstation. There are, however, some diagnostics which produce very large signals: in this case jScope may still run out of memory, and waveform update may become slow. For this reason, a new feature has been added to jScope which allows data resampling at the data server site when data are accessed via mdsip. In this case the mdsip server (not written in Java) performs data resampling on the fly, so that only a few Kbytes are exchanged over the network, and memory consumption is reduced because jScope needs to handle smaller arrays. On-the-fly resampling works well when the whole waveform is displayed, as more points would not add information because of the screen resolution, but it would give a poor-quality display when the user zooms into portions of waveforms. For this reason, at every zoom operation jScope requests the (resampled) points for the enlarged portion in order to increase resolution. The user experiences an increase in the resolution of the zoomed portion a fraction of a second after zooming, which proved well acceptable. jScope is also extensively used on the Windows PCs located in the offices. Users are encouraged to use jScope on the PC

1798

M. Cavinato et al. / Fusion Engineering and Design 81 (2006) 17951798

instead of opening an X session to some Linux workstation in the control room, in order to better balance
resource consumption.
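The server-side decimation described above can be sketched as follows (a minimal illustration of min/max resampling, not the actual mdsip server code, which is not written in Java): each horizontal bin of the display keeps only the minimum and maximum sample it contains, so visible peaks survive while the transferred array shrinks to twice the number of bins:

```java
// Hypothetical min/max resampling, as a data server might apply it
// before sending a waveform for display: for each of `bins` horizontal
// bins, keep only the minimum and maximum sample of that bin.
public class MinMaxResampler {
    public static float[] resample(float[] samples, int bins) {
        if (samples.length <= 2 * bins) {
            return samples.clone(); // already small enough: send as-is
        }
        float[] out = new float[2 * bins];
        for (int b = 0; b < bins; b++) {
            int start = (int) ((long) b * samples.length / bins);
            int end = (int) ((long) (b + 1) * samples.length / bins);
            float min = samples[start];
            float max = samples[start];
            for (int i = start + 1; i < end; i++) {
                min = Math.min(min, samples[i]);
                max = Math.max(max, samples[i]);
            }
            out[2 * b] = min;     // keeping both extremes preserves peaks
            out[2 * b + 1] = max;
        }
        return out;
    }
}
```

On a zoom operation, the same routine would simply be applied to the index sub-range covering the enlarged portion, which is why resolution increases shortly after each zoom.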
As a matter of policy, the MDSplus package is not installed on the office PCs unless there is a compelling reason, such as the usage of Matlab, which requires MDSplus running locally to access experiment data. jScope is therefore the only MDSplus tool installed on every office PC. In order to reduce maintenance time, and to keep the tool up-to-date in every installation, Java Web Start is used to distribute the latest jScope version. Every time a user starts a jScope application, Java Web Start checks whether the local version of jScope is up-to-date, downloading the latest version if necessary. This utility was originally used within RFX, but it is now also available to external users at http://www.igi.pd.cnr.it/wwwexp/technologies/MDSplusTools/jScopeDownload.html.
Some users have also installed jScope on their personal PCs at home. To guarantee secure data access without punching holes in the RFX firewall, a secure shell (ssh) tunnelling option has been added to jScope. With ssh tunnelling, a local port is configured as if it were a remote connection, with all communication handled securely by ssh.

5. Conclusions
This paper has reported the experience gained after about 1 year of operation at RFX-mod, using a new hardware architecture and novel software tools. The good performance of many MDSplus tools was not a surprise, given the successful experience with MDSplus on several machines. The peculiar aspect of our application is the extensive usage of Java tools in a demanding environment. Previously, the usage of Java at RFX had been limited to test cases and small applications. Our experience has been so positive that we would recommend extensive use of Java in the development of data acquisition systems. Java cannot, of course, replace every component: low-level data access and intensive computation represent critical components and require careful programming in native code, but the usage of C and C++ in many other components, for data visualization, system supervision and coordination, would imply longer development and especially debugging times. Even for those components for which memory resources proved more critical, such as data visualization, our experience showed that investing some resources in a 512 Mbytes memory expansion is well worth it, considering how much development and debugging time is saved.

Acknowledgement
This work was supported by the European Communities under the contract of Association between
EURATOM/ENEA. The views and opinions expressed
herein do not necessarily reflect those of the European
Commission.

References
[1] T.W. Fredian, J.A. Stillerman, MDSplus. Current developments and future directions, Fusion Eng. Des. 60 (2002) 229–233.
[2] O. Barana, A. Luchetta, G. Manduchi, C. Taliercio, Recent developments of the RFX control and data acquisition system, Fusion Eng. Des. 71 (2004) 95–99.
[3] SBS Technologies, Inc., CT7 Datasheet at http://www.sbs.com/products/73/.
[4] INCAA Computers BV, DIO2 timing device at http://www.incaacomputers.com/products/pdf/dio2-web.pdf.
[5] INCAA Computers BV, TR10 Transient Recorder at http://www.incaacomputers.com/products/pdf/tr10-web.pdf.
[6] T.W. Fredian, M. Greenwald, J.A. Stillerman, Migration of Alcator C-Mod computer infrastructure to Linux, Fusion Eng. Des. 71 (2004) 89–93.
[7] B.P. Duval, X. Llobet, P.F. Isoz, J.B. Lister, B. Marletaz, Ph. Marmillod, J.-M. Moret, Evolution not revolution in the TCV tokamak control and acquisition system, Fusion Eng. Des. 56–57 (2001) 1023–1028.
[8] O. Barana, A. Luchetta, G. Manduchi, C. Taliercio, Java development in MDSplus, Fusion Eng. Des. 60 (2002) 311–317.
[9] O. Barana, A. Luchetta, G. Manduchi, C. Taliercio, A general-purpose Java tool for action dispatching and supervision in nuclear fusion experiments, IEEE Trans. Nucl. Sci. 49 (2) (2002).
