
EUROPEAN COMMISSION

16th Workshop of
Marie Curie Fellows :
Research Training in Progress
Held at the Institute for Energy,
Joint Research Centre of the European Commission
Petten (Netherlands), 21-22 October, 2003
Dr T. Papazoglou
European Commission - Directorate-General for Research, Brussels
Prof. Roger Hurst
Joint Research Centre

2004

Directorate-General for Research


Human resources and mobility

EUR 21252

Europe Direct is a service to help you find answers


to your questions about the European Union
Freephone number:

00 800 6 7 8 9 10 11

LEGAL NOTICE:
Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of the
following information.
A great deal of additional information on the European Union is available on the Internet. It can be accessed through the Europa server (http://europa.eu.int).
Cataloguing data can be found at the end of this publication.
Luxembourg: Office for Official Publications of the European Communities, 2004
ISBN 92-894-8198-6
© European Communities, 2004
Reproduction is authorised provided the source is acknowledged.
Printed in Italy
PRINTED ON WHITE CHLORINE-FREE PAPER

TABLE OF CONTENTS

Photos of event ......................................... 3-6
Foreword ................................................ 7-8
Workshop Programme ...................................... 9-11
Overview of all Poster Presentations .................... 12-14
List of Participants .................................... 15-18
Discussion and Conclusions .............................. 19-161
    Oral Presentations .................................. 19-78
    Poster Presentations ................................ 79-161

FOREWORD
In 1999 the Commission launched a series of Workshops of Marie Curie Fellows on Research Training in Progress. The purpose of these scientific workshops is to complement the usual monitoring of contracts with the results of the scientific work of the Marie Curie Fellows. These meetings provide a forum for scientific discussion between fellows, between fellows and senior scientists, and between the scientific community and the European Commission. In addition to the scientific purpose, the social communication and interaction between fellows can be expected to develop positively during such a workshop.
The present workshop, organised in the Netherlands at the European Commission's Institute for Energy, is in fact the 16th organised so far in this series of workshops, with the following aims:

- raise the visibility of the Marie Curie Fellowships for young researchers
- facilitate interdisciplinary scientific discussion and dissemination of knowledge
- support the training and mobility of researchers throughout Europe
- promote scientific and technological excellence
- provide a showcase for a wide range of research activities
- promote active participation of Marie Curie Fellows in the Netherlands in an exchange of ideas amongst themselves, with JRC Fellows, established scientists and the European Commission
- encourage the presentation of research projects and results through oral or poster means.

The 16th workshop of Marie Curie Fellows on Research Training in Progress was organised on
behalf of the Marie Curie Programme Directorate of EC DG RTD by the EC DG JRC Institute
for Energy situated in the sand dunes of Petten on the North West coast of the Netherlands. The
Workshop was held in the Forum lecture theatre on the Petten site and the Fellows
accommodated in hotels in nearby villages. A visit to the sea aquarium and old Napoleonic Fort
near Den Helder followed by a dinner hosted by the Institute for Energy formed a key social
integration highlight of the event. The organisation of every aspect of the event was in the hands
of the PR and Communication Group of the Institute under the leadership of Mr. Darren
McGarry with Ms Katinka van Lierop carrying out the duties of Workshop Secretary with the
able technical support of Mr. Jan Manten and Mr. Ruud Zitter. The Joint Chairmen of the Workshop were Prof. Roger Hurst from DG JRC and Dr. Theodore Papazoglou from DG RTD. The present proceedings have been put together by the above-mentioned members of the Workshop Organising Committee.
All the Marie Curie Fellows working in the Netherlands were invited to this meeting. Their research spans a multitude of disciplines, from medicine to physics and from food science to mathematics. Some 70 Fellows were able to attend, leading to a Workshop Programme of some 20 oral presentations and almost 50 posters. Some Institute for Energy Fellows contributed to both the oral and poster presentations. In addition, the Director of the Institute for Energy, Prof. Kari Törrönen, presented the activities of the Institute, and two senior scientists presented their work: "Medical applications of a nuclear research reactor" (Dr. Ray Moss) and "The future for Energy in Europe" (Dr. E. Tzimas). A tour of the Hydrogen Energy Laboratory and the High Flux Reactor was also included in the Fellows' full schedule.
Most of the papers presented at the Workshop are included in these proceedings.
In summary, from the point of view of the organisers and, more importantly, of the Fellows themselves, the Workshop can certainly be considered a clear success. The striking range of disciplines obliged the young speakers and poster authors to master conveying their messages to a less specialised audience than is usual in their own scientific communities. Great credit should be given to them for achieving this difficult task for the benefit of all concerned. In addition, the integration and communication between the 20 nationalities was manifest during the technical sessions, was enhanced during the coffee breaks, and excelled during the Marie Curie Quiz at the Workshop Dinner. A successful and enjoyable event, deserving of these Proceedings!
Roger Hurst, Darren McGarry
Institute for Energy
DG-Joint Research Centre

WORKSHOP PROGRAMME

Marie Curie Workshop


at the Institute for Energy
Joint Research Centre of the European Commission
Petten (The Netherlands), 21-22 October 2003
(Forum Auditorium)
Day 1: 21st October 2003
Morning Session:
08:00

Collection from Alkmaar Station and Hotels


(08:00 Alkmaar Station; 08:15 Hotel Marijke; 08:25 Parkhotel; 08:30 Tulip Inn Callantsoog)

09:00

Registration

09:45

Opening Session / Welcome


Chairmen:
Prof. Roger HURST, Institute for Energy - Petten, DG JRC, EC
Dr. Theodore PAPAZOGLOU, Marie Curie Fellowships - DG RTD, EC

09:50

Prof. K. TÖRRÖNEN, Director of the Institute for Energy - Petten, DG JRC, EC: "The JRC, an Institution for Research & Training"

10:10

Dr. Theodore PAPAZOGLOU, Marie Curie Fellowships - DG RTD, EC


"Marie Curie Fellowships in the 6th FWP"

10:30

Oral Presentations 1 - 5 (5x20 mins):

(1) Isabel Maria ADAME, Medis BV, Leiden:
"Automatic Segmentation and Plaque Characterization in Atherosclerotic Carotid Artery MR Images"
(2) Andrea AHERN, Shell Global Solutions, Amsterdam:
"On the Move ... with the Silver Catalyst in the Ethylene Oxidation Reaction"
(3) Robert WIMPORY, Marie Curie fellow at JRC's Institute for Energy, Petten:
"Neutron methods for structural integrity assessment"
(4) Belén BELLIURE, Univ. of Amsterdam:
"Does Tomato Spotted Wilt Virus benefit its vector Frankliniella occidentalis?"
(5) Bruno CHAVEZ, Unilever R&D, Colworth:
"Processing and Formulation Effects on Structural and Rheological Properties of Ice Cream Mix, Aerated Mix and Ice Cream"

12:10

Group photo

12:15

Lunch buffet + Poster Presentations (Forum Reception)


Afternoon Session:

13:30

Key Note Scientific Lecture 1:


Dr. R. MOSS, Institute for Energy - Petten, DG JRC, EC
"Medical applications of a nuclear research reactor"

14:00

Oral Presentations 6 - 10 (5x20 mins):

(6) Mariadriana CREATORE, Techn. Univ. Eindhoven, Dpt. of Applied Physics:
"The expanding thermal plasma technique. An example: deposition of oxide dielectrics"
(7) Céline MORENS, Univ. of Groningen, Dpt. of Animal Physiology:
"Effects of a high fat - high protein diet in two rodent models for obesity, SHU9119-treated Wistar rats and fa/fa Zucker rats"
(8) Hans-Ruprecht NEUBERGER, Maastricht University:
"Atrial fibrillation in a goat model of atrial dilatation"
(9) Catherine ROBIN, Erasmus Medical Center, Rotterdam:
"Localization of functional hematopoietic stem cells in the mouse AGM"
(10) Markus SCHWARZ, Shell Global Solutions, Amsterdam:
"Hydrogen storage materials for mobile applications"

15:40

Coffee + Poster Presentations (Forum Reception)

16:20

Oral Presentations 11 - 15 (5x20 mins):

(11) Giacomo SPINSANTI, Animal Ecology, Vrije Universiteit, Amsterdam:
"Molecular physiology of metal tolerance in the collembolan Orchesella villosa: Isolation and characterization of a cadmium-binding Metallothionein"
(12) Kristjan TABRI, Schelde Naval Shipbuilding, Vlissingen:
"Numerical ship collision model"
(13) Elena PAFFUMI, fellow at JRC's Institute for Energy, Petten:
"Thermal fatigue studies on nuclear piping"
(14) Rachel THILWIND, Philips Research, Eindhoven:
"Magnetic sensor technologies for biomolecular diagnostics"
(15) Axel TONINI, Univ. of Wageningen, Dpt. of Social Sciences:
"Productivity growth in agriculture and intertemporal frontier separation for six CEECs and the EU-15 members"

18:00

Close of Day 1
Evening Session:

18:00
19:00
19:15
20:00
22:30

Departure by coach to Workshop Dinner hosted by the JRC's Institute for Energy
Aperitif at Restaurant Fort Kijkduin - Den Helder
Tour of Fort Kijkduin
Buffet Dinner
Close of Dinner - transport by coach to hotels

Day 2: 22 October 2003


Morning Session:
09:00

Key Note Scientific Lecture 2:
Dr. V. TZIMAS, Institute for Energy - Petten, DG JRC, EC:
"The future of energy in Europe"

09:30

Oral Presentations 16 - 18 (3x20 mins):

(16) Virginie TREGOAT, Numico Research, Wageningen:
"The use of casein hydrolysate in cow's milk allergy"
(17) Klaus VOGSTAD, Dpt. Industrial Ecology, Univ. of Leiden:
"Designing market-oriented environmental policy instruments: The case of tradable green certificates"
(18) Ingrid WINTER, Unilever R&D, Vlaardingen:
"Characterization of the phase behaviour of a propanediol based non-phospholipid derived from a food-grade emulsifier"

10:30

Coffee and Poster Presentations (Forum Reception)

11:00

Visit of the facilities of JRC's Institute for Energy


High Flux Reactor or Clean Energies

12:30

Lunch (Forum Grote Vergaderzaal)


Afternoon Session:
14:00

Marie Curie Plenary Session

Discussions on Marie Curie Fellowships Training

Oral Presentations:
(19) Iouri OUDALOV, Univ. of Twente, Enschede:
"Academic, industrial research and Marie Curie Fellowship program - contributing to the Barcelona R&D objective"
(20) Bernd OHLMEIER, DSM Research, Geleen, and National Coordinator Marie Curie Fellowship Association in The Netherlands:
"Marie Curie Fellowship Association, an alumni organization of current and former Marie Curie fellows"

16:00

Close of Workshop
Transport to hotels and Alkmaar Station


OVERVIEW OF ALL POSTER PRESENTATIONS


Marie Curie Workshop - 21 and 22 October 2003
Poster Presentations - Overview
(1)

Nona S. R. Agawin:
"Competition for light and nitrogen between N 2 and non-N2 -fixing phytoplankton"

(2)

Patrice Bertet:
"Quantum computer"

(3)

Katrine Borgen:
"Epidemiology and function of the variant esp gene in Enterococcus faecium

(4)

Georgios Boulougouris:
"A numerical study of the phase behavior of protein solutions."

(5)

Marek Bracisiewicz:
"Si3 N4 based ceramics "

(6)

Agostino Capponi:
"Data association problem"

(7)

Éva Csajbók:
"Nanosized, Gd-loaded Zeolite Nanoparticles as Potential Magnetic Resonance Imaging Contrast Agents"

(8)

Juan De Vicente:
"The Behaviour of Complex Fluids in Ultra-Thin Films

(9)

Sana Fajdiga:
"Modulation of the expression of heat shock proteins and cytokines in enterocyte-like Caco-2 cells after
exposure to Lactobacilli and Pseudobutyrivibrio"

(10)

Folasade Fawehinmi:
"The Influence of Substrate Composition on Fouling in Membrane Bioreactors."

(11)

Aldo E. Guiducci:
"Non Schulz-Flory Oligomerisation of Ethylene"

(12)

Adrian Haiduc:
"Predictive models and Qualitative information from PLS of time-domain NMR"

(13)

Mark Hamer:
"The Stressed Immune System Can Nutritional Intervention Help?"

(14)

Carmen Herranz:
"Determination of the secretion mechanism of enterocin P"

(15)

Karel Hrncirik:
"Occurrence of Phenolic Compounds in Various Extra Virgin Olive Oils "

(16)

David R. Jones:
"Nuclear PtdInsPs as transducers of stress signalling. An in vivo role for type II PIPkinase"

(17)

Sotirios Kiokias:
"Stability of heated acidified protein-stabilised oil-in-water emulsion gels containing partly crystalline fat"

(18)

Dessislava A. Koleva:
"Durable building technology benefiting electrochemical methods as a preventive"

(19)

Jean-Marie Le Corre:


"Phase Compositional Analysis of Fats and Oils by Means of Time Domain Nuclear Magnetic Resonance"
(40)

Sebastien Zamith:
"Control of the production of highly charged ions in femtosecond laser cluster fragmentation"

(41)

Klaus Vogstad:
"The transition from fossil fuelled towards a renewable power supply in a deregulated electricity market"


LIST OF PARTICIPANTS

No. | Name | Organisation | e-mail | Nationality | Presentation

9 ADAME Isabel Maria | MEDIS BV, Leiden | I.m.adame@lumc.nl | Spanish | Oral
20 AGAWIN Nona Sheila | Univ. of Amsterdam (Aquatic Microbiology - IBED) | agawin@science.uva.nl | Spanish/Filippina | Poster
45 AHERN Andrea | Shell, Amsterdam | andrea.ahern@shell.com | Irish | Oral
11 BELLIURE Belén | Univ. of Amsterdam | belliure@science.uva.nl | Spanish | Oral
46 BERTET Patrice | Techn. Univ. of Delft, Dpt. of Nanoscience | bertet@qt.tn.tudelft.nl | French | Poster
32 BORGEN Katrine | RIVM, Bilthoven | katrine.borgen@rivm.nl | Norwegian | Poster
28 BOULOUGOURIS Georgios | FOM Institute for Atomic and Molecular Physics (AMOLF), Amsterdam | gboul@amolf.nl | Greek | Poster
27 BRACISIEWICZ Marek | JRC-IE Petten | marek.bracisiewicz@jrc.nl | Polish | Poster
7 CAPPONI Agostino | Thales Nederland BV, Hengelo | agostino.capponi@nl.thalesgroup.com | Italian | Poster
63 CHAVEZ Bruno | Unilever R&D, Colworth (UK) | bruno.chavez@unilever.com | Mexican | Oral
25 CREATORE Mariadriana | Technical Univ. Eindhoven | m.creatore@tue.nl | Italian | Oral
5 CSAJBOK Éva | Techn. Univ. of Delft | e.csajbok@tnw.tudelft.nl | Hungarian | Poster
47 DE VICENTE Juan | Unilever R&D, Colworth (UK) | juan.vicente@unilever.com | Spanish | Poster
37 FAJDIGA Sana | Univ. of Utrecht, Faculty of Veterinary Medicine | s.fajdiga@vet.uu.nl | Slovene | Poster
49 FAWEHINMI Folasade | Univ. of Wageningen | folasade.fawehinmi@wur.nl | Irish | Poster
8 GLADISCH Tanja | Campina Innovation, Wageningen | gladit@campina.com | German | -
6 GUIDUCCI Aldo | Shell, Amsterdam | aldo.guiducci@shell.com | British | Poster
42 HAIDUC Adrian | Unilever R&D, Vlaardingen | adrian.haiduc@unilever.com | Romanian | Poster
23 HAMER Mark | Unilever R&D, Vlaardingen | mark.hamer@unilever.com | British | Poster
34 HERRANZ Carmen | Univ. of Groningen, Dpt. of Mol. Microbiology | c.herranz@biol.rug.nl | Spanish | Poster
76 HORNEFFER Verena | Unilever R&D, Vlaardingen | verena.horneffer@unilever.com | German | NO
38 HRNCIRIK Karel | Unilever R&D, Vlaardingen | karel.hrncirik@unilever.com | Czech | Poster

65 JONES David | NKI, Netherlands Cancer Institute, Amsterdam | djones@nki.nl | British | Poster
22 KILLEEN Scott | Unilever R&D, Vlaardingen | scott.killeen@unilever.com | Irish | NO - not able to present results
1 KIOKIAS Sotirios | Unilever R&D, Vlaardingen | sotirios.kiokias@unilever.com | Greek | Poster (only on 21/10)
74 KOLEVA Dessislava | Techn. Univ. of Delft / Van der Heide | d.koleva@vanderheide.nl | Bulgarian | Poster
77 LAMB Kelly Jane | Numico Research, Wageningen | kelly-jane.lamb@numico-research.nl | Scottish | NO
4 LE CORRE Jean-Marie | Thales Naval Nederland, Hengelo | jeanmarie.lecorre@nl.thalesgroup.com (private: jm_lecorre@yahoo.fr) | French | NO
50 LEPECUCHEL Lydia | Numico Research, Wageningen | lydia.lepecuchel@numico-research.nl | French | Poster
43 MAGNIEZ Clement | Xerox Company, Venray | clement.magniez@nld.xerox.com | French | Poster
66 MELIAN-CABRERA Ignacio | Techn. Univ. of Delft, Dpt. of Chemical Technology | i.v.melian-cabrera@tnw.tudelft.nl | Spanish | NO - not able to present results (only 22/10)
26 MIRET-CATALAN Silvia | Unilever R&D, Vlaardingen | silvia.miret-catalan@unilever.com | Spanish | Poster
40 MORENS Céline | Univ. of Groningen, Dpt. of Animal Physiology | c.morens@biol.rug.nl | French | Oral
51 NEMEC Damjan | Akzo Nobel, Arnhem | damjan.nemec@akzonobelchemicals.com | Slovene | NO
2 NEUBERGER Hans-Ruprecht | Maastricht Univ. | h.neuberger@fys.unimaas.nl | German | Oral
75 OHLMEIER Bernd | DSM Research, Geleen, and National Coordinator Marie Curie Fellowship Association in The Netherlands | bernd.ohlmeier@dsm.com | German | Oral
14 OLIVO Cristina | NKI, Netherlands Cancer Institute, Amsterdam | c.olivo@nki.nl | Italian | Poster
52 OPRAN Alexandru | Texas Instruments, Almelo | a-opran@ti.com | - | Poster
61 OUDALOV Iouri | Univ. of Twente, Enschede | y.b.udalov@tn.utwente.nl | Dutch/Russian | Oral
78 PAFFUMI Elena | JRC-IE Petten (INVITED SPEAKER) | elena.paffumi@jrc.nl | Italian | Oral
21 PARENICOVA Lucie | DSM Food Specialties, Delft | lucie.parenicova@dsm.com | Czech | Poster
3 PARIS MARTIN Laura | Thales Nederland BV, Hengelo | laura.paris@nl.thalesgroup.com | Spanish | Poster

35 PASTRNAK Milan | Technical Univ. Eindhoven / Logica CMG | m.pastrnak@tue.nl | Slovak | Poster
16 PINEIRO COSTAS Nuria | Unilever R&D, Vlaardingen | nuria.pineiro-costas@unilever.com | Spanish | Poster (on 21/10)
13 POINTEAU Jean-Gabriel | Thales Nederland BV, Hengelo | jeangabriel.pointeau@nl.thalesgroup.com | French | Poster
53 POPP Alois | Unilever R&D, Vlaardingen | alois.popp@unilever.com | German | Poster
69 POZO JIMENEZ Maria | Univ. of Utrecht (Phytopathology) | m.j.pozo@bio.uu.nl | Spanish | NO
36 RIERA-PALOU Felip | Philips Research, Eindhoven | riera@natlab.research.philips.com | Spanish | Poster
48 ROBIN Catherine | Erasmus Medical Center, Rotterdam | c.robin@erasmusmc.nl | French | Oral
70 ROELOFS Dick | Vrije Univ., Amsterdam (Animal Ecology) | dick.roelofs@ecology.falw.vu.nl | Dutch | NO (Supervisor of Spinsanti)
24 RYCHNAVSKA Zuzana | Univ. of Utrecht, Fac. of Veterinary Medicine | z.rychnavska@vet.uu.nl | Slovak | Poster
19 SARANTINOPOULOS Panagiotis | DSM Food Specialties, Delft | panagiotis.sarantinopoulos@dsm.com | Greek | Poster
72 SAYAS Laura | NKI, Netherlands Cancer Institute, Amsterdam | L.sayas@nki.nl | Spanish | Poster
10 SCHOELLNBERGER Helmut | RIVM, Bilthoven | helmut.schollnberger@rivm.nl | Austrian | Poster
55 SCHWARZ Markus | Shell, Amsterdam | markus.schwarz@shell.com | German | Oral
18 SINDICO Francesco | Vrije Universiteit, Amsterdam (Institute for Environmental Studies) | fsindico@yahoo.com | Italian | Poster
68 SLOKAR Yness March | UNESCO-IHE Institute for Water Education, Delft | ysl@ihe.nl | Slovene | Poster
67 SPICUGLIA Salvatore | Univ. of Nijmegen, NCMLS (Nijmegen Center for Molecular Life Sciences) | s.spicuglia@ncmls.kun.nl | Italian/French | Poster
29 SPINSANTI Giacomo | Vrije Universiteit, Amsterdam (Animal Ecology) | spinsanti@unisi.it | Italian | Oral (with supervisor, Roelofs)
33 TABRI Kristjan | Schelde Naval Shipbuilding, Vlissingen | kristian.tabri@schelde.com | Estonian | Oral
57 THEVENIN Philippe | Akzo Nobel, Arnhem | philippe.thevenin@akzonobel.com | French | Poster
58 THILWIND Rachel | Philips Research, Eindhoven | rachel_psv@hotmail.com | British | Oral
41 TONINI Axel | Univ. of Wageningen, Dpt. of Social Sciences | axel.tonini@wur.nl | Italian | Oral

Elena

Astrid

Klaus

Robert
Ingrid

Sebastien

31 TREZZA

60 VALLES SANCHEZ

39 VOGSTAD

12 WIMPORY
62 WINTER

64 ZAMITH

MOSS
TZIMAS
HURST
McGARRY
ZAMANA
VAN LIEROP
MANTEN
ZITTER

Ray
Vangelis
Roger
Darren
Sylwia
Katinka
Jan
Ruud

other EC+IE staff + invited


speakers:
PAPAZOGLOU
Theodore

Virginie

44 TREGOAT
elena.trezza@unilever.com

virginie.tregoat@numico-research.nl

DG RTD, Eur. Commission,


Brussels
Institute for Energy
Institute for Energy
Institute for Energy
Institute for Energy
Institute for Energy
Institute for Energy
Institute for Energy
Institute for Energy

AMOLF, Amsterdam

JRC-IE Petten
Unilever R&D, Vlaardingen

University of Leiden, Institute


CML - Centre of Environmental
Science, Department Industrial
Ecology

theodore.papazoglou@cec.eu.int

zamith@amolf.nl

robert.wimpory@jrc.nl
ingrid.winter@unilever.com

klausv@stud.ntnu.no

Solvay Pharmaceuticals BV, Weesp astrid.valles@solvay.com

Unilever R&D, Vlaardingen

Numico Research, Wageningen

French

British
Austrian

Norwegian

Spanish

Italian

French

Poster

Oral
Oral

Oral

NO!!

Poster

Oral

ORAL PRESENTATIONS

Automatic Plaque Characterization and Vessel Wall Segmentation in


Magnetic Resonance Images of Atherosclerotic Carotid Arteries
Isabel Maria Adame-Valero
MEDIS Medical Imaging Systems B.V., Leiden, The Netherlands
Division of Image Processing, Leiden University Medical Center, Leiden, The Netherlands

INTRODUCTION
Atherosclerosis is a systemic disease of the vessel wall that occurs in the aorta, carotid,
coronary and peripheral arteries and is the primary cause of heart disease and stroke. It is
responsible for almost 40 % of all deaths in western societies.
The disease is expressed as lesions or plaques in the intima and media of the arterial walls. Advanced plaques often have a heterogeneous composition, containing extensive regions of fibrous tissue, calcium, and intraplaque hemorrhage [1].
Currently the degree of lumen stenosis is used as a marker for high-risk plaques. However,
lumen narrowing is not a good estimator of plaque size and probably underestimates the
atherosclerotic burden due to compensatory enlargement of the adventitial boundary.
Growing evidence suggests that the decisive risk factor determining plaque vulnerability is
plaque composition rather than the degree of luminal narrowing. Angiography and intravascular ultrasonography identify the luminal diameter or stenosis, wall thickness, and plaque volume. However, they cannot completely characterize the composition of the atherosclerotic plaque and are therefore incapable of identifying vulnerable or high-risk plaques.
High-resolution MR has emerged as the potential leading non-invasive imaging modality for characterizing atherosclerotic plaque in vivo. Recent literature has documented MR capabilities of identifying atherosclerotic plaque components [2], plaque areas and volumes, and its potential for monitoring the progression and regression of plaque lesions [3].
In recent history computer-aided manual tracing of the vessel boundaries has been the
primary means of extracting quantitative information from MR images. However, manual
tracing tends to be labor intensive and subject to inter- and intra-observer variability. An
automated post-processing algorithm would assist the human labor and remove subjectivity
from the process, gaining in accuracy and reproducibility.
The goal in this study was to develop a computer algorithm to identify the inner and outer
boundaries of the vessel wall, as well as the contour of the lipid core, if present, in carotid
MR images.
MATERIALS AND METHODS
Description of the algorithm
We report an automated approach to segment vascular wall and plaque contours from T1
weighted (T1W) and proton density weighted (PDW) in vivo MR images of the carotid
artery. Yet, identifying these boundaries automatically is a challenging task due to the
complex signal features found in the vicinity of the vessel walls. To overcome these


problems, some manual interaction is required: a circle surrounding the vessel and seed
points.
The algorithm is based on prior knowledge of vessel wall morphology and is a
combination of model-based segmentation and fuzzy clustering. It is divided into three
different phases:
I. Outer boundary of the vessel wall: ellipse-fitting.
As the shape of vessels is approximately elliptic, this information may be
incorporated into the computerized analysis. In this work, the outer boundary is created
from an ellipse, which is iteratively translated, rotated and mapped to the vessel. The
best match is determined according to the average intensity gradient. The ellipse with
the greater gradient average is taken as the outer boundary. Afterwards, a minimum
cost approach (based on dynamic programming [4]) is used to refine the contour.
II. Lumen: fuzzy clustering [5].
Lumen segmentation is performed using a classification based on the pixel gray
value followed by a minimum cost approach (similar to that for the outer boundary).
Information from the first phase is also taken into account, since the lumen is
constrained within the outer vessel wall boundary.
III. Plaque (lipid core): fuzzy clustering.
The same approach as for the lumen is followed, but plaque is more difficult to
segment, as the pixel gray value can differ considerably from one region of plaque to
another, even when it corresponds to the same tissue. Therefore, to improve the
detection, information from lumen and outer boundary is used to constrain plaque to
the area within the vessel wall.
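The ellipse-fitting idea in phase I can be sketched in a few lines. The following is an illustrative brute-force version, not the authors' implementation: the function names and the search grids are ours, and the subsequent dynamic-programming refinement and fuzzy-clustering phases are omitted. It scores each candidate ellipse by the average intensity-gradient magnitude along its boundary, exactly the criterion described above, and keeps the best-scoring one.

```python
import numpy as np

def ellipse_points(cx, cy, a, b, phi, n=64):
    """Sample n points on an ellipse with centre (cx, cy), semi-axes (a, b), rotation phi."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = cx + a * np.cos(t) * np.cos(phi) - b * np.sin(t) * np.sin(phi)
    y = cy + a * np.cos(t) * np.sin(phi) + b * np.sin(t) * np.cos(phi)
    return x, y

def mean_boundary_gradient(img, cx, cy, a, b, phi):
    """Average intensity-gradient magnitude along the ellipse boundary."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    x, y = ellipse_points(cx, cy, a, b, phi)
    ix = np.clip(np.round(x).astype(int), 0, img.shape[1] - 1)
    iy = np.clip(np.round(y).astype(int), 0, img.shape[0] - 1)
    return mag[iy, ix].mean()

def fit_outer_boundary(img, seed, radii, rotations, shifts):
    """Brute-force search: translate/rotate/scale the ellipse, keep the best score."""
    best, best_params = -np.inf, None
    sx, sy = seed
    for dx in shifts:
        for dy in shifts:
            for a in radii:
                for b in radii:
                    for phi in rotations:
                        s = mean_boundary_gradient(img, sx + dx, sy + dy, a, b, phi)
                        if s > best:
                            best, best_params = s, (sx + dx, sy + dy, a, b, phi)
    return best_params, best
```

In practice the manually supplied circle and seed points described above would define the search ranges, and the winning ellipse would then be refined by the minimum-cost contour step.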
Image Data
The algorithm has been validated on 30 high-resolution proton-density-weighted (PDW) and T1-weighted (T1W) in vivo MR images. Out of the set of scanned images, these 30 (13 PDW and 17 T1W) were selected for analysis according to plaque vulnerability. Of the 30 images, 28 presented a stenosis (10 in the common carotid artery and 18 in the internal carotid artery) and 2 corresponded to non-stenotic vessels. The pixel size of the images used for analysis was 0.54 mm.
Accuracy and Reproducibility
To assess accuracy, the output of the algorithm was compared with manually drawn contours, which were traced by experts blinded to the results of the algorithm.
To assess reproducibility, an observer conducted 3 repeated analyses for 20 images,
randomly selected out of the 30. Prior to each application, the circle and seed points were
changed.
RESULTS
The quantitative results of the automated detection demonstrate:

- High correspondence between automatic and manual area measurements for lumen and outer boundaries (r=0.91 and r=0.89, respectively), and acceptable correspondence between automatic and manual fibrous cap thickness measurements (r=0.70).
- The results of the Bland-Altman analysis are presented in Fig. 3. The mean difference was small for these measurements. Besides this, the area differences (for lumen and outer boundary) were evenly distributed, and the standard deviations were relatively small.

Fig. 2 presents an example of automated contours detected using the algorithm proposed in this work (2A), and the same contours traced manually by experts (2B). These examples also show a high correspondence between automatic detection and manually drawn contours, as already indicated by the statistics.
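For readers who wish to reproduce this kind of agreement analysis on their own measurements, a Bland-Altman comparison reduces to the bias (mean difference) and its limits of agreement. This is a generic sketch, not code from the study; the conventional 1.96·SD limits of agreement are an assumption of ours, not a detail reported in the paper.

```python
import numpy as np

def bland_altman(auto, manual):
    """Bias (mean difference) and 95% limits of agreement between two raters."""
    auto = np.asarray(auto, dtype=float)
    manual = np.asarray(manual, dtype=float)
    diff = auto - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A small bias with narrow, symmetric limits of agreement corresponds to the evenly distributed differences and small standard deviations reported above.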
CONCLUSIONS AND FUTURE WORK
The method presented in this work automatically detects the lumen and outer boundaries of the vessel wall, and the contour of the lipid core, in human carotid MR images. Area and fibrous cap thickness measurements were found to be accurate when compared with those derived from manually traced contours. Nevertheless, further optimization is required, and other components, such as calcium, that have not been considered in this work still have to be quantified.

Figure 1. Flowchart of the first phase of the algorithm: ellipse-fitting approach to detect the outer boundary of the vessel.

Figure 2. T1W in-vivo MR carotid images. A) Automatic detection of lumen, outer boundary of the vessel wall and plaque (lipid core). B) Contours manually traced by an expert.

REFERENCES
1. A.J. Lusis (2000), Atherosclerosis. Nature 407, pp. 233-241.
2. G.V. Ingersleben, U.P. Schmiedl, T.S. Hatsukami, J.A. Nelson, D.S. Subramaniam, M.S. Ferguson, C. Yuan (1997), Characterization of atherosclerotic plaques at the carotid bifurcation: Correlation of high-resolution MR with histology. RadioGraphics 17, pp. 1417-1423.
3. C. Yuan, L.M. Mitsumori, M.P. Skinner, C.E. Hayes, E.W. Raines, J.A. Nelson, R. Ross (1996), Magnetic resonance imaging techniques for monitoring the progression of advanced lesions of atherosclerosis in rabbit aorta. Magn. Reson. Imaging 14, pp. 93-102.
4. M. Sonka, V. Hlavac, R. Boyle (1999), Object recognition: Fuzzy systems. In: Image Processing, Analysis, and Machine Vision (2nd ed.). Brooks/Cole Publishing Company, USA, pp. 336-343.
5. M. Ramze Rezaee, C. Nyqvist, P.M.J. van der Zwet, E. Jansen, J.H.C. Reiber (1995), Segmentation of MR images by a fuzzy c-mean algorithm. Computers in Cardiology, New York, NY, USA: IEEE, pp. 21-24.

Figure 3. Bland and Altman plots of the differences between manual and automatic measurements of luminal area (A), outer wall area (B), and fibrous cap thickness (C).


ON THE MOVE ... WITH THE SILVER CATALYST IN THE ETHYLENE OXIDATION REACTION

Andrea J. Ahern and Donald Reinalda
Shell Global Solutions International B.V., Shell Research and Technology Centre,
Badhuisweg 3, 1031 CM Amsterdam, The Netherlands; Andrea.Ahern@shell.com

1. INTRODUCTION
The deactivation behaviour of the silver catalyst in the ethylene oxidation reaction is a topic of industrial [1] and fundamental [2] importance. The interest in ethylene oxide (EO) arises because it is a highly reactive molecule and therefore a versatile intermediate, i.e. it can be used to form a wide variety of products. Products derived from EO include antifreeze for car engines and polyethylene terephthalate, which is used to form polyester fibres, films and bottles. Commercially, EO is prepared via selective partial oxidation of ethylene using an Ag/α-Al2O3 catalyst at ca. 220 °C and 20 bar, in the presence of Cl. However, there is a competing reaction, the complete combustion to carbon dioxide and water, as illustrated in Scheme 1. In industrial practice a selective reaction to form EO is highly desirable.
CH2=CH2 + x O2 --(Ag)--> C2H4O (EO)   or   --> CO2 + H2O

Scheme 1. The competing reactions involved in the EO production process.

The catalyst is necessary so that (i) the reaction can proceed at a reasonable rate at ca. 220 °C and (ii) good selectivity to EO can be achieved. However, the catalyst deactivates over time and eventually needs to be replaced. This is an expensive process, so investigating and improving catalyst stability is an important and ongoing goal.
2. RESULTS
In order to slow down the deactivation process it is first necessary to understand the
reason for the loss of catalyst activity. Therefore, commercial catalysts were imaged at
the start and end of life as illustrated in Fig. 1. It is obvious that time has a dramatic
effect on the size of the silver particles. Initially the particles are quite small and
numerous, Fig. 1(a), and as a result the silver surface area is appreciable. However, after
prolonged use, the silver particles have grown significantly and their number has
decreased, Fig. 1(b). During the lifetime of the catalyst the silver surface area on which
reaction can occur has decreased substantially due to particle growth. Clearly, this is the
main reason for the decline in the catalyst performance. A thorough investigation of this
deactivation process is expected to enable the development of more stable (and
therefore improved) catalysts.
The driving force for the growth of these small particles is that they are
thermodynamically unstable so the system strives to reduce the surface energy by
reducing the surface/volume ratio, i.e. the particles grow. The process by which

Fig. 1. High-magnification scanning electron microscopy (SEM) images of a commercial catalyst: (a) initially and (b) at its end of life.
particles grow is known as sintering. The two main sintering (or growth) mechanisms
are coalescence and Ostwald ripening. Coalescence involves particle mobility and
occurs when the particles come into contact and merge to form a larger particle.
Conversely, in Ostwald ripening particles generally do not move but an atom-by-atom
transfer of matter between particles occurs. In this case larger particles grow at the
expense of smaller particles in their vicinity. The higher rate of loss from the smaller
particles results in their shrinkage and eventual disappearance. The atoms can be
transferred from one particle to another either through the gas phase or over the surface
of the support. In order to determine which process is occurring it is necessary to study
the theory of sintering. The lack of a detailed atomic-level picture of the sintering
process despite its obvious industrial importance was stressed in a recent article by
Bowker [3].
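The qualitative picture of Ostwald ripening described above can be illustrated with a minimal mean-field sketch (purely illustrative Python, not the analysis code used in this work): particles smaller than a critical radius lose atoms to larger neighbours and eventually disappear, so the population shrinks while the mean size grows.

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(2.0, 10.0, 300)   # initial particle radii (arbitrary units)
n0, mean0 = r.size, r.mean()

K, dt = 5.0, 0.1                  # assumed rate constant and time step
for _ in range(3000):
    rc = r.mean()                 # mean-field critical radius
    # LSW-type growth law: particles with r < rc shrink, those with r > rc grow
    r = r + dt * (K / r) * (1.0 / rc - 1.0 / r)
    r = r[r > 0.2]                # fully dissolved particles disappear
```

After the loop, fewer and on average larger particles remain, mirroring the evolution from Fig. 1(a) to Fig. 1(b).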
In general, the sintering process is described using a power-law expression for the
average particle radius, r:

    r - r0 = k t^n        (1)

where r0 = offset, k = constant, t = time and n = growth exponent. Conclusions
regarding the growth mechanism have been made based on the value of n in Eqn. (1).
According to the classical theory of Ostwald ripening developed by Lifshitz, Slyozov
and Wagner (LSW) [4], n = 1/3, but this prediction is only valid for a system at infinite
dilution and at infinite time. In real systems it has been found that n ~ 1/5 for Ostwald
ripening and n ~ 1/7 for coalescence [5].
In order to elucidate the sintering mechanism, the decrease in silver surface area
with respect to cumulative EO production (which is an approximate unit of time) was
measured using X-ray photoelectron spectroscopy (XPS). If the data are fitted with a
power law, Fig. 2, an exponent of 0.35 results, which is quite close to the
theoretical value of 1/3 predicted by LSW theory for an Ostwald ripening type process.
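The exponent extraction behind such a fit can be sketched as a straight-line fit in log-log coordinates. The data below are synthetic (generated with a known exponent of 1/3 plus noise), not the measured XPS values:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(10.0, 500.0, 40)     # cumulative EO production (proxy for time)
area = 1.1 * t ** (-1.0 / 3.0) * (1.0 + 0.02 * rng.standard_normal(t.size))

# A power law A(t) = k * t**(-n) becomes linear in log-log space:
#   log A = log k - n * log t
slope, log_k = np.polyfit(np.log(t), np.log(area), 1)
n = -slope                           # recovered growth exponent, close to 1/3
```

The same log-log regression applied to real surface-area data gives the exponent quoted in the text.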
However, commercial catalysts are quite complex in nature, so it was decided to also
study simplified systems, i.e. to reduce the system from 3D to 2D. The method used
was wafer spin-coating, which allows the study of flat surfaces. The advantages of this
methodology are that the behaviour of a large number of particles can be investigated
and that information such as the change in average particle radius and the particle size
distributions can be monitored over time.
The movement and growth of the silver particles is again very evident in the images
[Figure 2: silver surface area, Ag SA (m2/g), versus cumulative EO production, Cum EO (Mlbs/ft3 cat), for a commercial catalyst; the experimental points are fitted by A(t) = k t^-0.35.]

Fig. 2 A plot of quantified surface area (from XPS) versus cumulative EO production
(approximate time-scale unit) for a commercial catalyst.
of our model catalysts, Fig. 3. After 170 hrs in the reactor the silver particles are quite
small and numerous, Fig. 3(a), but after 2500 hrs far fewer, larger particles remain,
Fig. 3(b). From these images it is clear that the spin-coated model catalysts reproduce
(at least qualitatively) the behaviour observed with the commercial catalysts. In Fig. 4
the effect of time on the silver particle size distribution is illustrated. Again, from this
plot it is clear that the number of particles decreases and their average diameter
increases with time. The shape of these particle size distributions can also give
valuable information regarding the sintering mechanism [6]. At present these
distributions are being analysed to test whether their shape matches an Ostwald
ripening or a coalescence type process. The plot of the decrease in surface area with
respect to time showed a similar trend to that of the commercial catalyst, Fig. 2, but in
the case of the model catalyst the growth exponent was found to be close to 1/3. This
result yields further evidence that an Ostwald ripening type mechanism is operative.
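The "equivalent circle diameter" used for these distributions is obtained from the projected particle area A as d = 2·sqrt(A/π). A small sketch with hypothetical areas from a segmented SEM image (the values are made up, purely to show the computation):

```python
import numpy as np

# Hypothetical particle areas (nm^2) as measured from a segmented SEM image
areas_nm2 = np.array([1960.0, 7850.0, 12270.0, 31400.0])

# Equivalent circle diameter: the diameter of a circle with the same projected area
diameters_nm = 2.0 * np.sqrt(areas_nm2 / np.pi)

# Bin into a histogram of the same kind as shown in Fig. 4
counts, edges = np.histogram(diameters_nm, bins=np.arange(0.0, 700.0, 40.0))
```

Tracking such histograms over time gives the particle size distributions whose shapes distinguish the sintering mechanisms.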

Fig. 3 SEM images of Ag/α-Al2O3 model catalysts after (a) 170 and (b) 2500 hrs in the
reactor under typical EO synthesis conditions, i.e. T = 280 °C, P = 1 bar, C2H4 =
30% V, O2 = 8% V, N2/Cl.
[Figure 4: histograms of particle frequency versus equivalent circle diameter (0-645 nm) after 170, 500, 900 and 2500 hrs.]

Fig. 4 The effect of time on the particle size distribution for Ag/α-Al2O3 model
catalysts under the same conditions as detailed in Fig. 3.
3 CONCLUSIONS AND FUTURE WORK


Based on the deactivation results of both commercial and model catalysts, i.e. from
the exponents of the deactivation plots (e.g. Fig. 2), it appears that the silver catalyst in
the ethylene oxidation reaction deactivates via an Ostwald ripening type mechanism. In
the future, more sophisticated image analysis techniques will be used to generate
Voronoi plots from the electron micrographs. Such analysis will yield accurate
information on particle nearest-neighbour distances and particle size-distance
correlations, and therefore also on particle interactions.
4 ACKNOWLEDGEMENTS
The authors would like to acknowledge the assistance of Dr. Guy Verbist, Dr. Ralph
Haswell and Mr. Leo van Noort. This research has been supported by a Marie Curie
Fellowship of the European Community programme Energy, Environment and
Sustainable Development under contract number ENK5-CT-2001-50028.
5 REFERENCES
[1] G.B. Hoflund and D.M. Minahan, J. Catal., 162 (1996) 48.
[2] D.P.C. Bird, C.M.C. de Castilho and R.M. Lambert, Surf. Sci., 449 (2000) L221.
[3] M. Bowker, Nat. Mater., 1 (2002) 205.
[4] (a) I.M. Lifshitz and V.V. Slyozov, J. Phys. Chem. Solids, 19 (1961) 35 and (b) C. Wagner, Z. Elektrochem., 65 (1961) 58.
[5] P.J.F. Harris, Int. Mater. Rev., 40 (1995) 97.
[6] G.R. Carlow, R.J. Barel and M. Zinke-Allmang, Phys. Rev. B, 56 (1997) 12519.


Neutron Methods for Structural Integrity Assessment


R.C. Wimpory, P. Hornak, C. Ohms, D. Katsareas and A.G. Youtsos
European Commission, Joint Research Centre, Westerduinweg 3, 1755 LE Petten,
The Netherlands.
ABSTRACT
The work undertaken concerns non-destructive testing using the neutron scattering
facilities of the High Flux Reactor (HFR), JRC, Petten. Material micro-structure, defects,
texture and residual stress are increasingly taken into account in the designs of the
aerospace, automotive, nuclear and other advanced engineering industries. There is
therefore a growing requirement for non-destructive and accurate techniques for
investigating these properties internally and through surfaces in components. Neutron
scattering techniques, which have been developed at numerous neutron facilities, are
uniquely suited to non-destructive micro-structural, texture and defect analyses and to
mapping internal residual stress fields in engineering components. In fact, neutrons can
provide information that is not available otherwise. The majority of the work involves
the improvement and upgrading of the neutron beam facilities at the HFR. This
includes the optimisation of the neutron beam monochromators for beam tubes HB5 and
HB4, the development of ancillary equipment for neutron diffraction testing of irradiated
weld specimens and of specimens at high temperature, and the upgrading and
re-commissioning of the HB8 neutron radiography facility.
INTRODUCTION
Neutrons are highly penetrating particles and can be used non-destructively in the study
of condensed matter. In Europe and elsewhere, national and international facilities,
which provide neutrons, have been built to service the needs of a wide range of
scientists. More recently engineers, in collaboration with materials scientists, have
started to exploit some of the many uses of neutron beams, in particular to determine
residual stresses.
For the full potential of neutron scattering to be realised by European industry, it is
essential that the European central facilities can all provide a standardised approach
with a quantifiable level of accuracy. Furthermore, in view of the envisaged
enlargement of the EU through the integration of several EU Candidate countries, there
is an urgent need for harmonising the safety culture through the exchange of
scientific/technological know-how. This is particularly valid in technical areas related
to the safety of nuclear installations.
At the HFR of the European Commission in Petten, several beam tubes are available
for the execution of studies in the areas of engineering, fundamental science and
medical science. In the engineering field, extensive studies have been conducted on
residual stresses and structures of materials based on dedicated facilities. On the other
hand, facilities with high potential for defect analysis, such as the Small Angle
Neutron Scattering (SANS) facility and the Neutron Radiography facility, have been
underused in recent times.


THE MARIE CURIE RESEARCH PROJECT


The research to be conducted consists of the application of neutron beam techniques to
engineering problems related to the safety and structural integrity assessment of nuclear
components. The main aims are therefore related to the development and upgrading of
the HFR neutron beam facilities (HB4, HB5 and HB8) and their subsequent use. These
tasks include:
- Optimisation of the HB4 (Neutron Diffraction) double monochromator system
- Upgrading/optimisation of the HB5 (Neutron Diffraction) monochromator system
- Development of neutron optics for HB4 for precise definition of the sampling volume at a distance, for use with the furnace and shielded container
- Installation and use of a furnace on HB4 for residual stress measurement of specimens at high temperature
- Installation and use of a container on HB4 for measurement of irradiated weld specimens
- Upgrading and re-commissioning of the HB8 neutron radiography facility

Optimisation of the monochromator systems on HB4 and HB5
In order to make residual stress measurement efficient, the monochromator systems of
the existing residual stress facilities are to be optimised. The monochromator on HB4
(see Figure 1) is to be optimised in terms of resolution and neutron intensity, whereas the
HB5 monochromator system will be both upgraded and optimised; this is due to the
age of the HB5 monochromator, which has a visibly corroded surface. The optimisation
will be aided by the use of a powerful neutron ray-tracing package, McStas [1],
which is already available and has been developed by Risø (DK) and ILL (F). This
program has already been successfully installed and self-training is underway.

Figure 1. Photograph of the double monochromator at the HB4 beam tube


Installation and use of a furnace and container on HB4
The installation of a furnace on HB4 (see Figure 2a) will enable residual stress
measurement of specimens at high temperature. It is planned to measure specimens
with a diameter of 50/40 mm and a height of 100 mm at temperatures up to 1600 °C,
utilising nitrogen purging to prevent oxidation. The construction is mainly from aluminium
oxide, which simultaneously provides insulation and limits attenuation of the neutron beam
intensity. This project is partly funded by DG RTD (HITHEX). In addition, the
installation of a shielded container at HB4 (see Figure 2b) will enable the measurement of
irradiated weld specimens. An internal positioning table will allow the alignment of the
specimen in the neutron beam. The final design is already complete and is soon to be
manufactured. This project is partly funded by DG RTD (INTERWELD).


Figure 2. Ancillary equipment for HB4 residual stress facility: a) Furnace for
measurement of stress at high temperature and b) Container for the measurement of
irradiated specimens.
Design of a collimator for precise definition of the sampling volume at a distance
The use of a collimator allows good definition of the sampling volume at a distance, which
is essential for residual stress measurement [2]. This leaves vital room for auxiliary
equipment such as the container or furnace. The collimator to be used has already been
constructed by JJ X-ray of Denmark to design specifications provided by the JRC. Its
installation at the facility is in preparation.
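For reference, the principle behind strain scanning: a lattice spacing d follows from Bragg's law λ = 2d sin θ, and the elastic strain is the relative shift of d with respect to a stress-free reference. The numbers below are hypothetical, purely to illustrate the arithmetic:

```python
import math

wavelength = 1.8e-10                 # m; assumed thermal-neutron wavelength
two_theta_ref = math.radians(90.0)   # stress-free reference scattering angle (hypothetical)
two_theta_meas = math.radians(89.9)  # angle measured in the stressed region (hypothetical)

def d_spacing(two_theta: float) -> float:
    """Bragg's law lambda = 2 d sin(theta), with theta = two_theta / 2."""
    return wavelength / (2.0 * math.sin(two_theta / 2.0))

# Strain = relative change of lattice spacing with respect to the reference
strain = (d_spacing(two_theta_meas) - d_spacing(two_theta_ref)) / d_spacing(two_theta_ref)
# A 0.1 deg peak shift at a 90 deg scattering angle corresponds to a strain of roughly 9e-4
```

Repeating this measurement point by point through a component yields the residual stress maps discussed above.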

Figure 3. Diagram (a) shows two possible set-ups for strain scanning: on
the left, the common set-up with slits; on the right, the radial collimator (PSD =
position-sensitive detector). Photograph (b) shows the collimator to be used.


Upgrading and re-commissioning of the HB8 neutron radiography facility

Neutron radiography is an imaging technique that provides images similar to X-ray
radiography. However, there is a difference in how neutrons and X-rays interact
with material, which produces significantly different and often complementary
information. The high penetration of neutrons allows, for instance, the checking of
adhesive layers in composite materials and of welded joints, even through a few
centimetres of steel. Previous uses of HB8 have included thermal neutron
radiography of spent fuel rods from power reactors, quality control of pyrotechnical
devices, mechanical seals, brazings and ceramic components, and turbine blade inspection.
The intended future usage is related to defect analysis, i.e. voids, cracks,
inhomogeneities and eventually radiation damage of specimens. One of the main
factors controlling the spatial and temporal resolution of the neutron radiographic
image is the length-to-diameter (L/D) ratio of the collimator. Figure 4 shows the
potential minimum and maximum values of L/D for HB8 compared with other European
facilities. This information was obtained from the COST 524 project [3].
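The influence of L/D on image sharpness can be quantified through the geometric unsharpness u = l / (L/D), where l is the object-to-detector distance. A brief sketch with illustrative numbers (the L/D values are those quoted for HB8; the distance is an assumption):

```python
def geometric_unsharpness_mm(object_detector_mm: float, l_over_d: float) -> float:
    """Geometric blur of a radiographic image: u = l / (L/D)."""
    return object_detector_mm / l_over_d

# Assumed 10 mm object-to-detector distance, for HB8's two collimation modes
u_low = geometric_unsharpness_mm(10.0, 150.0)   # minimum L/D: brighter but blurrier
u_high = geometric_unsharpness_mm(10.0, 750.0)  # maximum L/D: sharper but dimmer
```

A higher L/D thus trades neutron intensity for spatial resolution, which is why both modes of operation are of interest.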
[Figure 4: HB8 maximum L/D = 750; HB8 minimum L/D = 150.]

Figure 4. Comparison of L/D values for the different beam lines of neutron radiography
facilities in Europe, from the European COST collaboration 524 "Neutron radiography
for the detection of defects in materials". The minimum and maximum L/D modes of
operation of HB8 are shown here for comparison. Diagram from E.H. Lehmann of PSI
(CH) [3].
ACKNOWLEDGEMENTS
The authors would like to thank the Marie Curie Fellowship Programme for its support
of this project. Gratitude is also extended to Paul Green at the JRC and our colleagues at
NRG/ECN, especially A. Bontenbal and H. Plas.
REFERENCES
[1] McStas website: http://neutron.risoe.dk/mcstas/
[2] Stress measurements on D1A: a new high precision strain-scanner, Pirling, T. and Wimpory, R.C. (ILL). ILL Annual Report 1997.
[3] COST 524 website: http://www.cost524.com/


The expanding thermal plasma technique:
deposition of oxide dielectrics
M. Creatore, Y. Barrell, M.C.M. van de Sanden
Equilibrium and Transport in Plasmas group
Department of Applied Physics, Eindhoven University of Technology
Den Dolech 2, 5600 MB Eindhoven, the Netherlands
Abstract
In this contribution we address the expanding thermal plasma as a remote plasma
source for the deposition of SiO2-like films by means of Ar/O2/hexamethyldisiloxane
mixtures. In particular, the study has focused on the deposition of low dielectric
constant, carbon-doped SiO2 films and of carbon-free, hard silicon dioxide films. The film
chemical, optical, mechanical and dielectric properties are presented in this
contribution.
Introduction
Plasma enhanced chemical vapour deposition of SiO2-like films has over the years found
widespread application in IC, MEMS, photonics, TFT and LED technologies, as well as
in optics, tribology and packaging. In general, the deposited films must exhibit uniform
thickness and composition, low particulate and chemical contamination, good adhesion
to the substrate, conformal step coverage and low pinhole density. Plasma Enhanced
Chemical Vapour Deposition (PE-CVD) can successfully meet these requirements,
besides allowing low process temperatures: due to its non-(thermodynamic-)equilibrium
character, it allows the synthesis of reactive species (i.e., radicals) at a low
thermal budget.
In recent years, research on SiO2-like films obtained from
organosilicon/O2-fed PE-CVD has rapidly been acknowledged for the possibility of
tuning and controlling the organic/inorganic character of the deposited SiCxHyOz films.
This eventually allows tailoring of the physical-chemical properties of the coating (density,
refractive index, dielectric constant, surface energy, internal stress, hardness).
Radiofrequency (rf, 13.56 MHz) capacitively coupled as well as inductively coupled
plasmas have mostly been used for thin film deposition [1]. Gas precursor molecules are
dissociated into radicals by means of electron impact (electron temperature of a few eV).
The film, during its growth, is subjected to ion bombardment because of the negative
potential developed on the powered electrode (as well as on any isolated surface
exposed to the plasma) due to the higher mobility of electrons. As far as SiO2-like film
deposition is concerned, ion bombardment has provided a very useful tool for removing
silanol (SiOH) functional groups from the growing film [2], by promoting their
condensation to water and the formation of Si-O-Si bridges, and thus the densification
of the film matrix.
Within this framework, we address in this contribution the expanding thermal plasma
(ETP) as a remote plasma source for the deposition of SiO2-like films by means of
Ar/O2/hexamethyldisiloxane (HMDSO) mixtures. In particular, it will be shown that
the ETP technique leads to an entirely chemistry-controlled process (no ion
bombardment involved) for tailoring the film chemistry from C-doped SiO2
(SiOC) films to C-free and dense (SiOH-free) SiO2 films, at very competitive growth
rates (ranging from 8 to 20 nm/s).


[Figure 3 (plot): dielectric constant k (measured at 1 MHz), ranging from about 5.5 down to about 2.5, versus O2 flow rate (12-16 sccs); two data points are annotated with hardness/modulus values of 1.2 GPa/12 GPa and 0.9 GPa/7.1 GPa.]

Figure 3: Dielectric constant (as measured at 1 MHz) as a function of the O2 flow rate. The hardness and
modulus values are also reported.

Conclusion
Carbon-free, hard (hardness ~10 GPa, Young's modulus ~80 GPa) silicon dioxide films
have been deposited at growth rates in the range of 6-12 nm/s by means of a totally
chemistry-controlled process. Low dielectric constant (low-k), carbon-doped SiO2 films,
presently exhibiting (not yet optimized) k values in the range 2.9-3.4 (at 1 MHz) and
still fairly good mechanical properties (hardness of ~1 GPa, Young's modulus of ~10 GPa),
have also been obtained.
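As an illustration of how a dielectric constant such as those above is obtained from a capacitance measurement on a MIS (or Hg-probe) structure, the parallel-plate relation gives k = C·d/(ε0·A). The film thickness, contact area and capacitance below are hypothetical, not the actual measurement parameters:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_constant(capacitance_F: float, thickness_m: float, area_m2: float) -> float:
    """Parallel-plate relation C = k * eps0 * A / d, solved for k."""
    return capacitance_F * thickness_m / (EPS0 * area_m2)

# Hypothetical 100 nm film under a 1 mm^2 contact with a measured 266 pF capacitance
k = dielectric_constant(2.66e-10, 100e-9, 1.0e-6)   # about 3.0, i.e. in the low-k range
```

Thinner films or smaller contacts scale the capacitance accordingly, so accurate thickness and area values are essential for a reliable k.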
Acknowledgments
The authors would like to thank H. de Jong for the measurements on the MIS
structures and M.J.F. van de Sande, J. Jansen, B. Hsken for their skilful technical
assistance. Nathan Kemeling (ASMI) is acknowledged for the dielectric constant
measurements by means of the Hg probe and for the hardness/modulus measurements.
This work is part of the research supported by a Marie Curie fellowship of the 5th
Framework European Community Programme under Contract Number HPMF-CT2001-01299.
References
[1] A.M. Wrobel, M.R. Wertheimer, in Plasma deposition and treatments of plasma
polymers, R. d'Agostino ed., Academic Press Inc, Boston, 1990.
[2] C. Vallee, A. Goullet, A. Granier, Thin Solid Films 311, 212, (1997)
[3] MRS Bulletin 22(10), 1997
[4] Semiconductor International, June 2002
[5] J.W.A.M. Gielen, W.M.M. Kessels, M.C.M. van de Sanden, D.C. Schram, J. Appl.
Phys. 82, 2643 (1997)
[6] J.W.A.M. Gielen, M.C.M. van de Sanden, D.C. Schram, Appl. Phys. Lett. 69, 152
(1996)
[7] M.C.M. van de Sanden, R.J. Severens, W.M.M. Kessels, R.F.G. Meulenbroeks,
D.C. Schram, J. Appl. Phys. 84, 2426 (1998)
[8] M.C.M. van de Sanden, J.M. de Regt, D. C. Schram, Plasma Sources Sci. Technol.
3, 511 (1994)


Effects of a high fat/high protein diet in two rodent models for obesity,
SHU9119-treated rats and fa/fa Zucker rats.
Céline Morens & Gertjan van Dijk
Neuroendocrinology, Dpt. of Animal Physiology,
University of Groningen, Haren, The Netherlands
INTRODUCTION
Obesity, defined by a Body Mass Index (BMI = weight (kg) / height^2 (m^2)) above
30 kg.m-2, has become a major public health problem in most industrialized countries.
In Europe, at least 135 million citizens are affected. In many countries, more than
half of the adult population is overweight and up to 30% of adults are clinically
obese. Moreover, the prevalence among children is rising significantly.
Obesity is a major risk factor for life-threatening diseases such as type II
diabetes, cardiovascular diseases and some types of cancer. It is also strongly
associated with dyslipidemia, insulin resistance, breathlessness, sleep apnea, asthma,
osteo-arthritis, hyperuricaemia, impaired fertility and lower back pain. Overall, the
costs of obesity have been estimated at up to 8% of health budgets [1].
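The BMI definition above can be written as a one-line check (the weights and heights are illustrative values, not data from the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg: float, height_m: float) -> bool:
    """Clinical obesity threshold used in the text: BMI above 30 kg/m^2."""
    return bmi(weight_kg, height_m) > 30.0

example = bmi(95.0, 1.75)   # about 31, i.e. clinically obese by the definition above
```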
Factors promoting obesity are various, both of genetic and environmental origin.
The cause of weight gain is, clearly, an imbalance between the daily energy intake and
expenditure, i.e. if the amount of energy provided to the body by the food eaten each
day is slightly higher than the amount of energy used in daily activities [2]. This can
lead to dramatic obesity over time. In this respect, the macronutrient composition of the
diet seems to play also a major role, although the underlying mechanism is not so clear.
It has, for instance, been shown that a diet rich in fat promotes obesity [3,4]. On the
other hand, a diet rich in protein could prevent body weight gain via a reduction of food
intake [5]. And currently, a diet with high fat, high protein and low carbohydrates
contents is very popular among overweight people because this diet seems to promote
weight loss. However, the interactions between the dietary macronutrients remain
unclear, as well as their effects on the diverse systems involved in the control of energy
homeostasis.
At present, researchers agree on the idea that energy homeostasis is controlled
by several areas in the central nervous system. The most important among those seems
to be the hypothalamus. Insulin and leptin are two hormones, often referred to as
adiposity signals, that inform the brain of the level of disposable fuels. Insulin is a
hormone produced by the β-cells of the pancreas and is involved in the body's glucose
homeostasis. Leptin is synthesized by white adipose tissue, and its plasma level is
positively correlated to body fat mass. At the hypothalamic level, two distinct neuron
populations are involved in the transmission of the insulin and leptin signals. One of
them is the melanocortin (MC) pathway. Five MC receptors have been identified, and
two of them (MC3-R and MC4-R) are highly expressed in the hypothalamus. The MC4-R is
thought to play a major role in the insulin and leptin signaling pathways, and mutation
of the gene coding for this receptor is the most common monogenic cause of massive
obesity known today (4%, [6]).
MATERIALS & METHODS.
In our studies, we aimed at investigating the behavioral and physiological effects of
the macronutrient composition of the diet in rodent models of obesity, the SHU9119-
treated rats and the fa/fa Zucker rats. In the first model, the rats are rendered obese by
a chronic infusion of a synthetic antagonist of the MC3/4-R (SHU9119) into the 3rd
cerebral ventricle, via an injector connected by a piece of tubing to a subcutaneous
osmotic minipump filled with saline or a SHU9119 solution. The second obesity model
is the fa/fa Zucker rat, in which the obesity is the result of a mutation of the gene
coding for the leptin receptor. In both cases, rats are obese due to hyperphagia as well
as increased food efficiency.

[Figure: schematic of the intracerebroventricular infusion set-up, with the guide cannula ending in the 3rd ventricle; adapted from Schwartz, Nature, 2000.]
Three diets were used in these experiments, i.e. a high carbohydrate diet (HC; 60%
carbohydrates, 23% protein and 14% fat), a high fat diet (HF; 60% fat, 20%
carbohydrates and 20% protein) and a high fat/high protein diet (HF/HP; 60% fat, 35%
protein and 5% carbohydrates). The SHU9119-treated rats and their saline-treated
controls were kept on the diets for 14 days; the fa/fa Zucker rats were fed the diets for
up to 2 months. A group of normal Wistar rats was also included in this longer-term
experiment. Food intake and body weight were measured daily. An intravenous (i.v.)
glucose tolerance test (IVGTT) was performed on day 10-11 for the SHU9119-treated
rats and their controls, and between day 45 and 50 for the Zucker and normal Wistar
rats. Specifically, rats were infused i.v. with a 10% glucose solution at a rate of
0.1 ml.min-1, over 20 minutes. Blood samples were taken 10 and 5 minutes before the
start of the infusion and then 1, 3, 5, 7, 10, 15, 20, 23, 26, 30, 35, 40 and 50 minutes
after the infusion started. Plasma levels of glucose and insulin were then measured.
This test allows the assessment of the glucose clearance capacity of the body. To gain
a better idea of the insulin status of the rats in the different diet groups, an insulin
sensitivity test was performed on day 60 in the Zucker and normal Wistar rats. Briefly,
after an overnight fast, rats were injected intraperitoneally (i.p.) with a dose of insulin
(5 U.kg-1 for the Zucker, and 0.5 U.kg-1 for the Wistar rats), and blood was sampled
15, 30, 45 and 60 minutes after the i.p. injection for glucose and insulin measurements.
Two basal samples were also collected 10 and 5 minutes before the i.p. injection. On
the last day of the experiment (D14 for the SHU9119-treated rats and their controls,
and D60 in the case of the Zucker and normal Wistar rats), rats were decapitated under
light CO2
anesthesia, trunk blood was collected for assessment of hormones and fuels levels and
the body composition of the animals was determined.
RESULTS
Food intake, body weight gain, body composition, plasma hormones and fuels.
SHU9119-treated rats.
Whatever the diet, the SHU9119-treated rats ate significantly more and gained
significantly more weight than the control animals, during the entire experimental
period. Remarkably, the SHU9119-treated animals fed the HF/HP diet ingested
significantly less food (-14%) than those fed the HF diet. Their body weight gain over
the experimental period was reduced by 19% when compared to the rats given the HF
diet. No significant difference was observed between the HF/HP and the HC groups.
The SHU9119-treated rats fed the HF/HP diet had significantly less epididymal and
retroperitoneal fat, but when expressed as a percentage of body weight, the difference
was not significant. Their liver and kidneys were heavier, as a direct consequence of the
stimulation of the body protein metabolism induced by the high quantity of protein
ingested.
On day 14 of treatment, plasma leptin, insulin and adiponectin levels were
markedly elevated in the SHU9119-treated animals when compared to the controls,
whatever their diet was. Plasma glucose, however, was only elevated in the SHU9119-
treated animals fed the HF diet. Relative to the other SHU9119-treated groups, the plasma
insulin level was dramatically augmented in the HF group. The adiponectin level, on the
other hand, was increased much more in the SHU9119-treated rats fed the HC diet than
in the other groups. The lowest increase was observed for the rats fed the HF/HP diet.
Zucker rats.
As expected, the food intake and body weight gain of the fa/fa Zucker rats was
much higher than those of normal rats. Interestingly, Zucker rats also responded to the
HF/HP diet by a reduction of their food intake. After 3 weeks on the diets, the
cumulative food intake as well as the mean daily food intake calculated between D1 and
D21 were significantly lower in the rats fed the HF/HP diet when compared to the 2
other groups. In spite of this decreased food intake, the body weight gain of the Zucker
rats fed the HF/HP diet was not different from that of the rats on the HF diet. The
Zucker rats on the HC diet, which actually ingested the same amount of food as the HF-fed
ones, showed a lower body weight gain. After 10 weeks on the diets, there was no diet
effect on the body composition of the Zucker rats (weight of the organs expressed as a
percentage of body weight).
On day 60, blood samples were taken for assessment of hormone and fuel
levels. Insulin levels were extremely high, especially in the Zucker rats fed the HC diet
(45.2 ± 2.8 ng.ml-1, to be compared to a level of ~2 ng.ml-1 in normal non-obese Wistar
rats). The adiponectin levels were also very high, though significantly lower in the HF and
HF/HP groups than in the HC group. Glycemia was not affected by the diet.
Intra venous glucose tolerance test.
SHU9119-treated rats.
Glucose responses were not different between the obese and control rats during
the IVGTT. However, the SHU9119-treated animals fed the HF/HP diet showed a
slightly disturbed response during the IVGTT: after the end of the glc-infusion, the
glucose clearance rate was slightly lower and their glycemia remained elevated for a
longer period when compared either to the other SHU9119-treated groups or to the
controls. Moreover, the amount of insulin needed to correct the hyperglycemia induced
(that can be evaluated from the area under the curve) was higher (even if not
significantly) for those animals.
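The "area under the curve" used to compare such insulin responses can be computed with the trapezoidal rule over the sampling times. The insulin values below are made up solely to show the computation, not data from the study:

```python
import numpy as np

t_min = np.array([0.0, 1.0, 3.0, 5.0, 7.0, 10.0, 15.0, 20.0])   # sampling times (min)
insulin = np.array([2.0, 4.5, 6.0, 5.5, 5.0, 4.0, 3.0, 2.5])    # hypothetical ng/ml

# Incremental AUC above the basal (t = 0) level, trapezoidal rule
y = insulin - insulin[0]
auc = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t_min)))
```

Because the blood samples are unevenly spaced, the trapezoidal rule (which weights each interval by its width) is the natural choice here.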
Zucker rats.
In the Zucker rats, the different dietary treatments did not induce any difference
in the glycemia curves during the IVGTT nor was there any difference in the insulin
response of the rats fed the HF or HF/HP diets. However, Zucker rats fed the HC diet
did not show any increase of their plasma insulin level after the start of the IVGTT, as if
their pancreas was unable to respond to the stimulation.
This test was also performed in normal non-obese Wistar rats kept for 60 days on
the experimental diets, and strikingly, the glucose tolerance of the rats fed the HF/HP
diet was also slightly reduced. However, no difference was observed in the insulin
response to the glc-infusion.
Insulin sensitivity test.
This test was performed in the Zucker and normal Wistar rats only. No
difference due to the usual diet was detected in the Zucker rats. In the Wistar rats,
however, the rats fed the HF/HP diet did not respond to the i.p. insulin injection: no
decrease in plasma glucose could be detected, whereas in the rats fed the HF diet the
expected decrease in glycemia was observed.
DISCUSSION & CONCLUSIONS
The major finding in the present studies is that a HF/HP diet induces a reduction
in food intake even in rats with impaired brain leptin signaling pathways, indicating
clearly that the leptin signaling cascade is not solely responsible for the reduction of
food intake observed when rats are fed a ketogenic diet. Other mechanisms underlying
this phenomenon are yet to be discovered. Remarkably, this reduced food intake was
not associated with a lower body weight gain in the Zucker fa/fa rats, suggesting that
feeding a HF/HP diet increases food efficiency independently of the leptin signaling
cascade. This study underlines the importance of the composition of the usual diet
ingested by the rats, obese or not, on the plasma fuels and hormones important in energy
balance regulation, but also on the glucose tolerance and insulin sensitivity of the
animals.
Finally, it challenges the supposed beneficial effect of a ketogenic diet per
se, i.e. when its consumption is not associated with a decreased body weight.
REFERENCES
[1]. International Obesity Task Force and European Association for the Study of Obesity Task
Forces. (2002) Obesity in Europe, the case for action. www.iotf.org
[2]. Mark J. (2003) Cellular warriors at the battle of the bulge. Science 299: 846-849.
[3]. Bray J. A. & Popkin B. M. (1998) Dietary fat does affect obesity. Am. J. Clin. Nutr. 68:
1157-1173.
[4]. Hill J. O., Melanson E. L. & Wyatt H. T. (2000) Dietary fat intake and regulation of energy
balance: implications for obesity. J. Nutr. 130: 284S-288S.
[5]. Jean C., Rome S., Mathé V., Huneau J. F., Aattouri N., Fromentin G., Larue-Achagiotis C.
& Tomé D. (2001) Metabolic evidence for adaptation to a high protein diet. J. Nutr. 131: 91-98.
[6]. Vaisse C., Clément K., Durand E., Hercberg S., Guy-Grand B. & Froguel P. (2000)
Melanocortin-4 receptor mutations are a frequent and heterogeneous cause of morbid obesity. J.
Clin. Invest. 106: 253-262.

This research has been supported by a Marie Curie Fellowship of the European
Community programme Quality of Life under contract number QLK4-CT-2001-51977 to CM.


Localization of functional hematopoietic stem cells in the mouse AGM.


Catherine Robin and Elaine Dzierzak
Erasmus MC, Rotterdam, The Netherlands.
Introduction
Throughout life, the mature blood cells within each individual are constantly
renewed, due to their limited lifespans (several days to many years). Each mature blood
cell type has its own function. Some cells protect the body against infections and
eliminate foreign substances that are able to cause tissue injury or disease; others repair
vascular damage and provide good oxygenation/detoxification of tissues. In the adult
individual, all mature blood cells originate from a pool of very rare and primitive cells
localized in the bone marrow (BM). These cells, called hematopoietic stem cells
(HSCs), are able (1) to self-renew (upon cell division, at least one daughter cell remains
a stem cell, thereby maintaining the pool of stem cells), (2) to proliferate
extensively (yielding a large number of mature progeny) and (3) to differentiate and
commit into progenitor cells, precursor cells and finally into mature cells (erythrocytes,
platelets, lymphocytes [NK, T and B], granulocytes [basophils, eosinophils,
neutrophils], and monocytes/macrophages) of all the hematopoietic lineages 1, 2.
HSCs are probably the most extensively studied of all stem cells. The easy
accessibility of HSCs (e.g. compared to neural stem cells) makes them the best
candidate cell type for permanent clinical repopulation therapies in blood-related
genetic deficiencies and leukemia. Current interest in stem cells has focused
fundamental research on the embryonic origin and the developmental regulation of
HSCs. Further knowledge of the normal cellular and molecular processes acting on the
earliest of these HSCs will provide important insights into the best protocol designs for
the manipulation of HSCs for clinical therapies.

Origin of HSCs
In the adult, the pyramidal structure of the hematopoietic system with respect to
cell lineage relationships is clearly characterized. The HSCs, at the top of the pyramid,
differentiate into multipotent and then unipotent progenitor cells (that are restricted to
several and then only to one hematopoietic lineage, respectively), and finally these cells
differentiate into intermediate and mature cells (Fig. 1). In comparison, the hierarchical
hematopoietic organization and the precursor-progeny relationships of hematopoietic
cells in the embryo are still unclear. Indeed, the appearance of various hematopoietic
cell types occurs in a somewhat inverted order to that found in the adult hematopoietic
hierarchy and without the expected progressive differentiation steps (Fig. 1). Thus, the
lack of obvious direct lineage relationships between the hematopoietic cells appearing
in the embryo suggests that they probably have different origins. To clarify the origin of
HSCs and their specific progeny, studies have been performed in avian and amphibian
species. It has been demonstrated that hematopoietic cells in non-mammalian
vertebrates are autonomously generated in at least two independent sites: the yolk sac
(YS) and the intraembryonic aortic region.


[Figure: diagram of hematopoietic development, from primitive erythropoiesis in the
E7.5 yolk sac and erythroid-myeloid, multipotential and neonatal repopulating
progenitors in the embryo, through the emergence of the first adult HSCs in the E10.5
AGM, liver and bone marrow, to the progenitors, precursors and mature cells
(erythrocytes, platelets, NK cells, T and B lymphocytes, basophils, eosinophils,
neutrophils and monocytes) of the adult, along an axis running from immaturity to
commitment.]

Fig. 1: Schematic representation of the hematopoietic system in the embryo and adult
mouse.
Similarly, in mammalian embryos, two independent hematopoietic waves appear
sequentially. In the mouse embryo, the first wave, called primitive/embryonic, starts at
embryonic day 7 (E7) (E15 in human) in the YS blood islands and allows the
production of mature nucleated erythrocytes, megakaryocytes and macrophages 3. These
hematopoietic cells probably ensure adequate embryo oxygenation, an absence of
hemorrhaging during new blood vessel formation and the removal of dead cells,
respectively. The second hematopoietic wave, called definitive/adult, begins between
E8.5 and E10.5 in the mouse (E25-30 in human). Different kinds of progenitors
(clonogenic progenitors and CFU-S, i.e. colony-forming units in the spleen) are
produced in the YS and also in the embryonic region comprising the dorsal aorta with
the surrounding splanchnic mesoderm (the paraaortic splanchnopleura (P-Sp)). The first
definitive HSCs appear only later, from E10.5 to E11.5 in the mouse (E25-30 in the
human) in the AGM (aorta, gonads and mesonephros) region (Fig. 1) that derives from
the P-Sp and also in the vitelline and umbilical vessels 4-7 . Thereafter, these HSCs are
thought to seed the liver and possibly the YS through the circulation (Fig.2). A large
amplification of HSCs occurs in the AGM region and then in the fetal liver. HSCs are
then thought to migrate to the BM just before birth where they constitute a pool of rare


cells, with an estimated frequency of 1 per 10^5 murine BM cells. E10.5 AGM HSCs are as
potent as the HSCs harbored in the BM of the adult, thus suggesting a linear
relationship.
During embryonic development of several vertebrate species (e.g. mouse, avian,
amphibian, human), cell clusters on the floor of the dorsal aorta in close association
with the aortic endothelium in the AGM region have been described 8-11 . Such cell
clusters have also been found inside the vitelline and umbilical arteries 6 . The temporal
appearance of these clusters coincides with the appearance of HSC activity in the aorta
and vitelline and umbilical vessels suggesting that at least some HSCs are found within
the clusters.
So far, the direct precursors to HSCs, called pre-HSCs, are still unknown.
Different possible candidates have been suggested: (1) more immature hematopoietic
progenitors, (2) endothelial cells from the ventral face of the aorta or (3) hemangioblasts
(the common precursor for hematopoietic and endothelial cells), less differentiated
mesodermal cells or even mesenchymal stem cells underlying the aortic endothelium.

[Figure: whole E11 mouse embryo with the attached yolk sac, showing the AGM region,
the umbilical and vitelline vessels, the placenta and the liver.]

Fig. 2: Schematic drawing of a whole E11 mouse embryo with the yolk sac attached.
The major hematopoietic territories are indicated in blue (yolk sac, aorta-gonad-mesonephros region (AGM) and liver). Schematic drawing of the AGM region (A,
aorta; M, mesonephros; G, genital ridge/gonad; Me, mesenchyme).

Identification of embryonic HSCs


In contrast to mature hematopoietic cells that are characterized directly by their
morphology, HSCs can only be retrospectively identified by an in vivo test. HSCs have
the unique property of reconstituting an irradiated adult recipient after injection 1. In this
transplantation assay, genetically marked donor cells (i.e. cells with an allelic marker,
transgene, etc. that allow a detection of donor versus recipient cells) are injected into
adult recipients depleted of endogenous HSCs by a high dose of irradiation. The
presence of HSCs within the injected cell population leads to long-term, high level,


multilineage hematopoietic repopulation of all blood lineages by the progeny of donor


HSCs. Moreover, such HSCs result in full repopulation of secondary recipients when
transplanted under the same stringent conditions, indicating the self-renewal property of
HSCs.
The enrichment of HSCs for functional analysis is possible through a variety of
techniques (density centrifugation, flow cytometric sorting based on activation and/or
cell cycle status and combined surface or intracellular antigen expression (e.g. growth
factor receptors, adhesion and cell matrix molecules, cytokines, signalling molecules,
transcription factors)) but does not allow the isolation of a pure population of HSCs.
Indeed, to date no unique HSC-specific antigenic surface marker has been found. So far,
sorting for several cell surface markers, in combination with the techniques mentioned
above, is commonly used in the enrichment of HSCs.
The enrichment of embryonic HSCs poses several problems: (1) several markers
commonly used to purify BM HSCs vary in their expression on embryonic HSCs
depending on the developmental stage and localization, and also vary between strains and
species (e.g. Thy-1, CD38, AA4.1, CD34); (2) hematopoietic and endothelial markers
that normally characterize these two lineages of cells in the adult, are co-expressed by
HSCs in the embryo, possibly because of the close association between HSCs and
endothelium at their site of emergence (for review 12 ).
The usual strategy used to enrich a cell population for HSCs is to deplete, in a
first step, the cells expressing mature lineage markers (CD45R/B220 for B lineage,
Thy-1/CD3/CD4/CD8 for T lineage, NK1-1 for NK lineage, CD11b/Mac-1 or Gr-1 for
myeloid lineage, and TER-119 for erythroid lineage in the mouse). The suspension of
predominantly lineage negative immature cells is called Lin-. The Lin- cells can then be
stained by antibodies characterizing immature progenitors and HSCs and sorted for
these positive markers (e.g. CD45, c-kit, CD34, CD31, AA4.1, Sca1, Flk1, etc.). HSCs
can also be discriminated by their position in the cell cycle using different dyes
(Hoechst, Pyronin Y, Rhodamine 123, BrdU, Ki67) (for review 12-15 ).
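The depletion-then-positive-selection strategy described above is essentially a pair of set filters. The sketch below illustrates only the logic: the lineage markers are taken from the text, but the chosen positive panel (c-kit plus Sca1) and the cell records are invented examples, not real cytometry data.

```python
# Illustrative sketch of the Lin-depletion / positive-selection strategy.
# Marker names follow the text; the cell records are invented examples.
LINEAGE_MARKERS = {"CD45R/B220", "CD3", "CD4", "CD8", "NK1-1",
                   "Mac-1", "Gr-1", "TER-119"}
POSITIVE_PANEL = {"c-kit", "Sca1"}  # one possible subset of the listed markers

def enrich_hscs(cells):
    """Keep cells expressing no lineage marker (Lin-) and
    every marker of the positive panel."""
    lin_neg = [c for c in cells if not (c["markers"] & LINEAGE_MARKERS)]
    return [c for c in lin_neg if POSITIVE_PANEL <= c["markers"]]

cells = [
    {"id": 1, "markers": {"CD45R/B220"}},              # B cell: depleted
    {"id": 2, "markers": {"c-kit", "Sca1"}},           # candidate HSC: kept
    {"id": 3, "markers": {"c-kit"}},                   # progenitor: lacks Sca1
    {"id": 4, "markers": {"Mac-1", "c-kit", "Sca1"}},  # myeloid: depleted
]
print([c["id"] for c in enrich_hscs(cells)])  # [2]
```

A real sort would of course gate on continuous fluorescence intensities rather than on binary marker sets; the binary filter only captures the two-stage structure of the strategy.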

Transgenic Markers of HSCs in the Mouse Embryo


The use of specific antibodies to detect and enrich for HSCs is limited by the
number of molecules expressed on the cell surface and by the expression of the
molecules on other cell lineages. Thus the use of transgenic mice specifically expressing a
fluorescent marker (e.g. green fluorescent protein, GFP), has proven to be advantageous
in the purification of HSCs from the mouse embryo by (1) the use of HSC specific
expression vectors and (2) the possibility of high-level expression due to multiple copy
transgene integration.
In the laboratory, transgenic mice have been generated in which the fluorescent
marker GFP gene has been placed under the regulatory elements of Sca-1 (Ly-6A gene),
which is the most widely used HSC marker. In these mice, sorting of Ly-6A GFP
transgenic adult bone marrow cells (GFP+ and GFP-) and their transplantation into
irradiated adult recipients has shown hematopoietic engraftment with only the GFP+
cells. This indicates that all adult BM HSCs express the Ly-6A GFP transgene.
Moreover, the sorting based on GFP expression yields a 100-fold enrichment of adult
BM HSCs 16 .
In the AGM region, the Sca-1+ fraction represents 2% of the cells. Long-term
transplantation experiments with Sca-1 sorted cells have shown that HSCs are equally
distributed in both Sca-1+ and Sca-1- fractions of AGM cells, and not only in the Sca-1+
fraction as expected from adult BM sorts. Thus, these experiments show the limitation
of the use of antibodies. The same experiments have been performed with Ly-6A GFP


sorted cells from E11 AGM regions, which represent 2% of the cells (Fig. 3A), and show that unlike
the Sca-1 marker, the Ly-6A GFP transgene expression marks all HSCs in the AGM
region (Fig. 3B). Thus the Ly-6A GFP transgene is a better marker than the surface
marker Sca-1, possibly because of the limiting expression of Sca-1 (two copies of the
gene) on the HSCs surface compared to the strong fluorescence signal produced by GFP
in the cytoplasm of HSCs (eight copies of the transgene). In conclusion, the Ly-6A GFP
transgene appears to be an optimally expressed reporter in AGM HSCs and can be
successfully used for HSC isolation from the mouse embryo. The same kind of long-term
transplantation experiments with GFP+ sorted cells from the AGM region subdissected
into aorta/mesenchyme and urogenital ridges has been performed and shows that HSCs
are localized only in the aorta at the midgestation stage (Fig. 3B) 17.

[Figure: flow cytometric profile of the E11 AGM region, with 96.9% GFP- and 2.0%
GFP+ cells, and the long-term repopulation summary below.]

Number of repopulated animals / total number of animals transplanted:

                            1 ee     2 ee
AGM                GFP-     0/6      0/4
                   GFP+     4/5      3/4
Aorta/Mesenchyme   GFP-     0/5      0/8
                   GFP+     1/5      3/9

Fig. 3: Flow cytometric profile indicating the percentage of GFP- and GFP+ cells in the
E11 AGM region. Summary of long-term repopulation of recipients injected with 2
different doses (1 or 2 embryo equivalents, ee) of GFP- or GFP+ cells sorted from E11
AGM regions or subdissected aorta/mesenchyme regions.

Precise localization of functional HSCs in the mouse AGM

The expression of the Ly-6A GFP transgene can be visualized with fluorescence
and confocal microscopy. Ly-6A GFP expression initiates in endothelial cells, and not
mesenchymal/mesodermal cells, at the ventral aspect of the dorsal aorta at day 9. At day
11, GFP-expressing cells are dispersed along the circumference and length of the aorta,
specifically within the endothelial layer lining the wall of the dorsal aorta and also in the
hematopoietic clusters but still not within the underlying mesenchyme (Fig. 4). To
establish what cell lineage the GFP+ cells may represent, immunohistochemical staining
of serially sectioned AGM regions has been performed. GFP+ cells have been shown
to coexpress endothelial markers (e.g. CD31) strongly suggesting a vascular endothelial
origin of the first definitive HSCs (Fig. 4) 17 .


[Figure: cross-section of the E11 aorta, oriented ventral to dorsal, showing a ventral
hematopoietic cluster, endothelial cells (GFP-CD31+), HSCs (GFP+CD31+) and
erythrocytes within the aortic lumen.]
Fig. 4: Schematic drawing of E11 aorta section.

Conclusion
The temporal and spatial expression pattern of Ly-6A GFP, together with the
finding that all AGM HSCs are GFP+, localize the first HSCs to the aortic endothelium
and/or associated hematopoietic clusters in the E11 mouse embryo. It is still unknown if
these cells are completely restricted to the hematopoietic compartment or if they have a
greater plasticity allowing them to commit to non-hematopoietic lineages. Such studies
and also a complete understanding of the emergence and induction of HSCs during
development are still in their early stages but most likely will have many applications
for example in future cell replacement therapies and gene therapy.

References

1. Lemischka IR. Clonal, in vivo behavior of the totipotent hematopoietic stem
cell. Semin Immunol. 1991;3(6):349-355.
2. Spangrude GJ, Smith L, Uchida N, et al. Mouse hematopoietic stem cells.
Blood. 1991;78(6):1395-1402.
3. Moore MA, Metcalf D. Ontogeny of the haemopoietic system: yolk sac origin of
in vivo and in vitro colony forming cells in the developing mouse embryo. Br J
Haematol. 1970;18(3):279-296.
4. Medvinsky A, Dzierzak E. Definitive hematopoiesis is autonomously initiated
by the AGM region. Cell. 1996;86(6):897-906.


5. Muller AM, Medvinsky A, Strouboulis J, Grosveld F, Dzierzak E. Development
of hematopoietic stem cell activity in the mouse embryo. Immunity.
1994;1(4):291-301.
6. de Bruijn MF, Speck NA, Peeters MC, Dzierzak E. Definitive hematopoietic
stem cells first develop within the major arterial regions of the mouse embryo.
Embo J. 2000;19(11):2465-2474.
7. Cumano A, Dieterlen-Lievre F, Godin I. Lymphoid potential, probed before
circulation in mouse, is restricted to caudal intraembryonic splanchnopleura.
Cell. 1996;86(6):907-916.
8. Dieterlen-Lievre F, Martin C. Diffuse intraembryonic hemopoiesis in normal
and chimeric avian development. Dev Biol. Nov 1981;88(1):180-191.
9. Garcia-Porrero JA, Godin IE, Dieterlen-Lievre F. Potential intraembryonic
hemogenic sites at pre-liver stages in the mouse. Anat Embryol (Berl).
1995;192(5):425-435.
10. North T, Gu TL, Stacy T, et al. Cbfa2 is required for the formation of
intra-aortic hematopoietic clusters. Development. 1999;126(11):2563-2575.
11. Tavian M, Hallais MF, Peault B. Emergence of intraembryonic hematopoietic
precursors in the pre-liver human embryo. Development. 1999;126(4):793-803.
12. Garcia-Porrero JA, Manaia A, Jimeno J, Lasky LL, Dieterlen-Lievre F, Godin
IE. Antigenic profiles of endothelial and hemopoietic lineages in murine
intraembryonic hemogenic sites. Dev Comp Immunol. 1998;22(3):303-319.
13. Keller G, Lacaud G, Robertson S. Development of the hematopoietic system in
the mouse. Exp Hematol. 1999;27(5):777-787.
14. Nishikawa SI. A complex linkage in the developmental pathway of endothelial
and hematopoietic cells. Curr Opin Cell Biol. 2001;13(6):673-678.
15. Shivdasani RA, Orkin SH. The transcriptional control of hematopoiesis. Blood.
1996;87(10):4025-4039.
16. Ma X, Robin C, Ottersbach K, Dzierzak E. The Ly-6A (Sca-1) GFP transgene is
expressed in all adult mouse hematopoietic stem cells. Stem Cells.
2002;20(6):514-521.
17. de Bruijn MF, Ma X, Robin C, Ottersbach K, Sanchez MJ, Dzierzak E.
Hematopoietic stem cells localize to the endothelial cell layer in the
midgestation mouse aorta. Immunity. 2002;16(5):673-683.


Hydrogen storage materials for mobile applications


Markus Schwarz, Anca Haiduc, Hans Stil, Hans Geerlings
Shell Global Solutions International BV, Amsterdam, The Netherlands
Mobility - the transport of people and goods - is a socioeconomic reality that will surely
increase in the coming years. It should be safe, economic and clean. Vehicles can be run
either by connecting them to a continuous supply of energy or by storing energy on
board. Hydrogen would be ideal as a synthetic fuel, but storage remains a problem [1]. At
room temperature, hydrogen is gaseous and occupies a large volume: about 1.2 kg of
hydrogen is consumed per 100 km, which equals 13,500 liters of gaseous hydrogen at
atmospheric pressure and room temperature. Therefore, hydrogen storage in small tanks,
especially for mobile applications, is difficult and would require very high pressures.
There are several alternatives with specific advantages and drawbacks. The hydrogen is
converted into electricity in a proton exchange membrane (PEM) fuel cell.
The basic principle of the fuel cell is the production of electricity by an electrochemical
reaction. The cell consists of two electrodes separated by an electrolyte. There are several
kinds of fuel cells. The process usually involves the oxidation of the fuel (in many cases
hydrogen) at the anode, with the production of protons, which cross the electrolyte
medium to reach the cathode, react with oxygen and form water. These cells operate at
relatively low temperatures (about 80 °C), have high power density, can vary their output
quickly to meet shifts in power demand, and are suited for applications, such as in
automobiles, where quick startup is required. In order to compete with standard
fueled automobiles, some further requirements must be met:

- 5 kg of usable hydrogen stored on board to give sufficient range
- Tank volume smaller than 120 L
- Full tank contains at least 5 wt.% usable hydrogen
- Tank can be filled in less than 2 minutes
- Capacity retained during 1000 absorption / desorption cycles
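The gas-volume figure quoted above can be checked against the ideal gas law. The short sketch below is a back-of-the-envelope estimate added for illustration, not part of the original study.

```python
# Back-of-the-envelope check of the volume of 1.2 kg of gaseous hydrogen
# at atmospheric pressure, using the ideal gas law V = nRT/P.
R = 8.314         # molar gas constant [J/(mol K)]
M_H2 = 2.016e-3   # molar mass of H2 [kg/mol]
P_ATM = 101325.0  # atmospheric pressure [Pa]

def h2_volume_litres(mass_kg, temp_k):
    n = mass_kg / M_H2                    # amount of substance [mol]
    return n * R * temp_k / P_ATM * 1000  # m^3 -> litres

v_0c = h2_volume_litres(1.2, 273.15)   # ~13,300 L at 0 degC
v_25c = h2_volume_litres(1.2, 298.15)  # ~14,600 L at 25 degC
print(round(v_0c), round(v_25c))
```

The quoted 13,500 liters sits closer to the 0 °C value, i.e. to standard rather than room temperature, but the order of magnitude is clearly right either way.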

Shell Global Solutions concentrates on complex metal hydrides, e.g. NaAlH4, because
they have advantages with respect to volume, handling and geometrical flexibility over
the other alternatives, e.g.:
- Nanofibres / nanotubes
- On-board methanol / gasoline reforming
- Compressed / liquefied hydrogen
Previously, non-transition metal complex hydrides were considered for hydrogen storage
only in the context of releasing hydrogen irreversibly via hydrolysis [2]. Since workers in
several laboratories have reported the discovery of a number of catalysts that improve the
reversibility of the hydrogen release by NaAlH4 (Figure 1) and Na3AlH6, interest in the
use of complex hydrides of aluminum as storage media has been rekindled.


Figure 1: TEM-pictures of a NaAlH4 particle [3]


The main breakthrough in hydrogen storage materials was made by Bogdanović [4].
He has recently reported a study of the catalytic effects of transition metal compounds on
both the hydrogen release and uptake by NaAlH4. The work of Bogdanović and
Schwickardi in 1997 proved that hydrogen release can be made reversible by doping with
transition metals [4,5]. Different methods of doping have been presented (from solution,
ball milling, mechanical grinding). Ball milling has the advantage of being simple and
solvent free, but is not always perfectly reproducible and it is difficult to scale up [5].
NaAlH4 + x TiCl3 → (1-3x) NaAlH4 + 3x NaCl + 3x Al + 6x H2 + x Ti
Scheme 1: Doping reaction with TiCl3 as the dopant
Doping with TiCl3 (Scheme 1) in the ball mill results in decreased capacities because
some of the hydrogen is released when the hydride reduces the dopant and forms NaCl
(Figures 2, 3).

Figures 2 and 3: TEM-pictures of NaCl-rich (left) and Titanium-rich (right) particles
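The capacity penalty of doping follows directly from the stoichiometry of Scheme 1. The sketch below estimates it, assuming (as in the two desorption steps quoted later in the text) that each surviving mole of NaAlH4 can deliver 1.5 mol of H2.

```python
# Usable hydrogen capacity of TiCl3-doped NaAlH4, from the Scheme 1
# stoichiometry: x mol TiCl3 consumes 3x mol NaAlH4, forming NaCl, Al and Ti.
M_NAALH4 = 54.00  # molar mass of NaAlH4 [g/mol]
M_TICL3 = 154.22  # molar mass of TiCl3 [g/mol]
M_H2 = 2.016      # molar mass of H2 [g/mol]

def usable_h_wt_percent(x):
    """wt.% usable H per mol NaAlH4 doped with x mol TiCl3, assuming
    1.5 mol H2 recoverable from each surviving mol of NaAlH4."""
    h_mass = (1.0 - 3.0 * x) * 1.5 * M_H2
    total_mass = M_NAALH4 + x * M_TICL3
    return 100.0 * h_mass / total_mass

print(round(usable_h_wt_percent(0.00), 2))  # ~5.6 wt.% undoped
print(round(usable_h_wt_percent(0.04), 2))  # ~4.4 wt.% at 4 mol% TiCl3
```

The dopant thus costs capacity twice: it destroys part of the hydride and adds dead weight (NaCl, Ti) to the tank.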


While the recent advances in hydride storage have illustrated the reversibility of specific
complex hydrides, none of these hydrides has been shown to contain the amount of
hydrogen required to be compatible with the requirements of a fuel cell for mobile
applications. Besides, two main questions still need an answer: where does the catalyst
go, and what is its role in the hydrogen cycling process? Although NaAlH4 is the most
studied system, its capacity is not as high as desired to meet the requirements for mobile
applications. But it is useful as a model system for understanding the mechanism and the
catalytic activity. The state of the art can be summarized as follows:

- Moderate reversible capacity (3.9 wt.%)
- Still unsatisfactory kinetics of H2 ab/desorption
- Cycling stability not thoroughly studied yet
- Fairly good thermodynamics: 1 bar H2 at 33 °C (first step) and 115 °C (second step)
- First step: 3 NaAlH4 → Na3AlH6 + 2 Al + 3 H2 (3.7 wt.% H)
- Second step: Na3AlH6 → 3 NaH + Al + 3/2 H2 (1.9 wt.% H)
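The per-step capacities quoted above follow from the molar masses; the sketch below reproduces the 3.7 and 1.9 wt.% figures from the two reaction equations.

```python
# Check of the hydrogen released in the two desorption steps of NaAlH4.
M = {"Na": 22.99, "Al": 26.98, "H": 1.008}  # atomic masses [g/mol]
M_NAALH4 = M["Na"] + M["Al"] + 4 * M["H"]   # ~54.0 g/mol

def step_wt_percent(mol_h2):
    """wt.% H released as mol_h2 moles of H2 per 3 mol NaAlH4 feed."""
    return 100.0 * mol_h2 * 2 * M["H"] / (3 * M_NAALH4)

step1 = step_wt_percent(3)    # 3 NaAlH4 -> Na3AlH6 + 2 Al + 3 H2
step2 = step_wt_percent(1.5)  # Na3AlH6 -> 3 NaH + Al + 3/2 H2
print(round(step1, 1), round(step2, 1), round(step1 + step2, 1))  # 3.7 1.9 5.6
```

The 5.6 wt.% theoretical total also shows why the measured reversible capacity of 3.9 wt.% leaves little margin against the 5 wt.% target listed earlier.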

The ball milling produces finely dispersed amorphous Ti, not visible in the XPS (Figure
4), which alloys with aluminum when heated.
[Figure: XPS survey spectrum, counts/sec versus binding energy (1098 to -2 eV); no
titanium species observed (Ti0: 454 eV, TiO2: 459 eV).]

Figure 4: XPS of Ti-doped NaAlH4 with mono-Al (green line) and twin-Mg (red line)

49

In this alloyed form the titanium is not able to dissociate H2 and activate it for
absorption. This observation, also stressed by Schüth at the Gordon Research
Conference [6], is verified by analytical work (XRD, XPS and TEM studies) performed
at SRTCA. The Al-Ti alloy (Figure 5) is titanium caged in aluminum: a dead catalyst,
not a sleeping one. After alloy formation it is not possible to rehydrogenate the
system.
[Figure: XRD intensity versus diffraction angle (37 to 41 degrees), showing the
TiAl2-3 and Ti peaks.]

Figure 5: XRD of doped NaAlH4 desorbed at different temperatures (aluminum-peak area)
In the near future the knowledge of chemists is needed in order to extend the work to
other (air-sensitive) catalysts and to the formation of alanates which are not commercially
available. New synthetic procedures have to be developed, and several complex hydrides
must still be synthesized. A first approach has been made with the synthesis of
Mg(AlH4)2(4THF) according to the procedure of Fichtner [7], who showed the metathesis
reaction of MgCl2 with NaAlH4 in THF as a coordinating solvent to form
Mg(AlH4)2(4THF). This compound was fully characterized by elemental analysis, FTIR,
Raman, crystal structure analysis and TGA-MS. However, this compound could not yet
be prepared from the elements and has not been made reversible.
[1] L. Schlapbach, A. Züttel, Nature, 2001, 414, 353.
[2] D. K. Slattery, M. D. Hampton, Proceedings of the U.S. DOE Hydrogen Program
Review, NREL/CP-610-32404, 2002.
[3] Picture taken from Hydride Development for Hydrogen Storage, K. Gross, 2003
DOE Hydrogen and Fuel Cells Annual Merit Review, Berkeley, CA, May 19-22.
[4] B. Bogdanović, M. Schwickardi, J. Alloys Comp., 1997, 253, 1.
[5] C.M. Jensen, K.J. Gross, Appl. Phys., 2001, A72, 213.
[6] Oral presentation given by F. Schüth at the GRC on Hydrogen-Metal Systems, July
13-18, 2003, Colby College, Waterville, ME.
[7] M. Fichtner, O. Fuhr, J. Alloys Comp., 2002, 345, 286.


Analytical Ship Collision Model


Author: Kristjan Tabri, Schelde Naval Shipbuilding
Supervisor: prof. Petri Varsta, Helsinki University of Technology
Abstract
This paper presents the first stage of a ship-ship collision interaction analysis. In this stage
the energy balance of the collision is established by using experimentally measured
ship motions as input. The energy balance is established to evaluate the quality of the
methods used to describe the influence of different phenomena, such as the effects of the
water surrounding the ships, water sloshing inside the tanks, bending of the ship hull
girder, etc. Brief descriptions of the theories used are given. The research is carried out in
the framework of the Marie Curie Intra-European Fellowship programme and in close
cooperation between Schelde Naval Shipbuilding and Helsinki University of
Technology.
Introduction
Despite continuous work to prevent ship collisions, accidents still happen. Due
to the serious consequences of collision accidents it is important to reduce their
probability and to minimize or prevent potential damage to the ships and the
environment. A better understanding of the collision phenomena contributes to the
minimization of the consequences.
The present study is an analysis of the collision interaction occurring when two ships
collide. The aim is to separate the collision interaction into physical constituents, which
can be studied individually. Separation makes it possible to identify the dominating
phenomena of the different constituents, such as the collision force and the ensuing
responses of the ships. This study is idealized to a case in which the unpowered ship
collides at a right angle with another ship.
The work is motivated by phenomena observed in full-scale collision tests which
could not be explained at the time, as existing simulation tools failed to predict the
outcome of the collision experiments. This situation initiated a new study on collision
interaction, in which closer attention is paid to the ships' dynamics during the collision.
The interaction model treats the behaviour of both ships individually, but these are
linked together by the common collision force. As the emphasis of the study is on ship
dynamics, the collision force is obtained from experimental tests or by numerical
methods.
The purpose of this paper is to establish the energy balance of the collision by using
experimentally measured ship motions as input. The energy balance is established to
assess the quality of the methods used to describe the effects of the different phenomena
occurring during the collision. Furthermore, a better understanding of the energy
distribution allows determination of the portion of the total collision energy absorbed by
the ship structure, and so the contribution to ship design is obvious.
Analysis of the collision interaction
Ships participating in a collision experience the contact load resulting from the impact
of the striking ship on the struck ship. This force affects the ships' motions, which in
turn cause hydrodynamic forces exerted by the surrounding water in terms of
hydrodynamic damping (waves) and added mass associated with the sway motion of the
struck ship and the surge motion of the striking ship. Furthermore, the ships' motions
depend on the global bending of the struck ship's hull girder and, in the presence of free
surfaces inside the ship tanks, the effects of water sloshing may also become important.


Those experiments produced valuable deformation force FC and ships' motion time
histories, which are now used as input for the theories described above to obtain the
remaining unknown force components in Eq. 2. Based on the measured deformation
force FC and the calculated forces, the different energy components for both ships are
determined. Adding together all these components produces a total energy, which
should be equal to the kinetic energy E0 of the striking ship at the beginning of the
collision. This energy balance is depicted in Figure 3. In the figure, the energies of the
struck and the striking ship consist of energy components obtained by integrating the
forces FB, FF, FH and FS over the corresponding displacements. The experimentally
measured deformation energy is presented by a separate line. The total energy is the
sum of both ships' energies and the deformation energy.
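The bookkeeping just described, integrating each force over its corresponding displacement and comparing the sum with E0, can be sketched numerically with the trapezoidal rule. The force and displacement samples below are invented placeholders, not the measured histories.

```python
# Sketch of the energy bookkeeping: integrate each force component over its
# displacement (trapezoidal rule) and sum the results for comparison with E0.
def work_done(force, displacement):
    """Trapezoidal integral of force over displacement."""
    return sum(0.5 * (force[i] + force[i + 1])
               * (displacement[i + 1] - displacement[i])
               for i in range(len(force) - 1))

# Invented sample histories for two components (e.g. contact and hydrodynamic).
s = [0.0, 0.1, 0.2, 0.3]           # displacement [m]
f_c = [0.0, 2.0e5, 3.0e5, 1.0e5]   # contact force samples [N]
f_h = [0.0, 0.5e5, 1.0e5, 0.5e5]   # hydrodynamic force samples [N]

e_total = work_done(f_c, s) + work_done(f_h, s)
print(e_total)  # compare against the initial kinetic energy E0
```

In the actual analysis each of FB, FF, FH and FS would contribute one such integral per ship, and any gap between the summed components plus the measured deformation energy and E0 flags a modelling error in one of the constituent theories.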
[Figure: non-dimensional energy versus time (0 to 4.5 s), with curves for the available
energy E0 (measured), the deformation energy (measured), the striking ship
(calculated), the struck ship (calculated) and the total energy (measured + calculated).]

Figure 3. Non-dimensional energy balance.


Figure 3 shows a satisfactory agreement between the experimentally measured and
the calculated energy. This indicates that the proposed methods are suitable for describing
the collision phenomena. So far the experimentally measured motions were used, which
means that the equation of motion, Eq. 2, did not need to be solved. For a true simulation
of the collision interaction the motion time history is not known and Eq. 2 needs to be
solved by using a time-integration procedure. Implementation of this time-integration
procedure forms the second stage of this study.
References
1. Cummins WE, 1962, The Impulse Response Function and Ship Motions,
Schifftechnik 9, No 47, pp. 101-109
2. Graham EW, Rodriguez AM, 1952, The Characteristics of Fuel Motion Which
Affect Airplane Dynamics, Trans. of ASME, J Appl Mech, vol 19, no 3, pp. 381-388
3. Hakala MK, 1985, Numerical modelling of fluid-structure and structure-structure
interaction in ship vibration, VTT Publication No 22, Finland, p. 62
4. Wevers LJ, Vredeveldt AW, 1999, Full Scale Ship Collision Experiments 1998,
TNO-report 98-CMC-R1725, The Netherlands, p. 26
5. Journée JMJ, 1992, Strip Theory Algorithms, Delft University of Technology,
Report MEMT 24



Thermal fatigue studies on nuclear piping


E. Paffumi 1, R. C. Hurst 2, N. G. Taylor 3, K. F. Nilsson 4, M. R. Bache 5

1,2,3,4 Institute for Energy, Joint Research Centre, P.O. Box 2, 1755 ZG Petten, The Netherlands
5 Materials Research Centre, School of Engineering, University of Wales, Swansea, SA2 8PP, UK

ABSTRACT: The paper presents an investigation in progress on the cracking behaviour of thick cylindrical
components of 316L stainless steel subjected to cyclic thermal loading at different maximum temperatures by
means of induction heating and water quenching. Under the applied loading, a network of cracks initiates at the
inner surface and some of these propagate across the test specimen wall thickness. A series of tests with different
maximum temperatures has been carried out. The number of cycles for crack initiation was measured from
surface replicas taken during intermittent stops, whereas the depth of the fatigue cracks is measured using an
ultrasound time-of-flight diffraction technique (TOFD). Predictions of the crack initiation life are based on the
surface plastic strain range and low cycle fatigue data, and on crack propagation models for microstructurally
short cracks. The predictions are in good agreement with the test results. The tests are being continued to study
crack propagation and its modelling by the cyclic J-integral parameter, as well as the cyclic crack tip opening
displacement, ΔCTOD.
Keywords: thermal fatigue, temperature, crack initiation, crack growth, 316L stainless steel, replica

1 Introduction
The development of analysis procedures and laboratory techniques for the accurate
assessment of cracking under thermal fatigue conditions is a topic of increasing importance,
particularly in relation to the life assessment of main coolant lines in ageing light water
nuclear reactors. Thermal fatigue resulting from fluid mixing (e.g. a mixing tee scenario)
is a recognized problem in this respect, but due to the associated complex loading and
effects of material degradation, it is still not well understood [1]. Generally, this
phenomenon is linked to a turbulent mixing of two fluids at different temperatures, which
induces large temperature variations at the pipe surface with associated stress and strain
variations. The first damage often occurs as crazing, i.e. network of surface cracks, in the
region with the largest thermal fluctuations or as discrete cracks at welds.
This ongoing research project aims to advance understanding of the basic mechanisms
and loading conditions under which thermal fatigue cracks initiate and propagate, and to
translate this into improved practical methods for predicting thermal fatigue.
2 Test methodologies
Thermal shock experiments have been carried out in a special test facility previously
developed for up-shock thermal cycling [2]. The cylindrical specimens are made of a low
carbon 316L stainless steel [2] with an outside diameter of 48 mm, a 14 mm wall thickness
and a length of 224 mm (Fig. 1). The specimens are heated continuously from the outside by
an induction system and quenched internally with room-temperature water. The resulting
transient thermal stress distributions induce very strong stress gradients through the pipe
thickness; these depend on the temperature difference, ΔT, the cycling frequency, the material
properties and the heat transfer between the pipe and the water.
1 Ph.D. student, Elena.Paffumi@jrc.nl
2 Professor, Roger.Hurst@cec.eu.int
3 Dr, Nigel.Taylor@jrc.nl
4 Dr, Karl-Fredrik.Nilsson@jrc.nl
5 Dr, M.R.Bache@Swansea.ac.uk

Both cracked and un-cracked body analyses were performed using axisymmetric eight-node
elements, and only the upper half of the specimen was modelled because of the assumed
symmetry.
Estimates of the crack initiation life have been derived from low cycle fatigue data
represented by a Coffin-Manson type law, while the Paris relationship was used to predict
fatigue crack propagation under linear elastic conditions. Under large-scale yielding
conditions, it has been proposed that propagation models based on the J-integral or the crack
tip opening displacement better describe crack propagation [7].
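The two-stage life estimate described above can be sketched in a few lines. The constants below (fatigue ductility coefficient and exponent, Paris constants, crack sizes and stress range) are illustrative placeholders, not the fitted 316L data used in the paper.

```python
import math

# Illustrative life-prediction sketch: Coffin-Manson for initiation,
# Paris law for propagation. All constants are placeholder values
# chosen only to demonstrate the calculation.

def coffin_manson_cycles(d_eps_p, eps_f=0.6, c=-0.6):
    """Cycles to initiation from d_eps_p/2 = eps_f * (2N)**c."""
    return 0.5 * (d_eps_p / (2.0 * eps_f)) ** (1.0 / c)

def paris_cycles(a0, af, d_sigma, C=1e-11, m=3.0):
    """Cycles to grow a crack from a0 to af [m] under
    dK = d_sigma * sqrt(pi * a) [MPa*sqrt(m)], by closed-form
    integration of da/dN = C * dK**m (valid for m != 2)."""
    geom = (d_sigma * math.sqrt(math.pi)) ** m
    p = 1.0 - m / 2.0
    return (af**p - a0**p) / (C * geom * p)

Ni = coffin_manson_cycles(d_eps_p=0.004)              # 0.4 % plastic strain range
Np = paris_cycles(a0=0.2e-3, af=7e-3, d_sigma=300.0)  # 0.2 mm -> half wall, 300 MPa
```

In the actual tests the stress range decays through the wall, so the Paris integration would be performed numerically over the FE-computed ΔK(a) rather than with the constant-amplitude closed form shown here.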

Table 1: Experimental thermal fatigue tests

TF Test #  Specimen            Tmax    Twater  Cycles to first damage, N                  Damage
1          Pipe with notch     400 °C  25 °C   27,400                                     Surface cracking; 50 kN tensile load then applied, resulting in pipe failure
2          Pipe without notch  300 °C  25 °C   55,600 (no intermediate stops in between)  Surface cracking detected by replicas and X-rays
3          Pipe without notch  400 °C  25 °C   20,000                                     Surface cracking detected by replicas and X-rays
4          Pipe without notch  350 °C  25 °C   n/a                                        Test in progress; no damage after 10,000 cycles

4 Results and discussion

4.1 Experimental results
Four specimens have been tested to date. For each test, crack initiation was monitored at
specific intervals by the replica technique and micro-structural analysis.
The number of cycles required to initiate cracking was recorded for the different
temperature cycles. The results are summarised in Table 1. For the first three specimens
crack initiation was noted on the inner surface, whereas the fourth specimen has only been
loaded for 10,000 cycles and no crack initiation has been detected yet.
The cracks detected at the inner surface of these three specimens were orientated both
longitudinally and circumferentially to the specimen axis.
Clearly, by increasing the maximum cyclic temperature, the number of quenching cycles
to initiate cracking decreases.
Thermal cycling of these three specimens will now continue to study the crack
propagation phase in detail, by means of the non-destructive ultrasonic time-of-flight
diffraction (TOFD) technique.


for initiation are 23,000, 34,000 and 54,000 respectively, and are plotted in Fig. 2. These
results are in good agreement with the experimental numbers of cycles to first observation of
cracks on the replicas for specimens 2 and 3 (see Table 1).
5 Conclusions

- The thermal fatigue rig can produce crack initiation and propagation at the inner surface
under controlled temperature down-shocks.
- A replica technique can be used to detect crack initiation during interrupted tests.
- Predictions of the number of cycles to initiate thermal fatigue cracking, based on low
cycle fatigue data and the FE-computed strain variations, are in reasonable agreement
with the experimental data.
- The finite element simulations show that the assumptions concerning the boundary
conditions are very important; the more constraint, the higher the crack driving force.
- Thermal cycling will continue in order to study crack growth through the wall thickness
with the application of a non-destructive monitoring technique, such as the time-of-flight
diffraction ultrasonic technique (TOFD).

References
[1] Experience with Thermal Fatigue in LWR Piping Caused by Mixing and Stratification,
Special Meeting, June 1998, Paris, NEA/CSNI/R(98)8, OECD Nuclear Energy Agency,
Paris, 1998
[2] Gandossi L., Crack Growth Behaviour in Austenitic Stainless Steel Components Under
Combined Thermal Fatigue and Creep Loading, Ph.D. Study, University of Wales
Swansea, 2000
[3] Holman J. P., Heat Transfer, McGraw-Hill, Inc., 8th edition, 1997
[4] Kerr D. C., An Investigation of Fatigue Growth in Thermally Loaded Components, Ph.D.
Study, University of Glasgow, p. 119-131, 1993
[5] Fissolo A., Marini B., Nais G. and Wident P., Thermal fatigue behaviour for a 316L
type steel, Journal of Nuclear Materials, Vol. 233-237, p. 156, 1996
[6] Gorlier et al., The Cyclic Plastic Behaviour of a 316 Steel at 20 to 600 °C, Conf. Proc.
Fatigue 84, p. 41-48, 1984
[7] Dowling N. E. and Begley J. A., Fatigue Crack Growth During Gross Plasticity and the
J-Integral, Mechanics of Crack Growth, ASTM STP 590, American Society for Testing
and Materials, pp. 82-103, 1976


Productivity Growth in Agriculture and Intertemporal Frontier Separation for Six CEECs and the EU-15 Members

Axel Tonini*
Introduction
The Central and Eastern European Countries (CEECs) are at present facing two important
challenges: completing the transition from a centrally planned economy to a free market
economy, and preparing for accession to the EU in 2004. Both challenges involve
severe and compulsory structural change for the agricultural sector of the CEECs. The
aim of this paper is to analyse total factor productivity (TFP) growth in agriculture for a
selected number of CEECs (Albania, Bulgaria, Czech and Slovakia, Hungary, Poland and
Romania 1) and the current EU-15 member states. The purpose of this exercise is, firstly,
to determine how agricultural productivity growth reacted to the transition period in the
6-CEECs; secondly, to analyse whether the 6-CEECs and the EU-15 member states followed
similar productivity growth patterns; and finally, to test to what extent a frontier separation
between the EU-15 members and the 6-CEECs can be established.
Methodology
This paper calculates and explains the Malmquist Index (MI) of TFP growth in
agriculture for a world frontier determined by pooling the 6-CEECs and the EU-15 members.
We follow the seminal article of Färe et al. (1994), which uses non-parametric methods
and a distance function approach to calculate TFP growth that can be decomposed into
technical change and technical efficiency change components.
A MI appears suitable for measuring TFP when panel data are available, for several
reasons. Firstly, the MI is less restrictive than the Törnqvist Index because it does not
require the specification of a behavioural objective and does not assume full efficiency 2. This
is appropriate for the data set at hand, where the TFP index is calculated at a
country level and the neo-classical behavioural assumptions may be arguable, especially
with regard to CEECs. Secondly, the MI does not necessarily require price information.
This is also important for CEECs, where data availability is one of the major
constraints in selecting an appropriate methodology. Finally, the MI allows TFP to be
decomposed into technical change (innovation) and efficiency change (catching up),
enlarging the informative power of the index.
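The decomposition just described can be written down compactly. In the sketch below, the four distance-function values are invented numbers used only to illustrate the arithmetic; in the actual analysis they would come from DEA linear programmes on the pooled data, following Färe et al. (1994).

```python
import math

# Output-oriented Malmquist index between periods t and t+1, decomposed
# into efficiency change (catching up) and technical change (frontier
# shift), following the Faere et al. (1994) geometric-mean form.
# d_a_b = distance of the period-b observation against the period-a frontier.

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    ec = d_t1_t1 / d_t_t                                   # efficiency change
    tc = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))  # technical change
    return ec * tc, ec, tc

# Illustrative distance values (invented):
mi, ec, tc = malmquist(d_t_t=0.85, d_t_t1=1.05, d_t1_t=0.80, d_t1_t1=0.90)
```

By construction the index factors exactly into MI = EC × TC, so a value of MI above one with TC above one and EC below one is what the paper calls a positive frontier shift combined with catching-down.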
Non-parametric and parametric techniques usually assume a common reference
technology for all the observations in a sample. However, we may argue that CEECs
and EU-15 countries may operate under different production technologies. In order to
analyse whether the CEECs and the EU-15 have different reference technologies, we apply
the procedure introduced by Grosskopf and Valdmanis (1987) and developed later by
Fizel and Nunnikhoven (1992). The procedure can be summarised as follows. Firstly,
we calculate the within-group technical efficiencies for each country relative to their
* PhD student in Economia e Politica Agro-alimentare (doctoral degree associated with Padova University)
at the Agri-food Economics Department, Catholic University of the Sacred Heart, Via Emilia Parmense 84, 29100
Piacenza; email: axel.tonini@wur.nl. He has been awarded a Marie Curie Host Fellowship to carry out his
research at the Agricultural Economics and Rural Policy Group, Wageningen University, The Netherlands.
1 Czech and Slovakia were aggregated into one country, recovering the former area of Czechoslovakia, because
of limitations in data availability.
2 With this terminology we refer to the case where units are simultaneously technically and allocatively
efficient.


separate frontier (i.e. the EU-15 frontier and the CEECs frontier). Secondly, we calculate the
overall technical efficiencies for each country relative to a pooled frontier. Thirdly, we
form the ratio of own technical efficiency to overall technical efficiency for each
country in the two sub-groups. Fourthly, we take the average of the derived ratios for the
two sub-groups, obtaining a between-group average ratio. The closer the ratio is to one,
the closer the within-group frontier is to the pooled frontier.
A country-level analysis usually has to be carried out with fewer observations
than is normally the case when one relies on conventional decision-making units (i.e.
firms, sectors, etc.), making DEA more sensitive to dimensionality issues. By simply
applying the technique developed by Grosskopf and Valdmanis (1987) we would run
into a potential dimensionality problem because of the lack of observations, making our
results arbitrary. Therefore we built a single production set by pooling the
observations for each country over the entire period 1993-2000, obtaining an
intertemporal production set. Similarly, we recovered two partitioned intertemporal
production sets, one for the CEECs and the other for the EU-15 countries.
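The four-step frontier-separation procedure can be sketched with a standard output-oriented constant-returns DEA programme. This is an illustrative sketch, not the paper's code: the data are invented single-input, single-output "country" observations for two small groups (the actual analysis pools country-year observations with five inputs), and `scipy.optimize.linprog` is assumed for solving the linear programmes.

```python
import numpy as np
from scipy.optimize import linprog

def dea_te(x0, y0, X, Y):
    """Output-oriented CRS technical efficiency (1 = on the frontier).
    Solves max phi s.t. Y'lam >= phi*y0, X'lam <= x0, lam >= 0."""
    n = X.shape[0]
    c = np.concatenate(([-1.0], np.zeros(n)))           # maximise phi
    A_out = np.hstack((y0.reshape(-1, 1), -Y.T))        # phi*y0 - Y'lam <= 0
    A_in = np.hstack((np.zeros((X.shape[1], 1)), X.T))  # X'lam <= x0
    A_ub = np.vstack((A_out, A_in))
    b_ub = np.concatenate((np.zeros(len(y0)), x0))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return 1.0 / res.x[0]                               # Farrell TE in (0, 1]

def mean_ratio(X, Y, Xp, Yp):
    """Group average of own-frontier TE over pooled-frontier TE."""
    ratios = [dea_te(x0, y0, X, Y) / dea_te(x0, y0, Xp, Yp)
              for x0, y0 in zip(X, Y)]
    return float(np.mean(ratios))

# Invented observations: group A spans the pooled frontier, group B does not.
Xa, Ya = np.array([[2.0], [4.0], [6.0]]), np.array([[2.0], [5.0], [6.0]])
Xb, Yb = np.array([[2.0], [4.0]]), np.array([[1.0], [2.5]])
Xp, Yp = np.vstack((Xa, Xb)), np.vstack((Ya, Yb))

ratio_a = mean_ratio(Xa, Ya, Xp, Yp)  # close to 1: own frontier = pooled frontier
ratio_b = mean_ratio(Xb, Yb, Xp, Yp)  # > 1: group operates on an inner frontier
```

In this toy example group B's between-group ratio is well above one, which is the pattern the paper later reports for the 6-CEECs relative to the EU-15.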
Data
A balanced panel of six CEECs and the EU-15 members, covering the period from
1993 to 2000 3, was built using the FAO Agrostat and the USDA World Agriculture
Trends and Indicators (WATIVIEW) databases. The CEECs considered are Albania,
Bulgaria, the former area of Czechoslovakia (see footnote 1), Hungary, Poland and
Romania. With the exception of Albania, they are all applicant countries for EU
accession. The EU-15 members are Austria, Belgium-Luxembourg 4, Denmark, France,
Germany, Greece, Ireland, Italy, the Netherlands, Portugal, Spain, Finland, Sweden and the
United Kingdom. Therefore, a world agricultural frontier is estimated using annual
data on 20 countries. Agricultural TFP is measured using a one-output, five-input
frontier. The output is a derived quantity index of agricultural production. The inputs
are fertilisers, labour, land, livestock and machinery, the so-called conventional inputs
for the agricultural sector (see Wiebe, 2003:19).
Results and Conclusions
This paper determines TFP growth in agriculture for several CEECs and the EU-15
members using DEA. Up to now there have been no studies measuring agricultural
productivity for CEECs. The analysis shows that agricultural productivity for the 6-CEECs
grew annually on average by 1.44% over the period considered. Productivity growth for
the 6-CEECs during the transition process was characterised by a positive frontier shift and
by a catching-down due to a worsening in the input use mix. This outcome may
simultaneously reflect the progressive recovery in agricultural output after transition and
the negative consequences for the input use mix of the adjustment process from a centrally
planned to a more free-market-oriented economy. The adjustment in CEECs usually took
either the form of a splitting of previously state-owned farms into smaller private farm
units or a consolidation of small-scale farms into more viable farm sizes (Tonini and
Jongeneel, 2002). Both cases require an adjustment in the optimal allocation of
resources, which for Czechoslovakia and Poland seems not yet to be completed. Figure 1
3 From the FAOSTAT data it was not possible to recover a longer time series for several CEECs. To make the
analysis feasible, we therefore restricted the time series to the period for which information on the variables
used was available for all the countries considered. Even if a longer time series would have been more
informative, the different systems of data collection in CEECs before transition would have played a remarkable
role in the results.
4 The FAO Agrostat database considers both countries jointly.


categorises each country by its annual percentage change in efficiency and in technical
change. The north-eastern quadrant represents the innovative and catching-up
countries; the south-western quadrant represents the worst performing countries.
Figure 1: Annual change in efficiency and technical change (each of the 20 countries plotted
by annual % efficiency change against annual % technical change). Source: own computations 5.

The analysis showed that after transition the 6-CEECs experienced a rather stable growth
rate from 1993 to 1997 and a more volatile pattern from 1997 to 2000. Conversely, the
EU-15 members exhibited a constant growth rate in agriculture over the entire period
(see Figure 2).
Figure 2: TFP growth in agriculture for the 6-CEECs and the EU-15, 1993-94 to 1999-00
(annual indices for the 6-CEECs and the EU-15, together with their deviation).
Source: own computations.

5 The efficiency change and technical change results for Greece overlap in the figure with those of the
Netherlands.


Between the 6-CEECs considered in this analysis and the EU-15 there appears to be a slight
divergence in agricultural TFP growth starting from 1995. Moreover, the 6-CEECs
and the EU-15 members followed different patterns with respect to the MI
decomposition: for the latter, agricultural productivity has been fuelled by improvements
in both efficiency and innovation. From the intertemporal frontier
separation it appears that the 6-CEECs are operating on an inner frontier with respect to
the pooled frontier on which the EU-15 members lie. This means that for a given level
of inputs the EU-15 countries are able to produce more output than the 6-CEECs, confirming
our expectations given the different agricultural systems in place in the CEECs and the
EU-15 member states.
With respect to methodology, the paper combines in a novel way the non-parametric
frontier separation technique with the notion of the intertemporal production set. This
approach reduces potential dimensionality problems in DEA, especially for country-level
analyses, which are usually characterised by fewer observations than more conventional
units (firms, sectors, etc.). The approach preserves the intra-country variation, making
partitioned frontiers feasible.
References
Färe, R., et al. "Productivity growth, technical progress, and efficiency change in
industrialized countries." The American Economic Review 84, no. 1 (1994): 66-83.
Fizel, J. L., and T. S. Nunnikhoven. "Technical efficiency of for-profit and non-profit
nursing homes." Managerial and Decision Economics 13 (1992): 429-439.
Grosskopf, S., and V. Valdmanis. "Measuring hospital performance." Journal of Health
Economics 6 (1987): 89-107.
Tonini, A., and R. Jongeneel (2002). "Dairy farm size restructuring in Poland and
Hungary: quantitative and qualitative analysis", ed. L. Hinners-Tobrägel and J.
Heinrich, Wissenschaftsverlag Vauk Kiel KG, pp. 317-339.
Wiebe, K. "Linking land quality, agricultural productivity, and food security." United
States Department of Agriculture.


To prevent CMA, the main treatment is eradication of the allergen from the diet. But
complete avoidance is almost impossible because many products contain traces of
caseins, such as emulsifiers, extenders, tenderizers and flavours in food, but also nutritional
products, cosmetics and pharmaceutical products. To circumvent this problem, one solution is
the use of hydrolysates. According to the individual's status, different hydrolysates are
available: partial hydrolysates, useful for atopic children (defined as non-allergic but
susceptible to developing an allergy), and extensive hydrolysates, advised for children with a
diagnosed cow's milk allergy. Unfortunately, these hydrolysates are sometimes inefficient
and still trigger allergic reactions. In this case, formulas consisting of elemental
amino acids are available.
As seen above (Figure 1), T cells play an important role in the induction of allergy. They
stimulate the B cells to produce IgE, but are also involved in the delayed inflammatory
responses. A better characterisation of the allergen is necessary for future immuno-
therapeutic modalities.
For this purpose, T cell epitope mapping is performed on an enzymatic hydrolysate, after
fractionation, via a T cell proliferation assay and analysis by mass spectrometry. In parallel,
the IgE binding capacity is analysed by dot blot and degranulation assays.
After hydrolysis and fractionation, a T cell proliferation assay is performed. Briefly, the
allergens (fractions) are incubated with irradiated B cells in order to be processed and
presented to T cells isolated from healthy donors, allergic donors, and donors who have
outgrown their allergy. After incubation, proliferation is determined by measuring the
incorporation of tritiated thymidine via liquid scintillation counting. The positive fractions
are then analysed by mass spectrometry to pinpoint the peptides responsible for the
proliferation of the T cells.
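A common way to call a fraction "positive" in such a proliferation assay is a stimulation index: counts per minute with antigen divided by the unstimulated background, above some cut-off. The paper does not state its criterion, so the SI ≥ 3 threshold and the cpm values in this sketch are purely illustrative assumptions.

```python
# Hypothetical positivity call for proliferation-assay fractions.
# The SI >= 3 cut-off and cpm values are invented for illustration,
# not taken from the study.

def stimulation_index(cpm_antigen, cpm_background):
    """Ratio of antigen-stimulated to background thymidine incorporation."""
    return cpm_antigen / cpm_background

fractions = {"F1": 950, "F2": 4100, "F3": 12300}  # hypothetical cpm per fraction
background = 800                                   # cpm without antigen

positive = [name for name, cpm in fractions.items()
            if stimulation_index(cpm, background) >= 3.0]
```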
Several immunodominant peptides were thus identified within the hydrolysate fractions.
For safety it is also necessary to analyse the binding of the peptides to IgE, in order to avoid
degranulation of the basophils. The hydrolysate fractions are therefore tested in a dot blot to
reveal the fractions that still react with IgE. Briefly, positive allergen
controls (milk, casein, pure alpha-s1 casein), negative controls (HSA, BSA), the total
hydrolysate and the hydrolysate fractions are spotted onto a nitrocellulose membrane. The
membrane is blocked with PBS, 0.5% HSA, 0.1% Tween 20 before being incubated with
serum from healthy or cow's milk allergic patients diluted 1/50. After several washes, the
membranes are incubated with an HRP conjugate before detection staining using the
ECL system.


When PBMC are non-stimulated, a low basal degranulation is observed (4.4%). The
degranulation increases to 56.6% upon stimulation with a non-specific inducer of
degranulation (CL: a cross-linker able to cross-link the receptor FcεRI). After
stripping, degranulation is very low, but as soon as serum from an allergic patient is
added, a huge degranulation is noted without any stimulation (42.9%), which
increases slightly with the non-specific inducer. The same is true for IgE-mediated release.

(Flow cytometry panels, anti-CD63-PE staining: PBMC with no antigen; PBMC with CL;
after stripping; after stripping plus allergic serum with no antigen; after stripping plus
allergic serum with CL.)

B cell epitopes (revealed by IgE recognition) and T cell epitopes appear to be different,
but more patients need to be tested in order to confirm this observation.


Characterization of the phase behaviour of a propanediol-based non-phospholipid derived from a food-grade emulsifier

Ingrid Winter1, Georg Pabst2, Richard Koschuch2, Karl Lohner2, Hans-Gerd M. Janssen1,
Roos Horsten-Jeziorski1, Chris G. van Platerink1, and Sergey M. Melnikov1

1 Foods Research Centre, Unilever R&D Vlaardingen, NL-3133 AT Vlaardingen, The
Netherlands; 2 Institute of Biophysics and X-ray Structure Research, Austrian Academy of
Sciences, A-8042 Graz, Austria
Introduction
Lipids are compounds of dual nature that combine a hydrophilic headgroup and a hydrophobic
tail, the latter mainly consisting of one or two hydrocarbon chains. Due to this amphiphilic
property, lipid molecules self-assemble into ordered macromolecular structures in the
presence of a polar solvent. This means that the molecules align along their long axis; most
frequently they pack into micelles or bilayers, where the polar headgroups shield the
hydrophobic hydrocarbon chains from contact with water. Phospholipids are among the best
known lipids, as they are the major structural elements of biological membranes. They are
also used in various pharmaceutical, cosmetic and food applications, mainly as emulsifying
agents or in the form of vesicles, also called liposomes [1, 2]. Since natural phospholipids
exhibit several drawbacks, such as low stability against oxidation and biodegradation, there
is a growing interest in synthetic mimics of phospholipids. We studied the thermotropic phase
behaviour of a non-phospholipid based on a propanediol backbone, where a lactic acid is
esterified to one OH-group and a fully saturated fatty acid to the other (lactic and fatty acid
ester of propanediol, LFP). A fraction enriched in LFP was derived from a food-grade
emulsifier by simple column chromatography; the chemical structure of its main component
is shown in Fig. 1. The headgroup of the molecule, composed of polymerized lactic acid
moieties, makes it an interesting subject for investigation. In order to explore the potential of
this non-phospholipid for the formation of hydrated lipid mesophases, its miscibility with
bilayer-forming model lipids was also investigated.
Fig. 1. Main component of the investigated non-phospholipid fraction. Both a fully saturated
fatty acid (C18:0, blue) and one or more lactic acid molecules (green) are esterified to the
propanediol backbone (red).

Results and Discussion


Phase behaviour of LFP
Characterization of the thermotropic phase behaviour of the dry lipid was performed by
differential scanning calorimetry (DSC) and small- and wide-angle X-ray diffraction
(SWAX). X-ray diffraction patterns were recorded simultaneously in the small- and wide-
angle regions using a SWAX camera (Hecus X-ray Systems, Graz, Austria) mounted on a
sealed-tube generator (Seifert, Ahrensburg, Germany) operating at 50 kV and 40 mA. Cu
Kα radiation (λ = 1.542 Å) was selected using a Ni filter.
The X-ray diffractograms recorded in the small-angle region are shown in Fig. 2, left
panel. As layered structures like stacked lamellar lipid bilayers give rise to several Bragg
peaks in the ratio 1:2:3, it can be unambiguously concluded that two lamellar lattices with
different repeat distances are formed at lower temperatures. The larger lattice has a lamellar
spacing of 51.2 Å and can be observed at temperatures between 5 and 55 °C, and again after
cooling to 5 °C. The smaller lamellar lattice, with a lamellar spacing of 33.4 Å, can be
observed at 5 and 20 °C.
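The repeat distance follows directly from the Bragg peak positions: with s = 2 sin θ/λ (in 1/Å), the n-th order lamellar reflection sits at s_n = n/d. The sketch below illustrates this; the peak positions are approximate values back-calculated from the spacings quoted in the text, not the measured data.

```python
# Lamellar repeat distance from Bragg peak positions s_n = n/d,
# estimated by least squares over the assumed diffraction orders.
# Peak positions below are approximate, back-calculated from the
# reported 51.3 A and 33.5 A lattices.

def lamellar_spacing(peak_positions):
    """Least-squares d [A] from peaks assumed to be orders 1, 2, 3, ..."""
    # minimising sum_n (s_n - n/d)^2 in 1/d gives d = sum(n^2) / sum(n*s_n)
    num = sum(n * n for n, _ in enumerate(peak_positions, start=1))
    den = sum(n * s for n, s in enumerate(peak_positions, start=1))
    return num / den

d_large = lamellar_spacing([0.0195, 0.0390, 0.0585])  # ~51.3 A lattice
d_small = lamellar_spacing([0.0299, 0.0597])          # ~33.5 A lattice
```

The 1:2:3 spacing of the orders is what identifies the stacks as lamellar in the first place; a fit of this kind also averages out small peak-position errors across the orders.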

Fig. 2. X-ray diffractograms recorded in the small-angle (left panel) and wide-angle (right
panel) regions at 5, 20, 40, 55 and 65 °C and after re-cooling to 5 °C. s = h/(2π) and
h = (4π/λ)sin θ, λ being the wavelength of the X-ray beam and 2θ the scattering angle.
The wide-angle patterns provide information on the lateral hydrocarbon chain packing.
Generally, a single peak located at about 4.2 Å is characteristic of a hexagonally
packed subcell (α-crystalline phase); the occurrence of an additional weak reflection at
about 3.8 Å is caused by pseudohexagonally packed hydrocarbon chains (sub-α-crystalline
phase); and additional Bragg peaks at about 4.6 Å can be assigned to a triclinic crystalline
subcell (β-crystalline phase) [3, 4]. A melted lipid sample in the isotropic liquid phase
exhibits a broad diffuse reflection at about 4.6 Å [5, 6]. The wide-angle diffraction data of
LFP (Fig. 2, right panel) indicate that at 5 °C the hydrocarbon chains are packed in a
pseudohexagonal subcell. With increasing temperature, a rearrangement of the hydrocarbon
chains to hexagonal packing occurs, as indicated by the single symmetric peak observed
at 20 °C. At 40 °C this reflection has vanished, and the smaller lattice in the small-angle
region is replaced by a broad side maximum. The reflections at about 4.6 Å are observed in
all wide-angle diffractograms up to 55 °C. This indicates that part of the sample forms a
β-crystalline phase, which is associated with the larger lamellar lattice observed in the small-
angle diffractogram. Finally, at 65 °C the entire sample is in a state where the positional
correlation of the lamellar lattice is lost, as demonstrated by the absence of sharp Bragg
reflections. Instead, diffuse scattering with a broad side maximum is observed, indicating


the existence of an isotropic liquid phase. Upon cooling to 5 °C within 30 min, all spacings
initially observed occur again.
The calorimetric heating and cooling scans were performed at scan rates of
10 °C/min with a temperature-modulated DSC (Perkin Elmer DSC Pyris 1, PE Instruments,
Shelton, USA). DSC thermograms provide information on the temperatures at which phase
transitions occur and on the heat capacity changes involved. The DSC
scan of LFP (Fig. 3) exhibits three endothermic phase transitions upon heating, in
accordance with the X-ray diffractograms. The transition at 9 °C mirrors the
change from the sub-α-crystalline phase to the α-crystalline phase, which melts to an
isotropic liquid phase at 40 °C. The peak at 53 °C has been assigned to the melting of a small
amount of the highly stable β-crystalline phase present in the sample.
Fig. 3. DSC heating scan of LFP. Inserts show the modes of crystalline hydrocarbon chain
packing (triclinic, pseudohexagonal, hexagonal; according to [7]) as revealed by X-ray
wide-angle diffraction; arrows indicate the corresponding temperature ranges. A: transition
from the sub-α-crystalline to the α-crystalline phase; B: transition from the α-crystalline
phase to the isotropic melt; C: transition from the β-crystalline phase to the isotropic melt.

Miscibility with colipids


Binary lipid mixtures were assembled in excess water by mixing LFP with one of the
colipids dihexadecyl phosphate (DHP), dihexadecyldimethylammonium bromide
(DHDAB) or didodecyldimethylammonium bromide (DDAB), and measured with DSC
(data not shown). Poor miscibility of LFP was observed with DDAB, most probably due to
the large difference in hydrocarbon chain length (C18:0 vs. C12:0). The poor miscibility found
in mixtures with DHP is likely to arise from the difference in headgroup polarity, which is
large for the negatively charged headgroup of DHP but rather small for the lactylated
headgroup of LFP. DSC revealed good miscibility of LFP with the positively charged
DHDAB. In the DHDAB headgroup the positive charge is shielded by the methyl groups,
which obviously reduces the effective polarity compared to DHP. In order to corroborate
these findings, SWAX measurements were performed on the binary lipid mixtures. The
diffraction patterns of all measured LFP/DHDAB mixtures resemble a continuous scattering
curve typical of vesicle scattering, in contrast to the diffractograms of the single lipid
components (data not shown).
The continuous scattering curve of LFP/DHDAB (60/40, w/w) was further analyzed.
Fig. 4 depicts the diffuse diffraction pattern. It was fitted applying a model for the electron
density profile consisting of two Gaussians representing the electron dense headgroups and


one Gaussian with negative amplitude at the center of the bilayer [8]. The distance between
the Gaussian headgroup peaks is dHH = 34 Å. The membrane thickness dB = dHH + 4sH = 51 Å,
where sH is the width of the headgroup Gaussian. By applying this model fit, we could
demonstrate that this lipid mixture indeed forms unilamellar vesicles. Wide-angle data (not
shown) indicate that the vesicles are in the gel phase at ambient temperature.
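The three-Gaussian electron-density model of ref. [8] can be sketched as follows. The headgroup position and width reproduce the reported dHH = 34 Å and dB = dHH + 4sH = 51 Å; the amplitudes and the chain-region width are illustrative assumptions, not the fitted values.

```python
import numpy as np

# Symmetric three-Gaussian electron-density model of a bilayer:
# two positive Gaussians for the headgroups at +/- z_H and one negative
# Gaussian for the hydrocarbon chain region at the bilayer centre.
# z_H and s_H match the reported d_HH and d_B; rho_C and s_C are
# illustrative placeholders.

def electron_density(z, z_H=17.0, s_H=4.25, rho_C=-1.0, s_C=6.0):
    head = (np.exp(-(z - z_H)**2 / (2 * s_H**2))
            + np.exp(-(z + z_H)**2 / (2 * s_H**2)))
    chain = rho_C * np.exp(-z**2 / (2 * s_C**2))
    return head + chain

z = np.linspace(-30, 30, 601)     # distance from bilayer centre [A]
rho = electron_density(z)

d_HH = 2 * 17.0                   # headgroup peak-to-peak distance [A]
d_B = d_HH + 4 * 4.25             # membrane thickness convention [A]
```

The fit to the measured curve proceeds by Fourier-transforming this profile into a bilayer form factor and comparing the resulting diffuse intensity with the data; the sketch only reproduces the real-space profile and the thickness bookkeeping.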

1000
800
600

0.0
-0.5
-1.0

400

-1.5

200

-2.0
-2.5

0
0.0

0.1

0.2

0.3

0.4

0.5

-30

0.6

q [-1]

-20

-10

z []

10

20

30

Fig. 4. Left panel: Small-angle X-ray diffraction pattern of the mixture LFP/DHDAB
(60/40) at 20 C (black dots) and best fit (red line), where q = s(2). Right panel: Electron
density profile. The geometric center of the profile is located at the origin and the highdensity peaks correspond to the lipid polar headgroups (see scheme).
Conclusions
The study of the phase behaviour of LFP revealed that a stable β-crystalline phase is
formed over a broad temperature range. It was confirmed that the formation of binary lipid
mixtures depends on both the chain length and the headgroup characteristics of the colipid.
Full miscibility was only detected for LFP and DHDAB, for which we could prove self-
assembly into vesicular structures at a certain lipid ratio. We conclude that the promising
features of LFP might be exploited for the design of novel food formats, as well as for
advanced applications where tailored lipid mesophases with tuned interface characteristics
are needed.
References
[1] Bangham, A.D.; Standish, M.M.; Watkins, J.C. J. Mol. Biol. 1965, 13, 238.
[2] Lasic, D.D. Liposomes: from physics to applications; Elsevier: Amsterdam, 1993.
[3] Sato, K.; Ueno, S.; Yano, J. Prog. Lipid Res. 1999, 38, 91.
[4] Kodali, D.R.; Fahey, D.A.; Small, D.M. Biochemistry 1990, 29, 10771.
[5] Tardieu, A.; Luzzati, V.; Reman, F.C. J. Mol. Biol. 1973, 75, 711.
[6] Larsson, K. Acta Chem. Scand. 1966, 20, 2255.
[7] Small, D.M. In Handbook of lipid research; Hanahan, D., Ed.; Plenum Press: New York,
1986; Vol. 4.
[8] Pabst, G.; Koschuch, R.; Pozo-Navas, B.; Rappolt, M.; Lohner, K.; Laggner, P. J. Appl.
Cryst. 2003, 36, 1378.


Designing market-oriented environmental policy instruments: The case of Tradable Green Certificates

Klaus Vogstad, CML, Leiden University; klausv@stud.ntnu.no
Note: For a complete description, see www.stud.ntnu.no/~klausv/publications/TGC2003.pdf

Introduction

The liberalisation of markets previously under regulatory control requires new instruments
for environmental policy making, because subsidies and regulatory intervention do not
conform to trans-national, liberalised markets. This is the case for newly liberalised
electricity markets. An arrangement of Tradable Green Certificates (TGC) as a market-based
subsidy for renewable energy has been proposed in several countries and already
implemented in a few. However, the introduction of TGCs has been postponed and delayed,
mainly due to the uncertainties involved for suppliers of renewables. Several studies have
been undertaken using static comparative economic analysis and partial equilibrium models.
Few of these analyses address the dynamic price formation process or the mechanisms
that are important in the design of a well-working, stable market. To analyse the stability
of a TGC market, we construct a system dynamics model of the TGC market coupled with
the Nordic electricity market (Nord Pool). A set of trading strategies for the participants
under various market designs is examined. These trading strategies were deduced from
laboratory experiments.

The principle of tradable green certificates

The main purpose of tradable green certificates is to increase the share of renewable generation at minimum cost.
Figure 1. TGC markets as an environmental instrument in electricity markets. TGCs are financial assets that can be traded independently of electricity generation. The value of a certificate reflects the cost of providing the additional amount of new renewables needed to fulfil the obligation. [Figure: schematic showing the TGC market and the TGC obligation (in % of sales) alongside the futures market, spot market, balance market, metering and physical transmission, linking suppliers, wholesalers/distributors and consumers.]
TGCs are financial assets issued to producers of certified green electricity and can be regarded as a market-based environmental subsidy. An issuing body (IB) issues green certificates at the moment a producer registers the production of actual green electricity. The certificates are later withdrawn from circulation when customers account for their obligations by presenting the certificates to the registration authority, or when the certificates' period of validity expires. Between issue and withdrawal, the certificates are accounted for and can be traded. The certificates thus function as an accounting system that measures the amount of electricity produced from renewable energy sources.
Figure 1 shows in principle how a TGC market will work within the Scandinavian electricity market (Nord Pool). In the Nord Pool market, electricity and its derivatives are traded in double-auction markets. The spot market is used for hourly production scheduling.


The balance market coordinates short-term regulation. Futures contracts are used for electricity trading up to three years ahead, and hence for long-term production and investment planning. A TGC market values the environmental benefit of renewables as a service. The authorities define a mandatory share of demand for renewable generation, and the TGC market then finds the price needed to reach this target.

A system dynamics model of the TGC market

In the electricity spot market, generation can easily be adjusted to respond to price changes. This is not the case in the TGC market, where renewables cannot control generation in the short term. Due to the intermittency of renewable sources such as wind and small-scale hydro, the supply of renewables can vary considerably from year to year, which will
cause large price fluctuations, as depicted in Figure 2a. To circumvent this problem, banking and/or borrowing has been proposed (see Figure 2b).

Figure 2. Price stability of TGC markets. (a) Price instability from the intermittency of renewables: supply in a windy year versus a calm year against demand, with the TGC price swinging between the minimum and maximum price around the long-run marginal cost (LRMC) of renewables. (b) Banking and borrowing increase price elasticity. (c) Dynamic representation of the TGC market with possibilities for banking/borrowing and trading strategies: the TGC target (in % of electricity demand) sets TGC demand; the TGC price and spot market price feed a profitability assessment that drives acquisition of new capacity (a balancing loop); certificates issued per unit of generation accumulate as TGC volume and are bought and sold according to purchase and sales strategies.

Banking allows buyers and sellers to store certificates in, for instance, windy years and sell them during calm years. This will increase the price elasticity (flexibility) of the supply curve. Borrowing means that
TGC obligations can be postponed into the future by buying more certificates later on. This arrangement increases the price elasticity of demand. The Swedish TGC market, however, allows only unlimited banking. But by allowing banking, it is also possible to withhold certificates, and by doing so market prices can rise well above equilibrium prices. To analyse the performance of various market designs, a system dynamics model was constructed. The main variables and their interrelationships are shown in Figure 2c. The TGC market is hereby given a dynamic, continuous representation, in contrast with the standard demand-supply curves. On the supply side, the TGC price and the spot market price are the basis for the profitability assessment of new capacity. Capacity acquisition


involves time delays of approximately 2-3 years. When the capacity comes on line, the physical electricity is sold in the spot market, while the green value of renewables is rewarded through the issuing of TGC certificates for each MWh of electricity generated. The certificates can then be sold in the TGC market, or stored for later sale. Consumers must buy a certain share of their electricity consumption from renewables; the TGC obligation is defined as a gradually increasing yearly share of electricity consumption. Allowing unlimited storage of certificates makes it possible to adopt several trading strategies. This TGC market has the same characteristics as a financial asset market, and suggestions for trading strategies can be found in the finance literature. Another approach is that of experimental economics, where controlled laboratory experiments enable us to study trading behaviour by direct observation. The system dynamics approach can accommodate both approaches.
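The stock-and-flow structure described above can be sketched as a minimal discrete-time simulation. This is a hypothetical toy model, not the authors' actual implementation; all parameter values (price cap, LRMC, obligation growth, investment response) are illustrative assumptions.

```python
import random

random.seed(1)

# Toy sketch of the Figure 2c loop: certificate bank, price adjustment,
# and delayed capacity acquisition. All numbers are illustrative.
YEARS = 20
PRICE_CAP, PRICE_FLOOR = 300.0, 0.0    # NOK/MWh (assumed)
LRMC = 150.0                           # long-run marginal cost of renewables
BUILD_DELAY = 3                        # capacity acquisition delay, years

capacity = 2.0                         # certified renewable output, TWh/yr
pipeline = [0.0] * BUILD_DELAY         # capacity under construction
bank = 0.0                             # banked certificates, TWh
price = LRMC

for year in range(YEARS):
    obligation = 2.0 + 0.5 * year            # growing TGC obligation, TWh
    weather = random.uniform(0.7, 1.3)        # windy vs calm years
    issued = capacity * weather               # certificates issued

    # Unlimited banking: surplus is stored, shortage is drawn from the bank
    available = issued + bank
    redeemed = min(obligation, available)
    bank = available - redeemed

    # Simple anchoring-and-adjustment price rule, bounded by the price cap
    coverage = available / obligation
    price = min(PRICE_CAP, max(PRICE_FLOOR, price * (2.0 - coverage)))

    # Capacity acquisition balancing loop: invest when price exceeds LRMC;
    # new capacity comes on line only after the construction delay
    pipeline.append(max(0.0, 0.5 * (price - LRMC) / LRMC))
    capacity += pipeline.pop(0)

print(f"final price {price:.0f} NOK/MWh, capacity {capacity:.1f} TWh/yr")
```

Even this crude sketch reproduces the qualitative point of the paper: with intermittent supply and a multi-year investment delay, the price tends to swing between the floor and the cap rather than settle at the LRMC.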

Revealing trading strategies through experimental economics

The system dynamics TGC market model (Figure 2c) includes three main decision rules (policies/strategies), namely an investment strategy, a purchase strategy and a sales strategy. By converting the model into an interactive simulation game and letting the players make the buy/sell decisions in the TGC market, it is possible to derive their decision policies through controlled experiments, as well as the impact of various market designs on trading strategies. Figure 3a shows the laboratory experiment, in which buyers and sellers traded certificates through a network simulation game.

Figure 3. Laboratory experiment and the resulting price development for a market allowing unlimited banking (no borrowing). [Figure: TGC price over 20 years, plotted together with the maximum and minimum price.]

Figure 3b shows the resulting price formation of our experiment, simulated over a period of 20 years. The TGC prices started at
equilibrium, but increased rapidly towards the price cap. The price was kept at the price cap for several years until the market crashed and the price dropped far below the theoretical equilibrium price. Obviously this market does not converge towards the equilibrium price. A more comprehensive study undertaken by ECN in the ReCERT project shows the same mode of behaviour if unlimited banking is allowed. If borrowing is introduced, however, the effect is reduced. In fact, if only borrowing is allowed, prices converge towards the theoretical equilibrium price and hence the market is efficient.

Modelling trading strategies

On the basis of the laboratory experiments, we surveyed the finance literature for possible candidates for trading strategies. Value traders make subjective evaluations of the fundamental value of the asset and believe that the market will sooner or later adjust to this value. They attempt to make profits by selling if they think the market is overpriced and buying if the market is underpriced. This kind of behaviour forms a negative feedback loop that causes prices to converge towards equilibrium. Trend followers believe that the market has some inertia that can be exploited. A seller would then hold his position of TGCs while the trend is positive, and sell when the trend peaks or becomes negative. This kind of behaviour forms a positive feedback loop: a positive price trend causes sellers to withhold their certificates because they think it will become more profitable to sell later. As a result, prices increase further, since fewer traders are willing to sell.
A mixture of these two strategies was implemented in the computer model, where traders evaluated both the trend and the fundamentals. Renewable generation was given a stochastic representation based on historical data of wind velocities (Figure 4e). The resulting price formation for various market designs is displayed in Figure 4a-c. The price rise and the subsequent market crash that were observed in the laboratory experiment are replicated in the simulations with unlimited banking and no borrowing.
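The mixture of value trading and trend following described above can be written as a single decision rule. The sketch below is a hypothetical illustration, not the authors' implementation; the weighting, thresholds and clipping are invented for clarity.

```python
def sell_fraction(price, prev_price, fundamental, w_trend=0.5):
    """Fraction of held certificates a trader offers for sale this period.

    Value component (negative feedback): sell more when the price is above
    the trader's estimate of fundamental value.
    Trend component (positive feedback): withhold while the price trend is
    positive, sell when the trend peaks or turns negative.
    w_trend mixes the two behaviours (an assumed parameter).
    """
    value_signal = (price - fundamental) / fundamental   # > 0: overpriced
    trend_signal = (price - prev_price) / max(prev_price, 1e-9)

    trend_sell = 1.0 if trend_signal <= 0 else 0.0       # hold on an uptrend
    value_sell = min(1.0, max(0.0, 0.5 + value_signal))  # clip to [0, 1]
    return (1 - w_trend) * value_sell + w_trend * trend_sell

# Rising price above fundamentals: trend followers withhold, value traders sell
print(sell_fraction(price=180, prev_price=160, fundamental=150))
# Falling price below fundamentals: the trend component now pushes sales up
print(sell_fraction(price=120, prev_price=140, fundamental=150))
```

With the trend weight set to zero the rule reduces to pure value trading and prices converge; raising the weight reproduces the withhold-then-crash pattern described in the text.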
Figure 4. Simulated TGC price formation (high, 75th percentile, average, 25th percentile and low; NOK/MWh, 2000-2020) for various market designs: (a) 1-year certificate lifetime, no borrowing, no trend strategy; (b) unlimited banking, no borrowing, no trend strategy; (c) unlimited banking, no borrowing, trend strategy; (d) unlimited banking, 50% borrowing.

Conclusion

The trading strategies implemented provide a possible explanation for the experimental laboratory studies. Since this study was undertaken, results from the Swedish TGC market have begun to appear, and they show the same kind of behaviour as predicted by the laboratory experiment and the system dynamics simulations. A TGC market with unlimited banking creates an initial market asymmetry by allowing sellers to withhold certificates, while the buyers must fulfil obligations each year to avoid paying the penalty price. If sellers adopt trend-following strategies, an initial shortage or years with low renewable generation create positive price trends, triggering trend followers to withhold certificates until the price reaches the price cap. High prices stimulate investors to add more capacity, and eventually new capacity comes on line. The new capacity, plus the TGCs that were withheld, causes the market to crash well below the theoretical equilibrium price. If, however, borrowing is allowed, this asymmetry is compensated by giving buyers the possibility to postpone their obligations.


European research and Marie Curie Fellowship program contributing to the Barcelona R&D objective
Y.B.Udalov
Faculty for Technical Sciences, University of Twente, P.O. Box 217,
7500 AE Enschede, The Netherlands.
Higher education, research and efficient innovation are considered a key to the growth of the European economy. Although the formation of a single European market and a single European currency gives our continent good chances for fast and efficient development, these chances have not been realized yet. On the contrary, the European economy does not perform as well as it did, for example, in the sixties. Failure to transform into an innovation-driven economy has resulted in the unsatisfactory performance of most European economies during the last decades.
Most people do not realize how alarming the situation is now. While Europeans are slowing down, new powerful players are appearing on the horizon. Recently, a report analyzing possible trends in the economic growth of four countries, Brazil, Russia, India and China (BRIC), was published by Goldman Sachs analysts [1]. The results presented in [1] are startling.

"At the moment, six existing developed economies (the G6) have GDP of more than $1 trillion (588 billion): the US, Japan, UK, Germany, France, and Italy. If things go right, in less than 40 years, the BRICs economies together could be larger than the G6 in US dollar terms. By 2025 they could account for over half the size of the G6. Currently they are worth less than 15%. Of the whole G6, only the US and Japan may be among the six largest economies in US dollar terms in 2050." [1]

As was rightly commented in The Times a few days after the publication of this report: "We are already starting to feel the tremors from these tectonic transformations and the shockwaves can only become greater" [2].
Some people have the illusion that the progress of mankind is a linear function of time, and that it always grows. Unfortunately, this is not the case; at least, it is not always the case. If one looks at the level of prosperity and the efficiency of the medical, civil and legal systems reached in ancient Rome in the middle of the fourth century A.D., and compares it to the level of European prosperity six centuries later, one will be surprised. Half a millennium later, the level of social and economic development was only 10% (sic!) of that achieved in Rome [3]. The first European city to reach Roman standards of living was London, and that happened in the XVII century [3].
People must be aware of history.
For Europe, the only chance to avoid marginalization in the XXI century is to boost its R&D, to restore the previously existing high level of education, starting from basic schools and finishing with academic education, and to get the most from the innovation potential that is still left in Europe.
This goal was set during the EU meetings in Lisbon and in Barcelona. The so-called Barcelona objective aims at an ambitious goal: EU member countries should allocate 3% of GDP to research and development. Of this amount, the private sector should provide 2%, and 1% should come from the public sector.
The target might look ambitious, and some skeptics question whether it can be reached. The current level of investment in European R&D varies from over 3% of GDP in Sweden and Finland to 0.68% in Greece. To meet the objective, research expenditures have to rise by 8% every year, entailing a 6% increase in public funding and a 9% increase in business funding. For a country like Slovenia, the latter means a 320% increase in business spending, a figure that can hardly be reached.
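The 8% figure quoted above can be checked with simple compound-growth arithmetic. The starting R&D intensity of roughly 1.9% of GDP (an approximate EU average) and the 2% annual GDP growth are assumptions used here purely for illustration:

```python
# Required annual growth to raise R&D spending from an assumed ~1.9% of GDP
# to the 3% Barcelona target over 8 years, with GDP itself growing ~2%/year.
start_share, target_share, years, gdp_growth = 0.019, 0.03, 8, 0.02

# Growth needed in the GDP *share*, then in absolute spending
share_growth = (target_share / start_share) ** (1 / years) - 1
spending_growth = (1 + share_growth) * (1 + gdp_growth) - 1
print(f"required annual growth in R&D spending: {spending_growth:.1%}")
```

Under these assumptions the required growth comes out at about 8% per year, consistent with the figure cited in the text.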
Funding is not the only bottleneck here. In order to conduct R&D on this scale, an extra 700,000 scientists and technical personnel would have to be educated, trained and employed. Where will they come from? It is no secret that the popularity of higher education, especially in the technical and exact sciences, is not that high any more. The prestige of scientists in society is still rather high, according to European opinion polls (only doctors score better). However, the effort needed to succeed in science is often considered too great.
After World War II, science was to some extent industrialized, and the changes that took place were not always positive. It is hard to stimulate the creativity of somebody who is considered to be a tiny part of a big scientific machine. In the book "Nobel Dreams: Power, Deceit and the Ultimate Experiment" by Gary Taubes, Carlo Rubbia is quoted as having said: "Physicists are like lemons. I squeeze them until all the juice is gone, and then I toss them aside" [4]. This dreadful remark is one you might expect to hear from the Enron-style leaders of big multinationals, not from an enlightened European professor.
For those interested in an academic career, the chances of success are small. For example, in the Netherlands only 10% of post-docs can count on a permanent position in academic research. There is a clear contradiction here: the number of researchers shrinks, while we expect just the opposite. People have begun to talk about "disposable scientists", who are used while they are young and put aside after three or four post-doc terms [5].
If we indeed want to achieve the Barcelona objective, this attitude has to change. But even that will not be enough: a comprehensive set of measures is required in order to accelerate and qualitatively improve European R&D. The instruments of the Marie Curie programme can provide an important contribution here. The fellowship grants allow scientists from the new candidate states, but also from outside the EU, to realize their ideas on European ground. This will reduce the shortage of skilled scientists and engineers. Along with other economic and political measures [6], it will help us to preserve the European cultural heritage and to stimulate the economic growth of Europe.
References
1. D. Wilson, R. Purushothaman, "Dreaming With BRICs: The Path to 2050", Global Economics Paper No. 99, Goldman Sachs Research, 24 pp., October 1, 2003.
2. G. Duncan, The Times, October 6, 2003.
3. J. Zakowski, "Nowa rewolucja, nowe sredniowiecze. Rozmowa z Lesterem C. Thurowem", Gazeta Wyborcza, 27-28 September 1997.
4. G. Taubes, "Nobel Dreams: Power, Deceit, and the Ultimate Experiment", Random House, New York, xxiv + 261 pp., 1986.
5. M. Pomp, R. Venniker and M. Canoy, "Prikkel de prof. Een analyse van de bekostiging van universitair onderzoek", ISBN 90-5833-136-9, CPB Document 36, 67 pp., October 2003.
6. Y.B. Udalov, "New science for United Europe", to be published (2004).


POSTER PRESENTATIONS

Epidemiology and function of the variant esp gene in Enterococcus faecium
Katrine Borgen1*, Janetta Top2, Helen Leavis2, Marc J. M. Bonten2 and Rob J. L. Willems2
1 National Institute of Public Health and the Environment (RIVM), Bilthoven, The Netherlands
2 Utrecht Medical Centre (UMC), Utrecht, The Netherlands
* Marie Curie Fellow, Contract no. QLK2-CT-2001-50991

Background
Enterococcus faecium is a Gram-positive bacterium of the genus Enterococcus. Traditionally, the enterococci have been recognised as regular intestinal inhabitants, which serve as indicators of faecal contamination of food and drinking water and are therefore of importance in food and public health microbiology. Certain enterococcal strains are used as probiotics to preserve or maintain a normal intestinal flora in both humans and animals, and enterococci are involved in various food fermentation processes, for example cheese making.
In the last decades, the enterococci have received increasing attention as potential pathogens causing hospital- and community-acquired infections. One important reason for the increasing problem with enterococcal infections is the remarkable ability of these bacteria to withstand antibiotic treatment. Enterococci harbour intrinsic mechanisms of resistance and can easily acquire resistance determinants from other bacteria through horizontal gene transfer, often resulting in multiresistant isolates causing disease. Enterococci most commonly cause urinary tract infections, endocarditis and wound infections. Systemic infections, mainly occurring in immunocompromised patients, have a high mortality rate, as the patients often suffer from other underlying diseases (1).
The increased importance of enterococci as hospital-acquired (nosocomial) pathogens has evoked interest in enterococcal pathogenesis. How do these bacteria interact with the host, and what are the factors that turn normal-flora bacteria into life-threatening pathogens? Compared to the closely related Enterococcus faecalis, only a few virulence factors have been identified in E. faecium. The esp gene encoding the enterococcal surface protein (Esp) was first described as a potential virulence factor in E. faecalis (2), and a variant of the esp gene was later also described in E. faecium (3). The aim of our work was to characterise the variant esp gene in E. faecium and examine the function of its product, the Esp protein. Furthermore, we wanted to investigate the distribution of the variant esp gene among E. faecium isolated from various hospital outbreaks and human infections, as well as from environmental and animal sources.


Characterisation of the variant esp gene of E. faecium


One E. faecium isolate (E300), recovered during a hospital outbreak, was used to characterise the variant esp gene in E. faecium. Using PCR and DNA sequence analysis, one open reading frame of 5703 nucleotides was revealed, which is predicted to encode a polypeptide with a calculated molecular mass of ~205 kDa. The deduced amino acid sequence of the E. faecium Esp protein revealed a high degree of similarity with the E. faecalis Esp protein. The E. faecium E300 Esp is predicted to be synthesised as a precursor with a signal peptide that precedes an N-terminal region, a central repeat region, and a C-terminal domain (Fig. 1).
Remarkably, the first 23 amino acid residues of the processed protein of E300 are highly different from those of the E. faecalis Esp. The number of A, B and C repeats differs from that of the E. faecalis esp, and it also varies between different E. faecium isolates; however, the structure of the repeats is highly similar to that of the E. faecalis Esp repeats. The C-terminal domain contains a membrane-spanning hydrophobic region, the YPKTGE cell wall anchor motif, and a charged tail presumably extending into the cytoplasm. The overall similarity of the E. faecium E300 Esp to the E. faecalis Esp, disregarding the number of repeats, is 92% (4).

Figure 1. Schematic illustration of the Esp protein in E. faecium (E300), which shows high homology with the Esp protein in E. faecalis. Signal sequence (S), N-terminal region (N), repeat region (R, with repeats A1-A5, B1-B2 and C1-C5) and C-terminal region (C). The size in number of amino acids is indicated. YPKTGE depicts the anchor motif in the C-terminal region (4).

The variant esp gene in E. faecium is located in a putative pathogenicity island


It was recently reported that the E. faecalis esp gene is part of a 150-kilobase cluster of genes involved in virulence, a so-called pathogenicity island (PAI) (5). In isolate E300, an ~15 kb genomic region was characterised by PCR and sequence analysis. The results revealed six open reading frames (ORFs) flanking the variant esp gene, all representing putative genes implicated in virulence, regulation of transcription and antibiotic resistance. Analysis of ~100 isolates from various human and animal sources by oligo-hybridisation showed an invariable association between the presence or absence of esp and of the other six ORFs. This indicates that the E. faecium variant esp is also part of a distinct genetic element that constitutes a novel enterococcal pathogenicity island (Fig. 2) (4).


[Figure: gene map (scale in bp) showing esp flanked by orf1-orf7, with putative functions including a Uve2-like gene, araC, nox, a muramidase, a phage-related gene and a permease.]
Figure 2. Schematic illustration of the putative pathogenicity island (PAI) in E. faecium (E300), showing esp flanked by six open reading frames (ORFs) representing putative genes implicated in virulence, regulation of transcription and antibiotic resistance (4).

Epidemiology and spread of the variant esp gene in E. faecium


In E. faecium, esp has been described almost exclusively in isolates related to human infections or hospital outbreaks, whereas in E. faecalis, esp has also been described in isolates from various animal and human sources. We performed multilocus sequence typing (MLST) analysis to investigate the genetic relationship between a selection of ~500 E. faecium isolates from various human and animal sources. Clustering was performed using the minimal spanning tree and complete linkage algorithms (BioNumerics, version 3.5; Applied Maths, Sint-Martens-Latem, Belgium). The results revealed the existence of distinct genetic lineages. Most isolates associated with hospital outbreaks and human infections clustered in a specific genetic lineage, clonal complex 17 (CC-17). Interestingly, the esp gene as well as ampicillin resistance were almost exclusively found among isolates in this clinically relevant lineage: 55% of the isolates in CC-17 harbour esp, as opposed to only 1% of all the other isolates analysed.
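The minimal spanning tree clustering used here operates on allelic profiles: each sequence type is a tuple of allele numbers at the MLST loci, types are linked by the number of differing loci, and the closest types are joined first. A toy sketch of the idea (Prim's algorithm; the four profiles below are invented for illustration, not real E. faecium data):

```python
def hamming(a, b):
    """Number of MLST loci at which two allelic profiles differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_spanning_tree(profiles):
    """Prim's algorithm: returns MST edges as (parent, child, distance)."""
    in_tree = {0}
    edges = []
    while len(in_tree) < len(profiles):
        # Cheapest edge from the growing tree to any type not yet in it
        best = min(
            ((i, j, hamming(profiles[i], profiles[j]))
             for i in in_tree
             for j in range(len(profiles)) if j not in in_tree),
            key=lambda e: e[2],
        )
        in_tree.add(best[1])
        edges.append(best)
    return edges

# Invented 7-locus allelic profiles for four hypothetical sequence types
sts = [
    (1, 1, 1, 1, 1, 1, 1),   # ST-A
    (1, 1, 2, 1, 1, 1, 1),   # ST-B: single-locus variant of ST-A
    (1, 1, 2, 1, 3, 1, 1),   # ST-C: single-locus variant of ST-B
    (5, 4, 2, 6, 3, 2, 4),   # ST-D: unrelated type
]
for parent, child, d in minimum_spanning_tree(sts):
    print(f"ST-{'ABCD'[parent]} -- ST-{'ABCD'[child]}  ({d} loci differ)")
```

Single-locus variants end up directly linked (a clonal complex such as CC-17), while the unrelated type attaches only through a long edge.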

Figure 3. Minimal spanning tree showing the distribution of esp-positive and esp-negative isolates in a selection of ~500 E. faecium isolates of various origins. Clonal complex 17 (CC-17) mainly contains epidemic isolates. Each circle represents multiple isolates with the same sequence type (ST) as determined by MLST.


Function of the variant E. faecium enterococcal surface protein (Esp)


In E. faecalis, Esp is expressed on the surface of the bacteria and is thought to be an adhesin associated with biofilm formation and colonisation of the urinary tract epithelium (6, 7). Given the high degree of structural similarity between esp in E. faecalis and in E. faecium, a functional similarity is likely to exist. So far, limited knowledge exists about the function of Esp in E. faecium, mainly due to difficulties in constructing an esp-knockout mutant. We have, however, performed some initial functional studies in which unrelated esp-positive and esp-negative isolates were compared.
An initial biofilm assay was performed on ~200 isolates (100 esp-positive and 100 esp-negative isolates from the selection of ~500 genotyped E. faecium), and the preliminary results show that more esp-positive isolates attach to a polystyrene surface than esp-negative isolates. Furthermore, cell adhesion experiments are currently being performed to investigate the capacity of esp-positive and esp-negative E. faecium isolates to adhere to human intestinal and bladder epithelial cells. More experiments will have to be carried out before conclusions can be drawn from these studies.
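Whether esp-positive isolates attach more often than esp-negative ones can be assessed with a standard two-proportion z-test on the assay counts. The counts below are invented placeholders for illustration, not the study's data:

```python
import math

def two_proportion_z(pos_success, pos_total, neg_success, neg_total):
    """z statistic for H0: equal attachment proportions in the two groups."""
    p1, p2 = pos_success / pos_total, neg_success / neg_total
    pooled = (pos_success + neg_success) / (pos_total + neg_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / pos_total + 1 / neg_total))
    return (p1 - p2) / se

# Hypothetical counts of isolates forming biofilm on polystyrene
z = two_proportion_z(70, 100, 40, 100)
print(f"z = {z:.2f}")   # |z| > 1.96 would be significant at the 5% level
```

With roughly 100 isolates per group, as in the assay described above, even moderate differences in attachment frequency yield a decisive test statistic.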
Conclusion
The variant esp gene is a potential virulence factor in E. faecium. It is only present in a
distinct genetic lineage of isolates related to hospital infections and outbreaks, it is
associated with ampicillin resistance and it is located on a novel pathogenicity island.
Preliminary results show that, also in E. faecium, Esp is involved in bacterial attachment
to surfaces, which is the initial step in biofilm formation. This might represent an
important mechanism of survival and persistence of E. faecium in the hospital
environment. Work is currently being performed to construct an esp-knockout mutant
and to further study the functional aspects of the variant esp gene, in vitro as well as in
vivo.
References
1. Gilmore, M. S., D. B. Clewell, P. Courvalin, G. M. Dunny, B. E. Murray, L. B. Rice (Ed.). 2002.
The Enterococci: Pathogenesis, Molecular Biology and Antibiotic Resistance. American Society for
Microbiology, Washington, D.C.
2. Shankar, V., A. S. Baghdayan, M. M. Huycke, G. Lindahl, and M. S. Gilmore. 1999. Infection-derived Enterococcus faecalis strains are enriched in esp, a gene encoding a novel surface protein. Infect. Immun. 67:193-200.
3. Willems, R. J., W. Homan, J. Top, M. van Santen-Verheuvel, D. Tribe, X. Manzioros, C. Gaillard, C. M. Vandenbroucke-Grauls, E. M. Mascini, E. van Kregten, J. D. van Embden, and M. J. Bonten. 2001. Variant esp gene as a marker of a distinct genetic lineage of vancomycin-resistant Enterococcus faecium spreading in hospitals. Lancet 357:853-855.
4. Leavis, H., J. Top, N. Shankar, K. Borgen, M. J. Bonten, J. D. van Embden, and R. J. Willems. 2003. A novel enterococcal putative pathogenicity island linked to the esp virulence gene of Enterococcus faecium associated with epidemicity. J. Bacteriol., accepted.
5. Shankar, N., A. S. Baghdayan, and M. S. Gilmore. 2002. Modulation of virulence within a
pathogenicity island in vancomycin-resistant Enterococcus faecalis. Nature. 417:746-750.
6. Shankar, N., C. V. Lockatell, A. S. Baghdayan, C. Drachenberg, M. S. Gilmore, and D. E.
Johnson. 2001. Role of Enterococcus faecalis surface protein Esp in the pathogenesis of ascending
urinary tract infection. Infect. Immun. 69: 4366-4372.
7. Toledo Arana, A., J. Valle, C. Solano, M. J. Arrizubieta, C. Cucarella, M. Lamata, B.
Amorena, J. Leiva, J. R. Penades, and I. Lasa. 2001. The enterococcal surface protein, Esp, is
involved in Enterococcus faecalis biofilm formation. Appl. Environ. Microbiol. 67: 4538-4545.


A numerical study of the phase behavior of protein solutions


Georgios C. Boulougouris, Daan Frenkel
FOM Institute for Atomic and Molecular Physics, Kruislaan 407, 1098 SJ Amsterdam,
The Netherlands
Introduction:
Despite the complexity of the interactions occurring in protein solutions, it has long been argued that the presence of short-range interactions is responsible for the metastable 'liquid-liquid' equilibrium [1,2,3], identified by a number of experimental studies [4,5] as an important intermediate in protein crystallization. It is well established that there is a surprising analogy between the statistical behavior of complex systems like globular protein solutions and that of simple fluids. The statistical thermodynamic properties of complex solutions can be derived in the same way as for atomic systems, by treating the solvent as a continuous background that exerts fluctuating forces [6,7]. In this description, the metastable 'liquid-liquid' equilibrium in globular protein solutions corresponds to a metastable vapor-liquid equilibrium of particles with short-ranged attraction potentials. Numerical studies of spherical particles with a short-range attraction have revealed that, in the vicinity of a liquid-liquid critical point below the freezing curve, the barrier to crystal nucleation becomes drastically lower [8]. This numerical prediction has subsequently been supported by theoretical analyses [9,10], while experiments on the crystallization of globular proteins [11] do indeed provide evidence for a peak in the crystal nucleation rate as the metastable binodal is approached.
Our work focuses on the accurate prediction of the phase diagram of short-range interacting particles via molecular simulations, under conditions relevant to protein crystallization. Under those conditions the short-ranged interactions result in glassy behavior: the system becomes trapped in local minima. For that reason we developed a novel numerical method that allows us to sample the phase space more efficiently than traditional methods. The new method has allowed us to simulate, for the first time, conditions that were unreachable up to now, but that are at the same time of great interest in protein crystallization.
Simulation details.
Several numerical studies of the square-well (SW) model have been performed in the past. The square well is a pairwise additive model that describes the inter-particle interactions as follows: if the distance between two particles is less than a parameter σ, the potential energy is infinite; if it is between σ and λσ, the potential energy of the system is lowered by ε; otherwise the two particles feel no interaction. The SW model obeys the principle of corresponding states for a fixed value of λ. As a direct result, the parameters ε and σ can be directly related to the critical temperature and density of the system. By changing the interaction width λ, a wide range of systems can be studied, going from a van der Waals system of infinitely small ε and infinitely large λ to the adhesive (sticky) sphere limit, where ε goes to infinity and the well width goes to zero. Most of
previous studies were focused on square-well widths that were considered to be typical
for simple liquids [12-15]. More recently, the interest for systems with shorter range of
interactions has been significantly increased, since it has been argued that the

experimentally observed phase behavior [3] of complex systems, like proteins and colloids in solution, can be understood to a first approximation by models of short-range attraction. As one would expect, protein-protein interactions are highly anisotropic due to various physical mechanisms: the presence of hydrophobic/hydrophilic zones, the formation of specific hydrogen bonds at specific surface locations, and interactions between nonuniformly distributed surface charges. It has been shown that this anisotropy can be essential in the description of the experimentally observed phase diagrams of protein solutions [16-19]. Furthermore, it has recently been argued that the gel-like behavior of dilute protein solutions may be better explained by the interplay of short-range attraction and weak long-range repulsion. In the present work we have not investigated those effects, although this is part of our future plans.
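The square-well pair interaction described above takes only a few lines to state explicitly; a minimal sketch in reduced units (the defaults σ = ε = 1 and λ = 1.5 are illustrative choices, not parameters from this study):

```python
def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Square-well pair potential: hard core below sigma, an
    attractive well of depth eps out to lam*sigma, zero beyond."""
    if r < sigma:
        return float("inf")   # hard-core overlap is forbidden
    elif r < lam * sigma:
        return -eps           # pair inside the attractive well
    return 0.0                # no interaction beyond the well

# A pair at separation 1.2 sits inside the well
print(square_well(1.2))  # -1.0
```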
We investigate the phase behavior of the SW fluid by following two approaches. First, for the case where the interaction width λ is sufficiently large, the Gibbs Ensemble method [20] was used to calculate the vapor-liquid equilibrium of a SW fluid of 256 molecules (λ = 2, 1.5). On the other hand, simulating small interaction widths results in trapping the system in local minima and hence in glassy behavior. One of the consequences of the glassy behavior is that volume fluctuations become impractical. For this reason we developed an alternative approach for the calculation of the phase equilibria without the need for volume fluctuations. The method is based on the accurate calculation of the free energy difference between the SW fluid and the hard-sphere fluid as a function of temperature and density, from a series of NVT simulations (constant number of particles N, volume V and temperature T) at discrete temperatures and densities. Furthermore, we combined the new method with parallel tempering and the histogram method [21].
The parallel tempering method was designed [21] to enable simulation under different conditions in parallel, while guaranteeing that the individual sub-systems are maintained in thermal equilibrium. By exchanging configurations between sub-systems, systems that are trapped in local minima at the lower temperatures can overcome large free energy barriers by diffusing up (and subsequently down) in temperature. In order to achieve a reasonable acceptance probability for such a configuration exchange, the difference between the conditions of the sub-systems involved in the exchange should not be too large, ensuring that there is a subset of configurations likely to be visited by both systems. A similar constraint applies to the histogram method, which is used to link simulations under different conditions and to calculate the free energy of the system accurately.
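The exchange step described above is usually accepted with the standard Metropolis probability min(1, exp[(βi − βj)(Ui − Uj)]) for replicas at inverse temperatures βi, βj with potential energies Ui, Uj; a minimal sketch of that criterion (the numbers in the example are illustrative, not from this study):

```python
import math
import random

def try_swap(beta_i, beta_j, U_i, U_j):
    """Metropolis acceptance for exchanging configurations between
    two replicas at inverse temperatures beta_i and beta_j."""
    delta = (beta_i - beta_j) * (U_i - U_j)
    return delta >= 0 or random.random() < math.exp(delta)

# Handing the lower-energy configuration to the colder replica
# is always accepted (delta >= 0 here):
print(try_swap(1.0, 0.5, -5.0, -10.0))  # True
```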
The histogram method [21] allows us to combine information from different simulations and to calculate the free energy difference between the conditions simulated. We use the histogram method to reconstruct the density of states W(U) as a function of the internal energy U, at constant number of molecules N and constant volume V, from a series of NVT simulations at different temperatures. It should be noted that a single simulation at temperature T gives correct information on the ratio of densities of states, W(U1)/W(U2), around the average potential energy corresponding to that temperature. We are able to calculate the absolute value of the density of states with the help of the histogram method, using the fact that at infinite temperature the SW potential reduces to the hard-sphere potential. The absolute value of the free energy of a hard-sphere fluid can then be accurately calculated from the Carnahan-Starling EoS [22]. From the reconstructed density of states, the potential energy as a function of temperature can be rigorously calculated. Once this function is known from the temperature of interest T up to high temperatures, the difference in free energy between a SW fluid at temperature T and the hard sphere at the same density can be calculated as the integral
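The hard-sphere reference entering this construction is available in closed form from the Carnahan-Starling equation of state; a minimal numerical sketch (packing fraction η as input, energies in units of kT; this illustrates the standard formulas, not the authors' code):

```python
def cs_compressibility(eta):
    """Carnahan-Starling compressibility factor Z = p/(rho*kT)
    of the hard-sphere fluid at packing fraction eta."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta) ** 3

def cs_excess_free_energy(eta):
    """Excess Helmholtz free energy per particle, in units of kT."""
    return eta * (4 - 3 * eta) / (1 - eta) ** 2

# In the dilute limit Z -> 1 and the excess free energy vanishes
print(cs_compressibility(0.0), cs_excess_free_energy(0.0))
```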

Data Association problem in Multiple Target Tracking

Agostino Capponi
Thales Nederland B.V.
The Netherlands
agostino.capponi@nl.thalesgroup.com

Introduction

A basic question in the field of air surveillance systems is: how to keep track of
all observed targets using information from all available sensors? Sensors, such as radar or infrared, detect and locate objects, named targets, that are present within their coverage. Each time a detection, called a measurement, takes place, several features are measured, such as the location of the object (expressed in range, bearing and elevation). The Data Association Problem (DAP) is to find out which sensor detections originate from which target. In this paper we address the situation where there is only a single scanning sensor and assume that the sensor's measurements are processed after each scan. The DAP then aims at finding an optimal assignment of measurements to targets under the following assumptions. At each scan:
- a single measurement can be associated with at most one target;
- each target receives at most one measurement per scan.
There are many complicating factors that make the problem difficult to
solve. The first is due to the fact that measurements produced by sensors do
not always correspond to a target detection. For instance, a radar searching
for aircraft targets will also receive reflected signals from hills, structures such
as tall buildings, the surface of the sea, etc... All of these unwanted radar
echoes are called false alarms. Another problem to face is the limitation in
accuracy of the sensors. When tracking closely spaced targets, the sensor
resolution becomes a critical issue in resolving different targets. Due to the
limited resolution, one may have a single target detection with a merged
measurement for two closely spaced targets. The rest of the paper is organized in two sections: the next section describes the different data association algorithms that have been proposed in the literature, and the last section briefly explains the concept of clustering in the data association field and gives some references to relevant work.

Overview of the Data Association Algorithms

Every time the data association problem is solved, an assignment of sensor measurements to tracks is made. A track is a set of measurements that are
assumed to be generated from the same target (aircraft, missile, ...). Solution
methods for the data association problem arising in multiple target tracking, developed throughout the years, can be roughly divided into two main categories: sequential methods and batch methods. Sequential methods consider the set of sensor measurements from the latest scan and immediately make an assignment between these measurements and the tracks determined by the last assignment. Batch methods maintain a series of sensor measurements and produce a preliminary assignment of sensor measurements to tracks. Final measurement-track assignments are made between the measurements outside the batch and the tracks in the preliminary assignment. A batch approach has a major advantage over sequential methods: the measurement-track assignments within the batch may be retracted when new sensor measurements are received. Hence the decision-making process is reversible within the batch, which leads to a more robust data association. On the other hand, the disadvantage of the batch approach is the large amount of CPU time needed to find a good assignment. Note that in operational circumstances fast reaction times are required, normally before the next scan of
measurements is received. An example of a sequential approach is the Global
Nearest Neighbour. It assigns the measurement to the closest track according
to some fixed distance measure. Another important sequential technique is
the Multiple Hypothesis Tracking made popular by the famous work of Reid
[6]. At any given time multiple assignments are maintained. The best assignment is considered to be the solution for that moment. Other less likely
assignments can later grow to become the best one. Informally, the best assignment is the one most likely to correspond to the real-world situation; to define it formally, a score [6] is defined for any assignment. Over the last fifteen years a new batch approach has been defined and refined [4]: the multidimensional assignment (MDA) approach. The main idea is to maintain a batch of sensor measurements with which plausible tracks
are formed. These tracks represent the building blocks with which a global
assignment is formed. To illustrate how data association can be seen as a
multidimensional assignment problem, see Fig. 1. Each column represents a set of measurements collected at a certain scan. Only the measurements of the last K scans are maintained in memory. This batch is called a sliding window, since it moves one step ahead every time a new set of measurements is processed (at every scan). The measurement-track assignments falling outside the window (in the discarded scans) are fixed and cannot be retracted.

[Figure: columns of measurements, one per scan; the last K columns form the sliding window.]

Figure 1: Data association posed as a multidimensional assignment problem


Assignments have to be made between fixed tracks outside the window
and measurements within the sliding window. Thus a K-scan sliding window
results in a K + 1-dimensional assignment problem. Many algorithms based
on the theory of Lagrangian relaxation have been proposed by Poore et al.
[5] to solve this problem. These algorithms remove specific constraints from the MDA problem in order to make it simpler to solve; violation of the removed constraints is, however, discouraged during the solution process. Poore shows by means of experiments that the quality of the assignment found with this procedure is quite satisfactory. Major drawbacks of
Lagrangian relaxation are the high amount of CPU time needed to solve the
problem and the fact that there is no general guarantee that the found assignment is anywhere near the optimal assignment. An alternative approach
has been proposed by Capponi [1]. This approach uses a greedy procedure
to generate multiple assignments and then selects the best one. The algorithm terminates in a polynomial amount of time and it gives a mathematical
guarantee on the quality of the assignment returned.
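The Global Nearest Neighbour idea mentioned above can be sketched in a few lines; a toy illustration with plain Euclidean distances, a greedy pairing order, and hypothetical 2-D positions (operational trackers use statistical, gated distances instead):

```python
import math

def gnn_assign(tracks, measurements, gate=5.0):
    """Greedy nearest-neighbour association: repeatedly pair the
    closest remaining (track, measurement) couple within the gate."""
    pairs = sorted(
        (math.dist(t, m), ti, mi)
        for ti, t in enumerate(tracks)
        for mi, m in enumerate(measurements)
    )
    assignment, used_t, used_m = {}, set(), set()
    for d, ti, mi in pairs:
        if d <= gate and ti not in used_t and mi not in used_m:
            assignment[ti] = mi   # at most one measurement per track
            used_t.add(ti)
            used_m.add(mi)
    return assignment

tracks = [(0.0, 0.0), (10.0, 0.0)]
measurements = [(9.0, 1.0), (1.0, 0.0)]
print(gnn_assign(tracks, measurements))  # {0: 1, 1: 0}
```

Each track receives at most one measurement and vice versa, matching the two assumptions stated in the introduction.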

Clustering

In extreme circumstances, when hundreds of targets must be tracked and/or measurements coming from many sensors must be taken into account, even a fast data association algorithm is not able to fulfil the stated real-time requirements. The situation can be improved if the data association problem is
decomposed into a number of smaller data association problems that can be solved independently. The major issue here is to come up with an approach that guarantees that the time needed to decompose the problem and then solve each independent subproblem is smaller than the time needed to solve the original data association problem directly. Two interesting approaches [6], [2] have been proposed in the literature
to address this issue. Recently a new approach has been proposed by de
Waard and Capponi [3] to decompose the MDA problem. Besides reducing
the complexity of the original MDA problem, the proposed approach also
reduces the number of correlation tests that are made to decide whether a
measurement can be part of a specific track.
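One common way to obtain such a decomposition (a generic illustration, not necessarily the method of [3]) is to group tracks and measurements that share gated correlations into connected components of a bipartite graph; each component then becomes an independent subproblem. A sketch with hypothetical gating results:

```python
from collections import defaultdict

def clusters(edges, n_tracks, n_meas):
    """Split association into independent subproblems: connected
    components of the bipartite track/measurement graph, where an
    edge (t, m) means measurement m falls inside track t's gate."""
    adj = defaultdict(set)
    for t, m in edges:
        adj[("t", t)].add(("m", m))
        adj[("m", m)].add(("t", t))
    seen, comps = set(), []
    nodes = [("t", i) for i in range(n_tracks)] + [("m", j) for j in range(n_meas)]
    for node in nodes:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:                      # depth-first search
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur])
        seen |= comp
        comps.append(comp)
    return comps

# Tracks 0 and 1 share measurement 0; track 2 gates only measurement 1:
print(len(clusters([(0, 0), (1, 0), (2, 1)], 3, 2)))  # 2 independent subproblems
```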

References
[1] A. Capponi. A Polynomial Time Algorithm for the Data Association Problem in Multiple Target Tracking. To be published in 2003.
[2] M. R. Chummun, T. Kirubarajan, K. R. Pattipati and Y. Bar-Shalom. Efficient Multisensor-Multitarget Tracking Using Clustering Algorithms. IEEE Transactions on Aerospace and Electronic Systems, Vol. 37, No. 3, pp. 898-913, July 2001.
[3] H. de Waard and A. Capponi. An Efficient Approach to Decompose the MDA Problem. Submitted to Information Fusion 2004.
[4] A. B. Poore. Multidimensional Assignment Formulation of Data Association Problems Arising from Multitarget and Multisensor Tracking. Computational Optimization and Applications, Vol. 3, pp. 27-57, 1994.
[5] A. B. Poore and N. Rijavec. A Lagrangian Relaxation Algorithm for Multidimensional Assignment Problems Arising from Multitarget Tracking. SIAM Journal on Optimization, Vol. 3, No. 3, pp. 554-563, August 1993.
[6] D. B. Reid. An algorithm for tracking multiple targets. IEEE Transactions on Automatic Control, Vol. AC-24, No. 6, 1979.


Éva Csajbók1,2, István Bányai2, Joop Peters1

1 Laboratory for Applied Organic Chemistry and Catalysis, Delft University of Technology, The Netherlands
2 Department of Physical Chemistry, University of Debrecen, Hungary

Synthesis and investigation of Gd-loaded zeolite nanoparticles as potential Magnetic Resonance Imaging contrast agents
Introduction
Imaging of human internal organs with exact and non-invasive methods is very important for medical diagnosis, treatment and follow-up. The development of modern magnetic resonance imaging (MRI), which represents a breakthrough in medical diagnostics and research, has recently been awarded a Nobel Prize in Physiology or Medicine (Paul Lauterbur and Peter Mansfield, 2003).
MRI is based on the phenomenon of Nuclear Magnetic Resonance (NMR). The essence of the phenomenon is the interaction between an external magnetic field and nuclei that have a nonzero magnetic moment. Two parameters of this interaction are of essential importance in MRI: the so-called longitudinal (T1) and transversal (T2) relaxation times. With these measurable parameters we can describe the rate of build-up or decay of the magnetization vector of given nuclei placed in, or removed from, the magnetic field.
Hydrogen nuclei (1H) are the best target for in vivo NMR, with the following advantages: 1. high sensitivity; 2. high concentration (water) in the body; 3. differences in T1 and T2 values between different tissues (e.g. normal and malignant tissue), which can lead to contrast in MRI pictures.
For many applications, however, it is necessary to increase the contrast by administering certain materials, contrast agents (CAs). They are usually soluble complexes of paramagnetic ions, e.g. Gd3+, formed with chelating ligands and containing one or more coordinated water molecules. The contrast enhancement is based on the paramagnetic effect of the seven unpaired electrons of Gd3+ on the 1H nuclei of the coordinated water (T1 or T2 shortening). The effect is spread to the bulk water molecules around the complex by a fast exchange process between them and the coordinated water molecules. The effectiveness of a contrast agent is described by the relaxivity, R1 or R2, according to Eq. 1 and Eq. 2, where T1 and T2 are the observed relaxation times, T1,0 and T2,0 are the relaxation times in the absence of the agent, R1 and R2 are the relaxivities and ca is the concentration of the agent.

1/T1 = 1/T1,0 + R1 ca    (1)

1/T2 = 1/T2,0 + R2 ca    (2)
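Eq. 1 implies that R1 is the slope of 1/T1 plotted against the agent concentration; a minimal sketch of that fit with synthetic, made-up data (not measurements from this study):

```python
def relaxivity(conc, T1):
    """Least-squares slope of 1/T1 versus concentration (Eq. 1).
    conc in mM, T1 in s -> R1 in 1/(mM*s)."""
    rates = [1.0 / t for t in T1]
    n = len(conc)
    cbar = sum(conc) / n
    rbar = sum(rates) / n
    num = sum((c - cbar) * (r - rbar) for c, r in zip(conc, rates))
    den = sum((c - cbar) ** 2 for c in conc)
    return num / den

# Synthetic data generated with R1 = 4 /(mM*s) and 1/T1,0 = 0.3 /s
conc = [0.0, 0.5, 1.0, 2.0]
T1 = [1.0 / (0.3 + 4.0 * c) for c in conc]
print(round(relaxivity(conc, T1), 3))  # 4.0
```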

In recent years, interest in a new type of CAs, paramagnetic nanoparticles (small but insoluble particles), has increased.

Zeolites are crystalline aluminosilicates whose framework is based on a network of AlO4 and SiO4 tetrahedra linked to each other by oxygen bridges. This structure can be infinitely extended into three dimensions.
Zeolites may be represented by the empirical formula:
Mx/m[(Al2O3)x(SiO2)y]·wH2O
where M is the cation with valence m and w is the number of water molecules in the unit cell. In our case, M can be Na+ partially exchanged for Gd3+.
Various zeolites with various pore sizes and particle sizes can be used. Since these
particles are thermodynamically stable and kinetically inert, they can be used as
gastrointestinal CAs, or, if the particle size could be made small enough, as CAs for
intravascular administration. Connected to a targeting group, selective investigation of
certain organs or tissues could be possible.
Results and discussion
Several parameters affecting the relaxivity were investigated: the effect of the gadolinium loading, pore size, dealumination, calcination, and the presence of a targeting group.
1. Effect of the gadolinium loading: with increasing weight percent of gadolinium,
the relative relaxivity (relaxation rate enhancement per mg of zeolite) increases,
while the relaxivity (relaxation rate enhancement per Gd-concentration in
dispersion) decreases.
2. Effect of the pore size: zeolite Y and zeolite A with similar particle sizes (100 nm and 150 nm, respectively) and similar Gd loadings (1.3 and 1.5 weight percent of Gd3+, respectively) were investigated. The relaxivity of GdNaA appeared significantly lower than that of GdNaY, in accordance with its smaller pore size.
3. Effect of the dealumination: both in GdNaY and GdNaA, dealumination resulted
in higher relaxivity.
4. Effect of the calcination: after calcination (heat treatment) the GdNaY sample
showed decreased relaxivity.
5. Effect of the targeting group: on attaching an aminopropyl group to the zeolite, the relaxivity decreased dramatically.
Based on these results and applying the model used by Platas-Iglesias et al., a qualitative explanation can already be given; the quantitative evaluation is in progress.
The Solomon-Bloembergen-Morgan theory describes the relaxation rate enhancement
obtained in the case of using soluble Gd(III)-complexes as CAs.
Here, the paramagnetic effect spreads to the bulk water by fast exchange between the
bulk and the water molecules coordinated to the Gd(III) in the complex.
In the case of the zeolite-encapsulated Gd(III), a two-step exchange model can be used,
assuming that the slower process is the exchange between the zeolitic water and the
bulk through the channels.
With increasing Gd(III) loading, the content of water inside the zeolite decreases,
therefore the water-exchange between the Gd(III)-coordinated water and the zeolitic
water becomes slower.

With decreasing pore size, the water exchange between the zeolitic water and the bulk
water becomes slower, and the zeolitic water content becomes lower. Dealumination has the opposite effect: it degrades the crystallinity, produces some larger pores with more zeolitic water, and facilitates the water exchange between the zeolitic water and the bulk water.
As a consequence of the calcination, a certain amount of Gd3+ moves from the larger pores of the GdNaY zeolite towards the smaller cages, where it can no longer be reached by the water exchange process; it is therefore lost from the point of view of relaxation enhancement.
The introduced aminopropyl groups are probably located not on the surface but in the inner cavities. A solution would be to react the aminopropyl group in advance with a large molecule that cannot enter the cavities, so that the reaction would take place only on the surface. This work is still in progress.
References
Platas-Iglesias, C.; Elst, L. V.; Zhou, W.; Muller, R. N.; Geraldes, C. F. G. C.; Maschmeyer, T.; Peters, J. A. Chemistry - A European Journal 2002, 8, 5121-5131, and references therein.
Figures
Figures
[Figure: framework structures of zeolite Y (internal cavity 11.8 Å, pore window 7.4 Å, smaller β-cages with d_int = 6.6 Å) and zeolite A (internal cavity 11.4 Å, pore window 4.1 Å).]

Figure 1. Different structures among zeolites: zeolite Y and A

Complex Fluids in Ultra-Thin Films

J. de Vicente1,2, H. A. Spikes2 and J. R. Stokes1

1 Unilever R&D Colworth, Colworth House, Sharnbrook, MK44 1LQ, UK
2 Tribology Section, Department of Mechanical Engineering, Imperial College of Science, Technology and Medicine, London, SW7 2BX, UK
Abstract:
In the food and personal care industries, many products consist of polymer-thickened water-based systems. Often the tribological properties of these materials are
important. Since they contain quite high molecular weight biopolymers, these materials
tend to be non-Newtonian at the shear rates present in rubbing contacts, so it is difficult
to predict their friction or hydrodynamic film- forming behaviour. The friction properties
of two types of polymer solutions have been studied at a range of polymer
concentrations over a wide range of entrainment speeds in a soft point contact formed
between silicone rubber and steel. The results have been interpreted with respect to
viscosity measurements on the polymer solutions made over a range of shear rates.
Introduction:
Many industrial products are highly structured complex fluids or soft solids. The rheological properties of the material are highly important, as they control stability and performance. In particular, the performance and sensory perception of many consumer products during their application depend on the behaviour of the material in very thin films and under very high shear rates. The thickness of such films can range from nanometres to microns, similar in size to the colloidal microstructural elements within the complex fluids and to the surface roughness. The present work is a preliminary study relating the tribology of polymer solutions to their rheological and microstructural properties in thin films at high shear rates.
Materials and methods:
Two polymers were investigated in this work: polyethylene oxide (PEO) and xanthan gum (XG). The latter is used in food products, while polyethylene oxide is a well-characterised synthetic polymer. All solutions were prepared by first dissolving the appropriate amount of polymer in Millipore deionised water. Sodium azide (0.02 % w/w) was added to the stock XG solutions to inhibit bacterial growth. Finally, the solutions were centrifuged in a centrifugal field of 16,000 g using a Beckman J2-MC for 45 minutes.
The friction apparatus used was a modified form of the Mini Traction Machine (MTM) (PCS Instruments, London, UK). This consists of a ball (AISI 440; radius R = 9.5 × 10^-3 m) loaded against the flat surface of a silicone rubber disc, with both surfaces independently driven by separate motors. The load was held constant at W = 3 N and measurements were carried out at a fixed slide-to-roll ratio of 50% over a wide range of entrainment speeds (from 4 to 1200 mm/s). The temperature was fixed at 35 °C. Two different elastomeric surfaces were investigated: hydrophobic (HB) and hydrophilic (HL). The low effective elastic modulus (E* = 10.9 × 10^6 Pa) and low maximum Hertz contact pressure (pmax = 5.7 × 10^5 Pa) meant that, when a lubricating film was present, the contact operated in the isoviscous-elastic or soft-EHL lubrication regime (Hamrock, 1994).
Rheological experiments were performed at 35 °C on a controlled-strain rheometer (ARES, Rheometric Scientific, Piscataway NJ, USA). A cone-and-plate
geometry with a 50 mm diameter cone of angle 0.02 radians was used to ensure a
constant shear rate in the sample. An LS-30 Contraves viscometer was also used at low
rates.
Results and discussion:
By way of example, Figures 1 and 2 show results obtained using a range of concentrations of PEO solution with hydrophobic and hydrophilic surfaces, respectively.

[Figures: friction coefficient versus entrainment speed (mm/s) on logarithmic axes, for water and PEO solutions of 0.0391, 0.1565, 0.625 and 2.5 wt %.]

Figure 1: Stribeck curve corresponding to polyethylene oxide solutions at different concentrations on hydrophobic surfaces.

Figure 2: Stribeck curve corresponding to polyethylene oxide solutions at different concentrations on hydrophilic surfaces.

It can be seen that the friction coefficient falls with increasing polymer concentration over the whole speed range. At the highest polymer concentrations there is a levelling out, and even an increase in friction at very high speeds, indicative of full-film lubrication. The hydrophilic elastomer gives higher friction than the hydrophobic one under almost all conditions. Similar results were obtained for xanthan gum solutions (not shown here for brevity). However, XG displays less sensitivity of friction to polymer concentration than PEO, and also no significant levelling-out of friction at high speeds.
The friction coefficient versus entrainment speed curves for polymer solutions demonstrate a decrease in friction with speed, indicative of fluid entrainment and the formation of a partial hydrodynamic film. In fluid film lubrication, the rate of entrainment of the lubricant (and thus the film thickness) depends on (Uη)^a, where η is the effective dynamic viscosity of the fluid in the inlet and U is the entrainment speed. The question of interest is: what is the effective viscosity of the fluid being entrained? To estimate this value, scaling factors K, representative of effective viscosities, have been calculated which collapse the friction-speed plots for the various polymer concentrations of each polymer solution system.
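One simple way to realise such a collapse (a generic sketch, not necessarily the authors' procedure) is to assume that, in the fluid-film region, each curve follows a power law f = C(UK)^n with a common exponent n; the factor K for a curve then follows from the intercept difference in log-log space. An illustration with synthetic data:

```python
import math

def fit_loglog(U, f):
    """Least-squares fit of ln f = b + n*ln U; returns (b, n)."""
    x = [math.log(u) for u in U]
    y = [math.log(v) for v in f]
    m = len(x)
    xbar, ybar = sum(x) / m, sum(y) / m
    n = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    return ybar - n * xbar, n

def scaling_factor(U_ref, f_ref, U, f):
    """K that shifts (U, f) onto the reference curve, assuming both
    follow f = C*(U*K)^n with the same exponent n in this region."""
    b_ref, n = fit_loglog(U_ref, f_ref)
    b, _ = fit_loglog(U, f)
    return math.exp((b - b_ref) / n)

# Synthetic curves: f = 0.01*(U*K)^(-0.5) with K = 1 (reference) and K = 4
U = [10.0, 100.0, 1000.0]
f_ref = [0.01 * u ** -0.5 for u in U]
f_4 = [0.01 * (4.0 * u) ** -0.5 for u in U]
print(round(scaling_factor(U, f_ref, U, f_4), 3))  # 4.0
```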
Figure 3 shows all of the PEO solution data plotted on a single curve of friction coefficient versus UK. The scaling factor K is simply a value, different for each polymer solution concentration, which best collapses all of the data onto one line in the high-speed, primarily fluid-film region. We observe that all of the data for a given type of surface can be collapsed onto a single line, suggesting that there is a single effective viscosity which is valid over the whole entrainment speed range. All of the data for the hydrophilic elastomer give higher friction than for the hydrophobic elastomer, presumably because the HL elastomer, being polar, adheres more strongly to the polar steel surface at asperity contact points than does the HB surface. The fact that the data collapse at low speeds suggests that the boundary film contribution of the polymer PEO is negligible. There is an evident turn-up in friction coefficient at high speeds, which is strongly indicative of the formation of full-film lubrication when the parameter UK > 5.
Plots of friction coefficient versus UK for xanthan gum are shown in Figure 4. We observe that the curves collapse reasonably well at high speeds, but the collapse is less complete at low speeds, especially on HB surfaces. For HB surfaces at low speed the friction is larger for the polymer solutions than for water.

[Figures: friction coefficient versus UK on logarithmic axes, for HL and HB surfaces; legends: water and four polymer concentrations (PEO: 0.0391, 0.1565, 0.625 and 2.5 wt %; XG: 0.005, 0.02, 0.07 and 0.2 wt %).]

Figure 3: Friction coefficient versus UK, where U is the entrainment speed and K is a scaling factor, for PEO solutions.

Figure 4: Friction coefficient versus UK, where U is the entrainment speed and K is a scaling factor, for XG solutions. K values are different for HB and HL surfaces.

The next step is to look for a possible correlation between the scaling factor K and various viscometric properties of the polymer solutions used. Figure 5 tests some possible correlations by plotting the ratio η/ηw versus K/Kw, where η represents different viscometric properties of possible relevance: a) the measured low-shear viscosity of the polymer solutions, η0; b) the measured high-shear viscosity of these solutions at 10^3 s^-1, η3; c) the measured minimum viscosity at high shear rates before the onset of inertial effects, ηmin; d) the calculated dilute-solution viscosity, ηLSD, based on the low-shear-rate intrinsic viscosity of the polymer solutions, [η]0; and e) finally the high-shear dilute-polymer viscosity, ηHSD. To do so we estimated [η] = 42 cm3/g for XG (Whitcomb and Macosko, 1978). ηw and Kw are the dynamic viscosity and scaling factor for water. A perfect correlation would give a straight line of unit gradient. We observe that the best correlation is found for the ηmin calculations.

Modulation of the expression of heat shock proteins and cytokines in Caco-2 cells after exposure to L. gasseri LF221, L. sakei NCDO 2174 and some microbial products

S. Fajdiga2*, J. J. Malago3, P. C. J. Tooten1, B. Bogovič Matijašić2, R. Marinšek Logar2, J. F. J. G. Koninkx1 and J. E. van Dijk1
1 Department of Pathobiology, Division of Pathology, Faculty of Veterinary Medicine, Utrecht University, Yalelaan 1, P.O. Box 80.158, 3508 TD Utrecht, The Netherlands
2 Zootechnical Department, Biotechnical Faculty, University of Ljubljana, Groblje 3, 1230 Domžale, Slovenia
3 Sokoine University of Agriculture, P.O. Box 3203, Chuo Kikuu, Morogoro, Tanzania
*sana.fajdiga@bfro.uni-lj.si

Summary
To investigate whether lactobacilli (Lactobacillus gasseri LF221 and Lactobacillus sakei NCDO 2174), Salmonella enteritidis 857 and some microbial products (lactate, butyrate) were able to modify the heat shock response and IL-8 production in 5-day-old enterocyte-like Caco-2 cells, the levels of Hsp70 in Caco-2 cells exposed to the lactobacilli, their microbial products and S. enteritidis 857 were analysed by Western blotting and immunostaining. The IL-8 levels were determined by sandwich ELISA. The expression of Hsp70 was similar between the bacterial strains. S. enteritidis 857 induced the highest IL-8 levels. Whereas butyrate had the potential to induce the expression of Hsp70, lactate only slightly modulated this level. Both butyrate and lactate had little or no effect on the IL-8 levels.
Keywords: heat shock protein 70, IL-8, Caco-2 cells, lactobacilli, lactate, butyrate
Introduction
The enterocytes of the intestinal epithelium are regularly exposed to substances
of dietary origin, which are potentially harmful (lectins, pathogenic bacteria) or
potentially beneficial (lactic acid bacteria). The expression of heat shock proteins and
cytokines by this epithelium is part of a protective mechanism developed by the
intestinal epithelial cells to deal with bacteria in the intestinal lumen. Heat shock
proteins are known to protect the intestinal cells, whereas the secretion of cytokine IL-8
leads to neutrophil infiltration at the site of infection (4).
Recent experiments have clearly demonstrated that exposure of differentiated
enterocyte-like Caco-2 cells to S. enteritidis 857 induces the expression of both heat
shock protein 70 (Hsp70) and cytokine IL-8. Moreover, it appeared that high levels of
heat shock proteins induced by heat treatment (42 °C) in Caco-2 cells were able to inhibit the S. enteritidis 857-induced secretion of IL-8 in these cells (5).
Lactobacilli produce a variety of antimicrobial substances. These bacteria and
their products are believed to play an important role in the balance of the microflora in
the intestinal tract of humans and animals (1). The experiments of this study were
designed to investigate whether L. gasseri LF221 and L. sakei NCDO 2174 can protect
5-day-old Caco-2 cells, the in vitro counterpart of crypt cells, against S. enteritidis 857 infection. In addition, we also assessed the protective capacity of lactate (a microbial product of lactobacilli) and butyrate (a microbial product of Pseudobutyrivibrio xylanovorans Mz5T), which are known to exert beneficial effects on the gut cells.

Materials and methods


Bacterial strains and growth media
Salmonella enteritidis 857 (Se) (6) was grown overnight in Luria-Bertani (LB)
broth; from this culture a 1/100 subculture in LB broth was inoculated and incubated with shaking (200 rpm) at 37 °C for 2 hours. Lactobacillus gasseri LF221 is an isolate from infant faeces (originally isolated at the Istituto di Microbiologia, Facolta di Agraria, Universita Cattolica del Sacro Cuore, Piacenza, Italy) and Lactobacillus sakei NCDO 2714 was purchased from the National Collection of Dairy Organisms, Reading, England. Both lactobacilli were grown for 24 hours in De Man, Rogosa, Sharpe (MRS) broth without stirring. L. gasseri LF221 (LF221) was kept at 37 °C and L. sakei NCDO 2714 (NCDO) at 30 °C.
Cell culture
Caco-2 cells were grown in supplemented DMEM (Dulbecco's modified Eagle's
medium) as described previously (2). Hsp70 was induced in supplemented DMEM by
exposure to 38°C, 39°C, 40°C, 41°C, 42°C or 43°C for 1 hour, followed by recovery
at 37°C for 6 hours. Cells were exposed to graded numbers of bacteria per cell in plain
DMEM (devoid of gentamicin and FCS) for 1 hour and allowed to recover for 24 hours
in plain DMEM with gentamicin (DMEMgenta). Exposure of the cells to lactate or
butyrate (sodium salts) was performed in plain DMEMgenta for 24 or 48 hours.
Western blot analysis and determination of IL-8 secretion by sandwich ELISA
After separation of the proteins on 10% SDS-polyacrylamide gels (3) and
transfer to an Immobilon-P PVDF membrane, Hsp70 was detected using an anti-Hsp70
monoclonal antibody (SPA-810; Stressgen Biotechnologies Corporation, Victoria,
British Columbia, Canada) and a goat anti-mouse IgG alkaline phosphatase
secondary antibody (SAB-101; Stressgen).
IL-8 levels were assayed by sandwich ELISA using the IL-8 CytoSets
antibody pair kit (CHC-1304; Biosource Europe S.A., Nivelles, Belgium).
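IL-8 concentrations from a sandwich ELISA are read off a standard curve and then normalised to cell number (pg IL-8 per 10⁶ cells, as reported in the figures). A minimal sketch of that calculation, using straight-line interpolation between standards on log-log axes; the optical densities and standard concentrations below are invented for illustration and are not taken from this study:

```python
import math

# Hypothetical IL-8 standards: (concentration in pg/ml, optical density).
standards = [(31.25, 0.10), (62.5, 0.19), (125, 0.37), (250, 0.72), (500, 1.35)]

def od_to_conc(od):
    """Interpolate concentration from OD, linearly in log-log space."""
    pts = sorted(standards, key=lambda p: p[1])
    for (c0, a0), (c1, a1) in zip(pts, pts[1:]):
        if a0 <= od <= a1:
            f = (math.log(od) - math.log(a0)) / (math.log(a1) - math.log(a0))
            return math.exp(math.log(c0) + f * (math.log(c1) - math.log(c0)))
    raise ValueError("OD outside the standard curve; dilute and re-assay")

def pg_per_million_cells(od, volume_ml, cells):
    """Normalise the measured IL-8 to pg per 1e6 cells."""
    total_pg = od_to_conc(od) * volume_ml   # pg in the collected medium
    return total_pg / (cells / 1e6)

conc = od_to_conc(0.37)
print(round(conc, 1))  # 125.0 pg/ml (an exact standard point)
```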
Results
Exposure to heat shock
Compared with the constitutive level of Hsp70 in control cells, the levels in
heat-shocked cells increased significantly. The level of Hsp70 rose with increasing
temperature, and a clear temperature-effect relationship could be observed (Figure 1).
Exposure of Caco-2 cells to bacterial strains
To establish any difference in the expression of Hsp70 and IL-8 levels, the cells
were exposed to graded numbers of S. enteritidis 857, L. gasseri LF221 and L. sakei
NCDO 2714. Compared with the control cells, a slight but significant increase was found
in the expression of Hsp70 (Figure 2). However, there was little or no difference in
the expression level between the bacterial strains. The levels of IL-8 induced by S.
enteritidis 857 at any equivalent bacterial number were far higher than those induced by
either lactobacillus (Figure 3). In addition, the induction did not reveal a dose-dependent
relationship. Control cells (C) show the constitutive levels of Hsp70 or IL-8 at 37°C in
the absence of bacteria.

Fig 1: Expression levels of Hsp70 in 5-day-old Caco-2 cells after exposure to
37-43°C. Hsp70 was induced by exposure to the indicated temperatures for 1 h,
followed by recovery at 37°C for 6 h. Subsequently the cells were processed for
Western blotting and immunostaining.

Fig 2: Induction of Hsp70 in 5-day-old Caco-2 cells after exposure to different
numbers (10, 20, 100 or 200 per cell) of S. enteritidis 857, L. gasseri LF221 or
L. sakei NCDO 2714. Cells were exposed for 1 h to the indicated number of bacteria
per cell in plain DMEM, followed by recovery in plain DMEMgenta at 37°C for 24 h.
The relative Hsp70 response was established using 2 cell passages and duplicate
cultures per passage and is expressed as the relative amount of Hsp70 ± sd.

Fig 3: Induction of IL-8 production by S. enteritidis 857, L. gasseri LF221 or
L. sakei NCDO 2714 in 5-day-old Caco-2 cells. Cells were exposed for 1 h to the
indicated number of bacteria per cell (10, 20, 100 or 200) in plain DMEM, followed
by recovery in plain DMEMgenta at 37°C for 24 h. Cell culture medium was collected
and IL-8 was determined by sandwich ELISA using 2 cell passages and duplicate
cultures per passage. The results are expressed as pg IL-8/10⁶ cells ± sd.

Exposure of Caco-2 cells to microbial products


Cells were exposed for 24 or 48 hours to 3 different physiological concentrations
of butyrate or lactate in plain DMEMgenta (Figures 4 and 5). Control cells (C) represent
the Hsp70 level at 37°C, whereas HS indicates the level in heat-shocked (42°C) cells in
the absence of the microbial products. With increasing concentrations of butyrate, an
increase in Hsp70 levels could be established after 24 and 48 hours of exposure. Lactate
appeared to be unable to induce the expression of Hsp70. Both butyrate and lactate had
little or no effect on the IL-8 production (Figure 5).
Discussion
Heat shock proteins are known to protect the intestinal cells against infection and
inflammation, whereas the secretion of cytokine IL-8 leads to neutrophil infiltration at
the site of infection. Since its secretion culminates in epithelial cell damage, its
down-regulation is vitally important.
A temperature effect on Caco-2 cells can be clearly observed. In comparison
with Caco-2 cells maintained at 37°C, which express constitutive levels of Hsp70, the
levels of this Hsp increase significantly with increasing temperature (Figure 1).
Exposure of Caco-2 cells to lactobacilli or S. enteritidis 857 shows little or no
difference between the bacterial strains in the expression of Hsp70 (Figure 2).
However, the levels of IL-8 induced by both lactobacilli at any equivalent bacterial
number are far lower than those induced by S. enteritidis 857, a well-known pathogen
(Figure 3). It is therefore reasonable to suggest that the lactobacilli-induced synthesis of
Hsp70 down-regulates the IL-8 levels.
By increasing the butyrate concentration and prolonging the exposure time, the
cells are triggered to synthesize Hsp70 (Figure 4), which accounts for the suppression of
the IL-8 production (Figure 5). This finding therefore indicates that the protection by
butyrate might be mediated, at least in part, via the synthesis of Hsp70. Lactate at the
concentrations tested only slightly modulated the expression of Hsp70 (Figure 4). Since
lactobacilli produce much higher concentrations of lactate, these concentrations
should be screened for their capacity to induce Hsp70 expression. Both butyrate and
lactate have little or no effect on the IL-8 levels.
Fig 4: Induction of Hsp70 in 5-day-old Caco-2 cells after exposure to microbial
products. Cells were exposed to butyrate or lactate (0.2, 2 or 20 mM) in DMEMgenta
for 24 h or 48 h. After exposure the cells were collected, processed for Western
blotting and immunostaining (A) and quantified (B). C, control cells at 37°C; HS,
heat-shocked (42°C) cells in the absence of microbial products. See the legend of
Figure 2 for additional information.

Fig 5: Induction of IL-8 production by microbial products in 5-day-old Caco-2
cells. Cells were exposed to butyrate or lactate (0.2, 2 or 20 mM) in DMEMgenta for
24 h or 48 h. After exposure the cell culture medium was collected and IL-8 was
determined by sandwich ELISA. See the legend of Figure 3 for additional
information.

References
1. Bogovič Matijašič B, Rogelj I. 1999. Bacteriocinogenic activity of lactobacilli isolated from
cheese and baby faeces. Food Technology and Biotechnology 37 (2): 93-100.
2. Koninkx JFJG, Hendriks HGCJM, Van Rossum JMA, Van den Ingh TSGAM, Mouwen JMVM.
1992. Interaction of legume lectins with the cellular metabolism of differentiated Caco-2 cells.
Gastroenterology 102: 1516-1523.
3. Laemmli UK. 1970. Cleavage of structural proteins during the assembly of the head of
bacteriophage T4. Nature 227: 680-685.
4. Lee CA, Silva M, Siber AM, Kelly AJ, Galyov E, McCormick BA. 2000. A secreted Salmonella
protein induces a proinflammatory response in epithelial cells, which promotes neutrophil
migration. Proceedings of the National Academy of Sciences USA 97: 12283-12288.
5. Malago JJ, Koninkx JFJG, Tooten PCJ, van Liere E, van Dijk JE. Anti-inflammatory properties
of Hsp70 and butyrate on Salmonella-induced IL-8 secretion in enterocyte-like Caco-2 cells
(Clinical and Experimental Immunology, submitted).
6. Van Asten A, Zwaagstra K, Baay M, Kusters J, Huis in 't Veld J, van der Zeijst B. 1995.
Identification of the domain which determines the g,m serotype of the flagellin of Salmonella
enteritidis. Journal of Bacteriology 177 (6): 1610-1613.


Non Schulz-Flory Oligomerisation of Ethylene


Aldo E. Guiducci
Shell Research and Technology Centre, Amsterdam, The Netherlands
The use of Shape- and Size- Selective Catalysts (SSSC) has received a great deal of
attention over recent years, owing to their ability to influence catalytic chemical processes.
One example of a class of industrially important catalysts is the zeolites, which are used
extensively to discriminate between similar molecules on the basis of their sizes and
shapes (e.g. use of HZSM-5 in the preferential formation of 1,4-dimethylbenzene from the
disproportionation of toluene).
Recent work has focussed on new types of SSSCs, in an attempt to explore new
possibilities and prepare catalysts with novel properties. One of the systems considered has
been the Molecularly Imprinted Polymer (MIP). This consists of a polymer matrix, in
which are embedded chemically active sites contained within pores. These pores are
designed such that the environment around the active site is carefully matched to promote
specific reactions, in a fashion mimicking that of enzymes, albeit at a less sophisticated
level.1 Several systems have been shown to promote selectivity for chemical reactions
which show no such selectivity in solution, and thus it was of interest to explore the
concept for other commercially important applications.
Industrially, the oligomerisation of ethylene to form commercially important products has
long been exploited, and the process was chosen as a suitable model to test the application
of MIP techniques, owing to the large amount of information which has been determined
for the unsupported catalyst. The catalyst chosen was an iron complex supported by a
2,6-bis(arylimino)pyridine ligand, a class of compounds that have been found to be highly
active catalysts for the oligomerisation of ethylene.2 Heavily cross-linked polystyrene
(formed from the thermally activated, AIBN-initiated polymerisation of styrene and
divinylbenzene monomers) was chosen as a suitable polymer matrix, with toluene as
porogen. Figure 1 shows a schematic cartoon of the catalyst environment.

Figure 1. Catalyst centre (iron supported by the 2,6-bis(arylimino)pyridine ligand) with growing olefin chain, immobilised within a polymer pore.
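The reference point in the title, a Schulz-Flory distribution, is the chain-length distribution expected from a single-site oligomerisation catalyst with a constant chain-propagation probability α: the mole fraction of the n-mer is x_n = (1-α)·α^(n-1). A short illustrative sketch of the expected product slate (the value of α below is arbitrary, not a measured one):

```python
# Schulz-Flory (most-probable) distribution for ethylene oligomerisation:
# with constant propagation probability alpha, the mole fraction of an
# oligomer containing n ethylene units is (1 - alpha) * alpha**(n - 1).
# alpha = 0.7 is an illustrative assumption, not a value from this work.

def schulz_flory(alpha, n_max):
    return [(1 - alpha) * alpha ** (n - 1) for n in range(1, n_max + 1)]

for n, x in enumerate(schulz_flory(0.7, 10), start=1):
    print(f"C{2 * n}: mole fraction {x:.3f}")

# The mole fractions form a geometric series summing to 1.
print(round(sum(schulz_flory(0.7, 1000)), 6))  # 1.0
```

A "non-Schulz-Flory" product slate is then any distribution that departs from this geometric form, which is what a templated pore environment around the active site might in principle induce.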


Polystyrene was chosen as a suitable polymer owing to its chemical similarity to the
aromatic solvents used in the ethylene oligomerisation process. Similarly, toluene was
chosen as a suitable porogen because of its similarity to the monomers. Together, it was
predicted that the polymer would form a chemical environment around the active centres
which strongly approximated that encountered in the free solution, and thus the only
significant difference on the reaction would be the templated nature of the pores in which
the active sites were located.
The polymers formed were found to have mechanical properties which were strongly
correlated with the amount of toluene porogen used. The porogen assists in the formation
of the MIP by generating voids throughout the polymer, facilitating mass transfer. Higher
concentrations of toluene afforded polymers which were crushable and easily handled,
while lower concentrations formed mechanically hard and brittle solids, which were less
easy to process.
Removal of organic impurities and readily-leached species was accomplished by
Soxhlet extraction of the polymer with a suitable solvent. This extraction method combines
the advantage of using a minimum of solvent with a high efficiency of washing, making it
well suited for industrial applications.
As precursor LFeX (L = 2,6-bis(arylimino)pyridine, X = Cl2 or a suitable template
molecule) complexes were found to be insoluble in the monomer mixture, attempts were
made to immobilise the iron-free templated ligand in the polymer first, followed by
inclusion of the iron centre. This was attempted using two techniques: the template bound
to the free ligand alone, and the template and free ligand bound to a sacrificial metal
centre. Samples were then treated with FeCl2 or Fe(acac)2, activated using
methylaluminoxane (MAO), and screened for catalytic activity. The products formed were
analysed by 1H NMR spectroscopy and GC to determine their nature.
Analysis of the data collected revealed that the catalytic activities of the polymer-supported
complexes were very low compared with those of the free complexes. The quantity of
products obtained from the batch reactions was too small to accurately compare the nature
of the products formed, and thus determine if the MIP environment had been effective in
influencing the products of the reaction.
Owing to the heterogeneous nature of the polymer matrices made during this study,
detailed characterization of the environment around the catalyst centres proved to be
extremely difficult. Possible problems, such as the surface-based (i.e. untemplated)
catalytic sites proving to be more active than internal (i.e. templated) catalytic sites due to
mass transfer considerations, suggest that MIPs may be suitable only for certain types of
catalysis, and this may restrict their application in the industrial realm. However, the
potential for creating these micro-reactors to enhance the specificity of chemical
processes is considerable, and will doubtless prove to be a valuable avenue of investigation
in many cases.
References
1. G. Wulff, Chem. Rev., 2002, 102, 1; K. Haupt, Chem. Commun., 2003, 171
2. M. Brookhart et al., J. Am. Chem. Soc., 1998, 120, 7143;
V. C. Gibson et al., J. Am. Chem. Soc., 1999, 121, 8728


The Stressed Immune System: Can Nutritional Intervention Help?


Mark Hamer, Ph.D., Marie Curie Research Fellow
Unilever Health Institute, Unilever Research Vlaardingen, The Netherlands
Introduction
There is a growing consumer demand for food products that optimise health and
more specifically provide enhanced resistance to common infections. Thus, there is
considerable interest in identifying immuno-enhancing ingredients that may be added to
functional food products to provide this benefit. There is clear evidence that nutritional
supplementation helps to restore immune function and contributes to optimal resistance to
infections in malnourished people [1], although the literature is less clear on the suggested
benefits of dietary supplementation for immune function in healthy, well nourished subjects.
Thus, it may be advantageous to develop models where immune function is temporarily
suppressed, for example from various forms of stress, in order to examine the efficacy of
nutritional intervention on immune function. Therefore, the aim of the current ongoing
work is to identify and implement suitable models that may be employed to test the efficacy
of immuno-enhancing ingredients in healthy individuals. There is evidence for stressinduced impairment of immune function in all of the stress models reviewed, which include
psychological stress, severe exercise, and sleep deprivation that is briefly summarised in
Table 1. However, only the exercise model has been used so far as a tool for nutritional
immunology research.


Employing exercise stress in nutritional immunology


Intensive prolonged exercise, such as marathon running, has mostly been adopted in
these models to examine the efficacy of various nutritional interventions. There is some
evidence to suggest that consumption of carbohydrate (CHO) drinks during intense training
and competition may attenuate some of the immuno-suppressive effects of prolonged
exercise. For example, prevention of exercise-induced falls in neutrophil function [2] and
reduction in the extent of diminution of PHA-stimulated T-lymphocyte proliferation
following prolonged exercise [3] have been observed. The mechanism may be associated
with an attenuated rise in plasma catecholamines and cortisol following CHO feeding
during exercise. However, CHO supplementation during a competitive marathon race did
not prevent declines in salivary IgA concentration, and there were no differences between
the placebo and CHO groups for those runners reporting upper respiratory tract infections
(URTI) during the 15-day post-race period [4]. Nevertheless, in a randomised, double-blind
placebo study, CHO supplementation during a 2.5 hr run in 30 experienced marathon
runners resulted in an attenuated increase in the inflammatory marker IL-6, in comparison
with placebo [5] (see Figure 1).

Figure 1. Effects of carbohydrate supplementation during 2.5 hrs of running on
plasma IL-6 (pg/ml) at baseline, immediately post-run and 1.5 hr post-run;
CHO vs. placebo, * P<0.05 (adapted from [5]).

Bouic et al. [6], in a double-blind placebo-controlled trial, also utilised the
marathon running stress model to examine the effect of a mixture of plant sterols/sterolins
on red and white blood cell counts, CD3+ and CD4+ lymphocyte sub-sets, and neutrophils.
Unfortunately, the experimenters were unable to gain access to the athletes until 3
days after the event, and given the differences in baseline between the groups for some
measures (B cells, IL-6, and cortisol) the findings appear merely to reflect differences in
individual variation.
Inconsistent findings from studies that have examined the effects of glutamine
supplementation on immune function following exercise demonstrate how the mode and
intensity of exercise are important variables to consider when employing the exercise
model. Two studies [7,8] have provided evidence to demonstrate that oral glutamine
supplementation can have beneficial effects on the immune system. For example, Castell et
al. [7] demonstrated that glutamine consumed immediately after and 2hr after a marathon
reduces the incidence of URTI in the 7 days following the race. However, other studies have
not found glutamine supplementation to have beneficial effects on exercise-induced
immune suppression [9,10], despite the maintenance of pre-exercise plasma glutamine
levels. Interestingly, the positive findings [7,8] were obtained during competitive races
(marathon running and triathlon), whereas the studies that did not support the effects of
glutamine supplementation used laboratory-simulated cycle ergometry exercise [9,10].
Furthermore, Rohde et al. [9] used an intermittent exercise protocol consisting of bouts of
60, 45, and 30 min each separated by a 2 hr recovery, which may have been less demanding
than a continuous exercise bout.
Other research has examined the effects of Ginseng and Echinacea on the mucosal
immune response using a Wingate cycling test model [11,12]. This model differs from
previously discussed exercise models because it consists of three consecutive 30 sec
exercise bouts of maximal intensity with passive recovery in between. Echinacea [12] but
not Ginseng [11] supplementation eliminated the mucosal immune suppression that was
apparent in the placebo group after the Wingate testing. However, given that athletes with
severely depressed IgA levels can still mount clinically appropriate antibody responses [13],
the 43% decline in IgA observed in the placebo group after the Wingate test may have
little clinical relevance. Thus, conclusions drawn from studies that measure only a limited
number of immune biomarkers should be interpreted with caution.
Conclusions & recommendations
There is a need to employ experimental models that can be used to more clearly
identify the impact of nutrition on immune function in healthy individuals. Psychological
stress and exercise appear to produce robust changes in immune function that could be
implemented for nutritional immunology research, although sleep deprivation models
require more development.
The strongest evidence for the efficacy of immuno-modulating nutrients should be
provided by clinical outcome measures, such as the occurrence of URTI, and measures
from a challenged immune system, such as the antibody response to vaccination. A number
of the nutritional intervention studies examined in the current review have merely
provided status immune markers, such as cell counts and concentrations, which cannot
provide information on immune function per se. Therefore, it is important that future
studies measure a range of immune markers that will allow accurate conclusions to be
drawn, for example, whether an ingredient specifically targets innate or adaptive immune
responses. The specific model of immune suppression should also be related to
the action of the active immuno-modulating component of the ingredient. For example,
severe exercise consistently appears to cause suppression of innate immune function, and
therefore this model should be used to test immuno-modulating ingredients that are thought
to act on innate immune function. In contrast, chronic psychological stress in
humans has been shown to affect some aspects of the adaptive immune response. Finally,
because of the inherent large variability between individual immune responses and the
measures of immune function, it is important that the models possess a certain degree of
repeatability and reliability, which may be established by testing with known
immuno-modulating actives before being used to assess the action of unknown ingredients.
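Repeatability of a candidate model can be quantified before use, for instance via the within-subject coefficient of variation (CV) of an immune marker measured on repeated occasions. A minimal sketch; the repeated measurements below are invented for illustration, not data from any study cited here:

```python
from statistics import mean, stdev

# Hypothetical repeated baseline measurements of one immune marker
# (e.g. salivary IgA, arbitrary units) taken in the same subject.
repeats = [102.0, 98.0, 105.0, 95.0]

# Within-subject CV: sample standard deviation relative to the mean.
cv_percent = 100 * stdev(repeats) / mean(repeats)
print(f"within-subject CV = {cv_percent:.1f}%")  # within-subject CV = 4.4%
```

A marker whose baseline CV approaches the size of the expected treatment effect cannot, on its own, demonstrate efficacy.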
References
1. Calder, PC & Kew, S. Br J Nutr 88:S165-S176, 2002.
2. Gleeson, M & Bishop, NC. Int J Sports Med 21 Suppl 1:S44-50, 2000.
3. Henson, DA et al. Int J Sports Med 19(8): 574-580, 1998.
4. Nieman, DC et al. Int J Sports Med 23(1): 69-75, 2002.
5. Nehlsen-Cannarella, SL et al. J Appl Physiol 82(5): 1662-1667, 1997.
6. Bouic, PJD et al. Int J Sp Med 20: 258-262, 1999.
7. Castell, LM & Newsholme, EA. Nutrition 13(7-8): 738-742, 1997.
8. Bassit, RA et al. Med Sci Sports Exerc 32 (7):1214-1219, 2000.
9. Rohde, T et al. Med Sci Sports Exerc 30(6): 856-862, 1998.
10. Krzywkowski, K et al. Am J Physiol Cell Physiol 281(4): C1259-C1265, 2001.
11. Engels, H-J et al. Med Sci Sports Exerc 35(4): 690-696, 2003.
12. Hall, HL et al. Med Sci Sports Exerc 35(5): S156, 2003.
13. Gleeson, M et al. Clin Exp Immunol 105(2): 238-244, 1996.
14. Herbert, TB & Cohen, S. Psychosom Med 55(4): 364-379, 1993.
15. Rowbottom, DG & Green, KJ. Med Sci Sports Exerc 32(7 Suppl): S396-S405, 2000.
16. Mackinnon, LT. Med Sci Sports Exerc 32(7 Suppl): S369-S376, 2000.
17. Dinges, DF et al. J Clin Invest 93(5): 1930-1939, 1994.
18. Savard, J et al. Psychosom Med 65:211-221, 2003.
19. Irwin, M et al. Brain Behav Immunol 17:365-372, 2003.

Table 1. The effects of various stress models on immune function in humans. The table
summarises, for each stress model (psychological stress, acute and chronic [14]; exercise
stress, acute/intense [15] and chronic/intense [16]; sleep deprivation, acute and chronic
[17,18,19]), the direction of change (increase, decrease, no effect, or unknown) in clinical
endpoints (URTI incidence, antibody response to vaccine, DTH response) and in immune
markers (NKCA, neutrophil phagocytic function, T-lymphocyte proliferation, total T-cells
(CD3+), T-helper cells (CD4+), cytotoxic T cells (CD8+), inflammatory monokines (e.g.,
TNFα, IL-6), eicosanoids (e.g., PGE2), serum [Ig], salivary [IgA], and Th1 and Th2
cytokine production).
Notes: (URTI) upper respiratory tract infection; (DTH) delayed-type hypersensitivity
response involving an inflammatory skin reaction; (NKCA) natural killer cell activity,
involved with recognising and killing abnormal cells; neutrophils are involved with
engulfing and destroying microbes; T-helper cells regulate responses by secretion of
cytokines; cytotoxic T cells carry out specific lysis of infected cells; (TNFα)
tumour necrosis factor-α and (PGE2) prostaglandin E2 are involved with initiation of
inflammatory responses; (Ig) immunoglobulins, antigen-specific antibodies secreted by
B lymphocytes; (Th1) type 1, cell-mediated responses; (Th2) type 2, antibody-mediated
responses; Th1 cytokines (e.g., IL-2, IFNγ) and Th2 cytokines (e.g., IL-4, IL-10, IL-13)
act via specific receptors to regulate the behaviour of cells.

MECHANISM OF SECRETION OF ENTEROCIN P


Herranz, C. and A. J. M. Driessen
Department of Microbiology, Groningen Biomolecular Sciences and Biotechnology
Institute, University of Groningen, Kerklaan 30, 9751 NN Haren, The Netherlands
INTRODUCTION
Bacteriocins are proteinaceous antimicrobial compounds produced by bacteria. Some
of the bacteriocins produced by lactic acid bacteria (LAB) exert antimicrobial activity
against food-spoilage and food-borne pathogenic microorganisms. Since most of the
LAB strains possess a history of safe use in foods, their use -or the use of their
bacteriocins- as natural preservatives (biopreservatives) in food has been proposed as
an alternative to chemical preservatives (5). The efficacy of bacteriocins, especially in
combination with other food preservation methods, and the consumers' demand for
natural food products free of chemical preservatives, may promote their use in the
near future.
The use of bacteriocins in the food industry requires their detailed characterization at
the biochemical and genetic level, including their physicochemical properties, the
organization of the genetic determinants for their production, the molecular mode of
action against sensitive microorganisms, and the mechanisms of their secretion. In LAB,
bacteriocin secretion can occur via two different routes. While most bacteriocins are
concomitantly processed and transported to the external medium by a dedicated
ATP-Binding Cassette transporter (2), several bacteriocins might be transported and
processed by the General Secretory Pathway (Sec-pathway, Fig. 1). Since the
Sec-pathway is a ubiquitous pathway in bacteria, it would facilitate heterologous
bacteriocin production in bacterial hosts of interest. This approach is interesting for
conferring bacteriocin production on starter cultures used in the food industry and for
obtaining strains producing multiple bacteriocins with different mechanisms of action
(5).
Proteins destined for export out of the cell via the Sec-pathway are synthesized as
precursors with an N-terminal extension, the so-called signal peptide, which directs
them to the translocation sites at the membrane. The best-studied translocation
machinery is that of Escherichia coli, which is composed of the peripheral membrane
subunit SecA, the integral membrane protein complex SecYEG, and a number of
accessory proteins (SecB, SecD, SecF and YajC) (3). During or shortly after
translocation, leader peptidase (Lep), whose catalytic domain is located at the
periplasmic side of the membrane, removes the signal sequence from the precursor,
thus allowing release at the trans side of the membrane. The energy for translocation
is provided by the hydrolysis of ATP by SecA and by the proton motive force.

Fig. 1. Schematic representation of the Sec-pathway components of E. coli.
Enterocin P (EntP) is a bacteriocin produced by several Enterococcus faecium strains
of meat origin, which displays a potent antimicrobial activity against several
food-spoilage and food-borne pathogenic microorganisms including Listeria
monocytogenes, Staphylococcus aureus, Clostridium botulinum and Cl. perfringens.
The structural gene of the bacteriocin, entP, encodes the precursor form of EntP
(preEntP). PreEntP is a 71-amino-acid polypeptide with an N-terminal signal sequence
of 27 amino acids (1) (Fig. 2).
MRKKLFSLALIGIFGLVVTNFGTKVDAATRSYGNGVYCNNSKCWVNWGEAKENIAGIVISGWASGLAGMGH
(residues 1-71; the signal sequence spans residues 1-27)

Fig. 2. The enterocin P precursor (71 amino acids). The signal sequence comprises the first 27 residues.
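The cleavage site implied by Fig. 2 can be checked directly against the N-terminal sequencing result reported below (ATRSYG): removing the 27-residue signal peptide from the 71-residue precursor must leave a mature protein beginning with those six residues. A small sketch using the published sequence:

```python
# Precursor sequence of enterocin P (preEntP) as given in Fig. 2.
PRE_ENTP = "MRKKLFSLALIGIFGLVVTNFGTKVDAATRSYGNGVYCNNSKCWVNWGEAKENIAGIVISGWASGLAGMGH"
SIGNAL_LEN = 27  # length of the N-terminal signal sequence (Fig. 2)

signal = PRE_ENTP[:SIGNAL_LEN]   # removed by leader peptidase
mature = PRE_ENTP[SIGNAL_LEN:]   # secreted, biologically active EntP

print(len(PRE_ENTP))  # 71 residues in the precursor
print(mature[:6])     # ATRSYG, matching the N-terminal sequencing result
print(len(mature))    # 44-residue mature bacteriocin
```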

The presence of a typical signal sequence, and the lack of dedicated transport genes
in the entP operon, suggest that preEntP is secreted via the Sec-pathway. The main
objective of the proposed research is to determine the mechanism by which preEntP is
translocated across the cytoplasmic membrane. These studies will provide leads on how
to produce bacteriocins in heterologous host strains.
RESULTS
If preEntP is secreted by the Sec-pathway, it should be possible to produce it in a
heterologous host in the absence of a dedicated transporter machinery. To test this
hypothesis, entP, fused at its C-terminus with a six-histidine tag (EntP-His), was cloned
in a Lactococcus lactis vector under control of the nisin-inducible promoter (4). EntP
was purified from the supernatant of L. lactis cells overexpressing entP by means of
ammonium sulphate precipitation, gel filtration and Ni-NTA affinity chromatography.
The purified protein, analysed by Tricine-SDS-PAGE and immunoblotting, appeared as
a single polypeptide band of approximately 6 kDa (Figs. 3A, B). The antimicrobial
activity of EntP was detected by an agar diffusion test as an area of growth inhibition of
an enterocin P-sensitive microorganism (Fig. 3C). N-terminal amino acid sequencing
of the protein shown in Fig. 3A yielded the sequence ATRSYG, which corresponds
to the first six amino acids of mature EntP produced by Ent. faecium P13. These results
show that L. lactis is able to secrete and correctly process preEntP to yield mature,
biologically active EntP.
Fig. 3. Analysis of purified EntP-His: (A) Tricine-SDS-PAGE; (B) immunoblotting
using anti-His-tag antibodies; (C) agar diffusion test.

To gain more information about the secretion mechanism of preEntP in L. lactis,


signal sequence swapping experiments between the signal peptides of preEntP and
preUsp45 (Fig. 4), the precursor of the main secreted protein of L. lactis (6), were
performed.


Fig. 4. Clustal W alignment of the


signal peptides of preEntP and
preUsp45. Identical (*), strongly
similar (:) and weakly similar (.)
residues are indicated.

The resulting hybrid precursors, SPpreEntP-Usp45 and SPUsp45-EntP (in which the signal
peptides of preEntP and preUsp45 are fused to the C-terminally histidine-tagged mature
parts of Usp45 and EntP, respectively), were purified from the supernatants of L. lactis
cells expressing the corresponding hybrid genes as described above. As a control,
proteins present in the supernatant of L. lactis cells expressing preEntP and preUsp45
were purified using the same procedure. Purified proteins were analysed by SDS-PAGE
and immunoblotting (Figs. 5, 6). An agar diffusion test was performed with purified
protein from the cells expressing SPUsp45-EntP (Fig. 5C). The results obtained show that
the signal peptides of preUsp45 and preEntP can direct the secretion of biologically
active EntP and Usp45, respectively, in L. lactis.
These data demonstrate that the signal peptide of preEntP is a typical Sec-signal
peptide that can access the Sec-pathway of L. lactis.
Fig. 5. The Ni-NTA-bound protein fraction from the supernatant of L. lactis NZ9000 cells expressing
SPUsp45-EntP (lane 1) and preEntP (lane 2) was (A) visualized by Tricine-SDS-PAGE, (B)
immunoblotted using anti-His-tag antibodies, and (C) tested for activity by the agar diffusion assay.

Fig. 6. (1) 10% SDS-PAGE and (2) immunoblotting using anti-His-tag antibodies of the Ni-NTA-bound
protein fraction of L. lactis cells expressing SPEntP-Usp45 (A) and preUsp45 (B). The position of
Usp45-His (45 kDa) is indicated by an arrow.


REFERENCES
1. Cintas LM, Casaus P, Håvarstein LS, Hernández PE, Nes IF (1997). Appl. Environ. Microbiol., 63: 4321-4330.
2. Håvarstein LS, Diep DB, Nes IF (1995). Mol. Microbiol., 16: 229-240.
3. De Keyzer J, van der Does C, Driessen AJM (2003). Cell. Mol. Life Sci., 60: 2034-2052.
4. De Ruyter P, Kuipers OP, de Vos WM (1996). Appl. Environ. Microbiol., 62: 3662-3667.
5. Stiles ME (1996). Antonie van Leeuwenhoek, 70: 331-345.
6. Van Asseldonk M, Rutten G, Oteman M, Siezen RJ, de Vos WM, Simons G (1990). Gene, 95: 155-160.


Occurrence of Phenolic Compounds in Various Extra Virgin Olive Oils


Karel Hrnčiřík
Unilever R&D Vlaardingen, Olivier van Noortlaan 120, 3133 AT Vlaardingen, The Netherlands, e-mail: karel.hrncirik@unilever.com
The growing enthusiasm about the Mediterranean diet among consumers in many
industrialised societies lies mainly in the belief that this diet plays a positive role in the
prevention of certain diseases. A number of potentially beneficial dietary factors of this
diet have been identified. Among these, Extra Virgin Olive Oil (EVOO) might play a
key role. The health benefits of this natural product are attributed to various aspects,
which include a relatively high unsaturated fatty acid content and possibly its minor
components, particularly the unique phenolic compounds (Visioli & Galli, 1998).
The phenolic fraction of EVOO consists of various secoiridoid derivatives, lignans, flavonoids, phenyl alcohols and phenyl acids (Servili & Montedoro, 2002). As these compounds vary in their properties (taste, stability against oxidation and also impact on human health), the identification and quantification of individual components of the phenolic fraction is nowadays of great interest. Many analytical procedures directed towards the determination of the complete phenolic profile have been proposed within the last decade (e.g. Mateos et al., 2001); however, the various extraction techniques, chromatographic conditions and methods of quantification used have contributed to the contradictory results published. In fact, concentrations of phenolics reported in the literature differ greatly (sometimes even by an order of magnitude), even for oils of the same olive variety (Pirisi et al., 2000; Tsimidou, 1998; Ranalli & Angerosa, 1996).
In order to explain this controversy, various extraction techniques, analytical methods
and methods of quantification have been critically re-evaluated and compared in this
study. The optimised liquid-liquid extraction system coupled with a high-performance
liquid chromatography (HPLC) method led to high recovery of olive oil phenolics
(93 %). The method allowed the quantification of nine major VOO phenolics (1-9),
whose structures are shown below.
[Chemical structures of compounds 1-9 (not reproduced). Substituents: (1) R1 = OH, R2 = H; (2) R1 = H, R2 = H; (3) R1 = H, R2 = CH3CO; (4) R1 = OH; (5) R1 = H; (6) R1 = OH; (7) R1 = H; (8) R2 = H; (9) R2 = CH3COO.]


In order to investigate the diversity in the olive oil phenolic profile, the content of
phenols in 23 EVOO samples originating from various countries was determined by the
developed HPLC methods.
The results obtained showed that the total amount of phenolics was rather variable within the samples analysed (208-1421 mg/kg, median 521 mg/kg), as was the composition of the individual phenols. Aglycones of oleuropein and ligstroside (4-7) formed the majority of total phenolics in all samples (90 % on average), while the contribution of simple phenols (1-3) and lignans (8, 9) was rather limited (on average 5 % for each group). The relative amounts of the individual major aglycones 5-7 varied, however. Whilst the Spanish oils were characterised by a predominance of the aglycone 6, the proportions of 5-7 were comparable in the samples from the eastern part of the Mediterranean basin.
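The group percentages quoted above (aglycones ~90 %, simple phenols and lignans ~5 % each) follow directly from summing the per-compound concentrations. A minimal sketch, with hypothetical concentrations for compounds 1-9 (the measured values are not reproduced here):

```python
# Group nine phenolic concentrations (compounds 1-9) into the three classes
# discussed in the text and report each class's share of the total.
# The numbers below are hypothetical, for illustration only (mg/kg oil).
sample = {1: 5.0, 2: 10.0, 3: 8.0,              # simple phenols
          4: 120.0, 5: 150.0, 6: 130.0, 7: 60.0,  # aglycones
          8: 12.0, 9: 9.0}                       # lignans

GROUPS = {"simple phenols": (1, 2, 3),
          "aglycones": (4, 5, 6, 7),
          "lignans": (8, 9)}

def composition(conc: dict) -> dict:
    """Return the percentage contribution of each phenolic class."""
    total = sum(conc.values())
    return {name: 100.0 * sum(conc[i] for i in ids) / total
            for name, ids in GROUPS.items()}

shares = composition(sample)
print({k: round(v, 1) for k, v in shares.items()})
```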
In fact, the phenolic profile is unique to each EVOO. It is determined by the genetic origin (variety) of the olives, the environmental conditions (climatic and agronomic), the time of harvest and the technology of olive processing (e.g. malaxation temperature, water washing), i.e. factors which are, to a certain extent, characteristic of each region. The impact of each individual factor on the phenolic content is, however, still not quite clear, and further extensive research is needed to fully answer this question.

References:
R. Mateos, J.L. Espartero, M. Trujillo, J.J. Rios, M. Leon-Camacho, F. Alcudia, A.
Cert: Determination of phenols, flavones, and lignans in virgin olive oils by solid-phase
extraction and high-performance liquid chromatography with diode array ultraviolet
detection. J. Agric. Food Chem. 49 (2001) 2185-2192.
F.M. Pirisi, P. Cabras, C. Falqui Cao, M. Migliorini, M. Muggelli: Phenolic
compounds in virgin olive oil. 2. Reappraisal of the extraction, HPLC separation and
quantification procedures. J. Agric. Food Chem. 48 (2000) 1191-1196.
A. Ranalli, F. Angerosa: Integral centrifuges for olive oil extraction. The qualitative
characteristics of products. J. Am. Oil Chem. Soc. 73 (1996) 417-422.
M. Servili, G. Montedoro: Contribution of phenolic compounds to virgin olive oil
quality. Eur. J. Lipid Sci. Technol. 104 (2002) 602-612.
M. Tsimidou: Polyphenols and quality of virgin olive oil in retrospect.
Ital. J. Food Sci. 10 (1998) 99-116.
F. Visioli, C. Galli: Olive oil phenols and their potential effects on human health.
J. Agric. Food Chem. 46 (1998) 4292-4296.


For submission to Proceedings of 16th Marie-Curie Workshop, 21-22 Oct 2003, DG-JRC
Institute for Energy, Petten, The Netherlands

Effect of temperature cycling on the microstructural stability of whey protein stabilised oil-in-water emulsions
Sotirios Kiokias, Christel K. Reiffers-Magnani and Arjen Bot
Unilever Research and Development Vlaardingen,
Olivier van Noortlaan 120, NL-3133 AT Vlaardingen, The Netherlands.
I. INTRODUCTION
Oil-in-water food emulsions, like milk, cream, and cream cheese, are usually formed by high-pressure homogenisation. During homogenisation, the coarse droplets of the pre-emulsion are stretched and broken up, and protein from the serum phase adsorbs and prevents re-coalescence of the newly formed droplets, so that an emulsion that is kinetically stable over longer periods is obtained [1].
The paper will focus on vegetable-fat based whey protein stabilised emulsions prepared according to various processes, and their stability during chilled storage (at 5°C) and repeated temperature cycling (3 times between 5 and 25°C). Besides effects of denaturation and/or acidification, the influence of droplet size of the dispersed phase on emulsion stability will be investigated as well.
II. EXPERIMENTAL
Model o/w emulsions were prepared from mixtures of: 30% lipid phase, consisting of either partly crystalline vegetable fat (2.4% solid fat content at 25°C, 14% at 20°C, 73% at 5°C) or liquid vegetable oil; 4% of a commercial mostly native whey protein concentrate (Nutrilac QU7560, ex Arla, powder containing 75% protein, of which ~10% denatured); 0.1% potassium sorbate as a preservative under acidic conditions; demineralised water. First, a premix was prepared at 50°C. Next, the mix was either held at 50°C and homogenised ('native'), or heated to 85°C and homogenised ('pre-heated'). Heating and holding steps took 20 min. Homogenisation was performed using an APV Lab1000 homogeniser, one-stage at 300 bar by default, and was always preceded by turraxing the mixture at 8000 rpm for 2 min. When indicated, the homogenisation pressure was varied over the range 0-300 bar. Samples were either kept at pH 6.8 ('neutral') or acidified to pH 4.5 ('acidified') using a 50% citric acid solution in demineralised water. Emulsions were filled into 100 ml tubs (6.5 cm diameter), sealed and stored at 5°C until further analysis. The samples cooled down in the fridge in 1-2 h.
Droplet size was studied by means of pulsed field gradient NMR (pfg-NMR) [2,3]. Changes in firmness were evaluated in terms of the force required to penetrate a cylindrical rod into the emulsion gel.


III. RESULTS
Current investigations focussed on emulsion stability during chilled storage and temperature cycling, conditions that simulate consumer use of fresh cheese-type products. It is known that temperature abuse can result in quite dramatic changes in emulsion properties such as firmness and droplet size, especially in products prone to partial coalescence.
Droplet size during chilled storage was found to be quite stable over a period of 60 days. A number of emulsions were subjected to repeated temperature cycling between 5 and 25°C, a temperature range over which incomplete melting of the crystalline fat in the emulsion occurs and the remaining fat can act as seeding crystals for crystallisation during the subsequent cooling stage. The results show that both the native neutral emulsions and the pre-heated acidified reference emulsion based on sunflower oil remain stable with respect to droplet size during temperature cycling. For the native acidified and both pre-heated emulsions, d3,2 values steadily increase as cycling proceeds, with the pre-heated neutral sample showing the highest degree of destabilisation. Relative increases in droplet size for both types of storage are plotted in Figure-1.
[Figure (bar chart, not reproduced): % increase of droplet size from baseline (0-180%) for sunflower oil, native neutral, native acidified, heated neutral and heated acidified emulsions, after chilled storage (0-60 days) and temperature cycling (0-3 cycles).]
Figure-1. Relative change of droplet size for emulsions prepared with partly crystalline fat under different processing conditions (plus a liquid-oil emulsion as a reference), after 60 days of chilled storage at 5°C, or repeated temperature cycling (3 cycles) between 5 and 25°C. Results are given as: (final d3,2 - initial d3,2)*100/initial d3,2.
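The d3,2 used throughout is the Sauter mean diameter, the ratio of the third to the second moment of the droplet-size distribution, and Figure-1 plots its relative change. A sketch of both calculations, with illustrative droplet-size data (not the measured distributions):

```python
# Sauter mean diameter d3,2 = sum(n_i * d_i**3) / sum(n_i * d_i**2),
# and the relative change plotted in Figure-1. Droplet data are illustrative.
def d32(diameters, counts):
    """Sauter mean diameter of a discrete droplet-size distribution."""
    num = sum(n * d**3 for d, n in zip(diameters, counts))
    den = sum(n * d**2 for d, n in zip(diameters, counts))
    return num / den

def relative_change(initial_d32, final_d32):
    """(final d3,2 - initial d3,2) * 100 / initial d3,2, as in Figure-1."""
    return (final_d32 - initial_d32) * 100.0 / initial_d32

# Illustrative size classes (micrometres) before and after cycling:
d = [0.5, 1.0, 2.0, 4.0]
before = d32(d, [500, 300, 100, 10])
after = d32(d, [400, 280, 130, 40])
print(round(relative_change(before, after), 1))
```

Because d3,2 weights large droplets heavily, even a modest shift of counts towards the largest class produces a clear increase.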

Samples that show a clear change in droplet size during temperature cycling show an increase in firmness as well. The changes in both droplet size and firmness over the three temperature cycles were found to be much bigger than those observed over 30 days of chilled storage.
More detailed investigations were made on the heated acidified emulsions, which are structurally closer to the commercial products. Figure-2 shows that the droplet size decreases and the firmness increases in these model emulsions with increasing homogenisation pressure in the range 0-300 bar. Upon temperature cycling, a combined increase in droplet size and firmness is observed with each temperature cycle, except for the sunflower oil based sample. This points towards a mechanism for the firmness increase upon cycling that is qualitatively different from coalescence, most likely partial coalescence. Interestingly, the relative changes in droplet size are largest for the smallest droplet sizes, whereas the effects on firmness are largest for the largest droplet size.

[Figure (plot, not reproduced): firmness (g, 0-700) versus droplet size d3,2 (µm), with points for the start and cycles 1-3.]
Figure-2. Relation between firmness and droplet size d3,2 and its change during three subsequent temperature cycles between 5 and 25°C, for pre-heated acidified emulsions prepared with partly crystalline fat. Different initial droplet sizes were obtained by varying the homogenisation pressure in the range between 0 and 300 bar. The arrow pointing down symbolises the effect of droplet size without cycling; the arrows pointing upward indicate the effect of repeated temperature cycling. Solid lines are meant to guide the eye.

IV. DISCUSSION - CONCLUSIONS
Factors that affect the sensitivity of certain emulsions to partial coalescence include, first and foremost, the required presence of partly crystalline fat in the emulsions, as the data clearly show that cycling instabilities do not occur in emulsions based on sunflower oil. Most probably, emulsions are destabilised during temperature cycling as a result of incomplete melting of fat crystals upon heating, followed by recrystallisation when cooling the sample to the original temperature. Under such conditions, re-crystallising fat often forms the more thermodynamically favoured larger crystals, which destabilise the emulsion, e.g. by protruding through the interfacial layer of the emulsion droplets.


The data suggest that the presence of sufficient amounts of native whey protein at the moment of homogenisation is an overriding factor determining the cycling stability of the emulsion. This explains the stability of the native neutral emulsion. Denaturation of whey protein results in the formation of small aggregates, a process often used in the so-called cold gelation of whey proteins at lower pH. These small aggregates are apparently less efficient in stabilising the emulsion against temperature cycling. Incidentally, it should be noted that the inhomogeneity of the interfacial layer is apparently crucial, since there is no relation between the amount of protein associated with the fat phase and the stability of the emulsions. Droplet size was found to have a clear effect on the degree to which these emulsions suffer from coalescence. In general, smaller droplets lead to bigger absolute changes, though this may not be the case if one considers relative changes.
In conclusion, it was demonstrated that partial coalescence may occur in relatively protein-rich emulsions that are quite different from the emulsions containing low-molecular-weight emulsifiers traditionally studied in this context. In particular, it has been shown that protein-stabilised emulsions in which the protein functionality has been impaired through heat treatment or acidification are sensitive to this type of partial coalescence.
Acknowledgement: SK would like to acknowledge the EU Marie Curie program for
financial support under contract number HPMI-CT-2001-00113.
V. REFERENCES
1. A. Bot, E. Flöter, J. G. Lammers and E. G. Pelan, Controlling the texture of spreads, in Texture in Food, volume 1: Semi-solid Foods, ed. B. M. McKenna, Cambridge: Woodhead Publishing, 2003, p. 350-372.
2. G. J. W. Goudappel, J. P. M. van Duynhoven and M. M. W. Mooren, Measurement of oil droplet size distributions in food oil/water emulsions by time-domain pulsed-field gradient NMR, J. Colloid Interface Sci., 2001, 239, 535-542.
3. S. Kiokias, A. A. Reszka and A. Bot, The use of static light scattering and pulsed-field gradient NMR to measure droplet sizes in heat-treated acidified protein-stabilised oil-in-water emulsion gels, Int. Dairy J., 2004, in press.


Investigating sensing techniques for tomorrow's automotive industry
Alexandru Opran, Texas Instruments Holland BV, Almelo, The Netherlands

Abstract
The automotive industry is currently busy investigating new technologies in order to ensure the integration of new concepts and systems in automobiles. The paper presents some general aspects of sensing systems in relation to the requirements that these must comply with in order to be accepted as viable solutions.
1. Introduction
The automotive industry is considered to be one of the most dynamic industries, and it is one of the biggest beneficiaries of developments in other fields. One important trend in the last decade has been the development of advanced electronic control systems, able to interpret and process data faster and more accurately. Combined with a global tendency toward miniaturization, this trend has resulted in packages of control systems and features, which translate for the final customer into increased customization and added-value packages.
Together with new safety and environmental regulations, these developments are noticeable in today's automobiles. Systems such as the Antilock Brake System (ABS), Adaptive Cruise Control (ACC) or the Electronic Stability Program (ESP) are nowadays standard in the production of new vehicles and are an integral part of what is called Dynamic Driving Control.
The transition from mechanical systems to mechatronic ones has resulted in a high demand for interfacing devices. Analog signals must be interpreted; force, pressure and position must be measured; corrosion and fatigue must be detected, so that a control system issues the best response for every individual situation. Sensors enable us to translate different types of measurands into the language of microcontrollers and power circuitry, and the need for such devices cannot be questioned at the present moment.

[Figure (block diagram, not reproduced): Measurand → Sensor/Transducer → Signal conditioning → Indicator/Recorder/Processor/Controller, with a calibration input and optional power supplies.]
Figure 1. Traditional sensing system

In a classic and non-specific approach, sensing systems can be described with reference to Figure 1. Due to some factors that have already been mentioned, and to industry-related factors such as the application of cost-reducing techniques and the drive to improve response time, these sensing systems are becoming more complex and richer in features.

Modern sensing systems are moving toward incorporating all the necessary modules in one unit, so that the only external connection the sensing system has is with the data bus of the microcontroller responsible for that control system. At this pace, it will not be long before we see sensors with integrated Bluetooth capability, so that no wiring will be needed.

[Figure (block diagram, not reproduced): Sensing method → Signal conditioning/amplification → ADC → Memory and Controller → data → Control system.]
Figure 2. Modern sensing systems [i]
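The ADC stage in Figure 2 maps the conditioned analog signal onto a digital code that the controller can process. A minimal sketch of an ideal uniform quantiser; the reference voltage and resolution below are illustrative, not taken from any particular device:

```python
# Ideal n-bit ADC transfer function for the digitisation stage of Figure 2:
# map an input voltage in [0, v_ref) to an integer code in [0, 2**n - 1].
def adc_code(v_in: float, v_ref: float = 5.0, n_bits: int = 10) -> int:
    """Quantise v_in against full-scale v_ref with n_bits resolution."""
    v = min(max(v_in, 0.0), v_ref)        # clamp to the input range
    code = int(v / v_ref * (2**n_bits))   # ideal uniform quantiser
    return min(code, 2**n_bits - 1)       # full-scale input saturates

print(adc_code(2.5))   # mid-scale input maps to the middle of the code range
```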

2. Main technologies
It is not the purpose of this paper to enumerate all the technologies that are used today in sensing systems, since such an enumeration would do little more than emphasise the effort put into finding solutions to an increasing number of requirements. Instead, the intention is to show that for each situation there exists a range of technologies that can be employed, and the suitability of each of them is determined by the specifications and requirements of each case, combined with the existing know-how for that specific technology.
Capacitive sensing
Capacitance is known as the measure of the ability of an object to store electrical charge. Generally this object consists of two plates (electrodes) separated by an air gap of known width (see Figure 3). The movement of one electrode in some direction causes a change in capacitance which can be easily measured. This technology is successfully employed in force, pressure, position and vibration sensing.
[Figure (diagram, not reproduced): Electrode A and Electrode B separated by a known air gap.]
Figure 3. Capacitive sensing
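For the parallel-plate geometry of Figure 3, capacitance follows C = ε0·εr·A/d, so any change in the gap d is directly measurable as a change in C. A sketch with illustrative plate area and gap values (not specifications of any real sensor):

```python
# Parallel-plate capacitance C = eps0 * eps_r * A / d for the geometry of
# Figure 3; moving electrode B changes the gap d and hence the capacitance.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """Capacitance in farads of two plates separated by gap_m (air: eps_r=1)."""
    return EPS0 * eps_r * area_m2 / gap_m

# Illustrative sensor: 1 cm^2 plates, 10 um nominal air gap.
c_nominal = capacitance(1e-4, 10e-6)
c_displaced = capacitance(1e-4, 9e-6)    # electrode moved 1 um closer
print(c_nominal, c_displaced - c_nominal)
```

The nonlinear 1/d dependence is why capacitive position sensors are most sensitive at small gaps.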

Resistive sensing
Resistance is known as the measure of the opposition an object exerts to the flow of electric current. A change in resistance can be easily detected, and starting from this idea a number of sensing solutions have appeared, such as strain gauges (in which a defined change in resistance is caused by a stress applied to a backing material) or magneto-resistive sensing (where certain metals change their resistance when exposed to magnetic fields).
[Figure 4. Strain gage [ii] (image not reproduced)]
The availability of different methods for varying a resistance is exactly the advantage offered by this sensing principle, so the application domain is large, from force and pressure sensing to position and current sensing.

[i] Randy Frank, Understanding Smart Sensors
[ii] www.sensorland.com
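The strain-gauge principle described above is usually expressed as ΔR/R = GF·ε, where GF is the gauge factor and ε the applied strain. A sketch with illustrative values (a 350 Ω foil gauge with GF ≈ 2 is a common textbook example, not a figure from this work):

```python
# Strain-gauge relation dR/R = GF * strain: the fractional resistance change
# is the gauge factor times the applied strain. Values are illustrative.
def resistance_change(r_nominal: float, gauge_factor: float,
                      strain: float) -> float:
    """Absolute resistance change (ohms) of a bonded strain gauge."""
    return r_nominal * gauge_factor * strain

# 350-ohm metal-foil gauge (GF ~ 2) under 1000 microstrain:
dr = resistance_change(350.0, 2.0, 1000e-6)
print(dr)   # roughly 0.7 ohm
```

Such sub-ohm changes are why strain gauges are almost always read out with a Wheatstone bridge rather than a direct resistance measurement.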
Eddy current sensing
An eddy current sensing solution involves an excitation coil used for creating eddy currents in a probe (see Figure 5), and a detecting coil (or Hall sensor) that picks up the intensity of these created eddy currents by sensing the magnetic fields present. Changes in the structure of the probe result in disturbances of the magnetic fields created by the eddy currents. The advantage offered by this technology with respect to other sensing methodologies is mainly the non-intrusive manner in which defects and cracks in the probe can be detected.
[Figure 5. Eddy currents in a probe [iii] (image not reproduced)]

3. Sensing in automotive products
It has already been mentioned that the automotive industry is a very dynamic one, and this is especially emphasized by recent trends not only in engine, chassis and vital-equipment development, but also in entertainment and driving-support systems. In addition, there is an ongoing effort to find the best solution for replacing fossil fuels with environmentally friendly fuels, without affecting the functional characteristics. This has resulted in a significant demand for sensors with different applications, as can be seen in Figure 6.
[Figure 6 (image not reproduced). Sensors needed in a car (the particular car shown is of no special significance).]

This large demand has its own characteristics. The control systems which need sensors are considered hard real-time systems, since they are required in the driving process, but some of them could be considered soft real-time systems up to a certain point (such as oxygen sensors). Given this, the functional characteristics must be perfectly met in any possible situation. No matter what temperature is reached, no matter what special situation occurs, no matter how much time has passed since its production, the sensor must behave as desired, which means that the requirements are usually stricter than usual and involve a lot of trials and research until the best solution is found.

[iii] NDT Resource Center, Iowa State University
Requirements such as very small hysteresis behavior, 1% full scale error, maximum 1%
deviation over 15 years of operation and high sensitivity are some of these
characteristics which are needed. Apart from this, dimensional and weight requirements
are imposed such that the needed information is obtained without affecting the
constructive characteristics.
Nonetheless, the price of the final product should not be forgotten. Every additional feature means extra materials and parts, resulting in an increased final price. This is generally not something that needs to be argued about.
4. Conclusions
Rapid developments in more than one engineering discipline have resulted in a high demand for sensors, with strong limitations imposed on functionality. Solutions can be offered by improving current technologies or finding appropriate new technologies.
The paper does not intend to show or present sensors for the automotive industry, but to sketch the framework in which the demand is created and the offer meets the demand. Effective publication of the research results will be made as soon as the product is protected by patents and the respective information is no longer considered sensitive.
Acknowledgements
The research is financially supported by the European Commission, by means of a Marie Curie Industry Host Fellowship. Many thanks also to Texas Instruments Holland BV - Almelo for creating such a good working environment.


Implementation of -omics tools in the search for improved


Aspergillus niger enzyme producers
Lucie Pařenicová, Noël van Peij, Hans Roubos, Hildegard Menke and Hein Stam
DSM Food Specialties, Department of R&D Genetics, P.O.Box 1, 2600 MA Delft, The
Netherlands
Background:
DSM (Fig. 1) is a highly integrated company active in the field of life science products, performance materials and industrial chemicals. Currently DSM is active in more than 150 countries and employs over 20,000 people. Founded as a coal-mining company in 1902, it developed into a performance materials and fine chemicals producer in the 1980s. From the 1990s, DSM has experienced fast growth in the field of Life Sciences (which made up 38% of the business in the year 2002). DSM Food Specialties (DFS) is one of the 13 business groups of DSM. DFS is active in the production and marketing of value-added solutions in food ingredients and nutritional products.

Fig. 1. Aerial view of the DSM-Gist plant in Delft, the Netherlands. [Photo not reproduced.]

Aim of the project:


Aspergillus niger is the main microorganism used for enzyme production within DFS. The
aim of this research-training project is the utilization of modern genomics and proteomics
technologies in order to rationalize the strain improvement program for enzyme production
using Aspergillus niger as a host. So far, a number of important factors and processes
hampering efficient protein production have been identified. Examples are the promoters
driving the expression of desired genes, mRNA (in) stability, the secretion pathway of the
fungus (comprising ER-targeting, protein folding and maturation, (Golgi) transport and
secretion), use of a carrier protein, and (intra- and extra-cellular) proteolytic degradation.
Proteases clearly cause significant losses in homo- and heterologous protein production. The main focus of this project is therefore the expression of the spectrum of proteases in Aspergillus, with the aim of improving protein yields. Since DSM has sequenced the complete genome of Aspergillus niger, DNA micro-arrays (Affymetrix DNA chips) and proteomics will be used to study the expression profiles of genes and proteins of strains producing homo- or heterologous proteins under controlled fermentation conditions. Knowledge of the (differential) expression will be used to construct new Aspergillus strains by modulating the expression of specific genes. Alternatively, process improvement can be
applied based on knowledge obtained from the expression profiling experiments. This
approach should provide information on possible bottlenecks in (heterologous) protein
production and enable us to increase protein yields in Aspergillus niger fermentations by
intervening in the proteolytic system.


Introduction:
In the past decades, classical strain improvement (CSI) programmes were performed within DSM (formerly Gist-brocades) in order to improve the production capacity of different microorganisms, amongst which fungi such as Penicillium (for the production of antibiotics) and Aspergillus (for the production of enzymes). The current DSM Aspergillus production strains originate from the GAM (GlucoAMylase) strain lineage. This lineage was developed during the CSI programme from the natural isolate A. niger NRRL3122 in a selection for a high glucoamylase (glaA) producer (see Fig. 2) [1].
The final GAM strain acquired additional copies of the glaA gene and was used for:
(i) transformation with a gene of interest by random integration in the genome, which led to several enzyme producers such as phytase- and xylanase-producing strains (see Fig. 2);
(ii) manipulations using recombinant DNA technologies, which led to the development of the DESIGN & BUILD concept, in which several copies of a gene of interest are targeted into a defined locus (see Fig. 2 and [1]).
Although the CSI programme was successful in obtaining high yields of homologous proteins such as glucoamylase, phytase and xylanase, heterologous proteins are generally produced in lower concentrations. The deletion of the major extracellular acid protease-encoding gene (see Fig. 2) further improved the genetic background of the production strains; however, it did not lead to yields of heterologous proteins comparable with those obtained by homologous enzyme producers. Aspergillus spp. are able to secrete tens of grams per litre of homologous proteins, whereas production of heterologous

1 x gla

NRRL3122

3gla Pgla- geneX


3
The scheme of

3gla amdS 3gla

an overexpression construct for

transformation of A. niger according to DESIGN & BUILD

UV mutagenesis
and selection for
improved isolates

> gla

Recombinant DNA
technology
DESIGN & BUILD
Approach

GAM 1

DS25956
phytase
Enzyme producers

GAM
GAM ?

Transformation by
random integration

gla

GAM ? gla
? pep

DS34553
pmeA

approach

DS35496
porcin PLA2
e

Comercial
products

DS16813
xylanase

Natugrain
Wheat

Enzymes producers

DS34552
abf A

Transformation by
targeted integration
into ? gla loci

DS35387
phytase
Homologous
proteins
Heterologous
proteins

DS38163

xea

Natophos

DS27301
phytase

Fig. 2. The scheme of the current A. niger producer line


development [1].

proteins often results in much lower yields [2].
As one of the world's leading companies in the production of food and feed ingredients, DSM entered the genomic era by sequencing the complete genome of an Aspergillus strain. The sequencing was finalized in 2001. In Fig. 3, a short summary of the A. niger
[Fig. 3 (table): Aspergillus niger genome data. Strain: descendant of NRRL3122; size: 35.9 Mb; GC content: 52%; number of genes: 14,164 (2003). Categories of annotated genes include cellular organization (3270), metabolism (3027), transcription (966), translation (794), cell growth (746), energy (349), ionic homeostasis (175), transposable elements (54), as well as cell rescue/death, cell communication, cellular biogenesis, protein destination, transport facilitation and transport mechanisms.]
Fig. 3. Overview of the A. niger strain genome.


genome is presented. In line with the acquired genome sequence of A. niger, DSM designed and obtained A. niger Affymetrix DNA chips. For each annotated gene, at least 12 specific oligo probes are present on the GeneChip. In this manner, the whole A. niger transcriptome of different protein-producing strains can be monitored during fermentation. Similarly, changes in the proteome and metabolome can be followed using the recently built in-house proteomic and metabolomic facilities.
With the long expertise from the CSI programme and the recently obtained modern -omics tools, the goal of rationalizing the strain improvement programme (genetic manipulation of strains, fermentation conditions) can now be achieved.

Experimental design:
In order to detect the factors involved in proteolytic degradation of homo- and heterologous proteins during A. niger fermentation, a number of A. niger enzyme producers will be grown under defined fermentation conditions. Samples for extraction of RNA for DNA chip hybridization, for measurements of the enzymatic activities and for monitoring of the proteolytic spectra will be taken at various time points. By combining data from the transcriptome analyses, the enzymatic measurements and the protease spectra analyses, time points can be identified at which proteolysis is initiated. From the transcriptomic data, the genes differentially expressed at these time points can be identified. Using this approach, new target genes can be found, and their effect on protein production can be tested by their deletion or overexpression in an A. niger strain.
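The last step above, identifying differentially expressed genes at the time point where proteolysis starts, can be sketched as a simple log2 fold-change filter between two time points. The gene names below are real A. niger genes, but the expression values and the threshold are hypothetical, for illustration only:

```python
# Flag differentially expressed genes between two fermentation time points
# using a log2 fold-change threshold. Expression values are hypothetical.
import math

def differential_genes(expr_t1: dict, expr_t2: dict,
                       threshold: float = 1.0) -> dict:
    """Return genes whose |log2(t2/t1)| meets or exceeds the threshold."""
    hits = {}
    for gene, level_t1 in expr_t1.items():
        level_t2 = expr_t2[gene]
        lfc = math.log2(level_t2 / level_t1)
        if abs(lfc) >= threshold:
            hits[gene] = round(lfc, 2)
    return hits

t1 = {"pepA": 100.0, "glaA": 5000.0, "pepB": 80.0}
t2 = {"pepA": 450.0, "glaA": 5200.0, "pepB": 30.0}
print(differential_genes(t1, t2))
```

In practice, array analyses would add replicate-based statistics on top of such a fold-change cut-off, but the principle of ranking protease genes against the production gene is the same.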
References:
[1] van Dijck, P.W.M., Selten, G.C.M., and Hempenius, R.A. (2003). Regul. Toxicol. Pharmacol. 38, 27-35.
[2] Gouka, R.J., Punt, P.J., and van den Hondel, C.A. (1997). Appl. Microbiol. Biotechnol. 47, 1-11.


Modeling Predictable Multiprocessor Performance for Video Decoding
Milan Pastrnak [1,2]* and Peter Poplavko [1,3]
[1] Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
[2] LogicaCMG Eindhoven / [3] Philips Research Labs Eindhoven
{M.Pastrnak, P.Poplavko}@tue.nl

Abstract - This work addresses the implementation of decoding an MPEG4 bitstream with multiple arbitrarily shaped video objects (AS-VOs). Such multimedia applications pose challenging requirements on embedded systems design with respect to compositionality, scalability, and predictability in order to meet real-time constraints. Multiprocessors based on networks-on-chip (MP-NoC) are appropriate platforms that satisfy these requirements [3]. The workload of AS-VO decoding changes dynamically at run-time. To control the system resources, it is favorable to have a workload estimate for one video frame (or VOP) before the frame (or VOP) is decoded. To make this task easier, the encoder puts a few complexity parameters in the VOP header. For a single-processor implementation, linear complexity functions can be used to obtain the workload [6, 2]. Preliminary results show that with full a-priori knowledge of the input bitstream, the model is accurate within 6% on average [6]. For a multiprocessor implementation, we extend these models to parametrical IPC graphs [7]. In this case, the same accuracy is possible, but hardly feasible at run-time due to coding efficiency reasons. In [7], a feasible approach has been proposed, which yields a safe upper bound on the workload, but at the price of a high error. We discuss the current status of our modeling approach.
Keywords: network-on-chip; system-on-chip; timing model; performance evaluation; resource estimation; real-time; data-flow graph

* Supported by the European Union via the Marie Curie Fellowship program under the project number HPMI-CT-2001-00150.

INTRODUCTION
In order to run real-time multimedia applications on
an on-chip multiprocessor system, techniques have to be
developed to control the resource usage by those
applications on the multiprocessor platform.
To tackle the interactivity and dynamism of
applications, a real-time environment is required to
coordinate multiple jobs on the set of processors.
Informally, a job is an activity started and stopped by
some unpredictable run-time events, which can come
from user actions (e.g., pushing a button) or from
changes in the video scene. We currently assume that
each job is assigned to decode one video object. We
also assume that each running job is invoked regularly.
At each invocation, it has to produce a certain number of
output data samples (macroblocks or MBs) that
constitute together one frame (video object plane, or
VOP). A real-time control hierarchy must ensure that
the complete application (the set of active jobs) provides
output with the best quality that can be achieved while
meeting the timing constraints [9].
Complex multimedia applications pose challenging
requirements on embedded systems design with respect
to compositionality and scalability. Moreover, the
authors believe that, in order to meet the real-time
constraints at low cost, predictable hardware behavior
is necessary. We study a design of multiprocessor
network-on-chip (MP-NoC), which intrinsically
supports these requirements [1, 3].
The multiprocessor workload of each job and its
communication bandwidth requirements may change
over time, e.g., when the dimensions, shape and texture
of the corresponding video object change. The real-time
control hierarchy should track these changes in a timely
manner and adapt to them at run-time, e.g., by allocating
more processors to the job or by reducing the amount of
computations at the expense of visual quality [9].
In this paper, we present an overview on the use of
parametrical inter-process communication (IPC) graphs
[7, 8] as models allowing estimation of the workload of
the multiprocessor jobs. We develop IPC graph models
for the decoding of arbitrarily shaped objects, as defined
in the MPEG4 standard.
This paper is organized as follows. Section I gives
background information about arbitrarily shaped
video objects. The next section describes the MP-NoC
platform. In Section III, we briefly present the current IPC
graph model together with results on its accuracy. Section IV
concludes this paper and mentions future work.


I. ARBITRARILY SHAPED VIDEO OBJECTS

The various profiles of MPEG-4 and the flexibility of
its tools offer both digital television at bit rates around
1-2 Mbit/s (e.g. in the AS profile) and mobile
applications at bit rates between e.g. 10-300 kbit/s. To
decode any type of MPEG-4 coded video, a large set
of tools is required. We decided to focus on just one
particular tool in the standard, offering arbitrarily
shaped video object decoding and composition.
In MPEG-4, every video object (VO) is represented in
several information layers, with the Video Object Plane
(VOP) at the base layer. This video object plane is a
rectangular frame composed of a hierarchically lower
unit called a macroblock (MB). An MB is a 16x16 pixel
block that is the smallest data unit of a video object.

[Figure 1: Arbitrarily shaped VOP representation. (a) Arbitrarily shaped VO: a VOP covered by a grid of transparent, opaque and boundary MBs. (b) MB bit stream structure for a VO with shape coding: an MPEG-2 MB carries MV and texture data, an MPEG-4 MB additionally carries shape data.]

Figure 1(a) depicts a typical composition of MBs
within a VOP, representing an arbitrarily shaped video
object. As is visible from Figure 1, even an arbitrarily
shaped (sometimes referred to as non-rectangular) VO
can be covered with a grid of macroblocks. The standard
distinguishes three types of macroblocks: boundary,
opaque, and transparent. Based on the expected
complexity issues, we decided to focus on boundary
MBs. A boundary macroblock is defined as a
macroblock that has both transparent and opaque
pixels. The MBs of a non-rectangular VOP contain coded
shape data on top of texture data. The complete
decoding process of a VOP involves the following parts
for the decoding of each MB:
- shape information of the object,
- motion vector(s) describing the object motion,
- texture information giving the object contents.
In this paper we limit ourselves to intraframe coded
VOPs. The intraframe coded VOP (I-VOP) should have
the fully coded shape and texture parts of the VO. From
earlier experiments with video applications, we know
that a considerable computation load for intraframe
VOPs will occur (despite the many interframe coded
VOPs).

II. MULTIPROCESSOR NETWORK-ON-CHIP

On-chip multiprocessor architectures are important
for the implementation of MPEG-4 (multimedia)
applications [2].

[Figure 1: Architecture with a tile-based MP-NoC. Processing tiles (RISC or VLIW cores with accelerators (acc) and local L1 memory) connect through a communication assist (CA) and a network interface (NI) to the on-chip network services; an off-chip memory tile provides the L2 memory.]
The growing design complexity results in the need for
platforms featuring compositionality, design reuse
and design scalability. The MPEG-4 standard supports
these requirements from the application side. The
network-on-chip design paradigm supports them from
the hardware side [3].
Meeting timing constraints requires predictable timing
in communication. On-chip networks can support
predictable timing by offering a guaranteed-throughput
service [3].

To manage complexity and support predictability, we
use a hierarchical approach in the system architecture
design. Currently, we assume the simplest variant of this
approach, in the form of a so-called tile-based
architecture as depicted in Figure 1.
This type of architecture assumes two levels of
hierarchy. The lower level shows the details of the
processing tiles, and the higher level consists of a set of
tiles connected by the on-chip interconnection network
and a level-2 (off-chip) memory. A processing tile
represents a small self-contained embedded computer,
consisting of one or two embedded CPU cores (e.g.,
RISC), local level-1 memory (denoted as L1 in Figure 1)
and application-specific accelerators. A communication
assist serves in each tile as a gateway from the local,
tile-specific memory system to the standard network
services.
The tile-based hierarchical approach can be seen as a
natural extension of existing video decoders that
handle, with 1 or 2 processors, only one principal
MPEG-4 video object containing the complete picture of
the video stream [2]. To implement, e.g., the MPEG-4
Core profile, multiple smaller video objects can be
handled by running multiple instances of a single-object-decoding job on the replicated processing tiles.
III. WORKLOAD ESTIMATION
In the design of a real-time control hierarchy it would
be favorable to have a method to estimate in advance
how many processor cycles each job consumes from its
processors (job workload).
We propose to construct at design time a parametrical
timing model and then use it at run-time for the
workload estimation. We require the encoder to put the
complexity parameters into the image headers.

A. Design-Time Model Construction

To express the parallelism and the communication of
multiple processes executing the main loop of the job,
we use an inter-process communication (IPC) graph, which is
an instance of a homogeneous dataflow graph model of
computation [5]. As an example, we show in Figure 2 an
IPC graph for intra-frame shape-texture decoding. The
graph is constructed in two major steps, presented
below.

Step 1. Resource Estimation [2, 4, 6]
For each main computation actor (a software routine like
CAD, CBP or VLD) a complexity function is defined
for the given data granularity (e.g., a single macroblock
(MB)). We currently assume that all processors are
ARM7TDMI cores with flat local memory with single-cycle access.
For example, the complexity function of the inverse
quantization actor is estimated by

t_IQ = 1.39K * j + 44 * N_AC,    (1)

where j and N_AC are complexity parameters. In
particular, j is the total number of non-transparent sub-blocks and N_AC is the total number of non-zero AC
coefficients. We have observed that for the ARM7
processor, the complexity functions are accurate within
6% on average, given that all parameters are known
exactly [6].
Note that if all actors are mapped to a single
processor, the total workload is computed by simply
adding the complexity functions together. We again
obtain an accuracy in the order of 6%. For a
multiprocessor job, computing the workload is not so
straightforward.

Step 2. IPC Graph Construction [7, 8]
This step is performed after the actors have been
assigned to processors, and inter-processor data
communication has been assigned to network
connections. For each processor Proc_i, we introduce a
process cycle into the IPC graph. For each connection
Cj, we introduce a subgraph that models the connection.
In Figure 2, we see an IPC graph containing three
process cycles (Proc1, 2 and 3) and three connection
subgraphs (Ca, CDCT and CYUV). See [7] for more
details.

[Figure 2: IPC graph of the MPEG-4 shape-texture decoder of an intra-frame of a video object (or I-VOP) [7]. It shows the process cycles Proc1, Proc2 and Proc3 with the actors CAD, CBP, VLD, IQ, IDCT and IP, coupled by the connection subgraphs Ca, CDCT and CYUV.]

B. Run-Time Workload Estimation

In [7] we propose a way to use the IPC graph to obtain a
workload estimate for the decoding of one image of a
video object, called a video object plane (VOP). One
macroblock (MB) passes all stages of processing in
every iteration of the IPC graph. If the complexity parameters of
all MBs in the VOP are known, the delays of all actors at
each iteration are also known. In this case, a longest-path computation in the IPC graph unfolded multiple times
would yield the job workload with the same accuracy as
the complexity functions (around 6%). However, it is
not feasible to force the encoder to put the complexity
values of all MBs into the header.
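As an illustration of the longest-path idea (not the actual unfolding and cyclic-graph analysis of the parametrical IPC graphs in [7]), the critical path of a delay-annotated DAG can be computed as follows; the graph layout and delay values below are hypothetical.

```python
from functools import lru_cache

def critical_path_cycles(successors, delay):
    """Length (in cycles) of the longest path through a DAG.
    successors: dict node -> list of successor nodes.
    delay: dict node -> execution time of the node in cycles."""
    @lru_cache(maxsize=None)
    def longest_from(v):
        # longest path starting at v = own delay + best continuation
        nxt = successors.get(v, [])
        return delay[v] + (max(longest_from(u) for u in nxt) if nxt else 0)
    return max(longest_from(v) for v in delay)

# Hypothetical 4-actor example: VLD feeds IQ and IDCT, which join in IP.
succ = {"VLD": ["IQ", "IDCT"], "IQ": ["IP"], "IDCT": ["IP"]}
cyc = {"VLD": 1000, "IQ": 400, "IDCT": 3300, "IP": 700}
# critical path: VLD -> IDCT -> IP = 5000 cycles
```

On the real IPC graph this computation must additionally respect the process cycles and buffer edges, which is what makes the exact run-time evaluation expensive.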
To tackle this problem, we propose in [7] a scenario
approach. All MBs of the VOP are divided into several
scenarios. For the shape-texture decoding we have
identified three scenarios: transparent MB, boundary
MB and opaque MB. For each scenario, we require
the encoder to put in the header one set of complexity
parameters which characterizes all MBs that belong to
that scenario (the characterization set). In addition, we need
two extra parameters, namely Js, the total number of
MBs in scenario s, and Ls, the number of transitions to
scenario s from other scenarios.
We estimate the job workload on all processors at run-time as follows [7]:

workload = Σ_s (λ_s · J_s + σ_s · L_s),    (2)

where λ_s and σ_s (throughput and lateness) are certain properties
of the IPC graph in scenario s, which can be computed
by applying fast graph analysis algorithms.
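A minimal sketch of evaluating Eq. (2), assuming the per-scenario throughput and lateness values have already been obtained from the graph analysis; the numbers below are made up for illustration.

```python
def estimate_workload(mb_scenarios, lam, sig):
    """Evaluate workload = sum_s (lambda_s * J_s + sigma_s * L_s).
    mb_scenarios: scenario label of every MB in decoding order.
    lam[s], sig[s]: throughput and lateness of the IPC graph in scenario s.
    J_s counts the MBs per scenario; L_s counts entries into scenario s
    (here the first MB is also counted as a transition, an assumption)."""
    J, L, prev = {}, {}, None
    for s in mb_scenarios:
        J[s] = J.get(s, 0) + 1
        if s != prev:
            L[s] = L.get(s, 0) + 1
        prev = s
    return sum(lam[s] * J[s] + sig[s] * L[s] for s in J)
```

At run-time only J_s, L_s and the characterization sets come from the VOP header; λ_s and σ_s are design-time products of the IPC graph analysis.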
The desirable properties of a workload estimate are
safety and tightness. Safety means that there is
enough confidence that the real workload will not
exceed the estimated value. Tightness means that the
estimated value is not too pessimistic.
For safety reasons, we proposed in [7] to
characterize each scenario with the maximum values of
each parameter over all MBs belonging to the scenario.
However, this safety comes at the cost of tightness: in
[7], we evaluated (2) for an I-VOP of a test bitstream
and observed a 55% overestimation of the real workload.
In future work, to improve this result, we will try to
exploit the smoothing effect that large FIFO buffers
and larger data granularities have on the workload
variations, and to use a characterization set which lies
between the maximum and the average set of parameters.
IV. CONCLUSIONS AND FUTURE WORK
This paper gives an overview of our work on modeling
the performance of dynamic video-decoding
applications. We assumed multiprocessor networks-on-chip
as the target platform, because they meet several
important requirements of multimedia application
design. The proposed models capture both computation
and communication within a dynamic video-decoding
job. The models can be used for run-time workload
tracking in the real-time control hierarchy of the
platform. Currently we can obtain a safe, but not tight
(55% or more overestimation), estimate of the workload.
In future work, we will develop more elaborate
models and estimation methods for the chosen
application driver (the shape-texture decoder). We will also
investigate the possibilities for a real-time control
hierarchy, like the one proposed in [9].
ACKNOWLEDGEMENT
The authors are grateful to Marco Bekooij, Philips
Research Labs Eindhoven, for some valuable feedback
concerning the modeling approach.
REFERENCES
[1] M. Bekooij, O. Moreira, P. Poplavko, B. Mesman, J. van Meerbergen, M. Duranton, and L. Steffens, Predictable Embedded Multiprocessor System Design, Proceedings Philips Conference on DSP, 2003.
[2] M. Berekovic, H.-J. Stolberg, and P. Pirsch, Multicore System-On-Chip Architecture for MPEG-4 Streaming Video, IEEE Transactions on Circuits and Systems for Video Technology, pp. 688-699, Vol. 12, No. 8, August 2002.
[3] K. Goossens et al., Guaranteeing the quality of services in networks on chip, in Networks on Chip, edited by A. Jantsch and H. Tenhunen, Kluwer Academic Publishers, pp. 61-82, 2003.
[4] R. Lauwereins, M. Engels, M. Ade, and J.A. Peperstraete, Grape-II: A System-Level Prototyping Environment for DSP Applications, IEEE Computer, pp. 35-43, Vol. 28, No. 2, February 1995.
[5] E.A. Lee and D.G. Messerschmitt, Static Scheduling of Synchronous Data Flow Programs for Digital Signal Processing, IEEE Transactions on Computers, pp. 24-35, Vol. 36, No. 1, 1987.
[6] M. Pastrnak, P. Poplavko, P.H.N. de With, and J. van Meerbergen, On Resource Estimation of MPEG-4 Video Decoding for a Multiprocessor Architecture, Proceedings of PROGRESS 2003.
[7] P. Poplavko, M. Pastrnak, J. van Meerbergen, and P.H.N. de With, Mapping of an MPEG-4 Shape-Texture Decoder onto an On-chip Multiprocessor, Proceedings of PRORISC 2003, http://www.ics.ele.tue.nl/~epicurus
[8] P. Poplavko, T. Basten, M. Bekooij, J. van Meerbergen, and B. Mesman, Task-level Timing Models for Guaranteed Performance in Multiprocessor Networks-on-Chip, Proceedings of ACM CASES'03, pp. 63-72, Oct. 30-Nov. 1, 2003.
[9] C.M. Otero Pérez, L. Steffens, P. van der Stok, S. van Loo, A. Alonso, J.F. Ruiz, R.J. Bril, and M.G. Valls, QoS-Based Resource Management for Ambient Intelligence, in Ambient Intelligence: Impact on Embedded-System Design, edited by T. Basten, M. Geilen, and H. de Groot, Kluwer Academic Publishers, 2003.


Analysis of flavour compounds in complex matrices using combined GPC-GC. Part 1: Off-line combination

N. Pineiro, G. A. Jongenotter, H. Steenbergen, M. Batenburg and H.-G. Janssen
Unilever R & D Vlaardingen, Olivier van Noortlaan 120, 3133 AT Vlaardingen, The Netherlands.
1. Introduction
Taste and flavour are important issues in food products and, as a consequence, in the
food industry. Therefore, the development of methodology for the isolation and analysis
of flavour/aroma compounds is an important task. This is especially true when fatty food
samples are involved, since the methods for these samples are normally time- and
effort-consuming and do not allow a high throughput. Simple solvent extracts, for example,
always contain significant levels of fat residue, hampering GC analysis.
In this work, we report on the development and application of a SEC-GC system for flavour
analysis in fatty samples. The method is based on isolation of the aroma compounds
from the triglycerides using Size Exclusion Chromatography (SEC). Large-volume on-column injection Gas Chromatography (LVI GC) was used with the aim of obtaining a high
sensitivity and avoiding pre-concentration steps of the extract, thus simplifying the sample pre-treatment procedure.
2. Experimental
i) Chemicals
Dichloromethane was of HPLC grade (Merck) and freshly distilled prior to use.
Flavour compounds spanning a wide range of volatilities, polarities and stabilities were
used for the evaluation of the method. Stock solutions of the flavour model compounds,
obtained from various suppliers, were prepared in dichloromethane.
Rapeseed oil was used as the fatty matrix.
ii) Equipment/Apparatus
GPC
The GPC system consisted of a Waters 510 pump, a six-port injection valve, a PLgel 50
(5 µm), 300 x 7.5 mm I.D. column (Polymer Laboratories) and a variable-wavelength UV
detector (Thermo Separation Products). The mobile phase was dichloromethane at a
flow rate of 1 mL/min. The injection volume was 1 mL and UV detection was
carried out at 254 nm and 230 nm for flavour compounds and triglycerides,
respectively.
GC
The fat content was determined using a GC 2010 with an AOC-5000 autosampler
(Shimadzu). The column was a DB-5HT non-polar column, 4 m x 0.32 mm i.d. with a
film thickness of 0.1 µm (J&W Scientific). The system was operated at a constant flow
rate of 4 mL/min (helium). Injection port and FID temperatures were 400 °C and 420 °C,
respectively. The oven temperature was programmed from 130 °C to 360 °C (1.85 min
hold) at a rate of 200 °C/min.
Flavour analysis was performed using a GC 8000 Top Series (CE Instruments) GC
equipped with an on-column injector, an electronically controlled carrier gas supply (EL
980 Control Unit), a heated early solvent vapour exit (SVE), an autosampler for liquid
samples (AS 8000) and an FID detector. Solvent evaporation and solute
preconcentration (desolvation) were performed in a large-volume guard column of 10 m
x 0.53 mm i.d. (Varian Chrompack). The analytical column was a CP-Sil 8 CB Low
Bleed/MS, 30 m x 0.25 mm i.d. and 0.25 µm film thickness (Varian).
3. Results and discussion
The aim of the GPC fractionation was to obtain a sample that is directly amenable to
GC analysis. A schematic representation of the procedure is given in figure 1. It was
similar to that described by Lübke et al. [1]. In short, the sample is fractionated by the
SEC system, and fractions are collected for further GC analysis.
[Figure 1. Scheme of the procedure: the sample is fractionated on the SEC column into time windows (0-8, 8-9.6, 9.6-10, 10-14, 14-21, 21-26 and 26-30 min); the collected fractions are screened by fast GC analysis, refractionated on SEC where needed, and analysed by LVI GC.]

Since LVI GC is very sensitive towards fat, a fast GC method was developed for
measuring the fat content in the SEC fractions prior to injection into the LVI GC
instrument. Figure 2 shows a chromatogram of the fast GC analysis where free fatty
acids, mono-, di- and tri-glycerides can be determined in less than three minutes. In
figure 3, an example of a LVI GC analysis of a mix containing 17 flavour compounds is
shown.

[Figure 2. GC chromatogram of a mixture of free fatty acids (FFA-S), mono- (MG-S), di- (DG-S) and triglycerides (TG-S), eluting within 2.8 min.]

[Figure 3. LVI GC analysis of a mix containing 17 flavour compounds (8-22 min), including hexanal, ethyl isobutyrate, butyric acid, 3-heptanone, diallylsulfide, furfuryl mercaptan, guaiacol, linalool, ethyl octanoate, 2,3-diethyl-5-methylpyrazine, undecane, gamma-C8-lactone, phenyl acetic acid, decadienal, vanillin and gamma-C12-lactone.]


The amount of fat injected strongly affects the separation between flavour compounds
and fat obtained in the SEC run. Different amounts of triglycerides ranging from 0.5 to
5 mg mixed with aroma compounds were injected in the SEC system (data not shown).
As expected, increasing the amount of fat resulted in a decrease in the separation
efficiency of the SEC run.
The experiments were carried out using a mix containing the flavour compounds in
rapeseed oil as the fatty matrix. After the first SEC fractionation, most of the flavour
compounds elute in the same fraction, except butyric acid and phenyl acetic acid, which
elute slightly later from the SEC column. Nevertheless, the tail of the triglyceride peak
elutes in the same fraction as the flavour compounds, as can be seen from Figure 4. The
fat level in that fraction is 6.7 ppm, too high to allow analysis of the flavours by LVI
GC. To improve the fat removal, the fractions where flavours and triglycerides co-elute
were recombined and refractionated. Figure 5 shows the results obtained after the
second SEC fractionation. Elimination of the fat is now complete: all flavour compounds
of different polarity and volatility now elute in one fraction.
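The decision step above (screen each fraction by fast GC, refractionate where the remaining fat would still disturb LVI GC) can be sketched as follows; the tolerance value and the function name are illustrative assumptions, not figures from this work.

```python
def fractions_to_refractionate(fat_level_ppm, flavour_fractions, max_fat_ppm=1.0):
    """Return the flavour-containing SEC fractions whose fat level, as
    measured by the fast GC screen, still exceeds what the LVI GC system
    tolerates. max_fat_ppm is an assumed, illustrative threshold."""
    return [f for f in flavour_fractions if fat_level_ppm[f] > max_fat_ppm]
```

With the 6.7 ppm fraction mentioned above, such a screen would flag it for recombination and a second SEC run before LVI GC injection.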

[Figure 4. Elution of the flavour compounds and TAGs (mg per fraction) over the SEC time fractions (0-8 to 26-30 min) in the first SEC separation.]

[Figure 5. Elution of the flavour compounds and TAGs (mg per fraction) over the SEC time fractions (0-8 to 26-30 min) in the second SEC separation.]

The recoveries obtained for the flavour compounds and triglycerides after the two serial
SEC fractionations are shown in figure 6. Almost all flavour compounds show good
recoveries in the overall SEC-GC procedure.


[Figure 6. Recoveries (%) of the flavour compounds and TAGs obtained after the two serial GPC fractionations (1st GPC and 2nd GPC).]

4. Conclusions
- The fast GC method developed and applied here allows fast and simple determination
of the fat content (as free fatty acids, mono-, di- and tri-glycerides) of the SEC fractions.
Flavour analysis performed by LVI GC provides a high sensitivity, simplifying the
sample pre-treatment procedure.
- The proposed SEC-GC method provides reliable results and allows rapid
quantitative analysis of pre-identified flavour compounds in fatty matrices.
Acknowledgments
This research has been supported by a Marie Curie Fellowship of the European
Community programme Industry Host Fellowships (HPMI-CT-2002-00167).
References
[1] M. Lübke, J.-L. Le Quéré and D. Barron, J. Chromatogr. A 729 (1996) 371-379.
[2] K. Grob and I. Kälin, J. Agric. Food Chem. 39 (1991) 1950-1953.
[3] G.A. Jongenotter, M.A.T. Kerkhoff, H.C.M. van der Knaap, B.G.M. Vandeginste, J. High Resol. Chromatogr. 22 (1) (1999) 17-23.


Polymer physics and material behavior of protein/polysaccharide composites: the gelatin-locust bean gum (LBG) system

Alois K. Popp and Leif Lundin
Unilever Res. NL, Olivier van Noortlaan 120, 3133 AT Vlaardingen; email: alois.popp@unilever.com
Abstract: Proteins and polysaccharides are important food ingredients, and a description of their
composite behavior leads to an understanding of food texture and oral perception.
We have studied the behavior of gelatin-locust bean gum (LBG) mixtures in order to describe and
predict material properties of protein-polysaccharide composites in the phase-separating liquid state
and the quenched gel state. Starting with the characterization of the single components in liquid
solution, we are able to relate the phase diagram of gelatin-LBG to the microstructure and mechanical
properties of different polymer compositions. Quenching these systems results in composite gels,
whose material properties depend on the phase diagram of the mixtures. The composite gels also show
evolution in time due to slow rearrangements within the systems (aging). This effect has been
analyzed with the help of a recently developed theoretical concept.

Introduction
Protein/biopolymer mixtures are important model systems for foods. Many food products are
liquids at high temperatures, which simplifies processing, but are soft solids upon cooling. In
contact with warm surfaces (warm bread, upon eating in the mouth), they melt again, causing
a pleasant mouth- feel. While it would be favorable to link the final oral perception in a
predictive, generalizing way to the ingredient formulation, mixing and processing of multiple
ingredients follows complex phase diagrams. The result of processing a mixture of gelling
and non-gelling ingredients is linked to ingredient and salt concentrations, pH, cooling
temperatures and temperature gradients. Upon storage at low temperatures, the gels evolve
further due to slow reorganization mechanisms in the dense networks (aging). As a first attempt
to link final product behavior to a phase diagram and to the behavior of the ingredient
concentrations, a binary (two-component) model system for a water-continuous food product
was chosen. It consists of the polysaccharide locust bean gum (LBG), used as a thickener, and
the protein gelatin as the gelling agent. While the phase behavior of these composites is well
known [1], details of the influence of the thickener on the further development after gelation have
only been studied sparsely. The investigation aims at a general description, using master-curve
approaches where possible in order to describe properties of mixtures over a wide
concentration range with a minimum number of variables.
The biopolymer LBG is a galactomannan, similar in behavior to guar and tara gum. The major
difference between these commercially important stabilizers is their mannose
content. While pure poly-mannose would not be soluble due to its high hydrophobicity, the
galactose side chains facilitate solubilization. The lower the mannose-to-galactose ratio, the
more soluble the gum. While this ratio is about 1.0 for guar, commercial LBG contains about
3.5-4 times more mannose residues than galactose side-chains. LBG can be further purified
and fractionated to obtain limiting fractions of both high and low mannose content [2]. The
fractionation thus provides limits within which all available LBG varieties should fall.
Material and methods
Fractionation, rehydration and dissolution of the thickener: LBG was fractionated into
a low-mannose (LBGLT) and a high-mannose (LBGHT) fraction, according to a published
method [2] that involves extraction from commercial LBG at different temperatures
followed by an ethanol precipitation. LBG was rehydrated in a few drops of ethanol. The
resulting slurry was dissolved in hot water by slowly heating the solution after mixing to 80 °C,
keeping this temperature for at least 0.5 h. Stable solutions of concentrations up to 1.5 %

Scalable audio coding


F. Riera-Palou and A. C. den Brinker
Philips Research Laboratories
Prof. Holstlaan 4 (WO-02), 5656 AA Eindhoven, The Netherlands

Introduction
Since the introduction of the CD in the early eighties, the digital format has been the
preferred method to store and transmit audio content. During the last decade, with the
advent of the Internet, the need for efficient audio representations became more evident
and numerous compression methods have been developed [1]. In this project, a new
coding scheme is being considered whose ultimate goal is to allow the decoding of the
coded material at a variety of bit-rates/qualities.
This paper is organised as follows: in section 1 a concise introduction to the main
audio coding techniques is presented. Section 2 describes the main features of the core
coder used in this project. In section 3 it is shown how this core coder can form the basis
of a scalable compression scheme. Section 4 presents the listening test results obtained
in the evaluation of the proposed scalable coder. Finally, the conclusions summarise the
main results obtained so far in this project.

Background

Broadly speaking, audio coding can be divided into two categories: lossless and lossy.
In lossless coding, the original audio material (typically CD audio) is encoded in such
a way that, when decoded, a perfect replica of the original is obtained. Lossless coders
typically achieve compression ratios between 2 and 3; this means that an original CD
stereo stream, which requires around 1,400 kb/s, will be reduced to somewhere between
500 and 700 kb/s. Contrary to lossless techniques, in lossy coding the decoded signal
is not a perfect copy of the original material, as some information is thrown away during
the encoding. The discarding of information is done by exploiting the characteristics of the
human hearing system so as to minimise the audible effects caused by the missing data.
The advantage of lossy coders is a much higher compression factor, which can be made
arbitrarily large depending on the quality expected by the listener. As an indication, MP3,
a lossy coding method with a variety of possible compression rates, can achieve
stereo CD-like quality with a compression factor of around 10.
In recent years, a new lossy audio coding paradigm has received substantial attention
from the research community. This new technique, generically called parametric coding,
fits the input signal to a pre-determined model, thereby simplifying its representation.
Using this method, compression factors of up to 50 (32 kb/s for a stereo CD stream) have
been realised while still maintaining a good quality in the reconstructed signal, although
significantly lower than that of the original material. Results indicate that for very low bit
rates, parametric coding is superior to conventional (waveform) schemes such as MP3.
Nevertheless, it is becoming increasingly clear that, even at high bit rates, it is very
difficult to attain excellent quality (like MP3 does) relying only on parametric coding.
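The compression factors quoted above translate into bit rates by simple division; a small illustration, using the approximate 1,400 kb/s stereo CD figure from the text:

```python
def coded_bitrate_kbps(source_kbps, compression_factor):
    """Bit rate remaining after compression by the given factor."""
    return source_kbps / compression_factor

CD_KBPS = 1400  # approximate stereo CD PCM rate used in the text
# lossless, factor 2-3        -> 700 down to about 467 kb/s
# MP3 at near-CD quality, ~10 -> 140 kb/s
# parametric coding, ~50      -> 28 kb/s, i.e. in the 24-32 kb/s region
```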


[Figure 1: SSC Encoder scheme producing a bit stream from an audio signal. It consists of a transient detector (TrD), a transient analyser (TrA), a transient synthesiser (TrS), a sinusoidal analyser (SiA), a sinusoidal synthesiser (SiS), a noise analyser (NoA) and a bit stream generator (BSG).]
Consequently, it seems logical to aim for a coder that can combine the strengths of both
approaches, that is, act as a parametric coder when very high compression is needed and
resort to waveform techniques when the priority is to achieve transparency.

SSC parametric coder

The sinusoidal coder (SSC) is a parametric audio coder developed by Philips [2] in response to a call for proposals from the MPEG standardisation body and it has recently
become an international standard. SSC is capable of coding stereo audio material with
high quality at a bit rate of 24 kb/s.
The SSC coder fits the input signal to a model composed of three elements: transients, sinusoids and noise. Transients are characterised by sudden changes in the energy
level which represent the non-stationarities in the audio signal. Sinusoids are the highly
predictable and steady components within the signal. They typically last for a long time,
allowing for a very efficient encoding. The noise accounts for the stochastic part of the
signal that cannot be modelled by either the transients or sinusoids.
A block diagram of the SSC encoder is shown in Fig. 1. The coding procedure starts
by processing the transients. To this end, it uses a transient detector (TrD) which detects
transients and estimates their starting position. This information is fed to the transient
analysis (TrA) which estimates the transient component parameters and feeds these to a
transient synthesiser. In the transient synthesiser (TrS), the waveform captured in the
transient parameters is generated and subtracted from the input signal, producing a first
residual. This first residual is input to a sinusoidal analyser (SiA) that estimates
the parameters of the harmonic components. The sinusoidal parameters are fed to a sinusoidal synthesiser (SiS), which generates a waveform; this waveform is subtracted from
the first residual, generating a secondary residual. Finally, the secondary residual is fed to a noise analyser (NoA). This analyser captures the main features of the
frequency spectrum and temporal envelope of the remaining signal, ignoring its specific
waveform. The coding process works on a frame-by-frame basis, and the only information that needs to be transmitted or stored is the set of parameters for each of the three components
in the frame. The decoder uses the incoming parameters to regenerate the corresponding
objects which are then added together to create a faithful approximation of the original
audio signal.
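The analysis cascade described above can be sketched in a few lines. This is a deliberately simplified illustration, not the actual SSC algorithm: transient handling is omitted, sinusoids are picked as the strongest FFT bins, and the noise model is reduced to a single power value rather than the spectral and temporal envelopes the real coder stores.

```python
import numpy as np

def ssc_encode_frame(frame, num_sinusoids=3):
    """Toy version of the SSC analysis cascade for one frame.

    Assumes the transient detector did not fire; sinusoids are taken
    as the strongest FFT bins, and the "noise model" is reduced to
    the residual power.
    """
    spectrum = np.fft.rfft(frame)
    # Sinusoidal analysis (SiA): keep the strongest spectral peaks.
    peaks = np.argsort(np.abs(spectrum))[-num_sinusoids:]
    sin_params = [(int(k), spectrum[k]) for k in peaks]
    # Sinusoidal synthesis (SiS): rebuild the waveform and subtract it.
    sin_spec = np.zeros_like(spectrum)
    for k, c in sin_params:
        sin_spec[k] = c
    residual = frame - np.fft.irfft(sin_spec, n=len(frame))
    # Noise analysis (NoA), here just the residual's mean power.
    noise_param = float(np.mean(residual ** 2))
    return sin_params, noise_param

# A single-tone frame is captured almost entirely by the sinusoid stage:
n = np.arange(64)
sin_params, noise_param = ssc_encode_frame(np.cos(2 * np.pi * 5 * n / 64))
```

For a pure tone on an FFT bin, the residual (and thus the noise parameter) is essentially zero, mirroring how steady sinusoids allow very efficient encoding.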


RPE extension

Parametric coders such as SSC can achieve very low bit-rates by exploiting the fact that
only the model parameters need to be stored/transmitted. However, this model-based
approach is also the fundamental bottleneck to achieving transparency. The difficulty stems
from the fact that many audio signals have components that fall in a grey area: neither
can they be modelled accurately by noise, nor can they be modelled by (a small number
of) sinusoids. In the context of the SSC coder, this implies that often the noise coder has
to deal with signals that are clearly non-stochastic.
In order to be able to encode such grey-area signal components in a bit-efficient
way, we propose to supplement the SSC coder with a Regular Pulse Excitation (RPE)
coder [3]. RPE is a waveform coding technique widely used in speech compression. In
RPE, an input signal is represented by another sequence, typically called the excitation,
where only a small subset of samples is allowed to be non-zero. This procedure is often
referred to as decimation. Additionally, the non-zero samples are constrained to lie at predetermined positions on a regular grid. The grid spacing is called the decimation factor:
for example, decimation 8 indicates that one sample out of every eight in the sequence is
non-zero. The coding benefit of
RPE lies in the fact that instead of transmitting all samples in an audio signal, only the
non-zero samples and some information regarding the grid need to be sent across. The
non-zero samples of the excitation sequence are computed by minimising the distortion
with respect to the original signal in a perceptually relevant manner.
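The grid search at the heart of RPE can be sketched as follows. This is a minimal illustration under a plain least-squares criterion; the actual coder minimises a perceptually weighted error through a synthesis filter, which is omitted here.

```python
import numpy as np

def rpe_encode(frame, decimation=8):
    """Toy regular-pulse-excitation encoder for one frame.

    Tries every grid phase, keeps only every `decimation`-th sample,
    and returns the phase whose sparse excitation best matches the
    frame in a least-squares sense (no perceptual weighting).
    """
    best_phase, best_err, best_pulses = None, np.inf, None
    for phase in range(decimation):
        pulses = np.zeros_like(frame)
        pulses[phase::decimation] = frame[phase::decimation]  # keep grid samples
        err = np.sum((frame - pulses) ** 2)                   # energy of dropped samples
        if err < best_err:
            best_phase, best_err, best_pulses = phase, err, pulses
    # Only the non-zero samples and the grid phase need to be transmitted.
    return best_phase, best_pulses[best_phase::decimation]

def rpe_decode(phase, amplitudes, frame_len, decimation=8):
    """Rebuild the sparse excitation from the transmitted parameters."""
    excitation = np.zeros(frame_len)
    excitation[phase::decimation] = amplitudes
    return excitation

rng = np.random.default_rng(0)
frame = rng.standard_normal(40)
phase, amps = rpe_encode(frame, decimation=8)
rec = rpe_decode(phase, amps, len(frame), decimation=8)
```

With decimation 8, a 40-sample frame is represented by just 5 amplitudes plus the grid phase, which is where the bit-rate saving comes from.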
Fig. 2 (left plot) shows a block diagram of the combination of the SSC coder with the
RPE extension. The RPE block works on the same residual signal as the noise coding part
of the SSC but it is able to achieve a far more accurate and efficient representation of this
residual. The extra bit-rate due to the RPE can be adjusted by means of the decimation
factor. A low decimation factor will lead to a better representation of the residual signal
than a high decimation factor, at the cost of an increase in bit-rate. The arrow linking the
SSC noise coder with the RPE coder symbolises the fact that the RPE block re-uses
most of the information already generated in the SSC noise analysis.
One of the major benefits of the SSC-RPE combination is that it naturally leads to
a scalable solution where the SSC layer retains the most important components of the
original signal and one or more RPE layers capture the details of the audio signal which
do not fully fit in the SSC model. Moreover, the different layers in the scheme have been
combined in such a way that bit-rate scalability is achieved. This means that a signal only
needs to be encoded once but can be decoded at a variety of bit-rates/qualities. This feature is very attractive for audio content distribution over the Internet, as it allows the end
user to select the most appropriate bit-rate as a function of the network characteristics.
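The layered decoding idea can be sketched as follows. The sketch is purely illustrative: the real bit stream carries parameters rather than decoded samples, and the layer rates below are chosen only to mimic the prototype's 24/40/85 kb/s operating points.

```python
def decode_scalable(layers, budget_kbps):
    """Sketch of bit-rate-scalable decoding.

    `layers` is an ordered list of (rate_kbps, decoded_samples) pairs,
    base layer first; the receiver simply sums every layer that fits
    within its rate budget.
    """
    out, spent = None, 0
    for cost, samples in layers:
        if spent + cost > budget_kbps:
            break
        spent += cost
        out = samples if out is None else [a + b for a, b in zip(out, samples)]
    return out

# Illustrative layer rates mimicking the 24/40/85 kb/s operating points:
layers = [(24, [1, 2]),      # SSC base layer
          (16, [10, 10]),    # RPE layer, decimation 8
          (45, [100, 200])]  # RPE layer, decimation 2
decode_scalable(layers, 40)  # -> [11, 12]: base plus the first RPE layer
```

A receiver on a slow link simply stops summing after the base layer, while a fast link accumulates all refinement layers from the same encoded stream.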

Listening test

In our current prototype coder the SSC base layer is supplemented with two extra RPE
layers with decimations 8 and 2, respectively. This results in a system that, after encoding
an audio file once, it can then be decoded at 24, 40 or 85 kb/s letting the end user decide the bit-rate/quality operating point. In order to measure the quality of the proposed
coding scheme, a listening test was carried out. The test involved 12 excerpts of about
10 seconds, a number of trained and untrained subjects, and five different coders. The

Figure 2: Left figure: combination of the SSC coder with an RPE coder. Right figure: listening test results, quality rating as a function of coder. SSC, RPE8 and RPE2
denote excerpts coded by the SSC coder, the SSC coder supplemented with a decimation-8 RPE
layer, and the SSC coder supplemented with decimation-8 and decimation-2 RPE layers, respectively. MP3
denotes the MP3 coder and H.Or. the hidden originals.







coders were the SSC base layer, SSC plus RPE with decimation 8, SSC plus RPE with decimations 8 and 2, MP3 (coded at 96 kb/s mono) and the (hidden) original. The subjects were
presented with an interface making it possible to listen to the reference (the original) and
the coded excerpts. The task was to rate the coders according to a scale ranging from bad
(range 0-20) to excellent (range 80-100). The mean results and 95% confidence intervals
per coder are depicted in Fig. 2 (right plot). The two major conclusions that we draw from
these results are the following: the scalability allows a gradual increase in quality as
a function of bit rate, and two extra layers suffice to obtain a quality comparable to MP3
and to the original audio signal.
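The per-coder means and 95% confidence intervals can be computed as sketched below. The ratings are invented for illustration, and a normal approximation is assumed; the actual test may have used a different interval estimator.

```python
import math

def mean_ci95(ratings):
    """Mean and normal-approximation 95% confidence interval for one
    coder's quality ratings on the 0-100 scale used in the test."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)                        # CI half-width
    return mean, (mean - half, mean + half)

# Invented ratings for a single coder across eight listeners:
mean, (lo, hi) = mean_ci95([78, 85, 90, 82, 88, 76, 84, 91])  # mean -> 84.25
```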

Conclusions

A new coding scheme has been designed, implemented and evaluated. The proposed
coder is bit-rate scalable allowing a one-time encoded file to be decoded at different
bit-rates/qualities. Listening test results indicate that the bit-rate scalability does not
compromise the quality offered by the different audio layers. Current work focuses on
further quality improvements in the 24 and 40 kb/s layers.

References
[1] T. Painter and A. Spanias. Perceptual coding of audio. Proc. of the IEEE, 88:451-513, 2000.
[2] E.G.P. Schuijers, A.C. den Brinker and A.W.J. Oomen. Parametric coding for high-quality audio. Convention Paper 5554, 112th AES Convention, Munich, 10-13 May 2002.
[3] P. Kroon, E.D.F. Deprettere, and R.J. Sluyter. Regular-pulse excitation - a novel approach to effective and efficient multipulse coding of speech. IEEE Trans. Acoustics, Speech and Signal Processing, 34:1054-1063, 1986.


Examination of the fermentation conditions influencing the morphology development
of Aspergillus niger using image analysis and DNA micro-arrays
Panagiotis Sarantinopoulos, Stefaan Breestraat, Rogier Meulenberg and Hein Stam
DSM Food Specialties, Research and Development, Department of Microbiology and
Fermentation, P.O.Box 1, 2600 MA Delft, The Netherlands
Introduction.
Filamentous fungi comprise an industrially very important collection of
microorganisms, since they are used for the production of a wide variety of products,
ranging from primary and secondary metabolites to industrial enzymes (such as proteases
and lipases) and antibiotics. Aspergillus species remain the most
significant fungi for commercial enzyme production, and Aspergillus niger in particular is
the main microorganism used within DSM Food Specialties. It is known that fungal
morphology is often considered one of the key parameters in industrial production
(Pazouki and Panda, 2000). Broth viscosity and rheology, which are a result of biomass
concentration and morphology, determine maximum oxygen transfer and this is often
directly coupled to carbohydrate consumption and enzyme production. Although it has been
pointed out that fungal morphology is greatly affected by hyphal length, branching
frequency and pellet formation, the interactions between morphology, productivity and
process conditions are not yet fully understood; moreover, little is known about
what determines A. niger morphology at the genetic level (Gibbs et al., 2000).
Aim.
The present research project aims at the study of the influence of various process and
physicochemical conditions (medium composition, temperature and pH) on morphology,
and, furthermore, on protein production of selected A. niger strains of industrial
importance. Moreover, strains with known differences in morphological behavior will be
compared under standardized fermentation processes. The experiments will be carried out
in 10-litre fermentors fully equipped with sensors and control systems. In principle, two
different tools will be utilized. Initially, a highly sophisticated image analysis system will
be used for characterizing the morphology of A. niger during growth. The power of image
analysis lies in extracting quantitative information in a fast, reproducible and automated
manner. Secondly, since DSM has sequenced the complete genome of A. niger, DNA
micro-arrays will be applied for studying the expression profile of all genes influencing
fungal morphology. In that way, fermentation conditions will be adapted for future
experiments based on the knowledge obtained after gene expression profiling. As a
consequence, the gap between the traditional expertise of microbial physiology and
genetics will be bridged, to some extent.


Morphology of filamentous fungi.


Cells of filamentous fungi exhibit two extreme types of morphology: pelleted and free
filamentous forms (Figure 1). In other words, many fungi can grow in submerged culture in
different forms ranging from dispersed filaments/mycelia to pellets (dense, often spherical
aggregates). The dispersed form can be further divided into freely dispersed and clumps
(loose hyphal aggregates) (Paul and Thomas, 1998). Pelleted growth is preferred for citric
acid production by A. niger and for itaconic acid production by Aspergillus terreus, whereas pelleted plus
filamentous growth has been suitable for penicillin production. Purely filamentous growth
is better for pectic enzyme production by A. niger and for fumaric acid production by
Rhizopus arrhizus (Pazouki and Panda, 2000).

Figure 1. Forms of (gross) morphology found in typical submerged cultures of filamentous fungi.

Filamentous growth increases the viscosity of the medium (pronounced shear-thinning
characteristics) and leads to pseudoplastic behavior, thereby requiring a higher power input
to maintain adequate mixing and aeration. The viscosity of media containing pellets is
substantially lower, so less power is required for aeration and agitation, while pelleted
growth leads to Newtonian broths and better mass transfer rates. However, the interiors of pellets often become
anaerobic and fungal growth is confined to the external region of the pellets. Furthermore,
autolysis inside the pellets, due to oxygen limitation, results in a large portion of fungal
mass being metabolically inactive (Pazouki and Panda, 2000).
Generally, morphological forms are described on two levels, macroscopic and
microscopic, with the macromorphology describing the gross morphology (pellets, clumps
or freely dispersed mycelia) and the micromorphology describing the properties of these
types (branch frequency, hyphal dimensions, and segregation, i.e., compartmentalization
and physiological population distribution). While the macroscopic morphology can
influence medium rheology and thus mixing and mass transfer within a culture, the
literature mainly describes control of macromorphology by environmental conditions. The
resulting micromorphology can have a direct effect on metabolic pathways through the co-regulation of genes and can influence productivity due to the segregation of hyphae.
Microscopic morphology also has other, indirect effects on productivity, with
differentiation and hyphal dimensions influencing the secretion pathway. The processes of
clumping and pelleting, and thus macromorphology, have significant influence on the


measured mean activities or specific productivities of the cultures investigated.


Macroscopic morphology also determines the micro-environment of hyphae through effects
on mixing, mass transfer, and culture rheology, which may lead to cell lysis and thereby
loss of the interior pellet structure (McIntyre et al., 2001).
Effect of fermentation conditions on morphology of filamentous fungi.
The two major factors influencing the rheology of fermentation fluids are biomass
concentration and fungal cell morphology. As the biomass concentration increases, the
number of particles in the fermentation fluid, and hence the number of potential interactions
between them, also increases, thus leading to a possible rise in broth rheology. Morphology
of the mycelial aggregates influences the physical properties of the fermentation fluid for
pelleted and predominantly clumped suspensions, whereas the morphology of the
individual hyphae can affect the rheology of dispersed mycelial broths (Figure 2). A
number of culture conditions, such as medium composition, pH, temperature, agitation
intensity, dissolved oxygen tension and inoculum level, have been identified as having the
potential to bring about substantial change in the morphology, and therefore indirectly
broth rheology (Gibbs et al., 2000).

Figure 2. Complex interactions between morphology, productivity and process conditions in submerged fermentations of filamentous microorganisms.

Image Analysis and DNA micro-arrays.


Image analysis is a powerful tool for characterizing the gross and internal morphology
of filamentous microorganisms (Figures 3 and 4). Methods are available for studying both
dispersed and pelleted growth forms, and for identifying apices, vacuoles and degenerated
regions of hyphae. Such methods can be used to examine the complex interactions between
morphology, enzyme productivity and process conditions in submerged fermentations of
filamentous microorganisms. Image analysis, as a powerful technology, allows the
assessment and measurement of structure from examination of electronic images. Its
importance in research of filamentous microorganisms is in its capability to characterize
size and shape quantitatively (Cox et al., 1998).
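As a minimal illustration of the kind of quantitative shape descriptor such a system extracts, the circularity of a segmented object separates compact pellets from elongated hyphae. The 0.5 threshold and the pixel dimensions below are assumptions for the sketch, not values from the cited literature.

```python
import math

def classify_particle(area, perimeter):
    """Classify a segmented object as 'pellet' or 'filamentous' from
    its circularity 4*pi*A/P^2, which is 1 for a disc and approaches
    0 for long thin hyphae.  The 0.5 threshold is an illustrative
    assumption."""
    circularity = 4 * math.pi * area / perimeter ** 2
    return "pellet" if circularity > 0.5 else "filamentous"

# A disc of radius 20 px versus a 200 x 2 px filament:
disc = classify_particle(math.pi * 20 ** 2, 2 * math.pi * 20)  # -> 'pellet'
fil = classify_particle(200 * 2, 2 * (200 + 2))                # -> 'filamentous'
```

In a full pipeline this would run after thresholding and labelling each object in the electronic image; the same per-object measurements feed the hyphal-length and branching statistics mentioned above.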
Recently, DSM Food Specialties, in collaboration with another industrial partner, has
sequenced the complete genome of Aspergillus niger, the in-house organism for


enzyme production. The sequence data were utilized to generate DNA micro-arrays (or
DNA GeneChips) of A. niger, which will be used to compare different strains and processes
on a full genomic level (transcriptomics). Based on these DNA arrays the expression profile
of all genes, and more specifically those involved in the determination of the morphology
of selected A. niger strains grown under different fermentation conditions, will be
identified.
Knowledge on the (differential) expression of these genes will be used to further
construct new Aspergillus strains by overexpression/deletion of specific genes, or by
classical strain improvement (CSI) followed by targeted screening. Alternatively,
fermentation conditions will be adapted based on the knowledge obtained after gene
expression profiling. These experiments, as a whole, should give us information on
bottlenecks in genes involved in viscosity/rheology (morphology) of A. niger and moreover
(heterologous) gene expression, with the final goal to increase the enzyme yields in
industrial A. niger fermentations.
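The core comparison behind such transcriptomics experiments can be sketched as a fold-change screen between two fermentation conditions. The gene names and expression values below are invented for illustration; a real micro-array analysis would also normalise intensities and test statistical significance across replicates.

```python
import math

def differential_genes(expr_a, expr_b, threshold=1.0):
    """Flag genes whose absolute log2 fold-change between two
    conditions exceeds `threshold` (1.0 corresponds to a two-fold
    change in expression)."""
    hits = {}
    for gene in expr_a:
        lfc = math.log2(expr_b[gene] / expr_a[gene])
        if abs(lfc) >= threshold:
            hits[gene] = round(lfc, 2)
    return hits

# Invented expression intensities under two fermentation conditions:
cond_a = {"brlA": 100.0, "chsB": 250.0, "gfaA": 80.0}
cond_b = {"brlA": 420.0, "chsB": 260.0, "gfaA": 30.0}
hits = differential_genes(cond_a, cond_b)  # brlA up, gfaA down; chsB unchanged
```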

Figure 3. Hardware components of typical image analysis systems.

Figure 4. Photograph of an image analysis system.

Acknowledgments.
Panagiotis Sarantinopoulos is a Marie Curie Industry Host Fellowship recipient and
wishes to thank the European Commission for financial support.
References.
Cox, P.W., Paul, G.C. and Thomas, C.R. 1998. Image analysis of the morphology of
filamentous micro-organisms. Microbiol. 144: 817-827.
Gibbs, P.A., Seviour, R.J. and Schmid, F. 2000. Growth of filamentous fungi in submerged
culture: Problems and possible solutions. Crit. Rev. Biotechnol. 20: 17-48.
McIntyre, M., Müller, C., Dynesen, J. and Nielsen, J. 2001. Metabolic engineering of the
morphology of Aspergillus. Adv. Biochem. Eng./Biotechnol. 73: 103-128.
Paul, G.C. and Thomas, C.R. 1998. Characterisation of mycelial morphology using image
analysis. Adv. Biochem. Eng. 60: 1-59.
Pazouki, M. and Panda, T. 2000. Understanding the morphology of fungi. Bioprocess Eng.
22: 127-143.


An examination of adaptive cellular protective mechanisms using a multi-stage carcinogenesis model
H. Schöllnberger (a), R. D. Stewart (b), R. E. J. Mitchel (c), and W. Hofmann (d)

(a) RIVM/LSO, Bilthoven, The Netherlands
(b) Purdue University School of Health Sciences, West Lafayette, Indiana, USA
(c) Atomic Energy of Canada Limited Chalk River Laboratories, Chalk River, Canada
(d) Institute of Physics and Biophysics, University of Salzburg, Austria

Introduction
The linear no-threshold (LNT) model is used within the radiation protection
community to establish guidelines to protect workers and the public from the potential
human health risks of ionising radiation. The LNT model, as illustrated in Figure 1a,
is premised on the idea that the smallest possible dose of radiation can cause cancer.
However, there are biological responses to a variety of chemical and radiological
agents that may be U-shaped (Figure 1b) rather than LNT (e.g., see Calabrese et al.
2003a). This phenomenon is called hormesis: the stimulating effect of sub-inhibitory
concentrations of any toxic substance on any organism (Dorland 1974). Calabrese and
Baldwin uncovered thousands of studies that demonstrate hormesis effects across the
whole plant and animal kingdom and for a wide variety of biological endpoints
(Science 2003, Scientific American 2003, Calabrese et al. 2003b; see also
www.belleonline.com).

In 1996, Azzam and co-workers demonstrated that low doses (1-100 mGy) of γ-rays,
when delivered at low dose rates (2.4 mGy/min), reduced the neoplastic
transformation frequency in C3H 10T1/2 cells (mouse embryo fibroblasts) to a rate
three- to four-fold below the spontaneous transformation frequency (Azzam et al.
1996). This has been confirmed in human-hybrid cell systems (Redpath et al. 1998,
2001). These results demonstrate that low-dose-rate radiation exposures may induce
processes, such as error-free DNA double strand break repair, that can reduce the
overall rate of cell transformation (see also Feinendegen et al. 1987, Schöllnberger et
al. 2002, Mitchel et al. 2003 and www.magma.ca/~mitchel/). The aim of this study is
to examine the potential impact that cellular adaptations in DNA damage repair and
enzymatic radical scavengers may have on the cumulative incidence of lung cancer
under low dose and dose rate exposure conditions.


Materials and Methods


We have developed a multi-stage lung cancer model (Figure 2) that describes the
crucial biological events in the processes of carcinogenesis. In the model, radiation
and endogenous processes damage the DNA of target cells in the lung. Some fraction
of the misrepaired or unrepaired DNA damage induces genomic instability and leads
to the accumulation of initiated and malignant cells. The model explicitly accounts for
the formation of initiated cells via rate constant k, cell birth (via mitotic rates kM1 and
kM2) and death processes (rates kd1 and kd2), malignant conversion (kmt), and a lag
period (t0) for tumour formation. To examine the potential significance of
radioprotective mechanisms, dose and dose rate dependent DNA repair and radical
scavenging phenomena are incorporated into the model. Changes in DNA repair and
radical scavenging with dose rate (and hence dose) are modelled using
dimensionless scale functions (Figure 3), denoted G and F respectively. For values of
G and F greater than one, radiation is less effective at inducing genomic instability
and cell transformation (i.e., reduces the rate constant k in Figure 2).
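The role of the scale functions can be made concrete with a deliberately simplified yearly-step sketch. All rate constants here are invented, and the paper's closed-form multi-stage solution (with its mitosis, death, conversion and lag stages) is not reproduced; the sketch only shows how G and F divide the radiation-driven initiation term.

```python
def cumulative_initiation(dose_rate, years=75, k0=1e-4, G=1.0, F=1.0):
    """Toy yearly-step accumulation of initiated cells.

    Initiation is driven by an endogenous damage term plus a
    radiation term; the adaptive-response functions G (DNA repair)
    and F (radical scavenging) divide the radiation contribution, so
    G, F > 1 means radiation is less effective at initiating cells.
    All constants are illustrative, not the paper's parameter values.
    """
    endogenous = 1e-3            # endogenous damage rate (arbitrary units)
    initiated = 0.0
    for _ in range(years):
        radiation = k0 * dose_rate / (G * F)
        initiated += endogenous + radiation
    return initiated

base = cumulative_initiation(dose_rate=0.0)              # background only
no_adapt = cumulative_initiation(dose_rate=2.0)          # G = F = 1
adapted = cumulative_initiation(dose_rate=2.0, G=1.4, F=1.4)
assert base < adapted < no_adapt  # adaptation pulls the result toward baseline
```

The endogenous term dominating at low dose rates mirrors the sensitivity result below, where endogenous damage accounts for most predicted cancers near background exposure levels.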

Results
Simulations were performed with the closed-form solution for the cumulative lung
cancer incidence model. Estimates of model parameters are derived from data
reported in the literature. Figure 4 shows the cumulative lung cancer incidence at an
age of 75 years versus the total absorbed dose delivered in the same time span. The
effects of different levels of cellular adaptations in DNA repair with scavenger effects
switched off are shown. Figure 5 shows the combined effects of cellular adaptations
in radical scavenging and DNA repair processes. These results suggest that radiation
must induce changes in radical scavenging and DNA repair greater than about 30 or
40% (F and G > 1.3 to 1.4) of the baseline values in order to produce cumulative
incidence levels outside the range expected for endogenous processes and background
radiation (i.e., the horizontal dotted lines at 3.1×10^-4 and 6.4×10^-4).


Discussion and Conclusions


Sensitivity studies highlight the importance of including endogenously formed DNA
damage in estimates of low-dose cancer incidence levels. For doses comparable to
background radiation levels, endogenous DNA damage accounts for as much as 80 to
95% of the predicted lung cancers. For a lifetime dose of 1 Gy, endogenous processes
may still account for as much as 38% of the predicted cancers. For the range of model
inputs deemed biologically plausible, both LNT and non-LNT shaped responses can
be obtained. Model inputs that give rise to U-shaped responses are consistent with an
effective cancer incidence threshold that may be as high as 300 mGy (4 mGy per year
for 75 years).
Acknowledgements
Work supported by the Marie Curie Individual Fellowship, EC Contract No. FIGH-CT-2002-50513, and by the Low Dose Radiation Research Program, Biological and
Environmental Research (BER), U.S. Department of Energy, Grant No. DE-FG02-03ER63541.
References
Azzam EI et al. 1996. Radiat Res 146(4): 369-373.
Calabrese EJ et al. 2003a. Nature 421(6924): 691-692.
Calabrese EJ et al. 2003b. Crit Rev Toxicol 33(3-4): 215-424.
Dorland. 1974. Dorland's Illustrated Medical Dictionary, 25th edition. Saunders WB,
Philadelphia, London, Toronto.
Feinendegen LE et al. 1987. Health Phys 52(5): 663-669.
Mitchel RE et al. 2003. Radiat Res 159(3): 320-327.
Redpath LJ et al. 1998. Radiat Res 149: 517-520.
Redpath JL et al. 2001. Radiat Res 156(6): 700-707.
Schllnberger H et al. 2002. Int J Radiat Biol 78(12): 1159-1173.
Science 2003. 302: 376-379.
Scientific American 2003. August 18.

Figure legends
Figure 1: Hypothetical curves depicting (a) linear no-threshold and (b) U-shaped dose-responses.
Figure 2: Conceptual overview of the multi-stage cancer model.
Figure 3: Representative examples of the dimensionless DNA repair function G. The
vertical dotted lines indicate the typical dose range expected from naturally occurring
radiation sources (1 to 3 mGy yr^-1).
Figure 4: Solid line: no radiation hormesis effects. Dashed lines show the effects of
cellular adaptations in DNA repair (F = 1 and 1.1 ≤ G ≤ 3).
Figure 5: Predicted shapes of the cancer incidence curves when both cellular defence
mechanisms are included in the model. Dashed line: effects of radical scavenging and
DNA repair (F and G in the range from 1 to 1.4). Dotted line: combined effects of
radical scavenging and DNA repair (F and G in the range from 0.8 to 1.4). Dash
Dotted line: combined effects of radical scavenging and DNA repair (F and G in the
range from 0.8 to 3).

This paper will also be presented at the IRPA11 conference.


Climate Change and Sustainable Energy: Need for a New Protocol on Hydrogen?
Francesco Sindico and Joyeeta Gupta1

1. Introduction
While energy is a primary and necessary condition of modern life, scientists and
policymakers agree that current world energy production, distribution and consumption
systems are unsustainable. On the one hand, current energy systems are alone
responsible for more than half of the world's greenhouse gas emissions that are
expected to lead to the problem of climate change. On the other hand, energy is
produced currently mainly from non-renewable sources most of which are not expected
to last beyond the current century. According to official International Energy Agency
(IEA) statistics, more than half of the world's energy production (59%) comes from oil and coal
while only 11% comes from renewable energies. Other current energy sources are
natural gas (21%), nuclear (7%) and hydro (2%).2 However, the latter problem (resource
depletion) is unlikely to resolve the former (climate change), simply because of the urgency of the former.
Hence, there is an urgent need to shift energy production from unsustainable to
more sustainable sources. The urgency is also based on the argument that energy
investments are long-term investments and if new energy systems are being developed
based on existing fossil fuel technologies, this will lead to technological lock-in unless
we can foreclose these options as soon as possible.
The new and/or renewable alternative options each have their own set of
economic and environmental costs, benefits and risks, and choosing an energy strategy
inevitably means choosing an environmental strategy.3 One of the options in order to
tackle energy related problems is to develop Hydrogen as an energy carrier. Hydrogen
powered vehicles could be revolutionary because they would only emit water vapour
instead of carbon dioxide. However, there remains some scientific ambiguity about
Hydrogen. Some scientists maintain that it could really solve major environmental
problems, starting with climate change. 4 Others stress the need to study more
thoroughly possible adverse effects on the environment caused by Hydrogen

The first author is a PhD candidate at the University Jaume I in Castellón de la Plana, Spain, currently
an EU Marie Curie Fellow (September 2003 to March 2004, contract number: EVK2-CT-2002-57006) at the Institute for Environmental Studies, Faculty of Earth and Life Sciences, Vrije
Universiteit, Amsterdam, the Netherlands. The second author is UNESCO-IHE professor on policy
and law on water resources and the environment and Programme Manager at the Institute for
Environmental Studies, Faculty of Earth and Life Sciences, Vrije Universiteit, Amsterdam, the
Netherlands.
2
IEA, Key World Energy Statistics 2003 available at http://www.iea.org/statist/key2003.pdf.
3
See The World Commission on Environment and Development, Our Common Future WCED Report:
Oxford University Press, 1987 at 168.
4
See M.J. Prather, An Environmental Experiment with H2 , 302 Science (2003), at 581 and M.G.
Schultz, T. Diehl, G.P. Brasseur & W. Zittel, Air Pollution and Climate-Forcing Impacts of a
Global Hydrogen Economy, 302 Science (2003), at 625.


production. 5 But there appears to be some consensus that Hydrogen is possibly an


optimal solution if produced from renewable energy sources.
Against this background, this paper analyses how the international community is
promoting sustainable energy and, in particular, studies how the progressive
development of international law may be able to promote an economy based mainly on
Hydrogen energy, the so-called Hydrogen economy.

2. From Rio to the International Partnership for the Hydrogen Economy
This paper first analyses international documents which deal with sustainable
energy, and documents that deal with the promotion of a Hydrogen economy. It focuses
on the IEA efforts, on the United States (US) bilateral agreements in this field and on
the forthcoming International Partnership for the Hydrogen Economy (IPHE).
2.1. International Community and Sustainable Energy
The 1987 World Commission on Environment and Development recommended
that the world should invest in a transition to a safer, more sustainable energy era.
Energy in the context of sustainable development was dealt with at the 1992 Rio de
Janeiro United Nations Conference on Environment and Development (UNCED). Two
of the documents addressed energy: Agenda 21 (Agenda)6 and the United Nations
Framework Convention on Climate Change (UNFCCC)7.
The starting point of Agenda 21 is that energy production, distribution and
consumption patterns are unsustainable. It fosters a multistakeholder approach and it
stresses that solutions must be found multilaterally and within the United Nations (UN)
framework. The Agenda also stresses the need to involve private parties in finding a
solution to energy problems as well as the need to take into account developing
countries. It did not, however, define which energy sources were sustainable.
The 1992 United Nations Framework Convention on Climate Change calls on
parties to develop and transfer, inter alia, technologies to reduce greenhouse gas
emissions from the energy sector. The 1997 Kyoto Protocol calls on parties to undertake
research on new and renewable forms of energies. Although both documents emphasise
the need for energy policies, they use the targets and timetables approach leaving it to
individual governments to undertake the most appropriate policies for the domestic
context.
In 2001 the UN Commission on Sustainable Development (UNCSD) dealt
thoroughly with energy related issues during its ninth session8 and it underlined that
energy production, distribution and utilization patterns were still unsustainable. The
UNCSD defined sustainable energy as reliable, affordable, economically viable, socially
acceptable and environmentally sound energy. It then focused on some elements which
appear also in previous international instruments related to energy, such as the need for
5

T.K. Tromp, R. Shia, M. Allen, J.M. Eiler & Y.L. Yung, Potential Environmental Impact of a
Hydrogen Economy on the Stratosphere, 300 Science (2003), at 1740 focus primarily on the
increase of H2 in the atmosphere, which would moisten the stratosphere, leading to a
cooling of the lower stratosphere and to a disturbance of the ozone chemistry.
6
Agenda 21, UN Doc. A/Conf.151/26 (1992).
7
United Nations Framework Convention on Climate Change (New York), 9 May 1992, in force 21
March 1994; 31 I.L.M. 849 (1992).
8
UNCSD, Report on the ninth session (5 May 2000 and 16-27 April 2001) UN Doc. E/CN.17/2001/19.


a multistakeholder approach, the importance of private parties and of market-based
mechanisms, developing countries' needs and the importance of promoting renewable
energies. In 2002 at the World Summit on Sustainable Development, the plan of action
emphasized the need for energy efficiency and the development of new and renewable
sources of energy. Although the EU and Brazil proposed concrete targets to achieve an
increased use of renewable energy, this was blocked by the US and OPEC.
Of the above initiatives, the climate change treaty and protocol are the only two
legally binding documents, and at the present moment, entry into force of the Kyoto
Protocol in the near future seems highly unlikely because of US withdrawal and Russian
procrastination with the ratification process. The process appears to be stalling.
2.2. International Community and the Hydrogen Economy
2.2.1. The International Energy Agency
While energy has been the subject of global soft and hard law agreements, it has
also been on the agenda of various international organizations that have been set up for
this purpose. One such body is the International Energy Agency. The IEA is an agency
linked to the Organisation for Economic Co-operation and Development9 that has dealt
with Hydrogen since 1977, when the IEA established the Programme of Research and
Development on the Production and Utilization of Hydrogen (IEA Hydrogen
Agreement), which is a binding legal agreement. Parties10 are obliged to cooperate in
the promotion of Hydrogen research and development by participating in different
tasks, which focus on specific Hydrogen issues. The Executive Committee is in charge
of the management of the Agreement. It can take binding decisions in order to fulfil the
agreement's goals and, if it considers that parties have not complied with their
obligations, it may exclude them from the Agreement or from specific tasks. An
important feature of the IEA Hydrogen Agreement is that it provides for a dispute
settlement system.
In sum, the IEA Hydrogen Agreement is a multilateral binding legal agreement
that brings together stakeholders interested in promoting Hydrogen research and
development in the developed world.
9 The main goal of the agency is to share energy information, to co-ordinate energy policies and to cooperate in the development of rational energy programmes. The IEA pursues its goal by establishing programmes which facilitate international cooperation, called Implementing Agreements. In 2002, the Agency undertook a thorough legal review of the structure of these Agreements that led to the IEA Framework for International Energy Technology Co-operation: Doc. IEA/GB(2003)6/REV2/ANN1 (as approved by the Governing Board on April 3, 2003). The latter is currently the legal framework to which all other Agreements (including the IEA Hydrogen Agreement) must refer.
10 Participants in the IEA Hydrogen Agreement are contracting parties and associate contracting parties, which are mainly governments or government-designated entities, and sponsors, which are not appointed by governments. The latter enjoy fewer rights than the first two categories of parties.
2.2.2. United States Hydrogen Bilateral Agreements
In the meanwhile, although the US has withdrawn from active participation in
the Kyoto Protocol, it has begun to engage in bilateral agreements11 on climate change,
and sometimes specifically on Hydrogen, as a promising means to enhance energy
security, to increase the diversity of energy sources and to improve local and global
environmental quality. The main goal of the US climate change and Hydrogen bilateral
agreements is, once again, to promote Hydrogen research and development, but they do
not give specific details on how such promotion should be undertaken.12 They merely
maintain that such cooperation is desirable and will be undertaken. The most
important feature is that, in many cases, the bilateral agreements are explicitly a first
step towards the forthcoming International Partnership for the Hydrogen Economy (IPHE),
which will be led by the US.
In sum, US Hydrogen bilateral agreements are programmatic political
declarations that express the willingness to promote a Hydrogen economy but do
not create any binding legal obligation to cooperate.
2.2.3. International Partnership for the Hydrogen Economy
The current effort of a number of large States to promote Hydrogen research and
development is the IPHE.13 The draft charter of the latter was discussed at
ministerial level in Washington D.C. in November 2003.14 The two main goals of the
IPHE are to foster Hydrogen research and development and to accelerate the cost-effective
transition to a global Hydrogen economy. The goals are to be achieved by a
combination of instruments: firstly, through the establishment of public-private
partnerships; secondly, with the use of market-based mechanisms; and, thirdly, by
achieving uniform international Hydrogen-related policies, codes and standards. The
IPHE draft charter provides for an institutional structure composed of four bodies: a
Planning Committee, which is intended to be the governing body of the partnership; an
Implementing Committee, which assists this body in its work; a Liaison Committee,
which manages the relationship with stakeholders not included in the IPHE; and a
Secretariat, which focuses on administrative issues. The draft charter acknowledges
other international Hydrogen-related efforts and maintains that the IPHE must work
together with them. It specifically names the IEA and considers its efforts complementary.
Furthermore, the legal nature of the partnership is clearly not binding.
11 Since its official decision not to ratify the Kyoto Protocol, the US has signed a significant number of agreements both with developed and developing countries on broad climate change issues (17) and a smaller number of agreements on Hydrogen (3: with Brazil, the European Union and Italy).
12 The US bilateral Hydrogen agreements establish the framework within which Hydrogen research and development should be undertaken. On the one hand, efforts in the economic field should be made to foster a scenario in which private parties participate through market-based mechanisms. On the other hand, it is suggested that efforts in the legal framework must be undertaken: uniform international Hydrogen codes, standards and rules should be addressed. Neither the needs of developing countries nor the importance of renewable energies as the main source for producing Hydrogen are mentioned in the US Hydrogen bilateral agreements.
13 For more information on the IPHE see http://www.usea.org/iphe.htm. The countries involved in the IPHE are Australia, Brazil, Canada, China, France, Germany, Iceland, India, Italy, Japan, Republic of Korea, Russia, the United Kingdom, the United States of America, and the European Commission.
14 The Revised Draft of the Terms of Reference for the IPHE can be found at http://www.usea.org/Revised%20Terms%20of%20Reference.pdf.
In sum, the forthcoming IPHE is a non-obligatory international legal agreement
that focuses on Hydrogen research and development in order to advance towards a
Hydrogen economy.

3. Conclusions
There is a global recognition of the need to shift to more sustainable energy
sources. Although renewable energy sources and Hydrogen are seen as promising new
instruments, these have their own risks and costs. As a large energy carrier, Hydrogen
appears to be the most promising, provided it is generated via the use of renewable
energy resources.
Although there is global political consensus on the need for new and sustainable
energy sources, there is still considerable difference of opinion on how best to promote
this internationally. The climate change negotiating process appears to have reached a
(temporary) setback because of the US unwillingness to ratify the Kyoto Protocol.
Most political and legal scientists and economists are aware that even if Russia were to
ratify the agreement and the Protocol were to enter into force, the mere absence of the US
would cast a major shadow on the regime. The dominant question now is how to bring
the US back into the regime, and how to find new ways and means of addressing the
problem of climate change.
One area in which there appears to be common ground between the major
international actors is the current focus on Hydrogen research and policies. There is a
tendency to undertake Hydrogen-related cooperation at a bilateral and even a
multilateral level, but outside the formal context of the climate change negotiations. A
positive interpretation could be that the climate negotiations are so politically charged
and have so many parties that this is a necessary and efficient step. A negative
interpretation could be that such efforts are aimed at undermining the current global
and multilateral regime. Whatever the case may be, we believe that, in order to promote
long-term global efficiency and a solution to the problem of climate change, both initiatives
need to be closely linked in order to reap greater benefits. As long as the US remains a
Party to the climate change agreement, it is under a legal obligation (derived from the
Law of Treaties) not to undertake measures that are contrary to the spirit of the
Convention. As long as the US is part of the regime, it is still possible for future US
governments to develop their Hydrogen initiative as part of the regime. This is because
energy is a crucial climate change issue. Therefore, what we suggest is that Hydrogen
research and development should be addressed within the UNFCCC framework. Agenda 21's
recommendations to work multilaterally within the UN would be followed, and all
countries could participate in the creation of a multilateral binding legal agreement: a
UNFCCC Hydrogen Protocol. This should build on previous experiences, such as the
IEA Hydrogen Agreement, and lay down the pillars for a Hydrogen economy in which
Hydrogen is produced in the long term from renewable energies and developing
countries' needs are taken into account at all times. We believe that the EU should
seriously consider the possibility of initiating a process within the Conference of the
Parties to develop a Protocol on Hydrogen to achieve the very same goals that the US
and the International Partners wish to achieve via the bilateral agreements and the IPHE.
However, any future Hydrogen Protocol should be based on sound science and on
knowledge of the conditions under which a Hydrogen economy could become
sustainable.
Removal of Reactive Dyes from Textile Wastewaters

Yness March Slokar, Branislav Petrusevski
UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft, The Netherlands
E-Mail: y.slokar@unesco-ihe.org
Introduction
Environmental problems arising from effluents of textile dyeing and printing processes being
released directly into the environment are continuously addressed and minimized, but to date
no solution has been found to eliminate them completely. Reactive dyes are particularly
problematic in effluents, since they are soluble in water and thus not removable by traditional
sewage treatment. In the present study, the capability of a combined treatment method,
comprising oxidation with hydrogen peroxide and adsorption on iron-coated sand (ICS), to
remove reactive dyes from wastewaters was assessed.
Methodology
Batch experiments were carried out with model wastewaters, prepared in the laboratory,
containing:
> 50 mg/l of the reactive dyes Red 120, Violet 5, Blue 5, Blue 19 or Black 5;
> ICS with concentrations from 0 to 30 g/l;
> w(H2O2)=30% with a dosage from 10 to 20 ml/l, added either at once at the beginning of the
experiment or at constant time intervals.
Experiments in the batch reactor were conducted at two different mixing speeds, 30 and 90 rpm.
Efficiency of color removal was determined by measurement of absorbance at λ=436, 525
and 620 nm.
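The decoloration efficiency reported in the results below is simply the percentage reduction of absorbance at a given wavelength relative to the untreated dye solution. A minimal sketch of that calculation (the function name and the example absorbance values are illustrative, not taken from the study):

```python
def decoloration_percent(a_initial, a_treated):
    """Percentage reduction of absorbance at a given wavelength.

    a_initial -- absorbance of the dye solution before treatment
    a_treated -- absorbance after a given treatment time
    """
    if a_initial <= 0:
        raise ValueError("initial absorbance must be positive")
    return 100.0 * (a_initial - a_treated) / a_initial

# Hypothetical example: absorbance at 525 nm falls from 1.25 to 0.10
# over the treatment, i.e. 92% decoloration.
print(round(decoloration_percent(1.25, 0.10), 1))  # -> 92.0
```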
Results & Conclusions
Effect of ICS concentration. Within the first 4 hours of the experiments, the higher the
concentration of ICS, the faster the decoloration of all 5 dyes. After 4 hours, decoloration of
dye solutions with higher ICS concentrations slowed down, while in solutions with lower ICS
concentrations decoloration continued at the same rate. Even though the final decoloration
of all 5 dyes was best with 5 g/l of ICS, more than 90% decoloration was achieved with dye
Violet 5 only (Fig. 1).
Decoloration of the rest of the tested dyes with 5 g/l ICS was as follows:
> Red 120: ~25% after 24 hours;
> Blue 5: ~50% after 96 hours;
> Blue 19: ~8% after 48 hours and
> Black 5: ~10% after 48 hours.
[Figure 1: Effect of ICS concentration on the decoloration rate of Violet 5 at 525 nm (10 ml/l H2O2, 30 rpm). Plot of reduction of absorbance [%] against time [h] for ICS concentrations of 0, 5, 10, 15, 20, 25 and 30 g/l, and a standard.]
Effect of H2 O2 dosage. Apart from Violet 5, changing the dosage of H2 O2 did not improve
decoloratio n rate of the rest of tested dyes, i.e. even in the best case, decoloration did not
exceed 50%.

reduction of absorbance [%]

As apparent from the decoloration rate of Violet 5 (Fig.2), decoloration was fastest when 20
ml of peroxide was added at the beginning of the experiment.

100

10 ml/l
20 ml/l

80

standard
60
40
20
0
0

12

18

24

time [h]

Figure 2: Effect of H2 O2 dosage on the decoloration rate


of Violet 5 at 525 nm (5 g/l ICS, 90 rpm)
Effect of mixing speed. An increased mixing speed of the batch reactor increased the
decoloration rate of the dye Violet 5 (Fig. 3). For the rest of the tested dyes, no increased
decoloration was observed even at the higher mixing speed.
[Figure 3: Effect of mixing speed on the decoloration rate of Violet 5 at 525 nm (5 g/l ICS, 10 ml/l H2O2). Plot of reduction of absorbance [%] against time [h] at 30 and 90 rpm, and a standard.]
The effect of higher mixing speed on the decoloration rate of Violet 5 can be explained by a better
distribution of reactive OH radicals in the solution, which has a double effect on the decoloration:
> more radicals can reach dye molecules throughout the solution to oxidize them and
> there is less possibility for OH radicals to react amongst themselves to form the less reactive OOH.
Generally, adsorption on ICS did not prove to be sufficiently effective for the removal of color
from wastewater containing the tested reactive dyes, since very effective decoloration (>90%)
was achieved with Violet 5 only.
Acknowledgement
The presented study was carried out under a Marie Curie Fellowship funded by the European
Commission (contract no. EVK1-CT-2001-50011).
Boosting plant defence by beneficial soil microorganisms

María J. Pozo, L.C. Van Loon and Corné M.J. Pieterse
Section Phytopathology, Faculty of Biology, Utrecht University.
P.O. Box 800.84, 3508 TB Utrecht, The Netherlands.
m.j.pozo@bio.uu.nl Internet: http://www.bio.uu.nl/~fytopath/

Plants in their environment face potentially deleterious organisms such as fungi, bacteria,
viruses, nematodes, etc. Many of them are able to cause plant diseases, responsible for
important losses in crop production worldwide. But often the outcome of these interactions
is not disease, since plants have developed multiple mechanisms to protect themselves
against pathogen attack. Moreover, beneficial microorganisms are common in the soil,
improving plant growth and reducing the effects of deleterious organisms. While chemical
control of plant diseases is usually expensive and may have a negative impact on the
environment and on public health, the use of microorganisms to control plant pathogens,
known as biological control, is accepted as a durable and environmentally friendly
alternative in plant disease management.
Several modes of action have been described in biological control. Direct effects of the
biocontrol agent on the pathogen include inhibition by antimicrobial compounds
(antibiosis), competition for colonization sites and nutrients, degradation of pathogenicity
factors, and parasitism. Indirect mechanisms include improvement of plant nutrition and
damage compensation, changes in the root system anatomy, microbial changes in the
rhizosphere and activation of plant defence mechanisms, leading to enhanced plant
resistance. It is common that an effective biocontrol agent acts through a combination of
different mechanisms (Whipps, 2001).
For example, the filamentous fungi Trichoderma spp. have been widely studied for
their effectiveness in controlling a broad range of phytopathogenic fungi such as
Rhizoctonia solani, Pythium ultimum and Botrytis cinerea. The mechanisms involved in
this protective effect are mainly direct, through antibiosis and parasitism. Trichoderma
grows around the fungal pathogen (Fig. 1) and releases toxic compounds and a battery of
lytic enzymes, mainly chitinases, glucanases and proteases. These proteins facilitate
Trichoderma penetration into the host and the utilization of the host components for
nutrition. The implication of lytic enzymes in biocontrol has been confirmed in
overproducing mutants (Mendoza-Mendoza et al., 2003; Pozo et al., 2003), and the
expression of some of these enzymes in transgenic plants greatly increased their resistance
to different pathogens (Emani et al., 2003).

Figure 1. The biocontrol fungus Trichoderma virens grows around hyphae of the pathogenic
fungus Rhizoctonia solani. While growing around its host, or coiling, Trichoderma secretes
different lytic enzymes able to degrade fungal cell walls, allowing the penetration and
parasitism of the host.
Arbuscular mycorrhizal fungi (AMF), which form symbiotic associations with the root
systems of almost all plants, also reduce root diseases caused by several soil-borne
pathogens, mainly through indirect mechanisms. The AMF penetrate the root system (Fig.
2A), improving plant nutrition and growth and altering the anatomy and architecture of the
root system. These changes, together with the activation of the plant defence mechanisms,
seem to be responsible for the reduction of the disease (reviewed in Azcón-Aguilar et al.,
2002; Pozo et al., 2002a). For example, colonization of tomato roots by Glomus mosseae
reduces disease development in plants infected with Phytophthora parasitica (Fig. 2B), and
the involvement of plant defence mechanisms has been pointed out (Pozo et al., 1996;
Cordier et al., 1998; Pozo et al., 1998; Pozo et al., 1999; Pozo et al., 2002b).
Figure 2. A. Tomato roots colonized by the mycorrhizal fungus Glomus mosseae were stained with
trypan blue to detect fungal structures. The fungus develops inside the root cortical cells, forming vesicles
(v) and specialized structures called arbuscules (a). B. After Phytophthora parasitica infection, non-mycorrhizal
tomato plants (Nm) showed a strangulated collar (arrow), extensive necrotic areas in the root
system and a decrease in root and shoot biomass. In contrast, plants colonized by G. mosseae (M)
showed no symptoms in the collar, very limited necrosis in the roots and normal biomass development.

But among the most studied biocontrol organisms are bacteria from the genus
Pseudomonas. They constitute an excellent example of the combination of multiple
mechanisms for effective biocontrol (reviewed in Van Loon et al., 1998). Pseudomonas
spp. produce several metabolites with antimicrobial activity towards other bacteria and
fungi. They also produce siderophores that restrict pathogen growth by limiting the
iron available in the soil. Remarkably, some strains are also able to trigger an induced
resistance that enhances the defensive capacity of the plant to a subsequent pathogen
attack. This effect is not localized at the colonization site in the roots, but systemic,
conferring on the plant better protection not only against a broad range of soil pathogens, but
also against foliar ones (Fig. 3). This phenomenon is known as rhizobacteria-mediated Induced
Systemic Resistance or ISR (Van Loon et al., 1998). Interestingly, no major changes in
gene expression in the plant have been related to the ISR state. Instead, induced plants
show a faster or greater activation of defence responses after infection with a challenging
pathogen, a phenomenon called potentiation or priming (Conrath et al., 2002).
Figure 3. A. Pseudomonas fluorescens WCS417r bacteria on the surface of a plant root, visualized by
green-fluorescence-labelled antibodies. B. Treatment of Arabidopsis roots with P. fluorescens WCS417
promotes Induced Systemic Resistance (ISR), evidenced in the picture by the reduction in disease
symptoms after inoculation with the fungal root pathogen Fusarium oxysporum f.sp. raphani, compared
to controls (reproduced from Pieterse and Van Loon, 1999).

Understanding the genetic control of the plant defence-related processes underlying ISR
is a key point in biocontrol research. The complexity of these mechanisms, regulated by
multiple genes, requires the use of a well-defined biosystem and high-throughput
techniques for the analysis of gene expression, such as microarrays. The use of the model
plant Arabidopsis thaliana has greatly contributed to the progress in this area due to the
availability of mutant lines in different signal pathways and the sequencing of its genome in
full. Indeed, great advances in our knowledge of plant defence reactions have been
achieved in recent years. It is now known that plant inducible defence pathways are
regulated through a complex network of signalling cascades that involve three main
molecules: salicylic acid (SA), jasmonic acid (JA) and ethylene (ET), enabling the plant to
fine-tune its resistance reaction depending on the micro-organism encountered (Pieterse and
Van Loon, 1999). The Phytopathology group in Utrecht has shown that ISR acts through
the JA and ET signalling pathways, but is independent of SA (Pieterse et al., 1996;
Pieterse et al., 1998). However, analysis of local and systemic levels of JA and ET showed
no changes in their production. This result suggested that ISR is based on an increased
sensitivity to these plant hormones, and not on changes in their production (reviewed in
Pieterse et al., 2002).
Pieterse et al., 2002). To confirm this hypothesis, we are investigating if ISR-expressing
plants are primed to react faster or more strongly to JA or ET produced after pathogen
infection. With this aim, the induction of defence-related genes by different concentrations
of ET and JA was compared at several times in Arabidopsis plants treated or not with the
ISR inducing Pseudomonas fluorescens WCS417 bacteria. As an example, fig. 4A shows
the quicker and higher increase in the expression of LOX2, a gene involved in the synthesis
of JA, in ISR-expressing plants compared to the controls after treatment with methyl
jasmonate. In another experiment (Fig. 4B), ET application at different concentrations
resulted in higher transcript levels of the ethylene biosynthesis gene ACO in ISRexpressing plants compared with controls. These results indicate that priming of specific
sets of JA- and ET-responsive genes is indeed associated to ISR. We hypothesize that
priming of pathogen- induced genes allows the plant to react more effectively to the invader
encountered, which might explain the broad-spectrum action of rhizobacteria- mediated
ISR. To determine the full set of genes involved in the process, we are at the moment
analyzing the expression of thousands of genes in response to ET, JA and/or pathogen
attack in ISR-expressing or control plants by microarray screenings.

160

Figure 4. Priming of JA-induced LOX2 and ET-induced ACO gene expression in ISR-expressing
Arabidopsis plants after induction by P. fluorescens WCS417 (ISR). A. Expression of LOX2, involved
in jasmonate signalling, 0, 1, 3, 6 and 12 hours after treatment with 50 µM methyl jasmonate. B.
Expression of ACO, an enzyme involved in ethylene biosynthesis, after 6 hours of treatment with
different ethylene concentrations (0, 0.1, 1 and 10 ppm). Control, non-induced Arabidopsis plants.

Although important advances have been achieved lately in our knowledge of plant
defence mechanisms and their induction, many aspects remain unclear. Understanding the
mechanisms by which plants perceive and respond to micro-organisms that stimulate their
natural defences will provide more insight into how plants can be helped to defend
themselves against pathogen attack, and constitutes a very promising research area.
References
Azcón-Aguilar, C., Jaizme-Vega, M.C., and Calvet, C. (2002). The contribution of arbuscular mycorrhizal fungi to the control of soil-borne plant pathogens. In Mycorrhizal Technology in Agriculture: From Genes to Bioproducts, S. Gianinazzi, H. Schuepp, K. Haselwandter, and J.M. Barea, eds (Basel: ALS Birkhauser Verlag), pp. 187-197.
Conrath, U., Pieterse, C.M.J., and Mauch-Mani, B. (2002). Priming in plant-pathogen interactions. Trends in Plant Science 7, 210-216.
Cordier, C., Pozo, M.J., Barea, J.M., Gianinazzi, S., and Gianinazzi-Pearson, V. (1998). Cell defense responses associated with localized and systemic resistance to Phytophthora induced in tomato by an arbuscular mycorrhizal fungus. Molecular Plant-Microbe Interactions 11, 1017-1028.
Emani, C., Garcia, J.M., Lopata-Finch, E., Pozo, M.J., Uribe, P., Kim, D.J., Sunilkumar, G., Cook, D., Kenerley, C.M., and Rathore, K. (2003). Enhanced fungal resistance in transgenic cotton and tobacco expressing an endochitinase gene from Trichoderma virens. Plant Biotech. J. 1, 321-336.
Mendoza-Mendoza, A., Pozo, M.J., Grzegorski, D., Martinez, P., Garcia, J.M., Olmedo-Monfil, V., Cortés, C., Kenerley, C., and Herrera-Estrella, A. (2003). The mitogen-activated protein kinase TVK1 from Trichoderma virens regulates mycoparasitism-related genes, conidiation, filamentous growth and spore pigmentation. Proceedings of the National Academy of Sciences of the United States of America, in press.
Pieterse, C.M.J., and Van Loon, L.C. (1999). Salicylic acid-independent plant defence pathways. Trends in Plant Science 4, 52-58.
Pieterse, C.M.J., Van Wees, S.C.M., Hoffland, E., Van Pelt, J.A., and Van Loon, L.C. (1996). Systemic resistance in Arabidopsis induced by biocontrol bacteria is independent of salicylic acid accumulation and pathogenesis-related gene expression. The Plant Cell 8, 1225-1237.
Pieterse, C.M.J., Van Wees, S.C.M., Ton, J., Van Pelt, J.A., and Van Loon, L.C. (2002). Signalling in rhizobacteria-induced systemic resistance in Arabidopsis thaliana. Plant Biology 4, 535-544.
Pieterse, C.M.J., Van Wees, S.C.M., Van Pelt, J.A., Knoester, M., Laan, R., Gerrits, H., Weisbeek, P.J., and Van Loon, L.C. (1998). A novel signaling pathway controlling induced systemic resistance in Arabidopsis. The Plant Cell 10, 1571-1580.
Pozo, M.J., Dumas-Gaudot, E., Azcón-Aguilar, C., and Barea, J.M. (1998). Chitosanase and chitinase activities in tomato roots during interactions with arbuscular mycorrhizal fungi or Phytophthora parasitica. J. Exp. Bot. 49, 1729-1739.
Pozo, M.J., Azcón-Aguilar, C., Dumas-Gaudot, E., and Barea, J.M. (1999). β-1,3-glucanase activities in tomato roots inoculated with arbuscular mycorrhizal fungi and/or Phytophthora parasitica and their possible involvement in bioprotection. Plant Science 141, 149-157.
Pozo, M.J., Baek, J.M., Garcia, J.M., and Kenerley, C.M. (2003). Functional study of tvsp1, a serine protease in the biocontrol agent Trichoderma virens, by mutational analysis. In press.
Pozo, M.J., Slezack-Deschaumes, S., Dumas-Gaudot, E., Gianinazzi, S., and Azcón-Aguilar, C. (2002a). Plant defense responses induced by arbuscular mycorrhizal fungi. In Mycorrhizal Technology in Agriculture: From Genes to Bioproducts, S. Gianinazzi, H. Schuepp, K. Haselwandter, and J.M. Barea, eds (Basel: ALS Birkhauser Verlag), pp. 103-111.
Pozo, M.J., Cordier, C., Dumas-Gaudot, E., Gianinazzi, S., Barea, J.M., and Azcón-Aguilar, C. (2002b). Localized vs systemic effect of arbuscular mycorrhizal fungi on defence responses to Phytophthora infection in tomato plants. J. Exp. Bot. 53, 525-534.
Pozo, M.J., Dumas-Gaudot, E., Slezack, S., Cordier, C., Asselin, A., Gianinazzi, S., Gianinazzi-Pearson, V., Azcón-Aguilar, C., and Barea, J.M. (1996). Detection of new chitinase isoforms in arbuscular mycorrhizal tomato roots: possible implications in protection against Phytophthora nicotianae var. parasitica. Agronomie 16, 689-697.
Van Loon, L.C., Bakker, P.A.H.M., and Pieterse, C.M.J. (1998). Systemic resistance induced by rhizosphere bacteria. Annual Review of Phytopathology 36, 453-483.
Whipps, J.M. (2001). Microbial interactions and biocontrol in the rhizosphere. J. Exp. Bot. 52, 487-511.
European Commission
EUR 21252 16th Workshop of Marie Curie Fellows : Research Training in Progress
Held at the Institute for Energy, Joint Research Centre of the European Commission, Petten (Netherlands), 21-22 October, 2003
Luxembourg: Office for Official Publications of the European Communities
2004 II, 161 pp. 21.0 x 29.7 cm
ISBN 92-894-8198-6
Price (excluding VAT) in Luxembourg: EUR 25
