Director of School of Doctoral Programmes
Prof. Barbara Pernici

In the present volume, the PhD programmes of the Politecnico di Milano are presented, including a description of all PhD programmes which have been running in recent years and the summaries of the theses defended in the second semester of 2014 and in 2015.
In the Yearbook, the Coordinator of each PhD Programme provides a short description of the objectives of the programme, of its main activities, and of its research areas. Each PhD Programme has a Coordinator, a Doctoral Programme Board of Professors managing the programme, and an external Reference Committee, as well as several contacts with leading institutions.
The main goal of doctoral education is the development of a PhD thesis, which has to present original research work developed by the candidate and relate it to the current state of the art. This work is defended by the candidate in a final examination before an evaluation committee that includes at least two external professors.
A value that has guided the activities of the School since the beginning is the support of a culture of internationalization. All PhD programmes are taught in English or have at least an English track, with increasing internationalization of both the academic body and the PhD candidates: admitted international candidates have reached 33 per cent of the PhD student population. Furthermore, most candidates are encouraged to visit a research laboratory abroad during their PhD, to widen their perspectives on their research topic and to publish their work in international venues.
As a result, the Doctoral Programmes provide a select number of highly qualified graduates, endowed with a solid preparation, with the opportunity of acquiring a high degree of professional expertise in specific scientific, technological, social and economic fields. PhD graduates are not only capable of carrying out research projects, but also develop, during their period of study, new knowledge at the scientific frontier that can be immediately applied to professional activities.
For each thesis, the summary presented in the volume aims to provide an overview of the research problems studied in the thesis and of the original results of the research. As a consequence, the collection of summaries contained in the volume provides a good overview of the research activities conducted within the Doctoral Programmes of the Politecnico di Milano in the past few years.
Aerospace Engineering | Architectural Composition | Architectural and Urban Design | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircrafts | Spatial Planning and Urban Development | Structural Seismic and Geotechnical Engineering | Technology and Design for Environment and Building | Territorial Design and Government

PhD Yearbook | 2015
DOCTORAL PROGRAM
AEROSPACE ENGINEERING
Chair: Prof. Luigi Vigevano

The programme provides the competence required to carry out innovative research and/or state-of-the-art advanced applications in industries, public or private research centers, universities, or public and service companies in the area of aerospace engineering, including all the fields associated with it. The level of the course allows the graduates to compete in a European and international environment.
The course is three years long, requiring 180 credit points (ECTS), including possible study-abroad periods and internships in private or public institutions. The program and credits are divided into three main educational areas:
1. Main courses (30 credits), during the first year: courses examining fundamental subjects (problems, theories and methods) of scientific research in the disciplinary areas involved;
2. Elective courses and training on specific themes (30 credits), gained in the second year: specific and personalized educational programs aimed at a deeper overall knowledge and at mastering the techniques needed for the subsequent development of the doctoral thesis, plus seminars focused on specific and advanced methods;
3. Development of the doctoral thesis (120 credits): the thesis is developed within the Department or, in some cases, in other institutions in close contact with the Department. The thesis is started immediately (20 credits in the first year) and developed in the second (40 credits) and third year (60 credits) of the doctoral program.
If the candidate's background curriculum lacks some introductory knowledge required for the Doctorate, the Faculty will ask the candidate to recover such knowledge with the assistance of the tutor, and will later verify that the gaps have been filled during the annual meeting for admission to the second year of the course.
The course program related to point 1 does not follow a rigid scheme: besides widening the basic scientific culture of the candidate, it also takes into consideration the objectives and the core topics of the candidate's thesis. Likewise, the program outlined at points 2 and 3 considers general cultural requirements as well as what is deemed more specifically related to the thesis subject, as agreed between the candidate and the Faculty. For the activities of types 2 and 3, a study period in a foreign country is allowed, and indeed strongly encouraged; its duration should range from a few weeks up to one and a half years. The related activities should be carried out […]
In this context, a more specific competence can be gained either in a single subject or in the integration of special subjects such as: dynamics and control, fluid mechanics, systems and equipment, flight mechanics, passive structural safety, intelligent and automated systems, structures and materials. In this respect, some examples of professional skills achieved in the course of the past 24 years of the doctoral program are here reported: […] space vehicles; modeling, analysis, system design and implementation of specific subsystems; expert in the dynamics and control of aerospace vehicles and related operational missions; expert in the integrated design of complex aerospace systems.
Since its foundation 24 years ago, the doctoral course in Aerospace Engineering has graduated more than 70 PhDs.

DOCTORAL PROGRAM BOARD
Prof. Airoldi Alessandro, Prof. Anghileri Marco, Prof. Astori Paolo, Prof. Bernelli Zazzera Franco, Prof. Bisagni Chiara, Prof. Bottasso Carlo, Prof. Consolati Giovanni, Prof. Di Landro Luca, Prof. Dozio Lorenzo, Prof. Frezzotti Aldo, Prof. Galfetti Luciano, Prof. Ghiringhelli Gian Luca, Prof. Gibertini Giuseppe, Prof. Guardone Alberto, Prof. Lavagna Michèle, Prof. Mantegazza Paolo, Prof. Masarati Pierangelo, Prof. Morandini Marco, Prof. Quadrio Maurizio, Prof. Quaranta Giuseppe, Prof. Quartapelle Procopio Luigi, Prof. Ricci Sergio, Prof. Sala Giuseppe

ADVISORY BOARD
Giorgio Brazzelli, AgustaWestland & Distretto Aerospaziale Lombardo; Marco Molina, SELEX Galileo; Matteo Casazza, Leitwind; Fabio Nannoni, AgustaWestland; Massimo Lucchesini, Alenia Aermacchi; Franco Ongaro, ESTEC

SCHOLARSHIP SPONSORS
AgustaWestland; ASI; Tecnospazio
The objective of this work is the development of an interdisciplinary simulation environment to support the design and optimization of new-generation aircraft with Fly-By-Wire (FBW) control systems technology. This study deals with the shift from the classic to the integrated approach to aircraft preliminary design, since the increasing application of FBW technology has introduced new and possibly unforeseen aeroservoelastic problems that require a reassessment of the traditional approach.
In this integrated design process, to take full advantage of Fly-By-Wire Control System performance, the following disciplines are captured in a concurrent design environment: Flight Mechanics; Structural Dynamics and Aeroelasticity; Control Theory; Certification Requirements and Handling Qualities. This makes it possible to explore and eventually exploit their mutual interactions, thus introducing structural elasticity from the very beginning of FCS control law design.
An essential part of this process is the development of an Aeroservoelastic Static Flight Simulator for pilot-in-the-loop flight tests. This very useful tool is aimed at providing the design […] FCS control laws, predictions of handling qualities and Aircraft-Pilot Coupling behaviors, still at the design stage. This valuable information can then be fed back into the conceptual design loop, with the final target of reducing the time to market and the development costs of a new aircraft.
The proposed concept is an even simpler, desktop-level tool, consisting of a personal computer and a joystick, where all the aspects of the mentioned disciplines are combined, providing the project engineer with a preliminary global insight into the airplane's dynamic behavior. Attention has primarily been dedicated to minimizing the cost of the tool, using off-the-shelf desktop computers and widely diffused commercial and open-source software (Figure 1). At the same time, the flexibility and the versatility have been a […] the possibility to easily address both airplane modeling and flight simulation.
Two aircraft models have been realized. They essentially differ in the detail of the flight dynamics module: the Aeroservoelastic (ASE) model and the Rigid Body (RB) one. Among other aspects of the pilot-vehicle interaction, the flight test campaign primarily aimed at verifying the existence of a Control Spillover or Pilot Induced Oscillation (PIO), and at determining whether the aeroservoelastic model influences the piloting technique: both aspects have been successfully investigated.
The flight simulation of the aeroservoelastic (ASE) airplane model with a Fly-By-Wire FCS has evidenced a Roll Spillover at 2.05 Hz, due to the interaction between the rigid-body roll mode and the first structural antisymmetric wing bending (AWB) mode (Figure 2). At this stage of the project, since the airplane is still totally virtual, engineers can proceed either by acting on the wing structure or by providing the FCS with a Notch Filter: because the FE model was already the result of an optimization process (NeoCASS), the latter solution has been adopted (Figure 3).
The aggressiveness of the pilot's inputs on the inceptor (Aggression) has been introduced as a quantitative parameter to evaluate the pilot workload while performing high-gain tasks: this method has proven to be valid by comparison of the same task performed by two pilots with significantly different flying experience.
Pilot-in-the-loop flight tests have evidenced that the aeroservoelastic (ASE) airplane, when compared to the Rigid Body (RB) one, is more responsive to the pilot's inputs, due to the response of higher-frequency unsteady aerodynamic modes triggered by the Roll FCS mode: this airplane behavior affects the flying technique, leading the pilot to adapt roll inputs in order […] but turbulence does have some effects. A specific task has been designed, showing that lateral turbulence produces an increase of the roll ASE Aggression to values even significantly greater than the RB ones, because the unsteady aerodynamic response can also be triggered by the turbulence itself, forcing the pilot to increase the inputs to the inceptor to perform the assigned task. This increased sensitivity of the ASE airplane to lateral turbulence may suggest the implementation of an active gust alleviation device.
According to piloted flight simulation results, Figure 4 reports the quantitative and qualitative average ratings of the adopted evaluation metrics.

1. Visual obstacles for pilot-in-the-loop maneuvers.
2. Roll Control Spillover, Notch Filter OFF.
3. FBW Roll Control Law architecture.
4. PILOT 1 Roll & Pitch Aggression average ratings.
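The notch-filter remedy described above can be sketched numerically. The following is a minimal illustration, not the thesis's FCS implementation: a standard second-order (biquad) notch centred on the reported 2.05 Hz roll-spillover frequency, with an assumed 100 Hz sample rate and quality factor.

```python
import cmath
import math

def notch_biquad(f0_hz, fs_hz, q=10.0):
    """Second-order (RBJ cookbook) notch filter coefficients (b, a),
    normalized so that a[0] == 1."""
    w0 = 2.0 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [1.0 / a0, -2.0 * math.cos(w0) / a0, 1.0 / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def magnitude(b, a, f_hz, fs_hz):
    """|H(e^{jw})| of the biquad evaluated at frequency f_hz."""
    z = cmath.exp(1j * 2.0 * math.pi * f_hz / fs_hz)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

# Notch centred on the 2.05 Hz roll-spillover mode; fs and Q are assumed values.
b, a = notch_biquad(2.05, 100.0)
gain_at_notch = magnitude(b, a, 2.05, 100.0)  # close to 0: spillover tone rejected
gain_low = magnitude(b, a, 0.3, 100.0)        # close to 1: rigid-body band untouched
```

This is only the filtering idea; in the actual FCS the notch is inserted in the roll control law so the controller no longer excites the 2.05 Hz wing-bending mode.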
Stefano Dossi - Advisor: Prof. L.T. De Luca - Co-Advisor: Dr. Filippo Maggi

Metal fuels are widely used in space propulsion to increase solid rocket motor (SRM) performance. The intrinsic properties of micrometric aluminum (Al), like stability, low toxicity, low cost, and high metal content (about 99.5%, depending on size), make this powder one of the favorite materials employed for the industrial production of solid rocket propellants (SP). During combustion, the low reactivity of Al causes the formation of condensed combustion products (CCPs), lowering the theoretical performance enhancement granted by the metal. The reduction of Al powder size down to the nanometric scale (nano-sized aluminum, nAl) allows a considerable reactivity enhancement, confirmed by the reduction of the ignition temperature (Tign), by the faster SP burning rate (rb) and by the decrement of CCP size. However, the low metal content (around 90%), the increment of SP slurry viscosity, as well as direct and indirect high costs (due to powder production, and to handling and health hazards), hinder the large-scale use of nAl. A viable strategy to increase Al reactivity, while limiting the problems related to the use of nAl, consists in treating the powder by mechanical processes with and without additives.
In this work, two techniques for the mechanical activation (MA) of Al (ball milling and mechanical mixing) were investigated and applied for the production of mechanically activated Al powders (ActAl). During the preliminary step, a rationale based on toxicity/generic hazards, cost and potential performance was followed to support the choice of the activation substances (Fe2O3, Co3O4, and Cu2O). Similarly, the key parameters influencing MA were analyzed for the precise definition of the activation processes (3 ball milling and 1 mechanical mixing procedures). Fifteen ActAl powders were formulated and produced to guarantee a low toxicity, a good metal content, a good stability, and a higher reactivity than Al. Each manufactured ingredient was analyzed as it was and as an ingredient in AP/HTPB-based solid propellants. Experimental data were then crosschecked and compared to those obtained using a standard Al and two nAl powders.
All the ingredients treated by ball milling exhibited strong morphological variations caused by the mechanical action of the spheres. Particles were characterized by a flake shape with the external surface crossed by cracks (Fig. 1). The additive, if used, was embedded inside the particles with an effectiveness depending on the size and on the specific treatment. The particle shape of Al (spherical or elongated with a regular surface) was preserved by the mechanical mixing treatment. SEM images underlined that additives, when nanometric, were homogeneously dispersed on the particle surface.
The powder behavior at low and high heating rates has been determined by thermogravimetry and ignition tests. Both analyses evidenced an enhanced reactivity of ActAl, confirmed by a reduction of Tign and by a total mass gain up to 33% (6% was the mass increment of the virgin powder). The ActAl metal content was detected by a hydrolysis technique and was between 93.5% and 97.5%, thus higher than that of nAl (86.6%-89.4% for the two tested powders), but lower than that of Al (99.2% for the considered material).
Combustion tests were performed in a strand burner at 5, 10, 20, 30 and 40 bar to determine the effects of ActAl on propellant rb and CCP size. Solid propellants loaded with ActAl exhibited an increment of the burning rate with respect to the baseline ranging from 17% to 74% at 40 bar, depending on both the treatment and the selected additive. Similarly, the SP pressure sensitivity showed a reduction of up to 20% under the investigated conditions. With respect to ball milling treatments, mechanical mixing processes exhibited a higher efficiency in increasing the propellant burning rate, but a lower capability in decreasing the pressure sensitivity. The substitution of Al with ActAl brought a considerable reduction of the CCP mass-weighted mean diameter D43 (up to 70% at 10 bar and up to 60% at 40 bar), thus confirming a reactivity enhancement. As shown in Fig. 2, the D43 variation also exhibited a strong dependence on the additive and only a minor relation with the specific activation procedure.
An investigation of the influence of particle morphology on the Tign of Al powder was performed by modelling the evolution of a single isothermal particle positioned in an ambient filled with O2 and subjected to a slow heating rate (up to 50 K/s). The concept was based on an energy balance and includes both regular and irregular particles. Three geometrical families have been considered for the analyses: spheres, prolate spheroids, and oblate spheroids. Irregular particles, characterized by a higher external surface, exhibited a lower Tign than the regular ones, confirming the key role played by the shape in the reactivity enhancement of ActAl.
The analyses of the experimental results evidence that MA is a good strategy to obtain versatile and easily handled ingredients. ActAl exhibited a higher reactivity than Al, a good metal content and safety characteristics similar to micrometric powders. When used as an ingredient in SP, ActAl shows a good capability in reducing the size of CCPs and offers the possibility to tailor the propellant burning rate by controlling the activation parameters.

1. Particle morphology variation due to mechanical milling. ActAl particles assumed a typical flake conformation independently of the presence of an additive. SEM, magnification 2000X.
2. CCP mass-weighted mean diameter and burning rate at 40 bar for a series of solid propellants loaded with Al, nAl and different ActAl. The activated powders were processed by ball milling (BM) and by mechanical mixing (MM). The figure shows the possibility to decrease propellant CCP size and to tailor the propellant burning rate by using ActAl. Nanometric aluminum guaranteed a stronger CCP size reduction, but also a higher burning rate enhancement.
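The D43 metric quoted above is, under the usual convention, the mass- (volume-) weighted De Brouckere mean diameter. A minimal sketch of its computation, using made-up particle counts rather than thesis data:

```python
def d43(diameters, counts):
    """De Brouckere (mass/volume-weighted) mean diameter:
    D43 = sum(n_i * d_i**4) / sum(n_i * d_i**3)."""
    num = sum(n * d ** 4 for d, n in zip(diameters, counts))
    den = sum(n * d ** 3 for d, n in zip(diameters, counts))
    return num / den

# Illustrative bimodal population (diameters in micrometers): a few coarse
# 60 um agglomerates dominate D43 even when fine 5 um particles dominate
# by number, which is why CCP coarsening shows up clearly in this metric.
d43_example = d43([5.0, 60.0], [1000, 10])  # about 57 um
```

This weighting explains why a 60-70% drop in D43 is a strong indicator of reduced agglomeration at the propellant surface.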
Nowadays, access to space is no longer a dream but a necessity. Satellite orbit insertion is still a tough issue, which presents several technical and managerial aspects to be optimized in the mission project phase. Solid propellant rocket motors offer a good solution to some questions concerning the payload launch phase. These propulsive systems guarantee high performance, readiness and reliability in addition to relatively low costs; on the other hand, liquid propulsion systems, their direct competitors, provide higher performance and lower environmental impact at larger cost and complexity.
This work is addressed to the evaluation of the influence of Al particle shape on propellant properties, such as viscosity, burning rate and agglomeration, through the analysis of the condensed products. In order to obtain more information about this aspect, which in the past was never rigorously faced, four different micrometric powders have been tested and compared. The powders are the following: Al05a, a typical Al powder used for space applications, with approximately spherical shape; Al06, a commercial powder with irregular flaky shape, which does not find applications in rocket propulsive systems; and Al05a-M and Al06-M, the previous two powders modified by a ball milling process to enhance particle irregularity. In addition to this set of micrometric powders, a nanometric powder (Al01i) has also been introduced in the comparison. This inclusion was made to settle where the increment of specific surface due to particle irregularity can be placed between a micrometric and a nanometric powder.
To compare the different powders, the following characteristics have been analyzed: granulometry, performed with a laser granulometer; active metal content, by water bath; and morphology, which required the development of an ad hoc technique based on optical microscopy. To classify particle morphology, four dimensionless parameters have been used: 1) form factor (FF); 2) aspect ratio (AR); 3) roundness (R); and 4) compactness (C). From the results obtained it is possible to assert that Al05a is the finest powder with the highest metal content, and geometrically it can be seen as composed of particles similar to spheres. For irregular powders, also through SEM images, it can be stated that the particles which compose the Al06 powder can be compared to prolate spheroids, while the milling process leads to the formation of oblate spheroids.
Since the interest of this work is addressed to analyzing the effect of Al particle shape on solid rocket propellant features, the corresponding propellants have been manufactured. The manufacture has been done using a resonant acoustic mixer, developing a dedicated procedure for propellant production. Resonant acoustic mixers offer some advantages with respect to mechanical mixers, such as, for instance, continuous vacuum mixing and reduced contamination. The composition used for propellant production is 68% ammonium perchlorate, 18% Al and 14% binder.
The propellant characteristics analyzed are: burning rate; the agglomeration process, through the analyses of condensed combustion products (CCPs); and uncured propellant viscosity. Burning rate has been evaluated using the windowed strand burner technique, expressing the results in the form of the standard Vieille's law. A significant improvement to the technique used to collect and analyze condensed combustion products has been made during the course of this work. The analyses carried out on the condensed combustion residuals were addressed to determining their particle size distributions and their chemical compositions.
Burning rate tests show that the propellant P-Al05a has the lowest performance in terms of burning rate and ballistic exponent, which expresses the burning-rate sensitivity to pressure changes. The most interesting results have been shown by the propellant P-Al06, which evidences the lowest ballistic exponent.
Concerning the propellants loaded with micrometric Al powders, the CCP mean volumetric diameters decrease with pressure, while for the propellant loaded with nanometric powder the CCP mean size shows pressure insensitivity (over 10 bar), as can be seen from the results shown in Figure 1. Chemical analyses (XRD and EDX) showed that the principal species identified in the CCPs are aluminum, oxygen and carbon, and that the crystalline phases are metallic Al, α-alumina, γ-alumina and δ*-alumina. Combustion efficiency grows with pressure and is systematically higher for propellants produced with milled powders, while it decreases with pressure for the propellant loaded with nano-aluminum. Once the chemical composition of the CCPs is known, it is possible to estimate the refractive index in order to obtain finer results from the granulometric analyses.
Numerical simulations were carried out to evaluate two-phase flow losses, which represent the most important source of loss for solid rocket motors. The code developed at SPLab is able to track particle evolution inside the combustion chamber and the nozzle, simulating the different interactions between gas and particles as well as between particles. The phenomena simulated include combustion, fragmentation and collision.

1. CCP mean volumetric diameters versus pressure. Particles have been analyzed with a laser granulometer, adopting the Fraunhofer approximation.

Table 1 - Al particle characterization results. Shape factor analyses were not applicable to Al01i.

ID      | D43 [μm]      | FF          | AR          | R           | C           | Metal content [%]
Al05a   | 42.7 ± 0.5    | 0.88 ± 0.02 | 1.28 ± 0.04 | 0.84 ± 0.02 | 0.90 ± 0.02 | 99.5 ± 0.3
Al06    | 66.4 ± 0.7    | 0.70 ± 0.02 | 1.74 ± 0.08 | 0.63 ± 0.02 | 0.79 ± 0.02 | 98.0 ± 0.4
Al05a-M | 65.0 ± 0.6    | 0.78 ± 0.02 | 1.58 ± 0.08 | 0.68 ± 0.02 | 0.82 ± 0.02 | 97.6 ± 0.8
Al06-M  | 69.9 ± 0.8    | 0.80 ± 0.02 | 1.69 ± 0.08 | 0.64 ± 0.02 | 0.79 ± 0.02 | 95.2 ± 0.8
Al01i   | 0.141 ± 0.001 | -           | -           | -           | -           | 88.7 ± 0.2
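The four dimensionless descriptors can be computed from simple 2-D image measurements. The definitions below are the commonly used ones and are an assumption on my part; the thesis may define them slightly differently. The circle sanity check mirrors the near-unity values reported for the spherical Al05a powder in Table 1.

```python
import math

def shape_factors(area, perimeter, d_major, d_minor):
    """Common 2-D particle shape descriptors (assumed definitions):
    form factor  FF = 4*pi*A / P**2        (1 for a circle)
    aspect ratio AR = d_major / d_minor    (1 for a circle)
    roundness    R  = 4*A / (pi*d_major**2)
    compactness  C  = sqrt(4*A/pi) / d_major"""
    ff = 4.0 * math.pi * area / perimeter ** 2
    ar = d_major / d_minor
    r = 4.0 * area / (math.pi * d_major ** 2)
    c = math.sqrt(4.0 * area / math.pi) / d_major
    return ff, ar, r, c

# Unit circle: area = pi, perimeter = 2*pi, both diameters = 2.
ff, ar, r, c = shape_factors(math.pi, 2.0 * math.pi, 2.0, 2.0)
# All four descriptors equal 1 for a circle; elongated or flaky particles
# push FF, R and C below 1 and AR above 1, as in the Al06 row of Table 1.
```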
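Vieille's law, r_b = a * p**n, is usually fitted in log-log form, where the ballistic exponent n is the slope. A minimal sketch on synthetic data (illustrative values, not thesis measurements):

```python
import math

def fit_vieille(pressures_bar, burn_rates):
    """Least-squares fit of Vieille's law r_b = a * p**n in log-log form:
    ln(r) = ln(a) + n * ln(p)."""
    xs = [math.log(p) for p in pressures_bar]
    ys = [math.log(r) for r in burn_rates]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - n * xbar)
    return a, n

# Synthetic burning rates generated from r = 2 * p**0.4 at the test
# pressures used in the work; the fit recovers a = 2 and n = 0.4.
ps = [5.0, 10.0, 20.0, 30.0, 40.0]
rs = [2.0 * p ** 0.4 for p in ps]
a, n = fit_vieille(ps, rs)
```

A low fitted n, as found for P-Al06, means the burning rate is comparatively insensitive to chamber-pressure changes.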
Freeplay is one of the most important nonlinearities that affect the control surfaces of aircraft; it can induce flutter phenomena and limit the performance of the airplane. To investigate the effect of control surface freeplay, an aeroelastic wind tunnel model of a T-tail was developed. A variable-amplitude freeplay was introduced in the control chain by a specifically designed linkage. The numerical models were built according to the modern aeroelastic approach, describing the dynamics of the tail by a state-space system with a lumped nonlinearity.

Introduction
The research on nonlinear aeroelasticity and, in particular, on control surface freeplay is motivated by the significant number of cases known in the literature of aircraft that have experienced Limit Cycle Oscillations (LCO) caused by it. In fact, freeplay in the control chains may arise as a consequence of many factors, including wear of the parts during the aircraft's life. In order to perform numerical and experimental investigations, a wind tunnel model of a T-tail equipped with the rudder and the control system was designed and manufactured.

Experimental rig
The T-tail unit considered in this work is that of the X-DIA, an aeroelastic model representative of a nonconventional three-surface regional jet (called Target Aircraft), intensively investigated in the last few years at the Department of Aerospace Science and Technology of Politecnico di Milano. The model is composed of dynamically scaled aluminium alloy spars, which are inserted in a series of aerodynamic sectors made of styrofoam covered with carbon fiber skins. In order to have a variable-amplitude freeplay, a mechanism was introduced in the control chain between the actuator and the rudder; it is composed of a rigid linkage connected to the rudder, ending with a pin that slips into a fork connected to the gear of the electric motor used to actuate the movable surface.

Numerical models
The T-tail state-space (SS) matrices are built using the structural Finite Element model and the aerodynamic DLM, both developed in MSC.Nastran. The SS model is a Reduced Order Model (ROM) whose basis is made of the free-surface rigid mode plus the significant elastic modes. The frequency-domain aerodynamic matrix is transformed into a finite state-space realization by using Roger's algorithm. The linear aeroelastic behaviour of the model is shown in the numerical V-g/V-f flutter diagrams (see figure 2) computed with the free surface. The first (11.63 Hz) and the second (23.22 Hz) fin bending modes cross the zero-damping line at 47 m/s and 78 m/s, respectively.
The direct integrated (time-marching) model was designed by assembling the structural and the aerodynamic state-space systems. The nonlinearity was introduced as a lumped element in the feedback loop by using a penalty function approach. The model has a time-varying velocity, in order to be able to reproduce the effect of an increase (or decrease) of the free-stream speed during the simulation.
A High Order Harmonic Balance (HOHB) method was developed as well. The method approximates the response of a given nonlinear system undergoing LCOs with a Fourier series, leading to a set of nonlinear algebraic equations that can be solved by an iterative method. Once the numerical models were validated, an alternative control algorithm for vibration reduction was developed.

Experimental tests
Wind tunnel tests were conducted on the experimental rig in order to tune the FE model and to validate both numerical models. The acquisition and control of the system was handled by a hard real-time tool called RTAI.

Results
The experimental results, as well as the numerical ones, show the typical trends of a nonlinear aeroelastic model. Figure 3 shows the LCO amplitude trend with increasing airspeed. The HOHB, like the numerical integrated model and the experimental data, depicts two regions: the first is the consequence of the flutter of the first bending mode, while the second is a combination of the first- and second-mode flutters. The portrait comparisons are shown in figure 4. It is possible to see a good agreement of the trends, even if the HOHB is not completely able to catch the peaks due to the foldings of the freeplay stiffness.

Conclusion
This work presented different methods for the study of nonlinear systems. The results of a numerical integrated model and of a HOHB procedure are compared with experimental data for a T-tail in the presence of freeplay nonlinearity in the rudder's control chain. The methods have shown themselves able to catch the correct solution for the nonlinear system.

1. T-tail aeroelastically scaled wind tunnel model.
2. V-g, V-f diagrams.
3. LCO amplitude: NI - Experimental - HOHB comparison.
4. LCO portraits at 50 m/s: NI - Experimental - HOHB comparison.
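The lumped freeplay nonlinearity inserted in the feedback loop can be sketched as a dead-zone restoring law. This is a minimal illustration only: the stiffness and half-gap values are hypothetical, and the thesis's penalty-function formulation wraps this law into the state-space model rather than calling a function like this.

```python
def freeplay_moment(delta, k, gap):
    """Restoring moment of a control chain with freeplay: zero inside the
    dead band of half-width 'gap', linear stiffness k outside it.
    delta: rudder rotation relative to the actuator command."""
    if delta > gap:
        return k * (delta - gap)
    if delta < -gap:
        return k * (delta + gap)
    return 0.0

# Hypothetical values: k = 100.0 N*m/rad, half-gap = 0.5 deg (in consistent
# units). Inside the gap the surface floats freely, which is what lets the
# LCO develop below the linear flutter speed.
m_inside = freeplay_moment(0.2, 100.0, 0.5)   # 0.0: no restoring moment
m_outside = freeplay_moment(1.0, 100.0, 0.5)  # stiffness engages past the gap
```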
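The harmonic-balance idea behind the HOHB, truncated to a single harmonic and applied to a textbook cubic-stiffness oscillator (not the thesis's freeplay model), reduces the differential equation to one algebraic equation. For x'' + x + eps*x**3 = 0, substituting x = A*cos(w*t) and keeping only the fundamental harmonic gives w**2 = 1 + 0.75*eps*A**2.

```python
import math

def hb_frequency(amplitude, eps):
    """One-term harmonic-balance LCO frequency for x'' + x + eps*x**3 = 0."""
    return math.sqrt(1.0 + 0.75 * eps * amplitude ** 2)

def fundamental_residual(amplitude, w, eps):
    """Coefficient of cos(w*t) left after substituting x = A*cos(w*t);
    the harmonic-balance solution drives it to zero.
    Uses cos(w*t)**3 = 0.75*cos(w*t) + 0.25*cos(3*w*t)."""
    return (-w ** 2 + 1.0) * amplitude + 0.75 * eps * amplitude ** 3

# The balanced frequency zeroes the fundamental-harmonic residual exactly.
w = hb_frequency(2.0, 0.1)
res = fundamental_residual(2.0, w, 0.1)  # 0 up to round-off
```

A higher-order method keeps more Fourier terms, so each harmonic contributes one such algebraic residual, and the resulting nonlinear system is solved iteratively, as described above.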
Vibration analysis of laminated composite […]

[…] and solution of plate and shell structures can be considered a classical problem in structural mechanics. Over the last fifty years, researchers have proposed many different approaches to face this problem, urged by the increasing adoption of flat and curved panels as structural elements in a large variety of engineering applications. For example, rectangular, skew, annular, cylindrical and spherical panels are important components of built-up aircraft and spacecraft structures. Plate and shell structures are also widely applied in the automotive, marine and civil engineering disciplines. In the last three decades, the design of plate and shell structures has known new exciting opportunities through the usage of composite materials. Composite laminated plates and shells offer higher stiffness/strength-to-weight ratios than most metallic constructions and […]

[…] resulting models. Frequently, the […] on two aspects. The first issue is related to the assumptions and simplifications adopted in the mathematical model of the structural component. The second aspect involves the method selected to solve the governing equations of the problem. Models of flat and curved panels based upon the three-dimensional (3-D) theory of elasticity can be considered the most accurate, since no overly simplified assumptions are introduced in describing the kinematics of deformation. As such, 3-D models are suitable for plates and shells of any thickness ratio and any shallowness ratio, ranging from thin and shallow to thick and deep shells vibrating at low to high frequency. However, a full 3-D dynamic analysis, especially for composite plates and shells, is rather complicated and time consuming. Therefore, so-called plate theories aimed […]

[…] the free vibration analysis of […] deformation, rotary inertia and thickness stretching factors. ESL approaches assume a proper overall kinematic field throughout the thickness of the laminated structure, whereas an independent displacement field is postulated for each layer in a LW framework and appropriate continuity conditions are imposed at each layer interface. Typically, both ESL and LW kinematic fields are expressed as complete polynomial series expansions of the thickness coordinate, and the highest power of the assumed polynomial set is generally referred to as the order of the theory. Owing to their intrinsic simplifications, 2-D theories provide reliable models for a limited range of thickness ratios, frequency values and through-the-thickness variations of material properties. The accuracy degrades as the wavelength […]

[…] applications for the solution of […] would be highly desirable. The other important aspect related to the computation of the natural frequencies of laminated panels is the method adopted to solve the equations governing the dynamic problem. Again, many different approaches are available in the literature. Exact solutions satisfying both the differential equations and the boundary conditions are possible only for a limited set of plate and shell geometries, boundary conditions and lamination sequences. In most practical situations, one must rely on approximate methods. The most common and traditional approaches include the Ritz method and the finite element method (FEM). For simple structures, the Ritz method shows better convergence and a lower computational need than FEM. However, the FEM overcomes the limitations of the Ritz method in dealing […]

[…] Therefore, the balance between accuracy and computational savings can be tailored to the specific application under study. The modeling aspect exploits the power and versatility of Carrera's unified formulation, which provides a smart way of handling arbitrary refinements of classical plate theories. The discretization of the problem to obtain an approximate solution of the natural frequencies is performed by the spectral collocation method, which is capable of providing relatively light discretized models to be profitably used in extensive optimization and parametric studies. The spectral collocation method used here, also known as the Chebyshev collocation method or pseudospectral method, can be considered a global spectral method that performs a collocation process, i.e., weighting functions are delta […]

[…] simply-supported and clamped boundary conditions. Some studies are also provided for structures involving elastically restrained edges. Both single-layer isotropic and multi-layered orthotropic plates and shells are considered. It is shown, through a huge number of comparisons with fully 3-D analyses and other references based on 2-D approaches, that the present computational framework is efficient, versatile and accurate. In particular, it is observed that stringent requirements on the accuracy of the computed frequency values can be satisfied only by 2-D high-order layerwise models, in particular when thick and deep anisotropic curved panels are considered. However, the present approach also allows building less costly low-order equivalent single-layer models when a more refined analysis is not required for the specific case under investigation.
today they are fairly common at reducing the problem from of the vibration mode is of with complicated boundary functions centered at special
in advanced engineering three to two dimensions have the order of magnitude of conditions and complex grid points called collocation
applications such as aerospace been introduced by employing the panel thickness and as shapes. More recently, new points. Since the mathematical
systems. appropriate assumptions on the the variation of mechanical emerging meshless methods formulation is simple and
In service, laminated panels are displacement behavior in the properties through the thickness are increasingly applied to powerful enough to produce
typically subjected to various thickness direction. direction increases like the case the analysis of plate and shell approximate solutions close
dynamic loads, and it is crucial Many different displacement- of sandwich panels. This loss problems with the aim of to exact values, this method
that their dynamic response based two-dimensional (2- of accuracy can be successfully eliminating some difficulties has been largely adopted
is well predicted from the D) theories for laminated compensated by using theories existing in FEM such as mesh with success in solving partial
design stage onwards in order panels are available. Generally of higher order and/or relying on distortion and remeshing. differential equations governing
to assure their integrity and speaking, we can distinguish a LW approach, with the price The scope of this thesis is to many physical phenomena such
stability. The accuracy of free between equivalent single- of increasing the complexity present an advanced modeling as fluid dynamics and wave
vibration analysis of laminated layer (ESL) and layerwise (LW) and computational cost of the and solution framework for motion. It was also used in some
34
35
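The pseudospectral approach described above can be illustrated on a toy one-dimensional eigenvalue problem. The sketch below is not the thesis code (the shell formulation is far richer): it builds the standard Chebyshev differentiation matrix on Gauss-Lobatto points and recovers the eigenvalues of a vibrating string, u'' = -λu with u(±1) = 0, whose exact values are λ_k = (kπ/2)².

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on N+1 Gauss-Lobatto points.
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # set diagonal via negative row sums
    return D, x

N = 24
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]               # enforce u(-1) = u(1) = 0
lam = np.sort(-np.linalg.eigvals(D2).real)   # u'' = -lam * u
exact = (np.arange(1, 5) * np.pi / 2.0) ** 2
print(lam[:4])                          # lowest computed eigenvalues
```

Even with only 25 collocation points, the lowest eigenvalues match the analytical values to many digits, which is the property that makes such light discretized models attractive for extensive parametric studies.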
AEROSPACE ENGINEERING

The era of space exploration started in 1957 with the launch of the first man-made object, the Soviet satellite Sputnik I. Since then, a large number of man-made objects have been launched into space, and many of them are still orbiting the Earth. The large majority of the objects currently orbiting the Earth is a result of fragmentations, mostly caused by collisions and explosions. These events can have catastrophic effects on the near-Earth environment: they increase the number of objects and, thus, the probability of further collisions, potentially leading to a collisional cascade. This scenario is named Kessler's syndrome after the scientist who first analysed the effects of the increasing density of resident space objects. Mitigation guidelines have been published by various organisations such as the Inter-Agency Space Debris Coordination Committee (IADC) and the United Nations (UN). The general aim of space debris mitigation is to reduce the growth of space debris by ensuring that space systems are designed, operated, and disposed of in a manner that prevents them from generating debris throughout their orbital lifetime. In parallel, specific space programs were started to build the expertise required to manage the challenges posed by the space traffic control problem.

The thesis deals with the development of new methods for the Space Surveillance and Tracking of the near-Earth environment. All the relevant aspects of the problem are addressed: orbit propagation, orbit determination, conjunction identification, collision probability estimation, and collision avoidance manoeuvre design. The main goal was to implement innovative methods to propagate uncertainties in an efficient and accurate way, which is a major problem when dealing with a large amount of data. In this framework, differential algebraic techniques are used to perform nonlinear propagation of uncertainties on the orbital state and to speed up computationally demanding simulations, such as Monte Carlo methods. An introduction to Differential Algebra (DA) and its tools, as well as a description of the high-order DA-based propagators developed, is given in the first part of the work. In particular, the first ever high-accuracy DA-based numerical propagator is developed, using some of the most recent models for the Earth gravitational field and atmosphere density.

Then, the problem of orbit determination is addressed. A novel algorithm based on batch least squares fit that can process measurements from a bistatic radar with a multibeaming receiver is analysed. The algorithm is capable of estimating, with a single measurement, the whole set of six orbital parameters, with a good accuracy on the orbital position. By adding optical measurements, the estimate gets closer to the reference state and the ballistic coefficient can be estimated as well.

Two algorithms for conjunction identification are proposed. The first is based on the DA version of the analytical propagator SGP4/SDP4 and the rigorous global optimizer COSY-GO; the choice of the objective function is such that all stationary points of the relative distance between the two objects can be computed in the time window of interest, with a computational time that ranges from a few to tens of seconds. The second algorithm uses the procedure for the DA expansion of the time and distance of closest approach. The advantage of this approach is that it provides the polynomial approximation of the distance of closest approach with respect to the uncertain initial states of both objects, which can be used to efficiently compute collision probabilities.

The three methods for the collision probability computation exploit the availability of the DA expansion of the distance of closest approach to perform fast Monte Carlo simulations. The numerical simulations to compute the minimum relative distance for each sample of the simulation are replaced by fast polynomial evaluations. A DA-based standard Monte Carlo method and two advanced Monte Carlo techniques, Line Sampling and Subset Simulation, are used. These advanced techniques limit the number of samples required to compute sufficiently accurate estimates of the collision probability, which is usually well below 10⁻³. Since they are based on polynomial evaluations, the methods allow for large computational time savings: as an example, when 1000 samples are required, the computation time can be reduced by two orders of magnitude. Besides enabling the collision probability computation in a Monte Carlo fashion, without any assumption on the relative dynamics as in the classical algorithms, the developed methods can be used with any statistics, such as uniform distributions or Gaussian mixtures.

The design of collision avoidance manoeuvres is tackled as a multi-objective optimization problem, using a particle swarm optimizer. Two approaches are analysed: the first is based on SGP4/SDP4 and the corresponding method for conjunction identification, whereas the other uses the DA-based numerical propagator and the expansion of the distance of closest approach with respect to the execution time and the manoeuvre velocity vector. The optimization returns a set of fuel-efficient manoeuvres that can raise the miss-distance and decrease the collision probability, besides being compliant with mission constraints.

Overall, the proposed algorithms can be combined in a comprehensive DA-based Space Surveillance and Tracking tool. The tool would be able to manage the uncertainties by considering the nonlinearities arising from orbit determination and orbit propagation, and could produce accurate estimations of the collision probability to rank close conjunctions. More in general, the tool would support the management of space traffic, re-entry, and observation scheduling. In the scope of the mitigation guidelines, any improvement in handling such operations will have beneficial effects on space debris population control and the future exploitation of space.

1. Research activity: workflow and interconnections between developed algorithms.

2. Computational time of DAMC, DALS, and DASS vs. collision probability.
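The key computational idea, replacing each propagation inside the Monte Carlo loop with the evaluation of a polynomial expansion of the miss distance, can be sketched in a few lines. The snippet below is a deliberately simplified one-dimensional stand-in: the miss-distance model and its Taylor coefficients are invented for illustration, whereas the actual DA expansions are high-order and multivariate.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Expensive" propagation, collapsed here to an analytic toy miss-distance
# model of a single uncertain initial condition x (the real problem is
# multivariate over the states of both objects).
def miss_distance(x):
    return 1.0 + 0.5 * x + 0.8 * x**2 + 0.1 * x**3

# Truncated Taylor surrogate about the nominal x = 0, playing the role of
# the DA polynomial expansion of the distance of closest approach.
surrogate = np.polynomial.Polynomial([1.0, 0.5, 0.8])

x = rng.normal(0.0, 0.6, 200_000)      # Monte Carlo samples of the uncertainty
threshold = 1.0                        # "conjunction" if closer than this
p_direct = np.mean(miss_distance(x) < threshold)   # one propagation per sample
p_poly = np.mean(surrogate(x) < threshold)         # fast polynomial evaluations
```

The two estimates agree up to the truncation error of the surrogate, while the polynomial version avoids re-running the propagator for every sample, which is where the reported speed-ups come from.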
Vibration-Based Damage Identification

The increasing complexity of design and the longer lifetime periods imposed in civil, mechanical and aerospace structures make it increasingly important to monitor the health of these structures. A wide variety of highly effective non-destructive methods, such as acoustic or ultrasonic methods, magnetic field methods, penetrating liquids, eddy-current methods or thermal field methods, and so on, are currently available for the detection of defects. Unfortunately, they are all localized techniques, implying long and expensive inspection times; often, structural components are not inspected just because of their inaccessibility, and damage can propagate to critical levels between the inspection intervals. The drawbacks of current inspection techniques have led engineers to investigate new methods for continuous monitoring and global condition assessment of structures. That is the case for methods based on vibration responses, which allow one to obtain meaningful time and/or frequency domain data, to calculate changes in the structural and modal properties, such as resonance frequencies, modal damping and mode shapes, or FRF-based quantities, and to use them with the objective of developing reliable techniques for the detection of damage.

The fundamental idea of the vibration methods for damage detection is that modal parameters (natural frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The process thus involves the observation of a structure/system over time using periodical measurements. In other words, most vibration-based damage detection methods can be considered as some form of pattern recognition problem, as they look for differences between two or more signal categories, e.g. before and after a structure is damaged, or differences in damage levels or locations.

The work here presented addresses the subjects of damage detection, localization and quantification in structures. The reported examples show the implementation and comparison of a number of various damage detection and localization methods based on changes in the vibration responses. The objective of such a study is to ascertain the possibility of using various damage detection and localization methods with and without the need for modal identification. In recent years, the authors have developed some simple methods and tools for damage identification without having to know the complete stiffness and mass matrices of the structure. The use of directly measured FRF data, which provides an abundance of information, is further beneficial, as the execution of experimental modal analysis is not needed, thus greatly reducing human-induced errors.

Here, all methods are first tested on data of a simple steel beam structure (see Figure 1) to assess their feasibility and performance. Then all proposed methods are applied to a more complicated structure, a typical aircraft stiffened panel, on which different types of structural change are studied, i.e. removing some screws to disconnect the stiffener from the panel, and creating a saw cut through the whole height of the second stiffener. FRFs of the panel in each state are recorded by two measurement systems (see Figure 2, LMS Hammer Testing and PSV-Laser Scanner Vibrometer), and for each scenario the related mode shapes are extracted by the PolyMAX algorithm of LMS Testlab software. In addition, in numerically simulated examples, after correlation, the data generated by MSC/Nastran are polluted with a 3% random noise level.

The study shows the potential of the proposed methods for simple and rapid detection, accurate and reliable localization and monitoring of damage in structures. The reported examples also show that some proposed methods (i.e. PCA-based, transmissibility-based) are highly capable of damage detection and localization and structural health monitoring even with noise-polluted data.

a) LMS Hammer Testing b) PSV-Laser Scanner Vibrometer
1. Experimental test setup of steel beam. 2. Experimental test setups of typical aircraft aluminum stiffened panel.
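The core premise, that a stiffness change produces a detectable shift of the modal parameters, can be illustrated with a minimal lumped-parameter sketch. The 4-mass chain below is a hypothetical example, not one of the test articles: reducing the stiffness of one spring lowers the natural frequencies obtained from the eigenvalue problem.

```python
import numpy as np

def chain_freqs(k):
    """Natural frequencies (rad/s) of a fixed-free spring-mass chain with
    unit masses; k[i] is the stiffness of the spring above mass i."""
    n = len(k)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return np.sqrt(np.linalg.eigvalsh(K))   # M = I, so eigenvalues are omega^2

intact = chain_freqs([1e4, 1e4, 1e4, 1e4])
damaged = chain_freqs([1e4, 0.7e4, 1e4, 1e4])   # 30 % stiffness loss, spring 2
shift = (intact - damaged) / intact             # relative frequency drops
```

Because the damaged stiffness matrix is dominated by the intact one, no frequency can increase, and the pattern of the drops across the modes is what localization methods exploit.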
MODEL IDENTIFICATION AND CONTROL OF VARIABLE PITCH QUADROTORS

The use of quadrotor platforms for both research and commercial Unmanned Aerial Vehicle (UAV) applications is steadily increasing. In particular, some of the envisaged applications for quadrotors lead to tight performance requirements on the attitude control system, so wide-bandwidth controllers must be designed. This, in turn, calls for increasingly accurate dynamics models of the vehicle response, to which advanced controller synthesis approaches can be applied. In view of these considerations, the main goal of this thesis work was the development of an integrated, highly automated procedure, and the relative tool chain, aimed at a fast and reliable deployment of the attitude control system for variable pitch quadrotors, encompassing the identification of a linear control-oriented model for the attitude response and the optimization-based tuning of the parameters of the on-board controller structure. The research activities were carried out exploiting the collaboration with AERMATICA SpA: a quadrotor prototype, with the relative ground systems and laboratory bench test, was made available by the company, together with all the necessary vehicle data.

Proper identification experiments of the pitch attitude dynamics in hovering were designed, acting on the quadrotor command, and conducted on a quadrotor prototype, in both laboratory bench test and in-flight conditions. The gathered data was used to feed a number of different identification methods, exploring both black-box and grey-box approaches, to obtain LTI control-oriented models. From an exhaustive comparison of the performance of the obtained pitch attitude dynamics models in replicating the experimental data, the black-box PBSID subspace method appears to be the best candidate for the identification part of the procedure: it is able to deal with data generated in closed loop and it is computationally efficient. Moreover, the PBSID-identified models demonstrate that operating the quadrotor constrained on a single-DoF test-bed is representative of the actual attitude hovering dynamics in flight: the ν-gap analysis between the models of the two different conditions assures that a controller synthesis using the test-bed model will guarantee acceptable performance also when applied to the in-flight one.

Concerning the control synthesis part of the tool chain, it was preferred to maintain the preexisting attitude controller scheme (cascade PID loops) adopted on board of the considered quadrotor, in order to simplify the implementation process. Therefore, the robust control design approach selected was structured H∞: the optimal tuning of the PID parameters was determined by assigning desired closed-loop stability and performance requirements, aimed at improving the vehicle wind gust rejection capability compared to the previous tuning, obtained with a manual trial-and-error process executed directly on the vehicle, and at replicating its set point tracking performance, considered already adequate. The structured H∞ synthesis was applied to both the test-bed and in-flight identified PBSID models of the pitch attitude dynamics in hovering: the results obtained in simulation demonstrate the fulfillment of the requirements and, above all, that the optimal tuning obtained with the test-bed model in the loop can be applied also in flight with a non-significant loss in control performance. Hence, the attitude controller tuning can be achieved using models obtained in safe (and more repeatable) identification experiments executed indoors on the test-bed, avoiding a risky and time-consuming in-flight identification test campaign. The performance in terms of set point tracking, for the optimal tuning obtained with the test-bed model, was evaluated experimentally on the quadrotor in a dedicated test campaign.

In conclusion, the entire control design process, specifically addressed to the near-hovering condition (quadrotors mainly operate in this flying regime during typical missions), was developed and successfully applied to the real case of the considered quadrotor pitch attitude DoF (with results applicable also to roll, considering the geometrical symmetry of the vehicle). In order to obtain a final validation of the procedure, the optimal tuning parameters (in both cases, using test-bed and in-flight models) will be applied on board, testing the considered vehicle control performance in flight, also in the presence of wind gusts. To complete the tool chain, the work will be naturally extended to the yaw DoF. The use of the developed approach represents a valid and faster alternative to the manual empirical process for attitude controller tuning, and would be included in the AERMATICA control design and development process in the near future. As a future work extension, a similar integrated procedure may be developed for the translational quadrotor DoFs, clearly without the possibility to perform identification experiments on the bench test.

The secondary topic addressed in this thesis was the development of an emergency maneuver to be performed in case of a rotor fault, a situation in which the ability to safely conclude the flight without hurting people or causing damage could be mandatory. Moreover, strict safety requirements are expected to be imposed by forthcoming regulations about the use of small-medium size UAVs for civil applications. The vehicle dynamics modeling with three working rotors was discussed: a single-DoF model for the yaw was developed, characterizing the vehicle spin dynamics, and validated through a test campaign on a proper bench test. Hence, the vertical DoF dynamics was added to the yaw, adopting a proper model for the rotor inflow in descent operating states where the momentum theory is invalid (Vortex Ring State and Turbulent Wake State). The reliability of the adopted rotor aerodynamics model was demonstrated through a wind tunnel test campaign conducted on an isolated rotor. Concerning the roll/pitch attitude control strategy to perform the proposed emergency maneuver, the problem of maintaining the pitch (or roll) angular set-point when one of the two opposite rotors of the longitudinal (or lateral) quadrotor axis goes into fault (avoiding the vehicle turn-over at the thrust loss instant) was tackled, exploiting the capability to generate negative rotor thrust, a peculiar feature of variable pitch quadrotors, to avoid vehicle flip.

In order to complete the validation of the proposed one-rotor fault recovery automatic maneuver, with the aim of implementing the feature on AERMATICA vehicles, future activities of incremental complexity are planned before the final test in flight. The next step will be the testing of roll and pitch control with a faulty rotor when the spin about the yaw axis is present, safely operating the quadrotor on a proper 3-DoF test-bed constraining only the vehicle translations and concurrently freeing the three rotations. A further work will be the development of a complete simulation environment, to conduct a comprehensive validation of the emergency maneuver, considering also the vertical and longitudinal/lateral dynamics.
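The ν-gap comparison between test-bed and in-flight models can be sketched for SISO systems by evaluating the pointwise chordal distance over a frequency grid; the peak value equals the ν-gap whenever the winding-number condition is satisfied (assumed here). Both second-order pitch models below are invented placeholders, not the identified PBSID models.

```python
import numpy as np

def nu_gap_siso(P1, P2, w):
    """Peak pointwise chordal distance over the frequency grid; equal to the
    nu-gap metric when the winding-number condition holds (assumed here)."""
    g1, g2 = P1(1j * w), P2(1j * w)
    kappa = np.abs(g1 - g2) / (np.sqrt(1 + np.abs(g1)**2)
                               * np.sqrt(1 + np.abs(g2)**2))
    return kappa.max()

# Hypothetical pitch-attitude transfer functions: test-bed vs. in-flight.
P_bench = lambda s: 8.0 / (s**2 + 2.0 * s + 8.0)
P_flight = lambda s: 9.0 / (s**2 + 2.4 * s + 9.0)

w = np.logspace(-2, 3, 2000)                 # rad/s evaluation grid
gap = nu_gap_siso(P_bench, P_flight, w)      # small value -> similar dynamics
```

A gap well below one indicates that a controller designed on the test-bed model should behave similarly on the in-flight dynamics, which is exactly the argument used above to justify indoor identification.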
mass of orbital artificial objects increased quite steadily, leading to a total of about 6670 metric tons. Most of the cross-sectional area and mass (97% in low Earth orbit, LEO) is concentrated in about 4500 intact objects, i.e. abandoned spacecraft (S/C) and rocket bodies (R/Bs), plus a further 1000 operational S/C. According to NASA results, the active yearly removal of 5 large abandoned intact objects would be sufficient to stabilize the debris growth in LEO, together with the worldwide application of mitigation measures. However, besides legal and political issues, Active Debris Removal (ADR) is strongly hampered by the high costs involved. In this research, hybrid rocket propulsion is proposed as a valuable option for ADR missions. This technology has advantageous features: non-toxic propellants that, besides their lower price, reduce the complexity of handling, storability and load operations, decreasing the connected costs and avoiding the need of special staff; throttleability, which could favor the rendezvous phase and allows for soft initial accelerations of weak S/C targets; and reignition capability for multi-burn operations.

For an ADR mission, the propellant couple HTPB+H2O2 might represent a suitable choice: HTPB grains boast high mechanical and aging properties and isotropic combustion, whereas the hot gases generated by the catalytic decomposition of H2O2, at high concentration (90%), can be exploited for the HTPB ignition and reignition, as well as for a Reaction Control System (RCS) spilling from the same main tank. Moreover, this propellant couple theoretically causes a lower nozzle throat erosion with respect to HTPB with a pure oxygen oxidizer. However, hybrid propulsion has not been tested in the orbital environment yet (low TRL).

Concerning the ADR mission, two targets are considered: Envisat (7.8 t) and the Cosmos-3M 2nd stage (1.4 t). The first is characterized by the highest removal priority; the second is a good candidate for multi-removal and demonstrative missions, due to its large number in LEO (~300); the same developed technology might be used to remove larger abandoned R/Bs, such as the Zenit 2nd stage (8.9 t). Controlled reentry is the preferred solution for the disposal of large objects; a single boost maneuver is assumed to lower the perigee below 60 km with a FPA < -1.5° at an interface of 120 km. A ΔV of 200 m/s is required starting from an orbit of about 770 km. The mission is performed by a Hybrid Propulsion Module (HPM) and an ADR platform, which loads all the systems for mission control, object capture and RCS.

In order to evaluate hybrid propulsion for ADR, a design tool for overall internal ballistics analysis and preliminary sizing was implemented, based on the same approach recently suggested by Funami and Shimada (Japan Aerospace Exploration Agency). The code uses moderately complex models and allows estimating the fuel regression rate for the couple HTPB+H2O2, for which no data are presently available in the literature. The flow field is described by the Q1D compressible inviscid Euler equations with a conservation equation for the mixture fraction. The calculation domain consists of the combustion chamber and a nozzle. The solid fuel has a single circular perforation. The mass addition from the solid fuel surface is considered as a source term, in which the fuel mass flux is evaluated with the Marxman model, assuming a purely convective regime. The code considers only the gaseous phase for both the fuel and the oxidizer. Chemical equilibrium is applied for the combustion, considering 9 chemical species to simulate both HTPB+O2 and HTPB+H2O2. The domain is discretized using the MUSCL+limiter method for high-order accuracy of the spatial solution. AUSMDV is used for the numerical flux evaluation. The time integration is performed by a two-stage Runge-Kutta method. The time evolution of the chamber conditions is achieved by compiling the convergence results, providing a quasi-static description of the regression rate during combustion.

For validation purposes, the regression rate rf estimated with the Marxman model was compared with experimental results for HTPB+O2, obtaining an underestimate of about 16% at high mass flux (Gox) and up to 50% at low Gox. This discrepancy can be ascribed both to the effects of the injection geometry, which, in the real case, can produce significant differences from Marxman's theory, and to the use of a purely convective heat transfer model (soot particles cause an important radiative contribution). However, the computed rf range is assumed acceptable for the conservative preliminary design and propellant mass budget of an HPM. The ideal motor performance evaluated for pure H2O2 is reduced considering the oxidizer dilution (90%), the Bray approximation for the nozzle expansion, and the nozzle multi-dimensional losses, by means of a correction factors database estimated with NASA CEA. Throat erosion is introduced through the numerical simulation results of Bianchi and Nasuti (La Sapienza, Rome).

The motor design aims at minimizing sizes and masses, keeping low combustion times (<100 s), since the HPM nozzle is not cooled to avoid the need of an expensive system. From the design analysis, the highest performance (Is) is achieved with a length-to-diameter ratio (L/D) of about 12 for the solid grain, resulting in long and thin motors and relatively low oxidizer mass flow rates. Single-burn maneuvers can be used for low ΔV budgets, while higher ΔVs require a multi-burn strategy to satisfy the burn time constraint. Finally, the HPMs for the selected targets assume a ΔV of 240 m/s. The Cosmos-3M might be removed by an HPM of 258 kg (DeoKit 566.8 kg), with a single-boost maneuver. Vega can load 2 DeoKits, while the Soyuz launcher up to 6, allowing for a multi-removal scenario. For the Envisat removal, a two-burn disposal is preferred to reduce the HPM mass and size, combustion times and accelerations. In the case of S/C, the stresses on the structure must be limited to avoid the breaking risk of external appendages (i.e. solar panels, etc.). The accelerations achieved are below 0.14 g. Envisat might be removed by an HPM of 1260 kg (DeoKit 1771 kg). Soyuz can load two DeoKits. Hybrid propulsion might represent a key choice for ADR applications. To overcome the lack of on-orbit demonstration, a possible approach could be to equip new satellites with small hybrid motors able to perform their post-mission disposal.

1. ADR mission profile description. The DeoKit is assumed to be released by a launcher upper stage on the same orbital plane of the selected target. Rocket bodies, such as Cosmos-3M 2nd stage, reveal two possible connection points: nozzle or payload adapter.
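The quasi-static coupling between port geometry and a Marxman-type regression law can be sketched as a simple explicit time-marching loop. All numbers below (regression coefficients, flow rate, grain geometry) are assumed for illustration only and do not come from the thesis tool.

```python
import numpy as np

# Illustrative quasi-steady single-port ballistics loop (not the thesis code).
# Regression law rdot = a * Gox**n with assumed HTPB-like coefficients.
a, n = 1.5e-5, 0.68        # regression coefficients (SI units, assumed)
rho_f = 920.0              # solid fuel density, kg/m^3
mdot_ox = 2.0              # oxidizer mass flow, kg/s
R, L = 0.03, 1.0           # initial port radius and grain length, m

dt, t, fuel = 0.01, 0.0, 0.0
while t < 20.0:            # 20 s burn, explicit Euler marching
    Gox = mdot_ox / (np.pi * R**2)            # oxidizer mass flux in the port
    rdot = a * Gox**n                         # Marxman-type regression rate
    fuel += rho_f * 2.0 * np.pi * R * L * rdot * dt   # fuel consumed this step
    R += rdot * dt
    t += dt
```

As the port opens up, Gox and hence the regression rate fall, which is the O/F-shift mechanism mentioned in these abstracts; a full tool couples this loop to the chamber gas dynamics and nozzle flow.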
Due to their potential for high performance, inherent safety, throttling and restart capability, and low development costs, hybrid rocket engines are believed to be good candidates for launcher boosters, suborbital launchers and landers, and in particular for in-space propulsion applications. In order to be competitive with the other chemical rocket propulsion systems, i.e. liquid rocket engines and solid rocket motors, hybrid rocket engines should maintain high performance while preserving their features. Moreover, even if the environmental impact does not directly affect the system performance, it is of fundamental importance in light of present regulatory requirements and its effect on society, so the possibility to select low-cost and non-toxic fuels makes this technology even more interesting.

In order to perform a comprehensive analysis of hybrid rocket engine design trends and demonstrate their potential, a design/optimization tool for an in-space, single-stage hybrid rocket propulsion system is developed. The hydrogen peroxide/paraffin-based propellant combination is taken as the baseline: a cost function given by the ratio of the overall propulsion mass (the propulsion system structural mass and the propellant mass) to the payload mass is minimized over key propulsion variables such as the chamber pressure, the oxidizer-to-fuel ratio O/F, the nozzle area ratio and the burning time. Sensitivity analyses on geometrical constraints, fuel regression rate, O/F shift and nozzle erosion rate were performed. Major findings show that: the fuel regression rate has the highest influence on the engine size and gross mass, but it should be underlined that the minimum cost function value is not reached for the highest considered regression rate; the nozzle erosion rate also has a high influence on the performance, due to the decreasing nozzle expansion ratio during the engine burn, resulting in a lower delivered specific impulse; the O/F shift, caused by the variability of the fuel mass flow during combustion, does not have a strong effect on the system performance.

Calculations were also carried out with the hydrogen peroxide/hydroxyl-terminated polybutadiene (HTPB) propellant combination. For this slow-burning fuel, for which both single-port and multiport configurations were analysed, the optimization leads to a higher cost function and a too long combustion chamber. The multiport configuration can help in terms of decreasing the L/D of the engine, but the cost function still remains much higher if compared to the fast-burning paraffin fuel baseline case. Inevitably, the hybrid system in a multiport design configuration does not preserve its intrinsic simplicity. The appealing design is consequently identified with a single circular port solution with a fast-burning fuel such as paraffin, which also preserves the low-cost and non-toxicity requirements.

In order to be used as a fuel, a paraffin-based mixture should fulfill a number of requirements, such as ease of manufacturing and good mechanical properties, while preserving high regression rates and a good thermal stability. These requirements are related to the thermal, rheological, viscosimetric, ballistic and physical properties of the paraffin-based mixtures. Starting from the selection of six different paraffin waxes, both macro and microcrystalline, combined analyses were performed on their pure form and on related mixtures that are likely to be used as solid fuels in hybrid rocket engines. The regression rates of paraffin waxes mixed with stearic acid and graphite, which show liquid droplet entrainment, are up to three times higher with respect to the baseline HTPB. Among these paraffin-based fuels, the one based on micro paraffin wax shows a lower enhancement. It was shown experimentally, by tests performed at the Space Propulsion Institute of DLR, that the lower the viscosity of the paraffin-based mixture at 120 °C, the higher the measured regression rate. The results are well in agreement with the current theoretical combustion model in the literature for liquefying fuels, for which the less viscous the fuel liquid layer, the higher the fuel regression rate. If on one hand the micro paraffin shows lower regression rates, on the other hand rheological tests reveal its mechanical resistance at higher external temperatures. It was even experimentally demonstrated that improvements of the softening point, up to 51 °C, are achievable by mixing a microcrystalline wax with a synthetic wax characterized by a higher nominal melting temperature, without penalizing the other properties of the paraffin waxes. For example, one of them, showing really high regression rates, reaches ductile rupture at a maximum tensile stress of 3 MPa.

Concerns in the use of paraffin-based mixtures are also caused by the repeatability of the manufacturing process and the homogeneity of the solid grain. To better control the grain manufacturing process, an experimental facility was set up and a repeatable procedure was implemented to measure the wax density in the softening and melting intervals. To improve the fuel grain homogeneity, the substitution of graphite, commonly used as an additive to give a black color and increase the mixture absorptivity, with a black dye proved to be successful, and negligible mean regression rate losses in the considered oxidizer mass flux range were observed.

To sum up, all the collected data establish a unique and useful database for the identification and measurement of the fundamental properties for the applicability of liquefying fuels in hybrid rocket engines. Thanks to this database, useful advice was identified: generally, the micro paraffin-based mixtures are suggested, except when really high regression rates are demanded and the external loads are severe. For the studied application, one macro and one microcrystalline paraffin wax were identified as good candidates.

Finally, the proposed hybrid rocket design solution was compared, in terms of gross mass, with other existing chemical rockets for in-space applications having the same total impulse. Another comparison was performed in terms of ideal thermochemistry with different fuel/oxidizer couples. The obtained results, and the preservation of all the features that are making these systems more and more studied and tested among the scientific community, underline and support their real competitiveness.
and a lunar mission are selected geometries were considered. engines. Results obtained by compromising the regression should be preferred for
as the reference case. Spacecraft The comparison points out ballistic experiments performed rate. But uniaxial tensile tests applications with large storage
gross mass (defined as the sum that for this fuel, the single on a radial micro combustor at reveal that higher maximum and operations temperature
of the payload mass - everything port configuration results in SPLab in Politecnico di Milano, stresses and good elasticity intervals, macro paraffin based
except the propulsion system -, really high mass ratio based demonstrate that the regression are reached with the macro mixtures should be preferred
44
45
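The coupling the summary describes between oxidizer mass flux, fuel regression rate, port geometry and O/F shift can be sketched with the classical power-law regression model r = a·G_ox^n commonly used in hybrid-rocket preliminary design. This is an illustrative sketch only, not the design/optimization tool developed in the thesis: the coefficients a and n, the fuel density, and every operating value below are hypothetical placeholders.

```python
# Illustrative sketch (NOT the thesis's design tool): classical power-law
# regression model for a single circular port hybrid rocket fuel grain.
# All numeric values are made-up placeholders for demonstration.
import math

def regression_rate(g_ox, a=1.5e-4, n=0.62):
    """Fuel regression rate [m/s] for an oxidizer mass flux g_ox [kg/m^2/s]."""
    return a * g_ox**n

def burn_step(port_radius, mdot_ox, rho_fuel, grain_length, dt):
    """Advance a circular port by one time step.

    Returns the new port radius [m], the instantaneous fuel mass flow
    [kg/s], and the O/F ratio."""
    area = math.pi * port_radius**2
    g_ox = mdot_ox / area                       # oxidizer mass flux
    r = regression_rate(g_ox)                   # regression rate
    # Fuel comes off the cylindrical port surface:
    mdot_fuel = rho_fuel * r * 2.0 * math.pi * port_radius * grain_length
    return port_radius + r * dt, mdot_fuel, mdot_ox / mdot_fuel

# As the port opens up, G_ox drops and the O/F ratio shifts during the burn:
radius = 0.03                                   # initial port radius [m]
for _ in range(20):                             # 10 s burn in 0.5 s steps
    radius, mdot_fuel, of_ratio = burn_step(radius, mdot_ox=0.5,
                                            rho_fuel=920.0,
                                            grain_length=0.4, dt=0.5)
print(f"final port radius {radius*1e3:.1f} mm, final O/F {of_ratio:.2f}")
```

With n < 1 the fuel mass flow decreases slightly as the port widens, so the mixture ratio drifts oxidizer-rich over the burn; this is the O/F shift whose effect on system performance the thesis evaluates.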
AEROSPACE ENGINEERING

Cylindrical and spherical converging shock waves can be used to attain high energy concentration at the focus point, thus making them interesting for applications where high temperature and pressure are required, e.g. in Inertial Confinement Fusion. Unfortunately, converging shock waves suffer from corrugation instabilities which hamper the front regularity and reduce the shock intensity with respect to the axisymmetrical case.
The stabilization of the converging shock wave may be obtained by means of the so-called shock reshaping, i.e. by changing the shock shape into a more stable one. The literature points to the use of suitable shock-solid body interactions to reshape the converging cylindrical waves into stable prismatic ones.
The dynamics of the interaction between cylindrical shock waves converging in air and circular-arc obstacles is investigated here. Diverse geometric configurations (the obstacle number, thickness-to-chord ratio and leading edge radius), operating conditions (the pressure ratio across the discontinuity used to generate the shock wave, and the gas conditions) and thermodynamic models (ideal and van der Waals gas thermal equations, with polytropic and harmonic constant-volume specific heat) are devised from the reference configuration, in which symmetric lenticular obstacles are introduced to reduce the shock-obstacle losses.
Numerical simulations are performed using the FlowMesh code, a Finite-Volume solver for the Euler equations. To reduce the simulations' computational cost, a multi-domain approach is developed. The global domain is split into three regions - the Far-Field, the Obstacle and the Focus Region - dedicated respectively to the simulation of the cylindrical shock generation, of the reshaping and of the focusing. A technique which exploits the symmetry of the reshaping is developed for reducing the azimuthal extension of the three regions (from a 360°-spanning domain to a (360/N)°-spanning one, that is, the elementary domain), and for reconstructing a posteriori the solution outside of the boundaries (fig. 1). A novel method to trace the shock position in time is also developed. It applies to solutions computed by means of numerical schemes which describe the pressure across the shock wave as a continuous ramp, and it accounts for the very complex shock-induced flow field. This criterion applies both to the evaluation of the shock position at a given time and to the determination of the time associated to the shock passage by a given radius. The solution computed using these two new tools provides the same accuracy as full two-dimensional simulations, with a reduction of the computational time of more than one order of magnitude.
Numerical results concern mainly two phenomena: one is the local shock-obstacle interaction, and the other is the complete reshaping and focusing of the shock wave.
The first phenomenon, i.e. the leading edge reflections, is studied here for the first time in the case of cylindrical shock waves interacting with cylindrical obstacles. Diverse reflection types are observed and classified in accordance with pseudo-steady reflection criteria, highlighting the onset of similar patterns but for different configurations (fig. 2). An analytic model is proposed for the description of the unsteady evolution of the Regular Reflection, which agrees fairly well with the numerical results. The conditions for the Regular-Mach Reflection transition along the obstacle are determined. The influence of the obstacle geometry is observed on the so-called absolute wedge angle, whereas it is absent on the perceived wedge angle; for both angles, M presents a null or negligible effect. The trajectory of the Triple Point is traced for genuine Mach Reflections. Independence from the shock intensity and a pseudo-homothetic behavior of the trajectories with respect to the leading edge radius are observed. Due to theoretical considerations, a second-order fitting is derived only on data sampled along the first half-chord, showing good accordance with the data. Considerations on the offset of the trajectories from the reflecting surface suggest that the definition of Inverse Mach Reflection in the presence of cylindrical converging shock waves is more complex than for planar shocks.
The second phenomenon is the reshaping of the shock. N is included among the investigated factors. Contrary to the behavior of the leading edge patterns, which do not exhibit any dependence on M, the values of pressure P and temperature T attained at the focus point depend mostly on it. N is found to be rather influential, especially on P. The leading edge radius produces a weaker effect than the other factors and, in addition, does not exhibit a particular trend. In all the configurations, larger thickness-to-chord ratios cause more relevant losses and, therefore, lower values of P and T. The configuration producing the highest temperature peak at the focus point consists of 16 obstacles with the lowest thickness-to-chord ratio and a leading edge radius of 14 cm, associated to a shock produced by an initial pressure ratio of 27. Different shock patterns are observed after the leading edge reflection, including reflections over the upper symmetry boundary, post-trailing edge patterns and the nozzle effect. Depending on the configuration and, therefore, on the resulting patterns, polygonal shock waves are observed with a time-dependent number of edges, switching among N, 2N, 3N and 4N configurations.
A general decrease of the focusing effectiveness is highlighted by the study of the effects of the adopted thermodynamic model, if the van der Waals model is considered. The applicability of the self-similarity assumption of the shock propagation is empirically tested in conditions for which theoretical models are not available. Pseudo-self-similarity exponents are computed for four diverse thermodynamic models, highlighting a trend between N and the self-similarity exponent. The unsteady shock wave convergence is traced in the pressure-specific volume plane. One-dimensional results show an excellent accordance with the Hugoniot adiabat. On the contrary, a departure of the numerical data concerning two-dimensional shock waves from the analytic curves is observed, due to fast but intense transient phenomena in correspondence of the shock reflections.

1. Numerical Schlieren of the 2D-2D domains overlapping zone.
2. Leading edge reflection types, depending on the diverse values of M and N.
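The symmetry argument used above to shrink the computational domain can be illustrated in a few lines: for N identical, evenly spaced obstacles the flow is periodic in the azimuth with period 360/N degrees, so it suffices to compute on one elementary wedge and recover every other angle by folding it back. This is a minimal sketch of the idea only, not the FlowMesh implementation; the wedge "solution" below is an arbitrary stand-in function, not simulation output.

```python
# Toy illustration of computing on a (360/N)-degree elementary domain and
# reconstructing the full 360-degree solution a posteriori by periodicity.
# The wedge solution here is an arbitrary placeholder function.
import math

def fold_to_elementary(theta_deg, n_obstacles):
    """Map an azimuth in degrees onto the (360/N)-spanning elementary domain."""
    period = 360.0 / n_obstacles
    return theta_deg % period

def full_field(theta_deg, n_obstacles, wedge_solution):
    """Evaluate the full-domain field using only the wedge solution."""
    return wedge_solution(fold_to_elementary(theta_deg, n_obstacles))

# Stand-in wedge solution for N = 8 obstacles (defined on [0, 45) degrees):
wedge = lambda t: math.cos(math.radians(8.0 * t))

# Angles 45 degrees apart fold onto the same wedge point:
print(full_field(10.0, 8, wedge), full_field(55.0, 8, wedge))  # equal values
```

The real code stores the flow field on the wedge mesh and rotates N copies of it to rebuild the full domain for visualization; the folding function above is the one-line core of that reconstruction.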
Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircraft | Spatial Planning and Urban Development | Structural Seismic and Geotechnical Engineering | Technology and Design for Environment and Building | Territorial Design and Government | Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering

PhD Yearbook | 2015

DOCTORAL PROGRAMME IN ARCHITECTURAL AND URBAN DESIGN

The Architectural and Urban Design PhD course started in 1992 [...] theoretical and methodological contribution offered by the Faculty [...]
Advisory Board
Matthew J. Bell, AIA, University of Maryland (USA)
Françoise Choay, Paris (F) (formerly at Université de Paris VIII)
Jo Coenen, TU Delft (NL)
Giangiacomo d'Ardia, Roma (I) (formerly at Università di Chieti-Pescara)
Pierre-Michel Delpeuch, Atelier Villes & Paysages, Lyon (F)
Nicola Emery, USI, Accademia di Architettura, Mendrisio (CH)
Pedro Ortiz Castano, Urban Planning, World Bank, Washington DC (USA)
Ferran Sagarra Trias, UPC, ETSAB, Barcelona (E)
David Grahame Shane, Columbia University, New York (USA)
Michael Schwarting, New York Institute of Technology, New York (USA)
How can we define Urbanism in [...]

OTHER ISLANDS
This research starts from a [...]

MONUMENTAL ABSENCES
[...]

The word network has [...]

The present research focuses [...]
DOCTORAL PROGRAM IN ARCHITECTURAL COMPOSITION
Chair: Prof. Marco Prusicki

[...] history of the architect's craft, of the profession and of the wealth of architectural techniques. The objective is to educate professional architects from a general point of view, in order to give them solid historical/humanistic training and a strong theoretical base, but also extensive knowledge of town planning and construction techniques, so that they may be able to carry out architectural design. The training consists of the imparting of organized contents, the sharing of research, and participation in cultural debate. As is the case in the other arts, composition is understood as an ensemble of conceptual and technical fundamentals which precede the design project and to which it refers. The faculty is composed of professors in the fields of architectural composition, history of architecture, restoration and construction.

Contents
The program is based on exchange between members of the teaching staff and external professionals from among those at the forefront of international debate. This dialectic serves as a reference and as a source for research. The course is divided into integrated phases.
The first phase consists of a redefinition of positions and responsibilities with respect to the transformation of the territory and to evolution in the profession, in which the PhD candidate is required to participate through research and contributions. It consists, in addition to participation in optional Polytechnic courses and in a two-semester seminar in architectural design, of two specific courses given by several lecturers during two semesters ("Architectural History and Design" and "Criticism and Architectural Theories").
The second phase consists of a more specific, in-depth undertaking and deals with the problems of composition in relationship to architectural design, on one hand through two courses ("Figures of Architecture and Forms of Construction" and "Conservation and Design"), and on the other hand by means of a second architectural design seminar. These courses also involve several lecturers and are divided into two semesters.
The third phase consists of increased involvement of the PhD candidates in more specific research and themes, ranging from the theoretical debate in Europe and in Italy to the issue of the old/new relationship, to the topic of settlement- and founded-city structures, to research into the cultural and figurative identities of landscapes and architectural expression. The thesis formulation must be defined before the end of the first semester, with help from the supervisor. [...] the thesis is discussed.

Doctoral Program Board
Pellegrino Bonaretti, Enrico Bordogna, Siro Casolo, Giovanni Cislaghi, Adalberto Del Bo, Marco Dezzi Bardeschi, Carolina Di Biase, Mario Fosso, Alberto Franchi, Jacques Gubler, Jacques Lucan, Attilio Pracchi, Marco Prusicki, Angelo Torricelli, Daniele Vitale, Stanislaus Von Moos

Advisory Board
Klaus Theo Brenner, Potsdam School of Architecture, Berlin
Jean-Louis Cohen, Université de Paris
Elio Franzini, Università degli Studi di Milano
Paolo Gavazzi, councillor of Banco di Desio e della Brianza
Franz Graf, Accademia di Architettura di Mendrisio
Maria Patrizia Grieco, General Manager of Italtel
Rafael Moneo, Graduate School of Design, Harvard University
Carlo Olmo, 1st Faculty of Architecture of Politecnico di Torino
Boris Podrecca, Fakultät für Architektur, Stuttgart
Bruno Reichlin, Accademia di Architettura di Mendrisio

Scholarship Sponsors
EuroMilano S.p.A., via Eritrea 48/8, 20157 Milano
[Since] architectural education, as Colin Rowe stated, "is not alone to train a student for a professional occupation, but is above all to stimulate his spiritual and intellectual growth, to develop his intellectual faculties and to enable him to grasp the nature and meaning of architecture", this research aims to perform a comparative analysis of some of the main schools of architecture both in Italy and abroad, as well as of some of the leading players who have characterized them, starting from the peculiarities of specific teaching methodologies in the compositional procedure.
Guido Canella wrote in his essay "Dal laboratorio della composizione": "we all know that the general linguistic travail of architecture is matched by a profound crisis in its theory and, consequently, in its criticism and teaching. I therefore believe it is of great use to restore unity to the critical moment and the operative moment of composition". The goal of the research is to identify this unity in conformity with a definite idea of architecture, undertaking a path that studies the figures and schools of architecture that have marked their teaching with a strongly characterized, precise and militant point of view. Aldo Rossi, like other architects of his generation, maintained that a school's task is to direct and [...] cost of simplification, requiring a methodology that lets them face the complexity and globality of design problems, in order to train knowledgeable architects who can tackle any kind of problem.
This work aims to demonstrate how much a school's task is not to develop personal research programmes, but to establish a common base, by identifying a well-definable trend within an intellectual panorama. The teacher's goal as regards students must be to transmit a capacity to work, a knowledge of how to direct and influence, since the specific nature of a school lies in its own point of view, given the impossibility of transmitting teaching in a wholly objective manner. What differentiates a school, and its value in any determined moment of its existence, from all other schools therefore comes from the existence of a cultural project, as well as from the requisite qualities of the persons involved.
Hence the goal of the research is to analyse the training methodologies and objectives of a school of architecture with the aim of verifying how much the growth of knowledge in a determined discipline can also transpire thanks to internal and external contributions, and how the compositional [...] operatively linked to humanistic, historical and construction ones, as well as by the necessity to know the current situation and therefore its historical, structural and cultural conformation, incorporating this in the typological creation and the compositional process. These are elements that these days seem to have been neglected, determining that linguistic and theoretical travail that appears to characterize a large part of today's architecture, which is experiencing a radical change in its theory, criticism and teaching.
The five schools of architecture investigated are the Cooper Union of New York, the Valparaíso School and the Architectural Association of London, as well as two Italian examples influenced by the singular thinking of two architects, Guido Canella and Luciano Semerani. All of these experiences can provide a clear model to be examined, not just because of the contents of their teaching, but also and above all because of their attempt to construct a cultural project which, albeit arising and becoming concrete in different ways, finds its roots in a humanistic sooner than technical-scientific culture, expanding architecture's experience to dimensions similar to those of other artistic [...] form, space and architectural composition themes. It must be stressed that none of these experiences has ever had the presumption of defining theoretical systematics; rather, they have developed in line with theoretics consisting of formulations that have always been modified in itinere by means of experimental praxis. These forms of teaching therefore become attempts to set a fashion, where unquestionably in all cases there is the authoritativeness of a master who, if operative inside a school, can make architecture teachable and transmissible in the fullness of its problematic nuclei, beyond the formulation of a sterile body of precepts.
What happened at the Cooper Union with John Hejduk and his co-workers between 1964 and 2000, at the Architectural Association between 1971 and 1990 under the direction of Alvin Boyarsky, and at Valparaíso for more than sixty years may, without over-generalizing, be comparable to the experiences of Milan and Venice. Figures such as Ernesto Nathan Rogers, Giuseppe Samonà and their students like Guido Canella and Luciano Semerani are charismatic personalities who have contributed to the building of a "School of Architecture" idea. These experiences, each founded on their own strong [...] from contemporary academic tradition, assume a fundamental cultural role for the development of research and thinking, demonstrating how the re- [...] an artistic enterprise constitutes a goal rather than a trend, and how this trend appears indivisible from the need for theoretical construction.

1. Poetic act of opening of the land in the Ciudad Abierta, 1971. Archivio Histórico José Vial Armstrong, PUCV, Valparaíso.
2. Book cover of "Education of an Architect: a point of view. The Cooper Union School of Art and Architecture, 1964-1971".
3. Guido Canella, didactic prototype of the Theater system in Milan, 1966.
[The interest in] the principles of building the Dutch city, to which this study is dedicated, arises from considerations on the ways and principles at the base of the interpretation and building of the contemporary city. The study of the Dutch city highlights the need for a will to order at the base of urban transformations, the expression of a precise idea of the city and of a clear relationship between building type and urban shape, capable of generating recognizable spaces and places. The main reasons for the exemplarity of the Dutch experience, of the continuous rigor and clarity of the urban settlements, are all within its foundational reasons: the Dutch soil, geologically placed below sea level, is all artificial, hard-won from the water, drained and consolidated to accommodate the rows of narrow and long houses (for structural reasons and to optimize the soil), arranged in the portions of ground between the drainage [...]
The shape of the house and the shape of the city are thus inextricably linked. Technology, structure and shape together contribute to establishing clear and continuative principles for building the Dutch city, in which the urban plan is the very reason for its existence, an instrument of order necessary to settle the land rationally, which has led to putting the collective at the base of the existence of the city itself. The Dutch city is a city of homes, and the home is the primary, perfect and repeated element, which sets out the shape that, even in its evolution over time, remains unchanged in its constituent features. If it is clear that the close and consistent relationship between the type of building and the shape of the Dutch city is inevitable for reasons inherent to its construction, it is also interesting to investigate the logical process by which these shapes and principles have faced the test of time and of the economic and social changes, and to understand how their continuity and experimentation are shown as two parts of the same act.
So, after the characters of the traditional city, it became important to analyze the fundamental experience of the Hendrik Petrus Berlage Expansion Plan for Amsterdam South, one of the leading contributions to the planning of the twentieth century. The part dedicated to the study of the plan, and of the way that led Berlage to its project, is a key point in my research, because understanding the principles at its base is inextricably linked to those of the whole story of Dutch urban planning up to the contemporary city, and therefore also to general considerations on the city in our time.
In the third part of the thesis, I analyzed the first experiences of 1910-1920 in opening the urban block, represented by the realizations of J. van der Pek for Rochdale and of M. Brinkman for Spangen, which constituted the antecedents of the main architects' experimentations of the following decades, such as those of J.J.P. Oud in Tusschendijken, a prelude to the evolution of his idea of the city towards the opening of the urban block expressed in the admirable example of the Kiefhoek settlement in 1926. Here, a new concept of living resulted in finding solutions for perfect minimal affordable housing, the archetype of modern living molded in concrete. The ideas about the opening of the block and the building of a city according to the principles of a rational order represented the main approach in urban realizations thereafter in the Netherlands, fervently supported [by] the Nieuwe Zakelijkheid (New Objectivity), at least until the economic and general cultural crisis of the Thirties and the outbreak of the Second World War.
In the years following the war, the increasing welfare and the spread of the automobile led to a gradual departure from the traditional urban principles, following a desire for a widespread distribution of space and greenery. The analysis of these experiences, at the end of the third part of the research, was an important step to highlight the changes made by Dutch planning in recent decades, but also the continuity of certain principles which, due to the strong Dutch urban tradition, remained unchanged even in that historical moment.
The last part of the research, dedicated to the contemporary city, investigates the direction taken in recent years by Dutch planning, aided by the current crisis, which highlighted the great pressure to which our civilization is subjected from the economic, social and environmental point of view. In Rotterdam, for example, in recent years the policies are intended to bring to the fore issues related to the delicate balance between the city and the environment and to their relationship with the water typical of the historic city, [...] realism of this civilization in urban planning. The continuity with the principles of building the Dutch city thus becomes primary again: the close relationship between the water and the design of the city, the need to link the shape of the house to the technical reasons of the soil, the search for a compact city, not only to save the soil and water for a defense strategy, but also for a revaluation of the value of domesticity and of the variety of places of living proper to the historic city. In this sense, the type of the house, its aggregation, its relationship with the water and the public spaces, is the privileged object of interest of the research. The examples in the last paragraph of the thesis, chosen by the criterion of the clarity of the type-morphological relationship and of the evoked idea of the city, demonstrate, despite their diversity, a clear continuity with the principles of construction of the historic city. In this context, the value of the prevailing type of the row house is confirmed through its experimentation, as variation within the affirmed rules, and through its permanence inside an urban order that defines the stable principles of the Dutch city.

1. Evolution of the Dutch urban block (reworking starting from a study of Max Risselada).
2. The geometry in Berlage's Amsterdam South Expansion Plan. Personal analysis.
72
HENRI PROST, LES TRANSFORMATIONS the reasons which led to the according to which the project
ARCHITECTURAL COMPOSITION
how the choices made by the corresponding to the ancient city the pensionnaires of the French the plan. The project includes a future development processes
French architect Henri Prost for (Figure 1), the districts of Galata- Academy in Rome. general reconfiguration of the and, therefore, the necessity
the urban plan for the city of Pera, the Asian shores and the The study of the city and its city, in many ways. Contrary of the studies of the layers and
Istanbul (1936-1951) find their coasts along the Bosphorus. historical layers is not an exercise to what he had planned in the formal determinants of the
basis in the matrices of the The project of the Archaeological of erudition and has no purpose the cities of North Africa, contemporary city.
ancient city. Park is present from the first of historical reconstruction. for Istanbul the question is
The research investigates proposes for the urban plan of Rather it is the city itself that is no longer to create a new 2. Project of the Archaeological Park.
The area of protection is indicated
the relationship that exists Prost, who identifies different the search field, the bearer of settlement, but rather to act with light color; the areas appointed
between architectural design zones of protection within the a hidden order, which shows within the historical districts like for new excavations are in darker
and archeology. Archaeology historic city (Figure 2). through the project, which a surgical operation, according color. (Authors drawing).
intended as fragment, immanent It is not really possible to has its merits in the ability to to the same Prost.
memory of a past that, in the understand the reasons that interpret and give meaning to The Third Part, The Reinvention
contemporary city, struggles to underlie the projects plan, the places. of the Ancient, is the one
find meaning. Archaeology as unless considering the travels Through this reading, the plan where more is put to test the
a method, analytical study of and researches carried out by of Prost for Istanbul can be seen hypothesis of the research.
the city, of its layers, the parties the architect in Greece and Asia as the elaboration of a method Through the study of the ancient
and the transformations that Minor, which are reflected in that shows its logical reason and city and the reconstruction of
determine the current status. the drawing of the hypothetical a transmissibility that exceed the the discoveries of the missions of
Archaeology as a choice, to reconstruction of Hagia Sophia chosen case. the excavation, is made explicit
renew the relationship with and the city of Byzantium in the The thesis consists of four parts. the relationship that the town
history through the project, at sixth century A.D., completed by The first three parts concern the planning project for Istanbul
the same time the creation of Prost in 1906 (Figure 3). development of the project of established with the discipline of
a new story. In this way, the Between 1905 and 1906, as Henri Prost for the urban plan archeology.
…urban plan for Istanbul proposed by Prost not only satisfies the needs of a modern city but also entrusts the ancient city with the role of a real engine of the project.
The research can be read on a dual register. On one hand it highlights the method, tools and analogies that form the elements of the composition of the town-planning project; on the other it investigates the specific characters of the city of Istanbul, the original matrices, the archaeology, and the urban transformations with which the project was faced. The areas involved are…

…pupil of the French Academy in Rome, Prost went several times to Constantinople, completing the restoration of the ancient basilica. On this occasion the architect does not limit his research to the building, but also reconstructs the entire area of the Circus and the district of the Imperial Palace. Due to the paucity of archaeological discoveries at the time, his idea of the ancient city is a real project, and it will be the foundational matrix of his way of composing the contemporary city and the future one. Prost…

…of Istanbul. The fourth part is focused on the study of the current city. In the First Part, Identity and Character of the City at the Beginning of the New Republic (1923-1950), the situation of the city as Prost could see it from the mid-Thirties of the twentieth century is presented, outlining what was to be the state of fact before his project. Some insights on local planning tools, on antecedent urban projects and on those presented in variants are essential to understanding the characters of the plan…

The architect fills the gaps of the archaeological discoveries through the invention of the project, which he had expressed also in the ideal reconstruction of Byzantium in the sixth century. The interest in the areas of the Imperial Quarters is explained through the comparison with Antioch on the Orontes and Milan. The Fourth Part, Archaeological Areas and Architectural Design. The Role of the Ancient Center in the Contemporary City, draws the conclusions of the work by explaining the basic concepts and the starting assumption…

1. Henri Prost's projects for the Historical Peninsula. (Author's drawing).
3. Byzantium in the Sixth Century, as represented by Henri Prost in the drawing presented as fourth Envois for the French Academy in Rome, in 1906. (Author's drawing).
ARCHITECTURAL COMPOSITION

Every megacity in the world in some way or form encounters the subject matter of cultural exchanges. How each city deals with such exchanges, moreover confronts or accepts them, is altogether unique to each. This thesis uses the city of Delhi, one of the largest of megacities, as a case study. It is especially peculiar since it has been influenced by a multitude of cultures whilst being ruled by different reigns and dynasties; it was capital to the Delhi Sultanate, the Mughal empire and finally the British empire. Delhi as of today is one of the most densely populated cities in India, influenced by multiple forces of globalisation and subjected to the on-going process of internationalization. It therefore proves to be fertile ground to study the perennial and recurring process of hybridisation between cultures, in this particular instance through the perspective of its local architecture and urbanism. In this thesis, said process of hybridisation, essentially the negotiations and re-negotiations between cultures, has been examined from an historical perspective. Using history as marker and past occurrences as parameters, effectively any matter of culture or diversity can be assessed or reviewed with less envelopment, bias or value judgement. In this case, a conscious effort has been made to study the transformation of the city of Delhi, focusing on the metamorphosis in architecture and urban planning during the period from 1912, when the British Town Planning Committee for New Delhi was formed, to 1962, when the first master plan was in fact implemented. This transition period, when the megacity was born, is particularly interesting because it has often been neglected by scholars and gives the opportunity to investigate the current phenomenon of globalization from a more detached and cognisant perspective.

The research work originates directly from primary material, predominantly unpublished documents, which have been collected from public and private archives such as the Delhi State Archive, National Archives of India, Institute of Town and Country Planning Organization, Municipal Corporation of Delhi, Central Public Works Department, The Royal Institute of British Architects and The British Library in London, and the Ford Foundation Archive in New York. This material was used to understand how foreign models and influences may have been adopted in Delhi, to what extent resistance to these was encountered, or how much adaptation local conditions demanded. From the documents obtained, the research goes on to consider the transformations of singular architectural elements during the period in question; these being: in what ways the city plans, neighbourhoods, types of residential or public buildings and architectural styles changed over time. The first element to be addressed was the plan brought forth in 1912, during the colonial period, which was compared to the master plan approved in 1962 after Independence, whereby both bear distinct foreign influences, firstly British, thereafter American. The second element of analysis examines the singular neighbourhoods, effectively the colonies through which the megacity grew, to ascertain British, American, for that matter even Japanese, influences and the local reinterpretations of these. The third element to be studied is resident typologies and their variation from bungalow plots to house plots, which clearly demonstrate significant local changes and adaptations of foreign models. The fourth element under scrutiny is the sphere of public buildings, where the quest for Indian-ness together with the search for modernity, recurrent pre- and post-independence, is a common denominator in most structures. The fifth and final element assesses the plethora of styles present in the capital. In particular, the Palladian style was imported and, through the course of its journey from Italy to Delhi, gradually changed its meaning and its form, and has undergone noteworthy compromises with the culture and inclinations of the sub-continent; herewith defining yet another influence, the Italian on this occasion, which has been absorbed and become an integral part of the city of Delhi.

In investigating Delhi's architecture and planning, scholars usually tend to consider the period before 1947 as subordinated and attributable to Western-British ideas, and the period after independence as the development of purely Indian nationalist ideas. However, herein lies a fundamental flaw. Cultural exchanges are often mistakenly seen as a one-way process, whereas this research has established that Delhi has played an active role in the process of hybridization, developing its own character as opposed to merely accepting what was brought from abroad. Both periods have been characterized by a resilient and very interesting compromise between indigenous and foreign elements, and thus the period after 1947 cannot be construed as more indigenous than the previous one.

The research conducted raises questions on broader aspects which do not apply to Delhi alone but relate to the Indian context as a whole, such as the meaning of the terms Indian-ness, Tradition or Identity. Often the concept of Indian tradition has been an ideological invention that served political purposes; by simplifying Indian culture, the Western Orientalizing current has purported a static idea of it, inadvertently sacrificing its beauty. This thesis, with Delhi as a case study, validates that each city of India has developed a distinct history and a distinguished tradition that cannot be assimilated with any other. Indian cities have to be studied as autonomous entities, whereas scholars usually try to compare them. The identity of each city exists, but varies from place to place; it is the result of the opposition and resistance that the local culture had towards external influence. This approach of studying cities and their architecture with an appreciation for historical relevance and a keen sensibility towards cultural exchanges has not been developed effectively in the academic world, especially not in India; it does, however, remain imperative in order to develop a critical point of view on contemporary urban matters.

1. Delhi Guide Map. Surveyed 1939-42, Delhi State Archive.
2. Master Plan for Delhi prepared by Delhi Development Authority, vol. i, Government of India, New Delhi 1962.
…principle, should assume the revolutionary nature of its architecture, defined by Guido Canella as the commemorative project of a society that guarantees rationality, justice and equality. But currently the new city seems to ignore these values, finding in the systemic model an orientation only towards the economic and technological environment, where the procedural aspect of design is entrusted to manuals or guides which, by presumptuously codifying objectives, have excluded the liability of architects and planners. The urban environment based on the cultural and traditional city has been replaced by the values of the market and of entrepreneurial capitalist society, producing an urban model dictated by the needs of individuals, with the consequent impoverishment of the variety of types and spatial segregation in the new cities. This study helps to clarify the general problem of architectural space linked to the modern new city, and mainly deals with the current condition of the new city that prefigures a total design of the new city, in order to reconcile urban quality and utility maximization.

The approach of the study starts from a descriptive phase of the phenomenon of new settlements, in its quantitative and qualitative aspects; a general definition of that model (most common for the new city) is circumscribed by an apparently interdisciplinary rationale, in which politics, sociology, the market economy and design work together and intertwine actively to ensure the operation of the project. Moreover, the thesis describes the structural reasons for which design is in a state of confusion between the design approach tending towards the solemn form of the great masters and the attitude of surrender in the face of a given cycle, whereby an indispensable system of business logic has replaced rational design (which constitutes a constructive logic and a formally diversified work depending on the context) with a system characterized by formal technicality. This system is not interested in the context, where design should be born from the idea of the problem, but starts from the intervention as an action that defines the form, ordering and universally civilizing things (in part the very idea of the territory). It is a condition that pushes designers to ignore the context.

In the last century, the design of the new city was a field of studies and experiments for many rationalists of modern architecture and urbanism. The most popular models currently are those that interpret the project as an intellectual work, replaced by a series of different actors in search of a feature-exasperated project. The removal of the idea of the city from the project of new settlements is not a matter of trend or of subjective architectural failure, which would presuppose a choice already possible. It is largely due to the change of the project into a procedural intervention, which assigns priority not to the idea of the whole or to the unitary character of the urban body but to the executive aspect, which falls to the developer, general contractor or corporations, qualified by their entrepreneurial skills in completing the work.

The case studies examined were dictated by my choice to focus on the period in which the process of urbanization of the modern city occurs, that is, when the new cities arise in relation to the phenomena of industrialization and urbanization. In this period, which covers more than a century, countless experiences have occurred on the international scale, also accompanied by numerous imaginations of new cities, especially after the war, fueled by many utopian ideologies. The phenomenon of the new city led the study of the experiences of different countries to work on a heterogeneous material. The historical conditions have produced an evolution that has developed in stages, and the existence of filiations between the various international experiences. The few cases examined provide a partial reading of the new phenomenon of the contemporary city, increasingly popular for a massive urbanization, increasingly sensitive to new development models, spatial and social, related to a dynamic capitalism which denies a common architecture and society. During the evolution of the new city, the planned unit is stated in parallel with the entrepreneurial ideology of urban development and the housing market, gradually abandoning any reference to the city, and becomes an object-monument independent of context, losing the ability to generate and to impose meanings on the place, an identity. It is a condition that stems from an alleged objectivity (reality, plurality) that replaces subjectivity (ideals, singularity) in the project, to support the reproduction of the new contemporary city and to formulate answers to the questions that arise from the impossibility of knowing the result and from the inability to propose new solutions. The project, entrusted to objective criteria which consider only a limited number of elements of the settlement, removes architects and planners from their responsibility and from their role of searching for alternatives. To affirm the necessity of architecture for the new city does not mean wanting to replay the characters of the ancient city, but to attempt to define spatial relationships and lost values compatible with the practices of today.

1. Location of New Towns 1940-1979
2. Milton Keynes, UK (1970)
3. Comparison table of New Towns
The story of the ancient settlements, the urban transformations, the social and economic conditions that characterize the city and construct new identities of landscape are all closely related to the development of the historical memory and the cultural transformation of its architecture. Architecture, with its morphological characters and ideological influences, built the city. This leads to the questions: how is a building able to modify settlement patterns and the social culture of a city? How is it able to relate to or disengage from the constructive logic and find its own identity, through which to express the character of the composition in a meaningful way?

Given the vastness of this topic, the thesis focuses on the aspects that are more specifically related to the role of the building as a fusion of static solutions and architectural space. In earlier architecture, the division or conflict in the relationship between the point of view of composition and the point of view of structure did not exist as it does today. Divisions and conflicts emerged in the middle of the nineteenth century, with the advent of new materials and new construction techniques. In the light of this rift it is useful to reconsider the instances that have always represented architecture in its most coherent and conscious productions. The relationship between architectural form and the structural component provides an important reading key for a better understanding of a building's architecture. The ability to control the structural components, to shape them towards figurative completeness and to establish a connection between formal order and static principles, turns out to be a dominant factor in the making of architecture. The structure can become a poetic language that represents the closest connection with Art. It manifests itself in the boldness of the masses, in the beauty of forms and in reaching the fullest mastery and building experience.

This research project investigates the relationship between structure and architecture, in an attempt to take the structural changes and structural components as a key for the choice of composition and contemporary typology. Following the development of the research, the proposed analysis is divided into three parts.

The first part seeks to analyze the different attitudes of engineers, builders and architects, comparing multiple approaches to structural composition. A second part investigates the role of concrete in architecture, retracing its passages and intellectual productions from its birth to the development of the material. This part of the thesis examines the technical and design considerations that led to its present development, highlighting the arrival of today's challenges related to research and innovation. Finally, it analyses the production cycles and reinforcement efforts (during the building site) as well as those specifically concerned with the durability of the material and the recovery and rehabilitation of structures. The third part verifies, by means of an experimental design, the possibility of a relationship between structure and architecture (not as a constraint but as a true resource), resulting in opportunities for the use of innovative technologies where the transformation of the material often plays a role of primary interest. The thesis analyzes developments, innovations and applications of technologies and the role of structural concrete in the design of spaces and architectural forms, to illustrate an understanding of the architectural project that takes in its architectural form and structural section the new solutions and new technological resources.

The attempt to explore the structural types through static case studies will provide an explanatory analysis and a more comprehensive search, where the concepts expressed in the theoretical part will be verified and applied more directly. By way of different methodologies and analytical techniques, the thesis will propose an analysis of compositional languages, rebuilding socio-cultural periods and the image of the architect-builder. The common ground in which science and art have their roots is the Renaissance. The two disciplines had a relationship strong enough to read them as a single inseparable whole. In recent history the disciplines of architecture and engineering collided. This research proposes to reconstruct the integration that has always characterized the architecture of the far past. The aim is to reach the merger between structural and formal ideas as the very premise of architecture.

When one talks of structure, one means a physical system consisting of a set of elements that make up a space. It is important to think about these elements in their unity, although during the calculation phases they are often considered individually. Depending on how the elements are placed, the structural system changes, generating different actions. From a typological point of view we tried to bring these actions back to three structural models, in which, through a rough classification, we can identify the different architectural types, binding each to a precise distribution of loads inside the structural system. The structural system performs a precise function, and to properly fulfill its tasks it must necessarily be able to control the tensions to which its elements are subjected. Although it is clear that all structures working in space are three-dimensional, the classification (which is carried out below) is bound to the dimensionality of the main direction of action that prevails in the specific architectural type, so that the static scheme proves to be a key element for typological detection. Using a conceptual analogy, such as taking the Cartesian reference system as a conceptual scheme, the thesis attempts, with the same organizing logic, to identify the characteristics of each classification. The choice of the Cartesian model to conceive the perception of physical space, generated by measuring points, distances and lengths, allows one to identify a number of references such as coordinates, directions, orientations and measurement units. Focusing one's attention on the term direction, one can see how it is modified depending on the complexity of the element. Translating this situation into the proposed structural classification, it is possible to establish a tri-partite reference system as follows: mono-dimensional, two-dimensional and three-dimensional systems. As such, the type, inserted in a simple and wise classification of this nature, can help to identify the many examples of architecture and engineering, taking the essential aspect of each case without precluding the study of the meaning of each form; seeking the project idea lies not in imitation but in the awareness and knowledge of the structural concept.

1. Place Victoria, P.L. Nervi and L. Moretti, Montreal, Canada, 1960-65
The research starts from the compositional study of some works by Aldo Andreani (Mantua, 1887 - Milan, 1971) and, through a critical analogy, adds a set of images and references taken as interpretative categories which allow the reader to understand the meaning and the goals of the project. This case study highlights some themes which define the existing relationship between the single artifact and the overall architecture of the city, as well as the link between the monument and the urban fabric, which are fundamental matters for today's architectural research.

Aldo Andreani was a shy and reserved person who preferred to stay out of the political debate and the artistic trends of those years, because he believed in the ideal of the architect-artist, according to which an artistic project comes from the right balance between poetics and building science. He used to take materials for his compositions directly from the city and from his memory, so that he could create a new order through the sublimation of those materials in his projects. His work features a set of new elements that reflect a strong individuality, and it mirrors modernity through a deep reinterpretation of the historical city in original shapes; the pieces he took from ancient times are the linking elements of the artifacts that build his cittadella sognata and the real architectural expression of his memory.

In his architecture, Andreani keeps on dealing with two key principles: rule and transgression. He combines a compositional rigor, which verifies the opportunities of the typology towards new configurations, with new figurative results, in which the linguistic experimentation is permeated by juxtaposed figures coming from the classical vocabulary. The definition of the character, that is the specific quality of a place and its architecture, is the key element of Andreani's design research. Through a meticulous study, he integrates the pieces of his imagined city with the real urban texture, adjusting and checking every single piece of architecture against a design plan; in this process, the introduction of a new architectural work generates an innovative description of the architectural landscape, where buildings stand as characters in a narrative sequence, telling a new story.

The aim of the research is to highlight the continuity of a compositional method according to which the linguistic reflections and the study of the organismo architettonico are essential requirements to build new architectures with an identifiable character, whose original shapes can reveal the tradition and the identity of a specific environment. The research investigates the workflow of the project by applying a method that starts from the compositional analysis of the single architectures and the interpretation of images and references, and ends with the examination of the urban scale and the relationships established by single artifacts.

The development of the research requires splitting it into three parts that correspond to Aldo Andreani's masterpieces: Palazzo della Camera di Commercio and Loggia dei Mercanti in Mantua (1911-1914), Palazzo Fidia, included in the Edificazione in Terra Sola Busca plan in Milan (1929-1932), and Palazzo Toro in Milan (1934). These works link the single artifact to a larger study related to the city. Though they are different buildings as regards their use and target audience, all of them can reveal Andreani's idea of the project, which is not limited to the definition of the building itself but expands to calibrate the bonds with the surroundings, blocks and monuments, as far as reopening the whole question about the pieces of the city. Andreani's work of molding the shapes leads to the creation of pieces of architecture that are both unique and identifiable in relation to the tradition of the place they belong to. The study of these architectures generates many ideas about contemporary architecture and the progressive loss of identity of our cities, where architectures are mainly self-descriptive rather than being intended to tell their story.

1. Camera di Commercio e Loggia dei Mercanti, Mantua, 1911-1914. The building and the public space inside. The plan of the square between the rooms.
2. Edificazione in Terra Sola Busca plan, Milan, 1929-1932. View of the scalar composition for Isola.
3. Palazzo Toro, Milan, 1934. The corner and the connection to the San Carlo neoclassical church.
ARCHITECTURE BORN IN VIOLENCE:

…premeditated destruction of architecture in order to transmit political messages and achieve long-term ideological goals emerges as a direct consequence of the development of high-tech weaponry systems and contemporary methods of warfare. One could argue that there is nothing new or unusual in the destruction of buildings in wars, since those two concepts have mutually influenced each other from the beginning of time. However, it is undeniable that every conflict produces its own method and philosophy of destruction of architecture, which became particularly evident in the most recent wars in the Balkans and the Middle East. Every new phenomenon brings with itself innovative terminology and the formation of new scientific disciplines; the ones specially coined to explain the infliction of violence on the built environment and its consequences are urbicide, warchitecture and architecture of violence.

It is the calculated, highly organized and politically motivated nature of the destruction of buildings that makes it urbicidal. Architecture became a medium for transmitting, while its destruction became a way to create ethnic and spatial divisions, claim a right to a certain territory, modify historical …, provoke violent reaction, etc. In contemporary conflicts, buildings are condemned not only as pure casualties of violence but as a tool for the infliction of violence, used as fiercely as any other weapon of choice. The true danger of this phenomenon lies in the fact that the craft of reshaping space slowly slides from the hands of architects into the hands of military experts. The sophisticated art of the creation of spatial forms deviates and mutates into its antithesis - the art of creation by destruction. The destructive event interferes with good architecture and, unmistakably, overtakes the primacy in our perception of urban spaces. What can architecture do to regain the intrinsic right to be the primal sculptor of our habitat from destructive forces?

Historically speaking, post-war reconstructions were always used as an excellent opportunity to redefine and improve the urban experience of damaged cities. But more importantly, reconstructions after violence are fertile ground for providing a cultural response to destruction. The argument of this thesis is that a critical and ideological stance is always demanded of architecture, especially after urbicide. The symbolism of destruction demands a symbolic response, to preserve memory and increase the resilience of cities, since the omitted lesson from the past is usually the most expensive one. Hence, the scope of this inquiry is to broaden and amplify the discourse on the destruction of architecture by pointing out the cultural implications of the architectural redesign of space that was previously altered by urbicidal violence. Post-urbicidal reconstruction should be understood as the architect's mission to regain his true-born right to be the sole sculptor of built space, and the only way to do that is by developing a specific language of architecture of violence. A close investigation of the current condition of some notable buildings in Belgrade that were damaged in the 1999 War gave us an excellent picture of the understanding of urbicidal destruction in Serbian architectural practice, and in some of the most successful examples it even provided a hint of what a language of architecture of violence might actually look like.

Belgrade as a case study was chosen due to its rich history of urban destruction, but more importantly, because it was a victim of urbicide only sixteen years ago, during the war between NATO and FR Yugoslavia. Even before this conflict the Serbian capital suffered from a chronic lack of … was harshly erased or left to decay under the influence of new ideologies. In recent years, however, many designs were presented for the reconstruction of these sites. In some more successful cases, the reconstruction of buildings damaged in the 1999 War went through much faster due to favorable circumstances, but still, the methods and approach of their reconstruction varied significantly from case to case. The complex case of the Generalštab (Army General Staff Building) was singled out to demonstrate the full spectrum of influences that are inhibiting the proper post-war reconstruction of buildings in Belgrade today. A combustible mixture of intrinsic semiotics, ideology, architectural theories, the architect's political background and the symbolism of the destructive event makes the violated space almost impossible to read unambiguously, and it deprives future reconstruction of its authentic sense. However, this lack of adequate design proposals was used to theorize about the possible development of a language of architecture of violence, using the damaged edifice of the Generalštab as a proving ground. On the basis of the previously exposed theoretical frame, and using the experiences of earlier reconstructions, a series of different design proposals was offered, and for each of them was given …

… in Serbia about the actuality of the subject, and lack basic conversance with urbicidal theories to begin with. Further on, there is no attentiveness about the social responsibility that understanding the mechanisms in which violence is changing our built environment bears. Consequently, there is no proper response of the architectural profession to the issues of purposeful, politically motivated annihilation of buildings. The same happened after the 1999 War: destroyed buildings were seen rather as leftovers of unwelcome ideologies than as valuable products of their own (architectural) culture. It could well explain the lack of scientific inquiry into the repercussions that destruction enforces on the architecture and development of cities in Serbia, a sort of cultural darkness that lasts still. Rare are the designs that may be characterized as a positive response to destruction, and they came mostly as mere coincidence and not as a result of thoughtful, theoretically based action. More often they just happened by applying the safest and simplest possible solution: total erasure of the traces of violence and restoration as it was. Ultimately, by analyzing those few successful examples, it was possible to extrapolate some directions in which architectural …

…based preservation of the damaged building in a ruinous state. The third and most controversial approach would be succumbing to the urges of visual extravagance, which may well lead into decadence, and later on even into farce. On the opposite end, the introduction of new forms next to the old ones could carry more or less appropriate symbolism, although it inevitably changes the original architectural composition. Finally, emphasizing the contrast between old and new on a reconstructed building, where the damaged and then repaired section of the edifice is clearly distinguishable from the old, undamaged part, proved to be a very successful method in post-disaster renewals. No doubt that, despite the everlasting presence of violence in everyday life, we are just starting to understand the ways it changes our built environment and what our response to it should be.
PRESERVATION OF THE ARCHITECTURAL HERITAGE

PhD Yearbook | 2015
DOCTORAL PROGRAM

The PhD programme is three years long and evolves around three different phases, closely related to each other and coherent with the research plans:
1. In the first period, the training proceeds along three interacting levels: the start-up of the permanent (three-year) workshop, based on discussion and cross-disciplinary comparison between students and the teacher board, focused on the main topics of the Doctoral programme; a design research workshop, capable of handling the complexity of different scales and problems in the living project; and a lecture tour aiming to develop the questions dealt with during the workshop. The triple activity, feeding the fundamental attitude of interconnecting different knowledge, methods, codes and techniques, lets each student gain the critical and methodological awareness necessary to spot the specific areas of interest and to start up, coherently with the central themes of the PhD programme, the research work aimed at the thesis.
2. In the second period, the PhD student, besides being involved in launching and organising the permanent workshop activities, will develop his thesis research, taking advantage of participation in congresses, meetings and stages (it is mandatory to have at least one stage) in Italy or abroad, at institutions or research laboratories in convention with the PhD programme, or at other institutes approved by the teacher board.
3. The third year will be devoted to the working out of the thesis: contents and methods, besides being constantly verified by the tutor, will be discussed with other PhD colleagues and experimental sustainable models.

Doctoral Program Board
Maria Grazia Folli, Mariacristina Giambruno, Simona Chiodo, Maurizio Boriani, Lionella Scazzosi, Laura Montedoro, Massimo Fortis, Lucia Toniolo, Orsina Simona Pierini, Luigi Zanzi, Giulio Barazzetta, Maria Alessandra Secchi, Anna Anzani, Cecilia Bolognesi, Elsa Garavaglia, Alberta Cazzani

Experienced Members and Advisory Board
Matilde Baffa, Luigia Binda, Vincenzo Petrini, Silvano Tintori, Claudio De Albertis (Triennale di Milano, Assimpredil), Bruno de Meulder (Leuven University), Carmen Diez (Zaragoza University), Elio Franzini (Milan University), Giacinta Jean (SUPSI), Shao Yong (Tongji University), Xavier Monclus (Zaragoza University), Rosario Pavia (Pescara University), Francesco Purini (Rome La Sapienza University), Pere Roca (Barcelona University), Elli Vintzileou (Athens University)
The thesis deals with a study […] rural buildings emerge that still […] rich and diversified heritage for the territory. Also different […] be the most efficient tool for […] also including Italian and Italian […]
Rapid urban growth and […] sixty kilometers of the city area […] the eco-system of the wetlands […] growing city that is framed with […]
The aim of the research is to […] investigated with a closer look […] needs, have to be designed […] be investigated and designed. […] can investigate the building […] by several small differences can […]
This research is focused on the […] the project as a total process. […] spaces represented in movies […] that have managed to generate […] need to design the territory […] and intervention in the […]

A Study on Tehran's Public Spaces
To achieve the objectives, the […] Morphology, orientation and […] and inclusive structure of Iranian […]
BIOENGINEERING
Chair:
Prof. Andrea Aliverti

The aim of the PhD programme is to prepare the PhD candidates for the development of high-level engineering problem-solving abilities in biomedical, healthcare and life sciences, inside research groups or in private/public industrial contexts, through a strong interdisciplinary training bridging engineering and medical/biological knowledge.

During the PhD, the candidates develop a scientific research project dealing with a complex problem (which can be at different scales, from the molecular and the cellular levels to living organisms up to biomedical systems) and investigate original methods, devices, and systems with different purposes: increasing knowledge, proposing innovative methods for diagnosis and therapy, as well as improving healthcare and daily-life structures and services. At the end of the PhD programme, the candidates are expected to be able to carry out innovative projects in the Bioengineering field, by proposing new methodological and technical solutions and properly evaluating the technology impact in healthcare, life science and the biomedical industry.

Research is performed through theoretical and experimental activities in four major areas: biomimetic engineering and micro-nano technologies; rehabilitation engineering and technology; technologies for therapy; and physiological modelling and non-invasive diagnostics. More specific areas include, but are not limited to: molecular and cellular engineering, biomaterials, tissue engineering, bio-artificial interfaces and devices, neuro-prostheses, movement analysis, cardiovascular and respiratory system bioengineering, central nervous system signal and image processing for rehabilitation, biomechanics, computational fluid-dynamics, computer-assisted surgery and radiotherapy, artificial organs, implantable devices, biomedical signal and image processing, E-Health, bioinformatics, functional genomics and molecular medicine.

Since 2013, the PhD Program in Bioengineering is organized with an inter-departmental structure. Faculty members of the PhD Advisory Board belong to two Departments of the Politecnico di Milano: DEIB (Department of Electronics, Information and Bioengineering) and CMIC (Department of Chemistry, Materials and Chemical Engineering "G. Natta").

PhD candidates (who are, on average, 15-20 per year) develop their research within the laboratories of the two Departments, among which the […] Laboratory (NearLab, DEIB), the Biosignals, Bioimaging and Bioinformatics Lab (B3 lab, DEIB), the Biomaterials laboratory (CMIC), the Biomedical Technology Lab (TBMLab, DEIB), Experimental Micro and Biofluid dynamics (BS Lab, DEIB), the Computational Biomechanics Lab (DEIB), the Biocompatibility and Cell culture Lab (BioCell, CMIC) and the Bioreactors Laboratory (CMIC). The Istituto di Elettronica, Ingegneria dell'Informazione e delle Telecomunicazioni (IEIIT) of the Consiglio Nazionale delle Ricerche (CNR), which is located at DEIB, represents another possible option.

Stage periods in distinguished research institutes in Italy and abroad are an essential feature of the PhD candidate training. The candidates are encouraged to carry out part of their research activities in contact with other research groups, preferably abroad, through periods of at least three months spent in laboratories where the candidate can acquire further skills to develop his/her research work and thesis. Collaborations that may involve the PhD students are presently active with several national and international research and academic institutions. Very often, the involvement of industrial and clinical partners facilitates the transfer of applied research into industry and clinical applications.

The educational offer includes ad hoc advanced courses specifically designed for the PhD in Bioengineering. The offer also includes the school of the National Bioengineering Group, which is held yearly for one week in Bressanone (Bz). Every year, the School is focused on a different topic; the themes of the last years have been: Neuroscience, robotics and intelligent machines (2006), Computational Genomics and Proteomics (2007), Wearable Intelligent Devices for Human Health (2008), Bioengineering for Cognitive Neurosciences (2009), Synthetic biology (2010), Neuroinformatics (2011), Biomedical devices from research to market (2012), Regenerative medicine (2013), and From functional recovery to artificial organs (2014).

The PhD Board of professors (PhD Board) is composed of highly qualified and active researchers in Bioengineering, belonging to DEIB and CMIC. The PhD Board is responsible for all the candidates' activities. The competencies of the Faculty members cover a wide spectrum of research fields. This allows a continuous updating of the PhD program and ensures that the PhD candidates are involved in innovative work.
Costantino Maria Laura CMIC
Dubini Gabriele Angelo CMIC
Farè Silvia CMIC
Frigo Carlo Albino DEIB
Galli Manuela DEIB
Guazzoni Chiara DEIB
Mainardi Luca DEIB
Mantero Sara CMIC
Migliavacca Francesco CMIC
Pedrocchi Alessandra DEIB
Pennati Giancarlo CMIC
Pozzi Giuseppe DEIB
Raimondi Manuela Teresa CMIC
Ravazzani Paolo DEIB
Redaelli Alberto DEIB
Signorini Maria Gabriella DEIB
Soncini Monica DEIB
Patient-specific modeling of the […]

[…] are cardiac malformations consisting of only one effective or functional cardiac pumping chamber (the single ventricle, SV). SV defects, such as hypoplastic left heart syndrome and tricuspid atresia, require a three-staged surgical approach, called the Fontan procedure, to separate the systemic and pulmonary circulations. Since the early days of the Fontan procedure, in vitro, in vivo, analytical and computational techniques (including computational fluid-dynamics [CFD] models) have been developed to investigate the complex hemodynamics of the Fontan circulation. In the present work, CFD techniques are used for the planning of staged surgical treatment of SV malformations. The present work is part of the international Transatlantic Networks of Excellence in Cardiovascular Research Program funded by the Fondation Leducq (Paris), entitled "Multi-scale modeling of single ventricle hearts for clinical decision support".

First, pre-operative 0D models for Stage 2 surgical planning are built. Two kinds of models are considered: a closed-loop pure 0D network of the whole cardiocirculatory system, based […] hemodynamic features, and an open-loop multi-domain (3D-0D) model of the pulmonary system, describing in detail the region of Stage 2 surgery. For each patient, clinical data consisted of catheterization-derived pressure tracings, MR (magnetic resonance) flow tracings and echocardiographic Doppler velocity tracings. The closed-loop 0D model of Stage 1 circulation comprises 4 main peripheral blocks describing the upper/lower body and right/left lung circulation, adopting a typical heart model for Stage 1 patients, tuned manually on the basis of literature works to fit all patients under study. The aim of such patient-specific 0D modeling is to prescribe proper boundary conditions to the 3D post-operative geometries, integrated in a multi-domain model, where different surgical options will be compared. The pre-operative open-loop multi-domain model of the lungs, instead, is built to calculate the impedance downstream of all pulmonary branches included in the 3D model, to be integrated in the post-operative models used for surgical planning. A multi-step approach was implemented to estimate the parameter values of RCRCR blocks downstream of the outlet […] integrated in the 3D-0D model developed in Chapter 3. All simulations were performed by the partners of the Transatlantic Project at INRIA.

0D models built for the whole circulation, described above, were coupled to two different 3D models of the surgical site. The pre-operative anatomical reconstruction was manipulated in order to generate virtual post-operative scenarios. Indeed, two surgical options were virtually performed for each patient: bi-directional Glenn (bG) and hemi-Fontan (hF). In the case of the bG geometry, the SVC-RPA anastomosis was recreated by virtually resecting the SVC from the atrium and adjoining it with minimal movement to the RPA. For the hF geometry, a portion of the atrium was removed from the 3D volume so as to create the bulging patch typical of this surgical configuration, the size of which was determined in agreement with the surgical team who performed the operation. This procedure was performed by the research partners at Great Ormond Street Hospital, London, UK. Each outlet of the 3D model was connected to a RCRCR pulmonary block obtained through the open-loop model described previously; the SVC line of the 0D model was disconnected from the atrium and connected to the SVC inlet by […]

In Chapter 4, two specific clinical cases of post-Stage 2 patients are presented. In such configurations, the Stage 2 circulatory network serves as the pre-operative condition for two different kinds of treatment. In the first clinical case, a Stage 3 surgical planning is performed, following a workflow analogous to that presented in Chapter 3 for Stage 2 surgical planning, and three different TCPC geometries are compared. Moreover, respiration effects on the hemodynamics in the different post-operative options are studied. The effect of respiration is tested in presence of exercise conditions, simulated by increasing the heart rate and by reducing pulmonary and lower-limb vascular resistances. The second clinical case consists of a patient diagnosed with veno-venous collateral vessels 4 months after Stage 2 surgery. In this study, i) the patient's cardiovascular network at 4 months after Stage 2 surgery was modeled thanks to the acquisition of clinical measurements post-operatively and taking into account the body growth, and ii) the closure of collateral vessels was simulated. This study shows the differences between pre- and post-operative acquired data, which may be explained by adaptation phenomena on […]

Then, the integration of the sub-model in a more complex model of the whole circulation is accomplished only subsequently, since a direct identification of all model parameters would lead to multiple possible solutions. Since the heart parameter tuning is the crucial part, this Chapter is focused on the study of suitable methods to identify heart properties to embed in the circulatory model. More precisely, parameters identified for a sub-model of the single ventricle or of the whole heart may be used as a preliminary step to the identification of the closed-loop circulatory network. The study is divided into two main parts: first, the feasibility of parameter identification based on PVL data is presented; secondly, a sub-model of the heart is built, parameters are identified through a two-step method, and the obtained parameters are integrated in a model of the full circulation, thus limiting the range to span. The model of the sole ventricle is composed of six parameters, while full time-varying tracings of ventricular volume and pressure are used, the former as input of the model, the latter as target quantity. However, the high uncertainty on clinical PVL tracings is a big issue in handling such data. For this reason, a method based […] more reliable clinical data is […] flow time tracings and mean values, cycle-averaged pressures and end-diastolic volumes were used in different ways in the model, in particular as i) boundary conditions prescribed in the open-loop model, ii) target quantities to match as the goal of the optimization, iii) constraints on certain parameter values in order to assure that they are physically meaningful, and iv) prior knowledge on measurements available on certain variables. Afterwards, the obtained heart parameters are integrated in a model of the full circulation, giving a prior knowledge of the heart behavior. A two-step approach was chosen to find the parameter set that best matches the clinical data. First, Adaptive Markov Chain Monte Carlo (MCMC) was employed to obtain the distributions of the model parameters. Then, Nelder-Mead hill-climb optimization is performed from the parameter set that was found to maximize the posterior distribution during the previous MCMC iterations. The identification process was performed by means of a MATLAB code written in collaboration with experts at UCSD (University of California San Diego).
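The two-step identification described above (MCMC exploration of the posterior, then Nelder-Mead refinement started from the best posterior sample) can be sketched on a toy model. This is only an illustration: plain random-walk Metropolis stands in for the adaptive MCMC used in the thesis, the exponential pressure-decay model with its two parameters and bounds is invented for the example (the actual ventricle sub-model has six parameters), and Python stands in for the thesis' MATLAB code.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy "clinical" tracing: exponential pressure decay p(t) = p0 * exp(-t / tau).
# p0 and tau are hypothetical stand-ins for the heart model parameters.
t = np.linspace(0.0, 1.0, 50)
true_p0, true_tau = 25.0, 0.4
data = true_p0 * np.exp(-t / true_tau) + rng.normal(0.0, 0.5, t.size)

def log_posterior(theta):
    """Gaussian likelihood with a flat prior restricted to physical bounds
    (constraint iii: parameters must stay physically meaningful)."""
    p0, tau = theta
    if not (0.0 < p0 < 100.0 and 0.01 < tau < 5.0):
        return -np.inf
    resid = data - p0 * np.exp(-t / tau)
    return -0.5 * np.sum(resid**2) / 0.5**2

# Step 1: random-walk Metropolis MCMC maps the posterior distribution.
theta = np.array([10.0, 1.0])
lp = log_posterior(theta)
samples, lps = [], []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.5, 0.02])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
    lps.append(lp)
samples = np.array(samples)

# Step 2: Nelder-Mead refinement from the sample maximizing the posterior.
start = samples[np.argmax(lps)]
fit = minimize(lambda th: -log_posterior(th), start, method="Nelder-Mead")
p0_hat, tau_hat = fit.x
```

Running the chain first keeps the simplex search from settling in a secondary mode, which is the role the MCMC step plays in the thesis' pipeline.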
A multiscale and translational approach […]

Long QT syndrome (LQTS) is an inherited disease whose main clinical manifestation is a prolonged QT interval on the ECG. The reasons for the relevant interest in this disease are its dramatic clinical manifestations, such as syncopes, ventricular fibrillation and sudden death, but also the existence of symptomatic subjects and siblings almost never developing symptoms. Among the 13 different mutations leading to LQTS identified so far, all affecting cardiac ion channels and all treated with beta-blockers (BB), this work focuses on the three main variants of the pathology. LQT1 (45% of all LQTS patients) is due to a mutation on the slow part of the delayed rectifier potassium current channel, with symptoms precipitating in case of increased sympathetic activity, as during physical exercise, mainly during daytime. LQT2 (35% of LQTS patients) is due to a mutation on the rapid part of the delayed potassium current, with arrhythmias triggered in case of sympathetic overactivation due to sudden emotional stress or auditory stimuli. LQT3 characterizes only 10% of patients but is the most lethal; due to a mutation on the gene encoding the sodium current, in this case events occur during vagal hyperactivation, mostly […]

It is well known that the autonomic nervous system (ANS) plays an important role in triggering fatal arrhythmias in LQTS, but its role as a risk modifier in LQTS is less clear. The aim of this work was to exploit different tools in the time, frequency and information domains in order to characterize the autonomic control of LQTS subjects and improve risk stratification accounting for genotype and phenotype. Findings obtained in LQT1, LQT2 and LQT3 humans were compared with the aim of providing a complete framework about autonomic control in LQTS. A translational process was performed thanks to telemetric ECG recordings in a KPQ-LQT3 transgenic murine population, which were compared with LQT3 results in men.

Methods of analysis
Time domain indices, such as mean and variance of HP and QT, and corrected QT evaluated according to Bazett's correction, were calculated. An autoregressive model was exploited to perform spectral analysis. The model order was optimized according to the Akaike information criterion and the power spectrum was factorized in frequency components according to the residual method. A component was classified as low frequency (LF) or high frequency (HF) if its central frequency was respectively included in the band 0.04-0.15 Hz or 0.15-0.5 Hz. Spectral analysis was performed also on mice HP variability series, with HF in the range 1-5 Hz. The power of mice and men HP variability in the HF band was taken as an index of vagal modulation directed to the sinus node, while the power of QT variability in the LF band was taken as an index of sympathetic modulation directed to the ventricles.

To assess the overall complexity of sympathetic and vagal control, a refined multiscale entropy (RMSE) analysis was performed on HP and QT variability series, consisting in three steps: i) elimination of the fast temporal scales of the series through a low-pass Butterworth filter; ii) undersampling the series with a factor t, thus reducing its length from N to N/t at each scale factor t (for t=1 the time series is the original one); iii) assessing complexity at each t through Sample Entropy, calculated with a tolerance r equal to 0.15 times the standard deviation of the series, with embedding dimension L=3 and time shift between samples equal to 1. RMSE was calculated with t from 1 to 12, introducing a scale pooling that divides the scale factors in three classes […] and then averaging the values […] Empirical mode decomposition (EMD) allows decomposing a time series in its oscillatory modes, the so-called intrinsic mode functions (IMFs). In order to improve short time scale results, in this dissertation a new EMD-based filtering approach was proposed, consisting in computing only the first IMF and subtracting it from the original HP and QT series, thus obtaining a low-pass filtered version of the series. Sample Entropy was then computed over the low-pass EMD-filtered series and results were compared with those derived from RMSE.

Protocols and data analysis
The database was composed of more than 100 24h Holter recordings from: 34 LQT1 subjects, divided into asymptomatics (ASYMP) and symptomatics (SYMP), together with 14 non mutation carriers (NMC) from the same family line; 16 LQT2 subjects divided into ASYMP and SYMP; and 12 ASYMP LQT3 subjects. Some of the recordings were acquired in absence and some others in presence of BB therapy. Analyses were performed during daytime and nighttime. Several experimental protocols were implemented, accounting for genotype and phenotype, ANS-related circadian variations and the effect of BB therapy. Ten KPQ-LQT3 mice were instrumented and acquired in […] was completed comparing circadian variations and the effect of BB therapy in LQT3 men and mice. Appropriate statistical analysis was carried out in agreement with the protocols.

HP was approximated as the temporal distance between two consecutive R peaks on the ECG. QT was approximated as the temporal distance between the R peak and the T-wave end. Missed or ectopic beats were corrected through cubic spline interpolation. Sequences of 5000 consecutive beats were chosen for each period of analysis, except during pharmacological challenges, where 3000-beat epochs were extracted. Time and frequency domain analyses were carried out iterating the analyses over 250 beats with 50% overlap and taking the median of the distribution of each parameter as representative of the whole series. Complexity analyses were computed over the entire series.

Results
Results showed that ASYMP LQT1 patients had a blunted vagal control and an active sympathetic regulation, which makes QT adaptable to sudden HP changes. This feature is thus protective. ASYMP LQT1 had also a lower complexity of cardiac control at medium and long time scale, suggesting […] long time scale, leading to an opposite conclusion with respect to LQT1 about the protective role of complexity in the LQTS variant. ASYMP LQT3 patients were characterized by a high vagal modulation, confirming the increased risk for events during the night in this group. LQT3 mice were found to be a good translational representation of the mutation in men, since mice and men showed similar results in terms of HP increase during sleep/rest periods and in terms of long time scale complexity reduction under BB. Finally, BB appeared to be protective in all mice variants in different ways.

Conclusions
Spectral analysis typified the ANS state, RMSE quantified the complexity of cardiac regulation as a function of the temporal scales, and the EMD-based filtering procedure reduced the computational costs of complexity analysis compared to RMSE. Although the main clinical manifestation of the pathology is similar in all the considered variants, the proposed tools suggested that patients affected by LQT1, LQT2 and LQT3 are characterized by different ANS profiles, and some ANS profiles are more favorable than others to reduce the risk of life-threatening events.
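The three-step RMSE computation described in the methods (low-pass Butterworth filtering, undersampling by the scale factor t, then Sample Entropy with r = 0.15·SD, embedding dimension 3 and unit time shift) can be sketched as follows. The test signals, the filter order and the scale range used in the demonstration are assumptions for illustration, not the thesis' Holter data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def sample_entropy(x, m=3, r_factor=0.15):
    """SampEn(m, r) = -ln(A/B), where B (A) counts template pairs of length
    m (m+1) closer than r in Chebyshev distance; r = r_factor * SD."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def pair_count(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.count_nonzero(dist < r)
        return count
    b_count, a_count = pair_count(m), pair_count(m + 1)
    if a_count == 0 or b_count == 0:
        return np.inf
    return -np.log(a_count / b_count)

def refined_mse(x, scales=range(1, 13)):
    """Refined multiscale entropy over the scale factors t = 1..12."""
    entropies = []
    for t in scales:
        y = np.asarray(x, dtype=float)
        if t > 1:
            # i) remove the fast temporal scales (low-pass Butterworth filter,
            #    cutoff at the Nyquist frequency of the undersampled series)
            b, a = butter(6, 0.5 / t)
            y = filtfilt(b, a, y)
            # ii) undersample by t, shortening the series from N to N/t
            y = y[::t]
        # iii) Sample Entropy with r = 0.15 * SD of the current series
        entropies.append(sample_entropy(y))
    return entropies

# Illustrative signals: white noise versus a regular oscillation.
rng = np.random.default_rng(1)
noise = rng.standard_normal(3000)
sine = np.sin(np.linspace(0.0, 40.0 * np.pi, 3000))
e_noise, e_sine = sample_entropy(noise), sample_entropy(sine)
```

A regular oscillation yields a much lower SampEn than white noise, which is the contrast the RMSE profiles exploit across scales when comparing HP and QT series.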
Advanced human-robot cooperation in […]

[…] neurosurgery has greatly benefitted from the introduction of image-guided techniques and robotic devices. Thanks to their superior resolution, geometric accuracy and indefatigability, robotic systems are mainly used as an accurate and repeatable alignment tool during keyhole neurosurgery. Conversely, open-skull procedures for brain resection/disconnection are traditionally performed free-hand with intraoperative physiological monitoring techniques to identify the functional (eloquent) cortical/subcortical areas, which have to be preserved during the surgery. In particular, direct electrical brain cortex stimulation encompasses the repetitive execution of target-reaching gestures on delicate tissue (Figure 1). The conventional approach can benefit from the introduction of a cooperatively controlled robotic assistant, to provide increased positional accuracy and reduce the surgeon's fatigue during the holding phase, when the tool is in contact with the brain tissue. Moreover, it could allow the acquisition of the target positions and guide the surgeon towards the recorded sites, thus increasing the reliability of the intraoperative monitoring technique.

In this thesis, we investigated […] methodologies for human-robot and robot-tissue interaction control, specifically designed to augment the surgeon's skills during cooperatively assisted targeting tasks on soft tissues. Differently from the standard force-to-motion control schema, the proposed control approach exploited the high compliance of a redundant flexible-joints industrial manipulator. The validation was performed in a realistic setup with brain-mimicking phantoms (Figure 2), enrolling naïve users as well as novice and expert neurosurgeons. The research was focused on these particular research topics:

(i) investigate the best control performance of the surgical hands-on robotic assistant in terms of ease of use, intuitive guidance and effectiveness during targeting tasks. The experimental evaluation of this and two well-known impedance controllers with fixed dynamic parameters was carried out during predefined reaching tasks towards registered targets on a calibration board. The reaching task was shown, under laboratory conditions, to result in reduced targeting error, which guarantees the respect of the position accuracy requirement (1 mm), and in reduced user efforts, which ensure that assisted tool trajectories feel natural to the user.

(ii) investigate the best control […] effective cooperation during the patient targeting approach. Transparency quantifies the ability of a robot to follow human movements without any human-perceptible resistive forces. On the contrary, the ability to approach a target with high accuracy depends on the robot's ability to apply resistance against environmental disturbances. In order to respect the clinical accuracy requirements, while allowing a comfortable cooperation, the surgical robotic assistant should be able to automatically adapt its dynamics during the guidance in the operating theatre. A novel variable damping controller is designed to enhance the performances of the robotic […] assistance with and without force feedback augmentation were comparatively evaluated with respect to freehand task executions. The proposed approach was shown to improve the user's skills in performing a stable and safe tool-tissue contact, allowing for hand tremor rejection and a 50% reduction of the tissue's indentation.

(iii) preliminarily study the feasibility of the proposed control approaches for brain cortex stimulation procedures. A group of novice and expert neurosurgeons were enrolled to quantitatively and qualitatively evaluate the performances for […] interaction forces with the environment to the user, but the robotic mechanical impedance may mask any delicate force arising from the interaction with soft tissues. A non-linear force feedback torque control was designed in order to investigate whether augmented haptic perception is a relevant factor during the instrument's placement on the soft tissue with respect to pure visual feedback. The control parameters were optimized on brain-mimicking gelatin phantoms, which were mechanically characterized to quantitatively evaluate the tissue's damage due to the contact with the tool during indentation. The […] tissue's indentation overshooting rejection, allowing for the accurate, stable and safe contact with the soft tissue. At the same time, the user's efforts during the guidance were reduced by more than 60%.

All the developed controllers were tested in the scope of the EU-funded project for brain surgery ACTIVE (FP7-ICT-2009-6-270460). This work supports the feasibility of the use of a cooperatively controlled manipulator to assist targeting tasks in open-skull neurosurgery and is in line with the actual research trend in medical robotics, which proposes devices that are effective and safe, both for the patient and the clinical […]

1. Intraoperative brain mapping of the motor cortex during glioma surgery. The neurosurgeon is performing the stimulation (up) while the electric brain activity (down right) is recorded from superficial electrodes placed on the cortex (down left) in order to detect the occurrence of unwanted stimulation-induced seizures.
2. The experimental setup.
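The variable-damping idea described above, high damping for a slow, precise approach near the target and low damping for transparent, low-effort gross motion, can be illustrated with a one-dimensional admittance loop. The damping law, virtual mass and numeric gains below are assumptions invented for the sketch; the abstract does not report the thesis' actual control law.

```python
import math

# Assumed tuning values for the illustrative 1-D admittance loop.
D_MIN, D_MAX = 5.0, 80.0   # damping range [N*s/m]
V0 = 0.02                  # velocity scale [m/s] of the damping transition
MASS = 2.0                 # virtual mass [kg]
DT = 1e-3                  # control period [s]

def variable_damping(v):
    """Damping decays from D_MAX (slow, precise motion) to D_MIN
    (fast, transparent guidance) as the hand-guidance speed |v| grows."""
    return D_MIN + (D_MAX - D_MIN) * math.exp(-abs(v) / V0)

def admittance_step(force, v):
    """One control tick: the commanded velocity obeys
    MASS * dv/dt = F_hand - d(v) * v (explicit Euler integration)."""
    return v + DT * (force - variable_damping(v) * v) / MASS

# Guidance phase: the user pushes the tool with a constant 5 N.
v = 0.0
for _ in range(3000):
    v = admittance_step(5.0, v)
v_guidance = v

# Holding phase: the user releases the tool; velocity is damped out,
# with damping growing again as the tool slows down near the target.
for _ in range(3000):
    v = admittance_step(0.0, v)
v_hold = v
```

During guidance the damping settles near D_MIN, so the steady velocity approaches F/D_MIN; on release the rising damping brings the tool smoothly to rest, which is the qualitative behavior (transparency during motion, stability near contact) the controller aims at.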
[…] reports that 14.1 million new cancer cases were diagnosed in 2012, while 8.2 million patients died in the same year. Noteworthy, the spread of primary tumors towards distant organs and the subsequent metastatic colonization is responsible for 90% of cancer-associated mortality. However, despite great advances in basic cancer molecular and cell biology, with the discovery of oncogenes and tumor suppressor mechanisms, much remains to be learned about the metastatic process. The cancer biology "seed-and-soil" paradigm recognizes the existence of organ-specific patterns of metastatization which drive the spread of selected primary tumors towards specific secondary loci. However, despite efforts to model organotypic microenvironments, the organ-specificity of cancer metastases still needs to be elucidated. Then, a deeper understanding of the metastatic cascade, and particularly of the extravasation process, could promote the development of new therapeutics, thus improving cancer survival rates. Particularly, breast cancer is the most frequent cancer among women and the second cause of cancer death in women in more developed regions after lung cancer. Disseminated tumor cells […] of 30%-40% of early stage breast cancer patients, while 70% of advanced breast cancer patients are affected by skeletal metastases, leading to pain due to spinal cord compression and fractures, and often to death.

So far, in vivo and ex vivo models have been developed to study the extravasation process of cancer cells in mice and zebrafish embryos through intravital microscopy. However, they cannot model all aspects of the interaction and cross-talk between human cancer cells, human endothelial cells and human tissue parenchyma. Moreover, strictly regulated, reproducible parametric studies are difficult to perform. Microfluidics can provide useful model systems to investigate complex phenomena under combinations of multiple controllable biochemical and biophysical microenvironments, coupled with high-resolution real-time imaging, thus overcoming limitations of traditional assays, e.g. the Boyden chamber, which are characterized by limited imaging capabilities and do not provide tight control over the local environment. Up to now, the application of microfluidic techniques to model cancer metastases, and particularly extravasation events, has been generally limited to the study of chemotactic events, while […] developed to investigate this key step of the metastatic cascade. However, it is noteworthy to highlight that, despite the above-mentioned advantages brought by microfluidic approaches, the extremely limited number of cells makes it technically hard to perform genetic analyses to investigate the transcription level of key regulatory genes.

The present doctoral thesis is focused on the design and optimization of micro and macroscale models to study organ-specific breast cancer cell metastatization towards the bone. Particularly, a 3D microfluidic model of a bone-mimicking microenvironment surrounded by an engineered microvessel (Fig. 1) was developed to quantify human breast cancer cell extravasation rate, migration distance and micrometastasis generation within the colonized microenvironment, and to highlight the involvement of the CXCL5/CXCR2 pathway in the organotypic extravasation process. Furthermore, a physiologically-like microfluidic 3D model was designed to investigate human breast cancer cell extravasation into bone- and muscle-mimicking microenvironments through perfusable, functional human microvascular networks composed of endothelial and […] specific step of the metastatic cascade within different organ-specific microenvironments, with critical implications for the development of new drugs, thus fostering a more effective screening of tailored anti-cancer therapies in the context of personalized medicine. Furthermore, a relevant aspect of this doctoral thesis is represented by the design and optimization of human 3D macroscale models of vascularized bone-mimicking tissues through the identification of the optimal combination of experimental parameters leading to the generation of functional vascularized environments, which can be employed to study breast cancer organ-specific metastases by means of post-genomic analyses. Finally, an innovative approach combining microfabrication techniques and the self-assembly of vascular structures is discussed in detail. Particularly, the main advantages of this approach, based on electrochemical cell detachment, are the possibility to organize endothelial cells into geometrically defined structures and to produce vessels aligned within micrometric distances in a spatially controlled manner. The unique combination of micro and macroscale 3D models offers a new perspective through which to increase our knowledge of cancer mechanobiology and investigate key molecular pathways involved in organotypic metastases within physiologically-like environments, bridging the gap between traditional in vitro assays and in vivo models.

1. Confocal microscopy of the bone-mimicking microenvironment generated within a microfluidic device. A monolayer of red fluorescent protein (RFP)-transfected human endothelial cells covers the top media channel and the interface with the bone-mimicking channel embedding osteo-differentiated human mesenchymal stem cells within a collagen gel. The dark square represents a post separating two gel regions of the microfluidic device. Red: endothelial cells. Green: F-actin. Blue: cell nuclei.
2. Fluorescence microscopy of a physiologically-like microvascular network. Microvessels connect to each other into highly branched microvascular trees, generating a complex microvascular network spanning the entire gel channel. Endothelial cells are transfected with green fluorescent protein (GFP).
BIOENGINEERING

Recent advances in imaging technology have enabled the non-invasive study of the structure and the function of the heart, the valves and the vascular system. Different techniques, such as magnetic resonance imaging (MRI), ultrasound (US), computed tomography (CT), positron emission tomography (PET) and single-photon emission computed tomography (SPECT), are imaging modalities currently used in cardiovascular medicine, and each of them provides specific and complementary diagnostic and prognostic information. Among these, cardiac MRI and three-dimensional (3D) echocardiography have gained popularity in the clinical scenario because of their advantages over ionizing or invasive techniques, allowing the assessment of both anatomy and function of the cardiovascular system. In particular, Cardiac Magnetic Resonance (CMR) imaging is the single modality capable of noninvasively defining cardiac anatomy and function, myocardial perfusion, myocardial viability, and coronary artery anatomy, through the application of different acquisition protocols. From the wide set of MR acquisition techniques, cardiac dynamics can be characterized by cine and tagging CMR, allowing to track myocardial material points through the cardiac cycle, while the employment of contrast-enhanced sequences provides imaging of the presence and extent of nonviable tissue in the myocardium, thus revealing its structural impairment. Three-dimensional echocardiography currently represents a major diagnostic tool in clinical cardiology, allowing real-time imaging of the cardiac dynamics. In this scenario, real-time 3D transesophageal echocardiography (TEE) has become one of the most useful imaging modalities for the intraoperative management of patients undergoing cardiac surgery. Furthermore, 3D TEE can be employed to acquire images of the aorta, due to its anatomical proximity when introduced in the esophagus, allowing the characterization and quantification of aortic lesions, which are a known risk factor for severe complications such as stroke and peripheral embolic events.

This PhD work represents a contribution towards the development of procedures for the joint analysis of cardiovascular images. The aim of the project was focused on the development of comprehensive frameworks for the combined analysis of intra-modal information coming from the main non-invasive and widely used cardiovascular imaging techniques, i.e. CMR and 3D echocardiography. Two specific contributions, each focused singularly on CMR or 3D echocardiography, are presented.

In the first, methods for the 3D assessment of the functionality and the anatomy of the left ventricle are proposed by analyzing and combining cine and late Gadolinium enhancement (LGE) CMR images acquired in the clinical routine. Cine and LGE CMR images are first processed individually to extract relevant information. Cine images were processed to compensate for breath-related inter-slice misalignments, due to the non-exact repeatability of the breath-hold position during acquisition. Then, a 3D active shape model (ASM) was adopted to segment the endocardium by simultaneously analyzing the images belonging to the short-axis image stack. To this end, a shape model of the left ventricle was constructed from a large database of semi-automatically segmented 3D echo images, constituted by 205 patients with various pathologies. Left-ventricular wall motion was derived from the 3D endocardial segmentation, obtained as the displacement of the endocardium from diastole to systole. Late Gadolinium enhanced CMR images were processed to create a 3D anatomical model of the scar and compute its local transmurality. The information derived from the two CMR acquisitions is finally combined in the same reference system by a dedicated registration pipeline featuring affine and deformable registration. The described tool allows for the joint three-dimensional analysis of myocardial local function from cine CMR and myocardial viability from LGE CMR images, in a common and patient-specific reference system. This combined information is of established importance for the diagnosis and treatment of cardiomyopathies, allowing to distinguish between reversibly and irreversibly injured myocardium. Surgical procedures such as revascularization or resynchronization strategies potentially benefit from the knowledge of the exact location of these regions within the LV, as they are significantly related to the likelihood of improvement of contractility after surgery.

In the second contribution of the thesis, the employment of real-time 3D transesophageal echocardiography (TEE) is investigated in its ability to image the aorta. The identification and characterization of aortic lesions is recognized to be clinically relevant, as the presence of aortic plaques is an independent risk factor for stroke and peripheral embolization, being also associated with carotid, coronary and renal artery disease. TEE technology is a suitable tool for assessing aortic atherosclerosis, being routinely performed on patients to identify cardiac sources of emboli and during cardiac surgery to guide the introduction of the cannula into the aorta to prevent peri-procedural plaque embolization. In this scenario, a comprehensive procedure for the reconstruction of the descending thoracic aorta from contiguous 3D TEE images is proposed. First, an ad-hoc image acquisition protocol was designed to acquire spatially ordered and partially overlapped 3D TEE datasets, followed by dedicated image processing to align and fuse all acquired datasets. The alignment strategy implemented pair-wise rigid registration guided by a priori knowledge, and it was validated using artificially misaligned images. Image fusion was finally performed to enable visualization and analysis of an extended field-of-view of the acquired aorta. The application of different fusion techniques was investigated. The method was applied to a population of 17 consecutive patients. Qualitative and quantitative results demonstrated the potential feasibility and accuracy of the proposed approach. In a clinical scenario, its application could allow the quantitative assessment of aortic total plaque burden from 3D TEE images.

In conclusion, the design and the experimental application of comprehensive frameworks for cardiovascular image fusion obtained with non-invasive modalities have been studied. The described methodologies may have an effective clinical impact to improve clinical diagnosis and the definition of therapeutic or surgical strategies, as well as for patient-specific modeling purposes.
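As a toy illustration of the pair-wise alignment and fusion steps, the sketch below registers two partially overlapping 1D intensity profiles by exhaustively searching the integer shift that minimizes the mean squared difference over their overlap, then averages the overlap to build an extended field of view. This is only a minimal 1D analogue of what the thesis does on 3D TEE volumes (which involves a full rigid transform and a priori guidance); the function and profile values are hypothetical.

```python
def best_shift(ref, mov, max_shift):
    """Find the integer shift s that best aligns mov to ref,
    i.e. ref[i] ~= mov[i - s] over the overlapping samples.
    A 1D stand-in for pair-wise rigid registration of 3D volumes."""
    def neg_mse(s):
        pairs = [(ref[i], mov[i - s]) for i in range(len(ref))
                 if 0 <= i - s < len(mov)]
        if len(pairs) < 2:          # require a minimal overlap
            return float("-inf")
        return -sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return max(range(-max_shift, max_shift + 1), key=neg_mse)

def fuse(ref, mov, s):
    """Average ref and the shifted mov where they overlap, keeping
    the extended field of view spanned by the two profiles."""
    lo, hi = min(0, s), max(len(ref), s + len(mov))
    out = []
    for i in range(lo, hi):
        vals = []
        if 0 <= i < len(ref):
            vals.append(ref[i])
        if 0 <= i - s < len(mov):
            vals.append(mov[i - s])
        out.append(sum(vals) / len(vals))
    return out
```

For example, with ref = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0] and mov equal to the same profile cropped by its first two samples, best_shift(ref, mov, 5) recovers the shift 2 and fuse reassembles the full profile.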
Fatigue analysis of Nitinol cardiovascular devices

The assessment of cyclic fatigue resistance is of primary importance during the design process of medical implantable devices made of Nitinol. Focusing on peripheral stents and transcatheter heart valves, due to the presence of cyclic loads imposed by leg movements, as well as by the blood pulsatility, fatigue resistance is a critical issue, since these devices can experience from 10 up to 40 million loading cycles each year. Therefore, they need to be designed to survive at least 10^8 fatigue cycles without failure over the lifetime of the patient, since their fracture could cause the risk of surrounding tissue damage, as well as the loss of the mechanical properties of the stent-frame itself. This highlights the importance of developing a shared, robust and efficient methodology to face the assessment of the fatigue resistance of Nitinol cardiovascular devices. Therefore, the purpose of this thesis is to face this issue by deepening the several aspects involved, in order to give proper guidelines that can be useful both in the design and development phase of a new device and during the assessment of its proper functionality. The main hypothesis is that the numerical analyses or the experimental tests alone are not enough to properly assess the fatigue behavior of Nitinol cardiovascular devices.

Several studies proved that numerical models are a valuable tool to assess the fatigue performance of cardiovascular devices, especially for comparative purposes. The common trend followed by authors is to assign Nitinol material properties taken from the literature and to compare the obtained numerical results to a general Nitinol fatigue limit, also found in the literature. However, the correctness of numerical results strongly depends on the reliability of the material parameters implemented in the numerical constitutive models, particularly when the aim of the numerical analyses is to provide information about a specific stent's risk of fracture associated to a defined loading condition. Moreover, the strong dependence of Nitinol fatigue behavior on all the manufacturing steps makes the definition of a material fatigue limit specific for the device under study mandatory. On the other side, fatigue in vitro tests represent an accepted way to demonstrate a device's durability, reproducing as closely as possible the actual operating conditions; they are often governed by international standards. Despite the […] failure given by the experimental tests, different disadvantages must be noted: a high number of specimens must be tested to ensure statistical confidence in the results, making the experimental campaigns expensive and time-consuming; the difficulty of reproducing the real in vivo environment usually forces the experimental tests to be simplified; and the difficulty of assessing biomechanical quantities, since fatigue tests give only the final result (safety or failure), providing in few cases the number of cycles, without any information about the state of stress through the device.

In this thesis particular attention is firstly paid to the knowledge of the material properties, specific for each device under study, since the parameters describing the stress/strain relationship and the fatigue properties strongly depend on the device dimensions as well as on the treatments (thermal and surface finishing) the device undergoes during the manufacturing process. The proposed way to get device-specific Nitinol material parameters, to be used as input for material subroutines, as well as to obtain the material fatigue limit for a defined number of cycles, is to perform experimental static and cyclic tensile tests on ad hoc material specimens (e.g. with dog-bone shape). In case ad hoc specimens are not available, a methodology to obtain Nitinol parameters from experimental tests carried out directly on the devices is also proposed: the experimental test is numerically reproduced and the material parameters are tuned up to reach a good fitting between numerical and experimental testing results. Then, numerical fatigue FEA on peripheral stent models and on a transcatheter aortic valve are proposed, paying particular attention to properly reproducing the boundary conditions: the interaction with a confining wall and the amount of cyclic load undergone by the stent on one side; pressure pulsatility and the effect of the leaflet cyclic movement on the valve stent-frame on the other side. Thanks to their reduced costs and high flexibility in modifying geometrical features, boundary conditions and external loadings, FEA revealed a useful tool to prove the main factors influencing the fatigue resistance of the studied devices, giving also important indications about how to perform in vitro cyclic tests. Numerical analyses on peripheral stents suggest that the oversizing due to the presence of a confining wall and its shape and stiffness has to be taken into account during experimental tests for fatigue assessment, while the pulsatile load can be neglected; diastole is the most severe loading condition, and the size and stiffness of the surrounding wall have a strong influence on the fatigue response.

Since the fatigue analysis is a very time-consuming process, an additional purpose of the present thesis is to give reliable indications about a device's fatigue behavior within a reasonable time: a simplified FE model, focusing the fatigue analysis not on the whole stent but just on a unit of interest, is set up and coupled with an analytical model representative of the SFA, which allows quantifying the actual load experienced by the unit associated to particular anatomical conditions. The proposed simplified methodology for fatigue investigation, once applied to a patient-specific case, showed an agreement between model predictions and clinical evidence proved by 18-month follow-up data. Despite the approximations introduced into the proposed method, it proves able to assess the fatigue behavior of stents associated to particular anatomical conditions in a reduced period of time, compatible with the requirements of manufacturers or clinicians.

In conclusion, numerical analysis can be a useful tool to investigate the fatigue behavior […] evidence. For that purpose, experimental tests on real devices are performed in order to validate the numerical model predictions: tensile and crimping tests on peripheral stents prove the model's capability to reproduce the macroscopic mechanical behavior of the real devices. Similarly, numerical results of crimping tests on an aortic valve and experimental evidence are in agreement in proving that the crimping procedure induces plasticization in different points of the valve's stent-frame. Finally, an assessment of the proposed fatigue criterion is given by experimental cyclic tests on real stents, whose results are found to be in agreement with the proposed fatigue limit.
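The cycle counts quoted above are easy to sanity-check. The snippet below (purely illustrative; the helper names are not from the thesis) converts a heart rate into loading cycles per year for a load applied once per heartbeat, and estimates how long it takes to accumulate the 10^8-cycle design target.

```python
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

def cycles_per_year(beats_per_minute):
    """Loading cycles per year for a load applied once per heartbeat."""
    return beats_per_minute * MINUTES_PER_YEAR

def years_to_reach(target_cycles, beats_per_minute):
    """Years needed to accumulate target_cycles at the given heart rate."""
    return target_cycles / cycles_per_year(beats_per_minute)

# At 72 bpm a valve stent-frame sees about 37.8 million cycles per year,
# consistent with the 10-40 million cycles/year range cited in the text.
rate = cycles_per_year(72)
horizon = years_to_reach(1e8, 72)
```

At 72 bpm the 10^8-cycle target is reached in under three years, which is why the design requirement is far stricter than a single year of service.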
Diffusion MRI (dMRI), an MRI technique sensitive to the diffusive motions of water molecules, has demonstrated good sensitivity to microstructural changes in many diseases. An application is tractography, the virtual reconstruction of fiber trajectories. Conventional dMRI methods such as Diffusion Tensor Imaging (DTI) rely on the hypothesis of free or hindered diffusion, which is generally a good approximation only at low diffusion weightings (b-values). Many advanced dMRI methods have been proposed; in particular, multi-compartment models of hindered and restricted diffusion in compartments with known geometry were developed to allow a more specific microstructural characterization of tissues. The aim of this thesis was to assess the feasibility of model-based dMRI techniques in clinical research, and to investigate their utility in the characterization of brain microstructural alterations.

Model-based analysis of dMRI signal in CJD patients
In the first application, mathematical models were developed and applied to study the microstructural changes in Creutzfeldt-Jakob Disease (CJD), the most common human prion disease. The presence of asymmetric […] is a common marker for the diagnosis of CJD, but its origin is currently unknown. Two isotropic bi-compartment models of diffusion were developed to test two neuropathological hypotheses: 1) a biexponential model to describe intra- and extra-cellular hindered diffusion, the latter expected to reduce with protein deposition; 2) a model with restricted diffusion in a spherical compartment, modeling the restriction in vacuoles. dMRI data were acquired from 10 patients with CJD and 7 healthy and pathological controls. The two proposed models were fitted to the data and regions of interest (ROIs) were delineated in gray matter areas. The fitting performance of both bi-compartment models was significantly better than that of the mono-exponential model, especially in the affected areas, but similar between them. In hyperintense areas, the main results were an increase of T2, a decrease of all the diffusivities, and an increase of the volume fraction of the restricted compartment in the vacuole model. This study may represent an important step towards the characterization of microstructural changes in CJD. Even though the precise pathological mechanism responsible for the dMRI changes has not been determined, biomarkers for sensitive and specific CJD diagnosis were proposed.

Microstructural features of brain tumors by NODDI
In the second application, dMRI data from 71 patients with brain gliomas were analyzed with NODDI, a model with 3 compartments where diffusion is free (CSF), hindered (extracellular) and restricted in sets of sticks (intracellular), respectively. A preliminary comparison with an isotropic model showed that the NODDI intracellular fraction (fICV) is a valuable index of diffusion restriction, even though overestimated in isotropic conditions. Grade II lesions displayed high extracellular volume fractions (fECV), grade III gliomas also showed regions with an increased fICV, and grade IV gliomas were usually heterogeneous. Instead, DTI parameters, namely FA, were non-specifically altered. All three NODDI volume fractions allowed a statistically significant discrimination between grade IV and both grade II and grade III lesions (figure 1). These preliminary results show that non-invasive tumor characterization and grading by NODDI is feasible in a clinical context. This could have […]

1. Mean fECV, maximum fICV and fISO averaged among the patients with grade II, grade III and grade IV gliomas. The significant differences are marked with stars.

[…] glioblastoma multiforme (GBM) to test the possibility of a better reconstruction through areas of vasogenic edema than allowed by DTI-based tractography (DTT). In the proposed NODDIT algorithm, the termination criterion was based on upper thresholds on both the orientation dispersion index (ODI) and on fISO. In a preliminary phase, the ODI threshold was calibrated by comparison with DTT in healthy regions. The mean streamline density obtained by NODDIT and DTT was evaluated in 3 ROIs per patient: the tumor core, the peritumoral edema and the contralateral WM. Compared to DTT, NODDIT streamline densities were similarly high in normally appearing WM, similarly very low in the tumor core, and significantly higher in vasogenic edemas. Lowering the FA threshold to about 0.1 can provide similar results to NODDIT in the edemas; however, an unacceptable specificity loss was highlighted, with a high number of false positives, as verified even in the ventricles. The visual inspection of the tractographic reconstruction of specific tracts showed that in all the considered cases NODDIT provided more streamlines passing through the edemas, or even allowed reconstructing tracts completely missed by DTT (figure 2). Thus, NODDIT could find important applications in the preoperative mapping of patients with brain gliomas.

2. Example of tractographic reconstruction in a representative GBM patient. The cortico-spinal tract was reconstructed from the same ROIs with DTT (A) and NODDIT (B).

Conclusions
In all the proposed applications, the use of multi-compartment models was clinically feasible and advantageous when compared to traditional methods. This involves non-trivial work for the choice of suitable models and the interpretation of results, but the obtained parameters seem more specific to the underlying tissue microstructure and its pathological changes.
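To make the bi-compartment idea concrete, the sketch below fits the biexponential signal model S(b)/S0 = f·exp(-b·D_fast) + (1-f)·exp(-b·D_slow) to a synthetic voxel by a coarse grid search. The b-values, parameter grids and names are illustrative assumptions, not the thesis' actual fitting procedure (which would use nonlinear least squares on clinical data).

```python
import math

def biexp(b, f, d_fast, d_slow):
    """Normalized biexponential dMRI signal: two hindered compartments."""
    return f * math.exp(-b * d_fast) + (1 - f) * math.exp(-b * d_slow)

def fit_biexp(bvals, signal):
    """Coarse grid search minimizing the sum of squared residuals.
    Diffusivity grids (mm^2/s) and the fraction grid are illustrative."""
    best, best_sse = None, float("inf")
    for i in range(1, 20):                    # volume fraction f
        f = i / 20
        for j in range(1, 7):                 # fast diffusivity
            d_fast = j * 0.5e-3
            for k in range(1, 10):            # slow diffusivity
                d_slow = k * 0.05e-3
                sse = sum((biexp(b, f, d_fast, d_slow) - s) ** 2
                          for b, s in zip(bvals, signal))
                if sse < best_sse:
                    best, best_sse = (f, d_fast, d_slow), sse
    return best

# Synthetic voxel: f = 0.6, D_fast = 1.5e-3, D_slow = 0.2e-3 mm^2/s
bvals = [0, 500, 1000, 2000, 3000]            # s/mm^2
signal = [biexp(b, 0.6, 1.5e-3, 0.2e-3) for b in bvals]
f_hat, d_fast_hat, d_slow_hat = fit_biexp(bvals, signal)
```

On this noiseless synthetic voxel the grid search recovers the generating parameters; on real data, comparing the residuals of this model against a mono-exponential fit mirrors the model-selection step described above.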
In the last years, novel hydrogel formulations were developed to obtain injectable products for tissue regeneration. The advantages of using injectable hydrogels rely on their ability to conform to the defect shape and on the possibility of in vivo delivery in a minimally invasive way, thus reducing discomfort and complications for the patient. The most challenging strategy implies that the cells are loaded inside the gels. For cell-loaded injectable gels, cell entrapment is generally achieved in the liquid (or highly viscous liquid) form of the gel precursors, with severe limitations on the conditions and reagents used in the sol-gel transition, often not compatible with cell viability and the immobilization of bioactive molecules.

Pectin, a natural polysaccharide present in the cell wall of most plants, is nowadays the object of increasing interest for applications in the biomedical field. Pectin is a biocompatible anionic polysaccharide that constitutes 30% of the plant wall [1], widely used as a thickener, gelling agent, stabilizer and emulsifier in food products [2]. It is almost entirely composed of three polysaccharidic domains: homogalacturonan (HGA), rhamnogalacturonan-I (RG-I) and rhamnogalacturonan-II (RG-II). HGA is the major component of pectic polysaccharides and contains α-(1→4)-D-linked galacturonic acids (1,4-α-D-GalA) that are partially methyl-esterified and sometimes partially acetyl-esterified. Due to the peculiar gelling mechanism, low-methoxy pectins, which have a degree of esterification (DE) < 50, have been proposed for the preparation of hydrogels for biomedical applications, namely drug delivery, gene delivery and regenerative medicine, as implantable materials for minimally invasive surgery. Pectin gels are proving wide applicability as biomaterials, with recent advances in regenerative medicine applications, such as microspheres. Bioactive modifications of this polysaccharide, such as enzymatic degradation, partial oxidation and RGD functionalization, were able to control degradation and cell adhesion. Hairy regions of branched pectin, separated by enzyme degradation, are known to promote the proliferation and differentiation of cells (such as BMSCs to osteoblasts). In view of obtaining an ad hoc pectin-based hydrogel for a specific purpose, pectin can be extracted from different sources, and its characteristics vary according to the plant species from which it is extracted. The main characteristics of an appropriate extraction process are the use of biocompatible chemicals and the ability to preserve the peculiar structural characteristics, such as the integrity of the branched regions, which play an important role in cell interaction. A high molecular weight and a low degree of esterification need to be pursued to form stable ionotropic gels, in conditions compatible with cell viability or biomolecule loading.

In view of developing pectin-based injectable systems for cell immobilization in regenerative medicine applications, the results of this work demonstrated that the gelling kinetics and rheological properties of pectin hydrogels can be modulated according to the specific way of administration and to the tissue to be treated. Particularly, by fine tuning of the sodium bicarbonate and calcium carbonate content, a tight control of the pH of the hydrogel solutions was achieved, thus controlling their gelling kinetics. In this context, it was possible to obtain hydrogels with fast gelation, that can be used as in situ gelling systems, or gels with a slower gelation process, that can be used as cell carriers, where the gel preparation is performed prior to injection. Furthermore, the control of the rheological parameters allowed obtaining thicker hydrogels to tailor the mechanical stimuli of the matrix and promote cell differentiation for different tissues, providing a three-dimensional structural support for the host cells.

Cell immobilization within a hydrogel represents an innovative and successful strategy to deliver cells in a damaged tissue. It is well known that cells remain viable in injectable microspheres, where the surface area to volume ratio is higher and the exchange of nutrients and oxygen is promoted. In this thesis, the process for adipose-derived stem cell (hADSC) immobilization in pectin-based bulk hydrogels was investigated. hADSCs retained viability up to 24 hours after immobilization within the 3D pectin matrices, and the presence of glucose and glutamine as additives resulted to play a key role in nutrient and oxygen supply during the first hours of immobilization, which are known to be the critical phase for entrapped cells. After extrusion, the injectable hydrogels showed an excellent hADSC viability, indicating that the presence of a 3D matrix protects cells from the damaging injection process. Immobilized hADSCs maintained their stemness capability after 7 days of incubation, indicating that the hydrogel did not affect the cell phenotype.

To provide an ideal environment for anchorage-dependent cells, peptide grafting of pectin is a strategy often investigated in the literature. In this thesis, a peptide grafting was successfully tested on the pectin backbone to improve antibacterial activity. Eventually, antimicrobial pectin hydrogels will be useful tools in several diseases, such as osteomyelitis, in which the optimal treatment arises from the dual approach of cell delivery associated to an antibacterial effect, with the intention to induce both bactericidal effects and bone regeneration in a single step. To these aims, we demonstrated that it is possible to produce a pectin derivative endowed with antimicrobial activity by grafting antimicrobial peptides onto the pectin backbone.

Considering the specific final aim, i.e. biomedical application, ad hoc production of pectin should be explored, with a tailored extraction process able to preserve its peculiar structural properties and with the possibility to form ionotropic gels. These hydrogels could be evaluated as three-dimensional (3D) cell culture systems, where the cells can be entrapped during gel formation in mild conditions. The 3D systems can be produced as injectable stem cell-loaded hydrogels for tissue regeneration, by controlling the biochemical environment within the gel. Injectable systems are not limited to cell delivery, and can be exploited to release specific biomolecules or be modified to face a specific challenge. The antibacterial injectable pectin hydrogels can be used in multiple applications where preventing bacterial adhesion is still an unmet need. This research is aimed at the ultimate goal of combining these different aspects, producing a contact-active material composed of different substances, both effective to dismantle biofilms and to deliver cells in the damaged tissue disrupted by the infection, for pathologies such as osteomyelitis and periodontitis.
PATIENT-SPECIFIC MULTI-PARAMETRIC MODEL OF THE thickness, the papillary muscles in a parametric probability had both an endocardial and
BIOENGINEERING
planning and guidance of Segmentation viable myocardium including myocardium wall thickness. myocardial tissue mean valuep). structure. Using the distances
electroanatomic mapping (EAM) Image-guidance allows the perfusion defects in MDCT; two Points with distance less than The best threshold to detect DE from the LVepi to the LVendo
and radiofrequency (RF) catheter navigation of the catheter tip approaches are on the early 5 mm were considered scared scar in our experiment was the surface, zones where the
ablation (CA) procedures in over the cardiac structures of angiographic scan and one on areas. formulation: p + (v p)/2 , thickness was less than 5 mm
ventricular tachycardia (VT), the interest within a high resolution the delayed scan. On the early As the detection of myocardial which best matched with the were considered scar vertices.
knowledge of the exact location anatomical map. For this intent, scan compromised zones are scar areas in the delayed scan experts identification of hyper For each triangle vertex of
and extent of myocardial scar the accurate segmentation hypo-enhanced when compared consist in detecting high density enhanced zones. Using this the LVepi mesh, the distance
is important. Today, delayed of the patient specific cardiac to the normal myocardium or values when compared to the threshold, it was possible to from LVepi to the epicardial fat
enhanced magnetic resonance anatomy of interest is a present myocardial wall thinning. nor- mal myocardial tissue, we extract scar most in agreement surface was computed; points
imaging (DE-MRI) is considered fundamental effort. Of great The approach to detect scar studied different thresholds with the opinion of the above 3 mm were considered
the imaging gold standard importance in any epicardial on the delayed scan, contrary to detect scar as similar as physicians on the scar location, myocardial scar.
for the assessment of scar intervention is epicardial fat to the first case, consists in physicians would do. We with a satisfactory classification. In patients that underwent an
tissue and it is being used, tissue, it may be confused searching for hyper-enhanced made different assumptions epicardial intervention, points
integrated into the 3D EAM with myocardial scar causing zones of the myocardium. We to determine the best way to Myocardial Multi-Parametric with epicardial fat thickness
system, to guide ablation useless ablation. Moreover, took the advantage of having identify automatically areas Map greater than 3mm were added
procedures. However, multi- the anatomical course of the the myocardium segmentation of scar; the first one was Using information from the to the myocardial mesh to be
detector computed tomography (MDCT) could be an interesting alternative. The main reasons are related to the reduced artefacts caused by the implantable cardioverter-defibrillator (ICD) and the higher spatial resolution when compared to DE-MRI, with which we can have detailed information about the anatomy of the heart (e.g. trabeculae and coronary arteries), as well as the reliability in visualizing epicardial fat distribution. The purpose of this work was to construct a 3D multi-parametric model of the heart by automatically segmenting ventricular cavities, left myocardium, myocardial scar, epicardial fat and coronaries from MDCT images. Further, this map was compared with the findings of EAM, created previously to the RF CA procedure.

[…] main coronary vessels may help in optimizing interventions, reducing potential damage of such structures. For the anatomical segmentation, a 3D level set algorithm based on a multi-scale directional stopping function was developed, implementing the Geodesic Active Contour (GAC) formulation. The stopping function was applied twice: it was updated after the convergence of the first evolution, reducing the scale space of the edge detector, until the level set converged again. We validated the proposed method quantitatively against the manual segmentations done by expert radiologists.

Myocardial Scar Segmentation
There are three different […] from the early scan, where the epicardial myocardium boundary is clearly defined, to use as template for the scar identification. Since the heart cannot be seen as a rigid body, we model the transformation as the sum of a global rigid transformation and a local elastic deformation (free-form deformations) to correct the deformations that may occur due to the beating heart and other anatomical motions.
The extraction of nonviable myocardium was accomplished as described in the following. In the case of hypoenhancement (if scar is present), we calculated the mean m and variance s² of the entire data and used m − 3s as threshold to identify the low attenuation values. In order to accurately measure the myocardial wall […] that scar in DE-MDCT has attenuation values similar to the blood pool. For this reason, using the LVendo segmentation from the early scan as template on the pre-registered delayed scan, we identified the parameters of the Gaussian distribution of the LV attenuation values (mv, sv). In order to identify the best parameters to identify scar, we used as thresholds the following values: mv, mv − 1sv and mv − 2sv. Looking at the myocardium, on the other hand, as it involves a mixture of two tissue types (corresponding to two different histogram attenuation peaks), a Gaussian mixture model with two component densities was used to identify two clusters in the myocardial tissue. It consists […]

[…] automatically segmented cardiac structures of interest, describing scar location and extent, as well as zones of thick fat layers, we face the myocardial map construction. Additionally, the created map was compared with the findings of EAM, created previously to the CA procedure. The decision of an endocardial or epicardial EAM and RF ablation approach was taken based on the prevalent distribution of scars at CE-MDCT or during the failed endocardial procedure. Values greater than 1.5 mV defined normal LVendo bipolar electrogram amplitudes, and values greater than 8 mV defined normal LV endocardial unipolar electrogram amplitudes. Nine patients had an ICD; 8 had only an endocardial and one only an epicardial intervention, and two patients […] considered as myocardial scar. Bipolar and unipolar voltages were correlated with the MDCT-based scar: 35.3% of the low bipolar voltages of the EAM, and 64.72% of the low unipolar voltage points, are within our defined scar.

Conclusions
This work represents a step forward not only to the accomplishment of accurate patient-specific segmentation of cardiac structures, but also to the introduction of MDCT scar detection as a feasible and effective approach to plan EAM, aiming at improving RF CA and reducing intervention time.
BIOENGINEERING

Posterior spinal stabilization by means of pedicle screw-rod based implants is a gold standard in the surgical treatment of a great variety of diseases. Clinical results demonstrate an improved outcome with respect to traditional stand-alone techniques. Significant improvements have been made since the introduction of pedicle screw technology in the 1980-90s; however, a high complication rate is still reported today (from 12.0 to 54.0% after 2000). Moreover, fatigue-related failures, due to the high number of loading cycles experienced during everyday life activities (e.g. walking), are continuously reported, especially at the screw level (from 1.2 up to 35.0% after 2000).
The present dissertation is aimed to better understand and possibly improve the current standards published by the American Society for Testing and Materials (ASTM) and the International Organization for Standardization (ISO) for the in vitro preclinical evaluation of the mechanical behaviour of posterior spinal stabilization devices. In particular, the validity of the ASTM F1717 and ISO 12189 standards, representing a vertebrectomy (worst case) and a physiological instrumented 2-Functional-Spine-Units (FSUs) scenario respectively (Figure 1), was investigated.

Anatomical parameters (e.g. pedicle inclination, interpedicular distance), as well as biomechanical parameters (e.g. follower load distance, center of rotation distance, unsupported screw length), useful to catch the most important features of the thoracolumbar spine, were collected as a function of the spinal level, both from a review of the literature and using direct measurements on physiologic subjects. A parametric FE (finite element) analysis was performed on both standards, evaluating the contribution of each parameter in increasing the stress arising on the implant (Figure 2). A few parameters were found to be significant (i.e. leading to a percentage stress increase greater than 2%). Moreover, the geometrical configuration implemented in the ASTM F1717 standard reproduces quite well an average instrumented thoracolumbar spine segment; however, the anatomical worst-case scenario due to the combination of the most important anatomical parameters was found at L1, leading to a maximum stress increase of about 15% for the ASTM F1717 and about 10% for the ISO 12189 standard: these scenarios were then proposed as a basis for a revision proposal of the standards. Other mechanical parameters, such as the initial precompression of the anterior support as well as its stiffness in the ISO 12189 standard, were found to be significant, leading to a much higher stress increase (even beyond 420%) than the anatomical parameters alone: in this light, care should be taken when using the ISO test method.
Since the above described results were obtained on a simplified design capable of catching only the general features of a pedicle screw-rod based implant, a more realistic design was also considered. A parametric FE model of a commercial spinal fixator was built and validated using the strain-gauge technique. A parallel preliminary experimental mechanical characterization revealed that the fatigue life of constructs assembled according to the proposed revised version of the ASTM standard may be significantly lower than that of constructs assembled according to the current one: this was interpreted as a confirmation that a standard revision should be taken into consideration.
A final numerical analysis was carried out to better understand whether the stress arising on the posterior spinal stabilization device in standard configurations may be really representative of some everyday life activities. A previously validated L2-L4 spine segment was then instrumented according to different clinical scenarios: vertebrectomy, bisegmental stabilization, bisegmental bridge stabilization and vertebrectomy with an anterior support (Figure 3). These models were validated against in vitro/in vivo literature data. The results demonstrated that the ISO 12189 procedure reproduces quite well a physiological instrumented scenario during flexion of the upper body. Moreover, simple considerations can help in comparing and interpreting the achieved results using different testing procedures with respect to the effective clinical use.

1. Sagittal view of a patient instrumented with a spinal fixator to stabilize the lumbosacral segment (a). Simplified drawings of the experimental approaches used to evaluate the mechanical properties of posterior spinal fixators according to a vertebrectomy scenario implemented by the ASTM F1717 standard (b) or to a physiological instrumented scenario according to the ISO 12189 standard (c).
2. Parametric FE models according to ASTM F1717 and ISO 12189 standards (a, b respectively) and corresponding meshed reference models (b, c respectively).
3. Starting from a validated L2-L4 FE model, different clinical scenarios were simulated. The vertebrectomy scenario was directly compared with the ASTM standard model, while the bisegmental stabilization, bisegmental bridge stabilization and vertebrectomy with an anterior support were compared to ISO 12189 models where the spring stiffness was also varied.
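The 2% significance criterion used in the parametric FE study above can be stated compactly; the sketch below is illustrative only, and the parameter names and stress values are hypothetical.

```python
def percent_stress_increase(stress_ref, stress_param):
    """Percentage increase of the peak implant stress with respect to
    the reference configuration."""
    return 100.0 * (stress_param - stress_ref) / stress_ref

def significant_parameters(results, reference, threshold=2.0):
    """Flag parameters whose variation raises the peak stress by more
    than `threshold` percent (the significance rule used in the text).
    `results` maps a parameter name to the peak stress obtained when
    that parameter is perturbed."""
    return {name: percent_stress_increase(reference, s)
            for name, s in results.items()
            if percent_stress_increase(reference, s) > threshold}
```

For example, with a reference peak stress of 100 MPa, a configuration reaching 103 MPa is flagged (+3%), while one reaching 101 MPa is not.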
[…] opens up a unique window of opportunity for the investigation of function and structure of the brain. Each imaging method probes specific physiological processes with characteristic resolutions, giving a filtered view on one or more brain processes of interest. The combination of different imaging modalities can overcome the limitations of the single techniques, as it allows to 1) extend the coverage of the spatiotemporal domain and 2) get a more comprehensive view of physical and physiological properties of the brain.
The present PhD dissertation gives a comprehensive overview of the advantages and possibilities provided by brain magnetic resonance imaging and its integration with complementary neuroimaging techniques, with special interest towards simultaneous EEG-fMRI. Indeed, the integration of EEG and fMRI offers the unique opportunity of providing a non-invasive comprehensive view of brain function with high temporal and spatial resolution. The primary objective of the PhD thesis is to develop technical instruments for pre-processing, analysis, coregistration and fusion of multimodal information, to be used at normal or high magnetic field strengths. Multimodal imaging at ultra-high field (UHF) offers […] are currently unfulfilled, due to important technical challenges: in this respect, the PhD thesis focuses on the pre-processing phase to set the ground for future complex unimodal and multimodal analysis at UHF.

The first section deals with the pre-processing of EEG-fMRI data and is dedicated to the removal of cardiac-related artefacts from EEG data recorded in the MR environment at 3T and 9.4T. Using resting state data, the performances of different sophisticated correction methods based on independent component analysis (ICA) were quantitatively compared, in terms of their capability to 1) reduce the artefact, 2) recover the underlying physiological information. Since the discrimination of artefactual components from physiological ones is challenging, different methods for the selection of PA-related components were considered. In the 3T study, the selection based on the components' wavelet transform, which is a novelty of the thesis, proved the most accurate in preserving the physiological alpha rhythm in the occipital channels. In the 9.4T study, the overall EEG physiological information was distorted by the UHF, from amplitude and spectral content to connectivity pattern of EEG […] new selection method based on wavelets, we have been able to recover at the channel level the physiological alpha rhythm in the occipital channels. Although the quality of removal can still be largely improved, these preliminary results pave the way for future resting state EEG-fMRI analysis at UHF.

In the second part of the thesis, the interest is shifted towards the processing phase. Methods for complex fMRI connectivity analysis are described: in particular, a novel whole-brain parcellation scheme that integrates MRI anatomical and functional information is presented. The new parcellation divides the brain into non-overlapping, spatially connected clusters and can be used to define nodes in connectivity networks that are homogeneous in both structure and function. The test on two synthetic datasets demonstrated an overall capability of the algorithm to correctly identify the functional clusters, both in resting state and in presence of stimulations. The reliability of the novel method was further confirmed by real data applications: the parcellation 1) showed a good reproducibility across healthy subjects, 2) led to a reliable definition of epileptic networks.
To set the stage for future […] UHF, the delicate issue of anatomical segmentation at 7T is introduced: besides higher problems of inhomogeneity, the tissue contrasts exploited at UHF can be different from the most common ones and may require specific processing techniques. The Tissue Border Enhancement (TBE) technique allows an immediate visualization of the borders of brain tissues: a new algorithm for the extraction of borders in TBE images is described, called Minimum Intensity Snake Algorithm (MISA). MISA iteratively follows the path of minimum intensity within the image using functions of graph theory. When applied to a TBE image acquired at 7T, it led to a satisfactory detection of tissue interfaces (Fig. 1). The combination of TBE and MISA can overcome the limitations related to traditional imaging techniques at UHF, opening the road to several applications.

In the last section of the thesis, a set of methods for the analysis and integration of EEG, fMRI and NIRS data is shown. A comprehensive overview of the information of clinical utility that can be extracted from the EEG and fMRI techniques is given, as applied to the study of photosensitivity. The effects of intermittent photic stimulation (IPS) on one patient were compared with a group […] response to IPS was investigated, then the EEG information was used to study evoked potentials, the frequency content and functional connectivity in the IPS frequencies. An EEG-informed fMRI analysis investigated the hemodynamic correlates of the EEG power changes in the IPS frequencies. The quantitative comparison between patient and control group revealed many peculiar characteristics that contributed to delineate the patient's clinical picture. Finally, in the patient, the fMRI epileptic network was extracted and the pattern of propagation of the epileptic activity within the network was inferred. In a group of healthy subjects, the negative BOLD responses (NBRs) to IPS were investigated by means of the NIRS technique, able to give insight into the […] results of the fMRI activation analysis were compared to the NIRS ones, and the coupling between BOLD and NIRS signals was investigated in the NBR regions. NBRs were found to be characterized by an HbO decrease and a concomitant HHb increase w.r.t. the baseline condition (Fig. 2): the NIRS study provided new information on the negative BOLD phenomenon.
In summary, the present PhD thesis has given insight into some crucial analytic challenges of the multimodal integration at normal and high field, drawing attention on the combination of MRI with other neuroimaging techniques, especially EEG. The results are promising and open the door to future complex multimodal analysis at UHF.

1. Borders between GM and WM identified by MISA.
2. Plot of BOLD and hemoglobin responses to IPS in one example channel (channel 3-4) of one exemplar subject. The grey area corresponds to the IPS interval.
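The thesis does not detail MISA's exact formulation, but the core idea it names, following the path of minimum cumulative intensity through an image using graph methods, can be sketched with a generic Dijkstra search over the pixel grid. The function name, 4-connectivity, and edge-cost convention are assumptions made for illustration only.

```python
import heapq
import numpy as np

def min_intensity_path(img, start, goal):
    """Dijkstra's algorithm on the pixel grid (4-connectivity); the
    cost of entering a pixel is its intensity, so the returned path
    has minimum cumulative intensity between start and goal."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = float(img[start])
    pq = [(float(img[start]), start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + float(img[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

On an image where a low-intensity valley (e.g. a tissue border in a TBE image) crosses a brighter background, the search stays inside the valley.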
MONITORING OF VITAL PARAMETERS FOR SLEEP […]

[…] the aspects that mostly influence our everyday life. The literature reports that a high percentage of serious car or work accidents are caused by daytime somnolence due to poor sleep quality. It is important to remark that the difficulty in the identification of symptoms and the costs connected to an accurate clinical evaluation can give rise to underestimation or to the lack of diagnosis of sleep disturbances. In addition, there is generally a small number of specialized centers. Further, a bad quality of sleep was proven to have an impact on blood pressure, to decrease the immune defenses and to possibly increase the probability of insurgence of metabolic disturbances such as obesity and diabetes. In addition, sleep disturbances, for example the ones related to breathing, have a strong association with cardiovascular pathologies. For all these reasons, an indication of sleep quality may constitute a good parameter for the prevention of some pathologies and can provide a tool for improving quality of life. The introduction of textile wearable devices, as well as sensorized mattresses, constitutes a great advantage in prevention and also in follow-up. In fact, they can be used at home without the intervention of medical personnel. Thus, they could represent a cost-effective solution for continuous monitoring and, therefore, risk prevention.
Sleep is also a sensitive barometer of emotional status and psychiatric conditions. Sleep difficulties have been associated with emotional states and psychiatric diagnoses related to anxiety, affective dysregulation (depression, manic states and bipolar disorders), post-traumatic-stress disorder and other more specific behavioral disorders such as attention deficit and hyperactivity disorder (ADHD). The evaluation of a sleep disorder is often mixed with the assessment or consideration of the psychiatric condition. Hypersomnia may be the main symptom in some depressive disorders, such as seasonal depression, depression with atypical features or depressive episodes in bipolar disorder. Psychological state assessment, in particular bipolar disorder management, is one of the areas of great demand for continuous monitoring, patient participation and medical prediction. The nature of bipolar disorder is unpredictable and episodic. Thus, it is necessary to take the traditional standard procedures of mood assessment, through the administration of rating scales and questionnaires, and the data found in emerging research on central and peripheral changes in brain function that may be associated to the clinical status and response to treatment throughout the course of mood disorders. Disease management for psychiatric patients through continuous, non-invasive monitoring represents a novel approach. Today, psychiatric patients together with their relatives face a great deal of problems, often resulting in a premature interruption of treatment and of follow-up by psychiatric services, due to the deinstitutionalization of mental health services and the establishment of services in primary care, community centers and general hospitals. The use of wearable devices can open the possibility to provide continuous and ubiquitous access to medical excellence in a cost-effective way.
Sleep is a complex state physiologically characterized by important changes in the autonomic regulation of the cardiovascular activity. HRV is largely affected during sleep by the sleep-stage organization: specifically, evidence suggests a predominant vagal drive to the heart and a reduced sympathetic tone during non-rapid eye movement (NREM) sleep, and an increased sympathetic modulation, with fluctuations between parasympathetic and […] smart devices for monitoring vital signs (in particular the HRV signal) in ambulatory subjects during their daily activities. These devices allow the remote and continuous monitoring of people in different circumstances and situations, such as diagnosis procedures and the monitoring of patients with cardio-respiratory or mental diseases. This is because the patient lives his/her daily life and is not perturbed psychologically by the hospital environment. This also results in economic savings by the reduction in hospitalization costs.
This PhD work aimed to give a mathematical characterization of sleep, both to provide a clear and scientific quantitative description of the underlying physio-pathological phenomena, and to allow the implementation of an automatic classifier for speeding up the study of sleep macrostructure in regular clinical practice. Moreover, it aimed to characterize the regulatory autonomic behaviors behind the course of the bipolar disorder, in order to pave the way for the creation of a supporting tool to help clinicians face this disease better.
It was demonstrated that the HRV signals contain information that can be connected to the sleep modulations and that can be inferred from time-domain, frequency-domain and long-[…] like the sleep efficiency (SEFF). The dissertation aims to raise the attention on evaluating sleep classification performance by means of sleep-related features (like SEFF, TWT, WASO, etc.) instead of the commonly used metrics, i.e. accuracy, sensitivity and specificity. These sleep-related features find application in clinical studies; therefore, it seems more useful to estimate them in a reliable way than to compute a precise hypnogram. The HRV and the sleep regulations influence different pathologies, including mood disorders. Some changes in the modulation of the ANS were demonstrated to be attributable to bipolar disorder, while other processes reflect physiological mood changes in bipolar patients.
The crucial point of such analysis was represented by the choice of monitoring the patients in a naturalistic environment. The study of a psychiatric disorder during real life, at the patient's home, gives results of relevant clinical importance. On the other hand, such an uncontrolled environment might affect the recording quality. A possible way to face this issue through two steps was shown. First, by evaluating the minimal requirements for a reliable estimation of the considered features. In particular, the robustness of the estimation of […] the others are not satisfied. For example, if the ECG recording is completely corrupted, macrostructure-related sleep parameters (TIB, SOL, WASO, TWT, TST and SEFF) can still be estimated through the analysis of the body movement signal.
In this dissertation the feasibility of a home monitoring of bipolar patients was demonstrated. The information that can be gathered from the HRV signals seemed to be useful for the assessment of the different stages of such pathology. Moreover, it represents an objective evaluation of the disease, providing instruments for its study and interpretation. This can be taken into consideration by the clinicians and the caregivers to better take care of and support the patients.
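The macrostructure features named above (TIB, SOL, WASO, TWT, TST, SEFF) can all be derived from a per-epoch hypnogram. The sketch below uses common sleep-medicine definitions (e.g. SEFF = 100 · TST/TIB); the thesis's exact conventions may differ, and the staging encoding is an assumption.

```python
import numpy as np

def sleep_features(hypnogram, epoch_s=30):
    """Macrostructure features from a per-epoch hypnogram where
    0 = wake and any positive value = a sleep stage. All outputs are
    in seconds except SEFF (percent). Definitions follow one common
    convention and may differ in detail from the thesis."""
    h = np.asarray(hypnogram)
    tib = h.size * epoch_s                        # time in bed
    sleep = h > 0
    if not sleep.any():
        return dict(TIB=tib, SOL=tib, WASO=0, TWT=tib, TST=0, SEFF=0.0)
    onset = int(np.argmax(sleep))                 # first sleep epoch
    sol = onset * epoch_s                         # sleep onset latency
    tst = int(sleep.sum()) * epoch_s              # total sleep time
    waso = int((~sleep[onset:]).sum()) * epoch_s  # wake after sleep onset
    twt = tib - tst                               # total wake time
    seff = 100.0 * tst / tib                      # sleep efficiency
    return dict(TIB=tib, SOL=sol, WASO=waso, TWT=twt, TST=tst, SEFF=seff)
```

For instance, a short record of eight 30 s epochs with two initial wake epochs gives SOL = 60 s and a SEFF equal to the sleep fraction of the record.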
Eros Montin - Advisor: Prof. Luca T. Mainardi - Tutor: Prof. Paolo G. Ravazzani
Image registration is one of the fundamental procedures of image processing in the field of medical imaging. Medical images are widely used for diagnosis, treatment planning, disease monitoring and image-guided surgery, and can be acquired using different imaging modalities like Magnetic Resonance Imaging (MRI), Computed Tomography (CT), X-ray and Positron Emission Tomography (PET). Images obtained using different modalities usually need to be compared to one another and/or combined for analysis and decision making. In this scenario, the aim of image registration is to find the best alignment between a fixed (reference) image and a moving one (source) by evaluating their similarity. Image registration can be visualized as an optimization problem which maximizes the matching between these images by changing the parameters of a geometrical transformation which maps points in one image to the corresponding points in the other one.

An image registration procedure is mainly composed of three components: a geometric transformation model, a similarity measure and an optimization algorithm. Due to the diversity of images to be registered and the diverse types of degradations, it is impossible to design a universal method applicable to all fusion tasks. Hence, in order to obtain an optimal solution, greater attention has to be paid during the selection of the above components, which have to be defined according to the nature of the specific registration problem. Among these three components an important role is played by the metric. Intensity-based fusion metrics assume that some particular features extracted from the image voxels will be most similar when the correct registration transform is applied. They can be sub-divided into two macro groups in relation to the kind of registration task they solve: mono-modal and multi-modal. The first one solves registration tasks of images acquired with similar parameter settings; conversely, the multi-modal one solves all the others.

Therefore, one of the most challenging problems in the image fusion field arises from multi-modal image registration. Literature metrics based on information-theoretic techniques had great experimental success and are becoming widely used in multi-modality fusion. Among them, Mutual Information (MI) is considered the elective state of the art. However, MI optimization is still considered a hard task because of several well-known drawbacks, e.g. the non-convexity of the metric in the parameter space. Therefore, registration strategies based on MI are usually constrained in order to obtain smooth deformation fields. This deformation smoothness is not always desirable in the oncological field, where the lesion presence may induce huge localized tissue changes. Therefore, in this thesis an alternative similarity measure has been proposed and implemented. The metric integrates MI with a local descriptor of the images to be matched in a pluri-metric approach. Since two images are considered fused when intensity changes occur in the same location in the two images to be registered, the elected local descriptor was a gradient-based filter. The reliability of this metric was evaluated in comparison to the literature ones, and the metric was finally utilized in a real clinical oncological case.

The main aim of this study was the reassessment of images acquired in pediatric age in comparison to the adolescent ones, by means of Diffusion Tensor Imaging (DTI). The activity concerned the late realignment of images arising from different scanners, modalities, patient ages and therapies. The main challenges of this task arise from […]

Fusion of radiotherapy and diffusional dataset (AM).
Fusion of radiotherapy and diffusional dataset (FA).
Longitudinal image registration of pediatric MRI, by increasing the complexity of the transformation model using a pluri-metric approach. Namely translation, rotation, scaling, and non-rigid based on b-spline.
MRI0 (bottom right corner) creation from the MRI1 (top right corner) using a deformation field (bottom left corner) which mimics a real patient growth pattern. MRI0 can be compared with the real MRI0 (top left corner) of the patient.
The neuro-radiological atlas exploited in the atlas-based segmentation; the colored areas represent a part of the regions considered in the segmentation. In red, the dose of a patient is also reported as an example.
The seven anatomical areas used as focus points during the registration performance assessment of the registration between CT and MRI0.
In vivo, cells are surrounded by a complex multi-factorial environment characterized by specific physicochemical properties (temperature, pH, oxygen tension), which provides cells with exogenous stimuli deriving from soluble factors, cell-matrix interactions, and cell-cell contacts. The orchestrated and spatio-temporally dynamic interplay of these biochemical and physical extracellular cues, referred to as the cell microenvironment, regulates cell structure, function and behavior, ultimately guiding their fate. Considering the complexity of the native cell microenvironment, in vitro models are required as tools for better understanding the key pathways regulating cell behavior. In order to achieve reliable and biologically relevant results, such models should feature the ability to recapitulate in vitro the dynamic combinatorial role of soluble factors, matrix-bound cues, cell-cell contacts and cell-matrix adhesions on cell responses.
To date, much of the current understanding in cell biology relies on traditional bi-dimensional (2D) in vitro cell culture models, which mainly consist in the static culture of cells seeded on polystyrene flat-surface plates (mm to cm in characteristic dimension). Such substrates, however, poorly mimic the native cellular microenvironment, featuring levels of stiffness that are orders of magnitude higher than those found in vivo and lacking the 3D cues typical of the native cell environment. In the past few decades, several attempts have been made to address these issues, proposing macroscale 3D culture models relying on innovative biomaterials, often combined with laboratory-scale bioreactors. Although providing a more reproducible and controlled approach to investigate mechanisms of cell behavior within a 3D environment and to engineer functional constructs, these approaches still have to deal with size-scales that are orders of magnitude bigger than the native microenvironment. Recently, microscale and microfluidic technologies are finding increasing applications as innovative approaches in cell biology studies, enabling an unprecedented control over the cellular microenvironment while reducing the time and the scale of experimental platforms for better matching the cellular level. Moreover, they allow for automating and parallelizing experimentations and coupling cell cultures directly with high-throughput analysis systems, thus simultaneously improving model accuracy and experimental throughput. Considering these premises, microfluidic platforms represent promising in vitro models, increasingly exploited as enabling tools in the field of physical cell biology, from the screening of drugs or molecules to the optimization of culture conditions for inducing specific cell fates.
In this scenario, this PhD project envisions the exploitation of the main principles of microfluidics for the generation of innovative technological solutions for addressing specific questions in the field of cell biology. The aim of this research is thus the development of microfluidic platforms and techniques as tools for investigating and modeling the effect of different cues from the cellular microenvironment in addressing stem cell fate. In detail, four microscale platforms and/or techniques, ad hoc conceived in the context of national and international collaborations, are presented. Each chapter is focused on a single platform and underlines how a specific microfluidic strategy has been applied to the definition of a technological solution for addressing a specific biological goal.
The first presented microfluidic platform was designed with the aim to (i) generate and culture 3D cellular microaggregates under continuous flow perfusion while (ii) conditioning them with different combinations/concentrations of soluble factors. An exploitation of the platform is proposed, in collaboration with the Tissue Engineering Laboratory (University Hospital of Basel, Switzerland), to perform studies on limb bud development and investigate processes involved in mesenchymal progenitor cell differentiation, towards a […] approach for skeletal tissue regeneration.
A second microscale strategy is introduced to spatially tailor the 3D microenvironment around cells, based on the combination of an innovative biocompatible photopolymerizable hydrogel (VA-086-GelMA) and an easy-to-handle photo-mold-patterning (PMP) technique. The work presented in this chapter is partially the result of a collaboration with the Cell and Tissue Engineering Laboratory (IRCCS Galeazzi Orthopedic Institute, Milano, Italy).
As third technique, a microfluidic cell mixer is presented, as the result of a collaboration with the Tissue Engineering and Microfluidics Laboratory (TEaM, University of Queensland, Australia). The integration of this mixer as an upstream functional element within two different microfluidic platforms is then demonstrated for the automatic establishment of 2D and 3D osteogenic co-culture models, aiming at investigating the influence of pre-osteoblastic cells on human mesenchymal stromal cell osteogenic commitment.
Finally, a fourth microfluidic platform is introduced for (i) trapping and culturing single cells into defined spatial configurations, while (ii) automatically delivering to them concentration patterns of non-diffusive particles (i.e. gene vectors). An exploitation of the platform is then proposed to perform on-chip high-throughput screening and optimization of transfection strategies, in the context of a collaboration with the Biocompatibility and Cell culture Laboratory (BioCell, Politecnico di Milano, Italy).

1. The main principles of microfluidics were exploited for the generation of innovative technological solutions for addressing specific questions in the field of cell biology.
2. Four microscale platforms and/or techniques are presented. For each platform, a specific microfluidic strategy has been applied to define a technological solution for addressing a specific biological goal.
BIOENGINEERING

Automatic Decellularization

Decellularization is the complete removal of all cellular and nuclear material from a tissue while preserving its extracellular matrix. The process yields scaffolds whose biochemical properties are able to stimulate cell adhesion, proliferation and differentiation, alongside mechanical properties similar to those of the native tissue and a preserved tissue structure. To date, the majority of decellularization processes are performed manually, limiting the safety, reliability and reproducibility of the process, even though these are mandatory requisites for translation to the clinic. To overcome this issue, automation of the process can fulfil the requisites stated above, and some groups are now performing decellularization using dedicated devices.
Decellularized scaffolds have also been used in blood vessel tissue engineering, in an attempt to fill the existing clinical gap in small-calibre arterial substitution. Decellularized vascular scaffolds have been investigated in various in-vivo models and have even reached two cases of clinical implantation, albeit for large-vessel substitution. However, it is not yet clear in the blood vessel tissue engineering field, and in particular for decellularized vascular scaffolds, whether an endothelialization prior to grafting is needed or whether the host body could be used as both cell source and bioreactor with positive outcomes. Considering this complex scenario, this thesis aimed at the development of a decellularized arterial scaffold and its characterization in-vitro and in-vivo. Furthermore, in order to overcome the limitations of manually operated processes, the thesis also aimed at the development of a device for the automatic decellularization of blood vessels that could autonomously drive the process while keeping sterility, and while being easy and versatile to use.

Results
A device able to drive a whole automated decellularization process for blood vessels was designed and developed. It consists of a hydraulic system able to perfuse and recirculate up to three decellularization solutions, a thermal regulation system and a user interface. The chamber was designed ad hoc for the application; perfusion is provided by means of dedicated holders, and the system can house vessels of different lengths (up to 100 mm) and diameters (3-7 mm). An innovative feature of the device is the distal holder, whose weight generates a longitudinal strain on the vessel while, at the same time, the internal geometry, combined with the pulsatile pattern of the pump, exerts a circumferential strain on the vessel. The system can be programmed to drive complex protocols that involve different decellularization solutions repeatedly exchanged according to user-defined patterns; the process parameters that can be specified are the number of solutions, the timings, the recirculation pattern, the flow rates and the temperatures. The sterility and reliability of the device were validated.
Swine arteries were decellularized using the automatic device with the hypotonic-detergent-serum protocol, in order to validate both the protocol outcomes and the device functionality. Results showed no residual cells or nuclear material. Mechanical testing showed no statistically significant differences with respect to native tissue for either compliance or burst pressure. Moreover, the outcomes showed that the mechanical stimulation enhances decellularization in the case of 3 mm diameter vessels. The evaluation of the decellularization of swine arteries using the hypotonic-detergent-enzymatic protocols resulted in a good degree of cell removal and a preserved ECM structure with good mechanical properties. HE staining and DAPI […]
Grafts in the chimeric human-swine model explanted at 2 weeks were patent, with vWF+ cells covering the lumen and an ongoing migration of a-SMA+ cells from the adventitia to the media. The 10-week trial showed a more widespread and deeper repopulation of the vessel wall by a-SMA+ cells. Interestingly, at 2 weeks it was observed that no cell positive for human CD31 was present in the lumen, while the cells forming the tunica intima were positive for swine CD31, meaning that the seeded cells were lost and substituted by host cells. On the contrary, unseeded implants showed that, except for the 6-week explant, all the other time points resulted in occlusion, because of the growth of amorphous fibrous tissue inside the lumen and of intimal hyperplasia, as reported by HE staining. The occlusion was populated by a-SMA+ cells, new vessels, lipid deposition and sparse macrophages. The investigation of the interaction of macrophages with decellularized scaffolds showed that decellularized scaffolds elicit a lower expression of the genes for Il-1b, Il-6, MMP9 and TNF-a at 24 h compared to silk scaffolds, while after 96 h only IL1b and IL6 were lower. Arg I expression was instead higher at both 24 and 96 h, and CD206 was higher at 24 h. VEGFA expression showed no relevant differences. […]
The results presented in this thesis brought positive decellularization outcomes in terms of cell removal and preservation of the mechanical properties of the ECM; moreover, the hypotonic-detergent-serum protocol allowed timings and costs to be limited. Cell removal and mechanical properties are crucial for blood vessel tissue engineering, because possible cellular remnants could elicit a high inflammatory response or calcification, while a compliance mismatch could elicit intimal hyperplasia. Storage is an unavoidable step in the whole process of scaffold production, and it was also demonstrated that storage at -80 °C does not affect the mechanical properties. The device developed proved to be functional and effective for the decellularization of blood vessels, and it represents an approach aimed at the automation of the whole process, rather than a tool providing a single means of decellularization, such as perfusion. Furthermore, the device showed an enhancement of the decellularization of very small calibre vessels, thanks to the innovative perfusion system that provides radial and longitudinal strains with a simple setup. The chimeric human-swine in-vivo implants resulted in patent vessels at both 2 and 10 weeks. Interestingly, it was demonstrated that the seeded endothelial cells are lost, which leaves an open question on the need for an endothelialization prior to grafting; indeed, the literature reports opposite data, and there is a contradiction between the loss of seeded cells and the better outcomes of seeded grafts. Our findings have the peculiarity of the absence of antiplatelet drug administration, while this therapy is usually provided. Overall, a supposition can be made of a paracrine role of endothelial cells in the early phases, which influences the late-term resolution of the implant. To give preliminary insight into these phenomena, we analyzed the macrophage response to the decellularized scaffold, and the results showed a lower and less persistent inflammatory response in comparison to the response to silk fibroin scaffolds.
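The user-programmable protocol described above (number of solutions, timings, recirculation pattern, flow rates, temperatures) can be sketched as a simple data structure. This is a hypothetical illustration, not the actual device firmware; all names and values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ProtocolStep:
    """One user-defined step of an automated decellularization protocol."""
    solution: str          # which of the (up to three) reservoirs to perfuse
    duration_h: float      # how long the step runs, in hours
    flow_rate_ml_min: float
    temperature_c: float
    recirculate: bool      # recirculate the solution vs. single-pass perfusion

def total_duration_h(steps):
    """Total protocol time, summed over all steps."""
    return sum(s.duration_h for s in steps)

# Illustrative schedule loosely inspired by a hypotonic/detergent/serum
# sequence; the real timings and rates are not taken from the thesis.
protocol = [
    ProtocolStep("hypotonic", 8.0, 2.0, 37.0, True),
    ProtocolStep("detergent", 12.0, 2.0, 37.0, True),
    ProtocolStep("serum", 24.0, 1.0, 37.0, True),
]

print(total_duration_h(protocol))  # 44.0
```

Encoding each step explicitly is what lets such a device exchange solutions repeatedly according to user-defined patterns while logging every parameter for reproducibility.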
In the field of modern Neuroscience research, cultures of primary central neuronal cells coupled to substrate-integrated Microelectrode Arrays (MEA) represent an unparalleled methodology to study the network-level electrophysiological properties of neuronal ensembles in both physiological and pathological conditions. Nowadays, non-invasive and multisite MEA recordings of neuronal electrical activity are a mainstay technique for studies of neuronal network dynamics and neuronal plasticity, and for drug and toxicology tests. Given the role that the results from these studies are expected to play in Neuroscience research, reliability and reproducibility of the experimental outcome are crucial for MEA-based studies. To this aim, a first requirement is the presence of physiological conditions during experiments. However, the recurrent withdrawal of cultures from the cell incubator to perform MEA recordings results in the deflection of environmental parameters (e.g. temperature and gaseous atmosphere composition) from canonical in vitro growth conditions, in mechanical stress to the cultures and in increased infection risk. Such perturbations trigger functional alterations and changes in cell viability, limiting the duration of single experimental sessions (i.e. up to 1 hour), which prevents the uninterrupted tracing of neuronal processes that develop over extended periods of time (e.g. several hours to several days or weeks), such as network development, long-term plasticity or the effects of chronic pharmacological treatments. A second key requirement is the possibility to perform parallel MEA recordings of multiple cultures, which enhances culture-to-culture comparability and shortens the experimental timescale, with an important impact on pharmacological tests. Third, compactness and accessibility of the MEA setup are essential features to allow the integration of the technological tools needed to perform the experiments (e.g. microscopes, pumps).
Notwithstanding previous efforts to improve standard MEA experimental setups, an experimental platform is still missing that integrates the capabilities to properly meet all three abovementioned requirements. This work is a technological contribution towards the development of a stand-alone mini-incubator capable of providing uninterrupted MEA data in a high-throughput format while permanently keeping neuronal networks in physiological conditions, thus enhancing the reliability and reproducibility of MEA readouts.
Towards the establishment of such a device (Fig. 1), multiple activities have been performed over the PhD work. As a preliminary step, the spontaneous activity of hippocampal cultures (n=96, three different cell densities) was tracked with standard equipment (i.e. brief recordings once every 48 h) over 1 month in vitro, which allowed statistically robust reference data to be obtained for validating the device to be developed. Then, a portable environmental chamber was characterized and tested with cell viability assays, and it was proven to grow neuronal cultures on the lab bench in a fashion comparable to standard cell incubators. Starting from this proof-of-concept, an advanced prototype was designed, including: (i) a closed and compact chamber housing 4 cultures on MEA; (ii) integrated environmental sensors (relative humidity, carbon dioxide, temperature) coupled to a custom electronic control unit and real-time data logging software, able to assure incubator-like temporal stability, accuracy and spatial homogeneity; (iii) custom electronic boards compatible with the environmental chamber and capable of reading out neuronal spikes from 4 60-channel MEAs with a signal-to-noise ratio comparable to standard recording devices; (iv) air-tight external access to chemically manipulate the cultures without withdrawing them from the chamber; (v) a versatile software tool for on-line classification of the neuronal spikes detected by the MEAs, able to save time in the data analysis of prolonged MEA recordings.
The prototype was successfully deployed to perform long-lasting (from some hours up to 10 days) electrophysiological recordings of neuronal culture spiking activity, demonstrating the possibility to trace the unperturbed evolution of neuronal activity patterns (Fig. 2). Moreover, the validity of the system in performing prolonged and parallel neuro-pharmacological stimulations has been demonstrated by means of reproducible data over prolonged pharmacological dose-response experiments. It can be concluded that the work presented in this Thesis constitutes a valid platform to perform MEA experiments on in vitro neuronal cultures under physiological environmental conditions. The design and experimental verification of the system have been thoroughly reported in the Thesis. Overall, the developed device has advantageous capabilities for electrophysiological and pharmacological studies: (i) MEA recordings with observation continuity; (ii) standard operating conditions of cell culture practice; (iii) avoidance of environmental fluctuations; (iv) reduction of operator intervention; (v) reduction of the number of replicates and of the time required to obtain significant results; (vi) enhanced culture comparability and data reproducibility; (vii) nullification of stress to cells and of infection risk during cell culture chemical manipulations (medium changes and pharmacological tests); and (viii) easiness of integrating other devices into the system. The device lends itself to being employed in studies of neuronal phenomena such as long-term plasticity, pharmacological dose-response experiments, and investigations of the chronic effects of pharmacological treatments and of the mechanisms at the basis of late-onset neurodegenerative pathologies mimicked in vitro (e.g. Alzheimer's disease).

1. Scheme of the developed experimental platform intended for prolonged and parallel in vitro neuronal recordings by means of Microelectrode Arrays (MEA).
2. Example of a prolonged MEA recording (10 days) of a neuronal cell culture inside the devised system, showing the time course of the spontaneous spiking rate averaged across the electrodes (A) and 10-minute snapshots of spike timing at each recording site extracted at different time points (B).
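The on-line spike classification mentioned in point (v) starts from detecting threshold crossings on each MEA channel. The following is a minimal sketch of that idea, not the thesis software: it uses the common convention of a threshold at a multiple of a robust noise estimate, plus a refractory period so each spike is counted once. The parameter values are illustrative assumptions.

```python
import numpy as np

def detect_spikes(signal, fs, k=5.0, refractory_ms=1.0):
    """Return sample indices where the signal crosses -k * noise estimate.

    The noise level is estimated with the median absolute deviation
    (MAD / 0.6745), which is robust to the spikes themselves.
    """
    noise = np.median(np.abs(signal)) / 0.6745
    thr = -k * noise
    refractory = int(fs * refractory_ms / 1000)  # samples to skip after a spike
    spikes, last = [], -refractory
    for i, v in enumerate(signal):
        if v < thr and i - last >= refractory:
            spikes.append(i)
            last = i
    return np.array(spikes)

# Synthetic example: Gaussian noise with two injected large negative spikes
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 10000)
sig[2000] = -30.0
sig[7000] = -30.0
print(detect_spikes(sig, fs=25000.0))
```

A real on-line classifier would additionally extract a waveform window around each crossing and sort it by shape, but threshold detection is the step that makes day-long recordings tractable to analyze.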
[…] to live with dignity and in the absence of suffering during his or her lifetime, but our lack of knowledge of systemic, organ, tissue and cell degeneration due to senescence indicates that the ageing process is often beyond our control. Therefore, plenty of studies are carried out in different laboratories to expand the related knowledge and to amend the treatment methods. Most models of ageing at the cellular level are derived from simple in-vitro experiments, in addition to animal models ranging from fish to primates.
Changes in the physiological microenvironment and alterations in the signaling between cells, and even between distant tissues and organs, lead to systemic age-related diseases such as osteoporosis. This project is an initial step towards the long-term goal of generating reliable biomimetic models of bone tissue. The tissue should be applicable to studies of pathological conditions and to the development of pharmacological strategies, reducing animal and clinical testing as well as the time and cost associated with them.
As a contribution to this wide framework, this study aims at studying the mechanical properties of bone tissue and of a glass-ceramic scaffold at small length scales by making use of the nanoindentation technique. The study of the bone tissue is focused on the dependence of the mechanical properties on the applied load or on the characteristic size of the experiment, and was aimed at identifying and quantifying a damage mechanism occurring in the bone tissue upon loading. The experimental characterization was carried out both on cortical bone tissue and on the trabeculae of spongy bovine bone samples. The damage mechanisms were further investigated by means of numerical simulations of the nanoindentation experiments at multiple characteristic lengths on the cortical bone samples.
The chosen material for the bone scaffolds is a glass-ceramic material derived from a highly bioactive glass (called CEL2) (Vitale-Brovarone et al. 2008). It shows very promising bio-chemical properties and can be produced with a controlled multi-scale porosity, which is significantly important to improve the scaffold characteristics. Hence, the goal of this study is to perform a mechanical characterization of the glass-ceramic scaffold at different scales, in particular the micro and macro scales. For such a study, a bulk form of the material (without any porosity) is examined, as well as a 3D porous material, in order to assess the porosity dependency of the mechanical characteristics of the material. In addition, besides the glass-ceramic material, the study went through the mechanical characterization of trabecular/cortical bovine bone at the same scale.
The mechanical properties at the micro scale are investigated by means of the nanoindentation experimental technique, while the mechanical properties at the macro scale are obtained by means of computational tools only. The data found through the experiments are used to feed the computational models for the macroscopic characterization. The 3D structure of the scaffold was scanned with a micro-CT scanner for further investigation. Based on the scanned images, a binary volume is built up, in which the value 1 indicates the material and the value 0 indicates porosity. Subsequently, a finite element model based on the binary volume is developed in order to model the structural geometry of the sample and to estimate the mechanical properties of the scaffold at the macro scale. The results of the computational model are checked against the results of an analytical model, adopted from the analytical approach proposed by Zhu et al. (1997). […]
Moreover, the computational model results show the anisotropy of the scaffold. This feature is investigated using two different approaches. First, the structural anisotropy is evaluated by calculating the Mean Intercept Length (Whitehouse 1974), which is a standard method to check whether the structure is mainly aligned in a specific direction. A degree of anisotropy of 25-30% is found in this way. On the other hand, the effective volume (Quinn 2003) is calculated in order to quantify the unloaded volume due to the inhomogeneity of the stress distribution in the structure during the simulations. According to the results, the structural anisotropy can be properly explained by both the anisotropy of the scaffold architecture and the inhomogeneity of the stress distribution.
The interpretation of the comparison between the mechanical response of the glass-ceramic material and that of the bone tissue is that the glass-ceramic bulk material does not exhibit the typical damage response (decreasing indentation modulus with increasing indentation load). On the other hand, a decreasing trend of the indentation modulus was observed with the same indentation on the walls of the 3D sintered ceramic scaffolds. The decreasing trend found on the scaffold walls was owed to the intrinsic porosity of the sintered ceramic. Values consistent with other investigation techniques, carried out by the research group at the Polytechnic University of Turin, validated the results found in this study.
As a general conclusion, the 3D glass-ceramic scaffolds, which are designed and manufactured with the final aim of building tissue models to be used in the in-vitro testing of drugs simulating healthy, diseased or aged bone tissues, have a good potential to achieve their purpose. The manufacturing process used to obtain the 3D scaffolds, which should exhibit mechanical and physical properties consistent with those of trabecular bone in different ageing or clinical conditions, can be tuned so as to provide the scaffolds with the desired elasticity and strength properties. The experimental and computational framework reported in this study provides an effective tool for the prediction of the mechanical properties of the scaffolds as a function of their physical properties, such as macro- and micro-porosity and 3D architecture.

1a. Gray-level distribution of the nanoporosity across a layer obtained by CT scans; dark voxels relate to high porosity, whereas white voxels relate to low nanoporosity. The image is from an area of 9556 μm × 9940 μm covered by 696 × 724 pixels, in which each pixel covers a 13.73 μm × 13.73 μm area.
1b. A schematic view of a pixel-size area illustrating both the glass-ceramic material and the nanoporosity, obtained by SEM.
2. Colorimetric maps of the reduced modulus and hardness related to areas (bulk glass-ceramic).
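The binarization step described above (micro-CT gray levels turned into a 1/0 volume, with 1 for material and 0 for pore, before building the finite element model) can be illustrated with a small sketch. The threshold value and the toy volume are assumptions for illustration, not the ones used in the study.

```python
import numpy as np

def binarize(volume, threshold):
    """Gray-level volume -> binary volume: 1 = material, 0 = pore."""
    return (volume >= threshold).astype(np.uint8)

def porosity(binary_volume):
    """Pore volume fraction: share of 0-voxels in the binary volume."""
    return 1.0 - binary_volume.mean()

# Toy 4x4x4 "scan": half the voxels dense material, half pore space
vol = np.zeros((4, 4, 4))
vol[:2] = 200.0   # high gray level: dense glass-ceramic
vol[2:] = 10.0    # low gray level: pores
binary = binarize(vol, threshold=100.0)
print(porosity(binary))  # 0.5
```

In the study's pipeline, each 1-voxel of such a volume would become a finite element, so the computed macro-scale stiffness automatically reflects the scanned macro- and micro-porosity.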
Injuries, genetic diseases, cancer, ageing: all things that can harm the complex functioning of the human body. In 2012, an estimated 56 million people died worldwide, with noncommunicable diseases (NCDs) responsible for 68% of all deaths globally, up from 60% in 2000, registering an alarming increase. Cardiovascular disease alone accounted worldwide for 17.5 million deaths in 2012, that is, three in every 10 deaths. In terms of the proportion of deaths due to NCDs, high-income countries have the highest proportion (87%), followed by upper-middle-income countries (81%). Ischaemic heart disease, stroke, lower respiratory infections and chronic obstructive lung disease have remained the top major killers during the past decade. End-stage organ failure or tissue loss is also one of the most costly problems in medicine: over 8 million surgical procedures are estimated to be performed every year to treat these disorders in the United States alone, incurring a tremendous health care cost of more than $400 billion annually. Over the last 50 years, the transplantation of an extensive variety of tissues, reconstructive surgical techniques, and replacement with artificial devices have significantly improved patient follow-up and life expectancy. Despite new surgical techniques and drug improvements, these solutions present many limitations, such as donor shortages, permanent immunosuppressive regimens, increased risk of infection, unwanted side effects and, in the case of artificial supports, finite durability.
This scenario led to an increasing interest in the field of tissue engineering, which merges engineering and life-sciences knowledge with the final goal of developing in vitro cellularized functional substitutes able to restore or improve tissue and organ activities. The three most important ingredients of tissue engineering are biomaterials, cells collected from a patient, and proper environmental culture conditions (i.e., a bioreactor). In summary, a porous delivery system is needed that confines cells to the desired location after in vitro mechanical stimulation. Significant progress has already been made in the field, and examples of successful clinical implants of tissue-engineered products include skin substitutes, nasal cartilage, and functioning bladder and trachea. Whilst these are promising results, much effort is still required in vitro, to elucidate the basic mechanisms regulating the cell response and the behaviour of the engineered construct during maturation, and in preclinical models, to investigate the host response (e.g., neovascularization, remodelling) and the behaviour of the produced substitute once grafted.
One area of particular interest is the replacement of damaged hollow organ structures such as those found throughout the cardiovascular, respiratory, urinary and gastrointestinal systems. Whilst the current surgical procedures for the replacement of damaged tissue commonly use autologous grafts, this is hampered by the poor availability of suitable graft tissue, donor-site morbidity, and the poor long-term stability of the substitute. As such, there is a critical demand for the production of tissue-engineered grafts that are capable of meeting the functional requirements of the organ system without inducing immune or inflammatory responses or losing function over time.
The presented research project aimed to design, fabricate and characterize an innovative multifunctional bioreactor for the regeneration of hollow organs, able to overcome the limits of the currently available devices. A prototype was produced, able to perform rotation of a tubular scaffold along its longitudinal axis, to allow scaffold tensioning, and to connect with different hydraulic circuits in a fast and trustworthy way. Bench test results demonstrated that a reliable and easily assembled device was developed. The main characteristics of the system were: 1) ease of handling, which makes the system user-friendly and reduces the risk of contamination; 2) versatility of the system, capable of axial rotation, separate media environments for the inner and outer layers, and the ability to connect to different hydraulic systems to produce laminar and pulsatile flow within physiologically relevant pressures; 3) compatibility with the best standards of good laboratory practice. With this new system we were able to combine different stimuli and to safely join rotation, perfusion and gentle stretching of the cultured patch, all functions that until now were considered separately in dynamic cultures. The preliminary culture experiments, carried out with Biofelt, gave proof of the functionality of the bioreactor. The results obtained revealed that the application of rotation stimuli favoured the penetration of the cells into the scaffold wall, enhancing the production of extracellular matrix.
To test the bioreactor's potential to produce tissue-engineered tubular grafts, an innovative PCL/PLA-TMC based electrospun tubular scaffold was realized and chemically and mechanically characterized. The three-dimensional matrix demonstrated mechanical properties comparable with native blood vessel tissue, presenting a promising candidate for vascular tissue regeneration. The PCL/PLA-TMC matrix and the developed bioreactor recreated a suitable 3D environment for mesenchymal stem cell growth and differentiation. The results confirmed that 3D dynamic culture allowed better control over cell fate and behaviour by facilitating mass transfer phenomena, allowing the medium to flow through the scaffold wall. Moreover, the transmural flow favoured cell migration through the thickness of the tubular matrix, permitting extracellular matrix formation and deposition along the whole structure. Whilst the scaffold showed favourable mechanical properties, as well as an outer layer which facilitated mesenchymal stem cell colonisation for the eventual formation of a tunica media, the ability of the inner layer to support the growth of cells that would form a tunica intima was not tested. With further refinement of scaffold production, creating a multi-layered structure with a smooth continuous layer facing the internal lumen, it is expected that endothelial colonisation of the inner layer will be possible. This would then permit more comprehensive testing within the bioreactor and a demonstration of the full versatility of the developed system. With the dual-chamber organisation of the bioreactor, the growth conditions for the development of the smooth muscle and endothelial tissue layers can be independently optimised. Furthermore, whilst rotation and double-phase culture were demonstrated to be successful, the use of hydraulic pumps to provide pressure, load and flow stimulation on the tissue remains to be tested, both in terms of directing the cells to form a suitable tissue for grafts and in terms of performing rigorous testing of potential grafts.
We can state that this research led both to the production of an innovative device for tubular organ regeneration and to the characterisation of a novel scaffold for potential use in vascular grafts. With the modularity of the bioreactor, along with its relative ease of use, the device holds great potential for the future production of tissue-engineered tubular grafts.
Cardiovascular diseases (CVDs), in research to study the blood approach, after being properly between manual and automatic improved both the visualization relationships between the
a group of disorders that affect flow characteristics evolving in calibrated, was applied on segmentation. (streamlines and pathlines) and geometry and hemodynamics
the heart and the vessels, are thoracic aorta and it is also used PCMRI images acquired with Finally, a 3D triangular surface the analysis of the flow, with within the aorta in pathological
the leading cause of death in clinical protocol to extend different SENSE reduction factors mesh of thoracic aorta in the the possibility of speed up the and in healthy subjects but
worldwide. In order to assess traditional anatomic evaluation. and its effects were evaluated peak of systole was created. acquisition, and Computational also to define new and simpler
CVD initiation and progression, The aim of this project is in terms of image quality (noise This 3D model, together with Fluid Dynamic (CFD) simulations indices that can be used to
blood flow characteristics are to study, to develop and to in velocity images), regularity of the velocity data, was exploited where the regularized velocity describe and quantify the
known to play an important evaluate advanced methods to the velocity fields (divergence to provide a comprehensive field can allow a quantitative presence and the progression
role, since hemodynamic allow and to improve the use of the velocity field, relative morphometric and direct analysis of complex of CVD. Finally, the 3D mesh of
alterations are closely related to of 4D PCMRI images of the error in velocity magnitude and hemodynamic characterization patient-specific 4D flow pattern. the thoracic aorta could be easily
pathological condition and may thoracic aorta for hemodynamic absolute error in flow direction), of the vessel of each subject. In The segmentation method had assimilated into computational
have a causative role in CVD and morphometric evaluations. aorta flow pattern visualization particular, we made quantitative proven to be able to properly fluid dynamics frameworks
evolution. In order to better Two main purposes can be (streamlines, and secondary measurements of blood velocity segment the thoracic aorta to get even more realistic
understand the mechanism distinguished: 1 - to propose flow patterns) and flow rate and flow, aorta area and in subjects acquired with and computational hemodynamic
of initiation and progression a new filtering approach able quantification. diameter considering different without SENSE parallel imaging models using the velocity values
of CVD as well as to assess to denoise and regularize 4D To segment thoracic aorta planes. with results comparable to the extracted in cross-sectional
the presence of the pathology velocity maps providing volumes lumen from velocity data, a new manual contour delineated planes as boundary condition.
condition, flow patterns study suitable for hemodynamic approach was developed, which Results and discussion by two experts. Quantitative
should be integrated with the applications and 2 - to develop is based on Level Set algorithm Qualitatively, after ADF analysis has confirmed the good
morphometric characterization, a new segmentation method to a computational technique application on PCMRI data, behaviour of the method: the
which consists in evaluating size extract the vessel lumen and to able to control the evolution of image noise was visibly reduced, mean distance between the
(diameter or radius, area) and create a 3D model which can a contour through an implicit while gradients associated contours is comparable to half
[...] shape (curvature or tortuosity) of vessels. In fact, a strict relation between alteration of blood flow characteristics and changes in the morphology of the vessel has been demonstrated in many aortic diseases. 3D cine Phase Contrast MRI (4D PCMRI) can extract a quantitative depiction of the spatial distribution of blood flow velocity as a function of time, together with magnitude images visualizing the subject's anatomy. This imaging technique, based on the observation that spins moving through a magnetic field have a phase shift proportional to their velocity, is now [...] be directly used to calculate patient-specific indices for a comprehensive morphometric and hemodynamic characterization.

Materials and Methods
The proposed noise reduction strategy is the application of an Anisotropic Diffusion Filter (ADF), a well-known filtering technique able to reduce data noise while preserving image contours. ADF is based on the anisotropic diffusion equation, which controls the evolution of a filtering smoothing function through the characteristics of the image, such as the proximity [...] function, widely used for its ability to follow the topology change of the object. The aorta of interest was first manually identified by an operator, and then a two-step algorithm was applied: an initial rough surface was calculated using a Fast Marching Level Set, which was then refined, smoothed and adapted to the local morphological characteristics of the vessel using a Level Set Geodesic Active Contour (GAC) approach. This new method was tested on subjects acquired with and without SENSE parallel imaging, and it was validated in terms of area overlap and mean [distance, which was within the size] of a pixel (mean distance = 1.07 mm) in the non-SENSE dataset and was equal to 1.36 mm for the SENSE dataset.

[...] to the image features were preserved. In fact, the noise characterizing the velocity images decreased after filtering and, in agreement with this, the value of divergence was reduced at least by 320%. Improvements in the visualization of streamlines and secondary flow were observed for all the SENSE reduction factors applied in PCMRI acquisitions; streamlines are longer and more regular than in the unfiltered data, due to the more regular 3D velocity field. In fig.1_A the streamlines calculated in the systolic phase after ADF application are shown. Two main contexts of application [...]

The 3D mesh representing the thoracic aorta showed, in all subjects, a realistic shape, characterized by a degree of smoothness comparable to a physiological vessel. Through the proposed approach, it was possible to easily and automatically calculate both morphometric and hemodynamic indices on seven planes along the vessel. In fig.1_B the 3D patient-specific vessel model is shown. This automatically calculated information can be used not only [...]

1A. Blood flow visualization using streamlines of the PCMRI dataset after ADF application.
1B. Patient-specific Thoracic Aortic Model: 3D mesh, centerline with its curvature, and a vessel contour calculated from one orthogonal plane.
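The abstract names the Anisotropic Diffusion Filter but not its exact formulation. A common choice with the stated property (smoothing noise while preserving contours) is the Perona-Malik scheme, sketched below; the parameter values `kappa`, `gamma` and the iteration count are illustrative placeholders, not the thesis settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    """Perona-Malik anisotropic diffusion (2D sketch).

    Diffusion is damped where the local gradient is large, so flat
    regions are smoothed while strong edges (vessel contours) persist.
    """
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        dN = np.roll(img, -1, axis=0) - img
        dS = np.roll(img, 1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # edge-stopping conduction: decays with gradient magnitude
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        img += gamma * (cN * dN + cS * dS + cE * dE + cW * dW)
    return img

if __name__ == "__main__":
    # synthetic noisy step edge: noise is damped, the edge survives
    rng = np.random.default_rng(0)
    truth = np.zeros((32, 32))
    truth[:, 16:] = 100.0
    noisy = truth + rng.normal(0, 5, truth.shape)
    out = anisotropic_diffusion(noisy)
    print("residual std before/after:",
          np.std(noisy - truth), np.std(out - truth))
```

With a large `kappa` the filter degenerates toward plain Gaussian-like smoothing; the edge-preserving behaviour comes from the exponential conduction terms.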
Industrialization and pre-clinical testing [...]

BIOENGINEERING

In particular, tissue engineering strategies have encountered an increasing interest and a rapid evolution in the last couple of decades, due to their potential to significantly impact current therapeutic modalities by providing a virtually unlimited supply of patient-specific tissues and organs. The increase in the number of advanced therapies undergoing both early and late-stage clinical trials, as well as FDA-approved commercial products that have already entered the market, strongly indicates that these therapies are emerging as a distinct healthcare sector. The translation of successful research results into the clinic, however, still suffers from important issues, such as cost, time, lack of ease of application and difficulty in complying with regulation. In this context, the field of automated cell cultivation using highly specialized bioreactor designs and stringent bioprocess controls will be crucial for the development of biomanufacturing technologies suitable for clinical-grade production of advanced therapies. Many types of bioreactors have been designed to provide different stimuli in relation to the specific tissue to be developed. Among these, perfusion bioreactors have proven to be particularly [suited to] different types of tissues (bone, cartilage, heart). They are employed in tissue engineering procedures to perform specific and important functions such as cell seeding on porous scaffolds and confined medium perfusion through porous scaffolds seeded with cells. Moreover they: a) allow to overcome the typical drawbacks of manual procedures and improve the efficiency of seeding processes and the uniformity of cell distribution inside porous scaffolds, promoting the achievement of more uniform engineered constructs; b) improve the efficiency of oxygen and metabolite transfer and of catabolite removal; c) allow automation and monitoring of culture medium exchange procedures; d) allow physical stimulation of cells seeded into porous scaffolds, through shear stress generation; e) reducing the use of manual actions, promote the transfer of tissue engineering procedures from research to clinical application, improving the traceability, reproducibility, efficiency and safety of processes (key requirements for these procedures to compete with traditional therapeutic alternatives in terms of cost, Quality Control and Good Manufacturing Practice).

Despite the high number [of devices on] the marketplace, a device accounting for all the requirements needed to be successfully used in a streamlined bioreactor-based advanced therapy strategy is still lacking. In this context, the present study focuses on the industrialization and the pre-clinical testing of a technological platform based on a prototype bioreactor (OPB, Oscillating Perfusion Bioreactor) for tissue engineering purposes. The aim of the project was to develop a scalable and robust bioreactor, enabling flexible culture strategies and monitoring and control of the culture environment, taking into account serial manufacturability and quality assurance, for the cost-effective and automated manufacturing of biological tissues. The study comprised two main phases: industrialization and pre-clinical testing.

During the industrialization phase the prototype version of the bioreactor was re-designed in order to comply with regulatory requirements in terms of GMP practice for cell and tissue culture. In particular, a scalable and robust bioreactor was obtained, enabling flexible culture strategies and monitoring of the culture environment, taking into account serial manufacturability and quality assurance, for the cost-effective and automated manufacturing of biological tissues [...] culture them on the scaffold under perfusion conditions and to deliver an engineered construct suitable for implantation. Furthermore, the developed bioreactor is compact and therefore easy to use inside a standard cell culture incubator. All parts have been designed to facilitate the procedures and to reduce the manual operations. The implementation of a flow sensing system able to monitor the culture parameters allows meeting the traceability requirement, which is a key aspect in quality control management.

[...] statistical method has been carried out. Once the seeding parameters had been optimized using Ultrafoam scaffolds and MG63 cells, these parameters were then used in seeding other scaffold and cell types. Optimization through DoE allowed identifying which parameters influence the seeding results. In particular, we found that the flow velocity and the seeding time influence the seeding efficiency, while the seeding density influences the cell viability. As to scaffold and cell type, the main differences in seeding efficiency were found in relation to scaffold type and in particular to its permeability. In performing culture, the bioreactor proved to be reliable and robust, and was able to deliver constructs with higher cell content and viability and with a better distribution of cells throughout the scaffold thickness with respect to statically cultured ones.

A validation activity has also been carried out in order to demonstrate the correspondence with the requirements and the robustness and the safety of the device. It comprised: a) installation qualification (IQ); b) operational qualification (OQ); c) performance qualification (PQ), with the execution of sterility, LAL and media fill tests.

The expression of genes involved in osteogenic differentiation and characteristic of the extracellular matrix of bone tissue was also evaluated. The RT-PCR results showed that the perfusion is a valid stimulus in addressing cells toward the osteogenic lineage, in the presence of osteogenic culture medium. Both early and late markers are more expressed in dynamically cultured constructs than in statically cultured ones for ENGIpore scaffolds.

As a final step of validation, in vivo implant and evaluation of bone grafts generated by means of the developed perfusion bioreactor have been performed in order to verify the performance of the device in carrying on a complete tissue engineering process. An ovine animal model has been chosen. Pre-implantation analysis on grafts [...]

1. Industrialized OPB
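The abstract reports a DoE screening in which flow velocity and seeding time drove seeding efficiency while seeding density drove viability. A minimal sketch of the analysis pattern (a two-level full factorial with main-effect estimates) is shown below; the factor levels and response values are invented for illustration and are not thesis data.

```python
import itertools
import numpy as np

# Hypothetical 2^3 full-factorial screening of seeding parameters.
factors = ["flow_velocity", "seeding_time", "cell_density"]
levels = [-1, +1]  # coded low/high settings

# one seeding-efficiency response per run, ordered as itertools.product
runs = list(itertools.product(levels, repeat=3))
efficiency = np.array([52, 55, 61, 63, 70, 74, 79, 83], dtype=float)

def main_effect(i):
    """Main effect of factor i: mean(high runs) - mean(low runs)."""
    x = np.array([run[i] for run in runs], dtype=float)
    return efficiency[x > 0].mean() - efficiency[x < 0].mean()

for i, name in enumerate(factors):
    print(f"{name}: main effect = {main_effect(i):+.2f} % efficiency")
```

With these toy numbers the flow-velocity effect dominates, mirroring the screening conclusion in the text; a real analysis would add replicates and an ANOVA or half-normal plot to judge significance.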
Ventricular assist devices (VADs), the most prominent solution for the treatment of heart failure (HF), are still burdened with several post-implant complications like pump failure, infections or thrombotic events. Increased shear stresses are a hallmark of flow conditions in blood recirculating devices, and patients implanted with such devices require lifelong anti-thrombotic therapies to counteract the high risk of thromboembolism. Although these agents have proven their effectiveness as biochemical inhibitors of platelet activation, their behavior under shear stress, i.e. in response to physical forces encountered when the blood passes through VADs, has been only marginally investigated.

In the present dissertation a detailed investigation is performed to assess the ability of traditional and unconventional antithrombotic treatments to protect platelets from shear-mediated activation. In particular, both commonly used antithrombotic drugs and unconventional chemical agents (DMSO) are tested under constant and dynamic shear stress conditions, with the aim of identifying the best mechanism of action able to inhibit shear-mediated platelet activation, thus paving the road towards a viable approach to developing new, more effective anti-thrombotic pharmacologic agents.

In the first part of the thesis, we aimed at investigating the effect of anti-thrombotic therapies currently on the market on shear-induced platelet activation after shear stress exposure via the hemodynamic shearing device (HSD), a computer-controlled cone-plate viscometer able to reproduce with high fidelity the dynamic shear stress profiles encountered by blood within VADs. We subjected gel-filtered platelets (GFP) pre-treated with drugs to different shear stress profiles, either constant or dynamic. Under constant conditions, platelets were subjected to 30 and 70 dyne/cm2 for a total time of 10 min via the HSD. The dynamic shearing profiles, on the other hand, were extracted from the stress accumulation (SA) distribution (probability density function, PDF) calculated by means of CFD simulations within the DeBakey VAD. The profiles corresponding to the 30th (Dynamic_30) and 50th (Dynamic_50) percentiles of the PDF were implemented in the HSD. Platelet activity state after exposure to shear stress was monitored using a specific prothrombinase assay, the PAS assay.

The antiplatelet agents investigated were Aspirin, Dipyridamole, Cilostazol, Pentoxifylline, Eptifibatide and Ticagrelor. The percentages of platelet activation reduction calculated for all kinds of drug-treated samples, compared to control, are represented in Figures 1 and 2. At 30 dyne/cm2 the majority of the tested agents showed a protective effect, with a mean reduction of 55% (Figure 1-A). The same trend was found with platelets subjected to the Dynamic_30 condition (mean reduction of 50% compared to control) (Figure 2-A). On the other hand, at higher shear stress (70 dyne/cm2 or Dynamic_50) only Cilostazol and Ticagrelor, corresponding to the drugs displayed as G1, C1 and C2, seemed to protect platelets. The results obtained suggest that the most common antiplatelet agents, which are normally used in anticoagulation management for patients treated with mechanical cardiac devices, are only partially able to protect platelets from the activation effects of the physical forces encountered by flowing through cardiac assist devices.

New mechanisms of action were also studied to overcome the limitations associated with current therapies. Dimethyl sulfoxide (DMSO) was used to modulate the intactness and fluidity of the platelet membranes with the final goal of reducing shear-mediated platelet activation. Membrane integrity and its capacity to respond to external stimuli play a key role in the mechanotransduction apparatus, which is responsible for shear-mediated platelet activation. The responses of DMSO-treated platelets to different levels of shear stress are represented in Figure 3.

Our studies indicate that a paradigm shift is required in the development of new antiplatelet drugs for the treatment of shear-mediated platelet activation. In particular, the discovery of new agents able to affect platelet membrane fluidity may reduce the need for heavy antithrombotic therapies, offering an effective protection to platelets when exposed to high shear stress conditions as within VADs.

1. Mean % of platelet activation reduction provided by different antiplatelet agents (A1-G1) tested after 10 min exposure to 30 dyne/cm2 (A) and 70 dyne/cm2 (B). % reductions are relative to the control group (* p < 0.05). Antiplatelet agents: A: aspirin alone (A1 - 25 µM, A2 - 125 µM); B: aspirin in combination with other drugs (B1 - ASA 25 µM + dipyridamole 5 µM, B2 - ASA 25 µM + eptifibatide 0.25 µg/ml, B3 - ASA 25 µM + pentoxifylline 100 µM, B4 - ASA 25 µM + eptifibatide 0.25 µg/ml + pentoxifylline 100 µM); C: ticagrelor (C1 - ticagrelor 10 µM, C2 - ticagrelor 100 µM); D: dipyridamole (D1 - dipyridamole 5 µM, D2 - dipyridamole 10 µM, D3 - dipyridamole 25 µM); E1: eptifibatide 0.25 µg/ml; F1: pentoxifylline 100 µM; G1: cilostazol 50 µM.

2. Mean % of platelet activation reduction provided by different antiplatelet agents (A1-G1) tested after 10 min exposure to Dynamic_30 (A) and Dynamic_50 (B). % reductions are relative to the control group (* p < 0.05). See Fig. 1 for details regarding the antiplatelet agents tested.

3. Ability of DMSO to modulate shear-mediated human platelet activation. Pre-treatment with DMSO at the specified concentrations (10 min, 37°C). Data are represented as mean ± SEM (* p < 0.05).
Technology and Design for Environment and Building | Territorial Design and Government | Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircraft | Spatial Planning and Urban Development | Structural, Seismic and Geotechnical Engineering

PhD Yearbook | 2015
RESEARCH DOCTORATE
BUILDING ENGINEERING
Chair: Prof. Manuela Grecchi

Formation aims
The PhD Programme in Building Engineering provides the chance to carry out research on the innovation of the design, technology and process of buildings. The growing complexity of the construction sector, and the challenges of sustainability and environmental impact, require innovations ranging from the design and construction process, to high-performance components and systems for new and refurbished buildings, to advanced tools to predict performances and manage the whole process.
The construction sector is characterised by a growing complexity, due to evolving regulations (such as the nearly zero energy buildings required in 2020), changing expectations of the users, and an ever expanding market of materials, products and systems. Moreover, the economic downturn requires a huge effort to face these new challenges and drive the sector towards 2020 and beyond. This is why the PhD programme in Building Engineering deals with innovation in terms of technology, process and whole buildings. This demand for innovation derives from the urgent need to reconsider the environmental impact of buildings and their components, and from the progressive industrialisation of the sector, which requires the characterisation of the performances of new products and components over their life cycle. The new challenges of energy efficiency and environmental impact reduction concern all the phases of the construction process (planning, design, production, management and decommissioning), requiring a truly multi-disciplinary approach. Inside this complex sector, the engineering disciplines that support the planning of building works have consolidated into research lines about the science of materials, environmental issues, energy efficiency and management, comfort, building services, organizational aspects, construction and management, and economy.

Synthetical indication of the PhD profiles
The PhD programme in Building Engineering prepares high-profile professionals and researchers who are able to strengthen the technological transfer from research to the construction sector. This figure operates inside the engineering processes in a context that is becoming increasingly complex and multidisciplinary. The doctor in Building Engineering can deal with building science and technology on the versants of building physics, building material engineering, service life of systems and components, building production engineering and management, safety engineering, and applied economy.
Doctors in Building Engineering can become:
professionals with advanced and up-to-date technical skills, who can work as consultants and project managers in engineering companies, architectural offices, construction firms or manufacturing industries;
highly qualified professionals suitable for roles of planning and development managers in construction companies, components manufacturing industries, and engineering companies;
researchers for private and public research institutions (including universities).

The formative programme
The formative programme generally develops as follows:
the first year is dedicated to the formulation of the research problem, to the development of the first knowledge of the field, to covering possible formative debts and to the attendance of basic and specialized courses;
the second year is dedicated to the development of the research problem through the extension of the knowledge concerning the objectives of the thesis;
the third year constitutes the most intense period of autonomous and original elaboration of the thesis and its topic.

In the development of the thesis, the relations with other researchers and the periods of study and internship at Italian and foreign research centres are fundamental. Such experiences are highly appreciated and favoured by the doctoral committee. The thesis is carried out with the support of a tutor, and its whole course is monitored by the teaching staff through semi-annual disputations. At the end of the doctorate, the committee admits (or not) the doctoral candidate to the final disputation, on the basis of the thesis' scientific validity and originality.

The doctorate activities are strengthened by systematic and operative contact with:
the most advanced manufacturers of the sector;
research centres, particularly ITC/CNR in Italy, CSTB in France and, at an international level, CIB (International Council for Research and Innovation in Building and Construction); the doctorate is a member of the CIB Students Chapter;
the Italian (UNI) and international (ISO, CEN, EOTA, IAI) headquarters of normative elaboration;
the public administrations system;
the industrial and professional associations that operate in the sector.

In the last years, internships and study periods have been carried out at these institutions:
Aalborg Universitet, Denmark
Aalto University School of Chemical Technology, Espoo, Finland
BRE - Building Research Establishment, London, UK
CSTB Centre Scientifique et Technique du Batiment, Paris, France
Fraunhofer Institute for Building Physics IBP, Holzkirchen, Germany
Fraunhofer Institute für Solar Energy Systems ISE, Freiburg, Germany
IIT, Illinois Institute of Technology, Chicago, Illinois, USA
Institute for Applied Sustainability to the Built Environment ISAAC, SUPSI, Lugano, Switzerland
Lawrence Berkeley National Laboratory, California, USA
Newcastle University - School of Architecture, Planning and Landscape
The University of New South Wales, Faculty of [...]
University of Hawaii, Honolulu, USA
INVESTMENT GRADE ENERGY AUDIT [...] HOUSING

BUILDING ENGINEERING

The Italian building stock is characterized by dwellings with very poor energy performance: about 80% of them were built before the 80s, when energy issues were not considered important; in addition, more than half of these buildings have never undergone any renovation or maintenance. Nowadays, however, energy retrofit does not seem to be a priority for homeowners: first, because of insufficient economic resources; second, because they show distrust of investments in the energy field. In this context, the activity of Energy Service Companies (ESCo) can be an effective tool for mobilizing investment in the renovation of residential buildings with a view to improving the energy performance of the building stock.

A method based on Energy Performance Contracting
The goal of the methodology developed in this thesis is to assess the opportunity of retrofitting residential buildings by an ESCo compared to a traditional approach. In particular, the following actions have been implemented:
preliminary evaluations: a theoretical model, representative of a widely diffused type of building, has been created by means of the data available for the building type, the energy consumption and the habits of the tenants: it represents the real building and its conditions of use. At this stage, all design alternatives have been proposed considering their operational feasibility, the needs of the client or any other constraints.
calculation of consumption: the final thermal energy for each combination of measures previously identified has been calculated by dynamic simulation (TRNSYS, Transient System Simulation Tool) and optimization/automation (GenOpt, Generic Optimization Program).
economic analysis: the costs of construction, operation and maintenance have been defined by means of quotation requests; possible incentives have also been taken into account.
financial evaluation: the financial ratios useful to define the most cost-effective solution have been calculated (APV, Adjusted Present Value; IRR, Internal Rate of Return; PBT, Pay Back Time) by a financial plan, developed for each technology package. Financial assessments have been conducted for the homeowner, in the case of retrofit by means of a traditional approach or using an ESCo, and for the ESCo, [verifying whether the] investment project creates value for the company. At first the maximum APV for the owner has been defined; secondly, the initial outlay for the owner that creates indifference between a traditional or an ESCo retrofit has been calculated by iterations; finally, for the ESCo, the APV has been defined considering the initial outlay received from the homeowner. Therefore, it is possible to define if the design combination creates value for all stakeholders.

Case study and results
The proposed method has been tested on a typical building representing a recurring situation in Italy: it has allowed the identification of the cost-effective building envelope combination in relation to heating systems. In particular, Table 1 plots the best technical solutions under a financial profile, considering 25,300 combinations of technological alternatives. For each group, the logic of representation is the following:
Line 1: owner-side evaluations according to the traditional approach, considering the initial contribution by third parties equal to zero;
Line 2: where the maximum outlay by owner to ESCo is defined (it makes indifferent [...] received by the owner so that the project creates value for the company.
In particular, ten-year EPCs have been considered, according to first-out or shared-savings models, firstly without any kind of incentives for the energy retrofitting. Focusing on shared-savings EPC (ESCo 80% - 20% owner), the effects of Italian incentives for energy retrofitting have been evaluated. In particular, both the incentives provided by the GSE and the tax deductions for the year 2014 have been taken into [account ...] awarding the work to the ESCo is always higher than the NPV obtained by a traditional redevelopment, considering the same technological solution. Otherwise, for the ESCo, the cash flow generated by the investment project creates positive returns, with an IRR significantly higher than any other form of investment on the market.

Conclusions
The renovation of the existing building stock is a priority in Europe, but the lack of economic resources makes its implementation slower than required by climate protection plans. [...] of operation and maintenance. It could also be a valuable tool to spread energy efficiency as an opportunity to increase the profitability of the property. In particular, in the public sector, it can stimulate market transformation towards more efficient buildings and services, trigger behavioral changes in the energy consumption of citizens and enterprises, as well as free up public resources for other purposes.

1. Financial evaluations, without incentives, considering ten-year EPCs and first-out or shared-saving models.

EPC            Outlay  Window       Wall  Ceiling  Floor  Systems  HVAC     PV     EPh       APV      IRR
               [%]     [Uf - Ug]    [cm]  [cm]     [cm]   [type]   [yes-no] [kWp]  [kWh/m2]  [€]      [%]
First-out      0.00%   1.40 - 1.10  12.0  12.0     12.0   IMP.3    NAT      0      24.52     123,893  11.61%
100% - 0%      76.10%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     123,893  11.70%
               76.10%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     41,342   59.88%
               64.09%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     0.00     8.00%
Shared-saving  0.00%   1.40 - 1.10  12.0  12.0     12.0   IMP.3    NAT      0      24.52     123,893  11.61%
90% - 10%      79.15%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     123,893  11.15%
               79.15%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     45,264   111.9%
               66.00%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     0.00     8.00%
Shared-saving  0.00%   1.40 - 1.10  12.0  12.0     12.0   IMP.3    NAT      0      24.52     123,893  11.61%
80% - 20%      82.20%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     123,893  10.70%
               82.20%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     49,186   n.a.*
               67.91%  1.40 - 1.10  12.0  15.0     14.0   IMP.3    NAT      0      23.50     0.00     8.00%

* IRR cannot be calculated because its cash flow does not have one and only one change of sign.
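The table footnote points at a classical pitfall: an IRR is guaranteed to be unique only when the cash-flow series changes sign exactly once. The sketch below (a hedged illustration, not the thesis' financial plan) computes NPV, checks the sign-change condition, and solves for the IRR by bisection; the sample cash flows are invented.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at end of year t (t=0 is today)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def sign_changes(cashflows):
    """Number of sign changes in the non-zero cash flows."""
    signs = [cf for cf in cashflows if cf != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-12):
    """Bisection root of NPV(rate) = 0.

    Returns None when the series does not change sign exactly once
    (the condition flagged in the table footnote) or when no root
    lies in the search bracket.
    """
    if sign_changes(cashflows) != 1:
        return None  # IRR ill-defined: zero or multiple roots possible
    if npv(lo, cashflows) * npv(hi, cashflows) > 0:
        return None  # no root bracketed
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    print(irr([-1000.0, 300.0, 300.0, 300.0, 300.0]))  # well-posed case
    print(irr([-1000.0, 2500.0, -1560.0]))             # two sign changes -> None
```

For series with several sign changes, indicators such as NPV at a fixed discount rate or the APV used in the thesis remain well-defined, which is why the table can still report an APV for the row marked with the asterisk.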
Mediterranean Active House: analysis of climate and to create a definition of the needed to reduce the heating aims, instead, to understand
BUILDING ENGINEERING
sustainability that aims to create building industry globally. A reacts to very efficient buildings performances. The simulation Climate 2: improved ventilation result is that, in an innovative
healthier buildings, capable mixed method process was already built in Italy. The first matrix considers six different and shading system + high home, users can change the
to assure indoor comfort used: considering the vision step compare a model of Active climates: Mediterranean very efficient windows + north main energy consumption from the
minimizing the environmental as a starting point and then House already certified in the hot (Palermo); Mediterranean orientation calculated one of a factor of
impact. The research is part of implements it with the empiric Northern part of Europe: Bolig hot (Roma); pre-Mediterranean Climate 3: improved ventilation 2, depending on his actions
an International study about results of the cases study. In this for Livet, designed by AART warm (Milano); Continental system + high efficient and interactions with the
this new target framework; as way the outcome is strongly A/S and located in Lystrup (Wien); Continental cold (Uccle); envelope + south main buildings equipment. Therefore,
partner of the Active House connected to the range of Denmark. This case study is very cold (Aalborg). orientation understanding the resilience
Alliance, Politecnico di Milano is buildings analyzed. chosen from the Active House For each climate the main four Climate 4/5: improved of the model to different use
the principal center of research The research is based on a database as the case that better buildings orientations create ventilation system + high of the automation system is a
entrusted to analyze if the two steps approach that can summarize all the features the building geometry, while efficient envelope + south step necessary to integrate the
principles are suitable for hot integrate the results for a and characteristics requested. the automation scenarios create main orientation performances calculation with
climate and to edit practical better understanding of the Moreover, the choice is led by the building intelligence. Climate 6: high efficient specific corrections factors and
guidelines for designing a vision and the behavior of the possibility of compare the Different demotic scenarios are envelope + high efficient the practical guidelines with a
Mediterranean Active House. the standard in hot climates. dynamic results obtained during defined in order to understand windows + south main sort of usage manual for users.
Mediterranean climate, in fact, The first part is focused on the analysis with the software if a controlled systems capable orientation The research is just a part of a
presents different and more understanding which could be TRNSYS to the real measured to answer itself to the heat The configuration chosen for wider international framework
complex criticisms compared to the solutions for an efficient values and the predicted one and radiation stress, changing each climate allows to minimize about how to create a system
the north of Europe, because it and sustainable design in made by the designers. The buildings configuration the hours of overheating and that can be used to guide the
is more weather stratified and it Mediterranean climate according model for the South, instead, (windows open/closed for achieve a good results for construction sector through a
has a double direction thermal to the Specifications and, on is built through a step by step ventilation and shading device thermal indoor comfort on the more sustainable future. Thanks
flux. Winter is not rigid but it the other hands, to understand analysis of different features active or not) in order to protect radar. The results show that, if to the analysis it is possible
requests an artificial heating, how the Specifications should and parameters that affect the the indoor environment, is a specific dynamic simulation to understand that, using
meanwhile summer could have be modified to better meet energy behavior, performed improving the performance software is used, it is possible the ActiveH Specification and
high cooling loads, making the Mediterranean needs. thanks to an optimization in a significant way or not. to optimize the building to validation tool, it is possible
the conditioning context not The second step, instead, of the prototypes features In fact, in the Active House achieve the best Active House to design an Active House
always with a single dominant parameter. The aim of the research is to determine whether it is possible to define a robust solution that answers the climate needs, providing user comfort without impacting negatively on the environment. The Active House Specification, in fact, has been developed using an open-source model. The development has involved online debates and contributions as well as offline meetings and workshops with

aims to define the users' influence on comfort and energy consumption in a very technological house driven by an automated control system, as suggested by the vision, as a solution to the energy saving issue. In order to understand how the Specifications should be modified to meet Mediterranean requirements, the research focuses on two different aspects: how to design an Active House for the South

more sensitive to this issue, highlighted by a sensitivity analysis based on the Morris method. The two prototypes are then tested in different climates of Europe, according to different orientation and technological scenarios. In this way it is possible to understand the validity boundaries of each model and the most important features for optimizing an Active House in relation to the context. This step allows

Specifications, an important qualitative parameter is the automation system and the ease with which users can read and change its default settings. Since the research is finalized to investigate the building's performance (according to the vision) in a hot climate, the automation studied is related to the control of shading and natural ventilation systems. The analysis reflects that, even in the Mediterranean area, some cold mitigation technologies are

rank in all Europe. Moreover, an important aspect is represented by the control system: the warmer the climate, the more complete the automation system should be. This feature highlights the importance of solar control in the hot climate, where an instant answer to the external input is vital to protect the indoor environment from overheating and the consequent discomfort and increase in energy consumption. The second part of the research

also in the Mediterranean climate, without changing the nature of the validation system. However, the research also highlights the critical points of the vision, related to the lack of information or guidelines on how to design and how to use an Active House in order to guarantee its efficiency over time and not only at the design stage.
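The Morris screening mentioned above ranks input parameters by their influence on comfort and energy consumption. A minimal sketch of the elementary-effects method is shown below; it is a generic implementation on the unit hypercube, not the thesis's actual building model or parameter set:

```python
import numpy as np

def morris_elementary_effects(model, n_params, n_trajectories=10, levels=4, seed=0):
    """One-at-a-time Morris screening: returns mu* (mean absolute
    elementary effect) per parameter, sampling on [0, 1]^n_params."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))      # standard Morris step size
    grid = np.arange(levels) / (levels - 1)    # admissible grid levels
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        # random starting point low enough that x + delta stays inside [0, 1]
        x = rng.choice(grid[grid + delta <= 1.0], size=n_params)
        y = model(x)
        for i in rng.permutation(n_params):    # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[i].append((y_new - y) / delta)
            x, y = x_new, y_new
    # mu* ranks parameters by overall influence (Campolongo variant)
    return np.array([np.mean(np.abs(e)) for e in effects])
```

For a linear test model the elementary effects are exact, so the ranking directly recovers the coefficients; in a building-energy study the `model` callable would wrap a simulation run.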
BUILDING ENGINEERING

The presence of pathologies such as detachments and adhesion defects can threaten the permanence, the durability and the aesthetic aspect of a façade external finishing in plaster or tiles. In particular, durability can be visibly endangered when the façade is exposed to excessive rainwater infiltration or critical moisture content. For existing buildings and components, the residual service life must be taken into account, defined as the service life remaining after a certain moment of consideration. To assess the residual service life of an inspected building or component it is important to know its history, i.e. data on the original performance values and information on installation, maintenance, trend of deterioration, etc., which can be difficult to obtain. The research takes into account both historical/listed and contemporary buildings, and illustrates the most appropriate methodologies and tools to define conservation or maintenance plans. There is a substantial difference between them: the conservation plan is based on an organic series of controls for managing the transformation of buildings, while the maintenance plan defines the interventions for restoring the functional aspects. In fact, in the case of historic or listed buildings, the primary necessity is to maintain the permanence of the element; authenticity is the key to its historic value, while in the case of new buildings, the durability of the components represents the requirement to be respected. In both cases, the stage of knowledge plays an essential role. Any proper policy of intervention, maintenance, management and valorization has to start from the comprehension of the evolution of the building over time, considering both formal and functional aspects. In this sense, the surveying techniques, diagnostic activities and monitoring plans that current scientific research offers should find widespread diffusion, and should be used in the most appropriate way to avoid the risk of interventions disrespectful of the existing fabric. A correct diagnosis, as in the pathological observation of a human body, is essential not only to resolve the damage, but also to intervene on the causes and the boundary conditions that have generated the degradation, or that could be an obstacle to proper conservation. The prescription of actions or behaviours and preventive measures that can remove or minimize the causes of deterioration derives from a timely and appropriate diagnostic plan. Diagnostics should not be considered a static tool, but should be used to check, test and monitor the phenomena occurring in building assets, underlying their evolutional characteristics.

Through diagnostics, and in particular IR thermography, the researchers have developed a new GIS-based methodology that allows a faster and more precise recognition of external finishing degradations in terms of adhesion problems. They carried out the thermographic survey in transient conditions, to allow the evaluation of a thermal gradient between adherent and detached areas. They also validated the results by numerical simulation through WUFI. The investigative procedure has been applied within the project Città Studi, Sustainable Campus and for the case study of the Santa Maria del Lavello Convent: preventive and planned conservation, to assess the state of conservation of the façades, in ceramic tiles and plaster finishing respectively. The buildings analyzed within the project Città Studi, Sustainable Campus present cladding systems with ceramic tiles, a popular practice in Italy since the 1950s-60s due to their good wear resistance, high temperature stability, hardness, tenacity and inertia. Nowadays, after 50 years of use, the façades present degradation problems on materials and systems, such as detachments, missing tiles and discoloration, due to the action of atmospheric agents, to the loss of adherence between the support and the tile or among the layers of the support, and to the absence of correct maintenance. The case study of the Lavello Convent, located in Calolziocorte (Eastern Lario), is characterized by the presence of a plaster finishing. A few years after the conclusion of restoration works, degradation of the exterior plasters appeared in the convent cloisters due to some critical factors: among them, the poor durability of the plasters applied during the restoration and the presence of water and moisture in the masonry. The research, and its application in the two case studies, shows the improvements in the field of diagnostics and preliminary tests, in particular non-destructive surveys through IR thermography, based on the solution of the mathematical model of heat transfer in transient conditions. The preliminary tests were validated by softly hammering both the detached area and the safe one: the different sounds of the hammering revealed the detachment of the finishing. The researchers analyzed the results of the thermographic surveys through alternative GIS-based methodologies, thanks to the identification of reference temperatures. This method has optimized the defect-mapping phase and allowed their geometric localization on the façades to be underlined, obtaining more accurate results and avoiding the manual, punctual analysis of single areas.

As a result of the survey phase, the research has focused on the evaluation of water absorption and moisture transfer in plastering and bedding mortars for external finishing systems. There are many factors which can intervene in the decay process of mortars: mistakes in mix design, unsuitable working technique, presence of excessive air humidity or rain, extraneous chemical substances. This research studied the damage and alterations caused by the absorption of water and its movement inside the building components: decay patterns such as efflorescences, detachments, exfoliations and biological colonization highlighted the loss of material integrity and durability. The researchers chose and tested six different lime plaster mortars and two cement mortars for bedding, and further applied a siloxane water-repellent treatment (SILO 111) on the outer surface. They evaluated the protective treatment in terms of water-repellent efficacy, compatibility with the substrate, vapor permeability and colour stability. Afterwards, they validated the laboratory results through computer-based simulations using WUFI and verified the efficacy of the siloxane protective treatment in terms of reduction of absorbed water. The results of the investigations have driven the choice of conservation activities, identifying the best mixture for plaster, which has been applied on the northern façade of the Lavello Convent.
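The transient-condition survey works because a detached finishing layer, decoupled from the wall's thermal mass by an air gap, cools faster than a well-adhered one. A lumped-capacitance sketch of that contrast follows; all temperatures and time constants are illustrative assumptions, not values from the thesis:

```python
import math

def surface_temperature(t_s, T0=30.0, T_air=20.0, tau=3600.0):
    """Newtonian cooling of a surface layer:
    T(t) = T_air + (T0 - T_air) * exp(-t/tau),
    where tau = rho*c*V / (h*A) is the layer's thermal time constant."""
    return T_air + (T0 - T_air) * math.exp(-t_s / tau)

# A detached tile has little coupled mass, hence a short time constant;
# the adhered area cools together with the whole masonry wall.
t = 1800.0                                       # 30 min into the cooling transient
T_detached = surface_temperature(t, tau=900.0)   # fast-cooling detached area
T_adhered = surface_temperature(t, tau=7200.0)   # slow-cooling adhered area
thermal_gradient = T_adhered - T_detached        # contrast mapped by the IR camera
```

In the survey this temperature contrast is what the GIS layer georeferences on the façade; the real validation in the thesis uses the full transient heat-transfer model solved in WUFI rather than this single-node approximation.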
In 2013 alone, around 30 million people worldwide were left homeless by natural and man-made disasters. To help these victims, different typologies of shelters are designed, trying to find a balance between easy transport, assembly, energy efficiency and low cost.
The research follows a transversal overview across the concepts of Disaster Resilience, sustainable reconstruction and the available materials. The aim of the study is to demonstrate the possibility of using building technology, focusing on TRMS (Thermal Reflective Multi-layer System), to achieve the best solution for reconstruction after a disaster, supporting Disaster Resilience during the Response and Recovery phases of Disaster Management.
The methodology is organized in four different tasks:
Task 1. State of the art of shelters.
Task 2. TRMS studies: literature and laboratory tests.
Task 3. Feasibility and evaluation of the Air Shelter House (ASH) building prototype and comparison with current economic scenarios.
Task 4. ASH Pillow application studies for reconstruction and new design buildings.
In the first task the state of the art of emergency architecture is studied. It is divided between emergency tents, shelters of small dimensions (micro-architecture), and shelters of normal size, more similar to permanent houses. The result is a scenario where it is fundamental to find a balance between the economic issues caused by the events and adequate housing for the users. At the same time, the guidelines and parameters concerning shelters indicated by the international community are examined. The general investigation, especially regarding emergencies, suggests the study of the envelope system, in equilibrium between lightweight construction and thermal insulation performance.
In the second task, after the identification of TRMS as a possible answer for an innovative design of the envelope, laboratory tests concerning this technology are performed to give useful data for the design phase. The Thermal Reflective Multi-layer selected for the study is composed of 19 different layers of aluminum foils, wadding and foam, 2.5 cm thick, as shown in Fig. 1. TRMS is an innovative combination of TRM and air gaps, based on the use of low-emissivity surfaces for reducing heat radiation. It is an insulation system developed from space applications, where heat reflection is particularly important. The literature on TRM, both theoretical and experimental, demonstrates that the aluminum foils, always present independently of the general composition, can reflect a high percentage of heat energy. Laboratory tests carried out during the research provide equivalent thermal conductivities of 0.04 W/(m·K) for the TRM and 0.038 W/(m·K) for the TRMS with 1 cm air gaps, also considering the polyester membranes used in the test rig. This kind of insulation, moreover, is non-toxic for the user and building owner, and is an environmentally safe building material, considering that it can also be recycled. Moreover, case studies of the use of TRMS in civil architecture are analysed to better understand its performance in normal conditions.

1. TRM layers composition.

Between the second and the third task lies the proposal, which is the implementation of a housing technology based on the use of TRMS. The general innovative concept is called Air Shelter House (ASH). ASH is a versatile housing concept, especially focused on a building component, itself called ASH Pillow, to be used during the response phases as first aid. This building component prototype is centered on the use of simple and low-cost materials, replicable by the use of 3D printers. Compared with tent solutions used in refugee camps, with a durability of 12 months, and prefabricated solutions used in the response phases, usually expensive, the ASH could offer more resistance at an amortized price. Low cost is possible thanks to the wide range of TRMS technology applications, from the situation of first response after events to the possibility of being part of reconstruction in the future. The resistance and 10-year lifespan of TRM ensure the possibility of using ASH as a temporary shelter and/or permanent housing. In addition, the ASH Pillow is easy to pack, with roll dimensions of 0.21 m × 1.10 m × 0.26 m and a weight of around 700 g/m², and it can be easily transported to different sites or countries. It is also easy to assemble thanks to the modularity of the single elements, and it can be tied to the structure, as indicated by ISDR, to prevent envelope movement and damage, especially in the case of column systems.
In the third task, there is an economic and technical comparison between this innovative answer and present solutions commonly used in the response and reconstruction phases. The building component, based on TRMS technology, is moreover investigated through software simulation to understand the feasibility of the ASH prototype panel in different solutions. In the last, fourth phase the application of ASH is investigated in one case of reconstruction and in two new design buildings. Through this application it will be possible to understand the potential of applying the material to different structures, in support of the design process, also considering the prototype study of the two new shelters.
According to the results of the state of the art, the laboratory tests and the economic evaluations, in emergency management the conclusion of this research is the design of ASH, and especially of its building component, which shows the possibility of using technology in support of the Response and Mitigation phases. The adaptation of ASH from the Response to the Mitigation phase, crossing the reconstruction period, supports disaster resilience, contributing to reducing vulnerability too. Simulation of the ASH Pillow as an emergency tent system has shown a lowering of energy consumption of 65% with respect to the standard UNHCR model. Moreover, the good surface temperatures obtained with the finite element method demonstrate the ability of the panel to preserve the thermal performance of TRMS even when the plastic joints are counted. This ability could allow the internal temperature to be preserved, reducing thermal losses. Both results are in agreement with the aim of mitigating global warming. The possibility of a patent is evaluated in order to improve the study of the ASH Pillow, through the production of a real panel prototype, to perform more accurate laboratory tests concerning thermal performance and design optimization.
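From the reported values (read here as equivalent thermal conductivities in W/(m·K)), a steady-state thermal transmittance can be estimated with the usual series-resistance formula. The surface resistances and the assumed overall TRMS thickness below are illustrative, not measurements from the thesis:

```python
def u_value(thickness_m, conductivity_w_mk, r_surfaces=0.17):
    """U = 1 / (R_surfaces + d / lambda). r_surfaces is the conventional
    internal + external surface resistance for walls (ISO 6946,
    about 0.13 + 0.04 m2K/W)."""
    r_layer = thickness_m / conductivity_w_mk
    return 1.0 / (r_surfaces + r_layer)

# Reported equivalent conductivities: TRM 0.04 W/(m K), TRMS 0.038 W/(m K)
u_trm = u_value(0.025, 0.040)   # 2.5 cm multilayer mat alone
u_trms = u_value(0.035, 0.038)  # mat plus 1 cm air gap (assumed total thickness)
```

The point of the comparison is that the added low-emissivity air gap lowers the transmittance despite the small extra thickness, which is what makes the system attractive for lightweight shelter envelopes.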
A DECISION MAKING SUPPORT MODEL TO DETERMINE

Green building (GB) projects are cost-effective in the long term due to lower operational and maintenance costs; however, stakeholders are still reluctant to invest in GB projects due to prolonged duration, high costs and additional labor. These challenges are associated with lower productivity and additional time spent on reworks, leading to hidden costs whose excess can be attributed to time and cost-related waste in the processes. Waste, or process waste, is defined as non-value adding activities (NVA) that absorb resources/cost but do not create any additional value for the project.
Previous studies show that time and cost-related waste in the GB project delivery process is much higher than that of non-green buildings (NGBs), due to the collaborative and interdisciplinary nature of GB projects responding to the needs of interconnected green system design and implementation. In such complex situations, lack of guidance reproduces the same mistakes in each new project. It not only causes higher waste in the GB project delivery process compared to NGBs, but also creates certain types of waste specific to GB projects. In fact, the more complex processes a project has, the more waste it can generate. Although prior studies identified some waste in GB projects, a comprehensive identification and classification of waste and related root causes encountered in the GB project delivery process, which is apparently a crucial necessity for the GB industry and the GB literature, does not exist.
To begin with, if the owner intends to achieve GB certification for his/her GB project, the increase in the number of tasks specific to GB certification and in the details of green specifications automatically necessitates more project participants, hence leading to much higher complexity in GB processes. Moreover, if the selected GB certification credits are not suitable for the attributes of the GB project delivery, increased levels of time, money and labor can be wasted while responding to the additional requirements of GB design, construction and certification. Considering GB project delivery attributes is highly important in the optimization of GB project delivery and required for the successful completion of GB projects, since project delivery attributes deeply affect the project outcomes. Despite a clear necessity for a study which suggests a decision making model as a guideline to determine appropriate credits for GB certification based on project delivery attributes, such a model does not exist.
This Ph.D. study addresses these needs by (1) examining waste and related root causes for GB project delivery, (2) analyzing particular attributes of GB project delivery and building a hierarchical framework based on these attributes, and (3) developing a decision making support model from this hierarchical framework to determine appropriate credits for GB certification. In order to examine waste and related root causes for GB projects, this study classified waste types and associated root causes, investigated their cause-effect relationship and ranked them according to their negative impacts on time and cost in the design and construction phases of the GB project delivery process, by conducting a case study including three GB projects and a two-round Delphi Method. Drawing on the findings, this study focused on two GB project delivery attributes, i.e. the timing of project teams' involvement and the qualifications of project teams, which play a crucial role in ensuring successful outcomes from GB projects while enduring minimal waste. Based on these two attributes, a hierarchical framework was built to derive relative weights for these attributes, to be inserted in the decision making support model and to constitute the basis of this envisioned decision making support model.
The GB-CS Model, developed in this study, employs the integrated approach of a Delphi Method based weight assignment process and TOPSIS (the Technique for Order of Preference by Similarity to Ideal Solution). The GB-CS Model (1) derives relative weights for hierarchically designed GB project delivery attributes gathered from eleven GB experts through the Delphi Method based weight assignment process, and (2) determines appropriate credits in accordance with GB project delivery attributes via TOPSIS. The Delphi Method based weight assignment approach ensures reliable and sensitive relative weights for hierarchical attributes that can be inserted in multi-attribute decision making models, while TOPSIS guarantees detection of the best alternative(s) among a large number of alternatives. Hence, the integration of the Delphi Method based weight assignment approach with TOPSIS helps researchers efficiently examine a large number of attributes and alternatives, among which the best alternative is to be selected. The GB-CS Model was developed based on LEED 2009 NC under the BD+C Rating System, since it can be implemented on various types of projects.
This Ph.D. study endeavors to give the GB industry and literature the upper hand by (1) identifying, classifying and ranking waste and related root causes for GB projects and presenting the cause-effect relationship between them in an Ishikawa Diagram, (2) building a hierarchical framework based on GB project delivery attributes and assigning relative weights to these attributes, (3) developing a decision making support model for determining GB certification credits that best suit the particular attributes of GB project delivery, and (4) constituting an integrated approach of the Delphi Method-based weight assignment process and TOPSIS for detecting the best alternative(s) among a large number of alternatives according to multiple attributes.
This integrated study formalizes the identification and classification of waste with the related root causes for GB projects and reveals the cause-effect relationship between them, which come together as a multi-attribute decision making support model that aids the optimization of GB project delivery and allows obtaining better outcomes from GB projects through minimizing the root causes of elevated waste and mitigating the associated hidden costs. This multi-attribute model provides an interconnected view of GB project delivery (project and project team related). It is expected that properly selected GB certification credits will optimize GB project delivery by mitigating the excess levels of waste generated to fulfill the additional requirements of GB design, construction and certification. The GB-CS Model proposes to give the GB industry and literature the upper hand by facilitating GB projects with an adaptive guidance model that quantifies the outcomes of Green decisions and ensures the successful completion of GB projects. Future research would extend current knowledge about developing multi-attribute decision making models to aid the optimization of GB project delivery while enduring minimal waste.
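TOPSIS, as used in the GB-CS Model, ranks alternatives by their relative closeness to an ideal solution under given attribute weights. A generic sketch of the standard procedure follows; the decision matrix, weights and criteria directions below are hypothetical, not the model's actual expert data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: (n_alternatives, n_criteria); weights sum to 1;
    benefit[j] is True if criterion j should be maximized."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)       # vector normalization per criterion
    v = norm * w                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)   # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]

# Hypothetical example: three candidate certification credits scored on two
# delivery attributes, with weights as a Delphi-style elicitation might yield.
scores = topsis([[7, 9], [9, 3], [8, 6]], [0.6, 0.4], [True, True])
best = int(np.argmax(scores))
```

In the GB-CS setting the weights would come from the Delphi-based assignment process and the rows would be candidate LEED credits; the closeness coefficient then orders the credits from most to least suitable.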
Technology and Design for Environment and Building | Territorial Design and Government | Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircrafts | Spatial Planning and Urban Development | Structural Seismic and Geotechnical Engineering

PhD Yearbook | 2015
DOCTORAL PROGRAM - DESIGN

Chair: Prof. Francesco Trabucco

Held at Politecnico di Milano, the Doctorate in Design Research is created and managed by the Design Department, in cooperation with the Department of Mechanics and the Department of Chemistry, Materials and Chemical Engineering.
Politecnico di Milano research doctorate courses aim to build the skills needed to perform highly qualified research jobs in manufacturing and service enterprises, the public sector, and the university.
The scientific field to which this course belongs is industrial design. Its interdisciplinary relationships include the philosophy and theory of language, art history, design, materials science and technology, industrial engineering, decision making, and computer science.
Industrial design, following the meaning adopted within this doctorate, is intended as a discipline acting within the industrial culture and accompanying its transformations. Among its main tasks is to deal with the configuration of processes and products. In this sense, the school's specific meaning extends to the use, function, and social and individual consumption of products (the functional, symbolic and cultural factors) as well as to manufacturing (techno-economical, techno-systemic, techno-productive and techno-distributive factors). All these themes are expected to be faced with the support of the conceptual tools of research in its theoretical, critical, historical and methodological articulations.
The complex of issues investing the theme of innovation represents the conceptual trajectory of the whole program. The attention to innovation-related phenomena is due to various factors, partly internal to the dynamics of the discipline of industrial design, partly motivated by the perception of the growing complexity of the innovative process, thus fostering in-depth analysis and new approaches which can legitimately be faced within the doctoral programme. Whatever the motivations for the analysis of technological change and innovation, this trajectory of enquiry highlights the factors and fundamental ingredients of the process of development, transition and transformation of industrial products, services and systems. As a starting point a broad view of innovation is assumed, being a dynamic process involving the development or improvement of new products, services, technologies, processes, institutions, systems and strategies. Such an extended view of innovation includes the range of economic and social activities in areas such as communications, corporate … that the researcher can effectively deal with, analyze, and find solutions for. At the same time they are experts in managing awareness, in constructive interaction among the various players, and in the communication of ideas and concrete proposals.

Doctoral Program Board
Francesco Trabucco (Chair), Giovanni Baule, Paola Bertola, Alessandro Biamonti, Monica Bordegoni, Giampiero Bosoni, Valeria Bucchetti, Alba Cappellieri, Paolo Ciuccarelli, Luisa Collina, Alberto Colorni, Luciano Crespi, Barbara Del Curto, Marisa Galbiati, Luca Guerrini, Fulvio Irace, Pietro Marani, Anna Meroni, Silvia Piardi, Silvia Pizzocaro, Lucia Rampino, Maurizio Rossi, Michela Rossi, Giuliano Simonelli, Raffaella Trocchianesi, Carlo Vezzoli, Paolo Volontè, Francesco Zurlo

Advisory Board
Enrico Morteo (Architect, member of the Doctoral School Reference Committee)
Luisa Bocchietto (President of ADI)
Cees De Bont (Dean of Hong Kong Polytechnic University, Coordinator of the Design Research Alliance)
Carlo Guglielmi (President of Cosmit)
Michele Perini (President of Fiera Milano and CEO of SAGSA)
Giovanna Castiglioni (Vice-president of Fondazione Castiglioni)
Claudio Luti (President of Kartell)
Giovanni Petrini (Founding partner of The Hub Milano and head, at Avanzi srl, of the social & sustainable new business sector and of relations with media and local stakeholders)
This research explores the change in production models from the design perspective. It is an emerging socio-technical paradigm characterized by new forms of advanced, open and distributed manufacturing. The democratization of fabrication devices, linked to an increasing abundance of low-cost (free) design resources, the appearance of indie online marketplaces and new social forms of micro-financing innovative projects (crowdfunding), represents a set of new opportunities to develop independent and integrated small and micro scale production activities, directly from idea to market.
Production phenomena such as making and personal fabrication, related to physical and virtual places such as Fab Labs and other platforms for digital manufacturing and open design, certainly represent a technology-based evolution of the traditional world of self-production or DIY. Nowadays, a heterogeneous population of individuals and groups, who are not designers or manufacturing companies, can materialize their ideas independently, transforming them into product-service solutions, even technologically complex ones, potentially marketable on a global scale.
In such a scenario, where the innovation processes appear driven by new players such as makers, two questions are emerging: what is the (new) role of design? How do designers change their skills and capabilities?
The research starts from this assumption: the new forms of production without a defined design intent and without the support of adequate design skills seem to be not (so) generative. The proliferation of object-gadgets on personal fabrication web platforms is a clear example. On the other hand, the low quality of many artefacts materialized in the Fab Labs demonstrates how new manufacturing technologies, while expressing great potential, are not yet able to replace industry and craftsmen (and it is not clear whether they will ever be able to do so). Instead, there seems to be an interesting field of action for individuals who are able to hybridize (in an original way) design and fabrication skills in order to materialize new artefacts through innovative manufacturing processes based on alternative technical and technological configurations of the meanings of production. And designers are among them.
On this basis, the research proposes a systematic study of the change of production models, especially those on a small and very small scale. The first part of the study describes the set of features and possibilities of evolution of technologies for advanced manufacturing and distribution. This study contextualizes the new models of production compared to the historical ones, from Taylorism to flexible specialization (lean manufacturing). The exploration continues by studying the characteristics of the new places that enable new forms of production, from Fab Labs to makerspaces, from TechShops to DIYBioLabs, and by mapping the new types of producers, such as makers, who are their interpreters. These places and subjects are then observed within specific contexts such as the city, identifying a variety of forms of production which interact with each other, suggesting the existence of a system called distributed microproduction.
The second part of the research focuses on the change of designers' skills and capabilities in relation to the change of production models, in particular distributed microproduction. The research has noted that the standardization of the design professions, together with outsourcing and deindustrialization processes, is changing (for the worse) the traditional relationship between designers and manufacturers. At the same time, new opportunities offered by the digitization of production are gradually pushing part of the designer population to transform its own nature. These designers (out of necessity and/or personal interest) become new producers and bring together, in a personal dimension, all the functions of a manufacturing company: research and development, design, prototyping, production, promotion, distribution and fundraising. They also prototype and produce microproduction processes when the design activity focuses not only on the artefacts, but also on the configuration of the resources used to produce them: from materials to tools and machines, to the places of production such as microfactories. The research has defined these subjects through the conceptual model of the designer=enterprise, drawing an initial taxonomy of microproduction processes, design approaches, and related business models.
The third part of the research analyses and verifies the activity of designers=enterprises both internationally and in a national context (Italy). More than 100 projects developed by designer-microproducers have been observed, also arranging interviews and visits to several design studios. An online survey conducted on over 100 Italian makers, designer-producers and makerspace managers has studied the microproduction processes in detail. Basic data and information about the economic condition of designer=enterprises, production skills and distribution strategies have been obtained to understand the economic sustainability of these activities and their prospects of development.
PROCEDURAL DESIGN AND ORDERED COMPLEXITY.
DESIGN

Design has always preceded the construction phase. The act of designing is an opportunity to organise one's ideas, manage resources and predict results, and is made possible through the use of dedicated instruments.

The introduction of the computer as a design tool has been epoch-making. Modelling programmes gradually began to have a greater influence on the design process. Over the years the level of involvement of software grew, developing from a representational role to having a direct influence on the process of generating forms, sometimes even characterising the structure of the artifacts. In recent years, economic, social and cultural changes have accelerated the advent of a digital and globalised society. In a world in which hardware devices are omnipresent and constantly interconnected, software and the ability to manage it become key. Smartphones, tablets and other electronic devices are increasingly becoming terminals rather than independent objects, so much so that they cannot be used if they aren't connected to the information infrastructure. Hardware is becoming less relevant, to the point that it is possible to significantly alter its performance and its operating system. The material system is subordinate to the definition of the code and language that enable it to function.

In the case of design, increased levels of computer literacy have led designers to analyse the processes that underlie the functioning of the digital instruments used on a daily basis. This interest has given rise to a new type of modelling, based on the elaborative logic of information, which has determined a new phase in computer-assisted design, in which the form is generated by drawing up algorithms. An algorithm is a systematic procedure based on a series of unambiguous instructions that explain how to achieve a specific objective. Used in design, it promotes research based on the importance of the code-procedure concept: just as the solution to a problem can be described in a finite number of steps, so the identity of a form is the consequence of a series of discrete rules that define it. The form is not defined a priori but stems from a process of refinement of conceptual, communicative, structural and geometric instances that lead to the result most in keeping with the initial expectations.

Structured geometries would clearly be pointless if they couldn't then be created. In parallel to the development of software, or perhaps because of it, there has been a convergence towards the digitalisation of production processes, thanks to machines able to construct, either in whole or in part, the designed object, starting from its digital model. This process is known as Digital Fabrication and requires no interpretation other than the designer's, as the file is prepared at real scale and fabricated without the involvement of intermediaries. As is often the case when a new technology is first adopted, the exponential dissemination of the tool in the professional sphere has not been matched by adequate theoretical development that can integrate the theory of design with the potential on offer. The excitement over the possibilities offered by the new applications has seen a move from conceptualisation to mere jargon. With computer-assisted design accompanying the designer from the generation of the form through to its digital fabrication, integration with theoretical analysis and comprehension tools able to maintain a high level of design coherence is increasingly necessary.

The aim of the thesis is to […] assisted design. As such, what is proposed is a methodological approach, until now barely developed, that integrates computational methods in the design process. With this in mind, the research analyses on the one hand the relationship between geometry and the digital tool and, on the other, how it is possible to use abstraction for design purposes through a single systemic process able to manage the increased level of complexity admitted by the new approach. The term 'complex' does not simply mean complicated; rather, it is a precise definition that refers to the science of complexity, a field of research that has not yet been completely formalised but which is equipped with theoretical tools suitable for the new context. In the systemic vision the units are relationship patterns, inserted within a broader network of connections. In design, for example, form may be considered the result of the interaction between precise, formalisable and quantifiable conditions (formal aspect, materials, physical and temporal constraints, pre-established goal, interaction with the user, economic and production factors) and a creative instance that must be implemented. These determining factors interact reciprocally to achieve a common goal, and so the design […] that they can be balanced.

The revolution inherent in the new digital tools is the way they have transformed the discipline of design from iconic representation to the representation of relations and processes. In this new dimension the various design instances can be organised in emerging relational structures that transfer typical characteristics of living systems, such as the ability to adapt and transform, and self-organisation, to the design process. This behaviour cannot be controlled according to the classic linear method (top-down), which seeks to predict all possible situations and subsequently prescribes the solution for dealing with them. It can be handled only by defining the behaviour of entities at the level of the design (bottom-up) and leaving the task of simulating the collective effect of the interactions to the calculating […].

The theoretical model is followed by the analysis of the technologies typical of digital fabrication. As well as verifying new production possibilities, this study also seeks to understand the organisational methods required for their management and the possible consequences on the socio-economic context and the role of the designer. Finally, the experiments carried out during the PhD are presented; their comparison highlights the potential of procedural programming, opening the way to additional experimentation and research.

1. Procedural-Algorithmic design: a single complex geometry (a Gyroid minimal surface) is placed in a periodic system. Subsequently the porous structure is used to create a protective helmet.
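The algorithmic, code-as-form idea described in this abstract can be made concrete with a small sketch (an illustration of mine, not code from the thesis): the Gyroid named in the figure caption is commonly approximated by the implicit equation sin x cos y + sin y cos z + sin z cos x = 0, and a porous structure like the helmet can be obtained by keeping only the material inside a thin shell around that surface. The function names, voxel resolution and shell thickness below are illustrative assumptions.

```python
import math

def gyroid(x, y, z):
    """Implicit gyroid approximation: g(x, y, z) = 0 is the minimal surface."""
    return (math.sin(x) * math.cos(y)
            + math.sin(y) * math.cos(z)
            + math.sin(z) * math.cos(x))

def porous_fraction(n=16, period=2 * math.pi, thickness=0.4):
    """Voxelise one periodic cell of the gyroid.

    A voxel is counted as 'solid' when |g| < thickness, i.e. it lies
    within a thin shell around the surface; the return value is the
    fraction of the cell occupied by material.
    """
    solid = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = period * i / n
                y = period * j / n
                z = period * k / n
                if abs(gyroid(x, y, z)) < thickness:
                    solid += 1
    return solid / n**3
```

A simple rule (the implicit equation) thus procedurally generates an arbitrarily large periodic structure, which is the sense in which form becomes the consequence of discrete rules rather than an a priori drawing.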
This study attempts to trace the role of banks, banking foundations and trusts in the territory of cultural heritage, in particular with reference to a design able to manage the activities of museums, libraries and archives. The research analyses the work of trusts and banking foundations, exploring the underlying motivation for their intervention in public utility as well as in private and public projects relating to cultural institutions and the arts.

The research chooses four case studies in order to investigate their historical background and understand the development of the different systems adopted by banks and trusts. A literature review has been conducted to answer the central research question: what is the best design for the long-term funding of cultural institutions?

I have identified a workable model in sustainable advanced design for cultural organizations able to communicate with private actors such as donors, sponsors, and trusts. Data and interviews have been analysed to test the possibility of applying this design network model to museums, archives, and libraries. Further investigation was conducted at a regional level on the existing networks among cultural institutions to ensure the applicability and originality of this design platform.

To conclude, this research addresses the importance of strategic design in shaping the actions of foundations and trusts towards the development of cultural institutions. Through this research, I propose a sustainable advanced design model that could be replicated by different organizations.
Due to the rising number of multi-modal devices and the higher number of messages being conveyed across several channels, designers now have a key role as mediators. Indeed, they must bring different social contexts together in mutual understanding. In a mediascape, designers play a dual role: as story listeners they collect stories from audiences and repertoires, and as storytellers they organise these stories into experiences. Beginning with these assumptions (and in accordance with theories and projects developed over a number of years), this research aims to determine a new form of literacy that allows communication projects to be developed that rethink the participatory process and merge different languages, media and technologies.

The evolution of social interconnections through digital technologies has emerged from a phenomenological approach to the contemporary mediascape. As a result, multi-channel structures have become increasingly important and have completely changed the role of the audience; this in turn has allowed for the development of widespread creativity through collaborative creation and the collective consumption of narrative worlds. With a focus on the domains of media studies and design research, the core topic of this research is multi-channelled structures. These structures are able to foster the sharing of meaning-making processes between producers and audiences, and to shape society and influence media habits through storytelling, story listening and engagement.

The research also focuses on transmedia, a phenomenon concerned with the building of a story universe through different channels to enhance the role of the audience. It is a socially understandable paradigm, the growth of which requires special skills and teamwork. It is hypothesised that transmedia practice is a procedure that could address the issue of contemporary complexity through a phenomenological approach to the coeval reality. The focus on a cultural paradigm allows people to become aware of the prominent role played in the contemporary mediascape, and emphasis is also placed on the storyteller's ability to support multiple points of view. Transmedia is a phenomenon that allows audiences to participate in the meaning-making process and changes the relationship between the mainstream media and participatory culture. Thus, this work starts from the assumption that, because of story-building processes capable of fostering audience engagement, activating collaboration among peers and social innovation, transmedia can sustain local communities in the development of on-line and off-line interactions.

Transmedia can be described as a practice made up of tacit knowledge that people work with in an intuitive manner and that follows a learning-by-doing approach (which hails from a Renaissance studio model recovered from the design field). A literature review revealed that there is a lack of sharing practices in transmedia project development. Thus, it is clearly necessary to identify interpretative models and guidelines for its design. This research aims to identify the main features of transmedia projects in order to build a glossary that can be shared and that will contribute to the development of a useful tool for transmedia practices. Such an instrument could become a conceptual and operational tool for designing story worlds, not only for big Hollywood productions, but also for everyday scenarios.

Based on the necessity to understand how aesthetic and economic issues work together within the design of a complex story world, a conceptual and operational tool (Transmedia Design Framework) was developed that combines two layers: Conceptual Framework and Transmedia Building Model. The former, the Conceptual Framework, aims to sustain the comprehension of complex phenomena. It intertwines the key features of transmedia projects with the six elements of Aristotle's poetics. These qualitative elements structure a drama like an organic whole and were translated for Human-Computer Interaction by Brenda Laurel in Computers as Theatre (1991) as: action, character, thought, language, melody (pattern) and spectacle (enactment). This research in turn defines the five concepts of the conceptual framework as: story world, content, media, engagement and context. The latter, the Transmedia Building Model, suggests guidelines, tools and an on-line platform for the development of a transmedia project within multidisciplinary teams. Specifically, it is a model constituted of four main sections for the activation of an iterative design process, each of which requires specific skills. The four sections are: story world, narrative context, functional specification and production specification.

The building and the validation of the Transmedia Design Framework intertwined the experimentation of transmedia practice at a local level within the Plug Social TV project (www.facebook.com/plugsocialtv), which was activated in a peripheral urban area of Milan (specifically, in the Bovisa and Dergano districts). Since 2013, final-year students of the Master of Communication Design programme (School of Design, Politecnico di Milano) have been guided by my research group (ImagisLab) in the management of web-based television and in the design of transmedia television content. This experience was essential to fully understand the underlying processes in the development of multi-channel projects. Indeed, observing, monitoring and revising the students' work was fundamental to the development and exploration of insights and hypotheses about the construction of the Transmedia Design Framework. In light of the feedback obtained from students, this conceptual and operational tool is in the process of being refined, not only for those who already have experience in designing transmedia experiences, but also for those who are new to the field.

Plug Social TV is a web television that makes use of social media to produce content that aims to test and verify the ability of stories of local interest to activate self-representation and self-narration, and fosters a dialogue in a neighbourhood among its different inhabitants. In fact, dual personalities exist in these Milan districts. Formerly industrial areas, they are now populated by a mix of older workers, artisans, foreign citizens, and engineering, architecture and design students attending Politecnico di Milano. Approximately one hundred students and thirty members of the local community have tested Plug Social TV and the potential of transmedia systems in a specific local area, outside the entertainment market and mainstream productions.

1. Conceptual and operational tool (Transmedia Design Framework) that combines two layers: Conceptual Framework and Transmedia Building Model.
Design of illusory space. Quadratura is a research topic that lends itself to reflection from the practical point of view as well as from a theoretical one. The research highlights and represents the principles of the projective-geometric design of illusory spaces and proposes a detailed study of the architectural perspective called quadratura, which featured the application of science and art to interior decoration and architectural spaces during the sixteenth and seventeenth centuries. The fundamental content of each quadratura is the three-dimensional ideation of space and the extension of architectural surfaces, a veritable figurative palimpsest. The aim of the PhD research is to promote a deeper understanding of quadratura as a tool for the design of illusory space. By studying the case studies and verifying how the use of perspective has been the key to correcting space, it is believed that this instrument can be reused and have a bearing on contemporary design. Quadratura, in contemporary practice, can support the design of a place or enrich an existing space, an interior setting or a public environment. Perspective is still a useful tool in the design of exhibitions and in scenography. Relationships between perspective and its most practical and applicable templates inform the design of illusory space.

The thesis is a study of design representation within the interior design framework, and it aims to pinpoint how architects and artists of perspective worked, disclosing the project contents related to quadratura and describing the different functions of perspective, building space also as a place of experience and expression of ways of thinking, a process that allows the transformation of built environments. The thesis has analysed the concept of a selection of figurative space, deepening the formation of the concepts of perception and illusion from antiquity to the Renaissance and the Baroque. During the Renaissance, emphasis was given to projective methods, whose geometric and optical principles were investigated in the proportions and visualization of architectural works, and to the use of perspective as a projective system accelerating or slowing the effects of natural perspective in order to modify certain environmental aspects, external and internal, of the built volumes. The study of perspective issues therefore had to verify its evolution from that in which perspective transforms by the laws of optics and light to that of a graphical method for depicting spatial depth and introducing the concept of infinity. For this reason perspective is also used to change and transform space, understanding it as a parametric tool for the design of the space itself. Considering the cases where the built space becomes a set, ephemeral and therefore illusory, perspective has been analysed as the design of the graphic form of perception.

The second part of the research analyses the theory and technique of quadratura design, highlighting design methods and perspective applications, and studying the projective operations and techniques of perspective theorized in the numerous treatises of the sixteenth and seventeenth centuries and in many documents. It was considered appropriate to highlight the case studies in which the practice of perspective seen from below upwards was applied, i.e. the typological solutions for the decoration of vaults, ceilings and walls, and their specificity. The in-depth study of the manuals and treatises written and circulated in Italy, France and Spain has highlighted the studies done on project applications and the ways in […] and also the most appropriate practical universe for scientific investigations. It was therefore necessary to examine the projective principles underlying the various works, to investigate which spatial forms were mostly adopted and, according to the typological design, to pinpoint the perceptual and narrative issues that result from the choice of vanishing-point position. This research has revealed particularly interesting points, since very often parallels and feedback have emerged among works, and between what has been found in written documents and what has actually been built. Jacopo Barozzi da Vignola, Sebastiano Serlio, Pietro Accolti, Jean du Breuil, Abraham Bosse, Giulio Troili, Andrea Pozzo, Ferdinando Galli Bibiena and Antonio Palomino are the selected authors, because they are considered the most interesting for the purposes of the research. Dealing with a research project that identifies the quadratura parameters in relation to the use of perspective drawing and its material reproduction, from sketch-concept to perspective tools to pouncing on flat surfaces and on vaults, we can scientifically determine methods from field practice and subsequently verify their reproducibility and variations […] of illusory space. The research has therefore initiated the classification of the techniques for the practice of quadratura, identifying the experimental methods that then became consolidated and universal.

The third chapter of the research analysed the spread of quadratura in Lombardy, also validating the projective and typological constants of the paintings and comparing theoretical studies, manuscripts and treatises with the solutions of the Emilian school, which first influenced the birth of quadratura in Milan and Lombardy. A mapping of the artworks in the city of Milan was created, highlighting the places in which quadraturisti painters operated. The research focused on two case studies, Palazzo Crivelli in via Pontaccio and Palazzo Litta in Corso Magenta, which host very significant and interesting quadratura decorations.

Considering what tools and methods can create an illusory space is an issue currently taken into account where there is a need to create new architectural scenarios: the story of quadratura is the direct reference, and the question arises whether it can still be reactivated. Having analysed the theoretical evolutionary route, and, with regard to the case studies, having noted the works […] The classification of the techniques of quadratura has proved to be an operation aimed at highlighting the design principles that can still guide the design process of illusory space and where we can highlight its methodological value. The challenge is to make the role of representation still useful and effective when describing the design of an interior; this task is more attractive if we pinpoint knowledge and practical solutions in an era ruled by digital technology. The research concludes its dissertation by showing how design representation can intervene, adding value, in the processes of concept development in the practice of interior design; representation and the graphic approach consolidate their role, identity and value through the study, documentation and transmission of such design methods.
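The projective principles discussed in this abstract reduce, in their simplest modern formulation, to central projection. The sketch below is a textbook pinhole-projection illustration of mine (not a reconstruction of the treatises' own constructions): a 3D point is projected onto a picture plane, and a vanishing point arises as the image of a direction, the "point at infinity" that quadratura painters placed by choice.

```python
def project(point, d=1.0):
    """Central (pinhole) projection of a 3D point onto the picture plane
    z = d, with the eye at the origin looking along +z."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the eye (z > 0)")
    return (d * x / z, d * y / z)

def vanishing_point(direction, d=1.0):
    """Image of the point at infinity shared by all lines parallel
    to `direction`; requires a component towards the picture plane."""
    a, b, c = direction
    if c <= 0:
        raise ValueError("direction must point towards the plane")
    return (d * a / c, d * b / c)
```

For example, the projections of points running along any line parallel to the direction (1, 0, 1) converge on the single image point (d, 0), which is exactly the vanishing point returned for that direction.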
The research investigates models of temporary accommodation, and particularly temporary housing solutions at mega-events such as expos, Olympics, major concerts, fairs, and cultural, political and religious events, with particular attention paid not only to visitors' accommodation but also to the staff working for the organization of the event itself. The disciplines of reference for the study are mainly interior and service design.

The thesis also explores the urban dimension and the response of the city to the growing demand for temporary accommodation linked not only to large but also to small and medium-size events.

Specifically, the study is divided into two parts. The first is theoretical, offering the general frame of reference and an introduction to the theme of contemporary housing, with a special focus on temporary living spaces (as a main character of contemporary living). Within this we will find a classification of the different temporary housing typologies, in order to then explain the dichotomy of home vs. house in terms of occupation vs. adaptation. In other words, we will look at the dialogue established between the necessities and needs of the inhabitants and the answers given by the space itself, through adaptations and variations of the interiors, in order to welcome and foster the expression of the occupants' identity. In particular, within the 'home as occupation' chapter we will look at studies in environmental psychology to describe the different strategies employed in order to feel at home and express one's identity even within temporary living spaces. In the 'house as adaptation' chapter we will instead look at transformations and alterations of interior spaces in terms of 'design for degrees of freedom', to quote Ugo La Pietra, that allow the occupants to express their identity. Finally, in the conclusion of the theoretical part, we will introduce the concept of 'interior legacy' to describe what interior spaces leave with us after their use, in terms of materials and remnants, but also of memories.

In the second part we will look at the practical implications of the research: the description of the design for the temporary living spaces of the Expo Village made available to the delegations and the staff of the event, Expo 2015, in the city of Milan. The output of the research is also made up of a series of guidelines for a hospitality system connected to the context of events.

The methodology adopted relies on the one hand on the analysis of selected case studies, which include field observations (London 2012 Olympic Games) and the study of texts and data collections, while on the other hand it is based on the evaluation of interviews and questionnaires. The interviews were conducted with people who had participated in major events as staff and could share their opinions on the management of the hospitality sector in that specific context, while the questionnaires were filled out by those who will be taking part in the next expo (Milan 2015) as staff or delegates of the various participating countries. The views of beneficiaries and workers of the hospitality sector of a large-scale event will help in understanding the expectations and desires of all users. Another instrument of research was workshops involving small forms of co-design, activities aimed at non-specialist audiences, the results of which may help to investigate the possible interpretations of the more general issue of temporary living and the characteristics of interior spaces for hospitality functions (for example, the workshop 'Feel Like Home', held during the event 'Meet Me Tonight: Researchers' Night').

As mentioned before, the theme of the research is part of a broader context that includes temporary living; sociological, anthropological and environmental-sociology research on the relationship between the house and its inhabitants; and also the design of low-cost hospitality and the role of events as an opportunity for the rebirth of the city, focusing on large events but also on small and medium-size ones.
DESIGN WUNDERKAMMERN

[Collecting] design can be considered a new phenomenon. For the last two decades the demand for rare historical objects and for contemporary pieces that transcend the boundaries between design and art has steadily increased. People have been showing a new attitude towards design objects and, as a consequence, the design market and the industrial manufacturing system have been changing their patterns. The prices of design works have started to rise, together with a new conceptual and narrative approach in the design processes. The global market for historical and contemporary limited-edition design pieces, small and elitist though it is, has developed as a consequence of significant changes in the practices of producing, selling and buying design works. Galleries started to present design in the same way as art, and because of that collectors began to purchase pieces manufactured by designers as if they were artworks. Auction houses started holding sales dedicated to historical and contemporary works by designers and architects. International and domestic fairs dedicated to collectible design have seen a significant increase in sales and attendance, and a noticeable number of […] to craftsmanship, because unique pieces are highly requested by the market. Along with a growing attention by collectors toward the realm of collectible design, what is happening today is a progressive overlapping of roles and responsibilities in research and experimentation within the design field. The key players of both the primary and the secondary market are shifting from being involved only in the sale and exhibition of collectible design to becoming meaningful actors able to encourage appreciation and understanding of design culture. Through curated exhibitions, sales and online platforms they influence the development of the market and address critical issues related to historical and contemporary design.

Although academic interest in the market for collectible design is still in its infancy, mostly because it is seen as a commercial phenomenon not worthy of being investigated from a theoretical perspective, this research study relies upon the main hypothesis that the cultural implications connected to the distribution and production of non-industrial design could truthfully enrich and develop the multifaceted culture of design. The phenomenon has indeed a strong commercial core, but […] is also affecting the practice, consumption and perception of design in an extensive way. Whilst the art market has been widely studied by art business, both a professional and an academic discipline, there are no conventional markers for the market value of design pieces. The amount of literature dedicated to the development and dynamics of the market for collectible design is still minimal. Most of the studies conducted on the topic highlighted the psychological, symbolic and poetic values of the objects, their meanings and the cross-fertilization between design and other disciplines. What remains to be explored, however, is how faithfully the market for collectible design has followed the art market's structure and what cultural implications this particular system may generate for the future of design culture. Nowadays it is not possible to distinguish between the shift in public interest in design and the shift in the market for it.

The research moved from the previous statements and developed on the basis of one main hypothesis: limited-edition design production, distribution and consumption can be considered a cultural phenomenon able to enrich the contemporary culture of design. Among the research questions:
[…] of collecting design takes place?
3. What kind of curatorial and marketing strategies do market players adopt in order to address cultural and historical perspectives about design?
4. How is collectible Italian design perceived and absorbed by the market?
5. What could be a general framework to facilitate the sharing of collectible pieces between private collectors and institutions?

The investigation and the analysis have been conducted with the main purpose of delineating the design market infrastructure as a basis for understanding collecting praxes and the new relationships that market players establish with collectors and designers, affecting the whole design scenario. The research study started from the world of design culture and design history and entered the domains of collecting practices, art business and museum studies, trying to frame the contemporary way of collecting design through the investigation of the features of limited-edition design production and the new role of design market players. Besides the purely economic aspect of the phenomenon, more effort had to be made to formalize the roles and responsibilities of […] each other through direct and indirect relationships that allowed a comprehensive and integrated knowledge and understanding of the practice under analysis. On this ground the research study is no longer about design itself but about the dynamics and practices that the market for collectible design enables, and how these discourses could involve museums and institutions with the purpose of enhancing and nurturing the design heritage.

Adopting a multiple-case-study methodology and a direct-observation viewpoint, this research study intends to gain a greater understanding of market players' strategies and to define the extended boundaries of the discipline outside industrial processes. The main aim is thus to underline how the market for collectible design distinguishes itself from the art market, from which it inherited timing and mechanisms, and how the market players helped to shape the theoretical framework behind the expanding practice of collecting design. Together with the investigation of critical issues related to the importance of sharing collections, the research presents the concept of a shared database that allows the mutual loan of collectible design pieces between private collectors, dealers and museums. A specific scenario, the Triennale […] and contemporary design pieces transiting the market. Design Wunderkammern could be considered as a facilitator of processes, discourses and relationships between design collectors, practitioners active in the market and cultural institutions. The developed interconnected scenario could indeed increase and nurture the non-industrial design heritage and could also affect the perception and understanding of the multifaceted contemporary design culture.
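The shared database proposed in this abstract for the mutual loan of pieces can be sketched as a minimal data model. The entities, field names and rules below are hypothetical illustrations of mine, not the thesis's actual schema: a registry records each piece and its current borrower, and a piece can be lent only when it is registered and not already on loan.

```python
from dataclasses import dataclass, field

@dataclass
class Piece:
    piece_id: str
    title: str
    owner: str  # private collector, dealer or museum holding the piece

@dataclass
class LoanRegistry:
    pieces: dict = field(default_factory=dict)  # piece_id -> Piece
    loans: dict = field(default_factory=dict)   # piece_id -> borrower

    def register(self, piece):
        self.pieces[piece.piece_id] = piece

    def is_available(self, piece_id):
        # available = known to the registry and not currently lent out
        return piece_id in self.pieces and piece_id not in self.loans

    def lend(self, piece_id, borrower):
        if not self.is_available(piece_id):
            raise ValueError("piece unknown or already on loan")
        self.loans[piece_id] = borrower

    def give_back(self, piece_id):
        self.loans.pop(piece_id, None)
```

The point of the sketch is only the design choice: a single shared record of availability is what lets collectors, dealers and museums coordinate mutual loans without bilateral bookkeeping.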
DESIGN

The product design process is based on a sequence of phases in which the concept of the shape of a product is typically represented through a digital 3D model of the shape, and often also by means of a corresponding physical prototype. The digital model allows designers to perform the visual evaluation of the shape, while the physical model is used to better evaluate the aesthetic characteristics of the product, i.e. its dimensions and proportions, by touching and interacting with it. If the new shape, either in its digital or physical form, does not satisfy the designers, it has to be modified. A modification of the digital model requires a new physical prototyping of the shape for further evaluation. Conversely, a modification of the physical prototype requires the consequent update of the digital model, which can be performed by remodelling the shape or by using techniques such as reverse engineering. Design and evaluation activities are typically cyclical, repeated many times before reaching the optimal and desired shape. This reiteration leads to an increase of the development time and, consequently, of the overall product development cost. Indeed, it would be very efficient and effective if the two kinds of evaluation could be performed at the same time, instead of in two distinct moments and by different means. Today, computer-based tools do not allow us to perform the visual evaluation and the tactile evaluation at the same time. The aim of this research work is to develop a novel system for the simultaneous visual and tactile rendering of product shapes, thus allowing designers to both touch and see new product shapes already during the product conceptual development phase.
The proposed system for visual and tactile shape rendering consists of a Tactile Display able to represent the shape of a product in the real environment, where it can be explored naturally through free-hand interaction. It allows designers to explore the rendered shape surface through a continuous touch of curves lying on the product shape. Ideally, the designer selects curves on the shape surface, which can be considered as style features of the shape, and evaluates the aesthetic quality of these curves by manual exploration. In order to physically represent these selected curves, a flexible surface is modelled by means of servo-actuated modules controlling a physical deforming strip. The device is designed to be portable, low cost, modular and high performing in terms of the types of shapes that can be represented. The developed Tactile Display can be effectively used if integrated with an Augmented Reality system, which allows rendering the visual shape on top of the tactile haptic strip. This allows a simultaneous representation of the visual and tactile properties of a product.
The developed Tactile Display has been compared with similar devices currently available both on the market and in research labs. In addition, preliminary tests have been performed with a group of designers. Both the comparison and the testing sessions have achieved positive and satisfactory results, which have highlighted the high innovative potential of the system. There are several benefits of the Tactile Display when used in the initial conceptual phases of product design. Designers will be able to change the shape of a product according to the tactile evaluation, before the development of the physical prototype. This feature will allow decreasing the number of physical prototypes needed, reducing, consequently, both the cost and the overall time of the product development process. Moreover, designers may improve their creativity during product shape conception, since they will have the chance to optimise the design-evaluation process by evaluating visual and tactile properties at the same time. Future works have been presented, so as to indicate the future research to pursue in order to further improve the design process and the creativity of designers.

1. Proposed Product Design Process
2. Prototype of the Tactile Display
3. Tactile Display integrated with Augmented Reality visualisation system
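The text describes a strip of servo-actuated modules physically reproducing a style curve selected on the shape surface. A minimal sketch of the underlying geometry is given below: the curve is sampled, approximated by a chain of equal-parameter rigid segments, and each servo is assigned the relative bend angle between consecutive segments. The cubic Bézier representation and the function names are assumptions for illustration; the thesis does not specify the actual control algorithm.

```python
import math

def curve_point(t, ctrl):
    """Evaluate a cubic Bezier 'style curve' at parameter t in [0, 1]."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = ctrl
    mt = 1 - t
    x = mt**3 * x0 + 3 * mt**2 * t * x1 + 3 * mt * t**2 * x2 + t**3 * x3
    y = mt**3 * y0 + 3 * mt**2 * t * y1 + 3 * mt * t**2 * y2 + t**3 * y3
    return x, y

def module_angles(ctrl, n_modules):
    """Approximate the curve with a chain of rigid modules and return the
    relative bend angle (degrees) each servo between two modules must apply."""
    pts = [curve_point(i / n_modules, ctrl) for i in range(n_modules + 1)]
    # Heading of each straight module segment...
    headings = [math.atan2(b[1] - a[1], b[0] - a[0])
                for a, b in zip(pts, pts[1:])]
    # ...and the turn each servo must impose between consecutive modules.
    return [math.degrees(h2 - h1) for h1, h2 in zip(headings, headings[1:])]
```

For a straight control polygon all bend angles are zero, i.e. the strip stays flat; a curved polygon yields the sequence of servo deflections that makes the strip follow the designer's curve.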
Today we are living in what sociologists and mass-mediologists have described as the narrative era, in which the narrative turn interprets and transforms every object and every action into a matter of narration. Telling the world, and doing it through pictures, seems easier and more natural than ever before. Within the communication world, the various forms of audiovisual narrative have represented - and still represent - one of the most fertile soils in terms of research and experimentation. Over the last century, moving images have established themselves as one of the most effective forms of representation of contemporary complexity. The audiovisual has managed to go beyond the traditional barriers of written-oral languages - sometimes inaccessible - thanks to its nature as a language of images and its intimistic narrative features, offering itself as one of the main keys to understanding today's society. Nowadays, the strength of this communication phenomenon is fostered and boosted by the wide range of repro-visual instruments and devices and by the increasing popularity of digital distribution, sharing and archiving of audiovisual artifacts, especially as concerns bottom-up distribution and consumption. Audiovisual presence has thus become pervasive, occupying spaces and places until recently inaccessible, and narrative has spread among varied and unexpected sectors, both social and corporate, rightfully entering the processes of marketing and corporate training, politics and institutional communication, clinical and psychological therapy.
In the design world, too, the theme of narrative is increasingly common and is used in many fields. In the same way, audiovisual communication focused on artifacts, products and services has become a customary practice, especially as concerns the description of complex phenomena, the construction of scenarios, and the documentation of design and manufacturing practices. In this way the designer also holds the role of a real communicator, able to design and test languages and visual narratives in order to represent actual and future visions of a society in constant transformation. Unfortunately, however, the use of audiovisual storytelling in this context is often limited to an ancillary role, or the synergic application of narrative and audiovisual strategies is not entirely effective. In many cases the cause of these problems lies in a partial, simplistic and sometimes misleading interpretation of the narrative concept. At other times, the enthusiasm for a cultural fashion or the illusion of a brand-new discovery leads to forgetting the existence of a well-established linguistic tradition. The enthusiasm with which different forms of storytelling are accepted shows how the field is still fertile and open to new experimentation, but such forms are at the same time likely to turn into an ephemeral cultural revival. To declare indistinctly that "nowadays, everything is storytelling", or to call narrative any form of communication - even if it uses only a small part of the discursive techniques - means to obscure, if not to ignore altogether, the theoretical and linguistic principles of narratology that govern the story mechanism.
This PhD research aims to address the communication landscape just discussed through a theoretical reflection on the role that the audiovisual has, and can have, in design processes, and on the opportunities that visual narrative offers in terms of representation, knowledge sharing and documentation. In order to face these issues, the research seeks to converge into a unique frame of reference (what in the figure is graphically represented as the research circle) three distinctive disciplines: narratological theory, media studies and Movie Design (according to the definition of Marisa Galbiati). Starting from the studies of the cognitivist psychologist Jerome Seymour Bruner, and particularly from his concept of narrative thought, this thesis points out that storytelling is not merely identifiable with the ability to tell more or less interesting and credible stories, but is a real system of organization of thought and culture, a way of transmitting knowledge which can involve both individuals and whole communities. The following step involved the analysis of semiotic and narratological studies (Greimas, Todorov, Eco, Chatman, Campbell) in order to define the main elements and mechanisms that constitute any kind of story (fabula and discourse, characters, environments, time, conflicts) and to describe their characteristics as well as the relationships that bind them to each other. The subsequent investigation involved the nature of cinematographic language and the praxis of filmic text analysis (Balázs, Metz, Aumont, Marie, Elsaesser, Burch, Morin, Žižek): the theoretical milieu which contributed to the definition of the grammatical statutes and the syntax of audiovisual narrative. A moment to assert, once again, the close integration between the world of narrative and the world of moving pictures, a bond born in the past - as we learn from the history of cinema - that helps to spread audiovisual storytelling as a means of interpretation, dialogue and cultural access.
The last phase is dedicated to the analysis of a particular cinematographic genre: the industrial film. In the multifaceted world of cinema products, this kind of film has represented the real ground of exchange among design processes, industrial manufacturing practices and filmic communication - an area where it was possible to experiment with visual narrative solutions for making industrial tasks clearer and more understandable. Discovering and analyzing the filmic material that is emerging today from company archives also means shedding light on the practices with which the different design areas have tried to represent themselves. Located at the crossroads of narrative, cinema and the tradition of utility films, audiovisual storytelling could be turning in recent times into a tool for sharing projects and for facilitating a more aware design. In this way the audiovisual narrative becomes the true mediator capable of activating what John Grierson, one of the fathers of documentary film, defined as "the creative treatment of reality".

1. Scheme of the disciplines involved in this research
Design of the un-finished

The web is becoming the place where we are shaping the image of our society: social interactions, news, official and unofficial documents are increasingly archived online. Any public issue and concern hits the web, leaving traces of ongoing debates. The rising questions are: how can we use these traces to investigate social issues, such as the adaptation to climate change? Is it possible to map these traces to gather a big picture of the phenomenon? While the web is a place where multiple actors are engaging in discussions about these issues, several biases affect this medium. Not all the world's population has equal access to it, not all debates are public, and the loudness of the involved actors can be amplified or weakened by the web as a medium. This makes the web an unsuitable source for understanding a social issue, as our perception can be influenced by those biases. At the same time, the web is the perfect place for looking at the public discussion about issues. On the web it is possible to see who the most active actors involved in a debate are, which are the factions and the fractures among them, and how these relationships change over time. In the fields of social sciences and new media studies it is possible to see a rising interest in the web as a space of discussion that can be mapped. Adopting a cartographic metaphor, the social scientist is seen as a cartographer who explores and describes the debate landscape. The outputs of these studies are therefore maps and atlases, which can be shared with the involved actors to understand their position in the debate. Such approaches provide a sound framework to repurpose digital traces for social research. There is a demand for new visual languages able to express the complexity of these studies: communication design expertise, in particular from the information visualization field, is needed. Diagrams are powerful tools capable of expressing different layers of the debate, allowing a formalization of results while providing at the same time a seamless exploration of them, from the macro to the micro view of the debate. While these approaches have already been discussed in other fields, few reflections have been made on the role of design within the creation of such artifacts.
A European project that joined researchers from the social sciences, new media studies and design fields made it possible to analyze the role of diagrammatic tools in issue and controversy mapping. Two issues have been identified as case studies: […] in Europe and the adaptation to climate change. With a first round of experimentation, it has been possible to see that design expertise is not only related to public communication, but influences the analysis process too. Visual artifacts are indeed used by researchers in the analysis process, to validate results and identify errors and pitfalls. This research is therefore framed to explore the influence of visual artifacts within the analysis of social issues from the web. The aim is to identify needs and criticalities whose solution can be supported by design expertise. By developing visual and interactive artifacts within the project, it has been possible to analyze how researchers and end-users engage in the use of visual artifacts. The project gave us the ability to follow in vivo the whole process of a social cartography, observing the evolution of analysis methods and the criticalities related to data collection and its visual translation. Furthermore, the project allowed testing different design approaches and visual languages, producing several diagrams and a web platform for controversy exploration. Beyond the achieved results, we also had the ability to identify design approaches able to improve the analysis.
The first part of this research […] out in the last years; it emerges that most of them are based on the study of digital objects, seen as the ontological object provided by each platform (such as websites, social networks, search engines) on the web. As designers, we need to know well the features of the materials we use to produce artifacts. The process of encoding digital objects into data is therefore investigated. First, an analysis of the concept of digital object is carried out, highlighting the authorial choices that must be taken for its translation into data. Then, a classification of digital sources is suggested, describing the characteristics of digitized objects, born-digital objects and re-born-digital objects. From the analysis of already existing tools, five approaches for repurposing and translating digital objects are identified, depending on the access provided by different platforms. For each one, the repercussions on the design process are analyzed. Furthermore, the research draws connections with communication design and information visualization. Finally, an analysis of the project experience is proposed, drawing on the failures and successes achieved along the process. The limits of the cartographic metaphor are explained and a new approach to analysis is therefore proposed. The metaphor has strong influences […] identify errors, and identify new analysis directions. The second movement is the way back toward the end-user, in which artifacts are repurposed, redesigned and enriched in order to make explicit all the assumptions taken by the researcher in the exploratory phase.
From this analysis, two main concepts are argued. The first one is that visual artifacts must be considered as semi-finished products, or materials, supporting the analysis. Identifying researchers as the main users, the artifacts' core features are quickness of execution, simplicity in re-executing them, and openness to new analysis actions starting from them. Semi-finished products become outdated at the moment of their reading; their use is purely functional to identifying new research directions. The second concept is closely related to the previous one: the openness of an artifact is not a by-product of visualizations but must be designed; therefore design actions must be mainly focused on developing un-finished artifacts, which are open to being repurposed and edited. Finally, these artifacts can be used at the end of the study to rebuild the evolution of the analysis and to identify key concepts and findings that should be communicated to the end user.
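The three-way classification of digital sources mentioned above (digitized, born-digital, re-born-digital objects) could be operationalized roughly as follows. The two boolean attributes and the decision rule are simplifying assumptions for illustration only; the thesis does not prescribe a concrete encoding.

```python
from dataclasses import dataclass

@dataclass
class DigitalObject:
    """A trace collected from a web platform (attribute names are illustrative)."""
    url: str
    had_analogue_original: bool  # e.g. a scanned paper document
    recirculated_online: bool    # e.g. a scan re-shared and re-collected online

def classify(obj: DigitalObject) -> str:
    """Rough operationalization of the three source classes described in the
    text: digitized, born-digital and re-born-digital objects."""
    if obj.had_analogue_original:
        # Material with an analogue original that re-entered online circulation
        # is treated here as re-born digital; otherwise it is simply digitized.
        return "re-born digital" if obj.recirculated_online else "digitized"
    return "born digital"
```

Such a tagging step would sit at the start of the encoding pipeline, before any of the five repurposing approaches is chosen for a given platform.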
Although diversity has always been a fundamental characteristic of human societies, now more than ever it is central to the political and research agenda. Contextually, the socially active role of museums and heritage has become intertwined with cultural diversity and intercultural dialogue, and design research and practice have become increasingly interested in addressing social and societal issues. In the light of this, my study developed from a generic interest in how museums and design could have a role in addressing local cultural diversity issues, such as intercultural relationships and tensions, integration processes and so forth. With the first phase of the research - through literature review, contacts with experts, exploration of case studies and participatory action research - I moved from this generic interest to a more structured conceptual framework and related aim: providing local museums with guidance on how to activate bottom-up the elements of the Intercultural City approach, even in the absence of a local policy framework that embraces this strategy.
The concept of Intercultural City refers to an approach to the management of diversity in urban contexts that conceives diversity as a source of dynamism, innovation, creativity and growth, and stresses the importance of interpersonal and intercultural encounters. In Europe, the development of this approach went hand in hand with the Intercultural Cities Programme, a joint action of the Council of Europe and the European Commission. My study is aimed at supporting especially local museums, which, because of their being implicated in the territory, are regarded as privileged institutions for being locally relevant. By virtue of its aspirational dimension and its acknowledgement by European institutions and policy networks, I have identified the Intercultural City approach as a reference for addressing the work of local museums, which are regarded as tools potentially able to influence local dynamics.
In line with those design approaches that acknowledge people's and organisations' creativity and empower them in finding their own specific solutions, the outcome of the study is a metadesign framework guiding museums in designing their interventions. This is based on the idea of the ten elements of an intercultural strategy suggested by the Council of Europe. These suggestions are mainly addressed to local governments and policy-makers. Therefore, in elaborating the metadesign framework, I have selected and reinterpreted them in light of what local museums can concretely do. This translation was influenced and addressed by the museum practices and projects observed during the first phase of the research, all of them originally conceived and designed without any explicit reference to the Intercultural City objectives. Furthermore, the premise for the kinds of suggestions included in the framework lies in the idea of design practice as aimed at designing for and through museums, seen as transformative services for local communities. This vision was informed by the transformation design practices observed and developed during the research process, such as Small Works Hackney by Clear Village. Lastly, based on the case of MUST-Museo del Territorio Vimercatese and on my active involvement in the design process of the exhibition Parole per accogliere. Parole da cogliere, I have reflected on how the Intercultural City metadesign framework can be used in the specific context of MUST and its territory.
With my study, from the point of view of museums, I have formalised a reference that would guide the work of local museums in relation to intercultural integration policies and practices. This can be used both to support museums and their partners in designing their interventions and to evaluate the compliance of their existing activities with intercultural integration policies. The formalisation of such a framework is in line with the guiding approach embraced by museum think tanks and associations with reference to cultural diversity, social inclusion and impact. From the perspective of design, the study has offered new viewpoints and experiences on the relationship between transformation design and design for cultural heritage when designing in and for multicultural places. In particular, I have framed in the context of design for cultural heritage the idea of museums […]

1. Marlborough Square Project exhibition within Small Works Hackney. Clear Village.
2. Parole per accogliere. Parole da cogliere at MUST-Museo del Territorio Vimercatese. COI.
In the last decade, trends in the development and management of national healthcare services have focused on telemedicine with the aim of cost reduction. Telemedicine can be useful to reduce costs thanks to early de-hospitalization and early diagnosis. These development drivers can be achieved only through the use of new technology oriented to the production of low-cost products, which can be integrated with existing systems in order to improve monitoring quality and capability. The application of these technologies to sensitive areas has to face the main issues of user acceptance and usability. Usability describes the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specific context of use (ISO 26800 Ergonomics - General approach, principles and concepts; ISO 9241-11 for hardware usability; ISO/IEC 9126 for software usability). Acceptance is instead defined as the extent to which information technology is accepted or rejected by people. In particular: perceived usefulness is the degree to which a person believes that using a particular system would enhance his or her job performance; perceived ease of use is the degree to which a person believes that using a particular system would be free of effort.
Starting from these two concepts, this doctoral thesis studies a new method for the development of a new biomedical device for telemedicine. The research has been divided into five parts. The first part concerns users: the research focuses on studying the market and epidemiology in order to identify who the users of the product will be, what their main features are and how many different kinds of users the product has to meet. Once the users have been selected, the research focuses on these subjects in order to extract the users' NWDs (Needs, Wants, Desires). These operations have been carried out using qualitative analysis techniques (interviews, focus groups, workshops). In this research, the work with the users, and particularly the qualitative analysis, is accomplished both in the early and in the final stages of development. This method allows optimizing the design of the product while minimizing the cost of development. Users are also involved in the final tests to verify two main aspects: usability (how easy the system is to use) and efficiency (how well the device is able to give correct and reliable outputs to the users). The second part explains the techniques (interviews, focus groups, workshops) used for acquiring data from users at different stages of development. The users' NWDs are then exploited as the main inputs to build the new biomedical device.
The third part consists in the development of the hardware; this work uses a concrete system development example to demonstrate the reliability of the proposed and applied method. The method is specifically applied to the development of a wearable system, device and smart garment, for weak people. The purpose of the system is to continuously monitor the cardiac state of the subjects (weak users), defined in this case as babies and the elderly. The process shows different stages of development, with subsequent versions of the hardware resulting from the integration of data and ideas arising from the users' involvement and from the data acquired through the qualitative analysis. The fourth part is similar to the third but concerns the software of the system. In this part the research focuses on the development of software (particularly the GUI - Graphical User Interface) starting from the users' NWDs highlighted in the previous step. In this fourth part, different procedures for mock-ups and Wizard of Oz prototyping have been used in order to retrieve information from the users which could otherwise be gathered only in the final test phase. This part of the thesis shows all the techniques used for developing software mock-ups and then the real software/apps necessary for signal acquisition, processing and visualization.
The last chapters of the thesis report data from the final test of the system with real users, as well as the problems, future developments and conclusions.

1. Outline methodology for healthcare product/system development.
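The two acceptance constructs quoted above (perceived usefulness and perceived ease of use, from Davis's Technology Acceptance Model) are typically measured with Likert-scale questionnaire items. A minimal scoring sketch is given below; the 1-7 scale, the function names and the acceptance threshold are assumptions, not the thesis's actual instrument.

```python
def construct_score(responses):
    """Mean of the 1-7 Likert ratings collected for one TAM construct."""
    return sum(responses) / len(responses)

def acceptance_summary(usefulness, ease_of_use, threshold=5.0):
    """Summarize perceived usefulness and perceived ease of use and flag
    whether both clear a chosen acceptance threshold (the threshold value
    is an illustrative assumption)."""
    pu = construct_score(usefulness)
    peou = construct_score(ease_of_use)
    return {"usefulness": pu,
            "ease_of_use": peou,
            "accepted": pu >= threshold and peou >= threshold}
```

Run after each test session with weak users, such a summary would show whether a hardware or GUI revision improved acceptance between subsequent versions of the device.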
The mechanisms of Nature have always been an influence on the production of objects and on design processes at different scales, both indoor and outdoor. This influence reveals itself in an often unbalanced mix of imitation and invention, of simple resemblance and understanding of deeper mechanisms. Today industrial design seems to be progressively acquiring a hybrid character that aims at the creation of objects inspired by natural processes, through the integration of technology and biology. As a result, an ever-growing environmental awareness gave birth to a design for sustainability or, in its most simplistic and popular reduction, to so-called green design. The use of the word "green", adopted in lieu of the term "sustainable", coincides with the attempt to reinstate the image of design as a natural practice, symbolic and positive, by referring to Nature's most evident and superficial characteristic: its colour. This exchange gives way to contradictions and linguistic ambiguities: on the one hand green design promotes objects that are environmentally aware; on the other hand it allows for the creation of designs that are far removed from any ecological value and that are hidden under […] that can be either chromatic or formal. It is the contradictory and double-natured dimension of objects of natural appearance, products that create a fake Nature within the domestic walls. This false environment - populated by trees/clothes hangers, grass fields/carpets and cactus/toilet brushes - is amplified by the technical possibility of reproducing every environmental condition in a closed environment. Air-conditioned hangars guarantee - 24 hours a day, 7 days a week, 365 days a year - snow-covered tracks in Dubai and tropical beaches in Berlin. And what happens outdoors? Mobile phone towers disguised as pine trees are hiding in the woods, accompanied by the first specimens of photovoltaic trees. Do we still have a genuine experience of nature? Where is the boundary between reproduction and imposture? Are we setting up a fake domestic nature in order to replace the real one? The research project aims at ordering and interpreting design practices that are inspired by the forms and mechanisms of nature, to determine the characters of the fake Nature that is made up of objects, materials and environments simply interpreted in a […]
The different contents of the research are articulated in a continuous stream of text, images and graphic apparatus, using diversified linguistic approaches. In the first part, the juxtaposition of photographs, drawings and diagrams, collected and selected over three years, represents a selection of design experiences without the use of words. A series of over 100 images creates a sequence of case-study couples in order to try and determine ambiguities and contradictions in the design practices inspired by Nature from the end of the 1960s till today. The images, edited and hosted within the same graphic frame, are alternated, revealing visual assonances and dissonances. In the descriptive texts, the reiteration of the same analytical structure - description of case a, description of case b, confrontation between cases a and b - defines a systematic and homogeneous way of reading. The sequence of texts and images follows a reverse scale order, going progressively from landscape to object.
In the second part, the drawing becomes the decisive instrument of enquiry. The research proposes a taxonomy system that is able to order and interpret the languages, functionalities and contradictions […] and mineral appearances. By changing the graphics, words and classification practices inspired by the natural sciences, the different specimens are interpreted from a transversal point of view, unbound from functionality and aiming at pinpointing the recurrent characters of reproduction and falsification of the natural components. The graphical rendition in plan and elevation of the 200 selected examples allows for their systematic study and defines the necessary homogeneous database which is the base for the analytic phase. The objects are re-read and interpreted starting from their appearance, from the correspondence of their form to a kingdom and species, coherently with the intentions declared by the designers and brands. By forgoing a classification based solely on function, it is possible to notice differences and analogies between specimens that mimic the same natural element, in an enquiry that overturns the ironic and analytic attitude of Bruno Munari who, in Good Design (1963), transfigures fruit, vegetables and flowers into objects. The composition of the taxonomical boards highlights characteristics and details which bring the specimens back to that concreteness that - as functional objects - they possess. Photographs, drawings and analytic texts take turns in describing the object of the enquiry: a fake Nature that proliferates both indoors and outdoors, observed through a transversal and multilayered gaze. In the described design experiences, the use of forms, colours and words taken from the natural world - ideal and abstract - is intertwined with expectations, often disappointed, of environmental respect and safeguard, uncovering a desire for legitimation which is predominantly aesthetic.

1. Fake Herbarium, leaves.
2. Fake Bestiarium, birds.
3. Fake Lapidarium, stones.
This research results from the observation of a constantly changing design landscape - repetitions, alterations and disruptions, a flow of information that are as many representations of innovation. Paced by a continuous update of its productions, design activity is driven by a goal: the improvement of the quality of life. However, innovation does not always come out as novelty, nor as improvement. At the very least, each design action reveals itself differently depending on the perspective we bring to bear on it. As a result, it is difficult to ascertain the designer's political and/or social commitment in any new production. Following this assessment, the following questions are raised: What are the driving forces that determine the direction and outcome of a design production? How much control over change is in the […]
[…] of market constraints, disruptive design and the design of meta-tools - the components of a process (Celaschi, 2014; Celi, 2014). Accordingly, the research positions itself at the Fuzzy Front End of Innovation, recognized as a problematic space for design activity due to the strong implications of insight and tacit knowledge. The argument that a conscious and/or unconscious implication of trends at the early stages of the design process can direct the project towards a meaningful innovation has weaknesses. These can be addressed by a variant approach that seeks to make better sense of the contemporaneity of change. The flux, as the backbone of the research, connects all the parts together, as well as connecting objects to meaning, to people, to nature, to space and time. And those […]
[…] and the most direct demonstration that comes to mind is the reference to the Italian derivative of the word design, progettazione, meaning to project, to set forth, directly connecting design and future. To Design is to Project. From this vantage point the study moved towards the exploration of anticipation as an approach embedded in other disciplines. This led initially to a comparative analysis of the many fields that use anticipation, and then to a focus on the three fields that are most implicated in the gathering of qualitative information, mainly socially situated trends: Fashion, Foresight and Design. Taking into account the importance of trends within the socially engaged practices of Fashion, Foresight and Design raises the question: should we reconcile with trends? Looking into trends is a difficult […]
[…] Unlike other design productions that can easily disappear, food is therefore a stable basis that contrasts with fleeting trends and allows the perception of change. The findings underpin the untamable quality of trends: interconnected, open structures that irradiate, multiply and release contagious energies that annul all distinction between past, present and future. The trend, in its living state, is of interest to the design project in search of the pieces of information that will make the difference in social and cultural contexts and in the future. Keeping the frame of the research fixed on the early stages of the design process, and having made sense of the perceivable representations of change, the inquiry moves to the analysis of the terrain where change is most effervescent: the present. The hypothesis […]
[…] with the question of designing food functions, but more so with that of designing behaviors. The conversations are necessary for the design process, but the way in which people participate in creating a design vision is not systematic and can happen at different levels. This is confirmed by the many possibilities of interpreting the present. The now is multifaceted, mixing past, future and experienced moments; its richness is recognized by Future Studies, which do not predict the future but look at the way in which the present can inform the creation of alternative paths. Finally, the findings suggest the need to implement stand-by tricks for developing awareness of the many realities that make up the thickness of the present. Reframing, as a practice that allows the offset of preconceived ideas of what […]
[…] room to recognize change that will nurture innovation - a designer meditator, trickster, magnifier, and advanced designer.
hands of the designer? How are the many connections to task because of the many layers that emerged from the study the future should look like,
does design take charge of the integrate at the beginning of meaning that have been of trends is that people may is illustrated with examples
future? of the design process when given to them and the mystical be the best informants of taken from various disciplines
The methodology adopted to gleaning information, framing powers that come into play. meaningful change, placing of anticipation. The underlying
answer these questions is mainly the field of action, envisioning Nevertheless, the study finds its all their expectations in the idea is that reframing offers the
qualitative, beginning from an the new, and reiterating the way through the confusion of moment not yet formed - the capacity to adapt to change
opportunity space rather than a process for the fine-tuning of appearances within the three future. This analysis begins by and to grasp the emerging
problem space and refining the the imagined possibilities. All of disputed fields, by kneading questioning the motivations challenges of the present this,
domain of inquiry along with the this contributes to building the the trends representation and of participatory co-design as a consequence of open
acquisition of knowledge. future. A future that, according via experiments that reveal the practices and rapidly moves into meaningful conversations
The study takes place, within to the design literature, limits of its mechanism and a debate on creativity. This issue, driven by sincere creativity. The
the context of an Advanced encompasses three expectations: content. The experiments rely motivated by a will to change Advanced intervention of design
Design framework, driven by critical, sustainable and creative. on the field of food, given that becomes stifled because of is recognized by the potential
four characteristics: design Anticipation and Design are it seems safe to expect that collective consensus and biases. of the design practitioner to
anticipation; design in absence intimately connected. The first food is a constant human need. Design is not concerned only continuously step aside, giving
Sensemaking in Furniture Design

The new home environment comes from the disintegration of traditional symbolic structures. The emancipation from bourgeois moral tensions allows us a new structuration of the environment. The dweller personally places pieces of furniture in his own home environment and can wonder about the links furniture maintains with its formal quality, the production that generated it, the actions it invites to perform or the interactions it makes possible. The new setting is no more symbolic: the pieces of furniture do not comply with a predetermined order but represent the worldview of the person who combined them together.

Home furniture is generally considered low-tech and requires research mainly in the non-technological area of meaning. Furthermore, semantic experience generally has a perceptive origin and is linked more to the manifest meanings deriving from the cultural context than to the latent meanings experienced in the use and appropriation of the object. We can consider a piece of furniture as a sign, which we sense, and which signifies the connection between us and the culture we belong to. Therefore, semiotics is central in this work, because it is the discipline studying the sign and, in particular, its principal concept: the relation between a sign and its other, its signified concept. According to a first general classification, the semiotics of the last century has lived a contrast between two capital theoretical lines. One is the linguistic theory, which lets us replace a sign with a meaning determined by convention. The other one is born from the contribution of Charles Sanders Peirce and is based on a logical-cognitive system allowing the interpretation of the sign as an inference.

This work refers to Peirce's semiotics, involving interpretation. If culture were a simple system of conventional symbols, experience would be only recognition. But if culture is seen as a continuous interactive process which we cannot simplify, something new may join experience through the interpretation of intrinsic features of the object. In other words, the experience of project and interpretation, on which Peirce's semiotics focuses, are strongly linked.

The core of the strategic design of furniture is an interpretative activity mostly involving the existing artifacts of the reference sector. This activity is essentially an abductive process informed by inference, and we can call it sensemaking. A methodology would allow interpreters to visualize and organize the process of sensemaking. The advantages would be various, in addition to determining effective innovation paths: sharing the interpretative process with other members of the design team and with other corporate areas; transferring the interpretative criterion within the company, limiting the importance of individuals; facilitating the relationship with clients or managers, who would more easily understand the influence of research work on the result. Nevertheless, most designers do not follow a systematic methodology: they instinctively interpret objects by counting on their experience, without a real externalization. Semiotics and methodology are convergent issues: we can understand design semiotics as a methodology, a study of methods specifying where and how to order and interpret the objects of a system where we have to act. We use this methodology to define what is possible to do, which means both understanding links of meaning (sensemaking) and being able to manage them (sensegiving).

After a theoretical discussion, the process of sensemaking systematized in this thesis considers each piece of furniture as a design entity evoking one concept beyond that of the physical thing directly represented. In a relationship of effectuality, a piece of furniture is meant as the trace of a certain past activity: a technique of production, the use of a material, a local decoration, etc. The main goal of the designer lies in the selection of this social practice. In a condition of causality, a piece of furniture facilitates a determined future action. Here the goal of the designer is the exhortation of a wished action toward a passive subject. The relationship of convention, finally, is based on a piece of furniture sharing with the user a code necessary to understand its performing properties. The central issue of this representation is the participation of the person in the performance.

Domestic furniture, more than other objects, represents the links of man with respect to himself or other people. It links or divides individuals, and takes on formal properties of differentiation or integration. Furniture shows unique qualities to represent the user's internal order, whereas it shows collective practices to emphasize social identity. If we focus on the four identified semantic areas, we can see that two of them, similarity/expression and causality/exhortation, tend to promote the presence of community through effectuality or convention, and the personal connection between designer and consumer through similarity or causality.

Finally, the thesis formulates a guiding principle for the generation of a concept map. This represents graphically the mental model of the creators and guides the understanding of the gathered material both by the creation (implying a value judgment) and by the visualization (illustrating connections). The design team progressively places and organizes, according to semantic areas, pictures taken from the reference context, and may integrate the map with notes based on personal experiences. The map may be a big physical surface or a digital file, with a dynamic nature and the option to be shared. The actions of mapping come in a circular way:
1. Collecting and organizing images in accordance with the four mentioned areas;
2. Grouping images according to homogeneous evoked concepts;
3. Connecting groups of images placed in all areas in accordance with common cultural macro-themes.
The creation and the use of a concept map provide semantically coherent groups of furniture, able to evoke cultural insight by combination: individual insights of the design issue are linked one-to-one to single design patterns, which later are creatively combined to look for new semantic keys.

The main value of this research lies in considering the whole conscious experience, evaluating it not only for what is based on convention but also for non-symbolic meanings, experienced and interpreted directly. In furniture design, as in the arts, the subject may suppose the sender's purpose through an interpretative approach, similar to the one applied to natural phenomena. This thesis sets up a meaning theory more congruent with design practice, dealing with a not always codified relationship between designer and consumer.
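The three circular mapping actions described in the abstract (collect, group, connect) can be sketched as a small data pipeline. This is an illustrative sketch only: the area names, image labels, evoked concepts and macro-themes below are hypothetical placeholders, not material from the thesis.

```python
from collections import defaultdict

# Hypothetical semantic areas named after the four relationships
# discussed in the abstract.
AREAS = ["similarity", "causality", "effectuality", "convention"]

# 1. Collecting: each reference image is filed under one semantic area.
images = [
    ("bentwood_chair", "effectuality"),   # trace of a production technique
    ("tatami_room", "convention"),        # shared cultural code
    ("rocking_chair", "causality"),       # invites a future action
    ("carved_cabinet", "similarity"),     # expressive resemblance
]

# 2. Grouping: images in the same area that evoke the same concept are
# clustered (here the evoked concept is assigned by hand, as a designer would).
evoked = {
    "bentwood_chair": "craft memory",
    "carved_cabinet": "craft memory",
    "tatami_room": "ritual order",
    "rocking_chair": "domestic rest",
}

groups = defaultdict(list)
for name, area in images:
    groups[(area, evoked[name])].append(name)

# 3. Connecting: groups from different areas that share a cultural
# macro-theme are linked, yielding candidate semantic keys.
macro_theme = {
    "craft memory": "material culture",
    "ritual order": "material culture",
    "domestic rest": "everyday wellbeing",
}

themes = defaultdict(list)
for (area, concept), members in groups.items():
    themes[macro_theme[concept]].append((area, concept, members))

for theme, linked in themes.items():
    print(theme, "->", linked)
```

The circularity of the method corresponds to re-running the loop as new images and notes are added to the shared map.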
Analog/Digital archives

The research starts with a reconsideration of the humanistic model of information transmission, here represented by the archives, highlighting the new possibilities offered by digital technologies. Although specifically focusing on architecture and design archives, the thesis discusses the changing patterns of information communication in contemporary society, which embodies strong technological features, trying to make connections between the humanistic and scientific fields. The conviction gained over years of theoretical research and action in some of the most important archive institutions in this field (such as the Canadian Centre for Architecture in Montreal, the Royal Institute of British Architects in London, the Netherlands Institute of Architecture in Rotterdam, the Museo delle Arti del XXI secolo and the Museo di Arte Moderna e Contemporanea di Trento e Rovereto, among others) is that digital technologies and participative models amplify the opportunities for the diffusion of information from archive materials.

Architecture and design digital archives, with databases and descriptive metadata, reproductions, 3D models, audio and video, displayed on portals and websites and integrated with new-generation content, become media platforms and new communication models of diffusion. The Internet is the logical extension of free access to archive institutions: intuitive interfaces, efficient search engines and attractive electronic devices foster attention and engagement for new forms of informal learning and potential knowledge experiences, as well as new curatorial and scientific horizons, without physical or temporal boundaries.

Digital archives of architecture and design, although a niche discipline, contribute to the spiraling growth of digital heritage by circulating representations of a rich and extravagant repertoire: drawings, sketches, notes, models, photographs, audio, video, etc., often activated via typologies, thematic paths, geo-localizations, etc. They preserve the testimonies of architecture, objects and creative thoughts on design of the modern heritage that built today's contemporary landscapes, forging contemporary aesthetics. Great masters, such as Le Corbusier, Gropius, Mies van der Rohe and Aalto in Europe, Wright and Buckminster Fuller in the USA, and, in Italy, Terragni, Figini, Pollini, Libera and Lingeri, along with Ponti, Michelucci, Pagano and Moretti, and then the group BBPR, Magistretti, Castiglioni, Albini and Gardella, among others, have become milestones in the history of modern architecture and references for entire generations, helping to spread a modern idea of living by the principles of geometric rigor, functionality, new technologies and innovative materials.

Architectural archive institutions_map

As a witness of an evolution in progress, the research examines how digitization and the online dissemination of archival content entail a necessary redefinition of the archives' role and activities, stimulating a radical rethinking of the entire institutional organization and a representation open to a wider audience. The interactive, multimedia and participatory digital models present new opportunities for enhancement and fruition, both in the organization of the exhibition (exhibitions in situ) and in dissemination on the network (portals and websites): exhibitions and cultural programs in situ open up to transversal variations of design topics (planning, architecture, design, territory, etc.) and to issues of contemporaneity (environment, climate, urban development, health, migration, etc.); furthermore, they include the contribution and participation of visitors and cyclically open their collections, putting all the materials from the archives on display. Websites and portals, as authentic publishing projects, are considered the new communication tools for online dissemination. They also provide fertile testing ground for communication design, as platforms of multimedia experience. As platforms of knowledge, they offer answers to the meaning of the role of archives today, through the promotion of their contents, new curatorial processes, new professionals and public engagement.

The research attempts to demonstrate how the informative and interactive principle of technologies, coupled with favorable social dynamics, presents an opportunity for the archives to be transformed into many piazze del sapere, where everyone can find, identify, and nurture their own interest, and where memories can be relived through free and creative forms of interpretation and elaboration: those infinite narratives able to weave the threads of the past with those of the future.

Architectural archive institutions_timeline
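The descriptive metadata that turn an archive into a searchable media platform can be illustrated with a minimal record. The schema, field names and values below are hypothetical placeholders (loosely modelled on Dublin Core elements), not drawn from any of the institutions named in the abstract.

```python
# A minimal descriptive-metadata record for a digitized architectural
# drawing, loosely modelled on Dublin Core. All values are invented.
record = {
    "identifier": "archive:drawing:0001",
    "title": "Facade study (placeholder title)",
    "creator": "Unknown architect",
    "date": "1936",
    "type": "drawing",
    "format": "image/tiff",
    "subject": ["modern architecture", "facade", "study sketch"],
    "relation": ["archive:project:0001"],  # links the drawing to a project
    "rights": "reproduction on request",
}

def matches(rec, term):
    """Naive substring search over string and list-of-string fields,
    the kind of lookup a portal search engine generalizes."""
    term = term.lower()
    for value in rec.values():
        items = value if isinstance(value, list) else [value]
        if any(term in str(item).lower() for item in items):
            return True
    return False

print(matches(record, "facade"))   # True
```

The `relation` field is what allows thematic paths and timelines to be generated automatically, by following links between records instead of browsing physical folders.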
Design for public-interest services

The research starts from the observation of those people who are most active in our society: groups of citizens who self-organize to solve their own problems, starting to transform what is already there without waiting for a bigger, top-down change. Creative communities, active citizenship, social movements, whatever you want to call them, these forms of activism are shaping our cities, and they are developing an alternative system of services between amateur and professional, public and private, market and society, profit and not-for-profit. This starting point is a positive phenomenon, even moving from a context characterized by wide social and economic transformations that have resulted in a long crisis. In fact, the first part of this doctoral dissertation is devoted to framing the renewed activism on the part of citizens, by connecting it with the wider concept of social innovation and drawing a system of relationships with new forms of economy, such as collaborative consumption or the sharing economy, and with new forms of welfare, known as relational welfare, second welfare, etc.

All these movements basically explore new ways of offering services, and this is the author's specific interest as a researcher: services in various fields of daily life represent the main area of study in this doctoral dissertation. Hence, the author attempted to develop these forms of service by creating the definition of public-interest services, which focuses on their hybrid nature: the provider of such services is a system composed of different actors sharing the same values and acting in the public interest; they are services that emerge from the bottom up, and they often show a high level of disorganization and transience; sometimes they are just initiatives that are not able to evolve.

The author questioned what design can do for such activities, not only design for services, but also participatory design and all forms of co-creation that range from co-design to co-production, precisely because the protagonists of these initiatives are users, citizens who already practise collaboration and sharing. The research question is therefore a series of consequential questions: how not to waste citizen activism? How to strengthen the various bottom-up initiatives? How to transform these activities into public-interest services that are effective, efficient, and sustainable from both an environmental and a social point of view? What infrastructure could support this transformation? How could design contribute in creating this new infrastructure?

The term infrastructure and the related verb infrastructuring are crucial for this doctoral dissertation, because they are the object and the action, indeed the product and the process, of the research. The hypothesis is that the creation of a dedicated infrastructure to co-design with citizens, building upon their existing initiatives, may avoid their weakening and ultimate failure, facilitating the emergence of a new generation of public-interest services and the creation of a catalyst for local change, hopefully fostering the encounter between the top-down (institutions) and the bottom-up (active citizens).

To verify this hypothesis the author adopted a methodology that combines two major strategies: case studies and participatory action research. For the former, she analysed existing practices, and for the latter, she carried out research testing one possible infrastructure to support specific initiatives in a selected context. In exploring case studies, the author analysed existing forms of activism and collaboration, from civic participation networks to social movements, public art and collaborative services in various fields of daily life. For the action research, the author immersed herself in a specific context: Zone 4 in the city of Milan. She started with this neighbourhood because there was already a high level of citizen activism that she had encountered when working with Polimi DESIS Lab on the project Feeding Milan - Energy for changes, which is dedicated to exploring new types of food systems. Food therefore became the key subject around which she established a fundamental connection with this community of citizens. After one year's immersion within the local context, the author developed the project Creative Citizens, a programme of weekly co-design sessions on four main topics: food services, services for sharing goods and skills, cultural services, and legal and bureaucratic services. Creative Citizens is at the centre of this doctoral research because it researched an actual infrastructuring process in the existing initiatives, by using design for services and participatory design and thus creating a meeting space for citizens, designers, local stakeholders and institutions, if only for six months and on a small scale. Furthermore, the author was able to extend her participatory action research to a second context: the Lower East Side in New York City.

The results of this doctoral dissertation can be divided into two main parts: one focuses on the services generated during Creative Citizens and the related service model, the other discusses a collaborative infrastructure to co-design and co-produce public-interest services. The first part considers the services originated within Creative Citizens as actual results. This is perhaps an element of originality: the general purpose of a PhD is to produce new knowledge about a topic, and, in this case, a methodology based on action research also led to the creation of effective field results: six services (Objects Library, Augmented Time Bank, Citizens Help Desk, Facecook, Local Distribution System, Zone 4 Ciceros) that are currently evolving in different directions and have produced an impact on the neighbourhood. On the basis of these results the author attempted to extract a service model, building upon that of collaborative services and outlining a set of characteristics. The second part of the results has a wider perspective: it offers a model for infrastructuring informal activities into public-interest services, describing a process that is potentially replicable in other contexts, starting from meeting a local community, and a model for these infrastructures, building upon the notions of corporate social responsibility and shared value.

Finally, the author reflected on the role of the designer within these processes: from the traditional role as facilitator with tools, to the more recent one as activist with proposals; she introduced a possible role of the designer as advocate, sustaining citizens' initiatives. In this doctoral research the author essentially reflected on how to design new forms of citizen participation within society, aiming at supporting the public interest, which is also connected to the design of new forms of democracy and the re-distribution of power. Design is entering unprecedented stages: this is an open question for further research, in which the relationship of design, democracy and politics will form the starting point.
Interface Design for User Decision

In online purchasing, customer reviews are playing an important and unique role: a staggering 90 percent of people use and monitor reviews in their online purchasing process [10]. However, the overwhelming number of reviews and their inconsistent writing style require significant effort to read, and tend to let important information slip by. Given that people examine the attributes of a product to evaluate whether it fits their desire, a number of systems have summarized customer reviews by extracting features and associating sentiment with each feature. Liu et al. (2005) [5] and Carenini et al. (2009) [2] used bar charts to show the sentiment of summarized features. Carenini et al. (2006) [1] summarized user reviews in the form of a Treemap, by representing a feature as a rectangle with nested rectangles corresponding to the descendants of the feature. In addition to numerical ratings, Yatani (2011) [9] used adjective-noun word pairs to summarize the sentiment (adjective) towards each feature (noun), to help users explore reviews in greater detail.

In most conditions of online shopping, purchasing can be viewed as a decision-making process from the perspective of the customer [3]. In light of human decision-making theory, we learn that the same decision maker may differently acquire and adopt information and use a wide variety of strategies contingent on decision properties [7]. In general, an effective information display depends on two matches: on the one hand, the match between the importance of information for the decision maker and the salience of the information display [6]; on the other hand, the congruence between the information format and the way information is processed [8]. Hence, the foundation of designing information displays for user decision improvement is gaining a deep understanding of human decision-making behavior. Some researchers have investigated how people use online ratings to make choices. For example, Lelis S. and Howes A. (2011) suggested that people gather more information for the best alternative and take more time to inspect reviews with lower ratings [4]. However, no clear picture exists to systematically describe how consumers make purchase decisions in E-commerce, in particular with respect to customer reviews.

In this paper, we take online hotel booking as an example to empirically investigate consumer decision-making behavior in three stages of online purchasing: (1) screening out interesting alternative(s) for further consideration, (2) evaluating them in depth, and (3) comparing candidates to make the final choice. Interfaces that aggregate information from customer reviews have been developed to support the three stages. Through analysis of the results, we identify the decision strategies users utilize to process information and the information they are inclined to seek at each stage. These findings lay solid groundwork to design E-commerce interfaces for consumer decision improvement.

Concerning user decision-making behavior in the stage of screening out interesting alternatives, we find that: (1) 94% of participants began by eliminating alternatives with values for an attribute below a cut-off, to simplify the complexity of the choice; (2) 55.3% of participants eliminated alternatives by both static features (i.e., product specifications) and customer reviews; moreover, the number of users who adopted opinion attributes (i.e., attributes extracted from customer reviews) is significantly higher than the number using an overall review score; and (3) the cut-off values are determined by the value distribution of an attribute and the correlation among attributes, in addition to stable preference. Grounded in these user decision-making behaviors, we framed two alternative designs for an opinion-attribute-embedded filter panel; one of the two, a slider interface, visualizes the distribution of an attribute via bars and the correlation among attributes via simultaneous change. Then, we performed a user study to compare the two alternative designs in the context of online hotel booking. The results show that people depended highly on opinion attributes to narrow down the range of options in both interfaces, which points to the effectiveness of incorporating opinion attributes in filters, and that the slider interface achieves significantly higher user assessments in terms of perceived decision accuracy, cognitive effort, pleasantness to use and intention to return.

After narrowing down the options to a smaller set, 40% of participants adopted a more compensatory strategy, Weighted Additive Difference, i.e., comparing the remaining alternatives on multiple attributes and selecting the alternative with the best overall value. More notably, significantly more participants compared alternatives by opinion attributes than by those associated with an overall review score. Therefore, we developed a multi-attribute sorting panel embedded with opinion attributes. Furthermore, the multi-attribute sorting panel was expanded to three alternative designs that mainly differ in the way of eliciting relative importance for attributes. In terms of objective and subjective measures, the multi-attribute sorting was verified to be beneficial to consumer online purchasing. The direct way outperforms the indifference and indirect ways regarding perceived decision accuracy, cognitive effort, satisfaction and intent to use in an E-commerce environment. Informed by the results of the above user studies, we have derived a set of guidelines on how to design interfaces for consumer purchase decision improvement in E-commerce.

References
1. Carenini G., Ng R.T., and Pauls A. Interactive multimedia summaries of evaluative text. In Proceedings of the 11th International Conference on Intelligent User Interfaces (IUI 06), pp. 124-131, 2006. ACM.
2. Carenini G. and Rizoli L. A multimedia interface for facilitating comparisons of opinions. In Proceedings of the 14th International Conference on Intelligent User Interfaces (IUI 09), pp. 325-334, 2009. ACM.
3. Chen L. Towards Three-Stage Recommender Support for Online Consumers: Implications from a User Study. Web Information Systems Engineering, WISE 2010, 6488: 365-375, 2010.
4. Lelis S. and Howes A. Informing decisions: how people use on-line rating information to make choices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 11), 2011. ACM.
5. Liu B., Hu M., and Cheng J. Opinion observer: analyzing and comparing opinions on the Web. In Proceedings of the 14th International Conference on World Wide Web, pp. 342-351, 2005. ACM.
6. MacGregor D. and Slovic P. Graphical representation of judgmental information. Human-Computer Interaction, 2 (3): 179-200, 1986.
7. Payne J. W. Task complexity and contingent processing in decision making: an information search and protocol analysis. Organizational Behavior and Human Performance, 16 (2): 366-387, 1976.
8. Todd P. and Benbasat I. The influence of decision aids on choice strategies: An experimental analysis of the role of cognitive effort. Organizational Behavior and Human Decision Processes, 60 (1): 36-74, October 1994.
9. Yatani K., Novati M., Trusty A., and Truong K.N. Review spotlight: a user interface for summarizing user-generated reviews using adjective-noun word pairs. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 11), pp. 1541-1550, 2011. ACM.
10. https://moderncomment.com/customer-feedback-stats
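The two-stage decision behavior reported in this abstract, cut-off elimination on single attributes followed by Weighted Additive comparison of the survivors, can be sketched in a few lines. The hotels, attribute values, cut-offs and weights below are invented placeholders for illustration, not data from the user studies.

```python
# Stage 1: conjunctive screening -- drop alternatives whose value on any
# attribute falls below the user's cut-off (the behavior 94% of
# participants showed). Stage 2: Weighted Additive -- score the survivors
# on all attributes and pick the best overall value.
# All names and numbers are invented placeholders.

hotels = {
    "Hotel A": {"price_value": 0.8, "location": 0.6, "cleanliness": 0.9},
    "Hotel B": {"price_value": 0.4, "location": 0.9, "cleanliness": 0.7},
    "Hotel C": {"price_value": 0.7, "location": 0.8, "cleanliness": 0.8},
}

cutoffs = {"price_value": 0.5}          # eliminate poor value-for-money
weights = {"price_value": 0.5, "location": 0.2, "cleanliness": 0.3}

def screen(alternatives, cutoffs):
    """Eliminate any alternative below a cut-off on some attribute."""
    return {name: attrs for name, attrs in alternatives.items()
            if all(attrs[a] >= c for a, c in cutoffs.items())}

def weighted_additive(alternatives, weights):
    """Pick the alternative with the best weighted sum of attributes."""
    score = lambda attrs: sum(w * attrs[a] for a, w in weights.items())
    return max(alternatives, key=lambda name: score(alternatives[name]))

survivors = screen(hotels, cutoffs)      # Hotel B is screened out
choice = weighted_additive(survivors, weights)
print(sorted(survivors), choice)
```

In the opinion-attribute interfaces discussed above, the attribute values for such a computation would come from sentiment extracted from customer reviews rather than from static product specifications.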
Technology and Design for Environment and Building | Territorial Design and Government | Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircraft | Spatial Planning and Urban Development | Structural, Seismic and Geotechnical Engineering

PhD Yearbook | 2015
DOCTORAL PROGRAM IN
ELECTRICAL ENGINEERING

Chair: Prof. Alberto Berizzi

A PhD in Electrical Engineering has a solid basic knowledge of mathematics and physics. This is essential, particularly for handling and understanding advanced tools and methods, as well as for the proper modelling, analysis and design of electrical engineering applications, with particular regard to power applications. A PhD in Electrical Engineering knows well the methods and applications of the main disciplines of the field: Basic Electric Circuits and Fields, Power Systems, Electrical and Electronic Measurements, Converters, Machines and Electrical Drives.
The most important part of the PhD program is the development of the research that will be the core of the PhD dissertation.
The main research areas are:

A) Electric Circuits and Fields:
This area is intended to provide the basic knowledge of methods in electrical engineering for power applications. PhD students are specifically trained to develop critical ability and innovative approaches. The training method encourages the development of discussion and debate skills in a team environment. The main research and training subjects are: nonlinear networks and periodic time-variant networks; analysis of three-phase and multiphase systems; switching circuits; electromagnetic field equations; electromagnetic field numerical analysis; electromagnetic compatibility; design techniques devoted to electromagnetic compatibility.

B) Power Systems:
A PhD in the field of Power Systems deals with the following subjects: electrical energy production (e.g., frequency and voltage control, protections, renewable energy sources, Dispersed Generation, Microgrids); electrical energy transmission (e.g., power system analysis, real and reactive power optimization, security and stability, integration of renewables); electricity markets (e.g., models, ancillary services, regulations); power quality and Smart Grids (e.g., harmonic distortion, active filters, UPS, interruptions and voltage dips, DC distribution).

C) Electric Machines and Drives:
This research field is strictly related to the rising demand for improved machine and converter performance. Its main subjects are the use of finite element codes, simulation programs and environments for device study, and control system definition on both the device and the system side.

D) Measurements:
This research field concentrates on the fundamentals of metrology, particularly with respect to the characterization of modern measurement systems based on complex digital signal processing structures.

The program aims at the prompt and efficient involvement of PhDs in any research body, such as the R&D department of a production or service company. After graduation, PhDs are typically employed at: major research centres; R&D departments; power generation, transmission and distribution firms; engineering consultancy offices; metrology reference institutes and certification laboratories; process and transport automation areas.

The Steering Committee is made up of:

Surname | Name | Firm | Position
Canizares | Claudio | University of Waterloo, Waterloo Institute for Sustainable Energy, Canada | Associate Director
Carlini | Enrico Maria | Terna | Head of System and Transmission Control & Operation, Central South Italy
Ercoli | Sergio | Zeroemissioni Srl | Chairman of the Board
Fasciolo | Enrico | A2A Reti Elettriche | Head of Primary Plants and Networks
Godio | Andrea | Alstom Transport | Manager
Lo Schiavo | Luca | Autorità per l'energia elettrica e il gas | Manager
Cherbaucich | Claudio | RSE - Ricerca sul Sistema Energetico | Deputy Head of Development and Planning
Mansoldo | Andrea | EirGrid | Senior Power System Analyst
Monti | Antonello | Institute for Automation of Complex Power Systems, E.ON Energy Research Center | Director
Zannella | Sergio | Edison Research, Development and Innovation | Scientific Network Manager

Companies currently providing scholarships:
MCM Energy Lab
A2A Reti Elettriche
Conservative functions and generalized powers in the electric network are those that, by Tellegen's theorem, satisfy the balance property, and they are powerful tools in different contexts. The attention for these functions is still, at the present time, very lively. The main reason is the wide diffusion and usefulness of reactive power, from both a practical and a theoretical point of view, for linear networks under sinusoidal steady state. It is significant to recognize two formal properties of reactive power under sinusoidal steady-state conditions: the balance property and its invariance on resistors. The balance property states that the algebraic sum of the reactive power on the single one-port elements of a network is equal to the corresponding term on the whole network. The invariance means that the reactive power is always nil on resistors. However, important changes have occurred in the last 50 years. In electric networks, power electronics equipment and arc and induction furnaces, in addition to clusters of personal computers, represent major nonlinear and parametric loads proliferating among industrial and commercial customers. The main problems emerge from the flow of nonactive power caused by harmonic currents and/or voltages.

The efforts to extend the concept of reactive power also under distorted conditions provided significant results for the analysis and theoretical comprehension of the distorted steady state. The literature on this subject is very large; in the past, many authors have proposed different definitions of nonactive power in distorted steady state. In particular, when power converters are present in networks, as sources of distortion or as active filters to eliminate this distortion, these networks are considered as time-variant networks and called switched networks. They pose several challenges in the construction of efficient time-domain simulators. Due to the wide range of applications, operating conditions, and phenomena to be studied, many different tools for computer analysis and simulation of switched networks have been developed. The switch model plays an important role within the analysis and simulation of switched networks. The ideal switch model is the simplest possible one and has several advantages with respect to others. In the presence of switching, the classical issues that arise are related to network solution and inconsistent initial conditions. Network solution is fulfilled by several methods; the main one is the complementary approach, where commutations are basically external constraints to a time-invariant multi-port. Meanwhile, inconsistent initial conditions caused by switching imply discontinuities of the state variables and impulsive behavior of some voltages and/or currents. In fact, Dirac's delta impulses of voltage and/or current may occur at the switching transitions. Impulses redistribute charge and flux at the switching instants when capacitor voltages and inductor currents, respectively, are discontinuous. Nevertheless, as a whole, the field of switched networks appears to lack general principles as well as applications of generalized powers.

In this work, starting from the concept of area in the v-i plane, a new approach called the Swept Area Theory, valid under both nonlinear continuous and discontinuous conditions, is developed. Novel conservative functions involved in this theory, such as the Area Velocity and the Closed Area over Time, are proposed. An analysis is carried out, by means of these functions, on nonlinear R, L, C elements and on the ideal switch and ideal diode. In addition, jump discontinuities are discussed in detail. The Closed Area over Time is related to the harmonic reactive powers and, under sinusoidal steady state, becomes proportional to the classical reactive power. A balance rule concerning harmonic reactive powers on nonlinear resistors under continuous conditions is obtained and discussed as a novel, interesting result. This aspect impacts on a possible extended definition of reactive power under distorted conditions. Thanks to the Switching Power, a novel quantitative relation between hard-switching commutations and the Closed Area over Time is obtained, with both theoretical and applicative relevance. Further explanation is given through a demonstration that shows how the ideal switch and power converters can become sources of reactive power. The discussion of issues of principle regarding the ideal switch model with respect to the real one is another important result of this work. Moreover, the concepts of the Ideal Switch Multi-Port and of multilevel voltage/current elements are proposed as a unified theory of power converters, whereby most existing power converters can be recognized in a general and modular way. Furthermore, the Swept Area Theory is extended to the Ideal Switch Multi-Port in order to find relations between the Switching Power and the commutations of power converters. In this way, the possibility for a power converter to generate or absorb reactive power is proved. Hence, a contribution becomes available to develop new control strategies of power converters based on the Swept Area Theory.

Another conservative function, called the Jump Power, is proposed in order to address some properties and issues of principle regarding one-port elements, in particular ideal diodes and ideal switches, in the presence of jump discontinuities. Some theorems based on the Jump Power are stated. In particular, possible conditions in networks are addressed whereby soft-switching and passive or active hard-switching commutations occur. Other conservative functions, called the Inductive Impulsive Power and the Capacitive Impulsive Power, are defined in order to analyze switched networks in the presence of Dirac's delta impulses in the electric quantities. These impulses are due to inconsistent initial conditions caused by switching. Also in this case, some properties and issues of principle regarding one-port elements, in particular ideal switches and ideal diodes, are addressed, and some theorems based on the Inductive Impulsive Power and Capacitive Impulsive Power are stated. These conservative functions, despite having properties similar to the Connection Energy, presented in the past literature as a function regarding the whole network, are still more powerful and meaningful. In fact, through the Inductive Impulsive Power and Capacitive Impulsive Power functions, it is possible to separate the effect of capacitors from that of inductors. Furthermore, an interesting result is found: the ideal switch can absorb or generate electric energy when an impulse of current or voltage occurs, while the ideal diode can only generate it. These facts are important mathematical aspects regarding the ideal model of switches and diodes. In some cases they cannot have a physical meaning, as is shown in analytical examples. However, the total energy absorbed by switching has a clear physical significance, as it is related to the variation of the energy stored in the set of reactive elements or generated by the electric sources. On the other hand, in the presence of more than one element, the partition of this energy among the different switching elements still has no physical correspondence with the loss of energy in the single element.
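For the sinusoidal steady state, the proportionality between the area swept in the v-i plane and the classical reactive power can be checked numerically: with v(t) = V cos(wt) and i(t) = I cos(wt - phi), the point (v(t), i(t)) traces an ellipse whose signed area is pi*V*I*sin(phi) = 2*pi*Q, where Q = V*I*sin(phi)/2 is the classical reactive power (peak amplitudes). The sketch below illustrates only this special linear sinusoidal case, not the thesis' treatment of nonlinear or switched networks:

```python
import math

def closed_area_vi(V, I, phi, n=20_000):
    """Signed area enclosed by (v(t), i(t)) over one period,
    via the shoelace formula: A = 1/2 * sum(v*di - i*dv)."""
    pts = [(V * math.cos(t), I * math.cos(t - phi))
           for t in (2 * math.pi * k / n for k in range(n))]
    area = 0.0
    for (v0, i0), (v1, i1) in zip(pts, pts[1:] + pts[:1]):
        area += v0 * i1 - v1 * i0
    return area / 2.0

V, I, phi = 10.0, 2.0, math.pi / 3      # peak voltage, peak current, phase lag
Q = V * I * math.sin(phi) / 2           # classical reactive power
print(closed_area_vi(V, I, phi))        # matches 2*pi*Q
print(2 * math.pi * Q)
```

Consistently with the invariance property recalled above, a resistor (phi = 0) collapses the ellipse onto a line and the enclosed area, hence the reactive power, is zero.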
The electric arc furnace (EAF) has seen widespread diffusion in melting facilities in the last decades. Key factors for this kind of process are accurate melting process control, less expensive melting charge material (steel scrap), small production batch capabilities, and lower energy requirements.
It is well known that the melting process depends on the presence and propagation of the electric arc between the electrodes and the scrap charge. The heat produced by the arc current melts the scrap to create the molten steel. The characteristics of the electric arc depend mainly on two factors: the quality of the scrap charge and the quality of the electrical supply. The furnace process control is automatically set to maintain a specific energy delivery from the electrodes to the scrap charge; the effectiveness of this action is reduced if the feeding voltage is unstable and/or the scrap charge density is uneven or too low (presence of holes and spots inside the charge, or poorly conductive pieces of scrap). Scrap charge density is indirectly influenced by charge composition, so melting process performance is in a way correlated to the type of scrap mixture employed. How strong this correlation is depends on several other factors involved in the scrap selection.
Arc furnaces can be considered electrically polluting devices: the uneven nature of the electric arc, due to the uneven density of the media in which it takes place (air and scrap), as well as its tendency to be displaced by electromagnetic forces, results in a consistent harmonic content in both arc currents and voltages. The distortion of the arc electrical quantities also influences the harmonic content at the primary side of the furnace transformer and, ultimately, in the whole electric system connected to it. Disturbances are present at the connecting point to the supply grid; they are related to both furnace activities and external sources. These phenomena can alter voltage and current profiles significantly and can result, beyond disturbance injection from the steel plant into the grid, in voltage profile non-uniformity and variability.
The stability and uniformity of voltage levels are essential for the melting process because, in the furnace control system, voltage is not under closed-loop control. This means that, if the voltage profile at the primary side of the furnace transformer is not steady during the process, the electric arc characteristics can differ from the desired, scheduled ones; regulation cannot be performed in the correct way, and the energy transmission from arc to scrap is negatively influenced.
The main objective of this work is to discover specific electric supply conditions that could be correlated with poor melting performances of the EAF. For this task, a case study was selected and a distributed measuring system was developed in order to monitor voltages and currents at the points of interest within the electrical system. The steel plant chosen as a case study, Acciaierie di Calvisano, is located near the city of Brescia, in the main steel manufacturing district of northern Italy, in the north-east part of the Lombardia region. A scheme of the steel plant electric system and of the measuring points is represented in Fig. 1.
From the measured voltage and current values, other electrical quantities of interest were derived, in particular active and reactive powers and harmonic indexes. Particular attention was given to the electric asset at the connecting point between the plant electric system and the supply grid, that is, at the Point of Common Coupling (PCC), as it is usually referred to. In order to directly compare actual and preset secondary voltage and current, the electrical quantities at the secondary windings of the EAF transformer have also been estimated. In this way the analysis refers to the same point at which the EAF control system measures voltages and currents for process control. As a result, the deviation of the actual voltage values from the preset ones can be computed in order to verify how far the actual conditions are from the desired ones (see Fig. 2).
Data from production records are employed in conjunction with those obtained from the analysis of the electrical quantities in order to verify the effective correlation between electric supply conditions and process performances for each melting process. Furnace activities were monitored for several months and a comprehensive set of melting processes was built; a classification was then made, and the influence of electrical supply conditions on melting performances was evaluated through a statistical analysis. This analysis shows that, in the presence of variable and non-uniform voltage profiles, about 50% of the processes exhibit poor melting performances, and for the remaining ones better results are obtained only by means of a longer refining process; this leads to an increase in energy consumption.
The analysis of the voltage profiles, both on the primary side of the furnace and at the PCC, allowed detection of the main factors influencing the uniformity and stability of voltage levels: voltage transients due to voltage regulation performed in the grid, and wrong voltage regulation on the busbars of the furnace supply section. One of the main problems in voltage regulation is the voltage drop in the circuit from the busbar to the furnace transformer.
The melting processes and process stages were also classified on the basis of the harmonic content of the electrical quantities obtained by means of the implemented monitoring system. Starting from this classification it is possible to obtain a set of suitable voltage set-points for different operating conditions. In addition, an improved monitoring system, interfaced with the furnace control system, can be implemented in order to automatically set both furnace and voltage set-point regulation parameters, thus attaining better melting performances in every operating condition.

1. Electrical scheme of Calvisano steel plant; measuring points are represented in red
2. Actual voltage profile (in blue) and preset voltage profile (green dotted line) at the secondary side of the furnace transformer
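The harmonic indexes mentioned above are typically derived from sampled voltage and current waveforms through a discrete Fourier analysis. The following sketch is a generic illustration on a synthetic, hypothetical arc-like current (not the plant's monitoring code): it extracts single-harmonic RMS values with a one-bin DFT and combines them into the total harmonic distortion (THD):

```python
import math

def harmonic_rms(samples, k):
    """RMS of the k-th harmonic of a signal sampled over exactly
    one fundamental period (single-bin DFT)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * j / n) for j, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * j / n) for j, s in enumerate(samples))
    amp = 2 * math.hypot(re, im) / n      # peak amplitude of harmonic k
    return amp / math.sqrt(2)

def thd(samples, max_harmonic=13):
    """Total harmonic distortion: RMS of harmonics 2..N over fundamental RMS."""
    h1 = harmonic_rms(samples, 1)
    return math.sqrt(sum(harmonic_rms(samples, k) ** 2
                         for k in range(2, max_harmonic + 1))) / h1

# synthetic distorted current: fundamental plus 3rd and 5th harmonics
n = 2048
i_arc = [math.sin(2 * math.pi * j / n)
         + 0.2 * math.sin(2 * math.pi * 3 * j / n)
         + 0.1 * math.sin(2 * math.pi * 5 * j / n) for j in range(n)]
print(thd(i_arc))   # sqrt(0.2**2 + 0.1**2) ~ 0.224
```

In a real monitoring chain the same single-bin analysis would run on synchronously resampled voltage and current records, one fundamental period (or an integer number of periods) at a time.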
COMPUTATIONAL INTELLIGENCE

This thesis discusses a novel approach to solving complex electromagnetic problems by computational intelligence techniques. Chapter 1 provides background material on computational intelligence techniques: from the original Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA) to their variations, namely Genetic Swarm Optimization (GSO) and Meta-PSO. The theory of the Artificial Neural Network (ANN) and its training algorithms are also presented in Chapter 1.
Chapter 2 discusses a traditional approach, applying heuristic optimizations to EM components. Various new designs of EM structures, from multilayer microstrip antennas and metamaterial-inspired antennas to frequency selective surfaces, are optimized. The core challenge is the identification of an appropriate cost function, which was addressed by properly modelling the electromagnetic objects in commercial full-wave analysis. However, this traditional approach is always expensive in terms of computing time, and the dynamic memory required for each assessment is relatively large. Therefore, with the aim of reducing computational effort and memory consumption, an equivalent surrogate model for antenna design based on an Artificial Neural Network (ANN) is presented in this thesis: after being sufficiently trained, the ANN is used as a surrogate model that completely substitutes the full-wave characterization. Two different optimization schemes, as well as the different approaches to sampling the target data, are explained in detail in Chapter 3, which also marks the end of the first half of the thesis.
A variety of structures in electromagnetism, an indispensable part of physics, sometimes cannot be fully described by simulators. This issue is more evident in scattering problems: when dealing with complex objects and materials, the use of mathematical tools is strongly needed. The research carried out in this context is a boundary-value problem where the aim is to determine the radiation field of primary sources illuminating spheroidal structures. The primary sources are Hertzian electric and magnetic dipoles; the spheroidal structures are two confocal prolate/oblate dielectric layers (made of either isorefractive or anti-isorefractive material) coating a metallic prolate/oblate spheroid. Since the problem of a single coating layer was already solved by Dr. Askarpour and Prof. Uslenghi in [98, 99], it is relevant to investigate the behavior of primary sources in the case of doubly-coated spheroids. The modal expansion coefficients are determined by imposing boundary conditions at the various interfaces of the spheroidal coordinate system. With the aim of giving a brief introduction to spheroidal coordinates and to the problem description, radial and angular spheroidal wave functions are interpreted theoretically in Chapter 4. Chapter 5 and Chapter 6 provide all the exact solutions of the two cases, prolate and oblate respectively. All the analytical formulations are retrieved by separation of variables, whereby the field components are expressed as infinite series of products of radial and angular spheroidal wave functions. Numerical results in these sections also exhibit the profound influence of the thickness and material properties of the coating layers on both the far-field and near-field regions. Conclusions and discussions on the outcomes of each problem can be found at the end of each chapter. At the end of Chapter 5, once the scattering of a magnetic dipole on prolate spheroids is fully covered, it is sufficient to create a representative cost function to optimize the radiated far fields of multiple dipoles on the structure.
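The original PSO recalled above can be summarized in a few lines: each particle moves with an inertia-weighted velocity plus random attractions toward its own best position and the swarm's global best. A minimal sketch follows; the sphere cost function is a stand-in (a real antenna design would evaluate a full-wave solver or an ANN surrogate), and the parameter values are common textbook choices, not the thesis' settings:

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal Particle Swarm Optimization for cost minimization."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:                # update personal/global bests
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

random.seed(1)
sphere = lambda x: sum(v * v for v in x)         # stand-in cost function
best, best_cost = pso(sphere, dim=3, bounds=(-5.0, 5.0))
print(best_cost)    # near zero
```

The surrogate idea described in the abstract replaces `cost` with a trained ANN, so that each of the thousands of evaluations above costs a forward pass instead of a full-wave simulation.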
SIGNAL PROCESSING FOR DISRUPTION DETECTION

This PhD thesis focuses on controlled thermonuclear fusion, in particular on the identification of a precursor signal of disruption and on the development of a real-time algorithm for predicting disruptions in tokamaks. The tokamak is an axisymmetric configuration that achieves the Magneto-Hydro-Dynamic (MHD) equilibrium condition through an appropriate combination of a toroidal magnetic field produced by external coils and a poloidal magnetic field generated by a current flowing in the toroidal plasma. In order to obtain controlled fusion, the equilibrium condition must be maintained stable, with respect to any perturbation, for a period longer than an appropriate time. A major limitation to fusion goals is posed by the onset of magnetohydrodynamic instabilities in plasmas. A class of instabilities bound to dissipative magnetic perturbations are the tearing modes, which occur as helical perturbations of the current and temperature of the plasma, localized around the rational surfaces (surfaces on which the magnetic field lines of B have a rational step) and rotating with them. Tearing mode instabilities may take place around the magnetic surfaces with low rational values of the ratio m/n, where m and n are the number of toroidal and poloidal revolutions of the field lines, limiting the performance of a tokamak. The plasma, despite being a good conductor, has a small but finite resistivity: the rational surfaces with constant magnetic flux and pressure tear, allowing the magnetic field lines to reconnect into magnetic islands. A tokamak can also operate in the presence of magnetic islands, but their evolution, if not controlled, can lead to disruption, i.e. the sudden fall of the plasma current followed by the loss of confinement. Disruption is very dangerous for the integrity of the reactor because the energy stored in the plasma is abruptly released to the machine wall. The study and prediction of disruptions is therefore a fundamental research topic in this context. While many techniques are available today for disruption avoidance and mitigation, with some degree of success, there is currently no strategy that allows all disruptions to be fully predicted and avoided. The definition of a precursor signal of disruption is currently an interesting research topic, and in this PhD thesis an original method to determine a precursor signal is proposed.
The method is based on the analysis of the results obtained by applying a Singular Value Decomposition (SVD) algorithm to magnetic signals. The signals used in this analysis are the fluctuations of the magnetic field caught by an array of small multiturn coils, the so-called Mirnov coils: circular coils oriented to measure the fluctuations of the poloidal component of the magnetic field. The signals have been suitably filtered and resampled at the frequency of interest for MHD activity. The work was done in collaboration with the Institute of Plasma Physics of Milan (IFP), and the analysis was applied to the magnetic Mirnov signals of the ENEA tokamak FTU (Frascati Tokamak Upgrade), Frascati, Italy; the same analysis was then adapted and applied to the magnetic Mirnov signals of the tokamak JET (Joint European Torus), Culham, UK. Through the singular value analysis it is possible to extract useful markers of the presence of instabilities, such as the entropy H and the marker P1. The entropy H is computed from the squares of the normalized singular values and describes the phase coherence of the Mirnov coil signals. The entropy can assume values in the range between 0 and 1, and lower values of H are indicative of the presence of a coherent mode, whose growth indicates growing MHD activity; if the mode becomes unstable, a disruption can occur.
2046 plasma discharges collected between 2008 and 2012 have been selected from the FTU database. No threshold parametrization related to the plasma engineering parameters, such as toroidal magnetic field, plasma current and density, has been used. The signals considered have been normalized, with the advantage of being independent from sensor calibration and signal amplitude. We distinguished 1665 regular terminations and 381 disruptions, without any classification of the causes that lead to a disruptive current quench. From the application of the SVD analysis to a set of FTU Mirnov coils in previous works, the entropy proved to be a good marker of the presence of MHD instabilities. The investigation of the disruption precursor starts with the study of the entropy time evolution during the plasma discharge. The two ensembles, disruptive shots and regular terminations, have been selected and considered separately, evaluating the mean value of the entropy around its minimum; this is a good candidate for discriminating disruptive shots from regular terminations. The result of this analysis is the definition of a disruption precursor based on the square root of the moving variance of the time derivative of the ratio H/P1. This disruption precursor is able to recognize up to 82% of the disruptions, 79% with at least 20 ms of warning time before the plasma current quench.
The same analysis has been adapted and repeated for the Mirnov coil signals coming from the JET machine, in order to test the robustness of the algorithm on different devices. We have analyzed 2044 plasma discharges collected between 2012 and 2013: 457 disruptive shots and 1587 regular terminations. Also in this case we do not distinguish disruption causes, and no threshold parametrization has been established. The developed algorithms, based on the SVD analysis of the MHD activity signals from a set of JET Mirnov coils, provide a disruption precursor based on the square root of the moving variance of the time derivative of H/P1, able to recognize up to 63% of the disruptions 50 ms before the plasma current quench. For the JET machine, an estimator based on the entropy mean value around its minimum seems to be more efficient, being able to recognize up to 79% of the disruptions, with at least 50 ms of warning time before the plasma current quench.
The results presented in this PhD thesis could be further improved by introducing additional information, for example thresholds related to plasma parameters such as magnetic field, plasma current and density. The markers identified in this analysis can be used to classify the disruption causes. For this purpose it is mandatory to analyze the plasma discharge in detail, studying the evolution of different plasma and machine parameters, distinguishing unintentional from intentional disruptions and taking the experimental conditions into account. Moreover, it would be important to focus on the precursor phases preceding the instability and on the plasma development before the thermal quench. The investigation of a possible correlation between the time dependence of the markers and the disruption evolution should also be carried out to improve the analysis.
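The role of the SVD entropy as a phase-coherence marker can be illustrated on a toy two-coil example: two phase-locked sinusoids (a coherent mode) concentrate the signal energy in one singular direction and give low entropy, while uncorrelated noise spreads the energy and pushes H toward 1. The sketch below is a two-channel illustration only (not the FTU/JET analysis chain, whose exact marker definitions differ); for two channels, the singular values of the n-by-2 signal matrix A follow from the eigenvalues of the 2-by-2 matrix A^T A:

```python
import math, random

def svd_entropy_2ch(sig_a, sig_b):
    """Normalized SVD entropy of a two-channel signal matrix A = [sig_a, sig_b]:
    p_k = s_k^2 / sum(s^2), H = -sum(p_k * log p_k) / log(2), with the
    squared singular values s_k^2 taken as the eigenvalues of A^T A."""
    saa = sum(a * a for a in sig_a)
    sbb = sum(b * b for b in sig_b)
    sab = sum(a * b for a, b in zip(sig_a, sig_b))
    tr, det = saa + sbb, saa * sbb - sab * sab
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = [(tr + disc) / 2, (tr - disc) / 2]     # eigenvalues of A^T A
    total = sum(eigs)
    ps = [e / total for e in eigs if e > 1e-15 * total]
    return -sum(p * math.log(p) for p in ps) / math.log(2)

random.seed(0)
t = [2 * math.pi * k / 1000 for k in range(1000)]
mode = [math.sin(10 * x) for x in t]              # coherent MHD-like oscillation
coherent = svd_entropy_2ch(mode, [math.sin(10 * x + 0.3) for x in t])
noisy = svd_entropy_2ch([random.gauss(0, 1) for _ in t],
                        [random.gauss(0, 1) for _ in t])
print(coherent, noisy)   # low entropy for the mode, near 1 for noise
```

With a full Mirnov array the same construction uses all coil channels, and the entropy is tracked in time on short sliding windows of the resampled signals.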
This research is composed words, the DSO will become be directly dependent upon Estimation was triggered by why the installation of the new
ELECTRICAL ENGINEERING
of two major topics; first, a sort of a local dispatcher the location whereas as the limited success in applying measurement equipment has a
the development of the and will involve its real/passive penetration of DER increases, transmission system state central part in planning of the
State Estimation function for customers in activities related to it will impact not only the estimation approach to new or improvement of the
distribution systems and second, the network management and distribution system capacity distribution environment; and existing DNs.
on of Measurement Equipment optimization. This, obviously, restraints but the voltage additionally, specific approaches The proposed solution
Placement for the sake of requires a deep review of the and frequency stability of the that have been developed for for Optimization of the
improvement of observability of regulatory framework. interconnected DER units. distribution systems experienced Measurement Equipment
distribution network. In this sense, definition of From the infrastructural point problems related to limitations Placement strongly reckons
The traditional vertically Smart Grid, now usually of view, there is a clear need
1. DSO interactions with markets & TSO at different time frames
integrated structure of the in use, appears reduced as it of enhancing the observability
electric utility has been focuses only on the appearance of the network, now generally
deregulated in recent years of the network, while it is more limited to the HV/MV
particularly by adopting the appropriate to speak about substation and the preparation of the distribution grids. on genetic algorithms that
competitive market paradigm Smart Distribution System of appropriate channels of The second motivation is due have proven to possess good
in many countries around the (SDS), extending the involvement communication with the users. to the fact that the increased properties with large-scale
world. The market-governed also to network users. Among Software tools evolution penetration of dispersed problems like this one. Many
electrical business and the the various initiatives that the includes the enhancements of generation is one of the different implementation
Renewable Energy Sources (RES) distributor must undertake in the SCADA side for managing main contributors to the new approaches have been
have changed significantly the order to adapt the methods the new devices and information management mechanisms that developed, tested and compared
power flows in distribution of planning, management coming from the network on are being developed. To improve on large number of realistic test
networks. and analysis of operation of one side. In addition, a new the hosting capacity of the DN DNs for this dissertation and
On the other side, the evolution the network, acquisition of family of software applications and to reduce the impact of DG the best approach is recorded.
... of the distribution systems, seen through the remarkable expansion of dispersed generation plants connected to the medium and low voltage network, is one of the main challenges. The growth of dispersed generation is causing a profound change of the distribution systems in technical, legal and regulatory aspects; most likely the Distribution System Operators (DSOs) will increasingly take on, at the local level, tasks and responsibilities of the role assigned on a national scale to the operator of the electricity transmission network. In other ...

... dedicated tools and the related infrastructure play a crucial role. Some of the roles that need to be taken into account are shown in Fig. 1. Most distributed energy resources (DER) can be deployed in the distribution network and, to be accessible to provide network support, DER must coordinate with the rest of the power system without affecting other customers. The capability of DER to provide ancillary services will depend on factors such as DER location and the number of resources integrated into the grid. Most of the benefits relying on ancillary services will ...

... is being developed to support both real-time operation and the planning phase. At this point, specific software tools for both real-time management and planning of the distribution network need to be developed. To implement these, the developer has to keep in mind all the above-stated limitations and challenges of modern systems. This thesis provides tools for improving the observability of the distribution systems and for optimal planning of the network to the same end. The first motivation to develop the Distribution System State ...

... on the regulation requirements of the bulk power system, DG has to provide ancillary services (to solve local and global issues: frequency, reserve, voltage regulation, congestion management). Therefore the distribution system will assume the role of a DSO. The premise of this is knowledge of the network. For this reason, state estimation (SE) applied to distribution networks (DN) is a main tool to improve the effects of DG penetration. On the other side, having elaborated on the fact that knowledge of the network state is crucial for its operation, it is not hard to understand ...

... Apart from the fact that the OMEP algorithm provides (sub)optimal solutions for all the tested systems, it is also robust, highly modular and easily implementable in DMS solutions.
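State estimation of the kind invoked above is conventionally formulated as a weighted least-squares (WLS) fit over redundant measurements. The sketch below is a generic illustration of that formulation, not the estimator developed in the thesis; the two-state network, the measurement values and the weights are invented for the example.

```python
# Weighted least-squares (WLS) state estimation on a toy linear model:
# z = H x + e; minimize (z - Hx)^T W (z - Hx)  =>  x = (H^T W H)^-1 H^T W z
# Pure-Python, 2-state example; all numbers are illustrative only.

def wls_estimate(H, z, w):
    """Solve the 2-state WLS normal equations without external libraries."""
    # Accumulate A = H^T W H (2x2) and b = H^T W z (2x1).
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (h1, h2), zi, wi in zip(H, z, w):
        A[0][0] += wi * h1 * h1
        A[0][1] += wi * h1 * h2
        A[1][0] += wi * h2 * h1
        A[1][1] += wi * h2 * h2
        b[0] += wi * h1 * zi
        b[1] += wi * h2 * zi
    # Cramer's rule for the 2x2 system A x = b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x1 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x2 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x1, x2

# Three redundant measurements of two states (e.g. two bus voltages):
# a direct meter on each bus plus one measurement of their difference.
H = [(1.0, 0.0), (0.0, 1.0), (1.0, -1.0)]
z = [1.02, 0.98, 0.05]      # measured values (per unit)
w = [100.0, 100.0, 25.0]    # weights = 1 / measurement variance

print(wls_estimate(H, z, w))
```

The redundancy (three measurements, two unknowns) is what gives the estimator its observability benefit: inconsistent meters are reconciled according to their assumed accuracy.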
Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircrafts | Spatial Planning and Urban Development | Structural Seismic and Geotechnical Engineering | Technology and Design for Environment and Building | Territorial Design and Government

PhD Yearbook | 2015
Sound Absorption Measurements with ...
SILICON NANOSTRUCTURES FOR ENERGY ...
An experimental study on fluid dynamics in ...
Luca Cornolti - Tutor: Prof. Angelo Onorati - Supervisor: Dr. Tommaso Lucchini
INCREASING ENERGY RECOVERY OF WASTE-TO- ...
Nanostructured Oxide Semiconductors ...
Strategies for access to energy in ...
BUILDING SUSTAINABILITY ASSESSMENT ...
Waste atomic separation ...
Irene Prencipe - Advisor: Dr. Matteo Passoni - Tutor: Prof. Carlo E. Bottani
Primary Exergy Cost of Goods and Services: ...
Reduced Order Methods: Applications to ...
Giulia Ercolani - Supervisor: Prof. Marco Mancini - Co-supervisor: Dr. Chiara Corbari
DEVELOPMENT AND TEST ...
LIFE CYCLE ASSESSMENT OF WASTE PREVENTION ...

DOCTORAL PROGRAM IN INDUSTRIAL CHEMISTRY AND CHEMICAL ENGINEERING

Professors Committee

Grants
Solvay Specialty Polymers Italy SpA, IIT, Pirelli, MTU Friedrichshafen (Motoren Turbinen Union)

Purification of Natural Gas by means of a ...
EXPLORING DIFFERENT CATALYTIC ROUTES FOR ...
Perfluorinated Materials and Photocatalytic ...
DOCTORAL PROGRAM IN INFORMATION TECHNOLOGY

Introduction
Chair: Prof. Carlo Fiorini

The PhD program in Information Technology (IT) goes back to the year 2001, when the two traditional programs in Automation-Computer Engineering and Electronics-Telecommunications were merged. As such, the unified course covers the research interests in four scientific areas, namely Computer Science and Engineering, Electronics, Systems and Control, and Telecommunications. This
broad variety of research activities is completely focused on the ICT area, and perfectly corresponds to the core mission of the Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB). However, in keeping with the history of the Department, and following the new trends of modern society, some cross-related research fields are also encouraged, such as ecology, environmental modelling, operations research, and transportation systems. The PhD program in IT is the largest at the Politecnico in terms of number of students: there are more than 60 first-year students and about 210 in total. The students are subject to an examination every year to evaluate the progress achieved in their research and course work.

Topics
The research carried out in the Department in the field of Information Technology (including 35 computing and experimental laboratories) can be subdivided into 4 main areas:
Computer Science and Engineering (Vice-Coordinator: Prof. Andrea Bonarini): Information systems, Database management, Information design for the web, Methods and applications for interactive multimedia, Embedded systems design and design methodologies, Dependable systems: performance, security and reliability, Autonomous robotics, Artificial intelligence, Computer vision and image analysis, Machine learning, Dependable Evolvable Pervasive Software Engineering, Compiler Technology, Natural Language Processing and Accessibility.
Electronics (Vice-Coordinator: Prof. Angelo Geraci): Circuits and systems: theory and applications, Single-photon detectors and applications, Radiation detectors and low-noise electronics, Electronic circuit design, Electron devices.
Systems and Control (Vice-Coordinator: Prof. Paolo Bolzern): Control systems, Robotics and industrial automation, Optical measurements and laser instrumentation, Dynamics of complex systems, Planning and management of environmental systems, Operations research and discrete optimization.
Telecommunications (Vice-Coordinator: Prof. Andrea Virgilio Monti Guarnieri): Networking, Applied electromagnetics, Information transmission and radio communications, Optical communications, Wireless and space communications, Remote sensing, Signal processing for multimedia and telecommunications.

Industrial collaborations
Due to its intrinsic technological nature, the PhD curriculum is corroborated by many industrial collaborations. About 25% of the total number of scholarships are funded by industry or by international research projects involving industrial partners. In the school's vision, the collaboration between university and industry is ideally based on the challenge of turning invention into technological innovation. This amounts to shaping new technology frontiers and to building a fertile atmosphere for a mixture of world-class research at universities and in private companies. It also amounts to creating a common terrain of friendly culture, to sizing the risk and to believing in strong basic research. The external referee board is composed of members of public and private companies, working in industry and in applied research. The board is in charge of monitoring the activities of the PhD program and giving suggestions for its development. The board meets once a year to point out new emerging research areas worth investigating and to monitor the visibility of the course in the industrial world.

Educational aspects
The teaching organization and the subjects of the courses reflect the scientific interests of the DEIB faculty. The curricula include a wide choice of courses (about 30 per year), of different natures. The challenge is to promote interdisciplinary research while offering advanced technical courses that spur innovative and cutting-edge research. Therefore, particular attention is devoted to helping each student make the best choice according to an internal regulation scheme.

Internationalization
Every year about 5 courses are delivered by foreign professors. Moreover, the PhD program encourages joint curricula through agreements with foreign institutions. At present we count joint agreements for a Double PhD Program with the New Jersey Institute of Technology (NJIT) in the electrical and computer engineering disciplines; academic cooperation for a Double Doctoral Degree with the Graduate School of Engineering and the Graduate School of Life and Medical Science, Doshisha University, Japan; a double PhD Program with Queensland University of Technology, Australia; between DEIB and Drexel University, Philadelphia, Pennsylvania, USA; a Double PhD Program with the University of Western Ontario, Canada; a Joint PhD Degree Program between DEIB and the Georgia Institute of Technology, Atlanta, USA; and an agreement for co-supervision of a doctoral thesis with Ecole Polytechnique de Montreal, Canada. The PhD program in Information Technology participates in ICO-NEH (International Curriculum Option of Doctoral programmes) and in Erasmus Mundus programs: STRONG-TIES (Strengthening Training and Research Through Networking). Every year at least 4 fellowships are assigned to ...
Board Committee
Livio Baldi (Micron Semiconductors Italia)
Claudio Bartolini (HP Labs - USA)
Riccardo De Gaudenzi (European Space Agency)
Giuseppe Fogliazza (MCM S.p.A.)
Renato Marchi (Gruppo PAM)
Fabrizio Renzi (IBM Italia S.p.A.)
Stefano Signani (Unicredit S.p.A.)
Maurizio Zuffada (STMicroelectronics S.r.l.)
Vulnerability Detection and ...

... are systems where software and hardware entities monitor and manage physical devices using communication channels. They have become ubiquitous in many application domains, including health monitoring, smart vehicles and energy efficiency, as in power supply provisioning and management (e.g., smart buildings and smart grid infrastructures), thanks to the rapid growth of embedded system technologies. From a more general perspective, these systems bridge the cyber-world of computing and communication with the physical world of more complex systems, which include mechanical parts, sensors and actuators of different kinds. Examples of CPSs include the smart electricity-grid systems, the Supervisory Control And Data Acquisition (SCADA) systems for managing the functioning of industrial plants, and the distributed control systems of Building Automation Networks (BANs) designed to monitor and control the security and safety of the mechanical systems in a building, as well as their services.

All the aforementioned systems were commonly designed as stand-alone networks in physically protected locations, using proprietary technologies. The extension ... the improvement of their management into more efficient and effective CPSs, paved the way also to new security and safety threats. In particular, the security of CPSs involves the need to secure a set of systems that were explicitly designed assuming the existence of a physical barrier confining the attackers, an assumption that has fallen as the need to connect them with the external world has surged. This is particularly problematic as CPSs have, by their own nature, a multitude of possible attack vectors, since a number of malicious actions can be undertaken at every level of their programming and working stack, from the low-level firmware residing on the devices to the infrastructure coordinating their functionality, which typically resides on common general-purpose computers. To this end, it is crucial to approach the securing of CPSs at multiple levels. In particular, it is fundamental both to tackle the communication-level security of such systems, by providing authentication to the involved endpoints and strong integrity guarantees on both the transmitted data and control signals, and to tackle the system security, lest an attacker alter their functioning for malicious purposes. The research ... developed several solutions to tackle potential threats, but these solutions are far from being unaffected by defects, as is shown in this thesis.

In this research work, the attention has been focused on the CPSs that operate in a smart grid, considering that the introduction of these so-called smart devices has particularly attracted the attention of cyber-security researchers, mainly due to the large-scale impact that would occur in the case of a cyber-attack. There are many different CPSs worth investigating; nevertheless, this research focuses on two of them, which appear to have a major role in the smart grid field: the BANs, which operate at the peripheral level of the grid, and the PLCs, which are widely used to manage the generation and distribution of energy.

This thesis pursues several objectives, at different levels of complexity. First, the different technologies for both BAS and SCADA presented in the literature have been investigated, in order to understand what kinds of communication protocols, operating systems, and hardware are used. Each technology may have specific vulnerabilities that cannot be generalized. Regarding the BAS architecture, the thesis focuses on ... Second, ... on the analysis of network ... been carried out. Third, relying on the previous analysis, two case studies have been defined in order to better investigate the vulnerabilities of such systems. A first testbed has been realized using a KNX system, as most representative because of its more stringent constraints on computational capacity, and a second one for the SCADA systems, based mainly on Siemens PLCs, as most representative because of their spread in the industrial market. Fourth, after an extensive reverse-engineering activity, a set of viruses has been developed to exploit the identified vulnerabilities, trying to generalize the attack scheme. A first malware has been designed to attack a KNX system and to show the possibility of fully reprogramming the network devices, in order to obtain a malfunction or a complete Denial-of-Service. A second malware has been designed to attack the application program of Siemens PLCs, in order to show how it is possible to change the specific industrial process running on the PLCs. In the worst-case scenario the malware can shut down the entire process. Both malwares have been tested on realistic testbeds, proving the feasibility of the attacks. Fifth, some countermeasures have been proposed for each ... which is refreshed periodically by means of a multiparty key agreement scheme. The computational complexity of the novel multiparty key agreement scheme has been shown to be equivalent to a computational Diffie-Hellman problem. This solution provides support for sending secure commands in 1.3 seconds, and performs a key agreement, providing forward secrecy, in roughly one minute of computation and transmission time on the most constrained BAN network infrastructure: KNX. Furthermore, these performance figures improve by more than an order of magnitude on faster BANs, thus providing a widely applicable solution, proved to be secure against the strongest possible network-bound attacker model, with the same mathematical strength as well-characterized crypto-systems. In the case of SCADA, a novel defensive approach has been proposed, based on the application of model-checking techniques at the firmware level to verify the integrity of the industrial process execution. It has been shown how to apply those techniques to a real case based on the application program of a water turbine. A Safety Checker (SC) for the system has been written, which is able to intercept at firmware level the wrong commands ... By simulation, it has been shown that the proposed solution can run in real time with the firmware without affecting the general performance of the system.

In conclusion, this research shows how it is possible for a hacker to perform cyber-attacks on CPSs with only a moderate knowledge of the physical devices and industrial processes. Thanks to the numerous weaknesses of CPSs, it is possible to obtain full control of industrial processes and to perform different actions in order to accomplish different goals, from a Denial-of-Service to a more complex manipulation of the physical systems. This thesis also shows that it is possible to use different techniques to maintain the safety and security of CPSs. The results of this research have advanced the state of the art in the security of CPSs and BANs, and have been published in the refereed proceedings of research venues.
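The multiparty key agreement scheme itself is not reproduced in this summary; as a generic illustration of the Diffie-Hellman machinery its security is shown to reduce to, here is a minimal two-party exchange. The group parameters and the hashing step are standard textbook choices, not the thesis's construction, and real deployments would use authenticated exchanges over vetted groups.

```python
# Two-party Diffie-Hellman key agreement, illustrative sketch only.
import hashlib
import secrets

# A well-known 1024-bit MODP prime (used here purely for illustration;
# modern practice favors larger groups or elliptic curves).
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF",
    16,
)
G = 2

def keypair():
    """Pick a secret exponent a and publish g^a mod p."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

def shared_key(own_private, peer_public):
    """Both sides compute g^(ab) mod p, then hash it into a session key."""
    secret = pow(peer_public, own_private, P)
    return hashlib.sha256(str(secret).encode()).hexdigest()

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
# Exchanging only the public values, the two parties agree on one key:
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)
```

Refreshing such a key periodically, as the abstract describes, limits the window an attacker has to replay or forge commands, and discarding old exponents is what yields forward secrecy.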
Solution-processible semiconductor technology has attracted considerable research interest in the past few decades; in particular, devices based on conjugated polymers and semiconducting nanocrystals, both processible from solution, can pave the way for the development of low-cost, large-area and flexible electronics. The key advantages of these materials over crystalline semiconductor technology are mainly ascribable to their low-temperature manufacturing, their compatibility with flexible substrates, and their unique and tunable optical capabilities. The high absorption coefficient of conjugated polymers makes them suitable for efficient light-harvesting devices. Further, the quantum confinement effect in semiconducting nanocrystals is particularly useful for making both devices with narrow and pure emission and devices harvesting light at a certain wavelength. In this work we focused primarily on the realization of photosensors based on conjugated semiconductors, and secondly on novel LEDs with pure and narrow emission based on quantum dots. In particular, we addressed the need of patterning functional materials, coming from solution, to manufacture many devices on the same substrate and ... envision the realization of more complex electronic devices (even bendable) for imaging/lighting purposes with a pixelated structure.

Inkjet printing technology, which allows ambient-condition and room-temperature manufacturing, meets all requirements for patterning solution-based semiconductors and manufacturing our devices. Furthermore, the additive nature of this deposition technique is virtually compatible with whatever substrate, even flexible ones. In particular, we worked with piezoelectric inkjet printing: functional solutions were loaded into tiny nozzles that are able to propel small amounts (tens of pL) of the same solutions in response to an electric voltage pulse. Control over the jetted solution is mandatory in order to obtain well-defined structures and high-resolution features (~10 μm): that is mainly achieved by properly tuning the voltage waveform which drives the piezoelectric nozzle. Nonetheless, the major hurdle in depositing films by inkjet printing lies in the uniformity and continuity of the films themselves. In fact, cast films suffer from the coffee-stain effect, which gives rise to non-uniform thickness and hollowed films. Noticeably, we addressed that issue and manufactured ... the photosensitive devices: the adoption of tailored solutions, resulting from the mixture of solvents with different physical properties, allowed us to avoid the coffee stain, to pile up different layers, all of them by inkjet printing, and to make optoelectronic devices with remarkable performances.

Finally, we reported pinhole-free, fully inkjet-printed photosensors, based on a vertical inverted architecture with the photoactive layer sandwiched between two electrodes, deposited on a flexible substrate, with EQE (at 1 V reverse bias) in excess of 60% over most of the visible spectral region, and a peak of 83% at 525 nm. Noticeably, the EQE and specific detectivity of our devices are comparable with the literature-reported devices made by the more standard coating techniques. In other words, the novelty of our approach does not just concern the manufacturing of high-performance devices, but also demonstrates the capability of inkjet printing for patterning and making efficient electronic devices. In-depth analysis of printed photosensors has highlighted the relationship between the manufacturing recipe and the device electrical properties; after an optimization step we were able to report devices with EQE independent of the incident power density (over a power density range larger than two decades), which is highly desirable for photodetectors.

The same inkjet printing technology was adopted to pattern quantum dots for making LEDs. In that case, piezoelectric inkjet is preferable over other printing techniques, like micro-contact or thermal, for two main reasons: the non-contact additive approach avoids material waste and mechanical contact between the dots and the stamp, and the piezoelectric driving prevents the heating of the dot-based ink that would be detrimental, in terms of quantum yield, for the dots. We inkjet-printed a solution based on core-shell PbS-CdS dots (emission peak at 1450 nm) in ambient conditions at room temperature, and we addressed the issues of coffee stain and thickness uniformity by tailoring the solution itself and modifying the surface energy of the underlying layer. We manufactured arrays of dot spots with 25 μm resolution and finally demonstrated the electroluminescent behavior of devices incorporating arrays of printed dots as the emissive layer.

1. Cartoon (top) and optical micrograph (bottom) of the inkjet-printed photodetector with active area 100 × 100 μm². The dummy strip of silver is needed to minimize the resistive losses due to the transparent electrode based on PEDOT:PSS.
2. EQE spectrum (red) measured at 1 V reverse bias under incident power density of 10 mW cm⁻², and normalized active layer absorbance (black).
3. Top: EQE as a function of the incident power density (measured at 570 nm, at 1 V reverse bias). Bottom: device bandwidth measured at 570 nm under incident power density of 0.1 mW cm⁻² at 1 V reverse bias.
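The EQE figures quoted above follow from a simple ratio: extracted electrons per second over incident photons per second. A minimal sketch of that calculation, with round illustrative numbers rather than the device data:

```python
# External quantum efficiency (EQE) of a photodetector:
#   EQE = (J / q) / (P / E_photon),  with  E_photon = h * c / wavelength
# where J is the photocurrent density and P the incident optical power
# density. The example numbers below are illustrative only.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

def eqe(photocurrent_a_cm2, power_w_cm2, wavelength_m):
    photon_energy = H * C / wavelength_m         # energy per photon, J
    photon_flux = power_w_cm2 / photon_energy    # photons / (s * cm^2)
    electron_flux = photocurrent_a_cm2 / Q       # electrons / (s * cm^2)
    return electron_flux / photon_flux

# e.g. ~2.8 mA/cm^2 of photocurrent under 10 mW/cm^2 at 570 nm
# corresponds to an EQE of roughly 0.61 (61%):
print(round(eqe(2.8e-3, 10e-3, 570e-9), 3))
```

The same ratio read backwards explains why an EQE that stays flat over two decades of incident power is valuable: the photocurrent then tracks the optical signal linearly across the whole range.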
In recent years there has been a huge increase in the storage capacity of integrated memories, yet the technological solutions have remained almost the same. The continuous demand for new portable, low-cost, low-power devices has forced a huge R&D effort pushing the limits of current technology. Flash memory, representing the mainstream memory technology, has experienced an impressive development that has led this technology to the 16 nm node and to the implementation of 3D architectures. As we approach the scaling limit imposed by physics on Flash technology, research is making more and more efforts towards the development of new devices able to combine the characteristics of Flash technology with the performance of faster memories, overcoming the scaling limits that will soon afflict floating-gate devices. At the same time, the need is emerging for a new family of memory, called storage-class memory, that combines the benefits of a solid-state memory, like high speed and robustness, with the archival capabilities and low cost of conventional HDDs. Among many competitors, oxide-based resistive switching memories represent a strong candidate for next-generation solid-state non-volatile memory technology. In particular, the extreme ease of fabrication, the low power and energy consumption and the high write/erase speed suggest that this technology may become a valid alternative to Flash memories and an optimal choice for storage-class memory.

The RRAM memory cell has an extremely simple structure: basically it is a parallel-plate capacitor, where a metal oxide is sandwiched between two metal electrodes that act as top and bottom contacts, as shown in Fig. 1a. The application of a positive voltage to the top electrode induces an ion migration from the top electrode toward the bottom electrode that creates a conductive filament (CF) connecting the two electrodes, as shown in Fig. 1c. This state is characterized by a low resistance (it is called the low resistance state, LRS) and is identified as the logic value 0. Conversely, the application of a negative voltage at the top electrode leads to ion migration toward the top electrode and the formation of a new gap, Fig. 1b. This final state shows a high resistance (it is called the high resistance state, HRS) and is identified as the logic state 1.

This thesis focuses on the electrical characterization and the modeling of several key issues of the RRAM, namely the switching mechanism, the program variability, the endurance failure and the random telegraph noise. Starting from complementary switching, a concept recently proposed to solve the sneak-path problem in crossbar arrays, an innovative multilevel scheme for oxide RRAM was developed. The new scheme relies on the storage of two different states for any resistance level, where the two states differ by the orientation of the CF. The defect concentration and orientation can be independently controlled by the current compliance and the voltage polarity, respectively. Storage and discrimination of 8 states (i.e. 3 bits) is demonstrated by experiments and numerical simulations. These results support the high functionality of nanoionics in storing and elaborating information in metal oxides.

Switching variability of the set state was studied and two different models were provided: (i) an analytical model for the Poisson fluctuation of the defect number in the conductive filament, and (ii) a Monte Carlo model, which describes the discrete injection of defects. Both models can capture the dependence of variability on the compliance current. A new set-failure phenomenon induced by complementary switching is also evidenced. Set failure might be suppressed by an accurate choice of the programming voltage, time and pulse shape, as well as by careful RRAM stack engineering. Pulsed operation of oxide RRAM was studied, showing that the reset pulse amplitude Vstop controls the resistance window and the switching variability: the high resistance state distribution improves for increasing Vstop, while the low resistance state is degraded due to capacitive overshoot and incomplete set. Endurance failure at high Vstop is due to negative set, i.e. a defect injection from the bottom electrode into the depleted gap that induces a non-switching state with a low HRS resistance, preventing a set transition. Changing the set/reset pulse width causes a negligible change of maximum endurance, which is explained by an Arrhenius model of failure.

Starting from a detailed electrical characterization, random telegraph noise (RTN) in RRAM devices was studied. The random fluctuation between two levels is explained by the change of charge state in a bistable defect close to the CF. The model provides a physical, quantitative description of both the electron transport in the presence of a fluctuating defect and the temperature-dependent switching kinetics. The model accounts for the size dependence of the RTN amplitude, which is due to the partial or full depletion of carriers depending on the CF diameter, and for the bias dependence of RTN switching. The work on RTN, presented at the IEEE conference IRPS, was awarded the Best Poster Award.

Nonvolatile logic operation in RRAM through conditional switching in serially-connected devices was also developed in this work. The RRAM state variable can be 0 or 1 and is used both as input and output of the logic operations. AND, IMP, NOT and transfer can be achieved in a single clock pulse, while OR, XOR and all other operations are achieved in multiple computation steps. The new nonvolatile logic approach allows suppressing the static leakage power dissipation, while reducing the area consumption thanks to the scalable 2-terminal structure of the RRAM switch. The RRAM logic was also studied from the circuit viewpoint, discussing the implementation architecture, the select/unselect scheme to prevent disturbs, and the implementation of a 1-bit adder. Two architectures are considered, namely a 1T1R architecture, where RRAMs in both the top and the bottom crossbar arrays are selected by a transistor, and a hybrid 1T1R-1R1R architecture, where only the RRAMs in the bottom crossbar require a select transistor. Finally, a 1-bit adder is designed and demonstrated by experiments and simulations. RRAM logic appears as a promising alternative to CMOS technology for area and energy scaling, thanks to the reduced area and the nonvolatile behavior of RRAM devices.
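The IMP (material implication) primitive mentioned above is functionally complete once combined with an unconditional FALSE, which is why a 1-bit adder can in principle be composed from it. The sketch below checks that composition at the Boolean level only; it abstracts away the RRAM electrical behavior entirely, and the particular gate decomposition is a generic one, not the sequence of pulses used in the thesis.

```python
# Boolean-level check that IMP (plus a work cell preset to FALSE) suffices
# to build NOT, OR, AND, XOR and finally a 1-bit full adder.

def IMP(p, q):
    """Material implication: returns (NOT p) OR q."""
    return (not p) or q

def NOT(p):
    """NOT via IMP against a work cell holding FALSE."""
    return IMP(p, False)

def OR(p, q):
    return IMP(NOT(p), q)

def AND(p, q):
    return NOT(IMP(p, NOT(q)))

def XOR(p, q):
    return AND(OR(p, q), NOT(AND(p, q)))

def full_adder(a, b, cin):
    """Sum and carry-out of three input bits, built only from the gates above."""
    s = XOR(XOR(a, b), cin)
    cout = OR(AND(a, b), AND(cin, XOR(a, b)))
    return int(s), int(cout)

# Exhaustive check of the truth table: s + 2*cout must equal a + b + cin.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(a, b, c)
            assert s + 2 * cout == a + b + c
```

Counting the gate applications in such a decomposition is also a quick way to see why the abstract notes that XOR and OR take multiple computation steps while IMP and NOT fit in a single pulse.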
INFORMATION TECHNOLOGY

Warehouse-scale computing, which is supported by datacenters, emerged in the last decade as a fundamental enabling technology for pervasive phenomena such as the Web 2.0, big data, and cloud computing. Despite being assembled from commodity components (servers, interconnects, ...), datacenters opened the way to a new paradigm for mainstream computing; as researchers work on understanding this new paradigm, two important themes emerge in a new way compared to traditional systems.

A major concern for datacenter operators is their efficiency and cost-effectiveness, which are crucial to supporting the growth in the services and value coming from big data and cloud computing. Additionally, public cloud computing presents further challenges for both datacenter operators and users. A major issue for users who want to bring their workloads to the cloud to take advantage of utility computing is that performance on virtualized resources is hard to understand and often unpredictable. For this reason, using public clouds for applications that need to provide a required quality-of-service (QoS) level is not straightforward and often leads to increased inefficiency due to conservative resource allocations.

There is a tension between these two issues (efficiency and QoS), as techniques to improve efficiency (e.g., virtualization, power management, colocation, ...) impact performance, often unpredictably. This dissertation attacks both sides of this tension and proposes novel techniques and tools to help solve it, towards future efficient QoS-driven warehouse-scale computing.

First, we analyze a well-known model for the total cost of ownership (TCO) of a datacenter and find that, as things stand today, opportunities to further reduce TCO, and allow datacenters to scale further, mostly lie in improvements in the efficiency of IT equipment, particularly the efficiency of servers. On this basis, there are three main opportunities to improve efficiency: increasing server utilization, reducing static power consumption, and reducing dynamic power consumption. The challenge is being able to target these opportunities without hurting QoS. We show that traditional mechanisms and policies to pursue these goals are not well suited for datacenters: colocating applications causes inefficiency and performance degradation due to contention on shared resources; deep sleep states impose high transition latencies and flush shared state, impairing performance; traditional controllers for dynamic voltage and frequency scaling (DVFS) reduce active power, but can heavily impact performance, because they are oblivious to the peculiarities of datacenter applications.

Then, we analyze metrics to quantify the performance of datacenter applications and define their QoS. Throughput is a general metric to quantify rate of progress or load, but it is not enough to capture the performance of latency-critical applications, such as user-facing services, which need to provide performance guarantees on the end-to-end latency of each request. Latency-critical applications are particularly interesting, because they define an operating context that is peculiar to datacenters; we analyze the behavior of five latency-critical applications, studying how latency is affected by different operating conditions. One important consequence of defining QoS with application-level metrics is that traditional systems that optimize for aggregated, low-level metrics cannot provide this type of QoS guarantee.

The main contribution of this dissertation is proposing novel approaches to the problem of achieving QoS enforcement in an efficient way in two complementary scenarios. We analyze these two scenarios and propose two methodologies and practical systems (AutoPro and Rubik) that solve this problem.

AutoPro tackles the problem of providing predictable performance with automated resource allocation in public infrastructure-as-a-service (IaaS) cloud computing. AutoPro provides a practical solution based on a control-theoretical background for systems running compute-bound, throughput-oriented applications. With AutoPro, we focus on current hardware and propose a solution that is directly deployable on modern datacenters with no hardware changes.

Rubik analyzes datacenters running latency-critical applications along with other batch work, and tackles the problem of reducing the TCO while maintaining QoS guarantees on the tail latency, thus improving efficiency. Rubik provides a solution based on a runtime system and a few key hardware changes, mainly to provide partitioning of the memory hierarchy; this solution could be implemented with negligible overhead on next-generation servers.

Both AutoPro and Rubik demonstrate the importance of three principles that we suggest as guidelines for the development of next-generation computer architecture and operating systems for datacenters: availability of application-level information through the hardware/software stack is key for effective control; control systems used to tune system-level knobs need to be founded on solid theoretical bases (e.g., AutoPro uses control theory, Rubik uses statistics and control theory), since ad-hoc empirical controllers do not generalize well and often fail due to unpredictable pathological cases; and, in order to support the dynamic execution context of datacenters, as opposed to the static runtime of traditional clusters, control systems need to operate at a high frequency, since coarse-grained adaptation cannot adapt to quick changes and imposes overly conservative guardbands, leaving much on the table.

Completely solving the problems of providing QoS and operating datacenters efficiently remains an open research problem, and different techniques and approaches are needed depending on the specific context (application types, private versus public clouds, criticality of the QoS requirements, ...). This dissertation analyzes these problems and provides two practical solutions for two somewhat complementary scenarios.
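The abstract says AutoPro rests on a control-theoretical background for automated resource allocation but gives no equations, so the following is only a generic sketch of that idea: an integral controller nudging a tenant's CPU share until a throughput target is met. The plant model, gains and numbers are all invented for illustration.

```python
# Sketch of control-theoretic resource allocation in the spirit of AutoPro:
# an integral controller adjusts a CPU share until the measured throughput
# reaches the QoS target. The "plant" is a made-up saturating function
# standing in for a real compute-bound application.

def plant_throughput(cpu_share: float) -> float:
    """Hypothetical app: throughput (req/s) saturates as the share grows."""
    return 1000.0 * cpu_share / (cpu_share + 0.2)

def allocate(target: float, gain: float = 0.0005, steps: int = 200) -> float:
    share = 0.1                                  # initial CPU share
    for _ in range(steps):
        error = target - plant_throughput(share)         # QoS error
        share = min(1.0, max(0.01, share + gain * error))  # integral action
    return share
```

The integral term guarantees zero steady-state error for a constant target, which is the kind of "solid theoretical basis" the dissertation contrasts with ad-hoc empirical controllers.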
The analysis and control of mechatronic systems in industrial applications leads to more and more demanding requirements in terms of accuracy, dynamical response and energetic efficiency. Recently, the technological performances have been improved by the introduction of electronic components, which allow the implementation of innovative control and estimation strategies: this new generation of devices, called mechatronic actuators, is the topic of this Thesis. In particular, three aspects that influence the mechatronic actuation systems are faced in the present work:
- Modeling problems: the knowledge of the dynamical behavior of the mechatronic actuators is the primary base for the improvement of performance, because it permits a deep understanding of the system functioning and provides fundamental information for the development of the estimation and control strategies.
- Estimation problems: in an industrial environment it is not always possible to obtain the information or the measurements required for an ideal control, because of difficult operating conditions or the absence of proper transducers; estimation algorithms allow these limits to be overcome through methodologies which, under suitable conditions, guarantee robustness and accuracy.
- Control problems: the proper design and tuning of the regulation systems are directly responsible for the performances. The knowledge and development of recent and innovative control strategies can provide solutions to overcome the actual limitations.

Since the mechatronic actuators studied in this Thesis have a practical impact on industrial applications, the proposed methodologies and algorithms are validated by experimental tests. All the laboratory and industrial tests have been performed in collaboration with industrial partners: Consorzio Intellimech Kilometro Rosso, Scaglia Indeva and, most of all, Tenaris Dalmine R&D.

The Thesis is structured in two parts: in the first section, the development of suitable models, estimation algorithms and control strategies for different technologies of mechatronic actuators is illustrated; in the second section, the methodologies previously analyzed are applied to specific industrial environments, such as self-balancing manipulators and rolling mills for pipe production.

The increasing number of industrial applications which involve actuation systems, and the demanding requirements needed to face these technological challenges, lead to the necessity of innovative devices, such as the mechatronic actuators. The large diffusion of electronic components brings an enhancement of the performances, especially in the estimation and control fields, but there is still margin for improvement, particularly in the overall system dynamics. For this reason, the mechatronic actuation systems are the main topic of this Thesis, with particular focus on the analysis of the dynamical behavior and on the design of control strategies useful for industrial applications. As an overview of the linear mechatronic actuators, three different actuation technologies are presented: the Electro Hydraulic Actuator (EHA), the Electro Mechanical Actuator (EMA) and the most recent and innovative Electro HydroStatic Actuator (EHSA). For each actuator typology, the main components and the functioning principle are detailed, in addition to the description of the control strategy. Then, the main advantages and drawbacks of each technology are listed and related to the most common applications.

Then, the mathematical models developed to study the dynamic behavior of the mechatronic actuators (EHA, EMA and EHSA) are detailed. The gray-box models are tuned on measurements by a proper parameter identification procedure, and the overall models are validated comparing the simulation results with experimental data. The developed models are also adopted to develop suitable strategies to estimate variables that cannot be directly measured or that come from low-quality sensors. In particular, the actuator position estimation is explored using the Kalman filtering approach and compared to the measurements provided by transducers of different technologies. Finally, the speed estimation has also been investigated with different algorithms.

Two different control scenarios for linear actuators are then illustrated. First, the Virtual Reference Feedback Tuning methodology is customized for the cascade control architecture to face the actuator positioning problem in the EHSA. Then, the pole placement and root locus criteria are used to design the regulator of the actuator force. Finally, on the basis of the models previously implemented and tested, a global comparison of the different actuators is provided to highlight the performance differences between EHAs, EMAs and EHSAs.

After a detailed technological analysis of the mechatronic actuators, the physical models of the apparatuses used in this work are presented, with particular focus on the control architecture. The control strategy of Virtual Reference Feedback Tuning, adopted also in the EHSA control, is applied to the self-balancing manual manipulators, with the experimental validation of the obtained results. Then, the load mass estimation problem is faced with two different approaches and the performances are validated with proper experimental tests.

The second application case considers the regulation of the speed profile trigger for electrical motors in rolling mills, based on the load estimation. After an introduction to the hot rolling process for seamless pipe production, illustrating the role and the importance of the mechatronic actuators in this industrial environment, the problem of the crop ends is discussed. The control strategy usually adopted to reduce the impact of the crop end phenomenon on the process productivity is presented, highlighting its main limitation. Then, an innovative control strategy is proposed to overcome this limitation: on the basis of two different approaches for the motor load estimation, the proper trigger time is estimated and controlled in closed-loop.

Finally, the main contributions are summarized. This Thesis proposes a detailed description of the mechatronic actuators of the most diffused technologies, with particular focus on the analysis of the dynamical behavior and on the design and tuning of specific methodologies for the estimation and the control of such devices, together with innovative control and estimation solutions applied to industrial cases. A novel control strategy is provided for the velocity regulation of a self-balancing industrial manipulator, including load mass estimation: the results show the possibility of removing a transducer while obtaining similar or better performances. An innovative methodology for the control of the speed profile trigger for electrical motors in rolling mills is also proposed, implemented and tested on a real productive plant, with notable benefit on the industrial productivity. All the proposed algorithms are experimentally validated.
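The actuator position estimation via Kalman filtering mentioned above can be sketched with a minimal one-dimensional filter on a constant-velocity motion model. The matrices, noise intensities and sampling time below are illustrative stand-ins, not the thesis's identified parameters.

```python
# Minimal Kalman filter estimating actuator position from noisy transducer
# readings, assuming a constant-velocity model x_{k+1} = x_k + dt * v_k.
import random

def kalman_position(measurements, dt=0.01, q=1e-4, r=0.04):
    x, v = measurements[0], 0.0          # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements:
        # predict: x' = F x,  P' = F P F^T + Q  with F = [[1, dt], [0, 1]]
        x, v = x + dt * v, v
        P = [[P[0][0] + dt*(P[1][0] + P[0][1]) + dt*dt*P[1][1] + q,
              P[0][1] + dt*P[1][1]],
             [P[1][0] + dt*P[1][1],
              P[1][1] + q]]
        # update with the position measurement z (H = [1, 0])
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S     # Kalman gain
        y = z - x                             # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out
```

Because the filter also estimates velocity, it tracks a ramp (constant-speed stroke) without steady-state lag while averaging out transducer noise.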
Mobility has a huge impact in terms of economic costs and it accounts for one quarter of the world's greenhouse gas emissions. Traffic congestion, pollution and greenhouse gas emissions are some of the most prominent issues that modern metropolitan areas currently have to face. The goals set at the political level in this regard cannot be achieved without a significant re-think of urban mobility.

New models of sustainable mobility call for the integration of vehicles that differ in type (e.g. cars, scooters and bicycles), technology (e.g. electric, hybrid or with classical combustion engines) and ownership (they can be publicly or privately owned, fully or partially shared) within the same system, and they must offer end-users common functions, services and interfaces.

This Thesis deals with Vehicle Sharing Systems. Vehicle Sharing gained global popularity during the first decade of the 21st century as one of the possible solutions to environmental and increasing city traffic congestion problems. A Vehicle Sharing System is a 24/7 vehicle rental service (the fleet of vehicles is heterogeneous) mostly dedicated to drivers who make occasional use of the car, typically for short periods of time and within city areas. Rates include both insurance and fuel, and to access the service users must be registered.

Specifically, this Thesis presents the Green Move project, which is an innovative, high-technological and advanced Vehicle Sharing System based on a heterogeneous and electric vehicle fleet. The main components of such a wide and complex system are depicted in Figure 1. This Thesis focuses on the components related to the fleet vehicles, both at a very low (on-board electronic box) and high (control and optimization algorithms) level. To visually inspect some of the Green Move solutions refer to: https://www.youtube.com/watch?v=kI3yiB50RWw.

The services currently in place often appear as a simple rental service just for cars. However, the GM service is green, flexible, smartphone-based, free from intermediaries and provided with the most innovative fleet management algorithms. A detailed description of the fleet vehicles is provided (e.g. size, number of seats and autonomy) along with technical aspects (i.e. vehicle data and buses, charging modes and commands). All the vehicles have to be endowed with a Green e-Box (GEB) to be inserted into the GM fleet. The GEB is an Android-based electronic on-board control unit; it implements several abstraction mechanisms that allow the seamless use of technologically different vehicles, and it provides a unique and standardized mode of access (the Vehicle Interface) for all the system actors. The GEB has been tested both on the fleet vehicles and, for four months, in a condominium-based VSS, demonstrating great reliability.

Then, two innovative algorithms for managing and controlling the vehicle fleet were introduced. The Feedback Dynamic Pricing (FDP) technique models the VSS as a dynamical system. This opens up new control options for the fleet balancing problem: a control strategy can be derived using the theory of feedback-regulated systems. Thus, assuming that people are sensitive to changes in the price of the service, the fleet balance can be actively controlled in real time by acting on the trip fee. A full-fledged simulator has been developed, and the experimental data are encouraging and demonstrate the feasibility of the proposed approach.

The Nearest Available Vehicle (NAV) algorithm for predicting the user distance from the nearest vehicle overcomes the drawbacks of free-floating VSSs due to the absence of booking mechanisms. The idea is to use the data from vehicles' past locations to make a prediction of future vehicle locations, providing users with the distance from the nearest vehicle. This technique has been successfully tested by using real car-sharing data from the Milano Car2go service.

The main contributions of this Thesis are summarized as follows:
- Development of an innovative system for vehicle sharing which eliminates all the intermediaries between the users and the system (Chapters 3 and 5). Green Move is the first Vehicle Sharing service which is entirely smartphone-based. It requires neither a member card to lock/unlock vehicle doors nor mechanisms to retrieve and give back the vehicle key: the key is simply sent by the control center to the user's smartphone as an electronic ticket.
- The possibility of adding to a Vehicle Sharing System vehicles that are heterogeneous in their types and owners (Chapter 5). The lack of heterogeneity of the vehicle fleet is one of the most important limits of today's Vehicle Sharing Systems. The Green Move project overcomes most of the technological problems, leading to an easy integration of different vehicle models.
- Development of a highly dynamic platform for managing the loading/unloading of applications on the vehicle on-board box (Chapter 5). None of the currently available car-sharing kits offers the possibility to add/remove functions depending on the user, location or period. On the contrary, the Green e-Box is based on a novel middleware, which allows applications to be dynamically loaded/unloaded based on the current context.
- The modeling of a Vehicle Sharing System as a dynamical system (Chapter 7). To the best of our knowledge, a Vehicle Sharing System had never been modeled as a dynamical system. This opens up new control options for the fleet balancing problem; in this approach a control strategy can be derived using the theory of feedback-regulated systems.
- A novel approach called Feedback Dynamic Pricing (FDP), presented in Chapter 7. It is based on the modeling of a VSS as a dynamical system and it aims to control the fleet balancing by varying the price of the service in real time.
- An innovative technique (named Nearest Available Vehicle, NAV) for solving the problem of the lack of booking mechanisms in free-floating car-sharing services, introduced in Chapter 8. It provides users with the prediction of their distance from the nearest vehicle.

1. Components of a Vehicle Sharing System on which control and optimization algorithms are hooked. This Thesis, among all the components, deals with the design and development of both a vehicle on-board electronic box prototype and control and optimization algorithms for fleet balancing and forecasting of vehicle locations.
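The Feedback Dynamic Pricing idea — treating fleet balance as the state of a dynamical system and regulating it through the trip fee — can be sketched as a proportional feedback loop. The two-zone setup, the linear demand response and the gains below are invented for illustration; the thesis's actual VSS model and simulator are far richer.

```python
# Toy feedback dynamic pricing: two zones share a fleet; a discount (or
# surcharge) on trips ending in the emptier zone steers user choices and
# rebalances the fleet. The demand response is a made-up linear model.

def simulate_fdp(steps=100, fleet=100, gain=0.02):
    a = 20                           # vehicles currently in zone A (target: fleet/2)
    prices = []
    for _ in range(steps):
        imbalance = fleet / 2 - a            # > 0 when zone A is short of vehicles
        discount = gain * imbalance          # fee reduction for trips ending in A
        # trips per step: baseline 5 each way, shifted by the price signal
        into_a = min(fleet - a, max(0, round(5 + 20 * discount)))
        out_a = min(a, max(0, round(5 - 20 * discount)))
        a += into_a - out_a
        prices.append(discount)
    return a, prices
```

The closed loop is a contraction (each step removes a fixed fraction of the imbalance), so the zone count settles near the balanced value while the price signal decays towards the baseline fee.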
Miniaturization, made available by integrated circuit technology, nowadays allows the implementation of sophisticated measurement systems on chip that can perform, in a millimeter-sized package, most of the functions available with standard discrete-components instrumentation. Circuit integration not only allows the development of massively parallel systems, otherwise too complex and bulky to be managed, but it is often also the key to dramatically improving the sensitivity of the readout systems which are designed for investigating the micro- and nano-world. Sensitivity is indeed strongly related to the capacitance (electrodes and instrumental) at the input of the measuring system, CIN, which actually defines the noise level of the instrument: an IC bonded to a nanoelectrode, with an input capacitance of a few hundred femtofarads, would allow about a x100 improvement in sensitivity compared to a setup built with a bulky instrument, with cable connections and about 10 pF of input capacitance. The miniaturization of the preamplification chain is not sufficient by itself, but must be coupled with both a well-conceived setup and device to be measured: the connection to the device under test (DUT) must be shortened, physically approaching it as much as possible to the IC, and the integration of the readout measurement system together with the DUT implies the development of embedded custom systems, which must also cope with special requirements (for example biological, optical, mechanical, chemical) of the DUT itself. The device itself must finally be properly devised to match as much as possible the readout requirements: maximize signal and signal-to-noise ratio when connected to the readout electronics, and minimize its input capacitance.

The work presented in this thesis is focused on the development of instrumentation for high-sensitivity measurements, and therefore concentrates on the development of the IC preamplification chain, but also on the setup and on the device development.

The first work presented is mainly focused on the development of an integrated transimpedance amplifier, made of a very peculiar two-channel modulation/amplification/demodulation structure embedded into a feedback loop. In this topology, whose principle structure is shown in Fig. 1, the incoming signal is first modulated to DC frequency by multiplier M1, then amplified with a gain GA independent of fs, low-pass filtered by LPF(f) to prevent the 2fs component and other spurious harmonics from propagating into the loop, and finally demodulated back to the input frequency by M2. For standard operating conditions, when fmod = fs, the circuit acquires the DUT input current and provides an output VHF at the measurement frequency, whose phase and amplitude are proportional to the input DUT admittance. Since the signal is translated into DC, two paths in parallel are required to process both the in-phase (I) and quadrature (Q) input signal components; the unity-gain SUM block adds them together as they are translated back into the measurement frequency. The translation into DC allows two DC outputs to be obtained inside the loop, which behave as an external lock-in system connected to VHF. Despite the increased loop complexity, the DC translation therefore has the benefit of avoiding an additional lock-in system, as well as advantages in terms of sensitivity and compactness. Besides the IC, a dedicated setup suitable for embedded applications has also been implemented, allowing the realization of a complete instrument for impedance spectroscopy. The new IC topology, coupled with the dedicated PCB, allows for the […]

1. Basic schematic of the impedance analyzer based on modulation/demodulation architecture IC.

[…] while the metal input pad of an appositely optimized CMOS amplifier is directly used as a conductive substrate electrode. This solution is determinant for minimizing the input stray capacitance and the capacitive coupling with the tip holder, and therefore the equivalent input current noise. Although more delicate than standard passive electrodes, this active substrate allows the ultimate detection limits of this technique to be explored, without requiring any modification to the commercial AFM instrument, and leading to a system sensitivity up to 14.4 zF.

2. Principle scheme of the new measurement system topology.

The third and last work presented focuses instead on the design and production of optimized devices which are conceived to sense the photoconductive effect caused by light when it is coupled into conductive waveguides. This technology is very promising for the development of non-invasive light detection in microphotonic structures. The device concept is schematically shown in Fig. 3, and consists of two electrodes surrounding the optical waveguide and spaced from its core by an electrically insulating layer: the core conductivity, which increases proportionally to the coupled light, can be measured for frequencies higher than fA = 1/(2π·CA·RWG). After a theoretical study of the physical mechanisms involved and a first analysis of the material platforms suitable for this purpose, amorphous silicon devices have been designed, produced and finally tested, proving the validity of their working principle on this technological platform.

3. Illustration of the device structure, consisting of two metal electrodes deposited onto the electrically-insulating upper cladding and capacitively coupled (CA) to the electrically-resistive (RWG) silicon waveguide.
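The modulation/demodulation loop of Fig. 1 is, at its core, a lock-in measurement: multiplying the signal by in-phase and quadrature references and low-pass filtering yields two DC values proportional to the admittance's magnitude and phase. A numeric sketch of that I/Q demodulation step, with all frequencies and amplitudes chosen arbitrarily for illustration:

```python
# I/Q demodulation sketch: recover amplitude and phase of a sinusoid at
# the stimulus frequency fs by multiplying with quadrature references and
# averaging (the average plays the role of the low-pass filter).
import math

def iq_demodulate(signal, fs, sample_rate):
    n = len(signal)
    i_sum = q_sum = 0.0
    for k, s in enumerate(signal):
        t = k / sample_rate
        i_sum += s * math.cos(2 * math.pi * fs * t)
        q_sum += s * math.sin(2 * math.pi * fs * t)
    # factor 2/n: averaging halves the amplitude of the mixed-down DC term
    i, q = 2 * i_sum / n, 2 * q_sum / n
    return math.hypot(i, q), math.atan2(q, i)   # (amplitude, phase)
```

Averaging over an integer number of stimulus periods rejects the 2fs component exactly, which is the same role LPF(f) plays inside the IC's feedback loop.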
Silvia Boiardi - Supervisor: Prof. Antonio Capone - Co-supervisor: Prof. Brunilde Sansò
Legacy wireless access networks (WANs) are typically designed regardless of energy-efficiency considerations. The planning objective is to minimize the deployment costs while ensuring complete area coverage and connection quality in any load condition, especially during peak traffic periods. In recent years, the growing demand for ubiquitous mobile communication access increased not only the amount of customers to be served, but also the awareness of the environmental impact. If on one hand the capacity of the current networks has to be upgraded to manage the extra traffic, on the other hand new energy-aware techniques should be introduced in the regular network functioning.

Within this context, green networking has emerged as a new way of building and operating communication networks to improve their energy efficiency. Besides its obvious environmental advantages, power consumption reduction is beneficial to mobile operators for economic reasons. Two types of cost are incurred: capital expenditures (CapEx), related to the purchase and installation of radio equipment, and operation and management expenses (OpEx), consisting in energy, site rentals, marketing and maintenance costs. While energy-aware network design models or operation management solutions have been largely treated in the research world, the challenge is to convey both CapEx and OpEx costs as well as power issues into a single modeling framework.

The work carried on during this doctorate program arose from the belief that a fundamental issue had been so far overlooked by the research on green mobile networking: an effective energy-aware operation through a cell sleeping technique is closely dependent on the planning decisions taken during the network design phase. While the design and operation problems had always been tackled separately, we developed an optimization model that jointly considers the planning and management of a wireless access network. The philosophy at the basis of the approach is summarized in Figure 1, where a three-cell toy topology is represented. Bold circles stand for the area coverage of turned-on access stations, thin circles stand for switched-off devices and black dots stand for traffic aggregators (also called test points). Figures 1(a) and 1(b) report examples of a topology deployed according to the minimum installation cost criterion, operated during high and low traffic load respectively. During off-peak periods, the operation savings correspond to the energy spared by turning off the largest cell, while the other two remain active to serve the traffic demand. Would it be possible to further decrease the energy consumption of the topology? Figure 1(c) answers this question, illustrating the main principle of the joint optimization. Given a traffic variation profile, the joint framework is able to build a flexible topology, specifically designed to take advantage of the demand variability. In the example, by adding an additional cell at the cost of a modest CapEx increase, it would be possible to switch off as many as three access stations instead of just one in low-load conditions.

1. Effect of joint optimization on network operation management.

The joint planning and energy management (JPEM) optimization model, as we named it, is an original integer linear programming problem. Its objective is to minimize the sum of the capital investment and the energy costs due to device deployment and operation and, at the same time, to provide an example of the possible benefits derivable from a cell sleeping mechanism applied to the chosen topology. Tuning the objective function by means of a trade-off parameter, different network topologies can be produced to accommodate the designer's requirements. In particular, the major selling point of the JPEM is that, whether a service area is blank or already covered by legacy equipment, it allows the network planner to build the most energy-efficient network from the energy consumption point of view, maintaining the CapEx under control.

As the main line of research, we designed the JPEM framework to optimize cellular access networks. An instance generator was developed using the C++ programming language to produce several test scenarios, considering LTE technology and three types of access devices - macro, micro and pico cells. Once again first in the literature to tackle planning and management at once, we also developed a variation of the joint formulation to suit wireless mesh access technology. We designed an all-new instance generator, this time producing Wi-Fi mesh test instances. In both cases, we implemented the models using AMPL, a programming language specific for high-complexity mathematical computation, while the CPLEX branch-and-bound solver was used to optimize them. Through remarkable results, we demonstrated that network planning and energy efficiency are closely related. To further confirm the validity of the joint approach, we compared the cellular and wireless mesh joint model results with those of a more traditional procedure, where network deployment and operation are addressed separately.

Due to the high number of optimization variables, the JPEM framework is extremely computationally expensive. Therefore, the last part of the doctorate program was devoted to developing an ad-hoc heuristic procedure in order to obtain results in a shorter amount of time and allow the investigation of larger test examples. Through the observation of the cell activation pattern during the daily peak and off-peak time periods, a partial topology is provided as input to the JPEM and integrated to form a feasible solution. The heuristic approach, fully implemented using AMPL, proved successful and solved a large set of real-size instances.
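The structure of the JPEM model — pay CapEx once per installed cell, pay energy per period only for cells kept on, subject to covering every active test point in every period — can be shown on a toy instance. The three candidate cells, their costs and the two traffic periods below are invented for illustration, and exhaustive search stands in for the CPLEX branch-and-bound used on real instances.

```python
# Toy joint planning & energy management: choose which candidate cells to
# install (CapEx) and which to power on in each traffic period (energy),
# so that every active test point is covered in every period.
from itertools import product

CELLS = {"macro":  (100, 10, {1, 2, 3}),   # (capex, energy/period, coverage)
         "micro1": (40, 4, {1, 2}),
         "micro2": (40, 4, {2, 3})}
DEMAND = [{1, 2, 3}, {2}]                  # test points active in each period

def plan():
    best = None
    names = list(CELLS)
    for install in product([0, 1], repeat=len(names)):
        built = [n for n, b in zip(names, install) if b]
        capex = sum(CELLS[n][0] for n in built)
        opex, feasible = 0, True
        for active_tps in DEMAND:
            # cheapest subset of built cells covering this period's demand
            period_best = None
            for on in product([0, 1], repeat=len(built)):
                on_cells = [n for n, o in zip(built, on) if o]
                covered = set().union(*(CELLS[n][2] for n in on_cells)) if on_cells else set()
                if active_tps <= covered:
                    cost = sum(CELLS[n][1] for n in on_cells)
                    period_best = cost if period_best is None else min(period_best, cost)
            if period_best is None:
                feasible = False
                break
            opex += period_best
        if feasible and (best is None or capex + opex < best[0]):
            best = (capex + opex, built)
    return best
```

On this instance, installing the two micro cells beats the single macro cell: the off-peak period can be served by one micro cell alone, which is exactly the cell-sleeping saving the joint model is built to exploit.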
INFORMATION TECHNOLOGY

This work was developed under the supervision of Prof. Sergio Matteo Savaresi, from Politecnico di Milano. The goal of this work is to investigate the potential of semi-active suspensions and active aerodynamic surfaces for the purpose of developing centralized control strategies, which can improve several aspects of the ride experience towards an ideal multi-objective vehicle control. It is in fact shown how these actuators can be used for multiple purposes, from comfort enhancement, through road holding and handling, to the improvement of steering and braking capabilities. A new control strategy (namely the Multi-objective Chassis Control, also referred to as MCC) is also developed, both for semi-active suspensions and for Active Aerodynamic Surfaces (AAS); it makes use of logical control rules and is independent of vehicle models, similarly to the skyhook algorithm. The MCC is therefore characterised by an extreme implementation simplicity, and it can easily be employed on any 2-wheel or 4-wheel vehicle without re-tuning, being not model-dependent. In addition to these qualities, it can provide better performance with respect to decentralized control strategies (such as the skyhook), due to the synergistic usage of the actuators. It is also shown how MCC allows decoupling the comfort objectives, namely the reduction of heave, pitch and roll oscillations, making it an extremely versatile and powerful algorithm.

In the beginning, a complete study on the potential effectiveness of active aerodynamic surfaces is carried out. In particular, two configurations are presented for AAS installation on each vehicle corner, and decentralized control techniques are developed with the goal of enhancing comfort and road holding, respectively. Two model-based controllers are designed on the classical quarter-car model. The proposed design and the several trade-offs involved in the tuning are discussed considering several parameters. In particular, the effects of different tunings, the aerodynamic surface size, the road disturbance amplitude, the vehicle mass and the vehicle velocity are thoroughly discussed and numerically analysed, from the performance and power requirements points of view. Furthermore, the AAS control is tested together with semi-active suspension systems, showing that the two approaches are additive from the performance point of view. The performance improvement comes at the cost of power requirements and a complex mechatronic design. If compared against semi-active suspensions, the cost-benefit ratio does not seem compelling, but in terms of pure performance improvement the proposed AAS system outperforms semi-active suspensions: it yields performance comparable to that of a fully active suspension at a fraction of the power requirements.

The most popular comfort-oriented algorithms developed for semi-active suspensions are typically designed on the quarter-car model. When it comes to actual implementation on real vehicles, four (or two, in the half-car model) controllers usually take care of the comfort enhancement of each single corner, neglecting the interactions among them. Thus, several simulations are carried out in this work, where a half-car and a full-car model are endowed with semi-active suspensions, each one controlled with the same decentralized strategy. Both frequency- and time-domain evaluations are performed in order to assess comfort performance, showing how classic control strategies perform when employed in a decentralized fashion. Afterwards, a new centralised methodology that can improve comfort by means of the synergistic use of all the semi-active suspensions in a vehicle is presented. The advantages of the MCC technique lie in its simplicity: similarly to the classic logic-rule-based strategies for semi-active suspension control, the MCC algorithm is extremely simple and its implementation on a real vehicle is trivial. The sensors needed to operate it are an inertial unit on the chassis, which is able to provide the controller with heave speed, pitch speed and roll speed, in addition to potentiometers measuring the suspension deflections (so that the suspension deflection speed can be computed by derivation). Moreover, since the MCC methodology is not model-based, it can be readily employed on different vehicles without any re-tuning. Another great potential of the strategy is that the control objectives can be decoupled in order to define which variable the control system needs to focus on more, among chassis heave, pitch or roll reduction. The parameters that need to be modified in order to modulate the algorithm action on each objective are extremely concise and trivial to tune. Several time-domain and frequency-domain tests are carried out, showing that the performance of the classic semi-active control strategies can be outdone by the MCC methodology. In addition, due to the extreme simplicity of the algorithm, it is shown how MCC can be rearranged in order to control active aerodynamic surfaces. Several simulations are conducted in order to assess the new control system performance, highlighting the higher effectiveness of the methodology in enhancing vehicle comfort with respect to decentralised control strategies.

Later, a preliminary concept of multi-objective vehicle control by means of active aerodynamic surfaces is presented. It is in fact shown how these actuators can be used for multiple purposes, switching among different control modes according to the driving scenario. A ride mode map is presented, together with the switching conditions among the ride modes, namely the driving mode, the braking mode and the steering mode. For each ride mode a control strategy is developed, and the outcomes are analysed by means of time-domain and frequency-domain performance assessment tests. It is thus noted how AAS can be successfully employed for enhancing several aspects of the ride quality, such as comfort, braking efficacy and steering capability.

Finally, since it is shown that the knowledge of the actual vehicle mass is vital for the development of a model-based AAS controller, a load estimation algorithm is developed. The spectrum-analysis-based method is shown to be able to provide a reliable estimation of a vehicle's load and of its distribution without the need of measuring either the road profile or the wheel vertical motion, but simply by processing the vertical acceleration of the chassis and the pitch speed, the latter being required to provide additional information on the load distribution. The method is shown to be effective enough to discern among a copious but finite set of loads and distributions. Naturally, the potential of the developed estimation methodology goes beyond effective AAS controller tuning, since mass knowledge is considered vital information to prevent the criticalities that arise in several other applications when the load can vary during vehicle usage. This can happen for example with public transportation, garbage collectors, or private vehicles, especially motorbikes, in which the load variation with respect to the vehicle mass is high.
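The family of model-free, logic-rule strategies that MCC belongs to can be illustrated with the classic two-state skyhook rule it is compared against: damping is switched high only when dissipation helps the chassis. This is a sketch of the well-known textbook algorithm, not of the thesis' MCC itself, and the damping values are arbitrary placeholders.

```python
def skyhook_damping(v_chassis, v_deflection, c_min=800.0, c_max=3800.0):
    """Two-state skyhook rule for a semi-active damper.

    v_chassis:    vertical velocity of the chassis corner (m/s)
    v_deflection: suspension deflection speed (m/s), e.g. from a
                  potentiometer signal differentiated over time
    Returns the requested damping coefficient (N*s/m): high damping
    only when chassis and deflection velocities agree in sign.
    """
    return c_max if v_chassis * v_deflection > 0 else c_min
```

Like MCC, such a rule needs no vehicle model and no re-tuning across vehicles; MCC extends the idea to heave, pitch and roll objectives simultaneously.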
FRONT-SIDE AND BACK-SIDE ILLUMINATED SPAD

In recent years there has been a growing request for all-solid-state smart imagers, fostered by the needs of a number of applications which demand complex and advanced data collection with high sensitivity (possibly at the single-photon level) and very high acquisition speed (possibly thousands of frames per second). Among these applications we can mention molecular imaging, fluorescence lifetime imaging, micro-array based biological analysis, confocal microscopy, and adaptive optics; as well as safety (e.g. automotive, environmental surveillance, traffic and workplace safety monitoring, product safety analysis, food and agriculture quality and safety assessment) and security (access control, biometrics, surveillance systems, dangerous agents monitoring, homeland security, fire hazards) scenarios, which simultaneously require: single-photon sensitivity in the 300 nm - 900 nm wavelength range; integration times as short as microseconds (for precise time-tagging or videos of fast optical transients); an instrumental response function of a few tens of picoseconds for the estimation of the arrival time of a single photon (e.g. for time-correlated single-photon counting); sufficient horizontal and vertical pixel resolution to image a broad field of view; an adequate image repetition rate to allow real-time image acquisition and processing; and low power consumption, compactness, and low weight for reducing costs and for ease of installation and maintenance.

Nowadays, the imager market offers a broad portfolio of commercial- or scientific-grade cameras, ranging from consumer CMOS active pixel sensor (APS) cameras up to high-end CCD imagers. However, none of them simultaneously offer high speed and ultra-high sensitivity: CCDs reach sensitivity close to the single-photon level but necessarily require cooling and long integration times (i.e. very low frame rates); APS imagers provide video rates but with relatively limited detection efficiency, thus requiring brightly illuminated scenes; special kinds of CCDs, like intensified or electron-multiplying CCDs, can reach sensitivity close to the single-photon level but are costly and bulky and necessarily require a cooling system; photomultiplier tubes and micro-channel plates are well-known single-photon detectors, but they cannot be integrated with the read-out electronics.

The aim of this doctoral work was to design a monolithic Single-Photon Avalanche Diode (SPAD) array and read-out electronics, able to provide simultaneously both high frame rates and single-photon sensitivity. The array was fabricated in an automotive-grade CMOS technology to ensure scalability, reliability, and low cost. Moreover, with the aim of enabling high-performance smart system-on-a-chip implementations, the single-photon detectors are paired with sophisticated in-pixel intelligence, able not only to deliver two-dimensional (2D) intensity information through photon counting in either free-running (down to 10 µs integration time) or time-gated mode, but also to perform light demodulation with in-pixel background suppression. The provided features allow different operating modes, thus enabling both time-domain applications, such as flash detection, fluorescence lifetime imaging or fluorescence correlation spectroscopy, as well as frequency-domain lock-in depth measurement for three-dimensional (3D) ranging in automotive vision and lidar. The development of the first ever back-side illuminated CMOS SPAD is reported.

The system electronics includes an FPGA module that processes data coming from the chip, a high-speed USB 2.0 interface for communication, and a 32 MB SDRAM for data storage. A second back-mounted board accommodates the power converters, digital logic, and analog circuitry implementing a Direct Digital Synthesizer, thus providing arbitrary waveform modulation for the light source. The camera is housed in an aluminium case supporting a 12 mm F/1.4 C-mount imaging lens, whose field of view is approximately 40°×20°. The whole system is very rugged and compact, and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W dissipation. To provide the required optical power and allow fast modulation of the optical signal, the illumination source has a modular design based on a power supply board and five 808 nm laser driver cards. The system was operated at night and during daytime, in both indoor and real traffic scenarios. The prototype was tested at Centro Ricerche Fiat and proved to be capable of high dynamic-range (118 dB), high-speed (over 200 fps) depth measurement with high precision (less than 50 cm at 45 m).

Taking advantage of the high sensitivity and fast frame-rate capability, the SPAD camera was used in collaboration with the Karolinska Institute and the Technical University of Berlin to evaluate a novel DNA sequencing method. The technique relies on the simultaneous measurement of Fluorescence Correlation Spectroscopy (FCS) and Fluorescence Lifetime Imaging (FLIM) of 4 different dyes labelling the 4 different nucleotides. With respect to other methods, this one is so sensitive that it does not need DNA multiplication through Polymerase Chain Reaction (PCR), and thus would allow one to detect the smallest mutation in a DNA sequence. This research project is still ongoing, but promising results were already obtained. A second collaboration, with the Institute of Photonic Sciences (ICFO), was also launched. The measurements carried out at ICFO were aimed at exploiting and validating the high sensitivity of the SPAD camera to overcome the typically low frame rate of CCDs in speckle contrast spectroscopy (SCOS), and to let scientists image at high speed the blood flow in living tissues in a few seconds. Overall, the opportunities given by the designed CMOS SPAD opened the path towards important and advanced imaging.

1. Designed Single-Photon Camera.
2. Photograph of the bonded SPAD imager.
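As a back-of-the-envelope reading of the reported figure, the dynamic range of a photon-counting pixel is commonly quoted as the ratio between the full-scale count and the single-photon floor; the 118 dB value then corresponds to roughly an 8×10⁵:1 count ratio. This is an illustrative sketch of the standard definition, not a computation taken from the thesis.

```python
import math

def counting_dynamic_range_db(max_counts, min_counts=1.0):
    """Dynamic range of a photon-counting pixel in dB.

    max_counts: full-scale counts in one integration window
    min_counts: noise/detection floor (one photon for an ideal SPAD)
    Uses the 20*log10 convention customary for image sensors.
    """
    return 20.0 * math.log10(max_counts / min_counts)
```

For example, a pixel saturating at 8×10⁵ counts per frame with a single-photon floor gives ≈118 dB, consistent with the figure quoted for the prototype.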
In this research, localization is treated in a more general way w.r.t. the existing definitions in the literature, since here the concepts of static localization, fingerprinting in a single-shot estimation of position information, and tracking of position information over different time instants are integrated for achieving a common goal, i.e. an enhanced energy efficiency. The localization scenarios considered in this research cover active localization, in which the target participates in the estimation process of inter-node ranging information, and also passive scenarios, in which the target does not participate in the estimation process. The general trend in the design of localization algorithms is towards achieving more accurate estimation of position. Additionally, considering the limited battery life of wireless sensor network (WSN) nodes as a determining factor in the performance of WSNs, the algorithms should respect energy efficiency constraints. The contribution of this research is mainly focused on the investigation and proposal of localization algorithms which should achieve two crucial design objectives: accuracy improvement and energy efficiency.

Considering the two above-mentioned goals, this research considers algorithms for both active and passive localization scenarios. The first section of the research deals with the investigation of energy-efficient localization algorithms for passive localization. The first part developed for passive scenarios is dedicated to the theoretical analysis of a hybrid tracking algorithm composed of active and passive steps. The analysis of this scheme, based on the Posterior Cramer-Rao Bound (PCRB), confirms that mixing active and passive cases can be an effective tool towards energy efficiency. The second part of the investigation of passive scenarios deals with energy efficiency under two perspectives. A hybrid tracking algorithm composed of Extended Kalman Filter (EKF) based tracking and Fingerprinting (FP) is proposed in order to tackle conventional problems related to the implementation of either tracking or fingerprinting separately. One of the common drawbacks of FP is the large data size, and the consequently large search space, resulting from either the vastness of the surveillance area or a finer grid resolution of the FP grid map; this limits the application of FP to small environments or to scenarios with largely spaced grid points, leading to poor localization performance. The proposed hybrid algorithm enables FP to be applied in larger environments or in environments with finer space grids. The second aspect of the scheme deals with the critical concern arising when tracking more than one passive target, i.e. making the correct discrimination among measures when ambiguity occurs in the path clusters scattered by different targets, especially when targets are moving close to each other. The final phase of this analysis is dedicated to the consideration of the applied TOA-based ranging technique (so-called soft ranging) and its potential for providing more accurate ranging measures by feeding a kind of a-priori information. This provides the EKF update steps with more precise ranging measures, leading to a better localization performance. Simulation results validate a zero-energy tracking algorithm in which the mobile target does not consume energy.

The second section of the research deals with energy-efficient algorithms for active localization scenarios. Accordingly, the allocation of transmission power among beacons is an effective tool toward this objective. We present a pervasive literature review of existing power allocation (PA) schemes and, in particular, of the optimal ones, based on the minimization of a localization error bound (SPEB) that is a function of the transmission power from each beacon. Then, two new sub-optimal algorithms are investigated. The former is based on the definition of a parameter called uncertainty area, which is a convex function of the transmission power in the pair-wise selection of beacons. Numerical results confirm a notable performance advantage of localization with Power Allocation (PA) schemes w.r.t. the case of uniform power allocation among beacons, especially for target locations in the vicinity of beacons. The latter proposed algorithm is based on the fact that the optimal SPEB-based PA approach does not show any advantage when the performance (i.e. Mean Square Error (MSE)) of the ranging estimator reaches a floor corresponding to a certain threshold in the received Signal to Noise Ratio (SNR). This corresponds to the behavior of a practical, realistic ranging estimator, where higher ranging accuracies can no longer be achieved by increasing the transmission power over a certain threshold, because of phenomena like, for example, the maximum sampling rate and computational load available in the sensors. Consequently, this sub-optimal, simplified PA algorithm is based on the redistribution of transmit power from beacons with SNR above the threshold to beacons with SNR below the threshold, realizing a type of simple adaptive power allocation (Adaptive Power Allocation, APA) directly based on the measured SNRs. Simulation results confirm that such a simple strategy can be effective in medium-low SNR regions, even w.r.t. more sophisticated optimization procedures.

MAIN RESULTS OBTAINED
I. Achievement of energy efficiency via combination of active and passive steps in localization
II. Design of a power allocation algorithm for the active localization scenario in order to improve localization performance
III. Investigation of the impact of a real ranging estimator on existing optimal power allocation strategies for WSN localization
IV. Design of a hybrid fingerprinting and EKF based tracking for the indoor passive localization scenario
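The SNR-threshold idea behind APA can be sketched as follows: beacons whose measured SNR already exceeds the threshold (where extra power no longer improves ranging accuracy) donate part of their power to beacons below it, keeping the total budget constant. The donated fraction and the equal split among low-SNR beacons are arbitrary design choices for illustration, not the thesis' exact rule.

```python
def adaptive_power_allocation(snr_db, power, threshold_db):
    """Redistribute transmit power from high-SNR to low-SNR beacons.

    snr_db:       measured SNR per beacon (dB)
    power:        current transmit power per beacon (same units throughout)
    threshold_db: SNR above which the ranging MSE floors out
    Returns a new power list with the same total power.
    """
    donors = [i for i, s in enumerate(snr_db) if s >= threshold_db]
    needy = [i for i, s in enumerate(snr_db) if s < threshold_db]
    if not donors or not needy:
        return list(power)  # nothing to redistribute
    new_power = list(power)
    surplus = 0.0
    for i in donors:
        give = 0.5 * power[i]  # donate a fixed fraction (assumed design choice)
        new_power[i] -= give
        surplus += give
    share = surplus / len(needy)
    for i in needy:
        new_power[i] += share
    return new_power
```

The appeal of such a rule, as in the abstract, is that it needs only the measured SNRs rather than solving an SPEB minimization problem.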
OPTIMAL/ROBUST CONTROL FOR ANTISLIP/SKID

The antislip and antiskid control of railway vehicles is the main focus of this work. We analyze the problem from two points of view: the single axle speed control and the whole train coordinated control. In both of them we propose innovative control algorithms to enhance the performance of current wheelslip protection devices, exploiting the advantages offered by modern control theory. Results are theoretically validated and simulation experiments are given.

Chapters 1 and 2. Introduction, motivations and background
The operation of railway vehicles requires the ability to control the rotational speed of the wheels in every environmental condition. To this purpose, a wheel slip protection device (WSP) is necessary to prevent the wheels from slipping on contaminated rails. This is done by comparing the axles' peripheral velocity with the train velocity and by controlling the speed to prevent the two from being too different. Despite the progress control theory has experienced in the last decades, we still face, especially in adverse meteorological conditions, an inadequate railway traffic response due to the difficulty of controlling the speed of the wheels on slippery rails. This problem is particularly evident where the autumnal mixture of foliage and water challenges even the most recent WSP devices. It is remarkable how technological improvements have not been able to solve this problem: an emblematic example is what happens in braking maneuvers, where initially poor performances were ascribable to the difficulty of controlling mechanical brakes with air pressure circuits, and the use of electrical motors as brakes, with their faster response times, has not been a complete answer to the problem. In the spirit of overcoming such difficulties, in this work we devoted our attention to the use of modern control techniques to push the performances over the current industrial and scientific limits, paying attention to what, in our opinion, was not yet investigated and proposing innovative ideas to overcome current limitations.

Chapter 3 - Modeling approach
The dynamics of the mechanical system involved in wheel speed control is studied. Different models are used for different design strategies, and each of these is thoroughly presented. Particular attention is devoted to the understanding of the wheel-rail adherence characteristics, and we propose a parametrization that covers the possible operating conditions while remaining relatively simple. Then an innovative adaptive controller based on such parametrization is presented and its performances analyzed via simulation. Also a discrete controller based on optimal filtering and control theory is presented, and its performances are assessed via simulation on a detailed Simulink diagram kindly provided by Alstom Transport Italy. It has also been possible to test part of this algorithm on a real-time Hardware In the Loop simulator, also provided by Alstom Transport Italy. Other than the single axle control, we devote our attention to the whole train braking performance with an innovative distributed preview control approach, and we show how a simple communication between neighboring coaches is capable of improving stopping distance performances.
The illustrated models range from a single wheel model without transmission dynamics to a simplified full train model. Each of these has different purposes. The single wheel model is the most useful to study traction and braking algorithms, as it is the simplest model capable of describing all relevant phenomena; considering the whole train, the simplified full train model allows studying the coordination and communication among coaches. The electrical nonlinear model clarifies the context of operation of the present work.

Chapter 4 - Adherence characteristic
The main source of disturbance when designing a control system for the wheel speed is the friction force between the wheel and the rail, often called adherence force. This force is also the means by which a torque applied to the wheel axle is transmitted to the rail, hence providing traction or braking force to the vehicle. In the following sections different existing models of the wheel-rail contact force are presented, which are based on the theory of elastic contact between solids. The peculiarities of each model are summarized, and from these models we propose two simple control-oriented models which are capable of describing all operating conditions.

Chapter 5 - Discrete time adaptive controller
An algorithm for optimal wheel slip control, to be readily used in the implementation of a discrete time controller, is presented. Theoretical justification is given, as well as simulation results obtained on a detailed metropolitan train model provided by Alstom Transport.

Chapter 6 - Nonlinear control with unknown adherence force
The control philosophy aims at enhancing vehicle performances by estimating the maximum available adhesion force with a nonlinear filtering, which is then exploited via a nonlinear controller. The effects of the implemented control scheme are the use of the maximum available deceleration in case of poor adhesion conditions and, consequently, a reduction in the wear rate of the rolling stock. The results are illustrated via simulation.

Chapter 7 - Distributed Control with Preview
In this chapter, we propose a method to take advantage of information exchanges between preceding and following coaches in order to improve the braking performance and to reduce the stopping distance of the train. Different braking control approaches are considered depending on the level of communication between the control units. In particular, a novel distributed braking control algorithm is proposed that, thanks to the preview of the future condition of the rail offered by the coaches at the front of the train, is capable of better tracking the maximum adherence condition. In so doing, it is possible to enhance the performance offered by the single control units, and the proposed control-based strategy showed that it outperforms the other techniques. Special attention is devoted to understanding the part played by the various parameters in the problem, such as the distance between the actuated coaches and the convergence rate of the control units. Focus has been placed on the role of communication and preview for performance improvements. As such, the dependence of the adherence curve on the train state is expressed only by its dependence on the position and the traveling speed. The communication protocols used in railway applications are introduced, to show how the proposed algorithms can be implemented and to spot the possible problems that might arise.

Chapter 8 - Simulation and tests
We provide some simulation results conducted in different scenarios to assess the validity of the algorithm proposed in Chapter 5, for which only a result obtained on a real-time simulator available in the Sesto San Giovanni Alstom plant had been shown. Here more extensive simulations have been conducted on a Simulink model of the train.
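The basic WSP principle described above, comparing the axle's peripheral velocity with the train velocity and intervening when the two diverge, can be sketched as follows. The 15% slip limit and the two-level command are assumptions for illustration, not the actual logic of the thesis' controllers or of Alstom's devices.

```python
def slip_ratio(v_train, omega_axle, wheel_radius):
    """Relative slip between wheel periphery and train speed (braking case).

    v_train:      train ground speed (m/s)
    omega_axle:   axle angular speed (rad/s)
    wheel_radius: rolling radius (m)
    """
    v_wheel = omega_axle * wheel_radius
    return (v_train - v_wheel) / max(v_train, 1e-6)  # guard against v = 0

def wsp_command(v_train, omega_axle, wheel_radius, slip_limit=0.15):
    """Release brake pressure when slip exceeds the limit, hold otherwise."""
    if slip_ratio(v_train, omega_axle, wheel_radius) > slip_limit:
        return "release"
    return "hold"
```

The thesis' contribution lies in replacing such threshold logic with adaptive, optimal-filtering and nonlinear controllers that track the maximum adherence point instead of merely bounding the slip.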
The problem of guaranteeing the correct behavior of digital systems even when faults occur has been investigated for several years. However, researchers' efforts have been mainly devoted to safety- and mission-critical systems, where the occurrence of faults (both transient and permanent) can be extremely hazardous. Nowadays, the need to provide reliability also for non-critical application environments is gaining a lot of momentum, due to the pervasiveness of embedded systems and their increasing susceptibility caused by technology scaling. While in critical applications the budget devoted to reliability is almost unlimited and is not to be compromised, in non-critical scenarios the limited budget available to guarantee the best performance and energy consumption has to be shared for providing reliability as well.

In the past, great effort has been devoted to providing strict reliability management. This led to the shared belief that reliability is to be considered from the early stages of the embedded systems design process. In fact, as this process is becoming more and more complex, approaches that do not consider reliability throughout all the design steps may lead to expensive or non-optimized solutions. Moreover, considering reliability in a holistic way allows driving the several design decisions by exploiting the synergy of both the most classical aspects and the reliability-oriented ones. Postponing the reliability assessment to the later phases of the design flow, on a system prototype, is not appealing, because a failure in achieving the desired level of reliability would be detected too late. However, the complexity of managing reliability, performance and power/energy consumption all at the same time grows exponentially, especially when several decision variables are available in the considered system. For this reason it is not possible to envision a system able to properly react to any possible scenario on the basis of decisions precomputed at design time; a new paradigm based on self-adaptability is to be designed. Self-adaptive systems are becoming quite common when dealing with such complex systems: relevant examples are available in the literature where performance management is considered.

Given these motivations, we argue that the self-adaptive paradigm is to be implemented when designing embedded systems with the aim of considering reliability as a driving dimension. In particular, in this thesis we propose a comprehensive management framework for dealing with reliability in multi/manycore embedded systems. Reliability represents the main optimization dimension and is considered both for permanent/transient fault management and for the mitigation of component aging. Energy consumption minimization has been introduced because it is directly and considerably affected by the knobs the framework exploits. Performance is taken into account as a constraint to be satisfied according to the soft real-time paradigm. The result is a cross-layer self-adaptive system for the combined optimization of reliability and energy under performance constraints.

The proposed framework autonomously takes care of the resource management problem, hiding its complexity. The overall work is organized in several layers, as shown in Figure 1. The framework implements the well-known control loop where the status of the system and of the environment is sensed (observe), adaptation is defined through decisions made at runtime to meet the specified goals and constraints (decide), and the values of the system parameters are modified accordingly (act). The designed framework is integrated in a two-layer heterogeneous multi/manycore architecture, which is considered as the reference hardware platform. At the multi-node level, the designed framework employs a hybrid approach to minimize aging while optimizing communication and computation energy. A runtime orchestrator has been designed to smartly map tasks on the available nodes, starting from pre-computed optimal mappings. Tasks are then re-mapped at runtime, by means of heuristics, to cope with the evolving conditions. Transient fault management is considered at the multicore level only, since creating, scheduling, and gathering the results of redundant threads and voters/checkers benefit from a shared-memory bus-based architecture such as the intra-node one. A rule-based system has been designed to guide the orchestrator in selecting, at each instant of time, the best redundancy-based reliability technique to satisfy the user's reliability requirements and minimize the performance overhead. This layer is located, in each node, on top of another adaptation layer that takes care of aging mitigation and computation energy optimization. This is achieved by acting on different knobs (task mapping and scheduling, resource switch-on/off, DVFS) through the synergic orchestration of ad-hoc designed controllers. Each adaptive layer has been validated in a simulation environment by executing application traces collected from execution on real architectures.

1. A graphical overview of the proposed system composition.

The envisioned framework and the design of the presented adaptation layers represent the main innovative contribution of this research work. The preliminary investigation on self-adaptive systems led to the formalization of a model for describing and organizing this kind of systems in a structured and systematic way, as well as for preliminarily validating them. Moreover, the need for estimating complex architectures' lifetime motivated the development of a lightweight framework for estimating the reliability function and the MTTF of architectures able to tolerate multiple failures, considering varying workloads. This framework is based on Monte Carlo simulations and random walks. The obtained results proved the effectiveness of the proposed approach, obtaining remarkable improvements in terms of lifetime extension and energy consumption reduction, while meeting performance constraints.
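The Monte Carlo idea behind the lifetime-estimation framework can be sketched for the simplest case: identical cores with exponentially distributed lifetimes, where the system survives until fewer than `min_alive` cores remain. This is a toy model under stated assumptions; the thesis' framework also handles varying workloads and is not limited to this failure law.

```python
import random

def mc_mttf(n_cores, min_alive, fail_rate, trials=20000, seed=7):
    """Monte Carlo MTTF estimate for a degradable multicore system.

    n_cores:   number of cores
    min_alive: minimum number of working cores for the system to be up
    fail_rate: per-core exponential failure rate (1/time unit)
    Each trial draws one lifetime per core; the system dies at the
    (n_cores - min_alive + 1)-th core failure.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        lifetimes = sorted(rng.expovariate(fail_rate) for _ in range(n_cores))
        total += lifetimes[n_cores - min_alive]  # time of the fatal failure
    return total / trials
```

For 4 cores needing at least 2 alive and unit failure rate, order-statistics give an exact MTTF of (1/4 + 1/3 + 1/2) ≈ 1.083, which the simulation reproduces.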
INFORMATION TECHNOLOGY

In this PhD work, MEMS technologies for modern ultrasonic transducers have been studied. As part of a collaboration between Politecnico di Milano and VTT - Technical Research Centre of Finland, the aim of this work is to obtain extensive knowledge on the prospects and limiting factors for these technologies, both to replace current ultrasonic transducers, fabricated with piezoelectric crystals, and to explore the possibility to further extend their field of application. The theoretical analysis and the supporting experiments reported in this work show a promising future for Micromachined Ultrasonic Transducers, based both on capacitive (CMUT) and piezoelectric (PMUT) transduction, as represented in Figure 1. The main features that stand out in these devices are: low-power operation, low production cost, ease of interfacing with electronics, and high-density array capability.

The mechanical structure has a mass-spring-damper behavior and therefore can be represented with an RLC electrical circuit. Further expanding the scheme, three different physical domains can be separated by means of ideal transformers: electrical, mechanical and acoustical. Analyzing the transfer functions thus obtained from the electrical to the acoustical ports and vice versa, two Figures of Merit related to physical parameters of the device are derived: the first is the emissivity, defined as the ratio of the emitted acoustic pressure and the applied AC voltage, and the second is the sensitivity, defined as the ratio of the produced current and the pressure impinging on the membrane. In the corresponding expressions, ω is the pulsation, h the electromechanical coupling factor, Za the acoustic impedance of air, Am the area of the membrane, k the stiffness of the membrane, b the damping coefficient and m the effective mass. All the parameters of the model are subject to process variation, and their precise knowledge is of paramount importance for the coupling to the electronics; they are obtained by means of an electromechanical characterization.

A wide selection of devices has been at disposal, with different dimensions, materials and working principles; as a consequence, to optimize the ultrasonic testing system towards a specific application, the most reliable and promising devices have been used in the transmission experiment. The experimentation system consists in a set-up for emission and detection of ultrasonic waves from two separated transducers placed in front of each other, with the possibility to change their relative positions with high accuracy. A preliminary study of signal shapes suggests to use a burst signal of a few tens of cycles at the resonance frequency of the transducers for the voltage driving the emitter. As a first result, the emission efficiency and the detection sensitivity were measured at resonance to be E = 48 Pa/V and S = 389 nA/Pa, in accordance with the predictions. The low-noise electronics designed has proven to have a lower noise floor than the transducer, allowing to measure the mechanical Brownian noise that sets the ultimate limit to the maximum distance for the ultrasound to be detected. This limit, together with the minimum distance, which is related to the saturation of the sensing electronics, sets the operative range of the devices (~5 cm). Another limit is imposed by the high directionality of the multi-membrane transducers, which, due to interference between each element's signal, has been measured to be 14° for the selected devices. The emission profile of the CMUTs is shown in Figure 2. Lower-frequency PMUTs are foreseen to overcome these limitations and to work at longer distances, but at the moment of writing the production technology has yet to reach the reliability and repeatability required.

Two proofs of concept of the envisioned applications have been provided with the same acoustic characterization set-up, exploiting an array of receivers aligned side by side: (1) an ultrasonic source was correctly detected, with relative errors below 9% in a range between 1 cm and 4.5 cm; the reduced range with respect to the one achievable by the single transducer is due to the angle between the position of the source and the normal to the membrane surface; and (2) a sound image (like in a sonar) was reproduced by detecting the presence of a source and reproducing its shape; the limited contrast that this second example has shown is a consequence of the simple custom algorithm (as this was not the aim of the work), which is poor compared to industrial patented imaging algorithms. The results discussed have been the object of two works, one regarding the effect of different materials and radii and the other demonstrating the localization performances in air, presented at IUS 2013 and Eurosensors 2014 respectively.

1. Principle scheme of a CMUT (on the left) and of a PMUT (on the right).
2. Emission profile of a CMUT transducer, where the color is the intensity of the emitted pressure from the source, placed in the position (0,0).
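The mass-spring-damper behaviour described above maps onto a series RLC circuit (m→L, b→R, 1/k→C). A minimal sketch of the resonance frequency and quality factor implied by k, m and b; the numeric values are purely illustrative and are not taken from the thesis:

```python
import math

def resonance(k, m, b):
    """Resonant frequency (Hz) and quality factor of a mass-spring-damper
    membrane, the second-order model that maps onto a series RLC circuit
    (mass m -> L, damping b -> R, compliance 1/k -> C)."""
    w0 = math.sqrt(k / m)           # natural pulsation, rad/s
    q = math.sqrt(k * m) / b        # quality factor
    return w0 / (2.0 * math.pi), q

# Illustrative parameters only (not device data from the thesis):
# k = 1 kN/m stiffness, m = 1e-9 kg effective mass, b = 1e-5 kg/s damping.
f0, q = resonance(1e3, 1e-9, 1e-5)
```

With these numbers the membrane resonates near 159 kHz with Q = 100; real CMUT/PMUT parameters would come from the electromechanical characterization mentioned above.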
Crowdsourcing, i.e., the assembling of strangers to accomplish a task, has the potential to revolutionize the way people work on the web. The promotion of crowdsourcing initiatives allows companies to easily collect and compound contributions in a distributed fashion, while letting individuals work and earn without the need for a physical working place or pre-existing employment contracts. Thanks to its high flexibility, crowdsourcing is gaining a more prominent role in both industry and academia, and it has been estimated that companies have the potential to crowdsource more than 300 billion USD of work worldwide. As the number of organizations embracing crowdsourcing increases, crowd workers are likely to become a limited resource. An important issue is therefore to understand how to obtain, retain and persuade a crowd to contribute. In this work, we are especially interested in understanding which mechanisms are effective for eliciting high quality contributions from the crowd.

Our approach is twofold. On the one hand, we focus on how a careful task design can help improve the quality of contributions. We present a characterization of the design space of crowdsourcing tasks, and we then contrast the capabilities offered by the commercially-available platforms against the proposed characterization. On the other hand, we turn our attention to the problem of assigning tasks to crowd workers. In this respect, carefully considering workers' accuracy has already proved to be the key enabler for increasing task quality. We therefore propose a task assignment policy to support the assignment of tasks in relation to crowd workers' abilities, under the assumption that workers may exhibit varying accuracy depending on their workload. We validate our findings through an extensive experimental phase. Specifically, we conduct experiments with the aim of verifying which task design dimensions affect the quality of the outcome. Moreover, we offer empirical evidence of the existence of a fatigue/learning phenomenon among workers, and we extensively validate the proposed task assignment procedure against both synthetic and real data.
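The workload-dependent accuracy assumption behind such a task assignment policy can be sketched as follows; the linear fatigue model, the rates and the worker records are hypothetical stand-ins, not the policy actually derived in the thesis:

```python
def assign(task, workers):
    """Assign a task to the worker with the best workload-adjusted accuracy.
    Hypothetical model: a worker's base accuracy decays linearly with the
    number of tasks already assigned (a simple fatigue effect)."""
    def adjusted(w):
        return w["accuracy"] - w["fatigue_rate"] * w["load"]
    best = max(workers, key=adjusted)   # greedy choice per incoming task
    best["load"] += 1
    return best["id"]

workers = [
    {"id": "w1", "accuracy": 0.9, "fatigue_rate": 0.05, "load": 0},
    {"id": "w2", "accuracy": 0.8, "fatigue_rate": 0.01, "load": 0},
]
order = [assign(t, workers) for t in range(4)]
# The initially better worker w1 receives tasks until fatigue makes w2 preferable.
```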
The interest in large-scale data knowledge discovery applications is rapidly increasing in both industry and scientific research. Examples of such applications are found in the fields of social network analysis, data mining, protein sequence analysis and the study of interactions in biological and other complex systems. The data analyzed by these applications are naturally represented by dynamic and irregular data structures such as graphs or unbalanced trees. The algorithms used, therefore, show an irregular behaviour in both control and data patterns, which has a degrading impact on performance. One of the most limiting factors is the very poor spatial and temporal locality of memory accesses caused by the unordered traversals of the data structure. Many analysis algorithms require a partially ordered visit of the entire graph, for example in breadth-first order, or random visits to the neighbourhood of each node. This kind of visit generates irregular patterns which cannot be predicted by a memory prefetcher, and because of the low temporal locality of the data most memory accesses hit the main memory, making complex cache hierarchies essentially useless. In addition, because of the small-world phenomenon, the data obtained from measuring social and economic activities often produce scale-free graphs, that is, graphs whose degree distribution approximates a power law. These graphs have an ultra-small diameter, which means that the maximum number of steps required to reach the furthest node is extremely small with respect to the number of nodes in the graph. This high interconnectivity of the structures makes it difficult to optimize the partitioning of large data sets on distributed memory machines. Therefore this class of algorithms does not perform well on current architectures for High Performance Computing (HPC), which are designed to exploit locality of accesses and regularities in the control flow. The only family of system architectures specifically designed for efficient execution of graph-based applications has been proposed by Cray with the MTA and XMT supercomputers. But they are highly expensive because they are composed of mostly custom components, which prevents exploiting economies of scale. Also, because of the necessary trade-offs, they achieve reduced performance on more regular workloads based on linear algebra and matrix operations, which are used in most of the engineering and scientific programs that require HPC technologies.

The objective of the research has been to study how to extend commodity distributed architectures with the double aim of improving the performance of applications with irregular memory patterns and simplifying the programming model. The approach followed is the design of an architecture template based on many-core processors, which extends commodity many-core architectures by introducing a small number of custom components. The new components enrich the processor functionality by providing features useful for running large-scale graph-based applications, but they do not require a modification of the core internal architecture or the memory hierarchy. Hence, the resulting architecture can perform equally well with regular and irregular applications, by enabling or disabling the additional components.

A set of four features are the keys of the architecture design. The first is a global and distributed address space, which allows the use of the Shared-Memory (SM) Single Program Multiple Data (SPMD) programming model. The second is the probabilistic reduction of dynamic hotspots, through a non-linear scrambled mapping of global memory addresses to the nodes of the distributed system. The third feature is the automatic exploitation of lightweight multi-threading to transparently tolerate the long latency of remote memory requests. Finally, the fourth is the support for fine-grained synchronization, thanks to a hardware implementation of locking routines that allows the threads to lock single words of the global address space.

The feasibility and effectiveness of the approach has been evaluated with a prototype composed of 4 FPGA devices connected by an ad-hoc network. The prototype includes the custom-designed components, which are added to an architecture based on off-the-shelf cores and communication subsystem. These components offer the four aforementioned features with a minimal impact on the chip architecture. In spite of the small size of the prototype, the performance scaling of typical irregular kernels proved the effectiveness of the approach. In addition, the FPGA prototype allowed to evaluate the technical issues related to the implementation of the proposed architecture, suggesting the technical details required for supporting other commodity processors. The performance data obtained from the prototype have also been used to formulate an analytical model, which identifies the system bottlenecks and can be used for dimensioning a large-scale distributed system.

The development of FPGA prototypes allows to reduce the design and evaluation cycle of hardware architectures with respect to full ASIC designs. However, they still require a considerable effort and amount of time, which repeats and increases for each incremental update. Therefore the research has moved on with the creation of a lightweight system simulator, in order to evaluate additional features using high-level performance models instead of a full HDL implementation. The simulator neglects modeling the details of the cache and memory hierarchy, which are irrelevant for applications that lack data locality, thus improving the simulation speed. On the other hand, it models the extended memory behavior introduced by the custom hardware components: the automatic identification of remote accesses that triggers a context switch, the scrambled address space and the fine-grained locking routines. The simulator allowed to evaluate the impact of additional architectural features, such as the support for atomic operations on the global address space. The use of remote atomic operations, in place of lock routines, allowed to significantly reduce the synchronization overhead and exposed larger amounts of parallelism, enhancing the effectiveness of the many-core system.
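The probabilistic reduction of dynamic hotspots via a scrambled address mapping can be illustrated with a toy model; the hash function, address width and node count below are assumptions for the sketch, not the hardware's actual scrambling scheme:

```python
import hashlib

NUM_NODES = 64  # illustrative machine size

def home_node(addr: int) -> int:
    """Non-linear scrambled mapping of a global address to a home node.
    Hashing the address spreads consecutive addresses (pseudo-)randomly
    across the machine, so no single node becomes a dynamic hotspot."""
    digest = hashlib.blake2b(addr.to_bytes(8, "little"), digest_size=8).digest()
    return int.from_bytes(digest, "little") % NUM_NODES

# A linear mapping would send a block of consecutive addresses to one node;
# the scrambled mapping scatters them.
nodes = [home_node(a) for a in range(16)]
```

A real implementation would work at cache-line granularity and use a cheap hardware-friendly permutation rather than a cryptographic hash; the scatter effect is the same.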
Industrial manipulators represent a significant element for the automation of some industry sectors, such as car manufacturing, or for machine tending and parts movement. However, their diffusion in productive settings such as the consumer electronics industry is hampered by an insufficient flexibility, or by an excessive cost. Human-robot interaction is a promising solution to such a problem, as cooperation between robots and workers could greatly increase robots' flexibility and, at the same time, the adoption of manipulators that are safe for human-robot interaction would reduce the costs related to environment structuring. The deployment of industrial robots in human-robot collaboration scenarios poses new challenges for robot manufacturers: guaranteeing safety for human operators cooperating with robots, while achieving productivity in unstructured environments. Robots should appear friendly to workers, avoid collisions and reduce the risk of consequent injuries. At the same time, the pursuit of safety must not diminish robots' productivity, nor should it disrupt the possibility of task completion or generate a risk of damage for the manipulator or the production setup.

Industrial robots specifically designed for human-robot interaction are becoming available in the market. However, industrial robot controllers currently lack the features needed to ensure safe and productive human-robot interaction. This thesis aims at extending an industrial controller's functionalities with a collision avoidance system, in order to allow robot operation in unstructured environments and in close cooperation with humans. This research therefore contributes to the use of industrial robots in new production scenarios and proposes a safety system which can be implemented adopting existing technologies, shortening the gap between research and application.

As a first step, a classification for the constraints composing a preplanned task is proposed, which defines the relevance of constraints for task execution and consequently identifies the possibilities for their relaxation. Once the relevance of constraints has been identified, a preplanned trajectory can be modified and adapted to the current conditions of the environment, still preserving the possibility of its successful completion. Then, a strategy for task-consistent collision avoidance is proposed, which is based on the previously presented classification and executes evasive motions exploiting relaxed constraints. For this purpose, an assessment of the danger generated by the robot on obstacles is adopted: constraints of higher relevance are relaxed as the level of danger increases. Then, exploiting the above-mentioned danger assessment, evasive velocities are computed, which are used to avoid the detected obstacles. Finally, a state machine that allows to automatically design the collision avoidance strategy from the constraints classification is proposed.

In order to integrate the collision avoidance system with an industrial controller, a system for the communication between them is proposed. Such a system allows the robot programmer to exploit the added collision avoidance functionalities using the standard robot programming interface. The communication system demonstrates the possibility of extending a standard industrial controller with capabilities for the adaptation to unforeseen events.

Two different implementations of the collision avoidance strategy are then proposed. First, null space projection is used in order to execute evasive actions consistently with task constraints. Such an application of the collision avoidance strategy is experimentally validated on a dual arm force controlled assembly task. For this purpose, a distributed distance sensor prototype is designed and created, to endow a dual arm ABB FRIDA robot with obstacle sensing. The collision avoidance strategy effectiveness is demonstrated by the capability of evading from a human entering the robot workspace through the modification of the preplanned robot task. Then, collision avoidance is formulated as an optimization problem. Kinematic limitations of the robot and unilateral operational space constraints can be effectively taken into account with such an approach, increasing the robustness and the effectiveness of the overall system. Two versions of the system are proposed, with an increasingly accurate management of the kinematic limitations of the robot and a deeper exploitation of the robot capabilities for evasion. The first version is experimentally validated on a dual arm pick and place task, while the second one is validated only through simulation.

1. A manipulation task is divided into elementary operations in order to classify its constraints.
2. The FRIDA dual arm robot with the distributed distance sensor, adopted with the first collision avoidance strategy.
3. The second collision avoidance strategy is applied on a dual arm task.
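The null-space-projection implementation can be sketched in a few lines: the evasive joint velocity is projected into the null space of the task Jacobian, so the evasion does not perturb the task-space motion. The toy Jacobian and velocity values below are illustrative only, not data from the thesis:

```python
import numpy as np

def evasive_joint_velocity(J, qdot_task, qdot_evade):
    """Add an evasive joint velocity projected into the null space of the
    task Jacobian J, leaving the task-space velocity J @ qdot unchanged."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return qdot_task + N @ qdot_evade

# Toy 1x2 Jacobian: the task constrains only the first joint's contribution,
# so the second joint is free for evasion.
J = np.array([[1.0, 0.0]])
qdot = evasive_joint_velocity(J, np.array([0.5, 0.0]), np.array([1.0, 1.0]))
```

Here the evasive component along the first joint is filtered out, while the second joint absorbs the full evasive command; the task-space velocity J @ qdot stays at 0.5.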
In recent years, social media have attracted millions of users and have been integrated in people's daily practices. They enable users to create and share content or to participate in social networking. User-generated content, i.e., the various forms of media assets publicly available and created by end-users, is published every day on the Web, and mostly in social media, at a massive scale, either in the form of textual documents (e.g., blog articles, posts on social networks, comments and discussions) or in the form of multimedia items (e.g., images and videos). Most user-generated content is about personal lives and facts about users. However, users often publish more structured and complex information.

Crowdsourcing has gained increasing importance in recent years. The term crowdsourcing generally refers to the outsourcing of a non-automatable task to people. The growth of the time spent online has led to a growth of interest in crowdsourcing. Several works have been developed, either making users actively participate in the resolution of tasks or exploiting data they generate and publish over the Web. We refer to these approaches as, respectively, active crowdsourcing (i.e., active participation of motivated users in task execution) and passive crowdsourcing (i.e., exploitation of user-generated content to extract useful information).

On the one hand, active crowdsourcing is the process of outsourcing tasks to a large group of people, called workers. In this scenario, human workers are asked to perform very specific tasks (called crowd tasks), which are usually easy for humans to solve but hard for machines. In the context of active crowdsourcing, only tasks difficult to be performed by a machine are submitted as crowd tasks. They are often based on uncertain data, since these data can hardly be processed by computers, due to their unstructured nature. Unfortunately, an appropriate modeling of the impact of a crowd task answer on uncertain data is yet to be defined. Moreover, similarly to machine resources, which have a cost, human computational resources are not freely available in any amount, and may provide erroneous answers. Consequently, an approach for the selection of the best candidate set of tasks to submit to the crowd under some fixed constraints (e.g., costs and time) needs to be devised, together with quality assurance procedures that guarantee an appropriate result quality level.

On the other hand, passive crowdsourcing denotes an alternative approach for leveraging the online activity of users for task resolution, which amounts to analyzing a huge amount of publicly available content to extract information about behaviors, interests and activities of the social media population. Researchers from different fields (e.g., social science, economy and marketing) analyze a variety of user-generated datasets to understand human behaviors, find new trends in society and possibly formulate adequate policies in response. However, due to the uncontrolled nature of users' participation on the Web, the huge mass of available data contains replicated information, as well as low quality or irrelevant content. Moreover, content is often replicated maliciously: users copy content created by others (and often subject to copyright laws), rename it and pretend they are the authors of the corresponding original content.

In this Thesis, we propose methods to overcome these problems, both in the active and passive crowdsourcing fields, with the objective of maximizing the quality of results, under the assumption of budget and time constraints.
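Selecting the best candidate set of tasks under fixed cost constraints can be sketched as a greedy, knapsack-style heuristic; the gain/cost scoring and the task records below are hypothetical stand-ins for the thesis's actual selection criterion:

```python
def select_tasks(candidates, budget):
    """Greedy selection of crowd tasks under a fixed budget: rank tasks by
    expected quality gain per unit cost and take them while budget lasts
    (a sketch of budget-constrained task selection, not the exact method)."""
    ranked = sorted(candidates, key=lambda t: t["gain"] / t["cost"], reverse=True)
    chosen, spent = [], 0
    for t in ranked:
        if spent + t["cost"] <= budget:
            chosen.append(t["id"])
            spent += t["cost"]
    return chosen

tasks = [
    {"id": "t1", "gain": 0.9, "cost": 3},   # high gain, expensive
    {"id": "t2", "gain": 0.5, "cost": 1},   # best gain-per-cost ratio
    {"id": "t3", "gain": 0.4, "cost": 2},
]
picked = select_tasks(tasks, budget=4)
```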
Switching, reliability and novel

Information technology has totally changed our human life and society. The widespread diffusion of internet and mobile technologies had an impact on the world that is probably more dramatic than the invention of steam engines and cars at the beginning of the XX century. This revolution has been possible thanks to the big efforts put by electronics companies into the development of integrated circuits, according to Moore's law. The MOSFET transistor, at the basis of computation in electronic processors, was shrunk from the 20 µm channel of 1975 to the 14 nm channel of 2014: there is no other technology in history that was able to accomplish such a dramatic improvement in such a short time. In the Turing machine approach, the ability to store information (memory function) is as important as the ability to process the information (logic function). For this reason, the development of logic devices (transistors) came along with the development of memory devices. Modern PCs and mobile devices make use of several types of memory, which differ in terms of speed, cost and data retention time. Among these, the non-volatile memory must retain data for years, even when the power supply is switched off. The dominant non-volatile memory technology of the past decades has been the Flash memory, in which the bit of information is stored as an electric charge in the floating gate of a MOSFET device. Nowadays, the Flash technology is facing several issues related to scaling, among which the most important are random telegraph noise, electrostatic control of the floating gate, and variability. For this reason, memory companies are trying to find alternative solutions to Flash memory. One of the most promising technologies, which has already reached industrial maturity, is the phase change memory (PCM). The PCM is a particular type of resistive memory, where the reversible phase transition of the active chalcogenide material, usually Ge2Sb2Te5 (GST), is used to store the logic bit of information. The two stable states in the memory correspond to the high-resistance amorphous phase (reset state) and the low-resistance crystalline phase (set state). PCM devices have been scaled to the 20 nm size, while low-power and nanosecond-switching operation has been demonstrated. On the other hand, the ultimate scaling of the PCM is still unclear, due to the impact of random telegraph noise, crystallization, and resistance drift. Nowadays, a deeper knowledge of the PCM physics is strongly requested to drive the development of the technology in the years to come. This motivates the need for research activities, such as the ones described in this doctoral dissertation.

The introductory chapter of this thesis provides an overview of the current non-volatile memory (NVM) scenario, subdividing the possible technology evolutions between an evolutionary scenario and a paradigm shift. The phase-change technology is then introduced, dealing with its history, the basic operation and the elementary physical description. This chapter reviews the current state of the art in the physical comprehension of sub-threshold conduction, threshold switching, crystallization and structural relaxation, providing the basic elements needed for the comprehension of the following four chapters. Finally, the perspectives of the PCM technology are discussed, with a quick glance at the so-called PCMS architecture, which is expected to solve the current PCM limitations in terms of size scaling by stacking the memory element and a cell selector made of another chalcogenide material.

The second chapter is devoted to the study of structural relaxation-related phenomena in the amorphous phase of phase change memory devices. The chapter is particularly focused on the characterization and modeling of resistance drift: both the selector and the memory devices are affected by drift. Then, we show how it is possible to accelerate threshold voltage drift by means of electrical pulses in the subthreshold region, which could represent an important tool from the application point of view to limit the drift effect in the memory device. Finally, the study is extended to the modeling of resistance drift in the crystalline state of Ge-rich Ge-Sb-Te alloys for embedded non-volatile memory applications (ePCM). We show evidence of resistance drift and decay, which are attributed in our model to structural relaxation at the grain boundaries of the poly-crystalline state, and to grain boundary coalescence, respectively.

The third chapter of this thesis deals with a detailed study of the retention capability in PCM on a large statistical scale. Such studies are fundamental in order to allow large arrays to properly satisfy the data retention requirements. A wide experimental characterization is then provided in the temperature range below 180 °C, presenting a detailed study of the cell-to-cell and cycle-to-cycle variability. The overall variability is interpreted through a compact Monte Carlo model, able to explain the cell-to-cell and the cycle-to-cycle variability as well as the pseudo-repeatable characteristics.

The fourth chapter is dedicated to the study of crystallization kinetics in PCM. Our work shows evidence of non-Arrhenius crystallization in GST directly in PCM devices, by comparing the thermally induced crystallization (thermal regime) with the electrically induced crystallization (pulsed regime). The non-Arrhenius crystallization, leading to different activation energies in the Arrhenius plot of crystallization time in the two regimes, is attributed to the fragile nature of the GST glass and to the breakdown of the Stokes-Einstein relation above the glass transition. We propose a new experimental technique to study electrically induced crystallization down to the holding current. In this way, we were able to extend our study of crystallization kinetics and to characterize the set transition in a wide time range from 50 ns to 10 s. Then, we model crystallization in PCM by a finite element approach, which is based on filamentary crystallization after threshold switching and on non-Arrhenius crystallization kinetics. Finally, we show evidence of electrically induced crystallization in the subthreshold regime, by performing continuous current stress experiments at low current (1 µA) and for relatively long times, in the range of 10³ s.

Finally, novel logic applications of PCM are explored: the resistance state of the device can be changed by means of electrical pulses applied to the cell. In our work we are able to accomplish a complete set of Boolean logic operations, namely the NOT, the NAND and the NOR operations. When compared to standard CMOS logic, the PCM logic offers the advantages of logic-in-memory, of reconfigurable logic and of zero static power dissipation, while it shows worse performance in terms of dynamic power consumption, switching time and endurance. This work paves the way for a new field of application for PCM, which together with neuromorphic computation makes this technology attractive as an alternative way to compute information in the big data era of the present days.
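The non-Arrhenius behaviour reported above is easiest to appreciate against the baseline Arrhenius law for the crystallization time; the prefactor t0 and activation energy Ea below are illustrative values, not the ones extracted in the thesis (which finds this law breaking down above the glass transition):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def crystallization_time(T_kelvin, t0=1e-14, Ea=2.0):
    """Arrhenius estimate of the crystallization (retention-loss) time:
    t = t0 * exp(Ea / (kB * T)). t0 [s] and Ea [eV] are illustrative."""
    return t0 * math.exp(Ea / (K_B * T_kelvin))

# Retention worsens steeply with temperature under an Arrhenius law:
t_400K = crystallization_time(400.0)
t_450K = crystallization_time(450.0)
```

A single (t0, Ea) pair fitted in the thermal regime would over- or under-predict the pulsed-regime data, which is exactly the signature of the non-Arrhenius kinetics discussed in the fourth chapter.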
A General Sensor-fusion and Parameters For instance, on a mobile and typical solutions require architecture. A comprehensive
INFORMATION TECHNOLOGY
artificial agents able to perform depending on the operating hazard avoidance and safe readings, intrinsic matrices during normal robot operation. A modular formulation of the
given tasks with a certain degree environment and on the positioning of the robot arm, and depth distortion pattern information fusion problem has
of autonomy. These tasks always required degree of autonomy. for a total of twelve cameras of a RGB-D cameras, to name As robotic systems face new and been obtained based on state-
involve interaction with the In general, as the assigned employed in navigation, plus an a few. To ensure that sound more advanced tasks, system of-the-art factor-graph inference
environment, whatever it is our tasks grow in complexity, the inertial measurement unit. and consistent state estimation developers and researchers techniques; it allows to handle
familiar physical world or some set of variables that have to can be achieved, it is often are required to handle very arbitrary number of multi-rate
virtual scenario. be observed increases and As new, noisy, possibly required to determine these complex sensor-fusion and sensors and to adapt to virtually
multiple sensors are required. As contradictory, evidence comes parameters with a high degree parameter calibration problems. any kind of mobile robot
In principle, most of these tasks heterogeneous sensors observe from multiple, heterogeneous, of accuracy. However, it is often Despite the wide variety of platforms, such as Ackerman
could be performed relatively different aspects of the reality, sensors, processing has to be difficult to to determine these solutions available in the steering vehicles, quadrotor
easily if only the robot knew redundancy in perception results applied in order to fuse the by directly inspecting the robot literature, platform dependent unmanned aerial vehicles, omni-
certain quantities such as its own in an increased fault tolerance available information and (think about the case of 3-DoF specifications make them not directional mobile robots.
position, the position of its goal, and robustness with respect to update the robot internal orientations), while others are directly applicable, or require Different solvers are available
the current distance from walls, unforeseen situations. model of the world. This is simply not directly observable, adaptations, enhancement, or to target both high-rate online
if the planned path towards the model is often called belief, and e.g., the matrices of intrinsic substantial extensions. The lack pose tracking tasks and offline
goal is free from obstacles, and where possible obstacles are located. Unfortunately, these variables are seldom directly observable in practice. Moreover, even in scenarios where the operating conditions, such as the light conditions or the site map, can be controlled or jointly designed with the robotic system, there will always be inescapable degrees of uncertainty in the robot and environment state.

In order to bound the uncertainty in their knowledge, most modern robots employ sensors and maintain an internal model of the state of the world; this model is updated according to observation evidence and it is then employed to make decisions about how to accomplish the assigned tasks. As an example, let us consider the case of robotic systems for space exploration, such as the Curiosity rover, part of the Mars Science Laboratory mission. Because of the delay in communication between Earth and Mars, the robot cannot be teleoperated, and waiting for human instructions in each unforeseen situation is clearly impractical. Thus, the robot needs a high degree of autonomy, at least for elementary tasks such as heading towards given positions and obstacle avoidance. To this end, several sensors are employed; leaving aside the scientific experiments and general-purpose elements such as the mastcam, a multi-spectral, high-definition camera, the rover has two pairs of navcams, to acquire stereoscopic 3-D images, plus four pairs of hazcams.

The set of sensors available is not the whole story: modern robots also include an explicit characterization of the uncertainty regarding variables which are critical with respect to their tasks. The problem of how these internal beliefs can be consistently updated as new observations become available has been a subject of active research in the last fifty years, and it is still ongoing. Many techniques have been proposed and effectively employed in several applications. Notable examples are the Extended Kalman Filters, or, in general, recursive Bayes filters, and, more recently, graph-based optimization techniques.

However, hardware sensors, or the pre-processing to be applied on raw data, often involve calibration or tuning parameters that turn out to be critical to build internal robot beliefs, such as camera parameters. A number of ad-hoc solutions have been proposed in the literature to handle accurate calibration of very specific sensor configurations (and they are still a subject of active investigation). These techniques often rely on artificial environment structures, such as checkerboards in camera calibration, or on the availability of external information not produced by the set of sensors being calibrated, such as position ground truth. Unfortunately, relevant parameters might change over time, such as biases in gyroscope sensors, which depend on environment temperature, motion, and on a number of other factors. In these cases offline, ad-hoc calibration procedures and environment structures cannot be employed. The lack of off-the-shelf, flexible solutions which are deployable with minor effort undermines the availability of baseline solutions to compare new approaches against, and often requires researchers to develop from scratch even very simple sensor-fusion algorithms, reinventing the wheel and sacrificing reusability.

In this work we introduce ROAMFREE, a general, open-source framework for multi-sensor fusion and parameter self-calibration in mobile robotics. In its development, we employ and extend mathematical and software engineering techniques to ensure that the resulting framework can be easily specialized to handle specific cases, and some of its components replaced without any change to the overall system, providing accurate trajectory smoothing and parameter self-calibration. An extensive evaluation of the resulting framework has been performed on different mobile robots. ROAMFREE has already proved its flexibility and out-of-the-box deployment in several real-world information fusion and sensor self-calibration problems.
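The recursive Bayes filters mentioned above can be illustrated with their simplest instance, a scalar Kalman filter. The sketch below is not part of ROAMFREE, and all variable names and noise values are hypothetical; it only shows how a belief (mean, variance) is consistently updated as new observations arrive.

```python
# Illustrative sketch (not ROAMFREE itself): a 1-D Kalman filter,
# the simplest case of the recursive Bayes filters mentioned above.
# Noise magnitudes and the observed quantity are made up.

def kalman_1d(belief, prediction_noise, measurement, measurement_noise):
    """One predict-update cycle for a scalar state (e.g. robot heading).

    belief: (mean, variance) of the current state estimate.
    Returns the updated (mean, variance) after fusing the measurement.
    """
    mean, var = belief
    # Predict: uncertainty grows while the robot moves.
    var += prediction_noise
    # Update: blend prediction and measurement by their relative confidence.
    gain = var / (var + measurement_noise)
    mean = mean + gain * (measurement - mean)
    var = (1.0 - gain) * var
    return mean, var

belief = (0.0, 1.0)            # initial estimate: mean 0, variance 1
for z in [0.9, 1.1, 1.0]:      # three noisy observations near 1.0
    belief = kalman_1d(belief, 0.01, z, 0.25)
# The mean drifts towards 1.0 while the variance shrinks.
```

Graph-based approaches such as the one adopted in ROAMFREE generalize this idea by optimizing over a whole window of past poses and calibration parameters at once, instead of summarizing them in a single Gaussian belief.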
INFORMATION TECHNOLOGY

Study of a collaborative modelling procedure to ease array-based multivariate transformations of public environmental data, along with the architecture of a collaborative repository of semantic modelling meta-information based on the procedure. The procedure, Semantic Array Programming (SemAP), is intended as a lightweight paradigm to support integrated natural resources modelling and management (INRMM), in the context of wide-scale transdisciplinary modelling for environment (WSTMe, here tested from catchment up to regional and continental scale).

It is a common experience among computational scientists to codify even short algorithms with remarkably longer implementations if no out-of-the-box solutions are available. Computational science algorithms not rarely deal with large amounts of data with a precise (despite sometimes nontrivial) semantic structure. If so, data may be organised in multiple groups with homogeneous semantic content. Examples of such groups are matrices, time series, tuples, graphs or more generic multi-dimensional arrays. Geospatial problems may often associate geographic information to particular arrays: for example, spatial regular grids of data are georeferenced matrices.

Domain-specific frameworks may offer a convenient option for dealing with standard problems within a given sectoral domain. Object-oriented approaches might enable information to be represented and transformed in sophisticated, flexible ways. The objects of a monolithic model are typically straightforward to propagate and effective in transferring structured information with default behaviours/assumptions. However, this communication is more difficult to achieve for non-monolithic models using several programming languages and tools, with multiple teams involved and possibly no single expert able to cope with the overall integration complexity. Within a particular discipline, a particular research team, or a specialised modelling approach, a significant part of the overall information on the semantics of data and data-transformation models (D-TM) may be taken for granted. This means that a core base of knowledge might safely remain unexpressed among experts in the same domain. Unfortunately, this is not the case whenever that particular domain of knowledge has to interact with other domains, perhaps quite far from it. Namely, when a set of shared practices and knowledge has to be relativized from the universal set of the research activities to become a simple specialized module within a transdisciplinary context, the common sense evaporates. As a consequence, it should be communicated in a simple but also compact and unambiguous cross-disciplinary way.

Array Programming (AP) might support part of this task. AP originated for reducing the gap between mathematical formulation and code implementation, by introducing very concise operators and coding patterns to deal with variables potentially composed of billions of elements and considered as atomic (with correspondingly terse manipulation). AP data structures can offer a support already widespread (given the extensive use of AP languages, e.g. MATLAB/GNU Octave, GNU R, Python with NumPy/SciPy, IDL) and noticeably less arbitrary/restrictive than a particular choice (within a virtually infinite set of possibilities) of objects to be shared among multiple and highly heterogeneous modules. However, this support is still poorly exploited. AP data structures are very general: multi-dimensional arrays where the value of some elements may be infinite. The first key idea of SemAP thus consists in complementing existing generality with array-based semantic constraints. The second key idea of SemAP is to encourage modularisation of data-transformations so as to easily propagate the semantic support to lower-level sub-D-TMs, which might prove helpful even to better explore software uncertainty. Modularisation may be seen as an abstract semantic constraint, in order for the array of components of a given algorithm (i.e. their sub-D-TMs) to be made explicit in the algorithm implementation.

SemAP has been formalised and expanded to address geospatial problems by means of a problem-driven approach focused on the broad heterogeneity in the European continent. Real-world case studies illustrate collaborative applications to WSTMe problems in Europe. The case studies are essential to build the reference repertoire serving as a guidance for supporting the community of scientists involved in applying the modelling procedure, which is meant to drive a collaborative, peer-reviewed repository of metadata and data-transformation models related to web-available environmental datasets. The transdisciplinary collaborations in developing the case studies consolidated the growing research community interested in the approach.

The first case study introduces a model for estimating soil erosion by water: SemAP is applied to extend a well-established environmental model (RUSLE), both to its general multiplicative structure (whose factors are extended exploiting the array semantics of the problem at model scale) and to a specific critical module (where array-based multiplicity is introduced for mitigating extrapolation errors at module scale with a robust ensemble approach). The second case study deals with the temporal dynamics of a complex modelling and management problem under deep uncertainty. A SemAP-enhanced modelling architecture is introduced for large wildfire behaviour prediction, assessment and control, focusing on the multiple heterogeneous sources of uncertainty. A data-driven robust modelling explores the array of multiple fuel models, meteorological disturbances and fire control strategies. Despite the local spatial scale of wildfires, their impacts may far exceed this scale (off-site impacts): the architecture is explicitly designed in order for large events in the European continent to become susceptible of real-time, rapid (i.e. approximated) assessment. The third application characterises landscape patterns to support biodiversity policy: landscape indices are analysed to emphasize their implicit common structure while also suggesting new indices. Other index families are analysed with the help of a GeoSemAP workflow. A SemAP-based nonlinear statistical analysis (brownian distance correlation) shows the least correlated indices. The first three case studies deal with self-standing topics. The fourth one highlights the flexible reusability of specific SemAP techniques as specialised modules. Relative Distance Similarity illustrates the application of robust ensemble methods.

Finally, statistics are presented on the number of unique authors who contributed to SemAP-supported publications, along with the unique authors citing these works. Despite the intrinsic variability, these initial data show how the proposed paradigm is currently under active expansion within the community of potentially interested researchers.
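The idea of array-based semantic constraints checked at module boundaries can be sketched as follows. This is a hypothetical illustration in Python (the thesis' reference implementations target AP languages such as MATLAB/GNU Octave); the function names, the declared constraints and the toy multiplicative model are all invented for the example.

```python
# Hypothetical sketch of SemAP-style semantic checks: a data-transformation
# module declares constraints on its input arrays, and they are verified at
# the module boundary so the semantics propagate to sub-modules.

def check_semantics(array, nonnegative=False, finite=False):
    """Raise ValueError if a 2-D array violates its declared semantics."""
    flat = [x for row in array for x in row]
    if nonnegative and any(x < 0 for x in flat):
        raise ValueError("negative value in a nonnegative array")
    if finite and any(x != x or x in (float("inf"), float("-inf")) for x in flat):
        raise ValueError("non-finite value in a finite array")
    return array

def soil_loss(rainfall_erosivity, erodibility):
    """Toy multiplicative model (RUSLE-like structure, illustrative only)."""
    check_semantics(rainfall_erosivity, nonnegative=True, finite=True)
    check_semantics(erodibility, nonnegative=True, finite=True)
    # Element-wise product of the two factor grids.
    return [[r * k for r, k in zip(rr, kk)]
            for rr, kk in zip(rainfall_erosivity, erodibility)]
```

A call such as `soil_loss([[2.0, 3.0]], [[0.5, 1.0]])` succeeds, while passing a grid containing a negative or non-finite value fails loudly at the module boundary instead of silently propagating a semantically invalid array downstream.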
Dams and reservoirs are essential to satisfy human demand of water for a plurality of uses: irrigation, industrial and domestic supply, hydropower production, flood mitigation and recreation. The management of these multi-purpose systems is difficult, since it has to deal with the relevant conflicts between those interests. The definition of a satisfactory and stable compromise requires the design of regulation policies for the reservoirs, which every day objectively establish the release to be operated from each reservoir, taking care of all the interests. The design of such policies requires setting up and solving a complex Multi-Objective Control Problem, in which the interests of the parties are quantified by indicators, the values of which are estimated with large, physically-based models that describe the effects of the water distribution in the downstream user systems (canals, hydropower plants, irrigation districts and municipalities).

However, even more difficult and challenging problems are posed by dam management. The construction of large dams has a strong impact on the hydrological regime and on the movement of sediments in the regulated river. Dams affect the hydrological regime primarily through changes in the timing, magnitude and frequency of high and low flows; the peak discharge is generally and intentionally significantly reduced. The trapping of sediment in the reservoirs causes a sediment deficit downstream, which in turn triggers not only erosion of the river banks but also incision of the river bed. This process can extend over hundreds of kilometers and last for decades.

Starting from the pioneering Harvard Water Project in 1962, the research community has put a large effort into building more and more complex models to estimate the effects of different planning and management decisions. A vast amount of studies have been carried out on these effects, and nowadays complex physically-based models are available to forecast and quantify them. Very detailed and precise models can be easily set up, with which the evaluation of the effects, even subtle ones such as the geomorphological effects, is possible. At the same time, the art of defining and solving complex mathematical control problems for policy design has reached a mature stage. However, how to merge large physically-based models within a design problem is still an open issue. In recent years a possible solution approach was offered: Emulation Modeling techniques, which give the possibility of substituting the high-dimensional, physically-based, distributed-parameter models with low-dimensional, lumped-parameter ones. These techniques have been proposed in technological sectors (like the aerospace industry) and have up to now found only a limited series of applications in the water world. The goal of this thesis is to explore the application of Dynamic and Non-dynamic Emulation Modeling techniques to design optimal policies of water management, considering the most important objectives of the considered case study.

Keywords: Reservoir operation, Multi-objective optimization, Response Surface, Dynamic Emulation, hydro-morphological model
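The emulation idea can be sketched minimally: an expensive physically-based simulator is sampled at a few design points and replaced by a cheap response surface that an optimization loop can query thousands of times. Everything below is a made-up toy (a scalar quadratic stand-in for the simulator), not the thesis' actual models.

```python
# Illustrative sketch of (non-dynamic) emulation modeling: replace an
# expensive model by a response surface fitted on a few simulated points.
# The "simulator" and its coefficients are hypothetical.

def expensive_model(release):
    """Stand-in for a large physically-based simulator (toy quadratic)."""
    return 3.0 + 2.0 * release + 0.1 * release ** 2

def fit_quadratic(points):
    """Exact quadratic response surface through three (x, y) samples."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def surface(x):
        # Lagrange basis polynomials for the three sample points.
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return surface

# Run the expensive model only at a few design points, then emulate it.
samples = [(r, expensive_model(r)) for r in (0.0, 5.0, 10.0)]
emulator = fit_quadratic(samples)
```

In a real setting the emulator (e.g. a radial-basis or recursive dynamic surrogate) only approximates the simulator, and the design points are chosen to keep the approximation error acceptable over the region explored by the policy-design algorithm.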
Conventional seismic imaging algorithms are based on the single-scattering hypothesis. The reverberations generated during wavefield propagation in the subsurface (multiple reflections) are usually considered as unwanted noise. Even though the most energetic reverberations are the so-called surface-related multiples, in areas of high structural complexity we usually record also strong internal and intra-salt multiples. The most common approach to handle the presence of multiple reflections in the acquired seismic data is to try to eliminate them. However, the multiply scattered recorded events have interacted with the subsurface discontinuities, thus carrying useful information about them. When properly imaged, they can enhance the seismic image and may provide additional illumination in those areas poorly imaged by conventional approaches. Recently, research on multiples has shifted its focus to the exploitation of what has often been considered only as noise: instead of treating the reverberations as noise, it is possible to use them as a source of valuable information. Both of the most common migration algorithms, Wave Equation Migration (WEM) and Reverse Time Migration (RTM), can be modified in order to include multiple events in the migration procedure; the image obtained from multiples can be used to complement the primary image.

In the first chapter of my thesis, I show the mathematical derivation on which the migration of multiples is based. I show with examples, on both synthetic and field datasets, the main benefits and drawbacks of the proposed methodology. The main issue in using the multiple reflections is the cross-talk noise: the migration procedure returns the correct image plus some artifacts related to the cross-talk of primaries of one reflector and multiples belonging to some other interface. In chapters three and four I present two techniques that can be used to identify and construct the model of the cross-talk events. One method is based on the distinctive features that primary and cross-talk events show in the pre-stack domains, and relies on an advanced interpolation methodology. With the second methodology presented, I show that the cross-talk terms can be eliminated during the migration process by modifying the imaging condition. The new imaging condition operates in a higher-dimensional space with respect to the conventional cross-correlation imaging condition: the source and receiver wavefields are cross-correlated not only in time but also based on their local coherency in the time-space domain.

The last two chapters of my thesis deal with the internal multiples. In the first chapter I used the Born scattering theory to show how to exploit the information contained in the surface-related multiples when performing the seismic imaging. In doing so, however, I did not take into account the internal multiple events. In areas of high structural complexity, though, for instance in presence of salt bodies, strong internal and intra-salt multiples are usually recorded. When properly imaged, these events can enhance the seismic image and provide additional illumination in those areas poorly imaged by conventional approaches. I propose a technique based on non-linear seismic interferometry that allows one to separately exploit the information coming from the internal multiples recorded by a conventional seismic acquisition. The original wavefield is used as input to reconstruct a new virtual seismic experiment, with sources and receivers placed underneath the formations mainly responsible for the multiply scattered events. This new dataset is constructed using both the linear and non-linear components of the Green's function, thus accounting for the internal multiple reflections.

Although multiples have been proven to be useful signal, their prediction and attenuation are still fundamental tasks. The migration of surface-related multiples can, in fact, benefit from a separation between primaries and multiples. Moreover, the identification of the reflector mainly responsible for the generation of the reverberations is very useful. In the last chapter, I propose a method for estimating both surface-related and interbed multiple artifacts, using as input the migrated section and the subsurface velocity model. The technique relies on the use of seismic demigration; it is faster than the conventional multiples prediction methods, independent from the original acquisition geometry, and it returns an image of the unwanted artifacts that can be compared directly with the subsurface image obtained through the migration of the recorded data. The method is suitable for the estimation of multiple artifacts and also as an interpretation tool for the identification of the horizons responsible for the internal multiples generation.

1. Comparison between the image obtained with primaries and with multiples. The red arrows point to some of the locations where the multiples migration is inferior to the conventional one. The areas highlighted with the yellow boxes are those where the migration of multiples provides better results with respect to the image of the primaries.
2. The internal multiples are exploited to reconstruct primaries that provide additional illumination of the subsurface. The events of interest are indicated with the yellow arrows.
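The conventional zero-lag cross-correlation imaging condition that the new method extends can be sketched in a few lines. This toy version uses nested lists indexed as [time][space] and entirely made-up wavefields; real implementations operate on 2-D/3-D arrays produced by wavefield extrapolation.

```python
# Sketch of the conventional zero-lag cross-correlation imaging condition:
# at each image point the source and receiver wavefields are multiplied
# sample by sample and summed over time. Wavefields here are toy lists.

def crosscorrelation_image(source_wavefield, receiver_wavefield):
    """Return a 1-D image: sum over time of the two wavefields' product."""
    nt = len(source_wavefield)
    nx = len(source_wavefield[0])
    image = [0.0] * nx
    for t in range(nt):
        for x in range(nx):
            image[x] += source_wavefield[t][x] * receiver_wavefield[t][x]
    return image

# A reflector lights up where both wavefields are simultaneously non-zero.
src = [[1.0, 0.0], [0.5, 0.0]]   # toy source wavefield, 2 time steps
rec = [[2.0, 1.0], [2.0, 0.0]]   # toy receiver wavefield
image = crosscorrelation_image(src, rec)
```

The extended imaging condition described above adds a further dimension to this correlation, weighting contributions by the local time-space coherency of the two wavefields, which is what suppresses the cross-talk terms.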
Human computation is a research area that focuses on exploiting human intelligence to solve computational problems that are beyond the capacity of existing Artificial Intelligence (AI) algorithms. The growth of the Web and social networks provides a massive number of people who can be leveraged to perform complex tasks, but a fundamental issue in exploiting the contribution of crowds is how to engage the potential users for the specified purposes and how to ensure the quality of their contribution. To overcome the problem, a set of approaches has been developed. Games with a Purpose (GWAPs) are digital games in which the players' actions in the game contribute to a real-world purpose outside of the game, whether it be predicting protein structures or providing labels for images. The standard way to accomplish the same type of work is to crowdsource it directly, using a service like Mechanical Turk in which contributors are paid as workers. To address the lack of extrinsic motivation that plagues traditional human computation platforms, GWAPs provide intrinsic motivation in the form of entertainment. Many GWAPs have been developed since the release of the first instance, the ESP Game, in 2003. But not all GWAPs seem to have lived up to the initial hype of transforming the millions of hours typically poured into traditional games into useful and productive work. The problem that GWAPs have faced since their inception is that the very fundamental mechanisms on which they rely to guarantee the quality of the submitted results have been considered as game mechanics, while in reality they are simply validation mechanisms. For this reason, even the most famous GWAPs were centered on experiences that aimed at maximizing the throughput of high-quality submitted content, instead of focusing on the entertainment dimension typical of other digital games, producing applications that were perceived as non-games by their users.

As happened with GWAPs, gamification, the process of using game design techniques and game mechanics to enhance traditional applications, has been able to accomplish significant results but also catastrophic failures. Once again, this phenomenon has to be attributed to poor design, due to the lack of guidelines and best practices to support the development. The main reason is the inherent difficulty of the design of both GWAPs and gamified applications, which resides in the tradeoff between purposiveness and playfulness: in a traditional application, the improper insertion of gaming elements may appear artificial and thus not produce the desired engagement effects, while on the contrary spoiling the users' productivity; symmetrically, in a GWAP the task to be solved may mismatch with the game mechanics, thus decreasing the playability of the game and failing to attract people and engage them in the execution of the task.

Another common challenge of human computation systems is data reliability. Humans are expected to be unreliable, especially in ludic environments where a playful interaction with the system, to test its borders, is expected. Therefore, players may generate false data either on purpose or for other reasons. Different strategies have evolved to deal with this issue, but they are typically tailored just to the particular task they have been applied to. As human computation tasks are by definition not efficiently solvable by an algorithm, it is necessary to find new means to handle this challenge. The lack of established GWAP design paradigms, the difficulties of player engagement and retention, and the issues of choosing or defining the right validation techniques in order to obtain meaningful results are limiting the capabilities that these systems may offer. The proposed framework investigates the design of game mechanics and motivation techniques in games in order to solve human computation tasks, by providing a set of tools that will be used to ease the development of interactive media applications that have to be integrated within media refinement tasks fulfilled by players. The work has also, dually, investigated the methodologies and approaches for gamification, that is, the injection of game-like features in traditional applications (e.g. software development, customer relationship management) to improve key performance indicators.
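The distinction between game mechanics and validation mechanisms can be made concrete with the output-agreement rule popularized by the ESP Game: a label is accepted only when two independently paired players produce it. The sketch below is a generic illustration of that rule, not code from the proposed framework; the function name and the taboo-word convention are assumptions for the example.

```python
# Illustrative output-agreement validation (ESP-Game style): accept only
# labels that two independent players both submitted, excluding "taboo"
# words already known for the image. Names are hypothetical.

def agreed_labels(player_a_labels, player_b_labels, taboo=()):
    """Return labels typed by both players, minus already-known taboo words.

    Agreement between strangers is what validates the data: two players
    who cannot communicate are unlikely to coincide on a wrong label.
    """
    return (set(player_a_labels) & set(player_b_labels)) - set(taboo)
```

For instance, `agreed_labels(["cat", "pet", "animal"], ["cat", "dog"], taboo=["pet"])` yields only the mutually confirmed, still-informative label. The point made in the text is that this is a validation mechanism, not a game mechanic: it guarantees quality, but by itself contributes nothing to playfulness.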
Energy-Aware Traffic Engineering for

The spread of ICT and telecommunication networks was sped up by the necessity of improving the efficiency of other sectors of the economy while, at the same time, reducing their energy footprint. However, the massive development of ICT infrastructures and services has produced a significant increase of the energy footprint of the ICT sector itself. To give an idea of the order of magnitude we are talking about: in 2010, the yearly energy consumption of the world's most important Internet Service Providers, e.g., AT&T and China Mobile, was over 11 TWh per year. Note that this is equivalent to the annual electricity production of a mid-sized nuclear power plant, or to almost half of the amount of electricity sold abroad by Hydro-Québec in 2011. Making telecommunications networks greener, in addition to reducing greenhouse gas emissions, may have significant economic impacts: for instance, for large companies like AT&T or Google, a modest 3% reduction of the electricity bill would result in several million dollars of savings.

In our Ph.D. research project we addressed the problem of improving the energy efficiency of wired IP networks. The choice of wired IP networks, among all the other possible domains such as data center networks, sensor networks and so on, was motivated by two joint considerations which made us think that there were both need and room for novel significant contributions in this specific field: (i) in 2011, while energy-awareness applied to other types of networks, such as wireless, had already been quite thoroughly explored, a very limited literature was available on how to make wired IP networks energy efficient. This was quite surprising, since it was clear that wired IP networks too had a non-negligible impact on the overall energy efficiency of ISPs. (ii) Due to both hardware and management limitations, wired IP networks were identified as highly energy inefficient: their consumption is always at the maximum level, independently of network utilization. It was estimated that consumption reductions in the order of 50% would have been potentially achievable by optimizing the way wired IP networks are managed.

We addressed the problem of energy-awareness in IP networks in a very comprehensive way, aiming at dealing with all the important aspects related to both network management and energy-efficiency. We first identified the main methods to make the energy consumption of IP networks proportional to the incoming traffic load, i.e., by putting to sleep the redundant network devices while guaranteeing that the active resources were able, thanks to an optimized configuration of the network routing, to guarantee the correct functioning of the network. The set of methods we developed can be identified as techniques for sleep-based energy-aware network management (SEANM). The sleeping strategy was then applied and adapted to different network contexts, with the aim of providing a comprehensive set of management tools to be used in any IP network according to the considered network configuration.

The strength of our research work lies in four main aspects: (i) the depth and scope of our work, which, differently from what can be found in the literature, covered the whole set of issues and topics relevant to energy-aware network management: we considered networks operated with different configuration settings and protocols, we studied the trade-off between network resilience and energy-awareness, and we handled the problem of coping with network traffic uncertainty by implementing robust optimization techniques. (ii) We filled a major gap in the literature. (iv) We outperformed the results obtained by our competitors in those fields where state-of-the-art work was available: in all the addressed scenarios we were able to achieve energy savings from 20% to 60%, according to the problem data and constraints. We were also the first to formalize a way to model the allocation of network bandwidth in the presence of elastic (TCP) traffic, thus providing the tools to correctly perform energy-aware network management in the presence of TCP flows too.

Here follows a more detailed summary of the work done and of the main results produced along the whole Ph.D. project:

i) We developed a novel centralized off-line optimization algorithm (MILP-EWO) to save up to 60% of the energy consumption in IP networks operated with the most popular shortest-path routing protocol, i.e., OSPF. The idea was to efficiently adjust the administrative link weights used to define the shortest paths among sources and destinations, so as to exploit only the necessary network elements and consequently put to sleep the redundant ones. Proper constraints were respected to keep network congestion under control.

ii) We integrated MILP-EWO into a novel dynamic framework. To implement this extension of MILP-EWO, we developed a new open-source network management framework, JNetMan, offering a set of APIs to be used by network administrators to easily implement management policies based on SNMP commands.

iii) We developed both exact and heuristic methods for centralized SEANM in IP networks operated with flow-based routing protocols such as MPLS. We considered a multi-period scenario according to which a single day was split among six different macro-periods characterized by a quite constant level of traffic. Limitations on the routing variability across consecutive time periods, and constraints to both guarantee quality of service and preserve device lifetime, were included. The proposed methods, which were applied to perform both off-line and on-line optimization, allowed us to reduce the daily network energy consumption from 40% up to 60%.

iv) We extended the SEANM approaches for MPLS-based networks to explicitly guarantee resilience to single-link failures and robustness to unpredictable traffic variations. Our goal was to study the trade-off between energy consumption and network survivability, while explicitly quantifying the energy cost of the latter. Tests showed the effectiveness of the proposed approach. To deal with uncertain traffic demands we applied well-known Robust Optimization techniques, such as cardinality-constrained budget uncertainty.

v) We proposed a novel bi-level optimization model to correctly manage the allocation of elastic traffic demands, such as those carried by TCP. Our modeling framework exploited the concepts of max-min fairness and proportional fairness to correctly approximate the amount of bandwidth allocated by the network to each specific flow, according to the capacity of the crossed links and the presence of other concurrent flows. We developed a SEANM application to show that, in the presence of elastic demands, the new modeling framework allows power consumption to be reduced when traffic conditions make typical SEANM approaches ineffective.

vi) Finally, we considered metro and backbone networks operated with Carrier Grade Ethernet and proposed a novel SEANM method to balance energy consumption and network congestion.
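The core sleep-based idea (keep awake the cheapest subset of devices that still carries the traffic) can be illustrated with a deliberately tiny brute-force search. The thesis solves this with MILP models and heuristics on real topologies with routing constraints; the three-link instance, names and power figures below are all made up.

```python
# Toy brute-force illustration of sleep-based energy-aware management:
# choose which links to keep awake so that demand still fits, minimizing
# power. Real SEANM models add routing, QoS and resilience constraints.
from itertools import combinations

def cheapest_awake_set(link_capacities, total_demand, power_per_link):
    """Return (power, awake_links) minimizing power subject to capacity."""
    links = list(link_capacities)
    best = None
    for r in range(1, len(links) + 1):
        for awake in combinations(links, r):
            if sum(link_capacities[l] for l in awake) >= total_demand:
                power = power_per_link * len(awake)
                if best is None or power < best[0]:
                    best = (power, set(awake))
    return best

caps = {"a": 10.0, "b": 10.0, "c": 10.0}   # hypothetical link capacities
power, awake = cheapest_awake_set(caps, total_demand=15.0, power_per_link=1.0)
# Two of the three links suffice, so one link can be put to sleep.
```

The exponential enumeration is only viable on toy instances; this is precisely why the thesis formulates the problem as a MILP (MILP-EWO) and develops heuristics for networks of realistic size.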
Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in-vitro and in-vivo. FCS is commonly implemented by using a confocal microscope to detect the fluctuations of fluorescence intensity arising from changes in the number of molecules diffusing through a small (~femtoliter) observation volume. The autocorrelation function (ACF) of the fluorescence intensity fluctuations can yield physical as well as photochemical information (molecule size and concentration, blinking or binding/unbinding rates) about the diffusing molecules. To monitor the fluorescence fluctuations, FCS measurements need to be performed at nanomolar concentrations, with typical acquisition times on the order of a few seconds to several minutes. However, faster acquisition of FCS data is desirable in two cases: in high-content screening approaches, where many molecules reacting at different locations require simultaneous interrogation, and when monitoring different locations within a living cell. Parallel FCS acquisition is then developed, with the help of multi-pixel detectors and a multi-spot excitation generation technique, which maps each excitation spot onto a target pixel of the detector. Furthermore, simultaneous data acquisition and processing is needed, demanding multi-input, high-efficiency correlators. This thesis therefore describes the design of multi-channel correlators for high-throughput FCS experiments.

Various correlators have been developed, and some are commercially available. Hardware correlators have been traditionally employed for real-time calculation of correlation functions over a certain dynamic range. Since in typical FCS measurements the correlation function spans several decades of lag times, linear channel spacing is impractical and the multi-tau algorithm is used. Real-time multi-tau correlators were formerly implemented on custom high-speed digital signal-processing hardware, either application-specific circuits or FPGAs. FPGAs now reach logic densities that allow cost-efficient implementation of even complex and resource-intensive DSP algorithms. They offer lower non-recurring engineering costs and faster time to market than more customized approaches such as full-custom VLSI or ASIC design. On the other hand, software correlators are also available; they have a fast design cycle and a flexible structure, and can provide offline operation. However, for online computations they can lack efficiency, due to the high CPU utilization rate, which prevents their application in high-throughput FCS experiments. As a result, FPGA-based correlators providing multiple channels, a high dynamic lag-time range and online operation are very promising for highly parallel FCS experiments.

A single-channel FPGA-based correlator was firstly designed employing the multi-tau algorithm. It features a maximum lag time of 150 ms with a minimum time bin of 10 ns. This correlator was adopted to characterize the afterpulsing effect of Single Photon Avalanche Diodes (SPADs), with a minimum time bin of 5 ns. This long lag-time correlator is divided into two parts to maintain real-time display. Apart from the FPGA-based correlators, a PC-based software correlator exploiting an improved multi-tau scheme was designed for offline analysis. The signal trace for the software correlator is recorded in photon mode, which counts the time interval between two pulses. Combining the FPGA-based correlator for online correlation computation, photon-mode recording of the signal traces in the FPGA, and a PC interface integrated with the software correlator for intercommunication and offline analysis, a complete single-channel correlator was developed.

Based on the successful design of single-channel correlators, a 32-channel correlator was then implemented, intended to be directly contained in a photon detection module; the scheme for the 32-channel correlator was redesigned accordingly. The photon detection module together with the multi-channel correlator, as a compact module (see Fig. 1), can be applied in FCS experiments, providing a direct signal detection and analysis path. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of correlation functions. These limitations are investigated to assess their impact on the module and to evaluate possible workarounds.

In order to further upgrade the 32-channel correlator and enable correlation computation between modules, a standalone cross-correlator module has been developed. It employs a larger FPGA and a faster data-transfer interface, and is able to hold a 64-channel FPGA-based cross-correlator with a maximum lag time of around 1 min. The module can connect with two photon detection modules, performing FPGA-based correlation computation with real-time display of all the correlograms. The signal track from each SPAD pixel can be stored in the PC for offline analysis. According both to the literature and to commercial products, few correlators are able to accommodate a high number of input channels while maintaining a large lag-time range with a low minimum time bin. Thus this correlator module would be of great interest for high-throughput FCS experiments.

1. (a) Photon detection module integrated with the FPGA-based correlator; (b) PC interface for intercommunication with the module and real-time display of correlograms.
observing fast evolving dynamic integrated circuits (ASICs) or, whose results are verified by a mounted with a 32 1 SPAD external photon detection
systems, diffusion parameters lately, field-programmable gate commercial correlator. So as to array. The module has a same modules, receiving up to 128
change as a function of time. arrays (FPGAs). Nowadays, meet state-of-art standards, lag FPGA as the one used for channels of photon counting
It is thus important to develop FPGAs can be manufactured time range of the FPGA based the single-channel design. In signals. The correlator system
FCS methods that enable in 28 nm CMOS processes correlator was then extended order to fully utilize the limited can execute instant signal-trace
simultaneous measurements at and have reached integration upwards to 80 s with minimum FPGA resources, the replication recording and fast online FPGA 2. 64-channel cross-correlator module
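The multi-tau scheme mentioned above can be illustrated with a minimal software sketch. This is an assumption-laden toy version (pure Python, offline, 8 lag channels per stage, re-binning by 2 between stages), not the thesis's FPGA implementation:

```python
# Toy multi-tau autocorrelator: lag channels are linearly spaced within a
# stage, and the trace is coarsened (adjacent bins summed) between stages,
# so lag spacing grows geometrically and spans many decades cheaply.
def multi_tau_acf(counts, n_per_stage=8, n_stages=6):
    """Return (lags_in_original_bins, normalized ACF) for a count trace."""
    lags, acf = [], []
    trace = list(counts)
    bin_width = 1
    for stage in range(n_stages):
        # after the first stage, the first half of the lags is already covered
        start = 0 if stage == 0 else n_per_stage // 2
        for k in range(start, n_per_stage):
            n = len(trace) - k
            if n <= 1:
                break
            a, b = trace[:n], trace[k:k + n]
            mean_a, mean_b = sum(a) / n, sum(b) / n
            if mean_a == 0 or mean_b == 0:
                continue
            g = sum(x * y for x, y in zip(a, b)) / (n * mean_a * mean_b)
            lags.append(k * bin_width)
            acf.append(g)
        # coarsen: sum adjacent bins, doubling the effective bin width
        trace = [trace[i] + trace[i + 1] for i in range(0, len(trace) - 1, 2)]
        bin_width *= 2
    return lags, acf
```

For a constant intensity trace the normalized ACF is 1 at every lag, which is a quick sanity check of the normalization.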
INFORMATION TECHNOLOGY

Thanks to its ability in the generation, manipulation, and detection of light on-chip, integrated photonics has been imposing itself as an enabling technology in a broad variety of fields of application, such as telecom, optical interconnection, bio-sensing, and quantum photonics. However, it is widely believed that the applications that will really benefit from the exploitation of integrated photonic technologies are those that require the aggregation of many components into complex systems-on-chip for the delivery of advanced functionalities. Indeed, much technological effort has been dedicated to scaling device dimensions down to the ultimate physical limit, so that now photonic platforms like silicon are mature enough to squeeze thousands of components into small chips. Yet, the realization of large-scale integrated circuits performing complex tasks is still a challenge, and the result is that the perspective is still on the device rather than on the system.

In photonics, as in the case of electronics, device miniaturization is not a direct synonym of device integration. In fact, integrated circuits cannot function properly without adequate tools to dynamically steer and hold each embedded device at the desired working point, counteracting functional drifts due to fluctuations in the environment, aging effects, mutual crosstalk, and fault events. Indeed, as the scale of integration increases, the aforementioned effects become critical, and consequently the monitoring, control and stabilization of components is mandatory. At the same time, while aggregating several devices, the power consumption required by tuning and control operations should not increase as well.

This thesis aims to develop devices and technologies to fill the existing gaps that prevent photonic integrated circuits from moving from a single-device level to a new system-on-chip paradigm. In this work the first non-invasive light detector was demonstrated in silicon photonic waveguides. This transparent light monitor, the ContactLess Integrated Photonic Probe (CLIPP), exploits natural absorption effects of silicon waveguides to monitor the status of a circuit without affecting its operation and without wasting any additional photons with respect to those naturally lost by the waveguide. Thanks to its non-invasive nature and inherent CMOS compatibility, many CLIPPs can be placed in a photonic circuit, thus enabling multipoint monitoring of complex devices and circuits aggregating several components.

Furthermore, low-power transparent actuators, based on the integration of photosensitive chalcogenide glasses with the silicon platform, were realized to provide post-fabrication permanent trimming functionalities, both for compensation of fabrication tolerances and for circuit reconfiguration. Athermal and trimmable silicon waveguides were developed, in order to simultaneously enable passive thermal stabilization and post-fabrication trimming of silicon circuits. The resiliency of these devices to high-power-induced thermal effects is also shown.

Active stabilization and feedback control of thermally actuated silicon resonators was demonstrated by exploiting an error signal provided by the CLIPP monitor integrated inside the microrings. Advanced control functionalities such as wavelength tuning, locking and swapping were demonstrated. Finally, advanced functionalities and concepts, such as transfer function recovery, reconfigurability and adaptability, were addressed in high-order filters and delay lines composed of several coupled ring resonators, and by even more complex integrated circuits, such as variable symbol-rate differential phase-shift keying receivers.

1. Silicon photonics chip hosting several photonic devices and circuits whose operation is non-invasively monitored and controlled by means of CLIPP monitors and CMOS electronics.
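The feedback-locking idea described above can be sketched in a few lines. This is an illustrative toy model, not the thesis implementation: the Lorentzian transmission, the dither amplitude and the gain are all assumptions chosen for the example.

```python
# Toy CLIPP-style locking loop: an integral controller steers the detuning
# between a ring resonance and the laser toward zero, using a dithered
# estimate of the slope of the monitored power as the error signal.
def transmission(detuning, linewidth=1.0):
    """Lorentzian resonance: monitored power is maximal on resonance."""
    return 1.0 / (1.0 + (detuning / linewidth) ** 2)

def lock_resonance(initial_detuning, gain=2.0, steps=300):
    """Drive the detuning toward zero; returns the residual detuning."""
    detuning = initial_detuning
    dither = 0.05  # small probe displacement used to estimate the slope
    for _ in range(steps):
        # error signal: finite-difference slope of the monitored power
        err = transmission(detuning + dither) - transmission(detuning - dither)
        detuning += gain * err  # integral action pushes toward the peak
    return detuning
```

Starting a few linewidths away, the loop settles onto the resonance peak, mimicking the wavelength-locking functionality demonstrated with the CLIPP error signal.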
smooth textureless object is an interesting Computer Vision research problem, whose applications range from prosthetics to custom cloth and shoe manufacturing and the film industry. The smoothness of the object makes it difficult to identify image correspondences, due to the continuously varying contour generator as the camera viewpoint moves. In addition, with consumer cameras (e.g., those provided with tablets or smart phones), the auto-focus property of the device is often active, letting the intrinsic camera parameters vary between acquisitions. Therefore, traditional camera calibration methods cannot be applied. In this thesis, we consider the scenario (Fig. 1) where a smooth 3D object of unknown shape is observed, against a background plane with unknown pattern, by an uncalibrated, moving camera with varying intrinsic parameters. Starting from the acquired images, we address the problem of recovering i) the value of the camera intrinsic parameters for each image, ii) the camera motion and iii) the 3D reconstruction of some points on the object surface.

We propose a new framework that can calibrate a fully varying camera from the background and the silhouettes of smooth objects, requiring no more than two frontier points per view pair. Although it is possible to calibrate a camera from the plane or the silhouette alone, these methods suffer from limited usable information, and their calibration capability is thus restricted by the object shape: a plane can provide up to 2 constraints on the camera parameters, so plane-only calibration methods cannot deal with more than 1 varying parameter, while silhouette-only calibration requires at least 7 frontier points to exist in each view pair, which is unrealistic for ordinary surfaces. Our algorithm is based on both silhouette and plane, and it can calibrate a fully varying camera from ordinary surfaces without restricting either the camera motion or type.

The proposed algorithm is divided into 3 main steps. The epipole positions in each image pair are first estimated by finding two epipolar lines that are tangent to the object silhouette, with the help of the plane induced homography. Consider a smooth object being observed against a background plane by two cameras (Fig. 2). The apparent contour (edge of the silhouette) of the object is the projection of the contour generator, a 3D curve consisting of the points whose tangent plane goes through the camera center. A frontier point is an intersection of the two contour generators, where the tangent plane passes through both camera centers. This tangent plane intersects the background plane in a line, and the traces of this line on the image planes of the two cameras are a pair of corresponding epipolar tangents, related by the plane induced homography. If we find two frontier points and their associated pairs of corresponding epipolar tangents, the epipoles are determined as the intersections of the two pairs. The search for the frontier points is formulated as an optimization problem in which we minimize the distance between the tangent line in the second image, mapped by the plane induced homography, and the convex hull of the apparent contour. Two global minima of the objective function correspond to the two outermost frontier points.

The projective reconstruction of the image sequence is then robustly computed from the estimated epipolar geometry. A view triangle is the graph representation of the geometries in a camera triplet, where the nodes represent the cameras and the links between them are the fundamental matrices. A minimal representation of the camera matrices in a view triangle is used, involving only inter-image homographies and the epipoles. The multiple-view projective reconstruction is built up incrementally using view triangles as the building block. Each time a new view is to be added to the set of calibrated views, the optimal view triangle with the smallest Sampson error is built and solved, and the calibrated set is then updated by adding the new node. Projective bundle adjustment is also performed to prevent error accumulation. The above process is repeated until all the views are added to the reconstruction.

Finally, a flexible self-calibration algorithm using the absolute dual quadric is employed to determine the camera intrinsic parameters and to upgrade the projective reconstruction to metric. Unit aspect ratio and zero skew are the only constraints adopted in self-calibration. Euclidean bundle adjustment is performed at the metric level.

In the synthetic experiments, a background plane with unknown texture is used, and the moving camera with varying intrinsic parameters is simulated using a set of spatially separated cameras distributed around the foreground object. To estimate the plane induced homography, we randomly generate a sufficient number of 3D points on the background plane, and their images are corrupted with various levels of Gaussian noise. The average relative errors on focal length and principal point with respect to the standard deviation (std. dev.) of the Gaussian noise demonstrate the accuracy and robustness of the proposed algorithm: good calibration accuracy is achieved, with both relative errors around 5% when the std. dev. of the Gaussian noise is about 1 pixel. For the real-image experiments, the images are taken with a mobile phone camera with auto-focus. Two categories of experiments are carried out: 1. the intrinsic parameters of the camera vary due to the auto-focus effect; 2. the focal length is manually adjusted by zooming in and out. In both cases, good calibration accuracy is obtained in terms of re-projection error.

1. Camera calibration scenario (the tangent plane is an epipolar plane)
2. Geometric solution for recovering the epipolar geometry
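One geometric step described above — intersecting the two pairs of epipolar tangent lines to obtain the epipole — reduces to a cross product in homogeneous coordinates. The following sketch is an illustration of that projective-geometry fact, with hypothetical helper names, not code from the thesis:

```python
# In the projective plane, a line and a point are both 3-vectors; the
# intersection of two lines l1, l2 is the point l1 x l2 (cross product).
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def epipole_from_tangents(line1, line2):
    """Intersect two epipolar tangent lines; returns a homogeneous point."""
    return cross(line1, line2)

def normalize(p):
    """Convert a homogeneous point to (x, y); assumes it is finite."""
    return (p[0] / p[2], p[1] / p[2])
```

For example, the lines x = 1 (coefficients (1, 0, -1)) and y = 2 (coefficients (0, 1, -2)) intersect at (1, 2), as the epipole would if those were the two tangents.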
Innovation and competition processes are often identifiable in science. They are responsible for evolutionary dynamics driven by innovative changes in the characteristics of individual agents and by competitive interactions that promote better performing ones. Genetic mutations and natural selection play this role in biology, but the potential applicability of the evolution paradigm can be extended to the social, economic and information sciences and to engineering. Quantitative approaches to evolutionary dynamics were born from genetics and economic game theory. While biologists traditionally consider evolutionary change separately from the demography of the interacting populations, game theorists study the relative diffusion of a given set of alternative strategies and the robustness of the corresponding equilibria with respect to invasion from potential dissidents. By contrast, the more recent approach of Adaptive Dynamics (AD) takes explicitly into account both the evolutionary and the demographic change, and characterizes the evolutionary equilibria as well as transients and non-stationary regimes. AD represents a flexible framework, based on the hypothesis of rare and small mutations, for the formal description of the evolution of the characteristics of the system in terms of ordinary differential equations.

Diversity increases in the system each time competition between innovative and resident strategies gives rise to their coexistence (evolutionary branching), and decreases when evolution brings groups of agents to extinction. Evolutionary branching is particularly interesting: in appropriate conditions, innovative agents can coexist with resident ones, and their strategies, initially very similar, can then diverge, generating two resident forms with different characteristics. The evolution of this enlarged system can again lead to a situation in which evolutionary branching is possible for one or both forms of agents present in the system. Thus, this succession of evolutionary branchings brings simple systems (with few resident forms) toward more complex and diversified configurations. The study of the possible branching scenarios is therefore very interesting in biology (where it gives an interpretation of the diversification of species from a common ancestor), but also in the social sciences, economics, technology, engineering, etcetera.

Moreover, some theoretical aspects of branching are still unstudied. For example, the mathematical conditions under which branching occurs are expressed as sign conditions on appropriate second derivatives of the competition model, but theoretical results in critical cases, in which such derivatives vanish, are not yet available. Although mathematically non-generic, these situations are quite common in applications, in which particular symmetries of the model make some derivatives vanish systematically. In conclusion, the main goal of the thesis is to focus both on the analysis of theoretical aspects of evolutionary branching in the framework of AD and on the development of models to interpret diversification phenomena in the above-mentioned fields of science.

The thesis is organized as follows. Chapter 1 is an introduction to the theory of evolution, starting from its history, passing through its basic elements (mutation and selection), and closing with the mathematical approaches to the study of evolutionary dynamics. The concepts of evolutionary diversification and evolutionary extinction are also intuitively introduced. Chapter 2 is dedicated to the Adaptive Dynamics approach, the resident-mutant competition model, the computation of the invasion fitness, and the AD canonical equation, which models the expected long-term evolution of the phenotypic traits of the coevolving community. In chapter 3 we focus on the emergence of diversity in the AD framework, that is, evolutionary branching. We classify the evolutionary equilibria with respect to their convergence and evolutionary stability, recovering the classical branching conditions, i.e., the two mathematical conditions, in terms of second derivatives of the invasion fitness, under which the system becomes dimorphic and experiences disruptive selection, thus increasing its diversity. Chapter 4 is devoted to the study of the branching bifurcation, namely, the transition from evolutionary stability to evolutionary instability along with the change of a model parameter. This bifurcation occurs when the branching condition ruling evolutionary stability changes sign. To study this critical case, a particular third-order approximation of the invasion fitness must be computed, and a novel property of the resident-mutant competition model must be exploited in order to obtain simple and general results. The case in which the other branching condition is critical is more complicated and is left for future research, but our theoretical approach is general and remains valid also for further degenerate cases (e.g., when both branching conditions are critical). In chapter 5 we develop a general methodology to study the evolution of biodiversity in eco-evolutionary two-species communities, with an application to prey-predator interactions. We then use this methodology in two fields of science different from biology. In chapter 6 we analyze the possibility that the interplay of natural and artificial selection due to fishing could lead to disruptive selection on exploited fish stocks. Finally, in chapter 7 we study the evolution of fashion purely driven by social interactions, with particular focus on the emergence of style diversity, and find that different styles can successively emerge starting from a single-style society. Chapter 8 discusses and summarizes the achievements of the work and closes the thesis with suggestions on extensions and future research.
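The classical branching conditions recalled above can be stated compactly. Using standard AD notation (an assumption here, since the abstract does not fix symbols), let $s_x(y)$ denote the invasion fitness of a rare mutant strategy $y$ in a resident population with strategy $x$. A singular strategy $x^{\ast}$ is a branching point when it is convergence stable but not evolutionarily stable:

```latex
% Selection gradient vanishes at the singular strategy x*
\left.\frac{\partial s_x(y)}{\partial y}\right|_{y=x=x^{\ast}} = 0,
% lack of evolutionary stability (fitness minimum: disruptive selection)
\qquad
\left.\frac{\partial^2 s_x(y)}{\partial y^2}\right|_{y=x=x^{\ast}} > 0,
% convergence stability (the singular strategy attracts the monomorphic dynamics)
\qquad
\left.\left(
  \frac{\partial^2 s_x(y)}{\partial x\,\partial y}
  + \frac{\partial^2 s_x(y)}{\partial y^2}
\right)\right|_{y=x=x^{\ast}} < 0 .
```

The critical cases studied in the thesis are precisely those in which one of the second-derivative expressions above equals zero, so that these sign conditions alone are inconclusive.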
Model Predictive Control Of Energy

This dissertation addresses the problem of thermal energy control in the context of large-sized commercial buildings characterized by wide open spaces and containing several thermal zones, i.e. shopping centers and malls, convention and congress centers, theatres, airports, train stations, large office buildings and so on. With respect to the residential sector, buildings in this category present specific features that make them particularly suitable for the achievement of energy reduction through the implementation of active efficiency strategies. First, they make extensive use of Building Automation and Control Systems (BACS) technologies, whose potential is however largely unexploited in practice. Second, they have assets suitable for demand-response (namely comfort reduction consequent to a temporary energy price reduction), like thermal inertia, which decouples thermal and electrical consumption. Finally, adjacent thermal zones located in open spaces with high ceilings are widely coupled and often affected by a large amount of internal gains due to lighting systems, appliances and occupancy. This often leads to (1) vertical temperature stratification, which is one of the main sources of discomfort, and to (2) thermal zone overheating, especially during winter, when the cooling power is limited. From a control-oriented perspective, this means that HVAC actuators are operated near the saturation boundaries, making the regulation problem very critical.

In this context, this research work focuses on the application of control and optimization techniques devoted to the reduction of both energy consumption and vertical temperature stratification, employing a model-based approach. A suitable grey-box class of models can be tuned using building physical and usage information (white-box), while uncertain parameters are tuned from available measurement data (black-box). The structure obtained for the multizone model is detailed enough to catch all the relevant dynamics affecting energy and indoor comfort performance, and simple enough to be employed to tune temperature regulators. The modeling technique discussed is applied to an existing commercial building. Real data collected by the on-site BACS are used for tuning model parameters and validating the obtained model over a long period of time (winter season). The building multizone model is then used to design and tune thermal energy controllers, whose aim is to achieve comfort improvements in terms of vertical stratification reduction and energy saving. On the classic side, a decentralized PI temperature control for each zone is designed. Then, the dissertation focuses on advanced building energy control based on Model Predictive Control (MPC). The implemented MPC regulator provides notable steps forward with respect to the state of the art, such as mathematical formalisms which allow dealing with the non-linear behavior of thermal actuators in common Heating, Ventilation and Air Conditioning (HVAC) plants, achieving a suitable degree of robustness for operating under real and uncertain conditions, and being able to shape and curtail the load according to the specific context of operation. In addition, particular emphasis is put on model robustness with respect to disturbance inputs (such as internal gains, solar radiation, non-manipulable HVAC loads, occupancy), whose future values are hardly predictable. For this reason the building MPC is equipped with a Kalman filter which improves system robustness with respect to unknown disturbances and model errors.

The implementation of advanced control strategies for thermal energy control shows its full potential in a smart microgrid comprising the building, generation from Renewable Energy Sources (RES) and a battery as electrical Energy Storage System (ESS). The microgrid system analysis, modeling, design and simulation is addressed in detail, paying attention to real-world operating conditions, system non-linearities and the possibility to pursue different control objectives using the same control architecture. Specifically, the solution here proposed for the control of the smart microgrid described above is the adoption of a distributed control framework, where the previously presented building MPC operates the HVAC plants in a flexible way (to better follow RES production or to deal with transmission problems) and interacts with a second MPC which controls the ESS power flow, so as to minimize the overall operating costs, to maximize the exploitation of RES and to serve building needs in terms of thermal comfort, ensuring sustainability with minimum side effects. An intense insight into the real system and components, and also significant experimental testing on a real microgrid, have been carried out in order to evaluate the capabilities of this architecture in a reliable manner, pointing out advantages and limitations of an actual on-field implementation.

The main results obtained are the following.
1. A multizone building model is identified and validated on data of an existing shopping center. The final results show an accuracy of about 0.5 °C in simulating the zone temperatures and a monthly energy consumption error below 1%.
2. The thermal model is simple enough to be used for the design of both classic and advanced controllers for the control of vertical temperature stratification. Novelty is provided in the mathematical formulation of the MPC controller, which separates positive and negative values of thermal power in order to account for the different operating costs of heating and cooling. Simulation results show that the MPC decreases monthly energy consumption by 4.5%, without any change to the HVAC plants, and achieves a reduction of more than 60% in stratification with cooling actuators in place.
3. The developed controllers are able to deal with real-world limitations of HVAC plants. Particular emphasis is put on strategies for the consistent compensation of actuator non-linearities directly into the MPC, employing variable hard constraints specific to the current operating conditions and updated at each time step. The result is an increase of the heating and cooling capacity of HVAC actuators.
4. The two separated controllers interact in order to minimize the overall energy expenditure, taking into account different pricing schemas for buying and selling electrical energy, penalties due to load imbalance, battery non-linear efficiency, limited load flexibility and non-perfect renewable power prediction. The final results show that over a period of one month the presented architecture saves costs up to 63% with respect to the same scenario managed without an ESS.
5. The system is extensively tested both in simulation and on an appropriate experimental setting, controlling a microgrid with a real vanadium battery as energy storage system, an existing 10 kW wind turbine and a resistive load to simulate the building consumption (SYSLAB microgrid at Risø DTU, Denmark). The control architecture proves to be flexible enough to pursue very different tasks, while keeping a standard and scalable mathematical formalism.
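The disturbance-estimation role of the Kalman filter mentioned above can be sketched in scalar form. This is a minimal illustrative model (first-order zone dynamics, arbitrary gains and noise covariances), not the dissertation's filter:

```python
# Scalar Kalman filter for a zone temperature model x[k+1] = a*x[k] + b*u + w,
# measured as y = x + v. The filter blends the model prediction with the
# noisy measurement, which is what gives the MPC robustness to model errors.
def kalman_step(x, P, u, y, a=0.9, b=0.1, q=0.01, r=0.25):
    """One predict/update cycle; returns the new estimate and variance."""
    # predict with the model
    x_pred = a * x + b * u
    P_pred = a * P * a + q
    # update with the measurement y
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (y - x_pred)  # innovation correction
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

def filter_series(measurements, u=1.0, x0=0.0, P0=1.0):
    """Run the filter over a measurement sequence; return the final estimate."""
    x, P = x0, P0
    for y in measurements:
        x, P = kalman_step(x, P, u, y)
    return x
```

With a constant input and constant measurements the estimate converges to the model's steady state, a basic consistency check for the predict/update cycle.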
The evolution of modern wireless standards poses stringent noise specifications on the design of frequency synthesizers for high-data-rate communication systems, limiting the tolerable jitter and spur level. Moreover, such performance must be provided at low power consumption and small area, in order to meet the requirements of mobile applications and battery-powered systems at low cost and high integration level. Therefore, the key challenge in frequency generation is the design of high-efficiency synthesizers (i.e. with low jitter at low power), and it is generally tackled relying on fractional-N phase-locked loops (PLLs) with LC oscillators, due to their better noise/power compromise with respect to ring oscillators. However, the analog content of LC-based PLLs does not benefit from technology scaling and prevents their design from easily fitting into a typical digital design flow.

In this framework, we present a high-efficiency frequency synthesizer based on a fractional-N digital PLL with a ring oscillator. Minimum jitter is obtained by developing a comprehensive analysis of noise in digital PLLs, specifically focusing on architectures with a single-bit phase detector, which are the most promising in terms of efficiency. However, the coarse quantization of the phase error makes these systems prone to the generation of limit cycles appearing as unwanted spurs in the spectrum. The random noise contributed by the building blocks can eliminate those spurs by acting as a dithering signal, without the addition of extra noise. A closed-form expression of the total output jitter as a function of loop parameters and noise sources is developed, which suggests a design strategy for noise minimization. Yet the lowest achievable value of the jitter ultimately depends on the noise of the oscillator, which is considerably high in the case of ring oscillators. An effective way to significantly reduce it, without appreciably increasing power consumption, is to rely on the concept of injection locking, but unfortunately its application has so far been bounded to integer-N synthesis, preventing the introduction of inductorless frequency synthesizers into standardized wireless systems. With this aim, we propose a technique to enable fine fractional-N resolution in injection-locked PLLs that allowed us to reach the best power/noise trade-off among published fractional-N digital frequency synthesizers without integrated inductors.
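The basic fractional-N idea — obtaining a non-integer average division ratio by dithering an integer divider — can be shown with a toy first-order modulator. This is a generic textbook sketch, an assumption for illustration, not the technique proposed in the thesis:

```python
# First-order accumulator-based dithering of a fractional-N divider:
# each reference cycle divides by N or N+1, with carries timed so that the
# long-run average division ratio approaches N + frac.
def fractional_divider(n_int, frac, cycles):
    """Return the per-cycle division ratios (mean -> n_int + frac)."""
    acc = 0.0
    ratios = []
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:       # carry out: divide by N+1 this cycle
            acc -= 1.0
            ratios.append(n_int + 1)
        else:                # otherwise divide by N
            ratios.append(n_int)
    return ratios
```

The periodic carry pattern of such a simple modulator is exactly what produces fractional spurs; randomizing it (e.g. by circuit noise acting as dither, as discussed above) spreads that energy into the noise floor.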
In recent years, the interest of many research fields in non-invasive optical analysis has rapidly grown. In particular, studies performed in biology and chemistry have found increasing benefits from the development of systems able to perform single-photon measurements, since they push the sensitivity of the analysis to ultra-low intensity levels and to ultra-fast evolving signals. These sensitivities are achieved thanks to the employment of single-photon detectors and timing acquisition chains. Time-Correlated Single-Photon Counting (TCSPC) is one of the leading techniques on which many others rely: Fluorescence Lifetime Imaging Microscopy (FLIM), Förster Resonance Energy Transfer (FRET), and Fluorescence Correlation Spectroscopy (FCS) are just a non-exhaustive list of them.

The TCSPC technique consists in the repetitive laser stimulation of a sample, recording each time the delay between the stimulating laser pulse and the pulse produced by the photon emitted by the sample and captured by the detector. After several photons, it is possible to build a histogram that represents the original waveform of the light signal, since the probability distribution of the photon delays follows that waveform. The necessity of a repetitive measurement inherently asks for a high-count-rate instrument: the higher the count rate, the shorter the total measurement time. Besides, other specifications concern the linearity, which is a fundamental feature to define the shape of the signal without distortions, and the time resolution, which characterizes the ability of the system to resolve very fast signals. Applications are also pushing the instruments to expand their number of channels, since this reduces the total measurement time, also opening the way to many new kinds of analysis: time-resolved imaging and simultaneous spectrally resolved analysis are just examples of these new possibilities. Regretfully, a strong trade-off exists between performance and number of channels in TCSPC systems: applications are forced to choose between instruments with many channels but poor performance, or single-channel instruments with state-of-the-art performance. The aim of researchers is the development of high-performance multichannel TCSPC instruments that will break this trade-off. In particular, this thesis work is devoted to finding solutions for the system part of the instrument: the management of the acquired data and their transfer towards a remote computer.

Initially, the work focused on the application of a 48-channel Detection System to a setup designed to perform single-molecule FRET analysis. The system, already designed, featured a 12×4 matrix of Single Photon Avalanche Diodes (SPADs) and a Field Programmable Gate Array (FPGA) as its logic core. Its firmware, instead, has been designed to perform a time-stamping measurement on the incoming photons, in order to provide the software with raw data to compute the FRET efficiency. A 100 MHz clock has been used to stamp the incoming data, and forty-eight parallel First-In First-Out memories (FIFO) have been implemented as buffers between incoming and downloaded data. By optimizing the transfer process, a download rate of 20 MBps has been achieved, which means an average count rate of 100 kcps on every SPAD, enough to perform the analysis. This work has been useful to evaluate in the field the necessity of multichannel instruments, and thus a 1024-channel high-performance system has been conceived in the research group.

The increase in the number of channels carries many problems, concerning the detector matrix, the acquisition logic and its power consumption. Nevertheless, the technological limit today is the data rate that can be managed on-board and transferred towards the computer. Fast protocols available for compact instruments do not provide a transfer rate far greater than 10 Gbps, which means that the number of acquisition channels that can be properly handled is a few tens. A 64-channel acquisition system has been chosen as the target; a router will multiplex all the detection channels on these acquisition chains. To further reduce on-board complexity, but also dimensions and power dissipation, two twin 32-channel Acquisition Boards have been designed. They will exploit the TAC-ADC structure (acronym for Time-to-Amplitude Converter and Analog-to-Digital Converter), since it is the one that provides the best performance, especially in terms of linearity. The FPGA that receives the ADC data is in charge of creating the histograms, but an external on-board memory is necessary to store all of them. Given the complexity of the system and its novelty in the research group, a Demoboard has been designed to test the performance of the full structure.

Once the data have to be transferred, a board that handles the data streams and implements a fast communication channel towards the computer is necessary. After the study of the different protocols available on the market, two of them have been chosen to be implemented, in particular SuperSpeed USB 3.0 and 10-Gigabit Ethernet over optical fiber. They have been successfully tested on evaluation boards and then implemented on a specific Data Management Board, which mounts another FPGA to handle the logic operations.

To validate the performance of these new kinds of systems, specific multichannel instruments are required. A multichannel pulse generator is necessary to provide the input signal to all the TCSPC chains, but none of those commercially available today satisfies all the specifications. Thus, an 8-channel Pulse Generator has been designed as a test instrument for these systems. A 2-channel Module is the base block of the generator: after choosing the input signal between an on-board reference and an external trigger, a delayer loop delays the reference edge and a fast output transistor stage generates the output. This structure reaches a time resolution of 6 ps when no delay is added, worsening to 20 ps when delay is inserted, which is sufficient to test the employed architecture, as well as the whole instrument.

1. Rendering of the 1024-channel system, final purpose of the work where this project has been inserted
corresponds to the intensity of incoming data, their on-board are just some of them, as well sent out from the system, a for a 1s delay. Both results are
the light signal. storage, and their transfer as the dimensions of the system section that gathers the two remarkable and this validates the
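The histogram-building principle behind TCSPC can be sketched with a short simulation. This is an illustration only, not the instrument firmware: the 5 ns lifetime is an assumed example value, while the 10 ns bin width mirrors the 100 MHz time-stamping clock mentioned above.

```python
import random

# Toy Monte Carlo of the TCSPC principle: one laser shot = one recorded
# photon delay; the histogram of the delays reproduces the light waveform.
random.seed(0)
TAU_NS = 5.0      # assumed fluorescence lifetime (ns), illustrative only
BIN_NS = 10.0     # 100 MHz time-stamping clock -> 10 ns bins
N_SHOTS = 100_000

counts = {}
for _ in range(N_SHOTS):
    delay = random.expovariate(1.0 / TAU_NS)  # photon delay after the pulse
    b = int(delay // BIN_NS)                  # coarse time-stamp bin
    counts[b] = counts.get(b, 0) + 1

# For an exponential decay, each successive 10 ns bin should hold about
# exp(-10/5) = 13.5% of the photons of the previous one.
print(counts[0], counts.get(1, 0), counts.get(2, 0))
```

With enough repetitions the bin populations trace the exponential decay of the emitted light, which is exactly why the probability distribution of the delays recovers the waveform.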
INFORMATION TECHNOLOGY

In this Doctoral dissertation, the design, realization and testing of a new front-end for the DSSC project has been discussed. The Day-0 solution, as an alternative approach to the DEPFET sensor scheme, has been demonstrated through both analytical evaluation and experimental characterization. The proposed front-end takes advantage of a simple implementation inside the same ASIC chip and produces the compression behavior by using a resistor (of fixed value in the first approach, and a variable one in the improved version) in series with the transistor. In this thesis, the new Day-0 solution was discussed in a way that keeps the same filter as in the DEPFET sensor scheme.

For the European XFEL, there are several imagers under development for X-ray photon detection, including single-point, 1-D and 2-D detectors. As one of the 2-D imagers, the DSSC project aims to provide wide energy coverage with single-photon detection capability thanks to its very low noise. To achieve this result, a novel detector structure with a compressive characteristic based on the DEPFET concept is under development by the DSSC consortium. The high gain for small collected charge and the compression for large signals will provide both desired features of single-photon detection capability and wide dynamic range.

In spite of its high dynamic range and low-noise performance, the manufacturing of the DEPFET sensor needs a sophisticated processing technology which requires a relatively long manufacturing time. Given this complexity, it was decided to evaluate an alternative sensor matrix, with its corresponding front-end, as a Day-0 solution. A Day-0 solution is here intended as a solution characterized not by the best performance of the DEPFET, but available in a shorter time to allow first beam tests and experiments. The alternative sensor is made of mini Silicon Drift Detectors (mini-SDD), and the compression behavior is obtained from the front-end on the readout ASIC and not by the silicon sensor, as in the DEPFET. In the Day-0 solution, the DEPFET is removed and replaced by a PMOSFET transistor, now belonging to the ASIC chip and realized with the same IBM technology as the following stages of the ASIC. The PMOSFET is designed in a way to provide low noise and a compression characteristic close to the one of the DEPFET-based solution. In fact, this alternative front-end input stage should still provide a non-linear amplification of the detected signal. Accordingly, the replacement of the DEPFET with this stage should leave the architecture adopted for the downstream processing electronics as unchanged as possible.

The proof of principle of the Day-0 solution has been verified in a first prototype. Using the prototype with a Silicon Drift Detector (SDD), X-ray measurements have been carried out to assess the electronics noise performance, which is in line with the theoretical expectations. After the confirmation of the performance, a conservative design of the Day-0 solution has been implemented in the F1 chip as the first full-size matrix chip. The F1 chip comprises two different front-ends suitable for two different sensor types: it can be mated either to a DEPFET sensor matrix, which itself compresses the input signal, or to an SDD sensor, as in the Day-0 solution.

An improved version of the Day-0 front-end was investigated to improve the dynamic range of the front-end and its noise performance. In this design, the nonlinear transconductance of the amplifying PMOS has been enhanced by statically switchable parallel devices, and a non-linear capacitor has been added. Moreover, the resistor of the first version of the Day-0 front-end is replaced by an NMOSFET acting as a variable resistor. In the improved version of the Day-0 front-end, more than 2800 photons at 1 keV can be allocated within the dynamic range of the ADC, thanks to the compression behavior of the front-end.

In conclusion, the Day-0 solution as an alternative approach to the DEPFET sensor in the DSSC project has been demonstrated through both analytical evaluation and experimental characterization. Although the DEPFET approach has a very good noise performance, in the improved version of the Day-0 front-end the noise performance has been brought as close as possible to that of the DEPFET approach: at the frame rate of 4.5 MHz, the total noise of the DEPFET approach is about 42 el. r.m.s., while it is about 66 el. r.m.s. in the Day-0 solution; at the 0.9 MHz frame rate, the electronics noise is about 12 el. r.m.s. for the DEPFET approach and 23 el. r.m.s. for the Day-0 solution. The dynamic range of the improved version of the Day-0 front-end can be matched with the dynamic range of the ADC at 1 keV, the target photon energy. Also in other photon energy ranges, the dynamic range of the Day-0 front-end can be tuned, thanks to the flexibility of its compression behavior introduced by the extra parallel branches.
The growth of oil prices, the congestion of urban areas and the increased sensibility to environmental problems have brought scientists and companies to invest significant resources in improving the energy efficiency of the transport sector. Nowadays, one of the major environmental problems is air pollution deriving from the transport sector, and road transport alone is expected to be the largest contributor to anthropogenic climate forcing in 2020. The devising of more efficient vehicles (e.g., improved control systems, aerodynamics, powertrain and engine design), the usage of alternative energy sources and fuels (e.g., electric and hybrid vehicles), and the deployment of Intelligent Transportation Systems (ITS) applications are solutions for the decarbonization of the sector. Intelligent technologies are therefore playing an increasingly important role in the drive for green innovation. In particular, the development of energy-oriented Driving Assistance Systems opens the perspective towards a new model of ITS, which can be referred to as Green ITS and aims at conceiving applications and services specifically designed to reduce energy consumption and polluting emissions.

A vehicle's energy consumption depends upon different influential factors, such as vehicle characteristics (e.g., weight, engine power, aerodynamic drag), network conditions (e.g., traffic flow, congestion), road characteristics (e.g., slope and topology), external conditions (e.g., weather and temperature) and driver behavior, which impacts the vehicle dynamics (e.g., speed, acceleration, gear choice). In particular, the driving style has a huge impact on a vehicle's energy consumption: various research results showed that improvements in the driving style provide direct savings from 5% up to 40% of the total energy expenses, as well as reductions in air pollution.

Within this interesting and evolving context, this Thesis proposes a couple of Green ITS applications with the aim of proposing innovative solutions tailored to fuel-consumption optimization through the improvement of the driver behavior. In particular, the main objective is to design energy-oriented, vehicle-independent and passive systems aimed at influencing the driver behavior and promoting fuel-efficient driving.

First, we present a system able to assess the driving style in real time. The system is fully integrated in a smartphone application that acquires the signals related to the vehicle dynamics (e.g., velocity and acceleration) and computes three power-related indexes. The smartphone application provides feedback to the driver in order to induce a change in her driving style, which in turn should enable the energy savings. The interaction between driver and system is achieved through different Human-Machine Interfaces (HMIs). Differently from existing studies, we design a vehicle-independent system that requires just a few parameters, without any connection to the vehicle CAN-bus or OBD interface.

Furthermore, the driving-style application has been integrated in a real scenario. We developed a vehicle-to-infrastructure (V2I) architecture as a part of the Green Move project, an innovative Electric-Vehicle Sharing System. Thanks to this platform, the driving-style application can be dynamically loaded and unloaded on the vehicles of the vehicle-sharing service. The application computes a quantitative estimation of the driving style of a vehicle-sharing user, communicates this information to a control center, and displays it to the driver. Thus, feedback is provided to the user to nudge her to adopt a more economical driving style, whereas the control center can use this information to make appropriate enhancements to its service or to provide additional features.

Finally, in this Thesis we propose a method and the related road infrastructure aiming at reducing the energy consumption of vehicles driving along roads controlled by traffic lights. An algorithm has been conceived for computing energy-optimal speed profiles to drive along a road, taking into account a set of constraints (e.g., speed limits, traffic-light rules). A road-marking infrastructure continuously suggests the optimal velocity the driver should adopt in order to minimize fuel consumption.

Main Contributions
The main contributions of this Thesis can be summarized as follows:
- A novel method for the energy-oriented, quantitative estimation of the driving style is presented. An important innovative contribution is given by the definition of three power-related indexes for evaluating the instantaneous driver performance. To the best of the authors' knowledge, existing systems foremost focus on the qualitative classification of the driver behavior, usually limited to a finite set of discrete labels. Moreover, they are vehicle-dependent, in that they were developed for specific vehicle models; thus, the approaches are hardly reusable and require dedicated electronic equipment. Differently from existing studies, we design a vehicle-independent system that requires just a few parameters, without any connection to the vehicle communication interfaces.
- An experimental validation of the driving-style system is given. The effectiveness of the approach, and the savings enabled by the interaction with the driver, are assessed with an experimental campaign carried out on urban and extra-urban routes by different drivers. Experimental results prove that the proposed driving-style system reduces the vehicle consumption up to 30%. A questionnaire proposed to the volunteers evaluates the different user interfaces designed for the driving-style application.
- The driving-style system has been applied in a real scenario. We introduce a V2I/I2V system in which the driving-style application has been integrated. The V2I/I2V framework is the main component of the electric vehicle-sharing project named Green Move, a three-year project co-financed by Regione Lombardia and developed in the city of Milan. Within the Green Move project, we developed a prototype platform that: provides an electronic device and its hardware/software interface, which allows the system to interact with a heterogeneous fleet of electric vehicles in a uniform way; relies on mobile devices to let users access and interact with the system (e.g. take possession of/release a reserved vehicle, open/close its doors, enable/disable the drive); offers an infrastructure to customize the software configuration of vehicles by pushing new services, thus realizing a platform which allows services and user-defined applications to be dynamically loaded and unloaded on vehicles.
- An energy-oriented roadside assistance system for signalized intersections is presented. We propose a novel algorithm that determines the energy-optimal speed to pass a road with signalized intersections. Based on the knowledge of Signal Phase and Timing (SPAT) and road information, an off-line velocity-planning algorithm determines the optimal trajectory in terms of energy consumption. The method explicitly considers the consumed energy as the figure of merit to be minimized and solves a specific optimal control problem that considers the traffic lights as constraints.
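The three power-related indexes defined in the thesis are not reproduced here, but the flavor of a vehicle-independent, power-based indicator computed only from speed and acceleration can be sketched with the classic vehicle-specific-power (VSP) formula. The coefficients below are generic road-vehicle values from the VSP literature, an illustration rather than the system's actual indexes.

```python
# Illustrative driving-style indicator in the spirit described above:
# a power-related quantity computed from speed and acceleration alone,
# with no connection to the vehicle CAN-bus. NOT the thesis' indexes.
G = 9.81           # gravity (m/s^2)
C_ROLL = 0.132     # generic rolling-resistance term (m/s^2)
C_DRAG = 0.000302  # generic aerodynamic term (1/m)

def specific_power(v, a, grade=0.0):
    """Instantaneous vehicle-specific power (W/kg) from dynamics only."""
    return v * (1.1 * a + G * grade + C_ROLL) + C_DRAG * v ** 3

# A smooth cruise demands far less power than a hard acceleration:
calm = specific_power(v=13.9, a=0.2)        # ~50 km/h, gentle throttle
aggressive = specific_power(v=13.9, a=2.5)  # same speed, harsh throttle
print(round(calm, 1), round(aggressive, 1))
```

Feeding such an index to the driver over time is the kind of quantitative, vehicle-independent feedback loop the smartphone application implements.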
SUNDROPS: SEMANTIC AND DYNAMIC DATA IN A PERVASIVE SYSTEM

We are nowadays surrounded by a high quantity of data, coming in different and heterogeneous formats. However, humans cannot exploit the whole power of this data without the appropriate aid of digital means. The SuNDroPS system (Semantic and dyNamic Data in a Pervasive System) aims at supporting (mobile) users with a context-aware approach, allowing them to consider only a small set of data, automatically selected by the system itself according to their current context and interests. Part of this data (e.g. user data, service data) is stored in traditional information systems, e.g. supported by RDBMSs, while a large part of it, the dynamic data coming e.g. from sensors or system logs, needs to be treated as data streams and dealt with in the appropriate way, while integrating it with the other data. This problem requires research beyond the current state of the art.

The SuNDroPS system stems from the Context-ADDICT (Context-Aware Data Design, Integration, Customization and Tailoring) system and adds new features to manage high loads of dynamic, sensor-coming data, seamlessly combining them with the more traditional information. Moreover, SuNDroPS includes the new data mining algorithm MREClaT, a variant of the EClaT algorithm allowing the system to reduce the processing time required to perform the frequent-itemset mining task. This feature adds a useful functionality, allowing the extraction of previously unknown knowledge from the data flowing in the system.

Context-ADDICT Revisited
The main goal of Context-ADDICT and SuNDroPS is to create a middleware infrastructure to support the design and development of context-aware, data-intensive applications. The focus is on mobile, possibly peer-to-peer applications, where the notion of context can be exploited to provide the user with a filtered view over the data, retrieving only the information relevant to the users in their current context. Context-ADDICT was conceived almost a decade ago, and was based on a methodology for context-aware system design where the context must be explicitly declared. With SuNDroPS, the injection of context is transparent to the user and to the application: the current user context can be inferred from sensor readings, and the data corresponding to each context can be automatically assigned by mining historical data by means of the designed, efficient data mining algorithm.

SuNDroPS has several components, some of which are inherited from the Context-ADDICT system: the core of the system is still built upon the core modules of Context-ADDICT; in addition, SuNDroPS interfaces two more components that allow it to handle sensor data and WSNs (Wireless Sensor Networks): TRex and PerLa. These new modules enrich the set of data sources that can be managed by the system, allowing it to manage event streams, data streams and WSNs. For all the data mining activities, SuNDroPS provides the new data mining component, MR-Miner, that implements MREClaT. The output of MR-Miner is then used as input by the Run-Time Context and Preference Manager, allowing it to speed up the automatic derivation of context-aware data views, exploiting the fact that these data carry intrinsic temporal information that might be helpful in context management.

The main difference between Context-ADDICT and SuNDroPS lies in the sensor data handling: while the first is based on static and predefined sets of contexts, and each transition must be notified by the user herself (or by an application running on the user device), SuNDroPS instead can (at least partially) automatically determine which is the running context of the user and/or of the system, on the basis of the gathered environmental data. The problem of seamlessly switching context is quite on the cutting edge of today's technology: lots of different services provide context-oriented components (primarily location-based context). The SuNDroPS system empowers this contextualization by allowing the use of more data than just location, providing a framework that can be adopted in very different scenarios, both in indoor (e.g. museums) and outdoor (e.g. emergency-situation management) applications.

PerLa and TRex
The PerLa and TRex modules run both as standard query wrappers and as environmental sensing extensions of the system, either serving a specific request or query, or monitoring the environment continuously. In the first case, the system simply acts as the original Context-ADDICT system does, answering incoming requests; in the second case, the system continuously uses and analyzes the information flowing through the sensors and checks whether it is necessary to send new information to the user. All the modules operate using a push paradigm (there is no module pulling information from other ones: every module listens for new data or commands); this requires a bit more bandwidth consumption, but it ensures a more reactive behavior of the system.

MR-Miner
Another problem is how to manage all the historical data about user preferences and past contexts, in order to extract information to be used in the future for refining the information to be sent to the user. The Run-Time Context and Preference Manager module aims at performing this task: it operates an advanced personalization of the user's current context, choosing the information that best fits the user's needs by analyzing the whole user transaction log. Since the log can take a very long time to be processed, only the most frequent patterns are considered in order to speed up its analysis, while the information flowing through the system is stored at once in order to allow its historical analysis.

This work aims at building a real-life system based on the above technologies, plunging more deeply into some aspects which remain to be investigated, like semantic data stream processing and cloud-based data mining algorithms. Prototypes of (parts of) the system have been produced within the application domains of car-sharing services and support for citizens' mobility, in the Green Move project for electrical car sharing.
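The vertical, tid-set-based idea behind EClaT, which MREClaT builds on, can be illustrated in a few lines. The transactions below are invented for the example, and the actual MR-Miner/MREClaT internals (parallelization, pruning) are not shown.

```python
# Minimal sketch of the vertical (EClaT-style) frequent-itemset idea:
# map each item to the set of transactions containing it, then compute
# itemset supports by intersecting these tid-sets.
from itertools import combinations

transactions = [                      # hypothetical context log entries
    {"museum", "map", "audio"},
    {"museum", "map"},
    {"map", "audio"},
    {"museum", "map", "ticket"},
]
MIN_SUPPORT = 2

# Vertical layout: item -> set of transaction ids
tidsets = {}
for tid, items in enumerate(transactions):
    for item in items:
        tidsets.setdefault(item, set()).add(tid)

# Frequent pairs via tid-set intersection (depth 1 of the EClaT recursion)
frequent_pairs = {
    (a, b): len(tidsets[a] & tidsets[b])
    for a, b in combinations(sorted(tidsets), 2)
    if len(tidsets[a] & tidsets[b]) >= MIN_SUPPORT
}
print(frequent_pairs)
```

Because supports are computed by set intersection rather than repeated scans of the transaction database, the scheme parallelizes naturally, which is the property a MapReduce-style variant exploits.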
The research work is focused on the emerging constraints to NAND Flash operation and reliability dictated by the granularity of electric charge in the gate stack. Progressive cell miniaturization entails that cell electrostatics and, in turn, the cell threshold voltage (VT) are ruled by an extremely reduced number of electrons, therefore pushing the Flash cell to work in a few-electron regime. This has two main consequences. First, the VT displacement (ΔVT) caused by the emission or the injection of a single charge in the gate stack (either the tunnel-oxide, the floating-gate or the interpoly dielectric) became so relevant that it can be clearly detected by using adequate experimental procedures. Second, the stochastic nature of the charging/discharging phenomena is way more evident than in the past: statistical fluctuations in the number of charges, in fact, play a significant role when such a number reduces due to cell shrinking (Figure 1 provides an example of an extremely scaled Flash NAND technology). Also, when charges are trapped in the tunnel-oxide or in the IPD, additional quantities subject to statistical dispersion are the time constants of the capture/release events and the ΔVT following each of the events.

The Ph.D. thesis extensively analyzes both of the previous points for two of the main phenomena ruling the behavior of modern Flash cells, i.e. charge detrapping and electron injection into the floating-gate via Fowler-Nordheim tunneling. In both cases the analysis started from a careful experimental characterization which, thanks to the use of clever methodologies, allowed to determine for the first time the discrete, statistical nature of the phenomena. After that, starting from relatively simple assumptions on cell physics, new semi-analytic stochastic models were developed, based on a small number of free parameters. The aim of the models is to provide a useful tool to investigate how few-electron phenomena affect cell VT and to evaluate their role along the lifetime of a modern Flash device.

The impact of the research activity was recognized both by the semiconductor industry and the scientific community, as the main results achieved in this field were awarded at the IEEE International Reliability Physics Symposium (IRPS) both in 2013 and 2014.

The use of clever experimental procedures enabled the first direct observation of single-electron detrapping in NAND Flash arrays. Results revealed a significant statistical dispersion of the number of trapped charges, of the detrapping time and of the single-electron ΔVT. Starting from these observations, the phenomenon was then re-examined from a discrete, statistical point of view, showing that in the most interesting case, where the number of trapped electrons feeding the detrapping process is Poisson distributed among the cells, detrapping events are the result of a nonhomogeneous Poisson process, and a simple and powerful formula allows the calculation of the full VT statistics without the need to resort to lengthy Monte Carlo simulations. Also, the important assumption was made that the detrapping time constant takes on a wide distribution on the logarithmic time axis, leading to the definition of an average spectral density of trapped electrons. The interaction of detrapping with random telegraph noise (RTN) was also highlighted, showing that the VT instabilities resulting from the latter process cannot be ignored for a meaningful comparison with data, and that they can easily be included in the simulations.

The results, which can be applied also to different reliability phenomena involving charge trapping/detrapping in MOS devices, paved the way to the development of a comprehensive statistical model able to deal with VT instabilities under whatever on-field usage of the memory array. To this aim, not only retention/bake conditions but also the cycling conditions, i.e. the number of P/E cycles together with the duration and the temperature of the idle periods in-between, were considered, exploiting the idea that the average spectral density of trapped electrons can be calculated according to the cycling pattern. The model can accurately reproduce the experimental data (an example is shown in Figure 2) and was used to discuss the accuracy of some testing schemes commonly adopted for the assessment of NAND device reliability.

Single-electron charging of the floating-gate during Fowler-Nordheim programming of a mainstream Flash memory cell could be detected for the first time by using simple averaging techniques on a state-of-the-art NAND Flash array. The ΔVT corresponding to an injection event is fixed and depends only on the capacitance of the cell IPD, differently from the case of single-electron detrapping, where the VT shift is statistically distributed. Nevertheless, the injection process is of statistical nature, and the number of electrons injected during a programming pulse of amplitude Vs is approximately Poisson distributed for small values of Vs, while it takes on a sub-Poissonian distribution at higher Vs, as shown by extensive experimental characterization. In either case, the spread associated to the number of injected electrons constitutes a relevant source of program noise (PN), setting the ultimate limit to the accuracy of the VT placement of decananometer Flash cells.

In conclusion, the research activity provided the physical understanding, the modeling tools and the characterization techniques required to investigate the programming accuracy and the reliability of extremely scaled Flash memory technologies, highlighting the dominance of few-electron phenomena on cell operation. In this sense, the work provides a valid reference for further developing the Flash concept, pushing it towards the single-electron limit.

1. Cross section of a state-of-the-art NAND array (IMFT 20 nm technology) along the word-line direction.
2. Measured and calculated VT distributions of a cycled Flash NAND array during a high-temperature bake emulating a data-retention time stretch.
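The discrete, statistical picture described above can be made concrete with a toy Monte Carlo: a Poisson-distributed number of trapped electrons per cell, each released electron contributing its own randomly distributed single-electron ΔVT. All parameter values are invented, and the exponential single-electron ΔVT is an assumption for illustration; the thesis replaces exactly this kind of lengthy simulation with a closed-form semi-analytic model.

```python
import math
import random

def poisson(lam):
    """Knuth's Poisson sampler (adequate for small lambda)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(1)
MEAN_TRAPPED = 8.0   # assumed mean number of trapped electrons per cell
MEAN_DVT_MV = 4.0    # assumed mean single-electron VT shift (mV)
N_CELLS = 50_000

# Total VT shift of a cell = sum of the shifts of all released electrons
# (a compound Poisson variable).
shifts = []
for _ in range(N_CELLS):
    n = poisson(MEAN_TRAPPED)
    shifts.append(sum(random.expovariate(1.0 / MEAN_DVT_MV) for _ in range(n)))

mean_shift = sum(shifts) / N_CELLS
print(round(mean_shift, 2))  # close to 8 * 4 = 32 mV
```

The spread of `shifts` across the array is what the full VT statistics capture; the appeal of the closed-form approach is obtaining that distribution without sampling at all.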
Multi-core processors are widely used in modern computing platforms, both in the embedded and High-Performance Computing (HPC) domains. Recently, also GPUs have been used for general-purpose computing, to accelerate compute-intensive kernels of throughput-oriented applications. However, these platforms expose different programmability complexities: while general-purpose multi-core CPUs provide good code portability on a small number of compute units, GPU architectures achieve much higher throughput but require more specialized application code (see Fig. 1). In between, there is a wide range of platforms with different trade-offs between programmability and computational parallelism, as well as GFLOPS/Watt efficiency.

This thesis targets the multi- and many-core platform domain, ranging from HPC to embedded systems. While these two worlds are still quite different in terms of computational power, recent years have witnessed a convergence of parallel architectures and programming paradigms. Today, platform vendors are adopting OpenCL, a cross-platform API, to exploit the computational parallelism of modern accelerators while enabling functional portability of applications. For the application domain (see Fig. 2), we target stream-processing applications, such as applications for smart cameras and augmented reality.

The main contribution is about customization and optimization of OpenCL applications. It addresses one limitation of the OpenCL programming paradigm, namely the tight dependence of application performance on platform architectural details. Although the OpenCL API is cross-platform and generic enough to enable programming of different types of accelerators, customization of software parameters is necessary to achieve good performance when porting applications to the target platform. This problem is exacerbated by the intrinsic heterogeneity of modern computing platforms, which usually include several accelerators of different types. To this aim, an optimization methodology is proposed, based on the customization of a parametric OpenCL application design. The methodology exploits Design Space Exploration (DSE) to identify the optimal solutions with respect to multiple design objectives, such as throughput or Quality of Service (QoS). The integration with the Multi-Objective System Tuner (MOST) framework allows to automate the DSE process and to implement advanced exploration strategies. On the one hand, the proposed techniques reduce the exploration time on simulation platforms while providing close-to-optimal solutions; on the other hand, they exploit platform-specific constraints to prune unfeasible solutions out of the design space.

Another contribution of this thesis deals with resource sharing in multi-application scenarios. This type of parallelism, referred to as request-level parallelism, is enabled by the increasing number of cores integrated in the same chip. The proposed run-time management technique allows accounting for dynamic application requirements and workload variations, in order to optimize the overall average system performance. By applying techniques of software approximate computing, an application can be designed to expose tunable parameters that trade off the output quality with the throughput. In our approach, DSE is used to identify the parameter configurations, called operating points, that provide the optimal trade-offs with respect to throughput, output quality and resource usage. In turn, this knowledge base allows implementing performance-aware scheduling and effective application auto-tuning by reducing the decision space at run-time.

As shown in Fig. 3, each application is linked to a library that provides an Application-Specific Run-Time Manager (AS-RTM). The main purpose of the AS-RTM is to manage application adaptivity by monitoring the performance metrics and by tuning the application parameters at run-time. The AS-RTM is generic, but its behavior can be customized for each application by passing a different list of operating points. It also allows defining one or more application goals, which represent soft constraints on the performance metrics (e.g. the frame rate or QoS). Experimental results show a better average performance with respect to a plain Linux configuration and, at the same time, a significant improvement of performance predictability.

The proposed design methodology and runtime software layer have been implemented and demonstrated on a real case study, an OpenCL stereo-matching application targeting different industrial platforms. Some of the outcomes of this thesis have been used within the 2PARMA FP7 European project and implemented in official prototypes delivered to the project consortium.

1. Trade-off between programmability and computational parallelism for multi-core platforms.
2. Application and platform domain of the proposed methodology.
3. Application adaptivity through the Application-Specific Run-Time Manager (AS-RTM).
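The operating-point mechanism behind the AS-RTM can be sketched as follows. The points, parameters and numbers are invented for the example, and the real AS-RTM handles goals and run-time monitoring far more generally.

```python
# Illustrative sketch of run-time operating-point selection: DSE produces
# a short list of (parameters -> throughput, quality, resource usage)
# trade-off points, and the run-time manager picks, among the points that
# meet the application goal, the one with the lowest resource usage.
operating_points = [
    # (hypothetical parameter config, frames/s, output quality, resources)
    ({"win": 3, "threads": 2}, 12.0, 0.70, 0.25),
    ({"win": 5, "threads": 4}, 18.0, 0.85, 0.50),
    ({"win": 7, "threads": 8}, 25.0, 0.95, 0.90),
]

def choose_point(points, min_fps):
    """Cheapest configuration still meeting the frame-rate goal; if the
    goal is unreachable, fall back to the fastest available point."""
    feasible = [p for p in points if p[1] >= min_fps]
    if feasible:
        return min(feasible, key=lambda p: p[3])
    return max(points, key=lambda p: p[1])

cfg, fps, quality, usage = choose_point(operating_points, min_fps=15.0)
print(cfg, fps)
```

Restricting the run-time decision to a pre-computed Pareto-style list is what keeps the selection cheap enough to repeat whenever the workload changes.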
INFORMATION TECHNOLOGY

A prompt gamma camera for real-time range control in proton therapy

Proton therapy is a radiation therapy that uses high-energy proton beams for cancer treatment. Differently from conventional radiation therapy, proton beams deliver their maximum energy within a defined range, thereby reducing adverse effects to adjacent healthy tissues. Proton beams open up new perspectives for the treatment of tumors in proximity to organs at risk, provided the range of the therapy particles is well under control. In clinical practice, range calculations are affected by uncertainties and safety margins are taken, preventing proton therapy from exploiting its maximum potential. Range assessment in vivo and in real time is considered by clinical professionals a key to improving proton therapy. However, appropriate technical solutions are not yet available for use in clinical routine.

Among the different techniques for real-time range control, the thesis focuses on prompt gamma imaging, which is based on the fact that the prompt gamma rays emitted by the target nuclei, after the interaction of the beam with tissue, are correlated to the beam penetration depth. Imaging prompt gammas is challenging because they are emitted along a continuous energy spectrum up to 10 MeV, and a detector with a high count rate […] is necessary. The objective of the thesis is to design a novel gamma detector which satisfies these needs. The detector must be compatible with introduction into a slit-camera system, where a knife-edge slit collimator selects prompt gammas emitted along the beam axis of the target, in order to produce a reverse 1D projection of the beam path on a scintillator crystal. The design of the camera system was first optimized with Monte Carlo simulations.

Preliminary measurements with the HiCAM camera, an Anger camera originally developed for SPECT applications, were performed operating the beam with currents lower than the ones adopted in clinical practice, demonstrating the feasibility of the slit-camera concept to reach millimeter accuracy in range determination. The measurements served the purpose of establishing the specifications for the new detector, compatible with use at clinical beam currents. Several modifications were needed, concerning the scintillator geometry, the photo-detectors, the electronics and the data acquisition.

Based on the knowledge of gamma-ray detectors, we identified several technical options and investigated the feasibility of the different alternatives to find the best compromise in terms of performance, cost and simplicity. A pixelated crystal configuration was preferred to a monolithic scintillator to reduce the count rate on a single channel. The crystal is segmented into 40 slabs with 4 mm pixel pitch, 30 mm thickness and 100 mm height. LYSO was chosen as the best candidate due to its high density, fast decay constant and high light yield. Silicon PhotoMultipliers (SiPMs) were selected for scintillation light collection because they offer compactness, good photon detection efficiency and fast response. We designed an optimized architecture of the system, integrating all the components into a compact and practical instrument, easy to introduce in the treatment room.

Due to the novelty of the adopted solution, the detector required dedicated electronics. Custom electronics boards were designed to perform spectra acquisition for a precise energy calibration and high-efficiency photon counting for profile reconstruction. The best trade-off between energy resolution and counting efficiency was found during the design of the electronics boards. We performed characterization tests of a reduced-size prototype with gamma sources of 137Cs and 60Co (662, 1173 and 1335 keV, respectively). At low energies we obtained an energy resolution of about 13%, 9% and 8%.

The camera was tested at the West German Proton Therapy Center (WPE) in Essen and at the Proton Therapy Center of Prague in order to evaluate its performance during beam irradiation, in different measurement conditions and with different targets. Both centers use the IBA C230 cyclotron, an isochronous cyclotron with a constant energy of 230 MeV. We first acquired profiles, by selecting the events in the 3-6 MeV energy range, during the irradiation of a homogeneous cylindrical PMMA target with therapeutic proton energies from 100 to 230 MeV, and calculated that the number of protons needed to reach a 2σ precision of 4 mm in range retrieval is between 0.45 × 10⁸ and 1.5 × 10⁸ for the lowest and highest energies, respectively. As observed from Monte Carlo simulations, the higher the beam energy, the higher the uncorrelated background, mostly due to neutrons. We verified the ability of the camera to detect millimeter shifts for high doses, by moving the target in steps of 1 mm to emulate a range shift. Profiles were compared to Monte Carlo simulations and showed good agreement […] with respect to the points in the full target.

Measurements at the WPE were useful to verify that the camera fulfills the requirements that we identified at the beginning of the project. Satisfactory accuracy in range retrieval was reached at clinical beam current for pencil beams with therapeutic doses. At the Proton Therapy Center of Prague we were able to perform the first tests toward the clinical application of the instrument. We first compared profiles for targets with densities corresponding to bone and fat to profiles acquired with PMMA and water targets. After that, we applied for the first time a realistic treatment on an anthropomorphic phantom. Results are very encouraging because they demonstrate that the camera is able to monitor the prompt gamma signal during a realistic PBS (Pencil Beam Scanning) delivery. Data processing and comparison with the simulations is ongoing. For the clinical use of the camera, there will be a set of simulated profiles for each spot of the treatment map, and acquired profiles will be compared to simulated profiles in order to verify if a shift happened during the delivery of the treatment.

Finally, the very first measurements with the S2C2 accelerator, a compact superconducting synchrocyclotron, […] in a novel design, such as a further pixelation of the crystal and the choice of a fully digital acquisition chain for pulse processing. As stated in the first chapter, real-time range control in proton therapy would represent a major improvement in the delivery of this radiotherapy technique, which is already a valid method to defeat cancer. However, finding a technical solution that goes beyond scientific research and that can be clinically applied has represented the main challenge in this field. We hope that our efforts in this direction represent a milestone toward the reaching of this objective.
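The comparison of acquired and simulated profiles to detect a range shift, described above, can be illustrated with a minimal sketch (not from the thesis): estimate the millimeter shift that best aligns a measured prompt-gamma profile with its simulated reference by minimizing the mean squared difference over the overlap. The sigmoid-shaped test profiles are invented for illustration:

```python
import math

def estimate_range_shift(measured, simulated, max_shift_mm=10):
    """Return the integer shift (in samples, one per mm) that best aligns
    the measured profile with the simulated reference."""
    best_shift, best_cost = 0, float("inf")
    n = len(simulated)
    for s in range(-max_shift_mm, max_shift_mm + 1):
        overlap = [(measured[i], simulated[i - s])
                   for i in range(n) if 0 <= i - s < n]
        cost = sum((m, r) == () or (m - r) ** 2 for m, r in overlap) / len(overlap)
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# Hypothetical falloff-shaped profile and a copy shifted by 3 mm.
ref = [1.0 / (1.0 + math.exp((i - 50) / 3.0)) for i in range(100)]
meas = [1.0 / (1.0 + math.exp((i - 53) / 3.0)) for i in range(100)]
shift = estimate_range_shift(meas, ref)   # recovers the 3 mm shift
```

A clinical implementation would of course work on noisy counting data and per-spot simulated profiles rather than on clean analytic curves.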
While the incidence of stroke rises worldwide, so do the costs of the subsequent intensive rehabilitation, setting off alarm bells that call for solutions to lower these figures while preserving therapy efficacy. At-home autonomous rehabilitation appears as a promising solution, reducing costs for health providers and patients alike. The trend of exergaming, i.e. exercising through video games, may represent the key to the success of autonomous rehabilitation. However, rehabilitation at home demands careful consideration, as all the requirements of a correct rehabilitation therapy must be addressed even in the absence of a therapist.

The aim of this research is to study the feasibility of at-home autonomous rehabilitation through exergaming. To do so, we explore the state of the art of the exergaming field and devise guidelines to design effective and motivating exergames. We provide a novel definition of exergaming, we explore the design of its double nature, as exercise and as game, and we provide guidelines for both. We create a methodology for the structured design and development of exergames for rehabilitation that leverages our new definition. We start from a given exercise, as defined by therapists, and we structure and detail it to produce a virtual rendition of the exercise that can be inserted into a virtual environment. We then add those elements that pertain to gaming and that do not interfere with the underlying exercise mechanics, to create an exergame through iterative prototyping, and we add the modules that allow automatic on-line supervision to obtain the final autonomous therapeutic exergame.

We design and develop a complete game engine for rehabilitation, built upon the Panda3D open source game engine, that integrates exergames and high-usability interfaces with autonomous supervision enabled by computational intelligence. The engine includes an abstraction of exergames that enables easy creation of new exergames for our system; knowledge-based on-line automatic monitoring through fuzzy systems; automatic on-line adaptation through Bayesian techniques; a layer that performs input abstraction and supports several different devices, such as the Microsoft Kinect sensor and the Nintendo Wii Balance Board; and clear and meaningful feedback through high-accessibility, consistent interfaces and a Virtual Therapist Avatar. We also support asynchronous configuration and assessment by a remote therapist through recording and configuration functionalities.

We discuss extrinsic game elements that can be leveraged to increase the motivational factor of a rehabilitation exergame without undermining the benefits of the therapy: we introduce and detail ad-hoc scoring mechanisms and algorithms to provide variations in content, and we detail long-term motivation mechanisms that leverage procedural content generation and interactive evolution methods.

We follow our guidelines to develop a set of nine games for posture and balance rehabilitation of post-stroke elderly patients. We conclude with results from several studies performed using our games, including a three-month pilot test with the complete system, proving the benefits of our solution.
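The knowledge-based fuzzy monitoring mentioned above can be illustrated with a minimal sketch. The linguistic labels, membership functions and angle thresholds below are invented for illustration and are not taken from the thesis:

```python
def trimf(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def monitor_posture(lean_deg):
    """Hypothetical fuzzy monitor: grade the trunk lean (degrees from
    vertical) against linguistic labels and return the dominant one,
    which a supervision layer could map to on-screen feedback."""
    memberships = {
        "upright":   trimf(lean_deg, -10.0, 0.0, 12.0),
        "leaning":   trimf(lean_deg, 8.0, 18.0, 28.0),
        "excessive": trimf(lean_deg, 24.0, 40.0, 60.0),
    }
    label = max(memberships, key=memberships.get)
    return label, memberships

label, grades = monitor_posture(5.0)   # a small lean grades as "upright"
```

In the real system such rules would run on-line on Kinect/Balance Board input streams, not on a single scalar.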
Study, Design, and Evaluation of Exploration Strategies and Coordination Methods

Autonomous mobile robots have seen a widespread development in recent years, especially for tasks that are difficult, dangerous, or simply boring for humans. Relevant examples include planetary exploration and search and rescue. There are several challenges that designers face during the development of systems of autonomous mobile robots, from low-level issues, i.e., sensors, actuators, etc., to high-level issues, i.e., control, navigation, etc. One of the most important aspects that affect the performance of autonomous mobile robots is the set of techniques that allow them to decide the next location to reach (navigation strategies), possibly coordinating among themselves (coordination methods), according to their current knowledge about the world they operate in, in order to autonomously carry out the assigned tasks.

To introduce the idea of navigation strategies it is useful to start from the (huge) literature about path planning, which shows that the core of most of the currently employed methods was developed in the mid-1990s. In these approaches, a user specifies the goal and the robot can decide by itself how to go there. However, in several cases the goal might not be known a priori or the user cannot […] robots, and so these methods cannot be plainly used. What is needed is the development of navigation strategies that allow mobile robots to autonomously decide their next target locations, besides how to go to a specific target. Furthermore, the use of multiple robots can make the execution of the task more efficient, if they smartly coordinate among themselves.

This dissertation focuses on exploration, in which one or more robots execute the following steps in order to discover and map the features of an unknown environment: (a) perceive the surrounding environment, (b) integrate the perceived data in a map representing the environment known so far, (c) decide where to go next and who goes where, (d) go to the chosen destination locations. Specifically, step (c) is the focus of the dissertation, namely the selection of interesting locations (exploration strategies) and their assignment to robots (coordination methods).

In spite of the importance of the exploration problem, general techniques that allow mobile robots to be fully autonomous are not mature yet. First of all, exploration strategies are usually defined following two rather different approaches. On the one hand, […] contexts of real (or realistically simulated) robots, and are empirically assessed by testing them in some environments. On the other hand, exploration strategies are defined in theoretical settings. In these approaches, proposed methods are assessed using theoretical tools like worst-case bounds and competitive ratio in some classes of environments. However, sometimes the assumptions are not fully realistic (e.g., infinite line-of-sight visibility). Further, most of the exploration strategies and coordination methods proposed in the literature base their decisions only on the current metric map, which represents the spatial features of the environment, like the position of obstacles. In the last years, several methods have been proposed to build semantic maps that associate semantic labels (e.g., corridor or room) to portions of the underlying metric map. Despite the great effort in constructing semantic maps, the study of their use for exploration is still rather limited. Finally, a lively debate on good experimental methodologies is currently ongoing in the autonomous robotics community, as they have not yet reached a maturity level comparable to that of other disciplines. The relative comparison currently made between different methods […] environments, based on the competitive ratio. Another issue is the difficulty in reproducing experiments, as parameters are usually not reported in the descriptions of experiments, and thus it is not clear what factors impact the performance of exploration.

Given this background, in the general context of the multirobot exploration problem, the objective of this dissertation is threefold.

To contribute to bridge the gap between theory and practice for exploration strategies. I contribute to define the problem of calculating the optimal off-line exploration paths under some realistic assumptions, i.e., a robot with time-discrete and limited perception and an environment represented as a grid. I analyze the relation between such discretization and its continuous counterpart and formulate the discrete problem as a search problem. Thus, I develop the first algorithm to find the (approximated) optimal exploration path. Simulation results show the viability of our approach for realistic environments. Moreover, I contribute to strengthen the experimental results obtained with real (and realistically simulated) robots, […] that, in the worst case, taking into account also the information gain in selecting the next destination location does not provide any advantage over considering only distance, while it does in the average case on graphs modeling realistic indoor environments.

To improve exploration strategies and coordination methods. I define exploration strategies and coordination methods that embed information coming from semantic maps. This allows privileging some specific areas of the environment. For example, if robots know that an area of an environment is labeled as a corridor, then that area should be privileged and more than one robot should be allocated to it, so that the exploration of the environment is sped up, as rooms are typically attached to corridors. I experimentally show that there is a significant improvement in the exploration of relevant areas of indoor environments within a given time interval, when a priori information about the relevant areas of the environment is available.

To improve the experimental assessment of multirobot exploration systems. The method I propose for computing the optimal […]. Further, I experimentally evaluate (in simulation) the impact of some controllable factors on exploration (different perception/decision timings, and exploration strategies vs. coordination methods), providing some insights that could be useful for a roboticist that has to set these parameters. In addition, I show how some of the artificial intelligence techniques used in this dissertation can be used for exploring a belief state space in the context of pursuit-evasion games, in which a pursuer attempts to capture an adversarial evader that tries, in turn, to actively escape, when they both have a line-of-sight sensor model.

The contributions provided in this dissertation could foster the achievement of the long-term goal towards the theoretical and practical definition and evaluation of exploration strategies and coordination methods for increasing the total autonomy of mobile robots.
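The greedy selection of the next destination, weighing information gain against distance and privileging semantically labeled areas such as corridors, can be sketched as follows. The candidate frontiers, scoring formula and weights are invented for illustration and are not the strategies defined in the dissertation:

```python
def choose_destination(candidates, semantic_weight=None):
    """Exploration-strategy sketch: score each frontier candidate by
    expected information gain discounted by travel distance, optionally
    boosted when it lies in a privileged semantic class (e.g. corridor).
    Returns the id of the best-scoring candidate."""
    semantic_weight = semantic_weight or {}

    def score(c):
        w = semantic_weight.get(c["label"], 1.0)
        return w * c["info_gain"] / (1.0 + c["distance"])

    return max(candidates, key=score)["id"]

frontiers = [
    {"id": "A", "distance": 2.0, "info_gain": 4.0, "label": "room"},
    {"id": "B", "distance": 5.0, "info_gain": 6.0, "label": "corridor"},
    {"id": "C", "distance": 1.0, "info_gain": 2.0, "label": "room"},
]

best_plain = choose_destination(frontiers)                  # metric map only
best_sem = choose_destination(frontiers, {"corridor": 2.0}) # corridor privileged
```

With the semantic weight the strategy switches from the nearby room frontier to the corridor one, mirroring the idea that corridors give access to many rooms.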
The increasing diffusion of broadband internet connection and the development of Ultra High Definition TV (UHDTV) require the use of satellite communication systems able to provide large frequency bandwidth by making use of radio frequency carriers (Extremely High Frequency - EHF) up to Q/V Band. Some satellite-based service providers, such as the EUTELSAT Company, have started using satellites operating in Ka Band even though, at these frequencies, the system has to cope with very strong attenuation introduced by the tropospheric constituents. Indeed, the physics of the channel is mostly influenced by the working frequency of the transmitting system, and the attenuation affecting the signal increases with frequency. In this respect, satellite systems can exploit Propagation Impairment Mitigation Techniques (PIMT) to reduce the negative effects of fading due to the atmosphere. The development and design of these systems must be supported by tropospheric channel models and synthesizers able to characterize the time-varying channel not only in statistical terms.

For satellite systems operating at these frequencies the main source of signal degradation is represented by rainfall events. However, even if rain strongly affects the signal power, it is quite limited in space and time. Moving towards the use of frequencies from 50 to 70 GHz, even the so-called clear-sky attenuation due to atmospheric components such as gases (water vapor and oxygen) as well as clouds becomes relevant, especially for systems with low power margin.

In this work we focused on the study and development of a time series synthesizer for the main tropospheric components (water vapor, clouds, oxygen and rain) that mainly affect the transmission of signals in free-space conditions. In particular, the aim of this study is the development of a generator of time series of attenuation to simulate the variability in time of the tropospheric components separately and their combination in total attenuation. The basic idea is to start from real measurements collected during propagation campaigns to generate time series of attenuation for each atmospheric component, reproducing the statistics of attenuation at a selected location for defined Satcom parameters (frequency, elevation angle). The measurement database taken into consideration is the one provided by the ITALSAT experiment over 7 years of measurements. The input database is accurately modified and scaled to better adapt to the site-specific climatology of the location. The data collected by different instruments (radiometer, beacon receiver and raingauge) are used to generate a database of time series for water vapor, clouds and rain events. The synthesizer generates, for each atmospheric component separately, a time series of attenuation from measured data and finally a time series of total attenuation obtained by the combination of all atmospheric effects (water vapor, oxygen, clouds and rain).

In the first part of my work, we considered the non-rainy attenuation components, focusing in particular on the modelling of Integrated Liquid Water Content (ILWC) and Integrated Water Vapor Content (IWVC). One critical step of this retrieval is represented by the identification, discrimination and interpolation of the rainy periods, in order to avoid errors in the estimate of ILWC and IWVC. The rainy-period identification is obtained by properly combining three databases of measurements (raingauge, beacon receiver and radiometer). We assumed 24 hours as the basic time frame for the time series of ILWC and IWVC. Daily time series are properly catalogued into a set of classes according to ILWC and IWVC maximum values. The site-specific statistics of ILWC and IWVC to be reproduced, provided either by measured data or by ITU-R models, are taken as input by two separate optimization procedures that return the number of daily time series to be selected from each ILWC and IWVC class. For the time series selection, we identify the daily time series of ILWC and IWVC that jointly satisfy the solution of the two optimizations.

In the second part of the work we focused on the synthesis of rain attenuation events. The rain events collected in the database are catalogued in 10 classes according to the peak of attenuation they experience. A dedicated optimization takes as input the long-term statistics of rain attenuation, provided either by measured data or by ITU-R models, and returns the total amount of rainy time to be selected from each class. Then, the events of rain attenuation are randomly selected according to the optimization result. Finally, the composition of all tropospheric components is achieved through a new algorithm based on the identification and classification of cloud types (Cloud Type Algorithm, CTA). This new procedure guarantees an accurate superimposition of all events according to the physical properties of clouds. The validation of the model has been carried out on both first and second order statistics, showing the ability of the synthesizer in generating appropriate time series able to reproduce the input site-specific statistics. The model is valid for every link whose site is located in a temperate region, for frequencies between 5 and 70 GHz and elevation angles between 5° and 90°.
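The per-class selection of rain events driven by the optimization output can be illustrated with a minimal sketch (not from the thesis): for each attenuation class, draw recorded events until the rainy time assigned to that class is covered. The toy event database and targets below are invented for illustration:

```python
import random

def select_rain_events(events_by_class, target_time_s, seed=0):
    """Sketch of the event-selection step: for each peak-attenuation class,
    randomly draw recorded rain events until the rainy time that the
    optimization assigned to that class is covered."""
    rng = random.Random(seed)
    selected = {}
    for cls, events in events_by_class.items():
        pool, chosen, total = list(events), [], 0.0
        rng.shuffle(pool)
        while pool and total < target_time_s.get(cls, 0.0):
            ev = pool.pop()
            chosen.append(ev)
            total += ev["duration_s"]
        selected[cls] = chosen
    return selected

# Hypothetical database: two peak-attenuation classes with event durations.
db = {
    1: [{"peak_db": 3.0, "duration_s": 600.0},
        {"peak_db": 2.5, "duration_s": 900.0}],
    2: [{"peak_db": 12.0, "duration_s": 1200.0}],
}
sel = select_rain_events(db, {1: 1000.0, 2: 1200.0})
```

In the actual synthesizer the targets per class come from an optimization fitted to the long-term rain attenuation statistics (measured or ITU-R), not from hand-picked numbers.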
Characterization and modeling of phase change memories

[…] has driven a substantial transformation in both our personal lifestyle and in the work organization within companies. Such impressive changes have so far been possible thanks to continuous performance enhancements in the semiconductor field. A key enabling factor has been the availability of memory technologies able to store data of larger and larger size in a faster and faster way. The leading actor of such a success story has been the Flash technology, which is based on the ability to change the threshold voltage (and accordingly the read current) of a MOS transistor by injecting charge into a dedicated floating gate layer. Given the requests from the memory market for ever higher performance, Flash memories are currently facing severe challenges. The size scaling, so far well predicted by Moore's law, is expected to be harder and harder below the 20 nm technology node. For this reason the semiconductor companies are strongly looking for novel concepts able to sustain the improvements seen so far. Among the novel technologies proposed as next generation memories, a privileged position is currently held by the Phase Change Memory (PCM). PCM has the memory capability thanks to the property of particular materials (e.g. chalcogenide alloys) to reversibly switch between an amorphous and a crystalline phase with markedly different electrical resistivity. Such a principle allows obtaining non-volatile memories with interesting read/program speed, good scalability perspectives and a remarkable cycling capability.

Nowadays, a deeper knowledge of the PCM physics is strongly requested to drive the development of the PCM technology in the years to come. This motivates the need for research activities, such as the ones described in this doctoral work. The introductory chapter of this work provides an overview of the current non-volatile memory (NVM) scenario, subdividing the possible technology evolutions between an evolutionary scenario and a paradigm shift. The phase-change technology is then introduced, dealing with its history, the basic operation and the elementary physical description. The first chapter reviews the current state of the art in the physical comprehension of sub-threshold conduction, threshold switching and crystallization, providing the basic elements of the following three chapters. Finally, the current perspectives of the PCM technology are discussed, with a quick glance at the so-called PCMS architecture, which is expected to solve the current PCM limitations in terms of size scaling by stacking the memory element and a cell selector made of another chalcogenide material.

The second chapter is devoted to the study of the electrical conduction in the amorphous phase. Such studies are of particular importance, given that PCM is a resistance-based memory. For this reason, a deeper knowledge of the conduction properties allows optimizing the resistance window between the two programmed states and carefully optimizing the cell geometry. Amorphous materials present the peculiar property of a non-homogeneous conduction, because of the disordered nature of such materials. This poses some critical issues, since the prediction of the resistance value as a function of the cell geometry (e.g. for the cell down-scaling) is non-trivial. To deal with such a problem, this chapter introduces a novel model for conduction, based on the so-called energy landscape. Such an approach allows taking into account the disordered nature of the material […] the I-V curves. The energy landscape model is lastly extended to consider the carrier heating, at the basis of threshold switching. This phenomenon is crucial, since it enables the phase transition (by allowing a larger current to flow into the cell) and it limits the maximum read voltage. The proposed switching model is then validated as a function of the ambient temperature and of the amorphous size. The present study, conducted as a function of the amorphous cap size for two nano-scaled technology nodes, is particularly relevant for the prediction of the conduction properties as a function of the cell geometry, possibly allowing to address the cell design in future PCM technologies.

One of the key properties of the PCM technology is its non-volatility, which is guaranteed by the relatively long time needed for the relaxation of the (metastable) amorphous state into the (stable) crystalline one. The third chapter of this work deals with a detailed study of the retention capability in PCM on a large statistical scale. Such studies are fundamental in order to allow large arrays to properly satisfy the data retention requirements. A wide experimental characterization is then provided, […] gaussian spread in the activation energy for crystallization. Finally, the cycle-to-cycle variability is analyzed more in depth, allowing to subdivide the retention characteristics into three separate families, namely: i) analog variability, ii) digital (binary) variability and iii) pseudo-repeatable characteristics.

The fourth chapter is dedicated to the statistical study of the program operation in PCM, or phase switching. The study is focused on the analysis of the programming characteristics at the statistical level. First of all, the work describes the dependence of both the set (or crystallization) and the reset (or amorphization) operations on the initial reset state, determined by the reset voltage. The dependence of the melt current and of the obtained resistance distributions on the amorphous size is then presented and discussed. The collected data allow comparing the high-temperature set region with the low-temperature retention regime, providing the following observations: i) the previously observed non-Arrhenius behavior of crystallization is confirmed at the statistical level and ii) the set operation is shown to have a tighter distribution of the crystallization times compared to […]. Finally, the work compares two set techniques, namely: i) square pulses with amplitude lower than the melt level and ii) triangular pulses with slow quench from the melt. Resulting resistance distributions are analyzed and compared as a function of the pulse energy, concluding that the triangular pulses allow obtaining tighter R distributions at a given pulse energy.

Studies of the array-level statistics in PCM are rarely reported in the literature; for this reason the statistical studies presented in chapter III and chapter IV of this doctoral work are particularly interesting for the array-level optimization of future PCM products.
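The statistical retention picture sketched above (Arrhenius-activated crystallization with a gaussian spread of activation energies across the array) can be illustrated with a Monte Carlo toy model. All parameter values below (attempt time, mean activation energy, spread) are invented for illustration and are not the ones extracted in the thesis:

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant, eV/K

def crystallization_time(ea_ev, temp_k, t0_s=1e-9):
    """Arrhenius crystallization (data-loss) time for a single cell."""
    return t0_s * math.exp(ea_ev / (K_B * temp_k))

def fail_fraction(temp_k, spec_s, mu_ev=1.5, sigma_ev=0.1, n=20000, seed=0):
    """Monte Carlo sketch: fraction of an array whose cells crystallize
    (lose data) before `spec_s` seconds, assuming a gaussian spread of
    the activation energy across cells."""
    rng = random.Random(seed)
    fails = sum(
        crystallization_time(rng.gauss(mu_ev, sigma_ev), temp_k) < spec_s
        for _ in range(n)
    )
    return fails / n

ten_years = 10 * 365 * 24 * 3600.0
frac_85c = fail_fraction(85 + 273.15, ten_years)    # small tail fraction
frac_150c = fail_fraction(150 + 273.15, ten_years)  # much larger fraction
```

Even this toy model shows why array-level retention must be assessed statistically: the spec is set by the tail of the activation-energy distribution, not by the typical cell.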
Economic mechanisms for online pay-per-click advertising

Sponsored search auctions (SSAs) constitute one of the most successful applications of microeconomic mechanisms, producing a revenue of about $42.8 billion in the U.S. alone in 2014, dominating display ads, the second largest revenue source. In an SSA, a number of advertisers bid to have their sponsored links (from here on, ads) displayed in some slot alongside the search results of a keyword. SSAs currently adopt a pay-per-click scheme, which requires a positive payment from an advertiser only when its ad is clicked. Given an allocation of ads over the available slots, each ad is associated with a click-through rate (CTR) corresponding to the probability that such ad will be clicked by the user. CTRs are estimated by the auctioneer and play a crucial role in the definition of the auction, since they are used by the auctioneer to compute the optimal allocation (in expectation) and to compute the payments for each ad. In the microeconomic literature, SSAs have been formalized as a mechanism design problem, where the objective is to design an auction mechanism that […]

[…] study of effective models of user attention and their exploitation in the auction mechanism. A number of works showed that externalities play an important role in the user behaviour. On the other hand, externalities may make the problem of finding the optimal allocation intractable, even when approximated. The most widely adopted user model is the Cascade Model, in which a user is assumed to scan the ads sequentially from the top slot to the bottom slot, with a probability to observe the subsequent slot that depends on the last observed ad (ad-dependent externality) and on its position (position-dependent externality), and with the remaining probability the user stops observing the ads. The computational complexity of the SSA problem when the Cascade Model is adopted is unknown, but it is supposed to be NP-hard. Moreover, the Cascade Model presents limitations in the way the externalities are represented w.r.t. the real world. I worked to overcome some of them by introducing three new models. I provided […]

[…] there could be a gap in the approximation guarantee between the optimal approximation algorithm that can be designed and the optimal one under the constraint of truthfulness. For this reason, I studied both situations, the first in order to give a computational complexity characterization of the problem and the second in order to provide a mechanism design result. Studying the third model, I also enlarged the set of applications, considering also the environment of mobile geo-location advertising, devoted to situations where […] pay-per-click/visit auctions (e.g. […]). Specifically, this is an environment where mobile ads are targeted based on a user's location (e.g., streets or squares within a city or a district). This field has been identified as a key growth factor for the mobile market and has not been widely studied, in particular not from a mechanism design point of view, a crucial ingredient for its success. In my final dissertation, I proposed exact algorithms and then I identified subclasses of instances where the problem can be easily solved in polynomial time.

The problem is challenging since it represents one of the first examples where online learning theory and mechanism design are paired to obtain effective methods to learn under equilibrium constraints (notably the truthfulness property). In the literature there are papers that study this problem when a single slot is available and a specific parameter called quality is unknown. I extended the study to the case of multi-slot instances, considering also situations where parameters different from the quality are unknown. In order to obtain truthful mechanisms, it is necessary to adopt Multi-Armed Bandit (MAB) algorithms that separate the exploration and the exploitation phases. In the final dissertation, I provided bounds over the loss of these mechanisms w.r.t. the one adopted when all the information is available, considering two different measures of loss (regret): regret in the revenue of the auctioneer, i.e., how much the auctioneer loses, and regret in the Social Welfare, i.e., how much the community loses in terms of […]

[…] the hardness of the problem. Specifically, this aspect could be crucial in the choice of the model to adopt for a specific application; 2) truthfulness: an economic mechanism is often composed of agents that interact. Agents are usually rational (selfish). Rules are required in order to handle the interaction and to guide it to a stable outcome. Otherwise, the market could become unstable and unpredictable. For this reason, in order to guarantee stability it is necessary to design truthful mechanisms; 3) exact and theoretically bounded approximation algorithms: once the hardness of a problem is known, the problem has to be solved. This requires the design of algorithms. In the case that finding the optimal allocation is not an easy problem, a study of approximation algorithms is required, otherwise the problem cannot be solved in practical situations. The study of the computational complexity could guide the choice of which approximation algorithm has to be studied. At the same time […]

[…] i.e., we do not have all the information w.r.t. the ideal case in which all the information is known. The truthfulness requirement influences these bounds. The ideal goal is the design of very expressive user models admitting very efficient allocation algorithms that can be used in truthful mechanisms with the minimum online learning regret. In the work, I showed that this is never the case in practice. Indeed, each user model provides a different trade-off in terms of expressiveness, economic stability, approximation bounds, and online regret bounds, and therefore there is not a best user model for every scenario, but each scenario potentially requires a different model.
incentivizes advertisers to bid a detailed characterization of Finally, I proposed polynomial values of the allocation. the truthfulness requirement
their truthful valuations (needed the computational complexity time approximation algorithms influences the design of the
for economic stability) and that of two of the three models. that can be implemented in In conclusion, from a more algorithm too;
assures both the advertisers and Afterwards, for the models truthful mechanisms. general point of view, I can 4) online learning: in real world
the auctioneer to have a non that are NP-hard, I proposed summarise my work in the environments, often, we face
negative utility. approximation algorithms. The final part of my work is following way. I studied online situations of lack of information,
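The cascade user model described above lends itself to a compact sketch. The function below is illustrative only (parameter values and names are invented, not taken from the dissertation): it computes each ad's expected click probability in a given allocation, with the continuation probabilities of the ads placed above acting as externalities.

```python
# Sketch of the cascade click model used in sponsored search auctions.
# All parameter values are illustrative, not from the dissertation.

def expected_clicks(allocation, quality, continuation):
    """For ads listed top-to-bottom, return each ad's click probability.

    quality[a]      -- probability the user clicks ad a once observed (CTR)
    continuation[a] -- probability the user keeps scanning past ad a
    """
    probs = {}
    observe = 1.0  # probability the user observes the current slot
    for ad in allocation:
        probs[ad] = observe * quality[ad]
        observe *= continuation[ad]  # ad-dependent externality on lower slots
    return probs

quality = {"a": 0.5, "b": 0.4, "c": 0.2}
continuation = {"a": 0.8, "b": 0.9, "c": 0.7}

clicks = expected_clicks(["a", "b", "c"], quality, continuation)
# ad "b" is observed only if the user scans past "a": 0.8 * 0.4, i.e. ~0.32
```

In a position-dependent variant, `continuation` would also be indexed by the slot; the loop structure stays the same.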
INFORMATION TECHNOLOGY

Decomposition Methods for Quadratic Zero-One Programming

Quadratic 0-1 programming with linear constraints is a very general class of optimization problems and has a wide range of applications. This problem has the following form:

QP: min x^T Q x + c^T x
    s.t. Ax = b, x in {0,1}^n

where A in R^(m x n), Q in R^(n x n), b in R^m, and c in R^n.

Many combinatorial optimization problems admit natural formulations as quadratic 0-1 programming problems. The Quadratic Assignment Problem (QAP), the Quadratic Traveling Salesman Problem (QTSP), Graph Partitioning, the Quadratic Knapsack Problem (QKP), and the Quadratic Minimum Spanning Tree Problem (QMSTP) are among the well-known particular cases of the QP which arise in a variety of real-world applications. All of these problems are known to be NP-hard in general, and therefore the QP is also NP-hard.

Branch-and-bound algorithms are the most successful procedure for solving the QP. The branch-and-bound algorithm is based on the decomposition of the original problem into a series of smaller subproblems; it then recursively solves each subproblem and discards non-optimal solutions by using the best obtained lower bound. This approach behaves well only if one gets tight lower bounds for the objective function. In general, the solution methods for finding a lower bound for the QP can be divided into two main groups: Reformulation-Relaxation approaches and Reformulation-Decomposition approaches. Since in general the QP is a non-linear, non-convex problem, most of the proposed approaches try to reformulate the problem either as an equivalent Mixed Integer Linear Program (MILP) or as an equivalent quadratic 0-1 program, and to solve the resulting program by effective algorithms that take the problem structure into account. Based on the structure of the resulting problem, different relaxation and/or decomposition methods may be applied to provide a lower bound.

This thesis consists of two parts. Part 1 deals with the solution methods for the general case of the QP. We study different reformulations and relaxation strategies based on linear and semidefinite programming. More precisely, we start with classic linearization methods to obtain a lower bound, and then try to improve the reformulation so that its LP relaxation provides a stronger lower bound in a reasonable time. Moreover, we propose the Semidefinite Programming approach, which can be used either to generate a strong relaxation of the QP or to provide a convex reformulation of the problem. Then we describe different reformulations of the QP based on an equivalent convex or non-convex quadratic 0-1 program. After introducing the different reformulation strategies, we use various decomposition techniques (including Lagrangian decomposition) to obtain a strong lower bound.

Part 2 is concerned with some special cases of the QP related to the quadratic versions of some well-known combinatorial optimization problems. Among the most important classical combinatorial optimization problems, we study the quadratic assignment problem (QAP), the quadratic minimum spanning tree problem (QMSTP), the quadratic traveling salesman problem (QTSP) and, finally, the quadratic shortest path problem (QSP).

The Quadratic Assignment Problem (QAP) is one of the classical difficult combinatorial optimization problems. Due to its wide variety of applications and its resistance to solution strategies, numerous researchers have studied the QAP and proposed both heuristic and exact solution methods. We review different reformulations and lower bounding procedures for the problem, and then dedicate particular attention to level-RLT representations, whose computation requires much effort. We propose a new compact reformulation for each level of the RLT representation, exploiting the structure of the problem. Computational results on some benchmark instances indicate the potential of the new RLT representations as the level of the RLT increases. Moreover, we study two special cases of the QAP: the Adjacent QAP and the QAP on reducible graphs. The Adjacent Quadratic Assignment Problem (AQAP) is a variant of the QAP where the cost coefficient matrix has a particular structure. Motivated by the strong lower bounds obtained by applying RLT to the classical QAP, we propose two special RLT representations for the problem. The first is based on a "flow" formulation whose linear relaxation can be solved very efficiently for large instances, while the second one has significantly more variables and constraints but possesses some desirable properties relative to the constraint set. For the QAP on reducible graphs we give a Lagrangian decomposition based on splitting the variables and then dualizing the copy constraint, so that the resulting problem can be decomposed into two quadratic semi-assignment problems.

The QTSP is a variant of the classical Traveling Salesman Problem. For this problem we provide a Linear Programming formulation for the general QTSP that has a variable for each cycle in the given graph. Since the number of cycles is exponential in the graph size, we propose a column generation approach. We compare the bounds resulting from this new formulation with those obtained by some linearization techniques. Computational results on a set of benchmarks used in the literature show that the column generation approach is very promising.

The Minimum Spanning Tree Problem (MSTP) is one of the best-known combinatorial optimization problems. It concerns the determination of a minimum edge-cost subgraph spanning all the vertices of a given connected graph. The Quadratic Minimum Spanning Tree Problem (QMSTP) is a variant of the MSTP whose cost considers also the interaction between every pair of edges of the tree. We review different strategies found in the literature to compute a lower bound for the QMSTP, and develop new bounds based on a reformulation scheme and on some new mixed 0-1 linear formulations that result from an RLT. The new bounds take advantage of an efficient way to retrieve dual information from the MSTP reduced costs; the tighter bounds come at the price of increased computational effort, while the bound obtained using the reformulation scheme seems to trade off between bound tightness and computational effort.

Finding the shortest path in a directed graph is one of the most important combinatorial optimization problems, having applications in a wide range of fields. In its basic version, however, the problem fails to represent situations in which the value of the objective function is determined not only by the choice of each single arc, but also by the combined presence of pairs of arcs in the solution. We model these situations as a Quadratic Shortest Path Problem, which calls for the minimization of a quadratic objective function subject to shortest-path constraints. We prove the strong NP-hardness of the problem and analyze polynomially solvable special cases, obtained by restricting the distance of arc pairs in the graph that appear jointly in a quadratic monomial of the objective function. Based on this special case and on the problem structure, we devise fast lower bounding procedures for the general problem and show computationally that they clearly outperform other approaches proposed in the literature in terms of strength.
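To make the branch-and-bound scheme described above concrete, here is a deliberately naive toy version for a small unconstrained quadratic 0-1 minimization; it is a sketch, not one of the thesis's algorithms. Variables are fixed one at a time, and a partial assignment is pruned when a simple lower bound (exact value of the fixed terms plus every still-attainable negative term) cannot beat the incumbent.

```python
# Toy branch-and-bound for min x^T Q x + c^T x with x in {0,1}^n.
# Illustrative only: the bound is intentionally simple, not an RLT/SDP bound.

def qp01_value(x, Q, c):
    n = len(c)
    return (sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
            + sum(c[i] * x[i] for i in range(n)))

def branch_and_bound(Q, c):
    n = len(c)
    best = [float("inf"), None]  # incumbent value and solution

    def bound(x, k):
        # Exact value of terms among fixed variables x[0..k-1], plus every
        # negative term that some completion could still activate: a valid
        # lower bound on any completion of the partial assignment.
        lb = 0.0
        for i in range(n):
            for j in range(n):
                q = Q[i][j]
                if i < k and j < k:
                    lb += q * x[i] * x[j]
                elif q < 0 and (i >= k or x[i]) and (j >= k or x[j]):
                    lb += q
        for i in range(n):
            if i < k:
                lb += c[i] * x[i]
            elif c[i] < 0:
                lb += c[i]
        return lb

    def rec(x, k):
        if k == n:
            v = qp01_value(x, Q, c)
            if v < best[0]:
                best[0], best[1] = v, x[:]
            return
        if bound(x, k) >= best[0]:
            return  # prune: no completion can beat the incumbent
        for b in (0, 1):
            rec(x + [b], k + 1)

    rec([], 0)
    return best[0], best[1]

Q = [[0, -3,  2],
     [0,  0, -1],
     [0,  0,  0]]
c = [1, -2, 1]
best_val, best_x = branch_and_bound(Q, c)  # optimum at x = (1, 1, 0)
```

The thesis's point is precisely that the quality of `bound` drives the whole method: replacing this naive bound with an LP-, RLT- or SDP-based one shrinks the search tree.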
WSN POWER-PERFORMANCE OPTIMIZATION IN …

…approach to support independent heterogeneous applications from multiple diverse users in wireless sensor networks. The aim is to provide significant performance, resource availability and abstraction from low-level details, while reducing energy consumption to a level comparable to that of ultra-low-power and resource-constrained platforms. The following fundamental goals are at the basis of our research.

Support to heterogeneous applications. The support to heterogeneous applications must firstly be provided in the allocation phase, to ensure that the functional goals of the users are satisfied while preserving the efficient operation of the network. Secondarily, at node level, the hardware and software platform must provide sufficient resources and isolation to allow the concurrent execution of software coming from diverse sources.

Standardization of application development. The application development should be as close as possible to standard programming models and environments, so that clients can avoid taking into account low-level platform details and can develop their applications as they do for general-purpose programs. This requires, for example, standard programming languages, libraries and system calls, a file system and linking/loading mechanisms, as well as POSIX-like threads and processes.

Network lifetime. The first two goals ask for a level of support that ultra-low-power and resource-constrained devices may not offer. This entails the need for more powerful platforms and a higher level of abstraction, which lead to increased energy consumption. Since the lifetime of the network plays a primary role both for the clients and for the service provider, guaranteeing that the superior resources do not lead to a fast energy depletion is a fundamental goal for the whole framework.

Guaranteeing quality of service. The need for energy efficiency should not degrade the performance in terms of quality of service, as specified by the clients. One possible metric can be, for example, the respect of the deadlines and the adherence of the execution to the specified sampling and processing periods. A co-optimization of energy and quality aspects is therefore needed, to avoid an unbalanced behavior toward only one of these two critical aspects.

To fulfill the goals described above, we propose a top-down approach which addresses all the layers of a WSN: from the network level, where opportune software allocation models are addressed, through the node level, which provides support to the execution of multiple heterogeneous applications, to the non-functional optimization layer that sustains the lifetime of the nodes.

As regards the network level, starting from the encouraging results obtained in previous works, our effort has been devoted to overcoming some of the limitations characterizing the initial centralized software allocation model. As an evolution of our previous work, we aimed at defining an ILP model which eliminates some restrictive hypotheses and sets the golden reference for the allocation problem, along with a very fast heuristic, lightweight and accurate enough to be embedded in cluster-heads and more powerful gateway nodes. In perspective, the heuristic would possibly allow an in-network distributed allocation of clients' applications.

Continuing our top-down analysis, after having defined a suitable allocation model, we moved to node level with the aim of providing more computational and memory resources, as well as a higher degree of abstraction and standardization. In particular, an easy and user-friendly set of APIs has been integrated into our framework. Special attention has been paid to allowing the execution of more complex tasks and the reduction of radio transmissions by leveraging local processing, as well as the memory isolation of multiple processes running on the same node. With the same effort, we have moved toward the simplification of code development, through the adoption of standard libraries and system calls and the support to code mobility for dynamically reconfiguring nodes. These objectives have required a combined hardware/software co-design to guarantee suitable performance of the platform.

The next step in our top-down flow has been devoted to addressing the non-functional optimization level. The first issue that we tackled is related to the heterogeneity of the sources that deploy their applications on the network, which may lead to very inhomogeneous periods and tolerable slacks. This can cause a fragmented duty cycle on the nodes, eventually leading to fast energy depletion and operation inefficiency. At first, thus, we investigated the possibility of merging as much as possible the execution moments of the periodic tasks, in order to reduce the energy overheads of frequent awakenings of system and devices. To do so, we have devised an optimization approach capable of significantly enhancing the energy efficiency of the nodes while minimizing quality-of-service loss.

After the rationalization of the duty cycle, the subsequent step within the non-functional optimization layer has regarded the adaptation of the node operation profile to the actual computational needs, to ensure that in no case the energy consumption exceeds the bare necessities of the applications. In particular, we started from a careful analysis of the specific characteristics of the sensing and processing stages in wireless sensor networks. In classical approaches to WSN programming, in fact, the phases related to data sensing and data processing are strictly mingled in the application layer: programs are usually developed as cycling loops in which measurements are performed on the considered physical parameter and, soon after, the retrieved data are sent toward the sink via the radio channel. From this paradigm ensues a general behavior of the network which can be very energy consuming. We have designed the operation of our framework starting from the assumption that locally analyzing data on the end device can strongly reduce the energy consumption of the network, because of the reduced radio activity. Moreover, by decoupling sensing from processing, extending idle periods at the deepest possible power states is also possible. This, however, may cause losing the memory status when the microcontroller is powered off or put in stand-by. This new problem mainly concerns the preservation of the status (registers, stack, heap) of the applications between two subsequent active periods, and requires a power-efficient hibernation mechanism. Consequently, a formal hibernation model has been carefully defined to ensure that the best choice is always made between a complete system hibernation and a low-power sleep state.

We extensively validated our models by developing a prototype hardware/software platform and, where needed, suitable simulation frameworks that stress the models at their limits. Results confirmed a behavior that fully complies with the general objectives of the work.
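The benefit of merging the execution moments of periodic tasks can be illustrated with a toy calculation; the periods below are made up, and this is not the thesis's optimization model, just a count of how many distinct wake-up instants a node incurs before and after forcing the periods to harmonic values so that activations coincide.

```python
# Toy illustration of duty-cycle rationalization in a WSN node: counting
# distinct wake-up instants over a horizon. All periods are invented.

def wakeups(periods, horizon):
    """Number of distinct instants in [0, horizon) at which any task fires."""
    instants = set()
    for p in periods:
        instants.update(range(0, horizon, p))
    return len(instants)

raw_periods = [7, 10, 13]    # inhomogeneous periods -> fragmented duty cycle
merged_periods = [8, 8, 16]  # periods rounded to harmonic values (toy choice)

H = 7 * 10 * 13              # hyperperiod of the raw task set (910 ticks)
fragmented = wakeups(raw_periods, H)     # many scattered wake-ups
rationalized = wakeups(merged_periods, H)  # far fewer, aligned wake-ups
```

Each avoided wake-up spares the radio/MCU power-up overhead, which is the effect the optimization approach above exploits (subject, in the real framework, to the tolerable slack of each task).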
The topic of my thesis is devoted to the study, design, characterization and application of Silicon Carbide (SiC) detectors for photons and charged particles, like alphas, protons and ions, especially for laser-generated plasma radiation experiments. High-purity and thick SiC epitaxial layers have been used to realize radiation detectors. The application of SiC detectors in laser-generated plasma physics has been addressed: in particular, the effects of the ultra-high intense radiation levels on the detector will be studied both theoretically and experimentally. The charge carrier transport in SiC has been studied, and the timing performance of the device has been theoretically determined and experimentally demonstrated.

Laser-generated plasma experiments related to ion acceleration and nuclear reactions are carried out worldwide, and many great results have been achieved in the last decades. Traditional radiation detectors used in laser-generated plasma experiments are showing their limits, and new detectors with better performances are required. In the last few years, SiC radiation detectors have thus been proposed to be employed in these experiments due to their great physical and electrical properties.

Under this framework, several SiC detectors have been characterized during my thesis work, acquiring interesting data on their properties and performance in terms of response speed, time resolution and tolerance to plasma radiation. The time response of SiC detectors has been studied by means of both simulations and experiments. A simulator based on Matlab and Simulink has been realized for this study. Figure 1 shows the block schematics of the simulator: parameters such as temperature, photon or ion/particle energies, detector geometry and bias conditions, and doping of the SiC layer can be set. The simulations of the response of SiC detectors to photons have been carried out, obtaining very fast signals with rise time and width of a few hundreds of picoseconds, which are in good agreement with the experimental data. The response of SiC detectors to alphas and protons with different energies has been studied as well, considering the deposited energy distribution given by the available simulator SRIM 2013. A minimum rise time and pulse width of 0.3 ns and 0.7 ns, respectively, have been predicted for 5.5 MeV alpha particles. Similar results have been obtained with 2 MeV protons. A study devoted to the optimization of SiC detectors operating in time-of-flight configuration has been done by means of the designed simulator. The effects of the doping of the SiC epitaxial layer, of the detector geometry and of the bias conditions have been studied to establish the criteria for designing ultra-fast SiC devices for time-of-flight detectors.

1. (a) Simulator developed on the physical model of SiC detectors. (b) Simplified physical model of SiC detector performance.

The laser-plasma experiments were conducted at the Prague Asterix Laser System (PALS) under the project "High energy proton acceleration by thin hydrogenated-doped silicon dielectric targets using a sub-nanosecond laser" (HEPA), organized in collaboration with Dr. A. Picciotto from FBK-IRST, Italy. The project was aimed at maximizing the proton/ion energy yield at the target interaction of a sub-nanosecond laser (PALS) with hydrogen targets. The project also planned to compare the ion/proton energy with those achieved with advanced silicon, metallic and polymeric targets, keeping the target thickness constant. The goal of the project was to propose new semiconductor materials as possible new targets to be employed in laser-driven particle acceleration experiments, instead of the standard metals or polymers. Our SiC detectors were used together with other nuclear radiation detectors to analyze the radiation emitted from the plasma. Great results have been achieved from our SiC detector in this project: nuclear fusion was firstly found with so low a density of laser-boron interaction, and high-yield production of alpha particles was demonstrated using SiC detectors by triggering the proton (p)-boron (11B) nuclear reaction (1). A current signal pulse amplitude as high as 1.8 A has been acquired with our 5 mm² SiC detector; a 0.8 ns risetime and a 1 ns pulse width have been measured with our 1 mm² detector; 8 well-resolved peaks have been detected within only 20 ns, and 2 peaks within less than 2.2 ns have been detected in this experiment. It has been proved that interdigitated SiC detectors are more sensitive to slow ions than more conventional pad SiC detectors. By achieving a high signal-to-noise ratio and nanosecond time resolution, the advantages of SiC detectors have been demonstrated over the traditional detectors used in laser experiments, like the Faraday cup. In addition, a method for calculating the number of particles from the signals delivered by time-of-flight semiconductor detectors has been developed; in the figure, a1 and a2 are identified by the CR-39 nuclear track detectors, and our SiC detector allowed determining their energy distribution with very high precision.

Figure 2. (a) Signals from shot 44023 acquired with the circular pad SiC detector; the alpha signals start from 50 ns and end at 97 ns. (b) Calculated alpha number of Figure 2 (a); the total number is 38880.

1. A. Picciotto et al., Boron proton nuclear fusion enhancement induced in silicon targets by low-contrast pulsed laser, Physical Review X, 2014.
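The time-of-flight principle behind these measurements can be sketched in a few lines. The flight path and timing values below are hypothetical, and the formula is simply the standard non-relativistic kinetic-energy relation E = 1/2 m (L/t)^2, not code from the thesis.

```python
# Sketch of time-of-flight (TOF) energy reconstruction: a particle covering
# flight path L in time t has E = 1/2 * m * (L/t)^2 (non-relativistic).
# Numbers below are illustrative, not experimental values.

AMU_KG = 1.66053906660e-27   # atomic mass unit in kg
EV_J = 1.602176634e-19       # electron-volt in joules

def tof_energy_eV(mass_amu, flight_path_m, flight_time_s):
    v = flight_path_m / flight_time_s            # mean velocity from TOF
    e_joule = 0.5 * mass_amu * AMU_KG * v * v    # kinetic energy
    return e_joule / EV_J

# A proton (~1 amu) flying a hypothetical 1.0 m path in 250 ns:
e = tof_energy_eV(1.007276, 1.0, 250e-9)   # tens of keV
```

In a real setup the start time is given by the laser shot (photopeak) and the stop time by the detector signal, and relativistic corrections are needed for the fastest particles.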
Description of the objectives

Today the semiconductor industry is facing increasing problems to continue delivering performance improvements at the pace that is now expected by its user base, as well as by the general public. This is not yet caused by problems in achieving feature size reductions, but by the side effects and non-idealities caused by said scaling. The failure of Dennard scaling in deep nanometer architectures results in an ever worsening power density increase, eventually leading to the dark silicon problem, where power and thermal constraints limit the number of transistors in a chip that can be switched at the maximum clock speed to an ever decreasing fraction. This problem, if not solved, could be a major roadblock in the evolutionary path from multi-core to many-core architectures. Power efficiency alone is not enough to mitigate the dark silicon issue, as one of the major problems caused by the power density increase is the need to effectively dissipate the generated heat away from the silicon die, to prevent immediate failures as well as reliability issues caused by high operating temperatures. This problem will also be worsened in the future with the introduction of advanced chip technologies.

Another important consideration is the high variability in the load experienced by multi-cores, caused by the potentially very different activities performed by the cores. In this perspective, effective dynamic thermal management solutions that can push the cores to their maximum performance, subject to the constraint imposed by the need to remain within safe operating temperatures, are a key aspect to achieve the best utilization of the computational capabilities of current, and especially future, multi-cores.

Methodology

The thermal dynamics of both conventional 2D multi-cores and 3D die-stacked ones show two separate time scales: a slow one, in the order of seconds to a few minutes, and a fast one, in the millisecond range. The fast dynamic is that of the silicon layer, which has a small thermal capacity and, owing to the non-negligible thermal resistance of the silicon bulk, can swing very rapidly with respect to the temperature of the heat sink. The slow thermal dynamic is conversely that of the heat sink. To effectively control the temperature of future generation multi-cores, controlling the fast thermal dynamic becomes necessary. To do so, however, periodic policies would need to be operated at too fast a rate: interrupting the cores so frequently to run the policy would result in an unacceptable overhead.

The proposed policy relies on event-based control theory to couple the fast reaction time needed to counteract abrupt temperature changes with a low overhead. A hardware-software split is proposed, where a hardware state machine generates events when the temperature of a core changes by a given threshold or a programmable timeout occurs, while a software policy implements the control algorithm. Since the policy is no longer executed periodically, the intervention rate is adaptive, depending on the variability of the core temperature. This solution couples the performance benefits of a policy implemented entirely in hardware with the flexibility of a software policy.

A simulation flow has been developed in order to validate the proposed policies, composed of a cycle-accurate multi-core instruction-set simulator, a power model and a thermal model. The instruction-set simulator is based on GEM5, extended to support multiple voltage and frequency islands on the chip, down to a per-core granularity. Orion power models are used to produce power traces from the execution data produced by the GEM5 simulator. A flexible thermal model has been developed using the Modelica language, also supporting 3D die-stacked chips. The model is component-oriented, meaning that each individual component is modeled separately with its own differential equations, maximizing flexibility.

1. The developed simulation flow

Discussion of the results obtained

An innovative thermal control strategy was proposed, based on event-based control theory. This control scheme was tested using the developed simulator, by running standard benchmarks on a simulated 24-core 3D die-stacked testbed chip, in order to assess its suitability for novel architectures exploiting 3D stacking. The selected set of benchmarks exercised the ability of the policy to withstand diverse workload requirements, such as both CPU-bound and I/O-bound applications. The proposed scheme outperforms state-of-the-art fixed-rate control strategies, evidencing the inherent advantages of a solution that dynamically and autonomously adapts the controller intervention rate to the application needs, without the need for application-specific tuning.

Moving to the technological side of the matter, the devised control solution is particularly suitable to be implemented in real multi-core architectures, thanks to its flexibility and negligible overhead. In fact, the implementation of the controller was proven to take a few tens of clock cycles. The proposed event-based controller was thus shown to achieve a level of performance comparable to a fully hardware solution, while retaining the flexibility of a software implementation.

2. The simulated 3D multi-core processor
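The event-generation rule described above (threshold crossing or timeout) can be sketched in a few lines; the threshold, timeout and temperature trace below are invented for illustration, and in the real design this logic lives in a hardware state machine rather than in software.

```python
# Toy sketch of event-based triggering: the control policy runs only when the
# temperature moved by more than `threshold` since the last event, or when
# `timeout` ticks elapsed. All parameter values are illustrative.

def event_instants(temps, threshold, timeout):
    """Return the tick indices at which the control policy would be invoked."""
    events = [0]          # policy initialized on the first sample
    ref = temps[0]        # temperature at the last event
    for k in range(1, len(temps)):
        if abs(temps[k] - ref) >= threshold or k - events[-1] >= timeout:
            events.append(k)
            ref = temps[k]
    return events

# Slowly drifting trace: no events until the abrupt step at tick 4.
trace = [50.0, 50.2, 50.4, 50.5, 58.0, 58.1, 58.2, 58.1, 58.3, 58.2]
ev = event_instants(trace, threshold=2.0, timeout=8)  # [0, 4]
```

A fixed-rate controller would have run at every tick here; the event-based rule reacts immediately to the step while staying idle during the quiet phases, which is the adaptive-intervention-rate property the results above measure.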
During the last years in the automotive field - both from the industrial and the academic side - a significant effort has been done for the evolution of active safety and/or performance vehicle dynamic control strategies, as well as for the development of autonomous vehicles. The practical application of these systems is naturally subject to the availability of on-board actuators that allow regulating the desired control variable independently - or with certain degrees of freedom - from the driver's will. This need has led to the so-called drive-by-wire paradigm, where the standard mechanical connection between driver and actuators is replaced by an electronic system, devoted to the regulation of the actuator according to driver or vehicle control logic requests. Among others, the brake-by-wire (BBW) technology focuses on the design of an actuator capable of applying the desired braking torque to the vehicle's wheel. During the years, different technical solutions have been explored. The most successful ones are the electro-mechanical (EMB) and the electro-hydraulic (EHB) architectures. In the former, an…

…exploits the DC motor torque to move a wedge where the braking pads are installed, rather than exerting a clamping force directly on the braking disk. The EHB - directly derived from the most widespread vehicle brake architecture - employs a hydraulic system, activated by an electronically commanded motor/pump, to generate the requested braking force. In this thesis a hybrid EMB/EHB solution is considered: a traditional hydraulic brake is employed and, using an electric motor mechanically connected to the master cylinder, the desired pressure on the braking pads is generated. With respect to the well-known EMB and EHB solutions, the considered one has the advantage of keeping the usual vehicle hydraulic brake layout, adding just the electro-mechanical actuator, thus saving space, weight and cost. Furthermore, it gives flexibility about where to locate the BBW actuator, and it does not increase the sprung mass of the vehicle. This actuator is specifically designed by Brembo for high-performance motorbikes. The BBW actuator's main goal is to provide the desired braking torque at the wheel; for this,…

…equipped with a pressure sensor. Therefore, the actuator control problem consists in the tracking of a reference pressure that comes from the driver-vehicle interface (i.e. the braking pedal) or from other dynamic control strategies. For such a BBW technology, however, the control problem turns out to be quite challenging: firstly because of the typical non-linearities of the traditional hydraulic layout - e.g. the presence of the brake fluid reservoir and oil compressibility - along with those related to friction and temperature variations. Moreover, the required control performances are highly demanding, both in terms of bandwidth and in terms of absence of overshoot. The starting point of the control design is the system model: writing the equations of each actuator component, a physically based complete model is derived. This model represents the system's experimental response very well and is useful as a simulator; however, it is too complex to be employed for control design purposes. For this reason, besides the complete model, a…

…design of the control algorithm. Once the system to be controlled is modeled, the control problem can be faced; in this thesis it is solved following two approaches. A first one is focused on exploring innovative and advanced control techniques; these control techniques are studied and then adapted to the particular system, and finally they are validated on a test bench. Pursuing this approach simplifies the testing phase: in fact, validating the control algorithms on a test bench allows us to discard the safety requirements, which are really strict and severe in this application. This is not possible when testing the control algorithms on a real system, i.e. on a racing motorbike. The second approach is focused on solving the control problem adopting conventional control techniques; in this case the control algorithms are studied and designed based on the system model, and then they are implemented and tested on a real motorbike. For this reason, in parallel with the control design, the fault detection algorithm design performs an…

…in the BBW actuator. In this context the classical dithering compensation is compared with a more sophisticated model-based compensation approach. In particular, the model-based approach approximates the non-linear friction model with an innovative linear-in-the-parameters (LP) approximation, which is updated through an adaptation mechanism. In parallel to the adaptive friction compensation technique, a sliding-mode-based controller guarantees the pressure tracking.

Then, an anti-windup compensation technique is employed in order to build a compensator able to linearize the system's intrinsic non-linearity. At this point, considering that in real systems the control action has an upper physical saturation due to hardware limitations, an anti-windup compensation is derived. The employment of these two compensators permits designing a pressure control on a linear system, without considering the intrinsic system non-linearities. Moreover, this control algorithm does not discard the control action saturation in the design phase.

…in doing this we focused on a particular task that could be necessary when designing the control algorithm: the design of a position controller. In this particular case of study we compared a canonical model-based PID tuning with the VRFT approach, showing that the VRFT algorithm provides satisfactory performance. This approach makes the control design faster and avoids critical phases such as the identification one and the controller design one.

Following the conventional control techniques and exploiting the peculiarities that the control-oriented model highlights, we propose two different control strategies: an adaptive position-pressure switching control and an adaptive cascade position-pressure control. These control strategies are simple, robust and easy to tune. Their design phase is performed on the test bench; then they are implemented and tested on a real motorbike, showing satisfactory performances. Moreover, due to the safety-critical application, a fault detection algorithm suited for the particular case of study is…
electric motor directly exerts the straightforward control control oriented one is derived. important role. presented.
the requested torque on the variable should be the braking This model provides a sufficient Finally, the Virtual Reference
wheel. A promising variant of torque. However, due to the system description for control In pursuing the first approach, Feedback Tuning (VRFT)
such architecture is the Electric sensor reliability, cost and purpose; its analysis gives firstly we deal with the friction, paradigm is experimentally
Wedge Brake (EWB), which encumbrance, the system is important guidelines for the which has a dominant role validated on the test bench;
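The anti-windup idea mentioned in the abstract can be illustrated with a toy discrete-time PI pressure loop using back-calculation: when the command saturates at the actuator limit, the integrator is bled so it does not wind up. This is only a sketch of the general technique, not the thesis controller: the first-order plant surrogate, gains, limits and set-point below are invented for illustration.

```python
# Back-calculation anti-windup around a PI pressure loop.
# All numeric values (gains, limits, plant pole) are illustrative only.

def simulate(kp=2.0, ki=8.0, kaw=4.0, u_max=1.0, dt=0.001, steps=2000):
    p = 0.0        # measured brake pressure (normalized)
    integ = 0.0    # integrator state
    ref = 0.8      # pressure set-point
    for _ in range(steps):
        e = ref - p
        u_unsat = kp * e + integ
        u = max(-u_max, min(u_max, u_unsat))          # hardware saturation
        integ += dt * (ki * e + kaw * (u - u_unsat))  # anti-windup bleed term
        p += dt * (-5.0 * p + 5.0 * u)                # first-order plant surrogate
    return p

print(simulate())   # settles close to the 0.8 set-point
```

Without the `kaw` term the integrator keeps growing while the command is clipped, producing large overshoot once the error changes sign; the bleed term removes exactly that effect.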
INFORMATION TECHNOLOGY

Sensor networks represent a valuable technological solution to monitor and acquire data from an environment, a critical infrastructure or a cyber-physical system. These data are generally used as input for an application, which is able to react to changes occurring in the inspected system. Examples of applications based on sensor networks are those inspecting an environmental phenomenon (e.g., a river or a rock wall), those protecting critical infrastructures, and those monitoring the behaviour of a water distribution network. These monitoring systems are composed of a set of sensors, processing boards and a data transfer apparatus. During the operational life of this infrastructure, the sensor network continuously inspects the system status by sending measurements to the central processing station, where the application runs. Based on measurements coming from the sensor network, applications can be designed to take decisions (e.g., raising an alarm when a deviation from the usual working conditions is registered) and contextually react to the change (e.g., for critical infrastructures, alerting the population about the threat). In this scenario, a model for the inspected phenomenon is usually unknown and the assumption of process stationarity may not hold, even though these assumptions would generally improve the decision abilities of the aforementioned applications. Thus, information coming from the sensor network is critical to monitor the underlying process, to check the status of the system and to react according to its behaviour.
Sensor networks usually work in harsh conditions, which may induce faults, thermal drifts or ageing effects affecting both the embedded electronic boards and the sensors. In fact, they are affected by physical degradation (due to, e.g., humidity, dust, chemicals and electromagnetic radiation), which may induce a gradual deviation of the measured value from the real one. Thus, it is of paramount importance to promptly detect and diagnose faults occurring in a specific unit, since they could affect the application layer, which operates under the assumption that the provided information is not corrupted. If the application does not take into account the possibility of a fault, it may take an incorrect decision, e.g., not alerting the population when a threat is present, or vice versa. Moreover, in the sensor network scenario an unexpected deviation from nominal conditions may be caused either by a fault or by a change in the inspected process. It is crucial to distinguish between these two situations: in the former case, direct maintenance should be performed, while the latter requires a reaction to an environmental change, specifically chosen based on the considered application scenario.
Traditional Fault Detection and Diagnosis Systems (FDDS) are systems specifically designed to detect and diagnose faults possibly occurring in complex systems. More specifically, the tasks of an FDDS are: detection of the fault, i.e., promptly understanding whether a deviation from the nominal state has occurred; isolation, i.e., determining which unit is providing faulty measurements; and identification, i.e., capturing the main characteristics (e.g., type, intensity) associated with the fault. One of the main drawbacks of traditional FDDS approaches is that they generally require a priori information on either the system in nominal conditions or the possible faults. Thus, their direct application to the sensor network scenario is far from trivial since, as pointed out before, we have no information about the model of the system generating the data or about the possibly occurring faults.
In recent years, a novel and promising cognitive approach has been proposed to design FDDS. This novel generation of Cognitive Fault Detection and Diagnosis Systems (CFDDS) is able to automatically learn the nominal and the faulty states in an on-line manner, and is generally characterized by the ability to exploit the temporal and spatial relationships present among the acquired data. Most existing cognitive FDDS apply the cognitive approach only to a single aspect of the system, so they still require at least partial information about the analysed process, or they address the design of CFDDS for specific applications.
In this dissertation we propose a new CFDDS meant to operate on sensor networks. The proposed system is able to characterize the nominal conditions of the system by relying on fault-free data coming from the sensor network. At first, the proposed system learns the dependency graph existing among the datastreams, so as to select only the relevant functional relationships. Based on them, the proposed CFDDS is able to perform fault detection, isolation and identification without requiring a priori information on either the inspected process or the faulty states. More specifically, to model the relationships constituting the causal dependency graph of the sensor network we rely on the concept of Granger causality, which allows considering only those relationships providing meaningful information for fault detection and diagnosis. After learning the network dependency structure, fault detection and diagnosis is carried out in the space of the estimated parameter vectors of linear time-invariant models approximating the functional relationships included in the dependency graph. Deviations from the learned nominal concept are detected by means of a decrease of the log-likelihood provided by the Hidden Markov Model modelling of the parameter vector sequences. Following the detection phase, an isolation mechanism based on the logic partition of the dependency graph is able to distinguish between faults and changes in the environment. Finally, if a fault has occurred, an identification procedure is executed to characterize the different faults by relying on a newly developed evolving clustering-labeling technique in the space of the parameter vectors, which learns the fault dictionary in an on-line manner.
The innovative aspects of this cognitive framework for fault detection and diagnosis are: the design of a CFDDS completely based on the cognitive approach, able to cover all the phases of fault detection and diagnosis in the sensor network scenario; the development of a set of integrated techniques, coming from the statistics and machine learning fields, which rely on a theoretically sound framework developed in the system identification field; the ability to characterize the temporal and spatial relationships existing among the data with the dependency graph, learned with the use of a statistical framework; the ability to characterize the nominal state of the system inspected by the sensor network through learning mechanisms based on the cognitive approach; and the ability to learn the fault dictionary during the operational life of the system, without requiring a priori information about the possible faults.
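The Granger-causality criterion used to build the dependency graph can be sketched as follows: a datastream x is retained as a cause of y if adding x's past to an autoregressive model of y reduces the residual prediction variance. Below is a minimal lag-1 illustration with fabricated data; the dissertation's actual statistical test and model orders are more elaborate.

```python
import numpy as np

def granger_gain(x, y):
    """Fractional drop in residual variance when x's past is added to an
    autoregressive model of y (lag-1 illustration of Granger causality)."""
    Y = y[1:]
    restricted = np.column_stack([y[:-1], np.ones(len(Y))])
    full = np.column_stack([y[:-1], x[:-1], np.ones(len(Y))])
    r_res = Y - restricted @ np.linalg.lstsq(restricted, Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return 1.0 - (r_full @ r_full) / (r_res @ r_res)

# Synthetic streams: y is driven by the past of x, but not vice versa.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_gain(x, y))   # large: keep the edge x -> y in the dependency graph
print(granger_gain(y, x))   # near zero: drop the edge y -> x
```

Thresholding this gain (or an equivalent F-statistic) over all ordered pairs of datastreams yields the directed dependency graph on which detection and isolation then operate.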
Online geolocalized data is being massively produced as a result of both the interactions on online social networks and the content shared on the Internet that is annotated with geographical locations. This constitutes a rich source of information to characterize the geographical places where either the interacting people reside or the geo-tagged content is produced. Urban resources are allocated according to socio-economic indicators, and rapid urbanization in developing countries calls for updating those indicators in a timely fashion. The prohibitive costs of census data collection make that very difficult. To avoid allocating resources upon outdated indicators, one could partly update or complement them using digital data. In this dissertation we propose methods to estimate urban indicators, as well as an unsupervised learning framework to discover dynamic areas of the city using the geotagged content published by either residents or visitors.
First, we conduct an analysis of the evolution of online attention patterns in a content-sharing platform. The evolution of online social networks is driven by the need of their members to share and consume content, resulting in a complex interplay between individual activity and attention received from others. To shed light on the matter, we look into the patterns of activity and popularity of users in the Yahoo Meme microblogging service. We observe that a combination of different types of social and content-producing activity is necessary to attract attention, and that the efficiency of users - namely the average attention received per piece of content published - has, for many users, a defined trend in its temporal footprint. The analysis of the user time series of efficiency shows different classes of users whose activity patterns give insights on the type of behavior that pays off best in terms of attention gathering.
Second, we analyze a random sample of interactions in the same service, but focusing on content generated in Brazil, and accurately predict the GDP and the social capital of 45 Brazilian cities. To make these predictions, we exploit the sociological concept of glocality, which says that economically successful cities tend to be involved in interactions that are both local and global at the same time. We indeed show that a city's glocality, measured with social media data, effectively signals the city's economic well-being. To this end, we aggregate the attention that the city's residents are able to attract on the platform at the level of the city and quantify it using a set of metrics that are put together in a linear model that accurately predicts the GDP.
Finally, we propose an unsupervised learning framework to capture the composition of cities. To discover functional areas in a city, spatial discovery algorithms have recently been applied to social media (e.g., Foursquare) data: functional areas are often identified based on semantic annotations of places and human mobility patterns. We propose a framework based on an objective function to maximize. By being integrated into any clustering algorithm, this function aims at finding and labeling areas such that an area's label is semantically … area and to those in the area's neighborhood, without being too general (e.g., the label "clothing stores" is preferable to "professional places"). We evaluate the framework with a hierarchical clustering algorithm upon Foursquare data in the cities of Barcelona, Milan, and London. We find that it is more effective than baseline methods in discovering functional areas. We complement that evaluation with a user study involving 111 participants in the three cities, and with an additional temporal segmentation of areas upon Flickr data. The results generated by our framework can benefit a variety of applications, including geo-marketing, urban planning, and social recommendations.

1. City graph depicting online attention exchange related to the points in the …
2. Functional clusters identified in Barcelona
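The GDP-prediction step described above - aggregating per-city attention metrics into a linear model - amounts to an ordinary least-squares fit. The sketch below fabricates a "glocality" feature (high only when a city attracts both local and global attention) and fits it; the metric names, coefficients and data are invented for illustration, not taken from the dissertation.

```python
import numpy as np

# Fabricated per-city metrics: glocality is high only when a city attracts
# both local and global attention; GDP is then fit linearly on the metrics.
rng = np.random.default_rng(1)
n = 45                                   # number of cities, as in the study
local = rng.uniform(0.0, 1.0, n)         # within-city attention share
glob = rng.uniform(0.0, 1.0, n)          # attention share from other cities
glocality = local * glob                 # both-local-and-global signal
gdp = 3.0 * glocality + 0.5 * local + rng.normal(0.0, 0.05, n)  # synthetic target

X = np.column_stack([glocality, local, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, gdp, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((gdp - pred) ** 2) / np.sum((gdp - gdp.mean()) ** 2)
print(r2)   # close to 1 on this synthetic data
```

On real data the fit quality is of course far lower than on this noise-free toy; the point is only the shape of the model: a handful of aggregated attention metrics combined linearly.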
Autistic Spectrum Disorder (ASD) and autism are both general terms for a group of complex disorders of brain development, characterized by a triad of symptoms related to lack of social interaction, deficits in the acquisition and expression of language, and repetitive patterns of behavior, often accompanied by sensorimotor impairments. For years, different techniques have been used to improve the quality of life of people who have various developmental disabilities. However, the use of technology continues to receive limited attention, despite the fact that it tends to be a high-interest area for many of these children.
This work tries to present a solution in this broad and varied panorama by developing innovative interactive technologies for autism that can be integrated with therapeutic and school activities and can be autonomously used by therapists and teachers to promote, through engagement, social interaction, communication capabilities and motor skills. Despite a general lack of interest in this field, in recent years we have seen an increasing number of technologies in the research literature and on the market focused on helping and educating children with autism. Existing products and prototypes support a variety of interaction modes and have been designed for different platforms and input devices, from conventional mouse or joysticks to (multi)touch gestures, speech-recognition devices, physical manipulation of digitally augmented objects, robots, and head-mounted devices for autistic children. Still, very few studies have explored the potential of full-body interaction, and in particular of motion-based touchless interaction.
In the research arena, motion-based touchless interaction has started to be explored for learning and therapeutic purposes. Still, most existing works consider the domain of regular children, while very little is known about how motion-based touchless interaction works for autistic children and whether this paradigm can be successfully applied to these subjects. This work aims to provide three different contributions to the current research on the subject: a theoretical one, i.e. a set of guidelines for the development of applications for children with cognitive and motor disabilities; a technological one, i.e. a flexible architecture to support the development of applications according to such guidelines, together with three examples of games for autistic children with low-moderate cognitive deficit, low-medium sensory-motor dysfunction, and motor autonomy, developed and tested in the field; and a demonstration of the initial hypothesis that the touchless paradigm can be successfully applied in the treatment of autistic children.
In the first part, this work provides an overview of the current state of the art in the field of nonconventional interaction paradigms for the treatment of autistic children, including virtual reality, tangibles, robots and mobile devices, discussing in particular touchless motion-based interaction with respect to autism.
In the second part, I show, through empirical experimentation and using play therapy, the actual effectiveness of this type of interaction with children with autism spectrum disorder. The experiments show positive results, but a further analysis of the usability highlights that children ran into a number of problems in using commercial games currently on the market. On the basis of such observations, and thanks to the collaboration with three therapeutic and educational centers which participated in the phases of development, design and evaluation, a structured set of generic and specific guidelines has been distilled, as presented in the second part of this work.
These guidelines informed the design of three touchless games, based on Kinect, aimed at improving children's motor, cognitive and social skills, whose development informs the third part of this work. Considering every child's uniqueness, the games are strongly customized and user friendly, in order to allow for therapeutic use both at the centers and in remote mode.
As described in the fourth part of this thesis, the games have been evaluated in a controlled study, showing improvements in the areas of interest. The results of these empirical studies have confirmed and extended the outcomes of prior research, providing additional empirical evidence that touchless gaming does have a strong potential to improve autistic children's attention and motor-visual skills. Overall, the research sheds light on the opportunities offered by full-body touchless games for the therapy and education of these special users. This thesis ends with the proposal of a research agenda in this field and an outline of future work.

1. Empirical experimentation
2. Example of game developed using design guidelines
The second life of television content

The way people watch television has radically changed. Also the home environment is changing, since many smart users watch television while using a portable PC or a tablet as a secondary screen, more or less related to the broadcast programmes. At the same time, social networks allow the final user to be immersed in a collaborative environment and to talk about television. TV users' social activities implicitly make connections between concepts by means of videos, news, comments, and posts. The strength of such connections may change as the perception of users on the Web changes over time. Moreover, user-generated contents (UGC) are revolutionizing all phases of the content production value chain; in particular, it can be observed that a very large number of UGCs include significant portions of content already broadcast by the TV networks. In this context a number of Social TV applications are emerging, providing to the final user tools for social interaction while watching television, or media content related to a particular TV program. If properly leveraged, these collaborative social environments can be seen as rich information data sources, indirectly returning to broadcasters and content providers feedback from the final users.
In this thesis I define a framework for the integration of the heterogeneous and dynamic data coming from different knowledge sources (broadcasters' archives, online newspapers, blogs, web encyclopaedias, social media platforms, social networks, etc.). The framework uses a knowledge graph to model all the heterogeneous aspects of the information in a homogeneous way. I instantiate it in the context of the investigation: the integration of the cultures of TV and Web, defining a model for the integration of the heterogeneous data coming from the knowledge sources (broadcasters' archives, EPGs, collected audience data, social networks, etc.) which play a role in what I call the "second life" of TV content, starting from its production phase, going through the on-air phase, and continuing with the on-line phase, during which the television content turns out to be a magnet for users in the network, attracting them, and becomes a Social Object. A prototype called MeSoOnTV, the Media and Social-driven Ontology-based TV knowledge management system, which enables the integration of the heterogeneous data coming from multiple knowledge sources, has been developed in order to show some possible features offered by the described framework, based on a meaningful dataset.
Figure 1 presents an overview of the integration framework. It consists of three main layers: a source processing layer, a knowledge graph layer and a knowledge query and analysis layer. The source processing layer has the role of collecting all the data which will be conveyed into the model. It accesses a number of predefined web/social/media sources and processes them in order to extract the information units which will be represented as nodes in the knowledge graph, as well as the information that supports the existence of relationships (modelled as edges in the graph) among them. The knowledge graph layer manages the knowledge graph, which is the core of the proposal. The graph contains essentially three types of nodes - social objects, subjects and concepts - and all the social, representation and structural interactions among them. The knowledge query and analysis layer consists in a set of components for querying, browsing and analyzing the knowledge graph. A query module extracts subgraphs from the knowledge graph based on users' requirements; these are used in the analysis phase to … the knowledge graph. It can act directly on the knowledge graph, or it can handle the subgraphs extracted from the query module, also in terms of matrices or tensors.
The core of the framework is the knowledge network. In particular, I am interested in capturing the dynamic evolution in time of the graph by using temporal nodes associated to social objects and describing their lifecycle. In order to summarize the final definition of the graph (figure 2), a brief description of the main node types is given. The knowledge graph represents the result of public actions of users in social environments. Nodes represent Items; edges represent relationships among Items. The knowledge graph has three main entities (node types): Subjects, which represent users that act in some way; Social Objects, which are the result of public acts; and Concepts, which are physical and ideal objects referred to by subjects via their public actions. The possible relationships between different items could be: a group of subjects that recognize a social value of an act supports the resulting social object; a social object represents a social instance of some concepts in a precise context; and structural relationships express part-of links between entities of the same type. Social objects can evolve in time: I consider the temporal representation of a social object through a special type of concept called a time concept. Each edge of the knowledge graph can be weighted; the weight expresses the strength of the relationship and, together with the structure of the graph, … (figure 3). The system enables the integration of the heterogeneous data coming from multiple knowledge sources and captures multiple aspects of the domain, from the semantic characterization of the TV content, to the social characterization and the social perception of a TV event, to the temporal evolution of the social perception. The demo scenario relies on real data gathered from YouTube and Twitter, related to several Italian TV talk shows on politics, broadcast by RAI and other Italian operators.

1. Overview of the integration framework
2. The knowledge graph definition
3. A screenshot of the demonstrator
Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircraft | Spatial Planning and Urban Development | Structural Seismic and Geotechnical Engineering | Technology and Design for Environment and Building | Territorial Design and Government

PhD Yearbook | 2015
DOCTORAL PROGRAM

Public and Personal: Urban Spaces …
The Future of the Ruins
Towards the Methodology for Reuse of …

DOCTORAL PROGRAM
The Ph.D. programme in Management, Economics and Industrial Engineering … develops in candidates a research-oriented mind-set, along … CRIC-University of Manchester, SPRU-University …
Advisory Board
Adolfo Arata Pontificia Universidad Católica de Valparaíso
Paolo Cederle Unicredit Group
Mats Engwall KTH Royal Institute of Technology
Jean Pierre Helfer IAE Université Paris
Nicolai J. Foss Copenhagen Business School
Christer Karlsson Copenhagen Business School
Janina Kugel SIEMENS AG
Gilbert Lenssen European Academy of Business in Society
Filippo Passerini The Procter & Gamble Company
Andrea Pignataro ION Investment Group
Carlo Purassanta Microsoft Italia
Punam Sahgal IIM Lucknow, Noida Campus
Bianca Scheller John Crane
Valerie Suslow University of Michigan Ross School of Business
Reinhilde Veugelers University of Leuven
Scholarship Sponsors
Animp-Oice-Fondazione Luigi De Januario
Centro per lo Sviluppo del Polo di Piacenza
C.T.G. Italcementi Group
Edor Metodi Quantitativi
Fondazione Politecnico di Milano
Pirelli & C.
Telecom Italia S.p.A.
Fondazione Cariplo
Implementing Environmental Sustainability …
Allocation of Risks within Public-Private …
Family Business Performance …
Sustainable Value Creation …
Infomobility and Open Data: Are They Easy …
Creating Public Value through Procurement …
Public Service Network Formation …
Clean and Competitive Factories …
Automation in Warehouses: Study of the New …
DOCTORAL PROGRAM
MATERIALS ENGINEERING
Chair: Prof. Chiara Castiglioni

The Doctoral Program in Materials Engineering provides the Ph.D. student with a common, basic knowledge of Materials Science and Technology, followed by specialized training in specific fields. The objective is to combine theoretical knowledge with the skills required by technology, in order to form qualified researchers who can manage the design, manufacturing and use of traditional and/or new materials.

Contents of the Doctoral Program
The Doctoral Course covers the following areas:
Polymers and composites
Cements and ceramics
Metals
Biomaterials and materials for biomedical applications
Processing and characterization of advanced metallic alloys
Corrosion and durability of materials
Innovative materials for civil and industrial engineering
Materials characterization (microscopies, scattering, spectroscopy)
Modelling and theoretical approaches to the study of materials structure and properties
Micro- and nanostructured materials
Functional materials for applications in photonics, electronics and sensors
Surface engineering and advanced coatings
Materials for industrial design
Meta-materials
Transformation of materials
Materials for Cultural Heritage

The courses address advanced issues from the outset, both in the main and in the elective courses. Different curricula are offered, which will be activated on the basis of the decisions of the faculty. The Doctoral Program is characterised by high flexibility, in order to satisfy the needs of students who have to develop their research activity in different thematic areas. For this reason, each student submits to the Faculty a curriculum to be approved. The courses offered deal with theoretical, experimental and modelling aspects. Several courses comprise workshops and seminars, with the participation of invited, internationally celebrated speakers. Students can use the most modern facilities for materials synthesis, processing and characterization available at the Politecnico di Milano or in other research laboratories. The Ph.D. student is also assigned to give seminars on topical issues and/or to lecture on the specific fields of his research or on the cultural aspects related to his thesis.

The Doctoral School requires the acquisition of 180 credits (in 3 years):
at least 30 credits must be obtained through attendance at Doctoral Courses (with positive evaluation in the examinations); in addition, attendance at National and International Schools is strongly encouraged;
the remaining credits will be assigned on the basis of the research activity necessary for the development of the thesis project.

Faculty
The faculty is constituted by professors from two departments of Politecnico di Milano.

Dipartimento di Chimica, Materiali e Ingegneria Chimica "Giulio Natta":
Chiara Castiglioni (coordinator), Chiara Bertarelli (vice-coordinator), Francesco Briatico Vangosa, Luca Bertolini, Fabio Bolzoni, Massimiliano Bestetti, Alberto Cigada, Luigi De Nardo, Giovanni Dotelli, Fabio Ganazzoli, Luciano Lazzari, Marinella Levi, Valdo Meille, MariaPia Pedeferri, Guido Raos, Marta Rink, Lucia Toniolo, Stefano Turri, Pasquale Vena.

Dipartimento di Energia:
Marco Beghi, Andrea Li Bassi, Paolo Ossi.

Professional skills achieved by the PhD in Materials Engineering
The industrial world depends necessarily on a great variety of materials. Nowadays, it is easy to outline two industrial needs: i) development and innovation in the production, processing, application and conservation of traditional materials; ii) development of innovative materials for the production of new manufactured goods or devices to cope with the growing demands of modern technologies. Since these two kinds of industrial needs require specialized people, a few specialized curricula are offered.
MATERIALS ENGINEERING
The research work and the teaching activity assure an adequate preparation for an academic career.
Granting Agencies:
ST Microelectronics s.r.l., Fondazione Istituto Italiano di Tecnologia IIT, SOLVAY SOLEXIS SpA, eni SpA, Faber SpA, Media Lario Technologies, Istituto Nazionale di Astrofisica, INSTM, Electrolux SpA
Properties of Cellular Polymeric Materials

Cellular polymeric materials are an important class of engineering materials, yet the knowledge and the understanding of their behaviour are far from complete. The aim of this study is to investigate different aspects of the mechanical and acoustical behaviour of cellular polymeric materials in relation to their three-dimensional structure. The first part of the work is devoted to the development of a method for predicting the mechanical response of honeycomb panels made of polymeric material via a continuous process, in the context of Progetto Alveoplast, a project funded in the framework of the Industria 2015 initiative. After experimental validation of the predictive ability of the three-dimensional finite element model, the mechanical response of several geometries was analysed in order to optimize the honeycomb structure for use as an acoustic barrier for traffic noise control, taking into account both the mechanical and the acoustical requirements.

Besides the honeycomb structure, polymeric foams were also studied. These materials find application, amongst many others, as sandwich panel cores, as shock absorbers and as acoustic liners for sound absorption. Regardless of the application they are used for, their mechanical and acoustical behaviour can be described by a simple model based on the nature of the constituent material and on the structure of the cells and its change under load. The possibility of using polymeric foams to enhance the acoustic performance of panels led to the study of the influence of static deformation on the properties of a foam subject to cyclical stimuli. Indeed, static deformation may affect the microstructure and the constituent material properties, thus modifying the foam behaviour significantly. This study is of practical interest since foams, and in particular those used for noise control, are frequently subject to a static deformation superimposed on a cyclical one. It was shown that time-temperature equivalence is applicable to predict the dynamic mechanical (DMA) response at frequencies not directly accessible, and that the effects of static pre-strain and of temperature are decoupled. The obtained data were used to simulate the acoustic behaviour of a car part made of a layer of polyurethane foam sandwiched between two steel sheets. The prediction was in accordance with the measured acoustic performance over a wide range of frequencies, significantly wider than that directly accessible in DMA experiments, and gave better results than the prediction carried out using a single frequency only.

The acoustical and mechanical behaviour of foams is also affected by the presence of open porosity. This was investigated in a study on a novel open-cell polyethylene foam (PEOC) produced at the CellMat laboratory (Universidad de Valladolid, Spain) through a well-controlled production route. This material displays a peculiar structure characterized by almost closed cells connected by small holes. Crushing the foam up to 90% of its original thickness allows a material with a different structure and mechanical response (PEOC90) to be obtained. Results suggested that changing the microstructure can be a very effective way to control, and enhance, its sound absorption characteristics. The very same foams showed an interesting response to quasi-static and cyclical mechanical stimuli. In fact, unlike other open-cell foams, such as flexible PU foam, the compression behaviour of PEOC is significantly affected by the rate of the applied compressive strain. Extending the observations of studies performed on liquid-filled polyurethane foams reported in the literature, this peculiar behaviour was attributed to the stress contribution arising from gas flow through the holes interconnecting the cells. The comparison between PEOC and PEOC90 seems to confirm that the changes in the structure have an effect not only on the modulus, as expected, but also on the loss factor, which decreases with increasing hole size in the cell walls, while static deformation does not seem to affect the loss factor significantly. In the case of closed-cell foams, the observed reduction of the loss factor with increasing static strain might be attributed to the contribution of air pressure to E.

Finally, the mechanical properties in relation to the anisotropic microstructure of polypropylene-based medium-density (180 kg/m3) foams were studied. A non-standard compression moulding technology, called improved compression moulding (ICM), was used to achieve anisotropy of the cellular structure and a fine control of the final foam density. Four different processing pressures (0.5, 1.5, 4 and 8 MPa) and two different formulations (pure PP and PP+nanoclays) were employed to prepare a total of eight different foams. The influence of the processing conditions and of the addition of nanoclays on the foaming process was studied by means of microstructure and mechanical characterisations. In particular, it was noted that the resulting cellular structure depends on the process conditions and formulations. Nanoclay-charged foams prepared at the lowest pressure conditions displayed high structure anisotropy and a bimodal cell size distribution (see Figure 1). A more homogeneous cell size distribution was obtained in pure PP foams and in foams produced at higher pressure. A correlation between cell anisotropy and cell size was observed, with a different trend for each material and anisotropy ratios ranging from 1.5 to 3. A part of the study was devoted to assessing the applicability of existing models in order to predict the dependence of the mechanical behaviour on the anisotropy ratio of the foam microstructure. The simple model based on a rectangular cell proposed by Gibson and Ashby, in spite of the complexity of the structure of the studied foams, can describe their behaviour in an acceptable way.

1. Cross section of nanoclay-reinforced polypropylene foam prepared at 0.5 MPa. Elongated cells give the material anisotropic mechanical properties; small rounded cell formation is favoured by the nucleating effect of nanoclay.
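The Gibson and Ashby scaling mentioned above links the foam modulus to its relative density. A minimal sketch of the classical open-cell form follows; the coefficient, exponent and material values are generic textbook numbers, not results from the thesis:

```python
def gibson_ashby_modulus(E_solid, rho_foam, rho_solid, C=1.0, n=2.0):
    """Classical Gibson-Ashby estimate for an open-cell foam:
    E_foam / E_solid = C * (rho_foam / rho_solid)**n,
    with C ~ 1 and n = 2 for bending-dominated open cells."""
    return E_solid * C * (rho_foam / rho_solid) ** n

# Illustrative numbers only: a 180 kg/m^3 foam made from a solid polymer
# with density 900 kg/m^3 and modulus 1.5 GPa.
E_foam = gibson_ashby_modulus(E_solid=1.5e9, rho_foam=180.0, rho_solid=900.0)
print(f"estimated foam modulus: {E_foam / 1e6:.0f} MPa")  # prints 60 MPa
```

For anisotropic foams such as those studied here, the same relation is typically applied direction by direction with a shape-factor correction, which is what makes the rectangular-cell variant of the model useful.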
The research work of the present thesis can be divided into two main parts, both related to the development and functionalization of polymeric materials for advanced human health applications. One deals with stem cell-based therapies for tissue engineering applications, whereas the other deals with the development of patterned functional biomaterial surfaces for rare cell capture and isolation.

The goal of tissue engineering is to replace or repair a damaged tissue or organ with autologous engineered artificial substitutes, made by seeding living autologous organ-specific cells on a biomaterial (a scaffold) acting as an extracellular matrix, and culturing them, giving proper cues, until maturation into a functional tissue ready to be transplanted. One promising strategy consists in addressing stem cell fate by fine-regulating their interactions within 3D artificial microenvironments: synthetic niches engineered to mimic individual biochemical and biophysical factors. To this end, our first objective was to fabricate and functionalize 3D synthetic matrices engineered to enable independent tuning of their physical-mechanical properties, in terms of geometry and stiffness, thus allowing investigation of the specific role that each stimulus has on the cell response. 3D freestanding niches with tailored geometry were fabricated by two-photon polymerization (2PP), which allows rapid prototyping of complex 3D polymer structures. In order to widen the range of mechanical properties of 2PP structures, hydrogels were considered as biomimetic materials suitable for 2PP scaffold coatings. Immobilization strategies based on physical, chemical and photochemical interactions were explored to functionalize 2PP structures with thin layers of hyaluronan- and gelatin-based hydrogels, which were developed to have tailored stiffness encompassing the range of physiological values. Hydrogel mechanical properties were evaluated by validating a reliable methodology based on three independent experimental techniques. Rheological results obtained on macroscopic samples were subsequently benchmarked with swelling experiments following Flory-Rehner theory and with Atomic Force Microscopy (AFM) nanoindentation, the latter considered the more suitable techniques for the characterization of small-scale hydrogel samples such as those grafted onto niches. Finally, biological tests were performed on hydrogel-coated structures to study aspects of biocompatibility, stemness and differentiation. All the hydrogel coatings proved highly biocompatible, with enhanced proliferation and higher metabolic activity compared to the uncoated scaffold. Preliminary results showed that HA coatings induced the formation of cell agglomerates, which maintain their pluripotency, thus suggesting their potential application to produce therapeutic MSCs at large, pharmaceutically relevant scales. Conversely, commitment towards the osteo-chondral lineage was observed for softer gelatin-coated niches, highlighting the main role that the chemistry of the surface coating, combined with the geometry of the micro-architectures, has with respect to the mechanical properties of the coating in addressing stem cell differentiation when cultured in a 3D niche.

The main objective of the second part of the present research was the design and realization of patterned functional surfaces for rare cell capture and isolation. Currently, biomolecular patterning is considered one of the key technologies for the realization of living-cell arrays and for the study of specific individual cellular processes, instead of analyzing the behavior of a whole cell population as in conventional cell-based assays. Among all the suitable chemicals, derivatives of hyaluronic acid and heparin were first chosen as baits due to their preferential interactions with cancer cells and malaria-infected red blood cells (pRBCs), respectively. In particular, HA is involved in tumor growth and metastasis, while heparin is able to bind to the protein domain expressed by malaria-infected erythrocytes; their availability in well-defined positions offers the possibility of immobilizing cancer cells and pRBCs and making these cells available for specific tests and experiments, thus encouraging the discovery and formulation of new drugs and therapies. A fundamental issue was the selection of the substrate material, which deeply affects the final performance of the array. The best performing materials should avoid non-specific cell binding; thus, perfluoropolyethers (PFPEs) were considered due to their wide range of properties, including a very low surface tension, which enhances their anti-fouling/fouling-release properties. Different photocurable PFPEs were compared and deeply characterized to understand the possible relationship between their main structural parameters and their protein resistance behavior. PFPE-dimethacrylates were selected as the most valuable candidates as substrates for HA and heparin selective patterning. To this end, photolithography and soft-lithography were developed and optimized as technologies to selectively functionalize PFPE surfaces following different strategies. In particular, free radical polymerization was exploited for grafting glycidyl methacrylate-modified HA onto partially cross-linked PFPE surfaces by photolithography through a photomask, whereas biomolecular recognition and the strong specificity between avidin and other biotin-binding proteins were used for heparin patterning: a photoactivable biotin was preprinted by contact printing and UV-grafted onto the partially cured PFPE substrate, then biotinylated heparin was immobilized using avidin as an intermediate linker. Preliminary cellular tests performed with cancer cells and malaria-infected red blood cells assessed the capability of HA- and heparin-patterned PFPE surfaces to selectively capture individual cell populations.

1. Confocal images acquired on GEL-SH-coated niche substrates seeded with MSCs and cultured for 14 days. Nuclei stained blue, collagen type-I stained red and actin stained green.

2. SEM image showing cancer cells adhering selectively on GMHA spot (a); optical microscope image showing malaria-infected red blood cells (stained violet) adhering on heparin (b).
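The Flory-Rehner analysis mentioned above extracts an effective crosslink density of a hydrogel from its equilibrium swelling. A minimal sketch of the standard affine form follows; the solvent molar volume, interaction parameter and swelling value are generic illustrative inputs, not the thesis data:

```python
import math

def flory_rehner_crosslink_density(v2, chi, V1):
    """Affine Flory-Rehner estimate of the effective crosslink density n (mol/m^3)
    from the equilibrium polymer volume fraction v2, the polymer-solvent
    interaction parameter chi, and the solvent molar volume V1 (m^3/mol):
        -[ln(1 - v2) + v2 + chi * v2**2] = n * V1 * (v2**(1/3) - v2/2)
    """
    numerator = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    denominator = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator

# Illustrative: a gel reaching v2 = 0.10 at equilibrium in water
# (V1 = 18e-6 m^3/mol) with chi = 0.45.
n = flory_rehner_crosslink_density(v2=0.10, chi=0.45, V1=18e-6)
```

In practice, the crosslink density obtained this way can be converted to a rubber-elastic modulus and cross-checked against rheology and AFM nanoindentation, which is the three-technique benchmarking strategy described above.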
Carbon Nanostructures for Electrochemical Energy Conversion and Storage

Under the general framework of carbon nanostructures for electrochemical energy conversion and storage applications, a range of activities with a main focus on the synthesis and electrochemical characterization of carbon nanostructured materials has been carried out. Accordingly, the thesis has been divided into three chapters, and a summary of each activity is presented in the following as the objective and results of a chapter.

(i) Chemical vapor deposition (CVD) direct growth of carbon nanotubes (CNTs) on stainless steel (SS) and their application for the bipolar plates of proton exchange membrane fuel cells (PEMFCs): in this part, a systematic study of the parameters affecting the direct growth of CNTs on SS was carried out, and the suitability of the resulting material for working in electrochemical environments was investigated. CNTs were successfully grown on SS via a simple CVD method, without the application of any external catalyst, after fine tuning of all the surface characteristics of the SS as well as of the growth parameters. It was found that during the controlled-atmosphere heating to the growth temperature the SS surface undergoes nanoscale modifications, and the type of the subsequently grown filamentous carbon shows a direct relation to the size of the surface nanofeatures formed on the SS substrate. CNTs were the dominant growth products where the average size of the SS surface nanofeatures was below 60 nm. Due to the surface modifications of the SS during the high-temperature, carbon-rich CVD treatment, a reduced corrosion resistance of the SS was found to occur. In particular, chromium depletion of the SS, due to chromium carbide formation and sensitization, caused an infirm behavior of the material in electrochemical media, appearing in the form of intergranular corrosion. Accordingly, although problems such as the high electrical contact resistance of bare SS in corrosive media (mainly due to the passive oxide layer) were addressed, the CNT coating provided by this method was evaluated to be insufficient as a corrosion barrier in electrochemical media such as the bipolar plates of PEMFCs.

(ii) Investigation of different carbon nanostructures as the catalyst support: in this part, the fabrication of Pt electrocatalysts supported on different carbon materials, namely CNT, reduced graphene oxide (RGO) and hybrid CNT-RGO, was demonstrated using a rapid, single-step microwave-assisted polyol process, and the activity of Pt towards the oxygen reduction reaction (ORR) was studied on the different supports. Due to the direct relation between the abundance of defects on the support and the Pt concentration and dispersion, the following Pt nanoparticle (NP) size trend was found on the different supports: RGO-Pt < CNT-RGO-Pt < CNT-Pt. XPS studies demonstrated a C/O ratio trend of the supports in the following order: RGO-Pt (7.26) < CNT-RGO-Pt (17.49) < CNT-Pt (21.32); the content of sp2 carbon also followed the same trend. Area-specific activity evaluation of the samples towards the ORR showed a clear outperformance of all the lab-made samples compared to the commercial Vulcan XC72-30%Pt over the entire kinetic region. In particular, the specific activities of the RGO-Pt, CNT-RGO-Pt and CNT-Pt samples at E = 0.90 VRHE were 2.25, 2.3 and 3.13 times higher than the commercial sample, respectively. Regarding the mass-specific activity, the CNT-RGO-Pt support showed particularly high activity, benefitting simultaneously from the conductivity of the CNTs and from the wettability and surface area of the RGO. It was finally concluded that hybridizing the 1D and 2D support families (CNT-RGO) shares the features of the individual components, so as to result at the same time in high area- and mass-specific activities.

(iii) Investigation of different carbon nanostructures as the active materials for supercapacitors: the main focus of this part was on the electrochemical behavior of CNT, RGO and their composites as the active materials for supercapacitors, compared to activated carbon (AC) as the conventional material for this application. Supercapacitor electrodes were prepared in three main classes: AC-CNT, AC-RGO and AC-CNT-RGO. Using techniques such as cyclic voltammetry (CV), cyclic charge-discharge (CD) and electrochemical impedance spectroscopy, it was found that the addition of conductive species (i.e., CNT and RGO) can improve a part of the deficiency of the poorly conductive AC electrodes. Increased CNT content could, however, suppress the capacitance again due to the low specific surface area of CNTs. In the AC-RGO class, on the other hand, both the specific capacitance Cs and the specific energy Es continuously increased with RGO content, due to the simultaneous improvement of conductivity and surface area. The AC-CNT-RGO class, borrowing the conductivity from CNT and the surface area from RGO, presented steadily high Cs and Es. Studies of the rate capability, power performance, frequency response and internal resistance of the electrodes showed a superior behavior of pure CNT electrodes (τ0 = 0.19 s, Ri = 0.83 and Ps,max = 22 kW kg-1 at the matched impedance condition) and of those containing a high content of CNTs. Pure RGO (τ0 = 6.31 s, Ri = 6.38 and Ps,max = 5.6 kW kg-1) behaved better than pure AC (τ0 > 100 s and Ps,max = 4.1 kW kg-1) in this respect, but was still more rate-sensitive than CNTs. Composite three-component electrodes of the AC-CNT-RGO class benefited from the low ionic diffusivity resistance of the mesoporous structure, with high power available at certain energy densities.

1. Tafel plot of the specific activity of Pt on different carbon supports in the kinetic region.

2. Ragone plot of different carbonaceous active materials for supercapacitors.
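The Ragone comparison above ranks electrodes by specific energy and by the maximum specific power at matched impedance. A hedged sketch of the standard textbook relations follows; the cell values below are illustrative, not measurements from the thesis:

```python
def specific_energy_Wh_per_kg(C, V, m):
    """Stored energy E = (1/2) C V^2, converted from J to Wh
    and normalized by active-material mass m (kg)."""
    return 0.5 * C * V ** 2 / 3600.0 / m

def max_specific_power_W_per_kg(V, R_i, m):
    """Matched-impedance maximum power P_max = V^2 / (4 R_i),
    normalized by active-material mass m (kg)."""
    return V ** 2 / (4.0 * R_i) / m

# Illustrative cell: 100 F, 2.7 V, internal resistance 0.05 ohm,
# 0.1 kg of active material.
E_s = specific_energy_Wh_per_kg(C=100.0, V=2.7, m=0.1)      # ~1.01 Wh/kg
P_s = max_specific_power_W_per_kg(V=2.7, R_i=0.05, m=0.1)   # ~364.5 W/kg
```

The time constant quoted above (τ0, the shortest discharge time at which a given energy is still accessible) is the ratio of these two axes, which is why low internal resistance moves an electrode toward the top-right of a Ragone plot.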
Cu2ZnSnS4 (CZTS) and Cu2ZnSnSe4 (CZTSe) are promising absorber materials for thin-film photovoltaic devices due to their abundant constituent elements, a suitable direct band gap ranging from ~1.0 eV (CZTSe) to ~1.5 eV (CZTS), a large absorption coefficient over 10^4 cm-1, and a theoretical conversion efficiency of 32%. The current commercial absorber materials for thin-film photovoltaic devices are CdTe and Cu(In,Ga)Se2 (CIGS), which contain rare and toxic elements. In this work, CZTS/CZTSe thin films were successfully prepared using an electrodeposition-annealing route, in which Cu-Zn-Sn metal precursors (co-electrodeposited or stacked layers) were deposited by a novel approach on a Mo substrate, followed by annealing in an elemental sulfur/selenium environment in a quartz tube furnace under N2 atmosphere. Different characterization techniques, such as XRD, SEM (EDS), Raman spectroscopy, GDOES, photoluminescence spectroscopy and cross-sectional imaging, have confirmed well-formed kesterite (CZTS/CZTSe) after sulfurization/selenization of the precursor. As investigation of the phase diagram of these materials shows that single-phase CZTS/CZTSe can be obtained only in a small region (in terms of composition), different precursor compositions of Cu-Zn-Sn, ranging from Cu-poor, Zn-rich to Cu-rich, Zn-poor, have been investigated. Results showed that a Cu-poor, Zn-rich precursor (here, Cu/(Zn+Sn) = 0.91, Zn/Sn = 1.21) is best for obtaining single-phase CZTS. It is well known from the literature that Cu2ZnSnS4 forms by solid-state reaction of Cu2SnS3 and ZnS at high temperature. For this reason, it is very important to have a homogeneous distribution of Cu, Zn and Sn in the precursor before sulfurization. By using a rotating horizontal working electrode, we obtained an almost mirror-like surface (Ra = 0.094 um) with a homogeneous distribution of Cu, Zn and Sn from a single electrolyte. In the case of co-electrodeposited precursors, two different ramping rates (20 °C/min and 2 °C/min) during sulfurization at 550 °C were applied, and it was observed by XRD and SEM (EDS) that the low ramping rate does not necessarily increase the grain size of the film, but rather creates some secondary phases in the film, as kesterite is metastable. Moreover, the formation of undesired MoS2 was also observed by XRD in the case of the 2 °C/min ramping rate. The effects of different sulfurization periods (10 min, 30 min, 60 min and 120 min) at 550 °C were also investigated. It was observed and confirmed by XRD, SEM (EDS) and Raman spectroscopy that in order to get a good crystalline form of CZTS, a longer sulfurization time (here, 120 min) at high temperature is needed. Raman spectroscopy is employed here because the diffraction peaks of Cu2ZnSnS4, Cu3SnS4 and ZnS are very close to each other. SEM (EDS) analysis shows that the composition of the CZTS is very near to the stoichiometric ratio of Cu, Zn, Sn and S. Cross-sectional SEM images show that the CZTS possesses a bimodal distribution of grains: smaller grains are located near the Mo interface and larger grains at the top of the CZTS, as reported in the literature. In order to make sure that the CZTS compounds are suitable for photonic applications (i.e. solar cells), samples were characterized by photoluminescence (PL) spectroscopy at low temperature (15 K). A broad and sharp PL peak was observed at 1.21 eV in our studies, which matches the existing literature, as a broad PL band from the CZTS compound at around 1.2 to 1.3 eV is found in many studies. GDOES analysis shows that the composition of Cu, Zn, Sn and S did not change along the film thickness, which confirms the formation of CZTS throughout the film. Moreover, the effect of an intermediate annealing step (soft annealing) at comparatively low temperature (350 °C) before sulfurization was also explored. It was observed that by using the soft annealing step, the sulfurization time could be reduced to 10 min.

In the case of CZTSe thin films, different Cu-Zn-Sn precursors with the same compositions were selenized at 550 °C for different selenization periods (10 min, 30 min, 60 min and 120 min). XRD, SEM (EDS), Raman and photoluminescence (PL) spectroscopy results showed that 60 min of selenization is enough for the proper formation of CZTSe without any secondary phases. In the case of CZTSe, a broad PL peak was observed at 0.94 eV at low temperature (15 K), which is in good agreement with the existing literature.

After fabrication of the CZTS/CZTSe, an n-type CdS buffer layer was deposited by chemical bath deposition using cadmium acetate. Before depositing the CdS, the CZTS/CZTSe films were etched in 3.5% KCN solution for 30 s. An 80 nm intrinsic i-ZnO buffer layer, which acts to prevent any shunts, was then deposited by RF sputtering. The TCO layer, consisting of 350 nm of Al-doped ZnO (AZO), was grown by DC pulsed (2 kHz) sputtering. Finally, cells were completed by evaporating an Al grid contact on top. A 0.6% efficient CZTS solar cell was fabricated on a Mo foil substrate, which is the first solar cell of this kind on Mo foil (fig. 1).

Fig. 1: External quantum efficiency (EQE) of a CZTS solar cell prepared from a co-electrodeposited Cu-Zn-Sn precursor.

Besides the co-electrodeposited Cu-Zn-Sn precursor, kesterite Cu2ZnSnS4 (CZTS) films were also successfully synthesized by electrodeposition-annealing using a novel stacked-layer approach. Adherent and homogeneous Cu-poor, Zn-rich stacked metallic Cu-Zn-Sn precursors with different compositions were sequentially electrodeposited, in Cu-Sn/Zn order, onto Mo foil substrates. Subsequently, the stacked layers were soft annealed at 350 °C for 20 min in flowing N2 atmosphere in order to improve the intermixing of the elements. Then, sulfurization was completed at 585 °C for 15 min in an elemental sulfur environment in a quartz tube furnace with N2 atmosphere. Here also, different characterization techniques such as XRD, SEM (EDS), Raman spectroscopy, GDOES and photoluminescence spectroscopy have confirmed the proper formation of CZTS. In addition to this, a sulfurized CZTS film was again selenized in order to form CZTSSe, as the record efficiencies for this kind of solar cell come from CZTSSe thin-film devices. The major XRD diffraction peaks of CZTSSe were observed at 27.35°, 45.35° and 53.77°, in good agreement with the literature.
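The Cu-poor, Zn-rich condition quoted above (Cu/(Zn+Sn) = 0.91, Zn/Sn = 1.21) can be checked directly from EDS atomic fractions. A small sketch; the function name and the example fractions are illustrative, not thesis data:

```python
def kesterite_precursor_ratios(cu, zn, sn):
    """Return the Cu/(Zn+Sn) and Zn/Sn ratios used to classify a
    Cu-Zn-Sn precursor as Cu-poor/Zn-rich (favourable for single-phase
    CZTS) versus Cu-rich/Zn-poor. Inputs are atomic fractions."""
    cu_ratio = cu / (zn + sn)
    zn_ratio = zn / sn
    is_cu_poor_zn_rich = cu_ratio < 1.0 and zn_ratio > 1.0
    return cu_ratio, zn_ratio, is_cu_poor_zn_rich

# Illustrative atomic fractions reproducing the quoted target ratios
cu_r, zn_r, favourable = kesterite_precursor_ratios(cu=2.011, zn=1.21, sn=1.0)
```

Such a check is typically run on every EDS measurement of the as-deposited precursor, since the abstract stresses that compositional homogeneity before sulfurization determines whether secondary phases appear.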
Gene therapy can be broadly non-viral vectors have been the copolymer series, chitosan- bPEI-based polyplexes shed light once. Importantly, dPAMAM the pDNA concentration was
MATERIALS ENGINEERING
defined as the introduction of proposed as promising and safer graft-bPEI with an intermediate on their superior transfection G4-paromomycin displayed revealed to be a key parameter
genetic material, either DNA or alternatives. grafting degree of 2.7% was properties. the highest transfection in gene delivery. By optimizing
RNA, into target cells in order Non-viral vectors for gene the most effective transfectant A family of three effectiveness and prominent the transfection parameters,
to modify and control their delivery are natural or synthetic and allowed for increased aminoglycoside-rich dendrimers, antibacterial activities, disclosing we provided useful information
protein expression. Offering materials which are positively transfection efficiency and based on polyamidoamine this polymer as very suitable for on testing conditions for the in
new treatment possibilities for charged at physiological pH. lower cytotoxicity than the gold dendrimer generation 4 future in vivo applications. vitro screening of non-viral gene
both inherited and acquired They spontaneously interact with standard polymeric transfectant (dPAMAM G4) linked to Finally, aiming to study the vectors.
human diseases, in the last the anionic nucleic acids and 25 kDa bPEI. Most important, neamine, paromomycin and correlation among the intrinsic In conclusion, my thesis
two decades, gene therapy condense them into micro-and we demonstrated how the neomycin, was developed. properties of cationic polymers, shows that the integration
has become one of the most nano-scale particles, which degree of grafting directly Conjugation of dPAMAM with the experimental conditions of different moieties into
intensively developing strategies protect the genetic material from affects the surface charge, the paromomycin and neomycin and the in vitro transfection a single transfectant is a
for current clinical research. degradation until they reach transfection efficiency and the led to products with increased outcomes, a systematic promising approach to design
Direct administration of free their targets. Unfortunately, cytotoxicity of copolymer-based transfection efficiencies and comparison of the most used new and more effective
oligonucleotides and DNA to despite the development of an polyplexes. Moreover, in order lower cytotoxicities compared commercially sourced polymers multifunctional systems,
cells is rather ineffective because extensive number of reagents, to expand the understanding of to the unconjugated dPAMAM. for gene delivery was carried which join the advantages
of their large dimensions and several issues still need to be the processes of gene delivery, Moreover, dPAMAM G4- out and the role of several of their building blocks. The
their anionic charge that does solved, hindering efficient non- a Chi-g-bPEIx copolymer was paromomycin and G4-neomycin, important parameters affecting structure-activity relationship
induce repulsion with the viral gene delivery. further characterized for its at their optimum N/P, displayed the transfection efficiency studies have established the
negatively charged biological In this scenario, the aim of my complexation behavior with enhanced transgene expression was evaluated. lPEIs, bPEIs, correlations among the chemical
cell surfaces. Therefore, it is Ph.D. thesis was to address time-resolved fluorescence also compared to the gold lPLLs, and dPAMAMs, differing structure of newly synthesized
necessary to develop efficient some of the unsolved issues spectroscopy in combination standard 25 kDa bPEI. Moreover, in Mw, were characterized and commercially sourced
and safe gene delivery systems in the gene delivery field, by with SYBR Green I. Fluorescence given the well-known antibiotic after complexation in terms polymeric gene vectors, the
able to protect DNA against developing, characterizing amplitude and lifetime properties of aminoglycosides, of physicochemical properties physicochemical properties of
degradation by nucleases and and/or optimizing some newly measurements during DNA- dPAMAM G4-conjugates and transfection behavior vector/DNA complexes and their
transfer the genetic materials synthesized and commercially condensation by a Chi-g- were tested whether they still as a function of N/P and biological activity, providing
to target cells. To date, the two main approaches for the delivery of genetic materials into cells are based on viral and non-viral vectors. Viral vectors are reported to be highly effective, but they all share many critical disadvantages which strongly limit their clinical application, such as immunogenic and mutagenic issues, and the limited extent of the DNA they can carry. In light of these drawbacks and due to their high standards in terms of safety, versatility and easiness of use,

available cationic polymers. With the aim to combine the high transfection efficiency of branched polyethylenimine (bPEI) with the biodegradability of chitosan, 2 kDa bPEI was grafted to the chitosan backbone, obtaining a series of seven chitosan-graft-bPEI copolymers with different degrees of grafting (Chi-g-bPEIx). Along the Chi-g-bPEIx series, the higher the degree of grafting, the greater the ζ-potential and the cytotoxicity of the resulting polyplexes. Among the

bPEI copolymer, the building block 2 kDa bPEI and the gold standard 25 kDa bPEI highlighted polymer-specific DNA arrangements within the polyplexes. Dynamic time-resolved fluorescence measurements provided better insights into the process of polyplex formation and disassembly in the presence of anionic competitors. Some relationships existing among the optical behavior, the physicochemical properties and the transfection activity of Chi-g-

possessed antibacterial activity, either alone or in combination with the plasmid DNA (pDNA). The conjugation of dPAMAM G4 to aminoglycosides greatly enhanced its antimicrobial activity. Moreover, the antibacterial properties of dPAMAM G4 derivatives were not influenced by their complexation with DNA. Of note, dPAMAM G4-paromomycin and G4-neomycin were shown to efficiently transfect mammalian cells and to inhibit bacterial growth at

complexation buffer. Of note, 25 kDa lPEI complexed at N/P 40 in 150 mM NaCl was by far the most effective transfectant. Moreover, factors such as the composition of the culture medium, the order of mixing of the reagents, the transfection time, the dose of polyplexes delivered to cells, the cell seeding density, and the volume of culture medium were evaluated experimentally. Of note, the cytotoxicity was mainly influenced by the variation of the experimental conditions, and

useful information for the rational design of more and more effective transfectants. Further advances in this area would require interdisciplinary approaches to understand the role of the vector chemistry, and the physicochemical properties of the vector/DNA complex, combined with mathematical modeling and fundamental studies of cellular processes.
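The summary above quotes complexation "at N/P 40": the N/P ratio is the molar ratio of polymer amine nitrogens to DNA phosphates. As a hedged aside, the polymer dose for a target N/P can be computed as below; the molar masses per nitrogen (PEI repeat unit) and per phosphate (DNA nucleotide) are standard approximations, and the helper name is ours, not from the thesis.

```python
# Back-of-the-envelope N/P calculation (illustrative, not the thesis protocol).
M_N_PEI = 43.1   # g/mol per protonable nitrogen in the PEI repeat unit
M_P_DNA = 330.0  # g/mol per phosphate in DNA (average nucleotide mass)

def pei_mass_for_np(dna_ug: float, np_ratio: float) -> float:
    """Micrograms of PEI giving the requested N/P ratio for `dna_ug` ug of pDNA."""
    phosphate_nmol = dna_ug / M_P_DNA * 1000.0   # nmol of DNA phosphate groups
    nitrogen_nmol = np_ratio * phosphate_nmol    # nmol of PEI nitrogens needed
    return nitrogen_nmol * M_N_PEI / 1000.0      # back to micrograms of PEI

print(round(pei_mass_for_np(1.0, 40), 3))  # -> 5.224
```

So complexing 1 µg of pDNA at N/P 40 would call for roughly 5 µg of PEI under these assumptions.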
MATERIALS ENGINEERING

This PhD thesis covers three different research topics in the field of science and technology of polymeric materials. They all have in common the fact that knowledge of the yield and/or post-yield behavior is necessary for their analysis. Further, they all deal with semicrystalline polyolefins. In some cases a correlation between the macroscopic mechanical behavior of the semicrystalline polyolefins investigated and the main deformation mechanisms of semicrystalline polymers was attempted.

A study of the yield and post-yield behavior of syndiotactic polypropylene
The research activity on the mechanical behaviour of syndiotactic polypropylene (s-PP) started within a collaboration with the research group of Prof. De Rosa of the Chemical Science Department of Università degli Studi di Napoli Federico II, which has performed a wide characterization of polyolefin structure since the 1980s and more recently has focused on the study of the strain-induced crystal form transitions of s-PP. Considering the high strain recovery of s-PP when strained after being plastically deformed (i.e. strained above its yield point), they proposed the use of s-PP as a thermoplastic elastomer. Within the collaboration with De Rosa's group, the possibility of producing s-PP fibers or films with an elastomeric behaviour through extrusion and cold drawing was considered. Indeed, s-PP processing had not been faced in the research activity of this PhD thesis, which is mainly aimed at studying the mechanical behaviour of s-PP in relation to the strain-induced microstructural transformations suggested by the literature. A suitable experimental method was set up to overcome the necking effect, which causes strain localization in an s-PP specimen during a tensile test: by considering different zones of a single specimen as different specimens strained up to different strains, a larger amount of data was obtained from few tests, and it was possible to investigate a wider strain range than that explored in the literature. The effect of the configurational regularity of s-PP on its yield and post-yield behaviour was examined considering three commercial syndiotactic polypropylenes having different stereoregularity indexes.

Material yielding in relation to the applicability of the Essential Work of Fracture method
The essential work of fracture (EWF) method is widely used to characterize the fracture behaviour of thin polymeric films, whose application is mainly in plane stress conditions. It allows one to obtain a material-specific fracture energy by performing fracture tests on several notched specimens (for example Double Edge Notched Tension, DEN(T), specimens) differing in cross-section width. The method is valid if plane stress conditions prevail and there is no edge effect: the low thickness of the specimen and a notch length higher than a minimum value allow both conditions to be satisfied. In addition, the method can be applied only if yielding of the whole cross-section has occurred before crack onset. In this PhD thesis the latter hypothesis was verified on an HDPE by performing fracture tests on DEN(T) specimens. To determine the yield onset (stress and strain at yielding), loading-unloading tensile tests were performed up to different strains: through back extrapolation of the permanent strain versus the applied strain, the yield strain was obtained and the corresponding yield stress determined. The tests were performed under uniaxial and plane strain conditions, and using DEN(T) specimens as well. While the yield stress turned out to be dependent on the loading conditions, the yield strain turned out to be fairly constant. This result suggests that the strain at yield onset in a DEN(T) specimen can be obtained from the uniaxial tensile test which, using a standard specimen geometry deforming homogeneously up to yield, is simpler to perform. The hypothesis of yielding of the whole cross-section was verified for the studied material.

Assessment of long-term performance of isotactic polypropylene using short-term tests
The experimental work related to this topic was performed during the six-month period at Eindhoven University of Technology (TU/e) in the Netherlands. In the last decades polymers have been increasingly employed in the production of load-bearing components to be used also at relatively high temperature. Pipes for hot water and gas transportation are an example. They are commonly subjected to a fairly constant pressure for most of their service life. For such applications it is necessary to predict the pipe lifetime. Generally, temperature-accelerated creep tests are performed to build the hoop stress vs. failure time curve and eventually to predict the pipe time-to-failure at the applied hoop stress. These tests are expensive because they are time-consuming. A method based on short-term tests to predict failure in the plasticity-controlled regime is reported in the literature. This method is based on the hypothesis that failure occurs when the increase in the non-elastic strain component, commonly named plastic strain in the literature, reaches a critical value, and on the fact that the relation between the yield stress and the strain rate in a constant strain rate test is equal to the relation between the stress applied in a creep test and the corresponding strain rate in the creep steady state. The stress-strain rate relation can be easily determined by performing short-term constant strain rate tests. The time to failure for a certain value of the applied stress can then be predicted, once the critical value of the plastic strain has been determined through a preliminary characterization of the yield and creep behaviour of the material. The method had been set up for the characterization of amorphous polymers and has recently been extended to semicrystalline polymers. In this thesis the plasticity-controlled failure of isotactic polypropylene was studied, and data from creep tests on pipes were used for prediction validation.
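The short-term prediction scheme just described reduces to two steps: fit the yield stress vs. strain rate relation from constant strain-rate tests, then divide the critical plastic strain by the creep strain rate inferred at the applied stress. A minimal sketch under stated assumptions follows; the power-law form of the fit, the data points and the critical strain are illustrative, not the thesis's actual characterization.

```python
# Sketch of plasticity-controlled failure prediction from short-term tests.
# Assumes an illustrative power law sigma = C * rate**n; data are made up.
import math

def fit_power_law(rates, stresses):
    """Least-squares fit of log(sigma) = log(C) + n * log(rate)."""
    xs = [math.log(r) for r in rates]
    ys = [math.log(s) for s in stresses]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), slope   # C, n

def time_to_failure(sigma, C, n, eps_crit):
    """t_f = eps_crit / rate(sigma), inverting sigma = C * rate**n."""
    rate = (sigma / C) ** (1.0 / n)   # steady-state plastic strain rate in creep
    return eps_crit / rate            # seconds

# Illustrative yield stresses (MPa) measured at three strain rates (1/s)
rates = [1e-4, 1e-3, 1e-2]
stresses = [20.0, 23.0, 26.45]
C, n = fit_power_law(rates, stresses)
print(time_to_failure(15.0, C, n, eps_crit=0.02))
```

Lower applied stress gives a lower inferred plastic strain rate and hence a longer predicted lifetime, reproducing the shape of a hoop stress vs. failure time curve from short tests only.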
Since their early production, plastics have been increasingly used to create artworks. Nowadays, a wide range of plastic objects is displayed in museums or private collections, and artworks like sculptures or paintings, installations, toys, cinematographic and photographic films and collectable industrial design objects have become part of our cultural heritage. However, there is increasing concern about the preservation of plastics in collections, because such materials may have a short life expectancy, being much more susceptible to chemical degradation reactions. Degradation mechanisms can involve both thermal and oxidative processes, firstly during manufacture and then during usage, as objects are continually exposed to air, moisture, light and heat. Degradation does not entail only physical and chemical changes but may also result in loss of function, form or significance of the object, which can show deformation, shrinkage, cracking, surface deposits or discoloration.
For the preservation of plastic materials and artifacts, it is necessary to understand degradation patterns, assess condition and estimate risks. Therefore, there is a growing need for research activities that provide information on the chemical composition, the state of preservation and the effectiveness of conservation strategies.
This PhD project was designed to address some of these issues related to the conservation of plastic artefacts, which include the study of degradation processes of selected polymeric materials, the development of active conservation strategies, and the improvement of a multi-analytical investigation protocol for the assessment of conservation conditions. The study of degradation processes was carried out on five specific materials used in design objects and contemporary artworks of the 20th century: acrylonitrile butadiene styrene (ABS), poly(vinyl chloride) (PVC), polypropylene (PP), high density polyethylene (HDPE) and linear low density polyethylene (LLDPE). Accelerated photo-oxidative ageing was performed on specimens in order to simulate, in a suitably short time, the photo-oxidation of the materials in museum conditions. A cutting-edge multi-analytical investigation, including spectroscopic techniques (with lab and synchrotron equipment), optical and electron microscopy observations and mechanical tests, was set up in order to fully investigate changes in the chemical and mechanical surface properties. All the selected materials showed surface degradation, although with different extents and depths. Each aged polymer exhibited different products of oxidation due to the different pattern of ageing, and in the same material different products of oxidation were detected at different depths. These chemical changes lead to surface cracking and strong yellowing of the specimens, mainly for ABS, PVC and PP. Mechanical investigation was carried out with three different techniques, namely scratch, micro- and nano-indentation tests, which allowed the differences between unaged and aged samples to be highlighted at different depth scales.
The conservation strategies developed include cleaning treatments, to reduce the surface yellowing of aged materials and to remove surface deposits, and procedures for the introduction of specific additives into already manufactured objects, called post-additivation techniques. Agar gel was selected as cleaning agent, while specific optical brighteners and a plasticizer were selected for the post-additivation procedures.
The investigation protocol proposed in the PhD project provided the accomplishment of compositional analysis of the artworks and evaluation of their actual state of conservation. The procedure mainly included microscopic observations and the use of spectroscopic techniques, with the aims of identifying materials, characterizing surface morphology, and assessing the chemical, physical and mechanical properties of the material surface and bulk. Finally, a complete and extended portable FTIR spectroscopic campaign was carried out on standard polymeric specimens and artworks from the collection of the Art Institute of Chicago; the data obtained allowed the build-up of a very reliable database of total reflectance IR spectra, shared with the scientific community.
Alessio Zanutta - Supervisor: Prof. Andrea Bianco - Tutor: Prof. Chiara Bertarelli
Coordinator: Prof. Chiara Castiglioni
The progressive increase in telescope size and in the complexity of astronomical instrumentation has highlighted how current technologies and traditional materials do not completely meet present and future astronomical requirements. Therefore, new materials and solutions have to be developed not only to realise future astronomical facilities, but also to improve the performance of already available instrumentation. In this context, this research project deals mainly with the study of photoactive materials for the production of either refractive or diffractive holographic optical elements. In particular, attention has focused on Volume Phase Holographic Gratings (VPHGs) as reference diffractive optical elements, since they are nowadays considered the baseline for dispersing elements in modern astronomical spectrographs thanks to their high diffraction efficiency. Moreover, VPHGs can be used as a tool to determine the performance of holographic materials. Their working principle is based on the periodic modulation of the refractive index (Δn) in a thin film of photosensitive material having a uniform thickness (d). This modulation is usually induced by means of a holographic process. The light diffraction takes place through the thickness, and the diffraction efficiency, which is one of the key properties in the astronomical field, directly depends on Δn and film thickness.
Dichromated gelatin (DCG) is the commonly used holographic material, which provides high performance (especially in terms of Δn). However, it requires a complex developing process, its chemical composition is variable, and it is highly sensitive to humidity. It turns out that there is a limited number of manufacturers of VPHGs for astronomy, which are located only in the US. Therefore, alternative holographic materials that overcome the drawbacks of DCGs while providing equal performance are highly desired, and their achievement would provide a spin-off for the economy in Europe.
We have found a good candidate in photopolymers, an important class of holographic materials that are becoming popular in visual art, anti-counterfeiting and displays. In the framework of a scientific collaboration for diffractive holographic elements for astronomy, Bayer MaterialScience and Polygrama Lynx provided solid and liquid acrylic-based photopolymers, both green-sensitive and panchromatic. Materials were characterised in terms of refractive index modulation as a function of chemical composition, grating line density, and holographic writing conditions, i.e., light power density and exposure time. Moreover, the transparency of the photopolymers before and after exposure was measured in the UV-Vis-NIR in order to determine the wavelength range of use. The gratings based on photopolymers were tested at normal conditions and in a cryogenic environment, and ageing resistance was also evaluated. Interestingly, the VPHGs based on the Bayer materials showed constant performance at room and cryogenic temperatures over a long time. VPHG dispersing elements for the AFOSC camera of the Asiago Telescope (1.82 m) and for the ALFOSC camera at the Nordic Optical Telescope (2.56 m) in La Palma (Canary Islands) were successfully designed and manufactured. All the elements were tested both at the laboratory level and on sky, providing excellent results.
Along with the main project on photopolymers, another important research activity concerned the study of a new class of materials that show a high refractive index modulation upon exposure to UV light. Specifically, we focused our attention on the photo-Fries rearrangement that occurs in aromatic esters. In the literature, many polymers have been reported showing high Δn values. Nevertheless, a complete understanding of the mechanism leading to such a great variation was still missing. By means of DFT calculations on reference molecules and applying a Lorentz-Lorenz model, polymers that undergo a photo-Fries rearrangement were studied to predict the refractive index modulation which accompanies this light-induced process. The results demonstrated that a change in material density has to be considered the main source of the modulation of the refractive index. The change in material density was experimentally confirmed by measuring the spectral reflectance of thin films of polystyrene derivatives undergoing photo-Fries rearrangement. Such results provide useful guidelines to design polymers with enhanced refractive index modulation. Indeed, we recently designed new thiophene-based molecules that could be interesting candidates for a next generation of photo-Fries polymers.

1. Mechanism of hologram formation in photopolymeric Volume Phase Holographic Gratings
2. Astronomical dispersive elements designed and produced for the ALFOSC spectrograph at La Palma (ES)
3. Photo-Fries rearrangement scheme of one of the synthesized and studied molecules
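The dependence of diffraction efficiency on Δn and film thickness described above is usually estimated with Kogelnik's coupled-wave theory: for an unslanted transmission grating at the Bragg condition, the first-order efficiency is η = sin²(πΔn·d / (λ·cosθ)). A small sketch with illustrative numbers (not the project's actual materials or gratings):

```python
# Kogelnik coupled-wave estimate of peak diffraction efficiency for an
# unslanted transmission VPHG at the Bragg condition (illustrative values).
import math

def kogelnik_efficiency(delta_n, thickness_um, wavelength_um, bragg_angle_deg):
    """eta = sin^2(pi * dn * d / (lambda * cos(theta)))."""
    theta = math.radians(bragg_angle_deg)
    nu = math.pi * delta_n * thickness_um / (wavelength_um * math.cos(theta))
    return math.sin(nu) ** 2

# A modulation of ~0.02 over a 15 um film diffracts nearly all 0.6 um light
print(round(kogelnik_efficiency(0.02, 15.0, 0.6, 10.0), 3))
```

The same formula makes the design trade-off explicit: a material with lower Δn needs a proportionally thicker film to reach the same peak efficiency, at the cost of a narrower spectral bandwidth.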
Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering | Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircrafts | Spatial Planning and Urban Development | Structural Seismic and Geotechnical Engineering | Technology and Design for Environment and Building | Territorial Design and Government | Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage

PhD Yearbook | 2015
DOCTORAL PROGRAM

Franco Dassi - Supervisor: Prof. Simona Perotto - Tutor: Prof. Luca Formaggia

Object Oriented Geostatistics

Weighted Functional Inequalities

The Role of Mechanics in Morphogenesis

Functional Data Analysis

Elisabetta Repossi - Advisor: Dott. Marco Verani - Coadvisor: Prof. Riccardo Rosso

Metal foams

W-cycle algorithms

Numerical fluxes
DOCTORAL PROGRAM
MECHANICAL ENGINEERING

Chair: Prof. Bianca M. Colosimo

Within the current global economic scenario, still striving to recover from general slowdown and uncertainty, Mechanical Engineering stands out as one of the leading and driving sectors of industrial manufacturing in Italy. In terms of per-capita manufacturing production (2013), our country ranks 2nd in Europe and 8th on a worldwide scale (Confindustria, Scenari Industriali n. 5, June 2014).

In this competitive panorama, and in order to respond to the requests of a challenging sector, the PhD Programme in Mechanical Engineering provides doctoral candidates with a strong scientific training, fostering and refining research and problem-solving abilities with respect to the academic and non-academic milieu. Our Programme, organized within the Department of Mechanical Engineering, relies on the development of an interdisciplinary and integrated high-level educational offer, focusing on a comprehensive scientific proposal, from conception to realization.

All Doctoral Candidates follow a minimum path of three years, which includes specific courses and lectures, held by Faculty members and foreign professors and experts, in-depth research, laboratories and active cooperation with international industries, institutions and research groups. With this background, our Doctorates are able to blend the exactness of scientific knowledge with the ability to deal with management and industrial issues. In this view, their scientific profiles are suitable for prestigious positions at national and international level within universities and research institutions, large industrial and consulting companies, and SMEs.

RESEARCH AREAS
The PhD Programme in Mechanical Engineering covers a number of different disciplines, being devoted, in particular, to innovation and experimental activities in six major research areas; all doctoral theses displayed in the following pages belong to one of these areas:
Dynamics and vibration of mechanical systems and vehicles: this research line is organized into five research areas, namely Mechatronics and Robotics, Rotordynamics, Wind Engineering, Road Vehicle Dynamics, Railway Dynamics. It features modelling of linear and non-linear dynamic systems, stability and self-excited vibrations, active control of mechanical systems, condition monitoring and diagnostics.
Measurements and experimental techniques: the Mechanical and Thermal Measurements (MTM) group has its common background in the development and qualification of new measurement techniques, as well as in the customisation and application of well-known measurement principles in innovative fields. The MTM major research focus is oriented towards the design, development and metrological characterisation of measurement systems and procedures, and the implementation of innovative techniques in sound/vibration, structural health monitoring, vision, space and rehabilitation measurements.
Machine and vehicle design: this research area is involved in advanced design methods and fitness for purpose of mechanical components. Advanced design methods refer to the definition of multiaxial low and high cycle fatigue life prediction criteria, the assessment of structural integrity of cracked elements, the prediction of fatigue life of advanced materials such as polymer matrix composites (short and long fibres), and the definition of approaches to predict the influence of shot peening on the fatigue strength of mechanical components. Gears, pressure vessels and helicopter components are dealt with. Optimal design and testing of vehicle systems create a synergism between theoretical and experimental research on ground vehicles.
Manufacturing and production systems: this research field gives relevance to the problem of optimal transformation of raw materials into final products, addressing all issues related to the introduction, usage, and evolution of technologies and production systems during the entire product life-cycle. PhD activities, in particular, are developed within the following research fields: Manufacturing Processes (MPR), Manufacturing Systems and Quality (MSQ).
Materials: this area is focused on the study of production processes and the characterization of materials, for structural and functional applications. Excellent research products were obtained both on fundamental research topics (e.g. nanostructured materials, foamed alloys, chemical phenomena in liquid melts, microstructural design, etc.) and on applied research (e.g. failure and damage analysis, texture analysis, high temperature behaviour, coatings for advanced applications, etc.). The research projects carried out in recent years specifically addressed the following topics: Steelmaking and Metallurgical Processes, Advanced Materials and Applied Metallurgy.
Methods and tools for product design: two main research topics are addressed in this field: PLM-Product Lifecycle Management, which includes process modelling, engineering knowledge management,
product innovation methods, systematic innovation principles

of Materials, DBA (Dynamic Bench for Railway Axles), Dynamic Testing, Dynamic Vehicle, Gear and Power Transmission, Geometrical Metrology, High-Temperature Behaviour of Materials, La.S.T., Manufacturing System, Material Testing, Mechatronics, MI_crolab Micro Machining, Microstructural Investigations and Failure Analysis, Outdoor Testing, Physico-Chemical Bulk and Surface Analyses, Power Electronics and Electrical Drives, Process Metallurgy, Reverse Engineering, Robotics, SIP (Structural Integrity and Prognostics), SITEC Laser, Test rig for the Evaluation of Contact Strip Performances, VAL (Vibroacoustics Lab), VB (Vision Bricks Lab), Virtual Prototyping, Water Jet, Wind Tunnel.

INTERNATIONALIZATION
We foster internationalization by strongly recommending and supporting PhD candidates' mobility abroad, for short-term study and longer research periods. We promote, draft and activate European and extra-European Joint Degrees, Double PhD Programmes and Joint Doctoral Theses; our Department is actively involved in EU-based and governmental third-level education agreements such as Erasmus Mundus, the China Scholarship Council and the Brazilian Science Without Borders programme.
Our international network includes some of the highest-level and best-known universities all over the world, such as MIT-Massachusetts Institute of Technology (US), University of California at Berkeley (US), Imperial College London (UK), Tsinghua University (CN), University of Illinois at Urbana-Champaign (US), Delft University of Technology (NL), University of Michigan (US), École Polytechnique Fédérale de Lausanne (CH), Technische Universität München (DE), University of Southampton (UK), Technical University of Denmark (DK), Pennsylvania State University (US), Chalmers University of Technology (SE), Technion-Israel Institute of Technology (IL), Virginia Tech (US), Technische Universität Darmstadt (DE), University of Bristol (UK), The University of Sheffield (UK), École Centrale de Paris (FR), Politécnica de Madrid (ES), Université Laval (CA), Universidad EAFIT (CO), AGH (Akademia Górniczo-Hutnicza) University of Science and Technology (PL).

DOCTORAL PROGRAMME BOARD
Bianca Maria Colosimo (Chair), Stefano Beretta, Andrea Bernasconi, Marco Bocciolone, Marco Boniardi, Monica Bordegoni, Francesco Braghin, Stefano Bruni, Gaetano Cascini, Federico Casolo, Federico Cheli, Alfredo Cigada, Andrea Collina, Giorgio Colombo, Roberto

REFERENCE COMMITTEE
CESARINI Riccardo, Brembo - Director of Brembo Performance
COELI Paolo, Centro Ricerche Fiat - Head of Feature Planning at FCA EMEA
GARITO Domenico, Schaeffler KG - Global Key Account Fiat & Chrysler WW at Schaeffler KG
BOIOCCHI Maurizio, Pirelli Tyre - General Manager Technology
FAINELLO Marco, Ferrari - Senior Manager at Ferrari
MURARI Bruno, ST Microelectronics - Advisor
ROMANI Mario, ANSALDOBREDA - Director
FAVO Francesco, RFI CERSIFER - RFI Diagnostics Head of Department
POLACH Oldrich, Bombardier Transportation - Chief Engineer Dynamics
LONGANESI CATTANI Francesco, PRADA-Luna Rossa - Head of PR
CADET Daniel, Technical Directorate, Alstom Transport - External Relations Director
FOGLIAZZA Giuseppe, MCM - Technical Manager
MANDELLI Massimiliano, Sandvik Italia - General Manager
BIGLIA Mauro, Officine E. Biglia & C. - Manager
BORSARO Zeno, Riello Sistemi - Technical Manager
RABINO Edoardo, Centro Ricerche Fiat - Manager
CATTANEO Stefano, IPG Fibertech - General Manager
CANTELLA Michele, ATOM - R&D Manager
LIVELLI Marco, Jobs - CEO
ZIPRANI Francesco, Marposs - R&D Manager

SCHOLARSHIP SPONSORS
Brembo, Pirelli, Rold Elettronica, Saes Getters, MUSP, Riva Acciaio, STMicroelectronics, INAF - Osservatorio Astronomico di Brera, Fondazione Università di Mantova, Ferrari, Tenaris, Rizzoli Ortopedia, Fondazione Politecnico, ETS Sistemi Industriali, ITIA-CNR, BLM Group, Luxottica.
Contact between macroscopic surfaces occurs on asperities at the local nano/micro scale. Tribological phenomena like friction and wear are highly scale-dependent.
Two principal strategies in tire friction estimation are Top-Down and Bottom-Up. Top-Down approaches start from tire dynamics and rely on statistical mechanics and experiments. Their outcomes are typically empirical or semi-empirical friction models. In this approach, friction identification is carried out relying on macroscopic kinematic quantities and the dynamics of the entire tire, i.e. longitudinal and lateral contact forces. Bottom-Up approaches, instead, start from first principles and use fundamental mechanics and physics to link the atomic scale to the macroscopic aspects of deformation and energy dissipation in the material. Such an analysis is based on the understanding of the fundamental physics behind the contact mechanisms, load distribution, stress and strain patterns, elastic and plastic material response, surface topography, interaction of contaminants and surface chemistry. Adhesion forces play a controversial role in the micro-domain.
In this research, identification and sensitivity analysis of friction laws for high-performance tire rubber compounds at both macro-scale and micro-scale have been carried out.
For the macro-scale, nine physical factors (i.e. slippage, slip angle, camber angle, vertical load, inflation pressure, tire bulk and tread temperatures, as well as road surface roughness and road temperature) were identified as being effective from a sensitivity analysis carried out on telemetry data. Thus, a non-linear tire model able to predict tire contact forces as a function of these nine parameters has been developed and validated. Results show that the proposed model is able to correctly predict tire-road lateral forces with a correlation coefficient higher than 91% in all working conditions (Fig. 1 & Fig. 2). The high stability and fast simulation time allow the proposed model to be used in real-time conditions.
Starting from a literature review of existing models (empirical, semi-empirical and theoretical), the most promising approach for a micro-scale friction model (both hysteretic and adhesive) has been identified. This model accounts for the effective pressure distribution, the asphalt properties, and the sliding velocity and temperature distributions in the footprint area. Sensitivity analyses with respect to surface topography, background temperature and road roughness cut-off wavelength have been carried out. Some criteria for objectively determining the road roughness cut-off wavelength have been proposed and tested. The main improvements with respect to the literature are a complete model accounting for both the hysteretic and adhesive energy losses, as well as the extension of hysteretic losses to 2D deformations (Fig. 3). It is shown that considering 2D instead of 1D deformations increases the estimated friction coefficient by 7% to 21% according to the surface roughness.

1. Normalized lateral force versus slip angle
2. Normalized lateral force in time domain for left tire
3. Estimated total friction coefficient for two models of asphalt
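Empirical Top-Down models of the kind mentioned above are often written in Pacejka "Magic Formula" form, which reproduces the normalized lateral force vs. slip angle curve of Fig. 1. The sketch below uses illustrative coefficients, not the nine-parameter model identified in the thesis:

```python
# Pacejka-style "Magic Formula" for normalized lateral force (illustrative
# B, C, D, E coefficients; not the thesis's identified tire model).
import math

def magic_formula(slip_angle_deg, B=10.0, C=1.9, D=1.0, E=0.97):
    """Normalized lateral force Fy/Fz as a function of slip angle (degrees)."""
    a = math.radians(slip_angle_deg)
    # y = D * sin(C * atan(B*a - E*(B*a - atan(B*a))))
    return D * math.sin(C * math.atan(B * a - E * (B * a - math.atan(B * a))))

for alpha in (0.0, 2.0, 4.0, 8.0):
    print(f"{alpha:4.1f} deg -> Fy/Fz = {magic_formula(alpha):.3f}")
```

The curve rises roughly linearly at small slip angles (the cornering stiffness) and saturates near the peak friction level, which is the qualitative behaviour shown in the normalized lateral force plots.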
Diya Zohdi Ratib Arafah - Supervisors: Prof. S. Beretta, Dr. M. Madia (BAM, Germany)
MECHANICAL ENGINEERING
The PhD activity has been devoted to the thorough investigation of the effect of biaxial loading on the fracture assessment of cracked components. In particular, the research aims to address unsolved issues in the assessment of pressurized components.
The analyses have been carried out on different levels, starting from the failure analysis of the tested cracked specimens, through the analytical assessment based on current standards, up to the higher level represented by the numerical investigations by means of finite elements. In the analytical and numerical assessment, particular care is dedicated to the estimation of the crack driving force, in terms of the J-integral, and the variation of the local constraint along the crack front. A major effect of the biaxial loading on these two quantities is observed.
The following main results have been achieved in this work:
1. Extension of the reference stress method to biaxial loading through the development of reference yield stress solutions for plates with semi-elliptical surface cracks subjected to biaxial tensile loading.
2. Importance of carrying out fracture toughness tests on dedicated specimens, which are able to reproduce nearly the same constraint condition of the component under investigation.
3. Development of an analytical assessment methodology for fracture instability analysis of pressurized components with longitudinal external surface cracks.
4. Application of cohesive zone modeling to simulate the ductile tearing in thick-walled components (Fig. 1).
1. Comparison between the computation (left) and experimental result (right) of burst pressure for tubes subjected to
biaxial loads.
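The reference stress idea behind result 1 can be illustrated with a generic, uniaxial R6-type sketch: the elastic J is scaled by a plasticity correction evaluated at the reference stress. This is a textbook estimate, not the thesis's biaxial solutions; the Ramberg-Osgood constants and load values below are illustrative assumptions.

```python
import math

def ramberg_osgood_strain(sigma, E, sigma_y, alpha, n):
    """Total strain of a Ramberg-Osgood material: e = s/E + alpha*(s/sy)^n * sy/E."""
    return sigma / E + alpha * (sigma / sigma_y) ** n * sigma_y / E

def j_reference_stress(K, sigma_ref, E, sigma_y, alpha=0.5, n=5.0):
    """R6-type reference stress estimate of J (plane stress).

    J = J_e * [E*eps_ref/sigma_ref + 0.5*Lr**2 / (E*eps_ref/sigma_ref)],
    with J_e = K**2/E and Lr = sigma_ref/sigma_y.
    """
    j_elastic = K ** 2 / E
    eps_ref = ramberg_osgood_strain(sigma_ref, E, sigma_y, alpha, n)
    ratio = E * eps_ref / sigma_ref      # >= 1, grows with plastic straining
    lr = sigma_ref / sigma_y
    return j_elastic * (ratio + 0.5 * lr ** 2 / ratio)

# Illustrative numbers: steel-like properties, K in MPa*sqrt(m), stresses in MPa
E_mod, sigma_y = 206e3, 355.0
j_low = j_reference_stress(K=30.0, sigma_ref=50.0, E=E_mod, sigma_y=sigma_y)
j_high = j_reference_stress(K=30.0, sigma_ref=400.0, E=E_mod, sigma_y=sigma_y)
```

At low reference stress the estimate collapses to the elastic J, while beyond yield the plasticity correction dominates; the thesis extends this scheme by deriving reference yield stress solutions valid under biaxial tension.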
Multi-objective optimization and Topology Optimization techniques are applied for solving actual engineering problems related to the lightweight design of vehicle components relevant for safety.
The thesis proposes a double stage approach for performing the optimal structural design of vehicle components, i.e. the minimization of the mass and the maximisation of the structural efficiency (e.g. stiffness, integrity etc.) of such components. Simplified models (either analytical or numerical) are always derived to guide the applications involving Topology Optimization, i.e. the optimal structural design problem is at first dealt with through a simplified model and then Topology Optimization is performed. So both analytical and numerical methods are derived in order to solve a number of optimal structural design problems referring to simple models of vehicle components. The simplified models are used in a preliminary phase of the design process and have a great importance since they provide the designer with useful indications for solving the problem. Moreover, results obtained in this phase are also important since they can help the designer to set up the models that are then employed in the subsequent and more detailed analysis.
A novel analytical method based on the matrix formulation of the Fritz John conditions for Pareto optimality is applied for minimizing the total mass of a cantilever beam loaded at the free end while maximizing its structural stiffness. Maximum stress together with buckling are treated as design constraints. Unreferenced analytical expressions of Pareto optimal sets for beams with different cross sections are derived by means of this method and compared. Fig.1 shows the analytical expressions of the Pareto optimal sets plotted in the objective functions (compliance and mass) domain for the four analysed cross sections. Results show that the I-shaped beam exhibits the best structural performance.
An original multi-objective optimization approach is used for the structural optimization of the simple model of a brake caliper, i.e. a simplified finite element model of the caliper is developed and used for the optimization. The design variables are the dimensions of the cross sections of the beam elements and the position of some nodes of the model, i.e. shape variations. The Parameter Space Investigation (PSI) method is adopted for computing Pareto optimal solutions.
Topology Optimization techniques have been used in conjunction with the original simplified model to obtain optimal layouts at the very beginning of the design process (Fig.2). The preliminary design of the brake caliper and the front upright of a race car, as well as the design of a new front wheel for a race motorcycle, are performed by means of topology optimization approaches and simplified structural models.
Regarding the design of motorcycle wheels, the knowledge of the loads acting on the component during its real working situations, and also the knowledge of how these contact forces are transferred from the tire to the wheel rim, are of crucial importance for the design of an optimized lightweight component. For this reason a proper in-depth study has been performed before structural optimization. New original simplified analytical and FEM models of a motorcycle tire are developed and validated. In the Finite Element model the actual structure of the tire, i.e. the beads, the 0 degree (circumferential) steel ply and the 90 degree (radial) ply, has been considered (Fig.3). An incompressible Neo-Hooke model has been employed for describing the rubber material property.
Subsequently a new prototype of motorcycle Smart Wheel is developed and realized (Fig.4). The device is able to measure tire contact forces on a front motorcycle wheel. The measured data were of great interest since they represent actual loading conditions of the component and are used as reference loads for the topology optimization of a new motorcycle wheel.
The combination of topology optimization approaches with proper simplified models has been effective for the refined structural optimization of a number of lightweight vehicle components relevant for safety.
1. Pareto optimal sets plotted in the objective functions domain for four cantilever beams with different cross sections.
2. Result from Topology Optimization of the brake caliper of a race car, bottom view (left) and lateral view (right).
3. Finite element model of a race motorcycle tire.
4. New Smart Wheel able to measure tire/terrain contact forces and moments on a race motorcycle.
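The mass-compliance trade-off for a tip-loaded cantilever can be sketched with a brute-force Pareto filter over a grid of rectangular cross sections, with the maximum bending stress as a feasibility constraint. This is a generic textbook illustration, not the thesis's Fritz John matrix method; beam length, load, material and allowable stress are all assumed values.

```python
# Mass-compliance trade-off for a tip-loaded cantilever with a rectangular
# b x h cross section (Euler-Bernoulli: tip deflection v = F*L^3/(3*E*I)).
L, F = 1.0, 1000.0                 # beam length [m], tip load [N] (assumed)
E, rho = 210e9, 7850.0             # steel-like material (assumed)
sigma_allow = 200e6                # allowable bending stress [Pa] (assumed)

designs = []
for i in range(1, 21):             # width sweep: 5 mm .. 100 mm
    for j in range(1, 41):         # height sweep: 5 mm .. 200 mm
        b, h = 0.005 * i, 0.005 * j
        I = b * h ** 3 / 12.0
        sigma_max = F * L * (h / 2.0) / I      # bending stress at the root
        if sigma_max > sigma_allow:
            continue               # infeasible, mirroring the stress constraint
        mass = rho * b * h * L
        compliance = F * L ** 3 / (3.0 * E * I)
        designs.append((mass, compliance))

def pareto_front(points):
    """Non-dominated points when minimizing both mass and compliance."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

front = sorted(pareto_front(designs))   # along the front: mass up, compliance down
```

Along the resulting front, accepting more mass always buys lower compliance, which is exactly the shape of the curves plotted in Fig.1 for the analytical Pareto sets.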
A Multihead Weigher Machine (MWM) is mainly composed of a system of feeders, a set of H pool hoppers, a set of H weight hoppers and a discharge chute to the packaging machine (Figure 1). The product is continuously fed via a central dispersion feeder and H radial feeders to the pool hoppers. The role of the pool hoppers is to stabilize the product before dropping it into the weight hoppers. Each weight hopper is equipped with a load cell that weighs the product and transmits the information to a computer. The computer then selects a subset of hoppers whose total weight is equal to or greater than the target weight T. Then, the computer opens the selected hoppers, releasing the product through the discharge chute into the downstream packaging machine. For customer protection, the law requires that the weight of each package must be no less than the target weight. Consequently, a package filled with a quantity of product below the target weight is defined "non-conforming" and cannot be sold in the market.
A MWM is a complex machine which needs a setup strategy and a suitable operation software. The control software works in real time and its goal is to select the best hopper subset to open in order to achieve the package target weight, according to the product, the cycle time constraint and the objective function. This problem is equivalent to the well-known knapsack problem. In this thesis, instead, we want to tackle the setup strategy, which is still an open problem.
The setup problem of a MWM deals with the determination of the optimal average weight of product to be delivered to each pool hopper. This setting may change according to the type of product to be packed and the target weight of the package. An improper selection of the machine setup affects the machine efficiency in terms of non-conforming rate, material cost, scrap or rework cost and possible losses due to the deviation of the product performance from the customers' and/or producers' target. Thus, the initial setting of the machine is a very important decision affecting the general economic performance. Currently, the setup procedures adopted in industrial practice mainly rely on the operators' skill and experience during a trial-and-error manual setup, which does not guarantee the best performance. To the best of our knowledge, the setup problem has not been addressed in the scientific literature apart from some preliminary results.
Thanks to the definition of the setup problem and its main variables, together with the expression of the objective function (expected production cost per conforming package) to minimize, the problem of finding the optimal setup of a MWM has been formalized. The Solution Space of the problem has been characterized. Its deep analysis allows us to discover an interesting symmetry property to reduce its dimension and, consequently, to tackle the setup problem faster. According to the characterization of the Solution Space, five algorithms have been considered: gradient based algorithms (SPSA and RSM), a "Brute Force" (BF) algorithm and two random sampling algorithms (RC and RD). Their performance (Figure 2), in terms of median and standard deviation of the expected cost, has been compared using the same number of objective function evaluations n, which is used as a proxy of the computational effort. As easily predictable, increasing n causes the values of both indicators to diminish. In fact, a greater number of evaluated machine setups allows for an improvement of the algorithm performance. We can surmise that the performance of the BF and RSM algorithms becomes more and more comparable as n decreases. Moreover, the performance difference between the two random sampling algorithms (RD and RC) remains irrelevant when varying n, and their performance is always worse than that of the BF and RSM ones. These conclusions have been generalized changing the MWM main parameters. Instead, the SPSA always has the worst performance regardless of the number of objective function evaluations.
Lastly, the optimal solution found with the RSM algorithm is compared with two "rules of thumb" used in industrial practice. The expected cost of the RSM solution and its standard deviation are lower than those of the two industrial solutions, allowing a firm to save money as the number of packages per minute increases.
1. A multihead weigher machine.
2. Scatterplot of the performance of the different optimization methods. The median of the expected cost, obtained thanks to the 50 replicates, is plotted on the x-axis. Instead, on the y-axis, the value of the standard deviation of the expected cost is plotted for each optimization method.
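The real-time subset-selection step described above (pick the hoppers whose combined weight is at least T with minimum give-away) can be sketched as a small exhaustive search. The load-cell readings and target are illustrative assumptions; a production MWM would use a faster knapsack-style routine and also honour the cycle time constraint.

```python
from itertools import combinations

def best_hopper_subset(weights, target):
    """Choose the subset of weight hoppers whose total is >= target (the
    legal minimum) and as close to it as possible, minimizing give-away."""
    best, best_total = None, float("inf")
    hoppers = range(len(weights))
    for r in range(1, len(weights) + 1):
        for subset in combinations(hoppers, r):
            total = sum(weights[i] for i in subset)
            if target <= total < best_total:
                best, best_total = subset, total
    return best, best_total

# Illustrative load-cell readings (grams) for H = 8 weight hoppers
readings = [112.0, 98.5, 105.2, 121.7, 95.3, 102.9, 118.4, 99.8]
subset, total = best_hopper_subset(readings, target=500.0)
```

For H = 8 the 255 candidate subsets are trivial to enumerate; the setup problem studied in the thesis then asks which average hopper weights make this selection step cheap on average.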
In recent years, problems related to in-service vibrations have gained growing attention. Since brand new structures have become more and more slender, an increasing number of problems related to unexpected vibration amplitudes have been recorded. Indeed, people acting on pedestrian structures behave as dynamical systems capable of modifying the dynamics of the structure itself as well as of introducing a load. This phenomenon is commonly known as Human-Structure Interaction (HSI). At present, however, the knowledge of HSI is still limited. Indeed, the determination of the vibration amplitudes of structures occupied by people is a very complex task. In particular, at least two main critical issues can be identified. The first aspect regards a correct characterization of the active forces induced by people on the structure. The majority of standards and codes suggest modelling human-induced forces as harmonic forces. However, this assumption is too simplistic and does not reflect the real trend of human-induced forces. The second aspect regards the influence of people on the dynamic properties of the structure they occupy. Few attempts were made to include the effect of people. However, a model capable of providing an accurate prediction of the experimental evidence does not currently exist in the literature.
This work aims at proposing and validating an innovative approach to include the effect of people's presence when simulating the dynamics of joint Human-Structure systems. First, the work focused on the analysis of the effect of passive people on the modal properties of the joint Human-Structure systems. An appropriate analytical model was proposed to include the effect of people's presence. The method only requires the knowledge of the modal model of the empty structure as an input. Each subject is then added locally on the structure by means of his/her apparent mass. The proposed approach places no constraints on the number of structural degrees of freedom taken into consideration. Two slender staircases and data available in the literature were used to validate the proposed approach. The experimental results showed that passive people's presence could produce a significant increase of the damping ratios with respect to the empty structure. The predicted modal parameters and Frequency Response Functions (FRFs) were in good agreement with the experimental values in all the considered cases, as exemplified in Figure 1.
The proposed approach was then extended to predict the structural vibrations. First, tests under controlled conditions were performed to validate the proposed approach. One subject was asked to march on a force plate, while a second subject was standing still on the structure. The actual force induced by a single subject and the structural response were measured at the same time. Results showed that the use of the proposed model highly improved the predictions of the vibration amplitudes with respect to the use of the model of the empty structure, as exemplified in Figure 2. The proposed approach was also applied to predict vibrations in operating conditions. Also in this case, results showed that the use of the model of the empty structure to simulate the structural response causes an overestimation of the vibration amplitudes. Conversely, the use of the proposed methodology led to results much closer to the experimental measurements.
The analytical matrix of the joint H-S system was then analyzed in order to highlight its properties. The observation of the analytical form of this matrix allowed us to evidence the differences between the use of the complete model proposed in this work (Multi Degrees Of Freedom - MDOF - structure) and the superposition of Single Degree of Freedom (SDOF) structures to predict the dynamic behavior of MDOF structures occupied by passive people. An approximate approach based on the analysis of the apparent mass curves was also proposed to predict the type of influence due to people's presence on the modal parameters of the joint H-S system. Results showed that, under the hypothesis of a SDOF structure, the approximate solution introduces small errors with respect to the complete model. Conversely, the use of the modal superposition of the effects for MDOF structures can introduce errors which can hardly be quantified a priori if the modes are not well separated. An analysis of the effect of people in different postures and for different directions of vibration, through the analysis of various apparent mass curves, was also proposed. Results showed that people can both increase and decrease the natural frequencies and damping ratios of a structure.
The method was also verified considering a grandstand of the San Siro Stadium as test case structure. Thus, an extension of the model to a different and more complex case was proposed. The impact of the number of people on the structure on its dynamic behaviour was analysed, and the effect of different people's distributions was evaluated by means of simulations. Results showed that people's effect can increase with the occupation rate. However, it was also proved that the modification of the modal parameters is highly influenced by people's distribution.
1. Experimental and predicted FRFs
2. Experimental and predicted structural vibrations
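The core idea, adding each occupant to the empty-structure modal model through an apparent mass, can be sketched for a single mode. The structure and human parameters below are assumed round numbers (not the thesis's identified values), and the occupant is modelled as one mass-spring-damper whose apparent mass is M(w) = m(k + iwc)/(k - w^2 m + iwc).

```python
import numpy as np

# Empty structure: one mode reduced to an SDOF (assumed round numbers)
Ms, fs, zs = 2000.0, 4.0, 0.01          # modal mass [kg], frequency [Hz], damping
Ks = Ms * (2 * np.pi * fs) ** 2
Cs = 2 * zs * np.sqrt(Ks * Ms)

# One passive occupant as an SDOF apparent mass (assumed, literature-like values)
mh, fh, zh = 75.0, 5.0, 0.3
kh = mh * (2 * np.pi * fh) ** 2
ch = 2 * zh * np.sqrt(kh * mh)

w = 2 * np.pi * np.linspace(0.5, 10.0, 2000)            # rad/s
M_app = mh * (kh + 1j * w * ch) / (kh - w ** 2 * mh + 1j * w * ch)

# Receptance of the empty structure vs. the joint Human-Structure system:
# the occupant enters only through a frequency-dependent added mass.
H_empty = 1.0 / (Ks - w ** 2 * Ms + 1j * w * Cs)
H_joint = 1.0 / (Ks - w ** 2 * (Ms + M_app) + 1j * w * Cs)
```

Because M(w) is complex near the body's resonance, the occupant effectively adds damping, so the joint FRF peak drops well below the empty-structure peak, which is the qualitative effect reported for the tested staircases.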
Aluminum alloys show a great number of remarkable properties, such as low density, good resistance to corrosion and low thermal expansion. These characteristics make them very attractive materials for several industrial fields where important applicative constraints have to be satisfied. For example, light weight (higher performance and lower consumption) and improved mechanical and functional properties (strength, corrosion and wear resistance) are essential features that materials have to possess in order to be employed in many applications in the mechanical, automotive and aerospace fields. Another important feature of Al alloys is their recyclability, as reprocessing does not damage their structure. Moreover, CO2 emission limitations and energy cost make lightweight materials a priority condition.
In this view, Al based metal matrix composites are considered very interesting. These hybrid materials offer the opportunity to design light-weight structures with a precise balance of mechanical and physical properties, with a relevant improvement of the tribological characteristics and also high temperature strength. Furthermore, the reinforcement particles are generally thermodynamically stable at elevated temperatures, making these materials suitable for high temperature applications. A novel concept of composites, which further enhances the properties of conventional composites, is given by the design of metals reinforced by nanoparticles. Due to their very small size, the nano-fillers are able to interact with the lattice defects, i.e. dislocations, enabling new strengthening mechanisms to be activated. Their impact can be of great relevance both from the scientific and the technological point of view. Since metal matrix nanocomposites (MMnCs) are a very novel class of materials, the lack of knowledge associated with them still has to be filled. Several technological issues have to be overcome in order to produce bulk nanocomposites characterized by a homogeneous dispersion of nanoparticles and high mechanical performance. The comprehension of the physical phenomena related to their improved mechanical behavior and functional properties is still incomplete and needs a deeper understanding.
The aim of this thesis work consisted in the development of Al nanocomposites with enhanced damping and mechanical properties and good workability. The nanocomposites exhibited high strength, good ductility, improved damping behavior and the capability of being worked into wires. Since the production of MMnCs by conventional melting processes was considered to be extremely critical because of the poor wettability of the nanoparticles, different alternative powder metallurgy routes were adopted. Alumina nanoparticles were embedded into Al powders by severe grinding and consolidated using several techniques. Special attention was directed to the structural characterization at the micro and nanoscale, since a uniform nanoparticle dispersion in the metal matrix is primarily important. Moreover, some of the billets produced via powder metallurgy were also rolled to prepare wires as an example of a final product. The Al nanocomposites revealed an ultrafine microstructure reinforced with alumina nanoparticles produced in-situ or added ex-situ.
The work had a strong empirical basis. Different sintering methods and parameters were employed to produce MMnCs characterized by well dispersed nanoparticles in the Al matrix. In particular, different powder metallurgy routes were investigated, including high energy ball milling and unconventional compaction methods (ECAP, BP-ECAP, hot extrusion). The physical, mechanical and functional behavior of the produced materials was then evaluated by different mechanical tests (hardness tests, instrumented indentation, compressive and tensile tests, dynamo-mechanical analysis) and microstructure investigation techniques (scanning and transmission electron microscopy, electron back scattering diffraction, X-ray diffraction, differential scanning calorimetry). The experimental results were then theoretically discussed. Literature equations and models were also used to predict the mechanical behavior of the material, and the numerical and experimental results were compared.
1. Proposed mechanisms about the effects of milling and consolidation processes on the composite powder reinforced with in-situ and ex-situ nanoparticles: I) after drying, the aluminum powders, which are covered by an oxide passivation layer, are supposed to be additionally surrounded by γ-Al2O3 clusters; II) after milling, these clusters and the oxide passivation layer (square fragments) are broken up into small debris; III) after ECAP compaction, the fragments of passivation layer and the γ-Al2O3 NPs are dispersed into the aluminum matrix.
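As an example of the literature models mentioned above, one common Orowan-Ashby estimate of the strengthening contribution from non-shearable nanoparticles is sketched here. The matrix and particle values are illustrative assumptions, and the thesis may well use other model forms (e.g. Hall-Petch or thermal-mismatch terms) alongside it.

```python
import math

def orowan_ashby_strengthening(G, b, r, lam):
    """Orowan-Ashby estimate of the yield-strength increase [Pa] from
    dislocation bowing around particles of radius r [m] with effective
    inter-particle spacing lam [m] (one common literature form):
        delta_sigma = 0.13 * G * b / lam * ln(r / b)
    """
    return 0.13 * G * b / lam * math.log(r / b)

# Illustrative values: Al matrix with ~50 nm alumina particles
G_al = 26.2e9        # shear modulus of Al [Pa]
b_al = 0.286e-9      # Burgers vector of Al [m]
d_sigma = orowan_ashby_strengthening(G_al, b_al, r=25e-9, lam=200e-9)
```

With these assumed numbers the contribution is on the order of a few tens of MPa, and it grows as the milling and compaction routes reduce the inter-particle spacing, which is why the uniform dispersion emphasized in the text matters.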
Although Shape Memory Alloys are a quite young technology, after the invention of the Nitinol Stent in the 1990s they became a standard in the biomedical field thanks to their unusual characteristics known as Superelasticity and Shape Memory Effect. Both these characteristics are attributable to the Shape Memory Alloys' capability of recovering large displacements through a phase transition between two intermetallic solid states. However, in the industrial field, only few successful applications exist. Since 2000, the industrial market has shown a growing interest for such materials. Some prototypes have been developed but they did not really reach the market. Psychological inertia is often responsible for pushing potential users toward more conventional solutions, notwithstanding the numerous advantages that these alloys are capable of offering if used for some applications. The lack of classification standards is another reason that does not help the diffusion of Shape Memory Alloys. Today, two kinds of industrial applications exist: the electro- and thermal-actuated devices. The analysis of such products has underlined that electro-actuated devices use small diameter wires with more complex electronics, while thermo-actuated devices are simpler but usually they provide higher forces. No successful applications which make use of Superelastic Alloys have been found in the industrial market.
The goal of this research work is to identify an application that exploits the key features of Shape Memory Alloys in the industrial field. To achieve this result, it is necessary to search for a new field or to investigate among those already explored. For the research, Biomimetics has been used, getting inspiration from the different examples provided by the Biomimicry taxonomy. The latter is a classification of animals and plants that have been studied and classified based on the functions they perform. Among the diverse examples taken into consideration, the hedgehog, and its spines, have led us to devise a new concept of impact absorber. The device based on this idea exploits the Superelasticity of many Shape Memory Alloy wires arranged with the same configuration of the spines of a hedgehog.
To characterize the behavior of such an absorber, the field of impacts has been investigated. The goal was to better understand the phenomenon and the protection mechanisms currently exploited, and to find an application against which to make a comparison. Five protection mechanisms have been identified and different applications have been classified based on the protection mechanisms detected. Among all the applications, the crash box has been selected as the reference one, since the crash boxes and the hedgehog absorber showed a similar impact response.
Afterwards, several tests have been carried out in order to prove the concept, and eventually the device has demonstrated to be capable of absorbing large amounts of energy. The first tests carried out have shown an absorption capability equal to 85% with respect to the energy provided. Such capability has been subsequently verified through compression tests and impact tests. All the collected data have been processed through ANOVA in order to verify the significance of the parameters. Afterwards, the significant ones have been used to calculate some regression models. The obtained results have been also compared with those of the crash box, a device commonly used to absorb shocks in the automotive field. It is important to state that, since the test method used for the crash box is different from what has been done for the hedgehog absorber, the comparison has been made only partially.

Comparison between some values of current crash boxes and the results obtained with the drop weight tower for the specimen LD3510:

Test      Energy [J]   Peak [N]   Average [N]   %E. ab.   E. ass. (J)   SAE    LR
LD3510    20,7         6296,0     926,8         62,5      14,0          0,7    6,8
LD3510    64,8         8439,0     1459,6        81,7      53,7          2,5    5,8
LD3510    121,3        9602,0     1844,9        89,8      109,5         5,2    5,2
LD3510    181,5        22500,0    1184,1        99,9      181,1         8,6    19,0
C.Box A   15000,0      49350,0    20830,0       8,3       1250,0        6,1    2,4
C.Box B   15000,0      67040,0    40000,0       16,0      2400,0        7,4    1,7
C.Box C   15000,0      50010,0    25000,0       10,0      1500,0        5,8    2,0

However, the results have shown the excellent potential of the developed device. Subsequently, a very simple design has been proposed. The latter is just a hypothesis aiming at verifying if the hedgehog absorber can respect the energetic specs of a crash box.
Despite all the tests done so far, before switching from the applied research to a precompetitive development, further tests are needed in order to increase the level of confidence with the hedgehog absorber behavior. If the results obtained in this research work are confirmed for higher impact energies and for different layouts, fields such as the automotive or the military one could benefit from the use of the hedgehog absorber.
Evolution in high-speed rail transportation is aimed towards increased efficiency and optimisation, to reduce the life cycle cost (LCC) of rolling stock, while maintaining high standards of reliability, availability, maintainability and safety (RAMS). To this end, traditional maintenance strategies, based on pre-determined travelled time or distance intervals, are not sufficient anymore, and the focus is moving towards modern techniques of condition monitoring for the components, to achieve the so-called Condition Based Maintenance (CBM), in which critical wear or fault of components is established by analysing data generated during the normal service of the vehicle using fault detection and isolation (FDI) methods. The objective is to perform maintenance only when needed, thus reducing the costs of component fixing or substitution on one hand, and the off-service of the vehicle on the other.
Examination of the State of the Art in the field shows a large variety of approaches to the problem. Some methods make use of wayside mounted sensors to examine the rolling stock while it transits in certain spots on the rail network. The advantage is that an entire fleet can be monitored while in service with few sensors, but only a limited range of faults (e.g. out-of-roundness wheels, hot boxes) can be detected by wayside monitoring units, whereas for other fault types the diagnosis would not be reliable and/or accurate enough to allow resolving the kind, location and degree of severity of the fault. Other methods make use of on-board measurements, generally by accelerometers or gyroscopes, and can be divided in two categories: data-driven and model-based. Data-driven techniques use several algorithms to analyse data to produce significant indexes tied to faults or malfunctioning of various components. These indexes are then compared to addressable ranges that may be defined, e.g. based on a statistical treatment of available fault records, to achieve fault detection and, less frequently, fault isolation. In model-based methods instead, measurements are used in combination with a mathematical model of the system to generate a residual, and the FDI process is then based on the examination of the residual function. Results are encouraging, but fault isolation is generally difficult, and few or no results with application on measurements performed on rail vehicles have been presented. Other model-based methods use more complicated filters to assess the estimation of the proper value of some parameters important for the vehicle dynamics (damping, stiffness, or wheel conicity). These methods generally show good performance at the price of some degree of complexity.
In this thesis two methods for condition monitoring are presented: one model-free (data-driven) and one model-based. Both of them consider a railway bogie in the horizontal plane and have the aim to assess the condition of the wheel-rail profile and of the secondary dampers, particularly the anti-yaw damper.
The aim of the model-free method is to detect the incipient instability of the bogie well in advance of the traditional instability detectors, when oscillations have not reached critical amplitudes. The method is based on the combined use of the random decrement technique (RDT), to approximate the auto-correlation of the acceleration signal of the lateral bogie frame, and of the Prony method, to identify the characteristic exponents of the system from the RDT output. The output of this analysis is the estimate of the (possible) hunting motion frequency and the residual stability margin. These results are in turn analysed by a fault classifier that, using an addressable database of possible component status combinations, determines the most probable condition. For this work, the database was built using a multi-body model of the high-speed trailed ETR 500 class car and included various wheel profile conditions and different levels of degradation of the yaw damper.
The proposed model-based method identifies the values of some parameters fundamental for the lateral and yaw motion of the bogie, using a multi-step approach based on the linear Kalman Filter and on the Extended Kalman Filter. To this end, a simplified dynamic model of the bogie has been developed, which also reproduces the lateral track irregularity dynamics affecting the two wheelsets. Such irregularity is estimated in one step, and then given as an input to the following step, which estimates three important parameters: lateral damping of the bogie, yaw damping of the bogie and wheel-rail conicity, used as a linearization of the relationship between the rolling radius variation and the wheel-rail relative displacement in the lateral direction.
Both methods have been tested with data collected by a prototype CBM system that was installed on board an ETR 500 class high-speed train called ETR 500-Y1, owned by RFI (the company managing most of the Italian rail infrastructure), which is used for track maintenance and experimental purposes. A multi-body model of one single car of the same train was used to simulate faults and conditions that could not be introduced on the real train.
The model-free method has been applied on ETR 500-Y1 experimental data, and it was able to distinguish the case of a bogie with new profiles from the case of a bogie with worn profiles. Next steps for this research include an improvement of the database used for fault detection, using data measured during line tests, the inclusion in the method of other fault types, and the extension of the method to new indexes other than frequency and residual stability margin, such as the modal shape, by using both the lateral accelerometers mounted on the bogie. The same method could also be applied to the vertical dynamics of the bogie, to assess condition monitoring of primary and secondary vertical suspension components.
The model-based method was tested by means of virtual and in-line measurements. While line test measurements were possible only for the vehicle running in a condition presumably close to nominal, virtual measurements, performed using multi-body simulations, allowed the application of the method to data generated with components in both faulty and nominal conditions. The results of the tests performed on virtual and real data show unambiguous detection of the proper condition for the yaw damper and for the wheel conicity. On the other hand, the estimation of the lateral damper coefficient does not allow the detection of a fault for this component. Next steps for the model-based method are the test of valid alternatives to the Extended Kalman Filter and the application of the method to the vertical dynamics of the vehicle.
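The random decrement step of the model-free method can be sketched on synthetic data: trigger on level up-crossings of the acceleration signal and average the following segments, so that the random excitation averages out and the free-decay shape (whose exponents the Prony method would then fit) emerges. The signal below is a generic noise-driven 6 Hz oscillator with assumed parameters, not ETR 500 data.

```python
import numpy as np

def random_decrement(x, trigger, seg_len):
    """Random decrement technique (RDT): average the segments of x that
    start at each up-crossing of the trigger level; the averaged
    "signature" approximates the free decay of the underlying system."""
    starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    starts = starts[starts + seg_len <= len(x)]
    segments = np.stack([x[s:s + seg_len] for s in starts])
    return segments.mean(axis=0), len(starts)

# Synthetic stand-in for the lateral bogie-frame acceleration: a lightly
# damped 6 Hz mode driven by white noise (all values are assumptions).
fs, T = 200.0, 600.0                 # sampling rate [Hz], record length [s]
f0, zeta = 6.0, 0.02                 # modal frequency and damping ratio
dt, w0 = 1.0 / fs, 2.0 * np.pi * 6.0
wd = w0 * np.sqrt(1.0 - zeta ** 2)
a1 = 2.0 * np.exp(-zeta * w0 * dt) * np.cos(wd * dt)
a2 = -np.exp(-2.0 * zeta * w0 * dt)
rng = np.random.default_rng(0)
e = rng.standard_normal(int(T * fs))
x = np.zeros_like(e)
for n in range(2, len(x)):           # discretized noise-driven oscillator
    x[n] = a1 * x[n - 1] + a2 * x[n - 2] + e[n]

sig, n_trig = random_decrement(x, trigger=x.std(), seg_len=int(2.0 * fs))
# 'sig' is a decaying ~6 Hz oscillation; fitting its exponents (e.g. with
# the Prony method, as in the thesis) yields frequency and damping, and
# hence the residual stability margin of the hunting mode.
```

As the damping of the hunting mode shrinks towards instability, the RDT signature decays more slowly, which is exactly the early-warning indicator the classifier consumes.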
Profile Monitoring of Multi-Stream Sensor […]
MECHANICAL ENGINEERING

In quality management, traditional Statistical Process Control (SPC) procedures depend on quality characteristics measured on the product of manufacturing processes. They also assume that a number of parts may be collected during In-Control (IC) operations to estimate the process parameters and to design the control charts. Nevertheless, the evolving market demands and the development of novel technologies have been leading to productive scenarios where traditional SPC methods are no longer appropriate or even not applicable.
In different discrete-part manufacturing applications (e.g., in the aerospace sector), the production of high-value-added products implies extended machining times (e.g., several hours for a single part, possibly longer than the tool life). It also involves expensive tools and materials, together with high precision requirements. The use of traditional SPC procedures, based on post-process measurements, implies a delay between the possible occurrence of a fault and the detection of its effects on the product. This yields unacceptable costs for wasted expensive materials and time-consuming re-manufacturing operations. In addition, high customization imposes small lot productions or even one-of-a-kind productions (i.e., the production of lots that consist of a single item). In that case, there is no possibility to perform a control chart design phase based on repeated processes, and hence novel quality control procedures must be considered.
A viable solution consists of sensorizing the machine tools and the production systems in order to collect data about the quality and stability of the process during the process itself. This is possible thanks to the continuous technological developments that are leading to data-rich industrial environments, where several sources of potentially useful information are easily available. This enables a paradigm shift with respect to traditional SPC practice, from product-based SPC to in-process SPC. A quality monitoring based on in-process data may provide a faster reaction to out-of-control shifts, thanks to condition-based control strategies aimed at quickly mitigating (or even suppressing) the effects of faults, with a consequent reduction of wasted parts. Furthermore, in-process SPC provides a potential 100% production coverage, as it allows collecting data during each process run instead of evaluating the quality characteristics on sampled products at the end of the process.
However, such a paradigm shift implies a number of novel challenges and critical issues with respect to the traditional practice; these critical issues push the need to develop novel SPC approaches and to improve the existing ones.
This thesis is aimed at studying and developing novel signal-based SPC methods in order to deal with those challenges. A particular family of industrial processes is considered, i.e., the family of discrete-part manufacturing operations that exhibit a cyclical behaviour. In that case, the IC state of the process can be described by cyclically repeating patterns of acquired signals, known as profiles. Therefore, this study deals with the use of profile monitoring methods for in-process sensor signals.
Different inter-related research problems are discussed and faced. In the first part of the study, the focus is on signals from a single sensor that exhibit complicated patterns or undesired misalignments. In this frame, the two following problems were faced: (i) the integration of registration information in a profile monitoring framework, to guarantee a proper management of different signal variability sources; (ii) the enhancement of profile monitoring performances in the presence of complicated signal patterns characterized by information contributions on different time-frequency scales. […]
In the last part of the study, the analysis is focused on the development of profile monitoring methods for processes that exhibit multiple IC states, which represents a challenging violation of traditional SPC assumptions. This analysis is motivated by the fact that, in different manufacturing applications, parts of the same quality can be produced by processes that cannot be characterized by a unique IC state. The existence of multiple IC states is due to operating modes that vary in time (e.g., different cutting parameters, different tools, different ambient conditions, etc.), and hence novel profile monitoring methods are required.
All the proposed approaches are driven by actual industrial challenges and designed for the application of the proposed methods in actual industrial scenarios.

1. A paradigm shift in SPC: from product-based to signal-based monitoring
2. Ultra High Pressure (UHP) pump of a waterjet plant (top panel) and multi-stream signals from water pressure and plunger displacement sensors (bottom panel)
3. Products of a roll grinding process (top panel) and multivariate variables in in-control (IC) and out-of-control (OOC) process states (bottom panel)
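The profile-monitoring idea described in this abstract can be illustrated with a minimal sketch: a cyclically repeating signal pattern (a "profile") is learned from in-control runs, and each new run is scored with a Hotelling-like T² statistic under a diagonal-covariance approximation. The signal shape, noise level and control limit below are synthetic assumptions, not the thesis methods:

```python
import numpy as np

# Phase I: learn the in-control (IC) profile from repeated cycles.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
ic_profiles = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((30, 50))

mean_profile = ic_profiles.mean(axis=0)
std_profile = ic_profiles.std(axis=0, ddof=1)

def t2_statistic(profile):
    """Sum of squared standardized deviations from the IC mean profile
    (a Hotelling-like T^2 with a diagonal covariance approximation)."""
    z = (profile - mean_profile) / std_profile
    return float(np.sum(z ** 2))

# Control limit from the empirical IC distribution (99th percentile).
limit = np.percentile([t2_statistic(p) for p in ic_profiles], 99)

# Phase II: a mean-shifted cycle stands in for an out-of-control state.
ooc_profile = np.sin(2 * np.pi * t) + 0.2
```

A run whose statistic exceeds the limit is signalled as out of control; the shifted cycle above does so, while the IC mean profile does not.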
Smart tyre data: identification of relevant […] and consequently vehicle […]

The introduction of modern digital electronics in the automotive industry has increased the integration of intelligent systems in the vehicle design. The availability of new technologies is pushing the automotive sector to constantly invest in developing new innovative solutions aimed at improving the vehicle efficiency in terms of safety and performance.
Tyres play a fundamental role when characterizing the vehicle dynamics, since they constitute the only parts of the vehicle which maintain contact with the road. One of the most challenging and innovative aspects of the automotive sector is to use the tyre as a sensor able to identify useful information, as it is located in a very privileged position. The emerging concept of smart tyre basically describes a tyre equipped with sensors and digital-computing systems for monitoring thermal and mechanical parameters, which are eventually extended to the vehicle electronic control unit while driving.
Pirelli Tyre is pushing technology for sensors embedded in the tyre inner liner able to focus on vehicle dynamics and security devices. The CyberTyre project developed by Pirelli Tyre has the main purpose of making the tyre an active part of the vehicle by means of sensors capable of collecting useful information from the interaction between tyre and road.
The innovative measurement system is based on a single three-axial accelerometer installed inside a rubber support and then glued onto the inner liner of the tyre, as Figure 1 shows. This device is able to transmit the acquired signals via wireless to a suitable receiving system.

CyberTyre for active vehicle control systems
Since tyre force estimation is a decisive and challenging feature in the automotive sector, a strategic CyberTyre objective is the fulfillment of this […] tyre-road contact forces. The accelerometer signals, collected during the tyre rolling motion, include information about the tyre-road contact, as they are strongly correlated to the macro-deformation of the tyre inner liner, which is directly generated by the contact forces and slip quantities.
To assess the benefits induced by the smart tyre, tyre-road contact force measurements are included into an extended Kalman filter […] which are added to the ones usually present on board the vehicle, such as the steering angle, lateral and longitudinal acceleration and yaw rate. In particular, a more precise and prompt tyre-road friction coefficient […]. The CyberTyre capabilities to […] of the estimated quantities is maintained. Manoeuvre simulations of a validated vehicle model have been run in Matlab/Simulink.
Another important developed argument is the introduction of new relevant parameters in order to maximize the tyre cornering performances by studying the tyre contact patch dynamic behaviour by means of CyberTyre. Particular attention is given to the tyre inclination angle (IA), which has significant influence on tyre lateral forces, especially in lateral limit conditions. Indoor testing had a strategic role for the topic development, since it allowed approaching the problem in controlled dynamic conditions; finite element analysis (FEA) also gave very helpful support for the comprehension of the phenomena. The introduction of a different signal filtering method is a crucial aspect of the treated argument: both time and frequency domain are considered for the data elaboration and feature extraction. Eventually, a new index (CPI) of tyre contact patch exploitation is defined, based on a numerical model of the tyre portion in contact with the road, able to describe the normal stress distribution in the contact patch domain in cornering condition.
Results of the proposed exploitation index are presented in a series of sweeps of inclination angle and vehicle manoeuvre simulations, both performed with the Flat-Trac machine from MTS. Figure 2 depicts the CPI trend as a function of IA at imposed inflating pressure (P), vertical load (Fz) and tyre slip angle (SA): moving towards negative IA, the lateral force increases while the index is minimized, and it tends to the same value for all testing conditions. Figure 3 shows a step steer manoeuvre run at different IA but the same magnitude of lateral force: in this situation the maximum cornering condition is identified by the SA, which has to be minimized, and this trend is well described by CPI. In fact, lower values of SA, on equal terms, allow a margin for applying higher lateral force, and furthermore the power dissipation due to lateral slip is reduced.

Conclusion
A first step towards the correlation of the tyre contact patch exploitation from the CyberTyre point of view is presented in this research activity. Next steps are aimed at making the CPI more robust and reliable: in many studied cases the index seems to identify the best tyre configuration for the cornering performances. Once […] and CyberTyre in order to experiment the active control logics based on CPI.

1. CyberTyre sensors glued onto the tyre inner liner
2. CPI trend for a sweep IA test
3. CPI trend for a step steer manoeuvre at different level of IA
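The sensor-fusion step described above relies on an extended Kalman filter over a full vehicle model; the core predict/update recursion it builds on can be sketched on a single noisy force channel with a scalar filter. This is only the recursion skeleton, and all numbers are illustrative, not thesis data:

```python
import numpy as np

# Scalar Kalman filter on one noisy force measurement channel (random-walk
# state model); the thesis uses a full extended Kalman filter combining
# smart-tyre and on-board vehicle signals, which is not reproduced here.
def kalman_1d(measurements, q, r, x0, p0):
    """Filter a sequence of noisy scalar measurements."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: process noise inflates variance
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

rng = np.random.default_rng(1)
true_force = 1000.0                                   # N, constant lateral force
noisy = true_force + 50.0 * rng.standard_normal(200)  # sensor noise, std 50 N
est = kalman_1d(noisy, q=0.1, r=50.0 ** 2, x0=float(noisy[0]), p0=50.0 ** 2)
```

With a small process-noise variance the recursion behaves like a recursive averager, so the final estimate settles close to the underlying force despite the measurement noise.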
During the recent years, due to the significant technological innovations, the production of electric and electronic equipment (EEE) has been marked as the fastest growing area in industrialized countries. This results in an increased amount of waste electric and electronic equipment (WEEE). In EU countries WEEE is the fastest growing waste stream, having an annual growth rate of 3% to 5%. WEEE are considered critical waste streams due to their hazardous material contents. Therefore, in case of non-proper waste treatment, they can generate negative environmental impacts.
The need for treating end-of-life products is also highlighted in other sectors such as the automotive industry. It is estimated that over 15,000,000 vehicles retire per year in the USA, and 15-25% of their total weight is landfilled. Taking into account the potential environmental impacts of waste disposal, many countries have set up new regulations and legislations in terms of end-of-life management, in order to improve the recycling process and reduce the waste disposal.
Waste treatment is an important issue not only in terms of environmental concerns but also from the recovery aspect of valuable materials. In fact, WEEE and end-of-life vehicles (ELV) are mixtures of various materials that potentially can be seen as a resource of metals, such as copper, aluminum and gold. Due to the complex and variable material mixture of WEEE and ELV, their material recovery is a very challenging task that has not been solved yet. For instance, talking about the PCBs that are widely used in electronic products, currently only about 30-35% of the metals present in the PCBs are recovered, with purity levels varying between 85% and 95% depending on the element.
Indeed, efficient treatment of these complex mixtures requires automated multi-stage systems composed of different size reduction and material separation stages. In this regard, smart mechanical treatments, which are considered in this work, are ideal techniques for the recovery of materials in WEEE and ELV, since they involve very limited environmental impacts, energy consumption and production of by-products. Mechanical separation systems use different material properties, such as conductivity, size and density, for treating the input mixtures.
In spite of the extensive research works dedicated to the analysis of different material separation and comminution technologies, the design and performance evaluation of these systems have been rarely studied from a system engineering point of view. In M. Colledani and T. Tolio (2013), a multi-level framework, illustrated here, is introduced for integrated modeling of material separation systems, considering the interaction between process and system levels. Indeed, in mechanical separation systems there is a strong interaction between process and system levels.
In the proposed model, in the system layer the dynamics of the material flow in the recycling system is considered. This layer receives in input from the process layer the transformation matrices that are required for the analysis of the material flow. In the process layer, the physics of the process is used in order to predict the transformation matrix based on the updated estimate of the material flow dynamics calculated in the system layer. The interaction between the two layers is captured through parameter exchange.
Although the proposed multi-level model is useful in capturing the process-system interaction, the following relevant aspects are neglected.
Uncertain separation matrices. The quality of the separation processes, which can be determined by the separation matrices, is not constant and can be affected by particle-particle interactions. The real experiments that are reported in this work confirm that the quality of the separation process may be degraded by increasing the material flow rate, as it can increase this kind of interactions.
Effect of the material re-processing. In material separation systems it is common to re-process the materials in order to improve the output quality or increase the amount of the recovered materials. In spite of the importance of the material re-processing, this aspect is not included in this model.
In this work a multi-level approach is taken, considering these aspects and other system logistics issues such as machine breakdowns, machine processing rates, starvation and blocking propagation, and the role of conveyors as finite buffers. Therefore, a comprehensive model is developed that is capable of predicting the performance and supporting the design of the separation systems used in recycling.
The material flow rate is considered as a system level parameter that can affect the separation quality. Indeed, on the one hand, at process level the separation quality and the size-reduction efficiency are degraded with increasing flow rates; therefore the material flow rate, which is a system level parameter, can affect the process level performance. On the other hand, the separation quality at process level affects the material routing in the system.
The decomposition method that breaks the system into small sub-systems, composed of two machines and one buffer, is used in order to calculate the system performance. Moreover, a methodology called linearization is introduced for the quantitative analysis of the material re-processing. In this method the behavior of the reworking loop is approximated by a transfer line composed of the desired number of machines. The developed methods are implemented in Matlab and validated by comparing the results with simulation models, under different system configurations. The results confirm the precision of the proposed methods.
Another important process in recycling systems is the comminution process for shredding the input particles. Comminution processes directly affect the material routing and the efficiency of the downstream separation processes. In this work different experiments on the comminution process are performed to help providing a better insight into different aspects of these processes.
The assumptions of the model and some of the key results are validated by the experimental analyses performed at the ITIA-CNR laboratory for de-manufacturing, within the cell 3 on recycling technologies and systems. The assumption related to the effect of the flow rate on separation quality is validated through the corona electrostatic separation (CES) of a binary mixture composed of plastic and copper. The same process is also used for the experimental validation of the linearization method. The experiments performed on the CES process confirm the importance of the material re-processing in material separation systems. They also prove the criticality of choosing the right set of process parameters for the material re-processing.
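The two-machine, one-buffer building block of the decomposition method mentioned above can be illustrated with a small Monte-Carlo sketch: each machine is independently available with a given probability per cycle, the first machine is blocked by a full buffer and the second is starved by an empty one. This is a stand-in illustration of the building block, not the analytical decomposition of the thesis, and all parameters are invented:

```python
import random

# Monte-Carlo estimate of the throughput (parts per cycle) of a
# two-machine, one-buffer transfer line with Bernoulli machine availability.
def two_machine_line_throughput(p_up1, p_up2, buffer_size,
                                cycles=200_000, seed=42):
    rng = random.Random(seed)
    buf, produced = 0, 0
    for _ in range(cycles):
        m1 = rng.random() < p_up1 and buf < buffer_size  # produce if not blocked
        m2 = rng.random() < p_up2 and buf > 0            # consume if not starved
        if m1:
            buf += 1
        if m2:
            buf -= 1
            produced += 1
    return produced / cycles
```

As expected for a balanced line, the estimated throughput grows with the buffer size while staying strictly below the stand-alone availability of either machine, which is the kind of system-level effect the decomposition approximates analytically.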
In-situ structural health monitoring, which provides continuous monitoring of technical structures, helps to optimize the use of the structure and minimize downtime. This PhD work introduced an in-situ monitoring technique based on the backface strain field which is capable of evaluating the structural integrity of adhesively bonded joints and also enables monitoring of crack propagation under fatigue loading. The concept of the monitoring method for single lap (SL) joints is based on a known relationship between the crack position and the position of the negative minimum of the backface strain (BFS) profile, usually found by finite element analyses. For an Aluminum-Carbon fibre reinforced polymer joint it was found that the position of the negative minimum strain value of the BFS profile along the Al substrate follows the crack tip with an offset of approximately 2 mm. By means of this correlation, it was possible to detect the presence of the crack in static tests, and this allowed for real-time monitoring of the crack propagation in the bonded joint during fatigue tests. After the specimen had undergone 150,000 fatigue cycles, the crack length from the monitoring technique was noted and compared with measurements obtained by optical microscope, which verified the crack position monitored by this technique.
In the first place, an array of electrical strain gages was used for acquiring the BFS profile. An identical framework was also adopted and tested experimentally by replacing the electrical resistance strain gages with FBG sensors, which confirms the applicability of the monitoring system with optical sensors. It was also observed that the use of an array of FBG sensors produced more accurate results compared to electrical strain gages. This is because of their multiplexing capability, which enables inserting all the FBG sensors within a single optical fibre. Hence it was possible to place all the sensors along the centerline of the backface of the Al substrate of the SL joint, therefore minimizing the effect of any possible misalignment.
With the goal of applying this BFS-based technique using FBGs to bonded joints made of more complex materials such as woven composites, the assessment of strain sensing using FBG sensors on the backface of such materials was performed, where the strain field is not uniform and local strain gradients exist. We adopted the T-matrix method coupled with the DIC technique in our study. The DIC technique was used to capture the strain field in the woven composite, and that strain field was used to simulate the FBG response for a given gauge length.
It was seen from T-matrix simulations that the spectral response of 10 mm FBGs does not perform well in high strain gradient fields. This is because they fail to produce a distinguishable single peak, which is mandatory for strain estimation. The 4 mm FBG sensor, instead, seemed to retain the main characteristic needed for strain measurement: even though a small chirping effect was still present in this case, it was not as dominant as in the 10 mm case, where it caused the peak to split. Finally, for the 1 mm FBG sensor, it is seen that the reflected spectra produce a distinctive peak, which is vital for strain estimation. Besides, the chirping effect is completely eliminated in this case, thus making this configuration very much suitable for high strain gradient applications such as in woven composites.
In order to confirm our observations, experimental tests were carried out on woven composite strips using FBG sensors for all three gauge lengths. For 10 mm FBG sensors, it was seen that the experimental results do not match well with simulated results, as suspected. For both 4 mm and 1 mm FBG sensors, good agreements were found between experimental and T-matrix simulated observations. Moreover, experimental results confirm that strains read by both types of sensors are close to the averaged strain of the strain field over their corresponding gauge lengths.
Although the peak splitting phenomenon was eliminated for smaller FBG gauge lengths, due to their small size they sense rather local strain values, which was confirmed by comparing with global nominal results. This problem can be overcome by using an array of FBG sensors, keeping the intermediate distance between the sensors as small as possible. Difficulty remains, as the exact position of the sensor within the optical fibre is still a concern. Also, local strain values produced by smaller FBGs can differ from global values obtained by continuum mechanics. In order to eliminate all these concerns, in the last part of this thesis distributed optical sensing using an optical backscatter reflectometer (OBR) was adopted. BFS profiles for a CFRP-CFRP SL joint were experimentally obtained using this technique and compared with FE analyses. As the OBR interrogates thousands of sensing locations along a single optical fiber simultaneously, thus transforming an ordinary optical fiber into a high spatial-resolution distributed strain sensor, it was seen that this method can capture the BFS profile with higher accuracy, as the comparison with FE results confirmed. Finally, the FE model was validated by comparing with 2D DIC experimental results.

1. BFS based monitoring technique using an array of strain gages.
2. BFS based monitoring technique using an array of FBG sensors.
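The crack-tracking rule described above, locating the crack tip at a fixed offset from the negative minimum of the BFS profile, reduces to a simple computation once a sampled profile is available. The sketch below uses a synthetic profile, and the sign and magnitude of the offset are joint-specific assumptions taken from the 2 mm figure quoted for the Al-CFRP joint:

```python
# Offset between the negative BFS minimum and the crack tip (illustrative;
# the appropriate value and sign depend on the joint and must come from FE).
OFFSET_MM = 2.0

def crack_tip_position(positions_mm, strains):
    """Estimate the crack-tip position from a sampled BFS profile:
    find the index of the minimum strain and apply the fixed offset."""
    i_min = min(range(len(strains)), key=lambda i: strains[i])
    return positions_mm[i_min] + OFFSET_MM

# Synthetic BFS profile with a negative strain dip centred at 12 mm.
xs = [float(x) for x in range(26)]
bfs = [-1.0 / (1.0 + (x - 12.0) ** 2) for x in xs]
```

For this synthetic profile the minimum sits at 12 mm, so the estimated crack tip is reported at 14 mm; repeating the computation on successive profiles during a fatigue test gives the real-time crack-length history described in the abstract.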
Cold spray is a promising technology to obtain surface coatings. The development of new material systems covering a wide range of required functionalities, from internal combustion engines to biotechnology, brought forth new opportunities for cold spraying. This thesis goal can be divided into two main categories.
The first is understanding the fundamental features of cold spray. In this regard, the focus was on two issues: the assessment of critical and erosion velocities, and the mechanical behavior of cold spray coatings under indentation loading. In cold spray, bonding is obtained when the impact velocity of the particles exceeds the critical velocity but is less than an upper limit beyond which erosion happens. A new model is proposed, a combination of numerical and analytical solutions, to calculate the critical and erosion velocities. This model was based on an energy approach. However, different phenomena have been proposed as the indicator of adhesion and coating build-up. What most approaches agree upon is that a material jet is formed during high velocity impact. However, the excessive deformation of elements in the material jet makes it extremely mesh dependent in simulation. To this end, an Eulerian framework is used, in which elements are fixed and material can flow (Figure 1.a). Variations of temperature and plastic strain, as well as jet formation and instability, are presented.
The mechanical behavior of the consolidated coating under indentation loading conditions is also explored. Cold spray deposited coatings show a strong dependency on the indentation size scale. To interpret the experimental observation, a damage-based finite element model consisting of particle interiors and particle boundaries was developed (Figure 1.b).
This phase of the thesis was accomplished with two comprehensive reviews (Figure 2). The first is on the different material systems that have been cold sprayed, including metallic, ceramic, metal matrix composite, polymer and nanostructured powders. The second review is on cold spray to mitigate corrosion: the effect of deposition temperature and pressure, particle size, carrier gas, post-treatment and co-deposition of metals and ceramic particles on corrosion behavior is discussed.
The second goal is investigating the application of cold spray to the repair of damaged parts and to biomedical engineering. There have been some efforts in repairing structural parts using cold spray. Most of the time only visual inspection has been performed, since simulating real loading conditions is not possible. In the present investigation, a systematic study of the defect shape and of the ability of cold spray to fill it is done (Figure 3.a). Furthermore, the repaired part must retain the bulk material properties. Fatigue represents one of the most intricate types of damage to which structural parts are subjected in service. The effect of cold spray deposition on the fatigue life of an Al alloy is studied, and a 15% improvement was achieved. The enhancement of fatigue life by means of shot peening is also considered. It was found that conventional shot peening (SP) and severe shot peening (SSP) are more efficient if they are performed prior to cold spray deposition. SSP+CS (the best combination) increased the fatigue strength up to 26% in comparison to the as-received condition.
For biomedical applications, guidelines are proposed for the deposition of porous coatings. The porous-coated implant can be stabilized by biological fixation as a result of bone ingrowth. Experiments were carried out at under-critical impact conditions, using rather coarse powders and fast gun traverse speeds. Thick, macro-rough, sufficiently high strength coatings, suitable for implant applications, with porosity of up to about 30%, were successfully deposited using marginally low impact conditions (Figure 3.b).

1. Von Mises stress distribution: a) impact simulation of a single particle using the Eulerian framework; b) consolidated cold spray coating under indentation loading and progressive damage at interparticle boundaries.
2. Critical reviews on a) cold spray material systems and b) important parameters on corrosion behavior of cold spray coating
3. Applications of cold spray coating: a) studying the ability of cold spray in cavity filling for repair applications; b) under-critical cold spray deposition to obtain porous coating for biomedical application.
In recent years the development of CFD simulations has increased the knowledge of the problems of fluid-structure interaction. This trend has been particularly important for floating-body research fields such as offshore wind turbines and sailboats, involving two fluids. Moreover, the reliability of CFD software requires further experimental validations. To reach this, as a complementary approach to that of the test tank, there is the need to perform dynamic aeroelastic tests in the wind tunnel.
In this thesis, the project of a parallel kinematic machine that emulates the fluid-structure interaction has been addressed: the architecture of the machine has been chosen according to the specifications and behavior of sailboats and offshore wind turbines floating at sea; the kinetostatic optimization was performed with genetic algorithms in the MATLAB environment; the dimensioning of the motor-reducer was conducted downstream of a dynamic simulation of the motion in Simulink, whereas the operating conditions are more severe; the ADAMS MSc software was used for vibration analysis in small movements.
Parallel Kinematic Machines (PKMs) are commonly used for tasks that require high precision and high stiffness. In this sense, the rigidity of the drive system of the robot, which is composed of actuators and transmissions, plays a fundamental role. In this thesis, ball-screw drive and belt drive actuators are considered, and a 6 Degrees of Freedom (DoF) parallel robot with prismatic actuated joints is used as application case. Mathematical models of the ball-screw drive and belt drive are proposed, considering the most influencing non-linearities: sliding-dependent flexibility, backlash, and friction. Using this model, the most critical poses of the robot with respect to the kinematic mapping of the error from the joint- to the task-space are systematically investigated to obtain the workspace positional and rotational resolution, apart from control issues. The error sensitivity analysis is performed with respect to the most influencing non-linearity parameters. Finally, a non-linear adaptive robust control algorithm for trajectory tracking, based on the minimization of the tracking error, is described and simulated. The dynamic parameters are learned during the motion and introduced in the inverse dynamic model of the robot, which is used as a feed-forward compensator. Optimal geometrical parameters of the robots were found considering a kinetostatic performance index. The drivers and motor-reducer were chosen for the robot by considering the efficiency of the reducer. The robots were considered flexible, and the first natural frequency of the robot was calculated. The actuators were analyzed for dynamic problems considering nonlinear parameters such as flexibility, backlash and friction, and for error evaluation of the robot. The selected robot was controlled for tracking the complex trajectory, considering the nonlinear parameters.
Taking everything into consideration, the Hexaglide (figure 1) was chosen for use in the wind tunnel as a sea simulator, because it can cover the desired workspace with different joint motion ranges. Also, it is more rigid than the Hexaslide, and the Hexaglide can satisfy all limitations of the commercial linear transmissions. The ball-screw drive was the best selection as the Hexaglide linear actuator (figure 2), but the choice depends on the accuracy required by the wind turbine researcher: if the belt drive satisfies the accuracy requirements, it can be used, because it is the cheapest linear transmission. The presented control method can control the robot precisely: the adaptive-robust PID control has the lowest error, and consequently its precision is the best among the other applied control methods (figure 3).

1. 3-D view of Hexaglide robot
2. Increasing percentage of the error using belt-driven unit with respect to ball-screw-driven unit.
3. Pose error percentage in three types of the control method.
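The inverse-dynamics feed-forward idea used in the controller above can be illustrated on a single-DoF mass: a PD loop tracks r(t) = sin(t), optionally adding the model-based term m·r̈(t) as a feed-forward input. This is a minimal sketch, not the thesis controller: the 6-DoF robot model, the on-line adaptation of the dynamic parameters and the robust terms are omitted, and all gains are illustrative:

```python
import math

# Track r(t) = sin(t) with a PD loop on a pure mass; the optional
# feed-forward term injects the inverse dynamics m * r''(t).
def tracking_rms(m=2.0, kp=25.0, kd=10.0, feedforward=True,
                 dt=1e-3, t_end=20.0):
    x, v = 0.0, 0.0              # mass position and velocity
    t, sq_err, n = 0.0, 0.0, 0
    while t < t_end:
        r, rd, rdd = math.sin(t), math.cos(t), -math.sin(t)
        e, ed = r - x, rd - v
        u = kp * e + kd * ed + (m * rdd if feedforward else 0.0)
        x += v * dt              # explicit Euler integration of the mass
        v += (u / m) * dt
        sq_err += e * e
        n += 1
        t += dt
    return math.sqrt(sq_err / n)

rms_ff = tracking_rms(feedforward=True)
rms_pd = tracking_rms(feedforward=False)
```

With feed-forward active the feedback loop only has to reject the initial transient, so the RMS tracking error is clearly smaller than with pure feedback; in the thesis the feed-forward model is additionally adapted on-line because the true dynamic parameters are unknown.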
MECHANICAL ENGINEERING

A Method for Forecasting Design Requirements

Innovation has increased considerably in the last years. This situation has forced companies to quickly adapt their organization, processes and products to better answer the emerging demands of society. In such a continuous fight to survive, companies need to anticipate the main features of future products and related manufacturing processes as essential mechanisms to keep their market position and competitiveness, and moreover to support the innovation process.
In this dynamic scenario, the constant evolution of products and processes creates emerging and non-obvious problems challenging those who conceptualize, develop and implement the new solutions, known as design engineers. A relevant research goal in the engineering design domain is therefore the definition and identification of suitable methods and tools to anticipate the main features of products and processes, capable of driving the design process more efficiently and of supporting decision makers in technological choices. This need is pushing design engineers towards methods, models, and tools to better answer emerging demands and changes by adopting Technological Forecasting (TF) methods. In this scenario, understanding which forecasting methods, techniques and tools can be suitably introduced into the general design process emerged as a research opportunity in the design domain. Specifically, this PhD thesis attempts to shed light on the forecasting methods suitable to be used and adopted by design engineers, taking into account their skills, knowledge and background.

Research method and contribution
The first contribution of this research concerns the formalization of knowledge among different domains. The major contribution of the present research, in turn, consists in the consequent introduction of a method for forecasting design requirements usable by design engineers without previous experience in forecasting. The research proposal has been structured in the following tasks: i) formalization of the semantics of the requirements, which allows keeping a structured and systematic description of the requirements during the elicitation process, simplifying the transfer of knowledge; ii) formalization of forecasting methods compatible with design engineers' knowledge, useful to synthesize and create recommendations for design engineers, bringing the knowledge from forecasting experts to the design users; iii) proposal of a method for forecasting design requirements capable of supporting design engineers in anticipating information about design requirements. The latter is the main contribution of this research. It should be intended as a contribution to design theory and practice to generate solutions that are capable of having a more resilient response to product changes, but can also be useful to drive R&D strategies and related investments.
The proposed process is organized according to the core elements of a forecasting method, as proposed by Martino (1993), to keep a more structured and systematic approach to developing a forecasting analysis. Specifically, this contribution focuses on three main aspects: identification of relevant requirements; identification, among the requirements, of the variables possibly behaving according to a logistic model; and logistic growth analysis. The proposed method for identifying requirements suitable for representing the evolution of the technical system under study consists of seven sequential steps.
The achievements of the research activities have been subdivided into three different criteria: i) theoretical structural validity (Pedersen et al., 2000); ii) empirical performance validity (Pedersen et al., 2000); iii) capability of the method to guide design engineers without experience in forecasting.

1. Management of knowledge for design requirements in the framework of the system operator. Circled numbers refer to the steps of the procedure of the method, and the arrows represent the logical flow from one screen to the other.

Results and conclusions
The proposed objectives and goals of this research have been achieved by using different approaches. First of all, theoretical structural validity has been achieved by the individuation of a set of techniques from the forecasting research field in order to properly carry out the regression analysis to support the early design phases. Several contributions were developed as a theoretical framework for the organization of design requirements, i.e. Element-Name-Value logic was used to structure them. Empirical performance validity was addressed by applying the method in different industrial contexts, such as white goods in the context of the FORMAT project and the mining industry in the context of the Cluster Mining project. Industrial partners recognized the usefulness of the approach in producing results that clarify directions of development for their products and processes. For what concerns the FORMAT project, the method has been useful in supporting the overall methodology of the project by bringing new information about vacuum forming technologies. Moreover, the proposed method has brought positive amendments to the overall FORMAT methodology, mainly for what concerns the analysis of information, known as stage A. For what concerns the Cluster Mining project, the method has been useful to clarify directions of R&D about the mining mill.
Regarding the ability of the method to guide design engineers without experience in forecasting, the control group test has been useful to validate the method steps. The method steps allowed gaining new insights and conclusions about the product of which the testers were not aware before. Moreover, the group working with the method was capable of bringing better results than those working freely. Remarkably, the tests showed that the method does not require so much workload, and consequently it can be used and adopted by design engineers without a deep forecasting expertise (as the testers were). Finally, the author did not have the opportunity to explore the benefits of the method application in the overall design process, from the elicitation of requirements until the final design proposal. Nevertheless, given the above-mentioned results, the author considers that the method can be embedded in any Product Development Process or used as a complementary analysis, as demonstrated during the different case studies along this research. Noteworthy, the method is able to support different users with a lack of knowledge in statistics (i.e. not only design engineers).
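The logistic-growth step of the method can be illustrated with a small sketch. This is not the author's implementation: for simplicity it assumes a known saturation level L, and recovers the remaining logistic parameters by a logit-transform linear regression (function names and data are illustrative).

```python
import numpy as np

def fit_logistic(t, y, L):
    """Fit y(t) = L / (1 + exp(-k (t - t0))) for a known saturation level L.

    Linearization: ln(L / y - 1) = -k t + k t0, so an ordinary
    least-squares line through (t, ln(L / y - 1)) yields k and t0.
    """
    z = np.log(L / y - 1.0)                  # logit transform of the data
    slope, intercept = np.polyfit(t, z, 1)   # polyfit returns [slope, intercept]
    k = -slope
    t0 = intercept / k
    return k, t0

# Synthetic requirement-maturity data following a logistic curve
t = np.linspace(0, 10, 50)
true_k, true_t0, L = 1.2, 5.0, 100.0
y = L / (1.0 + np.exp(-true_k * (t - true_t0)))

k, t0 = fit_logistic(t, y, L)
print(k, t0)   # recovers k = 1.2 and t0 = 5.0 on noise-free data
```

On real requirement data the saturation level is itself unknown and would have to be estimated or scanned over, which is where a fuller growth-curve analysis comes in.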
Recent years are characterized by the impact of global trends, and manufacturing technology is faced with a number of different challenges. The topics of customized products, reduced life time, increased product complexity and global competition have become essential in production today. As modern manufacturing changes towards customized production, characterized by high-variety and high-quality products, a paradigm shift in metrology is coming. Information concerning the state of the products and production processes is obtained with the aid of metrology. Due to this paradigm shift, the complexity and accuracy of the product requirements are increasing.
At the same time, smart sensorization of equipment and processes is providing new opportunities, which have to be appropriately managed. A large amount of data can be available to aid production and inspection, and appropriate methods to process this big data have to be designed. In this scenario, multisensor data fusion methods can be employed to achieve both holistic geometrical measurement information and improved reliability or reduced uncertainty of measurement data to an increasing extent.
The main purpose of this thesis is to explore new approaches for reconstructing a surface starting from different sources of information, which have to be appropriately fused. Surface is meant in a broad sense, both as the geometric pattern of a physical object to be inspected and as the surface representing a response function to be optimized.
The first part of the thesis focuses on the reconstruction of the surface geometry via data fusion. In this case, it is assumed that multiple sensors are acquiring the same surface, providing different levels of data density and/or accuracy/precision. The thesis starts by exploring the performance of a two-stage method, where Gaussian Processes (also known as kriging) are appropriately used as a modeling tool to combine the information provided by two sensors. Figure 1 shows the reconstructed surface, with the error map, using only one sensor (Lo-Fi and Hi-Fi models) and properly combining the available information (Fusion model). Then, the thesis faces the problem of suggesting a data fusion method when large point clouds, i.e., big data (as the ones commonly provided by non-contact measurement systems), have to be managed. In this case, the use of Gaussian Processes poses some computational challenges, and this is why a different method based on multilevel B-splines is proposed.
As a second contribution, the thesis presents a novel method for data fusion, where the uncertainty of the specific measurement system acquiring the data is appropriately included in the data fusion model to represent the uncertainty propagation. Eventually, the thesis faces the problem of using surface modeling to quickly detect possible out-of-control states of the machined surface. Starting from a real case study of a laser-textured surface, an approach to combine surface modeling with statistical quality control is proposed and evaluated.
The second part of the thesis focuses on using data fusion for process optimization. In this second application, data provided by computer simulations and real experiments are fused to reconstruct the response function of a process. In this case, the aim is to find the best setting of the process parameters to maximize the process performance. It is shown how data fusion can be effectively used in this context to reduce the experimental efforts.

1. Prediction of the surface with error map. a: Low fidelity model; b: High fidelity model; c: Fusion model.
In the last 50 years, economic and technical demands have forced the steel industry to develop innovative processes to supply the transportation, energy and construction markets with high-strength, high-toughness and cost-effective steels. This has led to the development of thermomechanical processes (TMP) able to refine the austenitic grains and the final microstructure after phase transformation through the control of the rolling temperature of microalloyed steels.
To design a TMP schedule it is necessary to have detailed models describing the recrystallization kinetics and the evolution of the size of recrystallized and unrecrystallized austenitic grains of steels during and between each deformation pass. Moreover, these models, to be industrially effective, have to be coupled with Finite Element Modelling (FEM) for the rolling mill design. Up to now, only empirical models have proved to give satisfactory results, but these rely on a huge quantity of parameters that have to be calibrated by ad-hoc laboratory tests. Unfortunately, different tests and different test analysis methodologies usually give different calibration parameters for the same phenomenon and therefore different FEM results.
Therefore, the aim of this work has been to gain an effective knowledge of the metallurgical mechanisms at the basis of steel microstructural evolution during hot rolling and to compare their hot characterization methods, in order to identify the most accurate ones to be used together with FEM. Since the approach adopted in this research work is mostly methodological, to simplify the comparison of testing methods, an AISI 304L austenitic stainless steel has been studied in torsion and compression at temperatures between 900 °C and 1200 °C and strain rates from 0.001 s⁻¹ to 10 s⁻¹. The recrystallization kinetics has been studied using direct methods, such as optical microscopy and Electron Back-Scattered Diffraction (EBSD), and indirect ones, such as double hit and stress relaxation tests.
The first part of this work has been devoted to a critical review of the wide literature on these metallurgical phenomena, focusing on recovery and recrystallization of steels. Despite the numerous publications, a detailed knowledge of the nucleation mechanisms of recrystallized grains is still lacking, as is a framework of differential equations describing the flow stress curves of a recrystallizing material. However, empirical models supply these deficiencies, organizing the metallurgical understanding in equations that can be used for modelling the microstructural evolution of steels. The main weakness of this approach is that the model calibration parameters are dependent on the laboratory testing methods. For example, researchers can decide to characterize the hot flow stress-strain behaviour of steels by torsion or by compression tests. Since, as shown in this work, the flow stress curves are different due to the different response of materials to these deformation modes and to the presence of friction in the compression tests, this kind of decision has an impact on the constitutive parameters used to summarize the hot deformation behaviour of steels.
Beside these issues, the combination of experimental uncertainties and analysis methodologies also has a significant influence on the constitutive parameters. It was demonstrated, using a Monte Carlo method, that not all the techniques can be used with the same trustworthiness. In particular, it has been found which method is the most robust against the propagation of experimental uncertainties, and that the errors in the determination of the flow stress are the most important aspects to be minimized in the equipment design.
Also the assessment of the recrystallization kinetics can be performed in different ways. Direct methods, based on the observation of the evolution of the microstructure, can give many details about nucleation and growth mechanisms, but they are very time-consuming and not always applicable. Indirect methods, based on the study of the effect of recrystallization on mechanical properties, are faster, but they can give only an average information about the behaviour of the material. Concerning the microstructural evolution during deformation, the application of indirect methods, such as the differential analysis coupled to the Quelennec model, has permitted to calculate the flow stress curves for different combinations of temperature and strain rate with very good agreement with experimental results, and to evaluate indirectly the recrystallization kinetics from the shape of the flow stress curves themselves. On the other hand, the recrystallizing microstructure has been characterized in terms of change in the diameter and shape of grains by optical microscopy. It was shown that nucleation occurs at triple junctions and serrated grain boundaries, and that after the formation of the first necklaced structure the grain growth limits the final refinement. All these findings are in good agreement with literature and theoretical results. Finally, the comparison of dynamic recrystallization kinetics measured by direct and indirect methods has shown a reasonable agreement.
Regarding the microstructural evolution after deformation, the comparison between indirect methods has confirmed that the double hit test can measure slower recrystallization kinetics than those obtained by the stress relaxation method. Direct observation by optical microscopy has shown that nucleation of new grains occurs at grain boundaries and triple junctions with a rate proportional to the applied strain. After an initial refinement, the microstructure coarsens and the final grain size is reduced proportionally to the nucleation rate. EBSD analysis, giving more details on the substructures produced by the deformation, has identified in the distribution of deformation inside grains the mechanism promoting and limiting the growth of new grains. The recrystallization kinetics measured by EBSD has been shown to be more reliable than the one measured by optical microscopy and similar to the ones measured indirectly. The differences with double hit and stress relaxation tests have been explained in terms of the gradient of deformation inside compression samples.
According to the present work, the direct analysis is the most accurate way to study the high temperature microstructural evolution needed for the design of a TMP, but a complete characterization, taking into account different temperatures, strains and strain rates, grain sizes and chemical compositions, would require such a large investment in time and research effort as to be incompatible with industrial product development times and requirements. However, a careful indirect analysis of the changes in the high temperature mechanical response can give much information in a quicker time and can significantly reduce the metallographic campaign efforts.
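The Monte Carlo argument about uncertainty propagation into constitutive parameters can be sketched as follows. The Hollomon-type law σ = Kεⁿ and the 2% noise level are illustrative stand-ins, not the constitutive model or uncertainty budget used in the thesis: the point is only that repeatedly perturbing the measured flow stress and refitting reveals the scatter that a test technique induces in the calibrated parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal Hollomon-type flow curve sigma = K * eps^n (illustrative values)
K_true, n_true = 900.0, 0.25
eps = np.linspace(0.05, 0.6, 30)
sigma = K_true * eps ** n_true

def fit_hollomon(eps, sigma):
    """Log-log linear regression: ln(sigma) = ln(K) + n ln(eps)."""
    n, lnK = np.polyfit(np.log(eps), np.log(sigma), 1)
    return np.exp(lnK), n

# Monte Carlo: perturb the "measured" stress by 2% noise and refit
ns = []
for _ in range(2000):
    noisy = sigma * (1.0 + 0.02 * rng.standard_normal(eps.size))
    ns.append(fit_hollomon(eps, noisy)[1])
ns = np.asarray(ns)
print(ns.mean(), ns.std())  # scatter of the hardening exponent n
```

Comparing the resulting parameter spread across different test types (torsion vs. compression, with their respective error levels) is what allows the most robust technique to be identified.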
The number of industrial robots worldwide is constantly increasing. According to the International Federation of Robotics, 160,000 new robot installations were sold in 2012, leading to the second highest level ever recorded for one year. The majority of these robot installations (about 40% of new installations in 2012) are related to the automotive sector and to multi-robot cells for body spot welding.
Multi-robot cells for spot welding are robotic cells in which several parts are assembled by spot welding (Fig. 1). They are characterized by different robots working at the same time on a single body that is handled by a transporter. The body is generally composed of two or more components that are blocked during the welding process by ad-hoc fixtures. The design of the multi-robot cell for spot welding relies on two main steps: cell design and off-line multi-robot motion planning. Given the fixture, the body and the welding points, cell design concerns the selection of the resources, such as robots and welding guns, and their placement in the cell space while considering productivity, costs, flexibility and reconfigurability. Motion planning concerns the allocation of the welding points to the resources, i.e. a robot and its welding gun, and the definition of a motion plan for each robot so that the body is correctly assembled, while coping with cycle time and avoiding collisions between the robot and the fixture or among robots.
Currently, cell design and motion planning are sequential and completely manual activities, generally managed by different industrial functional units. Moreover, due to this subdivision of activities, several cycles are needed to obtain a feasible final solution. However, each cycle causes delays and errors that could be avoided through better integration of these activities.
The proposed approach aims at defining a methodology for optimizing the cell design while reducing the time and errors due to the lack of integration between design and motion planning. The idea is to exploit existing motion planning techniques in order to define a new cell design approach that is highly integrated with motion planning. The research copes with the following topics: cell design, motion planning for single-robot cells, motion planning for multi-robot cells, collision detection, and multi-resolution simulation.
The thesis proposes a 5-stage approach (Fig. 2) able to provide the cell design and the coordinated robot motion plan. Motion planning is based on off-line decoupled motion planning techniques for high-dimensional spaces and articulated robots, in order to grant applicability to multi-robot cells for spot welding. The motion plan for each single robot is defined through existing techniques (Stage 1), whereas the coordination of the robots is based on a newly developed model applied to articulated robots (Stages 2 and 3). The existing techniques employed in Stage 1 (probabilistic roadmap) have been modified in order to best adapt to the high complexity of the environment that characterizes multi-robot spot-welding cells. Several algorithms were developed exploiting the information coming from the technological process of spot welding. The motion plan is based on the Open Robot Realistic Library that, representing the virtualization of the robot motion planner, catches the real robot behavior during trajectory generation. OBB hierarchical decomposition (Stage 5) is employed for collision checking and as a basis for multi-resolution simulation. Cell design is solved simultaneously with multi-robot motion planning in Stage 3. It copes with the selection of the best resources among a set of preselected resources according to a cost minimization criterion, the definition of the position of the resources in the system (position/orientation of the robots and allocation of the welding guns to the robots), and the allocation of the welding points to the robots. The approach does not currently take into account the flexibility and reconfigurability of the system. The final solution is validated through simulation (Stage 4).
The approach was tested on three ad-hoc cases and two industrial cases provided by the Italian company COMAU S.p.A. The three ad-hoc cases were successfully solved. The influence of the environment complexity on the single-robot motion planning and on the final solution was addressed through a detailed study of the first industrial test case. Specifically, the need for motion planning algorithms taking into account the technological problem of spot welding was proved. The resolution of the second industrial case represented a successful test bed for the whole approach. According to the analyzed cases, the employment of this methodology can provide useful support for robotic and automotive companies. The time needed for the resolution of the design and motion planning can be decreased from some weeks of manual work to some days of simulation and manual work. Manual work will be limited to the preparation of the data and to the adjustment of the final solution, if needed.

1. Multi-robot cell for car body spot welding.
2. Approach.
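The probabilistic-roadmap idea behind Stage 1 can be illustrated with a minimal 2-D sketch. The real planner works in the high-dimensional joint space of articulated robots with OBB-based collision checking; the circular obstacle, sample counts and neighbour counts below are arbitrary toy choices.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

# Circular obstacles: (x, y, radius) in a unit-square workspace
obstacles = [(0.5, 0.5, 0.2)]

def collision_free(p, q, steps=20):
    """Check the straight segment p -> q against all obstacles by sampling."""
    for t in np.linspace(0.0, 1.0, steps):
        x, y = p + t * (q - p)
        for ox, oy, r in obstacles:
            if (x - ox) ** 2 + (y - oy) ** 2 < r ** 2:
                return False
    return True

def prm_path_exists(start, goal, n_samples=150, k=8):
    """Classic PRM: sample free nodes, link each to its k nearest neighbours
    with collision-free edges, then search the roadmap with BFS."""
    pts = [np.asarray(start, float), np.asarray(goal, float)]
    while len(pts) < n_samples:                       # rejection sampling of free space
        p = rng.random(2)
        if all((p[0] - ox) ** 2 + (p[1] - oy) ** 2 >= r ** 2
               for ox, oy, r in obstacles):
            pts.append(p)
    pts = np.asarray(pts)
    adj = {i: [] for i in range(len(pts))}
    for i in range(len(pts)):
        d = np.linalg.norm(pts - pts[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:              # skip self at index 0
            if collision_free(pts[i], pts[j]):
                adj[i].append(int(j)); adj[int(j)].append(i)
    seen, queue = {0}, deque([0])                     # node 0 = start, node 1 = goal
    while queue:
        u = queue.popleft()
        if u == 1:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v); queue.append(v)
    return False

print(prm_path_exists((0.05, 0.05), (0.95, 0.95)))
```

The modifications described in the thesis inject spot-welding process knowledge into exactly these two ingredients, i.e. where samples are drawn and which connections are attempted.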
Interest in the aerodynamics of trains has grown in the last 30 years, especially with the introduction of new high-speed rolling stock. The necessity to increase running safety and train interoperability between European countries has led to the definition of standards like CEN and TSI. A new train that is designed to run on European high-speed lines must therefore fulfil the requirements on its aerodynamic behaviour defined in the TSI. The requirements usually ask for full scale experiments or experimental tests on reduced-scale models in a wind tunnel or on a moving model test rig. Especially for full scale experiments, the cost of homologation tests is very high, since they have to be carried out during the night, in the absence of commercial traffic, and performed several times over a long stretch of the line in order to consider the different features that could be encountered, such as slopes, tunnels, curves and different infrastructures. In the present research, an investigation of the possibility to rely on virtual homologation, consisting of CFD methods and wind tunnel tests, to simplify homologation procedures with respect to three aerodynamic features of high-speed trains is performed.
Regarding the effect of the wind acting transverse to the train, called crosswind, both reduced-scale models in the wind tunnel and CFD simulations are already provided for by the CEN norm. However, both approaches do not take into account the relative motion between the train and the infrastructure when the wind is blowing. Neglecting this effect may have an impact on the definition of the aerodynamic force coefficients. In this study, the quantification of the differences in the coefficients has been investigated by means of CFD simulations with a moving reference frame approach, considering different infrastructure scenarios: the Single Track Ballast and Rail (STBR) and the 6 m high embankment (EMBK). The deflection of the flow induced by the relative motion changes the incoming flow conditions on the front part of the train, while in the rear part this effect is less important. Since the effect is limited to a small part of the whole train, the impact on the aerodynamic coefficients is not so pronounced. The comparison between the results of the simulations with still and moving models highlighted that considering the relative motion between the train and the infrastructure leads to larger aerodynamic coefficients. The variation is larger the larger the infrastructure dimension is with respect to the train dimension. In fact, considering the EMBK scenario, an underestimation of the aerodynamic coefficients of the order of 10% has been found in the considered range of yaw angles, as shown in Fig. 1. The underestimation of the aerodynamic coefficients will lead to an underestimation of the Characteristic Wind Curves that represent the limit conditions for train overturning.

1. ETR500 leading vehicle force (CFZ) and moment (CMX) aerodynamic coefficients with EMBK scenario.

Regarding the effect of the wind generated by the train passing trackside structures and people, called slipstream, the TSI standard requires at least 20 independent full scale measurements with major restrictions on environmental and infrastructure conditions. These tests are very expensive both in terms of time and of costs. In the present work the possibility to study the slipstream problem using a still model in wind tunnels is investigated. The benefits of working on a still model in a wind tunnel are that ambient conditions can be controlled during all the test runs and it is possible to avoid the problem of the small measuring time, but an appropriate measuring system should be adopted. In this case CFD simulations can be used as a useful tool for designing the wind tunnel experiment and testing different solutions in advance. Since the slipstream strongly depends on the generated turbulence, a DES numerical model has been applied, in order to capture the smaller flow structures that form near the surface of the train with an adequate computational effort. Different CFD models have been developed and the results have been compared to the full scale measurements. It was found that the boundary layer on the ground and the ballast plays an important role for the slipstream assessment, and Fig. 2 shows how the vorticity develops in the rear part of the train. The wind tunnel set-up has been defined according to this information, and it was found that WT measurements overestimate the numerical results. Again, the development of the ground boundary layer represented the major problem. The BL could be better controlled by reducing the splitter plate in the upstream direction and using a porous floor on the splitter plate in order to prevent its generation.

2. Vorticity generated by the train reproduced by means of CFD simulations.

The flow generated in the underbody region of the train leads to a phenomenon called ballast lifting, and it has been recognised in the TSI as an open point. Ballast pick-up depends not only on the train underbelly geometry but also on the track conditions, so there are many parameters that need to be taken into account. In order to better understand this phenomenon, an experimental set-up has been defined to measure air flow and forces in the train underbody region during different test campaigns. Then a numerical CFD model, composed of a full scale model of the ETR500 put on the STBR scenario, has been defined to simulate the undercar flow. The effects of ballast stones and sleepers have been taken into account through rough wall functions. Results showed a good agreement between experimental measures and the numerical model, both in terms of velocity profile and of forces acting at the ballast level. This tool could be used to perform forecast analyses. For instance, with the numerical model it should be possible to consider different train speeds or different train underbelly geometries, without the necessity of performing full scale tests, which are much more expensive and time-consuming.
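The force and moment coefficients of Fig. 1 follow the usual non-dimensionalisation by the dynamic pressure. A minimal sketch, with purely illustrative numbers (the specific reference area and length prescribed by the CEN norm are not reproduced here):

```python
def aero_coefficients(F_z, M_x, rho, V, A, h):
    """Normalise a measured side force and roll moment into non-dimensional
    coefficients: C_F = F / (q A), C_M = M / (q A h), with q = 0.5 rho V^2.
    A (reference area) and h (reference length) depend on the adopted norm."""
    q = 0.5 * rho * V ** 2          # dynamic pressure
    return F_z / (q * A), M_x / (q * A * h)

# Illustrative numbers only, not measured ETR500 data
cfz, cmx = aero_coefficients(F_z=12_000.0, M_x=90_000.0,
                             rho=1.225, V=40.0, A=10.0, h=3.0)
print(round(cfz, 3), round(cmx, 3))   # -> 1.224 3.061
```

Because the coefficients scale with 1/V², a 10% underestimation of the coefficient translates directly into an optimistic Characteristic Wind Curve.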
"We don't sell products. We sell experiences." Sentences like this one are increasingly common in companies' advertisements. In the same fashion, the number of scientific publications about User Experience (UX) is quickly rising: searching for "user experience" on Scopus returns about 11,500 entries, but less than one fifth of them are older than 2005. UX can be defined as a person's perceptions and responses resulting from the use and/or anticipated use of a product, system or service. However, the design for the UX has attracted several criticisms, related to some limitations that both researchers and practitioners aim to overcome. Firstly, an experience cannot be designed, nor guaranteed. The purpose of designing experiences could thus lead designers to attempt to design something that is not there to be designed, such as the user. Secondly, the design of an interactive artefact, to a certain extent, impacts users' behaviours, habits and eventually experiences. Hence, every designer or design engineer developing interactive artefacts acts as a UX designer. From that perspective, systematic approaches can be a valuable support to those who have to deal with the UX, albeit not being properly UX designers. Nevertheless, there is a lack of systematic approaches suitable to support design engineers in designing for the UX.
This thesis deals with these issues and tries to address the following main research question: "How can we support design engineers and companies in effectively and efficiently designing for the UX?". The research question is addressed as follows.
The first step arises from the consideration that designing experiences implies some risks. In order to overcome said risks, a novel formalisation for the UX is outlined. More in detail, after a critical discussion about the role of the UX designer and her/his designs, the relation between the user and the artefact is represented by means of the concept of affordance; a formulation in terms of affordances is thus proposed as a possible solution to overcome the above discussed issues. Hence, the concept of Experience Affordance is postulated as follows: an artefact affords an experience to a user when the user has a certain psychological need and the artefact has a feature capable of fulfilling it. A schematic representation of Experience Affordances is shown in Figure 1.

1. A schematic representation of an Experience Affordance.

Such a formalisation gives a prominent role to the satisfaction of users' psychological needs. In this fashion, on the one hand it takes into account the subjectivity of experiences, whilst on the other hand it allows building a systematic approach upon it. From this perspective, in order to develop a systematic approach, the concept of Experience Affordance is expanded beyond its definition, and a theoretical model that describes how Experience Affordances work is developed. To this purpose, it is assumed that Experience Affordances can be seen as proposals for needs satisfaction made to the users; then, it is argued that the process through which the artefacts offer these possibilities to the users can be described as a communication process. Hence, a novel design-as-communication model is proposed. Building on the Jakobson model of communication, Experience Affordances are then modelled in a fashion that highlights not only the elements that play a role in providing users with experiences of needs satisfaction, but also the roles these elements play.
Eventually, the main research question of this thesis is addressed through the development of a systematic approach, based on said design-as-communication model. Subsequently, such an approach is implemented into a computer-based tool. This tool, called User Experience Design Supporting Tool (UXDST), aims to foster the ideation of novel artefact features capable of addressing users' psychological needs. Some exemplary screenshots from the UXDST are shown in Figure 2. The tool is then tested in order to preliminarily verify its capability of supporting the design for the UX and to individuate the most suitable paths for future improvements.
Finally, a complementary discussion is carried out: despite being outside the main contribution of this dissertation, two further systematic methods are proposed and discussed. The first one is based on the narrative analysis of user stories, and aims to support designers and design engineers in individuating and highlighting users' psychological needs. The second one, instead, is rooted in Experience Affordances and proposes such a formalisation as a tool for evaluating the possible experiences for needs satisfaction offered by artefacts to users. Therefore, after a theoretical discussion, an exemplary application is shown.
The methods and tools proposed in this dissertation refer to different time phases of a design process. Whilst the main part deals with the synthesis of novel solutions, the other methods are related, respectively, to the need-finding activities and to the evaluation stage. This offers, as a main future opportunity, new hints on the possibilities for a novel integrated approach for supporting design engineers throughout the whole UX design process. This opportunity, as well as the main limitations of the work, are critically discussed in the final chapter.

2. Exemplary screenshots from the UXDST.
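The Experience Affordance definition above, a feature capable of fulfilling a psychological need the user holds, can be paraphrased as a small data-model sketch. All class names, fields and example needs are hypothetical, not the UXDST's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Artefact:
    name: str
    # Mapping: feature -> list of psychological needs it can fulfil
    features: dict = field(default_factory=dict)

def experience_affordances(artefact, user_needs):
    """An artefact affords an experience when the user holds a psychological
    need and the artefact has a feature capable of fulfilling it; return the
    (feature, need) pairs for which that condition holds."""
    return [(feat, need)
            for feat, needs in artefact.features.items()
            for need in needs if need in user_needs]

bike = Artefact("city bike", {"fitness tracker": ["competence"],
                              "sharing mode": ["relatedness", "autonomy"]})
print(experience_affordances(bike, {"competence", "relatedness"}))
# -> [('fitness tracker', 'competence'), ('sharing mode', 'relatedness')]
```

The design-as-communication model then asks not just which pairs exist, but what role each element plays in proposing the need satisfaction to the user.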
MECHANICAL ENGINEERING

Nowadays, gas turbines and other components employed for power generation are subjected to several load cycles during their lifetime, since they are switched on and turned off several times each day in order to meet the peak loads requested by users. High plastic strain can be present in these components, since the high loads applied to the structures during start-ups and turn-offs can generate yielding in certain regions, such as near notches. In order to take these conditions into account, state-of-the-art procedures treat fatigue life assessment as a crack propagation problem. Fatigue crack growth is usually described taking into account the material elastic-plastic behavior, and crack growth rates are described as a function of elastic-plastic parameters, such as the applied plastic strain range or the cyclic J-integral.
These models, however, present several limits. First, there are simplified equations based on fatigue load cycles calculated by adopting Masing's hypothesis, a feature that does not take into account transient phenomena such as ratchetting and mean stress relaxation. Then, they consider the effects of crack closure, but opening levels are usually calculated by adopting analytical models known to be precise only under fully reversed loadings. These conditions represent the motivation and the starting point of this Ph.D. thesis, which aims to study crack propagation in plastic zones. In this work the attention is mainly focused on the effects of crack closure and material cyclic behavior, and on the possibility to apply the general formulation of J in fatigue life assessment for components subjected to LCF conditions.
In the first part of this work, a general formulation of the cyclic J-integral was obtained, starting from Dowling's original proposal. This formulation, which does not depend on Masing's hypothesis, allows the direct extraction of the cyclic J-integral from the remote fatigue cycle. This model was employed to study fatigue crack growth in the presence of plastic strain. It was found that short crack propagation could be accurately described only if effective stress and strain ranges were considered in the J-integral range calculations. This means that, even during propagation in the LCF regime, crack closure plays an important role. As proposed by state-of-the-art procedures, it was found that crack growth rates, experimentally measured during short crack propagation, lie on reference curves obtained from tests performed at high stress ratio on standard compact tension specimens. This condition was not met when tests performed at high temperature were considered. In particular, it was found that the experimental data-points exhibit a marked speed increment. This increment was related to a damage mechanism present at the crack tip, which consisted in a pattern of micro-cracks surrounding the defect main body. Additional experimental campaigns underlined the fact that this phenomenon is temperature dependent. Because of this, a speed factor was introduced in the calculations to describe the enhancement of crack growth rates.
The same observations were obtained in the second part of the work, in which crack propagation in the presence of notches was studied. This activity was developed in order to provide a crack propagation model suitable for real components. An accurate study of the stress/strain field of a compressor disk, performed in order to check the material behavior near the most critical parts of the components, showed that disks experience an elastic shakedown condition. This situation is related to the particular spinning tests performed before final installation and implies that an approach based on LEFM can be adopted, even if residual strains are present. The strains recorded during the experiments confirmed the elastic shakedown condition numerically simulated. In particular, it was found that the crack propagation model accurately describes fatigue crack growth when the stress/strain cycle registered at the notch root is fully reversed, whereas it provides wrong estimates when an applied mean stress is present. This fact was related to the wrong crack-closure estimates provided by the current analytical model. Because of this, the attention was shifted to the experimental determination of crack closure. An innovative technique based on computer vision, digital image correlation (DIC), was employed to study crack closure in a Ni-based superalloy, Haynes 230. In this phase, a single crystal structure was considered, in order to remove the effects of grain boundaries on crack opening/closing levels. Two different techniques were adopted: virtual extensometers, placed along the crack flanks, were employed to check crack-closure levels, whereas a regression algorithm was used to extract the effective stress intensity factors from the singular field present at the tip. In order to validate the technique, cyclic plastic zones were calculated from the extracted effective stress intensity factors and compared to those calculated from numerical simulations, which considered single crystal plasticity. Good agreement was found, underlining the capabilities of DIC in the study of fatigue crack growth. Moreover, it was found that the crack propagation behavior of single crystals is similar to the one observed when a polycrystalline structure is considered.
In the final part of this work, these techniques were applied in the study of fatigue crack growth in linepipes subjected to severe loading conditions. The particular loading condition experienced by the tubes implied the adoption of the crack propagation model based on the cyclic J-integral. An experimental campaign was performed to study crack opening and closing loads during propagation in the presence of very high plastic strains. DIC was employed during this activity, by adopting an innovative technique based on virtual strain gages positioned near the crack tip. It was found that the levels calculated by the analytical model proposed by Newman underestimated the effective stress and strain ranges applied to the crack at Re = 0.5. In particular, it was observed that, during LCF, a crack stays open for almost the whole fatigue cycle. At this point, specimens containing defects with constraint conditions similar to those present in the linepipes were tested. It was found that good estimates could be obtained only if experimental opening/closing levels, together with experimental stress and strain amplitudes, were considered.
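The crack-propagation view of life assessment described above can be illustrated with a Paris-type growth law driven by an effective, closure-corrected cyclic J-integral range. The sketch below is illustrative only: the constants, the linear dJ(a) dependence and the closure factor U are assumptions for the example, not data or models from the thesis.

```python
def cycles_to_grow(a0, af, C=1e-6, m=1.5, U=0.7, dJ=lambda a: 50.0 * a):
    """Integrate a Paris-type law da/dN = C * (U * dJ(a))**m from crack
    length a0 to af [mm]. U < 1 mimics crack closure: only the part of
    the cycle during which the crack is open drives growth.
    All constants are illustrative placeholders."""
    a, N, da = a0, 0.0, 1e-3
    while a < af:
        rate = C * (U * dJ(a)) ** m   # crack advance per cycle [mm/cycle]
        N += da / rate                # cycles spent on the increment da
        a += da
    return N

# A crack that stays open for more of the cycle (higher U, as observed
# under LCF) grows faster, i.e. the component survives fewer cycles:
assert cycles_to_grow(0.1, 1.0, U=0.9) < cycles_to_grow(0.1, 1.0, U=0.7)
```

The effective-range idea is the key point: the same nominal cycle produces very different growth rates depending on how long the crack remains open.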
Low Frequency Vibrations analysis as a […]

Railway axles are one of the crucial elements providing the safety and functionality of railway vehicles; thus, special attention needs to be paid to their health. To prevent failure of the axle, periodical inspections are performed. Railway axles in modern vehicles not only transfer loads but also provide support for brakes and other auxiliary elements; thus, their proper inspection against faults requires a complete disassembly process and the use of one of the nondestructive diagnostic techniques (like ultrasonic scan, eddy current test, etc.). This approach has a significant disadvantage: the inspection needs to be performed in a workshop, which is time-, effort- and money-consuming. Due to the lack of information about the state of the axle during its operation, the inspections are performed periodically on a time or kilometre basis, which can cause either money loss for the operator in the case of an unnecessary check (no problems found) or serious accidents in the case of axle failure due to a critical condition being exceeded before the planned inspection. In the presented work, an analysis of the application of the Low Frequency Vibration measurement approach for a railway axle online condition-based monitoring system is presented.

1. Low Frequency Vibration analysis (LFV)
LFV is based on the measurement of harmonic components of the axle bending vibration having a periodicity which is an integer sub-multiple of the revolution period. These vibrations are induced by the crack-breathing mechanism and by the asymmetry in the bending inertia of the axle, as produced by the stiffness reduction introduced by a propagating transversal crack. The abovementioned vibrations have a low-frequency nature and, due to this fact, the measurements can be conducted using simple, robust and inexpensive transducers. This method was initially proposed for crack detection in the shafts of turbo-machinery [1] and was demonstrated to provide reliable results for this kind of application, based on experiments performed on a laboratory test rig; however, due to the different working specification of a railway axle (speeds much lower than the first critical speed, non-stable working conditions due to additional excitation sources, weather condition influence, etc.), successful application in the railway field was not obvious.

2. Experimental tests and results
The experimental full-scale tests have been carried out by means of the Dynamic Test Bench for railway axles of the labs of the Politecnico di Milano Department of Mechanical Engineering. In particular, three-point rotating bending was applied to the full-scale specimen via an actuator group and an electric motor: in this way, both constant-amplitude and block-loading fatigue crack propagation tests could be carried out.
LFV signals have been acquired by means of three laser transducers to monitor possible damage occurrence and its development during the axle operation under load. Because of the difficulty of monitoring axial vibration in real applications, the authors have focused only on vibration measurement in the radial direction. Two laser transducers have been mounted in the vertical direction and one in the horizontal direction, pointing at the central region of the axle, which allowed the highest displacements produced by the applied loads to be captured. For speed detection and absolute phase lag analysis, a tachometer signal (one-per-rev signal) has been obtained from the electric motor used to rotate the axle.
During the experimental campaign, a number of NxRev harmonics appeared in the measured signal and, after a trending process, the increase in the level of vibration became evident for some of them. During block loading repetitions, the 1xRev component still showed a nearly monotonically increasing trend, but the trend was less clear, probably on account of disturbances such as thermal effects that may produce a bow of the rotating axle. Finally, the amplitudes of the remaining 4xRev to 7xRev components remained very low during the whole tests, with a slight increase in the final stage of the test. In conclusion, the 1xRev, 2xRev and, in some cases, also the 3xRev harmonic components of the vibration signal appear to be the best suited to be set in relationship with the presence (and possibly with the size) of a propagating crack in the axle. It is necessary to keep in mind that not only an increase of the amplitude of the vibration signal can indicate the presence of a fault: some experimental results show a decrease in amplitudes during the initial stage of crack development, which could be explained by the influence of opposing phases of the generated signals [2].

3. 3D model analysis and results
3D analysis by the finite element method was used to investigate the vibrations induced by the breathing mechanism of a crack existing in a railway axle under working conditions (applied loads and rotations). The finite element model of the axle was discretised by means of […]; the breathing mechanism was the most important phenomenon to investigate during the simulation. Contact parameters were defined at both sides of the cracked specimen walls (crack lips), which allowed the opening and closing of the crack lips caused by rotational bending to be simulated. During the simulation analysis campaign, two scenarios were investigated: a laboratory-test-equivalent scenario and a railway case scenario, the latter including additional excitations raised by wheel and rail irregularities and thus representing more closely the real working conditions of a railway axle.
The laboratory-scenario simulation results appeared to be in good qualitative agreement with the measurements performed in the full-scale tests on cracked axles. In particular, the ratios of the amplitudes of the 1st and 2nd harmonics were in good agreement with those that can be observed during the laboratory test. The absolute amplitude of vibration obtained at the end of the tests performed on cracked axles was consistent with the amplitude obtained by means of finite element simulations for a crack in the range of 25% to 35%, which was roughly consistent with the final crack amplitude found on the specimen at the end of the laboratory test. A crack of similar size is highly probable during vehicle service. The 1xRev and 2xRev harmonics occurred to be the most suitable to indicate the presence of a crack. The comparison of two cases with different crack depths (30% and 35%) revealed that the NxRev vibration components undergo a higher rate of change of amplitude due to crack propagation than due to the developing out-of-roundness profile of the wheel, allowing the assumption that axle condition monitoring based on LFV is capable of delivering diagnostic results even during further degradation of the wheel profile.

Bibliography:
[1] Pennacchi P., Bachschmid N., Vania A.: A model-based identification method of transverse cracks in rotating shafts suitable for industrial machines, Mechanical Systems and Signal Processing, Vol. 20, 2006, pp. 2112-2147.
[2] Randall R. B.: Vibration-based condition monitoring: industrial, aerospace and automotive applications, John Wiley & Sons, 2011.
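The NxRev trending described above amounts to tracking, revolution-synchronously, the Fourier amplitude of the bending-vibration signal at integer multiples of the rotation frequency. The sketch below is a minimal generic illustration of that idea (a single-bin DFT on a tachometer-resampled signal); the sampling scheme and numbers are assumptions, not the processing chain used in the tests.

```python
import math

def nxrev_amplitudes(signal, samples_per_rev, orders=(1, 2, 3)):
    """Amplitude of the NxRev harmonics of a vibration signal resampled
    at a fixed number of samples per revolution (tachometer-synchronous).
    Single-bin DFT evaluated only at the requested orders."""
    N = len(signal)
    revs = N / samples_per_rev            # number of whole revolutions
    amps = {}
    for n in orders:
        k = n * revs                      # DFT bin of the n-th order
        re = sum(s * math.cos(2 * math.pi * k * i / N) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / N) for i, s in enumerate(signal))
        amps[n] = 2.0 * math.sqrt(re * re + im * im) / N
    return amps

# Synthetic bending signal: 1xRev amplitude 1.0, 2xRev amplitude 0.3.
spr = 64
sig = [math.sin(2 * math.pi * i / spr) + 0.3 * math.sin(4 * math.pi * i / spr)
       for i in range(10 * spr)]
```

For this synthetic record the estimator recovers amplitudes close to 1.0, 0.3 and 0 for the 1xRev, 2xRev and 3xRev orders; trending such values over the axle's life is the essence of the LFV approach.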
Light-weight manipulators are increasingly used in interacting robotic tasks, where reduced mass and (controlled) compliance are required to ensure safety and adaptability. Interacting tasks generally refer both to human-robot interaction (e.g. handling assistance) and to robot machining (e.g. automatic assembly or surface finishing). Moreover, lightweight manipulators are often mounted on flexible structures or mobile platforms. In such applications the dynamics of the interaction is affected by the robot base and the environment, in addition to the dynamics of the controlled robot.
The critical compliant scenario (i.e., compliant robot base, controlled robot, environment and, eventually, human operator) may cause the presence of natural frequencies in the operating bandwidth of the manipulator due to the coupled dynamics, significantly influencing the interaction accuracy over a wide critical bandwidth. Such a condition may give rise to the excitation of resonances of the coupled system that could hinder the stability of the task execution (especially during contact transients). Moreover, the base elasticity is remarkably crucial from the controller performance point of view: both dynamic and static deformation of the robot base may affect the task execution, resulting in decreasing performance (e.g. steady-state errors) and task failures (e.g. instabilities).
Interaction control has long been investigated in order to safely execute such applications. Impedance control is particularly suitable for interacting applications, allowing a target dynamics to be defined (i.e., mass, stiffness and damping parameters). Although impedance methods are proved to be dynamically equivalent to explicit force controllers, a direct tracking of the interaction is not straightforwardly allowed. To overcome this limitation while preserving the properties of the impedance behaviour, two different families of methods have been mainly introduced: class (a), force (or position) tracking controllers, and class (b), variable impedance controllers. The limitations of class (a) are related to the limited bandwidth of the controllers to avoid instabilities when dealing with changing environments. The limitations of class (b) are related to the absence of a runtime estimation of the environment stiffness. Additionally, both class (a) and class (b) do not consider force overshoot avoidance. Moreover, state-of-the-art compliant-robot-base methods do not consider contact tasks, or consider the use of external sensors (i.e., reduced applicability to industrial contexts).
The purpose of this Thesis is to extend class (a) and (b) algorithms to a global impedance shaping, to overcome the described limitations: avoid force overshoots/instabilities; compensate for the compliant robot base.
The novelty of the defined approach is the possibility to track a target interaction force (or deformation) while modifying the impedance of the global interacting scenario, having a complete estimate of the complete plant. This is defined as impedance shaping. This is done evolving from class (a) and (b) methods, tuning on-line both the position set-point and the stiffness and damping parameters of the impedance control, based on both the force error and the on-line estimated stiffness of the interacting environment, obtained by implementing an Extended Kalman Filter (EKF). Eventually, such a control scheme is capable of considering the compliant robot base scenario in order to compensate for the compliant robot base dynamics. In this case, the defined control strategy takes into account the estimated position of the manipulator base to correct the position set-point of the impedance control, obtained by implementing a Kalman Filter (KF) to avoid the use of external sensors.
The proposed control scheme has been developed based on an incremental study of the described complex scenario. Firstly, the rigid robot base scenario has been taken into account (focusing on the robot-environment interaction). Then, the compliant robot base scenario has been taken into account (focusing on the compliant robot base dynamics). A complete model of the interaction dynamics is needed, both because the proposed control strategies are model-based and to design the observers. The developed dynamic models have been validated, especially the impedance control model of the closed-loop robot dynamics (i.e., static and dynamic impedance behaviour validation). Then, observers and control strategies have been developed, using the developed models to synthesize the algorithms and to study the closed-loop stability.
The developed control strategies have been validated in real assembly tasks. Results show the effectiveness of the proposed control strategies, compared with control schemes in the literature (in particular, a second order explicit force tracking controller based on the impedance control), which show force overshoots, instabilities and lower dynamic performance.
Topics covered in this Thesis have been developed and tested at ITIA-CNR, IRAS group.
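The target dynamics that impedance control imposes can be illustrated with a one-degree-of-freedom simulation of a robot pressing against an elastic environment. This is a generic textbook-style sketch, not the controllers developed in the thesis; masses, gains and the environment stiffness are arbitrary illustrative values.

```python
def simulate_impedance(M=1.0, D=40.0, K=400.0, x_ref=0.01,
                       k_env=2e4, dt=1e-4, T=2.0):
    """1-DoF impedance-controlled robot pressing on an elastic environment
    located at x = 0. The closed loop imposes the target dynamics
        M*xdd + D*xd + K*(x - x_ref) = f_ext,  f_ext = -k_env*x for x > 0.
    Semi-implicit Euler integration; all numbers are illustrative."""
    x, xd = 0.0, 0.0
    for _ in range(int(T / dt)):
        f_ext = -k_env * x if x > 0.0 else 0.0   # environment reaction
        xdd = (f_ext - D * xd - K * (x - x_ref)) / M
        xd += xdd * dt
        x += xd * dt
    f_contact = k_env * x if x > 0.0 else 0.0
    return x, f_contact  # steady-state penetration and contact force

# At steady state K*(x - x_ref) = -k_env*x, i.e. x = K*x_ref/(K + k_env):
# the contact force is shaped by the impedance parameters, which is why
# tuning K (and x_ref) on-line allows tracking a target interaction force.
```

Lowering the stiffness K softens the contact; the thesis' impedance-shaping approach adjusts both the set-point and the stiffness/damping parameters on-line, based on the force error and the estimated environment stiffness.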
[…] method to estimate the axial load in tie-rods by means of indirect measurements. The knowledge of this information is of great importance to assess the health of the tie-rod itself and of the whole structure in which the beam is inserted.
The method is based on dynamic measurements and requires the experimental estimation of the tie-rod eigenfrequencies and mode shapes in a limited number of points. Furthermore, the approach requires the development of a simple finite element model, which is then cross-correlated with the experimental data by means of a model updating procedure.
The aim of the present work is to design, develop and validate an innovative technique to assess the tie-rod axial load, based on dynamic measurements and able to overcome most of the problems and limitations of the previous approaches. Particularly, the method is expected:
to be simple to apply (both experimentally and numerically);
to give an accurate assessment of the tensile force (results as good as or better than those achieved by the best methods in the literature);
to be effective with different kinds of tie-rods (e.g. with both uniform and non-uniform beam cross-sections);
not to require an accurate estimation of material data, working properly even with rough nominal data;
to work even with operational modal analysis, in order to apply an effective continuous monitoring of tie-rods.
In order to achieve a method based on dynamic measurements, the modal behavior of tie-rods was investigated. The tensile load is the object of the analysis, but the tie-rod dynamic behaviour is affected also by other parameters: some are measurable and therefore considered as known (e.g. the cross-section area), and others are unknown (e.g. the actual free beam length, the Young's modulus and the density of the material). The behaviour of the constraints at the ends is unknown as well. The stiffness of the constraints is represented by an equivalent torsional stiffness in the model of the tie-rod. The following method was developed to assess the axial load:
i. identify the first tie-rod eigenfrequencies by means of experimental tests;
ii. assess […] by identifying the eigenvector components in two points;
iii. build a FE model of the tie-rod, where the material properties are fixed to nominal values and the geometry is assumed […] comes from point ii of this list;
iv. use the FE model to build the relationship between the value of each eigenfrequency and the axial load; the tie-rod axial load is estimated as the mean of the various values;
v. the value is refined by means of a modal updating procedure, by cross-correlating the FE model results with the experimental data in terms of eigenfrequencies.
The method takes into account a reasonable range of uncertainty for each unknown parameter. Two methods for assessing […] were investigated: one is effective also when working with environmental forcing; the other led to a feature useful to estimate the actual beam free length. Extensive numerical simulations and experimental tests were carried out in order to validate the method.
Numerical validation was carried out by means of Monte Carlo simulations on different case studies. Results are represented in terms of […], where […] is the true value of the axial load in the beam. For a case study, Fig. 1 shows the results in terms of the average value […] and the results distribution pointed out by the […] interval and the boundary values […], for different values of axial load and constraint stiffness. Results before and after the model updating are shown.

1. Errors in estimations before and after the updating procedure

The experimental validation was performed on a test rig as similar as possible to an actual tie-rod. The tests were carried out with different levels of axial load and different stiffnesses of the constraints. In order to simulate also a real low level of stiffness, some rubber layers were put between the elements of the clamps which constrain the beam ends. The experimental tests were performed both by measuring the input force on the beam and by providing only the environmental forcing. Results are shown in Fig. 2. The experimental tests show that the maximum discrepancy (after updating) on the load estimation was lower than 10%, in strict accordance with the numerical validation.

2. Experimental test results

In conclusion, the comprehension of the basic effects of geometrical and mechanical variables on the modal behaviour of a beam was the starting point to design the new method to assess the axial load. The eigenfrequencies of the beam were found to be suitable to estimate the axial load, but the analyses also showed that an estimation of the stiffness of the constraints was necessary prior to any other task. Hence, methods to assess the stiffness of the constraints […] the best methods in the literature, with less constraints (e.g. the necessity to know the Young's modulus of the material with a very high accuracy). Finally, an experimental validation of the method was carried out, and the results confirmed those coming from the Monte Carlo simulations.
Therefore, the designed method reaches all the goals fixed at the beginning of the work, and it is noticed that the developed procedure does not require measuring the input to the structure. Furthermore, the number of sensors required to measure the response of the beam is limited to two. This makes the current method much cheaper with respect to most of the referenced techniques, even enabling continuous monitoring of the tie-rods.
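The dependence of the eigenfrequencies on the tensile load that underlies steps iii-iv can be illustrated with the closed-form solution for a pinned-pinned Euler-Bernoulli beam under tension. This is a simplification for illustration only: the thesis models the end constraints with an equivalent torsional stiffness and uses an FE model rather than this formula, and the numbers below are arbitrary.

```python
import math

def eigenfrequency(n, N, L, EI, rhoA):
    """n-th bending eigenfrequency [Hz] of a pinned-pinned Euler-Bernoulli
    beam of length L [m] under tensile axial load N [N]; EI is the bending
    stiffness [N m^2], rhoA the mass per unit length [kg/m]."""
    base = (n * math.pi / L) ** 2 * math.sqrt(EI / rhoA) / (2 * math.pi)
    return base * math.sqrt(1.0 + N * L ** 2 / (n ** 2 * math.pi ** 2 * EI))

def load_from_frequency(f_meas, n, L, EI, rhoA):
    """Invert the relation above: one load estimate per measured
    eigenfrequency, to be averaged over the available modes."""
    base = (n * math.pi / L) ** 2 * math.sqrt(EI / rhoA) / (2 * math.pi)
    return ((f_meas / base) ** 2 - 1.0) * (n ** 2 * math.pi ** 2 * EI) / L ** 2
```

A round trip with made-up tie-rod data (L = 2 m, EI = 8.3e3 N m², ρA = 5.6 kg/m) recovers an imposed 50 kN load exactly; with real data the per-mode estimates scatter, which is why the thesis averages them and then refines the value by model updating.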
Human body response to multi-axial vibrations

[…] could lead to serious consequences on health. Vibrations transmitted to the body generate cyclic stresses critical for the spine and may lead to musculoskeletal disorders (i.e. low-back pain), besides physical and psychological fatigue.
Many studies have focused on understanding the dynamics of the body both in seated and standing postures. These studies evidenced a general nonlinear behaviour, which was investigated under different experimental conditions (i.e. vibration magnitude, postures and other anthropometric parameters). Despite the huge amount of material, the variability of the human body dynamics was still not clearly understood, the issue being more complex than expected; the main open issue is that the variability due to the nonlinear behaviour of the biodynamic response has not been systematically compared to the inter- and intra-subject variability.
The aim of this work was the identification of the nonlinearities affecting the response of the human body exposed to whole-body vibrations. The research was entirely addressed to the characterization of the response for standing persons using a […] the nonlinearities of the body-transmitted vibrations. This field was not completely explored and few works were reported in the literature. Besides the lack of material, the largely described nonlinear behaviour was often in contrast with, or not fully supported by, the experimental results: e.g. nonlinearities with respect to the vibration magnitude were found even if the ordinary coherence function was close to unity.
The research activity was carried out according to the following steps:
1. characterization of the apparent mass in the case of vertical whole-body vibration;
2. identification of the nonlinearities;
3. design and realization of a suitable excitation system and setup for measuring the apparent masses in the basicentric reference system;
4. characterization of the full (three-by-three) matrix of apparent masses for standing persons;
5. characterization of the response in the case of multi-axial vibrations (no more than two axes contemporarily excited).
The study of the nonlinearities was performed for both single-axis and multi-axial vibrations. The reference parameter for the biodynamic response of the human body was the apparent mass (hereafter APMS), i.e. the ratio between the transmitted force and the applied acceleration.
In the first part of this work, nonlinearities were identified by conditioning the APMS deriving from vertical whole-body vibration (WBV) with a set of nonlinear functions of the acceleration; both the acceleration and the force were measured only along the vertical direction. Afterwards, the full (three-by-three) matrix was identified with a purposely-designed excitation system composed of two electrodynamic shakers and a tri-axial force plate. The excitation was initially mono-axial and the force was measured along the three coordinate axes. Both the symmetry of the APMS matrix and the effect of the vibration magnitude were assessed with paired t-student and Wilcoxon signed-rank tests. In the last part of the research, the response (forces along three mutually perpendicular directions) was measured with uncorrelated excitation along two axes. The APMS derived in these conditions has been compared with the one obtained upon exciting a single axis.
Nonlinearities in the human body response to vertical WBV were analysed using the conditioned output spectra and the multiple coherence functions. The contributions of the nonlinear terms to the […] and legs-bent postures, the responses modelled with conditioned and linear models were very similar, and the differences between the ordinary and the multiple coherence functions were comparable to the intra-subject variability.
The full three-by-three APMS matrix was derived by exposing subjects to independent vibrations along the three orthogonal axes. Such a task involved the design of either a dual-axis excitation system, made by the junction of two electrodynamic shakers, or a tri-axial force plate, for the measurement of the transmitted forces in the reference system. For each axis, both the direct and the cross-axis APMSs were computed by the use of linear estimators. Generally, the normalized APMS decreased in amplitude towards higher frequencies, except for the case of vertical WBV, where both the direct and the frontal cross-axis APMSs increased to a main resonance peak (at about 5-6 Hz) and then decreased to lower magnitudes. Such coupling may suggest a common vibration mode in the xz-plane, but it did not occur under the reciprocal condition (i.e. no resonance arose on both axes due to a frontal excitation). No more couplings were found between the direct and the cross-axis APMSs under […]. Given such facts, one would expect that the major inertial contribution occurs along the direction of excitation. No differences occurred within the APMSs, the vibration magnitude not being a driving parameter for the modelling of the biodynamic response.
Secondly, the same statistical analyses were performed on the APMS matrices of a single individual. In this case, a greater dependence of the response on the vibration magnitude was observed. Many differences occurred at frequencies where the aggregate responses were found equal. Definitely, the comparison between the population and the individuals' APMS matrices evidenced the greater dependence of the individual's response on the vibration magnitude. This finding may depend on the scatter in the population's biometric data, whose uncertainty introduced on the response is such as to overlay magnitude-dependent effects.
The frontal cross-axis APMS under vertical WBV was conditioned in order to assess whether nonlinearities in the response occurred. Results from the linear estimators and the conditioned quantities were found similar. Further analysis evidenced that the drop in the coherence function was not attributable to some nonlinearity, but rather that the response was not stationary. Consequently, the […] was characterized by exposing subjects contemporarily to independent vibrations along either a primary or a transversal direction (i.e. with increasing magnitudes of acceleration). Results from the comparison between the APMSs taken under single-axis and dual-axis exposures confirmed that the biodynamic response was influenced by the addition of a secondary transversal acceleration. Such dependence cannot be extended either to all the directions of excitation or within a specific response path. A marginal contribution of the overall magnitude of vibration was found, since the dual-axis APMSs were almost equal. As for the APMS under single-axis excitations, the nonlinearity in the response due to the addition of a secondary vibration was more evident for the individual rather than for the sample population. Such behaviour may again depend on the variability in the population's biometric data, whose uncertainty is such as to overlay the effects due to the addition of extra-axis vibrations.
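The APMS defined above (transmitted force over applied acceleration) is, in practice, estimated from measured spectra. The sketch below computes a single-frequency H1-type estimate in pure Python; it is a generic illustration of the definition, not the conditioning or multiple-coherence analysis used in the thesis, and the 75 kg rigid-mass example is an assumption for the sanity check.

```python
import cmath, math

def dft_bin(x, k):
    """Complex DFT coefficient of the sequence x at integer bin k."""
    N = len(x)
    return sum(x[i] * cmath.exp(-2j * math.pi * k * i / N) for i in range(N))

def apparent_mass(acc, force, k):
    """H1-type single-bin estimate of the apparent mass: cross-spectrum of
    force and acceleration divided by the acceleration auto-spectrum."""
    A, F = dft_bin(acc, k), dft_bin(force, k)
    return F * A.conjugate() / (A * A.conjugate())

# Sanity check: for a rigid 75 kg mass, force = mass * acceleration, so
# the estimate must return 75 kg at the excitation bin (bin 8 of a
# 256-sample record here).
acc = [math.sin(2 * math.pi * 8 * i / 256) for i in range(256)]
force = [75.0 * a for a in acc]
```

For a real standing subject the estimate is complex-valued and frequency-dependent, and the direct and cross-axis terms fill the three-by-three APMS matrix discussed above.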
MECHANICAL ENGINEERING

Quality is an important aspect of a product. High product quality ensures the functionality of assembled products and the interchangeability among products from different manufacturers. To verify product quality, tolerance verification (geometrical measurement) by means of coordinate measuring systems has to be carried out. Advances in manufacturing technology enable a significant reduction of critical dimensions, together with an increase of geometric complexity. This creates challenges in coordinate metrology: tolerances become tighter. Optical-based metrology instruments are a potential option to verify these tolerances, but of course in this case too traceability is a fundamental aspect to ensure reliable measurement results.

This thesis addresses the problem of traceability of a focus-variation microscope as a 3D coordinate measuring system. First, the traceability of the instrument is discussed considering its performance. Proposals for reference artifacts and procedures to conduct performance verification according to the ISO 10360-8 and ISO 10360-3 standards are presented. These proposals consider both the 3-axis and the 4-axis configurations of the instrument.

Second, an approach is presented for task-specific uncertainty evaluation by simulation, coherent with the ISO 15530-4 standard. The proposed simulation approach is based on a spatial statistical model considering the correlation among captured points. To support the simulation and consider all significant error sources, characterization studies investigating the influencing factors of measurement by focus-variation microscopy are presented, too. Finally, industrial case studies are carried out to validate the simulator developed. The validation conforms to the ISO 15530-4 standard.

As a by-product of this study, algorithms to associate ideal substitute geometries to sampling points are discussed. An improvement of non-linear least-squares fitting is presented, based on the optimization of the initial solution through a chaos method.
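The task-specific uncertainty evaluation by simulation described above can be sketched as a small Monte Carlo experiment: points sampled on a nominal feature are perturbed with spatially correlated Gaussian errors, a substitute geometry is fitted by least squares, and the spread of the fitted parameter estimates the measurement uncertainty. All numbers below (feature size, noise amplitude, correlation length) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_circle(x, y):
    """Algebraic least-squares circle fit (Kasa method)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Nominal feature: circle of radius 5 mm sampled at 36 points.
theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
r_true = 5.0

def correlated_noise(theta, sigma=0.002, corr_len=0.5):
    """Gaussian radial errors with exponential spatial correlation along
    the arc, mimicking the correlation among neighbouring captured points."""
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2 * np.pi - d)          # wrap-around angular distance
    cov = sigma**2 * np.exp(-d / corr_len)
    return rng.multivariate_normal(np.zeros(theta.size), cov)

# Monte Carlo loop: perturb the points, re-fit, collect the measurand.
radii = []
for _ in range(2000):
    r_pts = r_true + correlated_noise(theta)
    x, y = r_pts * np.cos(theta), r_pts * np.sin(theta)
    radii.append(fit_circle(x, y)[2])

radii = np.asarray(radii)
u = radii.std(ddof=1)                          # standard uncertainty
print(f"simulated radius: {radii.mean():.4f} mm, u = {u * 1000:.2f} um")
```

Ignoring the correlation (i.e. using independent errors) would noticeably change the simulated uncertainty, which is why the spatial statistical model matters.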
Tyre testing is an essential part of the tyre development process: its goals are both the refining and validation of numerical models of the tyre thermomechanical behaviour and the direct verification of the effectiveness of the design choices. In particular, tyre footprint stress field acquisitions offer the potential advantage of encapsulating a great amount of information: the details of the stresses exchanged between the tyre and the road surface are of fundamental importance in a vast area of tyre and vehicle dynamics. Normal stress maps in the tyre footprint are routinely measured in static conditions by tyre manufacturers with different methodologies: the most commonly used are pressure-sensitive papers and piezo-resistive matrix systems, initially developed in the field of biometrics. Much less common is the availability of contact stress or deformation fields in dynamical conditions (i.e. with the tyre rolling), limited to date essentially to line arrays of miniature force transducers that operate a burst of acquisitions upon the tyre passage. This method offers the advantage of giving information about the stresses in all three directions; however, it is very expensive both in implementation and in maintenance costs. Another option, suitable for both static and dynamic conditions, is represented by contact pressure sensing devices based on the frustration of total internal reflection of light.

When light is forced to be totally reflected inside a transparent medium, an exponentially decaying wave is present at the boundary interfaces. If an object is present in the portion of space occupied by the evanescent wave, a portion of the reflected light can be reflected back through the medium, reaching a camera framing the opposite boundary surface. Since the decay constant is on the order of hundreds of nanometres, for an object to occupy the region of the evanescent wave it has to be, in practical terms, in contact with the medium surface. The effect of the deformability and of the superficial roughness of the object is to redirect light per unit area in direct proportion to the applied contact pressure. Therefore, in the case of a pneumatic tyre, a measurement device for the footprint contact pressure can be realized based on this physical phenomenon.

This work focuses on the development of two such testing systems and their implementation at the Pirelli Tyre Testing Department facilities in Milano Bicocca. A static test bench has been implemented by supporting a glass plate illuminated from the sides by an array of LED lights and framing its underside with a calibrated camera. The intensity values of the pixels are correlated to the contact pressure magnitude via the interposition of a calibrated polymeric material between the tested tyre and the glass surface. Each acquired image is then processed by custom-developed software that isolates the footprint area, either automatically or with the intervention of the user, and converts the image intensity values into normal stress values. The conversion is made by constraining the integral of the normal stresses to equal the normal load applied during the tests, acquired by the test bench reading the output of load cells placed on the wheel hub.

A second testing system, aimed at dynamical conditions, is implemented on a tyre testing drum by inserting in its structure a frame supporting a curved glass sheet, aligned with the drum surface and likewise illuminated from the sides by LED light arrays. A calibrated camera, fixed to the wheel hub, frames the underside of the glass plate. Both the camera and the LED lights are triggered to acquire an intensity map of the tyre footprint upon the passage of the tyre on the glass window. As in the static measurement case, a polymeric material sheet is fixed on the outer surface of the glass. The material is not the same as in the static case, since the sensitivity and mechanical resistance requirements are very different. Also in this case, the software for the processing of the acquired frames has been custom-developed based on the open-source C++ computer vision library OpenCV: in addition to the functionalities present in the static test bench software, it allows for the recovery of motion blur degradation effects up to moderate intensity. Currently, straight free rolling tests at speeds up to 120 km/h with camber angles up to 4° in magnitude are possible.

A first validation of the systems has been performed by comparing their results to those of established commercial measurement systems, in the case of the static test bench. For the drum testing system, a comparison was made on rolling resistance estimates based on the analysis of the contact patch normal stress field acquired by the dynamical system, i.e. evaluating the forward displacement of the normal stress field resultant with respect to the vertical plane containing the wheel screw axis, against estimates obtained via standard certification procedures. The results of the validation processes and their repeatability proved to be very encouraging, and the static test bench has been advanced to the industrialization process and CE certification. The dynamical system is still in active development while its industrialization is under way. Future developments are especially focused on the extension of the range of testing conditions allowed, in terms of wheel tangential velocity, longitudinal and lateral slip, and applied torque.

1. The optical static footprint test bench.
2. Comparison between normal stress maps obtained in static conditions by the developed test bench (left) and a commercial piezo-resistive matrix system (right). Values displayed are normalized with respect to the inflation pressure.
3. Comparison of fv coefficients referring to rolling resistance measurements obtained by the developed optical system (on the abscissae) and via a standard certification system (on the ordinates).
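The load-constrained conversion from image intensity to normal stress lends itself to a compact sketch: scale the segmented intensity map so that the integral of the resulting stress field equals the load measured by the hub load cells. The synthetic footprint, the threshold and all numbers below are invented for illustration.

```python
import numpy as np

# Synthetic footprint intensity image: elliptical contact patch with a
# smooth intensity distribution (a hypothetical stand-in for camera frames).
ny, nx = 200, 300
y, x = np.mgrid[0:ny, 0:nx]
ellipse = ((x - 150) / 120) ** 2 + ((y - 100) / 70) ** 2
intensity = np.clip(1.0 - ellipse, 0.0, None) * 255

pixel_area = (0.5e-3) ** 2       # 0.5 mm pixels, in m^2
load = 4000.0                    # normal load from the hub load cells, N

# Isolate the footprint (a simple threshold stands in for the custom
# software's automatic segmentation step).
mask = intensity > 10

# Scale intensity to stress so that the integral of the normal stress
# over the footprint equals the measured normal load.
scale = load / (intensity[mask].sum() * pixel_area)
stress = np.where(mask, intensity * scale, 0.0)   # Pa

recovered = stress.sum() * pixel_area
print(f"peak stress: {stress.max() / 1e5:.1f} bar, total load: {recovered:.0f} N")
```

A single global scale factor assumes a linear intensity-to-pressure response; the calibrated polymeric sheet is what justifies that assumption in the real bench.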
Industrial Chemistry and Chemical Engineering | Information Technology | Interior Architecture and Exhibition Design | Management, Economics and Industrial Engineering | Materials Engineering | Mathematical Models and Methods in Engineering | Mechanical Engineering | Physics | Preservation of the Architectural Heritage | Rotary-Wing Aircraft | Spatial Planning and Urban Development | Structural Seismic and Geotechnical Engineering | Technology and Design for Environment and Building | Territorial Design and Government | Aerospace Engineering | Architectural and Urban Design | Architectural Composition | Architecture, Urban Design, Conservation of Housing and Landscape | Bioengineering | Building Engineering | Design | Design and Technologies for Cultural Heritage | Electrical Engineering | Energy and Nuclear Science and Technology | Environmental and Infrastructures Engineering

PhD Yearbook | 2015
DOCTORAL PROGRAM IN PHYSICS
PHYSICS

Chair: Prof. Paola Taroni

The Doctoral Program in Physics aims at attracting bright students with a good scientific background and a clear interest towards the development and applications of new ideas and technologies. It offers a wide range of opportunities in the fields of advanced applied physics, such as photonics and optoelectronics (lasers, ultrafast optics), biomedical optics (optical tomography), vacuum technologies (thin film depositions), material technologies (microelectronics and nanotechnologies, micromechanical processing), and advanced instrumentation (electronic and atomic microscopy, nuclear magnetic resonance). Scientific education and training to develop general research abilities in all areas of applied physics is increasingly needed by advanced technological companies. Through a general education in the basic areas of applied physics and a specific knowledge in condensed matter physics, as well as optics and lasers, the PhD Program aims at the development of an experimental approach to problem-solving techniques and at the attainment of a high level of professional qualification.

The Doctoral Program has a strong experimental character. The contents are strictly related to the research activities carried out in the laboratories at the Department of Physics. They can be divided into two main areas:
a) Condensed Matter Physics, including photoemission; spin-resolved electronic spectroscopy; magneto-optics; X-ray diffraction; magnetic nanostructures for spintronics; synchrotron radiation spectroscopy; positron spectroscopy; semiconductor nanostructures.
b) Optics and Quantum Electronics, including ultrashort light pulse generation and applications; UV and X optical harmonics generation; biomedical applications of lasers; diagnostics for works of art; laser applications in optical communications; time-domain optical spectroscopy and diagnostic techniques.

All research activities rely on advanced experimental laboratories located at Politecnico di Milano (Milano-Leonardo Campus and Como Campus) and are performed in collaboration with several international institutions. Besides the experimental research, a consistent effort is devoted to the design and development of novel instrumentation.

The educational program can be divided into three parts: 1) main courses specifically designed for the PhD program; 2) […]; 3) activities pertaining to more specific disciplines, which will lay the foundation for the research work to be carried out during the Doctoral Thesis; 4) Doctoral Thesis. The thesis work is […]

The average number of fellowships/grants for students admitted to the PhD Program is twelve per year, while the average number of available positions is more than double. At present, the overall number of students in the three-year course is fifty-six.

Teaching and research activities of the Doctoral Program are controlled and organized by a number of Faculty members large enough to cover a wide spectrum of research fields. All members are highly qualified and active researchers. This ensures a continuous updating of the PhD program and guarantees that the students are involved in innovative work. A list of […]: ETH-Zürich, EPFL-Lausanne, Lund Institute of Technology, University of Paris-Sud, École Polytechnique-Paris, University of Barcelona, University of Berkeley, University of Cambridge, Technical University of Wien, University of Bordeaux, Massachusetts Institute of Technology, Harvard University, INFM-CNR, IIT-Istituto Italiano di Tecnologia, European Space Agency, ENEA, Elettra-Ts, PSI-Villigen, Agenzia Spaziale Italiana, European Synchrotron Radiation Facility (ESRF-Grenoble).

Doctoral Program Board (*Position: FP = Full Professor; AP = Associate Professor; RC = Researcher/Assistant Professor):
D'ANDREA COSIMO AP | DELLA VALLE GIUSEPPE PA | DE SILVESTRI SANDRO FP | DUÒ LAMBERTO FP | FINAZZI MARCO AP | GHIRINGHELLI GIACOMO AP | ISELLA GIOVANNI PA | LANZANI GUGLIELMO FP | LAPORTA PAOLO FP | MARANGONI MARCO AP | NISOLI MAURO FP | RAMPONI ROBERTA FP | STAGIRA SALVATORE PA | TARONI PAOLA FP | TORRICELLI ALESSANDRO AP

The Doctoral Program relies also on the advice of a Steering Committee, formed by distinguished experts (see table below) coming from R&D industries or research laboratories, taking care that the goals of the PhD Program are in line with the needs of the non-academic world.

Family Name | First Name | Institution
PIROVANO | AGOSTINO | Micron Semiconductor Italia s.r.l.
DONATI | FABIO | EPFL Lausanne, CH
MIOZZO | LUCIANO | Solvay Specialty Polymers
von KÄNEL | HANS | ETH Zürich, CH
MASOTTI | GIOVANNI | El.En. S.p.A.
My PhD research activity was focused on the use of positrons, the anti-particle of the electron, both as a probe for material properties and as a fundamental element for the production of antihydrogen at the CERN laboratories. Positron annihilation spectroscopy is a consolidated technique for the study of defects in metals, alloys and semiconductors, as well as for the analysis of free volumes inside polymer materials. At the VEPAS laboratory (Variable Energy Positron Annihilation Spectroscopy), L-NESS (Laboratory for Nanostructure, Epitaxy and Spintronics on Silicon, Politecnico di Milano, Polo territoriale di Como), a slow energy positron beam and a bulk positron lifetime spectrometer have been employed in the study of thin films, hybrid solar cells and PMMA polymers, in order to correlate positron spectroscopy information to the electrical and optical properties of the materials. The chemical composition and the morphology of voids and porosities in hybrid solar cells and thin film metal oxide semiconductors (IGZO in particular) have been studied, and a strong correlation between the positron spectroscopy results and the electrical properties of the materials has been found. In a PMMA polymer, free volume measurements have shown that the material optical properties depend even on slight changes in the free volume dimensions and concentration.

Positron annihilation spectroscopy techniques have also been applied to the study of positron-to-positronium converters for the AEgIS (Antimatter Experiment: gravity, Interferometry, Spectroscopy) experiment at CERN (Geneva, Switzerland). The AEgIS experiment will produce antihydrogen by overlapping a cold antiproton cloud and a cold, laser-excited positronium cloud; therefore an efficient yield of positronium is crucial for the fulfilment of the experiment. The AEgIS experiment is one of the five experiments in the world that work with low energy antiprotons. The antiproton decelerator AD at CERN, a unique facility, delivers bunches of cold antiprotons. The only way to study antimatter properties with a good enough resolution is to cool (i.e. to lower the kinetic energy) and catch antiatoms in electromagnetic traps; the lower their kinetic energy, the more accurately the measurements can be performed. The principal aim of the AEgIS experiment is to measure the antihydrogen gravitational acceleration g on Earth, i.e. to test the Weak Equivalence Principle (WEP) for antimatter. The WEP is one of the cornerstones of Einstein's General Relativity, which states that the trajectory of a body in free fall does not depend on its composition but only on its initial kinematic conditions (position and velocity). The gravitational interaction is the only one, among the four fundamental interactions (gravity, electromagnetic, weak and strong), described by a classical theory, General Relativity, and not by a quantum field theory. Therefore the g measurement for an antimatter-matter system could in principle shed light on which quantum model could describe the gravitational interaction. In a second phase of the experiment, antihydrogen spectroscopy will be performed in order to verify the Charge Parity Time (CPT) theorem, which states the symmetry between the properties of matter and antimatter.

At CERN I had the opportunity to cooperate in the setup of the AEgIS pulsed positron beam that delivers positrons to the AEgIS central region, in which antihydrogen will be produced in the near future. I also participated in the antiproton runs during which antiprotons were successfully cooled and stored for hours inside the apparatus electromagnetic traps. Among the several samples suitable as positron-to-positronium converters studied at L-NESS, silica Aerogel 85 and MCM-41 represent the best candidates. They are mesoporous silicas characterized by a very low density and a very high porosity. Aerogel 85 in particular has a density of 85 mg cm⁻³ and a porosity of 96%, and was developed by NASA to be applied in the Stardust Project as a particle collector in space. When a silica aerogel sample, or an MCM-41 sample, is used as the target of a positron beam, positronium atoms are formed inside the interconnected pores and then escape into vacuum from the sample surface. The common geometry used for the production of positronium atoms is the backscattering geometry: positronium atoms are emitted from the same surface where the positrons are implanted. Since for many experiments that require a high positronium yield in vacuum (like the AEgIS experiment, but in general all the experiments for the study of positronium properties) positronium formation in transmission geometry is very promising, the Aerogel 85 properties have been used to simulate a cold positronium yield in vacuum using micrometric silica thin films in transmission geometry. A Monte Carlo simulation method for the study of positronium diffusion inside silica aerogel has been developed for this purpose. According to the simulation results, the use of transmission geometry enhances the cold positronium yield in vacuum compared to the reflection geometry, and the results will be used as indications for the design of thin film silica positron-to-positronium converters.

In conclusion, besides acknowledging that the AEgIS experiment is ready to produce antihydrogen in order to measure antigravity, it has been shown that positron annihilation techniques are suitable for the analysis of the composition, porosities and voids of contemporary thin-film-based devices and allow disclosing new sets of information on them. Moreover, for fundamental antimatter physics experiments, micrometric mesoporous silica membranes in transmission geometry will remarkably increase the efficiency in the production of cold positronium.
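The Monte Carlo study of positronium diffusion can be caricatured with a one-dimensional random walk in a thin film: particles formed at a given implantation depth wander until they leave either the entrance surface (reflection geometry) or the far surface (transmission geometry). This toy model, with made-up thickness, step size and implantation depth, only illustrates why implantation close to the far surface favours emission in transmission; the actual simulation is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(3)

def ps_escape_fractions(thickness_um, implant_depth_um, n=20000,
                        step_um=0.05, max_steps=4000):
    """Toy 1D random walk of positronium formed at a given implantation
    depth; returns the fractions escaping from the entrance surface
    (reflection geometry) and from the far surface (transmission)."""
    z = np.full(n, implant_depth_um)
    alive = np.ones(n, dtype=bool)
    front = back = 0
    for _ in range(max_steps):
        z[alive] += rng.choice([-step_um, step_um], size=alive.sum())
        esc_front = alive & (z <= 0.0)
        esc_back = alive & (z >= thickness_um)
        front += esc_front.sum()
        back += esc_back.sum()
        alive &= ~(esc_front | esc_back)
        if not alive.any():
            break
    return front / n, back / n

# Positronium formed near the far side of a 1 um film favours
# emission in transmission.
refl, trans = ps_escape_fractions(1.0, implant_depth_um=0.8)
print(f"reflection yield: {refl:.2f}, transmission yield: {trans:.2f}")
```

For a symmetric walk the exit probabilities follow the classic gambler's-ruin result, so implantation at 80% of the film depth gives roughly an 80% transmission yield in this idealization.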
The topological insulating phase has been recently theorized and experimentally observed in three-dimensional systems. A metallic surface state with a Dirac cone dispersion like graphene appears within the bulk energy gap but, unlike graphene, spins and momenta are locked, resembling a helical spin structure. The possibility to induce a spin-polarized surface current, together with the predicted protection against spin-flip backscattering events, promotes the application of topological systems in future spintronic devices. Angle-resolved photoemission spectroscopy (ARPES) is widely employed in studying surface properties of topological insulating systems. The understanding of the physical properties of topological electrons under an optical excitation is fundamental for future applications, and it can be achieved by exploiting the classical pump and probe technique. Time-resolved ARPES (TR-ARPES) provides a direct snapshot of the temporal evolution of the band structure of the system upon an intense optical perturbation. Only a few research groups in the world can perform TR-ARPES experiments, thus the dynamical properties of topological electrons are still partially hidden. In this thesis an innovative TR-ARPES setup is described and novel experimental results on Bi2Te3 and Bi2Se3 topological insulators are presented.

Commonly, TR-ARPES setups are based on a high-repetition-rate Ti:sapphire laser followed by fourth-harmonic generation of its fundamental frequency (typically 1.55 eV). Our innovative setup is based on a high-repetition-rate Yb-laser source. By means of a cascade of nonlinear processes, 1.85-eV pump and 6.05-eV probe pulses are generated. The heart of the optical setup is a non-collinear optical parametric amplifier that allows tuning the output wavelength to 680 nm (1.85 eV) with an associated bandwidth ensuring a pulse duration shorter than 30 fs after the prism compressor. By a sum-frequency generation optical module, 205-nm (6.05 eV) pulses are obtained. Emitted electrons are detected by means of a time-of-flight electron analyzer. The measured state-of-the-art 50-meV energy resolution, together with the excellent 70-fs temporal resolution, places our novel setup at the forefront of this technique.

Bi2Te3 and Bi2Se3 three-dimensional topological insulating samples were grown by Prof. X. Zhou's research group at the National Lab for Superconductivity, Institute of Physics (Beijing, China), and they present a single Dirac cone in the center of the Brillouin zone. We demonstrate the ability to modify the population of the topological Dirac cone of Bi2Te3 on a time scale of hundreds of femtoseconds. Our Bi2Te3 sample is p-doped, thus the Fermi level crosses the topological surface state. The pump beam promotes valence electrons into empty bulk bands. Hot non-thermal electrons redistribute in the whole Brillouin zone as a consequence of relaxation processes (mediated by electron-electron and electron-phonon scattering) and fill the empty states of the topological Dirac state. The population of the topological state presents a 70-fs delayed response time. This first experimental evidence of the delayed population time of the Dirac cone suggests that the topological surface state cannot be directly optically populated by the employed pump beam, but only as a consequence of the decay of electrons from the bulk bands. Hence, bulk bands behave as charge reservoirs for the topological state, feeding the latter on a time scale of several picoseconds.

The Bi2Se3 sample is intrinsically n-doped, thus the Fermi level crosses the bulk conduction band and the topological Dirac cone is completely filled. A detailed study of pump-induced relaxation processes allows us to extract the presence of a second empty Dirac-cone-like surface state approximately 1.8 eV above the first Dirac cone. The weak electron-phonon coupling of topological electrons leads to longer relaxation times in comparison to electrons lying in the close bulk band, allowing us to disentangle the topological state contribution from the bulk band one. The existence of the second topological surface state has already been proved by other research groups by means of the two-photon photoemission technique. Theoretical calculations and the observed helical spin structure suggest that the second empty Dirac cone shares the same physical origin as the first topological surface state at the Fermi edge. We show novel dichroic TR-ARPES measurements of the second Dirac cone of Bi2Se3 in which we are able to selectively populate one of the two opposite spin-polarized branches of the second topological surface state with circularly polarized pump light and follow the temporal dynamics of the dichroic signal, i.e. of the photo-induced spin-polarized electronic population. We detect a strong k-dependence of both the temporal decay and the response time of the dichroic signal. Thanks to an efficient bulk-to-surface state optical transition, only empty states of the second Dirac cone away from the center of the probed k-region can be populated. Then, the spin-polarized electronic population decays along the spin-polarized branch of the second topological cone by intraband scattering events. Thus, a dichroic signal appears in the center of the second Dirac cone with a certain delay (approximately 30 fs) as a result of the intraband relaxation process. This is, to our knowledge, the first experimental evidence of the ultrafast flow of a spin-polarized electronic population in the second Dirac cone of Bi2Se3. We prove the capability to selectively excite one of the two spin-polarized branches of the second topological state with circularly polarized pump light, i.e. to photo-induce an ultrafast spin-polarized surface current. Moreover, topological electrons do not simply recombine with empty unpolarized bulk states but flow along the Dirac cone maintaining their spin-polarization.

We observed an unexpected dichroic signal in correspondence with the image potential state of Bi2Se3. The image potential state can be populated with the linearly polarized 6.05-eV beam and photoemitted with the circularly polarized 1.85-eV beam. The revealed dichroic signal does not change sign in the whole probed k-region, contrary to what is observed for the second surface state. This novel finding seems to indicate a net spin order of the image potential state. The image potential state wave-function is localized outside the sample but it also presents a decaying tail into the bulk. Thus, an interference effect between the image potential state wave-function and the spin-polarized topological surface wave-function is possible. The latter could lead the image potential state to acquire a net spin order.

In conclusion, the excellent working parameters of our innovative TR-ARPES setup allowed us to obtain novel intriguing experimental results on the ultrafast electronic dynamics in three-dimensional topological insulating systems.
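The delayed filling of the Dirac cone described above is the kind of feature one extracts by fitting a delayed rise-and-decay model to the pump-probe transient. The sketch below fits such a model to synthetic data with a built-in 70 fs onset; the model shape, time constants and noise level are illustrative assumptions, not the thesis analysis (which must also account for the instrument response).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def delayed_rise(t, amp, t0, tau_rise, tau_dec):
    """Population that switches on with a delay t0 (filling from the bulk
    bands), rises with tau_rise and then decays with tau_dec."""
    s = np.clip(t - t0, 0.0, None)
    return amp * (1.0 - np.exp(-s / tau_rise)) * np.exp(-s / tau_dec)

# Synthetic TR-ARPES intensity transient (times in fs), 70 fs delayed onset.
t = np.linspace(-200, 2000, 300)
data = delayed_rise(t, 1.0, 70.0, 150.0, 3000.0) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(delayed_rise, t, data, p0=[1.0, 50.0, 100.0, 2000.0])
print(f"fitted onset delay: {popt[1]:.0f} fs")
```

With a reasonable initial guess the fit recovers the onset delay to within a few femtoseconds, which is how a 70-fs delayed response can be distinguished from an instantaneous optical population.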
Conservation science is a multi-disciplinary field, which combines a number of scientific methods for the material characterization of works of art and of their degradation products, for the definition of proper conservation and preservation protocols, and for the development of new restoration materials. In particular, the characterization of the materials used in an art object is fundamental not only for defining proper conservation protocols but also for the in-depth study of the technology, the trade routes and the style of an epoch. Many advanced analytical tools have been recently developed, with the main aim of providing a chemical description of cultural heritage materials with a non- or micro-invasive approach. This is particularly challenging, due to the fact that artwork materials are complex mixtures, intrinsically heterogeneous, composed of a wide range of compounds, from organic to inorganic ones, and spanning a wide range of size scales, which goes from the chemical identification of compounds to the mapping of trace elements, alteration or restored phases. This intrinsic complexity calls for multi-analytical approaches, to overcome the various limitations of individual spectroscopic methods.

This thesis reports the results of the application of a set of advanced optical and vibrational spectroscopy techniques to the study of pigment materials. In particular, the attention has been focused on two different classes of materials: modern pigments and traditional dyestuffs. As modern pigments, cadmium based pigments have been studied in depth; anthraquinone based colorants have been considered as traditional dyestuffs.

Cadmium based pigments consist in cadmium-zinc sulphide (Zn1-xCdxS with 0 < x < 1) or cadmium sulphoselenide (CdSxSe1-x with 0 < x < 1), with a high substitution of Zn in the light yellow shades and a high substitution of Se in the darker shades. They are IIb-VIa semiconductors, with a direct radiative emission in the visible range and two trap state radiative emissions in the near IR. The in-depth study of the photoluminescence (PL) emission of a group of commercially available cadmium based pigments gives new insights into the photophysical properties of these materials. The radiative emissions of the materials exhibited a strong dependence on the excitation power, as can be seen in Figure 1, which shows the emission from a cadmium sample for two excitation regimes. In fact, a non-linear behaviour was demonstrated by PL experiments at different irradiance, both in CW and with short pulses. Measurements have shown that the ratio between the band edge and the trap state emission intensity depends on the excitation intensity. These findings confirm that the mechanism for carrier recombination in cadmium pigments is highly influenced by electron trapping in deep trap states.

As regards traditional dyestuffs, a group of red anthraquinone based lake pigments and dyed textiles were studied with both vibrational and electronic spectroscopy. Lake pigments and dyed textiles are produced through the formation of an insoluble dye-metal ion (called mordant) complex. Part of the research regarded the development and testing of a new Surface Enhanced Raman Spectroscopy (SERS) based technique. The approach combines the high sensitivity of the SERS readout with the high resolution of the UV-laser desorption of the materials in an area a few microns wide. The ablation step is described in Figure 2: (i) a high power UV laser pulse is delivered on the area of analysis, (ii) the material enters a plasma phase, (iii) the plasma plume expands upward and outward, (iv) the plasma solidifies on the SERS active substrate. After the ablation is performed in the selected area, the SERS spectra are acquired by focusing a CW 488 nm laser directly on the SERS active substrate. The technique was successfully applied to the study of a series of painting cross sections, widely expanding the applicability of SERS based techniques to the study of single layers or single lake particles in cross sections or heterogeneous materials.

The study of the same class of materials was then carried on with the application of steady state and time-resolved luminescence analyses on a series of anthraquinone dyed textiles, with the aim of discriminating between the various types of dye-metal ion complexes in a non-destructive fashion. In fact, the majority of the standard analytical techniques for dye analysis mainly focus on the detection of the organic part of the material, disregarding the metal ion used as mordant.

In conclusion, a number of optical and vibrational spectroscopy techniques were merged together in a multi-analytical approach finalized to the analysis of the palette of a watercolour painted by Vincent van Gogh. In particular, a novel data analysis protocol for the classification of multispectral data cubes was combined with luminescence imaging techniques and with point-like Raman spectroscopy.

1. Photoluminescence spectra of a cadmium pigment sample following excitation with a 100 Hz Q-switched and a CW laser source, acquired with the spectrometer in continuous modality (a). Images of the PL emission excited in the pulsed (spot 1) and the CW (spot 2) regimes from the same sample (b).
2. Schematic representation of the ablation step (left). Close-up of the two ablation optical geometries: focussed beam (top), de-focussed beam (bottom).
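The observed growth of the band-edge/trap intensity ratio with excitation intensity is consistent with a finite density of deep traps that saturates at high pumping. A toy steady-state rate model makes the trend explicit; all rate constants below are invented for illustration and are not fitted to the measurements.

```python
import numpy as np

# Toy steady-state picture: carriers generated at rate G recombine either
# across the band edge (~ B*n^2) or through a finite density Nt of deep
# traps, which saturate at high excitation. Illustrative constants only.
B, C, Nt, tau = 1e-10, 1e-8, 1e16, 1e-6

def emission_ratio(G):
    """Band-edge / trap emission ratio at generation rate G."""
    def recomb(n):
        nt = C * n * Nt * tau / (1.0 + C * n * tau)   # trapped carriers
        return B * n**2 + C * n * (Nt - nt)
    lo, hi = 1.0, 1e22                                # bracket the root
    for _ in range(200):                              # geometric bisection
        n = np.sqrt(lo * hi)
        if recomb(n) > G:
            hi = n
        else:
            lo = n
    nt = C * n * Nt * tau / (1.0 + C * n * tau)
    return (B * n**2) / (C * n * (Nt - nt))

rates = np.logspace(18, 24, 7)
ratios = [emission_ratio(G) for G in rates]
# The band-edge / trap ratio grows with excitation, as in the experiments.
print(["%.2g" % r for r in ratios])
```

Because the trap channel is bounded by the finite trap density while band-edge recombination grows quadratically with the carrier density, the ratio rises monotonically with pumping, which is the qualitative behaviour reported for the cadmium pigments.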
PHYSICS
to revolutionize radically the way we look at information science, offering unprecedented levels of computation efficiency and communication security. In the past two decades, the effort of the scientific community to advance this research field has been enormous, from both the theoretical and the experimental point of view. However, despite a number of remarkable achievements in creating and manipulating individual quantum systems, quantum technology is still far from real-world practical applications, due to the high complexity of the experiments, which makes them hardly scalable beyond the proof-of-principle demonstration level. A relatively new and effective approach to this huge technological challenge is that of integrated quantum photonics, where small, monolithic optical circuits are used to manipulate quantum states of light. The intrinsic mechanical stability of these devices allows the phase of light in each spatial mode to be controlled to an extent that is impossible to reach with bulk optical elements, allowing for the fabrication of complex interferometric structures. The spatial overlap of different modes can be performed straightforwardly by means of directional couplers. The field confinement in few-micrometer-sized waveguides improves the performance of non-linear interactions, for example for the realization of efficient integrated photon sources or frequency converters. Lastly, integrated optics, being a mature technology developed for classical optical communications, can benefit from a well-established manufacturing industry and solid know-how in device design.
Among the various fabrication techniques, femtosecond laser micromachining has been demonstrated in recent years to be a very powerful technology for the design and development of innovative optical circuits for quantum applications. This technique presents unique features and several advantages over conventional lithographic processes. It is a mask-less, single-step fabrication technique that does not need any special auxiliary facility, such as a clean room. It can be performed with a relatively simple fabrication setup, which results in a significant cost reduction and a speed-up in the prototyping of new devices. Since waveguide fabrication is based on non-linear multiphoton absorption of femtosecond laser pulses in a transparent medium, it is, to a first approximation, independent of the specific chemical composition of the material. Consequently, femtosecond laser micromachining is adequate for processing a large number of different transparent substrates, from glasses and crystals to polymers, with the same laser system and fabrication setup. In addition, this technique shows an intrinsic three-dimensional fabrication capability, allowing for the realization of optical circuits with complex geometries that are not obtainable with conventional lithographic processes.
In this thesis work, femtosecond laser micromachining is used for the fabrication of several integrated optical devices that implement a number of new functionalities, with important applications in the field of integrated quantum photonics, in particular for the manipulation, measurement and transmission of qubits encoded in the polarization degree of freedom of quantum light. In addition, three experiments regarding the simulation, in dielectric light-guiding structures, of the dynamics of complex quantum systems are presented.
For the first time, we realized a waveguide-based integrated device that behaves like an optical wave-plate. Regarding, in particular, the manipulation of polarization-encoded qubits, we fabricated an integrated device that performs the polarization state tomography of light simultaneously on two separated spatial modes. We validated its functioning by performing the quantum state tomography of single photons and of polarization-entangled photon pairs. The high measured fidelities of the reconstructed states (> 97%) are comparable to those obtainable with standard bulk optical elements and testify to the suitability of our novel integrated components for the manipulation of polarization-encoded qubits.
A polarization-insensitive integrated optical circuit lies at the heart of the functioning of the architecture we propose for short-distance quantum communications. In particular, we have shown in this work the design of a handheld device, based on micro-optics elements and with an extremely reduced footprint (a few tens of mm³), capable of preparing and sending over short distances faint laser pulses in four possible polarization states. An exhaustive characterization of all the single device components demonstrated that the sender module we propose is in principle suitable for performing quantum key distribution protocols at high repetition rates (100 MHz), provided that the residual spectral distinguishability of the output photons is compensated. The presented results represent an important step forward in the realization of a customer-oriented quantum device with real-world applications.
Finally, we demonstrated how waveguide arrays, finely engineered both in terms of circuit geometry and waveguiding properties, can be used as model systems with controllable parameters to simulate, with only classical resources, the evolution of complex quantum dynamics that are inaccessible in real systems. By adopting this approach, we studied the effect of particle interactions on their motion in an ordered lattice, under the action of a static force, and we observed that under suitable conditions the exotic phenomenon of fractional Bloch Oscillations can take place. Furthermore, we studied how the presence of a strong AC monochromatic field can influence the electronic transport in a graphene-like bidimensional lattice and possibly induce dynamic localization. Finally, we constructed a specially engineered semi-infinite lattice that is capable of supporting a peculiar surface state, with its energy embedded in the continuous band of scattered lattice states, showing an algebraic rather than exponential localization at the lattice edge.
1. Real (left column) and imaginary (right column) parts of the density matrices of the Bell ψ⁻ state reconstructed with our integrated device (top row) and with standard bulk optical waveplates (bottom row). The calculated fidelity between the two is > 97%.
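The > 97% figures quoted for the reconstructed states are fidelities between a measured density matrix and the ideal target state. As an illustrative sketch only (not the author's code), the pure-target special case F = ⟨ψ|ρ|ψ⟩ can be written in a few lines; the 2% depolarizing-noise level below is an invented number used just to exercise the formula:

```python
import numpy as np

def fidelity_pure(psi, rho):
    """Fidelity of a measured density matrix rho against a pure target |psi>:
    F = <psi|rho|psi>, the pure-state special case of the Uhlmann fidelity."""
    return float(np.real(np.conj(psi) @ rho @ psi))

# Bell singlet |psi-> = (|HV> - |VH>)/sqrt(2) in the H/V basis
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_ideal = np.outer(psi_minus, psi_minus.conj())

# a slightly depolarized "reconstructed" state (illustrative noise level)
p = 0.02
rho_meas = (1 - p) * rho_ideal + p * np.eye(4) / 4

print(round(fidelity_pure(psi_minus, rho_meas), 3))  # 0.985
```

With 2% depolarization the fidelity is 0.98 + 0.02/4 = 0.985, illustrating how fidelities in the high-90% range quantify closeness to the ideal Bell state.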
Time domain diffuse optical imaging
This dissertation has been mainly accomplished in the Physics Department of Politecnico di Milano and in collaboration with foreign institutions such as the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA, France) and the Physikalisch-Technische Bundesanstalt (PTB, Germany).
The main framework of this work resides in the interaction of light with diffusive media. In the last decades light has become attractive as a non-invasive tool to investigate diffusive media, and different optical methods based on Continuous Wave (CW), Frequency-Domain (FD) or Time-Resolved (TR) approaches have been developed. Although CW techniques are more common, easy to implement and already commercialized, TR spectroscopic techniques are important alternatives for non-standard, cutting-edge research. The most important feature of the TR approach is that the depth investigated by photons is encoded in time. Indeed, the early-arriving photons are those that travelled only in the superficial layer of the medium, while photons arriving later have visited deeper structures.
Recently, it has been demonstrated that the use of a small distance (a few mm, or even null) between the injection and collection points improves the light penetration within the probed medium. This feature provides a better contrast, spatial resolution and signal intensity as compared to measurements at a large interfiber distance in the case of reflectance geometry. However, the huge increase in the peak of early photons (i.e. photons directly reflected or scarcely diffused from the surface) causes the saturation of the dynamic range (DR) of common single-photon detectors, thus preventing the use of the null-distance approach. For this reason, the technical implementation of a null-distance deep-tissue scheme is very challenging and had never been tried before outside our research group. The null-distance approach has become feasible for the first time thanks to the development, by the Dipartimento di Elettronica, Informazione e Bioingegneria of Politecnico di Milano, of Single-Photon Avalanche Diode (SPAD) modules that can be enabled in fast-gated mode. Indeed, they can switch from the OFF to the ON state in less than 200 ps, thus rejecting the peak of early photons. In addition, the application of the gated technique permits the DR of the measurements to be enhanced by up to 7 orders of magnitude.
My PhD activity was completely devoted to the investigation of the null-distance approach and its experimental use in different application fields such as functional Near-Infrared Spectroscopy (fNIRS) for brain imaging, non-contact scanning imaging and tomographic reconstructions.
First of all, we aimed to demonstrate that the use of the null-distance approach, coupled with high-DR fast-gated acquisitions, can improve fNIRS measurements and permits the detection of brain activation with a higher signal-to-noise ratio and improved spatial resolution. Different technological solutions were considered to solve problems connected to high-DR acquisitions (e.g. the removal of any optical reflection) and a dedicated setup was built. Finger-tapping exercises on healthy subjects were performed and we clearly demonstrated the improvement given by the null-distance approach in the detection of brain activation. The technological development of compact detectors and microelectronic laser sources (e.g. VCSEL) will improve the proposed setup, decreasing cost and going towards a miniaturization of the probe.
A second field of application explored during my PhD is non-contact scanning imaging. A non-contact approach is needed in cases where the contact between optode and tissue can cause problems. A non-contact TR scanning instrument based on null-distance was built. The proposed setup was characterized in terms of performance using two protocols for the assessment of time-domain diffuse imagers (the BIP and nEUROPt protocols), agreed upon by many institutions. After the objective characterization of the instrument, several in-vivo measurements on healthy volunteers were performed. In most of them, the expected trend of oxy- and deoxy-haemoglobin was observed, meaning that the proposed instrument is able to detect hemodynamic changes.
During my PhD I also investigated the use of a null-distance approach for tomographic reconstructions. In this case, the high-DR acquisitions were analysed using the Mellin-Laplace transform, which exploits the different arrival times of photons to improve the reconstruction. We demonstrated on phantoms that the new approach permits an increase of the spatial resolution and the depth sensitivity of both 2D and 3D reconstructed maps (see Fig. 1). Then we moved toward clinical applications, such as the monitoring of the vessel permeability in flap surgery. We built an instrument based on multiple source-detector distances and we performed pre-clinical tests on rats, investigating the possibility to monitor the vessel permeability in vivo. The null-distance approach, coupled with high-DR fast-gated measurements, enlightens the potentialities of this technique, but also allowed us to understand the bottleneck of the technology. The increase in the DR of the measurement is indeed limited by a source of noise called the memory effect. In order to better understand this newly discovered background contribution, a comprehensive characterization was done to identify its physical origin. A possible solution for the reduction of the memory effect by 4 decades was then proposed.
During my PhD I finally compared different acquisition electronics for applications like brain and muscle functional imaging. We proved the equivalence between a classical TCSPC board and a fast-gated counter in terms of achievable contrast and we demonstrated its suitability for various applications.
In conclusion, the improvement given by the null-distance approach in different applications was demonstrated. In the future, the development of compact and low-cost devices can lead to the realisation of small and portable instruments, exploiting the null-distance approach in different application fields.
1. Reconstructed (first row) and simulated (second row) maps of two separate inclusions at 15 mm depth. The black signs represent their real position.
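The benefit of fast gating can be illustrated with a toy calculation (a minimal sketch: the pulse shapes, count levels and detector limit below are invented numbers, not measured data). Keeping the detector OFF until after the early peak means the attenuation is set by the weak diffuse tail rather than by the peak, which is what buys the extra orders of magnitude of dynamic range:

```python
import numpy as np

# toy time-resolved reflectance: sharp early peak plus slow diffuse tail
t = np.linspace(0, 4e-9, 4000)                      # time axis, ~1 ps steps
early = 1e8 * np.exp(-((t - 50e-12) / 20e-12) ** 2)  # specular/early photons
tail = 1e4 * np.exp(-t / 600e-12)                    # late, deep photons
signal = early + tail

detector_max = 1e6                                   # counts before saturation

# ungated: the early peak forces an attenuation that buries the tail
att_ungated = detector_max / signal.max()
# fast-gated: the detector is OFF until t_gate, so only the tail sets the scale
t_gate = 200e-12
gated = np.where(t >= t_gate, signal, 0.0)
att_gated = detector_max / gated.max()

print(f"extra dynamic range from gating: {att_gated / att_ungated:.0f}x")
```

Even this crude model gains roughly four orders of magnitude; the real instrument, combining gating with hardware attenuation control, reaches up to seven.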
Triplet-Triplet Annihilation-Induced
Herein we will present the results of our studies on photoactive layers of solution-processed organic composites that could potentially be utilized as solid-state photon up-converting layers. Different methodologies for increasing the luminescence intensity of the triplet-fusion-induced photon up-conversion process in solid-state layers of organic thin films will be discussed. In this work, low-energy photon up-conversion on the basis of charge transfer (CT-UC) and of energy transfer (ET-UC) in thin films will be probed as two possible mechanisms of low-energy photon up-conversion via triplet-triplet annihilation (TTA-UC).
Regarding the ET-UC process, a comprehensive temperature-dependent spectroscopic study will be carried out on a binary composite consisting of the organometallic complex (2,3,7,8,12,13,17,18-octaethylporphyrin) platinum(II) (PtOEP) mixed with the blue emitter 9,10-diphenylanthracene (DPA). Time-integrated and time-gated (on the ns-µs time scale) photoluminescence (PL) measurements will be employed for probing the generation of the photon up-converted DPA delayed luminescence via TTA-UC, after laser photoexcitation of DPA:PtOEP at 532 nm. In the light of these data, as well as atomic force microscopy (AFM) imaging, it will be demonstrated that the up-converted blue emission is significantly enhanced if the binary model system is dispersed in the photophysically inert matrix of poly(styrene) (PS). The ternary structure PS:DPA:PtOEP will experimentally prove that the presence of PS tunes DPA and PtOEP aggregation and consequently gives rise to an increased up-converted luminescence emission.
Concerning the CT-UC process, the same photophysical characterisation will be accomplished on blend films comprising PtOEP as the sensitizer and either PF2/6 or PF8 as the blue emitter. The experimental observations from the comparative room-temperature PL measurements will confirm that the up-converted TTA-induced blue emission intensity is enhanced in the PF8:PtOEP thin film with respect to PF2/6:PtOEP. Moreover, the effect of β-phase formation in the film of PF8:PtOEP will be addressed. The time-integrated and time-gated PL measurements will rationalize that the presence of the β-phase in the up-converting composites affects the intensity and the lifetime of the up-converted blue emission.
The temperature-dependent PL measurements on the aforementioned up-converting composites will help us to gain insight about the influence of temperature on the CT-UC and ET-UC processes. According to the time-integrated PL studies in the temperature range between 100 K and 290 K, it will be verified that lowering the temperature enhances the TTA-induced up-converted blue emission intensity in the up-converting composites working with either the CT-UC or the ET-UC process.
At present, our results enable a discussion on the microscopic processes of energy migration that dictate the efficiency of TTA-induced low-energy photon up-conversion in solid-state composites. This methodology paves the way toward the sensitization of photoactive devices such as solar cells, light-sensing photodiodes, and photodetecting transistors to photons of low energies.
The work introduced here was carried out in the laboratories of the Center for Nano Science and Technology (Istituto Italiano di Tecnologia) in Milan and of the Department of Physics of the University of Oxford.
Photovoltaics (PV) is considered one of the most promising renewable energy technologies that could help to solve important environmental and geopolitical problems arising from the current consumption of fossil fuels. Because of the remarkable potential of a technology based on an energy source available worldwide, in the last decades research and technological innovation have focused on the development of solar cells able to achieve high power conversion efficiencies at low production costs. Among these, hybrid devices, such as dye-sensitized, polymer/metal oxide and perovskite-based solar cells, have been the subject of intense research. However, since the PV market needs efficient and stable solar cells that can be prepared with cheap and simple processing techniques, as long as the focus of the research is on just one of these aspects, the route towards the commercialization of the technology will be long. In particular, the long-term stability of third-generation solar cells has drawn little attention, even though it is an important issue that will determine whether they can represent a market opportunity.
The research activity presented in the PhD thesis deals with the design and development of innovative hybrid solar cell architectures, i.e. devices incorporating both organic and inorganic materials. In order to improve the devices as a whole, we concentrated our efforts mainly on the improvement of long-term stability, while keeping an eye on efficiency and processability. Dealing with hybrid active materials, vulnerability towards moisture is critical. Therefore we present technological solutions that have been conceived to improve the stability during long-term operation or during the fabrication process of dye-sensitized solar cells (DSSC) and perovskite solar cells (PSC).
We approached the problem of water infiltration in a DSSC by engineering the device architecture to fabricate a more robust cell, independently of the quality of the encapsulation. We introduced for the first time the concept of an integrated getter in an optoelectronic device, and we implemented it by introducing the getters in the form of a dispersion of nanozeolites in the mesoporous TiO2 photoanode of the solar cell. Thanks to the capability of the nanozeolites to absorb water, we demonstrated an improved stability of the devices after 1000 h of outdoor exposure, without affecting either the performance of the device or the standard fabrication process.
We showed that the concept of the integrated getter can be successfully implemented in a PSC as well. Since hybrid organometal halide perovskites are very sensitive to moisture during the crystallization process, they are usually processed in a controlled dry atmosphere. To improve the robustness of PSCs, we redesigned the architecture of a standard cell by replacing the mesoporous Al2O3 scaffold layer with a scaffold composed of nanozeolites. We demonstrated that standard devices show lower photocurrent when they are fabricated in a humid environment, while the performances of devices with the nanozeolite scaffold are independent of the processing environment. Therefore, we proved that the zeolite scaffold can protect the device during the fabrication process.
Another common issue of perovskite solar cells is the initial drop in efficiency that is observed during long-term stability studies. We first identified the origin of this efficiency drop in the occurrence of electrical shunt pathways, which become increasingly more important as the device is operated under standard working conditions. We developed an equivalent circuit model to quantify the loss of current that results from the undesirable leakage paths arising as a consequence of the migration of the metal through the hole-transporting material (HTM). Then we proposed a new device architecture which is able to address this problem (Fig. 1a,b). By adding a thin Al2O3 mesoporous layer between the perovskite and the HTM, we prepared devices with nearly no degradation in the first 350 hours of operation (Fig. 1d). Moreover, we improved the efficiency of such devices by reducing the high series resistances related to the thickness of the HTM (Fig. 1c).
Finally, we focussed on the improvement of hybrid polymer/metal oxide solar cells, a technology with great potential thanks to the non-toxicity of the materials and the low cost of the manufacturing process. Important improvements have to be made to increase the power conversion efficiency to levels appealing to the PV market. The understanding of the effect of local morphology on charge generation dynamics at hybrid polymer/metal oxide interfaces represents a matter of primary importance on the way to enhancing device performance. We found out that it is possible to tune the interfacial polymer morphology by properly treating its surface. In particular, we measured a higher charge injection efficiency into the oxide from amorphous polymer phases compared to crystalline phases. Nevertheless, we found that the energy mismatch existing between the two phases acts as a barrier to charge collection, so that devices with a large amount of crystalline phases perform better despite the lower injection efficiency into the metal oxide. This has been proven by growing a monolayer of 4-mercaptopyridine on the metal oxide surface. This interlayer promotes a better covering of the TiO2 surface and a π-stacking of the polymer already at the interface, allowing for an enhancement of the device efficiency by a factor of three.
1. SEM pictures of (a) standard PSC architecture and (b) new architecture with a protective active layer. (c) J-V curve of devices with (black spots) and without (red spots) buffer layer. (d) Stability test of the previous devices.
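The role of the shunt pathways in the equivalent-circuit analysis can be sketched with a standard single-diode model (a minimal illustration, not the fitted model from the thesis: all parameter values below are invented). The V/Rsh term plays the role of the leakage path opened by metal migration through the HTM:

```python
import math

# single-diode equivalent circuit of a solar cell:
#   J = Jph - J0*(exp(V/(n*Vt)) - 1) - V/Rsh
# where the V/Rsh term models shunt leakage (illustrative parameters only)
def current_density(v, jph=22e-3, j0=1e-12, n=1.5, vt=0.02585, rsh=1e4):
    """Output current density (A/cm^2) at voltage v for given shunt Rsh."""
    return jph - j0 * (math.exp(v / (n * vt)) - 1) - v / rsh

for rsh in (1e6, 1e3):  # healthy device vs shunted device
    print(rsh, round(current_density(0.5, rsh=rsh) * 1e3, 2), "mA/cm2")
```

Lowering Rsh by three orders of magnitude visibly eats into the current at the operating point, which is the mechanism the equivalent-circuit model quantifies.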
DEVELOPMENT OF GRAPHENE-BASED
The microelectronic scenario is following a constant trend toward miniaturization, beneficial for increased operation speed and lower fabrication costs. However, conventional silicon-based electronics is reaching a physical scaling limit, with short-channel effects becoming detrimental to device operation. In this perspective, the scientific community has intensified the research on novel materials, among which graphene, a two-dimensional sheet of carbon atoms arranged in a hexagonal lattice, emerged as one of the most promising candidates. The high mobility (almost equal between holes and electrons), carrier density and saturation velocity arising from its peculiar band structure make graphene suitable for high-speed electronics. However, the lack of a bandgap prevents graphene transistors (GFETs) from being turned off, with consequently low ION/IOFF and high static power consumption, hindering the development of graphene-based devices for logic applications. In the field of high-frequency analog electronics, however, the complete switching-off of GFETs is not required, thus making graphene an attractive option. This PhD work was devoted to the development of graphene devices for high-frequency applications.
Since almost all real circuits comprise multiple stages, GFETs exhibiting intrinsic gain Av = gm·rd > 1 are needed to provide voltage gain to the single stage, preserving signal integrity in transmission. To achieve over-unity gain, a low output resistance rd and a high transconductance gm are needed. With a high-k, ultra-thin native aluminum oxide formed during e-beam evaporation of Al, we managed to fabricate GFETs with high gm and Av. Connecting these GFETs in inverter configuration, we demonstrated the first graphene voltage amplifiers exhibiting significant voltage gain in ambient conditions. Graphene obtained by CVD growth has been implemented in the fabrication process, providing more homogeneous performances and scalability, and CVD-graphene-based inverters exhibiting DC and AC voltage gain above 20 dB were demonstrated. These GFET devices could be cascaded to perform multiple logic operations. Demonstration of device cascading led to the fabrication of the first graphene integrated ring oscillators (ROs), Fig. 1a,b. ROs are composed of an odd number of inverters cascaded in a loop, in which noise components at a certain frequency can propagate and get amplified, thereby inducing oscillation. ROs require inverters with over-unity voltage gain, in/out signal matching, large voltage swing, and high current drive. Knowing the effect of the device parameters on performance, we tried to optimize the design to reduce the delay per stage. Different metals have been tested with two-probe and four-probe measurements to reduce the contact resistance, showing that pure Au contacts provide the lowest value of 200 Ω·µm. The best devices exhibit a highest oscillation frequency of 4.3 GHz with 0.9 µm gate length, surpassing the speed of silicon ROs at the same gate length, Fig. 1c. The fabricated ROs have been tested as analog mixers and amplitude modulators, operating in the gigahertz frequency range, thereby demonstrating the potential of this technology for integrated circuit applications.
The second part of my PhD work dealt with the study of the GFET high-frequency response, aiming to further improve the performances and to extend their frequency capability. A FET can be described as a two-port device, with the gate as input and the drain as output port. The frequency response can be probed through the application of a small power-wave signal superposed to the DC bias at the highest gain point at both ports. A VNA records the reflection and transmission coefficients of the power waves, called S-parameters, which are related to the impedances of the circuit. To extract useful information from the measurements, I made a small-signal model which contains all the resistances, capacitances and inductances of the circuit, Fig. 2a. With ADS software, the values of these parameters can be optimized relative to the measurements, giving access to the intrinsic device parameters. From S-parameter measurements, the maximum oscillation frequency fmax, the cutoff frequency fT and Av can also be extracted. fT gives an estimation of the transit speed of the carriers in the channel, while fmax describes the maximum frequency at which the transistor is capable of amplifying power. Typically in GFETs fT is higher than fmax, in contrast with conventional HEMTs. However, in our devices we found the opposite trend, which can be explained by the large rd (good saturation) shown by the GFETs thanks to the ultra-thin oxide, which leads to a strong gate control producing a fast carrier depletion. The GFETs exhibit a highest fT of about 10 GHz, fmax of 21 GHz, Fig. 2b, and Av > 30 dB at 10 MHz for devices with 1 µm gate length. Moreover, the fmax/fT ratio is above 3, the highest value ever reported for GFETs. Compared with the state of the art of graphene technology, these devices exhibit good Av and fmax, while fT is still limited by the low intrinsic mobility, mainly due to the oxide interfacial traps that degrade the transport properties by increasing the scattering. To improve the performances we should improve the quality of the oxide and reduce the contact resistance. We foresee that it would be difficult to outperform III-V technology in the short term. Anyway, graphene has shown speed performances that are unbeaten by organic electronics on every substrate, so we can expect that these performances can also be obtained on flexible and transparent substrates through further research.
1. a) Circuit schematic and b) optical image of the fabricated RO. c) Gate delay per stage: comparison between current Si CMOS technology and the best graphene ROs fabricated.
2. a) Small-signal circuit model of the GFET used to extract device parameters from S-parameter measurements. b) Gain performance fmax, fT and Av for one of the GFETs.
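The two design relations used above, the over-unity stage gain Av = gm·rd > 1 and the ring-oscillator delay per stage f_osc = 1/(2·N·t_d), can be written out as a small sketch. The gm, rd and three-stage values below are illustrative assumptions (the thesis does not list them here); only the 4.3 GHz oscillation frequency is taken from the text:

```python
# toy relations for a ring oscillator built from GFET inverters
# (gm, rd and the stage count N are assumed example values, not measured data)

def intrinsic_gain(gm_mS, rd_ohm):
    """Av = gm * rd; it must exceed 1 for each stage to regenerate the signal."""
    return gm_mS * 1e-3 * rd_ohm

def gate_delay_per_stage(f_osc_hz, n_stages):
    """From f_osc = 1 / (2 * N * t_d), solve for the delay t_d per stage."""
    return 1.0 / (2 * n_stages * f_osc_hz)

print(intrinsic_gain(gm_mS=2.0, rd_ohm=900))    # 1.8 > 1: a usable stage
print(gate_delay_per_stage(4.3e9, 3) * 1e12)    # ~38.8 ps per stage if N = 3
```

The same delay-per-stage figure is what Fig. 1c compares against Si CMOS at equal gate length.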
Ultrafast optical techniques while later, it was proposed that monitors the modifications of suggested for the mechanism, the time scales of different shows unusual properties
PHYSICS
have been established as the magnetization is a quantum- interest in the heated region. along the progress on the (quasi-) particle interactions about the Fermi level: for one
most effective and flexible mechanical concept, so, until By employing the pump & experimental side. Some of the after the pump excitation: spin orientation they have
approaches to study the quantum mechanics gradually probe method, and based on theories are more accepted, electron-electron interaction conducting metallic behavior,
dynamics in a material within a evolved within the last century, changes in the polarization state but up to date there is not a occurs on a time scale of 100 while for the opposite spin
picosecond time-window. This magnetization remained a rather of the reflected probe beam unanimous agreement. The fs, demagnetization is found to orientation the presence of a
relatively new field provided unclear phenomenon. Latest from a magnetic specimen, we theories can be categorized into take place on 180 fs timescale gap in the density of states leads
fundamental insight into one of decades were full of significant can elucidate spin variations: two general types: single particle and the extracted time constant to semiconducting or insulating
the most important properties progress in the understanding the technique is called Time based and collective excitations for electron-phonon relaxation is properties. We measured the TR-
of condensed matter systems: and application of magnetism Resolved Magneto Optical Kerr based mechanisms. Our 260 ps. The deduced dynamics reflectivity for all samples in the
magnetism. Magnetism, a theoretically and experimentally. Effect (TR-MOKE) and is a well- recent experimental evidences of the conductivity tensor same delay range. Our analysis
primary physical phenomena, Parallel to the rapidly expanding established method to extract contribute to further clarification demonstrates that ultrafast suggests that a drastic change
has been recognized thousands computer industry, demand has the dynamics of magnetization of the process. demagnetization cannot in the transient reflectivity can
of years ago as its trace is found grown for faster data access down to the fs time domain. A crucial point is that in be attributed to significant be regarded as a clear hint of
in Chinese and Greek ancient in the memory storages. One Although deliberate particular conditions TR- modifications of the band charge contributions in the
references. During this long of the potential candidates for manipulation of spins in the MOKE does not reflect the structure, such as a collapsing Kerr signal. In the case of CrO2
time, magnetism, even without this continuing quest has been fs time window may not bring pure magnetic behavior, i.e. exchange interaction (related an order of magnitude larger
a deep and fundamental photonic control of the spins in to femto-spintronics devices in it contains mixed information to single particle processes). jump in the initial reflectivity
understanding, has been a suitable magnetic material. the near future, the extensive about charge and spin effects. Instead we suggest that with respect to Fe sample can
employed in several applications, Owing to technical innovations scientific curiosity in this We have developed an the loss of spin ordering be observed, corresponding
e.g. compass needle. About in the optical apparatuses, fields yields more illuminating experimental approach, so-called takes place due to electron- to considerable charge effects
two hundred years ago, Oersted leading to generation of beams insight about the fundamental Time Resolved Magneto-Optic magnon interaction (collective in the Kerr signal for the
realized that the interaction between the electric field and a magnet could have unlocked the gate to dramatic technological applications, including electric motors. Later, Faraday discovered the rotation of light polarization when passing through a magnetic medium, followed by Kerr in 1876, who laid the first stones of a new branch of magnetism called magneto-optics. The next milestone in the field of magnetization is due to Maxwell and his basic equations, which mathematically systematized the relation between magnetic field, electric field, charges and currents. Nevertheless, […] with pulse durations of only a few femtoseconds (10⁻¹⁵ s), the investigation of magnetic properties on such a short time scale has become possible, in particular with the pump & probe technique. The pump & probe scheme can basically be described in the following general picture: an ultrafast intense laser pulse, the pump, locally perturbs various (quasi-)particles in a sample. Then, after a certain delay (which in our experiments could range from tens of fs to tens of nanoseconds), another ultrashort pulse, sufficiently weak and with a specified polarization, the so-called probe, […] characteristics of the photon-matter interactions and the magnetization behavior in the strongly nonequilibrium state.

The principal objective of our study has been a comprehensive investigation of the ultrafast spin dynamics in metallic ferromagnets by means of the time-resolved magneto-optical Kerr effect technique. In these transition metals the spin quenching process takes place within only a few hundreds of fs upon laser excitation: the most rapid magnetization variation ever observed. The origin of this ultrafast phenomenon is under vigorous debate. A number of microscopic models have been […] Spectroscopic Ellipsometry (TR-MOSE), to carefully analyze the Kerr signal. Utilizing this method we are able to disentangle magnetic contributions from optical effects. We have successfully performed TR-MOSE on several metallic and half-metallic systems, and their genuine spin dynamics has been deduced and discussed in great detail.

The experiments to study the ultrafast demagnetization dynamics in ferromagnetic metals have been conducted on a 50 nm thick Fe (100) film epitaxially grown on MgO (100) at room temperature in ultrahigh vacuum. The results revealed […] excitation). Our proposed picture is perfectly consistent with the experimentally obtained time scales and energy cost, and with the observed fact that the demagnetization is probe-energy independent.

In order to clarify whether the TR-MOKE technique reliably traces the spin dynamics or not, we have also compared the magneto-optical response of some ferromagnetic benchmarks: metallic Fe and the half-metallic CrO2 and La0.7Sr0.3MnO3 (LSMO) systems. Half metals are a category of ferromagnetic or ferrimagnetic materials whose electronic structure […] half-metallic sample. Since we found an unusual probe-energy dependence of the transient magnetization behavior in the LSMO sample, we plan to perform the same experiments in different LSMO stoichiometries as a next step, to reveal the genuine spin behavior in these interesting and complex oxides.
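The pump & probe delays quoted above (tens of fs to tens of nanoseconds) are typically set with a mechanical delay line, where stage travel converts to delay by simple arithmetic. A minimal sketch of that conversion (the double-pass retroreflector geometry is a generic assumption, not a detail taken from this thesis):

```python
# Optical delay line: moving a retroreflector by dx lengthens the probe
# path by 2*dx, delaying it by dt = 2*dx / c (double-pass geometry assumed).
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_from_travel(dx_m: float) -> float:
    """Pump-probe delay (s) produced by a stage displacement dx_m (m)."""
    return 2.0 * dx_m / C

def travel_for_delay(dt_s: float) -> float:
    """Stage displacement (m) needed to reach a target delay dt_s (s)."""
    return C * dt_s / 2.0

# 1 micrometer of stage travel gives ~6.7 fs of delay;
# reaching 1 ps requires ~150 micrometers of travel.
fs_per_um = delay_from_travel(1e-6)
travel_1ps = travel_for_delay(1e-12)
```

This also shows why fs-scale delays are easy to scan mechanically, while the tens-of-ns delays mentioned in the text need either very long stages or electronic synchronization.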
PHYSICS

Organic materials offer an attractive opportunity for the development of organic photovoltaic cells that could support portable consumer electronics. With respect to the state of the art, organic photovoltaic cells (OPV) show low values of power conversion efficiency (PCE), not so attractive for industrial production. Different approaches to improve the PCE have been reported in the literature; one of these acts on the morphology of the most widely used active layer, based on the binary blend P3HT:PCBM, for example through the introduction of solvent additives and/or a third component. The addition of a third component could extend the absorption band of the binary cells, since organic materials could cover the low-energy range of the solar spectrum. The use of a ternary blend as the active layer has also been suggested as a practical method for controlling the morphology of the OPV layer.

Considering the progress of ternary organic solar cells, a methodology is presented for improving the PCE of organic solar cells made with photoactive layers of P3HT and PCBM of non-optimized microstructure, through the introduction of a quinoidal small molecule, 5,5-bis-(3,5-di-tert-butyl-4-oxo-2,5-cyclohexadiene-1-ylidene)-2,2-dihydroxy bithiophene (QBT), as a third component. Based on a series of independent characterization experiments, we address the QBT-content-dependent photophysical, electrical, thermal, structural and morphology-related properties of the ternary photovoltaic P3HT:PCBM:QBT system in order to elucidate the origin of the PCE improvement. An increase in efficiency of around 47% has actually been gained by adding only 0.6 wt% of QBT, while further additions are detrimental to the device performance. It was highlighted that in the P3HT:PCBM:QBT system charge generation proceeds via three different excited-state pathways that are consistent with the relative positions of the energy levels of the materials involved. The direct excitation of the P3HT component results in charge generation driven by electron transfer between the photoexcited P3HT and both electron acceptors, QBT and PCBM, in the triple bulk heterojunction (BHJ). Then, the direct excitation of PCBM causes energy transfer from the photoexcited PCBM to QBT, followed by a hole transfer process from QBT to P3HT. Moreover, the direct excitation of QBT, which is characterized by an absorption in the wavelength range of 700 nm, results in photocurrent generation via a photoinduced hole transfer from QBT to P3HT. The positive impact of these three excited-state pathways on the production of photocurrent is confirmed by the EQE spectra of the ternary devices, which show improved EQE values in the corresponding wavelength ranges.

Then, the effects of donor polymer molecular weight on the properties of ternary organic solar cells were investigated. Two sets of P3HT:PCBM:QBT systems, with P3HT of different molecular weight, were studied by comparing the electrical properties, surface topography, film crystallinity and charge carrier mobility. The results turned out that with QBT as the third component the crystallinity of the h-P3HT (h: high molecular weight) matrix was improved, leading to enhanced absorption and increased hole carrier mobility. As a comparison, QBT did not have positive effects on the l-P3HT:PCBM:QBT (l: low molecular weight) properties, because the crystallinity level of the l-P3HT matrix is already optimized. The microstructure of the active layer thus determines the performance of the triple bulk heterojunction devices.

1. Graphic abstract
2. Sketch of ternary blend energetics and proposed excited-state pathways. Schematic representation of the energy levels of P3HT, PCBM and QBT, as obtained by cyclic voltammetry. The LUMO of P3HT is taken as -2.9 eV.
3. Ternary blend films crystallinity summary. (a) h-P3HT-PCBM-QBT films, (b) l-P3HT-PCBM-QBT films
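The 47% figure quoted above is a relative gain in power conversion efficiency, PCE = (Jsc · Voc · FF) / Pin. The arithmetic can be sketched as follows; the J-V parameters used here are illustrative placeholders, not measurements from this work:

```python
def pce(j_sc_ma_cm2: float, v_oc: float, ff: float,
        p_in_mw_cm2: float = 100.0) -> float:
    """Power conversion efficiency (%) from short-circuit current density
    (mA/cm^2), open-circuit voltage (V), fill factor (0-1) and incident
    power density (mW/cm^2; the AM1.5G standard is 100 mW/cm^2)."""
    return 100.0 * (j_sc_ma_cm2 * v_oc * ff) / p_in_mw_cm2

# Hypothetical binary vs. ternary cell (placeholder numbers):
binary = pce(8.0, 0.60, 0.60)     # 2.88 % absolute efficiency
ternary = pce(10.0, 0.60, 0.706)  # ~4.24 % absolute efficiency
relative_gain = (ternary - binary) / binary  # ~0.47, i.e. +47 % relative
```

The point of the sketch is that a "47% increase" of this kind is relative: a cell at a few percent absolute PCE gains roughly half of its own efficiency, not 47 percentage points.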
In the expanding research field of bioelectronics, optical stimulation of living cells and tissues has recently started to emerge as a promising tool, complementary to electrical stimulation, both for in vitro and in vivo studies. The most direct application falls within the field of retinal prostheses, consisting in the restoration of impaired light sensitivity in blind retinas. In this scenario, organic materials appear optimal candidates for active photosensitive layers and/or conducting electrodes and/or substrates, thanks to their excellent biocompatibility, mechanical properties and optoelectronic capabilities. It was recently reported that polythiophene-based blends are able to elicit action potentials in primary neuronal networks, and also to partially restore light sensitivity in explanted retinas bearing photoreceptor degeneration. These promising results encouraged the realization and functional evaluation of an all-organic, photovoltaic retinal prosthesis. With respect to current, state-of-the-art retinal prostheses based on inorganic materials, a photovoltaic prosthesis realized with an organic semiconductor avoids the use of external components (like intraocular receivers and amplifiers), does not need any wiring, and can offer enhanced spatial resolution, better biocompatibility and higher conformability to the remaining retinal tissue. Moreover, organic conductors and semiconductors are unique materials in combining ionic and electronic conduction, thus mimicking the mechanisms adopted by nature for signal transmission. Besides the above-mentioned benefits, the contact of an organic semiconductor with tissues and physiological solutions raises important issues of biocompatibility and temporal stability.

In the first part of this thesis, the hybrid interface of an organic semiconductor with a physiological-like environment has been widely characterized by making recourse to a plethora of optical and electronic techniques, and by adopting the preferential architecture of a photoelectrochemical cell (PEC). Interestingly, transient photocurrent measurements have permitted identification of the main processes occurring at the interface of the polythiophene derivative with an aqueous solution under irradiation. The PEC cell has also been studied in the case of oxidation of the polythiophene-based film by treatment with oxygen plasma: this case resembles that of sterilization, and suggests that, if performed with optimized parameters, oxidation doesn't affect the PEC cell efficiency in photocurrent generation.

The all-organic, photovoltaic retinal prosthesis has then been optimized and widely characterized. The preferred device architecture includes a fully biocompatible and flexible substrate, namely a silk fibroin film; a biocompatible and flexible conducting contact layer, namely a poly(3,4-ethylenedioxythiophene):poly(styrenesulphonate) (PEDOT:PSS) film; and an active, conjugated polymer layer, namely a regioregular poly(3-hexylthiophene) (rr-P3HT) film. The prosthesis has been analyzed after 28 days of immersion in saline solution and irradiation with ambient light at 37 °C, by means of absorption spectroscopy, contact angle and transient photocurrent measurements. The prosthesis stability in physiological conditions has therefore been successfully assessed.

Realized prostheses have been implanted in the eyes of dystrophic Royal College of Surgeons (RCS) rats, and biocompatibility and functionality studies have been carried out. Optical coherence tomography, confocal scanning laser ophthalmoscopy, histochemistry and immunohistochemistry, electrophysiology, pupillary reflex measurement, and visually driven behavior tests have been performed. It has been observed that the organic prosthesis can sustain the surgical procedure for subretinal implantation and follows the natural curvature of the rat retina. Biocompatibility properties have been assessed as well. Preliminary results indicate that, up to two months post implantation, light sensitivity of dystrophic retinas is restored by the photovoltaic prosthesis; moreover, they show that its implantation doesn't compromise the functionality of the remaining inner retinal layers.

Based on these promising results, the feasibility of implantation of an all-organic retinal prosthesis in a human eye has been investigated. To this aim, a different animal model, the pig, has been selected for the similarity of its eye to that of human beings, and a proper device architecture has been implemented. Many possible candidates for the substrate material have been evaluated, including bacterial cellulose (BC), poly(ethyleneterephthalate) (PET), poly(lactic-co-glycolic acid) (PLGA), polycaprolactone (PCL) and poly(methylmethacrylate) (PMMA). In each case, many different constraints have been considered, including solvent resistance, wettability, processability, thermal stability and mechanical properties. A combinatorial approach permitted definition of the most suitable protocols for the realization of a proper thin-film substrate and the subsequent fabrication of the overall device. BC and PET have revealed to be the most suitable substrates for the realization of a scalable retinal prosthesis.

Overall, this work provides a detailed characterization of organic-based retinal prostheses implanted in blind rats, and represents a useful starting point for the subsequent engineering of artificial devices targeted to human beings.
The experimental activity performed in this PhD thesis has been devoted to developing and integrating new functions for fluidic analysis purposes in lab-on-a-chip (LOC) devices, fabricated by femtosecond laser micromachining in glass. The idea behind the LOC concept is the possibility to miniaturize and integrate several laboratory functions on a single substrate with dimensions ranging from hundreds of micrometers to a few millimeters. The very first consequence of miniaturization is that small quantities of samples and reagents are used (10⁻⁹ to 10⁻¹⁸ litres), thus leading to a naturally high resolution and sensitivity of detection together with short analysis times. Other side benefits related to the small size are the low cost, low energy consumption, reduced waste generation and, more generally, better controlled reactions, which ensure safe working conditions. In short, LOCs are extremely portable devices, which facilitates their diffusion in the healthcare market with the innovative idea of providing portable points of care, usable even by non-experts and without any special equipment.

In this regard we exploited fs-laser assisted etching for fabricating the micro-devices, thanks to the intrinsic 3D potential of this technique, which enables mask-less direct writing of any sort of geometry inside the substrate, as in the ship-in-a-bottle idea. In particular, the micro-structures are first irradiated by the femtosecond laser, with 3D motion control, and then selectively removed by chemical etching, leaving empty zones inside the substrate and thus producing the microfluidic platform. Furthermore, the same tool (the fs-laser) can be used to fabricate waveguides in fused-silica glass with a slight refractive index increase with respect to the surrounding volume. As a direct consequence, fluidic recirculation and optical detection are easily integrated in the same device, allowing fast prototyping of total-analysis systems (micro-TAS). Easy network reconfigurability is visualized in fig. 1a.

Considering the etching step, we explored the combination of hydrofluoric acid (HF) for large volume removal and potassium hydroxide (KOH) for extreme-precision fabrication (of the order of 1 µm). When using one single etchant, the former tends to produce highly conical microchannels due to isotropic etching in the volume, whereas the latter is limited in etching strength, requiring several hours to remove a few hundreds of micrometers of irradiated fused silica. Interestingly, the combined use of both etchants in subsequent steps enables the realization of complex platforms with innovative design and uniform internal structures. Thanks to this strategy, we successfully fabricated a micro-filter for particle separation and a fluidic switch for fluid selection (reported in fig. 1b,c). The filter, composed of a grid of 15 x 15 pores of 2.05 µm size and directly encapsulated inside a square channel, permits separation of different particle species in suspension. In particular, the internal dimensions are suitably engineered considering the typical size of biological samples, where blood cells and related material range between 1.5 µm and 10 µm in size. Regarding the second device, we demonstrated an extremely compact fluidic switch (1 mm x 2 mm) that can be connected to other systems by means of inlet/outlet insertions for full integration. The switch is composed of a T-shaped channel with a movable glass block inside, enabling port-1 or port-2 to be alternatively blocked in order to select the output direction.

Besides the described micro-mechanical components, we carried out a state-of-the-art study for integrating a laser microcavity directly inside the analysis device. Indeed, the potential of LOC systems can be significantly enhanced by disposing of source excitation and detection inside the microchip, allowing both the coupling and transmission losses to be reduced and the device selectivity to be increased. Furthermore, by exploiting the intrinsic high sensitivity of a laser cavity, the optical properties of fluid samples can be easily monitored thanks to an in-situ, highly sensitive sensor. The first material choice for laser integration in microfluidic systems is high-gain dye molecules diluted in liquid solvents as the active medium, with emission covering the whole VIS region. As a direct consequence, due to the high chemical degradability of any dye compound, fluidic recirculation is necessary in order to bring fresh molecules into the cavity and enhance the laser lifetime. Furthermore, easy lasing tuning is achieved by integrating wavelength-selective mirrors, constituted of Bragg gratings recorded by means of two-beam holography within photo-polymerizable resins. In particular, we selected this technique as it allows single-step volume grating realization, with a high resolution of 7500 lines/mm and refractive index modulation up to 5x10⁻², thus permitting first-order operation in the VIS region. Therefore, by merging fs-laser micromachining for micro-device fabrication with soft-matter holography for grating imprinting, we aim at realizing an all-in-one platform for fluid analysis, with the basic design displayed in fig. 1f showing the central microchannel filled with dye solution, two side gratings acting as cavity mirrors, and the external connections. As a first step, we optimized both the micro-device from the fluidic point of view and the photo-polymerizable mixtures in chemistry, in order to obtain high-efficiency diffraction gratings with the selected configuration. Secondly, since the beams' coherence during the holographic process is strongly affected by the intrinsic surface roughness left by fs-laser fabrication, we performed a detailed study in order to control and improve the surface quality. In particular, by means of AFM-based 2D mapping and the corresponding spectral domain inspection, we found out that the non-uniform energy distribution within the fs-laser focal spot leads to a different surface structuring of the top wall (fig. 1d) and bottom wall (fig. 1e). In detail, the former feels the periodicity of the laser tracing, whereas the latter reveals a more random profile. This difference implies a different scattering of any light beam passing through the sample and, considering each particular application of the lab-on-a-chip, the surface profile can be controlled by suitably tailoring the fs-irradiation parameters, such as beam shape, polarization, pulse energy and laser-trace spacing.

1: a) different microfluidic chips fabricated by fs-laser assisted etching; b) micro-filter blocking 7 µm beads immersed in water; c) T-shaped microchannel with movable glass block acting as fluidic switch; d,e) AFM-based 2D mapping revealing the surface roughness of d) the top wall and e) the bottom wall; f) schematic design of the laser-integrated microcavity.
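The quoted 7500 lines/mm and first-order operation in the VIS region can be cross-checked with the Bragg condition for a volume grating, λ_B = 2nΛ. A quick estimate (the average refractive index n = 1.5 is an assumed typical value for a photopolymer, not a figure taken from the thesis):

```python
def bragg_wavelength(lines_per_mm: float, n: float, order: int = 1) -> float:
    """Bragg reflection wavelength (m) of a volume grating with the given
    groove density, average refractive index n and diffraction order."""
    pitch = 1e-3 / lines_per_mm        # grating period Lambda, metres
    return 2.0 * n * pitch / order

# 7500 lines/mm with an assumed n = 1.5 reflects at ~400 nm in first
# order, i.e. at the blue edge of the visible region.
lam = bragg_wavelength(7500, n=1.5)
```

With such a short period, higher orders fall deep in the UV, which is consistent with the text's emphasis on first-order operation in the VIS range.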
Hybrid interfaces between organic semiconductors and living tissues represent a new tool for in vitro and in vivo applications, bearing a huge potential, from basic research to clinical applications. In particular, light-sensitive conjugated polymers can be exploited as a new approach for optical modulation of cellular activity. It has been previously demonstrated that thin films of organic semiconductors used for photovoltaic applications are able to stimulate the bioelectrical activity of neurons grown on their surface upon illumination with pulses of visible light, both in the case of hippocampal neuronal cultures and of explanted blind retinas. This thesis is focused on the study of the functioning mechanisms of these hybrid interfaces, composed of a photoactive layer in contact with an electrolytic solution; the main absorbing material used is the prototypical conjugated polymer poly(3-hexylthiophene) (P3HT), in some cases blended with the electron acceptor phenyl-C61-butyric acid methyl ester (PCBM). The study is carried out both from a photophysical and an electrical point of view, to understand the processes occurring at the hybrid interface upon illumination and the ability of the device to stimulate biological cells. In particular, we are interested in understanding how photoexcitation of the active material in the device is able to modulate the potential of the plasma membrane, which is the main parameter controlling the firing of action potentials in excitable cells.

First, the current strategies used for measuring and controlling bioelectrical activity are reviewed; after describing the evolution of electrical measurements and stimulation of cellular activity, particular attention is paid to optical techniques, and the photoactive hybrid polymer interfaces are introduced. The experimental characterization of these hybrid polymer/electrolyte interfaces is presented, with a thorough investigation of their spectroscopic, electrical and thermal properties, in order to delineate the main phenomena that occur at the device surface upon illumination with short pulses of light (on the order of tens to hundreds of milliseconds). On these short timescales, the hybrid interface does not support an efficient electrochemical transfer of the charges photogenerated in the active material to the electrolytic solution. In contrast with standard organic photovoltaic cells, where charges are extracted by metal contacts, the hybrid device is not able to sustain a continuous current upon illumination. Instead, a capacitive charging of the polymer/electrolyte interface is observed, similar to what happens in all-electrical silicon-based devices used for capacitive stimulation of neurons. Interestingly, the capacitive currents obtained in the polymeric devices upon photostimulation are comparable in intensity to those typical of electrically driven inorganic devices; however, their temporal dynamics are quite short (on the order of 1 ms), due to the finite amount of charge that can be accumulated at the polymer/electrolyte interface. Once this capacitance has been charged, all the charges photoexcited in the active material during illumination recombine non-radiatively to the ground state. Thus, apart from the small fraction that is used to charge the interface, all the energy of the photons absorbed by the material is dissipated into thermal vibrations, leading to a local heating of the electrolyte at the device surface. Depending on the light intensity used, an increase in temperature on the order of a few degrees can be observed after illumination with pulses of tens of milliseconds.

The effects of photostimulation of the active material on cells were studied by growing HEK-293 cells on the hybrid interfaces. Being non-excitable cells, HEK-293 allowed isolation of the effects of photoexcitation on the basic properties of the membrane, investigated via electrophysiological methods and in particular with patch-clamp techniques. Apart from the capacitive charging of the interface, which is reflected in a spiking signal in the recorded potential of the cell, the illumination resulted in a biphasic effect, with an initial transient depolarization of the membrane followed by a prolonged hyperpolarization. Both these effects were attributed to the local heating of the system mediated by the light absorption in the active polymer. In particular, the initial transient depolarization was related to an increase in the membrane capacitance with temperature, consistently with recent reports on the mechanism of functioning of infrared neural stimulation (INS), in which water absorption of IR light is used to induce a local heating. This depolarization signal significantly depends on the electrical properties of the membrane, in particular its time constant (the product of membrane resistance and capacitance), with faster membranes (i.e. with a higher specific conductance) responding with lower depolarization signals upon illumination. The subsequent hyperpolarization was instead attributed to a shift of the membrane equilibrium potential towards more negative values, determined by the electrochemical equilibrium of the ionic species on the two sides of the cell plasma membrane as described by the Goldman-Hodgkin-Katz equation, which strongly depends on temperature. To complete the description of the photostimulation process, a mathematical model of the dynamics of the membrane potential is proposed, which consistently reproduces the experimental data collected on HEK-293 cells.

The work is concluded by wrapping up the results in the context of existing techniques for cell stimulation and by pointing out future developments, towards the creation of a multi-functional platform for light-controlled cell manipulation, with possible applications in different fields of neuroscience and medicine.

1. Electrical schematization of the coupling between the hybrid polymer/electrolyte interface and a biological cell, described with a two-compartment model that considers the basal and the lateral portions of the plasma membrane. The green box represents the area of the device illuminated during stimulation.
2. Variation in membrane potential measured on an HEK-293 cell upon illumination with a 200 ms pulse of light (I = 57 mW/mm²). The figure shows the comparison between the experimental results (grey open circles) and the numerical modeling of the system (pink solid line). The single contributions to the total signal from the variations in membrane capacitance (blue dashed lines) and equilibrium potential (green dashed lines) are also reported.
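The temperature dependence invoked above enters the Goldman-Hodgkin-Katz voltage equation mainly through its RT/F prefactor: V_m = (RT/F) ln[(P_K[K]_o + P_Na[Na]_o + P_Cl[Cl]_i) / (P_K[K]_i + P_Na[Na]_i + P_Cl[Cl]_o)]. A minimal numerical sketch of the hyperpolarizing shift (the permeability ratios and ion concentrations below are generic textbook-style values, not data from this thesis):

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/(mol K)), Faraday constant (C/mol)

def ghk_voltage(T, perm, conc_out, conc_in):
    """GHK membrane potential (V) for K+, Na+ and Cl-.
    perm: relative permeabilities; conc_*: concentrations (mM).
    The anion (Cl-) enters with inside/outside concentrations swapped."""
    num = (perm["K"] * conc_out["K"] + perm["Na"] * conc_out["Na"]
           + perm["Cl"] * conc_in["Cl"])
    den = (perm["K"] * conc_in["K"] + perm["Na"] * conc_in["Na"]
           + perm["Cl"] * conc_out["Cl"])
    return (R * T / F) * math.log(num / den)

perm = {"K": 1.0, "Na": 0.05, "Cl": 0.45}          # textbook-style ratios
out = {"K": 5.0, "Na": 145.0, "Cl": 110.0}         # extracellular, mM
ins = {"K": 140.0, "Na": 10.0, "Cl": 10.0}         # intracellular, mM

v37 = ghk_voltage(310.15, perm, out, ins)  # resting potential at 37 C
v39 = ghk_voltage(312.15, perm, out, ins)  # after ~2 K of local heating
# v39 < v37 < 0: since the logarithm is negative, the linear-in-T
# prefactor makes the equilibrium potential more negative on heating,
# which is the hyperpolarization mechanism described in the text.
```

The concentration-dependent logarithm is negative for a typical cell, so any light-induced local heating scales it further negative, independently of the detailed membrane model.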
Strain engineering in Si, Ge and SiGe alloys

[…] performance of microelectronic devices has pushed the traditional silicon-based architectures to their limits. One possible way to overcome these limits is identified in the use of strain engineering. This method allows an increase in the performance of the devices through the ability to control the band structure of the semiconductor. Compressively strained silicon is used in high-speed electronic devices, and tensile germanium in optoelectronic devices. The use of thick films is hindered by the nucleation of dislocations, which decreases the performance of the devices; however, using the dislocation engineering method it is possible to govern the nucleation and the propagation of dislocations at the nanoscale level.

Regarding the problem of strain control, a suitable nanopatterning of the SiGe layer induces, through the generation of edge forces, a compressive or a tensile strain, respectively, for a Si or a Ge substrate. SiGe films were deposited by low-energy plasma-enhanced chemical vapor deposition (LEPECVD). SiGe nanostressors were realized by electron beam lithography (EBL) and reactive ion etching (RIE). Scanning electron microscopy (SEM) and atomic force microscopy (AFM) […] the obtained structures. First of all, the SiGe-on-Si system was studied. The realized structures were characterized by Raman spectroscopy in order to study the strain state of the silicon as a function of the patterning of the SiGe film, obtaining a compressive strain of the Si substrate of 1%. Using finite element method (FEM) simulations, it was possible to demonstrate that the strain induced in the substrate depends on the ratio between the width and the spacing of the stripes (Fig. 1(a)). So, wide stripes with narrow gaps between them were needed to maximize the induced strain. After these promising results, the SiGe-on-Ge case was considered. In this case the goal was a uniaxial tensile strain higher than 4%, in order to induce the transition to a direct band gap in the germanium. A simplified numerical model, which was very sensitive to the temperature, the growth rate and the Ge content of the alloy, was developed to describe the strain relaxation process of tensile SiGe on Ge. In this way the optimum conditions for the growth of a metastable film were found. Also in this case the strain depends on the spacing between the stripes, and for a spacing of 20 nm the strain is higher than 4%, […] be induced in the germanium. This strain can be enhanced using a Ge membrane instead of a bulk material. SiGe/Ge membranes were obtained using a wet etching process which preserves the SiGe layer, leading to the realization of high-quality surfaces. The anisotropic etchants used, TMAH and KOH, allow a high Si/SiGe selectivity, which is necessary to avoid damaging the stressors. These samples were analyzed by Raman spectroscopy, confirming the preservation of the SiGe layer on top of the Ge membrane. At this point the SiGe stressors were fabricated from the SiGe layer (Fig. 1(b)).

Regarding dislocation engineering, a new method of controlling and engineering the dislocations was developed. Suitable patterns, realized by EBL, provided a controlling effect on the propagation of dislocations in an epitaxial SiGe film grown on the patterned Si substrate. The pattern was a matrix of inverted pyramids aligned along the <110> directions: these inverted pyramids acted as favourable points for the nucleation of dislocations. In this way, it was possible to control the propagation of dislocations along the pit rows. The ability to confine dislocations and obtain areas without dislocations is an important target for the […] characterized by AFM, verifying that the dislocations propagate along the pit rows (Fig. 2(a)). This control effect was not confined only within the pattern, but extended over 20 µm […] used for the direct visualization of the dislocations present in a film. Through the Raman maps, it was possible to obtain information concerning the strain; in particular it was found […] using TEM analysis in order to investigate the nature of the defects.

1. a) FEM simulation of ε_xx (upper panel) and ε_zz (lower panel) for SiGe/Si stripes. b) SEM image of SiGe nanostressors on top of a Ge membrane.
2. a) AFM amplitude image, in tapping mode, of SiGe on a pit-patterned substrate. b) Raman map of the Si-Si peak of the SiGe film.
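The >4% tensile-strain target can be compared with the natural Si/Ge lattice mismatch; with Vegard's law, the misfit of a Si(1-x)Ge(x) layer forced onto the Ge lattice parameter is easily estimated. A rough sketch (bulk lattice constants only; the elastic relaxation of the real stripes and membranes is ignored):

```python
A_SI, A_GE = 5.431, 5.658  # bulk lattice constants near 300 K, Angstrom

def misfit_on_ge(x_ge: float) -> float:
    """In-plane misfit strain of a Si(1-x)Ge(x) layer coherently matched
    to the Ge lattice parameter; the alloy lattice constant follows
    Vegard's (linear interpolation) law. Positive values = tensile."""
    a_alloy = A_SI + x_ge * (A_GE - A_SI)
    return (A_GE - a_alloy) / a_alloy

# Pure Si on Ge gives the full mismatch, ~4.2 % tensile; pure Ge on Ge
# is of course unstrained. Intermediate compositions scale in between.
full_mismatch = misfit_on_ge(0.0)
```

This is why the 4% figure in the text sits right at the physical ceiling set by the Si/Ge lattice mismatch, and why membrane geometries are needed to approach it.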
This thesis investigates the fabrication and characterization of water-gated organic field-effect transistors (WGOFETs), employing organic polymeric materials as the active semiconducting layer, for biosensing applications. Compared to a typical solid-state top-gate transistor configuration, a water-based electrolyte substitutes the usual polymeric dielectric, allowing operation at the very low voltages (<1 V) typical of biological systems and opening the possibility of developing a WGOFET-based biosensing platform.

In a first phase of the work we screen the most promising p-channel (hole-conducting) semiconducting polymers to be used as active materials for WGOFETs; we finally focus on polymers of the polythiophene family, which exhibit remarkable performance in terms of charge carrier mobility and electrochemical stability as compared to previously demonstrated state-of-the-art polymer-based WGOFETs, being able to work also in a biological-like environment (0.2 M NaCl solutions). Notably, the critical comparison among different polythiophene-based polymers allows us to unravel key physical mechanisms of the hybrid interface between conjugated polymer thin films and saline electrolytes.

By employing the best-performing polymer among the selected ones, poly(2,5-bis(3-hexadecylthiophen-2-yl)thieno[3,2-b]thiophene) (pBTTT), an electrolyte-gated pBTTT-based device is realized, sensitive towards the pH and ionic strength of the electrolyte and stable for more than 24 hours of operation. Moreover, the device is modified with the enzyme penicillinase, and a proof-of-concept sensor specifically sensitive towards penicillin is demonstrated.

Furthermore, two n-channel electron-conducting polymers are investigated for the first time as active materials in water-gated architectures, demonstrating outstanding performance. By coupling p- and n-type polymers, water-gated complementary inverters are realized, paving the way to the fabrication of complementary circuits working in a liquid environment.
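Water gating works because the electric double layer at the polymer/electrolyte interface provides a very large gate capacitance per unit area, so useful currents flow at the sub-1 V voltages mentioned above. A back-of-the-envelope sketch using the standard gradual-channel saturation formula, Id = (W/2L)·μ·C·(Vgs−Vt)²; all parameter values are illustrative assumptions, not device data from this thesis:

```python
def i_d_saturation(mu_cm2_vs: float, c_uf_cm2: float,
                   w_um: float, l_um: float,
                   v_gs: float, v_t: float) -> float:
    """Saturation drain current (A) of a FET in the gradual-channel model.
    mu in cm^2/(V s), areal gate capacitance in uF/cm^2, W and L in um."""
    mu = mu_cm2_vs * 1e-4   # cm^2/(V s) -> m^2/(V s)
    c = c_uf_cm2 * 1e-2     # uF/cm^2   -> F/m^2
    return (w_um / (2.0 * l_um)) * mu * c * (v_gs - v_t) ** 2

# Illustrative p-channel device driven within the sub-1 V window:
# a few uF/cm^2 of double-layer capacitance yields microampere-level
# currents even at 0.6 V of overdrive.
i_d = i_d_saturation(mu_cm2_vs=0.1, c_uf_cm2=3.0,
                     w_um=1000, l_um=20, v_gs=-0.8, v_t=-0.2)
```

Replacing the assumed µF/cm² double-layer capacitance with the nF/cm² of a thick solid dielectric in the same formula pushes the required gate voltage far above the biological window, which is the design point of the WGOFET architecture.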
Time-resolved investigation of electron dynamics

PHYSICS

…has been the development of a versatile high-repetition-rate (10 kHz) XUV/IR attosecond beamline for time-resolved studies of electron dynamics in few-particle systems (atoms and small molecules). The generation of XUV radiation was achieved via upconversion of Carrier-Envelope Phase (CEP) stabilized pulses centered around 800 nm with 5 fs duration, obtained via hollow-core fiber compression of the pulses produced by a commercial titanium-sapphire based laser system. The production, via High-order Harmonic Generation (HHG) in noble gases, of subfemtosecond XUV pulses with duration down to 380 as was then demonstrated in attosecond streaking experiments; the combined exploitation of different generating media and metallic filters resulted in a broad frequency tunability in the 15-30 eV range.

From the technological point of view, the main challenge of the project was to interface the developed beamline with a REaction MIcroscope (REMI), a charged-particle spectrometer capable of detecting multiple electrons and ions in coincidence, providing access to the momentum vectors of all the detected fragments. The detection of the generated photoelectrons offers a natural way to follow in real time the dynamics of an atom (or a molecule) triggered by the interaction with a broadband XUV attosecond pulse. Because of the large bandwidth implied by the very short time duration of attosecond pulses, multiple reaction pathways are usually involved, complicating the interpretation of experimental results. It is then highly desirable to collect every possible piece of information from the process, i.e. to realize a kinematically complete experiment: this is the ultimate goal of a REMI.

The successful operation of the REMI inside the developed beamline was first demonstrated in streaking measurements for the characterization of isolated attosecond pulses (see Figure 1). It consists in a two-color XUV-IR ionization experiment performed on a noble gas. Provided that the dipole matrix element remains approximately constant within the bandwidth of the XUV pulse, the delay-dependent photoelectron kinetic energy distribution can be interpreted as a FROG spectrogram, allowing for the retrieval of the XUV pulse.

Furthermore, the REMI was used to study IR-assisted ionization of the helium atom, excited to the 1snp manifold by the generated XUV pulses. The experiment revealed a change of the ionization pathway depending on the intensity of the IR pulse: while at low intensity (lower than 3×10^12 W/cm^2) the dominating process is the two-photon process (1 XUV + 1 IR) stemming from the whole 1snp manifold (with n>2), at higher intensities the four-photon process (1 XUV + 3 IR) from the 1s2p state becomes dominant.

An important part of the project was also dedicated to the study of atoms and molecules with Attosecond Transient Absorption Spectroscopy (ATAS), where the observable is provided by the XUV radiation transmitted by a dilute gas sample, spectrally resolved thanks to a home-built XUV spectrometer. This technique can be considered complementary to the REMI, since it provides access to the dynamics in the bound excited states of a system, which are invisible to a charged-particle spectrometer since no ionization is taking place. ATAS was applied for the first time to a molecular system (N2) for the study of multi-electron dynamics. The population of a coherent superposition of electronic states lying below as well as above the ionization potential was demonstrated (see Figure 2); the coherence was proved by the observation of quantum beatings with a periodicity of 1.33 fs (one half-cycle of the IR field), due to the interference between one-photon and three-photon excitation pathways populating the same states.

ATAS was also applied to singly-excited helium, which has been the subject of many similar investigations in the last few years. In particular, the novel contribution was the demonstration of a control over Light-Induced States (LISs), which are absorption features appearing only in the simultaneous presence of an XUV exciting pulse together with a dressing IR pulse. They can be thought of as the (virtual) intermediate states connecting the ground state to a two-photon (1 XUV + 1 IR) allowed bound excited state. The observed control was interpreted with arguments in a Floquet-like model; quantitative agreement was provided by simulations based on the solution of the TDSE in SAE.

1. An experimental streaking spectrogram performed on argon. The XUV attosecond pulse was centered around 28 eV; 380 as is the pulse duration obtained from the retrieval algorithm.

2. Experimental observation of a coherent electron wavepacket in N2. The transient absorption trace shows half-IR-cycle oscillations in the region 17.0-18.5 eV (lower panel) as a result of the interference between the single-XUV-photon pathway (tail of the XUV spectrum, green shaded area in the upper panel) and the three-photon pathway (absorption of one XUV photon at the center of the green shaded area, followed by absorption of two IR photons), both leading to the population of the same states (blue shaded area).
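The streaking measurement described above maps the IR vector potential onto the photoelectron energy. The sketch below is an illustration of that classical picture only, not the thesis retrieval algorithm; the peak vector potential A0 is an assumed value, while the 28 eV photon energy and the argon target follow Figure 1.

```python
import math

# Classical sketch of attosecond streaking: an electron released at delay tau
# gets a momentum kick equal to the IR vector potential A(tau) at that instant.
E_XUV_EV = 28.0    # XUV photon energy, eV (as in Fig. 1)
IP_EV    = 15.76   # ionization potential of argon, eV
HARTREE  = 27.211  # eV per atomic unit of energy
T_IR_FS  = 2.67    # optical period of an 800 nm IR field, fs

E0 = (E_XUV_EV - IP_EV) / HARTREE   # field-free kinetic energy, a.u.
p0 = math.sqrt(2.0 * E0)            # field-free momentum, a.u.
A0 = 0.05                           # peak IR vector potential, a.u. (assumed)

def streaked_energy_eV(tau_fs):
    """Final kinetic energy (eV) for an electron born at delay tau (fs)."""
    A = A0 * math.sin(2.0 * math.pi * tau_fs / T_IR_FS)
    return 0.5 * (p0 + A) ** 2 * HARTREE

# The spectrogram oscillates around the field-free energy with the IR period:
print(round(streaked_energy_eV(0.0), 2))  # → 12.24 (zero of A, field-free value)
```

Scanning tau over the trace and fitting the oscillation is, in essence, what the FROG-type retrieval exploits.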
Stimulated by technological applications of oxides in catalysis, electrochemistry, gas sensing, corrosion protection, electronics and high-density storage, considerable scientific effort has been devoted in the last decades to the investigation with atomic-scale resolution of oxide surfaces and metal/oxide interfaces. In particular, growing attention has been paid to the investigation of structural, electronic and magnetic properties of the surfaces of bulk crystals and of ultrathin oxide films supported on noble-metal substrates. The latter have been the subject of extensive research (i) as model inverse catalysts, and (ii) due to their rather facile preparation by reactive metal deposition (RD) in oxygen atmosphere, thanks to the inert noble-metal support.

Comparatively much less effort has been spent on the investigation at the nanoscale of oxides coupled with substrates characterized by an elevated reactivity toward oxygen, such as Fe [Fig. 1(a), top]. The question of how oxides form on top of a reactive metal is, nevertheless, of apparent relevance, e.g., concerning the use of oxides as protective coatings. Moreover, the investigation of the effects of the reactive support on the local electronic properties, chemical composition and defect distribution of the growing oxide is crucial in view of possible catalytic applications. Finally, the large majority of materials possessing long-range magnetic ordering belongs to the class of reactive metals. In this respect, new exciting properties can arise from the interfacial interaction between oxides possessing long-range spin ordering and their magnetic support. In particular, technologically relevant phenomena, currently exploited in high-density storage media, spintronics and magnetic field sensing, still await suitable model systems to allow a profound understanding via atomic-scale probing.

Indeed, the high reactivity of magnetic supports such as Fe poses serious limitations to the preparation methods for well-defined oxides, i.e., oxides ideally characterized by atomically flat surfaces and by chemically and structurally abrupt interfaces with the substrate. O2 exposure, in fact, readily induces the formation of substrate oxide phases [Fig. 1(a), bottom], whose structure and chemical composition are hardly controlled. As a result, oxides grown by RD on reactive substrates like Fe exhibit rough surfaces and diffuse interfaces, hampering both atomic-scale imaging via scanning tunneling microscopy (STM) and idealized defect modeling.

The research activity of my PhD aimed at achieving a deeper understanding of the subtle mechanisms occurring when interfaces between nanostructured transition metal (TM) oxides and a magnetic Fe(001) substrate are formed. In particular, three main strategies have been adopted to achieve the preparation of high-quality Cr, Co and Ni nano-oxides: (i) TM growth on a well-ordered, pre-oxidized Fe(001) surface, i.e., Fe(001)-p(1×1)O [Fig. 1(b)]; (ii) post-oxidation of, and (iii) homoepitaxial RD onto, ultrathin TM buffer layers grown on either Fe(001) or Fe(001)-p(1×1)O.

Fig. 2 reports a restricted selection of the STM results. Growth of sub-ML (1 ML = 1.2×10^15 at./cm^2) amounts of Cr on Fe(001)-p(1×1)O leads to the stabilization of two monolayer-thick oxides with Cr3O4 [Fig. 2(a)] and Cr4O5 [Fig. 2(b)] stoichiometry, unobserved among bulk Cr oxides. Conversely, Ni/Fe(001)-p(1×1)O growth leads to the formation of an alloy oxide surface, with Ni4FeO5 stoichiometry [Fig. 2(c)]. A similar tendency of Fe atoms to get oxidized is strikingly observed when oxidation of Ni/Fe(001) films is considered [Fig. 2(d)]. In this case Fe atoms migrate through the Ni buffer and form a buckled FeO nano-oxide with a polar (111) orientation.

The results achieved during my PhD could pave the way toward a deeper understanding of heterostructures formed by magnetic oxides supported on ferromagnetic substrates, especially considering the great progress achieved in the direct observation of spin structures down to the atomic scale via spin-polarized STM. Moreover, the peculiar structures formed by these oxides in direct contact with a reactive substrate could prove promising candidates as novel model catalysts.

1. Large-scale STM images of (a) the Fe(001) substrate before (top) and after (bottom) exposure to 50 L (1 L = 1.33×10^-6 mbar·s) O2 at room temperature, and (b) the Fe(001)-p(1×1)O surface. An atomically-resolved STM image (3.8×3.8 nm^2) of Fe(001)-p(1×1)O is reported in the inset of panel (b).

2. (a,b) Atomically-resolved STM images of single-layer-thick Cr oxides prepared by growing (a) 0.75 ML and (b) 0.80 ML Cr on Fe(001)-p(1×1)O at 670 K. The presence of ordered Cr vacancies (dark spots in STM) results in Cr3O4 and Cr4O5 formal stoichiometry for the nano-oxides in (a) and (b), respectively. (c) STM morphology of a 5 ML-thick Ni/Fe(001)-p(1×1)O film grown at 570 K. At variance with (a,b), the dark spots in (c) have been identified as single Fe atoms. The formal stoichiometry of this alloy nano-oxide is therefore Ni4FeO5. (d) Atomically resolved STM image (top, 7.0×4.8 nm^2) and line profile (bottom) of the polar FeO(111) nano-oxide (surface unit cell sketched in red) developing when a 5 ML Ni/Fe(001) film is exposed to 150 L O2 and post-annealed to 570 K.
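The monolayer definition quoted above (1 ML = 1.2×10^15 at./cm^2) follows from the atomic density of the Fe(001) surface; a quick numerical cross-check, assuming the textbook bcc Fe lattice constant of 2.87 Å:

```python
# bcc Fe exposes one atom per a x a cell on the (001) surface,
# so 1 ML corresponds to 1/a^2 atoms per unit area.
a_cm = 2.87e-8                   # Fe lattice constant in cm (textbook value)
atoms_per_cm2 = 1.0 / a_cm ** 2  # one atom per (001) surface unit cell

print(f"{atoms_per_cm2:.2e}")    # → 1.21e+15, matching the 1.2x10^15 at./cm^2 of the text
```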
From Dye-Sensitized to Perovskite-Based Solar Cells

Solar energy is the most abundant renewable energy source on earth: if properly converted and stored, it would potentially enable sustainable economic growth for humanity with a minimum detrimental impact on the environment. Among the photovoltaic technologies developed to convert solar power, silicon-based ones are the most consolidated in industry. There are, however, many alternatives, promising cheaper, environmentally friendly solutions with characteristics suitable for various technological applications. Among these, Dye-Sensitized Solar Cells (DSSC) are mostly coming of age, providing potential penetration into high-added-value energy markets, such as BIPV (Building-Integrated Photovoltaics), unattainable by classic inorganic-semiconductor-based photovoltaics. DSSCs are based on the light harvesting operated by a sensitizer antenna through electron injection into the conduction band of a nanostructured semiconducting oxide, followed by dye regeneration by a redox-shuttle liquid electrolyte or a solid hole-transporting material. Huge efforts have focused in the last 20 years on the development of new efficient materials and innovative architectures for DSSC, in order to improve device performances, progressing toward widespread applications of this technology.

Hybrid organic-inorganic perovskites have emerged at the forefront of the most exciting and innovative materials for this application in recent years. Initially proposed as an evolutionary step of mesostructured solid-state sensitized solar cells, they soon demonstrated ground-breaking performances, reaching outstanding solar-to-electricity conversion efficiencies close to 20% and holding the promise of accessible scalability with low-cost solution processability. Hybrid halide perovskites possess key characteristics for PV manufacturing: intense and broad light absorption, remarkable ambipolar charge mobility and very low non-radiative carrier recombination rates, leading to impressive charge diffusion lengths. These fascinating properties allowed the inclusion of perovskite materials in a variety of device architectures in the last couple of years, exploring different manufacturing procedures including spin coating and evaporation of thin films.

The characterization of perovskite-based solar cells (PSC) offers several challenges to the scientific community, which has been involved in the last decade in the study of DSSC. In fact, these devices present interesting analogies with DSSCs, suggesting the use of the conventional tools employed for DSSC characterization. In this thesis, Time-Correlated Single Photon Counting (TCSPC), Photoinduced Absorption (cw-PIA), Transient Photovoltage (TPV) and Electroabsorption (EA) are employed to shed light on the charge dynamics and interface physics of both classes of devices. Experimental evidence shows that the models developed for data interpretation in DSSC analysis have to be reconsidered when dealing with PSCs. The comparative analysis between the old and the new generation of mesostructured solar cells is completed with a punctual review of the hot debate of the scientific community around each of the investigated aspects. Two main topics are presented and experimentally supported by the author:

1. Charge injection into the TiO2 matrix is limited in mesostructured PSCs, contrary to the case of conventional dye-sensitized solar cells. This implies that, after charge separation in the perovskite itself, electrons are mainly transported towards the selective contact through the same active material. This behaviour is hypothesized following different experimental evidences. First, no clear signature of efficient electron injection was retrieved from luminescence decay analysis, as instead reported by some groups in early works. Second, cw-PIA showed only a very weak feature attributable to the absorption of electrons in the TiO2 matrix in efficient state-of-the-art devices. Efficient injection and fast charge recombination would justify this result, but this would be in contrast with the observed good performances of the solar cells. Third, TPV revealed a double path for charge recombination, with a slower component attributable to the charges injected into the TiO2 substrate, recombining with the typical dynamics observed for solid-state DSSC, and a faster component attributable to the electrons percolating through the perovskite material. The fast recombination component does not negatively affect the performances of the devices, since it is coupled with efficient and fast transport, as demonstrated by means of transient photocurrent on the same samples. The cartoon in Figure 1 shows schematics of the hypothesized double path for charge carriers in mesostructured perovskite solar cells.

2. EA spectroscopy has been employed to demonstrate the influence of the oxide mesoporous scaffold on the perovskite, described in terms of oriented dipoles according to the linear Stark effect theory for the absorption shift. Mesostructured PSCs, in cw-PIA and EA experiments, showed derivative features similar to the DSSC case, revealing the presence of oriented dipoles at the interface between the perovskite and the mesoporous oxide. The orientation provided by the interaction with TiO2 could influence perovskite growth and transport properties; moreover, the presence of electric dipoles in perovskites, which appear to have a strong influence on the PSC working mechanism, has been experimentally tested. Figure 2 provides a schematic view of the direction of the dipole moment variation and the corresponding shape of the Stark derivative feature, together with a DFT simulation confirming the EA experiments.

1. Cartoon illustrating the double path for electron percolation in mesostructured PSC, with slow and fast dynamics associated to TiO2 and perovskite transport respectively, hypothesized according to TCSPC, cw-PIA and TPV analysis.

2. Right: schematics of the dipole moment variation for the TiO2/perovskite (PERO) interface and shape of the corresponding linear Stark signal for oriented dipoles. Left: DFT-calculated ground- and excited-state dipole moments for the system.
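The derivative-shaped EA features attributed to oriented dipoles follow from a linear Stark shift of the absorption band: the field displaces the band rigidly, so the field-on minus field-off spectrum approximates the first derivative of the band. A toy numerical check of that statement (Gaussian band with illustrative parameters, not the measured spectra):

```python
import math

E0, sigma = 1.60, 0.05   # band centre and width in eV (assumed)
dE = 1e-3                # linear Stark shift in eV (assumed)

def A(E):
    """Gaussian absorption band."""
    return math.exp(-0.5 * ((E - E0) / sigma) ** 2)

def ea_signal(E):
    """Field-on minus field-off spectrum: rigidly shifted band minus original."""
    return A(E - dE) - A(E)

def dA_dE(E):
    """Analytic first derivative of the band."""
    return -A(E) * (E - E0) / sigma ** 2

# Antisymmetric, first-derivative-like lineshape around the band centre:
print(ea_signal(E0 - sigma) < 0 < ea_signal(E0 + sigma))           # → True
print(abs(ea_signal(E0 + sigma) + dA_dE(E0 + sigma) * dE) < 1e-4)  # → True
```

An isotropic (non-oriented) dipole distribution would instead produce a quadratic Stark response with a second-derivative shape, which is why the lineshape discriminates the two cases.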
Frequency metrology and trace gas sensing historically developed along two separate paths. In the last decade, the former benefited from the advent of optical frequency combs (OFCs), enabling the absolute calibration of optical frequencies. The latter has witnessed over the past twenty years continuous methodological and technical developments leading to more and more sensitive spectrometers, especially when combined with high-finesse optical cavities, while little attention has been paid to the frequency accuracy of the measurement. For this reason the connection between these two fields is particularly attractive.

The huge success of OFCs lies in their discrete, stable and reproducible structure, readily referable to primary microwave standards, allowing the link between the optical and the radio frequency (RF) domains and providing an absolute frequency axis to any spectroscopic recording. Bringing the benefits of OFCs into the field of molecular spectroscopy and trace gas sensing is the main goal of this thesis. For the purpose of enhancing the measurement sensitivity, several spectroscopic techniques have been developed. Among them, cavity ring-down absorption spectroscopy (CRDS) proved to be very powerful and effective. It is based on the measurement of the photon decay rate inside a high-finesse optical cavity, with high sensitivity and immunity to probing-laser intensity noise.

Our aim is the development of a comb-assisted, continuous-wave (CW) CRDS setup that can provide the high accuracy on the frequency axis offered by an OFC and, at the same time, high sensitivity on the vertical scale. An absolute frequency axis adds several features to a highly sensitive spectroscopic measurement. It enables ultrahigh precision and accuracy, with fractional frequency uncertainties on the line positions as low as 10^-16, and also ensures an increased signal-to-noise ratio and sensitivity thanks to massive averaging of precisely calibrated spectra. These characteristics help to solve many open questions, such as the analysis of the impact of line-shape models on the spectroscopic parameters, providing deep physical insights into the collision processes of molecules. This also permits the absolute determination of molecular line centre frequencies and linestrengths with unprecedented precision and accuracy, allowing tests of quantum-mechanical calculations of the energy levels of molecules.

In this thesis we present a new approach to comb-assisted CRDS, where high sensitivity and frequency accuracy are obtained by using a high-finesse optical cavity (F ~ 100000) and by locking a CW probing laser to one mode of an Er:fiber OFC in the near-IR, respectively. A wide-bandwidth phase lock between the OFC and an extended-cavity diode probing laser (ECDL) is achieved by using their beat note as the driving signal of an acousto-optic modulator (AOM) in a feed-forward configuration, cancelling the frequency fluctuations of the ECDL with respect to the stable OFC. The coherence properties of the comb are efficiently transferred to the CW probing laser, whose frequency noise is reduced with a maximum control bandwidth of 0.8 MHz. This ensures a high spectral purity of the probing laser and very high reproducibility and accuracy of the frequency axis upon scanning the comb repetition rate. The use of an AOM in this setup is very effective because, besides providing for the probing-laser line-narrowing and referencing, it is also used to interrupt the laser beam and to start the ring-down event once a reasonable intensity threshold is measured (Fig. 1). The spectrometer also includes electronics providing to a piezoelectric actuator the sawtooth signal needed to dither the cavity length, and performing a cavity-length tracking in order to maintain the dithering of the cavity resonance centred around the probe-laser frequency during a spectral scan.

The test of the spectrometer performances was performed on the P14e line of CO2 at 1.57 μm at low pressure (Fig. 2). The measured decay rate 1/τ is related to the gas absorption by the law α = (cτ)^-1 - (cτ0)^-1, where 1/τ0 is the decay rate of the empty optical cavity. The huge density of spectral points and the high signal-to-noise ratio of the recording make this system ideal for an accurate line-profile analysis. A limit of detection on the vertical axis as low as 1.6×10^-10 cm^-1 is found over a single spectral scan, which consists of 1500 spectral points acquired over 75 s, as limited by temporal drifts of the empty cavity and by parasitic etalons, as highlighted by the analysis of the Gaussian fit residuals. Several statistically independent measurements were repeated to get an insight into the precision and accuracy of the line centre frequency retrieved from the fitting, demonstrating an error of 38 kHz on the line centre determination in a Doppler-broadening regime in a single spectral acquisition. When averaging over several scans the statistical uncertainty on the line centre frequency reaches 9 kHz, with the absence of systematic errors on the frequency reading. This setup, besides attesting the undisputed advantage given by a comb-based spectrometer, also shows that comb-based highly sensitive detection allows frequency accuracy to be straightforwardly pushed to the typical kHz level of the sub-Doppler regime. In conclusion, we have introduced a new configuration that allows conjugating very high performance on both the horizontal and vertical scales. This ranks the system close to the state of the art of comb-assisted spectrometers, without the need of GHz electro-optic modulators, which do not exist outside the telecommunication spectral region.

1. Experimental setup of the comb-assisted CRDS spectrometer. Blue lines indicate fibre paths. FC: fibre coupler; DAQ: acquisition board; PZT: piezoelectric actuator; PD: photodiode.

2. Top: typical single-scan absorption spectrum at a pressure of 2.5×10^-2 mbar for the P14e line of CO2. Bottom: residuals of a Gaussian fitting for a single scan and for a 50-times averaged spectrum.
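The decay-rate relation quoted above converts measured ring-down times directly into an absorption coefficient; a minimal helper implementing α = (cτ)^-1 - (cτ0)^-1, with illustrative (not measured) decay times:

```python
C_CM_S = 2.99792458e10   # speed of light in cm/s

def absorption_coefficient(tau_s, tau0_s):
    """Gas absorption (cm^-1) from ring-down times with (tau) and without (tau0) gas."""
    return 1.0 / (C_CM_S * tau_s) - 1.0 / (C_CM_S * tau0_s)

# Example: an empty-cavity decay of 50 us shortened to 49.5 us by the absorber
# (assumed numbers, chosen only to exercise the formula):
alpha = absorption_coefficient(49.5e-6, 50.0e-6)
print(f"{alpha:.2e}")    # → 6.74e-09 (cm^-1)
```

Note how a half-percent change in a ~50 μs decay time already resolves absorptions in the 10^-9 cm^-1 range, which is why the technique is insensitive to probing-laser intensity noise yet extremely sensitive to absorption.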
Nanotechnology, the characterization and exploitation of objects with dimensions in the order of 1-100 nm, represents nowadays a fruitful platform for advanced technologies and a challenge for future applications. Besides the empirical use of nanostructures in ancient times for ornamental purposes, such as the Lycurgus Cup (4th century AD, Rome) with its iridescent colours or the glittering stained-glass windows of the medieval cathedral of Sainte-Chapelle (1242, Île de la Cité, Paris), the modern interest in the field was triggered by the visionary talk of R. Feynman, "There's Plenty of Room at the Bottom" (1959). On the other hand, the invention of the Scanning Tunneling Microscope in 1981 by G. Binnig and H. Rohrer, both awarded the Nobel Prize in Physics in 1986, is often referred to as the watershed in nanoscience, as it represents the first tool to directly observe matter with sub-nanometer resolution. Thus, we are nowadays experiencing the results of approximately half a century of intense nano-research, with a large number of discoveries and applications that are more and more affecting our everyday life. In this work we concentrate on the ultrafast optical properties of two classes of nanomaterials: Carbon Nanotubes (CNTs) and Gold Nanoparticles.

CNTs are the one-dimensional allotropic form of carbon, first discovered in 1991 by S. Iijima. With all its different allotropic forms, carbon played a key role in the technological advances of our society. Graphite, the most stable and common carbon allotrope, has a layered and planar structure formed by carbon atoms arranged in a honeycomb lattice. Its first large-scale application dates back to the 16th century, when the discovery of an enormous deposit in Borrowdale (UK), used by the locals for marking sheep, led to the birth of the pencil industry. The advent of the low-dimensional carbon allotropes (fullerene, CNTs and graphene), instead, had to wait until the second half of the 20th century. Fullerene (the ideal 0D system formed by carbon atoms on a hollow sphere) was found experimentally only in 1985 by H. Kroto et al. and earned R. Smalley, H. Kroto and R. Curl the Nobel Prize in Chemistry in 1996. Graphene, a one-atom-thick carbon layer, is the youngest among the carbon allotropes. It was isolated and detected for the first time in 2004 by the group of A. Geim and K. Novoselov, both awarded the Nobel Prize in Physics in 2010. Recently, Nature has published the list of the 100 most cited research papers of all time: in the field of physics, the discovery of CNTs and the discovery of graphene are at the first and third position, respectively. Despite the huge number and diversity of possible applications of these low-dimensional allotropic forms of carbon, the easiest example to explain their importance is probably electronics. Being already at the physical limit for Moore's law, where quantum mechanical effects can no longer be neglected, the further miniaturization of transistors calls for completely new methodologies. Circuits made of graphene and CNTs would benefit both from the smallest possible size, limited by the dimension of atoms, and from the highest ever known carrier mobility, exceeding that of silicon by approximately two orders of magnitude.

Gold nanostructures are metallic systems of nanometer size with unique optical properties that arise from the so-called Localized Surface Plasmons, namely collective oscillations of the free electrons coupled to an external electromagnetic field. The technological importance of gold nanostructures hails from the large variety of possible applications, including nonlinear frequency conversion, high-resolution imaging, photovoltaics, biological sensing and imaging, cancer therapy, biodiagnostics and integrated optical devices. In this respect, the possibility to control the electronic and optical properties of gold nanoparticles by changing their geometry and nanocrystallinity, together with their strong sensitivity to the environment, are of fundamental importance. The study of the ultrafast optical response of gold nanoparticles makes it possible to selectively address the series of relaxation processes that fully characterize the temporal evolution and interactions of plasmons, electrons and phonons inside the material. When the photon energy of an excitation pulse falls inside the absorption spectrum of the gold-nanoparticle plasmonic resonance, coherent oscillations of the electron sea can be launched. These oscillations lose their phase information (plasmon dephasing) within approximately 10 fs and transfer the whole absorbed energy to the electronic distribution. Then, in the first 100 fs, the electrons relax from a non-thermal to a thermal electronic distribution via electron-electron interaction and subsequently transfer energy to the lattice via electron-phonon interaction. Finally, the cooling of the lattice takes place on a relatively slow time scale (on the order of ns) by energy transfer to the environment, i.e. phonon-phonon interaction, which brings the system back to thermal equilibrium.

In this thesis we exploit ultrafast pump-probe spectroscopy to define the time scales and mechanisms of the energy relaxation processes, the interactions with the environment and the carrier mobility in these two classes of low-dimensional materials. A deep understanding of all these factors is a fundamental prerequisite for the further development of their technological applications.

1. Timeline of the carbon allotropes, from graphite to graphene. The pencil industry was born in 1564 thanks to the discovery of a graphite deposit in Borrowdale. Graphene is considered the fundamental building block for this class of low-dimensional materials, obtained by wrapping up (fullerene), rolling up (CNTs) or stacking (graphite) a single-atom layer of carbon.

2. Schematic picture of the relaxation processes following ultrafast optical excitation in gold nanoparticles.
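The electron-phonon equilibration stage described above is commonly sketched with a two-temperature model. The following is such a sketch with order-of-magnitude parameters for gold (all assumed), not the analysis actually performed in the thesis:

```python
# Two-temperature model: hot electron gas cooling into the lattice.
g     = 2.0e16   # electron-phonon coupling, W m^-3 K^-1 (assumed)
gamma = 68.0     # electronic heat-capacity coefficient, J m^-3 K^-2
C_l   = 2.5e6    # lattice heat capacity, J m^-3 K^-1

Te, Tl = 1000.0, 300.0   # hot thermalized electrons, cold lattice
dt = 1e-15               # 1 fs explicit Euler step

for _ in range(20000):            # follow 20 ps of dynamics
    q = g * (Te - Tl)             # energy flow from electrons to lattice
    Te -= q / (gamma * Te) * dt   # electronic heat capacity C_e = gamma * Te
    Tl += q / C_l * dt

print(Te - Tl < 5.0)  # electrons and lattice nearly equilibrated → True
```

Because the electronic heat capacity is orders of magnitude smaller than the lattice one, the electrons cool on the picosecond scale while the lattice temperature barely rises, consistent with the fs/ps/ns hierarchy of the cascade described in the text.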
ATTOSECOND ELECTRON DYNAMICS IN COMPLEX MOLECULES

A pioneering work on quantum coherence effects in the vibrational states of anthracene paved the way for the study of ultrafast dynamical processes in isolated molecules. In the same years the laser sources were experiencing a dramatic development, thanks to the appearance of the first subpicosecond dye lasers (1974) and, a few years later, the achievement of pulses with a duration down to 6 fs (1987). The great results on both sides converged in the development of ultrafast spectroscopy and femtochemistry, providing ultrahigh-speed photography at the atomic and molecular level. Nowadays this research field is well established and gives direct access to dynamical processes of great importance in physics, chemistry and biology. From quantum mechanics we know that the femtosecond temporal scale is intrinsically related to nuclear motion; for this reason a typical experiment with femtosecond resolution is able to investigate in real time the evolution of a reaction, the breaking of a chemical bond, the fragmentation of a complex system after the perturbation of the initial quantum state, down to the vibrational oscillation in diatomic molecules (the ground-state vibrational period of H2 is about 8 fs). Electron dynamics occurs on a faster temporal scale, ranging from a few fs down to a few hundred as (1 as = 10^-18 s); for this reason, in order to track the electronic motion in matter, shorter light pulses are required.

In 1987 and 1988 two independent experiments were able to produce coherent extreme-ultraviolet (XUV) radiation by exploiting the interaction between a strong IR laser field and the atoms of a rare gas. The result was a series of odd harmonics of the fundamental wavelength, corresponding to a train of subfemtosecond bursts. Only a few years later this process was fully understood and called High Harmonic Generation (HHG). Since then, great effort was made to investigate the HHG process in more detail, until the first experimental demonstration of attosecond pulse generation, performed in 2001 by Paul and coworkers, who were able to generate a train of 250 as pulses. During the same year a single attosecond pulse with a time duration of 650 as was successfully isolated from a train of attosecond pulses. These results paved the way for the birth of attosecond physics.

In the last two decades a strong effort was made to characterize attosecond sources and to apply this radiation to the study of ultrafast electronic dynamics in matter. The main problem the community has to face is the low intensity of attosecond sources, since the conversion efficiency of the HHG process is quite low (on the order of 10^-6), resulting in XUV energies usually in the range between hundreds of picojoules and a few nanojoules. This level of energy, and the corresponding intensity, is typically too low for initiating nonlinear processes in matter, and thus for performing attosecond-pump attosecond-probe experiments. For this reason the common solution is to combine the XUV pulses with a VIS/NIR laser field, in an attosecond-pump femtosecond-probe configuration. This setup can still preserve a temporal resolution on the attosecond timescale and in the last years gave important results in investigating ultrafast electron dynamics in atoms, and recently even in simple molecules. Despite these positive results, attosecond physics still didn't show the capability of investigating complex systems, for example biomolecules, where ultrafast electron dynamics are expected to play a fundamental role in many biological processes such as catalysis, respiration, DNA damage by ionizing radiation and photosynthesis.

This thesis describes attosecond-pump femtosecond-probe experiments performed with the goal of understanding the interaction of molecular nitrogen with extreme-ultraviolet (XUV) radiation, which is of crucial importance to completely disclose the atmospheric radiative-transfer processes. By performing a Velocity Map Imaging (VMI) experiment on molecular nitrogen (see Figure 1) we obtained information about the slope and shape of nitrogen (in particular N2+) potential curves, a sort of real-time mapping of molecular electronic states. Then we pushed our investigation to more complex systems, with the aim of studying ultrafast electron dynamics in biomolecules: the pump-probe scans revealed an oscillation in the yield of the immonium dication fragment (see Figure 2), providing for the first time an experimental demonstration of charge migration in a biological molecule.

1. (a) Photoionization experiment on molecular nitrogen performed in a Velocity Map Imaging (VMI) spectrometer. (b) Pump-probe map of the N+ kinetic energy, as a function of the delay between attosecond-pump and few-femtosecond-probe pulses.

2. (a) Oscillatory dynamics on the decaying slope of the delay-dependent yield of the immonium dication (reported in the inset in a 100-fs delay range). (b) Sliding-window Fourier transform of the experimental data. (c) Sliding-window Fourier transform of the numerical result.
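The sliding-window Fourier analysis behind Figure 2 can be illustrated on synthetic data: a decaying oscillatory yield is analyzed with a rectangular window that slides along the delay axis. All parameters below (frequency, decay, window size) are invented for the illustration and are not the experimental values:

```python
import cmath, math

dt = 0.5          # delay step, fs (assumed)
f0 = 0.25         # oscillation frequency, cycles/fs, i.e. a 4 fs period (assumed)
signal = [math.exp(-t * dt / 50.0) * math.cos(2 * math.pi * f0 * t * dt)
          for t in range(400)]  # 200 fs synthetic delay scan

def windowed_power(center, width, f):
    """|Fourier coefficient| of the signal in a window of +-width samples around center."""
    lo, hi = max(0, center - width), min(len(signal), center + width)
    coeff = sum(signal[i] * cmath.exp(-2j * math.pi * f * i * dt)
                for i in range(lo, hi))
    return abs(coeff)

# In every window position, the power at the true oscillation frequency
# dominates the power at an off-resonance test frequency:
print(all(windowed_power(c, 40, f0) > windowed_power(c, 40, 0.1)
          for c in range(40, 360, 40)))  # → True
```

Tracking how the on-resonance power decays with window position is precisely what reveals the persistence of the coherent oscillation along the delay axis.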
PHYSICS

Scanning Auger Microscopy (SAM) is an electron spectro-microscopy technique that joins the imaging resolution of Scanning Electron Microscopy (SEM) and the analytical power of Auger Electron Spectroscopy (AES). In addition to the imaging capabilities provided by SEM, potentially down to the nanometer scale, in a SAM apparatus the focused electron beam from the SEM column is exploited to generate Auger electrons, which are filtered by the AES analyzer. AES is able to provide semi-quantitative atomic concentration and chemical analysis of the few atomic layers at the surface of a sample. Thus, SAM is potentially able to give a relevant insight into the surface electronic, chemical and structural properties of nanostructured systems, with nanoscale lateral and sub-nanoscale depth resolution. These capabilities make SAM a very valuable tool for the investigation of today's nanostructured solid-state systems, with a large variety of applications ranging from sensing to nano-electro-mechanical and energy-harvesting devices and to catalysis.

This work exploits the SAM capabilities, complemented by several other microscopic and spectroscopic techniques, such as Atomic Force Microscopy (AFM) and micro-Raman spectroscopy, to investigate two prototypical nanostructured systems: (1) 1D Tungsten oxide nanorods and (2) 2D graphene and graphene oxide (GO) single and multilayer films.

In the first section, the investigation focuses on the chemical-physical conditions at the surface of amorphous-like Tungsten films, which have an important influence on the nucleation and growth of 1D Tungsten oxide (WOx) nanorods. WOx is a transition metal oxide whose nanostructuring can lead to unique characteristics. The photochromic and electrochromic properties of WOx in nanostructured thin films have been increasingly investigated and applied to the development of devices. Nanostructured WOx is also a well-studied material for photocatalysis and sensing; in its nanorod form, WOx is also appealing for field-emission purposes. In this work, 1D Tungsten oxide nanorods are grown by Pulsed Laser Deposition (PLD) at the surface of amorphous-like Tungsten thin films on Silicon substrates, under a variety of conditions. The growing conditions are correlated to the different structural features of the nanorods with the help of the SAM investigation, which provides an assessment of the local chemical conditions at the film surface. It is shown that, although the oxygen concentration in the bulk of the tungsten oxide films depends strongly on the oxygen partial pressure during deposition, the oxygen is not chemically bound to the W atoms to form a stable compound. Rather, it is adsorbed at the large interface area provided by the nanoscale grain boundaries of the quasi-amorphous Tungsten film. This finding explains the need for a thermal annealing to promote the growth of Tungsten oxide nanorods, which, however, also require other local concurrent conditions, such as material strain or curvature, to nucleate and develop. By the unique surface sensitivity of SAM, it is shown that the Tungsten oxidation occurs at the film surface under selected mechanical conditions, and that a high surface mobility of the oxygen atoms is needed to overcome the difference between the lower oxygen concentration in the film and the higher oxygen stoichiometry in the nanorods.

The second section deals with the thickness characterization of graphene and graphene oxide flakes on different substrates by Auger electron microscopy. Graphene and graphene oxide flakes are routinely produced nowadays with thicknesses ranging from single atomic layers (0.3 nm and about 1 nm, respectively) to few layers. They belong to a wider class of thin-film materials that can be obtained from bulk layered materials, which form crystals with weakly bound layers. Graphite (the bulk form of carbon from which graphene was first extracted) is the most common example and perhaps the most attractive, due to the wide range of astonishing properties of graphene, its biocompatibility not to be forgotten. They are prototypical examples of 2D structures that can be used for state-of-the-art and future nanoscale devices. The family of layered compounds, however, includes other, sometimes very interesting, examples; these include semiconductors or superconductors, which present at the same time several other interesting properties, like charge density waves and the phase transitions connected to them.

In this work, it is shown that the state-of-the-art thickness characterization by SAM, based on semi-empirical electron Effective Attenuation Lengths (EAL), can lead to uncertainties largely exceeding the single-layer thickness. To overcome this limitation, EALs are experimentally determined by the simultaneous exploitation of AFM profiles at the flake border, which provide an absolute thickness reference, and of SAM, which provides highly reliable values. The direct determination of the EALs allows one to obtain sub-monolayer precision in the determination of the layer thickness by SAM, at any point inside the flake surface. The approach is experimentally demonstrated on graphene and graphene oxide (GO) flakes, over conductive and insulating substrates, which are representative of a large variety of applications. The already demonstrated sensitivity of the graphene thickness characterization on SiO2 substrates is increased and, more importantly, these results extend the field of application of SAM characterization considerably beyond the present practice, to graphene oxide films and to metallic substrates, which are hardly accessible by the more acknowledged μ-Raman approach. Moreover, the SAM characterization has the relevant advantage of accessing any point of the full film area, while the direct thickness measurement by AFM is limited to the proximity of the film borders. This advantage is shared with μ-Raman, which has other limitations, such as the reduction of the GO sheets by the laser irradiation. These achievements are promising in view of a wider control of large-area devices based on graphene and its oxide, with the ability to reliably and directly see structural defects and to quantitatively detect chemical impurities on almost any substrate.

Finally, the possibility of obtaining information on the thickness of graphene and GO flakes by secondary-electron contrast, with the typical SEM resolution down to the nm scale, has been demonstrated, once a proper reference is given. As Auger microspectroscopy is often associated with SEM in the same apparatus and is now able to provide the absolute reference, this demonstration paves the way to the high-lateral-resolution characterization of large-area graphene and GO films with sub-monolayer resolution on a very wide variety of substrates, with properties ranging from conductors to insulators and from heavy to light atomic-weight compounds.
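The EAL-based thickness determination described above can be illustrated under the standard exponential attenuation model for the substrate Auger signal, I/I0 = exp(-t/EAL) at normal emission. The following is a sketch of that calibration logic only; the numbers are invented for the example, not values from the thesis.

```python
import math

def calibrate_eal(t_ref_nm, ratio_ref):
    """Effective attenuation length from one AFM-calibrated point.

    t_ref_nm   flake thickness at the border, measured by AFM (nm).
    ratio_ref  attenuated/bare substrate Auger intensity at that point.
    Assumes the exponential attenuation model I/I0 = exp(-t/EAL)
    at normal emission.
    """
    return -t_ref_nm / math.log(ratio_ref)

def thickness(ratio, eal_nm):
    """Local layer thickness anywhere on the flake, same model."""
    return -eal_nm * math.log(ratio)

# Hypothetical calibration point: a 1.2 nm flake border where the
# substrate signal drops to 45% fixes the EAL; that EAL then converts
# any measured ratio on the flake into a local thickness.
eal = calibrate_eal(1.2, 0.45)
t = thickness(0.45 ** 2, eal)   # twice the attenuation -> twice the thickness
```

The point of the calibration is that the AFM profile is only trustworthy at the flake border, while the exponential model, once anchored there, lets SAM read out the thickness at any interior point.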
Industrial Chemistry and Chemical
Engineering | Information Technology |
Interior Architecture and Exhibition Design
| Management, Economics and Industrial
Engineering | Materials Engineering |
Mathematical MODELS and METHODS IN
Engineering | Mechanical Engineering |
Physics | Preservation of THE Architectural
Heritage | Rotary-wing aircrafts | Spatial
Planning and Urban Development |
Structural Seismic and Geotechnical
Engineering | Technology and Design for
Environment and Building | Territorial
Design and Government | Aerospace
Engineering | Architectural and Urban
Design | Architectural Composition |
Architecture, Urban Design, Conservation
of Housing and Landscape | Bioengineering
| Building Engineering | Design | Design
and technologies for cultural heritage |
Electrical Engineering | Energy and Nuclear
Science and Technology | Environmental
and Infrastructures engineering PhD Yearbook | 2015
DOCTORAL PROGRAM IN operating methods, the PhD programme provides related to individual research, with great
design code requirements, fig. 2); or to critical technical systems and to reducing the impact
Earthquakes have affected investigated for giving a new development of the construction preventive techniques for seismic
The Pinacoteca Ambrosiana throughout The whole project, desired by the architectural space to its
Arturo Danusso in Earthquake Engineering namely, the shear wall and rigid architect of the Marina City Concluding, all the performed
In the thesis the issue of seismic knowledge, handed down from the Apennine villages in the environment and, specifically, proposed. Also in this case, defining land archives,
The encaustic painting between revival and invention. The city of Verona saw instead Motta, Gallo Gallina, who the analytical technique in
The Ducal Palace in Mantua. isolation of the monument and, twenty years earlier, by 1938. institutional impasses that,
DOCTORAL PROGRAM
Objectives In their theses, students are expected to develop original scientific
ADVISORY Board
Giorgio Brazzelli, AgustaWestland & Distretto Aerospaziale Lombardo
Matteo Casazza, Leitwind
Massimo Lucchesini, Alenia Aermacchi
Marco Molina, SELEX Galileo
Fabio Nannoni, AgustaWestland
Franco Ongaro, ESTEC
Scholarship Sponsors
AgustaWestland
Different approaches have been also the capability of the method this work, effectiveness of a approach. Also, micro-structure been elaborated on the base
Design optimization of wind turbines is achieved by Sequential computation and finally CoE systems, considering a passive
Computational fluid dynamic analysis resonance effects on the blade. potential advantages, with This approach is indeed
GENERALIZED AEROSERVOELASTIC STABILITY research and development are work, spectral methods of In a dynamical system, the rate
DOCTORAL PROGRAM IN
Objectives of the PhD Programme
Advisory Board
Louis Albrechts
Piero Bassetti
Luca Bertolini
John Forester
Robin Hambleton
Klaus Kunzmann
PORT DEVELOPMENT AND PORT-CITY INTERFACE Asian model, which can be From a spatial point of view, mediating flows of people,
RECONSIDERING LOCAL CIVIC SERVICES. to an analytical-descriptive vision, it would be desirable allocated, while the private
International Disaster Risk making upon investment in in land tenure covenants.
1 David Mitchell, 'Land Tenure and Disaster Risk Management', Land Tenure Journal, 1 (2010); Torsten Grothmann and Anthony Patt, 'Adaptive Capacity and Human Cognition', sedac.ciesin.columbia.edu, 2003 <http://sedac.ciesin.columbia.edu/openmtg/docs/grothmann.pdf> [accessed 4 June 2014]; Torsten Grothmann and Anthony Patt, 'Adaptive Capacity and Human Cognition: The Process of Individual Adaptation to Climate Change', Global Environmental Change, 15 (2005), 199–213 <doi:10.1016/j.gloenvcha.2005.01.002>; Torsten Grothmann and Fritz Reusswig, 'People at Risk of Flooding: Why Some Residents Take Precautionary Action While Others Do Not', Natural Hazards, 38 (2006), 101–120 <doi:10.1007/s11069-005-8604-6>.
Designed Landscapes of Discarded Fill discussed in this research will Parque do Tejo and Tranco, legibility and hiding, by means
My research looks at current ...

... compromised by strong internal competition. The increasing presence of the homeless (tip of the iceberg of a big segment of impoverished population and symptom of a growing social vulnerability) within the urban context raises problems of public space management, of urban order and security. My analysis of the city of Toronto reveals, unfortunately, that homeless people are increasingly marginalized and experience social exclusion in terms of increasing restrictions on where and how they are able to use public space. The city of Toronto has manifested and still manifests an explicit will to make disturbing forms ... from public space. The loss of public visibility and the right to ... in inequalities and irreconcilable separations within the city.

2. Bus shelter at Harbord Street (in Toronto downtown): a woman sleeping on the newest bench with metal arms (photo is mine)
DOCTORAL PROGRAM The study plan includes courses and seminars given by scientists,
Mehrdad Bagherinia - Advisors: Prof. Alberto Corigliano, Prof. Stefano Mariani
Martino Dossi - Supervisors: Prof. Alberto Corigliano and Prof. Stefano Mariani
The control of multipotency and differentiation than in monolayer culture. control the surface stiffness of nuclear deformation. A simple
DOCTORAL PROGRAM
The TEPAC doctorate course intends to provide the students with the time.
Social housing by public private created by local private higher number of social housing Three American case studies
Photovoltaic Flexibles:
Zhengyu Fan - Supervisors: Prof. Alessandra Zanelli, Prof. Carol Monticelli
Tutor: Prof. Alessandra Zanelli
The application of wafer based easier architectural utilization of is more sensitive to strain. The
FROM SCRAP TO PRODUCT the proposal of ecoinnovative An important aspect, emerging District is highly representative of
MULTICRITERIA TOOL TO IMPLEMENT THE The tool, consists of three main technologies and proposing Both structures obtained a high
The potential of a transitional (the only one for which the indices, especially in a semi- use of high-albedo materials
The necessity to reduce energy ventilated air gap, i.e. air flow
Dynamic Adaptive Faade advanced design modelling & and can open to a new conception and peculiar
DOCTORAL PROGRAM
The new doctoral program in Territorial Design and Government context, and the actual actions for mastering administrations and the private sector, research
QUARTIERI IN GIOCO.
Quartieri in gioco is the Neighbourhood Forum are.
The City Network and the Regional more specific) all these images languages. In this direction it comparisons thanks just to
Thanks to:
PhD School
Chairs and Secretaries of the Doctoral Programs
The Doctors of the Graduating Class of 2015