
Critique Titles

2014-15

Coordinator: Clive Speake, c.c.speake@bham.ac.uk
Assessors
Vincent Boyer
Bill Chaplin
Mark Colclough
Martin Freer
Peter Jones
Paul Newman
Richard Palmer
Neil Thomas
Alan Watson
Shuang Zhang
Astrophysics and Particle Astrophysics

The NASA Kepler Mission has revolutionized our understanding of the underlying planet population in the Galaxy [WJC]
Kepler completed its nominal 4-yr Mission in 2013, and is now continuing as the
re-purposed K2 Mission. Kepler has discovered thousands of candidate
exoplanets. To write a good critique you will need to research how this wealth of
data is mined to estimate the true, underlying planet population. What have
been the main discoveries and surprises?

The NASA TESS and ESA PLATO Missions will be needed to provide a
robust estimate of the frequency of small, habitable planets in the
Galaxy [WJC]
The NASA Kepler Mission has been hugely successful, but will it be able to
provide an accurate estimate of the frequency of potentially habitable planets?
Over the coming two decades the TESS and PLATO Missions will explore different
regions of the habitable-zone parameter space. To write a good critique you will
need to understand what TESS and PLATO will offer that Kepler, and other
existing observations, cannot.

Asteroseismology will revolutionize our understanding not only of stars


but also the structure and evolution of the Galaxy [WJC]
Asteroseismology is now enjoying huge success, thanks in large part to space-based observations, e.g. the NASA Kepler Mission and the French-led CoRoT satellite. To write a good critique you will need to research how asteroseismology allows us not only to estimate fundamental stellar properties (notably ages) to high precision, but also to perform unique studies of the interior structure of stars. How have Kepler and other missions played such a crucial role in enabling seismic studies of stars?

The GAIA satellite will revolutionize our understanding of the Galaxy [WJC]
GAIA was launched in December 2013, and will map a billion stars in our Galaxy.
To write a good critique you will need to understand the different types of data
that GAIA will collect, and the way in which the exquisite quality of these data
will improve our understanding of the fundamental properties of different stellar
populations in the Galaxy.

The Sun is about to leave the modern Grand Maximum activity epoch
[WJC]

The current solar activity cycle has been significantly weaker than the past
several cycles, and one must go back over 100 years to find similar levels of
activity, to a time that pre-dates the so-called modern Grand Maximum epoch. To
write a good critique you will need to research the modern and archival
observations that are now suggesting that the Sun may be leaving the Grand
Maximum. What might be the consequences of this?

Uncertainties over the exact chemical composition of the Sun have major implications for stellar, Galactic and extra-Galactic astrophysics [WJC]
Updates over the past decade or so to estimates of chemical abundances in the
solar atmosphere have been the subject of some controversy. They have resulted
in a downward revision of the estimated solar abundance of metals, elements
heavier than hydrogen and helium. Since measured abundances for other stars
are made relative to solar values, the absolute abundance scale for the Galaxy
is, arguably, somewhat uncertain. To write a good critique you will need to
research the updates that have been made to the solar measurements, and
understand why studies of other stars are so reliant on having a robust solar
zero-point for the abundance scale.

The "Bullet Cluster" proves the existence of dark matter [AV, TBD]
Summary to be added

The Universe experienced an inflationary epoch [AV, TBD]


The standard hot-big-bang cosmological model is very successful in explaining our observational data, but it suffers from limitations that have led to the introduction of an additional element in the theory, so-called inflation. You should consider the theoretical foundations of inflation, the observational evidence supporting the existence of an inflationary epoch, and possible competing alternative theories.

There is no convincing evidence for the existence of intermediate mass black holes, i.e. those with masses between 100 solar masses and the supermassive black holes in galactic centres. [AV, TBD]
Black holes are extremely simple mathematical objects that emerge from general relativity. Today it is commonly accepted (though you may not be completely convinced) that black holes exist in nature with masses of a few (or a few tens of) solar masses, and with masses above about a million solar masses. Whether or not black holes with masses between, say, 100 and 10^5 solar masses (commonly called intermediate mass black holes) exist is unclear. In the critique you should consider the theoretical scenarios that have been put forward to suggest that intermediate mass black holes can form, as well as the observational evidence that has been gathered to support (or refute) their existence.

Primordial gravitational waves have been detected. [AV,TBD]
In March 2014 the BICEP2 collaboration announced the detection of the so-called B-mode polarization signal in CMB maps at a specific angular scale. This signature was attributed to gravitational waves produced in the very early universe, which in turn provides (at least at the order-of-magnitude level) an indication of the energy scale and epoch of inflation. Since then this claim has been furiously disputed. In the critique you should consider the signature of gravitational waves in CMB maps, as well as physical processes other than gravitational waves that could mimic a similar signal, and critically review the different pieces of information that have actually been accumulated so far.

Cold atoms
Quantum mechanics allows for absolutely secure communications (that
even the NSA cannot break) [VB]
Today's secure communications between your web browser and your bank's online banking website are guaranteed by the power of mathematics, which makes it extremely difficult, and therefore improbable, that someone can decipher the content of the transactions. Hard, but not impossible, and the latest revelations about the NSA (and GCHQ!) spying programmes have shown that they might not be so far from cracking these encryption methods. In the meantime physicists have invented quantum cryptography, where security is ensured by the laws of physics. Surely those cannot be broken, but the devil is in the details and all physical systems have imperfections that can be exploited by hackers. For this critique you will have to dive into the most mysterious, abstract and counter-intuitive aspects of quantum mechanics to understand how quantum encryption works, and look into the latest research on how to prove security and how to exploit implementation imperfections to defeat it.
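
As an illustrative starting point, the sketch below simulates the sifting stage of the best-known quantum key distribution scheme, BB84 (the text does not name a specific protocol, so this choice and all the numbers are assumptions), with no noise and no eavesdropper:

```python
# A minimal, idealised sketch of BB84 sifting (no eavesdropper, no noise),
# illustrating why security rests on the measurement postulate rather than
# on computational hardness.
import random

n = 32  # number of qubits sent; illustrative only

# Alice chooses random bits and random encoding bases (0 = rectilinear, 1 = diagonal)
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob measures each qubit in a randomly chosen basis
bob_bases = [random.randint(0, 1) for _ in range(n)]

bob_results = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if a_basis == b_basis:
        bob_results.append(bit)                   # same basis: outcome is deterministic
    else:
        bob_results.append(random.randint(0, 1))  # wrong basis: outcome is random

# Sifting: keep only the positions where the bases happened to agree
key = [b for b, a_basis, b_basis in zip(bob_results, alice_bases, bob_bases)
       if a_basis == b_basis]
print("sifted key:", key)

# An eavesdropper who measures and resends would disturb about a quarter of the
# sifted bits, which Alice and Bob can detect by publicly comparing a sample.
```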

Laser cooling cannot work for arbitrary molecules [VB]


Laser cooling was invented in the 80s and is now routinely used to cool gases of
atoms down to a few millionths of a degree above absolute zero. These cold
atoms form the basis for the best clocks in the world and are now being used to
make the most sensitive magnetic and gravitational probes. The catch with laser
cooling is that it relies on many photon exchanges between atoms and laser
light, and only specific atomic transitions support this. Molecules do not have
such transitions and cannot be laser cooled, although there would be a wealth of
applications for cold molecules, ranging from electric field sensing to quantum
chemistry. There may, however, be ways around this limitation, and physicists are becoming ever more cunning in their quest to obtain cold molecules. So far these are limited successes, and we would really like to be able to laser cool many types of molecules to unlock their potential. For this critique, you will have to gain an understanding of how laser cooling works and of the difference between atomic and molecular level structures, a trip deep into the roots of quantum mechanics! Finally, you will have to look into the latest research in cold molecules and evaluate the prospects of finding a generic method to cool them to ultra-low temperatures.
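
For a rough sense of scale, here is a minimal sketch of the standard Doppler cooling limit, T_D = ħΓ/(2k_B), using the rubidium D2 linewidth as an illustrative input (the choice of atom is an assumption, not something specified above):

```python
# Rough estimate of the Doppler cooling limit T_D = hbar * Gamma / (2 k_B),
# using the rubidium D2 line as an illustrative example.
import math

hbar  = 1.055e-34              # J s
k_B   = 1.381e-23              # J/K
Gamma = 2 * math.pi * 6.07e6   # natural linewidth of the Rb D2 transition (rad/s)

T_doppler = hbar * Gamma / (2 * k_B)
print(f"Doppler limit for Rb: {T_doppler * 1e6:.0f} microkelvin")  # ~150 uK
```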

Condensed Matter Physics [MC]

Do "stripes" cause high temperature superconductivity?

Simple ideas based on a fermion gas or liquid are an effective way of describing
the electrical properties of many conductors. However, materials such as the
high temperature superconductors exhibit strong correlations amongst the
electrons that must be treated differently. In particular, there is a spontaneous
clustering into sheets or stripes of alternating high and low charge density. The
stripes can be static or dynamic, and appear to be important in both electrical
conductivity and superconductivity. But are they the key to the explanation of
superconductivity in these materials, or just bystanders? A good critique will
describe the basis and evidence for stripes, and how their behaviour is correlated
with the appearance of conductivity and superconductivity.

High Tc superconductors cooled by liquid nitrogen will be valuable for high-field/high-current applications
The existence (since 1987) of superconductors having a superconducting
transition temperature above the boiling point of liquid nitrogen (77 K) would
seem to be a great advantage for practical devices that rely on
superconductivity, in particular superconducting electromagnets and power
cables. The refrigeration requirements appear to be much less than those for conventional superconductors, which need to be cooled below ~10 K. However, most
practical devices still use the low temperature superconductors. The purpose of
this critique is to explain why, and address the likelihood of high current, high Tc
devices becoming common. It will need to outline the technical difficulties
inherent in the materials, the costs and benefits of low temperature operation,
and the capabilities of today's conventional superconductors.
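
One way into the costs and benefits of low temperature operation is the ideal Carnot bound on the work needed to pump heat from the cold stage to room temperature; the sketch below compares liquid-helium and liquid-nitrogen temperatures (real cryocoolers fall far short of Carnot, so treat the numbers as lower bounds):

```python
# Ideal (Carnot) work needed to remove 1 J of heat at cryogenic temperature,
# rejecting it at room temperature: W/Q = (T_hot - T_cold) / T_cold.
T_hot = 300.0   # K, room temperature

for T_cold in (4.2, 77.0):   # liquid helium vs liquid nitrogen
    w_per_joule = (T_hot - T_cold) / T_cold
    print(f"T_cold = {T_cold:5.1f} K: ideal work = {w_per_joule:5.1f} J per J of heat removed")

# ~70 J/J at 4.2 K versus ~2.9 J/J at 77 K: a factor of ~24 even in the ideal case.
# Real cryocoolers achieve only a few per cent of Carnot, so practical costs are higher still.
```
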
The pnictide superconductors are just like the cuprates
The recently discovered pnictide superconductors have at least some superficial
similarities to the (high-temperature) cuprate superconductors, which have
been known and extensively studied since 1986. For example, they are layered
compounds, based on a non-superconducting parent which can be doped. In
both cases, doping causes a transition from a magnetically ordered material to a
superconductor. The question is: how deep do the parallels go? Is the
mechanism behind the superconductivity the same, or does superconductivity in
the pnictides have more in common with classical superconductors like
niobium? Will the same approaches that have yielded cuprate superconductors
of increasing transition temperature work for the pnictides? A good critique will
need to outline the behaviours that various experiments reveal, and try to
evaluate which similarities and differences are the most significant.
The strength of man-made magnetic fields has reached its practical
limit
The purpose of this critique is to look at the technical and economic limits of
high-strength magnets and judge what future progress is likely. There is an
interesting history of permanent magnetic materials, culminating in (for example) neodymium-iron-boron. However, all such materials have a saturation limit, and much bigger fields can be achieved with what amount to well-designed solenoids. The demands on electrical power, cooling, and mechanical strength are important limitations. Attention should be paid to the advantages of hybrid devices, for example partly superconducting and partly copper devices, or electromagnets incorporating magnetic materials, and to the usefulness of magnets that produce a high field of short duration, by pulsed or flux-compression techniques.
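
A quick way to see why mechanical strength matters as much as power and cooling is the magnetic (Maxwell) pressure B²/2μ0 that the windings must withstand; the field values below are purely illustrative:

```python
# Magnetic (Maxwell) pressure B^2 / (2 mu_0), which the magnet structure must contain.
# Illustrates why mechanical strength, alongside power and cooling, limits field strength.
mu_0 = 4e-7 * 3.141592653589793   # vacuum permeability, T m / A

for B in (1.5, 10.0, 30.0, 100.0):   # tesla: illustrative values
    pressure = B**2 / (2 * mu_0)      # pascals
    print(f"B = {B:5.1f} T: magnetic pressure = {pressure / 1e6:8.1f} MPa")

# At 30 T the pressure (~360 MPa) is comparable to the tensile strength of copper;
# at 100 T (~4 GPa) only pulsed or flux-compression techniques survive, and then only briefly.
```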

Medical Physics
Are antiprotons the way forward in radiotherapy? [PJ]
Radiotherapy with beams of protons or ions will replace x-ray based
treatments over the next 5-10 years [PJ]
Compton cameras will revolutionise the process of medical imaging [PJ]

Meta-materials
Is the photon momentum negative in a negative refractive index
medium? [SZ]

Can a positive index medium be used for perfect imaging based on the
principle of time reversal? [SZ]

Nanophysics [RP]
Nanotechnology will revolutionise health-care
Man-made nanoparticles are the same size as protein molecules and there have
been many proposals for their innovative use in both the diagnosis and the
treatment of disease. The question is: are these proposals realistic, and what impact will they have? (And what are the risks?)
I would suggest bypassing the economic aspects of this question and focusing on
the properties and associated capabilities of the nanoparticles proposed, perhaps
with emphasis on one class of disease or diagnosis. Then you could compare
nano-technology with existing approaches and other proposals for the future.
Nanoparticles are ideal for magnetic data recording
Higher data density means smaller bits of information. Feynman envisaged a bit consisting of a cluster of 100 metal atoms. But how would we write information to, and read information from, these sub-microscopic species?
The core of this critique is an understanding of magnetism, especially the size-dependent transition between ferro- and para-magnetism and what we might do in terms of atomic engineering to control it.
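
A minimal sketch of the size-dependent effect at the heart of this topic is the Néel–Arrhenius relaxation time τ ≈ τ0 exp(KV/k_BT); the anisotropy constant below is an assumed value, of the order found for cobalt, and is not taken from the text:

```python
# Neel-Arrhenius estimate of the moment-flipping time tau = tau_0 * exp(K V / k_B T)
# for spherical nanoparticles. K and tau_0 are illustrative values.
import math

k_B   = 1.381e-23   # J/K
T     = 300.0       # K
K     = 4.5e5       # J/m^3, illustrative uniaxial anisotropy constant
tau_0 = 1e-9        # s, attempt time

for d_nm in (3, 6, 9, 12):                 # particle diameter in nm
    V = (math.pi / 6) * (d_nm * 1e-9)**3   # volume of a sphere
    tau = tau_0 * math.exp(K * V / (k_B * T))
    print(f"d = {d_nm:2d} nm: relaxation time ~ {tau:.2e} s")

# The exponential dependence on volume means a factor of a few in diameter separates
# particles that lose their magnetisation in microseconds (superparamagnetic, useless
# as bits) from particles that hold it for years.
```
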
Graphene is the future of electronics
The isolation of single layers of graphite (graphene) 10 years ago sparked huge
interest and the award of a Nobel prize. Journalists sometimes call graphene the
wonder material. So what's hype and what's reality? I would suggest (a)
reviewing the physics of graphene and (b) considering a couple of sectors of
electronics, e.g., computing and displays, analysing the existing technology (e.g.
silicon MOSFETs) and establishing what needs to happen for graphene to take
over.

Nuclear Physics [MF,PJ]


The "Energy Amplifier" is the future of fission-based energy production
The energy amplifier was proposed by the Nobel Prize winner Carlo Rubbia. It uses a beam of protons to induce fission in a sub-critical reactor through the production of a flux of neutrons. The energy produced from the fission exceeds that required to run the accelerator producing the protons, and in this way the energy of the proton beam is amplified. There are many postulated advantages of this approach, which include the transmutation of long-lived nuclear waste and the inherent safety of a sub-critical assembly. A good critique would contain a detailed evaluation of the potential of the energy amplifier, measured against other fission reactors (including future generations), and present a critique of the challenges and barriers to demonstrating and deploying the technology. It would be expected to demonstrate a detailed understanding of the underlying physics together with a critical assessment of the merits of the technology.
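
As a starting point for the order-of-magnitude side of such an evaluation, here is a back-of-envelope sketch of the energy gain of a sub-critical, accelerator-driven assembly; every number in it (spallation yield, k_eff, efficiencies) is an illustrative assumption rather than a design value:

```python
# Back-of-envelope energy gain of a sub-critical, accelerator-driven assembly.
# All numbers are illustrative, order-of-magnitude values.
E_proton  = 1.0e9    # eV, kinetic energy of each beam proton
n_spall   = 30       # spallation neutrons produced per proton in a heavy target
k_eff     = 0.98     # sub-critical multiplication factor
nu_bar    = 2.5      # neutrons released per fission
E_fission = 2.0e8    # eV, energy released per fission (~200 MeV)

# Each source neutron starts a dying-away chain; the fissions it induces
# number roughly k_eff / (1 - k_eff) / nu_bar.
fissions_per_neutron = k_eff / (1.0 - k_eff) / nu_bar
fissions_per_proton  = n_spall * fissions_per_neutron

thermal_gain = fissions_per_proton * E_fission / E_proton
print(f"fissions per proton ~ {fissions_per_proton:.0f}")
print(f"thermal energy gain ~ {thermal_gain:.0f}")

# Folding in ~40% thermal-to-electric conversion and ~40% accelerator wall-plug
# efficiency leaves a net electrical gain of order 10-20: comfortably positive,
# but sensitive to how close to criticality the assembly is run.
net_gain = thermal_gain * 0.4 * 0.4
print(f"net electrical gain ~ {net_gain:.0f}")
```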

We know enough nuclear physics to understand nucleosynthesis


The fact that the elements are synthesised in stars, and that the heavier elements are recycled through repeated sequences of supernovae and star formation, has been known for 50 years, following great contributions from Hoyle, Fowler and Gamow. The processes include the CNO cycle, the s-process, the r-process and the rp-process. However, uncertainty still exists. There is debate about
the r-process and recently a new p-process has been introduced together with an
i-process! Part of the challenge is that many of the reactions involve nuclear
isotopes that do not exist on Earth and hence our knowledge of their properties
is limited. A good critique will involve a detailed review of the processes by which
elements are synthesised together with an examination of the present scientific
literature. The critique should highlight the main areas of uncertainty, together
with key suggestions of how these uncertainties may be overcome. A good
critique will contain examples of order of magnitude estimates of the effects of
variations of the reaction rates.

Hyper-deformed nuclei will never be identified


The picture of the nucleus we acquire at school is one in which there is a spherical collection of protons and neutrons in something approaching a liquid drop. However, pioneering work in the 1980s found that it is possible for nuclei to exist in what were called superdeformed states. Here the nucleus is deformed such that one axis is twice the length of the other two: rugby-ball shaped. It is also possible for nuclei to be pear-shaped (octupole deformation). Given that a liquid drop has its lowest energy when spherical, how is it possible that nuclei can have quasi-stable superdeformed shapes? The holy grail has been to discover hyperdeformation, where the deformed axis is three times the length of the other two. The critique should develop an understanding of the underlying physics behind stable deformed nuclear shapes and the experimental evidence for such shapes, together with the historical development. A good critique will provide a critical evaluation of the published experiments searching for hyperdeformation and give a reasoned answer to the question posed in the title.
There is no limit to the size of the nucleus
The number of elements produced to date is 118 and the number of isotopes
about 3000. At the more massive scale a neutron star may be viewed as a giant nucleus, where lessons learned about the nuclear equation of state from nuclei can be applied to understand the maximum size of a neutron star and neutron-star glitches. In principle, we can keep adding more protons and neutrons to a
nucleus creating heavier elements and isotopes. What is it, if anything, that
limits the size of a nucleus? How in detail are objects such as neutron-stars
linked to nuclei? What are the current experimental limits? What is the potential
for synthesising massive nuclei? A good critique will explore the underlying
properties of the strong interaction and how these link to the limits of the
existence of nuclear matter. The critique will develop a detailed understanding of
the synthesis of superheavy elements and the challenges of reaching the island
of stability and beyond. It will also provide a credible link, or otherwise, between
the properties of objects such as neutron-stars and nuclei and understand where
the current scientific challenges lie.

Radioactive C14 is of limited practical use in dating artefacts


Radiocarbon dating is a widely used technique for evaluating the age of
archaeological objects. It was used, for example, to date the Shroud of Turin to
the 1300s, casting doubt over its authenticity. The half-life of carbon-14 is 5730 years, which limits its usefulness. Moreover, there are contaminating sources of carbon-14 from the bomb-testing programme. So just how reliable is radiocarbon dating, what are the limits on its accuracy, how is the dating performed and what are the potential sources of error? The critique would review the range of dating techniques, including accelerator-based methods. It should set out the current limitations and critique the current developments aimed at improving accelerator-based dating methods, including the introduction of new dating isotopes. A projection of the state of the art in the field over a 10-year horizon should be made.
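
The basic dating relation follows directly from the decay law; a minimal sketch, with the measured carbon-14 fraction chosen purely for illustration:

```python
# Age from the radioactive decay law: N = N0 * exp(-lambda * t),
# so t = (t_half / ln 2) * ln(N0 / N). The measured fraction is illustrative.
import math

t_half = 5730.0              # years, half-life of carbon-14
measured_fraction = 0.92     # illustrative: remaining C-14 relative to the living value

age = (t_half / math.log(2)) * math.log(1.0 / measured_fraction)
print(f"estimated age: {age:.0f} years")   # ~690 years for this fraction

# A 1% error in the measured fraction shifts the age by roughly
# (t_half / ln 2) * 0.01 ~ 80 years, one way to see why calibration,
# contamination and counting statistics dominate the error budget.
```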

We are unlikely to expand our knowledge of the nucleus by exploring closer to the drip lines
Moving from stable to beta-unstable nuclei involves the addition of protons or neutrons. For example, 14C to 19C are unstable against beta decay. As more and more neutrons are added the half-life gets shorter, and eventually, as the drip-line is crossed, the nucleus neutron-decays. Given that the neutron
interacts via the strong force and there is no electrostatic repulsion, why is it that
the neutron is not bound to the nucleus? Moreover, it has been discovered that
many of the lessons we have learned from stable, conventional, nuclei are no
longer valid when one reaches the drip-lines. The size of the nucleus changes
dramatically, magic numbers are different and borromean nuclei appear. How
can understanding such behaviour tell us about properties of the underlying
strong interaction? A good critique will examine the existing evidence for the evolution of nuclear properties towards the drip-lines and attempt to unravel how our current knowledge of this behaviour links to the strong force. This should be illustrated by a dissection of key publications in the field.

Particle Physics
A 250-500 GeV electron-positron linear collider has more scientific merit
than upgrades to the LHC [PN]
There are, at present, several different ideas around for the future direction of
particle physics, all of which have many advocates. In this critique, you are
invited to assess the scientific arguments in favour of each and draw appropriate
conclusions based on reasoned arguments. Following the discovery of the Higgs
boson, is further discovery more likely by probing its properties as precisely as
possible in the clean environment of an electron-positron `Higgs factory' linear
collider, or is the planned high luminosity upgrade of the LHC more likely to
reveal previously unknown physics? What about other proposals such as very
large circular tunnels for electron-positron or proton-proton collisions at vastly
increased energies? Is it, in any case, right to base the future aims of particle
physics only on searching for new discoveries, or does detailed measurement
still have a role? This is clearly a subject where there is no absolute right or
wrong answer. A good critique will take a particular view and argue the case for
it, whilst also acknowledging the arguments for other views.

Flavour physics experiments will make the first discovery of physics beyond the Standard Model [PN]
It is commonly assumed that new discoveries in particle physics require ever
larger new facilities probing ever higher centre-of-mass energies. However, this
view ignores the role of `quantum loops', the best-known of which is the fluctuation of a photon into an electron-positron pair, borrowing mass-energy
from the vacuum for a short period of time as allowed by the Uncertainty
principle, before the photon re-forms. Much more complex fluctuations become
possible in high energy physics experiments, in principle including temporarily
created particles with masses way beyond those that can be explored by directly
creating the particles. Heavy `flavour physics' experiments such as LHCb and NA62 are particularly sensitive to such effects when they study ultra-rare processes, some of which take place in only one in 10 billion of their collisions. This critique invites you to look into these processes and to express a well-argued view on whether bigger is always better, or whether the more subtle methods adopted by flavour physics experiments are more likely to reveal new, previously unknown particles.
Supersymmetry is dead [PN]
Before the Large Hadron Collider turned on, a large fraction (possibly a majority)
of particle physicists expected it to reveal supersymmetry (SUSY) and to do so
quickly; the signatures were expected by many to be much easier to establish
than those of the Higgs boson. There were also compelling theoretical reasons
for the need for SUSY to solve the so-called `hierarchy' and `fine-tuning'
problems. These arguments became stronger with the discovery of the Higgs
boson. Yet we have now reached the end of the first LHC run and, so far, there is
not a sniff of SUSY. Could it be that it is hiding undiscovered in the existing LHC
data? Could it be that its parameters are cunningly chosen by nature so as to
avoid detection? Are Stephen Hawking and others right to conclude that there is
no SUSY and instead the universe is balanced on a finely-tuned knife edge,
possibly in a metastable state that could decay at any moment? Will the future
runs of the LHC give us the answer, or are other experimental results likely to be
more revealing? This critique gives you the opportunity to wrestle with some of
the deepest philosophical questions currently facing fundamental physics and to
formulate an argument based on what we know so far, with a little added
speculation.
The Universe must have more than 4 space-time dimensions [PN]
How can it be that the gravitational force is so pathetically weak compared with the other forces of nature, so weak that it can be completely neglected in all experiments studying the interactions between fundamental particles? This huge
difference between the strengths of forces is a serious barrier to the
development of a unified understanding of all of the forces of nature in a single
framework. One possible explanation is that nature contains additional
dimensions beyond those that we experience in our everyday lives and that the
gravitational force is simply a relic that we are able to observe of a much stronger force acting in or between dimensions that we cannot. Strange though these
concepts appear, they are well established; those who seek to unify all of the
forces using string theories generally allow themselves to think in as many
dimensions as they like. Some of the consequences, such as the possible
production of mini-black holes at the LHC, may be approachable experimentally.
In this critique, you are invited to ask whether a universe is possible with just the
obvious 4 space-time dimensions, or whether the force unification arguments are
so compelling that it is necessary to introduce more hidden dimensions.
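
The "pathetically weak" claim is easy to quantify: the sketch below compares the gravitational and electrostatic forces between two protons (the choice of protons is simply an illustrative one):

```python
# Ratio of the gravitational to the electrostatic force between two protons
# (the separation cancels, since both forces fall as 1/r^2).
G   = 6.674e-11    # m^3 kg^-1 s^-2
k_e = 8.988e9      # N m^2 C^-2
m_p = 1.673e-27    # kg
e   = 1.602e-19    # C

ratio = (G * m_p**2) / (k_e * e**2)
print(f"F_grav / F_electric ~ {ratio:.1e}")   # ~8e-37
```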

A muon collider will be operational within the next 30 years [AW]

A muon collider could provide a compact, circular alternative to a TeV-scale linear collider. There are however considerable technical challenges in producing muon beams able to give the desired luminosities. You could consider the physics potential of a muon collider compared with the alternative electron collider as well as the technical feasibility of building one. You might touch on the financial implications, but don't worry about the political aspects of the question.
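
One of the technical challenges not spelled out above is the muon's 2.2 µs lifetime; here is a minimal sketch of the time-dilated lifetime and decay length at an illustrative beam energy (the 500 GeV figure is an assumption):

```python
# Time-dilated muon lifetime and decay length at an illustrative beam energy.
tau_rest   = 2.197e-6   # s, muon lifetime at rest
m_mu_GeV   = 0.1057     # GeV, muon mass
c          = 3.0e8      # m/s
E_beam_GeV = 500.0      # illustrative beam energy

gamma = E_beam_GeV / m_mu_GeV
tau_lab = gamma * tau_rest
decay_length = gamma * c * tau_rest   # beta ~ 1 at this energy

print(f"gamma ~ {gamma:.0f}, lab-frame lifetime ~ {tau_lab * 1e3:.1f} ms")
print(f"decay length ~ {decay_length / 1e3:.0f} km")

# A lab-frame lifetime of ~10 ms allows hundreds of turns of a few-kilometre ring,
# but it also means the muons must be produced, cooled and accelerated very quickly.
```
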
The neutrino is a Majorana particle [AW]
The observation of neutrino mixing confirms that neutrinos are not massless.
However their masses are millions of times smaller than those of the other
fundamental fermions. If the neutrino is a Majorana fermion it is possible for a
see-saw mechanism to explain this difference in mass between neutrinos and
other fermions. This model might explain other outstanding problems in particle
cosmology. But is it true? You could consider the theoretical arguments, the
alternative models, and of course the experimental evidence and the prospects
for resolving the question.
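
The see-saw relation itself is a one-line estimate, m_ν ≈ m_D²/M; taking an electroweak-scale Dirac mass and the neutrino mass scale suggested by oscillation data (both illustrative assumptions) gives a feel for the heavy Majorana scale involved:

```python
# Simplest (type-I) see-saw estimate: m_nu ~ m_D^2 / M, so M ~ m_D^2 / m_nu.
# Both input values are illustrative.
m_D  = 100.0      # GeV, Dirac mass taken at the electroweak scale
m_nu = 0.05e-9    # GeV, i.e. ~0.05 eV, the scale suggested by oscillation data

M = m_D**2 / m_nu
print(f"heavy Majorana mass scale M ~ {M:.1e} GeV")   # ~2e14 GeV, near the GUT scale
```
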
The neutrino mixing matrix will be fully measured within the next
decade [AW]
Neutrino mixing has now been observed in a number of experiments using both
natural and artificial neutrino sources. With 3 generations of neutrinos the
neutrino mixing matrix can be described in terms of 3 mixing angles plus a
complex phase which would describe CP-violating effects. Although considerable
progress has been made, some elements of the matrix remain unmeasured. You
should consider different scenarios and the capabilities of extant and proposed
experiments to assess the prospects for the target described in the question
being achieved.
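
For orientation, the sketch below builds the standard parametrisation of the 3x3 mixing (PMNS) matrix from three angles and a CP phase; the angle values are roughly those measured by 2014, while the phase is unknown and set to zero purely for illustration:

```python
# Standard parametrisation of the 3x3 neutrino (PMNS) mixing matrix in terms of
# three mixing angles and one CP-violating phase.
import numpy as np

def pmns(theta12, theta23, theta13, delta):
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [ c12 * c13,                        s12 * c13,                        s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,  c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ep, -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])

# Illustrative angles (roughly the 2014 best-fit values); delta is unknown, set to 0 here.
U = pmns(np.radians(33.5), np.radians(45.0), np.radians(8.5), 0.0)
print(np.round(np.abs(U), 3))                            # magnitudes of the mixing elements
print("unitary:", np.allclose(U @ U.conj().T, np.eye(3)))  # sanity check
```
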
Plasma wakefield acceleration underpins the long-term future for
particle physics experiments.[AW]
One of the major drivers of the cost of linear colliders is the accelerating
gradient. With current RF acceleration technology a TeV-scale linear collider
would be tens of kilometres long. Plasma wakefield acceleration is a promising way of
providing high-gradient acceleration. You might compare with other acceleration
technologies and perhaps consider other approaches to measuring the properties
of fundamental particles and forces.
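
The scale of the problem can be seen from a one-line estimate, active length ≈ energy / gradient; the gradients below are illustrative assumptions (roughly conventional RF versus what plasma experiments have demonstrated over short distances):

```python
# Rough active length needed to reach a given beam energy: L ~ E / gradient.
E_target = 500e9          # eV per beam, illustrative

for name, gradient in [("conventional RF", 30e6), ("plasma wakefield", 10e9)]:  # V/m
    length_km = E_target / gradient / 1e3
    print(f"{name:16s}: ~{length_km:6.1f} km of active accelerating structure")

# The open question is whether many plasma stages can be chained together while
# preserving the beam quality and luminosity a collider needs.
```
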
General Physics
The Second Law of Thermodynamics cannot be violated. [NT]
Living Systems obey the Second Law of Thermodynamics [NT]
Quantum Mechanics cannot describe physical reality [NT]
Quantum computers are limited by decoherence [NT]

Negative resistance and negative viscosity play an important role in daily life [NT]
Statistical physics can be applied to finance and economics [NT]
"D-wave Systems Inc. have made a quantum computer" [MC]
The physics and maths of quantum computation, in which binary bits become
qubits which can be in a quantum mechanical superposition of states, has been
extensively discussed. In some research laboratories, devices consisting of one
or a few qubits have been shown to (sometimes!) behave as quantum mechanics
describes. However, one company claims to be making complete quantum
computers, which it offers for sale. While the company has great technical
competence, many people view their claims of quantum computation with
suspicion. Deciding what is really going on in the D-wave computer, and whether
it can be described as quantum computing as the term is usually understood, is
the aim of this critique.

Topical Physics

The future of energy production lies in fusion reactors [PJ]


The possibility of energy production via nuclear fusion, mirroring processes in the Sun, promises a solution to future energy needs with a virtually limitless supply of fuel. The UK has led this field through the JET experiment; the next stage is the construction of ITER, followed by DEMO, which is intended to supply electricity to the grid around 2040. However, the challenges around realising fusion power remain enormous; there remain scientific questions regarding plasma stability, how to produce sufficient tritium to keep the reactors sustainably fuelled, how to extract sufficient electricity to make it worthwhile, and, perhaps most significantly, how to develop materials that can withstand the huge neutron fluxes produced in the reactions. There is also the alternative of inertially confined fusion, as well as alternative approaches to magnetic confinement fusion. A good critique will present a detailed understanding of the key scientific challenges, backed up with appropriate calculations. It would examine the potential of fusion power as measured against other future-generation nuclear technologies. As such, the critique should cover material well beyond the basic physics of fusion reactors encountered in the year 3/4 Fission and Fusion module.
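
As an example of the kind of appropriate calculation asked for, a back-of-envelope estimate of how much tritium a D-T plant burns (the 1 GW fusion power figure is illustrative):

```python
# Tritium burn rate of a D-T fusion plant: each D + T -> He-4 + n releases 17.6 MeV.
P_fusion = 1.0e9                       # W of fusion power, illustrative
E_per_reaction = 17.6e6 * 1.602e-19    # J
m_tritium = 3.016 * 1.661e-27          # kg per triton
seconds_per_year = 3.156e7

reactions_per_s = P_fusion / E_per_reaction
tritium_per_year = reactions_per_s * m_tritium * seconds_per_year
print(f"tritium consumed ~ {tritium_per_year:.0f} kg per GW-year of fusion power")  # ~56 kg

# The world's civil tritium inventory is of order tens of kilograms, which is why
# breeding tritium in a lithium blanket, with an adequate breeding ratio, is essential.
```
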
Clean energy production will provide a significant fraction of the UK
power needs [PJ]
The UK is presently engaged in a major revolution in how it delivers electricity, with a bid to decarbonise to 20% of the level in the early 1990s; this is enshrined in the Climate Change Act of 2008. Current electricity generation is 30-40% from coal power. With the decommissioning of large coal power stations already started, beginning with Didcot Power Station, the need to fill the gap is pressing. Clean energy is synonymous with low carbon, and the solutions are various, including, but not limited to, wind, wave, tidal, nuclear, geothermal and solar. At present the UK has been pursuing the use of offshore wind, with the world's largest investment in this technology. The question is: what is the role of the other technologies, particularly now that shale gas, with potential carbon capture and storage, is on the horizon? A good critique would involve a detailed study of the
strengths and weaknesses of the various technologies, a range of calculations to
back up any views taken, and include a summary of the potential options and
priorities. The critique may also consider the future potential of energy storage
options.
Energy storage and transport and not the production of energy are the
keys to a society with clean power [PJ]
In its ambition to resculpt the generation of electricity, the UK government are
examining a range of possibilities that include low carbon electricity generation,
energy efficiency and smart use of electricity. Key elements of the latter are
energy storage and the use of alternative energy vectors. An example of the latter is the use of hydrogen, which can be stored and transported, to generate electricity in fuel cells. There is also significant interest in the intelligent use of electricity through smart grids. With the significant utilisation of offshore wind power, with its intrinsic variability, novel solutions are required to harness this form of energy production at capacity. This critique
is aimed at understanding how novel energy vectors and storage could
potentially play a role in the future. A good critique would evaluate the potential
capacity for hydrogen storage and energy storage using current technologies
and the practicality of deployment on a large scale. An assessment of how this might impact the cost of electricity, in order to understand its competitiveness against other approaches, would be an important dimension.
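
A minimal sketch of the kind of capacity estimate suggested above, for storing surplus electricity as hydrogen; the efficiencies are assumed, mid-range illustrative values rather than figures from any particular technology:

```python
# Rough round-trip picture for storing surplus wind power as hydrogen.
eta_electrolysis = 0.70      # electricity -> hydrogen (LHV basis), illustrative
eta_fuel_cell    = 0.50      # hydrogen -> electricity, illustrative
lhv_h2 = 120e6               # J/kg, lower heating value of hydrogen

round_trip = eta_electrolysis * eta_fuel_cell
print(f"round-trip efficiency ~ {round_trip:.0%}")     # ~35%

# Hydrogen mass needed to return 1 GWh of electricity to the grid:
E_out = 1.0e9 * 3600         # J (1 GWh)
mass_h2 = E_out / (lhv_h2 * eta_fuel_cell)
print(f"hydrogen required ~ {mass_h2 / 1e3:.0f} tonnes per GWh delivered")  # ~60 t
```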
