OPINION

ICU management based on big data
Stefano Falini a, Giovanni Angelotti b, and Maurizio Cecconi a,c
Purpose of review
The availability of large datasets and computational power has prompted a revolution in Intensive Care.
Data represent a great opportunity for clinical practice, benchmarking, and research. Machine learning
algorithms can help predict events in ways the human brain simply cannot process. This possibility comes
with benefits and risks for the clinician, as finding associations does not mean proving causality.
Recent findings
Current applications of Data Science still focus on data documentation and visualization, and on basic
rules to identify critical lab values. Recently, algorithms have been put in place for prediction of outcomes
such as length of stay, mortality, and development of complications. These results have begun being
implemented for more efficient allocation of resources and in benchmarking processes, to allow
identification of successful practices and margins for improvement. In parallel, machine learning models
are increasingly being applied in research to expand medical knowledge.
Summary
Data have always been part of the work of intensivists, but their current availability has not been fully
exploited. The intensive care community has to embrace and guide the data science revolution in order to
turn it to the benefit of patient care.
Keywords
benchmarking, big data, clinical prediction model, data science, intensive care medicine
0952-7907 Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved. www.co-anesthesiology.com
movement of evidence-based medicine [15]. For years, the imperative has been to accept only evidence of the highest quality, that is, randomized controlled trials (RCTs) planned with statistical rigor, thoroughly conducted, and possibly with an ever-increasing number of participants. Unfortunately, these expectations have proved hard to meet: the proportion of clinical interventions based on quality evidence rarely exceeds 50% [16], and earlier estimates report figures even lower than 20% [17]. Recently, the disadvantages of pursuing this approach exclusively have been pointed out. Although still considered the strongest study design for eliminating confounders, RCTs have been criticized for lacking external validity; they also impose significant costs and long periods of observation [18,19&]. Although it would be ideal to test every single intervention with trials, it is simply not feasible. They cannot represent the only framework for advancing our knowledge.

EXTRAPOLATING KNOWLEDGE IN MEDICINE

When the first prototype of the EHR was developed in the 1960s, the premise was to ease access to patient information. Since then, advances in, and the availability of, computing power, storage capacity, and networks have allowed EHRs to grow exponentially and become increasingly advanced and pervasive. Nowadays, they can capture many aspects of patients' physiology in a progressively automated fashion [11]. These features, coupled with their ongoing diffusion, have made massive amounts of patient data available for retrospective studies.

Longitudinal analysis of EHRs has reinstated the once-deprecated concept of observational studies [20]. Until not so long ago, these were relegated to small physiological studies or to evaluating interventions in real-life scenarios, but they have been revamped after the criticism leveled at RCTs, especially in the ICU [21,22&]. The innovation is that, thanks to the increasing adoption of EHRs, they can now be run with very large samples. In comparison with RCTs, observational studies are appealing for their higher efficiency, lower costs, limited research times, and reproducibility.

On the other hand, they are by definition nonrandomized, and theoretically more prone to bias. Indeed, association and causality can be difficult to discriminate by observation alone. During World War II, statistician Abraham Wald helped the US Navy solve the problem of reinforcing warplanes after air battles. Before his contribution, aircraft were being reinforced on the wingtips and body, that is, the parts where engineers had found the most damage. Wald realized that this approach was flawed, as it focused only on the planes that had made it back to the ground, while the truly critical injuries had prevented airplanes from landing, making their data unavailable for inspection. Engineers were just making the airplanes heavier without realizing where the real problem was (engines and cockpit) [23].

Conversely, it is undeniable that with proper observation and empirical testing, improvement can be achieved quickly. One of the first applications of data science that led from observations to a precise intervention was the cholera outbreak in Broad Street, London, in 1854. Anesthesiologist and epidemiologist John Snow was able to identify the association between cholera deaths and the use of water from a specific water pump. His hypothesis found confirmation when the simple intervention of removing the pump handle ended the outbreak [24].

Although there are reports of a very high degree of correlation between the findings of randomized and nonrandomized trials [25,26], it would be a mistake to get rid of RCTs. Research has to find a way to work with observational studies and trials in a synergistic way [27]. Big Data needs to meet Big Trials. Data science could help stratify more precisely the phenotypes of patients in which interventions can be tested better and faster [28&].

INTENSIVE CARE DATA

In general, EHRs mostly collect text data, such as clinical notes, and, to a lesser extent, structured data, such as vital parameters and lab results. For noncritical patients, the rate at which data are entered into EHRs is relatively low, comprising only routine vitals checks followed by an assessment note from the clinician. On the contrary, in ICUs, where patients' health is at risk and must be judiciously kept under surveillance, routine checks are especially frequent and vitals are continuously monitored at high resolution [29].

For this reason, Intensive Care Medicine (ICM), compared with other medical specialties, is particularly prone to generating large volumes of data. True, it does not yet embed the -omic components that oncology or haematology can boast, but that is likely only a matter of time: studies focusing on differential responses to therapies based on gene expression profiling are already being published in swarms [30,31&,32]. Likewise, it does not offer as much high-resolution imaging as radiology or pathology. Yet the instability that characterizes the typical ICU patient requires uninterrupted monitoring of physiological parameters, frequent blood sampling, multiple clinical examinations per day, and often even recurrent X-rays and scans. This attitude results in a huge amount and variety of data generated by every single patient, ranging from notes to waveforms and structured data (Table 1).

Table 1. Types of data recorded for ICU patients and their frequency

This immense resource is, as a matter of fact, heavily underused. This is partly explained by the fact that, on average, humans can process no more than three or four independent variables at a time [33]. Decision making in ICM is therefore limited to a relatively small number of variables, indicated by clinical guidelines or picked through physicians' experience, and ultimately chosen as key indicators of health deterioration. This limitation does not hold for computers. Indeed, machine learning shines in this context, as it can see through thousands of variables in a very short time, and could help with at least two critical aspects of intensive care: promptness of response and feature selection.

BENCHMARKING INTENSIVE CARE

The term benchmarking originates from the field of land surveying, where it refers to the practice of carving horizontal marks in stone structures to form a 'bench' for the placement of a levelling rod. Such benchmarks ensure that levelling rods can be repositioned identically on future occasions, and consequently that comparisons between measurements made at different times can be accurate. The expression then spread to the business sector, where it broadly indicates the custom of measuring the performance of a company or its processes against those of competitors, and more typically of the industry leaders, which are thus taken as the reference. By using a series of standardized indicators, it aims at identifying suboptimal processes and at generating ideas for improvement.

Similarly, experts and medical societies have explicitly endorsed the adoption of quality indicators and performance assessment of ICUs [34,35]. Intuitively, it is only by measuring performance that improvements from baseline can be appreciated, and thus that the efficacy of certain interventions can be recognized and quantified [36]. Furthermore, there is increasing evidence that investments in this direction are cost-effective and have been a major force in improving the outcomes of critical patients by minimizing diagnostic error and therapeutic harm [19&].

Despite the positive trend toward better patient outcomes, it appears in fact that the choice of functional indicators for benchmarking is far from straightforward, and their implementation is rife with challenges [37]. For example, standardized mortality ratios have long been used for this purpose, yet they rely on prognostic scores that have ultimately failed tests of external validation, and thus have been criticized for substantial bias; the same can be said about ICU length of stay, which is too heavily influenced by the availability of step-down units or extra-hospital postacute care facilities [38]. Big data tools, in the form of predictive models or cluster analyses, are expected to improve this aspect by allowing more accurate profiling of patients and better corrections for case-mix bias [39].

ISSUES IN THE IMPLEMENTATION

If it truly could be so helpful, and in so many ways, why is machine learning not already in use in the ICU?
The first issue is that algorithms need data to be viable; some models, especially those falling under the 'deep learning' umbrella, may need data from millions of patients. Such amounts are only now starting to become available. Often all these data are collected exclusively for administrative purposes in hospital servers, or are secured by EHR manufacturers, and not made available to the extended research community. It is only recently that researchers have had the opportunity to work with vast ICU databases, first among all the Medical Information Mart for Intensive Care III (MIMIC-III), developed by MIT's Laboratory for Computational Physiology in collaboration with Beth Israel Deaconess Medical Center in Boston [29]. MIMIC-III was a milestone in the field, offering structured and waveform data from more than 40 000 ICU patients. Afterwards, other major databases were publicly released, such as the eICU, with more than 200 000 admissions from several hospitals around the United States [40&&]. Among the latest releases is Amsterdam UMC's database, providing a first insight into the European population [41&&].

A second, greater bottleneck in the application of machine learning in ICM is the lack of standardization. Different standards in data collection represent a steep barrier to overcome, and ultimately translate into major difficulties when trying to aggregate multiple data sets into a single workable database. Realistically, a solution to this problem can only be found by healthcare workers and data scientists together. A shared infrastructure is yet to be developed, but signs of progress can be seen in the identification of standards for the exchange of clinical data [42]. Common ICU dictionaries are eagerly awaited.

The third issue, partly a consequence of the first two, is the external validity of the analyses performed on such databases. For example, MIMIC-III only stores data collected between 2001 and 2012 (time constraint), in the Boston area (geographical constraint), and regarding mostly Caucasian patients (variety constraint). It is hard to assess whether models trained on such data would be reliable in other contexts, and it is likely that they would favor patients with characteristics similar to those of the original data. Only when both handicaps of quantity and variety are solved will we be able to talk about true Big Data in the ICU. The prospects are compelling, major initiatives are being taken all around the world, and the outlook is favorable as awareness grows [43].

One final consideration regarding the aforementioned 'correlation versus causation' issue is due: as already stated, when using observational data, it is not possible to assert the causality of events, but only the correlation among them [44]. Model outputs must therefore be inspected with extra care to make sure that learning is being done coherently [45&]. Hence, clinical input from physicians will continue to play a major role across the full development pipeline of artificial intelligence algorithms, as in them resides the experience and generalized knowledge that current technology cannot sustain [13&&,46].

USE CASES

Reports of machine learning algorithms in ICM have so far been copious, but limited to research [47&&], with very few exceptions [48&,49&]. Indeed, barriers to the prospective implementation of learning models are still holding strong, yet multiple scenarios of application can be hypothesized. Taking unsupervised, supervised, and reinforcement learning (Fig. 1), together with data augmentation, as the major areas of machine learning research, some use cases can be found for each.

Unsupervised models (Table 2) are used to find patterns in data, and differ from supervised models in that they do not require any outcome. These algorithms compute distances between observations and then infer a degree of similarity among them. The concept of distance here can take several different meanings and formulations depending on the context, but the key is to identify analogies between observations. For instance, responses to medications (e.g. vasopressors) are also a function of a patient's physiology, and not all patients respond in the same manner. Clustering models could help in detecting subgroups with similar responses to treatment, based on additional covariates related to their health history or parameters. Clusters would also be of use to fine-tune doses and medications, thus providing support to clinicians in the decision process.

Supervised models, on the other hand, require an outcome or classification to be trained on. They answer questions like 'how likely is this observation to belong to class X?', which can easily be turned into a prediction framework. Examples of supervised models can be found in the yearly CinC/PhysioNet challenges on physiological data, such as the automatic classification of ECGs, or the prediction of severe hypotension and sepsis [50&&,51,52].

Reinforcement learning refers to a family of models that falls somewhere in between the previous two. They learn from interacting with an environment to take the action that yields the highest reward. In a recent example, the titular 'AI Clinician' is reported to learn the best strategy to maximize survival of a septic patient by choosing between supplying fluids and dosing vasopressors [53&&].
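The reinforcement learning loop can be sketched with a tabular Q-learning agent in pure Python. Everything below is invented for illustration: the coarse "patient states", the two actions, and the recovery probabilities are a toy model, not the published AI Clinician, which was trained on real ICU records.

```python
import random

random.seed(42)

# Toy model only: none of these states, actions, or numbers are clinical.
STATES = ["stable", "hypotensive", "shock"]
ACTIONS = ["fluids", "vasopressor"]

def simulate(state, action):
    """Invented environment dynamics: returns (next_state, reward)."""
    if state == "stable":
        return "stable", 1.0
    if state == "hypotensive":
        p_recover = 0.8 if action == "fluids" else 0.3
    else:  # shock
        p_recover = 0.1 if action == "fluids" else 0.7
    if random.random() < p_recover:
        return "stable", 1.0
    return "shock", -1.0

# Tabular Q-learning: estimate action values purely from interaction.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.05, 0.9, 0.1
for _ in range(4000):
    state = random.choice(STATES)
    for _ in range(20):
        if random.random() < epsilon:      # explore occasionally
            action = random.choice(ACTIONS)
        else:                              # otherwise act greedily
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = simulate(state, action)
        target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

In this toy environment the learned policy simply recovers the preferences that were planted in the transition probabilities; the point is the mechanism: no outcome labels are ever provided, and the agent derives a treatment policy from rewards alone.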
FIGURE 1. Three major types of machine learning paradigms. From top to bottom: in supervised frameworks the training set is
labeled and a prediction has to be made for a new value; unsupervised models find patterns in unlabeled data using
distances; in reinforcement learning, the agent tests actions and maximizes the reward function.
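A minimal instance of the supervised paradigm can be written in a few lines of pure Python: a logistic regression, fitted by gradient descent, that answers 'how likely is this observation to belong to class 1?'. The two features and their distributions below are synthetic, invented only for the sake of the example.

```python
import math
import random

random.seed(0)

# Synthetic training set: two invented, rescaled "features" per
# observation and a binary outcome label (1 = event, 0 = no event).
def make_observation(label):
    mean = (1.2, 1.0) if label == 1 else (0.2, 0.1)
    return [random.gauss(mean[0], 0.3), random.gauss(mean[1], 0.3)], label

data = [make_observation(i % 2) for i in range(200)]

# Logistic regression fitted with plain batch gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(300):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        gw[0] += (p - y) * x[0]
        gw[1] += (p - y) * x[1]
        gb += p - y
    w = [w[j] - lr * gw[j] / len(data) for j in range(2)]
    b -= lr * gb / len(data)

def predict_proba(x):
    """Estimated probability that observation x belongs to class 1."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
```

The same structure, with richer features and regularization, underlies many of the clinical prediction models discussed in the text: the model is trained on labeled outcomes and then queried for the probability of the outcome in a new observation.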
Finally, data augmentation models, namely generative adversarial networks [54], leverage two competing algorithms: the discriminator, whose goal is understanding whether the output of the other is real or synthesized, and the generator, whose goal is deceiving the first. Ultimately, after training on data, the generator should be able to create data so similar to the original that the discriminator
Table 2. Machine learning paradigms, example algorithms, and potential ICU applications

| Paradigm | Example algorithms | Task | Potential use cases |
| --- | --- | --- | --- |
| Unsupervised learning | K-means; hierarchical clustering; biclustering; self-organizing maps | Pattern discovery | Identification of ICU patients with similar clinical trajectories; susceptibility of response to therapies; phenotypes of septic patients |
| Supervised learning | K-nearest neighbors; linear regression; logistic regression | Regression / prediction | Prediction of hypotensive episodes; prediction of length of stay in ICU; estimation of risk of mortality |
| Supervised learning | Random forests; eXtreme gradient boosting; neural networks | Classification | Assessment of sedation; prediction of survival |
| Reinforcement learning | SARSA; deep Q network; DDPG | Policy discovery | Weaning from mechanical ventilation; optimal sepsis treatment; antibiotic dosing strategy |
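As a sketch of the clustering entries above, the following pure-Python k-means uses Euclidean distance as its notion of similarity. The 'dose/response' observations are synthetic, with two subgroups planted by construction; nothing here is clinical data.

```python
import random

random.seed(1)

# Invented data: each observation is (dose, response); two hidden
# subgroups "respond" differently to the same dose.
group_a = [(random.gauss(2.0, 0.2), random.gauss(10.0, 1.0)) for _ in range(50)]
group_b = [(random.gauss(2.0, 0.2), random.gauss(25.0, 1.0)) for _ in range(50)]
points = group_a + group_b

def kmeans(points, k, iterations=20):
    """Minimal k-means: assign to the nearest center, then recompute means."""
    centers = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centers[i][0]) ** 2
                + (p[1] - centers[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Empty clusters keep their previous center.
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

centers, clusters = kmeans(points, k=2)
```

No outcome label is used at any point: the two recovered centers land on the planted subgroups. In a real data set, the analogous output would be candidate patient phenotypes to be inspected clinically before any use.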
won't be able to recognize it. The methodology may be employed to augment data of any kind, such as ECGs [55], to train models when the available data are scarce.

Generative and reinforcement models are expected to see the greatest development in ICU data science, thanks to their capabilities and as-yet untapped potential.

CONCLUSION

The data science revolution in ICM has started. As shown, data can be used for benchmarking, clinical decision support, and research. Machine learning algorithms can identify hidden patterns in clinical data. There are, as expected, both benefits and risks in this opportunity, because finding associations does not mean proving causality. There is an urgent need for data sharing and common data dictionaries. The intensive care community has to embrace and guide the data science revolution to improve patients' outcomes.

Acknowledgements

None.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.

REFERENCES AND RECOMMENDED READING

Papers of particular interest, published within the annual period of review, have been highlighted as:
& of special interest
&& of outstanding interest

1. Asimov I. I, Robot. New York: Gnome Press; 1950.
2. Contributors to Wikimedia projects. Hippocratic oath. Wikipedia; 2002. https://en.wikipedia.org/wiki/Hippocratic_Oath. Accessed 13 December 2019.
3. Makary MA, Daniel M. Medical error - the third leading cause of death in the US. BMJ 2016; 353:i2139.
4. Haynes AB, Weiser TG, Berry WR, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009; 360:491–499.
5. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006; 355:2725–2732.
6. Thomson W. Electrical units of measurement. In: Popular lectures and addresses, Vol. 1: Constitution of matter. London, UK: Macmillan; 1889. pp. 73–136.
7. Contributors to Wikimedia projects. W. Edwards Deming. Wikipedia; 2002. https://en.wikipedia.org/wiki/W._Edwards_Deming. Accessed 13 December 2019.
8. Docherty AB, Lone NI. Exploiting big data for critical care research. Curr Opin Crit Care 2015; 21:467–472.
9. && Cosgriff CV, Celi LA, Stone DJ. Critical care, critical data. Biomed Eng Comput Biol 2019; 10:1179597219856564.
This review remarks on the necessity of quality data to produce robust AI models for clinical purposes. Open databases for research and the development of a collaborative form of clinical data science should be endorsed in this delicate transition phase.
10. & Shah ND, Steyerberg EW, Kent DM. Big data and predictive analytics: recalibrating expectations. JAMA 2018; 320:27–28.
This viewpoint highlights four reasons for the longstanding hesitance to bring predictive analytics into clinical practice: recurrent unfitness for risk-sensitive decisions, failure to include calibration during model evaluation, hyped expectations from vendors, and heterogeneity of data formats.
11. Evans RS. Electronic health records: then, now, and in the future. Yearb Med Inform 2016; 25:S48–61.
12. Dalianis H. The history of the patient record and the paper record. In: Dalianis H, editor. Clinical text mining. Cham, Switzerland: Springer; 2018. pp. 5–12.
13. && Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25:44–56.
In this extensive review, best-selling author and prolific researcher Topol puts in perspective both the potential and the limitations of AI applied to healthcare. Reviewing the achievements obtained in the different medical branches, he redimensions the common hype and reminds the reader that the much-feared full automation is unlikely to ever be implemented.
14. Murdoch TB, Detsky AS. The inevitable application of big data to healthcare. JAMA 2013; 309:1351–1352.
15. Bothwell LE, Greene JA, Podolsky SH, Jones DS. Assessing the gold standard - lessons from the history of RCTs. N Engl J Med 2016; 374:2175–2181.
16. Ebell MH, Sokol R, Lee A, et al. How good is the evidence to support primary care practice? Evid Based Med 2017; 22:88–92.
17. Darst JR, Newburger JW, Resch S, et al. Deciding without data. Congenit Heart Dis 2010; 5:339–342.
18. Frieden TR. Evidence for health decision making - beyond randomized, controlled trials. N Engl J Med 2017; 377:465–475.
19. & Niven AS, Herasevich S, Pickering BW, Gajic O. The future of critical care lies in quality improvement and education. Ann Am Thorac Soc 2019; 16:649–656.
This review explains why investments in quality improvement, adherence to standards and guidelines, and integration of evidence-based interventions represent the best way not only to boost patient outcomes, but also to reduce the noise that confounds numerous RCTs.
20. Nair S, Hsu D, Celi LA. Challenges and opportunities in secondary analyses of electronic health record data. In: MIT Critical Data, editor. Secondary analysis of electronic health records. Cham, Switzerland: Springer; 2016. pp. 17–26.
21. Vincent J-L. We should abandon randomized controlled trials in the intensive care unit. Crit Care Med 2010; 38:S534–S538.
22. & Girbes ARJ, De Grooth H-J. Time to stop randomized and large pragmatic trials for intensive care medicine syndromes: the case of sepsis and acute respiratory distress syndrome. J Thorac Dis 2020; 12:S101–S109.
Using Monte Carlo simulations, the authors demonstrate that syndrome-attributable risks of ICM diagnoses are often overestimated, and how this phenomenon leads to underpowering of RCTs and ultimately to their often negative results: a renewed focus on mechanistic research is sought.
23. Operations Evaluation Group. A Reprint of 'A Method of Estimating Plane Vulnerability Based on Damage of Survivors' by Abraham Wald. Alexandria, VA: Center for Naval Analyses; 1980.
24. Snow J. On the mode of communication of cholera. London, UK: John Churchill; 1855.
25. Ioannidis JP, Haidich AB, Pappa M, et al. Comparison of evidence of treatment effects in randomized and nonrandomized studies. JAMA 2001; 286:821–830.
26. Schmidt AF, Rovers MM, Klungel OH, et al. Differences in interaction and subgroup-specific effects were observed between randomized and nonrandomized studies in three empirical examples. J Clin Epidemiol 2013; 66:599–607.
27. Angus DC. Fusing randomized trials with big data: the key to self-learning healthcare systems? JAMA 2015; 314:767–768.
28. & Timsit J-F, Aboab J, Parienti J-J. Is research from databases reliable? Yes. Intensive Care Med 2019; 45:118–121.
This editorial addresses the potential uses of secondary analyses from databases: although unfit to provide causal inference, they could be instrumental in raising new hypotheses for RCTs, or in increasing levels of confidence and ensuring generalizability of results.
29. Johnson AEW, Pollard TJ, Shen L, et al. MIMIC-III, a freely accessible critical care database. Sci Data 2016; 3:160035.
30. Scicluna BP, van Vught LA, Zwinderman AH, et al. Classification of patients with sepsis according to blood genomic endotype: a prospective cohort study. Lancet Respir Med 2017; 5:816–826.
31. & Antcliffe DB, Burnham KL, Al-Beidh F, et al. Transcriptomic signatures in sepsis and a differential response to steroids. From the VANISH Randomized Trial. Am J Respir Crit Care Med 2019; 199:980–986.
This post-hoc analysis from the VANISH trial identifies an association between transcriptomic sepsis response signatures and response to corticosteroids in terms of mortality in septic patients. It is the first study addressing gene expression profiling in critically-ill adults receiving steroids and adds experimental evidence to the growing awareness that the 'one size fits all' approach is often not appropriate, and possibly a cause of many negative trials.
32. Rello J, van Engelen TSR, Alp E, et al. Towards precision medicine in sepsis: a position paper from the European Society of Clinical Microbiology and Infectious Diseases. Clin Microbiol Infect 2018; 24:1264–1272.
33. Halford GS, Baker R, McCredden JE, Bain JD. How many variables can humans process? Psychol Sci 2005; 16:70–76.
34. Rhodes A, Moreno RP, Azoulay E, et al. Prospectively defined indicators to improve the safety and quality of care for critically ill patients: a report from the Task Force on Safety and Quality of the European Society of Intensive Care Medicine (ESICM). Intensive Care Med 2012; 38:598–605.
35. De Lange DW, Dongelmans DA, De Keizer NF. Small steps beyond benchmarking. Rev Bras Ter Intens 2017; 29:128–130.
36. Peden CJ, Rooney KD. The science of improvement as it relates to quality and safety in the ICU. J Intens Care Soc 2009; 10:260–265.
37. Woodhouse D, Berg M, van der Putten J, Houtepen J. Will benchmarking ICUs improve outcome? Curr Opin Crit Care 2009; 15:450–455.
38. Salluh JIF, Soares M, Keegan MT. Understanding intensive care unit benchmarking. Intensive Care Med 2017; 43:1703–1707.
39. Salluh JIF, Chiche JD, Reis CE, Soares M. New perspectives to improve critical care benchmarking. Ann Intensive Care 2018; 8:17.
40. && Pollard TJ, Johnson AEW, Raffa JD, et al. The eICU Collaborative Research Database, a freely available multicenter database for critical care research. Sci Data 2018; 5:180178.
The article describes in detail the structure of the eICU Collaborative Research Database, a US-based multicenter database with high-granularity data for over 200 000 ICU admissions monitored by Philips Healthcare's telehealth system. The database is de-identified and publicly accessible for research purposes, and the paper can be used as guidance for its interrogation.
41. && Amsterdam UMC Database. Amsterdam Medical Data Science. https://amsterdammedicaldatascience.nl/#amsterdamumcdb. Accessed 13 December 2019.
Officially released in November 2019, this is the first publicly accessible ICU database from within the European Union, endorsed by the European Society of Intensive Care Medicine, and containing de-identified data from tens of thousands of ICU admissions.
42. Bender D, Sartipi K. HL7 FHIR: an agile and RESTful approach to healthcare information exchange. In: Pereira Rodrigues P, Pechenizkiy M, Gama J, editors. Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems (CBMS'13, Porto, Portugal, June 20-22, 2013). Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2013.
43. Aboab J, Celi LA, Charlton P, et al. A 'datathon' model to support cross-disciplinary collaboration. Sci Transl Med 2016; 8:333s8.
44. Bailly S, Meyfroidt G, Timsit J-F. What's new in ICU in 2050: big data and machine learning. Intensive Care Med 2018; 44:1524–1527.
45. & Roth JA, Battegay M, Juchler F, et al. Introduction to machine learning in digital healthcare epidemiology. Infect Control Hosp Epidemiol 2018; 39:1457–1462.
This review discusses the importance of adding machine learning to an epidemiologist's armamentarium, in order to better exploit the potential of big data for infection prevention and control, quality improvement, and optimal allocation of hospital resources.
46. Bundy A. Preparing for the future of Artificial Intelligence. AI Soc 2017; 32:285–287.
47. && Shillan D, Sterne JAC, Champneys A, Gibbison B. Use of machine learning to analyse routinely collected intensive care unit data: a systematic review. Crit Care 2019; 23:284.
This review is important as it reports the trends in research studies that employ machine learning on routinely collected ICU data. It gives an overview of the approaches, techniques, and results: clinical prediction models are the most common, they are rapidly increasing throughout the years, and they mostly adopt neural networks, support vector machines, and tree-based methods.
48. & Roggeveen LF, Fleuren LM, Guo T, et al. Right Dose Right Now: bedside data-driven personalized antibiotic dosing in severe sepsis and septic shock - rationale and design of a multicenter randomized controlled superiority trial. Trials 2019; 20:745.
This trial is among the first to implement a real-time clinical decision support system. In this case, a platform capable of guiding antibiotic dosing will be evaluated prospectively, in terms of pharmacometric and clinically relevant endpoints, against standard practice.
49. & McWilliams CJ, Lawson DJ, Santos-Rodriguez R, et al. Towards a decision support tool for intensive care discharge: machine learning algorithm development using electronic healthcare data from MIMIC-III and Bristol, UK. BMJ Open 2019; 9:e025925.
This study focuses on the validity of a clinical decision support system aimed at optimizing discharge from the ICU, a very useful topic in a time of increasing demand and constrained resources. It employs two different databases (US- and UK-based) yet finds consistent results.
50. && Reyna MA, Josef CS, Jeter R, et al. Early prediction of sepsis from clinical data. Crit Care Med 2019; 1. [Epub ahead of print]
The PhysioNet/Computing in Cardiology challenge is an international competition held annually in which participants submit novel algorithms of complex physiologic signal processing for medical classification problems. For the 20th-year edition the goal was to predict sepsis, hoping to contribute to the long-standing goal of early detection for improved outcomes.
51. Moody G, Lehman L. Predicting acute hypotensive episodes: the 10th annual PhysioNet/Computers in Cardiology challenge. Comput Cardiol 2009; 36:541–544.
52. Clifford GD, Liu C, Moody B, et al. AF classification from a short single lead ECG recording: the PhysioNet/Computing in Cardiology challenge. Comput Cardiol 2017; 44:065–469.
53. && Komorowski M, Celi LA, Badawi O, et al. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med 2018; 24:1716–1720.
This original research article is of outstanding interest as the first adopting reinforcement learning in ICM and among the first throughout all healthcare. Moreover, it provides additional insight on the best policy to apply for the hypotensive septic patient, helping to dissipate the perennial debate between fluids and vasopressors. It is expected that this kind of modeling will reach notable expansion in the following years.
54. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Adv Neural Inform Process Syst 2014; 27:2672–2680.
55. Esteban C, Hyland SL, Rätsch G. Real-valued (medical) time series generation with recurrent conditional GANs; 2017. http://arxiv.org/abs/1706.02633. Accessed 13 December 2019.