
International Journal of Engineering Trends and Applications (IJETA) – Volume 7 Issue 5, Sep-Oct 2020

RESEARCH ARTICLE OPEN ACCESS

Application of AI in Medicine and Its Concerns


Durga Prasad Pragallapati
Dot Net Team Leader, Jnet Technologies Pvt Ltd, Hyderabad, India

ABSTRACT
Background: Medicine is becoming an increasingly data-centred field and, beyond classical statistical approaches, artificial intelligence (AI) and machine learning (ML) are attracting much interest for the analysis of medical data. It has been argued that AI is undergoing a rapid process of commodification. This characterization correctly reflects the current process of industrialization of AI and its reach into society. Therefore, societal issues related to the use of AI and ML should no longer be ignored, and certainly not in the medical domain. These societal issues may take many forms, but they all entail the design of models from a human-centred perspective, incorporating human-relevant requirements and constraints. In this brief paper, we discuss some of the specific issues affecting the use of AI and ML in medicine, namely fairness, privacy and anonymity, and explainability and interpretability, but also some broader societal issues such as ethics and legislation. We reckon that all of these are relevant aspects to consider in order to foster the acceptance of AI- and ML-based technologies, as well as to comply with evolving legislation concerning the impact of digital technologies on ethically and privacy-sensitive matters. Our specific aim here is to reflect on how all of these issues affect medical applications of AI and ML. This paper draws on the contents of the "2nd Meeting of Science and Dialysis: Artificial Intelligence," organized at the Bellvitge University Hospital, Barcelona, Spain.
Summary and Key Messages: AI and ML are attracting much interest from the medical community as key approaches to knowledge extraction from data. These approaches are increasingly reaching areas of social impact, including medicine and healthcare. Socially relevant concerns with an impact on medicine and healthcare include (though they are not limited to) fairness, explainability, privacy, ethics, and legislation.
Keywords — Artificial intelligence, Machine learning.

I. INTRODUCTION

As part of a phenomenon that affects all fields of the life sciences, medicine is becoming an increasingly data-centred discipline [1]. Data analysis in medicine has long been the territory of statisticians. However, medical data are reaching beyond the purely quantitative to take more complex forms, including, for instance, textual information in Electronic Health Records (EHR), images in many modalities, on their own or combined with other types of signals, or graphs describing biochemical pathways or biomarker interactions [2]. This data complexity is behind the evolution from classical multivariate data analysis towards the nascent field of data science [3], which embraces, from the point of view of medicine, a new reality that includes interconnected wearable devices and sensors. Beyond the more classical statistical approaches, artificial intelligence (AI) and, more particularly, machine learning (ML) are attracting much interest for the analysis of medical data, even though arguably with a relatively low impact on clinical practice so far [4]. It has been argued that AI is experiencing a rapid process of commodification (not that this is an entirely new topic: it was already a matter of academic discussion almost 30 years ago [5]). This characterization is mostly of interest to large IT corporations, but it effectively reflects the current process of industrialization of AI, in which the academic and industrial boundaries of research are increasingly blurred and leading AI and ML professionals are on the payroll of private companies. In any case, as a result of this, AI systems and products are reaching society at large and, therefore, the societal issues associated with the use of AI in general, and ML in particular, should not be ignored any longer, and certainly not in the medicine and healthcare domains. These societal issues may take many forms but, more often than not, they entail the design of models from a human-centred perspective, that is, models that incorporate human-relevant requirements and constraints. This is only partially a technical matter. In this short paper, we cover, in a non-exhaustive manner, some of the specific societal issues affecting the development of AI and ML methods, including fairness, privacy and anonymity, and explainability and interpretability, but also some broader societal issues such as ethics and legislation. Not that these issues should be considered independently; on the contrary, they often overlap in intricate ways. Let us summarily list them here:

Legislation. The industrialization of AI exposes it to the regulations governing the social domain in which it is meant to operate. In some cases, this overlaps with issues of privacy and anonymity, as with AI algorithms used for automated face recognition in public spaces. It may also involve broader contexts, such as AI-based autonomous driving or defence weapons.

Legislation is also involved in medicine and healthcare practice and, therefore, we need to ensure that AI and ML technologies comply with new regulations.

Explainability and Interpretability. ML and AI algorithms are often characterized as black boxes, that is, methods that generate data models which are difficult (if not impossible) to interpret, because the true form relating the available data (the input) to a given outcome (the output) is far too complex. This problem has been exacerbated by the intensity of the recent interest in deep learning (DL) methods. Only interpretable models can be explained, and explainability is paramount when decision-making in medicine (diagnosis, prognosis, etc.) must be conveyed to humans.

Privacy and Anonymity. Privacy-preserving ML-based data analysis must address the potentially contradictory problem of keeping personal information private while aiming to model it, often in order to make inferences that affect a given population. Data anonymity refers to the impossibility of linking personal data with information about the individual that is not meant to be revealed. These are key issues and concerns in the medical and healthcare domains, particularly in the interplay between the private and public sectors.

Ethics and Fairness. Biological intelligence is multifaceted and responds to the environmental pressures of human societies. Ethics is one of those facets, and one for which AI remains unprepared. Interestingly, this topic has become central to the AI discussion only in recent years. Ethics is also a core concern in medicine and healthcare. Such convergence of interests makes it crucial to create a clear roadmap for the ethical use of AI and ML in medicine. The application of ML and AI in areas of social relevance should also aspire to be fair. How do we imbue ML algorithms, which are fairness-agnostic, with fairness requirements? How do we prevent gender or ethnicity, for instance, from unfairly influencing a learning algorithm's results? In the medical and healthcare domains, where sensitive information about the individual may be readily available, how do we ensure that AI- and ML-based decision support tools are not affected by such bias?

We reckon that all of these are relevant aspects to consider in order to foster the acceptance of AI- and ML-based technologies in the medical and healthcare domains and to conform with evolving regulations concerning the impact of digital technologies on ethically and privacy-sensitive matters. Our specific aim here is to reflect on how all of these topics affect medical applications of AI and ML.

II. SOCIETAL ISSUES OF AI AND ML APPLICATION

Human societies are regulated by bodies of legislation. While they remained within the academic realm, AI and ML developments could stay oblivious to legal concerns. However, the moment these technologies begin occupying the social space at large, their impact on people is likely to hit some legal walls. One widely discussed case is the use of AI as the basis for autonomously driving vehicles. When a human is in charge of all decision-making at a vehicle's wheel, legal responsibilities are quite clearly drawn. The rapid commercial development of semi-autonomous vehicles, oriented towards the goal of fully autonomous driving, has stretched the seams of existing legislation, however. Likewise, any application of AI and ML in real clinical practice is bound to generate discussion about its legal limitations and implications.

A pertinent example is the recent (May 2018) implementation of the European Union's General Data Protection Regulation (GDPR). This regulation mandates a right to explanation of all decisions made by "automated or artificially intelligent algorithmic systems" [6]. According to Article 13, the right to explanation means that the "data controller" is legally bound to provide requesting citizens with "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing [automated decision making, as described in its Article 22] for the data subject" [6]. AI and ML may well be the tools used to provide such automated decision-making, and this places these technologies in a legal spotlight. Some guidelines for GDPR-compliant ML development have recently been provided by Veale et al. [7].
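To make concrete what "meaningful information about the logic involved" could look like for a single automated decision, the following minimal sketch (our own illustration, not part of the regulation or of any particular decision support system; the clinical feature names and data are hypothetical, and scikit-learn is assumed to be available) decomposes the prediction of an intrinsically interpretable logistic model into per-feature contributions that could be reported back to a data subject or a clinician:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical cohort: rows are patients, columns are clinical features.
feature_names = ["age", "systolic_bp", "serum_creatinine", "diabetes"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# An intrinsically interpretable model: the log-odds are a weighted sum of features.
model = LogisticRegression().fit(X, y)

def explain(patient):
    # Additive contribution of each feature to the decision, in log-odds units.
    contributions = model.coef_[0] * patient
    return dict(zip(feature_names, contributions))

patient = X[0]
risk = float(model.predict_proba(patient.reshape(1, -1))[0, 1])
print("Predicted risk:", round(risk, 2))
for name, value in sorted(explain(patient).items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>16}: {value:+.3f}")

Additive breakdowns of this kind are only one possible reading of the explanation requirement; for genuinely opaque models, post hoc techniques such as the surrogate approach sketched in the next section would be needed instead.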

The implications of the GDPR for the use of AI and ML in medicine and healthcare are not difficult to appreciate. Any AI- or ML-based medical decision support system (MDSS) whose purpose is to assist health workers in their decision-making may be explicitly providing a (semi-)automated decision about an individual (for instance, a diagnosis, a prognosis, or treatment recommendations concerning individual patients, possibly even in life-threatening conditions). In this situation, the data controller would be the medical expert (from nurses to specialists [8]) and the institution this expert belongs to. Note that this piece of legislation (of mandatory application in all countries belonging to the European Union) requires something very specific from AI and ML technologies (or, more accurately, from the people designing, implementing, and using them): interpretable and explainable models, as discussed in the next section. A medical expert, or any healthcare system worker using these technologies, should be able to interpret how they reached specific decisions (say, why an ML model identified a brain tumour as a metastasis and not as a high-grade glioma) and should be able to explain those decisions to any human affected by them. In the implementation of the artificial kidney, one of the most promising technologies in nephrology, we should be concerned, for instance, about the possibility of an opaque AI- or ML-based alarm system not being able to explain the basis for a false alarm that could endanger the life of the dialysis patient. At a higher level, and on the basis of legal safeguards such as the GDPR, a healthcare system might decide not to deploy an opaque MDSS in clinical practice, despite its perceived effectiveness, simply to avoid the possibility of unsustainable litigation costs resulting from the false-positive and false-negative cases, or from the wrong estimations and predictions, churned out by these automated systems. In the light of this discussion, we suggest that health workers and healthcare practitioners should keep in mind the need to balance the effectiveness of AI- and ML-based technologies against their adherence to existing legislation. Beyond the GDPR and its relation to interpretability, this issue overlaps with several of the others we discuss in the following sections, including ethics, fairness, and privacy and anonymity.

III. INTERPRETABILITY AND EXPLAINABILITY

Biological brains have not necessarily evolved the ability to explain themselves. Arguably, this has happened only in species with social behaviour (even though it could also be argued that social behaviour can only occur in species whose brains can explain themselves through some form of communication). In the human species, natural language plays that communicative or explanatory role. AI was originally conceived as an attempt to reproduce aspects of biological intelligence. However, self-explanatory capabilities were never a key aspect of that endeavour. If the biological brain was meant to be understood as a form of information-processing system, so was AI, and the idea of social AI is comparatively new, for instance in the form of intelligent agents and multi-agent systems [9]. Only recently have the interpretability and explainability of AI and ML systems come to the forefront of research in the field [10]. One key reason for this is the breakthrough created by DL technologies. DL is an augmented version of traditional artificial neural networks, which were long ago maligned as opaque black-box models. DL models therefore risk being considered augmented black boxes.

Interpretability in this context can be seen as a human-computer interaction problem. We must be able to understand and interpret the outcome of an AI or ML model, and we need to make sure that even a very complex model can be explained (usually to other people). The human brain, colossally more complex, has evolved natural language to convey some level of explanation of its inner workings. Similar attempts with AI and ML are still very limited. Despite recent and thorough attempts to address the issue of formalizing interpretability in ML [11], such attempts only highlight the special difficulty involved in the scientific pursuit of truly interpretable ML models.

In the medical domain, AI and ML models are often part of MDSS. Their potential, and the possible barriers to their adoption, have been investigated over the last decade [12]. The paradox is that these methods, despite their advantages, are far from widespread acceptance in clinical practice. Arguably, one of the reasons is precisely the (lack of) interpretability, expressed as "the need to open the machine learning black box" [13]. As already mentioned, DL-based technologies can worsen this problem, despite having already found their way into biomedicine and healthcare [14, 15]. In medicine, this has clear implications: if an ML-based MDSS makes decisions that cannot be comprehensibly explained, the medical expert is placed in the uncomfortable position of having to vouch for the system's trustworthiness, taking responsibility for a decision that he or she cannot account for, either to the patient or to other medical experts. This is not to suggest that no efforts have been made to imbue MDSS with knowledge representations that are understandable to people. Examples include rule-based representations, usually compatible with clinical reasoning [16], and nomograms, commonly used by clinicians to visualize the relative weights of signs and symptoms in diagnosis or prognosis [17]. AI- and ML-based systems may have quantifiable goals and may still be useless unless they comply with clinical guidelines. Note that computer-based systems, including MDSS, are often seen by clinicians as an additional burden in their everyday practice [18]. The problem may arise when the MDSS conflicts with guidelines of clinical practice [19], something bound to happen unless those guidelines are somehow fed as prior knowledge into the intelligent systems. In this scenario, interpretability may be seen as a way to make model performance and compliance with guidelines compatible goals. The role of ML in healthcare has been described as acting "as a tool to aid and refine specific tasks performed by human experts" [20]. Note that, because of this, interpretability should not be considered here as a purely technical issue, dissociated from the cognitive abilities of the human interpreter. As stated by Dreiseitl and Binder [12] when discussing the weak levels of adoption of MDSS at the point of care, researchers often avoid real questions such as whether adequate "explanations [are] given for the system's prognosis"; whether "the form of explanation [is] best for the physicians the usage of the system"; or "how intuitive is its use." An effort should be made to integrate medical expert knowledge into the AI and ML models, or to use prior expert knowledge in formal frameworks for machine-human interaction, in the pursuit of interpretability and explainability. The data analyst should play a proactive role in seeking expert clinical validation. In return, the medical expert should make sure that the analysis results are interpretable and usable in clinical practice.
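As a minimal sketch of one pragmatic route towards "opening the black box" (our own illustration, not a method advocated by the works cited above; the data are synthetic and scikit-learn is assumed), the code below fits a shallow, human-readable decision tree as a global surrogate of a more opaque classifier and measures how faithfully the surrogate reproduces its behaviour:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a clinical data set.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)

# The "black box": accurate, but its internal logic is hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a depth-limited tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: agreement with the black box (not with the ground truth).
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))

A surrogate with low fidelity should not be presented to clinicians as an explanation at all; conversely, a faithful surrogate expressed as a handful of rules is close in spirit to the rule-based representations and nomograms mentioned above.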

IV. PRIVACY AND ANONYMITY

Technological advances and the massive adoption of networked computing and telecommunication systems flood our societies (and, usually, governments and technology providers) with data. The bonds of physical society are being rapidly augmented by our use of virtual social networks. In this scenario, data privacy and anonymity have become major social concerns and have prompted legal initiatives such as the European GDPR discussed in the previous sections. Privacy and anonymity have been a core concern for healthcare systems for far longer than for society at large. The current adoption of EHRs in clinical practice heightens this concern, as sensitive patient data are uploaded in digital form to networked systems with varying levels of security in place. An interesting review of security and privacy in EHRs can be found in the study by Fernández-Alemán et al. [21]. The strong links between privacy and anonymity, on one side, and legislation, on the other, are described in that study, even though it is also noted that "there was little or no activity in policy development involving the several significant privacy issues raised by a shift from a largely disconnected, paper-based health record system to one that is integrated and electronic" [21]. Nor is this an issue ignored by the AI and ML communities: as early as 2002, data confidentiality and anonymity in medical data mining applications, and the responsibilities of data miners towards human subjects, were already being discussed in the journals of these fields [22]. Privacy-preserving models and algorithms have since been discussed in some detail [23]. A common scenario for data analysts in medical environments is the analysis of data that are distributed among multiple medical parties. These parties (e.g., hospitals) may have privacy protocols in place that prevent merging data from different origins into centralized locations (in other words, that prevent data from "leaving" a given hospital). The AI and ML communities have already worked on decentralized analytical solutions to bypass this bottleneck [24].
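The following minimal sketch illustrates the general idea behind such decentralized solutions (it is our own simplified illustration of one-shot model averaging, not the specific method of reference [24]; the per-hospital data are simulated and scikit-learn is assumed): each site fits a local model, and only the model coefficients, never the patient records, are shared and aggregated.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def local_cohort(n=200):
    # Simulated per-hospital data; in practice these records never leave the site.
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

hospitals = [local_cohort() for _ in range(3)]

# Each hospital trains locally; only fitted parameters travel to the aggregator.
local_params = []
for X, y in hospitals:
    m = LogisticRegression().fit(X, y)
    local_params.append((m.coef_[0], m.intercept_[0]))

global_coef = np.mean([coef for coef, _ in local_params], axis=0)
global_intercept = np.mean([b for _, b in local_params])
print("Aggregated model coefficients:", np.round(global_coef, 3))
print("Aggregated intercept:", round(float(global_intercept), 3))

Iterating this exchange over several rounds, adding secure aggregation, or perturbing the shared parameters with noise are the kinds of refinements studied under the headings of federated and privacy-preserving learning.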
There is a new and disruptive element of the privacy and anonymity discussion in AI and ML applications in medicine that must be considered: the en masse arrival of large IT companies in the medical field, many of them featuring or integrating AI elements (a few examples would be Microsoft's Hanover project, IBM's Watson Oncology, or Google's DeepMind), together with a myriad of AI-based, medically oriented start-ups [25]. The involvement of IT corporations in health provision raises the bar for privacy and anonymity concerns that were already on the table because of the pressure of insurance companies, especially in the most liberalized national health systems. An illustrative example of the complexities and potential drawbacks of this involvement can be found in Nature journal's report that the United Kingdom Information Commissioner's Office found that the operator of three London-based hospitals "had broken civil law when it gave health data to Google's London-based subsidiary DeepMind" [26]. These data were meant to be the basis for models to test for signs of acute kidney injury. According to the Royal Statistical Society's executive director, three lessons are to be drawn from this particular case of application to the medical domain: (1) because of society's growing data trust deficit, transparency and openness about data transfers must be guaranteed; (2) data transfers should be proportionate to the medical task at hand (in this case, the development of models for the detection of signs of acute kidney injury); and (3) governance (not only legislation) mechanisms for the control of data handling, management, and use should be strengthened or created where necessary. He also makes a key statement in saying that "innovations such as artificial intelligence, machine learning […] offer tremendous opportunities, but will falter without a public consensus around the role of data" [26].

V. ETHICS AND FAIRNESS

The time-honored ultimate aspiration of AI is to replicate biological intelligence in silico. Biological intelligence, though, is the product of evolution and, as such, is multifaceted and, at least to some extent, the product of the environmental pressures of human societies. Ethics, as a compass for human decision-making, is one of those facets and could be argued to provide the foundations for the legislative regulation of societies, whose importance for medical applications of AI and ML has already been discussed in this paper. However, the truth is that the AI and ML fields are still fairly unprepared to address this pressing matter [27]. Interestingly, this topic has become central to the AI discussion only in recent years, once it has also become a central topic in global research agendas [28]. In what sense might ethics be part of the AI and ML equation, and in what sense do we want these technologies to be imbued with ethical considerations beyond the overlap with bodies of regulation and legislation? Let us provide an illustrative example: the ongoing debate on the use of AI as part of autonomous weapons systems in defense and warfare. Uncrewed autonomous vehicles, at least partially driven by AI, are being used for targeted bombing in areas of conflict. International conventions quite clearly delineate the ethical issues involved in human decisions concerning the choice of human targets in wartime, but who bears ethical responsibility in the case of targets at least partially chosen by AI-driven machines? This type of problem currently drives not-for-profit campaigns, such as those undertaken by Article 36 [29], "to stop killer robots" [30]. Ethics is also a core concern in medicine and healthcare, one that has attracted much academic discussion [31].

Can AI- and ML-supported tools address the basic biomedical ethical principles of respect for autonomy, non-maleficence, beneficence, and justice? Should they, or should this be left to the medical practitioners? Medical practitioners, though, do not usually develop the AI and ML tools for medical applications. Should they at least ensure that AI and ML developers do not transgress these principles in designing such tools? According to Magoulas and Prentza [32], it is humans and not systems who can identify ethical issues, and, therefore, it is important to consider "the motivations and ethical dilemmas of researchers, developers and medical users of ML methods in medical applications." Such convergence of interests makes it important, in any case, to create a clear roadmap for the ethical use of AI and ML in medicine that involves players from both the fields of medicine and AI.

The concept of fairness may be considered as subjective as the concept of ethics and is, perhaps, even more vaguely defined. If distinguishing what is fair and what is not in human society is difficult and often controversial, trying to embed the concept of fairness in AI-based decision-making might be seen as a hopeless endeavor. Nevertheless, the use of ML and AI in socially relevant areas should at least aspire to be fair. As stated by Veale and Binns [33], "real-world fairness challenges in ML are not abstract, […] but are institutionally and contextually grounded." Let us illustrate this with an example: gender bias can be added to an ML model just by biasing the way in which the data used to train the model are selected. Caliskan et al. [34] have recently shown that semantics derived automatically using ML from language corpora incorporate human-like stereotyped biases. As noted by Veale and Binns [33], lack of fairness may sometimes be the inadvertent result of organizations not holding data on sensitive attributes, such as gender, ethnicity, sexuality, or disability, for legal, institutional, or commercial reasons. Without such data, the risk of indirect discrimination by proxy increases. In the medical domain, and in healthcare in particular, where sensitive information about the individual may be readily available, how do we ensure that AI- and ML-based decision support tools are not affected by such bias? Fairness constraints can be integrated into learning algorithms, as shown in a study by Celis et al. [35]. Given that fairness criteria are reasonably clear-cut in the medical context, such constraints should be easier to integrate than in other domains. Following Veale and Binns [33], fairness may also be helped by trusted third parties selectively storing the data that are necessary for incorporating fairness constraints into model building in a privacy-preserving manner. A recent proposal of a "continuous framework for fairness" [36] seeks to subject decision-makers to fairness constraints that can be operationalized in an algorithmic (and therefore in an AI and ML) setting, with such constraints facilitating a trade-off between individual and group fairness, a type of trade-off that could have clear implications in medical domains, from access to drugs and health services to personalized medicine.
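As a minimal sketch of what a basic fairness audit might look like in this context (our own illustration; demographic parity is only one of several competing fairness criteria, and the data and the sensitive attribute are simulated), the code below compares a model's rate of positive decisions across the groups defined by a sensitive attribute:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)                   # simulated sensitive attribute
X = rng.normal(size=(n, 4)) + 0.5 * group[:, None]   # features correlated with group
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Demographic parity: compare positive-decision rates between groups.
rates = {g: pred[group == g].mean() for g in (0, 1)}
print("Positive-decision rate per group:", rates)
print("Demographic parity gap:", abs(rates[0] - rates[1]))

Note that running such an audit presupposes access to the sensitive attribute itself, which is precisely the tension with privacy and data minimization discussed above; constraint-based training, as in the work of Celis et al. [35], goes one step further by enforcing, rather than merely measuring, such criteria.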
VI. CONCLUSIONS

For decades, AI and ML were mostly investigated and developed in the academic environment, with only a few inroads into broader social domains. Over the last few years, though, these fields have been undergoing an intense process of industrialization that comes with societal strings attached. We have considered legislation, ethics and fairness, interpretability and explainability, and privacy and anonymity; other issues, however, such as robustness and safety, economics and accessibility, or complex data management, could also have been considered. Our final remark is a call for collaboration between the AI-ML and medicine-healthcare communities in the pursuit of methods, protocols, guidelines, and data analysis pipelines that explicitly take many of these societal issues into account.

REFERENCES

[1] Gadde SS, Kalli VDR: Medical Device Qualification Use. DOI: 10.17148/IJARCCE.2020.9410.
[2] Vellido A, Martín-Guerrero JD, Lisboa PJG: Making machine learning models interpretable; in Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2012). Bruges, Belgium, i6doc.com, 2012, pp 163–172.
[3] Doshi-Velez F, Kim B: Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017.
[4] Dreiseitl S, Binder M: Do physicians value decision support? A look at the effect of decision support systems on physician opinion. Artif Intell Med 2005;33:25–30.
[5] Leonelli S: Data-Centric Biology: A Philosophical Study. University of Chicago Press, 2016.
[6] Bacciu D, Lisboa PJ, Martín JD, Stoean R, Vellido A: Bioinformatics and medicine in the era of deep learning; in Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN). Bruges, Belgium, i6doc.com, 2018, pp 345–354.
[7] Provost F, Fawcett T: Data science and its relationship to big data and data-driven decision making. Big Data 2013;1:51–59.
[8] Deo RC: Machine learning in medicine. Circulation 2015;132:1920–1930.
[9] Gadde SS, Kalli VDR: Artificial Intelligence to Detect Heart Rate Variability. DOI: 10.33144/23939516/IJETA-V7I3P2.
[10] Cornwall-Jones K: The Commercialization of Artificial Intelligence in the UK. Doctoral dissertation, University of Sussex, UK, 1990.
[11] Goodman B, Flaxman S: European Union regulations on algorithmic decision making and a "right to explanation." AI Mag 2017;38.

[12] Veale M, Binns R, Van Kleek M: Some HCI priorities for GDPR-compliant machine learning. arXiv preprint arXiv:1803.06174, 2018.
[13] O'Connor S: Big data and data science in health care: what nurses and midwives need to know. J Clin Nurs 2017, DOI: 10.1111/jocn.14164.
[14] Castelfranchi C: The theory of social functions: challenges for computational social science and multi-agent learning. Cogn Syst Res 2001;2:5–38.
[15] Ash JS, Berg M, Coiera E: Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. JAMA 2004;11:104–112.
[16] Hoff T: Deskilling and adaptation among primary care physicians using two work innovations. Health Care Manage Rev 2011;36:338–348.
[17] Gadde SS, Kalli VDR: Technology Engineering for Medical Devices: A Lean Manufacturing Plant Viewpoint. 2020. DOI: 10.17148/IJARCCE.2020.9401.
[18] Reid MJ: Black-box machine learning: implications for healthcare. Polygeia 2017;April 6.
[19] Fernández-Alemán JL, Señor IC, Lozoya PÁO, Toval A: Security and privacy in electronic health records: a systematic literature review. J Biomed Inform 2013;46:541–562.
[20] Berman JJ: Confidentiality issues for medical data miners. Artif Intell Med 2002;26:25–36.
[21] Aggarwal CC, Philip SY: A general survey of privacy-preserving data mining models and algorithms; in Aggarwal CC, Philip SY (eds): Privacy-Preserving Data Mining. Boston, MA, Springer, 2008, pp 11–52.
[22] Scardapane S, Altilio R, Ciccarelli V, Uncini A, Panella M: Privacy-preserving data mining for distributed medical scenarios; in Esposito A, Faudez-Zanuy M, Morabito FC, Pasero E (eds): Multidisciplinary Approaches to Neural Computing. Springer, 2018, pp 119–128.
[23] The Medical Futurist: Top Artificial Intelligence Companies in Healthcare to Keep an Eye On. January 31, 2017. http://medicalfuturist.com/top-artificial-intelligence-companies-in-healthcare (accessed June 2018).
[24] Shah H: The DeepMind debacle demands dialogue on data. Nature 2017;547:259.
[25] Gadde SS, Kalli VDR: Descriptive Analysis of Machine Learning and Its Application in Healthcare. DOI: 10.33144/23478578/IJCST-V8I2P28.
[26] Moor JH: The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 2006;21:18–21.
[27] Ladikas M, Stemerding D, Chaturvedi S, Zhao Y: Science and Technology Governance and Ethics: A Global Perspective from Europe, India and China. Springer, 2015.
[28] Article 36. http://www.article36.org.
[29] Campaign to stop killer robots. https://www.stopkillerrobots.org.
[30] Beauchamp T, Childress J: Principles of Biomedical Ethics, ed 7. New York, Oxford University Press, 2013.
[31] Magoulas GD, Prentza A: Machine learning in medical applications; in Paliouras G, Karkaletsis V, Spyropoulos CD (eds): Machine Learning and Its Applications. Advanced Course on Artificial Intelligence. Berlin, Heidelberg, Springer, 1999, pp 300–307.
[32] Veale M, Binns R: Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data Society 2017;4:2053951717743530.
[33] Caliskan A, Bryson JJ, Narayanan A: Semantics derived automatically from language corpora contain human-like biases. Science 2017;356:183–186.
[34] Gadde SS, Kalli VDR: Applications of Artificial Intelligence in Medical Devices and Healthcare. DOI: 10.33144/23478578/IJCST-V8I2P27.
[35] Celis LE, Straszak D, Vishnoi NK: Ranking with fairness constraints. arXiv preprint arXiv:1704.06840, 2017.
[36] Hacker P, Wiedemann E: A continuous framework for fairness

