
Prabha Singh
Bartels
WRT 205
4/2/19

When conducting primary research on a technological topic such as artificial intelligence, insight from specialists proved to be the most effective way to gather information without a deep mathematical or scientific background. Because lab reports are hard to digest and dense with complex mathematical concepts, interviews and direct quotations from the developers of artificial intelligence provided personal opinions on the topic that were specific yet coherent, and interestingly different from majority public opinion on artificial intelligence.

With my research questions addressing the intersecting ideas of ethics and the integration of AI into society, it was essential in my personal interview to gain insight into how a developer or scientist in the field of artificial intelligence applied their own set of ethics to their research and felt about the direction AI is heading. When interviewing Garrett Katz, an assistant professor at Syracuse University who teaches introductory as well as higher-level AI courses and conducts his own AI research, I inquired especially about his ethics and opinions concerning AI. He explained that there is always a moral dilemma for a developer in pushing the limits of AI, but that it is crucial for engineers to consider ethics even amid its rapid development. The gears behind corporations, the scientists and engineers, are conscious of the technology they develop. Katz emphasized that the in-depth ethical consideration given to each individual project is largely invisible to the public. As an example, he brought up DARPA, a U.S. defense research agency, which held multiple ethics panels at a convention he attended in order to gather input from specialists. One particularly unpopular topic of interest was AI-controlled drones with the potential to deploy missiles using their own judgment and decision making. DARPA's engagement with ethics tied into my other research question concerning the process of integrating AI into society. Katz made the valuable point that public awareness is the key to successfully integrating more AI into our society; although DARPA might introduce AI that draws negative public opinion, it is important to acknowledge that a past military-funded project, the internet, has become a staple of society today. When inquiring more specifically about his research, I was finally able to gain insight into how AI fits into the equation with the job market. He specializes in neural networks and imitation learning for robots, which are essential tools in creating useful automation. He stated that the whole process of imitation learning is very difficult: when trying to model AI after a human brain, something that contains billions of neurons and trillions of synaptic connections, robots remain remarkably poor learners. While emphasizing that AI is evolving much more slowly than the public perceives, he described how AI can replicate and reproduce behaviors, actions, and results but continually struggles to apply its knowledge to brand-new situations in the way that a human can. He noted that the job market would not experience a surge of AI replacing humans in the near future, because neural networks still cannot process things like emotion or provide adaptive solutions to new situations. Katz ended by explaining that his research is geared more toward using AI to answer deep and complex mathematical questions that humans simply cannot process, in order to gain a broader understanding of the universe, rather than toward the convenience purposes for which AI is most widely discussed today. Overall, the personal interview allowed me to ask valuable personal questions about how an AI developer considers ethics in their research process, and it gave me some insight into exactly where AI stands in terms of progression.

Transcripts of various interviews with CEOs and executives of AI companies provided an interesting perspective that I had not yet incorporated. All of the individuals interviewed shared a similarly positive outlook on AI and gave their points of view on controversial AI claims. Ross Upton, the founder of Ultranomics, was part of the development of an AI analysis tool that recognizes certain medical conditions, such as artery disease, with greater accuracy when determining whether a patient needs surgery. Upton argued that the widespread public fear of introducing AI into critical medical issues was unfounded and pointed out that industries, specifically the medical industry, are heavily regulated. Implementing AI in the medical industry would involve heavy testing, regulation, and caution, so there would be no need for ethical concern over AI health analysis. Simon Parkinson, COO of Dotgroup, shared his insight into AI's potential for assistive technology. He weighed in on the ongoing discussion of whether AI poses a threat to the job market and dismissed the popular, controversial opinion that robots will take over millions of jobs. Instead, he emphasized that assistive AI would not necessarily replace caregivers or helpers for those with disabilities, but would improve quality of life by reducing the heavy dependence on other people for those who need the technology. The last interview in the transcript was with Daniel Botterill, CEO of Ditto Sustainability, who developed AI-generated consultations for companies trying to evaluate their sustainability and waste levels. Botterill and Upton both made the similar point that AI would provide more precise and efficient evaluations than a human doing the same job. Botterill added that consultations with this type of AI would be considerably cheaper than consultations with a human sustainability specialist. It was interesting to see how Botterill expanded on the financial benefit AI provided to the consumer but left something of a blank when it came to the employees. Across all three interviews, these higher-ups in AI technology companies shared similar information concerning the benefit to the consumer but did not provide much information on how that benefit would affect the average worker.

My primary research yielded a great deal of insight from those immersed directly in the field, whether higher-ups or the developers themselves. The differing ethical stances across the interviews were interesting, but my personal interview with Garrett Katz seemed to provide the most genuine and insightful information, as it allowed me to understand specifically the kind of moral dilemma that goes on when creating AI. My primary research provided information that was not easy to find or understand elsewhere, simply because the perspective I aimed to capture is typically overshadowed in discussions of artificial intelligence.


Reflection

Figuring out how to approach my primary research was my first task, because I wanted information from a different perspective than what I found in my secondary sources, which mostly reflected public opinion. After deciding to center my primary research on interviews and opinions from specialists, I had to figure out what types of questions I could ask. Drafting interview questions posed the biggest challenge because my questions needed a level of professionalism. I also tried to structure my questions like a funnel, starting with more general questions and then refining them as the interview went on into more specific questions about his research and personal ethics. I mainly wanted to understand how a developer working with AI technology considered their ethics and how they proceeded with their research while acknowledging their moral limits. When conducting the interview, I learned that improvisation is a big factor, because once a certain topic comes up, questions about the particular examples or experiences the interviewee mentions become far more relevant than what is written down on the drafted list of questions. Consequently, I noticed that in a different environment and social atmosphere, I found different angles on my topic that were just as insightful even though they were not mapped out beforehand. Lastly, when looking for a more textual source of primary research, I was mostly seeking a different perspective on the topic and found that transcripts of other interviews would be useful for gaining insight from people who would otherwise be out of my reach. Throughout my primary research process, I think I accomplished my main goal of collecting perspectives and ideas that differed from what I gathered in my secondary research, while additionally learning more about the interviewing process.


Works Cited

McHugh, Marian. "Fear Not." Computer Reseller News, 2018, pp. 10-11. ProQuest, https://search.proquest.com/docview/2082462256?accountid=14214.

Singh, Prabha, and Garrett Katz. "Artificial Intelligence Research."
