
Okay, can we start with you introducing yourself? Your name and your affiliation.

I’m John C. Havens. I’m the author of a book called “Heartificial Intelligence.” I’m a consultant.
And I have the pleasure of, right now, being the Executive Director of the IEEE Global Initiative
for Ethics of Autonomous and Intelligent Systems. I’m also the Executive Director of something
called the Council on Extended Intelligence. For the interview, everything that I say will be my
own opinions and won’t necessarily reflect the actual views or formal policies of IEEE and the Council.

Great, thanks John. Can you tell me a little bit about how your work in AI began?

My work in AI began out of straight up fear. I was doing a series of interviews for Mashable. I’ve
written for Mashable and the Guardian and Slate and Fast Company, and I wasn’t in the AI
space, but about eight years ago I started interviewing people saying what’s the code of ethics
for AI? Ignorantly thinking everyone would refer to like “Oh it’s the Smithton Code written in
1985.” And more and more what happened is people said, “Well, we use Asimov’s Three
Laws of Robotics as our kind of code to ask questions.” And initially I was like, that’s a short
story from the fifties. I’m a big fan of the story. So, it started from fear.

Yeah. Can you tell me a little bit about this notion of science fiction? So, you, you referenced to
the, the Three Laws but, the, the ways in which your work or your entry point to this work may or
may not have been shaped by science fiction?

So, in terms of science fiction and my work with artificial intelligence, or what I and the people
I’m working with tend to call autonomous and intelligent systems. And I’ll tell you about
that in a second. But I’ve been a fan of like Philip K. Dick, you know, Robert Heinlein, you know,
for years. And I watched Star Trek the original series, granted they were in the repeats at that
point, with my dad. And Battlestar Galactica, the newer one, not the older one; it’s fantastic. So,
the narratives around what will the future bring is something I’ve immersed myself in ever since I
was a kid. Star Wars, same thing. And the reason I was initially fearful was mainly thinking how
much of technology is being built where there may be questions people are not asking. Not
because they are not good people, right? Meaning moral people or people who have the best
intentions, but I very quickly realized that intention isn’t really about what then manifests in the
technology. And this is I think where the message from a lot of really great science fiction films,
you know, the classics like 2001, etc., is these great questions. If it’s kind of, you know,
Terminator-type stuff, I’m not as interested in sort of just the us-versus-them narrative. But when
they can ask these deeper questions of who
are we as humans. And like my last book, “Heartificial Intelligence,” I asked the question, how will
machines know what we value if we don’t know ourselves? The original fear that I had was that
asking that question outside of any technology is very hard. Introspection is hard. And if we’re
not willing to do it and sort of want to let technology, as cool as it may be, kind of usurp certain
decisions then we risk losing things or giving things away without meaning to. Or taking it away
from others in one sense. Where it’s not causing, say, overt physical harm, but there may be
mental harm. There may be well-being harm. And there may be long-term sort of, again
usurpation of things that we in one sense can’t get back.

So, you mentioned you’ve done some work as a journalist, it sounds like, for the Guardian, and
you wrote your book “Heartificial Intelligence”. Can you tell us a little bit more about your past
work and your present work in this arena in regards to AI?

Sure, so in one of my past lives, I was an actor for fifteen years and I’m a journalist/writer.
Meaning I’ve written a number of pieces for the Guardian and for Mashable and Slate. I just had
a piece published yesterday in Quartz. So, journalism, for me, I never went to journalism school.
I know the ASJA guidelines and how to be objective and all that type of stuff, but really, I’m just,
I love learning. My grandfather, my mom’s dad, was a high school principal for forty-five years,
and up until even the time he was ninety-two or ninety-three, you just saw a person who found
learning infectious. He had a big stack of books next to his chair. I fell in love with 60 Minutes
because of him. This visceral “it’s Sunday night”, you know, like “I’m going to learn something.”
And the idea of self-improvement. I’ve always loved where it’s not like “Hey, you’re broken, fix
yourself,” but “Hey, you’re at a great place. What else can you do to enhance?” And by enhance
meaning for me it’s always been more about arts like learning. So, I love asking questions. To
me, journalism is like I’ll find this germ of what I think is a unique idea and then most of my articles
start off with like “Do you think this is unique?” And someone will be like “Except that fourteen
people have written a book about it.” And I’m like “Okay.” So then, but like a lot of my books
started from asking questions. I’ve never called myself a futurist; I often back away slowly from
people who call me one. I’m more like a presentist who kind of sees three or four trends that are
kind of converging. That’s where science fiction to me isn’t
fiction. Like my work, I try to say look, like this thing over here. Like I’m very focused on virtual
reality and augmented reality and how that’s going to affect human well-being right now. Cause
I can point to fourteen companies doing stuff, but they already have a prototype or, you know,
something much more advanced in terms of headsets that people are wearing. But that’s not to say
that trend lives in isolation from this thing over here, just because these two groups aren’t
thinking about it. So anyway, science fiction, you know, imaginaries, narratives, I think I have a
benefit of having been an actor and having been in TV shows like Law & Order. I’m laughing
cause, you know, there’s a certain amount of science involved there. But the point is, for the
world outside of the experts building the technology, the first thing they see is Westworld, Black
Mirror; they know these shows. And then to sort of say like “Hey, don’t be
frightened. Don’t be freaked out.” But this is still what appears when they watch TV. They’re
naturally going to ask where the positive message is, and that’s like, you know, what you’re
doing here, and other people have done so beautifully is to say we have to paint a vision of a
positive future. And there are some episodes of Black Mirror, for instance, which will have, here’s
something really cool. The technology’s always awesome. It can be scary in terms of if it does
negative, unintended stuff. But then here’s a world, when this is here, that we can look to that
could be positive. It makes it much easier not just to build it amongst the experts, but for the
average person, as it were, to build it in their minds.

Yeah. So, you’ve done a lot of work in communicating to the public directly in interesting ways.
I’m wondering if you can speak to a particular example or a particularly accurate example of
communicating a very complex system to a broad public. Whether it’s with a scientist with whom
you’ve worked or if you think there’s a particular corporation that’s done a nice job with that
work?

Yeah, I have to think for a second. A complex system easily translated to the public. That’s a
tough one because when you say translated to the public oftentimes what happens is the
translation is, hey that thing that’s in your house, sign a couple of waivers or this consent form
and there you go. And having worked in PR before, the main thing that is communicated are the
sort of consumer values or benefits. And I think that’s where, again with the unintended negative
consequences, and to try to stay more positive and proactive, people tend to forget what
unintended positive consequences could be, instead of just having kind of a hand-wringing feeling
of like, oh, the ethicists are in the room, here we go. You know, like, here are the limitations, what
the legislation’s going to be. You know, I worked in the business world. I understand that’s scary.
Also, just when you’ve built this beautiful piece of technology and, and I get it, I’m a musician,
I’m a writer, was an actor for years, you create something and you’re like, “Look I just want you
to love the beauty of what I’ve created.” And then someone says things like “Well someone
might slip and fall” and you’re like “Aw, here we go, okay.” But the realm that I’ve lived in my
whole life, and I’m trying to get back to your idea of who’s communicated it well, so I’m just
trying to think of it while I’m talking, but my dad was a psychiatrist, my mom still is a minister,
and I’m a writer, well still a writer, used to be an actor, so I’m steeped in introspection, meaning
studying humans, myself included. Oftentimes it’s very annoying because you’re talking to
people and you’re like, when have I said that, well, Brecht said this, but Plato mentioned this. And
you’re like, oh God, I just want to say something and not have to examine it forty-eight times.
But I think a lot of times, oh here it’s coming to my mind, I was just at an event and UNICEF was
at this event, and we were talking about human rights and artificial intelligence and they had a
two- or three-page pamphlet that was beautiful in terms of communicating to parents. It was
highly visual, and it was like a two- to three-page thing where most of it was, you might picture it
like an infographic inside of Wired magazine. You know, what parents should be aware of. And
it, it had a caveat aspect to it, but it wasn’t a fear. And as a parent especially, I felt empowered
reading it. And the visuals were very beautifully done. It was, here’s what you need to do in
terms of your child’s data. Here [missed @ 10:48, 1st video] should you need to think about. It
was a best-practices document. So, kudos to UNICEF. And kudos to also addressing a
massively huge, globally important demographic, which is parents. Because this is where
discussions about data or other things happen. I think as a parent, and I speak for myself, you move
from what can sometimes be esoteric, like the nature of privacy, you know, privacy in
the ancient times, to, like, my kid looking at that screen that can be accessed by these
negative actors, not any specific company, but these things, and UNICEF I think is a great
example of someone that did that well.

Can you speak a little bit more about what you see as your responsibility in communicating with
the public on AI? And perhaps, in the work that you’ve been doing, it sounds like there are
various publics that you’re working with, whether it’s practitioners, engineers, ethicists,
technologists who are developing devices, but also a broader public as well.

In terms of the responsibility that I feel for the work I’m doing again I wear different hats. So, I’ll
speak for some of the organizations that I’m honored to be a part of, and just restating, not so
much from a disclaimer standpoint, but saying that IEEE is the world’s largest technology
association, started 130 years ago by Thomas Edison. It’s a very respected organization, over
420,000 members in 160 countries. But really the heart of the engineering community. And I’m
not an engineer, but I’ve been deeply honored to work with, and this comes back to like my
grandfather’s style of learning; I just never had many engineers in my family. My brother-in-law
is an engineer. And so, we always geek out together about new technology. But when
you really understand sort of a sense of how to build, say, a standard, and literally a standard,
like people get into a room for three years and write out a standards document. You know, I’ve
been on Broadway, right. I can get up in front of two thousand people and sing. You want to
lead a standards working group, you better have a thick skin and be ready to navigate a room
full of experts. Also, in terms of communicating, there’s a very specific term called
“requirements” amongst engineers, which I didn’t know about. And I remember I had to
struggle; people kept saying, well, the requirement for this, and I
was like “Oh do I check with Bob, you know, in IEEE to make this.” And they’re like “No it’s a
requirement.” They kind of gave me this face, “It’s a requirement.” And it’s an actual language
thing. The word “shall,” in terms of a standard, means it’s unequivocal: when you move, you
know, this coffee mug from A to B, you shall do it this way. All three hundred people doing a
standard agree. It’s mesmerizing. It’s glorious. Because
also it means that people from around the world, maybe speaking different languages, still all
say, in their different languages, from maybe an inverted Tower of Babel standpoint: so, we’re
moving it left to right, yes. Now, when a piece of paper comes out, by the way, it’s not like it’s perfect or
everyone agrees, but through consensus, when this thing comes out it can form this amazing
piece of what is often called soft or pre-legislation where people can go, you know what, thank
goodness these people got in a room and did this, because it means that three hundred smart
people already did all this thinking so unless we’re going to mirror and also work for three years,
they probably thought a lot of the things that we’re going to think of, so it saves so much time.
So, one thing is in terms of the duty or the responsibility, I’ll say it again, but I’m honored to work
with IEEE, I’m a fanboy, I’m completely not objective. Are there things about, you know, any,
any big organization, there are challenges, of course. But in terms of like, you know, the nobility
of engineering, all these things where I’m not an engineer so I can say this, like, you know, the
old adage of you don’t build a bridge to fall down, right? You don’t talk to an engineer, or I don’t,
I learned very quickly to not be like “Hey ethics, let’s talk about ethics” to an engineer cause
they’re like “Yeah. I build an elevator and I know about risk cause I save people’s lives.” That’s
the nobility side. What’s exciting for me though is when I first had an idea for what became the
work that’s now at IEEE that I’m now Executive Director of. By the way there’s a chair named
Raja Chatila who is a world-renowned roboticist. I am one part of a magnificent group of
experts, where I get the honor of sort of herding as it were very smart cats. But I also do a lot of
strategy and tactics and Konstantinos Karachalios, the Managing Director of the Standards
Association, I call him the Godfather. I hope he’s okay with that. He calls me the catalyst.
Because I brought an idea that was a germ. He’d already been working with IEEE to say like
how do we rethink essentially design. Cause that’s what a lot of people don’t get when the word
ethics comes up with AI. They go utilitarianism and the tunnel problem, if I talk about the tunnel
problem again, I hope that’s not what you’re going to ask me, I will hurt myself ethically. But it is to
say, when you ask more questions, especially about human agency, data, and emotion, using
methodologies like values-based design, value-sensitive design, you can take the existing
amazing bedrock of what engineers and programmers and data scientists already do, and say, the
questions that you’re already asking about risk, this is intended to be a complement. This is
intended to be a yes and to help you. Because also one thing, when I brought this idea to IEEE,
I thought to myself, whatever this idea is that I’m bringing, I’m bringing it to the world’s leading
engineering organization, one that I’ve always thought of kind of like the UN of technology in one
sense, cause it’s not about IEEE says this; it’s all the members get together, and through
consensus a policy statement happens after the board ratifies it and says, “Is this what we all
think?” Yes. So
that sort of global consensus I said well if I bring this sort of idea that’s got ethics and applied
ethics and philosophy and social science to IEEE, by the way they’re already doing a lot of that
stuff anyway, then they can build it. We need it built. I’m a pragmatist. We need to say,
how do we build this stuff? As compared to I have many friends who are philosophers, I love
them dearly, but had I gone to like a global philosophy group and said what are we going to
build, it would have been really interesting and cool and then I would say great, how do we build
a standard and, you know, maybe crickets. So, the real message, however, the real
responsibility is to bring these groups together and I get very nervous especially now in 2018
where in a University setting or in any setting “We’re going to be talking about what to do. We’re
going to be talking about what to do with the future of AI.” I’m like, great, who is it? It’s all data
scientists. Are there any social scientists? Or psychologists? Nope. And I’m like, can you get the
social scientists, you know, cause what else are we going to do if we’re not cross pollinating.
So, anyway, that’s an answer sort of for the IEEE work. So, in terms of responsibility then, I’m
also Executive Director of this really cool newer thing we’re doing called the Council on
Extended Intelligence. I had the massive honor of meeting Joi Ito, and I’m totally unsubtle
about my heroes, you know, people like Illah Nourbakhsh for instance, Joi Ito, Eric Clapton,
you know, those types of people. And I do love Illah, by the way. Not just saying that. Not just
saying that. But I got to meet Joi Ito, who I think most people just know. He’s one of the most
fabulous brains on the planet. Full stop. And if someone’s like, I don’t think Joi Ito, like,
whatever, I’m like, “I got to go.” Anyways, so I went to this thing, I got invited to an Aspen round
table on AI and first of all I got in the room and I was like, “Why am I here?” You know, like the
head of the Knight Foundation, I was so honored and like, and after about like ten minutes of like
pure like terror like should I say anything, I’m like I got invited. And I was saying things that are a
lot of my personal work, but also that thankfully IEEE is mirroring or I should say that it’s not just
me that’s thinking this way. But again, I just am conscious of not wanting to give the impression
that IEEE is whatever. But I talk a lot about well-being, and this is something that on a personal
level I’ll tell in the third part of my answer. But this idea of well-being is not mood. It’s not about
happiness. It’s about understanding how a human being actually flourishes. And flourishing
means, having known my dad’s work in mental health, things like productivity, right, like getting a
lot of stuff done. Great. Awesome. But it’s this sort of, how everything is connected to, like, the work
that we do, you have to make money to earn a living, cool. But then you work to not just
make enough to survive and take care of your family; the message is, like, you always have to
have more. Which in general is what the gross domestic product, or sort of productivity as a
value makes us do as humans, right? So that’s fairly Western. It’s definitely something that is
part of when you actually study the economic aspect of GDP, it’s a great metric in the sense of
people agree about it. You know, they can measure it. But it’s very myopic in the sense of is the
country only about how much stuff they produce? So, I bring this up because I was saying this
type of thing in this room in Aspen where you think people, they’re very polite, a lot of people
liked it, some people were like are we talking about AI, who’s the guy, who’s the whatever guy
talking about well-being. Cause I’m used to that, I’ve been writing about this for six or seven
years. And even I’m always, like, the OECD Better Life Index and Bhutan’s Gross National
Happiness. I’m patient because it’s a paradigm shift; like, if you mention the word, you know,
neo-liberalism, sometimes people would be like “I don’t know what you mean.” Or if you even
dare to say, can we talk about capitalism? Like, it just turns instantly political which is sad,
because it’s like, let’s remove these big names like capitalism, socialism, because of course
they’ll be mired in, well, socialism, Marx, Stalin, right, and like some really bad stuff with Stalin.
Granted. But it’s more about, hold on, hold on, hold on, what about the commons? What about
Elinor Ostrom? Another one of my heroes. The only woman to be given the Nobel Prize for
Economics. Right, there’s thinking that we have to understand that values created the
philosophies that undergird economics. Economics is a lot of statistics, but it’s a lot of ideology
first and foremost, that drives who we are and what we do. So, long introduction to say I was
saying these types of things and Joi Ito was, like, nodding. And I was like, I just did something
and Joi Ito nodded, so unless he just got, like, a text, “Do you want to get lunch later?” or
something, that’s cool. And we ended up talking and I was so thrilled cause he’s like, I had this
idea or we’re doing a lot of work at MIT about extended intelligence and I didn’t know what that
meant. And extended intelligence, MIT’s doing a lot of really great work on this. It’s this idea of
systems thinking. [missed name @ 22:40, 1st video], who I’m a big fan of. And it’s this idea
especially when you think of computationalism. The mindset amongst some circles of AI
practitioners that when you can copy, you know, all the dendrites of my brain, John’s brain over
here, into silicon, then John is both A and B. And that’s a belief, and I’m not here to judge: if that’s what a
person believes, they believe it. But the systems thinking idea, one aspect of it is to say, well
just because John’s intelligence as it were is copied from A to B, what about how he relates to
other humans, what about his relationship to the environment. There’s a systems mindset,
certainly something with the environment, for instance, this is obvious, but if we’re able to copy
all of our brains into silicon, then a lot of the actual world, the planet itself, wouldn’t be as
necessary in the sense of the main thing you need is then air conditioning. Then you don’t need
trees or water or anything else. Anyways, so I introduced Joi to Konstantinos Karachalios and
we came up with this idea for the Council on Extended Intelligence which, the first part of it is to
change the narrative and this is something I do feel a duty about with the public, is to change
the narrative around “artificial intelligence,” air quotes. Most practitioners will say, and you’d
agree, it’s kind of like saying electricity or the web; it’s just so general to say AI. The next
question most practitioners will ask is, wait, machine learning, cognitive computing, what are we
talking about? AGI? So, first of all, the phrase, you know, God bless Alan Turing and all the
people that made it what it is, but it’s a phrase that needs to evolve. So, we in IEEE in our work
say Autonomous and Intelligent Systems, which again often needs clarity, but the actual phrase
in the media oftentimes comes with an us-versus-them. Artificial intelligence, whatever
company, you know, AlphaGo just beat the world’s best player at Go. This thing just beat
lawyers. This thing just beat writers. This thing just beat. What is the message we think as
technologists, by the way I’m saying this as media, not technologists, and I’m not putting down
the technology, I’m just saying the way that the technology is framed to, you know, my mom is
like, “Oh, I guess I suck.” Right? Because it’s inevitable that people are building things to beat
me in everything that I can do. And then we have these conversations about AI and work, and
this will never be replaced, and I’m like, do you read the papers? Maybe it’s link bait. But the
message that the media is sending is that A, technologists which is not true, IEEE, this is not the
case, IEEE is very positive about technology as they should be. That’s why I want to work with
them. It’s not just a, why would they be negative, you know? But if the media message is it’s just
a matter of time till you get replaced, that’s not going to help anything. And also, there’s this
whole idea kind of like augmentation, and I’m getting to extended intelligence here.
Augmentation, depending on who says it, there can also be kind of, if not an agenda, a delay.
Like, hey, we’re going to work with machines, we’re going to live with AI. My answer’s, cool, how
long, what does that mean? Be specific. And then go read Martin Ford’s “Rise of the Robots”. I
love Martin Ford. I met him years ago and he talked and a lot of people, he makes them mad.
But he gives good, solid examples. He’s like, UPS, I think it’s him or NPR, but I think it was him.
UPS, and I don’t mean to demonize this specific company, I mean that type of, you know,
delivery vehicle system, now whatever number of companies might be doing this, has
seventy-five sensors, so that when the truck backs up to the curb, if it’s two inches versus one,
that’s noted, right. And so, with all these different things, what Martin’s basic point is, in any
of these examples, is he says be aware that when we say we’re living alongside of AI and we’ll
work together, 99% of the time that means there’s a system being trained to know what you do
so that you can be replaced. And I’ll go back to my GDP thing which is businesses have a legal
mandate to maximize especially if it’s a shareholder thing, maximize shareholder return. And
you can’t be like, hey, I can replace the human workforce and increase productivity by 60%,
decrease risk and decrease mistakes by 70%, and make seventeen times as much money.
There’s no business imperative to not have that happen. So anyway, all that to say, extended
intelligence is this idea of, like, hold on, let’s rethink things like participatory design’s return, like
thinking about the end users, or user-centric design. And especially thinking of the environment
not in a very Western sense. The environment most of us think about like, that’s the
environment, see those trees. I hope I can save those trees. As compared to more of an
Eastern mindset or of a systems thinking mindset is, we’re actually one with the environment
and if we don’t take care of what’s out there it is also in here, the water we drink, etc. And so,
also with the council it’s really exciting, we’re working with a lot of indigenous populations
because they also have mentioned that being anthropocentric, saying things like human-
centered AI, that phrase will actually upset a lot of indigenous people.

So, John, in regard to concern with labor, how do you think AI systems have changed the way
that people work up until now?

I think in terms of labor it’s obviously a multi-faceted issue. It’s very contextual about the country
where you live, so first I’ll start there. I’m always fascinated when I go to Europe versus, I live in
the States. You know, when you go to Europe and you talk about things like universal basic
income, a lot of times Europeans are like “You mean like what we have?” Right? Like universal.
Not every country, but universal health care, kids go to college for free. A buddy of mine, he’ll
complain about taxes, and he’s like, 60% taxes. And I’m like, I pay 48% and I don’t get healthcare,
buddy. And if he ever got let go by his company, I think he said it’s like 90% full pay, and he’s got
healthcare. So, I’m like, do you drop things on occasion or, like, just mess up so you can get let
go? Cause, like, are you kidding me.

Yeah.

So, I think, first of all, it’s always interesting to talk about labor in context. Secondly, I like
asking these questions more and more at AI events, where people go talking about, you know,
the future of work and how we and robots will work together, and I raise my hand and I’m like, just out of interest,
how many people in this room have ever gone without a job since college? So, you graduated
college or grad school, and my logic is, raise your hand if you haven’t had a job for more
than six months, meaning you’ve been out of work for six months. Oftentimes what happens:
crickets. And I was an actor for fifteen years. The norm is to not have work. The norm is a lot of
my friends still don’t have health insurance. And if you don’t know that experience of, and
there’s massive Gallup data, etc. The second you don’t have work, mostly what happens, and
you don’t have insurance, is you immediately start worrying about your health and shockingly it
means your health goes down fast. And so, to ignore things like the mental health realities. And
not mental health of like, you know, conditions that might be more formally named or what have
you, just like of course someone gets fired and the second you lose your job, especially for me
as a father, how am I taking care of my kids and my wife. Or we both work, so how can my wife
take care of us? So, I bring all that up because I have no more patience to talk about these
things in isolation or in sort of esoteric, broad senses. Like, yeah, the future of work, the future of
now. And 2008 was a reality. And I’m not anti-American, but in 2008 people premeditatedly lied,
broke the law, the mortgage crisis happened, the economy was put in ruins and different sectors
were sort of allowed to kind of keep doing what they’re doing to restore an order, air quote. And
what happened is, we have these conversations now about AI and labor and I’m like, again, I
just think of a lot of my friends. And frankly, myself included. The middle class was decimated in
the States. Decimated. It’s one ten-thousand-dollar bill that can ruin a family. And I don’t mean
ruin a family like it’s going to be a tough year this year, Christmas gifts are going to be limited.
No, no ten-thousand-dollar bill means you don’t pay mortgage and then boom, boom, boom,
boom, boom. And I’m like if 2008 happened and it happened, why are we not talking about that
in every one of these conversations? Not to come from a place of finger-pointing or being
accusatory; that’s not my interest, or to demonize. But it is to say the technology also has to work
on the economics. And things like reskilling. In Europe, they’re happy to talk about reskilling. I’ve
been on a lot of panels in Europe: reskilling. I’m like, cool, what does that mean? Well, this
university is doing this. Or there’s this free application for whatever? Awesome. And by the way, I’m never
going to not say reskilling, I mention my grandfather, any time there’s a learning opportunity,
great. But again, when you, I was an actor for fifteen years. Like, you don’t get a severance
when your job ends. And, especially now I just like to ask especially in the States, I will shoot my
hand up, what do you mean by reskilling? Well, emotional intelligence is important. Couldn’t
agree with you more. I’m, like, you know, the poster child for emotional intelligence. I’m really
glad now it’s invoked. But who pays my bills? Who pays for my kids to go to college? Well. You
know, that’s a universal basic income discussion. But, no, it’s not. This whole event is about AI and
the future of work. The present state of work is: most people, I don’t know for certain around the
world, but largely in the States, if they have work, they’re clinging to it tenaciously. And then
when these things might come in and the technology is glorious and beautiful but out of context,
truck drivers, it’s a different discussion but that sort of example. It’s not just that they’ll lose their
jobs and we’ll feel bad. It’s that how quickly they’ll lose jobs and how that will affect the entire
economy. It’s a sustainability question, not because we’re green or because, yay, we love trees. This is the habit of five or ten years; it’s not just retraining a certain sector, it’s retraining us, policymakers and the technology makers.

Infrastructure, right? Because this idea of reskilling is all well and good, but the point is, if you don’t have the infrastructure and institutional will to actualize it, right? To be agile and ready enough to move in the different directions of the necessary skills to reskill a population.

Right, or actually, I think reskilling, yes, the agility. But also, to say, reskilling in a GDP world, we’re done. Like, my answer is we’ve failed, done, period, end of story. People are like, but no, because it’s reskilling where there’s a mandate for policymakers and businesses to be sort of enslaved to a single bottom line. That single bottom line is exponential growth. Technology’s designed to be autonomous and intelligent and replicate skills by design. Those two end values, those two key performance indicators, they work in lovely unison, but they do not honor finite human and finite environmental limits. And to me, saying things like augmenting will
work together with, cool, awesome. But, we’re augmented, I get it, it’s five years from now, and by the way, I’m not trying to be funny here, say I have augmented reality contact lenses put into my eyes like Lasik, which would be kind of cool, as a geek I love that idea. But who’s paying my
bills? And I love asking those questions because the reskilling is a societal reskilling. Because
me learning more about emotional intelligence if I can’t get hired, and I’m the same as tens of
thousands, hundreds of thousands of people, then all these things can also be a massive
distraction to say, well let’s kind of be down here and sure, I’m not saying people shouldn’t get
reskilled and a lot of people will get rehired, great. But the actuality of the number of new jobs, from my understanding of all the different research, from the Forum, great places, is that there will be new types of jobs, but fewer and fewer people needed, with much more highly specialized skills. So, reskilling someone, like, hey, let’s go reskill someone who’s
been, you know, a programmer or someone who’s even managed a small team of people to
understand in-depth philosophical ideas in emotional intelligence, like what?

So, can we talk a little bit more about power negotiations? Certainly between populations, that has come up before, especially in the themes that you’ve shared so far, but I’m thinking about, actually, the relationship between an individual and the AI tool that they may be working with. So, I’m wondering if you can describe an AI tool where power has been transferred from the human user to that system?

I think there’s a lot of examples of power transfer in the systems. I talk about this one a lot and
certainly AI, I think basic machine learning for GPS, GPS has been using types of AI for a while
now, but that’s the example I mention all the time which is most people don’t care about paper
maps anymore, they’re not lamenting that paper map thing. And that’s fine, like I love being able
to drive and I have a book on tape and then I know which directions to go and the nature of
speech, by the way, has changed when you’re driving, right, cause you’re talking with your wife and you’ll be like, “Well I think our son should go, I think they should go,” because you’re hearing, like, take a left, take a left, so that’s an interesting side thing. But, you know, more and more you
read about something like you’re driving, and you see a big sign like road work ahead, and your
GPS even if it’s like Waze which is minute to minute updates didn’t update for some reason, the
satellite’s down, so you’re like, “Oh, I guess it says take a left.” And then some guy’s like, “Did you see the sign? Hello, speed bump. Like, are you an idiot?” And, you know, you’re like, “My GPS said to take a left.” And it’s not cause people are dumb, you know, it’s because you quickly
trust something because it’s a really cool thing and it’s saving you time. But I talk about this all
the time it’s like devices in homes that can start to, if the right word is usurp or if the word is sort
of, we give over willingly little things like as a parent taking care of kids especially when kids are
young, like I’m joking here, right, you’ll almost kill someone to get a full night’s sleep, you’re
obviously not going to do that. But the point is you’re so desperate that if a machine or system
can sort of like help a parent, you know, and like books on tape, you know, devices that can
read stories in different voices, that’s really cool, but there’s a great TV show called “Humans”,
it’s a British show, and in the first season there’s this great scene I love. I find it harrowing but a wonderful example of this giving over to a system. There’s an android, and of course what’s really nice is most of the androids are these glorious young models. Thankfully they have like two old people androids, and I’m like, yes, and that one’s fat. Thank goodness.

They’re old models.

Yes, they’re old models. But at least like one of them is sort of fat and I’m like sweet.

Good comes out of that.

Like I’m like sweet. Morphology. But the main beautiful android woman is taking
care of this family where the human mom suffers from alcoholism and the human mom comes
home one night after a bender and says to like the six-year-old daughter like “Come on, let’s go
upstairs. I want to read you a story.” And the girl goes “I want Susan to read to me” and points
to the robot. And the mom’s like “Well I’m here tonight.” She’s like “I know. I want Susan.” And
that, I saw that as a dad and that scared me more than any drones slaying me and cutting me in
half or robots taking over my brain. I was like there’s a strong chance that that line that’s
different for every family, different for every individual, and I’m not interested in telling the
person how they should parent except to say that I think if you’re not an invested parent I would
say why are you having kids and all that, but is to say like that line of like the technology to help,
that demarcation between where something can sort of take over if a person doesn’t know what
that is, then they won’t know when they’ve lost it until it’s too late. Or they will know when
they’ve lost it, but then they’ll go, what are you going to say in that example. Like they get rid of
the Android and the girl is bereft.

I think in these examples you’re showing deference to the technology at some level, whether in the child’s example in Humans, or the deference in terms of the GPS. And I’m wondering if you can speak a little bit more on the effect that has on human relationships as well? On some level you’re tiptoeing towards the grief, right, that would come from the prospect of the ideal relationship that a child projects onto the reader, right? But also, the ways in which these examples of deference might continue to erode our own social contract with other people, perhaps.

Yeah, the question about social contract and this also goes back to the question about work.
And I’ll talk now about well-being cause I really want, especially the last portion of my interview
to hopefully seem positive, cause that’s my goal. It’s a question of worth. And I think a lot of
times, the reason I went to school to be a minister, I have a lot of friends who are in faith-based
traditions, you know, a lot of people we’re working with are Buddhist, you know, or Eastern
traditions. There’s sort of one central question about a lot of these technologies, or one assumption, I should say, around ethics, values, augmentation. The real sort of leaning-forward-and-whispering thing that people are saying is: it’s cause humans are so broken, we can fix the humans. And on one level I’m like, yeah, there’s wars, violence, and those things to fix or address. Yes.
Yes. But then there’s this question of like, what is our brokenness, like the same with this
woman who’s an alcoholic, like who doesn’t have a friend who struggles with some kind of
addiction? Is the logic like us-versus-them, well they’re an addict, like that’s too bad. And from a
faith-based tradition, I went to college to be a minister, meaning in the Methodist tradition. But the more you actually study things like the Greek and Aramaic of what’s called the New Testament, meaning you read, you know, how things were passed around, it wasn’t like, “Oh hey, the English translation of the Bible just appears, Jesus’ words in red. Like...”

Thank you, King James.

Like we’re good. Oh, King James, please. You know, like, if you even know how things were
canonized in like 400. But when you actually follow things back, and I’m a historian, right, I majored in history in college, you see how something passed from one person to another. These things called codices, or, the idea was, something they looked at and read, or heard from an oral-tradition standpoint, transformed them with a powerful message that changed who they were. That, in the case of early Christianity. People were Jewish. Like this is, you know, shocking to some people, the idea that Jesus was Jewish. I don’t know, okay. Enjoy the fruit, like I don’t know what to tell you. Versus, like, understanding that, from my understanding, in that time in
Palestine or wherever else, the first, especially the first generation, they saw somebody who historically is called Jesus. And then what they did is they told this message at risk of being ostracized from their current community, meaning the Jewish community. And then certainly from Rome and different types of, and amongst, you know, their own people, like zealots who wanted
to focus on war. And for me, you know, and then of course what happened with the Christian
tradition, that’s a whole other conversation, but I can certainly understand when people say,
how can you be a Christian, because I agree, there’s so much horror that’s been done, not just
for Christianity, but in religion’s, you know, name. But I’m like, that’s not why, at least for me, my life was transformed. My life was transformed because at the core there are
times that I don’t feel I have worth. There are times that I feel worthless. And if the message
comes of healing, of transformation through faith and of course it’s not just through Judeo-
Christian traditions, and it’s not just Buddhism. It’s not even formal religious traditions.
Agnosticism, atheism. But an examined life, but if the message kind of keeps coming back to
you like, you’re okay but you’re actually broken and this can fix you. It’s not about the
technology or AI, it’s about what are we saying to ourselves that we can’t address some of
these things on our own and these questions of worth. So, I want to talk about well-being
because, outside of a faith tradition, what I hold for myself is that I believe I have worth. And from the golden rule, which is pretty common amongst a lot of religious and non-religious traditions, my worth can come from increasing other people’s well-being through treating them as I would like to be treated. So, one thing about the golden rule is it’s actually proactive, right. Do unto others as you would have them do unto you. Do unto others. There’s an action to that. It’s not
leave people alone and don’t kill them, right. Which is sort of implied. But well-being based on
positive psychology, right, which is empirical, you know, based on research over the last twenty years, is action-based. It’s not mood. You can be a grumpy dude like I am at times,
a pessimist, but gratitude, right, you do an exercise of gratitude and you sort of think, here’s four things I’m grateful for, and you can be angry, mad, not happy, old, young, any color. This is what I
love about well-being is that it’s got a universality amongst humans anywhere around the world.
And you think, well I’m grateful for my son, I’m grateful for my wife, I’m grateful for this. And you
start to pause and there’s this physiological, an actual physiological change that happens. You can measure it in an MRI machine and see the brain patterns change. You can see dopamine spike, right. There’s a physical change when you take action to improve well-being. And then
when you start to get to understand economic indicators that measure objective and subjective
well-being, and for me, one thing, if anyone watching this video, if you haven’t read the 2009 Stiglitz Report: President Sarkozy of France at the time said to Joseph Stiglitz, Amartya Sen, a lot of these world-leading economists, alright, we’ve been hearing now for a couple of decades, since Bobby Kennedy gave the air-quote “Beyond GDP” speech, that the GDP might not be the best or only measure for society. Let’s just get together and
think about this. When you read the report you might think like, is this like a squishy like
happiness thing, like everyone should, you know, eat candy and talk about clowns. Not at all.
It’s about how do we measure what we build. How do we measure a society, what prosperity
really means? And the Stiglitz Report says, I’m basically quoting verbatim, but look it up, anyone who’s watching this: human well-being is easier to measure than productivity. That blew my mind. I thought the Stiglitz Report was going to be like, we should all hold hands and kumbaya. Productivity is like, something happens and a year later you can measure it. Whereas well-being is a state of flourishing, or the Greek word is eudaimonia, which is mirrored in Eastern traditions as well; it’s the sense of there being a balance in an individual, that’s what well-being is. There’s subjective data, that’s how they talk about their experience, their life
satisfaction. And there’s objective data. Do they have access to clean water, education, but this
lens of what is taken to measure goes from being this myopic productivity, productivity, exponential growth, which is what capitalism and consumerism demand and mirror. I’m worthy cause I buy stuff. What if I can’t afford it? Do I not have worth anymore? I say all this to say, the message I’m so excited that so many people seem to be agreeing on is that what we
can actually change is the whole idea of a single bottom line; you go to a triple bottom line: people, planet, and profit. You say all three things in unison have the same level of importance, so that each quarter, and this is my dream, the CEO closes a door and she goes
in and talks to her shareholders, “Hey we made our fiscal numbers.” Ay alright, bonuses this
year. And we made our environmental numbers, right. It’s not just corporate social responsibility greenwashing, it’s, we had to make tough choices to not get bigger fiscal numbers because of our environment, and then things like, what about suicide and depression that’s at pandemic levels? It’s not just their choice of things to donate money to. It’s the sense of, and I’m
not trying to put this just on businesses, it’s got to be policy and all citizens as well, but if there’s
this sense of with these beautiful glorious technologies, not just AI for good, like let’s take care
of each of the UN SDGs separately in isolation, but rethink, really innovate and rethink these
ideas that were set like in the 1940s and 50s and 60s about how to measure human prosperity.
They don’t make sense in 2018 anymore. And the opportunity, by the way, I’m not saying that I
don’t know, maybe we will turn into, in one sense, machines. We’ll become cyborgs, literally. Maybe the machines that we look at will have a sentience to them. Like, I’m okay with those things, I wrote my book to work through that kind of fear. What I’m not okay with in my work is to just sort of be like, cool, let’s see what happens. Or to say that we can innovate everywhere else but here, and here is money. “You’re hindering innovation” typically means “you’re messing with my money.” And I was an actor for fifteen years. I get it. Like, if you’ve ever
maxed out a credit card, like I get it. And the other thing is innovation, if a person has never
asked themselves tough questions about themselves, and they may not have come from, like
me, a faith tradition or whatever else, it’s not that I would say they’re wrong or bad, what a
glorious opportunity, do you know who you are? Do you know the beauty of who you are even in
your brokenness? Do you know that your brokenness and your foibles are probably the reason
that so many people maybe fell in love with you, right? A foible to one person is endearing to
someone else. And I’m not trying in any way to say we’ll allow bad things to happen, in ethics we say no, no, no. But it’s not till we actually ask these tough questions of who we are that we can then say, how do we imbue these into our systems? And especially things like the same example of GPS
kind of giving over the map skills, to things like parenting. I’m happy, me, John, to put a flag in the
ground and say I think it is wrong to just say I’m going to let machines take care of my kids. I do,
me, John, no one else that I work with. Because it’s not just that it’s wrong, like oh you’re a bad
person. It’s that you’re missing out. Why can you not trust that you can also be a good parent
and steward and have one of the best experiences of the human life to be a parent?

Yeah. And that’s where I’d like to pull us into kind of the closing, because of this emphasis on well-being and the dynamic richness of being able to articulate the features of well-being in culturally specific ways particularly, but also trying to round out that relationship to this increasing prevalence of semi-autonomous or autonomous systems, the ways in which on some level these systems might be perceived as augmentation of our ability to navigate these worlds and build our well-being. But also, the ways in which, if we continually defer to these semi-autonomous or autonomous systems, we’re also relinquishing our authorship of our individual well-being. So, within the context of your work with IEEE, your partnership with these engineers, how do you navigate discussions or how do you attend to questions pertaining to the value of autonomy? What are the safeguards that you use to kind of navigate those conversations when our values sit within particular narrow scopes, on something like optimization of learning or earnings, or optimization of frequency of tool use? How do you work through those conversations pertaining to values in these autonomous systems?

So, it’s a great question in terms of values and well-being and autonomy and one thing I love
about economic indicators. When you study something like the OECD Better Life Index and you
even look at the phrase like time management for instance. Time management is very didactic
and empirical, and what we’re used to in the West is saying, like, how many hours a week do
you spend at work? Fifty? Sixty? Seventy? Eighty? Whatever it is. And then when you ask the
second question, well what about, for instance, your family? A lot of times it’s where you are in
your life that then dictates your values for that era. So typically, in your twenties, you know,
when you’re not married, if one chooses to be married, that’s when you’re like I can do sixty,
seventy hours, you know, my value at this time is to be with work, cool. What that actually
means is an examined life. You’re saying right now, the era where I’m in, I’m valuing time at
work to build a life, not just for money, but because I love my work. But then time management,
say when you’re a parent and you start realizing if you’re, you know, a parent who wants to
hang out with your kids more, you might say, well seventy, eighty-hour weeks mean I don’t see
my kids, and you get, maybe it’s guilt and awareness of that. Well, so, first of all if you don’t
even ask those questions, then the guilt or whatever else will just happen and some empirical
things happen, you weren’t with your kids as much. Does that mean that they’re going to be bad
kids or whatever? No. It just means that you don’t know. And I’m always intrigued when it’s
engineers or people who are empirically-minded, which I like to think I am, although I know I’m
very right-brained as well. I ask the question: if you don’t know your values, I mean, if you can’t list them, if you can’t list like ten things, then I’m like, how do you know you’re living up to them? And if it’s a faith-based thing, it’s like, well, I go to church, I go to temple, I do
that, cool. So, does that dictate your life, or is it just kind of a place that you go every week because they serve good food? Like with Methodists, you know, food, it’s all about the food. Like, I think at one point John Wesley was like, you got to have little white marshmallows in Jell-O, I don’t know why, that seems to be a huge thing among Methodists, randomly. Anyway, but
the reason I bring this up is that then you start to, like last year I was in Dubai for the World
Government Summit and the people who’ve been doing the UN World Happiness Reports for
the last five years, Jeffrey Sachs, Lord Richard Layard, and John Helliwell, put out the first version
of the Happiness Policy Document. It’s fascinating. And it’s things like with smarter city
technology iterative things, it’s not rocket science to think about like say there’s, you know,
we’re here in Pittsburgh at Carnegie Mellon, right. So, there’s a traffic jam over there. I wonder if
people’s well-being is diminishing cause of a traffic jam. You can wear a Fitbit thing that
measures sweat, which is a correlate for stress. And you can, you know, measure all kinds of physiological data, eyes narrowing, you know, in-car tracking is already a big thing through affect and stuff. But it’s not just that you can measure that people’s well-being is being affected,
you can immediately have interventions. Both from a psychological and mental health
standpoint, but a policy standpoint. And certainly, a business standpoint where if the data is
taken care of so that’s a different subject we haven’t touched on, but if people are at the center
of their data through sovereign data structure, peer to peer exchange, finite amounts of data,
their choice, all that. And you’re sitting in your car and you drive towards, like Waze can already
tell you like there’s a traffic jam ahead, and you’re like I know, I’m looking at it. But you start to
drive up to it and boom, you get a thing that says hey, you’re near Starbucks, sorry about the
stress, free coffee for the next hour if you want to pull over. And the policymakers who were sitting there, I mean the smarter city people, maybe they’re actually rerouting, and it’s like a flight, you
know like, anyone want 500 bucks to skip this flight and go to Denny’s for a couple of hours?
And you’re like, yeah, I have a choice, cool. I’m going to take the 500 bucks and take a flight
later today. So, that, people don’t think about that as economic indicators. They’re just like
smarter city. And I’m like, well, that’s an economic indicator, you can know mood and well-being, and you can say, by the way, you can also map these parts of the city through people wearing these things. In these particular spots, at these times of day, these wristbands register stress for some reason and it’s not traffic. So now you have a clue: what is it about that physical part of the city, there’s something that we can address. And the policy book, if you read it, it’s just mind-blowing, things like safety. Safety is
often in the OECD indexes, and safety being measured as an economic indicator means that
policymakers can improve and help safety and the feeling of safety. And I think it was in Boston,
there was something that happened where crime rates were rising at a certain timeframe, and the policy report explains it better. But they just put out more cop cars so people could see them visually, and I forget how they did it, but in such a way that there was the question of, you know, do you feel scared, and people are like, yeah, I feel more scared the last couple of months. And then the answer from the policymakers comes out: we’re going to put more cops on the streets. And by the way, they did, right, it wasn’t just talk. But then when they put them out, they kind of asked, how do you feel? All these things can happen in seconds through giving permission to some kind of device, this is where machine learning or AI is incredibly helpful. But then when people saw the cop cars, physically, through a sweat measurement: oh, thank God. So, this is when you talk about happiness and well-being
indicators, it moves from being this sort of squishy, how do you measure this, because the objective data is there, and we can argue about whether they’re taking the data well, but they took some data, maybe eventually this is brain mapping and all this other stuff, but then the social science, asking through surveys: how do you feel about that? I feel great. I was freaked out a couple weeks ago, you guys put more cops on the streets, thank you. Thanks to the cops. And it has to be done in ways where trust is imbued, so it’s not just greenwashing or whatever, but my whole point there is that well-being is oftentimes much more objective and solid than other things. Quick example: Chip Conley has a great TED talk, fantastic, it’s got like millions of
views. He points out that sixty percent of the GDP measures service-oriented industries, and we all go, service, okay, GDP. And he’s like, but what is a service? What
happens when you go to a hotel? What do you do at the end of every hotel stay? You fill out a
survey. How was our service? Pretty good. So, it’s aggregated subjective data that becomes in
one sense objective data, cause ten thousand people said these hotels are great. So, in our
mind, the GDP is this sort of, this is it. But again, I keep talking about this because I keep thinking how much it’s the one criterion that drives the planet, and everyone says, I hate these terms, developed versus undeveloped or third-world countries. But when you actually understand a larger sense of flourishing, there’s a lot of countries that, and by the way, I would never say don’t give them water or food and human rights and be done, that’s the floor. Maslow, human rights, that’s always the priority. But in places where the nuclear family is still solid, or
places like in Brazil where, you know unfortunately the government is dealing with a lot of
struggles with some pretty major negative stuff, but the family structure is strong with aunts and
uncles living around. There’s the objective things, I don’t have anyone to take care of my kids, I
can’t afford a babysitter. Well, I have my aunt, right? In the West, a lot of times isolated nuclear
families, all they can do is pay someone to take care of their child and then they don’t see their
kids and if that’s their choice, that’s their choice. But then well-being oftentimes diminishes. And
so, the whole point here is when you actually say like oh, economic indicators, OECD etc. are
simply more ways of looking at ourselves individually and at society, so we can actually say if
prosperity is not just this one thing, it becomes more complex, but it also becomes so much
easier. And so, I guess my final message here with well-being is there’s all these different
indicators and movements where the sort of AI for good, which is fantastic and the UN
sustainable development goals are oftentimes now considered an indicator, but it’s in their holistic working together, and a recognition that we can’t just say, well, the benefits of AI have to work for everyone. Well, everyone and everything, right, meaning if we call the planet a thing. Trying to make these things work in this utterly constrained system of exponential growth, how do we fix it over here, that’s not really innovation. And I marvel that we have conversations about AGI, and certainly artificial superintelligence, and then I can be in
certain rooms and say “Hey, someday my phone might be Steve,” right, and I’m not joking, like it
might have a level of sentience beyond anthropomorphism. But what I talk about is that innovation can actually be making these three things equal, and also saying, what about a person having worth without needing any form of augmentation or whatever else? Meaning, what
happens, you know, we all know this, you lose power on your iPhone or, you know, the, the
power goes out, certainly with hurricanes and stuff happening. Like, are we going to be able to maintain well-being without the technology as well as with it? And that’s what I’m so excited about, I keep seeing these little breaks. And this is why I love working with IEEE and with the Council: the technology is always glorious to me. I’m a geek. But it’s the
technology with the recognition of how it will affect well-being that then we can actually have
positive, beautiful transformation for humans and what we’ve become.

Excellent. Well thank you so much.
