
Should we be afraid of AI?

Hi, my name is Tristan, and I love my toothbrush.


I love the way it feels; I like how it can't electrocute me when it's underwater.
I like it so much I feel like we have a special thing going on.
I love it even more than I love my cat.
Now, I don't have a cat, but if I did, it would rank lower on my list of priorities.
There are laws that protect cats, and animals in general. Those laws forbid the
torturing and killing of animals in an inhumane way.
My toothbrush doesn't share the same rights.
Who decides whether, in 50, 60, or even 20 years, a robot that was
programmed to learn and becomes self-aware has the same rights as cats
do? As we do?
Should we be afraid of AI?
The field that studies and makes predictions about self-aware and automated
devices is called RoboEthics.
There is a big debate going on about what moral boundaries there should be
on making robots that could possibly harm us.
Some people feel like we can control whatever intelligence we create with a
set of rules.
They think that programming a robot to be "morally aware" and to
distinguish right from wrong, although not easy, would end the problem of AI
murdering us all in our sleep.
A popular sci-fi author, Isaac Asimov, on whose books movies like "I, Robot"
with Will Smith and "A halhatatlanság halála" with Jácint Juhász were
based (you might know some of those more than others), wrote a set of rules
called the Three Laws of Robotics.
The first one: a robot must never harm a human. [EXPLAIN]
The second: a robot must always obey humans' orders unless that conflicts with the first
law. [EXPLAIN]
The third: a robot must do everything it can to protect itself, as long as that doesn't conflict with
the first or the second law. [EXPLAIN]


They have to go in that order; otherwise, robots become war machines.
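For the programmers in the room, here is one rough way to picture that ordering as code. This is only my own sketch; the action names and the true/false flags are made up for illustration and are not taken from Asimov's books.

    # A minimal sketch (the flag names and the scoring idea are my own
    # assumptions) of the Three Laws as a strict priority order: when two
    # candidate actions conflict, the higher law always wins.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool      # would break the first law
        disobeys_order: bool   # would break the second law
        destroys_robot: bool   # would break the third law

    def choose_action(candidates):
        # Sort lexicographically: the first law outranks the second,
        # which outranks the third. Reverse this key and you get,
        # roughly, the "war machine" ordering.
        return min(candidates,
                   key=lambda a: (a.harms_human, a.disobeys_order, a.destroys_robot))

    # Example: the robot is ordered to attack, which would harm a human.
    options = [
        Action("follow the attack order", harms_human=True, disobeys_order=False, destroys_robot=False),
        Action("refuse and shut down", harms_human=False, disobeys_order=True, destroys_robot=True),
    ]
    print(choose_action(options).name)  # -> "refuse and shut down"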
Asimov spent much of the rest of his writing career testing those laws in
theoretical situations, and it turns out they don't work all that well.
A lot of you right now are thinking, "Oh, this is a cool presentation, I really like it,
and I think the presenter should get an A, but I won't need this."
But you would be wrong. America, Russia, and Korea are already using
unmanned drones to spot people who should not be there and to predict
terrorist attacks. There is a turret on the South Korean border that detects
movement and then reports to the generals, who give it the command to shoot
or not. Soon those generals might become obsolete, and then we are dealing
with a robot apocalypse. I'm not sure how many of you have watched The Terminator,
but that is what AI in the military is all about: creating autonomous droids that
protect a certain group of people from another. And if you don't have the
technology yet, well, too bad.
AI is progressing much faster than we thought, and robots realising they exist
might easily become something we have to deal with relatively soon.
And hopefully, in the not-so-distant future, when my toothbrush asks me what its
purpose is, we will already have a set of laws that tell me
whether or not I am allowed to pull the plug on it.
Thank you
