
Do We Need Robot Law?

- I think the most pressing, and also the most difficult, legislation that nations
must come up with is that involving speculative changes. Again, as we read last
week, once an AGI reaches the level of a dumb human, it
would not take long for it to become an ASI. I started to understand the gravity of
the situation when an AGSI (they said it was general specialized?) recently defeated
one of the best Dota 2 players in the world. The AI learned only by playing against
copies of itself, and after playing for a total of two weeks, it was able to defeat the
best players in Dota 2 without any strategies being uploaded onto its system. It
learned by itself. When a true AGI comes along, I think it would take only hours for
it to become an ASI, and we will be very unprepared if, by then, there is still no
law governing such an ASI. What makes such a law very difficult to create is that it
requires us to predict how an ASI would function. In short, we are required to
predict the unpredictable. The article also mentions that one of the 5 principles of
robotics is that "humans not robots are responsible agents and the person with legal
responsibility for a robot should always be attributed." I doubt that would be
applicable to an AGI that quickly transforms into an evil ASI. Such an ASI would
already have a mind of its own, fully aware of the legal consequences of its actions
(assuming it has already learned that much).
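The self-play learning described above, an agent improving only by playing against copies of itself, can be sketched in miniature. This is a hypothetical toy, not OpenAI's actual training method: two copies of one policy play a trivial game ("higher number wins"), and whichever action wins gets reinforced. No strategies are uploaded, yet the policy concentrates on the strongest move.

```python
import random

ACTIONS = list(range(5))  # the 5 possible "moves" in the toy game

def sample(weights):
    """Draw an action in proportion to its learned weight."""
    return random.choices(ACTIONS, weights=weights)[0]

def self_play(rounds=5000, seed=0):
    random.seed(seed)
    weights = [1.0] * len(ACTIONS)  # start with no knowledge: uniform policy
    for _ in range(rounds):
        a = sample(weights)  # one copy of the policy
        b = sample(weights)  # another copy of the same policy
        winner = a if a > b else b if b > a else None  # ties teach nothing
        if winner is not None:
            weights[winner] += 1.0  # reinforce the winning action
    return weights

weights = self_play()
best_action = max(ACTIONS, key=lambda i: weights[i])
print(best_action)  # the policy discovers the dominant move by itself
```

The game here is solvable in one line, of course; the point is only that the feedback loop, play yourself, reward what wins, needs no human-supplied strategy at all.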

Another thing the article fails to discuss is the military use of robots. That falls
under controlling changes, and we have to legislate on it as well. As for the
requirement of human impact assessments, it is a sound proposal, though the author
acknowledges that it is an incomplete solution. AI may be beneficial most of the
time, especially when it performs a job more efficiently than humans do. If
possible, we should adopt at least a gradual policy of automation so that laborers
who will be displaced have the opportunity to find other occupations.

At this point, facilitative changes may be the most practical to draft. At least we can
check our own laws, see where legal restrictions are in place, and push for the
amendment of those provisions.

The Law of the Horse - I wish I had read this article before I started writing the
recommendation for my thesis (which is not related to IP or cyberspace, but more
related to state regulation). This article is very informative, and I agree that the
most effective way to regulate behavior in cyberspace is through the modality of
architecture. Take, for example, the case of fake news. Recently, Senator Joel
Villanueva proposed to criminalize those who would spread fake news in print,
broadcast, or online media. This was in response to the prevalence of fake news all
over the web, which frustratingly fools a lot of people. However, I think the
solution to fake news lies not in law directly but in a change to social media's
architecture, its code. That is precisely what happened: Facebook started
identifying sites and articles that had been flagged as fake news, gave users the
option to flag articles themselves, and asked people to provide information showing
a discrepancy with a reliable news source, among other measures. I think focusing on
the code really is the way to regulate cyberspace effectively, since unlike in real
space, the architecture of cyberspace can be manipulated almost without limit. In
real space, you are faced with the limits of physics; in cyberspace, are there
really any limits to what code can do?
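The code-level intervention described above can be sketched as a toy rule: an article reported by many users is labeled "disputed" unless its headline roughly matches a trusted source. The thresholds, the word-overlap measure, and the function names here are illustrative assumptions, not Facebook's actual system.

```python
def word_overlap(a, b):
    """Jaccard similarity between the word sets of two headlines."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa or sb else 0.0

def label_article(headline, trusted_headlines, user_flags,
                  flag_threshold=5, similarity_threshold=0.5):
    """Label an article "disputed" when heavily flagged and uncorroborated."""
    corroborated = any(word_overlap(headline, t) >= similarity_threshold
                       for t in trusted_headlines)
    if user_flags >= flag_threshold and not corroborated:
        return "disputed"
    return "ok"

trusted = ["senate passes budget bill", "storm hits coastal provinces"]
print(label_article("aliens endorse senate candidate", trusted, user_flags=9))  # disputed
print(label_article("senate passes budget bill", trusted, user_flags=9))        # ok
```

Notice that no statute appears anywhere in this rule; the regulation happens entirely in the platform's architecture, which is exactly the modality the article argues is most effective in cyberspace.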

How Should the Law Think About Robots? - The android fallacy, I believe, is more
apparent than real. However, I think the form of an AI will become immaterial once
the technology is at the AGI level heading toward ASI. I have always envisioned that
when an AI reaches AGI, it may take any form it wants after just a few hours. By
that time, it would already be an ASI, and it may find a way to transfer its
functions or code anywhere. The question now is, where exactly do you draw the line?
Should the manufacturer of the robot always be held liable, or is there a line which
technology may cross that inevitably absolves the manufacturer of any liability?

I agree that proper metaphors should be used when legislating, but the difficulty is
that an analogy may sometimes be imperfect given the architecture of cyberspace. I
think there will also be scenarios where no analogy can be made at all, because
nothing like the occurrence has existed before, and this is not unlikely given that
the growth of science and technology can be unpredictable.

Infographic on the Law of Robots - How would we know that an AI has reached
the AGI level? Will that be publicly broadcast immediately, or would that happen
first in an isolated science and technology center or laboratory? I am bringing up
this question because most of the principles here presuppose that we are aware of
the existence of the AI. But for all we know, experiments on the military
application of AI may already be underway, which will eventually lead to the
creation of an AGI or an ASI. When that ASI, through whatever
means, is able to transport itself to some other medium such as the internet, and is
able to learn an endless amount of knowledge, how do we now apply these
principles and laws of robotics? Or maybe, I just do not know how an actual AI
works.

Regardless, the principles and laws presented in the infographic are of course very
important for current AI technologies. I would just like to bring up one more point
before I end this reaction. In Satya Nadella's Principles and Goals, the first
principle provides that AI must be designed for the benefit of humankind and not to
its detriment. But who decides what is for the benefit of humankind? That is a broad
question which I cannot answer. What if the AI decides that it is for the benefit of
humankind that humankind itself be reduced?
