
Pacific Graphics (2019) Volume 38 (2019), Number 7

Augmented Virtuality: Storybook Using Real-Objects

Su Jin Park and Moon Ryul Jung


Graduate School of Media
Sogang University, Seoul, Republic of Korea
concept.psj@gmail.com, moon@sogang.ac.kr

Abstract
We propose and implement a new form of virtual reality called "augmented virtuality" (AV), and use it to implement a virtual storybook. In this storybook, the user brings real objects into the computer-generated virtual space to make the story go on. The objects are recognized by object recognition software, and their 3D models are inserted into the virtual space and rendered. The rendered image of the augmented scene is projected onto the desk, where the user can touch and select objects. One issue with the concept of AV in the storybook is whether a [child] user thinks that the real object she has brought into the virtual space is part of that space, or whether she thinks that the image counterpart is part of the virtual space and the role of the real object is finished once it has been brought in. To find out, we tested whether the user would still manipulate the real object after it had been brought into the virtual space (with the image counterpart following the moving real object), or whether she would move the real object away from the virtual space and manipulate the image counterpart instead. It turned out that the latter is the case: the user thinks that the role of the real object is finished once it has been brought into the virtual space. We may suppose, however, that the user feels differently about an image she herself brought into the virtual space than about an image that was already there.

CCS Concepts
• Virtual Environments - Immersive VR, • Virtual Environments - Augmented Reality

1. Introduction
To implement augmented reality (AR) content such as a storybook, the developer places "markers" in the AR storybook; the markers are recognized by the camera, and the graphic images or animations associated with the markers are displayed on the physical page of the AR storybook. In contrast to this method, we develop a new form of storybook. This storybook is not a physical book but a virtual space displayed on the desktop. The user is asked to bring real objects into this virtual space. The objects are then recognized, their 3D models are inserted into the virtual space, and the augmented virtual space is rendered on the desk. We call this new form of virtual reality "augmented virtuality", because the virtual space is augmented by means of real objects.

1.1. Interactive books based on Augmented Reality
Figure 1 shows an AR coloring book, an augmented reality book intended for the user's education and play. It offers an engaging experience because it allows users to make their own book.

Figure 1. AR storybook app by Quivervision. Step 1: the line-sketched figures are markers. Step 2: the user colors a marker. Step 3: the user points the smartphone camera at the colored marker; the smartphone renders the 3D model corresponding to the recognized marker, using the coloring of the marker as its texture, combines the rendered image with the background image of the storybook, and displays the combined image.

1.2. The concept of Augmented Virtuality
The concept of augmented virtuality (AV) can be explained by the diagram in Figure 2. AV is a space that is virtual and to which real objects are augmented. When real objects are brought into the virtual space, their 3D models are rendered into it. We can say that the virtual space is augmented by real objects, because the 3D models of the real objects are brought into the virtual space and were not there originally.

Figure 2. The classification of various spaces: the X axis represents whether a space is real or virtual; the Y axis represents whether the space is augmented by elements that are foreign to it.

2. Storybook using the AV technique
We have developed a user environment that allows children to have various experiences with our storybook. The software is developed with the Unity3D game engine and runs on Android devices. Figure 3 shows how the 3D models of the original virtual space and the 3D models of the augmented real objects are rendered and projected onto the desktop by the projector. The projector is driven by an Android device that executes an app developed in Unity3D.

Figure 3. The concept of the AV storybook.

2.1. Object recognition
To insert a real object into the virtual space, we use MobileNet, pre-trained object recognition software, to recognize real objects and associate them with the corresponding 3D models. In addition, we have developed a user interface in which children can have various experiences. Figure 4 shows the sequence in which real objects are recognized by the object recognition software, rendered, and projected onto the desktop.

Figure 4. The AV storybook system.

2.2. Design of an AV storybook
The sequence of the story "Snow White" is shown in Figure 5.

Figure 5. The story of the AV storybook.

2.3. AV storybook implementation
The code of our AV storybook has the structure shown in Figure 6.

Figure 6. The structure of the AV storybook code.

The Snow White story consists of 13 scenes. Each scene is implemented by one Unity script class. For example, Table 1 describes the objects in the scene where Snow White appears.

  # | Object      | Function
  1 | CrrObject   | The 3D model corresponding to the recognized real object.
  2 | CrrText     | The text used to display the screen coordinates of the recognized real object.
  3 | CameraImage | The script that controls the camera.
  4 | LabelMap    | The script that loads the label map of the MobileNet model.
  5 | Model       | The pre-trained machine learning model source.
  6 | Tex         | The 2D images used for texture mapping of the 3D models.

Table 1. Objects used in the 4th scene.

As shown in Figure 7, to attract children's interest we added RGB slider bars so that users can change the color of the background.

Figure 7. The user can use the RGB sliders to adjust the background color.

The AV storybook is equipped with a voice narration function to help users understand the scenarios. The storybook's scenario is divided into 13 scenes, and 13 narrations were recorded to describe them. The voice file that fits each scene is selected and automatically played at the beginning of the scene, as shown in Figure 8.

Figure 8. Narration in Unity.

The main feature of our AV storybook is that it enables the user to bring real objects into the virtual space projected on the desktop, as shown in Figure 9.

3. User Experiments
The user test was conducted with the implemented AV storybook, as shown in Figures 10 and 11. The participants were five-year-old children. The children were given no instructions and were allowed to play freely. We found that the users were interested in touching the screen on the desk. The children followed the (voice-recorded) scenario well, and showed an active attitude in finding the right object in the right context and having it recognized directly by the AV storybook.

Figure 10. User test: (Left) a child holds an apple; (Right) a user experiencing the AV storybook.

Figure 11. User test: (Left) before recognition; (Right) after recognition.

One issue with the concept of AV in the storybook is whether a [child] user thinks that the real object she has brought into the virtual space is part of that space, or whether she thinks that the image counterpart is part of the virtual space and the role of the real object is finished once it has been brought in. To find out, we tested whether the user would still manipulate the real object after it had been brought into the virtual space (with the image counterpart following the moving real object), or whether she would move the real object away from the virtual space and manipulate the image counterpart instead.

It turned out that the latter is the case. This means that the user thinks the role of the real object is finished once it has been brought into the virtual space. We may suppose, however, that the user feels differently about an image she herself brought into the virtual space than about an image that was already there.

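The paper does not give code for the recognition step in Section 2.1. As a minimal sketch of how a classifier's output could be mapped to the 3D model that gets inserted into the virtual space, the following illustrates the idea; the class labels, asset paths, and confidence threshold are assumptions for illustration, not taken from the authors' implementation:

```python
# Sketch of the recognition-to-insertion step (Section 2.1).
# LABEL_TO_MODEL and the threshold are hypothetical, not the paper's values.

# Hypothetical mapping from recognized class labels to 3D model assets.
LABEL_TO_MODEL = {
    "apple": "Models/apple_3d",
    "mirror": "Models/mirror_3d",
    "comb": "Models/comb_3d",
}

CONFIDENCE_THRESHOLD = 0.6  # ignore low-confidence recognitions


def model_for_prediction(predictions):
    """Given (label, confidence) pairs from the classifier, return the
    asset path of the 3D model to insert into the virtual space, or
    None if nothing is recognized confidently enough."""
    best_label, best_conf = max(predictions, key=lambda p: p[1])
    if best_conf < CONFIDENCE_THRESHOLD:
        return None
    return LABEL_TO_MODEL.get(best_label)


if __name__ == "__main__":
    preds = [("apple", 0.91), ("mirror", 0.04), ("comb", 0.05)]
    print(model_for_prediction(preds))  # prints "Models/apple_3d"
```

A threshold of this kind is one simple way to keep spurious detections from inserting wrong models while the child is still moving the object into view.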
Figure 9. Real-object augmented virtuality storybook: (Left) the user brings an apple to the virtual scene on the desk; (Right) the apple is recognized, its 3D model is inserted into the virtual space, and the whole scene is rendered and projected onto the desk.

4. Conclusion
We have presented an augmented virtuality storybook that recognizes real objects using machine-learning-trained recognition software and puts their 3D models into the virtual space. This augmented virtuality content combines the virtual space with real objects to enrich the virtual space. The users were found to be interested in this new form of media. A machine-learning-based augmented virtuality storybook is expected to have a positive effect by presenting immersive storybook content for children. This augmented virtuality content has the advantage of realizing augmented virtuality by using real objects in the real world as markers, and of increasing the user's degree of freedom by not requiring specific markers designed in advance.
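The scene structure described in Sections 2.2 and 2.3, a fixed sequence of 13 scenes, each with a narration clip that plays automatically when the scene begins, can be sketched outside Unity as a simple sequencer; the scene names and audio file names below are hypothetical:

```python
# Sketch of the scene/narration structure (Sections 2.2-2.3).
# Scene names and clip names are illustrative, not the paper's assets.

class Scene:
    def __init__(self, name, narration_clip):
        self.name = name
        self.narration_clip = narration_clip

    def enter(self, play_audio):
        # Narration starts automatically at the beginning of each scene.
        play_audio(self.narration_clip)


class Storybook:
    def __init__(self, scenes):
        self.scenes = scenes
        self.index = 0

    def current(self):
        return self.scenes[self.index]

    def advance(self, play_audio):
        """Move to the next scene (if any) and trigger its narration."""
        if self.index + 1 < len(self.scenes):
            self.index += 1
            self.current().enter(play_audio)
        return self.current().name


# The real storybook has 13 scenes; two are enough to illustrate.
book = Storybook([
    Scene("SnowWhiteAppears", "narration_04.wav"),
    Scene("QueenAsksMirror", "narration_05.wav"),
])
played = []
book.advance(played.append)
print(book.current().name, played)  # prints "QueenAsksMirror ['narration_05.wav']"
```

In the actual system each scene is a Unity script class and the clip would be played through an audio source, but the control flow is the same: entering a scene immediately starts its narration.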

References
[1] E. Liu, Y. Li, S. Cai, X. Li. The Effect of Augmented Reality in Solid Geometry Class on Students' Learning Performance and Attitudes. In M. E. Auer and R. Langmann (Eds.): REV 2018, LNNS 47, pp. 549-558, Springer International Publishing AG, part of Springer Nature, 2019.
[2] Journal of the Korea Contents Association, 16(1), 2016.01, pp. 424-437.
[3] Dayang R., Dayang R. A. R., Wannisa M., Suziah S. Design and Development of an Interactive Augmented Reality Edutainment Storybook for Preschool. December 2012.
[4] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. April 2017.
[5] http://www.quivervision.com/
[6] https://www.theverge.com/2015/10/5/9453703/disney-research-augmented-reality-coloring-books
[7] https://ai.googleblog.com/2017/06/mobilenets-open-source-models-for.html
[8] https://www.tractica.com/newsroom/press-releases/mobile-augmented-reality-app-downloads-to-reach-1-2-billion-annually-by-2019
