
Arthur Interface

Idea

Use the shortcuts and popup menus that work well in similar
applications in Windows. Keep the GUI simple, with little for the
user to learn.
First the functionalities must be defined; second, they can be
divided into menus that appear based on the context in which
they are triggered, e.g. an object menu and a system menu. Triggering
a menu on the workspace gives access to “insert object” and “configure
grid”, while triggering the menu on an object gives access to the
properties of that object.
All functions are available in the menus, and some of them can also be
triggered by a “shortcut gesture”, similar to hotkeys.
Sources of inspiration are object-based applications like Maya 3D,
Photoshop and MS Office.

Set-up:

• Head mounted display
• Stereo cameras, stationary or head mounted. It may be easiest to
have a head mounted camera (HMC).
• Clean table. To improve segmentation, a cloth can be used on the
table.

Compatibility

This can be used alone, or supported by EG’s context zones.
System commands can be placed on the side. The same principle of
context dependence applies, except that this approach defines a zone
by what is pointed at, while EG’s zones are defined by position. The
two can function simultaneously.
Another context is the state of the system.

Brainstorming, functionalities:
Inspiration: Maya 3D and Photoshop

• New workspace
• Load workspace
• Save workspace
• Insert object
• Transform object
o Rotate
o Scale
o Move
• Properties for object
o Style
o Colour
o Transparency
o Show wire frame/solid
o Snap to grid
o Snap to place holder
o Snap to virtual object
• Configure grid
• Copy-paste
• Select
o Select one
o Select all
o Add to selection
o Remove from selection
o Deselect all
• Group
• Ungroup
• Delete
• Undo, global
• Undo, local
• Recalibrate
• Exit
These can be divided into a system menu (global operations) and an
object menu (local operations).
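
A minimal sketch of how this split could be represented as a menu-tree
structure that the popups are built from (Python; the MenuItem class and
the menu_for rule are illustrative assumptions, the item names follow the
list above):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MenuItem:
        label: str
        children: List["MenuItem"] = field(default_factory=list)

    # System menu: global operations, shown when no object is pointed at.
    SYSTEM_MENU = MenuItem("System", [
        MenuItem("New workspace"),
        MenuItem("Load workspace"),
        MenuItem("Save workspace"),
        MenuItem("Insert object"),
        MenuItem("Configure grid"),
        MenuItem("Select", [MenuItem("Select all"), MenuItem("Deselect all")]),
        MenuItem("Undo, global"),
        MenuItem("Recalibrate"),
        MenuItem("Exit"),
    ])

    # Object menu: local operations, shown when an object is pointed at.
    OBJECT_MENU = MenuItem("Object", [
        MenuItem("Transform", [MenuItem("Rotate"), MenuItem("Scale"), MenuItem("Move")]),
        MenuItem("Properties", [MenuItem("Style"), MenuItem("Colour"),
                                MenuItem("Transparency"), MenuItem("Wireframe/solid")]),
        MenuItem("Snap", [MenuItem("To grid"), MenuItem("To placeholder"),
                          MenuItem("To object")]),
        MenuItem("Copy-paste"),
        MenuItem("Group"),
        MenuItem("Ungroup"),
        MenuItem("Delete"),
        MenuItem("Undo, local"),
    ])

    def menu_for(pointed_object):
        """Context rule: object menu when pointing at an object, system menu otherwise."""
        return OBJECT_MENU if pointed_object is not None else SYSTEM_MENU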

Sequencing considerations:

Approach:
• Choose the shortest path, in terms of operations, to the goal
• Stay in the neighbourhood of what the users are used to.
• Minimize the size of the gesture alphabet

Visual feedback:
• Pointing at objects highlights them.

Inserting objects:
• Call the system menu and choose “insert object”; the object will
appear in the centre of the viewpoint
• It could instead appear in the centre of the “world”
• It could instead appear where the user is pointing.

Copy-Paste:
• Point at the object, then make and hold the copy gesture [the copy
snaps to the finger], move it to the new location and release the
gesture. This way a separate “select object” gesture is not needed.
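
A minimal sketch of this hold-and-release copy flow, assuming a
hypothetical workspace object with an add() method and objects that have
a position attribute and a clone() method:

    class CopyDrag:
        """Copy-paste without a select step: the copy follows the finger while
        the copy gesture is held, and is dropped where the gesture is released."""

        def __init__(self, workspace):
            self.workspace = workspace
            self.copy = None

        def on_copy_gesture_start(self, pointed_object, finger_pos):
            if pointed_object is None:
                return
            self.copy = pointed_object.clone()     # the copy snaps to the finger
            self.copy.position = finger_pos
            self.workspace.add(self.copy)

        def on_finger_move(self, finger_pos):
            if self.copy is not None:
                self.copy.position = finger_pos    # copy follows the finger

        def on_copy_gesture_release(self, finger_pos):
            if self.copy is not None:
                self.copy.position = finger_pos    # dropped at the release position
                self.copy = None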

Perform changes to an object:
• Point at the object, then make and hold the menu gesture [the object
menu appears]; the menu item currently pointed at is highlighted, and
the highlighted item is selected by releasing the menu gesture.
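
A minimal sketch of this highlight-while-held, select-on-release
behaviour, assuming a hypothetical hit_test function that maps a pointing
ray to the menu item it hits (or None):

    class MenuGestureHandler:
        """Menu driven by a single held gesture: items highlight while pointed
        at, and the highlighted item is executed when the gesture is released."""

        def __init__(self, hit_test):
            self.hit_test = hit_test      # maps (menu, pointing ray) -> item or None
            self.menu = None
            self.highlighted = None

        def on_menu_gesture_start(self, menu):
            self.menu = menu              # popup appears at the pointed position
            self.highlighted = None

        def on_pointing_update(self, pointing_ray):
            if self.menu is not None:
                self.highlighted = self.hit_test(self.menu, pointing_ray)

        def on_menu_gesture_release(self):
            selected = self.highlighted
            self.menu = None
            self.highlighted = None
            return selected               # caller executes the menu item, if any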

Menus
• Submenus can appear when highlighted. Menu tree structure
to be defined below.
• The system menu appears when the menu gesture is activated while
pointing at no object.
Multiple selections: how to distinguish between selecting one object
and then the next, and adding the second object to the current selection?
• First select “create group”, then select a number of objects,
then call the menu and apply the operation to the group.
• Use another gesture for the additional selections.
• Use the second hand to imitate holding down the CTRL key while
selecting additional objects; selecting an already selected object
removes it from the selection. Then call the object menu and e.g. move
all selected objects or create a group (a sketch of this follows the list).
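
A minimal sketch of the third option, assuming the gesture layer reports
whether the second hand is held in the CTRL-like posture:

    class SelectionSet:
        """Selection with a second-hand 'CTRL' modifier.

        Without the modifier, selecting replaces the current selection.
        With the modifier held, selecting toggles the object: it is added if
        unselected and removed if already selected."""

        def __init__(self):
            self.selected = set()

        def on_select(self, obj, ctrl_hand_held):
            if ctrl_hand_held:
                if obj in self.selected:
                    self.selected.remove(obj)     # selecting a selected object removes it
                else:
                    self.selected.add(obj)        # add to selection
            else:
                self.selected = {obj}             # plain select: replace selection

        def deselect_all(self):
            self.selected.clear()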

When selecting an object that is in a group: how to tell whether the
group or only the object is to be selected?
• Select [the group is selected], then select again for the object only
(a sketch of this follows the list)
• Imitate holding down CTRL to select only the object
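
A minimal sketch of the select-again drill-down, assuming each object
carries a hypothetical group attribute that is None when it is not grouped:

    def resolve_selection(pointed_object, current_selection):
        """Select-again drill-down for grouped objects.

        First selection of a grouped object returns the whole group;
        selecting the same object again returns only that object."""
        group = getattr(pointed_object, "group", None)
        if group is None:
            return pointed_object                 # ungrouped: select the object itself
        if current_selection is group:
            return pointed_object                 # second select: drill into the member
        return group                              # first select: take the whole group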

Modifications (scale and rotate) to virtual objects:
• Virtual buttons
• Virtual levers; e.g. rotate: 3 levers, one for each DOF. Interact
with the levers with the grab gesture.
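
With one lever per rotational DOF, each lever deflection maps to a
rotation angle about one axis. A minimal sketch composing the three
angles into a rotation matrix (the x, y, z assignment and the composition
order are arbitrary assumptions):

    import numpy as np

    def axis_rotation(axis, angle):
        """Rotation matrix about a single axis (angle in radians)."""
        c, s = np.cos(angle), np.sin(angle)
        if axis == "x":
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        if axis == "y":
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # z

    def rotation_from_levers(lever_x, lever_y, lever_z):
        """Compose the three lever deflections (one per DOF) into one rotation.
        Composition order x, then y, then z is an arbitrary choice here."""
        return (axis_rotation("z", lever_z)
                @ axis_rotation("y", lever_y)
                @ axis_rotation("x", lever_x))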

System menu

File (side menu):
• New workspace
• Load
• Save
• Recalibrate
• Exit

Edit (popup in workspace where pointed):
• Undo
• Insert object
o [Object types]
• Configure Grid
o Placement
o Units
• Select All
• Deselect All
Object menu

Edit object (popup in workspace where pointed):
• Undo
• Transform
o Rotate
o Scale
o Move
• Properties
o Style
o Colour
o Transparency
o Wireframe/solid
• Snap
o To grid
o To placeholder
o To object
• Ungroup
• Delete

Note that the menus can be simplified by changing the items depending on the
state context. For example, the ungroup item can be removed when a single
object is chosen. However, the advantage of having it present all the time is
that the user can remember seeing it when (s)he has to use the ungroup item.
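
A minimal sketch of such state-dependent filtering, assuming menu items
may carry an optional visible_if predicate (an illustrative convention,
not part of the design above):

    def filter_menu(items, state):
        """Keep only the items whose visibility predicate accepts the current
        state context; items without a predicate are always shown."""
        return [item for item in items
                if getattr(item, "visible_if", None) is None or item.visible_if(state)]

    # Example: hide "Ungroup" when only a single, ungrouped object is chosen.
    # ungroup_item.visible_if = lambda state: state.selection_contains_group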

Example video (DivX 5):

This recording has been superimposed with the virtual highlights and menus. The cup
and the case are meant to represent virtual objects.

Actions performed:
• Insert cup
• Open object menu
• Don’t select anything from the menu
• Insert case
• Move case
• Move cup
• Select all
• Delete
• Confirm
• Exit
• Confirm

Gestures used:

• Residue
• Pointing
• Menu trigger
• Grab shortcut
• Select all shortcut
• Confirm

arthur_gui.avi

Technical considerations

A simple background makes segmentation easier. A cloth can help, because the desk
and the skin are almost the same colour.
The gestures are rather static and different from each other.
Perhaps a contour description is enough to recognise the gestures. Alternatively, a
hand skeleton can be used as a template, where the palm is a 3D cross and each finger
is a vector pointing from its joint at the hand to the nail.
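
A minimal sketch of such a skeleton template and a naive posture match
(the dot-product similarity score is an illustrative assumption, not a
chosen method):

    import numpy as np
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class HandSkeleton:
        """Palm modelled as a 3D cross (origin plus two axes), each finger as a
        unit vector from its base joint on the hand to the nail."""
        palm_origin: np.ndarray            # 3D position of the palm centre
        palm_axes: np.ndarray              # 2x3: the two axes of the palm cross
        fingers: Dict[str, np.ndarray]     # finger name -> unit direction vector

    def posture_similarity(observed, template):
        """Naive matching score: mean dot product between corresponding finger
        directions (1.0 = identical directions)."""
        scores = [float(np.dot(observed.fingers[name], template.fingers[name]))
                  for name in template.fingers]
        return sum(scores) / len(scores)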
Many gestures can be considered 2D postures, but the grab shortcut requires that the
3D position of the hand is known. Triangulation can also help the segmentation.
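
A minimal sketch of recovering the 3D hand position from the calibrated
stereo pair using OpenCV's triangulatePoints; the projection matrices are
assumed to come from the calibration (“recalibrate”) step:

    import numpy as np
    import cv2

    def hand_position_3d(P_left, P_right, pt_left, pt_right):
        """Triangulate one hand point from the calibrated stereo pair.

        P_left, P_right: 3x4 camera projection matrices from calibration.
        pt_left, pt_right: (x, y) image coordinates of the same hand point."""
        pl = np.array(pt_left, dtype=float).reshape(2, 1)
        pr = np.array(pt_right, dtype=float).reshape(2, 1)
        X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)   # 4x1 homogeneous
        return (X_h[:3] / X_h[3]).ravel()                      # 3D point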

A thought: what happens if a user is missing a finger?

Segmentation experiments:
Image enhancement techniques. The bottom-right result can be used to find the contour.

The same as above but only using the blue channel.


The lighting is mixed outdoor and indoor light.
Laplace convolution alone works nicely.

The same as above but only using the blue channel.
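
A minimal sketch of a segmentation pipeline along these lines (blue
channel, Laplace convolution, contour extraction) with OpenCV; the
threshold value is a placeholder assumption:

    import cv2

    def hand_contour(frame_bgr, threshold=40):
        """Segment the hand against a plain table/cloth background.

        Uses only the blue channel (as tried above), a Laplace convolution to
        bring out the edges, and a contour extraction on the result."""
        blue = cv2.split(frame_bgr)[0]                          # blue channel only
        edges = cv2.Laplacian(blue, cv2.CV_16S, ksize=3)        # Laplace convolution
        edges = cv2.convertScaleAbs(edges)                      # back to 8-bit
        _, mask = cv2.threshold(edges, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return max(contours, key=cv2.contourArea)               # largest contour = hand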
