One of the primary research themes in the IS&UE RCE is developing and evaluating 3D user interfaces for virtual, augmented, and mixed reality. In particular, we are focused on exploring how to bring 3D user interface techniques and concepts into mainstream video games by leveraging the existing body of work in 3DUI and VR and devising new strategies and methodologies for bringing spatial 3D interaction to gamers. Additionally, we are interested in the continued learning and understanding of how humans interact with and are affected by 3D interfaces.
With the release of a variety of new motion controllers for both PC and console gaming, 3D user interfaces are becoming commonplace in modern games. The focus of this work is to explore how to best utilize 3D spatial interaction in the video game domain by examining existing interaction techniques and creating novel ones as well as understanding how these interfaces affect users.
The focus of this project is to explore how technologies that have traditionally been found in virtual reality, but are now becoming mainstream in the commercial marketplace, affect user performance in video games. Specifically, we are interested in whether 3D stereo and head and hand tracking improve a player's ability to learn to play video games and achieve better scores. In addition, we are exploring the overall user experience when players use these technologies.
We are systematically exploring recognition of 3D gestures using spatially convenient input devices. Specifically, we are examining existing and developing new algorithms to improve 3D gesture recognition accuracy as well as exploring how many gestures can be reliably recognized with video game motion controllers.
RealDance investigates the potential for body-controlled dance games to be used as tools for entertainment, education, and exercise. Through several dance game prototypes built with Nintendo Wii Remotes and depth cameras, RealDance investigates visual, aural, and tactile methods for instruction and feedback.
3D object selection is highly demanding when 1) objects densely surround the target, 2) the target is significantly occluded, or 3) the target is dynamically changing location. Most 3D selection techniques and guidelines were developed and tested in static or mostly sparse environments. In contrast, games tend to incorporate densely packed and dynamic objects as part of their typical interaction. With the increasing popularity of 3D selection in games using hand gestures or motion controllers, our current understanding of 3D selection needs revision. We present a study that compared four selection techniques under five scenarios of varying object density and motion dynamics. We utilized two existing techniques, Raycasting and SQUAD, and developed two variations of them, Zoom and Expand, using iterative design. Our results indicate that while Raycasting and SQUAD both have weaknesses in terms of speed and accuracy in dense and dynamic environments, making small modifications to them (i.e., flavoring) can achieve significant performance increases.
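As a point of reference for the baseline technique, the sketch below shows the core of a raycasting selection: cast a ray from the controller and pick the nearest intersected object, here modeled as bounding spheres. This is a generic illustration, not the study's implementation; the function names and sphere representation are assumptions.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance t along the ray to the sphere, or None if missed.
    `direction` is assumed to be unit length."""
    ox, oy, oz = origin; dx, dy, dz = direction; cx, cy, cz = center
    lx, ly, lz = ox - cx, oy - cy, oz - cz   # vector from center to origin
    b = 2 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4 * c                     # a == 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2           # nearer of the two intersections
    return t if t >= 0 else None

def raycast_select(origin, direction, objects):
    """Pick the nearest object hit by the ray (objects: name -> (center, radius))."""
    best, best_t = None, float('inf')
    for name, (center, radius) in objects.items():
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and t < best_t:
            best, best_t = name, t
    return best
```

In dense scenes this nearest-hit rule is exactly what breaks down: many objects fall near the ray, so refinement steps (as in SQUAD) or view manipulation (as in Zoom and Expand) become necessary.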
After identifying how these selection techniques worked across the various scenarios, we pursued the development of a framework for dynamically choosing a selection technique in real time, based on contextual information. This Auto-Select framework was designed so that any selection technique could easily be dropped in. We performed two additional user studies that measured the performance of such a framework against the standard techniques on their own. Our results showed that while the approach is promising, many factors affect how well the framework operates, among them the similarity of the techniques used, the transitions between them, and the user feedback provided. These factors have been targeted for future research.
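The contextual dispatch at the heart of such a framework can be sketched as a simple policy over scene measurements. The density threshold, the specific measurements, and the technique-to-condition mapping below are illustrative assumptions, not the framework's actual decision rules.

```python
def local_density(target, objects, radius=1.0):
    """Count objects within `radius` of the target's position."""
    tx, ty, tz = target
    return sum(
        1 for (x, y, z) in objects
        if (x - tx) ** 2 + (y - ty) ** 2 + (z - tz) ** 2 <= radius ** 2
    )

def choose_technique(target, objects, moving, dense_threshold=5):
    """Pick a selection technique from contextual information (hypothetical policy)."""
    dense = local_density(target, objects) >= dense_threshold
    if dense and moving:
        return "expand"    # progressive refinement copes with clutter plus motion
    if dense:
        return "squad"     # refinement handles static clutter
    if moving:
        return "zoom"      # enlarging the region helps with a moving target
    return "raycast"       # sparse and static: plain raycasting is fastest
```

The factors identified in the studies show why this is harder than the sketch suggests: if the dispatched techniques feel too different, or the switch happens without feedback, users are disoriented by the transition.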
Presently, we are researching the construction of single selection techniques that operate under different modes, where each mode allows the technique to work well in different conditions. This is essentially taking two separate techniques and internally merging them, while eliminating the disparity between their operations to ensure a smooth transition between the two modes.
We are exploring the use of low cost commercial technology to create an interface capable of navigating scenes using the full body and natural interactions.
In the RealNav project, we used Wii Remote hardware coupled with a Kalman filter to address the challenges of controlling a quarterback in an American football video game. The goal was to support natural movements, such as real-time recognition of the user moving within a small physical area, along with common gestures such as running and throwing, so that a user could pick up the system and be recognized with little training.
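To show the role a Kalman filter plays here, the sketch below smooths one noisy sensor channel with a minimal constant-position filter: predict by inflating the estimate variance, then correct toward each measurement by the Kalman gain. This is a textbook one-dimensional reduction, not the RealNav implementation, and the noise parameters are illustrative.

```python
class Kalman1D:
    """Minimal constant-position Kalman filter for one noisy channel
    (an illustrative sketch, not the RealNav filter)."""
    def __init__(self, q=1e-3, r=0.1):
        self.x = 0.0   # state estimate
        self.p = 1.0   # estimate variance
        self.q = q     # process noise variance
        self.r = r     # measurement noise variance

    def update(self, z):
        self.p += self.q                  # predict: uncertainty grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward measurement z
        self.p *= (1 - k)                 # uncertainty shrinks after correction
        return self.x
```

Run per axis over the motion controller's readings, this kind of filter suppresses jitter so that small in-place movements and gesture trajectories can be recognized reliably.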
In a similar series of projects, we used a combination of the Microsoft Kinect and Sony PlayStation Move to accurately track a soldier in training without adding more hardware than they would already be carrying. We were able to recognize where the user was moving and aiming, along with basic gestures such as walking in place, crouching, and jumping, allowing for a natural and immersive environment for the soldier to operate in. This was later expanded with multiple Kinects, which could track the soldier no matter what direction they were facing and eliminated the need for the PlayStation Move to determine their orientation.
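Pose gestures like crouching and jumping can be detected from skeletal data with simple calibrated thresholds, as in the sketch below, which labels a pose by how far the tracked head joint has moved from its calibrated standing height. The threshold values and function names are illustrative assumptions, not those used in the study.

```python
def classify_pose(head_y, standing_head_y, crouch_drop=0.3, jump_rise=0.15):
    """Label a pose from head height (meters) relative to a calibrated
    standing height. Thresholds are illustrative, not the study's values."""
    delta = head_y - standing_head_y
    if delta <= -crouch_drop:
        return "crouch"   # head well below standing height
    if delta >= jump_rise:
        return "jump"     # head above standing height
    return "stand"
```

In practice such rules are combined with velocity checks and per-user calibration so that natural variation in height and posture does not trigger false positives.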
We are also studying how full-body interfaces can be used for video games. Using a Wizard of Oz approach and the commercial game Mirror's Edge, we observed what natural movements users perform when asked to complete a task with little other guidance. For a series of full-body tasks, this showed us what the average user would do unprompted, yielding a set of guidelines for future projects.
We present a prototype system for interactive construction and modification of 3D physical models using building blocks. Our system uses a depth-sensing camera and a novel algorithm for acquiring and tracking the physical models. The algorithm, Lattice-First, is based on the fact that building block structures can be arranged in a 3D point lattice, where the smallest block unit is a basis from which to derive all the pieces of the model. The algorithm also makes it possible for users to interact naturally with the physical model as it is acquired, using their bare hands to add and remove pieces. We present the details of our algorithm, along with examples of the models we can acquire using the interactive system. We also show the results of an experiment where participants modify a block structure in the absence of visual feedback. Finally, we discuss two proof-of-concept applications: a collaborative guided assembly system where one user is interactively guided to build a structure based on another user's design, and a game where the player must build a structure that matches an on-screen silhouette.
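The core lattice idea can be sketched as a quantization step: each depth point is snapped to the block lattice by integer division by the smallest block unit, and a lattice cell is considered occupied when enough points support it. This is only a sketch of the underlying idea under assumed inputs, not the published Lattice-First algorithm, which must also register the lattice to the camera and track changes over time.

```python
from collections import Counter

def occupied_cells(points, block_size=1.0, min_points=3):
    """Snap 3D points to a block lattice and keep cells with enough support
    (a sketch of the lattice idea, not the Lattice-First algorithm)."""
    counts = Counter(
        (int(x // block_size), int(y // block_size), int(z // block_size))
        for (x, y, z) in points
    )
    # discard sparsely supported cells, which are likely noise or hands
    return {cell for cell, n in counts.items() if n >= min_points}
```

The support threshold is also what lets bare hands pass through the scene during acquisition: transient points rarely accumulate enough samples in one cell to register as a block.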