
What We've Learned About VR: A Brief Taxonomy of VR, Part 1

  • Writer: John "CC" Chamberlin
  • Sep 29
  • 7 min read

Updated: Oct 8

The Learning Games Lab recently released its first VR products. Creating these programs has been an intense learning experience for us. Along the way, we explored the various ways that virtual reality can serve users and enhance educational experiences. This series covers some of the user interface options we explored. In this first part, we'll look at UI (user interface) interactivity, starting with the basics!


Fly on the Wall

Perhaps the simplest user interface is no UI at all. A "fly on the wall" scheme conveys information by just having the user watch a conversation or other educational event unfolding. Players can look around and see things happening around them, but they can't really interact.


Here's an example from M-Care where a dietician speaks to a patient, and you're cast in the role of that patient:



This type of interaction is not very engaging, but it has the advantage of being accessible: someone who cannot use either of the controllers can still wear the headset and experience the content. (See also look-to-select, later in this blog post.)


The accessibility downside of this UI is that it plays out linearly in time. The game pushes information at its own convenience, not the player's. If you miss part of the conversation, it's gone, so this approach is better suited to non-crucial information, short bursts, or a larger context that allows replaying the content.


This sort of fly-on-the-wall approach doesn't have to be just a conversation; it could be any non-interactive, watch-as-the-story-unfolds experience, like Myth: A Frozen Tale or Invasion!, where the story is engaging enough, or linear enough, that adding an interactive UI isn't really the goal. Immersive 360° videos often play out this way, because video cannot react to player input (beyond abstract controls like selecting another video, play/pause, quit, etc.).


In many ways, fly-on-the-wall VR is like watching a movie, but there are other considerations. With non-interactive segments in a VR app, you need to focus the player's attention – you don't want the player looking in completely the wrong direction. Myth handles this with a visual message of "nothing interesting back there": the world fades to black in the directions the designers don't want you looking. Invasion! has a character that very obviously looks in awe and trepidation in the direction the director wants you to look, gives audio cues from different directions, and has important characters fly from front to back so you can't miss them. Those are just some of the ways to influence players' direction of focus. How hard this is depends on the content and how much of the surrounding space the final animation needs to fill.
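Myth's fade-to-black trick can be reduced to a little vector math: fade the world out as the player's gaze strays from the "interesting" direction. Here's a minimal sketch (TypeScript; the names, the linear ramp, and the 90° threshold are our illustrative choices, not Myth's actual implementation):

```typescript
// Fade the world toward black as the player looks away from the focus direction.

type Vec3 = { x: number; y: number; z: number };

function dot(a: Vec3, b: Vec3): number {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

function normalize(v: Vec3): Vec3 {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Returns 0 (fully visible) when the player looks straight at the focus
// direction, ramping to 1 (fully black) once the gaze is maxAngleDeg or more away.
function fadeAmount(gazeForward: Vec3, focusDir: Vec3, maxAngleDeg = 90): number {
  const cos = Math.min(1, Math.max(-1, dot(normalize(gazeForward), normalize(focusDir))));
  const angleDeg = (Math.acos(cos) * 180) / Math.PI;
  return Math.min(1, Math.max(0, angleDeg / maxAngleDeg));
}
```

In practice you'd feed the headset's forward vector in each frame and drive a full-screen fade (or skybox tint) with the result.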


PROS

•  Simple for the end user. It just plays.

•  Accessible to people who can't easily use the controllers.

•  Useful when there is no meaningful input from the player needed.

•  Easy to develop – an animation that plays in a linear way.


CONS

•  Depending on content, can be dull or not very engaging.

•  Plays out at the game's pace, not the user's pace. User may miss things.


SCOPE IMPACT: Very Low


Zapping

"Zapping" is a way to let the player convey intent back to the game. This is typically done by having "rays" emitted from the controller, which can be pointed at, say, a button in the game world. When you pull the trigger, whatever you pointed at activates.
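Under the hood, zapping is just a raycast: intersect the controller's ray with the scene's hit targets and activate the nearest hit. A minimal sketch (TypeScript; Vec3, Target, and the sphere-shaped hit zones are illustrative stand-ins for whatever your engine's raycasting provides):

```typescript
// Zap-style selection: test a controller ray against spherical hit targets
// and report the nearest one hit.

type Vec3 = { x: number; y: number; z: number };
interface Target { id: string; center: Vec3; radius: number }

function sub(a: Vec3, b: Vec3): Vec3 { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function dot(a: Vec3, b: Vec3): number { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the sphere, or null if the ray misses.
// `dir` is assumed to be normalized, so the quadratic's a-term is 1.
function raySphere(origin: Vec3, dir: Vec3, t: Target): number | null {
  const oc = sub(origin, t.center);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - t.radius * t.radius;
  const disc = b * b - 4 * c;
  if (disc < 0) return null;
  const hit = (-b - Math.sqrt(disc)) / 2;
  return hit >= 0 ? hit : null;
}

// Returns the id of the nearest target the controller ray hits, or null.
function zap(origin: Vec3, dir: Vec3, targets: Target[]): string | null {
  let best: { id: string; dist: number } | null = null;
  for (const t of targets) {
    const d = raySphere(origin, dir, t);
    if (d !== null && (best === null || d < best.dist)) best = { id: t.id, dist: d };
  }
  return best ? best.id : null;
}
```

You'd call `zap` with the controller's pose on each trigger pull; most engines (and the WebXR API) supply the ray origin and direction for you.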


Here's an example of the player zapping a button in a prototype VR version of our interactive module Stay Safe Working With Horses:



Zapping doesn't have to activate an abstract UI element, although it often does. Another option is to define a natural action for an item so that, when it is zapped, that action occurs. For instance, in Storage Space 13 (a WebXR game that clocks in at under 13K bytes of total size – impressive!), the "natural action" for a crate is to be pushed in the closest "Manhattan direction" away from the player:



This technique can be quite immersive, in that it requires no UI other than the ray, but each object must have only one natural action. For instance, if the crates also needed to be opened, it would become unclear what happens when one is zapped, and another UI mechanism would need to be considered. (Perhaps the natural action would be to show two options as buttons, which in turn could be zapped to trigger the action.)
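The crate-pushing rule described above reduces to picking the axis-aligned direction most nearly away from the player. A sketch of that rule (TypeScript; the function name and details are our guesses at the idea, not Storage Space 13's actual code, which in practice likely restricts pushes to the horizontal plane):

```typescript
// Choose the axis-aligned unit direction closest to the player→crate vector,
// i.e. the "closest Manhattan direction" away from the player.

type Vec3 = { x: number; y: number; z: number };

function manhattanPushDir(player: Vec3, crate: Vec3): Vec3 {
  const d = { x: crate.x - player.x, y: crate.y - player.y, z: crate.z - player.z };
  const ax = Math.abs(d.x), ay = Math.abs(d.y), az = Math.abs(d.z);
  if (ax >= ay && ax >= az) return { x: Math.sign(d.x), y: 0, z: 0 };
  if (ay >= az) return { x: 0, y: Math.sign(d.y), z: 0 };
  return { x: 0, y: 0, z: Math.sign(d.z) };
}
```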


This technique also has the considerable benefit of being forgiving of distance. Since the ray can be extended very far, it doesn't matter how far the item is from the player, which can be helpful in stationary-context applications, or when locomotion is burdensome. (Keep in mind, however, that targeting an element very far away can be difficult for the user.)


One consideration for this method is that you may not want the player to trigger some actions from afar; in that case, you'd need a mechanism to deactivate faraway objects and signal the player to move closer to them.
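Range-gating like this can be as simple as comparing the raycast's hit distance against a threshold before activating, and using the out-of-range state to restyle the ray or target as a "move closer" cue. A tiny sketch (TypeScript; the threshold value is illustrative):

```typescript
// Gate zap activation by distance: out-of-range hits don't fire, and the
// caller can use the false result to tint the ray as a "move closer" signal.

function canActivate(hitDistance: number, maxZapRange = 3): boolean {
  return hitDistance <= maxZapRange;
}
```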


PROS

•  Clear and simple.

•  Interact at a distance.

•  Can be used with abstract UI elements or objects.


CONS

•  Interacting at a distance can undermine immersion, and some actions shouldn't be triggerable from afar.

•  Really only works with elements that have obvious "natural" actions.


SCOPE IMPACT: Very Low


Tapping

In the real world, we can't zap things from afar, so a more physical interaction can feel more immersive. Objects can instead be set up so that you "tap" them to trigger their natural action. Here's an example from the Pollinator Park app, where you tap buttons with your hand to choose how to start the app:



As with zapping, tapping requires each element to have a natural action that is clear to the user, but it feels more immediate and real than zapping.
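A tap is usually detected as the hand (or fingertip) entering a small zone around the object on a frame where it wasn't already inside, so a sustained touch registers as one tap rather than one per frame. A minimal sketch (TypeScript; the class shape and names are illustrative, not from any particular engine):

```typescript
// Proximity-based tap detection with edge triggering: fires only on the
// frame the hand first enters the object's touch radius.

type Vec3 = { x: number; y: number; z: number };

function dist(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

class Tappable {
  private wasTouching = false;
  constructor(public center: Vec3, public radius: number) {}

  // Call once per frame with the hand position; returns true on the tap frame.
  update(hand: Vec3): boolean {
    const touching = dist(hand, this.center) <= this.radius;
    const tapped = touching && !this.wasTouching;
    this.wasTouching = touching;
    return tapped;
  }
}
```

Real systems usually use the engine's collider/trigger events instead of a hand-rolled distance check, but the edge-triggering logic is the same.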


Tappable objects do need to be physically within reach of the player, inside their boundary. This can be a problem, especially in apps that have no locomotion or that intend to support both room-scale and stationary boundaries. In the example above, if the user is sitting rather than standing, they have to reach up to tap the buttons and peer around the podium to see the screen beyond, which feels awkward. At other times, the user must use a locomotion system to get close enough to touch the buttons they want. Touchable elements need to be positioned with consideration for everywhere the player might find themselves.


PROS

•  Clear and simple.

•  It feels very immersive to be able to tap things with your hand.

•  Can be used with abstract UI elements or objects.


CONS

•  Cannot interact at a distance; user may have to navigate closer or stand up/bend over to reach.

•  Really only works with elements that have obvious "natural" actions.

•  Imposes some restrictions on design of the virtual space.


SCOPE IMPACT: Low to Moderate, depending on virtual environment impact


Grabbing

Another common item interface mechanism in VR is "grabbing." There are a lot of physics-modeled interactions out there – shooting arrows, firing guns, picking up keys and putting them in locks, etc. This post won't go into the details of all those interactions, but it's worth noting that grabbing an element can sometimes work well in educational contexts.


Here's an example from the Pollinator Park app where you grab a flower to look at it closely. This, in turn, triggers an animation of a moth coming to land on it:



Here, the grab action attaches the flower to your hand, and when you let go, it snaps back. This is low-overhead and doesn't require a complete physics system to handle things like dropping or throwing the flower. However, if the flower had to exhibit realistic physics – say, if you had to drop it in a bin – that would get complicated, and would likely require a way to reset the state.


PROS

•  Clear and simple.

•  Feels very immersive to be able to hold things with your hand.


CONS

•  Cannot really be used with abstract UI elements.

•  Cannot interact at a distance; user may have to navigate closer or stand up/bend over.

•  Can incur heavy overhead if the model for picking up and putting back is not trivial.


SCOPE IMPACT: Moderate, or Very High if physics is necessary.


Note: It is possible to mix and match between zapping, tapping, and grabbing, but doing so makes things a lot more cumbersome and confusing for the end user. The Unity VR tutorials lead you through setting up a system like this, but the end result doesn't feel satisfying, because you're constantly fiddling with buttons just to switch between zapping and grabbing contexts. It's almost certainly better to stick with one mechanism and not put the burden on the end user to switch between "raycasting" and "touching things physically."


Buttons in Speech Bubbles

A fairly common UI element is a speech bubble with a dismissal button that you zap.


This shows up in a lot of places, both with voice narration and "omthars" (sounds that mimic speech but do not actually contain words, intended to give the impression of speech without having to have full voice narration).
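One reason this pattern pairs well with text-driven scripts is that the whole conversation can live in plain data, with the dismiss button simply advancing an index. A minimal sketch (TypeScript; the data shape is our assumption, not the Lab's actual script format):

```typescript
// Text-driven speech-bubble script: each line is shown in a bubble until the
// player zaps the dismiss button, which advances to the next line.

interface ScriptLine { speaker: string; text: string }

class BubbleScript {
  private index = 0;
  constructor(private lines: ScriptLine[]) {}

  // The line to display now, or null when the script is finished.
  current(): ScriptLine | null {
    return this.index < this.lines.length ? this.lines[this.index] : null;
  }

  // Called when the player zaps the bubble's dismiss button.
  dismiss(): void {
    if (this.index < this.lines.length) this.index++;
  }
}
```

Because the lines are plain data, localization can be as simple as loading a different text file per language, with omthars or narration keyed to each line.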


Below is an example from our VR prototype of Stay Safe Working With Horses.



PROS

•  Clear and simple.

•  Text-driven scripts (especially with omthars) for agile development, long-term resiliency, and dead-simple localization.

•  Supports long scripts with little overhead.

•  User proceeds at their own pace.

•  User can interact with UI elements at a distance.


CONS

•  Ineffective if the player is looking away or viewing the bubble at an odd angle.

•  Works only for characters talking to you; letting you talk back requires a different UI.


SCOPE IMPACT: Low


_________________________________________________________________________________________

That's all for now! We hope this introduction to some of the interactive possibilities for VR has been helpful. Stay tuned for Part 2!


Written by John “CC” Chamberlin, Lead Developer
