A New (Virtual) Reality

Tom Valorsa · Published in arup.io · Jun 22, 2017 · 6 min read


Recently at Arup we’ve been talking a lot about Virtual Reality (VR) and how we can apply it to our work in the built environment. VR has been steadily gaining momentum over the last couple of years, and 2016, dubbed the ‘year of VR’, saw the release of a number of higher-end headsets. VR’s roots lie largely in entertainment, but we’re now starting to see more and more viable practical applications in fields such as healthcare, education and even sports training. We see VR as a key tool for us because it helps people better understand and experience the future of the built environment. Also, it’s really cool.

Digital Skills

At Arup, our Skills Networks, supported by Arup University, give us a platform for developing the skills we think will be important in the future. Our Digital Skills Network is specifically focused on identifying and actioning the digital skills we need, and AR/VR is one of its current priorities.

We have a few Arupians who have been quietly working away on their own VR projects, in and out of work, and others who are keen to learn more. We decided to round them up and send them down to Surry Hills for a day-long crash course, developed in partnership with Academy Xi and hosted by Daniel Sim Lind, founder of Diesel Immersive.

Daniel with the Arup class

Our learning was geared towards designing VR experiences, with a quick look over some fundamentals. The idea was to build a baseline knowledge of VR technologies and design considerations so we could go away and work on our own ideas with a bit more structure than our current fragmented efforts.

The team at Academy Xi provided an HTC Vive for our class demos. One of our main interests for the day was interacting with the 3D models of the built environment that we had brought with us, so being able to walk around the models was a must. The Vive’s room-scale tracking, along with the upcoming release of the Vive Tracker, makes it look like the most flexible hardware in this respect for at least the next couple of years. As part of the course we developed some familiarity with Unity, making use of the SteamVR plugin and VRTK.

Hands on with the HTC Vive

By the end of the session we had taken one of our 3D models (a bridge) and transformed it into a VR experience. Rather than zooming and panning around the model on a 2D screen, the data could now be explored from the inside, and getting there was relatively simple: VRTK allowed us to rapidly churn out a prototype with basic teleportation controls, leaving the door open for additions and refinements down the track.
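For anyone curious what ‘rapid’ looks like here: in VRTK 3 (the current version at the time of writing), pointer-based teleportation is mostly a matter of wiring up a few stock components. The sketch below shows that wiring done in code; the component names are from the VRTK 3 docs, but in practice you’d usually just add them in the Unity editor, so treat this as an illustration rather than a drop-in script.

```csharp
// Minimal sketch of VRTK 3-style teleport wiring. In a real project
// these components are normally added in the Unity editor to the
// controller/play area script aliases created by the VRTK setup.
using UnityEngine;
using VRTK;

public class TeleportSetup : MonoBehaviour
{
    // Assumed scene references: drag in the controller and play
    // area aliases from your VRTK SDK Manager setup.
    public GameObject controllerAlias;
    public GameObject playAreaAlias;

    void Awake()
    {
        // The pointer draws a beam/arc from the controller and
        // reports where it lands.
        VRTK_Pointer pointer = controllerAlias.AddComponent<VRTK_Pointer>();
        pointer.pointerRenderer = controllerAlias.AddComponent<VRTK_BezierPointerRenderer>();

        // The teleporter listens for pointer destination events and
        // moves the play area to the selected spot.
        playAreaAlias.AddComponent<VRTK_BasicTeleport>();
    }
}
```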

Lessons learned

Our instructor Daniel managed to squeeze a huge amount of information into one day. I’m going to share the top 3 things I took away.

Immersion

The most important aspect of designing for VR is, without a doubt, immersion. Everything has to be smooth and feel ‘real’, from physics to spatial audio to a user’s interactions with their environment. Problems start to arise when you consider that people come in different shapes and sizes. Devices like Google Cardboard and GearVR can’t track the position of the user’s head in space, so experiences built for them end up being designed for the ‘average’ human. But what does the average human look like? How tall are they? How far apart are their eyes? Are they right or left-handed? When we fail to address these questions we risk throwing off a user’s immersion and, at worst, causing nausea and headaches. A shortcoming in the design of most 360° videos, for example, is that they have to be filmed at a fixed height; if the user is taller or shorter than the camera, the room can begin to feel out of proportion. Luckily, the more sophisticated hardware offerings track the position of the user, which mitigates some of these concerns.
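To make the fixed-height problem concrete, here’s a hypothetical Unity snippet for a Cardboard/GearVR-style target, where there’s no positional tracking and the camera height is whatever we hard-code. The 1.6m figure is an illustrative assumption, and that’s exactly the problem: no single number fits everyone.

```csharp
// Illustrates the 'average human' problem on headsets without
// positional tracking: we have to pick one eye height for everyone.
using UnityEngine;

public class FixedEyeHeight : MonoBehaviour
{
    // Illustrative assumption: an 'average' standing eye height.
    // Too low for tall users, too high for short ones -- either
    // way the room can feel out of proportion.
    public float assumedEyeHeightMetres = 1.6f;

    void Start()
    {
        // Park the camera rig at the assumed height. Room-scale
        // systems like the Vive track the user's real head position
        // instead, which is how they sidestep this issue.
        Vector3 p = transform.position;
        transform.position = new Vector3(p.x, assumedEyeHeightMetres, p.z);
    }
}
```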

Making the user understand

It’s important that your user understands, almost intuitively, both their environment and the effects they can have on it. Any confusing mechanics or awkward interactions can lead to a lapse in immersion. Objects modelled on their real-world counterparts, such as switches and buttons, should work as expected.

In many Vive experiences the user can see the controllers in the virtual world, letting them learn the layout of the buttons if they’re new to the hardware. A mesh-like boundary also appears in the virtual world, warning the user that they’re close to leaving the tracked space and could be about to walk into something. Someone taking part in your experience can’t exactly take their headset off to see where they are or what they’re pressing, which means you need to give them prompts within the VR experience itself.
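As a concrete example of in-experience prompting, VRTK ships a controller tooltip component that draws labels next to each button on the virtual controller model. A rough sketch, assuming VRTK 3’s field names (they may differ in other versions):

```csharp
// Sketch: label the virtual controller's buttons so a first-time
// user can learn the hardware without removing the headset.
// Field names are from VRTK 3 and may differ between versions.
using UnityEngine;
using VRTK;

public class ControllerHints : MonoBehaviour
{
    void Start()
    {
        // VRTK_ControllerTooltips sits on the controller alias and
        // renders a text label pointing at each physical button.
        VRTK_ControllerTooltips tooltips = GetComponent<VRTK_ControllerTooltips>();
        tooltips.triggerText = "Hold to grab";
        tooltips.touchpadText = "Point and release to teleport";
        tooltips.gripText = "Squeeze to scale the model";
    }
}
```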

Interactions

One of our challenges for the day was to design interactions for selecting a group of characters and directing them to perform different tasks. The difficulty was steering clear of the 2D interactions we were used to. It was surprisingly hard to think differently at first, and interesting to see just how ingrained our daily interactions with technology are (pretty much ‘point and click’ and ‘click and drag’). Almost everyone on the course fell into the trap of designing something that was a thinly veiled click-and-drag clone, and as a result lassos were outlawed. This was eye-opening for me personally: you can make pretty much anything you want in VR, and interactions don’t need to be restricted by the rules of physics, yet we instinctively went straight for point-and-click interactions.

There was a wide range of solutions, the most interesting of which was a paintball gun used to assign different colours to a group of characters. A single shot could both select a character and assign it a task.
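To give a flavour of the idea, here’s a hypothetical sketch of that paintball mechanic in plain Unity: a raycast from the gun tints the character it hits and records the task mapped to the current colour. The Character type and AssignTask method are invented for illustration, not part of any library.

```csharp
// Hypothetical paintball selector: one shot both marks a character
// (visibly, with paint) and assigns it a task.
using UnityEngine;

public class PaintballSelector : MonoBehaviour
{
    // The colour doubles as the task assignment.
    public Color currentColour = Color.red;
    public string currentTask = "Inspect bridge deck";

    // Hook this up to the controller's trigger-pressed event.
    public void Fire()
    {
        RaycastHit hit;
        // Shoot a ray from the gun muzzle along its forward axis.
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            Character character = hit.collider.GetComponent<Character>();
            if (character != null)
            {
                // Selecting and tasking in a single action.
                character.GetComponent<Renderer>().material.color = currentColour;
                character.AssignTask(currentTask);
            }
        }
    }
}

// Minimal stand-in for a selectable character.
public class Character : MonoBehaviour
{
    public void AssignTask(string task)
    {
        Debug.Log(name + " assigned task: " + task);
    }
}
```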

Another important consideration is whether the user is left or right-handed. In the game Longbow (part of Valve’s ‘The Lab’) the player is presented with the bow before the level begins. Knowing how a bow works, the user will instinctively pick it up with their non-dominant hand, leaving their dominant hand free to draw arrows. In the same spirit, if a user needs to hold two objects, either present the objects in order of importance (so the user grabs the most important one with their dominant hand first), or make sure they recognise the real-world object so they can plan accordingly. It would be frustrating, inaccurate and a bit weird to fire a bow held in your dominant hand while drawing arrows with your non-dominant one.
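One way to apply this in code (a sketch of the idea, not something we built on the day) is to infer handedness from behaviour: whichever hand grabs the bow is treated as the non-dominant hand, and arrows are dealt to the other one.

```csharp
// Hypothetical handedness inference, after the Longbow pattern:
// the hand that grabs the bow is assumed to be non-dominant, so
// arrows are spawned for the opposite hand.
using UnityEngine;

public class HandednessInference : MonoBehaviour
{
    public enum Hand { Unknown, Left, Right }

    // Inferred dominant hand; Unknown until the bow is grabbed.
    public Hand dominantHand = Hand.Unknown;

    // Call this from whichever grab event your toolkit exposes
    // (e.g. an interactable-grabbed callback), passing the hand
    // that picked up the bow.
    public void OnBowGrabbed(Hand grabbingHand)
    {
        // The bow goes in the non-dominant hand, so the dominant
        // hand is the other one.
        dominantHand = grabbingHand == Hand.Left ? Hand.Right : Hand.Left;
        Debug.Log("Dealing arrows to the " + dominantHand + " hand.");
    }
}
```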

Next steps

Our next steps as a group will be as follows:

  1. pull together a larger pool of hardware
  2. experiment with new projects and ideas to build upon what we’ve learned
  3. involve clients with said projects to get feedback on how we can use VR to solve some of their problems
  4. get others at Arup to drink the delicious VR Kool-Aid
