Fetch
Karl Vaba
This was my first ever VR-related project. I wanted to try creating an environment that the player can experience using VR goggles. I ended up with a "game" where you can play fetch with a dog-like creature. Here is an introductory video:
And here is a link to the build, although running it requires SteamVR... and a VR headset. https://drive.google.com/file/d/1Yk8v_mNJDSWRRCor_GT1bmrZwzXmXIUE/view?usp=sharing
Below you can find the entire development blog.
Dev blog
VR Space Explorer (name change might come later :) ) will be a game where the player can explore a virtual environment using VR goggles. The end goal is to have at least one colorful low-poly area where the player can move around. There will be some objective (like finding an easter egg or solving a maze or a puzzle) to make the project seem like an actual game.
The outline of the development plan will be as follows:
- Familiarize myself with the existing VR equipment, test it out, interface it with Unity. As I have never used VR equipment before, this seems like a standalone milestone.
- Create the environment to explore. Two starting ideas I have are either a winter forest or a tropical glade.
- Come up with and implement the game element (game logic) which will give the player some goal to achieve.
- Implement player movement. I have a feeling that this will be a bit different from regular keyboard WASD-based movement.
- Create background music for the game.
Milestone 1 (05.10)
For this first milestone I will not add any real features of the game; instead, I want to familiarize myself with and test out the existing VR equipment provided by the CGVR lab. A couple of the things I want to achieve with this milestone are:
- Figure out how to interface the goggles with Unity (2h).
- Create some sample scene and be sure that I can see the scene using the goggles. I imagine a couple of different colored cubes on an empty plane (2h).
- Experiment with some basic movement to understand how the VR space works (3h).
Results
To start off, I tried connecting the HTC Vive headset to the computer according to the instructions on the CGVR page. This was not too difficult, with just some minor hiccups (like the VIVE wireless software not finding the headset). With the goggles successfully connected, I played "The Lab" game a bit to get some first hands-on experience with VR. At some point the game crashed for some reason and I decided to start trying out the headset with Unity.
I found this tutorial (https://www.raywenderlich.com/9189-htc-vive-tutorial-for-unity), which is specific to the Vive headset. I did not follow it much (it required registering an account and downloading premade assets), but it led me to the SteamVR Unity plugin. Since the CGVR-Cook computer is SteamVR ready, I figured I would experiment with that.
I created a sample scene, which can be seen in the image below:
The SteamVR library does a lot of the heavy lifting to produce the bordered play space. To get started, all I needed to add were the Camera Rig and SteamVR components. With those added and SteamVR running, you should see your scene after hitting "play" in Unity. I also added four differently colored cubes in order to test that it loads my scene. I had some issues with this initially - it always loaded the default SteamVR scene. The problem went away after some restarts of Unity and SteamVR. The scene in VR view looked like the image below:
I imagined the play area would be the same in the VR view as in the scene view, but it was not. I expected the play area in VR to be rectangular, with the cubes in the corners. Still, I was able to move around in the scene and look at the cubes, which was nice for an initial test.
The approximate time this milestone took me:
- Setting up the goggles and trying them out in "The Lab" - 1h
- Messing around with Unity and its SteamVR plugin to create the sample scene - 6h. This took longer than expected - there were errors creating the empty Unity project and a lot of compilation errors when trying to add the SteamVR plugin to the project.
- Experimenting with basic movement - virtually no time. Nothing special needed to be done to just move around in the scene; the plugin tracks the headset by itself.
Milestone 2 (19.10)
For this milestone, I want to explore the SteamVR plugin some more and create two things to use in the game in the future:
- Teleporting between different areas in the scene (3.5h). While working on this, I also hope to gain some insight into how the play area can be mapped to the area covered by the beacons in the room.
- Creating grabbing and throwing interaction with objects in the scene (3.5h)
Results
I changed the way the player can move around in the environment. I want the player to be able to explore an area larger than the one covered by the VR headset lighthouses (obviously). The initial idea was to let the player move around in smaller areas and teleport to another one upon reaching the edge of the physical space. But this approach would still allow the player to walk out of the physical space. And if there are objects in the scene (trees, etc.), it would be difficult to stop the player from moving inside them (you could display warning messages or render a barrier, but you cannot physically stop the player from moving). For those reasons, I decided to change the game into a standing/sitting VR experience: the player moves around using the controllers and sees the world through the VR goggles.
SteamVR has a system for binding actions to controller buttons. I followed this guide to get started: https://sarthakghosh.medium.com/a-complete-guide-to-the-steamvr-2-0-input-system-in-unity-380e3b1b3311.
First, I added a new action under the "default" action set (the menu can be opened from Window -> SteamVR Input). The following image shows what the menu looks like:
I named the new action "MoveForward" and set its type to boolean. I could have created a new action set for the project, but the default one already had everything I needed for interacting with objects, so I just added the new action there. After the action is created, you can press "Save and generate", which creates .json files for the action sets. Next, I bound the action to the north DPAD button of the controller. The bindings can be changed in the binding UI, which opens when you choose "Open binding UI" in the previous menu. The binding UI looks something like this:
Then I created a script which tracks the state of the action. The script itself is very simple (at least for now):
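In essence, it is a minimal sketch along these lines (the class name and the speed value here are illustrative, not necessarily the exact ones in my project):

```csharp
using UnityEngine;
using Valve.VR;

public class PlayerMovement : MonoBehaviour
{
    // References set in the inspector: the boolean action and which controller to read it from
    // (handType can be set to SteamVR_Input_Sources.Any to accept either controller).
    public SteamVR_Action_Boolean moveForward;
    public SteamVR_Input_Sources handType;
    public Camera vrCamera;
    public float speed = 2f; // movement speed, illustrative value

    void Update()
    {
        // True while the bound button (the north DPAD button) is held down.
        if (moveForward.GetState(handType))
        {
            // Move along the forward axis of the referenced camera.
            Vector3 direction = vrCamera.transform.forward;
            Vector3 pos = transform.position + direction.normalized * speed * Time.deltaTime;
            pos.y = 0f; // force the Y position to 0 so looking up does not make the player fly
            transform.position = pos;
        }
    }
}
```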
It needs references to the action (MoveForward), the input source (handType), and a VR camera. When the north DPAD button (of either controller) is held down, the player is moved along the forward axis of the referenced camera (the Y position is forced to 0 in order to not let the player fly around by looking up). The script is then attached to a Player object (a SteamVR plugin prefab), which is the parent of the camera, the hands, and other components. The scene hierarchy can be seen in the following image:
On the right you can see the movement script with the references to the required objects. With those things set up, I could move the character in the direction I was looking by pressing the "up" button on the DPAD.
To create a throwable object, all I needed to do was add the "Throwable" script (which comes with the SteamVR plugin) to an object in the scene (in my case - ThrowBall). The plane was needed so the ball would not fall endlessly. The "Player" object's "LeftHand" and "RightHand" children contain all the necessary components for working with and tracking controller inputs. The following clip shows moving around in the scene and throwing the ball:
As a summary, the time it took me to reach each goal:
- Experimenting with teleporting, creating the controller movement: 3h
- Creating the throwing mechanic: 1h
- Trying to get rotation-only spatial tracking to work: 3h
The spatial tracking part is something I wish to solve in the future (if there is time). SteamVR allows turning positional tracking of the headset off while keeping rotational tracking. This makes sense when the person is not supposed to move around in the physical space. It worked for me as long as I did not care about the position of the hands in the scene. But since I want to throw things, the hands are important, and turning positional tracking off for the headset caused the hands to sit at an offset position. For now, though, the movement works even with positional tracking turned on.
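For reference, a minimal sketch of one way to toggle this, assuming Unity's legacy UnityEngine.XR.InputTracking API (SteamVR exposes its own settings for this as well; this sketch does not address the hand offset described above):

```csharp
using UnityEngine;
using UnityEngine.XR;

public class RotationOnlyTracking : MonoBehaviour
{
    void OnEnable()
    {
        // Ignore the headset's position and keep only its rotation.
        InputTracking.disablePositionalTracking = true;
    }

    void OnDisable()
    {
        // Restore full positional tracking when this component is disabled.
        InputTracking.disablePositionalTracking = false;
    }
}
```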
Additionally, I found a solution to the problem of the Unity scene not showing up in VR (experienced in milestones 1 and 2). To solve it, go to Player Settings > XR Plug-in Management and make sure that "OpenVR Loader" is ticked.
Milestone 3 (02.11)
For this milestone, I want to create the environment the player can move around in. I will try to create most of the models myself using Blender. The approximate time cost:
- Creating the concept of the scene: 1h
- Modelling the assets: 3h
- Setting up the scene in Unity: 3h
The concept I will go for will be a Japanese garden. An image that gave me some inspiration can be seen below:
Some of the assets I will need: cherry-blossom-like trees (with pink-white leaves), bonsai-like trees (something green), hedges, other minor foliage, lanterns, a bridge, a building (perhaps a temple-like one), fences, and statues. I will probably come up with other objects as I go. I will also need the terrain to place everything on.
Results
For the scene, I created models for the following objects:
- 4 variants of trees (2 with green leaves, 2 with pink)
- Bush
- Bridge
- Fence
- Lantern
- Stairs
- Bottom of a building
- Top of a building
Those assets can be seen in the image below (in the same order as described):
I used those models to create a scene in Unity, which can be seen in the following images:
I used Unity's "Terrain Tools" package (https://assetstore.unity.com/packages/tools/terrain/terrain-tools-64852) to create the ground (well, terrain) for the scene. In addition to creating the landscape, it provided a simple way to add grass to the scene and paint a walking-path texture. Other than that, the only models I used for the scene are the ones I created myself.
Milestone 4 (16.11)
For the next milestone I want the player to be able to move only in areas where they are supposed to move. That means no moving over water, or through trees, houses, etc. Right now, the player just moves along a plane and collisions with objects are not detected. The height of the terrain or of objects (e.g. a bridge) is also not taken into account.
The steps are:
- Research an optimal way of achieving this (4h; I assume this will have something to do with colliders)
- Set up the movement restriction logic in Unity (3h)
- If I have time, apply the same kind of logic to the throwable object I am going to use (e.g. when you throw the ball into water, it will respawn, bounce off, or something like that)
Results
I found out that Unity has a built-in AI navigation system that lets you create a navigation mesh (NavMesh). The navigation mesh determines where agents can and cannot walk. This is the solution I used to restrict player movement.
First I added the navigation mesh to the terrain and the objects that the player can walk on. It looked something like this:
When baking the navigation mesh, the walkable area is determined by a couple of properties of the baked agent, which can be seen in the following image:
The "holes" in the NavMesh come from objects that are marked "not walkable" when baking the mesh. Those objects are pretty much all the things I do not want the player to go through. With the objects shown, the scene with the NavMesh looks like this:
I marked the water plane as not walkable. The area under the water is unwalkable as well, since the space between the water plane and the bottom is smaller than the height of the baked agent.
Then I simply added a "NavMeshAgent" component to the player object. Usually this component is used for AI movement - you set a destination for the agent and the system finds a path along the NavMesh. On my player object, however, it does not move the player - it just restricts the translation of the player transform if it goes out of bounds of the navigation mesh.
Using a navigation mesh also lets me reset the throwable object (in my case, the ball) if it is out of reach for the player (later, for the dog or whoever goes to fetch the ball). When the ball has stopped moving, I sample the ball's position against the NavMesh with some threshold (there is a convenient function for this - https://docs.unity3d.com/ScriptReference/AI.NavMesh.SamplePosition.html). If it finds a spot on the NavMesh, all is good. If not, I set the position of the ball to be under the player (essentially resetting it).
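A minimal sketch of that reset logic (the field names, the sampling radius, and the rest-velocity threshold are illustrative):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class BallReset : MonoBehaviour
{
    public Transform player;           // reference to the Player object
    public float sampleRadius = 2f;    // threshold for the NavMesh search, illustrative value
    public float restThreshold = 0.05f; // below this speed the ball counts as stopped

    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void Update()
    {
        // Only check once the ball has (more or less) stopped moving.
        if (rb.velocity.magnitude > restThreshold) return;

        // Look for a spot on the NavMesh within the threshold of the ball's position.
        NavMeshHit hit;
        if (!NavMesh.SamplePosition(transform.position, out hit, sampleRadius, NavMesh.AllAreas))
        {
            // No reachable spot found: reset the ball to under the player.
            transform.position = player.position;
            rb.velocity = Vector3.zero;
            rb.angularVelocity = Vector3.zero;
        }
    }
}
```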
Milestone 5 (30.11)
(Pushed to milestone 6)
Milestone 6 (14.12)
For this milestone I will create the AI agent that fetches the thrown ball. To do that, I want to create the following:
- Model for a dog (1h)
- Rigging and movement animation for the model (5h)
- The agent in Unity with the created model (1h)
Results
Model:
Simple animation (movement and idle combined):
Dog fetching the ball in the game: