Paabel VR Demo
Madis Vasser, Salme Ussanov, Markos Leemet (supervisor: Madis Vasser)
Overview video here - https://drive.google.com/open?id=12gY1FgKvHrdti6Qbm0fm7q2RFkgc77s0
Repo here - https://gitlab.com/salmeu/arvutigraafika
VR Build here - https://drive.google.com/open?id=1q8YzpcJVGHQpzcLsNOEFCqjz4LmZOxdZ NB! Requires HTC Vive and one Vive Tracker, consult readme for key binds.
Non-VR Build here - https://drive.google.com/open?id=1Th9g_xUcxl0_vDoMh-mpMOqA3NIEzPA3 Consult readme for key binds. This version is bound to have bugs, as it was not the main focus of the project and really came to life as a last-minute afterthought for those lacking a VR set.
What we had at the start of the project
- 3D objects - the model of the room was hand-made in Blender and followed the general shape of the actual room. Complex forms such as arches were mostly avoided while modelling. Smaller objects (monitors, chairs) were pulled from the internet and did not correspond much to actual objects. Below is the comparison between the real room (left) and the virtual copy (right).
- Visual effects - the scene was lacking proper lighting, reflections and shadows (see above)
- Realistic people - the project originally had two human characters in it: one was a floating head of Madis and the other was a full-body avatar of a random female. Both had problems. Madis was quite low-poly and did not even have smooth shading applied. While he had three different facial expressions, due to an underlying mismatch in the topology the expressions did not blend into each other. The full-body avatar had creepy eyes (below).
- Some interactivity - the player could create balloons and smash a virtual wall. The user behind the computer could toggle the visibility of the full-body avatar, change the facial expressions of Madis, remove the floor of the lab and make the whole room sway from side to side. The code, however, was poorly organized and hard to interpret, with many redundant Blueprint nodes.
Our goals during this project (and who was responsible)
- Documenting the initial project (Madis) - DONE
- Remake the environment using photogrammetry or manual modelling (Madis) - DONE
- Upgrade Madis with blendshapes (Markos) - DONE
- Add correct smaller objects like monitors, chairs etc (Markos) - DONE
- Add some fancy shader effects (Markos, Madis) - DONE
- Remake a realistic full-body avatar (Salme) - DONE
- Add interactions like balls, zero gravity, air drawing (Madis, Salme, Markos) - DONE
Remaking the environment
As a first approach, we took 15 spherical photos of the room using a Samsung Gear 360 (2017) camera, stitched them using the Gear 360 Action Director software and constructed a 3D model using the Cupix.com platform. As seen below, the result was quite nightmarish.
With 43 images (below), the results were slightly better in terms of geometry, but much worse in terms of texture. This could be because the photos were taken handheld, with the camera held overhead (the first attempt used a timer and a tripod). It could also be that the Cupix platform simply can't handle complex geometry.
Due to time constraints (each photogrammetry pass took hours with little usable results) we ended up modifying the existing 3D room model made in Blender. We added more detail and fixed the lightmaps.
Progress on the room...
Mid-way through the development we noticed weird texture stretching on the walls of the lab - it turned out the UVs were quite messed up (left), but after some tedious manual work in Blender we got it looking much nicer (right).
Finally we arrived at the current state, with added objects and other small tweaks.
At the very last moment we had a chance to scan the environment with a professional-grade scanner. The device used both images and lasers to produce a fairly realistic model of the room, seen below:
Upgrading Madis with blendshapes
The first step was to clean the topology of Madis, as the photoscanned version (left) was very messy and made blending between emotions impossible. The clean version (right) is already nicely morphable and will be improved further over time. Code-wise the head is no longer attached to one of the Vive controllers as before, but utilizes the Vive tracker to float around in the scene.
We made the design decision to put Madis in a jar - the face now has 7 different shape keys (morph targets) that are driven randomly to add some life to the floating head. The texture of the face could be improved in the future.
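The random driving of the shape keys can be sketched outside the engine as a simple easing loop. The struct and names below are illustrative assumptions, not the actual Blueprint setup:

```cpp
#include <array>
#include <cmath>
#include <cstdlib>

// Illustrative sketch (not the actual Blueprint graph): each of the 7 shape
// keys eases toward a random target weight in [0, 1]; once a target is
// (almost) reached, a new random one is rolled, giving the head some "life".
constexpr int kNumShapeKeys = 7;

struct ShapeKeyDriver {
    std::array<float, kNumShapeKeys> weight{};  // current morph weights, 0..1
    std::array<float, kNumShapeKeys> target{};  // weights being eased toward

    // Called once per frame: lerp every weight a fraction of the remaining
    // distance toward its target, then re-roll any target that was reached.
    void Tick(float alpha) {
        for (int i = 0; i < kNumShapeKeys; ++i) {
            weight[i] += (target[i] - weight[i]) * alpha;
            if (std::fabs(target[i] - weight[i]) < 0.01f)
                target[i] = static_cast<float>(std::rand()) / RAND_MAX;
        }
    }
};
```

In the engine, the resulting weights would then be written to the skeletal mesh's morph targets each tick.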
Adding a realistic human
We made a character in MakeHuman and then greatly improved the textures using Substance Painter. For the eyes we used the example mesh and materials from Unreal, as this is the best possible real-time eye out there. We call the character Jack.
Adding interactions
- First we upgraded the "balloon interaction" - now you can make bouncy balls instead!
- We also set up a system to easily add "zero gravity" interactions to any object. This means that when the player puts their hands high up in the air, the furniture in the room starts floating.
- The destructible wall also made a comeback, but the updated version also includes a space vacuum, so all the furniture goes flying out of the hole in the wall now.
- The rectangular play area can be made to fall away into deep space at the press of a button. Under the hood, the floor is driven by a Level Cinematic script.
- At the end of the simulation, the walls can be turned into pixie dust with the push of a button, thanks to the new Niagara FX engine, which allows mesh data to be used as input for particle systems.
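The "hands high up" trigger for the zero-gravity interaction boils down to a height check per controller. In the project this logic lives in Blueprints; the names and the margin below are assumptions for illustration:

```cpp
// Hedged sketch of the zero-gravity trigger: gravity is disabled for the
// tagged furniture only while BOTH controllers are held well above the
// player's head. The 20 cm margin is an assumed value, not the project's.
struct HandPose {
    float leftZ;   // height of the left controller, cm
    float rightZ;  // height of the right controller, cm
};

bool ShouldEnableZeroGravity(const HandPose& hands, float headZ,
                             float marginCm = 20.0f) {
    return hands.leftZ > headZ + marginCm && hands.rightZ > headZ + marginCm;
}
```

Requiring both hands above the head avoids accidentally triggering the effect during normal reaching gestures.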
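Similarly, the space-vacuum pull on each piece of furniture can be sketched as a force aimed at the hole in the wall. The inverse-square falloff and the clamp are assumptions for illustration; in-engine this would end up in something like a per-tick AddForce call on each simulated body:

```cpp
#include <cmath>

// Sketch of the vacuum pull: a force directed from a physics body toward the
// hole in the wall, fading with the square of the distance.
struct Vec3 { float x, y, z; };

Vec3 VacuumForce(const Vec3& bodyPos, const Vec3& holePos, float strength) {
    Vec3 d{holePos.x - bodyPos.x, holePos.y - bodyPos.y, holePos.z - bodyPos.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len < 1.0f) len = 1.0f;           // avoid a blow-up right at the hole
    float mag = strength / (len * len);   // assumed inverse-square falloff
    return {d.x / len * mag, d.y / len * mag, d.z / len * mag};
}
```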
What we learned
- As with any project, once you finish it you are 10 times better than when you started and you want to redo everything. Good thing that deadlines exist!
- We definitely put some of our time in the wrong places - like modelling a super-high-poly mouse, or spending half a day getting perfect eyelashes only to not use them in the end.
- And we were too naive about doing photogrammetry - while our own attempts failed, we did get a last-minute visit from some professional scanner operators, so we still might be able to hit that goal too. If we're not too naive again :)
- We learned that Unreal is very cool for computer graphics related stuff and we plan to use it a lot in the future. If only we had discovered the Niagara tool earlier!
- We all tried to push our limits with this homework, but the tasks definitely varied in complexity due to our different levels of experience with Unreal, texturing and modelling. A task that would have taken one team member an hour took another a day. In the end everything came together nicely though!