Musical Performance Anxiety Inducer
Theodor Taimla, Katrin Raigla, Viido Kaur Lutsar
The project is hosted on GitHub: VR Anxiety Inducer
The goal of the project is to build a VR solution that induces performance anxiety in classical musicians (violinists to start with), so that they can practice performing and get better at managing their anxiety.
Right now we have the following ideas:
- Making anxiety-inducing humans in the MakeHuman application
- Animating humans using HTC Vive trackers
- Trying out plain video at 120 fps integrated into the VR scene
- Same as above, but perhaps with spherical or 3D video as well or instead
- Making a scene to simulate a performance and making it as anxiety-inducing as possible
- 3D scanning a violin
MakeHuman models:
Problems:
- When importing MakeHuman models into UE4, the materials are very basic: skin looks like plastic, eyes are invisible, and hair and eyebrows also look like plastic. (Katrin: the eyes were fixed by changing the eye geometry from high-poly to low-poly. Maybe this link will help: https://forums.unrealengine.com/development-discussion/content-creation/51552-why-do-makehuman-models-look-like-plastic-in-ue)
- Performance issues: UE4 takes a while to start, then forever to load the project and then spends another eternity compiling 5k shaders.
- GitHub refuses to accept files larger than 100 MB. Adding LFS support should have fixed it; however, LFS has to be set up before the large file is committed, so no amount of deleting and recommitting helped (Fixed)
Todo:
- Make the floor and violin models more realistic (suggestion: experiment with specular lighting and add a normal map for depth)
- Make a sphere in Blender with its normals flipped, so that it is only visible from the inside (Viido)
- Add the necessary components in UE4 to enable 360 video playback using that sphere (Viido)
- Make the MakeHuman models more realistic in UE4 by changing material properties (Katrin)
- Make the scene more compact, because in VR people want to go wherever they can see they could go
- Animate 3D MakeHuman models (Model - Katrin. Behind the computer - Theo)
- We need a chair for humans to sit on (Katrin)
- Use Blender's Decimate modifier so the piano uses fewer resources while retaining an OK look (Viido)
- All of the MakeHuman models need to be exported with the Game Engine skeleton, as .fbx with centimeters as the scale unit. (Exported from the MakeHuman application)
- Git needs to properly manage LFS files for potentially large files (Done - Theo)
- Add an HTC Vive tracker and make the in-game violin move in tandem with it (Done - Theo)
- Blend two violin models to get a better model (Theo)
- Scan a violin (Done - Theo)
Coming from Unity, UE4 really does seem to take A LOT of time to open up the project. At first it seemed to compile shaders every time the project was opened, but after many months of use the problem disappeared. The shader recompiling might have had something to do with UE creating new project copies depending on which UE version was used. The problem went away, but the picture below is still the stuff of nightmares:
Violin Scanning
The (very expensive) scanner was attached to a really old computer with 4 GB, maybe 8 GB of RAM, so doing anything in the software on that computer resulted in hangs that would not resolve themselves. The violin has a shiny surface and the scanner would repeatedly lose track of what it was scanning; to avoid that, we had to delicately watch the wavelengths (shown in green on the monitor) so that they stayed in the center of the spectrum. It was impossible to get the whole model in one go, and losing track meant that the scanner would spit out data at the wrong location, partly ruining the scan if not noticed. We experimented with a lot of different postures, trying to get the most complete scan - ultimately everything took half a day to complete.
After scanning the violin, the point-cloud data needed to be edited and the different scan snapshots blended together by matching points that existed in both snapshots. Most floating debris could be removed automatically, but some needed special attention. The whole process took hours - manually aligning, deleting data that was just noise, and so on. Only a trial version of Artec Studio was available, so the only way to export models was to upload them to viewshape.com at reduced quality (30k triangles and a low-resolution texture). Project saving was also disabled. RAM usage sometimes reached as high as 20 GB; thankfully, a computer with 64 GB of RAM was used for this process.
Two models were finalized, each with its own strengths and weaknesses. The left one has an extra edge that should not exist, but has beautiful f-holes. The one on the right has some texture problems, awful f-holes and some missing edges.
Viewshape.com provided the models in .ply format with the texture as a separate file, and the UVs were not unwrapped when opened in Blender. An attempt was made to apply the texture after UV unwrapping, but the end result is not usable. The solution would be to get access to the full version of Artec Studio 12 Ultimate, redo the scans and export a proper model.
The following photo shows what the first prototype looked like. We thought that it might be more anxiety-inducing to have a large stage with lots of room and a huge crowd, like a concert hall - something we think classical musicians usually dread. Soon after, following recommendations from Raimond-Hendrik Tunnel, we decided that the stage should be built so that there is not a lot of open space to walk into, because in VR people usually want to go wherever they can see they could go. The decision was probably for the best anyway, given possible performance issues down the road.
The piano was very dark after building lighting. It turned out that it did not have a lightmap, and increasing the lightmap resolution to the maximum possible value only helped a little in terms of brightness. The picture on the right uses automatically unwrapped UVs, which sadly overlap a bit and caused the visual problems. The piano at the bottom shows up fine - the solution was a compromise between performance, visuals and the UV unwrapping of a complex object (the mesh was set to Stationary instead of Static in UE4).
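These settings were changed in the editor's Details panel; purely for reference, a rough C++ equivalent might look like the sketch below (the function name and the resolution value are made up for illustration).

```cpp
// Rough C++ equivalent of the editor-side lighting fix described above.
#include "CoreMinimal.h"
#include "Components/StaticMeshComponent.h"

void ConfigurePianoLighting(UStaticMeshComponent* PianoMesh)
{
	// First attempt: raise the lightmap resolution for this component only.
	// This helped only a little with the darkness.
	PianoMesh->bOverrideLightmapRes = true;
	PianoMesh->OverriddenLightMapRes = 1024;

	// Actual compromise: Stationary instead of Static. A non-static mesh is not
	// lightmapped, so the overlapping auto-unwrapped UVs stop causing artifacts.
	PianoMesh->SetMobility(EComponentMobility::Stationary);
}
```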
We couldn't really figure out what kept causing material problems with some characters but not others. Separating each model into its own folder to avoid possible material conflicts did not help. The one on the left has no skin material, and the one on the right looks completely different from the intended result shown at the bottom. We resorted to just cherry-picking our MakeHumans.
So we needed people to watch the performer. We had (mostly) OK MakeHuman characters that needed to be animated. Madis Vasser mentioned that Ats Kurvet has a UE4 mocap solution, and we got permission to use it.
There were some problems with this. It was very difficult to get the position to look natural and stay stable (see the gif below). There was a tracker attached to a belt, one on each leg, controllers for the hands and the headset for the head, and presumably it was difficult to interpolate properly using just those. Also, a lot of the time the mocap solution incorrectly assigned trackers to the in-game meshes (for example Vive tracker meshes moving alongside controller meshes when they should be separate, in which case the controller buttons did not work). Usually no combination of restarting SteamVR and UE4 helped once the mocap solution started behaving like this. The only fix seemed to be a complete system restart, coupled with best practices such as starting SteamVR and the controllers/trackers before the UE project, and then praying for extra measure.
Even tiny motions could result in a jiggle that makes the animation unusable without post-processing. Getting it right involved a lot of trial and error. Then again, it's a free solution.
To get a MakeHuman model to animate, it needs to be exported from the MakeHumanCommunity program as .fbx with centimeters as the scale unit and imported into UE4 with no skeleton assigned during import (important!). Then an animation needs to be exported from the mocap solution as .fbx and also imported into UE4, but this time the skeleton of the specific mesh you want to animate needs to be chosen (also very important!). Then an animation blueprint can be created with the chosen mesh skeleton.
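In the project the animation blueprint was assigned to the mesh in the editor; for reference, a rough C++ equivalent of that last step might look like this (the function name is just an example, and the anim blueprint class must have been created for the mesh's skeleton):

```cpp
// Sketch: hand a skeletal mesh the animation blueprint class it should run.
#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"
#include "Animation/AnimInstance.h"

void AssignAnimationBlueprint(USkeletalMeshComponent* Mesh, TSubclassOf<UAnimInstance> AnimBlueprintClass)
{
	// Drive the mesh with an animation blueprint rather than a single
	// animation asset, then assign the blueprint class itself.
	Mesh->SetAnimationMode(EAnimationMode::AnimationBlueprint);
	Mesh->SetAnimInstanceClass(AnimBlueprintClass);
}
```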
After dozens of animation takes, picking the best MakeHuman models and putting them all together, along with room modifications as per Raimond-Hendrik's recommendations, we got the following result:
The next big problem that we couldn't figure out was the strange offset and axis flips when tracking the violin. The tracker was physically on the violin and the violin in hand, but the mesh in-game would appear far away and get closer as we approached it. The X and Y axes were also inverted. It turned out that the VR actor should always be left at coordinates (0, 0, 0) and must not be rotated; instead, the world must be adjusted if needed - for example if you'd like the player to spawn looking in a certain direction.
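The likely reason is that tracked devices are positioned relative to the VR actor's transform, so any offset or rotation on the actor is applied on top of the real-world tracking data. Below is a minimal C++ sketch of such a setup, assuming a pawn with a motion controller component driving the violin mesh (the class and component names and the "Special_1" motion source are illustrative assumptions, not the project's actual setup; requires the HeadMountedDisplay module):

```cpp
// Sketch: VR pawn whose root stays at the world origin, with the violin mesh
// following a Vive tracker via a motion controller component.
#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "Camera/CameraComponent.h"
#include "Components/StaticMeshComponent.h"
#include "MotionControllerComponent.h"
#include "ViolinPawn.generated.h"

UCLASS()
class AViolinPawn : public APawn
{
	GENERATED_BODY()

public:
	AViolinPawn()
	{
		// Tracking data is applied relative to this root, so the pawn must stay
		// at (0, 0, 0) with no rotation; adjust the level instead if needed.
		RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("VROrigin"));

		Camera = CreateDefaultSubobject<UCameraComponent>(TEXT("HMDCamera"));
		Camera->SetupAttachment(RootComponent);

		ViolinTracker = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("ViolinTracker"));
		ViolinTracker->SetupAttachment(RootComponent);
		ViolinTracker->MotionSource = FName(TEXT("Special_1")); // first Vive tracker in SteamVR

		ViolinMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("ViolinMesh"));
		ViolinMesh->SetupAttachment(ViolinTracker); // the violin mesh follows the tracker 1:1
	}

private:
	UPROPERTY() UCameraComponent* Camera;
	UPROPERTY() UMotionControllerComponent* ViolinTracker;
	UPROPERTY() UStaticMeshComponent* ViolinMesh;
};
```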
Proportions
Judging proportions is quite difficult when you're not in VR, except maybe when you have just the right angle and one of the objects is disproportionately small or big. Even if all of the objects look fine in comparison to each other, it might still turn out that everything is huge or very tiny when you check the result in VR.
Clapping
After more trips to the VR lab, clapping animations were recorded - sort of. It's hard to clap without smashing the controllers into each other repeatedly. A bigger problem was creating a trigger for when the audience should start clapping. Making the idle animations loop and the transition-to-clapping logic seemed fairly straightforward: the idle state would go from Sitting 1 to Sitting 2 when two conditions were met, TimeToClap = False AND the respective animation's time remaining ratio < 0.1. So when TimeToClap is True, the sitting animations no longer loop and the transition to Clapping happens instead. The big difficulty was figuring out how to get an external trigger working that would change the boolean TimeToClap to True.
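Expressed as plain boolean logic, the transition conditions look like the sketch below (in the project these live inside Blueprint transition rules; the function names are only illustrative, and TimeRemainingRatio corresponds to the Time Remaining (Ratio) value of the current sitting animation):

```cpp
// Transition rules of the sitting/clapping state machine as plain booleans.
bool ShouldSwitchSittingPose(bool bTimeToClap, float TimeRemainingRatio)
{
	// Keep cycling Sitting 1 <-> Sitting 2 while it is not yet time to clap
	// and the current sitting animation is almost finished.
	return !bTimeToClap && TimeRemainingRatio < 0.1f;
}

bool ShouldStartClapping(bool bTimeToClap)
{
	// As soon as the external trigger flips the flag, transition to Clapping.
	return bTimeToClap;
}
```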
After experimenting with event dispatchers, level blueprints and object casting, nothing seemed to work. Getting a reference to an Actor class and reading a public variable from it worked, but we needed to get a public variable from an animation blueprint. Google search results indicated that it should be possible to get a reference to the mesh that the specific animation blueprint uses, get its AnimInstance and cast it to the specific animation blueprint in order to access the public variable, but that did not work either. The solution was to create a GameInstance, set the project to use that specific GameInstance, and add a public variable to it. A Get Game Instance function is accessible from most blueprints. This is how the event graph of the animation blueprint should look when the Time To Clap? boolean is set in the level blueprint (which checks for collision with a bounding box):
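Alongside the Blueprint event graph, the same GameInstance pattern can be sketched in C++ for reference; the class names UClapGameInstance / UAudienceAnimInstance and the bTimeToClap flag below are assumptions, not the project's actual names:

```cpp
// Sketch of the GameInstance approach: a globally reachable flag that every
// audience animation instance mirrors each frame.
#include "CoreMinimal.h"
#include "Engine/GameInstance.h"
#include "Animation/AnimInstance.h"
#include "ClapGameInstance.generated.h"

UCLASS()
class UClapGameInstance : public UGameInstance
{
	GENERATED_BODY()
public:
	// Set from the level blueprint when the performer enters the bounding box.
	UPROPERTY(BlueprintReadWrite, Category = "Audience")
	bool bTimeToClap = false;
};

UCLASS()
class UAudienceAnimInstance : public UAnimInstance
{
	GENERATED_BODY()
public:
	// Read by the state machine transition rules (Sitting -> Clapping).
	UPROPERTY(BlueprintReadOnly, Category = "Audience")
	bool bTimeToClap = false;

	virtual void NativeUpdateAnimation(float DeltaSeconds) override
	{
		Super::NativeUpdateAnimation(DeltaSeconds);

		// The game instance is reachable from almost anywhere, which is why this
		// works where casting to the animation blueprint did not.
		if (const UWorld* World = GetWorld())
		{
			if (const UClapGameInstance* GI = Cast<UClapGameInstance>(World->GetGameInstance()))
			{
				bTimeToClap = GI->bTimeToClap; // mirror the global flag every frame
			}
		}
	}
};
```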
We found a way to output strings on screen, which turned out to be useful for debugging. Finally, one of the characters started clapping when the floating character blueprint was manually moved into the bounding box during a simulated run (a good way to test collision without implementing extra logic). All that's left is the tedious work of reworking the rest of the animation blueprints to match.
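For reference, the usual ways to print strings on screen in UE4 are the Print String Blueprint node or, in C++, something like the snippet below (the message text and function name are just examples):

```cpp
// Minimal on-screen debug output in UE4 C++.
#include "CoreMinimal.h"
#include "Engine/Engine.h"

void PrintClapDebug()
{
	if (GEngine)
	{
		// Key -1 adds a new message line each call; shown for 5 seconds in green.
		GEngine->AddOnScreenDebugMessage(-1, 5.0f, FColor::Green, TEXT("TimeToClap set to True"));
	}
}
```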
During a coach meeting, Jaanus Jaggo suggested that there should be more interaction with the world. We decided that it would be anxiety-inducing if there were a specific spot where the performer has to stand before the performance actually begins, so a glowing starting point with dynamic lighting was added:
A strange problem occurred when the rest of the characters' clapping animations were added. It turned out that the sitting animation for one of the characters started at the wrong coordinates; something must have been different during that specific mocap recording.
After adding clapping animations to the characters and adjusting the starting position of the VR player, we got the following result:
Final result