Errors, errors everywhere!

Project introduction
The predictive coding theory of brain functioning posits that the computational function of the brain is to actively predict sensory inputs using internal generative models and to minimize sensory prediction errors in an optimal way. For example, low levels of the visual hierarchy are thought to predict details (e.g. the contrast of a particular line segment), whereas high levels of the visual hierarchy are thought to make predictions about large objects and scenes. The present project will develop new virtual reality (VR) environments to study two crucial open issues regarding the theory of predictive coding. First, I'll try to temporarily change the relative importance of prediction errors arising in the lower and higher stages of the sensory hierarchy in healthy individuals. As the natural environment always contains a lot of change at the level of details (e.g. wind moves light objects, the sun can change the contrast of objects unpredictably), the brain should learn to down-weight the prediction errors arising at low levels of the cortical hierarchy. By analogy, creating an environment where the high-level properties (i.e. the objects and the environment itself) change unpredictably should make healthy subjects relatively less dependent on their high-level prediction errors.
Project description
I will procedurally manipulate the prediction errors at high and low levels of the sensory hierarchy by making the high-level features of the virtual reality volatile. In particular, in virtual rooms the objects will change their size, colour and shape as the subject turns her head. Also, the subject's movements through space will, without her knowing, generate an exaggerated amount of prediction errors. I will need to couple the movement of the subjects with the dynamics of the high-level features of the environment, and the amount of change taking place must be easily tweakable.
Tech description
The project will be realized in Unreal Engine and on the HTC Vive VR headset.
Milestone 1 (21.09)
- Goal 1: A simple scene with generic objects
- Result: Used the Unreal Content Example environment as a test backdrop, and filled the room with primitive objects like a cube, sphere and cone, straight from the game engine.

- Goal 2: Dynamic changes to the objects' shape, size and colour (depending on the VR headset pitch, yaw & roll).
- Result: Created a shader to dynamically change the colour of the material and also the position of the vertices. Both use the "CameraDirectionVector" node. The change in the size of the objects is driven by the blueprint nodes "GetCameraRotation" and "Set World Scale 3D". Got funky results:

The current material setup using the CameraDirectionVector node to drive changes in colour and shape:

Current blueprint associated with the objects to control size:

As seen in the pictures and video, objects with more vertices clearly work better for the transformation.
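For reference, the size-control Blueprint corresponds to something like the following Unreal C++ (a rough sketch only - the project itself is implemented in Blueprints, and the class name and the scale mapping here are made up for illustration):

#include "Kismet/GameplayStatics.h"

// Hypothetical actor class; in the project this logic lives in a Blueprint.
void ADynamicShapeActor::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // "GetCameraRotation" in Blueprint: ask the player camera manager for the HMD view rotation.
    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(GetWorld(), 0);
    if (!Camera)
    {
        return;
    }
    const FRotator View = Camera->GetCameraRotation();

    // Map pitch/yaw/roll (degrees) onto a per-axis scale factor of roughly 0.5-1.5.
    const FVector NewScale(
        0.5f + FMath::Abs(View.Pitch) / 180.f,
        0.5f + FMath::Abs(View.Yaw)   / 180.f,
        0.5f + FMath::Abs(View.Roll)  / 180.f);

    // "Set World Scale 3D" in Blueprint.
    GetRootComponent()->SetWorldScale3D(NewScale);
}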
Milestone 2 (05.10)
- Goal: Dynamic changes to the environment and objects based on the user's positional data.
- Goal: Dynamic changes to the objects based on the objects' own position and rotation.
Below is the blueprint with the additions highlighted, used to achieve dynamic object changes based on the user's positional data (Camera Location) and the objects' own position and rotation (Actor Rotation & Actor Location). For the user's location to have a differential effect on objects at various distances, the location of the object is subtracted from the location of the camera. The other nodes are there to fine-tune the effect.

The video below shows some additional changes. The primitive objects have been replaced with a teapot to assess the transform effect on more detailed meshes. The vertex displacement has been disabled, as it was maybe a bit too extreme for now. I've also added motion controllers and made the dynamic objects grabbable. Since simply throwing teapots around was quite boring, I've added a task: setting the floating teapots on a table in a neat row. This proved to be quite a difficult task for the user.
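In code form, the camera-minus-actor subtraction described above boils down to something like this (again only a sketch - the material parameter name and the 1000 cm reference distance are invented for illustration):

// Hypothetical member: UMaterialInstanceDynamic* DynamicMaterial;
void ADynamicShapeActor::UpdateDistanceEffect()
{
    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(GetWorld(), 0);
    if (!Camera || !DynamicMaterial)
    {
        return;
    }

    // "Camera Location" minus "Actor Location": distance from the user's head to this object.
    const FVector Offset = Camera->GetCameraLocation() - GetActorLocation();
    const float Distance = Offset.Size();

    // Fine-tuning: nearby objects are affected more strongly than distant ones.
    const float Strength = FMath::Clamp(1.f - Distance / 1000.f, 0.f, 1.f);
    DynamicMaterial->SetScalarParameterValue(TEXT("EffectStrength"), Strength);
}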
Milestone 3 (19.10)
- Goal: Restarting the project, adding back the colour changes and positional data input, and in addition implementing motion controller input.
Some details:

The material has three parameters that are fed by the HMD. This way I have more control over the actual data flowing around. I also added a texture so that the environment would be more pleasant on the eye and depth perception would be better.

Each wall is actually the same blueprint copied many times. Inside the blueprint I change the material through dynamic parameters. The "GetCameraRotation" node breaks down into X (-180 to 180), Y (-90 to 90) and Z (-180 to 180).
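Roughly, feeding those rotation components into the dynamic material parameters looks like this in code (a sketch; the three parameter names are hypothetical stand-ins for the ones in the material above):

// Hypothetical member: UMaterialInstanceDynamic* WallMaterial;
void AWallActor::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(GetWorld(), 0);
    if (!Camera || !WallMaterial)
    {
        return;
    }

    // Break the HMD rotation into its components, as the Blueprint does.
    const FRotator Head = Camera->GetCameraRotation();
    WallMaterial->SetScalarParameterValue(TEXT("HeadPitch"), Head.Pitch); // roughly -90..90
    WallMaterial->SetScalarParameterValue(TEXT("HeadYaw"),   Head.Yaw);   // roughly -180..180
    WallMaterial->SetScalarParameterValue(TEXT("HeadRoll"),  Head.Roll);  // roughly -180..180
}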

Moving the walls in accordance with the player's movement is achieved with the GetCameraLocation node. Only X and Y are used as modifiers. On some walls the axis mapping is switched, because the objects are at different angles relative to the world.

Scaling the walls is done by a variable that is read from the player pawn blueprint (below).

Only the X coordinates and absolute values are used for the desired effect.
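Put together, moving a wall with the player's position and scaling it with the pawn variable might look like this in code (a sketch; the gain values, the axis handling and the variable name are illustrative):

// Hypothetical members: FVector InitialLocation; float WallScaleFromPawn;
void AWallActor::UpdateFromPlayer()
{
    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(GetWorld(), 0);
    if (!Camera)
    {
        return;
    }

    // Only the horizontal X/Y position of the player modulates the wall; on some walls
    // the two axes would be swapped because of their orientation in the world.
    const FVector Head = Camera->GetCameraLocation();
    FVector NewLocation = InitialLocation;
    NewLocation.X += Head.X * 0.1f;
    NewLocation.Y += Head.Y * 0.1f;
    SetActorLocation(NewLocation);

    // Scale: only the X axis is driven, and the absolute value keeps it positive.
    SetActorScale3D(FVector(FMath::Abs(WallScaleFromPawn), 1.f, 1.f));
}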
Next up I'd like to add some interactable objects to the scene and also use all the parameters I haven't used yet to drive some change in the environment. The maze task is already quite hard, but can it be made harder?
Milestone 4 (02.11)
Missed goal: Adding interactable objects, making them change dynamically and also messing around with gravity.
Milestone 5 (16.11)
Goal: Adding interactable objects, making them change dynamically and also messing around with gravity.
I started by fixing some things I didn't like about the previous iterations - for example, the scaling of the walls by the position of the motion controllers. Previously the distance between the controllers was computed in a roundabout way via an x-axis measurement; now I simply subtract one controller's location from the other and take the length of the resulting vector. Seen below:

I also tweaked the colour changes of the walls - previously they were uniform across the whole level; now each wall has a slightly different cycle depending on its location in space!
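The corrected controller-distance computation from above is just a vector subtraction and a length; in code it would read roughly like this (a sketch; the controller component references are placeholders for whatever the pawn Blueprint holds):

// Hypothetical members: USceneComponent* LeftController; USceneComponent* RightController;
float AVRPlayerPawn::GetControllerDistance() const
{
    if (!LeftController || !RightController)
    {
        return 0.f;
    }

    // Subtract one controller's location from the other and take the length of the result.
    return (RightController->GetComponentLocation()
            - LeftController->GetComponentLocation()).Size();
}

The resulting value is what feeds the wall-scale variable that the wall blueprints read from the player pawn.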
Next I added back the good old teapots, made them grabbable and also applied a bit of random force (the last bit is shown below, after a quick code sketch).
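The random force amounts to a single impulse in a random direction - something like this (a sketch; the impulse strength of 200 and the component name are illustrative):

// Hypothetical member: UStaticMeshComponent* MeshComponent; (simulating physics)
void ATeapotActor::ApplyRandomForce()
{
    if (MeshComponent && MeshComponent->IsSimulatingPhysics())
    {
        // FMath::VRand() gives a random unit vector; apply it as a velocity change.
        MeshComponent->AddImpulse(FMath::VRand() * 200.f, NAME_None, /*bVelChange=*/true);
    }
}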

I also wanted to mess with "object permanence" - the notion that things generally stay pretty much the same from moment to moment. I am swapping out the static mesh randomly from a pool of 4 different objects stored in an array. The switch only happens while the object is held in the hand. A code sketch of this logic comes first, followed by the Blueprint:
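A rough C++ rendering of that logic (the mesh pool array, the held flag and the component name are hypothetical stand-ins for what the Blueprint stores):

// Hypothetical members: TArray<UStaticMesh*> MeshPool; bool bIsHeld; UStaticMeshComponent* MeshComponent;
void ATeapotActor::SwapMeshRandomly()
{
    // Only violate "object permanence" while the object is actually in the player's hand.
    if (!bIsHeld || MeshPool.Num() == 0)
    {
        return;
    }

    const int32 Index = FMath::RandRange(0, MeshPool.Num() - 1);
    MeshComponent->SetStaticMesh(MeshPool[Index]);
}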

I also added a task for the player - namely throwing every object into a hole in the ground. To inform the player of how many objects are left, I devised a counter. The counter blueprint is below - the number goes down once an object hits a trigger box under the hole. Only the "teapot" blueprint triggers the counter.

Playtesting showed that the task is quite difficult and success is not guaranteed on every trial. This is exactly what was hoped for.
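For completeness, the counter logic described above amounts to roughly this in code (a sketch; the trigger class, the teapot class and the counter variable are hypothetical names for the Blueprints involved):

// Hypothetical member: int32 RemainingObjects;
void AHoleTrigger::NotifyActorBeginOverlap(AActor* OtherActor)
{
    Super::NotifyActorBeginOverlap(OtherActor);

    // Only the "teapot" Blueprint counts towards the task.
    if (OtherActor && OtherActor->IsA(ATeapotActor::StaticClass()))
    {
        RemainingObjects = FMath::Max(RemainingObjects - 1, 0);
        // The on-screen counter would be refreshed from this value.
    }
}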
Milestone 6 (30.11)
- Goal: more mazes, add time trial, make walls more "solid", add visual weirdness, collect data.
- More mazes - now there are 4 different mazes
- Walls more "solid" - done!
- Add visual weirdness - the sky now also changes colour and the sun rotates opposite to the player's head rotation, but twice as fast (a code sketch follows after this list).
- Collect data - work in progress.
- Time trial - a timer counts seconds from the beginning of the level. See below:

- The game is now playable in the CGVR lab @ Paabel or downloadable (requires HTC Vive):
https://drive.google.com/open?id=0B97-aac1_IQ5WVRIeVA0NWFzZXM
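As promised in the list above, here is a sketch of the sun coupling (assuming the sun is a directional light actor referenced from the level; in the project itself the coupling is done in Blueprints):

// Hypothetical member: AActor* SunActor; (the directional light in the level)
void AMazeGameMode::UpdateSun()
{
    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(GetWorld(), 0);
    if (!Camera || !SunActor)
    {
        return;
    }

    // Rotate the sun opposite to the player's head yaw, at twice the speed.
    FRotator SunRotation = SunActor->GetActorRotation();
    SunRotation.Yaw = -2.f * Camera->GetCameraRotation().Yaw;
    SunActor->SetActorRotation(SunRotation);
}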
Later addition - vlog 6:
Further developments
- Create a proper promotional video