
Computer Graphics Project 2019/20 fall


Errors, errors everywhere!

Project introduction

The predictive coding theory of brain function posits that the computational function of the brain is to actively predict sensory inputs using internal generative models and to minimize sensory prediction errors in an optimal way. For example, low levels of the visual hierarchy are thought to predict fine details (e.g. the contrast of a particular line segment), whereas high levels are thought to make predictions about large objects and whole scenes. The present project will develop new virtual reality (VR) environments to study two crucial open issues regarding the theory of predictive coding. First, I will try to temporarily change the relative importance of prediction errors arising at the lower versus higher stages of the sensory hierarchy in healthy individuals. Because the natural environment constantly changes at the level of details (e.g. wind moves light objects, sunlight can change the contrast of objects unpredictably), the brain should discount prediction errors arising at low levels of the cortical hierarchy. By the same logic, creating an environment where the high-level properties (i.e. objects, the environment itself) change unpredictably should make healthy subjects rely relatively less on their high-level prediction errors.

Project description

I will procedurally manipulate prediction errors at the high and low levels of the sensory hierarchy by making the high-level features of the virtual reality volatile. In particular, objects in the virtual rooms will change their size, colour and shape as the subject turns her head. The subject's movements through space will likewise, without her knowing, produce an exaggerated amount of prediction errors. To achieve this I will need to couple the subject's movements to the dynamics of the high-level features of the environment, and the amount of change taking place must be easily tweakable (see the sketch below).
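As a minimal sketch of how the amount of change could be kept easily tweakable, the gains of the different manipulations might be collected into a single editor-exposed struct. All names below (FVolatilitySettings and its fields) are hypothetical illustrations, not taken from the actual project:

```cpp
// VolatilitySettings.h -- hypothetical tweakable parameters for the manipulation.
#pragma once

#include "CoreMinimal.h"
#include "VolatilitySettings.generated.h"

USTRUCT(BlueprintType)
struct FVolatilitySettings
{
    GENERATED_BODY()

    // Gain on head-rotation-driven size changes; 0 disables them entirely.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Volatility", meta = (ClampMin = "0.0"))
    float ScaleGain = 1.f;

    // Gain on head-rotation-driven colour changes.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Volatility", meta = (ClampMin = "0.0"))
    float ColourGain = 1.f;

    // Gain on movement-driven shape changes.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Volatility", meta = (ClampMin = "0.0"))
    float ShapeGain = 1.f;
};
```

Exposing such a struct as a UPROPERTY on the level's controlling actor would let the experimenter adjust each gain in the editor, per condition, without touching code.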

Tech description

The project will be implemented in Unreal Engine and run on the HTC Vive VR headset.

Milestone 1 (21.09)

  • Goal 1: A simple scene with generic objects
  • Result: I used the Unreal Content Example environment for now as a test backdrop, and filled the room with primitive objects straight from the game engine.
  • Goal 2: Dynamic changes to object shape, size and colour (depending on the user's head pitch, yaw and roll).
  • Result: Created a shader that dynamically changes the colour of the material as well as the position of the vertices; both use the CameraDirectionVector material node. The change in the objects' size is driven by the Blueprint nodes GetCameraRotation and Set World Scale 3D (see the C++ sketch below).
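A rough C++ equivalent of the Blueprint scale logic is sketched below. AVolatileObject and its DynamicMaterial member are hypothetical stand-ins for the actual Blueprint actor, and the colour parameter mirrors what the CameraDirectionVector node feeds the material:

```cpp
#include "Kismet/GameplayStatics.h"
#include "Camera/PlayerCameraManager.h"
#include "Materials/MaterialInstanceDynamic.h"

void AVolatileObject::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(this, 0);
    if (!Camera)
    {
        return;
    }

    // Head orientation from the HMD-driven camera (equivalent to GetCameraRotation).
    const FRotator HeadRotation = Camera->GetCameraRotation();

    // Map head yaw (-180..180 degrees) onto a scale factor, so turning the head
    // resizes the object (equivalent to Set World Scale 3D).
    const float Scale = FMath::GetMappedRangeValueClamped(
        FVector2D(-180.f, 180.f), FVector2D(0.5f, 2.f), HeadRotation.Yaw);
    SetActorScale3D(FVector(Scale));

    // Drive a colour parameter from the camera's forward vector, mirroring the
    // CameraDirectionVector input of the material.
    if (DynamicMaterial) // UMaterialInstanceDynamic*, created in BeginPlay
    {
        const FVector Forward = HeadRotation.Vector();
        DynamicMaterial->SetVectorParameterValue(
            TEXT("HeadColour"), FLinearColor(Forward.X, Forward.Y, Forward.Z));
    }
}
```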

Video: (embedded demo)

Milestone 2 (05.10)

  • Dynamic changes to the environment based on the user's positional data (see the sketch below).
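One way this could look in code, sketched with hypothetical names (LastHeadLocation, ChangeThreshold and TriggerRandomObjectChange are illustrative, not from the project): accumulate the headset's translation and trigger an unpredictable high-level change after every fixed amount of movement.

```cpp
void AVolatileRoom::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(this, 0);
    if (!Camera)
    {
        return;
    }

    // Track how far the headset has moved since the last frame.
    const FVector HeadLocation = Camera->GetCameraLocation();
    DistanceWalked += FVector::Dist(HeadLocation, LastHeadLocation); // float member
    LastHeadLocation = HeadLocation;                                 // FVector member

    // After every ChangeThreshold centimetres of movement (an editor-exposed float),
    // apply one unpredictable change to a high-level feature of the room.
    if (DistanceWalked > ChangeThreshold)
    {
        DistanceWalked = 0.f;
        TriggerRandomObjectChange(); // hypothetical helper: swaps size/colour/shape
    }
}
```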

Milestone 3 (19.10)

  • Adding more randomness via motion controller movement inputs (see the sketch below).
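A possible sketch, again with hypothetical names: fold the per-frame displacement of a motion controller into the random perturbation applied to the objects, so that waving the controllers makes the world more volatile.

```cpp
#include "MotionControllerComponent.h"

void AVolatileRoom::AccumulateControllerMotion()
{
    if (!LeftController) // UMotionControllerComponent* member, set up on the pawn
    {
        return;
    }

    const FVector Location = LeftController->GetComponentLocation();
    const float Moved = FVector::Dist(Location, LastControllerLocation); // FVector member
    LastControllerLocation = Location;

    // Scale a random jitter by how much the hand moved this frame; RandomStream is an
    // FRandomStream member, JitterGain a tweakable float and ApplyJitterToObjects a
    // hypothetical helper.
    const float Jitter = RandomStream.FRandRange(-1.f, 1.f) * Moved * JitterGain;
    ApplyJitterToObjects(Jitter);
}
```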

Milestone 4 (02.11)

  • Adding data logging functionality (see the sketch below).
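Logging could be as simple as appending one CSV row of head pose per frame to the project's Saved directory; the file name and column layout below are illustrative only.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Camera/PlayerCameraManager.h"
#include "Misc/FileHelper.h"
#include "Misc/Paths.h"
#include "HAL/FileManager.h"

void LogHeadPose(const UObject* WorldContext)
{
    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(WorldContext, 0);
    if (!Camera)
    {
        return;
    }

    // One row per call: timestamp, head position (cm) and head orientation (degrees).
    const FVector Loc = Camera->GetCameraLocation();
    const FRotator Rot = Camera->GetCameraRotation();
    const FString Row = FString::Printf(TEXT("%f,%f,%f,%f,%f,%f,%f\n"),
        FPlatformTime::Seconds(), Loc.X, Loc.Y, Loc.Z, Rot.Pitch, Rot.Yaw, Rot.Roll);

    // Append to <Project>/Saved/HeadPoseLog.csv.
    const FString Path = FPaths::ProjectSavedDir() / TEXT("HeadPoseLog.csv");
    FFileHelper::SaveStringToFile(Row, *Path,
        FFileHelper::EEncodingOptions::AutoDetect, &IFileManager::Get(), FILEWRITE_Append);
}
```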

Milestone 5 (16.11)

  • Working on the visual quality of the environment

Milestone 6 (28.11)

  • Adding a UI for in-game options, cleaning up the project