Arvutigraafika projekt (MTAT.03.328), Arvutiteaduse instituut, Tartu Ülikool

Arvutigraafika projekt 2025/26 kevad


Gesture-Driven Audiovisual System

Author: Zofia Mizgalewicz

This project develops an interactive audiovisual system controlled by hand gestures captured by a webcam. I use TouchDesigner to process the live camera feed, detect hand blobs, and extract control signals: position, movement, and possibly size. These signals are mapped to the parameters of a visual system, letting the user shape the visual environment through gestures.

The goal is an interface where body movement directly shapes what appears on screen — demonstrating real-time computer vision, signal processing, and interactive graphics.

(Links will be added on completion.)


Milestone 1 (09.03) — TD on Linux + Working Camera Input

  • TouchDesigner installed and running stably on Linux via Wine/Bottles
  • Webcam recognized and accessible as a video input inside TD
  • Basic threshold -> crop -> blob track pipeline tested

Getting TouchDesigner to run on Linux through Wine is not easy — device passthrough, library compatibility, and rendering backend all needed to be resolved before any actual project work could begin.

Environment setup: TouchDesigner has no native Linux build. To run it on Linux, I used Bottles, a GUI tool that manages isolated Wine environments. Wine is a compatibility layer that lets Windows applications run on Linux without a virtual machine.

The camera problem: The default Wine runner did not pass the webcam through to TouchDesigner; the device was simply invisible inside TD. Switching the runner to Soda (soda-9.0-1) solved the camera passthrough.

Graphics translation: DXVK (dxvk-2.7.1) translates Direct3D 8/9/10/11 calls to Vulkan, and VKD3D-Proton handles Direct3D 12. These components allow TouchDesigner's renderer to work on Linux hardware.

Result: I built the initial pipeline: threshold -> crop -> blobtrack. The blob tracker detects the hand and produces output, which gives me a head start on Milestone 2.
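The threshold -> crop -> blobtrack chain corresponds to three simple image operations. As a standalone illustration of the same steps outside TouchDesigner, here is a minimal pure-Python sketch on a toy grayscale frame; the frame values and crop region are invented for the example:

```python
def threshold(frame, level=0.5):
    """Binarize a grayscale frame: pixels brighter than `level` become 1."""
    return [[1 if v > level else 0 for v in row] for row in frame]

def crop(mask, x0, y0, x1, y1):
    """Keep only the gesture zone, like the Crop TOP."""
    return [row[x0:x1] for row in mask[y0:y1]]

def blob_centroid(mask):
    """Centroid (x, y) of the foreground pixels, or None if the mask is empty."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

# Toy 8x8 frame with a bright 2x2 patch standing in for the hand.
frame = [[0.0] * 8 for _ in range(8)]
for y in (2, 3):
    for x in (5, 6):
        frame[y][x] = 0.9

mask = threshold(frame)
zone = crop(mask, 4, 0, 8, 8)   # focus on the right half of the frame
print(blob_centroid(zone))      # -> (1.5, 2.5) within the cropped zone
```

The real TOPs do the same work on the GPU; this sketch only shows the data flow that the network implements.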


Milestone 2 (23.03) — Stable Blob Detection Pipeline

  • Reliable hand isolation under varying lighting conditions
  • Tuned threshold / background subtraction for clean binary mask
  • Crop region configured to focus on gesture zone
  • BlobTrack2 consistently tracking the hand blob without false positives
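Background subtraction, one way to get the clean binary mask listed above, can be prototyped outside TD. A minimal sketch, where the background frame, pixel values, and `noise_floor` threshold are all chosen for illustration and would need tuning against real lighting:

```python
def subtract_background(frame, background, noise_floor=0.08):
    """Foreground mask: pixels that differ from a captured background
    frame by more than `noise_floor` (tolerates small lighting flicker)."""
    return [[1 if abs(v - b) > noise_floor else 0
             for v, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Captured "empty scene" vs. a frame where a hand entered on the left.
background = [[0.20, 0.20, 0.20],
              [0.20, 0.20, 0.20]]
frame      = [[0.85, 0.22, 0.19],
              [0.80, 0.21, 0.20]]

print(subtract_background(frame, background))
# -> [[1, 0, 0], [1, 0, 0]]
```

In TD the same idea would be a cached background TOP subtracted from the live feed before the threshold; the sketch only shows why the noise floor keeps flicker out of the mask.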



Milestone 3 (06.04) — Control Signal Extraction

  • X/Y centroid position exported as normalized CHOP channels
  • Blob area mapped to a size/distance control signal
  • Velocity / movement speed derived and smoothed
  • All signals visible in a tidy parameter panel, ready for mapping
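The planned normalization and velocity smoothing could look like the following sketch; the exponential-smoothing factor `alpha` is an assumed starting value, not a tuned one:

```python
def normalize(x_px, y_px, width, height):
    """Pixel coordinates -> 0..1, like normalized CHOP channels."""
    return x_px / width, y_px / height

class SmoothedVelocity:
    """Frame-to-frame speed with exponential smoothing to tame tracker jitter."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # smoothing factor: higher = more responsive
        self.prev = None
        self.speed = 0.0

    def update(self, x, y):
        """Feed the current centroid; returns the smoothed speed."""
        if self.prev is not None:
            dx, dy = x - self.prev[0], y - self.prev[1]
            raw = (dx * dx + dy * dy) ** 0.5
            self.speed += self.alpha * (raw - self.speed)
        self.prev = (x, y)
        return self.speed

# Feed the tracker's centroid each frame; read back a smooth speed signal.
sv = SmoothedVelocity(alpha=0.3)
for x_px, y_px in [(320, 240), (330, 240), (345, 238)]:
    x, y = normalize(x_px, y_px, 640, 480)
    speed = sv.update(x, y)
```

In TD this would likely be a Math CHOP (range remap) plus Slope and Filter CHOPs; the sketch makes the arithmetic behind those nodes explicit.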



Milestone 4 (20.04) — Generative Visual System (First Version)

  • At least one generative visual system (particles, geometry) responding live to hand gestures
  • Horizontal hand movement controls one visual parameter (e.g. color, spread)
  • Vertical movement controls another (e.g. speed, intensity)
  • Distance/size controls a third (e.g. scale, density)
  • The exact features for control will be defined later
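The mappings above boil down to linear remaps from a normalized signal range to a parameter range. A sketch with hypothetical targets (hue, particle speed, scale); all ranges are placeholders to be tuned once the features are chosen:

```python
def remap(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap `value` from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def gesture_to_params(x_norm, y_norm, area_norm):
    """Hypothetical mapping: x -> hue, y -> particle speed, blob area -> scale."""
    return {
        "hue":   remap(x_norm,    0.0,  1.0, 0.0, 360.0),
        "speed": remap(y_norm,    0.0,  1.0, 0.2, 3.0),
        "scale": remap(area_norm, 0.05, 0.5, 0.5, 2.0),
    }

params = gesture_to_params(0.5, 1.0, 0.275)
```

Clamping matters here: blob area in particular can spike when tracking glitches, and an unclamped remap would pass that spike straight into the visuals.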



Milestone 5 (04.05) — Polish and Expressiveness

  • Multiple distinct gesture mappings with clear visual feedback
  • Aesthetic coherence: the visuals feel intentional and expressive
  • Optional: basic audio reactivity or sound synthesis tied to gestures
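For the optional audio reactivity, one simple starting point is mapping a control signal to an oscillator frequency. A sketch with assumed frequency bounds (110-880 Hz) and an exponential pitch mapping, chosen only for illustration:

```python
import math

def gesture_pitch(y_norm, lo=110.0, hi=880.0):
    """Map normalized vertical hand position (0..1) to a frequency in Hz.
    Exponential interpolation, so equal movements give equal pitch steps."""
    return lo * (hi / lo) ** y_norm

def sine_block(freq, n=64, sr=44100.0):
    """Generate n samples of a sine wave at `freq` Hz, sample rate `sr`."""
    return [math.sin(2.0 * math.pi * freq * i / sr) for i in range(n)]

# Hand halfway up the frame -> a pitch geometrically between 110 and 880 Hz.
freq = gesture_pitch(0.5)        # ~311 Hz
samples = sine_block(freq, n=256)
```

In TD the equivalent would be driving an Audio Oscillator CHOP's frequency parameter from the hand's Y channel; the exponential curve avoids the pitch feeling cramped at the top of the range.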



Milestone 6 (18.05) — Final Presentation

  • All milestone goals reviewed and documented
  • 1–2 minute demo video recorded and embedded
  • Repository cleaned up with a README explaining setup and usage
  • Live demonstration performed for the class

