Since 2001 I have been working in research environments – first part-time, then full-time. This page gives an overview of past projects I contributed to. My research spans HCI research support tools, computer vision for musical expression and healthcare applications, software tools for modelling and code generation, and domain-specific languages for designing and visually programming data collection and processing applications.
Adaptive Observation
From 2006 to 2011, this was my PhD project at the Eindhoven University of Technology in the department of Electrical Engineering.
The objective of my PhD project was to enable a novel back-channel from products in the field to the company – a communication channel between the user and the maker of a particular product. Why? The motivation was that makers of commercial electronic products are often uncertain about how their products are and will be used in the field. This back-channel is achieved with a flexible approach, called Adaptive Observation, which allows data to be collected flexibly in time, amount, and content. The research involved model-driven design and methodology, domain-specific languages, models at runtime, and of course remote data collection systems, especially in their connection to human-computer interaction research. Read more here.
Interactive Visual Canon Platform
The Interactive Visual Canon Platform was the first project realized in the Vision Studio of the Industrial Design department at Eindhoven University of Technology. The project was conducted from 2008 to 2010 in collaboration with Christoph Bartneck. In music, a canon is a composition employing one or more repetitions of a melody, played with a certain delay. In visual performance – well, dancing – the first dancer's movements are repeated by the following dancer with a certain delay. This looks fascinating, yet is hard to achieve and practise. Therefore, we built the Interactive Visual Canon Platform as a technical means to let a single dancer invent, practise, and perform a canon dance. Read more about it here.
ViPER
As a student research assistant in the Software Construction research group at RWTH Aachen University, I worked on the ViPER project in the team of Alexander Nyßen. The project overlapped with my Master's project in 2006; here I focused on implementing modelling tools for UML2 structure diagrams on the ViPER platform, using Eclipse tooling and frameworks.
SoFA + FaceBrowser
At the Advanced Telecommunications Research Institute International (ATR) near Kyoto, Japan, I worked with Michael Lyons on two projects based on computer vision. The first, SoFA – Sonifier of Facial Actions, is a musical instrument that lets a user play sound samples by moving parts of their face. Both the sonification and a visualization of the triggering are shown in real-time.
The second project, FaceBrowser, is a highly aggregated data visualization for offline browsing of images gathered from a possibly very long video capture. All video frames are analysed beforehand: faces are detected using computer vision, after which their relative movement is analysed. The facial movement data is then visualized on a zoomable timeline. The intended usage scenario is home monitoring of dementia patients, whose activity during the day can be easily retrieved by doctors using FaceBrowser.
REVISER + TREVIS + StateShade
As a student research assistant at the Chair of Technical Computer Science at RWTH Aachen University, I worked with Nico Hamacher on the following projects:
- REVISER – A tool for automatic guideline-based evaluation of interactive systems using an expert system
- TREVIS – A tool for analysing the usability of interactive systems, e.g. through formal evaluation using normative user models based on the GOMS theory
- StateShade – A GUI prototyping tool based on templates and state machines, for use with REVISER/TREVIS for automated analysis of GUIs