Beyond their numerous and fairly complex key combinations, 3D modeling applications (e.g., Maya, 3ds Max) share a common problem: they rely on the 2D mouse as the input device for navigating a 3D environment. The mouse limits how directly users can interact with the 3D elements of the scene. Developers of such applications have introduced many tools and concepts to facilitate 3D interaction with 2D input devices (e.g., multiple viewports, manipulation gizmos, snapping).

However, motion-tracking devices such as the Wiimote can enhance the 3D experience by letting users move 3D objects directly with the movements and gestures of their hands. The mapping between the user's real 3D environment and the virtual one then becomes more natural.
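To illustrate what such a mapping can look like, here is a minimal sketch that converts a Wiimote IR camera reading into a world-space offset. The Wiimote's IR camera reports dot positions in a 1024x768 coordinate space; the function name, the scale parameter, and the axis conventions below are illustrative assumptions, not our actual implementation.

```python
# Illustrative sketch: map a raw Wiimote IR dot position to a
# world-space (x, y, z) offset. Axis conventions and scaling are
# assumptions for demonstration purposes.

IR_WIDTH, IR_HEIGHT = 1024, 768  # resolution of the Wiimote's IR camera

def ir_to_world(ir_x, ir_y, depth, scale=10.0):
    """Map a raw IR dot position to a world-space (x, y, z) offset.

    ir_x, ir_y : raw dot coordinates from the IR camera (0..1023, 0..767)
    depth      : z value supplied by another cue (e.g., dot separation)
    scale      : world units spanned by the camera's full field of view
    """
    # Normalize to [-0.5, 0.5], with the image center as the origin.
    nx = ir_x / IR_WIDTH - 0.5
    ny = ir_y / IR_HEIGHT - 0.5
    # The camera sees a mirrored image of the sensor-bar dots, so the
    # horizontal axis is commonly flipped; y is flipped so that moving
    # the controller up maps to +y in the world.
    return (-nx * scale, -ny * scale, depth)
```

Pointing straight at the center of the camera's field of view then yields a zero x/y offset, and offsets grow linearly toward the edges of the image.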

After reading research papers and experimenting with the Wiimote libraries, we decided to implement the following features: navigation in the viewport, IR tracking, user feedback from the system, gesture recognition, collision detection, raycasting, and bookmarking with smooth transitions.
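The bookmarking feature, for example, amounts to interpolating the camera between a saved pose and the current one. A minimal sketch (assuming camera positions are plain 3-tuples; the easing curve shown is the standard smoothstep polynomial, not necessarily the one we used):

```python
def smoothstep(t):
    """Ease-in/ease-out curve: maps 0 -> 0 and 1 -> 1 with zero slope at both ends."""
    return t * t * (3.0 - 2.0 * t)

def interpolate_camera(start, end, t):
    """Blend two camera positions (3-tuples) with an eased parameter t in [0, 1].

    Clamping t keeps the camera from overshooting the bookmark if the
    animation timer runs slightly past the transition's duration.
    """
    s = smoothstep(max(0.0, min(1.0, t)))
    return tuple(a + (b - a) * s for a, b in zip(start, end))
```

Driving `t` from 0 to 1 over the transition's duration produces a camera move that starts and ends gently instead of snapping between viewpoints.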


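Raycasting for object picking reduces to intersecting a ray from the camera with the scene's objects. A minimal sketch for spheres (the simplest primitive; the function name and tuple-based vectors are illustrative, not our actual code):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the first sphere hit, or None.

    origin, direction, center are 3-tuples; direction is assumed normalized.
    Solves |origin + t*direction - center|^2 = radius^2 for the smallest t >= 0.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    # Quadratic coefficients (a = 1 because direction is normalized).
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two intersections
    return t if t >= 0 else None
```

Casting one such ray per primitive and keeping the smallest hit distance gives the object under the pointer, which is the basis for selecting and grabbing objects with the Wiimote.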
To test the effectiveness of our system, we performed a summative evaluation. We designed a very basic 3D environment (top right image) populated with primitives on which the tasks were performed. The graphic quality of the rendered scene was deliberately minimalist (basic lighting, no shadows) to match our evaluation criteria. We gave our users a task list, asked them to complete it with our system, and asked them to think aloud and give verbal feedback.

Our user study showed how much easier it was for users to have their physical hand movements mapped directly to actions on the screen. Even novice Wiimote users who had never used a virtual 3D environment were able to use the navigation tools effectively once they figured out how the buttons and device rotations worked.

