In the interest of time, we had to compromise on many of the ideal model's attributes.

Play sounds from library

As previously stated, the ideal situation would involve giving this instrument its "voice" through real-time synthesis methods. Unfortunately, these synthesis methods require software outside of VRML to receive data and turn that data into audio output. We chose, instead, to create a basic library of recorded audio files and allow the player to manipulate these files directly in the VRML world. One benefit of this approach is that a player will eventually be able to load appropriate sounds for a given performance rather than relying strictly on our presets.
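A minimal sketch of how a library sound could be attached in a VRML97 world (the file name and node names here are hypothetical, not our actual scene graph). Because AudioClip's url is an exposed field, it can be rewritten at run time, for example by a Script node, which is what would let a player load their own sounds in place of a preset:

```
DEF LIBRARY_CLIP AudioClip {
  url "sounds/pluck.wav"   # hypothetical preset from the sound library
  loop FALSE
}
Sound {
  source USE LIBRARY_CLIP
  minFront 10              # region of full intensity
  maxFront 100             # attenuation falls off to silence here
}
```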

Mouse for input

Because dataglove input and CAVE viewing were not possible, we settled for non-immersive monoscopic viewing on a monitor with standard mouse input. This limits the user to one action at a time and restricts tactile feedback to what a mouse normally provides.
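In VRML97, mouse interaction of this kind is typically handled with a TouchSensor, whose touchTime event can be routed to a sound's start time. A sketch under that assumption (geometry and file name are placeholders):

```
DEF PAD Transform {
  children [
    DEF TOUCH TouchSensor { }          # fires touchTime on mouse click
    Shape { geometry Box { } }         # placeholder clickable geometry
  ]
}
Sound {
  source DEF CLIP AudioClip {
    url "sounds/hit.wav"               # hypothetical library sound
  }
}
# clicking the pad starts playback of the clip
ROUTE TOUCH.touchTime TO CLIP.set_startTime
```

One sensor event per click is exactly the "one action at a time" restriction described above: the mouse delivers a single pick ray, so only one TouchSensor can fire per interaction.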

No observation space

Even though our instrument incorporates the audience area, actual viewing by multiple people was not possible in the given time frame. Multiple viewpoints, then, are achieved only by the player, who can navigate around the instrument. Audience members are forced to watch the same monitor as the player.
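The player's multiple vantage points can be offered through predefined Viewpoint nodes, which VRML browsers expose in a viewpoint list the user can cycle through. A sketch with assumed positions and labels:

```
Viewpoint {
  position    0 2 10
  description "Player position"       # default view at the instrument
}
Viewpoint {
  position    0 4 30
  description "Audience area"         # looking back from the seats
}
```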