Contributors: Aileen Chua | Eric Lee
Former Contributors: Greg Bray | Khalid Goudeaux | Sooksan Panichpapiboon | Sy-Bor Wang
Goal: To provide a system that reproduces an actual conference setting, so that people can communicate from anywhere at any time and still feel that they are in the same environment, without noticing the limitations of the physical network.
Motivation: When multiple participants are involved in video conferencing, current software places each participant in a separate window. This arrangement makes it difficult for the participants to tell who is talking to whom, and there is no sense of immersion. By enhancing the capabilities of the network and adding intelligence to the end terminal, the quality of audio-visual communication can be improved. For this reason, we have been implementing a prototype that uses human-like characters, known as avatars, to represent the users vividly and to place them together in the same immersive environment.
System Architecture: NetICE consists of two components (a minimal relay sketch follows this list):
1. NetICE Server
The NetICE Server is a Multipoint Control Unit (MCU) responsible for maintaining the state of the system and distributing that information to the NetICE Clients.
2. NetICE Client
The NetICE Client renders the 3D visual and 3D audio environment on the user's terminal to provide an immersive environment.
Visual Features:
1. Virtual Conference Room
A virtual conference room with a door and windows provides an immersive environment. A table can also be added to this room.
2. Stereoscopic Display
The user can navigate in the virtual conference room, and the viewpoint changes according to his/her position, providing a 3D visual experience. The user can also turn his/her head or zoom in/out by changing the position of the projection plane (see the camera sketch after this list).
3. Avatar with 3D Body Model
An avatar is a human-like character used to represent each user in the virtual conference room. The avatar is animated to simulate vivid human behaviors.
4. Synthetic/Realistic Head Model
The user can select a synthetic head model or a realistic head model to represent himself/herself. In the current version of NetICE, facial expressions and lip synchronization are supported for the synthetic head model. The realistic head model, on the other hand, is not animated, but it gives other users a better visual experience by letting them treat the avatar as a real person.
5.
Director Camera
the
user can select the viewpoint between 2 different positions: the avatar's own
viewpoint, or the
viewpoint of a camera above the
avatar's head. This feature further facilitates navigation.
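To make items 2 and 5 concrete, the sketch below computes a camera basis for either viewpoint with a standard look-at construction: the avatar's own eyes for first-person navigation, or a director camera above (and, to avoid a degenerate straight-down view, slightly behind) the head. The eye height and camera offsets are illustrative assumptions, not values from NetICE.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Camera basis (right, up, forward) for a viewer at `eye` looking at `target`."""
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))
    right = normalize(cross(forward, up))
    return right, cross(right, forward), forward

def camera_for(avatar_pos, facing, director=False):
    """`facing` is the avatar's unit forward direction on the ground plane (x, z)."""
    x, y, z = avatar_pos
    if director:
        # director camera: above and slightly behind the avatar's head
        eye = (x - 2.0 * facing[0], y + 3.0, z - 2.0 * facing[1])
        return look_at(eye, avatar_pos)
    # avatar's own viewpoint: eyes at head height, looking along `facing`
    eye = (x, y + 1.7, z)
    return look_at(eye, (x + facing[0], y + 1.7, z + facing[1]))
```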
Audio Features:
1. Real-Time Audio
The user can use a microphone to transmit his/her own voice through the network to the other clients. This enables real-time conversation.
2. Audio Mixing
When multiple users are speaking, their voices are mixed and then rendered at the listening client to give the user a sense of immersion.
3. Stereoscopic and Directional Sound (3D Sound)
A head-related transfer function (HRTF) is used to provide stereoscopic and directional sound. The volume is louder when the talking person is closer to the listener, and vice versa. In addition, the listener can perceive the direction of the talking person: if the speaker is on the left-hand side, the sound is steered toward the left channel (see the mixing sketch after this list).
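A true HRTF implementation convolves each voice with measured left/right ear filters, which is beyond a short sketch. The simplified version below captures the two behaviors described in items 2 and 3: summing all talkers into one stream, attenuating by distance, and steering left/right with a constant-power pan. All constants and conventions are assumptions.

```python
import math

def render_frame(sources, listener_pos, facing):
    """sources: list of (samples, (x, z)) per talker; equal-length mono frames.
    listener_pos: (x, z); facing: unit direction the listener faces.
    Returns one mixed stereo frame as (left, right) sample lists."""
    n = len(sources[0][0])
    left, right = [0.0] * n, [0.0] * n
    right_dir = (facing[1], -facing[0])      # listener's right (one convention)
    for samples, (sx, sz) in sources:
        dx, dz = sx - listener_pos[0], sz - listener_pos[1]
        dist = math.hypot(dx, dz)
        gain = 1.0 / max(dist, 1.0)          # closer talker -> louder (item 3)
        ux, uz = dx / max(dist, 1e-6), dz / max(dist, 1e-6)
        pan = ux * right_dir[0] + uz * right_dir[1]   # -1 = far left, +1 = far right
        theta = (max(-1.0, min(1.0, pan)) + 1.0) * math.pi / 4.0
        gl, gr = gain * math.cos(theta), gain * math.sin(theta)  # constant-power pan
        for i, x in enumerate(samples):      # mixing = summing scaled sources (item 2)
            left[i] += gl * x
            right[i] += gr * x
    return left, right
```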
Avatar Animation Features:
1. Facial Expressions
The user is able to control the avatar's six elementary facial expressions: joy, anger, surprise, sadness, fear, and disgust. These facial expressions are MPEG-4 compliant (a blend-shape sketch follows this list).
2. Hand Gesture
The user can raise the avatar's hand before speaking in order to get the attention of other users.
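The page states the expressions are MPEG-4 compliant but does not show how they are applied to the head mesh. One common technique (not confirmed to be NetICE's) is morph targets: each expression is stored as per-vertex displacements from the neutral face and scaled by a user-controlled weight.

```python
# Hypothetical blend-shape sketch: `neutral` is the neutral face's vertex list,
# `targets` maps each expression name to per-vertex (dx, dy, dz) displacements,
# and `weights` holds the user's 0..1 intensity for each expression.
EXPRESSIONS = ("joy", "anger", "surprise", "sadness", "fear", "disgust")

def blend_face(neutral, targets, weights):
    out = [list(v) for v in neutral]
    for name in EXPRESSIONS:
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue
        for i, (dx, dy, dz) in enumerate(targets[name]):
            out[i][0] += w * dx      # push each vertex toward the expression pose
            out[i][1] += w * dy
            out[i][2] += w * dz
    return [tuple(v) for v in out]
```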
Audio-Driven Animation Features:
1. Lip-Synchronized Animated Faces with Audio
The face is animated with lip movements synchronized to the audio.
2. Hand Gesture with Speech
The hand movements can be driven by the energy of the user's speech to enhance the speech communication (see the sketch after this list).
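A plausible reading of item 2 (and of the lip synchronization in item 1) is that the short-time energy of the microphone signal drives the animation amplitude. The sketch below maps frame RMS energy to a mouth-opening value and a hand-swing angle; the normalization constant and output ranges are invented for illustration.

```python
import math

def frame_energy(samples):
    """Root-mean-square energy of one audio frame (e.g., 20 ms of samples)."""
    return math.sqrt(sum(x * x for x in samples) / max(len(samples), 1))

def animate_from_speech(samples, max_mouth=1.0, max_hand_deg=25.0):
    e = frame_energy(samples)
    level = min(e / 0.3, 1.0)           # normalize; 0.3 is an assumed "loud" RMS
    mouth_open = max_mouth * level      # louder speech -> wider mouth opening
    hand_angle = max_hand_deg * level   # louder speech -> larger hand movement
    return mouth_open, hand_angle
```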
Collaboration Features:
1. Virtual Shared Whiteboard
Users can draw simultaneously on the shared whiteboard in the virtual environment. A user can also select a JPEG image to be displayed on the whiteboard, and the same image will be shown on the other users' whiteboards.
2. Collaboration between Physical and Virtual Whiteboard
It
is often difficult to draw or write using a mouse. Users may prefer working
directly on a physical
whiteboard. NetICE allows users to work naturally on a physical whiteboard with
a wireless pen
such as MIMIO. On one side the user can draw and write easily on a physical
whiteboard and on
the other side users who do not have physical whiteboard can still
collaborate using the virtual
whiteboard.
(Pictured: a physical whiteboard with a wireless pen, and the shared virtual whiteboard.)
3. Sharing of 3D Objects
The user can drag-and-drop a 3D object, specified by a VRML file, onto the NetICE Client window. The NetICE Client sends this 3D object to the NetICE Server, which then distributes it to the other NetICE Clients (see the message sketch after this list).
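Tying this back to the relay sketch in the System Architecture section: each collaboration event, whether a whiteboard stroke or a dropped VRML object, can be packed into a small message, sent to the NetICE Server, and replayed by every other client. The length-prefixed JSON layout below is purely illustrative; the real NetICE wire format is not documented on this page.

```python
import json

def encode_stroke(points, color):
    """Pack one whiteboard stroke as a length-prefixed JSON message."""
    body = json.dumps({"type": "stroke", "color": color, "points": points}).encode()
    return len(body).to_bytes(4, "big") + body

def encode_vrml_object(name, vrml_text):
    """Pack a dropped 3D object; the server relays it to all other clients."""
    body = json.dumps({"type": "vrml", "name": name, "data": vrml_text}).encode()
    return len(body).to_bytes(4, "big") + body

# Example use with a connected socket `sock`:
# sock.sendall(encode_stroke([(0.10, 0.20), (0.15, 0.25)], "#ff0000"))
```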
Watch our NetICE research video online or download the following movie files in QuickTime format:
netice.mov [high-resolution] (18.1MB)
neticeweb.mov [low-resolution] (3.4MB)
One of the applications for NetICE is an online auction. Watch our NetICE auction video by downloading the following movie files in QuickTime format:
auction_demo_high_res.mov [high-resolution] (37.7MB)
auction_demo_low_res.mov [low-resolution] (14.1MB)
We are working on a new version of NetICE with more realistic avatars and more functionality. Watch the preview of NetICE version 6:
NetICEv6_high_res.mov [high-resolution] (28.2MB)
NetICEv6_low_res.mov [low-resolution] (9.0MB)
Come and navigate in our virtual room. When other users are present in the system, you can talk with them and experience the 3D audio. Download our NetICE client version 5.0 now.
For instructions on running the demo, please see the FAQ.
Future Work:
Use a camera to track the user's eyes in order to determine his/her head orientation
Use image-based rendering to render a realistic background
Make the avatar dance according to the beat of the music
Use gesture recognition to make the avatar reproduce the user's gestures
Manipulate 3D objects inside the virtual environment
Our work is used by:
Dept. of Computer Science, University of Pittsburgh http://www.cs.pitt.edu/
Any suggestions or comments are welcome. Please send them to Howard Leung.