We are developing a prototype tele-immersive system for collaborative visual data analysis that consists of three major components. The backbone of the KeckCAVES prototype is a distributed system representing a shared 3D virtual world, implemented as a set of application-level client/server protocols layered over standard Internet protocols such as TCP or RTP. This backbone provides session management, i.e., the ability for users to log into and out of the system dynamically, ensures that all participants see a consistent view of the shared world, and provides basic services for data exchange among participants. Layered over this backbone are application-specific protocols implementing visualization of, and interaction with, particular types of scientific data, e.g., high-density 3D point clouds from LiDAR or 3D gridded data from 3D imaging or numerical simulation. Finally, the system embeds the participants themselves into the shared virtual world through several complementary means: 3D spatialized audio transmission, real-time 2D video, or, ideally, real-time 3D video.
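To make the backbone's role concrete, here is a minimal sketch of a session-management layer of the kind described above. The wire format (a length-prefixed JSON frame) and all class and message names are illustrative assumptions, not the actual KeckCAVES protocol; the point is how a server can admit and remove participants dynamically while keeping every client's view of the shared world consistent.

```python
import json
import struct

# Hypothetical message framing: 4-byte big-endian length prefix + JSON body.
# The real system's wire format is not specified here.
def pack_message(msg: dict) -> bytes:
    body = json.dumps(msg).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def unpack_message(data: bytes) -> dict:
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))

class SessionServer:
    """Tracks logged-in participants and replays world state to newcomers."""

    def __init__(self):
        self.participants = {}  # client id -> display name
        self.world_state = []   # ordered, authoritative log of state updates

    def login(self, client_id: str, name: str) -> list:
        # Register the participant and hand back the full update log so the
        # new client converges to the same shared world as everyone else.
        self.participants[client_id] = name
        return list(self.world_state)

    def logout(self, client_id: str) -> None:
        self.participants.pop(client_id, None)

    def broadcast(self, update: dict) -> list:
        # Append to the authoritative log and return the recipient list;
        # a real server would push the framed message to each client socket.
        self.world_state.append(update)
        return list(self.participants)
```

The application-specific visualization protocols would then ride on top of `broadcast`, with each data type defining its own update payloads.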
Two important design criteria for tele-immersive systems are portability and scalability. Client installations will range from high-end visualization environments like CAVEs (Cave Automatic Virtual Environments) to low-cost systems based on 3D-enabled televisions or computer screens, or even standard 2D desktop systems or laptops. Although the latter cannot technically be called “immersive,” our experiments have shown that they can still participate effectively in tele-immersion. Users will also have varying degrees of Internet connectivity, from high-speed Internet2 to consumer-level Internet service providers, or even public wireless access points. Field installations will be at the low end both in terms of performance and network bandwidth.
Our infrastructure prototype is currently optimized for high-bandwidth wide-area networks. We intend to improve the performance and robustness of the system for low-bandwidth clients by implementing better data compression methods and transmission protocols. We also propose to develop application-specific visualization and interaction protocols for the data types used by our group, including LiDAR point clouds, 3D volumes, and reconstructed 3D surfaces and articulated skeletal models.
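As one illustration of the kind of data compression that would help low-bandwidth clients, the sketch below quantizes each LiDAR point coordinate to 16 bits within the cloud's bounding box, halving a raw float32 XYZ stream before any entropy coding. The function names, header layout, and bit depth are illustrative assumptions, not the methods our prototype actually implements.

```python
import struct

def compress_points(points, bits=16):
    """Lossily pack (x, y, z) float triples: a 24-byte bounding-box header
    followed by three unsigned 16-bit quantized coordinates per point."""
    lo = [min(p[i] for p in points) for i in range(3)]
    hi = [max(p[i] for p in points) for i in range(3)]
    # Map each axis onto [0, 2^bits - 1]; a degenerate axis maps to 0.
    scale = [(2**bits - 1) / (h - l) if h > l else 0.0
             for l, h in zip(lo, hi)]
    header = struct.pack("<6f", *lo, *hi)
    payload = b"".join(
        struct.pack("<3H", *(round((p[i] - lo[i]) * scale[i])
                             for i in range(3)))
        for p in points)
    return header + payload

def decompress_points(blob, bits=16):
    """Inverse of compress_points, up to quantization error."""
    lo = struct.unpack("<3f", blob[:12])
    hi = struct.unpack("<3f", blob[12:24])
    step = [(h - l) / (2**bits - 1) for l, h in zip(lo, hi)]
    points = []
    for off in range(24, len(blob), 6):
        q = struct.unpack("<3H", blob[off:off + 6])
        points.append(tuple(lo[i] + q[i] * step[i] for i in range(3)))
    return points
```

The per-axis quantization error is bounded by the bounding-box extent divided by 2^16 - 1, which is typically well below sensor noise for terrestrial LiDAR scans; a real transmission protocol would add delta coding and a general-purpose compressor on top.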
Here is an example of the system with 3D video capture of Oliver and Dawn: