
Virtual Reality

Virtual Tour

Omnidirectional cameras can capture the appearance of the entire surrounding field of view in real time. By cropping a view from the omnidirectional image and correcting its distortion through real-time image processing, we can interactively change the field of view with the user's head motion and present it on a head-mounted display.
Free-viewpoint presentation is common in computer graphics, but this system provides free-viewpoint video from real footage, whether live or recorded. Because real images offer greater realism than CG, the system is expected to produce a more immersive experience when used for simulation and safety training.
One application of this study is remote operation (tele-operation) of robots. The system significantly improved work efficiency compared with a normal camera presentation, since the wide field of view helps the operator understand the remote environment.
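The view-cropping step above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: it assumes the omnidirectional video has been unwrapped to an equirectangular panorama, and it uses nearest-neighbour sampling to stay dependency-free. The function name and parameters are illustrative.

```python
import numpy as np

def perspective_from_equirect(pano, yaw, pitch, fov_deg, out_w, out_h):
    """Sample a pinhole-perspective view from an equirectangular
    panorama (H x W x 3). yaw/pitch (radians) select the view
    direction, as a head tracker would."""
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(0.5 * np.radians(fov_deg))   # focal length in pixels
    # Pixel grid of the virtual camera, centred on the optical axis.
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    d = np.stack([x, y, np.full_like(x, f, dtype=float)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)        # unit ray directions
    # Rotate rays by head pose: pitch about x, then yaw about y.
    cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = d @ (Ry @ Rx).T
    # Ray direction -> longitude/latitude -> panorama pixel.
    lon = np.arctan2(d[..., 0], d[..., 2])                # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))            # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[v, u]
```

Re-running this per video frame with the latest head pose is what makes the presented view follow head motion interactively.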


International Conferences

  • Hajime Nagahara, Yasushi Yagi, Masahiko Yachida
    Super Wide Viewing for Tele-operation
    Proc. WSEAS Int. Conf. Instrumentation, Measurement, Control, Circuits and Systems, 2004.04
    BibTeX
  • Hajime Nagahara, Yasushi Yagi, Masahiko Yachida
    Super Wide View Tele-operation System
    Proc. IEEE Int. Conf. Multisensor Fusion and Integration for Intelligent Systems, pp.149-154, 2003.07
    BibTeX

Wide Field of View Catadioptrical Head-Mounted Display

Head-mounted displays (HMDs) are widely used as display devices in virtual reality, mixed reality, and telepresence applications. Most commercially available HMDs, however, have a limited field of view (less than 60 degrees), far narrower than that of human vision (about 200 degrees). Such a narrow field of view does not provide a sufficient sense of immersion, which is considered to reduce both the realism users feel and their work performance in virtual space.
We proposed an innovative catadioptric HMD based on hyperboloidal and ellipsoidal mirrors. It presents a field of view that almost covers that of human vision, including peripheral vision. We experimentally verified that this wide field of view yields a strong sense of reality and immersion. The proposed HMD also has a 60-degree binocular overlap area, which simultaneously provides a stereoscopic 3D view.
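The catadioptric idea is that display light is folded by a curved mirror before reaching the eye. As a hedged illustration only (the actual mirror parameters and layout are those of the proposed HMD, not shown here), the following sketch traces a ray off a hyperboloidal surface z^2/a^2 - (x^2+y^2)/b^2 = 1 using the mirror-reflection law and the surface gradient as the normal.

```python
import numpy as np

def reflect(d, n):
    """Mirror-reflect direction d about surface normal n (reflection law)."""
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

def hyperboloid_normal(p, a, b):
    """Gradient of F(x, y, z) = z^2/a^2 - (x^2 + y^2)/b^2 - 1 at a
    surface point p; the gradient is normal to the mirror surface."""
    x, y, z = p
    return np.array([-2 * x / b**2, -2 * y / b**2, 2 * z / a**2])

# Example: a ray hitting the mirror apex (0, 0, a) of a unit hyperboloid.
p = np.array([0.0, 0.0, 1.0])                 # on the surface for a = b = 1
d_in = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
d_out = reflect(d_in, hyperboloid_normal(p, 1.0, 1.0))
```

Tracing such reflections over the whole display surface is how the mirror geometry spreads a compact display across a wide angular field.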


Journals

  • Hajime Nagahara, Yasushi Yagi, Masahiko Yachida
    Super Wide Field of View Head Mounted Display Using Catadioptrical Optics
    Presence: Teleoperators and Virtual Environments, Vol.8, No.5, pp.588-598, 2006.10
    BibTeX
  • Hajime Nagahara, Yasushi Yagi, Masahiko Yachida
    Super Wide Viewer Using Catadioptric Optics
    ACM Transactions on Graphics, Vol.23, No.3, p.732, 2004.08
    BibTeX

Super-resolution Modeling

The three-dimensional models used in computer graphics for movies and games are usually built by hand, which is expensive. As a solution, image-based modeling, which automatically generates three-dimensional models from captured images, is being actively studied.
In this study, we explored techniques for automatically modeling man-made environments composed of planes, such as rooms and corridors inside a building, by analyzing video from an omnidirectional camera. The omnidirectional video contains the motion disparities of landmarks needed to estimate scene geometry, as well as the scene appearance needed for rendering, in the form of object textures observed from various viewpoints. By exploiting not only the geometric models but also the redundant texture information contained in the video, we established a technique for estimating super-resolution texture models with a resolution higher than that of any input frame.
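The core of exploiting redundant textures can be sketched with the simplest multi-frame super-resolution scheme, shift-and-add: each low-resolution observation of the same texture lands at a slightly different sub-pixel position, so accumulating all samples on a finer grid recovers detail no single frame contains. This is a minimal stand-in for the technique, assuming the sub-pixel shifts are already known (in practice they come from the estimated scene geometry and camera motion).

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale):
    """Minimal multi-frame super-resolution by registration and
    averaging. frames: list of (h, w) low-res images; shifts: known
    sub-pixel offset (dy, dx) of each frame, in low-res pixels;
    scale: integer magnification factor."""
    h, w = frames[0].shape
    H, W = h * scale, w * scale
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for img, (dy, dx) in zip(frames, shifts):
        # Place every low-res sample at its sub-pixel position on
        # the high-res grid (nearest high-res cell).
        ys = np.rint((np.arange(h) + dy) * scale).astype(int) % H
        xs = np.rint((np.arange(w) + dx) * scale).astype(int) % W
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    out = acc / np.maximum(cnt, 1)
    # Fill cells no frame observed with the mean of observed cells.
    out[cnt == 0] = out[cnt > 0].mean() if (cnt > 0).any() else 0
    return out
```

With two frames offset by half a pixel, the two sample sets interleave on the doubled grid, which is exactly the redundancy that makes the output sharper than any input frame.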

Journals

  • Hajime Nagahara, Yasushi Yagi, Masahiko Yachida
    SuperResolution Modeling Using an Omnidirectional Image Sensor
    IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol.33, No.4, pp.607-615, 2003.08
    BibTeX