Research

Computational Photography

Transparent Object Recognition using Light Field

Recognizing an object's category and detecting a particular object in an image are two important object recognition tasks, but previous appearance-based methods cannot handle transparent objects, since the appearance of a transparent object changes dramatically when the background varies. Our proposed methods overcome this problem using novel features extracted from a light-field image. We propose a light field distortion (LFD) feature, which is background-invariant, for transparent object recognition. We also propose light field linearity (LF-linearity) to measure the likelihood that a point belongs to a transparent object, and design an occlusion detector to locate occlusion boundaries in the light-field image.
Transparent object categorization is performed by incorporating the LFD feature into a bag-of-features approach. Transparent object segmentation is formulated as a pixel labeling problem: an energy function is defined, with the regional term derived from LF-linearity and the boundary term from the occlusion detector output, and the labeling is optimized with a graph-cut algorithm. Light field datasets (available by request) were acquired for transparent object categorization and segmentation. The results demonstrate that the proposed methods successfully categorize and segment transparent objects from a light-field image.
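
As a rough illustration of the energy minimization above, the toy sketch below combines a regional term from LF-linearity with a boundary term from occlusion responses on a 3x3 image. All arrays and weights are invented for illustration, and the labeling is found by exhaustive search rather than the graph cuts used in the actual method:

```python
import itertools
import numpy as np

# Made-up stand-ins for the two cues on a 3x3 image:
# lf_linearity[p] is low where the rays through p fit a line poorly
# (i.e. p likely lies on a transparent object); occlusion[p] is high
# on detected occlusion boundaries.
lf_linearity = np.array([[0.9, 0.8, 0.9],
                         [0.7, 0.1, 0.2],
                         [0.8, 0.2, 0.1]])
occlusion = np.array([[0.1, 0.1, 0.1],
                      [0.1, 0.9, 0.9],
                      [0.9, 0.9, 0.1]])

def energy(labels, lam=0.5):
    """E(L) = sum_p D_p(l_p) + lam * sum_{(p,q)} V_pq(l_p, l_q)."""
    # Regional term: the transparent label (1) is cheap where
    # LF-linearity is low, the background label (0) where it is high.
    data = sum(lf_linearity[p] if labels[p] == 1 else 1.0 - lf_linearity[p]
               for p in np.ndindex(labels.shape))
    # Boundary term: label discontinuities are cheap across pixels the
    # occlusion detector fires on, expensive elsewhere.
    smooth = 0.0
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            for q in ((y, x + 1), (y + 1, x)):
                if q[0] < h and q[1] < w and labels[y, x] != labels[q]:
                    smooth += 1.0 - max(occlusion[y, x], occlusion[q])
    return data + lam * smooth

# Exhaustively minimize over all 2^9 labelings of the toy image
# (TransCut uses graph cuts instead; this only illustrates the energy).
best = min((np.array(bits).reshape(3, 3)
            for bits in itertools.product((0, 1), repeat=9)), key=energy)
```

Here the low-linearity, occlusion-bounded region in the lower right is labeled transparent, and the occlusion term makes the label discontinuity cheap exactly along the detected boundary.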

Journals (Peer-reviewed)

  1. Yichao Xu, Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-ichiro Taniguchi
    Light Field Distortion Feature for Transparent Object Classification
    Computer Vision and Image Understanding (CVIU), Vol.139, pp.122-135, 2015.09
    DOI: 10.1016/j.cviu.2015.02.009
    BibTeX, ScienceDirect

International Conferences (Peer-reviewed)

  1. Yichao Xu, Hajime Nagahara, Atsushi Shimada, Rin-ichiro Taniguchi
    TransCut: Transparent Object Segmentation from a Light-Field Image
    International Conference on Computer Vision (ICCV), 2015.12
    BibTeX, arXiv pre-print
  2. Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-ichiro Taniguchi
    Light Field Distortion Feature for Transparent Object Recognition
    IEEE Conference on Computer Vision and Pattern Recognition, pp.2786-2793, 2013.06
    BibTeX

Coded Aperture Camera

It is widely known that the camera aperture adjusts the amount of light and shapes the point spread function (PSF) of the imaging system. Coded aperture imaging engineers the PSF through specially designed aperture patterns and has attracted a great deal of attention in recent years. Traditional coded aperture imaging is realized by inserting a cardboard cutout or printed photomask into the lens and therefore does not allow the pattern to be changed easily. Although various aperture patterns have been proposed for applications such as image deblurring, depth estimation from defocused images (depth from defocus), and light-field imaging, the optimal aperture depends on the shooting conditions, such as lighting and scene content, and some applications require a combination of two or more patterns. In this study, we developed a programmable aperture camera that uses liquid crystal on silicon (LCoS) to set the aperture pattern actively. This camera can adaptively capture images with apertures suited to individual scenes and shooting conditions, and it can also capture light-field images at high speed. Exploiting this programmability, we continue to explore new coded imaging techniques and applications.
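
The effect of a coded aperture can be sketched as a convolution of the scene with an aperture-shaped PSF. The pattern below is an arbitrary illustration, not one of the optimized patterns from the literature:

```python
import numpy as np

# Arbitrary illustrative aperture pattern (1 = open, 0 = blocked).
aperture = np.array([[1, 0, 1],
                     [0, 1, 0],
                     [1, 0, 1]], dtype=float)

def defocus_psf(aperture, scale):
    """For an out-of-focus point, the PSF is approximately the aperture
    shape magnified by the amount of defocus; here we nearest-neighbour
    upsample the mask and normalize it to unit sum."""
    psf = np.kron(aperture, np.ones((scale, scale)))
    return psf / psf.sum()

def capture(img, psf):
    """Circular convolution via FFT: the sensor records scene * PSF."""
    pad = np.zeros_like(img)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

scene = np.random.default_rng(0).random((64, 64))
captured = capture(scene, defocus_psf(aperture, scale=4))
```

Because the PSF is normalized, the captured image preserves the scene brightness; a programmable aperture simply makes `aperture` a per-shot choice instead of a fixed mask.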

Journals (Peer-reviewed)

  1. Hajime Nagahara, Changyin Zhou, Takuya Watanabe, Hiroshi Ishiguro, Shree K. Nayar
    Programmable Aperture Camera Using LCoS
    IPSJ Transactions on Computer Vision and Applications, Vol.4, pp.1-11, 2012.03
    BibTeX

International Conferences (Peer-reviewed)

  1. Hajime Nagahara, Changyin Zhou, Takuya Watanabe, Hiroshi Ishiguro, Shree Nayar
    Programmable Aperture Camera and Its Versatile Applications
    Proc. The 6th Joint Workshop on Machine Perception and Robotics, No.OS4-1, 2010.10
    BibTeX

Focus-sweep Camera

We developed a focus-sweep camera that engineers the point spread function (PSF) by controlling the focus of the lens. In focus-sweep imaging, the imaging device or lens moves along the optical axis during the exposure, producing a superimposed image in which blur of different sizes is integrated. Unlike coded aperture imaging, focus-sweep imaging controls the PSF with a fully open aperture and therefore produces images with a higher signal-to-noise ratio. By combining different sweep trajectories and exposure timings of the imaging device, we implemented a variety of imaging techniques and applications, including extended depth of field, depth from defocus, tilted depth of field, discontinuous depth of field, and curved depth of field. Traditional coded imaging requires a special lens or aperture; focus-sweep imaging, by contrast, controls the PSF by changing the lens focus and can therefore reuse the auto-focus mechanism that consumer cameras already have, which makes it practical and feasible. This study was conducted in collaboration with Columbia University.
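
Why a focus sweep yields a nearly depth-invariant PSF can be sketched numerically: average the instantaneous defocus PSFs over the sweep and compare the result for two scene depths. A Gaussian stands in for the true defocus PSF here, and all constants are arbitrary:

```python
import numpy as np

def gauss_psf(sigma, radius=15):
    """Gaussian stand-in for the instantaneous defocus PSF (the real
    PSF is aperture-shaped); sigma grows with the amount of defocus."""
    x = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(x, x)
    p = np.exp(-(X**2 + Y**2) / (2.0 * max(sigma, 0.3) ** 2))
    return p / p.sum()

def integrated_psf(in_focus, sweep=np.linspace(0.0, 1.0, 50)):
    """The focus position v sweeps [0, 1] during the exposure; a point
    that comes into focus at `in_focus` is blurred in proportion to
    |v - in_focus| at each instant, and the captured PSF is the
    average over the sweep."""
    acc = sum(gauss_psf(8.0 * abs(v - in_focus)) for v in sweep)
    return acc / len(sweep)

# Two scene depths that come into focus at different sweep positions:
near, far = integrated_psf(0.25), integrated_psf(0.75)
# Their integrated PSFs are essentially identical, so a single
# deconvolution kernel restores all depths (extended depth of field).
diff = float(np.abs(near - far).sum())
```

The same machinery with partial sweeps or shifted exposure windows gives the half-sweep and tilted/discontinuous depth-of-field variants mentioned above.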

Journals (Peer-reviewed)

  1. Sujit Kuthirummal, Hajime Nagahara, Changyin Zhou, Shree K. Nayar
    Flexible Depth of Field Photography
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.33, No.1, pp.58-71, 2011.01
    BibTeX

International Conferences (Peer-reviewed)

  1. Shuhei Matsui, Hajime Nagahara, Rin-ichiro Taniguchi
    Half-Sweep Imaging for Depth from Defocus
Proc. 5th Pacific Rim Symposium on Advances in Image and Video Technology, No.LNCS7087, pp.335-347, 2011.11
    BibTeX
  2. Hajime Nagahara, Sujit Kuthirummal, Changyin Zhou, Shree K. Nayar
    Flexible Depth of Field Photography
    Proc. European Conf. Computer Vision, No.LNCS 5305, pp.60-73, 2008.10
    BibTeX

Omnidirectional Camera

Because their field of view is far narrower than that of humans and animals, normal cameras are not suited to robot vision, virtual reality, or other applications that require a wide field of view.
We proposed an omnidirectional camera that captures a 360-degree field of view in real time by combining a convex mirror with a camera, and we have been prototyping various types of omnidirectional cameras. For example, a camera that uses a hyperboloidal mirror has a single viewpoint, as normal cameras do; the distorted omnidirectional image can therefore be rectified by post-processing, and traditional image processing and computer vision algorithms can be applied without modification. We also proposed a new mirror design technique that analytically determines the convex mirror surface so that the omnidirectional image has uniform resolution. As a result, we built the first omnidirectional camera that provides both a single viewpoint and uniform resolution at the same time. We continue to investigate design techniques for omnidirectional cameras, to prototype them, and to explore methods for processing omnidirectional images.
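
The single-viewpoint property is what makes simple rectification of the donut-shaped omnidirectional image possible. A minimal nearest-neighbour polar-to-panorama remap, with made-up image center and mirror radii, might look like:

```python
import numpy as np

def unwarp(omni, cx, cy, r_in, r_out, out_w=360, out_h=100):
    """Each panorama column is one azimuth angle, each row one radius
    on the mirror image.  With a single-viewpoint mirror this remap
    corresponds to an actual perspective view, so no further
    distortion correction is needed."""
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    r = np.linspace(r_in, r_out, out_h)
    R, T = np.meshgrid(r, theta, indexing="ij")      # (out_h, out_w)
    xs = np.clip(np.rint(cx + R * np.cos(T)).astype(int),
                 0, omni.shape[1] - 1)
    ys = np.clip(np.rint(cy + R * np.sin(T)).astype(int),
                 0, omni.shape[0] - 1)
    return omni[ys, xs]

# Made-up omnidirectional image, center, and mirror radii:
omni = np.random.default_rng(1).random((256, 256))
pano = unwarp(omni, cx=128.0, cy=128.0, r_in=40.0, r_out=120.0)
```

With the constant-resolution mirror, the rows of the panorama sample the scene uniformly, so no resampling artifacts concentrate near the image center.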


International Conferences (Peer-reviewed)

  1. Hajime Nagahara, Koji Yoshida, Masahiko Yachida
    An Omnidirectional Vision Sensor with Single View and Constant Resolution
    Proc. IEEE Int. Conf. Computer Vision, 2007.10
    BibTeX
  2. Koji Yoshida, Hajime Nagahara, Masahiko Yachida
    An Omnidirectional Vision Sensor with Single Viewpoint and Constant Resolution
    Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, pp.4792-4797, 2006.10
    BibTeX

Hybrid Sensor Camera

With the growing demand for higher-quality digital cameras, camera resolution has improved dramatically: even consumer digital still cameras now provide more than 10 million pixels. Digital video cameras, however, have far lower resolution than still cameras. This is because the amount of data a sensor can output equals image resolution (number of pixels) multiplied by frame rate: a higher frame rate for smooth video forces a lower resolution, and a higher resolution forces a lower frame rate. There is thus a fundamental tradeoff between resolution and frame rate, and in principle both cannot be improved at the same time.
To achieve both high resolution and a high frame rate, we proposed a hybrid sensor camera equipped with two imaging devices with different resolutions and frame rates. The camera consists of a beam splitter, an imaging device with high resolution but a low frame rate, and an imaging device with low resolution but a high frame rate. It captures two videos of the same scene: one prioritizing resolution (2128 x 1952 pixels, 3.75 fps) and the other prioritizing frame rate (532 x 488 pixels, 90 fps). From these, image processing produces a video with both high resolution and a high frame rate (2128 x 1952 pixels, 90 fps). This approach provides high-quality video at low cost and embodies a new imaging concept: compressing images while shooting them, in contrast to the recent trend of compressing video after capture.
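
The tradeoff and the fusion idea can be sketched with the numbers above. The fusion function is a deliberately naive stand-in (plain upsampling plus detail re-injection); the actual system uses motion compensation and spectral fusion:

```python
import numpy as np

# Readout bandwidth of each stream is pixels-per-frame x frame rate.
slow = 2128 * 1952 * 3.75   # high-resolution, low-frame-rate sensor
fast = 532 * 488 * 90       # low-resolution, high-frame-rate sensor
goal = 2128 * 1952 * 90     # the fused 2128 x 1952 @ 90 fps output
# goal / (slow + fast) is roughly 10: the fused video carries about an
# order of magnitude more data than the two sensors read out combined.

def downsample4(img):
    """Average 4x4 blocks (2128/532 = 1952/488 = 4)."""
    h, w = img.shape
    return img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def fuse(low_frame, key_frame):
    """Upsample the low-res frame and re-inject the high-frequency
    detail of the nearest high-res key frame.  For a static scene this
    recovers the key frame exactly; moving regions would need the
    motion compensation used in the actual system."""
    up = np.kron(low_frame, np.ones((4, 4)))
    detail = key_frame - np.kron(downsample4(key_frame), np.ones((4, 4)))
    return up + detail
```

In this sense the camera performs compression at capture time: it records about a tenth of the output data rate and lets image processing reconstruct the rest.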

Journals (Peer-reviewed)

  1. Kiyotaka Watanabe, Yoshio Iwai, Hajime Nagahara, Masahiko Yachida, Toshiya Suzuki
    Video Synthesis with High Spatio-Temporal Resolution Using Motion Compensation and Spectral Fusion
    IEICE Transactions on Information and Systems, Vol.E89-D, No.7, pp.2186-2196, 2006.07
    BibTeX