H04N2013/0088

Systems and methods for indicating a field of view

A system for determining a person's field of view with respect to captured data (e.g., recorded audiovisual data). A multi-camera capture system (e.g., a bodycam, vehicular camera, stereoscopic camera, or 360-degree capture system) records audiovisual data of an event. The field of capture of a capture system, or of captured data combined from multiple capture systems, may be greater than the field of view of a person. Facial features of the person (e.g., eyes, ears, nose, and jawline) may be detected from the captured data and used to determine the person's field of view with respect to that data. Sensors that detect head orientation may likewise be used to determine the field of view of the person with respect to the captured data. The person's field of view may be shown with respect to the captured data when the captured data is played back.
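As a minimal sketch of the playback-overlay idea: given a head-yaw estimate (derived from detected facial features or a head-orientation sensor) and a 360-degree equirectangular capture, the viewer's field of view maps to a band of pixel columns. The function name, the yaw-to-column mapping, and the 114-degree default FOV are illustrative assumptions, not details from the abstract.

```python
def fov_column_range(head_yaw_deg, image_width, fov_deg=114.0):
    """Return (start, end) pixel columns of the viewer's FOV in an
    equirectangular panorama that maps yaw -180..180 degrees onto
    columns 0..image_width. Columns wrap at the seam, so both
    endpoints are reduced modulo the image width."""
    deg_per_px = 360.0 / image_width
    center = (head_yaw_deg + 180.0) / deg_per_px
    half = (fov_deg / 2.0) / deg_per_px
    start = int(center - half) % image_width
    end = int(center + half) % image_width
    return start, end
```

A playback UI could highlight the column band `[start, end)` (wrapping at the seam when `start > end`) on top of the recorded frame.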

Three-dimensional noise reduction

Systems and methods are disclosed for image signal processing. For example, methods may include receiving a current image of a sequence of images from an image sensor; combining the current image with a recirculated image to obtain a noise reduced image, where the recirculated image is based on one or more previous images of the sequence of images from the image sensor; determining a noise map for the noise reduced image, where the noise map is determined based on estimates of noise levels for pixels in the current image, a noise map for the recirculated image, and a set of mixing weights; recirculating the noise map with the noise reduced image to combine the noise reduced image with a next image of the sequence of images from the image sensor; and storing, displaying, or transmitting an output image that is based on the noise reduced image.
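The recirculation described above can be sketched as one step of a temporal IIR filter that carries both a denoised frame and its noise map forward. The mixing rule and the variance combination below are standard assumptions for illustration, not the patent's exact formulas.

```python
import numpy as np

def temporal_nr_step(current, recirc, recirc_noise, sensor_noise, alpha=0.5):
    """One recirculation step of temporal noise reduction (sketch).

    current       -- new frame from the image sensor
    recirc        -- recirculated (previously noise-reduced) frame
    recirc_noise  -- noise map carried along with `recirc`
    sensor_noise  -- per-pixel noise-level estimate for `current`
    alpha         -- mixing weight toward the current frame
    """
    denoised = alpha * current + (1.0 - alpha) * recirc
    # Assuming independent noise, standard deviations combine with
    # squared mixing weights.
    noise_map = np.sqrt((alpha * sensor_noise) ** 2
                        + ((1.0 - alpha) * recirc_noise) ** 2)
    return denoised, noise_map
```

Both outputs are recirculated into the next call, so the noise map stays consistent with the accumulated frame it describes.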

Image processing system and image processing program
09832447 · 2017-11-28

In the present invention, the following are provided: a 3D information generating unit (3) for generating 3D information as data for a group of points formed by projecting the values of the pixels of a moving object in accordance with depth information detected from a captured image; an overlooking-image generating unit (4) for generating an overlooking (overhead) image by synthesizing the 3D information of the moving object with a space image of the image-capture target region; and a display control unit (5) for displaying the overlooking image. With this configuration, even when there are multiple capture target regions in a large-scale building with a complicated floor configuration, it is unnecessary to display multiple captured images in a split-screen display. Instead, one overlooking image is displayed in which the point-group 3D information of the moving objects is synthesized with each of the capture target regions included in the entire space of the building, so the overall state of the building can be ascertained at a glance.
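The point-group generation step amounts to back-projecting each pixel through a camera model using its depth value. A minimal sketch under a pinhole-camera assumption (the function name and intrinsics parameters are illustrative, not from the abstract):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Project every pixel of a depth map to a 3D point using a
    pinhole model with focal lengths (fx, fy) and principal point
    (cx, cy). Returns an (H*W, 3) array of [x, y, z] points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

The resulting point group, restricted to pixels of the detected moving object, is what would be synthesized into the overlooking image.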

Electronic device and method for displaying and generating panoramic image

Disclosed is a method for displaying a panoramic image by an electronic device. According to an example embodiment of the present disclosure, a method for generating a panoramic image may comprise: sensing, through a sensor included in the electronic device, a direction that a first side surface of the electronic device faces; displaying a first partial image of the panoramic image corresponding to the sensed direction; determining, if information regarding the first partial image differs from reference view information indicating a reference view for the panoramic image, a direction corresponding to the reference view information with respect to the first partial image; and providing information about the determined direction.
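The "determined direction" step reduces to finding the signed shortest rotation from the currently displayed view toward the reference view. A sketch, assuming yaw angles in degrees (the function name and sign convention are illustrative):

```python
def turn_toward_reference(current_yaw_deg, reference_yaw_deg):
    """Signed shortest rotation (degrees) from the current view
    direction to the reference view. Positive means rotate toward
    increasing yaw; the result is always in (-180, 180]."""
    return (reference_yaw_deg - current_yaw_deg + 180.0) % 360.0 - 180.0
```

The device could then render an arrow or guide indicator whose direction and magnitude follow the returned angle.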

IMAGE PROCESSING APPARATUS, IMAGE CAPTURING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20170332067 · 2017-11-16

An image processing apparatus processes a first image and a second image so as to detect a corresponding pixel in the second image which corresponds to a target pixel in the first image. The first image has a first parameter value, and the second image has a second parameter value different from the first parameter value. The first parameter value and the second parameter value are values of optical parameters of image capturing systems used to capture the first image and the second image. The image processing apparatus includes an area setter that sets a two-dimensional search area as a partial area in which the corresponding pixel is to be searched in the second image, based on a predetermined range in which each of the first and second parameter values can change, and a detector that detects the corresponding pixel by searching the two-dimensional search area.
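A corresponding-pixel search over a bounded two-dimensional area can be sketched with simple block matching, here using sum of absolute differences (SAD); the cost function, patch size, and function name are assumptions for illustration, not the patent's method.

```python
import numpy as np

def find_corresponding_pixel(img1, img2, target, search_area, patch=3):
    """Search a 2D area of img2 for the pixel whose surrounding patch
    best matches (lowest SAD) the patch around `target` in img1.

    target      -- (row, col) of the target pixel in img1
    search_area -- ((y0, y1), (x0, x1)) half-open bounds in img2,
                   e.g. derived from the range over which the optical
                   parameters can change
    """
    ty, tx = target
    r = patch // 2
    ref = img1[ty - r:ty + r + 1, tx - r:tx + r + 1].astype(float)
    (y0, y1), (x0, x1) = search_area
    best, best_cost = None, np.inf
    for y in range(y0, y1):
        for x in range(x0, x1):
            cand = img2[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best
```

Restricting `search_area` to the span implied by the parameter range is what makes the search tractable compared with scanning the whole second image.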

Estimation of object properties in 3D world

Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion, by manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object moving through the two-dimensional field of view of a video data input are determined and used to derive the object's heading direction from the camera calibration and the movement between those three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model whose projected bounding box best matches the bounding box of the image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box and rendered with extracted image features.
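Deriving a heading from successive calibrated 3D locations is essentially an `atan2` over the ground-plane displacement. A sketch, assuming a y-up world frame so the heading lies in the x-z plane (the frame convention and function name are assumptions):

```python
import math

def heading_from_positions(p_prev, p_curr):
    """Heading angle in degrees from two calibrated 3D locations
    (x, y, z) of a moving object; 0 degrees points along +x and
    angles increase toward +z. Assumes y is the up axis."""
    dx = p_curr[0] - p_prev[0]
    dz = p_curr[2] - p_prev[2]
    return math.degrees(math.atan2(dz, dx))
```

The polygonal model would then be rotated to this heading before its projected bounding box is matched and scaled to the blob.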

Multi-aperture ranging devices and methods

Embodiments of systems and methods for multi-aperture ranging are disclosed. An embodiment of a device includes a main lens configured to receive an image from its field of view; a multi-aperture optical component having optical elements optically coupled to the main lens and configured to create a multi-aperture image set that includes a plurality of subaperture images, wherein at least one point in the field of view is captured by at least two of the subaperture images; an array of sensing elements that produces signals from the image set; a readout integrated circuit (ROIC) configured to receive the signals, convert them to digital data, and output the digital data; and an image processing system, responsive to the digital data output from the ROIC, configured to generate disparity values that correspond to at least one point in common between the at least two subaperture images.
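Disparity between two subaperture images of the same point can be sketched as a one-dimensional SAD search along the aperture-displacement axis; the matching cost, search range, and function name below are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def disparity_at(img_a, img_b, y, x, max_disp=8, patch=3):
    """Estimate integer disparity at point (y, x) between two
    subaperture images whose apertures are displaced horizontally.
    A feature at column x in img_a is assumed to appear at column
    x - d in img_b; the d with the lowest SAD cost is returned."""
    r = patch // 2
    ref = img_a[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    costs = []
    for d in range(max_disp + 1):
        cand = img_b[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
        costs.append(np.abs(ref - cand).sum())
    return int(np.argmin(costs))
```

With the subaperture baseline and focal length known, such disparity values convert directly to range estimates.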

TRANSITION BETWEEN BINOCULAR AND MONOCULAR VIEWS
20170294045 · 2017-10-12

An image processing system is designed to generate a canvas view that transitions smoothly between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for the left and right views of a user. To realize a smooth transition between binocular and monocular views, the image processing system first warps the top/bottom images onto the corresponding synthetic side images to generate warped top/bottom images, which realizes the smooth transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for the left and right eye views. Based on the blended images, the image processing system creates the canvas view, which transitions smoothly between binocular and monocular views in terms of both image shape and color.
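The morphing step can be sketched as an alpha blend over a vertical transition band, fading from the monocular (warped top) image at the top of the view to the binocular (synthetic side) image below it. The linear ramp and the `band` parameter are assumptions for illustration.

```python
import numpy as np

def blend_transition(warped_top, side, band=0.3):
    """Alpha-blend a warped top image into a side image over a
    vertical transition band. Rows are normalized to [0, 1]; the
    weight on the top image is 1 at row 0 and falls linearly to 0
    at the bottom of the band, giving a seamless color transition."""
    h = side.shape[0]
    rows = np.arange(h) / max(h - 1, 1)
    w = np.clip(1.0 - rows / band, 0.0, 1.0)[:, None]
    return w * warped_top + (1.0 - w) * side
```

Because the top image was already warped onto the side image's geometry, the shapes agree and only color/intensity is being blended here.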

Electronic device and method for providing image of surroundings of vehicle

Provided are an electronic apparatus and a method for providing an image of the surroundings of a vehicle. The electronic apparatus includes a first image sensor that creates a first image by capturing the surroundings of the vehicle; a second image sensor that creates a second image by capturing the surroundings of the vehicle; and a processor configured to obtain feature information from each of the first and second images and, based on the obtained feature information, to use a portion of the first image and a portion of the second image to create a composite image representing the surroundings of the vehicle.
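One simple way to realize "use a portion of each image based on feature information" is a per-pixel selection by feature strength. This is a minimal sketch under that assumption; the selection rule and function name are illustrative, not the patent's criterion.

```python
import numpy as np

def composite_surroundings(img_a, img_b, feat_a, feat_b):
    """Compose a surround-view image by taking, per pixel, the input
    whose feature response (e.g. gradient magnitude, assumed here)
    is stronger. All four arrays share the same shape."""
    mask = feat_a >= feat_b
    return np.where(mask, img_a, img_b)
```

In practice the two sensors would cover overlapping regions, so the mask decides which sensor contributes each overlapping pixel.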