H04N2013/0074

Three-dimensional object detection device
09832444 · 2017-11-28 ·

A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit and a light source detection unit. The image conversion unit converts the viewpoint of the images obtained by the image capturing unit to create bird's-eye view images. The three-dimensional object detection unit detects the presence of a three-dimensional object within the adjacent lane, determining that the three-dimensional object is present when the difference waveform information is at a threshold value or higher. The three-dimensional object detection unit sets the threshold value lower so that the three-dimensional object is more readily detected in a rearward area than in a forward area with respect to a line connecting the light source and the image capturing unit.
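The direction-dependent threshold described above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's method: the rearward/forward split is reduced to a 2-D side-of-line test, and the base threshold, the `rear_factor`, and the sign convention for "rearward" are all assumptions chosen for the example.

```python
def side_of_line(p, light, camera):
    """Sign of the 2-D cross product: which side of the light-camera line p is on."""
    lx, ly = light
    cx, cy = camera
    px, py = p
    return (cx - lx) * (py - ly) - (cy - ly) * (px - lx)

def detection_threshold(point, light, camera, base=10.0, rear_factor=0.5):
    """Lower threshold in the rearward area so objects there are detected more readily.
    Treating a negative cross product as 'rearward' is an illustrative convention."""
    return base * rear_factor if side_of_line(point, light, camera) < 0 else base

def detect(waveform_value, point, light, camera):
    """Object present when the difference waveform reaches the local threshold."""
    return waveform_value >= detection_threshold(point, light, camera)
```

With a waveform value of 6.0, a point rearward of the line is detected (threshold lowered to 5.0) while the same value forward of the line is not (threshold 10.0).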

Food Waste Detection Method and System
20220058388 · 2022-02-24 ·

A system (1) for detecting food related products (2) before they are thrown away, the system comprising: one or more cameras (11); a display unit (12); a computing device (13) that is communicatively connected to the cameras and the display unit; and a scale (3) that is communicatively connected to the computing device, the scale holding a trash bin (31), wherein the cameras obtain an image or a video of the products when the products are within a field of view of the cameras and before the products are in the trash bin, the scale is configured to weigh the products in the trash bin, and wherein the computing device obtains information about the products from the obtained image or video by applying an image recognition algorithm, receives the weight from the scale, and generates and outputs data on the display unit, the data being based on the information about the products and the weight.
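The computing device's role, combining a recognition result with a weight reading, can be sketched as below. This is a hypothetical skeleton: `recognize` is a stand-in for the image recognition algorithm (a real system would run a trained classifier), and the weight of a discarded item is assumed to be the change in the scale reading before and after it is binned.

```python
from dataclasses import dataclass

@dataclass
class WasteRecord:
    """One logged discard event: what was thrown away and how much it weighed."""
    product: str
    weight_g: float

def recognize(image):
    """Stand-in for the image recognition algorithm; here the 'image' is a dict
    carrying a precomputed label, purely for illustration."""
    return image.get("label", "unknown")

def log_waste(image, weight_before_g, weight_after_g):
    """Combine the recognized product with the weight change measured by the scale."""
    return WasteRecord(recognize(image), weight_after_g - weight_before_g)
```

A usage example: `log_waste({"label": "bread"}, 1200.0, 1450.0)` yields a record naming the product "bread" with a weight of 250 g, which the display unit could then render.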

ACCESS PREVENTION AND CONTROL FOR SECURITY SYSTEMS
20170294063 · 2017-10-12 ·

A system is described herein that manages access to a secure housing facility. The system includes an illumination system configured to light an area surrounding the secure housing facility, and a feature capture system configured to capture features of one or more individuals within the illuminated area. The system stores information pertaining to unauthorized and authorized individuals in a database, where the information comprises facial images of the unauthorized and authorized individuals. The system uses a 3-D access module to compare the captured media to each of the facial images of the unauthorized and authorized individuals.
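The database comparison step can be illustrated with a toy matcher. This is not the patent's 3-D access module: it assumes faces are represented as embedding vectors compared by cosine similarity, and the 0.9 match threshold, the tuple layout of the database, and the "grant"/"deny" labels are all invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify_face(embedding, database, threshold=0.9):
    """Compare a captured face embedding to every stored individual.
    database: list of (name, status, embedding), status 'authorized'/'unauthorized'.
    Returns (name, decision); unmatched faces are denied."""
    name, status, ref = max(database,
                            key=lambda e: cosine_similarity(embedding, e[2]))
    if cosine_similarity(embedding, ref) < threshold:
        return ("unknown", "deny")
    return (name, "grant" if status == "authorized" else "deny")
```

Note that both lists are searched with one pass: an unauthorized individual who matches strongly is explicitly denied, rather than merely failing to match the authorized list.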

Method and apparatus for processing three-dimensional (3D) pseudoscopic images
09743063 · 2017-08-22 ·

A method for detecting three-dimensional (3D) pseudoscopic images and a display device for detecting 3D pseudoscopic images are provided. The method includes extracting corresponding feature points in a first view and corresponding feature points in a second view, wherein the first view and the second view form a current 3D image; calculating an average coordinate value of the feature points in the first view and an average coordinate value of the feature points in the second view; based on the average coordinate value of the feature points in the first view and the average coordinate value of the feature points in the second view, determining whether the current 3D image is pseudoscopic or not; and processing the current 3D image when it is determined that the current 3D image is pseudoscopic.
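The average-coordinate comparison can be sketched as follows. This is a simplified stand-in for the patent's determination step: it assumes matched feature points mostly lie behind the screen plane, so in a correctly ordered pair the first (left) view's average x-coordinate should not exceed the second (right) view's; when it does, the pair is flagged as pseudoscopic and the views are swapped. That depth assumption and the swap-based "processing" are illustrative choices.

```python
def average_x(points):
    """Mean x-coordinate of a list of (x, y) feature points."""
    return sum(x for x, _ in points) / len(points)

def is_pseudoscopic(first_view_pts, second_view_pts):
    """Flag the pair when the left view's features sit, on average, to the
    right of the right view's (assumes mostly behind-screen content)."""
    return average_x(first_view_pts) > average_x(second_view_pts)

def correct_pair(first, second):
    """Process a pseudoscopic pair by swapping the two views."""
    return (second, first) if is_pseudoscopic(first, second) else (first, second)
```

Averaging over many feature points makes the test robust to individual mismatched correspondences, which is the appeal of the average-coordinate criterion over a per-point disparity check.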

METHODS AND SYSTEMS FOR CONTENT PROCESSING

Mobile phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some aspects relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others concern user interface improvements. Other aspects relate to imaging architectures, in which a mobile phone's image sensor is one in a chain of stages that successively act on packetized instructions/data, to capture and later process imagery. Still other aspects relate to distribution of processing tasks between the mobile device and remote resources (“the cloud”). Elemental image processing (e.g., simple filtering and edge detection) can be performed on the mobile phone, while other operations can be referred out to remote service providers. The remote service providers can be selected using techniques such as reverse auctions, through which they compete for processing tasks. A great number of other features and arrangements are also detailed.

Image capturing device, image display method, and recording medium

In the related art, it was difficult to compare the lengths of a plurality of objects present at different places. However, the lengths of photographed objects can easily be compared using an image capturing device which displays the length of an object calculated from parallax information, by obtaining as inputs an image in which the object is photographed and the parallax information corresponding to the image, the device including: an object extraction unit which extracts an image of the object from the photographed image using the parallax information; a comparison data maintaining unit which maintains the image of the object and the length of the object; an object comparison unit which compares the length of the object extracted by the object extraction unit to the length of comparison data retrieved from the comparison data maintaining unit; and an image composition unit which combines the comparison result with the photographed image and outputs the resulting image.
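The length-from-parallax calculation and the comparison step can be sketched with the standard stereo relations. This assumes a calibrated pinhole stereo pair (depth = focal × baseline / disparity, metric extent = pixel extent × depth / focal); the function names and the caption text produced for composition are illustrative, not from the patent.

```python
def object_length(pixel_extent, disparity_px, focal_px, baseline_m):
    """Metric length of an object from its image extent and stereo parallax."""
    depth_m = focal_px * baseline_m / disparity_px   # triangulated depth
    return pixel_extent * depth_m / focal_px         # back-projected extent

def comparison_caption(name_a, len_a, name_b, len_b):
    """Text that the image composition unit would overlay on the photograph."""
    if len_a > len_b:
        return f"{name_a} is longer than {name_b}"
    if len_a < len_b:
        return f"{name_b} is longer than {name_a}"
    return f"{name_a} and {name_b} are the same length"
```

Because depth cancels the perspective scaling, two objects photographed at different distances (and even at different times, via the comparison data maintaining unit) become directly comparable in metric units.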

Three-dimensional (3D) image rendering method and apparatus

A three-dimensional (3D) image rendering method for a heads-up display (HUD) system including a 3D display apparatus and a catadioptric system is provided. The 3D image rendering method includes determining optical images corresponding to both eyes of a user by applying, to each of the positions of the eyes, an optical transformation that is based on an optical characteristic of the catadioptric system, and rendering an image to be displayed on a display panel included in the 3D display apparatus, based on a position relationship between the optical images and the display panel.
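The optical transformation applied to each eye position can be illustrated with the simplest possible case: reflecting the eye across a planar approximation of the HUD's mirror to obtain its optical image. A real catadioptric system would account for the mirror's curvature and refraction through the windshield; the planar reflection below is an assumption made to keep the sketch short.

```python
def reflect_through_mirror(eye, mirror_point, normal):
    """Optical image of an eye position under a planar mirror approximation:
    v' = v - 2 (v . n) n, with v taken relative to a point on the mirror.
    `normal` is assumed to be a unit vector."""
    vx, vy = eye[0] - mirror_point[0], eye[1] - mirror_point[1]
    d = vx * normal[0] + vy * normal[1]
    return (eye[0] - 2 * d * normal[0], eye[1] - 2 * d * normal[1])

def optical_images(eyes, mirror_point, normal):
    """Transform both eye positions; rendering then uses these images and the
    display panel's position instead of the physical eye positions."""
    return [reflect_through_mirror(e, mirror_point, normal) for e in eyes]
```

The point of the method is visible even in this toy form: once the eyes are replaced by their optical images, the rendering step reduces to an ordinary autostereoscopic ray assignment between the images and the panel.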

Digital cinema projection method, optimization device and projection system

Embodiments of the present invention provide a digital cinema projection method comprising the following steps: projecting images onto a screen by a digital cinema projector, and acquiring the images on the screen by an image capture unit; performing digital analysis on the acquired images by an image analysis unit to obtain a plurality of parameters of the acquired images; performing, according to each of the parameters, a correction process on image signals input from the digital cinema server and then outputting them to the digital cinema projector; and projecting the images, individually corrected and improved for each of the digital cinema projectors.
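The closed loop of measure, analyze, and correct can be sketched for a single parameter, brightness. This is a deliberately reduced example: a real implementation would analyze many parameters (color, uniformity, geometry), and the mean-brightness measurement, the multiplicative gain, and the 8-bit clamp are assumptions for illustration.

```python
def measure_brightness(captured_pixels):
    """Image analysis unit: mean brightness of the camera-acquired screen image."""
    return sum(captured_pixels) / len(captured_pixels)

def correction_gain(measured, target):
    """Per-projector gain so the measured screen brightness matches the target."""
    return target / measured

def correct_signal(signal, gain):
    """Apply the correction to the server's image signal, clamped to 8 bits,
    before it is output to the projector."""
    return [min(255, round(p * gain)) for p in signal]
```

Because the gain is derived from what the camera actually sees on screen, the loop compensates for per-projector differences (lamp aging, optics) rather than assuming nominal projector behavior.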

DEVICE AND METHOD OF DIMENSIONING USING DIGITAL IMAGES AND DEPTH DATA
20170264880 · 2017-09-14 ·

A device and method of dimensioning using digital images and depth data is provided. The device includes a camera and a depth sensing device whose fields of view generally overlap. Segments of shapes belonging to an object identified in a digital image from the camera are identified. Based on respective depth data, from the depth sensing device, associated with each of the segments of the shapes belonging to the object, it is determined whether each of the segments is associated with a same shape belonging to the object. Once all the segments are processed to determine their respective associations with the shapes of the object in the digital image, dimensions of the object are computed based on the respective depth data and the respective associations of the shapes.
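The segment-association step can be illustrated with a toy grouping rule: segments whose depth readings agree within a tolerance are treated as belonging to the same shape, and dimensions are then back-projected through a pinhole model. The flat-depth criterion, the 5 cm tolerance, and the dict-based group structure are all simplifications invented for the sketch; the patent's association logic is not specified at this level.

```python
def group_segments_by_depth(segments, tol=0.05):
    """segments: list of (segment_id, depth_m). Segments whose depths agree
    within tol metres are assumed to belong to the same shape of the object."""
    groups = []
    for seg_id, depth in segments:
        for g in groups:
            if abs(g["depth"] - depth) <= tol:
                g["members"].append(seg_id)
                break
        else:
            groups.append({"depth": depth, "members": [seg_id]})
    return groups

def real_width(pixel_width, depth_m, focal_px):
    """Pinhole back-projection: metric width from pixel width and depth."""
    return pixel_width * depth_m / focal_px
```

Fusing the two sensors this way plays to each one's strength: the camera's digital image gives precise segment boundaries, while the depth data resolves which segments form one face of the object and at what scale.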

THREE-DIMENSIONAL AUTO-FOCUSING DISPLAY METHOD AND SYSTEM THEREOF
20170257614 · 2017-09-07 ·

A 3D auto-focusing display method comprises: executing an eye-tracking step on a 3D image to obtain focal point coordinates (x1, y1) of a viewer of the image; mapping the focal point coordinates (x1, y1) to a coordinate location of a display to obtain display coordinates (x2, y2) defining the coordinate location of the display corresponding to a depth diagram of the 3D image; determining a region where the image is located by using the display coordinates (x2, y2) as an input parameter together with the depth diagram of the image; determining whether the image is a 3D stereoscopic image according to the region; executing a depth map step to revise the 3D image, based on the image and a plurality of depth data of the region, so that the display coordinates (x2, y2) appear as a focused image; and outputting the revised focused image to the display.
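The gaze-to-display mapping and depth lookup can be sketched as below. This is a minimal stand-in for the method's first steps: the affine mapping from tracker coordinates (x1, y1) to display coordinates (x2, y2), the row-major depth map indexing, and the linear distance-from-focus blur weight are all illustrative assumptions, not the patent's formulation.

```python
def map_to_display(focal_xy, scale, offset):
    """Map tracked gaze coordinates (x1, y1) to display coordinates (x2, y2)
    via an assumed affine calibration."""
    return (focal_xy[0] * scale[0] + offset[0],
            focal_xy[1] * scale[1] + offset[1])

def depth_at(depth_map, xy):
    """Depth diagram lookup at the display coordinates (row-major grid)."""
    x, y = int(xy[0]), int(xy[1])
    return depth_map[y][x]

def blur_weight(pixel_depth, focus_depth, aperture=0.5):
    """Synthetic depth-of-field: pixels far from the gazed depth receive more
    blur, so the region the viewer looks at appears as the focused image."""
    return aperture * abs(pixel_depth - focus_depth)
```

Revising the image per gaze sample means the rendered focus plane follows the viewer continuously, which is what distinguishes auto-focusing display from a fixed depth-of-field render.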