Patent classifications
G06V10/147
Polarization capture device for identifying feature of object
A device includes a first polarized image sensor configured to capture first image data relating to an object from a first perspective. The device also includes a second polarized image sensor configured to capture second image data relating to the object from a second perspective different from the first perspective. The device further includes a processor configured to obtain at least one of polarization information or depth information of the object based on at least one of the first image data or the second image data, and to extract a feature of the object based on the at least one of the polarization information or the depth information.
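One way the two perspectives can yield depth information is stereo triangulation: the per-pixel disparity between the two views is converted to distance. The following is an illustrative sketch of that step only, not the patented method; the function name and all numbers are assumptions for demonstration.

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth from stereo disparity: z = f * b / d.

    disparity_px: pixel offset of a feature between the two sensor views
    focal_length_px: focal length expressed in pixels
    baseline_m: distance between the two image sensors, in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A feature seen 8 px apart by two sensors 6 cm apart,
# with a 960 px focal length, lies 7.2 m away.
depth_m = disparity_to_depth(8, 960, 0.06)
print(depth_m)  # -> 7.2
```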
Optimizing bra sizing according to the 3D shape of breasts
Methods and systems for developing a sizing system through categorization and selection of prototypes, which can be regarded as the most appropriate fit models, are described. Once the categorization is complete and prototypes are selected, sizing recommendations for a target body part may be issued.
FINGERPRINT INPUT DEVICE USING PORTABLE TERMINAL HAVING CAMERA AND EXTERNAL OPTICAL DEVICE FOR INPUTTING FINGERPRINT
Disclosed is a fingerprint input device using a portable terminal equipped with a camera, together with an external optical device for inputting a fingerprint. According to the present invention, a fingerprint image may be generated by an optical fingerprint input method using the external optical device even when an existing portable terminal has no configuration for optical fingerprint input. To this end, the external optical device is provided as an external unit to be mounted on an existing portable terminal provided with a camera, and has an optical refractor and a mirror. The external optical device may generate a user's fingerprint image by an optical fingerprint input method, as circumstances require, without interrupting the primary use of the camera and LED of the existing portable terminal.
DEVICE, SYSTEM, AND METHOD FOR PERFORMANCE MONITORING AND FEEDBACK FOR FACIAL RECOGNITION SYSTEMS
Disclosed is a process for performance monitoring and feedback for facial recognition systems. A first image for image matching from a camera capture device at a first location is received for purposes of image matching. A highest match confidence score of the first image to a particular stored enrollment image is determined. One or more image or user characteristics associated with the first image or first user is identified. The identified image or user characteristics and highest match confidence score are added to a facial recognition monitoring and feedback model. Subsequently, a particular one of the stored image or user characteristics consistently associated with a below-threshold highest match confidence score is identified, and a notification is displayed or transmitted including an indication of an identified facial recognition low match pattern and identifying the particular one of the stored image characteristics or user characteristics.
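The monitoring-and-feedback model described above can be sketched as accumulating (characteristic, highest-match-confidence) pairs and flagging any characteristic whose scores are consistently below a threshold. This is a minimal illustration under assumed thresholds and field names, not the patented implementation.

```python
from collections import defaultdict

class MatchMonitor:
    """Accumulates match-confidence scores per image/user characteristic."""

    def __init__(self, threshold=0.6, min_samples=3):
        self.threshold = threshold      # assumed below-threshold cutoff
        self.min_samples = min_samples  # "consistently" = at least this many samples
        self.scores = defaultdict(list)

    def record(self, characteristic, confidence):
        """Add one highest-match-confidence score for a characteristic."""
        self.scores[characteristic].append(confidence)

    def low_match_patterns(self):
        """Characteristics consistently associated with below-threshold scores."""
        return [c for c, s in self.scores.items()
                if len(s) >= self.min_samples and max(s) < self.threshold]

monitor = MatchMonitor()
for score in (0.41, 0.38, 0.45):
    monitor.record("backlit", score)       # illustrative characteristic
for score in (0.92, 0.88, 0.95):
    monitor.record("frontal", score)
print(monitor.low_match_patterns())  # -> ['backlit']
```

A notification step, as in the abstract, would then report each flagged characteristic as an identified low-match pattern.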
OPTICAL TRANSFER DIAGNOSIS FOR DETECTION AND MONITORING OF TISSUE DISORDERS
Systems and methods for discriminating between malignant and benign pigmented skin lesions based on optical analysis using spatial distribution maps, morphological parameters, and additional diagnostic parameters derived from images of tissue lesions. A handheld optical transfer diagnosis device is disclosed that is capable of capturing a series of reflectance images of a skin lesion at a variety of angles of illumination and observation.
HEAD MOUNTED DISPLAY DEVICE
A head mounted display device, including a frame, a mask, at least one infrared (IR) transmitter, and at least one image capture device, is provided. The mask has a first light reflection layer on a first side. The IR transmitter is disposed in the frame and is used to emit an emitting light beam toward the first light reflection layer. The first light reflection layer reflects the emitting light beam to send a reflective light beam toward a target area. The image capture device is disposed in the frame and is used to capture a target area reflective image of the target area.
Computer Vision Systems and Methods for Generating Building Models Using Three-Dimensional Sensing and Augmented Reality Techniques
Computer vision systems and methods for generating building models using three-dimensional sensing and augmented reality (AR) techniques are provided. Image frames including images of a structure to be modeled are captured by a camera of a mobile device such as a smart phone, as well as three-dimensional data corresponding to the image frames. An object of interest, such as a structural feature of the building, is detected using both the image frames and the three-dimensional data. An AR icon is determined based upon the type of object detected, and is displayed on the mobile device superimposed on the image frames. The user can manipulate the AR icon to better fit or match the object of interest in the image frames, and can capture the object of interest using a capture icon displayed on the display of the mobile device.
OBJECT DETECTION SYSTEM
A vehicular object detection system includes a camera and a lidar. With the camera mounted at a windshield of a vehicle, and with the lidar mounted at an exterior portion of the vehicle, and based at least in part on processing of image data captured by the camera and lidar data captured by the lidar, a plurality of individual objects present exterior of the vehicle are detected. Based at least in part on processing of captured image data and captured lidar data, (i) respective proximity relative to the vehicle of individual objects is determined, (ii) respective speed relative to the vehicle of individual objects is determined and (iii) respective location relative to the vehicle of individual objects is determined. Based at least in part on processing of captured image data and/or processing of captured lidar data, the system determines collision potential between the vehicle and an individual object.
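The per-object determinations listed above can be illustrated with a simple kinematic sketch: given a fused camera/lidar detection carrying relative position and closing speed, proximity is the Euclidean distance and collision potential can be approximated by time-to-collision. All names, fields, and values below are illustrative assumptions, not the patented method.

```python
import math
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x_m: float     # longitudinal offset from the vehicle, metres
    y_m: float     # lateral offset, metres
    vx_mps: float  # closing speed along x; negative = approaching

def proximity(obj):
    """Straight-line distance from the vehicle to the object."""
    return math.hypot(obj.x_m, obj.y_m)

def time_to_collision(obj):
    """Seconds until the longitudinal gap closes; inf if not approaching."""
    if obj.vx_mps >= 0:
        return math.inf
    return obj.x_m / -obj.vx_mps

# An object 30 m ahead, closing at 10 m/s, has a 3-second time-to-collision.
car = TrackedObject(x_m=30.0, y_m=0.0, vx_mps=-10.0)
print(proximity(car), time_to_collision(car))  # -> 30.0 3.0
```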