Patent classifications
H04N23/611
Enhanced Illumination-Invariant Imaging
Devices, systems, and methods for generating illumination-invariant images are disclosed. A method may include activating, by a device, a camera to capture first image data; while the camera is capturing the first image data, activating a first light source; receiving the first image data, the first image data comprising pixels having first color values; identifying first light generated by the first light source while the camera is capturing the first image data; identifying, based on the first image data, second light generated by a second light source; generating, based on the first light and the second light, second image data that are illumination-invariant; and presenting the second image data.
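The abstract does not specify how the two light contributions are separated. A minimal sketch, assuming a flash/no-flash style capture in which a second, ambient-only frame is available and the two frames are linear and registered (all assumptions, not the patent's method), is:

```python
import numpy as np

def illumination_invariant(frame_with_source, frame_ambient_only):
    """Hypothetical sketch: isolate the contribution of a controlled light source.

    Assumes two linear, registered frames of the same scene: one captured while
    the first (controlled) light source is active, and one lit only by the
    second (ambient) source. Subtracting the ambient-only frame leaves light
    attributable to the controlled source, which does not depend on the
    uncontrolled ambient illumination.
    """
    with_source = frame_with_source.astype(np.float32)
    ambient = frame_ambient_only.astype(np.float32)
    invariant = np.clip(with_source - ambient, 0.0, 255.0)
    return invariant.astype(np.uint8)
```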
METHOD AND SYSTEM FOR AUTOMATIC PRE-RECORDATION VIDEO REDACTION OF OBJECTS
A system and a method for automatic video redaction are provided herein. The method may include: receiving an input video comprising a sequence of frames captured by a camera, wherein the input video includes live video obtained directly from the camera, wherein recordation of the video directly from the camera is disabled; performing visual analysis of the input video to detect portions of the frames of the input video in which one of a plurality of predefined objects or a descriptor thereof is detected; generating a redacted input video by replacing the portions of the frames with new portions of another visual content; and recording the redacted input video on a data storage device, wherein the generating of the redacted input video is carried out by a computer processor after the input video is captured by the camera and before the recording of the redacted input video on the data storage device.
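A minimal sketch of such a pre-recordation pipeline, assuming OpenCV, a Haar face detector as a stand-in for the predefined-object detector, and Gaussian blur as the replacement visual content (all illustrative choices, not the patent's implementation):

```python
import cv2

# Illustrative sketch only: faces stand in for the "predefined objects",
# and blurring stands in for the replacement content. Raw frames are never
# written to storage; only the redacted frames are recorded.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                       # live video directly from the camera
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("redacted.mp4", fourcc, 30.0, (w, h))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + fh, x:x + fw]
        frame[y:y + fh, x:x + fw] = cv2.GaussianBlur(roi, (51, 51), 0)
    writer.write(frame)                         # only the redacted frame is recorded

cap.release()
if writer is not None:
    writer.release()
```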
Devices, Methods, and Graphical User Interfaces for Assisted Photo-Taking
An electronic device with a camera obtains, with the camera, one or more images of a scene. The electronic device detects a respective feature within the scene. In accordance with a determination that a first mode is active on the device, the electronic device provides a first audible description of the scene. The first audible description provides information indicating a size and/or position of the respective feature relative to a first set of divisions applied to the one or more images of the scene. In accordance with a determination that the first mode is not active on the device, the electronic device provides a second audible description of the scene. The second audible description is distinct from the first audible description and does not include the information indicating the size and/or position of the respective feature relative to the first set of divisions.
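As an illustration of a grid-relative description, the sketch below phrases a detected feature's position and size against a hypothetical 3x3 (rule-of-thirds) set of divisions; the division count, wording, and bounding-box input are assumptions, not the patent's specification:

```python
def describe_feature(box, image_w, image_h, divisions=3):
    """Hypothetical sketch: phrase a detected feature's position and size
    relative to a grid of divisions, as text a screen reader could speak.
    `box` is an (x, y, w, h) bounding box in pixels.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    col = min(int(cx / image_w * divisions), divisions - 1)
    row = min(int(cy / image_h * divisions), divisions - 1)
    horiz = ["left", "center", "right"][col] if divisions == 3 else f"column {col + 1}"
    vert = ["top", "middle", "bottom"][row] if divisions == 3 else f"row {row + 1}"
    size_pct = 100 * (w * h) / (image_w * image_h)
    return (f"Feature in the {vert} {horiz} of the frame, "
            f"covering about {size_pct:.0f}% of the image.")
```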
STABILIZATION OF FACE IN VIDEO
Placement of a face depicted within a video may be determined. One or more stabilization options for the video may be obtained. The stabilization option(s) may include an angle stabilization option, a position stabilization option, and/or a size stabilization option. The video may be stabilized based on the placement of the face and the stabilization option(s).
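A minimal sketch of how the three stabilization options might be applied per frame, assuming the face placement has already been estimated as a center, angle, and size (the inputs and the OpenCV warp are illustrative, not the patent's method):

```python
import cv2

def stabilize_on_face(frames, placements,
                      stabilize_position=True,
                      stabilize_angle=True,
                      stabilize_size=True):
    """Illustrative sketch: warp each frame so the face holds a reference
    position, angle, and size. `placements` is a list of per-frame tuples
    (cx, cy, angle_deg, size); the first frame serves as the reference.
    """
    ref_cx, ref_cy, ref_angle, ref_size = placements[0]
    out = []
    for frame, (cx, cy, angle, size) in zip(frames, placements):
        h, w = frame.shape[:2]
        scale = (ref_size / size) if stabilize_size else 1.0
        rot = (angle - ref_angle) if stabilize_angle else 0.0
        M = cv2.getRotationMatrix2D((cx, cy), rot, scale)
        if stabilize_position:
            M[0, 2] += ref_cx - cx       # shift the face center to its reference x
            M[1, 2] += ref_cy - cy       # shift the face center to its reference y
        out.append(cv2.warpAffine(frame, M, (w, h)))
    return out
```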
Two-dimensional image collection for three-dimensional body composition modeling
Described are systems and methods directed to generation of a dimensionally accurate three-dimensional (“3D”) body model of a body, such as a human body, based on two-dimensional (“2D”) images of that body. A user may use a 2D camera, such as a digital camera typically included in many of today's portable devices (e.g., cell phones, tablets, laptops, etc.), to obtain a series of 2D body images of their body from different directions with respect to the camera. The 2D body images may then be used to generate a plurality of predicted body parameters corresponding to the body represented in the 2D body images. Those predicted body parameters may then be further processed to generate a dimensionally accurate 3D model of the body of the user.
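The abstract does not name a body parameterization or prediction model; the stub below only sketches the shape of such a pipeline, with hypothetical placeholders for the parameter predictor and mesh generator:

```python
from dataclasses import dataclass
from typing import Sequence

import numpy as np

@dataclass
class BodyParameters:
    """Hypothetical container for predicted body parameters, e.g. shape
    coefficients of a parametric body model (the patent does not name one)."""
    shape_coefficients: np.ndarray

def predict_body_parameters(images: Sequence[np.ndarray]) -> BodyParameters:
    # Placeholder: a trained model would regress body parameters from the
    # series of 2D body images taken from different directions.
    raise NotImplementedError

def build_3d_model(params: BodyParameters) -> np.ndarray:
    # Placeholder: evaluate a parametric body mesh from the predicted
    # parameters to obtain the dimensionally accurate 3D model.
    raise NotImplementedError
```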
Gimbal device
A gimbal device for supporting an external photographing device includes a depth camera, a control unit and an actuator. The depth camera is configured to obtain spatial coordinates of a subject being photographed. The control unit is configured to determine a direction adjustment value of the gimbal device according to the spatial coordinates. The actuator is configured to receive the direction adjustment value and to adjust spatial orientation of the gimbal device according to the direction adjustment value, so that the external photographing device is able to track the subject being photographed. The control unit is further configured to obtain initial three-axis data of the gimbal device and three-axis data after the gimbal device is set, and to determine an angle difference between the initial three-axis data and the set three-axis data; the actuator is further configured to receive the angle difference.
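An illustrative sketch of the two control paths described above, assuming the depth camera reports subject coordinates in the gimbal frame and that the three-axis data are Euler angles (both assumptions, not details from the patent):

```python
import math

def direction_adjustment(subject_xyz):
    """Illustrative sketch: convert a subject's spatial coordinates from the
    depth camera (x right, y up, z forward, in the gimbal frame) into the
    yaw/pitch adjustment needed to center the subject.
    """
    x, y, z = subject_xyz
    yaw = math.degrees(math.atan2(x, z))                   # pan left/right
    pitch = math.degrees(math.atan2(y, math.hypot(x, z)))  # tilt up/down
    return yaw, pitch

def angle_difference(initial_axes, set_axes):
    """Difference between the initial and set three-axis readings, as in the
    abstract's second control path (values assumed to be Euler angles)."""
    return tuple(s - i for i, s in zip(initial_axes, set_axes))
```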
Camera detection of human activity with co-occurrence
Methods, systems, and apparatus for camera detection of human activity with co-occurrence are disclosed. A method includes detecting a person in an image captured by a camera; in response to detecting the person in the image, determining optical flow in portions of a first set of images; determining that particular portions of the first set of images satisfy optical flow criteria; in response to determining that the particular portions of the first set of images satisfy optical flow criteria, classifying the particular portions of the first set of images as indicative of human activity; receiving a second set of images captured by the camera after the first set of images; and determining that the second set of images likely shows human activity based on analyzing portions of the second set of images that correspond to the particular portions of the first set of images classified as indicative of human activity.
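A minimal sketch of the optical-flow classification step, using OpenCV's Farneback dense flow and a hypothetical per-block magnitude threshold; the block size and threshold are assumptions, not values from the patent:

```python
import cv2
import numpy as np

def motion_mask(frames, flow_threshold=1.0, block=16):
    """Illustrative sketch: mark blocks of the first set of images whose mean
    dense optical-flow magnitude exceeds a threshold. These blocks stand in
    for the portions classified as indicative of human activity; later images
    would be checked only where the mask is True.
    """
    h, w = frames[0].shape[:2]
    active = np.zeros((h // block, w // block), dtype=bool)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        for r in range(active.shape[0]):
            for c in range(active.shape[1]):
                patch = mag[r * block:(r + 1) * block, c * block:(c + 1) * block]
                if patch.mean() > flow_threshold:
                    active[r, c] = True
        prev = gray
    return active
```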
Method and apparatus for authenticating a user of a computing device
A system for authenticating a user attempting to access a computing device or a software application executing thereon. A data storage device stores one or more digital images or frames of video of face(s) of authorized user(s) of the device. The system subsequently receives from a first video camera one or more digital images or frames of video of a face of the user attempting to access the device and compares the image of the face of the user attempting to access the device with the stored image of the face of the authorized user of the device. To ensure the received video of the face of the user attempting to access the device is a real-time video of that user, and not a forgery, the system further receives a first photoplethysmogram (PPG) obtained from a first body part (e.g., a face) of the user attempting to access the device, receives a second PPG obtained from a second body part (e.g., a fingertip) of the user attempting to access the device, and compares the first PPG with the second PPG. The system authenticates the user attempting to access the device based on a successful comparison of (e.g., correlation between, consistency of) the first PPG and the second PPG and based on a successful comparison of the image of the face of the user attempting to access the device with the stored image of the face of the authorized user of the device.
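An illustrative sketch of the liveness comparison, assuming the two PPG signals have already been resampled to equal length; the correlation threshold is hypothetical, and the face-match step is taken as a given boolean input:

```python
import numpy as np

def ppg_signals_consistent(ppg_face, ppg_fingertip, min_correlation=0.7):
    """Illustrative sketch of the liveness check: a PPG extracted from the
    face video and a PPG from the fingertip should carry the same cardiac
    rhythm, so one simple check is the normalized correlation of the two
    equal-length signals. The threshold is a hypothetical value.
    """
    a = (ppg_face - np.mean(ppg_face)) / (np.std(ppg_face) + 1e-9)
    b = (ppg_fingertip - np.mean(ppg_fingertip)) / (np.std(ppg_fingertip) + 1e-9)
    return float(np.dot(a, b) / len(a)) >= min_correlation

def authenticate(face_match_ok, ppg_face, ppg_fingertip):
    # Both checks must pass: the captured face matches the stored image of an
    # authorized user, and the two PPGs are consistent (guarding against a
    # forged or replayed face video).
    return face_match_ok and ppg_signals_consistent(ppg_face, ppg_fingertip)
```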