Patent classifications
G06V20/90
AUTO-ANNOTATING OBJECTS USING THERMAL IMAGING
This application relates to systems, methods, devices, and other techniques for auto-annotating objects using thermal imaging.
DETERMINING IMAGE FORENSICS USING AN ESTIMATED CAMERA RESPONSE FUNCTION
An image forensics system estimates the camera response function (CRF) associated with a digital image and compares the estimated CRF both to a set of rules and to a known CRF, where the known CRF is associated with the make and model of an image sensing device. The system applies a fusion analysis to the results of the two comparisons and assesses the integrity of the digital image as a function of the fusion analysis.
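The fusion step can be illustrated with a minimal sketch. The function names, the RMS distance metric, and the equal weighting are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def crf_distance(estimated_crf, known_crf):
    """RMS distance between two CRFs sampled on the same irradiance grid."""
    diff = np.asarray(estimated_crf) - np.asarray(known_crf)
    return float(np.sqrt(np.mean(diff ** 2)))

def fuse_scores(rule_score, known_crf_score, weights=(0.5, 0.5)):
    """Weighted-sum fusion of the rule-based and known-CRF comparison results.

    Both scores are assumed to lie in [0, 1], where 1 means 'consistent'.
    A low fused score would flag the image's integrity as questionable."""
    return weights[0] * rule_score + weights[1] * known_crf_score
```

Other fusion rules (e.g. max, product, or a learned combiner) would slot into `fuse_scores` the same way.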
Camera Identification Method, Authentication Method, System, and Terminal
A camera identification method comprises characterizing a to-be-identified camera using the light sensitivity deviation of each of its pixels and generating corresponding identification data; characterizing a target camera in the same way using the light sensitivity deviation of each of its pixels; and identifying the target camera by comparing the similarity between the data that characterizes the target camera and the identification data of a known camera.
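A per-pixel sensitivity-deviation fingerprint and similarity comparison might be sketched as follows; the averaging-based fingerprint and the normalized-correlation similarity are simplifying assumptions for illustration, not the claimed algorithm:

```python
import numpy as np

def fingerprint(images):
    """Crude per-pixel sensitivity deviation: average frame minus its global mean.

    Averaging several frames of the same flat scene suppresses shot noise and
    leaves the camera's fixed per-pixel response deviation."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    mean_frame = stack.mean(axis=0)
    return mean_frame - mean_frame.mean()

def similarity(fp_a, fp_b):
    """Normalized cross-correlation in [-1, 1] between two fingerprints."""
    a = fp_a - fp_a.mean()
    b = fp_b - fp_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

A similarity near 1 would indicate that the target camera matches the known camera's identification data.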
CAMERA IDENTIFICATION
A computer, including a processor and a memory, the memory including instructions executable by the processor to: divide each of one or more images acquired by a camera into a plurality of zones; determine respective camera noise values for the respective zones based on the one or more images; determine one or more zone expected values for the one or more images by summing the camera noise values multiplied by scalar coefficients for each zone and normalizing the sum by the number of zones; and determine whether the source of the one or more images is the same camera or an unknown camera by comparing the one or more zone expected values to previously acquired expected zone values.
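The claimed expected-value computation can be sketched directly. The use of standard deviation as the per-zone noise measure and the tolerance threshold are assumptions for illustration:

```python
import numpy as np

def zone_expected_value(image, n_rows, n_cols, coeffs):
    """Split an image into n_rows x n_cols zones, take a noise value per zone,
    weight each by its scalar coefficient, sum, and normalize by zone count."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    zones = [img[r * h // n_rows:(r + 1) * h // n_rows,
                 c * w // n_cols:(c + 1) * w // n_cols]
             for r in range(n_rows) for c in range(n_cols)]
    noise = [float(z.std()) for z in zones]  # std-dev as a stand-in noise estimate
    return sum(nv * k for nv, k in zip(noise, coeffs)) / len(zones)

def same_camera(expected, reference, tol=0.05):
    """Attribute the image to the reference camera when values agree within tol."""
    return abs(expected - reference) <= tol
```

In practice the reference values would be the previously acquired expected zone values for the enrolled camera.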
METHOD OF MARKING A SOLID-STATE MATERIAL, MARKINGS FORMED FROM SUCH METHODS AND SOLID-STATE MATERIALS MARKED ACCORDING TO SUCH A METHOD
A process of forming a non-optically detectable authentication marking (210, 320, 410, 535) in a diamond (200, 300). The authentication marking is formed adjacent to the outer surface of an article formed from a diamond material having intrinsic optical centers. The method includes the step of applying an image of the predesigned authentication marking to a region (201, 310, 530) at or adjacent to the surface of the diamond by way of direct laser writing, wherein the fluorescence background of the diamond material from the intrinsic optical centers is suppressed by the authentication marking under fluorescence imaging, such that the marking, while not optically detectable, is viewable against the fluorescence background at the region of the diamond where it is applied.
REPEATABILITY PREDICTIONS OF INTEREST POINTS
The present disclosure describes approaches for evaluating interest points for localization purposes based on the repeatability with which each interest point is detected in images capturing a scene that contains it. The repeatability of interest points is determined using a trained repeatability model. The repeatability model is trained by analyzing a time series of images of a scene and determining a repeatability function for each interest point in the scene. The repeatability function is determined by identifying which images in the time series allowed an interest point detection model to detect the interest point.
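A repeatability function of this kind could be as simple as the fraction of frames in which a detector fires on each point. The per-frame set representation below is an assumption for illustration:

```python
from collections import Counter

def repeatability_scores(detections_per_frame):
    """detections_per_frame: one set of detected interest-point ids per image
    in the time series. Returns a map from id to the fraction of frames in
    which that point was detected."""
    counts = Counter(pid for frame in detections_per_frame for pid in frame)
    n_frames = len(detections_per_frame)
    return {pid: count / n_frames for pid, count in counts.items()}
```

Points with scores near 1 are reliably re-detected across imaging conditions and are therefore good localization landmarks; a learned model would predict these scores for previously unseen points.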
Coordination of multiple structured light-based 3D image detectors
Technologies are generally described for coordination of structured light-based image detectors. In some examples, one or more structured light sources may be configured to project sets of points onto the scene. The sets of points may be arranged into disjoint sets of geometrical shapes such as lines, where each geometrical shape includes a subset of the points projected by an illumination source. A relative position and/or a color of the points in each geometrical shape may encode an identification code with which each illumination source may be identified. Thus, even when the point clouds projected by the illumination sources overlap, the geometrical shapes may still be detected, and thereby the corresponding illumination source may be identified. A depth map may then be estimated based on stereovision principles or depth-from-focus principles by one or more image detectors.
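The shape-embedded identification code might work like the following sketch, where a source id is encoded as a binary color pattern along one projected line; the two-color alphabet and LSB-first bit ordering are assumptions for illustration:

```python
def encode_id(source_id, n_points, colors=("red", "green")):
    """Encode source_id as an LSB-first color sequence along one projected line."""
    return [colors[(source_id >> i) & 1] for i in range(n_points)]

def decode_id(pattern, colors=("red", "green")):
    """Recover the source id from the detected color sequence of one line."""
    return sum(colors.index(c) << i for i, c in enumerate(pattern))
```

Because each line carries its own code, a detector that sees overlapping point clouds can still attribute each line, and hence each point on it, to the illumination source that projected it.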
Calibrating inertial measurement units using image data
The systems and/or processes described herein may calibrate an inertial measurement unit (IMU) of an electronic device in part by using images captured by one or more cameras of the electronic device. In this regard, an IMU of an electronic device may comprise a gyroscope, an accelerometer, a magnetometer, or any other type of motion sensor or rotational sensor.
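One common image-aided calibration is estimating a constant gyroscope bias by comparing integrated gyro readings against the rotation recovered from camera images over the same interval. This one-axis sketch, including its function name and interface, is illustrative only, not the disclosed process:

```python
def estimate_gyro_bias(gyro_rates, camera_rotation, dt):
    """Constant-bias estimate for a single gyro axis.

    gyro_rates: sampled angular rates in rad/s at interval dt.
    camera_rotation: total rotation in rad over the same window,
    recovered from image feature tracking by the device's camera."""
    integrated = sum(rate * dt for rate in gyro_rates)  # gyro's rotation estimate
    total_time = dt * len(gyro_rates)
    return (integrated - camera_rotation) / total_time  # excess rotation per second
```

The estimated bias would then be subtracted from subsequent gyro readings; analogous comparisons against image-derived motion can calibrate accelerometer or magnetometer errors.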