Patent classifications
G06V10/00
OBSERVATION DEVICE AND OBSERVATION METHOD
An observation apparatus includes a light source unit, an irradiation optical system, an imaging optical system, a modulation unit, an imaging unit, an analysis unit, beam splitters, and mirrors. The analysis unit obtains a real part of a function χ(t) = log[1 + U_obj(t)/U_ref(t)], defined by time series data U_obj(t) of a complex amplitude image of object light on an imaging plane and time series data U_ref(t) of a complex amplitude image of reference light on the imaging plane, based on time series data I(t) of an intensity image of interference light on the imaging plane and time series data I_ref(t) of an intensity image of the reference light on the imaging plane. Further, the analysis unit obtains an imaginary part of χ(t) from the real part of χ(t) using the Kramers-Kronig relations, and then obtains U_obj(t).
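The final analysis step can be sketched numerically. The sketch below assumes χ(t) is analytic, so the Kramers-Kronig relations reduce to a Hilbert transform of the real part (`scipy.signal.hilbert` returns exactly that analytic signal); the function name is illustrative, not from the patent.

```python
import numpy as np
from scipy.signal import hilbert

def recover_object_field(chi_real, u_ref):
    """Recover U_obj(t) from Re{chi(t)} and the reference field U_ref(t).

    chi(t) = log[1 + U_obj(t)/U_ref(t)] is assumed analytic, so by the
    Kramers-Kronig relations Im{chi} is the Hilbert transform of Re{chi}.
    """
    chi = hilbert(chi_real)              # Re{chi} + 1j * H{Re{chi}}
    return u_ref * (np.exp(chi) - 1.0)   # invert the definition of chi
```

For a signal sampled over full periods, the Hilbert-transform pair is recovered essentially exactly; real data would need windowing to control edge effects.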
SYSTEMS, METHODS, AND DEVICES FOR GENERATING DIGITAL AND CRYPTOGRAPHIC ASSETS BY MAPPING BODIES FOR N-DIMENSIONAL MONITORING USING MOBILE IMAGE DEVICES
Provided are systems, methods, and devices for generating digital and/or cryptographic assets. An initial state of an environment is acquired using sensors; the initial state includes a state of each sensor, a region of interest including a 3D body, and a state of light sources. The asset is associated with the 3D body. A plurality of boundary conditions associated with a workflow for capturing the asset is determined. A visualization of a set of the boundary conditions, including a plurality of visual cues with first and second visual cues, is displayed on a display. Each respective visual cue provides a visual indication of a state of a corresponding boundary condition in the set of boundary conditions. At least one visual cue is updated when each boundary condition in the set of boundary conditions is satisfied. When all boundary conditions are satisfied, the workflow is executed at the computer-enabled imaging device, thereby capturing the asset.
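The cue-update loop described above can be sketched as follows. This is a minimal model of the control flow only; the `VisualCue` class, condition names, and state dictionary are hypothetical stand-ins for the patent's sensors and display.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VisualCue:
    """One on-screen indicator tied to a single boundary condition."""
    label: str
    check: Callable[[dict], bool]   # boundary condition over environment state
    satisfied: bool = False

def update_cues(cues: List[VisualCue], state: dict) -> bool:
    """Refresh each cue from the current environment state; return True
    only when every boundary condition in the set is satisfied."""
    for cue in cues:
        cue.satisfied = cue.check(state)
    return all(c.satisfied for c in cues)

def run_workflow(cues, state, capture):
    # Execute the capture workflow only once all boundary conditions hold.
    if update_cues(cues, state):
        return capture(state)
    return None
```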
System and method for determining distance to object on road
Various aspects of a system, a method, and a computer program product for determining a distance to an object on a road are disclosed herein. In accordance with an embodiment, the system includes a memory and a processor. The processor may be configured to receive visual data, location data, and motion data of a vehicle corresponding to a first instance in time, and map data corresponding to the location data. The processor may be configured to calculate a distance of the vehicle from the object based on the visual data. The processor may be further configured to validate the location data, the motion data, and the calculated distance of the vehicle from the object, based on the map data. The processor may be further configured to generate output data corresponding to the object, based on the validated location data, the validated motion data, and the validated distance of the vehicle from the object.
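A common way to compute a distance from visual data alone is the pinhole-camera similar-triangles estimate, which can then be validated against a map-derived distance. The sketch below assumes a known object height and a fixed tolerance; both function names and the tolerance are illustrative, not taken from the patent.

```python
def visual_distance(focal_px, object_height_m, object_height_px):
    """Pinhole-camera range estimate from visual data (similar triangles):
    distance = focal_length * real_height / apparent_height_in_pixels."""
    return focal_px * object_height_m / object_height_px

def validate_distance(distance_m, map_distance_m, tolerance_m=5.0):
    """Cross-check the camera-based estimate against a map-derived distance;
    the estimate is treated as valid if the two agree within tolerance."""
    return abs(distance_m - map_distance_m) <= tolerance_m
```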
Activity classification based on multi-sensor input
A method for classifying activity based on multi-sensor input includes receiving, from two or more sensors, sensor data indicating activity within a building, determining, for each of the two or more sensors and based on the received sensor data, (i) an extracted feature vector for activity within the building and (ii) location data, labelling each of the extracted feature vectors with the location data, generating, using the extracted feature vectors, an integrated feature vector, detecting a particular activity based on the integrated feature vector, and in response to detecting the particular activity, performing a monitoring action.
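The feature-integration and detection steps above can be sketched as follows. The concatenation order, location labels, and nearest-prototype detector are illustrative assumptions; the patent does not specify how the integrated vector is formed or classified.

```python
import numpy as np

def integrate_features(labelled):
    """Concatenate location-labelled per-sensor feature vectors into one
    integrated feature vector (sorted by location label for stability)."""
    ordered = sorted(labelled, key=lambda lv: lv[0])   # (location, vector) pairs
    return np.concatenate([v for _, v in ordered])

def detect_activity(integrated, prototypes):
    """Nearest-prototype detection: return the activity label whose
    prototype vector is closest to the integrated feature vector."""
    return min(prototypes, key=lambda k: np.linalg.norm(integrated - prototypes[k]))
```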
Computer based object detection within a video or image
Described herein are software and systems for analyzing videos and/or images. Software and systems described herein are configured in different embodiments to carry out different types of analyses. For example, in some embodiments, software and systems described herein are configured to locate an object of interest within a video and/or image.
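One standard way to locate an object of interest within an image is normalized cross-correlation against a template. The brute-force sketch below is an illustrative assumption, not the patent's method, and would be far too slow for real video.

```python
import numpy as np

def locate_object(image, template):
    """Exhaustive normalized cross-correlation search: return the (row, col)
    of the window in `image` that best matches `template`."""
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()          # zero-mean template
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = image[y:y + h, x:x + w]
            pz = patch - patch.mean()
            denom = np.linalg.norm(pz) * np.linalg.norm(t) + 1e-12
            score = float(np.sum(pz * t) / denom)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos
```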
Camera auto-calibration system
A seed camera disposed at a first location is manually calibrated. A second camera, disposed at a second location, detects a physical marker based on predefined characteristics of the physical marker. The physical marker is located within an overlapping field of view between the seed camera and the second camera. The second camera is calibrated based on a combination of the physical location of the physical marker, the first location of the seed camera, the second location of the second camera, a first image of the physical marker generated with the seed camera, and a second image of the physical marker generated with the second camera.
Image stitching device and image stitching method
An image stitching method includes: receiving a first image and a second image; determining that both the first image and the second image include a target object; obtaining a first brightness value and a second brightness value, the first brightness value being a brightness value of the target object in the first image, and the second brightness value being a brightness value of the target object in the second image; adjusting a brightness value of the first image and a brightness value of the second image according to the first brightness value and the second brightness value, so as to obtain a first image to be stitched and a second image to be stitched; and stitching the first image to be stitched and the second image to be stitched to obtain a first stitched image.
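The brightness-adjustment step can be sketched as a gain correction that pulls both images toward the mean of the target object's brightness in each, followed by concatenation. This is a gain-only sketch under assumed 8-bit pixel values; real stitching also needs geometric alignment and seam blending, which the patent's method would handle separately.

```python
import numpy as np

def match_and_stitch(img1, img2, b1, b2):
    """Scale both images so the target object's brightness (b1 in img1,
    b2 in img2) meets at their mean, then stitch side by side."""
    target = (b1 + b2) / 2.0
    adj1 = np.clip(img1 * (target / b1), 0, 255)   # first image to be stitched
    adj2 = np.clip(img2 * (target / b2), 0, 255)   # second image to be stitched
    return np.concatenate([adj1, adj2], axis=1)    # first stitched image
```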
Method and device for generating a second image from a first image
Described are methods and devices for applying a color gamut mapping process on a first image to generate a second image, where the content of the first and second images is similar but the respective color spaces of the first and second images are different. The color gamut mapping process may be controlled using a color gamut mapping mode obtained from a bitstream where the color gamut mapping mode belongs to a set comprising at least two preset modes and an explicit parameters mode. If the obtained color gamut mapping mode is the explicit parameters mode and the color gamut mapping process is not enabled for the explicit parameters mode, the color gamut mapping process may be controlled by a substitute color gamut mapping mode determined from additional data.
Image processing utilizing an entigen construct
A method performed by a computing device includes obtaining a set of image segment identigens for image segments of an image to produce sets of image segment identigens. A set of image segment identigens is a set of possible interpretations of a first image segment of the image segments. The method further includes identifying a subset of valid image segment identigens of each set of image segment identigens by applying identigen rules to the sets of image segment identigens to produce subsets of valid image segment identigens. Each valid image segment identigen of a subset of valid image segment identigens represents a most likely interpretation of a corresponding image segment. The method further includes generating an image entigen group utilizing the subsets of valid image segment identigens, where the image entigen group represents a most likely interpretation of the image.
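The identigen-filtering step can be sketched as pairwise constraint propagation: an interpretation survives only if it is compatible with at least one interpretation of every other segment. The rule representation below (a compatibility predicate) is an illustrative assumption about how the patent's "identigen rules" might be applied.

```python
def filter_identigens(candidate_sets, compatible):
    """Keep, in each set of possible interpretations, only those compatible
    with at least one interpretation of every other image segment."""
    valid = []
    for i, cands in enumerate(candidate_sets):
        keep = set()
        for c in cands:
            ok = all(
                any(compatible(c, other) for other in candidate_sets[j])
                for j in range(len(candidate_sets)) if j != i
            )
            if ok:
                keep.add(c)
        valid.append(keep)   # subsets of valid image segment identigens
    return valid
```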
Method for optimizing a data model and device using the same
A method for optimizing a data model is used in a device. The device acquires data information, selects at least two data models according to the data information, and utilizes the data information to train the at least two data models. The device acquires the accuracy of each of the at least two data models, determines a target data model having the greatest accuracy among the at least two data models, and optimizes the target data model.
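The train-score-select loop above can be sketched generically; the training and evaluation callables are placeholders for whatever model family and accuracy metric the device uses.

```python
def pick_best_model(models, train, evaluate, data):
    """Train each candidate data model, score its accuracy, and return
    the target data model (the most accurate one) with its accuracy."""
    trained = [train(m, data) for m in models]
    accuracies = [evaluate(m, data) for m in trained]
    best = max(range(len(trained)), key=lambda i: accuracies[i])
    return trained[best], accuracies[best]
```

The selected target model would then be passed to a further optimization stage, as the abstract describes.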