
Systems and methods for mask-based temporal dithering
11562679 · 2023-01-24

In one embodiment, a computing system may determine a target grayscale value associated with a target image to be represented by a plurality of subframes. The system may determine grayscale ranges based on the target grayscale value. Each grayscale range may correspond to a combination of zero or more subframes of the plurality of subframes. The system may select dot subsets from a dithering mask based on the grayscale ranges. Each of the dot subsets may correspond to a grayscale range. The system may generate the subframes based on (1) the selected dot subsets and (2) the respective combinations of zero or more subframes. The subframes may have a smaller number of bits per color than the target image. The system may display the subframes sequentially in the time domain on a display to represent the target image.
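A minimal sketch of the idea of approximating a grayscale image with binary subframes using a dithering mask. The mask thresholds, phase-staggering scheme, and function names here are illustrative assumptions, not the patent's actual grayscale-range or dot-subset construction:

```python
import numpy as np

def temporal_dither(target, mask, n_subframes):
    """Approximate a grayscale target image (values in [0, 1]) with
    n_subframes binary subframes whose temporal average matches the target.
    `mask` is an illustrative dithering mask of thresholds in [0, 1) with the
    same shape as `target`; it decides which pixels receive an extra 'on'
    subframe and staggers each pixel's on-phase across the subframe sequence.
    """
    target = np.asarray(target, dtype=float)
    # Ideal number of 'on' subframes per pixel, split into integer + fraction.
    scaled = target * n_subframes
    base = np.floor(scaled).astype(int)
    frac = scaled - base
    # Pixels whose fractional remainder exceeds the mask threshold get one
    # extra 'on' subframe (a stand-in for mask-based dot-subset selection).
    on_count = np.clip(base + (frac > mask), 0, n_subframes)
    # Stagger each pixel's starting subframe by the mask so neighbouring
    # pixels do not all switch on in the same subframe.
    phase = (mask * n_subframes).astype(int)
    subframes = []
    for k in range(n_subframes):
        slot = (k - phase) % n_subframes
        subframes.append((slot < on_count).astype(np.uint8))
    return subframes
```

Averaging the returned 1-bit subframes over time reproduces the multi-bit target values, which is the sense in which the subframes carry fewer bits per color than the target image.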

IMAGE PROCESSING-BASED WEIGHT ESTIMATION FOR AQUACULTURE
20230230409 · 2023-07-20

Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for fish weight estimation based on fish tracks identified in images. In some implementations, a method includes obtaining images of fish enclosed in a fish enclosure, identifying fish tracks shown in the images of the fish, determining a quality score for each of the fish tracks, selecting a subset of the fish tracks based on the quality scores, determining a representative weight of the fish in the fish enclosure based on weights of the fish shown in the subset of the fish tracks, and outputting the representative weight for display or storage at a device connected to the one or more processors.
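The track-selection and aggregation steps can be sketched as follows. The quality function, field names, and use of medians are illustrative assumptions; the patent does not specify these details in the abstract:

```python
from statistics import median

def representative_weight(tracks, quality_fn, top_fraction=0.25):
    """Estimate a representative fish weight from per-track weight estimates.
    Each track is a dict holding per-image weight estimates plus whatever
    features `quality_fn` scores (track length, detection confidence, ...).
    Keeps only the highest-quality fraction of tracks, then aggregates.
    """
    scored = sorted(tracks, key=quality_fn, reverse=True)
    k = max(1, int(len(scored) * top_fraction))
    selected = scored[:k]
    # One weight per selected track, then a robust summary across tracks.
    per_track = [median(t["weights"]) for t in selected]
    return median(per_track)
```

Filtering on track quality before aggregating is what keeps poorly observed fish (short or low-confidence tracks) from skewing the enclosure-level estimate.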

Convolution-based camera and display calibration

Techniques for calibrating cameras and displays are disclosed. An image of a target is captured using a camera. The target includes a tessellation having a repeated structure of tiles. The target further includes unique patterns superimposed onto the tessellation. Matrices are formed based on pixel intensities within the captured image. Each of the matrices includes values each corresponding to the pixel intensities within one of the tiles. The matrices are convolved with kernels to generate intensity maps. Each of the kernels is generated based on a corresponding unique pattern of the unique patterns. An extrema value is identified in each of the intensity maps. A location of each of the unique patterns within the image is determined based on the extrema value for each of the intensity maps. A device calibration is performed using the location of each of the unique patterns.

METHOD AND APPARATUS FOR OPERATING A COMPANION PROCESSING UNIT
20230015427 · 2023-01-19

Embodiments of apparatus and method for operating a companion processing unit. In an example, an apparatus includes an application processor, a memory, and a companion processing unit. The apparatus also includes an image sensor. The application processor is configured to turn on the image sensor, perform a scene detection on an image received from the image sensor, and determine whether a scene category of the image is supported by the companion processing unit. In response to the scene category being supported by the companion processing unit, the application processor controls the companion processing unit to start a boot-up sequence corresponding to the scene category. The boot-up sequence enables the companion processing unit to enter a mission mode in which the companion processing unit is ready to receive and process image data from the image sensor and send processed image data to the application processor.
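The gating logic on the application-processor side can be sketched as below. The scene categories, class names, and fallback behavior are illustrative assumptions:

```python
SUPPORTED_SCENES = {"portrait", "night", "hdr"}   # illustrative categories

class CompanionUnit:
    def __init__(self):
        self.mode = "off"

    def boot(self, scene):
        # Boot-up sequence: load the processing pipeline for this scene
        # category, then enter mission mode, ready to stream image data.
        self.pipeline = f"{scene}_pipeline"
        self.mode = "mission"

def handle_frame(unit, scene_category):
    """Application-processor side: start the companion boot-up sequence only
    if the detected scene category is supported; otherwise keep processing
    on the application processor itself."""
    if scene_category in SUPPORTED_SCENES:
        unit.boot(scene_category)
        return "companion"
    return "application"
```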

APPARATUS FOR ANALYZING A PAYLOAD BEING TRANSPORTED IN A LOAD CARRYING CONTAINER OF A VEHICLE

An apparatus for analyzing a payload being transported in a load carrying container of a vehicle is disclosed. The apparatus includes a camera disposed to successively capture images of vehicles traversing a field of view of the camera. The apparatus also includes at least one processor in communication with the camera, the at least one processor being operably configured to select at least one image from the successively captured images in response to a likelihood of a vehicle and load carrying container being within the field of view in the at least one image, and image data associated with the at least one image meeting a suitability criterion for further processing. The further processing includes causing the at least one processor to process the selected image to identify a payload region of interest within the image and to generate a payload analysis within the identified payload region of interest based on the image data associated with the at least one image.
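The two-part selection test (likelihood of a vehicle in view, plus a suitability criterion on the image data) can be sketched as a simple filter. The field names, sharpness-based suitability score, and thresholds are illustrative assumptions:

```python
def select_images(frames, likelihood_threshold=0.8, sharpness_threshold=100.0):
    """Keep only frames where (1) a vehicle and load carrying container are
    likely within the field of view and (2) the image data meets a
    suitability criterion for further processing (here, a sharpness score
    stands in for whatever criterion an implementation would use)."""
    return [
        f for f in frames
        if f["vehicle_likelihood"] >= likelihood_threshold
        and f["sharpness"] >= sharpness_threshold
    ]
```

Only frames passing both tests would proceed to payload-region identification and analysis, avoiding wasted processing on empty or blurred captures.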

ESTIMATING TRACKING SENSOR PARAMETRIZATION USING KNOWN SURFACE CONSTRAINTS

A sensor system and a method of operating a sensor system including a plurality of sensors tracking a moving object in an area having known bounding surfaces. The apparatus and method calculate a time-specific position of the object based on data and sensor parameters from at least two of the plurality of sensors and determine errors between the calculated time-specific positions. The method and apparatus calculate a minimum system error attributable to the at least two sensors by constraining at least one dimension in each sensor's data used in the calculated time-specific position of the object, the constraint being based on an object/surface interaction; the minimum system error is calculated by solving for modified sensor parameters for each sensor.
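A toy illustration of the core constraint: at moments of an object/surface interaction (e.g. a bounce), the object's true height is pinned to the known surface at 0, so each sensor's readings at those moments expose its error directly. Here the "modified sensor parameter" solved for is just a per-sensor additive bias, a deliberate simplification of the patent's parametrization:

```python
def calibrate_offsets(bounce_readings):
    """For each sensor, estimate an additive bias from its height readings
    taken at bounce instants, when the true height is constrained to 0.
    The least-squares bias estimate is simply the mean bounce reading."""
    return {sensor: sum(r) / len(r) for sensor, r in bounce_readings.items()}

def system_error(bounce_readings, offsets):
    """Mean squared residual against the surface constraint after
    correcting each sensor by its solved bias."""
    total, n = 0.0, 0
    for sensor, readings in bounce_readings.items():
        for r in readings:
            total += (r - offsets[sensor]) ** 2
            n += 1
    return total / n
```

Minimizing this residual over the sensor parameters is the 1-D analogue of solving for modified parameters that bring the multi-sensor position estimates into agreement with the known surface.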

Bacteria classification

A method, a computer program product, and a computer system for classifying bacteria. The method comprises extracting a morphology signature corresponding to one or more bacteria and extracting a motility signature corresponding to the one or more bacteria. The method further comprises merging the morphology signature and the motility signature into a merged vector signature and classifying the one or more bacteria based on the merged vector signature.
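The merge-then-classify step can be sketched as concatenating the two signatures and applying any vector classifier. The nearest-prototype classifier and the signature contents below are illustrative stand-ins, not the patent's method:

```python
def classify(morphology_sig, motility_sig, prototypes):
    """Merge a morphology feature vector and a motility feature vector by
    concatenation, then classify by nearest prototype (squared Euclidean
    distance). `prototypes` maps class label -> reference merged vector."""
    merged = list(morphology_sig) + list(motility_sig)

    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(merged, prototypes[label]))

    return min(prototypes, key=dist)
```

The point of the merged vector is that shape cues and movement cues jointly discriminate bacteria that either signature alone might confuse.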

ANALYSIS APPARATUS, DATA GENERATION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
20230008227 · 2023-01-12

An analysis apparatus includes: a communication unit configured to receive first three-dimensional sensing data from a first sensor and receive second three-dimensional sensing data from a second sensor provided in a position different from a position where the first sensor is provided; a calculation unit configured to calculate a transformation parameter used to transform a reference model indicating the three-dimensional shape of a target object into the three-dimensional shape of the target object indicated by the first and second three-dimensional sensing data; a correction unit configured to correct the transformation parameter in such a way that the reference model is transformed into the three-dimensional shape of the target object at a first timing, based on a difference between the first timing and a second timing at which the respective sensing data are acquired; and a generation unit configured to generate three-dimensional data obtained by transforming the reference model using the transformation parameter after the correction.
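The timing correction can be illustrated with the simplest case: a translation parameter estimated from data sensed at the second timing is shifted back to the first timing. The constant-velocity assumption and restriction to translation are simplifications for this sketch:

```python
def correct_transform(translation_at_t2, velocity, t1, t2):
    """Correct a translation parameter estimated from data sensed at time t2
    so it describes the target object's pose at time t1, assuming constant
    object velocity between the two sensing timings.

    translation_at_t2 : [x, y, z] translation estimated at t2
    velocity          : [vx, vy, vz] object velocity (assumed known)
    """
    dt = t2 - t1
    # Undo the motion that occurred between the two timings.
    return [p - v * dt for p, v in zip(translation_at_t2, velocity)]
```

A full implementation would correct a rigid transform (rotation and translation) rather than translation alone, but the principle of using the timing difference is the same.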

Apparatus and Method for Virtual Reality

A data processing apparatus comprises receiving circuitry to receive tracking data indicative of at least one of a tracked position and orientation of a head-mountable display (HMD), image processing circuitry to generate a sequence of image frames for display in dependence upon the tracking data, detection circuitry to detect an image feature in a first image frame and to detect a corresponding image feature in a second image frame, and correlation circuitry to: calculate a difference between the image feature in the first image frame and the corresponding image feature in the second image frame; generate difference data indicative of a difference between a viewpoint for the first image frame and a viewpoint for the second image frame in dependence upon the difference between the image feature in the first image frame and the corresponding image feature in the second image frame; and generate output data in dependence upon a difference between the difference data and the tracking data associated with the first and second image frames.
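A one-dimensional sketch of the correlation step: infer the viewpoint rotation implied by a feature's horizontal shift between two frames, then compare it with the rotation reported by HMD tracking. The pinhole small-angle model and parameter names are simplifying assumptions:

```python
import math

def tracking_discrepancy(feat_x1, feat_x2, focal_px, tracked_yaw_delta):
    """Estimate the yaw change between two frames from the horizontal shift
    of a tracked image feature (pinhole model, focal length in pixels), and
    return its disagreement with the yaw change reported by the HMD tracker.
    The return value plays the role of the 'output data': image-derived
    viewpoint difference minus tracked viewpoint difference."""
    image_yaw_delta = math.atan2(feat_x2 - feat_x1, focal_px)
    return image_yaw_delta - tracked_yaw_delta
```

A near-zero result indicates the rendered frames and the tracking data agree; a persistent nonzero result could flag latency or calibration issues in the HMD pipeline.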

Methods and apparatus to generate photo-realistic three-dimensional models of a photographed environment

Methods and apparatus to generate photo-realistic three-dimensional models of a photographed environment are disclosed. An apparatus includes an object position calculator to determine a three-dimensional (3D) position of an object detected within a first image of an environment and within a second image of the environment. The apparatus further includes a 3D model generator to generate a 3D model of the environment based on the first image and the second image. The apparatus also includes a model integrity analyzer to detect a difference between the 3D position of the object and the 3D model. The 3D model generator automatically modifies the 3D model based on the difference in response to the difference satisfying a confidence threshold.
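The integrity-check-and-modify loop can be sketched as below, using a nearest-point distance as the difference measure. The point-cloud model, threshold values, and snap-in update are illustrative assumptions standing in for the patent's model representation:

```python
def update_model_if_inconsistent(model_points, object_pos, confidence,
                                 threshold=0.9, tol=0.05):
    """Compare an independently triangulated 3D object position against a
    3D model (here, a list of (x, y, z) points). If the detection confidence
    satisfies the threshold and the position disagrees with the nearest model
    point by more than `tol`, automatically modify the model by adding the
    point. Returns True if the model was modified."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    nearest = min(model_points, key=lambda p: d2(p, object_pos))
    if confidence >= threshold and d2(nearest, object_pos) > tol ** 2:
        model_points.append(tuple(object_pos))   # automatic modification
        return True
    return False
```

Requiring the confidence threshold before modifying the model prevents noisy single-image detections from corrupting an otherwise consistent reconstruction.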