G06V10/803

IMAGE GAZE CORRECTION METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

An image gaze correction method, apparatus, electronic device, computer-readable storage medium, and computer program product related to the field of artificial intelligence technologies are provided. The image gaze correction method includes: acquiring an eye image from an image; performing feature extraction processing on the eye image to obtain feature information of the eye image; performing, based on the feature information and a target gaze direction, gaze correction processing on the eye image to obtain an initially corrected eye image and an eye contour mask; performing, by using the eye contour mask, adjustment processing on the initially corrected eye image to obtain a corrected eye image; and generating a gaze corrected image based on the corrected eye image.
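The final adjustment step described above, blending the initially corrected eye image back into the original using the eye contour mask, can be sketched as follows. This is a minimal illustration of mask-based blending, not the patent's actual implementation; all function and parameter names are assumptions.

```python
import numpy as np

def blend_with_mask(original_eye: np.ndarray,
                    corrected_eye: np.ndarray,
                    contour_mask: np.ndarray) -> np.ndarray:
    """Blend the initially corrected eye image into the original using
    the eye contour mask, so the correction only applies inside the
    eye region (hypothetical adjustment step)."""
    mask = contour_mask.astype(np.float32)
    if mask.ndim == 2:                  # broadcast an HxW mask over RGB
        mask = mask[..., None]
    mask = np.clip(mask, 0.0, 1.0)
    out = mask * corrected_eye.astype(np.float32) \
        + (1.0 - mask) * original_eye.astype(np.float32)
    return out.astype(original_eye.dtype)
```

A soft (non-binary) mask would feather the transition at the eye contour, which is presumably why a mask is produced alongside the corrected image.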

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
20230049796 · 2023-02-16 ·

The present technology relates to an information processing apparatus, an information processing method, and a program that allow a sensing time of a sensor to be easily and accurately determined. An information processing apparatus includes a control circuit that outputs a control signal for controlling a sensing timing of a sensor, a counter that updates a counter value in a predetermined cycle, and an addition circuit that adds, to sensor data output from the sensor, sensing time information including a first counter value, a second counter value, and a GNSS (Global Navigation Satellite System) time in a GNSS. The first counter value is obtained when the control signal is output from the control circuit, and the second counter value is obtained when a pulse signal synchronous with the GNSS time is output from a GNSS receiver. The present technology can be applied to, for example, a vehicle-mounted camera.
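The timestamp recovery described above can be sketched arithmetically: the GNSS time marks the instant of the PPS pulse (the second counter value), and the counter difference to the control signal (the first counter value) gives the offset of the sensing timing. A minimal sketch, assuming a free-running counter with a known period; the names and the 1 µs period are assumptions, not from the patent.

```python
def sensing_time(gnss_time_s: float,
                 counter_at_control: int,
                 counter_at_pps: int,
                 counter_period_s: float = 1e-6) -> float:
    """Recover the sensing time: offset the GNSS time of the PPS pulse
    by the number of counter ticks elapsed until the control signal."""
    ticks = counter_at_control - counter_at_pps
    return gnss_time_s + ticks * counter_period_s
```

For example, if the control signal is sampled 1000 ticks after the PPS pulse, the sensing time is the GNSS time plus 1 ms.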

SYSTEM REPRESENTATION AND METHOD OF USE
20230050389 · 2023-02-16 ·

In variants, a system management platform can include a set of system representations and a set of platform-standard element models. Each system representation can include a set of component representations related by a set of constraint representations, which can represent the sensing components of a system and the relationships therebetween, respectively, and store component-specific and constraint-specific calibration parameter values, respectively. The component representations can optionally reference the element models.

Systems, devices, and methods for in-field diagnosis of growth stage and crop yield estimation in a plant area

Methods, devices, and systems may be utilized for detecting one or more properties of a plant area and generating a map of the plant area indicating at least one property of the plant area. The system comprises an inspection system associated with a transport device, the inspection system including one or more sensors configured to generate data for a plant area, including to capture at least 3D image data and 2D image data and to generate geolocational data. A datacenter is configured to: receive the 3D image data, 2D image data, and geolocational data from the inspection system; correlate the 3D image data, 2D image data, and geolocational data; and analyze the data for the plant area. A dashboard is configured to display a map with icons corresponding to the proper geolocation, together with image data and the analysis.
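The correlation step, pairing captured image data with geolocational data, could be done by matching timestamps. A minimal sketch of a nearest-timestamp join, assuming each record is a `(timestamp, payload)` tuple; the record layout, names, and the 0.5 s tolerance are all assumptions.

```python
from bisect import bisect_left

def correlate_by_time(images, geolocations, max_dt=0.5):
    """Pair each image record with the nearest geolocation fix by
    timestamp. `geolocations` must be sorted by timestamp; pairs
    farther apart than `max_dt` seconds are dropped."""
    times = [t for t, _ in geolocations]
    pairs = []
    for t_img, payload in images:
        i = bisect_left(times, t_img)
        # candidates: the fix just before and just after the image time
        best = min((j for j in (i - 1, i) if 0 <= j < len(times)),
                   key=lambda j: abs(times[j] - t_img), default=None)
        if best is not None and abs(times[best] - t_img) <= max_dt:
            pairs.append((payload, geolocations[best][1]))
    return pairs
```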

Multi-channel lidar sensor module

The present invention relates to a multi-channel lidar sensor module capable of measuring at least two target objects using one image sensor. The multi-channel lidar sensor module according to an embodiment of the present invention includes at least one pair of light emitting units configured to emit laser beams, and a light receiving unit formed between the at least one pair of light emitting units and configured to receive at least one pair of reflected laser beams which are emitted from the at least one pair of light emitting units and reflected by the target objects.

SENSOR AIMING DEVICE, DRIVING CONTROL SYSTEM, AND CORRECTION AMOUNT ESTIMATION METHOD

A sensor aiming device includes: a target positional relationship processing unit configured to output positional relationship information of first and second targets; a sensor observation information processing unit configured to convert observation results of the first and second targets into a predetermined unified coordinate system according to a coordinate conversion parameter, perform time synchronization at a predetermined timing, and extract first target information indicating a position of the first target and second target information indicating a position of the second target; a position estimation unit configured to estimate a position of the second target using the first target information, the second target information, and the positional relationship information; and a sensor correction amount estimation unit configured to calculate a deviation amount of the second sensor using the second target information and the estimated position of the second target, and to estimate a correction amount.
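The estimation chain above can be illustrated in simplified form: predict the second target's position from the first target's observation plus the known positional relationship, then take the deviation from the second target's observation as the basis for the correction amount. A 2-D sketch only; it assumes the unified coordinate conversion and time synchronization have already been applied, and all names are assumptions.

```python
import numpy as np

def estimate_correction(first_obs, second_obs, rel_offset):
    """Estimate the second target's position and derive a correction:
    deviation = observed - estimated, and the correction is its
    negation (simplified; the patent's units are collapsed into one
    function here)."""
    first_obs = np.asarray(first_obs, dtype=float)
    second_obs = np.asarray(second_obs, dtype=float)
    estimated_second = first_obs + np.asarray(rel_offset, dtype=float)
    deviation = second_obs - estimated_second  # misalignment of sensor 2
    return -deviation                          # correction to apply
```

In practice the deviation would be accumulated over many target observations and fed back into the coordinate conversion parameter rather than applied from a single sample.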

METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

A method of processing an image, an electronic device, and a storage medium are provided, relating to the field of artificial intelligence, in particular to the fields of computer vision and intelligent transportation technologies. The method includes: determining at least one key frame image in a scene image sequence captured by a target camera; determining a camera pose parameter associated with each key frame image in the at least one key frame image, according to a geographic feature associated with the key frame image; and projecting each scene image in the scene image sequence according to the camera pose parameter associated with the key frame image to obtain a target projection image, so as to generate a scene map based on the target projection image. The geographic feature associated with any key frame image indicates localization information of the target camera at the time instant of capturing the corresponding key frame image.
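The projection step above relies on a camera pose; as a point of reference, a generic pinhole projection with a pose `(R, t)` and intrinsics `K` looks like the following. This is standard camera geometry, not the patent's specific projection, and all names are assumptions.

```python
import numpy as np

def project_points(points_world, R, t, K):
    """Project Nx3 world points into image pixels using a key frame's
    camera pose (rotation R, translation t) and intrinsics K."""
    pts = np.asarray(points_world, dtype=float)   # N x 3
    cam = R @ pts.T + t.reshape(3, 1)             # world -> camera frame
    uv = K @ cam                                  # camera -> pixel rays
    return (uv[:2] / uv[2]).T                     # perspective divide
```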

IMAGING SYSTEM FOR DETECTING HUMAN-OBJECT INTERACTION AND A METHOD FOR DETECTING HUMAN-OBJECT INTERACTION
20230039867 · 2023-02-09 ·

The present application discloses an imaging system for detecting human-object interaction and a method for detecting human-object interaction thereof. The imaging system includes an event sensor, an image sensor, and a controller. The event sensor is configured to obtain an event data set of the targeted scene according to variations of light intensity sensed by pixels of the event sensor when an event occurs in the targeted scene. The image sensor is configured to capture a visual image of the targeted scene. The controller is configured to detect a human according to the event data set, trigger the image sensor to capture the visual image when the human is detected, and detect the human-object interaction in the targeted scene according to the visual image and a series of event data sets obtained by the event sensor during the event.
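The controller's trigger logic, capturing a frame only when the event stream indicates a human, can be sketched as a small loop. A hypothetical interface: `detect_human` and `capture_image` stand in for the event-based detector and the image sensor, and are not names from the patent.

```python
def run_controller(event_batches, detect_human, capture_image):
    """Accumulate event data sets and trigger the image sensor only
    when a human is detected in a batch. Returns the captured visual
    images and the full event history for the downstream
    interaction detector."""
    images, history = [], []
    for batch in event_batches:
        history.append(batch)
        if detect_human(batch):             # lightweight check on events
            images.append(capture_image())  # wake the image sensor
    return images, history
```

Keeping the image sensor idle until the event sensor fires is the power-saving point of pairing the two modalities.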

System, apparatus and method for automated medication adherence improvement

Computer and mobile device-based systems and computer-implemented methods are described for automated medication adherence improvement for patients in medication-assisted treatments. The computer and mobile device-based systems include modules and components that help patients identify prescribed medications and log medication events, and that provide patients with personalized and targeted adherence-enhancing interventions consisting of short questions, tips, advice, suggestions, strategies, and the like, by applying data mining and statistical analysis techniques to the individual- and population-level data collected primarily from the same system.

Sensor fusion for precipitation detection and control of vehicles

An apparatus includes a processor configured to be disposed with a vehicle and a memory coupled to the processor. The memory stores instructions to cause the processor to receive at least two of: radar data, camera data, lidar data, or sonar data. The sensor data is associated with a predefined region of a vicinity of the vehicle while the vehicle is traveling during a first time period. At least a portion of the vehicle is positioned within the predefined region during the first time period. The instructions also cause the processor to detect that no other vehicle is present within the predefined region. An environment of the vehicle during the first time period is classified as one state from a set of states that includes at least one of dry, light rain, heavy rain, light snow, or heavy snow, based on the at least two types of sensor data, to produce an environment classification. An operational parameter of the vehicle is modified based on the environment classification.
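A toy version of the fusion-and-classify step might average per-sensor precipitation scores and map the result onto the discrete states listed above. This is only a rule-based sketch of the idea; the score representation, the thresholds, and the use of temperature to split rain from snow are all assumptions, not the patent's classifier.

```python
def classify_environment(sensor_scores, temperature_c=10.0):
    """Fuse per-sensor precipitation-intensity scores (0..1, keyed by
    modality; at least two of radar/camera/lidar/sonar required) into
    one of the states: dry, light/heavy rain, light/heavy snow."""
    if len(sensor_scores) < 2:
        raise ValueError("at least two sensor modalities are required")
    mean = sum(sensor_scores.values()) / len(sensor_scores)
    if mean < 0.1:
        return "dry"
    kind = "snow" if temperature_c <= 0.0 else "rain"
    return ("light " if mean < 0.5 else "heavy ") + kind
```

The resulting label could then drive an operational parameter such as a speed cap or following-distance margin.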