G06V10/80

METHOD OF DETERMINING VISUAL INTERFERENCE USING A WEIGHTED COMBINATION OF CIS AND DVS MEASUREMENT
20230013877 · 2023-01-19 ·

The embodiments herein provide a method of obtaining a weighted combination of dynamic vision sensor (DVS) measurements and contact image sensor (CIS) measurements for determining visual inference in an electronic device. The method includes receiving, by the electronic device, a DVS image and a CIS image from an image sensor; determining, by the electronic device, a plurality of parameters associated with the DVS image and feature velocities of a plurality of CIS features present in the CIS image; determining, by the electronic device, a DVS feature confidence based on the plurality of parameters associated with the DVS image; determining, by the electronic device, a CIS feature confidence based on the feature velocities of the plurality of CIS features present in the CIS image; and calculating, by the electronic device, a weighted visual inference based on the determined DVS feature confidence and the determined CIS feature confidence.
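The abstract describes confidence-weighted fusion of two sensor measurements. A minimal sketch of that flow follows; the confidence models and the normalized weighting are illustrative assumptions, not the patent's actual formulas:

```python
# Hypothetical sketch of the weighted combination in the abstract.
# The confidence models below are assumptions for illustration only.
def dvs_confidence(params):
    """Map DVS image parameters (here: event density, noise level) to [0, 1]."""
    event_density, noise_level = params
    return max(0.0, min(1.0, event_density * (1.0 - noise_level)))

def cis_confidence(feature_velocities):
    """Assumed model: fast-moving CIS features imply blur, so confidence drops."""
    mean_velocity = sum(feature_velocities) / len(feature_velocities)
    return 1.0 / (1.0 + mean_velocity)

def weighted_inference(dvs_measure, cis_measure, dvs_conf, cis_conf):
    """Blend the two measurements by normalized confidence weights."""
    total = dvs_conf + cis_conf
    return (dvs_conf * dvs_measure + cis_conf * cis_measure) / total
```

With equal confidences the result is the plain average; as one sensor's confidence drops, its measurement contributes proportionally less.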

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM

An image processing device including an image acquisition unit configured to acquire an image containing a subject via a lens unit; a distance information acquisition unit configured to acquire distance information indicating a distance to the subject; an auxiliary data generation unit configured to generate auxiliary data related to the distance information; a data stream generation unit configured to generate a data stream in which the image, the distance information, and the auxiliary data are superimposed; and an output unit configured to output the data stream to the outside.
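The abstract only says the image, distance information, and auxiliary data are superimposed into one data stream; the length-prefixed framing below is an assumption, sketched purely to make the packing and unpacking concrete:

```python
import json
import struct

# Hypothetical framing: each part is serialized and prefixed with a 4-byte
# big-endian length so a receiver can split the stream back apart.
def build_data_stream(image_bytes, distance_map, auxiliary):
    parts = [
        image_bytes,
        json.dumps(distance_map).encode(),
        json.dumps(auxiliary).encode(),
    ]
    stream = b""
    for part in parts:
        stream += struct.pack(">I", len(part)) + part
    return stream

def parse_data_stream(stream):
    """Recover the parts: [image bytes, distance info, auxiliary data]."""
    parts, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        parts.append(stream[offset:offset + length])
        offset += length
    return parts
```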

VEHICLE IDENTIFICATION DEVICE

An object herein is to provide a vehicle identification device by which the relation between a towing vehicle and a towed vehicle is accurately determined and stored. A vehicle identification device includes: a vehicle information acquisition unit for acquiring vehicle information indicative of positions, traveling directions and traveling speeds of multiple vehicles; a towing relation determination unit for extracting, from the vehicle information acquired by the vehicle information acquisition unit, a vehicle train that is a succession of vehicles, to thereby determine that a leading vehicle in the vehicle train is a towing vehicle and a portion of the vehicle train subsequent to the towing vehicle is at least one towed vehicle towed by the towing vehicle; and a towing relation storing unit for storing a towing relation represented by the towing vehicle and the at least one towed vehicle determined by the towing relation determination unit.
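The train-extraction step can be sketched as grouping successive vehicles that travel close together on the same heading. The gap and heading thresholds here are illustrative assumptions; the patent does not specify them:

```python
import math

# Hypothetical criterion: successive vehicles within a small gap and on a
# similar heading form one vehicle train; the leader is the towing vehicle.
def find_towing_relations(vehicles, max_gap=15.0, max_heading_diff=5.0):
    """vehicles: dicts with 'id', 'x', 'y', 'heading', assumed sorted
    front-to-back along the direction of travel."""
    relations = []

    def close_train(train):
        if len(train) > 1:
            relations.append({"towing": train[0]["id"],
                              "towed": [v["id"] for v in train[1:]]})

    train = [vehicles[0]]
    for prev, cur in zip(vehicles, vehicles[1:]):
        gap = math.hypot(cur["x"] - prev["x"], cur["y"] - prev["y"])
        same_dir = abs(cur["heading"] - prev["heading"]) <= max_heading_diff
        if gap <= max_gap and same_dir:
            train.append(cur)
        else:
            close_train(train)
            train = [cur]
    close_train(train)
    return relations
```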

System, method, and platform for auto machine learning via optimal hybrid AI formulation from crowd

Aspects of the subject disclosure may include, for example, receiving a plurality of proposed machine learning solutions to a machine learning problem, including receiving, for each respective proposed machine learning solution of the plurality of proposed machine learning solutions, one or more of a machine learning model, a dataset, and a data pipeline output; automatically determining hybrid solutions to the machine learning problem, including combining, by the processing system, at least a first component from a first proposed machine learning solution with at least a second component from a second proposed machine learning solution; and ranking the hybrid solutions, including determining a log loss score for each hybrid solution and sorting the hybrid solutions according to the log loss score for each hybrid solution. Other embodiments are disclosed.
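The ranking step is concrete enough to sketch: score each hybrid's predicted probabilities against held-out labels with log loss and sort ascending (lower is better). The hybrid construction itself is omitted, and the names below are illustrative:

```python
import math

# Standard binary log loss, with clipping so log(0) never occurs.
def log_loss(y_true, y_prob, eps=1e-15):
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

def rank_hybrids(hybrids, y_true):
    """hybrids: list of (name, predicted_probabilities); returns the
    solutions sorted best-first by log loss score."""
    scored = [(name, log_loss(y_true, probs)) for name, probs in hybrids]
    return sorted(scored, key=lambda item: item[1])
```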

ROADMAP GENERATION SYSTEM AND METHOD OF USING
20230222788 · 2023-07-13 ·

A method of determining a roadway map includes receiving an image from above a roadway. The method further includes generating a skeletonized map based on the received image, wherein the skeletonized map comprises a plurality of roads. The method includes identifying intersections based on joining of multiple roads of the plurality of roads in the skeletonized map. The method includes partitioning the skeletonized map based on the identified intersections, wherein partitioning the skeletonized map defines a roadway data set and an intersection data set. The method includes analyzing the roadway data set to determine a number of lanes in each roadway of the plurality of roads. The method further includes analyzing the intersection data set to determine lane connections in the identified intersections. The method further includes merging results of the analyzed roadway data set and the analyzed intersection data set to generate the roadway map.
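The partitioning step can be sketched on a skeletonized map represented as a binary pixel grid: centerline pixels with three or more road neighbours are treated as intersection points and the rest as roadway points. Both the 4-connectivity and the degree threshold are assumptions for illustration:

```python
# Hypothetical partitioning of a skeletonized map (1 = road centerline pixel)
# into a roadway data set and an intersection data set, by branch degree.
def partition_skeleton(skeleton):
    rows, cols = len(skeleton), len(skeleton[0])

    def degree(r, c):
        count = 0
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                count += skeleton[r + dr][c + dc]
        return count

    roadways, intersections = set(), set()
    for r in range(rows):
        for c in range(cols):
            if skeleton[r][c]:
                # Three or more road neighbours means roads join here.
                (intersections if degree(r, c) >= 3 else roadways).add((r, c))
    return roadways, intersections
```

On a plus-shaped skeleton, only the centre pixel (where four arms join) lands in the intersection data set; the arms form the roadway data set.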

Event-assisted autofocus methods and apparatus implementing the same
11558542 · 2023-01-17 ·

A focus method and an image sensing apparatus are disclosed. The method includes capturing, by a plurality of event sensing pixels, event data of a targeted scene, wherein the event data indicates which pixels of the event sensing pixels have changes in light intensity, accumulating the event data for a predetermined time interval to obtain accumulated event data, determining whether a scene change occurs in the targeted scene according to the accumulated event data, obtaining one or more interest regions in the targeted scene according to the accumulated event data in response to the scene change, and providing at least one of the one or more interest regions for a focus operation. The image sensing apparatus comprises a plurality of image sensing pixels, a plurality of event sensing pixels, and a controller configured to perform said method.
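The event-accumulation and scene-change steps can be sketched directly: count events per pixel over the interval, flag a scene change when total activity crosses a threshold, and take the most active pixels as interest regions. The thresholds are illustrative assumptions:

```python
# Illustrative sketch of the accumulation-based focus trigger.
def accumulate_events(events, shape):
    """events: (row, col) pixels that reported an intensity change."""
    rows, cols = shape
    acc = [[0] * cols for _ in range(rows)]
    for r, c in events:
        acc[r][c] += 1
    return acc

def detect_scene_change(acc, change_threshold=10):
    """Assumed rule: enough total events in the interval = scene change."""
    return sum(map(sum, acc)) >= change_threshold

def interest_regions(acc, min_count=2):
    """Pixels with sustained activity become candidate focus regions."""
    return [(r, c)
            for r, row in enumerate(acc)
            for c, count in enumerate(row)
            if count >= min_count]
```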

MULTISCALE POINT CLOUD CLASSIFICATION METHOD AND SYSTEM

The present disclosure discloses a multiscale point cloud classification method. The method includes the following steps: acquiring 3D unordered point cloud data; performing feature extraction and classification on the acquired point cloud data using a pre-trained parallel classification network to obtain output results, wherein the parallel classification network includes a plurality of basic networks with the same structure; and fusing the output results of the parallel classification network using a pre-trained deep Q network to obtain a final result of point cloud classification. The present disclosure can improve the accuracy and robustness of point cloud classification.
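The fusion step's interface can be sketched as follows. Note the simplification: the patent fuses with a pre-trained deep Q network, whereas this sketch stands in a plain weighted average for that learned fusion, purely to show how per-network class scores combine into one decision:

```python
# Each basic network in the parallel classifier emits a class-score vector;
# fusion combines them. A fixed-weight average replaces the patent's deep
# Q network here, for illustration only.
def fuse_outputs(per_network_scores, weights=None):
    n = len(per_network_scores)
    weights = weights or [1.0 / n] * n  # default: uniform weights
    num_classes = len(per_network_scores[0])
    fused = [0.0] * num_classes
    for w, scores in zip(weights, per_network_scores):
        for i, s in enumerate(scores):
            fused[i] += w * s
    return fused

def classify(per_network_scores):
    """Final point cloud class = argmax of the fused scores."""
    fused = fuse_outputs(per_network_scores)
    return max(range(len(fused)), key=fused.__getitem__)
```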

METHOD AND APPARATUS WITH OBJECT RECOGNITION

A method and apparatus for object recognition are provided. A processor-implemented method includes extracting feature maps including local feature representations from an input image, generating a global feature representation corresponding to the input image by fusing the local feature representations, and performing a recognition task on the input image based on the local feature representations and the global feature representation.
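A minimal sketch of the fusion step: the local feature vectors (one per spatial location of the feature maps) are pooled into a single global vector, and both representations feed the recognition head. Average pooling is an assumption here; the abstract does not specify the fusion operator:

```python
# Illustrative fusion of local feature representations into a global one.
def global_from_locals(local_features):
    """local_features: list of equal-length feature vectors; returns their
    element-wise mean as the global feature representation (assumed pooling)."""
    dim = len(local_features[0])
    return [sum(f[i] for f in local_features) / len(local_features)
            for i in range(dim)]

def recognize(local_features, classifier):
    """The recognition task consumes both local and global representations."""
    global_feature = global_from_locals(local_features)
    return classifier(local_features, global_feature)
```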