G06V10/803

IDENTIFYING OBJECTS WITHIN IMAGES FROM DIFFERENT SOURCES

Techniques are disclosed for providing a notification that a person is at a particular location. For example, a resident device may receive, from a user device, a first image that shows a face of a first person, the first image being captured by a first camera of the user device. The resident device may also receive, from another device having a second camera, a second image showing a portion of a face of a second person, the second camera having a viewable area showing a particular location. The resident device may determine a score indicating a level of similarity between a first set of characteristics associated with the face of the first person and a second set of characteristics associated with the face of the second person. The resident device may then provide to the user device a notification based on the determined score.
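As an illustration of the scoring step described in this abstract, the following Python sketch compares two face-characteristic vectors by cosine similarity and triggers a notification above a threshold. The embedding extraction, function names, and threshold value are assumptions made for illustration, not the patent's method.

```python
# Minimal sketch, assuming face characteristics are already available as
# embedding vectors (e.g. from an off-the-shelf face encoder).
# All names and the threshold are hypothetical.
import numpy as np

def similarity_score(first_face: np.ndarray, second_face: np.ndarray) -> float:
    """Cosine similarity between the two sets of face characteristics."""
    a = first_face / np.linalg.norm(first_face)
    b = second_face / np.linalg.norm(second_face)
    return float(a @ b)

def maybe_notify(first_face, second_face, threshold=0.8):
    """Return a notification string if the score clears the threshold."""
    score = similarity_score(first_face, second_face)
    if score >= threshold:
        return f"Person recognized at the monitored location (score={score:.2f})"
    return None
```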

VERIFICATION SYSTEM
20210374217 · 2021-12-02

A device includes memory and a processor. The device receives biometric information. The device receives location information. The device analyzes the received biometric information against stored biometric information. The device analyzes the received location information against stored location information. The device determines whether the received biometric information matches the stored biometric information. The device determines whether the received location information matches the stored location information. The device sends an electronic communication that indicates whether the received biometric information matches the stored biometric information and whether the received location information matches the stored location information.
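A minimal sketch of the two match-and-report steps, assuming exact template equality for the biometric comparison and a coordinate tolerance for the location comparison; both simplifications are assumptions for illustration, not the patent's matching logic.

```python
from dataclasses import dataclass

@dataclass
class StoredRecord:
    biometric: bytes  # stored biometric template (placeholder representation)
    lat: float
    lon: float

def biometric_matches(received: bytes, record: StoredRecord) -> bool:
    # Placeholder: a real system would compare templates with a tolerant matcher.
    return received == record.biometric

def location_matches(lat: float, lon: float, record: StoredRecord,
                     tol_deg: float = 0.01) -> bool:
    """Treat locations within a small coordinate tolerance as a match."""
    return abs(lat - record.lat) <= tol_deg and abs(lon - record.lon) <= tol_deg

def verification_message(received_bio, lat, lon, record) -> dict:
    """The electronic communication: both match results, reported together."""
    return {
        "biometric_match": biometric_matches(received_bio, record),
        "location_match": location_matches(lat, lon, record),
    }
```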

Fusion System for Fusing Environment Information for a Motor Vehicle
20210370959 · 2021-12-02

A fusion system for a motor vehicle includes at least two environment sensors, a neural network coupled to the environment sensors for fusing environment information from the environment sensors, a fusion apparatus for fusing environment information from the environment sensors, and a control device coupled to the neural network and the fusion apparatus. The control device is set up to adapt the environment information fused via the neural network, depending on the environment information fused by the fusion apparatus, and to provide the adapted environment information to a driver assistance system of the motor vehicle.
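To make the adaptation step concrete, here is a sketch in which occupancy grids stand in for the fused environment information; the fallback rule applied by the control device is an assumption used only for illustration.

```python
# Sketch, assuming both fusion paths produce occupancy grids over the same
# cells; the disagreement-based fallback rule is hypothetical.
import numpy as np

def adapt_fused_environment(nn_grid: np.ndarray,
                            classical_grid: np.ndarray,
                            disagreement_limit: float = 0.5) -> np.ndarray:
    """Fall back to the fusion apparatus's result wherever the neural
    network's fused grid deviates from it by more than the limit."""
    adapted = nn_grid.copy()
    disagree = np.abs(nn_grid - classical_grid) > disagreement_limit
    adapted[disagree] = classical_grid[disagree]
    return adapted  # provided to the driver assistance system
```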

INFERENCING AND LEARNING BASED ON SENSORIMOTOR INPUT DATA
20210374578 · 2021-12-02

One or more multi-layer systems are used to perform inference. A multi-layer system may correspond to a node that receives a set of sensory input data for hierarchical processing, and multi-layer systems may be grouped to perform processing for sensory input data. Inference systems at lower layers of a multi-layer system pass representations of objects to inference systems at higher layers. Each inference system can perform inference and form its own version of the representations of objects, regardless of the level and layer of the inference system. The set of candidate objects for each inference system is updated to those consistent with feature-location representations for the sensors as well as object representations at lower layers. The set of candidate objects is also updated to those consistent with candidate objects from other inference systems, such as inference systems at other layers of the hierarchy or inference systems included in other multi-layer systems.
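The candidate-set update can be pictured with plain set intersections, as in the sketch below; the inputs standing in for feature-location consistency and lower-layer object representations are assumptions.

```python
# Sketch of the candidate-object update with Python sets; candidate objects
# are represented by hashable identifiers. Names are hypothetical.
def update_candidates(candidates: set,
                      consistent_with_feature_locations: set,
                      consistent_with_lower_layers: set,
                      peer_candidate_sets: list) -> set:
    """Keep only objects consistent with local evidence and with the
    candidate sets of other inference systems (other layers or systems)."""
    updated = (candidates
               & consistent_with_feature_locations
               & consistent_with_lower_layers)
    for peers in peer_candidate_sets:
        updated &= peers
    return updated
```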

Cross-modal sensor data alignment

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining an alignment between cross-modal sensor data. In one aspect, a method comprises: obtaining (i) an image that characterizes a visual appearance of an environment, and (ii) a point cloud comprising a collection of data points that characterizes a three-dimensional geometry of the environment; processing each of a plurality of regions of the image using a visual embedding neural network to generate a respective embedding of each of the image regions; processing each of a plurality of regions of the point cloud using a shape embedding neural network to generate a respective embedding of each of the point cloud regions; and identifying a plurality of region pairs using the embeddings of the image regions and the embeddings of the point cloud regions.
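As a sketch of the final pairing step, assuming the two embedding networks have already produced one vector per region, nearest neighbours in the shared embedding space stand in here for the identification of region pairs.

```python
# Sketch: pair each image region with its most similar point-cloud region.
# The nearest-neighbour rule is an assumption for illustration.
import numpy as np

def pair_regions(image_embs: np.ndarray, cloud_embs: np.ndarray):
    """image_embs: (N, D), cloud_embs: (M, D).
    Returns (image_index, cloud_index) pairs by cosine similarity."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    pts = cloud_embs / np.linalg.norm(cloud_embs, axis=1, keepdims=True)
    sim = img @ pts.T  # (N, M) cosine similarities
    return [(i, int(np.argmax(sim[i]))) for i in range(sim.shape[0])]
```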

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, RECORDING MEDIUM, AND CAMERA SYSTEM
20220207774 · 2022-06-30

An information processing device includes: an acquisition unit that acquires feature information of a target depicted in images; a storage unit that stores registration information containing feature information of registered targets; and a distinction unit that distinguishes, on the basis of a result of comparing the feature information acquired by the acquisition unit with the feature information contained in the registration information, one registered target of the registered targets, the one registered target corresponding to the target in the images. The registration information contains zip codes of sites relating to the registered targets. The distinction unit compares a zip code of a site relating to the target in the images with the zip codes contained in the registration information, and distinguishes the one registered target corresponding to the target in the images using the result of the feature-information comparison and the result of the zip-code comparison.
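An illustrative sketch of how the two comparisons might be combined; the cosine similarity, the zip-code bonus, and its weight are assumptions, not the device's actual scoring.

```python
# Hypothetical combination of feature similarity and zip-code agreement.
import numpy as np

def distinguish_target(target_features: np.ndarray, target_zip: str,
                       registrations: list):
    """registrations: dicts like {'id': ..., 'features': ndarray, 'zip': str}.
    Returns the best-scoring registered target."""
    best, best_score = None, -np.inf
    for reg in registrations:
        feat_sim = float(target_features @ reg["features"]
                         / (np.linalg.norm(target_features)
                            * np.linalg.norm(reg["features"])))
        # Assumed rule: a matching zip code adds a fixed bonus to the score.
        score = feat_sim + (0.2 if reg["zip"] == target_zip else 0.0)
        if score > best_score:
            best, best_score = reg, score
    return best
```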

ALL-WEATHER TARGET DETECTION METHOD BASED ON VISION AND MILLIMETER WAVE FUSION

An all-weather target detection method based on vision and millimeter-wave fusion includes: simultaneously acquiring continuous image data and point cloud data using two types of sensors, a vehicle-mounted camera and a millimeter-wave radar; pre-processing the image data and point cloud data; fusing the pre-processed image data and point cloud data using a pre-established fusion model, and outputting a fused feature map; and inputting the fused feature map into a YOLOv5 detection network for detection, and outputting a target detection result by non-maximum suppression. The method fully fuses millimeter-wave radar echo intensity and distance information with the vehicle-mounted camera images. It analyzes different features of the millimeter-wave radar point cloud and fuses those features with the image information using different feature extraction structures and methods, so that the advantages of the two types of sensor data complement each other.
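The following sketch covers only the point-cloud pre-processing side: projecting radar returns into the image plane and rasterizing echo intensity and range as extra channels that a fusion model could consume. The projection matrix and channel layout are assumptions, not the method's fusion model.

```python
# Sketch: rasterize millimeter-wave radar returns into image-plane channels.
# The 3x4 projection matrix and the two-channel layout are hypothetical.
import numpy as np

def radar_channels(points: np.ndarray, proj: np.ndarray,
                   h: int, w: int) -> np.ndarray:
    """points: (N, 4) rows of (x, y, z, echo_intensity); proj: 3x4 camera matrix.
    Returns a (2, h, w) array of [echo intensity, radial distance] channels."""
    chans = np.zeros((2, h, w), dtype=np.float32)
    xyz1 = np.concatenate([points[:, :3], np.ones((len(points), 1))], axis=1)
    uvw = (proj @ xyz1.T).T  # (N, 3) homogeneous image coordinates
    for (u, v, wgt), p in zip(uvw, points):
        if wgt <= 0:  # behind the camera
            continue
        px, py = int(u / wgt), int(v / wgt)
        if 0 <= px < w and 0 <= py < h:
            chans[0, py, px] = p[3]                   # echo intensity
            chans[1, py, px] = np.linalg.norm(p[:3])  # radial distance
    return chans  # stack with the RGB image as extra input channels
```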

BLIND AREA ESTIMATION APPARATUS, VEHICLE TRAVEL SYSTEM, AND BLIND AREA ESTIMATION METHOD

An object is to provide a technique capable of estimating a blind area region that can be used, for example, to optimize automatic driving. A blind area estimation device includes an acquisition part and an estimation part. The acquisition part acquires an object region based on object information. The object information is information on an object in a predetermined region detected by a detection part. The object region is the region of the object. The estimation part estimates a blind area region based on the object region. The blind area region is a region that is a blind area for the detection part, caused by the object.
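A geometric sketch of the estimation part, modeling the blind area as the angular sector the object subtends from the detection part's origin; this sector model is an assumption used for illustration.

```python
# Sketch: the region behind the object, between its extreme bearings and
# beyond its near edge, is treated as the blind area. Hypothetical model.
import math

def blind_sector(object_corners, max_range: float):
    """object_corners: (x, y) points of the object region in the sensor frame.
    Returns (min_bearing, max_bearing, near_distance, max_range).
    Assumes the object does not straddle the +/-pi bearing boundary."""
    bearings = [math.atan2(y, x) for x, y in object_corners]
    distances = [math.hypot(x, y) for x, y in object_corners]
    return min(bearings), max(bearings), min(distances), max_range
```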

GROUND VEHICLE MONOCULAR VISUAL-INERTIAL ODOMETRY VIA LOCALLY FLAT CONSTRAINTS
20220205788 · 2022-06-30

A method of visual-inertial odometry for a ground vehicle is disclosed and includes obtaining an initial set of images with a camera on board a vehicle, identifying visual features within the initial set of images, determining a three-dimensional pose using the visual features in the initial set of images, obtaining information indicative of vehicle movement with an inertial measurement unit (IMU), obtaining information indicative of vehicle movement with wheel speed sensors and a steering wheel angle sensor, fusing, within a two-dimensional plane, the identified features within the images, the vehicle movement from the IMU, and the vehicle movement from the vehicle sensors, and determining a vehicle position relative to an initial start location based on the visual features in the images and the vehicle movement information from the IMU, the wheel speed sensors, and the steering wheel angle sensor.
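A sketch of the planar fusion step, assuming a kinematic bicycle model for the wheel-speed/steering prediction and a simple complementary blend with the camera-derived pose; the model and gain are assumptions, not the disclosed estimator.

```python
# Hypothetical planar fusion: bicycle-model prediction plus a fixed-gain
# blend with the visual pose estimate (x, y, yaw).
import math

def predict_planar_pose(x, y, yaw, speed, steer_angle, wheelbase, dt):
    """Propagate the planar pose from wheel speed and steering wheel angle."""
    x += speed * math.cos(yaw) * dt
    y += speed * math.sin(yaw) * dt
    yaw += speed / wheelbase * math.tan(steer_angle) * dt
    return x, y, yaw

def fuse_with_visual(predicted, visual, visual_weight=0.3):
    """Blend the odometry prediction with the visual estimate, element-wise.
    (A real estimator would handle yaw wraparound and uncertainty.)"""
    return tuple((1 - visual_weight) * p + visual_weight * v
                 for p, v in zip(predicted, visual))
```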

METHOD AND APPARATUS FOR STITCHING IMAGES

An image stitching method and apparatus are disclosed. The image stitching apparatus includes at least one processor and a memory. The processor may obtain a plurality of images at different viewpoints, estimate homographies of each of the plurality of images, generate an aligned image by aligning the plurality of images based on the homographies of each of the plurality of images, obtain a collective energy map by inputting the aligned image to a neural network, estimate a stitching line of the aligned image based on the collective energy map, and generate a stitched image by blending the aligned image based on the stitching line.
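A sketch of the seam-estimation step alone, with a hand-crafted gradient energy standing in for the network-predicted collective energy map and a standard dynamic-programming seam in place of the apparatus's estimator; both substitutions are assumptions.

```python
# Sketch: estimate a vertical stitching line through an energy map.
# The gradient-based energy is a stand-in for the neural energy map.
import numpy as np

def gradient_energy(gray: np.ndarray) -> np.ndarray:
    """Stand-in energy map: gradient magnitude of a grayscale aligned image."""
    gy, gx = np.gradient(gray.astype(np.float32))
    return np.abs(gx) + np.abs(gy)

def vertical_seam(energy: np.ndarray) -> np.ndarray:
    """Minimum-cost top-to-bottom stitching line via dynamic programming."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    for r in range(1, h):
        up = cost[r - 1]
        left = np.r_[np.inf, up[:-1]]   # cost from the upper-left neighbour
        right = np.r_[up[1:], np.inf]   # cost from the upper-right neighbour
        cost[r] += np.minimum(np.minimum(left, up), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):      # backtrack the cheapest path
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam  # column index of the stitching line in each row
```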