G06V10/803

Mapping geographic areas using lidar and network data

A geographic area mapping system may enable collecting, from a set of mobile devices, radio frequency data, the radio frequency data comprising information about a set of network connections in the geographic area; collecting lidar data for the geographic area; generating a mapping between the collected radio frequency data and the collected lidar data for the geographic area; and providing a visualization of the mapped radio frequency data and lidar data for the geographic area.
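The abstract's mapping step can be illustrated with a minimal sketch. Assuming RF samples carry a 2-D measurement position plus an RSSI value and lidar returns are 3-D points, one simple realization associates each RF sample with its nearest lidar return; the function and data layout here are assumptions, not the patent's actual method.

```python
import math

def map_rf_to_lidar(rf_samples, lidar_points):
    """Associate each RF measurement (x, y, rssi) with its nearest
    lidar point (x, y, z); returns (lidar_point, rssi) pairs."""
    mapping = []
    for x, y, rssi in rf_samples:
        nearest = min(lidar_points,
                      key=lambda p: math.hypot(p[0] - x, p[1] - y))
        mapping.append((nearest, rssi))
    return mapping

# Two RF samples snapped onto a two-point lidar cloud.
rf = [(0.1, 0.2, -45), (4.9, 5.1, -70)]
cloud = [(0.0, 0.0, 1.2), (5.0, 5.0, 0.8)]
mapped = map_rf_to_lidar(rf, cloud)
```

The resulting pairs could then drive the visualization step, e.g. coloring lidar points by signal strength.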

Computer-generated image processing including volumetric scene reconstruction

An imagery processing system determines pixel color values for pixels of captured imagery from volumetric data, providing alternative pixel color values. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene in various spectra and forms, capturing information related to pixel color values at multiple depths of the scene, which can be processed to provide volumetric reconstruction.

Systems and methods for counting repetitive activity in audio video content

Repetitive activities can be captured in audio video content. The AV content can be processed in order to predict the number of repetitive activities present in the AV content. The accuracy of the predicted number may be improved, especially for AV content with challenging conditions, by basing the predictions on both the audio and video portions of the AV content.
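A minimal sketch of the audio-video fusion idea: count repetitions independently in a 1-D audio-energy signal and a 1-D video-motion signal via threshold crossings, then combine the two counts. The peak counter and the averaging rule are illustrative assumptions, not the patent's prediction model.

```python
def count_peaks(signal, threshold):
    """Count rising crossings of `threshold` — a crude repetition counter."""
    count, above = 0, False
    for v in signal:
        if v > threshold and not above:
            count += 1
            above = True
        elif v <= threshold:
            above = False
    return count

def fused_count(audio_signal, video_signal, threshold=0.5):
    """Average audio- and video-based counts; fusion can correct a
    modality degraded by challenging conditions."""
    a = count_peaks(audio_signal, threshold)
    v = count_peaks(video_signal, threshold)
    return round((a + v) / 2)

audio = [0, 1, 0, 1, 0, 1, 0]          # three clear audio peaks
video = [0, 1, 0, 1, 0, 0.4, 0.9, 0]   # one peak partially occluded
```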

Enhanced remote control of autonomous vehicles

Devices, systems, and methods for remote control of autonomous vehicles are disclosed herein. A method may include receiving, by a device, first data indicative of an autonomous vehicle in a parking area, and determining, based on the first data, a location of the autonomous vehicle. The method may include determining, based on the location, first image data including a representation of an object. The method may include generating second image data based on the first data and the first image data, and presenting the second image data. The method may include receiving an input associated with controlling operation of the autonomous vehicle, and controlling, based on the input, the operation of the autonomous vehicle.

REAR VIEW COLLISION WARNING INDICATION AND MITIGATION
20230005373 · 2023-01-05 ·

A device can comprise a memory and a processor operatively coupled to the memory and comprising computer-executable components, including: a trajectory determination component that determines a trajectory of an adjacent-lane traveling vehicle traveling in a lane adjacent to a vehicle comprising the device, wherein visibility of the adjacent-lane traveling vehicle from the vehicle is impaired by a succeeding vehicle traveling between the adjacent-lane traveling vehicle and the vehicle; and a collision avoidance component that, in response to the trajectory determination component determining that the trajectory of the adjacent-lane traveling vehicle prevents a safe lane change by the vehicle into the lane, initiates a collision avoidance action for the vehicle.
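The safe-lane-change decision can be sketched with a constant-velocity trajectory prediction: extrapolate the gap to the adjacent-lane vehicle over the duration of the maneuver and flag the change as unsafe if the predicted gap falls below a margin. All parameter names and the constant-velocity model are assumptions for illustration.

```python
def lane_change_unsafe(rel_position_m, rel_speed_mps,
                       maneuver_time_s=3.0, margin_m=10.0):
    """Predict the longitudinal gap to the adjacent-lane vehicle at the
    end of the lane change under constant relative speed; a gap smaller
    than `margin_m` means the maneuver should be blocked."""
    predicted_gap = rel_position_m + rel_speed_mps * maneuver_time_s
    return predicted_gap < margin_m
```

A positive `rel_speed_mps` means the gap is opening; a negative value means the hidden vehicle is closing in, which is exactly the case where the occluding succeeding vehicle makes the warning valuable.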

Event detection method and system thereof

An event detection method detects whether a predetermined event exists in a detected environment, in which a first wireless unit, at least one second wireless unit wirelessly communicating with the first wireless unit, and at least one cooperating detection device are disposed. The event detection method includes a live CSI data obtaining step, a live CSI data reducing step, a cooperating data obtaining step, and an event determining step. The live CSI data reducing step reduces the size of a plurality of live CSI data to generate a plurality of preprocessed live CSI data. The event determining step inputs the preprocessed live CSI data to an event classifier and processes a plurality of cooperating data to determine whether the event exists.
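The two steps named above can be sketched simply: reduce a CSI amplitude vector by block-averaging subcarriers, then apply a toy classifier. Both functions are hypothetical stand-ins — the patent does not specify this reduction or a variance-threshold classifier.

```python
def reduce_csi(live_csi, factor=4):
    """Shrink a CSI amplitude vector by averaging each block of
    `factor` subcarriers — one simple realization of the size-reduction
    (preprocessing) step."""
    return [sum(live_csi[i:i + factor]) / len(live_csi[i:i + factor])
            for i in range(0, len(live_csi), factor)]

def event_present(preprocessed, threshold=1.0):
    """Toy stand-in for the event classifier: flag the event when the
    amplitude variance exceeds a threshold (motion perturbs CSI)."""
    mean = sum(preprocessed) / len(preprocessed)
    var = sum((v - mean) ** 2 for v in preprocessed) / len(preprocessed)
    return var > threshold
```

In practice the classifier would be trained, and the cooperating-device data would gate or confirm its decision.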

QUATERNION MULTI-DEGREE-OF-FREEDOM NEURON-BASED MULTISPECTRAL WELDING IMAGE RECOGNITION METHOD
20220414857 · 2022-12-29 ·

Disclosed is a quaternion multi-degree-of-freedom neuron-based multispectral welding image recognition method, comprising: using three cameras having different wavebands to obtain multispectral weld pool images, and respectively performing pre-processing and edge extraction on the weld pool images of the different wavebands obtained at the same moment by the three cameras; establishing a quaternion-based multispectral weld pool image edge model; extracting low-frequency features after a quaternion discrete cosine transform; and using a quaternion-based multi-degree-of-freedom neuron network to perform classification, training, and recognition on edge features of the multispectral weld pool images. Compared with traditional methods, the present invention provides multiple recognition information sources, strong anti-interference capability, and high recognition accuracy.

THREE-DIMENSIONAL HUMAN POSE ESTIMATION METHOD AND RELATED APPARATUS
20220415076 · 2022-12-29 ·

This application discloses a three-dimensional human pose estimation method performed by a computer device. An initialization pose estimation result of a single video frame in a video frame sequence of n views is extracted based on a neural network model. Single-frame and single-view human pose estimation is performed on the initialization pose estimation result for each video frame, to obtain n single-view pose estimation sequences respectively corresponding to the n views. Single-frame and multi-view human pose estimation is performed according to single-view pose estimation results with the same timestamp in the n single-view pose estimation sequences, to obtain a multi-view pose estimation sequence. Multi-frame and multi-view human pose estimation is performed on a multi-view pose estimation result in the multi-view pose estimation sequence, to obtain a multi-view and multi-frame pose estimation result. Therefore, accuracy of human pose estimation is improved.
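The single-frame, multi-view step can be illustrated with a minimal fusion sketch: given per-view 3-D joint estimates sharing a timestamp, average corresponding joints across views. Real systems triangulate from calibrated cameras; the plain averaging here is an assumption chosen only to show the data flow.

```python
def fuse_views(per_view_joints):
    """per_view_joints: list over views, each a list of (x, y, z) joint
    estimates for the same timestamp. Returns one fused joint list by
    averaging corresponding joints across the n views."""
    n_views = len(per_view_joints)
    fused = []
    for joints in zip(*per_view_joints):          # same joint, all views
        fused.append(tuple(sum(c) / n_views for c in zip(*joints)))
    return fused

# Two views, two joints each; fused skeleton lies between them.
views = [[(0, 0, 0), (2, 0, 0)],
         [(2, 2, 2), (4, 2, 2)]]
fused = fuse_views(views)
```

Applying this per timestamp yields the multi-view pose estimation sequence, which the multi-frame step then refines temporally.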

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
20220415031 · 2022-12-29 ·

Included are an object identification unit that identifies an object in an image; a mapping unit that generates a superimposed image by superimposing, onto the image, target points corresponding to ranging points and a rectangle surrounding the identified object; an identical-object determination unit that specifies, in the superimposed image, the two target points inside the rectangle closest to its left and right line segments; a depth addition unit that specifies, in a space, the positions of two edge points indicating the left and right edges of the identified object based on the two ranging points corresponding to the two specified target points, and calculates two depth positions of two predetermined corresponding points different from the two edge points; and an overhead-view generation unit that generates an overhead view of the identified object from the positions of the two edge points and the two depth positions.

FACE IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER-READABLE MEDIUM, AND DEVICE
20220415082 · 2022-12-29 ·

A face image processing method includes: obtaining a plurality of lattice depth images acquired by performing a depth image acquisition on a target face from different acquisition angles; performing a fusion processing on the plurality of lattice depth images to obtain a dense lattice depth image; and performing a face recognition processing on the dense lattice depth image to obtain a face recognition result.
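The fusion step can be sketched as merging several sparse ("lattice") depth grids of the same face into one denser grid: average the valid samples at each pixel. Treating 0 as "no sample" and using per-pixel averaging are assumptions; the patent's fusion and the downstream recognition model are not specified here.

```python
def fuse_depth_images(images):
    """images: list of equally sized 2-D depth grids where 0.0 marks a
    missing sample. Averages the valid samples per pixel, so pixels
    covered by any acquisition angle gain a depth value."""
    rows, cols = len(images[0]), len(images[0][0])
    dense = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [img[r][c] for img in images if img[r][c] > 0]
            if vals:
                dense[r][c] = sum(vals) / len(vals)
    return dense

# Two 1x2 lattice images from different angles; the second fills the
# pixel the first missed.
dense = fuse_depth_images([[[1.0, 0.0]], [[3.0, 2.0]]])
```

The densified grid would then be passed to the face recognition stage.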