Patent classifications
G06V2201/08
FUSION AND ASSOCIATION OF TRAFFIC OBJECTS IN DRIVING ENVIRONMENT
A method is provided. The method includes: obtaining first environmental information and second environmental information, where the first environmental information and the second environmental information are acquired by different sensors; determining, based on the first environmental information, information about a first lane of a first traffic object in the first environmental information; determining, based on the second environmental information, information about a second lane of a second traffic object in the second environmental information; and determining whether the first traffic object and the second traffic object have an association relationship.
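The association step can be illustrated with a toy sketch: two detections from different sensors are associated when they map to the same lane and are longitudinally close. All names, thresholds, and the lane model here are illustrative assumptions, not the patent's actual method.

```python
# Hypothetical sketch: associate traffic objects detected by two different
# sensors by comparing the lanes they occupy. Names and thresholds are
# illustrative, not from the patent.

def lane_of(lateral_m, lane_width=3.5, num_lanes=4):
    """Map a lateral position (meters from the road edge) to a lane index."""
    lane = int(lateral_m // lane_width)
    return max(0, min(num_lanes - 1, lane))

def associated(obj_a, obj_b, max_gap=5.0):
    """Two detections are associated if they share a lane and are
    longitudinally close."""
    same_lane = lane_of(obj_a["x"]) == lane_of(obj_b["x"])
    close = abs(obj_a["y"] - obj_b["y"]) <= max_gap
    return same_lane and close

# A camera detection and a radar detection of (possibly) the same car:
cam = {"x": 5.2, "y": 30.0}   # lateral, longitudinal position in meters
rad = {"x": 4.9, "y": 31.5}
print(associated(cam, rad))   # True: same lane, 1.5 m apart
```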
IMAGE PROCESSING METHOD, NETWORK TRAINING METHOD, AND RELATED DEVICE
This application provides an image processing method, a network training method, and a related device, and relates to image processing technologies in the artificial intelligence field. The method includes: inputting a first image including a first vehicle into an image processing network to obtain a first result output by the image processing network, where the first result includes location information of a two-dimensional (2D) bounding frame of the first vehicle, coordinates of a wheel of the first vehicle, and a first angle of the first vehicle, and the first angle indicates an included angle between a side line of the first vehicle and a first axis of the first image; and generating location information of a three-dimensional (3D) outer bounding box of the first vehicle based on the first result.
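As a rough geometric illustration of how a wheel contact point plus a side-line angle can anchor a 3D box, the sketch below extends a ground-plane footprint rectangle from an assumed wheel point. The dimensions and construction are assumptions for illustration, not the disclosed network or algorithm.

```python
import math

# Minimal geometric sketch (assumed, not the patent's exact construction):
# given a wheel contact point on the ground plane and the heading implied by
# the vehicle's side-line angle, extend a footprint rectangle that the 3D
# bounding box would stand on.

def footprint(wheel_pt, angle_deg, length=4.5, width=1.8):
    """Return the four ground-plane corners of the vehicle footprint,
    assuming the side line starts at `wheel_pt` with heading `angle_deg`."""
    th = math.radians(angle_deg)
    dx, dy = math.cos(th), math.sin(th)   # unit vector along the side line
    nx, ny = -dy, dx                      # unit normal (toward the far side)
    x, y = wheel_pt
    return [
        (x, y),
        (x + length * dx, y + length * dy),
        (x + length * dx + width * nx, y + length * dy + width * ny),
        (x + width * nx, y + width * ny),
    ]

# With angle 0 the footprint is axis-aligned:
print(footprint((10.0, 2.0), 0.0))
```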
VEHICULAR ACCESS CONTROL BASED ON VIRTUAL INDUCTIVE LOOP
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for monitoring events using a Virtual Inductive Loop system. In some implementations, image data is obtained from cameras. A region depicted in the obtained image data is identified, the region comprising lines spaced by a distance that satisfies a distance threshold. For each line included in the region, it is determined whether an object depicted crossing the line satisfies a height criterion indicating that the line is activated. In response to determining that an object depicted crossing a line satisfies the height criterion, an event is determined to have likely occurred using data indicating (i) which of the lines were activated and (ii) an order in which each of the lines was activated. In response to determining that an event likely occurred, actions are performed using at least some of the data.
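The activation-order logic can be sketched as follows; the line identifiers, timestamps, and event labels are hypothetical, not from the disclosure.

```python
# Illustrative sketch: infer an event from which virtual loop lines were
# activated and in what order, as the abstract describes.

def classify_event(activations):
    """`activations` is a list of (line_id, timestamp) pairs for activated
    lines. Returns a coarse event label based on activation order."""
    if len(activations) < 2:
        return "no_event"
    ordered = [line for line, _ in sorted(activations, key=lambda a: a[1])]
    if ordered == sorted(ordered):
        return "vehicle_entering"       # lines crossed outer -> inner
    if ordered == sorted(ordered, reverse=True):
        return "vehicle_exiting"        # lines crossed inner -> outer
    return "ambiguous"

print(classify_event([(0, 1.0), (1, 1.4), (2, 1.9)]))  # vehicle_entering
print(classify_event([(2, 1.0), (1, 1.4), (0, 1.9)]))  # vehicle_exiting
```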
Imaging device, video retrieving method, video retrieving program, and information collecting device
A drive recorder according to an embodiment of the present disclosure includes: an imaging unit that is mounted on a vehicle and captures a video of the surroundings of the vehicle; a video recording unit in which the captured video data are recorded; a network connecting unit that receives accident information including the time and date when an accident occurred and the place where the accident occurred; and a video retrieving unit that determines whether any video data captured in a predetermined time period and in a predetermined region are available in the video data recorded in the video recording unit, the predetermined time period including the time and date when the accident occurred, the predetermined region including the place where the accident occurred.
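A minimal sketch of the retrieval step, assuming each recorded clip carries a timestamp and GPS position (the field names and thresholds are invented for illustration):

```python
import math

# Hedged sketch: select recorded clips whose timestamp falls within a window
# around the accident time and whose location lies within a radius of the
# accident site.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def find_clips(clips, accident_time, accident_lat, accident_lon,
               window_s=300, radius_km=0.5):
    return [
        c for c in clips
        if abs(c["time"] - accident_time) <= window_s
        and haversine_km(c["lat"], c["lon"], accident_lat, accident_lon) <= radius_km
    ]

clips = [
    {"id": "a", "time": 1000, "lat": 35.6812, "lon": 139.7671},
    {"id": "b", "time": 5000, "lat": 35.6812, "lon": 139.7671},  # too late
]
hits = find_clips(clips, accident_time=1100, accident_lat=35.6813, accident_lon=139.7670)
print([c["id"] for c in hits])  # ['a']
```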
METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR REMOTE DAMAGE ASSESSMENT OF VEHICLE
A method for remote damage assessment of a vehicle is provided. The present disclosure relates to the technical field of artificial intelligence, in particular to the technical field of image and text recognition. An implementation solution is: performing data collection on a target vehicle to determine damage information of the target vehicle; obtaining call content of an insurance claim call for the target vehicle, and extracting accident-related information from the call content, wherein the accident-related information includes named entities in the call content and a relationship between the named entities; and determining a first fraud probability corresponding to the target vehicle at least based on the damage information and the accident-related information.
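The final combination step might resemble a logistic model over the extracted signals; the features, weights, and bias below are entirely made up for illustration and are not the disclosed scoring method.

```python
import math

# Illustrative sketch only: combine a damage-based signal with call-content
# signals (named entities and relations extracted upstream) into a fraud
# probability via a logistic combination with invented weights.

def fraud_probability(damage_severity, entity_mismatch, relation_inconsistency,
                      w=(1.5, 2.0, 2.5), bias=-3.0):
    """All inputs are assumed to lie in [0, 1]; returns a value in (0, 1)."""
    z = (bias + w[0] * damage_severity + w[1] * entity_mismatch
         + w[2] * relation_inconsistency)
    return 1.0 / (1.0 + math.exp(-z))

p = fraud_probability(damage_severity=0.8, entity_mismatch=0.9,
                      relation_inconsistency=0.7)
print(round(p, 3))  # ~0.85: several suspicious signals push the score up
```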
SYSTEMS, METHODS AND PROGRAMS FOR GENERATING DAMAGE PRINT IN A VEHICLE
The disclosure relates to systems, methods, and computer readable media for providing network-based identification, generation, and management of a unique damage (finger) print of a vehicle by geodetic mapping of stable key points onto a ground-truth 3D model of the vehicle and its parts, identified from the raw images using supervised and unsupervised machine learning. Specifically, the disclosure relates to systems and methods for generating a unique damage print of a vehicle, obtained from captured images of the damaged vehicle and photogrammetrically localized to a specific vehicle part; to the computer programs enabling the method; and to the damage print itself, configured to be used, for example, in fraud detection in insurance claims.
SYSTEMS AND METHODS FOR PARTICLE FILTER TRACKING
Systems and methods for operating a mobile platform. The methods comprise, by a computing device: obtaining a LiDAR point cloud; using the LiDAR point cloud to generate a track for a given object in accordance with a particle filter algorithm by generating states of the given object over time (each state has a score indicating a likelihood that a cuboid would be created given an acceleration value and an angular velocity value); using the track to train a machine learning algorithm to detect and classify objects based on sensor data; and/or causing the machine learning algorithm to be used for controlling movement of the mobile platform.
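The per-state scoring idea can be sketched with a toy motion model: each particle hypothesizes an acceleration and angular velocity, predicts the next state, and is scored by how well the prediction matches an observed centroid. Every parameter name and the Gaussian score are illustrative assumptions.

```python
import math
import random

# Toy sketch of particle-filter scoring (not the disclosed implementation):
# each particle is an (acceleration, angular velocity) hypothesis.

random.seed(0)

def predict(state, accel, ang_vel, dt=0.1):
    """Advance an (x, y, speed, heading) state by one step."""
    x, y, v, heading = state
    v2 = v + accel * dt
    h2 = heading + ang_vel * dt
    return (x + v2 * math.cos(h2) * dt, y + v2 * math.sin(h2) * dt, v2, h2)

def score(predicted, observed_xy, sigma=0.5):
    """Gaussian likelihood of the observed centroid given the prediction."""
    dx = predicted[0] - observed_xy[0]
    dy = predicted[1] - observed_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

state = (0.0, 0.0, 10.0, 0.0)            # x, y, speed, heading
observed = (1.0, 0.0)                    # centroid from the LiDAR cloud
particles = [(random.gauss(0, 1), random.gauss(0, 0.1)) for _ in range(100)]
best = max(particles, key=lambda p: score(predict(state, *p), observed))
print(score(predict(state, *best), observed) > 0.5)  # True
```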
ELECTRONIC DEVICE AND METHOD FOR TRACKING OBJECT THEREOF
An electronic device and a method for tracking an object thereof are provided. When obtaining an image and rotation information of a camera of the electronic device, the electronic device identifies whether there is a first object being tracked; when there is the first object, corrects state information of the first object using the rotation information; detects, from the image, a second object matched to the first object based on the corrected state information; and tracks a position of the second object using an object tracking algorithm.
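A simplified sketch of the correct-then-match flow, assuming a pinhole camera and a small yaw rotation; the focal length, distance threshold, and helper names are assumptions, not the disclosed method.

```python
import math

# Sketch under assumptions: compensate a tracked object's predicted image
# position for known camera rotation, then match a new detection to it by
# pixel distance.

def correct_for_rotation(cx, cy, yaw_deg, focal_px=800.0):
    """Shift a predicted center horizontally by the pixel offset a small
    camera yaw induces (approximation: dx ~ f * tan(yaw))."""
    return cx - focal_px * math.tan(math.radians(yaw_deg)), cy

def match(predicted, detections, max_dist=40.0):
    """Return the detection closest to the corrected prediction, or None."""
    def dist(d):
        return math.hypot(d[0] - predicted[0], d[1] - predicted[1])
    best = min(detections, key=dist, default=None)
    return best if best is not None and dist(best) <= max_dist else None

pred = correct_for_rotation(640.0, 360.0, yaw_deg=2.0)   # camera turned right
dets = [(605.0, 360.0), (900.0, 400.0)]
print(match(pred, dets))  # (605.0, 360.0)
```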
Sensor fusion for precipitation detection and control of vehicles
An apparatus includes a processor configured to be disposed with a vehicle and a memory coupled to the processor. The memory stores instructions to cause the processor to receive at least two of: radar data, camera data, lidar data, or sonar data. The sensor data is associated with a predefined region of a vicinity of the vehicle while the vehicle is traveling during a first time period. At least a portion of the vehicle is positioned within the predefined region during the first time period. The instructions also cause the processor to detect that no other vehicle is present within the predefined region. An environment of the vehicle during the first time period is classified as one state from a set of states that includes at least one of dry, light rain, heavy rain, light snow, or heavy snow, based on at least two of the received sensor data, to produce an environment classification. An operational parameter of the vehicle is modified based on the environment classification.
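One simple way to fuse at least two modalities into a single environment state is a majority vote over per-sensor estimates. The sketch below assumes such per-sensor classifications already exist; the labels and helper names are illustrative, not the disclosed classifier.

```python
from collections import Counter

# Assumed sketch: combine per-sensor precipitation estimates into one
# environment state by majority vote, requiring at least two sensor
# modalities as in the abstract.

STATES = ("dry", "light_rain", "heavy_rain", "light_snow", "heavy_snow")

def classify_environment(sensor_votes):
    """`sensor_votes` maps sensor name -> state string. Needs >= 2 sensors."""
    if len(sensor_votes) < 2:
        raise ValueError("at least two sensor modalities required")
    for v in sensor_votes.values():
        if v not in STATES:
            raise ValueError(f"unknown state: {v}")
    counts = Counter(sensor_votes.values())
    return counts.most_common(1)[0][0]

votes = {"radar": "light_rain", "camera": "light_rain", "lidar": "dry"}
print(classify_environment(votes))  # light_rain
```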
Image-based kitchen tracking system with anticipatory preparation management
The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include receiving, by a processing device, image data including one or more image frames indicative of a current state of a meal preparation area. The processing device determines a first quantity of a first ingredient disposed within a first container based on the image data. The processing device determines a meal preparation procedure associated with the first ingredient based on the first quantity. The processing device causes a notification indicative of the meal preparation procedure to be displayed on a graphical user interface (GUI).
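The quantity-to-procedure step can be sketched as a threshold rule feeding a GUI notification payload; the threshold, procedure names, and payload shape are assumptions for illustration.

```python
# Hypothetical sketch: map an estimated ingredient quantity to a meal
# preparation procedure and a notification payload, as the abstract outlines.

def preparation_procedure(ingredient, quantity, low_threshold=0.2):
    """Return a procedure name based on the detected container quantity
    (fraction full, estimated from image data upstream of this step)."""
    if quantity <= low_threshold:
        return f"refill_{ingredient}"
    return f"continue_using_{ingredient}"

def notification(procedure):
    """Build the payload a GUI layer might render for the procedure."""
    return {"type": "meal_prep", "message": f"Suggested procedure: {procedure}"}

proc = preparation_procedure("diced_onions", quantity=0.15)
print(notification(proc)["message"])  # Suggested procedure: refill_diced_onions
```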