G06V10/803

IMAGE DISPOSITIONING USING MACHINE LEARNING

Provided is a method, computer program product, and system for predicting image sharing decisions using machine learning. A processor may receive a set of annotated images and an associated text input from each user of a plurality of users. The processor may train, using the set of annotated images and the associated text input from each user, a neural network model to output an image sharing decision that is specific to a user.
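As a rough illustration of the training setup this abstract describes, the following PyTorch sketch fuses an image embedding with a user's text embedding to learn that user's share/don't-share decision. The network shape, dimensions, and all names (ShareDecisionNet, IMG_DIM, TXT_DIM) are assumptions for illustration, not taken from the disclosure.

```python
# Minimal sketch of a per-user image-sharing classifier, assuming image and
# text features are already extracted as fixed-size vectors.
import torch
import torch.nn as nn

IMG_DIM, TXT_DIM = 512, 128  # assumed embedding sizes

class ShareDecisionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Fuse image and text embeddings, then predict share / don't share.
        self.fuse = nn.Sequential(
            nn.Linear(IMG_DIM + TXT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1),  # logit for "share"
        )

    def forward(self, img_emb, txt_emb):
        return self.fuse(torch.cat([img_emb, txt_emb], dim=-1))

model = ShareDecisionNet()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One hypothetical training step on a batch of one user's annotated images.
img, txt = torch.randn(8, IMG_DIM), torch.randn(8, TXT_DIM)
labels = torch.randint(0, 2, (8, 1)).float()  # the user's share decisions
opt.zero_grad()
loss = loss_fn(model(img, txt), labels)
loss.backward()
opt.step()
```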

Method and system for on-the-fly object labeling via cross modality validation in autonomous driving vehicles

The present teaching relates to a method, system, medium, and implementation of in-situ perception in an autonomous driving vehicle. A plurality of types of sensor data are acquired continuously via a plurality of types of sensors deployed on the vehicle, where the sensor data provide information about the surroundings of the vehicle. One or more items surrounding the vehicle are tracked, based on at least one model, from a first of the plurality of types of sensor data, acquired via a first type of sensor. A second of the plurality of types of sensor data is obtained via a second type of sensor and is used to generate validation base data. At least some of the one or more items are labeled automatically against the validation base data, and the labeled items are used to generate model update information for updating the at least one model.
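A minimal sketch of the cross-modality validation idea, assuming camera-derived tracks as the first modality and a LiDAR point cloud as the validation base; the matching rule, thresholds, and data layout are illustrative only.

```python
# Illustrative auto-labeling loop: items tracked from camera data are
# validated against LiDAR evidence, and confirmed items become labeled
# examples that can feed model updates.
import numpy as np

def auto_label(camera_tracks, lidar_points, max_dist=1.5):
    """camera_tracks: list of (track_id, xyz estimate, class guess).
    lidar_points: (N, 3) array serving as the validation base data."""
    labeled = []
    for track_id, xyz, cls in camera_tracks:
        # Validate: do enough LiDAR returns support an object at this position?
        d = np.linalg.norm(lidar_points - np.asarray(xyz), axis=1)
        if (d < max_dist).sum() >= 10:  # assumed evidence threshold
            labeled.append({"id": track_id, "pos": xyz, "label": cls})
    return labeled  # fed back as model update information

tracks = [(1, (5.0, 0.2, 0.0), "car"), (2, (40.0, 9.0, 0.0), "pedestrian")]
cloud = np.random.randn(2000, 3) * 3 + np.array([5.0, 0.0, 0.0])
print(auto_label(tracks, cloud))  # only the LiDAR-supported track survives
```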

Point cloud data processing method, apparatus, device, vehicle and storage medium

The present application provides a point cloud data processing method, an apparatus, a device, a vehicle, and a storage medium. The method includes: acquiring, according to a preset frequency, raw data collected by sensors on a vehicle; and performing, according to the raw data of the sensors, data fusion processing to obtain a fusion result. By acquiring the raw data collected by the sensors in the latest period according to the preset frequency and performing the data fusion processing to obtain the fusion result, a synchronous clock source can be eliminated, weak clock synchronization can be realized, and cost can be effectively reduced. The preset frequency may be set flexibly; a larger value reduces the time difference between the sensors' raw data and improves data accuracy.
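The frequency-driven, weakly synchronized loop might look like the sketch below, where each tick simply fuses the newest sample in every sensor buffer rather than aligning timestamps against a shared clock. The buffer layout and the fuse() placeholder are assumptions.

```python
# Sketch of fusion without a synchronous clock source: at a preset frequency,
# take the latest raw sample from each sensor buffer and fuse them.
import time

buffers = {"lidar": [], "radar": []}  # raw samples, newest appended last

def fuse(samples):
    # Placeholder fusion: combine the newest sample from each sensor.
    return {name: s for name, s in samples.items() if s is not None}

def run(preset_hz=10.0, ticks=3):
    period = 1.0 / preset_hz  # larger preset_hz -> smaller inter-sensor skew
    for _ in range(ticks):
        latest = {n: (b[-1] if b else None) for n, b in buffers.items()}
        print(fuse(latest))   # fusion result for this period
        time.sleep(period)

buffers["lidar"].append({"stamp": 0.00, "data": [1, 2, 3]})
buffers["radar"].append({"stamp": 0.02, "data": [4, 5]})
run()
```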

Systems and methods for secure tokenized credentials

Systems, devices, methods, and computer-readable media are provided, in various embodiments, for authentication using secure tokens. An individual's personal information is encapsulated into transformed, digitally signed tokens, which can then be stored in a secure data storage (e.g., a "personal information bank"). The digitally signed tokens can include blended characteristics of the individual (e.g., 2D/3D facial representation, speech patterns) that are combined with digital signatures obtained from cryptographic keys (e.g., private keys) associated with corroborating trusted entities (e.g., a government, a bank) or organizations of which the individual purports to be a member (e.g., a dog-walking service).
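A toy version of token creation and verification is sketched below. HMAC stands in here for the asymmetric signature a corroborating entity would apply with its private key, and all field names are hypothetical.

```python
# Minimal token sketch: blended personal characteristics are hashed into a
# payload, which the issuing entity signs. A real system would use an
# asymmetric signature scheme; HMAC is used only to keep the sketch in the
# standard library.
import hashlib, hmac, json

def make_token(face_vec, voice_vec, entity_key: bytes):
    # Blend characteristics (e.g., facial + speech features) into one digest.
    blended = hashlib.sha256(bytes(face_vec) + bytes(voice_vec)).hexdigest()
    payload = json.dumps({"blend": blended, "issuer": "bank"}).encode()
    sig = hmac.new(entity_key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def verify(token, entity_key: bytes):
    expect = hmac.new(entity_key, token["payload"].encode(), hashlib.sha256)
    return hmac.compare_digest(expect.hexdigest(), token["sig"])

tok = make_token([1, 2, 3], [4, 5, 6], b"issuer-secret")
print(verify(tok, b"issuer-secret"))  # True
```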

Methods and systems for predicting a condition of a living being in an environment

A method for predicting a condition of a living being in an environment, the method including: capturing image data associated with at least one person and, based thereupon, determining a current condition of the person; receiving content from a plurality of content sources with respect to the at least one person being imaged, the content defined by at least one of text and statistics; defining one or more weighted parameters by allocating a plurality of weights to at least one of the captured image data and the received content based on the current condition; and predicting, by a predictive-analysis module, a condition of the at least one person based on analysis of the one or more weighted parameters.
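The weighting step could be as simple as the following sketch, where condition-dependent weights blend an image-derived score with a content-derived score. The weights, threshold, and labels are invented for illustration.

```python
# Illustrative weighted-parameter scheme: the current condition shifts the
# weight given to image evidence versus text/statistics content, and the
# weighted sum drives the prediction.
def predict_condition(image_score, content_score, current_condition):
    # If the image already indicates distress, trust the image more.
    w_img = 0.7 if current_condition == "distressed" else 0.4
    w_content = 1.0 - w_img
    risk = w_img * image_score + w_content * content_score
    return "at-risk" if risk > 0.5 else "stable"

print(predict_condition(image_score=0.8, content_score=0.3,
                        current_condition="distressed"))  # at-risk
```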

OBJECT DETECTION USING IMAGES AND MESSAGE INFORMATION
20220405952 · 2022-12-22

Disclosed are techniques for performing object detection and tracking. In some implementations, a process for performing object detection and tracking is provided. The process can include steps for obtaining, at a tracking object, an image comprising a target object, obtaining, at the tracking object, a first set of messages associated with the target object, determining a bounding box for the target object in the image based on the first set of messages associated with the target object, and extracting a sub-image from the image. In some approaches, the process can further include steps for detecting, using an object detection model, a location of the target object within the sub-image. Systems and machine-readable media are also provided.
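One way to picture the message-guided flow is the sketch below: messages supply a coarse pixel position for the target, a sub-image is cropped around it, and a detector refines the location inside the crop before mapping back to full-image coordinates. The message contents, crop margin, and dummy detector are assumptions.

```python
# Sketch of message-guided detection: derive a bounding box from messages,
# extract a sub-image, detect within it, then map back to the full image.
import numpy as np

def bbox_from_messages(msgs, img_shape):
    # Assume each message carries a projected pixel position of the target.
    xs = [m["px"] for m in msgs]; ys = [m["py"] for m in msgs]
    cx, cy, half = int(np.mean(xs)), int(np.mean(ys)), 64  # assumed margin
    h, w = img_shape[:2]
    return (max(cx - half, 0), max(cy - half, 0),
            min(cx + half, w), min(cy + half, h))

def detect(image, msgs, model=lambda crop: (8, 8, 40, 40)):  # dummy detector
    x0, y0, x1, y1 = bbox_from_messages(msgs, image.shape)
    sub = image[y0:y1, x0:x1]                 # extracted sub-image
    dx0, dy0, dx1, dy1 = model(sub)           # location within the sub-image
    return (x0 + dx0, y0 + dy0, x0 + dx1, y0 + dy1)  # full-image coordinates

img = np.zeros((480, 640, 3), dtype=np.uint8)
print(detect(img, [{"px": 300, "py": 200}, {"px": 310, "py": 190}]))
```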

METHOD AND APPARATUS FOR PROCESSING LANE LINE

The present disclosure provides a method and an apparatus for processing a lane line, and relates to the field of data processing and, in particular, to the fields of intelligent transportation, Internet of Vehicles and intelligent cockpit. A specific implementation scheme is: obtaining a lane edge line of a road and a lane dividing line of the road according to point cloud data and image information of the road; acquiring breakpoints of the lane edge line, and acquiring breakpoints of the lane dividing line; completing the lane edge line according to the breakpoints of the lane edge line, to obtain a continuous lane edge line; completing the lane dividing line according to the breakpoints of the lane dividing line and the continuous lane edge line, to obtain a continuous lane dividing line.
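The completion step can be pictured as densifying a polyline across its breakpoints, as in this toy sketch. A real implementation would fit curves constrained by the completed edge line, so treat this as the shape of the operation only.

```python
# Toy lane-line completion: gaps between breakpoints are filled by linear
# interpolation so the polyline becomes continuous.
import numpy as np

def complete_line(points, gap_threshold=2.0, step=0.5):
    """points: (N, 2) polyline sorted along the road; returns densified line."""
    out = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        gap = np.linalg.norm(b - a)
        if gap > gap_threshold:  # a breakpoint: interpolate across it
            n = int(gap // step)
            for t in np.linspace(0, 1, n, endpoint=False)[1:]:
                out.append(a + t * (b - a))
        out.append(b)
    return np.array(out)

line = np.array([[0.0, 0.0], [1.0, 0.1], [9.0, 0.4]])  # large gap after x=1
print(complete_line(line).shape)  # the gap is filled with interpolated points
```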

SENSOR SYSTEMS AND METHODS FOR AN AIRCRAFT LAVATORY
20220402610 · 2022-12-22

A method may comprise: receiving, via a processor, a first indication that an object is in a first zone of interest of a first sensor of a plurality of sensors; receiving, via the processor, a second indication that the object is in a second zone of interest of a second sensor of the plurality of sensors; and determining, via the processor, whether the first sensor or the second sensor is falsely detecting the object within its respective zone of interest.
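The determination might reduce to a simple cross-check, sketched below under the assumption that the two zones of interest overlap; the decision rule is illustrative, as the abstract claims only the comparison itself.

```python
# Sketch of the cross-check: if only one of two overlapping zones reports
# the object, the lone report is treated as a suspected false detection.
def suspect_false_detection(first_detects: bool, second_detects: bool):
    if first_detects and second_detects:
        return None                    # both agree: object is present
    if first_detects != second_detects:
        return "first" if first_detects else "second"  # lone sensor suspect
    return None                        # neither sensor fired

print(suspect_false_detection(True, False))  # 'first' may be falsely detecting
```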

Image processing method and device and storage medium

The present disclosure relates to an image processing method and device, an electronic apparatus and a storage medium. The method comprises: acquiring an iris image group comprising at least two iris images to be compared; detecting iris locations in the iris images and segmentation results of iris areas in the iris images; performing multi-scale feature extraction and multi-scale feature fusion on an image area corresponding to the iris locations, to obtain iris feature maps corresponding to the iris images; performing comparison using the segmentation results and the iris feature maps respectively corresponding to the at least two iris images, and determining whether the at least two iris images correspond to the same object based on a comparison result of the comparison. Embodiments of the present disclosure realize accurate comparison of iris images.
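A schematic of the comparison, assuming iris crops and segmentation masks are already extracted: features are pooled at several scales, fused by concatenation, and the fused descriptors are compared by cosine similarity. The scales, pooling, and threshold are placeholders rather than the disclosed network.

```python
# Schematic iris comparison with multi-scale feature extraction and fusion.
import numpy as np

def multiscale_features(iris, mask, scales=(1, 2, 4)):
    feats = []
    for s in scales:
        pooled = iris[::s, ::s] * mask[::s, ::s]  # apply segmentation result
        feats.append(pooled.mean(axis=0))         # crude per-scale descriptor
    return np.concatenate(feats)                  # multi-scale feature fusion

def same_object(iris_a, mask_a, iris_b, mask_b, thresh=0.9):
    fa = multiscale_features(iris_a, mask_a)
    fb = multiscale_features(iris_b, mask_b)
    cos = fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-8)
    return cos > thresh  # True: the irises belong to the same object

a = np.random.rand(64, 64); m = np.ones((64, 64))
print(same_object(a, m, a, m))  # True for identical irises
```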

Multi-view deep neural network for LiDAR perception

One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
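A skeleton of the two-stage, two-view pipeline in the spirit of this abstract is sketched below; the layer sizes and the project_to_bev() placeholder are assumptions, not the actual architecture.

```python
# Two chained stages processing different views: stage one segments classes
# in a perspective (range-image) view, stage two consumes a top-down
# reprojection and regresses instance geometry.
import torch
import torch.nn as nn

stage1 = nn.Conv2d(1, 4, 3, padding=1)   # per-pixel class logits, perspective
stage2 = nn.Conv2d(4, 6, 3, padding=1)   # class + box parameters, top-down

def project_to_bev(perspective_logits):
    # Stand-in for the LiDAR-driven reprojection between the two views.
    return perspective_logits  # assume matching spatial size for the sketch

range_image = torch.randn(1, 1, 64, 256)      # LiDAR range (perspective) view
cls_persp = stage1(range_image)               # first-view class segmentation
bev_out = stage2(project_to_bev(cls_persp))   # second-view instance geometry
print(bev_out.shape)  # torch.Size([1, 6, 64, 256])
```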