G06V10/803

Systems and methods for dynamic passphrases

Systems, devices, methods, and computer readable media are provided in various embodiments relating to generating a dynamic challenge passphrase data object. The method includes establishing a plurality of data record clusters, each representing a mutually exclusive set of structured data records of an individual, ranking a plurality of feature data fields based on a determined contribution value of each feature data field relative to the establishing of the data record clusters, and identifying, using the ranked plurality of feature data fields, a first and a second feature data field of the plurality of feature data fields. The method includes generating the dynamic challenge passphrase data object, wherein the first or the second feature data field is used to establish a statement string portion, and a remaining one of the first or the second feature data field is used to establish a question string portion and a correct response string.
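The ranking-and-challenge flow above can be sketched in a few lines. This is a minimal illustration, not the patented scoring: the contribution metric (how little a field's values overlap across clusters) and the string templates are invented assumptions.

```python
def rank_features(clusters):
    """Rank feature fields by how well their values separate the clusters.

    Contribution here is an assumed proxy: a field whose values barely
    overlap between clusters gets a higher score.
    """
    features = list(clusters[0][0].keys())
    scores = {}
    for f in features:
        values_per_cluster = [{rec[f] for rec in cluster} for cluster in clusters]
        union = set().union(*values_per_cluster)
        overlap = sum(len(v) for v in values_per_cluster) - len(union)
        scores[f] = -overlap  # fewer shared values -> higher contribution
    return sorted(features, key=lambda f: scores[f], reverse=True)

def build_challenge(record, first_field, second_field):
    """One field seeds the statement; the other seeds question and answer."""
    statement = f"You are associated with {first_field} '{record[first_field]}'."
    question = f"Which {second_field} goes with it?"
    answer = str(record[second_field])
    return statement, question, answer
```

In a fuller system the clusters would come from an actual clustering step over the individual's structured records; here they are taken as given.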

APPARATUS AND METHODOLOGY OF ROAD CONDITION CLASSIFICATION USING SENSOR DATA

Methods and systems are provided for controlling a vehicle action based on a condition of a road on which a vehicle is travelling, including: obtaining first sensor data as to a surface of the road from one or more first sensors onboard the vehicle; obtaining second sensor data from one or more second sensors onboard the vehicle as to a measured parameter pertaining to operation of the vehicle or conditions pertaining thereto; generating a plurality of road surface channel images from the first sensor data, wherein each road surface channel image captures one of a plurality of facets of properties of the first sensor data; classifying, via a processor using a neural network model, the condition of the road on which the vehicle is travelling, based on the measured parameter and the plurality of road surface channel images; and controlling a vehicle action based on the classification of the condition of the road.
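The channel-image idea can be sketched as follows. The two facets shown (intensity and height) and the threshold rule standing in for the neural network model are illustrative assumptions; the abstract does not specify either.

```python
def make_channel_images(samples, width):
    """Split flat road-surface sensor samples into per-facet channel images.

    Each channel image captures one facet of the first sensor data.
    """
    rows = [samples[i:i + width] for i in range(0, len(samples), width)]
    intensity = [[s["intensity"] for s in row] for row in rows]
    height = [[s["height"] for s in row] for row in rows]
    return {"intensity": intensity, "height": height}

def classify_road(channels, wheel_slip):
    """Toy classifier combining a channel image with a measured parameter.

    A real system would feed both into a trained neural network; the
    0.3 / 0.2 thresholds here are invented for the sketch.
    """
    flat = [v for row in channels["intensity"] for v in row]
    mean_intensity = sum(flat) / len(flat)
    if mean_intensity < 0.3 or wheel_slip > 0.2:
        return "slippery"
    return "dry"
```

The classification result would then drive the vehicle-control action (for example, limiting speed on a "slippery" verdict).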

VEHICLE SENSOR DATA SHARING
20220036098 · 2022-02-03 ·

Two vehicles—an ego vehicle and an other vehicle—can share sensor data in a streamlined manner. One or more sensors can be configured to acquire first environment data of an external environment of the ego vehicle. A data summary based on second environment data of an external environment of the other vehicle can be received. Whether there is a common region of sensor coverage between the ego vehicle and the other vehicle can be determined. In response to there being a common region, the first environment data that is located within the common region can be identified and the resolution level of the identified first environment data can be reduced. The first environment data that has the reduced resolution level and a remainder of the first environment data excluding the identified first environment data can be transmitted to the other vehicle.
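The overlap-then-decimate flow can be illustrated with a one-dimensional stand-in for sensor coverage. The interval representation and the keep-every-Nth decimation are assumptions made for the sketch; real coverage regions would be 2-D or 3-D.

```python
def common_region(ego_cov, other_cov):
    """Intersect two (start, end) coverage intervals; None if disjoint."""
    lo, hi = max(ego_cov[0], other_cov[0]), min(ego_cov[1], other_cov[1])
    return (lo, hi) if lo < hi else None

def prepare_transmission(points, ego_cov, other_cov, keep_every=4):
    """Reduce resolution inside the common region, keep the rest intact."""
    region = common_region(ego_cov, other_cov)
    if region is None:
        return points                      # no overlap: send everything as-is
    inside = [p for p in points if region[0] <= p[0] < region[1]]
    outside = [p for p in points if not (region[0] <= p[0] < region[1])]
    reduced = inside[::keep_every]         # resolution reduction by decimation
    return reduced + outside
```

The intuition: the other vehicle already senses the common region itself, so full-resolution data there is redundant and can be thinned before transmission.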

DANGEROUS DRIVING DETECTION DEVICE, DANGEROUS DRIVING DETECTION SYSTEM, DANGEROUS DRIVING DETECTION METHOD, AND STORAGE MEDIUM

A dangerous driving detection device includes: an acquisition section that acquires image information, which expresses captured images that are captured by an imaging section provided at a vehicle, and vehicle information that expresses a state of the vehicle; plural detection sections that, based on the image information and the vehicle information acquired by the acquisition section, detect types of dangerous driving that are respectively different from one another; and a deriving section that, based on results of detection of the plural detection sections, derives a degree of danger of dangerous driving of a driver.
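The plural-detectors-plus-deriving-section structure can be sketched like this. The two detector rules and the weights are invented for illustration; the abstract only says the detectors target different types of dangerous driving.

```python
def detect_tailgating(vehicle_info):
    """One detection section: following too closely at speed (assumed rule)."""
    return vehicle_info["gap_m"] < 10 and vehicle_info["speed_kph"] > 60

def detect_sudden_braking(vehicle_info):
    """Another detection section: hard deceleration (assumed rule)."""
    return vehicle_info["decel_ms2"] > 6.0

# Hypothetical per-type weights used by the deriving section.
WEIGHTS = {"tailgating": 0.6, "sudden_braking": 0.4}

def danger_degree(vehicle_info):
    """Deriving section: combine detector outputs into one degree of danger."""
    results = {
        "tailgating": detect_tailgating(vehicle_info),
        "sudden_braking": detect_sudden_braking(vehicle_info),
    }
    return sum(WEIGHTS[k] for k, hit in results.items() if hit)
```

A production system would also consume the image information (e.g. lane-departure cues from the captured images) alongside these vehicle-state signals.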

SYSTEMS AND METHODS FOR GENERATING SUMMARY MEDICAL IMAGES
20220036612 · 2022-02-03 ·

There is provided a computer implemented method for generating summary images from 3D medical images, comprising: receiving a 3D medical image, dividing the 3D medical image into a sequence of a plurality of 2D images, computing a similarity dataset indicative of an amount of similarity between each pair of the plurality of 2D images, segmenting the similarity dataset into a plurality of slabs by minimizing the amount of similarity between consecutive slabs and maximizing the amount of similarity within each slab, aggregating, for each respective slab, the plurality of 2D images into a respective summary image, and presenting, on a display, the respective summary image.
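The slab segmentation step can be sketched concretely: compute similarity between adjacent slices, cut at the least-similar boundaries, then aggregate each slab. Using only adjacent-pair similarity (rather than all pairs) and per-pixel mean aggregation are simplifying assumptions.

```python
def consecutive_similarity(slices):
    """Similarity between adjacent 2-D slices (negated sum of abs differences)."""
    sims = []
    for a, b in zip(slices, slices[1:]):
        diff = sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
        sims.append(-diff)
    return sims

def segment_into_slabs(slices, n_slabs):
    """Cut at the (n_slabs - 1) least-similar adjacent pairs, so similarity
    stays high within a slab and low across slab boundaries."""
    sims = consecutive_similarity(slices)
    cuts = sorted(sorted(range(len(sims)), key=lambda i: sims[i])[: n_slabs - 1])
    slabs, start = [], 0
    for c in cuts:
        slabs.append(slices[start:c + 1])
        start = c + 1
    slabs.append(slices[start:])
    return slabs

def summarize(slab):
    """Aggregate a slab into one summary image by per-pixel mean."""
    h, w = len(slab[0]), len(slab[0][0])
    return [[sum(s[r][c] for s in slab) / len(slab) for c in range(w)]
            for r in range(h)]
```

Each summary image would then be presented on the display in place of its full slab of slices.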

SYSTEM AND METHOD FOR GENERATION OF PROCESS GRAPHS FROM MULTI-MEDIA NARRATIVES

A system for characterizing content relating to a desired outcome is disclosed. The disclosed system can be configured to identify context included in content collected from various content sources, map the identified context into graph nodes and graph edges connecting the graph nodes, identify one or more features of the identified context and adjust at least one of: a graph node and a graph edge based on the identified one or more features, identify a graph incorporating the graph nodes, the graph edges, and at least one of an adjusted graph node and an adjusted graph edge, and provide a recommendation for at least one action for achieving the desired outcome based on the identified graph.
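The map-context-to-graph-and-recommend loop can be sketched minimally. Treating each narrative as an ordered sequence of context items, and edge weight as co-occurrence count, are assumptions; the abstract leaves the mapping unspecified.

```python
def build_graph(narratives):
    """Map context items to graph nodes; consecutive items become weighted edges."""
    nodes, edges = set(), {}
    for items in narratives:
        nodes.update(items)
        for a, b in zip(items, items[1:]):
            edges[(a, b)] = edges.get((a, b), 0) + 1  # feature-based adjustment
    return nodes, edges

def recommend_next(edges, current):
    """Recommend the action that most often follows the current step."""
    candidates = {b: w for (a, b), w in edges.items() if a == current}
    return max(candidates, key=candidates.get) if candidates else None
```

The recommendation is the graph-derived "next action toward the desired outcome": the heaviest outgoing edge from the current node.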

Ground environment detection method and apparatus

A ground environment detection method and apparatus are disclosed, where the method includes: scanning a ground environment by using laser sounding signals having different operating wavelengths, receiving a reflected signal that is reflected back by the ground environment, determining scanning spot information of each scanning spot of the ground environment based on the reflected signal, determining space coordinate information and a laser reflection feature of each scanning spot based on each piece of scanning spot information, partitioning the ground environment into sub-regions having different laser reflection features, and determining a ground environment type of each sub-region. Lasers having different operating wavelengths are used to scan the ground, and the ground environment type is determined based on the reflection intensity of the ground environment under different wavelengths of lasers, thereby improving perception of complex ground environments and better identifying passable road surfaces.
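The per-spot classification and partitioning can be sketched as below. The wavelengths (905 nm and 1550 nm) and the reflectance thresholds are illustrative assumptions; the ratio test leans on the real physical fact that water absorbs strongly near 1550 nm.

```python
def classify_spot(reflect_905nm, reflect_1550nm):
    """Assign a ground type from reflectance at two wavelengths (toy rules)."""
    # water absorbs strongly at 1550 nm, so a large drop hints at a wet patch
    if reflect_905nm > 0 and reflect_1550nm / reflect_905nm < 0.3:
        return "wet"
    if reflect_905nm < 0.2:
        return "asphalt"
    return "vegetation"

def partition(spots):
    """Group consecutive spots of the same type into sub-regions.

    Each spot is (xyz, reflectance_905nm, reflectance_1550nm).
    """
    regions = []
    for xyz, r905, r1550 in spots:
        t = classify_spot(r905, r1550)
        if regions and regions[-1][0] == t:
            regions[-1][1].append(xyz)
        else:
            regions.append((t, [xyz]))
    return regions
```

A real implementation would partition in 3-D using the spots' space coordinates rather than by scan order.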

DOCUMENT REGION DETECTION
20170221213 · 2017-08-03 ·

One example provides a system. The system receives an infrared image and processes the infrared image to detect lines in the infrared image. The system receives a color image corresponding to the infrared image and processes the color image to detect lines in the color image. The detected lines in the infrared image and the detected lines in the color image are combined. A document region is detected from the combined detected lines.
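The combine-and-detect step can be sketched with line segments represented as `((x1, y1), (x2, y2))` tuples. Actual line detection in each image (e.g. a Hough transform) is out of scope here, and the duplicate tolerance and bounding-box region are assumptions.

```python
def combine_lines(ir_lines, color_lines, tol=2):
    """Merge the IR and color line sets, dropping near-duplicate segments."""
    def close(a, b):
        return all(abs(p - q) <= tol
                   for pa, pb in zip(a, b) for p, q in zip(pa, pb))
    merged = list(ir_lines)
    for line in color_lines:
        if not any(close(line, m) for m in merged):
            merged.append(line)
    return merged

def document_region(lines):
    """Bounding box of all combined line endpoints as the detected region."""
    xs = [x for line in lines for x, _ in line]
    ys = [y for line in lines for _, y in line]
    return (min(xs), min(ys), max(xs), max(ys))
```

The point of combining: IR and color channels fail on different backgrounds, so their union recovers document edges that either alone would miss.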

Automated detection and avoidance system

In general, certain embodiments of the present disclosure provide a detection and avoidance system for a vehicle. According to various embodiments, the detection and avoidance system comprises an imaging unit configured to obtain a first image of a field of view at a first camera channel. The first camera channel filters radiation at a wavelength, where one or more objects in the field of view do not emit radiation at the wavelength. The detection and avoidance system further comprises a processing unit configured to receive the first image from the imaging unit and to detect one or more objects therein, as well as a notifying unit configured to communicate collision hazard information determined based upon the detected one or more objects to a pilot control system of the vehicle. Accordingly, the pilot control system maneuvers the vehicle to avoid the detected objects.
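The filtered-channel idea can be sketched as follows: at the filtered wavelength the background (e.g. sky) radiates while obstacles do not, so dark pixels flag potential hazards. The darkness threshold and the hazard summary fields are invented for illustration.

```python
def detect_objects(image, dark_threshold=0.2):
    """Return (row, col) pixels whose radiance falls below the threshold."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v < dark_threshold]

def collision_hazard_info(image):
    """Summarize detections for the notifying unit; None if nothing found."""
    hits = detect_objects(image)
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return {"count": len(hits),
            "center": (sum(rows) / len(hits), sum(cols) / len(hits))}
```

The returned summary stands in for the collision hazard information the notifying unit would pass to the pilot control system.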

Method and device for image processing, method for training object detection model

A method and device for image processing, and a method for training an object detection model, are provided in the present disclosure. In the method for image processing, a visible light image is acquired. A central weight map corresponding to the visible light image is generated. Weight values represented by the central weight map gradually decrease from a center to an edge of the visible light image. The visible light image and the central weight map are inputted into an object detection model to obtain an object region confidence map. The object detection model is a model obtained by training according to multiple sets of training data, each set of which includes a visible light image, a central weight map and a corresponding labeled object mask pattern for a same scenario. A target object in the visible light image is determined according to the object region confidence map.
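The central weight map itself is easy to sketch. A linear fall-off with Chebyshev distance from the image center is an assumption here: the abstract only requires that values gradually decrease from center to edge.

```python
def central_weight_map(h, w):
    """Build an h-by-w map whose weights decrease from center (1.0) to edge.

    Fall-off is linear in Chebyshev distance from the center pixel; this
    particular profile is illustrative, not specified by the source.
    """
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_d = max(cy, cx) or 1  # guard against 1x1 maps
    return [[1.0 - max(abs(r - cy), abs(c - cx)) / max_d for c in range(w)]
            for r in range(h)]
```

During training and inference this map is stacked with the visible light image as an extra input channel, biasing the model toward subjects near the frame center.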