G06V10/803

Guided batching

The present invention provides a method of generating a robust global map using a plurality of limited field-of-view cameras to capture an environment. Provided is a method for generating a three-dimensional map, comprising: receiving a plurality of sequential image data, wherein each of the plurality of sequential image data comprises a plurality of sequential images, further wherein the plurality of sequential images is obtained by a plurality of limited field-of-view image sensors; determining a pose of each of the plurality of sequential images of each of the plurality of sequential image data; determining one or more overlapping poses using the determined poses of the sequential image data; selecting at least one set of images from the plurality of sequential images, wherein each set of images is determined to have overlapping poses; and constructing one or more map portions derived from each of the at least one set of images.
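The selection step above can be sketched in code. The abstract does not specify the overlap criterion, so the sketch below assumes a simple distance-plus-heading test on 2D poses; all names and thresholds are illustrative, not the patented method.

```python
import math

# Hypothetical sketch: group sequential images whose camera poses overlap.
# A pose is (x, y, heading_deg); thresholds are illustrative assumptions.

def poses_overlap(pose_a, pose_b, max_dist=2.0, max_heading_diff=45.0):
    """Two limited field-of-view images are assumed to overlap when the
    cameras were close together and pointed in similar directions."""
    dist = math.hypot(pose_a[0] - pose_b[0], pose_a[1] - pose_b[1])
    heading = abs(pose_a[2] - pose_b[2]) % 360.0
    heading = min(heading, 360.0 - heading)
    return dist <= max_dist and heading <= max_heading_diff

def select_overlapping_sets(poses):
    """Greedily cluster image indices into sets with overlapping poses;
    each resulting set would seed one map portion."""
    sets, used = [], set()
    for i, p in enumerate(poses):
        if i in used:
            continue
        group = [i]
        used.add(i)
        for j in range(i + 1, len(poses)):
            if j not in used and poses_overlap(p, poses[j]):
                group.append(j)
                used.add(j)
        sets.append(group)
    return sets

poses = [(0, 0, 0), (1, 0, 10), (50, 50, 180), (1.5, 0.5, 20)]
print(select_overlapping_sets(poses))  # [[0, 1, 3], [2]]
```

Each returned group would then be handed to a reconstruction step that builds one map portion.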

Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications

A method and system for creating hypercomplex representations of data includes, in one exemplary embodiment, receiving at least one set of training data with associated labels or desired response values; transforming the data and labels into hypercomplex values; defining hypercomplex graphs of functions; training algorithms to minimize the cost of an error function over the parameters in the graph; and reading hierarchical data representations from the resulting graph. Another exemplary embodiment learns hierarchical representations from unlabeled data. The method and system, in another exemplary embodiment, may be employed for biometric identity verification by combining multimodal data collected using many sensors, including data such as anatomical characteristics, behavioral characteristics, demographic indicators, and artificial characteristics. In other exemplary embodiments, the system and method may learn hypercomplex function approximations in one environment and transfer the learning to other target environments. Other exemplary applications of the hypercomplex deep learning framework include: image segmentation; image quality evaluation; image steganalysis; face recognition; event embedding in natural language processing; machine translation between languages; object recognition; medical applications such as breast cancer mass classification; multispectral imaging; audio processing; color image filtering; and clothing identification.
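The abstract does not define its hypercomplex algebra, but quaternions are the most common choice in hypercomplex deep learning, so a minimal illustration of the "transform data into hypercomplex values" step might look like the following (not the patented method; `to_quaternion` and the weight are illustrative assumptions):

```python
# Illustrative sketch: data encoded as quaternions, with the Hamilton
# product serving as the basic "weight times input" operation of a
# quaternion-valued network layer.

def hamilton_product(q, p):
    """Multiply two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def to_quaternion(rgb):
    """Encode an RGB pixel as a pure quaternion (0, r, g, b) -- one common
    way to turn multichannel data into a single hypercomplex value."""
    r, g, b = rgb
    return (0.0, r, g, b)

q = to_quaternion((0.2, 0.5, 0.1))
w = (0.0, 1.0, 0.0, 0.0)  # hypothetical learned weight
print(hamilton_product(w, q))
```

Because the Hamilton product mixes all four components, a quaternion layer couples the channels of multimodal data in a way four independent real-valued weights cannot.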

Systems and methods for determining real-time lane level snow accumulation

A method comprises receiving an image of a road captured by a vehicle driving on the road, receiving a map of the road, the map comprising a road geometry of the road, obtaining an edge map of the road based on the image of the road, inputting the image, the map of the road, and the edge map into a trained regressor neural network, determining an estimated snow depth for each of one or more lanes of the road based on an output of the regressor neural network, and transmitting the estimated snow depth to an edge computing device.
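The data flow of this claim can be sketched end to end. The trained regressor network is replaced below by a stub, and the edge detector is a toy horizontal gradient; every name, coefficient, and the "weaker edges imply deeper snow" heuristic are assumptions for illustration only.

```python
# Minimal sketch of the described pipeline: image + road-geometry map ->
# edge map -> trained regressor -> per-lane snow depth estimate.

def edge_map(image):
    """Toy horizontal-gradient edge detector standing in for a real one."""
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in image]

def regressor(image, road_map, edges):
    """Stub for the trained regressor network: mean edge strength per lane
    scaled by a made-up coefficient (illustrative logic only)."""
    n_lanes = road_map["lanes"]
    flat = [v for row in edges for v in row]
    mean_edge = sum(flat) / len(flat)
    # Weaker edges -> lane markings hidden -> assume deeper snow (toy rule).
    return [round(max(0.0, 5.0 - mean_edge) * (1 + 0.1 * lane), 2)
            for lane in range(n_lanes)]

image = [[10, 12, 15], [10, 11, 11]]
road = {"lanes": 2}
depths = regressor(image, road, edge_map(image))
print(depths)  # estimated snow depth per lane (arbitrary units)
```

In the claimed system the per-lane estimates would then be transmitted to an edge computing device rather than printed.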

Rearview camera field of view with alternative tailgate positions

Systems and methods for a rear-viewing camera system for a vehicle. A first camera is mounted on the vehicle with a field of view that is at least partially obstructed by a tailgate of the vehicle and/or a load carried by the vehicle. A second camera is mounted on the vehicle with a field of view that includes an unobstructed view of an imaging area that is obstructed in the field of view of the first camera. A tailgate position sensor is configured to output a signal indicative of a current position of the tailgate of the vehicle. By determining a position of the tailgate, an electronic controller is configured to generate an output image in which the tailgate and/or the load appear at least partially transparent by replacing image data that is obstructed in the image captured by the first camera with image data captured by the second camera.
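The compositing step described above, replacing obstructed first-camera pixels with second-camera pixels, can be sketched as follows. The obstruction mask would in practice be derived from the tailgate position sensor; here it is given directly, and the optional `alpha` blend is an illustrative assumption.

```python
# Hedged sketch of the compositing step: pixels of the first camera's image
# that the tailgate obstructs (marked in a mask) are replaced with the
# second camera's unobstructed pixels, so the tailgate appears transparent.

def composite(primary, secondary, obstruction_mask, alpha=0.0):
    """alpha > 0 keeps a faint ghost of the tailgate instead of full
    transparency (names and blending rule are illustrative)."""
    out = []
    for p_row, s_row, m_row in zip(primary, secondary, obstruction_mask):
        out.append([p if not m else round(alpha * p + (1 - alpha) * s)
                    for p, s, m in zip(p_row, s_row, m_row)])
    return out

primary   = [[100, 100], [100, 100]]   # tailgate occupies the bottom row
secondary = [[ 40,  41], [ 42,  43]]
mask      = [[0, 0], [1, 1]]
print(composite(primary, secondary, mask))  # [[100, 100], [42, 43]]
```

A real implementation would also warp the second camera's image into the first camera's viewpoint before substitution, since the two cameras are mounted at different positions.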

Sensor fusion for precipitation detection and control of vehicles

An apparatus includes a processor configured to be disposed with a vehicle and a memory coupled to the processor. The memory stores instructions to cause the processor to receive at least two of: radar data, camera data, lidar data, or sonar data. The sensor data is associated with a predefined region of a vicinity of the vehicle while the vehicle is traveling during a first time period. At least a portion of the vehicle is positioned within the predefined region during the first time period. The instructions also cause the processor to detect that no other vehicle is present within the predefined region. An environment of the vehicle during the first time period is classified as one state from a set of states that includes at least one of dry, light rain, heavy rain, light snow, or heavy snow, based on at least two of the types of sensor data, to produce an environment classification. An operational parameter of the vehicle is then modified based on the environment classification.
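One simple way to realize the classification and parameter-adjustment steps is majority voting across modalities. The abstract does not disclose the actual classifier, so the state names, voting rule, and speed factors below are assumptions for illustration.

```python
# Illustrative fusion sketch: each sensor modality votes for an environment
# state; a majority over at least two modalities produces the classification
# that then adjusts an operational parameter of the vehicle.

from collections import Counter

def classify(votes):
    """votes: mapping of sensor name -> state voted by that modality."""
    if len(votes) < 2:
        raise ValueError("need at least two sensor modalities")
    state, _ = Counter(votes.values()).most_common(1)[0]
    return state

def adjust_speed_limit(base_kph, state):
    """Toy operational-parameter update: scale target speed by condition."""
    factor = {"dry": 1.0, "light_rain": 0.9, "heavy_rain": 0.7,
              "light_snow": 0.8, "heavy_snow": 0.5}[state]
    return base_kph * factor

votes = {"radar": "heavy_rain", "camera": "heavy_rain", "lidar": "light_rain"}
state = classify(votes)
print(state, adjust_speed_limit(100, state))
```

Requiring an empty predefined region (no other vehicle present) before classifying, as the claim does, keeps spray and occlusion from another vehicle out of the measurement.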

SYSTEMS AND METHODS FOR SECURE TOKENIZED CREDENTIALS

Systems, devices, methods, and computer readable media are provided in various embodiments with regard to authentication using secure tokens. An individual's personal information is encapsulated into transformed digitally signed tokens, which can then be stored in a secure data storage (e.g., a “personal information bank”). The digitally signed tokens can include blended characteristics of the individual (e.g., 2D/3D facial representation, speech patterns) that are combined with digital signatures obtained from cryptographic keys (e.g., private keys) associated with corroborating trusted entities (e.g., a government, a bank) or organizations of which the individual purports to be a member (e.g., a dog-walking service).
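The issue-and-verify flow can be sketched under simplifying assumptions: the blended characteristics are serialized and signed, and tampering breaks verification. A real system would use the trusted entity's asymmetric private key; HMAC-SHA256 with a shared secret stands in here so the example stays standard-library-only, and all field names are hypothetical.

```python
# Sketch of the token flow: serialize the individual's blended
# characteristics, sign them with the corroborating entity's key, and
# verify the signature before trusting the token.

import hashlib
import hmac
import json

def issue_token(characteristics, entity_key):
    payload = json.dumps(characteristics, sort_keys=True).encode()
    sig = hmac.new(entity_key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def verify_token(token, entity_key):
    expected = hmac.new(entity_key, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

key = b"bank-signing-key"  # placeholder secret
token = issue_token({"face_hash": "ab12", "speech_hash": "cd34"}, key)
print(verify_token(token, key))          # True
token["payload"] = token["payload"].replace("ab12", "zz99")
print(verify_token(token, key))          # False -> tampering detected
```

Sorting the JSON keys gives a canonical serialization, so the same characteristics always produce the same signed bytes.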

MULTI-MODAL SENSOR DATA FUSION FOR PERCEPTION SYSTEMS

A method includes fusing multi-modal sensor data from a plurality of sensors having different modalities. At least one region of interest is detected in the multi-modal sensor data. One or more patches of interest are detected in the multi-modal sensor data based on detecting the at least one region of interest. A model that uses a deep convolutional neural network is applied to the one or more patches of interest. Post-processing of a result of applying the model is performed to produce a post-processing result for the one or more patches of interest. A perception indication of the post-processing result is output.
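The region-of-interest-to-patches step can be illustrated with a small tiling function. The abstract does not give patch sizes or coordinates, so the grid layout and names below are assumptions; the resulting patches are what would be fed to the deep convolutional neural network.

```python
# Minimal sketch of the ROI -> patches-of-interest step: a detected region
# in the fused sensor grid is tiled into fixed-size patches for the CNN.

def patches_from_roi(roi, patch=2):
    """roi: (x0, y0, x1, y1) in grid coordinates; returns the top-left
    corners of patch x patch tiles covering the region."""
    x0, y0, x1, y1 = roi
    return [(x, y)
            for y in range(y0, y1, patch)
            for x in range(x0, x1, patch)]

print(patches_from_roi((0, 0, 4, 4)))  # [(0, 0), (2, 0), (0, 2), (2, 2)]
```

Restricting the CNN to patches inside detected regions, rather than the whole fused frame, is what keeps the model application tractable.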

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
20230206622 · 2023-06-29

There is provided an information processing device capable of robustly recognizing a recognition target while reducing the power consumption required for recognition of the recognition target. The information processing device includes: a control unit configured to control switching of an operation unit to be activated between a first operation unit and a second operation unit related to recognition of a recognition target existing in a real space, on the basis of a detection result of a detection target, in which a first sensor configured to obtain first data in which the recognition target is recognized is attached to a first part of a body of a user, and a second sensor configured to obtain second data in which the recognition target is recognized is attached to a second part of the body different from the first part.
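The switching logic can be sketched as a small controller: the richer (and more power-hungry) second operation unit is activated only when the detection target is present. The class and unit names below are illustrative assumptions, not the disclosed implementation.

```python
# Hedged sketch of the power-saving switch: a low-power first operation
# unit stays active until the detection target is detected, at which point
# the controller activates the second operation unit for full recognition.

class SwitchController:
    def __init__(self):
        self.active = "first"  # e.g. a low-power wrist-mounted sensor unit

    def on_detection(self, target_detected):
        # Wake the richer second unit only when needed, saving power.
        self.active = "second" if target_detected else "first"
        return self.active

ctrl = SwitchController()
print(ctrl.on_detection(False))  # first
print(ctrl.on_detection(True))   # second
```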

ELECTRONIC DEVICE AND METHOD WITH INDEPENDENT TIME POINT MANAGEMENT

An electronic device includes: a timekeeping processor configured to: determine a sensing time point of reference sensing data, generated through sensing by a reference sensor of a plurality of sensors, and a sensing time point of other sensing data of another sensor of the plurality of sensors, based on a clock rate of the timekeeping processor; determine a time difference between the sensing time point of the reference sensing data and the sensing time point of the other sensing data; and determine a task latency from the sensing time point of the reference sensing data to a task complete time point based on the clock rate of the timekeeping processor; and one or more other processors configured to correct, based on the task latency, a task result processed according to localization of the electronic device determined for the sensing time point of the reference sensing data based on the determined time difference, the reference sensing data, and the other sensing data.
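The timekeeping arithmetic above reduces to converting tick counts to time points on the timekeeping processor's own clock and taking differences. A worked numeric sketch, with an assumed 10 MHz clock and made-up tick values:

```python
# Worked sketch: the timekeeping processor converts clock ticks to time
# points, takes the difference between the reference sensor's and the other
# sensor's sensing times, and measures the task latency used to correct
# the localization result. Clock rate and tick values are assumptions.

def ticks_to_us(ticks, clock_rate_hz):
    """Convert a tick count to microseconds at the given clock rate."""
    return ticks * 1_000_000 / clock_rate_hz

CLOCK = 10_000_000  # 10 MHz timekeeping clock (assumed)

ref_ticks, other_ticks, done_ticks = 1_000_000, 1_000_500, 1_250_000

t_ref   = ticks_to_us(ref_ticks, CLOCK)    # reference sensing time point
t_other = ticks_to_us(other_ticks, CLOCK)  # other sensor's sensing time

time_difference = t_other - t_ref                      # 50.0 us offset
task_latency = ticks_to_us(done_ticks, CLOCK) - t_ref  # 25000.0 us

print(time_difference, task_latency)
```

The other processors would then shift the localization result forward by `task_latency`, after first aligning the two sensor streams using `time_difference`.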

ELECTRONIC DEVICE EMPLOYING THERMAL SENSOR AND IMAGE SENSOR
20230196826 · 2023-06-22

There is provided a recognition system adaptable to a portable device or a wearable device. The recognition system senses body heat using a thermal sensor, and performs functions such as living-body recognition, image denoising, and body-temperature prompting according to the detected results.
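The living-body recognition idea can be sketched as a cross-check between the two sensors: a face found by the image sensor counts as a living body only if the thermal sensor reads a plausible skin temperature in the same region. The temperature window below is an assumption for illustration.

```python
# Toy sketch of the living-body check combining the image sensor's face
# detection with the thermal sensor's reading (thresholds are assumed).

def is_living_body(face_detected, thermal_celsius, low=30.0, high=40.0):
    """Reject detections without body heat, e.g. a printed photo."""
    return face_detected and low <= thermal_celsius <= high

print(is_living_body(True, 34.5))   # True  -> real person
print(is_living_body(True, 22.0))   # False -> e.g. a printed photo
print(is_living_body(False, 34.5))  # False -> no face detected
```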