Patent classifications
G06V30/2552
Object detection in vehicles using cross-modality sensors
A system includes first and second sensors and a controller. The first sensor is of a first type and is configured to sense objects around a vehicle and to capture first data about the objects in a frame. The second sensor is of a second type and is configured to sense the objects around the vehicle and to capture second data about the objects in the frame. The controller is configured to down-sample the first and second data to generate down-sampled first and second data having a lower resolution than the first and second data. The controller is configured to identify a first set of the objects by processing the down-sampled first and second data having the lower resolution. The controller is configured to identify a second set of the objects by selectively processing the first and second data from the frame.
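The two-stage, cross-modality flow described in this abstract can be sketched as follows. The decimation-based down-sampler, the agreement-based coarse detector, and the toy 4x4 frames are illustrative assumptions for demonstration, not the claimed implementation.

```python
def downsample(frame, factor=2):
    """Keep every `factor`-th sample in each dimension (naive decimation)."""
    return [row[::factor] for row in frame[::factor]]

def coarse_detect(low_res_a, low_res_b, threshold=0.5):
    """First pass: flag cells where both modalities report a high value."""
    hits = []
    for i, (row_a, row_b) in enumerate(zip(low_res_a, low_res_b)):
        for j, (a, b) in enumerate(zip(row_a, row_b)):
            if a > threshold and b > threshold:
                hits.append((i, j))
    return hits

def refine(frame_a, frame_b, hits, factor=2):
    """Second pass: selectively re-examine full-resolution data at hits only."""
    refined = []
    for i, j in hits:
        # Map each low-resolution cell back to its full-resolution location.
        a = frame_a[i * factor][j * factor]
        b = frame_b[i * factor][j * factor]
        refined.append(((i * factor, j * factor), (a + b) / 2))
    return refined

# Toy 4x4 "frames" captured by two sensor modalities in the same frame.
frame_a = [[0.1, 0.2, 0.9, 0.8],
           [0.1, 0.1, 0.9, 0.9],
           [0.0, 0.1, 0.2, 0.1],
           [0.0, 0.0, 0.1, 0.1]]
frame_b = [[0.2, 0.1, 0.8, 0.9],
           [0.0, 0.2, 0.7, 0.8],
           [0.1, 0.0, 0.1, 0.2],
           [0.1, 0.1, 0.0, 0.1]]

low_a, low_b = downsample(frame_a), downsample(frame_b)
first_set = coarse_detect(low_a, low_b)           # objects found at low resolution
second_set = refine(frame_a, frame_b, first_set)  # selectively processed at full resolution
```

The point of the split is that the cheap low-resolution pass runs on every frame, while full-resolution processing touches only the cells the coarse pass flagged.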
Image processing apparatus, image processing method, and storage medium
Character recognition processing suited to each of the handwritten character areas and the printed character areas in a scanned image of a document is performed. Next, the character recognition results for the handwritten character area and the character recognition results for the printed character area are integrated; a likelihood indicating the probability of being an extraction target is calculated for each candidate character string among the integrated character recognition results; and the character string that is the item value is determined. At the time of this determination, different evaluation indices are used depending on whether a character originating from the handwritten character area is included in the characters constituting the candidate character string.
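The integration-and-likelihood step can be sketched as below. The per-character confidences, the multiplicative score, and the penalty applied when a candidate contains handwritten characters are all hypothetical choices; the abstract only states that different evaluation indices are used in the two cases.

```python
def likelihood(candidate, handwritten_penalty=0.8):
    """Score a candidate string; each char is (char, confidence, is_handwritten)."""
    score = 1.0
    has_handwritten = any(hw for _, _, hw in candidate)
    for _, conf, _ in candidate:
        score *= conf
    # Different evaluation index when handwritten characters are present.
    return score * handwritten_penalty if has_handwritten else score

# Two integrated candidates for the same item value, one purely printed and
# one containing a character recognized from the handwritten area.
printed = [("1", 0.99, False), ("2", 0.98, False), ("3", 0.97, False)]
mixed   = [("1", 0.99, False), ("2", 0.95, True),  ("3", 0.97, False)]

candidates = {"123_printed": likelihood(printed), "123_mixed": likelihood(mixed)}
item_value = max(candidates, key=candidates.get)  # highest-likelihood candidate wins
```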
SYSTEMS AND METHODS FOR SYNCHRONIZING AN IMAGE SENSOR
Systems and methods for synchronization are provided. In some aspects, a method for synchronizing an image sensor is provided. The method includes receiving image data captured using an image sensor that is moving along a pathway, and assembling an image sensor trajectory using the image data. The method also includes receiving position data acquired along the pathway using a position sensor, wherein timestamps for the image data and position data are asynchronous, and assembling a position sensor trajectory using the position data. The method further includes generating a spatial transformation that aligns the image sensor trajectory and position sensor trajectory, and synchronizing the image sensor based on the spatial transformation.
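One plausible way to realize the "spatial transformation that aligns the image sensor trajectory and position sensor trajectory" is a rigid alignment via the Kabsch algorithm; the abstract does not name a method, so treating the trajectories as 2D point sets and solving for a rotation and translation is an assumption made here for illustration.

```python
import numpy as np

def align_trajectories(traj_cam, traj_pos):
    """Estimate the rigid transform (R, t) mapping the image-sensor trajectory
    onto the position-sensor trajectory (Kabsch algorithm)."""
    A = np.asarray(traj_cam, dtype=float)
    B = np.asarray(traj_pos, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# The camera path is the position path rotated 90 degrees and shifted; the
# recovered transform should undo that offset exactly.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
traj_pos = np.array([[0, 0], [1, 0], [2, 0], [2, 1]], dtype=float)
traj_cam = traj_pos @ R_true.T + np.array([3.0, -1.0])

R, t = align_trajectories(traj_cam, traj_pos)
aligned = traj_cam @ R.T + t   # camera trajectory mapped into the position frame
```

Once the two trajectories are brought into a common frame, the residual offset along the pathway gives the timing correction used to synchronize the image sensor despite the asynchronous timestamps.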
Image Processing and Automatic Learning on Low Complexity Edge Apparatus and Methods of Operation
An edge device for image processing includes a series of linked components that can be independently optimized. A specialized change detector, which maximizes the events collected at the expense of false positives, is paired with a trainable module that uses training feedback to reduce those false positives over time. A "look-ahead module" peeks ahead in time and determines whether the inference pipeline needs to run, which reserves a definite amount of time for the validation and training module. The training module operates in quanta of time, and processing time during phases of no scene activity is reserved for training. A lightweight detector and the classifier are both trainable modules. A site optimizer, built from rules and sub-modules using spatio-temporal heuristics, handles specific false positives while optimally combining the change detector and inference module results.
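The change-detector-plus-trainable-filter idea can be sketched as follows: the detector is tuned for recall (so it fires on noise), and a feedback-trained module learns to suppress indices that repeatedly fire without confirmation. Class names, thresholds, and the flickering-light scenario are illustrative assumptions.

```python
class ChangeDetector:
    """Frame-difference detector tuned for recall over precision."""
    def __init__(self, threshold=0.1):
        self.threshold = threshold   # deliberately low: collect many events
        self.prev = None

    def detect(self, frame):
        if self.prev is None:
            self.prev, events = frame, []
        else:
            events = [i for i, (a, b) in enumerate(zip(self.prev, frame))
                      if abs(a - b) > self.threshold]
            self.prev = frame
        return events

class TrainableFilter:
    """Learns from feedback which indices are chronic false positives."""
    def __init__(self):
        self.false_counts = {}

    def feedback(self, index, confirmed):
        if not confirmed:
            self.false_counts[index] = self.false_counts.get(index, 0) + 1

    def filter(self, events, limit=2):
        return [e for e in events if self.false_counts.get(e, 0) < limit]

detector, filt = ChangeDetector(), TrainableFilter()
flicker = 3   # index of a flickering light: a chronic false-positive source

# Training phase (e.g. during a quiet quantum of time): the flicker fires
# repeatedly and feedback marks each event as unconfirmed.
for step in range(4):
    frame = [0.0] * 8
    frame[flicker] = float(step % 2)
    for e in detector.detect(frame):
        filt.feedback(e, confirmed=False)

# Inference phase: the light turns off while a genuine object appears.
frame = [0.0] * 8
frame[5] = 1.0
raw = detector.detect(frame)    # both the flicker and the object fire
clean = filt.filter(raw)        # the trained filter keeps only the object
```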
Automated pharmaceutical pill identification
A pill identification system identifies a pill type for a pharmaceutical composition from images of the pharmaceutical composition. The system extracts features from images taken of the pill. The features extracted from the pill image include color, size, shape, and surface features of the pill. In particular, the features include rotation-independent surface features of the pill that enable the pill to be identified from a variety of orientations when the images are taken. The feature vectors are applied to a classifier that determines a pill identification for each image. The pill identification for each image is scored to determine identification for the pharmaceutical composition.
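A sketch of rotation-independent features plus per-image scoring follows. The specific descriptor (mean, variance, and a pixel-intensity histogram, which is rotation-invariant because rotation only permutes pixels), the nearest-centroid classifier, and the majority-vote scoring are assumptions for demonstration; the abstract does not specify these particulars.

```python
import math

def surface_features(pixels):
    """Toy rotation-invariant descriptor: intensity mean, variance, and a
    normalized 4-bin histogram over a flat list of pixel intensities."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    hist = [0] * 4
    for p in pixels:
        hist[min(int(p * 4), 3)] += 1
    return [mean, var] + [h / n for h in hist]

def classify(features, reference):
    """Nearest-centroid match over known pill types."""
    return min(reference, key=lambda name: math.dist(features, reference[name]))

# Hypothetical reference feature vectors for two pill types.
reference = {
    "pill_A": [0.2, 0.01, 0.9, 0.1, 0.0, 0.0],
    "pill_B": [0.8, 0.02, 0.0, 0.0, 0.1, 0.9],
}

# Several images of the same composition; each image gets its own
# identification, and the per-image results are scored by majority vote.
images = [[0.18, 0.22, 0.20, 0.21],
          [0.79, 0.81, 0.80, 0.82],   # an off-angle or mislit outlier
          [0.19, 0.20, 0.21, 0.22]]
votes = [classify(surface_features(img), reference) for img in images]
pill_id = max(set(votes), key=votes.count)
```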
VEHICLE AND METHOD OF MANAGING CLEANLINESS OF INTERIOR OF THE SAME
A method of managing cleanliness of an interior of a vehicle includes: detecting an indoor state using a detector including at least a camera; generating at least one of first guidance information on a lost article or second guidance information on a contaminant upon detecting the lost article or the contaminant as a result of detecting the indoor state; and transmitting the generated guidance information to the outside of the vehicle.
Methods, systems and media for joint manifold learning based heterogenous sensor data fusion
The present disclosure provides a method for joint manifold learning based heterogenous sensor data fusion, comprising: obtaining heterogeneous sensor data from a plurality of sensors to form a joint manifold, wherein the plurality of sensors include different types of sensors that detect different characteristics of target objects; performing, using a hardware processor, a plurality of manifold learning algorithms to process the joint manifold to obtain raw manifold learning results, wherein a dimension of the manifold learning results is less than a dimension of the joint manifold; processing the raw manifold learning results to obtain intrinsic parameters of the target objects; evaluating the multiple manifold learning algorithms based on the raw manifold learning results and the intrinsic parameters to determine one or more optimum manifold learning algorithms; and applying the one or more optimum manifold learning algorithms to fuse heterogeneous sensor data generated by the plurality of sensors.
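The joint-manifold construction and the evaluate-then-select step can be sketched as below. PCA stands in for a manifold learning algorithm, a raw-coordinate baseline stands in for a competing algorithm, and reconstruction error stands in for the evaluation criterion; all three are assumptions made to show the structure of the method, not the disclosed algorithms.

```python
import numpy as np

def build_joint_manifold(sensor_streams):
    """Concatenate per-sample features from heterogeneous sensors."""
    return np.hstack([np.asarray(s, dtype=float) for s in sensor_streams])

def pca_embed(X, k):
    """PCA as a stand-in 'manifold learning algorithm': project to k dims."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T
    err = np.linalg.norm(Xc - Z @ Vt[:k], ord="fro")
    return Z, err

def coord_embed(X, k):
    """Naive baseline 'algorithm': keep the first k raw coordinates."""
    Xc = X - X.mean(axis=0)
    Z = Xc[:, :k]
    W, *_ = np.linalg.lstsq(Z, Xc, rcond=None)
    return Z, np.linalg.norm(Xc - Z @ W, ord="fro")

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)  # hidden intrinsic parameter of the target objects

# Two heterogeneous sensors observing the same targets through different
# characteristics, each with its own noise.
stream_radar = np.column_stack([t, t ** 2]) + 0.01 * rng.standard_normal((50, 2))
stream_camera = np.column_stack([2 * t, -t]) + 0.01 * rng.standard_normal((50, 2))

joint = build_joint_manifold([stream_radar, stream_camera])   # 50 x 4

# Evaluate the candidate algorithms and keep the optimum one for fusion.
results = {"pca": pca_embed(joint, 2), "raw_coords": coord_embed(joint, 2)}
best = min(results, key=lambda name: results[name][1])
embedding = results[best][0]   # fused low-dimensional representation
```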
AUTHORIZATION USING AN OPTICAL SENSOR
A method of authorizing a device action includes accessing a first baseline model that represents image characteristics of an authorized first user or object. The first baseline model is used as a basis for selecting a first number of sensing structures of a camera image sensor, wherein the first number of sensing structures is less than the total number of sensing structures of the camera image sensor. The selected first number of sensing structures is activated and a first image sensed by the activated sensing structures is obtained. The first image is compared with the first baseline model. A next round of authorization processing is activated when an amount of correlation between the first image and the first baseline model satisfies a threshold correlation amount.
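The subset-sensing loop can be sketched as below. The baseline model (expected pixel values), the contrast-based structure selection, and the mean-absolute-difference correlation are illustrative choices; the claim only requires that fewer sensing structures than the total are activated and compared against the baseline.

```python
def select_structures(baseline, count):
    """Pick the `count` most informative structures (here: highest contrast
    relative to the baseline's mean intensity)."""
    mean = sum(baseline) / len(baseline)
    return sorted(range(len(baseline)),
                  key=lambda i: abs(baseline[i] - mean),
                  reverse=True)[:count]

def correlation(sample, reference):
    """Simple normalized agreement in [0, 1] (1.0 = perfect match)."""
    diffs = [abs(s - r) for s, r in zip(sample, reference)]
    return 1.0 - sum(diffs) / len(diffs)

baseline = [0.9, 0.1, 0.8, 0.2, 0.5, 0.5, 0.9, 0.1]   # 8-structure sensor model
active = select_structures(baseline, 4)                # activate only 4 of 8

# Values sensed at the activated structures for the presented user/object.
captured = [0.88, 0.12, 0.78, 0.22, 0.9, 0.9, 0.92, 0.08]
sample = [captured[i] for i in active]
reference = [baseline[i] for i in active]

# Gate: proceed to the next round only above the threshold correlation.
next_round = correlation(sample, reference) >= 0.95
```

Reading only a baseline-selected subset of the sensor keeps the first authorization round cheap; full-sensor capture can be deferred to later rounds.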
Translation of training data between observation modalities
A method for training a generator. The generator is supplied with at least one actual signal that includes real or simulated physical measured data from at least one observation in a first domain. The actual signal is translated by the generator into a transformed signal that represents the associated synthetic measured data in a second domain. Using a cost function, an assessment is made of the extent to which the transformed signal is consistent with one or multiple setpoint signals, at least one setpoint signal being formed from real or simulated measured data of the second physical observation modality for the situation represented by the actual signal. Trainable parameters that characterize the behavior of the generator are optimized with the objective of obtaining transformed signals that are assessed better by the cost function. A method for operating the generator, and a method that encompasses the complete process chain, are also provided.
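A minimal sketch of this training loop follows, using a one-parameter "generator" (a gain applied to the actual signal), mean-squared error as the cost function, and finite-difference gradient descent as the optimizer. All of these are stand-ins chosen to mirror the optimize-against-a-cost-function structure; the actual generator, modalities, and cost function are not specified in the abstract.

```python
def generator(actual_signal, gain):
    """Translate a first-modality signal into the second modality (toy: scaling)."""
    return [gain * x for x in actual_signal]

def cost(transformed, setpoint):
    """Assess consistency between transformed and setpoint signals (MSE)."""
    return sum((t - s) ** 2 for t, s in zip(transformed, setpoint)) / len(setpoint)

# Simulated measured data: the second modality reads 2.5x the first.
actual = [1.0, 2.0, 3.0, 4.0]            # first observation modality
setpoint = [2.5 * x for x in actual]     # second observation modality

gain, lr, eps = 0.0, 0.01, 1e-6
for _ in range(200):
    # Finite-difference gradient of the cost w.r.t. the trainable parameter.
    g = (cost(generator(actual, gain + eps), setpoint)
         - cost(generator(actual, gain - eps), setpoint)) / (2 * eps)
    gain -= lr * g                        # optimize toward better assessments

final_cost = cost(generator(actual, gain), setpoint)
```

After training, the generator's parameter has converged to the cross-modality scale factor, so transformed signals match the setpoint signals and the cost is near zero.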