Patent classifications
G06V10/70
Distance to obstacle detection in autonomous machine applications
In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
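As a rough illustration of the multi-loss training and resampling described above, the sketch below (PyTorch, with assumed tensor shapes, loss weights, and a bilinear resampling step that the abstract leaves unspecified) supervises object pixels and free-space-boundary pixels with separate loss terms:

```python
# Minimal sketch, not the patent's actual implementation: a depth DNN trained with
# two region-specific losses and a resampling step that maps the low-resolution
# predicted depth map back to input-resolution pixels.
import torch
import torch.nn.functional as F

def training_step(model, image, gt_depth, object_mask, boundary_mask,
                  w_object=1.0, w_boundary=1.0):
    """image: (B,3,H,W); gt_depth: (B,1,H,W) sensor depth; masks: boolean (B,1,H,W)."""
    pred_lowres = model(image)                       # depth map at the DNN's output resolution
    # Sampling step: bring the predicted depth map to input resolution (bilinear here;
    # the abstract leaves the exact sampling algorithm open).
    pred = F.interpolate(pred_lowres, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
    valid = gt_depth > 0                             # only supervise pixels with sensor ground truth
    # One loss per portion of the environment: objects/obstacles vs. free-space boundary.
    # Assumes each mask selects at least one supervised pixel.
    loss_obj = F.l1_loss(pred[valid & object_mask], gt_depth[valid & object_mask])
    loss_bnd = F.l1_loss(pred[valid & boundary_mask], gt_depth[valid & boundary_mask])
    return w_object * loss_obj + w_boundary * loss_bnd
```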
Electronic apparatus and method for assisting with driving of vehicle
An electronic apparatus and method for assisting with driving of a vehicle are provided. The electronic apparatus includes: a processor configured to execute one or more instructions stored in a memory, to: obtain a surrounding image of the vehicle via at least one sensor, recognize an object from the obtained surrounding image, obtain three-dimensional (3D) coordinate information for the object by using the at least one sensor, determine a number of planar regions constituting the object, based on the 3D coordinate information corresponding to the object, determine whether the object is a real object, based on the number of planar regions constituting the object, and control a driving operation of the vehicle based on a result of the determining whether the object is the real object.
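The plane-counting decision can be pictured with a sequential-RANSAC sketch like the one below; the thresholds, iteration counts, and the "at least two planes means real" rule are assumptions for illustration, not taken from the patent:

```python
# Rough sketch: count approximately planar regions in an object's 3D points and treat
# the object as "real" only if it spans more than one plane (a flat picture of an
# object would collapse onto a single plane).
import numpy as np

def count_planes(points, dist_thresh=0.05, min_inliers=50, iters=200, rng=None):
    """points: (N,3) coordinates of the detected object."""
    rng = rng or np.random.default_rng(0)
    remaining = points.copy()
    planes = 0
    while len(remaining) >= min_inliers:
        best_inliers = None
        for _ in range(iters):
            sample = remaining[rng.choice(len(remaining), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue                               # degenerate (collinear) sample
            normal /= norm
            dists = np.abs((remaining - sample[0]) @ normal)
            inliers = dists < dist_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers is None or best_inliers.sum() < min_inliers:
            break
        planes += 1
        remaining = remaining[~best_inliers]           # peel off the found plane, repeat
    return planes

def is_real_object(points):
    return count_planes(points) >= 2                   # assumed decision rule
```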
Apparatus and method for identifying obstacle around vehicle
In an apparatus for identifying an obstacle around a vehicle, an acquirer is configured to acquire an image captured by a camera mounted to the vehicle. An extractor is configured to extract feature points of the image. A generator is configured to generate an optical flow, which is a movement vector from each feature point of the image acquired before the current time to the corresponding feature point of the image acquired at the current time. A classifier is configured to classify the optical flows into groups, each corresponding to an object in the image, based on the pixel positions of the feature points. An identifier is configured to, for each group into which the classifier classifies the optical flows, identify whether the object corresponding to that group is a stationary object or a moving object based on the degree of variability in the lengths of the group's optical flows.
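A minimal sketch of the grouping and variability test might look as follows; the grid-based grouping, the coefficient-of-variation statistic, the threshold, and the mapping of high variability to "moving" are all assumptions, since the abstract does not fix them:

```python
# Sketch under assumptions: cluster feature-point flows by pixel position, then label
# each group as moving or stationary from the spread of its flow-vector lengths.
import numpy as np

def classify_groups(points, flows, cell=64, cv_thresh=0.15):
    """points: (N,2) pixel positions; flows: (N,2) motion vectors to the current frame."""
    cells = (points // cell).astype(int)                 # crude grid-based grouping
    labels = {}
    for key in {tuple(c) for c in cells}:
        idx = np.all(cells == key, axis=1)
        lengths = np.linalg.norm(flows[idx], axis=1)
        if len(lengths) < 3 or lengths.mean() < 1e-6:
            continue                                      # too few flows to judge
        variability = lengths.std() / lengths.mean()      # degree of variability in flow lengths
        # Which label corresponds to high variability is an assumption here.
        labels[key] = "moving" if variability > cv_thresh else "stationary"
    return labels
```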
Method for Detecting Lost Image Information, Control Apparatus for Carrying Out a Method of this Kind, Detection Device Having a Control Apparatus of this Kind and Motor Vehicle Having a Detection Device of this Kind
A method for detecting lost image information via a lighting device and an optical sensor. The lighting device and the optical sensor are controlled so as to be chronologically aligned with each other. A visible spacing region in an observation region of the optical sensor is determined from the chronological alignment of the control of the lighting device and the optical sensor. A recording of the observation region is generated with the optical sensor via the aligned control. Image information is identified in regions of the recording outside of the visible spacing region, so as to make the identified image information accessible.
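The visible spacing region follows from the timing of the light pulse relative to the sensor exposure. A simplified range calculation, reflecting standard gated-imaging timing rather than the patent's exact formulation, is sketched below:

```python
# Simplified sketch: given the pulse duration, the delay from the start of the pulse to
# the sensor gate opening, and the gate duration, estimate the near and far limits of
# the visible spacing region; image content outside this range is the "lost" information.
C = 299_792_458.0  # speed of light, m/s

def visible_range(pulse_duration_s, gate_delay_s, gate_duration_s):
    """gate_delay_s: time from the start of the light pulse to the sensor gate opening."""
    # Nearest distance: light emitted at the end of the pulse returns exactly when the gate opens.
    r_near = C * max(gate_delay_s - pulse_duration_s, 0.0) / 2.0
    # Farthest distance: light emitted at the start of the pulse returns just before the gate closes.
    r_far = C * (gate_delay_s + gate_duration_s) / 2.0
    return r_near, r_far

# Example: a 200 ns pulse, gate opening 600 ns later for 200 ns -> roughly 60-120 m visible.
print(visible_range(200e-9, 600e-9, 200e-9))
```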
Data model generation using generative adversarial networks
Methods for generating data models using a generative adversarial network can begin with a model optimizer receiving a data model generation request from an interface. The model optimizer can provision computing resources with a data model. As a further step, a synthetic dataset for training the data model can be generated using a generative network of a generative adversarial network, the generative network trained to generate output data differing by at least a predetermined amount from a reference dataset according to a similarity metric. The computing resources can train the data model using the synthetic dataset. The model optimizer can evaluate performance criteria of the data model and, based on the evaluation of the performance criteria of the data model, store the data model and metadata of the data model in a model storage. The data model can then be used to process production data.
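The constraint that synthetic records differ from the reference dataset by at least a predetermined amount can be sketched as an extra penalty in the generator loss; the distance metric, margin, and tensor shapes below are assumptions for illustration, not the patent's specifics:

```python
# Minimal sketch: a generator loss combining the usual adversarial term with a penalty
# whenever a synthetic sample falls too close to its nearest reference sample.
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, reference_batch, noise, margin=0.5):
    fake = generator(noise)                                    # (B, D) synthetic records
    adv = F.binary_cross_entropy_with_logits(
        discriminator(fake), torch.ones(fake.size(0), 1))      # fool the discriminator
    # Similarity metric: Euclidean distance to the closest reference record.
    dists = torch.cdist(fake, reference_batch)                 # (B, R)
    nearest = dists.min(dim=1).values
    # Penalize samples within `margin` of the reference data, so outputs differ
    # by at least a predetermined amount.
    privacy_penalty = F.relu(margin - nearest).mean()
    return adv + privacy_penalty
```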
CONTEXT-AIDED MACHINE VISION
Various embodiments herein each include at least one of systems, methods, software, and data structures for context-aided machine vision. For example, one method embodiment includes identifying a customer in a shopping area and maintaining an item bin in a computing system of data identifying items the customer has picked up for purchase. This method further includes receiving an image of the customer holding an item and performing item identification processing on the image to identify the item the customer is holding. The item identification processing may be performed based in part on a stored shopping history of the customer indicating items the customer is more likely to purchase. The identified item is then added to the item bin of the customer.
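One way to picture the context aiding is to blend the image classifier's scores with a prior built from the customer's purchase history; the blending scheme below is an assumed illustration, not the patented method:

```python
# Sketch: re-rank the vision model's item scores using the customer's shopping history
# before committing the identified item to the customer's item bin.
import numpy as np

def identify_item(classifier_probs, purchase_counts, alpha=0.3):
    """classifier_probs: (K,) softmax scores over the item catalog from the image model.
    purchase_counts: (K,) how often this customer has bought each item historically."""
    history_prior = (purchase_counts + 1) / (purchase_counts.sum() + len(purchase_counts))
    # Blend visual evidence with the customer-specific prior.
    scores = (1 - alpha) * classifier_probs + alpha * history_prior
    return int(np.argmax(scores))

item_bin = []
item_bin.append(identify_item(np.array([0.40, 0.38, 0.22]),
                              np.array([0, 12, 1])))   # shopping history breaks the near-tie
```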
Automated Spatial Indexing of Images to Video
A spatial indexing system receives a video that is a sequence of frames depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial location at which each of the frames was captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each frame at its corresponding location within the model.
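A toy sketch of the resulting index is shown below; the pose estimation itself (e.g. a visual-SLAM step) is abstracted behind a hypothetical estimate_pose function, so only the frame-to-location data structure and the lookup used by a viewer are illustrated:

```python
# Toy sketch: map each frame of the walkthrough video to the location where it was
# captured, then let a viewer fetch the frame captured closest to any point in the model.
import numpy as np

def build_spatial_index(frames, estimate_pose):
    """frames: list of images; estimate_pose(i, frame) -> (x, y, z) capture location (assumed)."""
    return [(np.asarray(estimate_pose(i, f), dtype=float), f) for i, f in enumerate(frames)]

def frame_at(index, query_xyz):
    """Return the frame captured nearest to query_xyz, for the visualization interface."""
    locations = np.stack([loc for loc, _ in index])
    nearest = np.argmin(np.linalg.norm(locations - np.asarray(query_xyz, float), axis=1))
    return index[nearest][1]
```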
FACIAL STRUCTURE ESTIMATING DEVICE, FACIAL STRUCTURE ESTIMATING METHOD, AND FACIAL STRUCTURE ESTIMATING PROGRAM
A facial structure estimating device 10 includes an acquiring unit 11 and a controller 13. The acquiring unit 11 acquires a facial image. The controller 13 functions as an identifier 15, an estimator 16, and an evaluator 17. The identifier 15 identifies an individual based on the facial image. The estimator 16 estimates a facial structure based on the facial image. The evaluator 17 calculates the validity of the facial structure estimated by the estimator 16 and allows facial images and facial structures whose validity is greater than or equal to a threshold to be applied to training of the estimator 16. The controller 13 bases the application of such facial images and facial structures to the training of the estimator 16 on the identification results of individuals produced by the identifier 15.
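The validity-gated collection of training data can be sketched schematically as below; identify, estimate, and evaluate stand in for the identifier 15, estimator 16, and evaluator 17, and the threshold value and per-person grouping are assumptions drawn from the abstract:

```python
# Schematic sketch: only facial images whose estimated structure scores at or above a
# validity threshold are queued for retraining the estimator, grouped per identified
# individual so the controller can base the retraining on identification results.
from collections import defaultdict

def collect_training_samples(images, identify, estimate, evaluate, threshold=0.8):
    """identify(img) -> person id; estimate(img) -> facial structure;
    evaluate(structure, img) -> validity score in [0, 1]. All three are assumed callables."""
    per_person = defaultdict(list)
    for img in images:
        person = identify(img)
        structure = estimate(img)
        validity = evaluate(structure, img)
        if validity >= threshold:
            # Kept for training the estimator; organized by identification result.
            per_person[person].append((img, structure))
    return per_person
```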