Patent classifications
G06V10/759
ESTIMATING DANGER FROM FUTURE FALLING CARGO
A method for estimating a future fall of cargo. The method may include receiving, by a computerized system, sensed information related to driving sessions of multiple vehicles; applying a machine learning process to the sensed information to detect actual or estimated cargo falling events and generate one or more future falling cargo predictors for multiple types of cargo; estimating, from the sensed information, an impact of cargo falling events related to at least some of the types of cargo; and responding to the estimating, wherein the responding comprises at least one of (a) storing the one or more future falling cargo predictors for the multiple types of cargo, (b) transmitting the one or more future falling cargo predictors for the multiple types of cargo, (c) storing the estimated impact of cargo falling events related to the at least some of the types of cargo, and (d) transmitting the estimated impact of cargo falling events related to the at least some of the types of cargo.
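The impact-estimation step can be sketched as a simple per-cargo-type aggregation of detected falling events. This is a minimal illustration, not the patented method: the `(cargo_type, severity)` event representation and the events-count/mean-severity impact measure are assumptions.

```python
def estimate_impact(events):
    """Aggregate actual or estimated cargo falling events per cargo type.
    Each event is a (cargo_type, severity) pair; severity is a hypothetical
    0..10 damage score derived from the sensed information."""
    stats = {}
    for cargo_type, severity in events:
        n, total = stats.get(cargo_type, (0, 0.0))
        stats[cargo_type] = (n + 1, total + severity)
    return {t: {"events": n, "mean_severity": total / n}
            for t, (n, total) in stats.items()}
```

The resulting per-type summary is what the responding step would then store or transmit.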
WRONG-WAY DRIVING WARNING
Using a rear sensor to sense wrong-way driving. A method may include sensing, by a rear sensor of a vehicle, an environment of the vehicle to provide rear sensed information; processing the rear sensed information to provide at least one rear-sensed vehicle progress direction indication; generating or receiving at least one front-sensed vehicle progress direction indication, wherein the at least one front-sensed vehicle progress direction indication is generated by processing front-sensed information acquired during right-way progress; comparing the at least one rear-sensed vehicle progress direction indication to the at least one front-sensed vehicle progress direction indication to determine whether the vehicle is wrong-way driving; and responding to a finding of wrong-way driving.
USING REAR SENSOR FOR WRONG-WAY DRIVING WARNING
Using a rear sensor to sense wrong-way driving. A method may include sensing, by a rear sensor of a vehicle, an environment of the vehicle to provide rear sensed information; processing the rear sensed information to provide at least one rear-sensed vehicle progress direction indication; generating or receiving at least one front-sensed vehicle progress direction indication, wherein the at least one front-sensed vehicle progress direction indication is generated by processing front-sensed information acquired during right-way progress; comparing the at least one rear-sensed vehicle progress direction indication to the at least one front-sensed vehicle progress direction indication to determine whether the vehicle is wrong-way driving; and responding to a finding of wrong-way driving.
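The comparison step above can be sketched as an angular check between the rear-sensed progress direction and the direction recorded during right-way progress. The heading-in-degrees representation and the 30-degree tolerance are illustrative assumptions; the patent leaves the form of the direction indications open.

```python
def is_wrong_way(rear_headings_deg, right_way_headings_deg, tolerance_deg=30.0):
    """Compare rear-sensed progress-direction indications (headings in degrees)
    against front-sensed indications acquired during right-way progress.
    A consistent near-opposite heading is taken as wrong-way driving."""
    diffs = [abs(((r - f + 180.0) % 360.0) - 180.0)   # smallest angular difference
             for r, f in zip(rear_headings_deg, right_way_headings_deg)]
    return all(d > 180.0 - tolerance_deg for d in diffs)
```

Requiring all indications to agree before flagging wrong-way driving reduces false alarms from a single noisy reading.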
Deep patch feature prediction for image inpainting
Techniques for using deep learning to facilitate patch-based image inpainting are described. In an example, a computer system hosts a neural network trained to generate, from an image, code vectors including features learned by the neural network and descriptive of patches. The image is received and contains a region of interest (e.g., a hole missing content). The computer system inputs it to the network and, in response, receives the code vectors. Each code vector is associated with a pixel in the image. Rather than comparing RGB values between patches, the computer system compares the code vector of a pixel inside the region to code vectors of pixels outside the region to find the best match based on a feature similarity measure (e.g., a cosine similarity). The pixel value of the pixel inside the region is set based on the pixel value of the matched pixel outside this region.
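The matching step described above can be sketched directly: given per-pixel code vectors from the network, each pixel inside the hole is matched to the outside pixel with the highest cosine similarity. This is a minimal sketch of the similarity search only; the code vectors themselves would come from the trained network, and the brute-force comparison is an assumption (a real system would use an approximate nearest-neighbor index).

```python
import numpy as np

def best_match(code_vectors, hole_mask):
    """For each pixel inside the hole, return the flat index of the most
    similar pixel outside the hole by cosine similarity of code vectors.
    code_vectors: (H, W, D) features; hole_mask: (H, W) bool, True = hole."""
    H, W, D = code_vectors.shape
    flat = code_vectors.reshape(-1, D)
    unit = flat / np.maximum(np.linalg.norm(flat, axis=1, keepdims=True), 1e-8)
    inside = np.flatnonzero(hole_mask.ravel())
    outside = np.flatnonzero(~hole_mask.ravel())
    sims = unit[inside] @ unit[outside].T    # cosine similarity, inside x outside
    return outside[np.argmax(sims, axis=1)]  # best outside pixel per inside pixel
```

The pixel value of each hole pixel would then be set from the pixel value at the matched outside index.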
Adaptive clothing 3D model
Systems and methods provide adapted content to a visitor to a physical environment. An example method receives an image of a visitor to an environment. A visitor portion of the image is distinct from an environment portion of the image. The method detects one or more shapes in the visitor portion of the image using an automatic shape detection technique and defines an approximate boundary of the one or more shapes using a mask. The one or more shapes can be shapes of the visitor's clothing items. The method then calculates an attribute for an area of the image within the mask and identifies electronic content based on the attribute for the area of the image within the mask. The attribute can be a color attribute for the area such as a median color or a dominant color. The method provides the identified electronic content for display in the environment.
ANCHOR DETERMINATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
An anchor determination method includes: performing feature extraction on an image to be processed to obtain a first feature map of the image; and performing anchor prediction on the first feature map via an anchor prediction network to obtain position information and shape information of anchors in the first feature map, the position information referring to the positions in the first feature map at which the anchors are generated. A corresponding anchor determination apparatus and a storage medium are also provided.
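The anchor prediction network can be sketched as two per-position branches over the first feature map: a location branch that scores whether an anchor is generated at each position, and a shape branch that predicts a per-position (dw, dh). The 1x1-convolution-style weights (plain matrix products here) and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def anchor_prediction(feature_map, w_loc, w_shape, threshold=0.5):
    """Two prediction branches over an (H, W, C) feature map:
    location branch -> per-position anchor probability (sigmoid of a logit),
    shape branch    -> per-position (dw, dh) anchor shape."""
    loc_logits = feature_map @ w_loc                     # (H, W, 1)
    loc_prob = 1.0 / (1.0 + np.exp(-loc_logits))
    shapes = feature_map @ w_shape                       # (H, W, 2)
    positions = np.argwhere(loc_prob[..., 0] > threshold)  # anchor positions
    return positions, shapes
```

Only positions whose probability exceeds the threshold generate anchors, so anchors are sparse and adapted to the image rather than tiled densely.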
IMAGE ANALYSIS METHOD, IMAGE ANALYSIS DEVICE, AND PROGRAM
Disclosed is an image analysis method implemented by a computer, the method including analyzing a partial image which is a part of an image of a planar subject, generating partial-image analysis data representing a characteristic of the partial image, comparing, for each of a plurality of images, candidate-image analysis data with the partial-image analysis data, the candidate-image analysis data representing a characteristic of each of the plurality of images, and selecting a candidate image among the plurality of images, the candidate image including a part corresponding to the partial image.
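The comparison-and-selection step can be sketched as scoring each candidate's analysis data against the partial-image analysis data. The patent leaves the "characteristic" open; a normalized histogram compared by histogram intersection is assumed here purely for illustration.

```python
import numpy as np

def select_candidate(partial_analysis, candidate_analyses):
    """Compare partial-image analysis data with each candidate's analysis
    data and return the index of the best-matching candidate image.
    Analysis data is assumed to be a normalized feature histogram."""
    scores = [np.minimum(partial_analysis, c).sum()   # histogram intersection
              for c in candidate_analyses]
    return int(np.argmax(scores))
```

The selected candidate is the image assumed to contain a part corresponding to the partial image.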
Real-time identification of moving objects in video images
The disclosed technology generally relates to detecting and identifying objects in digital images, and more particularly to detecting, identifying and/or tracking moving objects in video images using an artificial intelligence neural network configured for deep learning. In one aspect, a method comprises capturing a video input from a scene comprising one or more candidate moving objects using a video image-capturing device, where the video input comprises at least two temporally spaced image frames captured from the scene. The method additionally includes transforming the video input into one or more image pattern layers, where each of the image pattern layers comprises a pattern representing one of the candidate moving objects. The method additionally includes determining a probability of match between each of the image pattern layers and a stored image in a big data library. The method additionally includes adding one or more image pattern layers having the probability of match that exceeds a predetermined level to the big data library automatically, and outputting the probability of match to a user.
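The final step, automatically growing the library with confidently matched pattern layers, can be sketched as a thresholded append. The 0.9 level and the in-memory list standing in for the big data library are assumptions for illustration.

```python
def grow_library(pattern_layers, match_probs, library, threshold=0.9):
    """Add each image pattern layer whose probability of match with a stored
    image exceeds the predetermined level to the (big data) library."""
    for layer, prob in zip(pattern_layers, match_probs):
        if prob > threshold:
            library.append(layer)
    return library
```

The match probabilities themselves are also output to the user, independent of whether a layer is added.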
Information processing apparatus, method of controlling information processing apparatus, and storage medium
An information processing apparatus, comprising: a control unit configured to control a pattern that a projection apparatus projects onto an object; an obtainment unit configured to obtain a plurality of images respectively captured at a plurality of times by a plurality of image capturing apparatuses that capture the object onto which the pattern has been projected; and a measurement unit configured to measure range information of the object by performing matching, between images respectively captured by the plurality of image capturing apparatuses, using information of temporal change of pixel values of the images.
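The matching-by-temporal-change idea can be sketched for a pair of corresponding scanlines: under a time-varying projected pattern, each pixel's intensity sequence over time is a signature, and pixels are matched across cameras by correlating those sequences. Normalized correlation as the matching score is an assumption; the patent only specifies matching on the temporal change of pixel values.

```python
import numpy as np

def temporal_match(seq_a, seq_b):
    """Match pixels between two cameras via temporal change of pixel values.
    seq_a, seq_b: (T, W) intensity sequences for corresponding scanlines
    captured at T times. Returns, for each column of camera A, the index of
    the best-matching column of camera B by normalized temporal correlation."""
    a = seq_a - seq_a.mean(axis=0)
    b = seq_b - seq_b.mean(axis=0)
    a = a / (np.linalg.norm(a, axis=0) + 1e-8)
    b = b / (np.linalg.norm(b, axis=0) + 1e-8)
    return np.argmax(a.T @ b, axis=1)        # (W,) best B-column per A-column
```

The disparity between matched columns then yields the range information for each pixel.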
EMERGENCY DRIVER ASSISTANT
A method for emergency assistance, the method may include detecting a vehicle situation or receiving information regarding the vehicle situation; monitoring an interaction of a driver with at least one member of a group of vehicle control elements to provide monitored interaction information, wherein the group of vehicle control elements comprises a brake and a gas pedal; determining, based on the vehicle situation and the monitored interaction information, whether the monitored interaction is indicative of an emergency situation; wherein the determining comprises applying a decision rule that is responsive to the vehicle situation; and applying an emergency situation driving procedure when determining that the monitored interaction is indicative of the emergency situation.