Patent classifications
G06V10/758
MAPPING NETWORKED DEVICES
Systems, methods, and non-transitory media are provided for localizing and mapping smart devices. An example method can include receiving, by an extended reality (XR) device, an identification output from a connected device that is coupled directly or indirectly to the XR device, the identification output including an audio pattern, a display pattern, and/or a light pattern; detecting the identification output from the connected device; and based on the identification output from the connected device, mapping the connected device in a coordinate system of the XR device.
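The mapping step described above can be sketched as a lookup from a detected identification pattern to a device identity, which is then registered at its observed position. This is a toy illustration, not the patented method: the pattern encoding, device names, and coordinate handling are all assumptions.

```python
# Toy sketch: match a detected light-blink pattern against known device
# signatures, then place the matched device in the XR device's
# coordinate system. Patterns and device names are illustrative.

KNOWN_PATTERNS = {
    (1, 0, 1, 1): "smart_bulb",
    (1, 1, 0, 0): "smart_speaker",
}

def map_device(observed_pattern, observed_xyz, device_map):
    """Register the device emitting observed_pattern at observed_xyz."""
    device = KNOWN_PATTERNS.get(tuple(observed_pattern))
    if device is not None:
        device_map[device] = tuple(observed_xyz)  # XR coordinate system
    return device
```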
Tuning simulated data for optimized neural network activation
Techniques described herein are directed to comparing, using a machine-trained model, neural network activations associated with data representing a simulated environment and activations associated with data representing a real environment to determine whether the simulated environment causes similar responses by the neural network, e.g., a detector. If the simulated environment and the real environment do not activate the network in the same way (e.g., the variation between neural network activations of real and simulated data meets or exceeds a threshold), techniques described herein are directed to modifying parameters of the simulated environment to generate a modified simulated environment that more closely resembles the real environment.
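The compare-then-modify loop above can be sketched as follows. This is a minimal illustration under stated assumptions: the gap metric (L2 distance between mean activations), the threshold, and the brightness-adjustment rule are all placeholders for whatever the actual system uses.

```python
import numpy as np

# Hedged sketch: compare mean detector activations from real vs.
# simulated frames; if the gap meets or exceeds a threshold, nudge a
# simulation parameter (here, a hypothetical brightness value) toward
# the real data's activation level. All names are assumptions.

def activation_gap(real_acts: np.ndarray, sim_acts: np.ndarray) -> float:
    """L2 distance between mean activation vectors."""
    return float(np.linalg.norm(real_acts.mean(axis=0) - sim_acts.mean(axis=0)))

def tune_brightness(real_acts, sim_acts, brightness, threshold=0.1, step=0.05):
    """Return (new_brightness, gap); adjust only when gap >= threshold."""
    gap = activation_gap(real_acts, sim_acts)
    if gap >= threshold:
        # Move toward the real data's overall activation level.
        direction = np.sign(real_acts.mean() - sim_acts.mean())
        return brightness + direction * step, gap
    return brightness, gap
```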
Computer-implemented method and system for generating a virtual vehicle environment
A computer-implemented method for creating a virtual vehicle environment includes: receiving data of a real vehicle environment; generating a first feature vector representing a respective real object by applying a second machine learning algorithm to the respective real object and storing the first feature vector; providing a plurality of stored second feature vectors representing synthetically generated objects; identifying a second feature vector having a greatest degree of similarity to the first feature vector; selecting the identified second feature vector and retrieving a stored synthetic object that is associated with the second feature vector and that corresponds to the real object or procedurally generating the synthetic object that corresponds to the real object, depending on the degree of similarity of the identified second feature vector to the first feature vector; and integrating the synthetic object into a predetermined virtual vehicle environment.
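The retrieve-or-generate decision in the method above can be illustrated with a small similarity search. The cosine-similarity metric, the threshold value, and the fallback label are assumptions for the sketch, not details from the patent.

```python
import numpy as np

# Illustrative sketch of the retrieve-or-generate step: find the stored
# synthetic-object feature vector most similar to a real object's
# vector, and reuse its object only if the similarity clears a
# threshold; otherwise fall back to procedural generation.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_synthetic(real_vec, stored, threshold=0.9):
    """stored: dict mapping synthetic-object name -> feature vector."""
    best_name, best_sim = None, -1.0
    for name, vec in stored.items():
        sim = cosine_similarity(real_vec, vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim >= threshold:
        return best_name, best_sim             # reuse stored synthetic object
    return "procedurally_generated", best_sim  # similarity too low: generate
```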
Endoscope system
An electronic endoscope system includes an image processing unit that uses numerical values to evaluate an appearance feature of biological tissue in an image captured by an electronic endoscope. The image processing unit calculates, for each pixel of the image, a first pixel evaluation value indicating a degree of a first feature, which is characterized by a first color component or a first shape appearing in an attention area of the biological tissue, and calculates a first representative evaluation value for the first feature by integrating the first pixel evaluation values. Furthermore, the image processing unit evaluates a degree of a second feature that shares the first color component or the first shape with the first feature.
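The per-pixel evaluation and its integration into a representative value can be sketched as below. The choice of redness as the first color component and the mean as the integration are illustrative assumptions, not the patent's definitions.

```python
import numpy as np

# Simplified sketch: a per-pixel evaluation value for an assumed first
# feature (degree of redness) integrated into one representative value.

def pixel_evaluation(rgb_image: np.ndarray) -> np.ndarray:
    """Per-pixel degree of redness in [0, 1] (illustrative definition)."""
    r = rgb_image[..., 0].astype(float)
    total = rgb_image.astype(float).sum(axis=-1) + 1e-9  # avoid div-by-zero
    return r / total

def representative_value(pixel_values: np.ndarray) -> float:
    """Integrate per-pixel evaluation values into one representative value."""
    return float(pixel_values.mean())
```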
OBJECT DETECTION METHOD AND CAMERA APPARATUS
An object detection method of effectively increasing identification accuracy is applied to a camera apparatus and includes acquiring a first reference image and a second reference image containing a specific surveillance area respectively via a first exposure parameter and a second exposure parameter greater than the first exposure parameter, utilizing the first exposure parameter and the second exposure parameter to respectively capture a first detection image and a second detection image during an object detection period, computing a first pixel value variation between the first reference image and the first detection image and a second pixel value variation between the second reference image and the second detection image, and comparing the first pixel value variation with the second pixel value variation to determine whether a target object is within the specific surveillance area.
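The dual-exposure comparison above can be sketched with mean absolute pixel differences. The specific decision rule (both exposures must agree the scene changed) and the threshold are assumptions for the sketch.

```python
import numpy as np

# Hedged sketch: per-exposure mean absolute pixel differences against
# the reference images, flagging a detection only when both exposures
# show sufficient change. Requiring agreement helps reject
# lighting-only changes that affect one exposure far more than the other.

def pixel_variation(reference: np.ndarray, detection: np.ndarray) -> float:
    return float(np.mean(np.abs(detection.astype(float) - reference.astype(float))))

def target_in_area(ref_low, ref_high, det_low, det_high, threshold=10.0):
    v1 = pixel_variation(ref_low, det_low)    # first (shorter) exposure
    v2 = pixel_variation(ref_high, det_high)  # second (longer) exposure
    return min(v1, v2) >= threshold
```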
USING CAPTURED VIDEO DATA TO IDENTIFY POSE OF A VEHICLE
Disclosed herein are systems, methods, and computer program products for predicting movement of an object in a real-world environment. The methods comprise: obtaining a plurality of image frames captured in a sequence during a period of time; identifying first image frames of the plurality of image frames that contain an image of at least one object with one or more turn signals; analyzing the first image frames to obtain a classification for a pose of the at least one object; using the classification of the pose of the at least one object to further obtain a type classification for at least one of the turn signals and a state classification for a state of at least one of the turn signals; and predicting movement of the at least one object based at least on the type and state classifications obtained for at least one of the turn signals.
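One sub-step of the pipeline above, classifying a turn signal's state, can be illustrated with a brightness time series for the signal's image region. A real system would use trained classifiers; the on/off thresholding and blink-counting rule here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: classify a turn signal's state ('blinking', 'on', or
# 'off') from per-frame brightness samples of the signal region.

def signal_state(brightness: np.ndarray, threshold=128, min_toggles=2):
    """Count on/off transitions across frames to detect blinking."""
    on = brightness > threshold
    toggles = int(np.sum(on[1:] != on[:-1]))
    if toggles >= min_toggles:
        return "blinking"
    return "on" if on[-1] else "off"
```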
EFFICIENT LOCATION AND IDENTIFICATION OF DOCUMENTS IN IMAGES
Efficient location and identification of documents in images. In an embodiment, at least one quadrangle is extracted from an image based on line(s) extracted from the image. Parameter(s) are determined from the quadrangle(s), and keypoints are extracted from the image based on the parameter(s). Input descriptors are calculated for the keypoints and used to match the keypoints to reference keypoints, to identify classification candidate(s) that represent a template image of a type of document. The type of document and distortion parameter(s) are determined based on the classification candidate(s).
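The descriptor-matching step above can be sketched as a nearest-neighbour vote over labeled reference descriptors. The descriptor contents, labels, and majority-vote rule are assumptions for illustration.

```python
import numpy as np

# Rough sketch: match each input keypoint descriptor to its nearest
# labeled reference descriptor and vote for a document template.

def classify_document(input_desc, reference_desc, reference_labels):
    """Return the template label receiving the most nearest-neighbour votes."""
    votes = {}
    for d in input_desc:
        dists = np.linalg.norm(reference_desc - d, axis=1)
        label = reference_labels[int(np.argmin(dists))]
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```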
MONITORING AND INTELLIGENCE GENERATION FOR FARM FIELD
Various embodiments described herein provide monitoring and intelligence generation for one or more farm fields by: using remote sensing to monitor progress of a developing crop; comparing a user's field to another field in the area; providing cropped area extent estimates for a current season; using one or more disease risk models to determine disease risk (or disease pressure) with respect to a field; or some combination thereof.
ITEM IDENTIFICATION METHOD AND APPARATUS, DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
Provided are an item identification method and apparatus, a device, and a computer-readable storage medium. The method includes: categories of items contained in a group of to-be-identified items and quantity information of items in each category contained in the group of to-be-identified items are determined by performing short-range wireless communication with each item in the group of to-be-identified items in a scenario region, each item containing a short-range wireless communication tag corresponding to the category of that item; acquired images of the group of to-be-identified items are identified to obtain an image identification result of the group of to-be-identified items, the image identification result including a category identification result of each item in the group of to-be-identified items; and an item identification result of the group of to-be-identified items is determined based on the quantity information and the image identification result.
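The fusion of the two signals above can be sketched as reconciling per-category tag counts with per-item image classifications. The reconciliation rule here (tag counts taken as ground truth, image results used for confirmation) is an assumption, not the patent's method.

```python
from collections import Counter

# Illustrative fusion: per-category counts from short-range wireless
# tags reconciled with per-item image classification labels.

def fuse_results(tag_counts: dict, image_labels: list) -> dict:
    """Return per-category count plus a flag for image agreement."""
    image_counts = Counter(image_labels)
    result = {}
    for category, n_tags in tag_counts.items():
        n_img = image_counts.get(category, 0)
        result[category] = {
            "count": n_tags,                    # tag count taken as ground truth
            "image_confirmed": n_img == n_tags, # image result agrees with tags
        }
    return result
```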
DATA DRIVEN DYNAMICALLY RECONFIGURED DISPARITY MAP
In some examples, a system may receive, from at least one camera of a vehicle, at least one image including a road. The system may further receive vehicle location information including an indication of a location of the vehicle. In addition, the system may receive at least one of historical information from a historical database, or road anomaly information, where the road anomaly information is determined from at least one of a road anomaly database or real-time road anomaly detection. Based on the at least one image, the indication of the location of the vehicle, and the at least one of the historical information or the road anomaly information, the system may generate at least one of a disparity map or a disparity image.
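The core operation behind the disparity map mentioned above can be illustrated with minimal 1-D block matching. A real system would incorporate the location, historical, and road-anomaly cues from the abstract; this toy version only matches a left-image patch along the corresponding right-image row.

```python
import numpy as np

# Minimal 1-D block-matching sketch: for a pixel in the left row, find
# the horizontal shift (disparity) into the right row that minimizes
# the sum of absolute differences over a small patch.

def disparity_for_pixel(left_row, right_row, x, patch=2, max_disp=8):
    """Return the disparity minimizing SAD cost at column x."""
    target = left_row[x - patch : x + patch + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, min(max_disp, x - patch) + 1):
        cand = right_row[x - d - patch : x - d + patch + 1]
        cost = float(np.sum(np.abs(target - cand)))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```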