Patent classifications
G06V20/10
Systems, devices, and methods for in-field diagnosis of growth stage and crop yield estimation in a plant area
Methods, devices, and systems may be utilized for detecting one or more properties of a plant area and generating a map of the plant area indicating at least one property of the plant area. The system comprises an inspection system associated with a transport device, the inspection system including one or more sensors configured to generate data for the plant area by capturing at least 3D image data and 2D image data and by generating geolocational data. A datacenter is configured to: receive the 3D image data, 2D image data, and geolocational data from the inspection system; correlate the 3D image data, 2D image data, and geolocational data; and analyze the data for the plant area. A dashboard is configured to display a map with icons at the proper geolocations, together with the image data and the results of the analysis.
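The correlation step described above (pairing captured image data with geolocational data) can be sketched in Python. All names here (`GeoFix`, `correlate`, the timestamp tolerance) are hypothetical illustrations; the abstract does not specify an implementation:

```python
from dataclasses import dataclass

@dataclass
class GeoFix:
    t: float    # timestamp of the GPS fix, in seconds
    lat: float
    lon: float

def correlate(image_times, fixes, tolerance=0.5):
    """Pair each image capture time with the nearest-in-time GPS fix,
    dropping images that have no fix within `tolerance` seconds."""
    pairs = []
    for t in image_times:
        nearest = min(fixes, key=lambda f: abs(f.t - t))
        if abs(nearest.t - t) <= tolerance:
            pairs.append((t, (nearest.lat, nearest.lon)))
    return pairs
```

Each resulting pair could then back a map icon placed at the fix's coordinates.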
Method of acquisition and interpretation of images used for commerce items
An image acquisition and interpretation method for articles of commerce, applied to one or more gondolas or shelving units (A) in which shelves (X) are installed. Four registration marks (B, E) are added which, in combination with a mobile device with a photographic camera (D), allow images to be captured and processed automatically using areas designed for this purpose (C) where reference marks (F, G) are displayed. The perspective distortion is corrected and the image is cropped to the area delimited by the registration marks; the processed images are then sent to a computer server that can index data such as the quantity and details of articles, the number of units of each, physical characteristics, prices, etc.
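The perspective correction described above can be sketched as fitting a homography from the four detected mark positions to a target rectangle. This is a minimal pure-Python illustration; the function names and the direct-linear-solve approach are assumptions, not details taken from the abstract:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    # Direct linear solve for the 3x3 projective transform H (with h33 = 1)
    # mapping each src corner to the corresponding dst corner.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, pt):
    # Apply H to a point with the usual perspective divide.
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In practice a library routine such as OpenCV's `cv2.getPerspectiveTransform` plus `cv2.warpPerspective` would warp the whole image, not just single points.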
Method for size estimation by image recognition of specific target using given scale
The present invention relates to a method for size estimation by image recognition of a specific target using a given scale. First, a reference object is recognized in an image and the corresponding scale is established. Then the specific target is searched for, and the size of the specific target is estimated according to the acquired scale.
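The scale-based estimation described above reduces to simple proportionality. A minimal sketch, assuming the reference object's pixel extent and real-world size are already known (the function name is hypothetical):

```python
def estimate_size(ref_pixels, ref_real, target_pixels):
    """Estimate a target's real-world size from its pixel extent,
    using a reference object of known size in the same image plane."""
    scale = ref_real / ref_pixels   # real-world units per pixel
    return target_pixels * scale
```

For example, if a reference spanning 100 px is known to be 25 mm wide, a target spanning 240 px is estimated at 60 mm. This holds only when the reference and target lie at a comparable distance from the camera.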
Temporal information prediction in autonomous machine applications
In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
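One common image-space formulation of time-to-collision, consistent with the TTC prediction described above, divides an object's apparent size by its rate of expansion across frames. A minimal sketch; this standard simplification is an assumption, not a detail from the abstract:

```python
def time_to_collision(width_prev, width_curr, dt):
    """Estimate TTC (seconds) from the growth of an object's bounding-box
    width between two frames captured `dt` seconds apart: TTC ~ w / (dw/dt)."""
    expansion = (width_curr - width_prev) / dt   # pixels per second
    if expansion <= 0:
        return float('inf')   # object not approaching
    return width_curr / expansion
```

A box growing from 40 px to 50 px over 0.1 s expands at 100 px/s, giving a TTC of about 0.5 s.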
Automated clinical documentation system and method
A method, computer program product, and computing system for proactive encounter scanning is executed on a computing device and includes obtaining encounter information of a patient encounter. The encounter information is proactively processed to determine if the encounter information is indicative of one or more medical conditions and to generate one or more result sets. The one or more result sets are provided to the user.
Systems and methods for utilizing images to determine the position and orientation of a vehicle
Described are systems and methods that utilize images to determine the position and/or orientation of a vehicle (e.g., an autonomous ground vehicle) operating in an unstructured environment (e.g., environments such as sidewalks, which typically lack lane markings, road markings, etc.). The described systems and methods can determine the vehicle's position and orientation based on an alignment of annotated images captured during operation of the vehicle with a known annotated reference map. The translation and rotation applied to obtain alignment of the annotated images with the known annotated reference map can provide the position and the orientation of the vehicle.
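The translation and rotation recovered from aligning captured images with a reference map can be illustrated with a closed-form 2D rigid alignment over matched point sets (a Kabsch-style least-squares step). This is a hedged sketch; the actual alignment method is not specified in the abstract:

```python
import math

def rigid_align_2d(src, dst):
    """Estimate the rotation theta and translation (tx, ty) that best map
    src points onto dst in the least-squares sense (2D rigid alignment)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x -= csx; y -= csy; u -= cdx; v -= cdy   # centre both point sets
        sxx += x * u + y * v                     # cosine correlation term
        sxy += x * v - y * u                     # sine correlation term
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)               # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

The recovered (theta, tx, ty) plays the role of the vehicle's orientation and position relative to the reference map.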
Viewpoint dependent brick selection for fast volumetric reconstruction
A method for culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from a field of view of an image sensor, from which image data to create the 3D reconstruction is obtained.
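The two culling tests described above (against a view frustum and against observed depth) can be sketched per brick centre. This is a deliberately simplified, hypothetical version: it assumes a square field of view and a single representative surface depth rather than a full per-pixel depth image:

```python
import math

def cull_bricks(bricks, fov_deg, near, far, surface_depth, margin=0.5):
    """Keep only bricks (given as camera-space centres (x, y, z)) that lie
    inside the view frustum and no farther than the observed surface depth
    plus a margin. z is the distance along the viewing direction."""
    half = math.tan(math.radians(fov_deg) / 2)
    kept = []
    for (x, y, z) in bricks:
        if not (near <= z <= far):                   # near/far plane test
            continue
        if abs(x) > z * half or abs(y) > z * half:   # side planes, square FOV
            continue
        if z > surface_depth + margin:               # hidden behind surface
            continue
        kept.append((x, y, z))
    return kept
```

A real implementation would test brick corners against per-pixel depth values rather than a single depth, but the keep/cull structure is the same.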