Patent classifications
G06F16/58
Synchronizing image data with either vehicle telematics data or infrastructure data pertaining to a road segment
Techniques for collecting, synchronizing, and displaying various types of data relating to a road segment enable, via one or more local or remote processors, servers, transceivers, and/or sensors, (i) enhanced and contextualized analysis of vehicle events by way of synchronizing different data types, relating to a monitored road segment, collected via various different types of data sources; (ii) enhanced and contextualized analysis of filed insurance claims pertaining to a vehicle incident at a road segment; (iii) advantageous machine learning techniques for predicting a level of risk assumed for a given vehicle event or a given road segment; (iv) techniques for accounting for region-specific driver profiles when controlling autonomous vehicles; and/or (v) improved techniques for providing a GUI to display collected data in a meaningful and contextualized manner.
DATA COLLECTION FOR OBJECT DETECTORS
A computer-implemented method of generating metadata from an image may comprise sending the image to an object detection service, which generates detections metadata from the image. The image may also be sent to a visual features extractor, which extracts visual features metadata from the image. The generated detections metadata may then be sent to an uncertainty score calculator, which computes an uncertainty score from the detections metadata. The uncertainty score may be related to a level of uncertainty within the detections metadata. The image, the visual features metadata, the detections metadata and the uncertainty score may then be stored in a database accessible over a computer network.
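The metadata pipeline described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the `detect` and `extract_features` callables stand in for the object detection service and visual features extractor, the uncertainty score is a toy "one minus mean detection confidence", and a plain dict stands in for the network-accessible database.

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    """A stored record: image id, visual features, detections, and uncertainty."""
    image_id: str
    visual_features: list
    detections: list          # each detection: {"label": str, "confidence": float}
    uncertainty: float

def compute_uncertainty(detections):
    """Toy uncertainty score: one minus the mean detection confidence."""
    if not detections:
        return 1.0
    return 1.0 - sum(d["confidence"] for d in detections) / len(detections)

def process_image(image_id, detect, extract_features, database):
    """Run the abstract's pipeline: detect, extract, score, then store."""
    detections = detect(image_id)             # object detection service
    features = extract_features(image_id)     # visual features extractor
    score = compute_uncertainty(detections)   # uncertainty score calculator
    record = ImageRecord(image_id, features, detections, score)
    database[image_id] = record               # database stand-in
    return record
```

A higher uncertainty score flags images whose detections are least trusted, which is what makes the stored records useful for later data collection and review.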
SYSTEM AND METHOD FOR IMAGE-BASED CROP IDENTIFICATION
A system and a method for image-based crop identification are disclosed. The image-based crop identification system includes a database, a communication module and a model library. The database stores sample aerial data and annotated aerial data. The communication module is coupled to the database, and is configured to provide the sample aerial data to a user and receive the annotated aerial data from the user. The model library is coupled to the database, and is configured to obtain the annotated aerial data, train a crop classification model based on the annotated aerial data, and provide the trained crop classification model for subsequent crop identification. The annotated aerial data include a determination of the type of crop appearing in the sample aerial data.
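The train-then-classify workflow above can be illustrated with a deliberately simple model. This sketch assumes the annotated aerial data have already been reduced to 2-D feature vectors paired with crop labels, and trains a nearest-centroid classifier in place of the (unspecified) crop classification model:

```python
from collections import defaultdict

def train_crop_classifier(annotated_aerial_data):
    """Train a toy nearest-centroid crop classifier.

    annotated_aerial_data: list of ((x, y), crop_label) pairs, standing in
    for user-annotated aerial imagery reduced to 2-D feature vectors.
    Returns a classify(feature_vector) -> crop_label function.
    """
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in annotated_aerial_data:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    # One centroid per crop type, averaged over its annotated samples.
    centroids = {lbl: (s[0] / counts[lbl], s[1] / counts[lbl])
                 for lbl, s in sums.items()}

    def classify(feature_vector):
        fx, fy = feature_vector
        return min(centroids,
                   key=lambda lbl: (centroids[lbl][0] - fx) ** 2
                                 + (centroids[lbl][1] - fy) ** 2)
    return classify
```

The returned `classify` function is the "trained crop classification model" handed off for subsequent crop identification.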
Process and system for supporting an autonomous vehicle
Technologies and techniques for supporting an autonomous vehicle wherein objects in the surroundings of the vehicle are captured by a sensor system, and wherein objects are identified and recognized by object recognition from the captured surroundings data. When an unknown object is present in the surroundings, the unknown object is searched for in at least one database, typical properties of the unknown object are determined on the basis of the search result, a recommended course of action is derived for the autonomous vehicle on the basis of the typical properties of the unknown object, and the derived recommended course of action is provided to the vehicle.
System And Method For Capturing And Sharing A Location Based Experience
A system and method for capturing a location-based experience at an event, including a plurality of mobile devices having cameras employed near a point of interest to capture random, crowdsourced images and associated metadata near the point of interest. In a preferred form, the images include depth camera information from prepositioned devices around the point of interest during the event. A network communicates images, depth information, and metadata to build a 3D model of the region, preferably with the locations of contributors known. Users connect to this experience platform to view the 3D model from a user-selected location and orientation and to participate in experiences with, for example, a social network.
POPULATING SEARCH RESULTS WITH INTENT AND CONTEXT-BASED IMAGES
An information handling system receives a search query, determines a first key attribute associated with a first search term and a second key attribute associated with a second search term, determines an intent of the user and a context of the search query based on the first key attribute, and determines whether the search result corresponding to the search query includes an image with a first feature corresponding to the first key attribute and a second feature corresponding to the second key attribute. Responsive to a determination that the search result does not include an image with both the first feature and the second feature, the system may generate a clubbed image that includes a first image and a second image, wherein the first image includes the first feature and the second image includes the second feature.
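The fallback logic in this abstract, which prefers a single image covering both key attributes and otherwise "clubs" two images together, can be sketched as follows. The dict-based result format and the string-concatenation composite are illustrative assumptions, not the patent's actual image-compositing step:

```python
def find_or_club(results, attr1, attr2):
    """Return an image covering both key attributes, clubbing two if needed.

    results: list of dicts like {"name": str, "features": set of attributes}.
    """
    # Prefer a single result image whose features cover both key attributes.
    for img in results:
        if attr1 in img["features"] and attr2 in img["features"]:
            return img
    # Otherwise pick one image per attribute and club them into a composite.
    first = next((i for i in results if attr1 in i["features"]), None)
    second = next((i for i in results if attr2 in i["features"]), None)
    if first and second:
        return {"name": first["name"] + "+" + second["name"],
                "features": first["features"] | second["features"]}
    return None  # neither attribute can be satisfied
```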
Image search using intersected predicted queries
A method for receiving a first user query from a user for searching an item, forming a first filter based on the first user query, and forming a first filtered item collection is provided. The method includes predicting a new query based on the first user query and a historical query log, forming a second filter for the new query, and applying the second filter to the first filtered item collection to form a second filtered item collection. Further, associating an item score to each of a plurality of items in the first and second filtered item collections, sorting the plurality of items in the first and second filtered item collections according to the item score associated to each of the plurality of items, and providing, to a user display, an item in the plurality of items in the first or second filtered item collections according to a sorting order.
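The two-stage filtering described above can be sketched compactly. In this illustration, items are dicts with tag sets, the historical query log is simplified to a dict mapping each query to its most common follow-up query, and `score` is any caller-supplied item-scoring function; none of these representations come from the patent itself:

```python
def intersect_predicted_search(items, first_query, query_log, score):
    """Filter by the user's query, then by a predicted follow-up query.

    Returns items ranked by score, with the intersected (doubly filtered)
    collection listed ahead of the rest of the first filtered collection.
    """
    # First filter: items matching the user's query.
    first_pass = [it for it in items if first_query in it["tags"]]
    # Predict a new query from the historical query log; second filter.
    predicted = query_log.get(first_query)
    second_pass = [it for it in first_pass
                   if predicted is not None and predicted in it["tags"]]
    # Score and sort both collections; intersected results rank first.
    remainder = [it for it in first_pass if it not in second_pass]
    return (sorted(second_pass, key=score, reverse=True)
            + sorted(remainder, key=score, reverse=True))
```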
Methods and systems for depth-aware image searching
Embodiments provide systems, methods, and non-transitory computer storage media for providing search result images based on associations of keywords and depth-levels of an image. In embodiments, depth-levels of an image are identified using depth-map information of the image to identify depth-segments of the image. The depth-segments are analyzed to determine keywords associated with each depth-segment based on objects, features, or content in each depth-segment. An image depth-level data structure is generated by matching keywords generated for the entire image with the keywords at each depth-level and assigning the depth-level to the keyword in the image depth-level data structure for the entire image. The image depth-level data structure may be queried for images that contain keywords and depth-level information that match the keywords and depth-level information specified in a search query.
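The depth-level data structure and its querying can be sketched as below. This assumes each image has already been split into depth-segments with keywords extracted per segment (the object/feature analysis itself is out of scope here), and keeps one depth level per keyword, preferring the nearest segment when a keyword appears at several depths:

```python
def build_depth_index(images):
    """Build a per-image keyword-to-depth-level data structure.

    images: list of dicts {"name": str, "segments": {depth_level: [keywords]}}.
    Returns {image_name: {keyword: depth_level}}.
    """
    index = {}
    for img in images:
        kw_depth = {}
        for depth, keywords in img["segments"].items():
            for kw in keywords:
                # Keep the nearest (smallest) depth level for each keyword.
                kw_depth[kw] = min(depth, kw_depth.get(kw, depth))
        index[img["name"]] = kw_depth
    return index

def query_depth_index(index, keyword, depth_level):
    """Return image names whose keyword appears at the requested depth level."""
    return [name for name, kws in index.items()
            if kws.get(keyword) == depth_level]
```

A query such as "person in the foreground" then maps to `query_depth_index(index, "person", 0)`, matching only images where the keyword sits at the requested depth.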