Patent classifications
G06V10/771
DIGITAL ASSISTANT REFERENCE RESOLUTION
Systems and processes for operating a digital assistant are provided. An example process for performing a task includes, at an electronic device having one or more processors and memory, receiving a spoken input including a request, receiving an image input including a plurality of objects, selecting a reference resolution module of a plurality of reference resolution modules based on the request and the image input, determining, with the selected reference resolution module, whether the request references a first object of the plurality of objects based on at least the spoken input, and in accordance with a determination that the request references the first object of the plurality of objects, determining a response to the request including information about the first object.
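The claimed flow can be illustrated with a small dispatcher: pick a reference resolution module from the spoken request and the detected objects, then test whether the request refers to one of them. The sketch below is an assumption-laden Python illustration, not the patented implementation; the resolver classes, keyword cues, and response wording are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    confidence: float

class DeicticResolver:
    """Resolves pointers like 'this' or 'that' to the most salient object."""
    def resolve(self, request, objects):
        return max(objects, key=lambda o: o.confidence) if objects else None

class DescriptionResolver:
    """Resolves descriptive references ('the laptop') by label matching."""
    def resolve(self, request, objects):
        words = set(request.lower().split())
        matches = [o for o in objects if o.label.lower() in words]
        return max(matches, key=lambda o: o.confidence) if matches else None

def select_resolver(request, objects):
    # Choose a reference resolution module based on the request and the image input.
    deictic_cues = {"this", "that", "these", "those"}
    if deictic_cues & set(request.lower().split()):
        return DeicticResolver()
    return DescriptionResolver()

def respond(request, objects):
    resolver = select_resolver(request, objects)
    target = resolver.resolve(request, objects)
    if target is not None:
        # The request references a detected object: answer with information about it.
        return f"That is a {target.label} (confidence {target.confidence:.2f})."
    return "I couldn't tell which object you meant."

if __name__ == "__main__":
    scene = [DetectedObject("mug", 0.91), DetectedObject("laptop", 0.88)]
    print(respond("what is this", scene))
    print(respond("tell me about the laptop", scene))
```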
SIMULTANEOUS ORIENTATION AND SCALE ESTIMATOR (SOSE)
A method and hardware-based system provide for descriptor-based feature mapping during terrain relative navigation (TRN). A first reference image/premade terrain map and a second image are acquired. Features in the first reference image and the second image are detected. A scale and an orientation of the detected features are estimated based on an intensity centroid (IC) and moments of the detected features, with the orientation in turn based on the angle between the center of each detected feature and the IC, and on an orientation stability measure that is in turn based on a radius. Signatures are computed for each of the detected features using the estimated scale and orientation and then converted into feature descriptors. The descriptors are used to match features from the two images, which are then used to perform TRN.
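The intensity-centroid orientation step is a known image-moment technique; a minimal Python sketch of it follows. The scale estimate is omitted, and the radius-based stability measure shown here is an assumed proxy, not the formula claimed in the patent.

```python
import numpy as np

def patch_moments(patch):
    """Raw image moments m00, m10, m01 of an intensity patch, with
    coordinates centered so the keypoint sits at (0, 0)."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    xs = xs - (patch.shape[1] - 1) / 2.0
    ys = ys - (patch.shape[0] - 1) / 2.0
    m00 = patch.sum()
    m10 = (xs * patch).sum()
    m01 = (ys * patch).sum()
    return m00, m10, m01

def ic_orientation(patch):
    """Orientation as the angle between the patch center and the intensity centroid."""
    _, m10, m01 = patch_moments(patch)
    return np.arctan2(m01, m10)

def centroid_radius(patch):
    """Distance from the patch center to the intensity centroid. A small radius
    means the centroid is close to the center, so the orientation estimate is
    less stable (an assumed stability measure, not the patented one)."""
    m00, m10, m01 = patch_moments(patch)
    if m00 == 0:
        return 0.0
    return float(np.hypot(m10 / m00, m01 / m00))

if __name__ == "__main__":
    patch = np.random.default_rng(0).random((31, 31)).astype(np.float32)
    theta = ic_orientation(patch)
    print(f"orientation = {np.degrees(theta):.1f} deg, "
          f"centroid radius = {centroid_radius(patch):.3f}")
```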
SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMAGE ANALYSIS
Image analytics systems, methods, and computer program products to autonomously analyze an image to identify and detect features in the image, such as the horizon, and/or identify and detect objects of interest therein, such as smoke or possible smoke. The image is captured, for example, by RGB cameras, and depicts a scene to be analyzed. The intelligent image analytics system is configured to provide alerts and/or other information to one or more concerned parties and/or computing systems so that an appropriate response can be taken.
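One plausible reading of the horizon and smoke steps can be sketched with basic array operations. The gradient-based horizon estimate, the low-saturation smoke heuristic, and the alert threshold below are assumptions for illustration only, not the patented analytics.

```python
import numpy as np

def detect_horizon_rows(rgb):
    """Estimate a horizon row per image column from the vertical brightness
    gradient (a simplified stand-in for the patented feature detection)."""
    gray = rgb.mean(axis=2)                  # H x W luminance
    grad = np.abs(np.diff(gray, axis=0))     # vertical gradient per column
    return grad.argmax(axis=0)               # row of the strongest edge per column

def flag_possible_smoke(rgb, horizon_rows, sat_thresh=0.15, area_thresh=0.02):
    """Flag low-saturation (greyish) regions above the horizon as possible smoke."""
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    rows = np.arange(rgb.shape[0])[:, None]
    above_horizon = rows < horizon_rows[None, :]
    candidate = above_horizon & (saturation < sat_thresh) & (mx > 0.3)
    return candidate.mean() > area_thresh    # alert if enough of the sky looks smoky

if __name__ == "__main__":
    frame = np.random.default_rng(1).random((120, 160, 3))
    horizon = detect_horizon_rows(frame)
    print("possible smoke:", flag_possible_smoke(frame, horizon))
```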
Method and device for reliably identifying objects in video images
A computer-implemented method for reliably identifying objects in a sequence of input images received with the aid of an imaging sensor. Positions of light sources in each input image are ascertained with the aid of a first machine learning system, in particular an artificial neural network, and objects are then identified from the resulting sequence of light-source positions across the sequence of input images, in particular with the aid of a second machine learning system, likewise an artificial neural network.
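The two-stage architecture described here, a per-frame network that outputs light-source positions feeding a sequence model that identifies objects, can be sketched as follows. The layer sizes, the fixed number of sources, and the GRU classifier are assumptions chosen for a runnable example, not the networks described in the patent.

```python
import torch
import torch.nn as nn

class LightSourceDetector(nn.Module):
    """First stage: predicts a fixed number of light-source (x, y) positions per frame."""
    def __init__(self, max_sources=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, max_sources * 2)

    def forward(self, frame):                 # frame: (B, 3, H, W)
        feats = self.backbone(frame).flatten(1)
        return self.head(feats).view(frame.shape[0], -1, 2)   # (B, max_sources, 2)

class ObjectFromTrackNet(nn.Module):
    """Second stage: classifies an object from the sequence of light-source positions."""
    def __init__(self, max_sources=8, num_classes=4):
        super().__init__()
        self.rnn = nn.GRU(input_size=max_sources * 2, hidden_size=64, batch_first=True)
        self.cls = nn.Linear(64, num_classes)

    def forward(self, positions_seq):         # (B, T, max_sources, 2)
        b, t = positions_seq.shape[:2]
        _, h = self.rnn(positions_seq.view(b, t, -1))
        return self.cls(h[-1])                # class logits per sequence

if __name__ == "__main__":
    frames = torch.rand(2, 5, 3, 64, 64)       # batch of 2 clips, 5 frames each
    detector, classifier = LightSourceDetector(), ObjectFromTrackNet()
    positions = torch.stack([detector(frames[:, t]) for t in range(frames.shape[1])], dim=1)
    print(classifier(positions).shape)         # torch.Size([2, 4])
```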
Optimizing inference time of entity matching models
Methods, systems, and computer-readable storage media for receiving input data including a set of entities of a first type and a set of entities of a second type, providing a set of features based on entities of the first type, the set of features including features expected to be included in entities of the second type, filtering entities of the second type based on the set of features to provide a sub-set of entities of the second type, and generating an output by processing the set of entities of the first type and the sub-set of entities of the second type through a ML model, the output comprising a set of matching pairs, each matching pair in the set of matching pairs comprising an entity of the set of entities of the first type and at least one entity of the sub-set of entities of the second type.
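The filtering step that keeps inference time down can be sketched as a blocking pass: derive expected features from the first-type entities, discard second-type entities that share none of them, and run the pairwise model only on the survivors. The token-based features, the Jaccard stand-in for the ML model, and the threshold below are assumptions for illustration.

```python
from itertools import product

def expected_features(entity_a):
    """Features derived from a first-type entity that a matching second-type
    entity is expected to share (a simple token-based assumption)."""
    return set(entity_a["name"].lower().split())

def filter_candidates(entities_a, entities_b):
    """Keep only second-type entities sharing at least one expected feature."""
    features = set().union(*(expected_features(a) for a in entities_a))
    return [b for b in entities_b
            if features & set(b["description"].lower().split())]

def match(entities_a, entities_b, score_fn, threshold=0.3):
    """Run the (possibly expensive) pairwise model only on the filtered sub-set."""
    candidates = filter_candidates(entities_a, entities_b)
    return [(a["name"], b["description"])
            for a, b in product(entities_a, candidates)
            if score_fn(a, b) >= threshold]

def token_overlap(a, b):
    """Stand-in for an ML matching model: Jaccard overlap of tokens."""
    tokens_a = expected_features(a)
    tokens_b = set(b["description"].lower().split())
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

if __name__ == "__main__":
    invoices = [{"name": "ACME Steel invoice 42"}]
    bank_lines = [
        {"description": "wire transfer acme steel"},
        {"description": "monthly office rent"},
    ]
    print(match(invoices, bank_lines, token_overlap))
```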
Quotation method executed by computer, quotation device, electronic device and storage medium
Disclosed is a quotation method executed by a computer, comprising: obtaining structure parameters and electrical parameters of a product (S101); constructing an external view of the product from the structure parameters, and comparing it for similarity with the external views of historical products to obtain an appearance similarity ranking (S102); comparing the electrical parameters of the product with the electrical parameters of the historical products to obtain an electrical parameter similarity ranking (S103); on the basis of the cost weights of the structural members and electrical components, combining the appearance similarity ranking and the electrical parameter similarity ranking into a comprehensive ranking based on both the structure parameters and the electrical parameters (S104); and determining, based on the comprehensive ranking, a bill of materials for the product, and calculating the product quotation from that bill of materials (S105).
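One way to combine the two rankings with cost weights is a weighted rank sum; the scheme, the weights, and the margin-based pricing below are assumptions for illustration, not the formulas claimed in the patent.

```python
def combined_ranking(appearance_rank, electrical_rank, w_struct=0.6, w_elec=0.4):
    """Combine two similarity rankings of historical products into one comprehensive
    ranking, weighted by the structural vs. electrical cost share (an assumed
    weighted-rank-sum scheme)."""
    products = set(appearance_rank) | set(electrical_rank)
    worst = len(products)
    def score(p):
        # Lower rank position is better; products missing from a ranking fall to the end.
        a = appearance_rank.index(p) if p in appearance_rank else worst
        e = electrical_rank.index(p) if p in electrical_rank else worst
        return w_struct * a + w_elec * e
    return sorted(products, key=score)

def quote(boms, comprehensive, margin=1.15):
    """Price the new product from the bill of materials of the closest historical product."""
    closest = comprehensive[0]
    return round(sum(boms[closest].values()) * margin, 2)

if __name__ == "__main__":
    appearance = ["cabinet-A", "cabinet-B", "cabinet-C"]   # most similar first
    electrical = ["cabinet-B", "cabinet-A", "cabinet-C"]
    ranking = combined_ranking(appearance, electrical)
    boms = {
        "cabinet-A": {"sheet metal": 120.0, "relay": 35.0},
        "cabinet-B": {"sheet metal": 110.0, "relay": 40.0},
        "cabinet-C": {"sheet metal": 150.0, "relay": 30.0},
    }
    print(ranking[0], quote(boms, ranking))
```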
Systems and methods for digitally representing a scene with multi-faceted primitives
Disclosed is a system and associated methods for generating and rendering a polyhedral point cloud that represents a scene with multi-faceted primitives. Each multi-faceted primitive stores multiple sets of values that represent different non-positional characteristics that are associated with a particular point in the scene from different angles. For instance, the system generates a multi-faceted primitive for a particular point of the scene that is captured in a first capture from a first position and a second capture from a different second position. Generating the multi-faceted primitive includes defining a first facet with a first surface normal oriented towards the first position and first non-positional values based on descriptive characteristics of the particular point in the first capture, and defining a second facet with a second surface normal oriented towards the second position and second non-positional values based on different descriptive characteristics of the particular point in the second capture.
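The primitive itself is straightforward to sketch as a data structure: one position plus a list of facets, each holding a surface normal oriented toward its capture position and the non-positional values observed from that direction. The view-dependent facet selection in `shade` is an assumed rendering rule for illustration, not necessarily the patented renderer.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Facet:
    normal: np.ndarray          # unit vector oriented toward the capture position
    color: np.ndarray           # non-positional values seen from that direction (RGB)

@dataclass
class MultiFacetedPrimitive:
    position: np.ndarray        # the particular point in the scene
    facets: list = field(default_factory=list)

    def add_capture(self, capture_position, color):
        """Define a facet whose normal is oriented toward the capture position."""
        direction = np.asarray(capture_position, float) - self.position
        self.facets.append(Facet(direction / np.linalg.norm(direction),
                                 np.asarray(color, float)))

    def shade(self, view_position):
        """Render using the facet whose normal best faces the current viewpoint."""
        view_dir = np.asarray(view_position, float) - self.position
        view_dir /= np.linalg.norm(view_dir)
        best = max(self.facets, key=lambda f: float(f.normal @ view_dir))
        return best.color

if __name__ == "__main__":
    point = MultiFacetedPrimitive(position=np.array([0.0, 0.0, 0.0]))
    point.add_capture([0, 0, 5], color=[0.9, 0.2, 0.2])    # first capture, front
    point.add_capture([5, 0, 0], color=[0.2, 0.2, 0.9])    # second capture, side
    print(point.shade([0, 0, 10]))   # front-facing facet wins
    print(point.shade([10, 0, 0]))   # side-facing facet wins
```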