G06V10/426

Indoor navigation method, indoor navigation equipment, and storage medium

An indoor navigation method is provided, including: receiving an instruction for navigation, and collecting an environment image; extracting an instruction room feature and an instruction object feature carried in the instruction, and determining a visual room feature, a visual object feature, and a view angle feature based on the environment image; fusing the instruction object feature and the visual object feature with a first knowledge graph representing an indoor object association relationship to obtain an object feature, and determining a room feature based on the visual room feature and the instruction room feature; and determining a navigation decision based on the view angle feature, the room feature, and the object feature.
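The fusion step described above can be sketched in miniature: instruction and visual object features are combined, then each object's feature is smoothed with its neighbors in a small knowledge graph of object co-occurrence. The object names, weights, and fusion formula here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of knowledge-graph feature fusion (weights are illustrative).

def fuse_with_graph(instr_feat, vis_feat, adjacency):
    """Combine instruction and visual object features, then augment each
    object's feature with graph-weighted contributions from its neighbors."""
    combined = {obj: 0.5 * instr_feat.get(obj, 0.0) + 0.5 * vis_feat.get(obj, 0.0)
                for obj in set(instr_feat) | set(vis_feat)}
    fused = {}
    for obj, feat in combined.items():
        neighbors = adjacency.get(obj, {})
        neighbor_sum = sum(w * combined.get(n, 0.0) for n, w in neighbors.items())
        fused[obj] = feat + neighbor_sum  # graph-informed object feature
    return fused

# Toy knowledge graph: "sofa" and "tv" co-occur in living rooms.
adjacency = {"sofa": {"tv": 0.8}, "tv": {"sofa": 0.8}}
instr = {"tv": 1.0}              # object named in the instruction
vis = {"sofa": 1.0, "tv": 0.2}   # objects detected in the environment image
obj_feat = fuse_with_graph(instr, vis, adjacency)
```

The graph term boosts objects that are weakly visible but strongly associated with what the instruction mentions.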

FACE IMAGE RESTORATION METHOD, SYSTEM, STORAGE MEDIUM AND DEVICE

The present application discloses a face image restoration method, a system, a storage medium, and a device. The restoration model adopted in the method starts from the structured information of a face: it generates a structured face graph based on features of the to-be-restored face image, and restores the face image from the structured face graph via a decoder. This addresses the difficulty of capturing and learning structured information with traditional convolution operations, improves the quantitative indicators of face restoration, and enriches the visual results.

Systems and methods for identifying roads in images
09547805 · 2017-01-17

Methods and systems described herein enable self-supervised road detection in images. The method includes receiving an image, segmenting the image into at least one fragment based at least in part on at least one pixel feature, determining, using a processor, a road likeness score for the at least one fragment based at least in part on a medial radius, and identifying roads based at least in part on the road likeness score.
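The medial-radius scoring step can be sketched as follows: roads have a roughly constant half-width along their length, so a fragment whose medial radii vary little scores high. The score formula and threshold below are assumptions for illustration, not the patent's actual measure.

```python
# Illustrative road-likeness scoring from medial radii (formula is assumed).

def road_likeness(widths):
    """Score a fragment by the consistency of its medial radii: a high mean
    relative to low variance suggests a road-like, constant-width shape."""
    mean = sum(widths) / len(widths)
    var = sum((w - mean) ** 2 for w in widths) / len(widths)
    return mean / (1.0 + var)

def identify_roads(fragments, threshold=2.0):
    """Keep fragments whose road-likeness score passes a threshold."""
    return [name for name, widths in fragments.items()
            if road_likeness(widths) >= threshold]

fragments = {
    "highway": [4, 4, 5, 4, 4],      # near-constant width: road-like
    "building": [10, 2, 7, 1, 12],   # erratic widths: not road-like
}
roads = identify_roads(fragments)
```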

Activity recognition systems and methods
09547678 · 2017-01-17

An activity recognition system is disclosed. A plurality of temporal features is generated from a digital representation of an observed activity using a feature detection algorithm. An observed activity graph comprising one or more clusters of temporal features generated from the digital representation is established, wherein each one of the one or more clusters of temporal features defines a node of the observed activity graph. At least one contextually relevant scoring technique is selected from similarity scoring techniques for known activity graphs, the at least one contextually relevant scoring technique being associated with activity ingestion metadata that satisfies device context criteria defined based on device contextual attributes of the digital representation, and a similarity activity score is calculated for the observed activity graph as a function of the at least one contextually relevant scoring technique, the similarity activity score being relative to at least one known activity graph.
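The graph-matching step above can be sketched with node labels and a simple set-overlap measure standing in for the patent's contextually selected scoring technique. The Jaccard metric and activity names are illustrative assumptions.

```python
# Sketch of scoring an observed activity graph against known activity graphs;
# Jaccard similarity stands in for the "contextually relevant" technique.

def jaccard(nodes_a, nodes_b):
    """Set-overlap similarity between two graphs' node-label sets."""
    a, b = set(nodes_a), set(nodes_b)
    return len(a & b) / len(a | b)

def best_match(observed_nodes, known_graphs):
    """Score the observed graph against each known activity graph and
    return the best-scoring activity with its similarity score."""
    scores = {name: jaccard(observed_nodes, nodes)
              for name, nodes in known_graphs.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

known = {
    "walking": ["leg-swing", "arm-swing", "torso-steady"],
    "waving": ["arm-raise", "hand-oscillate"],
}
activity, score = best_match(["leg-swing", "arm-swing", "head-turn"], known)
```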

Feature mask determination for images
09547908 · 2017-01-17

Implementations relate to feature mask determination for images. In some implementations, a computer-implemented method to determine a feature mask for an image includes estimating one or more prior regions in the image that define a feature in the image. The method determines superpixels based on multiple pixels of the image similar in color. The method constructs a graph, each node of the graph corresponding to a superpixel, and determines a superpixel score for each superpixel based on a number of pixels of the superpixel. The method determines one or more segmented regions in the image based on applying a graph cut technique to the graph based at least on the superpixel scores, and determines the feature mask based on the segmented regions. The feature mask indicates a degree to which pixels of the image depict the feature. The method modifies the image based on the feature mask.
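The superpixel-scoring step can be sketched as below; to keep the example short, the graph cut itself is replaced by a simple threshold on the score (the patent applies a real graph cut technique, and the prior-region overlap weighting here is an assumption).

```python
# Sketch of superpixel scoring for a feature mask; a threshold stands in
# for the patent's graph cut, and the overlap scoring is an assumption.

def superpixel_scores(superpixels, prior_region):
    """Score each superpixel by the fraction of its pixels that fall
    inside an estimated prior region for the feature."""
    scores = {}
    for sp_id, pixels in superpixels.items():
        inside = sum(1 for p in pixels if p in prior_region)
        scores[sp_id] = inside / len(pixels)
    return scores

def feature_mask(superpixels, prior_region, threshold=0.5):
    """Soft mask: each pixel inherits its superpixel's score, indicating
    the degree to which it depicts the feature; superpixels over the
    threshold are kept as segmented regions."""
    scores = superpixel_scores(superpixels, prior_region)
    mask = {p: scores[sp_id] for sp_id, pixels in superpixels.items()
            for p in pixels}
    kept = [sp for sp, s in scores.items() if s >= threshold]
    return mask, kept

superpixels = {"a": [(0, 0), (0, 1)], "b": [(1, 0), (1, 1)]}
prior = {(0, 0), (0, 1), (1, 0)}   # estimated prior region for the feature
mask, kept = feature_mask(superpixels, prior)
```

The soft mask values mirror the abstract's point that the mask indicates a degree, not a hard yes/no, per pixel.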

Facilitation of error tolerant image tracing optimization

A stroke refinement technique can be used to refine the location, orientation, and shape of input strokes by exploiting gradient features from an underlying image being traced. The technique can also be used to adjust user strokes with respect to image gradients. The stroke refinement technique can comprise a local optimization, a semi-global optimization, and a global optimization to facilitate error tolerant image tracing optimization.
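A minimal sketch of the local-optimization step: each stroke point is nudged to the neighboring pixel with the strongest image gradient within a small window. The window radius and the toy gradient field are assumptions for illustration.

```python
# Illustrative local stroke refinement: snap points to high-gradient pixels
# (window size and gradient field are assumed, not from the source).

def refine_stroke(stroke, gradient, radius=1):
    """Move each stroke point to the highest-gradient pixel within a
    (2*radius+1)-sized neighborhood, pulling strokes toward image edges."""
    refined = []
    for (r, c) in stroke:
        candidates = [(r + dr, c + dc)
                      for dr in range(-radius, radius + 1)
                      for dc in range(-radius, radius + 1)
                      if (r + dr, c + dc) in gradient]
        refined.append(max(candidates, key=lambda p: gradient[p]))
    return refined

# Toy gradient magnitudes: the strong edge sits at (0, 1).
gradient = {(0, 0): 0.1, (0, 1): 0.9, (1, 0): 0.2, (1, 1): 0.3}
snapped = refine_stroke([(0, 0)], gradient)
```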

METHOD AND SYSTEM FOR RECOGNIZING FACES
20170004387 · 2017-01-05

A method and a system for recognizing faces have been disclosed. The method may comprise: retrieving a pair of face images; segmenting each of the retrieved face images into a plurality of image patches, wherein each patch in one image and a corresponding one in the other image form a pair of patches; determining a first similarity of each pair of patches; determining, from all pairs of patches, a second similarity of the pair of face images; and fusing the first similarity determined for each pair of patches and the second similarity determined for the pair of face images.
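The two-level similarity and its fusion can be sketched as follows. Cosine similarity and the equal-weight averaging fusion are assumptions standing in for the patent's learned similarity measures; the patch vectors are toy features.

```python
import math

# Sketch of patch-level plus image-level similarity with a simple fusion
# (cosine similarity and equal weights are assumptions).

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fused_similarity(patches_a, patches_b):
    """First similarity: one score per corresponding patch pair.
    Second similarity: a whole-image score over all patch features.
    Fusion: equal-weight average of the mean patch score and image score."""
    patch_scores = [cosine(pa, pb) for pa, pb in zip(patches_a, patches_b)]
    flat_a = [x for p in patches_a for x in p]
    flat_b = [x for p in patches_b for x in p]
    image_score = cosine(flat_a, flat_b)
    return 0.5 * (sum(patch_scores) / len(patch_scores)) + 0.5 * image_score

# Identical faces should fuse to a similarity of 1.0.
same = fused_similarity([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```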

Unified framework for multi-modal similarity search

Technology is disclosed herein for enhanced similarity search. In an implementation, a search environment includes one or more computing hardware, software, and/or firmware components in support of enhanced similarity search. The one or more components identify a modality for a similarity search with respect to a query object. The components generate an embedding for the query object based on the modality and based on connections between the query object and neighboring nodes in a graph. The embedding for the query object provides the basis for the search for similar objects.
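The neighbor-based embedding step can be sketched in a few lines: the query object's embedding is derived by averaging the embeddings of its graph neighbors, scaled by a per-modality weight. The weighting scheme and the neighbor embeddings are illustrative assumptions.

```python
# Sketch of deriving a query embedding from graph neighbors
# (the modality weighting is an assumed, illustrative scheme).

def embed_query(neighbors, modality_weight):
    """Average the neighbors' embedding vectors component-wise, then
    scale by a weight chosen for the identified modality."""
    dim = len(next(iter(neighbors.values())))
    avg = [sum(vec[i] for vec in neighbors.values()) / len(neighbors)
           for i in range(dim)]
    return [modality_weight * x for x in avg]

# Toy graph neighborhood of the query object, with 2-d embeddings.
neighbors = {"imgA": [1.0, 0.0], "imgB": [0.0, 1.0]}
query_emb = embed_query(neighbors, modality_weight=2.0)
```

The resulting vector then serves as the query point for a nearest-neighbor search over the object embeddings.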