G06V10/426

Computing a pathological condition
12142379 · 2024-11-12

A computer-implemented method for computing a pathological condition of a subject, comprising: obtaining (10) initial cranial image data of a subject from an input interface, and incorporating the initial cranial image data into a knowledge model comprised within a semantic network stored in a memory; performing (12), via a processor, at least one processing sequence on the initial cranial image data using the semantic network to thus provide, in the semantic network, at least one element comprising topographical data of the subject's brain, or a portion of the subject's brain, referenced to a reference coordinate system, wherein the at least one processing sequence performs at least one state iteration of at least a portion of the semantic network from a first state into a second state; comparing (14) the topographical data of the subject's brain to one or more pathological condition prediction elements of the semantic network to form an indication of a pathological condition of the subject; and generating (16) an additional element in the semantic network comprising the indication of the pathological condition of the subject.
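The claimed flow can be sketched as a sequence of operations over a node store standing in for the semantic network. This is a minimal illustration, not the patent's actual model: the dict-based "network", the mean-intensity stand-in for topographical data, and the threshold predictors are all hypothetical.

```python
# Minimal sketch of the claimed steps over a dict-of-elements "semantic
# network". Element names, the topography measurement, and the predictor
# thresholds are hypothetical illustrations.

def incorporate(network, image_data):
    # Step (10): incorporate initial cranial image data into the network.
    network["initial_image"] = {"type": "image", "data": image_data}

def process(network):
    # Step (12): one "state iteration" derives a topography element from
    # the image element (here, mean intensity as a stand-in measurement).
    data = network["initial_image"]["data"]
    network["topography"] = {"type": "topography",
                             "value": sum(data) / len(data)}

def compare_and_annotate(network, predictors):
    # Steps (14) and (16): compare topographical data against prediction
    # elements, then store the indication as a new element in the network.
    topo = network["topography"]["value"]
    matches = [name for name, thresh in predictors.items() if topo >= thresh]
    network["indication"] = {"type": "indication", "conditions": matches}

network = {}
incorporate(network, [0.2, 0.9, 0.7])
process(network)
compare_and_annotate(network, {"condition_a": 0.5, "condition_b": 0.9})
print(network["indication"]["conditions"])  # ['condition_a']
```

Each step reads and writes elements of the same network, mirroring the claim's requirement that both the topographical data and the resulting indication live inside the semantic network.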

SYSTEM AND A METHOD FOR LEARNING FEATURES ON GEOMETRIC DOMAINS

A method for extracting hierarchical features from data defined on a geometric domain is provided. The method includes applying on said data at least an intrinsic convolution layer, including the steps of applying a patch operator to extract a local representation of the input data around a point on the geometric domain and outputting the correlation of a patch resulting from the extraction with a plurality of templates. A system to implement the method is also described.
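The patch-then-correlate structure of the intrinsic convolution layer can be sketched on a toy domain. Here a 1-D periodic ring stands in for a general geometric domain, and the template values are made-up examples; a real implementation would use a geometry-aware patch operator.

```python
import numpy as np

# Hedged sketch: a toy "patch operator" on a 1-D periodic domain (a ring).
# The patch around a point is its local neighborhood; the layer output is
# the correlation (dot product) of that patch with each template.

def patch_operator(signal, point, radius=1):
    # Extract a local representation of the data around `point`.
    n = len(signal)
    idx = [(point + k) % n for k in range(-radius, radius + 1)]
    return signal[idx]

def intrinsic_convolution(signal, templates, point):
    # Output one correlation value per template.
    patch = patch_operator(signal, point, radius=templates.shape[1] // 2)
    return templates @ patch

signal = np.array([0.0, 1.0, 0.0, -1.0])
templates = np.array([[1.0, 0.0, -1.0],   # edge-like template
                      [1.0, 1.0, 1.0]])   # averaging template
print(intrinsic_convolution(signal, templates, point=1))  # [0. 1.]
```

Stacking such layers, with the templates learned, yields the hierarchical features the abstract describes.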

Fast and robust identification of extremities of an object within a scene

Described herein are a system and method for identifying extremities of an object within a scene. The method comprises operating an image processing system to receive image data from a sensor. The image data represents an image of the scene with the object. The image data comprises a two-dimensional array of pixels, and each pixel contains a depth value indicating distance from the sensor. The image processing system slices the image into slices. Each respective slice comprises those pixels with depth values that lie within a respective range of distances defined relative to a reference. For each of the slices, the method identifies one or more connected regions of pixels that are neighbors in the two-dimensional array of pixels. The method builds, based on the connected regions of pixels that have been identified for the slices and depth information inherent to the respective slices, a graph consisting of interconnected nodes. The connected regions form the nodes of the graph and the nodes are interconnected in the graph based on their relative distance to the reference. Extremities of the object are determined based on the graph.
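The slicing and region-finding stages can be sketched on a tiny depth image. The slice width and the 2x3 example image are illustrative; the final graph-linking and extremity-detection steps are omitted for brevity.

```python
# Sketch of depth slicing plus connected-region extraction on a tiny
# depth image. The slice width and depth values are illustrative.

def slice_pixels(depth, slice_width):
    # Assign each pixel to a slice index by its distance from the sensor
    # (the sensor acting as the reference).
    return {(r, c): int(d // slice_width)
            for r, row in enumerate(depth)
            for c, d in enumerate(row)}

def connected_regions(pixels):
    # 4-connected components among a set of pixel coordinates.
    pixels, regions = set(pixels), []
    while pixels:
        stack, region = [pixels.pop()], set()
        while stack:
            r, c = stack.pop()
            region.add((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in pixels:
                    pixels.remove(nb)
                    stack.append(nb)
        regions.append(region)
    return regions

depth = [[1.0, 1.0, 5.0],
         [1.0, 5.0, 5.0]]
slices = slice_pixels(depth, slice_width=2.0)
by_slice = {}
for px, s in slices.items():
    by_slice.setdefault(s, []).append(px)

# Nodes of the eventual graph: one per connected region per slice.
nodes = [(s, region) for s, pxs in sorted(by_slice.items())
         for region in connected_regions(pxs)]
print(len(nodes))  # 2: one near region, one far region
```

In the full method, these per-slice regions become graph nodes linked across adjacent slices, and extremities fall out as leaves of that graph.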

Searches over graphs representing geospatial-temporal remote sensing data

Various technologies pertaining to identifying objects of interest in remote sensing images by searching over geospatial-temporal graph representations are described herein. Graphs are constructed by representing objects in remote sensing images as nodes, and connecting nodes with undirected edges representing either distance or adjacency relationships between objects and directed edges representing changes in time. Geospatial-temporal graph searches are made computationally efficient by taking advantage of characteristics of geospatial-temporal data in remote sensing images through the application of various graph search techniques.
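The two edge types can be sketched directly. The observations, distance threshold, and track-following rule below are hypothetical stand-ins for the remote-sensing data and relationship definitions the abstract describes.

```python
import math

# Hedged sketch of a geospatial-temporal graph: detected objects become
# nodes, spatial proximity becomes undirected edges, and the same tracked
# object observed at a later time gets a directed "time" edge.

observations = [
    {"id": "car@t0",  "xy": (0, 0), "t": 0, "track": "car"},
    {"id": "shed@t0", "xy": (3, 4), "t": 0, "track": "shed"},
    {"id": "car@t1",  "xy": (1, 0), "t": 1, "track": "car"},
]

def build_graph(obs, dist_thresh=6.0):
    undirected, directed = set(), set()
    for a in obs:
        for b in obs:
            if a["id"] >= b["id"]:
                continue  # visit each unordered pair once
            if a["t"] == b["t"] and math.dist(a["xy"], b["xy"]) <= dist_thresh:
                undirected.add((a["id"], b["id"]))  # distance relationship
    for a in obs:
        for b in obs:
            if a["track"] == b["track"] and b["t"] == a["t"] + 1:
                directed.add((a["id"], b["id"]))    # change in time
    return undirected, directed

und, dird = build_graph(observations)
print(sorted(und), sorted(dird))
```

Searches over such a graph can then exploit the sparsity of both edge types, which is the efficiency point the abstract makes.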

DETERMINING AN ITEM THAT HAS CONFIRMED CHARACTERISTICS

In various example embodiments, a system and method for determining an item that has confirmed characteristics are described herein. An image that depicts an object is received from a client device. Structured data that corresponds to characteristics of one or more items is retrieved. A set of characteristics is determined, the set of characteristics being predicted to match with the object. An interface that includes a request for confirmation of the set of characteristics is generated. The interface is displayed on the client device. Confirmation that at least one characteristic from the set of characteristics matches with the object depicted in the image is received from the client device.
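The round trip can be sketched end to end. The catalog, the most-frequent-value prediction rule, and the confirmation payload below are hypothetical stand-ins for the actual structured data and prediction step.

```python
from collections import Counter

# Sketch of the confirm-characteristics round trip. The matching rule
# (most frequent value per characteristic) is an illustrative stand-in
# for the real prediction step.

catalog = [  # structured data for candidate items
    {"brand": "acme", "color": "red"},
    {"brand": "acme", "color": "blue"},
    {"brand": "zenith", "color": "red"},
]

def predict_characteristics(items):
    # Predict the characteristic set most likely to match the object.
    return {key: Counter(it[key] for it in items).most_common(1)[0][0]
            for key in items[0]}

def confirmation_request(predicted):
    # Interface payload asking the client to confirm the prediction.
    return {"confirm": predicted}

predicted = predict_characteristics(catalog)
request = confirmation_request(predicted)
# The client confirms a subset of the requested characteristics:
confirmed = {k: v for k, v in request["confirm"].items() if k == "brand"}
print(predicted, confirmed)
```

The confirmed subset is what distinguishes an item with confirmed characteristics from a purely predicted match.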

Edge-aware bilateral image processing

Example embodiments may allow for the efficient, edge-preserving filtering, upsampling, or other processing of image data with respect to a reference image. A cost-minimization problem to generate an output image from the input array is mapped onto regularly-spaced vertices in a multidimensional vertex space. This mapping is based on an association between pixels of the reference image and the vertices, and between elements of the input array and the pixels of the reference image. The problem is then solved to determine vertex disparity values for each of the vertices. Pixels of the output image can be determined based on determined vertex disparity values for respective one or more vertices associated with each of the pixels. This fast, efficient image processing method can be used to enable edge-preserving image upsampling, image colorization, semantic segmentation of image contents, image filtering or de-noising, or other applications.
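A drastically simplified 1-D version conveys the map-solve-read-back structure. Per-vertex averaging stands in for the actual cost minimization, and the bin counts and example signal are illustrative only.

```python
import numpy as np

# Toy sketch of the vertex-space idea in one dimension: input elements
# are accumulated onto regularly spaced vertices in a (position,
# reference-intensity) space; each vertex value is "solved" here by
# simple averaging (standing in for the cost minimization); output
# pixels then read back from their associated vertex.

def edge_aware_filter(values, reference, pos_bins=2, ref_bins=2):
    ref = np.asarray(reference, float)
    pos = np.arange(len(values))
    # Associate each pixel with a vertex via position + reference value.
    pi = np.minimum((pos * pos_bins) // len(values), pos_bins - 1)
    ri = np.minimum((ref * ref_bins).astype(int), ref_bins - 1)
    sums = np.zeros((pos_bins, ref_bins))
    counts = np.zeros((pos_bins, ref_bins))
    np.add.at(sums, (pi, ri), values)
    np.add.at(counts, (pi, ri), 1)
    vertex = np.divide(sums, counts, out=np.zeros_like(sums),
                       where=counts > 0)
    return vertex[pi, ri]  # read-back: each pixel takes its vertex value

values = np.array([1.0, 3.0, 10.0, 12.0])
reference = np.array([0.1, 0.1, 0.9, 0.9])  # edge between pixels 1 and 2
print(edge_aware_filter(values, reference))  # [ 2.  2. 11. 11.]
```

Because pixels on opposite sides of the reference edge map to different vertices, smoothing happens within each side but never across the edge, which is the edge-preserving property the abstract claims.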

Gauge equivariant geometric graph convolutional neural network

Certain aspects of the present disclosure provide a method for performing machine learning, comprising: determining a plurality of vertices in a neighborhood associated with a mesh including a target vertex; determining a linear transformation configured to parallel transport signals along all edges in the mesh to the target vertex; applying the linear transformation to the plurality of vertices in the neighborhood to form a combined signal at the target vertex; determining a set of basis filters; linearly combining the basis filters using a set of learned parameters to form a gauge equivariant convolution filter, wherein the gauge equivariant convolution filter is constrained to maintain gauge equivariance; applying the gauge equivariant convolution filter to the combined signal to form an intermediate output; and applying a nonlinearity to the intermediate output to form a convolution output.
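The filter-construction steps can be sketched in isolation. Parallel transport is reduced here to 2-D rotations aligning each neighbor's frame with the target vertex, and the basis filters and learned parameters are made-up examples, not the disclosure's actual construction.

```python
import numpy as np

# Hedged sketch of the claimed pipeline: transport neighbor signals to
# the target vertex, combine them, build the filter as a learned linear
# combination of basis filters, apply it, then apply a nonlinearity.

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def gauge_conv(neighbor_signals, transport_angles, basis, params):
    # Parallel-transport each neighbor signal into the target's gauge,
    # then sum to form the combined signal at the target vertex.
    combined = sum(rotation(t) @ sig
                   for t, sig in zip(transport_angles, neighbor_signals))
    # Linearly combine basis filters with learned parameters.
    kernel = sum(w * b for w, b in zip(params, basis))
    out = kernel @ combined
    return np.maximum(out, 0.0)  # ReLU nonlinearity

signals = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
angles = [0.0, np.pi / 2]        # transport rotates the 2nd neighbor 90°
basis = [np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]])]
params = [0.5, 0.5]
print(np.round(gauge_conv(signals, angles, basis, params), 6))  # [0. 1.]
```

In the actual method the basis is constrained so that the combined filter remains gauge equivariant; this sketch only shows where that constrained basis plugs into the computation.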

System and method for ontology guided indoor scene understanding for cognitive robotic tasks

Existing cognitive robotic applications follow the practice of building specific applications for specific use cases. However, knowledge of the world and its semantics are common to a robot across multiple tasks. In this disclosure, to enable the use of knowledge across multiple scenarios, a method and system for ontology guided indoor scene understanding for cognitive robotic tasks is described, wherein scenes are processed using techniques selected by querying an ontology with the relevant objects in the perceived scene, to generate a semantically rich scene graph. Herein, an initially hand-crafted ontology is updated and refined in an online fashion using an external knowledge base, human-robot interaction, and perceived information. This knowledge supports semantic navigation and aids speech- and text-based human-robot interactions.
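The query-then-refine loop can be sketched with a toy ontology. All classes, relations, and the "typically_on" rule below are hypothetical examples, not the disclosure's actual ontology schema.

```python
# Sketch: a tiny hand-crafted ontology is queried with objects perceived
# in the scene to produce a semantically enriched scene graph, and is
# then refined online with newly observed facts.

ontology = {
    "cup":   {"is_a": "container", "typically_on": "table"},
    "table": {"is_a": "furniture"},
}

def build_scene_graph(perceived, ontology):
    # Query the ontology with the perceived objects to add relations.
    graph = {"nodes": sorted(perceived), "edges": []}
    for obj in sorted(perceived):
        info = ontology.get(obj, {})
        if info.get("typically_on") in perceived:
            graph["edges"].append((obj, "on", info["typically_on"]))
    return graph

def refine_ontology(ontology, fact):
    # Online update from an external knowledge base or from
    # human-robot interaction.
    obj, key, value = fact
    ontology.setdefault(obj, {})[key] = value

scene = build_scene_graph({"cup", "table"}, ontology)
refine_ontology(ontology, ("bottle", "is_a", "container"))
print(scene["edges"], ontology["bottle"])
```

The resulting scene graph, rather than raw detections, is what downstream tasks such as semantic navigation and dialogue would consume.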