G06F18/2325

Contextual Content Placement In Virtual Universes

Techniques for placing content in virtual universes at locations contextually compatible with the content are disclosed. A system trains a machine learning model to identify virtual environments compatible with content based on attributes representing contexts of the environments. Using the machine learning model, the system computes a compatibility score between a target content item and each candidate contextual environment. The system selects a particular contextual environment for placement of the target content item based on its compatibility score.
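The scoring-and-selection step described above can be sketched as follows. This is a minimal illustration, not the patented method: the environment attribute vectors, the content embedding, and the use of cosine similarity as a stand-in for the trained model are all assumptions.

```python
# Sketch: pick the virtual environment with the highest compatibility
# score for a content item. Cosine similarity between hypothetical
# attribute vectors stands in for the trained ML model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Attribute vectors representing the context of each candidate environment.
environments = {
    "sports_arena": [0.9, 0.1, 0.2],
    "concert_hall": [0.2, 0.8, 0.3],
    "art_gallery":  [0.1, 0.3, 0.9],
}

content_item = [0.85, 0.15, 0.25]   # embedding of the target content

scores = {name: cosine(content_item, vec) for name, vec in environments.items()}
best = max(scores, key=scores.get)
print(best)   # environment with the highest compatibility score
```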

FEATURE VECTOR CALCULATION APPARATUS, CLUSTERING APPARATUS, TRAINING APPARATUS, METHOD, AND STORAGE MEDIUM

A feature vector calculation apparatus includes processing circuitry. The processing circuitry is configured to: acquire target data comprising a plurality of pieces of target data, a plurality of pieces of deformed data obtained by deforming the target data, and a trained model adapted to receive each of the pieces of deformed data as input and output a feature vector; calculate the feature vector for each of the pieces of deformed data using the trained model; and calculate, for each of the pieces of target data, a degree of variation indicative of a degree of variation in the feature vectors.
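The computation described above can be sketched as: deform each target item several times, embed each deformed copy, and measure how much the resulting feature vectors spread out. The additive-noise deformation, the fixed linear map standing in for the trained model, and all names are illustrative assumptions.

```python
# Sketch: degree of variation of feature vectors over deformed copies
# of each piece of target data. A fixed random linear map stands in
# for the trained model.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))          # stand-in "trained model": x -> W @ x

def feature(x):
    return W @ x

def degree_of_variation(target, n_deform=8, scale=0.1):
    # Deform the target (here: small additive perturbations).
    deformed = [target + rng.normal(0, scale, size=target.shape)
                for _ in range(n_deform)]
    feats = np.stack([feature(d) for d in deformed])
    mean = feats.mean(axis=0)
    # Variation = average distance of the feature vectors from their mean.
    return float(np.linalg.norm(feats - mean, axis=1).mean())

targets = [np.zeros(3), np.ones(3)]
variations = [degree_of_variation(t) for t in targets]
print(variations)   # one variation score per piece of target data
```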

Kinematic invariant-space maximum entropy tracker (KISMET)

A processor-implemented method for simultaneously tracking one or more objects includes receiving, via a dynamical system with a set of sensors, a first set of unlabeled measurements from one or more objects. Each of the measurements is a function of time. A set of candidate tracks is determined for the one or more objects. Probabilities of each of the first set of unlabeled measurements being assigned to each of the set of candidate tracks are computed. A track from the set of candidate tracks is determined for each of the one or more objects based on a joint probability distribution of track attributes and the probabilistic assignment of each of the first set of unlabeled measurements to each of the set of candidate tracks.
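The probabilistic assignment step above can be sketched in one dimension: each unlabeled measurement gets a probability of belonging to each candidate track, and those probabilities are aggregated per track. The softmax-over-negative-squared-distance likelihood and the toy tracks and measurements are illustrative assumptions, not the KISMET formulation.

```python
# Sketch: probabilistic assignment of unlabeled measurements to
# candidate tracks (softmax over negative squared distance to each
# track's predicted position).
import math

candidate_tracks = [0.0, 5.0, 10.0]       # predicted 1-D positions
measurements = [0.2, 5.1, 4.8, 9.7]       # unlabeled sensor readings

def assignment_probs(z, tracks):
    logits = [-(z - t) ** 2 for t in tracks]
    m = max(logits)                        # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = [assignment_probs(z, candidate_tracks) for z in measurements]

# Aggregate: expected number of measurements assigned to each track.
track_mass = [sum(p[j] for p in probs) for j in range(len(candidate_tracks))]
print(track_mass)
```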

TRANSACTION EXEMPLARS FOR MACHINE LEARNING
20250053617 · 2025-02-13 ·

Provided are systems and methods which can use machine learning to draw additional inferences about transaction records from transaction strings. In one example, a method may include converting a plurality of transaction strings corresponding to a plurality of transactions into a plurality of vectors in multidimensional vector space, respectively, via execution of a machine learning model, identifying a cluster of vectors in the multidimensional space that correspond to a subset of transactions among the plurality of transactions that are related based on distances between the cluster of vectors in the multidimensional space, identifying a representative vector within the cluster that corresponds to an exemplary transaction of the subset of transactions based on the cluster of vectors, and storing the representative vector within a data store.
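The pipeline above (embed transaction strings, cluster the vectors, pick a representative) can be sketched as follows. The character-frequency embedding, the threshold "clustering", and choosing the medoid as the representative vector are all illustrative assumptions in place of the trained model.

```python
# Sketch: embed transaction strings, form a cluster of nearby vectors,
# and take the cluster's medoid as the representative ("exemplar").
from collections import Counter
import math

def embed(s):
    # Toy embedding: normalized letter frequencies of the string.
    c = Counter(ch for ch in s.lower() if ch.isalpha())
    total = sum(c.values()) or 1
    return {k: v / total for k, v in c.items()}

def dist(a, b):
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

transactions = ["COFFEE SHOP 0123", "COFFEE SHOP 0456", "AIRLINE TICKET 99"]
vectors = [embed(t) for t in transactions]

# "Cluster": all vectors within a distance threshold of the first one.
cluster = [v for v in vectors if dist(v, vectors[0]) < 0.3]

# Representative vector = medoid (smallest total distance to the rest).
medoid = min(cluster, key=lambda v: sum(dist(v, u) for u in cluster))
exemplar = transactions[vectors.index(medoid)]
print(exemplar)
```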

CLUSTERING-BASED DEVIATION PATTERN RECOGNITION

In some implementations, a device may obtain first data associated with one or more accounts. The device may process the first data to obtain clustering information associated with the first data. The device may cluster the first data into one or more clusters based on the clustering information. The device may identify, via a second machine learning model, one or more deviation patterns associated with a portion of the first data that is included in a cluster of the one or more clusters. The device may determine, for the cluster, one or more operations to be performed to mitigate deviations based on the one or more deviation patterns. The device may perform an action, based on the one or more operations, associated with second data grouped into the cluster.
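The cluster-then-flag-deviations flow above can be sketched as follows. The one-dimensional data, the gap-based clustering, and the two-standard-deviation rule are illustrative assumptions standing in for the two machine learning stages.

```python
# Sketch: cluster account data, then flag points within each cluster
# that deviate strongly from the cluster mean as a deviation pattern.
import statistics

first_data = [10.0, 11.0, 10.5, 9.8, 50.0, 51.2, 49.5, 80.0]

# Step 1: crude clustering — start a new cluster at any gap >= 5.0.
clusters = []
for x in sorted(first_data):
    if clusters and x - clusters[-1][-1] < 5.0:
        clusters[-1].append(x)
    else:
        clusters.append([x])

# Step 2: within each cluster, flag deviations (> 2 std devs from mean).
deviations = {}
for i, c in enumerate(clusters):
    if len(c) < 2:
        deviations[i] = list(c)          # singleton clusters are suspect
        continue
    mu, sd = statistics.mean(c), statistics.stdev(c)
    deviations[i] = [x for x in c if sd and abs(x - mu) > 2 * sd]

print(clusters)
print(deviations)
```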

DATA DIFFERENCE EVALUATION VIA MODEL COMPARISON
20250117443 · 2025-04-10 ·

A computer-implemented method for performing data difference evaluation is provided. Aspects include obtaining a first data set and a second data set, creating a first plurality of feature vectors by inputting the first data set into each of a plurality of models, and creating a second plurality of feature vectors by inputting the second data set into each of the plurality of models. Aspects also include identifying a mapping between elements of the first plurality of feature vectors and elements of the second plurality of feature vectors created by a same model of the plurality of models, calculating, for each of the plurality of models based at least in part on the mapping, a model distance between the first data set and the second data set, and calculating, based at least in part on the model distances, an ensemble distance between the first data set and the second data set.
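The per-model and ensemble distance computation above can be sketched as follows. The random linear maps standing in for trained models, the index-based mapping between feature vectors, and averaging as the ensemble rule are illustrative assumptions.

```python
# Sketch: each model embeds both data sets; the per-model distance
# compares feature vectors paired by the mapping (same model, same
# index), and the ensemble distance averages the per-model distances.
import numpy as np

rng = np.random.default_rng(42)
models = [rng.standard_normal((2, 3)) for _ in range(3)]   # stand-in models

first_set = rng.standard_normal((5, 3))
second_set = first_set + 0.2                               # shifted copy

def model_distance(model, a, b):
    fa = a @ model.T          # feature vectors of the first data set
    fb = b @ model.T          # feature vectors of the second data set
    # Mapping: the i-th vector of fa pairs with the i-th vector of fb,
    # both created by the same model.
    return float(np.linalg.norm(fa - fb, axis=1).mean())

model_distances = [model_distance(m, first_set, second_set) for m in models]
ensemble_distance = sum(model_distances) / len(model_distances)
print(model_distances, ensemble_distance)
```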