G06F18/2325

UNSUPERVISED CLUSTERING FEATURE ENGINEERING
20230079513 · 2023-03-16

A method of generating an input for a machine learning algorithm may include collecting a plurality of data records. Each data record may include a plurality of categories of data. The method may include using vector quantization to partition the plurality of data records into a plurality of groupings. Each of the groupings may be based on one or more of the plurality of categories of data. The method may include generating a correlation score for each of the plurality of groupings. The correlation score may indicate whether a particular grouping is predictive of a given outcome.
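A minimal Python sketch of the grouping-and-scoring idea above. Equal-width binning of one numeric category stands in for full vector quantization, and the fraction-of-positive-outcomes scoring rule, the `age`/`outcome` field names, and the bin count are all illustrative assumptions, not taken from the patent:

```python
def quantize_records(records, key, n_bins=3):
    """Partition records into groupings by quantizing one category of data.

    A stand-in for vector quantization: equal-width binning of a single
    numeric category (a deliberate simplification).
    """
    values = [r[key] for r in records]
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    groups = {i: [] for i in range(n_bins)}
    for r in records:
        idx = min(int((r[key] - lo) / width), n_bins - 1)
        groups[idx].append(r)
    return groups

def correlation_score(group, outcome_key="outcome"):
    """Fraction of records in the grouping exhibiting the given outcome."""
    if not group:
        return 0.0
    return sum(1 for r in group if r[outcome_key]) / len(group)

records = [
    {"age": 25, "outcome": False},
    {"age": 30, "outcome": False},
    {"age": 55, "outcome": True},
    {"age": 60, "outcome": True},
    {"age": 62, "outcome": True},
    {"age": 28, "outcome": False},
]
groups = quantize_records(records, "age")
scores = {i: correlation_score(g) for i, g in groups.items()}
```

The scores can then be fed downstream as engineered features: a grouping whose score is near 0 or 1 is strongly indicative of the outcome.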

KINEMATIC INVARIANT-SPACE MAXIMUM ENTROPY TRACKER (KISMET)

A processor-implemented method for simultaneously tracking one or more objects includes receiving, via a dynamical system with a set of sensors, a first set of unlabeled measurements from one or more objects. Each of the measurements is a function of time. A set of candidate tracks is determined for the one or more objects. Probabilities of each of the first set of unlabeled measurements being assigned to each of the set of candidate tracks are computed. A track from the set of candidate tracks is determined for each of the one or more objects based on a joint probability distribution of track attributes and the probabilistic assignment of each of the first set of unlabeled measurements to each of the set of candidate tracks.
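A toy sketch of the probabilistic-assignment step described above, using scalar measurements and a maximum-entropy (softmax over negative squared distance) assignment rule; the `beta` sharpness parameter and the scalar geometry are illustrative assumptions rather than the patent's actual formulation:

```python
import math

def assignment_probabilities(measurements, tracks, beta=1.0):
    """Soft-assign each unlabeled measurement to each candidate track.

    Probabilities follow a maximum-entropy (softmax) rule over the negative
    squared distance between a measurement and each track's predicted
    position; beta controls how sharp the assignment is.
    """
    probs = []
    for z in measurements:
        weights = [math.exp(-beta * (z - t) ** 2) for t in tracks]
        total = sum(weights)
        probs.append([w / total for w in weights])
    return probs

def best_track(probs):
    """Per measurement, the candidate track with the highest assignment probability."""
    return [max(range(len(p)), key=lambda j: p[j]) for p in probs]

measurements = [0.1, 0.2, 5.1]   # unlabeled scalar positions over time
tracks = [0.0, 5.0]              # predicted positions of two candidate tracks
P = assignment_probabilities(measurements, tracks)
labels = best_track(P)
```

Because each row of `P` sums to one, the rows can be combined with a joint distribution over track attributes, as the abstract describes, rather than hard-assigned as done here for brevity.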

CODE-BASED PATTERN EXTRACTION AND APPLICATION IN A NAMED ENTITY RECOGNITION PIPELINE

Various systems and methods are presented regarding code-based pattern extraction (Code-PE) and the application of Code-PE to a named entity recognition pipeline. Patterns can be generated from named entities, wherein the entities have an assigned type. Codes are identified within the entities, which are subsequently vectorized and clustered based upon the presence of the identified codes. Patterns are identified for the respective clusters. The patterns can be applied to an untyped entity; in the event of a pattern match, the entity can be typed with the type assigned to the pattern. The typed entity can be used to recursively update knowledge regarding typed and untyped entities. In the event a pattern incorrectly types an entity, the pattern can be retrained with the updated knowledge.
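A compact sketch of the Code-PE flow above. Here a "code" is a character-class shape (letters become `A`, digits become `9`), clusters are formed by identical codes, and each cluster's majority type becomes its pattern's type; all of these concrete choices are illustrative assumptions standing in for the patented vectorize-and-cluster pipeline:

```python
import re
from collections import defaultdict

def code_of(entity):
    """Reduce an entity string to a code: digits -> 9, letters -> A.

    A hypothetical stand-in for the code-identification step.
    """
    return re.sub(r"[A-Za-z]", "A", re.sub(r"\d", "9", entity))

def extract_patterns(typed_entities):
    """Cluster typed entities by their code; each cluster's majority
    type becomes the type assigned to that pattern."""
    clusters = defaultdict(list)
    for entity, etype in typed_entities:
        clusters[code_of(entity)].append(etype)
    return {code: max(set(types), key=types.count)
            for code, types in clusters.items()}

def apply_patterns(patterns, entity):
    """Type an untyped entity if its code matches a known pattern,
    else return None."""
    return patterns.get(code_of(entity))

typed = [
    ("K1A 0B1", "postal_code"),
    ("M5V 3L9", "postal_code"),
    ("2024-06-13", "date"),
]
patterns = extract_patterns(typed)
```

Newly typed entities would then feed back into `typed` so the patterns are recursively refined, mirroring the retraining loop in the abstract.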

POST SYNDICATION THROUGH ARTIFICIAL INTELLIGENCE CROSS-POLLINATION
20240193232 · 2024-06-13

Systems, apparatuses and methods provide technology that identifies a first post that is submitted to a first group of a social network. The technology identifies that the first post is a cross-pollination candidate, identifies a second group of the social network, generates a first vector that is to represent one of the first post or the first group, generates a second vector that is to represent the second group, determines whether the second group matches a cross-pollination criterion based on a comparison of the first vector to the second vector, and determines whether to automatically generate a second post based on the first post and submit the second post to the second group based on whether the second group matches the cross-pollination criterion.
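A minimal sketch of the vector-comparison step described above, assuming cosine similarity as the comparison and a fixed threshold as the cross-pollination criterion; both choices, and the example embeddings, are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def matches_cross_pollination(post_vector, group_vector, threshold=0.8):
    """Cross-pollination criterion: the candidate group's vector must be
    sufficiently similar to the post's vector (threshold is a
    hypothetical tuning parameter)."""
    return cosine_similarity(post_vector, group_vector) >= threshold

post_vec = [0.9, 0.1, 0.0]   # e.g. topic embedding of the first post
group_a = [0.8, 0.2, 0.1]    # similar group: syndication candidate
group_b = [0.0, 0.1, 0.9]    # unrelated group
```

A matching second group would then trigger automatic generation of the second post; a non-matching one would not.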

REAL TIME IMPLEMENTATION OF RECURRENT NETWORK DETECTORS
20190065950 · 2019-02-28

Various examples related to real time detection with recurrent networks are presented. These can be utilized in automatic insect recognition to provide accurate and rapid in situ identification. In one example, among others, a method includes training parameters of a kernel adaptive autoregressive-moving average (KAARMA) using a signal of an input space. The signal can include source information in its time varying structure. A surrogate embodiment of the trained KAARMA can be determined based upon clustering or digitizing of the input space, binarization of the trained KAARMA state, and a transition table using the outputs of the trained KAARMA for each input in the training set. A recurrent network detector can then be implemented in processing circuitry (e.g., flip-flops, FPGA, ASIC, or dedicated VLSI) based upon the surrogate embodiment of the KAARMA. The recurrent network detector can be configured to identify a signal class.
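The surrogate embodiment described above amounts to a finite-state machine: quantized inputs index a transition table over binarized states, which is exactly what maps onto flip-flops or an FPGA. A toy sketch, with a hand-written table detecting the quantized input pattern 1, 0, 1 rather than one derived from a trained KAARMA:

```python
def run_detector(transitions, accepting, inputs, start=0):
    """Run a finite-state surrogate of a trained recurrent detector.

    transitions: dict (state, symbol) -> next state, i.e. the transition
    table built from the network's binarized states; accepting holds the
    states whose output flags the target signal class.
    """
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# Illustrative table (not from a trained model): accept on seeing 1, 0, 1
transitions = {
    (0, 0): 0, (0, 1): 1,
    (1, 0): 2, (1, 1): 1,
    (2, 0): 0, (2, 1): 3,
    (3, 0): 2, (3, 1): 1,
}
accepting = {3}
```

Because the whole detector reduces to a table lookup per sample, it runs in constant time per input, which is what enables the real time, in situ identification the abstract targets.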

Cognitive dynamic video summarization using cognitive analysis enriched feature set

Accurate and concise summarization of a media production is achieved using cognitive analysis which groups segments of the production into clusters based on extracted features, selects a representative segment for each cluster, and combines the representative segments to form a summary. The production is separated into a video stream, a speech stream and an audio stream, from which the cognitive analysis extracts visual features, textual features, and aural features. The clustering groups segments together whose visual and textual features most closely match. Selection of the representative segments derives a score for each segment based on factors including a distance to a centroid of the cluster, an emotion level, an audio uniqueness, and a video uniqueness. Each of these factors can be weighted, and the weights can be adjusted in accordance with user input. The factors can have initial weights which are based on statistical attributes of historical media productions.
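A small sketch of the segment-scoring step above: a weighted sum of negated centroid distance, emotion level, audio uniqueness, and video uniqueness, with the highest scorer chosen as the cluster's representative. The field names, equal weights, and the sign convention on distance are illustrative assumptions:

```python
def segment_score(segment, centroid, weights):
    """Weighted score: closeness to the cluster centroid (distance is
    negated so closer is better), emotion level, audio uniqueness,
    and video uniqueness."""
    distance = sum((a - b) ** 2
                   for a, b in zip(segment["features"], centroid)) ** 0.5
    return (weights["distance"] * -distance
            + weights["emotion"] * segment["emotion"]
            + weights["audio"] * segment["audio_uniqueness"]
            + weights["video"] * segment["video_uniqueness"])

def pick_representative(cluster, centroid, weights):
    """Select the highest-scoring segment of a cluster for the summary."""
    return max(cluster, key=lambda s: segment_score(s, centroid, weights))

cluster = [
    {"features": [1.0, 0.0], "emotion": 0.9,
     "audio_uniqueness": 0.5, "video_uniqueness": 0.5},
    {"features": [3.0, 4.0], "emotion": 0.1,
     "audio_uniqueness": 0.1, "video_uniqueness": 0.1},
]
weights = {"distance": 1.0, "emotion": 1.0, "audio": 1.0, "video": 1.0}
best = pick_representative(cluster, [0.0, 0.0], weights)
```

Adjusting the entries of `weights` corresponds to the user-driven tuning the abstract describes, with the initial values seeded from statistics of historical productions.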

Contextual Content Placement In Virtual Universes

Techniques for placing content in virtual universes at locations contextually compatible with the content are disclosed. A system trains a machine learning model to identify virtual environments compatible with content based on attributes representing contexts of the environments. Using the machine learning model, the system determines a compatibility score between a target content item and each candidate contextual environment. The system selects a particular contextual environment for placement of the target content item based on the compatibility score.
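A minimal sketch of the score-and-select step above, with Jaccard overlap between attribute tags standing in for the trained model's compatibility score; the tag sets, environment names, and scoring function are illustrative assumptions:

```python
def compatibility_score(content_tags, environment_tags):
    """Jaccard overlap between content and environment attribute tags,
    a hypothetical stand-in for the trained model's output."""
    c, e = set(content_tags), set(environment_tags)
    return len(c & e) / len(c | e) if c | e else 0.0

def place_content(content_tags, environments):
    """Pick the virtual environment most contextually compatible
    with the content."""
    return max(environments,
               key=lambda env: compatibility_score(content_tags, env["tags"]))

environments = [
    {"name": "race_track", "tags": ["cars", "speed", "sport"]},
    {"name": "library", "tags": ["books", "quiet", "study"]},
]
best = place_content(["cars", "racing"], environments)
```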