Patent classifications
G06F18/21355
Interaction-aware clustering of stable states
Analysis of genetic disease progression may be provided. Data about a set of molecular statuses may be received. A dynamic prediction model of molecular interactions over time may be provided. The molecular statuses of the set over time may be determined using the dynamic prediction model. The determined molecular statuses may be clustered by applying an interaction-aware metric for the analysis of the genetic disease progression.
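The abstract leaves the interaction-aware metric unspecified. As a hedged illustration only, the sketch below clusters simulated molecular-status vectors with a hypothetical Mahalanobis-style distance weighted by an assumed interaction matrix W; the data, W, and the metric form are assumptions, not the patented method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical molecular statuses: one row per determined status vector.
statuses = rng.normal(size=(30, 5))

# Assumed molecular interaction matrix W (symmetric positive semidefinite),
# standing in for whatever interaction information the metric would use.
A = rng.normal(size=(5, 5))
W = A @ A.T

def interaction_aware_distance(a, b):
    """Assumed metric: d(a, b) = sqrt((a - b)^T W (a - b)), so differences
    along strongly interacting directions count more."""
    d = a - b
    return float(np.sqrt(d @ W @ d))

# Pairwise distances under the interaction-aware metric, then hierarchical
# clustering of the determined statuses into a fixed number of clusters.
n = len(statuses)
dists = np.array([interaction_aware_distance(statuses[i], statuses[j])
                  for i in range(n) for j in range(i + 1, n)])
labels = fcluster(linkage(dists, method="average"), t=3, criterion="maxclust")
print(labels)
```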
Inter-cluster intensity variation correction and base calling
The technology disclosed corrects inter-cluster intensity profile variation for improved base calling on a cluster-by-cluster basis. The technology disclosed accesses current intensity data and historic intensity data of a target cluster, where the current intensity data is for a current sequencing cycle and the historic intensity data is for one or more preceding sequencing cycles. A first accumulated intensity correction parameter is determined by accumulating distribution intensities measured for the target cluster at the current and preceding sequencing cycles. A second accumulated intensity correction parameter is determined by accumulating intensity errors measured for the target cluster at the current and preceding sequencing cycles. Based on the first and second accumulated intensity correction parameters, next intensity data for a next sequencing cycle is corrected to generate corrected next intensity data, which is used to base call the target cluster at the next sequencing cycle.
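The abstract does not give the exact correction formula. As a rough sketch under stated assumptions, the code below keeps the two per-cluster running accumulators (measured intensities and intensity errors over the current and preceding cycles) and, as an assumed correction form, removes the average accumulated error from the next cycle's intensity; the decay factor, expected intensities, and the way the two parameters combine are all assumptions.

```python
import numpy as np

class ClusterIntensityCorrector:
    """Per-cluster running correction; a hypothetical sketch, not the patented math."""

    def __init__(self, decay=0.9):
        self.decay = decay         # weight given to historic sequencing cycles
        self.acc_intensity = 0.0   # first accumulator: measured intensities
        self.acc_error = 0.0       # second accumulator: intensity errors
        self.weight = 0.0          # accumulated weight for normalization

    def update(self, measured, expected):
        """Fold the current cycle's measured intensity and its error versus an
        expected (ideal) intensity into the two accumulators."""
        self.acc_intensity = self.decay * self.acc_intensity + measured
        self.acc_error = self.decay * self.acc_error + (measured - expected)
        self.weight = self.decay * self.weight + 1.0

    def correct(self, next_intensity):
        """Correct the next cycle's intensity by removing the average
        accumulated error (assumed form; the intensity accumulator is kept
        available for a scale-type correction in a fuller implementation)."""
        if self.weight == 0.0:
            return next_intensity
        return next_intensity - self.acc_error / self.weight

corrector = ClusterIntensityCorrector()
# Current and preceding cycles: (measured intensity, expected intensity).
for measured, expected in [(1.12, 1.0), (1.08, 1.0), (0.15, 0.0)]:
    corrector.update(measured, expected)
print(corrector.correct(1.10))  # corrected next-cycle intensity
```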
VISION-LiDAR FUSION METHOD AND SYSTEM BASED ON DEEP CANONICAL CORRELATION ANALYSIS
A vision-LiDAR fusion method and system based on deep canonical correlation analysis are provided. The method comprises: collecting RGB images and point cloud data of a road surface synchronously; extracting features of the RGB images to obtain RGB features; performing coordinate system conversion and then rasterization on the point cloud data, and extracting features to obtain point cloud features; inputting the point cloud features and the RGB features into a pre-established, trained fusion model simultaneously to output feature-enhanced fused point cloud features, wherein the fusion model fuses the RGB features into the point cloud features by using correlation analysis in combination with a deep neural network; and inputting the fused point cloud features into a pre-established object detection network to achieve object detection. A similarity calculation matrix is utilized to fuse the two different modal features.
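As a hedged sketch of the general idea (not the disclosed model), the code below projects hypothetical RGB and point-cloud features into a shared space with linear CCA standing in for the deep CCA branch, builds a similarity calculation matrix between the two modalities, and uses it to enhance the point-cloud features; the array names, dimensions, and the concatenation-based fusion are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Hypothetical per-element features: N point-cloud pillars and N image regions,
# assumed to be aligned one-to-one after projection and rasterization.
pc_feat = rng.normal(size=(200, 64))    # point cloud features
rgb_feat = rng.normal(size=(200, 128))  # RGB features

# Linear CCA stands in for the trained deep-CCA fusion model: both modalities
# are projected into a shared, maximally correlated space.
cca = CCA(n_components=8)
pc_c, rgb_c = cca.fit_transform(pc_feat, rgb_feat)

# Similarity calculation matrix between the two modalities in the shared
# space, row-normalized so each point attends over the RGB features.
sim = pc_c @ rgb_c.T
sim = np.exp(sim - sim.max(axis=1, keepdims=True))
sim = sim / sim.sum(axis=1, keepdims=True)

# Feature-enhanced fused point cloud features: original point-cloud features
# concatenated with similarity-weighted RGB information (assumed fusion form).
fused = np.concatenate([pc_feat, sim @ rgb_feat], axis=1)
print(fused.shape)  # (200, 192)
```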
Noise-driven coupled dynamic pattern recognition device for low power applications
A pattern recognition device comprising: a coupled network of damped, nonlinear, dynamic elements configured to generate an output response in response to at least one environmental condition, wherein each element has an associated multi-stable potential energy function that defines multiple energy states of an individual element, and wherein the elements are tuned such that environmental noise triggers stochastic resonance between energy levels of at least two elements; and a processor configured to monitor the output response over time, to determine a probability that the pattern recognition device is in a given state based on the monitored output response, and to detect a pattern in the at least one environmental condition based on the probability.
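A minimal, assumed illustration of the underlying dynamics: two coupled overdamped bistable elements with double-well potentials are driven by noise and a weak environmental signal, and the fraction of time spent in each joint sign state estimates the probability of the device being in a given state. The parameter values and the Langevin update are illustrative assumptions, not the claimed device.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two coupled, damped bistable elements x1, x2 with quartic double-well
# potentials U(x) = -a*x^2/2 + b*x^4/4 (multi-stable energy function).
a, b = 1.0, 1.0
coupling = 0.3                      # coupling strength between the elements
noise_sigma = 0.45                  # environmental noise intensity (assumed)
signal_amp, signal_w = 0.15, 0.05   # weak environmental signal (assumed)

dt, steps = 0.01, 100_000
x = np.array([1.0, -1.0])           # start each element in a different well
state_counts = np.zeros(4)          # joint sign states: (--, -+, +-, ++)

for t in range(steps):
    signal = signal_amp * np.sin(signal_w * t * dt)
    # Overdamped Langevin update: drift from the potential, coupling to the
    # other element, the weak signal, and noise that drives inter-well hopping.
    drift = a * x - b * x**3 + coupling * (x[::-1] - x) + signal
    x = x + drift * dt + noise_sigma * np.sqrt(dt) * rng.normal(size=2)
    state = int(x[0] > 0) * 2 + int(x[1] > 0)
    state_counts[state] += 1

# Probability of each joint state, estimated from the monitored response.
print(state_counts / steps)
```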
MULTI-SCALE DRIVING ENVIRONMENT PREDICTION WITH HIERARCHICAL SPATIAL TEMPORAL ATTENTION
In accordance with one embodiment of the present disclosure, a method includes obtaining multi-level environment data corresponding to a plurality of driving environment levels, encoding the multi-level environment data at each level, extracting features from the multi-level environment data at each encoded level, fusing the extracted features from each encoded level with a spatial-temporal attention framework to generate a fused information embedding, and decoding the fused information embedding to predict driving environment information at one or more driving environment levels.
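As an assumed sketch only, the code below fuses randomly initialized per-level feature sequences with temporal attention within each level and cross-level (spatial) attention across levels, then applies a linear decoding head; the projection matrices stand in for trained encoder and attention weights and are not the disclosed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical multi-level environment data: L levels (e.g. lane, road, map),
# each a sequence of T timesteps with d-dimensional encoded features.
L, T, d = 3, 8, 16
level_feats = rng.normal(size=(L, T, d))

# Random projections stand in for trained encoders and attention parameters.
Wq, Wk, Wv = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3)]

# Temporal attention within each level: each level summarizes its own history.
q = level_feats[:, -1:, :] @ Wq                  # query from the latest step
k, v = level_feats @ Wk, level_feats @ Wv
temporal = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d)) @ v   # (L, 1, d)
temporal = temporal[:, 0, :]                                     # (L, d)

# Spatial (cross-level) attention: fuse the per-level summaries into one
# fused information embedding.
q2, k2, v2 = temporal @ Wq, temporal @ Wk, temporal @ Wv
fused = (softmax(q2 @ k2.T / np.sqrt(d)) @ v2).mean(axis=0)      # (d,)

# A linear "decoder" maps the fused embedding to one predicted value per level
# (purely illustrative output head).
W_dec = rng.normal(size=(d, L)) / np.sqrt(d)
print(fused @ W_dec)
```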
AUTOMATED ANALYSIS OF CUSTOMER INTERACTION TEXT TO GENERATE CUSTOMER INTENT INFORMATION AND HIERARCHY OF CUSTOMER ISSUES
Methods and apparatuses are described for automated analysis of customer interaction text to generate customer intent information and a hierarchy of customer issues. A server captures computer text segments, each including a first portion comprising a transcript of an interaction and a second portion comprising notes about the interaction. The server generates interaction embeddings corresponding to the first portion of each computer text segment as input to a trained neural network. The server executes the neural network using the interaction embeddings to generate an interaction summary for each computer text segment. The server converts each interaction summary into a multidimensional vector and aggregates the multidimensional vectors into clusters based upon a similarity measure. The server aligns the clusters of vectors with attributes of the interaction summaries to generate a hierarchical mapping of customer issues.
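A minimal sketch of the downstream steps, assuming TF-IDF vectors stand in for the learned embeddings and neural summaries: interaction summaries are vectorized, clustered, and grouped into a simple cluster-to-issue mapping. The example texts, vectorizer, and two-cluster choice are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical interaction summaries (in the patent these come from a trained
# neural network run over transcript-plus-notes text segments).
summaries = [
    "customer cannot log in to online banking",
    "password reset link not received",
    "login page shows an error after update",
    "question about wire transfer fee",
    "fee charged twice for wire transfer",
    "dispute an unexpected account fee",
]

# Convert each summary to a multidimensional vector (TF-IDF stands in for the
# learned embedding) and aggregate the vectors into clusters by similarity.
vectors = TfidfVectorizer().fit_transform(summaries).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(vectors)

# Align clusters with the member summaries to form a simple issue grouping,
# a stand-in for the hierarchical mapping of customer issues.
hierarchy = {}
for label, text in zip(labels, summaries):
    hierarchy.setdefault(int(label), []).append(text)
for cluster_id, issues in hierarchy.items():
    print(f"issue cluster {cluster_id}: {issues}")
```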
MEASURING THE PERFORMANCE OF RADAR, ULTRASOUND OR AUDIO CLASSIFIERS
A method for measuring the performance of a classifier for radar, ultrasound or audio spectra. The classifier is configured to map a radar, ultrasound or audio spectrum to a set of classification scores with respect to classes of a given classification. The method includes: providing a set of test radar, ultrasound or audio spectra that form part of, and/or define, a common distribution or manifold; obtaining at least one evaluation spectrum that is a modification of at least one test spectrum with substantially the same semantic content as this at least one test spectrum, and/or that does not form part of the common distribution or manifold; mapping, using the classifier, the at least one evaluation spectrum to a set of evaluation classification scores; and determining the performance based on the set of evaluation classification scores, and/or on a further outcome produced by the classifier during the processing of the evaluation spectrum.
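As a hedged illustration of the measurement idea, the sketch below defines a toy spectrum classifier, builds evaluation spectra both as a small semantics-preserving perturbation and as an out-of-distribution modification of the test spectra, and compares the resulting classification performance; the classifier, data, and accuracy-based measure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier(spectrum):
    """Toy stand-in for a radar/ultrasound/audio classifier: maps a spectrum
    to softmax scores over two classes based on low- vs high-band energy."""
    logits = np.array([spectrum[:64].sum(), spectrum[64:].sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Test spectra drawn from a common distribution (class 0: low-band energy).
test_spectra = np.abs(rng.normal(size=(50, 128)))
test_spectra[:, :64] += 2.0
labels = np.zeros(50, dtype=int)

# Evaluation spectra: a semantics-preserving modification (small perturbation)
# and an out-of-distribution modification (band order reversed).
eval_same = test_spectra + 0.05 * np.abs(rng.normal(size=test_spectra.shape))
eval_ood = test_spectra[:, ::-1].copy()

def accuracy(spectra, labels):
    preds = np.array([classifier(s).argmax() for s in spectra])
    return (preds == labels).mean()

# Performance measure: how the classification scores/accuracy change between
# the original test spectra and the evaluation spectra.
print("test accuracy:     ", accuracy(test_spectra, labels))
print("perturbed accuracy:", accuracy(eval_same, labels))
print("OOD accuracy:      ", accuracy(eval_ood, labels))
```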
Systems and methods involving semantic determination of job titles
In one example, a computer-based system determines a relationship between a first job and a second job at one or more companies by using a title data store, a training module, and a prediction module, wherein the title data store accepts job-related information characterizing at least one job-related position that includes at least one of a title, a corporate entity, a job description, and job-related interest data. The training module accepts input data from the title data store, calculates or generates a set of coefficients and a set of job-related vectors from the input data, and stores the coefficients in a database. The prediction module may accept: a first set of data including at least one of a first title and first corporate designation data; a second set of data including at least one of a second title and second corporate designation data; and the coefficients from the training module. A similarity between the first set of data and the second set of data may then be calculated.
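A rough sketch under stated assumptions: a TF-IDF vectorizer fitted on title-data-store records plays the role of the stored coefficients, and the prediction step computes a cosine similarity between two job descriptions; the records and the choice of TF-IDF are illustrative only, not the patented training or prediction modules.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical title-data-store records (title, corporate entity, description).
records = [
    "senior software engineer acme backend services in java",
    "software developer acme builds backend services",
    "staff accountant globex prepares financial statements",
    "junior accountant globex assists with financial reporting",
]

# "Training": fit the vectorizer over the store; its learned vocabulary and
# IDF weights stand in for the stored coefficients and job-related vectors.
vectorizer = TfidfVectorizer().fit(records)

def similarity(first_job, second_job):
    """Prediction step: vectorize two jobs with the stored coefficients and
    return their cosine similarity."""
    a, b = vectorizer.transform([first_job, second_job]).toarray()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(similarity("software engineer acme java backend",
                 "software developer acme backend services"))
print(similarity("software engineer acme java backend",
                 "junior accountant globex financial reporting"))
```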
Name and face matching
Described are methods, systems, and computer-program product embodiments for selecting a face image based on a name. In some embodiments, a method includes receiving the name. Based on the name, a name vector is selected from a plurality of name vectors in a dataset that maps a plurality of names to a plurality of corresponding name vectors in a vector space, where each name vector includes representations associated with a plurality of words associated with each name. A plurality of face vectors corresponding to a plurality of face images is received. A face vector is selected from the plurality of face vectors based on a plurality of similarity scores calculated for the face vectors, where a similarity score is calculated for each face vector based on the selected name vector and that face vector. The face image is output based on the selected face vector.
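A minimal sketch of the matching flow, assuming the name vectors and face vectors already live in a shared vector space: the name vector for the received name is scored against every face vector by cosine similarity and the best-scoring face is selected. The vectors here are random placeholders, not learned representations.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical dataset mapping names to name vectors (in the patent these are
# built from word representations associated with each name).
name_vectors = {name: rng.normal(size=dim) for name in ["alice", "bob", "carol"]}

# Face vectors for a gallery of face images, assumed to be in (or projected
# into) the same vector space as the name vectors.
face_vectors = rng.normal(size=(5, dim))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_face(name):
    """Select the name vector for the received name, score every face vector
    against it, and return the index of the best-matching face image."""
    nv = name_vectors[name]
    scores = [cosine(nv, fv) for fv in face_vectors]
    return int(np.argmax(scores)), scores

best, scores = select_face("alice")
print(best, [round(s, 3) for s in scores])
```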
JOINT OBJECT AND OBJECT PART DETECTION USING WEB SUPERVISION
A method for generating object and part detectors includes accessing a collection of training images. The collection of training images includes images annotated with an object label and images annotated with a respective part label for each of a plurality of parts of the object. Joint appearance-geometric embeddings for regions of a set of the training images are generated. At least one detector for the object and its parts is learnt using annotations of the training images and respective joint appearance-geometric embeddings, e.g., using multi-instance learning for generating parameters of scoring functions which are used to identify high scoring regions for learning the object and its parts. The detectors may be output or used to label regions of a new image with object and part labels.
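As an assumed illustration of the multi-instance learning step, the sketch below treats each training image as a bag of joint appearance-geometric region embeddings, scores regions with a linear scoring function, and updates that function from image-level labels only via the highest-scoring region; the synthetic data and hinge-style update are assumptions, not the patented training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint appearance-geometric embeddings: per region, appearance features
# concatenated with geometric features (e.g. relative box position/size).
def make_bag(positive, n_regions=20, d_app=16, d_geo=4):
    app = rng.normal(size=(n_regions, d_app))
    geo = rng.uniform(size=(n_regions, d_geo))
    if positive:
        app[0, :4] += 3.0   # one region carries the object/part signal
    return np.concatenate([app, geo], axis=1)

bags = [make_bag(positive=(i % 2 == 0)) for i in range(40)]
labels = np.array([1.0 if i % 2 == 0 else -1.0 for i in range(40)])

# Multi-instance learning with a linear region scoring function: the image
# score is the max region score, supervised only by image-level labels.
w = np.zeros(20)
lr = 0.01
for _ in range(50):
    for bag, y in zip(bags, labels):
        scores = bag @ w
        i_max = int(np.argmax(scores))      # highest-scoring region
        if y * scores[i_max] < 1.0:         # hinge-style perceptron update
            w += lr * y * bag[i_max]

# After training, high-scoring regions in a new image indicate the object/part.
new_bag = make_bag(positive=True)
print(np.argmax(new_bag @ w), float(np.max(new_bag @ w)))
```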