G06F18/21355

INTER-CLUSTER INTENSITY VARIATION CORRECTION AND BASE CALLING

The technology disclosed corrects inter-cluster intensity profile variation for improved base calling on a cluster-by-cluster basis. The technology disclosed accesses current intensity data and historic intensity data of a target cluster, where the current intensity data is for a current sequencing cycle and the historic intensity data is for one or more preceding sequencing cycles. A first accumulated intensity correction parameter is determined by accumulating distribution intensities measured for the target cluster at the current and preceding sequencing cycles. A second accumulated intensity correction parameter is determined by accumulating intensity errors measured for the target cluster at the current and preceding sequencing cycles. Based on the first and second accumulated intensity correction parameters, next intensity data for a next sequencing cycle is corrected to generate corrected next intensity data, which is used to base call the target cluster at the next sequencing cycle.
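The accumulation-and-correction flow described above can be sketched in a few lines of Python. This is a minimal illustration, not the disclosed method: the running-sum accumulation and the mean-based scale/shift correction formula are assumptions, and all function names are hypothetical.

```python
def accumulate(values):
    # Running sums of per-cycle measurements for one target cluster,
    # one entry per sequencing cycle seen so far.
    total = 0.0
    sums = []
    for v in values:
        total += v
        sums.append(total)
    return sums

def correct_next_intensity(next_intensity, dist_sum, err_sum, n_cycles):
    # Correct next-cycle intensity using the accumulated distribution
    # intensity (a scale proxy) and accumulated intensity error (a shift
    # proxy); a toy formulation of the two correction parameters.
    scale = dist_sum / n_cycles   # mean distribution intensity
    shift = err_sum / n_cycles    # mean intensity error
    return (next_intensity - shift) / scale if scale else next_intensity
```

The corrected value would then feed the base caller for the next cycle in place of the raw intensity.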

METHOD FOR IDENTIFYING AN ITEM BY OLFACTORY SIGNATURE
20220137018 · 2022-05-05

A method, implemented by a computer processing circuit connected to an electronic nose, for identifying a given item by an olfactory signature. The method comprises: making use of the electronic nose to obtain an olfactory signature; repeating the use of the electronic nose a first number K of times in order to acquire K olfactory signatures; making use of the computer processing circuit to estimate, on the basis of the K olfactory signatures, a model of the olfactory signature of the given item; acquiring, with an electronic nose of the same type, a current measurement of the olfactory signature of a current item of the same type as the given item; and comparing the current measurement to the model in order to estimate a similarity (SIM) between the current item and the given item.
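One simple instantiation of this scheme treats the model as the mean of the K signatures and the similarity SIM as a cosine score. Both choices are illustrative assumptions; the abstract does not specify the model or the similarity measure.

```python
import math

def estimate_model(signatures):
    # Model of the item's olfactory signature: the element-wise mean
    # over K acquired signatures (one simple choice of model).
    k = len(signatures)
    dim = len(signatures[0])
    return [sum(s[i] for s in signatures) / k for i in range(dim)]

def similarity(measurement, model):
    # Cosine similarity between a current measurement and the model;
    # 1.0 means identical direction, 0.0 means orthogonal.
    dot = sum(a * b for a, b in zip(measurement, model))
    na = math.sqrt(sum(a * a for a in measurement))
    nb = math.sqrt(sum(b * b for b in model))
    return dot / (na * nb) if na and nb else 0.0
```

A current item would be accepted as matching the given item when the score exceeds a chosen threshold.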

TRAINING MORE SECURE NEURAL NETWORKS BY USING LOCAL LINEARITY REGULARIZATION

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes processing each training input using the neural network and in accordance with the current values of the network parameters to generate a network output for the training input; computing a respective loss for each of the training inputs by evaluating a loss function; identifying, from a plurality of possible perturbations, a maximally non-linear perturbation; and determining an update to the current values of the parameters of the neural network by performing an iteration of a neural network training procedure to decrease the respective losses for the training inputs and to decrease the non-linearity of the loss function for the identified maximally non-linear perturbation.
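The key step, identifying the perturbation at which the loss is least linear, can be illustrated with a toy one-dimensional loss. The loss, its gradient, and the candidate set below are stand-ins; the non-linearity measure (deviation from the first-order Taylor approximation) is an assumption consistent with the abstract's wording.

```python
def loss(x):
    return x ** 4          # toy non-linear loss

def grad(x):
    return 4 * x ** 3      # its analytic gradient

def nonlinearity(x, delta):
    # |L(x + d) - (L(x) + L'(x) * d)|: zero exactly when the loss is
    # locally linear around x in the direction d.
    return abs(loss(x + delta) - (loss(x) + grad(x) * delta))

def maximally_nonlinear(x, candidates):
    # Pick, from a finite set of possible perturbations, the one at
    # which the loss deviates most from its linear approximation.
    return max(candidates, key=lambda d: nonlinearity(x, d))
```

A training update would then penalize both the per-example losses and the non-linearity term at the selected perturbation.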

SYSTEMS AND METHODS FOR GENERATING PROCESSABLE DATA FOR MACHINE LEARNING APPLICATIONS
20230252337 · 2023-08-10

Systems and methods for converting distributed raw user data into processable data for data analysis, such as machine learning (ML) training or the like. In one embodiment, the method comprises generating, at a server, from a data schema comprising one or more data types, an instruction schema comprising, for each data type in said one or more data types, one or more instructions to be applied to the data type; for each device in a plurality of devices communicatively coupled to said server: sending, from the server, to the device, the instruction schema; receiving, at the device, the instruction schema; applying, at the device, each instruction in the instruction schema on locally stored raw user data, so as to generate an embedding of processable data; sending, from the device, to the server, the embedding; and receiving, at said server, the embedding from each device.
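The server-side schema generation and the on-device instruction application can be sketched as follows. The data types and instruction names ("normalize", "bucket") are invented for illustration; the abstract does not name concrete instructions.

```python
def build_instruction_schema(data_schema):
    # Server side: map each data type in the data schema to the list of
    # instructions to apply to that type (hypothetical rule table).
    rules = {"numeric": ["normalize"], "categorical": ["bucket"]}
    return {dtype: rules.get(dtype, []) for dtype in data_schema}

def apply_on_device(instruction_schema, raw):
    # Device side: apply every instruction to locally stored raw user
    # data, producing a flat embedding of processable values that is
    # sent back to the server instead of the raw data.
    embedding = []
    for dtype, value in raw:
        for instr in instruction_schema.get(dtype, []):
            if instr == "normalize":
                value = value / 100.0          # toy scaling rule
            elif instr == "bucket":
                value = float(hash(value) % 8)  # coarse hash bucket
        embedding.append(value)
    return embedding
```

Only the embedding leaves the device, which is the privacy-relevant property of the flow.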

Systems and Methods for Per-Cluster Intensity Correction and Base Calling

The technology disclosed generates variation correction coefficients on a cluster-by-cluster basis to correct inter-cluster intensity profile variation for improved base calling. An amplification coefficient corrects scale variation. Channel-specific offset coefficients correct shift variation along respective intensity channels. The variation correction coefficients for a target cluster are generated based on combining analysis of historic intensity data generated for the target cluster at preceding sequencing cycles of a sequencing run with analysis of current intensity data generated for the target cluster at a current sequencing cycle of the sequencing run. The variation correction coefficients are then used to correct next intensity data generated for the target cluster at a next sequencing cycle of the sequencing run. The corrected next intensity data is then used to base call the target cluster at the next sequencing cycle.
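A toy version of generating an amplification coefficient and an offset from historic plus current intensities is a least-squares fit of observed intensities against idealized ones, inverted when correcting the next cycle. The fitting procedure below is an assumption for illustration, not the disclosed coefficient-generation method.

```python
def fit_affine(observed, ideal):
    # Fit observed ~= a * ideal + b by least squares: a plays the role
    # of the amplification (scale) coefficient, b of a channel offset.
    n = len(observed)
    mo = sum(observed) / n
    mi = sum(ideal) / n
    cov = sum((o - mo) * (i - mi) for o, i in zip(observed, ideal))
    var = sum((i - mi) ** 2 for i in ideal)
    a = cov / var
    b = mo - a * mi
    return a, b

def correct(intensity, a, b):
    # Invert the fitted distortion for a next-cycle intensity.
    return (intensity - b) / a
```

Per the abstract, such coefficients would be refreshed each cycle as new intensity data accumulates for the cluster.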

METHOD AND SYSTEM OF RETRIEVING MULTIMODAL ASSETS

A system and method for retrieving one or more multimodal assets includes receiving a search query for searching for one or more multimodal assets from among a plurality of candidate multimodal assets, encoding the search query into one or more query embedding representations via a trained query representation machine-learning (ML) model, comparing, via a matching unit, the one or more query embedding representations to a plurality of multimodal tensor representations, each of the plurality of multimodal tensor representations being a representation of one of the plurality of candidate multimodal assets, and identifying, based on the comparison, at least one of the plurality of the candidate multimodal assets as a search result for the search query, and providing the at least one of the plurality of the candidate multimodal assets for display as the search result.
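The matching-unit comparison reduces, in the simplest case, to scoring the query embedding against each asset representation and returning the best match. The dot-product scoring below is an assumed similarity; the asset names are hypothetical.

```python
def dot(u, v):
    # Inner product between a query embedding and an asset representation.
    return sum(a * b for a, b in zip(u, v))

def retrieve(query_embedding, assets):
    # assets: list of (asset_id, representation) pairs; return the pair
    # whose representation best matches the query embedding.
    return max(assets, key=lambda item: dot(query_embedding, item[1]))
```

The identified asset (or a top-k list under the same scoring) would then be provided for display as the search result.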

NAME AND FACE MATCHING

Described are methods, systems, and computer-program product embodiments for selecting a face image based on a name. In some embodiments, a method includes receiving the name. Based on the name, a name vector is selected from a plurality of name vectors in a dataset that maps a plurality of names to a plurality of corresponding name vectors in a vector space, where each name vector includes representations associated with a plurality of words associated with each name. A plurality of face vectors corresponding to a plurality of face images is received. A face vector is selected from the plurality of face vectors based on a plurality of similarity scores, where a similarity score is calculated for each face vector based on the selected name vector and that face vector. The face image is output based on the selected face vector.
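The selection step, looking up the name vector and scoring every face vector against it, can be sketched directly. The dot-product similarity is an assumption; the abstract leaves the scoring function unspecified.

```python
def select_face(name, name_vectors, face_vectors):
    # Look up the received name's vector in the name-to-vector dataset,
    # then return the index of the face vector that scores highest
    # against it (dot product used as the similarity here).
    nv = name_vectors[name]
    scores = [sum(a * b for a, b in zip(nv, fv)) for fv in face_vectors]
    return scores.index(max(scores))
```

The returned index identifies the face image to output.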

Expression recognition method under natural scene

An expression recognition method under a natural scene comprises: converting an input video into a video frame sequence at a specified frame rate, and performing facial expression labeling on the video frame sequence to obtain a video frame labeled sequence; removing, from the video frame labeled sequence, the impact of natural light, non-face areas, and head posture on facial expression to obtain an expression video frame sequence; augmenting the expression video frame sequence to obtain a video preprocessed frame sequence; from the video preprocessed frame sequence, extracting HOG features that characterize facial appearance and shape, extracting second-order features that describe the degree of facial creasing, and extracting facial pixel-level deep neural network features by using a deep neural network; performing vector fusion on these three feature types to obtain facial feature fusion vectors for training; and inputting the facial feature fusion vectors into a support vector machine for expression classification.
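The last two steps, fusing the three feature vectors and scoring the fusion with a linear SVM, are straightforward to sketch. Concatenation is the usual reading of "vector fusion" but is an assumption here, and the weights/bias below are placeholders for a trained model.

```python
def fuse(hog, second_order, deep):
    # Vector fusion by concatenating the three per-frame feature
    # vectors (HOG, second-order creasing, deep features) into one.
    return list(hog) + list(second_order) + list(deep)

def svm_decision(w, b, x):
    # Linear SVM decision value w . x + b for a fused feature vector;
    # the sign (or a one-vs-rest argmax) gives the expression class.
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```

In practice the SVM would be trained on the labeled fusion vectors produced from the video preprocessed frame sequence.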

Machine-learned models for user interface prediction, generation, and interaction understanding

Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).

METHOD FOR MONITORING A NETWORK

A method for monitoring operation of a controller area network (CAN) comprising a plurality of nodes. The method comprises measuring a voltage associated with a CAN message transmitted on the network, determining a message signature in dependence on the measured voltage, and comparing the message signature with a node signature to determine the authenticity of the CAN message. One or more actions may be taken in dependence on the determined authenticity.
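A minimal sketch of the signature comparison: reduce the sampled bus voltages of a CAN message to a small feature tuple and check it against the expected node signature within a tolerance. The (mean, peak) feature choice and the tolerance test are assumptions for illustration.

```python
def message_signature(voltages):
    # Reduce sampled bus voltages for one CAN message to a simple
    # (mean, peak) signature; real systems would use richer features.
    return (sum(voltages) / len(voltages), max(voltages))

def is_authentic(msg_sig, node_sig, tol=0.1):
    # The message is deemed authentic when every signature component
    # lies within tolerance of the claimed node's stored signature.
    return all(abs(m - n) <= tol for m, n in zip(msg_sig, node_sig))
```

A mismatch would trigger one of the follow-up actions the method takes in dependence on the determined authenticity, such as flagging or discarding the message.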