Patent classifications
G06N3/0455
HANDS-ON ARTIFICIAL INTELLIGENCE EDUCATION SERVICE
Indications of sample machine learning models which create synthetic content items are provided via programmatic interfaces. A representation of a synthetic content item produced by one of the sample models in response to input obtained from a client of a provider network is presented. In response to a request from the client, a machine learning model is trained to produce additional synthetic content items.
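The abstract describes a client flow over programmatic interfaces: list sample generative models, obtain a synthetic item from one, then request training of a new model. The sketch below illustrates that flow only; every class and method name is invented, since the patent does not specify an API surface.

```python
# Hypothetical sketch of the described client flow. All names
# (EducationService, list_sample_models, generate, train_model) are
# illustrative inventions, not the patented interface.

class EducationService:
    def __init__(self):
        # One toy "sample model" that produces a synthetic content item.
        self._samples = {"text-gen-demo": (lambda prompt: f"synthetic: {prompt}")}
        self.trained = []

    def list_sample_models(self):
        # Indications of the sample models, provided via the interface.
        return list(self._samples)

    def generate(self, model_name, client_input):
        # Representation of a synthetic item produced from client input.
        return self._samples[model_name](client_input)

    def train_model(self, dataset_name):
        # Stand-in for launching a training job at the client's request.
        self.trained.append(dataset_name)
        return f"model-for-{dataset_name}"

svc = EducationService()
models = svc.list_sample_models()
item = svc.generate("text-gen-demo", "a haiku")
new_model = svc.train_model("client-dataset")
print(models, item, new_model)
```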
MULTI-TASK DEEP LEARNING-BASED REAL-TIME MATTING METHOD FOR NON-GREEN-SCREEN PORTRAITS
A multi-task deep learning-based real-time matting method for non-green-screen portraits is provided. The method includes: performing binary classification adjustment on an original dataset, inputting an image or video containing portrait information, and performing preprocessing; constructing a deep learning network for person detection, extracting image features by using a deep residual neural network, and obtaining a region of interest (ROI) of portrait foreground and a portrait trimap in the ROI through logistic regression; and constructing a portrait alpha mask matting deep learning network. An encoder sharing mechanism effectively accelerates a computing process of the network. An alpha mask prediction result of the portrait foreground is output in an end-to-end manner to implement portrait matting. In this method, green screens are not required during portrait matting. In addition, during the matting, only original images or videos need to be provided, without a need to provide manually annotated portrait trimaps.
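The encoder-sharing mechanism in this abstract means the expensive feature-extraction pass runs once per frame and feeds both task heads. A minimal numpy sketch of that structure, with illustrative layer sizes and a plain linear encoder standing in for the deep residual network:

```python
import numpy as np

# Minimal sketch of encoder sharing: the trimap head and the alpha-matte
# head both consume the same encoder features, so encoding runs only once.
# Layer shapes and the ReLU/sigmoid blocks are illustrative placeholders,
# not the patented architecture.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class SharedEncoderMatting:
    def __init__(self, in_dim=64, feat_dim=32):
        self.W_enc = rng.standard_normal((in_dim, feat_dim)) * 0.1
        self.W_trimap = rng.standard_normal((feat_dim, 3)) * 0.1  # fg / bg / unknown
        self.W_alpha = rng.standard_normal((feat_dim, 1)) * 0.1   # alpha in [0, 1]

    def forward(self, x):
        feats = relu(x @ self.W_enc)           # shared encoder: computed once
        trimap_logits = feats @ self.W_trimap  # head 1: trimap prediction
        alpha = 1.0 / (1.0 + np.exp(-(feats @ self.W_alpha)))  # head 2: alpha matte
        return trimap_logits, alpha

model = SharedEncoderMatting()
x = rng.standard_normal((5, 64))  # 5 toy "pixel" feature vectors
trimap_logits, alpha = model.forward(x)
print(trimap_logits.shape, alpha.shape)  # (5, 3) (5, 1)
```

Because `feats` is computed once and reused, the per-frame cost is roughly that of a single-task network plus two lightweight heads, which is the acceleration the abstract claims.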
INTENT RECOGNITION MODEL TRAINING AND INTENT RECOGNITION METHOD AND APPARATUS
The present disclosure provides intent recognition model training and intent recognition methods and apparatuses, and relates to the field of artificial intelligence technologies. The intent recognition model training method includes: acquiring training data including a plurality of training texts and first annotation intents of the plurality of training texts; constructing a neural network model including a feature extraction layer and a first recognition layer; and training the neural network model according to word segmentation results of the plurality of training texts and the first annotation intents of the plurality of training texts to obtain an intent recognition model. The method for intent recognition includes: acquiring a to-be-recognized text; and inputting word segmentation results of the to-be-recognized text to an intent recognition model, and obtaining a first intent result and a second intent result of the to-be-recognized text according to an output result of the intent recognition model.
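The training pipeline above (word segmentation results → feature extraction layer → first recognition layer, fit to first annotation intents) can be sketched with a bag-of-words featurizer and a softmax classifier. The toy vocabulary, intents, and gradient-descent loop are all assumptions for illustration:

```python
import numpy as np

# Toy sketch of the pipeline: word-segmented texts become bag-of-words
# features (the "feature extraction layer") and a softmax classifier
# (the "first recognition layer") is fit to the annotated intents.

texts = [["play", "music"], ["stop", "music"], ["play", "video"], ["stop", "video"]]
intents = [0, 1, 0, 1]  # illustrative first annotation intents

vocab = sorted({w for t in texts for w in t})
idx = {w: i for i, w in enumerate(vocab)}

def featurize(segmented):
    v = np.zeros(len(vocab))
    for w in segmented:
        if w in idx:
            v[idx[w]] += 1.0
    return v

X = np.stack([featurize(t) for t in texts])
y = np.array(intents)

W = np.zeros((len(vocab), 2))
for _ in range(200):  # plain batch gradient descent on cross-entropy
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (p - np.eye(2)[y]) / len(y)

pred = int((featurize(["play", "music"]) @ W).argmax())
print(pred)  # → 0
```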
METHOD FOR TRAINING RANKING LEARNING MODEL, RANKING METHOD, DEVICE AND MEDIUM
The technical solution relates to the field of artificial intelligence technologies, such as machine learning technologies, natural language processing technologies, or the like. A plurality of training samples are collected, each of which includes information of a known training target protein, information of two training drugs, and a real difference between the affinities of the two training drugs for the known training target protein. The ranking learning model is trained with the plurality of training samples, such that the ranking learning model learns a capability of predicting a magnitude relationship between the affinities of the two training drugs for the known training target protein in each of the plurality of training samples.
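Learning the magnitude relationship between two drugs' affinities from pairwise samples is the classic pairwise ranking setup. A RankNet-style sketch on synthetic data, where the linear scorer, learning rate, and synthetic affinity function are all illustrative assumptions:

```python
import numpy as np

# Pairwise ranking sketch: each sample scores two drugs against the same
# target, and a logistic pairwise loss pushes the score difference toward
# the sign of the real affinity difference. Features and the linear scorer
# are placeholders for whatever representation the model actually uses.

rng = np.random.default_rng(1)
n_feat = 8
true_w = rng.standard_normal(n_feat)  # hidden "true" affinity function

# Each sample: (features of drug A, features of drug B, affinity_A - affinity_B)
samples = []
for _ in range(200):
    a, b = rng.standard_normal(n_feat), rng.standard_normal(n_feat)
    samples.append((a, b, true_w @ a - true_w @ b))

w = np.zeros(n_feat)
for _ in range(50):  # RankNet-style stochastic gradient descent
    for a, b, diff in samples:
        label = 1.0 if diff > 0 else 0.0  # does drug A bind more strongly?
        p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))
        w += 0.1 * (label - p) * (a - b)

# The learned scorer should now order most training pairs correctly.
accuracy = sum(((w @ a > w @ b) == (d > 0)) for a, b, d in samples) / len(samples)
print(accuracy)
```

The point of the pairwise objective is that the model only ever needs to get the *ordering* of the two drugs right, which is exactly the capability the abstract describes.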
STORAGE MEDIUM, ESTIMATION METHOD, AND INFORMATION PROCESSING APPARATUS
A non-transitory computer-readable storage medium storing an estimation program that causes at least one computer to execute a process, the process including: inputting input data into a trained variational autoencoder that includes an encoder and a decoder; converting, into a first probability distribution, a probability distribution of a latent variable that is generated by the trained variational autoencoder according to the input data, based on a magnitude of a standard deviation output from the encoder; converting the first probability distribution into a second probability distribution based on an output error of the decoder regarding the input data; and outputting the second probability distribution as an estimated value of a probability distribution of the input data.
ANOMALY DETECTION ON DYNAMIC SENSOR DATA
Methods and systems for anomaly detection include determining whether a system is in a stable state or a dynamic state based on input data from one or more sensors in the system, using reconstruction errors from a respective stable model and dynamic model. It is determined that the input data represents anomalous operation of the system, responsive to a determination that the system is in a stable state, using the reconstruction errors. A corrective operation is performed on the system responsive to a determination that the input data represents anomalous operation of the system.
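The decision flow above — use the two models' reconstruction errors to pick the regime, then flag an anomaly only in the stable regime — can be sketched as follows. The identity "models" (each reconstructing a constant operating level) and the threshold are toy placeholders:

```python
import numpy as np

# Sketch of the decision flow: reconstruction errors from a "stable" model
# and a "dynamic" model decide which regime the system is in; in the stable
# regime, the stable model's error is thresholded to flag an anomaly.

def reconstruction_error(model, x):
    return float(np.mean((model(x) - x) ** 2))

stable_model = lambda x: np.full_like(x, 1.0)   # toy: trained on resting readings near 1.0
dynamic_model = lambda x: np.full_like(x, 5.0)  # toy: trained on transient readings near 5.0

def detect(x, anomaly_threshold=0.5):
    e_stable = reconstruction_error(stable_model, x)
    e_dynamic = reconstruction_error(dynamic_model, x)
    state = "stable" if e_stable <= e_dynamic else "dynamic"
    is_anomaly = (state == "stable") and (e_stable > anomaly_threshold)
    return state, is_anomaly

print(detect(np.array([1.1, 0.9])))  # normal resting data
print(detect(np.array([2.5, 2.6])))  # stable regime, but poorly reconstructed
print(detect(np.array([4.9, 5.1])))  # transient: dynamic regime, no anomaly flagged
```

Gating the anomaly decision on the stable regime is what keeps ordinary transients (which the dynamic model reconstructs well) from being reported as faults.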
Medical diagnostic tool with neural model trained through machine learning for predicting coronary disease from ECG signals
A diagnostic tool includes a sensor for capturing at least one biosignal produced by a patient's heart and a computer device that implements a deep neural network iteratively trained via machine learning to generate a prediction about a heart condition of the patient. After the neural network is trained, the computer device can convert the at least one biosignal into a multi-dimensional input matrix for the deep neural network, generated from a number (N) of biosignals captured by the sensor. The computer device then processes the multi-dimensional input matrix through the deep neural network, which outputs the prediction about the heart condition of the patient.
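The conversion step — N captured biosignals assembled into one multi-dimensional input matrix — can be sketched as below. Resampling each signal to a common length by linear interpolation is an assumption for illustration; the patent does not state how the signals are aligned:

```python
import numpy as np

# Sketch of the preprocessing step: N biosignals (e.g. N captured beats or
# leads) are resampled to a common length and stacked into the (N, L) input
# matrix the trained network consumes.

def to_input_matrix(biosignals, target_len=128):
    rows = []
    for sig in biosignals:
        sig = np.asarray(sig, dtype=float)
        old = np.linspace(0.0, 1.0, len(sig))
        new = np.linspace(0.0, 1.0, target_len)
        rows.append(np.interp(new, old, sig))  # resample to a fixed length
    return np.stack(rows)                      # shape: (N, target_len)

# Three toy "beats" of different raw lengths.
beats = [np.sin(np.linspace(0, 2 * np.pi, n)) for n in (90, 110, 100)]
X = to_input_matrix(beats)
print(X.shape)  # (3, 128)
```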
VIDEO CLIP POSITIONING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
This application discloses a video clip positioning method performed by a computer device. In this application, clip features of video clips in a video are determined according to the unit features of video units within the video clips, so that the acquired clip features integrate the features of the video units and the time sequence correlation between the video units; and then the clip features of the video clips and a text feature of a target text are fused. The features of video clip dimensions and the time sequence correlation between the video clips are fully used in the feature fusion process, so that more accurate attention weights can be acquired based on the fused features. The attention weights are used to represent matching degrees between the video clips and the target text, and then a target video clip matching the target text can be positioned more accurately.
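The fusion step above — clip features scored against the text feature, with a softmax producing attention weights read as matching degrees — can be sketched in a few lines. The dot-product scorer and the toy near-one-hot clip features are illustrative simplifications of the fused-feature attention the abstract describes:

```python
import numpy as np

# Toy version of the fusion step: each clip feature is scored against the
# text feature, a softmax turns scores into attention weights (matching
# degrees), and the argmax clip is the positioned target clip.

rng = np.random.default_rng(2)

def position_clip(clip_feats, text_feat):
    scores = clip_feats @ text_feat               # one matching score per clip
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # attention weights sum to 1
    return int(weights.argmax()), weights

# Toy clip features: near-orthogonal vectors, with the text feature built
# to match clip 4 so the expected result is unambiguous.
clip_feats = np.eye(6, 16) + 0.01 * rng.standard_normal((6, 16))
text_feat = clip_feats[4]

best, weights = position_clip(clip_feats, text_feat)
print(best)  # → 4
```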
DIALOGUE GENERATION METHOD AND NETWORK TRAINING METHOD AND APPARATUS, STORAGE MEDIUM, AND DEVICE
A dialogue generation method, a network training method and apparatus, a storage medium, and a device are provided. The method includes: predicting, based on a plurality of pieces of candidate knowledge text in a first candidate knowledge set, a preliminary dialogue response to a first dialogue preceding text; processing the first dialogue preceding text based on the preliminary dialogue response to obtain a first dialogue preceding text vector; obtaining a piece of target knowledge text based on a probability value of the piece of target knowledge text being selected for use in generating a final dialogue response, the probability value being obtained based on the first dialogue preceding text vector; and generating the final dialogue response based on the first dialogue preceding text and the piece of target knowledge text.
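The knowledge-selection step — a probability value per candidate derived from the dialogue preceding text vector, with the top candidate becoming the target knowledge text — can be sketched as below. The vectors and the dot-product scorer are illustrative; the abstract does not specify how the probabilities are computed:

```python
import numpy as np

# Sketch of knowledge selection: the context vector (already conditioned on
# the preliminary response) is scored against each candidate knowledge
# vector, scores become selection probabilities, and the highest-probability
# candidate is the target knowledge text for the final response.

def select_knowledge(context_vec, candidate_vecs):
    scores = candidate_vecs @ context_vec
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                  # one probability per candidate
    return int(probs.argmax()), probs

candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
context = np.array([0.1, 0.9])            # toy context leaning toward candidate 1
target, probs = select_knowledge(context, candidates)
print(target)  # → 1
```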