Patent classifications
G06F18/41
Artificial intelligence based method and apparatus for processing information
An artificial intelligence based method and apparatus for processing information. A specific embodiment of the method includes: acquiring search click information recorded within a predetermined time period; generating a candidate entry set by selecting, from the search click information, entries whose click volumes exceed a click volume threshold within a preset unit time period; forming, for each candidate entry in the candidate entry set, a click volume sequence according to the chronological order of the click volumes corresponding to the candidate entry in the predetermined time period; determining, based on the click volume sequences, a category for each candidate entry corresponding to a click volume sequence; and determining candidate entries whose category is a preset category as points of interest to generate a set of points of interest.
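The steps of this embodiment can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the log format, the threshold value, the preset category name, and the classifier are all assumptions.

```python
from collections import defaultdict

# Hypothetical constants; the patent does not specify concrete values.
CLICK_THRESHOLD = 100          # per-unit-time click volume threshold
POI_CATEGORY = "place"         # the preset category treated as a point of interest

def extract_points_of_interest(search_click_log, classify_sequence):
    """search_click_log: iterable of (timestamp, entry, clicks) records
    recorded within the predetermined time period.
    classify_sequence: any model mapping a click-volume sequence to a category."""
    # Step 1: group clicks by entry and select candidates whose click volume
    # exceeds the threshold in at least one unit time period.
    volumes = defaultdict(list)
    for timestamp, entry, clicks in search_click_log:
        volumes[entry].append((timestamp, clicks))
    candidates = {e: v for e, v in volumes.items()
                  if any(c > CLICK_THRESHOLD for _, c in v)}
    # Step 2: form each candidate's click volume sequence in chronological order.
    sequences = {e: [c for _, c in sorted(v)] for e, v in candidates.items()}
    # Step 3: classify each sequence and keep entries of the preset category.
    return {e for e, seq in sequences.items()
            if classify_sequence(seq) == POI_CATEGORY}
```

In practice `classify_sequence` would be a trained sequence classifier; any callable with that signature fits the sketch.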
Gesture recognition using multiple antenna
Various embodiments wirelessly detect micro-gestures using multiple antennas of a gesture sensor device. At times, the gesture sensor device transmits multiple outgoing radio frequency (RF) signals, each outgoing RF signal transmitted via a respective antenna of the gesture sensor device. The outgoing RF signals are configured to help capture information that can be used to identify micro-gestures performed by a hand. The gesture sensor device captures incoming RF signals generated by the outgoing RF signals reflecting off the hand, and then analyzes the incoming RF signals to identify the micro-gestures.
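The final classification step can be sketched as template matching over per-antenna measurements. This is a deliberately simplified stand-in: real micro-gesture sensing operates on raw RF waveforms, and the gesture names, template values, and nearest-centroid rule here are illustrative assumptions.

```python
import math

# Hypothetical templates: one reflected-amplitude value per receive antenna.
GESTURE_TEMPLATES = {
    "swipe": [0.9, 0.4, 0.1],
    "tap":   [0.5, 0.5, 0.5],
}

def identify_micro_gesture(incoming_signals):
    """incoming_signals: one reflected-amplitude reading per antenna.
    Returns the gesture whose template is closest in Euclidean distance."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(GESTURE_TEMPLATES,
               key=lambda g: distance(GESTURE_TEMPLATES[g], incoming_signals))
```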
Re-training a model for abnormality detection in medical scans based on a re-contrasted training set
A method includes generating first contrast significance data for a first computer vision model generated from a first training set of medical scans. First significant contrast parameters are identified based on the first contrast significance data. A first re-contrasted training set is generated based on performing a first intensity transformation function on the first training set of medical scans, where the first intensity transformation function utilizes the first significant contrast parameters. A first re-trained model is generated from the first re-contrasted training set, whose scans are associated with corresponding output labels based on abnormality data for the first training set of medical scans. Re-contrasted image data of a new medical scan is generated by applying the first intensity transformation function to the new medical scan. Inference data indicating at least one abnormality detected in the new medical scan is generated by applying the first re-trained model to the re-contrasted image data.
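A minimal sketch of the re-contrasting step, under assumptions the abstract leaves open: the "significant contrast parameters" are modeled as an intensity window (lower, upper), and the intensity transformation function as window-and-rescale, applied identically to the training set and to new scans.

```python
import numpy as np

def intensity_transform(scan, lower, upper):
    """Clip scan intensities to [lower, upper] and rescale to [0, 1]."""
    clipped = np.clip(scan.astype(float), lower, upper)
    return (clipped - lower) / (upper - lower)

def recontrast_training_set(scans, lower, upper):
    """Apply the same intensity transformation to every training scan."""
    return [intensity_transform(s, lower, upper) for s in scans]
```

A re-trained model would then be fit on `recontrast_training_set(...)` paired with the original abnormality labels, and each new scan would pass through `intensity_transform` before inference.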
Method and system for generating and correcting classification models
Data having some similarities and some dissimilarities may be clustered or grouped according to those similarities and dissimilarities. The data may be clustered using agglomerative clustering techniques. The clusters may be used as suggestions for generating groups, where a user may demonstrate certain criteria for grouping. The system may learn from the criteria and extrapolate the groupings to readily sort data into appropriate groups. The system may be easily refined as the user gains an understanding of the data.
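Agglomerative clustering starts from singleton clusters and repeatedly merges the closest pair. The sketch below uses single linkage on one-dimensional data for brevity; the abstract does not fix a linkage criterion or distance metric, so both are assumptions here.

```python
def agglomerative_cluster(points, num_clusters):
    """Merge the closest pair of clusters until num_clusters remain."""
    clusters = [[p] for p in points]
    def single_link(a, b):
        # single linkage: distance between closest members (1-D for brevity)
        return min(abs(x - y) for x in a for y in b)
    while len(clusters) > num_clusters:
        # find the pair of clusters with the smallest inter-cluster distance
        i, j = min(((i, j) for i in range(len(clusters))
                            for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)    # j > i, so pop does not shift i
    return clusters
```

The resulting clusters would serve as the "suggestions" the abstract describes, which a user then refines by demonstrating grouping criteria.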
Information processing apparatus and non-transitory computer readable medium storing program
An information processing apparatus includes a processor configured to acquire a first recognition result and a first recognition probability on target data from a first recognizer, acquire a second recognition result and a second recognition probability on the target data from a second recognizer, execute checking of the first recognition result and the second recognition result, and execute first control in a case where the first recognition result and the second recognition result match each other as a result of the checking. The first control is control for executing either first processing or second processing on the matched recognition result and outputting a processing result based on at least one of the first recognition probability or the second recognition probability. A human workload for the first processing is smaller than a human workload for the second processing.
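The matching check and the choice between the two kinds of processing can be sketched as follows. The probability threshold and the rule for choosing the cheaper processing are illustrative assumptions, not the patent's actual criterion.

```python
CONFIDENCE_THRESHOLD = 0.9   # hypothetical cut-off for routing decisions

def first_control(first, second):
    """first/second: (recognition_result, recognition_probability) tuples
    from the two recognizers on the same target data."""
    result_1, prob_1 = first
    result_2, prob_2 = second
    if result_1 != result_2:
        return None                      # no match: first control does not apply
    # Route to the first processing (smaller human workload) when at least one
    # recognizer is confident; otherwise fall back to the second processing.
    if max(prob_1, prob_2) >= CONFIDENCE_THRESHOLD:
        return ("first_processing", result_1)
    return ("second_processing", result_1)
```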
EMPATHIC ARTIFICIAL INTELLIGENCE SYSTEMS
Embodiments of the present disclosure provide systems and methods for training a machine-learning model for predicting emotions from received media data. Methods according to the present disclosure include displaying a user interface. The user interface includes a predefined media content, a plurality of predefined emotion tags, and a user interface control for controlling a recording of the user imitating the predefined media content. Methods can further include receiving, from a user, a selection of one or more emotion tags from the plurality of predefined emotion tags, receiving the recording of the user imitating the predefined media content, storing the recording in association with the selected one or more emotion tags, and training, based on the recording, the machine-learning model configured to receive input media data and predict an emotion based on the input media data.
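Setting the user interface aside, the data-collection and training steps reduce to storing each recording with its selected tags and fitting a model on the result. The sketch below substitutes a trivial majority-vote baseline for the real machine-learning model; the record layout and function names are assumptions.

```python
from collections import Counter

def store_recording(dataset, recording, emotion_tags):
    """Store the user's imitation recording with its selected emotion tags."""
    dataset.append({"recording": recording, "tags": list(emotion_tags)})

def train_majority_model(dataset):
    """Stand-in 'training': return a predictor that outputs the most
    frequently tagged emotion regardless of the input media data."""
    counts = Counter(tag for item in dataset for tag in item["tags"])
    most_common = counts.most_common(1)[0][0]
    return lambda media_data: most_common
```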
Embedding human labeler influences in machine learning interfaces in computing environments
A mechanism is described for facilitating embedding of human labeler influences in machine learning interfaces in computing environments, according to one embodiment. A method of embodiments, as described herein, includes detecting sensor data via one or more sensors of a computing device, and accessing human labeler data at one or more databases coupled to the computing device. The method may further include evaluating relevance between the sensor data and the human labeler data, where the relevance identifies meaning of the sensor data based on human behavior corresponding to the human labeler data, and associating, based on the relevance, human labeler data with the sensor data to classify the sensor data as labeled data. The method may further include training, based on the labeled data, a machine learning model to extract human influences from the labeled data, and embed one or more of the human influences in one or more environments representing one or more physical scenarios involving one or more humans.
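The relevance-evaluation and labeling step can be sketched with a crude proxy: here "relevance" is modeled as keyword overlap between the sensor reading's context and each human labeler record, and the best-overlapping label is attached. Every name and the overlap metric are illustrative assumptions, not the patent's implementation.

```python
def label_sensor_data(sensor_record, labeler_database):
    """sensor_record: dict with a 'context' set of keywords.
    labeler_database: list of dicts with 'keywords' and 'label' fields."""
    def relevance(entry):
        # overlap between sensor context and human labeler keywords
        return len(sensor_record["context"] & entry["keywords"])
    best = max(labeler_database, key=relevance)
    if relevance(best) == 0:
        return None                       # nothing relevant: leave unlabeled
    return {**sensor_record, "label": best["label"]}
```

The labeled records produced this way would form the training set from which a model learns the human influences the abstract describes.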
Fine-motion virtual-reality or augmented-reality control using radar
This document describes techniques for fine-motion virtual-reality or augmented-reality control using radar. These techniques enable small motions and displacements to be tracked, even in the millimeter or sub-millimeter scale, for user control actions even when those actions are small, fast, or obscured due to darkness or varying light. Further, these techniques enable fine resolution and real-time control, unlike conventional RF-tracking or optical-tracking techniques.
ROAD SIGN CONTENT PREDICTION AND SEARCH IN SMART DATA MANAGEMENT FOR TRAINING MACHINE LEARNING MODEL
Systems and methods for machine-learning assisted road sign content prediction and machine learning training are disclosed. A sign detector model processes images or video with road signs. A visual attribute prediction model extracts visual attributes of the sign in the image. The visual attribute prediction model can communicate with a knowledge graph reasoner, which validates the visual attribute prediction model by applying various rules to its output. A plurality of potential sign candidates are retrieved that match the visual attributes predicted for the image, and the rules help to reduce the list of potential sign candidates and improve accuracy of the model.
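The candidate-retrieval step can be sketched as attribute matching plus rule-based pruning. The sign database, attribute names, and the sample rule below are illustrative assumptions standing in for the knowledge graph reasoner.

```python
SIGN_DATABASE = [
    {"name": "stop",        "shape": "octagon",   "color": "red"},
    {"name": "yield",       "shape": "triangle",  "color": "red"},
    {"name": "speed_limit", "shape": "rectangle", "color": "white"},
]

RULES = [
    # sample knowledge-graph-style rule: octagonal signs are always red
    lambda attrs: not (attrs.get("shape") == "octagon"
                       and attrs.get("color") != "red"),
]

def candidate_signs(predicted_attrs):
    """Validate the predicted visual attributes against the rules, then
    retrieve sign candidates matching every predicted attribute."""
    if not all(rule(predicted_attrs) for rule in RULES):
        return []                         # prediction violates a rule: reject it
    return [s["name"] for s in SIGN_DATABASE
            if all(s[k] == v for k, v in predicted_attrs.items())]
```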
SYSTEM AND METHOD FOR CROWDSOURCING A VIDEO SUMMARY FOR CREATING AN ENHANCED VIDEO SUMMARY
System and method for crowdsourcing a video summary for creating an enhanced video summary are disclosed. The method includes receiving videos, analyzing the videos, creating the video summary of the videos using a building block model, storing the video summary in a video library database, crowdsourcing the video summary to at least one of a plurality of users, enabling the at least one of the plurality of users to review the video summary and identify at least one new characteristic, enabling the at least one of the plurality of users to share the at least one new characteristic on the platform, comparing at least one existing characteristic of the building block model with the corresponding new characteristic, reconciling the video summary along with the at least one inserted new characteristic, creating a new building block model, and editing the video summary for creating the enhanced video summary.
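The comparison-and-reconciliation step can be sketched by reducing the building block model to a dict of characteristics and merging user-contributed ones into it. The data layout and function name are illustrative assumptions.

```python
def reconcile(building_block_model, user_characteristics):
    """Compare existing characteristics with crowdsourced ones; return the
    new building block model plus the characteristics that changed."""
    new_model = dict(building_block_model)
    changes = {}
    for key, value in user_characteristics.items():
        if building_block_model.get(key) != value:
            changes[key] = value          # new or corrected characteristic
            new_model[key] = value
    return new_model, changes
```

The returned `changes` would drive the editing of the video summary into the enhanced summary, while `new_model` replaces the old building block model.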