Patent classifications
G06F18/41
Empathic artificial intelligence systems
Embodiments of the present disclosure provide systems and methods for training a machine-learning model for predicting emotions from received media data. Methods according to the present disclosure include displaying a user interface. The user interface includes a predefined media content, a plurality of predefined emotion tags, and a user interface control for controlling a recording of the user imitating the predefined media content. Methods can further include receiving, from a user, a selection of one or more emotion tags from the plurality of predefined emotion tags, receiving the recording of the user imitating the predefined media content, storing the recording in association with the selected one or more emotion tags, and training, based on the recording, the machine-learning model configured to receive input media data and predict an emotion based on the input media data.
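The labeling step described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the tag set, `LabeledRecording` type, and `store_recording` helper are hypothetical names invented here, and the predefined emotion tags are assumed for the example.

```python
from dataclasses import dataclass

# Assumed tag vocabulary for illustration; the patent leaves the
# concrete set of predefined emotion tags unspecified.
PREDEFINED_EMOTION_TAGS = {"joy", "anger", "sadness", "surprise"}

@dataclass
class LabeledRecording:
    recording: bytes    # raw media data of the user imitating the content
    emotion_tags: list  # the user's selection from the predefined tags

def store_recording(dataset, recording, selected_tags):
    """Validate the user's tag selection and append a training sample,
    storing the recording in association with the selected tags."""
    invalid = set(selected_tags) - PREDEFINED_EMOTION_TAGS
    if invalid:
        raise ValueError(f"unknown emotion tags: {invalid}")
    dataset.append(LabeledRecording(recording, list(selected_tags)))
    return dataset
```

The accumulated dataset would then feed the training of the model that maps input media data to a predicted emotion.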
Misuse index for explainable artificial intelligence in computing environments
A mechanism is described for facilitating a misuse index for explainable artificial intelligence in computing environments, according to one embodiment. A method of embodiments, as described herein, includes mapping training data to inference uses in a machine learning environment, where the training data is used for training a machine learning model. The method may further include detecting, based on one or more policy/parameter thresholds, one or more discrepancies between the training data and the inference uses, classifying the one or more discrepancies as one or more misuses, and creating a misuse index listing the one or more misuses.
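The detect-classify-index flow above can be sketched as a comparison of per-feature statistics against policy thresholds. This is an assumption-laden toy: the patent does not specify the statistics or the drift metric, and `build_misuse_index` is a hypothetical name.

```python
def build_misuse_index(training_stats, inference_stats, thresholds):
    """Compare per-feature statistics of the training data with those
    observed at inference time; any feature whose drift exceeds its
    policy threshold is classified as a misuse and added to the index."""
    index = []
    for feature, trained_value in training_stats.items():
        observed = inference_stats.get(feature)
        if observed is None:
            continue  # feature never seen at inference; nothing to compare
        drift = abs(observed - trained_value)
        if drift > thresholds.get(feature, float("inf")):
            index.append({"feature": feature,
                          "drift": drift,
                          "classification": "misuse"})
    return index
```

A discrepancy only enters the index when it crosses its policy/parameter threshold, mirroring the thresholded detection step in the abstract.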
Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review
Systems and techniques are disclosed for improvement of machine learning systems based on enhanced training data. An example method includes providing a visual concurrent display of a set of images of features, the features requiring classification by a reviewing user. The user interface is provided to enable the reviewing user to assign classifications to the images, the user interface being configured to create, read, update, and/or delete classifications. The user interface is responsive to the user, with the user response indicating at least two images with a single classification. The user interface is updated to represent the single classification.
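The interface behavior described above (create/read/update/delete of classifications, with a single user response covering two or more images) can be sketched as a small backing store. The `ClassificationStore` class is a hypothetical name for illustration only.

```python
class ClassificationStore:
    """Hypothetical backing store for the review interface: supports
    create, read, update, and delete of per-image classifications."""

    def __init__(self):
        self._labels = {}

    def create(self, image_ids, classification):
        # A single user response may indicate at least two images
        # sharing one classification.
        for image_id in image_ids:
            self._labels[image_id] = classification

    def read(self, image_id):
        return self._labels.get(image_id)

    def update(self, image_id, classification):
        self._labels[image_id] = classification

    def delete(self, image_id):
        self._labels.pop(image_id, None)
```

The concurrent display would then be refreshed from this store so that every selected image represents the single assigned classification.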
Generating a robot control policy from demonstrations collected via kinesthetic teaching of a robot
Techniques are described herein for generating a dynamical systems control policy. A non-parametric family of smooth maps is defined on which vector-field learning problems can be formulated and solved using convex optimization. In some implementations, techniques described herein address the problem of generating contracting vector fields for certifying stability of the dynamical systems arising in robotics applications, e.g., designing stable movement primitives. These learning problems may utilize a set of demonstration trajectories, one or more desired equilibria (e.g., a target point), and one or more statistics including at least an average velocity and average duration of the set of demonstration trajectories. The learned contracting vector fields may induce a contraction tube around a targeted trajectory for an end effector of the robot. In some implementations, the disclosed framework may use curl-free vector-valued Reproducing Kernel Hilbert Spaces.
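The core learning problem (fit a vector field to demonstration positions and velocities) can be illustrated with a much-simplified stand-in: plain Gaussian-kernel ridge regression, omitting the curl-free RKHS structure, contraction constraints, and convex-optimization certification that the abstract actually claims. `fit_vector_field` is a hypothetical name.

```python
import numpy as np

def fit_vector_field(positions, velocities, bandwidth=1.0, reg=1e-3):
    """Kernel ridge regression from demonstration positions (n, d)
    to demonstration velocities (n, d). Returns a callable that
    predicts a velocity vector at a query point."""
    diffs = positions[:, None, :] - positions[None, :, :]
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2 * bandwidth ** 2))
    alpha = np.linalg.solve(K + reg * np.eye(len(positions)), velocities)

    def predict(x):
        d = positions - x
        k = np.exp(-np.sum(d ** 2, axis=-1) / (2 * bandwidth ** 2))
        return k @ alpha

    return predict
```

In the patented framework the hypothesis space would instead be a curl-free vector-valued RKHS with contraction conditions imposed, so that the learned field provably converges to the desired equilibria.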
User-in-the-loop object detection and classification systems and methods
A detection device is adapted to traverse a search area and generate sensor data associated with an object that may be present in the search area, the detection device comprising a first logic device configured to detect and classify the object in the sensor data, communicate object detection information to a control system when the detection device is within a range of communications of the control system, and generate and store object analysis information for a user of the control system when the detection device is not in communication with the control system. A control system facilitates user monitoring and/or control of the detection device during operation and provides access to the stored object analysis information. The object analysis information is provided in an interactive display to facilitate detection and classification of the detected object by the user, updating the detection information, the trained object classifier, and the training dataset.
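The two operating modes of the first logic device (report immediately when the control system is in range, buffer analysis information otherwise) can be sketched as below. `DetectionDevice` and its methods are hypothetical names for illustration.

```python
class DetectionDevice:
    """Sketch of the two-mode behavior: detections are communicated to
    the control system when in comms range, and stored as object
    analysis information when out of range."""

    def __init__(self):
        self.stored_analysis = []  # buffered while out of comms range
        self.transmitted = []      # delivered to the control system

    def handle_detection(self, detection, in_comms_range):
        if in_comms_range:
            self.transmitted.append(detection)
        else:
            self.stored_analysis.append(detection)

    def sync(self):
        """Deliver buffered analysis once communication is restored,
        making it available for user review in the interactive display."""
        delivered, self.stored_analysis = self.stored_analysis, []
        self.transmitted.extend(delivered)
        return delivered
```

The user's review of the delivered analysis would then feed updates back into the detection information, the classifier, and the training dataset.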
Information processing apparatus, information processing method, and program
Provided are an information processing apparatus, an information processing method, and a program capable of accumulating appropriate relearning data. An information processing apparatus includes an input unit that inputs input data to a learned model acquired in advance through machine learning using learning data, an acquisition unit that acquires output data output from the learned model through the input using the input unit, a reception unit that receives correction performed by a user for the output data acquired by the acquisition unit, and a storage controller that performs control for storing, as relearning data of the learned model, the input data and the output data that reflects the correction received by the reception unit in a storage unit in a case where a value indicating a correction amount acquired by performing the correction for the output data is equal to or greater than a threshold value.
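The storage-control rule above (keep a relearning sample only when the user's correction is at least the threshold) can be sketched directly. The absolute-difference correction metric is an assumption for illustration; the abstract leaves the "value indicating a correction amount" unspecified, and `maybe_store_relearning_sample` is a hypothetical name.

```python
def maybe_store_relearning_sample(store, input_data, output, corrected,
                                  threshold):
    """Store (input, corrected output) as relearning data only when the
    user's correction amount is equal to or greater than the threshold."""
    correction_amount = abs(corrected - output)
    if correction_amount >= threshold:
        store.append((input_data, corrected))
        return True
    return False
```

Small touch-up corrections are thus discarded, so the accumulated relearning data reflects only cases where the learned model was meaningfully wrong.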
Enhanced supervised form understanding
Interfaces and systems are provided for harvesting ground truth from forms to be used in training models based on key-value pairings in the forms and to later use the trained models to identify related key-value pairings in new forms. Initially, forms are identified and clustered to identify a subset of forms to label with the key-value pairings. Users provide input to identify keys to use in labeling and then select/highlight text from forms that are presented concurrently with the keys in order to associate the highlighted text with the key(s) as the corresponding key-value pairing(s). After labeling the forms with the key-value pairings, the key-value pairing data is used as ground truth for training a model to independently identify the key-value pairing(s) in new forms. Once trained, the model is used to identify the key-value pairing(s) in new forms.
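The labeling interaction above (a user selects highlighted text from a form and associates it with a key) can be sketched as accumulating key-value ground truth per form. `label_form` is a hypothetical helper name invented for this sketch.

```python
def label_form(ground_truth, form_id, key, highlighted_text):
    """Associate user-highlighted text with a key as a key-value
    pairing; the accumulated pairings serve as ground truth for
    training a model to identify the pairings in new forms."""
    ground_truth.setdefault(form_id, {})[key] = highlighted_text
    return ground_truth
```

Clustering would first narrow the corpus to a representative subset of forms, so only a fraction of forms need this manual labeling before training.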
Machine learning system and method for determining or inferring user action and intent based on screen image analysis
System(s) and method(s) that analyze image data associated with a computing screen operated by a user, and learn from the image data (e.g., using pattern recognition, historical information analysis, user implicit and explicit training data, optical character recognition (OCR), video information, 360°/panoramic recordings, and so on) to concurrently glean information regarding multiple states of user interaction (e.g., analyzing data associated with multiple applications open on a desktop, mobile phone, or tablet). A machine learning model is trained on analysis of graphical image data associated with the screen display to determine or infer user intent. An input component receives image data regarding a screen display associated with user interaction with a computing device. An analysis component employs the model to determine or infer user intent based on the image data analysis, and an action component provisions services to the user as a function of the determined or inferred user intent. In an implementation, a gaming component gamifies interaction with the user in connection with explicitly training the model.
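The analysis component's role (screen image → OCR'd text → inferred intent) can be illustrated with a toy stand-in: the trained model is replaced here by simple keyword overlap purely for illustration, and `infer_intent` and the intent vocabulary are hypothetical.

```python
def infer_intent(ocr_tokens, intent_keywords):
    """Toy stand-in for the analysis component: score each candidate
    intent by keyword overlap with OCR'd screen text, then return the
    best-scoring intent (or None if nothing matches)."""
    tokens = {t.lower() for t in ocr_tokens}
    scores = {intent: len(tokens & set(words))
              for intent, words in intent_keywords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

An action component would then provision services keyed on the returned intent, e.g., surfacing an email-composition assistant when the inferred intent is writing an email.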