Patent classifications
G06F18/40
SYSTEMS AND METHODS FOR GENERATING CUSTOMIZED TRAINING
A system may be configured to perform a method for generating customized training. The system may receive first user interaction data associated with a user. The system may determine, using a machine learning model (MLM), whether the first user interaction data exceeds a predetermined threshold. Based on such determination, the system may assign a training module to the user. The system may access a user profile associated with the user, the user profile comprising a plurality of training modules. The system may generate a training plan based on the training module and the plurality of training modules. The system may receive second user interaction data associated with the user, and may determine an efficacy level of the training plan based on the second user interaction data. The system may dynamically update the training plan based on the efficacy level, and may dynamically display the training plan in the user profile.
System and method for finding and classifying lines in an image with a vision system
This invention provides a system and method for finding line features in an image that allows multiple lines to be efficiently and accurately identified and characterized. When lines are identified, the user can train the system to associate predetermined (e.g., text) labels with such lines. These labels can be used to define neural net classifiers. The neural net operates at runtime to identify and score lines in a runtime image that are found using a line-finding process. The found lines can be displayed to the user with labels and an associated probability score map based upon the neural net results. Lines that are not labeled are generally deemed to have a low score and are either not flagged by the interface or identified as not relevant.
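The label-and-score behavior above can be illustrated with a toy classifier: here a trivial nearest-angle rule stands in for the neural net, and lines far from any trained label fall below the threshold and come back unlabeled. The trained angles, score formula, and threshold are all invented for illustration.

```python
# Trained associations: label -> prototype line angle in degrees (hypothetical)
TRAINED = {"top_edge": 0.0, "left_edge": 90.0}

def classify_line(angle_deg: float, threshold: float = 0.5):
    """Score a found line against each trained label; return (label, score).

    Unlabeled lines (best score below threshold) return (None, score),
    mirroring the abstract's 'low score / not relevant' behavior.
    """
    best_label, best_score = None, 0.0
    for label, proto in TRAINED.items():
        d = abs(angle_deg - proto) % 180
        d = min(d, 180 - d)              # wrapped angular distance
        score = max(0.0, 1.0 - d / 45.0)  # decays with distance from prototype
        if score > best_score:
            best_label, best_score = label, score
    if best_score < threshold:
        return None, best_score
    return best_label, best_score

print(classify_line(45.0))  # (None, 0.0) — no nearby trained label
```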
Empathic artificial intelligence systems
Embodiments of the present disclosure provide systems and methods for training a machine-learning model for predicting emotions from received media data. Methods according to the present disclosure include displaying a user interface. The user interface includes a predefined media content, a plurality of predefined emotion tags, and a user interface control for controlling a recording of the user imitating the predefined media content. Methods can further include receiving, from a user, a selection of one or more emotion tags from the plurality of predefined emotion tags, receiving the recording of the user imitating the predefined media content, storing the recording in association with the selected one or more emotion tags, and training, based on the recording, the machine-learning model configured to receive input media data and predict an emotion based on the input media data.
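The data-collection step described above (a recording stored with its selected emotion tags, then flattened into training pairs) might be sketched as follows. The tag set, data structures, and function names are assumptions for illustration, not the disclosure's API.

```python
from dataclasses import dataclass, field

EMOTION_TAGS = ["joy", "anger", "sadness", "surprise"]  # hypothetical tag set

@dataclass
class Sample:
    recording: bytes                        # user's imitation of the media
    tags: list[str] = field(default_factory=list)

def store_sample(recording: bytes, selected: list[str]) -> Sample:
    """Validate the selected tags against the predefined set, then store them
    in association with the recording."""
    bad = [t for t in selected if t not in EMOTION_TAGS]
    if bad:
        raise ValueError(f"unknown emotion tags: {bad}")
    return Sample(recording=recording, tags=selected)

def build_training_set(samples: list[Sample]) -> list[tuple[bytes, str]]:
    """Flatten (recording, tag) pairs for supervised model training."""
    return [(s.recording, t) for s in samples for t in s.tags]

s = store_sample(b"\x00fake-audio", ["joy", "surprise"])
print(len(build_training_set([s])))  # 2
```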
Some automated and semi-automated tools for linear feature extraction in two and three dimensions
A system for vector extraction comprising a vector extraction engine, stored and operating on a network-connected computing device, that loads raster images from a database stored and operating on a network-connected computing device, identifies features in the raster images, and computes vectors based on those features; and methods for feature and vector extraction.
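One minimal interpretation of the feature-to-vector step: treat nonzero raster pixels as features and fit a line through them by least squares. The raster representation and both function names are assumptions made for this sketch.

```python
def extract_features(raster: list[list[int]]) -> list[tuple[int, int]]:
    """Identify feature pixels (nonzero cells) as (x, y) coordinates."""
    return [(x, y) for y, row in enumerate(raster) for x, v in enumerate(row) if v]

def fit_vector(points: list[tuple[int, int]]) -> tuple[float, float]:
    """Compute a vector from the features: least-squares fit y = m*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

raster = [[0, 0, 0],
          [1, 1, 1],   # a horizontal row of feature pixels at y = 1
          [0, 0, 0]]
print(fit_vector(extract_features(raster)))  # (0.0, 1.0)
```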
MANAGEMENT OF VIDEO PLAYBACK SPEED BASED ON OBJECTS OF INTEREST IN THE VIDEO DATA
Systems, methods, and software described herein manage the playback speed of video data based on processing objects in the video data. In one example, a video processing service obtains video data from a video source and identifies objects of interest in the video data. The video processing service further determines complexity in frames of the video data related to the objects of interest and updates playback speeds for segments of the video data based on the complexity of the frames.
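The complexity-to-speed mapping could look like the sketch below, where "complexity" is simplified to the count of objects of interest per frame; the formula and defaults are invented for illustration.

```python
def segment_complexity(object_counts: list[int]) -> float:
    """Average number of objects of interest across the segment's frames."""
    return sum(object_counts) / len(object_counts) if object_counts else 0.0

def playback_speed(complexity: float, base: float = 4.0, floor: float = 1.0) -> float:
    """Slow playback as frame complexity grows; clamp at real-time speed."""
    return max(floor, base / (1.0 + complexity))

# Object-of-interest counts per frame, for two segments of the video
segments = [[1, 1, 1], [3, 4, 5]]
speeds = [playback_speed(segment_complexity(s)) for s in segments]
print(speeds)  # [2.0, 1.0]
```

The busy second segment plays at real time while the simpler first segment is sped up, matching the abstract's per-segment update.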
Automatically Managing User Message Conveyance Utilizing Multiple Messaging Channels
A method, system and/or computer usable program product for automatically managing the conveying of messages among multiple communication channels including (i) receiving, from a first computing system, an on-line message addressed to a user, (ii) automatically categorizing the message among a predetermined set of message categories stored in memory, (iii) identifying a set of on-line message channels preselected by the addressee user for receiving messages for each of the predetermined set of message categories, (iv) identifying a set of performance metrics stored in memory for optimizing message channel selection, (v) utilizing the performance metrics to automatically select an optimum message channel from the preselected message channels for sending the categorized message to a second computing system of the addressee user, (vi) automatically formatting the categorized message for the optimum message channel, and (vii) sending the formatted message on-line to the second computing system of the addressee user across the optimum message channel.
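Steps (ii) through (vi) above can be sketched as follows. The categories, channels, metric values, and the toy keyword categorizer are all hypothetical stand-ins for the patent's stored data and classification logic.

```python
CHANNELS_BY_CATEGORY = {            # addressee's preselected channels (iii)
    "urgent": ["sms", "push"],
    "newsletter": ["email"],
}
METRICS = {                         # performance metrics, e.g. read rate (iv)
    "sms": 0.95, "push": 0.80, "email": 0.40,
}

def categorize(message: str) -> str:
    """Toy keyword rule standing in for automatic categorization (ii)."""
    return "urgent" if "ASAP" in message else "newsletter"

def select_channel(category: str) -> str:
    """Pick the preselected channel with the best performance metric (v)."""
    candidates = CHANNELS_BY_CATEGORY[category]
    return max(candidates, key=lambda c: METRICS.get(c, 0.0))

def format_for(channel: str, message: str) -> str:
    """Format for the optimum channel (vi); SMS gets truncated to 160 chars."""
    return message[:160] if channel == "sms" else message

msg = "Server down, respond ASAP"
channel = select_channel(categorize(msg))
print(channel)  # sms
```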
DWELL TIME RECORDING OF DIGITAL IMAGE REVIEW SESSIONS
Systems and methods describe dwell time recording of digital image review sessions. The system displays, at a user interface (UI), a portion of an image on at least one monitor, where the image is segmented into a multitude of patches. The system then receives UI events involving a change in the currently displayed patches. For each of the UI events, the system records one or more dwell times representing durations for which the current patches of the image were displayed. The system also receives a report associated with the image review session, and processes the text of the report to determine a classification label for the image. Finally, the system trains a machine learning model, using at least the recorded dwell times and the classification label for the image.
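The dwell-time bookkeeping might work as sketched here: each UI event carries a timestamp and the set of patches that became visible, and the dwell time for the previously visible patches is the gap between consecutive events. The event format and the toy report rule are assumptions, not the patented method.

```python
from collections import defaultdict

def record_dwell_times(events: list[tuple[float, set[int]]]) -> dict[int, float]:
    """events: (timestamp_sec, visible_patch_ids). Accumulate per-patch dwell."""
    dwell: dict[int, float] = defaultdict(float)
    for (t0, patches), (t1, _) in zip(events, events[1:]):
        for p in patches:
            dwell[p] += t1 - t0
    return dict(dwell)

def label_from_report(report: str) -> str:
    """Toy text rule standing in for processing the report into a label."""
    return "abnormal" if "lesion" in report.lower() else "normal"

events = [(0.0, {1, 2}), (2.5, {2, 3}), (4.0, {3})]
print(sorted(record_dwell_times(events).items()))  # [(1, 2.5), (2, 4.0), (3, 1.5)]
print(label_from_report("Focal lesion in the left lobe."))  # abnormal
```

The (dwell times, label) pairs would then serve as the training examples for the machine learning model described in the abstract.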