Patent classifications
G06F18/2431
System-on-a-chip incorporating artificial neural network and general-purpose processor circuitry
A circuit system and a method of analyzing audio or video input data that is capable of detecting, classifying, and post-processing patterns in an input data stream. The circuit system may consist of one or more digital processors, one or more configurable spiking neural network circuits, and digital logic for the selection of two-dimensional input data. The system may use the neural network circuits for detecting and classifying patterns, and one or more of the digital processors to perform further detailed analyses on the input data and to signal the result of an analysis to outputs of the system.
Digital unpacking of CT imagery
An improvement to automatic classifying of the threat level of objects in CT scan images of container content. Methods include automatic identification of non-classifiable threat level object images and display, on an operator's screen, of a de-cluttered image to improve operator efficiency. The de-cluttered image includes, as subject images, the non-classifiable threat level object images. Improvement to resolution of non-classifiable threat objects includes computer-directed prompts for the operator to enter information regarding the subject image and, based on that information, identifying the object type. Improvement to automatic classifying of threat levels includes incrementally updating the classifying, using the determined object type and the threat level of the object type.
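The incremental-update step described above can be sketched as a running tally per object type, with classification returning the most frequently confirmed threat level. This is a minimal sketch; the class and method names are hypothetical, not the patent's implementation.

```python
class IncrementalThreatClassifier:
    """Sketch of incremental updating: each operator-resolved object adds a
    (object type, threat level) observation, and classification returns the
    most-observed threat level for that object type."""

    def __init__(self):
        self.counts = {}  # object_type -> {threat_level: observation count}

    def update(self, object_type, threat_level):
        by_level = self.counts.setdefault(object_type, {})
        by_level[threat_level] = by_level.get(threat_level, 0) + 1

    def classify(self, object_type):
        by_level = self.counts.get(object_type)
        if not by_level:
            return None                        # still non-classifiable
        return max(by_level, key=by_level.get)

clf = IncrementalThreatClassifier()
clf.update("aerosol_can", "high")   # operator resolved two high-threat cans
clf.update("aerosol_can", "high")
clf.update("aerosol_can", "low")
print(clf.classify("aerosol_can"))  # → high
```

Unseen object types return `None`, mirroring the abstract's notion of an object that remains non-classifiable until the operator supplies information.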
Training a card type classifier with simulated card images
A computer model to identify a type of physical card is trained using simulated card images. The physical card may exist with various subtypes, some of which may not exist or be unavailable when the model is trained. To more robustly identify these subtypes, the training data set for the computer model includes simulated card images that are generated for the card type. The simulated card images are generated based on a semi-randomized background that varies in appearance, onto which an identifying marking of the card type is superimposed, such that the training data for the computer model includes additional randomized sample card images and ensures the model is robust to further variations in subtypes.
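The image-generation step might be sketched as follows: a semi-randomized background (random base shade plus per-pixel noise) with the card type's identifying marking superimposed on top. The function name, grayscale image representation, and marking pixels are illustrative assumptions, not the patent's actual implementation.

```python
import random

def simulate_card_image(width, height, marking, rng=None):
    """Generate one simulated card image as a 2D list of grayscale pixels.

    The background is semi-randomized: a random base shade with per-pixel
    noise. `marking` is a set of (row, col) pixels drawn at full intensity,
    standing in for the card type's identifying marking."""
    rng = rng or random.Random()
    base = rng.randint(60, 200)                      # random background shade
    img = [[min(255, max(0, base + rng.randint(-20, 20)))
            for _ in range(width)] for _ in range(height)]
    for r, c in marking:                             # superimpose the marking
        img[r][c] = 255
    return img

# Build a small training set of simulated images for one card type.
marking = {(0, 0), (0, 1), (1, 0)}                   # hypothetical logo pixels
samples = [simulate_card_image(8, 8, marking, random.Random(seed))
           for seed in range(100)]
```

Varying the seed varies the background while the marking stays fixed, which is the property the abstract relies on for robustness to unseen subtypes.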
Automatic capture of user interface screenshots for software product documentation
Embodiments of the invention are directed to automatically capturing user interface screenshots for use in documentation of a software product. Aspects include identifying a user interface window of the software product and creating a degree-of-completion graph for the user interface window. Aspects also include capturing a plurality of screenshots of the user interface window during use of the software product and calculating a degree-of-completion percentage for each of the plurality of screenshots. Aspects further include identifying a subset of the plurality of screenshots to be included in the software product documentation based on the degree-of-completion percentage.
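The subset-selection step could be sketched as picking, for each documentation milestone, the captured screenshot whose degree-of-completion percentage is closest. The helper names and target percentages below are hypothetical illustrations, not the embodiment's actual logic.

```python
def degree_of_completion(filled_fields, total_fields):
    """Percentage of UI elements completed in a screenshot."""
    return 100.0 * filled_fields / total_fields

def select_screenshots(shots, targets=(0, 50, 100)):
    """Pick, for each target percentage, the screenshot whose
    degree-of-completion is closest; `shots` is a list of
    (name, percentage) pairs captured during use of the product."""
    subset = []
    for t in targets:
        best = min(shots, key=lambda s: abs(s[1] - t))
        if best not in subset:
            subset.append(best)
    return subset

shots = [("s1", degree_of_completion(0, 10)),
         ("s2", degree_of_completion(3, 10)),
         ("s3", degree_of_completion(6, 10)),
         ("s4", degree_of_completion(10, 10))]
print(select_screenshots(shots))   # screenshots nearest 0%, 50%, 100%
```

Deduplication matters because two milestones can map to the same screenshot when captures are sparse.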
Automatic spotlight in video conferencing
Automatic spotlighting of participant video feeds in a video conference is based on one or more triggers detected in the participant video feeds. Participant video feeds are added to a spotlight video queue and are elevated to an active spotlight status based on certain criteria. The participant video feeds that are elevated to the active spotlight status are displayed adjacent to a host video feed on a display.
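A minimal sketch of the queue-and-elevate flow, assuming a numeric trigger score and a fixed threshold as the elevation criterion; both are hypothetical stand-ins for the patent's "certain criteria".

```python
from collections import deque

class SpotlightQueue:
    """Triggered feeds join a queue and are elevated to active spotlight
    once their trigger score meets a threshold (hypothetical criterion)."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.queue = deque()   # (feed_id, trigger_score) awaiting elevation
        self.active = []       # feeds displayed adjacent to the host feed

    def on_trigger(self, feed_id, score):
        """A detected trigger (e.g. speech, motion) enqueues the feed."""
        self.queue.append((feed_id, score))

    def elevate(self):
        """Promote queued feeds that meet the criterion; keep the rest."""
        kept = deque()
        while self.queue:
            feed_id, score = self.queue.popleft()
            if score >= self.threshold:
                self.active.append(feed_id)
            else:
                kept.append((feed_id, score))
        self.queue = kept
        return list(self.active)

q = SpotlightQueue(threshold=0.5)
q.on_trigger("alice", 0.9)   # e.g. sustained speech detected
q.on_trigger("bob", 0.2)     # brief motion only
print(q.elevate())           # → ['alice']; bob stays queued
```

Feeds below the threshold remain queued, so a later, stronger trigger could still elevate them on a subsequent pass.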
Multi-stage adaptable continuous learning / feedback system for machine learning models
Data is received that specifies a term generated by user input in a graphical user interface. Thereafter, the term is looked up in a dictionary in which there are multiple classes for terms. The term can be classified based on a first class having a top ranked effective count for the term within the dictionary when a ratio of the first class relative to a second class having a second ranked effective count for the term in the dictionary is above a pre-defined threshold. In addition, the term is classified using a machine learning model when the ratio of the first class relative to the second class is below the pre-defined threshold. Data can be provided which characterizes the classifying. Related apparatus, systems, techniques and articles are also described.
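The ratio test described here can be sketched directly: classify from the dictionary when the top-ranked class's effective count sufficiently dominates the second-ranked class's, otherwise fall back to the model. The dictionary layout, threshold value, and fallback model are illustrative assumptions.

```python
from collections import Counter

def classify_term(term, dictionary, model, threshold=2.0):
    """Classify a term via dictionary class counts, falling back to an
    ML model. `dictionary` maps term -> Counter of {class: effective
    count}; `model` is any callable term -> class (stand-in for the
    machine learning model)."""
    counts = dictionary.get(term)
    if counts:
        ranked = counts.most_common(2)
        if len(ranked) == 1:
            return ranked[0][0]           # only one class observed
        (top_cls, top_n), (_, second_n) = ranked
        # Use the dictionary only when the top count dominates the second.
        if second_n > 0 and top_n / second_n > threshold:
            return top_cls
    return model(term)

dictionary = {"invoice": Counter({"finance": 9, "legal": 2})}
fallback = lambda term: "unknown"         # trivial stand-in model
print(classify_term("invoice", dictionary, fallback))  # ratio 4.5 > 2 → finance
print(classify_term("widget", dictionary, fallback))   # not in dictionary → unknown
```

When the two top counts are close (ratio at or below the threshold), the term also routes to the model, matching the abstract's below-threshold branch.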
System, device, and method of classifying encrypted network communications
Systems, devices, and methods of classifying encrypted network communications. A Traffic Monitoring Unit operates to monitor network traffic, and to capture HTTPS-encrypted packets that are exchanged over an HTTPS connection between an end-user device and a web server. An HTTPS Traffic Classification Unit operates to detect discrete HTTPS-encrypted objects within that HTTPS connection, and to classify those discrete HTTPS-encrypted objects based on at least one of: a first Analysis Model that classifies HTTPS-encrypted objects based on a type of content that is represented in the HTTPS-encrypted object; a second Analysis Model that classifies HTTPS-encrypted objects based on a type of server-side application that is associated with the HTTPS-encrypted object. Each Analysis Model utilizes Machine Learning (ML), Deep Learning (DL), Artificial Intelligence (AI), or Statistical and Mathematical Analysis (SMA).
Multimodal sentiment classification
Sentiment classification can be implemented by an entity-level multimodal sentiment classification neural network. The neural network can include left, right, and target entity subnetworks. The neural network can further include an image network that generates representation data that is combined and weighted with data output by the left, right, and target entity subnetworks to output a sentiment classification for an entity included in a network post.
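The combining-and-weighting step might look like the following sketch, where the left, right, target-entity, and image-network representations are plain vectors and the weights are hypothetical fixed scalars (in the described network they would be learned).

```python
def fuse_representations(left, right, target, image, weights):
    """Weighted elementwise combination of the left, right, and
    target-entity subnetwork outputs with the image-network
    representation, producing one vector for sentiment classification.
    Shapes and weights here are illustrative assumptions."""
    parts = [left, right, target, image]
    return [sum(w * vec[i] for w, vec in zip(weights, parts))
            for i in range(len(target))]

# Toy 3-dimensional representations; the image modality is down-weighted.
fused = fuse_representations(
    left=[1.0, 0.0, 0.0], right=[0.0, 1.0, 0.0],
    target=[0.0, 0.0, 1.0], image=[1.0, 1.0, 1.0],
    weights=[0.3, 0.3, 0.3, 0.1])
print(fused)   # each component ≈ 0.4
```

A downstream classifier head would map the fused vector to a sentiment label for the entity in the network post.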
Method for training an asymmetric generative adversarial network to generate an image and electronic apparatus using the same
A method for training an asymmetric generative adversarial network to generate an image and an electronic apparatus using the same are provided. The method includes the following. A first real image belonging to a first category, a second real image belonging to a second category and a third real image belonging to a third category are input to an asymmetric generative adversarial network for training the asymmetric generative adversarial network, and the asymmetric generative adversarial network includes a first generator, a second generator, a first discriminator and a second discriminator. A fourth real image belonging to the second category is input to the first generator in the trained asymmetric generative adversarial network to generate a defect image.
Systems and methods for utilizing models to detect dangerous tracks for vehicles
A device may receive accelerometer data and video data for a vehicle and may identify bounding boxes and object classes for objects near the vehicle. The device may identify tracks for the objects and may filter out tracks that are not associated with vehicles or vulnerable road users to generate one or more tracks or an indication of no tracks. The device may generate a collision cone identifying a drivable area of the vehicle to identify objects more likely to be involved in a collision and may filter out tracks from the one or more tracks, based on the bounding boxes, to generate a subset of tracks or another indication of no tracks. The device may determine scores for the subset of tracks and may identify the track of the subset with the highest score. The device may perform actions based on the identified track.
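The filtering-and-scoring pipeline could be sketched as below. The track fields, collision-cone test, and scoring function are all illustrative stand-ins for the device's actual models.

```python
def select_dangerous_track(tracks, relevant_classes, in_cone, score_fn):
    """Sketch of the described pipeline: keep tracks for vehicles or
    vulnerable road users, drop tracks whose bounding box falls outside
    the collision cone, then score the remainder and return the
    highest-scoring track (or None as the 'indication of no tracks').
    `in_cone` and `score_fn` are hypothetical callables."""
    candidates = [t for t in tracks if t["cls"] in relevant_classes]
    candidates = [t for t in candidates if in_cone(t["box"])]
    if not candidates:
        return None
    return max(candidates, key=score_fn)

tracks = [
    {"id": 1, "cls": "car",        "box": (5, 5, 9, 9),   "speed": 12.0},
    {"id": 2, "cls": "pedestrian", "box": (-3, 0, -1, 2), "speed": 1.2},
    {"id": 3, "cls": "tree",       "box": (6, 1, 8, 3),   "speed": 0.0},
]
in_cone = lambda box: box[0] >= 0   # toy stand-in for the drivable-area check
risk = lambda t: t["speed"]         # toy stand-in for the track score
best = select_dangerous_track(tracks, {"car", "pedestrian"}, in_cone, risk)
print(best["id"])   # → 1 (tree filtered by class, pedestrian by cone)
```

Returning `None` at each empty stage mirrors the abstract's "indication of no tracks" outputs.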