G06V10/768

Automated honeypot creation within a network

Systems and methods for managing Application Programming Interfaces (APIs) are disclosed. Systems may involve automatically generating a honeypot. For example, the system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving, from a client device, a call to an API node and classifying the call as unauthorized. The operations may include sending the call to a node-imitating model associated with the API node and receiving, from the node-imitating model, synthetic node output data. The operations may include sending a notification based on the synthetic node output data to the client device.
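The routing logic the abstract describes can be sketched as follows; this is a minimal illustration, and the toy authorization check and the node-imitating model are hypothetical stand-ins for the trained components a real system would use.

```python
# Sketch of routing an unauthorized API call to a honeypot model.

def classify_call(call):
    """Toy authorization check: calls lacking a valid token are unauthorized."""
    return "unauthorized" if call.get("token") != "valid-token" else "authorized"

def node_imitating_model(call):
    """Stand-in for a model trained to mimic the real API node's responses."""
    return {"status": 200, "data": f"synthetic result for {call['endpoint']}"}

def handle_api_call(call):
    if classify_call(call) == "unauthorized":
        synthetic = node_imitating_model(call)   # honeypot output
        return {"notification": synthetic}       # sent back to the client
    return {"notification": "forwarded to real node"}
```

An unauthorized caller thus receives plausible synthetic data rather than an error, which keeps the caller engaged with the honeypot instead of the real node.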

Digital unpacking of CT imagery

In an improvement to automatic classification of the threat level of objects in CT scan images of container content, methods include automatic identification of object images whose threat level cannot be classified, and display of a de-cluttered image to an operator, to improve operator efficiency. The de-cluttered image includes, as subject images, the non-classifiable threat level object images. Resolution of non-classifiable threat objects is improved by computer-directed prompts for the operator to enter information regarding the subject image and, based on that information, identification of the object type. Automatic classification of threat levels is improved by incrementally updating the classifier using the determined object type and the threat level of that object type.
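The de-clutter and incremental-update loop can be sketched as below; the threat table, detection records, and operator-resolution step are invented simplifications of the trained classifier and prompting workflow the abstract describes.

```python
# Sketch of de-cluttering and incremental classifier update.

threat_levels = {"knife": "high", "bottle": "low"}   # classifier's known types

def declutter(detections):
    """Keep only objects the classifier could not assign a threat level."""
    return [d for d in detections if d["type"] not in threat_levels]

def resolve(operator_type, operator_level):
    """Operator identifies the object; the classifier is updated incrementally."""
    threat_levels[operator_type] = operator_level

detections = [{"id": 1, "type": "knife"}, {"id": 2, "type": "unknown-tool"}]
subjects = declutter(detections)        # only the unknown object is displayed
resolve("unknown-tool", "medium")       # operator input updates the classifier
```

After the update, subsequent scans containing the same object type are classified automatically and no longer clutter the operator's display.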

Representative document hierarchy generation

In some aspects, a method includes performing optical character recognition (OCR) based on data corresponding to a document to generate text data, detecting one or more bounded regions from the data based on a predetermined boundary rule set, and matching one or more portions of the text data to the one or more bounded regions to generate matched text data. Each bounded region of the one or more bounded regions encloses a corresponding block of text. The method also includes extracting features from the matched text data to generate a plurality of feature vectors and providing the plurality of feature vectors to a trained machine-learning classifier to generate one or more labels associated with the one or more bounded regions. The method further includes outputting metadata indicating a hierarchical layout associated with the document based on the one or more labels and the matched text data.
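The region-labeling and hierarchy-output steps can be sketched as follows; the feature extractor, the toy classifier, and the region records stand in for the OCR output, boundary rule set, and trained machine-learning classifier of the abstract.

```python
# Sketch of labeling bounded regions and emitting hierarchy metadata.

def extract_features(region):
    text = region["text"]
    return [len(text), int(text.isupper()), region["font_size"]]

def toy_classifier(vec):
    """Stand-in for the trained classifier over feature vectors."""
    return "heading" if vec[1] or vec[2] >= 14 else "body"

def hierarchy(regions):
    labels = [toy_classifier(extract_features(r)) for r in regions]
    meta, current = [], None
    for region, label in zip(regions, labels):
        if label == "heading":
            current = {"heading": region["text"], "children": []}
            meta.append(current)
        elif current:
            current["children"].append(region["text"])
    return meta

regions = [{"text": "INTRODUCTION", "font_size": 16},
           {"text": "Some body text.", "font_size": 10}]
```

Running `hierarchy(regions)` nests the body region under its heading, which is the hierarchical-layout metadata the method outputs.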

IDENTIFYING AND TRANSFORMING TEXT DIFFICULT TO UNDERSTAND BY USER

A computer-implemented method, system and computer program product for improving understandability of text by a user. A final word vector for each word in a sentence of a document is computed, such as by averaging a first word vector and a second word vector for that word. Furthermore, elements of a user portrait are vectorized. A distance is then computed between the vector for each word in the sentence and a vectorized element in the user's portrait; these distances are summed to form an evaluation result for the element. An evaluation result is also formed for every other element in the user's portrait by performing the same computation. A "final evaluation result" is then generated from the evaluation results for every element in the user's portrait. The document is then transformed in response to the final evaluation result indicating a lack of understanding of the sentence by the user.
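The averaging, distance, and summation steps can be sketched as below; the two-dimensional word vectors, the portrait vectors, the minimum-based aggregation rule, and the threshold are all invented for illustration (real systems would use trained embeddings and a learned decision rule).

```python
import math

# Sketch of the per-element evaluation over averaged word vectors.

def average(v1, v2):
    """Final word vector: average of two word vectors for the same word."""
    return [(a + b) / 2 for a, b in zip(v1, v2)]

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def evaluate(sentence_vecs, portrait_vecs, threshold):
    # One evaluation result per portrait element: sum of distances to all words.
    results = {elem: sum(distance(w, vec) for w in sentence_vecs)
               for elem, vec in portrait_vecs.items()}
    final = min(results.values())      # hypothetical aggregation rule
    return results, final > threshold  # True => transform the document

sentence = [average([1.0, 0.0], [0.8, 0.2]), average([0.0, 1.0], [0.2, 0.8])]
portrait = {"education": [0.9, 0.1], "interests": [0.1, 0.9]}
results, needs_transform = evaluate(sentence, portrait, threshold=2.0)
```

A large final result (words far from everything in the portrait) would trigger transformation of the sentence into text the user is more likely to understand.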

Image tagging based upon cross domain context

A method described herein includes receiving a digital image, wherein the digital image includes a first element that corresponds to a first domain and a second element that corresponds to a second domain. The method also includes automatically assigning a label to the first element in the digital image based at least in part upon a computed probability that the label corresponds to the first element, wherein the probability is computed through utilization of a first model that is configured to infer labels for elements in the first domain and a second model that is configured to infer labels for elements in the second domain. The first model receives data that identifies learned relationships between elements in the first domain and elements in the second domain, and the probability is computed by the first model based at least in part upon the learned relationships.
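One way to read the probability computation is sketched below; the per-domain probability table, the second model's inferred label, and the learned-relationship weights are invented for illustration and are not taken from the abstract.

```python
# Sketch of combining two domain models via learned cross-domain relationships.

first_model = {"racket": 0.4, "ball": 0.6}         # P(label | first-domain element)
second_model_label = "tennis court"                # label inferred in second domain
relationships = {("racket", "tennis court"): 2.0,  # learned co-occurrence boosts
                 ("ball", "tennis court"): 1.0}

def label_first_element():
    # Reweight first-model probabilities by learned cross-domain relationships.
    scores = {lab: p * relationships.get((lab, second_model_label), 1.0)
              for lab, p in first_model.items()}
    total = sum(scores.values())
    probs = {lab: s / total for lab, s in scores.items()}
    return max(probs, key=probs.get), probs

label, probs = label_first_element()
```

Here the second domain's context flips the decision: "ball" is more likely in isolation, but the learned relationship with "tennis court" makes "racket" the assigned label.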

Structured adversarial training for natural language machine learning tasks

A method includes obtaining first training data having multiple first linguistic samples. The method also includes generating second training data using the first training data and multiple symmetries. The symmetries identify how to modify the first linguistic samples while maintaining structural invariants within the first linguistic samples, and the second training data has multiple second linguistic samples. The method further includes training a machine learning model using at least the second training data. At least some of the second linguistic samples in the second training data are selected during the training based on a likelihood of being misclassified by the machine learning model.
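The generate-and-select loop can be sketched as follows; the symmetry functions and the length-based misclassification score are hypothetical stand-ins for meaning-preserving rewrites and a query against the model being trained.

```python
# Sketch of symmetry-based sample generation with misclassification-based selection.

def swap_synonym(s):
    return s.replace("quick", "fast")   # preserves meaning (a structural invariant)

def identity(s):
    return s                            # placeholder for another symmetry

symmetries = [swap_synonym, identity]

def misclassification_likelihood(sample):
    """Stand-in for querying the current model; longer samples score higher."""
    return len(sample) / 100.0

def generate_second_training_data(first_samples, threshold=0.15):
    candidates = [sym(s) for s in first_samples for sym in symmetries]
    # keep the candidates the model is most likely to get wrong
    return [c for c in candidates if misclassification_likelihood(c) > threshold]

first = ["the quick brown fox jumps over the lazy dog"]
second = generate_second_training_data(first)
```

Because selection is tied to the model's weaknesses, the second training set concentrates on samples near the decision boundary rather than duplicating easy cases.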

Object Information Derived from Object Images
20180011877 · 2018-01-11 ·

An object is recognized from image data as a target object and linked to a user based on an interaction by the user, information about the target object is obtained and a purchase of the target object is initiated.
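The recognize, link, inform, and purchase flow can be sketched as below; every function body is a hypothetical stand-in for the recognition model, product lookup, and purchase service the abstract implies.

```python
# Minimal sketch of the recognize -> link -> inform -> purchase flow.

def recognize(image_data):
    """Stand-in for object recognition over image data."""
    return "coffee maker"

def handle_interaction(user, image_data):
    target = recognize(image_data)                       # recognized target object
    link = {"user": user, "object": target}              # linked by the interaction
    info = {"object": target, "price": 79.99}            # obtained object information
    order = {"user": link["user"], "item": target,
             "total": info["price"], "status": "initiated"}
    return order

order = handle_interaction("alice", b"raw-image-bytes")
```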

Systems and methods for determining likelihood of traffic incident information

A method includes receiving a first set of images from an image capture device of a vehicle. The method also includes performing a first analysis of movement of biomechanical points of occupants of the vehicle in the first set of images. The method further includes receiving an indication that a traffic incident has occurred. The method also includes receiving a second set of images from the image capture device corresponding to when the traffic incident occurred. The method further includes performing a second analysis of movement of the biomechanical points of the occupants in the second set of images. The method also includes determining a likelihood of injury or a severity of injury to the occupants based on the first analysis of movement and the second analysis of movement.
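The two movement analyses and the comparison can be sketched as follows; the displacement metric, the frame format, and the injury threshold are invented for illustration.

```python
# Sketch of comparing baseline and incident-time movement of biomechanical points.

def movement(frames):
    """Total displacement of each tracked point across a set of frames."""
    totals = {}
    for prev, cur in zip(frames, frames[1:]):
        for point, (x, y) in cur.items():
            px, py = prev[point]
            totals[point] = totals.get(point, 0.0) + abs(x - px) + abs(y - py)
    return totals

def injury_likelihood(baseline, incident):
    # Large excess movement versus baseline suggests higher likelihood of injury.
    excess = sum(max(incident.get(p, 0) - baseline.get(p, 0), 0) for p in incident)
    return "high" if excess > 10 else "low"

first = movement([{"head": (0, 0)}, {"head": (1, 0)}])    # normal driving
second = movement([{"head": (0, 0)}, {"head": (8, 8)}])   # during the incident
```

The first analysis establishes each occupant's normal range of motion, so the incident-time analysis measures how far movement exceeded that baseline rather than raw motion alone.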

RECOGNITION DEVICE, RECOGNITION METHOD, AND COMPUTER PROGRAM PRODUCT
20180012111 · 2018-01-11 ·

According to an embodiment, a recognition device includes a detector, a recognizer, and a matcher. The detector is configured to detect a character candidate from an input image. The recognizer is configured to generate a recognition candidate from the character candidate. The matcher is configured to match the recognition candidate with a knowledge dictionary that contains modeled character strings to be recognized, and to generate a matching result obtained by matching a character string presumed to be included in the input image with the dictionary. Either a real character code that represents a character or a virtual character code that specifies a command is assigned to each edge. When shifting a state of the dictionary along an edge to which a virtual character code is assigned, the matcher gives the command specified by that virtual character code to a command processor.
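The dictionary can be pictured as a small state machine whose edges carry either real character codes or virtual command codes; the states, edge table, and command handler below are simplified stand-ins for the modeled character strings and command processor of the abstract.

```python
# Sketch of a dictionary automaton with real and virtual character codes on edges.

edges = {                      # (state, code) -> next state
    (0, "C"): 1,
    (1, "A"): 2,
    (2, "T"): 3,
    (3, "<emit>"): 4,          # virtual code: names a command, not a character
}
virtual_codes = {"<emit>": "output_matched_string"}

def match(codes, command_processor):
    state = 0
    for code in codes:
        if code in virtual_codes:
            # shifting along a virtual edge hands its command to the processor
            command_processor(virtual_codes[code])
        state = edges[(state, code)]
    return state

commands = []
final_state = match(["C", "A", "T", "<emit>"], commands.append)
```

Real-code edges consume recognized characters, while virtual-code edges trigger side effects (here, emitting the matched string) without consuming image content.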

Systems and methods of detecting and responding to a visitor to a smart home environment

A method of detecting and responding to a visitor to a smart home environment via an electronic greeting system of the smart home environment, including determining that a visitor is approaching an entryway of the smart home environment; initiating a facial recognition operation while the visitor is approaching the entryway; initiating an observation window in response to the determination that a visitor is approaching the entryway; obtaining context information from one or more sensors of the smart home environment during the observation window; and at the end of the observation window, initiating a response to the detected approach of the visitor based on the context information and/or an outcome of the facial recognition operation.
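The observation-window flow can be sketched as below; the approach detector, recognition outcome, sensor readings, and response table are hypothetical stand-ins for the smart home components the abstract describes.

```python
import time

# Sketch of the observation-window flow for a detected visitor approach.

def detect_approach():
    return True                      # stand-in for entryway approach detection

def facial_recognition():
    return "known"                   # or "unknown" / "inconclusive"

def gather_context(window_seconds):
    """Poll sensors until the observation window closes."""
    deadline = time.monotonic() + window_seconds
    readings = []
    while time.monotonic() < deadline:
        readings.append({"motion": True, "time_of_day": "evening"})
        break                        # single poll keeps the sketch fast
    return readings

def respond(face, context):
    # Response chosen from recognition outcome and/or context information.
    if face == "known":
        return "announce visitor by name"
    if any(r["time_of_day"] == "evening" for r in context):
        return "turn on entry light and prompt visitor"
    return "record clip"

if detect_approach():
    response = respond(facial_recognition(), gather_context(0.1))
```

Starting facial recognition while the visitor is still approaching lets its outcome, together with the sensor context, be ready by the time the window closes.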