Patent classifications
G06N3/094
System and Method for Capturing, Preserving, and Representing Human Experiences and Personality Through a Digital Interface
A system and method to capture and interact with a comprehensive digital record of an individual's biographical history and to produce a synthetic model of their personality. The captured biographical history is a detailed record of the individual's actions, interactions, and experiences over a period that may span decades of their lifetime. The biographical history is indexed by areas of data variability and neural-network confidence variability to identify points of likely human interest. A synthetic personality model is generated as a representation of the individual's personality structure, biases, sentiments, and traits. The synthetic personality can be interacted with through a digital interface and demonstrates the interaction patterns, triggers, and habits of the original individual. The functioning and performance of the system over an individual's lifespan are optimized through data synthesis and disposition.
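The indexing step lends itself to a short illustration. Below is a minimal Python sketch of flagging "points of likely human interest" as spans of the timeline where either the record's data variability or a model's confidence variability spikes. The function names, window size, and z-score threshold are all hypothetical; the abstract does not specify a concrete algorithm.

```python
# Sketch: index a biographical timeline by data variability and model
# confidence variability; flag outlying windows as likely interest points.
import numpy as np

def index_interest_points(features, confidences, window=30, z_thresh=2.0):
    """features: (T, D) embeddings of daily records; confidences: (T,)
    model confidence per record. Returns indices of likely interest."""
    T = len(confidences)
    data_var = np.array([
        features[max(0, t - window):t + 1].var() for t in range(T)
    ])
    conf_var = np.array([
        confidences[max(0, t - window):t + 1].var() for t in range(T)
    ])

    # Standardize both variability signals and flag outlying windows.
    def zscore(x):
        return (x - x.mean()) / (x.std() + 1e-8)

    score = np.maximum(zscore(data_var), zscore(conf_var))
    return np.where(score > z_thresh)[0]

# Example: 1000 days of 64-d records with a burst of novelty around day 500.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 64))
feats[490:520] *= 5.0                      # unusual period in the record
conf = np.clip(rng.normal(0.9, 0.02, 1000), 0, 1)
conf[490:520] = rng.uniform(0.2, 0.9, 30)  # confidence becomes erratic
print(index_interest_points(feats, conf)[:10])
```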
Simulate live video presentation in a recorded video
An embodiment for simulating a live video presentation in a recorded video is provided. The embodiment may include receiving a previously recorded online meeting. The embodiment may also include transcribing the previously recorded online meeting and indexing the transcription. The embodiment may further include receiving audio content from a user. The embodiment may also include searching the transcription for a response to the audio content. The embodiment may further include, in response to determining that the response is found in the transcription, generating a solution for the audio content from the transcription. The embodiment may also include integrating the generated solution into the previously recorded online meeting. The embodiment may further include updating one or more video frames of the previously recorded online meeting based on the generated solution.
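A minimal sketch of the transcript-search step may help. It assumes the user's audio has already been converted to text; the token-overlap scoring and the sample segments are invented stand-ins for whatever retrieval the embodiment actually uses.

```python
# Sketch: search an indexed meeting transcript for the segment most likely
# to answer a user's (speech-to-text'd) query.
from collections import Counter

transcript = [  # (timestamp_seconds, segment_text) -- hypothetical data
    (12.5, "welcome everyone to the quarterly review"),
    (48.0, "our revenue grew eight percent this quarter"),
    (93.2, "questions about hiring can go to the people team"),
]

def search_transcript(query, segments):
    q = Counter(query.lower().split())

    def score(text):
        # Count tokens shared between the query and the segment.
        return sum((q & Counter(text.split())).values())

    best = max(segments, key=lambda s: score(s[1]))
    return best if score(best[1]) > 0 else None  # None => no response found

hit = search_transcript("what was revenue growth", transcript)
if hit is not None:
    ts, answer = hit
    print(f"splice recorded answer at t={ts}s: {answer!r}")
```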
Few-Shot Domain Adaptation in Generative Adversarial Networks
The present disclosure provides improved methods for learning a generative model with limited training data by leveraging a GAN model pre-trained on a related source domain and adapting it to a new target domain given a small set of examples from the target domain.
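The adaptation recipe the abstract gestures at can be sketched in a few lines of PyTorch: start from a generator "pretrained" on a source domain and fine-tune it adversarially on a handful of target examples. The toy MLP architecture, the layer-freezing scheme, and the synthetic data are placeholders; the disclosure does not prescribe these specifics.

```python
# Sketch: few-shot GAN adaptation by adversarially fine-tuning a pretrained
# generator on a small target set, with early layers frozen.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
# Pretend G was pretrained on a related source domain, e.g.:
# G.load_state_dict(torch.load("source_generator.pt"))  # hypothetical file

for p in G[0].parameters():   # freeze early layers to preserve source
    p.requires_grad = False   # knowledge under limited target data

target = torch.randn(10, 2) + torch.tensor([4.0, 4.0])  # few target examples
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(
    [p for p in G.parameters() if p.requires_grad], lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(200):
    z = torch.randn(10, 16)
    fake = G(z)
    # Discriminator: real target examples vs. generated samples.
    d_loss = (bce(D(target), torch.ones(10, 1))
              + bce(D(fake.detach()), torch.zeros(10, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: fool the discriminator on the target domain.
    g_loss = bce(D(fake), torch.ones(10, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, 16)).mean(dim=0))  # should drift toward target mean
```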
LEARNING APPARATUS, METHOD, AND PROGRAM, IMAGE GENERATION APPARATUS, METHOD, AND PROGRAM, TRAINED MODEL, VIRTUAL IMAGE, AND RECORDING MEDIUM
A processor inputs a first training image having a first feature to a generator, which is a generative model that generates a training virtual image having a second feature. The processor derives a plurality of types of conversion training images with different observation conditions by performing a plurality of types of observation condition conversion processing on a second training image. The processor derives a plurality of types of conversion training virtual images with the different observation conditions by performing the plurality of types of observation condition conversion processing on the training virtual image. The processor trains the generative model using evaluation results regarding the plurality of types of conversion training images and the plurality of types of conversion training virtual images.
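The training scheme can be illustrated with a toy PyTorch sketch: apply the same set of observation-condition conversions to both the second training image and the generated virtual image, and score each converted pair with a discriminator. The specific conversions (exposure shifts, a blur), the networks, and the losses here are illustrative assumptions.

```python
# Sketch: conversion-consistency training, where evaluation happens under
# several observation conditions applied to real and virtual images alike.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical observation-condition conversions.
conversions = [
    lambda x: x * 1.2,                                   # brighter exposure
    lambda x: x * 0.8,                                   # darker exposure
    lambda x: F.avg_pool2d(x, 3, stride=1, padding=1),   # mild blur
]

G = nn.Conv2d(1, 1, 3, padding=1)  # toy generator: first -> second feature
D = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1),
                  nn.Flatten(), nn.Linear(4 * 8 * 8, 1))
bce = nn.BCEWithLogitsLoss()

first_img = torch.randn(1, 1, 8, 8)   # first training image (first feature)
second_img = torch.randn(1, 1, 8, 8)  # second training image (second feature)
virtual = G(first_img)                # training virtual image

# Evaluate every (conversion training image, conversion training virtual
# image) pair with the discriminator; these scores drive training.
d_loss = sum(
    bce(D(conv(second_img)), torch.ones(1, 1))
    + bce(D(conv(virtual.detach())), torch.zeros(1, 1))
    for conv in conversions
)
g_loss = sum(bce(D(conv(virtual)), torch.ones(1, 1)) for conv in conversions)
d_loss.backward()
g_loss.backward()
print(f"d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```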
EMBEDDING CONTEXTUAL INFORMATION IN AN IMAGE TO ASSIST UNDERSTANDING
A computer-implemented method, system, and computer program product for embedding contextual information in an image or video frames. A generative adversarial network (GAN) is trained to provide contextual information to be embedded in an image or video frames, where the contextual information includes text, sound, and/or video frames that provide context to the image or video frames. After the GAN is trained, an image or video frames to be embedded with contextual information are received. Features are then extracted from the received image/video frames. Using the GAN, images or video frames are identified in a database whose features have a similarity to the extracted features of the received image/video frames that exceeds a threshold value. Such identified images and/or video frames are associated with "references" containing contextual information, which are extracted. The received image/video frames are then augmented with the extracted references to provide context.
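The retrieval-and-augmentation step reduces to a similarity search, sketched below in Python. The random-projection feature extractor is a toy stand-in for the GAN-derived features in the disclosure, and the database entries and threshold are invented.

```python
# Sketch: embed the received image, find database images whose feature
# similarity exceeds a threshold, and attach their "references" as context.
import numpy as np

rng = np.random.default_rng(1)
proj = rng.normal(size=(64 * 64, 32))  # toy feature extractor

def features(img):
    v = img.reshape(-1) @ proj
    return v / (np.linalg.norm(v) + 1e-8)  # unit-norm embedding

database = []  # (features, reference) pairs -- hypothetical contents
for caption in ["eiffel tower at night", "crowd at a protest", "lab bench"]:
    database.append((features(rng.normal(size=(64, 64))), caption))

def augment_with_context(img, threshold=0.0):
    f = features(img)
    # Keep references whose cosine similarity exceeds the threshold.
    refs = [ref for g, ref in database if float(f @ g) > threshold]
    return {"image": img, "context": refs}

print(augment_with_context(rng.normal(size=(64, 64)))["context"])
```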
SYSTEM AND METHOD FOR SIMILARITY LEARNING IN DIGITAL PATHOLOGY
Systems and methods for similarity learning in digital pathology are provided. In one aspect, an apparatus for generating training image data includes a hardware memory configured to store executable instructions and a hardware processor in communication with the hardware memory, wherein the executable instructions, when executed by the processor, cause the processor to obtain a plurality of histopathology images, classify two or more of the histopathology images as similar or dissimilar, and create a dataset of training image data including the classified histopathology images.
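The dataset-creation step can be sketched directly from the claim language. Treating pairs that share a tissue label as "similar" is one plausible classification rule; the claims leave the rule open, and the image data here is synthetic.

```python
# Sketch: pair histopathology images and label each pair similar (1) or
# dissimilar (0) to build a similarity-learning training set.
import itertools
import numpy as np

rng = np.random.default_rng(2)
slides = [
    {"img": rng.normal(size=(32, 32)), "tissue": t}
    for t in ["breast", "breast", "colon", "colon", "lung"]
]

dataset = []
for a, b in itertools.combinations(slides, 2):
    label = 1 if a["tissue"] == b["tissue"] else 0  # shared label => similar
    dataset.append((a["img"], b["img"], label))

print(f"{len(dataset)} pairs, {sum(l for *_, l in dataset)} similar")
```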