Patent classifications
G06V40/174
Control system and method using in-vehicle gesture input
A control system and method for controlling a vehicle's functions using in-vehicle gesture input, and more particularly, a system that receives an occupant's gesture and controls the execution of vehicle functions. The control system using in-vehicle gesture input includes an input unit configured to receive a user's gesture, a memory configured to store a control program for in-vehicle gesture input, and a processor configured to execute the control program. The processor transmits a command for executing the function corresponding to a gesture according to the user's usage pattern.
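A minimal sketch of the gesture-to-command dispatch the abstract describes, with the usage pattern tracked per function. The gesture names, function names, and usage-pattern weighting are illustrative assumptions, not details from the patent.

```python
from collections import Counter

class GestureController:
    def __init__(self):
        # Base mapping from recognized gestures to vehicle functions
        # (illustrative; the patent does not enumerate gestures).
        self.mapping = {
            "swipe_left": "previous_track",
            "swipe_right": "next_track",
            "palm_up": "volume_up",
            "palm_down": "volume_down",
        }
        # Usage pattern: how often each function has been invoked.
        self.usage = Counter()

    def handle(self, gesture):
        """Return the command for a gesture and record it in the usage pattern."""
        command = self.mapping.get(gesture)
        if command is not None:
            self.usage[command] += 1
        return command

    def most_used(self):
        """Functions ranked by usage, e.g. to resolve ambiguous gestures."""
        return [cmd for cmd, _ in self.usage.most_common()]
```

The `usage` counter stands in for the patent's "usage pattern": a real system might use it to bias recognition toward the occupant's habitual commands.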
INFORMED PATIENT CONSENT THROUGH VIDEO TRACKING
A system, method, and computer-readable media for obtaining the informed consent of a patient for a medical procedure. Specifically, a video describing a medical procedure may be provided to the patient through a client device having two display portions. As the video plays in the first portion, the client device may capture the patient watching the video and display the captured image in the second portion. The entire display of the client device may be recorded, providing a record that the patient has watched the video describing the medical procedure and consents to the medical procedure.
System and Method for Capturing, Preserving, and Representing Human Experiences and Personality Through a Digital Interface
A system and method to capture and interact with a comprehensive digital record of an individual's biographical history and produce a synthetic model of their personality. The captured biographical history is a detailed record of this individual's actions, interactions, and experiences over a period which may span decades of their lifetime. The biographical history is indexed by areas of data variability and neural network confidence variability to identify points of likely human interest. A synthetic personality model is generated as a representation of the individual's personality structure, biases, sentiments, and traits. The synthetic personality can be interacted with through a digital interface and demonstrates the interaction patterns, triggers, and habits of the original individual. The functioning and the performance of the system over an individual's lifespan are optimized through data synthesis and disposition.
AUTOMATIC GENERATION OF AN IMAGE HAVING AN ATTRIBUTE FROM A SUBJECT IMAGE
The technology disclosed herein enables automatic generation of an image having an attribute using a subject image as a basis for the generated image. In a particular embodiment, a method includes executing a generator and a discriminator. Until a criterion is satisfied, the method includes iteratively performing steps a-c. In step a, the method includes providing an input vector and an attribute to the generator. The generator outputs a generated image. In step b, the method includes providing the generated image to the discriminator. The discriminator outputs feedback that indicates differences between the generated image and a subject image. In step c, the method includes determining whether the differences satisfy the criterion. When the differences do not satisfy the criterion, the input vector comprises the feedback in the next iteration of steps a-c. When the differences satisfy the criterion, the method includes associating the generated image with the attribute.
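The iterative loop in steps a-c can be sketched with trivial numeric stand-ins: the "images" are 1-D float vectors, and the generator and discriminator are simple functions rather than the neural networks the patent contemplates. Everything here is an assumption made for illustration.

```python
def generate(input_vector, attribute):
    # Step a: the generator combines the input vector with the attribute.
    return [v + attribute for v in input_vector]

def discriminate(generated, subject):
    # Step b: the discriminator reports differences between the
    # generated image and the subject image as feedback.
    return [s - g for g, s in zip(generated, subject)]

def generate_with_attribute(subject, attribute, tolerance=1e-6, max_iters=100):
    """Iterate steps a-c until the differences satisfy the criterion."""
    input_vector = [0.0] * len(subject)
    for _ in range(max_iters):
        image = generate(input_vector, attribute)
        feedback = discriminate(image, subject)
        # Step c: check whether the differences satisfy the criterion.
        if max(abs(d) for d in feedback) <= tolerance:
            # Criterion satisfied: associate the image with the attribute.
            return image, attribute
        # Otherwise the feedback becomes part of the next input vector.
        input_vector = [v + d for v, d in zip(input_vector, feedback)]
    raise RuntimeError("criterion not satisfied within max_iters")
```

With these toy stand-ins the loop converges in two iterations; a real generator/discriminator pair would converge far more gradually.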
ELECTRONIC APPARATUS, AND METHOD FOR DISPLAYING IMAGE ON DISPLAY DEVICE
Disclosed are an electronic apparatus and a method for displaying an image on a display device. The electronic apparatus comprises: a display device; an image acquisition device configured to acquire a surrounding image of the display device; and a processor configured to determine a background image of the display device according to the surrounding image, acquire a target range and a target object in the background image, determine a target image according to the background image, the target range, and the target object, and control the display device to display the target image, wherein the target image does not include the target object.
METHOD FOR TRACKING TARGET OBJECTS IN A SPECIFIC SPACE, AND DEVICE USING THE SAME
A method for tracking one or more objects in a specific space is provided. The method includes the steps of: (a) inputting original images of the specific space, taken by a camera, into an obfuscation network and instructing the obfuscation network to obfuscate the original images to generate obfuscated images, such that the obfuscated images are not identifiable as the original images by a human but are identifiable as the original images by a learning network; (b) inputting the obfuscated images into the learning network and instructing the learning network to detect obfuscated target objects, corresponding to target objects to be tracked, in the obfuscated images, thereby outputting information on the obfuscated target objects; and (c) tracking the obfuscated target objects in the specific space by referring to the information on the obfuscated target objects.
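Steps (a)-(c) can be sketched as a pipeline, assuming a trivial reversible per-pixel keyed offset in place of the trained obfuscation network, and a stub detector that works directly on obfuscated pixels. The key, pixel representation, and matching rule are all assumptions for illustration.

```python
KEY = 97  # stand-in for the obfuscation network's learned transform

def obfuscate(image):
    # (a) Obfuscate so a human cannot recognize the original content.
    return [(p + KEY) % 256 for p in image]

def detect_targets(obfuscated, target_value):
    # (b) A detector trained on obfuscated data finds target objects;
    # here: indices whose obfuscated value matches the obfuscated target.
    wanted = (target_value + KEY) % 256
    return [i for i, p in enumerate(obfuscated) if p == wanted]

def track(frames, target_value):
    # (c) Track the obfuscated target across frames of the specific space.
    return [detect_targets(obfuscate(frame), target_value) for frame in frames]
```

The point of the design is that the raw frames never reach the detector: only the obfuscated images do, so privacy is preserved while tracking still works.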
Multimodal inputs for computer-generated reality
Implementations of the subject technology provide determining an operating mode of an electronic device based at least in part on whether the electronic device is communicatively coupled to an associated base device. Based on the determined operating mode, the subject technology identifies a set of input modalities for initiating a recording of content within a field of view of the electronic device. The subject technology monitors sensor information generated by at least one sensor included in, or communicatively coupled to, the electronic device. Further, the subject technology initiates the recording of content within the field of view of the electronic device when the monitored sensor information indicates that at least one of the identified set of input modalities has been triggered.
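The mode-dependent gating described above can be sketched as follows; the mode names ("tethered"/"standalone") and the modality sets per mode are illustrative assumptions, not taken from the abstract.

```python
# Input modalities available per operating mode (assumed for illustration).
MODALITIES = {
    "tethered": {"voice", "gaze", "hand_gesture"},
    "standalone": {"voice", "button"},
}

def operating_mode(coupled_to_base):
    """Operating mode determined by coupling to the associated base device."""
    return "tethered" if coupled_to_base else "standalone"

def should_start_recording(coupled_to_base, triggered_modality):
    """True when the sensor-reported trigger is a valid modality for the mode."""
    mode = operating_mode(coupled_to_base)
    return triggered_modality in MODALITIES[mode]
```

The same sensor event (e.g. a gaze trigger) may start a recording in one mode and be ignored in the other, which is the behavior the abstract describes.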
Digital assistant and a corresponding method for voice-based interactive communication based on detected user gaze indicating attention
A method for voice-based interactive communication using a digital assistant. The method comprises: an attention detection step, in which the digital assistant detects a user's attention and as a result is set into a listening mode; a speaker detection step, in which the digital assistant detects the user as the current speaker; a speech sound detection step, in which the digital assistant detects and records speech uttered by the current speaker, and which further comprises a lip movement detection step, in which the digital assistant detects lip movement of the current speaker; a speech analysis step, in which the digital assistant parses the recorded speech and extracts verbal informational content from it; and a subsequent response step, in which the digital assistant provides feedback to the user based on the recorded speech.
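The step sequence (attention → listening mode → speaker/lip-movement confirmation → speech analysis → response) can be sketched with booleans and strings standing in for sensor data. The gating logic and the trivial "analysis" are illustrative assumptions.

```python
def assistant_turn(gaze_on_device, lips_moving, audio_text):
    """Return the assistant's response, or None if it should stay idle."""
    # Attention detection: gaze at the device sets the listening mode.
    if not gaze_on_device:
        return None
    listening = True
    # Speech is only accepted while lip movement confirms the current speaker,
    # which filters out background audio from other sources.
    if not (listening and lips_moving and audio_text):
        return None
    # Speech analysis: extract verbal content (trivially, normalized text).
    content = audio_text.strip().lower()
    # Response step: feedback based on the recorded speech.
    return f"heard: {content}"
```

The lip-movement check is the interesting design point: audio alone cannot distinguish the current speaker from a nearby television, but audio plus lip movement can.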
AGGREGATION OF UNCONSCIOUS AND CONSCIOUS BEHAVIORS FOR RECOMMENDATIONS AND AUTHENTICATION
A system may use unconscious and conscious behaviors for recommendations and authentication. A method, system, computer-readable storage medium, or apparatus provides for: sending a stimulus in the presence of an object, wherein the stimulus comprises video, audio, or text; observing activity of the object, wherein the object is a human or an animal; measuring the object's reaction to the stimulus; classifying the reaction; and transmitting a message based on the classification.
Media manipulation using cognitive state metric analysis
Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metrics. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
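One manipulation the abstract lists, shortening low-engagement portions, can be sketched as below. The per-segment intensity values, the threshold, and the shrink factor are assumptions for illustration; the engagement score here is simply the mean intensity.

```python
def engagement_score(intensities):
    """Engagement score for the presentation: mean emotional intensity (0..1)."""
    return sum(intensities) / len(intensities)

def manipulate(segment_lengths, intensities, threshold=0.4, shrink=0.5):
    """Shrink the length of segments whose emotional intensity falls
    below the threshold, leaving high-intensity segments untouched."""
    return [
        length if intensity >= threshold else length * shrink
        for length, intensity in zip(segment_lengths, intensities)
    ]
```

Trimming low-intensity segments changes both "the length of different portions" and, as a consequence, "the overall length of the media presentation", two of the optimization targets the abstract names.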