Patent classifications
G06V40/174
Determining Features based on Gestures and Scale
A system, method, and computer-readable medium for associating a person’s gestures with specific features of objects are disclosed. Using one or more image capture devices, a person’s gestures and that person’s location in an environment are determined. Using the determined distances between the person and objects in the environment, together with scales associated with features of those objects, a list of the specific features in the person’s field of view may be determined. Further, a facial expression of the person may be scored and that score associated with one or more specific features.
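The distance-and-scale logic above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the class names, the `visible_within_m` field (treating a feature's "scale" as the maximum distance at which it is discernible), and the 2-D metre coordinates are all assumptions made for the example.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class Feature:
    name: str
    visible_within_m: float   # "scale": max distance at which the feature is discernible

@dataclass
class SceneObject:
    name: str
    position: tuple           # (x, y) location in the environment, in metres
    features: list

def features_in_view(person_pos, objects):
    """List (object, feature) pairs whose visibility scale covers the
    person's distance to the object."""
    visible = []
    for obj in objects:
        d = dist(person_pos, obj.position)
        for feature in obj.features:
            if d <= feature.visible_within_m:
                visible.append((obj.name, feature.name))
    return visible
```

For example, a poster 5 m away might contribute its headline (legible to 15 m) but not its fine print (legible only to 2 m) to the list of features a facial-expression score could then be associated with.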
BODY OR CAR MOUNTED CAMERA SYSTEM
A camera system includes an administrator and a series of camera nodes. The administrator includes a computer server, a wireless transmitter/receiver (transceiver), and machine learning (ML) chips. Each camera node includes a video capturing device or camera, a wireless transceiver, a plurality of ML chips, a GPS sensor, and an audio microphone so that sound is also included in the video data file. The ML chips are capable of receiving data in the form of a video file and processing the data to determine whether it includes “actionable” data; the ML chip draws certain inferences from the captured video data. The camera nodes are wirelessly linkable to each other, and the communication between the several camera nodes forms a “mesh network” wherein the camera nodes may transmit data to each other, thus propagating common data, rules, or inferences throughout the mesh network.
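The propagation of common data through the mesh can be illustrated with a simple flood: a node that learns a new rule or inference forwards it to every linked peer, and the membership check stops the recursion once all nodes share it. This is a hypothetical sketch of the mesh behaviour described above, not the patent's protocol; the `CameraNode` class and its fields are invented for the example.

```python
class CameraNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.peers = []          # wirelessly linked neighbouring nodes
        self.knowledge = set()   # common data, rules, or inferences held by this node

    def link(self, other):
        """Wirelessly link two camera nodes (bidirectional)."""
        self.peers.append(other)
        other.peers.append(self)

    def broadcast(self, item):
        """Flood an inference through the mesh so every reachable node shares it."""
        if item in self.knowledge:   # already propagated here; stop
            return
        self.knowledge.add(item)
        for peer in self.peers:
            peer.broadcast(item)
```

In a chain a–b–c, a rule broadcast from a reaches c via b even though a and c are not directly linked, which is the point of the mesh topology.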
SYSTEMS AND METHODS FOR EVALUATING HEALTH OUTCOMES
A system and method for determining a health outcome, comprising: receiving first and second images or videos of a wound of a patient; comparing the images or videos to detect a characteristic of the wound, the characteristic including an identification of a change in the wound; receiving at least one non-image or non-video data input that includes data about the patient; executing a machine learning algorithm comprising a dataset of images or videos to analyze the identified change in the wound, to correlate at least one first image or video and at least one second image or video with the at least one non-image or non-video data input, and to train the machine learning algorithm with the identification of the change in the wound; and generating a medical outcome prediction regarding a status and recovery of the patient in response to correlating the at least one non-image or non-video data input with the first and second images or videos.
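The core fusion step — an image-derived change in the wound combined with a non-image patient input to yield an outcome prediction — can be sketched as below. Everything here is a toy stand-in for the machine learning algorithm the abstract describes: binary masks for segmented wound images, a relative-area change as the detected characteristic, and a `diabetic` flag with a hand-picked threshold as the non-image input.

```python
def wound_area(mask):
    """Count wound pixels in a binary mask (list of rows of 0/1)."""
    return sum(sum(row) for row in mask)

def change_ratio(first_mask, second_mask):
    """Relative change in wound area between the first and second images;
    negative values mean the wound has shrunk."""
    a1, a2 = wound_area(first_mask), wound_area(second_mask)
    return (a2 - a1) / a1 if a1 else 0.0

def predict_outcome(first_mask, second_mask, patient_data, shrink_threshold=-0.2):
    """Toy fusion of the image-derived change with a non-image data input."""
    healing = change_ratio(first_mask, second_mask) <= shrink_threshold
    slow_healing_risk = patient_data.get("diabetic", False)
    return "on-track" if healing and not slow_healing_risk else "review"
```

A learned model would replace the fixed threshold, but the data flow — two images in, one non-image input in, one prediction out — matches the abstract.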
Person replacement utilizing deferred neural rendering
Techniques are disclosed for performing video synthesis of audiovisual content. In an example, a computing system may determine first parameters of a face and body of a source person from a first frame in a video shot. The system also determines second parameters of a face and body of a target person. The system determines that the target person is a replacement for the source person in the first frame. The system generates third parameters of the target person based on merging the first parameters with the second parameters. The system then performs deferred neural rendering of the target person based on a neural texture that corresponds to a texture space of the video shot. The system then outputs a second frame that shows the target person as the replacement for the source person.
Facial synchronization utilizing deferred neural rendering
Techniques are disclosed for performing video synthesis of audiovisual content. In an example, a computing system may determine first facial parameters of a face of a particular person from a first frame in a video shot, whereby the video shot shows the particular person speaking a message. The system may determine second facial parameters based on an audio file that corresponds to the message being spoken in a different way from the video shot. The system may generate third facial parameters by merging the first and the second facial parameters. The system may identify a region of the face that is associated with a difference between the first and second facial parameters, render the region of the face based on a neural texture of the video shot, and then output a new frame showing the face of the particular person speaking the message in the different way.
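The parameter-merging and region-selection steps shared by both deferred-neural-rendering abstracts can be sketched with scalar per-region parameters. This is an illustrative simplification: real facial parameters are vectors (e.g. blendshape or 3DMM coefficients), and the region names, blend weight `alpha`, and tolerance `tol` are assumptions for the example, not values from the patent.

```python
def merge_parameters(video_params, audio_params, alpha=0.7):
    """Blend per-region facial parameters; alpha weights the audio-driven set."""
    return {region: (1 - alpha) * video_params[region] + alpha * audio_params[region]
            for region in video_params}

def regions_to_rerender(video_params, audio_params, tol=0.05):
    """Only regions whose parameters meaningfully differ need re-rendering
    against the neural texture; the rest of the frame is left untouched."""
    return [region for region in video_params
            if abs(video_params[region] - audio_params[region]) > tol]
```

If only the mouth parameters diverge between the original shot and the audio-driven target, only the mouth region is handed to the deferred neural renderer.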
System and method for an interactive digitally rendered avatar of a subject person
A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting is described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.
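The join flow — receive invite, generate the avatar from stored data, instantiate a platform integrator, join the meeting — can be outlined as a small session class. This is a hypothetical skeleton of the described sequence; the class, method names, and dictionary shapes are invented for illustration.

```python
class InteractiveAvatarSession:
    """Sketch of the flow: invite -> avatar -> platform integrator -> join."""

    def __init__(self, database):
        self.database = database  # data collections keyed by subject person

    def handle_invite(self, invite):
        subject = invite["subject_person"]
        platform = invite["platform"]
        avatar = self.generate_avatar(subject)
        integrator = self.instantiate_integrator(platform)
        # the integrator would transform inputs/outputs between the
        # platform and the avatar system for the duration of the meeting
        return {"avatar": avatar, "integrator": integrator, "joined": True}

    def generate_avatar(self, subject):
        data = self.database.get(subject, {})
        return {"subject": subject, "trained_on": len(data)}

    def instantiate_integrator(self, platform):
        return {"platform": platform}
```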
Photo album management method, storage medium and electronic device
The present disclosure provides a photo album management method. The method includes obtaining voice search information from a user, performing intent recognition on the voice search information to obtain an intent recognition result which indicates an intent of the user for a photo album, obtaining a voiceprint feature from the voice search information to determine identity information of the user, sending the intent recognition result and the identity information of the user, and opening the photo album according to the intent recognition result and the identity information.
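The two extractions the method runs on the same utterance — intent recognition on the words, voiceprint matching for identity — can be sketched as follows. The cosine-similarity matcher over enrolled embedding vectors and the keyword-based intent rule are stand-ins chosen for the example; the patent does not specify either technique, and the 0.8 threshold is arbitrary.

```python
def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def identify_speaker(voiceprint, enrolled, threshold=0.8):
    """Match the utterance's voiceprint against enrolled users; None if no match."""
    best_user, best_sim = None, threshold
    for user, reference in enrolled.items():
        sim = cosine(voiceprint, reference)
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user

def recognize_intent(transcript):
    """Toy intent recognizer over the transcribed voice search information."""
    if "open" in transcript and "album" in transcript:
        return "open_album"
    return "unknown"
```

The album is then opened only when both results agree with policy: a recognized intent and an identified (authorized) user.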
System and method for visually tracking persons and imputing demographic and sentiment data
A visual tracking system for tracking and identifying persons within a monitored location comprises a plurality of cameras and a visual processing unit. Each camera produces a sequence of video frames depicting one or more of the persons. The visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and to further determine demographic data and sentiment data using the visual featurization data. The visual tracking system further has a recommendation module adapted to identify a customer need for each person using that person’s sentiment data in addition to context data, and to generate an action recommendation for addressing the customer need. The visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.
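Maintaining a coherent track identity across cameras from "a combination of motion data and visual featurization data" is, at its core, a matching problem. The sketch below combines a 1-D motion cost (distance from the track's predicted position) with an appearance cost (one minus feature similarity) into a weighted sum; the weights, the cost cutoff, and the dictionary shapes are assumptions for illustration, not the patent's method.

```python
def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def match_detection(tracks, detection, w_motion=0.5, w_visual=0.5, max_cost=0.5):
    """Assign a detection from any camera to an existing track, or return
    None to start a new track when every candidate costs too much."""
    best_id, best_cost = None, max_cost
    for track_id, track in tracks.items():
        motion_cost = abs(track["predicted_x"] - detection["x"])  # 1-D sketch
        visual_cost = 1.0 - cosine_sim(track["feature"], detection["feature"])
        cost = w_motion * motion_cost + w_visual * visual_cost
        if cost < best_cost:
            best_id, best_cost = track_id, cost
    return best_id
```

The same visual featurization vector that drives the appearance cost is what the system would later feed to the demographic and sentiment models.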
Selectively activating a resource by detecting emotions through context analysis
A method selectively activates a resource to accommodate an advanced emotion. A supervisor computer receives a first piece of content and applies an emotion classifier to it in order to create a first concept/emotion/sentiment/time tuple. The supervisor computer creates a second concept/emotion/sentiment/time tuple for a second piece of content and compares the first and second tuples. If the concept in the first piece of content matches the concept in the second piece of content, but at least one of the emotion, sentiment, and time of the first piece of content does not match that of the second piece of content, the supervisor computer determines that the emotion of the second piece of content is an advanced emotion that is not expressed by the first or second pieces of content, and activates a resource that accommodates the advanced emotion.
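The tuple comparison is the one precisely specified step in the abstract: same concept, but a mismatch in at least one of emotion, sentiment, or time. A direct sketch, with the tuple modelled as a namedtuple (the field values in the test are invented examples):

```python
from collections import namedtuple

# concept/emotion/sentiment/time tuple as described in the abstract
CEST = namedtuple("CEST", "concept emotion sentiment time")

def detect_advanced_emotion(t1, t2):
    """True when the concepts match but emotion, sentiment, or time diverges,
    i.e. the condition under which the supervisor computer activates a resource."""
    if t1.concept != t2.concept:
        return False
    return (t1.emotion != t2.emotion
            or t1.sentiment != t2.sentiment
            or t1.time != t2.time)
```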
Control apparatus and computer-readable storage medium
A control apparatus is provided, including: an other-vehicle emotion acquiring unit configured to acquire an other-vehicle emotion indicating an emotion of an occupant of a second vehicle different from a first vehicle; a determination unit configured to determine whether to perform notification to an occupant of the first vehicle based on the other-vehicle emotion; and a notification control unit configured to perform control to notify the occupant of the first vehicle of notification information based on the other-vehicle emotion when the determination unit determines to perform the notification.
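The determination and notification-control units can be sketched as two functions: one deciding whether the other-vehicle emotion warrants notifying the first vehicle's occupant, one producing the notification information. The emotion labels, intensity score, and threshold are assumptions for the example; the patent does not specify how the determination is made.

```python
def should_notify(other_vehicle_emotion, intensity, threshold=0.6):
    """Determination unit sketch: notify only when the second vehicle's
    occupant shows a safety-relevant emotion strongly enough."""
    alerting_emotions = {"anger", "fear"}
    return other_vehicle_emotion in alerting_emotions and intensity >= threshold

def build_notification(other_vehicle_emotion, intensity):
    """Notification control sketch: notification information for the occupant
    of the first vehicle, or None when no notification is performed."""
    if should_notify(other_vehicle_emotion, intensity):
        return f"Caution: a nearby driver appears to show {other_vehicle_emotion}."
    return None
```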