Patent classifications
G06V10/803
METHOD FOR OPERATING A VEHICLE INTERIOR MONITORING SYSTEM, CONTROL DEVICE, VEHICLE INTERIOR MONITORING SYSTEM AND VEHICLE
The disclosure relates to a method for operating a vehicle interior monitoring system including at least one camera unit. A control device sets an adjustable camera parameter of the camera unit by sending a camera-specific control command to the camera unit, and receives and evaluates at least one image recorded by the camera unit; a result of the evaluation is output as a camera-specific result datum. The control device generates a general control command that sets the adjustable camera parameter of the camera unit. Based on a camera configuration of the camera unit saved in the control device, the general control command is converted into the camera-specific control command and the camera-specific result datum is converted into a general result datum, and the general result datum of the camera unit is provided to a data fusion device.
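The abstract's key mechanism is a translation layer that maps general control commands and camera-specific result data through a per-camera configuration stored in the control device. A minimal Python sketch of that idea (the CameraConfig fields and the linear scale/offset mapping are illustrative assumptions, not the patent's encoding):

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    """Hypothetical per-camera configuration stored in the control device."""
    param_name: str   # camera-specific name of the adjustable parameter
    scale: float      # unit conversion, e.g. generic 0..1 -> device range
    offset: float

def to_camera_specific(general_cmd: dict, cfg: CameraConfig) -> dict:
    """Convert a general control command into a camera-specific one."""
    value = general_cmd["value"] * cfg.scale + cfg.offset
    return {cfg.param_name: value}

def to_general_result(camera_result: dict, cfg: CameraConfig) -> dict:
    """Convert a camera-specific result datum back into a general result
    datum, so a data fusion device can combine heterogeneous cameras."""
    raw = camera_result[cfg.param_name]
    return {"value": (raw - cfg.offset) / cfg.scale}

cfg = CameraConfig(param_name="exposure_us", scale=10000.0, offset=100.0)
cmd = to_camera_specific({"value": 0.5}, cfg)          # {'exposure_us': 5100.0}
res = to_general_result({"exposure_us": 5100.0}, cfg)  # {'value': 0.5}
```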
SYSTEMS AND METHODS FOR DYNAMIC PASSPHRASES
A technical validation mechanism is described that uses facial feature recognition and tokenization technology in combination with machine learning models. Specific facial or auditory characteristics of how an originating script is effectuated can be used to train the machine learning models, which can then validate a video or a particular dynamically generated passphrase by comparing overlapping phonemes or phoneme transitions between the originating script and the dynamically generated passphrase.
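The validation step hinges on comparing overlapping phonemes or phoneme transitions between the originating script and the dynamically generated passphrase. A minimal sketch of one plausible overlap measure (Jaccard similarity over phoneme bigrams; the function names and ARPAbet example are assumptions, not the patent's method):

```python
def phoneme_transitions(phonemes):
    """Adjacent phoneme pairs, e.g. ['HH','EH','L'] -> {('HH','EH'), ('EH','L')}."""
    return set(zip(phonemes, phonemes[1:]))

def transition_overlap(script_phonemes, passphrase_phonemes):
    """Jaccard overlap of phoneme transitions between the originating script
    and a dynamically generated passphrase; a model trained on how the user
    articulates the shared transitions could then validate the new phrase."""
    a = phoneme_transitions(script_phonemes)
    b = phoneme_transitions(passphrase_phonemes)
    return len(a & b) / len(a | b) if a | b else 0.0

# e.g. ARPAbet phonemes for "hello" vs "yellow" share ('EH','L') and ('L','OW')
print(transition_overlap(["HH", "EH", "L", "OW"], ["Y", "EH", "L", "OW"]))  # 0.5
```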
IDENTIFYING OBJECTS WITHIN IMAGES FROM DIFFERENT SOURCES
Techniques are disclosed for providing a notification that a person is at a particular location. For example, a resident device may receive, from a user device, an image that shows a face of a first person, the image being captured by a first camera of the user device. The resident device may also receive, from another device having a second camera, a second image showing a portion of a face of a second person, the second camera having a viewable area showing a particular location. The resident device may determine a score indicating a level of similarity between a first set of characteristics associated with the face of the first person and a second set of characteristics associated with the face of the second person. The resident device may then provide a notification to the user device based on the determined score.
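The score indicating a level of similarity is left unspecified in the abstract; a common stand-in is cosine similarity between face-embedding vectors, thresholded to decide whether to notify. A hedged sketch (embedding size, threshold, and all names are assumptions):

```python
import numpy as np

def similarity_score(features_a: np.ndarray, features_b: np.ndarray) -> float:
    """Cosine similarity between two face-characteristic vectors; a stand-in
    for the patent's unspecified scoring function."""
    a = features_a / np.linalg.norm(features_a)
    b = features_b / np.linalg.norm(features_b)
    return float(a @ b)

def maybe_notify(score: float, threshold: float = 0.8) -> bool:
    """Send the notification only when the faces are similar enough."""
    return score >= threshold

rng = np.random.default_rng(0)
gallery = rng.normal(size=128)                 # embedding from the user-device photo
probe = gallery + 0.1 * rng.normal(size=128)   # embedding from the second camera
print(maybe_notify(similarity_score(gallery, probe)))
```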
HEARING AID SYSTEMS AND METHODS
A system may include a wearable camera configured to capture a plurality of images from an environment of a user and a microphone configured to capture sounds from an environment of the user. The system may also include a processor programmed to receive the images; identify a representation of an individual in one of the images; identify a lip movement associated with a mouth of the individual, based on analysis of the images; receive audio signals representative of the sounds; identify, based on analysis of the sounds, an audio signal associated with a voice of the individual; and cause transmission of the audio signal to a hearing interface device configured to provide sound to an ear of the user.
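One plausible reading of "identify a lip movement based on analysis of the images" is tracking the mouth aperture across frames and flagging movement when it varies enough. A minimal sketch under that assumption (the landmark format and threshold are invented for illustration):

```python
import numpy as np

def mouth_aperture(upper_lip: np.ndarray, lower_lip: np.ndarray) -> float:
    """Vertical gap between an upper- and a lower-lip landmark, one frame."""
    return float(abs(upper_lip[1] - lower_lip[1]))

def lips_are_moving(aperture_per_frame, threshold: float = 0.5) -> bool:
    """Declare lip movement when the mouth aperture varies enough over time."""
    return float(np.std(aperture_per_frame)) > threshold

# Per-frame (x, y) landmarks for a talking mouth: the gap opens and closes
frames = [((10, 20), (10, 22)), ((10, 19), (10, 26)), ((10, 20), (10, 21))]
apertures = [mouth_aperture(np.array(u), np.array(l)) for u, l in frames]
print(lips_are_moving(apertures))  # True: std of [2, 7, 1] exceeds 0.5
```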
Lip-tracking hearing aid
A system may include a wearable camera configured to capture a plurality of images from an environment of a user and a microphone configured to capture sounds from an environment of the user. The system may also include a processor programmed to receive the images; identify a representation of an individual in one of the images; identify a lip movement associated with a mouth of the individual, based on analysis of the images; receive audio signals representative of the sounds; identify, based on analysis of the sounds, a first audio signal associated with a first voice and a second audio signal associated with a second voice; cause selective conditioning of the first audio signal based on a determination that the first audio signal is associated with the identified lip movement; and cause transmission of the selectively conditioned first audio signal to a hearing interface device.
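The selective-conditioning step requires deciding which of several separated voices belongs to the visible speaker. One common approach, assumed here rather than taken from the patent, is to correlate each voice's short-time energy envelope with the lip-activity signal and amplify the best match:

```python
import numpy as np

def envelope(signal: np.ndarray, win: int = 160) -> np.ndarray:
    """Short-time energy envelope of an audio signal."""
    frames = signal[: len(signal) // win * win].reshape(-1, win)
    return (frames ** 2).mean(axis=1)

def select_and_condition(voices, lip_activity, gain: float = 2.0):
    """Amplify the voice whose energy envelope best tracks the speaker's
    lip activity (Pearson correlation); attenuate the rest."""
    def corr(v):
        env = envelope(v)
        n = min(len(env), len(lip_activity))
        return np.corrcoef(env[:n], lip_activity[:n])[0, 1]
    best = max(range(len(voices)), key=lambda i: corr(voices[i]))
    return [v * (gain if i == best else 1.0 / gain) for i, v in enumerate(voices)]

rng = np.random.default_rng(1)
lip = np.array([1.0, 0.0, 1.0, 0.0, 1.0])        # per-frame lip motion
v1 = np.repeat(lip, 160) * rng.normal(size=800)  # loud only when lips move
v2 = 0.5 * rng.normal(size=800)                  # unrelated second voice
out = select_and_condition([v1, v2], lip)        # v1 amplified, v2 attenuated
```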
Digital Health Passport to Verify Identity of a User
The technology disclosed relates to authenticating users using a plurality of non-deterministic biometric identifiers. The method includes generating a scannable code upon receiving a success nonce from a registration server. The registration server can access a user identifier and a hash of at least a signature using the success nonce. The signature can be generated based at least in part upon a biometric identifier of a user. The method includes recreating the hash of the signature stored by the registration server. The method includes generating the scannable code by encrypting the success nonce and the recreated hash. The biometric identifier of the user is generated by feeding a plurality of non-deterministic biometric inputs to a trained machine learning model producing a plurality of feature vectors. The method includes projecting the plurality of feature vectors onto a surface of a unit hyper-sphere and computing a characteristic identity vector representing the user.
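The final two steps, projecting feature vectors onto the surface of a unit hypersphere and computing a characteristic identity vector, admit a straightforward reading: L2-normalize each vector, then renormalize their mean. A sketch under that assumed reading:

```python
import numpy as np

def identity_vector(feature_vectors: np.ndarray) -> np.ndarray:
    """Project each biometric feature vector onto the unit hypersphere
    (L2 normalization), then compute a characteristic identity vector as
    the renormalized mean; an assumed reading of the abstract, not the
    patent's exact construction."""
    on_sphere = feature_vectors / np.linalg.norm(feature_vectors, axis=1, keepdims=True)
    mean = on_sphere.mean(axis=0)
    return mean / np.linalg.norm(mean)

rng = np.random.default_rng(2)
base = rng.normal(size=64)
samples = base + 0.05 * rng.normal(size=(10, 64))  # noisy biometric readings
ident = identity_vector(samples)
print(np.linalg.norm(ident))  # ~1.0: the identity vector lies on the sphere
```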
Vehicle sensor data sharing
Two vehicles—an ego vehicle and an other vehicle—can share sensor data in a streamlined manner. One or more sensors can be configured to acquire first environment data of an external environment of the ego vehicle. A data summary based on second environment data of an external environment of the other vehicle can be received. Whether there is a common region of sensor coverage between the ego vehicle and the other vehicle can be determined. In response to there being a common region, the first environment data that is located within the common region can be identified and the resolution level of the identified first environment data can be reduced. The first environment data that has the reduced resolution level and a remainder of the first environment data excluding the identified first environment data can be transmitted to the other vehicle.
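The abstract describes two concrete operations: finding the common region of sensor coverage and reducing resolution there before transmission. A minimal 2-D sketch (rectangular coverage areas and every-Nth-point downsampling are simplifying assumptions):

```python
import numpy as np

def common_region(ego_bbox, other_bbox):
    """Axis-aligned overlap of two coverage rectangles (xmin, ymin, xmax, ymax),
    or None when the vehicles share no sensor coverage."""
    xmin, ymin = max(ego_bbox[0], other_bbox[0]), max(ego_bbox[1], other_bbox[1])
    xmax, ymax = min(ego_bbox[2], other_bbox[2]), min(ego_bbox[3], other_bbox[3])
    return (xmin, ymin, xmax, ymax) if xmin < xmax and ymin < ymax else None

def share_sensor_data(points: np.ndarray, region, keep_every: int = 10):
    """Downsample points inside the common region (the other vehicle already
    sees it) and keep full resolution for the remainder."""
    if region is None:
        return points
    xmin, ymin, xmax, ymax = region
    inside = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
              (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
    reduced = points[inside][::keep_every]  # reduced-resolution overlap
    remainder = points[~inside]             # full-resolution remainder
    return np.vstack([reduced, remainder])

pts = np.random.default_rng(3).uniform(0, 100, size=(1000, 2))
region = common_region((0, 0, 60, 60), (40, 40, 100, 100))  # (40, 40, 60, 60)
print(share_sensor_data(pts, region).shape)
```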
Training dataset generation for depth measurement
A system for generation of a training dataset is provided. The system controls a depth sensor to capture, from a first viewpoint, a first image and a first depth value associated with a first object. The system receives tracking information from a handheld device associated with the depth sensor, based on a movement of the handheld device and the depth sensor in a 3D space. The system generates graphic information corresponding to the first object based on the received tracking information. The graphic information includes a representation of the first object from a second viewpoint. The system calculates a second depth value associated with the first object, based on the graphic information. The system generates, for a neural network model, a training dataset which includes a first combination of the first image and the first depth value, and a second combination of second images corresponding to the graphic information and the second depth value.
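The dataset pairs a real capture (first image, first depth) with a synthetic second-viewpoint sample derived from the handheld device's tracked pose. A sketch of how a known pose yields the second depth value for a point (the 4x4 pose convention and file names are illustrative assumptions):

```python
import numpy as np

def depth_from_viewpoint(point_world: np.ndarray, cam_pose: np.ndarray) -> float:
    """Depth of a 3D point as seen by a camera with a 4x4 world-to-camera pose."""
    p = cam_pose @ np.append(point_world, 1.0)
    return float(p[2])

# First sample: captured image + depth measured by the real depth sensor
point = np.array([0.0, 0.0, 2.0])
first_depth = depth_from_viewpoint(point, np.eye(4))  # 2.0 m

# Handheld-device tracking gives the second viewpoint: camera moved 0.5 m forward
second_pose = np.eye(4)
second_pose[2, 3] = -0.5
second_depth = depth_from_viewpoint(point, second_pose)  # 1.5 m

training_dataset = [
    {"image": "captured.png", "depth": first_depth},   # real capture
    {"image": "rendered.png", "depth": second_depth},  # synthetic second viewpoint
]
```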
Multi-modal sensor data fusion for perception systems
A method includes fusing multi-modal sensor data from a plurality of sensors having different modalities. At least one region of interest is detected in the multi-modal sensor data. One or more patches of interest are detected in the multi-modal sensor data based on detecting the at least one region of interest. A model that uses a deep convolutional neural network is applied to the one or more patches of interest. Post-processing of a result of applying the model is performed to produce a post-processing result for the one or more patches of interest. A perception indication of the post-processing result is output.
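The pipeline in the abstract is: detect a region of interest in the fused data, cut it into patches, apply a CNN to each patch, and post-process the results into a perception indication. A toy end-to-end sketch (the thresholding, tiling, and stub cnn callable are all assumptions standing in for the real components):

```python
import numpy as np

def detect_roi(fused: np.ndarray, thresh: float = 0.5):
    """Coarse region of interest: bounding box of above-threshold fused responses."""
    ys, xs = np.nonzero(fused > thresh)
    return (ys.min(), xs.min(), ys.max() + 1, xs.max() + 1) if ys.size else None

def extract_patches(fused, roi, size=8):
    """Tile the region of interest into fixed-size patches for the CNN."""
    y0, x0, y1, x1 = roi
    return [fused[y:y + size, x:x + size]
            for y in range(y0, y1 - size + 1, size)
            for x in range(x0, x1 - size + 1, size)]

def perceive(fused, cnn):
    roi = detect_roi(fused)
    if roi is None:
        return "no object"
    patches = extract_patches(fused, roi)
    if not patches:
        return "no object"
    scores = [cnn(p) for p in patches]
    # Post-processing: average patch scores into one perception indication
    return "object" if np.mean(scores) > 0.5 else "no object"

fused = np.zeros((32, 32))
fused[8:24, 8:24] = 0.9  # stand-in for fused multi-modal sensor responses
print(perceive(fused, cnn=lambda patch: patch.mean()))  # 'object'
```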