Patent classifications
G06F2218/22
SELECTIVE SOUND MODIFICATION FOR VIDEO COMMUNICATION
In various embodiments, a communication application selectively modifies sounds associated with a selected location or entity in one or more images. In operation, the communication application receives an image of an environment of a first device and an audio signal associated with the environment. The communication application receives first user input selecting a location in the image, and modifies a first sound included in the audio signal based on the selected location in the image, where at least a portion of the audio signal is unmodified.
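A minimal sketch of the modification step described above, under the assumption (not stated in the abstract) that each sound source is available as a separate track with a known image location; the function name and data layout are hypothetical:

```python
# Hypothetical sketch: attenuate only the sound source nearest a user-selected
# image location, leaving the rest of the mix unmodified.

def modify_selected_sound(sources, selected_xy, gain=0.0):
    """sources: list of dicts with 'xy' (image location) and 'samples' (audio).
    Returns the remixed signal with the nearest source scaled by `gain`."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # Find the source whose image location is closest to the selection.
    target = min(range(len(sources)),
                 key=lambda i: dist2(sources[i]["xy"], selected_xy))

    n = len(sources[0]["samples"])
    mix = [0.0] * n
    for i, src in enumerate(sources):
        g = gain if i == target else 1.0   # only the selected source is modified
        for t in range(n):
            mix[t] += g * src["samples"][t]
    return mix
```

With `gain=0.0` the selected source is muted entirely; intermediate gains would attenuate or amplify it while the remainder of the audio signal passes through unchanged, matching the "at least a portion of the audio signal is unmodified" language.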
System and method for non-invasive assessment of elevated left ventricular end-diastolic pressure (LVEDP)
A system for noninvasive extraction, identification, and marking of heart valve signals to evaluate and monitor elevated left ventricular end-diastolic pressure (LVEDP) or pulmonary capillary wedge pressure (PCWP) uses at-rest assessment of hemodynamic performance based on quantitative measurements of heart- and lung-related parameters and cardiac events for diagnostic and therapeutic purposes. The system includes one or more signals from one or more noninvasive sensors or transducers that measure one or more physiological effects correlated with cardiopulmonary functions, transmission of the data to a computing device, and analysis software in which a trained algorithm processes the data to determine the state or condition of elevated LVEDP or PCWP and provides an output indicative of that state or condition. The described noninvasive cardiopulmonary health assessment and monitoring systems and methods can provide effective at-home self-assessment or an integrated telehealth remote patient monitoring (RPM) system.
Systems and methods for identifying biological structures associated with neuromuscular source signals
A system comprising a plurality of neuromuscular sensors, each of which is configured to record a time-series of neuromuscular signals from a surface of a user's body; and at least one computer hardware processor programmed to perform: applying a source separation technique to the time series of neuromuscular signals recorded by the plurality of neuromuscular sensors to obtain a plurality of neuromuscular source signals and corresponding mixing information; providing features, obtained from the plurality of neuromuscular source signals and/or the corresponding mixing information, as input to a trained statistical classifier and obtaining corresponding output; and identifying, based on the output of the trained statistical classifier, and for each of one or more of the plurality of neuromuscular source signals, an associated set of one or more biological structures.
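The separate-then-classify pipeline above can be illustrated with a toy stand-in: an RMS feature per separated source signal and a nearest-centroid rule in place of the patent's trained statistical classifier. All names and the centroid table are hypothetical:

```python
# Hypothetical sketch: label each separated neuromuscular source signal with a
# biological structure via a simple feature + nearest-centroid classifier
# (standing in for the trained statistical classifier in the abstract).

def rms(signal):
    """Root-mean-square amplitude, used here as a one-number feature."""
    return (sum(s * s for s in signal) / len(signal)) ** 0.5

def identify_structures(source_signals, centroids):
    """centroids: {structure_name: trained feature value}.
    Returns one structure label per source signal."""
    labels = []
    for sig in source_signals:
        f = rms(sig)
        labels.append(min(centroids, key=lambda name: abs(centroids[name] - f)))
    return labels
```

A real system would use richer features (and mixing information from the source-separation step) and a classifier trained on labeled recordings; the sketch only shows the shape of the data flow.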
PAIN ASSESSMENT METHOD BASED ON DEEP LEARNING MODEL AND ANALYSIS DEVICE
Disclosed is a pain assessment method using a deep learning model. The method includes receiving, by an analysis device, an image indicating activity in a specific brain area of a subject animal; inputting, by the analysis device, images of regions of interest in the image into a neural network model; and assessing the pain of the subject animal according to the result output by the neural network model.
End-to-end deep neural network for auditory attention decoding
In one aspect of the present disclosure, a method includes: receiving neural data responsive to a listener's auditory attention; receiving an acoustic signal responsive to a plurality of acoustic sources; for each of the plurality of acoustic sources: generating, from the received acoustic signal, audio data comprising one or more features of the acoustic source, forming combined data representative of the neural data and the audio data, and providing the combined data to a classification network configured to calculate a similarity score between the neural data and the acoustic source using one or more similarity metrics; and using the similarity scores calculated for each of the acoustic sources to identify, from the plurality of acoustic sources, an acoustic source associated with the listener's auditory attention.
APPARATUSES, COMPUTER-IMPLEMENTED METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMPROVED IDENTITY VERIFICATION USING SENSOR DATA PROCESSING
Embodiments of the present disclosure provide improved user identity validation. Embodiments of the present disclosure provide accurate and secure user identity validation in contexts where existing user validation algorithm(s), for example facial recognition algorithms, fail due to obfuscation of user physical characteristic(s) by clothing, equipment such as PPE masks, and the like. Some example embodiments receive captured data associated with a user, the captured data comprising at least imaging data associated with the user, detect, from the imaging data, machine decodable data associated with the user, determine an asserted user identity associated with the user by decoding the machine decodable data, and validate the asserted user identity associated with the user utilizing at least a remaining portion of the captured data.
User-specific customization of video conferences using multimodal biometric characterization
In one embodiment, a method includes an intelligent communication device detecting that a person is visible to a camera of the device, determining a first biometric characteristic of the person discernable by the device, associating the first biometric characteristic with a user identifier unique to the person, determining, while the person is identifiable based on the first biometric characteristic, a second biometric characteristic of the person discernable by the device, and associating the second biometric characteristic with the user identifier. The method also includes the device determining that a detected person has a detected biometric characteristic, determining that the detected person is associated with the user identifier by matching the detected biometric characteristic to the first biometric characteristic or the second biometric characteristic, and applying, while the detected person is identifiable based on the detected biometric characteristic, a user-specific customization associated with the user identifier.
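The enroll-then-match flow above can be sketched with a toy registry that associates observed characteristics with a user identifier and later recovers the user's customization from any one of them. The class and the exact-match lookup are hypothetical simplifications (real biometric matching is fuzzy, not exact):

```python
# Toy sketch of the multimodal association flow: several biometric
# characteristics map to one user identifier, and a later detection of any
# one of them recovers that user's customization.

class BiometricRegistry:
    """Exact-match registry; a stand-in for fuzzy biometric matching."""
    def __init__(self):
        self._by_characteristic = {}   # characteristic -> user_id
        self._customization = {}       # user_id -> customization

    def associate(self, characteristic, user_id):
        self._by_characteristic[characteristic] = user_id

    def set_customization(self, user_id, customization):
        self._customization[user_id] = customization

    def customization_for(self, detected_characteristic):
        user_id = self._by_characteristic.get(detected_characteristic)
        return self._customization.get(user_id)
```

The point of the multimodal design is visible in the sketch: once a second characteristic is associated, the person remains identifiable even when the first (e.g., the face) is not discernable.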
VOICE RECEIVING METHOD AND DEVICE
A voice receiving device configured for accurate listening includes a microphone array, a camera, a capturing module, a determining module, a time module, a calculating module, and a de-noising module. The microphone array captures a first voice signal and a second voice signal, and the camera captures mouth pictures of a user. The determining module determines whether the first voice signal is synchronized with the mouth pictures and, if so, compares the first voice signal to a preset model voice signal of the user to determine a target voice signal. The time module obtains the time delay difference between the arrival of one voice signal at different microphones. The calculating module calculates the position of the sound source of the target voice signal. According to the position of the sound source, the de-noising module de-noises by reference to the second voice signal. The disclosure further provides a voice receiving method.
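The time module's delay estimate is the classic time-difference-of-arrival step; a minimal brute-force sketch (names hypothetical) finds the lag that maximizes the cross-correlation between two microphone signals:

```python
# Illustrative time-delay estimation between two microphones: the lag that
# maximizes the cross-correlation of the two signals approximates the
# arrival-time difference used for sound-source localization.

def estimate_delay(sig_a, sig_b, max_lag):
    """Return the lag (in samples) at which sig_b best matches sig_a."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, a in enumerate(sig_a):
            j = i + lag
            if 0 <= j < len(sig_b):
                score += a * sig_b[j]   # correlation at this lag
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Given the microphone spacing and the speed of sound, the estimated delay constrains the direction of the source, which is how the calculating module can recover a position from delays across the array.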
Cognitive blind source separator
Described is a cognitive blind source separator (CBSS). The CBSS includes a delay embedding module that receives a mixture signal (the mixture signal being a time-series of data points from one or more mixtures of source signals) and time-lags the signal to generate a delay-embedded mixture signal. The delay-embedded mixture signal is then linearly mapped into a reservoir to create a high-dimensional state-space representation of the mixture signal. The state-space representations are then linearly mapped to one or more output nodes in an output layer to generate pre-filtered signals. The pre-filtered signals are passed through a bank of adaptable finite impulse response (FIR) filters to generate separate source signals that collectively form the mixture signal.
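The first stage above, delay embedding, can be sketched directly; the function name and default lag are illustrative:

```python
# Sketch of the delay-embedding step: a 1-D mixture signal is expanded into
# overlapping time-lagged vectors, giving the downstream reservoir a
# higher-dimensional view of the signal's recent history.

def delay_embed(signal, dim, lag=1):
    """Return `dim`-dimensional delay vectors
    [x[t], x[t-lag], ..., x[t-(dim-1)*lag]] for each valid t."""
    start = (dim - 1) * lag
    return [[signal[t - k * lag] for k in range(dim)]
            for t in range(start, len(signal))]
```

Each embedded vector is what the CBSS would then linearly map into the reservoir's state space; the reservoir, output layer, and FIR filter bank are omitted here.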
METHOD AND APPARATUS FOR PROCESSING AUDIO SIGNAL
Provided are an audio signal processing method and apparatus for adjusting the location of an audio object to correspond to the location of a visual object. The audio signal processing apparatus includes a matching unit configured to select, from among at least one audio object extracted from an audio signal, an audio object corresponding to a visual object extracted from a video signal; a location adjusting unit configured to adjust the location of a sound image of the audio signal based on the location of the selected audio object and the location of the visual object corresponding to the selected audio object; and an output unit configured to output an audio signal in which the location of the sound image has been adjusted.
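One simple way the location-adjusting step could work, assuming (hypothetically) a stereo output and a normalized horizontal position for the matched visual object, is a constant-power pan:

```python
import math

# Illustrative constant-power stereo pan: once an audio object is matched to a
# visual object, the visual object's normalized horizontal position
# (0.0 = left edge, 1.0 = right edge) sets the stereo sound image.

def pan_to_position(samples, x_norm):
    """Return (left, right) channels panned toward x_norm in [0, 1]."""
    theta = x_norm * math.pi / 2          # 0 -> full left, pi/2 -> full right
    gl, gr = math.cos(theta), math.sin(theta)
    return ([gl * s for s in samples], [gr * s for s in samples])
```

The cosine/sine gain pair keeps total power constant across pan positions, so the audio object moves with the visual object without a loudness dip at center.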