Patent classifications
G06V40/171
Image recognition sporting event entry system and method
A system and method for image recognition registration of an athlete into a sporting event. The athlete is registered in the sporting event using image recognition technology. A digital commencement image of the athlete is taken by a camera (106) as the athlete crosses a starting line. The digital commencement image is compared with a stored profile image of the athlete to identify the athlete and enter the athlete into the event without the need to pre-register for the particular event. Enhanced recognition techniques incorporating pattern recognition may be used to increase identification accuracy.
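The comparison step could be sketched as a nearest-neighbor search over face embeddings. This is a minimal illustration, not the patent's method: it assumes some face recognizer has already turned the commencement image and each stored profile image into a feature vector, and the function name, embedding format, and threshold are all hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_athlete(commencement_embedding, profile_embeddings, threshold=0.8):
    """Return the id of the best-matching stored profile, or None.

    profile_embeddings: dict mapping athlete id -> stored embedding
    (hypothetical data layout, for illustration only).
    """
    best_id, best_score = None, threshold
    for athlete_id, profile in profile_embeddings.items():
        score = cosine_similarity(commencement_embedding, profile)
        if score > best_score:
            best_id, best_score = athlete_id, score
    return best_id
```

Returning `None` below the threshold models the case where the person crossing the line has no stored profile and so cannot be auto-entered.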
PICTURE PROCESSING DEVICE, PICTURE PROCESSING METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
A video acquisitor acquires a video of a space in which a person exists, the video being imaged by an imager. A video analyzer analyzes the acquired video and detects a place where the person has stayed in the space. A picture constructor constructs display pictures that allow all stay places where the person has stayed, from the start of imaging by the imager, to be recognized. The imager may be installed in a room in a building or structure to image a person in the room.
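Stay-place detection of this kind could be sketched as dwell detection over a tracked position sequence: a "stay" is a run of consecutive frames that remain within a small radius of where the run began. The function, radius, and frame-count condition are illustrative assumptions, not the patent's algorithm.

```python
import math

def detect_stay_places(track, radius=1.0, min_frames=3):
    """track: per-frame (x, y) positions of one person.

    Returns the centroid of every segment in which the person stayed
    within `radius` of the segment's first position for at least
    `min_frames` consecutive frames.
    """
    stays = []
    start = 0
    for i in range(1, len(track) + 1):
        anchor = track[start]
        # Close the current segment at end of track or when the person moves away.
        if i == len(track) or math.dist(track[i], anchor) > radius:
            if i - start >= min_frames:
                xs = [p[0] for p in track[start:i]]
                ys = [p[1] for p in track[start:i]]
                stays.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            start = i
    return stays
```

Anchoring each segment to its first position keeps the sketch simple; a slow continuous drift would not register as movement, which a real analyzer would need to handle.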
ONLINE STREAMER AVATAR GENERATION METHOD AND APPARATUS
This application provides techniques of generating a virtual character for an online streamer. The techniques comprise obtaining a human body image of a target online streamer captured by an image collection device, wherein the human body image of the target online streamer comprises at least a face and an upper body part of the target online streamer; separately performing face recognition and upper-body limb recognition on the human body image to obtain face features and limb features; determining parameters associated with a virtual character corresponding to the target online streamer based on the face features and the limb features; and generating the virtual character corresponding to the target online streamer based on the parameters, wherein the generated virtual character has a motion and an expression corresponding to those of the target online streamer.
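The parameter-determination step could look like mapping recognizer outputs onto avatar driving parameters. Everything here is hypothetical: the feature keys, the scaling constants, and the parameter names are placeholders for whatever the face and limb recognizers actually produce.

```python
def avatar_parameters(face_features, limb_features):
    """Map hypothetical recognition outputs to avatar driving parameters.

    face_features / limb_features: dicts of measurements in image
    coordinates (y grows downward), assumed to come from upstream
    face and upper-body limb recognition.
    """
    return {
        # Normalize mouth opening by face height, clamped to [0, 1].
        "mouth_open": min(1.0, face_features["mouth_gap"] / face_features["face_height"] * 10),
        # Scale estimated head yaw into a [-1, 1] control signal.
        "head_yaw": face_features["yaw_degrees"] / 90.0,
        # In image coordinates, a wrist above the shoulder means a raised arm.
        "arm_raise": limb_features["wrist_y"] < limb_features["shoulder_y"],
    }
```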
ELECTRONIC DEVICE AND METHOD FOR PROCESSING SPEECH BY CLASSIFYING SPEECH TARGET
Various embodiments of the disclosure provide a method and a device which includes multiple cameras arranged at different positions, multiple microphones arranged at different positions, a memory, and a processor operatively connected to at least one of the multiple cameras, the multiple microphones, and the memory, wherein the processor is configured to: determine, using at least one of the multiple cameras, whether at least one of a user wearing the electronic device or a counterpart having a conversation with the user makes an utterance, configure directivity of at least one of the multiple microphones based on the determination, obtain audio from at least one of the multiple microphones based on the configured directivity, obtain an image including a mouth shape of the user or the counterpart from at least one of the multiple cameras, and process speech of an utterance target in a different manner based on the obtained audio and the image.
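The directivity decision could be sketched as a simple policy over per-person mouth-motion scores from the cameras. The function name, score scale, and the fallback to an omnidirectional mode are illustrative assumptions, not the claimed implementation.

```python
def select_directivity(user_mouth_motion, counterpart_mouth_motion, threshold=0.5):
    """Decide which microphone direction to emphasize.

    Inputs are hypothetical mouth-motion scores in [0, 1] derived from
    the cameras. Returns 'user', 'counterpart', or 'omni' when neither
    appears to be speaking.
    """
    if user_mouth_motion >= threshold and user_mouth_motion >= counterpart_mouth_motion:
        return "user"
    if counterpart_mouth_motion >= threshold:
        return "counterpart"
    return "omni"
```

The returned label would then drive beamforming toward the user's mouth or the counterpart's position before speech processing.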
ANIMATED EMOTICON GENERATION METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER DEVICE
An animated emoticon generation method, a computer-readable storage medium, and a computer device are provided. The method includes: displaying an emoticon input panel on a chat page; detecting whether a video shooting event is triggered in the emoticon input panel; acquiring video data in response to detecting the video shooting event; obtaining an edit operation for the video data; processing video frames in the video data according to the edit operation to synthesize an animated emoticon; and adding an emoticon thumbnail corresponding to the animated emoticon to the emoticon input panel, the emoticon thumbnail displaying the animated emoticon to be used as a message on the chat page based on a user selecting the emoticon thumbnail in the emoticon input panel.
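The "process video frames according to the edit operation" step could be sketched as a small pipeline over a frame list. The edit keys (`speed`, `reverse`, `max_frames`) are hypothetical stand-ins for whatever edit operations the panel actually offers; real synthesis would also encode the frames as an animated image.

```python
def synthesize_emoticon(frames, edits):
    """Apply hypothetical edit operations to a list of video frames.

    frames: ordered frame objects (any type); edits: dict of operations.
    Returns the edited frame sequence for the animated emoticon.
    """
    result = list(frames)
    if "speed" in edits:                    # keep every n-th frame to speed playback up
        result = result[::edits["speed"]]
    if edits.get("reverse"):                # play the clip backwards
        result = result[::-1]
    if "max_frames" in edits:               # trim to a length suitable for an emoticon
        result = result[:edits["max_frames"]]
    return result
```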
SANITIZING PERSONALLY IDENTIFIABLE INFORMATION (PII) IN AUDIO AND VISUAL DATA
Techniques for sanitizing personally identifiable information (PII) from audio and visual data are provided. For instance, in a scenario where the data comprises an audio signal with speech uttered by a person P, these techniques can include removing/obfuscating/transforming speech-related PII in the audio signal such as pitch and acoustic cues associated with P's vocal tract shape and/or vocal actuators (e.g., lips, nasal air bypass, teeth, tongue, etc.) while allowing the content of the speech to remain recognizable. Further, in a scenario where the data comprises a still image or video in which a person P appears, these techniques can include removing/obfuscating/transforming visual PII in the image or video such as P's biological features and indicators of P's location/belongings/data while allowing the general nature of the image or video to remain discernable. Through this PII sanitization process, the privacy of individuals portrayed in the audio or visual data can be preserved.
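For the visual side, one common obfuscation transform is pixelation of a detected face region, which the sketch below illustrates on a plain 2-D intensity grid. This is one possible transform consistent with the description, not the patented pipeline; the function name and block size are assumptions.

```python
def pixelate_region(image, top, left, height, width, block=2):
    """Replace a rectangular region with per-block averages.

    image: list of rows of integer intensities (a stand-in for real
    pixel data). Averaging blocks destroys fine biological detail while
    keeping the overall scene discernable. Returns a new image.
    """
    out = [row[:] for row in image]
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            ys = range(by, min(by + block, top + height))
            xs = range(bx, min(bx + block, left + width))
            vals = [image[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out
```

An analogous idea applies to audio: transform pitch and vocal-tract cues while leaving the phonetic content intact.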
Collation system
A collation system of the present invention includes an imaging means for acquiring a captured image of a pre-passage side area with respect to a gate, a collation means for performing a collation process between a previously registered target and a target in the captured image, and a determination means for determining whether the target is permitted to pass through the gate, on the basis of a result of the collation process. The collation means initiates the collation process on the basis of a condition, set for each area of the captured image, for a target located in that area.
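The per-area trigger condition could be sketched as follows: each image area carries its own condition (here, a minimum apparent height in pixels, a proxy for "close enough to the gate to match reliably"), and collation starts only when a target inside an area satisfies that area's condition. The bounding-box layout and the height condition are illustrative assumptions.

```python
def should_initiate_collation(target_bbox, area_conditions):
    """Decide whether to start the collation (matching) process.

    target_bbox: (x1, y1, x2, y2) of the detected target in the image.
    area_conditions: list of ((x1, y1, x2, y2), min_target_height) pairs,
    one per configured area of the captured image.
    """
    tx = (target_bbox[0] + target_bbox[2]) / 2
    ty = (target_bbox[1] + target_bbox[3]) / 2
    t_height = target_bbox[3] - target_bbox[1]
    for (ax1, ay1, ax2, ay2), min_height in area_conditions:
        # The target's center must lie in the area AND meet that area's condition.
        if ax1 <= tx <= ax2 and ay1 <= ty <= ay2 and t_height >= min_height:
            return True
    return False
```

Stricter conditions in far-from-gate areas avoid wasting collation work on targets that are still too small to match.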
METHOD FOR DETERMINING A VALUE OF AT LEAST ONE GEOMETRICO-MORPHOLOGICAL PARAMETER OF A SUBJECT WEARING AN EYEWEAR
A method for determining a value of at least one geometrico-morphological parameter of a subject wearing an eyewear. The method includes obtaining at least one image of a head of the subject wearing the eyewear; identifying simultaneously, on the at least one image obtained, a set of remarkable points of the image of the eyewear and a set of remarkable points of the image of the head of the subject, using an image processing algorithm based on machine learning and determined from a predetermined database comprising a plurality of reference images of heads wearing an eyewear; and determining at least one value of the geometrico-morphological parameter, taking into account the identified sets of remarkable points of the image of the eyewear and of the head of the subject.
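Once the remarkable points are identified, computing a geometrico-morphological parameter can reduce to scaled geometry. The sketch below computes a pupillary distance by using the eyewear's known real frame width to convert pixels to millimeters; it assumes the point coordinates are already extracted, and the function name and point choices are hypothetical.

```python
import math

def pupillary_distance_mm(left_pupil_px, right_pupil_px,
                          frame_left_px, frame_right_px, frame_width_mm):
    """Estimate pupillary distance from identified remarkable points.

    The eyewear's outer frame points, whose real separation
    frame_width_mm is known, provide the pixel-to-mm scale.
    All *_px arguments are (x, y) image coordinates.
    """
    frame_width_px = math.dist(frame_left_px, frame_right_px)
    mm_per_px = frame_width_mm / frame_width_px
    return math.dist(left_pupil_px, right_pupil_px) * mm_per_px
```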
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
Computer implemented methods and devices for determining dimensions or distances of head features are provided. The method includes identifying a plurality of features in an image of a head of a person. A real dimension of at least one target feature of the plurality of features, or a real distance between at least one target feature and a camera device used for capturing the image, is estimated based on probability distributions for real dimensions of at least one feature of the plurality of features and a pixel dimension of the at least one feature.
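The estimation step can be illustrated with the pinhole camera model plus a prior over the feature's real size: since distance = focal_length_px × real_size / pixel_size, sampling the real size from its probability distribution yields a distribution over distances. The normal prior, focal length, and Monte Carlo approach are assumptions for this sketch, not the claimed method.

```python
import random
import statistics

def estimate_distance_mm(pixel_size, focal_length_px,
                         dim_mean_mm, dim_sd_mm, samples=10000, seed=0):
    """Estimate feature-to-camera distance under an uncertain real dimension.

    pixel_size: measured size of the feature in pixels.
    dim_mean_mm / dim_sd_mm: normal prior over the feature's real size
    (e.g. a population prior for iris diameter). Returns (mean, stdev)
    of the implied distance in mm, via Monte Carlo sampling.
    """
    rng = random.Random(seed)
    dists = [focal_length_px * rng.gauss(dim_mean_mm, dim_sd_mm) / pixel_size
             for _ in range(samples)]
    return statistics.mean(dists), statistics.stdev(dists)
```

The returned spread makes explicit how uncertainty in the real dimension propagates into the distance estimate.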
Systems and methods for continuous authentication and monitoring
Systems, apparatuses, methods, and computer program products are disclosed for providing continuous session authentication and monitoring. An example method includes authenticating, at a first time, a session for a user of the client device based on an authentication image data structure and a plurality of first video frames captured before the first time. The example method further includes extracting sample data from a monitor region for each of a plurality of second video frames captured after the first time and generating motion data based on the extracted sample data. The example method further includes detecting, at a second time, a re-authentication trigger event based on the motion data. Subsequently, the example method includes re-authenticating the session based on the authentication image data structure and a plurality of third video frames captured after the second time.
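The motion-based re-authentication trigger could be sketched as thresholded differencing over per-frame samples from the monitor region. The sample representation (mean region intensity per frame) and the threshold are illustrative assumptions.

```python
def reauth_trigger(samples, motion_threshold=10.0):
    """Detect a re-authentication trigger event from monitor-region samples.

    samples: per-frame mean intensity of the monitor region, extracted
    from the second video frames. A jump between consecutive samples
    larger than the threshold suggests the user moved or was replaced,
    so the session should be re-authenticated.
    """
    return any(abs(b - a) > motion_threshold
               for a, b in zip(samples, samples[1:]))
```

On a trigger, the system would re-run the full match against the authentication image data structure using frames captured after the event.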