Patent classifications
G06V40/178
E-CIGARETTE AND AUTHENTICATION SYSTEM AND AUTHENTICATION METHOD FOR E-CIGARETTE
An authentication system and authentication method for an electronic cigarette, and an electronic cigarette configured to be connected within such a system so that the authentication method can be applied to it. The system can be divided into three main components: the electronic cigarette itself; a mobile terminal in communication with the electronic cigarette that reads a security label from it; and a service terminal connected to the mobile terminal, for instance through the cloud. The system and method protect particularly against counterfeit cartridges and ensure that a cartridge with the intended content is connected to the electronic cigarette. In addition, age verification can be performed.
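The abstract does not specify how the security label is verified; a common construction for this kind of anti-counterfeit check is a keyed MAC over the cartridge identity and content, verified at the service terminal. The following is a minimal sketch under that assumption; all names (`issue_label`, `verify_label`, the key) are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret held only by the service terminal.
SERVICE_KEY = b"hypothetical-shared-secret"

def issue_label(cartridge_id: str, content_code: str) -> str:
    """Derive a security label bound to the cartridge identity and its content."""
    msg = f"{cartridge_id}:{content_code}".encode()
    return hmac.new(SERVICE_KEY, msg, hashlib.sha256).hexdigest()

def verify_label(cartridge_id: str, content_code: str, label: str) -> bool:
    """Check the label the mobile terminal read from the cartridge."""
    expected = issue_label(cartridge_id, content_code)
    return hmac.compare_digest(expected, label)
```

A forged cartridge, or a genuine label attached to a cartridge with different content, fails verification because the MAC covers both identity and content.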
System and method for scalable cloud-robotics based face recognition and face analysis
A system and method for performing distributed facial recognition divides processing steps between a user engagement device/robot, having lower processing power, and a remotely located server, having significantly more processing power. Images captured by the user engagement device/robot are processed at the device/robot by applying a first set of image processing steps that includes a first face detection. First processed images having at least one detected face are transmitted to the server, where a second set of image processing steps is applied to determine a stored user facial image matching the detected face of the first processed image. At least one user property associated with the matching user facial image is then transmitted to the user engagement device/robot. An interactive action personalized to the user can further be performed at the user engagement device/robot.
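The device/server split described above can be sketched as two stages: the low-power device only detects and crops faces, and the server embeds the crop and matches it against the stored gallery. The detection and embedding functions are placeholders; cosine similarity as the matching rule is an assumption, not taken from the abstract.

```python
import numpy as np

def device_stage(image, detect_face):
    """Runs on the low-power engagement device/robot: detect a face and crop it.
    detect_face returns a (y0, y1, x0, x1) box, or None if no face is present."""
    box = detect_face(image)
    if box is None:
        return None                      # nothing is transmitted to the server
    y0, y1, x0, x1 = box
    return image[y0:y1, x0:x1]           # the "first processed image"

def server_stage(face_crop, embed, gallery):
    """Runs on the server: embed the crop and find the best-matching stored user."""
    q = embed(face_crop)
    best_user, best_sim = None, -1.0
    for user, ref in gallery.items():
        sim = float(np.dot(q, ref) / (np.linalg.norm(q) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user, best_sim           # user properties would be looked up here
```

Only crops containing a detected face cross the network, which is what keeps the device-side workload and bandwidth low.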
IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
Embodiments of the present disclosure relate to the field of image processing technologies and disclose an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The image processing method includes: when an obtained first image includes a human face, performing a first transformation process on the first image to obtain a second image; determining, based on a first target face key point of the human face in the first image, a target position, in the first image, of a second target face key point of the human face in the second image; performing a first movement process on the second image based on the target position; and generating a target image based on the first image and the second image processed through the first movement process, and displaying the target image.
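The movement step above can be illustrated as follows: a target position in the first image is derived from a first target face key point, the second image is shifted so that its own key point lands on that position, and the two images are combined. The offset rule and the blending weight are assumptions for illustration; the disclosure only states that the target position is determined from the first key point.

```python
import numpy as np

def target_position(first_kp, offset=(0, -2)):
    """Derive the target position (row, col) from the first target face key
    point; the fixed offset is an assumed example rule."""
    return (first_kp[0] + offset[0], first_kp[1] + offset[1])

def move_and_blend(first_img, second_img, second_kp, target, alpha=0.5):
    """Shift the second image so its key point lands on the target position,
    then blend it with the first image to produce the target image."""
    dy = int(round(target[0] - second_kp[0]))   # row shift
    dx = int(round(target[1] - second_kp[1]))   # column shift
    # np.roll wraps at the edges; a real implementation would pad instead.
    moved = np.roll(second_img, shift=(dy, dx), axis=(0, 1))
    return alpha * first_img + (1 - alpha) * moved
```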
Method and system for identifying biometric characteristics using machine learning techniques
A method and system may use machine learning analysis of audio data to automatically identify a user's biometric characteristics. A user's client computing device may capture audio of the user. Feature data may be extracted from the audio and applied to statistical models for determining several biometric characteristics. The determined biometric characteristic values may be used to identify individual health scores, and the individual health scores may be combined to generate an overall health score and longevity metric. An indication of the user's biometric characteristics, which may include the overall health score and longevity metric, may be displayed on the user's client computing device.
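The score-combination step can be sketched as below: each statistical model maps the audio feature vector to a characteristic value, each value is normalised to an individual health score, and a weighted average yields the overall score. The weighting scheme and the longevity mapping are illustrative assumptions, not taken from the abstract.

```python
def overall_health(features, models, weights):
    """models: name -> callable mapping the feature vector to a characteristic
    value; each value is clamped to [0, 1] as its individual health score."""
    scores = {name: max(0.0, min(1.0, model(features)))
              for name, model in models.items()}
    overall = sum(weights[n] * s for n, s in scores.items()) / sum(weights.values())
    return scores, overall

def longevity_metric(overall, base_years=80.0, swing=10.0):
    """Map the overall score to an illustrative longevity figure:
    base_years at a score of 0.5, +/- swing at the extremes."""
    return base_years + swing * (overall - 0.5) * 2.0
```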
Vehicle control apparatus and method using speech recognition
A vehicle control apparatus and method use speech recognition and include: a passenger recognizing device configured to recognize passengers including a first passenger and at least one second passenger in a vehicle; a voice recognizing device configured to receive and recognize a voice utterance by the first passenger or the at least one second passenger and to output a speech recognition result based on the received voice utterance; and a processor configured to additionally query the at least one second passenger or the first passenger based on the speech recognition result of the voice utterance of the first passenger or the at least one second passenger, respectively, to provide each of the first passenger and the at least one second passenger with a customized service.
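The "additional query" behaviour can be sketched as a routing rule: when one passenger's recognized utterance concerns a shared cabin resource, the processor queries the other recognized passengers before acting. The trigger list and the prompt wording are assumed examples; the abstract does not specify when a follow-up query is issued.

```python
def route_follow_up(utterance, speaker, passengers):
    """Return follow-up queries for the other passengers when the recognized
    utterance touches a shared cabin setting (trigger list is an assumption)."""
    shared_topics = ("temperature", "music", "window")
    others = [p for p in passengers if p != speaker]
    if others and any(t in utterance.lower() for t in shared_topics):
        return [f"{p}, is that okay with you?" for p in others]
    return []   # a private request needs no additional query
```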
METHOD FOR CLASSIFICATION OF CHILD SEXUAL ABUSIVE MATERIALS (CSAM) IN AN ANIMATED GRAPHICS
There is provided a method of training a machine learning model, comprising: extracting faces from first images, creating an age training dataset comprising records each including a face and a ground truth label indicating whether the face is below a legal age, training an age component on the age training dataset for generating a first outcome indicative of a target face of the target image being below the legal age, creating a sexuality training dataset comprising second records each including a second image and a ground truth label indicative of sexuality, training a sexuality component on the sexuality training dataset for generating a second outcome indicative of sexuality depicted in the target image, defining a combination component that receives an input of a combination of the first outcome and the second outcome, and generates a third outcome indicative of child sexual abusive materials (CSAM) depicted in the target image.
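The combination component takes the two component outcomes and produces the third outcome. The method leaves the combiner open (it may itself be learned); a product rule over the two probabilities is one simple illustrative choice, sketched here with an assumed decision threshold.

```python
def combination_component(age_outcome, sexuality_outcome, threshold=0.5):
    """age_outcome: estimated probability the target face is below the legal
    age; sexuality_outcome: estimated probability of sexual content.
    A product rule is one simple stand-in for the combination component."""
    third_outcome = age_outcome * sexuality_outcome
    return third_outcome, third_outcome >= threshold
```

The product rule only flags an image when both components agree, which matches the intent that neither outcome alone is indicative of CSAM.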
METHOD FOR CLASSIFICATION OF CHILD SEXUAL ABUSIVE MATERIALS (CSAM) IN A VIDEO
There is provided a method of training a machine learning model, comprising: extracting faces depicted in videos, creating an age training dataset comprising records, each including a face and a ground truth label indicating whether the face is below a legal age, training an age component on the age training dataset for generating a first outcome indicative of a target face from the target video being below the legal age, creating a sexuality training dataset comprising records each including frame(s) and a ground truth label indicative of sexuality depicted therein, training a sexuality component on the sexuality training dataset for generating a second outcome indicative of sexuality depicted in target frame(s) of the target video, defining a combination component that receives an input of a combination of the first outcome and the second outcome, and generates a third outcome indicative of child sexual abusive materials (CSAM) depicted in the target frame(s).
Face matching method and apparatus, storage medium
Examples of the present disclosure provide a face matching method and a face matching apparatus, and a storage medium. The face matching method includes: obtaining a first attribute of first face information which is to be matched; determining one or more preferential matching ranges based on the first attribute; and comparing the first face information with second face information in the one or more preferential matching ranges.
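The method above orders the comparison so candidates inside the preferential ranges are tried first. A minimal sketch, assuming the first attribute is a simple categorical value and the comparison is a black-box predicate (both placeholders):

```python
def match_with_preferential_ranges(probe, gallery, attribute_of, compare):
    """Compare probe face information against candidates sharing its first
    attribute before falling back to the rest of the gallery."""
    attr = attribute_of(probe)
    preferred = [g for g in gallery if attribute_of(g) == attr]
    rest = [g for g in gallery if attribute_of(g) != attr]
    for candidate in preferred + rest:      # preferential ranges first
        if compare(probe, candidate):
            return candidate
    return None
```

When the match is likely to fall inside a preferential range, most probes resolve without scanning the full gallery, which is the point of deriving the ranges from the first attribute.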
Method of moving in power assist mode reflecting physical characteristics of user and robot implementing thereof
A robot can include a cart to receive one or more objects; a camera sensor to photograph a periphery of the robot and capture an image of a user of the robot; a handle assembly coupled to the cart; a moving part to move the robot; a force sensor to sense a force applied to the handle assembly; and a controller configured to generate physical characteristics information on physical characteristics of the user of the robot based on the image of the user, and adjust at least one of a moving direction of the robot, a moving speed of the moving part and a value of torque applied to a motor of the moving part, based on the physical characteristics information and a magnitude of the force applied to the handle assembly sensed by the force sensor.
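The torque adjustment can be sketched as a gain applied to the sensed handle force, where the gain is derived from the physical-characteristics information: the same push produces more assistance for a weaker user. The linear gain rule and the parameter values are assumed examples, not taken from the abstract.

```python
def assist_torque(force_n, user_strength, base_gain=0.5, max_torque=40.0):
    """Map handle force (newtons) to motor torque. user_strength in [0, 1]
    is the estimated strength from the captured user image; the gain rule
    base_gain * (2 - strength) is an illustrative assumption."""
    gain = base_gain * (2.0 - user_strength)
    return min(max_torque, gain * force_n)
```

Under this rule a user estimated at full strength gets half the assistance of the weakest user for the same applied force, capped at the motor's torque limit.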
SYSTEM AND METHOD FOR TRACKING DEVICE USER SATISFACTION
A system and method for tracking multifunction peripheral device user satisfaction captures user images and audio during device operation. User characteristics such as gestures, posture, spoken words or facial expressions are used in conjunction with device status information to determine whether the user is satisfied with the device. If not, remedial action is initiated, such as launching a virtual assistant on the multifunction peripheral touchscreen or summoning a human assistant.
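The satisfaction decision combines the observed user characteristics with device status. A simple rule set can stand in for whatever classifier the system actually uses; the cue names and the single remedial action chosen here are illustrative assumptions.

```python
def assess_satisfaction(cues, device_status):
    """cues: set of detected user signals (from images/audio);
    device_status: e.g. 'idle', 'printing', 'error'.
    Returns a remedial action, or None if the user appears satisfied."""
    negative = {"frown", "slumped_posture", "raised_voice", "angry_words"}
    unhappy = bool(negative & set(cues)) or device_status == "error"
    if unhappy:
        return "launch_virtual_assistant"   # could escalate to a human assistant
    return None
```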