Patent classifications
G06F3/017
Measurement method and system
Methods and systems for determining an individual gaze value are disclosed herein. An exemplary method involves: (a) receiving gaze data for a first wearable computing device, wherein the gaze data is indicative of a wearer-view associated with the first wearable computing device, and wherein the first wearable computing device is associated with a first user-account; (b) analyzing the gaze data from the first wearable computing device to detect one or more occurrences of one or more advertisement spaces in the gaze data; (c) based at least in part on the one or more detected advertisement-space occurrences, determining an individual gaze value for the first user-account; and (d) sending a gaze-value indication, wherein the gaze-value indication indicates the individual gaze value for the first user-account.
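As a rough illustration of steps (b)–(d), the sketch below (all names hypothetical; the abstract does not specify an aggregation formula) counts occurrences of known advertisement spaces in a wearer-view event stream and reduces them to a single gaze value for the user-account:

```python
from collections import Counter

def individual_gaze_value(gaze_events, ad_spaces, weight_per_view=1.0):
    """Detect occurrences of known advertisement spaces in a stream of
    gaze events and reduce them to one gaze value (step (b) and (c))."""
    hits = [event for event in gaze_events if event in ad_spaces]
    counts = Counter(hits)
    # One simple aggregation: total weighted views across all ad spaces.
    return weight_per_view * sum(counts.values())

# Hypothetical wearer-view event stream for one user-account.
events = ["billboard_42", "street", "billboard_42", "bus_stop_ad", "sky"]
value = individual_gaze_value(events, {"billboard_42", "bus_stop_ad"})
```

In a full system the returned value would then be sent as the gaze-value indication of step (d).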
Method and device for recognizing speech in vehicle
The present disclosure relates to a method and a device for recognizing speech in a vehicle. The method for recognizing the speech in the vehicle may include collecting one or more types of information, determining information to be linked with each other for speech recognition based on an information processing priority predefined corresponding to each type of the collected information, analyzing the determined information to perform the speech recognition for a signal input through a microphone, and extracting at least one of a wake-up voice or a command voice through the speech recognition to control the vehicle. Therefore, the present disclosure has the advantage of performing the speech recognition more accurately by linking the various types of information collected in the vehicle with each other.
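The priority-based linking step might look like the following sketch. The priority table, information types, and wake phrase are all hypothetical, and the ASR stage is stubbed out as a transcript string; only the ordering and the wake-word/command split are shown:

```python
# Hypothetical priority map: lower number = processed first.
PRIORITY = {"navigation": 0, "media": 1, "climate": 2}

def link_information(collected):
    """Order collected context by its predefined processing priority so the
    recognizer consumes the most relevant information first."""
    return sorted(collected, key=lambda item: PRIORITY.get(item["type"], 99))

def recognize(collected, transcript, wake_phrase="hey car"):
    """Extract a wake-up voice and a command voice from a transcript stub,
    together with the priority-ordered context used for recognition."""
    ordered = link_information(collected)
    if transcript.startswith(wake_phrase):
        return {"wake": wake_phrase,
                "command": transcript[len(wake_phrase):].strip(),
                "context": [c["type"] for c in ordered]}
    return None
```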
Content Transmission Method, Device, and Medium
A content transmission method is provided. The method may include: A first device determines that a distance between the first device and a second device is less than a distance threshold. The first device provides a user with a prompt that content transmission can be performed between the first device and the second device. The first device recognizes a gesture operation performed by the user on the first device, and determines transmission content and a transmission direction of the transmission content between the first device and the second device based on the recognized gesture operation. The first device receives the transmission content from the second device or sends the transmission content to the second device based on the determined transmission direction.
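A minimal sketch of the decision logic, assuming a hypothetical gesture vocabulary and distance threshold (the abstract names neither): proximity gates the prompt, and the recognized gesture selects both the transmission content and its direction.

```python
def transmission_plan(distance_m, gesture, threshold_m=0.3):
    """Decide whether transfer is possible between the two devices and,
    from the gesture, what to send and in which direction."""
    if distance_m >= threshold_m:
        return None  # devices too far apart: no prompt, no transfer
    # Hypothetical mapping from gesture to (content, direction).
    gestures = {
        "pinch_out": ("screen_content", "first_to_second"),
        "pinch_in":  ("screen_content", "second_to_first"),
    }
    if gesture not in gestures:
        return None
    content, direction = gestures[gesture]
    return {"content": content, "direction": direction}
```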
VEHICLE CONTROL METHOD, APPARATUS AND SYSTEM
The present disclosure provides a vehicle control method, an apparatus and a system, applied to a terminal control device in a vehicle. The terminal control device is communicatively connected to a human-machine interface device, and the vehicle comprises a plurality of executive mechanisms. The method comprises: when a start instruction for a target function is monitored, receiving current gesture data sent by the human-machine interface device; and when the current gesture data meets a preset condition, generating a target control instruction according to the current gesture data, and sending the target control instruction to a target executive mechanism for the target executive mechanism to execute an operation corresponding to the current gesture data.
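The dispatch path described above can be sketched as follows. The preset condition is modeled here as a minimum gesture confidence, and the gesture-to-mechanism table is invented for illustration; the abstract specifies neither:

```python
def handle_gesture(start_instruction, gesture_data, min_confidence=0.8):
    """On a monitored start instruction, validate the current gesture data
    and build a control instruction for the target executive mechanism."""
    if not start_instruction.get("target_function"):
        return None
    # Hypothetical preset condition: gesture confidence above a threshold.
    if gesture_data.get("confidence", 0.0) < min_confidence:
        return None
    # Hypothetical mapping from gesture to (mechanism, operation).
    commands = {"swipe_up": ("window", "open"), "swipe_down": ("window", "close")}
    mechanism, operation = commands.get(gesture_data.get("name"), (None, None))
    if mechanism is None:
        return None
    return {"mechanism": mechanism, "operation": operation}
```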
IMAGE DISPLAY METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
An image display method includes: obtaining a homography matrix between a first target plane and a second target plane according to the first target plane and the second target plane; obtaining a target displacement according to the homography matrix and an attitude; obtaining a target pose according to the target displacement, the target pose including a position and an attitude of a camera coordinate system of a current frame image in a world coordinate system; and displaying an AR image according to the target pose.
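The step "obtaining a target displacement according to the homography matrix and an attitude" can be illustrated with the standard plane-induced homography model H = R + t·nᵀ/d, where R is the rotation (from the device attitude), n the unit plane normal, and d the plane distance. Under that model t = (H − R)·n·d, which the sketch below computes with plain nested lists; the function name and the assumption of a known n and d are ours, not the abstract's:

```python
def displacement_from_homography(H, R, n, d):
    """For a plane-induced homography H = R + t nT / d, recover the target
    displacement t given the rotation R, the unit plane normal n, and the
    plane distance d, using t = (H - R) n * d."""
    t = []
    for i in range(3):
        dot = sum((H[i][j] - R[i][j]) * n[j] for j in range(3))
        t.append(dot * d)
    return t

# Worked check: t = (1, 0, 0), n = (0, 0, 1), d = 2 gives H[0][2] = 0.5.
H = [[1, 0, 0.5], [0, 1, 0], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

The recovered displacement, combined with the attitude, yields the target pose used to render the AR image in the world coordinate system.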
METHODS FOR RECOGNIZING HUMAN HAND AND HAND GESTURE FROM HUMAN, AND DISPLAY APPARATUS
A method for recognizing a human hand comprises: recognizing a human body target by using a plurality of frames of detection information acquired by a millimeter wave radar within a preset time period; determining whether a new detection target satisfying setting conditions exists within a preset range centering on the human body target, according to a current frame of detection information, the setting conditions including: having a radial velocity; if so, determining the new detection target satisfying the setting conditions as a hand corresponding to the human body target; and if not, determining that the hand corresponding to the human body target does not exist in the current frame.
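The per-frame hand test might be sketched as below. The search radius and minimum radial speed stand in for the "preset range" and the "having a radial velocity" setting condition; both numbers are hypothetical:

```python
import math

def find_hand(body_xy, frame_targets, search_radius=0.6, min_radial_speed=0.05):
    """Look for a new detection target inside a preset range centered on the
    tracked human body that has a radial velocity; treat it as the hand."""
    bx, by = body_xy
    for target in frame_targets:
        dist = math.hypot(target["x"] - bx, target["y"] - by)
        if dist <= search_radius and abs(target["radial_velocity"]) >= min_radial_speed:
            return target
    return None  # hand does not exist in the current frame
```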
CONTROL DEVICE, NON-TRANSITORY COMPUTER-READABLE MEDIUM, AND AUTHENTICATION SYSTEM
A reception interface (121) is configured to receive first authentication information related to first contactless authentication for authenticating a user and second authentication information related to second contactless authentication for authenticating the user in a different manner from the first contactless authentication. A processor (122) is configured to determine whether the first contactless authentication and the second contactless authentication are respectively approved based on the first authentication information and the second authentication information, and to control operation of a controlled device (21) in a case where it is determined that both the first contactless authentication and the second contactless authentication are approved.
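The AND-gating of the two contactless factors can be sketched in a few lines. The verifier callables and the "unlock"/"deny" outcomes are illustrative placeholders; the abstract only requires that the controlled device operate when both authentications are approved:

```python
def control_device(first_auth, second_auth, verify_first, verify_second):
    """Operate the controlled device only when both the first and the second
    contactless authentication are approved; otherwise deny."""
    if verify_first(first_auth) and verify_second(second_auth):
        return "unlock"
    return "deny"

# Hypothetical verifiers, e.g. face recognition and a gesture passcode.
face_ok = lambda info: info == "face_match"
gesture_ok = lambda info: info == "gesture_match"
```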
CONTACTLESS TOUCH INPUT SYSTEM
A proximity sensor, including light emitters and light detectors mounted on a circuit board, and two stacked lenses, positioned above the emitters and the detectors, including an extruded cylindrical lens and a Fresnel lens array, wherein each emitter projects light through the two lenses along a common projection plane, wherein a reflective object located in the projection plane reflects light from one or more emitters to one or more detectors, and wherein each emitter-detector pair, when synchronously activated, generates a greatest detection signal at the activated detector when the reflective object is located at a specific 2D location in the projection plane corresponding to that emitter-detector pair; and a processor configured to sequentially activate the emitters and synchronously co-activate one or more detectors, and to identify a location of the object in the projection plane based on amounts of light detected by the detector of each synchronously activated emitter-detector pair.
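The localization principle reduces to an argmax over the scanned pairs: each emitter-detector pair corresponds to a known 2D hotspot, and the pair with the strongest reflection indicates where the object is. A simple sketch (pair identifiers and the lookup-table form are ours; a real sensor would likely interpolate between neighboring hotspots):

```python
def locate_object(signals, pair_locations):
    """Given the detection amount measured for each sequentially activated
    emitter-detector pair, return the 2D hotspot of the strongest pair."""
    best_pair, best_amount = None, 0.0
    for pair, amount in signals.items():
        if amount > best_amount:
            best_pair, best_amount = pair, amount
    if best_pair is None:
        return None  # nothing reflective in the projection plane
    return pair_locations[best_pair]
```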
MEASUREMENT DEVICE AND MEASUREMENT METHOD, AND PROGRAM
There is provided a measurement device operable to easily measure a movable range of a finger in a smaller size, and a measurement method and a program. The measurement device includes: a first linear motion mechanism which has a contacting part contacting a back of a hand and linearly moves along a longitudinal direction, the back of the hand serving as reference of measurement; a second linear motion mechanism which moves together with a finger targeted for the measurement and linearly moves along a longitudinal direction; and a rotation mechanism which rotatably connects one end of the first linear motion mechanism and one end of the second linear motion mechanism and has a sensor detecting a rotation amount of the second linear motion mechanism, the second linear motion mechanism rotating with respect to the first linear motion mechanism. Then, current values of the rotation amount detected by the sensor are obtained, and a measured value of the joint angle, showing the movable range of the finger, is calculated from a threshold value that is updated with the maximum of the current values. The present technology is applicable to, for example, a measurement device which measures a movable range of a finger.
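The threshold-update rule amounts to tracking a running maximum of the sensed rotation: each new reading that exceeds the stored threshold replaces it, and the threshold itself serves as the measured movable range. A sketch of that calculation (the degree units and function name are assumptions):

```python
def movable_range(sensor_readings_deg):
    """Update a threshold with the maximum of the current rotation-amount
    values; the final threshold is the measured movable range of the joint."""
    threshold = 0.0
    for angle in sensor_readings_deg:
        if angle > threshold:
            threshold = angle  # new maximum observed so far
    return threshold
```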
CONTROL DEVICE, CONTROL METHOD AND COMPUTER-READABLE STORAGE MEDIUM
A control device 1B includes a preprocessor 21B, a translator 22B and an intention detector 23B. The preprocessor 21B is configured to generate movement signals of a target human 10C subjected to assistance by processing a detection signal Sd outputted by a first sensor which senses the target human 10C. The translator 22B is configured to identify a gesture of the target human 10C by use of the movement signals, the gesture being expressed by a pose and/or movement of the target human 10C. The intention detector 23B is configured to detect an intention of the target human 10C based on a history of events relating to the assistance and on the identified gesture.
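The intention-detection stage can be sketched as a rule lookup that conditions the identified gesture on the most recent assistance-related event. The event names, gesture labels, and rule table below are all invented for illustration; the abstract does not disclose a concrete rule set:

```python
def detect_intention(event_history, gesture):
    """Combine the most recent assistance-related event with the gesture
    identified by the translator to infer the target human's intention."""
    last_event = event_history[-1] if event_history else None
    # Hypothetical rule table: (last event, gesture) -> intention.
    rules = {
        ("meal_served", "raise_hand"): "request_help",
        ("meal_served", "push_away"):  "finished_eating",
    }
    return rules.get((last_event, gesture), "unknown")
```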