Patent classifications
B60K2360/1464
DRIVING SUPPORT DEVICE, DRIVING SUPPORT SYSTEM, AND DRIVING SUPPORT METHOD
In a driving support device, an image output unit outputs to a display unit an image including a vehicle object representing a vehicle and the vehicle's surroundings. An operation signal input unit receives a gesture operation by a user that involves moving the vehicle object within the displayed image. A command output unit outputs a command corresponding to the gesture operation to an automatic driving control unit that controls automatic driving.
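The abstract above describes translating a drag of the on-screen vehicle object into an automatic-driving command. A minimal sketch of one such mapping, with entirely hypothetical gesture parameters and command names (nothing here comes from the patent itself):

```python
# Hypothetical sketch: map a drag of the vehicle object in the displayed
# image to a command for an automatic driving controller.
# start_lane/end_lane: lane index of the object before/after the drag;
# dx_forward: forward displacement of the object in the image.
def gesture_to_command(start_lane: int, end_lane: int, dx_forward: float) -> str:
    if end_lane > start_lane:
        return "LANE_CHANGE_RIGHT"
    if end_lane < start_lane:
        return "LANE_CHANGE_LEFT"
    if dx_forward > 0:
        return "ACCELERATE"
    if dx_forward < 0:
        return "DECELERATE"
    return "KEEP"
```

Lane changes are checked before longitudinal moves here, so a diagonal drag resolves to a lane change; a real device would need a richer gesture model.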
RADAR-BASED GESTURAL INTERFACE
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing a gestural interface in a vehicle. In one aspect, movement data corresponding to a gesture of a driver of a vehicle is received from a radar receiver arranged to detect movement in the interior of the vehicle. The gesture is determined to be a particular gesture from among a first predetermined set of gestures for selecting an operating mode of a computing device. In response, the computing device is caused to enter the operating mode corresponding to that mode-selection gesture, and a determination is made whether a subsequent movement of the driver represents a gesture from a second predetermined set of gestures that is different from the first.
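The two-stage scheme above, where one gesture set selects an operating mode and a second set issues commands within that mode, can be sketched roughly as follows; all gesture, mode, and command names are invented for illustration:

```python
# Hypothetical two-stage gesture interface: the first gesture set picks a
# mode; once a mode is active, gestures are looked up in a second,
# mode-specific set. Names are illustrative, not from the patent.
MODE_GESTURES = {"swipe_up": "media", "swipe_down": "climate"}  # first set
MODE_COMMANDS = {                                               # second set
    "media": {"circle": "next_track", "tap": "pause"},
    "climate": {"circle": "temp_up", "tap": "fan_toggle"},
}

class GestureInterface:
    def __init__(self):
        self.mode = None

    def handle(self, gesture: str):
        if self.mode is None:
            # No mode yet: only mode-selection gestures are recognized.
            if gesture in MODE_GESTURES:
                self.mode = MODE_GESTURES[gesture]
                return f"mode:{self.mode}"
            return None
        # Mode active: interpret against the mode-specific gesture set.
        return MODE_COMMANDS[self.mode].get(gesture)
```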
COMMAND PROCESSING USING MULTIMODAL SIGNAL ANALYSIS
A first set of signals corresponding to a first signal modality (such as the direction of a gaze) during a time interval is collected from an individual. A second set of signals corresponding to a different signal modality (such as hand-pointing gestures made by the individual) is also collected. In response to a command that does not identify a particular object to which it is directed, the first and second sets of signals are used to identify candidate objects of interest, and an operation associated with an object selected from the candidates is performed.
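One simple way to realize the described fusion is to score each candidate object by its angular deviation from both the gaze direction and the pointing direction, and pick the minimum. A hypothetical sketch (bearings in degrees; object names and the scoring rule are invented):

```python
# Hypothetical multimodal candidate selection: combine two signal
# modalities (gaze bearing, pointing bearing) to rank candidate objects.
def deviation(obj_bearing: float, gaze: float, point: float) -> float:
    """Combined angular deviation of an object from both modalities."""
    return abs(obj_bearing - gaze) + abs(obj_bearing - point)

def select_object(objects: dict, gaze: float, point: float) -> str:
    """objects maps a name to its bearing in degrees; return the best match."""
    return min(objects, key=lambda name: deviation(objects[name], gaze, point))
```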
OPERATION SYSTEM
An operation system includes: an operation device that is manually operated by a user and inputs an operation command to a command target apparatus selected from multiple apparatuses; a selection device that selects one apparatus as the command target apparatus according to multiple visual-line regions individually set for the apparatuses and the user's visual-line direction detected by a visual-line detection sensor, the selected apparatus being the one whose visual-line region lies in that direction; and a selection maintaining device that maintains the selection of the command target apparatus even when the visual-line direction changes to a direction pointing at none of the visual-line regions while the command target apparatus is selected.
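The selection-maintaining behavior can be sketched as a small state machine that updates the selected apparatus only when the gaze falls inside some region, assuming one-dimensional gaze angles and invented region bounds:

```python
# Hypothetical sketch of gaze-based apparatus selection: each apparatus
# owns a visual-line region (an angle interval); the selection persists
# when the gaze leaves all regions. Regions and names are invented.
class ApparatusSelector:
    def __init__(self, regions):
        self.regions = regions    # apparatus name -> (lo, hi) gaze angle
        self.selected = None

    def update(self, gaze_angle: float):
        for name, (lo, hi) in self.regions.items():
            if lo <= gaze_angle <= hi:
                self.selected = name
                return self.selected
        # Gaze points at no region: maintain the previous selection.
        return self.selected
```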
Control system and method using in-vehicle gesture input
Provided are a control system and method using an in-vehicle gesture input, and more particularly, a system for receiving an occupant's gesture and controlling the execution of vehicle functions. The control system using an in-vehicle gesture input includes an input unit configured to receive a user's gesture, a memory configured to store a control program using an in-vehicle gesture input therein, and a processor configured to execute the control program. The processor transmits a command for executing a function corresponding to a gesture according to a usage pattern.
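A rough sketch of a processor that dispatches gesture commands and tracks a usage pattern, here simplified to a frequency count; all gesture bindings are hypothetical:

```python
from collections import Counter

# Hypothetical sketch: a processor that maps gestures to functions and
# records a usage pattern as a frequency count. Bindings are invented.
class GestureController:
    def __init__(self, bindings):
        self.bindings = dict(bindings)  # gesture name -> function name
        self.usage = Counter()          # how often each function has run

    def execute(self, gesture: str):
        """Transmit (here: return) the command for the gesture, if bound."""
        fn = self.bindings.get(gesture)
        if fn is not None:
            self.usage[fn] += 1
        return fn

    def most_used(self):
        """The function the usage pattern favors, e.g. for a shortcut gesture."""
        return self.usage.most_common(1)[0][0] if self.usage else None
```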
In-vehicle mid-air gesture-based interaction method, electronic apparatus, and system
This disclosure provides an in-vehicle mid-air gesture-based interaction method, an electronic apparatus, and a system, and relates to the field of intelligent vehicle technologies. The method includes: obtaining a first mid-air gesture detected by a camera; and, when the preset response operation corresponding to the first mid-air gesture matches the first user who initiated it, starting that preset response operation in response to the gesture. The method can be used in an in-vehicle mid-air gesture-based interaction scenario, reduce the mid-air gesture misoperation rate, and improve driving safety and interaction experience.
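The user-matching condition could be sketched as a permission check performed before the preset response operation is started; the users, gestures, and operations below are invented examples, not the patent's own scheme:

```python
# Hypothetical sketch: start a gesture's preset response operation only
# when it matches (is permitted for) the user who initiated the gesture.
PERMISSIONS = {
    "driver": {"open_sunroof", "start_navigation"},
    "passenger": {"open_sunroof"},
}
GESTURE_OPS = {"palm_up": "open_sunroof", "point_forward": "start_navigation"}

def respond(gesture: str, user: str):
    op = GESTURE_OPS.get(gesture)
    if op is not None and op in PERMISSIONS.get(user, set()):
        return op       # operation matches the initiating user: start it
    return None         # no match: ignore the gesture
```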
Determining transport operation level for gesture control
An example operation includes one or more of: detecting a gesture in a transport; responsive to the gesture being detected, identifying an action to be performed by the transport; identifying currently engaged transport operations; determining whether performing the action would exceed a threshold transport operation level, based on the currently engaged operations; and performing or canceling the action corresponding to the detected gesture based on whether the threshold transport operation level would be exceeded.
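The threshold decision can be sketched as a simple load check, assuming each engaged operation contributes a numeric operation level (the scale and values are invented):

```python
# Hypothetical sketch of the threshold check: the gesture's action runs
# only if the total operation level stays at or below a threshold.
def decide(action_load: float, engaged_loads: list, threshold: float) -> str:
    """Return 'perform' or 'cancel' for the gesture's action."""
    total = sum(engaged_loads) + action_load
    return "perform" if total <= threshold else "cancel"
```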
NON-CONTACT INTERFACE DEVICE AND METHOD FOR CONTROLLING THE SAME
A non-contact interface device for a vehicle, and a method for controlling the same, may display as a hologram an operation-system menu with which the driver and passengers operate various devices of the vehicle, let a user operate the displayed menu in a non-contact manner with a finger or the like, and provide the user with tactile feedback for the non-contact operation, so that the user can tell whether the menu has been operated.
Device and method for signalling a successful gesture input
A device and a method for signaling a successful gesture input to a user. A user gesture is sensed and classified by a processing device. In response to the classified gesture, an animation may be generated that visually emphasizes a position moving, at least partly linearly, along an edge of a screen region assigned to the gesture input.