Patent classifications
B60K2360/1464
Gesture and facial expression control for a vehicle
The present approach relates to a vehicle having a plurality of devices and a human-machine interface (HMI) for gesture- and/or facial-expression-based actuation of a function of a vehicle device. The HMI comprises a camera for recording a specific occupant of the vehicle and a control unit connected to the camera.
VIRTUAL IMAGE DISPLAY DEVICE
A virtual image display device includes a virtual image position selector, a vision measurement interface, a display information acquirer, a display image generator, and a projection processor. The virtual image position selector selects either a first virtual image position or a second virtual image position as the display position of the virtual image. The vision measurement interface measures the vision of a user based on the user's response to a vision measurement image projected as a virtual image at the display position. The display information acquirer acquires information to be displayed. The display image generator generates a display image that shows the acquired information at a size determined by the vision measured by the vision measurement interface. The projection processor projects the generated display image as a virtual image.
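The abstract above ties the displayed size to the measured vision. A minimal sketch of how such a size selection might work, assuming vision is expressed as decimal acuity (1.0 for normal vision) and that size scales inversely with acuity within a clamped range; all names and constants are illustrative, not from the patent:

```python
# Illustrative sketch (not the patent's method): scale the displayed
# character size inversely with measured visual acuity, clamped to a range.

def display_size_mm(base_size_mm: float, decimal_acuity: float,
                    min_mm: float = 3.0, max_mm: float = 30.0) -> float:
    """Return a character height for the virtual image.

    decimal_acuity: e.g. 1.0 for normal vision, 0.5 for weaker vision;
    weaker vision yields a proportionally larger image.
    """
    if decimal_acuity <= 0:
        raise ValueError("acuity must be positive")
    size = base_size_mm / decimal_acuity
    return max(min_mm, min(max_mm, size))
```

The clamp keeps the projected image within what the display optics can render, whatever the measured acuity.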
MULTIMODAL USER INTERFACE FOR A VEHICLE
Some embodiments described herein relate to a multimodal user interface for use in an automobile. The multimodal user interface may display information on a windshield of the automobile, such as by projecting information on the windshield, and may accept input from a user via multiple modalities, which may include a speech interface as well as other interfaces. The other interfaces may include interfaces allowing a user to provide geometric input by indicating an angle. In some embodiments, a user may define a task to be performed using multiple different input modalities. For example, the user may provide via the speech interface speech input describing a task that the user is requesting be performed, and may provide via one or more other interfaces geometric parameters regarding the task. The multimodal user interface may determine the task and the geometric parameters from the inputs.
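The fusion of a spoken task with geometric input from another modality, as described above, can be sketched as pairing events that arrive close together in time. The class, the time window, and the angle normalization are assumptions for illustration, not the patent's design:

```python
# Hypothetical sketch: pair a spoken task with geometric input (angles)
# given through another modality around the same time.

class MultimodalFuser:
    """Attach recently indicated angles to the next recognized speech task."""

    def __init__(self, window_s: float = 2.0):
        self.window_s = window_s
        self.pending = []  # list of (timestamp, angle_deg)

    def on_angle(self, angle_deg: float, t: float) -> None:
        # normalize the indicated angle into [0, 360)
        self.pending.append((t, angle_deg % 360))

    def on_speech(self, task: str, t: float) -> dict:
        # attach geometric parameters given within the time window
        params = [a for (ts, a) in self.pending if abs(t - ts) <= self.window_s]
        self.pending.clear()
        return {"task": task, "angles_deg": params}
```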
TERMINAL, VEHICLE HAVING THE TERMINAL, AND METHOD FOR CONTROLLING THE VEHICLE
A terminal is provided to recognize a touch input and a gesture input intended by a user. A vehicle includes the terminal, which is configured to display buttons to be selected by a user and to receive a touch input as user input. An image input unit receives an image of the user to capture a gesture input as user input, and a controller divides the area of the terminal into a first area and a second area. The controller determines the button selected by the user among the buttons displayed in the first area based on the touch signal output by the touch input, and determines the button selected by the user among the buttons displayed in the second area based on a finger image and an eye image within the captured image.
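The two-area dispatch described above can be sketched as resolving buttons in the first (touch) area from touch coordinates and buttons in the second area from a position estimated from the camera image. The split along one axis, the dataclass, and all names are assumptions for illustration:

```python
# Hypothetical sketch: dispatch user input by terminal area. Buttons left
# of split_x belong to the first (touch) area; buttons right of it belong
# to the second (gesture) area resolved from the camera image.
from dataclasses import dataclass

@dataclass
class Button:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float  # axis-aligned bounding box

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def select_button(buttons, x, y, split_x, source):
    """source: 'touch' for the first area, 'image' for the second area."""
    for b in buttons:
        if source == "touch" and b.x1 <= split_x and b.contains(x, y):
            return b
        if source == "image" and b.x0 >= split_x and b.contains(x, y):
            return b
    return None  # the coordinates fall outside that source's area
```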
VEHICLE AND CONTROL METHOD THEREOF
A vehicle is provided and includes a gesture detector that detects a gesture of a driver to designate a manipulation target object and a voice recognizer that recognizes a voice command generated by the driver to operate the designated manipulation target object. A controller is configured to transmit a control signal corresponding to the voice command to the manipulation target object, and to operate the manipulation target object to perform an operation corresponding to the voice command.
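The gesture-then-voice flow above amounts to holding a designated target and routing the next voice command to it. A minimal sketch under that reading; the class, device table, and return convention are illustrative assumptions:

```python
# Hypothetical sketch: the gesture detector designates a target device,
# and the next recognized voice command is forwarded to it.

class VoiceGestureController:
    def __init__(self, devices):
        self.devices = devices      # device name -> current state
        self.designated = None      # target chosen by the gesture detector

    def on_gesture(self, target: str) -> None:
        if target in self.devices:
            self.designated = target

    def on_voice(self, command: str) -> bool:
        """Send the command to the designated device; True if delivered."""
        if self.designated is None:
            return False            # no manipulation target designated yet
        self.devices[self.designated] = command
        return True
```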
Vehicle interior component with user interface
A user interface system for a vehicle interior includes a contact surface, a sensor grid and a controller. The sensor grid is configured for variable electrical resistance in response to applied pressure and the controller is configured to detect the electrical resistance of the sensor grid by monitoring a voltage. The controller detects the location of an input from a vehicle occupant, the intensity of the input, and the duration of the input.
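Detecting a pressure-dependent resistance by monitoring a voltage, as the abstract describes, is commonly done with a voltage divider; the specific wiring below (sensor between supply and measurement node, reference resistor to ground) is an assumption, not the patent's circuit:

```python
# Illustrative sketch (not the patent's circuit): infer the sensor grid's
# resistance from a measured divider voltage, assuming
#   v_out = v_in * r_ref / (r_sensor + r_ref)
#   =>  r_sensor = r_ref * (v_in - v_out) / v_out

def sensor_resistance(v_in: float, v_out: float, r_ref: float) -> float:
    if not 0 < v_out < v_in:
        raise ValueError("v_out must lie strictly between 0 and v_in")
    return r_ref * (v_in - v_out) / v_out
```

Higher applied pressure lowers the grid's resistance, which raises the measured voltage; the controller can then threshold the inferred resistance for location, intensity, and duration.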
Vehicle systems and methods for determining a target based on a virtual eye position and a pointing direction
Vehicle systems and methods for determining a target position are disclosed. A vehicle includes a user detection system configured to output a gesture signal in response to a hand of a user performing at least one gesture to indicate a final target position. The vehicle also includes a user gaze monitoring system configured to output an eye location signal that indicates an actual eye position of the user. The vehicle also includes one or more processors and one or more non-transitory memory modules communicatively coupled to the processors. The memory modules store machine-readable instructions that, when executed, cause the one or more processors to determine a first point and a second point located on the hand of the user based at least in part on the gesture signal from the user detection system. The first point and the second point define a pointing axis of the hand of the user.
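The pointing axis defined by two points on the hand can be turned into a target estimate by intersecting the axis with a reference plane. A sketch under the assumption that the target lies on a ground plane z = 0; function names and the plane choice are illustrative, not from the patent:

```python
# Hypothetical sketch: derive a pointing axis from two points on the hand
# and intersect it with the ground plane (z = 0) to estimate the target.
import math

def pointing_axis(knuckle, fingertip):
    """Return (origin, unit direction) of the pointing axis as 3-tuples."""
    d = tuple(f - k for f, k in zip(fingertip, knuckle))
    n = math.sqrt(sum(c * c for c in d))
    if n == 0:
        raise ValueError("the two points must be distinct")
    return fingertip, tuple(c / n for c in d)

def target_on_ground(origin, direction):
    """Intersect the axis with the plane z = 0 if it points downward."""
    if direction[2] >= 0:
        return None  # the axis never reaches the ground plane
    t = -origin[2] / direction[2]
    return tuple(o + t * c for o, c in zip(origin, direction))
```

In the patent's framing the eye location signal would additionally refine the ray origin toward a virtual eye position; the sketch uses the hand points alone.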
IN-VEHICLE PERFORMANCE DEVICE, IN-VEHICLE PERFORMANCE SYSTEM, IN-VEHICLE PERFORMANCE METHOD, STORAGE MEDIUM, AND COMMAND MEASUREMENT DEVICE
An in-vehicle performance device runs a game played by an occupant of a vehicle. It includes a motion detector configured to detect a motion of the occupant that is irrelevant to driving the vehicle, a display configured to show an image visually recognizable by the occupant, and a display controller configured to show on the display a response image corresponding to the motion detected by the motion detector, and to output a result of the game based on a predetermined rule.
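The motion-to-response-image mapping and rule-based scoring above can be sketched minimally; the prompt-matching rule and every name here are assumptions, since the abstract does not specify the game:

```python
# Hypothetical sketch: map detected non-driving motions of an occupant to
# response images and score a round under a simple assumed rule (the motion
# must match the prompted motion for that step).

def play_round(prompts, detected_motions):
    """Return (responses, score): one response image id per detected motion,
    and the number of motions matching the corresponding prompt."""
    responses = [f"img_{m}" for m in detected_motions]
    score = sum(1 for p, m in zip(prompts, detected_motions) if p == m)
    return responses, score
```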
OPTICAL-EFFECT TOUCHPAD ON A STEERING WHEEL FOR FINGER DETECTION
Disclosed is a system for detecting command gestures made by a finger of a driver of a motor vehicle. The system includes an interface pad, a light source that emits an optical beam in the infrared band toward the interface pad, and an imaging sensor that captures images steered by the interface pad away from the driver. The interface pad has a base frame and a movable plate, and an optical zone of interest seen by the imaging sensor is defined at the interface between the base frame and the movable plate. The interface pad further includes an elastic deformable seal interposed between the base frame and the movable plate; the seal includes a first inclined facet, so that an optical path passing via the first inclined facet is proportionally modified by the deformation of the seal under the effect of the movement of the movable plate.
SYSTEM AND METHOD FOR INITIATING AND EXECUTING AN AUTOMATED LANE CHANGE MANEUVER
A system for initiating and executing an automated lane change maneuver in a vehicle may include a steering wheel interface having a display, and a monitor to detect the viewing direction of the user. The interface may detect a first predetermined gesture by the user; in response to the first predetermined gesture, transmit a first signal to the vehicle instructing it to prepare for the automated lane change maneuver; display a status of the maneuver; and display a prompt for the user to visually confirm the safety of the maneuver. The monitor may continuously detect the viewing direction of the user and, in response to the viewing direction changing, transmit a second signal to the vehicle instructing it to execute the automated lane change maneuver.
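The gesture-prepare-confirm-execute flow above maps naturally onto a small state machine. The sketch below assumes the confirming gaze change must be toward the mirror, which is stricter than the abstract's "viewing direction changing"; states, events, and names are illustrative:

```python
# Illustrative state machine (not from the patent) for the described flow:
# gesture -> PREPARING -> gaze confirmation -> EXECUTING.
from enum import Enum, auto

class LaneChange(Enum):
    IDLE = auto()
    PREPARING = auto()   # first signal sent; prompt shown on the display
    EXECUTING = auto()   # second signal sent after gaze confirmation

class LaneChangeController:
    def __init__(self):
        self.state = LaneChange.IDLE

    def on_gesture(self) -> None:
        if self.state is LaneChange.IDLE:
            self.state = LaneChange.PREPARING  # vehicle prepares the maneuver

    def on_gaze_change(self, looking_at_mirror: bool) -> None:
        # The monitor reports a change in viewing direction; this sketch
        # additionally requires the driver to be looking at the mirror.
        if self.state is LaneChange.PREPARING and looking_at_mirror:
            self.state = LaneChange.EXECUTING
```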