Patent classifications
B60K2360/1464
Vehicle-mounted information device
A vehicle-mounted information device that avoids creating a source of erroneous operation is provided. In a vehicle-mounted information device capable of editing objects 11 to 18 displayed on a touch panel 121, an edit mode is provided that displays a mark 20 (20a) indicating a direction in which the object 11 is movable; by a touch operation on the mark 20 (20a), the selected object 11 is interchanged with an object 12 of interest located in the direction indicated by the mark 20 (20a).
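The swap mechanism described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the grid layout, direction names, and function signature are all assumptions.

```python
# Hypothetical sketch of the edit-mode swap: tapping a directional mark
# interchanges the selected object with the neighbour lying in the mark's
# direction. Grid layout and direction encoding are illustrative.

DIRECTIONS = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}

def swap_on_mark_tap(grid, selected, direction):
    """Swap grid[selected] with the neighbour in the tapped direction.

    grid      -- 2D list of object ids displayed on the touch panel
    selected  -- (row, col) of the currently selected object
    direction -- key of DIRECTIONS corresponding to the tapped mark
    """
    dr, dc = DIRECTIONS[direction]
    r, c = selected
    nr, nc = r + dr, c + dc
    if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
        grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
        return (nr, nc)   # new position of the selected object
    return selected       # no neighbour in that direction: no change
```

Because the swap is only offered in the directions for which a mark is displayed, an out-of-bounds tap simply leaves the layout unchanged.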
VEHICLE APPLICATION STORE FOR CONSOLE
The present disclosure is directed to an application store on board a vehicle. The application store contains, in one configuration, a plurality of applications for installation on an on-board computer of the vehicle, with the applications provided to the vehicle operator being based on predetermined types of information related to the vehicle, its state, operation, and/or configuration; vehicle location; vehicle type, make, model, and/or year of manufacture; and/or occupant(s) of the vehicle and/or occupant(s) of other vehicles.
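The attribute-based filtering described above can be sketched roughly as below. The catalogue entries, attribute names, and matching rule are assumptions for illustration, not the patent's actual schema.

```python
# Illustrative sketch: filter an on-board application store by vehicle
# attributes so only applicable applications are offered to the operator.

CATALOGUE = [
    {"name": "TowAssist", "requires": {"type": "truck"}},
    {"name": "CityPark", "requires": {"type": "sedan"}},
    {"name": "FleetLog", "requires": {}},   # no restriction: offered to all
]

def available_apps(vehicle):
    """Return names of applications whose requirements the vehicle satisfies."""
    return [app["name"] for app in CATALOGUE
            if all(vehicle.get(k) == v for k, v in app["requires"].items())]
```

A real store would match on many more dimensions (location, state, occupants), but the gating logic reduces to the same per-attribute predicate.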
Control System for Touchless Operation of Mechanical Input Devices
Systems and methods for touchless operation of mechanical input devices are disclosed. In embodiments, a control system includes a mechanical input device, a RADAR sensor, and a controller in communication with the mechanical input device and the RADAR sensor. The RADAR sensor is in proximity to the mechanical input device and configured to track user hand and finger movements. The controller is configured to detect a gesture indicating a user action corresponding to a manipulation of the mechanical input device based on the user hand and finger movements tracked by the RADAR sensor. The controller is further configured to generate a control signal based upon the user action.
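A minimal sketch of the gesture-to-control-signal path follows, assuming the RADAR sensor reports hand range samples over time; the gesture definition, travel threshold, and signal format are all illustrative assumptions.

```python
# Hedged sketch: a push-toward-the-device gesture, seen as a monotonically
# shrinking range to the hand, is mapped to a "press" control signal for
# the mechanical input device. Thresholds and units are assumptions.

def detect_push_gesture(z_samples, travel_mm=30):
    """True if the hand moved monotonically toward the sensor by travel_mm."""
    if len(z_samples) < 2:
        return False
    monotonic = all(b < a for a, b in zip(z_samples, z_samples[1:]))
    return monotonic and (z_samples[0] - z_samples[-1]) >= travel_mm

def control_signal(z_samples):
    """Translate a detected gesture into a control signal, or None."""
    return {"action": "press"} if detect_push_gesture(z_samples) else None
```

The controller in the claim would feed such signals to the mechanical input device's actuation path rather than returning them directly.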
SYSTEM FOR INTERACTING WITH OBJECTS USING GESTURES IN AN ENVIRONMENT
Disclosed is a system for allowing users to capacitively interact with objects using gestures in an environment. The system includes a smart sense unit for communicating gesture information and a smart sensing master unit, capacitively coupled to the smart sense unit, for identifying the received gesture information and identification information from the smart sense unit to operate the object. The gestures may be 2D gestures, 3D gestures, or multiple 2D or 3D gestures from different users or different body parts. The smart sense unit identifies the change in a provided electric field caused by the user and the object and converts it into a digital value. The smart sense unit further identifies the locations of the object and the user and communicates this information to the smart sensing master unit. The smart sensing master unit identifies the gesture and the identification information to interact with the object on receiving the impedance changes derived from a resonator.
CONTROL DEVICE AND CONTROL METHOD
A control device includes: a recognition unit that recognizes predetermined behavior of a user; an estimation unit that estimates a situation of the user on the basis of the predetermined behavior recognized by the recognition unit; an evaluation unit that evaluates the certainty of the estimation result from the estimation unit; and an output unit that outputs, to a control unit, an operation instruction in accordance with the certainty.
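The four-stage pipeline in the claim (recognize → estimate → evaluate certainty → output) can be sketched as below. The behaviors, situation mapping, certainty measure, and instruction names are all illustrative assumptions.

```python
# Hedged sketch of the claimed pipeline: behavior recognition feeds a
# situation estimate, the estimate's certainty is evaluated, and an
# operation instruction matching that certainty is output.

def control_pipeline(behavior_log):
    # recognition unit: pick the most frequent observed behavior
    behavior = max(set(behavior_log), key=behavior_log.count)
    # estimation unit: map behavior to an estimated user situation
    situation = {"yawning": "drowsy", "glancing": "distracted"}.get(behavior)
    # evaluation unit: certainty grows with how dominant the behavior was
    certainty = behavior_log.count(behavior) / len(behavior_log)
    # output unit: instruction strength follows the evaluated certainty
    if situation and certainty >= 0.7:
        return ("alert", situation)
    if situation and certainty >= 0.4:
        return ("suggest_break", situation)
    return ("monitor", situation)
```

Grading the instruction by certainty is the point of the claim: a low-confidence estimate yields a gentle response rather than a hard intervention.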
Transport gait and gesture interpretation
An example operation includes one or more of receiving, by a computer associated with a transport, a gait of an individual from at least one camera associated with the transport, validating, by the computer, the gait, receiving, by the computer, a gesture of the individual from the at least one camera, validating, by the computer, the gesture, and performing, by the computer, one or more functions based on the validated gait and the validated gesture.
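The two-factor flow above (validate gait, then validate gesture, then act) can be sketched as follows; the similarity metric, thresholds, and action table are illustrative assumptions, not the claimed method.

```python
# Hedged sketch: a gait sample and a gesture sample are each validated
# against enrolled templates, and a function is performed only when both
# validations pass. Feature vectors here are toy numeric sequences.

def validate(sample, template, threshold=0.1):
    """Toy similarity check: mean absolute difference under threshold."""
    diffs = [abs(a - b) for a, b in zip(sample, template)]
    return sum(diffs) / len(diffs) <= threshold

def handle_individual(gait, gesture, enrolled, actions):
    """Perform an action only after both gait and gesture validate."""
    if not validate(gait, enrolled["gait"]):
        return None
    if not validate(gesture, enrolled["gesture"]):
        return None
    return actions["unlock"]()   # function performed after both validations
```

Ordering the checks (gait first, then gesture) mirrors the claim's sequence, though either factor failing blocks the function.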
Method for operating a display and operating device, display and operating device, and motor vehicle
A control device of a display and operating device specifies a graphical configuration of a display element to be displayed by the display device. Based on the specified graphical configuration, at least one light wave parameter for a respective light wave of a group of light waves is ascertained. The totality of the light wave parameters describes an interference pattern, which describes an at least partial superposition of the light waves for generating an image representation of the display element. An interference pattern signal describing the totality of the ascertained light wave parameters is generated and transferred to an interference output device of the display and operating device. Based on the transferred interference pattern signal, the interference output device generates a light wave interference, and a real image of the display element is output.
User-vehicle interface including gesture control support
There is provided a user-vehicle interface, comprising a gesture control system that is arranged to: sense a direction of a gesture of a user of the vehicle; and process the sensed gesture to control a location of an indicator on a display in accordance with that sensed direction, such that a location of the indicator is arranged to move in accordance with changes in the sensed direction, the indicator being used to show a current position for user interaction with content on the display, and being in a plane in which that content resides.
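A sketch of moving the on-screen indicator in step with the sensed gesture direction follows. The direction is assumed to arrive as a 2D vector; the gain and display bounds are assumptions for illustration.

```python
# Hedged sketch: advance the indicator along the sensed gesture direction,
# clamped to the display plane in which the content resides.

def move_indicator(pos, direction, gain=10, width=1280, height=720):
    """Return the indicator's new (x, y), moved by gain * direction."""
    x = min(max(pos[0] + gain * direction[0], 0), width - 1)
    y = min(max(pos[1] + gain * direction[1], 0), height - 1)
    return (x, y)
```

Calling this on each sensed update makes the indicator track changes in gesture direction, as the interface requires.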
Multi-Screen Interaction Method and Apparatus, Terminal Device, and Vehicle
A multi-screen interaction method and apparatus, a terminal device, and a vehicle are provided, relating to the field of intelligent vehicle technologies. In this method, after a specific gesture is detected, content displayed on a first display is converted into a sub-image, and an identifier of another display capable of displaying the sub-image, together with its orientation information relative to the first display, is displayed on the first display, so that a user can intuitively see the movement direction for a subsequent gesture. A second display is then determined based on the movement direction of the specific gesture, and the sub-image is controlled to move as the specific gesture moves on the first display. When the detected movement distance of the specific gesture exceeds a specified threshold, the sub-image moves to the second display. A multi-screen interaction function is thereby implemented.
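The threshold test at the heart of the method can be sketched as below. The display identifiers, direction encoding, distance metric, and threshold value are assumptions, not the patented parameters.

```python
# Hedged sketch: the sub-image tracks the gesture on the first display, and
# once the gesture's travel exceeds a threshold, the sub-image is handed to
# the candidate display best aligned with the movement direction.

import math

def pick_target(move_vec, candidates):
    """Choose the candidate display best aligned with the gesture direction."""
    def alignment(c):
        dx, dy = c["direction"]
        return move_vec[0] * dx + move_vec[1] * dy   # dot product
    return max(candidates, key=alignment)

def multi_screen_step(start, current, candidates, threshold=200):
    """Return the target display id once travel exceeds threshold, else None."""
    move = (current[0] - start[0], current[1] - start[1])
    if math.hypot(*move) <= threshold:
        return None   # keep tracking the sub-image on the first display
    return pick_target(move, candidates)["id"]
```

Showing each candidate's identifier and orientation up front, as the abstract describes, is what lets the user aim the continuing gesture at the intended display.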
INTERFACE APPARATUS AND METHOD
Embodiments of the present invention provide a human-machine interface method (300) for a vehicle, comprising monitoring (305) one or more occupants within the vehicle to identify an occupant request, determining (320) a source of the occupant request corresponding to one of the one or more occupants within the vehicle, determining (330) an authority level associated with the source of the occupant request, and composing (375, 380) an output in dependence on the authority level being greater than or equal to a predetermined authority level threshold.
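The authority gate in steps (330)–(380) can be sketched as follows; the occupant roles, authority values, and threshold are illustrative assumptions.

```python
# Hedged sketch: a request is attributed to an occupant, the occupant's
# authority level is looked up, and an output is composed only when that
# level meets or exceeds a predetermined threshold.

AUTHORITY = {"driver": 3, "front_passenger": 2, "rear_passenger": 1}

def compose_output(request, source, threshold=2):
    """Compose an output only if the source's authority meets the threshold."""
    level = AUTHORITY.get(source, 0)   # unknown sources get no authority
    if level >= threshold:
        return f"executing: {request}"
    return None   # request from an insufficiently authorised occupant
```

In the claimed interface the source would first be determined by monitoring the occupants; here it is passed in directly to keep the gating step isolated.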