Patent classifications
B60K2360/1464
VEHICLE MIDDLEWARE
The present disclosure describes a vehicle implementing one or more processing modules. These modules are configured to connect and interface with the various buses in the vehicle, where the various buses are connected with the various components of the vehicle to facilitate information transfer among the vehicle components. Each processing module is further modularized with the ability to add and replace other functional modules now or in the future. These functional modules can themselves act as distinct vehicle components. Each processing module may hand off processing to other modules depending on its health or processing load, or under third-party control. Thus, the plurality of processing modules implements a middleware point of control for the vehicle, with redundancy in processing and with safety and security awareness in its applications.
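A minimal sketch of the hand-off decision described above, in Python. The class, function names, and threshold are illustrative assumptions, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class ProcessingModule:
    name: str
    healthy: bool
    load: float  # 0.0 (idle) .. 1.0 (saturated); assumed metric

def choose_handoff_target(current, peers, load_threshold=0.8):
    """Return a peer module to hand off to, or None to keep processing locally.

    Hand-off is triggered when the current module is unhealthy or overloaded,
    mirroring the health/load criteria in the abstract.
    """
    if current.healthy and current.load < load_threshold:
        return None  # current module is fine: no hand-off needed
    candidates = [p for p in peers if p.healthy and p.load < load_threshold]
    if not candidates:
        return None  # no viable peer: degrade in place
    return min(candidates, key=lambda p: p.load)  # least-loaded healthy peer

peers = [ProcessingModule("B", True, 0.3), ProcessingModule("C", True, 0.6)]
target = choose_handoff_target(ProcessingModule("A", False, 0.2), peers)
```

A third-party control path would simply bypass `choose_handoff_target` and designate the target module directly.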
Method And Device For Operating An Electronic Appliance
A method for preselecting and/or selecting a menu, a submenu, a function, or a function value of an electronic device. The menu, the submenu, the function, or the function value is preselectable and/or selectable using a touchpad with a first body part, and audible feedback can be output. The audible feedback is activated by an additional touch of the touchpad at any point with a second body part.
SIGN LANGUAGE INPUTS TO A VEHICLE USER INTERFACE
Sensors in a vehicle detect execution of a sign language symbol. The sign language symbol invokes a vehicle interface function. In some embodiments, the sign language symbol is an ASL sign. Sensors include three-dimensional optical, ultrasonic, or other sensors. The locations of targets (fingertips, knuckles, palm, etc.) are detected in the individual outputs of the sensors. The locations of targets determined from the outputs of multiple sensors may be associated with one another and filtered, such as using an RBMCDA algorithm and Kalman filtering. The system may be calibrated to detect each symbol by measuring the target locations while the user executes each symbol.
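The Kalman-filtering stage mentioned above can be sketched as follows. This is a generic constant-velocity Kalman filter smoothing a noisy 3-D fingertip position stream; the state model, noise covariances, and sample readings are assumptions for illustration (the RBMCDA data-association step is omitted):

```python
import numpy as np

def make_cv_model(dt=1 / 30):
    """Constant-velocity model: state = [x, y, z, vx, vy, vz], measurement = [x, y, z]."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                     # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # sensors observe position only
    Q = 1e-4 * np.eye(6)                           # process noise (assumed)
    R = 1e-2 * np.eye(3)                           # measurement noise (assumed)
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for measurement z."""
    x = F @ x                                      # predict state
    P = F @ P @ F.T + Q                            # predict covariance
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # correct with measurement
    P = (np.eye(6) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_model()
x, P = np.zeros(6), np.eye(6)
for z in [np.array([0.10, 0.20, 0.50]),            # noisy fingertip readings (metres)
          np.array([0.11, 0.21, 0.50]),
          np.array([0.12, 0.22, 0.51])]:
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

In the full pipeline, an association algorithm such as RBMCDA would first decide which measurement belongs to which target before each target's filter is updated.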
BACKLIGHT EMBEDDED INFRARED PROXIMITY SENSING AND GESTURE CONTROL
A motor vehicle includes a display screen having a plurality of light emitting diodes emitting visible light and at least one infra-red energy-emitting element. An infra-red photo detector receives infra-red energy emitted by the infra-red energy-emitting element and reflected by a person within a passenger compartment of the motor vehicle. An electronic processor is communicatively coupled to the infra-red photo detector. The electronic processor receives signals from the infra-red photo detector. The electronic processor determines, based on the signals, a position of a body part of the person within the passenger compartment of the motor vehicle.
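One simple way a processor could map infra-red detector signals to a body-part position is an intensity-weighted centroid over the emitter sites behind the backlight. This is a hedged sketch, not the patented method; the emitter coordinates and readings are made-up illustrative values:

```python
def weighted_centroid(emitter_xy, intensities):
    """Estimate the lateral position of a reflecting body part.

    emitter_xy:  (x, y) positions of IR emitter sites on the display (cm)
    intensities: reflected IR signal strength measured for each site
    """
    total = sum(intensities)
    if total == 0:
        return None  # nothing reflected: no body part in range
    x = sum(px * w for (px, _), w in zip(emitter_xy, intensities)) / total
    y = sum(py * w for (_, py), w in zip(emitter_xy, intensities)) / total
    return (x, y)

# Four emitters at the display corners; strongest returns near the right edge.
sites = [(0, 0), (30, 0), (0, 10), (30, 10)]
pos = weighted_centroid(sites, [0.1, 0.8, 0.1, 0.8])
```

The estimate lands between the strongly reflecting sites, i.e. near the right edge of the display here.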
Operation system
An operation system includes: an operation device manually operated by a user and inputting a command of an operation content to a command target apparatus selected from multiple apparatuses; a selection device selecting one apparatus as the command target apparatus according to multiple visual line regions individually set in relation to the apparatuses and a visual line direction of the user detected by a visual line detection sensor, the one apparatus relating to the one visual line region disposed in the visual line direction; and a selection maintaining device maintaining a selection state of the command target apparatus even when the visual line direction is changed to another direction pointing to none of the visual line regions while the command target apparatus is selected.
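The selection and selection-maintaining behavior above can be sketched as a small update rule. The angular region representation and names are assumptions for illustration:

```python
def update_selection(current, gaze_deg, regions):
    """Select the apparatus whose visual-line region contains the gaze direction.

    regions: {apparatus_name: (center_angle_deg, half_width_deg)}
    If the gaze points at no region, the prior selection is maintained,
    mirroring the selection maintaining device in the abstract.
    """
    for name, (center, half_width) in regions.items():
        if abs(gaze_deg - center) <= half_width:
            return name       # gaze inside this region: select its apparatus
    return current            # gaze outside all regions: keep prior selection

regions = {"audio": (-30.0, 10.0), "hvac": (0.0, 10.0), "navi": (30.0, 10.0)}
sel = update_selection(None, -25.0, regions)   # gaze falls in the audio region
sel = update_selection(sel, 90.0, regions)     # gaze at no region: selection kept
```

The operation device would then route manual commands to whichever apparatus `sel` names.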
VIRTUAL TOUCH RECOGNITION APPARATUS AND METHOD FOR CORRECTING RECOGNITION ERROR THEREOF
A virtual touch recognition apparatus and a method of correcting a recognition error of the virtual touch recognition apparatus utilize a user's eye position and an image display position or projection position of a head-up display (HUD). The virtual touch recognition apparatus includes a gesture recognizer detecting the eye position of a user, the head-up display projecting an image on the image display position in front of the user, and a controller correcting a recognition error of a virtual touch based on the eye position of the user and the image display position. The apparatus and method can minimize a virtual touch recognition error occurring depending on the user's eye position and the image display position of the HUD when gesture control technology using the virtual touch is applied to a vehicle.
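The geometric core of such a correction can be illustrated by casting a ray from the detected eye position through the pointing fingertip and intersecting it with the HUD image plane. This is a generic ray-plane sketch under assumed coordinates, not the patented correction method:

```python
import numpy as np

def virtual_touch_point(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye-to-fingertip ray with the HUD image plane.

    Returns the 3-D point on the plane the user appears to touch,
    or None if the ray is parallel to the plane.
    """
    d = fingertip - eye                                # ray direction
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return None                                    # ray parallel to plane
    t = np.dot(plane_normal, plane_point - eye) / denom
    return eye + t * d

eye = np.array([0.0, 1.2, 0.0])       # driver eye position (metres, assumed frame)
finger = np.array([0.0, 1.1, 0.5])    # detected fingertip position
hud_point = np.array([0.0, 1.0, 1.0]) # a point on the HUD image plane
hud_normal = np.array([0.0, 0.0, 1.0])
p = virtual_touch_point(eye, finger, hud_point, hud_normal)
```

Because the touch point depends on the eye position, re-running this intersection whenever the eye or image position changes is what keeps the recognized touch aligned with the displayed image.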
Vehicle display system, vehicle system, and vehicle
A vehicle display system provided to a vehicle, includes a first display device, a second display device, and a display control unit. The first display device is configured to emit a light pattern toward a road surface outside the vehicle. The second display device is located inside the vehicle and configured to display predetermined information toward a passenger in the vehicle so that the predetermined information is superimposed on a real space outside the vehicle. The display control unit is configured to control the first display device. The display control unit is configured to control emission of the light pattern according to a passenger's input operation on a display area where the predetermined information can be displayed.
Method for operating a mobile terminal using a gesture recognition and control device, gesture recognition and control device, motor vehicle, and an output apparatus that can be worn on the head
A gesture recognition and control device recognizes a mobile terminal and ascertains a current graphical user interface generated by a display device of the mobile terminal. The gesture recognition and control device provides an output signal describing, as display content, the graphical user interface generated by the display device of the mobile terminal, and transmits the output signal to an output apparatus that can be worn on the head for outputting the display content in a predefined output region in the interior of the motor vehicle as part of augmented reality or virtual reality output by the output apparatus. During the process of outputting the display content, the gesture recognition and control device recognizes a spatial gesture of the user, generates a remote control signal for triggering an operating function of the mobile terminal, and transmits the remote control signal to a control device of the recognized mobile terminal.
SYSTEM AND METHOD FOR USE OF A SENSOR MATRIX IN A VEHICLE INTERIOR
A system/method using a sensor arrangement in a vehicle interior with an occupant and vehicle systems may comprise a user interface and computing system to process inputs/signals and data from data sources to facilitate operation and to provide outputs/signals with connectivity to occupants, vehicle systems, networks, etc. relating to operation, events, conditions, etc. Output may comprise information/interaction at a user interface, to vehicle systems, network communications, etc. The system may use data to provide output comprising enhancement and/or augmentation of input; output may comprise a signal based on data analytics/processing, including application of artificial intelligence models and/or augmented reality models. The system may comprise a distributed sensor matrix to obtain data/input from a sensor field within the vehicle; data sources may comprise the sensor matrix, vehicle systems, data storage, and/or networks. The sensor matrix may be for use with an interior component in personal vehicles, commercial/industrial vehicles, autonomous vehicles, etc.
MULTISENSORY GESTURAL-AUDIO INTERFACE TO PROMOTE SITUATIONAL AWARENESS FOR IMPROVED AUTONOMOUS VEHICLE CONTROL
A method for a multisensory gestural-audio interface system is described. The method includes providing a high-level overview of a driving environment to a driver of an ego vehicle through non-visual communication. The method also includes scanning a vehicle cabin to detect a gesture of the driver in response to the high-level overview of the driving environment. The method further includes providing a non-visual description of a selected vehicle control object according to the detected gesture of the driver. The method also includes performing a vehicle control action on the selected vehicle control object based on a confirmation from the driver to perform the vehicle control action.
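The overview, gesture, description, and confirmation steps above form a small interaction loop, which can be sketched as a state machine. The state names, event kinds, and action tuples are assumptions for illustration:

```python
def gestural_audio_flow(events):
    """Run the interaction loop over (kind, payload) events; return emitted actions.

    States: "overview" (awaiting a selection gesture) and
            "await_confirm" (awaiting driver confirmation).
    """
    state, selected, actions = "overview", None, []
    actions.append(("speak", "high-level overview of driving environment"))
    for kind, payload in events:
        if state == "overview" and kind == "gesture":
            selected = payload                       # driver indicated an object
            actions.append(("speak", f"selected: {selected}"))  # non-visual description
            state = "await_confirm"
        elif state == "await_confirm" and kind == "confirm":
            actions.append(("control", selected))    # perform the vehicle action
            state = "overview"                       # loop back to the overview
    return actions

acts = gestural_audio_flow([("gesture", "merge left"), ("confirm", True)])
```

No control action is emitted without an explicit confirmation event, matching the confirmation step in the abstract.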