Patent classifications
B60K2370/1464
AIR IMAGING APPARATUS FOR VEHICLE AND HUMAN-MACHINE INTERACTIVE IN-VEHICLE ASSISTANCE SYSTEM
Disclosed is an air imaging apparatus for a vehicle. The apparatus, mounted in a vehicle, comprises: an image source configured to generate a graphic for display; and an imaging magnifier configured to magnify the graphic generated by the image source and form a real image in the air inside the vehicle. Further disclosed is a human-machine interactive in-vehicle assistance system comprising the air imaging apparatus described above and a gesture recognition apparatus near the real image. Because the image is enlarged, the displayed command icons are enlarged as well; as such, the gesture recognition apparatus can easily recognize which command icon the user's gesture is touching. The sliding distance of the user's gesture is correspondingly enlarged too, which significantly lowers the precision required of the gesture recognition apparatus.
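The precision claim above can be sketched numerically. This is a hypothetical illustration, not from the patent: the function name, icon size, and magnification factor are all assumptions, and it only shows that magnifying the image by a factor M scales the positional tolerance of the gesture sensor by the same factor.

```python
# Hypothetical sketch: how magnifying the displayed image relaxes the
# positional precision a gesture sensor needs. Names and numbers are
# illustrative assumptions, not taken from the patent.

def required_sensor_precision(icon_size_mm: float, magnification: float) -> float:
    """Positional tolerance (mm) for picking the right icon.

    If a source icon is icon_size_mm wide and the aerial real image
    magnifies it by `magnification`, the sensor only needs to resolve
    half the magnified icon width to distinguish adjacent icons.
    """
    return (icon_size_mm * magnification) / 2

# A 5 mm icon magnified 4x: the sensor may be off by up to 10 mm.
print(required_sensor_precision(5.0, 4.0))  # 10.0
```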
METHOD OF EMERGENCY BRAKING OF VEHICLE
A method of emergency braking of a vehicle includes: displaying a brake button in a vehicle; determining pre-touch information on a distance before the brake button is touched by a user; determining post-touch information on a distance after the brake button is touched; and performing any one of a first brake control, in which general braking is performed according to the degree to which the brake button is pressed and braking force is added from a first time point after the brake button is pressed; a second brake control, in which the general braking is performed and braking force is added from a second time point later than the first time point; and a third brake control, in which the general braking is performed according to the pre-touch information and the post-touch information.
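The three-branch selection above can be sketched as follows. This is a minimal illustration only: the thresholds, field names, and dispatch logic are assumptions, since the abstract does not specify how the pre-touch and post-touch information choose among the controls.

```python
# Hypothetical sketch of selecting among the three brake controls from
# pre-touch and post-touch information. All thresholds and field names
# are illustrative assumptions, not from the patent.

from dataclasses import dataclass

@dataclass
class TouchInfo:
    pre_touch_distance: float  # finger-to-button distance before touch (mm)
    press_depth: float         # normalized press depth after touch, 0..1

def select_brake_control(info: TouchInfo) -> str:
    """Pick one of the three brake controls described in the abstract."""
    if info.pre_touch_distance < 10.0 and info.press_depth > 0.8:
        # Fast, close approach with a deep press: add braking force
        # early, from the first time point.
        return "first"
    if info.press_depth > 0.8:
        # Deep press but a slower approach: add force from the later,
        # second time point.
        return "second"
    # Otherwise modulate braking from both pre- and post-touch information.
    return "third"

print(select_brake_control(TouchInfo(5.0, 0.9)))   # first
print(select_brake_control(TouchInfo(50.0, 0.9)))  # second
print(select_brake_control(TouchInfo(50.0, 0.3)))  # third
```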
Method to Provide a Speech Dialog in Sign Language in a Speech Dialog System for a Vehicle
A method to provide a speech dialog in sign language in a speech dialog system for a vehicle is disclosed. In the method, the following steps are carried out: performing an optical detection of input information from a vehicle occupant; performing an evaluation of the detected input information; and providing a visual output in sign language depending on the evaluation.
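The three steps of the method above can be sketched as a simple pipeline. Every function here is a hypothetical stand-in: the abstract names the steps (optical detection, evaluation, sign-language output) but not their implementations.

```python
# Hypothetical pipeline sketch of the three steps from the abstract:
# optical detection, evaluation, and visual output in sign language.
# All function names and the response table are illustrative assumptions.

def detect_input(frame: str) -> str:
    """Stand-in for optical detection of the occupant's input."""
    return frame.strip().lower()

def evaluate(detected: str) -> str:
    """Stand-in evaluation: map the detected input to a dialog result."""
    responses = {"navigate home": "route_set"}
    return responses.get(detected, "not_understood")

def output_sign_language(result: str) -> str:
    """Stand-in for rendering the result as sign-language video output."""
    return f"sign_video:{result}"

print(output_sign_language(evaluate(detect_input("Navigate home "))))
# sign_video:route_set
```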
SYSTEMS AND METHODS FOR TRIGGERING ACTIONS BASED ON TOUCH-FREE GESTURE DETECTION
Systems, methods and non-transitory computer-readable media for triggering actions based on touch-free gesture detection are disclosed. The disclosed systems may include at least one processor. A processor may be configured to receive image information from an image sensor, detect in the image information a gesture performed by a user, detect a location of the gesture in the image information, access information associated with at least one control boundary, the control boundary relating to a physical dimension of a device in a field of view of the user, or a physical dimension of a body of the user as perceived by the image sensor, and cause an action associated with the detected gesture, the detected gesture location, and a relationship between the detected gesture location and the control boundary.
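The relationship between a gesture's location and a control boundary can be sketched as below. This is a hypothetical illustration: the boundary rectangle, gesture names, and action mapping are assumptions, chosen only to show an action depending jointly on the gesture, its location, and its position relative to the boundary.

```python
# Hypothetical sketch: triggering an action from a detected gesture, its
# location, and its relationship to a control boundary (e.g. the edges
# of a device as perceived by the image sensor). The rectangle, gesture
# names, and action mapping are illustrative assumptions.

def trigger_action(gesture: str, x: float, y: float,
                   boundary: tuple[float, float, float, float]) -> str:
    """Return an action from the gesture, location, and boundary.

    `boundary` is (left, top, right, bottom) in image coordinates.
    """
    left, top, right, bottom = boundary
    inside = left <= x <= right and top <= y <= bottom
    if gesture == "swipe_right" and not inside and x < left:
        # A swipe beginning outside the left edge of the boundary.
        return "open_side_menu"
    if gesture == "tap" and inside:
        return "select_item"
    return "no_action"

screen = (100.0, 50.0, 500.0, 350.0)
print(trigger_action("swipe_right", 40.0, 200.0, screen))  # open_side_menu
print(trigger_action("tap", 300.0, 200.0, screen))         # select_item
```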
Apparatus for a vehicle, method for controlling the same, and display for a vehicle
According to embodiments of the present invention, an apparatus for a vehicle is provided. The apparatus includes a support structure adapted to support a display panel; a drive assembly capable of selectively driving the support structure to rotate about a first axis and about a second axis, the first axis being different from the second axis; and a sensor arrangement located at a first predetermined position along the path of rotation about the first axis and at a second predetermined position along the path of rotation about the second axis, the sensor arrangement being configured to directly detect the support structure selectively at the first and second predetermined positions. According to further embodiments of the present invention, a display for a vehicle and a method for controlling an apparatus for a vehicle are also provided.
Method and apparatus for controlling a mobile terminal
A method and an apparatus for controlling a mobile terminal are provided. The apparatus includes a receiving module for receiving gesture information, originating from a user, from a first mobile terminal equipped with an image capturing device; and a sending module for sending a particular operation instruction to a second mobile terminal, whose graphical user interface is currently displayed on the screen of a display device, to instruct the second mobile terminal to execute an operation corresponding to the particular operation instruction, wherein the particular operation instruction is related to the received gesture information.
Delimitation in unsupervised classification of gestures
A method for classifying a gesture made in proximity to a touch interface. A system receives data related to the position and/or movement of a hand. The data is delimited by identifying a variable-length window of touch frames, selected to include the touch frames indicative of feature data. The variable-length window of touch frames is then classified, based on classifications learned by a classifying module, to identify gestures.
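The delimitation step above can be sketched as finding the contiguous run of frames that actually carry the gesture. This is a hypothetical illustration: the per-frame activity measure and the threshold are assumptions; the point is only that the window length adapts to the data rather than being fixed.

```python
# Hypothetical sketch of delimiting a variable-length window of touch
# frames: keep the contiguous run of frames whose activity exceeds a
# threshold, i.e. the frames indicative of feature data. The activity
# measure and threshold are illustrative assumptions.

def delimit_window(activity: list[float], threshold: float = 0.2) -> tuple[int, int]:
    """Return (start, end) indices of the first contiguous run of
    frames with activity above `threshold`; (-1, -1) if none exists."""
    start = -1
    for i, a in enumerate(activity):
        if a > threshold and start == -1:
            start = i                 # window opens at the first active frame
        elif a <= threshold and start != -1:
            return (start, i)         # window closes when activity drops
    return (start, len(activity)) if start != -1 else (-1, -1)

# Frames 2..5 carry the gesture; the window length adapts to the data.
frames = [0.0, 0.1, 0.6, 0.9, 0.8, 0.5, 0.1, 0.0]
print(delimit_window(frames))  # (2, 6)
```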
ELECTRONIC DEVICE FOR VEHICLE AND METHOD OF OPERATING ELECTRONIC DEVICE FOR VEHICLE
Disclosed is an electronic device for a vehicle, including a processor that acquires image data captured by a camera mounted in the vehicle, determines a riding intention of a person outside the vehicle based on the location, posture, and gesture of that person as detected from the image data, and generates a control signal to stop the vehicle based on the riding intention.
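One plausible shape for combining the three cues above into a stop decision is a weighted score. This is purely an illustrative assumption: the weights, threshold, and boolean cue inputs are not from the patent, which leaves the combination method unspecified.

```python
# Hypothetical sketch: combining location, posture, and gesture cues
# into a riding-intention score that gates the stop signal. Weights and
# threshold are illustrative assumptions, not from the patent.

def riding_intention(near_roadside: bool, facing_vehicle: bool,
                     hailing_gesture: bool) -> float:
    """Weighted score in [0, 1] from the three detected cues."""
    score = 0.0
    score += 0.3 if near_roadside else 0.0    # location cue
    score += 0.3 if facing_vehicle else 0.0   # posture cue
    score += 0.4 if hailing_gesture else 0.0  # gesture cue
    return score

def should_stop(score: float, threshold: float = 0.6) -> bool:
    """Generate the stop decision once the score clears the threshold."""
    return score >= threshold

print(should_stop(riding_intention(True, True, True)))    # True
print(should_stop(riding_intention(True, False, False)))  # False
```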