Patent classifications
B60K2360/1464
Method for Displaying Points of Interest on a Digital Map
A method displays points of interest on a digital map on a display, each point of interest being assigned at least one category, and the space in front of the display being segmented into spatial regions, each of which is assigned a category. The method includes detecting the position of a user's hand, or a part of the hand, by sensors, in particular cameras; identifying the spatial region in which the detected position lies; identifying the category assigned to the identified spatial region; and displaying or highlighting the points of interest that have been assigned to the identified category.
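The selection logic described above can be sketched in Python. All region bounds, category names, and points of interest below are hypothetical illustrations, not taken from the patent:

```python
# Sketch: map a sensed hand position in front of the display to a spatial
# region, look up the region's assigned category, and select the points of
# interest carrying that category. All data here is made up.

REGIONS = {
    "left":  {"x_range": (0.0, 0.33), "category": "fuel"},
    "mid":   {"x_range": (0.33, 0.66), "category": "food"},
    "right": {"x_range": (0.66, 1.0), "category": "parking"},
}

POIS = [
    {"name": "Station A", "categories": {"fuel"}},
    {"name": "Diner B",   "categories": {"food"}},
    {"name": "Garage C",  "categories": {"parking", "fuel"}},
]

def region_for_hand(x):
    """Identify the spatial region containing the normalized hand x-position."""
    for name, region in REGIONS.items():
        lo, hi = region["x_range"]
        if lo <= x < hi:
            return name
    return None  # hand is outside every segmented region

def pois_to_highlight(hand_x):
    """Return names of POIs assigned to the category of the hand's region."""
    region = region_for_hand(hand_x)
    if region is None:
        return []
    category = REGIONS[region]["category"]
    return [p["name"] for p in POIS if category in p["categories"]]
```

A real implementation would derive the hand position from camera-based tracking; the sketch only covers the region-to-category-to-POI lookup.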
SYSTEMS AND METHODS FOR DISTRIBUTING HAPTIC EFFECTS TO USERS INTERACTING WITH USER INTERFACES
A system includes a user interface configured to receive an input from a user of the system, a sensor configured to sense a position of a user input element relative to the user interface, and a processor configured to receive an input signal from the sensor based on the position of the user input element relative to the user interface, determine a haptic effect based on the input signal, and output a haptic effect generation signal based on the determined haptic effect. A haptic output device is configured to receive the haptic effect generation signal from the processor and generate the determined haptic effect to the user, the haptic output device being located separate from the user interface so that the determined haptic effect is generated away from the user interface.
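The signal path described above — sense the input element's position, determine an effect, and generate it on a device located away from the interface — can be sketched as follows; the effect names and distance thresholds are hypothetical:

```python
# Sketch: determine a haptic effect from the sensed position of a user input
# element relative to the interface, and route it to a haptic output device
# that is separate from the interface. Thresholds and names are made up.

def determine_haptic_effect(distance_to_surface_mm):
    """Map the input element's distance to the UI surface to an effect."""
    if distance_to_surface_mm <= 0.0:
        return "click"   # contact with the interface
    if distance_to_surface_mm < 10.0:
        return "pulse"   # hovering close to the interface
    return "none"

class RemoteHapticDevice:
    """Stands in for an actuator (e.g. a wearable) away from the interface."""
    def __init__(self):
        self.played = []

    def play(self, effect):
        if effect != "none":
            self.played.append(effect)

device = RemoteHapticDevice()
device.play(determine_haptic_effect(0.0))   # contact
device.play(determine_haptic_effect(5.0))   # near hover
device.play(determine_haptic_effect(50.0))  # too far: no effect generated
```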
AUTOMOBILE DASHCAM SYSTEM AND METHOD FOR CONTROLLING STORAGE OF IMAGE
Provided are an automobile dashcam system and a method therefor, and more particularly, an automobile dashcam system capable of controlling the storage of images through a simple motion of a user, without the user directly inputting a command, and a method therefor.
SYSTEMS AND METHODS FOR TRIGGERING ACTIONS BASED ON TOUCH-FREE GESTURE DETECTION
Systems, methods and non-transitory computer-readable media for triggering actions based on touch-free gesture detection are disclosed. The disclosed systems may include at least one processor. A processor may be configured to receive image information from an image sensor, detect in the image information a gesture performed by a user, detect a location of the gesture in the image information, access information associated with at least one control boundary, the control boundary relating to a physical dimension of a device in a field of view of the user, or a physical dimension of a body of the user as perceived by the image sensor, and cause an action associated with the detected gesture, the detected gesture location, and a relationship between the detected gesture location and the control boundary.
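The triggering condition described above depends on three things: the gesture, where it occurred, and how that location relates to a control boundary. A minimal sketch, with a hypothetical boundary, gesture names, and action table:

```python
# Sketch: choose an action from (gesture, gesture location, relationship of
# that location to a control boundary). Here the boundary is the pixel extent
# of a device's screen as seen by the image sensor; all values are made up.

SCREEN_BOUNDARY = {"x": (100, 540), "y": (80, 400)}

def relation_to_boundary(x, y):
    """Classify the gesture location relative to the control boundary."""
    bx, by = SCREEN_BOUNDARY["x"], SCREEN_BOUNDARY["y"]
    inside = bx[0] <= x <= bx[1] and by[0] <= y <= by[1]
    return "inside" if inside else "outside"

ACTIONS = {
    ("swipe_right", "inside"):  "next_page",
    ("swipe_right", "outside"): "switch_app",
    ("tap", "inside"):          "select",
}

def action_for(gesture, x, y):
    """Cause the action associated with gesture, location, and boundary."""
    return ACTIONS.get((gesture, relation_to_boundary(x, y)))
```

The same detected gesture maps to different actions depending on whether it occurs inside or outside the boundary, which is the core of the claimed method.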
CONTROL APPARATUS, CONTROL METHOD, AGENT APPARATUS, AND COMPUTER READABLE STORAGE MEDIUM
A control apparatus controls an agent apparatus functioning as a user interface of a request processing apparatus that acquires a request indicated by a voice of a user and performs a process corresponding to the request. The control apparatus includes a gaze point specifying section that specifies a gaze point of the user, and a state determining section that makes a determination to change a state of the agent apparatus from a standby state of processing an activation request for starting a response process via the agent to an activation state of processing a request other than the activation request via the agent, if the gaze point is positioned at (i) a portion of the agent used to transmit information to the user or (ii) a portion of an image output section that displays or projects an image of the agent.
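The gaze-driven state change described above amounts to a small state machine: the agent leaves its standby state when the gaze point lands on the agent itself or on the section displaying the agent's image. A sketch with hypothetical screen regions:

```python
# Sketch: move the agent from a standby state (processing only activation
# requests) to an activation state (processing other requests) when the
# user's gaze point falls on the agent or on the image output section that
# displays the agent. Region coordinates and state names are made up.

AGENT_REGION   = {"x": (0, 100),   "y": (0, 100)}    # the agent figure
DISPLAY_REGION = {"x": (200, 400), "y": (0, 150)}    # image output section

def _contains(region, gaze):
    gx, gy = gaze
    return (region["x"][0] <= gx <= region["x"][1]
            and region["y"][0] <= gy <= region["y"][1])

class AgentController:
    def __init__(self):
        self.state = "standby"  # only the activation request is processed

    def on_gaze(self, gaze_point):
        """Activate when the gaze rests on the agent or its display."""
        if _contains(AGENT_REGION, gaze_point) or _contains(DISPLAY_REGION, gaze_point):
            self.state = "active"  # requests beyond activation now processed
```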
Image display device and image display method
An image display device includes an acquisition unit that acquires two or more types of information, a detection unit that detects at least one of an acceleration, an orientation, and an angular velocity of the image display device, an image change section that determines a state of the image display device on the basis of a detection result of the detection unit, switches information, which is to be selected from the two or more types of information acquired by the acquisition unit, in accordance with the determined state of the image display device, and generates display data based on the switched information, and a display unit that displays the display data.
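The switching behavior described above can be sketched as: classify the device state from motion sensing, then select which of the acquired information feeds to turn into display data. The states, thresholds, and feed names below are hypothetical:

```python
# Sketch: infer a device state from acceleration/orientation readings and
# switch the information selected for display accordingly. All thresholds,
# state names, and feeds are made up.

def device_state(acceleration_g, tilt_deg):
    """Classify the device as in motion, laid flat, or held upright."""
    if acceleration_g > 2.0:
        return "moving"
    return "flat" if tilt_deg < 20.0 else "upright"

FEEDS = {
    "upright": "navigation_info",
    "flat":    "media_info",
    "moving":  "safety_info",
}

def display_data(acceleration_g, tilt_deg):
    """Generate display data from the feed selected for the current state."""
    return FEEDS[device_state(acceleration_g, tilt_deg)]
```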
Touch type operation device, and operation method and operation program thereof
A touch pad includes a flat portion and a convex three-dimensional portion having a hemispherical shape. The outer circumferential surface of the three-dimensional portion constitutes a spherical operation area. A second recognition unit of the touch pad controller recognizes a gesture operation of at least two fingers as the same gesture operation as a pinch-in operation in a planar operation area, in a case where the movement trajectories of the at least two fingers in the spherical operation area are arc-shaped trajectories that bulge outward when the spherical operation area is viewed in a plan view.
EFFICIENT CONFIGURATION OF SCENARIOS FOR EVENT SEQUENCING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for initiating actions based on sequences of events. In one aspect, a process includes receiving, from each of multiple devices, data specifying an event detected by the device. The events are compared to configured scenarios that each specify a sequence of trigger events and an action to be initiated in response to detecting the sequence of trigger events. A determination is made that the events match a first scenario of the scenarios. In response to determining that the events detected by the multiple devices match the first scenario, a first action specified by the first scenario is initiated. A determination is made that a combination of one or more of the events detected by the multiple devices and the first action matches a second scenario. In response, a second action specified by the second scenario is initiated.
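The chaining described above — an initiated action can itself combine with earlier events to satisfy a second scenario — can be sketched as follows. For brevity the sketch matches trigger events as an unordered set rather than a strict sequence, and all event, action, and scenario names are hypothetical:

```python
# Sketch: compare events reported by multiple devices against configured
# scenarios; each initiated action is fed back into the event pool so it can
# combine with earlier events to trigger a further scenario. All names are
# made up, and sequence order is simplified to set membership.

SCENARIOS = [
    {"triggers": ["door_open", "motion"], "action": "lights_on"},
    {"triggers": ["motion", "lights_on"], "action": "start_recording"},
]

def matched_actions(events):
    """Initiate actions whose trigger events all appear in the event pool."""
    pool = list(events)
    initiated = []
    for scenario in SCENARIOS:
        if all(t in pool for t in scenario["triggers"]):
            action = scenario["action"]
            initiated.append(action)
            pool.append(action)  # the action may satisfy a later scenario
    return initiated
```

With events `["door_open", "motion"]`, the first scenario fires `lights_on`, which then completes the second scenario's triggers and fires `start_recording` in the same pass.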
Method for the contactless shifting of visual information
A method for the contactless shifting of visual information includes detecting at least one viewing direction to detect visual information in a first display area, and detecting a movement of a hand and/or of the head of a user toward a second display area. After the visual information in the first display area has been detected, and after the detected motion indicates a shift, the visual information is shown in the second display area upon completion of the hand and/or head movement.