Patent classification: G01C21/265
HAPTIC INFORMATION PROVISION DEVICE
The present invention relates to a haptic information provision device. The haptic information provision device (100) according to the present invention comprises: a receiver (120) for receiving notification information from an external source; a controller (130) for converting the notification information into a haptic signal; and an operation unit (110) for transferring haptic information to a user according to the haptic signal. The operation unit (110) includes a plurality of operation units (110a-110j), and the respective operation units (110a-110j) operate in response to different notification information, thereby transferring different haptic information to the user.
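A minimal, hypothetical Python sketch of the mapping this abstract describes: each kind of notification information is converted into a haptic signal that drives a different operation unit, so the user can distinguish notifications by which element operates. The notification types, unit identifiers, and pulse pattern are illustrative assumptions, not details from the patent.

```python
# Hypothetical mapping: each notification type drives a different operation unit.
NOTIFICATION_TO_UNIT = {
    "incoming_call": "110a",
    "new_message": "110b",
    "low_battery": "110c",
}

def to_haptic_signal(notification_type: str):
    """Controller (130) step: convert notification information to a haptic signal."""
    unit = NOTIFICATION_TO_UNIT.get(notification_type)
    if unit is None:
        return None  # unknown notification: no haptic output
    # Drive only the operation unit assigned to this notification type
    # (the pulse pattern is an illustrative assumption).
    return {"unit": unit, "pulse_ms": [100, 50, 100]}

print(to_haptic_signal("new_message"))  # {'unit': '110b', 'pulse_ms': [100, 50, 100]}
```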
Robot and method for controlling the same
A robot according to an embodiment of the present disclosure includes an authentication interface for authenticating a user's boarding of the robot using the user's authentication information, a position detector for detecting a position of the robot in a space, a processor, and a display for outputting information on a set driving mode. The processor identifies a first section corresponding to the detected position among at least one section in the space, recognizes at least one driving mode for the first section among a plurality of driving modes with different driving speeds, sets one of the recognized driving modes as the driving mode for the first section based on the authentication information, and controls driving of the robot based on the set driving mode.
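A minimal Python sketch of the described control flow, under assumed section geometry, mode names, and user-profile fields: find the section containing the robot's detected position, look up the driving modes recognized for that section, and set one of them based on the authenticated user's information.

```python
from dataclasses import dataclass

@dataclass
class DrivingMode:
    name: str
    max_speed: float  # m/s

@dataclass
class Section:
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    modes: list

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

SLOW, NORMAL = DrivingMode("slow", 0.5), DrivingMode("normal", 1.2)
SECTIONS = [
    Section("lobby", 0, 10, 0, 10, [SLOW]),
    Section("corridor", 10, 40, 0, 5, [SLOW, NORMAL]),
]

def set_driving_mode(position, user_info):
    x, y = position
    # Identify the first section corresponding to the detected position.
    section = next(s for s in SECTIONS if s.contains(x, y))
    # Choose among the modes recognized for that section based on the user's
    # authentication information (the "needs_assistance" flag is an assumption).
    if user_info.get("needs_assistance"):
        return min(section.modes, key=lambda m: m.max_speed)
    return max(section.modes, key=lambda m: m.max_speed)

print(set_driving_mode((15.0, 2.0), {"needs_assistance": False}).name)  # normal
```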
NAVIGATION DEVICE AND METHOD OF MANUFACTURING NAVIGATION DEVICE
A navigation device includes an outer panel and an inertial sensor. The outer panel includes a pair of side plates that are separated from each other in a first direction and face each other. A pair of fixing portions are provided on the pair of side plates and are fixed to fixing members on the moving-body side. The inertial sensor is provided in the space surrounded by the outer panel and is arranged at a position sandwiched between the pair of fixing portions in the first direction.
Measurement device, measurement method and program
The measurement device acquires output data from a sensor unit for detecting surrounding features, and extracts, from the output data, data corresponding to detection results within a predetermined range that has a predetermined positional relation to the device's own position. The predetermined range is determined in accordance with the accuracy of the own position. Then, the measurement device executes predetermined processing based on the extracted data.
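A minimal Python sketch, with assumed units and thresholds, of the extraction step described above: keep only the detections that fall within a range around the own position, where the size of the range is determined by the accuracy of the own-position estimate. The specific relation between accuracy and range size is an assumption for illustration.

```python
import numpy as np

def extract_in_range(detections, own_position, position_error,
                     base_range=20.0, error_gain=2.0):
    """detections: (N, 2) feature positions; own_position: (2,); position_error in metres."""
    # Assumed relation: the search range grows with the estimated position error.
    search_range = base_range + error_gain * position_error
    detections = np.asarray(detections, dtype=float)
    dists = np.linalg.norm(detections - np.asarray(own_position, dtype=float), axis=1)
    return detections[dists <= search_range]

points = [[1.0, 2.0], [30.0, 0.0], [55.0, 5.0]]
print(extract_in_range(points, own_position=(0.0, 0.0), position_error=1.0))
```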
Display system
A display system of the present disclosure forms an AR route by shifting node information included in road map data to the lane on which a subject vehicle is to travel, on the basis of lane information. This makes it possible to display an AR route that matches the shape of the route the subject vehicle is to travel and does not appear unnatural, while resolving the problem that the AR route is largely displaced from the vehicle's actual route at positions where a plurality of roads intersect, such as intersections and branch points.
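A minimal sketch, with assumed data shapes, of the node-shifting idea: each route node from the road map data is moved laterally, perpendicular to the local route direction, by a signed lane offset derived from lane information, so the displayed AR route follows the vehicle's own lane. The offset value and node format are assumptions.

```python
import numpy as np

def shift_nodes_to_lane(nodes, lane_offset):
    """nodes: (N, 2) route node positions in metres; lane_offset: signed lateral offset."""
    nodes = np.asarray(nodes, dtype=float)
    shifted = nodes.copy()
    for i in range(len(nodes)):
        # Estimate the local travel direction from neighbouring nodes.
        a = nodes[max(i - 1, 0)]
        b = nodes[min(i + 1, len(nodes) - 1)]
        direction = (b - a) / np.linalg.norm(b - a)
        # Unit normal pointing to the left of the travel direction.
        normal = np.array([-direction[1], direction[0]])
        shifted[i] = nodes[i] + lane_offset * normal
    return shifted

center_nodes = [(0.0, 0.0), (0.0, 10.0), (5.0, 20.0)]  # road-centre nodes from map data
ar_route = shift_nodes_to_lane(center_nodes, -1.75)     # shift onto the right-hand lane
```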
Guidance display system for work vehicles and work implements
A work vehicle guidance display system comprising: at least one imaging device disposed on a work vehicle; a display disposed in the work vehicle and configured to display images from the imaging device; and a controller configured to: select a field of view of the imaging device to display; receive a static dimension associated with the work vehicle; receive a dynamic dimension associated with the work vehicle; and display, on the display, the selected field of view with a first machine travel path based on the static dimension and a second machine travel path based on the dynamic dimension.
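A minimal, assumed-parameter sketch of the two guidance paths the claim describes: one pair of path edges computed from a static dimension (for example the vehicle's own width) and a second pair from a dynamic dimension (for example the current width of an attached implement); both would then be projected into the selected camera view for display.

```python
def travel_path_edges(width, length=10.0, step=0.5):
    """Return (left, right) edge points of a straight, machine-centred travel path."""
    half = width / 2.0
    ys = [i * step for i in range(int(length / step) + 1)]
    return [(-half, y) for y in ys], [(half, y) for y in ys]

static_path = travel_path_edges(width=2.5)   # first path, from the static dimension
dynamic_path = travel_path_edges(width=4.2)  # second path, from the dynamic dimension
```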
METHOD FOR IDENTIFYING A TARGET POSITION OF A DISPLAY AND/OR CONTROL UNIT IN A HOLDING DEVICE, AND CORRESPONDING DISPLAY AND/OR CONTROL UNIT
A method, and a device that carries out the method, by which the display and/or control unit is properly held and/or oriented in the holding device so that further functions can be carried out.
PRESENTATION CONTROL DEVICE, SYSTEM, METHOD AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM THEREIN
A presentation control device is provided for guiding a visitor in a specific area to an appropriate point in the area. The presentation control device (100) includes an acquisition unit (130) that acquires a photographed image captured by a predetermined photographing device, an authentication control unit (140) that extracts a face area or facial feature information from the photographed image and causes an authentication device (200) to perform face authentication, a specification unit (160) that specifies presentation information to be presented for guidance to a predetermined point, based on a movement history and an action history associated with the user ID of a user who succeeds in the face authentication, and an output unit (170) that transmits the presentation information specified by the specification unit (160) to a predetermined display terminal.
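A minimal sketch of the described flow under hypothetical interfaces: acquire a photographed image, request face authentication, look up the movement and action histories associated with the authenticated user ID, choose presentation information, and transmit it to a display terminal. The callables and the history store are assumptions, not the patent's API.

```python
def control_presentation(image, authenticate, histories, choose_guidance, send_to_display):
    # Authentication control unit (140): delegate face authentication; assume the
    # callable returns a user ID on success and None on failure.
    user_id = authenticate(image)
    if user_id is None:
        return False
    # Specification unit (160): specify presentation information from the user's
    # movement and action histories.
    movement_history, action_history = histories.get(user_id, ([], []))
    presentation = choose_guidance(movement_history, action_history)
    # Output unit (170): transmit the presentation information to the display terminal.
    send_to_display(presentation)
    return True
```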
PROJECTING IMAGES CAPTURED USING FISHEYE LENSES FOR FEATURE DETECTION IN AUTONOMOUS MACHINE APPLICATIONS
In various examples, live perception from wide-view sensors may be leveraged to detect features in an environment of a vehicle. Sensor data generated by the sensors may be adjusted to represent a virtual field of view different from the actual field of view of the sensor, and the sensor data—with or without virtual adjustment—may be applied to a stereographic projection algorithm to generate a projected image. The projected image may then be applied to a machine learning model—such as a deep neural network (DNN)—to detect and/or classify features or objects represented therein. In some examples, the machine learning model may be pre-trained on training sensor data generated by a sensor with a field of view narrower than that of the wide-view sensor, such that the virtual adjustment and/or projection algorithm may update the sensor data to be suitable for accurate processing by the pre-trained machine learning model.
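A minimal sketch, assuming an equidistant fisheye lens model and using OpenCV only for the final pixel remap, of producing a stereographic projection from a wide-view image before passing it to a pre-trained detector. The focal lengths, output size, and lens model are illustrative assumptions.

```python
import numpy as np
import cv2

def fisheye_to_stereographic(fisheye_img, f_in, f_out, out_size):
    """Remap an equidistant-fisheye image to a stereographic projection."""
    h_out, w_out = out_size
    cx_out, cy_out = w_out / 2.0, h_out / 2.0
    h_in, w_in = fisheye_img.shape[:2]
    cx_in, cy_in = w_in / 2.0, h_in / 2.0

    # Pixel grid of the output (stereographic) image.
    xs, ys = np.meshgrid(np.arange(w_out), np.arange(h_out))
    dx, dy = xs - cx_out, ys - cy_out
    r_out = np.hypot(dx, dy)

    # Stereographic model: r = 2 f tan(theta / 2)  ->  theta = 2 atan(r / (2 f)).
    theta = 2.0 * np.arctan2(r_out, 2.0 * f_out)
    # Equidistant fisheye model: r = f * theta.
    r_in = f_in * theta

    # Keep each pixel's azimuth; change only its radial distance from the centre.
    scale = np.divide(r_in, r_out, out=np.zeros_like(r_in), where=r_out > 0)
    map_x = (cx_in + dx * scale).astype(np.float32)
    map_y = (cy_in + dy * scale).astype(np.float32)
    return cv2.remap(fisheye_img, map_x, map_y, cv2.INTER_LINEAR)

# Example (assumed focal lengths and output size):
# projected = fisheye_to_stereographic(img, f_in=320.0, f_out=320.0, out_size=(720, 1280))
```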