Patent classifications
G05D1/2249
Information processing device, mobile device, information processing system, and method
Provided is a data processing unit of a user terminal that sets a real object included in a camera-captured image as a marker, generates marker reference coordinates with a configuration point of the set marker as an origin, and transmits position data on the marker reference coordinates to a mobile device. The data processing unit transforms the destination position of the drone or the position of the tracking target from the coordinate position on the user terminal camera coordinates to the coordinate position on the marker reference coordinates, and transmits the transformed coordinate position to the drone. The data processing unit receives the movement path from the drone as coordinate position data on the marker reference coordinates, transforms the coordinate position data into a coordinate position on the user terminal camera coordinates, and displays the path information on the display unit.
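The core of this abstract is a rigid transform between the user-terminal camera frame and a marker-anchored frame. A minimal sketch of that round trip, assuming a 2D yaw-only marker pose (the function names and parameters are illustrative, not taken from the patent):

```python
import math

def make_transform(origin, yaw):
    """Rigid 2D transform pair between camera and marker frames, given the
    marker origin and yaw as observed in camera coordinates."""
    c, s = math.cos(yaw), math.sin(yaw)

    def camera_to_marker(p):
        # Translate to the marker origin, then rotate by -yaw.
        dx, dy = p[0] - origin[0], p[1] - origin[1]
        return (c * dx + s * dy, -s * dx + c * dy)

    def marker_to_camera(q):
        # Inverse: rotate by +yaw, then translate back.
        return (origin[0] + c * q[0] - s * q[1],
                origin[1] + s * q[0] + c * q[1])

    return camera_to_marker, marker_to_camera

cam_to_mk, mk_to_cam = make_transform(origin=(2.0, 1.0), yaw=math.pi / 2)
dest_marker = cam_to_mk((2.0, 4.0))   # destination seen in the camera frame
roundtrip = mk_to_cam(dest_marker)    # e.g. a path point returned for display
```

The outbound direction (`camera_to_marker`) corresponds to sending a destination or tracking-target position to the drone; the inverse corresponds to mapping the drone's returned path back into the terminal's camera frame for display.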
SYSTEM, APPARATUS, AND METHOD FOR PROVIDING AUGMENTED REALITY ASSISTANCE TO WAYFINDING AND PRECISION LANDING CONTROLS OF AN UNMANNED AERIAL VEHICLE TO DIFFERENTLY ORIENTED INSPECTION TARGETS
A method for controlling an unmanned aerial vehicle using a control apparatus, comprises: executing a navigation process by: obtaining a live video moving image from a navigation camera device of the UAV; and generating a navigation display interface for display on a display device of the control apparatus, the navigation display interface comprising a plurality of navigation augmented reality display elements related to a determined waypoint superimposed over the live video moving image; and when the UAV reaches the determined waypoint, executing a precision landing process by: generating a precision landing display interface for display on the display device, the precision landing display interface comprising a plurality of precision landing AR display elements related to a landing target associated with the determined waypoint superimposed over the live video moving image obtained from a precision landing camera device of the UAV.
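Superimposing waypoint-related AR elements over the live video implies projecting a 3D waypoint into pixel coordinates. A minimal pinhole-camera sketch of that step, with assumed intrinsics (nothing here is specified by the patent):

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Project a 3D point in the camera frame (x right, y down, z forward)
    to pixel coordinates with a pinhole model; None if behind the camera."""
    x, y, z = point_cam
    if z <= 0:
        return None
    return (cx + fx * x / z, cy + fy * y / z)

# Hypothetical waypoint 2 m right of and 10 m ahead of the UAV camera,
# projected with an assumed 1280x720 camera (focal length 800 px).
pix = project_to_image((2.0, 0.0, 10.0), fx=800, fy=800, cx=640, cy=360)
```

The display interface would then draw the waypoint marker at `pix`; the behind-camera check is what lets the overlay layer cull elements outside the field of view.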
Safeguarded Tele-Operation for Autonomous Driving
A system may receive a video stream from a camera of a vehicle in a transportation network. The system may also receive an input indicating an acceleration and a steering angle of a simulated lead vehicle. The system may determine a pose of the simulated vehicle relative to a home pose based on the acceleration input and the steering input, and display an overlay of a representation of the simulated vehicle in the video stream at pixel coordinates based on the pose. The system may determine, from the pixel coordinates, spatial coordinates of the simulated vehicle in the transportation network, wherein the spatial coordinates are relative to at least one of the transportation network or the camera. The system may transmit the spatial coordinates to the vehicle to cause the vehicle to follow a path based on the spatial coordinates.
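Determining the simulated lead vehicle's pose from acceleration and steering inputs is commonly done with a kinematic bicycle model; the sketch below is one such assumption, not the patent's stated method, and the wheelbase and time step are illustrative:

```python
import math

def step_bicycle(pose, v, accel, steer, wheelbase=2.8, dt=0.1):
    """Advance a kinematic bicycle model one time step.
    pose = (x, y, heading) relative to the home pose; returns (pose, speed)."""
    x, y, th = pose
    v = v + accel * dt
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += (v / wheelbase) * math.tan(steer) * dt
    return (x, y, th), v

# One second of teleoperator input: gentle acceleration, slight left steer.
pose, v = (0.0, 0.0, 0.0), 5.0
for _ in range(10):
    pose, v = step_bicycle(pose, v, accel=1.0, steer=0.05)
```

The resulting pose is what would be rendered as the overlay in the video stream and then converted, via the camera model, into the spatial coordinates transmitted to the real vehicle.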
DRIVING ASSISTANCE SYSTEM, DRIVING ASSISTANCE APPARATUS, AND DRIVING ASSISTANCE METHOD
A driving assistance system (1001) according to the present disclosure includes: an acquisition unit (11) that acquires information regarding a driving operation amount of a moving object; a specifying unit (12) that specifies a region in which the moving object can travel, based on the information acquired by the acquisition unit (11); and a display information generation unit (13) that generates display information presenting the region specified by the specifying unit (12) to a user.
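One plausible way to specify a travelable region from the driving operation amounts is to sweep a vehicle-width corridor along the predicted path. This sketch assumes a kinematic bicycle prediction and illustrative dimensions; the patent does not state the method:

```python
import math

def travelable_region(speed, steer, width=1.8, wheelbase=2.7,
                      horizon=2.0, steps=20):
    """Predict the region the vehicle will sweep over a short horizon from
    the current operation amounts (speed, steering angle), as left/right
    boundary polylines a display layer could fill and overlay."""
    x, y, th = 0.0, 0.0, 0.0
    dt = horizon / steps
    left, right = [], []
    for _ in range(steps + 1):
        # Offset half the vehicle width perpendicular to the heading.
        ox, oy = -math.sin(th) * width / 2, math.cos(th) * width / 2
        left.append((x + ox, y + oy))
        right.append((x - ox, y - oy))
        x += speed * math.cos(th) * dt
        y += speed * math.sin(th) * dt
        th += (speed / wheelbase) * math.tan(steer) * dt
    return left, right

left, right = travelable_region(speed=8.0, steer=0.1)
```

The two polylines bound the region the display information generation unit would render to the user.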
Robotic cleaner
A robotic cleaning system may include a robotic cleaner configured to generate a map of an environment and a mobile device configured to communicatively couple to the robotic cleaner, the robotic cleaner configured to communicate the map to the mobile device. The mobile device may include a camera configured to generate an image of the environment, the image comprising a plurality of pixels, a display configured to display the image and to receive a user input while displaying the image, the user input being associated with one or more of the plurality of pixels, a depth sensor configured to generate depth data that is associated with each pixel of the image, an orientation sensor configured to generate orientation data that is associated with each pixel of the image, and a mobile controller configured to localize the mobile device within the map using the depth data and the orientation data.
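Localizing the mobile device via per-pixel depth and orientation data amounts to back-projecting a pixel into 3D and transforming it by the device pose. A simplified sketch, assuming a pinhole camera and a yaw-only orientation (both simplifying assumptions, not from the patent):

```python
import math

def pixel_to_map(u, v, depth, fx, fy, cx, cy, device_pos, device_yaw):
    """Back-project an image pixel with its depth reading into a 3D point,
    then rotate/translate it into the map frame using the device pose."""
    # Camera-frame ray scaled by measured depth (x right, y down, z forward).
    xc = (u - cx) / fx * depth
    yc = (v - cy) / fy * depth
    zc = depth
    # Yaw rotation about the vertical axis, then translate by device position.
    c, s = math.cos(device_yaw), math.sin(device_yaw)
    mx = device_pos[0] + c * xc + s * zc
    my = device_pos[1] - s * xc + c * zc
    return (mx, my, device_pos[2] - yc)

# Hypothetical: center pixel of a 1280x720 image, 3 m depth, device at
# (1, 2, 1.5) in the robot's map, facing along +y.
pt = pixel_to_map(640, 360, 3.0, 800, 800, 640, 360, (1.0, 2.0, 1.5), 0.0)
```

A point recovered this way is what lets a user's tap on the displayed image be resolved to a location in the robot's map.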
REMOTE OPERATION SUPPORT SYSTEM AND REMOTE OPERATION SUPPORT METHOD
A remote operation support system and method are provided that can improve the spatial recognition accuracy of an operator who remotely operates a work machine. A captured image representing the scene around the work machine (40) is captured by an imaging device (412b) mounted on the work machine (40). A synthetic image is generated by superimposing, on the captured image, an index image representing in a pseudo manner an index member positioned in a cab (454) (operator's room) of the work machine (40). The synthetic image is displayed on a remote output interface (220) of a remote operation device (20) that allows remote operation of the work machine (40).
AUGMENTED REALITY ROAD DISPLAY
An example operation includes one or more of receiving sensor data of a vehicle as the vehicle travels along a road, determining that an environment around the vehicle has deteriorated based on the sensor data of the vehicle, extracting previously-captured sensor data of the road, wherein the previously-captured sensor data is captured by one or more other vehicles while travelling along the road, determining a view of the road based on execution of an artificial intelligence (AI) model on the previously-captured sensor data, and displaying the view of the road within the vehicle via augmented reality while the vehicle travels along the road.
WORKING SYSTEM
A working system can perform work by a working mobile body remotely operated by a user, and includes an image generation unit configured to generate a virtual space image corresponding to a real space around the working mobile body, and a head-mounted display configured to be worn by the user and give the user the virtual space image generated by the image generation unit. The image generation unit generates the virtual space image corresponding to the position and direction of the working mobile body.
Autonomous monitoring robot systems
An autonomous mobile robot includes a chassis, a drive supporting the chassis above a floor surface in a home and configured to move the chassis across the floor surface, a variable height member being coupled to the chassis and being vertically extendible, a camera supported by the variable height member, and a controller. The controller is configured to operate the drive to navigate the robot to locations within the home and to adjust a height of the variable height member upon reaching a first of the locations. The controller is also configured to, while the variable height member is at the adjusted height, operate the camera to capture digital imagery of the home at the first of the locations.
Systems and methods to ensure safe driving behaviors associated with remote driving applications
Systems and methods to ensure safe driving behaviors in remote driving applications may include a vehicle having an imaging device and a teleoperator station in communication with each other via a network. For example, a safety tunnel having various safety tunnel parameters may be generated based on location data, map data, vehicle data, and/or sensor data. Remote operation of the vehicle may be monitored with respect to the safety tunnel parameters, and various visual, audio, and/or haptic alerts or feedback may be presented or emitted for the teleoperator to encourage or enforce vehicle operation within the safety tunnel parameters. Further, various autonomous remote operation programs or control routines may be initiated or instructed to ensure safe driving behaviors of the vehicle based on the safety tunnel parameters.
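Monitoring remote operation against safety tunnel parameters reduces, in the simplest case, to checking the vehicle's lateral deviation from a centerline against the tunnel half-width. A minimal sketch under that assumption (the alert thresholds and names are illustrative, not from the patent):

```python
def lateral_deviation(vehicle, seg_start, seg_end):
    """Signed lateral distance from a 2D vehicle position to a straight
    centerline segment (cross product divided by segment length)."""
    dx, dy = seg_end[0] - seg_start[0], seg_end[1] - seg_start[1]
    px, py = vehicle[0] - seg_start[0], vehicle[1] - seg_start[1]
    seg_len = (dx * dx + dy * dy) ** 0.5
    return (dx * py - dy * px) / seg_len

def tunnel_alert(vehicle, seg_start, seg_end, half_width, warn_ratio=0.8):
    """Map deviation to an alert level the teleoperator station could
    surface as visual/audio/haptic feedback."""
    d = abs(lateral_deviation(vehicle, seg_start, seg_end))
    if d > half_width:
        return "violation"
    if d > warn_ratio * half_width:
        return "warn"
    return "ok"

# Vehicle drifting 1.9 m off a 2 m half-width tunnel centered on the x-axis.
alert = tunnel_alert((5.0, 1.9), (0.0, 0.0), (10.0, 0.0), half_width=2.0)
```

Escalating from "warn" to "violation" is where the abstract's autonomous remote operation programs or control routines would be initiated.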