Patent classifications
G06T2207/30244
INFORMATION PROCESSING DEVICE, PROGRAM, AND METHOD
An information processing device that includes a control unit configured to track an object across images input in time series, using a tracking result obtained by performing tracking in units of a tracking region corresponding to a specific part of the object.
MAP INFORMATION UPDATE METHOD, LANDMARK GENERATION METHOD, AND FEATURE POINT DISTRIBUTION ADJUSTMENT METHOD
A map information update method includes: (a) obtaining map information; (b) obtaining landmark observed positions indicating positions of one or more landmarks in a captured image; (c) adding that includes (i) generating added map information by adding information pertaining to the landmark observed positions to the map information, and (ii) updating the map information obtained in (a) to the added map information; (d) predicting that includes (i) calculating predicted map information based on the map information updated in (c), by using a neural network inference engine that has been trained, and (ii) updating the map information to the predicted map information; and (e) updating that includes (i) calculating updated map information based on the map information updated in (d), by using a gradient method, and (ii) updating the map information to the updated map information.
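The add/predict/refine cycle described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the patented method: the "neural network inference engine" is replaced by a stand-in centroid-smoothing predictor, and all function names, the 2D landmark representation, and the learning rate are assumptions.

```python
def add_observations(map_info, observed):
    # (c) merge newly observed landmark positions into the map
    merged = dict(map_info)
    for lid, pos in observed.items():
        merged.setdefault(lid, pos)
    return merged

def predict(map_info):
    # (d) stand-in for the trained inference engine: nudge each
    # landmark toward the map centroid as a crude learned prior
    xs = [p[0] for p in map_info.values()]
    ys = [p[1] for p in map_info.values()]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return {lid: (0.9 * x + 0.1 * cx, 0.9 * y + 0.1 * cy)
            for lid, (x, y) in map_info.items()}

def refine(map_info, observed, lr=0.5, iters=20):
    # (e) gradient descent on the squared distance between each
    # mapped landmark and its observed position
    est = dict(map_info)
    for _ in range(iters):
        for lid, (ox, oy) in observed.items():
            x, y = est[lid]
            est[lid] = (x - lr * 2 * (x - ox), y - lr * 2 * (y - oy))
    return est
```

Chaining the three steps reproduces the claimed update order: observations are merged first, the learned predictor produces a prior, and the gradient step pulls the map onto the evidence.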
GENERATING VIRTUAL IMAGES BASED ON CAPTURED IMAGE DATA
Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate a virtual image depicting objects within the input image according to the desired pose of the virtual camera for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
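The depth-based view synthesis this abstract describes can be approximated by back-projecting each pixel with its depth, applying the relative pose between the physical and virtual cameras, and re-projecting. The sketch below shows only that geometric core under simplifying assumptions (a pinhole model, no lens distortion, no occlusion handling); the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def reproject(depth, K, T_phys_to_virt):
    """Warp pixel coordinates from the physical camera into a virtual
    camera pose using per-pixel depth.

    depth           -- (h, w) depth map from the depth sensor
    K               -- 3x3 pinhole intrinsics (shared by both cameras here)
    T_phys_to_virt  -- 4x4 rigid transform from physical to virtual pose
    Returns an (h, w, 2) array of target pixel coordinates.
    """
    h, w = depth.shape
    Kinv = np.linalg.inv(K)
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
    pts = Kinv @ (pix * depth.reshape(1, -1))           # back-project to 3D
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_v = (T_phys_to_virt @ pts_h)[:3]                # change of camera pose
    proj = K @ pts_v                                    # re-project
    return (proj[:2] / proj[2]).T.reshape(h, w, 2)
```

With the identity transform, every pixel maps back to itself, which is a convenient sanity check before introducing a real virtual pose.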
Single-pass object scanning
Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and the depth data corresponding to each image of the subset. For example, an example process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of the physical environment captured via a camera on the device; selecting a subset of the images by assessing them with respect to motion-based defects based on device motion and depth data; and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
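One plausible reading of the frame-selection step is to score each frame by a motion-blur proxy and keep the best frame per time window. The sketch below assumes a precomputed scalar `blur` score per frame (real systems would derive it from gyro rates, exposure time, and depth); the field names and window size are assumptions for illustration.

```python
def select_keyframes(frames, window=0.5):
    """Partition frames into time windows and keep the frame with the
    lowest motion-blur score in each window.

    frames -- list of dicts with keys "t" (timestamp, seconds) and
              "blur" (motion-based defect score; lower is sharper)
    """
    frames = sorted(frames, key=lambda f: f["t"])
    kept, bucket, start = [], [], frames[0]["t"]
    for f in frames:
        if f["t"] - start >= window:
            # close the current window with its sharpest frame
            kept.append(min(bucket, key=lambda g: g["blur"]))
            bucket, start = [], f["t"]
        bucket.append(f)
    if bucket:
        kept.append(min(bucket, key=lambda g: g["blur"]))
    return kept
```

Windowing keeps the subset temporally spread across the scan while still rejecting the most motion-degraded captures.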
Localization method and system for mobile remote inspection and/or manipulation tools in confined spaces
A localization method and system for mobile remote inspection and/or manipulation tools in confined spaces are provided. The system comprises a mobile remote inspection and/or manipulation device including a carrier movable within the confined space and an inspection and/or manipulation tool, such as an inspection camera, pose sensors arranged on the movable carrier for providing signals indicative of the position and orientation of the movable carrier, and distance sensors arranged on the movable carrier for providing signals indicative of the distance to interior surfaces of the confined space. The localization method makes use of probabilistic sensor fusion of the measurement data provided by the pose sensors and the distance sensors in order to precisely determine the actual pose of the movable carrier and localize data generated by the inspection and/or manipulation tool.
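The core of probabilistic sensor fusion, in its simplest one-dimensional form, is inverse-variance weighting of two independent Gaussian estimates (the Kalman update without dynamics). The abstract does not disclose the specific filter used, so the sketch below is a generic illustration of the principle, not the patented method.

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Inverse-variance (Kalman-style) fusion of two independent
    Gaussian estimates of the same quantity, e.g. a pose-sensor
    estimate and a distance-sensor estimate of one pose coordinate."""
    w = var_b / (var_a + var_b)          # weight toward the less noisy sensor
    mu = w * mu_a + (1 - w) * mu_b       # fused mean
    var = var_a * var_b / (var_a + var_b)  # fused variance (always smaller)
    return mu, var
```

Note that the fused variance is strictly smaller than either input variance, which is why combining pose and distance sensors tightens the carrier's localization.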
Dynamic image capturing apparatus and method using arbitrary viewpoint image generation technology
Embodiments relate to a dynamic image capturing method and apparatus using an arbitrary viewpoint image generation technology, in which an image of background content, displayed on a background content display unit or implemented in a virtual space through a chroma key screen, is generated with a view matching the view of the subject as seen from the camera's viewpoint, and a final image including the image of the background content and a subject area is obtained.
Localization and mapping method and moving apparatus
A localization and mapping method is for localizing and mapping a moving apparatus in a moving process. The localization and mapping method includes an image capturing step, a feature point extracting step, a flag object identifying step, and a localizing and mapping step. The image capturing step includes capturing an image frame at a time point of a plurality of time points in the moving process by a camera unit. The flag object identifying step includes identifying, in accordance with a flag database, whether the image frame includes a flag object among a plurality of the feature points. The flag database includes a plurality of dynamic objects, and the flag object corresponds to one of the dynamic objects. The localizing and mapping step includes performing localization and mapping in accordance with the image frames captured in the moving process and the flag objects identified therein.
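A common way such flag-object handling is realized is to drop feature points that belong to detected dynamic objects before they enter the localization and mapping pipeline. The sketch below is an assumed minimal version: the class names in the flag database and the point representation are illustrative, not from the patent.

```python
# Illustrative flag database of dynamic object classes.
DYNAMIC_CLASSES = {"person", "car", "dog"}

def filter_static_points(points):
    """Drop feature points whose detected object class appears in the
    flag database of dynamic objects, keeping only static structure
    for localization and mapping.

    points -- list of dicts with keys "cls" (detected class label)
              and "xy" (pixel coordinates of the feature point)
    """
    return [p for p in points if p.get("cls") not in DYNAMIC_CLASSES]
```

Filtering dynamic points this way prevents moving objects from corrupting the map or biasing the pose estimate.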
Using mapped elevation to determine navigational parameters
Systems and methods for navigating a host vehicle. The system may perform operations including receiving, from an image capture device, at least one image representative of an environment of the host vehicle; analyzing the at least one image to identify an object in the environment of the host vehicle; determining a location of the host vehicle; receiving map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle; determining a distance from the host vehicle to the object based on at least the elevation information; and determining a navigational action for the host vehicle based on the determined distance.
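The elevation-based ranging this abstract describes can be illustrated with simple ray geometry: if the map supplies the elevation of the object's base and the camera height is known, the ground distance follows from the downward viewing angle to that base. The sketch below assumes flat terrain between vehicle and object and an undistorted pinhole camera; all names are illustrative.

```python
import math

def distance_from_elevation(cam_height, obj_elevation, angle_down):
    """Estimate ground distance to an object whose base elevation is
    known from map data.

    cam_height    -- camera elevation (e.g. meters above a map datum)
    obj_elevation -- elevation of the object's base from the map
    angle_down    -- downward viewing angle (radians) from the camera's
                     horizon to the object's base, derived from the
                     pixel row and camera intrinsics
    """
    drop = cam_height - obj_elevation     # vertical drop along the ray
    return drop / math.tan(angle_down)    # horizontal distance to the base
```

Using mapped elevation rather than assuming a flat road removes the range bias that a flat-ground model introduces on slopes.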
Autonomous mobile apparatus and control method thereof
The present disclosure provides an autonomous mobile apparatus and a control method thereof. The method includes: starting a SLAM mode; obtaining first image data captured by a first camera; extracting a first tag image of positioning tag(s) from the first image data; calculating a three-dimensional camera coordinate of feature points of the positioning tag(s) in a first camera coordinate system of the first camera based on the first tag image; calculating a three-dimensional world coordinate of the feature points of the positioning tag(s) in a world coordinate system based on a first camera pose of the first camera when obtaining the first image data in the world coordinate system and the three-dimensional camera coordinate; and generating a map file based on the three-dimensional world coordinate of the feature points of the positioning tag(s).
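The coordinate conversion at the heart of this abstract, from tag feature points in the camera frame to the world frame via the camera pose, is a single homogeneous transform. The sketch below shows just that step; the 4x4 pose-matrix convention and function name are assumptions for illustration.

```python
import numpy as np

def tag_points_to_world(T_world_cam, pts_cam):
    """Transform positioning-tag feature points from the first camera's
    coordinate system into the world coordinate system.

    T_world_cam -- 4x4 homogeneous camera pose (camera frame -> world frame)
    pts_cam     -- (n, 3) array of tag feature points in camera coordinates
    Returns an (n, 3) array of world coordinates, ready to be written
    into the map file.
    """
    pts_h = np.hstack([pts_cam, np.ones((len(pts_cam), 1))])
    return (T_world_cam @ pts_h.T).T[:, :3]
```

Accumulating these world-frame tag points over the SLAM run yields exactly the kind of tag-anchored map file the abstract describes.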
Mobile robot system and method for generating map data using straight lines extracted from visual images
A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.
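The straight-line extraction step could be realized in many ways (Hough transform, LSD, etc.); the abstract does not specify one. As a stand-in illustration, the sketch below fits a dominant line to 2D edge points with a tiny RANSAC loop; every name and threshold is an assumption.

```python
import random

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Fit the dominant straight line to 2D edge points via RANSAC.

    Returns normalized line coefficients (a, b, c) with a*x + b*y + c = 0
    and a^2 + b^2 = 1, suitable as a compact map-data primitive.
    """
    rng = random.Random(seed)
    best = (0, None)
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        a, b = y2 - y1, x1 - x2                      # line through the pair
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < tol]
        if len(inliers) > best[0]:
            best = (len(inliers), (a / norm, b / norm, c / norm))
    return best[1]
```

Storing maps as line segments rather than raw images keeps the data the robots exchange compact, which matters when two robots share map data over a constrained link.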