Patent classifications
G06T7/579
Systems and methods for providing mixed-reality experiences under low light conditions
Systems and methods are provided for facilitating computer vision tasks (e.g., simultaneous localization and mapping) and pass-through imaging. The systems include a head-mounted display (HMD) that includes a first set of one or more cameras configured for performing computer vision tasks and a second set of one or more cameras configured for capturing image data of an environment for projection to a user of the HMD. The first set of one or more cameras is configured to detect at least visible spectrum light and at least a particular band of wavelengths of infrared (IR) light. The second set of one or more cameras includes one or more detachable IR filters configured to attenuate IR light, including at least a portion of the particular band of wavelengths of IR light.
Localization and mapping method and moving apparatus
A localization and mapping method localizes and maps a moving apparatus during a moving process. The localization and mapping method includes an image capturing step, a feature point extracting step, a flag object identifying step, and a localizing and mapping step. The image capturing step includes capturing, by a camera unit, an image frame at a time point among a plurality of time points in the moving process. The flag object identifying step includes identifying whether the image frame includes a flag object among the plurality of feature points in accordance with a flag database. The flag database includes a plurality of dynamic objects, and the flag object corresponds to one of the dynamic objects. The localizing and mapping step includes performing localization and mapping in accordance with the image frames captured in the moving process and the flag objects identified therein.
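The flag-object idea above amounts to partitioning feature points by whether they belong to a known dynamic-object class before localization. A minimal sketch, with all class names, data layout, and the flag-database contents assumed for illustration (the patent does not specify them):

```python
# Illustrative flag database of dynamic-object classes; points matching these
# are flagged so moving objects do not corrupt localization and mapping.
FLAG_DATABASE = {"person", "car", "bicycle", "pet"}

def filter_flag_objects(feature_points, flag_database=FLAG_DATABASE):
    """Split feature points into static points (kept for localization and
    mapping) and flagged dynamic-object points (handled separately)."""
    static, flagged = [], []
    for point in feature_points:
        # each point is (x, y, detected_class); class is None for plain corners
        if point[2] in flag_database:
            flagged.append(point)
        else:
            static.append(point)
    return static, flagged

points = [(120, 80, None), (300, 210, "person"), (45, 60, "car"), (500, 330, None)]
static, flagged = filter_flag_objects(points)
# static points feed the localizing-and-mapping step; flagged points do not
```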
Autonomous mobile apparatus and control method thereof
The present disclosure provides an autonomous mobile apparatus and a control method thereof. The method includes: starting a SLAM mode; obtaining first image data captured by a first camera; extracting a first tag image of positioning tag(s) from the first image data; calculating three-dimensional camera coordinates of feature points of the positioning tag(s) in a first camera coordinate system of the first camera based on the first tag image; calculating three-dimensional world coordinates of the feature points of the positioning tag(s) in a world coordinate system based on the three-dimensional camera coordinates and a first camera pose, in the world coordinate system, of the first camera at the time the first image data was obtained; and generating a map file based on the three-dimensional world coordinates of the feature points of the positioning tag(s).
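The camera-to-world step described above is a standard rigid transform: a tag feature point in the camera frame maps to the world frame via the camera pose. A minimal sketch, assuming a simple rotation-matrix-plus-translation pose representation (the names and example numbers are illustrative, not from the patent):

```python
import numpy as np

def camera_to_world(p_cam, R, t):
    """Map a 3D point from the camera coordinate system to the world
    coordinate system using the camera pose (rotation R, translation t)."""
    return R @ p_cam + t

# Example: camera rotated 90 degrees about the world z-axis, 1 m along world x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
p_cam = np.array([2.0, 0.0, 0.5])   # tag corner seen 2 m ahead of the camera
p_world = camera_to_world(p_cam, R, t)
# p_world = [1.0, 2.0, 0.5]; such points would populate the map file
```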
Mobile robot system and method for generating map data using straight lines extracted from visual images
A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.
Passive wide-area three-dimensional imaging
Radar, lidar, and other active 3D imaging techniques require large, heavy sensors that consume substantial power. Passive 3D imaging techniques based on feature matching are computationally expensive and limited by the quality of the feature matching. Fortunately, there is a robust, computationally inexpensive way to generate 3D images from full-motion video acquired from a platform that moves relative to the scene. The full-motion video frames are registered to each other and mapped to the scene coordinates using data about the trajectory of the platform with respect to the scene. The time derivative of the registered frames equals the product of the height map of the scene, the projected angular velocity of the platform, and the spatial gradient of the registered frames. This relationship can be solved in (near) real time to produce the height map of the scene from the full-motion video and the trajectory.
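The stated relation is, per pixel, I_t = h · (w · ∇I), where I_t is the temporal derivative of the registered frames, h the scene height, and w the projected angular velocity, so h is recoverable wherever w · ∇I is nonzero. A toy numpy sketch on synthetic data (the data, variable names, and the noiseless per-pixel solve are assumptions for illustration; a real system would solve this robustly over many frames):

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = rng.uniform(1.0, 5.0, size=(32, 32))      # ground-truth height map
frame = rng.uniform(0.0, 1.0, size=(32, 32))       # a registered video frame
w = np.array([0.02, -0.01])                        # projected angular velocity

gy, gx = np.gradient(frame)                        # spatial gradient of the frame
drive = w[0] * gx + w[1] * gy                      # w . grad(I)
I_t = h_true * drive                               # simulated temporal derivative

mask = np.abs(drive) > 1e-6                        # avoid dividing near zero
h_est = np.zeros_like(h_true)
h_est[mask] = I_t[mask] / drive[mask]              # per-pixel height solve
# on this noiseless toy data, h_est matches h_true wherever mask holds
```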
SLAM-based electronic device and an operating method thereof
A simultaneous localization and mapping-based electronic device includes: a data acquisition device configured to acquire external data; a memory; and a processor configured to be operatively connected to the data acquisition device and the memory, wherein the processor is further configured to extract features of surrounding objects from the acquired external data, calculate a score of a registration error of the extracted features when the number of the extracted features is greater than a set number stored in the memory, and select the set number of features from among the extracted features based on the calculated score.
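The selection step above amounts to ranking features by their registration-error score and keeping only the set number of best-scoring ones. A minimal sketch, where the data layout, scoring values, and lower-is-better convention are assumptions not specified by the abstract:

```python
def select_features(features, scores, set_number):
    """Keep the `set_number` features with the lowest registration-error
    score; if there are no more than `set_number` features, keep them all."""
    if len(features) <= set_number:
        return list(features)
    ranked = sorted(zip(scores, features))         # ascending: lower error first
    return [f for _, f in ranked[:set_number]]

features = ["corner_a", "corner_b", "edge_c", "blob_d"]
scores = [0.9, 0.1, 0.4, 0.7]                      # registration error per feature
kept = select_features(features, scores, set_number=2)
# kept == ["corner_b", "edge_c"]
```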
Method and apparatus for image processing and computer storage medium
A method and an apparatus for processing an image are provided. The method may include: acquiring a set of image sequences, the set of image sequences including a plurality of image sequence subsets divided according to similarity measurements between image sequences, each image sequence subset including a basic image sequence and other image sequences, wherein a first similarity measurement corresponding to the basic image sequence is greater than or equal to a first similarity measurement corresponding to each of the other image sequences; creating an original three-dimensional model using the basic image sequence; and creating a final three-dimensional model using the other image sequences based on the original three-dimensional model.
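Within each subset, the basic image sequence is simply the one whose first similarity measurement is greatest; the remaining sequences then refine the model built from it. A hedged sketch of that selection (the sequence names, scores, and similarity function are placeholders, not from the abstract):

```python
def choose_basic_sequence(subset, similarity):
    """Return (basic_sequence, others): the basic sequence has a first
    similarity measurement >= that of every other sequence in the subset."""
    ranked = sorted(subset, key=similarity, reverse=True)
    return ranked[0], ranked[1:]

# Toy "sequences" scored by a stand-in first similarity measurement.
subset = ["seq_a", "seq_b", "seq_c"]
scores = {"seq_a": 0.72, "seq_b": 0.91, "seq_c": 0.55}
basic, others = choose_basic_sequence(subset, scores.get)
# the original 3D model is built from `basic`; `others` extend it to the
# final 3D model
```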
Determining camera rotations based on known translations
In example embodiments, techniques are provided for calculating camera rotation using translations between sensor-derived camera positions (e.g., from GPS) and pairwise information, producing a sensor-derived camera pose that may be integrated in an early stage of SfM reconstruction. A software process of a photogrammetry application may obtain metadata including sensor-derived camera positions for a plurality of cameras for a set of images and determine optical centers based thereupon. The software process may estimate unit vectors along epipoles from a given camera of the plurality of cameras to two or more other cameras. The software process then may determine a camera rotation that best maps unit vectors defined based on differences in the optical centers to the unit vectors along the epipoles. The determined camera rotation and the sensor-derived camera position form a sensor-derived camera pose that may be returned and used.
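Finding the rotation that best maps one set of unit vectors (from optical-center differences) onto another (along the epipoles) is an instance of Wahba's problem, commonly solved in closed form with an SVD. A sketch on synthetic data (the function names and test data are illustrative; the patent abstract does not prescribe this particular solver):

```python
import numpy as np

def best_rotation(a, b):
    """Rotation R (3x3, det +1) minimizing sum_i ||R a_i - b_i||^2 over all
    rotations, where a and b are (n, 3) arrays of unit vectors (Wahba's
    problem, solved via SVD of the correlation matrix)."""
    B = b.T @ a                                    # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Synthetic check: rotate known unit directions by a known R and recover it.
rng = np.random.default_rng(1)
a = rng.normal(size=(5, 3))
a /= np.linalg.norm(a, axis=1, keepdims=True)      # unit direction vectors
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
b = a @ R_true.T                                   # b_i = R_true @ a_i
R_est = best_rotation(a, b)
# R_est, combined with the sensor-derived position, forms the camera pose
```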