Patent classifications
H04N13/221
Single-pass object scanning
Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, a process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of the physical environment captured via a camera on the device; selecting a subset of the images by assessing them for motion-based defects based on device motion and depth data; and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
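The frame-selection step described above could be sketched as follows. The abstract does not specify the defect metric, so this sketch assumes a variance-of-Laplacian sharpness score and a hypothetical threshold as the motion-blur test; both are illustrative choices, not the patent's method.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response: low values suggest motion blur."""
    # 3x3 Laplacian kernel applied by direct summation (no OpenCV dependency).
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def select_frames(frames, blur_threshold=100.0):
    """Keep only frames whose sharpness exceeds the (hypothetical) threshold."""
    return [f for f in frames if laplacian_variance(f) > blur_threshold]
```

A real implementation would also weigh device-motion and depth signals per the abstract; this sketch shows only the image-side screening.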
Control device and master slave system
Provided is a control device including a control unit that calculates a first positional relationship between an eye of an observer observing an object displayed on a display unit and a first point in a master-side three-dimensional coordinate system, and controls an imaging unit that images the object so that a second positional relationship between the imaging unit and a second point corresponding to the first point in a slave-side three-dimensional coordinate system corresponds to the first positional relationship.
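The core geometric idea, mirroring the eye-to-point offset from the master side onto the slave side, can be sketched with simple vector arithmetic. This is a simplified translation-only sketch; the function name, `scale` parameter, and omission of orientation matching are assumptions, as a real system would also control the imaging unit's rotation.

```python
import numpy as np

def slave_camera_pose(eye_master, point_master, point_slave, scale=1.0):
    """Place the slave-side imaging unit so that its offset to point_slave
    mirrors the observer-eye-to-point offset in the master-side frame.
    (Translation only; orientation matching is omitted in this sketch.)"""
    offset = (np.asarray(eye_master, dtype=float)
              - np.asarray(point_master, dtype=float)) * scale
    return np.asarray(point_slave, dtype=float) + offset
```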
DEVICE AND METHOD FOR ACQUIRING DEPTH OF SPACE BY USING CAMERA
A device and method of obtaining a depth of a space are provided. The method includes obtaining a plurality of images by photographing the periphery of a camera a plurality of times while sequentially rotating the camera by a preset angle, identifying a first feature region in a first image and an n-th feature region in an n-th image, the n-th feature region being identical to the first feature region, by comparing adjacent images between the first image and the n-th image from among the plurality of images, obtaining a baseline value with respect to the first image and the n-th image, obtaining a disparity value between the first feature region and the n-th feature region, and determining a depth of the first feature region or the n-th feature region based on at least the baseline value and the disparity value.
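The final depth computation is standard triangulation, depth = f · B / d. The baseline helper below assumes the camera sweeps a circle of known radius while rotating, so the baseline is the chord between the first and n-th positions; the patent does not fix this pivot geometry, so treat it as an illustrative assumption.

```python
import math

def baseline_from_rotation(radius: float, angle_deg: float, n_steps: int) -> float:
    """Chord length between the camera's first and n-th positions after
    n_steps rotations of angle_deg each, assuming a circular sweep of the
    given radius (assumed geometry; not specified by the abstract)."""
    total = math.radians(angle_deg * n_steps)
    return 2.0 * radius * math.sin(total / 2.0)

def depth_from_disparity(focal_px: float, baseline: float, disparity_px: float) -> float:
    """Triangulated depth: focal length (pixels) times baseline, over disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline / disparity_px
```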
AUTOMATED IMAGE CAPTURING APPARATUS AND SYSTEM THEREOF
A system and apparatus for automated image capturing, comprising a microcontroller, an image capturing device operatively coupled to a pair of guiding apparatus using a first electric rotary actuator, and a rotary plate operatively mounted on a second electric rotary actuator. The pair of guiding apparatus and the first electric rotary actuator are actuated to change the position of the image capturing device relative to an object positioned on the rotary plate, and the second electric rotary actuator is actuated to change the angle of orientation of the object on the rotary plate. Under varying lighting conditions and against different background images, a plurality of images of the object are captured using the image capturing device by actuating the electro-mechanical components of the apparatus.
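The capture sweep over rotation angles, lighting conditions, and backgrounds amounts to enumerating every combination and stepping the actuators for each. The function and field names below are illustrative; the patent does not specify a control API.

```python
from itertools import product

def capture_plan(angles_deg, lighting_levels, backgrounds):
    """Enumerate every (rotation, lighting, background) combination so the
    microcontroller can step the actuators and trigger the camera for each.
    All names here are hypothetical, for illustration only."""
    return [
        {"angle": a, "lighting": lv, "background": b}
        for a, lv, b in product(angles_deg, lighting_levels, backgrounds)
    ]
```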
SYSTEM AND METHOD FOR GENERATING COMBINED EMBEDDED MULTI-VIEW INTERACTIVE DIGITAL MEDIA REPRESENTATIONS
Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations (MIDMRs). In one aspect, a method for automatically generating a MIDMR comprises obtaining a first MIDMR and a second MIDMR. The first MIDMR includes a convex or concave motion capture using a recording device and is a general object MIDMR. The second MIDMR is a specific feature MIDMR. The first and second MIDMRs may be obtained using different capture motions. A third MIDMR is generated from the first and second MIDMRs, and is a combined embedded MIDMR. The combined embedded MIDMR may comprise the second MIDMR being embedded in the first MIDMR, forming an embedded second MIDMR. The third MIDMR may include a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device. The embedded second MIDMR may not be viewable in the general view.
Stereoscopic image capturing systems
A stereoscopic imager system, comprising: a sensor array comprising a first plurality of photosensors and a second plurality of photosensors spaced apart from the first plurality of photosensors by a gap, the first plurality of photosensors and the second plurality of photosensors being configured to detect ambient light in a scene; a moving component coupled to the sensor array and operable to move the sensor array between a first position and a second position within a full rotational image capturing cycle; and a system controller coupled to the sensor array and the moving component. The system controller can be configured to: move a field of view of the sensor array by instructing the moving component to capture a first image of an object in the scene with the first plurality of photosensors from a first perspective at the first position, and to capture a second image of the object in the scene with the second plurality of photosensors from a second perspective at the second position; and calculate, based on the first image and the second image, a distance to the object using an optical baseline defined by the gap.
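The distance calculation pairs a disparity estimate between the two captures with the gap as the optical baseline. The brute-force scanline matcher below is a toy stand-in for real stereo matching, and the integer-shift search is an assumption made for illustration.

```python
import numpy as np

def estimate_disparity(row_left: np.ndarray, row_right: np.ndarray) -> int:
    """Integer disparity that best aligns two scanline intensity profiles,
    found by exhaustive shift search (a toy stand-in for stereo matching)."""
    best_shift, best_err = 0, float("inf")
    n = len(row_left)
    for d in range(n // 2):
        err = float(np.mean((row_left[d:] - row_right[:n - d]) ** 2))
        if err < best_err:
            best_shift, best_err = d, err
    return best_shift

def distance_from_gap(focal_px: float, gap: float, disparity_px: int) -> float:
    """Triangulate range using the photosensor gap as the optical baseline."""
    return focal_px * gap / disparity_px
```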
MATCHING SEGMENTS OF VIDEO FOR VIRTUAL DISPLAY OF A SPACE
Systems, methods, and non-transitory computer-readable media storing instructions that, when executed, cause a processor to perform operations to display a three-dimensional (3D) space. The method may include, with an imaging device, capturing a first series of frames as the imaging device travels from a first location to a second location within a space, and capturing a second series of frames as the imaging device travels from the second location to the first location. The method may also include determining a first segment in the first series of frames that matches a second segment in the second series of frames to create a segmentation dataset, generating video clip data based on the segmentation dataset, the video clip data defining a series of video clips, and displaying the series of video clips.
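The segment-matching step, pairing frames from the outbound pass with frames from the return pass, could be sketched as below. The coarse-histogram descriptor, the distance threshold, and the assumption that the return pass simply retraces the outbound path in reverse are all illustrative choices, not the patent's stated method.

```python
import numpy as np

def frame_descriptor(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Coarse intensity histogram as a cheap frame signature."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def match_segments(series_a, series_b, max_dist=0.1):
    """Pair each outbound frame with its closest return frame. The return
    pass is reversed so matched indices run in the same spatial direction.
    Descriptor and threshold are illustrative assumptions."""
    descs_b = [frame_descriptor(f) for f in reversed(series_b)]
    matches = []
    for i, fa in enumerate(series_a):
        da = frame_descriptor(fa)
        dists = [float(np.abs(da - db).sum()) for db in descs_b]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            # Map back to the original (unreversed) index in series_b.
            matches.append((i, len(series_b) - 1 - j))
    return matches
```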