Patent classifications
H04N13/221
Single-Pass Object Scanning
Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, an example process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of the physical environment captured via a camera on the device; selecting a subset of the images by assessing the images for motion-based defects using device motion and depth data; and generating a 3D model of the object based on the selected subset of the images and depth data corresponding to each image of the selected subset.
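The abstract does not specify how motion-based defects are assessed; a minimal sketch, assuming per-frame device angular speed and a depth-noise estimate are available (all field names and thresholds here are illustrative, not from the patent):

```python
# Hypothetical sketch of motion-based frame selection: keep only frames
# whose device motion during exposure and depth noise fall below thresholds.

def select_sharp_frames(frames, max_angular_speed=0.5, max_depth_std=0.05):
    """Return frames unlikely to suffer motion-based defects.

    frames: list of dicts with 'angular_speed' (rad/s during exposure)
    and 'depth_std' (a per-frame depth-noise estimate).
    """
    selected = []
    for frame in frames:
        motion_ok = frame["angular_speed"] <= max_angular_speed
        depth_ok = frame["depth_std"] <= max_depth_std
        if motion_ok and depth_ok:
            selected.append(frame)
    return selected

frames = [
    {"id": 0, "angular_speed": 0.1, "depth_std": 0.01},
    {"id": 1, "angular_speed": 0.9, "depth_std": 0.01},  # too much motion blur
    {"id": 2, "angular_speed": 0.2, "depth_std": 0.20},  # noisy depth
    {"id": 3, "angular_speed": 0.3, "depth_std": 0.02},
]
subset = select_sharp_frames(frames)
```

The selected subset (frames 0 and 3 here) would then feed the 3D reconstruction step.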
Methods and systems for camera calibration
An image capture method may include obtaining two or more sets of images. The two or more sets of images may include a first image captured by a first image capture device and a second image captured by a second image capture device. The method may also include determining, for a set of images, two or more pairs of points. Each of the two or more pairs of points may include a first point in the first image and a second point in the second image, and the first point and the second point may correspond to a same object. The method may also include determining a first rotation matrix based on the pairs of points in the two or more sets of images. The first rotation matrix may be associated with a relationship between positions of the first image capture device and the second image capture device.
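The abstract leaves the rotation-estimation procedure open; one standard way to recover a rotation matrix from paired 3D point correspondences is the Kabsch algorithm, sketched here (the function name and synthetic data are assumptions for illustration):

```python
import numpy as np

def relative_rotation(points_a, points_b):
    """Estimate a rotation R such that points_b ~= points_a @ R.T,
    using the Kabsch algorithm on centered point sets."""
    A = np.array(points_a, dtype=float)
    B = np.array(points_b, dtype=float)
    A -= A.mean(axis=0)           # remove translation between the views
    B -= B.mean(axis=0)
    H = A.T @ B                   # cross-covariance of the paired points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])    # guard against a reflection solution
    return Vt.T @ D @ U.T

# Synthetic check: rotate points by a known 90-degree yaw and recover it.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0], [1.0, 1.0, 0.5]])
R_est = relative_rotation(pts, pts @ R_true.T)
```

With several such sets of image pairs, the per-set estimates could be averaged or refined to relate the positions of the two image capture devices.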
3D rotational presentation generated from 2D static images
A computer-implemented method may be used to generate a 3D interactive presentation, referred to as a rotograph, illustrating a main object from a rotating viewpoint. A plurality of two-dimensional images may be received. A three-dimensional scene may be generated, with a virtual camera and an axis of rotation. Each of the two-dimensional images may be positioned in the three-dimensional scene such that the plurality of two-dimensional images are oriented at different orientations about the axis of rotation. A motion pathway may be defined within the three-dimensional scene, by which the virtual camera is rotatable about the axis of rotation to view the plurality of two-dimensional images in sequence. A plurality of rotograph images may be captured with the virtual camera during motion of the virtual camera along the motion pathway to generate the rotograph, which may be displayed on a display screen.
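The motion pathway is described only as rotation about the axis; a minimal sketch, assuming a circular path around the axis with a fixed number of capture points per source-image orientation (the function name and parameters are illustrative):

```python
import math

def rotograph_camera_path(num_images, radius=2.0, frames_per_image=3):
    """Sample virtual-camera poses on a circular path about the axis.

    Returns (x, y, angle) tuples: positions on a circle of the given
    radius, with frames_per_image capture points per source image.
    """
    total = num_images * frames_per_image
    path = []
    for i in range(total):
        angle = 2.0 * math.pi * i / total
        path.append((radius * math.cos(angle), radius * math.sin(angle), angle))
    return path

# Four source images, two rotograph captures between successive images.
path = rotograph_camera_path(num_images=4, frames_per_image=2)
```

Rendering the scene from each sampled pose in order would produce the sequence of rotograph images.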
Sensor signal visualization for sensors of coordinate measuring machines and microscopes
Sensor signals from a sensor of a coordinate measuring machine or microscope describe a workpiece arranged within a space. The sensor and the space are movable relative to one another. A method of visualizing the sensor signals includes obtaining data relating to a three-dimensional scene that is stationary relative to the space. The method includes generating a two-dimensional view image of the scene. The view image has opposing edges predefined with respect to at least one of the two directions. A central region of the view image is located between the edges. The method includes, repeatedly, obtaining a two-dimensional sensor representation of the workpiece and combining the sensor representation with the view image to form a two-dimensional output image. The method includes, in response to movement between the sensor and the space, generating a new view image if the central region would extend beyond either of the edges.
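The decision of when to regenerate the view image can be sketched in one dimension, assuming the central region is an interval around the current view center (names and geometry here are illustrative assumptions, not from the patent):

```python
def update_view(view_center, sensor_pos, central_half_width):
    """Decide whether to reuse the cached view image or render a new one.

    1-D sketch: the central region is [view_center - w, view_center + w].
    A new view image, re-centered on the sensor, is generated only when
    the sensor representation would land outside that region.
    """
    if abs(sensor_pos - view_center) > central_half_width:
        return sensor_pos, True    # regenerate, centered on the sensor
    return view_center, False      # keep combining into the existing view

# Small relative movement: the existing view image is reused.
center, regenerated = update_view(0.0, 0.3, central_half_width=0.5)
# Larger movement: the central region would pass an edge, so re-render.
center2, regenerated2 = update_view(center, 0.8, central_half_width=0.5)
```

Rendering the 3D scene only on these boundary crossings, rather than on every sensor update, is what makes the repeated combine step cheap.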
Stereoscopic aerial-view images
According to an aspect of an embodiment, a method may include obtaining a first digital image that depicts a first aerial view of a first area of a setting. The method may additionally include obtaining a second digital image that depicts a second aerial view of a second area of the setting. Further, the method may include determining an overlapping area where the first area and the second area overlap and obtaining a third digital image based on the overlapping area, the first digital image, and the second digital image. In addition, the method may include generating a stereoscopic image of the setting based on the first digital image and the third digital image.
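Determining where the first and second areas overlap reduces, for axis-aligned aerial footprints, to a rectangle intersection; a minimal sketch under that assumption (the coordinate convention is illustrative):

```python
def overlap_rect(a, b):
    """Intersection of two axis-aligned areas given as (x0, y0, x1, y1),
    or None if the areas do not overlap."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    if x0 >= x1 or y0 >= y1:
        return None  # no overlapping area
    return (x0, y0, x1, y1)

# Two aerial footprints sharing their corner regions.
region = overlap_rect((0, 0, 10, 10), (5, 5, 15, 15))
```

The third digital image would then be derived from this overlapping region of the first and second images before the stereoscopic pair is assembled.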
Digital camera user interface for video trimming
A digital video camera comprising: user controls enabling a user to select between at least an up input, a down input, a left input, a right input, and a confirmation input; and a program memory storing instructions to implement a method for trimming a digital video sequence. The method includes: selecting a digital video sequence; initiating a trimming operation; accepting user input to select a start frame and an end frame for a trimmed digital video sequence, wherein the up input and the down input are used to select between a start frame selection mode and an end frame selection mode, and the left input and the right input are used to scroll through the frames of the selected digital video sequence; and trimming the selected video sequence to include the frames between the selected start frame and the selected end frame.
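The trimming step itself, once start and end frames are chosen, amounts to keeping the inclusive frame range; a minimal sketch (function and validation are illustrative, not from the patent):

```python
def trim(frames, start, end):
    """Keep frames from start through end, inclusive, as in the abstract."""
    if not 0 <= start <= end < len(frames):
        raise ValueError("invalid trim range")
    return frames[start:end + 1]

# A ten-frame sequence trimmed to frames 3..6.
clip = trim(list(range(10)), 3, 6)
```

In the described interface, left/right input would move start or end through the sequence and up/down would switch which of the two is being adjusted before this trim is applied.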
Still-image extracting method and image processing device for implementing the same
A still-image extracting method is disclosed. Frames of an object are extracted as still images from a moving image stream chronologically continuously captured by a camera. The camera moves relative to the object. First frames are extracted from the moving image stream. Image capture times of the extracted first frames are obtained. Image capture positions of the camera at the image capture times of the first frames are identified based on the first frames. Image capture times of frames captured at image capture positions spaced at equal intervals are estimated based on both the image capture positions of the camera, identified from the first frames, and the obtained image capture times. Second frames at the estimated image capture times are then extracted from the moving image stream as frames captured at image capture positions spaced apart at equal intervals.
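The abstract does not say how the equal-interval capture times are estimated; a minimal sketch, assuming linear interpolation between the identified first-frame positions and their capture times (all names and data are illustrative):

```python
def times_at_equal_intervals(positions, times, num_points):
    """Estimate capture times at positions spaced at equal intervals.

    positions: monotonically increasing camera positions of the first frames.
    times: capture times of those first frames.
    num_points: number of equally spaced positions wanted (>= 2).
    """
    start, end = positions[0], positions[-1]
    targets = [start + (end - start) * i / (num_points - 1)
               for i in range(num_points)]
    result = []
    j = 0
    for p in targets:
        # Advance to the pair of first frames bracketing position p.
        while j < len(positions) - 2 and positions[j + 1] < p:
            j += 1
        p0, p1 = positions[j], positions[j + 1]
        t0, t1 = times[j], times[j + 1]
        result.append(t0 + (t1 - t0) * (p - p0) / (p1 - p0))
    return result

# First frames at positions 0, 1, 4; the camera slows on the second leg.
positions = [0.0, 1.0, 4.0]
times = [0.0, 2.0, 14.0]
est = times_at_equal_intervals(positions, times, num_points=5)
```

The second frames would then be the stream frames nearest to these estimated times, yielding stills at equally spaced positions even though the camera speed varied.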
Multiscopic whitetail scoring game camera systems and methods
A game scoring camera system is disclosed for capturing images of game animals for the purpose of scoring antlers using an accepted scoring method. One or more cameras are used in a multiscopic arrangement to capture two-dimensional (2-D) images, which are then converted to three-dimensional (3-D) data models. The resulting 3-D data models are used to measure various antler structures and to calculate a score for the set of antlers captured in the images, the score being based on existing antler scoring systems. Some embodiments include one or more cameras, each mounted on an unmanned aerial vehicle or drone, for capturing images during an aerial survey of game animals located within a particular area. Other embodiments include at least two cameras mounted in a stationary configuration for capturing images of game animals located within a particular area.