Patent classifications
H04N5/2226
VARIED DEPTH DETERMINATION USING STEREO VISION AND PHASE DETECTION AUTO FOCUS (PDAF)
Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., the distance between the optical sensors, the vantage points of the optical sensors) to estimate the depth of the objects.
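The triangulation relation described above can be illustrated with a minimal sketch (not taken from the patent itself): for a rectified stereo pair with focal length f in pixels and horizontal baseline B between the sensors, a matched feature's disparity d maps to depth as Z = f·B/d. The function name and parameter values below are illustrative assumptions.

```python
def stereo_depth(x_left: float, x_right: float,
                 focal_px: float, baseline_m: float) -> float:
    """Depth (metres) of a matched feature from its stereo disparity.

    Assumes a rectified pair: the same feature appears at column
    x_left in the left image and x_right in the right image, and
    depth follows Z = f * B / d with d = x_left - x_right.
    """
    disparity = x_left - x_right  # pixels; larger for closer objects
    if disparity <= 0:
        raise ValueError("non-positive disparity: no valid depth")
    return focal_px * baseline_m / disparity

# A nearby object shifts more between the two views than a far one:
near = stereo_depth(420.0, 380.0, focal_px=800.0, baseline_m=0.1)  # d = 40 px -> 2.0 m
far = stereo_depth(410.0, 406.0, focal_px=800.0, baseline_m=0.1)   # d = 4 px -> 20.0 m
```

The inverse relationship between disparity and depth is why depth precision degrades for distant objects: a one-pixel matching error changes the estimate far more at small disparities than at large ones.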
Depth determination using time-of-flight and camera assembly with augmented pixels
A depth camera assembly for determining depth information for a local area includes a light source assembly, a camera assembly, and a controller. The light source assembly projects pulses of light into the local area. The camera assembly images a portion of the local area illuminated with the pulses. The camera assembly includes augmented pixels, each augmented pixel having a plurality of gates, at least some of the gates having a respective local storage location. An exposure interval of each augmented pixel is divided into intervals associated with the gates, and each local storage location stores image data during a respective interval. The controller reads out, after the exposure interval of each augmented pixel, the image data stored in the respective local storage locations of each augmented pixel to generate image data frames. The controller determines depth information for the local area based in part on the image data frames.
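How depth can be recovered from the per-gate charges is not spelled out in the abstract; the following is a hedged sketch of one common pulsed time-of-flight scheme with two sequential gates (the patent's actual gate count and timing may differ). Gate G1 integrates during the emitted pulse of width T_p and gate G2 during the following interval, so the reflected pulse straddles the two gates and the charge split encodes the round-trip delay.

```python
# Hypothetical two-gate pulsed ToF readout: the echo of a pulse of
# width T_p deposits charge Q1 in the first gate's storage and Q2 in
# the second's; the delay fraction Q2 / (Q1 + Q2) times the pulse
# width gives the round-trip time, hence
#     distance = (c * T_p / 2) * Q2 / (Q1 + Q2)
C = 299_792_458.0  # speed of light, m/s

def pulsed_tof_distance(q1: float, q2: float, pulse_s: float) -> float:
    """Distance (m) from the charges stored by two sequential gates."""
    if q1 + q2 <= 0:
        raise ValueError("no signal collected in either gate")
    return 0.5 * C * pulse_s * q2 / (q1 + q2)

# Equal charge in both gates means the echo arrived half a pulse
# width late, i.e. a distance of c * T_p / 4 (about 2.25 m at 30 ns):
d = pulsed_tof_distance(100.0, 100.0, pulse_s=30e-9)
```

Taking the ratio of the two stored charges cancels the unknown reflectivity of the scene, which is precisely why at least two gated storage locations per pixel are useful.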
Smart phones for motion capture
A series of smart phones are mounted on respective tripods to capture motion of a person wearing markers, such as marker balls or reflectors. The videos from the phones are stripped of objects other than the markers, and the videos of the markers are combined to render a 3D motion capture structure that may be applied to an image of a VR icon to cause the VR icon to move as the person originally moved.
Image processing apparatus, image processing method, and storage medium
An image processing apparatus acquires first shape information representing the three-dimensional shape of an object located within an image capturing region, based on one or more images obtained by one or more imaging apparatuses that capture the image capturing region from a plurality of directions. The apparatus likewise acquires second shape information representing the three-dimensional shape of an object located within the image capturing region, based on one or more images obtained by one or more imaging apparatuses, and acquires viewpoint information indicating the position and direction of a viewpoint. Based on the acquired first shape information and the acquired second shape information, the apparatus generates a virtual viewpoint image corresponding to the position and direction of the viewpoint indicated by the acquired viewpoint information, such that at least a part of the object corresponding to the second shape information is displayed in a translucent way within the virtual viewpoint image.
Inspection method using a perching UAV with a releasable crawler
A method of inspection or maintenance of a curved ferromagnetic surface using an unmanned aerial vehicle (UAV) having a releasable crawler is provided. The method includes: flying the UAV from an initial position to a pre-perching position in a vicinity of the ferromagnetic surface; autonomously perching the UAV on the ferromagnetic surface; maintaining magnetic attachment of the perched UAV to the ferromagnetic surface; releasing the crawler from the magnetically attached UAV onto the ferromagnetic surface; moving the crawler over the curved ferromagnetic surface while maintaining magnetic attachment of the released crawler to the ferromagnetic surface; inspecting or maintaining the ferromagnetic surface using the magnetically attached crawler; and re-docking the released crawler with the perched UAV.
Optical tracking device with built-in structured light module
A system is disclosed that includes an optical tracking device and a surgical computing device. The optical tracking device includes a structured light module and an optical module that includes an image sensor and is spaced from the structured light module at a known distance. The surgical computing device includes a display device, a non-transitory computer readable medium including instructions, and processor(s) configured to execute the instructions to generate a depth map from a first image captured by the image sensor during projection of a pattern into a surgical environment by the structured light module. The pattern is projected in a near-infrared (NIR) spectrum. The processor(s) are further configured to execute the stored instructions to reconstruct a 3D surface of anatomical structure(s) based on the generated depth map. Additionally, the processor(s) are configured to execute the stored instructions to output the reconstructed 3D surface to the display device.
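Because the structured light module and the image sensor are spaced at a known distance, the depth-map generation works by triangulation, much like stereo: each projected NIR pattern element observed shifted from its reference position by some disparity maps to a depth. The sketch below illustrates that relation; the function name, parameters, and the simple Z = f·B/d model are assumptions for illustration, not details from the patent.

```python
import numpy as np

# Illustrative projector-camera triangulation: a pattern element
# whose observed position is shifted by `disparity` pixels from its
# calibrated reference position lies at depth
#     Z = focal_px * baseline_m / disparity
def depth_map_from_pattern(disparity_px: np.ndarray,
                           focal_px: float,
                           baseline_m: float) -> np.ndarray:
    """Per-pixel depth map (m); zero/negative disparities marked invalid (inf)."""
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)

# Larger pattern shifts correspond to nearer surface points:
disp = np.array([[8.0, 4.0],
                 [2.0, 0.0]])  # 0.0: pattern element not detected
depth = depth_map_from_pattern(disp, focal_px=600.0, baseline_m=0.08)
```

A dense grid of such per-element depths is what the abstract calls the depth map, from which the 3D surface of the anatomy can then be reconstructed (e.g., by meshing the back-projected points).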
Depth image sensor with always-depleted photodiodes
Examples are disclosed that relate to the use of an always-depleted photodiode in a ToF depth image sensor. One example provides a method of operating a pixel of a depth image sensor, the method comprising receiving photons in a photocharge generation region of the pixel, the photocharge generation region of the pixel comprising an always-depleted photodiode formed by a doped first region comprising one of p-doping or n-doping and a more lightly-doped second region comprising the other of p-doping or n-doping. The method further comprises, during an integration phase, energizing a clock gate for a pixel tap, thereby directing photocharge generated in the photocharge generation region to an in-pixel storage comprising a capacitor, and in a readout phase, reading charge out from the in-pixel storage.
Image processing method, image processing apparatus and computer readable storage medium
An image processing method, an image processing apparatus, an electronic device and a computer readable storage medium are provided. The image processing method includes the following. A background image and a portrait region image of a current user are acquired such that a preset parameter of the background image matches the preset parameter of the portrait region image. The portrait region image and the background image are merged to obtain a merged image.
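The merging step can be sketched as per-pixel compositing; this is a minimal illustration assuming the portrait region is described by a soft mask (1 inside the portrait, 0 outside), which the abstract does not specify.

```python
import numpy as np

# Hypothetical compositing step: blend the portrait region image over
# the background using a soft mask,
#     merged = mask * portrait + (1 - mask) * background
def merge_portrait(background: np.ndarray,
                   portrait: np.ndarray,
                   mask: np.ndarray) -> np.ndarray:
    """Merge a portrait region image into a background image.

    `mask` is a (H, W) array in [0, 1]; images are (H, W, C) floats.
    """
    if mask.ndim == background.ndim - 1:
        mask = mask[..., None]  # broadcast the mask across channels
    return mask * portrait + (1.0 - mask) * background

bg = np.full((2, 2, 3), 10.0)    # uniform dark background
fg = np.full((2, 2, 3), 200.0)   # uniform bright "portrait"
mask = np.array([[1.0, 0.0],
                 [0.5, 0.0]])    # fractional values soften the edge
out = merge_portrait(bg, fg, mask)
```

Matching a preset parameter (e.g., brightness or color temperature) between the two images before this blend is what keeps the composited edge from looking pasted-on.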
MULTI-APERTURE RANGING DEVICES AND METHODS
Embodiments of systems and methods for multi-aperture ranging are disclosed. An embodiment of an image processing system includes at least one processor and memory configured to receive a multi-aperture image set that includes a high-resolution subaperture image and a low-resolution subaperture image, the two subaperture images having been captured simultaneously from a camera using dissimilar focal lengths. The processor and memory are further configured to predict a high-resolution disparity map from the high-resolution subaperture image using a neural network, to predict a low-resolution disparity map from the low-resolution subaperture image using the neural network, and to generate an integrated range map from the high-resolution and low-resolution predicted disparity maps. The integrated range map includes an array of range information that corresponds to the multi-aperture image set and that is generated by overlaying common points in both the high-resolution predicted disparity map and the low-resolution predicted disparity map.
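The abstract does not define the exact overlay rule, so the following is only one plausible instance of the fusion step: upsample the low-resolution disparity map onto the high-resolution grid, prefer the high-resolution prediction where it is valid, and convert the fused disparities to range. All names and the disparity-to-range model are illustrative assumptions.

```python
import numpy as np

def integrated_range_map(hi_disp: np.ndarray,
                         lo_disp: np.ndarray,
                         focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Fuse two predicted disparity maps into one range map (sketch).

    Invalid high-resolution pixels (disparity <= 0) are filled from the
    nearest-neighbour-upsampled low-resolution map; ranges follow the
    simple triangulation model Z = f * B / d.
    """
    scale = hi_disp.shape[0] // lo_disp.shape[0]
    lo_up = np.repeat(np.repeat(lo_disp, scale, axis=0), scale, axis=1)
    disp = np.where(hi_disp > 0, hi_disp, lo_up)  # overlay common points
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)

hi = np.array([[4.0, 0.0],
               [0.0, 8.0]])   # zeros: no hi-res prediction there
lo = np.array([[2.0]])        # coarse but dense low-res prediction
rng = integrated_range_map(hi, lo, focal_px=400.0, baseline_m=0.1)
```

The design intuition is complementary coverage: the high-resolution map contributes fine detail where its prediction is confident, while the low-resolution map guarantees a range estimate everywhere else.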
Chroma key content management systems and methods
A system of properly displaying chroma key content is presented. The system obtains a digital representation of a 3D environment, for example a digital photo, and gathers data from that digital representation. The system renders the digital representation in an environmental model and displays that digital representation upon an output device. Depending upon the context, content anchors of the environmental model are selected which will be altered by suitable chroma key content. The chroma key content takes into consideration the position and orientation of the chroma key content relative to the content anchor and relative to the point of view that the environmental model is displayed from in order to accurately display chroma key content in a realistic manner.