Patent classifications
H04N13/246
Method and system for generating a multiview stereoscopic image
A method and a system for generating a multiview stereoscopic image are provided. The method includes the following steps. An image capturing apparatus captures a real calibration panel to obtain multiple images, and a processor analyzes the images containing the real calibration panel to obtain a datum image and multiple images to be calibrated. According to the datum image and the images to be calibrated, the processor calculates a homography matrix relating each image to be calibrated to the datum image. The processor obtains a calibration matrix from each homography matrix by performing a matrix decomposition on it. The processor multiplies each image to be calibrated by its corresponding calibration matrix to obtain multiple calibrated images. The processor then outputs the multiview stereoscopic image, which includes the datum image and the calibrated images.
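The core calibration step above (multiplying image coordinates by a 3x3 calibration matrix recovered from a homography) can be sketched as follows. This is a minimal illustration, not the patented implementation: the matrix values, point coordinates, and function names are illustrative assumptions.

```python
# Sketch of the calibration step: map a pixel through a 3x3 homogeneous
# calibration matrix. Pure Python; all numeric values are assumed.

def apply_matrix(m, pt):
    """Apply a 3x3 homogeneous matrix to a 2D point (x, y)."""
    x, y = pt
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    return (xh / w, yh / w)

# The identity matrix leaves points unchanged (the datum-image case).
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# An assumed calibration matrix with a small scale and translation, as
# might be recovered by decomposing the homography to the datum image.
calib = [[1.02, 0.0, -3.5],
         [0.0, 1.02, 2.0],
         [0.0, 0.0, 1.0]]

print(apply_matrix(identity, (100.0, 50.0)))  # → (100.0, 50.0)
print(apply_matrix(calib, (100.0, 50.0)))
```

Applying the same transform to every pixel of an image to be calibrated yields the calibrated image described in the abstract.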
Case for head-mounted device
A case for a head-mounted device includes a body portion and a power supply. The body portion includes one or more walls defining an interior cavity that is configured to receive and house the head-mounted device. The power supply is included in the body portion for transferring power to the head-mounted device.
VEHICLE SPEED INTELLIGENT MEASUREMENT METHOD BASED ON BINOCULAR STEREO VISION SYSTEM
A method for intelligently measuring vehicle speed based on a binocular stereo vision system includes: training a Single Shot MultiBox Detector neural network to obtain a license plate recognition model; calibrating the binocular stereo vision system to acquire the parameters of the two cameras; detecting license plates in the captured video frames with the license plate recognition model and locating the license plate position; performing feature point extraction and stereo matching with a feature-based matching algorithm; screening the matching point pairs, eliminating outliers, and retaining the coordinates of the matching point pair closest to the license plate center; performing stereo measurement on the retained matching point pair to obtain the spatial coordinates of that position; and calculating the speed of the target vehicle. The present invention is easy to install and adjust, can automatically recognize multiple trained features simultaneously, and is well suited to intelligent transportation networks and the Internet of Things (IoT).
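The final two steps above (stereo measurement of the matched point pair, then speed from two timestamped positions) can be sketched with the standard pinhole relation z = f·B/d. This is a sketch under assumed values: the focal length, baseline, pixel coordinates, and frame interval are illustrative, not taken from the patent.

```python
import math

# Sketch of the stereo measurement and speed steps: depth from disparity
# (z = f * B / d), then speed from two timestamped 3D positions.
# All camera parameters and coordinates are assumed values.

def triangulate(xl, xr, y, f, baseline):
    """Back-project a matched point pair into camera coordinates.
    xl, xr: x-coordinates of the match in the left/right images (pixels);
    f: focal length in pixels; baseline: camera separation in metres."""
    d = xl - xr                # disparity in pixels
    z = f * baseline / d       # depth in metres
    return (xl * z / f, y * z / f, z)

def speed(p1, p2, dt):
    """Speed in m/s from two 3D positions dt seconds apart."""
    return math.dist(p1, p2) / dt

f, baseline = 1000.0, 0.5                            # assumed calibration
p_a = triangulate(520.0, 500.0, 300.0, f, baseline)  # plate center at t
p_b = triangulate(540.0, 521.0, 310.0, f, baseline)  # plate center at t + 0.5 s
print(p_a)                                           # → (13.0, 7.5, 25.0)
print(round(speed(p_a, p_b, 0.5), 2))
```

In the described system, the calibration step would supply `f` and `baseline`, and the retained matching point pair nearest the license plate center would supply the pixel coordinates.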
DENTAL POD
A dental pod includes a receiver in the form of a cavity for receiving a distal end of a dental handheld device, and one or more nozzles on a side of the receiver for cleaning and disinfecting the distal end of the dental handheld device. It also includes a cartridge compartment for receiving one or more cartridges that support cleaning, setup, maintenance, and other operations of and on the dental handheld device.
Stereoscopic visualization camera and platform
A stereoscopic imaging apparatus and platform are disclosed. An example stereoscopic imaging apparatus includes a main objective assembly and left and right lens sets defining respective parallel left and right optical paths for light received through the main objective assembly from a target surgical site. Each of the left and right lens sets includes a front lens, first and second zoom lenses configured to be movable along the optical path, and a lens barrel configured to receive the light from the second zoom lens. The example stereoscopic imaging apparatus also includes left and right image sensors configured to convert the light, after it passes through the lens barrel, into image data indicative of the received light. The example apparatus further includes a processor configured to convert the image data into stereoscopic video signals or video data for display on a display monitor.
Wide-angle stereoscopic vision with cameras having different parameters
A stereoscopic vision system uses at least two cameras having different parameters to image a scene and create stereoscopic views. The different parameters of the two cameras can be intrinsic or extrinsic, including, for example, the distortion profile of the lens in the cameras, the field of view of the lens, the orientation of the cameras, the positions of the cameras, the color spectrum of the cameras, the frame rate of the cameras, the exposure time of the cameras, the gain of the cameras, the aperture size of the lenses, or the like. An image processing apparatus is then used to process the images from the at least two different cameras to provide optimal stereoscopic vision to a display.
FREE VIEWPOINT VIDEO GENERATION AND INTERACTION METHOD BASED ON DEEP CONVOLUTIONAL NEURAL NETWORK
A Free Viewpoint Video (FVV) generation and interaction method based on a deep Convolutional Neural Network (CNN) includes the steps of: acquiring multi-viewpoint data of a target scene with a synchronous shooting system whose multi-camera array is arranged accordingly, obtaining groups of synchronous video-frame sequences from a plurality of viewpoints, and rectifying the baselines of the sequences at the pixel level in batches; extracting, with encoder and decoder network structures, features of each group of viewpoint images input into a designed and trained deep CNN model to obtain deep feature information of the scene, and combining that information with the input images to generate a virtual viewpoint image between each pair of adjacent physical viewpoints at every moment; and synthesizing all viewpoints into frames of the FVV, based on the time and spatial position of the viewpoints, by stitching matrices. The method eliminates the need for camera rectification and depth image calculation.
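The final assembly step above (interleaving synthesized virtual viewpoints between adjacent physical ones, then indexing frames by time and viewpoint position) can be sketched as follows. The CNN synthesis itself is abstracted away; viewpoint labels and the frame placeholders are illustrative assumptions, not the patented data layout.

```python
# Sketch of the viewpoint-assembly step: order physical and synthesized
# virtual viewpoints by spatial position, then look up the frame for any
# (time, viewpoint) request. Frame contents are stand-in strings.

def interleave(physical):
    """Insert one virtual viewpoint between each adjacent physical pair."""
    out = []
    for a, b in zip(physical, physical[1:]):
        out.append(("phys", a))
        out.append(("virt", (a, b)))  # synthesized between cameras a and b
    out.append(("phys", physical[-1]))
    return out

viewpoints = interleave([0, 1, 2])
print(viewpoints)
# → [('phys', 0), ('virt', (0, 1)), ('phys', 1), ('virt', (1, 2)), ('phys', 2)]

# A free-viewpoint "video" is then indexed by (time, viewpoint position):
fvv = {(t, i): f"frame(t={t}, view={v})"
       for t in range(2)
       for i, v in enumerate(viewpoints)}
print(fvv[(1, 3)])  # the virtual view between cameras 1 and 2 at t = 1
```

Smooth viewpoint interaction then amounts to stepping the viewpoint index at a fixed time, or stepping time at a fixed viewpoint.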