Patent classifications
H04N13/261
Photographing Method, Image Processing Method, and Electronic Device
A photographing method implemented by an electronic device includes receiving a photographing instruction from a user, shooting a primary-camera image using a primary camera, shooting a wide-angle image using a wide-angle camera, and generating a three-dimensional (3D) image based on the primary-camera image and the wide-angle image, where the 3D image includes a plurality of image frames that are converted from a 3D viewing angle and correspond to different viewing angles.
Imaging systems and methods
At least one combined image may be created from a plurality of images captured by a plurality of cameras. A sensor unit may receive the plurality of images from the plurality of cameras. At least one processor in communication with the sensor unit may correlate each received image with calibration data for the camera from which the image was received. The calibration data may comprise camera position data and characteristic data. The processor may combine at least two of the received images from at least two of the cameras into the at least one combined image by orienting the at least two images relative to one another based on the calibration data for the cameras from which they were received, and merging the at least two aligned images into the at least one combined image.
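The orient-and-merge step described above can be sketched in Python. Here the calibration data is reduced to a simple per-camera pixel offset, and overlapping pixels are merged by averaging; the function name, the offset-only calibration model, and the averaging merge are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def combine_images(images, offsets):
    """Merge grayscale images into one canvas, orienting each by its
    calibrated (row, col) offset and averaging overlapping pixels."""
    h = max(img.shape[0] + oy for img, (oy, ox) in zip(images, offsets))
    w = max(img.shape[1] + ox for img, (oy, ox) in zip(images, offsets))
    acc = np.zeros((h, w), dtype=float)   # summed intensities
    cnt = np.zeros((h, w), dtype=float)   # per-pixel contribution count
    for img, (oy, ox) in zip(images, offsets):
        acc[oy:oy + img.shape[0], ox:ox + img.shape[1]] += img
        cnt[oy:oy + img.shape[0], ox:ox + img.shape[1]] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0)

left = np.full((2, 3), 100.0)   # image from camera A at offset (0, 0)
right = np.full((2, 3), 200.0)  # image from camera B, shifted 2 columns
merged = combine_images([left, right], offsets=[(0, 0), (0, 2)])
# the overlapping column is averaged: (100 + 200) / 2 = 150
```

A real system would replace the integer offsets with full extrinsic/intrinsic calibration and a proper warp, but the structure (orient by calibration, then merge the aligned images) is the same.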
PREDICTING STEREOSCOPIC VIDEO WITH CONFIDENCE SHADING FROM A MONOCULAR ENDOSCOPE
A surgical robotic system includes an image processing device configured to receive an endoscopic video feed and generate a stereoscopic video feed with confidence shading overlays on display. The confidence shading is based on a level of confidence associated with uncertain regions within images making up the stereoscopic video feed.
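One plausible reading of the confidence-shading overlay is a per-pixel blend applied wherever prediction confidence drops below a threshold. The sketch below assumes a grayscale frame and a confidence map in [0, 1]; the function name, threshold, and blend weight are hypothetical stand-ins for whatever the system actually uses:

```python
import numpy as np

def shade_uncertain(frame, confidence, threshold=0.5, alpha=0.5):
    """Blend a bright overlay into pixels whose confidence falls below
    `threshold`, visually marking uncertain regions of a predicted view."""
    shaded = frame.astype(float).copy()
    mask = confidence < threshold          # uncertain-region mask
    shaded[mask] = (1 - alpha) * shaded[mask] + alpha * 255.0
    return shaded

frame = np.zeros((2, 2))                   # dark 2x2 test frame
conf_map = np.array([[0.9, 0.2],
                     [0.2, 0.9]])          # two low-confidence pixels
shaded = shade_uncertain(frame, conf_map)
# low-confidence pixels are lifted toward white: 0.5 * 255 = 127.5
```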
Program guide graphics and video in window for 3DTV
Video data is received in 2D or 3D format from different channels as a user scrolls through an electronic guide. The video data may be displayed in a portion of the on-screen display along with graphics and text associated with the EPG data. The received video data may be converted to a suitable format (e.g., a 2D format or a 3D format) to be displayed with the Electronic Program Guide (EPG). The choice of conversion format can be based on the display format of the channel viewed prior to requesting that the EPG be displayed.
Image processing method and device, and three-dimensional imaging system
Disclosed are an image processing method and device, and a three-dimensional imaging system. The method comprises the following steps: acquiring a two-dimensional image to be processed; aligning the two-dimensional image to a grid template; performing mapping processing on the two-dimensional image using a grid mapping table to acquire a first image, where the grid mapping table represents the mapping relationship of grid images; mirroring the first image to acquire a second image; and synthesizing the first image and the second image to acquire their superimposed image. In this method, the grid template and the grid mapping table are used to map the two-dimensional image so as to simulate the left-eye and right-eye images perceived by human eyes. The same two-dimensional image needs to be mapped only once to obtain both the left-eye and right-eye images, which reduces the number of image processing steps, shortens processing time, and provides favorable conditions for subsequent real-time conversion of a superimposed two-dimensional image into a three-dimensional image.
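The map-once-then-mirror idea can be sketched as follows. The grid mapping table is represented as per-pixel source coordinates, and the final "synthesis" is a simple average of the warped view and its mirror; the table contents and the averaging step are illustrative assumptions, not the patent's specific mapping:

```python
import numpy as np

def apply_grid_map(image, map_rows, map_cols):
    """Warp an image via a precomputed mapping table: output pixel (r, c)
    takes its value from input pixel (map_rows[r, c], map_cols[r, c])."""
    return image[map_rows, map_cols]

def synthesize_stereo(image, map_rows, map_cols):
    left = apply_grid_map(image, map_rows, map_cols)  # simulated left-eye view
    right = left[:, ::-1]                             # mirror -> right-eye view
    return (left.astype(float) + right) / 2           # superimposed image

img = np.arange(9, dtype=float).reshape(3, 3)
rows, cols = np.indices(img.shape)
shifted_cols = np.clip(cols + 1, 0, 2)  # hypothetical table: 1-pixel shift
out = synthesize_stereo(img, rows, shifted_cols)
```

Note the key property claimed in the abstract: the expensive mapping (`apply_grid_map`) runs only once; the second view is obtained by a cheap mirror.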
3D DISPLAY SYSTEM AND 3D DISPLAY METHOD
A 3D display system and a 3D display method are provided. The 3D display system includes a 3D display, a memory, and one or more processors. The memory stores a plurality of modules, and the processor accesses and executes the modules stored in the memory. The modules include a bridge interface module and a 3D display service module. When an application is executed by the processor, the bridge interface module creates a virtual extended screen and moves the application to the virtual extended screen. The bridge interface module obtains a 2D content frame of the application from the virtual extended screen via a screenshot function. The 3D display service module converts the 2D content frame into a 3D-format frame by communicating with a third-party software development kit, and provides the 3D-format frame to the 3D display for display.
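The capture-and-convert pipeline can be sketched end to end. Here the screenshot of the virtual extended screen is stubbed out, and the SDK's 2D-to-3D conversion is reduced to a zero-disparity side-by-side packing; the function names, the dict-based screen, and the naive downsample are all stand-ins for the bridge module and the third-party SDK:

```python
import numpy as np

def capture_2d_frame(virtual_screen):
    """Stand-in for the bridge module's screenshot of the virtual
    extended screen (here it just returns the stored frame)."""
    return virtual_screen["frame"]

def to_side_by_side(frame):
    """Pack a 2D frame (H x W) into a half-width side-by-side 3D format:
    each view is horizontally downsampled, then placed side by side."""
    half = frame[:, ::2]            # naive horizontal downsample
    return np.hstack([half, half])  # identical views: zero-disparity sketch

screen = {"frame": np.arange(16, dtype=float).reshape(2, 8)}
frame_2d = capture_2d_frame(screen)
frame_3d = to_side_by_side(frame_2d)
# frame_3d keeps the 2 x 8 size: two 2 x 4 half-views side by side
```

A real conversion would synthesize a second view with actual disparity (e.g., via depth estimation in the SDK), but the data flow (screenshot, convert, hand off to the 3D display) matches the abstract.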