H04N13/10

Real-time 3D virtual or physical model generating apparatus for HoloPortal and HoloCloud system
10516869 · 2019-12-24

A novel electronic system provides fast three-dimensional model generation, social content sharing of dynamic three-dimensional models, and monetization of the dynamic three-dimensional models created by casual consumers. In one embodiment, a casual consumer utilizes a dedicated real-time 3D model reconstruction studio with multiple camera angles, and then rapidly creates dynamic 3D models with novel computational methods performed in scalable graphics processing units. In another embodiment, uncalibrated multiple sources of video recordings of a targeted object are provided by a plurality of commonly available consumer video recording devices (e.g., a smartphone, a camcorder, a digital camera) located at different angles, after which the uncalibrated video recordings are transmitted to a novel cloud computing system for real-time temporal, spatial, and photometric calibration and 3D model reconstruction. The dynamic 3D models can be uploaded, listed, and shared among content creators and viewers in an electronic sharing platform.
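As an illustration of the temporal-calibration step this abstract mentions, the sketch below estimates the frame offset between two uncalibrated recordings by cross-correlating their per-frame mean-brightness signals. This is a simplified stand-in, not the patented method; the function name and the brightness-based alignment signal are assumptions for illustration.

```python
import numpy as np

def estimate_temporal_offset(brightness_a, brightness_b, max_lag):
    """Return the lag L (in frames) maximizing the correlation of
    a[i + L] with b[i]; a negative L means source b trails source a."""
    a = brightness_a - np.mean(brightness_a)
    b = brightness_b - np.mean(brightness_b)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:lag], b[-lag:]
        n = min(len(x), len(y))
        score = float(np.dot(x[:n], y[:n]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic check: recording b is recording a delayed by 3 frames.
rng = np.random.default_rng(0)
signal = rng.normal(size=200)
a = signal
b = np.roll(signal, 3)          # b[i] = signal[i - 3]
print(estimate_temporal_offset(a, b, max_lag=10))  # -3: b trails a by 3 frames
```

A production system would align on finer-grained features (audio tracks, feature trajectories) and at sub-frame resolution, but the exhaustive-lag correlation above is the core idea.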

DATA PROCESSING METHOD AND APPARATUS, ACQUISITION DEVICE, AND STORAGE MEDIUM
20190387347 · 2019-12-19

Disclosed is a data processing method comprising: acquiring spatial information of the audio acquisition devices of an acquisition device, where the acquisition space corresponding to the acquisition device forms a geometry, the orientations covered by the video acquisition devices of the acquisition device span the entire geometry, and the mounting orientation of each video acquisition device is correspondingly provided with N audio acquisition devices, N being a positive integer; and, for the N audio acquisition devices corresponding to the mounting orientation of each video acquisition device, encoding the audio data they acquire according to the spatial information of the audio acquisition devices to form M pieces of audio data, the M pieces of audio data carrying the spatial information of the audio. Embodiments of the present invention further provide an acquisition device, a data processing device, and a storage medium.

IMAGING APPARATUS CAPABLE OF SWITCHING DISPLAY METHODS
20190373182 · 2019-12-05

An imaging apparatus comprises an image pickup unit; a cutout image generation unit for cutting out a specified area of an image taken by the image pickup unit to generate a cutout image enlarged at a specified magnification; an image display unit for displaying one or both of the picked-up image and the cutout image; a display image control unit for controlling how the image display unit displays images; a manual focus operation unit through which the user manually controls the focus position of the image pickup unit; and a manual zoom operation unit through which the user controls the zoom magnification of the image pickup unit.
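The cutout-and-enlarge operation described above reduces to cropping a region of the frame and upscaling it. A minimal sketch with integer nearest-neighbour magnification follows; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def cutout_enlarged(frame, top, left, height, width, magnification):
    """Crop the specified area from the picked-up frame and enlarge it by an
    integer magnification using nearest-neighbour pixel replication."""
    crop = frame[top:top + height, left:left + width]
    return np.repeat(np.repeat(crop, magnification, axis=0),
                     magnification, axis=1)

frame = np.arange(16).reshape(4, 4)   # a tiny 4x4 "picked-up image"
zoomed = cutout_enlarged(frame, top=1, left=1, height=2, width=2, magnification=3)
print(zoomed.shape)  # (6, 6): a 2x2 area enlarged 3x
```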

GRATING BASED THREE-DIMENSIONAL DISPLAY METHOD FOR PRESENTING MORE THAN ONE VIEW TO EACH PUPIL
20190373239 · 2019-12-05

The invention features techniques for presenting more than one perspective view to each eye of the viewer by generating viewing zones, with an interval smaller than the diameter of the viewer's pupil, using one or more display panel/grating pairs. In the first method, the arraying direction of the small-interval viewing zones is given an appropriately small inclination angle to the line connecting the viewer's two pupils, so that each eye is covered by more than one mutually distinct viewing zone. In the extreme case, four small-interval viewing zones can implement a 3D display with two views for each eye. This is fundamentally different from existing grating-based 3D displays, which align viewing zones along a direction at a small angle (&lt;π/4) to the line connecting the viewer's two pupils and thus need a rather large number of small-interval viewing zones to cover the viewer's two eyes.
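The geometry above can be checked numerically: with viewing zones spaced more finely than the pupil diameter along a slightly inclined arraying direction, each pupil intersects several zones and the two pupils intersect disjoint sets. The sketch below uses a simplified 1D projection onto the arraying direction; the dimensions and the projection model are illustrative assumptions, not the patent's exact construction.

```python
import math

def zones_covering_pupil(pupil_center_mm, pupil_diameter_mm,
                         zone_interval_mm, incline_deg):
    """Indices of viewing zones (spaced zone_interval apart along a line
    inclined incline_deg to the interocular axis) whose centres fall within
    the pupil, after projecting the pupil centre onto that line."""
    c = pupil_center_mm * math.cos(math.radians(incline_deg))
    r = pupil_diameter_mm / 2.0
    lo = math.ceil((c - r) / zone_interval_mm)
    hi = math.floor((c + r) / zone_interval_mm)
    return list(range(lo, hi + 1))

# Pupils 4 mm wide and 62 mm apart; zones every 1.5 mm on a 10 deg incline.
left = zones_covering_pupil(0.0, 4.0, 1.5, 10.0)
right = zones_covering_pupil(62.0, 4.0, 1.5, 10.0)
print(len(left), len(right), set(left) & set(right))  # 3 3 set()
```

Each eye sees three distinct zones and the sets do not overlap, which is the condition the abstract's "more than one view per eye" display relies on.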

Multisensory data fusion system and method for autonomous robotic operation

A robotic system includes one or more optical sensors configured to separately obtain two-dimensional (2D) image data and three-dimensional (3D) image data of a brake lever of a vehicle, a manipulator arm configured to grasp the brake lever of the vehicle, and a controller configured to compare the 2D image data with the 3D image data to identify one or more of a location or a pose of the brake lever of the vehicle. The controller is configured to control the manipulator arm to move toward, grasp, and actuate the brake lever of the vehicle based on the one or more of the location or the pose of the brake lever.
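One simple way to "compare the 2D image data with the 3D image data", as the abstract puts it, is to project 3D candidate points through a pinhole camera model and keep the candidate whose projection best matches a 2D detection. The sketch below illustrates that fusion step; the function name, candidate points, and intrinsics are illustrative assumptions, not the patented controller logic.

```python
import numpy as np

def fuse_2d_3d(detection_uv, candidates_xyz, fx, fy, cx, cy):
    """Return the 3D candidate whose pinhole projection lies closest to the
    2D detection -- a minimal cross-check of 2D and 3D sensor data."""
    best, best_err = None, float("inf")
    for x, y, z in candidates_xyz:
        u, v = fx * x / z + cx, fy * y / z + cy   # pinhole projection
        err = np.hypot(u - detection_uv[0], v - detection_uv[1])
        if err < best_err:
            best, best_err = (x, y, z), err
    return best

candidates = [(0.1, 0.0, 1.0), (0.5, 0.2, 2.0), (-0.3, 0.1, 1.5)]
# The 2D detector fired at (370, 240), the projection of the first candidate.
picked = fuse_2d_3d((370.0, 240.0), candidates, fx=500, fy=500, cx=320, cy=240)
print(picked)  # (0.1, 0.0, 1.0)
```

The selected 3D point then gives the controller the metric location toward which to move the manipulator arm.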

Capturing and aligning three-dimensional scenes

Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
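The aligning step described above typically reduces to estimating the rigid transform between overlapping captures. A standard building block for this, when point correspondences are known, is the Kabsch algorithm sketched below; it is offered as a generic illustration of 3D alignment, not as the patent's specific method.

```python
import numpy as np

def align_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    given known correspondences (Kabsch algorithm)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Synthetic check: rotate a capture 0.4 rad about z and translate it.
rng = np.random.default_rng(1)
src = rng.normal(size=(20, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = align_rigid(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -1.0, 2.0]))  # True True
```

In practice, correspondences between captures are unknown and this solver runs inside an iterative loop such as ICP, but the rigid-fit core is the same.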
