H04N13/243

CODING SCHEME FOR VIDEO DATA USING DOWN-SAMPLING/UP-SAMPLING AND NON-LINEAR FILTER FOR DEPTH MAP

Methods of encoding and decoding video data are provided. In an encoding method, source video data comprising one or more source views is encoded into a video bitstream. Depth data of at least one of the source views is nonlinearly filtered and down-sampled prior to encoding. After decoding, the decoded depth data is up-sampled and nonlinearly filtered.
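The pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the patented method: it assumes a median filter as the nonlinear filter and simple 2x decimation and nearest-neighbour upsampling, with `median_filter`, `encode_depth`, and `decode_depth` as illustrative names.

```python
import numpy as np

def median_filter(depth, k=3):
    """Nonlinear (median) filter over k-by-k windows; a stand-in
    for whatever nonlinear filter an implementation would choose."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth)
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

def encode_depth(depth):
    """Nonlinearly filter, then 2x down-sample prior to encoding."""
    return median_filter(depth)[::2, ::2]

def decode_depth(coded):
    """2x nearest-neighbour up-sample, then nonlinearly filter after decoding."""
    up = np.repeat(np.repeat(coded, 2, axis=0), 2, axis=1)
    return median_filter(up)
```

Filtering before down-sampling suppresses outlier depth samples that would otherwise survive decimation, and filtering again after up-sampling smooths the blockiness the nearest-neighbour step introduces.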

System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function

A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
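The per-patch minimal-color-vector step can be illustrated as follows. This is a simplified sketch, not the claimed method: it assumes "minimal" means lowest-luminance (the darkest observation of a patch plausibly carries the least specular highlight, leaving mostly the diffuse component), and the function name `diffuse_brdf_per_patch` is invented for the example.

```python
import numpy as np

def diffuse_brdf_per_patch(patch_regions):
    """Given {patch_id: list of RGB color vectors observed for that patch},
    pick the minimal color vector (by Rec. 709 luminance) per patch and
    use it as the diffuse (albedo) component of the BRDF."""
    diffuse = {}
    for patch_id, colors in patch_regions.items():
        colors = np.asarray(colors, dtype=float)          # shape (N, 3)
        luminance = colors @ np.array([0.2126, 0.7152, 0.0722])
        diffuse[patch_id] = colors[np.argmin(luminance)]  # darkest observation
    return diffuse
```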

Methods and apparatus for encoding, communicating and/or using images

Methods and apparatus for capturing, communicating and using image data to support virtual reality experiences are described. Images, e.g., frames, are captured at a high resolution but lower frame rate than is used for playback. Interpolation is applied to captured frames to generate interpolated frames. Captured frames, along with interpolated frame information, are communicated to the playback device. The combination of captured and interpolated frames correspond to a second frame playback rate which is higher than the image capture rate. Cameras operate at a high image resolution but slower frame rate than images could be captured with the same cameras at a lower resolution. Interpolation is performed prior to delivery to the user device with segments to be interpolated being selected based on motion and/or lens FOV information. A relatively small amount of interpolated frame data is communicated compared to captured frame data for efficient bandwidth use.
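The motion-based segment selection described above can be sketched as a block-matching pass: only blocks whose frame-to-frame difference exceeds a threshold are interpolated and transmitted, which is what keeps the interpolated-frame data small relative to the captured frames. This is a hedged illustration under assumed names (`select_segments_for_interpolation`, `interpolate_block`) and an assumed midpoint-average interpolator, not the described apparatus.

```python
import numpy as np

def select_segments_for_interpolation(prev_frame, next_frame,
                                      block=8, motion_thresh=10.0):
    """Return (y, x) origins of blocks whose mean absolute difference
    between consecutive captured frames exceeds motion_thresh."""
    h, w = prev_frame.shape
    selected = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = prev_frame[y:y + block, x:x + block].astype(float)
            b = next_frame[y:y + block, x:x + block].astype(float)
            if np.abs(a - b).mean() > motion_thresh:
                selected.append((y, x))
    return selected

def interpolate_block(a, b):
    """Temporal midpoint interpolation for one selected segment."""
    return ((a.astype(float) + b.astype(float)) / 2).astype(a.dtype)
```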

SYSTEMS AND METHODS FOR TRAINING POSE ESTIMATORS IN COMPUTER VISION

A data capture stage includes a frame at least partially surrounding a target object, a rotation device within the frame and configured to selectively rotate the target object, a plurality of cameras coupled to the frame and configured to capture images of the target object from different angles, a sensor coupled to the frame and configured to sense mapping data corresponding to the target object, and an augmentation data generator configured to control a rotation of the rotation device, to control operations of the plurality of cameras and the sensor, and to generate training data based on the images and the mapping data.
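The control flow of such an augmentation data generator might look like the following sketch: rotate the target in fixed increments, trigger every camera and the mapping sensor at each step, and pair the results into training records. All names (`rotate_to`, `capture`, `sense`, `capture_training_data`) are assumed device interfaces invented for illustration.

```python
def capture_training_data(rotation_device, cameras, sensor, steps=36):
    """One capture pass over a full rotation of the target object."""
    training_data = []
    for step in range(steps):
        angle = step * 360.0 / steps
        rotation_device.rotate_to(angle)          # pose the target
        images = [cam.capture() for cam in cameras]  # all viewpoints at once
        mapping = sensor.sense()                  # e.g. depth/mapping data
        training_data.append({"angle": angle,
                              "images": images,
                              "mapping": mapping})
    return training_data
```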

PRODUCT TARGET QUALITY CONTROL SYSTEM

A process includes receiving a target quality value, receiving a measured quality value, receiving a source quality value, and sending a source control instruction. The source control instruction is based at least in part on the target quality value, the measured quality value, and the source quality value. The target quality value, the measured quality value, the source quality value, and the source control instruction are communicated via a communication port. The measured quality value is generated by an inspection device configured to inspect a sample. The source quality value is associated with a quality level of a first group of samples. The target quality value indicates a desired quality value of an output group of samples. The source control instruction causes a source selecting device to select one of a plurality of groups of samples, each group having identified quality characteristics.
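The decision behind the source control instruction can be sketched as a simple feedback rule: when measured quality drifts below the target, select the source group whose quality level best compensates the gap. The blending rule and the name `select_source_group` are assumptions for illustration, not the claimed control logic.

```python
def select_source_group(target_q, measured_q, source_qs):
    """Return the index of the source group to select.

    Overshoot the target by the current shortfall so the blended
    output is pulled back toward the target quality value.
    """
    needed = target_q + (target_q - measured_q)
    return min(range(len(source_qs)),
               key=lambda i: abs(source_qs[i] - needed))
```

For example, with a target of 0.9 and a measured 0.8, the rule looks for a source group near 1.0 to offset the shortfall.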

Human-powered advanced rider assistance system
11487122 · 2022-11-01

A bicycle system with omnidirectional viewing has front-facing, stereoscopic video camera devices that rely on computer vision. The front-facing, stereoscopic video camera devices positioned on the bicycle help identify and classify obstacles and recommend a safe trajectory around them in real time using augmented reality. The bicycle system presents safety-related and guidance-related information.
