H04N13/10

Stereoscopic mobile retinal imager
11483537 · 2022-10-25

Disclosed herein are devices and methods for generating stereoscopic views of the eye (or any desired anatomic structure) using a dual-camera portable computing device. The locations of the two cameras are fixed, and the camera lenses may have different focal lengths. For example, the focal length of the second camera lens may be longer than the focal length of the first camera lens. One variation of a detachable imaging system comprises an objective lens and a relay lens that are disposed over the two cameras. The relay lens may be disposed over the first and second cameras, and have a focal length that is greater than the focal length of the first camera lens and less than or equal to the focal length of the second camera lens.
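The focal-length relationship in the abstract (relay lens longer than the first camera lens, at most as long as the second) can be expressed as a simple constraint check. This is an illustrative sketch; the function name and example focal lengths are assumptions, not values from the patent.

```python
# Check the relationship f_cam1 < f_relay <= f_cam2 described above.
def relay_lens_compatible(f_cam1_mm: float, f_cam2_mm: float, f_relay_mm: float) -> bool:
    """Return True if the relay lens focal length exceeds the first camera
    lens focal length and does not exceed the second camera lens focal length."""
    return f_cam1_mm < f_relay_mm <= f_cam2_mm

# Hypothetical wide lens at 4.25 mm, tele lens at 6.0 mm, relay at 5.5 mm.
print(relay_lens_compatible(4.25, 6.0, 5.5))
```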

Directed interpolation and data post-processing

An encoding device evaluates a plurality of processing and/or post-processing algorithms and/or methods to be applied to a video stream, and signals a selected method, algorithm, class or category of methods/algorithms either in an encoded bitstream or as side information related to the encoded bitstream. A decoding device or post-processor utilizes the signaled algorithm or selects an algorithm/method based on the signaled method or algorithm. The selection is based, for example, on availability of the algorithm/method at the decoder/post-processor and/or cost of implementation. The video stream may comprise, for example, downsampled multiplexed stereoscopic images and the selected algorithm may include any of upconversion and/or error correction techniques that contribute to a restoration of the downsampled images.
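The signaling scheme above can be sketched as follows: the encoder selects a method and writes its identifier plus a category as side information; the decoder uses the signaled method if it is implemented, otherwise falls back to an available method of the same category. All method names, the category table, and the cost function are illustrative assumptions, not taken from any codec specification.

```python
# Hypothetical mapping from post-processing method to its category.
CATEGORY = {
    "bilinear_upsample": "upconversion",
    "lanczos_upsample": "upconversion",
    "fec_decode": "error_correction",
}

def encoder_select(candidates, cost):
    """Evaluate candidates, pick the cheapest, and return (method, side_info)."""
    method = min(candidates, key=cost)
    return method, {"method": method, "category": CATEGORY[method]}

def decoder_select(side_info, available):
    """Use the signaled method if available at the decoder; otherwise fall
    back to any available method in the signaled category, or None."""
    if side_info["method"] in available:
        return side_info["method"]
    fallbacks = [m for m in available if CATEGORY.get(m) == side_info["category"]]
    return fallbacks[0] if fallbacks else None

method, info = encoder_select(
    ["bilinear_upsample", "lanczos_upsample"], cost=lambda m: len(m))
print(decoder_select(info, available={"lanczos_upsample"}))
```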

Data processing method and electronic device

Embodiments of the present application provide a data processing method and an electronic device. The data processing method includes: determining whether a current collection scene satisfies a condition for enabling a high-dynamic range (HDR) collection function; automatically enabling the HDR collection function in response to the current collection scene satisfying the condition for enabling the HDR collection function; and collecting at least two two-dimensional images with different exposures within a collection time of one frame of three-dimensional video data based on the HDR collection function; wherein the at least two two-dimensional images are configured to enable a mobile edge computing (MEC) server to build a three-dimensional video.
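The claimed flow — test a condition on the collection scene, enable HDR, then capture differently exposed images within one 3D-video frame period — can be sketched as below. The dynamic-range threshold, frame rate, and exposure arithmetic are illustrative assumptions; the patent does not specify them.

```python
FRAME_PERIOD_MS = 33.3            # one frame of 3D video at ~30 fps (assumed)
HDR_DYNAMIC_RANGE_THRESHOLD = 100.0  # assumed luminance-ratio threshold

def should_enable_hdr(max_lum: float, min_lum: float) -> bool:
    """Enable HDR collection when the scene's luminance ratio is large."""
    return max_lum / max(min_lum, 1e-6) > HDR_DYNAMIC_RANGE_THRESHOLD

def plan_exposures(base_exposure_ms: float):
    """Derive two exposures (short, long) that fit within one frame period."""
    short, long = base_exposure_ms / 2, base_exposure_ms * 2
    if short + long > FRAME_PERIOD_MS:
        raise ValueError("exposures do not fit in one frame period")
    return short, long

if should_enable_hdr(max_lum=2000.0, min_lum=5.0):
    print(plan_exposures(base_exposure_ms=8.0))   # (4.0, 16.0)
```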

Imaging apparatus capable of switching display methods
11622082 · 2023-04-04

An imaging apparatus comprises an image pickup unit; a cutout image generation unit for cutting out a specified area of a pickup image taken by the image pickup unit to generate a cutout image enlarged at a specified magnification; an image display unit for displaying the pickup image, the cutout image, or both; a display image control unit for controlling how the image display unit displays an image; a manual focus operation unit with which the user manually controls the focus position of the image pickup unit; and a manual zoom operation unit with which the user manually controls the zoom magnification of the image pickup unit.
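The cutout generation step can be illustrated with a minimal sketch: crop a specified area from the pickup image and enlarge it by nearest-neighbour scaling at an integer magnification. The image here is a plain 2-D list of pixel values; the function and its parameters are hypothetical.

```python
def generate_cutout(image, x, y, w, h, magnification):
    """Crop image[y:y+h][x:x+w] and enlarge it by an integer magnification,
    repeating each pixel magnification times horizontally and vertically."""
    crop = [row[x:x + w] for row in image[y:y + h]]
    cutout = []
    for row in crop:
        scaled = [px for px in row for _ in range(magnification)]
        for _ in range(magnification):
            cutout.append(list(scaled))
    return cutout

# A 4x4 test image; the 2x2 crop at (1, 1) becomes a 4x4 cutout.
image = [[r * 10 + c for c in range(4)] for r in range(4)]
print(generate_cutout(image, x=1, y=1, w=2, h=2, magnification=2))
```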

Augmented three dimensional point collection of vertical structures

Automated methods and systems are disclosed, including a method comprising: obtaining a first three-dimensional-data point cloud of a horizontal surface of an object of interest, the first three-dimensional-data point cloud having a first resolution and having a three-dimensional location associated with each point in the first three-dimensional-data point cloud; capturing one or more aerial images, at one or more oblique angles, depicting at least a vertical surface of the object of interest; analyzing the one or more aerial images with a computer system to determine three-dimensional locations of additional points on the object of interest; and updating the first three-dimensional-data point cloud with the three-dimensional locations of the additional points on the object of interest to create a second three-dimensional-data point cloud having a second resolution greater than the first resolution of the first three-dimensional-data point cloud.
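The updating step above amounts to merging points recovered from oblique imagery (vertical surfaces) into the existing point cloud of the horizontal surface. A minimal sketch, assuming points are simple (x, y, z) tuples and duplicates are dropped on merge:

```python
def augment_point_cloud(horizontal_cloud, vertical_points):
    """Return a denser cloud containing both point sets, duplicates removed."""
    merged = set(horizontal_cloud)
    merged.update(vertical_points)
    return sorted(merged)

# Hypothetical data: roof points from an aerial scan, wall points derived
# from oblique images; one wall point coincides with an existing roof point.
roof = [(0.0, 0.0, 10.0), (1.0, 0.0, 10.0)]
walls = [(0.0, 0.0, 5.0), (0.0, 0.0, 0.0), (1.0, 0.0, 10.0)]
cloud = augment_point_cloud(roof, walls)
print(len(cloud))   # 4
```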

Multisensory data fusion system and method for autonomous robotic operation

A robotic system includes one or more optical sensors configured to separately obtain two dimensional (2D) image data and three dimensional (3D) image data of a brake lever of a vehicle, a manipulator arm configured to grasp the brake lever of the vehicle, and a controller configured to compare the 2D image data with the 3D image data to identify one or more of a location or a pose of the brake lever of the vehicle. The controller is configured to control the manipulator arm to move toward, grasp, and actuate the brake lever of the vehicle based on the one or more of the location or the pose of the brake lever.
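One way to realize the 2D/3D comparison is to detect the brake lever in the 2D image, project the 3D points into the image with a pinhole model, and average the points that fall inside the detection box to estimate the lever's 3D location. This is a hedged sketch: the intrinsics, box, and point data are all assumed, and the patent's actual comparison may differ.

```python
FX = FY = 500.0          # assumed focal lengths in pixels
CX, CY = 320.0, 240.0    # assumed principal point

def project(p):
    """Pinhole projection of a 3-D point (x, y, z) into pixel coordinates."""
    x, y, z = p
    return (FX * x / z + CX, FY * y / z + CY)

def locate_lever(points_3d, box):
    """Average the 3-D points whose projections fall inside the 2-D box,
    giving an estimated 3-D location for the detected object."""
    x0, y0, x1, y1 = box
    hits = [p for p in points_3d
            if x0 <= project(p)[0] <= x1 and y0 <= project(p)[1] <= y1]
    if not hits:
        return None
    n = len(hits)
    return tuple(sum(coord) / n for coord in zip(*hits))

points = [(0.10, 0.00, 1.0), (0.12, 0.02, 1.0), (3.0, 3.0, 5.0)]
print(locate_lever(points, box=(360.0, 230.0, 400.0, 260.0)))
```

The third point projects well outside the box and is excluded, so the estimate is the centroid of the two lever points.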