H04N13/271

Monocular cued detection of three-dimensional structures from depth images

Detection of three-dimensional obstacles using a system mountable in a host vehicle, the system including a camera connectible to a processor. Multiple image frames are captured in the field of view of the camera. In the image frames, an imaged feature of an object in the environment of the vehicle is detected. The image frames are partitioned locally around the imaged feature to produce image portions that include the imaged feature. The image frames are processed to compute a depth map locally around the detected imaged feature in the image portions. Responsive to the depth map, it is determined whether the object is an obstacle to the motion of the vehicle.
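The final step above, deciding from a local depth map whether a detected feature is an obstacle, can be sketched as a simple thresholding rule. This is a minimal illustration, not the patented method; the patch format and both thresholds are assumptions.

```python
import numpy as np

def is_obstacle(depth_patch, max_safe_depth=20.0, min_obstacle_fraction=0.3):
    """Decide whether an imaged feature is an obstacle, given the local
    depth map (in meters) computed around the detected feature.

    Hypothetical rule: flag the patch as an obstacle when a sufficient
    fraction of its valid pixels lie closer than max_safe_depth.
    """
    valid = depth_patch[np.isfinite(depth_patch)]  # drop missing depths
    if valid.size == 0:
        return False
    near_fraction = np.mean(valid < max_safe_depth)
    return bool(near_fraction >= min_obstacle_fraction)
```

A real system would additionally use the host vehicle's predicted path so that only objects within the driving corridor are flagged.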

Creating shockwaves in three-dimensional depth videos and images
11763420 · 2023-09-19

A virtual shockwave creation system comprises an eyewear device that includes a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the virtual shockwave creation system to generate, for each of multiple initial depth images, a respective warped shockwave image by applying a transformation function to the initial three-dimensional coordinates. The virtual shockwave creation system creates a warped shockwave video including a sequence of the generated warped shockwave images. The virtual shockwave creation system presents, via an image display, the warped shockwave video.
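One way such a transformation function could act on the initial three-dimensional coordinates is as a radial displacement centered on an expanding wave front. The sketch below is illustrative only; the parameter names and the Gaussian-bump form are assumptions, not taken from the patent.

```python
import numpy as np

def apply_shockwave(points, center, wave_radius, amplitude=0.1, width=0.5):
    """Warp an (N, 3) array of 3D coordinates with a radial shockwave:
    points near the current wave-front radius are displaced along z.

    center      -- (x, y) epicenter of the shockwave
    wave_radius -- radius of the wave front for this video frame
    amplitude   -- peak displacement; width -- wave-front thickness
    """
    offsets = points[:, :2] - center            # horizontal offsets
    dist = np.linalg.norm(offsets, axis=1)      # distance from epicenter
    # Gaussian bump centered on the current wave-front radius
    gain = amplitude * np.exp(-((dist - wave_radius) / width) ** 2)
    warped = points.copy()
    warped[:, 2] += gain                        # displace the depth axis
    return warped
```

Calling this per frame with an increasing `wave_radius` and rendering each result yields the sequence of warped images that make up the shockwave video.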

Optical imaging system and methods thereof
11190752 · 2021-11-30

An optical imaging system to image a target object includes a light source configured to emit one or more light rays to illuminate the target object and an image detector configured to capture a three-dimensional topography image of the target object when light is emitted from the target object in response to being illuminated by the light rays emitted by the light source. A fluorescence image detector captures a fluorescence image of the target object when fluorescence is emitted from the target object in response to illumination by light rays emitted by the light source. A controller instructs the image detector to capture the 3D topography image and the fluorescence image detector to detect the fluorescence image of the target object intraoperatively, and to co-register and simultaneously display intraoperatively the co-registered topography and fluorescence information to the user via a display.

SYSTEM AND METHOD FOR CAPTURING OMNI-STEREO VIDEOS USING MULTI-SENSORS
20220030212 · 2022-01-27

A method of calibrating cameras used to collect images to form an omni-stereo image is disclosed. The method may comprise determining intrinsic and extrinsic camera parameters for each of a plurality of left eye cameras and right eye cameras arranged along a viewing circle or ellipse and angled tangentially with respect to the viewing circle or ellipse; categorizing left-right pairs of the plurality of left eye cameras and the plurality of right eye cameras into at least a first category, a second category or a third category; aligning the left-right pairs of cameras that fall into the first category; aligning the left-right pairs of cameras that fall into the second category; and aligning the left-right pairs of cameras that fall into the third category by using extrinsic parameters of the left-right pairs that fall into the first category, and of the left-right pairs that fall into the second category.
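The alignment of third-category camera pairs by chaining the extrinsics of directly calibrated pairs amounts to composing rigid transforms. A minimal sketch, assuming 4×4 homogeneous extrinsic matrices (the function name and matrix convention are illustrative, not from the patent):

```python
import numpy as np

def chain_extrinsics(T_a_to_b, T_b_to_c):
    """Compose two 4x4 homogeneous extrinsic transforms.

    A camera pair that cannot be calibrated directly (third category)
    can be aligned by chaining transforms already estimated for first-
    and second-category pairs: if T_a_to_b maps camera A's frame to B's
    and T_b_to_c maps B's to C's, the composition maps A's frame to C's.
    """
    return T_b_to_c @ T_a_to_b
```

Because the per-pair extrinsics carry estimation error, a practical system would refine the chained result, for example with a global bundle adjustment over all camera pairs.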

Technologies for generating computer models, devices, systems, and methods utilizing the same
11189085 · 2021-11-30

Technologies for generating and using 3D models are described. In some embodiments, the technologies employ a content creation device to produce a 3D model of an environment based at least in part on depth data and color data, which may be provided by one or more cameras. Contextual information such as location information, orientation information, etc., may also be collected or otherwise determined, and associated with points of the 3D model. Access points to the imaged environments may be identified and labeled as anchor points within the 3D model. Multiple 3D models may then be combined into an aggregate model, wherein anchor points of constituent 3D models in the aggregate model are substantially aligned. Devices, systems, and computer readable media utilizing such technologies are also described.
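The anchor-point alignment step can be illustrated with the simplest possible case: a rigid translation that brings one model's anchors onto the matching anchors of the aggregate model. This is a hypothetical sketch (a full solution would also solve for rotation, e.g. via the Kabsch algorithm); the function and argument names are not from the patent.

```python
import numpy as np

def align_by_anchors(model_points, model_anchors, target_anchors):
    """Translate a constituent 3D model so its anchor points line up
    with the matching anchors already placed in the aggregate model.

    model_points   -- (N, 3) points of the model to move
    model_anchors  -- (K, 3) anchor points in the model's own frame
    target_anchors -- (K, 3) corresponding anchors in the aggregate frame

    Minimal sketch: translation only, using the mean anchor offset.
    """
    offset = target_anchors.mean(axis=0) - model_anchors.mean(axis=0)
    return model_points + offset
```

Applying this to each constituent model in turn, anchor by anchor, produces an aggregate model in which the shared access points are substantially aligned.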

METHOD FOR FOCUSING A CAMERA
20220030157 · 2022-01-27

Aspects of the present disclosure are directed to a method for focusing a camera. In one embodiment, the method includes: dividing the field of view of the camera into at least two segments; assigning, in each case, at least one operating element or at least one position of an operating element to the at least two segments; recognizing and tracking at least one object in the at least two segments; automatically assigning the recognized objects to the respective operating element or position of the operating element depending on which segment the objects are assigned to; and focusing the camera on the object assigned to the operating element or the position of the operating element in response to the operating element being actuated or the operating element being brought into the corresponding position.
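The core mapping in the steps above, from tracked objects through segments to operating elements, can be sketched with plain dictionaries. The data shapes here (positions as (x, y) tuples, segments as membership predicates) are assumptions for illustration, not the patent's representation.

```python
def assign_objects_to_elements(tracked_objects, segments):
    """Map each operating element to the tracked object that currently
    lies in the element's segment.

    tracked_objects -- dict: object id -> (x, y) position in the frame
    segments        -- dict: operating-element id -> predicate over positions
    """
    assignment = {}
    for obj_id, pos in tracked_objects.items():
        for element, contains in segments.items():
            if contains(pos):
                assignment[element] = obj_id  # element now targets this object
                break
    return assignment

def focus_on(assignment, actuated_element):
    """Return the object to focus on when an operating element is
    actuated (or brought into its corresponding position)."""
    return assignment.get(actuated_element)
```

Usage: with the frame split into a left and a right segment, actuating the "left" element pulls focus to whichever object is currently tracked in the left half of the field of view.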
