H04N5/222

IMAGING APPARATUS, IMAGING METHOD, AND STORAGE MEDIUM
20230048045 · 2023-02-16 ·

An apparatus includes an image sensor configured to capture a plurality of images that differ in in-focus position, at least one memory configured to store instructions, and at least one processor in communication with the at least one memory and configured to execute the instructions to determine a predetermined value of exposure in advance, control exposure so that the image sensor captures the plurality of images with exposure less than the predetermined value, and correct the brightness of at least a part of the plurality of images based on the predetermined value.
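The underexpose-then-correct idea can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`capture_at_focus`, `capture_focus_stack`), the 4-pixel stand-in sensor, and the single-stop exposure reduction are all assumptions made for the example.

```python
def capture_at_focus(focus, ev):
    """Stand-in sensor readout: returns a tiny flat 8-bit frame whose
    brightness scales with exposure value (purely illustrative)."""
    base = 128.0 * (2.0 ** ev)
    return [min(base, 255.0)] * 4  # 4-pixel "image"

def capture_focus_stack(focus_positions, target_ev=0.0, ev_reduction=1.0):
    """Capture a focus bracket with exposure held below the predetermined
    target, then gain-correct each frame back toward target brightness."""
    gain = 2.0 ** ev_reduction  # linear gain restoring the reduced EV
    frames = []
    for f in focus_positions:
        raw = capture_at_focus(f, ev=target_ev - ev_reduction)
        frames.append([min(p * gain, 255.0) for p in raw])  # clip to 8-bit
    return frames
```

Shooting the bracket underexposed shortens each capture, which can reduce motion between the differently focused frames; the digital gain then restores the intended brightness.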

DISTRIBUTED COMMAND EXECUTION IN MULTI-LOCATION STUDIO ENVIRONMENTS

A content production management system within a distributed studio environment includes a command interface module and a command queue management module. The command interface module is configured to render a user interface for a set of content production entities associated with a set of content production volumes within the distributed studio environment. The command queue management module, upon execution of software instructions, is configured to perform the operations of receiving, from the command interface module, a command targeting a target content production entity, assigning a synchronized execution time to the command, enqueueing the command into a command queue associated with the target content production entity according to the synchronized execution time, and enabling the target content production entity to execute the command from the command queue according to the synchronized execution time.
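The queueing behavior described above can be sketched with a time-ordered priority queue. This is a hedged sketch of the general technique only; the class name, method names, and the simple `due(now)` polling model are assumptions, and the patent's command interface and multi-volume coordination are not modeled.

```python
import heapq

class CommandQueue:
    """Per-entity command queue ordered by synchronized execution time."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving enqueue order at equal times

    def enqueue(self, execute_at, command):
        """Enqueue a command under its assigned synchronized execution time."""
        heapq.heappush(self._heap, (execute_at, self._seq, command))
        self._seq += 1

    def due(self, now):
        """Pop every command whose synchronized execution time has arrived."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[2])
        return ready
```

Because each entity drains its own queue against a shared clock, commands enqueued from different locations still execute in a consistent, time-synchronized order.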

Obtaining image data of an object in a scene

A method and processor system are provided which analyze a depth map, which may be obtained from a range sensor capturing depth information of a scene, to identify where an object is located in the scene. Accordingly, a region of interest may be identified in the scene which includes the object, and image data may be selectively obtained of the region of interest, rather than of the entire scene containing the object. This image data may be acquired by an image sensor configured for capturing visible light information of the scene. By only selectively obtaining the image data within the region of interest, rather than all of the image data, improvements may be realized in the computational complexity of a possible further processing of the image data, the storage of the image data and/or the transmission of the image data.
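The region-of-interest selection can be illustrated with a small depth-thresholding sketch. The near/far band, list-of-lists depth map, and function names are illustrative assumptions, not details from the publication.

```python
def region_of_interest(depth_map, near, far):
    """Bounding box of pixels whose depth falls within [near, far].
    depth_map is a list of rows of depth values. Returns (x0, y0, x1, y1)
    inclusive, or None when no pixel qualifies."""
    xs, ys = [], []
    for y, row in enumerate(depth_map):
        for x, d in enumerate(row):
            if near <= d <= far:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

def crop(image, roi):
    """Selectively read out only the region of interest from an image."""
    x0, y0, x1, y1 = roi
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
```

Only the cropped region then needs to be acquired, stored, or transmitted, which is where the claimed savings in computation, storage, and bandwidth arise.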

Broadcast lighting system and the method of use thereof
11595556 · 2023-02-28 ·

Embodiments of a live broadcast lighting system are disclosed. In one example embodiment, the live broadcast lighting system includes a light emitting apparatus, a control box being connected to the light emitting apparatus, and a device holder coupled to the control box. The device holder can be configured to releasably retain a video recording device. The control box can include an electronic control circuit configured to control rotation of the light emitting apparatus. The device holder can be configured to be rotatable independent of the rotation of the light emitting apparatus.

Live Teleporting System and Apparatus
20180007314 · 2018-01-04 ·

A method of producing a Pepper's Ghost, includes projecting an image of a subject onto a reflective and transparent screen to create a virtual image of the subject alongside an object, the subject in the virtual image having a colour temperature. The object is illuminated with light having a colour and intensity that results in a colour temperature of the object at least approximately matching the colour temperature of the subject in the virtual image. The subject in the virtual image has a luminance and may be illuminated with light having a colour and intensity that results in a luminance of the object at least approximately matching the luminance of the subject in the virtual image.

Integrated Camera System Having Two Dimensional Image Capture and Three Dimensional Time-of-Flight Capture With A Partitioned Field of View
20180007347 · 2018-01-04 ·

An apparatus is described that includes an integrated two-dimensional image capture and three-dimensional time-of-flight depth capture system. The three-dimensional time-of-flight depth capture system includes an illuminator to generate light. The illuminator includes arrays of light sources. Each of the arrays is dedicated to a particular different partition within a partitioned field of view of the illuminator.

Image processing

An apparatus comprises a camera configured to capture images of a user in a scene; a depth detector configured to capture depth representations of the scene, the depth detector comprising an emitter configured to emit a non-visible signal; a mirror arranged to reflect at least some of the non-visible signal emitted by the emitter to one or more features within the scene that would otherwise be occluded by the user, and to reflect light from the one or more features to the camera; a pose detector configured to detect a position and orientation of the mirror relative to at least one of the camera and the depth detector; and a scene generator configured to generate a three-dimensional representation of the scene in dependence on the images captured by the camera, the depth representations captured by the depth detector, and the pose of the mirror detected by the pose detector.

Volumetric representation of digital objects from depth renderings

An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware executes the software code to receive a digital object, surround the digital object with virtual cameras oriented toward the digital object, render, using each one of the virtual cameras, a depth map identifying a distance of that one of the virtual cameras from the digital object, and generate, using the depth map, a volumetric perspective of the digital object from a perspective of that one of the virtual cameras, resulting in multiple volumetric perspectives of the digital object. The processing hardware further executes the software code to merge the multiple volumetric perspectives of the digital object to form a volumetric representation of the digital object, and to convert the volumetric representation of the digital object to a renderable form.
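Merging per-camera depth renderings into one volume resembles classic space carving, which can be sketched as follows. This is a deliberately simplified assumption-laden version: it uses three orthographic cameras along the +x, +y, and +z axes of a voxel grid, whereas the system described above surrounds the object with virtual cameras in arbitrary orientations.

```python
def merge_perspectives(grid_size, depth_maps):
    """Intersect per-camera occupancy: a voxel (x, y, z) survives only if
    every depth map places the rendered surface at or in front of it along
    that camera's viewing axis (a space-carving-style merge)."""
    n = grid_size
    dx, dy, dz = depth_maps  # depth along x indexed by (y, z), etc.
    solid = set()
    for x in range(n):
        for y in range(n):
            for z in range(n):
                # Behind (or on) the surface seen by all three cameras.
                if x >= dx[y][z] and y >= dy[x][z] and z >= dz[x][y]:
                    solid.add((x, y, z))
    return solid
```

Each depth map carves away the free space its camera can see past; intersecting the survivors from every viewpoint leaves the volumetric representation of the object.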

A METHOD FOR CODING SPACE INFORMATION IN CONTINUOUS DYNAMIC IMAGES
20230239422 · 2023-07-27 ·

A coding method for space information in continuous dynamic images is provided, which includes the following steps: parsing and extracting space information data, constructing a space information data packet, and coding the space information data packet. According to the coding method of the present invention, physical parameters of a camera, such as lens, position, and orientation, as well as space depth information across a plurality of continuous dynamic images, can be recorded and stored in real time. The coded and stored camera parameters and space depth information can then be applied to virtual simulation and graphic vision enhancement scenarios, so that in many applications, such as film and television shooting, advertising production, and personal video vlogs, abundant and integrated graphic and text enhancement effects can be implanted in real time, thereby improving the final image display effect.
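A space information data packet of this kind can be sketched as a fixed binary layout. The byte format below is entirely an assumption for illustration; the publication does not disclose a concrete packet structure, and the field set (focal length, position, Euler orientation, frame index) is a hypothetical minimum.

```python
import struct

# Illustrative per-frame packet layout (not from the publication):
# focal length (float), 3-float position, 3-float orientation, frame index.
PACKET_FMT = "<f3f3fI"  # little-endian, 32 bytes total

def encode_space_packet(focal_mm, position, orientation, frame):
    """Construct and code one space information data packet."""
    return struct.pack(PACKET_FMT, focal_mm, *position, *orientation, frame)

def decode_space_packet(blob):
    """Parse a coded packet back into its space information fields."""
    vals = struct.unpack(PACKET_FMT, blob)
    return {"focal_mm": vals[0], "position": vals[1:4],
            "orientation": vals[4:7], "frame": vals[7]}
```

Emitting one such packet per frame alongside the video stream is one way the coded camera parameters could be replayed in a virtual-simulation or graphics-enhancement pipeline.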

CAMERA MODULE
20230236478 · 2023-07-27 ·

A camera module comprises: a first housing; a lens module disposed in the first housing; a second housing coupled to the first housing; a first printed circuit board disposed in the inner space of the first housing and the second housing; an image sensor disposed on the first printed circuit board; and a shield can disposed under the first printed circuit board in the second housing, wherein the shield can comprises a rib which comes into contact with the lower surface of the first printed circuit board.