H04N5/2624

VIDEO PROCESSING METHOD AND APPARATUS, AND TERMINAL AND STORAGE MEDIUM
20220377254 · 2022-11-24

The embodiments of the disclosure provide a video processing method and apparatus, a terminal, and a storage medium. The method includes: turning on a first camera on a first side and a second camera on a second side of a terminal in response to a first preset operation, wherein the first side and the second side are opposite to each other or face different directions; and using the first camera and the second camera for simultaneous video recording. By recording simultaneously with the cameras on two sides of the terminal, the method provides more flexible choices for video presentation and editing.
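As a loose illustration only (no code appears in the patent; all names here are invented), simultaneous recording from two cameras can be sketched as two capture loops running concurrently and appending timestamped frames to a shared recording:

```python
import threading
import time

def record(camera_name, frame_source, frames, stop_event):
    """Poll one camera's frame source until asked to stop."""
    while not stop_event.is_set():
        frames.append((camera_name, frame_source()))
        time.sleep(0.01)  # simulated per-frame interval

def record_dual(front_source, rear_source, duration=0.1):
    """Run both cameras' capture loops at the same time, as in the abstract."""
    frames = []
    stop = threading.Event()
    threads = [
        threading.Thread(target=record, args=("front", front_source, frames, stop)),
        threading.Thread(target=record, args=("rear", rear_source, frames, stop)),
    ]
    for t in threads:
        t.start()
    time.sleep(duration)   # record for the requested duration
    stop.set()
    for t in threads:
        t.join()
    return frames

frames = record_dual(lambda: "front-frame", lambda: "rear-frame")
```

Keeping both streams in one interleaved list is what later allows the flexible presentation and editing choices the abstract mentions, since either view can be extracted or composed afterwards.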

PERIPHERY MONITORING DEVICE FOR WORKING MACHINE

A periphery monitoring device calculates an expected passage range, indicating the range of the locus of the machine body when the lower travelling body travels in the imaging direction of a camera, based on the slewing angle of the upper slewing body and the attitude of the attachment. The device superimposes a range image indicating the calculated expected passage range on an image captured by the camera, and displays the superimposed image on a display.
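The geometric core of this calculation can be sketched as follows. This is a hypothetical simplification, not the patented implementation: it assumes the camera is fixed to the upper slewing body, so the lower body's travel direction appears rotated by the slewing angle in the camera frame, and it outlines the two track edges at a few forward distances:

```python
import math

def expected_passage_range(slew_angle_deg, track_width, distance, steps=5):
    """
    Sketch: sample points outlining where the tracks of the lower
    travelling body will pass, expressed in the camera (upper-body)
    frame. The travel direction is the camera axis rotated back by
    the slewing angle.
    """
    a = math.radians(-slew_angle_deg)
    dx, dy = math.sin(a), math.cos(a)      # travel direction in camera frame
    px, py = math.cos(a), -math.sin(a)     # perpendicular, for track offset
    half = track_width / 2.0
    left, right = [], []
    for i in range(1, steps + 1):
        d = distance * i / steps
        left.append((d * dx - half * px, d * dy - half * py))
        right.append((d * dx + half * px, d * dy + half * py))
    return left, right
```

Projecting these ground-plane points through the camera model would then give the range image that is superimposed on the captured image.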

Display-covered camera

One embodiment provides a method, including: receiving, at an information handling device having a display, an indication to capture an image of a scene using a camera sensor positioned underneath the display; capturing, responsive to the receiving, a plurality of partial images of the scene, wherein the capturing comprises adjusting, using an adjustment mechanism, a physical position of the camera sensor after each of the plurality of partial images of the scene is captured; and stitching, subsequent to the capturing, the plurality of partial images together to form the image of the scene. Other aspects are described and claimed.
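The capture-adjust-stitch loop can be sketched in one dimension, purely as an illustration (the function names are invented, and a scene is modeled as a string of pixel values):

```python
def capture_partial(scene, offset, width):
    """One partial exposure: the slice of the scene visible at the current sensor position."""
    return scene[offset:offset + width]

def capture_and_stitch(scene, sensor_width):
    """Step the sensor across the scene, capturing a partial image at each position, then stitch."""
    partials = []
    offset = 0
    while offset < len(scene):
        partials.append(capture_partial(scene, offset, sensor_width))
        offset += sensor_width       # the "adjustment mechanism" moves the sensor
    return "".join(partials)         # stitch the partial images into the full image
```

A real under-display sensor would capture overlapping 2-D tiles and blend them, but the control flow (capture, reposition, repeat, stitch) is the same.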

VIDEO TRANSMISSION METHOD, VIDEO PROCESSING DEVICE, AND VIDEO GENERATING SYSTEM FOR VIRTUAL REALITY
20230057768 · 2023-02-23

The disclosure provides a video transmission method for virtual reality. The method includes reorganizing an obtained first video and second video to generate a third video suitable for transmission through a physical wire, thereby avoiding the distortion caused by compressing a high-definition video. The disclosure further provides a video processing device and a video generating system employing the video transmission method.
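One common way to reorganize two streams into one without compression is frame interleaving; the abstract does not say which scheme is used, so the following is only an assumed illustration of the idea that the receiver can recover both videos losslessly:

```python
def reorganize(left_frames, right_frames):
    """Interleave two equal-length videos into a single third video for one wire."""
    assert len(left_frames) == len(right_frames)
    third = []
    for l, r in zip(left_frames, right_frames):
        third.append(l)
        third.append(r)
    return third

def split(third):
    """Receiver side: recover the two original videos from the third video."""
    return third[0::2], third[1::2]
```

Because no frame is discarded or re-encoded, the round trip is exact, which is the point of avoiding compression artifacts on a high-definition source.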

Data Sharing Method and Data Sharing System Capable of Providing Various Group Calling Modes

A data sharing method includes: logging in to a first account through a communication interface by a first receiver, to establish a link between the first receiver and a server corresponding to the communication interface; logging in to a second account through the communication interface by a second receiver, to establish a link between the second receiver and the server; and transmitting image data from a first transmitter to the second receiver, through the first receiver and the server, to share the image data. The first receiver is linked to a first display and the second receiver is linked to a second display, so the image data is shared with both the first display and the second display.
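A minimal sketch of this login-and-relay flow, with entirely invented class names and no claim to match the patented protocol: receivers log accounts in to the server, the transmitter's data arrives via the first receiver (which shows it on its own display), and the server relays it to the other linked receiver:

```python
class Receiver:
    def __init__(self, display):
        self.display = display          # list standing in for a linked display

    def show(self, image):
        self.display.append(image)

class Server:
    def __init__(self):
        self.receivers = {}             # account -> linked receiver

    def login(self, account, receiver):
        """Establish a link between a receiver and the server under an account."""
        self.receivers[account] = receiver

    def forward(self, image, from_account):
        """Relay image data to every other receiver linked to the server."""
        for account, receiver in self.receivers.items():
            if account != from_account:
                receiver.show(image)

server = Server()
display_1, display_2 = [], []
receiver_1 = Receiver(display_1)
receiver_2 = Receiver(display_2)
server.login("account-1", receiver_1)
server.login("account-2", receiver_2)
# The transmitter sends through the first receiver: it is shown locally,
# then forwarded through the server to the second receiver.
receiver_1.show("img")
server.forward("img", "account-1")
```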

Optical tracking device with built-in structured light module

A system is disclosed that includes an optical tracking device and a surgical computing device. The optical tracking device includes a structured light module and an optical module that includes an image sensor and is spaced from the structured light module at a known distance. The surgical computing device includes a display device, a non-transitory computer readable medium including instructions, and processor(s) configured to execute the instructions to generate a depth map from a first image captured by the image sensor during projection of a pattern into a surgical environment by the structured light module. The pattern is projected in a near-infrared (NIR) spectrum. The processor(s) are further configured to execute the stored instructions to reconstruct a 3D surface of anatomical structure(s) based on the generated depth map. Additionally, the processor(s) are configured to execute the stored instructions to output the reconstructed 3D surface to the display device.
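Because the structured light module and the image sensor are separated by a known baseline, depth can be recovered by triangulation: a projected pattern feature shifts in the image by a disparity inversely proportional to depth. The abstract does not give the formula, so the standard pinhole relation below is an assumption, shown purely as a sketch:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulate depth from the observed shift of a projected NIR pattern feature."""
    return focal_px * baseline_mm / disparity_px

def depth_map(disparities, focal_px, baseline_mm):
    """Apply the triangulation to a whole grid of per-pixel disparities."""
    return [[depth_from_disparity(d, focal_px, baseline_mm) for d in row]
            for row in disparities]
```

The resulting depth map is what the surgical computing device would then mesh into the reconstructed 3D surface of the anatomical structures.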

Camera and visitor user interfaces
11589010 · 2023-02-21

The present disclosure generally relates to camera and visitor user interfaces. In some examples, the present disclosure relates to techniques for switching between configurations of a camera view. In some examples, the present disclosure relates to displaying indications of visitors detected by an accessory device of a home automation system. In some examples, the present disclosure relates to displaying a multi-view camera user interface.

Fixed pattern calibration for multi-view stitching
11587259 · 2023-02-21

An apparatus includes an interface and a processor. The interface may be configured to receive pixel data representing respective fields of view of two or more cameras arranged to obtain a predetermined field of view, where the respective fields of view of each adjacent pair of the two or more cameras overlap. The processor may be configured to process the pixel data arranged as video frames and perform a fixed pattern calibration for facilitating multi-view stitching. The fixed pattern calibration may comprise applying a pose calibration process to the video frames. The pose calibration process generally uses (i) intrinsic parameters, a respective translate vector, a respective rotation matrix, and distortion parameters for each lens of the two or more cameras and (ii) a calibration board to obtain configuration parameters for the respective fields of view of the two or more cameras. The pose calibration process may comprise changing a z value of the respective translate vector for each lens of the two or more cameras to at least one of a middle distance value and a long distance value while maintaining the respective rotation matrix for each lens of the two or more cameras unchanged.
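The final step of the pose calibration process (changing the z value of each translate vector while keeping each rotation matrix fixed) can be sketched directly; the data layout below is an assumed simplification, with each lens's extrinsics held as a (rotation matrix, translate vector) pair:

```python
def retarget_translation(extrinsics, z_value):
    """
    For each lens, set the z component of the translate vector to a
    middle- or long-distance value while leaving the rotation matrix
    unchanged, as the abstract describes.
    """
    return [(rotation, (tx, ty, z_value))
            for rotation, (tx, ty, _tz) in extrinsics]
```

Running this once with a middle-distance z and once with a long-distance z yields the two sets of configuration parameters used to stitch the overlapping fields of view at different scene depths.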

Multi-camera device

This specification describes: using a first camera of a multi-camera device to obtain first video data of a first region; using a second camera of the multi-camera device to obtain second video data of a second region; generating a multi-camera video output from the first and second video data, using a first video mapping to map the first video data to a first portion of the multi-camera video output and a second video mapping to map the second video data to a second portion of the multi-camera video output; and generating an audio output from obtained audio data, the audio output comprising an audio output having a directional component within the first portion of the video output and an audio output having a directional component within the second portion of the video output. Generating the audio output comprises using a first audio mapping to map audio data having a directional component within the first region to the audio output having a directional component within the first portion of the video output, and using a second audio mapping to map audio data having a directional component within the second region to the audio output having a directional component within the second portion of the video output.
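The audio mappings amount to remapping a sound-source direction inside a camera's captured region onto that region's portion of the output frame. As a hypothetical sketch (the linear remapping and the normalized screen coordinates are assumptions, not taken from the specification):

```python
def map_direction(source_deg, region, portion):
    """
    Linearly remap a sound-source direction (degrees) inside a captured
    region to a position inside that region's portion of the video
    output, here expressed as a normalized horizontal span of the frame.
    """
    r_lo, r_hi = region
    p_lo, p_hi = portion
    frac = (source_deg - r_lo) / (r_hi - r_lo)
    return p_lo + frac * (p_hi - p_lo)
```

With the first camera covering -90° to 0° shown in the left half of the output and the second covering 0° to 90° shown in the right half, a sound from a region is rendered with a directional component inside the matching portion of the frame, keeping the audio spatially consistent with the video.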

SYSTEM AND METHOD FOR REMOTE OBSERVATION IN A NON-NETWORKED PRODUCTION FACILITY

A method includes: via a user interface of a hub device, receiving a first set of user credentials associated with a first operator, the hub device connected to a set of mobile devices; accessing a first operator schedule defining a first scheduled manufacturing operation and a first observer of the first scheduled manufacturing operation, the first observer characterized by a first set of observer credentials; detecting disconnection of a first mobile device in the set of mobile devices from the hub device, the first mobile device associated with a first device ID; in response to detecting disconnection of the first mobile device from the hub device, associating the first device ID with the first operator; and routing a first video feed from the first mobile device to the first observer based on the first device ID and the first set of observer credentials.
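The association and routing logic can be sketched as a small state machine; everything below (class and method names, the schedule layout) is invented for illustration and is not the patented system. A device that disconnects from the hub is associated with the currently logged-in operator, and frames from that device ID are then routed to the observer the schedule names for that operator:

```python
class Hub:
    def __init__(self, schedule):
        self.schedule = schedule        # operator -> scheduled observer credentials
        self.device_operator = {}       # device ID -> operator
        self.current_operator = None

    def login(self, operator_credentials):
        """An operator authenticates at the hub's user interface."""
        self.current_operator = operator_credentials

    def on_disconnect(self, device_id):
        """A device leaving the hub is associated with the logged-in operator."""
        self.device_operator[device_id] = self.current_operator

    def route(self, device_id, frame, deliver):
        """Send a video frame to the observer scheduled for this device's operator."""
        operator = self.device_operator[device_id]
        observer = self.schedule[operator]
        deliver(observer, frame)

inbox = {}
hub = Hub({"op-1": "observer-A"})
hub.login("op-1")
hub.on_disconnect("dev-42")
hub.route("dev-42", "frame-0",
          lambda obs, f: inbox.setdefault(obs, []).append(f))
```

The indirection through the device ID is what lets the hub keep routing correctly even though the mobile device itself never learns which operator carried it.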