Patent classifications
H04N5/2624
ENHANCED VIRTUAL CONFERENCING
A virtual conferencing method includes receiving, at an on-premise mobile device, video data from a front-facing camera of the on-premise mobile device and a rear-facing camera of the on-premise mobile device, and receiving remote video data from a remote device, the remote video data being from a front-facing camera of the remote device. The virtual conferencing method includes simultaneously displaying, on a display of the on-premise mobile device, the video data from the front-facing camera and the rear-facing camera of the on-premise mobile device together with the remote video data from the front-facing camera of the remote device.
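One way the simultaneous-display step could be organized is to partition the local display into one region per feed. The following sketch is illustrative only: the function name, the top/bottom arrangement, and the equal split are assumptions, not details from the abstract.

```python
def layout_feeds(display_w, display_h):
    """Split a display into three regions shown at the same time:
    the remote front-camera feed on top, and the local front- and
    rear-camera feeds side by side underneath.

    Returns (x, y, width, height) rectangles in pixels.
    """
    top_h = display_h // 2
    half_w = display_w // 2
    return {
        "remote_front": (0, 0, display_w, top_h),
        "local_front": (0, top_h, half_w, display_h - top_h),
        "local_rear": (half_w, top_h, display_w - half_w, display_h - top_h),
    }
```

Any layout that keeps all three rectangles visible at once would satisfy the "simultaneously displaying" limitation; the split shown here is just one simple choice.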
Around-view image control device and around-view image processing method therefor
An around-view image processing method comprises: generating a first around-view image signal by image synthesis using image information acquired from a plurality of cameras; generating a second around-view image signal by image correction using image information acquired from the plurality of cameras over a predetermined period of time; and outputting the second around-view image signal, or outputting both the first and second around-view image signals. When both the first and second around-view image signals are output, the method may further comprise selecting one of the two image signals and outputting the selected image signal.
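A minimal sketch of the two-signal idea, with heavy assumptions: frames are modelled as flat lists of samples, "synthesis" as concatenation, and the temporal "correction" as an element-wise mean over a window of recent frames. None of these choices come from the abstract; they only illustrate the first-signal/second-signal distinction.

```python
from collections import deque

class AroundView:
    """First signal: synthesis of the current frames from all cameras.
    Second signal: the first signal corrected over a predetermined
    window of recent frames (here, by element-wise averaging)."""

    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def first_signal(self, camera_frames):
        # Naive synthesis: concatenate the per-camera samples.
        return [sample for frame in camera_frames for sample in frame]

    def second_signal(self, camera_frames):
        first = self.first_signal(camera_frames)
        self.history.append(first)
        # Temporal correction: element-wise mean over the stored window.
        n = len(self.history)
        return [sum(sig[i] for sig in self.history) / n
                for i in range(len(first))]
```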
Image processing method
Provided is an image processing method for easily viewing images obtained by imaging multiple components at a time, the method including image capturing processing of capturing each component holding state relating to multiple suction nozzles mounted on a mounting head as one image, image dividing processing of dividing a region relating to a predetermined component holding state for image data of the multiple component holding states obtained by the image capturing processing, direction conversion processing of converting a direction of the component holding state for divided image data divided by the image dividing processing, and display processing of displaying an image based on the divided image data subjected to the direction conversion processing.
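The dividing and direction-conversion steps can be sketched on a toy row-major image. Splitting into equal-width vertical regions (one per nozzle) and rotating 90 degrees clockwise are assumptions for illustration; the abstract does not fix the region shape or the conversion angle.

```python
def divide_regions(image, n):
    """Image dividing processing: split a row-major image (a list of
    rows) into n equal-width vertical regions, one per suction nozzle."""
    width = len(image[0]) // n
    return [[row[i * width:(i + 1) * width] for row in image]
            for i in range(n)]

def rotate_90(region):
    """Direction conversion processing: rotate one divided region
    90 degrees clockwise so all holding states face the same way."""
    return [list(col) for col in zip(*region[::-1])]
```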
Artificial window system
In general, the present disclosure is directed to an artificial window system that can simulate the user experience of a traditional window in environments where exterior walls are unavailable or other constraints make traditional windows impractical. In an embodiment, an artificial window consistent with the present disclosure includes a window panel, a panel driver, and a camera device. The camera device captures a plurality of image frames representative of an outdoor environment and provides the same to the panel driver. A controller of the panel driver sends the image frames as a video signal to cause the window panel to visually output the same. The window panel may further include light panels, and the controller may extract light characteristics from the captured plurality of image frames to send signals to the light panels to cause the light panels to mimic outdoor lighting conditions.
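The light-characteristic extraction step could be as simple as a per-channel average over a captured frame, which the controller would then forward to the light panels. Modelling a frame as a list of RGB tuples and using the mean as the extracted characteristic are both illustrative assumptions.

```python
def light_characteristics(frame):
    """Extract a coarse lighting signal from one captured frame:
    the mean of each RGB channel over all pixels.

    frame: list of (r, g, b) pixel tuples.
    Returns the (r, g, b) means to drive the light panels with.
    """
    n = len(frame)
    r = sum(p[0] for p in frame) / n
    g = sum(p[1] for p in frame) / n
    b = sum(p[2] for p in frame) / n
    return (r, g, b)
```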
Built-in safety of control station and user interface for teleoperation
A method and system may receive tiled video feed data sourced from one or more remotely situated vehicles. A teleoperator user interface is generated to include a concurrent display of a plurality of distinct video tiles from the tiled video feed data. A respective video tile displayed in the teleoperator user interface may include visual safety cues. A user interface segment that is displaying a respective video tile may be modified in response to teleoperator input.
Dynamic cloud video composition
Implementations for combining at least two video streams received from devices of a plurality of participants of a video conference into a composite video stream are described. A video conference including the video streams received from the devices of the plurality of participants is established. A capability associated with consuming at least two of the video streams is received from at least one of the devices. The at least two video streams are then combined into a composite video stream based on the capability associated with consuming the at least two video streams. The composite video stream is transmitted to the at least one of the devices.
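A sketch of capability-driven composition: the receiving device's capability is modelled as a maximum tile count plus a preferred tile size, and the composite is a horizontal strip of uniform tiles. Both the capability fields and the strip layout are invented for illustration.

```python
def compose_streams(streams, capability):
    """Combine participant streams into one composite description,
    limited by the receiving device's stated capability.

    capability: {"max_streams": int, "tile_size": (w, h)} (assumed schema).
    """
    selected = streams[:capability["max_streams"]]
    w, h = capability["tile_size"]
    # Composite as a horizontal strip of uniform tiles (one possible layout).
    return {
        "width": w * len(selected),
        "height": h,
        "tiles": [{"source": s, "x": i * w, "y": 0}
                  for i, s in enumerate(selected)],
    }
```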
METHODS, SYSTEMS, APPARATUS, AND ARTICLES OF MANUFACTURE FOR DOCUMENT SCANNING
Methods, systems, apparatus, and articles of manufacture for document scanning are disclosed. An example apparatus includes a base structured to position a mobile device, the base including an opening corresponding to a camera of the mobile device, and at least two side panels couplable to and foldable toward the base, the side panels to maintain a first distance between the base and a target document, the side panels slidable along the target document.
VIDEO SPECIAL EFFECT CONFIGURATION FILE GENERATION METHOD AND APPARATUS, AND VIDEO RENDERING METHOD AND APPARATUS
Provided are a video special effect configuration file generation method and apparatus, and a video rendering method and apparatus. The video special effect configuration file generation method includes: obtaining a reference image; receiving a screen splitting processing operation of a user on the reference image; performing screen splitting processing on the reference image based on the screen splitting processing operation to obtain a plurality of sub-screens; and associating, in response to a special effect setting operation of the user on a target sub-screen among the plurality of sub-screens, at least one first special effect corresponding to the special effect setting operation with the target sub-screen, to generate a video special effect configuration file. Because the user can customize both the screen splitting manner and the rendering special effects of the sub-screens, the video special effect configuration file generation method is more flexible and more convenient to use.
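The final step, serializing the user's sub-screens and their associated effects into a configuration file, might look like the following. The JSON schema (keys `sub_screens`, `rect`, `effects`) is invented here for illustration; the abstract does not specify a file format.

```python
import json

def build_effect_config(splits, effect_map):
    """Generate a video special effect configuration file (as JSON text).

    splits: list of (x, y, w, h) sub-screen rectangles produced by the
        user's screen-splitting operation.
    effect_map: {sub_screen_index: [effect names]} from the user's
        special effect setting operations.
    """
    return json.dumps({
        "sub_screens": [
            {"rect": list(rect), "effects": effect_map.get(i, [])}
            for i, rect in enumerate(splits)
        ]
    })
```

A renderer would then read this file back and apply each listed effect only to its associated sub-screen.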
VIDEO PROCESSING DEVICE AND VIDEO PROCESSING METHOD
A video processing device includes a state memory storing a plurality of setting states of each setting related to video processing; a state applying processor configured to apply the setting states to the settings related to the video processing; a history memory configured to set a series of changes in the settings related to the video processing as a change history of one group and to store a plurality of change histories of the one group; a history reproduction processor reproducing the series of changes of the settings; an execution sequence memory configured to store a sequence in which the setting states and the change histories, selected from among the plurality of setting states and change histories, are to be applied; and a sequencer configured to execute the application of the setting states by the state applying processor and the reproduction of the change histories by the history reproduction processor in the sequence stored in the execution sequence memory.
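The distinction between applying a stored state (a wholesale snapshot) and reproducing a change history (replaying individual changes) can be sketched with plain dictionaries. All class, key, and sequence-entry names below are illustrative assumptions.

```python
class VideoSettings:
    """Sketch of the sequencer idea: stored setting states are applied
    as whole snapshots, stored change histories are replayed change by
    change, in the order given by a stored execution sequence."""

    def __init__(self):
        self.settings = {}
        self.states = {}      # state name -> full settings snapshot
        self.histories = {}   # history name -> list of (key, value) changes

    def run_sequence(self, sequence):
        """sequence: list of ("state", name) or ("history", name) entries."""
        for kind, name in sequence:
            if kind == "state":
                self.settings = dict(self.states[name])   # apply snapshot
            else:
                for key, value in self.histories[name]:   # replay changes
                    self.settings[key] = value
        return self.settings
```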
Multiple camera system for wide angle imaging
Systems and techniques are described for large field of view digital imaging. A device's first image sensor captures a first image based on first light redirected from a first path onto a redirected first path by a first light redirection element, and the device's second image sensor captures a second image based on second light redirected from a second path onto a redirected second path by a second light redirection element. A virtual extension of the first path beyond the first light redirection element can intersect with a virtual extension of the second path beyond the second light redirection element. The device can modify the first image and second image using perspective distortion correction, and can generate a combined image by combining the first image and the second image. The combined image can have a larger field of view than the first image and/or the second image.
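The combining step can be sketched on 1-D "scanline" images that share a known overlap: blending the shared region yields a result wider than either input, mirroring the larger-field-of-view claim. Averaging in the overlap is one simple blending choice made here for illustration, not the method stated in the abstract.

```python
def combine_images(left, right, overlap):
    """Combine two corrected images (modelled as 1-D sample rows) that
    share `overlap` samples, averaging in the shared region so the
    result spans a wider field of view than either input alone."""
    blended = [(l + r) / 2
               for l, r in zip(left[-overlap:], right[:overlap])]
    return left[:-overlap] + blended + right[overlap:]
```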