Patent classifications
H04N5/272
Image capturing control apparatus capable of displaying OSD and image capturing control method
An image capturing control apparatus that enables a user to recognize the transparency of an OSD used for recording. A system controller sets a transparency of the OSD superimposed on a captured LV image and sets whether or not to record the captured image, combined with the OSD, as an image file. The OSD is displayed superimposed on the LV image at the transparency of the OSD for display, regardless of any setting concerning recording of the image file. In a case where it is set to record the LV image combined with the OSD, the OSD is displayed, in response to a specific operation, superimposed on the LV image at the transparency of the OSD for recording, regardless of the transparency of the OSD for display.
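The distinction between the display transparency and the recording transparency can be sketched as a simple alpha blend; the names and values below are hypothetical, not from the patent.

```python
import numpy as np

def blend_osd(lv_image, osd_image, transparency):
    """Alpha-blend an OSD layer over a live-view (LV) frame.

    transparency: 0.0 = fully opaque OSD, 1.0 = fully transparent OSD.
    """
    alpha = 1.0 - transparency  # opacity of the OSD layer
    return (alpha * osd_image + (1.0 - alpha) * lv_image).astype(lv_image.dtype)

# Hypothetical settings: a translucent OSD for normal display, and a
# different (here: opaque) transparency when previewing what is recorded.
DISPLAY_TRANSPARENCY = 0.7
RECORD_TRANSPARENCY = 0.0

lv = np.full((2, 2, 3), 100, dtype=np.uint8)   # dummy LV frame
osd = np.full((2, 2, 3), 200, dtype=np.uint8)  # dummy OSD layer

shown = blend_osd(lv, osd, DISPLAY_TRANSPARENCY)   # mostly LV visible
preview = blend_osd(lv, osd, RECORD_TRANSPARENCY)  # OSD as it will be recorded
```

Switching between the two blends on a specific user operation lets the user see exactly how opaque the OSD will be in the recorded file.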
A METHOD FOR CAPTURING AND DISPLAYING A VIDEO STREAM
The present invention relates to a method for capturing and displaying a video stream, comprising: capturing, with one or a plurality of cameras, a plurality of video streams of a scene, said scene comprising at least one person; reconstructing from said plurality of video streams a virtual environment representing the scene; determining the gaze direction of said person using at least one of said plurality of video streams; projecting said virtual environment onto a plane normal to said gaze direction for generating a virtual representation corresponding to what that person is looking at, from the point of view of that person; and displaying said virtual representation on a display.
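The projection step can be sketched as an orthographic projection of reconstructed 3D points onto a plane normal to the gaze vector; the basis construction below is a generic stand-in, not the patent's method.

```python
import numpy as np

def project_to_gaze_plane(points, gaze_dir):
    """Project 3D scene points onto the plane normal to the gaze direction.

    Returns 2D coordinates in an orthonormal basis (u, v) spanning that
    plane -- a rough orthographic stand-in for rendering the scene from
    the observed person's viewpoint.
    """
    n = np.asarray(gaze_dir, dtype=float)
    n /= np.linalg.norm(n)
    # Pick any helper vector not parallel to the gaze to build the basis.
    helper = (np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9
              else np.array([1.0, 0.0, 0.0]))
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    return np.stack([points @ u, points @ v], axis=-1)

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
coords = project_to_gaze_plane(pts, gaze_dir=[1.0, 0.0, 0.0])
```

A full implementation would render the reconstructed virtual environment with a perspective camera placed at the person's eyes; the orthographic version above only illustrates the plane-normal-to-gaze geometry.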
INTEGRATED HUB SYSTEMS CONTROL INTERFACES AND CONNECTIONS
Systems, methods, and instrumentalities are disclosed for switching a control scheme to control a set of system modules and/or modular devices of a surgical hub. A surgical hub may determine a first control scheme that is configured to control a set of system modules and/or modular devices. The surgical hub may receive an input from one of the set of modules or a device located in an OR. The surgical hub may make a determination that at least one of a safety status level or an overload status level of the surgical hub is higher than its threshold value. Based on at least the received input and the determination, the surgical hub may determine a second control scheme to be used to control the set of system modules. The surgical hub may send a control program indicating the second control scheme to one or more system modules and/or modular devices.
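The threshold-driven switch between control schemes can be sketched as below; the scheme names, status levels, and thresholds are hypothetical placeholders, not values from the disclosure.

```python
# Hypothetical thresholds for the hub's safety and overload status levels.
SAFETY_THRESHOLD = 3
OVERLOAD_THRESHOLD = 80  # e.g. percent utilization

def select_control_scheme(current, fallback, safety_level, overload_level):
    """Return the control scheme the hub should push to its modules.

    Switch to the fallback (second) scheme when either the safety status
    level or the overload status level exceeds its threshold.
    """
    if safety_level > SAFETY_THRESHOLD or overload_level > OVERLOAD_THRESHOLD:
        return fallback
    return current

# A received input raised the safety status above its threshold, so the
# hub selects the second scheme to send to the system modules.
scheme = select_control_scheme("scheme_A", "scheme_B",
                               safety_level=5, overload_level=40)
```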
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
A setting reception unit obtains information identifying an object selected by a user from foreground objects as a target to be a part of the background. A backgrounded target determination unit identifies the model ID of the selected object based on the object identifying information obtained and three-dimensional shape data. Based on the three-dimensional shape data, the determination unit identifies a foreground ID corresponding to the identified model ID, in a captured image from an actual camera. The determination unit obtains coordinate information and mask information in foreground data corresponding to the foreground ID identified, generates a correction foreground mask, and sends the mask to a background correction unit in an image processing unit. The background correction unit generates a correction image by masking the captured image using the mask, superimposes the correction image onto the background image, and outputs it as a corrected background image.
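The background correction unit's masking and superimposition step can be sketched as follows; the array shapes and the boolean-mask representation are assumptions for illustration.

```python
import numpy as np

def correct_background(captured, background, mask):
    """Cut the masked object out of the camera frame and paste it over
    the background image, yielding a corrected background image.

    mask: boolean array, True where the backgrounded object appears.
    """
    corrected = background.copy()
    corrected[mask] = captured[mask]
    return corrected

captured = np.full((4, 4, 3), 50, dtype=np.uint8)   # dummy camera frame
background = np.zeros((4, 4, 3), dtype=np.uint8)    # dummy background image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # hypothetical correction foreground mask
result = correct_background(captured, background, mask)
```

In the described pipeline, the mask itself is derived from the selected object's model ID and three-dimensional shape data; here it is simply given.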
VEHICULAR VISION SYSTEM WITH REDUCED MEMORY FOR ADAPTABLE GRAPHIC OVERLAYS
A vehicular vision system includes a camera disposed at a vehicle, an electronic control unit, and a video display disposed in the vehicle. The system, responsive to processing of captured image data, displays video images on the video display. The system retrieves graphic overlay data from memory. The graphic overlay data represents a plurality of graphic overlay portions, with each graphic overlay portion being associated with a respective display portion of a plurality of display portions of the video display. Responsive to occurrence of a driving condition, the system retrieves graphic overlay data, displays the graphic overlay portions, and adjusts a transparency of one or more of the displayed graphic overlay portions such that at least one displayed graphic overlay portion is viewable by the driver and at least one other displayed graphic overlay portion is not viewable by the driver.
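The per-portion transparency adjustment can be sketched as below: one shared overlay asset is split into portions, and each portion is made viewable or not depending on the driving condition. The portion names and the binary alpha values are hypothetical.

```python
# Hypothetical display portions of a single stored graphic overlay.
PORTIONS = ["left_guideline", "center_marker", "right_guideline"]

def portion_alphas(active_portions):
    """Map each overlay portion to an opacity.

    Fully transparent portions (0.0) are not viewable by the driver;
    opaque portions (1.0) are.
    """
    return {p: (1.0 if p in active_portions else 0.0) for p in PORTIONS}

# e.g. during a left-turn condition, only the left guideline is viewable.
alphas = portion_alphas({"left_guideline"})
```

Reusing one overlay asset and varying only per-portion transparency is what lets the system adapt the displayed graphics without storing a separate overlay per driving condition, reducing memory.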
ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF
An electronic device and a control method thereof are provided. The control method of an electronic device includes: projecting, by the electronic device, a first test projection image including at least one marker of a first color while an external device projects a second test projection image including at least one marker of a second color that is different from the first color; obtaining a captured image of projection regions on which the first test projection image and the second test projection image are projected while the projection region on which at least one of the first test projection image or the second test projection image is projected is changed; and identifying an overlapping region between the first test projection image and the second test projection image based on at least one of (i) a marker of a third color or (ii) the marker of the first color and the marker of the second color, in the captured image, the marker of the third color being the marker of the first color and the marker of the second color overlapping each other, and the third color being different from the first color and the second color.
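Detecting the overlap via the third color can be sketched as a color match in the captured image. The specific colors (red and green markers blending additively into yellow) are an illustrative assumption, not the patent's colors.

```python
import numpy as np

# Hypothetical marker colors: red from this projector, green from the
# external one; where the two markers overlap the camera sees yellow.
RED, GREEN, YELLOW = (255, 0, 0), (0, 255, 0), (255, 255, 0)

def overlap_mask(captured):
    """Boolean mask of pixels matching the third (overlap) color."""
    return np.all(captured == YELLOW, axis=-1)

frame = np.zeros((2, 3, 3), dtype=np.uint8)
frame[:, 0] = RED      # first test projection image only
frame[:, 1] = YELLOW   # overlapping region
frame[:, 2] = GREEN    # second test projection image only
mask = overlap_mask(frame)
```

A real implementation would match colors with a tolerance (camera noise and projector gamma make exact equality unrealistic), but the principle is the same: the third color uniquely labels the overlapping region.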
AUGMENTED REALITY BASED IMAGE PROTECTION IN ENTERPRISE SETTINGS
Disclosed are various examples for augmented reality based image protection in enterprise settings. In one example, a managed camera application can generate an augmented reality based camera user interface using image data from a field of view of a camera. An indoor position can be identified using global positioning system (GPS) and indoor positioning data. A sector of the field of view can be identified as a protected image area that depicts a protected or confidential object, and the user interface can be updated to include an AR user interface element that is generated relative to the protected image area that is identified.
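Generating an AR element relative to the protected image area can be sketched as below; the rectangle representation, the `blur_overlay` element type, and the frame dimensions are all hypothetical.

```python
# Hypothetical sketch: mark a sector of the camera field of view as a
# protected image area and emit an AR overlay element covering it.
def protect_sector(frame_width, frame_height, sector):
    """sector: (x, y, w, h) of the region depicting a confidential object.

    Returns an AR overlay element (here just a dict) positioned over it,
    clamped so the overlay never spills outside the image.
    """
    x, y, w, h = sector
    x, y = max(0, x), max(0, y)
    w = min(w, frame_width - x)
    h = min(h, frame_height - y)
    return {"type": "blur_overlay", "rect": (x, y, w, h)}

# A confidential object detected near the right edge of a 1920x1080 frame.
element = protect_sector(1920, 1080, sector=(1800, 100, 300, 200))
```

In the described system, which sector is protected would be driven by the identified indoor position; here the sector is simply given.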