Patent classifications
G06F3/0425
Image projection device
Provided is an image projection device that can correctly discern the content of a touch operation when a user performs various kinds of touch operations on an image projected on a projection screen. An imaging unit is adjusted to focus on the projection screen. An image data extracting unit extracts, from the image data obtained by the imaging unit, image data in which a finger or the like is present and in focus. An operation determining unit determines the content of the operation performed with the finger or the like on the basis of the image data extracted by the image data extracting unit. An input control unit recognizes the content of an input instruction corresponding to that operation, on the basis of data describing the content of the operation, position data of the finger or the like, and reference data specifying the position and size of the image projected on the projection screen, and controls a projection unit in accordance with the recognized content of the input instruction.
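A minimal sketch of the focus-gating idea in this abstract, assuming an OpenCV/NumPy pipeline (the abstract names no library, and focus_measure, fingertip_in_focus, and the threshold are illustrative): because the camera is focused on the projection screen, a finger is treated as touching only when its image region is sharp.

    import cv2
    import numpy as np

    def focus_measure(gray_roi: np.ndarray) -> float:
        # Variance of the Laplacian: high for sharp (in-focus) regions,
        # low for blurred ones.
        return cv2.Laplacian(gray_roi, cv2.CV_64F).var()

    def fingertip_in_focus(frame_bgr: np.ndarray, roi, threshold: float = 120.0) -> bool:
        x, y, w, h = roi  # candidate finger region from a prior detector
        gray = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        # Keep only frames where the finger lies in the focal plane, i.e.
        # on the projection screen; these feed the operation-determining step.
        return focus_measure(gray) > threshold

The extracted finger positions would then be mapped through the reference data (position and size of the projected image) to decide which on-screen element the operation addresses.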
Interactive and shared surfaces
The interactive and shared surface technique described herein employs hardware that can project on any surface, capture color video of that surface, and get depth information of and above the surface while preventing visual feedback (also known as video feedback, video echo, or visual echo). The technique provides N-way sharing of a surface using video compositing. It also provides for automatic calibration of hardware components, including calibration of any projector, RGB camera, depth camera and any microphones employed by the technique. The technique provides object manipulation with physical, visual, audio, and hover gestures and interaction between digital objects displayed on the surface and physical objects placed on or above the surface. It can capture and scan the surface in a manner that captures or scans exactly what the user sees, which includes both local and remote objects, drawings, annotations, hands, and so forth.
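One common way to realize the automatic calibration mentioned above is a projected-marker homography fit; the sketch below assumes that approach and OpenCV (the abstract does not specify the method), with the marker coordinates purely illustrative:

    import cv2
    import numpy as np

    # Corner positions of a calibration pattern in projector coordinates...
    projected_pts = np.array([[100, 100], [900, 100], [900, 700], [100, 700]], np.float32)
    # ...and where the RGB camera observed them on the surface.
    observed_pts = np.array([[132, 118], [871, 109], [880, 655], [140, 668]], np.float32)

    # Homography mapping projector space to camera space; its inverse warps
    # camera captures back into projector space for N-way video compositing.
    H, _ = cv2.findHomography(projected_pts, observed_pts)

With such a mapping, a remote participant's strokes can be warped into the local projector's frame so local and remote content stay aligned on the shared surface.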
Surgery system, contactless control panel for surgery system, and control method
A surgery system includes a contactless control panel, an infrared camera, a computer and a display device. The contactless control panel includes control areas which are arranged in a predetermined pattern and are coated with infrared reflective material to reflect infrared radiation. The infrared camera captures an infrared image of the control areas. The computer performs image recognition on the infrared image, determines, based on the predetermined pattern stored in advance and a result of the image recognition, which one of the control areas is masked, and generates a device control signal based on a function corresponding to the one of the control areas that is determined to be masked. The display device displays images based on the device control signal.
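A sketch of the masked-area determination, assuming a grayscale IR image and illustrative area names and thresholds (the pattern below is a stand-in for the one stored in advance): the reflective coating makes an unobstructed control area bright, so a hand covering it shows up as a sharp drop in bright pixels.

    import numpy as np

    CONTROL_AREAS = {                           # predetermined pattern, stored in advance
        "lamp_brightness": (50, 40, 120, 80),   # x, y, w, h in the IR image
        "table_up":        (200, 40, 120, 80),
        "table_down":      (350, 40, 120, 80),
    }

    def find_masked_area(ir_image: np.ndarray, bright_thresh: int = 128,
                         min_bright_fraction: float = 0.4):
        for name, (x, y, w, h) in CONTROL_AREAS.items():
            roi = ir_image[y:y+h, x:x+w]
            # Fraction of pixels above the brightness threshold; a covered
            # (masked) area loses most of its reflected IR.
            if np.mean(roi > bright_thresh) < min_bright_fraction:
                return name   # drives the device control signal for this function
        return None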
MEETING INTERACTION SYSTEM
Described is an interaction system comprising an imaging device, such as a camera system, configured to image one or more users, wherein the interaction system is configured to determine one or more properties of each user. For example, the interaction system may be used to determine whether the hand of each user is raised or an orientation of each user's face.
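For the raised-hand property, one straightforward rule (the pose estimator feeding it is unspecified here, so the keypoint dictionary is an assumed interface) is to test whether a wrist keypoint lies above the corresponding shoulder:

    def hand_raised(keypoints: dict) -> bool:
        # keypoints: name -> (x, y) in image coordinates; y grows downward.
        for side in ("left", "right"):
            wrist = keypoints.get(f"{side}_wrist")
            shoulder = keypoints.get(f"{side}_shoulder")
            if wrist and shoulder and wrist[1] < shoulder[1]:
                return True
        return False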
DISPLAY WITH INTEGRATED CAMERA
The present invention provides an interactive display screen integrated with an image capture device optimized to capture the user, the user's correct gaze, and information inputted on or through the interactive display screen. The present invention requires no extraneous video production equipment or technical expertise to operate, while providing a compact and easily transportable system. The display screen is transparent, such as an organic light emitting diode (OLED) display, with an optional touchscreen. The display screen may display digital photos or other multimedia objects that a user can annotate or otherwise manipulate. The device captures the displayed multimedia information and combines it with a video stream captured from the image capture device. The present invention also implements a display screen with a display cycle that is offset by 180 degrees from the capture cycle of the image capture device.
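The 180-degree offset can be pictured as alternating half-cycles in which the panel emits and the behind-screen camera exposes; the loop below is a timing sketch only (display, camera, and the 120 Hz rate are assumptions, not details from the abstract):

    import time

    PERIOD = 1 / 120                      # one full display/capture cycle

    def run_cycles(display, camera):
        while True:
            display.show_frame()          # first half: panel shows content
            time.sleep(PERIOD / 2)
            display.blank()               # second half: panel goes dark/transparent...
            camera.trigger_exposure()     # ...so the camera sees the user, not the image
            time.sleep(PERIOD / 2)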
Method and system for augmented reality content production based on attribute information application
A method for augmented reality content production based on attribute information application according to an embodiment of the present disclosure, performed by a production application executed by at least one or more processors of a computing device, comprises: providing a virtual object authoring space, which is a virtual space for authoring a virtual object and includes at least one or more reference objects; providing a virtual object authoring interface for the virtual object authoring space; generating augmentation relationship attribute information based on a virtual object generated through the provided virtual object authoring interface and at least one reference object of the virtual object authoring space; storing the virtual object together with the generated augmentation relationship attribute information; and displaying the stored virtual object on a reference object in a space other than the virtual object authoring space based on the augmentation relationship attribute information.
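A data-shape sketch of the augmentation relationship attribute information (all field names are assumptions; the disclosure does not fix a schema): the stored virtual object is anchored to a class of reference object rather than to absolute coordinates, which is what lets it reappear on a matching reference object in a different space.

    from dataclasses import dataclass

    @dataclass
    class AugmentationRelation:
        reference_type: str    # class of anchor object, e.g. "table"
        offset: tuple          # position relative to the reference object
        scale: float

    @dataclass
    class VirtualObject:
        mesh_uri: str
        relation: AugmentationRelation

    def place_in_new_space(obj: VirtualObject, detected_objects):
        # Re-anchor the object on any matching reference object found
        # in the other space.
        for ref in detected_objects:
            if ref["type"] == obj.relation.reference_type:
                return tuple(p + o for p, o in zip(ref["position"], obj.relation.offset))
        return None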
Electronic apparatus operating method and electronic apparatus
A method for operating an electronic apparatus includes: displaying, by a display apparatus, a first image in a first area of a display surface based on first image data transmitted from a first terminal apparatus to the electronic apparatus; and changing, by the first terminal apparatus, the amount of first image data transmitted per unit time period to the electronic apparatus in accordance with a change instruction to change the size of the first area.
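The rate change can be as simple as scaling the transmitted bitrate with the area's pixel count; the constants below are illustrative, not from the abstract:

    BASE_BITRATE = 8_000_000   # bits per second when the first area fills the display

    def bitrate_for_area(area_px: int, full_area_px: int) -> int:
        # Shrinking the first area proportionally reduces the image data the
        # first terminal apparatus sends per unit time, with a floor to keep
        # the stream usable.
        return max(500_000, int(BASE_BITRATE * area_px / full_area_px))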
INTERACTIVE DISPLAY WITH INTEGRATED CAMERA FOR CAPTURING AUDIO AND VISUAL INFORMATION
The present invention provides an interactive display screen integrated with a video camera optimized to capture the user, the user's correct gaze, and information inputted on or through the interactive display screen. A presenter writes or draws information on the display screen while facing an audience. The display screen displays digital photos or other multimedia objects that a user can annotate or otherwise manipulate. Meanwhile, the device captures the displayed multimedia information and combines it with a video stream captured from the video camera. No extraneous video production equipment or technical expertise is required to operate the device, which provides a compact and easily transportable system.
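A compositing sketch under an assumed alpha-blend approach (the abstract does not fix the method): the screen's annotation/multimedia framebuffer is overlaid on the camera stream so the audience sees the presenter and the inked content in one feed.

    import numpy as np

    def composite(camera_frame: np.ndarray, screen_frame: np.ndarray,
                  screen_alpha: np.ndarray) -> np.ndarray:
        # screen_alpha in [0, 1], near 1 where ink or objects are drawn.
        a = screen_alpha[..., None]
        return (a * screen_frame + (1 - a) * camera_frame).astype(np.uint8)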
Virtualization of tangible interface objects
An example system includes a computing device located proximate to a physical activity surface, a video capture device, and a detector. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector processes the video stream to detect the one or more interface objects included in the activity scene, to identify the one or more interface objects that are detectable, to generate one or more events describing the one or more interface objects, and to provide the one or more events to an activity application configured to render virtual information on the computing device based on the one or more events.
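An event-flow sketch (class, field, and method names are assumptions; the abstract describes the roles, not an API): the detector turns physical interface objects found in the video stream into events, and the activity application renders virtual information from them.

    from dataclasses import dataclass

    @dataclass
    class ObjectEvent:
        object_id: str
        kind: str              # e.g. "appeared", "moved", "removed"
        position: tuple

    def detect_and_publish(frame, detector, activity_app):
        # Each physical object in the activity scene becomes an event the
        # activity application consumes to update its virtual counterpart.
        for obj in detector.detect(frame):
            activity_app.handle(ObjectEvent(obj.id, obj.state, obj.position))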