Patent classification: G06F3/002
WIDE BASELINE STEREO FOR LOW-LATENCY RENDERING
A virtual image generation system and method of operating same are provided. A left synthetic image and a right synthetic image of a three-dimensional scene are rendered respectively from a first left focal center and a first right focal center relative to a first viewpoint. The first left and first right focal centers are spaced from each other a distance greater than the inter-ocular distance of an end user. The left synthetic image and the right synthetic image are warped respectively to a second left focal center and a second right focal center relative to a second viewpoint different from the first viewpoint. The second left and right focal centers are spaced from each other a distance equal to the inter-ocular distance of the end user. A frame is constructed from the left and right warped synthetic images, and displayed to the end user.
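Under a standard pinhole-camera model, disparity is proportional to the baseline between the two focal centers, so warping a wide-baseline rendering down to the user's inter-ocular distance reduces to rescaling disparity by the baseline ratio. The sketch below illustrates this relationship; the function names and numeric values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: warping wide-baseline stereo disparities to the
# user's inter-ocular distance (IPD), assuming a pinhole camera model.

def disparity(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Pinhole-camera disparity in pixels: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

def warp_disparity(d_wide: float, wide_baseline_m: float, ipd_m: float) -> float:
    """Rescale a disparity rendered at a wide baseline to the narrower IPD.

    Disparity is proportional to baseline, so the warp is a simple
    multiplication by the baseline ratio."""
    return d_wide * (ipd_m / wide_baseline_m)

f, wide_b, ipd, z = 800.0, 0.20, 0.064, 2.0
d_wide = disparity(f, wide_b, z)               # disparity at the wide baseline
d_user = warp_disparity(d_wide, wide_b, ipd)   # warped to the user's IPD
# The warped disparity matches a direct rendering at the IPD baseline.
assert abs(d_user - disparity(f, ipd, z)) < 1e-9
```

Rendering at the wider baseline captures more of the scene around occlusion boundaries, which is what makes the later warp to the narrower viewpoint feasible with low latency.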
SYSTEMS AND METHODS FOR PROVIDING AUGMENTED REALITY-LIKE INTERFACE FOR THE MANAGEMENT AND MAINTENANCE OF BUILDING SYSTEMS
The present invention relates to systems and methods for improved building systems management and maintenance. The present invention provides a system for providing an augmented reality-like interface for the management and maintenance of building systems, specifically the mechanical, electrical, and plumbing (MEP) systems within a building, including the heating, ventilation, and air-conditioning (HVAC) systems.
Systems and method of interacting with a virtual object
The technology disclosed relates to a method of interacting with a virtual object. In particular, it relates to referencing a virtual object in an augmented reality space, identifying a physical location of a device in at least one image of the augmented reality space, generating for display a control coincident with a surface of the device, sensing interactions between at least one control object and the control coincident with the surface of the device, and generating data signaling manipulations of the control coincident with the surface of the device.
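The core interaction loop above (a control anchored to a device surface, plus sensing when a control object touches it) can be illustrated with simple 2-D hit testing. The class, coordinate layout, and event format below are illustrative assumptions only.

```python
# Hypothetical sketch: a virtual control generated coincident with a
# physical device's surface in AR space, with point-in-rectangle hit
# testing for a control object such as a fingertip.

class SurfaceControl:
    def __init__(self, origin, width, height):
        self.origin = origin      # (x, y) of the control on the device surface
        self.width = width
        self.height = height
        self.events = []          # manipulations signaled by the control

    def contains(self, point) -> bool:
        x0, y0 = self.origin
        x, y = point
        return x0 <= x <= x0 + self.width and y0 <= y <= y0 + self.height

    def sense(self, fingertip) -> bool:
        """Record a manipulation when the control object touches the control."""
        if self.contains(fingertip):
            self.events.append(("tap", fingertip))
            return True
        return False

control = SurfaceControl(origin=(1.0, 2.0), width=0.5, height=0.3)
assert control.sense((1.2, 2.1)) is True    # fingertip inside the control
assert control.sense((0.0, 0.0)) is False   # fingertip elsewhere in the scene
```

A real system would first locate the device in the camera image and transform fingertip positions into the device's surface coordinates before this kind of test.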
METHODS AND APPARATUS FOR ESTABLISHING SHARED MEMORY SPACES FOR DATA ACCESS AND DISTRIBUTION
In some implementations, methods and apparatuses herein relate to generating shared memory spaces that can share files or applications between users and between user devices. For example, a processor can allocate a first portion of a memory of a client device to serve as a shared memory space for at least one dynamic application object, and instantiate a user interface on a display associated with the client device. The user interface can be based on the content of the shared memory space and can represent the at least one dynamic application object. A processor can define access rights for a user of a second electronic device for receiving a copy of the instantiated user interface. The processor can define user rights for the user for use of the at least one dynamic application object with the second electronic device. The at least one dynamic application object can be a data file or a live user experience.
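The shared-space-with-access-rights flow can be sketched as a small data structure: a space holds dynamic application objects, grants per-user rights, and hands out a copy of its content only to users with the required right. All class and permission names are illustrative assumptions.

```python
# Hypothetical sketch of a shared memory space with per-user access rights.

class SharedSpace:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes  # portion of client-device memory allocated
        self.objects = {}               # dynamic application objects, by name
        self.rights = {}                # user id -> set of granted permissions

    def add_object(self, name: str, payload) -> None:
        self.objects[name] = payload

    def grant(self, user_id: str, *permissions: str) -> None:
        self.rights.setdefault(user_id, set()).update(permissions)

    def copy_ui_for(self, user_id: str) -> dict:
        """Return a copy of the space's content for a second device,
        but only if the user was granted the 'view' right."""
        if "view" not in self.rights.get(user_id, set()):
            raise PermissionError(user_id)
        return dict(self.objects)   # a copy, not the live space itself

space = SharedSpace(capacity_bytes=1 << 20)
space.add_object("report.pdf", b"...")
space.grant("user-b", "view", "use")
ui = space.copy_ui_for("user-b")    # succeeds: user-b holds the 'view' right
```

Separating the "receive a copy of the interface" right from the "use the object" right mirrors the two kinds of rights the abstract distinguishes.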
DISPLAY APPARATUS
Included are a display unit (2) configured to display an image, a recording unit (3) configured to record three-dimensional data of an object to be displayed on the display unit (2), and an arithmetic unit (4) configured to calculate a shape of a shade of the object using the three-dimensional data and a first irradiation direction in which the luminance of light irradiated on the display unit (2) is highest, calculate a density of the shade using a first luminance of light from the first irradiation direction, and calculate a correction coefficient for the density of the shade using a second luminance of light from a direction other than the first irradiation direction, the second luminance being lower than the first luminance. The display unit (2) displays an image of the object shaded based on the result calculated by the arithmetic unit (4).
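The relationship between the two luminances and the displayed shade can be sketched with a simple linear model: the brightest directional light sets the shade density, and ambient light from other directions lightens it via the correction coefficient. The linear formulas and constants below are assumptions for illustration; the patent does not specify them.

```python
# Hypothetical sketch of the shading computation described above.

def shade_density(first_luminance: float, k: float = 0.8) -> float:
    """Stronger directional light casts a darker shade (clamped to [0, 1])."""
    return min(1.0, k * first_luminance)

def correction_coefficient(second_luminance: float, first_luminance: float) -> float:
    """Ambient fill light from other directions reduces the shade density;
    1.0 means no correction, 0.0 means the shade is fully washed out."""
    if first_luminance <= 0.0:
        return 1.0
    return max(0.0, 1.0 - second_luminance / first_luminance)

def corrected_density(first_luminance: float, second_luminance: float) -> float:
    return shade_density(first_luminance) * correction_coefficient(
        second_luminance, first_luminance)
```

With a directional luminance of 1.0 and an ambient luminance of 0.5, the base density 0.8 is halved to 0.4, matching the intuition that fill light softens shadows.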
Image Identification Based Interactive Control System and Method for Smart Television
An image recognition based interactive control system and method for a smart television. The system comprises: an image acquisition module for acquiring a card image; a gesture recognition module for recognizing a gesture of a user holding a card and outputting a gesture recognition result, wherein the gesture recognition result is channel switching, program selecting or content searching; a card recognition module for recognizing the content of the card image and outputting a card recognition result; and an interactive control module for performing a relevant interactive operation according to the gesture recognition result and the card recognition result.
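The interactive control module's job, combining a gesture result with a card result into one operation, can be sketched as a small dispatch function. The gesture labels match the three recognition outcomes named in the abstract; the command strings are illustrative assumptions.

```python
# Hypothetical sketch of the interactive control module: it fuses the
# gesture recognition result and the card recognition result into a
# single TV control command.

def interact(gesture: str, card_content: str) -> str:
    """Map (gesture result, card result) to an interactive operation."""
    if gesture == "channel_switch":
        return f"switch_to:{card_content}"
    if gesture == "program_select":
        return f"play:{card_content}"
    if gesture == "content_search":
        return f"search:{card_content}"
    return "ignore"   # unrecognized gestures trigger no operation

assert interact("channel_switch", "CCTV-5") == "switch_to:CCTV-5"
assert interact("content_search", "cartoons") == "search:cartoons"
```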
SYSTEM AND METHOD FOR RETRIEVING INFORMATION FROM AN INFORMATION CARRIER BY MEANS OF A CAPACITIVE TOUCH SCREEN
The present invention relates to a method comprising providing one or more information carrier(s) with a dielectric and/or conductive pattern and a detection device having a capacitive touch screen and inducing an interaction between the information carrier and the touch screen, wherein the interaction is based on a difference in the dielectric coefficient and/or the conductivity of the pattern and generates a touch signal and wherein the interaction is induced by relative motion between the information carrier and the touch screen. The invention further relates to a system comprising an information carrier comprising a dielectric and/or conductive pattern which encodes information and a detection device having a touch screen; the detection device is able to decode the information upon interaction between the information carrier and the touch screen, wherein the interaction is caused by a difference in the dielectric coefficient and/or the conductivity of the pattern.
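The encode/decode round trip at the heart of this scheme, a conductive pattern on the carrier producing touch signals at known positions, can be sketched with a fixed grid of pad positions where a pad encodes a 1 bit. The grid layout and function names are illustrative assumptions; a real carrier would also need registration marks and motion handling.

```python
# Hypothetical sketch: bits encoded as conductive pads at fixed grid
# positions on an information carrier, decoded from the touch points a
# capacitive screen reports when the carrier interacts with it.

GRID = [(x, 0) for x in range(8)]   # 8 possible pad positions in one row

def encode(bits):
    """Positions that carry a conductive pad (bit = 1)."""
    return {GRID[i] for i, b in enumerate(bits) if b}

def decode(touch_points):
    """Reconstruct the bit string from the detected touch positions."""
    return [1 if pos in touch_points else 0 for pos in GRID]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(bits)) == bits   # lossless round trip
```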
VISUAL SEARCH TO LAUNCH APPLICATION
Systems and methods herein access a visual identifier and perform a visual search of it; in response to performing the visual search, they cause presentation of an application menu within a graphical user interface of a computing device, receive a selection of a first user interface element within the application menu, and, in response to receiving the selection, run the corresponding computer application.
Wearable terminal and control method
A wearable terminal includes: a body having a display that performs display, a sensor that detects a first angle of rotation by which the display has been rotated about a first axis, and a controller that controls the display according to the first angle of rotation; and a band that is connected to the body and extends around the forearm in an arcuate shape. The first axis is perpendicular to a second axis and is parallel to a direction in which the forearm extends. When the first angle of rotation is within a first angle range, the controller causes a first display image to be displayed, and when the first angle of rotation changes from the first angle range to a second angle range, the controller causes a part of the first display image and a part of a second display image to be displayed simultaneously.
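The controller's behavior is a mapping from the measured rotation angle to what the display shows. The sketch below picks arbitrary range boundaries and image names to make the two cases concrete; none of these values come from the patent.

```python
# Hypothetical sketch of the wearable terminal's controller logic:
# angle ranges about the forearm axis select the displayed content.

FIRST_RANGE = (0.0, 30.0)    # degrees; boundaries are assumed for illustration
SECOND_RANGE = (30.0, 60.0)

def display_for(angle_deg: float):
    """Select the displayed image(s) for a given first angle of rotation."""
    lo1, hi1 = FIRST_RANGE
    lo2, hi2 = SECOND_RANGE
    if lo1 <= angle_deg < hi1:
        return ["first_image"]                            # first image alone
    if lo2 <= angle_deg < hi2:
        return ["first_image_part", "second_image_part"]  # both, partially
    return []                                             # outside both ranges

assert display_for(10.0) == ["first_image"]
assert display_for(45.0) == ["first_image_part", "second_image_part"]
```

Showing parts of both images during the transition gives the user continuous visual feedback as the wrist rotates between the two ranges.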
Application execution based on object recognition
Disclosed are various embodiments for initiating execution of an application based at least in part on an identification of an object in an image, video, or other graphical representation of the object. A graphical representation of an object is obtained using an image capture device. The object in the graphical representation is identified along with a list of applications associated with the identified object. A user interface is then rendered that allows the user to execute or install one or more of the applications associated with the identified object.
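The identify-then-list flow can be sketched as a lookup from a recognized object label to its associated applications. The registry contents and the `recognize()` stub below are illustrative assumptions standing in for a real image classifier and app catalog.

```python
# Hypothetical sketch: map a recognized object to the list of
# applications a user interface would offer to execute or install.

APP_REGISTRY = {
    "coffee_maker": ["BrewTimer", "BeanShop"],
    "thermostat": ["ClimateControl"],
}

def recognize(image_id: str) -> str:
    """Stand-in for an image classifier: derive an object label
    from a tagged image identifier (label before the colon)."""
    return image_id.split(":", 1)[0]

def apps_for_image(image_id: str):
    """Identify the object, then look up its associated applications."""
    obj = recognize(image_id)
    return APP_REGISTRY.get(obj, [])

assert apps_for_image("coffee_maker:frame42") == ["BrewTimer", "BeanShop"]
```

An unrecognized object simply yields an empty list, so the user interface can fall back to showing no launch options.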