Patent classifications
H04N13/293
System and method for combining text with three dimensional content
A system and method for combining and/or displaying text with three-dimensional content. The system and method insert text at the same level as the highest depth value in the 3D content. One example of 3D content is a two-dimensional image and an associated depth map; in this case, the depth value of the inserted text is adjusted to match the largest depth value of the given depth map. Another example of 3D content is a plurality of two-dimensional images and associated depth maps; in this case, the depth value of the inserted text is continuously adjusted to match the largest depth value of the current depth map. A further example of 3D content is stereoscopic content having a right-eye image and a left-eye image; in this case, the text in one of the left-eye and right-eye images is shifted to match the largest depth value in the stereoscopic image. Yet another example of 3D content is stereoscopic content having a plurality of right-eye and left-eye images; in this case, the text in the left-eye or right-eye images is continuously shifted to match the largest depth value in the stereoscopic images. As a result, the system and method of the present disclosure produce text combined with 3D content in which the text does not obstruct the 3D effects and does not cause visual fatigue for the viewer.
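The depth-matching rule for the 2D-image-plus-depth-map examples can be sketched as follows. This is a minimal illustration of the stated rule only, not the patented implementation; the depth-map representation (a 2D list of integer depth values, larger meaning nearer) is an assumption.

```python
def max_depth(depth_map):
    """Largest depth value in a depth map (here a 2D list of ints,
    larger value = nearer to the viewer -- an assumed convention)."""
    return max(max(row) for row in depth_map)

def track_text_depth(depth_maps):
    """Continuously adjust the inserted text's depth to the largest
    depth value of each frame's depth map, per the multi-frame example."""
    return [max_depth(d) for d in depth_maps]

# Two frames' depth maps; the text depth follows each frame's maximum.
frames = [[[0, 50], [120, 30]], [[10, 200], [90, 40]]]
print(track_text_depth(frames))  # [120, 200]
```

Placing the text at the frame's maximum depth keeps it in front of the nearest scene element, which is why it does not occlude (or get occluded by) the 3D effect.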
Managing multi-modal rendering of application content
A device implementing a system for managing multi-modal rendering of application content includes at least one processor configured to receive content, provided by an application running on a device, for displaying in a three-dimensional display mode. The at least one processor is further configured to determine that the content corresponds to two-dimensional content. The at least one processor is further configured to identify a portion of the two-dimensional content for enhancement by a three-dimensional renderer. The at least one processor is further configured to enhance, in response to the determining, the portion of the two-dimensional content by the three-dimensional renderer. The at least one processor is further configured to provide for display of the enhanced portion of the two-dimensional content on a display of the device in the three-dimensional display mode.
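The identify-then-enhance flow might be sketched as below. The content-type tags and the heuristic of selecting image-like portions are purely illustrative assumptions; the abstract does not specify how portions are chosen.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    kind: str      # e.g. "text", "image" -- illustrative tags, not from the patent
    enhanced: bool = False

def select_for_3d_enhancement(items):
    """Identify portions of 2D content a 3D renderer could enhance.
    Assumed heuristic: any image-like item is a candidate."""
    return [i for i in items if i.kind == "image"]

def enhance(items):
    """Stand-in for the 3D renderer: mark selected portions enhanced."""
    for i in select_for_3d_enhancement(items):
        i.enhanced = True
    return items

page = [ContentItem("text"), ContentItem("image")]
print([(i.kind, i.enhanced) for i in enhance(page)])  # [('text', False), ('image', True)]
```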
PROCESSING OF SIGNALS USING A RECURRENT STATE ESTIMATOR
In one implementation, a method includes receiving pixel events output by an event sensor that correspond to a feature disposed within a field of view of the event sensor. Each respective pixel event is generated in response to a specific pixel within a pixel array of the event sensor detecting a change in light intensity that exceeds a comparator threshold. A characteristic of the feature is determined at a first time based on the pixel events and a previous characteristic of the feature at a second time that precedes the first time. Movement of the feature relative to the event sensor is tracked over time based on the characteristic and the previous characteristic.
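The recurrent update — combining new pixel events with the feature's previous characteristic — can be illustrated with a simple exponential moving average over event coordinates. The EMA form and the centroid characteristic are stand-ins for the unspecified estimator, not the patented method.

```python
def update_feature_state(prev_centroid, events, alpha=0.3):
    """Recurrent update: blend the previous feature characteristic
    (its centroid at time t-1) with a summary of the new pixel events
    (x, y) emitted since then. alpha is an assumed smoothing factor."""
    if not events:
        return prev_centroid  # no events: carry the previous state forward
    mx = sum(x for x, _ in events) / len(events)
    my = sum(y for _, y in events) / len(events)
    px, py = prev_centroid
    return ((1 - alpha) * px + alpha * mx, (1 - alpha) * py + alpha * my)

# Track a feature drifting right across the sensor's field of view.
state = (10.0, 10.0)
for batch in [[(12, 10), (14, 10)], [(16, 12)]]:
    state = update_feature_state(state, batch)
print(state)
```

Because each update depends on the previous state, movement of the feature relative to the sensor falls out of the sequence of states over time.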
DEVICE AND METHOD FOR TRANSMITTING DATA OF MULTIPLE APPLICATIONS WITH LOW LATENCY
Various embodiments of the present disclosure provide an electronic device comprising: a communication module comprising communication circuitry, a memory, and a processor operatively connected to the communication module and the memory, wherein the processor is configured to: control the electronic device to establish a connection to a wearable display device through the communication module, receive gaze information from the wearable display device, determine a first application and a second application corresponding to the gaze information to be displayed on a screen, identify profiles of the determined first application and second application, and combine graphic data corresponding to the first application and graphic data corresponding to the second application and transmit the combined graphic data to the wearable display device, or transmit each of graphic data corresponding to the first application and graphic data corresponding to the second application to the wearable display device, based on the identified profiles.
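The combine-or-separate decision can be sketched as a small planning step. The profile dictionary and its `combinable` flag are illustrative assumptions; the abstract does not name the profile fields that drive the choice.

```python
def plan_transmission(profiles, apps):
    """Decide whether to merge the apps' graphic data into one transfer
    to the wearable display device. 'combinable' is an assumed profile
    flag, not from the patent."""
    if all(profiles[a].get("combinable", False) for a in apps):
        return [tuple(apps)]       # one combined, lower-latency transfer
    return [(a,) for a in apps]    # otherwise transmit each separately

profiles = {"maps": {"combinable": True}, "chat": {"combinable": True}}
print(plan_transmission(profiles, ["maps", "chat"]))  # [('maps', 'chat')]

profiles["chat"]["combinable"] = False
print(plan_transmission(profiles, ["maps", "chat"]))  # [('maps',), ('chat',)]
```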
Video reconstruction method, system, device, and computer readable storage medium
A method, a system, a device, and a computer readable storage medium for video reconstruction are disclosed. The method includes: obtaining image combinations of multi-angle free-perspective video frames, parameter data corresponding to the image combinations of the video frames, and position information of a virtual viewpoint based on a user interaction; selecting texture images and depth maps of corresponding groups in the image combinations of the video frames at the time moment of the user interaction according to a preset rule and based on the position information of the virtual viewpoint and the parameter data corresponding to the image combinations of the video frames; and combining and rendering the texture images and the depth maps of the corresponding groups based on the position information of the virtual viewpoint and the parameter data corresponding to the depth maps and the texture images of the corresponding groups to obtain a reconstructed image.
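One plausible "preset rule" for the group-selection step is to pick the capture groups whose camera positions lie closest to the virtual viewpoint. This rule, and the 2D camera-position parameterization, are assumptions for illustration; the abstract leaves the rule unspecified.

```python
def select_groups(camera_positions, virtual_viewpoint, k=2):
    """Assumed preset rule: choose the k capture groups whose camera
    positions are nearest the virtual viewpoint; their texture images
    and depth maps are then used for rendering."""
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, virtual_viewpoint))
    ranked = sorted(range(len(camera_positions)),
                    key=lambda i: dist2(camera_positions[i]))
    return ranked[:k]

# Four camera groups along a line; a viewpoint between groups 1 and 2.
cams = [(0, 0), (5, 0), (10, 0), (15, 0)]
print(select_groups(cams, (6, 1)))  # [1, 2]
```

Restricting rendering to the nearest groups keeps the per-frame workload bounded regardless of how many capture angles exist.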
SYSTEMS AND METHODS FOR DETECTING POSTURES OF A USER OF AN IHS (INFORMATION HANDLING SYSTEM)
Methods and systems are provided for determining a posture of a user of an Information Handling System (IHS). One or more cameras of the IHS are utilized to generate a two-dimensional image of the user as they operate the IHS. Landmarks that correspond to physical features of the user are identified through processing of the two-dimensional image generated using the cameras. A time-of-flight sensor of the IHS is utilized to generate a three-dimensional image of the user. The identified physical feature landmarks of the user are overlaid onto the three-dimensional image, thus allowing distances from the IHS to each of the landmarks to be determined. Based on the overlay of the physical feature landmarks onto the three-dimensional image, a posture of the user relative to the IHS is determined. The posture may be scored based on the degree to which the user's posture deviates from an ergonomic posture.
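The final scoring step can be sketched as a deviation measure between the landmark distances read from the time-of-flight data and an ergonomic reference. The landmark names, reference distances, and linear scoring formula are all illustrative assumptions, not the patented scoring method.

```python
def posture_score(landmark_depths, ergonomic_depths):
    """Score posture by how far each landmark's measured distance from
    the IHS (via the time-of-flight sensor, in cm) deviates from an
    assumed ergonomic reference distance. 100 = matches the reference."""
    devs = [abs(landmark_depths[k] - v) for k, v in ergonomic_depths.items()]
    mean_dev = sum(devs) / len(devs)
    return max(0.0, 100.0 - mean_dev)

# Hypothetical landmarks: the user leans in (head closer than reference).
measured  = {"head": 55.0, "shoulder_l": 62.0, "shoulder_r": 60.0}
reference = {"head": 60.0, "shoulder_l": 60.0, "shoulder_r": 60.0}
print(round(posture_score(measured, reference), 2))  # 97.67
```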