Patent classifications
G09G2360/08
Separately processing regions or objects of interest from a render engine to a display engine or a display panel
Video or graphics received by a render engine within a graphics processing unit may be segmented into a region of interest, such as the foreground, and a region of less interest, such as the background. In other embodiments, an object of interest may be segmented from the rest of the depiction, as in a video game or graphics processing workload. Each segmented portion of a frame may itself make up a separate surface that is sent separately from the render engine to the display engine of a graphics processing unit. In one embodiment, the display engine combines the two surfaces and sends them over a display link to a display panel. The display controller in the display panel displays the combined frame, which is stored in a buffer and refreshed periodically. In accordance with another embodiment, video or graphics may be segmented by a render engine into regions or objects of interest and objects not of interest, and again each separate region or object may be transferred to the display engine as a separate surface. The display engine may then transfer the separate surfaces to a display controller of a display panel over a display link. At the display panel, a separate frame buffer may be used for each of the separate surfaces.
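The combining step described above can be sketched as a simple compositor: each region travels as its own surface, and the display engine pastes each surface into the full frame at its position. This is a minimal illustration; the `Surface` type and `combine_surfaces` function are hypothetical names, not from the patent.

```python
# Hypothetical sketch of per-region surfaces combined by a display engine.
# Names (Surface, combine_surfaces) are illustrative, not from the patent.

from dataclasses import dataclass

@dataclass
class Surface:
    region_id: str          # e.g. "foreground" or "background"
    x: int                  # top-left position within the full frame
    y: int
    pixels: list            # 2D list of pixel values for this region

def combine_surfaces(width, height, surfaces, fill=0):
    """Composite separately transmitted surfaces into one frame,
    as a display engine might before sending it over the display link."""
    frame = [[fill] * width for _ in range(height)]
    for s in surfaces:
        for row, line in enumerate(s.pixels):
            for col, value in enumerate(line):
                frame[s.y + row][s.x + col] = value
    return frame

fg = Surface("foreground", 1, 1, [[9, 9], [9, 9]])
bg = Surface("background", 0, 0, [[1] * 4 for _ in range(4)])
frame = combine_surfaces(4, 4, [bg, fg])
```

Surfaces later in the list overwrite earlier ones, so the foreground region replaces the background pixels it covers.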
GRAPHICS WITH ADAPTIVE TEMPORAL ADJUSTMENTS
An embodiment of an electronic processing system may include an application processor, persistent storage media communicatively coupled to the application processor, a graphics subsystem communicatively coupled to the application processor, an object space adjuster communicatively coupled to the graphics subsystem to adjust an object space parameter based on a screen space parameter, and a sample adjuster communicatively coupled to the graphics subsystem to adjust a sample parameter of the graphics subsystem based on a detected condition. Other embodiments are disclosed and claimed.
MANAGING A DISPLAY OF AN INFORMATION HANDLING SYSTEM
In one embodiment, a method for managing a display of an information handling system includes: monitoring, by a display controller, a usage period of the display indicating a period of time in which the display is in an illuminated state; determining, by the display controller, that the usage period is greater than a threshold usage period; causing, by the display controller, a brightness level of the display to decrease; sending, by the display controller, a signal to a graphics processing unit; receiving, by the graphics processing unit, the signal; determining, by the graphics processing unit, a contrast level associated with one or more images presented to a user; determining, by the graphics processing unit, a gamma level associated with the one or more images; and processing, by the graphics processing unit, the one or more images based on the contrast level and the gamma level.
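The claimed steps follow a single control path, which can be sketched in miniature. The threshold logic and the specific contrast and gamma values below are assumptions for illustration; the patent does not specify them.

```python
# Illustrative sketch of the claimed method. The display controller and GPU
# are collapsed into one function; the 0.2 brightness step and the contrast
# and gamma levels are made-up values, not from the patent.

def manage_display(usage_seconds, threshold_seconds, brightness, pixels):
    """If the display has been illuminated longer than the threshold,
    lower brightness and process the images with contrast and gamma."""
    if usage_seconds <= threshold_seconds:
        return brightness, pixels                 # no adjustment needed
    brightness = max(0.0, brightness - 0.2)       # step the backlight down
    contrast, gamma = 1.1, 0.9                    # levels "determined" by the GPU
    adjusted = [min(1.0, (p * contrast) ** gamma) for p in pixels]
    return brightness, adjusted

bright, out = manage_display(3600, 1800, 0.8, [0.5])
bright2, out2 = manage_display(100, 1800, 0.8, [0.5])
```

The compensation idea is that boosting contrast and lowering gamma keeps images legible after the backlight is dimmed.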
Accelerated frame transmission
A graphics processing unit (GPU) of a processing system transmits pixel data for a frame to a display in a compressed burst, so that the pixel data is communicated at a rate that is higher than the rate at which the display scans out the pixel data to refresh the frame at a display panel. By transmitting pixel data for the frame in a compressed burst, the GPU shortens the time spent transmitting the pixel data and extends the time before the next frame of pixel data is to be transmitted. During the extended time before the next frame of pixel data is to be transmitted, the GPU saves power by placing portions of the processing system in a reduced power mode.
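The power benefit comes from simple timing arithmetic: the faster the burst relative to the scan-out rate, the larger the idle window left in each frame period. A back-of-the-envelope sketch, with an assumed refresh rate and burst speedup:

```python
# Timing sketch for compressed-burst transmission. The 60 Hz refresh rate
# and 4x burst speedup below are assumed values for illustration.

def idle_window_ms(refresh_hz, burst_speedup):
    """Time left in each frame period for the reduced power mode when pixel
    data is transmitted `burst_speedup` times faster than the scan-out rate."""
    frame_period_ms = 1000.0 / refresh_hz
    transmit_ms = frame_period_ms / burst_speedup
    return frame_period_ms - transmit_ms

# At 60 Hz with a 4x burst, roughly 12.5 ms of each ~16.7 ms frame is idle.
idle = idle_window_ms(60, 4)
```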
Controlling interactivity of digital content overlaid onto displayed data via graphics processing circuitry using a frame buffer
An apparatus, method, and computer-readable medium that access a frame buffer of a graphics processing unit (GPU); analyze, in the frame buffer, a frame representing displayed data; based on the analyzed frame, identify a reference patch that includes an instruction to retrieve content; generate an overlay including an augmentation layer which includes the content; superimpose the overlay onto the displayed data (the base layer) such that the content is viewable while a portion of the base layer is obscured; detect a user input; determine a location of the user input in the augmentation layer; associate the location in the augmentation layer with a target location in the base layer; and associate, within memory, the target location with an operation such that the user input in the augmentation layer activates an input in the base layer.
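The input-routing steps amount to a coordinate translation plus a lookup. A hypothetical sketch, assuming the overlay is offset from the base layer by a fixed amount and operations are registered in a table keyed by base-layer location:

```python
# Hypothetical sketch of routing a user input from the augmentation layer
# to an operation in the base layer. The offset model and the operation
# table are illustrative assumptions.

def route_input(x, y, overlay_offset, operations):
    """Translate an augmentation-layer click into base-layer coordinates
    and return the operation registered at that target location, if any."""
    ox, oy = overlay_offset
    target = (x - ox, y - oy)          # target location in the base layer
    return target, operations.get(target)

ops = {(10, 10): "open_document"}     # operation associated within memory
target, op = route_input(15, 12, (5, 2), ops)
```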
Ship information display device and method of displaying ship information
A ship information display device is provided, which may include a first processor, a second processor, a graphic processor, and a display. The first processor may generate a first image based on first ship information received from a first ship sensor and generate a screen to be synthesized including the first image and a blank image. The second processor may generate a second image based on second ship information received from a second ship sensor. The graphic processor may generate a synthesized screen including the first image and the second image by replacing the blank image of the screen to be synthesized by the second image generated by the second processor. The display may display the synthesized screen.
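The synthesis step can be pictured as filling a placeholder: the first processor leaves a blank region, and the graphic processor substitutes the second processor's image into it. A minimal sketch, assuming a sentinel value marks the blank cells:

```python
# Minimal sketch of the synthesis step: the graphic processor replaces the
# blank image in the screen-to-be-synthesized with the second processor's
# image. Using None as the blank sentinel is an assumption.

BLANK = None

def synthesize(screen, second_image):
    """Fill every blank cell of the screen-to-be-synthesized with the
    corresponding cell of the second image."""
    return [
        [second_image[r][c] if cell is BLANK else cell
         for c, cell in enumerate(row)]
        for r, row in enumerate(screen)
    ]

screen = [["radar", BLANK], [BLANK, "radar"]]   # first image + blank image
sonar = [["sonar"] * 2 for _ in range(2)]       # second processor's image
combined = synthesize(screen, sonar)
```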
SPATIAL DITHERING TECHNOLOGY THAT SUPPORTS DISPLAY SCAN-OUT
Methods, systems and apparatuses may provide for technology that generates a seed value, wherein the seed value is dedicated to a position of an input pixel, generates a dithered pixel value based on the seed value and a value of the input pixel, and conducts a scan-out of the dithered pixel value to a display panel. In one example, the technology generates an intermediate value based on the seed value and one or more fixed constants and generates a pseudo random number based on the intermediate value and a programmable constant, wherein the dithered pixel value is generated based on the pseudo random number and the value of the input pixel.
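The pipeline above can be sketched end to end: a seed dedicated to the pixel position, an intermediate value mixed with fixed constants, a pseudo-random number derived with a programmable constant, and a dithered output. The specific constants and mixing hash below are assumptions for illustration, not the patented values.

```python
# Sketch of position-seeded spatial dithering. The constants and the
# multiply-xor hash are illustrative, not the patented values.

FIXED_A, FIXED_B = 0x9E3779B9, 0x85EBCA6B       # assumed "fixed constants"

def dither_pixel(x, y, value, prog_const=0xC2B2AE35, levels=32):
    """Dither an 8-bit input value using noise seeded by pixel position."""
    seed = (x * 73856093) ^ (y * 19349663)              # seed tied to position
    intermediate = ((seed ^ FIXED_A) * FIXED_B) & 0xFFFFFFFF
    prn = ((intermediate ^ prog_const) * 0x27D4EB2F) & 0xFFFFFFFF
    noise = (prn & 0xFF) / 255.0                        # uniform in [0, 1]
    step = 256 // levels                                # quantization step
    return min(255, int(value + noise * step) // step * step)

out = dither_pixel(3, 7, 100)
```

Because the noise is a pure function of position, the same pixel dithers identically on every scan-out, avoiding temporal flicker.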
ENABLING DISPLAY FILTERS IN COLLABORATIVE ENVIRONMENTS
Display filters, including color filters, can be enabled in collaborative environments. When a user of an end user device desires to have a color filter applied, a windowing system or other source of graphics data can render a frame via a graphics driver. Once the frame is rendered, the graphics driver can enable a collaboration tool to capture the frame and share it via a collaboration solution. Separately from the rendering of the frame, the windowing system can leverage a color filter module to directly apply a color filter to the frame. Once the color filter is applied, the windowing system can cause the frame to be displayed locally. Because the graphics driver is not used to apply the color filter, the color filter will not be applied to any frame that the collaboration tool captures and shares.
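The key point is that the capture path and the local display path diverge before the filter is applied. A toy sketch of the two paths, with made-up function names:

```python
# Toy sketch of the two paths described above: the collaboration tool
# captures the frame as rendered by the graphics driver, while the local
# display path applies the color filter afterward. Names are made up.

def render_frame():
    """Frame as produced via the graphics driver (no filter applied)."""
    return [(200, 50, 50), (50, 200, 50)]            # RGB pixels

def apply_grayscale_filter(frame):
    """Color-filter module: applied by the windowing system, not the driver."""
    return [tuple([sum(p) // 3] * 3) for p in frame]

frame = render_frame()
shared = list(frame)                       # captured and shared unfiltered
local = apply_grayscale_filter(frame)      # filtered copy displayed locally
```

Remote participants receive `shared` with original colors, while the local user sees the filtered `local` frame.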
DISPLAY ENGINE INITIATED PREFETCH TO SYSTEM CACHE TO TOLERATE MEMORY LONG BLACKOUT
A disclosed technique includes prefetching display data into a cache memory, wherein the display data includes data to be displayed on a display during a memory black-out period for a memory; triggering the memory black-out period; and during the black-out period, reading from the cache memory to obtain data to be displayed on the display.
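The prefetch-then-blackout sequence can be sketched with memory modeled as a dictionary and the system cache as a list; all names here are illustrative.

```python
# Sketch of the prefetch-then-blackout sequence. Memory is modeled as a
# dict of scanlines and the system cache as a list; names are illustrative.

class DisplayPath:
    def __init__(self, memory):
        self.memory = memory      # backing memory (unavailable in blackout)
        self.cache = []           # system cache holding prefetched display data
        self.blackout = False

    def prefetch(self, lines):
        """Display-engine-initiated prefetch before the blackout begins."""
        self.cache = [self.memory[i] for i in lines]

    def enter_blackout(self):
        self.blackout = True      # memory may no longer be read

    def scan_out(self):
        """During the blackout, display data is read from the cache only."""
        assert self.blackout
        return list(self.cache)

path = DisplayPath({0: "line0", 1: "line1"})
path.prefetch([0, 1])
path.enter_blackout()
shown = path.scan_out()
```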
Collaborative multi-user virtual reality
- Deepak S. Vembar
- Atsuo Kuwahara
- Chandrasekaran Sakthivel
- Radhakrishnan Venkataraman
- Brent E. Insko
- Anupreet S. Kalra
- Hugues Labbe
- Altug Koker
- Michael Apodaca
- Kai Xiao
- Jeffery S. Boles
- Adam T. Lake
- David M. Cimini
- Balaji Vembu
- Elmoustapha Ould-Ahmed-Vall
- Jacek Kwiatkowski
- Philip R. Laws
- Ankur N. Shah
- Abhishek R. Appu
- Joydeep Ray
- Wenyin Fu
- Nikos Kaburlasos
- Prasoonkumar Surti
- Bhushan M. Borole
An embodiment of a graphics apparatus may include a processor, memory communicatively coupled to the processor, and a collaboration engine communicatively coupled to the processor to identify a shared graphics component between two or more users in an environment and share the shared graphics component with the two or more users in the environment. Embodiments of the collaboration engine may include one or more of a centralized sharer, a depth sharer, a shared preprocessor, a multi-port graphics subsystem, and a decode sharer. Other embodiments are disclosed and claimed.