Method of generating OSD data
20230021833 · 2023-01-26
CPC classification: G09G5/397 (PHYSICS); G09G2370/04 (PHYSICS)
Abstract
The present invention provides a method for generating a plurality of on-screen display (OSD) data used in a back-end (BE) circuit. The BE circuit is configured to process a plurality of image data to be displayed on a display device. The method includes steps of: receiving the plurality of image data from an application processor (AP); and extracting information of a detecting layer embedded in the plurality of image data, wherein the information of the detecting layer indicates the plurality of OSD data corresponding to at least one user-interface (UI) layer in the plurality of image data.
Claims
1. A method of generating a plurality of on-screen display (OSD) data, used in a back-end (BE) circuit, the BE circuit being configured to process a plurality of image data to be displayed on a display device, the method comprising: receiving the plurality of image data from an application processor (AP); and extracting information of a detecting layer embedded in the plurality of image data, wherein the information of the detecting layer indicates the plurality of OSD data corresponding to at least one user-interface (UI) layer in the plurality of image data.
2. The method of claim 1, wherein an image of the detecting layer is not shown on the display device.
3. The method of claim 1, wherein the detecting layer comprises a transparent area and a non-transparent area, and the method further comprises: detecting the plurality of OSD data corresponding to the at least one UI layer overlapping the non-transparent area of the detecting layer.
4. The method of claim 3, further comprising: reconstructing a frame of image data to be shown on the display device according to the plurality of image data in the transparent area of the detecting layer.
5. The method of claim 3, wherein in a large region of the detecting layer, at least one pixel is allocated to the transparent area.
6. The method of claim 3, wherein pixels of the non-transparent area of the detecting layer arranged at a position in which an image of the at least one UI layer probably appears have a higher density than pixels of the non-transparent area of the detecting layer arranged at another position in which the image of the at least one UI layer rarely appears.
7. A method of generating a plurality of on-screen display (OSD) data, used in an application processor (AP), the AP being configured to generate a plurality of image data to be displayed on a display device, the method comprising: embedding at least one user-interface (UI) layer and a detecting layer with a video layer to be displayed on the display device; and transmitting the plurality of image data blended with the at least one UI layer, the detecting layer and the video layer to a back-end (BE) circuit, wherein the detecting layer is configured to detect the at least one UI layer.
8. The method of claim 7, wherein an image of the detecting layer is not shown on the display device.
9. The method of claim 7, wherein the detecting layer comprises a transparent area and a non-transparent area, and the plurality of OSD data are detected in the non-transparent area.
10. The method of claim 7, further comprising: inserting the detecting layer between the at least one UI layer and the video layer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0018] Please refer to
[0019] An on-screen display (OSD) bitmap is a bit array mapped to a frame of image data, for indicating which pixels show the image of the video layer and which pixels show the image of the UI layer(s). In an embodiment, the OSD bit may be set to “1” if the corresponding pixel shows the UI image, and set to “0” if the corresponding pixel shows the background image, as shown in
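The bitmap described above can be illustrated with a short sketch (not part of the disclosure; the function and variable names are hypothetical). It derives the per-pixel OSD bits from a mask indicating which pixels are covered by a UI layer:

```python
# Sketch of an OSD bitmap: one bit per pixel of a frame, where 1 marks a
# pixel showing a UI image and 0 marks a pixel showing the background/video.
# Names (build_osd_bitmap, ui_mask) are illustrative, not from the patent.

def build_osd_bitmap(ui_mask):
    """ui_mask: 2-D list of booleans, True where a UI layer covers the pixel."""
    return [[1 if covered else 0 for covered in row] for row in ui_mask]

# A 2x4 frame whose right half is covered by a UI element.
mask = [
    [False, False, True, True],
    [False, False, True, True],
]
bitmap = build_osd_bitmap(mask)
# bitmap == [[0, 0, 1, 1], [0, 0, 1, 1]]
```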
[0020] In an embodiment, the OSD data may be obtained by deliberately inserting a detecting layer in the blended images in the AP, where the image pattern of the detecting layer is predetermined and known by the BE circuit; hence, the OSD data may be extracted by the BE circuit according to the image data of the detecting layer. In such a situation, the additional efforts and resources for determination, storage, transmission and synchronization of the OSD bits can be saved.
[0021] Please refer to
[0022] In an embodiment, the AP 200 may be, but is not limited to, a system on chip (SoC) or any other main processing circuit running an operating system (e.g., Android) in which various applications can be installed, and which may generate image content including the video and UI. A common example of such an SoC is Qualcomm's Snapdragon series. The BE circuit 210 may be, but is not limited to, a graphics processing unit (GPU), a discrete graphics processing unit (dGPU), an independent display chip, an independent motion estimation and motion compensation (MEMC) chip, or any other image processing circuit of an electronic device capable of a display function. A common example of the BE circuit is Sony's X1 processor. In another embodiment, the AP 200 may be the SoC of a set-top box of the television.
[0023] After receiving the image data, the BE circuit 210 may extract the information of the detecting layer embedded in the image data, and obtain the OSD data corresponding to the image data indicated by the extracted information, where the OSD data includes multiple OSD bits indicating whether the corresponding pixels have UI images or not. Since the BE circuit 210 already knows the image information of the inserted detecting layer, the BE circuit 210 may remove the image of the detecting layer based on the known information, so as to reconstruct the image content. Note that the detecting layer has an image pattern that does not need to be shown on the display device, and thus the image of the detecting layer should be removed before the BE circuit 210 outputs the image data.
[0025] In order to detect the UI layers L1-L3 and determine the OSD data corresponding to the UI layers L1-L3, a detecting layer having image data L_i and transparency parameter α may be inserted between the UI layers L1-L3 and the video layer. The UI layers L1-L3, the detecting layer and the video layer, superposed together, construct the image to be output by the AP 200.
[0026] In the non-transparent area, the image information of the video layer is entirely blocked, and only the UI image may be shown (if there is a UI image). Therefore, the BE circuit 210 may extract the image information of the non-transparent area to determine the corresponding OSD data. More specifically, suppose that the detecting layer has an all-black image. If the BE circuit 210 finds that the image of a pixel in the non-transparent area is black, it may determine that the pixel shows the image of the detecting layer and that no UI layer covers this pixel, and thereby set the corresponding OSD bit to “0”. If the BE circuit 210 finds that the image of a pixel in the non-transparent area is not black, it may determine that the pixel shows a UI image and that at least one UI layer covers this pixel (since the UI layer(s) above are not blocked by the non-transparent detecting layer), and thereby set the corresponding OSD bit to “1”.
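The per-pixel decision of paragraph [0026] can be sketched as follows (an illustration only, under the all-black assumption stated above; the function name and RGB representation are not from the disclosure):

```python
# OSD decision for a pixel in the NON-transparent area of an all-black
# detecting layer: black means only the detecting layer is visible (no UI
# above it), so the OSD bit is 0; anything else means at least one UI layer
# covers the pixel, so the OSD bit is 1.

BLACK = (0, 0, 0)

def osd_bit(pixel_rgb):
    return 0 if pixel_rgb == BLACK else 1

assert osd_bit((0, 0, 0)) == 0        # detecting layer shows through: no UI
assert osd_bit((200, 200, 200)) == 1  # a UI image covers this pixel
```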
[0027] Please refer to
If a specific pixel is in the transparent area (α=0), the output image data of this pixel may be obtained as:
output image data = L_video × (1 − α_UI) + L_UI × α_UI,
which is the image content composed of the video layer and the UI layers to be shown on the display device. If the specific pixel is in the non-transparent area (α=1), the output image data of this pixel may be obtained as:
output image data = L_UI × α_UI,
where the image of the video layer is entirely blocked, and thus the UI layers L1-L3 above the detecting layer may be easily detected.
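The two cases above can be checked with a small blending sketch (illustrative only; it assumes a single UI layer above the detecting layer, with the stacking order of paragraph [0025], and all signals normalized to [0, 1]):

```python
# Layer blending with the detecting layer (image L_det, transparency
# alpha_det) inserted between the video layer and one UI layer, following
# the formulas in paragraph [0027].

def blend(l_video, l_det, alpha_det, l_ui, alpha_ui):
    below_ui = l_det * alpha_det + l_video * (1 - alpha_det)
    return l_ui * alpha_ui + below_ui * (1 - alpha_ui)

# Transparent area (alpha = 0): output = L_video*(1-a_UI) + L_UI*a_UI.
assert blend(0.8, 0.0, 0.0, 0.5, 0.25) == 0.5 * 0.25 + 0.8 * 0.75

# Non-transparent area (alpha = 1) with an all-black detecting layer
# (L_det = 0): output = L_UI*a_UI; the video layer is entirely blocked.
assert blend(0.8, 0.0, 1.0, 0.5, 0.25) == 0.5 * 0.25
```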
[0028] As mentioned above, the image pattern of the detecting layer is known information for the BE circuit 210; hence, the BE circuit 210 may obtain the OSD data according to the image information. Since only the UI image can be shown in the non-transparent area of the detecting layer, the BE circuit 210 may detect the OSD bits corresponding to the UI layers L1-L3 overlapping the non-transparent area of the detecting layer. As for those pixels in the transparent area, the corresponding OSD bits cannot be detected directly. Therefore, the BE circuit 210 may estimate the OSD bits in the transparent area through interpolation, e.g., calculating each OSD bit in the transparent area with reference to nearby pixels in the non-transparent area. In an embodiment, the BE circuit 210 may obtain an OSD bitmap corresponding to an image frame by combining the OSD data detected in the non-transparent area and the OSD data calculated in the transparent area.
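The interpolation of OSD bits in the transparent area described in paragraph [0028] can be sketched in one dimension (an illustration only; a real bitmap would apply this in 2-D, and the nearest-neighbour rule and names are assumptions, not the disclosed method):

```python
# Fill OSD bits for transparent-area pixels by copying the bit of the
# nearest pixel in the non-transparent area (ties resolved to the left).

def fill_transparent_bits(bits, known):
    """bits[i] is valid only where known[i] is True; fill the rest from
    the nearest known neighbour."""
    known_idx = [i for i, k in enumerate(known) if k]
    out = list(bits)
    for i, k in enumerate(known):
        if not k:
            nearest = min(known_idx, key=lambda j: abs(j - i))
            out[i] = bits[nearest]
    return out

# Checkerboard-style row: even pixels detected, odd pixels interpolated.
row   = [1, None, 1, None, 0, None, 0, None]
known = [True, False] * 4
filled = fill_transparent_bits(row, known)
# filled == [1, 1, 1, 1, 0, 0, 0, 0]
```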
[0029] Please note that the detecting layer may change the image to be output to the display device, especially in the non-transparent area, and thus the BE circuit 210 is required to reconstruct the original image data without the image of the detecting layer. As mentioned above, the images in the transparent area are not affected by the detecting layer; hence, a frame of image data may be reconstructed based on the image data in the transparent area, so as to restore the images to be shown on the display device. In an embodiment, the image frame may be reconstructed through interpolation; that is, the BE circuit 210 may determine the image data in the non-transparent area with reference to nearby pixels in the transparent area. The reconstructed image frame may further be sent to the display device. In an embodiment, the reconstructed image frame includes restored information of the UI layers, which may further be used to determine the OSD bitmap with higher accuracy.
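The reconstruction step of paragraph [0029] can likewise be sketched in one dimension (illustrative only; averaging the two horizontal neighbours is one possible interpolation, not the disclosed method):

```python
# Restore non-transparent-area pixels of a checkerboard row by averaging the
# adjacent transparent-area pixels, whose values are unaffected by the
# detecting layer.

def reconstruct_row(pixels, alpha):
    out = list(pixels)
    n = len(pixels)
    for i, a in enumerate(alpha):
        if a == 1:  # non-transparent: altered by the detecting layer
            neighbours = [pixels[j] for j in (i - 1, i + 1)
                          if 0 <= j < n and alpha[j] == 0]
            out[i] = sum(neighbours) / len(neighbours)
    return out

row   = [10, 0, 20, 0, 30]   # zeros were blacked out by the detecting layer
alpha = [0, 1, 0, 1, 0]      # 0 = transparent, 1 = non-transparent
restored = reconstruct_row(row, alpha)
# restored == [10, 15.0, 20, 25.0, 30]
```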
[0030] Therefore, it is preferable to allocate the image data and transparency parameters of the detecting layer such that the transparent area and the non-transparent area are arranged alternately (e.g., to become a checkerboard or similar pattern), so as to facilitate the reconstruction of the output image through interpolation.
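One way to allocate such a checkerboard of transparency parameters (an assumption for illustration; the disclosure does not mandate this exact layout) is:

```python
# Alternate alpha = 0 (transparent) and alpha = 1 (non-transparent) so that
# the two areas are arranged as a checkerboard pattern.

def checkerboard_alpha(height, width):
    return [[(x + y) % 2 for x in range(width)] for y in range(height)]

alpha = checkerboard_alpha(2, 4)
# alpha == [[0, 1, 0, 1], [1, 0, 1, 0]]
# Every non-transparent pixel has transparent 4-neighbours, which is what
# makes interpolation-based reconstruction of the output image possible.
```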
[0031] Please refer to
[0033] In an embodiment, the image pattern of the detecting layer may be different for different image frames. For example, as for two consecutive image frames, the checkerboard pattern of the detecting layer may be changed; that is, a transparent pixel in this frame may be a non-transparent pixel in the next frame, and/or a non-transparent pixel in this frame may be a transparent pixel in the next frame. In such a situation, the BE circuit may reconstruct the image data based on those of the previous and/or next image frame, so as to achieve a better reconstruction effect.
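The frame-to-frame flipping described in paragraph [0033] can be sketched as follows (illustrative; the parity-based flip is an assumption, not the disclosed pattern):

```python
# Flip the checkerboard every frame: a pixel that is non-transparent in
# frame t becomes transparent in frame t+1, so the BE circuit can restore
# it directly from the previous and/or next frame.

def checkerboard_for_frame(height, width, frame):
    return [[(x + y + frame) % 2 for x in range(width)] for y in range(height)]

a0 = checkerboard_for_frame(2, 2, 0)   # [[0, 1], [1, 0]]
a1 = checkerboard_for_frame(2, 2, 1)   # [[1, 0], [0, 1]]

# Every pixel is transparent in at least one of any two consecutive frames.
assert all(a0[y][x] != a1[y][x] for y in range(2) for x in range(2))
```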
[0034] In general, a UI layer embedded with the video layer is used to generate images to be shown on the display device. The detecting layer, however, serves to detect the UI layer, and its image pattern should be removed from the image data through reconstruction. Therefore, the images of the detecting layer may not be shown on the display device. This feature distinguishes the detecting layer from other UI layers.
[0035] Further, in order to successfully reconstruct the original image, the inserted detecting layer should be composed of the transparent area and the non-transparent area, and the transparent area should be arranged in a manner that allows the reconstruction to be performed correctly. In an embodiment, most pixels in an image frame may be allocated to the transparent area, and only a few pixels are allocated to the non-transparent area for detecting the OSD bits. Alternatively or additionally, the detecting layer may not include any large region (larger than a specific area, or including more than a specific number of pixels) in which all pixels are allocated to the non-transparent area; that is, in any large region of the detecting layer, there should be at least one pixel allocated to the transparent area. In other words, the detecting layer should not have a great number of non-transparent pixels gathered together. In such a situation, the original blended image without the detecting layer may be reconstructed accurately.
[0036] In addition, the OSD bits can only be detected in the non-transparent area, but cannot be directly detected in the transparent area; hence, the OSD bits in the transparent area may be obtained with reference to nearby pixels. Also, if the UI image of a UI layer only appears on the transparent area of the detecting layer, this UI layer may not be successfully detected.
[0037] Moreover, the transparent area and the non-transparent area may be arranged in any manner, which is not limited to the checkerboard pattern described in this disclosure. In an embodiment, the arrangement of the transparent pixels and non-transparent pixels may be adjusted appropriately in different places. For example, at the position(s) where the image of a UI layer probably appears, such as areas close to the border of the panel or screen, non-transparent pixels may be allocated with a higher density, so as to achieve a better detection effect for the OSD bits. In contrast, at the position(s) where the image of the UI layer rarely appears, such as the middle display area, non-transparent pixels may be allocated with a lower density (so that the transparent area is larger), or there may be no non-transparent pixels at all, so as to reconstruct the original image more easily and enhance the accuracy of the reconstruction.
[0038] Please note that the present invention aims at providing a method of generating the OSD data by inserting a detecting layer in the original output image. Those skilled in the art may make modifications and alterations accordingly. For example, in the above embodiments, the transparency parameter is “0” in the transparent area and “1” in the non-transparent area. However, in another embodiment, the transparency parameters of the detecting layer may be set to any values and/or adjusted in an appropriate manner. For example, the transparency parameter in the non-transparent area of the detecting layer may have a value approximately equal to “1”, such as “0.95” or “0.9”. In such a situation, the BE circuit may still determine the OSD data based on the image in the non-transparent area, and the output image may be reconstructed more effectively, since the non-transparent area also includes image information of the video layer, which is helpful in the image reconstruction. In addition, in the above embodiments, the detecting layer has an all-black image; in another embodiment, other colors are also feasible. As long as the color of the detecting layer is different from the main color of the UI image and the color information is known by the BE circuit, the corresponding UI layer may be detected successfully. In an alternative embodiment, multiple colors may be applied in one detecting layer, and/or the detecting layers for different image frames may be composed of different colors, so as to achieve different detection effects.
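One way such a near-1 transparency parameter could still yield OSD bits (an assumption for illustration, not the disclosed method; the α and threshold values are arbitrary) is to threshold brightness in the non-transparent area, since at most (1 − α) of the video can leak through an all-black layer:

```python
# With an all-black detecting layer at alpha = 0.9, a pixel with no UI
# above it shows only 10% of the video signal, so (for luma normalised to
# [0, 1]) it can never exceed 0.1. A threshold above that leak separates
# the two cases. ALPHA_DET and THRESHOLD are illustrative values.

ALPHA_DET = 0.9
THRESHOLD = 0.15   # safely above the maximum leak of 1 - ALPHA_DET = 0.1

def osd_bit_near_one_alpha(blended_luma):
    return 1 if blended_luma > THRESHOLD else 0

assert osd_bit_near_one_alpha(0.8 * (1 - ALPHA_DET)) == 0  # leaked video
assert osd_bit_near_one_alpha(0.6) == 1                    # bright UI pixel
```

A dark UI pixel would defeat this particular threshold, which is consistent with the text's remark that the detecting layer's color should differ from the main color of the UI image.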
[0039] Furthermore, in the above embodiments, the detecting layer is inserted above the video layer and below all of the UI layers. In another embodiment, the detecting layer may be inserted between the video layer and one or more target UI layers, and the OSD bits may be obtained for the target UI layer(s). For example, in the image layer architecture as shown in
[0040] The abovementioned operations of generating the OSD data may be summarized into a process 70, as shown in
[0041] Step 700: Start.
[0042] Step 702: The AP generates a detecting layer configured to detect at least one UI layer.
[0043] Step 704: The AP embeds the at least one UI layer and the detecting layer with the video layer.
[0044] Step 706: The AP transmits the image data blended with the at least one UI layer, the detecting layer and the video layer to the BE circuit.
[0045] Step 708: The BE circuit extracts information of the detecting layer embedded in the image data, wherein the information of the detecting layer indicates the OSD data corresponding to the at least one UI layer in the image data.
[0046] Step 710: The BE circuit reconstructs a frame of image data to be shown on the display device by removing the information of the detecting layer.
[0047] Step 712: End.
[0048] The detailed operations and alterations of the process 70 are illustrated in the above paragraphs, and will not be narrated herein.
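The AP-side and BE-side halves of process 70 can be sketched end to end on a single 1-D row (illustrative only; all names, the checkerboard layout, and the nearest-neighbour fill are assumptions, and the image-reconstruction step of paragraph [0029] is omitted for brevity):

```python
# AP side (steps 702-706): blend the video layer, an all-black detecting
# layer with alternating alpha, and one fully opaque UI layer.
# BE side (step 708): read OSD bits in the non-transparent pixels, then
# interpolate the transparent pixels from the nearest detected bit.

def ap_blend(video, ui, ui_alpha, det_alpha):
    out = []
    for v, u, au, ad in zip(video, ui, ui_alpha, det_alpha):
        below_ui = 0.0 * ad + v * (1 - ad)   # all-black detecting layer
        out.append(u * au + below_ui * (1 - au))
    return out

def be_extract(blended, det_alpha):
    bits = [(1 if p > 0 else 0) if ad == 1 else None
            for p, ad in zip(blended, det_alpha)]
    known = [i for i, b in enumerate(bits) if b is not None]
    return [bits[min(known, key=lambda j: abs(j - i))] if b is None else b
            for i, b in enumerate(bits)]

video     = [0.2, 0.4, 0.6, 0.8]
ui        = [0.0, 0.0, 1.0, 1.0]   # UI covers the right half
ui_alpha  = [0.0, 0.0, 1.0, 1.0]
det_alpha = [1, 0, 1, 0]           # checkerboard detecting layer

blended = ap_blend(video, ui, ui_alpha, det_alpha)
osd     = be_extract(blended, det_alpha)
# osd == [0, 0, 1, 1]: the UI is detected from the image data alone,
# without any side-channel transmission of OSD bits.
```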
[0049] To sum up, the present invention provides a method of generating the OSD data by deliberately inserting a detecting layer in the blended image. The detecting layer may include a transparent area and a non-transparent area with different transparency parameters arranged as a checkerboard pattern, where the UI image and the video layer are shown in the transparent area, while the video layer is blocked and only the UI image is shown in the non-transparent area. Therefore, the OSD bits may be detected based on the image information obtained in the non-transparent area, and the OSD bits in the transparent area may be calculated with reference to nearby pixels in the non-transparent area, so as to generate an OSD bitmap. Since the transparent area includes the information of the original output image, the image data in the non-transparent area may be reconstructed with reference to nearby pixels in the transparent area through interpolation. As a result, the OSD data may be extracted from the image information more effectively, the display system does not need an additional transmission interface or bandwidth for transmitting the OSD bits, and the OSD bits may be synchronized with the image content more easily and conveniently.
[0050] Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.