G06T3/053

Method, Head-Up Display and Output System for the Perspective Transformation and Outputting of Image Content, and Vehicle
20190005608 · 2019-01-03

A method, a head-up display and a display system for the perspective transformation and displaying of rendered image content, as well as a corresponding vehicle, are provided. In the method, the image content to be displayed is subdivided into a plurality of tiles, and each tile is individually transformed using a perspective transformation. The transformed tiles are then combined to form the transformed image content, which is projected onto a projection area of the head-up display or shown on a display unit.
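The tiling pipeline described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (nearest-neighbour inverse mapping and a single tile-local homography `H` applied to every tile), not the patented implementation:

```python
import numpy as np

def warp_tile(tile, H):
    """Warp one tile with homography H (inverse mapping, nearest neighbour)."""
    h, w = tile.shape[:2]
    out = np.zeros_like(tile)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = Hinv @ pts
    src = (src[:2] / src[2]).round().astype(int)
    sx, sy = src[0].reshape(h, w), src[1].reshape(h, w)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[valid] = tile[sy[valid], sx[valid]]
    return out

def transform_by_tiles(image, tile_size, H):
    """Subdivide the image into tiles, warp each tile, reassemble the result."""
    h, w = image.shape[:2]
    result = np.zeros_like(image)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            tile = image[y:y + tile_size, x:x + tile_size]
            result[y:y + tile_size, x:x + tile_size] = warp_tile(tile, H)
    return result
```

Warping tile by tile keeps the per-tile working set small, which is the point of subdividing before transforming rather than warping the whole frame at once.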

Method and Head-Up Display for the Perspective Transformation and Displaying of Image Content, and Vehicle
20190005628 · 2019-01-03

A method and a head-up display for the perspective transformation and displaying of rendered image content, as well as a corresponding vehicle, are provided. In the method, the image content to be displayed is subdivided into a plurality of tiles, and each tile is individually transformed using a perspective transformation. The transformed tiles are then combined to form the transformed image content, which is projected onto a projection area associated with the head-up display.

Foveated video rendering
10157448 · 2018-12-18

Techniques are described for generating and rendering video content based on an area of interest (also referred to as foveated rendering) to allow 360 video or virtual reality to be rendered with relatively high pixel resolution even on hardware not specifically designed to render at such high pixel resolution. Processing circuitry may be configured to keep the pixel resolution within a first portion of an image of one view at the relatively high pixel resolution, but reduce the pixel resolution through the remaining portions of the image of the view based on an eccentricity map and/or user eye placement. A device may receive the images of these views and process the images to generate viewable content (e.g., perform stereoscopic rendering or interpolation between views). Processing circuitry may also make use of future frames within a video stream and base predictions on those future frames.
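The eccentricity-based resolution reduction can be illustrated with a toy sketch. It keeps full resolution inside a circular fovea and block-averages the periphery; the function name, the circular fovea model, and block-averaging as a stand-in for rendering at lower resolution are assumptions, not the described circuitry:

```python
import numpy as np

def foveate(image, fovea_center, fovea_radius, factor=4):
    """Keep full resolution inside the fovea; coarsen the periphery by
    block-averaging (a stand-in for rendering at lower pixel resolution)."""
    h, w = image.shape[:2]
    # Coarse periphery: average over factor x factor blocks, then tile back up.
    lowres = image[:h - h % factor, :w - w % factor]
    blocks = lowres.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    coarse = np.kron(blocks, np.ones((factor, factor)))
    out = coarse.copy()
    # Restore full-resolution pixels inside the circular foveal region.
    ys, xs = np.mgrid[0:coarse.shape[0], 0:coarse.shape[1]]
    cy, cx = fovea_center
    inside = (ys - cy) ** 2 + (xs - cx) ** 2 <= fovea_radius ** 2
    out[inside] = image[:coarse.shape[0], :coarse.shape[1]][inside]
    return out
```

In a real pipeline the fovea would follow the eccentricity map and/or tracked eye placement rather than a fixed centre.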

Simultaneous zoom in windows on a touch sensitive device
10140003 · 2018-11-27

Certain aspects of the present disclosure relate to a technique for generating simultaneous zoom-in windows on a touch-sensitive device. A first portion of the user content is zoomed into by touching the display screen in proximity to the first portion using the touch input device, while retaining the original zoom size of a first remaining portion of the user content. A second portion of the user content, within the first remaining portion, is zoomed into by touching the display screen in proximity to the second portion, while retaining the zoomed-in first portion and the original zoom size of a second remaining portion of the first remaining portion, the original zoom size of the first and second remaining portions being the same.
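A minimal sketch of maintaining several simultaneously zoomed regions might look like this. The data model (a `windows` dict mapping regions to zoom factors) is an assumption, touch handling is omitted, and nearest-neighbour magnification stands in for the device's renderer:

```python
import numpy as np

def apply_zoom_windows(content, windows):
    """Render several simultaneously zoomed regions of `content`.
    `windows` maps (y0, y1, x0, x1) -> integer zoom factor; regions outside
    any window keep their original size. Returns (region, patch) pairs
    for a compositor to place over the unzoomed content."""
    rendered = []
    for (y0, y1, x0, x1), factor in windows.items():
        patch = content[y0:y1, x0:x1]
        # Nearest-neighbour magnification of the touched region.
        zoomed = np.kron(patch, np.ones((factor, factor)))
        rendered.append(((y0, y1, x0, x1), zoomed))
    return rendered
```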

Pixel buffering

In an example method and system, image data is provided to an image processing module. Image data is read from memory into a down-scaler, which down-scales it to a first resolution; the down-scaled data is stored in a first buffer. A region of image data that the image processing module will request is predicted, and image data corresponding to at least part of the predicted region is stored in a second buffer at a second resolution, higher than the first. When a request for image data is received, it is determined whether image data corresponding to the requested image data is in the second buffer; if so, the image data is provided to the image processing module from the second buffer. If not, image data from the first buffer is up-scaled, and the up-scaled image data is provided to the image processing module.
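The two-buffer flow can be sketched as a small class. This is an illustrative reading of the method: subsampling stands in for the down-scaler, and nearest-neighbour `np.kron` expansion stands in for the up-scaler.

```python
import numpy as np

class PixelBuffer:
    """Two-tier buffer: a coarse copy of the whole image (first buffer)
    plus a higher-resolution cache of a predicted region (second buffer)."""

    def __init__(self, image, scale=2):
        self.image = image
        self.scale = scale
        # First buffer: the whole image down-scaled by `scale` (subsampling).
        self.coarse = image[::scale, ::scale]
        self.cached_region = None  # (y0, y1, x0, x1) of the second buffer
        self.cache = None

    def predict(self, region):
        """Pre-fetch the predicted region at full resolution (second buffer)."""
        y0, y1, x0, x1 = region
        self.cached_region = region
        self.cache = self.image[y0:y1, x0:x1]

    def request(self, region):
        """Serve a request from the second buffer if it covers the region,
        otherwise up-scale from the first buffer."""
        y0, y1, x0, x1 = region
        if self.cached_region is not None:
            cy0, cy1, cx0, cx1 = self.cached_region
            if y0 >= cy0 and y1 <= cy1 and x0 >= cx0 and x1 <= cx1:
                return self.cache[y0 - cy0:y1 - cy0, x0 - cx0:x1 - cx0]
        # Miss: nearest-neighbour up-scale of the coarse buffer.
        upscaled = np.kron(self.coarse, np.ones((self.scale, self.scale)))
        return upscaled[y0:y1, x0:x1]
```

A cache hit returns exact pixels; a miss degrades gracefully to up-scaled coarse data instead of stalling on a memory fetch, which is the trade-off the scheme is after.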

Image processing device and image processing method

A subject detecting unit 107 calculates a subject position on the basis of a subject region obtained by calculating the difference between an overhead image generated from an image for difference calculation stored in an image for difference calculation storage unit 106 and an overhead image generated by an overhead image generating unit 105. A projection plane calculating unit 108 forms a projection plane at the subject position, and a subject image generating unit 109 projects camera images of image taking devices 1a to 1c onto the projection plane, and generates a subject image. A display image generating unit 110 outputs to a display 2 an image formed by synthesizing the subject image with the overhead images generated by the overhead image generating unit 105.
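The difference-based subject detection step can be illustrated in a few lines; the thresholded absolute difference and the centroid are assumptions standing in for the unit's actual region calculation:

```python
import numpy as np

def detect_subject(overhead_ref, overhead_now, threshold=10.0):
    """Locate a subject as the centroid of pixels that differ between a
    stored reference overhead image and the current overhead image."""
    diff = np.abs(overhead_now.astype(float) - overhead_ref.astype(float))
    mask = diff > threshold
    if not mask.any():
        return None  # no subject: the two overhead views agree
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()
```

In the described system this position is where the projection plane is erected before the camera images are re-projected onto it.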

ASSISTANCE APPARATUS
20180288371 · 2018-10-04

An assistance apparatus includes: an imaging unit that captures an image of a surrounding of a vehicle to generate a captured image; and a processing unit that corrects deviation of a correction target based on a preset reference value and a position of a vanishing point, a vanishing line, or a horizontal line specified in the captured image to assist driving of the vehicle.

MAP-LIKE SUMMARY VISUALIZATION OF STREET-LEVEL DISTANCE DATA AND PANORAMA DATA
20180276875 · 2018-09-27

Architecture that summarizes a large amount (e.g., thousands of miles) of street-level image/video data of different perspectives and types (e.g., continuous scan-type data and panorama-type data) into a single view that resembles aerial imagery. Polygon surfaces are generated from the scan patterns, the image data is projected onto the surfaces, and the result is rendered into the desired orthographic projection. The street-level data is processed using a distributed computing approach: the collection is processed into image tiles on separate cluster nodes, representing an orthographic map projection that can be viewed at various levels of detail. Map features such as lower-level roads that lie at lower elevations and are hidden by overpassing higher-level roads can still be navigated in the map. With the summarized data, the maps can be navigated and zoomed efficiently.
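The level-of-detail tiling step can be sketched serially (the patent distributes it across cluster nodes). The tile size, the 2x2 averaging pyramid, and the dict keyed by tile coordinates are assumptions for illustration:

```python
import numpy as np

def build_tile_pyramid(ortho, tile=2, levels=2):
    """Cut an orthographic projection into tiles at several levels of
    detail, coarsening by 2x2 block averaging between levels."""
    pyramid = []
    img = ortho.astype(float)
    for _ in range(levels):
        h, w = img.shape
        tiles = {(y // tile, x // tile): img[y:y + tile, x:x + tile]
                 for y in range(0, h, tile)
                 for x in range(0, w, tile)}
        pyramid.append(tiles)
        # Next coarser level of detail: average 2x2 blocks.
        img = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return pyramid
```

In a distributed setting each node would compute the tiles for its own shard of the collection; the pyramid structure is what lets the map viewer zoom efficiently.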

Image-processing apparatus for indicating a range within an input image, processing method, and medium
12096110 · 2024-09-17

An image-processing apparatus is provided for indicating a range, displayed by a device or a system comprising a display area, within an input image. A first obtaining unit obtains information that represents a display form of the device or the system comprising the display area. A second obtaining unit obtains input image data representing the input image. An identification unit identifies the range displayed in the display area within the input image, based on the input image data and the information. An output unit outputs information that represents the identified range. The shape of the identified range depends on the display area, which corresponds to at least a curved screen or a plurality of flat screens.

System and method for processing geographical information with a central window and frame

Processing geographical information includes storing geographical information, including a map dataset and an associated context dataset, in a memory. A central window subset based on the map dataset and the associated context dataset is extracted from the memory. A frame window subset, based on an associated context dataset that is adjacent to the central window subset, is extracted from the memory. The central window subset and the frame window subset are transferred to a graphics memory.
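A sketch of the central-window and frame extraction, assuming the map is a 2-D array and the frame is the ring of context surrounding the window (the `margin` parameter and NaN masking of the interior are illustrative choices, not from the source):

```python
import numpy as np

def extract_window_and_frame(map_data, center, half, margin):
    """Extract a central window and the surrounding frame from a map array.
    The frame is the ring of width `margin` around the window; its interior
    (already covered by the window) is masked out with NaN."""
    cy, cx = center
    win = map_data[cy - half:cy + half, cx - half:cx + half]
    outer = map_data[cy - half - margin:cy + half + margin,
                     cx - half - margin:cx + half + margin]
    frame = outer.copy()
    frame[margin:-margin, margin:-margin] = np.nan
    return win, frame
```

Both subsets would then be transferred to graphics memory, with the frame providing pre-fetched context for panning beyond the central window.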