Patent classifications
G06T3/073
METHOD AND SYSTEM FOR MONITORING AND CONTROLLING ONLINE BEVERAGE CAN COLOR DECORATION SPECIFICATION
A system is provided in an automated machine vision inspection environment. The system includes inspection cameras and a spectrophotometer or spectrometer, both used online to detect the absolute colors of printed portions of items being inspected. The spectrophotometer or spectrometer is aimed at a fixed spot within the field of view of one of the inspection system's digital cameras, and the system has a priori knowledge of exactly where the spectrophotometer or spectrometer is aimed. The image taken by the camera is used to determine whether the desired measurement spot on the decoration pattern was actually measured by the most recent capture of spectrophotometric data. When the vision system determines that the spectrophotometer or spectrometer was truly aimed at the correct region when it captured its inspection data, it instructs the system to accept the color measurement and logs the related data and information accordingly. If the correct spot was not measured, the data may simply be discarded or may be kept for other uses.
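The accept/discard decision can be sketched as a simple mask test, assuming the vision system exposes the target decoration region as a boolean mask and the spectrometer's aim point as calibrated pixel coordinates (all names here are illustrative, not from the patent):

```python
import numpy as np

def accept_color_measurement(region_mask, spot_xy, spot_radius=3):
    """Return True if the spectrometer's fixed aim point lies on the
    desired decoration region in the camera image.

    region_mask : 2D bool array marking the target region of the
                  decoration pattern (from the vision system).
    spot_xy     : (col, row) pixel coordinates of the spectrometer spot,
                  known a priori from calibration.
    """
    x, y = spot_xy
    h, w = region_mask.shape
    # Sample a small neighbourhood around the aim point; require that
    # every sampled pixel falls inside the target region.
    ys = slice(max(0, y - spot_radius), min(h, y + spot_radius + 1))
    xs = slice(max(0, x - spot_radius), min(w, x + spot_radius + 1))
    patch = region_mask[ys, xs]
    return patch.size > 0 and bool(patch.all())

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True          # target region of the decoration
print(accept_color_measurement(mask, (50, 50)))  # spot on the target region
print(accept_color_measurement(mask, (10, 10)))  # spot off the target region
```

In a real line, the region mask would come from locating the decoration pattern in the camera frame; a rejected reading would be discarded or logged for other uses, as the abstract describes.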
RE-PROJECTING FLAT PROJECTIONS OF PICTURES OF PANORAMIC VIDEO FOR RENDERING BY APPLICATION
Innovations in reconstruction and rendering of panoramic video are described. For example, a view-dependent operation controller of a panoramic video playback system receives an indication of a view direction for an application and, based at least in part on the view direction, identifies a section of a picture of panoramic video in an input projection. The view-dependent operation controller limits operations of a color converter, video decoder, and/or streaming controller to the identified section. In this way, the panoramic video playback system can avoid performing operations to reconstruct sections of the picture of panoramic video that will not be viewed. As another example, a mapper of a panoramic video playback system re-projects at least some sample values in an input flat projection towards a center location for a view direction, producing an output flat projection, which an application can use to generate one or more screen projections.
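The section-identification step can be illustrated with a minimal sketch, assuming the picture is an equirectangular projection split into vertical tiles by yaw (the tiling scheme and function name are assumptions, not the patent's design):

```python
def visible_tiles(view_yaw_deg, fov_deg, num_tiles):
    """Identify which vertical tiles of an equirectangular picture
    intersect the horizontal field of view for a given view direction.
    Decoding, color conversion, and streaming can then be limited to
    these tiles, skipping sections that will not be viewed."""
    tile_width = 360.0 / num_tiles
    lo = view_yaw_deg - fov_deg / 2.0
    hi = view_yaw_deg + fov_deg / 2.0
    tiles = set()
    yaw = lo
    while yaw <= hi:
        tiles.add(int(yaw % 360.0 // tile_width))
        yaw += tile_width / 2.0   # step finely enough to catch every tile
    tiles.add(int(hi % 360.0 // tile_width))
    return sorted(tiles)

print(visible_tiles(0.0, 90.0, 8))   # → [0, 1, 7]: front tile plus neighbours
```

The yaw range wraps around 0°/360°, so a forward-facing view touches tiles on both sides of the seam.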
IMAGE PROCESSING METHOD AND DEVICE FOR PROJECTING IMAGE OF VIRTUAL REALITY CONTENT
The present invention relates to technology for sensor networks, machine-to-machine (M2M) communication, machine-type communication (MTC), and the Internet of Things (IoT). The present invention can be utilized for intelligent services (smart home, smart building, smart city, smart car or connected car, health care, digital education, retail, security and safety-related services, and the like) based on this technology. The present invention relates to an efficient image processing method and device for virtual reality content; according to one embodiment, the image processing method for projecting an image of virtual reality content comprises the steps of: acquiring a first planar image projected by dividing a front part and a rear part of a spherical image for expressing a 360-degree image; generating a second planar image projected by sampling the first planar image on the basis of pixel position; and encoding the second planar image.
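One plausible reading of the position-dependent sampling step, sketched in NumPy (the quarter-band split and 2:1 column subsampling are illustrative choices for reducing oversampled polar rows, not the patent's actual scheme):

```python
import numpy as np

def resample_by_row(img):
    """Build a second planar image by sampling the first according to
    pixel position: rows in the top and bottom quarters (heavily
    oversampled by the spherical projection) keep every other column,
    while the middle band is kept at full resolution."""
    h, w = img.shape[:2]
    q = h // 4
    top = img[:q, ::2]        # polar band, half the columns
    mid = img[q:h - q]        # equatorial band, full resolution
    bot = img[h - q:, ::2]    # polar band, half the columns
    return top, mid, bot      # packed parts ready for encoding
```

The three parts would then be packed into the second planar image and passed to the encoder.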
Methods and Systems for Generating and Using Localisation Reference Data
Methods and systems for classifying data points of a point cloud indicative of the environment around a vehicle by using features of a digital map relating to a deemed current position of the vehicle. Such methods and systems can be used to detect road actors, such as other vehicles, around a vehicle capable of sensing its environment as a point cloud; preferably used by highly and fully automated driving applications.
Equatorial stitching of hemispherical images in a spherical image capture system
Hyper-hemispherical images may be combined to generate a rectangular projection of a spherical image having an equatorial stitch line along a line of lowest distortion in the two images. First and second circular images are received representing respective hyper-hemispherical fields of view. A video processing device may project each circular image to a respective rectangular image by mapping an outer edge of the circular image to a first edge of the rectangular image and mapping a center point of the circular image to a second edge of the rectangular image. The rectangular images may be stitched together along the edges corresponding to the outer edges of the original circular images.
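The edge-to-edge mapping amounts to a polar unwrap: each output column picks an angle, each output row picks a radius, with row 0 sampling the circle's outer edge and the last row its center. A nearest-neighbour sketch in NumPy (assuming a square input with the circle inscribed; interpolation and lens-distortion handling are omitted):

```python
import numpy as np

def unwrap_circular(img, out_h, out_w):
    """Project a circular (hyper-hemispherical) image to a rectangular
    image: the circle's outer edge maps to the first row of the output
    and its center point maps to the last row."""
    size = img.shape[0]                 # assume a square input image
    cx = cy = (size - 1) / 2.0
    radius = size / 2.0
    rows = np.arange(out_h)[:, None]    # output row index
    cols = np.arange(out_w)[None, :]    # output column index
    r = radius * (1.0 - rows / (out_h - 1))      # row 0 -> outer edge
    theta = 2.0 * np.pi * cols / out_w           # column -> angle
    x = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, size - 1)
    y = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, size - 1)
    return img[y, x]

# Two unwrapped hemispheres can then be stitched along the rows that came
# from the circles' outer edges, i.e. along the sphere's equator, where
# fisheye distortion is lowest.
```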
MAPPING OF SPHERICAL IMAGE DATA INTO RECTANGULAR FACES FOR TRANSPORT AND DECODING ACROSS NETWORKS
A system captures a first hemispherical image and a second hemispherical image, each hemispherical image including an overlap portion, the overlap portions capturing the same field of view, the two hemispherical images collectively comprising a spherical FOV and separated along a longitudinal plane. The system maps a modified first hemispherical image to a first portion of the 2D projection of a cubic image, the modified first hemispherical image including a non-overlap portion of the first hemispherical image, and maps a modified second hemispherical image to a second portion of the 2D projection of the cubic image, the modified second hemispherical image also including a non-overlap portion. The system maps the overlap portions of the first and second hemispherical images to the 2D projection of the cubic image, and encodes the 2D projection of the cubic image to generate an encoded image representative of the spherical FOV.
PRE-PROCESSING METHOD FOR CREATING 3D VIRTUAL MODEL AND COMPUTING DEVICE THEREFOR
A hole-filling method, performable on a computing device, for providing a three-dimensional virtual model according to a technical aspect of the present application may comprise the operations of: acquiring an original training image and a hole-creation training image, wherein the hole-creation training image is an image in which at least one hole is created based on the original training image; creating a hole-filling training image by performing hole-filling on the hole-creation training image using a neural network; performing spherical transformation on each of the hole-filling training image and the original training image; and training the neural network based on the difference between the spherically transformed hole-filling training image and the spherically transformed original training image.
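One plausible reading of the spherical transformation is an area weighting of the pixel-wise difference: in an equirectangular layout, rows near the poles subtend less solid angle and should count for less in the training loss. A NumPy sketch of such a loss (the sin-weighted L2 form is an assumption, not the application's definition, and a real training loop would compute it in a differentiable framework):

```python
import numpy as np

def sphere_weighted_l2(pred, target):
    """Compare a hole-filled image against the original under a spherical
    area weighting: rows of an equirectangular image near the poles cover
    less solid angle, so their error contributes less to the loss."""
    h, w = pred.shape[:2]
    # colatitude of each row's center, in (0, pi)
    theta = (np.arange(h) + 0.5) * np.pi / h
    weights = np.sin(theta)[:, None]          # area element ~ sin(theta)
    diff = (pred - target) ** 2
    return float((weights * diff).sum() / (weights.sum() * w))
```

Training the hole-filling network against this loss penalizes errors where they matter most on the reconstructed sphere, rather than uniformly over the flat image.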
Converting Spatial Features to Map Projection
Embodiments relate to converting spatial features to a map projection. Initially, a map request that specifies the map projection for a geographic area is obtained. A spatial feature is identified for projecting into the map projection. Until a bisect threshold is satisfied for each line segment in the spatial feature, a bisect is determined for each of the line segments; each line segment is projected into the map projection; and if the bisect threshold is not satisfied for a line segment, the line segment is divided into subsegments, where the bisect threshold specifies an error distance for the line segment after projection. The modified spatial feature is projected into the map projection to obtain a projected spatial feature, and a polar coordinate system that corresponds to the map projection is used to render the projected spatial feature in a spatial map.
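The bisect loop can be sketched as a recursive subdivision: a segment is kept when the projection of its geographic midpoint lies within the error distance of the midpoint of its projected endpoints, and is otherwise split into subsegments. The Mercator-style `project` below is a stand-in for whatever map projection the request specifies:

```python
import math

def project(lon, lat):
    """Toy Mercator-like projection; stands in for the map projection
    specified by the map request."""
    y = math.degrees(math.log(math.tan(math.radians(lat) / 2 + math.pi / 4)))
    return lon, y

def densify(p0, p1, max_error):
    """Subdivide a (lon, lat) line segment until the projected midpoint
    of each piece is within max_error of the midpoint of its projected
    endpoints (the bisect threshold)."""
    mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    a, b, m = project(*p0), project(*p1), project(*mid)
    chord_mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    err = math.hypot(m[0] - chord_mid[0], m[1] - chord_mid[1])
    if err <= max_error:
        return [p0, p1]
    left = densify(p0, mid, max_error)
    return left[:-1] + densify(mid, p1, max_error)

# A long, curving segment is split repeatedly; a segment that projects
# to a straight line is kept as-is.
pts = densify((0.0, 10.0), (90.0, 60.0), 0.01)
```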
Method for generating panoramic image
A processing apparatus capable of generating a panoramic image from a plurality of captured images acquired by a plurality of imaging operations includes an input unit configured to input a superimposition parameter for determining a superimposition position of a predetermined image on a captured image; a generation unit configured to generate the panoramic image from the plurality of captured images by transformation processing of the coordinate values of the captured images; and a determination unit configured to determine the superimposition position of the predetermined image on the panoramic image according to the superimposition parameter and position information on the panoramic image for which the transformation processing of the coordinate values has been performed by the generation unit.
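The determination unit's job reduces to pushing the overlay's position through the same coordinate transformation applied to its captured image. A sketch, assuming that transformation can be expressed as a 3x3 homography (an assumption for illustration; the abstract does not name the transform):

```python
import numpy as np

def place_overlay(H, overlay_xy):
    """Map an overlay's superimposition position from a captured image
    into panorama coordinates using the same homography H that the
    generation unit applied to that captured image."""
    x, y = overlay_xy
    p = H @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])   # back from homogeneous coords
```

Because the overlay position rides along with the image's own transformation, the predetermined image lands on the panorama exactly where it sat on the original capture.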
Mapping of spherical image data into rectangular faces for transport and decoding across networks
A system receives an encoded image representative of the 2D projection of a cubic image, the encoded image generated from two overlapping hemispherical images separated along a longitudinal plane of a sphere. The system decodes the encoded image to produce a decoded 2D projection of the cubic image, and performs a stitching operation on portions of the decoded 2D projection representative of the overlapping portions of the hemispherical images to produce stitched overlapping portions. The system combines the stitched overlapping portions with portions of the decoded 2D projection representative of the non-overlapping portions of the hemispherical images to produce a stitched 2D projection of the cubic image, and encodes the stitched 2D projection of the cubic image to produce an encoded cubic projection of the stitched hemispherical images.
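The stitching operation on the decoded overlap portions can be illustrated as a simple cross-fade across the overlap band (a linear blend is one common choice; the patent does not specify the blend):

```python
import numpy as np

def blend_overlap(band_a, band_b):
    """Stitch two decoded overlap bands (the same field of view as seen
    by the two hemispherical lenses) with a linear cross-fade: weight
    shifts from all-A at one side of the band to all-B at the other."""
    assert band_a.shape == band_b.shape
    w = band_a.shape[1]
    alpha = np.linspace(0.0, 1.0, w)[None, :]   # 0 -> all A, 1 -> all B
    return (1.0 - alpha) * band_a + alpha * band_b
```

The blended band replaces the two overlapping copies, and the result is recombined with the non-overlapping portions before re-encoding.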