G06T7/586

Systems and methods for digitally representing a scene with multi-faceted primitives
11593959 · 2023-02-28

Disclosed is a system and associated methods for generating and rendering a polyhedral point cloud that represents a scene with multi-faceted primitives. Each multi-faceted primitive stores multiple sets of values that represent the different non-positional characteristics associated with a particular point in the scene when viewed from different angles. For instance, the system generates a multi-faceted primitive for a particular point of the scene that is captured in a first capture from a first position and a second capture from a different second position. Generating the multi-faceted primitive includes defining a first facet with a first surface normal oriented towards the first position and first non-positional values based on descriptive characteristics of the particular point in the first capture, and defining a second facet with a second surface normal oriented towards the second position and second non-positional values based on different descriptive characteristics of the particular point in the second capture.
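The per-facet storage and view-dependent lookup the abstract describes can be sketched as a small data structure: each facet pairs a normal oriented toward a capture position with the non-positional values seen from that angle, and rendering picks the facet whose normal best faces the viewer. This is an illustrative sketch, not the patented implementation; the names (`Facet`, `MultiFacetedPrimitive`, `facet_for_view`) are hypothetical.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Facet:
    normal: tuple   # unit surface normal oriented toward a capture position
    values: dict    # non-positional characteristics seen from that angle (e.g. RGB)

@dataclass
class MultiFacetedPrimitive:
    position: tuple                      # (x, y, z) of the point in the scene
    facets: list = field(default_factory=list)

    def add_facet(self, capture_pos, values):
        # Orient the facet normal from the point toward the capture position.
        d = [c - p for c, p in zip(capture_pos, self.position)]
        n = math.sqrt(sum(x * x for x in d))
        self.facets.append(Facet(tuple(x / n for x in d), values))

    def facet_for_view(self, view_dir):
        # Render with the facet whose normal best faces the viewer.
        return max(self.facets,
                   key=lambda f: sum(a * b for a, b in zip(f.normal, view_dir)))
```

A point captured from two positions thus answers with different descriptive values depending on the rendering viewpoint.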

IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING DEVICE
20230027047 · 2023-01-26

Image processing includes: obtaining an image I[0,0] of a picture captured by an image capture means while light is irradiated onto the picture from a light source at a reference position relative to the normal line of the picture; obtaining an image I[α1,0] of the picture captured while the light is irradiated from the light source at a position inclined from the reference position by an angle α1 in a first direction; obtaining an image I[0,β1] of the picture captured while the light is irradiated from the light source at a position inclined from the reference position by an angle β1 in a second direction different from the first direction; creating a three-dimensional map of the picture using a set of images I[0,β1] and I[0,β2]; merging at least a part of each of image I[α1,0], image I[0,β1], and image I[0,β2] with image I[0,0]; and recording, as two-dimensional image data, the image subjected to the emphasizing process.
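The merging step can be sketched as a simple emphasis blend: raking-light captures such as I[α1,0] exaggerate surface relief, and their difference from the reference image I[0,0] can be added back as emphasis. The patent does not specify the merge formula, so the function name and the `weight` parameter below are assumptions.

```python
import numpy as np

def merge_emphasis(i_ref, directional, weight=0.5):
    """Merge directional raking-light captures into the reference image.

    i_ref:        image I[0,0], captured with the light on the normal line
    directional:  list of images such as I[a1,0], I[0,b1], I[0,b2]
    weight:       strength of the relief emphasis (assumed parameter)
    """
    i_ref = i_ref.astype(np.float64)
    out = i_ref.copy()
    for img in directional:
        # Raking light exaggerates surface relief; the difference from the
        # reference isolates that relief so it can be added back as emphasis.
        out += weight * (img.astype(np.float64) - i_ref)
    return np.clip(out, 0, 255).astype(np.uint8)
```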

OPTICAL CONTROL APPARATUS AND OPTICAL CONTROL METHOD

The optical control apparatus includes a light source, a light collecting section, and an optical path control section. The light source emits light. The light collecting section collects the light emitted from the light source and directs it onto an object.

Acquisition of optical characteristics

An apparatus (1, 5, 6) is described which includes two or more colour displays (2) arranged to provide piece-wise continuous illumination of a volume. The apparatus (1, 5, 6) also includes one or more cameras (3), each arranged to image the volume. The apparatus (1, 5, 6) is configured to control the two or more colour displays (2) and the one or more cameras (3) to illuminate the volume with each of two or more illumination conditions, and to obtain two or more sets of images, each set obtained during illumination of the volume with one or more corresponding illumination conditions. The two or more sets of images contain sufficient information for calculation of a reflectance map and a photometric normal map of an object or subject (4) positioned within the volume. When viewed from the volume, the apparatus (1, 5, 6) provides direct illumination only from angles within a zone of a hemisphere. The zone is less than a hemisphere and corresponds to a first range (Δα) of latitudinal angles and a second range (Δβ) of longitudinal angles, each of which is no more than 17π/18.
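Given image sets captured under known illumination directions, a photometric normal map and a reflectance (albedo) map can be recovered with classical Lambertian photometric stereo. The sketch below is a minimal least-squares version of that calculation, not the apparatus's actual processing, which the abstract leaves unspecified.

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Per-pixel photometric normal and reflectance (albedo) maps.

    images:     (k, h, w) stack of grayscale captures, one per illumination
    light_dirs: (k, 3) unit light directions for those illumination conditions
    """
    k, h, w = images.shape
    i = images.reshape(k, -1)                      # (k, pixels)
    l = np.asarray(light_dirs, dtype=np.float64)   # (k, 3)
    # Lambertian model: I = L @ (albedo * n); solve in the least-squares sense.
    g, *_ = np.linalg.lstsq(l, i, rcond=None)      # (3, pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = np.divide(g, albedo, out=np.zeros_like(g), where=albedo > 0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

At least three non-coplanar illumination conditions are needed for the system to be determined, which is consistent with the "two or more" conditions per display being combined across displays.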

PACKAGE DELIVERY SHARING SYSTEMS AND METHODS
20230222435 · 2023-07-13

A package delivery sharing system includes a holding area for packages awaiting delivery to one or more package recipients, and a computing system with a processor and memory storing records of the packages in the holding area. The processor is configured to: track each package in the holding area awaiting delivery; offer a fee to individuals other than the package recipients for delivering a given package to a particular recipient; select a person who accepts the fee in return for transporting the given package; enable that person to access the holding area; help the person find and take the given package through light guidance or other visual cues; and confirm that the person took the correct package from the holding area.

Methods for collecting and processing image information to produce digital assets
11699243 · 2023-07-11

Paired images of substantially the same scene are captured with the same freestanding sensor. The paired images capture reflected light under controlled polarization states that differ between the paired images. Information from the images is applied to a convolutional neural network (CNN) configured to derive a spatially varying bi-directional reflectance distribution function (SVBRDF) for objects in the paired images. Alternatively, the sensor is fixed and oriented to capture images of an object of interest in the scene while a light source traverses a path that intersects the sensor's field of view. Information from the paired images of the scene, and from the images captured of the object of interest while the light source traverses the field of view, is applied to a CNN to derive an SVBRDF for the object of interest. The image information and the SVBRDF are used to render a representation under artificial lighting conditions.
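The motivation for the two polarization states is that cross-polarization suppresses first-surface (specular) reflection. A classical sketch of the diffuse/specular separation such paired captures enable follows; the patent instead feeds the pair to a CNN, and the function and argument names here are hypothetical.

```python
import numpy as np

def separate_components(parallel, cross):
    """Split paired polarization captures into diffuse and specular estimates.

    parallel: image captured with the analyser parallel to the illumination
    cross:    image of the same scene with the analyser crossed
    """
    parallel = parallel.astype(np.float64)
    cross = cross.astype(np.float64)
    # Cross-polarization blocks first-surface (specular) reflection, so the
    # cross image approximates the diffuse term; the difference isolates the
    # specular term that an SVBRDF fit (or CNN) attributes to gloss.
    diffuse = cross
    specular = np.clip(parallel - cross, 0.0, None)
    return diffuse, specular
```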

IMAGING SYSTEMS AND METHODS INCORPORATING IMPROVED CULLING OF VIRTUAL OBJECTS
20220414907 · 2022-12-29

An imaging system including one or more visible-light cameras, pose-tracking means, and one or more processors. The processor(s) are configured to: control the camera(s) to capture a visible-light image whilst processing pose-tracking data to determine the pose of the camera(s); obtain a three-dimensional model of the real-world environment; create an occlusion mask using the three-dimensional model; cull a part of a virtual object, which is to be embedded at a given position in the visible-light image, to generate a culled virtual object; detect whether the width of the culled part or of the remaining part is less than a predefined percentage of the total width of the virtual object; if the width of the culled part is less than the predefined percentage, determine a new position and embed the entire virtual object at the new position to generate an extended-reality image; and if the width of the remaining part is less than the predefined percentage, cull the virtual object entirely.
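The claimed width test reduces to a three-way decision. A minimal sketch with an assumed threshold value (the abstract leaves the predefined percentage unspecified):

```python
def resolve_culling(total_width, culled_width, threshold_pct=20.0):
    """Decide how to handle a virtual object partially cut by the occlusion mask.

    Returns one of: "relocate_whole", "cull_entirely", "embed_as_culled".
    threshold_pct stands in for the claim's predefined percentage (value assumed).
    """
    remaining_width = total_width - culled_width
    limit = total_width * threshold_pct / 100.0
    if culled_width < limit:
        # Only a sliver was culled: move the whole object to a new position.
        return "relocate_whole"
    if remaining_width < limit:
        # Almost nothing is left visible: drop the object entirely.
        return "cull_entirely"
    # Otherwise embed the culled object at the given position as-is.
    return "embed_as_culled"
```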
