SYSTEM AND METHOD FOR CONTROLLING A LIGHT SOURCE FOR ILLUMINATING A SCENE OF INTEREST
20250374404 · 2025-12-04
Assignee
Inventors
CPC classification
B64U20/87
PERFORMING OPERATIONS; TRANSPORTING
B64U2101/30
PERFORMING OPERATIONS; TRANSPORTING
B64U2101/70
PERFORMING OPERATIONS; TRANSPORTING
B64U2101/00
PERFORMING OPERATIONS; TRANSPORTING
H05B47/115
ELECTRICITY
B64D47/04
PERFORMING OPERATIONS; TRANSPORTING
International classification
H05B47/115
ELECTRICITY
B64D47/04
PERFORMING OPERATIONS; TRANSPORTING
Abstract
The invention relates to a method for controlling a light source, the method using (a) at least one pose estimate of a camera configured to capture one or more images of a scene of interest which comprises at least one landmark, as said light source is operated to emit light which illuminates said scene of interest, (b) a landmark map comprising at least 3D location information of a plurality of landmarks comprising the at least one landmark in the scene of interest, (c) an illumination model describing a relationship between an emission illumination power and a reflection illumination power, wherein said emission illumination power is the power of light emitted by the light source to illuminate said scene of interest, and said reflection illumination power is the illumination power of light reflected by one or more landmarks in said scene of interest and received by the camera, wherein the method comprises the following steps: (1) determining, for at least one of the plurality of landmarks, at least one emission illumination power of light to be emitted by the light source, and an illumination time during which the light source should be operated to emit light which has an emission illumination power which is equal to the at least one emission illumination power, using (i) the at least one pose estimate of the camera, (ii) the 3D location information of the at least one of the plurality of landmarks, (iii) the illumination model; and (2) operating the light source to emit light which has an emission illumination power which is equal to the at least one emission illumination power, for a time period which is equal to the determined illumination time; and (3) updating the illumination model parameters, wherein the illumination model parameters which are updated comprise reflectivity parameters of at least one landmark.
Claims
1. Method for controlling a light source, the method using (a) at least one pose estimate of a camera configured to capture one or more images of a scene of interest which comprises at least one landmark, as said light source is operated to emit light which illuminates said scene of interest, (b) a landmark map comprising at least 3D location information of a plurality of landmarks comprising the at least one landmark in the scene of interest, (c) an illumination model describing a relationship between an emission illumination power and a reflection illumination power, wherein said emission illumination power is the power of light emitted by the light source to illuminate said scene of interest, and said reflection illumination power is the illumination power of light reflected by one or more landmarks in said scene of interest and received by the camera, wherein the method comprises the following steps: (1) determining, for at least one of the plurality of landmarks, at least one emission illumination power of light to be emitted by the light source, and an illumination time during which the light source should be operated to emit light which has an emission illumination power which is equal to the at least one emission illumination power, using (i) the at least one pose estimate of the camera, (ii) the 3D location information of the at least one of the plurality of landmarks, (iii) the illumination model; and (2) operating the light source to emit light which has an emission illumination power which is equal to the at least one emission illumination power, for a time period which is equal to the determined illumination time; and (3) updating the illumination model parameters, wherein the illumination model parameters which are updated comprise reflectivity parameters of at least one landmark.
2. The method according to claim 1, wherein the step (3) of updating the illumination model parameters comprises updating the illumination model parameters based on a deviation between the predicted received illumination power and the measured received illumination power.
3. Method according to claim 1, wherein the at least one emission illumination power is a solution to an optimization problem for energy of the emitted light delivered in the predefined illumination time course, so that said at least one emission illumination power is an optimized emission illumination power.
4. Method according to claim 1, wherein the at least one emission illumination power is a solution to an optimization problem for the uncertainty of a posterior first pose estimate, so that said at least one emission illumination power is an optimized emission illumination power.
5. Method according to claim 1, wherein the illumination model is configured to model a non-isotropically emitting light source.
6. Method according to claim 1, wherein the at least one pose estimate comprises a first pose estimate, and wherein the determining of the at least one emission illumination power comprises determining a first emission illumination power by: (a) determining distances between the first pose estimate and the 3D location of the plurality of landmarks, (b) sorting the distances in an ascending order or descending order, (c) choosing an M-th distance from the sorted distances, and (d) using the M-th distance for determining the first emission illumination power using at least the illumination model.
7. Method according to claim 1, wherein the at least one emission illumination power is determined using a constrained optimization algorithm with a predefined illumination time course, wherein the constrained optimization algorithm is configured to extremize a cost function while fulfilling constraints.
8. Method according to claim 7, wherein the constrained optimization algorithm is configured to minimize or maximize the cost function, by varying at least the at least one emission illumination power.
9. Method according to claim 1, wherein the illumination model comprises illumination model parameters, wherein at least one of the illumination model parameters is a stochastic parameter, wherein determining the at least one emission illumination power involves stochastically propagating illumination model parameter uncertainty through the illumination model.
10. Method according to claim 1, wherein the at least one pose estimate comprises a first pose estimate and wherein the first pose estimate comprises a first pose estimate uncertainty, wherein a first emission illumination power is determined together with a posterior first pose estimate uncertainty of the camera, which posterior first pose estimate uncertainty is determined, together with the first emission illumination power, using at least (i) the first pose estimate and the first pose estimate uncertainty, (ii) the illumination model, and (iii) a localization model for determining a pose of the camera using positions of landmarks in an image acquired by the camera at the pose, wherein the first emission illumination power is determined in such a way that the posterior first pose estimate uncertainty is below a predefined posterior uncertainty threshold.
11. Method according to claim 10, wherein the first emission illumination power is determined together with a second pose estimate uncertainty of the camera, which second pose estimate uncertainty is determined, together with the first emission illumination power, by additionally using a movement model of the camera, wherein the first emission illumination power is determined in such a way that the second pose estimate uncertainty, obtained by forward-projecting the posterior first pose estimate uncertainty using the movement model to a time t.sub.2, t.sub.2 > t.sub.1, at which the camera is configured to capture a subsequent image, is below a predefined uncertainty threshold.
12. Method according to claim 1, wherein the first emission illumination power is determined by solving the following constrained optimization problem
13. Method according to claim 12, wherein the constrained optimization algorithm is relaxed to an unconstrained optimization algorithm,
14. Method according to claim 7, wherein a first emission illumination power is determined by solving the following constrained optimization problem,
15. Method according to claim 1, wherein the illumination model is embodied as follows,
16. Method according to claim 1, wherein the determining of the at least one emission illumination power comprises comparing a predefined threshold reflection illumination power to a predicted received illumination power using the illumination model, wherein the at least one emission illumination power is set in such a way that a corresponding at least one predicted received illumination power is equal to or greater than the predefined threshold reflection illumination power.
17. Method according to claim 1, wherein the at least one pose estimate comprises a plurality of pose estimates and wherein the at least one emission illumination power comprises a plurality of emission illumination powers, the plurality of pose estimates and emission illumination powers relating to a planned and/or predicted movement of the camera, the method further comprising adapting the planned and/or predicted movement based on an output of the constrained optimization algorithm.
18. Method according to claim 17, wherein the at least one emission illumination power is determined by solving the following constrained optimization problem
19. A non-transitory storage medium containing a computer program product comprising instructions which, when executed by a computer, cause the computer to carry out a method according to claim 1.
20. Assembly, comprising (a) a light source, (b) a camera, (c) a plurality of landmarks, and (d) a controller which is configured to carry out a method according to claim 1.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0063] Exemplary embodiments of the invention are disclosed in the description and illustrated by the drawings in which:
[0064]
[0065]
DETAILED DESCRIPTION OF DRAWINGS
[0066]
[0067] The at least one pose estimate 1 may relate to a 3D position and orientation of a camera, which camera is configured to capture images of a scene of interest. The scene of interest may be in an indoor environment, e.g., a warehouse in which a drone carrying at least a camera and a light source may need to navigate. In case the at least one pose estimate comprises a plurality of pose estimates, these pose estimates may be related to 3D positions and orientations of the camera at times at which a corresponding plurality of images is captured by the camera. A plurality of pose estimates may relate to poses of the camera at future times, e.g., to a planned and/or estimated future motion of the camera. Future motion may be at least partly inferred from past motion, e.g., using a Kalman filter for extrapolation, and/or may be obtained from control input. A drone carrying the camera may have inertia, which inertia may prohibit arbitrarily fast movement changes.
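The forward projection of a pose estimate to a future time, as mentioned above, can be illustrated by a minimal sketch assuming a constant-velocity motion model (a simple stand-in for the Kalman-filter extrapolation; all state values here are hypothetical):

```python
import numpy as np

def extrapolate_pose(position, velocity, dt):
    """Forward-project a camera position estimate by dt seconds
    under a constant-velocity motion model. A full implementation
    would also propagate orientation and uncertainty."""
    return position + velocity * dt

# Hypothetical drone state: position [m] and velocity [m/s]
p0 = np.array([1.0, 2.0, 1.5])
v0 = np.array([0.5, 0.0, 0.1])
p_future = extrapolate_pose(p0, v0, dt=0.2)
```

A drone's inertia bounds how far the true pose can deviate from such an extrapolation between frames, which is what makes short-horizon prediction useful for planning illumination.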
[0068] A drone carrying the camera and the light source may operate in an indoor environment equipped with landmarks, which landmarks may be used by the drone to determine its current location in the indoor environment. In prior art, the problem of 3D pose determination of a calibrated camera using images of such external landmarks captured by said calibrated camera is known as perspective-n-point problem. If sufficiently many landmarks with known position in a world coordinate system are visible in an image captured by the camera (two landmarks may need to be visible in case additional orientation information of the projected landmarks is available, or generally three landmarks may need to be visible), a pose of the calibrated camera may be determined using known algorithmic solutions. A 3D pose of the camera may also be determined using an inertial measurement unit (IMU) attached to the drone, using a known coordinate transformation between the IMU and the camera, or by a combination of inertial measurements and computer-vision-based pose determination algorithms, potentially combined using a Kalman filter. A Kalman filter may also be used for determining a pose. Pose estimates may be provided by a Kalman filter through extrapolation.
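The perspective-n-point problem described above inverts the pinhole projection of known 3D landmarks into the image. The following sketch (with illustrative, hypothetical intrinsics and landmark positions) shows the forward mapping that PnP solvers invert:

```python
import numpy as np

def project_landmarks(K, R, t, landmarks_w):
    """Pinhole projection of world-frame landmarks into the image.
    A perspective-n-point solver recovers (R, t) from the pixel
    coordinates and the known landmark positions."""
    pts_cam = R @ landmarks_w.T + t.reshape(3, 1)  # world -> camera frame
    uv = K @ pts_cam                               # camera frame -> image plane
    return (uv[:2] / uv[2]).T                      # perspective division

K = np.array([[800.0,   0.0, 320.0],   # hypothetical camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])
landmarks = np.array([[0.0, 0.0, 5.0],
                      [1.0, 0.0, 5.0],
                      [0.0, 1.0, 4.0]])
pixels = project_landmarks(K, R, t, landmarks)
```

In practice an off-the-shelf solver (e.g., OpenCV's `solvePnP`) would be used on the detected landmark pixels rather than a hand-rolled inversion.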
[0069] For a computer-vision-based pose determination of the calibrated camera, e.g., using well-known algorithmic solutions to the perspective-n-point problem, sufficiently many landmarks may need to be visible in an image captured by the calibrated camera. In order to facilitate landmark detection and visibility in images, at least some landmarks may be embodied as retroreflectors, which retroreflectors may be installed at known positions in the scene of interest. In case a light source is mounted in the vicinity of the camera on the drone, and said light source is used for emitting light, the retroreflector may be clearly visible in an image captured by the camera. The landmark map 2 may comprise information on the positions of landmarks in a world coordinate system. The landmark map 2 may also be determined using a simultaneous-localization-and-mapping (SLAM) algorithm carried out while the drone carrying the camera and the light source moves about the environment.
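A landmark map of the kind described above can be as simple as a mapping from landmark identifier to 3D world-coordinate position; the coordinates below are purely illustrative, and a SLAM pipeline could populate or refine such a map at runtime:

```python
# Minimal landmark map sketch: landmark id -> 3D position [m] in a
# world coordinate system (values are hypothetical). Reflectivity or
# other per-landmark illumination model parameters could be stored
# alongside the position.
landmark_map = {
    0: (0.0, 0.0, 3.0),
    1: (2.5, 0.0, 3.0),
    2: (2.5, 4.0, 3.0),
}
```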
[0070] The illumination model 3 describes how much of the power emitted by the light source arrives at the camera after reflection by a landmark, e.g., a retroreflector. The illumination model 3 is therefore preferentially a physical model, which physical model may describe the power losses occurring between light emission by the light source and light reception by the camera. The illumination model 3 may therefore, e.g., need to reflect whether the light source is an isotropically or non-isotropically emitting light source, whether a landmark reflects diffusely or narrowly etc. Some parameters of the illumination model 3 may be only known approximately. In this case, these parameters may be estimated during operation of the drone carrying the camera and light source, and/or may be continuously tracked in case they are changing over time. The illumination model 3 may comprise information on the distance between light source and landmark, in particular embodied as retroreflector, and between landmark and camera. Some parameters of the illumination model 3 may therefore differ between different landmarks, e.g., due to different distances of landmarks to the light source.
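To make the loss chain concrete, here is a toy illumination model for an isotropically emitting source and a diffusely reflecting landmark. This is an illustrative sketch under simplifying assumptions, not the patent's actual model; all parameter values and the aperture term are hypothetical:

```python
import numpy as np

def received_power(p_emit, rho, d_sl, d_lc, a_cam=1e-4):
    """Toy illumination model: power from an isotropic source spreads
    over a sphere of radius d_sl (source-to-landmark distance), a
    fraction rho is reflected by the landmark, and the reflected light
    spreads again over d_lc (landmark-to-camera distance) before
    entering a camera aperture of area a_cam [m^2]."""
    at_landmark = p_emit / (4 * np.pi * d_sl**2)   # power density at landmark
    reflected = rho * at_landmark                  # landmark reflectivity loss
    return reflected * a_cam / (4 * np.pi * d_lc**2)
```

The model is linear in the emission power and falls off with the product of the squared distances, which is why per-landmark distances (and per-landmark reflectivity parameters) enter the model as described above; a retroreflector would replace the second isotropic spreading term with a much narrower return lobe.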
[0071] Using the illumination model 3, it may be determined how much of the (virtually) emitted light power reaches the camera, specifically the subset of pixels of an image sensor of the camera onto which the corresponding landmark is imaged. In order for the captured feature (image of the illuminated landmark) to be detectable in a reliable and accurate manner from an image, the captured feature needs to be sufficiently bright. The output of the illumination model 3 may therefore be virtually compared to the predefined threshold illumination power 4, which predefined threshold illumination power 4 may, e.g., relate to a noise floor of the camera. This way, an optimized emission illumination power of power emitted by the light source may be determined which may guarantee that a specific landmark can be detected in a reliable manner in an image.
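Because the received power is linear in the emitted power for a fixed geometry, the comparison against the threshold illumination power can be inverted directly. A minimal sketch, assuming a lumped linear model with hypothetical numbers:

```python
def required_emission_power(p_threshold, gain):
    """Invert a linear illumination model p_received = gain * p_emit
    to find the minimum emission power keeping the predicted received
    power at or above the camera's detection threshold (e.g., its
    noise floor). 'gain' lumps all geometric and reflectivity losses;
    the values below are illustrative only."""
    return p_threshold / gain

# Hypothetical numbers: noise floor 1e-9 W, total path gain 5e-10
p_min = required_emission_power(1e-9, 5e-10)
```

Any emission power at or above `p_min` then guarantees, within the model, that the landmark's image is detectable above the noise floor.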
[0072] For pose determination of the camera using projections of landmarks into a captured image, a specific set of landmarks may be chosen. The chosen set of landmarks may be chosen in such a way that reliable pose determination can be achieved, e.g., landmarks whose projections into an image are too close to each other may be disregarded. The optimized emission illumination power may be set in such a way as to facilitate detection of the chosen set of landmarks in a captured image. The optimized emission illumination power is determined based on the at least one pose estimate 1. At any given camera pose, it may not be possible to see all landmarks with the camera. In the optimization of the emission illumination power, such landmarks which are not visible may be disregarded. Landmarks which are currently not visible but may become visible at a later timepoint (e.g., using knowledge of a planned and/or predicted movement of a drone carrying the camera) may, however, be included in the emission illumination power optimization process, while landmarks which are currently visible but may disappear at a later timepoint (e.g., using knowledge of a planned and/or predicted movement of the drone) may potentially be disregarded in the emission illumination power optimization process, e.g., in case the emission illumination power is determined for a future pose of the camera.
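The visibility filtering described above can be sketched with a simple distance-and-cone test; the field-of-view and range parameters are illustrative stand-ins for a real camera frustum check:

```python
import numpy as np

def visible_landmarks(cam_pos, cam_dir, landmarks, fov_cos=0.7, max_dist=20.0):
    """Illustrative visibility test: keep landmarks within a maximum
    distance and inside a cone around the camera's viewing direction.
    Landmarks failing the test would be disregarded in the emission
    power optimization."""
    out = []
    for lm in landmarks:
        v = lm - cam_pos
        d = np.linalg.norm(v)
        if d <= max_dist and np.dot(v / d, cam_dir) >= fov_cos:
            out.append(lm)
    return out

cam = np.array([0.0, 0.0, 0.0])
view = np.array([0.0, 0.0, 1.0])   # camera looks along +z
lms = [np.array([0.0, 0.0, 5.0]),   # in view
       np.array([0.0, 0.0, 30.0]),  # too far
       np.array([5.0, 0.0, 1.0])]   # outside the viewing cone
seen = visible_landmarks(cam, view, lms)
```

For a future pose, the same test would simply be evaluated at the planned or predicted camera position and orientation.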
[0073] Determining 5 the optimized emission illumination power 6 may be carried out using an optimization algorithm, in particular a constrained optimization algorithm. Once a set of landmarks is determined, distances between the camera and the chosen set of landmarks may be determined using the at least one pose estimate and the landmark map. The determined distances can then be used for parametrizing the illumination model 3, which illumination model 3 is used for providing a link between the emission illumination power and a received illumination power received by the camera (the term received illumination power corresponds to the term reflection illumination power). The optimized emission illumination power then provides a set of features in an image captured by the camera, which set of features can be used for pose determination. The pose estimate provided as input may in this way be transformed into a determined pose, i.e., into a posterior pose estimate, e.g., by using a Kalman filter.
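For the simple case of minimizing emission power (and hence energy over a fixed illumination time) subject to every chosen landmark meeting the detection threshold, the constrained problem has a closed-form solution: the landmark with the smallest path gain, typically the most distant one, is the binding constraint. A sketch with illustrative gain values:

```python
def optimal_emission_power(gains, p_threshold):
    """Solve: minimize p_emit subject to gain_i * p_emit >= p_threshold
    for every chosen landmark i. The 'gains' are the per-landmark
    outputs of the parametrized illumination model (hypothetical
    values are used in the example below)."""
    return max(p_threshold / g for g in gains)

# Three landmarks with lumped path gains; the weakest (2e-10) binds.
p_opt = optimal_emission_power([5e-10, 2e-10, 8e-10], 1e-9)
```

Richer formulations, e.g., trading off energy against posterior pose uncertainty as in the claims, would replace this closed form with a numerical constrained optimizer.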
[0074]