INDOOR VISIBLE LIGHT POSITIONING METHOD AND SYSTEM BASED ON SINGLE LED LAMP
20200374005 · 2020-11-26
CPC classification
H04N23/6845
ELECTRICITY
F21W2111/10
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F21V33/0052
F21Y2115/10
International classification
F21V33/00
Abstract
An indoor visible light positioning method and system based on a single LED lamp. The system includes an LED communication module and a smartphone module. The LED communication module sends its coordinates and shape data to the smartphone module. The smartphone module includes an inertial measurement unit (IMU) and a camera. The IMU is configured to obtain movement data of a smartphone, and the camera is configured to shoot video streams of the LED lamp. The center point coordinates of the LED lamp in the video streams and the IMU data are processed, with constraints provided by a homography matrix of the ellipses in the video streams, to obtain accurate location information of the smartphone and to provide location-based services such as navigation and query to a user.
Claims
1. An indoor visible light positioning method based on a single LED lamp, comprising: step 1: using visible light communication technology to encode and modulate to-be-sent information, and transmitting the information through an LED lamp; step 2: shooting video streams of the LED lamp by using a camera of a smartphone, and recording data measured by an inertial measurement unit (IMU) of the smartphone, the IMU measuring acceleration (a.sub.x, a.sub.y, a.sub.z) and rotation angle information (pitch, roll, and yaw) of the smartphone in all directions to calculate a movement direction and displacement (t.sub.x, t.sub.y, t.sub.z) of the smartphone; step 3: capturing LED lamp images from each frame of the video streams, decoding and demodulating the data transmitted by the LED lamp, extracting an LED lamp shape from the images, and collecting a location (X.sub.o, Y.sub.o, Z.sub.o) and a radius R.sub.o of the LED lamp in a world coordinate system; step 4: dividing the video streams into single image frames by timeslot, and for every two adjacent images I.sub.i and I.sub.i+1, calculating a homography matrix:
H.sub.i,i+1=K[R.sub.i,i+1+t.sub.i,i+1n.sub.i.sup.T/z.sub.i]K.sup.−1 wherein H.sub.i,i+1 is the homography matrix, K is an intrinsic matrix of the smartphone camera, n.sub.i is a normal vector of a lamp plane in an i.sup.th camera coordinate system, z.sub.i is a distance from a center of the camera to the lamp plane when the i.sup.th image is taken, R.sub.i,i+1 is a rotation matrix of the camera, and t.sub.i,i+1 is a displacement parameter of the camera; a general elliptic equation is:
ax.sup.2+bxy+cy.sup.2+dx+ey+f=0 for multi-view geometry, deriving the homography matrix H.sub.i,i+1 between I.sub.i and I.sub.i+1 through elliptical shapes projected onto the images, wherein an ellipse on each image is represented by a conic coefficient matrix C.sub.i:

C.sub.i=[a, b/2, d/2; b/2, c, e/2; d/2, e/2, f]
Z.sub.c=Bf/d wherein d is the distance between the center points of the LED lamp in two images; B is the actual interval between the two images taken by the camera; B is calculated based on acceleration sensor data in the IMU, or the radius R.sub.o of the LED lamp is compared with a minor axis of an ellipse obtained from the image to obtain a ratio parameter s, and then d is used to calculate B according to the same ratio parameter s; step 7: obtaining a projection matrix of the camera based on the intrinsic and extrinsic matrices of the camera, and for any point P.sub.w(X.sub.w, Y.sub.w, Z.sub.w) in space, obtaining a location (u, v) of an image point p.sub.i based on the projection matrix and the distance Z.sub.c from the LED lamp to the center of the camera:
2. The indoor visible light positioning method based on a single LED lamp according to claim 1, wherein coding and modulation is performed for the LED lamp by using run length limited (RLL) coding and Raptor coding.
3. The indoor visible light positioning method based on a single LED lamp according to claim 2, wherein the video taken by the smartphone is captured at 30 or 60 frames per second.
4. A system using the indoor visible light positioning method based on a single LED lamp according to claim 1, comprising: an LED communication module, a power module, and a smartphone module, wherein the power module supplies power to the LED communication module; the LED communication module comprises a coding and modulation module, a control module, and an LED lamp, wherein the control module controls the LED lamp according to the coding and modulation module; and the smartphone module comprises a camera, a decoding and demodulation module, an IMU module, a positioning module, and a location-based service module, wherein the camera, the decoding and demodulation module, and the positioning module are connected in turn, and the camera, the IMU module, and the location-based service module are all connected to the positioning module.
5. The system according to claim 4, wherein the location-based service module comprises a positioning and navigation module, an information service module, a tracking and identification module, and a security check module.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] In the figure, 1. circular LED lamp, 2. smartphone, 3. room with only one LED lamp, 4. encoding for the LED lamp, and 5. user with a smartphone in hand.
DETAILED DESCRIPTION
[0047] The following further describes embodiments of the present invention with reference to the drawings and specific embodiments. It should be understood that these embodiments are only used to illustrate the present invention but not intended to limit the scope of the present invention. Those skilled in the art should understand that any equivalent modifications to the present invention shall fall within the scope defined by the claims.
[0048] As shown in
[0049] In embodiments of the present invention, after coding and modulation are performed for the circular LED lamp on the ceiling, an IMU of a smartphone is used to collect movement data of the smartphone while a camera simultaneously shoots videos of the LED lamp.
[0050] The IMU measures acceleration (a.sub.x, a.sub.y, a.sub.z) and rotation angle information (pitch, roll, and yaw) of the smartphone in all directions to calculate a movement direction and displacement (t.sub.x, t.sub.y, t.sub.z) of the smartphone. During actual operation, the smartphone must capture the video streams and collect the IMU data at the same time, and the time and quantity units of the extracted data and video frames must be unified.
[0051] As a circular LED lamp is used in various embodiments of the present invention, a changing elliptic shape is projected onto each video frame image, together with some feature point information inside the ellipse. In such cases, it is necessary to obtain the focal length of the smartphone camera and the physical dimension of each pixel, and to collect and preprocess all IMU data, so as to obtain values of a pitch angle, a roll angle, and a yaw angle, as well as relative location change data of the smartphone during its movement. Ideally, intrinsic and extrinsic matrices of the smartphone camera can be obtained from such data. Generally, the intrinsic matrix M.sub.1 of a given smartphone does not change, while the extrinsic matrix M.sub.2 changes as the smartphone moves or rotates.
[0052] In computer vision, the camera perspective projection model is standard and can be found in the relevant literature: the conversion relationships between the image coordinate system, the camera coordinate system, and the world coordinate system, and the intrinsic and extrinsic parameters of the camera described by the pinhole imaging model (linear camera model). The intrinsic matrix M.sub.1 of the camera is responsible for converting the camera coordinate system to the pixel coordinate system:
M.sub.1=[f/d.sub.X, 0, u.sub.0; 0, f/d.sub.Y, v.sub.0; 0, 0, 1]

[0053] f represents the focal length of the camera; d.sub.X and d.sub.Y represent the physical dimensions of each pixel in the image along the X axis and Y axis; and (u.sub.0, v.sub.0) represents the origin of the image coordinate system, defined as the intersection of the camera's optical axis and the image plane, usually at the center of the image. The extrinsic matrix is generated as the camera moves and rotates, and keeps changing. The extrinsic matrix is generally expressed as follows:
M.sub.2=[R, t; 0.sup.T, 1]

[0054] In the extrinsic matrix, 0=(0,0,0).sup.T; R is the rotation matrix of the smartphone, an orthogonal matrix of size 3×3; and t is a three-dimensional translation vector that contains the displacement of the smartphone along the three directions of movement. A projection matrix of the camera can be obtained from the intrinsic and extrinsic matrices. Then, for any point P.sub.w in space, the location (u, v) of its image point p.sub.i can be obtained through projection matrix conversion.
[0055] Similarly, when the image point p.sub.i(u, v) is known, the point p.sub.i in the image coordinate system can be converted to P.sub.c in the camera coordinate system by using the intrinsic matrix M.sub.1, P.sub.c can be converted to P.sub.w in the world coordinate system by using the extrinsic matrix M.sub.2, and the specific coordinates (X.sub.o, Y.sub.o, Z.sub.o) of the LED lamp in the world coordinate system can then be added to P.sub.w to obtain the coordinates of the smartphone. During this conversion, the distance Z.sub.c from the LED lamp to the center of the camera needs to be obtained.
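The forward projection and the inverse conversion described in the two paragraphs above can be sketched as follows. This is a simplified model assuming the extrinsic parameters map world to camera coordinates as P.sub.c = R·P.sub.w + t; all numeric values are hypothetical:

```python
import numpy as np

def project(M1, R, t, Pw):
    """World point -> pixel (u, v): apply extrinsics [R|t], then intrinsics M1."""
    Pc = R @ Pw + t                     # world -> camera coordinates
    uv1 = M1 @ (Pc / Pc[2])             # perspective divide, then to pixels
    return uv1[:2]

def back_project(M1, R, t, u, v, Zc):
    """Pixel (u, v) with known depth Zc -> world coordinates (inverse path)."""
    Pc = Zc * (np.linalg.inv(M1) @ np.array([u, v, 1.0]))
    return R.T @ (Pc - t)               # invert the extrinsic transform

# Round trip with an identity pose and hypothetical intrinsics
M1 = np.array([[2000.0, 0.0, 960.0],
               [0.0, 2000.0, 540.0],
               [0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
Pw = np.array([0.1, -0.2, 3.0])
u, v = project(M1, R, t, Pw)
Pw_rec = back_project(M1, R, t, u, v, Zc=3.0)
```

The round trip recovers the original point only when Z.sub.c is known, which is exactly why the next paragraphs estimate Z.sub.c from parallax.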
[0056] In embodiments of the present invention, multiple images of the LED lamp are obtained by shooting video streams, to simulate a binocular visual ranging method, and obtain Z.sub.c based on triangle similarity theorems.
Z.sub.c=Bf/d
[0057] d is the parallax between two images, that is, the distance between the center points of the LED lamp in the two images, and B is the actual interval between the two images taken by the camera. B can be calculated based on acceleration sensor data in the IMU. Alternatively, the radius R.sub.o of the LED lamp is compared with the minor axis of an ellipse obtained from the image to obtain a ratio parameter s, and then d is used to calculate B according to the same ratio parameter s.
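A minimal sketch of this ranging step (function and variable names are my own; the focal length must be expressed in pixels so that units cancel against the pixel parallax):

```python
def depth_from_parallax(B, f_pixels, d_pixels):
    """Triangle-similarity ranging, Zc = B * f / d.

    B: baseline between the two shots, e.g. double-integrated from IMU acceleration;
    f_pixels: focal length in pixel units; d_pixels: parallax of the lamp center.
    """
    return B * f_pixels / d_pixels

def baseline_from_scale(d_pixels, s):
    """Alternative from the text: with s = R_o / minor_axis_in_pixels (meters per
    pixel at the lamp plane), the baseline B is recovered from the parallax d."""
    return d_pixels * s

# Hypothetical numbers: 10 cm baseline, f = 2000 px, 50 px parallax
Zc = depth_from_parallax(0.1, 2000.0, 50.0)
```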
[0058] Then, Z.sub.c is used for conversion between the pixel coordinate system and the world coordinate system to obtain the location of the smartphone in the world coordinate system. During data measurement, however, the measured yaw angle differs considerably from its true value because the magnetometer is vulnerable to interference from the ambient environment, which reduces the positioning accuracy. Therefore, the following method is used to constrain the location of the smartphone.
[0059] The video streams of the LED lamp shot by the camera are divided into single image frames by timeslot. For every two adjacent images I.sub.i and I.sub.i+1, a homography matrix can be calculated based on the knowledge of computer vision:
H.sub.i,i+1=K[R.sub.i,i+1+t.sub.i,i+1n.sub.i.sup.T/z.sub.i]K.sup.−1
[0060] n.sub.i is the normal vector of the lamp plane in the i.sup.th camera coordinate system, and z.sub.i is the distance from the center of the camera to the lamp plane when the i.sup.th image is taken. When image frames are extracted by timeslot, the movement data of the smartphone is processed to obtain the intrinsic matrix M.sub.1, the extrinsic matrix M.sub.2, and the movement distance (t.sub.x, t.sub.y, t.sub.z) of the smartphone between every two image frames.
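The plane-induced homography formula above can be sketched directly (the intrinsic values are hypothetical; with no motion between frames the homography reduces to the identity, a useful sanity check):

```python
import numpy as np

def plane_homography(K, R, t, n, z):
    """Homography induced by the lamp plane between two views:
    H = K (R + t n^T / z) K^{-1}.

    K: camera intrinsic matrix; R, t: rotation and translation between views;
    n: unit normal of the lamp plane in the i-th camera frame;
    z: distance from the camera center to the lamp plane at frame i.
    """
    return K @ (R + np.outer(t, n) / z) @ np.linalg.inv(K)

# Sanity check with hypothetical intrinsics: no motion gives the identity homography
K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0,    0.0,   1.0]])
H = plane_homography(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 1.0]), 2.5)
```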
[0061] For multi-view geometry, the homography matrix H.sub.i,i+1 between I.sub.i and I.sub.i+1 can be derived from the elliptical shapes projected onto the images. The ellipse on each image can be represented by a conic coefficient matrix:

C.sub.i=[a, b/2, d/2; b/2, c, e/2; d/2, e/2, f]
[0062] Under the homography matrix H.sub.i,i+1, C.sub.i is transferred to C.sub.i+1=H.sub.i,i+1.sup.−TC.sub.iH.sub.i,i+1.sup.−1. In this way, a precise constraint is defined for calibrating R.sub.i,i+1 and t.sub.i,i+1. Due to errors in the three rotation angles and in each displacement change, accurate solutions may not be produced directly. In this case, the system uses the following method to solve the problem.
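The conic-transfer rule can be sketched as follows; the example maps the unit circle through a hypothetical 2x magnification, which should yield a circle of radius 2 (up to scale):

```python
import numpy as np

def conic_matrix(a, b, c, d, e, f):
    """Symmetric 3x3 matrix of the conic a x^2 + b x y + c y^2 + d x + e y + f = 0."""
    return np.array([[a,     b / 2, d / 2],
                     [b / 2, c,     e / 2],
                     [d / 2, e / 2, f    ]])

def transfer_conic(C, H):
    """Map a conic under a homography of points: C' = H^{-T} C H^{-1}."""
    Hinv = np.linalg.inv(H)
    return Hinv.T @ C @ Hinv

# Unit circle x^2 + y^2 - 1 = 0 under a 2x scaling homography
C = conic_matrix(1.0, 0.0, 1.0, 0.0, 0.0, -1.0)
H = np.diag([2.0, 2.0, 1.0])
C_mapped = transfer_conic(C, H)   # represents x^2/4 + y^2/4 - 1 = 0, radius 2
```

In the method above, the mismatch between the transferred conic and the ellipse actually detected in frame i+1 is what supplies the constraint for calibrating R.sub.i,i+1 and t.sub.i,i+1.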
[0063] Solutions are obtained by optimization methods to calibrate R.sub.i,i+1 and t.sub.i,i+1, constrain the movement status of the smartphone, correct the IMU measurement data, reduce the error of the rotation angles in all directions, and improve the positioning accuracy relative to the LED lamp. In this way, the problem that the magnetometer is vulnerable to interference is resolved, and the location of the smartphone is obtained.
[0064] After positioning is realized, at least one embodiment of the present invention uses a highly reliable modulation and coding technology to build a visible light communication network, and provides location-based information services to smartphones through the communication network. User authentication and authorization can be performed first. During provision of the location-based service, the information that interests users most is filtered and sent to them, and users are allowed to store personalized information. Embodiments of the present invention can provide users with services such as positioning, navigation, query, identification, and event inspection in places like shopping malls, scenic spots, and underground garages.
[0065] As shown in
[0066] The smartphone module includes the camera and the IMU. The camera, with a focal length of f, is configured to shoot video streams of the LED lamp; meanwhile, the IMU collects movement status data of the smartphone, such as rotation angles in all directions (pitch, roll, and yaw) and acceleration (a.sub.x, a.sub.y, a.sub.z) in all directions, to obtain an intrinsic matrix M.sub.1 and an extrinsic matrix M.sub.2 of the smartphone. LED lamp images after short exposure are obtained by the camera, and data such as the specific frequency of the LED light, the center point coordinates, and the radius of the LED lamp can be identified from the ellipse images of the LED lamp.
[0067] As shown in
[0068] Firstly, the LED lamp is deployed on the ceiling, and a location of the LED lamp, such as (0, 0, 0), is set. In more complex indoor scenarios, it is necessary to assign specific, accurate coordinates to each LED lamp, and then perform coding and modulation to facilitate management by the system positioning module. In addition, the focal length of the smartphone camera and the ratio between a pixel distance and the actual distance for photography need to be extracted as basic inputs. Then, video streams of the LED lamp are shot, and IMU measurement data of the smartphone is recorded.
[0069] As shown in
[0070] As shown in
[0071] The above descriptions are merely preferred implementations of the present invention. It should be noted that a person of ordinary skill in the art may further make several improvements and modifications without departing from the principle of the present invention, but such improvements and modifications shall also be deemed as falling within the protection scope of the present invention.