System and Method for Exchanging Compressed Images Over LoRaWAN Gateways

20230231976 · 2023-07-20


    Abstract

    A system and method is disclosed for creating small-data-size representations of images, created on the image detection device, for transmission over wireless connections when there is insufficient bandwidth to support detailed images. Image data is often too large to send over low-powered, long-distance wireless connections and must be dramatically reduced to enable use of the lowest-power, longest-distance wireless platforms, including LoRa with LoRaWAN. Common image compression algorithms such as JPEG and MPEG provide only moderate reductions in data size. The described invention reduces data size beyond JPEG compression by reducing targeted image objects to simple outlines, contours or vectors. Monitoring security, wildlife, agricultural and other natural events requires images of objects including insects, crops, livestock, wildlife or intruders. Contours, outlines or vectors of targeted objects are often sufficiently recognizable to provide useful information.

    Claims

    1. A remote device for compressing image files compatible with edge computing technologies comprised of the following parts: a) an electronic circuit board; b) sensors; c) onboard digital image processing software; d) an onboard battery; e) a solar cell; f) an antenna; g) an enclosure; and h) a LoRaWAN gateway.

    2. The system for compressing image files compatible with edge computing technologies of claim 1, wherein the electronic circuit board is further comprised of a central processing unit, onboard memory, a transceiver, a LoRaWAN gateway module, an ethernet port, a USB cable port, an LTE cellular connection, an SD card port and a microSD card port.

    3. The system for compressing image files compatible with edge computing technologies of claim 1, wherein the sensors are further comprised of an image sensor with optical lens, an infrared motion sensor, a temperature sensor, a humidity sensor, a barometric pressure sensor, an ambient light sensor and a GPS location sensor.

    4. The system for compressing image files compatible with edge computing technologies of claim 1, wherein the onboard digital image processing software is further comprised of a non-transitory computer readable medium including computer readable instructions.

    5. The system for compressing image files compatible with edge computing technologies of claim 1, wherein the onboard digital image processing software is further comprised of algorithms that include background subtraction, cropping, smoothing, sharpening, thresholding, contouring, landmark detection, a predefined shape dictionary, pixel averaging, pixel background averaging, trace bitmapping, blob matching, object recognition, blob contour insertion, predefined shape dictionary replacement, and blob distillation.

    6. A method for compressing image files compatible with edge computing technologies comprised of the following steps: a) providing the remote device of claim 1; b) storing the predefined shape dictionary; c) acquiring a first baseline image of field conditions; d) inserting a time lapse; e) acquiring additional images of field conditions; f) storing images on the onboard memory as an array of pixels; g) performing pixel averaging on images; h) subtracting the image's static background; i) performing trace bitmapping (using Potrace; image file size reduced); j) thresholding the image; k) outline contouring (image file size reduced); l) performing pixel background averaging; m) searching for blobs (moving objects that differ from the baseline background); n) performing blob matching; o) performing object recognition; p) performing blob contour insertion; q) performing predefined shape dictionary replacement; r) performing blob distillation; and s) transmitting image data between the remote device and the data receiving end.

    7. The method for compressing image files compatible with edge computing technologies of claim 6, wherein storing the predefined shape dictionary further comprises the step of storing said predefined shape dictionary on the remote device and on a data receiving end.

    8. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the acquiring a first baseline image of field conditions includes the step of using the optical lens and sensors to detect the size and shape of moving objects.

    9. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the inserting a time lapse includes the step of the user determining said time lapse.

    10. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the acquiring additional images of field conditions includes the step of using the optical lens and sensors to detect the size and shape of moving objects.

    11. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the subtracting the image's static background includes the step of stepping through each of the pixels from two images and subtracting corresponding pixel colors.

    12. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the performing trace bitmapping includes the step of determining if the size and/or shape of blobs match the size and/or shape of targeted objects of interest in an image.

    13. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the performing object recognition includes the step of comparing blobs using neural networks and other machine learning software techniques.

    14. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the performing blob contour insertion includes the step of creating outlines of the blobs that are added to the object of interest by stepping through horizontal rows of image pixels and saving the x,y locations.

    15. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the performing predefined shape dictionary replacement includes the step of matching portions of a detected blob or chunk with a predefined dictionary.

    16. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the performing blob distillation includes the step of distilling the thresholded, outlined and measured blobs to small data packets that can be wirelessly transmitted over LoRa or LoRaWAN to additional computing resources.

    17. The method for compressing image files compatible with edge computing technologies of claim 6, wherein the transmitting image data between the remote device and the data receiving end includes the step of using the LoRaWAN gateway module and antenna.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0010] FIG. 1 illustrates how LoRaWAN devices can be deployed in remote field locations to exchange image based information with users or cloud based computing resources.

    [0011] FIG. 2 illustrates components inside of a field device that can digitally process image information for exchange over long-range low-powered wireless connections.

    [0012] FIG. 3 illustrates a decision tree progression of digital image processing steps that can reduce the size of image information to small packets that can be exchanged over long-range low-powered wireless connections.

    [0013] FIG. 4 shows a general, representative view of the disclosure's overall method.

    [0014] FIG. 4A. illustrates an example of software flow that uses image processing techniques to detect moving objects of interest and then dramatically reduce the data size by representing an image with tools that can optionally include a predefined shape dictionary of graphical elements in addition to or instead of mathematically defined lines, curves and outlines.

    DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

    [0015] FIG. 1 depicts a wireless network configuration that includes a remote device 102 capable of acquiring and processing images for transmission over a LoRaWAN network. Remote device 102 has an optical lens 104 for acquiring images of field conditions 106. Remote device 102 has an antenna 108 for wireless transmission of data 110 between remote device 102 and a LoRaWAN gateway 112. When internet access is not needed or not practical, LoRaWAN gateway 112 can provide field data to a nearby user device (typically a computer) 116 via various options for a data connection 114 including an Ethernet cable, USB cable or WiFi. Options for connection types and formats are provided by the manufacturer of the LoRaWAN gateway 112 and user device 116.

    [0016] In cases where an internet 118 connection is needed or available, the gateway 112 manufacturer provides options for connection types that can include WiFi, Ethernet, USB or an LTE cellular connection 120. After the data reaches LoRaWAN server software on the internet 118, end users of the data can access it via a computer 122 or smartphone 124 via various data connection types 126. When LoRaWAN is used instead of simple LoRa, multiple remote devices 102, 128 and 130 can form networks. A distantly located remote device 128 can use an intermediate device 130 as a relay or bridge for data that must make multiple hops 132 and 134 to communicate with gateway 112.

    [0017] FIG. 2 depicts components inside of a remote device 102. Inside of remote device 102 is a printed circuit board 202 with electronic components that are chosen to acquire, process, store and transmit field data. Remote device 102 has an optical lens 104 that focuses an image onto image sensor 204. Image sensor 204 can respond to visible or infrared light. Data from image sensor 204 is acquired by central processing unit (CPU) 206 and stored temporarily in memory 208. CPU 206 can store data in removable storage 210. Removable storage 210 can be in the form of an SD card, microSD card or USB storage device. CPU 206 also controls the transmission and reception of data through the LoRa or LoRaWAN module 212. LoRaWAN module 212 can use either an internal or external antenna 108 to communicate with other remote devices or users and the internet. CPU 206 can acquire, store and transmit data from various sensors 214 in addition to image data including infrared motion, temperature, humidity, barometric pressure, ambient light and GPS location. Electronic components on printed circuit board 202 can be powered by either primary or secondary batteries 216. Secondary batteries provide for automatic recharging in the field with optional solar panel 218.

    [0018] FIG. 3 depicts digital image processing steps that onboard digital image processing software performs to dramatically reduce the size of data from an image sensor so that useful information derived from the original image can be efficiently transmitted over limited bandwidth wireless connections. Said software being a non-transitory computer readable medium including computer readable instructions with various algorithms and routines to be discussed herein. The most useful information from an image often involves the size and shape of a moving object. Moving objects can be detected by infrared sensors or image processing or a combination of both. FIG. 3 depicts how a moving object can be detected by background subtraction. Image 302 was taken first. Image 304 was taken several seconds after image 302 (system designers can configure the amount of time between images based on how fast a target object is expected to pass through the image sensor's field of view). In the case of images 302 and 304, there were no moving objects, which results in two images that are almost identical.

    [0019] Data from image sensor 204 is acquired and stored by CPU 206 as an array of pixels. Each pixel in a typical 24-bit color scheme has a value between 0 and 255 for each of three color channels. CPU 206 can subtract a static ‘baseline background’ by creating a new image, stepping through each of the pixels from two images 302 and 304 and subtracting corresponding pixel colors. When images 302 and 304 are almost identical, subtracting corresponding pixel color channels results in a value close to zero. Subtracting the absolute value of this small (often zero) difference from the color channel's blank value (255 in this example) results in image 306, which is mostly blank.
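    The per-pixel subtraction described above can be sketched in Python as follows. This is a minimal single-channel illustration (the patent describes three color channels); the function and variable names are assumptions, not from the patent.

```python
BLANK = 255  # the "blank" value referenced in the description

def subtract_background(baseline, search):
    """Return |search - baseline| inverted against the blank value,
    so unchanged regions come out near 255 (mostly blank)."""
    result = []
    for row_a, row_b in zip(baseline, search):
        result.append([BLANK - abs(b - a) for a, b in zip(row_a, row_b)])
    return result

# Two nearly identical images yield a mostly blank (255) result.
img1 = [[10, 12], [11, 10]]
img2 = [[10, 13], [11, 10]]
print(subtract_background(img1, img2))  # → [[255, 254], [255, 255]]
```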

    [0020] When the same process is applied to subsequent ‘search images’ 304 and 308, many of the pixels are not identical because a new object has now moved into the image sensor's field of view. When these images are subtracted, the new object is revealed against a blank background (image 310). The next image processing step can be thresholding, where CPU 206 steps through each of the subtracted pixels to determine if it is above or below a threshold. If a pixel color value is above the threshold and close to the blank value of 255, it is changed to be exactly 255. If a pixel color value is below the threshold (further away from blank) its value is changed to 0 (black). The result is depicted as image 312. The image processing performed to arrive at image 312 has already dramatically reduced the data size of the image. For example, if the original images 302, 304 and 308 were 1200×600 24-bit bitmap images, the file size would be 2,160,000 bytes and would require about 20 days to transmit over a LoRa wireless connection. Image 312 could be transmitted as a 1-bit black-and-white bitmap that is 90,000 bytes and would take only 20 hours to transmit instead of 20 days. It could be further compressed by a jpeg algorithm to 15,000 bytes that would take only about 3.3 hours to transmit.
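    The thresholding step and the data-size arithmetic can be illustrated with the following sketch. The cutoff of 200 is an assumed value, and the sizes assume 3 bytes per pixel for 24-bit color and 1 bit per pixel for black and white.

```python
def threshold(pixels, cutoff=200, blank=255):
    """Force each subtracted pixel to pure blank (255) or pure black (0)."""
    return [[blank if p >= cutoff else 0 for p in row] for row in pixels]

print(threshold([[250, 40, 230]]))  # → [[255, 0, 255]]

# Data-size arithmetic for a 1200x600 image:
w, h = 1200, 600
color_bytes = w * h * 3    # 24-bit color: 3 bytes per pixel
mono_bytes = w * h // 8    # 1-bit black and white: 8 pixels per byte
print(color_bytes, mono_bytes)  # → 2160000 90000
```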

    [0021] Image 312 can be further reduced by image processing steps that reduce a black-and-white object to simple line contours or outlines as depicted by image 314 (herein referred to as ‘outline contouring’). Line endpoints can be transmitted as a set of x,y coordinates instead of a bitmapped image. Alternative definitions of outlines, contours and vectors utilize arcs, Bezier curves and geometric shapes, all with the goal of wirelessly transmitting meaningful image-based information with a minimal amount of data. When the contours in image 314 are reduced to approximations, a still-recognizable object shape can be transmitted with about 800 bytes and requires only about 4 minutes over a LoRa connection. Image 316 adds some measurement vectors of the detected object which can enhance the usefulness of the transmitted data while remaining within reasonable data packet sizes for LoRa and LoRaWAN connections.
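    As one illustration of such a vector representation, an outline's vertices can be emitted as SVG path text, one of the formats discussed later in the description. This is a minimal sketch; the helper name is hypothetical.

```python
def outline_to_svg_path(points):
    """Build an SVG path string from a list of (x, y) outline vertices.
    'M' moves to the first vertex, 'L' draws lines, 'Z' closes the shape."""
    head = "M {} {}".format(*points[0])
    rest = " ".join("L {} {}".format(x, y) for x, y in points[1:])
    return "{} {} Z".format(head, rest)

path = outline_to_svg_path([(100, 50), (140, 60), (150, 120)])
print(path)  # → M 100 50 L 140 60 L 150 120 Z
```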

    [0022] Reducing transmission times from 20 days to a few minutes makes image-based LoRaWAN feasible for many agriculture, security and natural sciences applications. Returning to FIGS. 1 and 2, CPU 206 can store full bitmap images on memory components 208 or 210. In cases where additional image processing or analysis performed by users or computers 116, 118 or 122 reveals a high value detection, control signals can be sent back to remote device 102 to proceed with sending more complete image data. A number of open source tools for manipulating digital images can be used to reduce a bit-mapped image to a series of outlines, contours, curves and shapes. Potrace is one such tool that traces bitmapped objects into scalable lines (herein referred to as ‘trace bitmapping’). Scalable Vector Graphics (SVG) is an open source system for defining lines and curves as formatted text; SVG exemplifies vector art.

    [0023] FIG. 4 shows a general, representative view of the disclosure's overall method (a more detailed decision tree defining and discussing the terms introduced in FIG. 4 is included in the next section associated with FIG. 4A). The overall, general steps of the method include but are not limited to: storing a digital, predefined shape dictionary file on the remote device and on a computing device on the data receiving end; acquiring a first baseline image of field conditions using the optical lens and sensors of the remote device (capturing the size and shape of moving objects); inserting a time lapse or dwell time (as determined by the user); acquiring additional images of field conditions using the optical lens and sensors of said remote device; storing captured images on the remote device's onboard memory as an array of pixels; performing pixel averaging on a plurality of captured images; subtracting images' static backgrounds by stepping through each of the pixels from at least two images and subtracting corresponding pixel colors; trace bitmapping images using existing Potrace software to further reduce image file size; thresholding the image, further reducing image file size; outline contouring, further reducing image size; performing pixel background averaging; searching for blobs (searching for chunks of digital assets on moving objects that differ from the baseline background image); performing blob matching (determining if the size and/or shape of blobs match the size and/or shape of targeted objects of interest in an image); performing object recognition (comparing blobs using neural networks and other machine learning software techniques); performing blob contour insertion (creating outlines of the blobs that are added to the object of interest by various means including stepping through horizontal rows of image pixels and saving the x,y locations); performing predefined shape dictionary replacement (matching portions of a detected blob or chunk with a predefined dictionary); performing blob distillation (distilling the thresholded, outlined and measured blobs into small data packets that can be wirelessly transmitted over LoRa or LoRaWAN to additional computing resources); and finally, transmitting image data between the remote device and the data receiving end using the LoRaWAN gateway module and antenna.

    [0024] FIG. 4A depicts an example of decision process 400 implemented by software running on CPU 206. Decision process 400 begins the task of searching for moving objects of interest in step 402 by acquiring a series of images from image sensor 204. These baseline images are averaged together, pixel-by-pixel, to produce a robust image of the background (referred to as ‘pixel background averaging’). The software decision process 400 then begins acquiring images in step 404 to search for moving objects that differ from the baseline background. Software step 406 then calculates pixel-by-pixel differences between the newly acquired search images 404 and the background baseline images 402. Software step 408 processes the image pixels and regions that differ from the baseline background. Step 408 identifies areas of substantial difference called blobs. Blobs can be thresholded to produce a simpler and more compact two-tone image 312 than when the pixels contain all of the original color information.
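    The ‘pixel background averaging’ of several baseline frames might be sketched as follows, again using single-channel nested lists for brevity; the function name is illustrative, not from the patent.

```python
def average_frames(frames):
    """Pixel-by-pixel integer average of several baseline frames,
    producing a robust background image ('pixel background averaging')."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) // n for c in range(w)]
            for r in range(h)]

# Sensor noise in individual frames averages out across captures.
frames = [[[100, 104]], [[102, 96]], [[98, 100]]]
print(average_frames(frames))  # → [[100, 100]]
```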

    [0025] If step 410 determines that there was little difference between the search image and baseline background then it returns to step 404 to acquire another search image. If step 410 does detect sufficient differences between search and baseline it then proceeds to step 412 to determine if the size and/or shape of the detected moving blobs match the size and/or shape of targeted objects of interest in the process called ‘blob matching.’ Applications typically search for a particular type of target like an animal or intruder where the general size and shape of the target are known ahead of time. The known target size and shape can then be compared to the detected blobs in step 412. Step 412 can be enhanced using neural networks and other machine learning software techniques referred to as ‘object recognition.’ If the areas of detected differences appear sufficiently scattered or widespread as to indicate an environmental change instead of a moving object of interest, the software returns to 402 to acquire a new background baseline. Examples of events that can change the background environment include clouds that move in front of the sun, wind blowing background vegetation or the sun's position in the sky gradually changing with time of day.
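    A minimal sketch of the ‘blob matching’ size/shape test might compare a blob's bounding box against the known target dimensions. The bounding-box criterion and tolerance are illustrative assumptions, not the patent's specific method.

```python
def blob_matches_target(blob_pixels, target_w, target_h, tol=0.3):
    """Compare a blob's bounding-box size to the expected target size
    within a fractional tolerance (illustrative matching criterion)."""
    xs = [x for x, y in blob_pixels]
    ys = [y for x, y in blob_pixels]
    w = max(xs) - min(xs) + 1
    h = max(ys) - min(ys) + 1
    return (abs(w - target_w) <= tol * target_w
            and abs(h - target_h) <= tol * target_h)

# A roughly 21x16-pixel blob matches a 20x15 target within 30%.
blob = [(10, 5), (30, 5), (30, 20), (10, 20)]
print(blob_matches_target(blob, target_w=20, target_h=15))  # → True
```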

    [0026] If step 412 determines that the detected blobs have characteristics that match the target object, then step 414 creates outlines of the blobs that are added to the object of interest by various means including stepping through horizontal rows of image pixels and saving the x,y locations of transitions through a process called ‘blob contour insertion.’ Step 416 provides an option for using a predefined shape dictionary instead of or in addition to object outlines. Image objects can be compared to and replaced by predefined chunks of image objects through the process called ‘predefined shape dictionary replacement.’ Sending identifiers and locations for items in a predefined dictionary of object parts requires far less data than sending a complete pixel image representation of the object. Step 418 matches portions of a detected blob with the predefined dictionary. Step 420 provides a way for the software process 400 to define new types of image object chunks that can be forwarded to additional computing resources on the cloud or elsewhere.
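    The row-scanning transition search used for blob contour insertion can be sketched as follows, treating the thresholded image as a binary nested list; names are illustrative.

```python
def row_transitions(binary_image):
    """Step through each horizontal row of pixels and save the (x, y)
    locations where the value changes -- the transition points used
    to outline a blob."""
    points = []
    for y, row in enumerate(binary_image):
        for x in range(1, len(row)):
            if row[x] != row[x - 1]:
                points.append((x, y))
    return points

img = [[0, 0, 1, 1, 0],
       [0, 1, 1, 1, 0]]
print(row_transitions(img))  # → [(2, 0), (4, 0), (1, 1), (4, 1)]
```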

    [0027] Step 422 distills the thresholded, outlined and measured blobs to small data packets that can be wirelessly transmitted over LoRa or LoRaWAN to additional computing resources in step 424 through a process called ‘blob distillation.’ LoRaWAN provides two-way communication between remote device 102 and additional computing resources. These additional computing resources are typically far more powerful than remote device 102. As such, additional image processing including neural net object recognition can be performed on the image object components sent from device 102. If this additional analysis (step 426) determines that additional higher-resolution images and other sensor (214) data are warranted then the device 102 can be commanded to do so in step 428.
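    The ‘blob distillation’ packing of an outline into a small payload might look like the following sketch. The packet format (one-byte id, one-byte vertex count, two bytes per coordinate) and the 51-byte limit are illustrative assumptions; actual LoRaWAN payload limits vary by region and data rate.

```python
import struct

LORA_MAX_PAYLOAD = 51  # example payload limit; varies by region/data rate

def distill_blob(blob_id, vertices):
    """Pack a blob id plus its outline vertices into a compact
    big-endian byte payload suitable for a single LoRa transmission."""
    payload = struct.pack(">BB", blob_id, len(vertices))
    for x, y in vertices:
        payload += struct.pack(">HH", x, y)
    return payload

pkt = distill_blob(7, [(100, 50), (140, 60), (150, 120)])
print(len(pkt), len(pkt) <= LORA_MAX_PAYLOAD)  # → 14 True
```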