EDGE-BASED PROCESSING OF AGRICULTURAL DATA
20220129673 · 2022-04-28
Inventors
CPC classification
G06N5/01
PHYSICS
G06V10/25
PHYSICS
A01B69/001
HUMAN NECESSITIES
International classification
Abstract
Implementations are disclosed for selectively operating edge-based sensors and/or computational resources under circumstances dictated by observation of targeted plant trait(s) to generate targeted agricultural inferences. In various implementations, triage data may be acquired at a first level of detail from a sensor of an edge computing node carried through an agricultural field. The triage data may be locally processed at the edge using machine learning model(s) to detect targeted plant trait(s) exhibited by plant(s) in the field. Based on the detected plant trait(s), a region of interest (ROI) may be established in the field. Targeted inference data may be acquired at a second, greater level of detail from the sensor while the sensor is carried through the ROI. The targeted inference data may be locally processed at the edge using one or more of the machine learning models to make a targeted inference about plants within the ROI.
Claims
1. A method implemented using one or more resource-constrained edge processors at an edge of a distributed computing network that are remote from one or more central servers of the distributed computing network, the method comprising: acquiring triage data from a sensor of an edge computing node carried through an agricultural field by agricultural equipment, wherein the triage data is acquired at a first level of detail; locally processing the triage data at the edge using one or more machine learning models stored on or executed by the edge computing node to detect one or more targeted plant traits exhibited by one or more plants in the agricultural field; based on the detected one or more targeted plant traits, establishing a region of interest (ROI) in the agricultural field; acquiring targeted inference data from the sensor at a second level of detail while the sensor of the edge computing node is carried through the ROI of the agricultural field, wherein the second level of detail is greater than the first level of detail; and locally processing the targeted inference data at the edge using one or more of the machine learning models stored on or executed by the edge computing node to make a targeted inference about plants within the ROI of the agricultural field.
2. The method of claim 1, wherein the sensor operates at a first frequency, and acquiring the triage data at the first level of detail comprises sampling sensor output of the sensor at a second frequency that is less than the first frequency.
3. The method of claim 1, wherein the sensor comprises a vision sensor, the first level of detail comprises a first framerate, and the second level of detail comprises a second framerate that is greater than the first framerate.
4. The method of claim 1, wherein the sensor comprises a vision sensor, the first level of detail comprises a first resolution, and the second level of detail comprises a second resolution that is greater than the first resolution.
5. The method of claim 1, further comprising downloading, from one or more of the central servers through the distributed computing network, parameters of the machine learning model used to process the targeted inference data, wherein the parameters are selected from a library of machine learning model parameters based on the detected one or more targeted plant traits.
6. The method of claim 1, wherein the same machine learning model is used to process the triage data and the targeted inference data.
7. The method of claim 1, wherein the machine learning model used to process the triage data is a triage machine learning model, the machine learning model used to process the targeted inference data is a targeted inference machine learning model, and processing the triage data using the triage machine learning model utilizes fewer computing resources than processing the targeted inference data using the targeted inference machine learning model.
8. The method of claim 1, wherein the agricultural equipment comprises an autonomous robot that is equipped with the sensor.
9. The method of claim 1, wherein the agricultural equipment comprises a farming vehicle, and the sensor is mounted on a boom of the farming vehicle.
10. The method of claim 9, wherein the sensor is integral with a modular computing device that is removably mounted on the boom, wherein the modular computing device is equipped with one or more of the edge processors.
11. The method of claim 1, further comprising acquiring additional targeted inference data from an additional sensor at a third level of detail while the additional sensor is carried through the ROI of the agricultural field.
12. The method of claim 1, wherein the detected one or more targeted plant traits comprise one or more of: a plant type; one or more heights of one or more plants; a density of a plurality of plants; presence of a plant disease or malady; a color of one or more plants; presence of one or more fruits or flowers on one or more plants; or one or more sizes of one or more fruits or flowers on one or more plants.
13. An edge computing node of a distributed computing network, the edge computing node comprising one or more sensors, one or more processors, and memory storing instructions that, in response to execution of the instructions by the one or more processors, cause the one or more processors to: acquire triage data from one or more of the sensors while the edge computing node is carried through an agricultural field by agricultural equipment, wherein the triage data is acquired at a first level of detail; locally process the triage data using one or more machine learning models stored on or executed by the edge computing node to detect one or more targeted plant traits exhibited by one or more plants in the agricultural field; based on the detected one or more targeted plant traits, establish a region of interest (ROI) in the agricultural field; acquire targeted inference data from the same sensor or a different sensor at a second level of detail while the edge computing node is carried through the ROI of the agricultural field, wherein the second level of detail is greater than the first level of detail; and locally process the targeted inference data using one or more of the machine learning models stored on or executed by the edge computing node to make a targeted inference about plants within the ROI of the agricultural field.
14. The edge computing node of claim 13, wherein the sensor operates at a first frequency, and acquiring the triage data at the first level of detail comprises sampling sensor output of the sensor at a second frequency that is less than the first frequency.
15. The edge computing node of claim 13, wherein the sensor comprises a vision sensor, the first level of detail comprises a first framerate, and the second level of detail comprises a second framerate that is greater than the first framerate.
16. The edge computing node of claim 13, wherein the sensor comprises a vision sensor, the first level of detail comprises a first resolution, and the second level of detail comprises a second resolution that is greater than the first resolution.
17. The edge computing node of claim 13, further comprising instructions to download, from one or more central servers through the distributed computing network, parameters of the machine learning model used to process the targeted inference data, wherein the parameters are selected from a library of machine learning model parameters based on the detected one or more targeted plant traits.
18. At least one non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by one or more resource-constrained edge processors at an edge of a distributed computing network that are remote from one or more central servers of the distributed computing network, cause the one or more processors to perform the following operations: acquiring triage data from a sensor of an edge computing node carried through an agricultural field by agricultural equipment, wherein the triage data is acquired at a first level of detail; locally processing the triage data at the edge using one or more machine learning models stored on or executed by the edge computing node to detect one or more targeted plant traits exhibited by one or more plants in the agricultural field; based on the detected one or more targeted plant traits, establishing a region of interest (ROI) in the agricultural field; acquiring targeted inference data from the sensor at a second level of detail while the sensor of the edge computing node is carried through the ROI of the agricultural field, wherein the second level of detail is greater than the first level of detail; and locally processing the targeted inference data at the edge using one or more of the machine learning models stored on or executed by the edge computing node to make a targeted inference about plants within the ROI of the agricultural field.
19. The at least one non-transitory computer-readable medium of claim 18, wherein the sensor operates at a first frequency, and acquiring the triage data at the first level of detail comprises sampling sensor output of the sensor at a second frequency that is less than the first frequency.
20. The at least one non-transitory computer-readable medium of claim 18, wherein the sensor comprises a vision sensor, the first level of detail comprises a first framerate, and the second level of detail comprises a second framerate that is greater than the first framerate.
21. An agricultural vehicle comprising one or more sensors, one or more edge processors, and memory storing instructions that, in response to execution of the instructions by the one or more edge processors, cause the one or more edge processors to: acquire triage data from one or more of the sensors while the agricultural vehicle travels through an agricultural field, wherein the triage data is acquired at a first level of detail; locally process the triage data using one or more machine learning models stored on or executed by one or more of the edge processors to detect one or more targeted plant traits exhibited by one or more plants in the agricultural field; based on the detected one or more targeted plant traits, establish a region of interest (ROI) in the agricultural field; acquire targeted inference data from the same sensor or a different sensor at a second level of detail while the agricultural vehicle travels through the ROI of the agricultural field, wherein the second level of detail is greater than the first level of detail; and locally process the targeted inference data using one or more of the machine learning models stored on or executed by one or more of the edge processors to make a targeted inference about plants within the ROI of the agricultural field.
22. The agricultural vehicle of claim 21, wherein the agricultural vehicle takes the form of a robot.
23. The agricultural vehicle of claim 21, wherein the agricultural vehicle comprises a tractor equipped with an edge node, wherein the edge node includes one or more of the sensors and one or more of the edge processors.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0020]
[0021]
[0022]
[0023]
[0024]
[0025]
DETAILED DESCRIPTION
[0026]
[0027] One edge site 102.sub.1 is depicted in detail in
[0028] In various implementations, components of edge sites 102.sub.1-N and central agricultural inference system 104A collectively form a distributed computing network in which edge nodes (e.g., client device 106, edge agricultural inference system 104B, farm equipment 108) are in network communication with central agricultural inference system 104A via one or more networks, such as one or more wide area networks (“WANs”) 110A. Components within edge site 102.sub.1, by contrast, may be relatively close to each other (e.g., part of the same farm or plurality of fields in a general area), and may be in communication with each other via one or more local area networks (“LANs”, e.g., Wi-Fi, Ethernet, various mesh networks) and/or personal area networks (“PANs”, e.g., Bluetooth), indicated generally at 110B.
[0029] An individual (which in the current context may also be referred to as a “user”) may operate a client device 106 to interact with other components depicted in
[0030] Central agricultural inference system 104A and edge agricultural inference system 104B (collectively referred to herein as “agricultural inference system 104”) comprise an example of a distributed computing network in which the techniques described herein may be implemented. Each of client devices 106, agricultural inference system 104, and/or farm equipment 108 may include one or more memories for storage of data and software applications, one or more processors for accessing data and executing applications, and other components that facilitate communication over a network. The computational operations performed by client device 106, farm equipment 108, and/or agricultural inference system 104 may be distributed across multiple computer systems.
[0031] Each client device 106 (and in some implementations, some farm equipment 108) may operate a variety of different applications that may be used, for instance, to obtain and/or analyze various agricultural inferences (real time and delayed) that were generated using techniques described herein. For example, a first client device 106.sub.1 operates agricultural (“AG”) client 107 (e.g., which may be standalone or part of another application, such as part of a web browser). Another client device 106.sub.X may take the form of a head-mounted display (“HMD”) that is configured to render 2D and/or 3D data to a wearer as part of a virtual reality (“VR”) immersive computing experience. For example, the wearer of client device 106.sub.X may be presented with 3D point clouds representing various aspects of objects of interest, such as fruits of crops, weeds, crop yield predictions, etc. The wearer may interact with the presented data, e.g., using HMD input techniques such as gaze directions, blinks, etc.
[0032] Individual pieces of farm equipment 108.sub.1-M may take various forms. Some farm equipment 108 may be operated at least partially autonomously, and may include, for instance, an unmanned aerial vehicle 108.sub.1 that captures sensor data such as digital images from overhead field(s) 112. Other autonomous farm equipment (e.g., robots) may include a robot (not depicted) that is propelled along a wire, track, rail or other similar component that passes over and/or between crops, a wheeled robot 108.sub.M, or any other form of robot capable of being propelled or propelling itself past crops of interest. In some implementations, different autonomous farm equipment may have different roles, e.g., depending on their capabilities. For example, in some implementations, one or more robots may be designed to capture data, other robots may be designed to manipulate plants or perform physical agricultural tasks, and/or other robots may do both. Other farm equipment, such as a tractor 108.sub.2, may be autonomous, semi-autonomous, and/or human-driven. Any of farm equipment 108 may include various types of sensors, such as vision sensors (e.g., 2D digital cameras, 3D cameras, 2.5D cameras, infrared cameras), inertial measurement unit (“IMU”) sensors, Global Positioning System (“GPS”) sensors, X-ray sensors, moisture sensors, barometers (for local weather information), photodiodes (e.g., for sunlight), thermometers, etc.
[0033] In some implementations, farm equipment 108 may take the form of one or more edge computing nodes 108.sub.3. An edge computing node 108.sub.3 may be a modular and/or portable data processing device that, for instance, may be carried through an agricultural field 112, e.g., by being mounted on another piece of farm equipment (e.g., on a boom affixed to tractor 108.sub.2 or to a truck) that is driven through field 112 and/or by being carried by agricultural personnel. Edge computing node 108.sub.3 may include logic such as processor(s), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGA), etc., configured with selected aspects of the present disclosure to capture and/or process various types of sensor data to make agricultural inferences.
[0034] In some examples, one or more of the components depicted as part of edge agricultural inference system 104B may be implemented in whole or in part on a single edge computing node 108.sub.3, across multiple edge computing nodes 108.sub.3, and/or across other computing devices, such as client device(s) 106. Thus, when operations are described herein as being performed by/at edge agricultural inference system 104B, it should be understood that those operations may be performed by one or more edge computing nodes 108.sub.3, and/or may be performed by one or more other computing devices at the edge 102, such as on client device(s) 106.
[0035] In various implementations, edge agricultural inference system 104B may include a vision data module 114B, a sampling module 116, and an edge inference module 118B. Edge agricultural inference system 104B may also include one or more edge databases 120B for storing various data used by and/or generated by modules 114B, 116, and 118B, such as vision and/or other sensor data gathered by farm equipment 108.sub.1-M, agricultural inferences, machine learning models that are applied and/or trained using techniques described herein to generate agricultural inferences, and so forth. In some implementations one or more of modules 114B, 116, and/or 118B may be omitted, combined, and/or implemented in a component that is separate from edge agricultural inference system 104B.
[0036] In various implementations, central agricultural inference system 104A may be implemented across one or more computing systems that may be referred to as the “cloud.” Central agricultural inference system 104A may receive the massive sensor data generated by farm equipment 108.sub.1-M (and/or farm equipment at other edge sites 102.sub.2-N) and process it using various techniques to make agricultural inferences. However, as noted previously, the agricultural inferences generated by central agricultural inference system 104A may be delayed (and are referred to herein as “delayed crop agricultural inferences”), e.g., by the time required to physically transport portable data devices (e.g., hard drives) from edge sites 102.sub.1-N to central agricultural inference system 104A, and/or by the time required by central agricultural inference system 104A to computationally process this massive data.
[0037] Agricultural personnel (e.g., farmers) at edge sites 102 may desire agricultural inferences much more quickly than this. However, and as noted previously, computing resources at edge agricultural inference system 104B (including edge computing node(s) 108.sub.3) may be limited, especially in comparison to those of central agricultural inference system 104A. Accordingly, edge agricultural inference system 104B at edge site 102.sub.1 may be configured to generate “triage” agricultural inferences based on relatively low-detail sensor data (referred to herein as “triage data”) gathered by farm equipment 108.sub.1-M. Other edge sites may be similarly equipped to make triage agricultural inferences based on triage data. While these triage agricultural inferences themselves may or may not be sufficiently accurate to cause agricultural personnel to alter agricultural behavior, the triage inferences can nonetheless be leveraged to pursue, at the edge, “targeted agricultural inferences.” These targeted agricultural inferences may or may not be as accurate as the delayed agricultural inferences generated by central agricultural inference system 104A. But even if the targeted agricultural inferences are somewhat less accurate, this loss of accuracy may be acceptable to farmers who need to be able to generate agricultural inferences in a timely manner.
[0038] The delayed agricultural inferences made by central agricultural inference system 104A may be used for a variety of purposes, not the least of which is to train the machine learning models used by edge inference module 118B to generate triage and/or targeted agricultural inferences. For example, central agricultural inference system 104A may include a training module 122, a central inference module 118A (which may share some characteristics with edge inference module 118B), and a central database 120A that stores one or more machine learning models. Central agricultural inference system 104A in general, and training module 122 and/or central inference module 118A in particular, may be configured to train those machine learning models (before and/or throughout their deployment) to generate triage and/or targeted agricultural inferences. To perform this training, training module 122 and central inference module 118A may utilize sensor data generated by farm equipment 108.sub.1-M, e.g., alone and/or in concert with other data 124.
[0039] In various implementations, edge agricultural inference system 104B is able to generate triage agricultural inferences in real time or near real time because edge inference module 118B selectively processes less than all the massive sensor data generated by farm equipment 108.sub.1-M. In other words, the triage data has a level of detail that is lower than, for instance, level(s) of detail the sensor(s) are capable of generating or that the sensor(s) actually generate. For example, in some implementations, sampling module 116 may be configured to sample, e.g., from one or more sensors onboard one or more farm equipment 108.sub.1-M, triage data at a frequency that is lower than the data actually generated by those sensor(s). Alternatively, the sensors themselves may be operated at lower frequencies than they are capable of while obtaining triage data. Levels of detail of sensor data are not limited to temporal frequencies (or temporal resolutions). For example, with digital imagery, levels of detail may include, in addition to framerate (i.e., temporal frequency), spatial resolution (e.g., how many pixels are included in the images), and/or spectral resolution (e.g., how many different color bands are represented by each pixel).
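The frequency decimation described in this paragraph can be sketched as follows. This is a hypothetical illustration, not an implementation of sampling module 116; the function name and parameters are invented for clarity.

```python
def sample_triage(frames, full_hz, triage_hz):
    """Keep a lower-frequency subset of a full-rate sensor frame stream.

    frames: iterable of frames captured at full_hz (e.g., 30 FPS).
    triage_hz: desired triage sampling frequency, at most full_hz.
    """
    if triage_hz > full_hz:
        raise ValueError("triage frequency cannot exceed sensor frequency")
    step = full_hz // triage_hz  # keep every step-th frame, drop the rest
    return [frame for i, frame in enumerate(frames) if i % step == 0]

# One second of a 30 FPS stream decimated to 10 FPS keeps every third frame.
frames = list(range(30))
triage = sample_triage(frames, full_hz=30, triage_hz=10)
```

The same pattern would apply to spatial or spectral resolution, e.g., downscaling images or dropping color bands before triage inference.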
[0040] Whichever the case, the triage data may be applied, e.g., continuously and/or periodically by edge inference module 118B, as input across one or more machine learning models stored in edge database 120B to generate output indicative of one or more targeted plant traits detected in/on one or more plants in the agricultural field 112. Based on the detected one or more targeted plant traits, edge agricultural inference system 104B may establish a region of interest (ROI) in agricultural field 112. Then, while the same sensor(s) or different sensor(s) are carried (e.g., by farm equipment 108) through the ROI of the agricultural field, sampling module 116 may obtain targeted inference data from the sensor(s) at a greater level(s) of detail than the triage data. For example, targeted inference data may have greater temporal, spatial, and/or spectral resolution than triage data.
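The triage-then-targeted flow of paragraphs [0039]–[0040] can be summarized as a minimal sketch. All names here (ROI, triage_pass, targeted_pass, the callables standing in for machine learning models and sensors) are hypothetical illustrations under the assumption of a single row traversal.

```python
from dataclasses import dataclass

@dataclass
class ROI:
    start_m: float  # distance along the row where the trait was detected
    end_m: float

def triage_pass(observations, triage_model, roi_radius_m=1.0):
    """Run a triage model over low-detail observations; return ROIs.

    observations: list of (position_m, frame) pairs.
    triage_model: callable returning True when a targeted trait is detected.
    """
    rois = []
    for position, frame in observations:
        if triage_model(frame):
            rois.append(ROI(position - roi_radius_m, position + roi_radius_m))
    return rois

def targeted_pass(sensor, rois, inference_model):
    """Re-acquire high-detail data inside each ROI and infer on it."""
    results = []
    for roi in rois:
        detailed = sensor.acquire(roi.start_m, roi.end_m, high_detail=True)
        results.append(inference_model(detailed))
    return results
```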
[0041] Edge agricultural inference system 104B may process the targeted inference data at the edge using one or more of the machine learning models stored in database 120B. In some cases, one or more of these machine learning model(s) may be stored and/or applied directly on farm equipment 108, such as edge computing node 108.sub.3, to make a targeted inference about plants within the ROI of the agricultural field. Thus, edge agricultural inference system 104B is only making selected targeted agricultural inferences—e.g., in response to detection of one or more targeted plant traits—in ROIs of agricultural field 112 (as opposed to across its entirety). Consequently, in spite of the relatively constrained computing resources available at edge 102.sub.1, edge agricultural inference system 104B is able to generate “good enough” agricultural inferences in real time or near real time.
[0042] In some implementations, edge agricultural inference system 104B may selectively (e.g., on an “as needed” basis) download and/or install trained models that are stored in database 120A of central agricultural inference system 104A. For example, if edge inference module 118B determines, based on processing of triage data, that a particular plant trait is present, edge agricultural inference system 104B may download new machine learning model(s) that are trained to make inferences related to those detected plant traits. As one example, inference module 118B may apply a triage machine learning model to triage data to detect, generically, the presence of plant disease, without detecting which specific plant disease(s) are present. Then, inference module 118B may request and/or download, from central agricultural inference system 104A, one or more machine learning models that are trained to detect specific types of plant disease. Inference module 118B may then apply these newly-obtained model(s) to highly-detailed targeted inference data to determine which specific plant diseases are present. Agricultural personnel may then practice more finely-targeted remedial measures.
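The on-demand model download described above might be sketched as a trait-keyed cache. The library contents and file names below are invented for illustration; in practice the fetch callable would retrieve parameters from central database 120A over the distributed computing network.

```python
# Hypothetical trait-to-model library; entries are illustrative only.
MODEL_LIBRARY = {
    "disease_generic": "models/disease_classifier.params",
    "disease_specific": "models/disease_species.params",
}

class ModelCache:
    """Cache trained model parameters at the edge, fetching on demand."""

    def __init__(self, fetch):
        self._fetch = fetch  # callable that downloads from the central server
        self._cache = {}

    def get(self, trait):
        path = MODEL_LIBRARY[trait]
        if path not in self._cache:  # download only on first use
            self._cache[path] = self._fetch(path)
        return self._cache[path]
```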
[0043] In contrast to edge agricultural inference system 104B, central inference module 118A may have the virtually limitless resources of the cloud at its disposal. Accordingly, central inference module 118A may apply all of the sensor data generated by farm equipment 108.sub.1-M (e.g., a superset of high resolution digital images acquired during a given time interval) as input across machine learning model(s) stored in central database 120A to generate the delayed agricultural inferences described previously. And in some implementations, training module 122 may train the machine learning model(s) stored in database 120A based on a comparison of these delayed agricultural inferences to ground truth data (e.g., realized crop yields, human-observed disease or blight). Based on such a comparison, training module 122 may employ techniques such as back propagation and/or gradient descent to update the machine learning model(s) stored in central database 120A. The updated machine learning model(s) may subsequently be used by both edge inference module 118B and central inference module 118A to generate, respectively, real time and delayed agricultural inferences.
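The gradient-descent update mentioned above can be illustrated in heavily simplified form; a single step moves each parameter against its loss gradient, where the gradients would come from backpropagating the comparison of delayed inferences to ground truth.

```python
def sgd_step(params, grads, lr=0.01):
    """One gradient-descent update over a flat parameter list.

    params/grads: equal-length lists of floats.
    lr: learning rate (the 0.01 default is illustrative).
    """
    return [p - lr * g for p, g in zip(params, grads)]
```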
[0044] In some implementations, edge agricultural inference system 104B may employ techniques other than (or in addition to) obtaining sensor data at various levels of detail (e.g., triage and targeted inference) in order to generate real time crop yield predictions more quickly and/or accurately. For example, one or more components of edge agricultural inference system 104B such as vision data module 114B and/or edge inference module 118B may process a subset of high spatial/temporal/spectral resolution data sampled by sampling module 116 to generate one or more image embeddings (or vectors). In some such implementations, this processing may include applying the subset of high resolution digital images as input across at least a portion of a machine learning model such as a convolutional neural network (CNN) to generate the image embeddings/vectors. Using image embeddings may be more efficient than, for instance, counting individual crops, which may require 3D reconstruction from a point cloud, object tracking, etc. With image embeddings, it is possible to estimate the density of plant parts of interest (e.g., strawberries), rather than counting individual plant parts of interest. Density of plant parts of interest may be measured per plant, per meter, etc.
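A simplified sketch of estimating density from an image embedding via a linear regression head follows. The embedding and weight values are placeholders; in a real system they would be produced by, and learned jointly with, the CNN mentioned above.

```python
def density_from_embedding(embedding, weights, bias=0.0):
    """Estimate plant-part density (e.g., strawberries per meter) from an
    image embedding using a linear regression head.

    embedding/weights: equal-length vectors of floats; hypothetical values.
    Density cannot be negative, so the estimate is clamped at zero.
    """
    return max(0.0, sum(e * w for e, w in zip(embedding, weights)) + bias)
```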
[0045] As noted previously, various types of machine learning models may be applied by inference modules 118A/B to generate crop yield predictions (real time and delayed). Additionally, various types of machine learning models may be used to generate image embeddings that are applied as input across the various machine learning models. These various models may include, but are not limited to, recurrent neural networks (RNNs), long short-term memory (LSTM) networks (including bidirectional), transformer networks, feed-forward neural networks, CNNs, support vector machines, random forests, decision trees, etc.
[0046] Additionally, other data 124 may be applied as input across these models besides sensor data or embeddings generated therefrom. Other data 124 may include, but is not limited to, historical data, weather data (obtained from sources other than local weather sensors), data about chemicals and/or nutrients applied to crops and/or soil, pest data, crop cycle data, previous crop yields, farming techniques employed, and so forth. Weather data may be obtained from various sources other than sensor(s) of farm equipment 108, such as regional/county weather stations, etc. In implementations in which local weather and/or local weather sensors are not available, weather data may be extrapolated from other areas for which weather data is available, and which are known to experience similar weather patterns (e.g., from the next county, neighboring farms, neighboring fields, etc.).
[0047] In this specification, the term “database” and “index” will be used broadly to refer to any collection of data. The data of the database and/or the index does not need to be structured in any particular way and it can be stored on storage devices in one or more geographic locations. Thus, for example, database(s) 120A and 120B may include multiple collections of data, each of which may be organized and accessed differently.
[0048]
[0049] As shown by the called-out window at top right, edge computing node 234.sub.M includes one or more sensors in the form of vision sensors 236.sub.1-N, one or more lights 238, a light controller 241, and logic 242 that is configured to carry out selected aspects of the present disclosure. Other edge computing nodes may or may not be similarly configured. Vision sensors 236.sub.1-N may take various forms, and may or may not be the same as each other. These forms may include, for instance, an RGB digital camera, an infrared camera, a 2.5D camera, a 3D camera, and so forth. In some implementations, one or more of vision sensors 236.sub.1-N may take the form of a 2D camera with an RGB and/or mono resolution of, for instance, 2590 px×2048 px or greater. One or more of these 2D vision sensors 236.sub.1-N may capture images at various framerates (frames per second, or “FPS”), such as 30 FPS. In some implementations in which one or more of vision sensors 236.sub.1-N captures 2.5D or 3D data, a point cloud having a resolution of, for instance, 640 px×480 px and/or a framerate of 10 FPS or greater may be implemented. In some implementations, vision sensor(s) 236 may capture 2D data and then generate 3D data (e.g., point clouds) using techniques such as structure from motion (SFM).
[0050] Light(s) 238 and light controller 241 may be configured to illuminate plants 240, e.g., in sync with operation of vision sensors 236.sub.1-N, in order to ensure that the vision data that is captured is illuminated sufficiently so that it can be used to make accurate agricultural inferences (whether triage or targeted inferences). Light(s) 238 may take various forms, such as the light emitting diode (LED) depicted in
[0051] Edge computing node 234.sub.M also includes one or more wireless antennas 244.sub.1-P. In some implementations, each wireless antenna 244 may be configured to transmit and/or receive different types of wireless data. For example, a first antenna 244.sub.1 may be configured to transmit and/or receive Global Navigation Satellite System (GNSS) wireless data, e.g., for purposes such as localization and/or ROI establishment. Another antenna 244.sub.P may be configured to transmit and/or receive data using the IEEE 802.11 family of protocols (Wi-Fi) or Long-Term Evolution (LTE). Another antenna 244 may be configured to transmit and/or receive 5G data. Any number of antennas 244 may be provided to accommodate any number of wireless technologies.
[0052] In some implementations, an edge computing node 234 may be capable of localizing itself within agricultural field 112 using various technologies. For example, the GNSS antenna 244.sub.1 may interact with satellite(s) to obtain a position coordinate. Additionally or alternatively, edge computing node 234 may use dead reckoning based on inertial measurement unit (IMU) data generated by, for instance, sensor(s) integral with wheels (not depicted) of vehicle 232, accelerometer(s), gyroscope(s), magnetometer(s), etc. In yet other implementations, wireless triangulation may be employed. In some implementations, edge computing node 234 may be capable of localizing itself with an accuracy of 20 cm or less, e.g., at a frequency of 10 Hz or greater (or an IMU frequency of 100 Hz or greater).
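The wheel-odometry and IMU-based localization described above can be illustrated with a minimal planar dead-reckoning step. This is only a sketch: the function name and arguments are hypothetical, and a real system would periodically correct the drifting estimate with GNSS fixes (e.g., IMU updates at 100 Hz corrected by GNSS at 10 Hz, consistent with the rates mentioned above).

```python
import math

def dead_reckon(x, y, heading_rad, speed_mps, yaw_rate_rps, dt):
    """Advance a planar pose estimate by one time step.

    speed_mps might come from wheel-integral sensors and yaw_rate_rps
    from a gyroscope; all names here are illustrative assumptions.
    """
    heading_rad += yaw_rate_rps * dt            # integrate gyro yaw rate
    x += speed_mps * math.cos(heading_rad) * dt # advance along new heading
    y += speed_mps * math.sin(heading_rad) * dt
    return x, y, heading_rad
```

In practice each call would be driven by the IMU clock, with the returned pose overwritten whenever a fresh GNSS coordinate arrives.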
[0053] Logic 242 may include various types of circuitry (e.g., processor(s), FPGA, ASIC) that is configured to carry out selected aspects of the present disclosure. For example, and as shown in the called-out window at top left in
[0054] Other configurations are possible. For example, instead of some number of TPUs, in some examples, an edge computing node 234 may include some number of GPUs, each with some number of cores. With the example operational parameters of edge computing node 234 described herein, in some examples, edge computing node 234 may be capable of being moved (or moving itself) at various speeds to perform its tasks, such as up to 12 m/s.
[0055] Storage module 248 may be configured to acquire and store, e.g., in various types of memories onboard edge computing node 234, sensor data acquired from one or more sensors (e.g., vision sensors 236.sub.1-N). In order to accommodate sensor input streams of, for instance, 1 GB/s, storage module 248 may, in some cases, initially write sensor data to a dedicated logical partition on a Non-Volatile Memory Express (NVMe) drive. Subsequently, e.g., after processing by inference module 118B, the sampled data may be copied to a Redundant Array of Inexpensive Disks (RAID) solid state drive for long-term storage. Stereo module 250 may be provided in some implementations in order to reconcile images captured by 2D vision sensors that are slightly offset from each other, and/or to generate 3D images and/or images with depth.
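The two-tier write path described above (fast ingest to an NVMe partition, then a copy to RAID storage after processing) can be sketched as follows, with ordinary directories standing in for the two devices; the function name and paths are illustrative, not from the disclosure.

```python
import os
import shutil

def ingest_then_archive(frame_bytes: bytes, fast_dir: str,
                        archive_dir: str, name: str) -> str:
    """Write sensor data to fast storage first, then copy it to
    long-term storage, mirroring the NVMe-then-RAID scheme above."""
    fast_path = os.path.join(fast_dir, name)
    with open(fast_path, "wb") as f:          # low-latency ingest write
        f.write(frame_bytes)
    archive_path = os.path.join(archive_dir, name)
    shutil.copy2(fast_path, archive_path)     # deferred copy to long-term store
    return archive_path
```

In a deployed node the copy would be deferred until after the inference module has processed the sampled data, rather than performed inline as here.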
[0056]
[0057] An ROI is established in the example of
[0058] In
[0059] Exit conditions may take various forms and may be configured, for instance, by agricultural personnel. In the example of
[0060] Boom 230 is then carried over the second-from-left row, upward along the page. In this row, a second ROI 350.sub.2 is established upon detection of weed 362. Second ROI 350.sub.2 may be tied off when either two consecutive instances of the desired plant type are detected at the top of the second-from-left row, or at the end of the second-from-left row no matter what plants are detected. Similarly, in the third-from-left row (travelling downward on the page), another ROI 350.sub.3 is established upon detection of another weed 364, and continues until the end of the row because no instances of two or more consecutive desired plants were detected. The rightmost row contains only desired plants, and consequently, no ROIs are established there.
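The ROI open/close logic walked through above — open an ROI when a non-desired plant is detected, tie it off after two consecutive desired plants or at the end of the row — can be sketched as a small state machine. The class and method names, and the representation of ROIs as index pairs, are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RoiTracker:
    """Tracks ROI state along one crop row, as in the example above."""
    consecutive_desired_to_exit: int = 2
    roi_open: bool = False
    rois: list = field(default_factory=list)
    _desired_streak: int = 0
    _start_index: int = 0

    def observe(self, index: int, is_desired_plant: bool) -> None:
        if not is_desired_plant:
            self._desired_streak = 0
            if not self.roi_open:          # open an ROI at the weed
                self.roi_open = True
                self._start_index = index
        elif self.roi_open:
            self._desired_streak += 1
            if self._desired_streak >= self.consecutive_desired_to_exit:
                # exit condition met: tie off before the desired-plant run
                self._close(index - self.consecutive_desired_to_exit)

    def end_of_row(self, last_index: int) -> None:
        if self.roi_open:                  # row ended with ROI still open
            self._close(last_index)

    def _close(self, end_index: int) -> None:
        self.rois.append((self._start_index, end_index))
        self.roi_open = False
        self._desired_streak = 0
```

A row like desired, desired, weed, desired, desired would yield a single ROI covering just the weed's position, while a row ending mid-ROI would yield an ROI running to the end of the row, matching ROI 350.sub.3 above.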
[0061] Various types of targeted inferences may be generated based on the targeted inference data captured while boom 230 is carried over ROIs 350.sub.1-3. For example, one or more machine learning models trained to identify specific types of weeds may be downloaded, e.g., by edge agricultural inference system 104B from central agricultural inference system 104A, in response to detection of non-desired plants from the triage data. These newly-downloaded machine learning models (e.g., CNNs) may be applied, e.g., by edge inference module 118B, to the targeted inference data obtained while boom 230 is carried through ROIs 350.sub.1-3 to determine which type(s) of weeds were detected. Depending on the type(s) of weeds detected, different remedial actions may be taken, such as applying selected herbicides, manually destroying/pulling weeds, culling whole rows to prevent further spread, and so forth.
[0062]
[0063] In
[0064] Inference module 118B may process the triage data and, upon detecting one or more targeted plant traits exhibited by one or more plants in the agricultural field, send a command to sampling module 116 to start (or establish, or open) an ROI. Upon receiving that command, sampling module 116 may begin sampling targeted inference data from the high framerate/resolution stream of vision data generated by vision sensor 436. This targeted inference data may have greater detail (e.g., higher framerate and/or spatial/spectral resolution) than the triage data. In some cases the targeted inference data may have the same or similar level of detail as the data generated natively by vision sensor 436.
[0065] Upon processing the targeted inference data, inference module 118B may make one or more targeted inferences, data indicative of which may be provided to AG client 107. AG client 107 may in turn generate output that conveys the targeted inference in some fashion. For example, AG client 107 may report the inference directly (e.g., chart(s) showing crop yield predictions, maps showing location(s) of weeds/blight, etc.). Additionally or alternatively, AG client 107 may make recommended changes to agricultural operations (e.g., “increase irrigation by 20%”), recommend remedial actions (e.g., eliminating weeds or pests) as applicable, and so forth.
[0066] Once the exit condition for the ROI is detected (e.g., no more plants exhibiting a particular trait) by inference module 118B, inference module 118B may send sampling module 116 a command to end the ROI. This may cause sampling module 116 to cease obtaining highly-detailed targeted inference data from vision sensor 436, and go back to sampling less-detailed triage data from vision sensor 436. In various implementations, the highly-detailed vision data generated natively by vision sensor 436 may be provided to, for instance, central agricultural inference system 104A, for more comprehensive, if somewhat delayed, processing and inferences.
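The sampling behavior described in the preceding paragraphs — triage-rate subsampling of the sensor's native high-detail stream, full-rate sampling while an ROI is open, and reversion on the end-ROI command — can be sketched as follows. The class name, the `start_roi`/`end_roi` commands, and the stride value are hypothetical stand-ins for sampling module 116's actual interface.

```python
class SamplingModule:
    """Subsamples a sensor's native frame stream.

    Outside an ROI, only every Nth frame is kept (triage data); inside
    an ROI, every frame is kept (targeted inference data)."""

    def __init__(self, triage_stride: int = 10):
        self.triage_stride = triage_stride
        self.in_roi = False
        self._count = 0

    def start_roi(self) -> None:
        """E.g., on a start-ROI command from the inference module."""
        self.in_roi = True

    def end_roi(self) -> None:
        """E.g., when the inference module detects the exit condition."""
        self.in_roi = False

    def keep(self) -> bool:
        """Return True if the current incoming frame should be sampled."""
        self._count += 1
        if self.in_roi:
            return True                       # targeted: keep every frame
        return (self._count - 1) % self.triage_stride == 0  # triage: 1-in-N
```

With a stride of 10 against a 30 FPS sensor, triage data would arrive at roughly 3 FPS until an ROI is opened, at which point the full 30 FPS stream is sampled.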
[0067]
[0068] Inference module 118B may process the triage data and, upon detecting one or more targeted plant traits exhibited by one or more plants in the agricultural field, send a command to sampling module 116 to start (or establish, or open) an ROI. In response, sampling module 116 may send a command to vision sensor 436 to increase the level of detail at which it captures image data. In response, vision sensor 436 begins sending targeted inference data which, as noted previously, has a greater level of detail (e.g., framerate, spatial and/or spectral resolution) than the triage data.
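In this variant the sensor itself is reconfigured, rather than its full stream being subsampled downstream. A toy sketch of such a commandable sensor follows; the class name, the specific framerates, and the use of the 2590 px×2048 px resolution mentioned earlier as the targeted mode are illustrative assumptions.

```python
class ConfigurableVisionSensor:
    """Toy sensor whose capture detail can be switched on command."""

    # Illustrative modes: low-detail triage vs. high-detail targeted capture.
    TRIAGE = {"fps": 3, "resolution": (1295, 1024)}
    TARGETED = {"fps": 30, "resolution": (2590, 2048)}

    def __init__(self):
        self.mode = dict(self.TRIAGE)   # start in triage mode

    def set_detail(self, targeted: bool) -> None:
        """E.g., invoked when the sampling module relays a start/end-ROI command."""
        self.mode = dict(self.TARGETED if targeted else self.TRIAGE)
```

Under this scheme the start-ROI command propagates from inference module 118B through sampling module 116 to the sensor, which then natively emits the higher-detail stream until the end-ROI command arrives.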
[0069] As was the case in
[0070]
[0071] At block 502, the system may acquire triage data from a sensor of an edge computing node (e.g., 108.sub.3, 234) carried through an agricultural field by agricultural equipment. The triage data may be acquired at a first level of detail that is lower than, for instance, a level of detail of which the sensor is capable. For example, sampling module 116 may sample the triage data as a subset of a data stream generated by the sensor, as demonstrated in
[0072] At block 504, the system may locally (i.e., at edge site 102) process the triage data at the edge using one or more machine learning models stored on or executed by the edge computing node to detect one or more targeted plant traits exhibited by one or more plants in the agricultural field. For example, one or more edge computing nodes 108.sub.3/234 may apply image data captured by their vision sensor(s) 236 as input across one or more CNNs to detect the presence of various phenotypic attributes and/or other plant traits, such as plant type, presence of plant disease/blight, coloring indicative of ill plant health, presence of pests, fruits/flowers/buds/nuts (e.g., present, at a particular stage of development, at a threshold density), and so forth.
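The output stage of such a triage model can be sketched as a simple thresholding of per-class scores into detected trait labels. The trait names and the 0.5 threshold are illustrative; the disclosure does not prescribe a particular decision rule.

```python
def detect_traits(class_probs: dict, threshold: float = 0.5) -> list:
    """Map per-class probabilities from a triage CNN to detected plant
    traits, keeping every trait whose score meets the threshold."""
    return sorted(trait for trait, p in class_probs.items() if p >= threshold)
```

Any non-empty result would correspond to block 504 detecting a targeted plant trait and hence, at block 506, opening an ROI.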
[0073] Based on the detected one or more targeted plant traits, at block 506, the system may establish an ROI in the agricultural field, such as the ROIs 350.sub.1-3 depicted in
[0074] At block 508 (which may be performed before, after, or concurrently with the operations of block 510), the system may download, from one or more central servers through the distributed computing network, parameters of the machine learning model to be used to process targeted inference data. The parameters may be selected from a library of machine learning model parameters (e.g., central database 120A) based on the detected one or more targeted plant traits. For example, if plant disease is potentially detected via leaves having spots, then one or more sets of machine learning model parameters associated with detection of one or more plant diseases that might cause spots may be selectively downloaded, e.g., by edge inference module 118B. If the requisite machine learning models/parameters are already stored at the edge computing node 108.sub.3/234 that will be performing the targeted inferencing, e.g., in database 120B, then block 508 may be omitted. Alternatively, edge computing node 108.sub.3/234 may download the parameters from a local server deployed at the edge (which itself may obtain those parameters from database 120A).
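The selective download at block 508 can be sketched as a lookup from detected traits into a library index, skipping anything already cached at the edge (in which case block 508 may be omitted, as noted above). The mapping structure, trait names, and model keys are all hypothetical.

```python
def select_model_keys(detected_traits, trait_to_models, already_cached):
    """Choose which model-parameter sets to fetch for targeted inference.

    trait_to_models stands in for the library index of central database
    120A; already_cached stands in for the edge database 120B contents."""
    needed = set()
    for trait in detected_traits:
        needed.update(trait_to_models.get(trait, ()))
    return sorted(needed - set(already_cached))  # download only what's missing
```

For instance, detecting spotted leaves might map to several spot-causing-disease models, of which only the uncached ones are pulled from the central server (or a local edge server).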
[0075] At block 510, the system, e.g., by way of sampling module 116, may acquire targeted inference data from the sensor at a second level of detail while the sensor of the edge computing node is carried through the ROI of the agricultural field. As noted previously, the second level of detail may be greater than the first level of detail, e.g., in terms of spatial, spectral, and/or temporal resolution/frequency, and may already be generated by the sensor (
[0076] At block 512, the system, e.g., by way of edge inference module 118B, may locally process the targeted inference data at the edge using one or more of the machine learning models stored on or executed by the edge computing node to make a targeted inference about plants within the ROI of the agricultural field.
[0077]
[0078] User interface input devices 622 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In some implementations in which computing device 610 takes the form of an HMD or smart glasses, a pose of a user's eyes may be tracked for use, e.g., alone or in combination with other stimuli (e.g., blinking, pressing a button, etc.), as user input. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device 610 or onto a communication network.
[0079] User interface output devices 620 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, one or more displays forming part of an HMD, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device 610 to the user or to another machine or computing device.
[0080] Storage subsystem 624 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 624 may include the logic to perform selected aspects of the method 500 described herein, as well as to implement various components depicted in
[0081] These software modules are generally executed by processor 614 alone or in combination with other processors. Memory 625 used in the storage subsystem 624 can include a number of memories including a main random access memory (RAM) 630 for storage of instructions and data during program execution and a read only memory (ROM) 632 in which fixed instructions are stored. A file storage subsystem 626 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 626 in the storage subsystem 624, or in other machines accessible by the processor(s) 614.
[0082] Bus subsystem 612 provides a mechanism for letting the various components and subsystems of computing device 610 communicate with each other as intended. Although bus subsystem 612 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
[0083] Computing device 610 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 610 depicted in
[0084] While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.