Patent classifications
G06V20/38
Semantic Segmentation to Identify and Treat Plants in a Field and Verify the Plant Treatments
A farming machine including a number of treatment mechanisms treats plants according to a treatment plan as the farming machine moves through the field. The control system of the farming machine executes a plant identification model configured to identify plants in the field for treatment. The control system generates a treatment map identifying which treatment mechanisms to actuate to treat the plants in the field. To generate the treatment map, the farming machine captures an image of the plants and processes the image with the plant identification model. The plant identification model can be a convolutional neural network having an input layer, an identification layer, and an output layer. The input layer has the dimensionality of the image, the identification layer has a greatly reduced dimensionality, and the output layer has the dimensionality of the treatment mechanisms.
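The patent does not disclose the model's internals, but the key idea, collapsing a high-dimensional image into a low-dimensional vector with one entry per treatment mechanism, can be sketched without a neural network. The sketch below assumes (hypothetically) that each spray nozzle covers an equal-width vertical strip of the image and fires when enough plant pixels fall in its strip; `plant_mask`, `treatment_map`, and the threshold are illustrative names, not from the patent.

```python
import numpy as np

def treatment_map(plant_mask: np.ndarray, n_mechanisms: int,
                  threshold: float = 0.1) -> np.ndarray:
    """Collapse a per-pixel plant mask (H x W, values in [0, 1]) into a
    boolean vector saying which of n_mechanisms nozzles to actuate.

    Illustrative assumption: each nozzle covers an equal-width vertical
    strip of the image; a nozzle fires if the fraction of plant pixels
    in its strip exceeds `threshold`.
    """
    strips = np.array_split(plant_mask, n_mechanisms, axis=1)
    coverage = np.array([s.mean() for s in strips])
    return coverage > threshold

# Example: plant pixels only in the left quarter of a 4x8 image
mask = np.zeros((4, 8))
mask[:, :2] = 1.0
print(treatment_map(mask, n_mechanisms=4))  # → [ True False False False]
```

In the patented system this reduction is learned end-to-end by the convolutional network rather than hard-coded, but the input/output dimensionalities match the ones the abstract describes.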
Micro-precision application of multiple treatments to agricultural objects
Various embodiments relate generally to computer vision and automation to autonomously identify and deliver for application a treatment to an object among other objects; to data science and data analysis, including machine learning, deep learning, and other disciplines of computer-based artificial intelligence, to facilitate identification and treatment of objects; and to robotics and mobility technologies to navigate a delivery system; more specifically, to an agricultural delivery system configured to identify and apply, for example, an agricultural treatment to an identified agricultural object. In some examples, a method may include receiving data representing a policy specifying a type of action for an agricultural object, selecting an emitter with which to perform a type of action for the agricultural object as one of one or more classified subsets, and configuring the agricultural projectile delivery system to activate an emitter to propel an agricultural projectile to intercept the agricultural object.
MULTIPLE EMITTERS TO TREAT AGRICULTURAL OBJECTS FROM MULTIPLE PAYLOAD SOURCES
Various embodiments relate generally to computer vision and automation to autonomously identify and deliver for application a treatment to an object among other objects; to data science and data analysis, including machine learning, deep learning, and other disciplines of computer-based artificial intelligence, to facilitate identification and treatment of objects; and to robotics and mobility technologies to navigate a delivery system; more specifically, to an agricultural delivery system configured to identify and apply, for example, an agricultural treatment to an identified agricultural object. In some examples, a method may include receiving data representing a policy specifying a type of action for an agricultural object, selecting an emitter with which to perform a type of action for the agricultural object as one of one or more classified subsets, and configuring the agricultural projectile delivery system to activate an emitter to propel an agricultural projectile to intercept the agricultural object.
SYSTEM AND METHOD FOR VIEWSHED ANALYSIS
In variants, a method for viewshed analysis can include: determining a location, determining a set of location viewpoints for the location, determining a viewshed for the location, determining a set of view factors for the location, and determining a view factor representation for the location based on the viewshed and the set of view factors, optionally determining a view parameter for the location, and/or any other suitable elements.
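The abstract leaves the viewshed computation itself unspecified; the core operation is conventionally a line-of-sight sweep over terrain elevations. The sketch below shows that sweep on a 1D elevation profile, tracking the steepest sight-line slope seen so far; the function name, observer height, and profile are illustrative assumptions, not from the patent.

```python
def viewshed_1d(elevations, observer_idx, observer_height=1.8):
    """Which cells along a 1D elevation profile are visible from an
    observer? Standard line-of-sight sweep (illustrative, not the
    patented method): a cell is visible only if the slope of the
    sight line to it exceeds the maximum slope seen so far.
    """
    eye = elevations[observer_idx] + observer_height
    visible = [False] * len(elevations)
    visible[observer_idx] = True
    for direction in (1, -1):           # sweep right, then left
        max_slope = float("-inf")
        i = observer_idx + direction
        while 0 <= i < len(elevations):
            slope = (elevations[i] - eye) / abs(i - observer_idx)
            if slope > max_slope:       # clears all intermediate terrain
                max_slope = slope
                visible[i] = True
            i += direction
    return visible

profile = [0, 2, 1, 3, 0, 0]
print(viewshed_1d(profile, 0))  # → [True, True, False, True, False, False]
```

The hill at index 1 hides index 2, while the taller hill at index 3 rises back above the sight line; real viewshed tools run this sweep along every ray of a 2D elevation grid.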
Method for tracing pollution at drainage outlet of culvert
A method for tracing pollution at a drainage outlet of a culvert. (1) The main line of the culvert is fully inspected by an inspection robot to obtain basic data for each outlet, including image data, water quality data, and coordinates. (2) The type of each outlet is determined from the data obtained in step (1). (3) Pollutants at drainage outlets with suspected rainwater-sewage cross-connections are traced, and all sources of the pollutants flowing to those outlets are located.
UPWARD FACING LIGHT SENSOR FOR PLANT DETECTION
A farming machine is configured to identify and treat plants in a field. The farming machine includes one or more light sensors for measuring a characteristic of light. The one or more light sensors are coupled to the farming machine and are directed in a substantially upward orientation, away from the plants. A control system adjusts settings of an image acquisition system based on a characteristic of light measured by the one or more light sensors. The image acquisition system captures an image of a plant using one or more image sensors coupled to the farming machine, the one or more image sensors directed in a substantially downward orientation towards the plants. The control system identifies a plant in the image and actuates a treatment mechanism to treat the identified plant.
Method for categorizing a scene comprising a sub-scene with machine learning
A method for identifying a scene, comprising a computing device receiving a plurality of data points corresponding to a scene; the computing device determining one or more subsets of data points from the plurality of data points that are indicative of at least one sub-scene in said scene, said at least one sub-scene displayed on a display device that is part of said scene, wherein said at least one sub-scene does not represent said scene; the computing device categorizing said scene, disregarding said at least one sub-scene, wherein the categorizing includes interpreting said scene by a computer vision system such that said at least one sub-scene is not taken into account in the categorizing of said scene.
Image-based soil characteristic mapping
Embodiments of a soil mapping process include the step or process of receiving georeferenced field images of vegetation within an imaged crop field, the georeferenced field images transmitted from a field imaging source to an image-based soil mapping system. The image-based soil mapping system visually analyzes the georeferenced field images to identify and categorize non-crop vegetation distributions present within the imaged crop field; compiles a soil characteristic map for the imaged crop field based, at least in part, on a comparison between the non-crop vegetation distributions and data stored in a first database correlating different categories of non-crop vegetation with variations in one or more soil characteristics; and then provides the soil characteristic map for user viewing on a display device.
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, IMAGING APPARATUS, AND STORAGE MEDIUM
Provided is an imaging apparatus including a determination unit configured to perform scene determination and a combination unit configured to combine a plurality of images different in focus position in an optical axis direction. The combination unit automatically performs the combination based on a result of the scene determination. A composite image to be generated in a case where the combination unit performs the combination is deeper in depth of field than the plurality of images.
Plant Group Identification
A farming machine moves through a field and includes an image sensor that captures an image of a plant in the field. A control system accesses the captured image and applies the image to a machine learned plant identification model. The plant identification model identifies pixels representing the plant and categorizes the plant into a plant group (e.g., plant species). The identified pixels are labeled as the plant group and a location of the pixels is determined. The control system actuates a treatment mechanism based on the identified plant group and location. Additionally, the images from the image sensor and the plant identification model may be used to generate a plant identification map. The plant identification map is a map of the field that indicates the locations of the plant groups identified by the plant identification model.
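The step of turning labeled pixels into locations for the plant identification map can be illustrated with a small sketch. The assumptions here are mine, not the patent's: the model's output is a per-pixel integer label map (0 for background, one integer per plant group), each group occupies one contiguous region, and pixel centroids stand in for field coordinates.

```python
import numpy as np

def plant_locations(label_map: np.ndarray) -> dict:
    """From a per-pixel plant-group label map (0 = background), compute
    the pixel centroid of each labeled group — a toy stand-in for the
    step of converting identified pixels into treatment locations.
    Illustrative assumption: one contiguous plant per group label."""
    out = {}
    for group in np.unique(label_map):
        if group == 0:
            continue  # skip background pixels
        rows, cols = np.nonzero(label_map == group)
        out[int(group)] = (float(rows.mean()), float(cols.mean()))
    return out

labels = np.zeros((4, 4), dtype=int)
labels[0:2, 0:2] = 1   # group 1 (e.g. a weed species) in the top-left
labels[3, 3] = 2       # group 2 (e.g. a crop) at the bottom-right
print(plant_locations(labels))  # → {1: (0.5, 0.5), 2: (3.0, 3.0)}
```

In the patented system these pixel locations would be georeferenced and accumulated across images to build the field-wide plant identification map.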