Patent classification: G06F16/53
METHOD AND APPARATUS FOR EMPLOYING DEEP LEARNING NEURAL NETWORK TO INFER REGENERATIVE COVER CROP PRACTICES
A computer-implemented method for predicting a cropland data layer (CDL) for a current year includes: retrieving a first set of records from a historical CDL database, where the first set corresponds to sampled areas of a region taken over a time period for a number of years; retrieving a second set of records from a historical imagery database, where the second set corresponds to the sampled areas of the region, the time period, and the number of years; employing the second set as inputs to train a deep learning network to generate the first set; retrieving a third set of records from a current imagery database, where the third set corresponds to a prescribed region, the time period, and the current year; and using the third set as inputs and executing the trained deep learning network to generate a predicted CDL for the current year.
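The claimed train-then-predict flow can be sketched as follows, with a small scikit-learn multilayer perceptron standing in for the deep learning network and synthetic arrays standing in for the three record sets (all names, shapes, and class codes here are illustrative assumptions, not from the patent):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# First set: historical CDL labels per sampled area (e.g. 0=corn, 1=soy, 2=fallow).
historical_cdl = rng.integers(0, 3, size=500)
# Second set: historical imagery features for the same areas, time period, and years.
historical_imagery = rng.normal(size=(500, 8))
historical_imagery[:, 0] += historical_cdl          # make the labels learnable

# Train the network to reproduce the historical CDL from historical imagery.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(historical_imagery, historical_cdl)

# Third set: current-year imagery for the prescribed region, same time period.
current_imagery = rng.normal(size=(100, 8))
predicted_cdl = model.predict(current_imagery)      # predicted CDL for the current year
```

In practice the inputs would be multi-temporal satellite imagery rasters rather than flat feature vectors, and the network would be correspondingly deeper.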
Systems and methods for providing an extended reality interface
Some embodiments include a system comprising an extended reality (XR) display device configured to display an XR interface to a user; at least one hardware processor communicatively coupled to the XR display device and configured to perform: receiving a model of a room; receiving a plurality of furniture models corresponding to a plurality of pieces of furniture; providing the XR interface using the model of the room and the plurality of furniture models at least in part by: displaying, via the XR display device, a furniture display comprising at least some of the plurality of furniture models and a search field to permit entry of text; detecting entry of a text string into the search field; identifying, using the text string, at least one furniture model from the plurality of furniture models; and displaying, via the XR display device, the at least one furniture model in the furniture display.
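The text-search step of the claimed XR interface (detecting a text string and identifying matching furniture models from the plurality) reduces to a simple catalog filter; the catalog entries and field names below are hypothetical:

```python
# Hypothetical furniture catalog; names and mesh files are illustrative.
furniture_models = [
    {"name": "oak dining table", "mesh": "table_01.glb"},
    {"name": "leather sofa", "mesh": "sofa_02.glb"},
    {"name": "oak bookshelf", "mesh": "shelf_03.glb"},
]

def search_furniture(models, text):
    """Identify furniture models whose names match the entered text string."""
    query = text.lower().strip()
    return [m for m in models if query in m["name"].lower()]

# Entry of "oak" into the search field narrows the furniture display.
matches = search_furniture(furniture_models, "oak")
```

The matched models would then be rendered in the furniture display via the XR display device.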
EDGE COMPUTING-BASED CONTROL METHOD AND APPARATUS, EDGE DEVICE AND STORAGE MEDIUM
Provided are an edge computing-based control method and apparatus, an edge device and a storage medium. The method includes: acquiring an analysis processing tool that implements the image analysis processing performed in a cloud server; in a case where the cloud server is in a fault state, performing image analysis processing on a to-be-processed image with the analysis processing tool to obtain an analysis processing result; and synchronizing the analysis processing result to the cloud server.
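A minimal sketch of the claimed control flow, assuming hypothetical class and method names for the edge device and the cloud server:

```python
class EdgeController:
    """Edge-side controller; all names here are illustrative, not from the patent."""
    def __init__(self, cloud, tool):
        self.cloud = cloud      # handle to the cloud server
        self.tool = tool        # analysis processing tool acquired from the cloud
        self.pending = []       # results awaiting synchronization

    def handle_image(self, image):
        if self.cloud.is_faulty():
            # Cloud server is in a fault state: analyze locally on the edge device.
            result = self.tool(image)
            self.pending.append(result)
            return result
        return self.cloud.analyze(image)

    def synchronize(self):
        # Push locally produced results back to the cloud once it recovers.
        while self.pending and not self.cloud.is_faulty():
            self.cloud.upload(self.pending.pop(0))

class StubCloud:
    """Stand-in cloud server used to exercise the fault-state path."""
    def __init__(self):
        self.faulty = True
        self.received = []
    def is_faulty(self):
        return self.faulty
    def analyze(self, image):
        return ("cloud", image)
    def upload(self, result):
        self.received.append(result)

cloud = StubCloud()
edge = EdgeController(cloud, tool=lambda img: ("edge", img))
local_result = edge.handle_image("frame_1")   # cloud faulty: processed locally
cloud.faulty = False
edge.synchronize()                            # result synchronized to the cloud
```

The key property is that edge-side results are queued while the cloud is down and drained once it recovers, so no analysis result is lost.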
GENERATING OBJECT-BASED LAYERS FOR DIGITAL IMAGE EDITING USING OBJECT CLASSIFICATION MACHINE LEARNING MODELS
The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating image layers and determining layer labels utilizing a machine learning approach. For example, the disclosed systems utilize an image segmentation machine learning model to segment the digital image and identify individual objects depicted within the digital image. Additionally, in some embodiments, the disclosed systems determine object classifications for the depicted objects by utilizing an object classification machine learning model. In some cases, the disclosed systems further generate image layers for the digital image by generating a separate layer for each identified object (or for groups of similar objects). In certain embodiments, the disclosed systems also determine layer labels for the image layers according to the object classifications of the respective objects depicted in each of the image layers.
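The layer-generation step can be sketched as below, with trivial stand-ins for the segmentation and classification models (a real system would run actual ML models; the dictionary fields and layer-naming scheme are assumptions):

```python
def segment_image(image):
    # An image segmentation model would return one object (mask) per detection;
    # here the objects are assumed to be pre-computed.
    return image["objects"]

def classify_object(obj):
    # An object classification model would predict a class label per object;
    # here the label is assumed to be pre-computed.
    return obj["class"]

def generate_layers(image, group_similar=False):
    """Create one image layer per object (or per class), labeled by class."""
    layers = {}
    for obj in segment_image(image):
        label = classify_object(obj)
        if group_similar:
            layers.setdefault(label, []).append(obj)   # one layer per class
        else:
            layers[f"{label} {len(layers) + 1}"] = [obj]  # one layer per object
    return layers

photo = {"objects": [{"class": "dog", "mask": "m1"},
                     {"class": "dog", "mask": "m2"},
                     {"class": "tree", "mask": "m3"}]}
layers = generate_layers(photo)                       # separate layer per object
grouped = generate_layers(photo, group_similar=True)  # grouped by class
```

Either way, the layer labels come directly from the object classifications, as in the disclosure.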
ROAD SIGN CONTENT PREDICTION AND SEARCH IN SMART DATA MANAGEMENT FOR TRAINING MACHINE LEARNING MODEL
Systems and methods for machine-learning-assisted road sign content prediction and machine learning training are disclosed. A sign detector model processes images or video containing road signs. A visual attribute prediction model extracts visual attributes of the sign in the image. The visual attribute prediction model can communicate with a knowledge graph reasoner, which validates the model by applying various rules to its output. A plurality of potential sign candidates matching the predicted visual attributes are retrieved, and the rules help to narrow the list of potential sign candidates and improve the accuracy of the model.
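The candidate-retrieval and rule-validation steps might look like the following sketch; the attribute vocabulary, sign database, and the example rule are invented for illustration:

```python
# Hypothetical sign candidate database keyed by visual attributes.
sign_db = [
    {"name": "stop", "shape": "octagon", "color": "red"},
    {"name": "yield", "shape": "triangle", "color": "red"},
    {"name": "speed limit", "shape": "rectangle", "color": "white"},
]

def predict_attributes(image):
    # A visual attribute prediction model would infer these from pixels;
    # here they are assumed to be pre-computed on the input.
    return dict(image["attrs"])

def apply_rules(attrs):
    # A knowledge graph reasoner would apply rules to validate/repair
    # implausible attribute combinations, e.g. (in this made-up vocabulary)
    # octagonal signs are always red.
    if attrs.get("shape") == "octagon":
        attrs["color"] = "red"
    return attrs

def candidate_signs(image):
    """Retrieve sign candidates matching the rule-validated attributes."""
    attrs = apply_rules(predict_attributes(image))
    return [s["name"] for s in sign_db
            if all(s[k] == v for k, v in attrs.items())]

# A noisy color prediction is corrected by the rule before retrieval.
candidates = candidate_signs({"attrs": {"shape": "octagon", "color": "white"}})
```

The rules thus shrink the candidate list before it is used, e.g. as labeled training data for the model.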
Method and Apparatus for Inputting Food Information
Provided are a food-information inputting method and apparatus. The food-information inputting method may include operating in a photographing mode in which an input guide and an input button movable along the input guide are displayed on a photographing screen; and receiving at least one piece of food information using the input guide and the input button while operating in the photographing mode.
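One plausible reading of the input guide and button is a slider overlaid on the photographing screen, where the button's position along the guide selects a value such as a food amount; the mapping below is an illustrative assumption, not taken from the disclosure:

```python
def slider_value(position, guide_length, min_value, max_value):
    """Map the input button's position along the guide to a food amount.

    `position` is the button's offset along the guide in pixels; it is
    clamped to the guide so dragging past either end saturates the value.
    """
    position = max(0.0, min(position, guide_length))
    return min_value + (max_value - min_value) * position / guide_length

# Button at the midpoint of a 100 px guide spanning 0-500 g.
amount = slider_value(50, 100, 0, 500)
```

The apparatus would then attach the selected value to the photographed food item as one piece of food information.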