AI for Ag: Production Machine Learning for Agriculture

Time: 2021-07-22

By Chris Padwick
Compiled by Flin
Source: Medium

How does agriculture affect your life today? If you live in a city, you may feel disconnected from the farms and fields that produce your food. Agriculture is a core part of our lives, but we often take it for granted.

Today’s farmers face a huge challenge – to feed a growing global population with less land. It is estimated that by 2050, the world’s population will grow to nearly 10 billion, increasing global food demand by 50%.

As food demand grows, land, water, and other resources will come under greater pressure. The inherent variability of agriculture, such as changing climate conditions and threats from weeds and pests, also affects farmers' ability to produce food. The only way to produce more food with fewer resources is intelligent machines that can help farmers with difficult work and deliver greater consistency, accuracy, and efficiency.

Agricultural robots

At Blue River Technology, we are building the next generation of intelligent machines. Farmers use our tools to control weeds and reduce costs in a way that promotes sustainable agriculture.

Our weeding robot combines cameras, computer vision, machine learning, and robotics into an intelligent sprayer that drives through the field (using AutoTrac to minimize the driver's load), quickly locks onto its targets, and sprays the weeds while leaving the crops intact.

The machine needs to decide in real time what is a crop and what is a weed. As it drives through the field, high-resolution cameras collect images at a high frame rate.

We developed a convolutional neural network (CNN) using PyTorch to analyze each frame and produce a pixel-accurate map of crops and weeds. Once all the plants are identified, each weed and crop is mapped to a field location, and the robot sprays only the weeds.
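To make the per-pixel map concrete, here is a minimal NumPy sketch (not Blue River's actual pipeline; the class layout, tile size, and random scores are invented for illustration) of turning per-pixel class scores into a crop/weed map:

```python
import numpy as np

# Hypothetical per-pixel class scores from a CNN head for one small tile:
# channel 0 = background, channel 1 = crop, channel 2 = weed.
rng = np.random.default_rng(seed=0)
logits = rng.normal(size=(3, 4, 4))

# The pixel-accurate map is the argmax over the class channel.
class_map = logits.argmax(axis=0)   # shape (4, 4), values in {0, 1, 2}

# The sprayer only targets pixels classified as weed.
weed_mask = class_map == 2
print(class_map.shape)
```

In the real system the map would be computed per frame and projected onto field coordinates before spraying.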

The whole process completes in milliseconds; this efficiency lets farmers cover as much ground as possible.

This is a great See & Spray video that details the process.

To support machine learning (ML) and robotics, we built an impressive computing unit based on the NVIDIA Jetson AGX Xavier edge AI platform.

Because all of our inference happens in real time and uploading to the cloud would take too long, we bring the server farm into the field.

The total computing power used for visual inference on the spraying robot is comparable to IBM's Blue Gene supercomputer (2007). This makes it one of the most powerful mobile machines in the world!

Building a weed detection model

My team of researchers and engineers trains the neural network models that identify crops and weeds. This is a challenging problem because many weeds look like crops. Professional agronomists and weed scientists train our labeling staff to annotate images correctly – can you spot any weeds below?

In the figure below, cotton plants are marked green and weeds are marked red.

Machine learning stack

In machine learning, we have a complex stack. We train all of our models with PyTorch, and we have built a set of internal libraries on top of PyTorch that let us run repeatable machine learning experiments. My team's responsibilities fall into three categories:

  • Building production models to deploy on the robots

  • Running machine learning experiments and research to continuously improve model performance

  • Data analysis / data science related to machine learning, A/B testing, process improvement, and software engineering

We chose PyTorch because it is very flexible and easy to debug. New team members can get up to speed quickly, and the documentation is thorough.

Before PyTorch, our team used Caffe and TensorFlow extensively. In 2019 we decided to switch to PyTorch, and the transition was smooth. PyTorch also supports our research workflows.

For example, we use the torchvision library for image and tensor transformations. It contains some basic functionality and integrates well with sophisticated augmentation packages like imgaug. Integrating torchvision transform objects with imgaug is a piece of cake.

Below is a code example using the Fashion-MNIST dataset (https://github.com/zalandoresearch/fashion-mnist).

A class named CustomAugmentor initializes an iaa.Sequential object in its constructor, then calls augment_image() in its __call__ method. CustomAugmentor() is then added to the transforms.Compose() call, ahead of ToTensor().

Now, when batches are loaded for training and validation, the train and val data loaders apply the augmentations defined in CustomAugmentor().

from imgaug import augmenters as iaa
import numpy as np
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import FashionMNIST
from torchvision import transforms

DATA_DIR = './fashionMNIST/'

class CustomAugmentor:

    def __init__(self):
        self.aug = iaa.Sequential([iaa.flip.Fliplr(p=0.5),
                                   iaa.GaussianBlur(sigma=(0.0, 0.1)),
                                   iaa.Multiply((0.9, 1.1)),
                                   iaa.Dropout((0, 0.05)),
                                   iaa.AdditiveGaussianNoise(scale=(0, 0.05*255))])

    def __call__(self, img):
        img = np.array(img)
        # Return a copy to work around: "ValueError: At least one stride in
        # the given numpy array is negative, and tensors with negative
        # strides are not currently supported."
        return self.aug.augment_image(img).copy()

# Image transforms: augment first, then convert to a tensor.
transform = transforms.Compose([CustomAugmentor(), transforms.ToTensor()])
fmnist_train = FashionMNIST(DATA_DIR, train=True, download=True, transform=transform)
fmnist_test = FashionMNIST(DATA_DIR, train=False, download=True, transform=transforms.ToTensor())
fmnist_train, fmnist_val = random_split(fmnist_train, [55000, 5000])

train_dl = DataLoader(fmnist_train, batch_size=64)
val_dl = DataLoader(fmnist_val, batch_size=64)
test_dl = DataLoader(fmnist_test, batch_size=64)

In addition, PyTorch has become a favorite tool in the computer vision ecosystem (see https://paperswithcode.com/, where PyTorch is a common framework in submissions). This makes it easy for us to try new techniques such as debiased contrastive learning for semi-supervised training.

In model training, we have two normal workflows: production and research.

For research applications, our team runs PyTorch on an internal on-premises compute cluster. Jobs on the cluster are managed by Slurm, an HPC batch-job scheduler. It is free, easy to set up and maintain, and provides all the functionality our team needs to run thousands of machine learning jobs.

For production-based workflows, we use Argo workflows on top of a Kubernetes (k8s) cluster hosted on AWS. Our PyTorch training code is deployed to the cloud using Docker.

Deploying models on agricultural robots

For production deployment, one of our first tasks is high-speed inference on an edge computing device. If the robot has to drive more slowly to wait for inference, it is less efficient in the field.

Therefore, we use TensorRT to convert the network into a model optimized for the NVIDIA Jetson AGX Xavier. TensorRT does not accept JIT models as input, so we first convert from JIT to the ONNX format, then use TensorRT to produce the TensorRT engine files that are deployed directly to the device.

As the tool stack evolves, we expect this process to improve. Our models are deployed to an artifact repository by a Jenkins build process, and deployed to the remote machines in the field by pulling from that repository.

To monitor and evaluate our machine learning runs, we found the Weights & Biases (W&B) platform (http://wandb.com/) to be the best solution. Their API makes it quick to integrate W&B logging into an existing codebase. We use W&B to monitor ongoing training, including live curves for training and validation loss.

SGD vs Adam Project

As an example of using PyTorch and W&B together, I will run an experiment comparing the results of different solvers in PyTorch. PyTorch offers many different solvers – the obvious question is: which one should you choose?

Adam is a popular solver. It usually gives good results without tuning any parameters, and it is our usual choice. The solver is documented at https://pytorch.org/docs/stable/optim.html#torch.optim.Adam.

Another popular solver among machine learning researchers is stochastic gradient descent (SGD). It is available in PyTorch at https://pytorch.org/docs/stable/optim.html#torch.optim.SGD.

If you don't know the difference between the two, or just need a refresher, I suggest reviewing them. Momentum is an important concept in machine learning: it can avoid getting stuck in local minima of the optimization landscape and thereby find better solutions. The question with SGD and momentum is: can I find a momentum setting for SGD that beats Adam?
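For a quick refresher, SGD with momentum can be written in a few lines. This is a generic sketch on a toy quadratic, not the training code from the experiment:

```python
def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update: the velocity term accumulates past
    gradients, letting the solver coast through shallow local dips."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimize f(w) = w**2 (gradient 2*w) starting from w = 5.0.
w, v = 5.0, 0.0
for _ in range(300):
    w, v = sgd_momentum_step(w, grad=2 * w, velocity=v)
print(round(abs(w), 6))
```

With momentum near 1.0 the velocity term dominates and the iterates can oscillate or diverge, which is why the sweep below stays strictly below 1.0.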

The experimental setup is as follows. I use the same training data for each run and evaluate the results on the same test set, comparing the F1 score across runs. I set up many runs with SGD as the solver and sweep the momentum values from 0 to 0.99 (with momentum, any value of 1.0 or greater causes the solver to diverge).

I set up 10 runs with momentum values from 0 to 0.9 in increments of 0.1. Next, I ran another 10, this time with momentum between 0.90 and 0.99 in increments of 0.01. After looking at those results, I also ran a set of experiments with momentum values of 0.999 and 0.9999. Each run uses a different random seed and is tagged "SGD Sweep" in W&B. The results are shown in Figure 1.
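The sweep grid described above is easy to generate; this sketch just reproduces the momentum values listed in the text (the actual runs were configured and tagged in W&B):

```python
# Momentum values for the three sweeps described above.
coarse = [round(0.1 * i, 1) for i in range(10)]        # 0.0, 0.1, ..., 0.9
fine = [round(0.90 + 0.01 * i, 2) for i in range(10)]  # 0.90, 0.91, ..., 0.99
extra = [0.999, 0.9999]                                # follow-up experiments

momenta = coarse + fine + extra
print(len(momenta), max(momenta))
```

Note that every candidate stays strictly below 1.0, the divergence threshold mentioned earlier.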

Figure 1 makes it very clear: the larger the momentum, the higher the F1 score. The best value, 0.9447, occurs at a momentum of 0.999, and the score drops to 0.9394 at 0.9999. The values are shown in the table below.

Table 1: Each run is shown as a row in the table above. The last column is the run's momentum setting. F1 score, precision, and recall are shown for class 2 (crops).

How do these results compare with Adam? To test this, I used torch.optim.Adam with its default parameters. I tagged these runs "Adam runs" in W&B to identify them, and tagged each set of SGD runs as well for comparison.

Because each run uses a different random seed, the solver initializes differently and ends up with different weights in the final epoch. This gives slightly different test-set results for each run. To compare them, I need to measure the distribution of values across the Adam and SGD runs. This is easy to do with a box plot grouped by tag in W&B.
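The grouping idea can be sketched with the standard library; the F1 values below are invented placeholders, not the experiment's numbers, and only illustrate comparing per-tag distributions by mean and spread:

```python
from statistics import mean, stdev

# Hypothetical F1 scores over five seeds for each solver
# (illustrative values only, not the actual results).
adam_f1 = [0.9452, 0.9461, 0.9448, 0.9455, 0.9450]
sgd_f1 = [0.9447, 0.9431, 0.9439, 0.9444, 0.9428]

# A box plot grouped by tag summarizes exactly these per-group statistics.
for name, scores in [("Adam runs", adam_f1), ("SGD m=0.999", sgd_f1)]:
    print(name, round(mean(scores), 4), round(stdev(scores), 4))
```

A higher mean with smaller spread is the pattern that would favor one solver over the other.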

The results are shown graphically in Figure 2 and in tabular form in Table 2. The full report is also available online.

As you can see, adjusting SGD's momentum alone is not enough to beat Adam. A momentum setting of 0.999 gives very similar results, but the Adam runs have smaller variance and a higher mean. So Adam looks like a good choice for our plant segmentation problem!

PyTorch visualization

Through its PyTorch integration, W&B can capture the gradients at each layer, letting us inspect the network during training.

W&B experiment tracking also makes it easy to visualize PyTorch models during training, so you can view the loss curves in real time on a central dashboard. We use these visualizations in team meetings to discuss the latest results and share updates.

As images pass through the PyTorch model, we log the predictions to Weights & Biases to visualize the results of model training. Here we can see the predictions, the ground-truth labels, and the difference between them. This makes it easy to spot where the model's performance falls short of our expectations.

Here we can quickly browse the ground-truth annotations, the predictions, and the differences between them. We mark crops green and weeds red. As you can see, the model is quite reasonable at identifying crops and weeds in the image.

Here is a short code example showing how we use dataframes in W&B:

import numpy as np
import pandas as pd
import wandb

# Per-sample dataframe with images.
data = []

# Loop through the mini-batches. Each batch is a dictionary.
for batch in batches:
    image = batch["image"].astype(np.uint8)
    label = batch["label"]
    pred = batch["prediction"]

    zeros = np.zeros_like(image)
    diff = np.zeros_like(pred)
    diff[np.where(pred != label)] = 255.0

    datapoint = {}
    datapoint['image'] = wandb.Image(image)

    # colorize_segmentation is a method which alpha-blends the class colors into an image.
    datapoint['pred'] = wandb.Image(colorize_segmentation(zeros, pred, alpha=1.0))
    datapoint['label'] = wandb.Image(colorize_segmentation(zeros, label, alpha=1.0))
    datapoint['diff'] = wandb.Image(diff)
    data.append(datapoint)

# Convert the list of datapoints to a pandas dataframe and log it to W&B.
log_df = pd.DataFrame(data)
wandb.run.summary['my_awesome_dataframe'] = log_df

Reproducible models

Reproducibility and traceability are key properties of any ML system, and they are hard to get right. When comparing different network architectures and hyperparameters, the input data must be identical for the runs to be comparable.

Often, individual practitioners on ML teams save YAML or JSON configuration files – and it is very painful to find a teammate's run record and pore over their config file to figure out which training set and hyperparameters were used. We've all done it, and we all hate it.

A recently released W&B feature solves this problem. Artifacts let us track the inputs and outputs of training and evaluation runs. This helps our reproducibility and traceability enormously: I can see which datasets were used to train a model, which models were produced (across multiple runs), and the results of the model evaluations.

A typical use case works as follows. A data staging process downloads the latest and greatest data and stages it to disk for training and testing (each dataset kept separate). These datasets are registered as artifacts.

A training run takes the training-set artifact as input and outputs the trained model as an output artifact. The evaluation process takes the test-set artifact and the trained-model artifact as inputs, and outputs an evaluation that might include a set of metrics or images.

A directed acyclic graph (DAG) is formed and visualized within W&B. This is helpful because it is important to track the artifacts involved in releasing a machine learning model to production, and such DAGs are easy to form.

One advantage of the artifacts feature is that you can choose to upload all of the artifacts (datasets, models, evaluations) or only references to them. This is a nice feature because moving large amounts of data around is time-consuming and slow. For dataset artifacts we simply store references in W&B, which lets us keep control of our data (and avoid long transfers) while still getting traceability and reproducibility in our machine learning.

import wandb

# Initialize wandb.
wandb_run = wandb.init(job_type="train", reinit=True, project="blog-post",
                       tags=["SGD Sweep"], tensorboard=False)

# Specify the artifacts used by the training run: here, a training
# dataset and a test dataset.
artifact_list = [{"name": "blueriver/blog-post/train-dataset:latest", "type": "dataset"},
                 {"name": "blueriver/blog-post/test-dataset:latest", "type": "dataset"}]

# Loop over the list and tell wandb to use each artifact.
for elem in artifact_list:
    artifact = wandb_run.use_artifact(elem["name"], type=elem["type"])

Leading an ML team

Looking back on my years leading teams of machine learning engineers, I have seen some common challenges:

Efficiency: when we develop new models, we need to experiment quickly and share results. PyTorch makes it easy to add new features fast, and W&B gives us the visibility we need to debug and improve our models.

Flexibility: working with our customers brings new challenges every day. Our team needs tools that can keep up with our changing needs, which is why we chose PyTorch for its thriving ecosystem and W&B for its lightweight, modular integrations.

Performance: in the end, we need to build the most accurate and fastest models for our agricultural robots. PyTorch lets us iterate quickly, then productionize the models and deploy them to the field. With W&B we have full visibility and transparency into the development process, which makes it easy to identify the best-performing models.

Link to the original article: https://medium.com/pytorch/ai-for-ag-production-machine-learning-for-agriculture-e8cfdb9849a1

