Train your machine learning model 150 times faster with cuML

Time: 2021-04-08

By Khuyen Tran
Translated by VK
Source: Towards Data Science

Motivation

Sklearn is a great library with a variety of machine learning models you can use to train on your data. But if your data is large, training can take a long time, especially when you try different hyperparameters to find the best model.

Is there a way to make training a machine learning model 150 times faster than with sklearn? The answer is yes: you can use cuML.

The chart below compares the time required to train the same model using sklearn’s RandomForestClassifier and cuML’s RandomForestClassifier.

cuML is a suite of fast, GPU-accelerated machine learning algorithms designed for data science and analytics tasks. Its API is similar to sklearn’s, which means you can train cuML’s models with code that looks just like the code you use to train sklearn’s models.

from cuml.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=40)
clf.fit(X, y)

In this article, I’ll compare the performance of these two libraries using different models. I’ll also show how a better graphics card can speed up training by another factor of 10.

Install cuML

To install cuML, follow the instructions on the RAPIDS page. Make sure to check the prerequisites before installing the library. You can install all the packages or just cuML. If disk space on your machine is limited, I recommend installing just cuDF and cuML.

Although in many cases installing cuDF is not required to use cuML, cuDF is a good complement to cuML because it is a GPU data frame.

Make sure you choose the right option for your computer.
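Once installation finishes, a quick sanity check (a minimal sketch, not from the original article) confirms that the libraries import and shows which versions you got:

import cuml
import cudf

# Both packages expose a version string you can use to verify the install
print(cuml.__version__)
print(cudf.__version__)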

Create data

Since cuML usually outperforms sklearn when there is a lot of data, we will use sklearn.datasets to create a large dataset.

Import the datasets module from sklearn and create the data:

import numpy as np
from sklearn import datasets

X, y = datasets.make_classification(n_samples=40000)

Convert the data to np.float32, because some cuML models require the input to be np.float32:

X = X.astype(np.float32)
y = y.astype(np.float32)

Support vector machine

We will create a function for training models. Using this function makes it easy to compare different models.

def train_data(model, X=X, y=y):
    # Fit the given model on the shared dataset so the timings are comparable
    clf = model
    clf.fit(X, y)

We use the IPython magic command %timeit, which runs each function seven times and takes the average over all runs.

from sklearn.svm import SVC 
from cuml.svm import SVC as SVC_gpu

clf_svc = SVC(kernel='poly', degree=2, gamma='auto', C=1)
sklearn_time_svc = %timeit -o train_data(clf_svc)

clf_svc = SVC_gpu(kernel='poly', degree=2, gamma='auto', C=1)
cuml_time_svc = %timeit -o train_data(clf_svc)

print(f"""Average time of sklearn's {clf_svc.__class__.__name__}""", sklearn_time_svc.average, 's')
print(f"""Average time of cuml's {clf_svc.__class__.__name__}""", cuml_time_svc.average, 's')

print('Ratio between sklearn and cuml is', sklearn_time_svc.average/cuml_time_svc.average)
Average time of sklearn's SVC 48.56009825014287 s
Average time of cuml's SVC 19.611496431714304 s
Ratio between sklearn and cuml is 2.476103668030909

cuML’s SVC is 2.5 times faster than sklearn’s!

Let’s visualize these results. We create a function to plot the speed of the models.

!pip install cutecharts

import cutecharts.charts as ctc 

def plot(sklearn_time, cuml_time):

    chart = ctc.Bar('Sklearn vs cuml')
    chart.set_options(
        labels=['sklearn', 'cuml'],
        x_label='library',
        y_label='time (s)',
        )

    chart.add_series('time', data=[round(sklearn_time.average,2), round(cuml_time.average,2)])
    return chart
plot(sklearn_time_svc, cuml_time_svc).render_notebook()

Better graphics card

cuML’s models are faster than sklearn’s on big data because they train on the GPU. So what happens if we nearly triple the GPU memory?

In the previous comparison, I used an Alienware M15 laptop with a GeForce RTX 2060 and 6.3 GB of graphics memory.

Now, I’m going to test how much extra GPU memory helps, using a Dell Precision 7740 with a Quadro RTX 5000 and 17 GB of graphics memory.

Average time of sklearn's SVC 35.791008955999914 s
Average time of cuml's SVC 1.9953700327142931 s
Ratio between sklearn and cuml is 17.93702840535976

When trained on the machine with 17 GB of graphics memory, cuML’s SVC is 18 times faster than sklearn’s! That is 10 times faster than the training speed on the laptop, whose graphics memory is 6.3 GB.

That is the advantage of a GPU-accelerated library like cuML: the better your graphics card, the faster your training.

Random forest classifier

from sklearn.ensemble import RandomForestClassifier
from cuml.ensemble import RandomForestClassifier as RandomForestClassifier_gpu

clf_rf = RandomForestClassifier(max_features=1.0, n_estimators=40)
sklearn_time_rf = %timeit -o train_data(clf_rf)

clf_rf = RandomForestClassifier_gpu(max_features=1.0, n_estimators=40)
cuml_time_rf = %timeit -o train_data(clf_rf)

print(f"""Average time of sklearn's {clf_rf.__class__.__name__}""", sklearn_time_rf.average, 's')
print(f"""Average time of cuml's {clf_rf.__class__.__name__}""", cuml_time_rf.average, 's')

print('Ratio between sklearn and cuml is', sklearn_time_rf.average/cuml_time_rf.average)
Average time of sklearn's RandomForestClassifier 29.824075075857113 s
Average time of cuml's RandomForestClassifier 0.49404465585715635 s
Ratio between sklearn and cuml is 60.3671646323408

cuML’s RandomForestClassifier is 60 times faster than sklearn’s RandomForestClassifier! If training sklearn’s RandomForestClassifier takes 30 seconds, training cuML’s takes less than half a second!

Better graphics card

Average time of sklearn's RandomForestClassifier 24.006061030143037 s
Average time of cuml's RandomForestClassifier 0.15141178591425808 s
Ratio between sklearn and cuml is 158.54816641379068

When training on the Dell Precision 7740, cuML’s RandomForestClassifier is 158 times faster than sklearn’s RandomForestClassifier!

Nearest neighbor classifier
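The comparison code for this section is not included in the translation; following the same pattern as the SVC example above, it would look like the sketch below (the n_neighbors value is an assumption):

from sklearn.neighbors import KNeighborsClassifier
from cuml.neighbors import KNeighborsClassifier as KNeighborsClassifier_gpu

# Time the CPU model, then the GPU model, using the shared train_data helper
clf_knn = KNeighborsClassifier(n_neighbors=10)
sklearn_time_knn = %timeit -o train_data(clf_knn)

clf_knn = KNeighborsClassifier_gpu(n_neighbors=10)
cuml_time_knn = %timeit -o train_data(clf_knn)

print('Ratio between sklearn and cuml is', sklearn_time_knn.average/cuml_time_knn.average)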

Average time of sklearn's KNeighborsClassifier 0.07836367340000508 s
Average time of cuml's KNeighborsClassifier 0.004251259535714585 s
Ratio between sklearn and cuml is 18.43304854518441

Note: 20m on the Y-axis means 20 ms.

cuML’s KNeighborsClassifier is 18 times faster than sklearn’s.

Larger graphics memory

Average time of sklearn's KNeighborsClassifier 0.07511190322854547 s
Average time of cuml's KNeighborsClassifier 0.0015137992111426033 s
Ratio between sklearn and cuml is 49.618141346401956

When trained on the Dell Precision 7740, cuML’s KNeighborsClassifier is 50 times faster than sklearn’s KNeighborsClassifier.

Summary

You can find the code for the other comparisons here; a sketch of one of them follows.
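As an illustration, the LinearRegression comparison would follow the same pattern as the classifiers above (a sketch, not the author’s exact code; note that it reuses the make_classification labels as regression targets, like the rest of the article):

from sklearn.linear_model import LinearRegression
from cuml.linear_model import LinearRegression as LinearRegression_gpu

# Same timing pattern with the shared train_data helper
clf_lr = LinearRegression()
sklearn_time_lr = %timeit -o train_data(clf_lr)

clf_lr = LinearRegression_gpu()
cuml_time_lr = %timeit -o train_data(clf_lr)

print('Ratio between sklearn and cuml is', sklearn_time_lr.average/cuml_time_lr.average)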

The following two tables summarize the speeds of the different models between the two libraries:

  • Alienware M15, GeForce RTX 2060, 6.3 GB graphics memory

    Model                   sklearn (s)   cuML (s)   sklearn/cuML
    SVC                     50.24         23.69      2.121
    RandomForestClassifier  29.82         0.443      67.32
    KNeighborsClassifier    0.078         0.004      19.5
    LinearRegression        0.005         0.006      0.8333
    Ridge                   0.021         0.006      3.5
    KNeighborsRegressor     0.076         0.002      38
  • Dell Precision 7740, Quadro RTX 5000, 17 GB graphics memory

    Model                   sklearn (s)   cuML (s)   sklearn/cuML
    SVC                     35.79         1.995      17.94
    RandomForestClassifier  24.01         0.151      159
    KNeighborsClassifier    0.075         0.002      37.5
    LinearRegression        0.006         0.002      3
    Ridge                   0.005         0.002      2.5
    KNeighborsRegressor     0.069         0.001      69

Quite impressive, isn’t it?

Conclusion

You have just seen how much faster different models train with cuML compared to sklearn. If training your model with sklearn takes a long time, I strongly recommend trying cuML: because its API matches sklearn’s, there is hardly any code to change.

Of course, since a library like cuML executes its code on the GPU, the better the graphics card you have, the faster your training will be.

For more information about other machine learning models, see the cuML documentation: https://docs.rapids.ai/api/cuml/stable/

Link to the original article: https://towardsdatascience.com/train-your-machine-learning-model-150x-faster-with-cuml-69d0768a047a
