Linux: Building a Deep Learning Environment with Docker
Background
Not long ago a friend sent me a link. At first glance it was Jack Ma delivering warmth during the epidemic: full-time college students could claim a 2-core 4GB ECS free for half a year, renewable for another half year just before it expires.
What a deal! A year earlier I had paid 120 RMB for a student machine with only 1 core and 2GB of RAM.
My deep learning course has been piling on experiments lately, and my laptop is needed for other things, so this gave me a machine to run the jobs on.
I'm recording how I configured the environment with Docker here so I don't forget later.
Portal: claim an ECS during the epidemic period
Docker installation
First, make sure you actually have the ECS (obviously), then connect to it with an SSH client such as PuTTY or Xshell.
Then install Docker. If you are not familiar with Docker yet, it is worth reading up on first; it is a very useful tool.
First, update the packages with yum:
yum -y update
The Docker package is included in CentOS's default extras repository, so we can install it directly:
yum install docker
Installation complete. Now, in true Honor of Kings fashion: Docker, start!
systemctl start docker
Enable Docker at boot:
systemctl enable docker
Docker installation is complete.
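Before pulling a multi-gigabyte image, it's worth a quick sanity check that the daemon actually came up. A minimal check (hello-world is Docker's official test image):

```shell
# Confirm the client is installed and the daemon is running
docker --version
systemctl is-active docker

# Official smoke test: pulls a tiny image and prints a greeting
docker run --rm hello-world
```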
Deepo
I searched Docker Hub for an image with a reasonably complete set of deep learning frameworks and found ufoym/deepo.
Deepo is a series of docker images that
- allows you to quickly set up your deep learning research environment
- supports almost all commonly used deep learning frameworks
- supports GPU acceleration(CUDA and cuDNN included), also works in CPU-only mode
- works on Linux (CPU and GPU versions), Windows (CPU) and OS X (CPU)
and their Dockerfile generator that
- allows you to customize your own environment with Lego-like modules
- automatically resolves the dependencies for you
Convenient, so Deepo it is.
Since the ECS has no graphics card, CUDA is not an option; just use the CPU-only mode.
Install
Download the image:
docker pull ufoym/deepo:cpu
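The CPU tag bundles many frameworks, so the download takes a while. Once it finishes you can confirm the image is present (and see how large it is) with:

```shell
# List the downloaded Deepo image with its tag and size
docker images ufoym/deepo
```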
Usage
Start a container. There are two ways.
1. Without Jupyter, just running Python files:
docker run -it --ipc=host -v /Data/py_workspace:/data ufoym/deepo:cpu bash
Here -v /Data/py_workspace:/data maps the host directory /Data/py_workspace to /data inside the container; change the paths to match your own layout. With this mapping, going into /data inside the container gives you access to everything under /Data/py_workspace on the host disk, as shown in the figure.
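A quick way to convince yourself the mapping works — a sketch using the paths from this article (adjust them to your setup; writing under /Data may require root):

```shell
# Create a file on the host side of the mapping
mkdir -p /Data/py_workspace
echo "hello from the host" > /Data/py_workspace/hello.txt

# The same file is visible inside the container under /data
docker run --rm -v /Data/py_workspace:/data ufoym/deepo:cpu cat /data/hello.txt
```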
2. You can also run it with Jupyter:
docker run -i -p 8888:8888 --ipc=host -v /Data/py_workspace:/data ufoym/deepo:cpu jupyter notebook --no-browser --ip=0.0.0.0 --allow-root --NotebookApp.token=7c4a8d09ca3762af61e59520943dc26494f8941b --notebook-dir='/data'
The string after --NotebookApp.token= is the login password, SHA-1 hashed; you can generate it with any SHA-1 tool. -p 8888:8888 maps port 8888 on the host to port 8888 inside the container.
Once it is running, visit port 8888 of the ECS directly in a browser.
Log in by entering the password — note that this means the SHA-1 hashed string, not the original.
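You don't actually need an online tool for the hash: any Linux box has sha1sum. Use printf rather than echo so no trailing newline sneaks into the hash. (The example token in the command above is in fact the SHA-1 of "123456".)

```shell
# SHA-1 of the plaintext password; printf avoids the trailing newline echo adds
printf '%s' '123456' | sha1sum
# -> 7c4a8d09ca3762af61e59520943dc26494f8941b  (the token used above)
```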
Success. Give it a try — isn't that convenient? No hype needed.
Operation and maintenance
When running without Jupyter, press Ctrl+P followed by Ctrl+Q to detach and leave the container running in the background.
If you want to re-enter the container, first look up the container ID:
docker ps
Then enter the container:
docker exec -it <container-id> /bin/bash
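If only one container is involved, you can skip the manual lookup; docker ps -lq prints the ID of the most recently created container:

```shell
# Exec into the most recently created container without copying the ID by hand
docker exec -it "$(docker ps -lq)" /bin/bash
```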
For the Jupyter case I simply run the whole thing in the background with nohup, since I don't use Jupyter much and haven't studied it deeply:
nohup docker run -i -p 8888:8888 --ipc=host -v /Data/py_workspace:/data ufoym/deepo:cpu jupyter notebook --no-browser --ip=0.0.0.0 --allow-root --NotebookApp.token=7c4a8d09ca3762af61e59520943dc26494f8941b --notebook-dir='/data' >/Data/py_jupyter.out &
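As an alternative to nohup, Docker has its own detached mode: -d backgrounds the container, --name gives it a handle, and --restart unless-stopped brings it back after a daemon or server reboot. A sketch using the same paths and token as above:

```shell
# Same Jupyter setup, but detached and auto-restarting instead of nohup'd
docker run -d --name deepo-jupyter --restart unless-stopped \
  -p 8888:8888 --ipc=host -v /Data/py_workspace:/data \
  ufoym/deepo:cpu \
  jupyter notebook --no-browser --ip=0.0.0.0 --allow-root \
  --NotebookApp.token=7c4a8d09ca3762af61e59520943dc26494f8941b \
  --notebook-dir=/data

# Follow the output (replaces the nohup log file)
docker logs -f deepo-jupyter
```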
Well, that's it — students who need a setup like this can give it a try!