Building a Development Environment from Scratch to Fit Your Own Needs
This article will be continuously updated over the long term to stay consistent with my actual development environment. Comments and discussion are welcome!
This article is partly a summary for myself, partly a set of pointers for beginners starting down the same path, and partly a wish: when can we get the old development working group from university back together?
Current situation analysis
I grew up in a family with an IT background and took part in various robotics competitions from primary school through university, winning national first prizes several times.
I did my undergraduate degree at a 211 university in Beijing, studying electronic information engineering, computer science and technology, and virtualization and cloud infrastructure. I then studied in Paris for over a year and worked as a Java back-end engineer in the Internet department of a technology company in Xinjiang.
My knowledge and skills center on the Java back-end stack, followed by operations and software engineering, but I also keep a close eye on front-end, mobile, and game development.
My current laptop is a 2020 Huawei MateBook 13: an Intel Core i5-10210U at 1.6GHz (rated up to 2.11GHz), 16GB of 2133MHz RAM, an MX250 with 2GB of VRAM, a 500GB SSD, a 2K touch screen, one Thunderbolt port (also the charging port), one regular USB-C port, a power button with fingerprint unlock, and Windows 10 Home Chinese edition.
Configuration analysis and purchase considerations
As a developer, it pays to have a well-configured machine; it saves you a great deal of unnecessary trouble.
CPU: there is actually little practical difference between an i5 and an i7, so you can save some money here. As for core count, most software is not optimized for many cores, so a decent dual-core is enough for general use. But the operating system itself does schedule across cores, and more importantly, back-end programs and locally simulated operations clusters will use them, so try to buy a quad-core CPU. A desktop-class 8-core? Think carefully about whether you need it; the extra money is better spent on single-core speed.
Memory: in my experience, more memory is always better. As a developer, aim for at least 16GB and never less than 8GB. With extra budget, invest first in capacity, then in speed; when adding modules, match the brand and specifications of what you already have, and avoid no-name modules. There is no need to pay extra for a fancier series: the advantages it advertises are ones you either will not use or will not notice.
Discrete GPU: my laptop has one, and I mostly use it to play PUBG. General development does not need a discrete GPU; only if you are doing 3D game development should you spring for a high-end graphics card.
Hard disk: it is 2020, so no more mechanical drives; solid state all the way. As developers, we never deal only with the dynamic data in memory. Trade the money saved on the graphics card and CPU for a 1TB M.2 SSD; of all hardware upgrades, this one brings the most noticeable improvement in experience.
2K and touch screen: once you have used a 2K or 4K display, looking back at 1080p will make you doubt your eyesight. As for the touch screen: when you occasionally write an Android app, isn't it nice to skip plugging in a phone and enabling developer mode, and instead swipe the emulator directly on the screen? For everyday browsing of web pages and pictures, once you are used to it, it is also more convenient than the mouse wheel and keyboard shortcuts.
USB-C and Thunderbolt ports: very handy to have, and even better if the machine has other ports as well. A dual USB-A/USB-C flash drive is also a great separate purchase.
Biometric unlocking: I originally loved the camera-based Windows Hello unlock, but after using the power-button fingerprint unlock for a while, it feels more reliable. Both are usually faster than typing a password, and the faster the unlock, the less likely I am to lose that flash of inspiration while waiting for the computer to wake.
Operating system: I play games like PUBG with friends on this machine, so naturally I stay in the Windows camp. From a pure development-experience perspective, though, macOS is deservedly popular, apart from feeling counter-intuitive at first. I still do not recommend a Linux desktop, even a polished one like Ubuntu, because macOS combines the strengths of both Linux and Windows, while a developer without enough Linux skill can easily be derailed by trivial problems. macOS has the same convenient file system layout as Linux, a UI on par with Windows (better, in my opinion), and interacts with Linux servers more conveniently than Windows does. When I want to run a simple local simulation of a server-side program, Windows often gives developers headaches for one reason or another, whereas macOS and Linux are far more convenient, being Unix-based systems. If you use Windows 10, keep it updated.
Configuration pitfalls I have encountered
For example, GitLab is painfully sluggish on weak hardware, and Nexus Repository 3 recommends 4 cores and 8GB or more. Both can run on less with a little tuning, but the lower the configuration, the worse the lag, so the better the overall specs, the fewer the problems.
For example, Docker on Windows 10 requires the Pro edition or a Home build of 19000+, because virtualization is involved: the Hyper-V architecture needs components that older default Home builds do not ship.
An Alibaba Cloud 1-core/2GB lightweight application server currently hosts my personal website, a blog system under construction, and a private business calculator.
Two Alibaba Cloud 1-core/2GB ordinary cloud servers are used for day-to-day experiments.
Requirements analysis for the development environment
A complete software project development process starts with document-driven requirements analysis, then moves to coding, then verification and testing, then deployment once tests pass, and finally operations and maintenance. Whether in traditional waterfall or today's agile and DevOps, the basic process unit of requirements + design + code + test + deployment has never fundamentally changed. The evolution of software engineering in the Internet era has only sped up the cycle, streamlined some steps, and added automation in places to reduce overall cost and improve efficiency.
Requirements: first, requirements should be recorded and placed on the development timeline, so they can be checked and verified again before deployment. Second, requirements should be broken down into details: a clear requirements analysis greatly improves the efficiency of design, coding, and even testing, and makes it easier to adjust priorities and plan the schedule and project cycle. Finally, it is best to tag requirements with versions and separate the different parts, which eases code merging and deployment and makes the content of each iteration clearer. On the requirements side, then, what I urgently need is a shared display board and a log that distinguishes versions.
Design: at present I use XMind to draw mind maps, filter ideas, and modularize the important functions. It can also express simple class diagrams and data-structure diagrams, clarifying data flows and relationships. Best of all, it is free and exports to many formats. What the design side needs, then, is a file management system with careful labeling and classification.
Testing: at my previous job, testing was fairly perfunctory. The back end had manual "unit tests", but their purpose was merely to verify that the code compiled without errors and to generate interface documentation. Front-end and end-to-end testing fell to the lone test engineer, who designed the test cases, ran them by hand, and wrote the test reports himself; when he was busy, a change deemed simple enough might skip testing entirely. So my testing requirement is simple: at a minimum, make it automated.
Deployment: having development and test environments deploy automatically on code push greatly improves the efficiency of joint debugging with the front end. At present, the back end of my personal website is a simple beego application that I cross-compile and deploy by hand. Looking back, the deployment environment is usually fixed, so the deployment steps can be distilled into scripts that run automatically. Deployment, therefore, needs to be automated.
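As a first step toward that automation, the manual steps above can at least be collected into a script. A minimal sketch, in which the binary name, the SSH target, and the remote path are placeholders of my own, and the beego back end is assumed to build with a plain go build:

```shell
# Hypothetical deploy script: cross-compile locally, copy, restart.
# Set DRY_RUN=1 to print the commands instead of running them.
deploy() {
    local app="mysite"                 # placeholder binary name
    local remote="user@server"         # placeholder SSH target
    local dir="/opt/mysite"            # placeholder install path
    local run=${DRY_RUN:+echo}

    # Cross-compile the Go (beego) back end for a Linux/amd64 server
    $run env GOOS=linux GOARCH=amd64 go build -o "$app"

    # Ship the binary and restart the service
    $run scp "$app" "$remote:$dir/"
    $run ssh "$remote" "systemctl restart $app"
}

# Review the steps first:
DRY_RUN=1 deploy
```

Run it without DRY_RUN once the placeholders point at a real server; a CI tool (set up later in this article) can then invoke the same steps on every push.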
Cloud server operating system
The operating system on my cloud servers is Ubuntu 20.04 x86_64. The reason I recommend it is that apt is easy to use and its packages are plentiful and kept up to date.
The decisive reason, though, is support lifetimes: Ubuntu 20.04 will be maintained until 2030, while CentOS is shifting its focus to CentOS Stream. There will be no CentOS 9, CentOS 7 maintenance ends in 2024, and CentOS 8 maintenance ends in 2021.
I used CentOS for a long time; after all, my introduction to Linux was the field's "black bible", Bird's Linux Private Kitchen (鸟哥的Linux私房菜).
Some say CentOS has better performance and stability than Ubuntu. In practice, server editions are all long-term-support (LTS) releases, and the performance and stability of the systems themselves are comparable; what really matters is the performance and stability of the programs you run on top of them.
For containers I currently keep things simple with Docker, without complex container orchestration. Its installation and configuration are genuinely easy, the odds of hitting exotic problems early on are low, and there are plenty of articles, tutorials, and supporting tools online (Docker Hub and Docker Compose, for instance).
I may move fully to Kubernetes (k8s) in the future; after all, k8s has announced it will drop dockershim entirely. Both k8s and Docker support containerd as the container runtime, so when container orchestration becomes more important later, I believe the migration will be straightforward.
First, the version-control service, which should support tools such as Git (with Git LFS) and SVN. Is Git really that good? Everyone who has used it says so; even Unreal Engine integrates Git.
To run the Git service and host my private code repository, I ultimately chose Gitea (with MariaDB for storage).
GitHub, Bitbucket, Gitee, Coding, Teambition, and the like all place too many restrictions on free private repositories and team collaboration. They are certainly enough for small-scale development, but sooner or later you face a choice between paying up and fighting the free tier.
Among self-hosted options like GitLab, Gogs, and Gitea, GitLab loses only because the very cheap cloud servers an individual can buy in China struggle to run it well (it wants at least 4 cores and 8GB). Gogs and Gitea share a common origin: both are written in Go, extremely lightweight, and fully functional. The difference is that the former is maintained and updated relatively slowly by an individual, while the latter is maintained and updated relatively quickly by a community.
Finally, storage: my Gitea data lives in MariaDB. The reason is simple: I want to install as few database systems as possible, because juggling subtly incompatible data stores quickly becomes a headache. MySQL is the common choice, and MySQL and MariaDB started out as the same thing; the former is now a commercial product, while the latter is fully open source and community-maintained. Both MySQL Community Edition and MariaDB are free, but MySQL Community Edition does not provide a thread pool (that is reserved for the paid Enterprise Edition), whereas MariaDB supports thread pools out of the box, so MariaDB is the tastier deal.
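For reference, the thread pool mentioned above is switched on in MariaDB with a couple of lines of server configuration. A minimal sketch (the explicit size is illustrative; by default MariaDB sizes the pool to the CPU count):

```ini
[mysqld]
# Use MariaDB's pool-of-threads model instead of one thread per connection
thread_handling = pool-of-threads
# Optional: number of thread groups (defaults to the number of CPUs)
thread_pool_size = 2
```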
Continuous integration and deployment
That is, CI/CD (continuous integration and continuous deployment); some would add continuous delivery, and that is fair too. It is one pipeline: code is pushed, the code is built, tests run, and after they pass the result is deployed. To automate this pipeline, we need a CI/CD tool.
In the end I chose Drone, a lightweight tool, over Jenkins, GitLab CI, and the various hosted services. Jenkins and GitLab CI are both too heavyweight; my server configuration (1 core, 2GB) cannot keep up with them, and besides, Jenkins' default UI is ugly. Most hosted services repackage Jenkins behind a nicer visual interface, but they are not free.
Development environment installation
Tip: in the installation steps below, all download addresses have been switched to mirrors inside China, so no proxy is needed. All parameters shown are required or recommended; anything not filled in is optional.
Note: everything below is installed on a single server, but Docker is given an internal subnet, and each container is assigned its own intranet IP, simply to simulate deployment across different servers. Wherever an intranet IP is used below, an equivalent extranet IP would also work; but wherever an extranet IP is used, an intranet IP will not (use a domain name there instead, ideally). The reason: those extranet IPs are exposed to actual remote clients, and a client outside the subnet cannot resolve an intranet address.
Install container engine
Installing docker on Cloud Server
Connect to the cloud server over SSH. For Windows users I recommend MobaXterm: it is free for personal use, runs smoothly, has a good-looking UI, and is packed with features (I have not opened PuTTY since I started using it).
Update the apt package index and install some prerequisite system packages so that apt can use repositories over HTTPS.
```shell
sudo apt-get update
sudo apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
```
Install the GPG key.
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
Verify that the GPG key was installed successfully by searching for the last 8 characters of the expected fingerprint (0EBFCD88).
```shell
sudo apt-key fingerprint 0EBFCD88
```

Expected output:

```
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb)
sub   rsa4096 2017-02-22 [S]
```
Set up a stable repository.
```shell
sudo add-apt-repository \
    "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
```
Install and verify
Update the apt package index again and install the latest versions of Docker Engine and containerd.
```shell
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
```
Run the official hello-world image from the remote registry, setting the container name to hi-docker.
```shell
sudo docker run --name hi-docker hello-world
```
If you see "Hello from Docker!", Docker is installed and working.
```
Hello from Docker!
This message shows that your installation appears to be working correctly.
```
Delete the now-finished hi-docker container, then delete the no-longer-needed local hello-world image.
```shell
sudo docker rm hi-docker
sudo docker image rm hello-world
```
For the containers that follow, use the docker network command to create a subnet, which is bridged to the host by default.
```shell
sudo docker network create \
    --subnet 172.18.0.0/16 \
    --gateway 172.18.0.1 \
    ex-network
```
If creation succeeds, the network's ID is printed. You can query the subnet's details by that ID or by the network's name.
```shell
sudo docker network inspect ex-network
```
Build the code repository
Install Gitea on the cloud server to provide the Git service and a private remote code repository.
Install MariaDB as a container to store Gitea's data.
```shell
sudo mkdir -p /data/docker/mariadb
sudo docker run --name mariadb \
    --network ex-network \
    --ip 172.18.0.2 \
    -p 3306:3306 \
    -v /data/docker/mariadb/:/var/lib/mysql \
    --restart always \
    -e "MYSQL_ROOT_PASSWORD=root" \
    -e "MYSQL_DATABASE=gitea" \
    -e "MYSQL_USER=gitea" \
    -e "MYSQL_PASSWORD=gitea" \
    -itd \
    mariadb:latest
```
Install gitea as a container
```shell
sudo mkdir -p /data/docker/gitea
sudo docker run --name gitea \
    --network ex-network \
    --ip 172.18.0.3 \
    -p 3000:3000 \
    -p 3022:22 \
    -v /data/docker/gitea:/data \
    --restart always \
    -itd \
    gitea/gitea:latest
```
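As an aside, the two docker run commands above could equivalently be captured in a Compose file, which makes the setup reproducible with a single docker-compose up -d. A sketch, assuming the ex-network subnet created earlier already exists:

```yaml
version: "3"

networks:
  ex-network:
    external: true    # the subnet created earlier with `docker network create`

services:
  mariadb:
    image: mariadb:latest
    restart: always
    ports: ["3306:3306"]
    volumes: ["/data/docker/mariadb/:/var/lib/mysql"]
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: gitea
      MYSQL_USER: gitea
      MYSQL_PASSWORD: gitea
    networks:
      ex-network:
        ipv4_address: 172.18.0.2

  gitea:
    image: gitea/gitea:latest
    restart: always
    ports: ["3000:3000", "3022:22"]
    volumes: ["/data/docker/gitea:/data"]
    networks:
      ex-network:
        ipv4_address: 172.18.0.3
```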
Gitea initial setup
Open a web browser and enter http://<cloud-server-public-IP>:3000 in the address bar; you will see Gitea's initial configuration UI.
Tip: any setting not listed below keeps its default value. The database host, user name, database user password, and database name come from the MariaDB container above.
Database type: MySQL
Database host: 172.18.0.2:3306
User name: gitea
Database user password: gitea
Database name: gitea
Character set: utf8mb4
Site name: explosion
Administrator account settings: your user name, password and email.
Modify gitea address
Gitea's default address is localhost. If all operations stay local and internal, that is fine and you can skip this step. But if anything needs to reach Gitea from elsewhere (for example, after the subnet split, a container at a different address interacting through webhooks, or an external client cloning code from the remote), then the mismatch between localhost and the real address will cause failures and denied access. In that case, change localhost to a publicly reachable address.
The fix is simple: open /data/docker/gitea/gitea/conf/app.ini and replace every localhost with the cloud server's public IP address (or better, a domain name).
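A quick way to do that replacement is with sed. A minimal sketch, where the function name, the backup convention, and the example address 203.0.113.10 are my own choices rather than anything Gitea prescribes:

```shell
# Replace every occurrence of "localhost" in a Gitea app.ini,
# keeping a .bak copy of the original file first.
fix_addr() {
    local ini=$1 addr=$2
    cp "$ini" "$ini.bak"                    # backup before editing
    sed -i "s/localhost/$addr/g" "$ini"     # in-place replacement
}

# Usage (paths and addresses are examples):
#   fix_addr /data/docker/gitea/gitea/conf/app.ini 203.0.113.10
```

Run it with sudo if the file is root-owned, and restart the Gitea container afterward so the new address takes effect.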
Configure continuous integration deployment tool
Create a new oauth2 application
After logging into Gitea, click your avatar in the upper right corner, click Settings, and open the "Applications" tab. Under the OAuth2 application management section, create a new OAuth2 application with the following contents.
Application Name: drone
Redirect URI: http://<cloud-server-public-IP>:3080/login
Click Create to generate the client ID (DRONE_GITEA_CLIENT_ID) and client secret (DRONE_GITEA_CLIENT_SECRET), save a record of both, then click Save.
Generate shared secret
Generate the shared secret (DRONE_RPC_SECRET) used for communication between runners and the central Drone server.
```shell
openssl rand -hex 16
```
Install drone server
Tip: Drone now authenticates via OAuth2, so DRONE_GIT_USERNAME and DRONE_GIT_PASSWORD are no longer needed. If a repository's visibility is not fully public, note that Drone by default only clones private repositories on GitHub; in other cases you must manually set DRONE_GIT_ALWAYS_AUTH to true.
```shell
sudo docker run --name=drone \
    --volume=/data/docker/drone:/data \
    --env=DRONE_GITEA_SERVER="http://<cloud-server-public-IP>:3000" \
    --env=DRONE_GITEA_CLIENT_ID="DRONE_GITEA_CLIENT_ID" \
    --env=DRONE_GITEA_CLIENT_SECRET="DRONE_GITEA_CLIENT_SECRET" \
    --env=DRONE_RPC_SECRET="DRONE_RPC_SECRET" \
    --env=DRONE_SERVER_HOST="<cloud-server-public-IP>:3080" \
    --env=DRONE_SERVER_PROTO="http" \
    --env=DRONE_GIT_ALWAYS_AUTH=true \
    --network ex-network \
    --ip 172.18.0.4 \
    -p 3080:80 \
    -p 443:443 \
    --restart=always \
    -itd \
    drone/drone:latest
```
Install drone runner
```shell
sudo docker run --name drone-runner \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e DRONE_RPC_PROTO=http \
    -e DRONE_RPC_HOST="18.104.22.168:3080" \
    -e DRONE_RPC_SECRET="bc6b8c32f8513f7abbb1069262f1ebb7" \
    -e DRONE_RUNNER_CAPACITY=2 \
    -e DRONE_RUNNER_NAME="DroneRunner" \
    --network ex-network \
    --ip 172.18.0.5 \
    -p 6000:3000 \
    --restart always \
    -itd \
    drone/drone-runner-docker:latest
```
After the runner is installed, you can view its log output with docker logs to verify that the installation succeeded.
```shell
sudo docker logs drone-runner
```
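With the server and runner up, each repository still needs a pipeline definition before Drone will build it. As a hypothetical starting point (the Go image tag and build steps are placeholders of mine, not something this setup mandates), a minimal .drone.yml in the repository root might look like:

```yaml
kind: pipeline
type: docker
name: default

steps:
  - name: build-and-test
    image: golang:1.16
    commands:
      - go build ./...
      - go test ./...
```

Activate the repository in Drone's UI at http://<cloud-server-public-IP>:3080, and subsequent pushes to Gitea should then trigger this pipeline through the webhook Drone registers.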