DLI is a serverless big data computing service that supports multiple engines, and it embodies the defining characteristics of serverless:
1. Decoupling storage from computing;
2. No manual resource allocation for running code;
3. Pay-per-use billing.
So how can we best realize a serverless service and avoid ending up with a traditional monolithic distributed application? Microservice architecture is undoubtedly the best choice. The overall deployment architecture of DLI based on microservices is as follows:
In other words, services are exposed as pure APIs, and microservices are divided into subdomains based on the domain model, with an API gateway as the application entry point, thereby realizing a serverless big data computing service.
So for such a microservice-based serverless service, how do we deploy and operate it in production so as to iterate and go online quickly while guaranteeing the service SLA?
With the development of technology, the deployment process and architecture have changed fundamentally, and we have now entered an era of lightweight, short-lifecycle deployments.
From big data computing platforms initially deployed on physical machines, to platforms deployed on elastic cloud servers in the public cloud, and then to serverless services such as DLI: this is the evolution of big data computing services. How, then, should a serverless big data computing service be deployed? DLI's answer is to deploy its microservices on Kubernetes + Docker.
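As a concrete illustration, one such microservice might be described by a Kubernetes Deployment manifest like the sketch below. The service name, image, and resource figures are hypothetical placeholders, not DLI's actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dli-sql-engine            # hypothetical microservice name
  labels:
    app: dli-sql-engine
spec:
  replicas: 3                     # scaling out is just a field change
  selector:
    matchLabels:
      app: dli-sql-engine
  template:
    metadata:
      labels:
        app: dli-sql-engine
        version: v1               # version label, usable later for traffic routing
    spec:
      containers:
      - name: dli-sql-engine
        image: registry.example.com/dli/sql-engine:1.0.0   # placeholder image
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
```

A rolling update then amounts to changing the `image` field; Kubernetes replaces pods gradually, which is what enables zero-downtime deployment.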
Kubernetes deployments are a good way to roll out services without downtime, but how do we handle errors once a new version receives production traffic, and how do we make that new version more reliable? The problem splits into two parts:
1. Deployment: the service is put online and runs in the production environment;
2. Release: the service is actually used to handle production traffic.
Traditionally, separating the deployment process from the release process has been a challenge. Now we have a good option: the service mesh. In DLI's deployment we combine Kubernetes + Istio, using Istio's traffic management for service discovery and traffic routing, so that deployment and release are cleanly separated and new versions of services become more reliable.
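A minimal sketch of how Istio's traffic management separates the two steps: a new version v2 can be deployed (its pods running in the cluster) while the VirtualService still routes all or most production traffic to v1; release then becomes a pure routing change, for example shifting a small canary percentage to v2. The host names, subsets, and weights below are illustrative, not DLI's actual configuration:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dli-sql-engine
spec:
  host: dli-sql-engine            # hypothetical service name
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dli-sql-engine
spec:
  hosts:
  - dli-sql-engine
  http:
  - route:
    - destination:
        host: dli-sql-engine
        subset: v1
      weight: 90                  # v2 is deployed but released to only 10% of traffic
    - destination:
        host: dli-sql-engine
        subset: v2
      weight: 10
```

If v2 misbehaves, rolling back is another routing change (weight 100/0), with no redeployment required; this is exactly the deployment/release separation described above.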