Overview: A New Perspective on Application Management at KubeCon EU 2019

Time: 2019-08-13

Summary: KubeCon EU 2019 has just drawn its curtain in Barcelona. Speakers from Alibaba shared hands-on experience and lessons from running large-scale Kubernetes clusters in Internet-scale scenarios. From the growing community, we can see that more and more people are embracing open source, converging on standards, and boarding this high-speed train to the cloud.

As we all know, the central project of cloud native architecture is Kubernetes, and Kubernetes itself revolves around applications. Only by making applications easier to deploy and developers more efficient can cloud native technology bring tangible benefits to teams and organizations and let this change play its full role. The momentum of change has swept away old, closed internal systems like a flood, and nurtured new developer tools like spring rain. This KubeCon brought plenty of new material on application management and deployment. Which of these ideas are worth borrowing, so that we can avoid detours? And behind them, where is the technology heading?

For this article, we invited Deng Hongchao, a technical expert on the Alibaba Cloud container platform, formerly an engineer at CoreOS and a core author of the Kubernetes Operator project. He selected the highlights in the field of application management and analyzes them for readers one by one.

When the Config Changes

Applications deployed on Kubernetes typically store configuration in a ConfigMap, which is then mounted into the Pod's file system. When the ConfigMap changes, the files mounted into the Pod are updated automatically. This works fine for applications that hot-reload their configuration, such as nginx. Most application developers, however, would rather treat a configuration change as a new gray (canary) release: the containers that reference the ConfigMap should go through a gray upgrade.
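A minimal sketch of the mount pattern described above, assuming an nginx workload; all names and the config content are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      location / { return 200 "ok\n"; }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d   # files here refresh when the ConfigMap changes
      volumes:
      - name: config
        configMap:
          name: nginx-config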

A gray upgrade not only keeps user code simple and improves safety and stability, it also embodies the idea of immutable infrastructure: once deployed, the application is never modified in place. When an upgrade is needed, you deploy a new version of the system, verify it, and destroy the old version; if verification fails, rolling back to the old version is easy. Based on this idea, engineers at Pusher built Wave, a tool that watches the ConfigMaps/Secrets associated with a Deployment and triggers a Deployment upgrade when they change. Its distinctive feature is that it automatically discovers the ConfigMaps/Secrets referenced in the Deployment's PodTemplate, hashes all of their data, and stores the hash in the PodTemplate annotations; when the data changes, it recomputes the hash and updates the annotation, which triggers a Deployment rolling upgrade. Coincidentally, another open source tool, Reloader, does the same thing; the difference is that Reloader lets users choose which ConfigMaps/Secrets to watch.
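As a hedged sketch, this is how the two tools are typically switched on; the annotation keys are the ones documented by each project, so verify them against the releases you actually run (the two tools would not normally be used together; both are shown for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Wave: opt in; it then hashes every referenced ConfigMap/Secret into a
    # PodTemplate annotation and updates the hash on change, triggering a rollout.
    wave.pusher.com/update-on-config-change: "true"
    # Reloader: watch only an explicitly named ConfigMap instead of all of them.
    configmap.reloader.stakater.com/reload: "my-app-config"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:v1
        envFrom:
        - configMapRef:
            name: my-app-config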

Analysis and Comments

Upgrade without a gray release, and you'll shed tears taking the blame. Whether you are upgrading the application image or changing its configuration, remember to do a fresh gray release and verify it.

In addition, we can see that immutable infrastructure brings a new perspective to building cloud applications. Moving in this direction not only makes architectures safer and more reliable, it also composes with the other major tools of the cloud native community, allowing newcomers to leapfrog traditional application services. For example, by combining the Wave project above with Istio's weighted routing, a site can send a small share of traffic to verify a new version of its configuration.
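A sketch of the Istio side, assuming the stable and canary Pods are labeled as two DestinationRule subsets; names and weights are illustrative:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: stable      # Pods still running the old ConfigMap
      weight: 95
    - destination:
        host: my-app
        subset: canary      # the new Deployment rolled out by Wave
      weight: 5             # send 5% of traffic to verify the new config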

Server-side Apply

Kubernetes is a declarative resource management system. The user defines the desired state locally, and kubectl apply updates the portion of the cluster's current state that the user specified. But it is far less simple than it sounds.

The original kubectl apply is implemented on the client side. Apply cannot simply replace a resource's entire state, because other parties also change resources: controllers, admission webhooks, other users. So how do you make sure that changing a resource does not overwrite someone else's changes? The answer was the 3-way merge: the client saves the last applied state in the object's annotations, and on the next apply it computes a 3-way diff among the last applied state, the user-specified state, and the live state, generating a patch to send to the APIServer. But there is still a problem! Apply's intent is to let each party specify which resource fields it manages. The original implementation, however, neither prevents different parties from overwriting each other's fields, nor informs users or helps resolve conflicts when they occur. For example, back when I worked at CoreOS, both controllers and users in our product changed certain special labels on Node objects; conflicts occurred, clusters failed, and someone had to repair them by hand.
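Concretely, client-side apply leaves its bookkeeping on the object itself; a trimmed sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # The client stores the full last-applied state here (trimmed below),
    # then computes a 3-way diff among this annotation, the local file,
    # and the live object, and PATCHes the result to the APIServer.
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{...},"spec":{...}}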

This Cthulhu-like dread has hung over every Kubernetes user, and now there is finally light at the end of the tunnel: server-side apply. The APIServer performs the diff and merge itself, which eliminates many of these fragile failure modes. More importantly, instead of the last-applied annotation, server-side apply provides a declarative API, managed fields, to record who manages which resource fields. When a conflict occurs, for example when kubectl and a controller both change the same field, the conflicting request is rejected with an error and a hint on how to resolve it.
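A sketch of what this looks like in practice, based on the alpha feature as described at the time; the flag spellings below are those of current kubectl, so check your version:

# Apply on the server, identifying yourself as a field manager:
#   kubectl apply --server-side --field-manager=my-team -f deploy.yaml
#
# If another manager (say, a controller) already owns a field you change,
# the request fails with a conflict naming that manager; you can resolve
# it, or take ownership explicitly:
#   kubectl apply --server-side --force-conflicts -f deploy.yaml
#
# The APIServer records ownership on the object itself:
metadata:
  managedFields:
  - manager: my-team        # the --field-manager identity
    operation: Apply
    apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:replicas: {}      # this manager owns spec.replicas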

Analysis and Comments

Mom no longer has to worry about my kubectl apply. Although it is still in Alpha, it is only a matter of time before server-side apply replaces the client-side implementation. It makes it safer and more reliable for different components to change the same resource at the same time.

In addition, we can see that as the system evolves, and especially as declarative APIs are used more widely, logic is moving from the client to the server. The server side has many advantages: operations such as kubectl dry-run and diff are easier to implement there; exposing an HTTP endpoint makes it easier to build apply-like functionality into other tools; and implementing and releasing complex logic on the server side makes it easier to manage and control, giving users a safe, consistent, high-quality service.
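Two small examples of that direction, using flag spellings from recent kubectl releases (the 2019-era alpha used slightly different flags, so verify locally):

kubectl diff -f deploy.yaml                     # the server computes the diff against live state
kubectl apply --dry-run=server -f deploy.yaml   # the full admission chain runs; nothing is persisted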

GitOps

At the conference, a panel discussed the benefits of GitOps. Here is a summary.

First, GitOps makes the whole team more democratic. Everything is written down, and you can read it whenever you like. Every change goes through a pull request before it is released, so you not only see it clearly but can also take part in the review and weigh in. All changes and discussions are recorded in tools such as GitHub, so you can revisit the history at any time. All of this makes teamwork smoother and more professional.

Second, GitOps makes releases safer and more stable. Code can no longer be released at will; it must be reviewed by the responsible owner, or even by several people. When you need to roll back, the previous version is right there in Git. There is an audit trail of who released what code and when. All of this makes the release process more professional and its outcomes more stable and reliable.

Analysis and Comments

GitOps does not just solve a technical problem; it uses the versioning, history, auditing, and permission features of GitHub and similar tools to make team collaboration and the release process more professional and repeatable.

If GitOps becomes widely adopted, its impact on the whole industry will be enormous. For example, an engineer joining any company could come up to speed and release code quickly.

The ideas embodied in GitOps, configuration as code and Git as the source of truth, are well worth studying and practicing.
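As one concrete illustration, and an assumption of mine rather than anything the panel prescribed, here is a sketch of declaring an application with Argo CD, a GitOps tool; the repository URL and paths are invented:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/ops-repo.git   # Git as the source of truth
    targetRevision: main
    path: apps/my-app            # manifests reviewed via pull requests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete what is removed from Git
      selfHeal: true   # revert out-of-band changes to match Git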

Automated Canary Rollout

A canary rollout sends a small share of traffic to the new version, then analyzes whether its online behavior is normal. If everything looks good, traffic is gradually shifted to the new version until the old version receives none and is destroyed. In tools like Spinnaker, there is a manual validation and approval step. That step can be replaced by automation; after all, everything being checked is fairly mechanical, such as the success rate and the p99 latency.

Based on this idea, engineers from Amadeus and Datadog shared how to use Kubernetes, Operators, Istio, and Prometheus to automate canary releases. The idea is to abstract the whole canary release into a CRD, so that performing a canary release reduces to writing one declarative YAML file; an Operator receives the user-created YAML and automatically carries out the complex operations. The main steps, with a hypothetical sketch after the list, are:

1. Deploy the new version's Deployment + Service;
2. Change the Istio VirtualService configuration to shift part of the traffic to the new version;
3. Check in the Istio metrics whether the new version's success rate and p99 response time meet the requirements;
4. If they do, upgrade the whole application to the new version; otherwise, roll back.
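What the user-facing YAML could look like, as a purely hypothetical sketch: the talk's actual CRD schema is not reproduced here, and every field name below is invented for illustration:

apiVersion: example.com/v1alpha1
kind: CanaryRelease
metadata:
  name: my-app-canary
spec:
  targetDeployment: my-app                  # existing stable Deployment
  newImage: registry.example.com/my-app:v2  # version under test
  trafficPercent: 10                        # share routed via Istio
  analysis:
    successRateMin: 99.0      # checked against Istio metrics in Prometheus
    p99LatencyMaxMs: 500
    interval: 1m
  onFailure: rollback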

Coincidentally, Weaveworks also has an open source automated canary release tool, Flagger. The difference is that Flagger shifts traffic to the new version incrementally, for example 5% more per step, until all traffic has moved over, and then destroys the old version directly.
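A hedged sketch of Flagger's Canary resource (the v1beta1 schema; field details vary by release, so check the docs for the version you deploy):

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  service:
    port: 80
  analysis:
    interval: 1m
    stepWeight: 5        # shift 5% more traffic at each step
    maxWeight: 50        # hand over fully once checks keep passing
    threshold: 5         # failed checks tolerated before rollback
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m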

Analysis and Comments

Once you try canary releases, you never go back. Canary releasing is an important part of application management; it improves release success rates and system stability.

In addition, we can see that in the cloud native era, these complex operational processes are being simplified and standardized. CRD abstractions turn complex process steps into a few short API objects, and with Operators doing the automated operations, users get these capabilities on any standard Kubernetes platform. Istio and Kubernetes, as top-level standardized platforms, provide the powerful building blocks that make all of this easier to use.

Closing Thoughts

In this article, we reviewed some of the new ideas about application management and deployment from KubeCon:

Why and how to perform a fresh release when a configuration file changes.

Client-side kubectl apply has many problems, one of which is different parties overwriting each other's resource fields; these are solved by the server-side apply implementation.

GitOps does not just solve a technical problem; it makes team collaboration and the release process more professional and repeatable.

Built on top-level standardized platforms such as Kubernetes, Operators, Istio, and Prometheus, canary releases become simpler to operate and easier for developers to adopt.

These new ideas also prompt a reflection: we used to envy "other people's infrastructure", always so excellent and so far out of reach. Now, open source projects and technical standards are lowering the barrier for every developer to use these technologies. At the same time, a subtle shift is underway: "self-developed" basic software faces diminishing marginal returns, leading more and more companies, Twitter among them, to join the cloud native camp. Embracing the open source ecosystem and technical standards has become a major opportunity, and challenge, for Internet companies. Only by building cloud-oriented applications and architectures on top of the cloud and open source can we be fully prepared to sail through this cloud revolution.



Author: Wood Ring


This article is original content from the Yunqi Community and may not be reproduced without permission.
