By Wang Yinli (Yun Zheng)
Source | Alibaba Cloud official account
Technique is fleeting, theory is lasting, and practice decides the winner. It has been said that the conscience of a city lies in its sewers: no matter how tall its buildings or how magnificent its construction, every rainstorm becomes a test of that conscience. If we compare building a cloud native system to building a city, what is the conscience of the cloud native system? What is its rainstorm? And who tests that conscience?
Cloud computing brings significant business value, mainly in the following areas:
- Fast iteration: in the martial-arts world, only speed is unbeatable. To win a place in fierce market competition, we must strike first. The essence of cloud native is to help the business iterate quickly, and its core element is continuous delivery.
- Safe and reliable: cloud native systems can recover quickly from failures through observability mechanisms, and can prevent unauthorized access through isolation methods such as logical and physical multi-tenancy.
- Elastic scaling: by transforming traditional applications into cloud native applications, we can scale elastically, better cope with traffic peaks and troughs, and reduce cost while improving efficiency.
- Open source co-construction: cloud native technology helps cloud vendors open up the cloud market through open source, attracting more developers to build the ecosystem. From the beginning, cloud native technology chose a "flywheel evolution" path: the ease of use and openness of the technology create a positive cycle of rapid growth, and the growing base of applications in turn drives enterprises onto the cloud and continuously improves the technology landscape.
Next, this article analyzes cloud native from several angles, including basic concepts, common technologies, and a complete platform construction system, to give you a preliminary understanding of cloud native.
What is cloud native?
1. Definition of cloud native
The definition of cloud native has kept changing, and different organizations understand it differently. The best-known definitions come from the CNCF and Pivotal. Here is the latest CNCF definition:
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
The Cloud Native Computing Foundation (CNCF) seeks to foster and sustain an ecosystem of open source, vendor-neutral projects to promote cloud native technologies, democratizing state-of-the-art patterns to make these innovations accessible to everyone.
In addition, Adam Wiggins, co-founder of Heroku, compiled the famous Twelve-Factor App methodology (https://12factor.net/). Later, Kevin Hoffman of Pivotal (since acquired by VMware) published Beyond the Twelve-Factor App, adding three new factors to the original twelve, yielding the fifteen factors of cloud native.
The fifteen factors are an ideal set of practices for developing SaaS applications, distilling nearly all of their authors' experience and wisdom. They apply to back-end services written in any language, standardize and automate processes, reduce the learning cost for new engineers, and draw a clear line between the application and the underlying operating system to ensure maximum portability.
The following figure gives an overview of the definitions and characteristics of cloud native.
2. The essence of cloud native
Literally, "cloud native" can be split into two parts: cloud and native.
Cloud is relative to local: traditional applications must run on local servers, while today's popular applications run on the cloud, which includes IaaS, PaaS, and SaaS.
Native means that when we first design an application, we already assume it will run in a cloud environment, making full use of the advantages of cloud resources such as the elasticity and distributed nature of cloud services.
Cloud native includes not only technology (microservices, agile infrastructure) but also management (DevOps, continuous delivery, Conway's law, organizational restructuring, and so on). Cloud native is a combination of cloud technologies and enterprise management methods.
1) Cloud native is not the business itself
When people ask me what cloud native is, I ask back: if you want your business to iterate quickly, what do you want cloud native to be? Cloud native is certainly not one specific thing; it represents the pursuit of the essence of a problem, and it is a methodology.
The essence of cloud native is to help the business iterate quickly. It is not the business itself, it is not a technology stack, and it should not be applied mechanically. We should look not at what we have, but at what the customer wants.
In fact, cloud native represents technological progress. We should improve the iteration efficiency not only of new business but also of legacy business. A good architecture is generally tolerant of human limitations, so the legacy business here may be a historical burden, or a prejudice born of a knowledge bottleneck.
We are aging all the time and creating all the time. Only those who dare to question themselves, the past, and authority have the power and insight to create new things.
2) Cloud native is not cloud computing
There are great differences between cloud computing and cloud native, mainly reflected in the following aspects:
- Cloud native applications are born in the cloud: they are built and deployed in the cloud and truly tap the power of cloud infrastructure. Cloud computing applications are usually developed in-house on traditional infrastructure and, after adjustment, merely become remotely accessible from the cloud.
- Cloud native applications are designed as multi-tenant hosted instances (microservice architecture). Cloud computing applications run on internal servers and have no multi-tenant instances.
- Cloud native applications are highly scalable: a single module can be changed in real time without disturbing the whole application. Cloud computing applications must be upgraded manually, which can cause interruption and downtime.
- Cloud native applications require no investment in hardware or software, since everything runs in the cloud and can usually be obtained under license, so they are relatively cheap to use. Cloud computing applications are often expensive because they must be upgraded to meet changing needs.
- Because no hardware or software needs to be provisioned, cloud native applications can be brought up quickly. Cloud computing applications require a customized installation environment.
3) Cloud native itself is complex
Cloud native changes not only technology but also the business. Since cloud native helps the business iterate quickly, business code and project processes are bound to change fundamentally. Typically, the business layer grows lighter and lighter, the base grows thicker and thicker, data processing becomes more and more automated, and non-human users become more and more numerous.
Next, we can explore the essence of cloud native through Yuval Noah Harari's three "brief history" books.
With the development of artificial intelligence in the 21st century, human society will gradually transition from humanism to Dataism. If human society is one vast data network, and even human emotions are merely biochemical algorithms selected by evolution, then everyone is just one data processor in that network, whether Homo sapiens, virtual human, or future superhuman. Take the difference between communism and capitalism as an example: communism is a centralized algorithm that computes everyone's needs through a national data network and then allocates resources; capitalism is a distributed algorithm in which a few capitalists control most of society's resources.
It can be said that data used to live on isolated islands: it could be deployed on a few physical machines and managed well without affecting anything else. Today all applications are online and gradually become living assets. The constraints on applications grow ever stricter and more complex, data flows and dependencies become completely unpredictable, and no application can cope with this alone.
Cloud native is actually very complex. Its essence is to connect data and process it from disorder into information, knowledge, and wisdom. The complexity of cloud native comes from the fact that it must accommodate ever more complex transactions and structures. On the other hand, cloud native is also very simple, because it brings end users endless convenience and rich functionality without their even noticing. Complexity and simplicity are relative: the more complex the bottom, the simpler the top.
What is a cloud native application?
What is a cloud native application, and what is its relationship to cloud native? A cloud native application is basically defined as follows:
A cloud native application is an application designed and developed to be deployed and run on a cloud platform. Cloud native applications are not only packaged as Docker images but also deployed to a Kubernetes container cloud to run. Strictly speaking, most traditional applications can run on a cloud platform without any changes, but that mode of operation cannot truly enjoy the dividends of the cloud; we call such applications cloud-hosted applications.
In addition, cloud native applications can be classified in different ways. Depending on the business scenario, they can be classified by state or by function.
1. Classification by state
Cloud native applications divide into stateless applications and stateful applications. The distinction is mainly whether the state of the application instance must be perceived. In Kubernetes, the application instance is the Pod, and a stateful application essentially depends on the state of its Pods.
1) Stateless application
Stateless applications do not depend on the local running environment; their instances are independent of one another and can scale freely.
Features of stateless applications:
- Instances of a stateless application are like cattle: nameless and disposable;
- A running instance stores no persistent data locally;
- All information about a stopped instance (except logs and monitoring data) is lost.
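To make the cattle analogy concrete, here is a minimal Python sketch (all class and key names are hypothetical): because every replica keeps its state in an external backing service rather than locally, any replica can serve any request, and a replica can be destroyed and replaced without losing data.

```python
# Sketch: a stateless service keeps no per-instance data; all state lives in an
# external store, so any replica can serve any request and instances are disposable.

class ExternalStore:
    """Stand-in for a shared backing service (e.g. Redis or a database)."""
    def __init__(self):
        self.data = {}

class StatelessReplica:
    """A replica holds only a reference to the shared store: no local state."""
    def __init__(self, store: ExternalStore):
        self.store = store

    def handle(self, key, value=None):
        # Write-through to the shared store, then read back from it.
        if value is not None:
            self.store.data[key] = value
        return self.store.data.get(key)

store = ExternalStore()
a, b = StatelessReplica(store), StatelessReplica(store)
a.handle("cart:42", "book")
print(b.handle("cart:42"))  # book — a different replica sees the same state
```

Because replica `b` produces the same answer as replica `a`, either one can be killed and rescheduled at any time, which is exactly what makes free scaling possible.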
2) Stateful application
Stateful applications depend on the local running environment; their instances have dependencies and start-up ordering between them, require data persistence, and cannot be scaled at will.
Features of stateful applications:
- Instances of a stateful application are like pets: named and irreplaceable;
- Instance upgrades and canary releases impose requirements on start/stop ordering, for example distributed leader election;
- They depend on instance information such as ID, name, IP, MAC, SN, and so on;
- They need data persistence and depend on local files and configuration.
3) Conversion between stateless and stateful
Stateful and stateless applications can be converted into each other. Most middleware applications are stateful, such as ZooKeeper, RocketMQ, etcd, and MySQL. Most business applications are stateless, such as web applications and query services.
- From stateless to stateful
For example, when a relatively simple cloud product is deployed on the public cloud, it can rely on the public cloud's infrastructure and thus be stateless; but when it is deployed on a private cloud, it must handle its own environment and its dependencies on other BaaS services, and so becomes stateful. This is the gap created by differences in infrastructure and operation modes.
In general, we do not advocate overly complex dependencies between applications. Especially on a proprietary cloud, complex dependencies cause many environmental problems: one almost has to move the whole public cloud onto the proprietary cloud, which is a heavy mental burden for both us and our customers.
- From stateful to stateless
A business application is stateful in nature, but it can offload its state to middleware with the help of middleware products, operations APIs, BaaS, and serverless platforms. A stateful application that can be converted into a stateless one in this way is also called a pseudo-stateful application.
Becoming stateless through middleware: most business applications can use middleware products on the public cloud for their compute, storage, and network capabilities. For example, a web application can use a database product such as RDS, provisioning and depending on an RDS instance through BaaS capabilities, and implement only its core business logic.
Becoming stateless through operations APIs: an application with special operations logic can call operations APIs to transfer that operational complexity. For example, MetaQ needs active/standby switching; by using the leader-election API provided by etcd on Kubernetes to mark MetaQ instances, MetaQ developers can operate MetaQ like a stateless application.
Becoming stateless through serverless: an application whose business logic is very simple does not even need to build an image; it can be developed directly on a serverless platform and handed over to the platform for operations.
To better identify pseudo-statefulness, we should judge whether an application is stateful from its nature rather than from its current state. Applications such as ZooKeeper, etcd, and MySQL, whose operations depend entirely on their own application code, are genuinely stateful and very hard to convert.
So when a stateful application becomes stateless, does the state disappear? In essence the state is still there. Being end-state oriented does not mean no operations are performed; it means those operations are handed over to the platform, which reacts to state changes in order to reach the desired state. That process is lifecycle operations. Statefulness is not reduced; it is simply no longer exposed to users. Kubernetes essentially solved the problem of whether a Pod exists; for stateful applications we must also attend to the Pod's life cycle. Turning business operators into platform operators is the main work of converting stateful applications into stateless ones.
In a cloud native system, we should do our best to turn stateful applications into stateless ones. Only then can we fully enjoy the benefits of cloud native, leaving observability and high availability to the cloud platform while developers care only about the business closest to the customer.
As technology progresses, stateful applications will keep becoming stateless. Only a few middleware systems for caching, messaging, and storage will still need stateful operations, slowly sinking to the bottom layer, and most people will not need to understand the difference between the two.
2. Classification by function
If applications in a cloud native system are divided by function, they include business applications and operations applications.
1) Business application
A business application is one whose business code is written by business development engineers in Java, Go, Python, or other languages and then packaged as an image for deployment. Business applications solve business problems and implement specific business functions. The main deliverable of a business application is the image.
On a serverless platform, a business application can also be function code, which may be packaged as an image or deployed directly, without building an image, into a multi-language runtime environment.
2) Operations application
Because cloud native focuses on automating application operations, and a business application cannot solve its own operations problems, that is, it cannot manage itself, operations applications are needed to manage business applications.
An operations application is operations code written by operations engineers in YAML and Helm and then distributed to Kubernetes for deployment. Operations applications solve operations problems and implement special operations logic. The main deliverable of an operations application is the YAML.
Theoretical exploration of cloud native
1. Everything is data
In fact, from DevOps to AIOps there is also DataOps. Kubernetes' end-state orientation is like a black box: people cannot tell how the run is going, only that it eventually reaches the finish line, with no idea who ran fast and who ran slow. Therefore, alongside the end state we need an observable state that measures whether the process of reaching the end state is complete and healthy.
Therefore we must think in terms of data in everyday design and do more data modeling; otherwise observability is like trying to cook without rice. Let's look at the data in each aspect of cloud native:
- We edit resource configuration and deliver it through GitOps or Kubernetes commands. This is also called data-driven: everything is configuration data.
- The various behaviors of a resource require a series of actions, which can be triggered in many ways: everything executes data.
- The internal life cycle of a resource needs to be orchestrated, and the dependencies between resources also need to be orchestrated.
- Kubernetes is built on an event-driven architecture: every resource state change on Kubernetes produces an event, so everything is event data.
- An event stream is a log, a business record is a log, an action change is a log. Structured logs are the root of observability: everything is log data.
- Whether configuration instructions, dependency orchestration, or events, they all revolve around resources, and all APIs are invoked with a resource as the subject: everything is resource data.
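The event-driven and everything-is-log points above can be sketched together in a few lines of Python (a toy pub/sub, with hypothetical event and function names, not a real Kubernetes API): a resource change emits an event, subscribed handlers consume it, and every event is also appended to a structured log.

```python
# Toy event bus: resource changes emit events; handlers consume them;
# every event also becomes a structured log record ("everything is log").
import collections

subscribers = collections.defaultdict(list)
audit_log = []  # structured log of every event that flowed through the system

def subscribe(event_type, handler):
    """Register a handler for one event type."""
    subscribers[event_type].append(handler)

def emit(event_type, payload):
    """Record the event as a log entry, then fan it out to all subscribers."""
    audit_log.append({"type": event_type, **payload})
    for handler in subscribers[event_type]:
        handler(payload)

seen = []
subscribe("pod.created", lambda e: seen.append(e["name"]))
emit("pod.created", {"name": "web-1"})
print(seen, len(audit_log))  # ['web-1'] 1
```

The same change thus drives both reaction (the handler) and observability (the log entry), which is why structured events are treated as the root of observability.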
2. The theory of multi-dimensional business composition
People often tell me: cloud native technology is so popular and everyone urges us onto the cloud, but apart from saving cost, why do I see no obvious help for delivering business faster? I think perhaps you have not yet found a business architecture particularly suited to the cloud native era.
Some say Chinese is the most expressive ideographic language: it is a two-dimensional language in which a couple of thousand basic characters combine by analogy, flexible and rich in both form and spirit. English, by contrast, is a one-dimensional language: each new thing needs a new word, there are no tones, and related words bear no visible relation to one another, though it excels at expressing precise, non-massive information such as programming and mathematical or chemical notation. From this we can draw an analogy: the underlying technology is best served by the machine's one-dimensional language of 0s and 1s, while the upper business layer needs a multi-dimensional business model.
It can be said that cloud native brings not only technical development but also profound business change. Do we yet have a business model that can guide complex business in the cloud native era?
Typical architectures, such as microservice architecture, event-driven architecture, and middle-platform architecture, do not seem to solve the problem. The author has made some explorations and devised a theory of multi-dimensional business composition, represented as a vertical-and-horizontal diagram.
Meaning of each element in the figure:
- Vertical-and-horizontal diagram: crisscrossing lines and area blocks subdivide the various domains
- Point: the smallest unit of business function and business assembly
- Horizontal line: micro-platform, PaaS, a single service subject
- Vertical line: business software, SaaS
- Cylinder: a business domain or technology domain
- Area block: a solution or one-stop workbench, whose permissions can be controlled by tenant, product, and service.
From the figure we can see the isolation and the expansion scope of each domain; the vertical and horizontal layers will keep multiplying, and the domains will be divided ever more finely.
For example, suppose a trading-system application depends on message queues and databases and wants to be deployed to Kubernetes on the public cloud. If none of these layers existed today, the engineers in charge of the trading system would have to buy public cloud machines themselves, then deploy Kubernetes, then deploy the middleware, then deploy the trading system, all while solving assorted network and stability problems. The result can be imagined.
We can also see the value of the vertical-and-horizontal diagram in the development of technology. As technology develops rapidly, business engineers feel that things are no longer as simple as before, because business complexity keeps increasing while even faster iteration is demanded. Many concepts such as microservices, containers, and the middle platform are designed to accelerate innovation, and decoupling is for the sake of better composition. So how do we control granularity? Physics offers a hint: in theory, the higher a civilization evolves, the further it reaches into both the micro and the macro, as with quantum mechanics and relativity. The right granularity is the one that matches the innovation capacity of today's society.
In the future we must play to the technology ecosystem, and innovation through the combination and orchestration of technical points will inevitably become the main theme. A single-point technology struggles to deliver value and take root, and is easily replaced; building an ecosystem around a single-point technology is a road that rarely lasts. In a good platform, any single technical point is replaceable. The era of technology orchestration has arrived. The ultimate goal of cloud native is solution delivery, not cost reduction, in service of faster innovation.
3. The end-state-oriented theory
End-state orientation is akin to being data-driven; both bring the software system closer to the ideal of pure human intent. Kubernetes is end-state oriented, and reactive programming is data-driven: both hand events over to the system to manage, so that we only need to state what we want, not how to achieve it.
In the whole Kubernetes design philosophy, end-state orientation is the core concept and the key to operations automation. For example: my application needs 10 instances, and if a machine fails, replace it for me automatically. Such requirements are declared and submitted to the system, and the system automatically accomplishes what the user expects. This is end-state-oriented design, and its core method is the declarative API.
The following takes the Deployment as an example. The core logic is to treat a custom CR (MyApp) as the desired state and the Deployment as the running state, and to write the reconcile logic by comparing the differences between their attributes.
The following diagram explains the relationship between the various resources and their controllers.
The following conclusions can be drawn from the figure:
- The flow of replicas from the MyApp CR to the Deployment is one-way.
- The MyApp CR drives the Deployment, and the Deployment drives the Pods.
- Pod status feeds back into the Deployment, Deployment status feeds back into the MyApp CR, and finally the app's status reaches Running.
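The attribute-comparison loop described above can be sketched in plain Python (a simplified illustration with hypothetical field names; real controllers are built with client-go or controller-runtime): the reconcile function compares the desired state in the CR with the running state of the Deployment, patches the difference, and feeds status back up.

```python
# Minimal sketch of the reconcile pattern: compare desired state (MyApp CR)
# with actual state (Deployment), patch the difference, and report status.

def reconcile(myapp_spec: dict, deployment: dict) -> dict:
    """Drive the Deployment (actual state) toward the MyApp CR (desired state)."""
    desired = myapp_spec["replicas"]
    actual = deployment.get("replicas", 0)
    if actual != desired:
        # Attributes differ: patch the Deployment toward the desired state.
        deployment["replicas"] = desired
    # Status feeds back up: Deployment status -> MyApp status.
    deployment["status"] = "Running" if deployment["replicas"] == desired else "Progressing"
    return deployment

myapp = {"replicas": 3}
deploy = {"replicas": 1}
deploy = reconcile(myapp, deploy)
print(deploy)  # {'replicas': 3, 'status': 'Running'}
```

A real controller runs this comparison in a loop triggered by watch events, which is why the replicas flow stays one-way (spec downward) while status flows the other way (upward).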
But end-state-oriented design in Kubernetes is not complete enough. It does not define the end state of the full life cycle of each kind of resource: how to define the resource state machine, how to depend on BaaS and configuration, how to insert hooks, how to subscribe to and handle events, and how to measure completeness and health.
The essence of operations is process-oriented, so the process too needs definition. By analogy: the end of a life is death, but is that end really what we yearn for? We need to broaden the width of life and find the meaning of happiness along the way. Operations in cloud native are similar: every resource has a life cycle; where there is a life cycle there is a process, where there is a process there are states, and where there are states there is a state machine.
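The chain "process, states, state machine" can be made explicit with a small sketch (the state names and transition table are illustrative, not a real Kubernetes API): defining which transitions are legal is exactly what gives a resource's life cycle a well-defined shape.

```python
# Sketch of a resource life cycle as an explicit state machine.
# Legal transitions are declared up front; anything else is rejected.

TRANSITIONS = {
    "Pending":     {"Creating"},
    "Creating":    {"Running", "Failed"},
    "Running":     {"Updating", "Terminating"},
    "Updating":    {"Running", "Failed"},
    "Failed":      {"Creating", "Terminating"},
    "Terminating": {"Terminated"},
    "Terminated":  set(),
}

class Resource:
    def __init__(self):
        self.state = "Pending"

    def transition(self, target: str) -> bool:
        """Move to `target` only if the transition table allows it."""
        if target in TRANSITIONS[self.state]:
            self.state = target
            return True
        return False  # illegal transition rejected: the life cycle stays well-defined

r = Resource()
print(r.transition("Creating"), r.transition("Running"))   # True True
print(r.transition("Terminated"))                          # False — cannot skip Terminating
```

Hooks and subscribed events then naturally attach to transitions: each legal edge of the table is a place where platform operations can be triggered.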
4. The centralized management theory
The nature of cloud native is to connect businesses and data. For example, to avoid lock-in by cloud vendors one must span clouds; for multi-site active-active deployment one must span regions; in edge computing, to simplify management or form logical clusters one must span Kubernetes clusters. In all these scenarios, centralized management is the problem that must be solved.
It can be said that in anything "cross-X", from something as large as a country to something as small as a ZooKeeper cluster, there must be a centralized managing body. Generally, physical isolation mainly isolates data centers. Among the many kinds of data, we care chiefly about the data used for scheduling; scheduling data consists of relatively simple instructions issued on behalf of the user, which we call configuration. Centralized management in cloud native therefore needs a global scheduling center and a global configuration center. In complex scenarios, each physical cluster can additionally run a client agent that receives and parses instructions. The design of Prometheus monitoring is like this: a monitoring agent is added to each node to monitor the system and collect and report data.
5. The theory of orchestration moving up
Nothing can orchestrate and manage itself; a thing must be self-contained, so there is always a higher-level object to orchestrate it. For example, the architecture of a cluster scheduling system cannot scale horizontally by itself: if more servers must be managed, the only way is to create multiple clusters. Likewise, a container cannot orchestrate itself, so Kubernetes appeared. In distributed leader election there can be only one master; if there are two, something above them must decide, just as in a team there can be only one supervisor, and if there are two, there must be someone above them to make the final decision.
Moreover, the position of each layer is not fixed, and the business stack keeps moving up. What we consider complex today will all be automated in the future.
The key to decoupling is a self-contained loop, the key to composition is orchestration, and the key to automation is scheduling and coordination.
There is another phenomenon in cloud native: many different functions are all called resource orchestration. Provisioning a cloud service is called resource orchestration, operations scheduling is called resource orchestration, and application deployment is also called resource orchestration. "Resource" is a very broad word, "orchestration" is a very broad word, and "resource orchestration" is broader still. In Kubernetes everything is a resource: machines are resources, storage and compute are resources, and services are resources too; every composition is an orchestration, and wherever there is a dependency there is orchestration. Even human affairs can be described as someone orchestrating something. So whenever we speak of orchestration we must add a qualifier, or its meaning remains unclear.
There are also essential differences between scheduling, orchestration, and coordination. In a container platform, although scheduling and orchestration belong to the same subsystem, they are not responsible for the same things. Scheduling is the process of reasonably allocating idle resources in the distributed system to the processes that need to run, encapsulated in containers. Orchestration is the process of health checking, automatic scale-out and scale-in, automatic restart, rolling release, and so on for the containers in the system. In addition, we need controllers to drive resource state toward the end state; that process is called coordination. More concretely, in application lifecycle management, a workload producing Pods involves orchestration, mounting hooks involves scheduling, and consuming events is coordination.
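The scheduling/orchestration distinction above can be illustrated with a toy sketch (function names, node names, and the placement policy are all hypothetical simplifications of what a real platform does): scheduling picks a node for each pod, while orchestration keeps creating pods until the desired count is met.

```python
# Toy split of responsibilities: schedule() places one pod on a node
# (scheduling); orchestrate() maintains the desired replica count (orchestration).

def schedule(pod_cpu: int, nodes: dict) -> str:
    """Scheduling: place the pod on the node with the most free CPU."""
    name = max(nodes, key=lambda n: nodes[n])
    if nodes[name] < pod_cpu:
        raise RuntimeError("no node has enough free CPU")
    nodes[name] -= pod_cpu  # reserve the capacity on the chosen node
    return name

def orchestrate(desired: int, pods: list, pod_cpu: int, nodes: dict) -> list:
    """Orchestration: create pods until the desired count is met."""
    while len(pods) < desired:
        pods.append(schedule(pod_cpu, nodes))  # each new pod still goes through scheduling
    return pods[:desired]

nodes = {"node-a": 4, "node-b": 2}
pods = orchestrate(3, [], pod_cpu=1, nodes=nodes)
print(pods)  # ['node-a', 'node-a', 'node-a']
```

Coordination would then be the loop that re-runs `orchestrate` whenever the observed pod count drifts from the desired count, which is exactly the reconcile pattern of the controllers discussed earlier.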
6. The never-fail theory
This can also be called dependency relativity: the systems that will never fail are the ones that keep you alive. You sit at one link of the call chain; trust the stability of the systems you depend on, and let them back you up.
Take the layered model of business application environments as an example. We divide the environments into test, pre-release, and production. The business application depends on middleware, which runs on Kubernetes. In general, the underlying infrastructure a business application depends on is highly reliable; otherwise something far worse is happening. So when you test your business application, you mainly test your own core functions, and you must trust that your upstream is stable; otherwise the design of the test system becomes extremely complex. Of course, in monitoring you should still watch the upstream systems related to your own, and when an alarm fires, contact the engineers of the upstream system.
7. The life-cycle theory
To meet growing business needs, software architecture splits the original life cycle into a new core life cycle (the subject unchanged) and non-core life cycles (the subject changed); the non-core life cycles can be completed by others, and finally the results of the concurrently executed sub-life-cycles are merged to complete the total life cycle.
From the development of technology we can see that application granularity keeps shrinking, and more and more technical code sinks into the underlying infrastructure.
Frankly speaking, operating a business on a cloud native application platform mainly means operating resources such as Pods, configuration, BaaS applications, products, and solutions. The key to automation is to define the life cycle of each resource and to orchestrate the hooks and subscribed events of each stage for consumption.
8. The dimensionality-reduction theory
In the past two years a phrase has become very popular: "dimensionality-reduction strike", as in "I destroy you, and it has nothing to do with you", which comes from the science fiction novel The Three-Body Problem. The general idea is that a higher-dimensional being defeats beings in a lower-dimensional world; put more plainly, use mismatched competition to stay ahead of your competitors. In cloud native, whether in technology or in business, a rebellious spirit and the courage to innovate can produce a dimensionality-reduction strike. There are three ways to achieve it:
- From quantitative change to qualitative change: from small to large, innovation can happen anytime and anywhere. Beyond a certain point, the impact of cloud native on business becomes fundamental and visible.
- Cross-dimensional leap: from left to right, overtaking on the curve by moving from one industry to a related one. For example, a team building a container platform can easily pivot to aPaaS.
- Entrance monopoly: from top to bottom, hiding the underlying implementation. For example, a team building a technology platform may initially use a paid component, but as it grows it is likely to develop the component itself, and the paid component's business will suffer greatly.
In addition, we can choose different R&D modes for different business scenarios:
- Bottom-up: start from the bottom and develop the business system following the MVP (minimum viable product) principle. Growing from small technical points into larger combined innovations ultimately serves the goal of cloud native: improving delivery efficiency and shortening the innovation iteration cycle.
- Top-down: derive the technical architecture step by step from the business perspective, so that the resulting system does not deviate from the business itself and the likelihood of later refactoring is small.
- Native mode: develop according to the original idea. For example, there are three development paths to PaaS: SaaS → PaaS, IaaS → PaaS, and native PaaS. Which one is better? I believe most people would choose native PaaS. Take car making as an example: you cannot put a single wheel on the market; you must first have a car that can run.
9. Chasm theory
As early as 1991, Geoffrey Moore proposed the famous "chasm theory" based on the characteristics of the high-tech industry and the life cycle of high-tech enterprises. Building on the Diffusion of Innovations, this theory divides the adoption life cycle of an innovative technology or product into five stages: innovators, early adopters, early majority, late majority and laggards.
Kubernetes became the de facto standard for container orchestration at the end of 2017. Since then the cloud native ecosystem centered on Kubernetes has continued to explode. It can be said to have crossed the chasm in the adoption cycle, entered the early majority stage, and begun to occupy the mainstream market with its huge potential.
10. Flywheel theory
The flywheel effect means that to make a stationary flywheel rotate, you have to push hard at the beginning, again and again; every turn takes great effort, but no effort is wasted, and the flywheel spins faster and faster. After a critical point, the flywheel's own momentum becomes part of the driving force; you no longer need to push as hard, and the flywheel keeps spinning quickly on its own.
The flywheel effect is in fact also a compound-interest effect. Take the rise of Amazon as an example; its three pillar businesses were the key to starting the flywheel:
- The great-value Prime membership: for only 99 dollars a year, members enjoy many value-added services.
- The Marketplace third-party seller platform: besides Amazon's own products, other sellers can also sell their products directly on Amazon.
- AWS cloud services: providing cloud services to enterprises large and small. Whether you are a large company or a small business, you can build your entire IT system on Amazon's stable cloud services.
The core technologies of cloud native
Cloud native technology is developing very fast. Since the concept was put forward, new technologies have been incubated in an endless stream every year. This chapter introduces the common open source technologies of cloud native.
1. Operation and maintenance technology
From template technology to configuration technology and then to programming technology, the flexibility of operations increases step by step. Template technology is too rigid to abstract real-world objects; programming technology is very flexible, but its complexity is high, it introduces many uncontrollable factors, and its maintenance cost is high. Therefore, in my view, dynamic configuration technology will gradually replace template technology and become the mainstream.
So is a strictly constrained language better, or a flexible general-purpose one? I think it depends on the usage scenario. Blind unification only obliterates the richness of the business and proves the saying that "what is general-purpose is useless".
1) Template technology
YAML is a highly readable format for data serialization. In Kubernetes, the final-state-oriented, data-driven, declarative APIs are all expressed in YAML.
But YAML cannot express object-oriented design: we cannot associate the various flat YAML fragments with each other, nor clearly trace how things evolve. Moreover, embedding JSON and other scripts inside YAML turns the language into a poor general-purpose one. To solve this series of problems, the community has gradually developed various technologies to enhance YAML, such as dynamic configuration and operations frameworks. If Kubernetes is the operating system of the future, YAML is its assembly language.
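As a minimal illustration of this flat, declarative style, a hypothetical Deployment might look like the following (all names and the image are placeholders):

```yaml
# Illustrative Deployment manifest: every field declares desired end state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2                  # final state: two Pods, however we get there
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.21    # example image
```

Nothing here says how to reach the desired state, and related fragments (a Service, an Ingress) remain disconnected flat files, which is exactly the association problem described above.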
Helm is Kubernetes' package management tool. But it clearly does not want to be just a package manager; it also includes template rendering and simple dependency configuration.
Helm inherits the shortcomings of YAML and simply piles YAML files together. Meanwhile, the debugging cost of its complex template syntax is very high; the combination of flow-control structures with whitespace-sensitive indentation is a disaster for tired eyes.
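To make the pain concrete, here is a small, hypothetical Helm template fragment (value names are invented) mixing Go template flow control with YAML indentation:

```yaml
# templates/deployment.yaml (illustrative fragment)
spec:
  replicas: {{ .Values.replicaCount | default 2 }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- if .Values.resources }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- end }}
```

A wrong `nindent` depth or a misplaced `{{-` silently produces invalid YAML that only surfaces at render or install time.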
KUDO (Kubernetes Universal Declarative Operator) provides a way to build production-grade Kubernetes operators declaratively. Beyond the simple automation enhancements that can be declared directly in Kubernetes, more complex scenarios otherwise have to be handled manually; KUDO helps developers automate them fully.
KUDO's package structure is similar to Helm's, but it adds execution plans for resources on top of Helm. Where Helm's only action is apply, KUDO's plans also add delete and toggle actions.
Metacontroller is an extension service for Kubernetes that encapsulates most of the basic functions a custom controller requires. When you create a custom controller through Metacontroller's API, you only need to provide the business-logic functions your controller needs; these functions are triggered through webhooks.
Metacontroller's configuration looks simple, but it tries to solve business problems purely by technical means, and its solutions are limited to two patterns: building a controller for a group of objects, or adding new behaviors to existing objects.
2) Configuration technology
Cloud configuration and related systems were considered in the design of CUE, but it is not limited to that domain. Its formalism derives from logic programming languages, and CUE continues the idea of being a JSON superset. Its key technical innovation is a type system based on set theory; it can be seen as an open source realization of the ideas behind Google's internal BCL. At present the CUE ecosystem is not strong and there are few supporting development tools, but fortunately several teams at Alibaba are actively developing it.
Jsonnet is an open source configuration language from Google designed to make up for JSON's shortcomings. It is fully compatible with JSON and adds features JSON lacks, including comments, references, arithmetic, conditionals, array and object comprehension, imports, functions, local variables, and inheritance. A Jsonnet program compiles to JSON-compatible data. In short, Jsonnet is an enhanced JSON.
The Jsonnet ecosystem is relatively mature: both Jsonnet and libsonnet files have development tooling and open source UI components. At present Prometheus and Kubeless both use this dynamic configuration language.
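A small, hypothetical Jsonnet snippet shows the reference and inheritance features that plain JSON lacks:

```jsonnet
// Illustrative: one base template, two environments derived from it.
local base = {
  image: "nginx:1.21",
  replicas: 2,
};
{
  // object inheritance: override only what differs per environment
  dev: base { replicas: 1 },
  prod: base { replicas: base.replicas * 3 },  // arithmetic on a reference
}
```

Compiling this yields plain JSON in which `dev.replicas` is 1 and `prod.replicas` is 6, with the shared fields written only once.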
HCL is a configuration language built by HashiCorp. Its goal is a structured configuration language that is friendly to both humans and machines, for use with command-line tools, but aimed specifically at DevOps tools, servers, and so on. HCL is also fully JSON-compatible: JSON is fully valid input to any system that expects HCL, which helps it interoperate with other systems.
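For illustration, a short HCL fragment in the Terraform style; the resource type and names here are placeholders, not a real provider:

```hcl
# Illustrative only: declares one instance with an interpolated tag.
variable "env" {
  default = "dev"
}

resource "example_instance" "web" {
  instance_type = "t3.micro"
  # interpolation keeps the config human-friendly compared with raw JSON
  tags = {
    Name = "web-${var.env}"
  }
}
```

The same structure can be written as JSON and fed to the same tool, which is what the JSON-compatibility claim above means in practice.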
Kusion is a high-level domain language and tool chain for cloud native infrastructure that provides a complete "compile to cloud" technology stack beyond the immutable business image. Kusion consists of the KCL language and tool chain, the kusionctl tool, the kusion-models SDK, and the ocmp practice definitions.
KCL is a dynamic, strongly typed configuration language for configuration definition and validation. It focuses on configuration-and-policy programming scenarios and takes serving cloud native configuration systems as its design goal, though as a configuration language it is not limited to the cloud native domain. KCL absorbs concepts from declarative and OOP programming paradigms and adds many optimizations and features for cloud native configuration scenarios.
Kusion is developed internally by Alibaba and has not yet been open sourced.
3) Programming technology
Operator is a framework developed by CoreOS to simplify the management of complex stateful applications. An operator is a state-aware controller that automatically creates, manages and configures application instances by extending the Kubernetes API.
Generally, an operator project must include a CRD and a controller, with webhooks optional. If Kubernetes is an "operating system", operators are its first tier of applications, serving higher-level users through Kubernetes' extension-resource interfaces. The main operator frameworks are Operator SDK and Kubebuilder; Kubebuilder is the one widely used inside Alibaba.
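For illustration, here is a minimal CRD of the kind such a project registers; the group, kind and field names are placeholders:

```yaml
# Illustrative CRD: the "extension resource" half of an operator project.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer        # desired state the controller reconciles
```

The controller half then watches `Database` objects and reconciles the cluster toward each object's spec.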
We hope to solve the various problems of native operators by designing a universal operator platform. The platform's core goals include:
- Simplify and standardize operator development (multi-language support, a simplified framework, a lower barrier for users).
- Sink the operator's core capabilities into the platform for unified control (the center controls all user operators).
- Improve operator performance (horizontal scaling, multi-cluster support, thin caches).
- Control operators' canary-release and runtime risks (better monitoring, canary and rollback abilities, a controlled blast radius, permission control, access restrictions).
The operator platform is developed internally by Alibaba and has not yet been open sourced.
Pulumi is an open source infrastructure-as-code project. It is an easy way to create and deploy cloud software using containers, serverless functions, hosted services and infrastructure on any cloud. Pulumi adopts the concepts of infrastructure as code and immutable infrastructure, and gives you the advantages of automation and repeatability from your favorite language (not YAML or a DSL).
At the heart of Pulumi is a cloud object model combined with a runtime that understands how programs written in any language describe the cloud resources they need, then plans and manages those resources in a robust way. This cloud runtime and object model are inherently language- and cloud-neutral, which is why Pulumi can support so many languages and cloud platforms.
Ballerina is an open source, compiled, strongly typed programming language and platform for application programmers in the cloud era, designed to make it easy to write software that just works. It is a combined design of language and platform: agile and integration-friendly, aiming to simplify integration and microservice programming.
Ballerina is a language built for integration. Based on sequence-diagram-style interaction, it has built-in support for common integration patterns and connectors, including distributed transactions, compensation and circuit breakers. With first-class support for JSON and XML, Ballerina makes it easy to build robust integrations across network endpoints.
cdk8s is a framework from AWS Labs, written in TypeScript, that lets us define Kubernetes resource manifests in object-oriented programming languages. cdk8s ultimately generates native Kubernetes YAML files, so applications defined with cdk8s can run on Kubernetes anywhere.
Terraform is a tool for building, changing, and safely and efficiently versioning infrastructure. It can manage existing popular service providers as well as custom in-house solutions. Terraform's features include infrastructure as code, execution plans, resource graphs, change automation, and more.
4) Application technology
OAM (Open Application Model) is an application-centric standard for building cloud native application platforms. OAM considers application delivery across public cloud, private cloud and edge cloud, and proposes a general model so that every platform can expose application deployment and operation capabilities through a unified high-level abstraction, solving the problem of cross-platform application delivery.
The core concepts of OAM are as follows:
- The first core concept is the components that make up an application, which may include a collection of microservices, a database, and a cloud load balancer.
- The second is the traits that describe an application's operational characteristics, such as autoscaling and rollout. They are essential to running applications but are implemented differently in different environments.
- Finally, to turn these descriptions into a concrete application, operators use an application configuration that combines components with their traits to build a deployable instance of the application.
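In the KubeVela implementation of OAM, these concepts come together roughly as sketched below; the component type, trait type and image are illustrative and depend on what definitions the platform installs:

```yaml
# Illustrative OAM application: one component plus an operational trait.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app
spec:
  components:
    - name: frontend
      type: webservice            # a component definition provided by the platform
      properties:
        image: nginx:1.21
        port: 80
      traits:
        - type: scaler            # operational trait: desired replica count
          properties:
            replicas: 3
```

The developer owns the component; the `scaler` trait is the operations side of the contract and can be implemented differently per environment.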
KubeVela is a simple and highly extensible application management platform and core engine built on Kubernetes and OAM. For application developers, KubeVela is a cloud native application platform with a very low mental burden: its core function is to let developers define and deliver modern microservice applications on Kubernetes conveniently and quickly, without knowing any Kubernetes details. In this sense, KubeVela can be considered the Heroku of the cloud native community.
OpenKruise is a standard extension of Kubernetes. It works alongside native Kubernetes and provides more powerful and efficient capabilities for managing application containers, sidecars, image distribution, and more. OpenKruise includes the following resources:
- CloneSet: more efficient and controllable application deployment, supporting graceful in-place upgrades, deletion of specified Pods, configurable release order, and rich strategies such as parallel and canary releases.
- Advanced StatefulSet: an enhanced version of the native StatefulSet whose default behavior is identical; it additionally provides in-place upgrades, parallel releases (max unavailable), release pausing, and more.
- SidecarSet: manages sidecar containers uniformly, injecting the specified sidecar containers into Pods that match a selector.
- UnitedDeployment: deploys an application to multiple zones through multiple subset workloads.
- BroadcastJob: configures a Job to run a Pod task on every node in the cluster.
- Advanced DaemonSet: an enhanced version of the native DaemonSet with identical default behavior, plus release strategies such as canary batching, node label selection, pausing and hot upgrades.
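As a sketch of the first resource above, an illustrative CloneSet using in-place upgrade (names and image are placeholders) might look like:

```yaml
# Illustrative CloneSet: in-place upgrade where possible, 20% max unavailable.
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  name: demo-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: app
          image: nginx:1.21
  updateStrategy:
    type: InPlaceIfPossible       # restart containers without recreating Pods
    maxUnavailable: 20%
```

Compared with a native Deployment, the in-place strategy keeps Pod identity and local state during image upgrades.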
2. Microservices
BaaS refers to the backend services a business application depends on. It needs a service catalog from which users select the middleware they need, then select rules through a BaaS plan. After a service instance is created, it is bound through the BaaS connector and the BaaS endpoint. For more on the principles, see the service center section of the cloud native application platform.
Service Catalog is a Kubernetes community incubation project that aims to access and manage service brokers provided by third parties, so that applications hosted on Kubernetes can use the external services those brokers represent.
The Open Service Broker API project enables independent software vendors, SaaS providers and developers to easily provide backing services to workloads running on cloud native platforms such as Cloud Foundry and Kubernetes. The specification, adopted by many platforms and thousands of service providers, describes a simple set of API endpoints for provisioning, connecting to and managing service offerings. Project participants come from Google, IBM, Pivotal, Red Hat, SAP and many other leading cloud companies.
Spring Cloud Connectors provides a simple abstraction for JVM-based applications running on cloud platforms. It discovers bound services and deployment information at runtime and supports registering discovered services as Spring beans. It is based on a plugin model, so the same compiled application can be deployed locally or on any number of cloud platforms, and it supports custom service definitions through the Java Service Provider Interface (SPI).
The purpose of a service mesh is to solve the communication and governance problems between services after a system is split into microservices. The mesh's data plane is composed of sidecar proxies.
Istio provides a simple way to establish a network of deployed services, with load balancing, service-to-service authentication, monitoring and so on, without any changes to service code. Istio's capabilities are as follows:
- Istio fits container and virtual machine environments (especially Kubernetes) and is compatible with heterogeneous architectures.
- Istio proxies the service network using sidecars, requiring no changes to the business code itself.
- Automatic load balancing for HTTP, gRPC, WebSocket and TCP traffic.
- Fine-grained control of traffic behavior through rich routing rules, retries, failover and fault injection, plus support for access control, rate limiting and quotas.
- Automatic metrics, logs and traces for all traffic entering and leaving the cluster.
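The routing-rule capability above can be sketched with an illustrative VirtualService splitting traffic for a canary release (service name and subsets are placeholders; the subsets would be defined in a companion DestinationRule):

```yaml
# Illustrative canary route: 90% of traffic to v1, 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demo-app
spec:
  hosts:
    - demo-app            # the in-mesh service this rule applies to
  http:
    - route:
        - destination:
            host: demo-app
            subset: v1
          weight: 90
        - destination:
            host: demo-app
            subset: v2
          weight: 10
```

Adjusting the weights shifts traffic without touching application code, which is the core promise of the sidecar model.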
Currently, both Alimesh and ASM adopt the Istio scheme.
Linkerd is a transparent service mesh that adds service discovery, load balancing, failure handling, instrumentation and routing to all inter-service communication, keeping modern applications safe and reliable without intruding on the applications themselves.
As a transparent HTTP/gRPC/Thrift proxy, Linkerd can be added to existing applications with minimal configuration regardless of the language they are written in. Linkerd works with many common protocols and service discovery backends, including scheduled environments such as Mesos and Kubernetes.
3) Microservice frameworks
Dapr is an open source, portable, event-driven application runtime developed by Microsoft. It lets developers easily build resilient, stateless and stateful microservice applications that run on the cloud and at the edge. Running as a sidecar, Dapr acts as a microservice runtime, providing capabilities the program itself lacks. Dapr's main features are as follows:
- Service invocation: service-to-service invocation enables method calls, including retries, on remote services, wherever they run in a supported hosting environment.
- State management: with key/value state management it is easy to write long-running, highly available stateful services alongside stateless services in the same application.
- Publish/subscribe messaging between services: an event-driven architecture simplifies horizontal scaling and adds fault-recovery ability.
- Event-driven resource bindings: bindings and triggers build further on the event-driven architecture, receiving and sending events from any external resource (databases, queues, file systems, blob storage, webhooks, etc.) for scalability and flexibility.
- Virtual actors: a pattern for stateless and stateful objects that makes concurrency simple through method and state encapsulation. Dapr's virtual actor runtime provides many features, including concurrency, state, life-cycle management for actor activation and deactivation, and timers and reminders to wake actors up.
- Distributed tracing between services: using the W3C Trace Context standard, it is easy to diagnose and observe inter-service calls in production and push events to tracing and monitoring systems.
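Dapr wires capabilities like the state management above through declarative components. An illustrative Redis-backed state store (host and names are placeholders) might look like:

```yaml
# Illustrative Dapr component: a Redis-backed state store for the sidecar.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore           # the application refers to the store by this name
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis-master:6379
    - name: redisPassword
      value: ""              # placeholder; use a secret reference in real use
```

Swapping the backing store is then a matter of changing the component, not the application code, which is the point of the runtime-as-sidecar design.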
Dubbo is an open source, Java-based distributed service framework for high-performance RPC (remote procedure calls), dedicated to providing a high-performance, transparent RPC remote invocation scheme and an SOA service governance scheme. The HSF framework used inside Alibaba will gradually be replaced by Dubbo.
Spring Cloud provides developers with tools for common patterns in distributed systems (configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, a control bus, one-time tokens, global locks, leader election, distributed sessions and cluster state). With Spring Cloud, developers can quickly implement these patterns.
At present Alibaba has built an enhancement on top of the native Spring Cloud framework and Alibaba middleware, called Spring Cloud Alibaba.
- Spring Cloud：https://spring.io/projects/spring-cloud
- Spring Cloud Alibaba：https://spring.io/projects/spring-cloud-alibaba
3. Serverless
In essence, serverless means no one needs to perceive servers. According to the scenario it can be divided into Kubernetes serverless, app serverless, BaaS serverless, FaaS serverless, data serverless and so on.
In the pre-container era, serverless had already developed to some extent in big data and artificial intelligence, for example Alibaba's ODPS and TPP platforms. But the arrival of the container era has greatly accelerated its development.
In addition, serverless is flourishing in the front-end field, where there are a variety of excellent, easy-to-use serverless platforms.
CloudEvents is a specification for describing event data in a common format, enabling interaction across services, platforms and systems.
Event formats specify how CloudEvents are serialized in particular encodings. Compatible CloudEvents implementations that support an encoding must follow the rules specified in the corresponding event format. All implementations must support the JSON format.
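In the mandatory JSON format, an event carries four required context attributes (`specversion`, `id`, `source`, `type`); the event type and payload below are invented for illustration:

```json
{
  "specversion": "1.0",
  "id": "a1b2c3",
  "source": "/demo/orders",
  "type": "com.example.order.created",
  "datacontenttype": "application/json",
  "time": "2021-01-01T00:00:00Z",
  "data": { "orderId": 42 }
}
```

Any consumer that understands CloudEvents can route on `type` and `source` without knowing the producing platform.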
The Serverless Framework is a very popular serverless application framework in the industry. With it, developers can deploy a complete, working serverless application without caring about the underlying resources. It offers resource orchestration, autoscaling and event-driven capabilities covering the whole coding-debugging-testing-deployment life cycle, helping developers quickly build serverless applications by linking cloud resources together.
Kubeless is a serverless framework built on Kubernetes that lets you deploy small amounts of code without worrying about the underlying infrastructure. It uses Kubernetes resources to provide autoscaling, API routing, monitoring, troubleshooting and more. Kubeless has three core concepts:
- Function: the user code to be executed, together with runtime dependencies, build instructions and other information.
- Trigger: the event source associated with a function. If the event source is the producer and the function the executor, the trigger is the bridge between them.
- Runtime: the environment in which a function runs.
Nuclio is a high-performance serverless framework focused on data, I/O and compute-intensive workloads. It integrates well with popular data science tools such as Jupyter and Kubeflow, supports a variety of data and streaming sources, and supports execution on CPUs and GPUs. The Nuclio project started in 2017 and has developed rapidly; many startups and enterprises now use it in production.
Fission is an open source serverless product led by the private cloud service provider Platform9. It uses Kubernetes' flexible and powerful orchestration abilities for container management and scheduling, and focuses on FaaS function development; its goal is to become an open source alternative to AWS Lambda. Fission has three core concepts:
- Function: a code fragment written in a specific language that needs to be executed.
- Trigger: associates functions with event sources. If the event source is the producer and the function the executor, the trigger is the bridge between them.
- Environment: the language-specific environment used to run user functions.
OpenFaaS is a popular and easy-to-use serverless framework, though not as mature as OpenWhisk, and its code is largely contributed on an individual basis. Besides those spare-time contributions, VMware has hired a team to maintain OpenFaaS full-time.
OpenWhisk is a mature serverless framework supported by the Apache Foundation and IBM; IBM's cloud function service is also based on OpenWhisk, and its main contributors are IBM employees. OpenWhisk makes use of CouchDB, Kafka, Nginx, Redis and ZooKeeper; its many underlying components add some complexity.
Fn is a container-native serverless computing platform that can run on premises or in the cloud; it requires Docker containers. The project's main contributors are from Oracle. There is also a feature called Fn Flow that can orchestrate multiple functions, similar to OpenWhisk.
Serverless Devs is Alibaba's first open source serverless developer platform, and the industry's first cloud native full-life-cycle management platform supporting mainstream serverless services and frameworks. Through this platform, developers can try out multi-cloud serverless products with one click and deploy serverless projects rapidly.
Knative is Google's open source serverless solution, aiming to provide a simple, easy-to-use serverless stack and to standardize serverless. The main participating companies are Google, Pivotal, IBM and Red Hat. First released on July 24, 2018, it is still evolving rapidly. Knative solves the problems of building, deploying and running container-based serverless applications. Note that Knative's original Build component has been deprecated in favor of Tekton.
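A minimal Knative Service, adapted from the project's hello-world sample, shows how a single manifest yields a scale-to-zero HTTP workload:

```yaml
# Illustrative Knative Service: routing, revisioning and autoscaling
# are derived automatically from this one resource.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # sample image from the Knative docs
          env:
            - name: TARGET
              value: "world"
```

Knative creates the underlying Deployment, Route and Revision objects itself, which is what distinguishes it from writing those resources by hand.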
4. Continuous delivery
GitOps is a fast and secure way for developers and operators to maintain and update complex applications running on Kubernetes or other declarative orchestration frameworks. The four principles of GitOps are as follows:
- The whole system is described declaratively.
- The target state of the system is version-controlled in Git.
- Approved changes to the target state are automatically applied to the system.
- Software agents drive convergence and report divergence.
For teams that have no management-and-control system and temporarily need hands-on operation, GitOps is a good choice. If you already have a control system, GitOps is not recommended: you would have to keep the control database, the Git files and the Kubernetes runtime state consistent, adding one more link to the process and raising the error rate.
Argo is a Kubernetes-native workflow/pipeline engine, and Argo Workflows is implemented as CRDs. Every step of an Argo workflow is a container; multi-step workflows are modeled as a sequence of tasks or as a DAG capturing the dependencies between tasks. Argo mainly includes the following:
- Argo Workflows: a declarative workflow engine.
- Argo CD: declarative GitOps continuous delivery.
- Argo Events: event-based dependency management.
- Argo Rollouts: CRDs supporting canary and blue-green deployment.
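As a sketch of the GitOps half, an illustrative Argo CD Application (the repository URL and paths are placeholders) continuously syncs manifests from Git into a cluster:

```yaml
# Illustrative Argo CD Application: Git is the source of truth.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/demo/manifests.git   # placeholder repository
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true        # delete resources removed from Git
      selfHeal: true     # revert out-of-band cluster changes
```

With `selfHeal` enabled, the agent drives convergence toward the Git-declared state and reports any divergence, matching the GitOps principles listed above.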
Since every Argo step is a Pod, it consumes considerable server resources, so it should be used cautiously for production-grade business systems.
Tekton is a powerful and flexible Kubernetes-native framework for creating CI/CD systems. By abstracting away the underlying implementation details, it lets developers build, test and deploy across multi-cloud environments or on-premises systems. Tekton's overall architectural abstraction is very good and can solve essentially all orchestration problems under containers.
But, as with Argo, every step is a Pod, which is just as resource-intensive.
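An illustrative Tekton Task (images and the parameter are placeholders) shows the step-as-container model described above:

```yaml
# Illustrative Tekton Task: the steps below run as containers in one Pod.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test
spec:
  params:
    - name: repo-url
      type: string
  steps:
    - name: clone
      image: alpine/git            # example images; pin versions in real use
      script: |
        git clone "$(params.repo-url)" /workspace/src
    - name: test
      image: golang:1.17
      script: |
        cd /workspace/src && go test ./...
```

Tasks compose into Pipelines, and a PipelineRun instantiates them, which is where the per-step Pod cost noted above comes from.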
5. Cluster management
Kubernetes Federation (KubeFed) lets you coordinate the configuration of multiple Kubernetes clusters through a set of APIs in a host cluster. KubeFed aims to provide a mechanism for expressing which clusters should be managed and how they should be configured. It is intentionally a low-level mechanism, laying the foundation for more complex multi-cluster use cases such as deploying multi-region applications and disaster recovery.
K3s is a lightweight Kubernetes. It is easy to install, its binary is under 40 MB, and it runs in as little as 512 MB of RAM, making it well suited to edge, IoT, CI, ARM and similar scenarios. K3s is a simplified, lightweight K8s produced by Rancher; as the name suggests, K3s is smaller than K8s.
K9s provides a curses-based, full-screen terminal UI for interacting with your Kubernetes cluster. The project aims to simplify browsing, observing and managing applications. K9s continuously watches for Kubernetes changes and offers commands to interact with the resources you observe. It is a "single pane of glass" utility favored by administrators.
Minikube is an easy way to run Kubernetes locally. It creates a single-node Kubernetes cluster inside a virtual machine on your laptop, making it easy to try Kubernetes or use it for daily development; inside it we can create Pods and the corresponding Services.
OpenYurt focuses on the concept of "cloud-edge integration". Relying on Kubernetes' powerful container orchestration, OpenYurt addresses the distribution, delivery and control of applications across cloud and edge. It helps users solve large-scale application delivery, operations and control over massive edge and device resources, and provides a channel for central services to sink to the edge, connecting seamlessly with edge computing applications. From the start, OpenYurt's design has emphasized a consistent user experience and zero extra operational burden, so that users can truly "extend your native Kubernetes to edge" with ease.
OpenShift is a Platform-as-a-Service (PaaS) cloud development platform from Red Hat. This free, open-source cloud computing platform lets developers create, test, and run their applications and deploy them to the cloud. OpenShift supports a wide range of programming languages and frameworks, such as Java, Ruby, and PHP, and provides a variety of integrated development tools, such as Eclipse integration, JBoss Developer Studio, and Jenkins. OpenShift promotes deploying applications as Operators, proposes an Operator maturity model, and has its own application-definition templates; compared with other container platforms it is relatively lightweight.
Cloud Foundry, developed by Pivotal, was the industry's first open-source PaaS cloud platform. It supports a variety of frameworks, languages, runtimes, cloud platforms, and application services, enabling developers to deploy and scale applications in seconds without worrying about any infrastructure problems.
Combined with Spring Cloud Connectors, Cloud Foundry handles the service dependencies of Spring applications very well. However, Cloud Foundry is quite heavyweight and predates the container era; it is very difficult to operate and maintain, so it should be used with caution.
KubeSphere is a distributed, multi-tenant, multi-cluster, enterprise-grade open-source container platform built on Kubernetes by QingCloud. It has powerful, well-rounded networking and storage capabilities, and through a streamlined user interface provides multi-cluster management, CI/CD, microservice governance, application management, and other functions. It helps enterprises quickly build, deploy, and operate container architectures on cloud, virtualized, bare-metal, and other heterogeneous infrastructure, achieving agile development and full-lifecycle management of applications.
KubeSphere can fairly be called a work of conscience in this industry: the interactive experience is excellent and the feature set is very complete. KubeSphere and its application matrix carry the operations of almost all of QingCloud's business applications and cloud products, whereas Alibaba Cloud products today mostly rely on vertical, per-product operations systems.
Azure is a cloud computing operating system developed by Microsoft, originally named "Windows Azure". Like the Azure Services Platform, the name comes from Microsoft's "software plus services" strategy. Microsoft Azure's main goal is to provide a platform that helps developers build applications that run across cloud servers, data centers, the web, and PCs. In addition, through Azure Service Fabric, scalable and reliable microservices (and non-microservices) can be easily developed, packaged, deployed, and managed.
Anthos is a hybrid-cloud/multi-cloud management platform developed by Google with Kubernetes at its core. Its main function is to secure customers' network connections and applications and to deliver cloud services in the form of containers. It was developed because customers want a single programming model that lets them choose, and flexibly move, their workloads between Google Cloud and other cloud platforms (such as Azure and AWS) without making any changes.
Heroku, a Salesforce company, is a cloud service provider offering various convenient cloud services such as servers, databases, monitoring, and computing. It also offers a free tier, which is very convenient for those of us who want to build small side projects; although it is sometimes constrained by runtime limits and downtime, it is enough for small personal applications.
Crossplane is an open-source multi-cloud control plane developed by Upbound, used to manage your cloud native applications and infrastructure across environments, clusters, regions, and clouds. Crossplane can be installed into an existing Kubernetes cluster to add managed-service provisioning, or deployed as a dedicated control plane for multi-cluster management and workload scheduling.
At present, the OAM and Crossplane communities are working together to build an open community focused on standardizing applications and infrastructure.
Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters on any infrastructure, and provides DevOps teams with integrated tools for running containerized workloads.
Rancher's Rio is a micro-PaaS that can be layered on top of any standard Kubernetes cluster. Users can easily deploy services to Kubernetes and automatically get continuous delivery, DNS, HTTPS, routing, monitoring, autoscaling, canary deployment, Git-triggered builds, and more. All this requires is a Kubernetes cluster and the Rio CLI.
7. Big data and AI
Kubeflow is a machine learning toolkit released by Google. The Kubeflow project aims to make machine learning on Kubernetes easy, portable, and scalable. Its goal is not to rebuild other services, but to provide a straightforward way to deploy best-of-breed open-source (OSS) solutions.
Fluid is an open-source cloud native infrastructure project. Driven by the separation of compute and storage, Fluid aims to provide an efficient and convenient data-abstraction layer for AI and big-data cloud native applications, abstracting data away from storage in order to achieve the following goals:
- Through data-affinity scheduling and distributed cache-engine acceleration, data and computation are brought together, accelerating computation's access to data.
- Data is managed independently of storage, and resources are isolated via Kubernetes namespaces to achieve secure data isolation.
- Data from different storage systems can be combined for computation, offering a chance to break the data-island effect caused by the differences between storage systems.
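A minimal sketch of what this data abstraction looks like in practice, based on Fluid's `Dataset` and `AlluxioRuntime` custom resources (the bucket path, replica count, and cache sizes here are hypothetical):

```yaml
# Hypothetical Fluid Dataset: abstracts a remote storage path
# so that workloads address "demo" instead of the OSS bucket.
apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: demo
spec:
  mounts:
  - mountPoint: oss://example-bucket/training-data
    name: demo
---
# The cache runtime backing the Dataset: two cache workers,
# each caching up to 2 GiB of hot data in memory.
apiVersion: data.fluid.io/v1alpha1
kind: AlluxioRuntime
metadata:
  name: demo
spec:
  replicas: 2
  tieredstore:
    levels:
    - mediumtype: MEM
      path: /dev/shm
      quota: 2Gi
```

Pods that mount the resulting `demo` volume read through the distributed cache, which is how the data-affinity acceleration described above is realized.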
KubeTEE is a cloud native, large-scale-cluster confidential-computing framework. It aims to solve the problems that arise across the whole lifecycle, from development and deployment to operations, of TEE (Trusted Execution Environment) technology in cloud native environments. KubeTEE is an integrated solution for using TEE technology in cloud native scenarios, comprising a collection of frameworks, tools, and microservices.
Problems with cloud native
1. Is stateless really a silver bullet?
We advocate transforming applications into stateless ones: the Deployment workload in Kubernetes is designed specifically for stateless applications, some state-machine frameworks recommend designing pipelines to be stateless, and functions in FaaS are essentially stateless. But is stateless really a silver bullet? For example, for a high-QPS function that must query the database and perform heavy computation, wouldn't it be better to cache the data locally?
2. Is it really feasible to "connect in one place, operate everywhere"?
The cloud native technology stack keeps moving up the stack, closer to the business. For application operations, for example, we originally wanted to build one technology to cover everything: once middleware was connected to an application platform, that platform could be exported to all kinds of public and private clouds. But long practice has shown that different customers have different requirements and that cloud infrastructures differ in many ways, so "access in one place, operate everywhere" is very hard to achieve. If we blindly pursue unification, we will only sink into a great mud pit where nothing gets done.
3. Where is the difficulty with the middle platform?
Since the middle-platform (zhongtai) theory could be put forward at all, it must have matched the business context of its time. So why has later practice fallen short of expectations? On the surface, I think the main problem lies in companies' deep-rooted to-C (consumer-facing) genes, which are hard to change with one grand, all-encompassing business theory. We need to keep exploring and refine the middle-platform theory from both the business and the technical side.
4. Why is what customers say different from what they want?
You will find that when customers decide to buy your product, they talk with you about big features such as multi-site active-active, unitized deployment, multi-tenant isolation, rate limiting and degradation, and so on; but after buying it, they end up using only the basic features. This is because the people who decide to buy and the people who actually use the product are not the same group. We must therefore dig deeply into what the real users of the product want, in order to establish a long-term cooperative relationship.
5. Can a single application model really rule them all?
Behind every application model there must be a corresponding platform. The application layer faces the business, and beyond cloud native applications there are all kinds of industry applications; different business scenarios use and deliver applications differently. Moreover, almost every platform has its own application model, so an application model inherently serves a particular platform. For example, OpenShift, Cloud Foundry, and KubeSphere each have their own application model abstracted over native Kubernetes concepts. A given application model can therefore only serve one vertical scenario.
The future of cloud native
The development of cloud native technology has become an irresistible trend, and now is the best time to apply cloud native technology to commercial products. When the technology system changes, the business model inevitably changes with it. We all know the future will change; how do we seize the cloud native opportunity and find the great windfall of the era?
The only way out is to break down the old systems and the old cognition.