
OpenKruise 0.8.0, A Powerful Tool for Sidecar Container Management

· One min read
Mingshan Zhao
Member of OpenKruise

OpenKruise is an open-source automation management suite for cloud-native applications from Alibaba Cloud, and is currently a Sandbox project hosted by the Cloud Native Computing Foundation (CNCF). It distills Alibaba's years of experience in containerization and cloud native, is a standard set of extension components on top of Kubernetes that runs at large scale in Alibaba's internal production environment, and embodies technical ideas and best practices that stay close to upstream community standards while fitting Internet-scale scenarios.

OpenKruise released its latest version, v0.8.0, on March 4, 2021. This release strengthens the capabilities of SidecarSet, in particular adding more complete support for log-management sidecars.

Background

Sidecar is a very important container design pattern in cloud native: it splits auxiliary capabilities out of the main container into separate sidecar containers. In microservice architectures, the sidecar pattern is also commonly used to move general capabilities such as configuration management, service discovery, routing and circuit breaking out of the main program, which greatly reduces the complexity of the architecture. As Service Mesh has become increasingly popular, the sidecar pattern has won broad adoption; inside Alibaba Group it is used extensively to manage common components such as operations, security and messaging middleware.

In a Kubernetes cluster, a Pod can not only combine a main container with sidecar containers, but is also managed and upgraded by many powerful workloads (for example Deployment and StatefulSet). However, as the number of businesses running on Kubernetes grows, the variety and scale of sidecar containers grow as well, and managing and upgrading sidecar containers in production becomes an increasingly cumbersome job:

  1. A business Pod contains multiple sidecar containers for operations, security, proxying and so on. Business developers not only have to configure their own main container, but also need to be familiar with the configuration of these sidecar containers. This increases their workload and, at the same time, increases the risk of misconfiguring the sidecars.
  2. Upgrading a sidecar container requires restarting it together with the business main container (workloads such as Deployment and StatefulSet roll Pods by destroying and recreating them). Pushing out an upgrade of a sidecar that supports hundreds of online businesses therefore inevitably meets strong resistance from the business side.
  3. As the provider of a sidecar container, there is no direct and effective way to upgrade the many online sidecar containers with their various configurations and versions, which implies a huge potential risk for sidecar management.

Alibaba Group runs millions of containers carrying thousands of businesses, so managing and upgrading sidecar containers became a topic that urgently needed a better answer. We therefore summarized the common requirements of many internal sidecar containers and built them into OpenKruise, eventually abstracting them into SidecarSet, an effective tool for managing and upgrading a wide variety of sidecar containers.

OpenKruise SidecarSet

SidecarSet is the abstraction in OpenKruise for sidecars. It is responsible for injecting and upgrading sidecar containers in a Kubernetes cluster and is one of OpenKruise's core workloads. It provides a rich set of features that make sidecar container management very convenient. The main features are:

  1. Separate configuration management: each sidecar container is configured in its own SidecarSet, which makes it easy to manage.
  2. Automatic injection: sidecar containers are injected automatically when Pods are created, scaled out or rebuilt.
  3. In-place upgrade: sidecar containers can be upgraded in place without rebuilding the Pod, so the business main container is not affected, and a rich set of gray-scale release strategies is included.

Note: for a Pod that contains multiple containers, the container that provides the main business logic is called the main container, while auxiliary containers such as log collection, security and proxies are called sidecar containers. For example, in a Pod that serves web traffic, the nginx container providing the web server capability is the main container, and the logtail container that collects and ships the nginx logs is a sidecar container. The SidecarSet abstraction in this article is designed to solve problems around such sidecar containers.

Sidecar logging architectures

Application logs give you insight into what is happening inside your application; they are very useful for debugging problems and monitoring cluster activity. After an application is containerized, the simplest and most widely adopted logging approach is to write to standard output and standard error.

However, in today's world of distributed systems and large-scale clusters, this approach is not enough for production. First, in a distributed system the logs are scattered across individual containers, with no single place where they are aggregated. Second, when a container crashes or a Pod is evicted, logs can be lost. A more reliable logging solution that is independent of the container lifecycle is therefore needed.

A sidecar logging architecture puts the logging agent into a separate sidecar container, which collects the container logs through a shared log directory and then ships them to the backend storage of a logging platform.
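A minimal sketch of this pattern, with purely illustrative image names and paths, is a Pod in which the application writes log files into a shared volume and a logging-agent sidecar reads from the same path:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    app: web-server
spec:
  containers:
    # main container: writes access logs to /var/log/web
    - name: web-server
      image: nginx:1.19
      volumeMounts:
        - name: web-log
          mountPath: /var/log/web
    # sidecar container: ships the shared log directory to the backend store
    - name: logtail
      image: logtail:1.0.0
      volumeMounts:
        - name: web-log
          mountPath: /var/log/web
  volumes:
    - name: web-log
      emptyDir: {}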

(Figure: log sidecar architecture)

Alibaba and Ant Group also collect container logs internally based on this architecture. Below I will describe how OpenKruise SidecarSet helps the sidecar logging architecture land at large scale in Kubernetes clusters.

Automatic injection

OpenKruise SidecarSet implements automatic injection of sidecar containers based on the Kubernetes admission webhook mechanism. As long as the sidecar is configured in a SidecarSet, the defined sidecar containers are injected into every Pod that gets scaled out, regardless of whether the user deploys with CloneSet, Deployment, StatefulSet or any other workload.

(Figure: sidecar injection)

The owner of a sidecar container only needs to configure their own SidecarSet to have the sidecar injected without the business even noticing. This greatly lowers the barrier to using sidecar containers and simplifies the work of sidecar owners. To cover the many injection scenarios, SidecarSet extends the following fields in addition to containers:

# sidecarset.yaml
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: test-sidecarset
spec:
  # select Pods via the selector
  selector:
    matchLabels:
      app: web-server
  # only take effect in the specified namespace
  namespace: ns-1
  # container definition
  containers:
    - name: logtail
      image: logtail:1.0.0
      # share specific volumes
      volumeMounts:
        - name: web-log
          mountPath: /var/log/web
      # share all volumes
      shareVolumePolicy:
        type: disabled
      # share environment variables
      transferEnv:
        - sourceContainerName: web-server
          # TZ is the time zone; for example, the web-server container has TZ=Asia/Shanghai
          envName: TZ
  volumes:
    - name: web-log
      emptyDir: {}
  • Pod selector
    • selector chooses the Pods to inject into. In the example, Pods with labels[app] = web-server are selected and the logtail container is injected into them. You can also add a label such as labels[inject/logtail] = true to all Pods to perform cluster-wide sidecar injection.
    • namespace: a SidecarSet takes effect cluster-wide by default; set this field if it should only take effect in one namespace.
  • Volume sharing:
    • Sharing specific volumes: volumeMounts and volumes share specific volumes with the main container, as in the example where the web-log volume is shared to collect logs.
    • Sharing all volumes: shareVolumePolicy = enabled | disabled controls whether all volumes of the Pod's main containers are mounted. This is commonly used for log-collection sidecars: when enabled, all mount points of the application containers are injected into the sidecar at the same paths (except for volumes and mount points already declared in the sidecar itself).
  • Environment variable sharing: transferEnv copies the environment variable named envName from the container named sourceContainerName into the sidecar container. In the example the log sidecar shares the main container's time zone TZ, which is especially common in overseas environments.

Note: the Kubernetes community does not allow the number of containers of an already created Pod to be changed, so the injection described above can only happen when the Pod is created; Pods that already exist must be rebuilt to get the sidecar injected.
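To make this concrete, with the SidecarSet above in place the application owner only declares the main container; the webhook adds logtail when each Pod is created. A sketch of such a Deployment (the name and image are illustrative) might look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        # matched by the SidecarSet selector, so logtail is injected at creation time
        app: web-server
    spec:
      containers:
        - name: web-server
          image: nginx:1.19
          volumeMounts:
            - name: web-log
              mountPath: /var/log/web
      volumes:
        - name: web-log
          emptyDir: {}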

In-place upgrade

SidecarSet not only injects sidecar containers, it also reuses OpenKruise's in-place upgrade capability to upgrade sidecar containers without restarting the Pod or the main container. Since this kind of upgrade is essentially invisible to the business, upgrading sidecar containers is no longer an uphill battle: it greatly relieves sidecar owners and speeds up sidecar version iteration.

(Figure: sidecar in-place upgrade)

Note: for an already created Pod, the Kubernetes community only allows the container.image field to be modified. If a sidecar change touches any field other than container.image, the Pod must be rebuilt; it cannot be upgraded in place.
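For example, bumping only the image tag in the SidecarSet shown earlier triggers an in-place upgrade of the injected logtail containers (a sketch; the new tag is hypothetical):

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: test-sidecarset
spec:
  # ...
  containers:
    - name: logtail
      # only the image field changes, so Pods can be upgraded in place
      image: logtail:1.1.0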

To cover more complex sidecar upgrade scenarios, SidecarSet also provides a rich set of gray-scale release strategies on top of in-place upgrade.

Gray-scale release

Gray-scale release is probably the most common approach in day-to-day releases. It rolls out sidecar containers relatively smoothly and is strongly recommended for large clusters. Below is an example that pauses after the first batch and then continues rolling based on maxUnavailable, assuming 1000 Pods need to be released:

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    partition: 980
    maxUnavailable: 10%

With the configuration above, the release pauses after (1000 - 980) = 20 Pods have been released. After observing for a while and confirming that the sidecar containers behave normally, the business can update the SidecarSet configuration again:

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    maxUnavailable: 10%

After this change, the remaining 980 Pods are released with at most the maxUnavailable number of Pods (10% * 1000 = 100) unavailable at a time, until all Pods have been released.

The semantics of partition is the number or percentage of Pods to keep on the old version; the default is 0. partition does not imply any ordering. If partition is set during a release:

  • If it is a number, the controller updates (replicas - partition) Pods to the latest version.
  • If it is a percentage, the controller updates (replicas * (100% - partition)) Pods to the latest version.

maxUnavailable is the maximum number of Pods that may be unavailable at the same time during the release; the default is 1. It can be set as an absolute number or a percentage (a percentage is converted by the controller into an absolute number based on the number of selected Pods).

Note: maxUnavailable and partition are not necessarily related. For example:

  • With {matched pods}=100, partition=50 and maxUnavailable=10, the controller releases 50 Pods to the new version, but the release window is 10: only 10 Pods are released at the same time, and another Pod is picked up only after one finishes, until all 50 are done.
  • With {matched pods}=100, partition=80 and maxUnavailable=30, the controller releases 20 Pods to the new version. Since that is within the maxUnavailable budget, the 20 Pods are released simultaneously.

Canary release

Businesses that need a canary release can use strategy.selector. The approach: put a fixed label such as labels[canary.release] = true on the Pods that should be canaried first, then select those Pods via strategy.selector.matchLabels.

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    selector:
      matchLabels:
        canary.release: "true"
    maxUnavailable: 10%

The configuration above only releases the Pods carrying the canary label. After the canary has been verified, remove the updateStrategy.selector configuration and the rollout continues based on maxUnavailable.
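On the Pod side, the canary group only needs to carry the agreed label, for example in the workload's Pod template (an illustrative fragment):

# Pod template metadata of the canary workload
metadata:
  labels:
    app: web-server
    canary.release: "true"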

Scatter release

By default, SidecarSet upgrades Pods in the following order:

  • For the set of Pods being upgraded, the order is kept consistent across multiple upgrades.
  • The priority order is (lower values have higher priority): unscheduled < scheduled, pending < unknown < running, not-ready < ready, newer pods < older pods.

Beyond this default order, the scatter strategy lets users spread Pods matching certain labels across the whole release. For example, a cluster-wide sidecar container like logtail may be injected into the Pods of dozens of businesses in a single cluster, so the release can be scattered by application name to achieve a gray release that is spread across different applications; this can also be combined with maxUnavailable.

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    # scatter by Pod labels; assume every Pod carries labels[app_name]
    scatterStrategy:
      - key: app_name
        value: nginx
      - key: app_name
        value: web-server
      - key: app_name
        value: api-gateway
    maxUnavailable: 10%

Note: in the current version all application names must be listed explicitly; the next version will support a smarter scatter mode that only requires configuring the label key.

Summary

In this OpenKruise v0.8.0 release, the SidecarSet improvements mainly round out the capabilities for log-management sidecar scenarios. Going forward, while continuing to work on SidecarSet stability and performance, we will cover more scenarios; for example, the next release will add support for Service Mesh scenarios. We also welcome more contributors to the OpenKruise community to build together a richer and more complete set of Kubernetes application management and delivery extensions for even larger-scale, more complex and more performance-demanding scenarios.

UnitedDeployment - Supporting Multi-domain Workload Management

· One min read
Fei Guo
Maintainer of OpenKruise

Ironically, probably every cloud user knows (or should realize) that failures in cloud resources are inevitable. Hence, high availability is probably one of the most desirable features a cloud provider offers its users. For example, in AWS, each geographic region has multiple isolated locations known as Availability Zones (AZs). AWS provides various AZ-aware solutions that allow the compute or storage resources of user applications to be distributed across multiple AZs in order to tolerate AZ failures, which have indeed happened in the past.

In Kubernetes, the concept of an AZ is not represented by an API object. Instead, an AZ is usually represented by a group of hosts that share the same location label. Although hosts within the same AZ can be identified by labels, the capability of distributing Pods across AZs was missing from the Kubernetes default scheduler. Hence it was difficult to use a single StatefulSet or Deployment to perform AZ-aware Pod deployment. Fortunately, Kubernetes 1.16 introduced a new feature called "Pod Topology Spread Constraints". Users can now add constraints to the Pod spec, and the scheduler will enforce them so that Pods are distributed across failure domains such as AZs, regions or nodes in a uniform fashion.
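For context, the upstream feature mentioned above is configured directly in the Pod spec. A minimal sketch (the app label and image are illustrative) looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: sample
  labels:
    app: sample
spec:
  topologySpreadConstraints:
    # spread Pods with label app=sample across zones, allowing a skew of at most 1
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: sample
  containers:
    - name: nginx
      image: nginx:1.19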

In Kruise, UnitedDeployment provides an alternative way to achieve high availability in a cluster that consists of multiple fault domains: it manages multiple homogeneous workloads, each dedicated to a single Subset. The Pod distribution across AZs is determined by the replica number of each workload. Since each Subset is associated with a workload, UnitedDeployment can support finer-grained rollout and deployment strategies. In addition, UnitedDeployment can be further extended to support multiple clusters! Let us reveal how UnitedDeployment is designed.

Using Subsets to describe domain topology

UnitedDeployment uses a Subset to represent a failure domain. The Subset API primarily specifies the nodes that form the domain and the number of replicas, or the percentage of total replicas, to run in this domain. UnitedDeployment manages subset workloads against a specific domain topology, described by a Subset array.

type Topology struct {
    // Contains the details of each subset.
    Subsets []Subset
}

type Subset struct {
    // Indicates the name of this subset, which will be used to generate
    // subset workload name prefix in the format '<deployment-name>-<subset-name>-'.
    Name string

    // Indicates the node select strategy to form the subset.
    NodeSelector corev1.NodeSelector

    // Indicates the number of the subset replicas or percentage of it on the
    // UnitedDeployment replicas.
    Replicas *intstr.IntOrString
}

The specification of the subset workload is saved in Spec.Template. UnitedDeployment only supports StatefulSet as the subset workload for now. An interesting part of the Subset design is that users can now specify a customized Pod distribution across AZs, which is not necessarily uniform. For example, if AZ utilization or capacity is not homogeneous, distributing Pods evenly may lead to Pod deployment failures due to a lack of resources. If users have prior knowledge of AZ resource capacity/usage, UnitedDeployment can help apply an optimal Pod distribution that keeps overall cluster utilization balanced. Of course, if not specified, a uniform Pod distribution is applied to maximize availability.
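For illustration, a UnitedDeployment that spreads 8 replicas over three zones with a customized 4/2/2 distribution might look roughly like the sketch below. The resource names, zone labels and image are illustrative, and the statefulSetTemplate/nodeSelectorTerm manifest field names are assumptions based on the apps.kruise.io/v1alpha1 API (they differ slightly from the Go snippet above):

apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: sample-ud
spec:
  replicas: 8
  selector:
    matchLabels:
      app: sample
  template:
    # template of the subset workload (StatefulSet only, for now)
    statefulSetTemplate:
      metadata:
        labels:
          app: sample
      spec:
        selector:
          matchLabels:
            app: sample
        template:
          metadata:
            labels:
              app: sample
          spec:
            containers:
              - name: nginx
                image: nginx:1.19
  topology:
    subsets:
      - name: subset-a
        nodeSelectorTerm:
          matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - zone-a
        replicas: 4
      - name: subset-b
        nodeSelectorTerm:
          matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - zone-b
        replicas: 2
      - name: subset-c
        nodeSelectorTerm:
          matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - zone-c
        replicas: 2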

Customized subset rollout Partitions

Users can update all the UnitedDeployment subset workloads by providing a new version of the subset workload template. Note that UnitedDeployment does not control the entire rollout process of all subset workloads; that is typically done by another rollout controller built on top of it. Since the replica number of each Subset can differ, it is much more convenient to let users specify an individual rollout Partition for each subset workload instead of using one Partition to rule them all, so that they can be upgraded at the same pace. UnitedDeployment provides the ManualUpdate strategy to customize the per-subset rollout Partition.

type UnitedDeploymentUpdateStrategy struct {
    // Type of UnitedDeployment update.
    Type UpdateStrategyType
    // Indicates the partition of each subset.
    ManualUpdate *ManualUpdate
}

type ManualUpdate struct {
    // Indicates number of subset partition.
    Partitions map[string]int32
}

(Figure 1: multi-cluster controller)

This makes it fairly easy to coordinate the rollout of multiple subsets. For example, as illustrated in Figure 1, assuming UnitedDeployment manages three subsets with 4, 2 and 2 replicas respectively, a rollout controller can realize a canary release plan of upgrading 50% of the Pods in each subset at a time by setting the subset partitions to 2, 1 and 1 respectively. The same cannot easily be achieved with a single workload controller like StatefulSet or Deployment.
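Continuing that example, a rollout controller could express the 50% canary plan by setting per-subset partitions roughly as follows (a sketch; the Manual strategy type and manualUpdate/partitions field names follow the API types listed above, and the subset names are the hypothetical ones from the earlier sketch):

spec:
  updateStrategy:
    type: Manual
    manualUpdate:
      partitions:
        subset-a: 2
        subset-b: 1
        subset-c: 1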

Multi-cluster application management (in the future)

UnitedDeployment can be extended to support multi-cluster workload management. The idea is that Subsets may not only reside in one cluster but also spread over multiple clusters. More specifically, the domain topology specification will include a ClusterRegistryQuerySpec, which describes the clusters that UnitedDeployment may distribute Pods to. Each cluster is represented by a custom resource managed by a ClusterRegistry controller using the Kubernetes cluster registry APIs.

type Topology struct {
    // ClusterRegistryQuerySpec is used to find all the clusters that
    // the workload may be deployed to.
    ClusterRegistry *ClusterRegistryQuerySpec
    // Contains the details of each subset including the target cluster name and
    // the node selector in the target cluster.
    Subsets []Subset
}

type ClusterRegistryQuerySpec struct {
    // Namespaces that the cluster objects reside in.
    // If not specified, the default namespace is used.
    Namespaces []string
    // Selector is the label matcher to find all qualified clusters.
    Selector map[string]string
    // Describes the kind and APIVersion of the cluster object.
    ClusterType metav1.TypeMeta
}

type Subset struct {
    Name string

    // The name of the target cluster. The controller will validate that
    // the TargetCluster exists based on Topology.ClusterRegistry.
    TargetCluster *TargetCluster

    // Indicates the node selection strategy in the Subset.TargetCluster.
    // If Subset.TargetCluster is not set, the node selector strategy refers to
    // the current cluster.
    NodeSelector corev1.NodeSelector

    Replicas *intstr.IntOrString
}

type TargetCluster struct {
    // Namespace of the target cluster CRD
    Namespace string
    // Target cluster name
    Name string
}

A new TargetCluster field is added to the Subset API. If it is present, the NodeSelector expresses the node selection logic in the target cluster. The UnitedDeployment controller can then distribute application Pods to multiple clusters by instantiating a StatefulSet workload in each target cluster with a specific replica number (or a percentage of the total replicas), as illustrated in Figure 2.

(Figure 2: multi-cluster controller)

At first glance, UnitedDeployment looks like a federation controller following the design pattern of Kubefed, but it isn't. The fundamental difference is that Kubefed focuses on propagating arbitrary object types to remote clusters rather than managing an application across clusters. In this example, had a Kubefed-style controller been used, each StatefulSet workload in each individual cluster would have had 100 replicas. UnitedDeployment focuses on providing the ability to manage multiple workloads in multiple clusters on behalf of one application, which, to the best of our knowledge, is absent in the Kubernetes community.

Summary

This blog post introduces UnitedDeployment, a new controller which helps manage applications spread over multiple domains (in arbitrary clusters). It not only allows distributing Pods evenly over AZs (which, arguably, can be done more efficiently with the new Pod Topology Spread Constraints API), but also enables flexible workload deployment/rollout and will support multi-cluster use cases in the future.

Learning Concurrent Reconciling

· One min read
Fei Guo
Maintainer of OpenKruise

The concept of a controller in Kubernetes is one of the most important reasons for its success. A controller is the core mechanism that backs the Kubernetes APIs and ensures the system reaches the desired state. By leveraging CRDs/controllers and operators, it is fairly easy for other systems to integrate with Kubernetes.

The controller-runtime library and the corresponding controller tool KubeBuilder are widely used by many developers to build customized Kubernetes controllers. In the Kruise project, we also use Kubebuilder to generate the scaffolding code that implements the "reconciling" logic. In this blog post, I will share some learnings from Kruise controller development, particularly about concurrent reconciling.

Some people may have already noticed that controller-runtime supports concurrent reconciling. Check the options (source) used to create a new controller:

type Options struct {
    // MaxConcurrentReconciles is the maximum number of concurrent Reconciles which can be run. Defaults to 1.
    MaxConcurrentReconciles int

    // Reconciler reconciles an object
    Reconciler reconcile.Reconciler
}

Concurrent reconciling is quite useful when the states of the controller's watched objects change so frequently that a large number of reconcile requests are sent to and queued in the reconcile queue. Multiple reconcile loops drain the reconcile queue much more quickly than the default single reconcile loop. Although this is a great feature for performance, an immediate concern a developer may raise without digging into the code is: will this introduce consistency issues? That is, is it possible for two reconcile loops to handle the same object at the same time?

The answer is NO, as you may expect. The "magic" is enforced by the workqueue implementation in Kubernetes client-go, which is used by the controller-runtime reconcile queue. The workqueue algorithm (source) is demonstrated in Figure 1.

(Figure 1: workqueue)

Basically, the workqueue uses a queue and two sets to coordinate the handling of multiple reconcile requests against the same object. Figure 1(a) presents the initial state of handling four reconcile requests, two of which target the same object A. When a request arrives, the target object is first added to the dirty set, or dropped if it is already in the dirty set, and then pushed to the queue only if it is not in the processing set. Figure 1(b) shows the case of adding three requests consecutively. When a reconciling loop is ready to serve a request, it takes the target object from the front of the queue. The object is also added to the processing set and removed from the dirty set (Figure 1(c)). Now if a request for the object being processed arrives, the object is only added to the dirty set, not to the queue (Figure 1(d)). This guarantees that an object is handled by only one reconciling loop. When reconciling is done, the object is removed from the processing set. If the object is also in the dirty set, it is added to the back of the queue (Figure 1(e)).

The above algorithm has the following implications:

  • It avoids concurrent reconciling for the same object.
  • The object processing order can differ from the arrival order even if there is only one reconciling thread. This is usually not a problem since the controller still reconciles to the final cluster state. However, out-of-order reconciling may cause a significant delay for a request. For example, as illustrated in Figure 2 (workqueue starvation), assuming there is only one reconciling thread and two requests targeting the same object A arrive, one of them will be processed and object A will be added to the dirty set (Figure 2(b)). If the reconciling takes a long time and a large number of new reconcile requests arrive in the meantime, the queue will be filled up by the new requests (Figure 2(c)). When reconciling is done, object A will be added to the back of the queue (Figure 2(d)). It will not be handled until all the requests that arrived after it have been handled, which can cause a noticeably long delay. The workaround is actually simple: USE CONCURRENT RECONCILES. Since the cost of an idle goroutine is fairly small, the overhead of having multiple reconcile threads is low even when the controller is idle. It seems that MaxConcurrentReconciles should be overridden with a value larger than the default 1 (CloneSet uses 10, for example).
  • Last but not least, reconcile requests can be dropped (if the target is already in the dirty set). This means we cannot assume the controller can track every object state change event. Recalling a presentation given by Tim Hockin: Kubernetes controllers are level triggered, not edge triggered. They reconcile for state, not for events.

Thanks for reading the post, hope it helps.

Kruise Workload Classification Guidance

· One min read
Fei Guo
Maintainer of OpenKruise
Siyu Wang
Maintainer of OpenKruise

Kubernetes does not currently give clear guidance on which controller an application should use, which makes it especially hard for users to understand the relationship between applications and workloads. For example, users usually know when to use a Job/CronJob or a DaemonSet, because the concepts behind these workloads are very clear: the former is for deploying task-style applications, while the latter targets long-running Pods that need to be distributed to every node.

However, the boundary between other workloads, such as Deployment and StatefulSet, is rather blurry. An application deployed by a Deployment can usually also be deployed by a StatefulSet, and the OrderedReady policy StatefulSet applies to Pods is not mandatory. Moreover, as more and more custom controllers/operators in the Kubernetes community mature, it becomes harder for users to pick the most suitable workload for their application, especially when the capabilities of some controllers overlap.

Kruise tries to mitigate this problem in two ways:

  • Carefully design new controllers in Kruise to avoid unnecessary feature duplication that would confuse users.
  • Create a classification scheme for all the workload controllers Kruise provides, so that users can understand their use cases more easily. We describe it in detail below, starting with the controller naming conventions:

Controller naming conventions

An easy-to-understand controller name helps users a lot when choosing one. After consulting quite a few Kubernetes users inside and outside the company, we decided to adopt the following naming conventions in Kruise (they do not conflict with the naming of current upstream controllers):

  • Set suffix: controllers of this kind operate on and manage Pods directly, e.g. CloneSet, ReplicaSet and SidecarSet. They provide various deployment and release strategies at the Pod level.
  • Deployment suffix: controllers of this kind do not operate on Pods directly; they manage Pods indirectly by operating one or more Set-type workloads. For example, Deployment manages ReplicaSets to provide additional rolling strategies, and UnitedDeployment manages multiple StatefulSets/Advanced StatefulSets to deploy an application across different availability zones.
  • Job suffix: controllers of this kind mainly manage short-lived tasks, e.g. BroadcastJob distributes task-style Pods to every node in the cluster.

Set, Deployment and Job are all concepts widely accepted by the Kubernetes community, and Kruise defines clear extension conventions for them.

Can we further differentiate controllers that share the same suffix? Generally, the name before the suffix should be self-explanatory, but in some cases it is hard to describe a controller's behavior in a single word. Take a look at the issue behind the origin of StatefulSet: it took the community four months to decide to replace the old name PetSet with StatefulSet, even though the new name still looks somewhat confusing.

This example shows that even a carefully planned name does not always identify a controller well. Kruise therefore does not try to solve this problem; instead, it uses the following criterion to help classify Set-type controllers.

Fixed Pod names

A unique feature of StatefulSet is its support for consistent Pod network and storage identities, which is essentially achieved by fixing Pod names. A Pod name can identify both network and storage, since it is part of the DNS record and can be used as the PVC name. Why is this feature needed, given that all Pods of a StatefulSet are created from the same template? A common case is managing distributed consensus services such as etcd or ZooKeeper. Such applications need to know all the members that make up the cluster, and need to keep their network identity and disk data after being recreated or upgraded. In contrast, controllers like ReplicaSet and DaemonSet are designed for stateless workloads; they do not reuse old Pod names when creating new Pods.

Supporting statefulness constrains how the controller is implemented. StatefulSet relies on appending an ordinal to each Pod name, and scaling and rolling upgrades must follow the order of these ordinals. As a consequence, StatefulSet cannot offer some other enhancements, for example:

  • Selecting specific Pods to delete when scaling down replicas, which is useful when deploying across multiple availability zones.
  • Taking over an existing Pod and placing it under another workload (for example, a StatefulSet).

We found that many cloud-native applications do not need this stateful feature of fixed Pod names, while StatefulSet is hard to extend in other directions. To address this, Kruise released a new controller, CloneSet, to manage stateless applications. CloneSet supports PVC templates and provides a rich set of optional strategies for application deployment. The table below compares some capabilities of Advanced StatefulSet and CloneSet:

Features               Advanced StatefulSet    CloneSet
PVC                    Yes                     Yes
Pod name               Ordered                 Random
Inplace upgrade        Yes                     Yes
Max unavailable        Yes                     Yes
Selective deletion     No                      Yes
Selective upgrade      No                      Yes
Change Pod ownership   No                      Yes

Our current recommendation for Kruise users is: if your application needs fixed Pod names (network and storage identity), you can use Advanced StatefulSet; otherwise, CloneSet should be the first choice among Set-type controllers.
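For reference, a minimal CloneSet looks roughly like the following sketch (the name and image are placeholders):

apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  name: sample
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: nginx
          image: nginx:1.19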

Summary

Kruise will choose clear names for its workloads. The goal of this post is to help Kruise users pick the right controller to deploy their applications. Hope it helps!