
2 posts tagged with "sidecar"


Mingshan Zhao · 8 min read

OpenKruise is an open-source management suite developed by Alibaba Cloud for cloud-native application automation. It is currently a Sandbox project hosted by the Cloud Native Computing Foundation (CNCF). Built on Alibaba's years of experience with container and cloud-native technologies, OpenKruise is a standard Kubernetes extension that has been widely used in Alibaba's internal production environment and carries the technical concepts and best practices of large-scale Internet scenarios.

OpenKruise released v0.8.0 on March 4, 2021, with enhanced SidecarSet capabilities, especially for Sidecar log management.

Background: How to Upgrade Mesh Containers Independently

SidecarSet is a workload provided by Kruise to manage sidecar containers. Users can complete automatic injection and independent upgrades conveniently using SidecarSet.

By default, a sidecar upgrade first stops the old container and then starts a new one. This method is particularly suitable for sidecar containers that do not affect Pod service availability, such as log collection agents. However, for many proxy or runtime sidecar containers, such as Istio Envoy, this upgrade method does not work. Envoy functions as a proxy container in the Pod and handles all of its traffic, so restarting it directly to upgrade it affects the Pod's service availability. Consequently, the sidecar release has to take the application's own release and capacity into account and cannot be carried out independently of the application.

[Figure: how to upgrade a mesh sidecar]

Tens of thousands of Pods in Alibaba Group communicate with each other through Service Mesh. Because a mesh container upgrade can make business Pods unavailable, upgrading mesh containers has hindered the iteration of Service Mesh. To address this scenario, we worked with the Service Mesh team to implement the hot upgrade capability for mesh containers. This article focuses on the important role SidecarSet plays in the implementation of this hot upgrade capability.

SidecarSet Helps Lossless Hot Upgrade of Mesh Containers

Unlike log collection containers, mesh containers cannot simply be restarted for a direct in-place upgrade. A mesh container must provide services without interruption, and an independent restart-based upgrade would make the mesh service unavailable for some time. Some well-known mesh services in the community, such as Envoy and Mosn, provide smooth upgrade capabilities by default. However, these upgrade methods cannot be integrated properly with cloud-native platforms, and Kubernetes itself has no upgrade solution for such sidecar containers.

OpenKruise SidecarSet provides the sidecar hot upgrade mechanism for the mesh container. Thus, lossless Mesh container hot upgrade can be implemented in a cloud-native manner.

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: hotupgrade-sidecarset
spec:
  selector:
    matchLabels:
      app: hotupgrade
  containers:
  - name: sidecar
    image: openkruise/hotupgrade-sample:sidecarv1
    imagePullPolicy: Always
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - /migrate.sh
    upgradeStrategy:
      upgradeType: HotUpgrade
      hotUpgradeEmptyImage: openkruise/hotupgrade-sample:empty
  • upgradeType: HotUpgrade indicates that this sidecar container uses the hot upgrade mechanism.
  • hotUpgradeEmptyImage: For hot upgrades, the business needs to provide an empty container that is used for container switchover. The empty container has the same configuration as the sidecar container except for the image address, for example the command, lifecycle, and probe fields.

The SidecarSet hot upgrade mechanism includes two steps: injection of Sidecar containers of the hot upgrade type and Mesh container smooth upgrade.

Inject Sidecar Containers of the Hot Upgrade Type

For Sidecar containers of the hot upgrade type, the SidecarSet webhook injects two containers when creating the Pod:

  • {sidecar.name}-1: As shown in the following figure, envoy-1 represents a running sidecar container, for example, envoy:1.16.0.
  • {sidecar.name}-2: As shown in the following figure, envoy-2 represents the “hotUpgradeEmptyImage” container provided by the business, for example, empty:1.0.

[Figure: injected hot upgrade sidecar containers]

While the mesh container is running, this Empty container does no actual work.
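
For illustration, a Pod matched by the SidecarSet above would end up with roughly the following containers after injection. This is only a sketch: the container names follow the {sidecar.name}-1/-2 convention described above, the main container and the concrete version values are assumed for the example, and the actual fields (including the lifecycle hooks copied to both containers) are filled in by the SidecarSet webhook. The SIDECARSET_VERSION environment variables are shown because they are used by the consultation mechanism described later.

# Illustrative sketch of an injected Pod (main container and version values are assumed)
apiVersion: v1
kind: Pod
metadata:
  name: hotupgrade-pod
  labels:
    app: hotupgrade
spec:
  containers:
  - name: main                        # the business main container (assumed)
    image: main-app:1.0
  - name: sidecar-1                   # {sidecar.name}-1: the working sidecar
    image: openkruise/hotupgrade-sample:sidecarv1
    env:
    - name: SIDECARSET_VERSION        # version of this sidecar container (value managed by Kruise)
      value: "1"
    - name: SIDECARSET_VERSION_ALT    # version of the peer sidecar container; "0" means the peer is empty
      value: "0"
  - name: sidecar-2                   # {sidecar.name}-2: the hotUpgradeEmptyImage placeholder
    image: openkruise/hotupgrade-sample:empty
    env:
    - name: SIDECARSET_VERSION
      value: "0"
    - name: SIDECARSET_VERSION_ALT
      value: "1"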

Smooth Mesh Container Upgrade

The hot upgrade process is divided into three steps:

  1. Upgrade: Replace the Empty container with the sidecar container of the latest version, for example, envoy-2.Image = envoy:1.17.0
  2. Migration: Run the “PostStartHook” script of the sidecar container to upgrade the mesh service smoothly
  3. Reset: After the mesh service is upgraded, replace the sidecar container of the earlier version with an Empty container, for example, envoy-1.Image = empty:1.0

[Figure: hot upgrade process of the sidecar container]

The preceding three steps represent the entire process of the hot upgrade. If multiple hot upgrades on a Pod are required, users only need to repeat the three steps listed above.
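
In practice, a hot upgrade round is triggered simply by updating the sidecar image in the SidecarSet; the controller then walks each matched Pod through the three steps above. A minimal sketch, assuming a sidecarv2 tag for the new version:

# Only the changed field is shown; the sidecarv2 tag is an assumed example
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: hotupgrade-sidecarset
spec:
  # ... selector and upgradeStrategy stay unchanged ...
  containers:
  - name: sidecar
    image: openkruise/hotupgrade-sample:sidecarv2   # bump the image to start the hot upgrade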

Core Logic of Migration

The SidecarSet hot upgrade mechanism completes the mesh container switching and provides the coordination mechanism (PostStartHook) for containers of old and new versions. However, this is only the first step. The Mesh container also needs to provide the PostStartHook script to upgrade the mesh service smoothly (please see the preceding migration process), such as Envoy hot restart and Mosn lossless restart.

Mesh containers generally provide external services by listening to a fixed port. The migration process of mesh containers can be summarized as: pass ListenFD through UDS, stop Accept, and start drainage. For mesh containers that do not support hot restart, you can follow this process to modify the mesh containers. The logic is listed below:

[Figure: migration logic of the mesh container]

Migration Demo

Different mesh containers provide different services and have different internal implementation logic, so the specific migration steps also differ. The preceding logic only highlights the important points, and we hope it benefits everyone in need. We have also provided a hot upgrade Migration Demo on GitHub for reference. Next, we will walk through some of the key code:

  1. Consultation Mechanism: When the mesh container starts, it must first determine whether this is a first startup or a smooth migration during a hot upgrade. To reduce the communication cost for mesh containers, Kruise injects two environment variables, SIDECARSET_VERSION and SIDECARSET_VERSION_ALT, into the two sidecar containers. These two environment variables indicate whether a hot upgrade is in progress and whether the current sidecar container is the new or the old version.
// return two parameters:
// 1. (bool) whether the hot upgrade process is underway
// 2. (bool) when isHotUpgrading=true, whether the current sidecar is the newer version
func isHotUpgradeProcess() (bool, bool) {
    // Version of the current sidecar container
    version := os.Getenv("SIDECARSET_VERSION")
    // Version of the peer sidecar container
    versionAlt := os.Getenv("SIDECARSET_VERSION_ALT")
    // If the version of the peer sidecar container is "0", hot upgrade is not underway
    if versionAlt == "0" {
        return false, false
    }
    // Hot upgrade is underway
    versionInt, _ := strconv.Atoi(version)
    versionAltInt, _ := strconv.Atoi(versionAlt)
    // The version is an integer that increases monotonically, so the newer container has the greater value
    return true, versionInt > versionAltInt
}
  2. ListenFD Migration: Use a Unix Domain Socket to migrate the ListenFD between the containers. This is the most critical step of the hot upgrade. The code example is listed below:
// For code conciseness, error handling is omitted

/* The old sidecar migrates the ListenFD to the new sidecar through a Unix Domain Socket */
// tcpLn *net.TCPListener
f, _ := tcpLn.File()
fdnum := f.Fd()
data := syscall.UnixRights(int(fdnum))
// Establish a connection with the new sidecar container through the Unix Domain Socket
raddr, _ := net.ResolveUnixAddr("unix", "/dev/shm/migrate.sock")
uds, _ := net.DialUnix("unix", nil, raddr)
// Send the ListenFD to the new sidecar container over the UDS
uds.WriteMsgUnix(nil, data, nil)
// Stop accepting new requests and start the drainage phase, for example, HTTP/2 GOAWAY
tcpLn.Close()

/* The new sidecar receives the ListenFD and starts to provide external services */
// Listen on the UDS
addr, _ := net.ResolveUnixAddr("unix", "/dev/shm/migrate.sock")
unixLn, _ := net.ListenUnix("unix", addr)
conn, _ := unixLn.AcceptUnix()
buf := make([]byte, 32)
oob := make([]byte, 32)
// Receive the ListenFD
_, oobn, _, _, _ := conn.ReadMsgUnix(buf, oob)
scms, _ := syscall.ParseSocketControlMessage(oob[:oobn])
if len(scms) > 0 {
    // Parse the FD and convert it to a *net.TCPListener
    fds, _ := syscall.ParseUnixRights(&(scms[0]))
    f := os.NewFile(uintptr(fds[0]), "")
    ln, _ := net.FileListener(f)
    tcpLn, _ := ln.(*net.TCPListener)
    // Start serving external traffic on the received listener; an HTTP server is used as an example
    http.Serve(tcpLn, serveMux)
}

Successful Mesh Container Hot Upgrade Cases

Alibaba Cloud Service Mesh (ASM) provides a fully managed service mesh platform compatible with open-source Istio service mesh from the community. Currently, ASM implements the Sidecar hot upgrade capability (Beta) in the data plane based on the hot upgrade capability of OpenKruise SidecarSet. Users can upgrade the data plane version of service mesh without affecting applications.

In addition to hot upgrades, ASM supports capabilities such as configuration diagnosis, operation audit, log access, monitoring, and service registration to improve the user experience of service mesh. You are welcome to try it out!

Summary

The hot upgrade of mesh containers has always been an urgent but thorny problem in the cloud-native field. The solution in this article is only one of Alibaba Group's explorations, shared with the community in the hope of encouraging better ideas. We also welcome everyone to participate in the OpenKruise community. Together, we can build mature Kubernetes application management, delivery, and extension capabilities that can be applied to more large-scale, complex, and high-performance scenarios.

Mingshan Zhao · 11 min read

OpenKruise is an open-source management suite developed by Alibaba Cloud for cloud-native application automation. It is currently a Sandbox project hosted by the Cloud Native Computing Foundation (CNCF). Built on Alibaba's years of experience with container and cloud-native technologies, OpenKruise is a standard Kubernetes extension that has been widely used in Alibaba's internal production environment and carries the technical concepts and best practices of large-scale Internet scenarios.

OpenKruise released v0.8.0 on March 4, 2021, with enhanced SidecarSet capabilities, especially for Sidecar log management.

Background

Sidecar is a very important cloud-native container design pattern. It creates an independent Sidecar container by separating auxiliary capabilities from the main container. In microservice architectures, the Sidecar pattern is also used to separate general capabilities such as configuration management, service discovery, routing, and circuit breaking from the main programs, thereby making microservice architectures less complicated. The popularity of Service Mesh has made the Sidecar pattern even more prevalent, and it is also widely used within Alibaba Group to implement common capabilities such as O&M, security, and message-oriented middleware.

In Kubernetes clusters, Pods not only support building main containers alongside Sidecar containers; powerful workloads such as Deployment and StatefulSet can also manage and upgrade the main and Sidecar containers together. However, as the businesses running in Kubernetes clusters keep growing, the Sidecar containers deployed with them grow in variety and scale, which makes managing and upgrading online Sidecar containers more complex:

  1. A business Pod contains multiple Sidecar containers, such as O&M, security, and proxy containers. The business team not only has to configure the main containers but also has to learn how to configure these Sidecar containers, which increases the business team's workload and the risk of Sidecar container misconfiguration.
  2. Upgrading a Sidecar container requires restarting the main business container along with it. Because Sidecar containers support hundreds of online businesses, it is extremely difficult to coordinate and push forward upgrades of a large number of online Sidecar containers.
  3. Sidecar containers running online with different configurations and versions, and without an effective way to update them, pose great potential risks to Sidecar container management.

Alibaba Group has millions of containers with thousands of businesses. Therefore, the management and upgrades of Sidecar containers have become a major target for improvement. To this end, many internal requirements for the Sidecar containers have been summarized and integrated into OpenKruise. Finally, these requirements were abstracted as SidecarSet, a powerful tool to manage and upgrade a wide range of Sidecar containers.

OpenKruise SidecarSet

SidecarSet is OpenKruise's abstraction for Sidecar containers. As one of the core workloads of OpenKruise, it is used to inject and upgrade Sidecar containers in Kubernetes clusters. SidecarSet provides a variety of features so that users can easily manage Sidecar containers. The main features are as follows:

  1. Separate configuration management: Each Sidecar container is configured through its own SidecarSet, which makes it easy to manage.
  2. Automatic injection: Automatic Sidecar container injection is implemented in scenarios of Pod creation, Pod scale-out and Pod reconstruction.
  3. In-place upgrade: Sidecar containers can be upgraded in-place without the reconstruction of any pods, so that the main business container is not affected. In addition, a wide range of gray release policies are included.

Note: In a Pod that contains multiple containers, the container that provides the main business logic externally is the main container, while the containers that provide auxiliary capabilities such as log collection, security, and proxying are Sidecar containers. For example, if a Pod provides web capabilities, the nginx container that provides the main web server capability is the main container, and the logtail container that collects and reports the nginx logs is a Sidecar container. The SidecarSet resource abstraction in this article also solves some of the problems of managing Sidecar containers.

Sidecar Logging Architecture

Application logs let you see the internal running status of your application. Logs are useful for debugging problems and monitoring cluster activities. After an application is containerized, the simplest and most widely used approach to logging is to write to standard output and standard error.

However, in today's distributed systems and large-scale clusters, this solution is not enough to meet production standards. First, in a distributed system, logs are scattered across individual containers without a unified place to aggregate them. Logs may also be lost in scenarios such as container crashes and Pod evictions. Therefore, a more reliable log solution that is independent of the container lifecycle is needed.

The Sidecar logging architecture places the logging agent in an independent Sidecar container and collects container logs through a shared log directory. The logs are then stored in the back-end storage of the log platform.

[Figure: Sidecar logging architecture]

This architecture is also used by Alibaba and Ant Group for container log collection. Next, this article explains how OpenKruise SidecarSet helps implement the Sidecar logging architecture at scale in Kubernetes clusters.
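
As a minimal sketch of this architecture, the following Pod shares a log directory between an nginx main container and a logtail sidecar through an emptyDir volume. The names, image tags, and paths are illustrative and chosen to match the SidecarSet example shown later; with SidecarSet, the business only needs to declare the main container, and the logtail sidecar is injected automatically, as described in the next section.

# Illustrative Pod using the Sidecar logging architecture (names and paths assumed)
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    app: web-server
spec:
  containers:
  - name: nginx                 # main container writes its logs to the shared directory
    image: nginx:1.21
    volumeMounts:
    - name: web-log
      mountPath: /var/log/web
  - name: logtail               # sidecar reads the same directory and ships logs to the log platform
    image: logtail:1.0.0
    volumeMounts:
    - name: web-log
      mountPath: /var/log/web
  volumes:
  - name: web-log
    emptyDir: {}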

Automatic Injection

OpenKruise SidecarSet implements automatic Sidecar container injection based on the Kubernetes admission webhook mechanism. Therefore, as long as the Sidecar is configured in a SidecarSet, the defined Sidecar container is injected into newly created Pods regardless of the workload that creates them, such as CloneSet, Deployment, or StatefulSet.

[Figure: automatic Sidecar injection]

The owner of the Sidecar containers only needs to configure the SidecarSet to have the Sidecar containers injected, without affecting the business. This greatly lowers the threshold for using Sidecar containers and makes management easier for Sidecar owners. In addition to containers, SidecarSet also extends the following fields to meet various Sidecar injection scenarios:

# sidecarset.yaml
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: test-sidecarset
spec:
  # Select Pods through the selector
  selector:
    matchLabels:
      app: web-server
  # Specify a namespace to take effect
  namespace: ns-1
  # container definition
  containers:
  - name: logtail
    image: logtail:1.0.0
    # Share the specified volume
    volumeMounts:
    - name: web-log
      mountPath: /var/log/web
    # Share all volumes
    shareVolumePolicy:
      type: disabled
    # Share environment variables
    transferEnv:
    - sourceContainerName: web-server
      envName: TZ
  volumes:
  - name: web-log
    emptyDir: {}

Pod selector

  • selector: Selects the Pods to be injected. In the example above, Pods with labels[app] = web-server are selected and the logtail container is injected into them. Alternatively, a label such as labels[inject/logtail] = true can be added to all Pods to inject a global Sidecar, as shown in the sketch after this list.
  • namespace: A SidecarSet is effective across the whole cluster by default. This parameter can be configured to make it effective only in a specific namespace.
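
As a sketch of the global opt-in pattern mentioned above, the selector and namespace fields would look roughly like this (the inject/logtail label key is just an example):

# Fragment of a SidecarSet spec: inject into every Pod that opts in with a label (illustrative)
spec:
  selector:
    matchLabels:
      inject/logtail: "true"
  # Optionally restrict the SidecarSet to a single namespace
  namespace: ns-1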

Data volume sharing

  • Share the specified volume: Use volumeMounts and volumes to share a specified volume with the main container. In the above example, a web-log volume is shared to achieve log collection.
  • Share all volumes: Use shareVolumePolicy = enabled | disabled to specify whether to mount all of the volumes of the Pod's main containers, which is often useful for Sidecar containers such as log collectors. If enabled, all mount points of the application containers are mounted into the Sidecar at the same paths, except for the volumes and mount points the Sidecar already declares itself.

Share environment variables

Use transferEnv to obtain environment variables from other containers: the environment variable named envName in the sourceContainerName container is copied to the current Sidecar container. In the example above, the logging Sidecar container shares the main container's time zone (TZ), which is especially common in environments outside China.

Note: Kubernetes does not allow changing the number of containers of an existing Pod, so the injection capability described above only takes effect during the Pod creation phase. Existing Pods must be recreated to have the Sidecar injected.

In-place Upgrade

SidecarSet not only injects Sidecar containers but also reuses the in-place update feature of OpenKruise, which upgrades Sidecar containers without restarting the Pod or the main containers. Since this upgrade method does not affect the business, upgrading Sidecar containers is no longer a pain point. It brings a lot of convenience to Sidecar owners and speeds up Sidecar version iteration.

[Figure: in-place upgrade of a Sidecar container]

Note: For existing Pods, Kubernetes only allows the container.image field to be modified. Therefore, modifying any other field of a Sidecar container requires recreating the Pod, and in-place upgrade is not supported in that case.
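
For example, bumping only the image of the logtail container in the SidecarSet above triggers an in-place upgrade of the injected containers without recreating the Pods (the 1.0.1 tag is an assumed example):

# Fragment of the test-sidecarset spec: only the image field is changed
spec:
  containers:
  - name: logtail
    image: logtail:1.0.1   # changing only the image keeps the upgrade in place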

To meet the requirements in some complex Sidecar upgrade scenarios, SidecarSet provides the in-place upgrade and a wide range of gray release strategies.

Gray Release

Gray release is a common method for releasing a Sidecar container smoothly, and it is highly recommended in large-scale clusters. The following example first suspends the release after an initial batch of Pods and then performs a rolling release of the remaining Pods based on the maximum unavailability. Suppose there are 1,000 Pods to be released:

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    partition: 980
    maxUnavailable: 10%

With the configuration above, the release is suspended after the first 20 Pods (1000 - 980 = 20) have been upgraded. After the new Sidecar container has been running normally in the business for a period of time, adjust the SidecarSet configuration:

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    maxUnavailable: 10%

The remaining 980 Pods are then released in batches bounded by the maximum unavailable number (10% * 1000 = 100) until all Pods are released.

partition indicates the number or percentage of Pods that are kept at the old version, with a default value of 0. It does not represent any ordinal position. If partition is set during the release process:

  • If it is a number, the controller will update the pods with (replicas – partition) to the latest version.
  • If it is a percentage, the controller will update the pods with (replicas * (100% - partition)) to the latest version.

maxUnavailable indicates the maximum number of Pods that can be unavailable at the same time during the release, with a default value of 1. Users can set maxUnavailable as an absolute value or a percentage; a percentage is converted by the controller into an absolute value based on the number of selected Pods.

Note: The values of maxUnavailable and partition are not necessarily associated. For example:

  • With {matched pods} = 100, partition = 50, and maxUnavailable = 10, the controller releases 50 Pods to the new version, but only 10 Pods are upgraded at the same time. Pods are released successively until all 50 have been released.
  • With {matched pods} = 100, partition = 80, and maxUnavailable = 30, the controller releases 20 Pods to the new version, and it releases all 20 Pods at the same time because this is within the maxUnavailable limit.

Canary Release

For businesses that require canary release, updateStrategy.selector can be used. Mark the Pods that should receive the canary release with labels[canary.release] = true, and then select those Pods with updateStrategy.selector.matchLabels.

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    selector:
      matchLabels:
        canary.release: "true"
    maxUnavailable: 10%

The above configuration only upgrades the Pods marked with the canary label. After the canary verification is complete, remove the updateStrategy.selector configuration so that the remaining Pods are rolled out based on the maximum unavailability.

Scatter Release

The upgrade sequence of pods in SidecarSet is subject to the following rules by default:

  • For the same set of Pods, the upgrade order is guaranteed to be consistent across multiple upgrades.
  • The selection priority is (the smaller, the more prioritized): unscheduled < scheduled, pending < unknown < running, not-ready < ready, newer pods < older pods.

In addition to the default release order above, the scatter release strategy allows users to spread Pods that match certain labels across the entire release process. For example, a global sidecar container such as logtail may be injected into dozens of business applications in a cluster. Scattering the logtail release by application name spreads the release across applications, enabling gray release between applications, and it can be combined with the maximum unavailability.

apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
  name: sidecarset
spec:
  # ...
  updateStrategy:
    type: RollingUpdate
    scatterStrategy:
    - key: app_name
      value: nginx
    - key: app_name
      value: web-server
    - key: app_name
      value: api-gateway
    maxUnavailable: 10%

Note: In the current version, all application names must be listed. In the next version, an intelligent scattered release will be supported with only the label key configured.

Summary

In OpenKruise v0.8.0, SidecarSet has been improved for log management in Sidecar scenarios. As we continue to explore the stability and performance of SidecarSet, more scenarios will be covered; for example, the Service Mesh scenario will be supported in the next version. We also welcome more people to participate in the OpenKruise community to improve the application management and delivery extensibility of Kubernetes for large-scale, complex, and high-performance scenarios.