Five years ago I wrote a blog post against Kubernetes sidecars. I wrote that in response to the early days of sidecar containers becoming in vogue, and in response to Airbnb’s talk demonstrating the 7 containers they need for service discovery.

At Yelp, we held off on running sidecars at all, and stuck to the single-container model on k8s for a long time, relying on host daemons for almost everything extra. Pod specs were trivial. Containers connected to yocalhost (169.254.255.254) thanks to some complex network shenanigans.

At Netflix, the initial sidecar implementation took a unique approach: injected processes. Pod specs were also trivial, and the injected processes lived with the main container. We never had to worry about breaking anything that was live. Once the process started in the pod, it lived there for the life of the pod.

But then later came the more traditional k8s sidecar model, with real containers on the pod spec.

After living these different ways of running code, I would like to share some opinions.

I Still Prefer The “Injected” Process “Sidecar”

As mentioned, Netflix still uses the injection method of adding the lowest level form of “Sidecars”, as described in this talk. I put sidecars in quotes, because for many, these are not “true” sidecars.

And it is true, these don’t show up on a k8s pod. They are not even containers, even though they use a container image to get their binaries.

Here is an example sshd injected process and the associated container image using a statically built openssh sshd binary.

Pros:

  • They are systemd units that execute the equivalent of “docker run”
  • These injected processes just consume ram/cpu of the main container (or not!)
  • They don’t even need to be “in” the container, they can be a “hidecar”, not bound to pod spec limitations.

Cons:

  • Require a custom runtime to inject
  • Not industry standard
  • The lifecycle is not well-defined (DIY, generally tied to the pod)
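To make the "systemd units that execute the equivalent of docker run" point concrete, here is a minimal sketch of what such a unit could look like. Everything here is an illustrative assumption (unit names, paths, and the `inject-process` helper are hypothetical), not Netflix's actual implementation:

```ini
# /etc/systemd/system/injected-sshd@.service  (hypothetical sketch)
# Pulls a static sshd binary out of a container image and runs it
# alongside the pod identified by %i, outside the pod spec.
[Unit]
Description=Injected sshd for pod %i
# Tie the process lifecycle to the pod's unit (name is an assumption)
BindsTo=pod@%i.service
After=pod@%i.service

[Service]
# inject-process is a hypothetical custom-runtime helper
ExecStart=/usr/local/bin/inject-process --pod %i --image sshd-static:latest --cmd /sshd
Restart=on-failure
```

The key property is that the lifecycle lives in systemd, not in the pod spec.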

For any process that the app owner doesn’t want to care about at all, I think these system services are just fine. sshd is not on the same playing field as nginx, when it comes to a k8s pod.

They can be transparent (not on the pod) because they are supposed to be.

I still see the pod object as a declaration of the user’s intent on what to run, and low-level system processes are not part of that.

I Still Prefer System Daemons (Not Necessarily DaemonSets)

I don’t like the idea of running system software as pods on k8s.

I understand the desire to use a unified language and runtime to run code, but I just like the idea of separating out Infrastructure from Application.

But that separation means having two things to manage. It means making modifications to your OS image, which brings in the need for some other distribution mechanism (OS packages, Configuration Management, etc).

To me this is a feature, but to others this is a bug.

With Systemd on an OS, you can very easily ensure that kubelet only starts if the dns daemon is up and ready.
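As a sketch of that ordering, a systemd drop-in for kubelet might look like this (assuming the DNS daemon runs as a unit named dnsmasq.service; substitute whatever your daemon is actually called):

```ini
# /etc/systemd/system/kubelet.service.d/10-dns.conf
[Unit]
# kubelet won't start until the DNS daemon has started, and will be
# stopped if the DNS daemon goes away.
After=dnsmasq.service
Requires=dnsmasq.service
```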

On k8s with pods, ensuring application pods don’t start on a node before critical DaemonSets are ready is much more complex.

DaemonSets Are Still Dangerous

I still think that DaemonSets are just asking for trouble when it comes to running a Kubernetes environment where you are not the owner of the pods.

It is different when there is a single team managing the whole cluster and the pods that run on it.

But in a platform environment, there are a lot of ways for DaemonSets to go wrong during change.

Even with maxSurge on a DaemonSet, there will be some downtime during the upgrade.
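For reference, maxSurge on a DaemonSet looks roughly like this (the agent name and image are illustrative). The replacement pod comes up alongside the old one, but readiness gaps during the handoff can still mean a window of downtime for consumers on that node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels: {app: node-agent}
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # bring the replacement pod up first...
      maxUnavailable: 0    # ...required to be 0 when maxSurge is set
  template:
    metadata:
      labels: {app: node-agent}
    spec:
      containers:
      - name: agent
        image: node-agent:v2   # illustrative image
```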

Roll forward/back is fundamentally disconnected from the application itself, making it more difficult to get feedback when things break.

It becomes very difficult to make backwards-incompatible changes when you don’t control the consuming pods.

DaemonSets are not the worst, but I still prefer traditional system daemons.

Standard Kubernetes Sidecars Are Still Cumbersome

Since writing my blog post in 2020, Kubernetes got native Sidecar containers in 1.33 (stable).

This is pretty good, they are just init containers that continue running.

The downside is that they are just init containers that continue running (different restart policy).

That is OK! This is a primitive upon which to build on top.
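A minimal sketch of what that primitive looks like in a pod spec (container names and images are illustrative). The `restartPolicy: Always` on the init container is the whole trick: it starts before the main container, keeps running alongside it, and is torn down after it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
  - name: proxy
    image: envoyproxy/envoy:v1.30.1   # illustrative
    restartPolicy: Always             # this is what makes it a "native sidecar"
  containers:
  - name: app
    image: my-app:latest              # illustrative
```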

But how do you actually update them?

OpenKruise’s SidecarSet provides one real-world implementation of how to actually update sidecars in a controlled way.

Without tooling like this, something like the traditional Istio sidecar injector doesn’t give you a controlled way of actually upgrading the sidecar. This type of admission injection only applies to newly launched pods; upgrading a sidecar across all existing pods in a controlled way is left as an exercise to the reader.

It is not that I’m against Kubernetes Sidecars, it is just I think that Kubernetes sidecars are best suited to situations where the sidecar is critically tied to the configuration and lifecycle of the pod (like a service mesh).

If the thing is some sort of always-on dialtone service that has nothing to do with the pod itself (like DNS), why does it need to be part of the pod (k8s sidecar) in the first place?

But Rebuilding My App Is Slow/Hard!

Yeah, if your main app image is super big and slow to build, sidecars are a way for you to skip solving that problem.

But now in 2026, doing monthly deploys is not the norm, and true CI/CD is the standard.

But Polyglot!

I would argue, in part 2, that the effort to maintain a sidecar AND the client libraries (in different languages) needed to talk to that sidecar could be the same as supporting a library and the language-shims to talk to that library as native code. Maybe.

Conclusion

I’m still pretty torn on Sidecars generally.

  • I still think the industry is sidecar-crazy, I’m hoping to push the pendulum back (thanks for reading!).
  • I still think that the pod spec has become a dumping ground of complexity.
  • I still think the pod spec should be an indication of the end user’s intent, not a representation of everything required to run that pod.
  • I still think the best sidecar is … NO sidecar

But enough complaining. If sidecars are not great, then what should we realistically do? I wish the industry pendulum would swing back to the world of using libraries. See part 2 of this series for some ideas on how we could do that.


Comment via email