Linkerd: high memory usage on Kubernetes

I’m running linkerd on kubernetes, and the pod is displaying high memory usage:

The memory usage can’t be caused by heavy traffic, as there is none. I tried adding memory limits to the Deployment configuration, but that didn’t solve the problem; Kubernetes just kills the pod once it exceeds the limit.
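For reference, this is roughly the kind of resources stanza I mean; the names and values here are illustrative, not my exact config:

```yaml
# Illustrative Deployment snippet -- container name and values are examples.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: l5d
spec:
  template:
    spec:
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.0.2
        resources:
          requests:
            memory: "128Mi"
          limits:
            memory: "256Mi"   # with a limit like this, the pod gets OOM-killed
```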

Is it possible to reduce the memory usage through linkerd configuration? Or is this expected behaviour?
The linkerd configuration I use can be found here:

I’m running linkerd 1.0.2.

Edit: Added linkerd version

Hi there,

Which linkerd version is this? 1.0.2?

Yes, I’m running 1.0.2

I just tried it out, and 1.0.2 does have higher memory consumption at startup than 1.0.0 did — roughly 160 MB, like you’re seeing. We’ll fix that and get it back down to where 1.0.0 was.

Linkerd’s starting memory use is typically around 100–120 MB. It can go up with a high volume of traffic, but in our experience it stays around the starting level until traffic exceeds about 10k RPS.

Thank you for looking into this.
That said, 100–120 MB still seems like high usage for an idle linkerd. Is it possible to reduce this by altering the configuration, or is this the default behaviour of linkerd?

That is the default behavior currently. We agree it’s higher than we’d like, and reducing memory consumption is a high priority for us — we’re working on getting that number down to something much lower.
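One thing that may help in the meantime: linkerd 1.x runs on the JVM, so capping the heap bounds most of its memory use. A sketch of how that could look in the Deployment, assuming the Docker image honors `JVM_HEAP_MIN`/`JVM_HEAP_MAX` environment variables — check the image’s entrypoint before relying on those exact names:

```yaml
# Sketch: cap the JVM heap via environment variables on the linkerd container.
# JVM_HEAP_MIN / JVM_HEAP_MAX are assumptions here -- verify them against
# the linkerd Docker image's entrypoint script.
containers:
- name: l5d
  image: buoyantio/linkerd:1.0.2
  env:
  - name: JVM_HEAP_MIN
    value: "32M"
  - name: JVM_HEAP_MAX
    value: "256M"
```

Keep the Kubernetes memory limit comfortably above the max heap, since the JVM also uses off-heap memory.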


I think there’s also a memory leak somewhere.
I’ve been running linkerd with very little traffic for the past two days, however the memory usage seems to grow:

Is there any debug information I can provide to help you diagnose this?

@Vytautas Can you file a Github issue with your configs, deployment environment, and a description of the traffic you’re running through Linkerd? (E.g. gRPC, HTTP, rough RPS.) If you’re seeing a possible memory leak, I don’t want to lose track of this. Thank you!
