I have seen this pattern more times than I can count: a team sets up their infrastructure, and someone decides that every single connection between internal services must be encrypted with TLS. Load balancer to reverse proxy — TLS. Reverse proxy to application server — TLS. Application server to database proxy — TLS. Every hop, a full handshake. Every service, its own certificate. The reasoning is always the same: "We need security everywhere." What follows is predictable too — slower responses, higher CPU bills, and a certificate renewal process that keeps someone up at night.
This post breaks down the real cost of internal re-encryption and what to do instead.
What the anti-pattern looks like
In a typical web application stack, a request travels through several services before a response comes back. In a well-designed setup, TLS is terminated once at the edge — the load balancer or reverse proxy that faces the internet — and traffic moves in plaintext across a private, trusted network to reach backend services.
The anti-pattern flips this on its head. Every internal boundary gets its own TLS layer. Compare the two approaches:
Anti-pattern — TLS everywhere:
flowchart LR
    A1("Client") -->|TLS| B1("HAProxy")
    B1 -->|TLS| C1("Varnish")
    C1 -->|TLS| D1("Nginx")
    D1 -->|TLS| E1("App Server")
    E1 -->|TLS| F1("DB Proxy")
Recommended — terminate once:
flowchart LR
    A2("Client") -->|TLS| B2("HAProxy"):::edge
    B2 -->|plaintext| C2("Varnish")
    C2 -->|plaintext| D2("Nginx")
    D2 -->|unix socket| E2("App Server")
    E2 -->|plaintext| F2("DB Proxy")
    classDef edge fill:#f0fdf4,stroke:#22c55e,color:#166534
In the first path, TLS is terminated and re-established four times internally. In the second, it is terminated once, and everything behind the edge communicates over a private network or Unix sockets. The data never leaves a trusted boundary.
How re-encryption works under the hood
When a request hits your edge proxy, the TLS session is terminated: the proxy decrypts the traffic, inspects headers, makes routing decisions. So far, so normal.
But in the re-encryption model, the proxy then initiates a brand new TLS connection to the next hop. That means a fresh TCP handshake followed by a full TLS handshake — key exchange, certificate verification, cipher negotiation, the whole ceremony. The next service terminates that session, does its work, and then does it all over again for the next hop downstream.
With TLS 1.2, a full handshake adds two round trips of latency (one with session resumption). TLS 1.3 reduces that to one round trip for new connections, but the CPU cost of the cryptographic operations remains. Multiply that by the number of hops, then by the number of requests per second, and the overhead becomes very real.
What it actually costs you
CPU overhead
Every TLS handshake involves asymmetric cryptography — RSA or ECDHE key exchange, certificate signature verification. These operations are expensive. On a service handling thousands of requests per second, re-encrypting at every internal hop can consume 10-20% of CPU that could be doing actual work. I have seen teams scale up their infrastructure by 30% just to absorb TLS overhead on internal links that did not need encryption.
Latency
Each re-encryption adds measurable latency. On a four-hop internal path with TLS 1.2, you can easily add 8-12 milliseconds of pure handshake overhead per request. That does not sound like much until you are serving latency-sensitive APIs where your total budget is 50 milliseconds. TLS 1.3 and connection reuse help, but they do not eliminate the cost — they just reduce it.
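That multiplication is easy to sketch. A back-of-envelope calculation, assuming a hypothetical 1 ms RTT between internal services, four internal hops, and a fresh TCP plus TLS connection at every hop:

```shell
# Back-of-envelope handshake overhead for internal re-encryption.
# Hypothetical numbers: 1 ms internal RTT, 4 internal hops,
# a brand new TCP+TLS connection established at every hop.
rtt_ms=1
hops=4
tls12=$(( rtt_ms * (1 + 2) * hops ))  # 1 RTT for TCP + 2 RTTs for TLS 1.2
tls13=$(( rtt_ms * (1 + 1) * hops ))  # 1 RTT for TCP + 1 RTT for TLS 1.3
echo "TLS 1.2: ~${tls12} ms of handshake latency per request"
echo "TLS 1.3: ~${tls13} ms of handshake latency per request"
```

Connection pooling between hops amortizes this, which is why keep-alive toward backends matters even more in a re-encrypted topology.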
Certificate management explosion
Every service that terminates TLS needs its own certificate. In a microservices environment with 20 or 30 services, that means 20 or 30 certificates to provision, rotate, and monitor. Each certificate has an expiration date. Each expiration is a potential outage. I have been called in to debug production failures that turned out to be nothing more than an expired internal certificate that nobody was tracking.
Automated certificate management with something like cert-manager or Vault PKI helps, but it is additional infrastructure to build, maintain, and troubleshoot. That complexity has a cost, and it is only justified when you actually need per-service encryption.
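The monitoring half does not need heavy tooling to get started. A minimal sketch using openssl's expiry check; the certificate here is a throwaway self-signed one generated for the demonstration, and the 30-day threshold is an arbitrary placeholder:

```shell
# Generate a throwaway self-signed cert (valid 90 days) to demonstrate.
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=internal.example" 2>/dev/null

# -checkend exits 0 if the cert is still valid N seconds from now;
# wire a check like this into cron or a monitoring agent per certificate.
if openssl x509 -noout -checkend $((30 * 24 * 3600)) \
    -in /tmp/demo-cert.pem >/dev/null; then
  echo "certificate ok"
else
  echo "certificate expires within 30 days"
fi
```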
Operational complexity
Debugging becomes significantly harder when every connection is encrypted. You cannot simply capture packets on an internal network and read the traffic. You need to manage CA certificates, trust stores, and certificate chains at every service. Upgrading a TLS library means touching every service. A cipher deprecation means updating configuration in dozens of places.
False sense of security
This is the part that frustrates me the most. Teams adopt TLS everywhere because it sounds secure, but they often skip the things that would actually improve their security posture. If an attacker is on your private network, you have much bigger problems than whether internal traffic is encrypted. Meanwhile, the same team might have wide-open security groups, no network segmentation, shared database credentials, or application-level vulnerabilities that TLS does absolutely nothing to address.
Security is about threat models, not checkboxes.
How to fix it
None of this requires exotic tooling.
Terminate TLS once at the edge
Your load balancer or edge reverse proxy — HAProxy, Nginx, whatever you prefer — handles TLS termination for all incoming traffic. It is the only component that needs a publicly trusted certificate. Behind it, traffic flows in plaintext over your private network.
# haproxy frontend — tls termination at the edge
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend varnish_cache

# backend connection over plaintext
backend varnish_cache
    server cache1 127.0.0.1:6081 check
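One wrinkle with edge termination: backends no longer know whether the client originally connected over HTTPS, so pass that along in a header. A sketch extending a frontend like this one (X-Forwarded-Proto is a de facto convention, not a standard):

```
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    # tell backends the original client connection was encrypted
    http-request set-header X-Forwarded-Proto https
    default_backend varnish_cache
```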
Use Unix sockets where possible
When services run on the same host, Unix sockets eliminate the network stack entirely. No TCP overhead, no port management, and no possibility of network-level interception. This is what I use for Nginx-to-application communication in most of my deployments.
# nginx upstream via unix socket — no network exposure
upstream app {
    server unix:/run/app/app.sock;
}
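For completeness, the server block that proxies to that socket might look like this (a sketch; the listen address and header names are illustrative conventions):

```
server {
    listen 127.0.0.1:8080;  # plaintext, reachable only from the local proxy chain
    location / {
        proxy_pass http://app;  # the unix-socket upstream defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }
}
```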
Use network-level isolation instead
VLANs, security groups, network namespaces, and firewall rules provide real isolation without the overhead of per-connection encryption. If your backend services are on an isolated network that only your edge proxy can reach, plaintext is perfectly acceptable.
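As a concrete sketch of that isolation, a host firewall can restrict a backend port to the edge proxy alone. The addresses and port here are hypothetical nftables examples:

```
# /etc/nftables.conf — only the edge proxy (10.0.1.10) may reach Varnish
table inet backend_isolation {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        ip saddr 10.0.1.10 tcp dport 6081 accept
    }
}
```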
Reserve mTLS for where it matters
Mutual TLS has legitimate use cases: multi-tenant environments where workloads from different customers share infrastructure, zero-trust architectures where you genuinely cannot trust the network, and compliance requirements that mandate encryption in transit regardless of network topology.
If you are in one of those situations, use a service mesh like Istio or Linkerd. They handle mTLS transparently — automatic certificate provisioning, rotation, and enforcement — without you having to wire it into every application. The mesh sidecar proxy handles encryption so your application code stays clean.
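For reference, enforcing mesh-wide mTLS in Istio is a single resource rather than per-service wiring. A sketch against Istio's PeerAuthentication API:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when applied in the root namespace
spec:
  mtls:
    mode: STRICT            # sidecars require mTLS for all inbound traffic
```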
But if you are running a single-tenant application on a private network, a service mesh for internal encryption is overengineering. Terminate at the edge and move on.
When re-encryption is actually justified
I do not want to be absolute about this. There are cases where internal TLS makes sense:
- Regulatory compliance: some standards (PCI DSS, HIPAA) may require encryption of specific data in transit even on internal networks. Read the actual requirements carefully — they are often more nuanced than "encrypt everything."
- Shared infrastructure: if your application shares a physical network with untrusted workloads, encrypting internal traffic is reasonable.
- Zero-trust architecture: if you are genuinely implementing zero-trust (not just using it as a buzzword), per-service identity and encryption are part of the model.
In these cases, invest in proper tooling — a service mesh, automated PKI, certificate monitoring — rather than bolting TLS onto every service manually.
TLS is essential at the edge. It protects your users' data as it crosses the internet. But blindly re-encrypting at every internal hop is not a security strategy — it is a tax on performance, a source of operational complexity, and often a distraction from the security work that would actually matter.
Terminate once, isolate your network, and spend your engineering effort on the things that move the needle: patching, access control, monitoring, and application-level security. Your CPU budget and your on-call engineers will thank you.