There is a pattern I encounter regularly when auditing CI/CD pipelines: the application gets built from source for development, then built again for staging, and built yet again for production. Sometimes the build commands are identical. Sometimes they are not. Either way, it is a problem.
This is one of those practices that feels reasonable on the surface — "we build for each environment so it is configured correctly" — but quietly introduces risk into every single deployment you make.
## What the anti-pattern looks like
The typical setup has a CI pipeline with separate build stages per environment. The pipeline checks out source code, runs a build command, produces a Docker image or a binary, and pushes it to a registry with an environment-specific tag. Then it does the same thing for the next environment.
Sometimes the differences are intentional: different build flags, different base images, different dependency lockfiles. Sometimes the differences are accidental: a dependency got a patch release between the staging and production builds, or a build cache was warm for one and cold for the other.
A simplified picture of the two approaches:
```mermaid
flowchart LR
    subgraph anti["rebuild per environment"]
        direction TB
        S1("Source code") --> B1("Build for dev")
        S1 --> B2("Build for staging")
        S1 --> B3("Build for production")
        B1 --> D1("Dev artifact"):::warn
        B2 --> D2("Staging artifact"):::warn
        B3 --> D3("Prod artifact"):::warn
    end
    subgraph correct["build once, promote"]
        direction TB
        S2("Source code") --> B4("Build once")
        B4 --> A1("Immutable artifact"):::good
        A1 --> P1("Deploy to dev")
        A1 --> P2("Deploy to staging")
        A1 --> P3("Deploy to production")
    end
    classDef warn fill:#fef2f2,stroke:#ef4444,color:#991b1b
    classDef good fill:#f0fdf4,stroke:#22c55e,color:#166534
    style anti fill:none,stroke:none
    style correct fill:none,stroke:none
```
On the left, you have three separate build processes producing three separate artifacts. On the right, one build produces one artifact that gets promoted through environments unchanged.
## Why people do this
I have seen a few common reasons teams end up here.
The first is environment-specific configuration baked into the build. Database URLs, API endpoints, feature flags — all hardcoded at build time. The application has no mechanism to read configuration at runtime, so the only way to change behavior per environment is to rebuild.
The second is Dockerfile proliferation. Teams create Dockerfile.dev, Dockerfile.staging, Dockerfile.prod with different base images, different installed packages, or different build arguments. Each environment gets its own recipe.
The third is simply inertia. The pipeline was set up this way early on when there was only one environment, then copied for each new one. Nobody questioned it because deployments mostly worked.
## What goes wrong
The core issue: what you tested is not what you deploy.
When you rebuild for production, you are creating a new artifact that has never been tested by anyone. Even if the source code is identical, the artifact is not.
**Non-deterministic dependency resolution.** If your build pulls dependencies without pinning exact versions and checksums, you can get different transitive dependencies between builds. I have seen production incidents caused by a minor patch release of a logging library that landed between the staging and production builds. Staging worked fine. Production threw exceptions on startup.
**Build environment drift.** CI runners update. Base images get new tags pushed to them. Build caches expire. The machine that built your staging image last Tuesday is not in the same state as the machine building your production image today.
**Wasted compute and time.** Every rebuild costs CI minutes and wall-clock time. If your build takes 15 minutes and you have three environments, that is 45 minutes of pipeline time instead of 15. For teams deploying multiple times per day, this adds up to real money and real delays.
**Harder rollbacks.** When production breaks and you need to roll back, you want to point to a known-good artifact. If every deployment is a fresh build, your "rollback" is actually another build from an older commit — which may not produce the same result it did last time.
**Security surface.** Each build is an opportunity for supply chain compromise. A dependency mirror could serve a tampered package. A build script could behave differently based on timing or environment variables. One build means one exposure window. Three builds means three.
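The dependency-drift failure mode comes down to unpinned content. The core defense is recording a checksum once and verifying it on every subsequent build. A minimal shell sketch, where `dep.tar.gz` is a hypothetical stand-in for a dependency fetched during the build:

```shell
# stand-in for a dependency fetched during the build (hypothetical file)
printf 'package contents v1.2.3\n' > dep.tar.gz

# record the checksum the first time you vet the dependency...
sha256sum dep.tar.gz > dep.tar.gz.sha256

# ...and verify it on every subsequent build; a tampered or silently
# updated package fails the build here instead of reaching production
sha256sum -c dep.tar.gz.sha256   # prints "dep.tar.gz: OK"
```

Lockfile-based installs apply the same idea at the package-manager level: `npm ci` verifies integrity hashes from `package-lock.json`, and pip supports `--require-hashes`.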
## How to fix it
Build once, promote the same artifact through every environment, and inject configuration at runtime.
### Build once, tag with the commit
Your CI pipeline should produce exactly one artifact per commit (or per merge to your main branch). Tag it with the Git commit SHA, not with environment names.
```yaml
# build once, tag with commit sha
build:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:${CI_COMMIT_SHA} .
    - docker push registry.example.com/myapp:${CI_COMMIT_SHA}
```
This artifact is now immutable. It does not change. It does not get rebuilt. It is the artifact.
### Promote, do not rebuild
Deploying to staging means pulling and running that exact image. Deploying to production means pulling and running that same exact image. No new build step.
```yaml
# promote to production — same artifact, no rebuild
deploy-production:
  stage: deploy
  script:
    - docker pull registry.example.com/myapp:${CI_COMMIT_SHA}
    - docker tag registry.example.com/myapp:${CI_COMMIT_SHA} registry.example.com/myapp:production
    - docker push registry.example.com/myapp:production
```
The production tag is a convenience pointer. The real identity of the artifact is the commit SHA.
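This also makes the rollback problem from earlier disappear: rolling back is just re-pointing the environment tag at a known-good SHA, not rebuilding an old commit. A sketch as a manual GitLab CI job, where KNOWN_GOOD_SHA is a hypothetical variable supplied by the operator:

```yaml
# manual rollback: re-point production at a known-good artifact, no rebuild
rollback-production:
  stage: deploy
  when: manual
  script:
    - docker pull registry.example.com/myapp:${KNOWN_GOOD_SHA}
    - docker tag registry.example.com/myapp:${KNOWN_GOOD_SHA} registry.example.com/myapp:production
    - docker push registry.example.com/myapp:production
```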
### Inject configuration at runtime
Your application should read its configuration from the environment, not from the build. This means environment variables, mounted config files, or a secrets manager.
```dockerfile
FROM node:22-slim
WORKDIR /app
# copy the dependency manifests first so this layer caches across code changes
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# no hardcoded config — everything comes from the environment at runtime
CMD ["node", "server.js"]
```
The same image runs everywhere. The difference between staging and production is the environment it is placed into, not the code it contains.
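Inside the container, the runtime-config pattern is nothing more than reading the environment at startup. A minimal POSIX-shell sketch; the variable names and default values are illustrative, not from the original application:

```shell
#!/bin/sh
# read environment-specific settings at startup; the defaults here are
# illustrative values for local development only
DATABASE_URL="${DATABASE_URL:-postgres://localhost:5432/app_dev}"
LOG_LEVEL="${LOG_LEVEL:-info}"

echo "starting with db=${DATABASE_URL} log=${LOG_LEVEL}"
```

The same image then runs anywhere by supplying values at launch, for example `docker run -e DATABASE_URL=... -e LOG_LEVEL=warn` against the SHA-tagged image.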
### Use content-addressable references
Docker tags are mutable: someone can push a new image to myapp:production, and the tag then points at entirely different content with nothing in your manifests changing. Image digests are immutable. Pin your deployments to digests when possible.
```yaml
# pinned by digest — guaranteed immutable
image: registry.example.com/myapp@sha256:a1b2c3d4e5f6...
```
This eliminates an entire class of "it worked yesterday" problems.
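The guarantee behind digests is plain content addressing, which you can see with ordinary files and sha256 (file names here are illustrative):

```shell
# identical bytes always hash to the identical digest
printf 'artifact v1\n' > a.bin
printf 'artifact v1\n' > b.bin
sha256sum a.bin b.bin        # both lines show the same digest

# any change, however small, yields a completely different digest, so a
# digest reference can never silently point at new content
printf 'artifact v2\n' > b.bin
sha256sum b.bin
```

After a push, the registry digest for an image can typically be read with `docker inspect --format '{{index .RepoDigests 0}}' myapp:tag` and copied into your deployment manifests.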
## The business case
For business owners reading this: when your team says "we tested this in staging and it passed," that statement is only true if staging and production run the exact same artifact. If they rebuild for production, they are deploying untested software. Every single time. That is the reliability angle, and it alone should be enough to justify the change.
There is a cost angle too. Redundant builds burn CI compute. For teams on usage-based CI pricing, cutting two-thirds of your build minutes is a direct cost reduction — and the time savings compound, because developers waiting for pipelines are developers not shipping.
And if you operate in a regulated environment, you already know you need to prove that what you tested is what you deployed. With immutable artifacts, the SHA matches and you are done. With per-environment rebuilds, you cannot prove it, because it is not true.
## Where to start
If you are currently rebuilding per environment, a practical migration path looks like this.
First, make your application read all environment-specific configuration from environment variables or config files. This is the prerequisite. If your app has hardcoded URLs or feature flags baked in at build time, fix that before anything else.
Second, consolidate to a single Dockerfile with no environment-specific logic. If you need different behavior, control it through runtime configuration.
Third, restructure your pipeline to build once and deploy many times. The build stage produces an artifact. The deploy stages consume it.
Fourth, start using image digests in your deployment manifests. Tags are convenient for humans. Digests are reliable for machines.
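Put together, the restructured pipeline from those four steps might look like this minimal GitLab CI sketch; job names and the manual production gate are illustrative choices, not requirements:

```yaml
stages: [build, deploy]

# one build per commit, tagged by SHA — the only place an artifact is created
build:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:${CI_COMMIT_SHA} .
    - docker push registry.example.com/myapp:${CI_COMMIT_SHA}

# deploy stages only ever pull and retag the existing artifact
deploy-staging:
  stage: deploy
  script:
    - docker pull registry.example.com/myapp:${CI_COMMIT_SHA}
    - docker tag registry.example.com/myapp:${CI_COMMIT_SHA} registry.example.com/myapp:staging
    - docker push registry.example.com/myapp:staging

deploy-production:
  stage: deploy
  when: manual
  script:
    - docker pull registry.example.com/myapp:${CI_COMMIT_SHA}
    - docker tag registry.example.com/myapp:${CI_COMMIT_SHA} registry.example.com/myapp:production
    - docker push registry.example.com/myapp:production
```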
In my experience, this migration takes a few days for a small team and pays for itself within the first month — in faster pipelines, fewer "works in staging, broken in production" incidents, and simpler rollback procedures.
The principle is simple: if you want to trust your deployments, stop rebuilding what you already built.