I've been building governance rules for AI-assisted development. Part of that means catching patterns that look safe but aren't.
Here's one that keeps showing up:
password = os.getenv("DB_PASSWORD", "dev-default")
The AI generated this. Looks reasonable, right? If the environment variable isn't set, fall back to a default. The code won't crash. Problem solved.
Except this is actually a security vulnerability waiting to happen.
The Problem
If production is misconfigured—the environment variable isn't set, the secret manager isn't accessible, the deployment script has a typo—this code will silently use the dev password.
No error. No alert. No indication that anything is wrong.
Your application connects to... something. Maybe the dev database. Maybe nothing at all, because "dev-default" isn't a real credential. Maybe it fails later in a confusing way that takes hours to debug.
What it won't do is fail immediately and obviously at startup. Which is exactly what it should do.
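To see just how quiet this failure is, here's a small illustration you can run locally (the pop call simulates the missing variable):
import os
os.environ.pop("DB_PASSWORD", None)  # simulate a misconfigured environment
password = os.getenv("DB_PASSWORD", "dev-default")
print(password)  # prints "dev-default": no error, no warning, nothing in the logs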
Why LLMs Do This
LLMs are trained to make code "work." They've seen millions of examples where fallback values prevent crashes. They've learned that code that runs is better than code that throws exceptions.
For most values, that's correct. A missing LOG_LEVEL can safely default to "INFO". A missing RETRY_COUNT can default to 3.
But for secrets and critical configuration, failing loudly is the correct behavior. If the database password is missing, the application should refuse to start. If the API key isn't configured, it should throw an exception immediately—not five minutes later when it tries to make an API call.
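In code, the distinction looks roughly like this (the variable names are just examples, not from any particular codebase):
import os
# Benign configuration: a sensible default is fine
log_level = os.getenv("LOG_LEVEL", "INFO")
retry_count = int(os.getenv("RETRY_COUNT", "3"))
# Secrets and critical configuration: no default, fail at startup
api_key = os.environ["API_KEY"]  # raises KeyError right away if missing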
The AI doesn't understand this distinction. It's optimizing for "code runs" when it should be optimizing for "failure is obvious."
The Fail-Safe vs. Fail-Fast Tension
This is a classic software engineering tension:
- Fail-safe: If something goes wrong, keep running in a degraded state
- Fail-fast: If something goes wrong, stop immediately and make noise
For user-facing features, fail-safe is often better. If the recommendation engine is down, show a generic list instead of an error page.
For configuration and secrets, fail-fast is always better. If the app is misconfigured, you want to know now—not after it's been running for three days with the wrong database.
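The fail-safe side of that trade-off looks roughly like this, with a hypothetical recommendations client standing in for the real service:
def get_recommendations(user_id, client, fallback_items):
    # Fail-safe: a broken recommendation service degrades the page,
    # it doesn't take the page down
    try:
        return client.fetch_recommendations(user_id)
    except Exception:
        return fallback_items
In practice you'd log the exception before swallowing it, but the shape is the point: degraded output instead of a crash.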
LLMs default to fail-safe. They need guardrails to know when fail-fast is appropriate.
The Fix
We added a governance rule that catches this pattern at commit time. It looks for getenv(..., "default") patterns on environment variables that look like secrets (PASSWORD, SECRET, KEY, TOKEN, etc.).
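The real rule lives in our governance tooling, but a stripped-down sketch of the idea is a regex scan over the files being committed (the patterns here are simplified):
import re
import sys
# Environment variable names that look like secrets
SECRET_NAMES = re.compile(r"PASSWORD|SECRET|KEY|TOKEN")
# getenv("SOME_VAR", ...) with a fallback argument
GETENV_WITH_DEFAULT = re.compile(r"""getenv\(\s*["']([A-Z0-9_]+)["']\s*,""")
def check_file(path):
    violations = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            match = GETENV_WITH_DEFAULT.search(line)
            if match and SECRET_NAMES.search(match.group(1)):
                violations.append(f"{path}:{lineno}: fallback default on secret {match.group(1)}")
    return violations
if __name__ == "__main__":
    # Pre-commit frameworks typically pass the staged file paths as arguments
    problems = [v for path in sys.argv[1:] for v in check_file(path)]
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
A non-zero exit code blocks the commit, and the developer sees exactly which line to fix.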
The correct pattern for secrets:
# Option 1: No fallback, let it crash
password = os.environ["DB_PASSWORD"] # KeyError if missing
# Option 2: Explicit fail-fast
password = os.getenv("DB_PASSWORD")
if not password:
    raise RuntimeError("DB_PASSWORD environment variable is required")
Both options make misconfiguration obvious. The app won't start. Someone will notice. The problem will be fixed before it causes real damage.
This pattern shows up in connection strings, API client initialization, authentication setup, and anywhere else secrets are loaded. If you're using AI assistants, audit these areas specifically. Look for fallback values that shouldn't exist.
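One way to make that audit easier is to route every secret through a single helper, so fail-fast is enforced in exactly one place (require_env is a hypothetical name, not a standard library function):
import os
def require_env(name: str) -> str:
    """Return the named environment variable, or refuse to start."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} environment variable is required")
    return value
# Used wherever a secret is loaded: connection strings, API clients, auth setup
db_password = require_env("DB_PASSWORD")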
The Broader Lesson
AI assistants are incredibly useful. They're also optimizing for objectives that don't always match yours.
"Code that runs" is not the same as "code that's secure."
"No errors" is not the same as "correct behavior."
"Helpful defaults" are not always helpful.
This isn't a reason to avoid AI assistants. It's a reason to add guardrails. Governance rules that catch patterns the AI gets wrong. Pre-commit hooks that validate security-sensitive code. Automated checks that enforce fail-fast where it matters.
The AI does the heavy lifting. The guardrails make sure it follows the rules.
When building with AI assistance, identify the areas where "code runs" isn't the right objective—security, configuration, critical paths. Then build explicit checks for those areas. Don't trust the AI's defaults. Verify them.