Mastering `cm-debugging`: The Complete Guide

A detailed guide to mastering `cm-debugging` in Vibe Coding.

Skills used: cm-debugging

You’ve been there before. You’re in the middle of a perfect “vibe session,” your AI agent is churning out beautiful React components, and suddenly—red text. A test fails. A build breaks. The “vibe” is officially killed.

Your first instinct might be to tell the AI: “I think it’s the API route, try fixing that.” The AI apologizes (it always does), generates a “quick fix,” and pushes it. But then, a new error appears. You try again. And again. Thirty minutes later, you’re four layers deep in a recursive loop of “I’m sorry, let me try another approach,” and your codebase looks like a digital Frankenstein.

In Vibe Coding, the greatest danger isn’t the bug itself—it’s the guessing. When an AI agent guesses, it creates technical debt at the speed of light. To solve this, we created cm-debugging: a systematic, four-phase protocol designed to turn your AI from a “guesser” into a “detective.”


The Philosophy: Root Cause or Bust

The “Iron Law” of the Cody Master kit is simple: No fixes without root cause investigation first.

In traditional development, we call this “Systematic Debugging.” In Vibe Coding, we call it “Staying in the Flow.” If you allow your AI to propose a fix before it can explain exactly why the bug is happening, you aren’t debugging—you’re gambling.

A Symptom Fix is like putting a bucket under a leaking pipe. It stops the floor from getting wet (for now), but the pipe is still broken. A Root Cause Fix is replacing the pipe. cm-debugging forces the agent to find the pipe every single time.


Core Concepts: The Four Phases of Mastery

cm-debugging isn’t just a set of instructions; it’s a mental model. It breaks the debugging process into four distinct, non-negotiable phases.

Phase 1: The Detective Work (Investigation)

Before a single line of code is changed, the agent must become a forensic scientist. This phase focuses on three things:

  1. Reading, Not Skimming: Most bugs are explained in the error message, yet most users (and “lazy” AI prompts) skip them. cm-debugging mandates reading the full stack trace, noting line numbers, and identifying the exact file path where the crash occurred.
  2. Consistent Reproduction: If you can’t trigger the bug reliably, you can’t fix it. The agent must find the exact steps—the “Reproduction Script”—that makes the bug appear every time.
  3. The “What Changed?” Diff: Bugs rarely appear in a vacuum. By analyzing recent Git commits and dependency updates, the agent identifies the “Patient Zero” of the error.
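The "Reproduction Script" from step 2 can be as small as a standalone script that calls the suspect code path directly with the exact failing input, outside the UI. The handler and payload below are illustrative stand-ins, not real cm-debugging code:

```typescript
// Hypothetical reproduction script: call the suspect handler directly
// with the exact payload that triggers the failure, bypassing the UI.

type ContactPayload = { email: string; message: string };

// Stand-in for the real handler under investigation (assumed shape).
function handleContact(payload: ContactPayload): { status: number } {
  if (!payload.email.includes("@")) {
    throw new Error("Invalid email"); // the crash we are chasing
  }
  return { status: 200 };
}

// Reproduce: run the same input every time and record the result.
function reproduce(): string {
  try {
    handleContact({ email: "no-at-sign", message: "hi" });
    return "no crash";
  } catch (err) {
    return `crash: ${(err as Error).message}`;
  }
}

console.log(reproduce()); // deterministic: the same failure on every run
```

A script like this is the difference between "it sometimes breaks" and a bug you can interrogate on demand.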

Phase 2: Pattern Analysis (The Comparison)

Why does the UserLogin component fail while the UserSignup component works perfectly? Phase 2 is about finding “Working Examples.” By comparing broken code against successful patterns already present in your codebase, the agent identifies the subtle differences—missing configuration, incorrect environment variables, or slight syntax deviations—that are causing the friction.
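One minimal way to sketch that comparison is to diff what the working module reads against what the broken one reads. The module names and keys below are illustrative assumptions, not taken from a real codebase:

```typescript
// Sketch: compare the config a working module depends on against the
// broken one, to surface the "subtle difference". Names are illustrative.

const workingSignup = { envKeys: ["API_KEY", "DB_URL"] }; // works in prod
const brokenLogin = { envKeys: ["API_KEY"] };             // fails in prod

// Report keys the working example reads that the broken one is missing.
function missingKeys(working: string[], broken: string[]): string[] {
  return working.filter((k) => !broken.includes(k));
}

console.log(missingKeys(workingSignup.envKeys, brokenLogin.envKeys));
// → ["DB_URL"]
```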

Phase 3: The Scientific Method (Hypothesis)

This is where the “vibe” meets the “science.” Instead of saying “it’s probably X,” the agent must state a formal hypothesis: “I believe the root cause is X because of evidence Y.”

It then performs a Minimal Test. It changes one variable—the smallest possible increment—to see if the hypothesis holds true. If it doesn’t, we don’t keep adding code; we delete the change and go back to Phase 1.
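The discipline above can be sketched as a tiny data structure and loop. The `Hypothesis` type and the demo values are illustrative, not part of the actual skill implementation:

```typescript
// A sketch of the hypothesis-then-minimal-test discipline.

type Hypothesis = {
  cause: string;              // "I believe the root cause is X..."
  evidence: string;           // "...because of evidence Y."
  minimalTest: () => boolean; // the single-variable experiment
};

function evaluate(h: Hypothesis): string {
  // Run the smallest possible change; never stack speculative fixes.
  return h.minimalTest()
    ? `confirmed: ${h.cause}`
    : `rejected: ${h.cause} (revert the change, return to Phase 1)`;
}

const demo: Hypothesis = {
  cause: "SENDGRID_KEY is absent from the production environment",
  evidence: "production logs show RuntimeError: Secret KEY_NOT_FOUND",
  minimalTest: () => true, // stand-in: the real test would re-run the repro
};

console.log(evaluate(demo));
```

The key property is the rejection branch: a failed minimal test ends in a revert, not another patch on top.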

Phase 4: Implementation (The Cure)

Only after the hypothesis is verified does the agent move to implementation. This phase relies heavily on TDD (Test-Driven Development).

  • First, write a test that fails because of the bug.
  • Second, implement the single fix that makes that test pass.
  • Third, run the full cm-quality-gate to ensure no regressions were introduced.
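The red-green cycle from those three steps can be sketched as follows; `sendBuggy` and `sendFixed` are hypothetical stand-ins for a mail-sending handler, not real project code:

```typescript
// Sketch of write-the-failing-test-first, with assumed handler shapes.

// Before the fix: a missing key surfaces as an opaque 500.
function sendBuggy(_key: string | undefined): number {
  return 500;
}

// After the root-cause fix: a missing key is an explicit config error.
function sendFixed(key: string | undefined): number {
  return key ? 200 : 503;
}

// The test is written FIRST and encodes the desired behavior.
function testSend(impl: (key: string | undefined) => number): boolean {
  return impl("sk-demo") === 200 && impl(undefined) === 503;
}

console.log(testSend(sendBuggy)); // false: red, fails while the bug exists
console.log(testSend(sendFixed)); // true: green, the single fix passes
```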

Practical Example: The “Ghost” in the Production Machine


Let’s look at a real-world scenario. You’ve built a sleek “Contact Us” form. It works perfectly on your local machine, but as soon as you deploy it to Vercel, it returns a 500 Internal Server Error.

The “Wrong” Way (The Guessing Loop)

  • User: “The form is broken in production. Fix it.”
  • AI: “I apologize. It’s likely a CORS issue. I’ll add a CORS header.” (Fail)
  • AI: “Sorry, maybe it’s the database connection string. I’ll wrap it in a try/catch.” (Fail)
  • AI: “I’ll try refactoring the whole API route…” (Total Chaos)

The cm-debugging Way

Phase 1: Investigation. The agent checks the production logs (using wrangler tail or Vercel logs). It sees: RuntimeError: Secret KEY_NOT_FOUND. It realizes the bug is an environment variable mismatch.

Phase 2: Pattern Analysis. The agent checks the auth-check route, which does work in production. It sees that auth-check uses process.env.API_KEY, while the broken form uses process.env.SENDGRID_KEY.

Phase 3: Hypothesis. “I hypothesize the SENDGRID_KEY is missing from the production environment variables, even though it exists in .env.local.”

Phase 4: Implementation. The agent doesn’t just “try” to fix it. It instructs the user: “Please add the SENDGRID_KEY to your Vercel Dashboard. I will add a check in the code to provide a clear error message if it’s missing in the future.”
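A check like the one the agent describes might look like the following sketch: fail fast at startup with a named, actionable error instead of an opaque 500. The helper name and message are assumptions; adapt them to your own keys:

```typescript
// Hypothetical startup guard: surface missing env vars loudly and early.

function requireEnv(
  name: string,
  env: Record<string, string | undefined>
): string {
  const value = env[name];
  if (!value) {
    throw new Error(
      `Missing required environment variable "${name}". ` +
        `Set it in your deployment dashboard before deploying.`
    );
  }
  return value;
}

// Usage at module load, so a misconfigured deploy fails with a clear message:
// const SENDGRID_KEY = requireEnv("SENDGRID_KEY", process.env);
```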

The bug is solved forever, and the code is more robust.


Best Practices & Pro Tips

1. The Rule of Three (The Architectural Stop)

If your AI agent has tried three different fixes and the bug is still there, STOP.

This is a core feature of cm-debugging. When three hypotheses fail, it’s rarely a “bug”—it’s usually an architectural flaw. The code might be too coupled, or you’re fighting against the framework’s intended design. At this point, you should switch to cm-brainstorm-idea to discuss a different structural approach rather than brute-forcing more code.

2. Diagnostic Instrumentation (Logging)

If a bug is deep inside a multi-component system (e.g., UI -> API -> Database), don’t guess where it’s breaking. Use the “Boundary Logging” technique:

  • Log the data entering the API.
  • Log the data exiting the API.
  • Log the data entering the Database.

When you see “Good Data” go in and “Bad Data” come out, you’ve found the exact component that needs your attention.
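The boundary-logging idea can be sketched as a small wrapper that records what enters and exits each layer. The layer names and toy pipeline below are illustrative, not a real API:

```typescript
// Sketch of "Boundary Logging": wrap each layer boundary so that
// good-data-in / bad-data-out pinpoints the failing component.

const log: string[] = [];

function boundary<I, O>(name: string, fn: (input: I) => O) {
  return (input: I): O => {
    log.push(`→ ${name} in:  ${JSON.stringify(input)}`);
    const output = fn(input);
    log.push(`← ${name} out: ${JSON.stringify(output)}`);
    return output;
  };
}

// Toy pipeline: UI -> API -> Database (handlers are stand-ins).
const api = boundary("api", (form: { email: string }) => ({
  email: form.email.trim(),
}));
const db = boundary("db", (rec: { email: string }) => ({
  saved: rec.email.length > 0,
}));

db(api({ email: "  user@example.com  " }));
console.log(log.join("\n"));
```

Reading the four log lines top to bottom shows exactly where good data turns bad.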

3. Use the Correct Prompts

To trigger the best behavior from cm-debugging, use these “Thinking Keywords” in your chat:

  • “Stop guessing. Use cm-debugging to find the root cause.”
  • “Trace the data flow from the form submission to the database.”
  • “Show me the diff of the last working version vs. the current broken version.”
  • “Ultrathink the error message before you propose a fix.”

Conclusion: Engineering the “Vibe”

Debugging is often viewed as the “janitorial work” of coding—something to be finished as quickly as possible so you can get back to the fun stuff. But in the world of Vibe Coding, debugging is where the most important learning happens.

By using cm-debugging, you aren’t just fixing a broken button. You are teaching your AI agent (and yourself) about the deep structure of your application. You are ensuring that every “fix” is a permanent improvement, not a temporary patch.

The next time you see that red error text, don’t panic and don’t guess. Take a breath, invoke the “Iron Law,” and let cm-debugging turn your frustration into a masterclass in systematic engineering.

Trust the process. Find the root cause. Master the vibe.


Key Takeaways Checklist:

  • Did I reproduce the bug consistently?
  • Can I explain the root cause in one sentence?
  • Did I find a working example to compare against?
  • Have I written a failing test before applying the fix?
  • If I’ve tried 3 fixes already, have I stopped to question the architecture?
