Mastering `cm-code-review`: The Complete Guide

Skills used: cm-code-review

The “Vibe Coding” era has fundamentally shifted the bottleneck of software engineering. We are no longer limited by how fast we can type or how quickly we can look up syntax on Stack Overflow. With the Cody Master suite, we are coding at the speed of thought. However, this velocity introduces a new, more dangerous friction point: the Verification Gap.

You’ve felt it. You prompt an agent to build a complex feature, it generates 400 lines of elegant-looking TypeScript, and everything seems to work in the browser. But deep down, there’s a nagging anxiety. Did it handle the edge case where the WebSocket disconnects during a state transition? Is it accidentally logging PII in the production telemetry? Is this “vibe” actually a house of cards?

This is where cm-code-review becomes your most critical asset. It is not just a command; it is a professional-grade protocol designed to enforce architectural integrity, security, and long-term maintainability in an AI-augmented workflow. This guide explores how to move beyond “blind merging” and master the art of the AI-driven technical gate.


Core Concepts: Why cm-code-review is Different

Traditional code review is a human-to-human social process. In Vibe Coding, code review is a Verification Lifecycle. The cm-code-review skill (and its underlying components like requesting-code-review, receiving-code-review, and finishing-a-development-branch) treats your codebase as a living system that requires constant epistemic validation.

The Problem of Performative Agreement

One of the biggest risks in AI-assisted development is “performative agreement.” This happens when an agent suggests a change, and the human—or another agent—simply says “Looks good!” to maintain momentum. cm-code-review is explicitly designed to break this cycle. It demands evidence before assertions. It forces the reviewer (AI or human) to look for “lost-in-the-middle” logic and context rot that often plagues high-velocity development.

The Three Pillars of the Review Protocol

  1. Requesting (The Intent Phase): Clearly communicating what needs to be verified and against which requirements.
  2. Receiving (The Rigor Phase): Processing feedback with technical skepticism. This is where you challenge the “hallucination” and ensure the fix doesn’t break a distant dependency.
  3. Finishing (The Integration Phase): Making the strategic decision on how to land the code—whether it’s a surgical merge, a formal PR, or a complete branch cleanup.
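
The three pillars can be sketched as a tiny state machine. This is an illustrative model only -- the phase names mirror the skills above, but the types and the advance function are assumptions, not the protocol's actual implementation:

```typescript
// Hypothetical sketch: the three review phases as a simple state machine.
type ReviewPhase = "requesting" | "receiving" | "finishing";

interface ReviewState {
  phase: ReviewPhase;
  findings: string[];
  verified: boolean; // has evidence (tests, builds) been produced?
}

// Advance the review only when the current phase's exit criterion is met.
function advance(state: ReviewState): ReviewState {
  switch (state.phase) {
    case "requesting":
      // Intent is captured; start processing feedback.
      return { ...state, phase: "receiving" };
    case "receiving":
      // Integration is only allowed once findings are backed by evidence.
      if (!state.verified) {
        throw new Error("Cannot finish: findings lack verification evidence.");
      }
      return { ...state, phase: "finishing" };
    case "finishing":
      return state; // terminal phase
  }
}
```

The key design point is the guard in the receiving phase: "performative agreement" is structurally impossible, because the transition to integration throws without evidence.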

How It Works: The Anatomy of a Review

When you activate cm-code-review, you aren’t just running a linter. You are invoking a senior architect persona that analyzes your changes through multiple lenses.

1. Requirement Traceability

The first thing the protocol does is look back at your product.md or plan.md. It asks: “Did we actually build what we said we would?” If the plan called for a robust error-handling strategy for the Stripe integration, but the code only uses a basic try-catch with a console.log, the review will flag this as a critical failure—even if the code “runs.”
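
To make the Stripe example concrete, here is a hedged sketch of the distinction the review draws. Both functions and the createCharge callback are hypothetical stand-ins, not the real integration or the Stripe SDK:

```typescript
// What the review flags: errors are swallowed and only logged.
async function chargeNaive(createCharge: () => Promise<string>) {
  try {
    return await createCharge();
  } catch (err) {
    console.log(err); // silent failure: the caller never learns the charge failed
    return null;
  }
}

// Closer to what a "robust error-handling strategy" in the plan likely meant:
// bounded retries, then a typed failure the caller must handle explicitly.
class PaymentError extends Error {}

async function chargeRobust(
  createCharge: () => Promise<string>,
  retries = 3
): Promise<string> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await createCharge();
    } catch (err) {
      lastErr = err; // treat as transient; retry up to the limit
    }
  }
  throw new PaymentError(`Charge failed after ${retries} attempts: ${lastErr}`);
}
```

Both versions "run", which is exactly why requirement traceability matters: only the second satisfies the plan.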

2. The “Sharp Edges” Scan

AI agents often use the most common patterns found in their training data. Sometimes, those patterns contain “sharp edges”—deprecated APIs, inefficient loops, or security vulnerabilities like unsanitized innerHTML. cm-code-review utilizes the sharp-edges knowledge base to identify these subtle traps before they reach production.
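
The unsanitized innerHTML trap is easy to illustrate. The escapeHtml helper below is a minimal, illustrative escaper; a production codebase would typically use textContent (which never parses HTML) or a vetted sanitization library:

```typescript
// Minimal HTML escaping: ampersand must be replaced first, or the later
// replacements would be double-escaped.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Unsafe (the sharp edge): element.innerHTML = `<p>${userInput}</p>`;
// Safer: escape user input before it is interpolated into markup.
function renderComment(userInput: string): string {
  return `<p>${escapeHtml(userInput)}</p>`;
}
```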

3. Verification of Evidence

The protocol refuses to accept “fixed it” as a status. It looks for the existence of new test cases in the tests/ directory. It checks if the verification-before-completion skill was used to run the build and lint commands. If there is no evidence of a successful test run, the review is incomplete.


Practical Example: Reviewing a New Feature

Let’s walk through a real-world scenario. Imagine you’ve just used an agent to implement a new “Rate Limiter” for your API.

Step 1: Requesting the Review

Once the implementation is done, you trigger the protocol:

gemini activate_skill cm-code-review
# Followed by a directive
"Review the new rate-limiter implementation in src/middleware/."

The agent won’t just look at the file. It will:

  • Read src/middleware/rate-limiter.ts.
  • Search for where it’s imported in app.ts.
  • Look for a corresponding test file like tests/middleware.test.ts.

Step 2: Analyzing the Findings

The review might return something like this:

Finding 1 (High): The rate limiter uses a simple in-memory Map. In a distributed environment (our current tech stack uses Cloudflare Workers), this state will not be shared across instances, leading to ineffective limiting.

Finding 2 (Medium): Missing 429 status code headers (Retry-After) which are required by our api-patterns.md standard.

Finding 3 (Low): Variable naming in the sliding window logic is ambiguous (t1, t2).
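
To ground these findings, here is a hypothetical reconstruction of the limiter under review, with Findings 2 and 3 addressed (named window bounds, a computed Retry-After value) while Finding 1 still applies -- the Map below is per-instance state, so each distributed instance limits independently:

```typescript
interface LimitResult {
  allowed: boolean;
  retryAfterSeconds: number; // value for the Retry-After header on a 429
}

// Per-instance state (Finding 1): not shared across distributed instances.
const requestLog = new Map<string, number[]>();

function checkLimit(
  clientId: string,
  limit: number,
  windowMs: number,
  now: number = Date.now()
): LimitResult {
  const windowStart = now - windowMs; // descriptive names (Finding 3)
  const recent = (requestLog.get(clientId) ?? []).filter((t) => t > windowStart);
  if (recent.length >= limit) {
    // The oldest request in the window determines when capacity frees up.
    const oldestInWindow = recent[0];
    const retryAfterSeconds = Math.ceil((oldestInWindow + windowMs - now) / 1000);
    return { allowed: false, retryAfterSeconds }; // Finding 2: header value
  }
  recent.push(now);
  requestLog.set(clientId, recent);
  return { allowed: true, retryAfterSeconds: 0 };
}
```

Fixing Finding 1 would mean replacing the Map with a shared store, which is exactly what Step 3 below asks for.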

Step 3: Receiving and Iterating

This is where many developers fail. They might simply say, “Fix these issues.” Instead, the receiving-code-review protocol encourages you to verify the fix:

"Implement the suggested fixes using Upstash Redis for shared state, and add a test case that verifies the Retry-After header is present."

The agent then applies the changes and must run the new test to prove the fix works.

Step 4: Finishing the Branch

Finally, the finishing-a-development-branch tool presents you with options:

  • Merge and Delete: For small, verified fixes.
  • Create PR: For features requiring human stakeholder sign-off.
  • Cleanup: If the “vibe” went off-track and you need to revert and try a different architectural approach.

Best Practices for Vibe Coders

To truly master cm-code-review, you need to integrate it into your “muscle memory.” Here are the tactical tips used by the most elite AI-augmented engineers.

1. Never Skip the “Red” Phase

Before you allow an agent to “Fix” a bug found during review, make sure there is a failing test. This is the heart of the TDD (Test-Driven Development) workflow. If the review finds a bug, your first command should be: “Write a test that reproduces this bug and confirm it fails.” Only then do you move to the “Green” phase of fixing it.

2. Use “Context Compression”

If you are reviewing a large feature, your context window can get cluttered. Use the cm-continuity skill to summarize the current state of the review before moving into the next iteration. This ensures the agent doesn’t forget the architectural constraints discussed three turns ago.

3. Audit the Reviewer

Don’t assume the AI reviewer is always right. Use the receiving-code-review skill to challenge findings that seem technically questionable. If the reviewer suggests a library that isn’t in your tech-stack.md, point it out. You are the lead architect; the AI is your senior reviewer.

4. Categorize Your Reviews

For complex projects, specify the type of review you want:

  • Security Review: Focus on OWASP Top 10 and data leakage.
  • Performance Review: Focus on algorithmic complexity (for example, accidental O(n²) nested loops) and database N+1 query issues.
  • Stylistic Review: Focus on naming conventions and project-specific idioms.
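
For instance, a performance review hunts for N+1 query patterns like the one below. The fetch callbacks are hypothetical stand-ins for real data access, used here only to make the round-trip counts visible:

```typescript
interface Post {
  id: number;
  authorId: number;
}

// N+1: one query for the posts, then one query per post for its author.
async function authorsNaive(
  fetchPosts: () => Promise<Post[]>,
  fetchAuthor: (id: number) => Promise<string>
): Promise<string[]> {
  const posts = await fetchPosts();
  const names: string[] = [];
  for (const post of posts) {
    names.push(await fetchAuthor(post.authorId)); // one round-trip per post
  }
  return names;
}

// Batched: collect distinct ids and fetch them in a single query.
async function authorsBatched(
  fetchPosts: () => Promise<Post[]>,
  fetchAuthors: (ids: number[]) => Promise<Map<number, string>>
): Promise<string[]> {
  const posts = await fetchPosts();
  const ids = [...new Set(posts.map((p) => p.authorId))];
  const byId = await fetchAuthors(ids); // one round-trip total
  return posts.map((p) => byId.get(p.authorId) ?? "unknown");
}
```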

The “Zero Hallucination” Goal

The ultimate goal of cm-code-review is to reach a state of Zero Hallucination in your production code. In Vibe Coding, hallucinations aren’t just “facts” the AI gets wrong; they are structural weaknesses—functions that look correct but fail under load, or “helper” files that create circular dependencies.

By enforcing a strict review gate, you transform the AI from a “code generator” into a “disciplined engineer.” You move away from the “Ship and Pray” mentality and toward “Verify and Scale.”

The Checklist for a Successful Review:

  • Tests: Does the change include automated tests that cover the happy path and edge cases?
  • Standards: Does the code adhere to the local style-guides/ and tech-stack.md?
  • Security: Are all user inputs validated? Are secrets handled via environment variables?
  • Documentation: Have the README or inline JSDoc comments been updated to reflect the change?
  • Evidence: Has the build command (npm run build, cargo build, etc.) been executed successfully?

Conclusion: The New Standard of Craftsmanship

Mastering cm-code-review is the difference between a “Vibe Coder” who builds demos and a “Cody Master” who builds systems. Speed is a superpower, but without the discipline of verification, that superpower will eventually lead to a catastrophic system failure.

As you integrate this skill into your workflow, remember that the “Vibe” isn’t just about how the UI looks—it’s about the integrity of the internal logic. Use cm-code-review to build code that is as robust as it is innovative. The next time you hit that merge button, do it with the absolute confidence that every line has been challenged, verified, and proven worthy of your project.

Are you ready to elevate your code? Activate cm-code-review on your next task and experience the power of AI-driven technical rigor.