Code Review in the Era of Vibe Coding: What to Look For

A detailed guide to code review in the era of Vibe Coding, written for tech leads.

The landscape of software engineering is undergoing a seismic shift. We have moved from the era of manual syntax craftsmanship to what is now being called “Vibe Coding.” For the uninitiated, Vibe Coding is the practice of using high-level, intent-driven AI agents—like Claude Code, Cursor, or Gemini Antigravity—to generate substantial portions of a codebase based on natural language descriptions. As a Tech Lead, you are likely feeling the tremors of this shift most acutely in your Pull Request (PR) queue.

Yesterday, a 500-line PR represented a week of focused human effort. Today, it represents a thirty-second “vibe session” by a developer who described a feature and let the agent run. The problem? Traditional code review techniques are failing. You cannot review 2,000 lines of AI-generated code line-by-line without burning out or, worse, becoming a “rubber stamp” for potential architectural disasters.

In the era of Vibe Coding, the role of the Tech Lead transitions from a “syntax linter” to a “symphony conductor.” You are no longer just looking for missing semicolons; you are auditing intent, structural integrity, and the “messy middle” that AI often glosses over. This guide outlines exactly what you should look for when a developer says, “I just vibed this feature—can you take a look?”


Core Concepts: The Anatomy of a “Vibe” Review

To review code in this new era, you must first understand the “Semantic Gap.” This is the space between the developer’s prompt (the intent) and the AI’s output (the code). AI agents are world-class at syntax but have no inherent understanding of your specific business context, your long-term scaling pains, or the “skeletons in the closet” of your legacy modules.

1. Review the Strategy, Not Just the Syntax

In a Vibe Coding workflow—specifically one powered by frameworks like CodyMaster—the agent generates an implementation_plan.md before writing a single line of code. As a reviewer, your first stop shouldn’t be the src/ directory; it should be the plan.

If the plan is flawed, the code is irrelevant. Look for:

  • Boundary Violations: Is the AI trying to modify a database schema in a PR that should only be updating the UI?
  • Dependency Bloat: Did the agent “vibe” a new library (e.g., moment.js) into existence when the project already uses date-fns?
  • Missing Infrastructure: Does the plan mention migrations, environment variables, or secret management?
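Dependency bloat is one of the easiest plan-level checks to automate. The sketch below is hypothetical (the pair list and function name are illustrative, not part of any framework): it flags redundant libraries that solve the same problem, such as moment.js landing in a project that already uses date-fns.

```typescript
// Hypothetical CI gate that flags redundant libraries in package.json.
// Pairs of libraries that solve the same problem; having both is a smell.
const redundantPairs: [string, string][] = [
  ["moment", "date-fns"],
  ["lodash", "ramda"],
];

// Returns the pairs that both appear in a package.json "dependencies" map.
function findDependencyBloat(deps: Record<string, string>): [string, string][] {
  return redundantPairs.filter(([a, b]) => a in deps && b in deps);
}

// Example: a package.json where the agent "vibed" moment into a date-fns project.
const deps = { "date-fns": "^3.0.0", "moment": "^2.30.0", "react": "^18.0.0" };
```

A check like this runs in seconds and fails the PR before any human opens the diff.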

2. The Hallucination of Coherence

AI-generated code possesses a dangerous quality: it looks “correct.” It is almost always perfectly formatted, follows modern ES6+ patterns, and uses descriptive variable names. This “Hallucination of Coherence” can lull a reviewer into a false sense of security.

The code might look like a beautiful, multi-tier architecture, but upon closer inspection, it might be passing null values into a non-nullable database column or ignoring the fact that a specific API call is asynchronous. You must look for Logical Voids—places where the code is syntactically perfect but semantically empty.
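A classic Logical Void is a missing await: the code compiles, the formatting is flawless, and the bug is invisible until runtime. This is an illustrative sketch (saveProfile is a stand-in for any async persistence call, such as a database insert):

```typescript
// A stand-in for any async persistence call (e.g. a database insert).
async function saveProfile(name: string): Promise<{ id: number; name: string }> {
  return { id: 1, name }; // pretend this hits the database
}

// Vibed version: forgets to await, so callers get a Promise, not a row.
function vibedHandler(name: string) {
  const profile = saveProfile(name); // missing await!
  return profile; // looks fine, but this is a pending Promise
}

// Reviewed version: awaits the call before touching the result.
async function reviewedHandler(name: string): Promise<string> {
  const profile = await saveProfile(name);
  return profile.name.toUpperCase(); // safe: profile is a resolved row here
}
```

Both versions would pass a casual skim; only the reviewed one actually returns data.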

3. The “Happy Path” Bias

Agents are inherently optimistic. They are trained on documentation and examples that showcase the “Happy Path”—where the user enters the right data, the server is always up, and the internet never flakes.

When reviewing vibed code, your primary duty is to be the Chief Pessimist. Look for the absence of:

  • Graceful Degradation: What happens when the Supabase connection times out?
  • Input Sanitization: The AI built a beautiful form, but did it remember to prevent XSS or SQL injection?
  • Idempotency: If the user clicks the “Submit” button three times during a slow “vibe-generated” transaction, do you end up with three duplicate orders?
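The idempotency point above has a standard fix: the client sends a key per logical action, and the server replays the stored result instead of re-executing. This is a minimal sketch; the in-memory Map is an assumption for illustration (production code would use a shared store such as a database table).

```typescript
// Minimal idempotency sketch. The Map stands in for a durable store.
const processed = new Map<string, { orderId: number }>();
let nextOrderId = 1;

function submitOrder(idempotencyKey: string): { orderId: number } {
  const existing = processed.get(idempotencyKey);
  if (existing) return existing; // duplicate click: replay the original result
  const result = { orderId: nextOrderId++ };
  processed.set(idempotencyKey, result);
  return result;
}
```

With this in place, three clicks on "Submit" with the same key produce one order, not three.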

Practical Example: The “Vibed” Onboarding Flow

Imagine a developer on your team uses an agent to build a “Simple 3-step Onboarding Flow.” They submit a PR with 1,200 lines of code across 15 files. Here is how you dissect it using the Vibe Coding Review Framework.

Step 1: Check the Plan and Prompt

Ask the developer to include the prompt history or the implementation_plan.md. You notice the prompt was: “Make a 3-step wizard for user profiles: Basic Info, Preferences, and Summary. Save to the ‘profiles’ table in Supabase. Make it look like Stripe’s UI.”

Your Critique: The prompt is too vague about state persistence. If the user refreshes on Step 2, do they lose everything?

Step 2: Audit the State Management

You look at the generated React code. The AI used a giant useState hook in the parent component.

  • The Issue: While this “works,” it’s a “Vibe Trap.” For a 3-step flow, you want local storage persistence or a state machine.
  • What to Look For: See if the AI handled the onBeforeUnload event or if it’s just hoping the user never hits the back button.
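The persistence fix you would ask for can be sketched in a few lines. Storage is injected here so the same logic works with window.localStorage in the browser or a Map in tests; the WIZARD_KEY name and WizardState shape are illustrative assumptions.

```typescript
// Wizard state that survives a page refresh. Storage is injected so the logic
// is testable without a browser (pass window.localStorage in real code).
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface WizardState { step: number; data: Record<string, string> }

const WIZARD_KEY = "onboarding-wizard"; // hypothetical storage key

function saveWizard(store: KeyValueStore, state: WizardState): void {
  store.setItem(WIZARD_KEY, JSON.stringify(state));
}

function loadWizard(store: KeyValueStore): WizardState {
  const raw = store.getItem(WIZARD_KEY);
  return raw ? (JSON.parse(raw) as WizardState) : { step: 1, data: {} };
}

// In-memory stand-in for localStorage, handy for tests.
function memoryStore(): KeyValueStore {
  const m = new Map<string, string>();
  return {
    getItem: (k) => m.get(k) ?? null,
    setItem: (k, v) => { m.set(k, v); },
  };
}
```

Injecting the store is also what makes the wizard reviewable: you can assert refresh behavior in a unit test instead of clicking through the UI.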

Step 3: Inspect the “Glue” Code

AI is great at components but bad at “glue.” Look at the api/onboarding.ts file.

  • The Red Flag: The AI generated a hardcoded URL: fetch('https://localhost:3000/api/save').
  • The Fix: The agent forgot to use the project’s unifiedConfig or .env system. It “vibed” a solution that only works on the developer’s machine.
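The fix is to resolve the API base from configuration and fail loudly when it is missing. A minimal sketch, assuming an environment variable named API_BASE_URL (the name is illustrative; use whatever your config system defines):

```typescript
// Resolve the API base from configuration instead of a hardcoded localhost URL.
function getApiBase(env: Record<string, string | undefined>): string {
  const base = env["API_BASE_URL"];
  if (!base) {
    throw new Error("API_BASE_URL is not set; refusing to fall back to localhost");
  }
  return base.replace(/\/+$/, ""); // normalize trailing slashes
}

// Usage: fetch(`${getApiBase(process.env)}/api/save`, { method: "POST" })
```

Throwing on a missing variable is deliberate: a silent localhost fallback is exactly the "works on my machine" bug the agent introduced.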

Step 4: Security and Validation

In Step 1 (Basic Info), the AI generated a field for “Username.”

  • The Reviewer’s Question: Does it check for duplicates before submission? Or does it wait for the database to throw a 409 error?
  • The Security Hole: You notice the AI didn’t include a rate-limit check on the server-side route. It vibed the feature but ignored the infrastructure required to prevent a bot from creating 10,000 profiles.
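The missing rate limit can be closed with a few lines at the route level. This is a sketch of a fixed-window limiter: in-memory and single-process by design, with an injectable clock for testing. A real deployment would back it with Redis or the platform's built-in limiter.

```typescript
// Fixed-window rate limiter sketch. In-memory only; not for multi-instance use.
function makeRateLimiter(limit: number, windowMs: number, now: () => number = Date.now) {
  const hits = new Map<string, { windowStart: number; count: number }>();
  return function allow(clientId: string): boolean {
    const t = now();
    const entry = hits.get(clientId);
    if (!entry || t - entry.windowStart >= windowMs) {
      hits.set(clientId, { windowStart: t, count: 1 }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

Wired into the profile-creation route, this turns "10,000 bot profiles" into "a handful, then 429s".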

Best Practices & Tips for Tech Leads

1. Enforce the “8-Gate” Protocol

Vibe Coding is fast, so your “Gates” must be automated. Do not accept a PR unless the automated pipeline has passed these specific “Vibe Checks”:

  • Gate 1 (Secret Shield): Did the agent accidentally hardcode an API key it found in its training data?
  • Gate 3 (Test Coverage): In Vibe Coding, we follow TDD (Test-Driven Development). The agent should write the test before the code. If a PR has 1,000 lines of code and 0 lines of tests, it's not a "vibe"; it's a liability.
  • Gate 4 (i18n): Did the agent hardcode strings like "Submit" and "Cancel"? Automated scripts (like scan-hardcoded.js) should catch this before you even open the PR.
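Gate 1 can be sketched as a line scanner over the diff. The patterns below are illustrative examples, not an exhaustive secret scanner (real pipelines would use a dedicated tool):

```typescript
// Illustrative Gate 1 check: flag lines that look like hardcoded credentials.
const secretPatterns: RegExp[] = [
  /sk_live_[A-Za-z0-9]{8,}/,               // Stripe-style live key
  /AKIA[0-9A-Z]{16}/,                      // AWS access key id
  /api[_-]?key\s*[:=]\s*["'][^"']+["']/i,  // generic `apiKey = "..."`
];

// Returns the 1-based line numbers that match any secret pattern.
function findHardcodedSecrets(source: string): number[] {
  const flagged: number[] = [];
  source.split("\n").forEach((line, i) => {
    if (secretPatterns.some((p) => p.test(line))) flagged.push(i + 1);
  });
  return flagged;
}
```

Even a crude scanner like this catches the most embarrassing class of vibed leak before it reaches a human reviewer.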

2. Review the “Diff Intent”

Instead of looking at every line, look at the Structural Diff. Use tools that summarize what changed at a high level.

  • Is the AI introducing a new architectural pattern (e.g., switching from Redux to Zustand) without a discussion?
  • Is it deleting code that it thought was “unused” but was actually a critical edge-case handler?

3. Use “Vibe-Check” Sessions

For complex features, stop doing asynchronous PR reviews. Get on a 10-minute "Vibe-Check" call. Ask the developer to explain why the agent chose a specific pattern. If the developer says, "I don't know, the AI just did it," that is a signal that the code is unreviewed and unmaintainable.

4. Optimize the “Continuity”

In the CodyMaster ecosystem, we use cm-continuity to help agents remember project context. As a Tech Lead, you should periodically audit the CONTINUITY.md or the skills/ folder. Ensure the “project vibe” (your specific naming conventions, your preferred error-handling patterns) is documented so the agents can learn.

If the AI keeps making the same mistake (e.g., using any in TypeScript), update your style-guide.md or create a specific skill-rule that blocks that behavior.
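For the `any` example specifically, the blocking rule already exists in the ESLint ecosystem. A sketch of the config fragment, assuming the project uses typescript-eslint (the file name and surrounding config are whatever your project already has):

```json
{
  "rules": {
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/ban-ts-comment": "error"
  }
}
```

Once this is in CI, the agent's bad habit fails the pipeline instead of consuming a reviewer's attention.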


The “Dirty Dozen” Vibe Coding Red Flags

When reviewing, keep this checklist of common AI-generated errors next to your monitor:

  1. Ghost Dependencies: Libraries imported but not listed in package.json.
  2. The “Never-Ending” Component: A 500-line React file that should have been 5 components.
  3. Silent Fails: try { ... } catch (e) { console.log(e) } — the classic AI “I don’t know what to do with this error” pattern.
  4. Mock Data Leaks: Hardcoded const users = [{id: 1, name: "Test"}] left in production code.
  5. Inefficient Queries: SELECT * from a table with 50 columns when only 2 were needed.
  6. TypeScript "Cheating": Excessive use of any, as any, or // @ts-ignore.
  7. Prop Drilling: Passing state through 7 layers of components because the AI didn’t want to set up a Context provider.
  8. Missing Loading States: Beautiful UI that stays blank for 3 seconds while fetching data without a spinner.
  9. Non-Responsive Design: Elements that overlap or disappear on a 375px screen.
  10. Z-Index Wars: Randomly assigned z-index: 9999 to fix a positioning issue.
  11. Duplicated Logic: Re-implementing a formatDate utility that already exists in src/utils/.
  12. Insecure Storage: Saving sensitive user data in localStorage without encryption.
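Red flag 3, the silent fail, deserves a concrete before-and-after, since it is the single most common pattern in vibed code. This is an illustrative sketch; the function names are hypothetical:

```typescript
// Vibed version: the error is logged and swallowed; callers see undefined.
async function fetchUserVibed(load: () => Promise<string>): Promise<string | undefined> {
  try {
    return await load();
  } catch (e) {
    console.log(e); // classic "I don't know what to do with this" pattern
    return undefined;
  }
}

// Reviewed version: rethrow with context so the failure stays visible
// to callers, tests, and monitoring.
async function fetchUserReviewed(load: () => Promise<string>): Promise<string> {
  try {
    return await load();
  } catch (e) {
    throw new Error(`fetchUser failed: ${(e as Error).message}`);
  }
}
```

The reviewed version is barely longer, but it converts an invisible failure into one your on-call engineer can actually find.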

Conclusion: Lead the Vibe, Don’t Fight It

Vibe Coding is not a threat to your expertise; it is a multiplier for it. In the past, your time was wasted catching syntax errors. Now, your time is freed to focus on what actually matters: Architecture, Security, and User Experience.

The most successful Tech Leads in the era of Vibe Coding are those who lean into the speed but double down on the standards. You are the curator of the codebase. Your job is to ensure that while the “vibe” is high and the velocity is fast, the foundation remains unbreakable.

When you review a “vibed” PR, you are essentially auditing the developer’s ability to guide the AI. A good Vibe Coder produces code that follows your project’s rules, includes comprehensive tests, and handles errors gracefully. A bad Vibe Coder produces a “black box” that works today but breaks tomorrow.

Train your team to “vibe” with discipline. Encourage the use of TDD, enforce the 8-gate deploy pipeline, and never—ever—accept code that the developer cannot explain. The “vibe” might be generated by a machine, but the responsibility remains human.

Happy Vibe Reviewing!