Mastering `cm-dockit`: The Complete Guide


Skills used: cm-dockit

In the high-velocity world of Vibe Coding, speed is often the primary metric of success. We prompt, the AI generates, and within minutes, a complex feature is live. However, this velocity comes with a hidden tax: Context Rot. When you are moving at the speed of thought, documentation is usually the first casualty. You end up with a codebase that works perfectly today but becomes a “black box” by next Tuesday. Your AI agents start hallucinating because they no longer understand the original intent, and you find yourself spending more time explaining the project to your tools than actually building.

This is the exact problem cm-dockit was engineered to solve. It isn’t just a “documentation generator” in the traditional sense—it is a Knowledge Systematization Engine. While tools like JSDoc or Doxygen focus on technical signatures, cm-dockit focuses on meaning. It bridges the gap between raw source code and the business logic, user personas, and operational procedures that give that code purpose.

Core Concepts: Beyond Auto-Generated Comments

To master cm-dockit, you must first shift your mental model of what documentation should be. In the Cody Master ecosystem, documentation serves two masters: the human developer and the AI agent.

1. The Multi-Dimensional Scan

Unlike basic parsers, cm-dockit performs a multi-dimensional analysis of your workspace. It doesn’t just look at function names; it analyzes:

  • Architectural Patterns: Is this a Clean Architecture setup? A monolithic MVC? cm-dockit identifies the structural skeleton.
  • Behavioral Intent: By analyzing the flow of data and the naming of components, it infers the “Jobs To Be Done” (JTBD).
  • User Personas: It looks at the UI/UX layer to determine who the end-user is and what their primary pain points are.

2. The Artifact Hierarchy

cm-dockit produces a structured knowledge base categorized into several high-signal artifacts:

  • Personas & JTBD: Definitions of who the app is for and what they are trying to achieve.
  • Process Flows: High-level logical maps of how a user moves through a feature.
  • SOPs (Standard Operating Procedures): Step-by-step guides for developers (or agents) on how to maintain or extend specific modules.
  • Technical References: Exhaustive, LLM-readable API and module documentation.

3. LLM-First Documentation

This is the “secret sauce.” cm-dockit generates documentation that is specifically formatted to be ingested by other AI agents. It uses semantic headers, clear dependency mapping, and “Context Anchors” that allow a tool like cm-continuity or cm-planning to understand the codebase in seconds, rather than minutes of expensive token-scanning.
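
To make this concrete, here is what such an LLM-first fragment might look like. This is a hypothetical sketch: the header names and the “Context Anchor” label are illustrative, not a guaranteed cm-dockit output format.

```markdown
## Module: billing-provider
**Depends on:** auth middleware, Supabase client
**Context Anchor:** All tier checks ("Pro", "Admin") happen in middleware;
UI components must never gate features on their own.
```

An agent that reads a fragment like this can resolve the module’s role and its one hard constraint without scanning the source files at all.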


Practical Example: Systematizing a “Chaos” Project

Imagine you’ve spent the last three hours vibe-coding a “Serverless Analytics Dashboard.” You have 45 new files, a complex database schema in Supabase, and a maze of Edge Functions. You’re exhausted, and the AI is starting to lose the thread of how the authentication flow interacts with the billing tier.

This is the moment to invoke cm-dockit.

Step 1: The Initial Scan

You trigger the tool with a directive to analyze the current state:

gemini "use cm-dockit to scan the src/ directory and generate a full knowledge base in /docs"

cm-dockit begins its work. It reads your auth.ts, your billing-provider.js, and your React components. It notices that you have a “Pro” check in your middleware and an “Admin” role in your database.
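
For context, the kind of tier check the scan picks up on might look like the sketch below. The `Tier` type and `requireTier` function are illustrative names for the middleware logic described above, not part of cm-dockit itself:

```typescript
// Hypothetical middleware tier check; all names are illustrative.
type Tier = "free" | "pro" | "admin";

const TIER_ORDER: Tier[] = ["free", "pro", "admin"];

// Returns true if the user's tier meets or exceeds the required tier.
function requireTier(userTier: Tier, required: Tier): boolean {
  return TIER_ORDER.indexOf(userTier) >= TIER_ORDER.indexOf(required);
}
```

A scanner that sees a check like this can infer that the app has at least three access levels, and that “Pro” gating lives in middleware rather than in the UI.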

Step 2: Reviewing the Persona Artifact

Instead of a dry list of files, cm-dockit creates docs/PERSONA.md. It might look like this:

Primary Persona: Data-Driven Founder

  • Goal: Wants to see real-time MRR and churn without writing SQL queries.
  • Pain Point: Traditional tools are too slow to set up.
  • Key Interaction: The “One-Click Export” and the “Anomaly Detection” notification toggle.

By formalizing this, the next time you ask an AI to “Add a new button,” it knows the button should be designed for a non-technical founder, not a backend engineer.

Step 3: The SOP for Future Agents

The most powerful output is docs/sop/feature-extension.md. cm-dockit identifies the pattern of how you add new metrics to the dashboard.

### SOP: Adding a New Analytics Metric
1.  Define the SQL view in `supabase/migrations`.
2.  Add the metric key to `src/consts/metrics.ts`.
3.  Create a new component in `src/components/charts` using the `BaseChart` wrapper.
4.  Update the `DashboardLayout` to include the new chart.

Now, when you want to add a “Customer Lifetime Value” chart, you simply point the agent to this SOP. The chance of the agent “guessing” the wrong file structure drops to near zero.
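
Steps 2 and 3 of that SOP might look like the sketch below. Everything here is hypothetical: `METRICS`, `MetricKey`, and `chartTitle` are illustrative names standing in for the project’s real registry and `BaseChart` wrapper.

```typescript
// Step 2 (hypothetical): a central metric registry, as in src/consts/metrics.ts.
const METRICS = {
  mrr: { label: "Monthly Recurring Revenue", unit: "USD" },
  churn: { label: "Churn Rate", unit: "%" },
  clv: { label: "Customer Lifetime Value", unit: "USD" }, // the new metric
} as const;

type MetricKey = keyof typeof METRICS;

// Step 3 (hypothetical): the chart layer reads from the registry, so an
// unregistered key fails at compile time instead of rendering a blank chart.
function chartTitle(key: MetricKey): string {
  const { label, unit } = METRICS[key];
  return `${label} (${unit})`;
}

console.log(chartTitle("clv")); // → "Customer Lifetime Value (USD)"
```

The registry pattern is exactly what the SOP encodes: the agent edits one well-known file instead of guessing where metric definitions live.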


Best Practices for Knowledge Mastery

To get the most out of cm-dockit, you should integrate it into your regular development heartbeat. It is not a “once-per-project” tool; it is a living part of the cycle.

1. Documentation as a Quality Gate

In the Cody Master workflow, a feature isn’t “done” until it is documented. Before you merge a branch or close a session, run a cm-dockit update. This prevents the “knowledge gap” from ever forming. If the tool can’t explain what you just built, it means your code is too complex or your naming is poor.

2. The “Feed the Agent” Strategy

When starting a new task in a large codebase, your first prompt should often be: "Read docs/architecture.md and docs/sop/ before proposing a plan." Because cm-dockit has optimized these files for AI ingestion, the agent will have a much higher “First-Time Success Rate” (FTSR).

3. Customizing the Output

cm-dockit supports different output formats. If you are building an internal tool for a team, use the VitePress Premium mode. This generates a beautiful, searchable static site that looks like professional documentation. If you are working solo and just want to keep the AI in sync, stick to the Strategic Markdown mode, which prioritizes density and semantic clarity.

4. Guarding Against Context Rot

Projects fail when the “Vibe” and the “Reality” diverge. If you change your billing logic but don’t update the docs, the AI will keep trying to use the old logic. Master cm-dockit by using the --diff flag to see how your architectural intent has shifted over time. This acts as a mirror for your own design decisions.


Troubleshooting and Advanced Tips

Scenario: The AI is generating too much fluff. If you find the generated docs are becoming wordy, use the compact setting. This forces cm-dockit to use bullet points and code blocks rather than long paragraphs. This is the preferred mode for “Pure Vibe Coding” where tokens are precious.

Scenario: Missing deep logic. cm-dockit is smart, but it can’t read your mind. If a specific business rule is critical (e.g., “Never delete a user who has an active subscription”), add a comment with a @logic tag in your code. cm-dockit is trained to look for these anchors and promote them to the ARCHITECTURE.md or SOP.md files immediately.
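
In practice, a @logic anchor might look like this. The tag itself comes from the description above; the function, types, and rule wording are illustrative:

```typescript
interface User {
  id: string;
  hasActiveSubscription: boolean;
}

/**
 * @logic Never delete a user who has an active subscription.
 * (Hypothetical example: cm-dockit is described as promoting tagged
 * rules like this one into ARCHITECTURE.md or the SOP files.)
 */
function canDeleteUser(user: User): boolean {
  return !user.hasActiveSubscription;
}
```

Because the rule lives next to the code that enforces it, the generated documentation and the implementation cannot silently drift apart.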

The “Continuity Loop”: For the ultimate experience, link cm-dockit with cm-continuity. Use cm-dockit to generate the “Long-Term Memory” (the docs) and cm-continuity to handle the “Short-Term Memory” (the current session state). This creates a tiered knowledge system that makes your workspace virtually hallucination-proof.


Conclusion: The Architecture of Success

In the era of AI-driven development, the role of the engineer is shifting from “writer of code” to “architect of knowledge.” Code is increasingly cheap and disposable; knowledge of why that code exists is the true value.

Mastering cm-dockit allows you to move at the speed of Vibe Coding without the usual side effects of technical debt and confusion. It ensures that your project remains a coherent system, no matter how fast it grows or how many different AI agents touch it. By investing a few seconds in a “Knowledge Scan,” you save hours of future debugging and explanation.

Stop just writing code. Start documenting the Vibe. Use cm-dockit to turn your “black box” into a transparent, scalable, and intelligent knowledge base that grows as fast as you do.