Framework in practice
Methodology
There's a difference between code that happens to be generated by AI and software that works. The first cuts corners on architecture, skips security reviews, and produces technical debt that compounds. The second treats AI as an accelerant within a disciplined engineering process, where every output from an AI tool gets evaluated by a human engineer for correctness, security, and maintainability.
InTech doesn't practice AI-generated code. We practice AI-assisted development. And that distinction changes everything about what ships.
Framework Explainer
Methodology pages explain the standards we use to reduce ambiguity before and during the build.
Every day, teams spin up ChatGPT, prompt it for a feature, copy the output into their codebase, and call it done. It feels fast. And it is, until the database schema breaks under load, or the authentication logic has a vulnerability that takes three weeks to debug, or the code is so opaque that nobody can modify it six months later.
The real cost of AI-generated code isn't the time saved upfront. It's the debt incurred: unmaintainable architecture, missing error handling, third-party integrations chosen for convenience instead of fit, no observability, and no documented reasoning for the decisions made.
When someone on your team inevitably needs to modify that code, they're starting from scratch because nothing was recorded about why it was built the way it was.
InTech's approach is different. We use AI tools to handle scaffolding, boilerplate, first drafts of functions, and test generation. But an experienced engineer reviews every single output before it goes into your codebase. That engineer owns the architecture, the security posture, and the long-term maintainability. AI accelerates execution. Humans exercise judgment.
InTech delivery follows a methodology called CRAFT, which stands for Context, Rationale, Automate, Fortify, and Telemetry. It's designed to catch the gaps that AI-generated code leaves wide open.
Context
Before a line of code is written, InTech establishes a shared understanding of what's actually being built and why. This happens through an Intent Contract: a document that defines the problem being solved, the user affected, the outcome metric that proves success, the constraints, and what's explicitly in scope and out of scope.
An Intent Contract isn't a requirements document. It's a north star. It's what keeps a project from drifting into feature bloat or solving the wrong problem. When the team gets stuck on a technical decision, they reference the Intent Contract to decide whether a particular approach serves the outcome metric or not.
This phase prevents the first expensive mistake: building the wrong thing fast.
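As a sketch only, an Intent Contract for a hypothetical feature might look like the following. The product, fields, and numbers are illustrative, not a fixed InTech template:

```markdown
# Intent Contract: Self-serve invoice export (hypothetical example)

**Problem** Finance teams email support to request invoice CSVs, generating ~20 tickets/week.
**User** Account admins on the Business plan.
**Outcome metric** Invoice-export support tickets drop below 2/week within 30 days of launch.
**Constraints** Must reuse the existing billing API; no new third-party vendors.
**In scope** CSV export of paid invoices; date-range filter.
**Out of scope** PDF export; scheduled/recurring exports; per-seat cost breakdowns.
```

Note what the outcome metric does here: when the team later debates, say, adding a scheduling UI, the contract answers the question. Scheduling is out of scope, and the ticket-reduction metric doesn't need it.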
Rationale
Every material technical decision, whether an architecture choice, a data model, a third-party integration, or an authentication approach, gets recorded in a Decision Record. Not just the decision, but the reasoning: why this option was chosen, what alternatives were evaluated, and what constraints drove the choice.
Decision Records serve two purposes. First, they force clear thinking at the moment a decision is made. Second, they prevent the same decisions from being relitigated six months later when someone asks "why did we build it this way?"
When AI generates code that makes an architectural assumption, a decision record explains why that assumption holds, or whether it needs to be reconsidered. Without that record, future engineers are either blocked or forced to guess.
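A Decision Record can be as short as a few lines. Here is a hedged sketch in the style of an architecture decision record; the scenario and names are hypothetical, not taken from a real InTech project:

```markdown
# Decision Record: Postgres row-level security for tenant isolation (hypothetical example)

**Status** Accepted
**Context** Multi-tenant data lives in shared tables; the Intent Contract rules out per-tenant databases.
**Decision** Enforce tenant isolation with Postgres row-level security policies, not application-layer filters.
**Alternatives considered** Per-tenant schemas (operational overhead at scale); application-layer
WHERE clauses (easy to omit in new queries, including AI-generated ones).
**Consequences** Every query runs under a tenant-scoped role; new tables must ship with an RLS
policy before merge.
```

The "Alternatives considered" line is what saves the six-months-later conversation: a future engineer can see not just what was chosen, but what was rejected and why.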
Automate
This is where AI tools enter the process. Prompts are precise because the requirements are clear and documented. AI generates function stubs, scaffolding, test cases, and routine code. Engineers review the output for correctness, security implications, performance characteristics, and alignment with the documented architecture.
Automation is risk-proportional. Generating a utility function gets light-touch review. Designing the authentication flow gets rigorous evaluation. The riskier the code, the more human judgment it receives.
Fortify
Quality gates are mandatory, not optional. Before code ships, tests must pass. The rollback path must be validated in a staging environment. Environment variables must be securely managed. Observability must be live: if something breaks in production, your team will know within minutes. QA sign-off isn't a courtesy; it's a gate.
"It feels ready" is not a sufficient quality standard when you're running production software that people depend on. Fortify ensures that what leaves the pod is actually ready.
Telemetry
Success isn't measured by deployment date. It's measured against the outcome metric defined in the Intent Contract, observed through live telemetry post-launch. Thirty days after launch, the team reviews that telemetry: is the feature actually reducing customer friction? Is it being used the way we expected? What broke that we didn't anticipate?
This closes the loop. What you learn from telemetry informs the next iteration, the next decision record, the next Intent Contract.
InTech doesn't assign freelancers or rotate engineers across projects. We staff dedicated pods: integrated teams that own the outcome.
A pod includes a dedicated engineer or engineers (depending on scope), a Project Delivery Lead who ensures clarity and removes blockers, a DevOps engineer, QA resources, and UI/UX support proportional to the product's needs. The same people work on your product for the entire engagement. They own the codebase, the architecture, and the outcomes.
InTech offers three pod models:
Express Pod is designed for 30-day fixed-fee MVP delivery. It includes one engineer (part-time dedicated) and a structured four-week cadence: Week 1 focuses on clarity and alignment, Week 2 is build, Week 3 is feedback, and Week 4 is finalization and launch. This model is for founders who need to validate a hypothesis quickly and can't wait for a longer engagement.
Build Pod is a predictable monthly retainer for one full-time dedicated engineer, with a two-month minimum commitment. This works for founders who have a clear product direction and need sustained engineering capability to execute a roadmap.
Scale Pod is a predictable monthly retainer for two engineers and is built for larger products with multiple workstreams, or teams that need to move faster across a complex roadmap.
All infrastructure, including GitHub repositories, Railway deployments, Supabase databases, and Cloudflare configurations, is set up in your own accounts from day one. You always own your infrastructure. You're never dependent on InTech-managed accounts or integrations.
An InTech pod delivers more than code. You receive a documented, maintainable codebase with decision rationale embedded throughout. You receive Intent Contracts that clarify strategy. You receive a team that works in your timezone (all InTech engineers operate in Eastern Time, whether based in Florida or Panama). You receive weekly progress visibility and a clear runway to each milestone.
When the engagement ends, you inherit a codebase that your own team can understand, modify, and own. You're not paying for technical debt in disguise.
FAQ
Q: Does AI-assisted development mean the code is less reliable?
No. AI generates code quickly, but without human judgment, it often misses security implications, scalability concerns, and architectural coherence. InTech's approach inverts this: AI handles routine coding tasks fast, and engineers focus their attention on the decisions that matter most: the ones that determine whether the system is secure, scalable, and maintainable. The result is code that's actually more reliable than code written entirely by humans trying to move fast.
Q: What if we have existing code we want to integrate with?
That's normal. The pod inherits your existing codebase, understands its architecture and constraints, and extends it according to your roadmap. Decision Records are created for integration choices. The goal is a coherent codebase, not a system where old code and new code follow different patterns.
Q: How much involvement do we need to have?
You need to be available for clarity on the Intent Contract and for feedback during each phase. The Project Delivery Lead will schedule weekly syncs and flag blockers early. Expect an hour or two per week during active build phases. If you're hands-off entirely, the pod can't validate that it's building the right thing.
Q: Can we switch engineers mid-project?
We don't rotate engineers across projects. If circumstances require a pod change, we discuss it before committing. Continuity matters because context is expensive to rebuild.
Q: What happens after launch?
The telemetry review happens 30 days post-launch. After that, the relationship can transition to maintenance mode (as-needed support), evolve into a Build Pod for sustained development, or end cleanly with full knowledge transfer. It's your call.
Q: Do you use our choice of tech stack?
We work primarily with Node.js/TypeScript, Railway, Supabase, and Cloudflare because these tools are reliable and developer-friendly. If you have hard constraints for different technologies, we can discuss fit. The goal is to use tools that let your team maintain the system confidently long-term.
Related Methodology
What Is the CRAFT Methodology?
Learn how the CRAFT methodology governs AI-assisted product development with clarity before code: a delivery system that prevents building the wrong thing faster.
Product Engineering for the AI Era
How unified product and engineering teams build faster without technical debt. Discover why the traditional product/engineering split no longer works.
PRD vs. Intent Contract: A Practical Comparison
Compare [Product Requirements Documents](https://www.agilealliance.org/glossary/requirements/) and Intent Contracts. Understand the structural differences, when each works best, and how they coexist in modern product development.