
Building Websites with AI Coding Tools — How Far Can Prompts Really Go?

"Just type a prompt and a website appears" — how much of that is actually true? After building 4 production websites using AI coding tools, we have concrete answers. About 65% of the code was AI-generated, development speed improved roughly 3x, and we still had to manually write or fix ~35% of everything. Here's the full breakdown of what works, what doesn't, and whether the cost is justified.

AI Usage Ratio by Task Type

We measured exactly how much of each task type the AI handled across 4 website projects. The overall code was ~65% AI-generated, but the ratio varies dramatically by task type.

| Task Type | AI Generated | Manual Edit | Satisfaction |
| --- | --- | --- | --- |
| Component Markup | 85% | 15% | High |
| Styling (Tailwind) | 80% | 20% | High |
| API Routes | 60% | 40% | Moderate |
| Business Logic | 30% | 70% | Low |
| Database Schema | 50% | 50% | Moderate |
| Auth / Security | 20% | 80% | Low |

The pattern is clear: visible UI work (markup, styling) reaches 80–85% AI utilization with high satisfaction, while invisible logic (business rules, security) drops to 20–30%. Component markup and Tailwind styling are safe to delegate almost entirely, but authentication flows and payment logic still need human expertise.

What AI Coding Tools Excel At

There are three areas where AI tools consistently deliver excellent results. Focusing your AI usage on these domains maximizes return on investment.

Code generation: React component drafts, repetitive pattern generation, boilerplate automation, and TypeScript type definitions. Give the AI a clear component spec and it produces a working draft in seconds. For example, a single prompt like "create 3 pricing cards with Tailwind" generates a complete grid layout with hover effects and responsive breakpoints.
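To give a rough feel for what such a prompt produces, here is a hand-written sketch of that kind of output. The plan names, prices, and exact class choices are illustrative, not actual AI output:

```typescript
// Sketch of the markup a "create 3 pricing cards with Tailwind" prompt
// typically yields, rendered here as an HTML string. Plan data is made up.
interface Plan {
  name: string;
  price: string;
  features: string[];
}

function pricingGrid(plans: Plan[]): string {
  const cards = plans
    .map(
      (p) => `
  <div class="rounded-xl shadow-lg p-6 hover:shadow-xl transition-shadow">
    <h3 class="text-lg font-semibold">${p.name}</h3>
    <p class="text-3xl font-bold">${p.price}</p>
    <ul class="mt-4 space-y-2">
      ${p.features.map((f) => `<li>${f}</li>`).join("\n      ")}
    </ul>
  </div>`
    )
    .join("");
  // Responsive: single column on mobile, three columns from md up.
  return `<div class="grid grid-cols-1 md:grid-cols-3 gap-6">${cards}</div>`;
}

const html = pricingGrid([
  { name: "Basic", price: "$9", features: ["1 site"] },
  { name: "Pro", price: "$29", features: ["10 sites"] },
  { name: "Team", price: "$99", features: ["Unlimited sites"] },
]);
```

AI tools reliably produce this kind of structure because it is pure pattern repetition: one card template, mapped over data, wrapped in a responsive grid.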

Styling: Tailwind class combinations, responsive layouts, dark mode support, and animations. AI tools have effectively memorized the entire Tailwind utility library, making them faster than any human at composing complex class strings.

Refactoring: Code structure improvements, function extraction, bulk renaming, and pattern consistency enforcement. Point the AI at a messy file and ask it to split into clean, focused modules — it handles this remarkably well.

Where AI Coding Tools Fall Short

Equally important is knowing when not to use AI tools. These areas consistently produce results that take longer to fix than writing from scratch.

Design judgment: AI can make a single component look good, but it cannot judge overall page balance. Whitespace rhythm, color harmony, visual flow, and the subtle hierarchy that makes a page "feel right" — these require human aesthetic sense. Use AI to implement design decisions, not to make them.

Business logic: Discount policies, member-tier permissions, complex state machines, and domain-specific calculations. No matter how detailed your prompt, AI consistently misses edge cases. For domain logic with real-world consequences, write it yourself.
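To make the edge-case problem concrete, here is a hypothetical discount rule of the kind AI drafts tend to get subtly wrong. The tiers, rates, and cap are invented for illustration:

```typescript
// Hypothetical discount policy: tiered rates plus an optional coupon,
// with the edge cases AI drafts commonly miss called out in comments.
type Tier = "basic" | "gold" | "vip";

const TIER_RATE: Record<Tier, number> = { basic: 0, gold: 0.1, vip: 0.2 };
const MAX_DISCOUNT_RATE = 0.3; // business rule: never discount more than 30%

function finalPrice(total: number, tier: Tier, couponRate = 0): number {
  if (total <= 0) return 0; // edge case: empty or fully refunded cart
  // Edge case: coupon stacks with the tier rate, but the sum is capped.
  const rate = Math.min(TIER_RATE[tier] + couponRate, MAX_DISCOUNT_RATE);
  // Edge case: round to cents so floating point never leaks into invoices.
  return Math.round(total * (1 - rate) * 100) / 100;
}
```

Each commented branch is a rule a prompt is unlikely to spell out completely, which is exactly why this category stays at ~30% AI utilization.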

Security code: Authentication flows, CSRF protection, input validation, and SQL injection prevention. AI-generated security code looks plausible but often contains subtle vulnerabilities. Security-critical code demands expert review regardless of who — or what — wrote it.

| Domain | Why AI Fails | Recommended Approach |
| --- | --- | --- |
| Overall Design | Lacks visual context awareness | Design mockup first, then AI codes it |
| Payment Systems | Misses edge cases | Design manually, AI assists only |
| DB Migrations | Cannot assess existing data impact | Plan manually, generate scripts only |
| Third-party APIs | Outdated API knowledge | Always verify against official docs |
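For the security category, here is a minimal sketch of the kind of input validation that still warrants expert review even when AI drafts it. The field names, password policy, and error messages are illustrative assumptions, not a recommended standard:

```typescript
// Hand-rolled input validation sketch for a signup payload. In production
// you would still pair this with parameterized queries and a reviewed
// auth flow; the rules here are illustrative only.
interface SignupInput {
  email: string;
  password: string;
}

function validateSignup(raw: unknown): SignupInput {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("payload must be an object");
  }
  const { email, password } = raw as Record<string, unknown>;
  // Reject non-strings and obviously malformed addresses.
  if (typeof email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error("invalid email");
  }
  if (typeof password !== "string" || password.length < 12) {
    throw new Error("password must be at least 12 characters");
  }
  // Never interpolate these values into SQL; use parameterized queries.
  return { email, password };
}
```

AI will happily generate something like this on request; the risk is what it quietly omits (rate limiting, unicode normalization, timing-safe comparisons), which is why human review remains non-negotiable here.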

5 Prompt Writing Principles That Actually Work

The same AI tool produces wildly different results depending on prompt quality. After hundreds of iterations, we distilled these 5 principles that consistently improve output quality.

1. Specify file names. Instead of "fix the Header," say "modify the navigation menu in components/Header.tsx." Giving the AI a concrete file path eliminates ambiguity and produces targeted changes.

2. Describe the expected result concretely. "Make it pretty" fails every time. "Use bg-primary color, rounded-xl corners, and shadow-lg for a card style" succeeds consistently. Visual specificity translates directly to code accuracy.

3. Reference existing patterns. "Create a new card using the same style as BlogCard.tsx" gives the AI a concrete template. Pattern-referenced prompts maintain consistency across your codebase without repeated style specifications.

4. One task per prompt. Breaking complex requests into sequential steps dramatically improves accuracy. Each focused instruction produces a verifiable result before moving to the next step.

5. State constraints explicitly. "Without external libraries," "using only Tailwind," "compatible with the existing API" — constraints eliminate entire categories of unwanted output. The more boundaries you define, the better the result fits your architecture.

```text
# Bad prompt: Vague
"Make the page better"

# Good prompt: Specific
"In app/(site)/blog/page.tsx:
1. Change blog card grid to grid-cols-1 md:grid-cols-2 lg:grid-cols-3
2. Add category badge to each card (bg-primary/10 text-primary rounded-full)
3. Match existing BlogCard component style
4. Unify date format to 'YYYY.MM.DD'"
```

Investing 5 extra minutes writing a precise prompt saves 30 minutes of fixing the output. "Specific once, correct once" beats "vague fast, fix repeatedly" every time.

Cost vs Productivity — The Real Numbers

AI coding tools aren't free. The question is whether the productivity gain justifies the subscription cost. We compared actual project hours with and without AI assistance.

| Project Type | With AI Tools | Manual Only | Time Saved |
| --- | --- | --- | --- |
| Corporate Website (10 pages) | ~40 hours | ~120 hours | 67% |
| Blog System | ~20 hours | ~60 hours | 67% |
| Admin Dashboard | ~30 hours | ~80 hours | 63% |
| Monthly AI Tool Cost | $70–$210 | $0 | — |

At freelancer rates ($35–$70/hour), the monthly AI tool cost works out to roughly 1–6 hours of billable work. The time savings from AI tools far exceed this, so ROI is recovered within the first week of usage on any reasonably sized project.
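The break-even arithmetic is simple enough to sketch. The rates and costs below are the article's own ranges, not measurements of any particular project:

```typescript
// Break-even check: how many billable hours does the monthly tool cost
// represent? Figures are the article's ranges ($70-$210/mo, $35-$70/hr).
function hoursToRecoup(monthlyCost: number, hourlyRate: number): number {
  return monthlyCost / hourlyRate;
}

// Worst case: most expensive tooling, cheapest hourly rate.
const worstCase = hoursToRecoup(210, 35); // 6 hours
// Best case: cheapest tooling, highest hourly rate.
const bestCase = hoursToRecoup(70, 70); // 1 hour

// The corporate-site project alone saved ~80 hours (120 - 40),
// dwarfing even the worst-case recoup time.
const hoursSaved = 120 - 40;
```

Even in the worst case, a single saved workday covers the monthly subscription.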

The ~65% time reduction applies primarily to UI-focused work. Backend-heavy projects see closer to 40–50% savings, which is still substantial. The key insight: AI tools don't eliminate developer hours — they shift hours from typing to reviewing, which is a far more valuable use of expert time.

How AI Is Changing the Developer's Role

As AI coding tools become standard, the developer's role is fundamentally shifting — from writing code to designing and verifying code. This isn't a reduction in skill requirements; it's a redirection.

| Responsibility | Before AI Tools | With AI Tools |
| --- | --- | --- |
| Code Writing | Write everything manually | AI drafts, human reviews |
| Architecture | Design + implement | Focus on design, AI assists implementation |
| Testing | Manual + automated mix | AI generates test code |
| Code Review | Peer developers | AI pre-review + peer review |
| Core Skill | Typing speed, syntax memory | Design thinking, prompt craft |

The developers who thrive in this new environment share common traits: they write clear requirements, they review AI output critically rather than accepting it blindly, and they understand architecture deeply enough to guide AI toward correct solutions.

Five skills matter most going forward: requirements definition (translating business needs into precise technical specs), code review ability (catching AI mistakes that look correct), prompt communication (getting the best output on the first try), architecture design (making structural decisions AI can't), and security awareness (understanding threats AI doesn't consider).

Summary

AI generates ~65% of website code, but ~35% still requires manual writing or correction. The sweet spot is UI work — markup (85% AI) and styling (80% AI) — while business logic (30%) and security (20%) remain firmly human territory.

Prompt quality determines everything. Five specific principles — file names, concrete expectations, pattern references, single-task focus, and explicit constraints — consistently produce better AI output than vague requests.

The ROI is clear: 63–67% time savings across project types, with monthly tool costs equivalent to roughly 1–6 hours of freelancer work. Development speed roughly triples, and the investment pays for itself within a week.

The developer role is evolving, not disappearing. The shift from coder to architect-reviewer demands higher-level skills: design thinking, critical review, and the ability to guide AI tools effectively. Developers who adapt to this model become dramatically more productive.