What I Learned Building Products with AI

Lessons from integrating AI into real-world products. Covering prompt engineering, MCP servers, browser automation, AI agents, and production considerations.

I’ve spent the past year integrating AI into various products. Some of those integrations shipped to production. Others got scrapped after a week. Here’s what I took away from all of it.

Start with the Problem, Not the Technology

It’s tempting to add AI to everything. But the best AI-powered features start with a clear user problem. Ask yourself what task is tedious, repetitive, or requires expertise that users don’t have.

Prompt Engineering is an Art

Writing effective prompts is harder than it looks. A few principles have served me well.

  • Be specific about the output format you want
  • Provide examples when possible
  • Break complex tasks into smaller steps
  • Test with edge cases early and often

// Instead of this
const prompt = "Summarize this text";

// Try this
const prompt = `Summarize the following text in 2-3 sentences.
Focus on the main argument and key supporting points.
Use simple, clear language.

Text: ${text}`;

Handling Uncertainty

AI models are probabilistic. They will sometimes produce unexpected results. Building robust systems means thinking about a few things.

  1. Always validate AI outputs before using them. Models hallucinate, and unchecked outputs will break things downstream.
  2. Have a fallback path for when the AI fails. It will fail.
  3. Let users correct mistakes through feedback loops. This also gives you training signal for improving prompts later.
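The validate-then-fallback pattern from points 1 and 2 takes only a few lines. Here is a minimal sketch, assuming the model was asked to return JSON with a `sentences` array; the shape and the fallback text are illustrative, not from any particular API.

```typescript
// Hypothetical shape we expect the model to return.
interface Summary {
  sentences: string[];
}

// Validate the model's raw output before trusting it.
// Returns null when the output doesn't match the expected shape.
function parseSummary(raw: string): Summary | null {
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed === "object" &&
      parsed !== null &&
      Array.isArray(parsed.sentences) &&
      parsed.sentences.every((s: unknown) => typeof s === "string")
    ) {
      return { sentences: parsed.sentences };
    }
    return null; // valid JSON, wrong shape
  } catch {
    return null; // model returned something that isn't JSON at all
  }
}

// Fallback path: when validation fails, degrade gracefully
// instead of passing a hallucinated structure downstream.
function summarize(raw: string): Summary {
  return parseSummary(raw) ?? { sentences: ["Summary unavailable."] };
}
```

The key design choice is that the parser never throws: callers get either a validated structure or the fallback, so a hallucinated response can't crash anything downstream.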

The Human Element

The best AI products keep humans in the loop. They augment human capabilities rather than trying to replace them entirely. This builds trust and produces better outcomes.

Extending AI with MCP

One of the biggest limitations of AI models is their fixed knowledge cutoff. They don’t know about the library update released last week or the API change in the latest framework version. This is where the Model Context Protocol (MCP) makes a real difference.

MCP is a standard for connecting AI to external tools and data sources. Instead of copying documentation into prompts, you let the AI pull what it needs in real-time.

Here’s a typical MCP configuration.

{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "cloudflare-docs": {
      "command": "npx",
      "args": ["mcp-remote", "https://docs.mcp.cloudflare.com/mcp"]
    }
  }
}

With this setup, the AI can query up-to-date library documentation through Context7 or search Cloudflare’s docs directly. No more outdated answers about deprecated APIs.

Browser Automation for Research

Many websites aggressively block programmatic access. Bot detection, CAPTCHAs, JavaScript-rendered content. These all create barriers for AI trying to gather information. The solution is to give the AI control of a real browser.

Playwright MCP lets the AI do exactly this. The AI can navigate pages, interact with elements, and see the rendered content just like a human would.

Two use cases have proven especially valuable.

Bypassing restrictions for research

Some documentation sites block direct API access. Take Paystack’s documentation. When you ask the AI to fetch their API docs directly, it gets blocked. With Playwright MCP, the workflow just works.

“Look up the Paystack API docs for creating a transaction”

The AI opens a real browser, navigates to the docs, captures a snapshot of the rendered page, and extracts exactly what you need. No code required from you. The AI has browser capabilities and uses them when direct access fails.

Visual debugging

When debugging UI issues, screenshots are worth a thousand words. The AI can navigate to the broken state, take a screenshot, and analyze what’s wrong. It sees exactly what you see. You’re not writing Playwright code. You’re giving the AI browser capabilities, and it decides when and how to use them.

Building Effective AI Agents

Single prompts work well for simple tasks, but complex workflows benefit from agents. These are AI systems that can plan, execute steps, and use tools autonomously.

When building agents, I’ve learned a few key principles.

Know when to use agents vs single prompts. If a task can be described completely upfront and doesn’t require intermediate decisions, a single prompt is simpler and more predictable. Agents shine when the next step depends on the results of the previous one.

Break tasks into executable steps. Agents work best with clear, discrete actions. “Research this topic and write a report” is vague. “Search for recent articles, summarize the top 3, then synthesize findings” gives the agent a roadmap.
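That decomposition can be made concrete as a plan the agent walks through one step at a time. This is a sketch; `execute` stands in for whatever model call your agent makes, and the step wording is illustrative.

```typescript
// A vague goal rewritten as discrete steps an agent can execute.
const researchPlan = [
  "Search for recent articles on the topic",
  "Summarize the top 3 results",
  "Synthesize the summaries into a short report",
];

// Run steps sequentially, threading each result into the next step
// as context. Keeping each call focused is what makes the plan a
// roadmap instead of one vague mega-prompt.
function runPlan(
  steps: string[],
  execute: (step: string, context: string) => string
): string {
  return steps.reduce((context, step) => execute(step, context), "");
}
```

Because each step receives the previous output, the agent can make intermediate decisions without you re-prompting it by hand.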

Give agents the right tools. An agent is only as capable as its toolkit. MCP servers for documentation, file system access for code, browser automation for research. Each tool expands what the agent can accomplish.

Keep humans in the loop for critical decisions. Agents should pause and ask before irreversible actions. Deleting files, sending emails, making purchases. These need human approval. Autonomy without oversight leads to expensive mistakes.
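The approval gate itself is simple to sketch. `AgentAction` and `approve` are hypothetical names here, standing in for a real agent framework and a real human-approval UI or queue.

```typescript
// Hypothetical action type an agent might propose.
interface AgentAction {
  name: string;
  irreversible: boolean;
  run: () => string;
}

// Gate irreversible actions behind an approval callback. Reversible
// actions run immediately; irreversible ones pause for a human.
function executeAction(
  action: AgentAction,
  approve: (action: AgentAction) => boolean
): string {
  if (action.irreversible && !approve(action)) {
    return `skipped: ${action.name} (approval denied)`;
  }
  return action.run();
}
```

The point is that the gate lives in the executor, not in the prompt: the agent can't talk itself past it.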

Production Considerations

Running AI in production brings its own challenges. It’s not just about getting good outputs. It’s about reliability, cost, and observability.

Cost management. Token usage adds up fast. Cache responses where possible. Use smaller models for simpler tasks. Set hard limits on token budgets per request.

Monitoring AI behavior. Log prompts, responses, and any errors. Track metrics like response quality, latency, and failure rates. You need visibility into what your AI is actually doing.

Handling rate limits and failures. AI APIs have rate limits and occasionally fail. Implement retries with exponential backoff. Have fallback behavior when the AI is unavailable.
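Exponential backoff takes only a few lines. A sketch with illustrative defaults; a real client should also respect any retry hint the API returns and cap the total wait.

```typescript
// Retry a flaky async call with exponential backoff.
// `attempts` and `baseDelayMs` are illustrative defaults.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait 500ms, 1000ms, 2000ms, ... between attempts.
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // all attempts failed: surface the last error
}
```

Wrap your model calls in this, and layer the fallback behavior on top for the case where every retry fails.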

Versioning prompts and configurations. Prompts are code. Version them. A small prompt change can dramatically alter outputs. Treat it like any other deployment.
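One lightweight way to do this is a versioned prompt registry checked into the repo, with the version ID logged alongside each request. The registry below is a sketch; the IDs and prompt text are illustrative.

```typescript
// Prompts as versioned artifacts. Changing a prompt means adding a
// new version, so old outputs stay reproducible and diffs show up
// in code review like any other change.
const PROMPTS = {
  "summarize@v1": "Summarize this text:",
  "summarize@v2": [
    "Summarize the following text in 2-3 sentences.",
    "Focus on the main argument and key supporting points.",
  ].join("\n"),
} as const;

type PromptId = keyof typeof PROMPTS;

function getPrompt(id: PromptId): string {
  return PROMPTS[id];
}
```

Typing the IDs means a typo'd or deleted prompt version fails at compile time rather than silently sending the wrong instructions.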

What I’m Doing Differently Now

A year ago, I reached for AI when it felt exciting. Now I reach for it when it solves a real problem. That shift sounds small, but it changed how I build.

I spend more time on prompt design than I used to spend on entire feature specs. I treat MCP configurations like infrastructure. I test agent workflows the same way I test API endpoints.

The tools are changing fast. MCP keeps expanding, models keep getting better, and agents are handling tasks I wouldn’t have trusted them with six months ago. But the fundamentals stay the same: validate outputs, keep humans in the loop, and build for the failure case.

If you’re starting out with AI in your products, start small. Pick one workflow that’s tedious and well-defined. Automate that. Learn from it. Then expand.