HUMAN-AI COLLABORATION STORY

From RFP to Response in 10 Days

How AI agents helped create a comprehensive RFP response with data visualizations, documentation, and analysis—while teaching us about the future of work

Duration
10 Days
AI Tools Used
5
Pages Created
5
Human Hours
~23
01 / OVERVIEW

The Challenge

On November 5, 2025, Tata Trusts sent out an RFP for a "Design & Development of Geographical Data Visualisation Dashboard." The deadline? November 17—just 12 days away.

The Traditional Approach
A typical RFP response would require 2-3 weeks of work from a team of specialists: business analysts to decode requirements, data scientists to create sample visualizations, technical writers to craft the proposal, and designers to make it presentable. The cost? Easily $20,000-$50,000 in labor, with no guarantee of winning the bid.

But what if AI agents could handle most of this work—not replacing human expertise, but amplifying it? This is the story of how one person, leveraging AI agents, created a comprehensive RFP response in just 10 days, spending only ~23 hours of human time.

81%
Time Saved
23 hours vs. 120+ hours traditional
90%
Cost Reduction
1-person effort vs. 5-person team
100%
Deliverables
All requirements met on time
02 / TIMELINE

10 Days, 10 Major Milestones

From receiving the RFP to creating a comprehensive response with interactive visualizations, here's how the 10 days unfolded. Each milestone represents a collaboration between human judgment and AI capabilities.

1 Nov 5: Formal Acknowledgement

The Challenge: The RFP document was bureaucratic and dense—typical for large organizations but difficult to quickly parse.

The AI Solution: Used ChatGPT to draft a formal acknowledgement email. Instead of spending 30 minutes crafting the perfect corporate response, the AI generated it in seconds.

Prompt Used:
"Reply formally to this email."

Human Role: Reviewed and personalized the response, ensuring it matched professional standards. Time spent: 5 minutes.

Read detailed notes in process.md

2 Nov 5: Comprehensive Question Generation

The Challenge: RFPs often lack crucial details. Experienced pre-sales teams know to ask clarifying questions early, but identifying all gaps requires deep domain knowledge.

The AI Solution: ChatGPT analyzed the RFP and generated 129 detailed questions across 11 categories—from technical architecture to payment terms.

Prompt Used:
Suggest a comprehensive list of questions to ask based on the document. According to the RFP and email, who should I send these clarifications to?

Human Role: Reviewed questions for relevance, removed duplicates, and prioritized based on experience. Identified that some questions revealed assumptions about their technical maturity. Time spent: 30 minutes.

See all 129 questions in process.md

3 Nov 11: Pre-bid Meeting Preparation

The Challenge: Pre-bid meetings are crucial for understanding unstated requirements and building relationships, but they're also opportunities to reveal (or hide) your expertise.

The AI Solution: Claude analyzed the RFP and generated strategic questions to ask during the meeting, focusing on understanding the organizational context and true needs.

AI Insight: Bureaucracy Patterns
Claude identified that the RFP likely suffered from "procurement-led documentation" where technical requirements get filtered through non-technical stakeholders. This insight proved accurate in the pre-bid meeting.

Human Role: Used AI insights to guide conversation strategy. During the call, actively listened for signals about organizational maturity, technical constraints, and decision-making processes. Time spent: 1 hour prep + 1 hour meeting.

Read preparation notes in process.md • View pre-bid FAQ

4 Nov 11: RFP Conversion & Data Generation

The Challenge: The RFP was a 47-page PDF. Working with PDFs is painful—you can't easily search, reference, or version control them. Plus, sample datasets were needed for visualization demos.

The AI Solution:

  • Claude: Converted the PDF to well-structured Markdown (triple-checked to preserve the original text verbatim, typos included)
  • ChatGPT: Generated 4 realistic fake datasets (500-2,000 rows each) that matched Tata Trusts' domain
Data Generation Prompt:
Generate 5 realistic fake datasets that will be powerful to demonstrate data visualizations on top of. STEP 1. List 10 different kinds of data the Tata Trusts team will likely be analyzing STEP 2. For each, list the insights and analyses they would be looking for STEP 3. Evaluate these datasets on domain relevance, ease of visualizing, and impactfulness STEP 4. Pick the top 4 and generate realistic data with embedded patterns
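The embedded-pattern idea in the prompt above can be sketched in plain JavaScript. This is an illustrative sketch, not the actual generator or field names used in the project: it bakes a seasonal dip and steady year-over-year growth into a fake "program reach" dataset so that charts built on it have a visible story.

```javascript
// Sketch: generate a fake monthly "program reach" dataset with embedded
// patterns (a monsoon-season dip plus ~15% annual growth), so demo charts
// have something real to reveal. Field names and magnitudes are assumptions.

function generateReachData(startYear, years, regions) {
  const rows = [];
  for (let y = 0; y < years; y++) {
    for (let month = 1; month <= 12; month++) {
      for (const region of regions) {
        const base = 1000 * (1 + 0.15 * y);                     // steady annual growth
        const seasonal = month >= 6 && month <= 9 ? 0.7 : 1.0;  // monsoon dip
        const noise = 0.9 + Math.random() * 0.2;                // +/-10% jitter
        rows.push({
          year: startYear + y,
          month,
          region,
          beneficiaries: Math.round(base * seasonal * noise),
        });
      }
    }
  }
  return rows;
}

const data = generateReachData(2022, 3, ["North", "South", "East", "West"]);
console.log(data.length); // 3 years x 12 months x 4 regions = 144 rows
```

A reviewer can then verify the pattern survives the noise, which is exactly the "reviewed datasets for realism" step described below.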

Human Role: Reviewed datasets for realism. Asked AI to add missing fields (partner names, realistic geographic distributions). Verified data patterns made sense for the domain. Time spent: 1 hour.

See conversion process • Data generation details

5 Nov 11: Brand Research & RFP Story

The Challenge: Creating materials that match a client's brand requires extensive research into their visual identity, tone, and design patterns—normally a designer's multi-day task.

The AI Solution: Claude Code analyzed Tata Trusts' website, extracted brand guidelines (colors, typography, design patterns), and created a comprehensive style guide. Then generated an interactive "RFP Story" website explaining the opportunity.

Extracted: Colors & Typography
Created: Component Library
Generated: Bootstrap-based Templates
Built: Interactive Visualizations

Human Role: Reviewed visual consistency, requested refinements to match brand tone, fact-checked claims about Tata Trusts' history and impact. Time spent: 2 hours.

Read style guide creation notes • View style guide

6 Nov 11: Data Story & Dashboard Creation

The Challenge: Creating professional data visualizations requires: analyzing data patterns, choosing appropriate chart types, implementing interactive features, ensuring responsive design, and weaving it into a compelling narrative. This is typically a 3-5 day task for a skilled developer.

The AI Solution: Claude Code created:

  • Data Story: NYT-style scrollytelling narrative with 16 embedded charts
  • Interactive Dashboard: Filterable, responsive dashboard with all visualizations
  • Modular Architecture: Each chart as independent ESM module for reusability
  • UX Features: Filters, exports, fullscreen mode, responsive design
Where AI Struggled
The AI initially got Observable Plot's radius scaling wrong. It took three iterations, plus an explicit instruction to "read the documentation and take screenshots," before it correctly implemented dynamic circle sizes. Lesson: AI needs feedback loops and verification steps.
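The bug class here is worth understanding on its own. When circle size encodes a value, the eye reads *area*, so the radius should grow with the square root of the value (Observable Plot's r scale defaults to a sqrt scale for this reason). A minimal sketch of the distinction, independent of any charting library:

```javascript
// Sketch: why linear radius scaling misleads. Perceived "size" of a
// circle is its area (pi * r^2), so radius should scale with sqrt(value).

function linearRadius(value, maxValue, maxR) {
  return (value / maxValue) * maxR;            // naive: doubling value quadruples area
}

function sqrtRadius(value, maxValue, maxR) {
  return Math.sqrt(value / maxValue) * maxR;   // correct: doubling value doubles area
}

const area = r => Math.PI * r * r;

// A value twice as large should look twice as big (twice the area):
const a1 = area(sqrtRadius(50, 100, 20));
const a2 = area(sqrtRadius(100, 100, 20));
console.log((a2 / a1).toFixed(2)); // "2.00" under sqrt scaling

// ...but under linear radius scaling it looks four times as big:
const b1 = area(linearRadius(50, 100, 20));
const b2 = area(linearRadius(100, 100, 20));
console.log((b2 / b1).toFixed(2)); // "4.00"
```

Code like the linear version "looks right and runs without errors," which is precisely why the bug survived until a human compared the rendered circles against the data.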

Human Role: Defined analysis priorities, reviewed chart effectiveness, tested interactivity, identified bugs (especially the radius issue), and requested UX improvements. Time spent: 4 hours across multiple iterations.

Read full visualization development story • Chart creation template

7 Nov 12: Analysis Planning & Process Documentation

The Challenge: Stakeholders need to understand not just what was delivered, but what insights are possible. Plus, documenting the process itself for future reference and learning.

The AI Solution: ChatGPT analyzed the generated datasets and:

  • Proposed 25 different analyses with rationale and visual recommendations
  • Ranked them by visual impact and usefulness
  • Identified "big, useful, surprising insights" vs. standard reporting

Human Role: Created this process documentation (process.md), the document you're reading now, capturing every step, every prompt, every iteration, and every lesson learned. Time spent: 3 hours.

See analysis planning notes • View full analysis list

8 Nov 12: RFP Response Creation

The Challenge: Transform all the work—analysis, visualizations, and insights—into a professional proposal that addresses every RFP requirement while showcasing the innovative AI-driven approach.

The AI Solution: Claude Code created a comprehensive proposal response including:

  • Full Proposal: Detailed response addressing all RFP sections from executive summary to pricing
  • Compliance Checklist: Line-by-line mapping of RFP requirements to proposal sections
  • Review Guide: Flagged areas needing human verification with importance ratings
  • Architecture Diagrams: Mermaid component diagrams showing the AI-agent architecture
  • Build System: Automated HTML/PDF generation from Markdown for professional delivery
The Innovation: AI Agents as the Solution
Rather than proposing a traditional fixed dashboard, the response proposes an AI agent (Claude Code or GPT-5 Codex) that can create any visualization on demand. The existing data story and dashboard serve as examples of what the agent can build—not the final product itself. This shifts the paradigm from "building features" to "enabling capabilities."

Human Role: Made strategic decisions about tech stack (Azure over AWS, OAuth over basic auth), refined assumptions, ensured alignment with organizational constraints, and iterated through multiple revisions to perfect the proposal. Time spent: 3 hours across multiple iterations.

Read proposal creation process • View final proposal

9 Nov 12: Process Story & Landing Page

The Challenge: All the deliverables exist, but they need to be woven into a cohesive learning experience that teaches three distinct audiences how to leverage AI for RFP responses.

The AI Solution: Claude Code created a comprehensive tutorial system:

  • This Process Story: NYT-style narrative explaining the 10-day journey with prompts, iterations, and lessons
  • Landing Page: Beautiful hub linking to all deliverables with clear context
  • Shared Design System: Extracted common styles into style.css for visual consistency
  • GitHub Pages Deployment: Automated build and deployment pipeline
For Pre-sales Teams
How to use AI to respond to RFPs 5-10x faster
For Developers
How to build data viz with AI as a force multiplier
For Everyone
Understanding AI capabilities, limits, and collaboration patterns

Human Role: Designed the learning experience, structured the narrative for maximum impact, ensured all links and references worked correctly, and manually created the GitHub Action for deployment (AI couldn't create workflows without permission). Time spent: 4 hours including debugging.

Read process story creation notes • View landing page

10 Nov 14: Fill Proposal Gaps with AI Agent

The Challenge: The proposal framework existed, but it needed to be filled with actual company information, team credentials, project references, and legal documents. The pre-sales team shared 580+ files (~1GB) of company documents—CVs, project reports, certifications, and more. Manually extracting and organizing this information would take days.

The AI Solution: Used Codex CLI with gpt-5.1-codex (high reasoning mode) to:

  • Document Analysis: Processed Excel sheets (using in2csv), PDFs (using pdftotext), and Word docs (using pandoc)
  • Information Extraction: Gathered company information, authorized signatory details, project references, team member CVs, and legal documents
  • Gap Analysis: Identified missing information and created a consolidated list of remaining gaps
  • CV Standardization: Created professionally formatted 2-page PDFs with company branding for each team member
  • Best-Fit Matching: Selected optimal team members for required roles (GIS specialist, trainer, etc.) when exact matches weren't available
AI Reasoning Insights

The AI agent's reasoning process revealed interesting patterns:

  • "I'm sorting out how to efficiently review a long 627-line info-needed file..."
  • "Since ripgrep can't search zipped formats like .docx or PDFs, I'm considering unzipping or converting Excel sheets to CSV..."
  • "I'm noting that GIS-specific profiles seem missing, so I may have to create a GIS specialist role based on example projects..."
580+ Files Processed
~31M Tokens Used
Standardized CVs Created

Human Role: Provided the document corpus, reviewed extracted information for accuracy, made final decisions on team composition, sanitized confidential information before committing, and manually collated the final proposal information. Time spent: 3 hours across multiple iterations.

Read full extraction process • View remaining gaps

03 / PROCESS

The Human-AI Collaboration Pattern

This wasn't about AI replacing human work—it was about finding the right collaboration pattern. Here's what emerged as the most effective workflow:

Strategy & Planning
Human responsibilities:
  • Define objectives
  • Identify constraints
  • Prioritize requirements
  • Assess organizational context
AI responsibilities:
  • Generate comprehensive question lists
  • Identify potential gaps
  • Suggest analysis approaches
  • Pattern recognition in documents

Content Creation
Human responsibilities:
  • Define content strategy
  • Review for accuracy
  • Ensure brand alignment
  • Add domain expertise
AI responsibilities:
  • Draft initial content
  • Structure information
  • Format consistently
  • Generate variations

Technical Implementation
Human responsibilities:
  • Define requirements
  • Test functionality
  • Identify edge cases
  • Verify correctness
AI responsibilities:
  • Write code
  • Implement features
  • Debug (with guidance)
  • Optimize performance

Quality Assurance
Human responsibilities:
  • Define quality criteria
  • Test user experience
  • Verify domain accuracy
  • Final approval
AI responsibilities:
  • Self-review code
  • Check consistency
  • Suggest improvements
  • Run automated tests
The Key Pattern: Iterative Refinement

The most effective prompts weren't single shots—they were conversations:

  1. Start broad: "Create a data visualization dashboard"
  2. Review output: Human identifies gaps, errors, or improvements
  3. Refine specifically: "The radius scaling is wrong. Read the docs and take screenshots to verify"
  4. Iterate: Repeat until quality meets standards
04 / LESSONS LEARNED

What We Learned About AI Agents

Through these intensive 10 days of collaboration, clear patterns emerged about what AI agents excel at, where they struggle, and how humans can most effectively guide them.

For Pre-sales Teams: Speed Without Sacrificing Quality

What Worked:

  • Question Generation: AI generated 129 clarifying questions in minutes—far more comprehensive than a team would produce in hours
  • First Drafts: Formal communications, technical responses, and standard documentation were instant
  • Pattern Recognition: AI identified organizational maturity signals and procurement patterns from document structure
  • Research Synthesis: Quickly synthesized information from multiple sources (RFP, website, pre-bid notes)

What Didn't Work:

  • Strategic Insight: AI couldn't identify which questions would build relationships vs. reveal ignorance
  • Reading the Room: During pre-bid meetings, human judgment was essential for reading tone and adapting strategy
  • Win Theme Development: Creating compelling narratives still requires human understanding of client pain points

The Takeaway: Use AI to accelerate documentation and research, but reserve strategic decisions for human expertise. AI makes you 5x faster at execution, not 5x better at strategy.

For Developers: AI as a Force Multiplier

What Worked:

  • Boilerplate Elimination: No time wasted on navbar code, CSS reset, or Bootstrap setup
  • Library Expertise: AI knew Observable Plot, D3, and Chart.js patterns better than most developers
  • Rapid Prototyping: From concept to working prototype in minutes, not hours
  • Consistency: Applying design patterns across multiple charts automatically
  • Documentation: AI generated inline comments and README content as it coded

What Didn't Work:

  • Edge Cases: AI missed responsive design issues and cross-browser compatibility concerns
  • Library Nuances: Got Observable Plot's radius scaling wrong—needed explicit documentation reference
  • Performance: Didn't consider data volume implications or optimize rendering until asked
  • Accessibility: Keyboard navigation and screen reader support required explicit prompting

The Takeaway: AI excels at implementation but needs human oversight for edge cases and best practices. The workflow becomes: Human architects → AI implements → Human refines → AI iterates. This cycle is 10x faster than solo coding.

For Everyone: Understanding AI Capabilities

Where AI Excels:

  • Pattern Application: Once shown a pattern, AI applies it consistently across contexts
  • Comprehensive Generation: Exhaustive lists, variations, and alternatives
  • Synthesis: Combining information from multiple sources into coherent outputs
  • Transformation: Converting formats (PDF → Markdown, data → visualizations)
  • Iteration Speed: Making changes is instant—no "cost" to trying alternatives

Where AI Struggles:

  • Judgment: Can't assess "is this good enough?" or "is this appropriate?"
  • Context: Misses implicit requirements or organizational nuances
  • Verification: Claims certainty even when wrong (e.g., the radius scaling bug)
  • Creativity: Generates within patterns, rarely breaks conventions innovatively
  • Taste: Can't distinguish "technically correct" from "actually good"

Where Human Expertise Helps:

  • Direction: "What should we build?" is still a human question
  • Evaluation: "Is this good?" requires domain knowledge and experience
  • Debugging: When AI hits limits, humans diagnose the root cause
  • Quality Bar: Humans define what "done" looks like
  • Ethics & Impact: Assessing consequences requires human judgment

The Takeaway: AI doesn't replace expertise—it amplifies productivity for those who already know what they're doing. The 10x gain comes from eliminating grunt work, not eliminating knowledge requirements.

The "Verification Imperative"

The most critical lesson: AI outputs require verification. This isn't about "trust but verify"—it's about understanding that AI confidently produces outputs without self-assessment. The Observable Plot radius bug is a perfect example: AI generated code that looked right, ran without errors, and was confidently wrong.

Best Practice: Always ask AI to verify its own work by checking documentation, taking screenshots, or running tests. "Make it work" is not enough—you need "Make it work AND prove it works."

05 / RESULTS & DELIVERABLES

What Got Built

In 10 days of part-time work (~23 human hours), here's what emerged from the human-AI collaboration:

📄
RFP Analysis
  • 47-page PDF converted to structured Markdown
  • Pre-bid meeting transcription & FAQ
  • 129 clarifying questions across 11 categories
  • Comprehensive process documentation
🎨
Brand & Design
  • Complete brand style guide
  • Interactive RFP story website
  • Consistent design system across pages
  • Responsive, accessible layouts
📊
Data & Analysis
  • 4 realistic datasets (3,500+ total rows)
  • 25 proposed analyses ranked by impact
  • 16 implemented interactive visualizations
  • Modular, reusable chart architecture
🚀
Deliverables
  • NYT-style data story with scrollytelling
  • Interactive filterable dashboard
  • This process tutorial
  • Complete GitHub repository
The Real Achievement

This isn't just about speed—it's about possibility. A comprehensive RFP response with custom data visualizations was previously only feasible for well-funded teams or large agencies. Now, a knowledgeable individual with AI tools can produce comparable quality in a fraction of the time and cost.

This democratizes access to complex work. Small businesses can compete with larger firms. Individual consultants can take on ambitious projects. Pre-sales teams can explore more opportunities. The bottleneck shifts from "Do we have capacity?" to "Is this worth pursuing?"

The Future is Collaborative

This project demonstrates that the future of work isn't about AI replacing humans or humans refusing AI. It's about finding the right collaboration pattern—where AI handles the grunt work and humans provide judgment, strategy, and quality control.

The teams that thrive will be those that learn to orchestrate AI effectively—knowing what to delegate, what to verify, and when to override. This isn't a skill you can ignore anymore. It's becoming as fundamental as knowing how to search Google or use a spreadsheet.

Start experimenting. Start learning. The tools are here. The opportunity is now.