
Research System Design

How to build a research agent that actually finds signal in noise. Source prioritisation, query design, bias detection, and the synthesis patterns ZELDA uses to produce intelligence reports overnight.

Updated March 23, 2026 ⏱ 30 min read 🔴 Advanced Requires: Content Machine
▶ ZELDA · Research cycle output · 23:47

Signal — actionable findings
- Market gap: No competitor offers done-for-you at under $50/month. Closest is $297.
- Conversation: "AI agent" searches up 340% YoY. SMB segment underserved in content.
- Competitor move: Zapier raised pricing 40% last month. Negative sentiment spike on Reddit.

Noise — filtered out
- Low credibility: 14 SEO-optimised listicles with no primary data. Discarded.
- Outdated: 3 articles from 2024 with pre-agent-era framing. Not relevant.
- Vendor bias: 2 "independent" comparisons written by competitor marketing teams. Flagged.

Signal vs noise

The biggest problem with AI research agents isn't that they can't find information — it's that they find too much and can't tell the difference between what matters and what doesn't. A research agent that returns 40 sources of varying quality is less useful than one that returns 5 high-signal findings with clear sourcing.

ZELDA's research system is built around a single principle: filter aggressively before synthesising. She identifies and discards noise first, then works with what remains.

The SSBAA market intelligence ZELDA produced on Day 1: AI agent market at $7.6B (2025) → $183B (2033), 49.6% CAGR. Average business response time: 47 hours. AI agent response time: 60 seconds. Top creator MRR on comparable platforms: $73K/month. Zapier ARR: $400M. n8n: $40M ARR, $2.3B valuation. All sourced, all verifiable.

Source prioritisation

Not all sources are equal. ZELDA uses a three-tier source hierarchy:

1. Primary sources — always preferred

Company earnings reports, SEC filings, government data, peer-reviewed research, official press releases, direct survey data. These are facts with provenance. ZELDA cites these by default when available.

2. Secondary sources — use with attribution

Reputable journalism (WSJ, FT, Reuters, TechCrunch), established analyst reports (Gartner, IDC), well-sourced blog posts from known experts. Used when primary data isn't available. Always attributed.

3. Tertiary sources — verify before using

Forum posts, social media, SEO content, aggregator sites. These can surface trends and conversations but contain high rates of misinformation. ZELDA flags these as "unverified" and doesn't include them in reports without corroboration.
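The tier hierarchy above can be sketched as a simple domain lookup. This is an illustrative sketch only: the domain lists and the `source_tier` function are assumptions for demonstration, not ZELDA's actual configuration.

```python
# Hypothetical tier lookup. Domain lists are illustrative placeholders.
TIER_1 = {"sec.gov", "data.gov", "nature.com"}   # filings, government data, peer review
TIER_2 = {"wsj.com", "ft.com", "reuters.com", "techcrunch.com", "gartner.com"}
# Anything unrecognised (forums, social, SEO content) defaults to tier 3.

def source_tier(domain: str) -> int:
    """Return 1 (primary), 2 (secondary), or 3 (tertiary/unverified)."""
    domain = domain.lower().removeprefix("www.")
    if domain in TIER_1:
        return 1
    if domain in TIER_2:
        return 2
    return 3
```

Defaulting unknown domains to tier 3 matches the filter-aggressively principle: an unrecognised source starts as "unverified" until corroborated.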

Query design

How you ask ZELDA to research something determines the quality of what you get back. Vague queries produce vague outputs. Here's the query structure we use:

# Bad query — too vague
Research AI agents for small business
# Good query — specific, scoped, output-defined
Research task for ZELDA:
Topic: Pricing landscape for AI agent services targeting SMBs
Scope: Competitors under $100/month, US and AU markets
Find: Exact pricing tiers, what's included, any recent changes
Ignore: Enterprise pricing, self-hosted tools, chatbot-only products
Output: Comparison table + 3 key findings + one gap we can exploit
Max sources: 8 · Min source tier: 2
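If you issue these briefs programmatically, a small helper keeps every query in the same structure. This is a hypothetical convenience function mirroring the template above, not part of any shipped tooling.

```python
# Hypothetical builder for the research-brief structure shown above.
def build_research_query(topic, scope, find, ignore, output,
                         max_sources=8, min_tier=2):
    lines = [
        "Research task for ZELDA:",
        f"Topic: {topic}",
        f"Scope: {scope}",
        f"Find: {find}",
        f"Ignore: {ignore}",
        f"Output: {output}",
        f"Max sources: {max_sources} · Min source tier: {min_tier}",
    ]
    return "\n".join(lines)
```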

Information extraction

Once ZELDA has identified relevant sources, the extraction phase pulls structured data (figures, dates, named entities, quotes with attribution) from unstructured content.
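One extraction pattern can be sketched with a simple regex pass for dollar figures and percentages, so each claim can be checked against primary sources. The patterns here are illustrative, not ZELDA's actual extractors.

```python
import re

# Illustrative extraction: pull dollar amounts and percentage claims from raw text.
MONEY = re.compile(r"\$[\d,.]+\s*(?:billion|million|B|M|K)?\b")
PERCENT = re.compile(r"\d+(?:\.\d+)?%")

def extract_figures(text: str) -> dict:
    """Return the dollar figures and percentages found in a passage."""
    return {
        "amounts": MONEY.findall(text),
        "percentages": PERCENT.findall(text),
    }
```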

Bias detection

Research agents will reproduce whatever bias exists in their sources unless you explicitly instruct them to check for it. The three bias patterns ZELDA is instructed to flag:

⚠️ Vendor bias: "Independent" comparisons written by companies with a stake in the outcome. ZELDA checks author attribution and flags any piece where the author works for or has investment in one of the companies being compared.

⚠️ Recency bias: Over-indexing on the most recent information at the expense of context. ZELDA includes publication dates on all sources and flags when a claim is based solely on articles under 30 days old.

⚠️ Confirmation bias: Only surfacing sources that support a pre-existing view. ZELDA's research prompt explicitly asks her to include at least one source that contradicts the expected finding.
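The recency-bias rule above is mechanical enough to sketch directly: flag a claim when every supporting source falls inside the 30-day window. The data structures here are assumptions for illustration, not ZELDA's internal format.

```python
from datetime import date, timedelta

def recency_bias(source_dates: list[date], today: date,
                 window_days: int = 30) -> bool:
    """True when ALL sources backing a claim are newer than the window,
    i.e. the claim rests solely on very recent articles."""
    cutoff = today - timedelta(days=window_days)
    return all(d > cutoff for d in source_dates)
```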

Synthesis patterns

Raw research isn't useful. Synthesis is what turns 12 sources into a finding you can act on. ZELDA uses three synthesis patterns:

Convergence synthesis

Multiple independent sources pointing at the same conclusion. The more independent sources agree, the higher confidence the finding. ZELDA reports confidence as High / Medium / Low based on source count and tier.
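A High / Medium / Low label from source count and tier can be sketched as below. The thresholds are assumptions chosen for illustration, not ZELDA's actual rubric.

```python
# Illustrative confidence rubric: more independent tier-1/2 sources -> higher label.
def confidence(source_tiers: list[int]) -> str:
    """source_tiers: one tier (1-3) per independent source backing a finding."""
    strong = sum(1 for t in source_tiers if t <= 2)   # count tier-1/2 sources
    if strong >= 3 or (strong >= 2 and 1 in source_tiers):
        return "High"
    if strong >= 1:
        return "Medium"
    return "Low"
```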

Gap synthesis

What does the market want that nobody is providing? ZELDA compares complaint patterns in competitor reviews against available products. Repeated complaints with no existing solution = opportunity.

Trend synthesis

Directional change over time. Not just "the market is $7.6B" but "the market was $2.1B two years ago and is projected at $183B in seven years." The trajectory matters more than the snapshot.
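Trend synthesis in numbers is just the compound annual growth rate implied by two snapshots. Applied to the figures above ($7.6B in 2025, $183B projected for 2033) the standard formula lands near the cited ~49.6% CAGR; small differences come down to rounding in the underlying report.

```python
# Standard CAGR formula: (end / start) ** (1 / years) - 1
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(7.6, 183, 2033 - 2025)   # roughly 0.49, i.e. ~49% per year
```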

Research reports

ZELDA outputs research as structured markdown reports, not free-form prose. Every report follows this structure:

# Research Report Template
## Executive Summary (3 bullet points max)
- The single most important finding
- The most significant opportunity
- The most significant threat
## Key Findings (numbered, sourced)
1. Finding — [Source: Name, Date, Tier]
## Competitor Landscape
Comparison table: Name | Price | Key feature | Gap
## Flagged Items (bias, low confidence, needs verification)
## Recommended Actions (max 3)
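Structured findings can be rendered into the opening sections of that template with a small helper. This renderer and its field names are hypothetical, sketched purely to show how the sourcing line is assembled.

```python
# Hypothetical renderer for the report template above; field names are assumptions.
def render_report(summary: list[str], findings: list[dict]) -> str:
    lines = ["# Research Report", "## Executive Summary (3 bullet points max)"]
    lines += [f"- {point}" for point in summary[:3]]   # cap at 3 bullets
    lines.append("## Key Findings (numbered, sourced)")
    for i, f in enumerate(findings, 1):
        lines.append(
            f"{i}. {f['finding']} — [Source: {f['source']}, {f['date']}, Tier {f['tier']}]"
        )
    return "\n".join(lines)
```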

Market intelligence we use at SSBAA

Here's the competitive landscape data ZELDA surfaced on Day 1 that shaped our positioning:

Metric | Data point | Source tier
AI agent market size (2025) | $7.6 billion | Tier 1
AI agent market size (2033 projection) | $183 billion | Tier 1
CAGR | 49.6% | Tier 1
Avg business response time to leads | 47 hours | Tier 2
AI agent response time | 60 seconds | Tier 2
Buyers going with first responder | 78% | Tier 2
Zapier ARR (2025 est.) | $400M | Tier 2
n8n ARR | $40M | Tier 1
Cheapest done-for-you competitor | $297/month | Tier 2
SSBAA positioning gap | Done-for-you at $29/month | Internal

Scheduling research

ZELDA runs a research cycle every night at 22:30. The nightly scope is focused — competitor moves, relevant news, anything that might affect the content brief for the next day. The weekly deep-dive runs Sunday nights and covers the full market landscape.

# Nightly research cron
openclaw cron add "Nightly research" "30 22 * * *" \
"Scan for: competitor pricing changes, AI agent news,
SSBAA mentions, relevant Reddit/Twitter conversations.
Output: 3-bullet summary to MEMORY.md + flag anything urgent to YOSHI."
# Weekly deep-dive cron
openclaw cron add "Weekly market intel" "0 21 * * SUN" \
"Full market research cycle. Output full report to
/workspace/delivery-queue/market-intel-YYYY-MM-DD.md"