Research Agent: Case Study
We built our own market intelligence agent before we sold it to anyone else.
6 RSS feeds. 15 subreddits. GitHub trending. A structured briefing lands every morning at 8AM. Here is the architecture and what it actually delivers.
Note: The sources and tools shown below are what we use for our own intelligence system. Your build monitors whatever matters to your business (competitors, regulators, industry publications, customer channels) using the feeds and APIs that fit your stack.
The Architecture
Three layers. Runs without you.
Daily briefings, intraday Reddit scans, and a weekly deep dive. All automated, all delivered to wherever your team reads things.
The agent never surfaces the same item twice. Deduplication runs before synthesis so you only see what is actually new.
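As a sketch of the idea, keyed on each item's canonical URL against a persistent seen-set; the store path and key choice here are illustrative, not our exact implementation:

```python
import hashlib
import json
from pathlib import Path

SEEN_PATH = Path("seen_items.json")  # hypothetical location for the seen-set

def item_key(item: dict) -> str:
    # Key on the canonical URL so the same story from two feeds dedupes.
    return hashlib.sha256(item["url"].encode()).hexdigest()

def dedupe(items: list[dict]) -> list[dict]:
    seen = set(json.loads(SEEN_PATH.read_text())) if SEEN_PATH.exists() else set()
    fresh = [it for it in items if item_key(it) not in seen]
    seen.update(item_key(it) for it in fresh)
    SEEN_PATH.write_text(json.dumps(sorted(seen)))
    return fresh  # only never-before-seen items go on to synthesis
```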
The Numbers
What it monitors and how often.
6 RSS feeds, read daily. 15 subreddits, scanned three times a day. GitHub trending, checked daily and weekly. A 48-hour freshness cutoff. 7 scoring signals. 1 daily briefing, 1 weekly deep dive.
How It Works
Three jobs. Three cadences.
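In cron terms, the cadences look roughly like this; the times match what is described below, but the runner and syntax are whatever your stack uses:

```python
# Illustrative schedule config; any cron runner or task queue would do.
SCHEDULES = {
    "daily_briefing":   "0 8 * * *",        # every morning at 8AM
    "reddit_scan":      "0 8,12,16 * * *",  # 8AM, 12PM, and 4PM
    "weekly_deep_dive": "0 8 * * FRI",      # Friday morning
}
```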
Daily AI Landscape Briefing
Tools: blogwatcher CLI, Brave Search, GitHub Trending.
The agent starts by reading unread articles from six RSS feeds using a local feed reader. These are publications chosen specifically for signal quality, not volume. It does not read everything published; it reads only what has appeared since yesterday's run.
It then runs a targeted web search for developments from the past 24 hours that the feeds might have missed: typically Anthropic and OpenAI announcements, model releases, and infrastructure news. Finally, it checks GitHub trending for repos gaining traction in the agent and LLM spaces.
The output is a 2-minute brief: 5 to 6 items, each with a one-line description, a relevance note, a red, yellow, or green urgency flag, and a link. No filler. Feeds are marked as read after each run so nothing surfaces twice.
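The item shape is simple enough to sketch. Field names here are our guess at a sensible schema, not the exact structure the agent emits:

```python
from dataclasses import dataclass

@dataclass
class BriefItem:
    headline: str
    relevance: str  # one line: why this matters to us
    flag: str       # "red" (act today), "yellow" (watch), "green" (file away)
    link: str

def render(items: list[BriefItem]) -> str:
    # Five or six of these lines is the whole morning brief.
    return "\n".join(
        f"[{it.flag.upper()}] {it.headline}: {it.relevance} ({it.link})"
        for it in items
    )
```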
Reddit Signal Monitoring
Tools: Brave Search (Reddit), freshness filter, scoring engine.
At 8AM, 12PM, and 4PM, a discovery agent searches 15 subreddits for posts that match a buying-signal profile. The subreddits span a wide range of industries and business functions: small business, legal tech, manufacturing, logistics, healthcare, accounting, real estate, recruiting, and more. The list grows as new verticals show up in the work.
Every result is checked against a freshness cutoff before it goes anywhere. Posts older than 48 hours are dropped. A reply on an old post gets no visibility; there is no point engaging with a thread that is already dead.
Surviving posts get scored on seven signals: buying intent, automation pain, prior DIY attempts, budget or size mentions, business context, hiring intent, and target industry. Posts scoring 5 or above trigger a Signal alert with a suggested reply. Posts scoring 3 to 4 get logged to an unenriched-signals file for later review. Everything below 3 is discarded.
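A minimal sketch of the filter-and-score pass, assuming simple keyword matching stands in for the real classifiers; the thresholds and the 48-hour cutoff are as described above, but the keyword lists are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Signal names mirror the seven above; keywords are illustrative stand-ins.
SIGNALS = {
    "buying_intent":    ["looking for", "recommend a tool", "willing to pay"],
    "automation_pain":  ["manually", "takes hours", "copy and paste"],
    "prior_diy":        ["i built", "zapier", "spreadsheet broke"],
    "budget_or_size":   ["budget", "employees", "revenue"],
    "business_context": ["my company", "our team", "clients"],
    "hiring_intent":    ["hire", "contractor", "freelancer"],
    "target_industry":  ["legal", "logistics", "clinic", "accounting"],
}

def is_fresh(posted_at: datetime, cutoff_hours: int = 48) -> bool:
    return datetime.now(timezone.utc) - posted_at <= timedelta(hours=cutoff_hours)

def score(text: str) -> int:
    t = text.lower()
    # One point per signal present, 0-7 total.
    return sum(any(kw in t for kw in kws) for kws in SIGNALS.values())

def route(post: dict) -> str:
    if not is_fresh(post["posted_at"]):
        return "drop"      # dead thread, no visibility
    s = score(post["text"])
    if s >= 5:
        return "alert"     # Signal alert with a suggested reply
    if s >= 3:
        return "log"       # unenriched-signals file for later review
    return "drop"
```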
Weekly Deep Dive
Tools: Sonnet (larger model), GitHub Trending, broad web search.
Friday's run is different. The daily briefing is optimized for speed and brevity: it uses a faster, cheaper model and caps its scope to 24 hours. The weekly deep dive uses a more capable model and looks back across the full week.
It covers the top five developments across the week, what each one means in practical terms, and whether it warrants immediate action. It includes the top three to five trending GitHub repositories in the agent and LLM spaces, with star counts and a note on whether they are worth integrating. It closes with a patterns section: what themes are emerging across the week's data, and what the likely trajectory is over the next 30 to 60 days.
The format is deliberately longer than the daily brief. This is the week-in-review, not a quick scan. It is sized for a 10-minute read on a Friday morning before the weekend.
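The split between the two runs comes down to a per-job profile. Model IDs here are placeholders; the only thing stated above is that the weekly run uses Sonnet and the daily run uses something faster and cheaper:

```python
# Illustrative job profiles; model IDs are assumptions, not our exact config.
JOB_PROFILES = {
    "daily_briefing": {
        "model": "fast-cheap-model",  # optimized for speed and brevity
        "lookback_hours": 24,
        "target_read_minutes": 2,
    },
    "weekly_deep_dive": {
        "model": "sonnet",            # larger, more capable
        "lookback_hours": 24 * 7,
        "target_read_minutes": 10,
    },
}
```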
What It Delivers
A briefing, not a firehose.
The goal is not to surface everything. It is to surface the five things that actually matter today, with a clear note on whether each one requires action now, bears watching, or can be filed away.
We read this every morning. It takes about two minutes. When something is red-flagged, we act on it the same day. When something is green, we file it and move on. There is no inbox to manage, no RSS reader to open, no dashboard to check.
The same design principle applies regardless of the domain: competitor monitoring, regulatory tracking, talent market intelligence. The output should be actionable in under five minutes.
From one morning's brief:
Direct upgrade path for our agent stack. Review model pricing before Monday client calls.
Worth watching. May simplify our CRM layer for future builds. Not urgent.
Agent browser automation. Useful for form-filling workflows. File for later.
Useful reference for outreach. Add to pitch context.
Four items. Two minutes. No inbox required.
The Point
The sources change. The architecture stays the same.
We monitor the AI landscape because that is the domain that matters to us. The same pipeline works for any intelligence problem: ingest, filter, synthesize, deliver, deduplicate.
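A sketch of that skeleton, with the stages as swappable functions; everything domain-specific lives in the arguments:

```python
from typing import Callable, Iterable

def run_pipeline(
    ingest: Callable[[], Iterable[dict]],     # feeds, APIs, scrapers
    is_new: Callable[[dict], bool],           # dedupe: drop anything seen before
    keep: Callable[[dict], bool],             # domain-specific filter or scorer
    synthesize: Callable[[list[dict]], str],  # model turns items into a brief
    deliver: Callable[[str], None],           # email, Slack, wherever you read
) -> None:
    items = [it for it in ingest() if is_new(it) and keep(it)]
    if items:
        deliver(synthesize(items))
```

Competitor monitoring swaps the ingest and keep functions. The rest stays.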
Competitor Monitoring
Track pricing page changes, job postings, press releases, and review site feedback for specific competitors. Deliver a weekly brief on what changed.
Regulatory Tracking
Monitor agency publications, Federal Register filings, and trade association updates for rule changes relevant to your industry. Flag anything that requires a response.
Talent Market Intelligence
Track hiring trends in your space: what roles competitors are adding, what skills they are looking for, where talent is moving. Useful for workforce planning and comp benchmarking.
Fixed price. Two to four weeks. You own the code.