How I Use AI to Process 100+ Sources a Week

The bottleneck in research has never been access to information. It has been the cost of processing it.

In crypto, the firehose runs constantly: protocol documentation, governance proposals, on-chain analytics dashboards, podcasts, newsletters, research reports, and an endless scroll of commentary.

A single person cannot meaningfully engage with all of it. Attempting to do so guarantees shallow coverage.

The shift I have made is to use AI not as an oracle but as an intake layer: one that compresses volume into structured summaries I can actually reason about.

The three-stage workflow

[Figure: research workflow diagram showing the collection, triage, and synthesis stages]

Stage 1: Collection

RSS feeds, email subscriptions, and a handful of custom scrapers pull raw material into a single queue every morning.

Tools I use for collection: Feedly for RSS aggregation, custom Python scripts for on-chain data snapshots, Readwise for article highlights, and a Telegram bot that monitors key channels.
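To make the intake concrete, here is a minimal sketch of the RSS leg in the spirit of those custom Python scripts. The feed URLs and queue path are placeholders, and this only covers one source type; the email, Readwise, and Telegram inputs land in the same queue by their own routes.

```python
# Minimal collection sketch: pull RSS entries into one JSONL intake queue.
# Feed URLs and the queue path are placeholders, not the real configuration.
import json
import feedparser  # pip install feedparser

FEEDS = [
    "https://example.com/protocol-blog/rss",     # hypothetical feeds
    "https://example.com/research-digest/rss",
]
QUEUE_PATH = "intake_queue.jsonl"

def collect(feeds: list[str], queue_path: str) -> int:
    """Append every feed entry to the intake queue as one JSON line."""
    count = 0
    with open(queue_path, "a", encoding="utf-8") as queue:
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                item = {
                    "source": url,
                    "title": entry.get("title", ""),
                    "link": entry.get("link", ""),
                    "raw_text": entry.get("summary", ""),
                }
                queue.write(json.dumps(item) + "\n")
                count += 1
    return count

if __name__ == "__main__":
    print(f"Collected {collect(FEEDS, QUEUE_PATH)} items")
```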

Stage 2: Triage

A language model reads each item and produces a structured summary with a relevance score, key claims, and any data points worth tracking.

This is not analysis. It is compression. The goal is to reduce a 3,000-word report to five sentences that capture what is new, what is claimed, and what evidence is offered.
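A sketch of what one triage call might look like, assuming the OpenAI Python client; the model name is a placeholder, and any model that returns JSON reliably would do. The schema mirrors the structure described above: a short summary, a relevance score, key claims, and data points worth tracking.

```python
# Triage sketch: compress one queued item into a structured summary.
# Assumes the OpenAI Python client; the model name is illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = """You are a research triage assistant for crypto analysis.
Compress the item below into JSON with exactly these keys:
  "summary": five sentences covering what is new, what is claimed,
             and what evidence is offered,
  "relevance": integer 1-10,
  "key_claims": list of short strings,
  "data_points": list of figures or metrics worth tracking.
Return only the JSON object."""

def triage(raw_text: str, model: str = "gpt-4o-mini") -> dict:
    """Run one item through the triage layer and parse the result."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": raw_text},
        ],
        response_format={"type": "json_object"},  # force valid JSON
    )
    return json.loads(response.choices[0].message.content)
```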

Stage 3: Synthesis

I review the compressed queue, flag anything that connects to an existing thesis, and do the actual thinking. The AI handles breadth so that I can focus on depth.
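To show the hand-off between stages, here is one way the compressed queue could be surfaced for review. The thesis keywords are purely illustrative; deciding whether an item actually supports or undermines a thesis is the manual judgment step, not something the script decides.

```python
# Review sketch: surface triaged items above a relevance floor, tagged
# with any thesis whose keywords appear. Theses here are illustrative.
import json

THESES = {
    "restaking_risk": ["restaking", "slashing", "AVS"],
    "l2_fee_markets": ["blob", "data availability", "sequencer"],
}

def review_queue(queue_path: str, min_relevance: int = 6):
    """Yield triaged items worth a human look, with candidate thesis tags."""
    with open(queue_path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)
            if item.get("relevance", 0) < min_relevance:
                continue
            text = (item["summary"] + " " + " ".join(item["key_claims"])).lower()
            item["thesis_flags"] = [
                name for name, words in THESES.items()
                if any(w.lower() in text for w in words)
            ]
            yield item
```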

What makes this actually work

Two things make this work, and both are easy to overlook.

First: domain-specific prompt engineering. Generic summarisation misses what matters in crypto research because it does not know what counts as a meaningful claim versus background context.

I have spent significant time tuning the instructions so that the triage layer surfaces the right things:

Changes in protocol parameters. Shifts in capital flows. New technical architectures. Contradictions between what teams say and what on-chain data shows.
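For illustration, here is roughly how that checklist might be encoded as triage instructions. The wording is a sketch rather than the actual prompt, which has been tuned over many iterations, but it shows the shape: the domain checklist gets prepended to the generic triage prompt.

```python
# Domain-specific instructions prepended to a generic triage prompt.
# Illustrative wording, not the production prompt.
DOMAIN_INSTRUCTIONS = """When scoring relevance, prioritise items that contain:
- changes in protocol parameters (fees, emissions, collateral factors),
- shifts in capital flows (TVL, bridge volumes, exchange balances),
- new technical architectures or design changes,
- contradictions between a team's public claims and on-chain data.
Treat restated background, price commentary, and promotion as low relevance."""

def with_domain_instructions(base_prompt: str) -> str:
    """Prepend the domain checklist to a generic triage prompt."""
    return DOMAIN_INSTRUCTIONS + "\n\n" + base_prompt
```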

Second: knowing where the boundary is. AI is good at compression and pattern-matching across large volumes. It is not good at evaluating whether a thesis is sound. That judgment still has to come from you, and no amount of automation changes that.

The net effect

I cover roughly five times the source material I could manage manually, while spending less time on intake and more on the work that actually generates insight.

The trap to avoid is treating the AI output as finished analysis. It is raw material with better formatting.

The value is not in what the model concludes but in what it surfaces for your attention. Used correctly, it is the best research assistant available. Used lazily, it is an expensive way to feel informed while understanding nothing.
