Cross-platform intent detection: how repco scans Reddit and LinkedIn in real time
Kamil on Outreach Science

Most outreach tools watch one channel and call it done. repco watches Reddit every 15 minutes and LinkedIn every 2–4 hours, classifies every match on a 1–10 scale, and only surfaces signals worth replying to. Here's exactly how the detection stack works.
Single-channel monitoring is half the picture. A founder who rants about a competitor on LinkedIn often follows up by asking for alternatives in r/SaaS the same week. A consultant searching for a fractional CFO pings their LinkedIn network and posts in r/startups within days. If you're only listening to one feed, you miss half the signal — and the half you miss is usually the higher-intent one.
repco was built around the assumption that buying conversations don't stay on one platform. The detection stack watches Reddit and LinkedIn together, classifies intent through a two-tier system that keeps cost predictable, and surfaces only the signals worth your reply. This post walks through exactly how it works, with the cadence, the scoring framework, and the deduplication logic that keeps the inbox clean.
Key takeaways
Single-channel monitoring misses 40–60% of buying intent because B2B operators research across Reddit + LinkedIn, not on one platform alone.
repco runs a two-tier detection stack: keyword/regex catches 80–90% of signals at zero AI cost, Claude Sonnet 4.6 classifies the ambiguous 10–20%.
Reddit gets monitored every 15 minutes, LinkedIn every 2–4 hours — the cadence is platform-aware on purpose, because LinkedIn's anti-automation detection trips far sooner than Reddit's.
Only signals scored 6+ on a 1–10 intent scale reach your inbox — anything lower is filtered out as noise.
Posts older than 48 hours are dropped — by then the conversation has moved on and replies don't get read.
Why does single-channel monitoring miss most buying intent?
Buying conversations don't live on one platform. A typical B2B research cycle in 2026 spans Reddit (where buyers ask the community what to use), LinkedIn (where they ask their network and rant about current tools), Twitter/X (where they post hot takes), and niche communities (Slack, Discord, Indie Hackers). Watching only Reddit or only LinkedIn captures 40–60% of the public intent on most queries.
The operators we talked to before building repco described the same broken workflow: they'd see a competitor's name pop up in r/SaaS, think "I should reply to that," and miss the same person ranting about the same competitor on LinkedIn three days later because they didn't have a tool watching both. Whoever caught the LinkedIn rant first won the deal.
repco was built to fix this from day one. The detection stack treats Reddit and LinkedIn as one conversation surface, not two products glued together.
The two-tier detection stack
repco runs every monitoring query through two layers, optimized for different signal types and very different cost profiles. The split keeps Claude inference cost predictable while catching the long-tail ambiguous signals that pure keyword matching misses.
Layer 1 — structural matching
A keyword and regex matcher hits 80–90% of signals at zero AI cost. Phrases like "alternative to Apollo," "looking for a tool that," or "renewal hit — anyone got recommendations" are caught with a fast pattern match. Competitor brand mentions, intent verbs (need, looking, alternative, switching), and frustration language (joke, broken, awful, expensive) are all classified deterministically.
This layer is fast and cheap because pattern matching costs nothing per match. The tradeoff: it catches obvious signals but misses paraphrased intent ("my CRM stack is killing me" doesn't trigger a regex but is a strong buy signal).
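As a sketch of what this structural layer could look like in code: the pattern lists below are illustrative stand-ins, not repco's actual watchlist, which is configurable per workspace.

```python
import re

# Illustrative Layer-1 patterns. Real watchlists are configured per workspace.
INTENT_PATTERNS = [
    r"\balternative to \w+",
    r"\blooking for a tool\b",
    r"\banyone (got|have|know) recommendations\b",
]
FRUSTRATION_WORDS = {"joke", "broken", "awful", "expensive"}


def layer1_match(text: str) -> bool:
    """Return True if the post trips a structural (zero-AI-cost) signal."""
    lowered = text.lower()
    if any(re.search(pattern, lowered) for pattern in INTENT_PATTERNS):
        return True
    # Frustration language is matched as whole words, not substrings.
    return any(word in lowered.split() for word in FRUSTRATION_WORDS)


layer1_match("Looking for a tool that scrapes Reddit")  # matches: routed straight through
layer1_match("my CRM stack is killing me")              # no match: falls to Layer 2
```

Paraphrased intent like the second example is exactly what slips past this layer and gets routed to the model.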
Layer 2 — Claude Sonnet 4.6 classification
The ambiguous 10–20% — paraphrased intent, sarcasm, indirect frustration, complex product comparisons — gets routed to Claude Sonnet 4.6 for classification. Each ambiguous match returns four fields:
| Field | Possible values | Used for |
|---|---|---|
| Intent type | Category label | Categorizes the shape of intent for routing |
| `intent_strength` | 1–10 | Decides whether the signal reaches your inbox at all |
| Reasoning | One-sentence explanation | Auditable trace, shown next to the post in the dashboard |
| DM angle | Short phrase | Seeds the DM draft — e.g. "lead with Apollo limitation" |
Only signals scored 6 or higher reach your inbox. A 9 or 10 means "DM this within the hour." A 7 means "good fit but reply when you have time." A 5 means "adjacent intent, only reply if you can add real value."
This split keeps cost predictable. A typical user burns roughly 50–100 Claude classifications per day, not 1,440 (the count if every post in the feed went through inference).
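The four-field result and the inbox gate can be sketched as follows. Only `intent_strength` is a field name the post confirms; the other names are hypothetical stand-ins for illustration.

```python
from dataclasses import dataclass


@dataclass
class Classification:
    """One Layer-2 result. intent_strength is the only confirmed field name;
    the rest are illustrative stand-ins."""
    intent_type: str       # shape of the intent, used for routing
    intent_strength: int   # 1-10 score; gates the inbox
    reasoning: str         # one-sentence auditable trace
    dm_angle: str          # short phrase that seeds the DM draft


INBOX_THRESHOLD = 6  # signals scoring below 6 never reach the inbox


def reaches_inbox(result: Classification) -> bool:
    return result.intent_strength >= INBOX_THRESHOLD


hot = Classification("alternative-seeking", 9,
                     "Direct ask with explicit Apollo pain.",
                     "lead with Apollo limitation")
reaches_inbox(hot)  # a 9 means "DM this within the hour"
```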
Cadence per platform — and why it matters
The cadence is platform-aware on purpose. The two platforms have very different thread half-lives and very different detection sensitivity to scraping behavior.
| Platform | Cadence | Why this cadence |
|---|---|---|
| Reddit | Every 15 minutes | Threads die fast — Reddit's own data shows comment engagement collapses after the first 6 hours. Replies that land after 30 comments don't get read. |
| LinkedIn | Every 2–4 hours | Posts have a longer half-life (24–48 hours), and more frequent scraping triggers detection. LinkedIn's transparency reports show automated enforcement up 31% YoY. |
Slamming LinkedIn every 15 minutes burns sessions and accounts — the platform's anti-automation classifier flags accounts whose scrape pattern doesn't match human browsing. Spacing Reddit at 2 hours means you lose threads before you can reply.
The right answer is asymmetric: Reddit fast, LinkedIn slow. Most single-platform tools get this wrong by applying one cadence to both — and pay for it in either missed signals or banned accounts.
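A minimal scheduler for this asymmetric cadence might look like the following. The jitter strategy is an assumption; only the 15-minute and 2–4-hour windows come from the cadence table.

```python
import random

# Polling windows in seconds, per the cadence table above.
CADENCE = {
    "reddit": (15 * 60, 15 * 60),      # fixed 15-minute sweep: threads die fast
    "linkedin": (2 * 3600, 4 * 3600),  # randomized 2-4 h: mimics human browsing
}


def next_poll_delay(platform: str) -> int:
    """Seconds to wait before the next scan of the given platform."""
    low, high = CADENCE[platform]
    # Fixed interval for Reddit; random jitter inside the window for LinkedIn,
    # so the scrape pattern doesn't look machine-regular.
    return low if low == high else random.randint(low, high)
```

Applying one cadence to both platforms is exactly the failure mode described above: either Reddit threads go stale or LinkedIn sessions get flagged.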
If you're scaling LinkedIn outreach yourself, the rules for staying inside LinkedIn's detection envelope cover the warmup, behavioral noise, and volume caps that keep accounts alive.
How does repco score intent on a 1–10 scale?
Intent strength scoring mirrors what an experienced operator does in their head when reading a thread — except it runs on every match, consistently, without the fatigue. The framework:
| Score | What the post looks like | Action |
|---|---|---|
| 9–10 | Direct ask, names category or competitor, recent (under 6h) | DM within the hour |
| 7–8 | Clear problem in your category, no specific tool named | Same-day reply |
| 5–6 | Adjacent problem, your product could fit but isn't an obvious answer | Reply only if you can add real value |
| 1–4 | Vague, off-topic, or rant with no ask | Skip |
Claude returns the score with a one-sentence reasoning trace, so you can audit why something got an 8 instead of a 6. This matters because intent classification is the place where AI judgment most often diverges from human judgment — and you need to be able to spot-check it.
In practice, ~30% of matches land at 6+, ~50% at 4–5, ~20% below 4. The bottom 70% never reaches your inbox. The output you actually see is roughly 15–30 high-quality signals per day for a properly configured watchlist.
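Collapsed into code, the scoring framework above becomes a small routing function. This is a sketch of the table, not repco's internals.

```python
def action_for(score: int) -> str:
    """Map a 1-10 intent score to the operator action in the framework above."""
    if score >= 9:
        return "DM within the hour"
    if score >= 7:
        return "same-day reply"
    if score >= 5:
        return "reply only if you can add real value"
    return "skip"


action_for(9)  # "DM within the hour"
action_for(3)  # "skip"
```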
For anyone who wants to run this scoring framework manually before automating, the Reddit playbook walks through exactly how to score signals 1–10 in your head and act on them.
Deduplication and freshness
Every signal is keyed to its post_url. If the same post is matched twice (different keywords trigger on the same thread), it surfaces once. If a buyer posts the same question across r/SaaS and r/Entrepreneur within 48 hours, both signals appear because the threads are different even if the content is similar — different audiences, different timing, both worth a reply.
Freshness is hard-capped at 48 hours. Posts older than that are dropped because:
Reddit thread engagement drops 90%+ after 48 hours
LinkedIn post reach decays similarly past day 2
Buyers who asked 3 days ago have usually already picked a tool
A late reply on a stale thread is worse than no reply — it signals a brand that's slow and not really listening. Better to skip and wait for the next signal.
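A minimal version of this dedup-plus-freshness filter, assuming each signal carries a `post_url` key and a timezone-aware `posted_at` timestamp:

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_CAP = timedelta(hours=48)  # posts older than 48 h are dropped


def fresh_unique_signals(signals, now=None):
    """Drop stale signals and collapse duplicates keyed on post_url."""
    now = now or datetime.now(timezone.utc)
    seen_urls = set()
    kept = []
    for sig in signals:
        if now - sig["posted_at"] > FRESHNESS_CAP:
            continue  # stale thread: a late reply is worse than no reply
        if sig["post_url"] in seen_urls:
            continue  # same thread matched by a second keyword: surface once
        seen_urls.add(sig["post_url"])
        kept.append(sig)
    return kept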
What you actually see in the dashboard
In the dashboard, signals appear as cards with the post excerpt, the platform, the intent score, and the one-sentence reasoning trace from Claude:
r/SaaS · u/jakub_founder · 4 min ago
"Anyone know a good tool for finding leads on social? Tried Apollo but it's email-only."
Intent: 9/10 · Direct · Mentions explicit pain (Apollo limitation) and asks for alternative.
The agent has typically already drafted a 3-sentence DM by the time you click in — referencing the specific Apollo limitation the prospect mentioned, suggesting repco as the alternative, and ending with a question rather than a CTA. You approve, edit, or skip; sending happens from your own account.
For a side-by-side of how this differs from a database tool like Apollo, the repco vs Apollo breakdown covers channel, cost, and personalization. For where this fits among other Apollo alternatives, our 8-tool comparison maps the category.
Why cross-platform monitoring is the moat
Single-channel tools see fragments of intent. repco sees the full conversation arc — the rant on LinkedIn, the ask on Reddit, the comment under the competitor's post — and replies first.
Three compounding effects make this defensible:
Same buyer, multiple signals, one inbox. When a prospect appears on both platforms, repco recognizes the cross-channel pattern and surfaces it as one high-priority signal instead of two unrelated matches.
Private database growth. Every prospect, every signal, every conversation lives in your private database. By month three you have a warm-prospect graph competitors can't replicate.
Channel-aware DM drafting. A LinkedIn DM and a Reddit DM have different optimal lengths, registers, and patterns. Same prospect, different message — because the conversational norms differ.
This is what we mean when we say repco is a signals company, not a database company. The product compounds; databases don't.
Frequently asked questions
How does repco avoid getting Reddit and LinkedIn accounts banned?
The detection stack runs alongside an account-safety layer: 7-day progressive warmup for new accounts, behavioral noise (scroll dwell, randomized timing, idle gaps), volume caps tied to account age, and IP/device isolation per account. The same rules a careful human operator would follow, automated. We covered the LinkedIn-side rules in how to DM on LinkedIn without getting banned.
Why Claude Sonnet 4.6 specifically and not GPT or open-source?
Claude Sonnet 4.6 has the best price-to-quality ratio for short classification tasks (under 500 tokens in/out) and consistently produces the structured JSON we need without hallucinated fields. We benchmarked GPT-4o and Llama 3.1 alongside; Claude won on consistency of intent_strength scoring across edge cases. We re-evaluate quarterly.
Can I customize the keywords and competitor watchlist?
Yes — every keyword, regex pattern, competitor name, and intent verb is configurable per workspace. Most users start with a 5-keyword + 3-competitor watchlist and expand based on what scores high in week one.
How much does the Claude classification layer cost in practice?
A typical user burns 50–100 classifications per day at roughly $0.003–$0.006 each on Sonnet 4.6 — so $0.15–$0.60/day. Annual cost is bundled into repco's subscription; you don't pay Anthropic directly.
Bottom line
Cross-platform monitoring is the moat. The two-tier detection stack keeps cost predictable while catching the ambiguous signals pure keyword matching misses. The asymmetric cadence keeps accounts safe while staying fast where speed matters. The 1–10 scoring framework keeps your inbox clean and your replies high-leverage.
If you want to run this loop by hand before automating, the Reddit playbook is the cheapest starting point. When manual stops scaling — typically around 5 subreddits and 15 keywords — repco is the gap-filler.
About the author
Kamil is the founder of repco.ai — the AI sales rep that finds buyers publicly asking for products like yours on Reddit and LinkedIn. 15 years across marketing and sales, building and running companies in industrial, IT, investments, and real estate. Serial founder; building repco from the gap he kept hitting himself — outbound channels that work for solo founders and small teams, not enterprise sales orgs. Designed the two-tier detection stack after months of manually monitoring 8 subreddits and saved LinkedIn searches by hand.