Competitive intelligence isn’t about having data — it’s about knowing when something changes and what to do about it. This recipe sets up ongoing monitoring for a company, product, or person and produces a structured change alert when something meaningful shifts — direction change, share-of-voice spike, new co-occurrence pattern, or strategic pivot signal.

Who this is for

  • Product leaders tracking competitors to inform roadmap decisions
  • Investors monitoring portfolio companies or sector players
  • Consultants maintaining competitive context for ongoing client engagements

Step 1: Get the current profile

from gildea_sdk import Gildea

client = Gildea()

# Full entity intelligence
entity = client.entities.get("Anthropic")

print(f"Scale: {entity['scale']}")
print(f"Direction: {entity['direction']}")
print(f"Notability: {entity['notability']}")
print(f"Reasoning: {entity['notability_reasoning']}")
print(f"Signals: {entity['signal_count']}")

# Trend stats
trend = entity["trend"]
print(f"Share of voice: {trend['share_of_voice']:.1%}")
print(f"Slope: {trend['theil_sen_slope']:.4f}")
print(f"This week: {trend['current_week']} signals")

# Who they co-occur with
for rel in entity["related_entities"][:5]:
    print(f"  Co-occurs with {rel['name']} ({rel['co_occurrence_count']} signals)")

Step 2: Pull recent signals about the entity

# Latest signals mentioning this entity
signals = client.signals.list(entity="Anthropic", limit=10)

for signal in signals["data"]:
    print(f"[{signal['published_at'][:10]}] {signal['title']}")
    print(f"  Source: {signal['registrable_domain']}")
    print(f"  Verified units: {signal['verified_text_unit_count']}")

Step 3: Search for specific intelligence

# Semantic search scoped to this entity
results = client.search("Claude model pricing strategy", entity="org:/anthropic", limit=5)

for hit in results["data"]:
    print(f"[{hit['relevance_score']:.4f}] {hit['unit']['text']}")
    print(f"  From: {hit['citation']['signal_title']}")

Step 4: Detect changes over time

Run this on a schedule (daily or weekly) and compare against the previous snapshot:
import json
from pathlib import Path

SNAPSHOT_FILE = Path("anthropic_snapshot.json")

# Get current state
current = client.entities.get("Anthropic")
current_snapshot = {
    "direction": current["direction"],
    "notability": current["notability"],
    "scale": current["scale"],
    "signal_count": current["signal_count"],
    "share_of_voice": current["trend"]["share_of_voice"],
    "slope": current["trend"]["theil_sen_slope"],
}

# Compare to previous snapshot
if SNAPSHOT_FILE.exists():
    previous = json.loads(SNAPSHOT_FILE.read_text())
    
    if current_snapshot["direction"] != previous["direction"]:
        print(f"DIRECTION CHANGED: {previous['direction']} -> {current_snapshot['direction']}")
    
    if current_snapshot["notability"] != previous["notability"]:
        print(f"NOTABILITY CHANGED: {previous['notability']} -> {current_snapshot['notability']}")
    
    sov_delta = current_snapshot["share_of_voice"] - previous["share_of_voice"]
    if abs(sov_delta) > 0.02:
        print(f"SHARE OF VOICE SHIFT: {sov_delta:+.1%}")

# Save current snapshot
SNAPSHOT_FILE.write_text(json.dumps(current_snapshot, indent=2))
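The field-by-field comparison above generalizes into a small diff helper, so the same logic serves every entity on your watchlist. The field names and the 2% share-of-voice threshold mirror the snippet above; both are tunable assumptions, not SDK requirements:

```python
# Fields compared for exact equality vs. fields compared with a tolerance
EXACT_FIELDS = ("direction", "notability", "scale")
DELTA_FIELDS = {"share_of_voice": 0.02}  # alert when |delta| exceeds threshold

def diff_snapshots(previous: dict, current: dict) -> list[str]:
    """Return human-readable descriptions of meaningful changes
    between two snapshot dicts."""
    changes = []
    for field in EXACT_FIELDS:
        if current.get(field) != previous.get(field):
            changes.append(
                f"{field} changed: {previous.get(field)} -> {current.get(field)}"
            )
    for field, threshold in DELTA_FIELDS.items():
        delta = current.get(field, 0.0) - previous.get(field, 0.0)
        if abs(delta) > threshold:
            changes.append(f"{field} shifted {delta:+.1%}")
    return changes

# Example with inline snapshots
prev = {"direction": "stable", "notability": "moderate",
        "scale": "large", "share_of_voice": 0.058}
curr = {"direction": "rising", "notability": "moderate",
        "scale": "large", "share_of_voice": 0.082}
print(diff_snapshots(prev, curr))
```

An empty return list means nothing meaningful changed, which maps cleanly onto the "say so clearly in one sentence" rule in the alert prompt.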

Step 5: Embed for local comparison

Track how your own strategy docs relate to expert reasoning about this entity:
import numpy as np

# Your thesis about the competitor
my_thesis = "Anthropic's moat is RLHF research depth, not distribution."
my_vec = np.array(client.embed(my_thesis)["embedding"])

# Get their latest signal with embeddings
signals = client.signals.list(entity="Anthropic", limit=1)
signal = client.signals.get(signals["data"][0]["signal_id"], include="embeddings")

# Find claims that relate to your thesis (cosine similarity;
# normalize both vectors in case embeddings aren't unit-length)
my_vec = my_vec / np.linalg.norm(my_vec)
for arg in signal["decomposition"].get("arguments", []):
    for claim in arg.get("claims", []):
        if "embedding" in claim:
            claim_vec = np.array(claim["embedding"])
            claim_vec = claim_vec / np.linalg.norm(claim_vec)
            sim = float(np.dot(my_vec, claim_vec))
            if sim > 0.6:
                print(f"[{sim:.3f}] {claim['unit']['text']}")
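If a fixed 0.6 cutoff drops borderline matches, an alternative is to rank all claims by cosine similarity and review the top few. A minimal sketch, using synthetic 3-dimensional vectors in place of real embeddings (the `top_claims` helper and the `text`/`embedding` dict shape are illustrative, not SDK-defined):

```python
import numpy as np

def top_claims(query_vec, claims, k=3):
    """Rank claim dicts by cosine similarity to a query embedding."""
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    scored = []
    for claim in claims:
        v = np.asarray(claim["embedding"], dtype=float)
        v = v / np.linalg.norm(v)
        scored.append((float(q @ v), claim["text"]))
    # Highest similarity first
    return sorted(scored, reverse=True)[:k]

# Synthetic 3-d example: "a" matches the query exactly, "c" partially
claims = [
    {"text": "a", "embedding": [1.0, 0.0, 0.0]},
    {"text": "b", "embedding": [0.0, 1.0, 0.0]},
    {"text": "c", "embedding": [0.7, 0.7, 0.0]},
]
print(top_claims([1.0, 0.0, 0.0], claims, k=2))
```

Ranking keeps the review budget fixed (top k per run) regardless of how many claims a signal decomposes into.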

Step 6: Generate the change alert

When your scheduled run detects meaningful changes, synthesize them into an actionable alert. This is the deliverable your team actually reads.
import json

# Collect change data (from Step 4 comparison)
changes_detected = []
new_signals = []

current = client.entities.get("Anthropic")
current_snapshot = {
    "direction": current["direction"],
    "notability": current["notability"],
    "scale": current["scale"],
    "signal_count": current["signal_count"],
    "share_of_voice": current["trend"]["share_of_voice"],
    "slope": current["trend"]["theil_sen_slope"],
    "top_co_occurrences": [
        {"name": r["name"], "count": r["co_occurrence_count"]}
        for r in current["related_entities"][:5]
    ],
}

# Only compare when a previous snapshot exists; on a first run the
# alert is generated with an empty change list
if SNAPSHOT_FILE.exists():
    previous = json.loads(SNAPSHOT_FILE.read_text())

    if current_snapshot["direction"] != previous["direction"]:
        changes_detected.append(
            f"Direction changed from {previous['direction']} to {current_snapshot['direction']}"
        )
    sov_delta = current_snapshot["share_of_voice"] - previous["share_of_voice"]
    if abs(sov_delta) > 0.02:
        changes_detected.append(f"Share of voice shifted {sov_delta:+.1%}")
    if current_snapshot["notability"] != previous["notability"]:
        changes_detected.append(
            f"Notability changed from {previous['notability']} to {current_snapshot['notability']}"
        )
    signal_delta = current_snapshot["signal_count"] - previous["signal_count"]
    if signal_delta > 0:
        changes_detected.append(f"{signal_delta} new signals since last check")

# Pull headlines of new signals
recent = client.signals.list(entity="Anthropic", limit=5)
for s in recent["data"]:
    new_signals.append({
        "title": s["title"],
        "source": s["registrable_domain"],
        "date": s["published_at"][:10],
    })

alert_data = json.dumps({
    "entity": "Anthropic",
    "changes": changes_detected,
    "current_state": current_snapshot,
    "recent_signals": new_signals,
}, indent=2)

SYSTEM_PROMPT = """You are a competitive intelligence analyst producing a change
alert for a product or investment team. You will receive entity monitoring data
showing what changed since the last check.

Rules:
- Lead with what changed and why it matters — not a data recitation.
- If direction or notability changed, that's the headline. Explain the implication.
- If share of voice shifted significantly (>2%), explain what that likely means
  (more/less expert attention = more/less market relevance).
- Reference specific new signal headlines to ground the alert in real events.
- End with 1-2 specific recommended actions based on the changes.
- If nothing meaningful changed, say so clearly in one sentence. Don't manufacture
  urgency.
- Keep it under 250 words. This is a Slack message or email, not a report.

Output format (markdown):

## Competitor Alert: [Entity Name]

**Status:** [Changed | Stable] | **Direction:** [value] | **SoV:** [value]

### What Changed
<2-3 sentences on the most important shifts>

### Recent Signal Headlines
<bulleted list of new signal titles with source>

### So What
<1-2 sentences: what this means for YOUR strategy/portfolio/engagement>

### Recommended Actions
<1-2 specific next steps>
"""

USER_PROMPT = f"""Generate a competitor change alert from this monitoring data:

{alert_data}
"""

# Pass to your LLM, or print for manual use
print("=== SYSTEM PROMPT ===")
print(SYSTEM_PROMPT)
print("=== USER PROMPT ===")
print(USER_PROMPT)

Example output artifact

## Competitor Alert: Anthropic

**Status:** Changed | **Direction:** rising | **SoV:** 8.2% (+2.4%)

### What Changed
Anthropic's share of voice jumped 2.4% this week, moving from stable to rising
direction. This coincides with 12 new expert signals — the highest weekly volume
in 3 months. The notability rating shifted from "moderate" to "high," meaning
Anthropic is now one of the most-discussed entities in the AI economy.

### Recent Signal Headlines
- "Anthropic's Claude 4 benchmarks suggest narrowing gap with GPT-5" (theinformation.com)
- "Enterprise Claude API adoption doubles in Q1" (semianalysis.com)
- "Anthropic raises Series D at $60B valuation" (bloomberg.com)

### So What
If Anthropic is on your competitive radar, this is an inflection point — not
gradual movement. The combination of product momentum (Claude 4) and enterprise
traction (API adoption) suggests they're transitioning from research lab to
commercial competitor. If you've been treating them as a slower-moving player,
update that assumption now.

### Recommended Actions
- Pull the full decomposition on the Claude 4 benchmark signal to understand
  specific capability claims and how they compare to your product
- Run a co-occurrence check between Anthropic and your key enterprise accounts
  to see if they're showing up in the same competitive contexts

Interpreting results

| Change | What it means | Action |
| --- | --- | --- |
| Direction: stable -> rising | Entity is gaining expert attention momentum | Escalate monitoring frequency. Something is happening. |
| Direction: rising -> declining | Peak attention may have passed | Assess whether this is a real decline or just normalization after a news cycle. |
| SoV spike > 3% | Major event or inflection point | Pull recent signals immediately. This is time-sensitive intelligence. |
| Notability: moderate -> high | Entity crossed a threshold into top-tier discussion | Add to your primary watchlist if not already there. |
| New co-occurrence partner | Entity is being discussed alongside a new player | Investigate — new partnerships, competitive dynamics, or M&A rumors. |
| No changes across multiple checks | Entity is in a steady state | Reduce monitoring frequency. Redirect attention to entities that are moving. |
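The interpretation table above can be encoded as a simple triage step so each run ends with a concrete monitoring decision, not just printed diffs. The thresholds and priority order here are illustrative assumptions, not SDK values:

```python
def triage(previous: dict, current: dict) -> str:
    """Map snapshot changes to a monitoring action, roughly following
    the interpretation table. The 3% SoV threshold is an assumption."""
    sov_delta = current["share_of_voice"] - previous["share_of_voice"]
    if abs(sov_delta) > 0.03:
        return "pull recent signals immediately"
    if previous["direction"] == "stable" and current["direction"] == "rising":
        return "escalate monitoring frequency"
    if previous["direction"] == "rising" and current["direction"] == "declining":
        return "assess decline vs. post-news-cycle normalization"
    if previous["notability"] != current["notability"]:
        return "review watchlist placement"
    return "steady state: reduce monitoring frequency"

print(triage(
    {"direction": "stable", "notability": "moderate", "share_of_voice": 0.058},
    {"direction": "rising", "notability": "moderate", "share_of_voice": 0.082},
))  # -> escalate monitoring frequency
```

Checks are ordered by urgency: a large share-of-voice spike outranks a direction flip, which outranks a notability change.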

API calls per monitoring run

  • 1 call for entity profile
  • 1 call for recent signals list
  • Optional: 1-5 calls for signal detail on new signals
  • Optional: 1 call for semantic search
Total: 2-8 calls per entity per run (2 required, up to 6 optional). Monitoring 10 competitors daily works out to roughly 20-80 calls/day — well within Pro tier.
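That arithmetic is worth sketching as a quick budget check before scaling up a watchlist. The `daily_call_budget` helper is hypothetical, and the 2-call minimum reflects the required profile and signal-list calls above:

```python
def daily_call_budget(entities: int, detail_calls: int = 0, search_calls: int = 0) -> int:
    """Estimate daily API calls for one monitoring run per entity:
    2 required calls (entity profile + signals list) plus any
    optional signal-detail and semantic-search calls."""
    required = 2
    return entities * (required + detail_calls + search_calls)

print(daily_call_budget(10))                                  # minimal run: 20
print(daily_call_budget(10, detail_calls=5, search_calls=1))  # heaviest run: 80
```

Run it against your plan's actual quota before adding entities or increasing frequency.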