A daily briefing that just lists what happened is a news feed. A briefing that tells you what changed, why it matters, and what to do about it is a decision tool. This recipe builds the latter — an agent that pulls new signals from Gildea, extracts the verified intelligence, and produces a structured morning briefing your team can act on.

Who this is for

  • Product leaders who need a 2-minute morning read on competitive moves and market shifts
  • Investors tracking portfolio-relevant AI developments without reading 30 newsletters
  • Consultants maintaining current market context for client conversations

The pattern

  1. Poll for new signals since your last check
  2. Fetch full decomposition + embeddings for each
  3. Extract all verified text units (thesis sentences, argument sentences, claims)
  4. Synthesize into a structured briefing via LLM

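Step 1 of the pattern says "since your last check," while the code below uses a fixed 24-hour window. If you want a true checkpoint so that no signal is missed between runs, a minimal sketch (the `.last_run` filename is a hypothetical choice, not part of the Gildea SDK):

```python
import os
from datetime import datetime, timedelta, timezone

STATE_FILE = ".last_run"  # hypothetical checkpoint file

def load_last_run(default_days: int = 1) -> str:
    """Return the date of the previous run, or a fallback lookback window."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return f.read().strip()
    # First run: fall back to the fixed window used in Step 1
    return (datetime.now(timezone.utc) - timedelta(days=default_days)).strftime("%Y-%m-%d")

def save_last_run() -> None:
    """Record today's date as the new checkpoint after a successful run."""
    with open(STATE_FILE, "w") as f:
        f.write(datetime.now(timezone.utc).strftime("%Y-%m-%d"))
```

Pass `load_last_run()` as `published_after` instead of the computed `yesterday`, and call `save_last_run()` once the briefing ships.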
Step 1: Pull new signals

from datetime import datetime, timedelta
from gildea_sdk import Gildea

client = Gildea()

# 1. Get signals published since yesterday
yesterday = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d")
signals = client.signals.list(published_after=yesterday, limit=50)

# 2. Fetch full decomposition + embeddings for each
detailed_signals = []
for signal in signals["data"]:
    detail = client.signals.get(signal["signal_id"], include="evidence,embeddings")
    detailed_signals.append(detail)

# 3. Extract all verified claims for the briefing
claims = []
for signal in detailed_signals:
    decomp = signal["decomposition"]
    # Analysis signals: claims nested under arguments
    for arg in decomp.get("arguments", []):
        for claim in arg.get("claims", []):
            claims.append({
                "text": claim["unit"]["text"],
                "source": signal["title"],
                "domain": signal["registrable_domain"],
            })
    # Event signals: claims at top level
    for claim in decomp.get("claims", []):
        claims.append({
            "text": claim["unit"]["text"],
            "source": signal["title"],
            "domain": signal["registrable_domain"],
        })

What you get from each signal

Each signal’s decomposition gives you:
  • Thesis sentences — the author’s central argument (analysis signals)
  • Argument sentences — supporting reasoning
  • Claims — specific verifiable facts (NLI-scored)
  • Summary sentences — what happened (event signals)
  • Evidence — source snippets each claim was verified against (with include=evidence)
  • Embeddings — 768-dim vectors for local similarity (with include=embeddings)
The verified claims are the most valuable input for a briefing — they’re specific, fact-checked, and attributed.
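The same fact often surfaces in several signals on a busy day. Before handing claims to the LLM, it can help to deduplicate by normalized text while keeping every source domain, a sketch that works on the claim dicts built in Step 1:

```python
def dedupe_claims(claims):
    """Collapse duplicate claim texts, merging their source domains."""
    merged = {}
    for claim in claims:
        # Normalize case and whitespace so near-identical texts collapse
        key = " ".join(claim["text"].lower().split())
        if key not in merged:
            merged[key] = {"text": claim["text"], "domains": []}
        if claim["domain"] not in merged[key]["domains"]:
            merged[key]["domains"].append(claim["domain"])
    return list(merged.values())
```

A claim backed by two independent domains is a stronger briefing input than the same sentence repeated twice.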

Step 2: Filter by entity or theme

Narrow the briefing to what matters to you:
# Only signals mentioning NVIDIA
signals = client.signals.list(entity="NVIDIA", published_after=yesterday)

# Only signals tagged with a specific theme
signals = client.signals.list(theme="Infrastructure", published_after=yesterday)

# Combine filters
signals = client.signals.list(entity="OpenAI", theme="Competitive Dynamics", published_after=yesterday)
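If the API filters don't cover a dimension you care about, such as a source allowlist, you can also filter locally after fetching. A sketch (the allowlist contents are illustrative; the field name matches `registrable_domain` from Step 1):

```python
TRUSTED = {"theinformation.com", "bloomberg.com", "ft.com"}  # example allowlist

def filter_by_domain(signals, allowed=TRUSTED):
    """Keep only signals whose source domain is on the allowlist."""
    return [s for s in signals if s.get("registrable_domain") in allowed]
```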

Step 3: Extract theses and summaries alongside claims

Claims are the verified facts, but theses and summaries provide the “so what.” Extract both for a richer briefing:
import json

briefing_input = []
for signal in detailed_signals:
    decomp = signal["decomposition"]
    entry = {
        "title": signal["title"],
        "source": signal["registrable_domain"],
        "published": signal["published_at"][:10],
        "content_type": signal.get("content_type", "unknown"),
    }

    # Analysis signals: thesis + claims
    if "thesis" in decomp and decomp["thesis"]:
        entry["thesis"] = decomp["thesis"].get("text", "")
        entry["claims"] = []
        for arg in decomp.get("arguments", []):
            for claim in arg.get("claims", []):
                entry["claims"].append(claim["unit"]["text"])

    # Event signals: summary + claims
    if "summary" in decomp and decomp["summary"]:
        entry["summary"] = decomp["summary"].get("text", "")
        # setdefault avoids clobbering claims already collected from arguments
        entry.setdefault("claims", [])
        for claim in decomp.get("claims", []):
            entry["claims"].append(claim["unit"]["text"])

    briefing_input.append(entry)

Step 4: Synthesize the briefing

briefing_json = json.dumps(briefing_input, indent=2)

SYSTEM_PROMPT = """You are an AI market intelligence analyst producing a daily
executive briefing. You will receive structured signal data from Gildea's verified
intelligence database — each signal has been decomposed into thesis/summary and
verified claims.

Rules:
- This briefing is for a busy decision-maker. They have 2 minutes. Respect that.
- Group developments by THEME, not by signal. If 3 signals are about infrastructure
  spending, that's one theme section with the key findings consolidated.
- Lead each theme section with the single most important takeaway, then supporting
  claims.
- Always attribute: "(source: domain.com)" after each claim.
- Include a "Watch List" section for items that aren't actionable yet but could
  become important.
- End with an "Action Items" section: specific things the reader should investigate,
  discuss, or decide based on today's intelligence.
- If there are fewer than 3 new signals, say so. A quiet day is useful information.
- Never fabricate claims or sources. Only use what's in the data.
- Keep the entire briefing under 500 words.

Output format (markdown):

# Daily AI Intelligence Briefing — [Date]

**Signal count:** [N] new signals | **Coverage:** [list of themes represented]

## [Theme Name 1]: [One-line headline]
<2-4 sentences synthesizing what happened, citing verified claims and sources>

## [Theme Name 2]: [One-line headline]
<2-4 sentences synthesizing what happened, citing verified claims and sources>

[...additional themes as needed...]

## Watch List
<Bulleted list of 1-3 items that are emerging but not yet actionable>

## Action Items
<Bulleted list of 1-3 specific things to do today based on this intelligence>
"""

USER_PROMPT = f"""Produce today's AI market intelligence briefing from these
verified signals:

{briefing_json}
"""

# Pass SYSTEM_PROMPT and USER_PROMPT to your LLM of choice.
# Example with Anthropic SDK:
#
# import anthropic
# llm = anthropic.Anthropic()
# response = llm.messages.create(
#     model="claude-sonnet-4-20250514",
#     max_tokens=1500,
#     system=SYSTEM_PROMPT,
#     messages=[{"role": "user", "content": USER_PROMPT}],
# )
# briefing = response.content[0].text

print("=== SYSTEM PROMPT ===")
print(SYSTEM_PROMPT)
print("=== USER PROMPT ===")
print(USER_PROMPT)
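The system prompt caps the briefing at 500 words and mandates specific sections, but LLMs drift. A lightweight post-check can catch violations before the briefing ships; a sketch, with the required section names taken from the prompt's output format:

```python
def validate_briefing(briefing: str, max_words: int = 500) -> list:
    """Return a list of problems found in the generated briefing (empty = OK)."""
    problems = []
    if len(briefing.split()) > max_words:
        problems.append(f"over {max_words} words")
    for section in ("## Watch List", "## Action Items"):
        if section not in briefing:
            problems.append(f"missing section: {section}")
    return problems
```

On failure, either retry the LLM call with the problem list appended to the prompt, or flag the briefing for manual review.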

Example output artifact

# Daily AI Intelligence Briefing — April 11, 2026

**Signal count:** 24 new signals | **Coverage:** Infrastructure, Competitive
Dynamics, Regulatory & Legal, Application Layer

## Infrastructure: NVIDIA supply constraints tightening into Q3
NVIDIA's H200 allocation is reportedly 40% below OEM expectations for Q3,
with hyperscaler pre-orders consuming the majority of available units
(source: semianalysis.com). TSMC's CoWoS packaging capacity remains the
binding constraint, with expansion not expected to meaningfully help until
Q1 2027 (source: theinformation.com). AMD's MI350 is seeing increased
enterprise interest as a hedging strategy (source: thechinabriefing.com).

## Competitive Dynamics: Anthropic and Google closing the enterprise gap
Two independent sources report Anthropic's enterprise API revenue grew
faster than OpenAI's in Q1 2026, though from a smaller base
(source: theinformation.com, bloomberg.com). Google's Gemini Pro is
reportedly winning government contracts previously dominated by
Azure OpenAI (source: federalnewsnetwork.com).

## Regulatory & Legal: EU AI Act enforcement timeline accelerating
The European Commission released draft technical standards for high-risk
AI systems 3 months ahead of schedule (source: euractiv.com). Compliance
costs for frontier model providers are estimated at $2-5M per model
(source: ft.com).

## Watch List
- Meta's Llama 4 licensing terms are being renegotiated with enterprise
  customers — could signal open-source strategy shift
- Two signals mention "AI agent liability frameworks" — emerging regulatory
  category to track

## Action Items
- If you depend on NVIDIA H200 supply: escalate procurement conversations
  now. Q3 allocation windows are closing.
- If you compete with Anthropic or Google in enterprise: pull their entity
  co-occurrence data to see which accounts they're winning.
- If you have EU exposure: download the draft technical standards and
  start gap analysis. 3 months less prep time than expected.

Step 5 (optional): Store embeddings for cumulative intelligence

If you store the embeddings alongside the text, future briefings can reference past context:
import numpy as np

# Embed a question about your briefing
question_vec = np.array(client.embed("Is NVIDIA's supply advantage sustainable?")["embedding"])

# Compare against all stored claim embeddings
# (stored_claims: a list of {"text", "embedding"} dicts persisted from prior runs;
#  the dot product equals cosine similarity only if both vectors are unit-normalized)
for claim in stored_claims:
    sim = float(np.dot(question_vec, np.array(claim["embedding"])))
    if sim > 0.7:
        print(f"[{sim:.3f}] {claim['text']}")
This is how the briefing agent evolves from “daily summary” to “living knowledge base” — each day’s verified intelligence compounds with everything that came before.
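One simple way to build that cumulative store is a JSONL file, one claim per line. A sketch (the `claims.jsonl` filename and record shape are assumptions; vectors are unit-normalized on write so the dot-product comparison above behaves as cosine similarity):

```python
import json
import numpy as np

def store_claims(claims, path="claims.jsonl"):
    """Append claims (text + embedding) to a JSONL store, normalizing vectors."""
    with open(path, "a") as f:
        for claim in claims:
            vec = np.asarray(claim["embedding"], dtype=float)
            vec = vec / np.linalg.norm(vec)  # unit length: dot == cosine similarity
            f.write(json.dumps({"text": claim["text"], "embedding": vec.tolist()}) + "\n")

def load_claims(path="claims.jsonl"):
    """Read the full claim store back into memory."""
    with open(path) as f:
        return [json.loads(line) for line in f]
```

Appending daily and loading at query time keeps the store trivially portable; swap in a vector database once it outgrows a flat file.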

Interpreting results

| Signal | What it means | Action |
| --- | --- | --- |
| 24+ signals in a day | High-activity day in AI markets | Longer briefing justified. Look for theme clustering. |
| < 5 signals | Quiet day. Not everything is breaking news. | Briefing should say so. Redirect attention to strategic work. |
| 3+ signals on the same theme | Theme is heating up | Warrants its own briefing section and likely a deeper dive. |
| New entity appearing in signals | Emerging player or product | Add to watch list. If it persists for 3+ days, promote to monitored entity. |
| Claims contradicting yesterday's briefing | Fast-moving or contested situation | Flag explicitly. Decision-makers need to know when the ground is shifting. |
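The "3+ signals on the same theme" row is easy to automate, assuming each signal detail carries a `themes` list (an assumption about the payload shape, so verify against your actual responses):

```python
from collections import Counter

def hot_themes(signals, threshold=3):
    """Return themes mentioned by at least `threshold` signals today."""
    counts = Counter(
        theme for signal in signals for theme in signal.get("themes", [])
    )
    return [theme for theme, n in counts.items() if n >= threshold]
```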

API calls per run

  • 1 call to list signals (returns up to 50)
  • N calls to get signal detail (one per signal)
  • At 20-30 new signals/day: ~25 API calls per briefing run
That works out to roughly 750 requests over a 30-day month, comfortably within the Pro tier's 2,000 requests/month.