
Documentation Index

Fetch the complete documentation index at: https://hc.pillargtm.com/llms.txt

Use this file to discover all available pages before exploring further.

PILLAR MCP Server

PILLAR is the first Revenue Architecture Operating System (RAOS) with a native MCP (Model Context Protocol) server. Query your revenue architecture — account health, pipeline, signals, renewals, scoring, plays, tasks, financial cascade, market intelligence, AI-generated narratives, and governed writes — directly from any AI assistant.

Quick Start

1. Generate an API Key

Navigate to Settings > Integrations and scroll to the API Keys & MCP Server section. Click Generate API Key and save the key — it’s only shown once.

2. Configure Your AI Assistant

Add the following to your Claude Desktop, Cursor, or VS Code MCP configuration:
{
  "mcpServers": {
    "pillar": {
      "url": "https://app.pillargtm.com/api/mcp",
      "headers": {
        "Authorization": "Bearer pk_live_your_key_here"
      }
    }
  }
}

3. Start Querying

Ask your AI assistant natural language questions about your revenue data:
  • “What’s my pipeline summary?”
  • “Which accounts are at risk this quarter?”
  • “Show me critical signals and the top 3 save plays that have worked on similar accounts”
  • “Simulate what NRR going from 108% to 115% would mean for ARR and AE headcount”
  • “What’s the top expansion opportunity in the West territory right now?”
  • “Ask PILLAR: who’s at risk of churning next quarter and why?”

Endpoint

POST https://app.pillargtm.com/api/mcp
GET  https://app.pillargtm.com/api/mcp  (server discovery)
Authentication: Bearer API key in the Authorization header.
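MCP clients speak JSON-RPC 2.0 over this HTTP endpoint, and `tools/call` is the MCP-standard method for invoking a named tool. A minimal sketch of the request body a client would POST, using the `get_renewal_risk` tool and a `days_out` argument taken from the examples later in this page (treat the exact argument schema as illustrative):

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Ask for renewals at risk in the next 30 days.
body = build_tool_call("get_renewal_risk", {"days_out": 30})
```

Send it with `Content-Type: application/json` plus the `Authorization: Bearer pk_live_...` header from the configuration above.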

Available Tools (129)

PILLAR’s MCP surface spans 129 tools across 14 categories, covering every layer of the revenue architecture. The vertical_intelligence category (63 tools live) is the competitive moat — it carries PILLAR’s canonical district + federal-program datasets, answering questions that horizontal Revenue AI platforms structurally cannot.
Coverage as of May 2026: The vertical_intelligence category is backed by 51 jurisdictions (50 states + DC + federal) and 26 federal datasets (8 IPEDS components + 8 Higher Ed sources + 10 K-12 sources), exposed through 63 MCP tools. Per-district coverage is at 51/51 jurisdictions for assessment proficiency (5.03M cells across ~19,700 LEAs), cohort graduation (391k cells), accountability status (24k cells), and engagement/chronic absenteeism (104k cells) — the four priority district-grain tables.

K-12 state funding allocations (NEW): 46 of 51 jurisdictions (90.2%), 114,699 per-LEA rows, $957B+ captured from primary state aid programs (CA LCFF, TX FSP, NY Foundation Aid, IL EBF, FL Net State FEFP, OH Net State Funding, WA BEA, MI Bulletin 1014, etc.). HE state aid (NEW): 7,876 per-institution rows across 58 jurisdictions, $7.94B captured — IPEDS SFA per-institution state grant aid (3,669 institutions FY22-23 + 3,693 FY21-22) plus 11 state-specific programs (TX TEXAS Grant, CA Strong Workforce, 9 MI scholarship/grant programs).

Per-state DOE deep ingest covers 27+ states at the recent-year grain; federal EDFacts SY 2020-21 backfill closes the long tail to 51/51 for the priority surfaces. The schema, ingest pipeline, MCP wrappers, and 550+ build-time-enforced Guarantees are runtime truth — every commit blocks merge unless the canonical-shape validators (G-X-31 through G-X-40) accept every row landing in the 26 federal-data + 47 state-funding tables. All Round 8 tools (Scorecard Field-of-Study, FSA CDR/GE/HCM/distress-score, SHEEO SHEF, NC-SARA, Carnegie 2025, IPEDS HR/ADM/AY/AL/EF-CIP, CCD School Universe, EDGE Locale, CRDC ×2, OSEP IDEA-B, McKinney-Vento, Title III, Migrant Ed, Perkins V, NSLP-CEP, NIEER) are live and MCP-callable today.
The list below is representative — the source of truth is the tool catalog at src/lib/mcp/tool-catalog.ts, and new tools ship continuously as vertical-intelligence surfaces (K-12 state-calendar procurement windows, federal Title program eligibility, cooperative-contract lookups, NCES district enrichment, state DOE assessment + accountability + graduation) come online.

Tier 0 — Core Surface

The foundational tool set that shipped with the original MCP server. Still the most-invoked tier for day-to-day agent workflows.

Core Intelligence

get_dashboard — Full GTM health snapshot: ARR, pipeline, NRR, at-risk accounts, signal count, forecast health
get_pipeline_summary — Pipeline totals: open, commit, best-case, weighted pipeline, NRR, ARR
search_accounts — Search accounts by name, segment, or territory with health/risk/priority scores
get_account_360 — Full account detail: scores, contacts, signals, opportunities, district intelligence
get_active_signals — Active signals filtered by severity (CRITICAL/WARNING/INFO) or family (RENEWAL/PIPELINE/EXPANSION/ACCOUNT/COVERAGE)
get_account_health — Health, risk, and priority scores with scoring decomposition showing which factors drive each score
get_renewal_risk — Upcoming renewals with risk scores, filtered by days out (30/60/90)

Operational

get_forecast — Quarterly forecast: won, commit, probable, upside with override amounts and rep-level detail
get_revenue_bowtie — Full acquire → close → retain/expand funnel with conversion rates at each stage
get_territory_economics — Territory P&L: revenue, costs, yield ratios, health classification per territory
get_leads — Lead funnel with ICP fit scores, behavioral scores, and funnel stage. Filter by status or score threshold
acknowledge_signal — Update a signal’s status (ACKNOWLEDGED, IN_PROGRESS, RESOLVED). Write.
create_play — Create an intervention play for an account (SAVE_PLAY, EXPANSION_PLAY, ONBOARDING_PLAY). Write.
complete_play — Complete a play with mandatory outcome (RENEWED, CHURNED, EXPANDED, LOST) and auto-update linked renewal. Write.
get_board_report — Executive summary for board meetings: ARR, NRR, GRR, pipeline coverage, risk distribution, EBITDA
get_scoring_rules — Scoring rule summary with weights and categories

Data Readiness

get_data_readiness — Check if CRM data is ready for scoring: readiness level (ready/degraded/blocked), scoring-ready account count, blocking issues
get_affected_records — Identify specific accounts with data quality issues. Filter by issue code (no_contacts, no_recent_activity, missing_segment, zero_arr, score_imbalance, negative_arr)

Flywheel & Benchmarks

get_calibration_stats — Flywheel scoring calibration dashboard: outcome distribution, weight drift from baseline, top rules by calibration impact
get_calibration_history — Scoring weight calibration history: pending feedback count, current weights, and chronological calibration event log
get_benchmark_percentiles — Blueprint benchmark percentile rankings: pillar scores and operational metrics ranked against anonymized peer cohort
list_benchmark_cohorts — List available benchmark cohorts with member counts and snapshot data. Cohorts group orgs by industry, revenue range, and maturity
get_benchmark_opt_in_status — Check whether the organization is opted in to anonymized benchmarking and which cohort is assigned

Connector Observability

list_connectors — List all configured data connectors across CRM, email/calendar, product usage, and support ticketing with per-provider status, last sync, and health
get_sync_log — Recent connector sync runs from the unified sync log. Shows successes, failures, records synced, and timing. Filter by connector name
get_account_data_sources — Per-account breakdown of which data sources contributed to its score: CRM, product analytics, support tickets, or contracts provenance

Tier A — Plays, Tasks, Expansion (8 tools)

Plays Intelligence

list_plays — List plays in flight. Filter by account, state (PENDING, ACTIVE, COMPLETED, DISMISSED), or owner. Answers “what’s already in flight?”
get_play_effectiveness — Template-level win rate, ARR impact, health delta, and which templates are underperforming. Filterable by segment/tier/territory
get_play_recommendations — System-suggested plays per account based on signal patterns, scoring, and historical effectiveness

Tasks & Activities

list_tasks — Platform task list with filters (assignee, status, account, source, priority). Includes summary counts and overdue flag
get_activity_timeline — Email, call, meeting, task, and play event timeline. Filter by account, user, or date range

Expansion Whitespace

get_expansion_summary — Org-wide expansion whitespace: total addressable expansion ARR, top candidate accounts, product-level whitespace distribution
get_expansion_signals — Layer-I signals tagged as expansion triggers (adoption surge, champion promoted, new use case detected)
get_expansion_affinity — Product co-adoption matrix and cross-sell probability per product pair

Tier B — Financial Cascade (9 tools)

simulate_nrr_impact — Simulate ARR / headcount / ROI impact of moving NRR from current to target. Returns AE and CSM-equivalent headcount, retain-vs-acquire cost comparison
get_revenue_impact — ARR-at-stake analysis across renewal window, expansion opportunity value, churn exposure, net revenue position
get_budget_variance — GTM budget variance vs plan per period (rep cost, marketing spend, CS cost)
get_procurement_calendar — Procurement windows by account: fiscal year alignment, renewal overlap. Filterable by territory/segment/upcoming
get_procurement_forecast — Procurement-window-aware forecast (different from stage-based get_forecast)
get_cohort_curves — Customer retention + NRR cohort curves by acquisition period. Investor-grade
list_cohort_definitions — Available cohort slicing definitions (acquisition quarter, segment, deal size, territory)
get_forecasted_renewals — ML-forecasted renewal outcomes combining base rates + current risk scores
get_customer_readiness — Implementation and adoption readiness: onboarded, stuck, milestone status, time-to-value risk
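For intuition about the arithmetic behind simulate_nrr_impact, here is a toy model: compound starting ARR at the current vs target NRR and take the difference. PILLAR’s real tool also returns headcount equivalents and cost comparisons; this sketch covers only the ARR delta, and the dollar figures are hypothetical:

```python
def nrr_arr_delta(starting_arr: float, current_nrr: float,
                  target_nrr: float, years: int = 1) -> float:
    """Difference in ending ARR between compounding at the target
    vs the current NRR over the given horizon."""
    current_path = starting_arr * current_nrr ** years
    target_path = starting_arr * target_nrr ** years
    return target_path - current_path

# Moving NRR from 108% to 115% on a hypothetical $50M base, one year out:
delta = nrr_arr_delta(50_000_000, 1.08, 1.15)
print(f"${delta:,.0f}")  # $3,500,000
```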

Tier C — Market Intelligence (5 tools)

get_tam_sam — TAM / SAM / SOM sizing by vertical, filterable by fiscal year
get_territory_equity — Territory fairness analysis: ARR balance, account distribution, quota-to-pipeline ratios
simulate_headcount_change — Project ARR uplift, ramp lag, cost impact, and breakeven quarter of adding/removing reps
get_district_intelligence — K-12 district enrichment (enrollment, demographics, bond measures, superintendent changes) via Starbridge
get_data_health — Org-wide data health dashboard: freshness lag, orphan counts, duplicate counts, pipeline blockers

Tier D — AI Orchestration (7 tools)

ask_pillar — Ask PILLAR’s native RAG layer a question. Returns grounded answer with citations from CRM + signals + scoring
list_narratives — List cached AI-generated narratives (board briefings, forecast commentary, account deep-dives)
get_narrative — Full prose body + citations for a specific narrative
generate_action_plan — AI-authored per-account action plan: owner assignments, sequenced tasks, expected outcomes, SLAs
generate_account_intelligence — Multi-source account brief (CRM + signals + scoring + district intel + contact web)
generate_coaching_insights — Rep-level forecast accuracy, deal-velocity patterns, stuck-deal flags, 1:1 talking points
generate_board_narrative — Board-grade prose from the current dashboard data

Tier E — Scoring Transparency (5 tools)

get_scoring_backtest — Historical accuracy of scoring models: how well did risk/priority/expansion scores predict actual outcomes over 12 months
get_scoring_profile — Org’s customized scoring rule weights and active configuration
get_forecast_weights — Custom forecast probability weights per stage + category mappings
get_contracts_config — Contracts object configuration: date models, term lengths, expansion vs new-biz classification
get_scoring_thresholds — Org’s custom score-severity thresholds (CRITICAL vs WARNING vs INFO per model)

Tier F — Governed Writes (5 tools)

create_task — Create a platform task. Source type distinguishes NBA / signal / play / MQL handoff / manual. Governance log entry automatic. Write.
update_task — Update status, assignee, priority, or due date. Write.
update_signal_status — Set signal to ACKNOWLEDGED / IN_PROGRESS / RESOLVED / SNOOZED / DISMISSED with optional reason. Write.
set_renewal_disposition — Set renewal disposition directly (RENEWED / CHURNED / EXPANDED / LOST / PENDING). Write.
opt_in_benchmarks — Opt the org in or out of anonymized peer-cohort benchmarking. Write.
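Every governed write lands alongside an automatic governance log entry (see create_task above). As a sketch of that pattern, with illustrative field names rather than PILLAR’s actual log schema:

```python
from datetime import datetime, timezone

def governance_entry(tool: str, actor: str, payload: dict) -> dict:
    """Append-only audit record written alongside a governed write.
    Illustrative shape only; the production log schema may differ."""
    return {
        "tool": tool,
        "actor": actor,                 # e.g. the API key identity making the call
        "payload": payload,             # the write's arguments, as received
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

entry = governance_entry(
    "update_signal_status",
    "api_key:pk_live_example",
    {"signal_id": "sig_123", "status": "ACKNOWLEDGED", "reason": "triaged in standup"},
)
```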

Tier G — Vertical Intelligence (63 tools live)

The competitive moat. Horizontal revenue platforms (Salesforce, HubSpot, Gong, Clari) cannot ship these tools because they don’t maintain federal education datasets, accreditor calendars, per-state procurement timing, or canonicalized state DOE assessment data. PILLAR does — the Tier G surface answers questions in the terms the customer’s own procurement office and CFO operate in.
K-12 + HigherEd-specific tools backed by maintained federal datasets (IPEDS — now including the 5 IPEDS-extension components Human Resources / Admissions / Academic Year Tuition / Academic Libraries / Enrollment by CIP — plus NCES CCD, EDFacts, F-33 district finance, College Scorecard institution + field-of-study, Carnegie Classification 2025, FSA Cohort Default Rates / Gainful Employment / NSLDS aggregate / HCM, SHEEO SHEF, NC-SARA, CRDC, OSEP IDEA Part B, McKinney-Vento, Title III ELA, Migrant Education, Perkins V CTE, NCES EDGE Locale Codes, NSLP/CEP, NIEER State of Preschool), accreditor cycles, state fiscal calendars, Title program formulas, and all 50 states + DC canonicalized into a unified per-LEA assessment / accountability / graduation surface (51/51 jurisdictions covered for assessment proficiency, cohort graduation, accountability status, and engagement/chronic absenteeism via per-state DOE deep ingest for 27+ states plus a federal EDFacts SY 2020-21 backfill closing the long tail).

K-12 — State DOE academic outcomes (NCES LEAID-keyed, 51 jurisdictions unified)
get_district_assessment_proficiency — Per-LEA proficiency cell (subject × grade × subgroup × year) across all 51 jurisdictions (27+ states via per-state DOE deep ingest at recent-year grain; federal EDFacts SY 2020-21 fills the remaining 24 jurisdictions to 51/51), normalized to a unified pct_proficient_or_above column with documented per-state policy footprint
get_district_assessment_trend — Year-over-year proficiency trend with continuity_break flags across assessment-family transitions (PARCC→MCAP, FSA→FAST, AIR→Cambium)
get_district_assessment_benchmarks — District proficiency vs state aggregates (P25 / median / P75) AND NAEP state + national, with mandatory standards_comparability_note whenever cross-family figures juxtapose
get_district_assessment_subgroup_gaps — Ranked gap-from-all-students per subgroup; suppression envelope preserved (suppressed cells appear with gap_pct: null, never silently dropped)
get_district_accountability_status — Canonical CSI / TSI / LRAP / REC / GS status with the is_intervention_priority boolean derived from a single source of truth
get_district_graduation_metrics — Multi-year-window graduation rates (4yr / 5yr / 6yr / 7yr) with explicit extension_coverage documenting which extensions are available
get_district_engagement_metrics — Chronic absenteeism, attendance, dropout per LEA × subgroup with worst-subgroup ranking
get_district_advanced_coursework_metrics — AP/IB/dual-credit enrollment + pass rates with ZERO_PASSING_NOTE flag for the “access without success” pattern
get_district_ccmr_metrics — College/Career/Military Readiness composites with explicit CCMR_COMPOSITE_NOTE warning that composites aren’t comparable across states
get_district_growth_metrics — YoY improvement scores with GROWTH_VS_LEVEL_NOTE to prevent conflating growth with absolute proficiency
get_district_early_childhood_metrics — Pre-K + early literacy with coverage_scope_note (frequently public-school-enrolled only)
get_district_graduation_pathway_metrics — TX-specific graduation pathway codes with state-attribution preservation (no silent cross-state collapse)
get_district_refusal_rates — NY 3-8 assessment opt-out rates per LEA × subject × subgroup
get_state_naep_comparison — State assessment trend vs NAEP trend for the 11 Tier-1 states; surfaces naep_disagrees: true when state cut-score recalibrations diverge from federal benchmark
get_ingestion_health — Per-state ingest freshness + last-publish lag

HigherEd — IPEDS institutional intelligence (UNITID-keyed)
get_ipeds_enrollment_trend — 10-year enrollment trend for a higher-ed institution (by IPEDS UNITID): total + FTE headcount, Pell-eligible share, first-generation share, trend direction, CAGR, and a computed enrollment-cliff risk score (0-100).
score_institution_tuition_dependency — Tuition-and-fees / total-revenue ratio over 5 years with dependency score. Higher = more exposed to enrollment-cliff → vendor ability-to-pay risk.
get_pell_grant_institutional_share — % of undergrads receiving Pell, total Pell dollars, 5-year trend, and federal-budget-sensitivity score. Higher Pell reliance = more exposure to federal appropriation risk.

K-12 — Federal Title dollars (NCES LEAID-keyed)

get_district_title_allocations — Per-district federal Title dollars. Title I-A nationally via F-33 C14 (~15.6k districts × 5 FYs); Title III-A via F-33 C36 (~5.6k districts × 5 FYs); aggregate OTHER-FED-RESTR bucket via F-33 C25 (~15k districts × 5 FYs — bundles II-A / IV-A / V / VI / VII / VIII / CTE non-Perkins / ESSER); Titles II-A and IV-A as a 3-district sample (LAUSD / Chicago / NYC) from EDFacts. Every row carries source and reporting_lag_months, and the aggregate bucket has is_aggregate: true with aggregate_includes listed so DRAFTER can’t mistake it for a specific Title.

Public-sector — Procurement calendars + compliance

get_fiscal_year_procurement_windows — When a given state’s K-12 districts open and close their buying windows — and which months are off-limits for procurement conversations.
get_state_higher_ed_budget_cycle — A state’s HigherEd budget calendar: legislative session windows, governor sign-off month, earliest disbursement month.
get_federal_title_programs — Catalog of ESEA / ESSA Title programs (I-A, I-B, I-C, I-D, II-A, III-A, IV-A, IV-B, V, VI, VII, VIII) with FY2025 federal appropriations, ESSA evidence-tier requirements, and product-category fit for EdTech vendors.
get_accreditation_review_cycle — A higher-ed institution’s next accreditation self-study, site visit, and decision dates across 8 regional + national accreditors (SACSCOC, HLC, MSCHE, NECHE, NWCCU, WSCUC, ACCJC, DEAC).
score_cooperative_contract_eligibility — Whether a target district / institution can buy through a named cooperative (NASPO ValuePoint, TIPS, Sourcewell, CRC, OMNIA Partners, E&I, MHEC, PEPPM, etc.) — turns a 3-month procurement cycle into 30 days.

The canonicalization claim

PILLAR canonicalizes 51 jurisdictions (50 state DOEs + DC + federal) with a documented policy footprint, a structural honesty layer, and sixteen independent layers of accuracy verification.

Round 1-5 reconciliation layer (closes “is the state-DOE proficiency number right?”)
  • Macro-level reconciliation against state-published statewide aggregates with 24-state coverage (G-X-25)
  • Micro-level spot-checks against 17 hand-validated district fixtures across 13 states including the load-bearing LDOE R36→036 alias (G-X-26)
  • External NAEP trend-direction cross-validation for the 11 Tier-1 states with a live MCP route at /api/vertical/state-naep-comparison (G-X-27)
  • Silent-corruption canary on every ingested cell with queue-backed weekly review via the value_unknown_alarms table (G-X-28)
  • Federal Title pass-through reconciliation between EDFacts allocations and SEA-published disbursements (G-X-29)
  • Per-district Title allocation spot-checks closing the loop on “we know proficiency AND federal allocation are right for the same district” (G-X-30)
Round 8 federal-data canonical-shape layer (closes “is the federal-dataset row right?”)
  • IPEDS-extension shape discipline for Human Resources / Admissions / Academic Year Tuition / Academic Libraries / Enrollment by CIP — including biennial-even-year discipline on EF-CIP and pre-2014 collection_status discipline on Academic Libraries (G-X-31)
  • OPEID padding integrity on the institution_crosswalk join — the only authorized path between UNITID-keyed (IPEDS, Scorecard, Carnegie) and OPEID-keyed (FSA CDR/GE/NSLDS/HCM, NC-SARA) datasets (G-X-32)
  • College Scorecard shape validators on institution-level + field-of-study tables (G-X-33)
  • Carnegie 2025 four-dimension derivation discipline — is_r1/is_r2 MUST be derivable from research_activity_designation; SAEC eligibility flag MUST be true when SAEC classification is present (G-X-34)
  • FSA regulatory-status discipline preventing accidental publish-rate inference during the 2019-2023 GE rescission gap; CDR status enum + HCM level enum locked (G-X-35)
  • SHEEO SHEF + NC-SARA state-level shape with USPS-keyed JSONB integrity (G-X-36)
  • CRDC biennial discipline — collection year MUST be even; suspensions ≤ 2× total enrollment sanity check (G-X-37)
  • CCD School Universe + EDGE locale enum — title_i_status, charter_status, magnet_status, virtual_indicator, locale_code all locked to documented value sets (G-X-38)
  • OSEP IDEA Part B + K-12 federal program state-aggregate shape (G-X-39)
  • NCES EDGE entity-type-conditional ID-length (school=12 / lea=7-10 / postsecondary=1-6 digits) + NIEER 0-10 quality benchmark hard cap (G-X-40)
Each verification layer is a build-time-enforced Guarantee with a cited spec entry — see The Guarantee → Vertical Intelligence (X) for the full structural enforcement chain. Runtime-truth status (April 2026): 550 Guarantee tests pass on every commit; 0 route type errors; 26 federal datasets ingested with 890,000+ canonical rows ready for live upsert across 35 ingest scripts.
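As a flavor of what one of these canonical-shape validators enforces, here is a simplified sketch in the spirit of G-X-40 (entity-type-conditional ID lengths and the NIEER 0-10 hard cap); this is illustrative, not the production Guarantee code:

```python
# Allowed digit-lengths per NCES EDGE entity type (G-X-40, simplified).
ID_LENGTH = {
    "school": range(12, 13),        # exactly 12 digits
    "lea": range(7, 11),            # 7-10 digits
    "postsecondary": range(1, 7),   # 1-6 digits
}

def valid_edge_id(entity_type: str, entity_id: str) -> bool:
    """An EDGE ID must be all digits, with a length valid for its entity type."""
    return entity_id.isdigit() and len(entity_id) in ID_LENGTH[entity_type]

def valid_nieer_benchmark(score: float) -> bool:
    """NIEER quality benchmarks are hard-capped to the 0-10 scale."""
    return 0 <= score <= 10
```

A row that fails either check would be rejected at build time rather than landing in a canonical table.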

Why this differentiates PILLAR

State DOEs each express proficiency on a different scale, suppression with different sentinels, and accountability in 4-tier vs 5-tier vs A-F schemes, with subgroup labels that vary across all 51 jurisdictions. Each state DOE essentially publishes data that’s only legible inside its own bureaucracy. Without the canonicalization layer, a query like “show me districts with declining ELA proficiency under 50%, chronic absenteeism above 20%, and accountability rating in the bottom two tiers” is structurally impossible across state lines — every comparison would require state-specific knowledge of cut-scores, sentinels, and subgroup mappings. PILLAR’s vertical intelligence MCP layer makes that query a single tool call that runs in milliseconds and returns the unified answer with the comparability caveats baked in.

Provenance + honest coverage limits. Every Tier G response carries row-level source strings and a response-level data_provenance section documenting reporting lag and known gaps. For example, get_district_title_allocations discloses the three dead-end paths that prevent national per-LEA II-A / IV-A coverage (the Q1 2026 ed.gov site reorg deleted the per-state workbooks; the Internet Archive has zero XLSX snapshots of the ed.gov Title paths; USAspending.gov records only state-level primary awards and double-counts carryover obligations). Similarly, the per-state assessment ingestion documents per-state policy footprints (TN “Approached/Met/Exceeded” vs LA “Mastery and above” vs WI “Advanced+Meeting”) and surfaces continuity_break flags whenever a year-over-year comparison crosses an assessment-family transition. Consumers should never guess at data availability — the route tells them what it has and what it doesn’t.
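To make the cross-state normalization concrete, here is a toy sketch of the kind of mapping the canonicalization layer maintains. The level labels for TN, LA, and WI come from the policy footprints described above; the table, function, and counts are illustrative, not PILLAR's actual implementation:

```python
# Which state-specific achievement levels count as "proficient or above".
PROFICIENT_LEVELS = {
    "TN": {"Met", "Exceeded"},       # "Approached" does not count
    "LA": {"Mastery", "Advanced"},   # "Mastery and above"
    "WI": {"Advanced", "Meeting"},
}

def pct_proficient_or_above(state: str, level_counts: dict[str, int]) -> float:
    """Collapse state-specific level counts into the unified metric."""
    total = sum(level_counts.values())
    proficient = sum(n for level, n in level_counts.items()
                     if level in PROFICIENT_LEVELS[state])
    return round(100 * proficient / total, 1)

# A hypothetical TN district: 35 Met + 15 Exceeded out of 100 students.
tn_pct = pct_proficient_or_above(
    "TN", {"Below": 20, "Approached": 30, "Met": 35, "Exceeded": 15})
print(tn_pct)  # 50.0
```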

Compatible AI Assistants

PILLAR’s MCP server works with any MCP-compatible client:
  • Claude Desktop — Anthropic’s AI assistant
  • Claude Code — CLI development assistant
  • Cursor — AI-first code editor
  • VS Code — GitHub Copilot with MCP support
  • ChatGPT Desktop — OpenAI’s assistant
  • Windsurf — Codeium’s AI editor
  • Zed — High-performance editor with MCP

Security

  • API keys are stored as SHA-256 hashes — the raw key is never persisted
  • Each key is scoped to a single organization via Row-Level Security
  • Keys can be revoked instantly from Settings > Integrations
  • All data is org-scoped — you can only query your own organization’s data
  • MCP server inherits PILLAR’s multi-tenant isolation
  • All tools are defined in a single source-of-truth catalog at src/lib/mcp/tool-catalog.ts which both the external MCP server and the in-app Drafter runtime consume. No drift between surfaces.
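The hash-only key storage described in the first bullet follows a standard pattern; PILLAR's exact implementation is not published, so treat this as a generic sketch:

```python
import hashlib
import hmac

def hash_key(raw_key: str) -> str:
    """Persist only this digest; the raw pk_live_... key is shown once and never stored."""
    return hashlib.sha256(raw_key.encode()).hexdigest()

def authenticate(presented_key: str, stored_hash: str) -> bool:
    """Re-hash the presented bearer key and compare digests in constant time."""
    return hmac.compare_digest(hash_key(presented_key), stored_hash)

stored = hash_key("pk_live_example_only")   # what the server keeps
```

Revoking a key is then just deleting its stored hash; the raw key can never be recovered from the server side.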

Example Conversations

CRO Morning Briefing:
“What does my GTM health look like this morning?” Claude calls get_dashboard → returns ARR, pipeline, at-risk accounts, signal count, forecast health
Renewal Review with NRR simulation:
“Which renewals are at risk in the next 30 days — and what would saving all of them do to NRR?” Claude calls get_renewal_risk with days_out: 30 → totals at-risk ARR → calls simulate_nrr_impact with current and projected NRR to show ARR uplift + AE-equivalent headcount
Account Deep Dive with Effectiveness:
“Tell me about Houston ISD and recommend the best save play template for a district like this” Claude calls search_accounts → get_account_360 → get_play_effectiveness filtered to the account’s segment/tier → recommends the highest-win-rate template with evidence
Signal Triage with Task Creation:
“Show me critical signals, acknowledge the top one, and create a follow-up task for the account owner” Claude calls get_active_signals with severity: "CRITICAL" → update_signal_status → create_task with source_type: "signal"
Procurement-aware Forecast:
“What’s our forecast by procurement window instead of CRM stage?” Claude calls get_procurement_forecast → explains the difference vs stage-based forecast → flags which deals are likely to slip based on procurement alignment
Board Briefing Generation:
“Generate a board-grade narrative for Tuesday’s meeting and pull the cohort retention curves to go with it” Claude calls generate_board_narrative → get_cohort_curves → composes the pack
Scoring Credibility Check:
“How accurate has PILLAR’s renewal risk score been historically?” Claude calls get_scoring_backtest → reports hit rate, false positives, and calibration drift
Connector Health Check:
“Are all my data sources syncing correctly?” Claude calls list_connectors → get_sync_log for any connectors showing errors
Score Provenance:
“Where does the health score for Houston ISD come from?” Claude calls get_account_data_sources → returns which values came from CRM vs product analytics vs support tickets

Rate Limits

The MCP server inherits PILLAR’s standard API rate limits. For most tools, this is effectively unlimited for normal AI assistant usage patterns. If you encounter rate limiting, contact support.
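If you do hit a limit, a reasonable client-side pattern before contacting support is exponential backoff with full jitter between retries. This is generic guidance, not a PILLAR-specific requirement:

```python
import random

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0) -> list[float]:
    """Exponential backoff with full jitter: the n-th delay is drawn
    uniformly from [0, min(cap, base * 2**n)] seconds."""
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

delays = backoff_delays(5)  # e.g. sleep(d) between successive retries
```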