coral source add github
coral source add linear
coral source add slack
— 3 sources · 18 tables
Query data across any API, database or file system with SQL. No custom integrations, no ETL, no glue code.
SELECT i.title, p.url, p.author
FROM pagerduty.incidents i
JOIN github.pulls p
ON p.merged_at BETWEEN
i.started_at AND i.resolved_at
WHERE i.urgency = 'high';
| title | url | author |
|---|---|---|
| checkout latency spike | acme/api/pull/1842 | mchen |
| payment timeout cluster | acme/pay/pull/921 | asingh |
| cart API 5xx burst | acme/cart/pull/447 | dlee |
— PagerDuty × GitHub · 3 rows · 84ms
— semantic hints applied · hot path cached
01 Connect your sources
Point Coral at your APIs, databases and files. Each becomes a read-only schema.
coral source add github
coral source add linear
coral source add slack
— 3 sources · 18 tables
02 Query across them with SQL
JOINs across any combination of sources. Coral handles auth, pagination, rate limits and schema mapping.
SELECT m.text, m.author, i.status
FROM slack.messages m
JOIN linear.issues i
ON m.text LIKE '%' || i.key || '%'
WHERE m.channel = '#engineering'
AND i.status != 'done'
ORDER BY m.ts DESC LIMIT 5;
— Slack × Linear · 3 rows · 190ms
03 Plug it into any agent or framework
Use Coral over MCP or from the CLI. One runtime shared across agents.
codex mcp add coral -- coral mcp-stdio
claude mcp add coral -- coral mcp-stdio
npx skills add withcoral/skills
✓ One runtime shared across agents
04 Turn usage into semantic context
Coral learns relationships, joins, and schema hints from every query.
coral sql "DESCRIBE EXTENDED
pagerduty.incidents"
columns
title Utf8 incident title
urgency Enum low | high
service Utf8 service key
relationships
service → datadog.service_health
service → github.deployments
started_at → github.pulls.merged_at
— 142 queries · 12 hints · 73% cache
We benchmarked Coral against individual data source MCPs on the complex tasks that typify coding agent workloads. With Coral, Claude Code is 31% more accurate at 70% lower cost.
Coral is a read layer, so agents can query across sources without mutating upstream systems. Safety without brittle sandboxing.
Scoped tokens, workspace isolation, and per-source permissions. Give agents exactly the access they need and nothing more.
Query pushdown, caching, and efficient pagination keep queries responsive while reducing unnecessary API calls and token-heavy tool loops.
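As a sketch of what pushdown means in practice: a selective filter can be translated into the source API's own query parameters instead of fetching everything and filtering locally. The `github.issues` table and its columns below are illustrative, not a documented Coral schema.

```sql
-- Sketch: the WHERE clause can be pushed down to the GitHub API
-- (e.g. as state/label query parameters), so only matching rows
-- are fetched and paginated, not the whole issue history.
SELECT number, title, author
FROM github.issues
WHERE state = 'open'
  AND label = 'bug'
LIMIT 20;
```

The same query run twice within the cache TTL can then be served from the local cache rather than re-hitting the API.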
Coral uses query patterns, schema access history, and source statistics to make discovery, caching, and execution better over time.
Unblock broken changes by combining PRs, CI failures, and linked issues in one query.
SELECT pr.number, pr.title, ci.failed_step, li.key AS linked_issue
FROM github.pulls pr
JOIN buildkite.builds ci ON ci.commit_sha = pr.head_sha
JOIN linear.issues li ON li.branch_name = pr.head_ref
WHERE ci.state = 'failed'
ORDER BY ci.finished_at DESC LIMIT 10;
Coral is open source under Apache 2.0. Install it, connect a source, and run your first query.
Wrappers solve one source at a time. Coral centralizes auth, retries, pagination, rate limiting, schema mapping, caching, and cross-source joins once for all agents.
MCP provides a standard for connecting AI tools to data sources. Coral sits between your agents and your data sources, adding governance, cross-source joins, and caching on top. You can use Coral alongside MCP or as a standalone data layer — it provides a SQL interface that works with any agent framework.
Coral stores a small amount of contextual metadata locally and queries your APIs on demand. Hot paths are cached locally with a TTL.
No. Coral connects directly to your existing SaaS APIs, databases, and object stores. There's no ETL pipeline, no data migration, and no warehouse required. Your data stays where it is.
No. Coral is a data layer, not an agent framework. It provides a single SQL connection that any agent — regardless of framework — can use to access governed data across your systems.
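Because the interface is plain SQL, any agent, script, or CI job can use the same entry point. For example, the `coral sql` command shown above can be invoked from a shell or a subprocess call; the query and source here are illustrative.

```shell
# One-off query from a script or agent tool call.
# linear.issues is one of the sources added earlier; the
# query itself is an illustrative example.
coral sql "SELECT key, title FROM linear.issues WHERE status != 'done' LIMIT 5"
```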
No. Coral runs locally on your machine or infrastructure. Your data never leaves your environment — queries are executed against your sources directly and results stay on your infrastructure.