Previously known as "AI Feature Tracker." Renamed to "AI Capability Reference" in February 2026 to better reflect the project's ontology-first, capability-centric approach.
Plain-English reference for AI capabilities, plans, constraints, and implementations.
A single source of truth for answering questions like:
- "Is ChatGPT Agent Mode available on the $8/mo plan?" (No, Plus or higher)
- "Can I use Claude Cowork on Windows?" (Not yet, macOS only)
- "Which open/self-hosted models can I realistically run on my hardware?" (Depends on VRAM)
Built for fellow AI facilitators, educators, designers, and anyone who needs accurate, current information about AI tool availability.
There is no database behind this project. Every piece of data — features, pricing tiers, platform support, talking points — lives in plain markdown files under `data/`. A single build script (`node scripts/build.js`) reads those files and renders the entire static site into `docs/`.
That's the whole stack: markdown files, JavaScript, and Git.
This means contributing doesn't require a dev environment, an ORM, or a running database. You edit a .md file, open a PR, and the CI rebuilds the site. Git provides versioning, review, and an audit trail for every change. If you can read a markdown table, you can read (and fix) the data.
This reference covers:
- major consumer-facing AI products with meaningful public usage or visibility
- commercially available AI systems that ordinary people can sign up for and use
- important self-hosted/open model families and runtimes that people can realistically run or choose locally
This reference does not aim to catalog every enterprise AI vendor, infrastructure platform, or niche model release.
The ~1% market share idea is used here as a practical inclusion heuristic, not a strict ontology field, because public usage data is inconsistent and is usually measured at the product level rather than the model level.
All prices are listed in USD. Feature availability and pricing reflect the United States region by default; availability may differ in other regions.
| Platform | Vendor | Features Tracked |
|---|---|---|
| ChatGPT | OpenAI | Agent Mode, Canvas, Voice, Sora, DALL-E, Search, Deep Research, Codex, Custom GPTs |
| Claude | Anthropic | Code, Cowork, MCP, Connectors, Projects, Artifacts, Extended Thinking, Vision |
| Copilot | Microsoft | Office Integration, Designer, Vision, Voice, Agent Builder |
| Gemini | Google | Advanced, NotebookLM, AI Studio, Deep Research, Gems, Workspace, Imagen, Veo, Live, Project Astra |
| Perplexity | Perplexity AI | Comet Browser, Agent Mode, Pro Search, Focus, Collections, Voice |
| Grok | xAI | Chat, Aurora (images), DeepSearch, Think Mode, Voice |
| Meta | Meta | Llama 3.3, Llama 4 |
| Mistral | Mistral | Codestral, Mistral Large/Nemo, Mistral Small 3 |
| DeepSeek | DeepSeek | DeepSeek V3 / R1 |
| Alibaba | Alibaba | Qwen 2.5, Qwen 3, Qwen 3.5, Qwen-Coder |
| Ollama | Ollama | Self-hosted runtime product |
| LM Studio | LM Studio | Self-hosted runtime product |
- Plan-by-plan availability — See exactly which tier unlocks each feature
- Capability-first view — Browse plain-English capabilities in addition to the feature view
- Platform support — Windows, macOS, Linux, iOS, Android, web, terminal, API
- Talking points — Ready-to-use sentences for presentations (click to copy)
- Category filtering — Voice, Coding, Research, Agents, and more
- Price tier filtering — Find features at your budget
- Provider toggles — Focus on specific platforms
- Dark/light mode — Toggle for your preference
- Permalinks — Link directly to any feature with shareable URLs
- Shareable URLs — Filter state preserved in URL parameters
- Maintainer-led — Managed by SnapSynapse; public issues and PRs welcome
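The shareable-URL behavior above comes down to a round trip through `URLSearchParams`. A sketch of that round trip follows; the key names (`category`, `tier`, `providers`) are illustrative, not necessarily the site's real parameters:

```javascript
// Sketch: preserving filter state in URL query parameters.
// encodeFilters skips empty values so the URL stays short.
function encodeFilters(filters) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (Array.isArray(value)) {
      if (value.length) params.set(key, value.join(','));
    } else if (value) {
      params.set(key, value);
    }
  }
  return params.toString();
}

// listKeys names the parameters that hold comma-separated lists.
function decodeFilters(queryString, listKeys = ['providers']) {
  const params = new URLSearchParams(queryString);
  const filters = {};
  for (const [key, value] of params.entries()) {
    filters[key] = listKeys.includes(key) ? value.split(',') : value;
  }
  return filters;
}
```

In the browser, the encoded string would be pushed into `location.search` (e.g. via `history.replaceState`) whenever a filter changes, and decoded once on page load.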
This site is designed to meet WCAG 2.1 AA standards:
- Keyboard navigation — Full keyboard support with ↑/↓/j/k to navigate cards, Enter to copy, Tab to move between interactive elements
- Skip link — "Skip to main content" link for screen reader users (visible on focus)
- Focus indicators — Clear 2px accent-colored outlines on all interactive elements
- Color contrast — Minimum 4.5:1 contrast ratio for all text in both light and dark modes
- Reduced motion — Animations and transitions disabled when `prefers-reduced-motion` is enabled
- Touch targets — Minimum 44px touch targets on mobile for easier tapping
- ARIA attributes — Live regions announce filter count changes, decorative images marked with `aria-hidden`
- Semantic HTML — Proper heading hierarchy, landmark regions, and button/link semantics
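The j/k/arrow navigation reduces to a small pure function that maps a keypress to the next card index. A sketch of that logic follows (the actual DOM wiring with a `keydown` listener and `focus()` calls is separate and not shown here):

```javascript
// Sketch of card-navigation logic: map a keypress to the next card index.
// Keeping it a pure function makes the behavior easy to test without a DOM.
function nextCardIndex(current, key, total) {
  if (total === 0) return -1; // nothing to navigate
  if (key === 'j' || key === 'ArrowDown') return Math.min(current + 1, total - 1);
  if (key === 'k' || key === 'ArrowUp') return Math.max(current - 1, 0);
  return current; // unhandled keys leave focus where it is
}
```

A listener would then call something like `cards[nextCardIndex(i, event.key, cards.length)].focus()`, clamping at both ends so focus never falls off the list.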
If you're interested in accessibility tooling, check out skill-a11y-audit — a companion project that automates WCAG audits as a reusable AI skill. If you like what you see here, you'll love that.
Data in this reference is kept current through a semi-automated process combining scheduled AI verification with human review.
How it works:
Every Sunday, a four-model cascade queries Gemini, Perplexity, Grok, and Claude to cross-check all tracked features — pricing tiers, platform availability, status, gating, and regional restrictions. To prevent provider bias, models are skipped when verifying features from their own platform (e.g. Gemini is not asked about Google features). A change is only flagged when at least three models agree on a discrepancy. Confirmed changes are surfaced as GitHub issues or pull requests for human review — nothing is auto-merged.
Link integrity is checked separately every Saturday, catching broken or redirected source URLs across all platforms.
Features are also marked with a Checked date. Anything not re-verified within 7 days is treated as stale and prioritized in the next run.
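The cascade's two core rules — skip same-provider models, and require at least three agreeing models before flagging — can be sketched as plain functions. The model-to-vendor mapping mirrors the text above; the real `scripts/verify-features.js` is more involved:

```javascript
// Sketch of the verification cascade's consensus rules.
// Vendor strings here mirror the README's platform table.
const MODEL_VENDOR = {
  gemini: 'Google',
  perplexity: 'Perplexity AI',
  grok: 'xAI',
  claude: 'Anthropic',
};

// Bias prevention: a model never verifies its own vendor's features.
function eligibleModels(featureVendor) {
  return Object.keys(MODEL_VENDOR).filter((m) => MODEL_VENDOR[m] !== featureVendor);
}

// answers: { modelName: reportedValue } for one field of one feature.
// Returns a proposed change only when >= quorum models agree on the same
// value that differs from the currently recorded one.
function consensusChange(currentValue, answers, quorum = 3) {
  const counts = {};
  for (const value of Object.values(answers)) {
    if (value !== currentValue) counts[value] = (counts[value] || 0) + 1;
  }
  const flagged = Object.entries(counts).find(([, n]) => n >= quorum);
  return flagged ? { from: currentValue, to: flagged[0] } : null;
}
```

A non-null result would then become a GitHub issue or PR for human review; nothing in this path auto-merges.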
Found outdated info? Want to add a feature? See CONTRIBUTING.md.
Quick version:
- Edit the relevant record in `data/platforms/`, `data/model-access/`, `data/products/`, or `data/implementations/`
- Include or preserve the evidence source link
- Run `node scripts/sync-evidence.js`
- Run `node scripts/validate-ontology.js`
- Submit a PR
This project includes an automated feature verification system that uses multiple AI models to cross-reference all feature data.
- Pricing tiers — Which subscription plans have access
- Platform availability — Windows, macOS, Linux, iOS, Android, web, terminal, API
- Status — GA, Beta, Preview, Deprecated
- Gating — Free, Paid, Invite-only, Org-only
- Regional availability — Global vs region-restricted features
- URLs — Feature page links are valid and accessible
- Multi-model cascade — Queries Gemini, Perplexity, Grok (X/Twitter), and Claude
- Bias prevention — Skips same-provider models (e.g., won't ask Gemini about Google features)
- Consensus required — Needs 3 models to confirm a change before flagging
- Auto-changelog — Confirmed changes are logged to each feature's changelog
- Human review — Creates issues/PRs for review, never auto-merges
```bash
# Verify all features
node scripts/verify-features.js

# Verify a specific platform
node scripts/verify-features.js --platform claude

# Check only stale features (>7 days since last check)
node scripts/verify-features.js --stale-only

# Dry run (no issues created)
node scripts/verify-features.js --dry-run
```

Two link checkers serve different purposes:
CI checker (`check-links.js`) — runs in GitHub Actions weekly. Uses canonical categories from the collaboration protocol:

- `ok`
- `broken`
- `soft-blocked` (e.g., persistent 403/bot protection; informational)
- `rate-limited` (429; informational)
- `timeout`
- `needs-manual-review`

The checker fails CI on actionable problems (`broken`, `timeout`) while keeping `soft-blocked`/`rate-limited` as non-actionable signals.
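A classifier in that spirit might look like the following sketch. The category names come from the list above, while the exact thresholds (e.g. when a 403 counts as `soft-blocked` rather than `broken`) are assumptions, not the script's real rules:

```javascript
// Sketch: map one HTTP check result to a canonical link category.
// result: { status?: number, timedOut?: boolean, persistent?: boolean }
function classifyLink(result) {
  if (result.timedOut) return 'timeout';
  const status = result.status;
  if (status >= 200 && status < 400) return 'ok'; // redirects treated as ok here
  if (status === 429) return 'rate-limited';
  if (status === 403 && result.persistent) return 'soft-blocked';
  if (status >= 400) return 'broken';
  return 'needs-manual-review';
}

// Only some categories should fail CI; the rest are informational signals.
const ACTIONABLE = new Set(['broken', 'timeout']);
function failsCi(categories) {
  return categories.some((c) => ACTIONABLE.has(c));
}
```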
```bash
node scripts/check-links.js               # Check all links
node scripts/check-links.js --broken-only # Show only broken links
```

Browser checker (`check-links-browser.js`) — runs locally through a real Chrome browser via the Chrome DevTools Protocol. Bypasses all bot protection, captures page titles for content verification, and shows redirects. Zero external dependencies.
```bash
# Terminal 1: Start Chrome with remote debugging
# (--user-data-dir avoids conflicts with your normal browser session)

# macOS (Chrome)
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-link-check

# macOS (Brave, Edge, or any Chromium browser also works)

# Linux
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-link-check

# Terminal 2: Run the checker
node scripts/check-links-browser.js           # All platforms
node scripts/check-links-browser.js -p claude # One platform
node scripts/check-links-browser.js --help    # All options
```

Requires API keys (verification only): `GEMINI_API_KEY`, `PERPLEXITY_API_KEY`, `XAI_API_KEY`, `ANTHROPIC_API_KEY`.
See VERIFICATION.md for full documentation.
This repo includes a lightweight secret scan for the current tree and full Git history.
```bash
node scripts/scan-secrets.js
```

Use `--current-only` or `--history-only` to narrow the scan when needed.
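In spirit, such a scan is a set of regex rules applied to file contents and historical blobs. A sketch with illustrative patterns follows; these are not the real rules in `scripts/scan-secrets.js`:

```javascript
// Sketch of a pattern-based secret scan over a chunk of text.
// Both patterns are illustrative examples, not the repo's actual rule set.
const PATTERNS = [
  { name: 'generic-api-key', re: /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]/i },
  { name: 'private-key-block', re: /-----BEGIN [A-Z ]*PRIVATE KEY-----/ },
];

// Return the names of every pattern that matches the given text.
function scanText(text) {
  return PATTERNS.filter((p) => p.re.test(text)).map((p) => p.name);
}
```

Scanning history then amounts to running the same matcher over each blob that `git log`/`git cat-file` can produce, which is why the script offers `--current-only` and `--history-only` to split the two passes.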
```bash
# Clone the repo
git clone https://github.com/snapsynapse/ai-capability-reference.git
cd ai-capability-reference

# Build the dashboard
node scripts/build.js

# Sync ontology-native evidence records
node scripts/sync-evidence.js

# Validate ontology records
node scripts/validate-ontology.js

# Open it
open docs/index.html
```

Archived platform bundles live under `data/archive/platforms/` for historical reference, and `node scripts/build.js` reads only active evidence from `data/platforms/`.
Ontology-native evidence records live in `data/evidence/index.json` and are seeded with `node scripts/sync-evidence.js`.
Canonical cross-platform skill sources live under skills.
All repo skills should use skills/<name>/src/ as the only canonical source location. Do not commit mirrored skill copies under .claude/skills/, .perplexity/skills/, or other platform-local folders; those are local install locations, not source of truth.
Repo skills also assume the companion skill-provenance metaskill is available anywhere the bundles are imported. Treat it as a required dependency for bundle identity, manifest continuity, and cross-surface handoff.
Build platform exports with:
```bash
node scripts/build-skill-bundles.js
```

That generates Perplexity, Claude, and Codex-ready outputs under each skill's local `dist/` directory without turning the repo itself into a pile of hand-maintained exports.
Platform data is stored in simple markdown files. Example:
```markdown
## Feature Name

| Property | Value |
|----------|-------|
| Category | agent |
| Status | ga |

### Availability

| Plan | Available | Limits | Notes |
|------|-----------|--------|-------|
| Free | ❌ | — | Not available |
| Plus | ✅ | 40/month | Message limit |

### Talking Point

> "Your presenter-ready sentence with **key details bolded**."

### Sources

- [Official docs](https://example.com)
```

See `data/_schema.md` for the full specification.
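A record in that shape is straightforward to parse with plain string handling. The following sketch extracts the Availability table into objects, assuming exactly the layout shown above (the real parsing lives in the build script):

```javascript
// Sketch: parse the "### Availability" table from a feature's markdown record.
// Assumes the 4-column table shape (Plan, Available, Limits, Notes).
function parseAvailability(md) {
  const section = md.split('### Availability')[1] || '';
  const rows = section
    .split('\n')
    .filter((line) => line.trim().startsWith('|'))
    .map((line) => line.split('|').slice(1, -1).map((cell) => cell.trim()));
  // Drop the header row and the |---| separator row.
  return rows.slice(2).map(([plan, available, limits, notes]) => ({
    plan,
    available: available === '✅',
    limits,
    notes,
  }));
}
```

Because the on-disk format is just a markdown table, "the database" stays human-reviewable in PR diffs while remaining trivially machine-readable at build time.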
The site auto-deploys via GitHub Actions when changes are pushed to main.
- Build job (`.github/workflows/build.yml`)
  - Runs `node scripts/build.js` to regenerate all pages under `docs/`
  - If output changed, commits it back to `main` with `[skip ci]` to prevent loops
  - Runs on both pushes and PRs (PRs only validate the build, no commit)
- Deploy job (same workflow)
  - Uploads the `docs/` folder to GitHub Pages
  - Only runs on pushes to `main`, not PRs
- FTP deploy (`.github/workflows/deploy-ftp.yml`)
  - Parallel deployment to snapsynapse.com via locked FTP
  - Requires `FTP_HOST`, `FTP_USER`, `FTP_PASS` secrets
To enable GitHub Pages on a fork:
- Go to Settings → Pages
- Under "Build and deployment", select GitHub Actions
- The workflow will deploy to `https://<username>.github.io/ai-capability-reference/`
```bash
node scripts/build.js
```

Output files:

- `docs/index.html` — Capability homepage
- `docs/implementations.html` — Detailed Availability (implementation matrix)
- `docs/constraints.html` — Access & Limits (constraint view)
- `docs/capabilities.html` — Redirect stub (backward compat)
- `docs/about.html` — About page (generated from README.md)
MIT - see LICENSE
Created by SnapSynapse for the AI training community. With help from Claude Code, of course.
Found an error? Open an issue or submit a PR!
