# LLM-Friendly Documentation

How Raypx provides machine-readable documentation for AI assistants.
Raypx implements the llms.txt standard, which provides structured, machine-readable documentation for AI assistants like Claude, ChatGPT, and Cursor. This lets AI tools quickly access up-to-date project documentation without scraping or guessing.
## What is llms.txt?
llms.txt is a convention for exposing project documentation in a format optimized for large language models. Instead of relying on AI assistants to scrape and parse HTML pages, you serve a plain text index and full content at known URLs.
## Available Routes
Raypx exposes three documentation endpoints:
| Route | Format | Description |
|---|---|---|
| `/llms.txt` | Plain text | Index page listing all documentation pages with titles and URLs. |
| `/llms-full.txt` | Plain text | Full text content of every documentation page concatenated together. |
| `/llms.mdx/docs/{path}` | Markdown | Individual documentation page as rendered Markdown. |
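As a quick illustration of how a client would target these routes, here is a small helper that builds the three URLs from a base URL and a docs slug. The `llmsRoutes` name is hypothetical, not part of Raypx; the URL shapes follow the table above.

```typescript
// Hypothetical helper: build the llms.txt-family URLs for a docs page.
// Only the URL shapes come from Raypx's route table; the helper itself
// is illustrative.
function llmsRoutes(baseUrl: string, slugs: string[]) {
  return {
    index: `${baseUrl}/llms.txt`,
    full: `${baseUrl}/llms-full.txt`,
    page: `${baseUrl}/llms.mdx/docs/${slugs.join("/")}`,
  }
}
```

For example, `llmsRoutes("https://example.com", ["deployment", "docker"]).page` yields `https://example.com/llms.mdx/docs/deployment/docker`.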
## `/llms.txt` — Index
The index page lists every documentation page in a structured format:
```text
# Raypx

> Deploying Raypx to production with Docker or other platforms.

- [Deployment](/docs/deployment)
- [Docker](/docs/deployment/docker)
- [Environment Variables](/docs/deployment/environment-variables)
- [CI/CD](/docs/deployment/ci-cd)
...
```

This is generated by Fumadocs' `llms()` helper from the content source.
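Because the index uses a fixed line shape, consuming it on the client side is straightforward. A minimal sketch of a parser for the `- [Title](/docs/path)` lines (the `parseLlmsIndex` helper is hypothetical, not part of Raypx or Fumadocs):

```typescript
interface IndexEntry {
  title: string
  url: string
}

// Hypothetical parser for llms.txt index lines of the form
// "- [Title](/docs/path)"; other lines (heading, blockquote) are skipped.
function parseLlmsIndex(text: string): IndexEntry[] {
  const entries: IndexEntry[] = []
  for (const line of text.split("\n")) {
    const match = line.match(/^- \[([^\]]+)\]\(([^)]+)\)/)
    if (match) entries.push({ title: match[1], url: match[2] })
  }
  return entries
}
```

Feeding it the example index above would yield entries like `{ title: "Docker", url: "/docs/deployment/docker" }`.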
## `/llms-full.txt` — Full Content
This endpoint concatenates the full Markdown content of every documentation page into a single response. Each page is separated by a blank line and includes its title and URL:
```text
# Deployment (/docs/deployment)

Deploying Raypx to production with Docker or other platforms...

# Docker (/docs/deployment/docker)

Building and running Raypx with Docker...
```

This is ideal for AI assistants that need comprehensive context about the project.
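A client that wants per-page access to this combined response can split it back into sections on the `# Title (/url)` headings. A sketch under that assumption (the `splitLlmsFull` helper is hypothetical, not part of Raypx):

```typescript
interface DocSection {
  title: string
  url: string
  body: string
}

// Hypothetical splitter for llms-full.txt: each page starts with a
// heading of the form "# Title (/docs/path)"; everything until the
// next such heading is that page's body.
function splitLlmsFull(text: string): DocSection[] {
  const sections: DocSection[] = []
  let current: DocSection | null = null
  for (const line of text.split("\n")) {
    const match = line.match(/^# (.+) \((\/[^)]*)\)\s*$/)
    if (match) {
      if (current) sections.push(current)
      current = { title: match[1], url: match[2], body: "" }
    } else if (current) {
      current.body += (current.body ? "\n" : "") + line
    }
  }
  if (current) sections.push(current)
  return sections
}
```

Note this relies on page bodies never containing a top-level `# Heading (/url)` line of their own, which holds for the format shown above but is an assumption about the real output.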
## `/llms.mdx/docs/{path}` — Individual Pages
Fetch a single documentation page as Markdown:
```shell
curl /llms.mdx/docs/deployment/docker
```

This returns the rendered Markdown for that specific page with a `Content-Type: text/markdown` header.
## Implementation
The routes are defined in `src/routes/`:
### Index and Full Text
The index uses Fumadocs' built-in `llms()` function:
```ts
import { createFileRoute } from "@tanstack/react-router"
import { llms } from "fumadocs-core/source"
import { source } from "@/lib/docs/source"

export const Route = createFileRoute("/llms.txt")({
  server: {
    handlers: {
      GET() {
        return new Response(llms(source).index())
      },
    },
  },
})
```

The full text endpoint fetches all pages and concatenates their rendered Markdown:
```ts
export const Route = createFileRoute("/llms-full.txt")({
  server: {
    handlers: {
      GET: async () => {
        // getLLMText renders a page's content to plain Markdown
        const scan = source.getPages().map(getLLMText)
        const scanned = await Promise.all(scan)
        return new Response(scanned.join("\n\n"))
      },
    },
  },
})
```

### Individual Page Route
The catch-all route handles requests for individual documentation pages:
```ts
export const Route = createFileRoute("/llms.mdx/docs/$")({
  server: {
    handlers: {
      GET: async ({ params }) => {
        // split the splat into path segments and strip the trailing one
        const slugs = params._splat?.split("/") ?? []
        slugs.pop()
        const page = source.getPage(slugs)
        if (!page) throw notFound()
        return new Response(await getLLMText(page), {
          headers: { "Content-Type": "text/markdown" },
        })
      },
    },
  },
})
```

## Content Negotiation
Raypx detects whether a request comes from an AI assistant or a regular browser using content negotiation. When an AI client sends an `Accept: text/plain` or `Accept: text/markdown` header, the server can respond with the appropriate format.
The LLM middleware in the request lifecycle checks for these headers and routes the request to the llms.txt handlers when applicable. This happens before the regular middleware chain so AI clients get fast, lightweight responses without the overhead of full HTML rendering.
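The middleware itself isn't shown here, but the Accept-header check it performs might look something like the following sketch (the `prefersLlmFormat` helper is hypothetical; the real middleware lives in Raypx's request lifecycle):

```typescript
// Hypothetical sketch of the Accept-header check a content-negotiation
// middleware could perform: returns true when the client lists
// text/plain or text/markdown among its accepted media types.
function prefersLlmFormat(acceptHeader: string | null): boolean {
  if (!acceptHeader) return false
  return acceptHeader
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase())
    .some((type) => type === "text/plain" || type === "text/markdown")
}
```

A request with `Accept: text/markdown` would be routed to the llms.txt handlers, while a browser's typical `Accept: text/html,...` would fall through to the normal HTML pipeline.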
## Why This Matters
- AI-assisted development: Tools like Claude Code, Cursor, and GitHub Copilot can read your documentation directly from the running server instead of relying on stale training data.
- Always up-to-date: Since the content is served from the same source as the documentation site, any documentation change is immediately available to AI assistants.
- Bandwidth efficient: Plain text responses are much smaller than full HTML pages, reducing latency for AI clients.
- Standards-based: The llms.txt format is an emerging standard adopted by many open-source projects, making it easier for AI tools to understand your project structure.