
Spaces

Spaces is the in-app AI assistant in Medplum Provider. A user types a question in natural language; under the hood a chain of Medplum bots translates the question into FHIR API calls, executes them against the project, summarizes the results in the chat, and optionally renders a generated chart inside the same view.

Spaces is shipped as an example implementation. The Provider app source lives in examples/medplum-provider and the bot source in examples/medplum-demo-bots/src/spaces-bots. Use this page to set Spaces up in your own project, understand what it does behind the chat input, and tune the parts that are designed to be tuned.

The assistant's behavior – what it says, what it refuses, which FHIR strategies it uses, how it summarizes, what charts it favors – is determined by three Communication resources that you author for your deployment. Spaces ships without canonical prompts on purpose: the right behavior depends on what your clinic plans to use the feature for. See Author System Prompt Communications below.

The feature is reachable at /Spaces/Communication in the Provider app. It is gated on the project having both the ai and bots features enabled.

What You Can Do With Spaces

The translator bot can issue any FHIR request the requester's AccessPolicy permits, so the practical capability surface is broad. Typical prompts fall into a few categories:

  • Search and lookup – "Find the patient John Smith." "Which patients are scheduled with Dr. Chen this week?"
  • Clinical summary – "Summarize Maria Garcia's last three encounters." "What medications is this patient on?"
  • Visualize trends – "Show a growth chart for this patient." "Plot hemoglobin A1c over the last two years."
  • Operational reporting – "Chart provider utilization across the clinic for the past month." "How many lab orders did we place last week?"
  • Schedule and tasks – "Schedule a follow-up with Dr. Patel next Tuesday at 10am." "Create a task to fax the imaging report to the referring provider."
  • Orders and updates – "Order a CBC for this patient." "Place a referral to cardiology." "Update the patient's phone number to 555-0142."

The first four bullets are read-only and end at the summary bot. The visualization and reporting prompts additionally invoke the visualizer bot to render an interactive chart in the side panel. The last two bullets trigger live FHIR writes through the same loop.

caution

Writes happen as soon as the loop reaches them. Treat clinically significant prompts (orders, referrals, status changes) the same way you would any draft – review the resulting resource before relying on it, and consider scoping the ai feature to AccessPolicies that do not include write access to high-risk resource types.

How Spaces Works

Spaces is not a single AI call. Each user prompt drives a short pipeline of bots and a tool-use loop.

Three things to keep in mind:

  • The Provider UI, not the bots, executes the FHIR requests that the translator suggests. Every request runs under the signed-in user's access policies, so the assistant can never read or write resources the user could not access directly.
  • The bots are the only thing that talks to OpenAI. The browser never sees the OpenAI API key. All AI traffic flows through the server-side $ai operation, which reads the key from project secrets.
  • Conversation history is persisted as Communication resources, so transcripts are searchable, auditable, and survive a page reload.

Prerequisites

Enable Project Features

Spaces requires the ai and bots features on your Medplum project. Both are disabled by default. Contact info@medplum.com to enable them on your account; project administrators cannot toggle these features directly.

The Provider app's Spaces page short-circuits if either feature is missing, and the $ai operation rejects the request server-side if ai is not enabled.
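You can also sanity-check the flags from code, for example in a setup script. The snippet below is a sketch: it assumes the flags surface on Project.features as the strings ai and bots, and that medplum is an authenticated MedplumClient, as in the other snippets on this page.

// Sketch: confirm the current project has both features Spaces needs.
// Assumes the flags appear in Project.features as 'ai' and 'bots'.
const features = (medplum.getProject()?.features ?? []) as string[];
const spacesReady = features.includes('ai') && features.includes('bots');
console.log(spacesReady ? 'Spaces features are enabled' : 'Missing the ai and/or bots feature');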

Configure The OpenAI API Key

Add your OpenAI key as a project secret named OPENAI_API_KEY. The $ai operation reads it from the project on every call and forwards it to OpenAI's chat completions endpoint. It is never sent to the client.

You must be a project administrator to add secrets. In the Medplum App:

  1. Open Project Admin (left sidebar, or app.medplum.com/admin/project).
  2. Click the Secrets tab.
  3. Click Add and enter OPENAI_API_KEY as the name, string as the type, and your OpenAI key as the value.

See Bot Secrets for the same workflow described in more detail.
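If you provision environments with scripts, the secret can also be written programmatically. The sketch below assumes your account is allowed to update the Project resource and that the secret is stored as a name/valueString pair on Project.secret; the App steps above remain the supported path, and 'your-project-id' and 'sk-...' are placeholders.

// Sketch: add OPENAI_API_KEY as a project secret programmatically.
// Requires permission to update the Project resource; otherwise use the App UI.
const project = await medplum.readResource('Project', 'your-project-id');
await medplum.updateResource({
  ...project,
  secret: [...(project.secret ?? []), { name: 'OPENAI_API_KEY', valueString: 'sk-...' }],
});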

caution

If the key is missing, the $ai operation returns OpenAI API key not configured in project secrets and every Spaces turn fails. Set this before deploying the bots.

Deploy The Spaces Bots

Spaces depends on three bots that ship under examples/medplum-demo-bots/src/spaces-bots and are already registered in that project's medplum.config.json. The Provider UI resolves each one by Identifier (system https://www.medplum.com/bots), so the identifier values below are load-bearing.

Identifier Value | Source File | Role
ai-fhir-request-tools | fhir-translator-bot.ts | Translator. Emits fhir_request tool calls and a visualize flag.
ai-resource-summary-sse | fhir-summary-bot.ts | Streaming summary bot (Server-Sent Events).
ai-component-generator-sse | fhir-visualizer-bot.ts | Streaming Recharts / Mantine Chart() JSX generation.

To deploy:

  1. Clone the demo-bots project (or copy the spaces-bots directory and the matching entries from medplum.config.json into your own bot project).
  2. Build and deploy each bot following Bot Basics and Bots In Production.
  3. After deployment, open each Bot resource in the Medplum App and add an identifier entry whose system is https://www.medplum.com/bots and whose value matches the table above. The Provider UI uses this identifier to find the bot, so a missing or mistyped value silently breaks the feature.
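Step 3 can also be scripted once you know each deployed Bot's id. A sketch for the translator bot, where 'your-translator-bot-id' is a placeholder; repeat for the other two bots with their identifier values from the table:

// Sketch: tag a deployed Bot with the identifier the Provider UI resolves.
const bot = await medplum.readResource('Bot', 'your-translator-bot-id');
await medplum.updateResource({
  ...bot,
  identifier: [...(bot.identifier ?? []), { system: 'https://www.medplum.com/bots', value: 'ai-fhir-request-tools' }],
});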

Author System Prompt Communications

Each of the three Spaces bots loads its system prompt from a Communication resource at request time, not from code. These prompts are operator-authored content. They decide what Spaces does in your clinic: its tone, what it refuses, which FHIR-call strategies the translator prefers, how aggressively the summary bot narrates, which chart types the visualizer reaches for. Treat writing them as part of building the feature, not as a setup step that copies a canned recipe.

Three Communications are required, one per bot. The identifier system is http://medplum.com/ai-spaces; the value matches the bot identifier value used by the Provider UI.

Bot | Prompt Communication identifier.value | Payload Shape
ai-fhir-request-tools | ai-fhir-request-tools | Both payloads are required. payload[0] is the system prompt. payload[1] is a profile-context template; any {{ref}} is replaced at request time with the requester's reference string (for example, Practitioner/abc-123) and appended to the prompt.
ai-resource-summary-sse | ai-resource-summary-sse | payload[0] is the system prompt. No profile-context template.
ai-component-generator-sse | ai-component-generator-sse | payload[0] is the system prompt. No profile-context template.

If any Communication is missing, the corresponding bot throws ("... system prompt is not available") and that part of the loop fails: a missing translator prompt stops the user's message; a missing summary prompt leaves the chat without narration; a missing visualizer prompt leaves the chart panel empty.

The example below seeds all three Communications. The bodies are thin but functional: the loop will run end-to-end so you can verify wiring, but they are not production prompts. Replace each payload[0].contentString with text authored for your deployment before exposing Spaces to users.

// One-time setup: create the three Communications that hold the system prompts
// for the translator, summary, and visualizer bots. The bodies below are a
// thin wiring skeleton - the loop runs end-to-end with these in place, but
// you should replace each payload[0].contentString with prompts authored for
// your deployment before exposing Spaces to users.

// Translator: produces the fhir_request tool calls that drive the loop.
await medplum.createResource<Communication>({
  resourceType: 'Communication',
  status: 'completed',
  identifier: [{ system: 'http://medplum.com/ai-spaces', value: 'ai-fhir-request-tools' }],
  payload: [
    {
      contentString: [
        'You are a FHIR data assistant for Medplum.',
        'Use the fhir_request tool for every FHIR operation - never invent results.',
        'For updates, first GET the resource, then PUT the modified full resource.',
        'Set visualize=true on the tool call when the result should be a chart',
        '(for example trends or values over time).',
      ].join('\n'),
    },
    {
      // payload[1] is a profile-context template; {{ref}} is replaced at request
      // time with the requester's reference string (e.g. Practitioner/abc-123).
      contentString: 'The requester is {{ref}}. Scope queries to data they are entitled to see.',
    },
  ],
});

// Summary: narrates the FHIR responses the translator collected.
await medplum.createResource<Communication>({
  resourceType: 'Communication',
  status: 'completed',
  identifier: [{ system: 'http://medplum.com/ai-spaces', value: 'ai-resource-summary-sse' }],
  payload: [
    {
      contentString: [
        'You translate FHIR response bundles into clear, human-readable summaries.',
        'Lead with the most clinically relevant detail. Be concise.',
        'If the response is an empty bundle, say so plainly and suggest what to try next.',
      ].join('\n'),
    },
  ],
});

// Visualizer: produces a self-contained Chart() React component when asked.
await medplum.createResource<Communication>({
  resourceType: 'Communication',
  status: 'completed',
  identifier: [{ system: 'http://medplum.com/ai-spaces', value: 'ai-component-generator-sse' }],
  payload: [
    {
      contentString: [
        'You generate a self-contained function Chart() React component that visualizes FHIR data.',
        'Use only the Recharts and Mantine components already in scope - do not write import statements.',
        'Prefer LineChart or AreaChart for trends over time and BarChart or PieChart for categorical counts.',
      ].join('\n'),
    },
  ],
});

Conversation Data Model

Each conversation in Spaces is a small tree of Communication resources. Splitting the thread header from each turn keeps the agent loop recoverable mid-iteration and lets the history sidebar list conversations with a single search.

Topic Communication

One per conversation. Created on the user's first message.

Field | Value
identifier | system http://medplum.com/ai-message, value ai-message-topic
sender | Reference to the user who started the conversation
status | in-progress
topic.text | First 100 characters of the user's first message, used as the sidebar title
note[0].text | JSON-encoded { "model": "..." } capturing the model chosen for the conversation

// Topic headers carry identifier ai-message-topic. The Provider UI filters by
// the current user's reference so each user only sees their own conversations.
const profile = await medplum.getProfile();
const topics = await medplum.searchResources('Communication', {
  identifier: 'http://medplum.com/ai-message|ai-message-topic',
  sender: profile?.id ? `${profile.resourceType}/${profile.id}` : '',
  _sort: '-_lastUpdated',
  _count: '10',
});
console.log(topics);

The Provider UI filters by sender so users only see their own conversations in the sidebar. Adjust this filter in your own UI if you need a shared inbox.
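For reference, creating a topic header yourself looks roughly like the sketch below, built from the field table above; it assumes gpt-4 was the model chosen in the chat input.

// Sketch: create a conversation topic header per the field table above.
const profile = await medplum.getProfile();
const firstMessage = 'Find the patient named John Smith';
const topic = await medplum.createResource<Communication>({
  resourceType: 'Communication',
  status: 'in-progress',
  identifier: [{ system: 'http://medplum.com/ai-message', value: 'ai-message-topic' }],
  sender: { reference: `${profile?.resourceType}/${profile?.id}` },
  topic: { text: firstMessage.slice(0, 100) },
  note: [{ text: JSON.stringify({ model: 'gpt-4' }) }],
});
console.log(topic.id);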

Message Communications

One per turn (user, assistant, tool). Each is linked to its topic by partOf.

Field | Value
identifier | system http://medplum.com/ai-message, value ai-message
partOf[0].reference | Communication/<topic-id>
status | completed
payload[0].contentString | JSON of { role, content, tool_calls, tool_call_id, resources, componentCode, sequenceNumber }

sequenceNumber is the source of truth for ordering; _lastUpdated is only used for pagination. The Provider UI persists every tool call and tool response on its own message so a mid-loop failure leaves a readable transcript.

// Each turn is a child Communication linked via partOf. payload[0].contentString
// is JSON of { role, content, tool_calls, tool_call_id, resources, componentCode, sequenceNumber }.
const topicId = 'example-topic-id';
const messages = await medplum.searchResources('Communication', {
  'part-of': `Communication/${topicId}`,
  _sort: '_lastUpdated',
  _count: '100',
});
const turns = messages
  .filter((m) => m.payload?.[0]?.contentString)
  .map((m) => JSON.parse(m.payload?.[0]?.contentString as string))
  .sort((a, b) => (a.sequenceNumber ?? 0) - (b.sequenceNumber ?? 0));
console.log(turns);
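
Writing a turn back follows the same shape. The sketch below persists a single user message against the topic; the exact set of JSON fields varies by role (the full list is in the table above), and a user turn like this one carries just role, content, and sequenceNumber.

// Sketch: persist one user turn as a child Communication of the topic.
const topicId = 'example-topic-id';
await medplum.createResource<Communication>({
  resourceType: 'Communication',
  status: 'completed',
  identifier: [{ system: 'http://medplum.com/ai-message', value: 'ai-message' }],
  partOf: [{ reference: `Communication/${topicId}` }],
  payload: [
    {
      contentString: JSON.stringify({
        role: 'user',
        content: 'Find the patient named John Smith',
        sequenceNumber: 1,
      }),
    },
  ],
});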

The Bots

All three bots are thin wrappers over $ai. They accept a Parameters resource with a messages JSON array and a model string, and they return a Parameters resource the Provider UI can consume directly.

FHIR Translator Bot

ai-fhir-request-tools. Converts the conversation history into one or more fhir_request tool calls. Each call carries an HTTP method (GET / POST / PUT / DELETE), a FHIR path, and an optional body. The bot also reports a visualize boolean, derived from the tool-call arguments, that tells the UI whether the final answer should be rendered as a chart.

Inputs:

Parameter | Type | Required | Description
messages | valueString | Yes | JSON-encoded conversation history (OpenAI chat format).
model | valueString | No | OpenAI model to use. Defaults to gpt-4 if omitted. The shipping Provider UI always passes a model from the chat-input dropdown, so this default only applies when you invoke the bot directly.

Outputs: content (string, may be null when the model only returns tool calls), tool_calls (JSON array of { id, function: { name, arguments } }), and visualize (boolean).

The translator is the only bot that loops. The Provider UI re-invokes it after every batch of tool calls until the model decides it has enough context to answer.

// Direct invocation of the translator bot. The Provider UI does this on every
// loop iteration; you only need this snippet to build your own client.
const translatorResponse = await medplum.executeBot(
  { system: 'https://www.medplum.com/bots', value: 'ai-fhir-request-tools' },
  {
    resourceType: 'Parameters',
    parameter: [
      {
        name: 'messages',
        valueString: JSON.stringify([{ role: 'user', content: 'Find the patient named John Smith' }]),
      },
      { name: 'model', valueString: 'gpt-4' },
    ],
  } satisfies Parameters
);
console.log(translatorResponse);
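
The response is a Parameters resource whose parts match the outputs listed above. The sketch below pulls them out, assuming content and tool_calls come back as valueString parts and visualize as a valueBoolean; adjust if your deployment encodes them differently.

// Sketch: read the translator's outputs from the returned Parameters resource.
const parts = (translatorResponse as Parameters).parameter ?? [];
const content = parts.find((p) => p.name === 'content')?.valueString;
const toolCalls = JSON.parse(parts.find((p) => p.name === 'tool_calls')?.valueString ?? '[]');
const visualize = parts.find((p) => p.name === 'visualize')?.valueBoolean ?? false;
console.log({ content, toolCalls, visualize });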

FHIR Summary Bot

ai-resource-summary-sse. Takes the conversation history after the loop has finished and produces a streaming, plain-language narration of the resources the translator pulled.

Inputs: same messages and model parameters as the translator.

Outputs: content (the narration), streamed as SSE chunks.

FHIR Visualizer Bot

ai-component-generator-sse. Only invoked when the translator set visualize=true at least once during the loop. Receives the resolved FHIR resources and produces a self-contained function Chart() React component, streamed to the UI inside a fenced code block. The generated component uses pre-scoped Recharts primitives (line, bar, area, pie, scatter, composed) and Mantine layout primitives – no import statements required.

Inputs: messages, model, and fhirData (valueString, JSON array of resolved FHIR resources collected during the loop).

Outputs: a streamed JSX code block that the Provider UI parses with a small streaming code extractor and renders in the right-hand panel of the chat view.
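
For reference, a direct invocation would send an input Parameters resource shaped like the sketch below. This shows the input shape only; the Provider UI consumes the bot's output as an SSE stream rather than a plain synchronous response, and the message and data values here are placeholders.

// Sketch: the input shape the visualizer bot expects. fhirData carries the
// resources collected during the loop; the values below are placeholders.
const visualizerInput = {
  resourceType: 'Parameters',
  parameter: [
    {
      name: 'messages',
      valueString: JSON.stringify([{ role: 'user', content: 'Show a growth chart for this patient' }]),
    },
    { name: 'model', valueString: 'gpt-4' },
    { name: 'fhirData', valueString: JSON.stringify([]) }, // replace with the resolved FHIR resources
  ],
} satisfies Parameters;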

The Agent Loop

Spaces is a true ReAct-style agent, not a single OpenAI round trip. Per user prompt, the translator may chain several FHIR calls before producing a final answer.

The loop in spaceMessaging.ts runs like this:

  1. Invoke the translator with the current conversation.
  2. If it returns no tool calls, exit the loop with its content as the final answer.
  3. If it returns tool calls, execute each fhir_request against the FHIR API (GET / POST / PUT / DELETE), append the responses to the conversation, and go back to step 1.

While the loop runs, the UI surfaces the active call as Step N: <METHOD> <path> so users can see what the model is doing in real time.
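
In code terms the loop amounts to the sketch below. invokeTranslator and executeFhirRequest are hypothetical helpers standing in for the real spaceMessaging.ts functions (the first wraps the executeBot call shown earlier, the second issues the request as the signed-in user), and the tool-call argument field names are assumptions rather than the exact schema.

// Simplified sketch of the agent loop; helper names and argument fields are illustrative.
const MAX_AGENT_ITERATIONS = 10;
const conversation: any[] = [{ role: 'user', content: "Summarize this patient's last three encounters" }];
let finalAnswer: string | undefined;

for (let i = 0; i < MAX_AGENT_ITERATIONS; i++) {
  const { content, tool_calls } = await invokeTranslator(medplum, conversation, 'gpt-4');
  if (!tool_calls?.length) {
    finalAnswer = content; // the model has enough context: exit the loop
    break;
  }
  conversation.push({ role: 'assistant', content, tool_calls });
  for (const call of tool_calls) {
    const { method, path, body } = JSON.parse(call.function.arguments);
    // Every request runs as the signed-in user, under their AccessPolicy.
    const result = await executeFhirRequest(medplum, method, path, body);
    conversation.push({ role: 'tool', tool_call_id: call.id, content: JSON.stringify(result) });
  }
}
// When the loop exits (or hits the cap), the conversation goes to the summary bot,
// and the visualizer runs if any tool call set visualize=true.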

Iteration Cap

The loop is capped at MAX_AGENT_ITERATIONS = 10. When the cap is hit, the loop short-circuits to the summary bot and the response is appended with a note telling the user the request reached the processing limit. Increasing the cap lets the agent answer more complex chained questions; decreasing it bounds per-prompt cost.

Adjusting The Cap

Because Spaces ships as an example implementation, the cap is a constant in the Provider source, not a runtime setting. To change it, fork examples/medplum-provider and edit MAX_AGENT_ITERATIONS in src/utils/spaceMessaging.ts.

tip

If you find users routinely hitting the cap, tighten the translator system prompt first. Many long loops come from the model fetching adjacent data it does not need.

Streaming Behavior

The summary and visualizer bots stream their output as Server-Sent Events from OpenAI through $ai to the browser, so users see the narration appear word-by-word and the chart code render as it is generated. Tool calls run only on the non-streaming path – this is a constraint of OpenAI's streaming protocol, surfaced in the $ai operation docs.

Model Selection

The Provider UI exposes a model selector in the chat input. The chosen model flows through to every bot via the model parameter and on to $ai. The dropdown contents are a UI concern; change it in ChatInput.tsx if you want to expose different models. The bots themselves are model-agnostic – they pass whatever string they are given.

Customizing System Prompts

Spaces' behavior – when it calls FHIR, how it phrases summaries, which chart types it reaches for – is determined almost entirely by the three system-prompt Communications, not by the bot code (the bots are thin wrappers around $ai). To change behavior, edit payload[0].contentString on the relevant Communication. The next user message picks up the new prompt without a bot redeploy.

// To tune the translator at runtime, edit payload[0].contentString on the
// existing Communication. The next user message picks up the new prompt
// without redeploying any bot.
const existing = await medplum.searchOne('Communication', {
  identifier: 'http://medplum.com/ai-spaces|ai-fhir-request-tools',
});
if (existing?.id) {
  await medplum.updateResource<Communication>({
    ...existing,
    payload: [
      {
        contentString: [
          'You are a FHIR data assistant for Medplum.',
          'Use the fhir_request tool for every FHIR operation.',
          'When the user asks about vitals, prefer Observation searches that include both code and date.',
        ].join('\n'),
      },
      existing.payload?.[1] ?? { contentString: 'The requester is {{ref}}.' },
    ],
  });
}

Guidance per bot:

  • Translator (ai-fhir-request-tools). Keep the instruction telling the model to use the fhir_request tool for every FHIR operation – removing it lets the model invent results. Append clinic-specific guidance rather than rewriting the whole prompt. Use payload[1] for anything that depends on the requester; {{ref}} is the only per-user context the bot has by default.
  • Summary (ai-resource-summary-sse). Tune narration depth, terminology, and whether the bot mentions resource IDs or sticks to clinical content. This prompt also sets the tone users hear most.
  • Visualizer (ai-component-generator-sse). Bias toward specific chart types or labelling conventions for your specialty, and remind the model that only the pre-scoped Recharts and Mantine components are available – it must not write import statements.

caution

A prompt that contradicts the tool schema (for example, telling the translator not to call any tools) will break the loop in subtle ways – the translator returns plain text, the summary bot gets no tool responses to narrate, and the chat shows a generic "I was unable to generate a response" message. Test prompt changes against representative user questions before rolling them out.

Security And Cost

  • The OpenAI API key lives only in Project.secret. The browser cannot read it, and bots receive it only inside their handler scope.
  • AccessPolicy applies to every FHIR request the loop executes. The assistant cannot read or write resources the requester is not entitled to. Audit-sensitive deployments should also restrict which Practitioner accounts have the ai feature exposed in their session.
  • Every loop iteration is one OpenAI call. A user asking a multi-hop question can drive five to ten model calls per prompt. Monitor your OpenAI usage dashboard and consider rate-limiting the bot endpoints in production.
  • All Spaces activity is recorded as standard FHIR AuditEvent resources, the same way every other Medplum write is.

See Also