---
title: Streaming
description: Stream real-time text responses from AI models and other async sources to chat platforms.
type: guide
prerequisites:
  - /docs/usage
---

# Streaming



Chat SDK accepts any `AsyncIterable<string>` as a message, enabling real-time streaming of AI responses and other incremental content to chat platforms. For platforms with native streaming support (Slack), you can also stream structured `StreamChunk` objects for rich content like task progress cards and plan updates.

## AI SDK integration

Pass an AI SDK `fullStream` or `textStream` directly to `thread.post()`:

```typescript title="lib/bot.ts" lineNumbers
import { ToolLoopAgent } from "ai";

const agent = new ToolLoopAgent({
  model: "anthropic/claude-4.5-sonnet",
  instructions: "You are a helpful assistant.",
});

bot.onNewMention(async (thread, message) => {
  const result = await agent.stream({ prompt: message.text });
  await thread.post(result.fullStream);
});
```

### Why `fullStream` over `textStream`?

When AI SDK agents make tool calls between text steps, `textStream` concatenates all text without separators — `"hello.how are you?"` instead of `"hello.\n\nhow are you?"`. The `fullStream` contains explicit `finish-step` events that Chat SDK uses to inject paragraph breaks between steps automatically.

Both stream types are auto-detected:

```typescript
// Recommended: fullStream preserves step boundaries
await thread.post(result.fullStream);

// Also works: textStream for single-step generation
await thread.post(result.textStream);
```
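Conceptually, the step-boundary handling works like the sketch below. This is a simplified illustration, not Chat SDK's internals, and the part shapes (`text-delta`, `finish-step`, `tool-call`) are assumptions modeled loosely on AI SDK's `fullStream` parts:

```typescript
// Simplified sketch: convert a fullStream-like part stream to text,
// injecting a paragraph break at each step boundary. The part shapes
// here are illustrative assumptions, not the SDK's real types.
type FullStreamPart =
  | { type: "text-delta"; text: string }
  | { type: "finish-step" }
  | { type: "tool-call" };

async function* toTextWithBreaks(
  parts: AsyncIterable<FullStreamPart>,
): AsyncIterable<string> {
  let pendingBreak = false;
  for await (const part of parts) {
    if (part.type === "text-delta") {
      if (pendingBreak) {
        yield "\n\n"; // separate text from the previous step
        pendingBreak = false;
      }
      yield part.text;
    } else if (part.type === "finish-step") {
      pendingBreak = true; // only emit a break if more text follows
    }
  }
}
```

With this approach, text interrupted by a tool call comes out as `"hello.\n\nhow are you?"` rather than running together.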

## Custom streams

Any async iterable works:

```typescript title="lib/bot.ts" lineNumbers
const stream = (async function* () {
  yield "Processing";
  yield "...";
  yield " done!";
})();

await thread.post(stream);
```

## Platform behavior

| Platform    | Method               | Description                                             |
| ----------- | -------------------- | ------------------------------------------------------- |
| Slack       | Native streaming API | Uses Slack's `chatStream` for smooth, real-time updates |
| Teams       | Post + Edit          | Posts a message then edits it as chunks arrive          |
| Google Chat | Post + Edit          | Posts a message then edits it as chunks arrive          |
| Discord     | Post + Edit          | Posts a message then edits it as chunks arrive          |

The post+edit fallback throttles edits to avoid rate limits. Configure the update interval when creating your `Chat` instance:

```typescript title="lib/bot.ts" lineNumbers
const bot = new Chat({
  // ...
  streamingUpdateIntervalMs: 500, // Default: 500ms
});
```

### Disabling the placeholder message

By default, post+edit adapters send an initial `"..."` placeholder message before the first chunk arrives. You can disable this to wait for real content before posting:

```typescript title="lib/bot.ts" lineNumbers
const bot = new Chat({
  // ...
  fallbackStreamingPlaceholderText: null,
});
```

You can also customize the placeholder text:

```typescript title="lib/bot.ts"
const bot = new Chat({
  // ...
  fallbackStreamingPlaceholderText: "Thinking...",
});
```

## Markdown healing

During streaming, chunks often arrive mid-word or mid-syntax — for example, `**bold` before the closing `**` arrives. The SDK automatically heals incomplete markdown in intermediate renders using [remend](https://www.npmjs.com/package/remend), so messages always display with correct formatting while streaming.

The final message uses the raw accumulated text without healing, so the original markdown is preserved.
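The idea behind healing can be pictured with a toy version. This is NOT remend's actual algorithm, just a minimal sketch of closing unbalanced delimiters:

```typescript
// Toy illustration of markdown healing (not remend's real algorithm):
// close any unbalanced inline-code or bold markers so the intermediate
// render is valid markdown.
function healMarkdown(partial: string): string {
  const count = (s: string, delim: string) => s.split(delim).length - 1;
  let healed = partial;
  if (count(healed, "`") % 2 === 1) healed += "`"; // dangling inline code
  if (count(healed, "**") % 2 === 1) healed += "**"; // dangling bold
  return healed;
}
```

So an intermediate render of `**bold` becomes `**bold**`, while already-balanced text passes through untouched.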

## Table buffering

When streaming content that contains GFM tables (e.g. from an LLM), the SDK automatically buffers potential table headers until a separator line (`|---|---|`) confirms them. This prevents tables from briefly flashing as raw pipe-delimited text before the table structure is complete.

This happens transparently — no configuration needed.
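The behavior can be pictured with a line-level sketch. This is a hypothetical simplification (the real SDK buffers across arbitrary chunk boundaries, not whole lines):

```typescript
// Line-level sketch of table buffering (a simplification: the real SDK
// works on arbitrary chunk boundaries). A line that looks like a table
// row is held back until the next line arrives, so a header is never
// rendered before its separator can confirm it.
function* bufferTableHeaders(lines: Iterable<string>): Iterable<string> {
  const looksLikeRow = (line: string) => /^\|.*\|\s*$/.test(line.trim());
  const isSeparator = (line: string) => /^\|[\s:|-]+\|\s*$/.test(line.trim());
  let held: string | null = null;
  for (const line of lines) {
    if (held !== null) {
      // Emit the held row together with the line that confirms
      // (separator) or denies (anything else) the table.
      yield held + "\n";
      held = null;
    }
    if (looksLikeRow(line) && !isSeparator(line)) {
      held = line; // potential header: wait for the next line
    } else {
      yield line + "\n";
    }
  }
  if (held !== null) yield held + "\n"; // flush at end of stream
}
```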

## Structured streaming chunks (Slack only)

For Slack's native streaming API, you can yield `StreamChunk` objects alongside plain text for rich content:

```typescript title="lib/bot.ts" lineNumbers
import type { StreamChunk } from "chat";

const stream = (async function* () {
  yield { type: "markdown_text", text: "Searching..." } satisfies StreamChunk;

  yield {
    type: "task_update",
    id: "search-1",
    title: "Searching documents",
    status: "in_progress",
  } satisfies StreamChunk;

  // ... do work ...

  yield {
    type: "task_update",
    id: "search-1",
    title: "Searching documents",
    status: "complete",
    output: "Found 3 results",
  } satisfies StreamChunk;

  yield { type: "markdown_text", text: "Here are your results..." } satisfies StreamChunk;
})();

await thread.post(stream);
```

### Chunk types

| Type            | Fields                             | Description                                                              |
| --------------- | ---------------------------------- | ------------------------------------------------------------------------ |
| `markdown_text` | `text`                             | Streamed text content                                                    |
| `task_update`   | `id`, `title`, `status`, `output?` | Tool/step progress cards (`pending`, `in_progress`, `complete`, `error`) |
| `plan_update`   | `title`                            | Plan title updates                                                       |

### Task display mode

Control how `task_update` chunks render in Slack by passing `taskDisplayMode` in stream options:

```typescript
await thread.stream(stream, {
  taskDisplayMode: "plan", // Group all tasks into a single plan block
});
```

| Mode         | Description                                            |
| ------------ | ------------------------------------------------------ |
| `"timeline"` | Individual task cards shown inline with text (default) |
| `"plan"`     | All tasks grouped into a single plan block             |

Adapters without structured chunk support extract text from `markdown_text` chunks and ignore other types.
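That fallback behaves roughly like the sketch below (a hedged illustration using the chunk shapes from the table above, not the SDK's actual code):

```typescript
// Sketch of the text-only fallback: keep markdown_text content, pass
// plain strings through, and drop structured chunk types the platform
// cannot render. Types mirror the chunk-type table above.
type StreamChunk =
  | { type: "markdown_text"; text: string }
  | {
      type: "task_update";
      id: string;
      title: string;
      status: "pending" | "in_progress" | "complete" | "error";
      output?: string;
    }
  | { type: "plan_update"; title: string };

async function* extractText(
  chunks: AsyncIterable<string | StreamChunk>,
): AsyncIterable<string> {
  for await (const chunk of chunks) {
    if (typeof chunk === "string") {
      yield chunk;
    } else if (chunk.type === "markdown_text") {
      yield chunk.text;
    }
    // task_update and plan_update chunks are dropped here
  }
}
```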

## Stop blocks (Slack only)

When streaming in Slack, you can attach Block Kit elements to the final message using `stopBlocks`. This is useful for adding action buttons after a streamed response completes:

```typescript title="lib/bot.ts" lineNumbers
await thread.stream(textStream, {
  stopBlocks: [
    {
      type: "actions",
      elements: [{
        type: "button",
        text: { type: "plain_text", text: "Retry" },
        action_id: "retry",
      }],
    },
  ],
});
```

## Streaming with conversation history

Combine message history with streaming for multi-turn AI conversations.
Use [`toAiMessages()`](/docs/api/to-ai-messages) to convert chat messages into the `{ role, content }` format expected by AI SDKs:

```typescript title="lib/bot.ts" lineNumbers
import { toAiMessages } from "chat";

bot.onSubscribedMessage(async (thread, message) => {
  // Fetch recent messages for context
  const result = await thread.adapter.fetchMessages(thread.id, { limit: 20 });

  const history = await toAiMessages(result.messages);

  const response = await agent.stream({ prompt: history });
  await thread.post(response.fullStream);
});
```

See the [`toAiMessages` API reference](/docs/api/to-ai-messages) for all options including `includeNames`, `transformMessage`, and attachment handling.
