Migrating v4 to v5 in Vercel's AI SDK
A couple of confusing breaking changes between the versions: streaming responses, image attachments, and message parts.

I discovered coding back in 2013, and three years later I spent my whole summer building my first Laravel app, which is still in production at the non-profit I built it for.
Now I'm struggling to find the balance between enjoying the power of "I can build this myself" and not choking myself to death trying to build everything myself.
Since developers are famously not the most articulate bunch, I decided to start writing about my endeavours to keep myself sharp.
This post is about updating the backend without breaking a frontend that I cannot update or release in a timely manner (it is a Chrome Extension: users run multiple versions, and the Chrome Web Store review queue is slow).
Background
Basically, one of my side projects is an extension for the flame-looking swiping app. Hard to guess, I know! Its API is hosted on Cloudflare Workers, an amazing service for the value it brings at just a few dollars a month.
Since the early days, once I discovered Vercel's AI SDK, I started using it. It seemed like an absolute no-brainer for offering beautiful streaming experiences while keeping most of the flexibility of choosing models and providers, and of course designing the UI.
When version 5 came out and I first looked into it at the end of August, it felt completely overwhelming. I was recovering from burnout at work and my brain wasn't fully functional. I tried running the codemod, it failed, and so I left it.
Fast forward to the beginning of October: I was doing much better, looked into it again, and this time it actually seemed quite simple.
My biggest breaking changes
To preface, I created a new git worktree for both the backend and the frontend, where I integrated things step by step, focusing on as few changes as possible.
The v5 response uses Server Sent Events (SSE) & pipeV5StreamToV4Response
If you are new to SSE, check out MDN docs at https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events
This is by far the biggest breaking change, and everyone is affected. All versions up to v5 used Vercel's proprietary invention: either a basic text-only stream or a more complex data stream supporting tool calling and more. It's actually a pretty interesting design. If you've never written one yourself, feel free to check out their docs; the links below point to the stream protocol pages for both versions.
v4: https://v4.ai-sdk.dev/docs/ai-sdk-ui/stream-protocol#stream-protocols
v5: https://ai-sdk.dev/docs/ai-sdk-ui/stream-protocol#stream-protocols

Frankly, as cool as it is, I have no idea why they chose it in the first place. SSE was already available at the ChatGPT release in 2022, and I had even written an SSE stream in a PHP backend. Then again, this is exactly what I'm guilty of too: reinventing the wheel. It seems cool and exciting at first, but then it brings maintenance burden and limitations. For what it's worth, I really do appreciate the SSE protocol.
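To make the SSE side concrete, here is a minimal parsing sketch of my own (not from the SDK): SSE is just newline-delimited fields, and the payload lives on `data:` lines. The JSON shape in the usage note is illustrative, not the exact v5 wire format, and for brevity this only handles the common `data: ` (with a space) variant.

```typescript
// Minimal SSE parsing sketch: extract the payload of each "data:" line
// from a raw text buffer. Simplified: only handles the "data: " form
// with a single space, which is what most servers emit.
function parseSSEData(buffer: string): string[] {
  return buffer
    .split('\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => line.slice('data: '.length));
}
```

So a buffer like `'data: {"type":"text-delta","delta":"Hi"}\n\n'` yields one payload string you can `JSON.parse`.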
I looked into how I could rewrite the responses. The text protocol was out of the window since I already use tool calling. Fortunately, no images were involved at this point.
Lucky me, and I really mean it: someone shared a snippet on the project's GitHub issues that remaps the v5 message parts to v4. The function is called pipeV5StreamToV4Response. Once I verified that it actually works, I only needed to detect the client version, so I updated the client to send a header indicating whether it's on the new version or the old one.
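The version check itself can be as small as this sketch; the header name `x-client-sdk-version` is my invention for illustration (use whatever your clients agree on), and old clients that never send the header simply fall back to v4.

```typescript
// Sketch of the client-version check on the Worker side. The header
// name is hypothetical; old clients without it are treated as v4.
function detectClientSdkVersion(request: Request): 'v4' | 'v5' {
  const value = request.headers.get('x-client-sdk-version');
  return value === 'v5' ? 'v5' : 'v4';
}
```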
This is how I plugged it into my backend API:
// in /src/functions/ai/useAI.ts
...
// clientVersionValue comes from headers, provided from arguments
const convertToV4 = clientVersionValue !== 'v5';
const result = streamText({
  // model, system, tools, etc. elided
  temperature: 0.6,
  maxOutputTokens: 1_000,
  messages: modelMessages, // provided from arguments
});
if (convertToV4) {
  const stream = result.toUIMessageStream({ onError });
  return pipeV5StreamToV4Response(stream, {
    headers,
  });
}
return result.toUIMessageStreamResponse({
  onError,
  headers,
});
// in /src/functions/helpers/pipeV5StreamToV4Response.ts
import type { UIMessageChunk } from 'ai';

// adapted from
// https://github.com/vercel/ai/issues/7993#issuecomment-3180974654
// credit to Mastra.AI for writing this piece of wonderful code
// also related:
// https://github.com/davidondrej/AI-CRM/blob/e7e3848a6ec99f4eee2c29d9208bd4bf11b0f6c9/crm-app/docs/v5-vercel-sdk.md

/** Pipes an AI SDK v5 UIMessage stream to a response in a v4 compatible format. */
export function pipeV5StreamToV4Response(
  stream: ReadableStream<UIMessageChunk>,
  responseInit?: ResponseInit,
): Response {
  return createV4Response(stream, responseInit);
}
function createV4Response(
  v5Stream: ReadableStream<UIMessageChunk>,
  responseInit?: ResponseInit,
) {
  const v4Stream = v5Stream
    .pipeThrough(createV5ToV4Transformer())
    .pipeThrough(new TextEncoderStream());
  return new Response(v4Stream, {
    status: 200,
    headers: {
      'Cache-Control': 'no-cache',
    },
    ...responseInit,
  });
}
type StreamState = {
  messageCounter: number;
  stepCounter: number;
};

function createV5ToV4Transformer() {
  const state: StreamState = {
    messageCounter: 0,
    stepCounter: 0,
  };
  return new TransformStream<UIMessageChunk, string>({
    transform(chunk, controller) {
      try {
        const v4Chunk = transformV5ChunkToV4(chunk, state);
        if (v4Chunk) {
          controller.enqueue(v4Chunk + '\n');
        }
      } catch (transformError) {
        // noop: silently drop chunks that fail to transform
      }
    },
  });
}
/** Map of v5 stream prefixes to v4 stream prefixes. */
const DataStreamStringPrefixes = {
  text: '0',
  data: '2',
  error: '3',
  message_annotations: '8',
  tool_call: '9',
  tool_result: 'a',
  tool_call_streaming_start: 'b',
  tool_call_delta: 'c',
  finish_message: 'd',
  finish_step: 'e',
  start_step: 'f',
  reasoning: 'g',
  source: 'h',
  redacted_reasoning: 'i',
  reasoning_signature: 'j',
  file: 'k',
};
function transformV5ChunkToV4(chunk: UIMessageChunk, state: StreamState) {
  switch (chunk.type) {
    case 'text-start':
      return null;
    case 'text-delta':
      return `${DataStreamStringPrefixes.text}:${JSON.stringify(chunk.delta)}`;
    case 'text-end':
      return null;
    case 'error':
      try {
        const errorData =
          typeof chunk.errorText === 'string'
            ? {
                message: chunk.errorText,
                code: 'STREAM_ERROR',
                timestamp: Date.now(),
              }
            : chunk.errorText;
        return `${DataStreamStringPrefixes.error}:${JSON.stringify(errorData)}`;
      } catch {
        return `${DataStreamStringPrefixes.error}:${JSON.stringify({
          message: 'Stream error',
          code: 'TRANSFORM_ERROR',
        })}`;
      }
    case 'tool-input-start':
    case 'tool-input-delta':
      // Don't stream partial tool calls for v4
      return null;
    case 'tool-input-available':
      return `${DataStreamStringPrefixes.tool_call}:${JSON.stringify({
        toolCallId: chunk.toolCallId,
        toolName: chunk.toolName,
        args: chunk.input,
      })}`;
    case 'tool-output-available':
      return `${DataStreamStringPrefixes.tool_result}:${JSON.stringify({
        toolCallId: chunk.toolCallId,
        result: chunk.output,
      })}`;
    case 'reasoning-start':
      return null;
    case 'reasoning-delta':
      return `${DataStreamStringPrefixes.reasoning}:${JSON.stringify(chunk.delta)}`;
    case 'reasoning-end':
      return null;
    case 'start':
      return null;
    case 'finish':
      return `${DataStreamStringPrefixes.finish_message}:${JSON.stringify({
        finishReason: 'stop',
        metadata: chunk.messageMetadata,
      })}`;
    case 'start-step': {
      state.stepCounter++;
      const stepId = generateId(`step-${state.stepCounter}`);
      return `${DataStreamStringPrefixes.start_step}:${JSON.stringify({
        messageId: stepId,
      })}`;
    }
    case 'finish-step':
      return `${DataStreamStringPrefixes.finish_step}:${JSON.stringify({
        finishReason: 'stop',
        usage: {
          promptTokens: 0,
          completionTokens: 0,
        },
        isContinued: false,
      })}`;
    case 'message-metadata':
      return `${DataStreamStringPrefixes.message_annotations}:${JSON.stringify([
        {
          messageId: generateId('msg'),
          metadata: chunk.messageMetadata,
        },
      ])}`;
    case 'file':
      if ('url' in chunk && 'mediaType' in chunk) {
        return `${DataStreamStringPrefixes.file}:${JSON.stringify({
          url: chunk.url,
          mediaType: chunk.mediaType,
        })}`;
      }
      return null;
    case 'source-url':
      return `${DataStreamStringPrefixes.source}:${JSON.stringify({
        id: chunk.sourceId,
        title: chunk.title,
        url: chunk.url,
      })}`;
    case 'source-document':
      return `${DataStreamStringPrefixes.source}:${JSON.stringify({
        id: chunk.sourceId,
        title: chunk.title,
        mediaType: chunk.mediaType,
        filename: chunk.filename,
      })}`;
    default:
      // Unknown chunk type, ignore
      return null;
  }
}

function generateId(prefix: string) {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
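To see the idea in action without the SDK, here is a self-contained mini version of the same transform restricted to text deltas, run over a synthetic stream. The chunk shapes are simplified stand-ins of my own, not the SDK's `UIMessageChunk` type; the `0:` prefix matches the text entry of the prefix map above.

```typescript
// Self-contained mini version of the v5→v4 transform, text deltas only.
// Chunk shapes are simplified stand-ins, not the SDK types.
type MiniChunk = { type: string; delta?: string };

function createTextOnlyTransformer() {
  return new TransformStream<MiniChunk, string>({
    transform(chunk, controller) {
      if (chunk.type === 'text-delta' && chunk.delta !== undefined) {
        // v4 text frames are `0:<json-encoded string>` plus a newline
        controller.enqueue(`0:${JSON.stringify(chunk.delta)}\n`);
      }
    },
  });
}

async function collect(stream: ReadableStream<string>): Promise<string> {
  let out = '';
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return out;
    out += value;
  }
}

// Synthetic v5-style stream: start, two deltas, end.
const source = new ReadableStream<MiniChunk>({
  start(controller) {
    controller.enqueue({ type: 'text-start' });
    controller.enqueue({ type: 'text-delta', delta: 'Hello' });
    controller.enqueue({ type: 'text-delta', delta: ' world' });
    controller.enqueue({ type: 'text-end' });
    controller.close();
  },
});

const v4Body = await collect(source.pipeThrough(createTextOnlyTransformer()));
// v4Body now holds the two frames: 0:"Hello" and 0:" world"
```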
The biggest hurdle solved: I now have streaming working with both the new and old clients.
Mixed up types & Remapping messages with attachments
The next part of the migration was remapping the attachments. I use those to send the match pictures along to the model provider, so the LLM can see the profile as well.
It took me a good hour of deep focus to carefully read the documentation for both versions. Vercel did change things up, and there was a lot of mixing between mediaType, mimeType, and plain type for providing the MIME type, as well as the content, and it got me really confused. I don't quite know if I missed something or the team got tangled up in their own types.
For example, v4 had experimental_attachments with FileList. Since I provide URLs and don't host those images myself, I need either an encoded data URI or a publicly accessible URL. So I used Attachment in v4 and FileUIPart in v5.
v4: https://v4.ai-sdk.dev/docs/ai-sdk-ui/chatbot#attachments-experimental
v5: https://ai-sdk.dev/docs/ai-sdk-ui/chatbot#attachments
Example objects between versions
// version 4
{
  name: 'earth.png',
  contentType: 'image/png',
  url: 'https://example.com/earth.png',
}

// version 5
{
  type: 'file',
  filename: 'earth.png',
  mediaType: 'image/png',
  url: 'https://example.com/earth.png',
}
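The rename mapping between the two shapes can be made explicit with a one-off helper. This is my own sketch (the backend remapping shown later in this post does essentially the same thing inline), with field names taken from the examples above.

```typescript
// Hypothetical helper mapping a v4 Attachment-shaped object to a
// v5 FileUIPart-shaped one: name → filename, contentType → mediaType.
type V4Attachment = { name?: string; contentType?: string; url: string };
type V5FilePart = {
  type: 'file';
  filename?: string;
  mediaType?: string;
  url: string;
};

function attachmentToFilePart(a: V4Attachment): V5FilePart {
  return {
    type: 'file',
    filename: a.name,
    mediaType: a.contentType,
    url: a.url,
  };
}
```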
One confusing aspect was the FileUIPart type, which appears in multiple packages with different signatures. Vercel, but why??? During the branch merging between versions I found out that one can import FileUIPart from different packages. Take a look: an identically named type with a completely different signature.
import type { FileUIPart } from '@ai-sdk/ui-utils';
and
import type { FileUIPart } from 'ai';
Perhaps it wasn’t a big deal doing so in v4 as the backend somehow worked (I didn’t dive deep) and was prep-ing for the v5. However upon updating the client, I had to do quite a bit of digging to find out that I was fooling myself with a wrong type all along.
Upon sharing this on Twitter, Lars Grammel explained that @ai-sdk/ui-utils should not be used for v5 (it remains active for v4). Chances are my node_modules folder wasn't clean, and hence I pulled it in accidentally.
On the client, this meant preparing the code while still running v4, and then updating the options argument of the submit (now sendMessage) method.
As for the backend, in v5 I had to write a tiny remapping to inject the attachments as message parts:
// Start of v4 remapping -----------------------------------------------------
// This is only received from v4; delete once all clients are on v5
const remappedMessages = isSdkV4
  ? messages.map((message) => {
      const parts = message.parts || [];
      if (
        'experimental_attachments' in message &&
        Array.isArray(message.experimental_attachments)
      ) {
        message.experimental_attachments.forEach((attachment) => {
          parts.push({
            type: 'file',
            mediaType: attachment.contentType,
            url: attachment.url,
          });
        });
        delete message.experimental_attachments;
      }
      return {
        ...message,
        parts,
      };
    })
  : messages;
const modelMessages = convertToModelMessages(remappedMessages);
// End of v4 remapping --------------------------------------------------------

const res = streamText({
  ...
  messages: modelMessages,
  ...
});
For clarity: in v4, experimental_attachments are automagically handled inside the streamText function, it just works! In v5, I have to remap them into parts using the snippet above before applying convertToModelMessages to the messages and passing them to streamText.
Perhaps, after reading this section, you can see how many types there are. If I had to work on this migration more, I would create a mind map of all the types and the ways they are used, to have clarity at a glance. If I got a bonus for every key used to send the MIME type of an attachment/image, I'd be on vacation in Hawaii, haha.
Worth a mention: in v4 I had to render the attachments in the UI myself, and they were not tied to any message by the useChat logic; it was up to me, the developer, to decide where and how to render them. In v5 they became part of the message and are rendered as a file part:
{
  messages.map((message) => (
    <div key={message.id}>
      {message.parts.map((part, index) => {
        if (part.type === 'text') {
          return <div key={index}>{part.text}</div>;
        } else if (
          part.type === 'file' &&
          part.mediaType.startsWith('image/')
        ) {
          return <img key={index} src={part.url} />;
        }
        return null;
      })}
    </div>
  ))
}
This is a perfect segue into the next and last major section.
Update frontend client to use message.parts instead of message.content
This is technically not a breaking change, because message parts were already available in v4 and I was partially using them in the frontend, but not everywhere. So I had to go over each place and ensure I no longer relied on the deprecated message.content property for rendering.
However, there was a change in the way the parts are shaped and rendered.
It became more of a free-form array with support for various parts, and I strongly agree with that design. It's very SOLID.
docs: https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0#content--parts-array
For example, tool calls now include the tool name in the part type, and reasoning is now a part on par with a standard text reply. This makes designing re-usable components so much more intuitive and simple.
// version 4
{
  message.parts.map((part, index) => {
    if (part.type === 'tool-invocation') {
      return <div key={index}>{part.toolInvocation.toolName}</div>;
    }
    return null;
  })
}
{
  message.parts.map((part, index) => {
    if (part.type === 'reasoning') {
      return (
        <div key={index} className="reasoning-display">
          {part.reasoning}
        </div>
      );
    }
    return null;
  })
}

// version 5
// Type-safe tool parts with specific names
{
  message.parts.map((part, index) => {
    switch (part.type) {
      case 'tool-getWeatherInformation':
        return <div key={index}>Getting weather...</div>;
      case 'tool-askForConfirmation':
        return <div key={index}>Asking for confirmation...</div>;
      default:
        return null;
    }
  })
}
{
  message.parts.map((part, index) => {
    if (part.type === 'reasoning') {
      return (
        <div key={index} className="reasoning-display">
          {part.text}
        </div>
      );
    }
    return null;
  })
}
The easy stuff
Using the codemod would import an old Zod version
I have no idea why, but when I ran npx @ai-sdk/codemod v5 to update the backend, it updated all Zod imports to version 3: import * as z from "zod/v3";. It happened both before and after my own migration, even though I had already upgraded to version 4 in package.json. Might be a bug or some stale cache issue, who knows. I just had to revert those lines.
Overall, the codemod was only helpful for verifying that I hadn't missed anything myself, but by no means was it a replacement for doing the work.
Keep a sharp eye on that.
Updating small renames
The rest of the changes were mostly easy, such as changing maxTokens to maxOutputTokens, renaming coreMessages to modelMessages, and so on.
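Just to make the option rename concrete, here is a toy helper of my own (not part of the SDK or the codemod) that moves a v4-style maxTokens over to the v5 name:

```typescript
// Toy helper illustrating the maxTokens → maxOutputTokens rename.
// Purely for illustration; in practice this is a find-and-replace.
function renameStreamTextOptions(
  opts: Record<string, unknown> & { maxTokens?: number },
) {
  const { maxTokens, ...rest } = opts;
  return maxTokens === undefined
    ? rest
    : { ...rest, maxOutputTokens: maxTokens };
}
```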
Fortunately there weren’t many variables that needed to be updated so I am really grateful for that.
Those are well documented in the official migration guide: https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0.
I think my biggest win is that I never stored these messages in the database. The chats are sort of ephemeral: they're cached on the client and in transit to an extent, but that's it. I really feel like I scored a ton by not needing to migrate database schemas or the like.
Summary
It was somewhat overwhelming at first, but after a weekend and 10-20 hours of deep focus, I was done and very happy.
If I were to do it again, I don't think I'd change anything major. My biggest lesson is to keep code on the previous version as up to date as possible (i.e. adopt message parts instead of dot-content or dot-reasoning) before migrating.
The biggest payoff across my project repositories: keep complexity low and have some abstraction between someone else's code (the AI SDK, in this case) and my own logic. While my extension supports multiple platforms such as WhatsApp, Tinder, and Bumble, I have a shared set of methods that accept a standardized structure, so I only need to adapt this one set of methods to the AI SDK's new structure. It's like a mini proxy.
Typically I feel too lazy to write those on small projects, but as soon as there are multiple sources, it is such a no-brainer investment. If one day I want to ditch Vercel for anything else, it wouldn't be too hard (hopefully that day never comes).
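The mini-proxy idea can be sketched roughly like this; all names here are hypothetical, not my actual code. Each platform adapter emits one standardized message shape, and a single function translates that shape into SDK-style parts, so an SDK upgrade only ever touches that one function:

```typescript
// Hypothetical sketch of the "mini proxy": adapters normalize each
// platform's data, and only toSdkParts knows the SDK's part structure.
type StandardMessage = {
  role: 'user' | 'assistant';
  text: string;
  imageUrls?: string[];
};

interface PlatformAdapter {
  platform: string;
  toStandardMessages(raw: unknown): StandardMessage[];
}

// The single choke point that maps the standard shape onto the SDK's
// current message-part structure (mediaType here is an assumption).
function toSdkParts(message: StandardMessage) {
  return [
    { type: 'text' as const, text: message.text },
    ...(message.imageUrls ?? []).map((url) => ({
      type: 'file' as const,
      mediaType: 'image/jpeg',
      url,
    })),
  ];
}
```

With this in place, a v4-to-v5 style migration means rewriting toSdkParts once, instead of chasing SDK types through every platform's code path.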
Hope that helps!


