Vercel Flags is a feature flag provider built into the Vercel platform. Create and manage feature flags with targeting rules, user segments, and environment controls directly in the Vercel Dashboard.
The Flags SDK provides a framework-native way to define and use these flags within Next.js and SvelteKit applications, integrating directly with your existing codebase:
flags.ts

```ts
import { vercelAdapter } from "@flags-sdk/vercel";
import { flag } from "flags/next";

export const showNewFeature = flag({
  key: "show-new-feature",
  adapter: vercelAdapter(),
});
```
Once you define a flag, you can use it within your application in a few lines of code:
app/page.tsx

```tsx
import { showNewFeature } from "~/flags";

export default async function Page() {
  const isEnabled = await showNewFeature();
  return isEnabled ? <NewDashboard /> : <OldDashboard />;
}
```
For teams using other frameworks or custom backends, the Vercel Flags adapter supports the OpenFeature standard, allowing you to plug Vercel Flags into its provider-agnostic SDK.
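The value of that standard is that call sites depend only on a generic client while the flag backend stays swappable. A minimal, self-contained sketch of the pattern follows; the interfaces and names here are illustrative only, not the actual OpenFeature SDK surface:

```typescript
// Sketch of the provider model behind OpenFeature: flag evaluation is
// delegated to a pluggable provider, so swapping Vercel Flags for another
// backend changes only the provider, never the call sites.
interface Provider {
  resolveBooleanValue(key: string, defaultValue: boolean): boolean;
}

// In-memory stand-in for a real provider such as a Vercel Flags adapter.
class InMemoryProvider implements Provider {
  constructor(private flags: Record<string, boolean>) {}
  resolveBooleanValue(key: string, defaultValue: boolean): boolean {
    return this.flags[key] ?? defaultValue;
  }
}

// The client your application code talks to, regardless of backend.
class FlagClient {
  constructor(private provider: Provider) {}
  getBooleanValue(key: string, defaultValue: boolean): boolean {
    return this.provider.resolveBooleanValue(key, defaultValue);
  }
}

const client = new FlagClient(new InMemoryProvider({ "show-new-feature": true }));
const enabled = client.getBooleanValue("show-new-feature", false);
```

Swapping `InMemoryProvider` for a real provider leaves every `getBooleanValue` call untouched, which is the point of the standard.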
Claude Opus 4.7 from Anthropic is now available on Vercel AI Gateway.
Opus 4.7 is optimized for long-running, asynchronous agents and handles complex, multi-step tasks with reliable agentic execution. The model shows gains on knowledge-worker tasks, particularly where it needs to visually verify its own outputs.
Opus 4.7 is also stronger at programmatic tool-calling with image-processing libraries to analyze charts and figures, including pixel-level data transcription. It supports high-resolution images, which is useful for computer use, screenshot understanding, and document analysis workflows. Opus 4.7 also has improved memory: agents that maintain a structured memory store across turns see more reliable recall and fewer dropped facts without additional prompting.
To use Claude Opus 4.7, set model to anthropic/claude-opus-4.7 in the AI SDK. You can also try the new effort level 'xhigh':
```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'anthropic/claude-opus-4.7',
  prompt: 'Explain the halting problem in one paragraph.',
  providerOptions: {
    anthropic: {
      thinking: { type: 'adaptive' },
      effort: 'xhigh',
    },
  },
});
```
Opus 4.7 also introduces task budgets. A task budget sets a total token budget for an agentic turn via taskBudget. The model sees a countdown of remaining tokens, which it uses to prioritize work, plan ahead, and wind down gracefully as the budget is consumed. Thinking content is now omitted by default for Opus 4.7; to receive thinking content, set display to 'summarized':
```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'anthropic/claude-opus-4.7',
  prompt: 'Research how this codebase handles authentication and suggest improvements.',
  providerOptions: {
    // Option placement assumed to mirror the effort example above;
    // taskBudget value is an example total token budget for the turn
    anthropic: { taskBudget: 50_000, thinking: { display: 'summarized' } },
  },
});
```
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
You can now access ByteDance's latest state-of-the-art video generation model, Seedance 2.0, via AI Gateway, with no other provider accounts required.
Seedance 2.0 is available on AI Gateway in two variants: Standard and Fast. Both share the same capabilities. Standard produces the highest quality output, while Fast prioritizes generation speed and lower cost.
Seedance 2.0 is strong at maintaining motion stability and fine detail across frames, producing consistent output even in complex scenes with facial expressions and physical interactions. The model also generates synchronized audio natively, with support for speech in multiple languages and dialects.
Beyond text-to-video and image-to-video, Seedance 2.0 adds multimodal reference-to-video, letting you combine image, video, and audio inputs as reference material in a single generation. It also supports video editing and video extension, along with professional camera movements, multi-shot composition, and in-video text rendering.
To use this model, set model to bytedance/seedance-2.0 or bytedance/seedance-2.0-fast in the AI SDK, or try it out in the AI Gateway Playground.
Text to Video
Generate video from a text prompt. Describe the scene, camera movement, and audio for the model to produce.
```ts
import { experimental_generateVideo as generateVideo } from 'ai';

const { videos } = await generateVideo({
  model: 'bytedance/seedance-2.0',
  prompt: `Black triangle sticker peels off laptop and zips across the office. It smashes
through the window and into the San Francisco sky.`,
  aspectRatio: '16:9',
  resolution: '720p',
  duration: 5,
});
```
Image to Video
Generate video from a starting image. The model animates the image based on the text prompt while preserving the visual content of the source frame.
```ts
import { experimental_generateVideo as generateVideo } from 'ai';

const { videos } = await generateVideo({
  model: 'bytedance/seedance-2.0',
  prompt: {
    image: catImageUrl,
    text: 'The cat is celebrating a birthday with another cat.',
  },
  duration: 10,
  providerOptions: {
    bytedance: { generateAudio: true },
  },
});
```
Reference to Video
Generate video using image, video, or audio references as source material. You can combine multiple reference types in a single generation to control visual style, motion, and sound.
```ts
import { experimental_generateVideo as generateVideo } from 'ai';

const { videos } = await generateVideo({
  model: 'bytedance/seedance-2.0',
  prompt: 'Replace the cat in [Video 1] with the lion from [Image 1].',
  duration: 10,
  providerOptions: {
    bytedance: {
      // URLs for the assets referenced as [Image 1] and [Video 1] in the prompt
      referenceImages: [lionImageUrl],
      referenceVideos: [catVideoUrl],
      generateAudio: true,
    },
  },
});
```
AI Gateway does not charge any markup on video generation: Seedance 2.0 and 2.0 Fast are priced the same as going directly to ByteDance.
Workflow run log filtering is now supported on Vercel, making it easy to view all logs associated with a workflow run in one place instead of piecing them together across individual requests.
You can use the “View Logs” button from the workflow run details page to jump directly into the Logs tab. From there, filter logs using the Workflow Run ID and Workflow Step ID to quickly find logs for specific runs or steps.
Workflow runs on Vercel already provide run-level observability, including step progression, payloads, outputs, and performance metrics. With this addition, you can now also access all related logs directly from the familiar Vercel Logs dashboard. Learn more about the Workflow SDK.
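If you consume the same structured logs programmatically (for example, via a log drain), the grouping the Logs tab performs amounts to a filter on those two IDs. A sketch with hypothetical field names, not the actual Vercel log schema:

```typescript
// Group log entries belonging to one workflow run, optionally narrowed to a
// single step. Field names are illustrative assumptions for this sketch.
interface LogEntry {
  message: string;
  workflowRunId?: string;
  workflowStepId?: string;
}

function filterByRun(logs: LogEntry[], runId: string, stepId?: string): LogEntry[] {
  return logs.filter(
    (l) => l.workflowRunId === runId && (stepId === undefined || l.workflowStepId === stepId),
  );
}

const sampleLogs: LogEntry[] = [
  { message: "step started", workflowRunId: "run_1", workflowStepId: "step_1" },
  { message: "unrelated request", workflowRunId: "run_2" },
  { message: "step retried", workflowRunId: "run_1", workflowStepId: "step_2" },
];

const runLogs = filterByRun(sampleLogs, "run_1");
```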
Elastic build machines, released in beta on March 24, are now generally available for all Pro and Enterprise customers, and are now the default for all new Pro teams.
Rather than a one-size-fits-all approach, Vercel evaluates each project individually and assigns the right machine for its actual needs, balancing speed and cost. Over 400 teams and 6,000 projects have enabled Elastic as their default build machine.
During the beta, approximately 80% of projects reduced their costs by switching to smaller build machines while maintaining their build speeds. The remaining 20% were auto-upgraded to machines with more CPUs and memory, improving their build speed.
Teams using Observability Plus receive alerts when anomalies are detected in their applications to help quickly identify, investigate, and resolve unexpected behavior.
Alerts help monitor your app in real-time by surfacing unexpected changes in usage or error patterns:
Usage anomalies: unusual patterns in your application metrics, such as edge requests or function duration.
Error anomalies: abnormal error patterns, such as sudden spikes in 5XX responses on a specific route.
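Conceptually, anomaly detection of this kind compares current metric values against a learned baseline. A minimal z-score sketch of the idea (illustrative only, not Vercel's actual detection algorithm):

```typescript
// Flag a data point as anomalous when it deviates from the baseline's mean
// by more than `threshold` standard deviations.
function isAnomalous(history: number[], current: number, threshold = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance);
  if (std === 0) return current !== mean; // flat baseline: any change is anomalous
  return Math.abs(current - mean) / std > threshold;
}

// A steady baseline of ~100 5XX responses per minute...
const baseline = [98, 102, 101, 99, 100, 97, 103, 100];
// ...makes a sudden spike stand out clearly.
const spikeDetected = isAnomalous(baseline, 250);
```

Production detectors account for seasonality and trend as well, but the core comparison against an expected range is the same.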
Once an anomaly is detected, Vercel Agent can automatically investigate the issue, identify the likely root cause, analyze the impact, and suggest next steps for remediation.
View alerts directly in your dashboard, or subscribe via email, Slack, or webhooks to get notified wherever your team works. You can also customize what alerts you receive using alert rules.
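If you receive alerts via webhooks, you should verify each payload's signature before acting on it. A sketch using Node's crypto module; the HMAC algorithm and signature format shown here are assumptions to check against the webhook documentation for your integration:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify an incoming webhook by recomputing the HMAC of the raw request body
// with your client secret and comparing it to the signature header in
// constant time. The 'sha1'/hex choices are assumptions for this sketch.
function isValidSignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha1", secret).update(rawBody).digest("hex");
  if (expected.length !== signature.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```

Rejecting unsigned or mismatched payloads ensures your alert handler only processes genuine notifications.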
This feature is available for all teams with Observability Plus at no additional cost.