
    Featured articles

  • Feb 3

    Introducing the new v0

    Since v0 became generally available in 2024, more than 4 million people have used it to turn their ideas into apps in minutes. v0 has helped people get promotions, win more clients, and work more closely with developers. AI lowered the barrier to writing code. Now we're raising the bar for shipping it. Today, v0 evolves vibe coding from novelty to business critical. Built for production apps and agents, this release includes enterprise-grade security and integrations teams can use to ship real software, not just spin up demos.

    The limitations of vibe coding

    We're at an inflection point where anyone can create software. But this freedom has created three problems for the enterprise.

    Vibe coding is now the world's largest shadow IT problem. AI-enabled software creation is already happening inside every enterprise, and employees are shipping security flaws alongside features: credentials copied into prompts, company data published to the public internet, and databases deleted, all with no audit trail.

    Demos are easy to generate, but production features aren't. Prototyping is one of the most popular use cases for marketers and PMs, but the majority of real software work happens on existing apps, not one-off creations. Prototypes fail because they live outside real codebases, require rewrites before production, and create handoffs between tools and teams.

    The old Software Development Life Cycle is overloaded with dead ends. The legacy SDLC relies on countless PRDs, tickets, and review meetings. Feedback cycles take weeks or months. Vibe coding has overloaded these outdated processes with thousands of good ideas that will never see the light of day, frustrating engineers and their stakeholders.

    We took these problems to heart and rebuilt v0 from the ground up.

    From 0 to shipped: What's new

    Work on existing codebases

    Instead of engineers spending weeks on rewrites for production, v0's new sandbox-based runtime can import any GitHub repo and automatically pull environment variables and configuration from Vercel. Every prompt generates production-ready code in a real environment, and it lives in your repo. No more copying code back and forth.

    Bring git to your entire team

    Historically, marketers and PMs weren't comfortable setting up and troubleshooting a local dev environment. With v0, they don't have to. A new Git panel lets you create a new branch for each chat, open PRs against main, and deploy on merge. Pull requests are first-class, and previews map to real deployments. For the first time, anyone on a team, not just engineers, can ship production code through proper git workflows.

    Democratize data, safely

    Building internal reports and data apps typically requires painful setup of ETL pipelines and scheduled jobs. With v0, you can connect your app directly to the tables you need. Secure integrations with Snowflake and AWS databases mean anyone can build custom reporting, add rich context to their internal tools, and automate data-triggered processes.

    Stay secure by default

    Vibe coding tools optimize for speed and novelty, discarding decades of software engineering best practices. v0 is built on Vercel, where security is built in by default and configurable for common compliance needs. Set deployment protection requirements, connect securely to enterprise systems, and set proper access controls for every app.

    How our customers use the new v0

    Product leaders turn PRDs into prototypes, and prototypes into PRs, shipping the right features, fast. They go from "tell sales there's another delay" to "it's shipped."

    Designers work against real code, refining layouts, tweaking components, and previewing production with each update. They go from "another ticket for frontend" to "it's shipped."

    Marketers turn ideas into site updates immediately, editing landing pages, changing images, fixing copy, and publishing, all without opening a ticket. They go from "please, it's a quick change" to "it's shipped."

    Engineers unblock stakeholders without breaking prod, making quick fixes, importing repos, and letting business users open PRs, all in a single tab. They go from "I can't keep up with the backlog" to "it's shipped."

    Data teams ship dashboards the business actually uses, building custom reports and analytics on top of real data with just a few prompts. They go from "that's buried in a notebook" to "it's shipped."

    GTM teams close deals with the demo customers actually asked for, creating live previews, mock data, and branded experiences in minutes. They go from "let's show the standard deck" to "it's shipped."

    What's next

    Today, you can use v0 to ship production apps and websites. 2026 will be the year of agents. Soon, you'll be able to build end-to-end agentic workflows in v0, AI models included, and deploy them on Vercel's self-driving infrastructure.

    Welcome to the new v0. We can't wait to see what you build. Sign up or log in to try the new v0 today.

    Snowflake, GitHub, and AWS are trademarks of their respective owners.

    Zeb Hermann
  • Dec 15

    How to prompt v0

    Working with v0 is like working with a highly skilled teammate who can build anything you need. v0 is more than just a tool, it's your building partner. And as with any great collaborator, the quality of what you get depends on how clearly you communicate. The more specific you are, the better v0's output becomes. From our testing, good prompts consistently deliver:

    • Faster generation time (30-40% faster, with less unnecessary code and fewer credits spent)
    • Smarter UX decisions (v0 understands intent and optimizes accordingly)
    • Cleaner, more maintainable code

    This guide shows you a framework that consistently produces these results.

    The framework: Three inputs that drive great prompts

    After building hundreds of applications ourselves and learning from v0...

    Esteban Suárez
  • Jan 7

    How we made v0 an effective coding agent

    Last year we introduced the v0 Composite Model Family, and described how the v0 models operate inside a multi-step agentic pipeline. Three parts of that pipeline have had the greatest impact on reliability: the dynamic system prompt, a streaming manipulation layer that we call "LLM Suspense", and a set of deterministic and model-driven autofixers that run after (or while!) the model finishes streaming its response.

    What we optimize for

    The primary metric we optimize for is the percentage of successful generations. A successful generation is one that produces a working website in v0's preview instead of an error or blank screen. The problem is that LLMs running in isolation encounter various issues when generating code at scale. In our experience, code generated by LLMs can have errors as often as 10% of the time. Our composite pipeline is able to detect and fix many of these errors in real time as the LLM streams the output. This can lead to a double-digit increase in success rates.
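The excerpt above describes deterministic autofixers that repair model output while it streams, but the code itself was not included in this listing. A minimal sketch of the idea, applying pure fixer functions to the accumulated buffer as each chunk arrives. All names and fixer rules here are assumptions for illustration, not v0's actual pipeline:

```typescript
// Illustrative sketch only: the fixer rules and function names below are
// assumptions, not v0's actual autofixer implementation.
type Fixer = (code: string) => string;

const fixers: Fixer[] = [
  // Hypothetical rule: models sometimes emit smart quotes inside code.
  (code) => code.replace(/[\u2018\u2019]/g, "'").replace(/[\u201C\u201D]/g, '"'),
  // Hypothetical rule: strip zero-width characters that break parsers.
  (code) => code.replace(/[\u200B\uFEFF]/g, ""),
];

function applyFixers(buffer: string): string {
  // Each fixer is a pure transform; run them in order over the buffer.
  return fixers.reduce((acc, fix) => fix(acc), buffer);
}

// Re-run the deterministic fixers on the accumulated buffer as each
// streamed chunk arrives, so errors are repaired mid-stream rather than
// after the whole response has finished.
function streamWithAutofix(chunks: string[]): string {
  let buffer = "";
  for (const chunk of chunks) {
    buffer = applyFixers(buffer + chunk);
  }
  return buffer;
}
```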

    Max Leiter

    Latest news.

  • v0
    Dec 27

    Stopping the slow death of internal tools

    Companies spend millions in time and money trying to build internal tools. These range from lightweight automations and dashboards to fully custom systems with dedicated engineering teams. Most businesses can’t justify focusing developers on bespoke operational tools, so non-technical teams resort to brittle and insecure workarounds: custom Salesforce formulas and fields, complex workflow automations, spreadsheets, and spiderwebs of integrations across platforms. They are trying to build software without actually building software, and most of the tools end up collecting dust. v0’s AI agent changes this equation. Business users can build and publish real code and apps on the same platform that their developers use, safely integrate with internal and external systems, and secure everything behind existing SSO authentication.

    Zeb and Eric
  • v0
    Dec 15

    Build smarter workflows with Notion and v0

    Notion has become the trusted, connected workspace for teams. It's where your PRDs, specs, and project context live. v0 helps those teams turn ideas into dashboards, apps, and prototypes. Today, those workflows connect. You can now securely connect v0 to your Notion workspace, so everything it builds is grounded in your existing docs and databases. Wherever your team's knowledge lives in Notion, v0 can now build on top of it.

    Caroline Ciaramitaro
  • v0
    Nov 24

    How we built the v0 iOS app

    We recently released v0 for iOS, Vercel’s first mobile app. As a company focused on the web, building a native app was new territory for us. Our goal was to build an app worthy of an Apple Design Award, and we were open-minded about the best tech stack to get there. To that end, we built dozens of iterations of the product prior to our public beta. We experimented with drastically different tech stacks and UI patterns. We took inspiration from apps which speak the iPhone’s language, such as Apple Notes and iMessage. v0 had to earn a spot on your Home Screen among the greats. After weeks of experimentation, we landed on React Native with Expo to achieve this. We are pleased with the results, and our customers are too. In fact, the influx of messages from developers asking how the app feels so native compelled us to write a technical breakdown of how we did it.

    Table of contents

    • How we built the v0 chat experience
    • Building a composable chat
    • Sending your first message
    • Fading in the first assistant message
    • Sending messages in an existing chat
    • How we solved messages scrolling to the top
    • Taming the keyboard
    • Scrolling to end initially
    • Floating composer
    • Make it float
    • Make it native
    • Pasting images
    • Fading in content
    • Sharing code between web and native
    • Styling
    • Native menus
    • Native alerts
    • Native bottom sheets
    • Looking forward

    How we built the v0 chat experience

    When you’re away from your computer, you might have a quick idea you want to act on. Our goal was to let you turn that idea into something tangible, without requiring context switching. v0 for iOS is the next generation of your Notes app, where your ideas get built in the background. We did not set out to build a mobile IDE with feature parity with our website. Instead, we wanted to build a simple, delightful experience for using AI to make things on the go. The centerpiece of that experience is the chat.
    To build a great chat, we set the following requirements:

    • New messages animate in smoothly
    • New user messages scroll to the top of the screen
    • Assistant messages fade in with a staggered transition as they stream
    • The composer uses Liquid Glass and floats on top of scrollable content
    • Opening existing chats starts scrolled to the end
    • Keyboard handling feels natural
    • The text input lets you paste images and files
    • The text input supports pan gestures to focus and blur it
    • Markdown is fast and supports dynamic components

    While a number of UI patterns have emerged for AI chat in mobile apps, there is no equivalent set of patterns for AI code generation on mobile. We hadn’t seen these features in existing React Native apps, so we found ourselves inventing patterns on the fly. It took an extraordinary amount of work, testing, and coordination across each feature to make it meet our standards.

    Building a composable chat

    To meet our requirements, we structured our chat code to be composable on a per-feature basis. Our chat is powered by a few open source libraries: LegendList, React Native Reanimated, and React Native Keyboard Controller. To start, we set up multiple context providers. The provider wraps the MessagesList. Next, our messages list implements these features as composable plugins, each with its own hook. The following sections break down each hook to demonstrate how they work together.

    Sending your first message

    When you send a message on v0, the message bubble smoothly fades in and slides to the top. Immediately after the user message is done animating, the assistant messages fade in. When the user sends a message, we set a Reanimated shared value to indicate the animation should begin. Shared values let us update state without triggering re-renders. With our state tracked in Reanimated, we can now animate our UserMessage. Notice that UserMessageContent is wrapped with an Animated.View which receives props from useFirstMessageAnimation.
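The Reanimated shared values mentioned above can be pictured as mutable containers that live outside React state, so writes don't trigger re-renders. A toy model, purely for intuition. This is not Reanimated's actual implementation, and `makeSharedValue` is a hypothetical name:

```typescript
// Toy model of a Reanimated-style shared value: a mutable container read
// and written imperatively, outside React state, so updating it does not
// cause a re-render. Illustrative only; not the real Reanimated API.
function makeSharedValue<T>(initial: T) {
  let value = initial;
  return {
    get: () => value,
    set: (next: T) => {
      value = next; // mutate in place; no React state update involved
    },
  };
}

// Usage mirroring the post: a flag that the send animation should begin.
const isMessageSendAnimating = makeSharedValue(false);
isMessageSendAnimating.set(true); // e.g. inside onSubmit
```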
How useFirstMessageAnimation works This hook is responsible for 3 things: Measure the height of the user message with itemHeight, a Reanimated shared value Fade in the message when isMessageSendAnimating Signal to the assistant message that the animation is complete Thanks to React Native’s New Architecture, ref.current.measure() in useLayoutEffect is synchronous, giving us height on the first render. Subsequent updates fire in onLayout. Based on the message height, window height, and current keyboard height, getAnimatedValues constructs the easing, start, and end states for translateY and progress. The resulting shared values are passed to useAnimatedStyle as transform and opacity respectively. And there we have it. Our first message fades in using Reanimated. Once it’s done animating, we’re ready to fade in the first assistant message response. Fading in the first assistant message Similar to UserMessage, the assistant message content is wrapped in an animated view that fades in after the user message animation completes. This fade in behavior is only enabled for the first assistant message in the chat, where index === 1. Messages in existing chats will have different behavior than messages in new chats. What happens if you open an existing chat that has one user message and one assistant message? Will it animate in again? No, because the animations here only apply if isMessageSendAnimating is true, which gets set onSubmit and cleared when you change chats. Sending messages in an existing chat We’ve covered how v0 handles animating in messages for new chats. For existing chats, however, the logic is entirely distinct. Rather than rely on Reanimated animations, such as the one in useFirstMessageAnimation, we rely on an implementation of scrollToEnd(). So all we need to do is scroll to end if we’re sending a message in an existing chat, right? In a perfect world, this is all the logic we’d need. Let’s explore why it’s not enough. 
    If you recall from the introduction, one of our requirements is that new messages have to scroll to the top of the screen. If we simply call scrollToEnd(), then the new messages will show at the bottom of the screen. We needed a strategy to push the user message to the top of the chat. We referred to this as “blank size”: the distance between the bottom of the last assistant message and the end of the chat. To float the content to the top of the chat, we had to push it up by an amount equal to the blank size. Thanks to synchronous height measurements in React Native's New Architecture, this was possible to do on each frame without a flicker. But it still required a lot of trickery and coordination.

    In the image above, you’ll notice that the blank size is dynamic. Its height depends on the keyboard’s open state. And it can change on every render, since the assistant message streams in quickly and with unpredictable sizes. Dynamic heights are a common challenge in virtualized lists. The frequently-updating blank size took that challenge to a new level. Our list items have dynamic, unknown heights that update frequently, and we need them to float to the top. For long enough assistant messages, the blank size could be zero, which introduced a new set of edge cases.

    How we solved it

    We tried many different approaches to implementing blank size. We tried a View at the bottom of the ScrollView with height, bottom padding on the ScrollView itself, translateY on the scrollable content, and minimum height on the last system message. All of these ended up with strange side effects and poor performance, often due to the need for a layout pass with Yoga. We ultimately landed on a solution that uses the contentInset property on ScrollView to handle the blank size without jitters. contentInset maps directly to the native property on UIScrollView in UIKit. We then paired contentInset with scrollToEnd({ offset }) when you send a message.
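The blank-size idea above can be sketched as pure logic. The exact formula and the name `computeBlankSize` are assumptions; the post only describes the inputs (the heights of the message pair, the chat container, and the keyboard) and the clamp-to-zero behavior for long assistant messages:

```typescript
// Sketch of the "blank size" computation: after sending a message, pad the
// bottom of the list so the user message can sit at the top of the visible
// chat area. The formula is an assumption distilled from the prose above.
function computeBlankSize(
  containerHeight: number,   // height of the chat container
  userMessageHeight: number, // the user message just sent
  assistantHeight: number,   // assistant response streamed so far
  keyboardHeight: number     // 0 when the keyboard is closed
): number {
  // The visible area shrinks while the keyboard is open.
  const visibleHeight = containerHeight - keyboardHeight;
  // Space left under the user + assistant pair; clamps to zero once the
  // assistant message grows long enough to fill the viewport.
  return Math.max(0, visibleHeight - userMessageHeight - assistantHeight);
}
```

Because the assistant message streams in with unpredictable sizes, a value like this would need recomputing on every height change, which is why the post feeds it through a shared value and contentInset rather than React state.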
    An assistant message’s blank size is determined by the combination of its own height, the height of the user message that comes before it, and the height of the chat container.

    Implementing useMessageBlankSize

    To implement blank size, we start with a hook called useMessageBlankSize in the assistant message. useMessageBlankSize is responsible for the following logic:

    • Synchronously measure the assistant message
    • Measure the user message before it
    • Calculate the minimum distance for the blank size below the assistant message
    • Keep track of what the blank size should be when the keyboard is opened or closed
    • Set the blankSize shared value at the root context provider

    Lastly, we consume blankSize and pass it to the contentInset of our ScrollView. useAnimatedProps from Reanimated lets us update props on the UI thread on each frame without triggering re-renders. contentInset delivered great performance and worked far better than every previous attempt.

    Taming the keyboard

    Building a good chat experience hinges on elegant keyboard handling. Achieving native feel in this area was tedious and challenging with React Native. When v0 iOS was in public beta, Apple released iOS 26. Every time a new iOS beta version came out, our chat seemingly broke entirely. Each iOS release turned into a game of cat and mouse, reproducing tiny discrepancies and jitters. Luckily, Kiryl, the maintainer of react-native-keyboard-controller, helped us address these issues, often updating the library within 24 hours of Apple releasing a new beta.

    Building useKeyboardAwareMessageList

    We used many of the hooks provided by React Native Keyboard Controller to build our own keyboard management system tailored to v0’s chat. useKeyboardAwareMessageList is our custom React hook responsible for all of our keyboard handling logic. We render it alongside our chat list, and it abstracts away everything we need to make the keyboard feel right.
    While consuming it is a one-liner, its internals are about 1,000 lines of code with many unit tests. useKeyboardAwareMessageList primarily relies on the upstream useKeyboardHandler, handling events like onStart, onEnd, and onInteractive, together with a number of Reanimated useAnimatedReaction calls to retry events in particular edge cases. useKeyboardAwareMessageList also handles a number of strange behaviors in iOS. For example, if you send an app to the background when the keyboard is open and then refocus the app, iOS will inexplicably fire the keyboard onEnd event three times. Because we relied on imperative behavior when events fired, we came up with tricks to dedupe repeat events and track app state changes.

    useKeyboardAwareMessageList implements the following features:

    • Shrink the blankSize when the keyboard opens
    • If you’re scrolled to the end of the chat, and there’s no blank size, shift content up when the keyboard opens
    • If you have scrolled up high enough, and there’s no blank size, show the keyboard on top of the content, without shifting the content itself
    • When the user interactively dismisses the keyboard via the scroll view or text input, drag it down smoothly
    • If you’re scrolled to the end of the chat, and the blank size is bigger than the keyboard, keep the content in place
    • If you’re scrolled to the end of the chat and the blank size is greater than zero, but it should be zero when the keyboard is open, shift content up so that it lands above the keyboard

    There was no single trick to get this all working. We spent dozens of hours using the app, noticing imperfections, tracing issues, and rewriting the logic until it felt right.

    Scrolling to end initially

    When you open an existing chat, v0 starts the chat scrolled to the end. This is similar to using the inverted prop on React Native’s FlatList, which is common for bottom-to-top chat interfaces.
    However, we decided not to use inverted, since it felt incompatible with an AI chat where messages stream in multiple times per second. We opted not to autoscroll as the assistant message streams. Instead, we let the content fill in naturally under the keyboard, together with a button to scroll to the end. This follows the same behavior as ChatGPT’s iOS app. That said, we wanted an inverted-list-style experience when you first opened an existing chat. To make this work, we call scrollToEnd when a chat first becomes visible. Due to a complex combination of dynamic message heights and blank size, we had to call scrollToEnd multiple times. If we didn’t, our list would either not scroll properly, or scroll too late. Once the content has scrolled, we call hasScrolledToEnd.set(true) to fade in the chat.

    Floating composer

    Inspired by iMessage’s bottom toolbar in iOS 26, we built a Liquid Glass composer with a progressive blur. We used @callstack/liquid-glass to add interactive Liquid Glass. By wrapping the glass views with LiquidGlassContainerView, we automatically get the view morphing effect.

    Make it float

    After adding the Liquid Glass, the next step was making the composer float on top of the scrollable chat content. To do this, we took the following steps:

    • Add position: absolute; bottom: 0 to the composer
    • Wrap the composer in KeyboardStickyView from react-native-keyboard-controller
    • Synchronously measure the composer, and store its height in context using a shared value
    • Add the composerHeight.get() to our ScrollView’s native contentInset.bottom property

    However, this was not enough. We were still missing one key behavior. As you type, the text input’s height can increase. When you type new lines, we want to simulate the experience of typing in a regular, non-absolute-positioned input. We had to find a way to shift the chat messages upwards, but only if you are scrolled to the end of the chat.
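That "only if you are scrolled to the end of the chat" condition boils down to a small distance check against the scroll metrics. A sketch, where the function name and the threshold value are assumptions, not the post's actual code:

```typescript
// Sketch of a "close enough to the end" check that could gate autoscrolling
// when the composer grows. Name and threshold are assumptions.
function shouldAutoscrollToEnd(
  scrollOffset: number,   // current scroll position
  contentHeight: number,  // total scrollable content height
  viewportHeight: number, // visible list height
  threshold = 50          // assumed tolerance, in points
): boolean {
  // Distance between the bottom edge of the viewport and the end of content.
  const distanceFromEnd = contentHeight - viewportHeight - scrollOffset;
  return distanceFromEnd <= threshold;
}
```

With a predicate like this, typing a new line in the composer scrolls the list only when the reader is already pinned to the bottom, so scrolling up to re-read older messages is never interrupted.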
    In the video below, you can see both cases. At the start of the video, content shifts up with new lines since the chat is scrolled to the end. However, after scrolling up in the chat, typing new lines will not shift the content.

    useScrollWhenComposerSizeUpdates

    Enter useScrollWhenComposerSizeUpdates. This hook listens to the height of the composer and automatically scrolls to the end when needed. To consume it, we simply call it in MessagesList. First, it sets up an effect using useAnimatedReaction to track composer height changes. Next, we call autoscrollToEnd. As long as you’re close enough to the end of the scrollable area, we automatically scroll to the end of the chat. Without this, entering new lines in the composer would overlap the bottom of the scrollable area. useScrollWhenComposerSizeUpdates lets us conditionally simulate the experience of a view that is not absolute-positioned. As we saw in earlier code, we unfortunately relied on a number of setTimeout and requestAnimationFrame calls to scrollToEnd. That code will understandably raise eyebrows, but it was the only way we managed to get scrolling to the end working properly. We’re actively collaborating with Jay, the maintainer of LegendList, to build a more reliable approach.

    Make it feel native

    React Native’s built-in TextInput felt out of place in a native chat app. By default, when you set multiline={true}, the TextInput shows ugly scroll indicators, which is inconsistent with most chat apps. Swiping up and down on the input will bounce its internal content, even if you haven’t typed any text yet. Additionally, the input doesn't support interactive keyboard dismissal. To fix these issues, we applied a patch to RCTUITextView in native code. This patch disables scroll indicators, removes bounce effects, and enables interactive keyboard dismissal. Our patch also adds support for swiping up to focus the input.
We realized we needed this after watching testers swipe up in frustration, expecting the keyboard to open. While maintaining a patch across React Native updates is not ideal, it was the most practical solution we found. We would have preferred an official API for extending native views without patching, and we plan on contributing this patch to React Native core if there is community interest.

Pasting images

To support pasting images and files in the text input, we used an Expo Module that listens to paste events from the native UIPasteboard. If you paste long enough text, onPaste will automatically turn the pasted content into a .txt file attachment. Since it was difficult to extend the existing TextInput in native code, we use a TextInputWrapper component which wraps TextInput and traverses its subviews in Swift. For more in-depth examples of creating native wrapper components, you can watch my 2024 talk, “Don’t be afraid to build a native library”.

Fading in streaming content

When an AI assistant message streams in, it needs to feel smooth. To achieve this, we created two components:

  • <FadeInStaggeredIfStreaming />
  • <TextFadeInStaggeredIfStreaming />

As long as an element gets wrapped by one of these components, its children will smoothly fade in with a staggered animation. Under the hood, these components render a variation of FadeInStaggered, which handles the state management. useIsAnimatedInPool is a custom state manager outside of React that allows a limited number of ordered elements to be rendered at once. Elements request to join the pool when they mount, and isActive indicates whether they should render an animated node. After the onFadedIn callback fires, we evict the element from the pool, rendering its children directly without the animated wrapper. This helps us limit the number of animated nodes that are active at once. Lastly, FadeIn renders a staggered animation with a delay of 32 milliseconds between elements.
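To make the pool mechanics concrete, here is a minimal sketch of a pool manager along the lines of what useIsAnimatedInPool wraps. This is illustrative only; the real implementation lives outside React and integrates with mount order and the staggered scheduling:

```typescript
// A pool that caps how many elements animate at once. Elements join on
// mount; only the first `limit` members are "active" (animated). When an
// element finishes fading in, it is evicted and the next one activates.
class AnimationPool {
  private members: string[] = [];

  constructor(private limit: number) {}

  // Called when an element mounts and requests to animate.
  join(id: string): void {
    if (!this.members.includes(id)) this.members.push(id);
  }

  // Active members render the animated wrapper; the rest wait their turn.
  isActive(id: string): boolean {
    const index = this.members.indexOf(id);
    return index !== -1 && index < this.limit;
  }

  // Called from the onFadedIn callback: the element now renders its
  // children directly, freeing a slot for the next queued element.
  evict(id: string): void {
    this.members = this.members.filter((m) => m !== id);
  }
}
```

With a limit of 4, as for text elements, a fifth queued word only becomes active once one of the first four has faded in and been evicted.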
The staggered animations run on a schedule, animating a batch of 2 items at a time. When the queue of staggered items grows beyond 10, we increase the number of batched items according to the size of the queue. TextFadeInStaggeredIfStreaming uses a similar strategy: we first chunk words into individual text nodes, then create a unique pool for text elements with a limit of 4. This ensures that no more than 4 words fade in at a time.

One issue we faced with this approach is that it relies heavily on firing animations on mount. As a result, if you send a message, go to another chat, and then come back to the original chat before the message is done sending, it will remount and animate once again. To mitigate this, we implemented a system that keeps track of which content you've already seen animate across chats. The implementation uses a DisableFadeProvider towards the top of the message in the tree. We consume it in the root fade component to avoid affecting the pool if needed. While it might look unusual to explicitly rely on useState's initial value in a non-reactive way, this let us reliably track elements and their animation states based on their mount order.

Sharing code between web and native

When we started building the v0 iOS app, a natural question arose: how much code should we share between web and native? Given how mature the v0 web monorepo was, we decided to share types and helper functions, but not UI or state management. We also made a concerted effort to migrate business logic from client to server, letting the v0 mobile app be a thin wrapper over the API.

Building a shared API

Sharing the backend API routes between a mature Next.js app and a new mobile app introduced challenges. The v0 web app is powered by React Server Components and Server Actions, while the mobile app functions more like a single-page React app. To address this, we built an API layer using a hand-rolled backend framework.
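The batching schedule for the staggered fades described above can be sketched as a simple pure function. Note that only the batch-of-2 default and the 10-item threshold come from our implementation; the exact scaling curve here is illustrative:

```typescript
// Default: animate 2 items per tick. If the queue of pending staggered
// items grows beyond 10, scale the batch size with the queue so streaming
// content never falls far behind the fade animations.
function batchSizeFor(queueLength: number): number {
  const DEFAULT_BATCH = 2;
  const QUEUE_THRESHOLD = 10;
  if (queueLength <= QUEUE_THRESHOLD) return DEFAULT_BATCH;
  // Illustrative scaling: grow roughly with a fifth of the queue size.
  return Math.max(DEFAULT_BATCH, Math.ceil(queueLength / 5));
}
```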
Our framework enforces runtime type safety by requiring input and output types specified with Zod. After defining the routes, we generate an openapi.json file based on each route’s Zod types. The mobile app consumes the OpenAPI spec using Hey API, which generates helper functions to use with TanStack Query.

This effort led to the development of the v0 Platform API. We wanted to build the ideal API for our own native client, and we ultimately decided to make that same API available to everyone. Thanks to this approach, v0 mobile uses the same routes and logic as v0’s Platform API customers. On each commit, we run tests to ensure that changes to our OpenAPI spec are compatible with the mobile app. In the future, we hope to eliminate the code generation step entirely with a type-level RPC wrapper around the Platform API.

Styling

v0 uses react-native-unistyles for styles and theming. My experience with React Native has taught me to be cautious of any work done in render. Unlike other styling libraries we evaluated, Unistyles provides comprehensive theming without re-rendering components or accessing React Context.

Native menus

Beyond Unistyles for themes and styles, we did not use a JS-based component library. Instead, we relied on native elements where possible. For menus, we used Zeego, which relies on react-native-ios-context-menu to render the native UIMenu under the hood. Zeego automatically renders Liquid Glass menus when you build with Xcode 26.

Native alerts

React Native apps on iOS 26 experienced the Alert pop-up rendering offscreen. We reproduced this in our own app and in many popular React Native apps. We patched it locally and worked with developers from Callstack and Meta to upstream a fix in React Native.

Native bottom sheets

For bottom sheets, we used the built-in React Native modal with presentationStyle="formSheet". However, this came with a few downsides which we addressed with patches.
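The runtime-validated route shape from the shared-API section above can be sketched as follows. Our actual framework uses Zod schemas and is internal, so every name here is illustrative, with a hand-rolled validator standing in for Zod:

```typescript
// A route couples a handler with runtime validators for its input and
// output, so type safety holds at the network boundary, not just at
// compile time.
type Validator<T> = (value: unknown) => T;

interface Route<I, O> {
  input: Validator<I>;
  output: Validator<O>;
  handler: (input: I) => O;
}

function defineRoute<I, O>(route: Route<I, O>) {
  return (rawInput: unknown): O => {
    const input = route.input(rawInput); // reject malformed requests
    const output = route.handler(input);
    return route.output(output);         // catch contract drift early
  };
}

// Hypothetical example: a chat-rename route with a string validator.
const string: Validator<string> = (v) => {
  if (typeof v !== "string") throw new Error("expected string");
  return v;
};

const renameChat = defineRoute({
  input: string,
  output: string,
  handler: (name) => name.trim(),
});
```

Because each route declares its input and output validators up front, the same definitions can later drive OpenAPI generation, which is the role Zod plays in our framework.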
Modal dragging issues

First, when dragging the sheet down, it temporarily froze in place before properly dismissing. To resolve this, we patched React Native locally. We worked with Callstack to upstream our patch into React Native, and it’s now live in 0.82.

Fixing Yoga flickering

If you put a View with flex: 1 inside a modal with a background color, and then drag the modal up and down, the bottom of the view flickers aggressively. To solve this, we patched React Native locally to support synchronous updates for modals in Yoga. We collaborated with developers from Callstack, Expo, and Meta to upstream this change into React Native core. It’s now live in React Native 0.82.

Looking forward

After building our first app using React Native with Expo, we aren’t looking back. If you haven't tried v0 for iOS yet, download it and let us know what you think with an App Store review.

We're hiring developers to join the Vercel Mobile team. If this kind of work excites you, we'd love to hear from you.

At Vercel, we're committed to building ambitious products at the highest caliber. We want to make it easy for web and native developers to do the same, and we plan to open-source our findings. Please reach out on X if you would like to beta test an open-source library for AI chat apps. We look forward to partnering with the community to continue improving React Native.

    Fernando Rojo