Midjourney has launched its first AI video generation model, V1, marking the company’s shift from image generation to full multimedia content creation. Starting today, Midjourney’s nearly 20 million users can animate uploaded or AI-generated images via the website.
Midjourney CEO David Holz said the V1 model was the company’s next step toward its ultimate goal: AI models “capable of real-time open-world simulations.” The company also said it had bigger plans for its AI video models than just generating B-roll for Hollywood movies or commercials for the ad industry.
Nick St. Pierre, a creative director and unofficial Midjourney evangelist, noted that V1 only does image-to-video conversion. Each job produces four 480p clips at 24fps, and it works with images of any aspect ratio. He also highlighted V1’s cost efficiency, arguing that roughly 20 video clips for ~$4 beat Veo’s $3 per video, and that generation speed was notably faster.
According to St. Pierre, V1 offers custom settings for controlling the video model’s output. An automatic setting animates an image with motion the model chooses, while a manual setting lets users describe, in text, the specific animation they want. Videos generated with V1 are only five seconds long, but users can extend them by four seconds up to four times, so a V1 video can reach 21 seconds.
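The extension arithmetic above can be sanity-checked with a short Python sketch; the clip and extension lengths come from the article, while the function name and limit check are ours for illustration:

```python
BASE_SECONDS = 5     # initial V1 clip length
EXTEND_SECONDS = 4   # each extension adds four seconds
MAX_EXTENSIONS = 4   # V1 allows up to four extensions per clip

def max_clip_length(extensions: int = MAX_EXTENSIONS) -> int:
    """Total clip length in seconds after a given number of extensions."""
    if not 0 <= extensions <= MAX_EXTENSIONS:
        raise ValueError("V1 allows at most four extensions")
    return BASE_SECONDS + extensions * EXTEND_SECONDS

print(max_clip_length())  # 5 + 4*4 = 21 seconds
```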
Had a chance to try out @midjourney video!
Really impressed by a bunch of my outputs – it makes a big difference to start from a quality image, and the coherence is strong.
The auto-prompting is great. And it's also a relatively fast and cheap model, which never hurts 😅
— Justine Moore (@venturetwins) June 18, 2025
The Midjourney team said it would charge 8x more for a video generation than for a typical image generation, meaning subscribers will burn through their monthly allotment of generations much faster when creating videos than images. It also mentioned plans to develop AI models for 3D rendering and for real-time generation.
The cheapest way to try out V1 at launch will be by subscribing to Midjourney’s $10-per-month Basic plan. Subscribers to Midjourney’s $60-a-month Pro plan and $120-a-month Mega plan will have unlimited video generations in the company’s slower “Relax” mode. Midjourney said it will reassess its pricing for video models over the next month. Holz claimed that Midjourney’s prices were over 25 times cheaper than what the market had shipped before.
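As a back-of-the-envelope check on the pricing claims, here is a Python sketch using the figures quoted earlier in the article (St. Pierre’s ~$4 for 20 clips and the $3-per-video Veo price); the variable names are ours, and actual pricing may change, since Midjourney said it will reassess within a month:

```python
# Figures quoted in the article; treat as illustrative, not official pricing.
V1_CLIPS = 20
V1_BUNDLE_COST = 4.00      # ~$4 for roughly 20 clips, per St. Pierre
VEO_COST_PER_CLIP = 3.00   # quoted Veo price per video

v1_cost_per_clip = V1_BUNDLE_COST / V1_CLIPS   # $0.20 per clip
ratio = VEO_COST_PER_CLIP / v1_cost_per_clip   # how many times cheaper V1 is

print(f"V1: ${v1_cost_per_clip:.2f}/clip vs Veo: ${VEO_COST_PER_CLIP:.2f}/clip "
      f"({ratio:.0f}x cheaper)")
```

Note that these numbers work out to roughly 15x cheaper than Veo; Holz’s “over 25 times cheaper” claim compares against the broader market, not Veo specifically.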
“Our goal is to give you something fun, easy, beautiful, and affordable so that everyone can explore. We think we’ve struck a solid balance. Though many of you will feel a need to upgrade at least one tier for more fast-minutes.”
— David Holz, CEO of Midjourney
According to Holz, the actual costs of producing models and the prices Midjourney charged were challenging to predict. He added that the company would do its best to give users access right now. However, he clarified that the Midjourney team would watch everyone use the technology over the next month and adjust everything to ensure the company was operating sustainably.
The Midjourney CEO laid out next year’s plans, claiming that the inevitable destination of V1’s technology was models capable of real-time open-world simulations. Basically, the company was looking to build an AI system that generated imagery in real time. Users could command it to move around in a 3D space where the environments and characters also moved, and users could interact with everything.
Holz pointed out that to do this, the company needed visuals (its image models), needed to make those images move (video models), needed to let users move themselves through space (3D models), and needed all of it to run fast (real-time models). He added that the next year would involve building these pieces individually, releasing them, and slowly assembling them into a unified system. The Midjourney boss said it might be expensive at first, but the final product would be something everyone could use.
Holz promised that more would come from his company over the next few weeks and months, adding that his team had learned a lot while building video models. He also pointed out that much of this learning would flow back into Midjourney’s image models in the coming weeks or months. But for now, Holz said, just press “Animate” and make those images move.