It's official... Midjourney has Video!
Midjourney today introduced the V1 Video Model • June 18 at 12:35PM
Midjourney just launched something big — and if you’re a creative professional or visual storyteller, it’s time to lean in.
For the first time, we can now animate our Midjourney images directly within the platform. It’s called Image-to-Video, and while this is officially Version 1 of their video model, it’s not just a test — it’s a glimpse of what’s coming next in real-time, interactive AI visuals.
Let me walk you through what this is, how it works, and what you can actually do with it.
The Big Picture
Midjourney has been building toward something much bigger than still images. As David (the founder) puts it:
“We believe the inevitable destination of this technology
are models capable of real-time open-world simulations.”
That means immersive, interactive, 3D-rendered environments you can move through — where every character, object, and camera angle responds dynamically.
To get there, they’re rolling out foundational tools in sequence:
1 • ✅ Images (done)
2 • ✅ Video (this release)
3 • 🟡 3D models (coming)
4 • 🟡 Real-time responsiveness (on the roadmap)
This is just step two.
What You Can Do Right Now (V1 Features)
“Image-to-Video” Workflow
Start with a standard Midjourney image. Then click Animate. That’s it.
You’ll get:
Four 5-second video clips per job
Two motion settings: Low Motion and High Motion
Ability to extend videos (4 seconds at a time, up to 4x)
Upload your own image and use it as a start frame

Motion Options
Automatic Mode: Midjourney guesses the motion based on image content
Manual Mode: You describe exactly how you want the scene to move
“There’s an ‘automatic’ animation setting… very fun. Then there’s a ‘manual’ animation button which lets you describe to the system how you want things to move and the scene to develop.” — David, Midjourney
High vs. Low Motion
Low Motion is ambient: slow pans, subtle movement. (Sometimes too subtle.)
High Motion is dynamic: subject and camera move. (Sometimes a bit chaotic.)
In the sample clips, the full range of motion isn’t always on display, so the difference between Low Motion and High Motion isn’t as clear as it could be.
Pricing & Access
Web-only launch (for now)
A video job = 8x the cost of an image job
But since you get 4 video clips, it’s about the same cost as an upscale
That works out to ~1 image-worth per second of video
Relax mode for video will be tested for Pro plans and up
Honestly? That’s insanely cheap for motion content generation — over 25x cheaper than anything else in the market today. Do I think it replaces all the other video tools out there? No. I still have my Luma, Suno, Veo, Kling, Hailuo, Runway, and other tools at the ready.
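The pricing above is easy to sanity-check. A quick back-of-envelope sketch, using the figures stated in this post (a video job costs 8x an image job and yields four 5-second clips) plus one assumption not stated here — that an image job yields four images:

```python
# Back-of-envelope cost math for Midjourney V1 video.
# Normalize: one image job = 1 unit of cost.
IMAGE_JOB_COST = 1.0
VIDEO_JOB_COST = 8 * IMAGE_JOB_COST   # a video job = 8x an image job
CLIPS_PER_JOB = 4                     # four clips per video job
SECONDS_PER_CLIP = 5                  # five seconds per clip

total_seconds = CLIPS_PER_JOB * SECONDS_PER_CLIP        # 20 s of footage per job
cost_per_clip = VIDEO_JOB_COST / CLIPS_PER_JOB          # image-job units per clip
cost_per_second = VIDEO_JOB_COST / total_seconds        # image-job units per second

# Assumption: an image job produces four images, so one image = 1/4 job.
images_per_second = cost_per_second * 4                 # single-image units per second

print(f"{cost_per_clip} image jobs per clip")
print(f"{cost_per_second} image jobs per second")
print(f"{images_per_second} single images per second")
```

Depending on whether you count a whole image job (0.4/s) or a single image (1.6/s) as the unit, the per-second figure brackets the "~1 image-worth per second" framing above.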
What This Means for Creative Workflows
This isn’t just for fun (though it is fun). It’s useful. Strategic. Game-changing.
You can now:
Animate product concepts
Create storytelling sequences
Explore cinematic moods
Bring static portraits to life
Rapid-prototype video ads or trailers
And we’re just getting started. As David says:
“Properly utilized it’s not just fun, it can also be really useful, or even profound
— to make old and new worlds suddenly alive.”
Yes. That.
My Take
If you’re using Midjourney already, this video release expands your playground. If you’re not? This might be the feature that finally tips the scales.
I’ll be sharing experiments, tips, and motion prompt structures over the coming weeks. For now: go make something move.
For my paying community, here are some prompts and renders to enjoy: