
Midjourney's first AI video model brings your images to life
What's the story
Midjourney, a leading player in the AI image generation space, has launched its first-ever AI video generation model, dubbed V1.
The tool takes an image as input and generates up to four five-second videos from it.
Currently, V1 is accessible only through Discord and on the web.
Market competition
Midjourney enters the AI video generation space
The launch of V1 puts Midjourney in direct competition with other AI video generation models from OpenAI, Runway, Adobe, and Google.
However, unlike its competitors, which are focused on building controllable AI video models for commercial use, Midjourney has always been known for distinctive AI image models that cater to creative individuals.
User control
How to use the new V1 model
The V1 model comes with a range of custom settings to give users more control over the video outputs.
Users can choose an automatic setting, which animates the image on its own, or a manual setting, which lets them describe in text how they want the image animated.
The amount of camera and subject movement can also be controlled by selecting "low motion" or "high motion" in the settings.
Future plans
Midjourney's vision for the future
In a blog post, Midjourney CEO David Holz said the video model is just a stepping stone toward a bigger goal: creating AI models "capable of real-time open-world simulations."
Following V1, Midjourney plans to develop AI models for 3D rendering and real-time applications, an ambitious vision that sets it apart from other companies in the space.
Legal issues
Lawsuit from Disney and Universal Studios
The launch of V1 comes just a week after Midjourney was sued by Disney and Universal Studios.
The lawsuit accuses the company of using copyrighted characters in images generated by its AI image models.
The case underscores Hollywood studios' concerns about the potential impact of AI image and video generation models on their creative work.