News from the AI & ML world

DeeperML - #midjourney

David Crookes (Tom's Guide):
Midjourney, a leading AI art platform, officially launched its first video model, V1, on June 18, 2025. This new model transforms images into short, animated clips, marking Midjourney's entry into the AI video generation space. V1 allows users to animate images, either generated within the platform using versions V4-V7 and Niji, or uploaded from external sources. This move sets the stage for a broader strategy that encompasses interactive environments, 3D modeling, and real-time rendering, highlighting the company’s long-term ambitions in immersive media creation.

Early tests of V1 show support for dynamic motion, basic scene transitions, and a range of camera moves, with aspect ratios of 16:9, 1:1, and 9:16. The model was trained on a blend of image and video data and produces clips at 24 frames per second; reports on clip length vary, with some citing roughly 10-second clips and others five-second clips extendable to 20 seconds in five-second segments. Midjourney's stated goal for this model is aesthetic control rather than photorealism. Pre-launch coverage described the alpha as private, with the company prioritizing safety and alignment before scaling and giving no timeline for general access or pricing.

Midjourney's V1 distinguishes itself by animating static images, in contrast to text-to-video engines like OpenAI's Sora and Google's Veo 3, and it stands as an economically competitive choice. It is available to all paid subscribers, starting at $10 per month, with varying amounts of fast GPU time and rendering priority depending on the plan; tiers include the Basic, Pro, and Mega plans, designed to accommodate different usage needs. With over 20 million users already familiar with its image generation capabilities, Midjourney's entry into video is poised to make a significant impact on the creative AI community.



References :
  • Fello AI: On June 18, 2025, AI art platform Midjourney officially entered the AI video generation space with the debut of its first video model, V1.
  • Shelly Palmer: Midjourney Set to Release its First Video Model
  • PCMag Middle East: Midjourney will generate up to four five-second clips based on the images you input, though it admits that some settings can produce 'wonky mistakes.'
  • TechRadar: Midjourney just dropped its first AI video model and Sora and Veo 3 should be worried
  • Tom's Guide: Midjourney video generation is here — but there's a problem holding it back
  • PPC Land: AI image generator introduces video capabilities on June 18, addressing compression issues for social platforms.
  • eWEEK: Midjourney V1 AI Video Model: A New Worthy Competitor to Google, OpenAI Products
  • AI GPT Journal: Midjourney's Introduction to Image-to-Video Technology
David Crookes (Tom's Guide):
Midjourney has officially launched its first image-to-video generation model, named V1, marking its entry into the competitive AI video market. This new model enables users to transform static images, whether generated within Midjourney or uploaded, into short, dynamic video clips. Unlike some competitors that rely on text-to-video generation, Midjourney's V1 focuses on animating existing visuals, building upon the platform's established expertise in AI-generated imagery. The model supports features such as dynamic motion, basic scene transitions, and various camera moves, with aspect ratios of 16:9, 1:1, and 9:16, catering to diverse creative needs.

The V1 model generates four variations of each video, each approximately five seconds in length at 24 frames per second. Users can extend these videos in four-second increments, up to a maximum of 21 seconds, allowing for greater control over the final output. Midjourney offers two primary motion dynamics settings: "Low Motion" for subtle animations and atmospheric visuals, and "High Motion" for dynamic movements and lively subject animations. Users can choose automatic prompting, where Midjourney determines motions based on the image context, or manual prompting, where they explicitly instruct the desired animation style via text prompts. However, its founder, David Holz, said the goal is aesthetic control, not realism.
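The extension arithmetic above (a roughly five-second base clip, extendable in four-second increments up to about 21 seconds) can be sketched as follows; the constants are taken from the reported figures, not from any Midjourney API:

```python
# Sketch of the clip-length arithmetic as reported: a base clip of ~5 s
# can be extended up to four times, ~4 s per extension, for ~21 s total.
BASE_SECONDS = 5
EXTENSION_SECONDS = 4
MAX_EXTENSIONS = 4

def clip_length(extensions: int) -> int:
    """Total clip length in seconds after a given number of extensions."""
    if not 0 <= extensions <= MAX_EXTENSIONS:
        raise ValueError(f"extensions must be between 0 and {MAX_EXTENSIONS}")
    return BASE_SECONDS + extensions * EXTENSION_SECONDS

# Possible clip lengths: 5, 9, 13, 17, 21 seconds
print([clip_length(n) for n in range(MAX_EXTENSIONS + 1)])
```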

Priced starting at $10 per month, the Basic plan grants access to the V1 model, making it available to a wide range of users. However, generating video consumes roughly eight times as much GPU time as image generation, so monthly credits are spent much faster. The launch of Midjourney's V1 positions it alongside industry leaders like Google and OpenAI, although each company approaches video generation with different focuses and strengths. While V1 is currently accessible via the Midjourney website and Discord, the company acknowledges that the costs of running the model are still hard to predict.
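The credit math implied by the roughly eight-times figure can be illustrated with a small sketch. The 8× multiplier comes from the reporting above; the per-image cost unit is an arbitrary placeholder, not Midjourney's actual billing scheme:

```python
# Hypothetical illustration of the ~8x GPU cost reported for video jobs.
# COST_PER_IMAGE is an arbitrary unit, not Midjourney's real pricing.
COST_PER_IMAGE = 1.0
VIDEO_MULTIPLIER = 8  # a video job costs ~8x an image job, per the article

def jobs_affordable(budget_units: float, video: bool = False) -> int:
    """How many jobs a given credit budget covers, in the same units."""
    per_job = COST_PER_IMAGE * (VIDEO_MULTIPLIER if video else 1)
    return int(budget_units // per_job)

# A budget covering 200 image jobs covers only 25 video jobs.
print(jobs_affordable(200))              # 200
print(jobs_affordable(200, video=True))  # 25
```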



References :
  • AI GPT Journal: Midjourney Introduces Image-to-Video Generation Model: What You Need to Know
  • Fello AI: Midjourney Video V1 Is Here! How Does It Compare to Google Veo 3 & OpenAI Sora?
  • Shelly Palmer: Midjourney Set to Release its First Video Model