Meta Unveils Movie Gen, an AI Model Capable of Generating Video and Audio Content
Meta has announced Movie Gen, a new AI model capable of generating realistic video and audio clips that rival tools from OpenAI and ElevenLabs. The model can create content from user prompts and generate synchronized sound effects.

On Friday, Meta, the parent company of Facebook, announced the launch of a new artificial intelligence model called Movie Gen, which is designed to create realistic video and audio clips in response to user prompts. The tool is positioned as a direct competitor to offerings from leading media-generation startups such as OpenAI and ElevenLabs.
Meta showcased samples of Movie Gen's capabilities, including videos of animals swimming and surfing, as well as clips that used real photos of individuals to depict them engaging in activities such as painting. According to a blog post released by the company, the model can also generate background music and sound effects synchronized with the video content. One notable demonstration showed Movie Gen inserting pom-poms into the hands of a man running through a desert; another transformed a skateboarder's surroundings from a dry parking lot into a splashing puddle.
The videos created by Movie Gen can run up to 16 seconds, while the accompanying audio can last up to 45 seconds. Meta released data from blind tests indicating that Movie Gen compares favorably with offerings from rivals including Runway, OpenAI, ElevenLabs, and Kling.
The announcement arrives as Hollywood grapples with the implications of generative AI video technology. Earlier this year, Microsoft-backed OpenAI demonstrated its product Sora, which can generate feature-film-like videos from text prompts. As industry technologists explore how these tools could enhance and speed up filmmaking, concerns have also been raised about the ethics of systems that may have been trained on copyrighted material without authorization. Lawmakers have voiced apprehension about the use of AI-generated deepfakes, particularly in elections around the world, including in the United States, Pakistan, India, and Indonesia.
Despite the model's capabilities, Meta indicated it is unlikely to release Movie Gen for open use by developers, as it has done with its Llama series of large language models. The company said it assesses the risks of each model individually and declined to comment specifically on Movie Gen's evaluation. Instead, Meta plans to work directly with the entertainment industry and other content creators to explore applications of Movie Gen, and it intends to integrate the model into its own products in the coming year.
Meta has said that Movie Gen was developed using a combination of licensed and publicly available datasets. Meanwhile, OpenAI has been meeting with Hollywood executives and agents throughout the year to discuss potential partnerships involving Sora, although no formal agreements have emerged from those talks so far. Anxieties about AI in the entertainment sector were heightened in May, when actress Scarlett Johansson accused OpenAI of imitating her voice without permission for its chatbot. In September, Lions Gate Entertainment, the studio behind major franchises such as “The Hunger Games” and “Twilight,” announced it would give AI startup Runway access to its film and television library for training purposes, allowing filmmakers to use the resulting model to augment their creative work.