Runway AI Inc. today released its most advanced AI video generation model, marking the next stage in the race to build tools that could transform film production. The new Gen-4 system provides character and scene consistency across multiple shots, a capability that has eluded most AI video generators to date.
The New York-based startup, backed by Google, Nvidia and Salesforce, released Gen-4 to all paid subscribers and enterprise customers, with additional features planned for later this week. Users can generate five- and ten-second clips at 720p resolution.
The release comes just days after OpenAI's image generation update created a cultural phenomenon, with millions of users requesting Studio Ghibli-style pictures. The viral trend was so popular that it temporarily overwhelmed OpenAI's servers, with CEO Sam Altman tweeting that "our GPUs are melting" due to the unprecedented demand. The Ghibli-style images also sparked several disputes over copyright and whether AI should be allowed to imitate distinctive artistic styles.
Character and scene consistency, maintaining the same visual elements across multiple shots and camera angles, has been AI video generation's Achilles' heel. When a character's face subtly changes between cuts, or a background element shifts, the artificial nature of the content becomes immediately apparent to viewers.
The challenge stems from how these models work at a fundamental level. Previous AI generators treated each frame as a separate creative act, with only loose connections between them. Imagine a room full of artists, each drawing a single frame of a film without seeing the frames before or after it; the result would be visually disjointed.
Runway's Gen-4 appears to solve this problem by maintaining what amounts to a persistent memory of visual elements. Once a character, object or environment is established, the system can show it from different angles while preserving its core features. This is not just a technical improvement; it is the difference between producing interesting visual fragments and telling actual stories.
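To make that conditioning idea concrete, here is a minimal toy sketch, assuming a drastically simplified model in which a "frame" is just a dictionary of visual attributes. It only illustrates the general contrast between independent per-frame sampling and sampling against a persistent reference; it does not reflect Runway's actual architecture, which the company has not published, and every function here is a hypothetical stand-in.

```python
import random

# Toy illustration of the consistency problem described above. This is not
# Runway's published method; frames are reduced to dicts of attributes, and
# both "generators" are hypothetical stand-ins for a real video model.

def sample_frame(prompt, reference=None):
    """Hypothetical single-frame generator."""
    if reference is not None:
        # Conditioning on a persistent reference keeps attributes fixed.
        return {"prompt": prompt, **reference}
    # Without a reference, visual attributes are re-rolled every frame,
    # which is why characters drift between cuts.
    return {
        "prompt": prompt,
        "hair": random.choice(["brown", "black", "red"]),
        "jacket": random.choice(["green", "blue", "yellow"]),
    }

def generate_clip_independent(prompt, num_frames):
    """Earlier approach: each frame is a separate creative act."""
    return [sample_frame(prompt) for _ in range(num_frames)]

def generate_clip_consistent(prompt, num_frames):
    """Consistency-aware approach: establish the character once, then
    condition every frame on that persistent description."""
    reference = {"hair": "brown", "jacket": "green"}  # established up front
    return [sample_frame(prompt, reference=reference) for _ in range(num_frames)]

if __name__ == "__main__":
    print(generate_clip_independent("a courier crossing a rainy street", 3))
    print(generate_clip_consistent("a courier crossing a rainy street", 3))
```

Run as written, the first clip's character changes hair and clothing from frame to frame, while the second stays stable, which is the practical difference viewers notice between the two approaches.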
Gen-4 allows you to generate new images and videos with consistent styles, subjects, locations and more, giving you continuity and control within your stories.

We came together to put the model's narrative capabilities to the test… pic.twitter.com/iyz2baew2U
– runway (@runwayml) March 31, 2025
According to Runway's documentation, Gen-4 allows users to provide reference images of subjects and then generate consistent outputs of them from different angles. The company claims the model can produce realistic motion while maintaining consistency in subjects, objects and style.
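As a rough sketch of what that reference-image workflow could look like programmatically, the snippet below submits one reference image plus a text prompt to a video-generation endpoint and polls for the finished clip. The host, paths, field names and response shape are assumptions invented for illustration; they are not taken from Runway's API documentation.

```python
import time
import requests

# Hypothetical reference-image -> video workflow. The endpoint URL, request
# fields and response format are assumptions for illustration only, not
# Runway's documented API.
API_BASE = "https://api.example-video-service.com/v1"  # placeholder host
API_KEY = "YOUR_API_KEY"

def start_generation(reference_image_url: str, prompt: str, duration_s: int = 5) -> str:
    """Submit a generation job: one reference image plus a text prompt."""
    resp = requests.post(
        f"{API_BASE}/image_to_video",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "reference_image": reference_image_url,  # subject to keep consistent
            "prompt": prompt,                        # motion / scene description
            "duration": duration_s,                  # Gen-4 clips are 5 or 10 seconds
            "resolution": "720p",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["task_id"]

def wait_for_video(task_id: str, poll_s: float = 5.0) -> str:
    """Poll until the job finishes and return the output video URL."""
    while True:
        resp = requests.get(
            f"{API_BASE}/tasks/{task_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        body = resp.json()
        if body["status"] == "succeeded":
            return body["video_url"]
        if body["status"] == "failed":
            raise RuntimeError(body.get("error", "generation failed"))
        time.sleep(poll_s)

if __name__ == "__main__":
    task = start_generation(
        "https://example.com/character_reference.png",
        "the same character seen from a low angle, walking through rain",
    )
    print(wait_for_video(task))
```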
To demonstrate the model's capabilities, Runway released several short films generated entirely with Gen-4. One film, "New York is a Zoo," shows off the model's visual effects by placing realistic animals in cinematic New York settings. Another, titled "The Retrieval," follows explorers searching for a mysterious flower and was produced in less than a week.
Gen-4 builds on Runway's previous tools. In October, the company released Act-One, a feature that lets directors capture facial expressions from smartphone video and transfer them to AI-generated characters. The following month, Runway added advanced 3D-style camera controls to Gen-3 Alpha Turbo, allowing users to zoom in and out of scenes while preserving character forms.
This trajectory reveals Runway's strategic vision. While competitors have focused on producing ever more realistic single images or clips, Runway has been assembling the components of a complete digital production pipeline. The approach targets the problems working directors actually face, performance, shot coverage and visual consistency, rather than incremental technical benchmarks.
The evolution from one-off novelty clips toward consistent world models suggests Runway understands that AI will only be genuinely useful to directors if it fits the logic of traditional production. That is the difference between building a technology demo and building tools professionals can work into their existing workflows.
The financial stakes are significant. Runway is reportedly raising a new funding round that would value the company at $4 billion. According to financial reports, the startup aims to reach $300 million in annualized revenue this year, following the launch of new products and an API for its video-generating models.
Runway has also pursued Hollywood partnerships, signing a deal with Lionsgate to create a custom AI video generation model trained on the studio's catalog of more than 20,000 titles. The company has also established the Hundred Film Fund, offering filmmakers up to $1 million to produce movies using AI.
"We believe the best stories are yet to be told, but that traditional funding mechanisms often overlook new and emerging visions within the larger industry ecosystem," the company said.
However, the technology raises concerns among film industry professionals. A 2024 study commissioned by the Animation Guild found that 75% of production companies that have adopted AI have reduced, consolidated or eliminated jobs after doing so. The study projects that more than 100,000 U.S. entertainment jobs will be affected by 2026.
Like other AI companies, Runway faces legal scrutiny over its training data. The company is currently defending a lawsuit brought by artists who allege their copyrighted work was used without authorization to train its models. Runway has pointed to the fair use doctrine in its defense, though the courts have yet to rule on how copyright law applies to AI training.
The copyright debate intensified last week with OpenAI's Studio Ghibli feature, which lets users generate images in the style of Hayao Miyazaki's animation studio without its explicit permission. Unlike OpenAI, which declines to generate images in the style of individual living artists but allows studio styles, Runway has not publicly detailed its policy on style mimicry.
That distinction grows increasingly arbitrary as AI models become more sophisticated. The line between learning from broad visual traditions and copying the styles of specific creators has blurred to near invisibility. When an AI can imitate the visual language Miyazaki spent decades developing, does it matter whether we call the source a studio or an individual artist?
When asked about the sources of its training data, Runway declined to comment, citing competitive concerns. This opacity has become standard practice among AI developers, but it remains a point of contention for creators.
Marketing agencies, educational content creators and corporate communications teams are already exploring how tools like Gen-4 could streamline video production, shifting the question from technical capability to creative application.
For filmmakers, the technology represents both opportunity and disruption. Independent creators gain access to visual effects capabilities once reserved for major studios, while traditional VFX and animation professionals face an uncertain future.
The truth is that technical limitations were never what prevented most people from making compelling films. The ability to maintain visual consistency will not suddenly create a generation of great storytellers. What it can do is remove enough friction that far more people can experiment with visual narrative without specialized training or expensive equipment.
Perhaps the most profound aspect of Gen-4 is not what it can create but how it could change our relationship with visual media. We are leaving an era in which production was gated by technical skill and budget, and entering one limited mainly by imagination and intent. In a world where anyone can render whatever they can describe, the important question becomes: what is worth showing?
As Runway's own films demonstrate, requiring little more than a reference image and a prompt, the most relevant question is no longer whether we can make compelling video. It is whether, with that power at our fingertips, we can find something meaningful to say.