Startups building AI content generation technology are pushing the boundaries of what they offer as the field continues to gain popularity. RunwayML made its new, more realistic video generation model available a few weeks ago. Now Haiper, a London-based AI video startup founded by former Google DeepMind researchers Yishu Miao and Ziyu Wang, is introducing Haiper 1.5, a new visual foundation model.
Haiper 1.5 is an incremental update, available on the company's web and mobile platforms, that lets users generate 8-second videos from text, image, and video prompts, twice as long as the clips from Haiper's original model.
The company also revealed plans to enter the image generation space, along with a new upscaling feature that lets users enhance the quality of their content.
Only four months have passed since Haiper came out of stealth. Despite being in its early stages and lacking the funding of larger AI firms, the company says it has already onboarded over 1.5 million users to its platform, a sign of strong traction. With an expanded line of AI tools, it now hopes to attract more users and challenge competitors like Runway.
What does the Haiper AI video platform offer?
Haiper, which debuted in March, is following in the footsteps of Runway and Pika by offering consumers a full-featured video generation platform driven by an internally trained perceptual foundation model. Using it is straightforward: the user types a text prompt describing whatever they can imagine, and the model generates matching content, with additional prompts available to adjust elements such as backgrounds, characters, objects, and artistic styles.
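For developers, that prompt-driven workflow typically maps to a submit-and-poll pattern. The Python sketch below illustrates the general idea; the endpoint, field names, and response shape are hypothetical placeholders for this article, not Haiper's documented API.

```python
import time
import requests

# Placeholder endpoint and key: every route and field below is a
# hypothetical stand-in, not a documented Haiper interface.
API_BASE = "https://api.example-video.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def generate_video(prompt: str, duration: int = 8) -> str:
    """Submit a text prompt, then poll until the generated clip is ready."""
    job = requests.post(
        f"{API_BASE}/generations",
        json={"prompt": prompt, "duration": duration},
        headers=HEADERS,
        timeout=30,
    ).json()

    # Poll the job until the service reports a terminal state.
    while True:
        status = requests.get(
            f"{API_BASE}/generations/{job['id']}", headers=HEADERS, timeout=30
        ).json()
        if status["state"] == "completed":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)

print(generate_video("A red fox running through snowy woods, watercolor style"))
```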
Haiper's original model produced 2- to 4-second clips from text prompts or by animating pre-existing images. That capacity served its purpose, but content creators frequently noted it was too short for broader use cases. The latest model resolves this by doubling the maximum generation length to eight seconds.
Much as we have seen with other AI video products, such as Luma's new Dream Machine model, it can even extend users' previous 2- and 4-second generations to 8 seconds.
That’s not all, though.
Haiper initially offered high definition only for two-second videos, with longer snippets limited to standard definition. The latest version changes that, allowing users to create clips of any supported length in either SD or HD resolution.
Additionally, an integrated upscaler lets users enhance any of their video generations to 1080p with a single click, without interrupting their current work. The tool even accepts user-owned photos and videos: to enhance them, users simply upload them to the upscaler.
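A one-call upload-and-upscale flow might look like the following sketch. As before, the /upscale route, the multipart upload, and the target_resolution field are illustrative assumptions rather than a documented interface.

```python
import requests

# Hypothetical upscale call; routes and fields are illustrative only.
API_BASE = "https://api.example-video.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def upscale_to_1080p(media_path: str) -> str:
    """Upload a user-owned photo or video and request a 1080p upscale."""
    with open(media_path, "rb") as media:
        job = requests.post(
            f"{API_BASE}/upscale",
            files={"file": media},            # multipart upload of the asset
            data={"target_resolution": "1080p"},
            headers=HEADERS,
            timeout=60,
        ).json()
    return job["id"]  # poll this job ID as in the earlier generation sketch

job_id = upscale_to_1080p("my_clip.mp4")
print(f"Upscale job queued: {job_id}")
```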
Beyond the upscaler, Haiper is expanding its platform with a new image model. Users will be able to generate images from text prompts and then animate them with the text-to-video capability for seamless video output. By integrating image generation into its video pipeline, Haiper lets users test, review, and refine their material before moving on to the animation stage.
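That image-first pipeline amounts to a two-step flow: generate a still, review it, then animate the approved frame. The sketch below assumes the same hypothetical API shape as the earlier examples; only the two-step workflow itself comes from the article.

```python
import requests

# Assumed API shape for the described image-then-animate workflow.
API_BASE = "https://api.example-video.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def generate_image(prompt: str) -> dict:
    """Step 1: produce a still image from a text prompt."""
    return requests.post(
        f"{API_BASE}/images",
        json={"prompt": prompt},
        headers=HEADERS,
        timeout=30,
    ).json()

def animate_image(image_id: str, motion_prompt: str) -> dict:
    """Step 2: animate a previously generated (and user-reviewed) image."""
    return requests.post(
        f"{API_BASE}/generations",
        json={"image_id": image_id, "prompt": motion_prompt, "duration": 8},
        headers=HEADERS,
        timeout=30,
    ).json()

still = generate_image("A lighthouse on a cliff at golden hour")
# ...the user reviews and refines the still here before committing to video...
clip = animate_image(still["id"], "waves crashing, camera slowly zooming in")
print(clip)
```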
Building AGI with an understanding of the world
Haiper's updated model looks promising, at least judging by the company's supplied samples, but the community has not yet had a chance to evaluate it. When VentureBeat attempted to access the tools on the company's website, the image model was not yet available, and the upscaler and eight-second generations were restricted to customers on the $24/month (billed yearly) Pro plan.
Miao, the company's CEO, said Haiper intends to expand the availability of 8-second videos through other channels, such as a credit system. The image model, meanwhile, will be released later this month for free, with paid upgrades for faster and more simultaneous generations.
The platform's two-second videos appear more consistently high-quality than its longer ones, which remain rather hit-or-miss. In our tests, four-second videos occasionally lacked (or overused) subject and object details, particularly when the content was motion-heavy.
That said, Haiper's generation quality is expected to improve with these changes and others in the works. The company says that, to produce true-to-life content, it aims to deepen its perceptual foundation models' understanding of the world. In essence, that means building artificial general intelligence (AGI) capable of simulating the emotional and physical elements of reality, down to the smallest visual details such as light, motion, texture, and the interactions between objects.
It will be interesting to watch how the company develops in this area and competes with rivals that still appear to be ahead in the AI video race, such as Runway, Pika, and OpenAI.