Generative artificial intelligence (AI) has taken massive leaps in the past few months. Text, video, and audio generation tools like GPT, Sora, and ElevenLabs have taken the digital landscape by storm, and AI video generators in particular have dominated the field.
Runway, an artificial intelligence startup, has taken things up a notch, rapidly developing tools that aim to transform video generation for the better.
Runway recently released Act-One, a groundbreaking tool for animating character images with incredible fidelity, consistency, and fluidity.
Act-One is an AI video generation tool that “allows you to bring a character image to life by uploading a driving performance to precisely influence expressions, mouth movements, and more.”
This article will explore Runway’s Act-One and how to use it with tips on inputs, settings, and best practices to maximize your results. Let’s begin.
Best Practices for Input Selection
Following best practices for input selection can make a big difference in the quality of your output. Here are some key recommendations:
Driving Performance
The driving performance refers to the video source for expressions. In other words, it is the source/original video that will guide the expressions and movements of the AI-generated character. It should meet these specific criteria to translate effectively:
- Good Lighting: Make sure the video is well lit, with clearly defined facial features.
- Face Position: The subject of the video should be forward-facing, with their face consistently in the frame from around the shoulders up.
- Limited Movement: While Act-One is impressive, it is not without its faults. Excessive head or body movement can degrade animation quality, so keep movement to a minimum to avoid distortions.
- No Face Obstructions: Avoid any occlusions, like hands covering the face or the face moving out of the frame.
- Clear Mouth Movements: Clear, well-defined mouth movements help create accurate lip-syncing.
- Compliance with Trust & Safety: Ensure your video content adheres to the platform’s Trust & Safety standards.
Character Image
The character image refers to the image that will be animated by Act-One. It is the AI-generated image that follows the expressions and movements defined by the driving performance. Here are the ideal features for a good character image:
- Well-Defined Features: The face should be clear, with good lighting to define facial features.
- Forward-Facing: Like the driving performance, a forward-facing image offers better alignment.
- Shoulders and Up Framing: The image should be framed from the shoulders up or from the waist up for optimal animation results. Only close-ups and mid-shots can be used at the moment.
- Avoid Non-Human Characters: Act-One is designed for human characters. It will not work with animals, objects, or any other non-human entity.
How to Use Act-One
Here is a step-by-step guide on using Act-One:
Step 1: Upload the Driving Performance
- Go to the “Generative Video” section in your dashboard and select Gen-3 Alpha as your model.
- Click the Act-One icon in the toolbar on the left side.
- Drag and drop a new video or select an existing one from your assets to serve as the driving performance.
When uploading, make sure the driving performance meets the best practices mentioned above. Once uploaded, the platform will automatically run preliminary face detection to ensure the video is suitable for Act-One.
Step 2: Select the Character Image
Once your driving performance is ready, the next step is to choose the character image that will be animated.
- In the bottom half of the Act-One window, choose an image from preset options or upload a custom image.
- For optimal results, make sure your image is well-lit, forward-facing, and framed from the shoulders up.
Act-One supports a range of character images, but images that closely follow best practices tend to deliver more consistent results. Keep in mind that while experimental images can sometimes produce unique outputs, adhering to guidelines increases the likelihood of a clean and realistic result.
Step 3: Generating the Act-One Video
Now, let’s generate your video:
- Before generating, hover over the duration modal to see the credit cost. Act-One charges 10 credits per second, with a minimum of 5 seconds. Even videos shorter than 5 seconds will still be billed for 50 credits.
- Once you are satisfied with your selections and aware of the credit cost, click “Generate.”
- Your video will start processing. It will be available for review in your session once complete.
Pricing
Act-One applies a 50-credit minimum for driving performance videos of 5 seconds or less. For videos exceeding 5 seconds, a per-second rate of 10 credits applies.
The fee is calculated from the total duration, rounded up to the nearest second.
Here is Act-One’s pricing at a glance:
- Base Rate: 10 credits per second.
- Minimum Charge: There is a minimum of 50 credits, covering videos up to 5 seconds.
- Additional Seconds: Each additional second adds 10 credits, with partial seconds rounded up.
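The pricing rules above can be sketched as a small helper function. This is an illustrative estimate only, based on the published figures (10 credits per second, a 50-credit minimum, partial seconds rounded up); the function name is hypothetical, and actual billing is determined by Runway:

```python
import math

RATE_PER_SECOND = 10   # credits per second of the driving performance
MINIMUM_CREDITS = 50   # minimum charge, covering videos up to 5 seconds

def estimate_act_one_cost(duration_seconds: float) -> int:
    """Estimate the credit cost for a driving performance of the given length."""
    billable_seconds = math.ceil(duration_seconds)  # partial seconds round up
    return max(MINIMUM_CREDITS, billable_seconds * RATE_PER_SECOND)

print(estimate_act_one_cost(3.0))  # 50 (minimum charge applies)
print(estimate_act_one_cost(7.2))  # 80 (rounds up to 8 seconds x 10 credits)
```

For example, a 3-second clip is still billed the 50-credit minimum, while a 7.2-second clip rounds up to 8 billable seconds for 80 credits.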
Common Issues and How to Resolve Them
If you encounter any errors or notice issues in your output, here is how you can resolve them:
- For Face Detection Problems: Use a character image that follows best practices to improve alignment and clarity.
- For Artifacts or Distortions: Re-run the generation process with the same inputs, as minor artifacts may sometimes be resolved on a subsequent try.
- For Audio Issues: Make sure the audio is not excessively noisy or distorted. It should also comply with Trust & Safety standards.
A Few More Tips
Here are a few more tips to enhance your experience:
- You can experiment with angles. Although forward-facing images work best, slight variations in angle can sometimes yield interesting results.
- Use consistent lighting across both the driving performance and character image. This will lead to smoother animations.
- You can also experiment with various expressions in the driving performance video to see how different emotions impact the character’s animation.
- Given the 30-second limit, plan the narrative or purpose of each video to make the most of the available duration.
The Bottom Line
Runway is changing video generation with its innovative tools. Act-One is a great platform for creating dynamic and engaging videos. With a little experimentation and planning, you can create stunning visuals that captivate your audience and leave a lasting impression.