Adobe, the maker of creative tools including Photoshop, Elements, Illustrator, Lightroom and the video editor Premiere Pro, has officially revealed its Firefly Video generative AI model.
Announced at AdobeMAX, the “world’s largest creativity conference”, in Miami, the technology can generate new, realistic-looking video without filming, as well as extend existing clips, helping editors save both time and money.
The Firefly Video model joins Adobe’s existing Image, Vector and Design models, which already power its suite of creative tools.
Adobe says its new Firefly Video model is also the first “commercially safe” generative video model, meaning it has been trained on licensed content, including Adobe Stock and public domain material, and never on work created by Adobe users.
Editors can try out the new features today in the current beta of Adobe Premiere Pro, along with other capabilities powered by the Firefly AI model. More capabilities are expected later this year, but here’s a rundown of what’s available to use now:
Generative Extend
Generative Extend leverages generative AI to seamlessly add extra length to existing video clips, giving editors those precious few extra seconds they need to fill in content gaps. For example, if you want your clip to end on the beat of a song, you simply drag the clip’s edge forward to the right point with the Generative Extend tool, and the generative AI will fill in the missing frames of your video in a natural way.
This is also handy if you’re working with a clip that cuts a little too early; almost magically, the generative AI model will figure out a way to extend the scene. The system can even sample the ambient background noise and continue it through the extended footage.
Adobe’s Firefly Video model can also correct awkward camera movements or eye-line shifts that happen mid-shot. This creates a smoother pan where the camera might have been jiggled, for example, or corrects where a person is looking. It’s quite amazing.
There are a few limitations to keep in mind, however. First, it won’t extend music due to copyright issues, and, unsurprisingly, dialogue will be muted in the extended clips.
Also, Generative Extend needs to be sent to the cloud for processing, but this can work in the background, leaving you to get on with your work.
The generative features won’t currently work with 4K video, and the supported video formats are listed below (a quick way to check a clip against these specs follows the list):
- 1920×1080 or 1280×720 resolutions
- 16:9 aspect ratio
- 12-30fps
- 8-bit SDR
- Mono and stereo audio
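If you’d like to check whether a clip fits these constraints before sending it off for processing, you can inspect its metadata. Here’s a minimal sketch using Python and ffprobe (part of the free ffmpeg toolkit, not an Adobe tool; it assumes ffprobe is on your PATH, and the checks are simply our reading of the list above):

```python
import json
import subprocess
import sys

def probe(path):
    """Read basic video/audio stream metadata with ffprobe."""
    cmd = [
        "ffprobe", "-v", "error",
        "-show_entries", "stream=codec_type,width,height,r_frame_rate,pix_fmt,channels",
        "-of", "json", path,
    ]
    return json.loads(subprocess.check_output(cmd))["streams"]

def check_clip(path):
    """Flag properties that fall outside the specs listed for the beta."""
    issues = []
    for s in probe(path):
        if s["codec_type"] == "video":
            if (s["width"], s["height"]) not in [(1920, 1080), (1280, 720)]:
                issues.append(f"resolution {s['width']}x{s['height']} is not 16:9 1080p/720p")
            num, den = map(int, s["r_frame_rate"].split("/"))
            fps = num / den
            if not 12 <= fps <= 30:
                issues.append(f"frame rate {fps:.2f}fps is outside 12-30fps")
            # 10/12-bit pixel formats usually carry '10' or '12' in the name;
            # 8-bit SDR is typically something like yuv420p
            if "10" in s.get("pix_fmt", "") or "12" in s.get("pix_fmt", ""):
                issues.append(f"pixel format {s['pix_fmt']} looks like more than 8-bit")
        elif s["codec_type"] == "audio" and s.get("channels", 1) > 2:
            issues.append(f"{s['channels']} audio channels (mono or stereo only)")
    return issues

if __name__ == "__main__":
    problems = check_clip(sys.argv[1])
    print("\n".join(problems) if problems else "Clip looks compatible.")
```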
Once the clip is done, you can rate the result as ‘Good Output’ or ‘Poor Output’, or ask it to generate the clip extension again.
For more on Generative Extend, check out the Adobe blog.
Text to Video & Image to Video
This is a very useful set of features that uses the Firefly Video generative AI model to create completely new video. For now, it can be accessed in a limited public beta at Firefly.adobe.com.
There are a few different ways to use it, starting with Text to Video, which essentially creates a realistic video clip from a text prompt. You can also refine your videos using camera controls including angle, motion and zoom to fine-tune your results.
In addition to a prompt, the Image to Video option starts with a still image, guiding the generative model to create a video based on your photo. You can then refine it using text, or switch to the camera controls to, for example, change to a wider lens than the one used in your photo for a different look.
Using the system does take a bit of practice to find the right descriptive terms to make a video. Adobe has shared some example clips, along with the text prompts used to create them.
Text to Video and Image to Video can be used for all kinds of ideation and creative endeavours, such as b-roll, pre-visualisations for planning actual shoots, GFX or animated work, generating highly stylized titles and more.
For more on the Firefly Video AI model, check out the Adobe blog.
Object addition and removal
While it’s not available in the public beta yet, Adobe demonstrated the Object addition and removal feature, which looks to be a real time-saver.
Essentially, this uses AI to identify objects in your video, enabling smart tracking and masking to be applied to those objects throughout the clip. Adobe demonstrated this with a pride of lions, separating each lion from the background. Interestingly, the lions and the tanned grass behind them were all similar colours, showing how good the tool is at identifying objects with little contrast between them.
Once masked and tracked, you can easily apply effects such as removing the background, placing text behind the objects, adding a colour grade or correction, creating duplicates, or removing objects entirely.
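Adobe hasn’t detailed how its masking works under the hood, but the general idea of building a per-frame mask and applying an effect only outside it can be sketched with off-the-shelf tools. The toy example below uses OpenCV’s background subtraction, a far simpler technique than Adobe’s AI tracking (and “lions.mp4” is just a hypothetical input file), to colour-grade everything except the moving subjects:

```python
import cv2
import numpy as np

# Toy illustration of per-frame masking (not Adobe's method): separate moving
# subjects from a static background, then darken only the background pixels.
cap = cv2.VideoCapture("lions.mp4")  # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # 255 where motion (a subject) is detected
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,  # clean up speckle noise
                            np.ones((5, 5), np.uint8))
    graded = cv2.convertScaleAbs(frame, alpha=0.6, beta=-20)  # darker "grade"
    # Keep original pixels for the subjects, graded pixels everywhere else
    out = np.where(mask[..., None] == 255, frame, graded)
    cv2.imshow("masked grade", out)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```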
Object selection and mask tracking is a very powerful tool, and one that editors will certainly love when it becomes available sometime in 2025.
Valens Quinn attended the AdobeMAX conference in Miami as a guest of Adobe Australia.