- Aug 17, 2017
- Content source
- https://arstechnica.com/?p=1986424
On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video—with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.
Last year, Stability AI made waves with the release of Stable Diffusion, an "open weights" image synthesis model that kick-started a wave of open image synthesis and inspired a large community of hobbyists who have built on the technology with their own custom fine-tunings. Now Stability wants to do the same with AI video synthesis, although the tech is still in its infancy.
Right now, Stable Video Diffusion consists of two models: one that generates 14 frames of video from a source image (called "SVD"), and another that generates 25 frames (called "SVD-XT"). They can operate at varying speeds from 3 to 30 frames per second, and they output short (typically 2–4 seconds long) MP4 video clips at 576×1024 resolution.
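The clip length follows directly from the frame count and the chosen playback rate. A quick sketch of that arithmetic (the 7 fps midpoint is an illustrative choice, not a documented default):

```python
def clip_duration(num_frames: int, fps: float) -> float:
    """Length in seconds of a clip with num_frames played back at fps."""
    return num_frames / fps

# SVD generates 14 frames, SVD-XT generates 25.
# At a mid-range playback rate of 7 fps:
print(clip_duration(14, 7))  # SVD: 2.0 seconds
print(clip_duration(25, 7))  # SVD-XT: ~3.6 seconds
```

At the low end of the supported range (3 fps) the same frames stretch out longer, and at 30 fps they compress to well under a second, which is why the typical clips land in the 2–4 second range.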
New “Stable Video Diffusion” AI model can animate any still image
Given GPU and patience, SVD can turn any image into a 2-second video clip.