It's only the littest AI video product around — I've been on board from the start. They might not have PhDs, but the way this startup has entered the scene and evolved its creator tool has been a masterclass in lean startup execution. They saw what we, the AI creators, were dealing with — a convoluted ControlNet, EbSynth, and Stable Diffusion workflow for our AI video fix — and streamlined that hacky pipeline into just a couple of clicks. Their openness and responsiveness to creator feedback are noteworthy — special shoutout to the founder, @stokebuilder, for being a damn good lawyer turned PM and leading one of the rare positive (and non-cringe) pivots from Web3 to AI.
So how does Kaiber stack up against the rest of the options, including open source? Unless I'm looking for fine-grained control, I'm all in on Kaiber for video2video. It's my top pick for social media content, even ahead of Runway Gen-1 (despite its recent 15-second limit increase). Kaiber, meanwhile, supports 60-second videos, and their move toward storyboarding and crafting multi-segment content with animatable parameters is really promising.
What would I like to see improved?
- Smarter keyframe selection logic that picks keyframes based on how scene content changes over time
- Enforce visual consistency between keyframes
- A bit faster processing time! (though I'm guessing EbSynth is probably the culprit here)
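To make the first wishlist item concrete: here's a minimal sketch of what content-aware keyframe selection could look like, using mean absolute frame difference as a crude proxy for "the scene changed." This is purely my own illustration — the function name, threshold, and approach are assumptions, not anything from Kaiber's actual pipeline.

```python
import numpy as np

def select_keyframes(frames, threshold=0.1):
    """Pick keyframes where scene content shifts noticeably.

    frames: list of HxWx3 float arrays in [0, 1].
    threshold: mean absolute pixel difference that counts as a scene change.
    Returns indices of selected keyframes (always includes frame 0).
    """
    keyframes = [0]
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        # Compare against the last chosen keyframe (not the previous frame),
        # so slow drift eventually triggers a new keyframe too.
        if np.abs(frame - last).mean() > threshold:
            keyframes.append(i)
            last = frame
    return keyframes

# Tiny demo: 10 "frames" with an abrupt change at index 5.
frames = [np.zeros((4, 4, 3)) for _ in range(5)] + \
         [np.ones((4, 4, 3)) for _ in range(5)]
print(select_keyframes(frames))  # [0, 5]
```

A real implementation would likely use perceptual features (CLIP embeddings, optical flow) rather than raw pixel deltas, which is exactly why picking keyframes by fixed intervals, as the current tools seem to, leaves quality on the table.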