Interesting. The issue occurs because I have WebGL disabled, causing the createShader function you're using to throw an error. You can reproduce this by going to chrome://settings, disabling "Use hardware acceleration when available," refreshing the page, and then triggering the same error.
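For anyone hitting the same thing: a minimal sketch of how a page can guard against this, assuming the standard `canvas.getContext` API (function and variable names here are illustrative, not from the site's actual code):

```javascript
// Guard WebGL usage so a browser with WebGL disabled fails gracefully
// instead of throwing later inside createShader.
function getGL(canvas) {
  // getContext returns null when WebGL is unavailable, e.g. when
  // "Use hardware acceleration when available" is turned off.
  const gl =
    canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
  if (!gl) {
    console.warn("WebGL unavailable; consider a non-WebGL fallback.");
    return null;
  }
  return gl;
}
```

Checking the return value of `getContext` up front avoids the downstream `createShader` error entirely.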
Sorry for duplicate post. Also this feature is enabled by default, but causes issues with several sites.
I appreciated using the Nvidia tools for remapping webcam eye contact, until I was reviewing a recording and noticed it had changed my eye color. It's been a while since I tried it, though; perhaps that's an undocumented quirk newer versions/models have fixed.
Original dev here. Agree this video looks uncanny valley, but it's likely because the lighting of the original video is off + I had baggy eyes (I was sleep deprived).
Would recommend trying it on other videos; it is surprisingly good, although there are definitely areas to improve.
Hi Mokshith, we talked about releasing a project using the Sieve API and Revideo before! This is definitely something we still want to do. I'll reach out!
What was the orchestration system used for? Was it for cases where many models needed to be run one after another? I know increasing speed is a huge problem in video processing. My company Sieve (see profile) is building infrastructure specifically for running ML models on video, which is why I'm curious.
It was built for AI-driven container orchestration and configuration synthesis from high-level constraints.
Yes, ML workloads are particularly complex because they have both batch-oriented data flows (training) and service-oriented data flows (inference). There aren't many systems that can adequately express both.
This looks really cool. Are there ways to orchestrate jobs, like having one notebook's output trigger another based on some logic? I'm imagining running a bunch of different deep learning models in separate notebooks, or running the same model on different chunks of a piece of data in parallel.
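The chunked-parallel case you describe is essentially a fan-out/fan-in. A minimal sketch of the pattern (with `runModel` as a hypothetical stand-in for whatever per-chunk job the platform would run):

```javascript
// Fan-out pattern: split data into fixed-size chunks and run the same
// (hypothetical) model job on each chunk concurrently.
async function mapChunks(data, chunkSize, runModel) {
  const chunks = [];
  for (let i = 0; i < data.length; i += chunkSize) {
    chunks.push(data.slice(i, i + chunkSize));
  }
  // Promise.all starts all chunk jobs concurrently and returns the
  // results in the original chunk order (fan-in).
  return Promise.all(chunks.map((chunk) => runModel(chunk)));
}
```

An orchestrator would do roughly this across notebooks/workers instead of in-process promises, but the trigger-and-collect shape is the same.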