Hacker News

> I have no formal training (...) I have real time models that run on the CPU (..) and as far as I know they're more performant than anything else out there

> You do not need to be a data scientist. Anybody can do it. That said, a good GPU will help a lot. I'm using two 1080Ti in SLI and they're pretty decent

An alternative is that, by not knowing what you are doing, you may not see all the options that exist -- and when you hit a problem too hard, you just throw more hardware (GPUs) at it.

This is not to say it is never a valid approach, but I'd be wary of someone who, say, hasn't had any formal training in C and claims his stuff is more performant than anything out there -- lack of training often means not knowing what already exists.



> An alternative is that, by not knowing what you are doing, you may not see all the options that exist -- and when you hit a problem too hard, you just throw more hardware (GPUs) at it.

Maybe some will. I just explained that I'm running my models on CPUs, so I'm actually developing sparse, efficient, resource-constrained models that evaluate quickly.

I've been working with libtorch's JIT engine in Rust (tch.rs bindings).

I'm currently trying to adapt MelGAN to the Voice Conversion problem domain so I can get real time, high-fidelity VC without using a classical vocoder. WORLD works great and quickly, but it's a poor substitute for the real thing as it only models the fundamental frequency, spectral envelope, and aperiodicity. MelGAN is super high quality and faaast.


Are you working on VC (input: speech of one speaker, output: the same spoken content, but sounds like another speaker) or speaker-adaptive speech synthesis (input: text, output: speech)?

Also check out ParallelWaveGAN, another high-quality and very fast (on CPU) neural vocoder.



