Hacker News

Is there any way to run images from a camera real-time into GoogLeNet?

E.g., if I want to scan the area around me to see whether any view of my environment lights up the "snake" neurons or the dog neurons?



The Jetson TX2 can run GoogLeNet in real-time with the onboard camera, so it's definitely possible on mainstream GPUs too.

https://github.com/dusty-nv/jetson-inference

Inspecting layer activations in real-time is trickier, but presumably possible.


You might find it interesting to look at Jason Yosinski's "deep vis" framework: http://yosinski.com/deepvis


I just found CaffeJS, which can kind of do what I want. It doesn't show the individual neurons, but it does go from webcam to classification: https://chaosmail.github.io/caffejs/webcam.html


If not in real time, you could always record a video and post-process it offline.



