
The ONNX format was developed by Microsoft and Facebook and is really well supported by PyTorch, since it was originally built for model exchange between PyTorch and Caffe2. ONNX Runtime is used by lots of software for inference only, to run on Windows without having to manage vendor-specific providers (e.g., the WinML runtime and ONNX Runtime share the same code). TVM is also used, but I have the impression it's currently more common on embedded devices.

ONNX Runtime also enables provider-specific inference backends like TensorRT, CUDA, CoreML, etc., without you having to change your code.


