
The APIs of TensorFlow and PyTorch are quite large. TensorFlow even has a dialect specifically for TPUs, so the notion of accelerator independence (especially once performance matters) has not really been realized yet (though it is still a work in progress).

Anyway, how could you reuse your code wholesale when one of the most common operations is calling “.cuda()” on a tensor?
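To make that concrete, here's a minimal PyTorch sketch (tensor names and shapes are just placeholders) contrasting the hard-coded ".cuda()" style with the more portable ".to(device)" pattern:

    import torch

    # Device-specific style: hard-codes CUDA, breaks on CPU-only or non-NVIDIA setups.
    x = torch.randn(4, 4)
    if torch.cuda.is_available():
        x = x.cuda()  # ties the code to NVIDIA GPUs

    # Device-agnostic style: choose the device once, then move/allocate with it everywhere.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    y = torch.randn(4, 4).to(device)
    w = torch.randn(4, 4, device=device)  # allocate directly on the target device
    z = y @ w  # runs on whichever device was selected

Even with the second style, the code is only as portable as the backends PyTorch happens to support, which is part of the original point.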


