
Genuine question: Is your concern primarily based on principles, or are you sincerely worried that OpenAI having access to your data could lead to practical, tangible negative consequences (beyond principles / psychological effects)?


I listed some of my concerns here[1]. It is mostly based on principles, but also on the fact that we don't know what these models will be used for in the future. We can trust OpenAI to do the right thing today, but even if they're not involved in the data broker market, your data is only a bug, breach, or subpoena away from third-party hands.

Also, OpenAI is no longer the only company in this market. Google, Facebook, and Microsoft have competing products, and we know those companies' privacy track records.

I have an extreme take on this, since for me it applies to all "free" proprietary services, which I avoid as much as possible. The difference with AI tools is that they invite a much deeper look into your mind, so the profile they build can be far more accurate. This is the same reason I've never used traditional voice assistants either. I don't find them more helpful than doing web searches or home automation tasks manually, and I can at least stay somewhat in control of my privacy. I might be deluding myself and making my life more difficult for no reason, but at least I'm not handing over my data voluntarily. This is why I'll always prefer self-hosting open source tools over using a proprietary service.

[1]: https://news.ycombinator.com/item?id=35304261



