What real world neural network algorithm is "fully baked by the academic community"? I don't think there are any.
I don't think there are companies with products based on AI where the AI has to work for the company. Google uses AI for search, but search can screw up, and search returns a lot of merely indexed results. There's no "real world application" where AI works reliably (i.e., gives a result you can count on). That doesn't stop deep networks from being an improvement on applications that were previously a combination of database queries. But this same only-relative usefulness becomes problematic when companies and institutions delegate decisions to AI in settings where being wrong doesn't hurt them but can mightily screw some random person (from credit to parole to whatever).
The relative improvement is both an oversell and an undersell depending on the context. For many applications the correct answer may be that a reasoned set of DB queries is about as good as it gets, owing to lack of data, no better algorithm existing, or the product experience being only mildly impacted by changes to the DB-fetching component.
When confronted with these uncertainties, internal stakeholders often swing between "we just need more scientists working on this problem" and "it works fine, why would we spend time on this?" attitudes. The former almost always leads to over-investment, where three teams of people are working on what should be one individual's project. The latter can sometimes be right, but I've also seen Fortune 500 search rankings that have never been tuned, let alone leveraged an ML model.