Just as others are walking their integrations back, I believe. Interesting, but I'm sure it won't do much good in the end. The car was a good idea, horribly delivered, and not for lack of great engineering but due to terrible leadership. My 2c
There are hundreds of these operations running at any one moment, some more legit than others (VoIP dial-backs, short-message farms for scammers); not sure why they are making this out to be the be-all and end-all of this type of thing. Telcos have the ability to lock these down pretty quickly using proximity of devices alone, but the almighty dollar is more important ;)
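To illustrate what I mean by the proximity heuristic, here's a minimal sketch: flag cell sites where an unusually large number of distinct SIMs register from one spot. The data shapes, field names, and threshold are my own illustrative assumptions, not how any real telco system works:

```python
# Sketch of the "proximity of devices" heuristic: flag cell sites where an
# unusually large number of distinct SIMs register at once.
# Event format and threshold are illustrative assumptions.
from collections import defaultdict

# (sim_id, cell_site_id) registration events, e.g. from signalling logs
registrations = [
    ("sim-001", "site-A"), ("sim-002", "site-A"), ("sim-003", "site-A"),
    ("sim-004", "site-A"), ("sim-101", "site-B"), ("sim-102", "site-C"),
]

# A real system would baseline each site's normal SIM density; this is a toy value.
SIMS_PER_SITE_THRESHOLD = 3

def suspicious_sites(events):
    sims_by_site = defaultdict(set)
    for sim_id, site_id in events:
        sims_by_site[site_id].add(sim_id)
    return {site: len(sims) for site, sims in sims_by_site.items()
            if len(sims) >= SIMS_PER_SITE_THRESHOLD}

print(suspicious_sites(registrations))  # {'site-A': 4}
```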
It's just another distraction from the unreleased Epstein files. The admin has been doing this periodically since people started asking about the files - taking something mundane and blowing it up into front page news.
I get the concern being expressed, but the fear-mongering is getting a little much these days. Innovation can be scary, and at this point people are making assumptions based purely on things we do not know. How this will impact the future of business and technology has yet to be determined. Only time will tell.
We must be careful as we chart this scary new world of large language models and artificial intelligence and their impacts on humanity, but we do need to slow down on the scare tactics.
Please note I do not fault the author or anyone else for this representation of these new technologies. Nonetheless, I find it counterproductive to our discussions about setting guidelines and ensuring accountability in the development and use of these models.
Right now, it sounds more like the CRISPR discussion all over again.
There are similarities, sure. But there are also stark differences. Due to the existence of ChatGPT, the GPT-3 API, and the general viability of natural language prompting, LLMs are now essentially commoditised. They are now in the hands of orders of magnitude more people. Barring sector-specific regulations, people are free to iterate (with varying degrees of care, ethical consideration, and success) at a much faster pace compared with the field of medicine, or even academia in general, where there’s non-zero involvement of ethics committees.
At DAYJOB we already have immense domain expertise for tuning GPT-3 and proving its reliability in our sector. For giggles, I also implemented an incredibly naive approach to a problem we set out to solve, and still ended up with a result that's considered very impressive, the sort of thing companies usually spend countless hours working toward.
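For context, the "incredibly naive approach" I mean is roughly this kind of thing: zero-shot prompting straight against the completions API, no fine-tuning, no retrieval. A minimal sketch (the task, prompt, and model name are illustrative, not DAYJOB's actual pipeline; uses the legacy pre-1.0 openai Python library):

```python
# Naive zero-shot approach via the GPT-3 completions API
# (openai python library < 1.0, legacy Completion endpoint).
# The classification task here is purely illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def classify_ticket(ticket_text: str) -> str:
    prompt = (
        "Classify the following support ticket as one of: "
        "billing, outage, feature_request, other.\n\n"
        f"Ticket: {ticket_text}\n"
        "Category:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 era model
        prompt=prompt,
        max_tokens=5,
        temperature=0.0,  # keep output stable for classification
    )
    return resp["choices"][0]["text"].strip()

print(classify_ticket("My invoice was charged twice this month."))
```

That's it, the whole "system". And results of roughly that shape are what's landing as impressive.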
My sector certainly won’t be an edge case. And we all know that everyone and their dog is trying to see how GPT-3 can deliver value. It’s all happening at the same time, and very quickly.
As someone who's generally quite jaded and skeptical of new technologies, my experience in my day job has completely changed my perspective. At this stage I'm willing to go out on a limb and say this is going to be quite disruptive to labour markets at the very least, and that alone could well rise to the level of serious ethical and societal questions. I'll happily eat humble pie if I'm wrong.
The point about this being more generally available for tinkering is fair, and from my own experiments and usage I can say it is impressive as absolute hell. First, however, we need a focused discussion on how we, working as industry groups, can at least try to manage the proliferation of these technologies.
The experiments to determine the answers must not be the sole purview of corporations. Corporate executives have a fiduciary duty only to their shareholders.
So a completely laissez-faire approach to traversing the space of pervasive AI in society, with a stated 10% probability of catastrophic results (the number is per Sam Altman), cannot be left to a decision-making process that only seeks to maximize profits.
To "be careful as we chart" decisively means it can not be treated as a mere innovation to be subjected to market forces. That's really the only fundamental issue. This isn't a 'product' and 'market' may happily seek a local maxima which then leads to the "10%" failed state. That's it. Address that and we can safely explore away.
>Innovation can be scary, and at this time, people are making assumptions based purely on things that we do not know.
Here's the thing. Before ChatGPT, it was pretty much a given that society was at more or less zero risk of losing jobs to AI.
Now, with GPT-4, that zero risk has changed to unknown risk.
That is a huge change, and one it would be highly unwise not to address.
I agree that only time will tell. But as humans we act on predictions of the future. We all have to make a bet on what that future will be.
Right now this blog post describes a scenario that, although speculative, is also very realistic. It is, again, unwise to dismiss the possibility of a realistic scenario.