
You have no idea why a software system that has very few physical constraints and can iterate incredibly fast might improve faster than an algorithm that has to drive a physical vehicle through a world full of edge cases?


The fact that we have zero awareness of the edge cases in the world of general intelligence doesn't give us the right to assume there aren't any. The world of minds isn't necessarily simpler than the physical world, and we know even less about it.

For example, think of all the cognitive biases and the myriad illusions we have to struggle hard against whenever we apply our minds to a problem and hope to achieve even remotely useful results.

AI alarmists speak of human minds as if they were some perfect tool of progress, completely forgetting that those minds failed to achieve any significant progress for tens of thousands of years, and that what eventually pushed us forward wasn't a more intelligent mind but accumulated practices for dealing with our minds' shortcomings.

Another thing is resource cost. Look at how much it costs to run a single instance of a barely competent LLM. I'd say we're lucky if we're only one computational revolution away from AGI, on the scale of the one we got from discovering semiconductors.

Humanity has dreamed of golems for millennia. First we made them out of sticks and clay. Now we are making them out of iron and silicon, which is significant progress, but I'm not sure how many more such technological jumps we'll need before we enjoy, or suffer, the creation of actual golems.


The fact that there are edge cases is actually part of what worries AI doomers: we'll be deploying these AI systems in nominally non-critical areas where they can still do a lot of damage.


This reminds me of how totalitarian systems create their enemies. The enemy is portrayed as simultaneously dangerously incompetent and incredibly cunning.



