Hacker News | smk_'s comments

Those who can't do, teach. Don't read. Do.


And those who can do are often terrible teachers.


If I took 2 weeks off from work I could build this prototype quite easily. We're in an interesting period where the space of possibilities is so large it just takes a while for the "market" to exhaust it.


Quizlet has a feature to build flashcards using AI. I'm sure they could write a backend service that just chunked the entire chapter.
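A backend like that would presumably start by splitting the chapter into pieces small enough to feed to a model. A minimal sketch of such a chunking helper (the function name, sizes, and overlap are all hypothetical choices, not anything Quizlet has documented):

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks that overlap slightly,
    so a fact straddling a boundary still appears whole in one chunk."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # last chunk reached the end of the text
        start += size - overlap
    return chunks
```

Each chunk would then be sent to the model with a "generate flashcards from this passage" prompt; the overlap is a common trick to avoid losing facts cut in half at chunk boundaries.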


It doesn't work well enough yet. The flashcards it generates don't actually fit well into Quizlet's own ecosystem: when you try to build the "quizzes", the wrong answers are trivially spottable, and even the generated questions are stilted and don't reach parity with manually written flashcards.

My use of ChatGPT for this purpose is so far mostly limited to a sanity check, e.g. "Do these notes cover the major points of this topic?" Usually it'll spit back "Yep, looks good" or flag a major missed point, like the Pacific Railway Act of 1862 for a topic on the Civil War's economic complexity.

I'll also use it to reformat content, "Convert these questions and answers into Anki format."
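That reformatting step is mechanical: Anki's plain-text importer accepts tab-separated front/back fields, one note per line. A minimal sketch (the sample card text here is my own illustration, not output from any tool):

```python
# Hypothetical Q/A pairs; the Pacific Railway Act card is illustrative.
cards = [
    ("What did the Pacific Railway Act of 1862 do?",
     "Authorized land grants and government bonds to fund a transcontinental railroad."),
    ("Name a major economic policy of the Union during the Civil War.",
     "The Pacific Railway Act of 1862."),
]

def to_anki_tsv(pairs: list[tuple[str, str]]) -> str:
    # Anki's text import reads one "front<TAB>back" note per line.
    return "\n".join(f"{q}\t{a}" for q, a in pairs)

print(to_anki_tsv(cards))
```

Saving that output to a `.txt` file and using Anki's File → Import gives you the cards without any manual retyping.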


John von Neumann, who contributed to the Manhattan Project, published an essay on the topic in 1955, "Can We Survive Technology?": https://geosci.uchicago.edu/~kite/doc/von_Neumann_1955.pdf


It’s more likely he tried to cover up massive Alameda losses due to all the usual reasons people turn to illegal acts: shame, myopia, gambling addiction. One core EA principle is integrity. If he was actually “EA”, this wouldn’t have happened the way it did.


That’s no true Scotsman.


Philosophy is the constantly moving (and narrowing) circle of unanswered questions.


You are a psychopath.


I would imagine this fact is obvious to any Tesla engineers reviewing code. I would imagine they look at more factors than simply lines of code.


Longtermism in a moral context rewards thinking about the value of future beings that do not yet exist. Existential risk is important in this consideration because it is an event that severely limits the number of future beings that could exist. The strongest argument against longtermism is a flavour of a person-affecting moral view.


Smart people tend to be libertarian-leaning; it is a natural predisposition. Most people living in the West are economically privileged.

Ajross, you don't seem to really believe what you are saying. Sci-fi arguments? Really? Since when is the risk from nuclear war, pandemics or artificial intelligence confined to sci-fi? The Spanish flu and the Black Death. The Cold War. These events should all inform our thinking about risk.


> Smart people tend to be libertarian-leaning.

The smartest people I know are communists, but I imagine the end state of both is pretty similar - everybody enjoying a surplus economy where work is optional and done for the betterment of ourselves and others.


Personally I'm looking forward to an Amazon home. Ring, Echo and Roomba – I will feel well taken care of and always safe.

