If you sample only from the 10,000 or so current Olympic athletes, you will draw similarly incorrect conclusions about the athletic ability of the planet's general population of 8 billion.
One of the few sources of reliable statistics about US software developer pay is the US Government's Bureau of Labor Statistics. The median annual wage as of May 2023:
$132,270
This means half of all full-time employed devs earn more, and half earn less. The mean is pulled upward by high earners but is similar:
$138,110
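To make the median/mean distinction concrete, here's a toy sketch. The salary figures below are made up purely for illustration (they are not BLS data); the point is just how a few very high earners drag the mean above the median:

```python
from statistics import mean, median

# Hypothetical salaries for illustration only -- not BLS data.
# Most devs cluster in the middle; a couple of top-of-market
# offers pull the mean up while barely moving the median.
salaries = [95_000, 110_000, 125_000, 132_000, 140_000,
            150_000, 165_000, 400_000, 650_000]

print(f"median: ${median(salaries):,.0f}")  # middle value: $140,000
print(f"mean:   ${mean(salaries):,.0f}")    # pulled up: ~$218,556
```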
It also varies quite widely by geographic location: from a mean high of $173,780 in California down to $125,890 in Texas, and from $199,800 in San Jose to $132,500 in Austin to $98,960 in rural Kansas (where I have actually developed software before!).
The short of it is that the vast majority of software developers do not make the top salaries. Even an L6-level salary is rare within the top tier of tech. There is a lot of delusion in this field around pay, so it's important to be well informed. As a field we are still very well paid compared to most other jobs, especially considering our safe working conditions and the lack of required credentials and education. Compared to most of the work on this planet, it's still a goldmine.
Tried "Sailflow Galveston Bay", which for google search returns the Galveston Bay / Kemah wind/weather report for sailing in the area. For GPT Search, returned the Galveston Buoy as the first result, which is WRONG, as it is 10+nm off the coast of Galveston with often VERY different weather than Galveston Bay. The second GPT search result was just the Sailflow home page.
So in this case at least, GPT Search is far inferior, and dangerously so if someone were to rely on its results for weather information.
This one's on Jack Welch, a pioneer of short-term gain over long-term building. You absolutely can juice a company's performance by going dog-eat-dog, but inevitably, when the smoke clears, you're left with jackals and hyenas stretched too thin.
Always worth mentioning that this culturally altered America in a way that we’ll probably never unwind.
> Always worth mentioning that this culturally altered America in a way that we’ll probably never unwind.
I think this about a lot of things, such as certain events in politics or generative AI. I'm curious: how do you apply it to ruthless, cutthroat policies at a handful of (admittedly quite large) tech companies?
Why is no one making higher-dpi ebook readers? I've been waiting decades now for an ebook reader that actually has the resolution of printed 600 dpi pages. The chunky text simply makes ebooks uncomfortable and unpalatable for me on long reads.
Probably because it's enough for most people. I have a Paperwhite with (I think) 300 dpi, and unless I reduce the font size to the minimum and look really closely, I can't see any issues at any reading distance. It feels like a printed book to me.
My understanding is that e-ink with higher than 300 dpi is very difficult to produce, which means it is rather expensive, and it doesn't look that much better to most people. Additionally, people think of an e-reader as a sub-$200 device, so the market for a premium high-DPI e-reader is rather small. People are already complaining about the price of the Kindle Colorsoft; imagine what they would say if Amazon put out a high-DPI Kindle in the $400-$500 range.
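As a rough sanity check on the "300 dpi is enough" claim, here's a back-of-the-envelope calculation. It assumes the standard ~1 arcminute figure for 20/20 visual acuity and a ~12 inch reading distance; both are my assumptions, not numbers from the thread:

```python
import math

# At what dpi does one pixel subtend one arcminute (roughly the
# resolving limit of 20/20 vision) at a given reading distance?
def acuity_dpi(distance_inches: float, arcminutes: float = 1.0) -> float:
    pixel_inches = distance_inches * math.radians(arcminutes / 60.0)
    return 1.0 / pixel_inches

# Typical e-reader distance of ~12 inches (~30 cm):
print(round(acuity_dpi(12)))  # ~286 dpi, close to the Paperwhite's 300
# Held closer, at 8 inches, higher dpi starts to matter:
print(round(acuity_dpi(8)))   # ~430 dpi
```

If those assumptions hold, they line up with the parent's experience: at a normal distance 300 dpi sits right at the acuity limit, and only tiny text viewed up close exposes the pixels.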
I had the same problem. The solution is to start with extremely easy content, either from a language-learning site/channel or content targeted at toddlers. Even 2-year-olds already have quite advanced language skills! Peppa Pig is a perennial favorite. You build up from there with children's shows and cartoons, and at some point later on introduce graded readers. Watching full-speed native TV shows is like the final exam after 1500+ hours of study, and even then you may miss a lot if you aren't familiar with the slang/dialect. This is especially true for heavily dialectal languages such as Chinese, where it's common to always watch with subtitles on.
Besides watching things aimed at young children, another tactic I have found effective is to watch something you already know very well in English (or whatever your native language is). For me, this has been South Park. I'll watch episodes that I practically know by heart, so that even if I don't understand all the Spanish words I can pick things up from knowing what is happening.
"The power of SRS (spaced repetition system) cannot be overstated"
For an alternative take, there is at least some evidence that SRS is entirely unnecessary and can even hinder language learning. I know it at least is not required by first-language speakers, and I have also seen many examples of fluent second- and third-language speakers who never use SRS or any other kind of "practiced" language acquisition, such as studying vocabulary, grammar, etc.
> For an alternative take, there is at least some evidence that SRS is entirely unnecessary and can even hinder language learning.
I do not believe that the above is an alternative take. Most people who do SRS pair it with tons of comprehensible input. Also, a lot of takedowns of SRS tend to actually be takedowns of memorizing 1:1 word translations, which is all they assume people do with SRS. I've never done those, because I think word lists are bad, and 1:1 translations from L1->L2 are bad because they are always wrong (languages are different, not substitution ciphers). In SRS I deal almost exclusively in complete sentences and clozes.
There's also a piece of advice given by David Parlett in an old book about learning languages straight from possibly incomplete printed grammars and native or anthropological recordings: "learn the hard stuff first." There are some things about languages that are central, complex, and should just be learned by rote. Romance conjugations are some of those things. Using SRS to learn how to conjugate reflexively and automatically in Spanish (after probably 50K card reviews) was the best thing I could have done to open up a world of comprehensible input.
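For readers unfamiliar with how an SRS actually schedules those cloze-sentence reviews, here's a minimal sketch of the classic SM-2 algorithm (the scheduler behind Anki-style tools). This is a simplified illustration rather than any specific app's implementation, and the Spanish cloze card is a made-up example:

```python
from dataclasses import dataclass

@dataclass
class Card:
    # A full-sentence cloze, per the advice above -- no 1:1 word pairs.
    prompt: str = "Ella {{se despierta}} a las siete."  # hypothetical example
    ease: float = 2.5        # SM-2 "easiness factor"
    interval_days: int = 0   # days until the next review
    reps: int = 0            # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded 0 (total blackout) to 5 (perfect)."""
    if quality >= 3:  # recalled successfully
        if card.reps == 0:
            card.interval_days = 1
        elif card.reps == 1:
            card.interval_days = 6
        else:
            card.interval_days = round(card.interval_days * card.ease)
        card.reps += 1
    else:             # lapsed: relearn from the start
        card.reps = 0
        card.interval_days = 1
    # Easy cards earn longer gaps over time; hard ones stay frequent.
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

The key property is the exponentially growing gap between reviews of well-known cards: that's what makes something like 50K card reviews manageable, since mature cards surface only every few months while new or lapsed ones repeat daily.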
One of the points in the Input Hypothesis is something I'm sure everybody would agree with:
> This states that learners progress in their knowledge of the language when they comprehend language input that is slightly more advanced than their current level. Krashen called this level of input "i+1", where "i" is the learner's interlanguage and "+1" is the next stage of language acquisition.
Informally I've always called it "walking the knife's edge": you have to always be on the slight edge of feeling uncomfortable to realize meaningful gains. It makes logical sense: the brain is ALWAYS trying to optimize effort away through chunking/patterns/etc., so you have to be constantly challenging it.
It's the reason there's a huge skill difference between a driver at one month vs. one year, but a far smaller difference between a driver at one year vs. ten years.
It is important to note that there is no matching audio dialog, or even an attempt at something like dialog. That seems to be well beyond current full video generation models.
Hassaan isn't working, but Carter works great. I even asked it to converse in Spanish, which it does fluently (albeit with a horrible accent). Great work on the future of LLM interaction.
This would be WONDERFUL as a language tutor with a native Spanish accent, but since you've already got English, you should try marketing this to the English-learning world. There is a huge dearth of native-English-speaker interaction in worldwide language instruction, and it's typically only available to the most privileged students. Your system could democratize this, so that anyone paying an affordable fee (say $10-20/month, subsidized for the poorest) could practice speaking with their own personal tutor. The State Department and the Defense Language Institute might love this as well: if trained on languages like Iraqi Arabic and Korean, it would allow live-exercise training prior to deployment.
It can also function as an instructional tutor in a way that feels natural and interactive, as opposed to the clunkiness of ChatGPT. For instance, I asked it (in Spanish) to guide me through programming a REST API and which frameworks I would use for that, and it gave coherent and useful responses. It's really the "secret sauce" that OpenAI needs to actually become integrated into everyday life.